I don’t care if it’s in a shitposting community, a meme community, or a news community. If the image or text is AI-generated, it should be labeled as such, and failing to label it should be grounds to remove the post. AI slop is a plague and it’s only going to get worse as the tech matures (if it hasn’t already peaked).
I’m so tired of having to call it out every time I see it, especially when people in the comments think it’s a Photoshop job or (heaven help us) real. Human labor has real, tangible value that plagiarism machines can’t even pretend to imitate, and I’m sick of seeing that shit without it being labeled (so I can filter it out).
Whatever, mate. People didn’t volunteer their art to be scraped by AI, so even if it’s not plagiarism exactly, as defined by you or whomever, that doesn’t mean it’s ethical or that people like it.
And most don’t.
And again, this isn’t just about images: there’s also the environment, misinformation, plagiarism in academia (and that one fits your definition), and a plethora of other issues that have nothing to do with capitalism.
Most of the data used to train AI, especially image models, came from publicly available content accessible by anyone. Artists have been doing this kind of thing for centuries: looking at existing work, internalizing styles, and creating something new. AI is doing that at scale — it’s not copying, it’s learning patterns. Just like humans do.
Consent is important, absolutely, but if your art is posted publicly, you’re already consenting to it being seen and learned from. That’s how influence works. If someone draws in your style after following you online, that’s not theft. You might not like it, but it’s not unethical in itself.
Also, let’s not pretend this conversation is only about artists’ rights. It’s become a catch-all for every fear around new tech. People are worried about the impact of AI on the environment? Understandable and totally valid, although the impact is way smaller than you might think:
https://www.nature.com/articles/s41598-024-54271-x
https://www.nature.com/articles/s41598-024-76682-6
Misinformation? Agreed, serious concern and one I share. But saying AI is inherently unethical because of how some people use it is like saying the internet is inherently unethical because people post lies.
We should absolutely talk about regulation, transparency, and compensation, but let’s not throw out the entire field because it challenges the comfort zone of some industries. Ethics matter, yes, but so does clarity. Not everything that feels unfair is a violation.
Ok, first of all, AI doesn’t “learn” the way humans do. That’s not how AI imaging works. It basically translates images into a form of static that computers can read, uses an algorithm to mix those into a new static, then translates it back. That’s wildly different from someone studying what negative space is or learning how to draw hands.
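If you want that “static” description spelled out, here’s a toy sketch of the forward/backward loop (my own illustration with made-up numbers, not any real model’s code; a real diffusion model trains a neural network to predict the noise, instead of cheating the way this does):

```python
# Toy sketch of the "image -> static -> image" loop, using only numpy.
# A real diffusion model trains a neural network to *predict* the noise
# from the noisy image alone; here we simply record the noise we added,
# so this only shows the shape of the process, not a working model.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((4, 4))      # stand-in for a training image
steps = 5

# Forward process: repeatedly mix Gaussian noise ("static") into the image
noisy = image.copy()
added_noise = []
for _ in range(steps):
    noise = rng.normal(scale=0.1, size=image.shape)
    noisy = noisy + noise
    added_noise.append(noise)

# Reverse process: remove the (here: recorded, normally: predicted) noise
# one step at a time until something image-like is left
recovered = noisy.copy()
for noise in reversed(added_noise):
    recovered = recovered - noise

print(np.allclose(recovered, image))   # True: we walked back to the image
```

The point of the sketch is just the shape of the process: bury the image in noise, then step back out of it. There is no step in that loop where anything resembling “understanding negative space” happens.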
Second, posting a picture implies consent for people to see and learn from it, but that doesn’t imply consent for people to use it however they want. A 16-year-old girl posting pictures of her birthday party isn’t consenting to people using them to generate pornography based on her body. There’s also the issue of copyright, which exists to protect your works from just being used by anyone. (Yes, it’s abused by corporations, don’t bother bringing that up, I’m already pissed at Disney.) But even when people say outright that they don’t want their art used for AI, even prominent artists like Miyazaki, that doesn’t stop AI companies from scraping those images and doing something with them that they never consented to.
Third, saying that it’s only fear over new tech is a bullshit, hand-waving way of dismissing people’s legitimate concerns with the issue. I like new technology and how it can help people. I even like some applications of AI. Using an AI breast-screening tool to detect breast cancer is awesome. The problems that have come up with other applications are pretty terrible, and you shouldn’t stick your head in the sand about them.
(As an aside, trying to compare AI-generated slop to all other art is apples and oranges. There’s much more to art than digital images, so saying that an AI image takes less energy to make than a Ming vase, or literally any other pottery for that matter, is a false equivalence. They are not the same even if they have similarities, so comparing their physical costs doesn’t track.)
Fourth, I’m not just talking about people using AI to make lies, I’m talking about AI making up lies unintentionally. Like telling people to put glue on pizza to keep the cheese on. Or to eat rocks. AI doesn’t know what’s a joke or misinformation, and it will present it as true, and people will believe it if they don’t know any better. It’s inaccurate, and it can’t be accurate, because it doesn’t have a filter for its summaries. It’s like typing using only the suggested next word on your phone.
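The phone-keyboard comparison can be made literal with a toy example (purely illustrative; the tiny “corpus” and the bigram table here are made up and nothing like a real LLM’s internals): count which word tends to follow which, then keep picking the most likely next word. The output looks fluent and carries zero notion of whether it’s true.

```python
# Toy "suggested next word" generator: a bigram table built from a tiny
# made-up corpus, then greedy next-word picking. Real LLMs use huge neural
# networks, but the basic loop of "pick a likely continuation" is the same,
# and neither step involves checking whether the output is actually true.
from collections import Counter, defaultdict

corpus = "the cheese sticks to the pizza because the glue holds the cheese".split()

# Count which word tends to follow which
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

# Generate by always taking the most common continuation
word = "the"
output = [word]
for _ in range(8):
    if not following[word]:
        break
    word = following[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))   # fluent-looking, meaning-free word salad
```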
I didn’t say to get rid of AI entirely; like I said, some applications are great, like the breast cancer screening. But saying that the only issues people have with AI are because of capitalism is incorrect. It’s a poorly working machine, and saying that communism will magically make it not broken, when the problems are intrinsic to it, is false and delusional.
The comparison to human learning isn’t about identical processes; it’s about function. Human artists absorb influences and styles, often without realizing it, and create new works based on that synthesis. AI models, in a very different but still meaningful way, also synthesize patterns based on what they’re exposed to. When people say AI ‘learns from art,’ they aren’t claiming it mimics human cognition. They mean that, functionally, it analyzes patterns and structures in vast amounts of data, just as a human might analyze color, composition, and form across many works. So no, AI doesn’t learn ‘what negative space means’; it learns that certain pixel distributions tend to occur in successful compositions. That’s not emotional or intellectual, but it’s not random either.
I agree, posting art online doesn’t give others the right to do anything they want with it. However, there’s a difference between viewing and learning from art versus directly copying or redistributing it. AI models don’t store or reproduce exact images — they extract statistical representations and blend features across many sources. They aren’t taking a single image and copying it. That’s why, legally and technically, it isn’t considered theft. Equating all AI art generation with nonconsensual exploitation like kiddie porn is conflating separate issues: ethical misuse of outputs is not the same as the core technology being inherently unethical.
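One back-of-the-envelope way to see the “they don’t store exact images” point is simple arithmetic. The figures below are ballpark public numbers for a Stable-Diffusion-class model and a LAION-scale training set, not exact values, so treat this as an order-of-magnitude sketch:

```python
# Ballpark arithmetic, not exact figures: a Stable-Diffusion-class model has
# on the order of ~1e9 parameters and was trained on billions of images
# (LAION-scale, ~5e9). Even at 2 bytes per parameter, the whole model is a
# couple of gigabytes, which cannot contain verbatim copies of the training set.
params = 1.0e9           # rough parameter count (order of magnitude)
bytes_per_param = 2      # 16-bit weights
images = 5.0e9           # rough size of a LAION-scale training set

model_bytes = params * bytes_per_param
print(f"model size: ~{model_bytes / 1e9:.1f} GB")                    # ~2.0 GB
print(f"weights per training image: ~{model_bytes / images:.2f} B")  # ~0.40 B
```

A fraction of a byte per training image is nowhere near enough to hold a copy of anything; what survives training is a compressed statistical summary of the whole set.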
Also, re your point on copyright, it’s important to remember that copyright is designed to protect specific expressions of ideas, not general styles or patterns. AI-generated content that doesn’t directly replicate existing images doesn’t typically violate copyright, which is why the lawsuits over this remain unresolved or unsuccessful so far.
This thread and conversation is specifically about AI art, so the comparison and the data are still apt.
Concerns about misinformation, environmental impact, and misuse are real. That’s why responsible use of AI has to involve regulation, transparency, and ethical boundaries. But that’s very different from claiming that AI is an ‘eyeball stabbing machine’. That kind of absolutist framing isn’t helpful; it stifles productive discussion about how we can use these tools in ways that actually help people, including in medicine, like you mentioned.
I have never once mentioned capitalism or communism.