I don’t care if it’s in a shitposting community, a meme community, or a news community. If the image or text is generated it should be labeled as such, and failing to label it should be grounds to remove the post. AI slop is a plague and it’s only going to get worse as the tech matures (if it hasn’t already peaked).

I’m so tired of having to call it out every time I see it, especially when people in the comments think it’s Photoshop work or (heaven help us) real. Human labor has real, tangible value that plagiarism machines can’t even pretend to imitate, and I’m sick of seeing that shit without it being labeled (so I can filter it out).

  • YungOnions@lemmy.world
    4 days ago

    Ok, first of all, AI doesn’t “learn” the way humans do. That’s not how AI imaging works. It basically translates images into a form of static computers can read, uses an algorithm to mix those into a new static, then translates it back. That’s entirely different from someone studying what negative space is or learning how to draw hands.

    The comparison to human learning isn’t about identical processes; it’s about function. Human artists absorb influences and styles, often without realizing it, and create new works based on that synthesis. AI models, in a very different but still meaningful way, also synthesize patterns based on what they’re exposed to. When people say AI ‘learns from art,’ they aren’t claiming it mimics human cognition. They mean that, functionally, it analyzes patterns and structures in vast amounts of data, just as a human might analyze color, composition, and form across many works. So no, AI doesn’t learn “what negative space means”; it learns that certain pixel distributions tend to occur in successful compositions. That’s not emotional or intellectual, but it’s not random either.
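    The idea of “learning pixel distributions rather than storing images” can be sketched in a few lines. This is a deliberately minimal toy, nothing like a real generative model: the “images”, pixel values, and the Gaussian sampling are all made-up assumptions, chosen only to show that what gets kept after training is statistics, not any original picture.

```python
# Toy sketch: "training" extracts per-position statistics from a tiny,
# hypothetical set of 4-pixel grayscale images (values 0-255).
# No training image is stored; only means and standard deviations survive.
import random
import statistics

training_images = [
    [200, 210, 60, 55],
    [190, 205, 70, 50],
    [210, 200, 65, 45],
]

means = [statistics.mean(col) for col in zip(*training_images)]
stdevs = [statistics.pstdev(col) for col in zip(*training_images)]

def generate_image(seed=0):
    """Sample a new 'image' from the learned distribution, not from any original."""
    rng = random.Random(seed)
    return [min(255.0, max(0.0, rng.gauss(m, s))) for m, s in zip(means, stdevs)]
```

    The generated output resembles the training set statistically (bright pixels where the training images were bright) without reproducing any single image, which is the distinction being argued above.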

    Second, posting a picture implies consent for people to see and learn from it, but that doesn’t imply consent for people to use it however they want. A 16 year old girl posting pictures of her birthday party isn’t consenting to people using them to generate pornography based off of her body. There’s also the issue of copyright, which exists to protect your works from just being used by anyone. (Yes, it’s abused by corporations, don’t bother trying to bring that up, I’m already pissed at Disney.) But even artists saying specifically that they don’t want their work used for AI, even prominent artists like Miyazaki, doesn’t stop AI companies from taking those images and doing something they don’t consent to, scraping, with them.

    I agree, posting art online doesn’t give others the right to do anything they want with it. However, there’s a difference between viewing and learning from art versus directly copying or redistributing it. AI models don’t store or reproduce exact images — they extract statistical representations and blend features across many sources. They aren’t taking a single image and copying it. That’s why, legally and technically, it isn’t considered theft. Equating all AI art generation with nonconsensual exploitation like kiddie porn is conflating separate issues: ethical misuse of outputs is not the same as the core technology being inherently unethical.

    Also, re your point on copyright, it’s important to remember that copyright is designed to protect specific expressions of ideas, not general styles or patterns. AI-generated content that doesn’t directly replicate existing images doesn’t typically violate copyright, which is why lawsuits over this remain unresolved or unsuccessful so far.

    (As an aside, trying to compare AI-generated slop to all other arts is apples and oranges. There’s much more to art than digital images, so saying that an AI image takes less energy to make than a Ming vase, or literally any other pottery for that matter, is a false equivalence. They are not the same even if they have similarities, so comparing their physical costs doesn’t track.)

    This thread and conversation is specifically talking about AI art, so the comparison and data are still apt.

    Fourth, I’m not just talking about people using AI to make lies, I’m talking about AI making lies unintentionally. Like recommending glue on pizza to keep the cheese on, or telling people to eat rocks. AI doesn’t know what’s a joke or misinformation, and will present it as true, and people will believe it if they don’t know any better. It’s inaccurate, and can’t be accurate, because it has no filter for its summaries. It’s like typing using only the suggested next word on your cell phone.
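    The “suggested next word” comparison can be made concrete with a toy bigram model. This is a minimal sketch under stated assumptions: the corpus is invented, and real language models use far richer statistics, but the failure mode is the same in kind, confidently continuing text with no concept of truth or jokes.

```python
# Toy "suggested next word" model: bigram counts, like a phone keyboard.
# It tracks only which word statistically follows which, nothing else.
from collections import Counter, defaultdict

corpus = "cheese sticks to pizza because glue sticks to pizza".split()

following = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    following[a][b] += 1

def next_word(word):
    """Return the most statistically likely follower, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None
```

    Chaining predictions from “glue” happily produces “glue sticks to pizza”: the model asserts it as confidently as anything else in its data, because likelihood is all it measures.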

    Concerns about misinformation, environmental impact, and misuse are real. That’s why the responsible use of AI must involve regulation, transparency, and ethical boundaries. But that’s very different from claiming that AI is an ‘eyeball stabbing machine’. That kind of absolutist framing isn’t helpful. It stifles productive discussion about how we can use these tools in ways that are helpful, including in medicine like you mention.

    I didn’t say to get rid of AI entirely; like I said, some applications are great, like breast cancer detection. But to say that the only issues people have with AI are because of capitalism is incorrect. It’s a poorly working machine, and saying that communism will make it magically not broken, when the problems are intrinsic to it, is a false and delusional statement.

    I have never once mentioned capitalism or communism.