I don’t care if it’s in a shitposting community, a meme community, or a news community. If the image or text is generated it should be labeled as such, and failing to label it should be grounds to remove the post. AI slop is a plague and it’s only going to get worse as the tech matures (if it hasn’t already peaked).
I’m so tired of having to call it out every time I see it, especially when people in the comments think it’s a Photoshop job or (heavens help us) real. Human labor has real tangible value that plagiarism machines can’t even pretend to imitate, and I’m sick of seeing that shit without it being labeled (so I can filter it out).
There is a paradox here, as there are two possibilities: either
A) AI-generated “slop” is obviously bad quality, therefore a label is unnecessary as it is obvious.
Or
B) the AI-generated content looks as good as human creations, therefore it is not slop and a label is unnecessary.
A) Some people are really really bad at noticing AI slop. I’ve seen some really obvious AI generated images with people debating if it’s real or not. Unless those comments were AI and I’m the one who can’t tell…
B) Honestly even good AI generated content should come with a disclaimer IMO.
Even if it looks good, it’s slop.
If someone makes an AI clip of a politician saying something they didn’t, should we believe it because the AI was convincing enough?
Really, photoshopped images meant to look as real as possible should be flagged too. That only sounds ridiculous because it has become the norm to accept them.
Relevant xkcd