China has released a set of guidelines on labeling internet content that is generated or composed by artificial intelligence (AI) technology, which are set to take effect on Sept. 1.
I think there was a similar idea in the USA with the COPIED Act, but I haven’t heard about it since.
Unlike most regulations we hear about from China, this one actually seems well considered - something that might benefit people and actually work.
Similar regulations should be considered by other countries.
USA announces plan to ban the buds of the cannabis plant
They plan to ban hating on the supreme leader.
China is way ahead on that, so maybe there is hope.
About as enforceable as banning bitcoin.
Stable Diffusion has an option to include an invisible watermark; I saw it in the settings when I was running it locally. It adds a pattern that is easy for machines to detect but essentially impossible to see. The idea was that you could check an image for it before putting it into training sets. Since I never needed to lie about things I generated, I left it on.
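If anyone is curious, this is roughly what that option does under the hood. If I remember right it uses the invisible-watermark package; here is a minimal sketch with made-up filenames and a placeholder payload (the webui may use different settings):

    import cv2
    from imwatermark import WatermarkEncoder, WatermarkDecoder

    # Embed a short byte payload in the image's frequency domain (DWT+DCT):
    # invisible to the eye, but machine-detectable.
    bgr = cv2.imread("generated.png")
    encoder = WatermarkEncoder()
    encoder.set_watermark("bytes", b"SDV2")  # placeholder payload
    cv2.imwrite("generated_wm.png", encoder.encode(bgr, "dwtDct"))

    # Later, a crawler building a training set can check for the mark.
    decoder = WatermarkDecoder("bytes", 32)  # 32 bits = len(b"SDV2") * 8
    payload = decoder.decode(cv2.imread("generated_wm.png"), "dwtDct")
    print(payload == b"SDV2")  # True if the mark survived

It survives normal resaving and mild compression, but not someone deliberately scrubbing it, which is the obvious weakness.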
Something that we’ve needed for too long. Good on China :)
China, oh you. I remember something about going green and bla bla, yet they continue to build coal plants.
The Chinese government has been caught using AI for propaganda and passing it off as real, so I don’t see this being applied within the Chinese government itself.
It makes more sense to mark authentic content, but sure.
Me: “Hey <AI name>, remove the small text at the bottom right of this picture”
AI: “Done, here is the picture cleaned of the text”
Lol. So anything and everything can just be AI-generated fake news.
Will be interesting to see how they actually plan on controlling this. It seems unenforceable to me as long as people can generate images locally.
That’s what they want. When people can do it locally, they can discredit anything as AI generated. The point isn’t enforceability; it’s whether it can be a tool to control the narrative.
Edit: it doesn’t matter whether people actually generate locally, only that they possibly could. As long as it is plausible, the argument stands and the loop closes.
It’s not like this wasn’t always the issue.
Anything and everything can be labelled as misinformation.
Having some AIs that do this and some that don’t will only muddy the waters of what’s believable. We’ll get gullible people seeing the ridiculous and thinking “Well, there’s no watermark, so it MUST be true.”
Sorry but the problem right now is much simpler. Gullibility doesn’t require some logical premise. “It sounds right so it MUST be true” is where the thought process ends.
And the lack of a label just reinforces the confirmation bias.
This is a bad idea. It creates a stigma and bias against innocent Artificial beings. This is the equivalent of forcing a human to wear a collar. TM watermark
Forgot the /s I assume
But I put in the watermark!
Imma be honest with ya, I did not notice it at all lol. Now I see what you did there, and no /s needed of course.
That’s something that was really needed.
Would it be more effective to have cameras digitally sign the photos? That would also make photos more attributable, which sounds like China’s thing.
This is the one area where blockchain could have been useful, instead of greater-fool money schemes: a system where people can verify the provenance of images or videos pertaining to matters of importance, such as news stories. All reputable journalism already attributes its photos anyway. Cryptographic signing just takes that to its logical conclusion. But of course the scary word ‘China’ is involved here, therefore we must only post contrarian takes.
That’s actually already a thing: https://www.theregister.com/2022/08/15/sony_launches_forgeryproof_incamera_digital/
That’s a different thing. C2PA proves a photo came from a real camera, along with its editing trail, all cryptographically. The scheme in this article is trying to prove that what’s not real is not real, by self-declaration. You can add the watermark, remove it, add another AI’s watermark, or whatever you want. You can forge it outright, because as far as I can tell no cryptographic proof like a digital signature is required.
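To make that concrete: a real digital signature binds the bytes to a key, so any modification breaks verification and only the key holder could have produced it, whereas a self-applied watermark is just a claim sitting in the pixels. A toy sketch with Python’s cryptography library (hypothetical filenames, nothing to do with the actual C2PA manifest format):

    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.exceptions import InvalidSignature

    # The camera (or signer) holds a private key; anyone can hold the public key.
    private_key = ed25519.Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    photo = open("photo.jpg", "rb").read()
    signature = private_key.sign(photo)  # produced at capture time

    # Verification fails the moment a single byte of the photo changes.
    try:
        public_key.verify(signature, photo)
        print("signature valid")
    except InvalidSignature:
        print("photo was modified or signature forged")

A self-declared watermark gives you nothing like this: anyone can add one, and anyone can strip it.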
Btw, the C2PA data can be stripped if you know how, just like any watermarks and digital signatures.
Stripping C2PA simply removes the reliability guarantee, which is fine if you don’t need it. It’s the kind of thing that is effective when present and simply absent when it isn’t.
It’s never effective. At best, you could make the argument that a certain person lacks the wherewithal to have manipulated a signature, or gotten someone else to do it. One has to hope that the marketing BS does not convince courts to assign undue weight to forged evidence.
No, I don’t want my photos digitally signed and tracked, and I’m sure no whistleblower wants that either.
Of course not. Why would they? I don’t want that either. But we are considering the actions of an authoritarian system.
Individual privacy isn’t relevant in such a country. However, it’s an interesting choice that they implement it this way.
Apart from the privacy issues, I guess the challenge would be how you preserve the signature through ordinary editing. You could embed the unedited, signed photo into the edited one, but you’d need new formats and it would make the files huge. Or maybe you could deposit the original to some public and unalterable storage using something like a blockchain, but it would bring large storage and processing requirements. Or you could have the editing software apply a digital signature to track the provenance of an edit, but then anyone could make a signed edit and it wouldn’t prove anything about the veracity of the photo’s content.
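That third option would look roughly like a chain of signed hashes, one link per edit. A toy sketch (made-up names, not the real C2PA manifest format), mainly to show why it only tells you who produced each step, not whether the original image depicted anything real:

    import hashlib
    from cryptography.hazmat.primitives.asymmetric import ed25519

    def sign_edit(editor_key, prev_hash: bytes, new_image: bytes) -> dict:
        """Each edit signs (hash of previous state || hash of new state)."""
        new_hash = hashlib.sha256(new_image).digest()
        return {
            "prev": prev_hash,
            "new": new_hash,
            "sig": editor_key.sign(prev_hash + new_hash),
            "pub": editor_key.public_key(),
        }

    original = b"...raw photo bytes..."
    cropped = b"...cropped photo bytes..."
    editor = ed25519.Ed25519PrivateKey.generate()
    link = sign_edit(editor, hashlib.sha256(original).digest(), cropped)

    # Verifying a link proves WHO signed that edit, nothing about veracity.
    link["pub"].verify(link["sig"], link["prev"] + link["new"])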
Hm, that’s true, there’s no way to distinguish between edits from editing software and photos that have been completely generated. It only helps if you want to preserve unmodified photos. And of course, I’m assuming here that China doesn’t care very much about privacy.