Watermarks can be helpful, but I believe that provenance via digital signatures is ultimately a better solution. Curious why Google doesn’t join the CAI (https://contentauthenticity.org/) and use their approach for provenance of Google’s generated audio files.
Bob produces something with AI but claims he produced it himself and signs it with his private key.
AI produces something and signs it or doesn't, but if it's signed you can just throw the signature away and either publish it as unsigned or sign it again with a different key.
Signatures allow Alice to verify that something is signed by someone who has Bob's private key. If only Bob has Bob's private key, that means it was signed by Bob. It doesn't tell you whether it was generated by AI or not if Bob doesn't want you to know, because Bob can sign whatever he wants with his private key.
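To make that concrete, here's a toy textbook-RSA sketch using tiny primes (p=61, q=53). This is NOT secure and not any real provenance scheme — real systems sign a cryptographic hash of the file with a vetted library — it only illustrates what verification proves and what it doesn't:

```python
# Toy textbook-RSA signature sketch (tiny primes, NOT secure).
# Real schemes sign a hash of the message, not the message itself.

n, e, d = 3233, 17, 2753  # public modulus, public exponent, private exponent

def sign(m: int, priv: int = d) -> int:
    # Only Bob, who holds the private exponent d, can produce this value.
    return pow(m, priv, n)

def verify(m: int, sig: int, pub: int = e) -> bool:
    # Anyone holding the public key (n, e) can check it.
    return pow(sig, pub, n) == m % n

art = 1234              # stand-in for (a hash of) the file Bob publishes
sig = sign(art)
assert verify(art, sig)          # proves: signed by the holder of d
assert not verify(art + 1, sig)  # any change to the content breaks it

# What it does NOT prove: `art` could just as well be AI output that
# Bob chose to sign; the math is identical either way.
```

The two asserts are the whole point of the thread: the signature binds content to a key holder, and nothing more.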
In this case "Bob" is presumably supposed to be some camera with DRM, but that means the device is in the physical control of attackers: anybody who can crack any camera from any manufacturer can extract its private key and use it to sign whatever they want, which is inevitably going to happen. Keys will be for sale to anyone who wants one and lacks the technical acumen to extract one themselves. Since that makes the whole system worthless, what's the point?
> Bob produces something with AI but claims he produced it himself and signs it with his private key. … because Bob can sign whatever he wants with his private key.
Whether or not to trust Bob is an entirely different problem space than being able to prove an image came from Bob. In most scenarios Bob would be a "trustworthy news source" that cares about its reputation. The important piece here is that if someone shares something on e.g. twitter and says Bob produced it, that claim can be verified.
> crack any camera by any manufacturer can extract the private key and use it to sign whatever they want, which is inevitably going to happen … Since that makes the whole system worthless, what's the point?
Think about what happens today when a private key is leaked - that key is no longer trusted. Will it be such a large-scale problem that the keys leak the day any camera is released? Maybe. Even in that scenario, though, we end up in the same spot as today, except with the additional benefit of being able to verify stuff coming from NPR/CNN/your preferred news source that is shared on third-party platforms.
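The "no longer trusted" part is how existing PKI handles compromise: verifiers consult a revocation list and reject anything signed by a burned key. A minimal sketch of that model (the names are illustrative, not any real C2PA/CAI API):

```python
# Minimal key-revocation sketch: once a camera key is known to be
# extracted, it goes on a revocation list and verifiers reject it.
# Illustrative only -- real systems use CRLs/OCSP and signed trust lists.

revoked: set[str] = set()

def trusted(key_id: str, signature_ok: bool) -> bool:
    # A signature only counts if it checks out AND its key isn't revoked.
    return signature_ok and key_id not in revoked

assert trusted("camera-model-X-key-42", True)

# The key is extracted from a cracked camera and published:
revoked.add("camera-model-X-key-42")
assert not trusted("camera-model-X-key-42", True)
```

Note the limitation the reply below leans on: revocation only helps once a compromise is *known* — a quietly extracted key stays on the trusted side of this check.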
> In most scenarios Bob would be “trustworthy news source” who cares about their reputability. The important piece here is that if someone shares something on e.g. twitter and says Bob produced it, that claim can be verified.
We don't need some new system for that. You go to the website of your preferred news source and the connection is secured with TLS which certifies that the server is the one for the domain your browser shows you're visiting.
> Think about what happens today when a private key is leaked - that key is no longer trusted. Will it be such a large scale problem such that the day any camera is released the keys are leaked?
It's not that some camera's keys will be leaked and then you'll know not to trust them. It's that someone publishes how to extract the keys from some camera model, and then everything signed with any of those keys is called into question. Or someone figures out how to extract the keys, or swipes them from one of the bureaucracies that generate them, and tells no one, quietly using them to forge signatures.
And then because that is not only possible but likely to happen in practice, and you have no way to know when it has, you can't actually trust the signatures for anything.
> you can't actually trust the signatures for anything.
Do you bank online? Public-key cryptography works well enough to support millions (billions?) of dollars' worth of transactions per day - I don't think it's as broken as you make it seem.