Regardless of that, we all should have cameras running everywhere, all the time
It is only in recent history that cell phone cameras have become ubiquitous, and it has caused a huge shift in the authorities' ability to sweep their abusive behaviors under the rug
In the same way that TV played a fundamental part in the progress of the civil rights movement and the Vietnam War era, one of the best tools the average person has to hold people accountable is to control the narrative via video
My only concern is that gen AI will mean that nobody will ever trust video evidence again. I hope we get some kind of signature-based crypto verification on recordings to prove they aren't fake -- like every device being keyed to authenticate the recordings it produces
Any idea of trying to prove that a random video isn't fake by crypto verification is very, very brittle: the trust relies on having almost 100% certainty of key secrecy across a global, heterogeneous system of low-margin commodity manufacturers.
Like, OK, suppose every device is keyed to authenticate the recordings it produces, using a unique key signed by the manufacturer. If even a few valid device keys ever leak from a device or a manufacturer, any fake video can get signed with a valid key from Camera#1234 from ShenzhenCameraCorp567, Ltd.; you're not going to make every $1 camera module in cheap embedded devices tamper-resistant.
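To make the leak concern concrete, here's a toy sketch. All the names are hypothetical, and HMAC stands in for real per-device asymmetric signatures (a real scheme would use something like Ed25519 with only the public half published) purely to keep it runnable:

```python
import hashlib
import hmac
import secrets

# Toy model: HMAC as a stand-in for per-device asymmetric signatures.
registry = {}  # manufacturer-published map: device_id -> key

def provision(device_id):
    """Manufacturer burns a key into the device and publishes its record."""
    key = secrets.token_bytes(32)
    registry[device_id] = key
    return key

def sign(key, footage: bytes) -> bytes:
    return hmac.new(key, footage, hashlib.sha256).digest()

def verify(device_id, footage: bytes, sig: bytes) -> bool:
    key = registry.get(device_id)
    return key is not None and hmac.compare_digest(sign(key, footage), sig)

# Honest use: the camera signs what its sensor actually captured.
cam_key = provision("Camera#1234")
real = b"frames from the real scene"
assert verify("Camera#1234", real, sign(cam_key, real))

# The failure mode: once cam_key leaks, a fabricated clip verifies just
# as "authentically" as the real one -- verification can't tell them apart.
fake = b"generated frames that never happened"
assert verify("Camera#1234", fake, sign(cam_key, fake))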
Kind of, but the requirements on such a system are far stricter than what we get (and expect) from certificate authorities. For example, the CA system doesn't fail in its goals if I publish (whether accidentally or intentionally) the private key for *.mydomain.com, but the proposed image verification scheme does become useless if one of the many manufacturers does that. Likewise, the CA system doesn't fail just because CAs will issue a certificate to phishing sites run by some criminal, but the proposed scheme does become useless if some manufacturer will issue a "camera" certificate that can be extracted and used in some criminal's Photoshop workstation instead of a real camera.
For web CAs to work, all you need is for the single certificate of the site you're choosing to visit to be good. But if you want to use a similar system to verify the trustworthiness of viral images originating from strangers on social media, you need 100% of the camera certificates to be valid: if there are any leaked certificates, manufacturers of fake images will use those. And if, on the other hand, you "revoke" everything from any compromised manufacturer, people won't just replace their cameras; they'll simply keep posting footage with their valid-but-revoked certificates, and you'll either have to automatically mistrust lots of genuine content or be vulnerable to fake data -- and most people will choose the latter.
>> Regardless of that we all should have cameras running everywhere all the time
I would like to opt out of this nightmarish safety hellscape. I never use the phrase Orwellian because it’s so often misused, but yikes is this some 1984 badthink.
I held this same opinion until recently, but I've come to realize that, in practice, it would mostly disallow citizens from recording in public -- that is, if this opinion were adopted as policy, the police could use said policy against me to prevent my filming of police activity.
I'd also like to opt out of having cameras everywhere in public, but the fact of the matter is they are here to stay. Additionally, most of the cameras which capture your image in public are not cameras which you installed, and they're not cameras which you have the authority to remove. Adding your own cameras to the mix is functionally equivalent to exercising your freedom to speak -- really, to document, in this context.
David Brin wrote on this in "The Transparent Society". From the Amazon summary:
> David Brin is worried, but not just about privacy. He fears that society will overreact to these technologies by restricting the flow of information, frantically enforcing a reign of secrecy. Such measures, he warns, won't really preserve our privacy. Governments, the wealthy, criminals, and the techno-elite will still find ways to watch us. But we'll have fewer ways to watch them. We'll lose the key to a free society: accountability.
Note this was published in 1999, so one can argue about how far we went in either direction. I think we mostly ended up with a collective shrug.
Today's "nightmarish safety hellscape" is brought to you by (amongst others) Toby Roberts, a former technical surveillance officer at the UK's Eastern Region Special Operations Unit, and the Raspberry Pi Foundation, where he's the official "Maker In Residence". -- https://www.theregister.com/2022/12/09/rpi_maker_in_residenc...
I think the distinction is who controls the tools. Everyone having their own cameras is very different from the party controlling cameras around everyone.
Sometimes it's both, like in the case of Ring doorbell cameras. I may install a camera and think I'm in control, until my footage in the cloud is subpoenaed without my knowledge for an alleged crime I have nothing to do with.
I went with a doorbell with local storage (Eufy, in my case) for this very reason.
My knowledge of the law here is virtually nonexistent. It seems likely that I could still be subpoenaed to turn over footage under some circumstances. But at least I'm in control of that footage and it's not automatically being given to some third party.
That's a meaningless distinction in a world with room 641A, rubberstamp FISA warrants, etc. If the party wants the data, it can get it; whose disks it is stored on is an irrelevant implementation detail.
I don't know about should, but given that cameras and microphones and processors and power and communication are all probably going to continue to get cheaper and smaller and lighter it seems to me that this is nearly inevitable. So the question really should be - how do we adapt to it? How can we try to mitigate the harm (through social, legal, and/or technical means) and steer our changing society closer to a future that we'd actually want to live in?
Yeah, not only are cameras going to get cheaper and smaller and lighter (and way more ubiquitous as a result), but there's other factors to consider too. Face recognition is also getting cheaper and more ubiquitous (and other similar technologies like gait recognition and even skeletal kinematics identification).
The privacy implications are astounding. But, as you say, this is all inevitable (I intentionally left out your "nearly" there), and it's a very good question about how that's going to change society and whether we (I) want to live in that.
Cryptographically verified recordings don't sound practical to me (sensors and video-processing electronics are a lot of hardware to put in a secure element), but I'm sure we will see generative AI inflating away the value of blackmail material soon. One mitigation for this could be cryptographically signing material and publishing the signature long before it becomes practical to fake it (i.e. the past, increasingly), then periodically creating signatures with new algorithms in advance of the discovery of practical attacks on existing ones.
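The "publish early, re-commit with newer algorithms" idea can be sketched with plain hashes. This is a hypothetical illustration: where the commitments get published (a newspaper, a transparency log, etc.) is assumed and left out:

```python
import hashlib

def commit(footage: bytes, algorithm: str) -> str:
    """Publish this digest now; it shows the footage existed at publish time."""
    return f"{algorithm}:{hashlib.new(algorithm, footage).hexdigest()}"

def check(footage: bytes, commitment: str) -> bool:
    algorithm, _, digest = commitment.partition(":")
    return hashlib.new(algorithm, footage).hexdigest() == digest

footage = b"raw video bytes"

# Long ago: publish a SHA-1 commitment (fine at the time).
old = commit(footage, "sha1")

# Later, before SHA-1 attacks become practical, re-commit with SHA-256,
# chaining the old commitment in so the whole timeline stays verifiable.
new = commit(footage + old.encode(), "sha256")

assert check(footage, old)
assert check(footage + old.encode(), new)
```

The key property is that each re-commitment happens while the previous algorithm is still unbroken, so a forger would have needed a practical attack before one existed.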
The only thing that'd need to be in a secure element would be the signing keys. This has existed for a while for digital cameras. Canon, Nikon, and Sony have all brought still image solutions to market for use in situations like photojournalism or forensic evidence collection.
Device signing can be used very effectively to tell whether a particular device was involved in an action, but it is far more difficult to tell whether some non-specific device was the source or whether the footage was generated. When it comes to fabricated video evidence, we'd need to establish a circle of trust that included every camera ever produced but was somehow secure and unforgeable. We've seen this approach break down previously with DigiNotar[1] -- it really only takes one weak link in the system to compromise the verification. At the scale at which cameras are demanded, it seems unreasonable to expect a centralized signing administration to be able to keep all their tokens completely secured.
> When it comes to fabricated video evidence we'd need to establish a circle of trust that included every camera ever produced
Stopping short of that, there'd still be value in being able to cryptographically prove that your home surveillance video (or dash cam video) came from _your_ camera and is unaltered from the original recording.
I think going forward, the "circle of trust" for the next "capitol insurrection"-type event's video evidence will be founded on multiple videos of the same scenes, from multiple angles, from devices owned by unrelated individuals.
Although, the biggest category of cameras these days is cell phones, and all (most?) of them have some sort of hardware trust store with private keys that are extremely difficult to extract, so it wouldn't be too much of a stretch to have the default Android and iOS camera apps digitally sign photos/videos -- all without "a centralized signing administration", piggybacking on existing token-security methods...
I don't think that the signer would be able to vouch for the authenticity of the data it received from the sensor and image-processing circuitry unless they were able to authenticate each other securely. I know that an attack on a system like the one you proposed would still be expensive, but it would become more attractive if its guarantees were overplayed (and would then be subject to legal challenge). Forensics, on the other hand, is of course based on experts saying "yes, by all accounts this appears to have happened".
Yes, and then governments will require that any sale of recording devices are registered so that footage can be traced back to.... undesirables who undermine the great leader.