He said: "I stopped doing CV research because I saw the impact my work was having. I loved the work but the military applications and privacy concerns eventually became impossible to ignore."
Good initial results in CV don't mean that you realize all the ways it will end up being used, or the implications thereof.
IMO this is ethically simpler than many like to believe.
There are existing power structures (planned and emergent ones) in this world, and the technology we create can either be used to reinforce them or to question them. Sometimes it is both, the effects cancel each other out, and things move on a sideways trajectory – but in the case of CV, it is quite clear who will benefit: those in power, those who need to quantify, control and punish the human element, but don't have the manpower (=legitimacy?) or funds (=priority?) to do so manually.
I get that working in CV is interesting and cool stuff, but the collective suffering it might help create and sustain is something one should seriously think about as well.
Exactly. CV is interesting and can be completely harmless. But the benefits of the technology for the average person are extremely small relative to the benefits it brings to any potential oppressive power. Mass surveillance tech can be convenient, but it's a deal with the devil, and I think we sometimes willfully ignore that under the guise of a perceived amorality of progress. "It's just science, it's neutral, and you can also use it for good" can sound like a good argument, one I usually even agree with.
But in this case it's very simple:
"Good guys" using of CV gets them things like good auto sorting in Google photos
"Bad guys" using CV can reduce the complexity of creating a fully Orwellian, big brother like surveillance state from "absurdly complex to implement and impossible to maintain" to "we can already put in place a solid implementation today and it will get better by the day".
Now, I used to work in CV and, as you said, I get how great and exciting the underlying tech is. But I definitely lean more towards fear than excitement these days. I also realize that it's already everywhere, with heavy research efforts, and that you can't ever stop it at this point. But that really goes to show that the YOLO creator was right.
Think about it: what do the Chinese people gain from CV or face recognition right now? Maybe cool filters. The Chinese government? Unimaginable levels of surveillance and control over its entire population, and it's just getting started.
Just like nuclear energy or genetic engineering, once the technology is there you can't put it back. The problem the Chinese people have is not with CV, it's with their government. The way to go is strong regulation a la GDPR; otherwise a black market for the technology will appear if there is enough incentive.
Also, if the state-of-the-art in these systems is in the public domain, people on the 'counter-CV' side will be able to experiment and find effective countermeasures. If all the knowledge is kept secret (i.e. it's only researched by secretive organisations) the average person will have less understanding of the technology and its limitations.
You're saying Chinese gov, but the US gov or UK gov would do the same.
I agree that society needs to come together as a whole and regulate this into law, because that's how bad actors from governments can be stopped. At least in democratic countries.
I was answering the previous comment, but yes, you are also right, though I tend to think citizens' rights are more at risk in China. Anyway, that also cannot be taken for granted in the US or Europe; regulation must be agreed on and evolved over time.
Come to think of it, is there any tech that we would erase from history if we could?
To qualify, it needs:
- To have a negative net effect. Explosives for example don't qualify. They are certainly used as weapons and for all sorts of nasty reasons, but they are invaluable in many areas, including safety systems.
- Not to be an essential stepping stone for other, positive discoveries. For example the V2 missile made space exploration possible.
We could put all sorts of weapons on the list, with nuclear bombs in a top position. But think about it. Hydrogen bombs have never killed anyone. Although the idea is debatable, they may even have acted as deterrents, preventing conflict. As for Hiroshima and Nagasaki, the bombings basically ended the war; who knows how long it would have continued otherwise, with potentially more victims than the nuclear bombings caused. More generally, the most technologically advanced countries are now living in an unprecedented time of peace, despite having the most advanced killing machines ever.
In the end I don't see any tech that I would put in that list because of abuse. The ones I would put there would be of the "oops, didn't know it was bad, let's stop using it" kind. Leaded gasoline comes to mind.
>As for Hiroshima and Nagasaki, the bombings basically ended the war; who knows how long it would have continued otherwise, with potentially more victims than the nuclear bombings caused. More generally, the most technologically advanced countries are now living in an unprecedented time of peace, despite having the most advanced killing machines ever.
As a German, I'm always amazed at how casually Americans are willing to forgive themselves and even rationalize their state's war crimes as good.
>To have a negative net effect. Explosives for example don't qualify...
I was more idealistic about this kind of thing at one time, but it really is easy to rationalize the "Work on what you want to, don't worry about the end use" attitude that I have now. All I needed to convince myself was a good example.
Imagine that it's the early 1980s. Reagan is in office, the Cold War with the Soviet bloc is still very much a thing, and Star Wars is ramping up. Every other week it seems that somebody proposes yet another batshit-crazy weapons system. You're an engineer with progressive political views, and your bosses are asking you to work on a vast, global satellite network that will allow the military to locate both targets and assets with pinpoint accuracy anywhere on Earth. As far as you're concerned, ol' Ronny Raygun can fuck right off, and you tell them as much. "I'm not working on anything like that!"
20 years later, it turns out you missed your chance to get in on the ground floor of the most important public utility since the telephone system, all because you could only see the destructive uses for the technology.
For me that's hypothetical since I was nowhere near old enough to be employed at the time, but it's easy to say the same thing about applications like UAVs, autonomous vehicles, and ML/AI in general. CV is nowhere near enough of a defense-centric technology to justify refusing to work on it, IMO. Someone who refuses to work on CV on ethical grounds is walking away from their share of our technological future, just like the hypothetical engineer who refused to work on GPS.
>We could put all sorts of weapons on the list, with nuclear bombs in a top position. But think about it. Hydrogen bombs have never killed anyone. Although the idea is debatable, they may even have acted as deterrents, preventing conflict. As for Hiroshima and Nagasaki, the bombings basically ended the war; who knows how long it would have continued otherwise, with potentially more victims than the nuclear bombings caused. More generally, the most technologically advanced countries are now living in an unprecedented time of peace, despite having the most advanced killing machines ever.
I tend to agree with the general notion that nukes are a net win for world peace, but it's debatable whether the net death toll due to warfare has been that much lower in the post-Hiroshima age. Superpowers just conduct proxy wars nowadays instead of beating up on each other in person. If you added up the civilian toll of those proxy wars, it would probably be right up there with many WWIII scenarios, but since those conflicts are happening somewhere else besides major American or Soviet cities, nobody much cares.
The other concern I have is a relatively new one: people are going to forget what those things are and what they do. Eventually, the last person to see a nuclear explosion in person will die of old age. Long before that happens, morons with microphones will deny that Hiroshima and Nagasaki ever happened, just like they do now for events ranging from the Apollo landings to Sandy Hook. Others will take the position that nukes are just bigger versions of regular bombs, nothing that special.
So long term, who knows... maybe it would be better to put that genie back in the bottle if we could. Dunno, and in any case, it's hardly the same thing as CV. I can understand if someone is reluctant to work on nuclear weapons technology, but that understanding stops well short of refusing to work on CV.
>it is quite clear who will benefit: those in power
I think it's more likely that you want to detect a person so you don't hit them with a car than to hit them with a drone missile. Detecting cancer with CV, and other improvements to diagnostics, also save lives. If you could be working on these technologies but stop, your decision could cost lives.
The same neural network can be trained for both problems. Advances in medical-imaging neural networks translate to advances in people-detecting neural networks.
Of course, they are not directly used without changing the training data, etc. However, if we look at the big picture to make a general statement, then the advances brought by Computer Vision on natural images often help push research on medical images forward, albeit with a year (or two) of delay. One such example is U-Net (https://en.wikipedia.org/wiki/U-Net).
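To make that concrete, here is a minimal toy sketch (my own illustration, not code from the U-Net paper) of how the same architecture serves both domains: nothing in the network cares whether the pixels come from a CT scanner or a CCTV camera; only the channel counts and the training data differ.

```python
# Toy U-Net-style encoder/decoder in PyTorch; real U-Nets are deeper and have
# skip connections, but the point stands: the architecture is task-agnostic.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_classes, kernel_size=1),  # per-pixel class logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Identical code, different deployments: only the inputs and labels change.
medical = TinySegNet(in_channels=1, num_classes=2)       # e.g. greyscale scans -> tumour / background
surveillance = TinySegNet(in_channels=3, num_classes=2)  # e.g. RGB CCTV frames -> person / background

print(medical(torch.randn(1, 1, 64, 64)).shape)       # torch.Size([1, 2, 64, 64])
print(surveillance(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 2, 64, 64])
```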
>but in the case of CV, it is quite clear who will benefit: those in power, those who need to quantify, control and punish the human element, but don't have the manpower (=legitimacy?) or funds (=priority?) to do so manually.
Technology that exists for surveillance can also be turned on the surveillants. The most relevant case is probably police abuse being caught on smartphone cameras. These tools don't just discipline citizens; they also discipline the police. If I'm in a room with someone in a position of authority far above me, I'd rather have the camera on both of us than on neither of us.
So it's not actually that simple, and I don't see opting out as realistic or helpful, because the other benefits these technologies bring, security for example, will keep convincing the population to drive adoption forward.
I work in FR (facial recognition), at a leading company. I don't write any ML code; I'm an applications developer with a background in visual effects.
My attitude is that being in the developer group influencing how the applications behave, interfacing with our management, sales and clients, and being a voice unafraid to raise ethics questions within these groups is my way of knowing the state of this dangerous technology. I'd rather be there influencing its development and use than on the outside, frankly blind to what its in-deployment capabilities are.
You can only influence these things so far. The company ultimately needs to make money, and sometimes that money comes from people who want drone-guiding or surveillance technology, so if your suggestions lead too far away from that, you'll be fired.
I mostly agree with your thinking, but I don't think the decision is that simple.
If we focus on the reinforce-vs-question part, it's a kind of prisoner's dilemma. Assume governments (the reinforcers) will work on this technology anyway, and that on their own they end up with some lower-quality version (say 50/100). The questioners then have two options:
- don't work on it, and accept 50/0 (reinforcers at 50, questioners at 0)
- or work on it and improve both sides, ending at 70/20
Without deeper context it's hard to say which is better for the questioners (see the sketch below).
Would you rather have a gun against a rifle, or no weapon against a gun?
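A rough sketch of one reading of those numbers (the figures are the arbitrary ones above, not estimates of anything real):

```python
# Hypothetical capability scores out of 100 for each side, under the two choices
# open to the questioners; the values are the ones from the comment above.
scenarios = {
    "questioners abstain":     {"reinforcers": 50, "questioners": 0},
    "questioners participate": {"reinforcers": 70, "questioners": 20},
}

for choice, capability in scenarios.items():
    gap = capability["reinforcers"] - capability["questioners"]
    print(f"{choice}: reinforcers={capability['reinforcers']}, "
          f"questioners={capability['questioners']}, gap={gap}")

# Either way the reinforcers stay 50 points ahead; participating lifts the
# questioners from 0 to 20 but also lifts the reinforcers from 50 to 70,
# which is why it's hard to call without more context.
```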
The question is, if we are so aware of the dangers posed by these power structures, agreeing that the technology itself is "apolitical", then why have we allowed power to be concentrated so highly in a given institution or individual? It seems that, if we are concerned with ethics, we should be pushing much more for the dilution of power back to the people, rather than "taking sides" in terms of who to work with or what to work on.
Just trying to provoke an answer with a little humour ...
It is obvious to anyone that one of the primary uses of computer vision is watching other humans at scale. This cannot be surprising to anyone in the field, yet now ethical concerns mixed with politics are through the roof.
Maybe we can be more precise about this?
The work here is a technical achievement, but there are some weird comments here which I think have something to do with the author being Chinese.
I'm not sure how you got to this idea, but it's just not plausible.