Yes, and pjreddie seems to have concluded computer vision is mostly (or too often for their liking) used for the digital equivalent of those bad things.
I think pjreddie's concerns are extremely relevant. However, he's not the only one working on things like this, so it's unlikely that research and development will stop, although I certainly think such development is ethically questionable. In some ways this seems similar to the ethical problems facing the scientists who worked on the nuclear bomb. I just hope to God that this tech will be used for good rather than bad, but given the way things are going with political censorship (government sponsored or otherwise) and people in opposing camps doing their best to dox political opponents, let's just say I'm not too optimistic...
> it's unlikely that research and development will stop
I know you didn't make this argument here, but I still want to point out that that's ethically irrelevant for his decision.
Or the other way around: "Someone else would have done it" is not a defense when you've built something that was clearly gonna be used for Bad Things(TM).
> Yes, and pjreddie seems to have concluded computer vision is mostly (or too often for their liking) used for the digital equivalent of those bad things.
Indeed, there are many nefarious applications of computer vision. But applications to the medical industry are plentiful too.
I see weighing up the net benefit as a tricky and personal matter.
That's fine - that's a personal choice he is free to make. But I completely disagree with it. I also don't think that unencumbered AI research is going to lead to the overthrow of the human race by machines, as Elon does.
Making cheap computer vision is just as dangerous to the "tyrant" as his supposed victims. You can already make a plausible anti-president suicide drone, A Ticket To Tranai style.