It's disheartening that so many bright people work on research that will undoubtedly be abused by authoritarian governments and make the world a worse place for everyone. Ask a thousand people what video analysis can be used for, and I bet the vast majority will say surveillance; most would probably struggle to think of anything else.
I wish researchers would start to ask themselves if what they're building can be abused by authoritarian governments, and if so, switch focus to a field of research that will make the world a better, rather than a worse, place to live in.
History will not look kindly on researchers who built the tools that enabled Xi the Pooh to achieve a dystopian surveillance state.
Google wasn't responsible for any of the major breakthroughs in AI. The current AI boom is driven by deep learning [1], due to Alex Krizhevsky [2][3], who was then a Ph.D. student at the University of Toronto. Subsequently, several significant breakthroughs have come from universities and from outside Google and the FAANGs (Siri and DeepMind were both acquisitions).
Google certainly played a major role, and it wasn't my intention to shield companies or governments from blame. However, I certainly don't believe holding individuals accountable is futile. They need to consider whether people can be suppressed, stripped of their rights, imprisoned, or murdered thanks to their research or code.
We should encourage developers to fight for privacy, and discourage them from engaging in research/writing code that can be used to commit evil acts.
And yes, I wholeheartedly agree that everyone should vote.