
The tech examples will often involve employees directly responsible for building, improving and maintaining the systems of abuse that are central to the ill deeds of the senior leaders you're referring to. The senior leaders can't do what they do without those systems.

This concept is fundamentally why roughly 4,000 Google employees protested the company's work on AI systems for the military. They have an inkling of what such systems will be used for.

So it raises the obvious question of how complicit you are if you build software systems that you know are going to be used for immoral things; you know ahead of time how they'll be used by said senior leadership. To say nothing of the fact that such systems are often built for the sole purpose of enabling abuse, leaving very little question about the line of moral responsibility (whether in violating privacy or in aiding censorship in authoritarian nations, etc.). This isn't a new debate within tech, though; it goes all the way back in the industry (e.g. IBM's tabulating machines).




> how complicit you are if you build software systems that you know are going to be used for immoral things

There's another path besides not building and not participating. It might be possible to be subversive and design these systems to best fit your values:

"My bias was always to build decentralization into the net. That way it would be hard for one group to gain control. I didn’t trust large central organizations. It was just in my nature to distrust them." -- Robert Taylor

https://theymadethat.com/people/l8tu24/bob-taylor



