
If you realize something horrific, your options are to decide it's not your problem (and feel guilty when it blows up), carefully forget you learned it, or try to do something to get it changed.

Since the last of these involves overcoming everyone else's shared distress in admitting the emperor has no clothes, and the first of these involves a lot of distress for you personally, a lot of people opt for option B.




> overcoming everyone else's shared distress in admitting the emperor has no clothes

I don't disagree, but why do we react this way? Doesn't knowing the emperor has no clothes instill a bit of hope that things can change? I feel for the people who were impacted by this, but I'm also a little bit excited. Like... NOW can we fix it? Please?


The problem is, making noise incurs risk.

The higher up you go in large organizations, whether in politics or employment or whatever, the less facts matter and the more it matters to avoid being visibly seen to have made a mistake. So you become risk-averse and just ride the status quo, unless something is an existential threat to you, or a misjudgment by someone else that you can capitalize on.

So if you can't directly gain from pointing out the emperor's missing clothes, there's no incentive to call it out, and there's active risk in calling it out if other people won't agree. Moreover, this gives those with political capital in the organization an active incentive to suppress anyone embarrassing them by pointing out the problem everyone knew was there but no one admitted.

(This is basically how you get the "actively suppress any exceptions to people collectively treating something as a missing stair" behaviors.)


I've not seen that at my Fortune 100 company. I found others willing to agree and we walked it up to the most senior EVP in the corporation. We got face time and we weren't punished. Just, nothing changed. Some of the directors who helped walk it up the chain eventually became more powerful, and the suggested actions took place about 15 years later.


Sure, I've certainly seen exceptions, and valued them a lot.

But often, at least in my experience, exceptions are limited in scope to whatever part of the org chart the exceptional person is in charge of, and the usual dynamic still governs everything outside of that box...


And another problem, if I may: this, too, will soon be forgotten. Our "attention cycle" is too short.

(Look at all the recent, severe supply chain attacks by state actors, and how quickly they have been displaced from focus...)


So if we want our technology to be more reliable, we need to make its current unreliability into an existential threat to certain people?

I mean, it already is to some people, as shown elsewhere in this thread. Seems like it's the wrong people though.


It's a nice idea, but has that worked historically? Some people will make changes, but I think we'd be naive to think that things will change in any large and meaningful way.

Having another I-told-you-so isn't so bad, though - it does give us IT people a little more latitude when we tell people that buying the insecurity fix du jour increases work and adds more problems than it addresses.


Sure, on long enough timescales. I mean, there's less lead in the environment than there used to be. We don't practice blood letting anymore. Things change. Eventually enough will be enough and we'll start using systems that are transparent about what their inputs are and have a way of operating in cases where the user disables one of those inputs because it's causing problems (e.g. crowdstrike updates).

I'd just like it to be soon because I'm interested in building such systems and I'd rather be paid to do so instead of doing it on my off time.


> We don't practice blood letting anymore. Things change

Gonna make me a Tshirt outta this :)


Going off of that framework:

Because doing three requires convincing a bunch of people that are currently doing number two, to do number one instead.


Option C is to quietly distance yourself from it.


There are way too many horrific things in the world to learn about... and then you realize you can't do something about every one of them. But at least you can tackle one of them! (In my case, antibiotic resistance.)



