This thread is a testament to the Dunning-Kruger effect.

Every observation listed here generalizes to a worry that AI is stupid, selfish, and/or mean. Unsurprisingly, just like what's feared about people!

Epic confusion of the topic of cybernetics with "AI".

In colloquial parlance, "AI" must be quoted as a reminder that the topic is hopelessly ambiguous; every use demands explicit clarification, with references, to avoid abject confusion. This comment is about confusion, not about "AI".

"Thou shall not make machines in the likeness of the human mind."

Too late.

Weizenbaum's ELIZA showed the bar to likeness is low.

Television similarly demonstrates a low bar, but interestingly doesn't arouse viewers' suspicions about what all those little people are doing inside when you change the channel.

I find it helpful, when considering the implications of "AI", to note the distinction between life and computing machinery: these are profoundly different dynamics. Life is endogenous; mechanical computers are exogenous. We don't know how or why life emerges, but we do know how and why computers occur, because we make them. That computers are an emergent aspect of life may be part of the conundrum of the former, and therefore a mystery, but we design and control computers, to the extent that it can be said we design or control anything. So if you choose to diminish or contest the importance of design in the outcomes of applied computing, you challenge the importance of volition in all affairs. This might be fair, but apropos Descartes' testament to mind: if you debase yourself, you debase all your conclusions, so it's best to treat confusion and fear about the implications of applied computing as a study of your own limits.

There's a single enormous and obvious hazard of "AI" in this era: that we imbue imitations of human responses with humanity. The bar is low for a convincing imitation and transformer technology demonstrates surprisingly high levels of imitation. This is conducive to confusion, which is becoming rampant.

The start of a responsible orientation to rampant confusion is to formally contextualize it and impose a schedule of hygiene on fakery.

The great hazard of centralized fakery (e.g. radio and television) is a trap for the mind.

We are living in the aftermath of a 500-year campaign of commercial slavery. When people are confused, they can be ensnared, trapped and enslaved. The hazard of "AI" is not the will of sentient machines burying the world in manufactured effluvia (we've achieved this already!); it's the continuation of commercial slavery by turning people into robots.

This thread reads like it's being created by bots; cascading hallucinations.

Well, this bot is throwing down the gauntlet: Prove you're human, y'all!

When AI surpasses the high bar built up in other media (a.k.a. heavily curated environments for procrastination), the problem of where to pigeonhole AI goes away; rather than a hostile or obvious illusion, it will just be another welcome illusion.

There will always be humans trying to push humanity further or hold it back; we should expect bots in that mix from now on. Regulating bots out of existence only offers a false handicap, so that human contributions continue to feel the most meaningful for a time.
