Hacker News

They will, because, as so often happens, perfection and unintended consequences matter less than "having the new tech", for both the companies and some of the users.

Most importantly, while both AIs sometimes get very important and fundamental things wrong, they get enough right to be of help in some tasks, and in some cases a lot of help.




> Most importantly, while both AIs sometimes get very important and fundamental things wrong, they get enough right to be of help in some tasks, and in some cases a lot of help.

That's only true if you can easily and cheaply identify when they are correct. But at this time, and for the foreseeable future, that's not the case.


That's also true if the fallout from wrong usage costs less than the savings, which for huge corporations is often the case, even in situations where it shouldn't be.

Also, you don't have to use this tech to "get the truth"; you can use it to generate things where it is sometimes rather simple to identify and fix mistakes, and where, due to human error, you have an "identify and fix mistakes" step anyway.

Do most of these kinds of usage likely have a negative impact on society? Sure. But for adoption that doesn't matter, because "negative impact on society" doesn't cost the main adopters money, or at least not more than they make from it.


I have never found "big corporations will do <X>, so we should all just prepare to suck it up" a very convincing argument.


I never said we should, or that it's good.

I said it will happen because you can make/save money with it.

And it's not just big corporations; it's already good enough for many applications that companies of all kinds and sizes can use it, from a financial point of view.

And "it makes money so people will do it" is a truth we have to live with as long as we don't fundamentally overhaul our economic system.



