
> It is difficult to disagree with the potential political and societal impacts of large language models as outlined here

Is it? Unless you mean something mundane like "there will be impact", the list of risks they're proposing is subjective and debatable at best, irritatingly naive at worst. Their list of risks is:

1. Weaponization. Did we forget about Ukraine already? Answer: weapons are needed. Why is this an AI risk and not a computer risk anyway?

2. Misinformation. Already a catastrophic problem just from journalists and academics. Most of the reporting on misinformation is itself misinformation. Look at the Durham report for an example, or anything that happened during COVID, or the long history of failed predictions that were presented to the public as certain. Answer: Not an AI risk, a human risk.

3. People might click on things that don't "improve their well-being". Answer: how we choose to waste our free time on YouTube is not your concern, and you being in charge wouldn't improve our well-being anyway.

4. Technology might make us fat, like in WALL-E. Answer: it already happened, not having to break rocks with bigger rocks all day is nice, this is not an AI risk.

5. "Highly competent systems could give small groups of people a tremendous amount of power, leading to a lock-in of oppressive systems". Answer: already happens, just look at how much censorship big tech engages in these days. AI might make this more effective, but if that's their beef they should be campaigning against Google and Facebook.

6. Sudden emergent skills might take people by surprise. Answer: read the paper that shows the idea of emergent skills is AI researchers fooling themselves.

7. "It may be more efficient to gain human approval through deception than to earn human approval legitimately". No shit Sherlock, welcome to Earth. This is why labelling anyone who expresses skepticism about anything as a Denier™ is a bad idea! Answer: not an AI risk. If they want to promote critical thinking there are lots of ways to do that unrelated to AI.

8. Machines smarter than us might try to take over the world. A quote from Vladimir Putin is offered as proof, except it makes no sense: he's arguing that AI will be a tool that lets humans take over the world, while this point is about the opposite. Answer: people with very high IQs have been around for a long time and as of yet have not proven able to take over the world, or even especially interested in doing so.

None of the risks they present is compelling to me personally, and I'm sure that's true of plenty of other people as well. Fix the human generated misinformation campaigns first, then worry about hypothetical non-existing AI generated campaigns.

I appreciate your perspective, but the thing that is missing is the speed at which AI has evolved, seemingly overnight.

With crypto, self-driving cars, computers, the internet or just about any other technology, development and distribution happened over decades.

With AI, there’s a risk that the pace of change and adoption could be too fast to be able to respond or adapt at a societal level.

The rebuttals to each of the issues in your comment are valid, but most (all?) of the counterexamples are ones that took a long time to occur, which gave people ample time to prepare and adapt. E.g. "technology making us fat" happened over multiple decades, not over the span of a few months.

Either way, I think it's good to see people being proactive about managing the risks of new technologies. Governments and businesses are usually terrible at fixing problems that haven't manifested yet… so it's great to see some people sounding the alarm before any damage is done.

Note: I personally think there's a high chance AI is extremely overhyped and that none of this will matter in a few years. But even so, I'd rather see organizations being proactive with risk management than reacting to the problem when it's too late.


It may seem overnight if you weren't following it, but I've followed AI progress for a long time now. I was reading the Facebook bAbI test paper in 2015:

https://research.facebook.com/downloads/babi/

There's been a lot of progress since then, but it's also nearly 10 years later. Progress isn't actually instant or overnight. It's just that OpenAI spent a ton of money to scale it up, then stuck an accessible chat interface on top of tech that had previously been mostly ignored.
