This isn't his take on AI. It's his opinion about the arguments presented in Mustafa's fear-mongering book about how dangerous AI is and what governments should do about it.
You're right, he wasn't trying to obtain information — which isn't required for a rhetorical question, as the excerpt you quoted says.
He was emphasizing his opinion by voicing it as a question. The text immediately after the excerpt you posted says this makes it a valid rhetorical question:
> ... or as a means of displaying or emphasizing the speaker's or author's opinion on a topic.
I think the most likely way to achieve “AI alignment” is the peasants-with-crossbows one. Everyone having a powerful AI at their fingertips provides a balancing factor that is only overcome if those tools are in the hands of very few. Trusting Google and various governments to “do the right thing” is absurd - even assuming good behavior and intentions on their part, they just aren’t nearly as competent as people seem to assume. Everyone having access, everywhere, all at once, provides the transparency and space to innovate, but also to counterbalance the use of AI in dark ways.
I fundamentally don’t like the crossbows-and-guns, Second Amendment argument. AI isn’t a tool to murder with, having no function other than the death of your neighbor. I feel like it’s more like the “should you be required to license your seeds from a megacorp” type of thing. Food is existentially important, and it could be weaponized. But it’s not fundamentally a weapon; it’s fundamentally a thing of utility.
A finite resource fairly allocated to all? Which civilization are we talking about again? Certainly not humans.
Also, a billion or a trillion or infinitely many ants can't outsmart a single human. So unless some hard cap is enforced (how?), mini AIs will probably not do anything.
Around 37 trillion cells, of which 25 trillion are red blood cells. Plus about 100 trillion gut bacteria.
The brain is about 0.1 trillion cells, of which 0.02 trillion cells are the cerebral cortex, which is occasionally consulted by the rest of the brain on matters such as what words to enter into this textarea.
Well, somehow I trust Google with my data more than anyone else in the world. I prefer the knowledge to be held by a known, identifiable entity that can be held accountable, rather than running wild and being usable by anyone with bad intentions (once again, regarding my data). So this thread is about AI, not data, but I think the point still applies.
This is how I schedule my daily life. It is inevitable that I use software services; I'd like to keep them limited to one megalith, and that is the Google ecosystem. At least their business model is not predatory, where the payoff from exploiting my individual data would exceed the risk to me of handing it over.
AI is not powerful. AI doesn't increase information complexity or create new information. It actually does the opposite - it averages out the complexity peaks.
In effect, AI isn't intelligence; it dumbs everything down and reduces information to a common-denominator baseline.
(In the future the real difference between the "haves" and the "have-nots" will be between those who can understand and deal with information complexity and those who can't. Creating information complexity is the only thing humans are naturally good at; it's our biological imperative.)
AI is not here to dumb anything down but to automate tasks and to find and reveal complexity, especially hidden complexity. Humans are limited by their short- and long-term memory and bandwidth, especially in the modern digital environment.
I would still rate the opportunity gap as a higher priority than understanding complexity. It's still funny when people try to be digital nomads only to discover geopolitics or time zones.
>I think the most likely way to achieve “AI alignment” is the peasants with crossbows. Everyone having a powerful AI at their fingertips provides a balancing factor that is only overcome if those tools are in the hands of very few.
I think neither elite actors like Sam Altman nor the peasants will manage to get their AI to do what they want it to do once the AI becomes knowledgeable enough and good enough at planning to make plans resistant to determined human opposition.
I.e., I think inequality is a distraction from the main problem with frontier AI research, namely, that we are not going to figure out how to aim one such that it stays aimed after its cognitive capabilities have become dangerous.
I would prefer a "peasants and longbows" analogy; any country that trained its people in AI would become super effective but would have to keep its peasants very happy.
It's less "good guy with a gun" and more "some random country with a small to medium army". Access to violence doesn't democratize on the individual level, but as the parent comment alludes to weapons changes allowed peasant armies to defeat trained knights, changing the structure of society.
You personally might not be able to exploit open AI models fully, but there are lots of countries and orgs that could. Remember, millions of dollars is loose change in the defense context.
For me the more pressing concerns are societal unrest, economic and monopolistic: e.g., workers being displaced without enough new jobs being created. Or every communication channel being spammed, with no ability to discern trustworthy/authentic content. Or too few large companies hoarding the GPUs and owning the biggest, most effective models, which allows them not only to consolidate power in the AI industry but to own/decide winners in every other industry. You can kinda see this now with Nvidia basically trading GPUs for equity.
The 'regulate it like nukes because teenagers might ask it to end the world' thing seems pretty far down the line.
In terms of communication channels being spammed: I am actually hopeful here, because it will create enormous incentives for us to rethink our communication channels to be much more selective. For example, moving email to a pull-based system that admits only people I'm actually interested in hearing from, rather than an open door with less-than-perfect spam filters. Or making telephone numbers something that has to be mutually exchanged to allow access.
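A toy sketch of what that pull-based filter could look like (Python; the addresses and message shape here are made up purely for illustration):

    # Toy pull-based inbox: mail from senders I explicitly subscribed to is
    # delivered; everything else waits in a request queue I check on my terms.
    ALLOWED_SENDERS = {"alice@example.com", "bob@example.com"}  # hypothetical contacts

    def route(message: dict) -> str:
        """Return the folder a message should land in."""
        if message["from"] in ALLOWED_SENDERS:
            return "inbox"
        return "pending_requests"  # never pushed to my attention

    print(route({"from": "alice@example.com"}))   # inbox
    print(route({"from": "bulk@spam.example"}))   # pending_requests

The point isn't the code, it's the default: unknown senders get no delivery guarantee at all.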
In-person exchange seems impractical for businesses or for foreign correspondence, which is kind of the thing the phone excels at. And having to accept a sent request just makes that the vector for abuse, doesn't it?
There's potential for abusing a sent request, but because most people won't accept them, it basically removes the incentive to bother abusing it. It's at least much less abusive than calling people.
>The 'regulate it like nukes because teenagers might ask it to end the world' thing seems pretty far down the line.
It's straight-up absurd. AI is going to have no appreciable effect on this, because it would simply be disseminating existing knowledge. You can already look up how to build a nuclear bomb; it's not some big secret that nobody's allowed to know. The same goes for all the other world-ending scenarios. AI can't act on its own in the physical world. Even when it eventually does, it will be bound by the same limitations people are.
I agree that the real dangers AI presents are much more mundane. People being able to do things so much more efficiently is going to cause instability in the lives of people whose jobs it affects. It won't replace everyone, but it will cause problems: people will have to learn new skills, find new jobs, etc.
AIs are already acting in the physical world: any AI (e.g., ChatGPT) changes the configuration of the brains of those who use it.
Sure, it's a rather philosophical argument currently, but this will change.
If I were a malicious actor, I'd create a chatbot that sneaks in responses that favor my agenda. In that way I'd make my users the puppets of the AI.
It's not like this hasn't always been happening. It's the reason billionaires buy media companies: fake news, astroturfing, book bans… does it even matter whether it's Andrew Tate or "AI" spewing nonsense?
Oh yeah, I thought so. This shows who is biased here.
The AI doesn't care or know; it's about the source material it's trained on. It turns out a lot of literature, academic texts, and even stuff on the internet leans left (and right). So this naturally seeps in.
When you live in a bubble, left or right, you will see the problem everywhere.
As soon as I observe the behavior or output of an agent or AI, the configuration of my brain changes. This is somewhat tautological, as observation is connected to memory and thus to brain changes.
I guess the more interesting question is whether there are lasting changes. I'm not an expert in learning, memory, or brain development, but even without much knowledge it should be clear that frequent interaction will have lasting effects. For example, ChatGPT is biased in the sense that it does not know about all the books that haven't been digitized. If I use it frequently, the absence of those books will be reflected in my brain.
Another example could be a chess-training AI. If I train with an AI, I could have blind spots in the landscape of chess skills that are deliberately (or not) excluded. So clearly, this AI would change my brain, but not only in ways I intend.
AI alone, perhaps, but logically AI eventually combines with capabilities from robotics, or is able to reach into the autopilots of cars and planes, drones, unmanned turrets, missile controls, etc. No?
I hope the uber-capitalists decide they can use AI to build a space utopia for their families in another solar system and go there :)
I honestly don't give a shit about jobs, or having a job. I have one because the system says I need one; if I didn't need a job, I'd be so, so, so much better off.
I'm certain we have enough technology and know-how to feed, house, and provide electricity for everyone on Earth without them having a 9-to-5.
This view is hilarious: that you think you can sit around and produce nothing while everyone else around you produces everything you need, or that everyone can stop what they're doing and still provide you with that. You're just lazy, with dreams of contributing nothing, and that's not admirable. Until the day robots are literally walking around wiping our asses, people need to do things.
> That you think you can sit around and produce nothing while everyone else around you produces everything you need.
Yet that's exactly what the "leisure class" do. Inherit enough money and that's your life. Why is that not equally hilarious? Why do only some of us get to have that and it's OK?
The vast majority of people who have money didn't inherit it. The vast majority worked for it.
You don't have it because you didn't earn it. Instead you're on the internet, demanding like a child that you shouldn't need to work for anything. That's the difference between people with money and yourself.
I'm not asking for money. I'm asking why it's OK to inherit money from your parents and do nothing useful your whole life. Why do we give those people a free pass?
This is what people who have never built anything like to say.
Bezos, as an example, built a great service. An exchange of value between willing customers and what he created happens repeatedly; therefore he earned the billions. That's how it works: the more value you create for more people, the more money you make. And I can't think of a fairer system.
It's not fair because some people inherit enough money to never have to work. If you actually wanted a fair system then we would all spend our childhoods in a boarding school with no contact with our parents, and no inheritance. But that's not feasible, so we're stuck with this unfair system. But let's not pretend it's fair.
Those who "create value" often don't. The continuing enshittification of everything is removing value from customers, yet the CEOs who are forcing it on their customers are getting paid vast salaries. We all know that they're destroying value and that their organisations will eventually be replaced by ones that actually value customers, but they continue to be rewarded.
The simple fix is to scale the maximum work week based on available work, partially socializing the efficiency gains from AI to the society that provided much of the training data (a toy illustration follows below).
How much can be up for debate, but giving it all to corporations is going to be a replay of the run-up to WW2.
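One toy reading of that proposal, with made-up numbers purely for illustration (Python):

    # Toy illustration: cap the statutory work week in proportion to
    # productivity gains, holding total output roughly constant.
    def scaled_work_week(base_hours: float, productivity_multiplier: float) -> float:
        """If the same output now takes 1/m of the labor, cap hours at base/m."""
        return base_hours / productivity_multiplier

    print(scaled_work_week(40, 2.0))   # AI doubles output per hour -> 20.0h week
    print(scaled_work_week(40, 1.25))  # a 25% gain -> 32.0h week

How the multiplier would actually be measured, and whether per sector or economy-wide, is exactly the "how much can be up for debate" part.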
I like working, I like making things; I just don't care about having a "job" that I tie my identity to.
I know that most jobs are bullshit jobs, and the fact that many people need to show up to them just to provide for their families is stupid. Maybe a 3-hour day would be more realistic.
Also, I have a Japanese Toto I imported from Japan; it's literally a robotic toilet that wipes my ass for me. Try again, angry man.
Think of it a bit differently: think of production being so efficient and labor-free that producing goods and services is extremely cheap and in turn produces abundance.
There's a reason why Jamie Dimon is saying people will be working 3 days per week in a generation because of AI.
And it's why, in capitalistic and technology-forward societies, the poor are generally better off than in every generation prior.
Didn't people say that about personal computers as well? Compared to a few generations ago we have everything in cheap abundance but we're still working 5+ days a week.
> Modern technic has made it possible to diminish enormously the amount of labor necessary to produce the necessaries of life for every one.
...
> Suppose that at a given moment a certain number of people are engaged in the manufacture of pins. They make as many pins as the world needs, working (say) eight hours a day. Someone makes an invention by which the same number of men can make twice as many pins as before. But the world does not need twice as many pins: pins are already so cheap that hardly any more will be bought at a lower price. In a sensible world everybody concerned in the manufacture of pins would take to working four hours instead of eight, and everything else would go on as before. But in the actual world this would be thought demoralizing. The men still work eight hours, there are too many pins, some employers go bankrupt, and half the men previously concerned in making pins are thrown out of work.
...
> If the ordinary wage-earner worked four hours a day there would be enough for everybody, and no unemployment — assuming a certain very moderate amount of sensible organization. This idea shocks the well-to-do, because they are convinced that the poor would not know how to use so much leisure. In America men often work long hours even when they are already well-off; such men, naturally, are indignant at the idea of leisure for wage-earners except as the grim punishment of unemployment, in fact, they dislike leisure even for their sons.
Bertrand Russell was already arguing that the eight-hour workday was wasteful bullshit in 1932.
The well-to-do also need to have an underclass to distinguish themselves from, otherwise they become common. To them, becoming common is one of the worst things that could happen.
The work week hasn't shrunk at all; many goods and services are cheaper, but housing isn't, and many people live paycheck to paycheck.
I have my reservations that THIS revolution will be the one that solves our problems. A jump in technology is simply a way for those with the means to get more out of those without. AI isn't an equalizer, compute seems like it'll be the new valuable resource and you can buy that with cash.
Yes, but pay has gone down in real terms, and people can afford less of what's important: owning a home, raising a family. We do indeed have more of the stuff we don't need or want.
I could see this happening... in a generation. We're a long way off; tech changes quickly, but the large physical-plant investment needed to produce real products moves slower. It will be some time before AI and robots are good enough to replace a lot of the labor that humans do, but I see no inherent reason it won't happen.
Right now the main thing AI is good at is quickly generating bullshit, so those are the jobs that are going to be first to be replaced.
Asking the average Joe to crank out their own personal LLM is like asking the average Joe to stand up their own Mastodon (i.e. unrealistic).
When given the choice between owning your data and spending two minutes to do so vs. opening $MEGA_LLM in your browser and telling it to make you Instagram for Guinea Pigs, $MEGA_LLM will win every time. Doubly so now that $MEGA_LLM is starting to appear in _everything_.
Case in point: OpenAI is already well past 100M users. Many of them are _paid_ users (which is unprecedented).
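To be fair, the mechanics of running your own model are now only a few lines, not a Mastodon-scale deployment. A minimal sketch, assuming the llama-cpp-python package is installed and a GGUF checkpoint has already been downloaded (the model path below is hypothetical):

    # Minimal local-LLM sketch (pip install llama-cpp-python).
    # The model path is hypothetical; any downloaded GGUF checkpoint works.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf")
    out = llm("Q: Why run a model locally? A:", max_tokens=64)
    print(out["choices"][0]["text"])

The friction is the gigabytes of weights and the hardware, not the code, and $MEGA_LLM still wins on convenience.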
I don't think running your own LLM should be illegal (on what grounds would that happen anyway?), but I'm also concerned about the world depending on, like, five huge entities for all knowledge.
Why is this bad? We already rely on one entity (Google) for 80% of search queries and another (Amazon) for 45% of product searches. The world hasn't crumbled; meanwhile, retrieving information has gotten orders of magnitude easier than trying to find an entertaining stream (Netflix, Prime, Peacock, Pluto, Tubi, Disney, Max, Paramount+, YouTube, and so on).
Because those platforms provide an illusion of choice, and LLMs do not (by default).
Having a single entity that tells you what it thinks is the right answer short-circuits the need to think critically about the answer provided. This is especially bad when, as in the case of ChatGPT, people are using it as a therapist, doctor, teacher, lawyer, pair programmer, and more.
There is also the possibility of a central authority using the platform to tell you how to think. When you control the model, you control the narrative. I think this could be extremely dangerous in the wrong hands.
Does this idea work at all, though? For one, did peasants ever actually have crossbows in history, and if they did, did it ever actually help them? To me the crossbow sounds like an expensive weapon that would likely be issued to soldiers in an army, and the idea that a peasant would own one seems unlikely. I don't know the history, but it just doesn't sound right to me. Also, the closest modern analogy is America, where everyone has a gun, and the idea that those guns actually pose a threat to the US government or the hegemony of large corporations is... fanciful. The practical effect is inner-city gun violence and school shootings, not some balancing effect against corporate greed.
I'm just saying... if the analogy you're using to justify your position seems to be entirely wrong, doesn't that kind of undermine your position? It's not obvious to me that "peasants with crossbows" is helping here.
Nukes are not that useful for stuff other than extremely destructive explosions.
Besides, we have knives. I'm afraid this mindset will soon demand that everything be pre-cut in licensed facilities and make knives illegal.
This desire for censorship and restrictions is infinitely ridiculous. Instead of addressing why someone would want to hurt others or themselves, they try to eliminate the tools.
This is going to fail. Maybe people in the UK or the USA will end up not knowing how things work without formal licensing, but people in Russia, Ukraine, Bulgaria, China, and Vietnam will know.
This attitude can actually lead to the downfall of Western societies. WTF, the computer won't tell me how to cook meth? Cooking meth is something junkies do, and it's not the lack of knowledge that keeps me from doing it. I'm just curious, not looking to harm myself or anybody, yet I'm being lectured by a computer.
You can fantasise about it, sure; knowing how things work helps with that. Thanks to knowing things, you can even calculate how it would work and write science fiction about it. But if the contemporary mentality had been in charge during the nuclear age, a whole category of fiction wouldn't exist, or would suck badly.
> Nukes are not that useful for stuff other than extremely destructive explosions.
I would like to mention that in the Fallout series, a game that depicts a different version of history where nuclear power was widely accessible, cars and robots have their own nuclear generators. The reality is that if nuclear power were that common, we would not have any problems with energy or the power grid.
It’s also not a secret how to do these things; they teach it to kids. The complexity comes from engineering the device, and that's actually the secret part.
They aren't that different, all told. There is a reason IEDs are a huge safety concern. I'm failing to see how that wouldn't be worse with nuclear power sources.
Or am I mistaken, and you could make it so that IEDs built from nuclear energy sources aren't possible?
IEDs are not that much of a safety concern, actually; the people who intend to use them are. The risk of IEDs isn't modulated by the number of people who know chemistry, but by the number of people pissed off or delusional enough to make and use one.
Portugal is not safer from IEDs than Iraq because the Portuguese are bad at chemistry while Iraqis are all Heisenbergs.
I mean... you aren't wrong; having people willing to use an IED is the dangerous thing. What's more, they are only as dangerous as the supplies they can get ahold of.
I... don't see how that doesn't make the ubiquitous availability of nuclear power generators a massive hazard. We have people willing to make and deploy IEDs today. Why would we not have them if nuclear sources were ubiquitous?
(If the idea is we would have the nuclear plants, but that the ubiquitous electronics would not have their own nuclear sources, I can see that. :) )
I don't see how this is relevant. Knowing how to do something is not the same as doing it.
Personally, I'm not anti-nuclear, but I'm also not for the proliferation of it, simply because I don't trust that there are enough serious people to operate a nuclear plant in every city at the safety levels we have today. That said, I will never advocate hiding the knowledge of how to build them.
IEDs are a huge safety concern in battle zones; this isn't Terry Gilliam's Brazil.
As the old analogy goes: how big a chunk of metal do you need to weld to the rail to kill everyone on the train? Which part of that should be more illegal?
Gun violence is a culture problem, not a security problem.
Language models are not nukes and there is no real world evidence that they will become anywhere near this dangerous, or dangerous at all really. Meanwhile you can buy an AR-15 in many states if you can fog glass, or order the supplies to do CRISPR gene editing yourself.
We have people calling for bans on code and math based on nothing more than sci-fi.
I too agree with Carmack that AI doomerism has a definite subtext of concern for changes in power dynamics.
So far, there hasn't been evidence that AI poses an actual danger to human society. There are lots of predictions that it could, or fictional scenarios where it does. But no actual real-world cases.
There hasn't been evidence that AI doesn't pose an actual danger to human society. There are lots of predictions that it could not, or fictional scenarios where it does not. But no actual real-world cases.
There is no evidence you will become a serial killer, but there is no evidence you won’t. I could certainly construct plausible scenarios where you do and even write a book about it. Therefore we should lock you up now.
Banning things on the basis of evidence-free speculation is a really dangerous road to go down.
Probabilistically there is some evidence you'll become a serial killer.
For example, if you were being hired by law enforcement and you had written a number of books that stated that being a serial killer was a good thing, then you wouldn't get hired.
There appears to be a particular type of black-and-white response in this thread that seemingly ignores that the world is not black and white.
This is like someone saying there's no proof of God and then responding that there's no proof that there isn't a God. It's safe to say that the burden of proof falls on the one making the outrageous claims, which is you.
By this very logic you should deny global warming: 1) no human-made global catastrophe has ever happened before (no evidence), and 2) it's an outrageous claim with severe implications for the economy, social life, etc., so why bother?
Except there is evidence that climate change is happening today and that it's caused by human society. We also have evidence of very different past climates on Earth that we know would be uninhabitable for humans.
Meanwhile, AI doomsday scenarios use sci-fi movies as evidence, and humanity has "survived" many technological revolutions in the past. Do you think everyone would care as much if the Terminator movies weren't so popular?
Personally I think there's some risk and problems to solve but it doesn't warrant the authoritarian panic responses it gets.
In my opinion, AI carries an existential risk to humanity. It has nothing to do with sci-fi, but rather is a mere observation of what happens to forest inhabitants when humans come and start building houses. I live in such an area. Shall we ban it? No, it's impossible. Too late.
Evidence necessarily belongs to the past. It is something that already exists: a known fact, an event, etc. How on earth could there be evidence of things that have never been?
"There is no evidence that a black swan will be ever found". Hmm. OK.
It's a really nice and succinct quote, I should remember it.
Makes me think of the Second Amendment and John's stance on that; a "well-armed militia" might be a force to be reckoned with, but if it comes to an actual confrontation they're just peasants with crossbows vs. the US military.
Not that I believe the US military would ever go up against their own people, mind you.
>if it comes to an actual confrontation they're just peasants with crossbows vs the US military
History is littered with armies losing to peasants. The US military lost to a bunch of poor people in the jungle, then lost to a bunch of poor people in the desert.
US citizens, even just Texans, would most likely prevail easily over the US military, especially as the soldiers would be wary of attacking their neighbors.
Guns aren't AI, and I've yet to find a good explanation as to why AI is dangerous outside of science fiction where it becomes self-aware and takes over the military.
This is the core irony of the AI enthusiasts: by massively overstating the capabilities of AI, they provide the perfect ammunition for AI doomsters. If the AI enthusiasts focused on what can reasonably be done with AI today, they would have a lot less to worry about from regulators, but instead they keep making claims so ambitious that, actually... yes, we probably should heavily regulate it if the claims are true.
My cynical take is that this is actually the PR strategy at OpenAI (and others that had to jump on the train): by talking up vague "singularity" scenarios, they build up the hype with investors.
Meanwhile, they're distracting the public away from the real issues, which are the same as always: filling the web with more garbage, getting people to buy stuff, decreasing attention spans and critical thinking.
The threat of job loss is also real, though we should admit that many of the writing jobs eliminated by LLMs are part of the same attention economy.
I haven't read the book, so I feel I can't give it an honest criticism, but I'm assuming the book isn't so much about what a "peasant" or "teenager" would do with AI, but more about what smaller unstable nations with rogue dictators, or terrorist groups, or cartels, etc., would do with it?
At least this is what I find more worrisome personally. But realistically I'm not sure we can keep it away from them, similar to how North Korea managed to acquire nukes, except it seems way easier to acquire powerful AI in this case.
I find it somewhat depressing how there are already two starkly divided camps on the safety subject.
It's like the culture wars again; there also seems to be quite a consistent mapping, with the left more on the safety/control side and the right reckoning it will all sort itself out if we let the market get on with it, with contemptuous dismissals marking lines in the sand.
Because the dilemma between personal freedom and living in a group is a fundamental one. You'll find it everywhere, and it's also the reason why political groups have historically been structured around those kinds of concerns.
Just as, once you start reading the Greek philosophers, you realize all the important questions were asked 2,000 years ago. Since then, we've only been trying to find the best ways to accommodate our human nature.
I wonder, if I had a child, would it be possible to raise it to not outsource the thinking process to a program?
With the internet, I ask search engines (usually just the one) for answers other people might give to a problem I also have.
Pre-~Y2K that yielded great results, because the internet was populated with sensible information by smart and educated people.
When the masses came, the results gradually became worse. I still use the internet for answers instead of thinking on my own, though in many cases I'm forced to think for myself anyway.
With AI the source information is already polluted and the program is fantasizing its answers.
But there will be a point where it delivers accurate answers.
My child, should I ever have one, would then use AI like any other tool. Will it simply have more resources at its disposal, or would it be better for it to come up with its own answers?
I am an only child. I spent a lot of time alone, thinking, making sense of the world, figuring things out on my own.
I played C64 games, with terrible graphics, which trained my imagination. I learned English by playing those computer games.
There came a time when I outgrew those simplistic computer games and turned to having a social life, going to parties, getting into music creation, DJing.
Would that also be something my child would do? Use AI, grow up with values the AI suggests to it, abandon it later, search for its own way in life?
Would it even learn another language, when the AI conveniently translates everything on the fly?
I think, before adding yet another thing to our plate that most people, including me, don't fully understand, maybe we should focus on the energy problem and on having a sustainable place to live.
Any technological advance is based on having enough energy to fuel it, and given the state this planet, our habitat, is in right now after years of uncontrolled energy use, energy must be the priority before anything else.
Unless you believe that AI can solve that riddle.
Recently in the news, Microsoft wanted to build small nuclear reactors to power its AI infrastructure, adding a long-term health and waste problem.
It's October, and it's been over 25°C these past days; the sea is still 21-23°C. Usually it's much colder.
The amount of fires and floods this year is over the top.
I am a GPT Luddite myself. My coworkers/friends keep using it as the basis of their thinking. I ask them, "How would you do it yourself?" Don't ask this program to make your program. I'm not disputing that it's a fast way to learn, but still... will they be able to imagine on their own how to make something new?
The only way I have ever really learned how to do something is to scratch and claw my way through it, making a million mistakes, until it eventually crystallizes into clarity. That has never happened by just having something done for me.
Keep in mind that every generation thinks that about the following one; in your case, you were using computers and whatnot, while your parents may have thought you'd never achieve much because you didn't read as many books or do the same things they did.
Every generation has its "kids these days", but up until now they've all ended up all right.
(I say until now, because the current young generation is struggling economically because of the preceding generations messing up the system)
I thought the discussion was getting to the important points once the hyperbole was resolved, but Carmack ducks out.
Carmack:
> To answer the question directly: there is no way, even with an infinitely smart (but not supernatural) oracle, that a teenager can destroy all of humanity in six months.
> Even the most devastating disasters people imagine will not kill every human being even over the space of years. This is a common failure of imaginative scope, not understanding all the diverse situations that people are already in, and how they will respond to disasters.
> You could walk it back to “kill a million people” and get into the realm of an actual discussion, but it isn’t one I am very interested in having.
---
I do think that's a discussion very much worth having; the asymmetrical advantage that AI gives is the concern here. A virus is a special arrangement of a few thousand atoms that we can now control and design on a computer. The defence is global vaccination programs costing billions of dollars, and many lives, in the process.
We will no doubt be able to use AI to develop vaccines; that isn't the bottleneck. The bottleneck is in testing and distribution. It's a fundamentally asymmetrical situation.
To be clear, this is a risk even without AI, but AI lowers the threshold of effort and makes it more likely. I would love to be convinced otherwise.
The hyperbole could go the other way too. E.g., when only a government has access to an omniscient oracle advisor and advanced AI technology, how long would it take, on average, for a democracy to turn into a dictatorship? Considering the asymmetry in access to information and technology between a government and its civilians, a bad government would have the power to take complete control over a country. With sufficiently advanced AI technology, only a handful of people are needed to maintain control over a large population and prevent any attempt to overthrow it. I don't have time right now to come up with a proper transition, but the hyperbole concludes that only one insane dictator with access to an omniscient oracle advisor is needed to destroy the whole world. This might be even more "probable" than a teenager destroying all of humanity in six months.
Can we actually control viruses? No. 2020 was evidence enough of that; we were comically bad at it. Thankfully, immune systems have co-evolved with viruses for millions of years, and there are limits to what viruses can do. After all, the most successful viruses don't kill off their hosts (since they would also die off).
It quickly became apparent that Covid could mutate quicker than we could possibly keep up with. Like the common cold and the flu, it iterates too quickly to eliminate entirely like smallpox.
At any rate, it seems all you need for AI is attention (transformers + processing power). If AI ever becomes a real threat beyond generating believable fake images and videos, the likely counter is a diversity of AI models that can compete with the "evil one".
It was surprising to me to learn that we can be _a lot_ faster than evolution when it comes to "improving" viruses; the process is called gain-of-function research. It can create mutations that are rare or impossible in nature. Viruses don't select for devastation on their own; they select only for propagation.
We will no doubt be able to use AI to develop vaccines; that isn't the bottleneck. The bottleneck is in testing and distribution (and humans not wanting to get vaccinated). It's a fundamentally asymmetrical situation.
You don't need an AI to construct a virus that could destroy humanity. We already have that capability, and bioprinting one by an average grad student at a random biotech on their lunch break will become commonplace soon.
Frankly, hearing him speak about various things, he just sticks to the position he thinks is right and doesn’t really give a rat’s bottom what the “left” and “right” think.
No. When you say you're right- or left-wing, it means you have the same opinions as people on one end of the spectrum or the other. Nobody is "independent" in that sense, only in party affiliation. Being someone who complains about statists and avows their support for the Second Amendment sounds like a textbook conservative.