
And... How do you do that? Seriously, people have been talking about stopping CO2 emissions for what, 50 years? We've barely made progress there.



With threat of military intervention.

People have been talking about curtailing carbon emissions for 50 years, but they haven't been serious about it. Being serious doesn't look like saying "oh shoot, we said we will... try to keep emissions down, but we didn't... even try; sorry...". Being serious looks like "stop now, or we will eminent domain your data centre from under you via an emergency court order, or failing that, bomb it to rubble (because if we don't, some other signatory of the treaty will bomb you for us)".


So you're proposing replacing a hypothetical risk (Geoffrey Hinton puts the probability at 1%) with guaranteed pain and destruction, which might escalate to nuclear war? Thank you, but no thank you.


No, I'm proposing replacing a high risk scenario (I disagree with Hinton here) with a low risk scenario, one that has a low chance of escalating into any serious fighting.

It seems people have missed what the proposed alternative is, so let me spell it out: it's about getting governments to SIGN AN INTERNATIONAL TREATY. It is NOT about having the US (or anyone else) policing the rest of the world.

It's not fundamentally more dangerous than the few existing treaties of similar kind, such as those about nuclear non-proliferation. And obviously, all nuclear powers need to be on-board with it, because being implicitly backed by nukes is the only way any agreement can truly stick on the international level.

This level of coordination may seem near-impossible to achieve now, but then it's much more possible than surviving an accidental creation of an AGI.


> No, I'm proposing replacing a high risk scenario (I disagree with Hinton here)

I disagree with you here, I think the risk is low, not high.

> it's about getting governments to SIGN AN INTERNATIONAL TREATY. It is NOT about having the US (or anyone else) policing the rest of the world

We haven't been able to do that for climate change. When we do, then I'll be convinced enough that it would be feasible for AI. Until then, show me this coordination for the damage that's already happening (climate change).

> This level of coordination may seem near-impossible to achieve now, but then it's much more possible than surviving an accidental creation of an AGI.

I think the coordination required is much less possible than a scenario where we need to "survive" some sort of danger from the creation of an AGI. But we can find out for sure with climate change as an example. Let's see the global coordination. Have we solved that actual problem, yet?

Remember, everyone has to first agree to this "bomb AI" approach to avoid war. Otherwise, bombing/war starts. The equivalent for climate change would be bombing carbon producers. I don't see either agreement happening globally.


To continue contributing to climate change takes very little: you need some guy willing to spin a stick for long enough to start a fire, or to feed and protect a flock of cows. To continue contributing to AI, you need to maintain a global multi-billion supply chain with cutting edge technology that might have a four-digit bus factor.

The mechanisms that advance climate change are also grandfathered in to the point that we are struggling to conceive of a society that does not use them, which makes "stop doing all of that" a hard sell. On the other hand, every society at least has cultural memory of living without several necessary ingredients of AI.


the collective noun for cows is herd - not flock.

<https://en.wikipedia.org/wiki/List_of_animal_names>


to continue contributing to the harm caused by AI only requires that 1 person use an existing model running off their laptop to foment real-world violence or spread disinformation en masse on social media, or use it on a cheap swarm of weaponized/suicide drones, for a few examples


There's a qualitative barrier there. The AI risk people in the know are afraid of is not a flood of AI-generated articles, but something that probably can't be achieved yet with current levels of AI (and whose advent more slop generated by current-day AI won't hasten). On the other hand, modern-day greenhouse gases are exactly the greenhouse gases that climate activists are afraid of in the limit.


The AI risks people in the know are afraid of are indeed what I listed: pretty much any person, anywhere, using an existing model running off their laptop to foment real-world violence or spread disinformation en masse on social media, or use it on a cheap swarm of weaponized/suicide drones, for a few examples

That's what we're already seeing today, so we know the risk is there and has a probability of 100%. The "skynet" AI risk is far more fringe and farfetched.

So, like you said about climate change, the harm can come from 1 person. In the case of climate change, though, the risks people in the know are afraid of, aren't "some guy willing to spin a stick for long enough to start a fire, or to feed and protect a flock of cows"


Are you really suggesting drone striking data centers? What about running AI models on personal hardware? Are we going to require that everyone must have a chip installed in their computer that will report to the government illegal AI software running on the machine?

You are proposing an authoritarian nightmare.


Hence the entire point of this discussion... can we avoid the future nightmares that are coming in one way or another? Make AI = nightmare, stop people from making AI = nightmare. The option that seems to be off the table is people going "hell, making AI is a nightmare, so I won't do it" voluntarily, without the need for a police state.


> With threat of military intervention.

We've seen how hard it is to do that when the fear is nuclear proliferation. Now consider how hard it is to do that when the research can be done on any sufficiently large set of computational devices, and doesn't even need to be geographically concentrated, or even in your own country.

If I was a country wanting to continue AI research under threat of military intervention, I'd run it all in cloud providers in the country making the threat, via shell companies in countries I considered rivals.


Yes, it's as hard as you describe. But it seems that there are no easier options available - so this is the least hard thing we can do to mitigate the AI x-risk right now.


How do you intend to prevent people from carrying out general purpose computation?

Because that is what you'd need to do. You'd need to prevent the availability of any device where users can either directly run software that has not been reviewed, or that can be cheaply enough stripped of CPUs or GPUs that are capable of running un-reviewed software.

That review would need to include reviewing all software for "computational back doors". Given how often we accidentally create Turing-complete mechanisms in games or file formats where it was never the intent, preventing people from intentionally trying to sneak past a way of doing computations is a losing proposition.

There is no way of achieving this that is compatible with anything resembling a free society.


> How do you intend to prevent people from carrying out general purpose computation?

Ask MAFIAA, and Intel, and AMD, and Google, and other major tech companies, and don't forget the banks too. We are already well on our way to that future. Remember Cory Doctorow's "War on general-purpose computation"? It's here, it's happening, and we're losing it.

Therefore, out of many possible objections, this one I wouldn't put much stock in - the governments and markets are already aligned in trying to make this reality happen. Regardless of anything AI-related, generally-available general-purpose computing is on its way out.


I don't think you understand how little it takes to have Turing complete computation, and how hard it is to stop even end-users from accessing it, much less companies or a motivated nation state. The "war on general-purpose computation" is more like a "war on convenient general-purpose computation for end users who aren't motivated or skilled enough to work around it".

Are you going to ban all spreadsheets? The ability to run SQL queries? The ability to do simple regexp based search and replace? The ability for users to template mail responses and set up mail filters? All of those allow general purpose computation either directly or as part of a system where each part may seem innocuous (e.g. the simple ability to repeatedly trigger the same operation is enough to make regexp based search and replace Turing complete; the ability to pass messages between a templated mailing list system and mail filters can be Turing complete even if neither the template system nor the filter is in isolation).

The ability for developers to test their own code without having it reviewed and signed off by someone trustworthy before each and every run?

Let one little mechanism through and the whole thing is moot.

EDIT: As an illustration, here's a Turing machine using only Notepad++'s find/replace: https://github.com/0xdanelia/regex_turing_machine


>I don't think you understand how little it takes to have Turing complete computation

This is a dumb take. No one's calculator is going to implement an AGI. It will only happen in a datacenter with an ocean of H100 GPUs. This computing power does not materialize out of thin air. It can be monitored and restricted.


No one's calculator needs to. The point was to reply to the notion that the "war on general-purpose computation" has any shot of actually stopping general-purpose computation, to make a point about how hard limiting computation is in general.

> It will only happen in a datacenter with an ocean of H100 GPUs. This computing power does not materialize out of thin air. It can be monitored and restricted.

Access to H100 could perhaps be restricted. That will drive up the cost temporarily, that's all. It would not stop a nation state actor that wanted to from finding alternatives.

The computation cost required to train models of a given quality keeps dropping, and there's no reason to believe that won't continue for a long time.

But the notion you couldn't also sneak training past monitoring is based on the same flawed notion of being able to restrict general purpose computation:

It rests on beliefs about being able to recognise what can be used to do computation you do not want. And we consistently keep failing to do that for even the very simplest of cases.

The notion that you will be able to monitor which set of operations are "legitimate" and which involves someone smuggling parts of some AI training effort past you as part of, say, a complex shader is as ludicrous as the notion you will be able to stop general purpose computation.

You can drive up the cost, that is all. But if you try to do so you will kill your ability to compete at the same time.


There are physical limits to what can make an effective and efficient computational substrate. There are physical limits to how fast/cheap these GPUs can be made. It is highly doubtful that some rogue nation is going to invent some entirely unknown but equally effective computational method or substrate. Controlling the source of the known substrate is possible and effective. Most nations aren't in a position to just develop their own using purely internal resources without being noticed by effective monitoring agencies. Any nation that could plausibly do that would have to be a party to the agreement. This fatalism at the impossibility of monitoring is pure gaslighting.


Unless you're going to ban the sales of current capacity gaming level gpus and destroy all of the ones already in the market, the horse bolted a long time ago, and even if you managed to do that it'd still not be enough.

As it is, we keep seeing researchers with relatively modest funding steadily driving down the amount of compute required for equivalent quality models month by month. Couple that with steady improvements in realigning models for peanuts reducing the need to even start from scratch.

There's enough room for novel reductions in compute to keep the process of cost reducing training going for many years.

As it is, I can now personally afford to buy hardware sufficient to train a GPT3 level model from scratch. I'm well off, but I'm not that well off. There are plenty of people just on HN with magnitudes more personal wealth, and access to far more corporate wealth.

Even a developing country can afford enough resources to train something vastly larger already today.

When your premise requires fictional international monitoring agencies and fictional agreements that there's no reason to think would get off the ground in anything less than multiple years of negotiations to even create a regime that could try to limit access to compute, the notion that you would manage to get in place such a regime before various parties will have stockpiled vast quantities in preparation is wildly unrealistic.

Heck, if I see people start planning something like that, I'll personally stockpile. It'll be a good investment.

If anything is gaslighting, it's pushing the idea it's possible to stop this.


>Unless you're going to ban the sales of current capacity gaming level gpus and destroy all of the ones already in the market, the horse bolted a long time ago, and even if you managed to do that it'd still not be enough.

That's silly. We can detect the acquisition of tens of thousands of high end GPUs by a single entity. And smaller nations training their own models is beside the point. The issue isn't to stop a random nation from training its own GPT-4, it's to short circuit the possibility of training a potentially dangerous AGI, which is at least multiple game changing innovations down the line. The knowledge to train GPT-4 is already out there. The knowledge to train an AGI doesn't exist yet. We can ensure that only a few entities are even in a position to take a legitimate stab at it, and ensure that the knowledge is tightly controlled. We just have to be willing.

The precedent is nuclear arms control and the monitoring of the raw materials needed to develop deadly pathogens. The claim that this monitoring isn't possible doesn't pass the smell test.


The big cloud providers would be obligated to ensure their systems were not being used for AI training.

Yes, this would be a substantial regulatory burden.


It'd be more than a substantial regulatory burden; it's impossible. At most they'd be able to drive up the cost by forcing users to obscure it by not doing full sets of calculations at a single provider.

Any attempt even remotely close to extreme enough to have any hope of being effective would require such invasive monitoring of computations run that it'd kill their entire cloud industry.

Basically, you'll only succeed in preventing it if you make it so much more expensive that it's cheaper to do the computations elsewhere, but for a country under threat of military intervention if it's discovered what they're doing, the cost threshold for moving elsewhere might be much higher than it would be for "regular" customers.


Even if it's this costly, it's still possible, and much cheaper than the consequences of an unaligned AGI coming into existence.


Sure, you can insist on monitoring which software can be run on every cloud service, every managed server, on the servers in every colo, or on every computer owned by everyone (of any kind, including basic routers or phones, or even toys - anything with a CPU that can be bought in sufficient bulk and used for computations), and subject every piece of code allowed to run at any kind of scale to a formal review by government censors.

At which point you've destroyed your economy and created a regime so oppressive it makes China seem libertarian.

Now you need to repeat that for every country in the world, including ones which are intensely hostile to you, while everyone will be witnessing your economic collapse.

It may be theoretically possible to put in place a sufficiently oppressive regime, but it is not remotely plausible.

Even if a country somehow did this to themselves, the rest of the world would rightly see such a regime as an existential threat if they tried enforcing it on the rest of the world.


It all sounds very bad, but the truth is, we're going to get there anyway - in fact, we're half-way there, thanks to the influence of media conglomerates pushing DRM, adtech living off the surveillance economy, all the other major tech vendors seeking control of end-user devices, and the security industry that provides them all with the means to achieve their goals.

Somehow, no one is worried about the economy. Perhaps because it's the economy that's pushing this dystopian hellhole on us.


We're nowhere near there. Even on the most locked down devices we have, the reviews gatekeepers apply aren't remotely detailed enough to catch attempts at sneaking these kinds of computation past them, or even to make such attempts expensive, and at no point in history have even individuals been able to rent as much general purpose computational capacity for as little money.

Even almost all the most locked down devices available on the market today can still either be coaxed to do general purpose calculation one way or another, or mined for GPUs or CPUs.

No one is worried about the economy because none of the restrictions being pushed are even close to the level that'd be needed to stop even individuals from continuing general purpose computation, much less a nation state actor.

This is without considering the really expensive ways: Almost every application you use can either directly or in combination become a general purpose computational device. You'd need to ensure e.g. that there'd be no way to do sufficiently fast batch processing of anything that can be coaxed into suitable calculations. Any automation of spreadsheets or databases, for example. Even just general purpose querying of data. A lot of image processing. Bulk mail filtering where there's any way of conditionally triggering and controlling responses (you'd need multiple addresses, but even a filtering system like e.g. Sieve, which in isolation is not Turing complete, becomes trivially Turing complete once you allow multiple filters to pass messages between multiple accounts).

Even regexp driven search and replace, which is not in itself Turing complete, becomes Turing complete with the simple addition of a mechanism for repeatedly executing the same search and replace on its own output. Say a text editor with a "repeat last command" macro key.

And your reviewers would only need to slip up once and let something like that through, with some way of making it fast enough to be cheap enough (say, coupling the innocuous search-and-replace and the "repeat last command" with an option to run macros in a batch mode), before an adversary has a signed exploitable program to use to run computations.
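
To make that concrete, here is a minimal sketch (Python, purely illustrative; the rule set and names are my own, not taken from the linked repo) of computation done by nothing more than a fixed set of search-and-replace rules re-applied to their own output. This particular rule set only increments a binary number, but swap in a different rule set and you have arbitrary string rewriting, which is where the Turing completeness comes from once iteration is allowed:

    import re

    # Three fixed rewrite rules; "+" marks a pending carry. Looping them on
    # their own output is the "repeat last command" step described above.
    RULES = [
        (r"0\+", "1"),   # 0 absorbs the carry and becomes 1
        (r"1\+", "+0"),  # 1 turns into 0 and pushes the carry left
        (r"^\+", "1"),   # carry fell off the left edge: grow the number
    ]

    def increment(bits: str) -> str:
        s = bits + "+"               # start with a carry at the right end
        while "+" in s:              # keep re-running the same replacements
            for pattern, repl in RULES:
                s = re.sub(pattern, repl, s)
        return s

    print(increment("1011"))  # -> 1100
    print(increment("111"))   # -> 1000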


There's also the "bomb every datacenter doing AI" thing that LeCun has proposed, which is a lot more practical than this. Or perhaps a self-replicating worm that destroys AIs.


This was why I pointed out that if I was a nation state worried about this, I'd host my AI training at the cloud providers of the country making those threats... Let them bomb themselves. And once they've then destroyed their own tech industry, you can still buy a bunch of computational resources elsewhere and just not put them in easily identifiable data centres.


We put our servers in datacenters for a real reason, not just for fun though. Good luck training an AI when your inter-node communication is over the internet (this is obviously a technical problem, but the truth is we haven't solved it).


At least nuclear weapons pose an actual existential risk as opposed to AI - and even then we don't just go to war.


Nukes being a risk was suggested as a reason why the US was willing to invade Iraq for trying to get one but not North Korea for succeeding.

It's almost certainly more complex of course, but the UK called its arsenal a "deterrent" before I left, and I've heard the same as the reason why China stopped at a few hundred warheads.


Even if that were true for Iraq, which I doubt, it would have been the odd one out.

Btw., China is increasing its arsenal at the moment.


Nukes don't have a mind of their own. They're operated by people, who fortunately turned out sane enough that they can successfully threaten each other into a stable state (MAD doctrine). Still, adding more groups to the mix increases risk, which is why non-proliferation treaties are a thing, and are taken seriously.

Powerful enough AI creates whole new classes of risks, but it also magnifies all the current ones. E.g. nuclear weapons become more of an existential risk once AI is in the picture, as it could intentionally or accidentally provoke or trick us into using them.


AI could pose those risks, but it also could not - that is the difference to nuclear weapons.


AI does pose all those risks, unless we solve alignment first. Which is the whole problem.


It does not, because that kind of AI doesn't exist at the moment, and the malevolence etc. are a bunch of hypotheses about a hypothetical AI, not facts.


You keep repeating this tired argument in this thread, so just subtract the artificial element from it.

Instead imagine a non-human intelligence. Maybe it's alien carbon-based organic life. Maybe it's silicon-based life. Maybe it's based on electrons and circuits.

In this situation, what are the rules of intelligence outside of the container it executes in?

Also, every military in the world wargames on hypotheticals, because making your damned war plan after the enemy attacks is a great way to wear your enemy's flag.


How would you feel if militaries planned for fighting Egyptian gods? Just because I can imagine something doesn't mean it is real and that it needs planning for. Using effort on imaginary risks isn't free.


That's long covered already. Ever heard of the Stargate franchise? That's literally an Air Force approved exercise in fighting Ancient Egyptian gods with modern weapons :).

More seriously though, Egyptian gods are equivalent to aliens in general, and adjacent to AI, and close enough to fighting a nation that somehow made a major tech leap, so militaries absolutely plan for that.


> AI could pose those risks, but it also could not

"I'm going to build a more powerful AI; don't worry, it could end the world, but it also could not."


No-one wants a nuclear war over imaginary risks, so not happening.


We better find a way to make it happen, because right now, climate change x-risk mitigation is also not happening, for the same reason.


Climate change mitigation is actually happening. Huge investment flows are being redirected and things are being retooled; you might not see it, but don't get caught out by the doomers there.


> With threat of military intervention.

I assume you are American, right? Cause that sounds pretty American. Although I suppose you might also be Chinese or Russian; they also like that solution a lot.

Whichever of those you are, I'm sure your side is definitely the one with all the right answers, and will be a valiant and correct ruler of the world, blessing us all with your great values.


I'm Polish. My country is the one whose highest distinction on the international level was being marked by the US for preemptive glassing, to make it more difficult for the Soviets to drive their tanks west.

Which is completely irrelevant to the topic at hand anyway.


> Which is completely irrelevant to the topic at hand anyway.

It's not irrelevant. I was implying that you did not specify who is doing this military intervention that you see as a solution. What are their values, who decides the rules that the rest of the world will have to follow, with what authority, and who will do the policing (and how).


> I was implying that you did not specify who is doing this military intervention that you see as a solution. What are their values, who decides the rules that the rest of the world will have to follow, with what authority, and who will do the policing (and how)

Like nuclear non-proliferation treaties or various international bans on bioweapons, but more so. The idea is that humanity is racing full steam ahead into an AI x-risk event, so it needs to slow down, and the only way it can slow down is through an international moratorium that's signed by most nations and treated seriously, where "seriously" includes the option of signatories[0] authorizing a military strike against a facility engaged in banned AI R&D work.

In short: think of UN that works.

--

[0] - Via some international council or whatever authority would be formed as part of the treaty.


I see. I'm perhaps leaning skeptical most of the time, but it's hard to see how an international consensus can be reached on a topic that does not have the in-your-face type danger & fear that nuclear & bioweapons do.

(It's why I'm also pessimistic on other global consensus policies - like effective climate change action.)

I would be happy to be wrong on both counts though.


It gets me wondering whether some AGI Hiroshima scenario would be possible. Scare people's pants off, yes. Collapse, no.


Since when are military spooks and political opportunists better at deciding on our technological future than startups and corporations? The degree of global policing and surveillance necessary to fully prevent secret labs from working on AI would be mind-boggling. How would you ensure all government actors are sticking to the same safety standards rather than seizing power by implementing AI hastily? This problem has long been known as quis custodiet ipsos custodes - "who guards the guards themselves?".


> The degree of global policing and surveillance necessary to fully prevent secret labs from working on AI would be mind-boggling.

It's not that bad given the compute requirements for training even the basic LLMs we have today.

But yes, it's a long shot.


Training a SOTA model is expensive, but you only need to do it once, and fine-tune a thousand times for various purposes.

And it's not even that expensive when compared to the cost of building other large scale projects. How much is a dam, or a subway station? There are also corporations who would profit from making models widely available, such as chip makers - they would be commoditising the complement.

Once you have your very capable, open sourced model, that runs on phones and laptops locally, then fine-tuning is almost free.

This is not make-believe. A few recent fine-tunes of Mistral-7B, for example, are of excellent quality, and run surprisingly fast on a 5-year-old GPU - 40 T/s. I foresee a new era of grassroots empowerment and privacy.
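
For a sense of what running one of these locally looks like, here's a minimal sketch (assuming the llama-cpp-python bindings and a locally downloaded GGUF quantization of a Mistral-7B fine-tune; the file name below is hypothetical):

    from llama_cpp import Llama

    # Hypothetical local path to a ~4GB 4-bit quantization of a Mistral-7B fine-tune
    llm = Llama(
        model_path="./mistral-7b-finetune.Q4_K_M.gguf",
        n_gpu_layers=-1,   # offload all layers to the GPU if VRAM allows
        n_ctx=4096,        # context window
    )

    out = llm("Explain 'commoditise the complement' in one paragraph.",
              max_tokens=200)
    print(out["choices"][0]["text"])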

In a few years we will have more powerful phones and laptops, with specialised LLM chips, better pre-trained models and better fine-tuning datasets distilled from SOTA models of the day. We might have good enough AI on our terms.


> Once you have your very capable, open sourced model, that runs on phones and laptops locally, then fine-tuning is almost free.

Hence the idea to ban development of more capable models.

(We're really pretty lucky that LLM-based AGI might be the first type of AGI made; it seems much lower risk and lower power than some of the other possibilities)


But that's just not going to work in the real world is it?

If a country uses military force in another country, that's a declaration of war. We'll never convince every single country to ban AI research. And even if we do, you don't need many resources to do AI research. A few people and a few computers are enough.

This is not something like uranium refining.


And there it is.

The full agenda.

Blood and empire.


Only in exactly the same way that the police have the same agenda as the mafia.

International bans need treaties, with agreed standards for intervention in order to not escalate to war.

The argument is that if you're not willing to go all the way with enforcement, then you were never serious in the first place. Saying you won't go that far even if necessary to enforce the treaty is analogous to "sure murder is illegal and if you're formally accused we'll send armed cops to arrest you, but they can't shoot you if you resist because shooting you would be assault with a deadly weapon".


I don't agree.

The police enforce civil order by consent expressed through democracy. There is no analogy in international affairs. Who is it that is going to get bombed? I am thinking it will not be the NSA data centre in Utah, or any data centres owned by nuclear-armed states.


> The police enforce civil order by consent expressed through democracy. There is no analogy in international affairs.

Which is the point exactly. All agreements in international affairs are ultimately backed by threats of violence. Most negotiations don't go as far as explicitly mentioning it, because no one really wants to go there, but the threat is always there in the background, implied.


All policing is to a certain degree by consent, even in dictatorships which use democracy only as a fig-leaf. International law likewise.

Just as the criminal justice system is a deterrent against crime despite imperfections, so are treaties, international courts, sanctions, and warfare.

> I am thinking it will not be the NSA data centre in Utah, or any data centres owned by nuclear-armed states.

For now, this would indeed seem unlikely. But so did the fall of the British Empire and later the Soviet Union before they happened.


The problem with that is that the entity supposed to "bomb it to rubble" and the entity pushing for AI development happen to be the same entity.

Maybe the confusion about why people can't see this clearly stems from the fact that tech development in the US has mostly been done under the umbrella of private enterprise.

But if one has a look at companies like Palantir, it becomes quite obvious what the main driver behind AI development is.


So the next War will be the Good AI against the Bad AI and humans as collateral damage?


No, that's the best outcome of the bad scenario that the above is meant to avoid.

How do people jump from "we need an actual international ban on this" to "oh so this is arguing for an arms race"? It's like the polar opposite of what's being said.


This underscores the crux of the issue. Our collective response to curbing emissions has been lackluster, largely due to the fact that the argument for it being beneficial hasn't sufficiently swayed a significant number of people. The case asserting that AI R&D poses an existential risk will be even harder to make. It's hard to enforce a ban when a large part of the population, and perhaps entire countries, do not believe the ban is justified. This contrasts with the near-universal agreement surrounding the prohibition of chemical and biological weapons, or the prevention of nuclear weapons proliferation, where the consensus is considerably stronger.


I think you are really underestimating how much progress has been made with renewable energy. Taking a quick look at CAISO, the California grid is currently 65% powered by renewables (mostly solar). This is a bit higher than average, but it certainly wouldn't have been nearly as high even 5 years ago, much less 25 years ago.



