I don't think you understand how little it takes to have Turing complete computation, and how hard it is to stop even end-users from accessing it, much less companies or a motivated nation state. The "war on general-purpose computation" is more like a "war on convenient general-purpose computation for end users who aren't motivated or skilled enough to work around it".
Are you going to ban all spreadsheets? The ability to run SQL queries? The ability to do simple regexp-based search and replace? The ability for users to template mail responses and set up mail filters? All of those allow general-purpose computation, either directly or as part of a system where each part may seem innocuous (e.g. the simple ability to repeatedly trigger the same operation is enough to make regexp-based search and replace Turing complete; the ability to pass messages between a templated mailing list system and mail filters can be Turing complete even if neither the template system nor the filter is in isolation).
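To make the regex point concrete, here's a toy sketch in Python rather than an editor (the rules are mine, purely for illustration): repeatedly applying plain search-and-replace rules is enough to do real computation. This one does binary increment via carry propagation; a full Turing-machine construction works the same way, just with more rules.

```python
import re

# Each rule is just a regex search-and-replace. Applying them repeatedly
# until no rule fires computes the successor of a binary number.
RULES = [
    (r"0c", "1"),    # a 0 bit absorbs the carry
    (r"1c", "c0"),   # a 1 bit flips to 0 and the carry moves left
    (r"\Ac", "1"),   # carry fell off the left edge: grow the number
]

def increment(bits: str) -> str:
    s = bits + "c"   # inject a carry marker at the right end
    changed = True
    while changed:
        changed = False
        for pattern, repl in RULES:
            new = re.sub(pattern, repl, s, count=1)
            if new != s:
                s, changed = new, True
                break
    return s

print(increment("1011"))  # 1100
print(increment("111"))   # 1000
```

The only "control flow" is "keep hitting replace until nothing changes", which is exactly the capability the argument says you can't take away without breaking ordinary tools.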
The ability for developers to test their own code without having it reviewed and signed off by someone trustworthy before each and every run?
Let one little mechanism through and the whole thing is moot.
>I don't think you understand how little it takes to have Turing complete computation
This is a dumb take. No one's calculator is going to implement an AGI. It will only happen in a datacenter with an ocean of H100 GPUs. This computing power does not materialize out of thin air. It can be monitored and restricted.
No one's calculator needs to. The point was to reply to the notion that the "war on general-purpose computation" has any shot of actually stopping general-purpose computation, and to show how hard limiting computation is in general.
> It will only happen in a datacenter with an ocean of H100 GPUs. This computing power does not materialize out of thin air. It can be monitored and restricted.
Access to H100s could perhaps be restricted. That would drive up the cost temporarily, that's all. It would not stop a nation state actor that wanted to from finding alternatives.
The computation cost required to train models of a given quality keeps dropping, and there's no reason to believe that won't continue for a long time.
But the notion that you couldn't also sneak training past monitoring rests on the same flawed assumption as restricting general-purpose computation:
It rests on beliefs about being able to recognise what can be used to do computation you do not want. And we consistently keep failing to do that for even the very simplest of cases.
The notion that you will be able to monitor which sets of operations are "legitimate" and which involve someone smuggling parts of some AI training effort past you as part of, say, a complex shader is as ludicrous as the notion that you will be able to stop general-purpose computation.
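As a toy illustration of why that's hard (the setup and names here are mine, purely illustrative): a routine that presents itself as a per-pixel "blend two textures" shader is, operation for operation, a matrix multiply, which is the workhorse of neural-network training. A monitor looking at the workload sees ordinary graphics arithmetic.

```python
import numpy as np

# An innocuous-looking "post-processing filter": for each output pixel
# (i, j), blend row i of texture A against column j of texture B.
# Mathematically this is exactly C = A @ B, the core training operation.
def pixel_shader(tex_a: np.ndarray, tex_b: np.ndarray) -> np.ndarray:
    h, k = tex_a.shape
    k2, w = tex_b.shape
    assert k == k2
    out = np.empty((h, w), dtype=tex_a.dtype)
    for i in range(h):           # one "fragment" per output pixel
        for j in range(w):
            out[i, j] = np.dot(tex_a[i, :], tex_b[:, j])
    return out

a = np.random.rand(4, 3)
b = np.random.rand(3, 5)
assert np.allclose(pixel_shader(a, b), a @ b)
```

Nothing about the inner loop distinguishes "rendering" from "training" at the level an external auditor could observe.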
You can drive up the cost, that is all. But if you try to do so you will kill your ability to compete at the same time.
There are physical limits to what can make an effective and efficient computational substrate. There are physical limits to how fast/cheap these GPUs can be made. It is highly doubtful that some rogue nation is going to invent some entirely unknown but equally effective computational method or substrate. Controlling the source of the known substrate is possible and effective. Most nations aren't in a position to just develop their own using purely internal resources without being noticed by effective monitoring agencies. Any nation that could plausibly do that would have to be a party to the agreement. This fatalism at the impossibility of monitoring is pure gaslighting.
Unless you're going to ban the sale of current-capacity gaming-level GPUs and destroy all of the ones already on the market, the horse bolted a long time ago, and even if you managed to do that, it still wouldn't be enough.
As it is, we keep seeing researchers with relatively modest funding steadily driving down the amount of compute required for equivalent-quality models month by month. Couple that with steady improvements in fine-tuning and realigning existing models for peanuts, reducing the need to even start from scratch.
There's enough room for novel reductions in compute to keep the process of cost reducing training going for many years.
As it is, I can now personally afford to buy hardware sufficient to train a GPT-3 level model from scratch. I'm well off, but I'm not that well off. There are plenty of people just on HN with orders of magnitude more personal wealth, and access to far more corporate wealth.
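For a rough sense of scale, a back-of-envelope sketch. Every figure here is an assumption: the ~3.1e23 FLOPs number is a commonly cited estimate for GPT-3-scale training compute, and the cluster size, per-GPU throughput, and utilization are illustrative placeholders, not sourced specs.

```python
# Back-of-envelope estimate of wall-clock time to train a GPT-3-scale
# model on a privately affordable cluster. All numbers are assumptions.
TRAIN_FLOPS = 3.1e23      # commonly cited GPT-3-scale training budget
GPUS = 64                 # assumed cluster size
FLOPS_PER_GPU = 1e15      # assumed peak per accelerator (~1 PFLOP/s class)
UTILIZATION = 0.35        # assumed fraction of peak actually sustained

seconds = TRAIN_FLOPS / (GPUS * FLOPS_PER_GPU * UTILIZATION)
days = seconds / 86400
print(f"~{days:.0f} days on {GPUS} GPUs")
```

Under these assumptions the run finishes in a matter of months on a cluster costing low single-digit millions, which is individual-rich-person money, not nation-state money.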
Even a developing country can afford enough resources to train something vastly larger already today.
Your premise requires fictional international monitoring agencies and fictional agreements that there's no reason to think would get off the ground in anything less than multiple years of negotiations, just to create a regime that could try to limit access to compute. The notion that you would get such a regime in place before various parties had stockpiled vast quantities in preparation is wildly unrealistic.
Heck, if I see people start planning something like that, I'll personally stockpile. It'll be a good investment.
If anything is gaslighting, it's pushing the idea it's possible to stop this.
>Unless you're going to ban the sales of current capacity gaming level gpus and destroy all of the ones already in the market, the horse bolted a long time ago, and even if you managed to do that it'd still not be enough.
That's silly. We can detect the acquisition of tens of thousands of high-end GPUs by a single entity. And smaller nations training their own models is beside the point. The issue isn't to stop a random nation from training its own GPT-4, it's to short-circuit the possibility of training a potentially dangerous AGI, which is at least multiple game-changing innovations down the line. The knowledge to train GPT-4 is already out there. The knowledge to train an AGI doesn't exist yet. We can ensure that only a few entities are even in a position to take a legitimate stab at it, and ensure that the knowledge is tightly controlled. We just have to be willing.
The precedent is nuclear arms control and the monitoring of the raw materials needed to develop deadly pathogens. The claim that this monitoring isn't possible doesn't pass the smell test.
Are you going to ban all spreadsheets? The ability to run SQL queries? The ability to do simple regexp-based search and replace? The ability for users to template mail responses and set up mail filters? All of those allow general-purpose computation, either directly or as part of a system where each part may seem innocuous (e.g. the simple ability to repeatedly trigger the same operation is enough to make regexp-based search and replace Turing complete; the ability to pass messages between a templated mailing list system and mail filters can be Turing complete even if neither the template system nor the filter is in isolation).
The ability for developers to test their own code without having it reviewed and signed off by someone trustworthy before each and every run?
Let one little mechanism through and the whole thing is moot.
EDIT: As an illustration, here's a Turing machine using only Notepad++'s find/replace: https://github.com/0xdanelia/regex_turing_machine