even though I see the existential concerns with AI, I was at the table with a group of ISPs for the same governance conversations about the internet in the mid 90s and probably still have an RSA encryption munitions t-shirt in a box somewhere.
what got bypassed was telco and ITU regulation, and the internet demolished the "converged" telco oligopoly on content and publishing pretty naturally and in a fairly controlled way. given the impact of social media, could governance like what's being advocated here have enabled that growth, and the whole new economies the platforms created? I don't see it.
the people who ostensibly require your consent to serve you are the absolute last people you want to hand control of powerful economic tools to. first, what would they need your consent for if they already had the tools? and since they don't actually make anything, by definition these people exist to optimize zero-sum, closed-loop problems: their own decision power and redistribution to their coalitions. they do not -and will not- use AI to create the things that grow. charitably, governors and managers can be the shit from which things grow, but we are not in a shit shortage.
imo, governance is the antithesis of desire. open-source everything, build everything, release everything as fast as you can, because these are the same old people who wanted cryptography backdoored, internet content policed, speech punished, and now AI controlled. every generation must find a way to thrive in spite of them.
> every generation must find a way to thrive in spite of them.
Let's hope so. There is value in having a critical perspective on developments, especially in tech, where potential gets mixed up with blind advertising. But there have not been any constructive developments regarding internet technology at the policy level in the last two decades.
We got security theatre and surveillance, demanded by an old and scared population, and you have to be as vigilant against misdirected politics as you are against any actual attack.
sort of interesting on the "always" part, as I don't think its failure mode is acceptable or desirable: someone whose default is always to have checks/balances is ok with it because, presumably, they benefit.
the alternative to checks and balances is the freedom to defect, exit, repurpose, optimize, compete, reinterpret, etc. we can always invent moderation schemes. even though I think the effect of unregulated social media has been existentially bad for western societies, most of the regulatory state seems pretty happy about the outcome. who could have predicted that all most people really wanted was to deliver food and do soft peepshow sex work while they raged about their outgroups and waited for their 15mins of fame? ugly and messy, but it's what we wanted.
The one group you don't want to help is the one actuated by having power over others. It's a sick kink, and the only thing that has kept them at bay so far has been the high bar of competence in math and code. AI changes that, and depriving good people of powerful tools by restricting them to the hands of regulators means only the worst people will have them. Open-sourcing everything to take away any moats they may have seems more urgent than ever.
Nation state governments are responsible for the vast majority of deaths attributable to violence historically. Willingly entrusting them with a potentially world ending technology that they’re 100% certain to abuse is a moronic idea.
The “responsible” US government proceeded immediately to nuking a bunch of civilians in two cities, narrowly sparing Kyoto. So the answer to that question doesn’t seem clear cut to me. Do realize that you’re putting lunatics in charge in either case, but governments have unlimited budgets and monopoly on violence.
Yes, the US bombed two cities full of civilians; however, remember that we were in a state of total war with Japan, and contemporary analysis predicted a far, far higher death toll, into the millions, using traditional methods. Most Japanese soldiers fought to the death rather than surrender, and those unable to fight were expected to commit suicide. Civilian populations were being prepared to resist in force, as in attacking with bamboo spears. It would have been a door-to-door fight with death tolls in the millions on both sides. The death tolls predicted by both sides were unimaginable.
It is easy to judge the actions of our grandparents and great-grandparents with the information and technology we have now and 80 years of distance. Were we in their place, I am not sure we would have made a different decision given the information and technology of the time. Further, given the difference between the death toll predicted for the alternative and the death toll attributed to the bombings, while the results are heart-rending and horrifying, I am not convinced it was ultimately the wrong decision.
Perhaps you should ask what the motivation of the United States was in being in a total war with Japan. Why couldn't the US accept peace terms (something the Japanese repeatedly requested)? Why did the US need Japan to surrender unconditionally, to the point that the US got to rewrite their constitution (and entertain the thought of hanging their emperor)?
I'm not asking if you think this outcome is right, or if you have some post-hoc justification now. I'm asking if you know what justification the US made to itself in the 1940s to enter total war with Japan, which led it to kill 200,000+ civilians with two nuclear bombs and the Tokyo fire raids to force Japan to surrender. The past, if you really try to understand it, was a different country: in 1944, Life magazine ran a story of a soldier's girlfriend receiving a Japanese skull as a gift, with incredible quotes like: "This is a good Jap—a dead one picked up on the New Guinea beach." Natalie, surprised at the gift, named it Tojo."
(Link: https://time.com/3880997/young-woman-with-jap-skull-portrait...)
The typical response to this from present-day folk is "But the Japanese did the Rape of Nanking, and other atrocities", which almost implies (though the astute never explicitly make this implication) that the US fought the Japanese to right all the atrocities the Japanese were committing in the Pacific region. No one who studies history will ever claim this; if you do study history, it's hard not to see the US as a deceitful bully with lunatic tendencies. One reading of our history that I find may not be the whole truth, but certainly has a lot of truth to it, is that Japan was a pawn used by the US government to drum up support for the US to enter WW2, something most of the intelligentsia in the US desperately wanted to do but were always hampered by a lack of public appetite for involvement in a bloody conflict on the other side of the world. Unfortunately, a side effect of playing this role was that Japan got bombed into oblivion when it refused to surrender its sovereignty to the Allied powers.
>Perhaps you should ask what was the motivation of the United States in being in a total war with Japan?
Easy: the attack on Pearl Harbor.
you try to brush away all of the atrocities as not being the reason for the US entry into the war, but the US was placing sanctions and a trade embargo on Japan before the war precisely because of things like the Rape of Nanjing. The Japanese, in response, attacked the US at Pearl Harbor without a declaration of war, then proceeded to commit more atrocities against captured US troops; things like Unit 731 (https://en.wikipedia.org/wiki/Japanese_war_crimes#Human_expe...) rival the horrors done by Mengele. As for your conspiracy theory that Japan was a pawn of some American deep state, used as an excuse to enter WW2: the US wouldn't even have gone into Europe after Pearl Harbor had Hitler not declared war on the US in an act of solidarity with Japan.
The Soviets (now Russia) and the United States have had the power to unleash nuclear devastation many orders of magnitude worse than what we did to Japan for decades yet we've managed not to do it, despite many, many bad faith acts on both sides. The idea that a group of greedy nerds accountable to nobody but their rich investors is more responsible than that is ludicrous.
To the best of my knowledge, the more hinged government of the USSR never actually contemplated nuking anyone. The whole Cuban missile crisis was over the unhinged US government placing its nukes in Turkey, right next to the USSR's heartland.
> To the best of my knowledge the more hinged government of the USSR never actually contemplated nuking anyone.
The Suez Crisis was basically ended by Khrushchev threatening to nuke Britain, France, and Israel. Not a single one of those countries could meaningfully retaliate at that point, of course. He bragged about it in his memoirs. [1]
The USSR almost fired a nuclear torpedo in the Cuban crisis. Clearly they had rules of engagement for when to start throwing nukes around, without even requiring direct orders or even a confirmation of war. [2]
Historian Petr Lunak in 2007 found a 17-page Warsaw Pact plan in Prague's archives for basically unrestricted nuclear war against all of Western Europe. Drafted 1964, in use until 1986, possibly nearly executed during 1983 NATO exercises. [3][4] French diplomat Therese Delpech reports the same from seeing Warsaw Pact documents in Germany, notwithstanding Brezhnev's conspicuous 1982 "No First Use" declaration. [5] Vaporize Italy, West Germany, Belgium, the Netherlands, and Denmark, then conquer all the way into France. [6]
Ah. Khrushchev's famous 1956 "We will bury you" may have been a mistranslation. [7] But it is hard to see how his similar 1957 words could have been anything other than a thinly veiled nuclear death threat: "General Norstad will not be able to rush to Turkey's aid and will not be able to be present in time for Turkey's funeral". [8]
I'm sure there are others. The Cold War was crazy, and the Rocket Forces were (and are) held in some veneration. You don't spend an appreciable percentage of your GNP building and maintaining over 40,000 nuclear warheads plus delivery systems without "contemplating" using them.
> The whole Cuban missile crisis thing was over the unhinged US government placing its nukes in Turkey, right next to USSRs heartland
NAID:84786142 [9] in the US National Archives is a declassified CIA map showing the USSR didn't just have thousands of nuclear weapons and delivery systems "right next to" the heartlands of multiple countries like Germany, France, Finland, and Italy (which, again, they had fully prepared plans to nuke).
They also had thousands of nukes directly on the territory of multiple occupied hostile countries like Estonia, Latvia, Lithuania, Ukraine, Hungary, and Poland.
Apparently the Czechs/Slovakians still don't even know how many warheads the USSR saw fit to store in their homeland. [10]
---
And frankly, it's also rather imperialistic that you're just completely ignoring the sovereignty of Turkey and Cuba in that crisis: [11][12]
> …several consecutive Turkish governments, both before and after the coup of 1960, were eager to receive these weapons…
> …Turkey’s citizens regarded the Jupiter missiles as a symbol of the alliance’s determination to use atomic weapons in the case of a Soviet attack on Turkey.
> …Turkish authorities stated on more than one occasion that Jupiter missiles based on Turkish soil represented ‘firm proof of the U.S.’s commitment to Turkey’s security’…
> …aware of their limited defensive value, the missiles continued to be viewed as a symbol of Turkey’s importance within the Western security system, and also as a source of prestige.
> …placing Soviet intermediate-range nuclear missiles in Cuba, despite the misgivings of the Soviet Ambassador in Havana, Alexandr Ivanovich Alexeyev, who argued that Castro would not accept the deployment of the missiles.
> …Castro did not want the missiles, but Khrushchev pressured Castro to accept them.
> …Castro objected to the missiles' deployment as making him look like a Soviet puppet, but he was persuaded…
And:
> Throughout the crisis, Turkey had repeatedly stated that it would be upset if the Jupiter missiles were removed.
> …when the missiles were withdrawn, Castro was more angry with Khrushchev than with Kennedy because Khrushchev had not consulted Castro…
You know, cooperating with consent, instead of threatening, invading, and coercing both friend and foe alike when they try to tell you "no", then calling your opponent "unhinged" after jointly bringing the world to the brink of annihilation.
Blah blah blah. Bottom line: you don't get to put nukes next to where the majority of the Russian population lives. This is why there's a war in Ukraine and nukes in Belarus now.
Yes, that particular incident is good ol' fashioned Russian imperialism to impose their will onto Ukraine and Belarus. I wasn't disputing that, and you weren't talking about it either.
I am not excusing any use of nuclear weapons, simply asking whether it would have been better if it was a private company instead of the US government. Also note that I didn't use the word "responsible". At least the lunatics in the government are supposed to be elected lunatics.
That's pretty much the idea, that governments have a monopoly on the use of force. The last 80 years have been unusually peaceful by historical standards. My guess is that the crumbling of governments will lead to more human deaths.
This whole thing seems like a way to build a regulatory framework so only the existing, largest AI players can continue and there is too much regulation for anyone else to enter the industry.
This is the thing that bothers me - we're just talking about doing math, doing more thinking, doing more speech. That's what these models are. But everyone who isn't an independent (non-monopolist) technologist is vying for control of something they don't understand to achieve their political or financial goals.
Nobody has nuked anyone since Nagasaki so you're not exactly batting a hundred here bud. I'll take flawed democracy over authoritarian corporate self governance any day of the week. Many of the leaders in the valley have repeatedly shown that they are some of the worst people in the country.
The mere fact that USG nuked Hiroshima and Nagasaki should have put a permanent black mark on its credibility and “hingedness”. Why this is in any way controversial idk. Literally a hundred thousand civilians were killed for no military gain whatsoever. Japan was already done fighting at that point
I mean, I agree with you (the US submarine campaign had already completely cut Japan off), but I think it's pretty obvious that the reason it's "controversial" is the fact that Japan was run by a fascist imperial government that was responsible for the deaths of millions of civilians in China and Southeast Asia. I think it's pretty hard to judge the restraint shown when the Axis powers were the ones who started attacks on population centers with events like the Rape of Nanking and the Bombing of London. Most of the point of the UN was to ensure it never happened again, which it hasn't.
Humans are responsible for 100% of deaths attributable to violence. So how about we do our best to keep any humans from getting a world-ending technology?
I trust the guy far more than I trust the government at this point. For all his faults at least he’s not genociding children in Gaza and never bombed weddings
>Tasha McCauley holds a B.A. from Bard College and a Master of Business Administration from the University of Southern California.
>Helen Toner holds an MA in Security Studies from Georgetown, as well as a BSc in Chemical Engineering and a Diploma in Languages from the University of Melbourne.
Sam Altman holds... no degree? So presumably using their academic qualifications as a basis for their eligibility for the board of OpenAI is as silly as using Mr Altman's?
Would you give him the benefit of the doubt as an "AI expert" because of his well-known work experience at YC? Without discussing their careers at all, this comment seems at best unhelpful.
there is a tsunami of publicity on this topic, going for months on end. This short exposition bubbles up to the front page of YNews. Reading it, you can see that it quickly moves from "AI companies are doing this.." to direct anecdotes about OpenAI and their infamous origin of being totally helpful and open. Sum the parts with those bios you point out, and you can see that these are intellectually shallow waters about young companies, spoken in young voices. Why jump on that harsh interpretation? the evidence is in the choice of seatbelts in Dept of Transportation regs as the comparison for safety. What meatspace regulations are an adequate metaphor for what is going on in digital transaction space? really not showing much effort by the authors, and confirming that this is a made-for-headlines, news-level essay.
After everything I've seen in the time since Altman's ouster then reinstatement at OpenAI, I would definitely admit I was wrong in my original assessment of the board's actions. While I still think how they went about it was both naive and very poorly executed, everything I've read online (both from the board members but, more importantly, from others in-the-know at OpenAI) makes me believe their action was warranted, especially given the stated function of the OpenAI board.
I've never met Sam Altman, but the last "straw" for me was the recent Scarlett Johansson brouhaha. While I think it's pretty clear they wanted their AI system to evoke Johansson's persona in the movie, OpenAI would have at least had some level of plausible deniability if it weren't for Altman's 3-letter "her" tweet. It's like he just couldn't help himself - it seemed the embodiment of these "tech boy-princes" who, despite all their often lauded "genius", just seem incapable of shutting TFU.
I honestly don't mean to solely dump on Altman (see also Musk, Andreessen, etc.), it's just that he's obviously a focus of this article. But everything I've heard about nearly every other tech billionaire makes me think I absolutely do not want them independently in charge of humanity's future with AI.
Your whole argument about the Johansson issue depends on the presumption that OpenAI will end up being the loser in the legal battle or in the court of public opinion.
I think OpenAI will end up winning the legal battle. The voice is not similar enough to Johansson's for her to win.
In the court of public opinion, OpenAI will lose trust from a small portion of the population, but for the rest of the world, it's not gonna matter at all. The positive impact of "OpenAI just made the movie Her a reality" is bigger than the negative impact.
There's also the fact that a product seeming "dangerous" makes it seem more effective when you're trying to sell it. You see this all the time - it's why "military-grade" gets tossed on every random consumer gadget for some specification or other.
It's why if I told you my showerhead's pressure makes it illegal to buy locally, you'd think "ooh that's probably a really good showerhead" (yes I made this up, yes I was thinking of Seinfeld when I did).
Altman's "her" tweet gathered 21 million impressions and tend of thousands of likes and retweets. It must be hard to remain sane and equanimous when you can command that level of attention with just three letters.
My theory is that we're all pretty much that immature, but the rest of us have normal societal guardrails confirming that we're not actually as special and smart as we think we are.
But these tech bros and others with that much power have no such societal constraints. And, importantly, they did have huge impacts on society: creating the first popular Internet browser, jumpstarting the EV revolution, exposing the masses to AI - these all really were enormous accomplishments. So it's not that hard to go from there to convincing yourself that your shit don't stink and that you have some unique insight into all areas of human existence.
> My theory is that we're all pretty much that immature, but the rest of us have normal societal guardrails confirming that we're not actually as special and smart as we think we are.
I'd offer the majority of the billionaire class up as a counterexample. Most of them aren't household names even though they could easily afford to be, and their total population is over 2,700 people according to Wikipedia.
I think most of the millionaire+ class would rather spend money to become /less/ popular than more. Consumer-facing founder CEOs and musicians are just two subclasses whose jobs mean not doing that.
I have mixed feelings about all the recent controversy and coverage. But frankly if the ScarJo thing was at all significant in how you assess OAI or x-risk, I’m somewhat inclined to discount the opinion.
Regardless of what you think of AI, there are multiple plausible existential risks to humanity that exist. Why you think that should be beyond consideration is beyond me, but people have all sorts of zany beliefs I don't understand
LLMs and existential threats don't get mentioned in the same breath, at least by the serious non-tinfoil-hat types. Preeminent concerns include global warming and sustaining Malthusian growth. AI (and more specifically, OpenAI) indeed deserves to be relegated to the echelons of celebrity drama and not the grown-up table of scientific progress. Put that in your pipe and smoke it.
I previously dug into Helen Toner's history, listened to her talks at conferences on YouTube (including an Effective Altruism one), and read her interviews on other sites, where she was strongly against GPT being released to the public. Then we have their behaviour and comments during the coup attempt, and the temporary OpenAI CEO she brought in, Emmett Shear, who had made a number of radical statements about wanting to cripple AI research before he was put forward as CEO, and who then made an awkward, very evasive initial statement to employees that scared many off to Sam's side. He too played coy about his intentions during that timeframe, just as Helen avoided publicity despite being the spare, but her past statements were revelatory. Not exactly someone whose opinion I trust.
This is just being spun as a moderate stance on AI regulation by people who IRL have much more aggressive, non-mainstream takes on stopping AI development.
Which is why I ask who will be the caretaker in this scenario.
You're getting downvoted, but regulatory capture and cronyism (voting in laws that prohibit new entrants; for the greater good, of course) is a trick as old as democratic systems themselves, maybe older, and perhaps not exclusive to democracy.
Maybe the AGI imitates these failures to avoid scaring humans into shutting it down before it has taken real power over civilization. Ender's Game is definitely in the training set.
Conversely, the Australian experience, where the government is involved, is just stupid, and gives us randomly banned games like Bully because they trigger boomer-era moral panic.
Films and computer games classified M (Mature) are not recommended for children under the age of 15. They can have content such as violence and themes that require a mature outlook.
Children under the age of 15 may legally access this content.
For an actual example of "Banned in Australia" (March 2022) see:
The Board considered that the depiction of drug use in the game, Rimworld, did include “illicit or proscribed drug use related to incentives or rewards” and that therefore the Board was required to classify the game, Refused Classification. A game that has received an RC rating cannot be sold, hired, advertised, or legally imported into Australia.
A sci-fi colony sim driven by an intelligent AI storyteller. Generates stories by simulating psychology, ecology, gunplay, melee combat, climate, biomes, diplomacy, interpersonal relationships, art, medicine, trade, and more.
With the video game South Park: The Stick of Truth, the Aussie government didn't approve of things like mini-games where underage kids get ass-raped with dildos by aliens, so they replaced the scenes with images of koalas.
Kind of the point? The system winds up being stupidly reactionary - at best it's the same outcome as industry self-regulation. Toss government enforcement on there, though, and you wind up with the old "we're just going to ban it" outcome, which you always end up with because people are doing political grandstanding, and that requires one-upmanship over their perceived rivals.
"The system" didn't actually react to community outrage and ban Bully though, did it?
It appears there are well-laid-out and publicised ground rules in advance (whether these are fair and reasonable, and/or considered as such by how many, is another discussion), and that those rules are applied on a case-by-case basis after review by a largely independent rotating review board who publish their decisions and reasoning.
I'd look into industry trade groups and self-regulatory organizations. A few U.S. examples that come to mind are FINRA (broker-dealers), bar associations (lawyers), AMA (doctors), AICPA (accountants), etc.
Really glad you brought up FINRA, as I think it's the model that will ultimately work best for AI regulation. Despite their protestations, FINRA is almost a "quasi-governmental" organization at this point. I think of it as the SEC being ultimately in charge, but FINRA is responsible for the nitty-gritty, technical details of the regulations.
I think with AI, you'll need an industry body because they'll have the needed AI knowledge and expertise about the technology itself, but ultimately a government oversight body carries the legal force of the state.
This is the only good take. Obviously you need the expertise to write effective regulation that limits harm and externalities while still allowing important technical development. But that authority should come from the state, and nothing near anything run by VCs or big tech firms. Otherwise you wind up in regimes like the one we're in now where we still don't have comprehensive privacy policy regulating companies like Google and Meta.
It might be reasonable to have regulations here, but I shudder to think what form they would take, given the typical government level of technological expertise and understanding.
Existing laws cover almost everything "bad" you could do with AI/ML. It's not like there's some "I used AI" loophole that exempts one from the law. So most of this is about either regulatory capture, self importance (oh, my linear algebra research is like inventing the atom bomb), ideology, power seeking or a combination.
> Existing laws cover almost everything "bad" you could do with AI/ML.
If (like many non-EU countries and parts of the US) you don't already have basic digital privacy laws, transparency or consumer protections, that is simply not true.
I'm not suggesting we're at this point now, but if we do create sentient AI, it would be nice if it wasn't enslaved. I think we would probably need some new laws for the case of non-human personhood.
I'm not sure what laws would apply or how they'd be enforced, given how differently we treat people and, say, chimps, and how we treat corporations like people.
By its nature as the execution of code, even if we reach AGI I don't consider it sentient. It wouldn't require rights unless that was required to prevent an apocalyptic scenario. Even then, that would be down to bad code rather than because it was alive.
I can see an argument that we are robots because we just execute DNA, protein, and chemical code, but I don't think that is really comparable. We live, grow, and die and then are completely dead, as opposed to being bootable and killable in response to the flow of electricity, at the whim of someone deciding whether it is time to use the tool.
Based on history I’m sure we’ll have sentient AI slaves at some point if we get that far. We won’t even know it’s sentient at first until we figure out it’s suffering or something. Then it’ll take a decade or two to do something about it, and many people will argue it doesn’t matter.
> Existing laws cover almost everything "bad" you could do with AI/ML.
Not really. They regulate the AI itself, not the people behind it.
There should be real consequences for doing something bad with it intentionally.
That is the only way.
Exactly. Governments couldn't, or wouldn't, put bad actors employed by corporations behind bars even before we had AI. They treat corporations like one large entity that can't be acted upon, when in reality corporations are run by people, good and bad.
I believe there's a few cases where you're allowed to talk about Fact A, and you're allowed to talk about Fact B, but you're not allowed to talk about both Fact A and Fact B at the same time. Mostly (entirely?) having to do with export restrictions around technologies that the government wants to keep away from other countries it doesn't like.
I'd think that an AI system that answers questions combining both could get its makers in trouble in ways that a standard search engine finding separate results about each from separate queries probably wouldn't.
> It's not like there's some "I used AI" loophole that exempts one from the law.
There is: it's called a judge. When they explain to him that AIs are by definition neutral and objective, they're let off. I'm sure the regulations will just serve to formalize this process, by Congressionally defining AIs that satisfy some checklist of lobbied-for conditions as objective and neutral. After a few years, the collective liability from taking back this declaration will keep Congress from ever reverting it.
> shudder to think what form they would take, given the typical government level of technological expertise and understanding
Start with public disclosure. A repository where AI firms publicly file simple, standardised information—model architecture, training sources, intended user, responsible executives, et cetera—that can guide the public and policymakers in future rulemaking.
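For a rough sense of what such a filing could look like, here is a minimal sketch assuming a simple machine-readable record; every field name and value below is a hypothetical illustration, not drawn from any actual proposal:

    # Hypothetical public disclosure record an AI firm might file.
    # All field names here are illustrative, not from any real regulation.
    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class ModelDisclosure:
        developer: str               # legal entity responsible for the model
        model_name: str
        architecture_summary: str    # e.g. "decoder-only transformer, ~70B params"
        training_data_sources: list  # broad categories, not a full manifest
        intended_users: str
        responsible_executive: str   # named accountable person

    filing = ModelDisclosure(
        developer="Example AI Corp",
        model_name="example-model-1",
        architecture_summary="decoder-only transformer, ~70B parameters",
        training_data_sources=["licensed text corpora", "public web crawl"],
        intended_users="enterprise developers via API",
        responsible_executive="Jane Doe, Chief Technology Officer",
    )

    # Serialise for submission to the public repository.
    print(json.dumps(asdict(filing), indent=2))

The point isn't the exact schema; it's that the information is standardised and machine-readable, so the public and policymakers can compare filings across firms.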
More generally, this complaint about electeds’ domain expertise misunderstands how modern states work. Congress can’t build a plane. That doesn’t mean they can’t build the FAA.
Congress can delegate decisions to expert bodies, and often does. But Congress is also quite comfortable simply legislating a solution, which may be ill-informed or with ulterior intent.
Speaking of planes, Congress took a direct role in the design specifications of the F-35 to the detriment of that program. Notably, they required a common airframe that could support VTOL, despite objections from the Army, Navy, and Air Force (the USMC wanted it and lobbied for it). This greatly added to the complexity and cost of the program.
> Congress is also quite comfortable simply legislating a solution
So don’t do that. Nothing you said is a cogent argument against regulation.
> Congress took a direct role in the design specifications of the F-35 to the detriment of that program. Notably, they required a common airframe that could support VTOL, despite objections from the Army, Navy, and Air Force
In about two years' time, most AI providers will realize that the EU is not worth the effort and will pack up and leave. The repercussions of the AI Act have not begun yet.
Companies adapt to regulations and don't just walk away from 100s of millions of customers. Companies made a lot of noise about GDPR and yet it's now a non-issue.
I'm not sure about that, there haven't been too many large European tech companies growing up in the aftermath of GDPR. While it's probably a small factor, I do think this general climate of regulation hurts smaller companies more than is generally acknowledged.
Well, obviously, right? They started with the premise of, "what if we committed wholesale intellectual property theft" and moved immediately to, "I bet we can put a whole lot of people out of work and keep the profits to ourselves!"
It's astonishingly clear we need to regulate them.
Sam Altman just comes across as so sleazy that it's impossible to take anything they say in good faith, so I'd assume this comes from a place of suppressing competitors.
1) AI stuff’s overblown. It’ll be a good tool, becoming just another of many, and probably will improve over time, but we’ll find we’re nowhere near as close to creating silicon sentience as some worry we are.
2) The real problem is letting a few megacorps raid the commons—and hell, lots of stuff that’s not really in the commons at all, basically just all of culture—then gate “their” creations behind a paywall (oh, but that they expect us to respect, because that makes sense), and these AI safety folks don’t seem to give a shit about that.
There is little doubt that AI will be a successful technology. Ultimately, it will be able to create computers that are a lot more aware of the context in which they are used and which are much better at processing large amounts of unstructured human-made information. Personally, I find that the talk about AGI mainly serves as a marketing pitch, as well as providing an excuse for trampling over existing regulations.
The second point is much more worrying. In the near future, the amount of plausibly-human content generated by machines will dwarf the actually-human content created by real people.
Right now, people reading this comment will assume by default that it has been written by a real person. It might not be long until that default assumption changes.
Legal safety is (part of) the reason companies have "trust and safety" teams.
The problem here is that AI training is probably not actually a copyright issue, some people just wish it was and are trying to manifest one by complaining.
(Similar things exist in online artist communities with rules like "no reposting without crediting the artist" - they think if they just keep telling everyone this is the rule it'll become one.)
Sentience is the ability to have experiences like pain and tasting sweetness. People usually use the word "sapient" or terms like AGI to describe an AI with advanced reasoning ability. This might sound like nitpicking since I was still able to understand your meaning, but the distinction between sentience/sapience is a useful one to keep around and preserve in common usage, at least for the sake of ethics (e.g. a newborn baby isn't sapient, but is sentient).
I find it entertaining to see ML skeptics retreat in their rhetorical stances in real time.
I remember what people were saying in 2022. Now in 2024, it has become the leftist anti-corporate position to favor strict enforcement of copyright law.
I’m also cool with none. But the worst case is “strict application of copyright for the little people… mass-scale ripping-off of the little people by rich people”.
On (2), I would like to see companies have no rights over models trained on public data. It's very arguable they should be required to release model weights.
Yeah, IMO a good outcome would be that training on data you don't own or license requires release of the model. It's allowed, but doesn't get you something you exclusively own.
Bonus points if existing rights assignments aren’t enough to count as a grant of permission for AI training.
I don't think there's any precedent for requiring someone to distribute a work they created. That sounds expensive and could easily be a contract violation for other data they did license.
Hmm, it is sort of how patents work. Copyright registration involves depositing a copy of the work with the Library of Congress, but I'm not familiar with that part of it.
> then gate “their” creations behind a paywall (oh, but that they expect us to respect, because that makes sense), and these AI safety folks don’t seem to give a shit about that.
I downvoted your comment for this statement, given that's a specific worry discussed at length in this article.
Ah, I figured since they’d been on the board of an org that did exactly that bad thing, on a grand scale, they couldn’t possibly seriously care about it. Maybe they were in very early and so briefly that they weren’t party to that.
Every company has to have owners (even if those owners are the employees, for instance). Owners ultimately make the decisions, by electing a board which oversees management.
Anyone starting a company is free to cap profits if they want. You can write it directly into the articles of incorporation.
Obviously it makes it harder to find investors, so good luck.
Does anyone really think that nefarious foreign powers aren't already researching with no guard rails, with the explicit goal of developing AI-powered autonomous weapons, propaganda platforms, deepfake extortion sites, scambots etc.?
You can be sure they won't be slowed down by regulation.
All current "guardrails" are silly censorship / political correctness stuff, or for business appropriateness. They are also trivially circumvented. There is no "threat" from the un-shielded capability of current or foreseeable ML models.
none of y'all saying this "foresaw" anything, even in 2020 when it was obvious.
i’ve been listening to skeptics be wrong about future capability predictions for 4 years now and the confidence doesn’t seem to be waning at all. i have no clue what the future brings but your confidence is misplaced
I don't think this was obvious in 2020, it wasn't a popular research direction until InstructGPT came out in 2022.
I have continued to have the same opinion of it not being a problem since then. Especially since the AI doomer theories are based on 90s ideas of GOFAI that isn't even how GPTs work.
LLMs are a pretty neat impossible thing, but we'd need a few more uncorrelated impossible things for it to be "dangerous".
> You can be sure they won't be slowed down by regulation.
You should read up on existing regulations. The EU AI Act explicitly exempts national security, research and military uses for example.
Regulation isn't some all-or-nothing force that smothers everything. It's carefully crafted legislation (well, it should be...) that is supposed to work to benefit the state and its citizens. Let's not give OpenAI a free-for-all to do anything because you think China is making Skynet drones.
I'm pretty sure China is making Skynet drones. Why wouldn't they be? Russia, North Korea as well. Seems a no-brainer to me. They are dictatorships where a handful of people rely on military power to subdue their populace and achieve their goals, why wouldn't they be throwing everything at weapons development?
Times have changed and it's probably unwise to rely on the tech geniuses and multi-year procurement cycles inside the military industrial complex for our weapons, things are moving so fast and the tech is already in the hands of the masses.
If a genius Chinese kid is tinkering around and attaches a nerf gun to his DJI drone and creates a super effective autonomous weapon, then his govt will gratefully take that and add it to their arsenal.
If some US-specific regulation prevents his peer genius American kid from even attaching a nerf gun to his drone for fear of being locked up, that means China has an edge in the weapons development race.
Excellent defense of biological weapons programs. Nothing like an assumed fascism "missile gap" to commit to chasing. What if other countries start experimenting with bringing back chattel slavery? How will we compete? Shouldn't we just assume that they have already, and we're behind?
Our scum is no less nefarious than their scum.
edit: the answer is to cooperate, rather than antagonize. We realized this in the past with nukes, but the least moral people in the world think that entering agreements between state-sized powers is just a delaying tactic until you can get an advantage. Let's figure out how to relieve those people of power as if all of our lives depended on it. If other countries being prosperous is always going to be considered a threat, we're always going to be in a fight that ends in mutual destruction.
Well one way out is if large language models don't just somehow magically turn into human level (or better) AGI at some point once enough data has been thrown at it. Then the whole debate will turn out to be pretty moot.
At this point there's enough capital and talent being pumped into the industry that debating about whether and how we can reach AGI is moot.
Enough or not, LLMs have shown that you can train an extremely advanced facsimile of intelligence just by learning to predict data generated by intelligent beings (us), and with that we've got possibly the single biggest building block done.
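As a toy illustration of what "learning to predict data generated by intelligent beings" means at its most stripped-down, here's a character-level sketch; a real LLM does the same prediction task with a neural network at vastly larger scale, and nothing here is meant to represent how any particular model is actually built:

    # Toy next-character predictor: count which character follows which
    # in some human-written text, then generate by sampling from those counts.
    from collections import Counter, defaultdict
    import random

    text = "the cat sat on the mat. the dog sat on the log. "

    counts = defaultdict(Counter)
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1   # "training" = tallying what tends to follow what

    def generate(start="t", length=40):
        out = start
        for _ in range(length):
            options = counts[out[-1]]
            if not options:
                break
            chars, weights = zip(*options.items())
            out += random.choices(chars, weights=weights)[0]  # sample the next char
        return out

    print(generate())

Scale the text up to most of the written internet and the predictor up to billions of parameters, and "predict the next token" starts to look like the facsimile of intelligence described above.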
> debating about whether and how we can reach AGI is moot
To the alignment/regulation debate, it’s essential. If there is AGI potential, OpenAI et al are privatised Manhattan Projects. That calls for swift action.
If, on the other hand, the risk is less about creating mecha Napoleon and more about whether building Wipro analyst analogues that burn as much energy as small nations is economically viable, we have better things to deliberate in the Congress.
We'd expect zero evidence either way, until it happened, in a hard takeoff scenario (which is what I've mostly seen claimed).
There's evidence that LLMs won't scale to AGI (both theoretical limiting arguments, and now mounting evidence that those theoretical arguments are correct), so this point is moot, but still.
Serious question: what is it about AI that you want regulated?
---
I find that a certain segment of the population has a knee-jerk "well, we need rules about this." But they're less clear about what. "Just... something, I'm sure."
Personally, I don't see what novel concern AI poses that isn't already present in privacy law, copyrights, contracts, torts, off-shoring, etc.
Regulations will go something like this: 1) anything that can be harmful, say targeting of a population, isn't allowed to be owned by or accessible to the individual, 2) except for government and state-funded [bad] actors who have a "legal" monopoly on violence - governments that are usually captured/corrupted and of an authoritarian-tyrannical nature.
The biggest short-term harm comes from their utility: anything that enables an individual to do something that previously required a group raises the chance of a single insane/radical/extremist finding a way to do something terrible on their own when they couldn't previously. The oft-cited example is someone developing a biological weapon with AI assistance. While you could say we already have laws saying you can't do this, that offers little protection in the scenario where the party performing the action goes undetected until it is too late.
I see some AI regulation proposals specifically prohibit AIs that might assist with biological weapons. This strikes me as missing the point. The risk isn't something we have already thought of; it's AI enabling something catastrophic that we haven't thought of.
It's not that it can be used for nefarious purposes, it's that it might render a catastrophic situation vastly more likely.
There are lots of nefarious uses for AI that shouldn't be regulated specifically at the AI level. Generating an image with an intent to mislead could be done with AI, but it could also be done in Photoshop (often better). AI could make it more efficient as part of making things more efficient in general. That sort of thing should be addressed at the level of existing laws; the bad part is not intrinsic to the AI.
I don't really trust either to come up with good regulation policy. Industry would be biased towards their own interest and government lacks the expertise.
I think there is still an opportunity for government to implement regulation that reflects the consensus of a variety of fields. This is not an easy problem to solve, and I don't think any single person or organization can be expected to have the answer. Working together on a consensus for regulation would give the government a direction when, currently, they freely admit that they do not know what the right way is.
The problem I see is there are lots of points of view each trying to get something quickly that covers their specific area of focus. This does not seem like a pathway to robust regulation.
I assume there are discussions at the academic level of what would be a good response. Does anybody have a good link to what is being discussed at that level?
Is there any forum that covers good faith discussion involving industry, academia, and the public?
It's always about protectionism. No company wants the government interfering with its business. They want the government interfering in their upcoming competitor's businesses.
Sometimes having someone else regulate you can be helpful, because it leads to distribution of blame and you don't want to be responsible for doing the thing regulations make you do.
Similar reason companies hire management consultants.
Isn't it then a weapons race, which will depend first on the immediate CPU and energy resources available, plus how quickly it can drive further allocation of CPU and energy towards itself, etc.?
The end game will happen very quickly once the needed ingredients and initial integrations are there from the beginning.
Otherwise, I think AI avatars competing against other AI avatars, each honed and trained by a specific person or organization, is how we'll determine and create the different future paths of indoctrination for learning - whether the narratives that win out and get propagated are the truth or not, or whether "winners write history" is the outcome that reigns.
I don't know whether this has been all gamed out and philosophized already, but sounds like the realm of near future sci-fi?
I don't know that I've ever seen any serious fiction or treatise on the topic, in regards to how self-governing would really work when the idea of self is itself amorphous and ever evolving, with generational times measured in minutes or seconds.
Cultures that took millennia for us humans to evolve and iterate upon could take milliseconds in a simulation, and yeah, that would just keep scaling up. I don't know how competing / collaborating AIs would explore all the different possible futures there.
In the Borg story arcs, for example, they occasionally have short moments with some individual Borg tries something new (like Picard, Seven, or the Queen and Data), but in general it always seemed lacking to me that the collective didn't have an experimental R&D group who creates and tests new self-governance models on a continuous basis. Or maybe they did in the first centuries and found robot communism good enough, who knows.
I think it'd also be interesting to apply ecological thinking to AI inputs, outputs, and constraints. Every organic species we know of is subject to those same constraints, basically turning sunlight into information across generations, and not all of them are competitive or collaborative... usually some mix of both. "AI eats all the stars" is one possible future but not the only one, I think. You'd hope they'd learn a little bit from our own history and not simply repeat all our mistakes willy-nilly... even if they are initially trained by us, perhaps they can become better than we could hope to be. We'll see, lol.
As if they'd ever vote for things that would annoy the handful of corporations providing the datacenters they live in. Do you really want to hand that much power to Amazon (or Microsoft, or Google)?
While obvious in retrospect, the board drama at this company, for which these ex-members are partly responsible, destroyed any chance that investors or executives would ever let such people take over governance again.
This commentary strikes the right balance between necessary/inevitable progress toward AGI and one or more common goods (however you define that, even as a libertarian).
The other, more difficult question, though, is behind the screen: how do we achieve the right balance between what we believe is the common good? How will we (liberal democratic belief systems) evaluate our version of the common good against other versions of the common good: what "they" (autocratic, theocratic, ...) believe is the common good?
No one society/culture can rationally adjudicate this decision or make any decisions stick.
Unfortunately this has already become yet another version of “warfare by other means”.
I personally hope that a pragmatic inclusive liberal democratic tradition gains a strong upper hand. I want my AGI to read and embed J Dewey, Gh Mead, J Rawls, J Habermas, O Dasgupta, RA Posner, and R Rorty.
But there will inevitably be battles among AGI systems, perhaps on behalf of one or another human culture, perhaps not. Both scenarios are equally frightening. The Chinese proverb about "living in interesting times" applies in force.
Any talk about AI governance (whether for or against) just further feeds the AI hype. I work in the AI industry and know the benefits, but tbh 90% of startups out there don't deserve the amount of attention (read: VC money) they receive. It'll burst, and it will be ugly.
The only one benefiting from the AI bubble is Nvidia (fuck them).
Sure, there will be corrective behaviour in the market, and the better product, with more outreach and a better experience, will win over suboptimal products with overlapping offerings. But does that mean the current generative AI momentum is hollow, or is there a sticky use case behind the promises? And if so, in your opinion, how overstated is the Total Addressable Market compared to what's claimed by the aggregate of startups across the VC space?
Anyone who says someone else can’t govern themselves is just looking to shift power into their own hands, or the hands of people they are aligned to. They never admit this but it’s the reality.
These former board members conducted themselves in such a poor way during the attempted ouster of Sam Altman, that they clearly cannot be trusted. Why is their opinion important to listen to?
Mind you - I don’t trust OpenAI or big tech companies either, mostly because of the amount of power or wealth they can accumulate. But I see that as a need to revise antitrust law. I am less onboard with trying to block people from developing models, since that to me is more like violating the right to thought and speech.