What is interesting is that the AI “ethicists” all want to serve as a high priesthood controlling access to ML models in the name of safety. However, I think the biggest danger from AI is that these models will be used by those who control the models to control and censor what people are allowed to write.
These open source models in the hands of the public are, IMO, the best defense against the true danger of AI.
Kudos to Facebook and Microsoft and Mistral for pushing this.
> What is interesting is that the AI “ethicists” all want to serve as a high priesthood controlling access to ML models in the name of safety.
This is a very uncharitable take. I would suggest familiarizing yourself with the actual arguments rather than summaries on social media. There’s considerably more thought than you’re crediting them with, and extensive discussion around the risk you’re worried about along with proposed solutions which – unlike your “best defense” – could actually work.
Moreover, in the next sentence GP confesses that they “think the biggest danger from AI is that these models will be used by those who control the models to control and censor what people are allowed to write”, revealing that they too harbor ethical concerns about AI, they’re just not one of “those” AI ethicists.
That's more a terminological accident, I think. Those who describe themselves as working on "AI ethics" in academia are mostly worried about things like AIs saying something offensive or discriminating by race or sex, while people who use the terms "AI risk" or "AI safety" are more worried about future risks like terrorism, war, or even human extinction.
Thinking about it, both groups don't talk a lot about the risk from AI being used for censorship...
> Thinking about it, both groups don't talk a lot about the risk from AI being used for censorship...
This is a pretty common topic in the academic community I follow, along with related things like how it’ll impact relationships with employers, governments, etc. I see that mostly as a counterpoint to the more sci-fi ideas, as in “don’t worry about AIs annihilating humanity, worry about your boss saying an AI graded your work and said you are overpaid”.
Yeah, I think that's the poster's point: "AI ethics" isn't "AGI risk". And I'll add A) Eliezer isn't a "high priest", he's just a guy, B) he plays a character and knows it.
You'd be surprised how much you can advance in life just by avoiding talking or thinking about other people too much, much less grouping them. It's a fundamental part of our animal brains and it's gotten oh-so-much worse as we migrated to the internet. And it leads to epistemic closure.
n.b. I think the AGI risk stuff is bunk and the original AI ethics cheerleaders ruined their own field. You don't need to agree to understand the distinction.
I think it's harmful to characterize "all" AI ethicists as a "priesthood" wanting to gatekeep access to these models. There are plenty of people who care both about the democratizing of these tools as well as safe and ethical use.
Seriously, I'd also really appreciate some examples of moderate AI ethicists. The vocal minority is all I've heard so far, and their arguments sound closer to fiction than science.
Thanks, Andrew. And I’m happy to send links to a free, read-only version of the standard if helpful, or do a webinar with Andrew on 7010 to demonstrate the moderate AI ethicist stance, which I hope I embody, though I don’t want to focus on titles too much. My ideology or agenda, as it were, is that AI governance should prioritize ecological flourishing and human wellbeing at the outset of design, which also means the outset of funding. Accountability then moves from a focus on the output of one AI system or product to how the people making and releasing it demonstrate their ongoing, full-value-chain commitment to giving back more to the planet than they take, and to creating genuine, symbiosis-level, caregiving-oriented value for the end user.
That's an interesting perspective. What's the hope that AI will ensure human symbiosis when traditional software models have (arguably) failed to do so?
The best analog that comes to mind for me is Open Source software, and viral licenses that encourage a literal obligation to "give back" to the community. As helpful as that is, Open Source software still consumes power and can't ensure ecological symbiosis with its users (even if it's ethically superior to proprietary alternatives). With that in mind, I'm curious how AI licensing differs, and how its increased cost of training/execution will be subsidized by the value of the gross output.
The other more common question that comes to mind is enforcement. In your "agenda" as it were, would AI governance be obligatory or optional? Should we draw the line differently for research/nonprofit/commercial entities? In a traditional economy, the existence of Open Source software has enabled better value extraction for businesses which ultimately do not prioritize environmental concerns. Along the same line of thought as the first question, I'd be interested to hear how AI governance can avoid overreach while still addressing the same issue we had with traditional software creating excess value that mostly does not benefit the ecology or greater good.
This is something I'm very interested in generally, but I question if we have the framework to actually achieve meaningful human-AI symbiosis. Open Source succeeded in its goal to subvert copyright and capitalism by carefully following the rules and managing its expectations from the start. I worry that you're biting off more than you can chew asking for human-computer, human-AI or even AI-ecology symbiosis. I'd be glad to summon another boffin who can prove me wrong though :P
The broader point that John is making, and which was central to the thesis of the standard, is that we have to entirely rethink software engineering paradigms, and engineering paradigms generally, to include at every step a question of human-centered externalities.
That is just not something that’s built into anything after the Norbert Wiener cybernetics shift of the late 1950s and 1960s; that perspective was just totally blown out of the software side of engineering.
I wish you luck. I have limited perspective, but I'd wager that the externalities of greed and human conflict will prevail over the thoughtful and pragmatic limitation of technology for human benefit. I hope I'm wrong (for everyone's sake).
Yes, well, that’s basically what I’m working on for the rest of my life.
I’ve been working on ASI for two decades, and now that we’re going to achieve it, I’m switching to working on alternative socio-economic systems to capitalism so that ASI doesn’t control human systems.
TIL. That's a neat standard and I'm glad it exists, it's an interesting reflection of what opt-in ethical frameworks can look like.
For every reasonable and non-authoritarian suggestion I read for regulating AI, I feel like I wade through 10 Neuromancer-level takes. It's definitely a me-problem, I gotta stop scrolling through /new...
This was the effort of dozens of engineers, ethicists and systems people all done prior to the LLM revolution so it doesn’t have all the mystical junk that the newcomers seem to be latching onto.
I mainly read on algorithmic fairness, safety, and auditing since they're more practical for work. Authors I enjoy are Inioluwa Deborah Raji, Andrew Smart, and Timnit Gebru.
I think at this point, the cat is out of the bag. Relying on not so nice people complying with license legalese was never going to be a great way to impose control. All that does is stifle progress and innovation for those who are nice enough to abide by the law. But anyone with other intentions in say Russia, North Korea, China, etc. would not be constrained by such notions. Nor would criminal organizations, scam artists, etc.
And there's a growing community of people doing work under proper OSS licenses where interesting things are happening at an accelerating pace. So, alternate licenses lack effectiveness, isolate you from that community, complicate collaboration, and increasingly represent a minority of the overall research happening. Which makes these licenses a bit pointless.
So, fixing this simplifies and normalizes things from a legal point of view which in turn simplifies commercialization, collaboration, and research. MS is being rational enough to recognize that there is value in that and is adjusting to this reality.
> What is interesting is that the AI “ethicists” all want to serve as a high priesthood controlling access to ML models in the name of safety. However, I think the biggest danger from AI is that these models will be used by those who control the models to control and censor what people are allowed to write.
Who says that this is not an (or even the) actual hidden agenda behind these insane AI investments: building an infrastructure for large-scale censorship?
Every center of value develops a barnacle industry with its foot hovering over the brake pedal unless a tax is paid to its army of non-contributing people.
I wonder, how would this future differ from how big tech currently operates in relation to (F)OSS?
Even with code/weights common to the public, a significant resource divide remains (e.g. compute, infrastructure, R&D). I'm not arguing against more permissive licensing here, but I do not see it as a clear determinant for levelling the field either.
But you said "AGPL". AGPL SAAS running on someone else's computer that you can access requires that they provide you with the source code they're running. Barring shenanigans, that source code would enable you to run the same SAAS yourself if you desired to do so.
I'd say having the ability to run the program locally _and_ having its source code is "more open" than just having the ability to run the program locally in binary form. With AGPL in your scenario you get all three: source access, local execution, and remote SaaS execution. With proprietary local code you get only one of those three.
I don't understand how normal people having access to AI models helps you when big businesses are using them in unethical ways.
Let's say, for example, I have access to exactly the models Facebook is using to target my elderly relatives with right-wing radicalising propaganda. How does that help me?
This assumption that it helps somehow sounds like you've internalised some of the arguments people make about gun control and just assume those same points work in this case as well.
This small model could run locally and filter out bullshit/propaganda as configured by the user. Having control over the last model that filters your web is essential.
Local models will be mandatory once the web gets filled with AI bots. You need your own AI bot to fight them off.
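To sketch what I mean (a rough example only, using Hugging Face's zero-shot classification pipeline as a stand-in for whatever small local model you'd actually run; the labels and threshold are made up, not a real product config):

    # Minimal sketch: a user-configured local filter.
    # Assumes `transformers` (plus a backend like PyTorch) is installed;
    # the model, labels, and threshold are placeholders.
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")  # runs locally once downloaded

    USER_LABELS = ["political propaganda", "scam or spam", "ordinary content"]

    def should_show(text: str, threshold: float = 0.7) -> bool:
        """Return True to show the item, False to hide it."""
        result = classifier(text, candidate_labels=USER_LABELS)
        top_label, top_score = result["labels"][0], result["scores"][0]
        return top_label == "ordinary content" or top_score < threshold

    feed = ["Miracle cure THEY don't want you to know about!!!",
            "Local library extends weekend opening hours."]
    visible = [item for item in feed if should_show(item)]

The point isn't this particular classifier; it's that the user, not the platform, picks the labels and the threshold.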
Most people don't even use ad blockers today. Hoping that people (especially the people who are vulnerable to such misinformation and actually need it) personally configure a propaganda filter AI is wildly optimistic.
I don't think this is the biggest danger. In a few years, if they continue to improve at the current speed, these models could become really dangerous. E.g. an organization like ISIS could feed one some books and papers on chemistry and ask it, "I have such and such ingredients available; what is the deadliest chemical weapon of mass destruction I can create?" Or use it to write the DNA for a deadly virus. Or a computer virus. Or use one to contact millions of, say, Muslim young men and try to radicalize them.
Why radicalize only Muslims? Why do you need an LLM to teach you how to make a bomb?
Why not just ask it how to reach heaven with the lowest effort possible? Why don't good guys like you have your LLM pre-un-radicalize all those poor young men?
Indeed. Pretty much any horrible way to die from the olden days makes you a martyr in Islam. For example, having a building fall on you or gastro-intestinal disease. Fighting is only one of the ways and not really the easiest since the other ways are passive.
No, they can't - they would have done it if they could. Producing a practical chemical weapon is a complicated task, with many steps that are not documented in publicly available sources.
That’s somewhat true – it’s not easy but not hard enough, as we saw with the Aum Shinrikyo attacks – but an LLM won’t magically have access to non-public instructions and, not having an understanding of the underlying principles, won’t be able to synthesize a safe process from public information.
Eh, that is up for debate. If I dump a library of chemistry books and industry texts on chemistry and volatile chemicals into it, it's distinctly possible the model could generate this data.
Not without some kind of understanding of the underlying principles. If you were testing something verifiable in code you might be able to test candidates at scale, but this involves a number of real-world processes which would be hard to tackle that way.
Control of materials is a far bigger hurdle. If you try to procure materials which can be used for bombs/chemical weapons/.. in significant quantities you will get noticed pretty fast.
The same ISIS who released "You Must Fight Them O Muwahhid" [0] with step-by-step instructions for the construction of homemade triacetone triperoxide (TATP) bombs, as used in the 2017 Manchester Arena attack, the 2015 Paris attacks and the July 7, 2005 London bombings, isn't hoping someone releases an uncensored LLM it can run in 24GB of VRAM so it knows what to do next.