I think my bigger concern with AI in the short term isn't the immediate displacement of all jobs, but something more personal.
In much of the world today, we observe the fraying of human connection and relationships due to a variety of factors [0], and this could be further accelerated by AI that can seem very human-like [1].
I expect AI is going to become the leading societal concern when it comes to unintended consequences. The pace of advancement is way ahead of our ability to reason about the potential effects of what we are building.
We will have completed many iterations before we even have a moment to review the feedback, so we will likely compound many mistakes before we realize them.
I have no idea how it will slow down. Someone just figured out how to reduce the cost of building a multi-million-dollar model to around $600. That was supposed to take another decade.
I've spent a lot of time thinking about what this advancement is going to look like as it plays out. I think we are going to trip over many landmines in this new tech gold rush. I've written my own perspective on that here - https://dakara.substack.com/p/ai-and-the-end-to-all-things
The problem is that this kind of discourse is coming almost entirely from people who haven't made a good-faith effort to understand the technology. You're engaging with Hollywood illusions that don't resemble reality. There are genuine concerns related to AI developments, but they have nothing to do with "alignment"; it's all the boring stuff like shifts in power dynamics and biases in decision-making. Really important, but not exciting enough for a gripping pop narrative. Less basilisk, more... failing to assemble adequately debiased training datasets for your mortgage approval app, or realizing someone can actually identify certain individuals in an anonymized medical dataset.
If you find yourself feeling anxious about AI, spend time learning how modern transformer-based predictive and generative models actually work, from resources rooted in math, statistics, and computer science, not doomer bloggers.
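If it helps, here is a minimal sketch of what "predictive" actually means here (using the small open GPT-2 weights through the Hugging Face transformers library purely as an illustration, not any particular product): the model assigns a probability to every possible next token given the text so far, and generation is just repeated sampling from that distribution.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Small, openly available model used only to illustrate the mechanism
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The capital of France is"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits   # shape: (batch, sequence, vocabulary)

    # Probability distribution over the single next token
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, k=5)
    for p, token_id in zip(top.values, top.indices):
        print(repr(tokenizer.decode(token_id)), round(float(p), 3))

Everything the chat products layer on top - the apparent reasoning, the confident factual claims, the occasional fabricated citation - comes out of that next-token machinery, which is a much less mystical place to start from than the doom narratives.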
I agree here. LLMs aren't going to destroy the world, but they may do a hell of a job destroying society.
Now, once we have an AI with a continuous learning feedback loop, especially one that is multimodal across many dimensions, that's when things can quickly spiral out of control.
> but they may do a hell of a job destroying society.
Agreed. I feel they might actually become so disruptive that they interrupt the progress towards AGI/ASI.
As for the actual topic of alignment for AGI/ASI: under the current premise, it appears to be an unsolvable paradox for more reasons than I can list here. I've written on that in more detail as well, FYI - https://dakara.substack.com/p/ai-singularity-the-hubris-trap
I’ve always wondered whether, in my lifetime, the exponential growth of economic and technological development would hit a velocity where one or both expand faster than I can even process the changes.
At this point all I have to offer is “strap the fuck in”
EDIT: I’m not even talking about some world-ending thing - even GPT-4 in its current form has me questioning how many of the comments I read are real - the thing can beat the Turing test in many cases!
>I’m not even talking about some world-ending thing - even GPT-4 in its current form has me questioning how many of the comments I read are real - the thing can beat the Turing test in many cases!
> We are about to enter a very disturbing era of unverifiable truth and reality. It is an untenable situation for societal order and stability.
When did this stop being the case? I was raised to assume everything on the internet is a fricking lie, that every woman was actually a man in their parents' basement, every child was an FBI agent, and every piece of advice I was given was for the purpose of getting me to nuke my machine for the lulz.
There were a few simple rules:
1. Never share any PII on the internet.
2. Never take anything on the internet seriously.
3. Don't believe anything you read on the internet.
It seems almost every major problem we are facing on the internet, from social media to misinformation, exists because we broke one of those rules.
There are a few particular problems here... "The internet" is pretty much everything that you're not physically present at. Any form of communication not in person is subject to the same rules. That TV broadcast... no idea if it's actually real. Someone calling you on the phone... Yea, the voice replication AIs are insane these days.
The things that used to bind societies together are quickly falling apart, and that will lead to distrust. Culture can quickly break down that way. Humans require culture to exist in our modern world of convenience. For example, businesses require trust in each other to exchange resources. You know, resources like coal for power. Or long-term chemical supplies so we can keep chlorinating our water.
The Foundation series had humanity breaking down because our systems got too complex and we stopped learning. But I think the bigger part, for us at least, is that we'll stop trusting anything we see or hear.
Sure, but trust was still possible because there were communication formats we could rely on, such as audio and video recordings good enough to serve as evidence in court.
That is about to be lost. Soon the fakes will not be distinguishable from reality. They are good enough already to fool enough people to be disruptive. This becomes an order of magnitude worse than what we had before.
> I have no idea how it will slow down. Someone just figured out how to reduce the cost of building a multi-million-dollar model to around $600. That was supposed to take another decade.
I don't think this is accurate. The Stanford team took LLaMA as the base model and fine-tuned it on instruction data generated with an OpenAI model - that fine-tuning run is what cost about $600. Nobody trained a GPT-like model from scratch for $600 - this experiment took advantage of the millions of dollars already spent training the larger models.
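For anyone curious what that recipe roughly looks like, here's a hedged sketch (the model name, dataset file, and hyperparameters are placeholders, not the Stanford team's actual setup): load an already-pretrained base model and run a short supervised fine-tuning pass over a small instruction dataset.

    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    base = "huggyllama/llama-7b"   # placeholder name for the pretrained base model
    tokenizer = AutoTokenizer.from_pretrained(base)
    tokenizer.pad_token = tokenizer.eos_token   # LLaMA tokenizers ship without a pad token
    model = AutoModelForCausalLM.from_pretrained(base)

    # Placeholder file: ~50k instruction/response pairs generated with a stronger model
    data = load_dataset("json", data_files="instructions.json")["train"]

    def to_tokens(example):
        text = example["instruction"] + "\n" + example["output"]
        return tokenizer(text, truncation=True, max_length=512)

    data = data.map(to_tokens, remove_columns=data.column_names)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=3,
                               per_device_train_batch_size=4),
        train_dataset=data,
        # copies input_ids into labels so training is plain next-token prediction
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()   # hours of GPU time, versus the months of compute behind the base model

The fine-tuning pass is only cheap because the pretrained base model and the data-generating model already exist; that's the whole point.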
Am I the only one who thinks the AI doomsayers are even more ridiculous than the AI cheerleaders?
ChatGPT may be useful and so on, but dear god. It's still just a fucking chat bot.
We need to regulate AI and get better at building safe AI systems. Especially for things like facial recognition, self-driving cars, hyper-targeted advertising / engagement hacking, and the like. But "slow down" is such a ridiculous framing. The issues with these systems are obvious to everyone, and addressing those issues is a deeply technical and deeply difficult problem. We need to be throwing money at computer scientists to help build safety engineering tools and best practices. Hand-wringing over obvious faults by philosophers is not helpful, nor is burying our heads in the sand and going full Amish but with 2016 as our "just the right amount of technology" baseline.
I hope in five years we remember that the non-technical journalist class lost their god damned minds over what ultimately amounted to a mildly useful parlor trick.
> The issues with these systems are obvious to everyone, and addressing those issues is a deeply technical and deeply difficult problem. We need to be throwing money at computer scientists to help build safety engineering tools and best practices.
Doesn't this go hand-in-hand with slowing AI research down and focusing more on safety engineering and best practices?
I think everyone in the AI safety community is very very much onboard with "We need to be throwing money at computer scientists to help build safety engineering tools and best practices," but the current challenge is how to build the societal incentive structures to make that happen. Right now the overwhelming majority of prestige and money is in making AI more capable, not in safety engineering and best practices.
> Doesn't this go hand-in-hand with slowing AI research down and focusing more on safety engineering and best practices?
I don't see how this is a given. It could very well be the opposite.
Consider self driving, for example.
Better object detection makes the system safer, full stop. It's on the AI safety folks to keep up with building good analyses for SoTA. Suspending all future improvements to object detection until we can better understand fault models of existing systems could very well make everyone less safe.
> but the current challenge is how to build the societal incentive structures to make that happen. Right now the overwhelming majority of prestige and money is in making AI more capable, not in safety engineering and best practices.
Slowing down doesn't address that problem at all. The answer here is regulation and accountability, which may or may not have the side-effect of slowing down deployments. But slowing down for the sake of slowing down is a non-solution unless you're a Philosophy major who watches too many movies.
> It's on the AI safety folks to keep up with building good analyses for SoTA. Suspending all future improvements to object detection until we can better understand fault models of existing systems could very well make everyone less safe.
Sure. That's certainly the responsibility of AI safety folks. The only problem is that there are too few of them relative to people trying to make AI more capable. How do we get society to agree to funnel more resources into AI safety?
> Slowing down doesn't address that problem at all. The answer here is regulation and accountability, which may or may not have the side-effect of slowing down deployments.
I don't think your viewpoint and the article's viewpoint (or at least Ding's viewpoint), despite its snappy title, are really that far apart. Ding isn't saying that slowing down is intrinsically good, but rather, in his words:
> If you’re a tech company, if you’re a policymaker, if you’re someone who wants your country to benefit the most from AI, investing in safety regulations could lead to less public backlash and a more sustainable long-term development of these technologies
which sounds pretty similar to what you're saying. Yes a likely outcome of that is slowing down AI development, but it's not the goal itself per se.
The gist of it is that we have a social problem. How do we coordinate to make sure that we develop AI safely? Because a free-for-all arms race where everyone cares only about the next shiny new capability seems really dangerous. Maybe the answer is regulation, but even then there are all sorts of questions of enforcement regimes and the like.
> How do we get society to agree to funnel more resources into AI safety?
Regulation and liability.
> I don't think your viewpoint and the article's viewpoint (or at least Ding's viewpoint), despite its snappy title, are really that far apart. Ding isn't saying that slowing down is intrinsically good, but rather, in his words:
Perhaps you're correct. In that case, the author's editor nukes the piece's credibility from orbit with the headline. If that's the case, then this article is "defund the police doesn't mean defund the police" levels of PR idiocy.
How do you coordinate across countries (for AI in particular, between China and the U.S.)? Because if you lose that coordination, you're back to all the classical problems of the prisoner's dilemma.
You are underestimating the scale of the tectonic shift.
A computer program that can pass the god damned Turing test is not "a mildly useful parlor trick"; it is the single most impressive computer program ever made. It exhibits reasoning, and it can think with analogies. You can give it complicated requests in natural language. Given suitable prodding, it's creative. Everyone just woke up to the fact that we're a sneeze away from AGI.
>The issues with these systems are obvious to everyone
They are not. Already people routinely ask ChatGPT for factual info, and it doesn't bother them that it will simply make things up. It walks and quacks like a duck, so people assume it's a duck.
>addressing those issues is a deeply technical and deeply difficult problem
Alignment is a deeply difficult problem for philosophical reasons - we don't know how to reliably "align" humans either - but getting LLMs to output "roughly" what we want is fun and easy (they're like virtual humans!) and they're going to be deployed everywhere, problems notwithstanding.
Things are about to change, big time. In the short term we're only talking about something like the smartphone revolution, where the parameters of social interaction are fundamentally reshaped. In the long term it could get real weird...
I think it's pointless arguing about "slowing down", the cat is out of the bag. I would just like to see some transparency rules about weights and prompts - we're rapidly reaching a stage where a company hosting a highly used language model could do extreme violence to culture and business. Language models aren't paperclip maximizers - corporations are paperclip maximizers, and language models are the HypnoDrones.
People used to anthropomorphize Markov chains like this. I've been through this rodeo before, at least a few times.
I did a little lab on GPT with middle schoolers as part of a science enrichment activity. Without prompting, the entire class was making it say nonsense inside of the one-hour session.
Try using a GPT model to do something humans actually do, other than random bullshitting on the internet. Field customer service requests. Even carefully SFT'd models struggle to beat decision trees at actually solving the customer's problem, and are sometimes worse. They sound more human / less robotic, but who cares if the customer's problem isn't solved and the dialog quickly diverges into insanity?
Just because you want to see God on a piece of toast doesn't mean that the average human has completely lost their capacity for critical thought.
> doesn't mean that the average human has completely lost their capacity for critical thought
The average person, maybe everyone, is very selective about where they apply their critical thought processes.
And groups of people ... groups don't have critical thought like individual people do.
Corporations, political parties, social groups don't "think" coherently, but wield most of the power by being centralized or synchronized.
Groups survive and grow by developing feedback and incentives that push back against individuals with critical not-aligned-with-the-group thought.
If we are depending on unorganized mass critical thought to save us, we are surely doomed. We are going to need the best systems for getting along that we have developed to date, plus upgrades.
I'd like to think this is just a fad (like blockchain, for instance)... but there are such tangible applications for LLMs that I think for once the hype is justified.
The blockchain hype was in pitching a solution to every problem in an effort to get rich quick. The technology still has applications for verification and ultra-micro transactions. I expect it will be foundational before the end of the decade.
This is more a statement about how bad technology hype has gotten of late. Having "tangible applications" is a very low bar. If we paused for every technology that has "tangible applications" we'd probably be entering the bronze age any moment now.
Thank you, you said it better than I did replying to another comment, but I feel about the same. It's disheartening how many outlets we have on social media to engage with big exciting narratives that are just detached from reality, and now here's one more.
The arguments against AI echo those against nuclear technology. While nuclear tech has indeed been misused, it has also revolutionized medicine, agriculture, archeology, transportation, space exploration, and energy infrastructure—demonstrating the immense potential of technological progress.
Similar to nuclear tech, the issue lies not with AI itself, but with its malicious application. For example, AI-driven medical breakthroughs will save lives, while AI-based disinformation and spam will harm society. The root of the problem isn't AI, but rather, human intent.
Ultimately, it's our responsibility to harness AI's transformative power for good and prevent its misuse, just as we've learned to do with nuclear technology. Or in other words, it's not an AI problem; it's a human problem.
I think the comparison with nuclear tech is missing one crucial aspect: ease of access. Nuclear tech has been misused, and a sufficiently funded and motivated malicious actor could get their hands on it in some way to cause harm, but it is, for the most part, out of reach.
AI, on the other hand, is already being used by every hustler looking to make a quick buck, by students who can't be bothered to write a paper, by teachers who can't be bothered to read and grade papers, by every company that can get it to avoid paying actual people to do certain jobs... Personally, my problem is not with AI tech in itself, it's with how easy it is to get your hands on it and make literally anything you fancy with it. This is what a lot of the "AI for everything" crowd can't seem to grasp.
"Personally, my problem is not with AI tech in itself, it's with how easy it is to get your hands on it and make literally anything you fancy with it. This is what a lot of the "AI for everything" crowd can't seem to grasp."
It's easy to look at the negatives of a technology and ignore its positives, especially one like AI.
Great point. Though the issue still lies in human intent, not technology.
Shaking up traditional education methods, like paper writing and grading, can lead to more efficient learning and more free time, as demonstrated by MOOCs and online universities. Exponentially growing online spam and disinformation might make the problem more obvious to people and recenter us, as humanity, on more credible information sources. We might need to adjust tax laws for companies that employ AI, but it could have positive effects. I think it's too early to catastrophize, even if I am sure the technology will be used with malicious intent by some.
[0] https://thehill.com/blogs/blog-briefing-room/3868557-most-yo...
[1] https://www.thecut.com/article/ai-artificial-intelligence-ch...
Whether we should even pursue this direction of creating artificial companions to replace human connection is a whole other can of worms.