Playing with Fire – ChatGPT (steveblank.com)
35 points by pagutierrezn on April 4, 2023 | 49 comments



> This year with the introduction of ChatGPT-4 we may have seen the invention of something with the equivalent impact on society of explosives, mass communication, computers, recombinant DNA/CRISPR and nuclear weapons – all rolled into one application.

I just can't stand this kind of language. ChatGPT is quite useful, but have you tried asking it something serious that isn't Twitter-worthy? We are not there yet. And in any case, this is not the first superhuman tool that humans have made. Nukes have existed for 70 years and have probably become much more accessible. Biotech could create thousands of humanity-ending viruses today. These are fears that we live with and will forever live with, but we can't live our lives only in fear.


> Nukes have existed for 70 years and have probably become much more accessible. Biotech could create thousands of humanity-ending viruses today. These are fears that we live with and will forever live with, but we can't live our lives only in fear.

Building nukes and bioweapons isn't as good a business model as AGI, though. The government was incentivised to at least take some precautions with nukes. Nukes can't be developed and launched by individual bad actors. AGI isn't that comparable to nukes for numerous reasons. Bioweapons maybe, but I wouldn't support companies researching bioweapons without regulation.

It's not a choice between living in fear and going full steam ahead. Both are idiotic positions to take. The reasonable approach here would be to publicly fund alignment research while slowing and regulating AI capability research to ensure the best possible outcomes and minimise risk.

You're basically arguing in favour of a free market approach to developing what has the potential to be a dangerous technology. If you wouldn't allow the free market to regulate something as mundane as automobile safety, then why would you trust the free market to regulate AI safety?

Companies that wish to develop state-of-the-art AI models should be required to demonstrate they are taking reasonable steps to ensure safety. They should be required to disclose state-of-the-art research projects to the government. They should also be required to publish alignment research so we can learn...


> Building nukes and bioweapons isn't as good a business model as AGI, though

I agree. It's quite possible that humanity-ending AI is also not a good business, don't you agree?

I think the whole apocalypse discussion is a premature distraction for the moment. A more important discussion is what kinds of AI will end up making money. We have already seen how the internet turned from an infinite frontier into a more modern version of TV, dominated by a few networks with addictive buttons. Unfortunately we will see the same with AI, because such is the nature of money today, and capitalism is one thing that AI will not change. The applications of AI that make the most money will dominate, to the detriment of applications that only benefit small groups of people (such as the disabled).

> to publicly fund alignment research while

We don't really know if alignment research is what we need. Governments should fund AI research in general; otherwise it would be like the EU's early attempts to regulate AI. In fact, any kind of funding of AI ethics at the moment is dubious because the field is changing so fast. Stopping it for six months will not solve those ethical issues either; it will just delay their obsolescence by six months. This is stupid on the face of it.


For public funding in AI research to work it would need to overwhelm private research AND not be exploited by bureaucrats.

Neither of these seems remotely realistic.


Excellent point about living in fear.

Though take the examples - nuclear weapons and biotech - as you say, both have huge potential for harm.

However both are regulated and relatively inaccessible to the average person.

While training models like ChatGPT is still relatively inaccessible for the average person, using them is potentially not.

One of the features of software is the almost zero cost of copying - making proliferation much more of an issue than for nukes or custom-made viruses [1]

ChatGPT is over-hyped of course, but I think the genie and bottle issue is more real here than for military tech or biotech.

Having said all that I do think the solution is largely around applying existing laws to these new tools.

[1] OK, if they escape, then they can self-replicate...


There's the optimistic scenario that GPT-17 will build me a spaceship to escape this blue planet and its nuclear dangers.


Directly or in the Jeff Bezos sense of making you enough money? :-)


If anything, this might be like a Web 1.0.

It is better than the promises we had in the 1980s, after which we went through an AI winter.

But it is going to take some time for people and corporations to figure out if it is all hype, the next crypto, or if there are some real applications for this new technology.

Look at the cloud: S3 was launched in 2006, but you did not see much about it in Harvard Business Review until 2011. And even then, it was potential promises of what the cloud could do. Things did not really pick up until 2016.


I asked it how to transition from nation states to local ownership at scale and was very happy with its answer. Better and more comprehensive than I think anyone around me would have answered - in 5 seconds - and it introduced me to new concepts like time banks and community currencies, which I could ask follow-up questions about.

I think it’s truly mind blowing a computer can now simulate some of the best conversations I’ve ever had on a variety of topics.


How about how to go about rolling back the Citizens United decision?

That would be new, useful, but not really twitter-worthy.

1. https://en.wikipedia.org/wiki/Citizens_United_v._FEC


Soon enough they'll argue it's a better invention than oxygen


Oxygen wasn't invented.


Well - apparently according to a newly published theory by Stephen Hawking - it evolved.


I'm sorry, but that statement is not accurate. Oxygen is an element that exists naturally in the universe, and it was not invented or created by humans or any other living organism. However, it is true that oxygen has played a crucial role in the evolution of life on Earth. Photosynthesis, a process by which plants and other photosynthetic organisms produce oxygen, has had a profound impact on the composition of Earth's atmosphere and the development of complex life forms.


Nope - because you jumped to a wrong assumption.

I was referring to the new idea that the laws of physics were not set at the dawn of the universe, but rather evolved - and as the existence of oxygen depends on the laws of physics - ergo oxygen evolved.


>These are fears that we live with and will forever live with, but we can't live our lives only in fear.

But we can't lie to ourselves about reality in order to prevent fear either.

The opinions of everyone from Elon Musk to Sam Altman to even Geoffrey Hinton, the person who started it all, are actually in line with the blog post.

Hinton even says things like: these ChatGPT models can literally understand things you tell them.

Should we call climate scientists fear mongers because they talk about a catastrophic but realistic future? I think not, and the same can be said for the people I mentioned.

I personally think these experts are right, but you are also right in that "we are not there yet". But given the trajectory of the technology for the past decade we basically have a very good chance of being "there" very soon.

AGI that is perceptually equivalent to a person more intelligent than us is now a very realistic prospect within our lifetimes.


> Should we call climate scientists fear mongers

But they have evidence, measurements and a quantitative model etc.

Where is the AGI FUD people's evidence? It's largely very opinionated arguments of rectal origin. But modern AI is a quantitative model that is completely known and can be readily analyzed. If there is some proof, or even substantial quantitative or empirical evidence, that those numbers are imminently dangerous, then we are talking.


>But they have evidence, measurements and a quantitative model etc.

There's no evidence for something that hasn't happened yet. We have a model for increasing temperature, but even this is not entirely accurate. Did we predict the heavy rain in CA as a result of warming? The evidence is somewhat solid, but there is an aspect to it that is speculative as well. What we do know is that huge changes will occur in the climate.

Additionally, can we make a projection about the climate with the alternative-energy initiatives in place? Not an accurate one. We don't have a mathematical model that can accurately predict what will happen. We may have models, but those models will likely be off.

The effects of climate change on civilization are the main course here. These claims are basically all pure speculation. We have no idea what's going to happen with rising temperature and how it will change society as we know it. Should we just clamp down on all speculation and doom-saying when it could be a realistic scenario?

For AI there's tons of evidence about the increase in capabilities. If you want to quantify it into a model, though, you would have to create some sort of numerical scale: say 1 is logic gates, 2 is math calculations, 3 is chess AI, and so on and so forth. Each number on the scale is some milestone of intelligence that must be surpassed by machines.

If you graph this scale on the Y axis with time as the X axis, you get an increasing graph, where milestone after milestone is surpassed over time. You may get some AI "winters", but overall the trend is up, and from this projection the evidence, while still highly speculative, is very similar to the climate model in terms of an increasing projection. If AI capabilities continue to increase indefinitely as the projections show, you eventually hit the AGI point on that scale.
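
To make that concrete, here's a toy sketch of that kind of naive projection (every number here - the milestone values, the years, the "AGI level" - is made up purely for illustration; it's the shape of the argument, not data):

    # Naive milestone-extrapolation sketch. All data points are hypothetical,
    # chosen only to illustrate fitting a trend and projecting it forward.
    import numpy as np

    years = np.array([1940, 1960, 1997, 2012, 2016, 2023])  # when each milestone fell
    milestones = np.array([1, 2, 3, 4, 5, 6])               # 1 = logic gates ... 6 = ChatGPT-level
    AGI_LEVEL = 10                                           # arbitrary "AGI" point on the scale

    # Fit a straight line (milestone ~ year) and solve for when it crosses AGI_LEVEL.
    slope, intercept = np.polyfit(years, milestones, 1)
    print(f"Trend crosses the 'AGI' level around {(AGI_LEVEL - intercept) / slope:.0f}")

The scale and the linear fit are arbitrary choices, which is exactly the point: it's a model in the same loose sense as any other projection.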

I mean, this is what a model is. Typically common sense is enough here, but since you want a model, you just do this and boom, your common sense is now a numerical model, and you have your "evidence", plastered with enough technical numbers and graphs to satisfy your desire to feed your rectum with "numbers" and "science", as if that's all there is to logic, reasoning and evidence.

There's your evidence of AGI, just as strong as the climate-change evidence in terms of projection. We know temperatures will increase, and we know the capabilities of AI will increase over time. And the speculation about the apocalyptic effects on society from powerful AI? Same as the speculation about a climate-change apocalypse. All made up, but all within the realm of probability and realism.

The difference between climate change and AGI is that AGI had an observable inflection point with ChatGPT. The sudden shift was so drastic and jarring that we get a lot of people like you who just want to call everything BS, even if it's a realistic possibility. With climate change it's like: yeah, apocalyptic temperature changes are just around the corner, you'd be stupid not to agree, but you're still driving your car and using energy that causes global warming.

It's like we're honest with ourselves about the climate, but we don't act honestly because the doom encroaching on our society is happening really slowly. Too slowly to make us act. Just handle it later.

With AGI the change was so sudden and drastic that we can't even be honest with ourselves. What if I spent years honing my software engineering skills... does all that skill go to waste? I have to lie to myself in order to justify all those years I spent honing my craft. I have to suppress the speculation even if it's a realistic projection.


Agreed that all models are wrong, but some model is better than no model and arbitrary FUD. Climate alarmists at least have a model of the dangers.

> For AI there's tons of evidence about the increase in capabilities

That is not evidence that AI will destroy humanity. The fact that AI is increasing in capabilities also means that it is increasing its capability to align with humans, no? I don't get why the reverse is considered the sole and inevitable conclusion.

I also don't agree about ChatGPT being the inflection point. The capabilities of GPTs were known for years, but ChatGPT popularized them because it made them so easy to use. If these scientists failed to see the capabilities of the model, it was because they did not care until the media brought it up. That means they are not very good scientists.


>I also don't agree about ChatGPT being the inflection point. The capabilities of GPTs were known for years, but ChatGPT popularized them because it made them so easy to use

You mean LLMs, not GPTs. ChatGPT was an inflection point in terms of publicity, but also in terms of the additional reinforcement training that made the model highly usable. ChatGPT was the first model that was incredibly usable (and I don't mean usable in terms of a GUI or UI, but usable in the sense that the AI was actively trying to assist you).

>That is not evidence that AI will destroy humanity. The fact that AI is increasing in capabilities also means that it is increasing its capability to align with humans, no? I don't get why the reverse is considered the sole and inevitable conclusion

The article didn't say humanity will be destroyed by AI. It's more that society will dramatically shift, and it implies that a lot of humans will suffer as a result. The shift is dramatic enough that, while "apocalyptic" may be a bit too extreme a word, it's not entirely unfitting.


I get the concern expressed, but the fear-mongering is getting a little much these days. Innovation can be scary, and at this time, people are making assumptions based purely on things that we do not know. How this will impact the future of business and technology has yet to be determined. Only time will tell.

We must be careful as we chart this new scary world of large language models and Artificial Intelligence and their impacts on humanity, but we do need to slow down on using scare tactics.

Please note I do not fault the author or anyone else for this representation of these new technologies. Nonetheless, I find it counterproductive to our discussions about setting guidelines and ensuring accountability in developing these models and their use.

Right now, it sounds more like the CRISPR discussion all over again.

My 2 cents for what it is worth.


There are similarities, sure. But there are also stark differences. Due to the existence of ChatGPT, the GPT-3 API, and the general viability of natural language prompting, LLMs are now essentially commoditised. They are now in the hands of orders of magnitude more people. Barring sector-specific regulations, people are free to iterate (with varying degrees of care, ethical consideration, and success) at a much faster pace compared with the field of medicine, or even academia in general, where there’s non-zero involvement of ethics committees.

At DAYJOB we already have immense domain expertise to tune GPT-3 and prove its reliability in our sector. For giggles I also implemented an incredibly naive approach to a problem we set out to solve, and still ended up with a result that’s considered very impressive, and is usually the sort of thing many companies have spent countless hours working toward. My sector certainly won’t be an edge case. And we all know that everyone and their dog is trying to see how GPT-3 can deliver value. It’s all happening at the same time, and very quickly.

As someone that’s generally quite jaded and skeptical of new technologies, my experience in my day job has completely changed my perspective. At this stage I’m willing to go out on a limb and say that this is going to be quite disruptive to labour markets at the very least. And this itself could very well be at the level where it raises serious ethical and societal questions. I’ll happily eat humble pie if I’m wrong.


The point about this being more generally available for tinkering is fair, and from my experiments and usage, I can state it is impressive as absolute hell. However, we first need a discussion focused on how we, as industry groups, work on at least trying to manage the proliferation of these technologies.


> Only time will tell.

The experiments to determine the answers must not be the sole purview of corporations. Executives of corporations have a fiduciary duty only to the shareholders.

So a completely liberal approach to traversing the space of pervasive AI in society, with a stated 10% probability of catastrophic results (the number is per Sam Altman), cannot be left to a decision-making process that only seeks to maximize profits.

To "be careful as we chart" decisively means it can not be treated as a mere innovation to be subjected to market forces. That's really the only fundamental issue. This isn't a 'product' and 'market' may happily seek a local maxima which then leads to the "10%" failed state. That's it. Address that and we can safely explore away.

So not fear mongering. Correctly categorizing.


>Innovation can be scary, and at this time, people are making assumptions based purely on things that we do not know.

Here's the thing. Before ChatGPT, it was pretty much a given that society was at more or less zero risk of losing jobs to AI.

Now with GPT-4 that zero risk has changed to an unknown risk.

That is a huge change and it is a change that would be highly unwise to not address.

I agree that only time will tell. But as humans we act on predictions of the future. We all have to make a bet on what that future will be.

Right now this blog post describes a scenario that, although speculative, is also very realistic. It is, again, unwise to dismiss the possibility of a realistic scenario.


I don't get the call to action at the end - a 6-month moratorium on R&D to focus on safety.

That 6 month call is driven by people who write fanfic about AI.

There's been active research in AI safety for years and years and it hasn't been without controversy, but these groups have done far more to ensure safety in its various forms exists than the fanfic authors. I think that a 6 month pause of "GPT-5" doesn't accomplish anything other than further fuel radicals who buy into the fanfic to take action that harms people who work in AI.


The National Academy of Sciences must not only take a leading role here in creating the platform for discussions tasked with advising the government and the public, it must also spearhead the creation of a national MI infrastructure for public use.

The Department of Energy already runs many high-tech national laboratories, and we need a Sandia or Los Alamos for AI, for national, public use.


There's potential for manipulation here by the government.

I would qualify your post with open source usage of the model, training algorithms and training data as well.


That has to be a part of it, of course.

Instead of us hoping for Facebook or whoever to grace us with weights, for example, or having debates over copyrights, fine, let's have the US government (just like the efforts that gave us the internet...) put together a program for national academies and laboratories, research universities (such as Stanford), and the private sector to work together.

For example, such a program can then ensure that the most up-to-date gigantic model X has a public variant with public weights, with regulations for usage by public and private interests.

And the issue of hoovering up content to create private models becomes moot, or at least far less problematic.


I'm not sure I believe we're quite there yet with GPT-4, but let's suppose that we are. All of those other, potentially dangerous technologies mentioned have close government supervision that surrounds them: Nuclear has the Department of Energy, non-proliferation treaties, test ban treaties and much more. Biotech has the FDA and HHS and tons of regulation like GxP, ICH, HIPAA, and much more. But what does Artificial Intelligence have? ITAR? I think the party is over, fellas. It's time for a new Federal department. Let's call it the Artificial Intelligence Administration (AIA). Time to take control of this technology before it takes control of US.


I hate to be “that guy,” but every time I go to a website that is not mobile friendly, I immediately discount it and close the tab.

There’s something in my head that thinks, “This writer is out of touch” (even if they are not).

I admit my logic may be faulty.


Reader mode?


Yeah I don’t even see websites now, only the reader. I do have a tendency to close websites where the reader does not work though.


Turn on TTS. Quit bitching. jk =p


It’s really surprising to me the amount of doubt that’s been voiced over the last few weeks that a technology could possibly be dangerous.

For me the perspective is straightforward: even if ChatGPT is not it, there is the physical possibility of a relatively small improvement on human intelligence, just as we're a relatively small improvement on chimps, or on Neanderthals. That's just simple for me to get my head around.

Along with that, there are easy-to-follow "monkey's paw" scenarios: the easiest way to end poverty is to make all humans extinct; the easiest way to end suffering is to extinguish life on Earth. I can't quite formulate a straightforward way to eliminate suffering while maximizing my humanist values. This is the alignment problem.

We've got Yann LeCun saying that slowing down or thinking about safety would just mean the Chinese get ahead. He's also saying we understand LLMs more than we understand airplanes.

We’ve got people completely ignoring past examples of technological destruction or technological safety like nonproliferation or Asilomar.

We've got people saying GPT is simultaneously revolutionary and going to change everything, thus it's critical we forge forward… but also too dumb to change anything (makes up info, etc.), and thus we should not be concerned with safety.

What is it about our field that is so gung ho? Are these all bad faith FOMO arguments? It’s hard to understand.

——

The one way I can make sense of it is as a religious experience. Our culture has deep, persistent roots in Christian eschatological mythology, and of course the coming of a benevolent next wave of intelligence slots into this nicely. Taleb states this clearly [0]: those who are pure of heart will be welcomed into the kingdom of heaven. Not a huge fan of this style of accidental religiosity.

[0] https://twitter.com/nntaleb/status/1642241685823315972?s=20


Great piece. Although I do not agree with "labs keeping safe" - look at what happened with the pandemic. Or perhaps the safety measures that are currently in place need to be redefined. The world is in conflict; everyone is in a race. Dominance is at play. I think it is silly to even consider halting AI development. The faster we reach the maximum output, the quicker we will realize the breaking points.


The best thing we can do to ensure the safety of AI is to keep it possible for a user to run the models on their own hardware.

The biggest danger from AI that I see is that the models will only be able to be run by large corporations/governments, and we the users will be at their mercy with regard to what we are allowed to use them for.


And right now if you want to use ChatGPT, you have to give "Open"AI your phone number and email.


tbf it's the only way to avoid people creating an unlimited number of free accounts to bypass the free-tier limitations.

I used Midjourney for free until now with an army of Discord accounts because they simply didn't have any checks.


Could someone not worried about AGI please explain their position?

Specifically, what makes you so confident that someone won't end up creating an AGI that's unaligned? Or alternatively, if you believe an unaligned AGI might be created, why are you confident that it won't cause mass destruction?

I guess the way I see this is that even if you believe there is only a 5-10% chance that AGI could go rogue and, say, take out global power grids, why is this a chance worth taking? Especially if we can slow capability progress as much as possible while funding alignment research?


Disaster capitalism!

Seriously though, if you are interested in addressing the real, present-day harms of large language models (and the capitalists who deploy them), this letter is just the thing:

Statement from the listed authors of Stochastic Parrots on the “AI pause” letter, by Timnit Gebru (DAIR), Emily M. Bender (University of Washington), Angelina McMillan-Major (University of Washington), and Margaret Mitchell (Hugging Face):

https://www.dair-institute.org/blog/letter-statement-March20...


They are stopping only just short of saying that myopism is The One True Faith.

(They literally call out "Longtermism" as being elitist and the root of all evil.)

I mean, sure, one should look at one's feet from time to time to make sure one doesn't trip. However, these people come across as exclusively myopic and uncompromising in their position at that.


It seems to really get to you.


A little, sure. Both the tone and the literal wording are a bit narrow-minded, aren't they?


Well, could you clarify, is it

>exclusively myopic

or is it

>a bit narrow-minded

?

I see these women as more open and accurate about the most immediate risks and ethical issues, so I see them as justified in being opinionated and expressing those opinions directly and frankly.


Huh. So "longtermism" is apparently a term of art, that may not mean the same thing as "long term thinking". (If you reread what I said in the context of me not realizing that, you'll probably grok how I came to my conclusion). I'll have to look more closely. :-/


It's a specific ideology that has ties to white supremacists and other racists.


You can tie almost anything to anything if you squint hard enough, and I'm immediately on the qui vive when someone tries to make such a claim. I'd have to read into it in more detail to be sure. In the meantime, apologies for the initial confusion, and consider me somewhat warned.


The letter links to an exposé on the racist ideology behind longtermism. You don't have to take my word for it.



