LLM is already a misnomer. The latest versions are multimodal, and current versions can be used to build agents with limited autonomy. Future versions will most likely be capable of even more independence.

Even dumb viruses have caused catastrophic harm. Why? They're capable of rapid self-replication in a massive number of existing vessels. Add in some intelligence, a vast store of knowledge, huge bandwidth, and some aid from malicious human actors, and what could such a group of future autonomous agents do?

More on the risks of “doom”: https://www.lesswrong.com/posts/xWMqsvHapP3nwdSW8/my-views-o...




I mean, a small group of malicious humans can already bioengineer a deadly virus with CRISPR and open-source tech, without AI.

This is hardly the first time in history a new technological advancement may be used for nefarious purposes.

It’s a discussion worth having as AI advances, but if [insert evil actor] wants to cause harm, there are many cheaper and easier ways to do it right now.

To come out and say we need government regulation today does smell at least a little of protectionism. Practically speaking, the “most evil actors” would not adhere to whatever is being proposed, but it would reshape the competitive landscape, and the corporations yelling the loudest right now have the most to gain. Perhaps that's a coincidence, but it's worth questioning.


> I mean a small group of malicious humans can already bioengineer a deadly virus with CRISPR and open source tech without AI.

That's what's interesting to me. People fearmongering about bioengineering and GMOs were generally dismissed as anti-science and holding humankind back (or worse, their opposition to progress meant they had blood on their hands). Yet many of the people who mocked them proved themselves to be even more dogmatic and apocalyptic, while being much closer to influencing regulations. And the technology they're fear-mongering about is even further from being able to harm people than biotech is. We are actually able to create harmful biotech today if we want; we don't know when we'll ever be able to create AGI, or whether it would even pose a danger if we did.

This mentality - "there could be a slight chance research into this could eventually lead to apocalyptic technology; no, I don't have any idea how, but the danger is so great we need a lot of regulation" - would severely harm scientific growth if we applied it consistently. Of course everyone is going to say "the technology I'm afraid of is _actually_ dangerous, the technology they're afraid of isn't." But we honestly have no clue when we're talking about technology that we have no idea how to create at the moment.


Counterpoint: CRISPR only reignited what was already a real fear that engineering deadly pathogens was getting easier and cheaper.

In fact, what you and GP wrote is baffling to me. The way I see it, biotech is obviously, self-evidently dangerous. Just look at the facts:

- Genetic engineering gets easier and cheaper and more "democratized"; in the last 10 years, the basics were already accessible to motivated schools and individual hobbyists;

- We already know enough, with knowledge accessible at the hobbyist level, to know how to mix and match stuff and get creative - see "synthetic biology";

- The substrate we're working with is self-replicating molecular nanotechnology; more than that, it's usually exactly the type that makes people sick - bacteria (because they're the most versatile nanobots) and viruses (because they're natural code injection systems).

Above is the "inside view"; for the "outside view", I'll just say this: the fact that the "lab leak" hypothesis of COVID-19 was (or still is?) considered one of the most likely explanations for the pandemic already tells you that the threat is real and the consequences are dire.

I don't know how you can possibly look at that and conclude "nah, not dangerous, needs to be democratized so the Evil Elites don't hoard it all".

There must be some kind of inverse "just world fallacy" fallacy of blaming everything on evil elites and 1%-ers who are Out To Get Us. Or maybe it's just another flavor of NWO conspiracy thinking, except instead of the Bilderbergs and the Jews it's Musk, Bezos and the tech companies.

The same goes, IMHO, for AI. Except AI is more dangerous because it's a technology-using technology - that is, where e.g. accidentally or intentionally engineered pathogens could destroy civilization directly, AI could do it by using engineered pathogens - or nukes, or mass manipulation, or targeted manipulation, or countless other things.

EDIT:

And if you ask "why, if it's really so accessible and dangerous, haven't we already been killed by engineered pathogens?", the answer is a combination of:

1. the vast majority of people not bearing ill intent;

2. the vast majority of people being neither interested in nor able (yet!) to do this kind of "nerdy thing";

3. a lot of policing and regulatory attention given to laboratories and companies playing with anything that could self-replicate and spread rapidly;

4. well-developed policies and capacity for dealing with bio threats (read: infectious diseases, and diseases in general);

5. this still being new enough that the dangerous and the careless don't have an easy way to do what, in theory, they already could.

Note that despite 4. (and 3., if you consider "lab leak" a likely possibility), COVID-19 almost brought the world down.


Great points. I'll just add a point 1.5: there's usually an inverse correlation between ill intent and competence, so the subset of people who both want to cause harm to others on a mass scale and are able to pull it off is small.


I’m not sure there is a way for someone to engineer a deadly virus while completely inoculating themselves against it.

Short-term AI risk likely comes from a mix of malicious intent and further autonomy that causes harm the perpetrators did not expect. In the longer run, there is a good chance of real autonomy and completely unexpected behaviors from AI.


Why do you have to inoculate yourself against it to wreak havoc? Your analogy of “nuclear war” also has no vaccine.

AI autonomy is a hypothetical existential risk, especially in the short term. There are many non-hypothetical existential risks, including actual nuclear proliferation and escalating great-power conflicts, happening right now.

Again, my point is that this is an important discussion but it appears overly dramatized: just as there are people screaming doomsday, there are equally qualified people (like Yann LeCun) screaming BS.

But let’s entertain this for a second: can you posit a hypothetical where, in the short term, a nefarious actor abuses AI or autonomy results in harm? How does this compare to non-AI alternatives for causing harm?


This gets countered by running one (or more) of those same amazing autonomous agents locally for your own defense. Everyone's machine is about to get much more intelligent.


“…some intelligence…” appears to be a huge leap from where we seem to be, though.



