> I can appreciate that. I haven't been listening to him for that long; having no idea what situation you're discussing, I'll take it at face value.

https://m.youtube.com/watch?v=2HMPRXstSvQ

We are currently facing the biggest radiologist shortage in the last 30 years.

> It’s the unknowns which are dangerous.

Electricity, the Industrial Revolution, the internet, and gene editing/bioengineering also came with unknown existential risks.

> We’re inventing something new in almost every way imaginable.

> I think people assume advancements will accelerate, actually.

This is highly debatable.

> we're talking about the potential to build a new life form with godlike abilities.

Evidence? This has been stated in sci-fi books from before I was born and I don’t see any proof that we’re building something remotely close to this.




> Electricity, the Industrial Revolution, the internet, and gene editing/bioengineering also came with unknown existential risks.

In a way, all of those things have led to where we are now. And might I point out that a form of bioengineering may have caused the latest global pandemic? Not taking these things seriously because "they haven't killed us all yet" seems a little shortsighted.


"not taking these things seriously" is very different from regulating something in it's infancy because of fear of the unknown.

> And might I point out that a form of bioengineering may have caused the latest global pandemic?

Even accepting this hypothesis as true, is the answer that we should have regulated cell culture in the 50s when HeLa cell culture became a thing?

Would that have prevented a nation-state from potentially causing a pandemic?


> "not taking these things seriously" is very different from regulating something in it's infancy because of fear of the unknown.

I mean, we should have regulated carbon emissions better generations ago. We'd have had a lot more time to deal with it, given that we still haven't managed to deal with it in the century-plus we've known about it. And AI extinction risk is likely to move faster than carbon-related climate change.


> "not taking these things seriously" is very different from regulating something in it's infancy because of fear of the unknown.

I don't pretend to have the answer of what the balance is for oversight and regulation to optimize safety and innovation - but the first step for those responsible is to recognize the potential dangers and unknowns, and produce plans that can be discussed and debated.

> Even accepting this hypothesis as true, is the answer that we should have regulated cell culture in the 50s when HeLa cell culture became a thing?

How about gain-of-function research? How about nuclear weapons? I lean libertarian but also recognize we have a responsibility to safeguard humanity.


> Electricity, the Industrial Revolution, the internet, and gene editing/bioengineering also came with unknown existential risks.

Did they? I've never heard about electricity creating fears about human extinction.

The industrial revolution does pose an existential risk: climate change.

Did bioengineering garner worries about existential risk? It certainly had and has a lot of people worried about various risks. And then lots of countries banned human cloning: https://en.wikipedia.org/wiki/Human_cloning#Current_law

> Evidence? This has been stated in sci-fi books from before I was born

Yes, many sci-fi authors of the past were very forward-thinking and predicted future technologies... that's sort of the point.

> and I don’t see any proof that we’re building something remotely close to this

Again, this is a fundamental misunderstanding of the risk.

1. We are building AI. AI is possible.

2. We are advancing AI. AI advancement is possible.

3. We have not built superhuman AGI. But superhuman AGI is probably possible.

The fact that AGI doesn't exist today is frequently argued as a reason we won't have it in the near future, but it's a non-sequitur. We will definitely have more advanced AI in 5 years than we do today, and we can't say what that AI will be capable of. Therefore, it's possible it will be AGI.


> The industrial revolution does pose an existential risk: climate change.

And yet, until much later when we understood the mechanisms and could evaluate mitigations, any restriction on industrialization based on purely speculative “existential risk” that we could neither adequately explain nor provide a factually-grounded framework for evaluating relative risks of alternatives would most likely not have made things better, and could have made them (and even the real existential risk, as well as experienced conditions at the time) much worse.

It’s true that once we understood the concrete mechanisms and could evaluate alternatives, not mitigating the risk has been irresponsible, but that was…significantly later than the Industrial Revolution itself.


> And yet, until much later when we understood the mechanisms and could evaluate mitigations, any restriction on industrialization based on purely speculative “existential risk” that we could neither adequately explain nor provide a factually-grounded framework for evaluating relative risks of alternatives would most likely not have made things better, and could have made them (and even the real existential risk, as well as experienced conditions at the time) much worse.

Really? I don't agree that strictly limiting plastic production and carbon emissions 100 years ago would have had zero or negative effect on the timeline of apocalyptic climate change or any other existential threat. It might have made us more vulnerable to natural pandemics... but it also would have slowed the emergence of natural pandemics and virtually prohibited artificial pandemics.


> Really? I don't agree that strictly limiting plastic production and carbon emissions 100 years ago would have had zero or negative effect on the timeline of apocalyptic climate change

Neither do I, but then, 100 years ago doesn't clearly meet that description anyhow. The discovery of the greenhouse effect and CO2’s role in it came not long after the dates usually given for the end of the (first) Industrial Revolution (which is usually what is meant without modifiers, not the second through fourth industrial revolutions), and well over 100 years ago.


What are you even arguing now?

Replace "100 years" with whatever timeframe is perfect for you and I'll make the same argument. Please respond to that instead of nitpicking.


> Replace "100 years" with whatever timeframe is perfect for you and I'll make the same argument

At any time before we had the information to understand the mechanism of the problems and evaluate mitigations, it's very unlikely that we would have chosen the right mitigations. Sure, if we had chosen the mitigations that we would choose based on today's information before that information was available, that would be great, but it's at least as likely that any attempt to mitigate a risk of unknown mechanism before we had that information would have done exactly the wrong thing.

Basically, “Well, I have information today by which I can design a policy which would have been beneficial if implemented before that information was available” is not an argument that people seeking to mitigate a vaguely imagined risk without any understanding of its mechanism would have the means to design a productive intervention.


Ok, that makes more sense.

I'm saying that with what we know now, it's clear we could have made different choices earlier in industrialization that would have worked out better for us.

You're saying that the information at the time made it impossible to correctly make the optimal choice. (By that I mean, making the optimal choice would have appeared to be the wrong choice, based on information available.)

I don't agree with your point; I think we knew that burning carbon was going to change the atmosphere for the worse and we could have been more careful by simply burning less carbon. However, I concede that it's unknowable what science and technology would look like if we had done that.

In analogy to AI, that is very much a point of conversation in alignment/safety discussions. Should we wait until we have better tools to figure out alignment, since it's so insanely difficult to figure out now? Frankly, nobody knows for sure the answer to that question either. My contention is that you (and others) are assuming the answer to that question is yes, but that's not necessarily correct, even based on all the information and analogies we have now.


> 3. We have not built superhuman AGI. But superhuman AGI is probably possible.

"Dumb" AGI is yet to be proven theoretically possible let alone superhuman.


It hasn't been proven possible, but it's reasonable to think it's possible. There's no theory I know of saying that it's unlikely to be possible. Compare FTL travel, for example, where we have enough theory to say it's not likely to be possible.



