Hacker News

> it's a shame that tech-weak showmen like Musk and Altman suck up so much discursive oxygen

Is it that bad, though? It does mean there's lots of attention (and thus funding, etc.) for AI research, engineering, etc. -- unless you are expressing a wish that the discursive oxygen were instead spent on other things. In which case, I ask: what things?




What things?

The pauses to consider if we should do <action>, before we actually do <action>.

Tesla's "Self-Driving" is an example of too soon, but fuck it, we gots PROFITS to make, and if a few pedestrians die, we'll just throw them a check and keep going.

Imagine the trainwreck caused by millions of people leveraging AI like those lawyers whose brief was written by AI and cited fabricated cases in support of its argument.

AI has the potential to make great change in the world as the tech grows, but it's being guided by humans. Humans aren't known for altruism or kindness (source: history), and now we're concentrating even more power into fewer hands.

Luckily, I'll be dead long before AI gets crammed into every possible facet of life. Note that AI is inserted, not because it makes your life better, not because the world would be a better place for it and not even to free humans of mundane tasks. Instead it's because someone, somewhere can earn more profits, whether it works right or not and humans are the grease in the wheels.


>The pauses to consider if we should do <action>, before we actually do <action>.

Unless there has been an effective gatekeeper, that has almost never happened in history. With nuclear weapons, the gatekeeper is that the work is easy to detect. With genetics, there's nearly universal revulsion, to the point that a large portion of most populations are concerned about it.

But with AI, to most people it's just software. And it pretty much is. If you want a universal ban on AI, you really are asking for authoritarian-style controls on it.


> But with AI, to most people it's just software.

Practical AI involves cutting-edge hardware, which is produced in relatively few places. AI that runs on a CPU will not be a danger to anyone for much longer.

Also, nobody's asking for a universal ban on AI. People are asking for an upper bound on AI capabilities (e.g. number of nodes/tokens) until we have widely proven techniques for AI alignment. (Or, in other words, until we have the ability to reliably tell AI to do something and have it do that thing, and not something entirely different and dangerous.)


Right, and when I was a kid, computers were things that filled entire office floors. If your 'much longer' is only 30-40 years, I could still be around then.

In addition, you're just asking for limits on compute, which ain't gonna go over well. How do you know if it's running a daily weather model or training an AI? And how do you even measure capabilities when we're coming out with new architectures, like transformers, that are X times more efficient?

What you want with AI cannot happen. If it's 100% predictable, it's a calculation. If it's a generalization function operating on incomplete information (something humans do), it will have unpredictable modes.


Is a Tesla FSD car a worse driver than a human of median skill and ability? Sure, we can pull out articles about tragedies, but I'm not asking about that. Everything I've seen points to cars being driven on Autopilot being quite a bit safer than your average human driver, which is admittedly not a high bar, but I think painting it as "greedy billionaire literally kills people for PROFITS" is at best disingenuous to what's actually occurring.


It is very bad. There's more money and fame to be made by taking these two extreme stances. The media and the general public are eating up this discourse, which polarizes society instead of educating it.

> What things?

There are helpful developments and applications that go unnoticed and unfunded. And there are actual dangerous AI practices right now. Instead we talk about hypotheticals.


Respectfully, I don't think it's AI hype that is "polarizing the society".


They're talking about shit that isn't real because it advances their personal goals, keeps eyes on them, whatever. I think the effect on funding is overhyped -- OpenAI got their big investment before this doomer/e-acc dueling narrative surge, and serious investors are still determining viability through due diligence, not social media front pages.

Basically, it's just more self-serving media pollution in an era that's drowning in it. Let the nerds who actually make this stuff have their say and argue it out, it's a shame they're famously bad at grabbing and holding onto the spotlight.


The "nerds" are having their say and arguing it out, mostly outside of public view, because the questions are too nuanced or technical for a general audience.

I'm not sure I see how the hype intrudes on that so much?

It seems like you have a bone to pick and it's about the attention being on Musk/Altman/etc. but I'm still not sure that "self-serving media pollution" is having that much of an impact on the people on the ground? What am I missing, exactly?


My comment was about wanting to see more (nerds) -> (public) communication, not about anything (public) -> (nerds). I understand they're not good at it, it was just an idealistic lament.

My bone to pick with Musk and Altman and their ilk is their damage to public discourse, not that they're getting attention per se. Whether that public discourse damage really matters is its own conversation.


Just to play devil's advocate to this type of response.

What if tomorrow I drop a small computer unit in front of you that has human level intelligence?

Now, you're not allowed to say humans are magical and computers will never do this. For the sake of this theoretical debate it's already been developed and we can make millions of them.

What does this world look like?


> What does this world look like?

It looks imaginary. Or, if you prefer, it looks hypothetical.

The point isn't how we would respond if this were real. The point is, it isn't real - at least not at this point in time, and it's not looking like it's going to be real tomorrow, either.

I'm not sure what purpose is served by "imagine that I'm right and you're wrong; how do you respond"?


Thank god you're not in charge of military planning.

"Hey the next door neighbors are spending billions on a superweapon, but don't worry, they'll never build it"


On some things that is not a bad position: the old SDI program had a lot of spending but really not much to show for it, while at the same time forcing the USSR into a reaction based on what today might be called "hype".


The particular problem arises when both actors in the game have good economies and build the superweapons. We happened to somewhat luck out that the USSR was an authoritarian shithole that couldn't keep up, yet we still have thousands of nukes lying about because of this.

I'd rather not get in an AI battle with China and have us build the world eating machine.


SDI's superweapons remained by-and-large a fantasy, though. Just because a lot of money is pouring in doesn't mean it will succeed.


> What if tomorrow I drop a small computer unit in front of you that has human level intelligence?

I would say the question is not answerable as-is.

First, we have no idea what it even means to say "human level intelligence".

Second, I'm quite certain that a computer unit with such capabilities, if it existed, would be alien, not "human". It wouldn't live in our world, and it wouldn't have our senses. To it, the internet would probably be more real than a cat in the same room.

If we want something we can relate to, I'm pretty sure we'd have to build some sort of robot capable of living in the same environment we do.


> "What if tomorrow I drop a small computer unit in front of you that has human level intelligence?"

What if tomorrow you drop a baby on my desk?

Because that's essentially what you're saying, and we already "make millions of them" every year.


If I drop a baby on your desk you have to pay for it for the next 18 years. If I connect a small unit to a flying drone, stick a knife on it, and tell it to stab you in your head then you have a problem today.


Very bad. The Biden admin is proposing AI regulation that will protect large companies from competition due to all the nonsense being said about AI.


> The Biden admin is proposing AI regulation that will protect large companies from competition

Mostly, the Biden Administration is proposing a bunch of studies by different agencies of different areas, plus some authorities for the government to take action regarding AI in certain security-related areas. The concrete regulation is mostly envisioned to be drafted based on those studies. The idea that it will be incumbent-protective is mostly based on the fact that certain incumbents have been pretty nakedly tying safety concerns to proposals to pull up the ladder behind themselves. But the Administration is, at a minimum, resisting the lure of relying on those incumbents' presentation of the facts and alternatives out of the gate. It is also taking a more expansive view of safety and related concerns than the incumbents are proposing (expressly factoring in some of the issues that they have used "safety" concerns to distract from), so I think prejudging the orientation of the regulatory proposals that will follow the study directives is premature.


What I have heard from people I know in the industry is that the proposal they are talking about now is to restrict all models over 20 billion parameters. This arbitrary rule would be a massive moat to the few companies that have these models already.
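For a sense of how easy such a threshold is to compute (and how arbitrary it is), a decoder-only transformer's parameter count can be roughly estimated from its hyperparameters. This is a back-of-the-envelope sketch; the layer sizes below are illustrative assumptions, not the hyperparameters of any specific model:

```python
def estimate_transformer_params(n_layers, d_model, vocab_size, d_ff=None):
    """Rough parameter count for a decoder-only transformer.

    Per layer: attention projections (4 * d_model^2 for Q, K, V, output)
    plus the feed-forward block (2 * d_model * d_ff).
    Biases and layer norms are ignored; they are a rounding error at scale.
    """
    if d_ff is None:
        d_ff = 4 * d_model  # common convention for the FFN width
    per_layer = 4 * d_model**2 + 2 * d_model * d_ff
    embeddings = vocab_size * d_model
    return n_layers * per_layer + embeddings

# Hypothetical hyperparameters, chosen only to illustrate the threshold:
total = estimate_transformer_params(n_layers=48, d_model=6144, vocab_size=50000)
print(f"~{total / 1e9:.1f}B parameters")  # lands just over a 20B cutoff
```

The point is that any such bright line invites gaming: shave a layer or narrow the FFN and the same rough capability slips under the cap.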


Alternatively:

there is nonsense being said about AI so that the Biden admin can protect large companies from competition


Yup. I continue to be convinced that a lot of the fearmongering about rogue AI taking over the world is a marketing/lobbying effort to give early movers in the space a leg up.

The real AI harms are probably much more mundane - such as flooding the internet with (even more) low quality garbage.



