
The politicians are gunning extremely hard for open source AI. It's crazy.

"Soros argued that synergy like that between corporate and government AI projects creates a more potent threat than was posed by Cold War–era autocrats, many of whom spurned corporate innovation. “The combination of repressive regimes with IT monopolies endows those regimes with a built-in advantage over open societies,” Soros said. “They pose a mortal threat to open societies.”

https://www.wired.com/story/mortal-danger-chinas-push-into-a...

Literally everyone out there who pursues global influence is just frothing at the mouth over AI. This is seriously tempting me to buy the 512GB Mac Studio when it comes out so I can run the big llama3 model, which will probably be banned any day now.




> The politicians are gunning extremely hard for open source AI. It's crazy.

I hope someone has done, or can do, some investigative journalism into their links with commercial / closed source AI; I can imagine the investors and those who benefit from companies like "Open"AI have close ties with politicians. There are probably no direct links, though; they've become really good at obscuring those and maintaining plausible deniability.


You'll hear about it in 10 years in an article about "why did open source AI die? ... once blossoming alongside closed source AI, open source AI disappeared after XXYYZ AI ACT. Turns out, Senator X and Senator Y were both in the pocket of closed source AI (and ended up in cushy jobs at Microsoft and xAI, not a coincidence) ... "


Regardless of your perspective on Musk, xAI is currently producing open source AI with permissive licensing, and seems very likely to continue open source releases in the near future.

https://github.com/xai-org

Microsoft, Amazon, OpenAI, and others are driving regulatory capture behind the scenes. The usual suspects are dropping all sorts of money on establishing control and rent-seeking - actual open source AI with end-user control makes it much harder for these asshats to extract money and exert influence over people, and they desperately want both. AI, like search, will be a powerful influence vector for politics and marketing.


My bad, I only used them as an example. I was unaware they were doing open source work.


I don’t really understand the long term plan, or maybe I don’t believe lawmakers understand where we are going long-term with this stuff.

We’re still in the very early days.

Unless the academic community really drops the ball, in 5 or so years they'll be training models of roughly the quality of the current state of the art on professors' research clusters (probably not just at R1 universities).

I’d be shocked if, in the long term, anyone who can get access to a library’s worth of text won’t be able to put together a usable model.

There’s nothing magical about our brains, so I imagine at some point you’ll be able to teach a computer to read and write with about as many books as it takes to teach a human. I mean, maybe they’ll be, like, 10x as dumb as us. A typical American might read hundreds of books over the course of their life; what are they going to do, require a license to own more than a couple thousand e-books?


> I don’t really understand the long term plan

The long term plan for any lawmaker is winning the next election. Anything further in the future doesn't matter much.

The long term plan for incumbents here might be building a large moat by regulatory capture.

Maybe incumbents are helping lawmakers. Do ut des (I give so that you may give).


> maybe I don’t believe lawmakers understand where we are going long-term with this stuff.

Wait, you're saying that a bunch of legislators who believe the Earth is 6000 years old may not have a valid perspective on complex technical matters? No. Say it isn't so.


I guess it always seems weird to me when they correctly see something as a rapid and dramatic change, but don’t play out the obvious trajectory, and then come up with restraints that only make sense given current technical limitations.


> I don’t really understand the long term plan

My guess: everything that slows down ai development is good, because it gives society time to adapt.

(I think this plan is flawed, because it is easier to adapt to open research than to closed research)


> A typical American might read hundreds of books over the course of their life

Probably only if you count books like Green Eggs and Ham.


Sure, but those kinds of books are explicitly intended as part of the path of learning to read, right?


It all depends on whether the AI proponents are right or not. If they're right, then of course it's a massive destabilizing threat. Even a weaker version, where there is no autonomy at all and it's all just the result of prompts, is going to be seriously destabilizing if it delivers on its promises. We really are not ready for a world of near-zero-cost fake everything.

On the other hand, like existing ITAR, this will manifest in extremely weird rules that have very little to do with actual safety.


If all you want is to be able to run it, but don't care about speed, you can run it on a Dell R720; they support hundreds of gigabytes of RAM. https://ollama.com/ makes it easy to download. They're pretty cheap compared to a Mac Studio. I got an R820 for a few hundred dollars; it has 256GB of RAM, with room for much more.
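
For what it's worth, once ollama is serving a model locally (CPU-only works, it's just slow), you can talk to it over its HTTP API from anything. A rough sketch in Python, assuming the default port and an illustrative model name:

    # Minimal sketch: stream a completion from a local ollama server.
    # Assumes ollama is running on its default port (11434) and the model
    # (name here is just an example) has already been pulled with `ollama pull`.
    import json
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3:70b", "prompt": "Explain ITAR in one paragraph."},
        stream=True,
    )
    # ollama streams newline-delimited JSON chunks; print the text as it arrives.
    for line in resp.iter_lines():
        if line:
            chunk = json.loads(line)
            print(chunk.get("response", ""), end="", flush=True)

On a RAM-heavy but GPU-less box like an R720 you'll be waiting a while per token, but it does run.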


Furthermore, you can get used versions of these pretty cheap on eBay. I bought some years back for experimenting with OpenShift in my homelab and was able to get some pretty insane hardware for $600 USD. The processors are slow, but it will run.



