
It's the current hotness in tech. HN has always included trending tech topics. Ever since HN started, I've seen -

* Explosion of social media

* JavaScript libraries and the frontend revolution that modernized the web and browsers

* Mobile apps

* Crypto

* Machine Learning

and now, AI & LLMs.

The only difference between LLMs and the others is the learning curve. With the others, one could easily hop on the trend. With LLMs, there's a lot to know before you can even understand what everyone's talking about. There can be a sense of being left out. I think that can be demotivating.

edit: formatting




For me the LLM topic is just all enveloping in a way the other trends haven't been.

Obviously there were a million crypto posts and it was annoying, but crypto covered a specific niche of overall software stuff (payment processing).

With LLMs it feels like every topic somehow has to come back to LLMs.

I currently work as a ML Eng at FAANG so maybe it adds to my exhaustion on the topic.


> For me the LLM topic is just all enveloping in a way the other trends haven't been.

Same here, for sure. I just try to dodge it all as best I can. Seems like every question has the possible answer of LLMs, and nearly always, someone provides it.


There's also this annoying dissonance between 'evangelists' and reality. Evangelists often feign that we're on the cusp of artificial general intelligence, while in practice LLMs remain stupid, error prone, and unreliable for anything beyond tasks that have a vast number of highly relevant sources in the training data. Which also somewhat dilutes the utility, because I could just as well find those sources myself!

Oh right, that must just be because I'm not giving it the magic prompt that makes them magically turn into geniuses.


The current trend in LLMs is synthetic data and inference-time reasoning. They've moved past relying on existing data sources.

The new problem is this only works when you can verify the answers.
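To make that concrete, here's a minimal sketch of the idea, not any lab's actual pipeline: generate candidate answers (a random stand-in for model sampling here) and keep only the ones a cheap verifier accepts. Arithmetic is the toy "verifiable domain"; the function names are made up for illustration.

```python
import random

random.seed(0)  # deterministic for the example

def generate_candidates(problem, n=8):
    # Stand-in for sampling n answers from a model (hypothetical);
    # here we just guess sums near the true one.
    a, b = problem
    return [a + b + random.randint(-2, 2) for _ in range(n)]

def verify(problem, answer):
    # The crux: a cheap, exact checker. Arithmetic has one;
    # open-ended tasks usually don't.
    a, b = problem
    return answer == a + b

def filter_synthetic(problems):
    # Keep only (problem, answer) pairs the verifier accepts.
    # Without verify(), wrong samples would pollute the dataset.
    kept = []
    for p in problems:
        for ans in generate_candidates(p):
            if verify(p, ans):
                kept.append((p, ans))
                break
    return kept

data = filter_synthetic([(2, 3), (10, 7), (100, 1)])
```

Every pair that survives is correct by construction; the hard part in the real world is writing `verify()` for anything fuzzier than math or code with tests.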


Sometimes I wish I had the drugs that would make me believe in things like "we're on the cusp of AGI", everything would be so much more exciting.


Cryptocurrencies became purely speculative assets, and their volatility ensured that they wouldn't be useful for payment processing. Has blockchain in general made any progress in payment processing?


I very much agree, just that if you were talking about, idk, something like compilers, people wouldn't barge in about crypto.

Any HN topic now just feels like a small jump to LLMs.

I recognize my comment above is a bit of a strawman; it's hard to recall exact posts / comments over the years.


> covered a specific niche of overall software stuff (payment processing)

At the time it was everything blockchain. Not just payments. Decentralized, smart contracts and the like.


It's true it did get pretty ridiculous and somewhat collapsed (at least in public interest).

I just don't see why we need to go through the incredible hype / death cycle every time.

LLMs are a useful tool if used properly, I hope I start seeing them used to create distinct value.


"Replicants are either a benefit or a hazard."—Deckard


It's deeply funny that they hit you with the ole "you don't understand AI" when it's your day job though. So you've got that going for you.


>With LLMs, before being able to understand what everyone's talking about, there's a lot to know. There can be a sense of being left out. I think that can be demotivating.

I find it to be the exact opposite. The idea of "Artificial Intelligence" as a thinking machine is fascinating. However, now that I've learned a certain amount about the current neural network paradigm, the marketing magic of it as an intelligent system is gone, and it is no longer interesting to me. These models are just some dry big-data statistical machinery to me.

I think many people find it interesting precisely because they don't understand it and think there is some magic in there. Of course, when the hype is stupid and people think the singularity is coming, then it sounds a lot more interesting.


The human brain is also dry big data statistical machinery.

The frontier models (of which R1 is an example) « think » in much the same way a human would (look at their chain-of-thought output). I think if you shut down LLMs in your head because you think you « understand » them and there's nothing interesting there, then you're blinded by hubris.


You literally have no proof for either of your statements.

This is the kind of current rhetoric that has me not coming to HN as often.

Any neurobiologist would laugh at the notion that the brain is big dry statistical machinery.

Classic case of engineers talking outside of their expertise.


>I think if you shut down LLMs in your head because you think you « understand » them and there’s nothing interesting there, then you’re blinded by hubris.

I am no expert, and I am well aware that even experts have much to learn about it. It is interesting in its own way, like statistics is too. I don't feel like that changes anything.


I am quite sure I do not « think » by generating a wall of text word by word, thankyouverymuch.


It's interesting that every entry in your bullet list is a tech that I find to have had an overall negative impact on society.

Every one of those examples is a general tech that could be used for better or worse, but that has almost exclusively been used to more effectively exploit users.

As is always the case, what's good for the VC investor is not necessarily good for everyone...


> The only difference LLMs have with the others is the learning curve

Half the posts about AI are either telling us how good prompts will make me a 10x engineer, or "I made this app in 5 minutes and I don't know how to code." At least for JS and, god forbid, crypto, there was some effort required.



