Hacker News new | past | comments | ask | show | jobs | submit | borggambit's comments login

I think this is just wrong.

Any "understanding" is a mirage.

Case in point. Prompt: "Is it possible to combine statistical mechanics, techno music production and pizza?"

If the model had even the slightest understanding of the world, the answer would just be no. Instead, gpt4o:

"Yes, it is possible to combine statistical mechanics, techno music production, and pizza, though it might require some creative thinking. Here’s how these three seemingly unrelated things could be connected" then gives a list of complete nonsense.

The trick is that it can't say no because it doesn't understand ANYTHING.

It has no understanding of the difference between combining pizza with two completely disparate nonsense subjects and combining pizza with two other food items. The latter would at least appear, in the response, to draw on a mirage of high-dimensional "understanding" of food from the data.


I just asked chatgpt-4o and the answers were perfectly logical, although not creative at the level of a creative human (but many humans are not that creative either).

For example one of the outputs:

"Host an event where statistical mechanics concepts are explained or demonstrated while making pizzas, all set to a backdrop of live techno music. The music could be dynamically generated based on real-time data from the pizza-making process, perhaps using sensors to monitor heat, time, or the distribution of toppings, with this data influencing the techno tracks played."

It's not doing such a bad job trying to mix up three unrelated concepts. It knows music is not an ingredient for the pizza, and it knows that pizza requires heat for cooking and that heat is explained by statistical mechanics.

Sure, you can nitpick and find nuances that are wrong, but honestly an average human asked to come up with something for a school assignment would probably not do a much better job.

Now, there are clearly better examples of utter failures, where even the best models trip up in ways that reveal they are not even close to understanding and modeling the world correctly.

My point is just that their weaknesses cannot merely be explained by the next-token prediction process.


"Is it possible to combine statistical mechanics, techno music production and pizza?"

You just did.


I would say the topics mentioned were basically non-topics in movies of the past, unless addressed explicitly as in Mississippi Burning.

I think it is more like The Matrix. Super relevant with AI, but traveling through phone lines and pay phones? What? It would have to seem dated, the way black-and-white movies have always just felt too distant to me.


Getting hired is a random process.

It is like asking, at 51, what the good lottery numbers are to play right now.

All that changes at 50+ is that your physical condition matters more. If you look like you can physically run circles around the hiring manager, you will get extra points. If you look like you might die of a heart attack next week, that will obviously count against you.


It is like the Mexican restaurant near me that has never come back to the business it was doing before covid.

The business is completely dead and just hanging on. People would view the business going under as negative, but right now it is a complete waste of resources and of the owner's life.

I suspect there is a staggering number of businesses in this exact situation, but we don't seem to believe in creative destruction as a society anymore.

Floating all these zombie businesses just piles up tinder for a larger fire that will be harder to put out than it should be. No real shock that the spark would be a yen carry trade blowup. I have seen this episode before.


My experience is that LLMs can't actually do 3 at all. The intersection of knowledge has to already be in the training data; the model hallucinates if the intersection is original. That is exactly what one should expect, though, given the architecture.


Exactly, dotcom is fueling this bubble. The huge difference is that we didn't have a previous bubble to compare dotcom to at the time. Dotcom is one justification for the current bubble because everything worked out.

All that needed to happen for dotcom to work out, though, was for the high-speed internet that businesses already had to reach people's homes.

For this bubble to work out, we need to come up with artificial beings on the level of humans from limited-use-case LLMs.

It is completely delusional.


> For this bubble to work out, we need to come up with artificial beings on the level of humans from limited-use-case LLMs.

Robots?

We've got loads of those already. Ones with super-human speed, super-human dexterity, super-human senses. Even got some cute tech demos where they get controlled by the OpenAI API.

I'm not sure why you think that's delusional.

But I'm also not sure why you think that's necessary, given that not only can "all things which can be expressed in written form" be used by an LLM, but the same architecture is fine with anything that can be tokenised, which includes visual and audio tasks — i.e. most office jobs.

No, what's necessary for this to not be a bubble is for someone to figure out how to turn what exists into an actual business model.

"Business" is much less flashy, less cool, less exciting than tech demos filled with androids doing housework, parkour, or acting as a tour guide.


What will kill the boom and wake people up from this mass hysteria is an economic shock from outside AI.

Of course, the madness of crowds would conclude that the risk here is underinvestment.


Thanks for that "what day was yesterday" prompt. I have run across these situations before but never quite like that.

What is great about that Thursday prompt is how nakedly it exposes the LLM as knowing absolutely nothing in the way we usually mean "to know". The bubble we are in is just awesome to behold.

