Hacker News new | past | comments | ask | show | jobs | submit | x3haloed's comments login

I’d like you to explain what you don’t know, and why.

I don't know the history of the original tribes in Brazil. Why? Because I haven't studied it, read about it, or been taught anything on the subject. I know that such tribes exist, but I don't know their history.

FWIW, lately prompts seem more able to elicit that same sort of answer from both Claude and GPT-4x in thinly learned contexts.

And, next release, they'll have been trained on one more example of an answer for that sort of question ;)

Your post now will probably join the mix. So will mine and all the others here.

We can consider any question and conclude that we don’t know the answer. LLMs can’t meaningfully do that. They just make stuff up.

I’m sorry you’ve been inconvenienced by a murder… Or is that the country we’re living in now? “Murder happens, just don’t bother me.”

I believe this is the least known and most important use-case for LLMs today -- understanding and inferring the meaning in language. We've all seen cases where Google can occasionally surface the right content for indirect descriptions of what you're looking for. The most famous example I can think of is searching for "eyebrows kid" returns images of Will Poulter. Google's knowledge engine is getting pretty good at this kind of thing, but LLMs are way better.

Language models are exceedingly good at understanding the meaning of your language without the use of specific keywords. Here's an example from a recent search I did.

> metals can be flexed or bend, and it will regain some of its prior shape

in Google returns either "ductile" or "shape-memory alloy," which are both incorrect.

> What is the property of a material where it prefers to stay in its current form? This is often found in metals, where it can be flexed or bent, and it will regain some of its prior shape?

in ChatGPT correctly explains that

> The property you are referring to is called elasticity. Elasticity is the ability of a material to return to its original shape after being deformed (stretched, compressed, bent, etc.) when the external forces are removed.

We all know that LLMs can hallucinate, and they are therefore not a reliable source of truth or knowledge. I'm not necessarily trying to say that LLMs are more accurate than something like Google's knowledge engine. The value in LLMs is that they can infer your meaning to some degree of accuracy (just like asking a human) so that you can productively continue your own research in more depth.


I wonder what the play is here. Any ideas?

The key must be somewhere in the statement "we’ve been increasingly asking ourselves how we should work with computers. Not on or using computers, but truly with computers."

I wish I had some experience using Multi so I could picture this better. But is there a chance this is for a sandboxed execution environment that would allow models to interact with software alongside a human counterpart?


Their blog shows their product: https://multi.app/blog/launching-multi-multiplayer-collabora...

It’s Remote Desktop, but allows more than one participant to connect at once.


Startups always have an exit strategy. Makes sense.


Blame sales and marketing


I think wrappers are fine if they add meaningful value. It does seem that mostly you’re getting charged for a little prompt engineering and a UI skin.

I’d personally like to see a good wrapper for autonomous LLM coding. Nothing we have yet quite fits the bill, IMO.


nice way to see it...and most chilled comment here so far. thanks!


Exactly. I'm not sure if this is brand new or not, but this is definitely on the frontier.

I was literally just thinking about this a few days ago... that we need a multi-modal language model with speech training built-in.

As soon as this thing rolls out, we'll be talking to language models like we talk to each other. Previously it was like dictating a letter and waiting for the responding letter to be read to you. Communication is possible, but not really in the way that we do it with humans.

This is MUCH more human-like, with the ability to interrupt each other and glean context clues from the full richness of the audio.

The model's ability to sing is really fascinating. Its ability to change the sound of its voice -- its pacing, its pitch, its tonality. I don't know how they're controlling all that via GPT-4o tokens, but this is much more interesting stuff than what we had before.

I honestly don't fully understand the implications here.


Yeah, I'm annoyed that OpenAI has deprecated its text completion models and API. I think there's a ton of value to be had from constrained generation like what's available with the Guidance library.
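The core idea of constrained generation is simple: instead of letting the model free-generate, you restrict its next-token choices to those consistent with a template or grammar. Here's a minimal toy sketch of that idea in Python -- the scores are made up, and `constrained_pick` is a hypothetical helper, not the actual Guidance API:

```python
# Toy sketch of constrained decoding. A real library (e.g. Guidance) works at
# the token level against actual model logits; here we fake the scores to
# illustrate the mechanism.

def constrained_pick(token_scores, allowed):
    """Return the highest-scoring token, considering only allowed ones."""
    candidates = {t: s for t, s in token_scores.items() if t in allowed}
    return max(candidates, key=candidates.get)

# Pretend the model scored these continuations for the prompt "Sentiment: "
scores = {"positive": 0.4, "negative": 0.35, "bananas": 0.9, "neutral": 0.25}

# Unconstrained decoding would happily emit "bananas"; constraining the
# choice to a closed set of labels guarantees a well-formed answer.
label = constrained_pick(scores, allowed={"positive", "negative", "neutral"})
print(label)  # -> positive
```

With a completion-style API you can apply this kind of mask at every decoding step, which is exactly what's hard to do through a chat-only endpoint.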


I don't understand. What's an example of something that could be achieved using a system like this?

