Hacker News

> I agree that this is the core question, but I'd put it as: Who Gets To Decide What Is True?

Well, outside of matters of stark factuality (what time does the library close? what did MSFT close at?), many of the things people "search for" (i.e. try to find information about) are in the realm of informed opinion and summary, where there is no right or wrong answer, just a bunch of viewpoints, some probably better informed than others.

I think this is the value of traditional search, where the result is a link to a specific page/source whose trustworthiness or degree of authority you can then judge (e.g. based on known reputation).

For AI-generated/summarized "search results", such as those from "ChatGPT Search" (an awkward name, a bit like a spork), the trustworthiness or degree of authority is only as good as the AI (not person) that generated it. Given today's state of the technology, where the "AI" is just an LLM (prone to hallucination, limited in reasoning, etc.), this is obviously a bit of an issue...

Even in the future, when presumably human-level AGI will have made LLMs obsolete, I think it'll still be useful to differentiate plain search from RAG-based AGI search/chat, since it'll still be useful to know the source. At that point a specific AGI might be regarded like a specific person, with its own areas of expertise and biases.

The name "ChatGPT Search" is awkward in another way too: presumably they are trying to position this as a potential Google competitor and revenue generator (once the inevitable advertisements arrive), but at the end of the day it's just RAG.
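To make the "it's just RAG" point concrete, here's a minimal sketch of the retrieval-augmented generation pattern: retrieve candidate sources for the query, then prompt an LLM to answer *from* those sources (ideally citing them, which is what would let a user judge trustworthiness). The word-overlap retriever and the `corpus` URLs below are purely illustrative stand-ins, not anyone's actual implementation; a real system would use a search index or embeddings, and the prompt would go to an actual model.

```python
# Toy sketch of RAG (retrieval-augmented generation), illustrative only.
# retrieve(): stand-in retriever ranking sources by naive word overlap.
# build_prompt(): assembles an LLM prompt grounded in retrieved sources,
# so answers can be attributed back to a judgeable page/source.

def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Rank source URLs by how many query words their text shares."""
    q = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda url: -len(q & set(corpus[url].lower().split())))
    return ranked[:k]

def build_prompt(query: str, corpus: dict[str, str]) -> str:
    """Build a prompt that includes retrieved sources with their URLs,
    instructing the model to answer only from (and cite) them."""
    context = "\n".join(f"[{url}] {corpus[url]}"
                        for url in retrieve(query, corpus))
    return (f"Answer using only these sources, citing them:\n"
            f"{context}\nQ: {query}")

# Hypothetical mini-corpus of crawled pages (URLs are made up).
corpus = {
    "example.com/hours": "The library closes at 9pm on weekdays.",
    "example.com/msft": "MSFT closed at 410.37 today.",
    "example.com/spork": "A spork combines a spoon and a fork.",
}

print(build_prompt("what time does the library close", corpus))
```

The key design point is the source attribution: because the prompt carries the URLs of whatever was retrieved, the generated answer can cite them, recovering some of what traditional search gives you for free.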
