
I would never trust its output enough to use it as a search engine.

But for quickly creating a template when I want to write a big report or email, then yes, it's very useful.




Do you trust the content of the sites Google returns to you? Or do you cross-verify the content against other sources?

ChatGPT is about as accurate as random websites on the internet, and you don’t get obliterated with ads.

Simple example: ChatGPT will give you a clear recipe for whatever you want, sans the life story designed to make you scroll past a million ads.


If I'm looking at MDN for JS docs, I don't need to cross-reference anything. Yet with ChatGPT I can't be sure it drew on that same MDN for its answer rather than some random SO post.

I could specify for it to use MDN exclusively but at that point I might as well use search.

In addition, I can judge the quality of search results (many vs. few mentions of a technology, shady vs. reputable sites, etc.) to make an educated guess about the output I'm getting from search. Can't do that with GPT.

These are key differences off the top of my head.


Having the context for a recipe makes it a lot easier for me to evaluate whether this is probably a quality source or not. I really don't get the hate so many people seem to have for anything other than a list of ingredients and steps to follow.

In general, the context of search gives some insight into the credibility of the source.


It’s because they are entirely manipulative and usually have nothing to do with the actual recipe. The “context around the recipe” is rarely written by the person who put in the work to develop that recipe. It’s almost always content-farmed out or, more recently, entirely generated by AI.

The only reason those blobs of text exist is to get you to look at more ads. Put more things in your head against your will, sell you more garbage, and manipulate your feelings.

If that weren’t true, why are the recipes always at the bottom? Why not put the most valuable part front and center? These websites have no respect for you and likely copy-pasted the recipe anyway.


Random life stories are everywhere and have little to do with cooking well, and that was true even before LLMs could fake that part as easily as the recipe itself.

The only way to know if a recipe is good is to look at it.


> Do you trust the content of the sites Google returns you?

You don’t. But I’d trust a top-rated Stack Overflow answer over whatever an LLM spits out.

There is no “confidence score” from an LLM’s output. You cannot tell whether it is making things up (and you could potentially make very bad decisions based on its output).


There's no confidence score for ChatGPT specifically, but other GPT models hosted by OpenAI (let alone the broader research community) have been given that capability.

https://community.openai.com/t/new-assistants-api-a-potentia...
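
For what it's worth, there is a rough proxy at the API level: token log-probabilities. A minimal sketch, assuming the OpenAI Python SDK with logprobs enabled (the model name and prompt here are just placeholders):

    # Sketch: mean per-token probability as a crude confidence signal.
    # Assumes OPENAI_API_KEY is set; the model name is illustrative.
    import math
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "When was MDN launched?"}],
        logprobs=True,
    )

    tokens = resp.choices[0].logprobs.content
    avg_prob = sum(math.exp(t.logprob) for t in tokens) / len(tokens)
    print(resp.choices[0].message.content)
    print(f"mean token probability: {avg_prob:.2f}")

A low average doesn't prove the answer is wrong, but it's at least a signal that the model was less certain while generating it.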


I honestly never thought of using ChatGPT for recipes. I just asked it for "a simple pizza dough recipe for a medium thick pizza you can cook in an oven" and it returned the exact recipe I have memorized, which I think came from an "AirBake" pizza pan I bought 20+ years ago. Thanks for the tip!


If that’s what you do, it’ll give you a generic recipe.

Try something more complicated! Ask for a gingerbread recipe without sugar, for example.


Just to be clear, I wasn't complaining. I liked that it came back with the one recipe that I've already settled on (and I've tried quite a few over the years.)

I think I'll ask it for a calzone recipe this weekend. The one I use now makes the dough a little too bready.


Fair enough! Just, I find its recipe-making ability to be most useful once you start experimenting.

It's not very good at it, but it doesn't need to be, to be far better than I am.


What do you trust? I don't trust search or other humans to be 100% accurate about anything so I find it really strange that people presume I take whatever ChatGPT outputs at face value. I parse it just as I parse any other information.


I’ve seen this discussion many times over the past year and have come to think that the disconnect basically arises from the way that we have thousands of years of heuristics built up for interpreting the trustworthiness of what another human is telling us, 30 years of evaluating websites, and less than a year of evaluating LLM outputs.

People have some sense that someone giving them information may be an {expert, charlatan, idiot}, or that a website they’re looking at is run by a university vs a blogspam content farm, but many have not developed a sense for when or how much they can trust LLM output, which is delivered with the same tone and confidence regardless of whether it’s entirely fabricated.

There is probably a component of personality involved in how people approach this. Collectively we are all learning how to interact with this new source of information and people take varying paths.


Exactly. Whether it's a person, a website, or a book, we have a ton of cues that give us some sort of intuitive reliability score. That reliability is essentially never going to be 100%. But, especially if we cross-check sources, we can start to have very high confidence that an answer is true--at least as far as anyone knows. (Or even that no one really knows the answer with any certainty.)

I've had ChatGPT return very serviceable "true" results and I've had ChatGPT return utter fiction.


There are roughly two cases:

1. you don't know the answer, but you can easily check yourself whether a given answer is roughly correct

2. you don't know the answer and wouldn't be able to check how valid a potential answer is

LLM-based tools are great for case 1, synthesizing various sources into one coherent answer, since in that case you won't become a victim of their hallucinations. E.g. "write a one-off Python script to do this": you can quickly check whether it does the job, even if you couldn't say whether it's idiomatic Python.
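
To make the "easy to check" case concrete, here's a hypothetical example of the kind of throwaway script you might ask for, plus the cheap verification step. The function and test are made up for illustration; the point is only that correctness is trivial to confirm even if idiomatic style isn't:

    # Hypothetical one-off script of the sort an LLM might produce:
    # deduplicate lines while preserving their original order.
    def dedupe_lines(lines):
        seen = set()
        out = []
        for line in lines:
            if line not in seen:
                seen.add(line)
                out.append(line)
        return out

    # The cheap check: run it on an input whose answer you already know.
    assert dedupe_lines(["a", "b", "a", "c", "b"]) == ["a", "b", "c"]
    print("does the job")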


When I'm writing (English text) about something I'm passably familiar with, it can be useful for generating some straightforward descriptions and background. Nothing I'd just cut and paste wholesale, but it can be a time-saver, especially if the content is somewhat boilerplate.

I would say it is not good at giving a sophisticated answer to anything that requires a lot of nuance. And I've also asked it questions with fairly objective factual answers that it gets hilariously wrong.


I trust it as much as I do Google search results. Trust, but verify.



