
Do you trust the content of the sites Google returns you? Or do you cross-verify the content against other sources?

ChatGPT is about as accurate as random websites on the internet, and you don’t get obliterated with ads.

Simple example, ChatGPT will give you a clear recipe for whatever you want, sans life story designed to make you scroll past a million ads.




If I'm looking at MDN for JS docs, I don't need to cross-reference anything. Yet I can't be sure ChatGPT used that same MDN in its answer, as opposed to some random SO post.

I could specify for it to use MDN exclusively but at that point I might as well use search.

In addition to that, I can judge the quality of search results (many vs. few mentions of a technology, shady vs. reputable site, etc.) to make an educated guess about the output I'm getting from search. Can't do that with GPT.

These are key differences off the top of my head.


Having the context for a recipe makes it a lot easier for me to evaluate whether this is probably a quality source or not. I really don't get the hate so many people seem to have for anything other than a list of ingredients and steps to follow.

In general, the context of search gives some insight into the credibility of the source.


It’s because they are entirely manipulative, and usually have nothing to do with the actual recipe. The “context around the recipe” is rarely written by the person who put in the work to develop that recipe. It’s almost always content-farmed out or, more recently, entirely generated by AI.

The only reason those blobs of text exist is to get you to look at more ads. Put more things in your head against your will, sell you more garbage, and manipulate your feelings.

If that weren’t true, why are the recipes always at the bottom? Why not put the most valuable part front and center? These websites have no respect for you and likely copy-pasted the recipe anyways.


Random life stories are everywhere, and have nothing much in common with cooking well. Even before LLMs could fake that part as easily as the recipe itself.

Only way to know if a recipe is good is to look at it.


> Do you trust the content of the sites Google returns you?

You don’t. But I’d trust a top-rated Stack Overflow answer over whatever an LLM spits out.

There is no “confidence score” attached to an LLM output. You cannot tell whether it is making things up (and you could potentially make very bad decisions based on its output).


There's no confidence score for ChatGPT specifically, but other GPT models hosted by OpenAI (let alone the broader research community) have been given that capability.

https://community.openai.com/t/new-assistants-api-a-potentia...
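For what it's worth, the API does expose per-token log-probabilities (when called with `logprobs=True`), and you can turn those into a rough confidence figure yourself. A minimal sketch, using made-up logprob values in place of a real API response:

```python
import math

# Hypothetical per-token log-probabilities, in the shape the Chat
# Completions API returns in choices[0].logprobs.content when called
# with logprobs=True. These particular values are made up.
token_logprobs = [-0.01, -0.2, -2.5]

# Each log-probability exponentiates to a per-token probability in (0, 1].
confidences = [math.exp(lp) for lp in token_logprobs]

# One crude overall score: the geometric mean of the token probabilities.
overall = math.exp(sum(token_logprobs) / len(token_logprobs))
```

Note this is the model's confidence in its own wording, not in the factual truth of the answer, which is part of why it's only a rough signal.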


I honestly never thought of using ChatGPT for recipes. I just asked it for "a simple pizza dough recipe for a medium thick pizza you can cook in an oven" and it returned the exact recipe I have memorized which I think came from a "AirBake" pizza pan I bought 20+ years ago. Thanks for the tip!


If that’s what you do, it’ll give you a generic recipe.

Try something more complicated! Ask for a gingerbread recipe without sugar, for example.


Just to be clear, I wasn't complaining. I liked that it came back with the one recipe that I've already settled on (and I've tried quite a few over the years.)

I think I'll ask it for a calzone recipe this weekend. The one I use now makes the dough a little too bready.


Fair enough! Just, I find its recipe-making ability to be most useful once you start experimenting.

It's not very good at it, but it doesn't need to be, to be far better than I am.



