Hacker News

Who cares. I literally use ChatGPT 30 times a day. It answers incredibly complex queries along with citations I can verify. Isn’t the “this isn’t good enough yet” line getting old? There’s nothing else that can estimate the number of cinder blocks I need for a project and account for the volume of concrete required (while taking into consideration the actual fill volume available in a cinder block, plus settling) from a few quick sentences I speak to it. I can think of literally thousands of things I have asked that would have taken hours of googling that I can get an answer for in minutes.

I think the problem is you haven’t shifted your mindset to using AI correctly yet.

Edit: More everyday examples from just the last 3 days

- Use carbide bits to drill into rock. Googling “best bits for drilling rocks” doesn’t bring up anything obvious about carbide, but it was the main thing ChatGPT suggested.

- Gave it the dimensions for a barn I’m building and asked how many gallons of a particular type of paint I would need. I could probably work that out myself, but it’s a bunch of lookups (what’s the total sq footage, how many sq ft per gallon, what type of paint stands up to a lot of scuffing, etc.)

- Coarse-threaded inserts for softwood when I asked for threaded insert recommendations. I would probably have ended up not caring, and fine thread slips right out of pine.

- Looked up the ingredients in a face cream and listed any harms (with citations) for each of them.

- Speeds and feeds for cutting acrylic on my particular CNC. Don’t use a downcut bit because it might cause a fire, something I hadn’t considered.

- An explanation of the relevant NEMA outlets. Something that’s very hard to figure out if you’re just dropped into it via googling.
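The paint estimate is a good example of the chained lookups involved, and it is also easy to sanity-check by hand. A minimal sketch with made-up numbers (the barn dimensions, coat count, and coverage rate below are assumptions for illustration, not figures from this thread; exterior paint typically covers somewhere around 250–400 sq ft per gallon):

```python
import math

# All numbers are illustrative assumptions, not the actual barn.
# Assumed barn: 30 ft x 40 ft footprint, 12 ft walls, two 8 ft gable ends.
wall_area = 2 * (30 + 40) * 12           # four walls, sq ft
gable_area = 2 * (0.5 * 30 * 8)          # two triangular gable ends, sq ft
total_sqft = wall_area + gable_area

coverage_per_gallon = 300                # assumed sq ft per gallon, one coat
coats = 2

gallons = math.ceil(total_sqft * coats / coverage_per_gallon)
print(f"{total_sqft:.0f} sq ft -> {gallons} gallons for {coats} coats")
# -> 1920 sq ft -> 13 gallons for 2 coats
```

Rounding up is the cheap insurance here; having a model chain the lookups for you is convenient, but the arithmetic itself is checkable in a minute.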




>Who cares.

clearly anyone trying to buy a car, which is already an ordeal with a human as is.

>I literally use ChatGPT 30 times a day

good for you? I use Google. Most of my queries aren't complex.

>Isn’t “this not good enough yet” line getting old?

as long as companies pretend 2024 AI can replace skilled labor, no. It's getting old how many snake oil salesmen keep pretending that I can just use ChatGPT to refactor this very hot loop of performance-sensitive code. And no, ChatGPT, I do not have the (real-time) budget to hook into some distributed load for that function.

I'm sure in a decade it will wow me. But I'd prefer it to stay in its lane, and I'll stay in mine, for that decade.

>There nothing else that can estimate the number of cinder blocks I need to use for a project

is arithmetic really such an insurmountable feat to be defending big tech over? I'm not a great mathematician, but give anyone Excel/Sheets and they can do the same in minutes.
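To be fair to that point, the cinder-block estimate is a spreadsheet-sized calculation. A minimal sketch with made-up inputs (the wall size, core volume, bag yield, and waste factor are all assumptions, not figures from this thread; a standard US block face is nominally 16" x 8" including mortar joints):

```python
import math

# Assumed wall: 40 ft long, 3 ft high. Work in square inches so the
# nominal 16" x 8" block face divides the wall area exactly.
wall_length_ft = 40
wall_height_ft = 3
blocks = math.ceil((wall_length_ft * 12) * (wall_height_ft * 12) / (16 * 8))

core_cuft_per_block = 0.25               # approximate open core volume
waste_factor = 1.10                      # assumed 10% for spills and settling
fill_cuft = blocks * core_cuft_per_block * waste_factor

bags = math.ceil(fill_cuft / 0.6)        # an 80 lb bag yields ~0.6 cu ft
print(blocks, "blocks,", bags, "bags")
# -> 135 blocks, 62 bags
```

The same three rows in Sheets get you the same answer; whether that's faster than dictating a sentence to a chatbot is the actual dispute here.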

>I can think of literally thousands of things I have asked that would have taken hours of googling that I can get an answer for in minutes.

I'm glad it works out for you. I'm more exacting in my searches, and I find that about half the time its sources are a bit off at best, and dangerously wrong at worst. A coin flip isn't worth any potential time saved for what I research.

>I think the problem is you haven’t shifted your mindset to using AI correctly yet.

perhaps. But for my line of work that's probably for the best.


"I think the problem is you haven’t shifted your mindset to using AI correctly yet"

There's an indictment of AI "products" if I ever heard one.


That’s a glib, low-effort dismissal, but it makes sense if you consider it.

It’s like the people who kept going to the library even with Google around. You’re not playing to the strengths of AI, and you’re relying on whatever suboptimal previous method you used to find answers. It does really, really well with very specific queries involving a lot of lookups and dependencies, which nothing else can answer without a lot of work on your end.


How come this spoon won't hold my soup? Don't tell me I'm holding it wrong!


I mean, if my dentist adds a Helpful Super GenAI Infused Chatbot that can't book appointments or answer any questions about their services, no amount of "you're holding it wrong" insistence about LLMs in general will actually make it useful to me.

The point is that ChatGPT's wild success doesn't automatically mean consumers want, or will ever want, a chatbot as the primary interface for your specific app or service.


I feel like I'm being too obvious but maybe try using it for something it's good at.


> with citations I can verify.

And do you? Every time someone tried to show me examples of “how amazing ChatGPT is at reasoning”, the answers had glaring mistakes. It would be funny if it weren’t so sad how it shows people turning off their critical thinking when using LLMs, to the point they won’t even verify answers when trying to make a point.

Here’s a small recent example of failure: I asked the “state of the art” ChatGPT model which Monty Python members have been knighted (it wasn’t a trick question, I really wanted to know). It answered Michael Palin and Terry Gilliam, and that they had been knighted for X, Y, and Z (I don’t recall the exact reasons). Then I verified the answer on the BBC, Wikipedia, and a few others, and determined only Michael Palin has been knighted, and those weren’t even the reasons.

Just for kicks, I then said I didn’t think Michael Palin had been knighted. It promptly apologised, told me I was right, and that only Terry Gilliam had been knighted. Worse than useless.


I do. It’s not complex to click on the citation, skim the abstract and results, and check the reputation of the publication. It’s built into how I have always searched for information.

I also usually follow most prompts with “look it up I want accurate information”


> I also usually follow most prompts with “look it up I want accurate information”

That didn’t work so hot for two lawyers in the news a while back.


A while back, when those lawyers used it, ChatGPT didn’t do lookups or citation links.


Please don't take this as a glib, low effort answer, but... I am glad you're not an engineer. Taking advice from an LLM on things like outlets, ingredient safety, and construction measurements seems like a mistake.


Did you do the cinder block project? Was its estimate close? From everything I’ve seen LLMs are not that great at math.


Yes, I finished the footings for the barn. I had to get two extra bags on an estimate of 68 bags. Not bad in my opinion, considering the wastage from mixing, spilling, etc. It also saved me a bunch of tedious math.

I had about 5–10 cinder blocks left over, not bad for an order of ~150.


It works for me, therefore it should work for everyone?


It’s pretty much in the vein of the GP comment I’m responding to.


>I'm not sure if LLMs are getting good use yet / general chatbots are good or ready for business use.

They left room for the idea that the technology could evolve to be useful. You're simply dismissing anyone who cannot use the technology as is as "using it wrong".

As someone who has done a bit of UX work, that's pretty much the worst thing you can say to a tester. It doesn't help them understand your POV, it builds animosity between you and the tester, and it ruins the point of the test, because you are not going to be there to say "you're doing it wrong" when the UX ships. There are zero upsides to such a response.


This isn't what the other comment you're replying to was talking about.


Not sure if this is sarcasm or a "you're holding it wrong" moment.


There’s skill involved in using these. Of course there is.

That doesn’t make them useless.


depends on the tool and purpose. There was skill in navigating a file system, and now the next generation (shielded from folder hierarchies by mobile UIs) seems to be losing that ability.

You can look at it either way; neither view is particularly wrong, unless your job is in fact to navigate file systems.


Of course, LLMs would be more useful to many more people if they could be used without skill, and were as "useful" as a human mentor.

That's true, and they lack that capability. Many people seem to react as though this means they're missing all value, however. I find them incredibly useful; it just isn't possible to get much value out without investing effort myself.


>LLMs would be more useful to many more people if they could be used without skill, and were as "useful" as a human mentor.

That is partially marketing's fault, so I'd say the confusion is self-inflicted: marketing isn't focusing on "make yourself more productive!"


If you’re unaware of how something can help you, learning how can only improve your outcome.



