Ask HN: Best Use Cases for LLMs
14 points by therealmocker 3 months ago | 7 comments
I’m seeking advice on the scenarios where large language models (LLMs) shine and where they may not perform as well.

What types of problems have you successfully solved with LLMs? What are some common pitfalls or areas where they tend to underperform?




I am a huge fan of this kind of model

https://sbert.net/

for classification, clustering, and both text and image retrieval. It is often a drop-in replacement for other approaches, and most of their models are small enough to run on an ordinary computer.
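
A rough sketch of the retrieval use case, if it helps (the model name and example texts below are just placeholders; assumes pip install sentence-transformers):

    # Sketch: rank documents against a query with a small sentence-transformers model.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder; pick whatever checkpoint fits

    docs = [
        "The payment failed because the card expired.",
        "Retry the job after refreshing the OAuth token.",
        "Our cat knocked the router off the shelf again.",
    ]
    query = "Why was the transaction declined?"

    # Encode everything into dense vectors and rank by cosine similarity.
    doc_emb = model.encode(docs, convert_to_tensor=True)
    query_emb = model.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, doc_emb)[0]

    for doc, score in sorted(zip(docs, scores.tolist()), key=lambda x: -x[1]):
        print(f"{score:.3f}  {doc}")

The same embeddings feed classification or clustering: just hand them to whatever scikit-learn classifier or clusterer you already use.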

As for chatbots, note that they have superhuman recall in some sense but a limited ability to generalize or "reason". I have been asking Microsoft's Copilot for help with a maintenance programming project, and I am amazed at its ability to explain unusual but highly repetitive code fragments like the ones generated by the Babel compiler. Explaining what a program does by looking at the code is a difficult problem that LLMs cannot do reliably if they haven't seen very similar code before, but application code is full of idioms they have seen before, and for those they are helpful.


Yes, I have often thought that if it weren't for GPT-3, this work would have been more recognised and more powerful. However, generative embeddings are still more useful for more abstract cases and do embed a different space than these similarity/contrastive embeddings.


They excel at paying attention.

So, reading through logs, deciphering vague error messages, navigating an overcomplicated screen for a specific thing. Especially for something like Android dev, where about 90% of the stack trace is just garbage and the error messages say nothing like the real problem.
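
If you want to script that instead of pasting into a chat window, it looks roughly like this (a sketch with the OpenAI Python client; the model name and log file are placeholders, any capable chat model will do):

    # Sketch: ask a chat model to triage a noisy stack trace.
    # Assumes pip install openai and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    stack_trace = open("crash.log").read()  # placeholder path

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You triage Android stack traces. Ignore framework noise, "
                        "point at the app-level frame most likely at fault, and say why."},
            {"role": "user", "content": stack_trace},
        ],
    )
    print(resp.choices[0].message.content)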

They're very good at stochastic searches. So drafting outlines for a paper, logos, creative brainstorming.

They're bad with numbers.

They hallucinate, so you don't really want them in a situation where you can't prove whether they're correct. You can use them for medical diagnosis, but only if you double-check what they give you. They lean toward the average of what they've seen, so if you're trying to code something, they give you old tech stacks.

Basically, you don't want them for things you have no experience with and can't verify.


Something where you don't mind it being wrong? I don't know, though; I don't tend to use them except to experiment. I can tell you that if you ask them misleading questions ("Why did the USSR send teddy bears into space in 1957?"; I stole the teddy bear idea from a Wikipedia Signpost article a few years ago), they typically fail, though I remember asking one on the LLM Arena (chat.lmsys.org) that question and having it correctly call out that it couldn't find any such event, but then hallucinate something totally different. Sadly, I forget the name of that AI.


I think they are best at information extraction/classification tasks, especially for complex tasks with little to no training data, and data synthesis tasks. However, you should always test if simpler models can already perform the task reasonably well to save money.
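
The nice part about extraction with an LLM is that you can go zero-shot: describe the schema in the prompt and parse the reply. A rough sketch (OpenAI Python client; the model name and field names are just placeholders):

    # Sketch: zero-shot information extraction with no training data, just a schema in the prompt.
    import json
    from openai import OpenAI

    client = OpenAI()

    email = "Hi, I'd like to return order #48213, the blue kettle arrived cracked. -- Dana"

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "Extract JSON with keys: order_id, product, issue, customer_name. "
                        "Use null for anything not present."},
            {"role": "user", "content": email},
        ],
    )
    record = json.loads(resp.choices[0].message.content)
    print(record)

If a regex or a small classifier gets the same numbers on your eval set, use that instead; the LLM earns its cost when the inputs are messy and you have no labels.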

They underperform at anything that requires reasoning.


Proofreading. You have to be careful, though; sometimes the model just bullshits.


Check out Lektor.lol, an open-source wrapper around ChatGPT that I created just for that :)



