Support bots. Scrapers. Personal assistants. Search startups like Perplexity. Scammer bots. Bots that spread political agendas. "AI" memecoins.



When does an LLM customer support bot, based for example on a RAG architecture, become an LLM agent?


My take is that if the LLM outputs text for humans to read, that's not an agent. If it's making API calls and doing things with the results, that's an agent. But given the way "AI" has stretched to become the new "radium" [1], I'm sure "agent" will shortly become almost meaningless.
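That distinction can be sketched in a few lines. This is a hypothetical toy, not any real framework: `call_llm` is a hard-coded stub standing in for a chat-completion API, and `get_order_status` is an invented tool. The point is structural — if the model's output is returned as text for a human, it's a chatbot; if it's parsed into a tool call, executed, and fed back in, it's a loop acting on the world:

```python
import json

# Hypothetical stub standing in for a real chat-completion API call.
# First turn it requests a tool; once it sees a tool result, it answers.
def call_llm(messages):
    if not any(m["role"] == "tool" for m in messages):
        return json.dumps({"tool": "get_order_status",
                           "args": {"order_id": "A1"}})
    return "Your order A1 has shipped."

# Invented example tool: the "doing things with the results" part.
TOOLS = {"get_order_status": lambda order_id: f"{order_id}: shipped"}

def run_agent(user_msg, max_steps=5):
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        try:
            call = json.loads(reply)  # structured output: a tool call
        except json.JSONDecodeError:
            return reply  # plain text for a human: the chatbot case
        # Agentic step: execute the tool and feed the result back in.
        result = TOOLS[call["tool"]](**call["args"])
        messages.append({"role": "tool", "content": result})
    return "step limit reached"

print(run_agent("Where is my order A1?"))
```

Delete the `TOOLS` dispatch and the loop collapses to a single completion returned to the user, i.e. the "not an agent" case.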

[1] https://en.wikipedia.org/wiki/Radium_fad


^^ best definition.

Right now they are "read-only", which I would call a persona.


The definition of "agent" is blurry. I prefer to avoid the term because it doesn't mean anything in particular: these systems are implemented as chat completion API calls plus parsing plus interpretation of the output.
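The "parsing + interpretation" half of that pipeline is mundane enough to show directly. A minimal sketch, assuming a hypothetical prompt that asks the model to embed a JSON action object in its otherwise free-form completion (the `lookup_refund` action name and the completion text are invented for illustration):

```python
import json
import re

# Invented example completion: free-form text with an embedded JSON action.
completion = ('Sure, let me check on that.\n'
              '{"action": "lookup_refund", "ticket": 4521}')

def parse_action(text):
    """Interpretation step: extract the structured action, if any."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    return json.loads(match.group(0)) if match else None

print(parse_action(completion))
```

Everything "agentic" downstream is ordinary control flow dispatching on the parsed dict.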


"Agent" is just another word for context isolation among coordinated LLM tasks; the sooner we admit that to ourselves, the better.

Will agents still matter once models do a better job paying complete attention to large contexts?



