My take is that if the LLM outputs text for humans to read, that's not an agent. If it's making API calls and doing things with the results, that's an agent. But given the way "AI" has stretched to become the new "radium" [1], I'm sure "agent" will shortly become almost meaningless.
The definition of "agent" is blurry. I prefer to avoid the term because it doesn't mean anything in particular. Under the hood, these systems are just chat-completion API calls plus parsing and interpretation of the responses.
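Concretely, here's the loop that phrase describes. This is a minimal sketch, not any vendor's actual API: `chat_completion` is stubbed with canned replies so it runs as-is, and `get_weather` is a made-up tool for illustration.

```python
import json

# Hypothetical stand-in for a real chat-completion endpoint; stubbed with
# canned replies so the example runs without network access or credentials.
_CANNED = iter([
    '{"tool": "get_weather", "args": {"city": "Oslo"}}',
    "It is sunny in Oslo.",
])

def chat_completion(messages):
    return next(_CANNED)

# Made-up tool registry for the sketch.
TOOLS = {"get_weather": lambda city: f"sunny in {city}"}

def run_agent(user_request, max_steps=5):
    messages = [{"role": "user", "content": user_request}]
    for _ in range(max_steps):
        reply = chat_completion(messages)            # 1. chat-completion call
        try:
            call = json.loads(reply)                 # 2. parse a tool request
        except json.JSONDecodeError:
            return reply                             # 3. plain text = final answer for a human
        result = TOOLS[call["tool"]](**call["args"])  # act on the parsed call
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": f"tool result: {result}"})
    return "gave up after max_steps"

print(run_agent("What's the weather in Oslo?"))
```

By the upthread definition, the branch that executes a tool and feeds the result back is what makes it an "agent"; the branch that returns text to a human is not. Same API calls either way, which is the point.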