The hype around Agentic AI is to LLMs what an MBA is to business: dressing up something that is pretty much common sense in complicated language.
I've implemented countless LLM-based "agentic" workflows over the past year. They are simple: a series of prompts that maintain state and work toward a targeted output.
The common association with "a floating R2D2" is not helpful.
They are not magic.
The core elements I'm seeing so far are:
- the prompt(s)
- a capacity for passing in context
- a structure for defining how to move through the prompts
- integrating the context into prompts
- bridging the non-deterministic -> deterministic divide
- callbacks, or what to do next
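To make that concrete, here is a minimal sketch of those pieces in Python. `call_llm` is a hypothetical stand-in for whatever client library you actually use; the point is only the shape of the loop, not a specific API.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for an LLM client: text in, text out."""
    return f"(model output for: {prompt[:40]}...)"


def run_workflow(task: str) -> str:
    # The "state" is just the context we carry between prompts.
    context = {"task": task, "notes": ""}

    # A fixed structure defining how to move through the prompts.
    steps = [
        "Summarize the task: {task}",
        "Given these notes: {notes}\nDraft a plan for: {task}",
        "Given these notes: {notes}\nProduce the final answer for: {task}",
    ]

    for template in steps:
        # Integrate the context into the prompt...
        prompt = template.format(**context)
        # ...text goes in, text comes out (the non-deterministic bit)...
        output = call_llm(prompt)
        # ...and the result is folded back into deterministic state.
        context["notes"] += "\n" + output

    return context["notes"]
```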
The closest analogy that I find helpful is lambda functions.
What makes them "feel" more complicated is the non-determinism. But in the end, it is text going in and text coming out.
You can model it as a state machine where the LLM decides which state to advance to. In terms of developer ergonomics, strongly typed outputs help: you can, for example, force a function call at each step, where one of the call arguments is an enum specifying the state to advance to.
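A rough sketch of that pattern, with a hypothetical `call_llm_structured` standing in for your provider's function-calling or structured-output API (the stub below just returns canned JSON so the example runs on its own):

```python
import json
from enum import Enum


class State(str, Enum):
    RESEARCH = "research"
    DRAFT = "draft"
    REVIEW = "review"
    DONE = "done"


def call_llm_structured(prompt: str) -> dict:
    """Hypothetical placeholder. With a real provider you would force a
    function/tool call or a JSON schema so the reply always has the shape
    {"output": str, "next_state": <one of the State values>}."""
    return {"output": "stub output", "next_state": State.DONE.value}


def run_state_machine(task: str) -> str:
    state = State.RESEARCH
    transcript = []

    while state is not State.DONE:
        prompt = (
            f"Current state: {state.value}\n"
            f"Task: {task}\n"
            f"Progress so far: {json.dumps(transcript)}\n"
            f"Respond with JSON: output plus next_state "
            f"(one of {[s.value for s in State]})."
        )
        reply = call_llm_structured(prompt)

        transcript.append(reply["output"])
        # The enum is the bridge from non-deterministic text back to
        # deterministic control flow: an invalid value raises here
        # instead of silently sending the workflow somewhere undefined.
        state = State(reply["next_state"])

    return "\n".join(transcript)
```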
Shoot me an email if you want to discuss specifics!