Hacker News

I've been building workflow assistants that make existing employees more productive or enable entirely new business models. Some of these assistants use local models (chosen for cost or privacy reasons).

Currently the stack gravitates around:

- GPT-4 - either to drive the entire workflow OR to generate prompts, plans, and guidelines for the local models to execute.

- structured knowledge bases (either derived from existing sources OR curated manually by companies to drive AI assistants).

- embedding search indexes, augmented by full-text search. Usually the LLM has access to the search engine and can drive the search as needed, refining the queries when results aren't good enough.
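That last piece - hybrid search with LLM-driven query refinement - can be sketched roughly like this. The scorers here are toy stand-ins (keyword overlap instead of BM25, trigram overlap instead of embedding cosine similarity), and all names (`hybrid_search`, `search_with_refinement`, the stub refiner) are illustrative assumptions, not the commenter's actual code:

```python
# Sketch: embedding + full-text hybrid retrieval, where the LLM may
# rewrite the query and retry when the best hit scores too low.
from dataclasses import dataclass

@dataclass
class Doc:
    id: str
    text: str

DOCS = [
    Doc("a", "invoice approval workflow for accounting"),
    Doc("b", "employee onboarding checklist"),
    Doc("c", "refund policy for enterprise customers"),
]

def fulltext_score(query: str, doc: Doc) -> float:
    # Toy stand-in for a BM25/full-text engine: keyword overlap ratio.
    q = set(query.lower().split())
    d = set(doc.text.lower().split())
    return len(q & d) / max(len(q), 1)

def embedding_score(query: str, doc: Doc) -> float:
    # Toy stand-in for embedding cosine similarity: character trigram
    # Jaccard overlap. A real stack would call an embedding model.
    grams = lambda s: {s[i:i + 3] for i in range(len(s) - 2)}
    q, d = grams(query.lower()), grams(doc.text.lower())
    return len(q & d) / max(len(q | d), 1)

def hybrid_search(query: str, docs, k=2):
    # Blend both signals; weights are arbitrary for this sketch.
    scored = [
        (0.5 * fulltext_score(query, d) + 0.5 * embedding_score(query, d), d)
        for d in docs
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:k]

def search_with_refinement(query, docs, llm_refine, min_score=0.2, max_tries=3):
    # The LLM drives the loop: if the best hit is weak, ask the model
    # for a better query and retry.
    for _ in range(max_tries):
        hits = hybrid_search(query, docs)
        if hits and hits[0][0] >= min_score:
            return query, hits
        query = llm_refine(query)
    return query, hits

# Stub "LLM" that expands the query; a real system would prompt GPT-4.
refine = lambda q: q.replace("invoices", "invoice approval workflow")

final_query, hits = search_with_refinement("invoices", DOCS, refine)
print(final_query, hits[0][1].id)
```

The key design choice is that refinement lives in a plain Python loop rather than inside a framework, so thresholds and retry counts are trivially tunable per project.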

All of that is instrumented with logic to capture user feedback at every single step. This is crucial for the continuous improvement of the model!

The bigger model can use this information periodically to improve the plans and workflow guidelines, making the overall process more efficient.
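The feedback loop from the last two paragraphs might look something like this in miniature. The event schema and the `revise_guidelines` stub are my assumptions; in a real setup the summary and current guidelines would be sent to GPT-4 with a prompt asking for a revised version:

```python
# Sketch: capture per-step user feedback, then periodically summarize
# it so a bigger model can revise the workflow guidelines.
import time
from collections import Counter

FEEDBACK_LOG = []

def record_feedback(step: str, rating: str, comment: str = ""):
    # Called after every workflow step the user sees.
    FEEDBACK_LOG.append({
        "ts": time.time(),
        "step": step,
        "rating": rating,  # e.g. "up" / "down"
        "comment": comment,
    })

def summarize_feedback(events):
    # Aggregate down-votes per step so the model sees where the
    # workflow fails most often.
    downs = Counter(e["step"] for e in events if e["rating"] == "down")
    return downs.most_common()

def revise_guidelines(guidelines: str, summary) -> str:
    # Stub: a real system would send `guidelines` + `summary` to GPT-4
    # and ask for an improved version.
    worst = summary[0][0] if summary else None
    if worst is None:
        return guidelines
    return guidelines + f"\n- Pay extra attention to step '{worst}'."

record_feedback("draft_email", "up")
record_feedback("extract_fields", "down", "missed the PO number")
record_feedback("extract_fields", "down", "wrong currency")

summary = summarize_feedback(FEEDBACK_LOG)
new_guidelines = revise_guidelines("Guidelines:", summary)
print(new_guidelines)
```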

AMA, if needed!




Do you use a framework to pull it all together? Like LangChain, etc.?


LangChain is good for demos and learning, but it is too complex and brittle for my taste.

Instead, I use a bit of boilerplate code (a couple of Python files) that I copy to new projects.
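The commenter doesn't share that boilerplate, but a framework-free stack often reduces to something like the following: a provider-agnostic `complete` callable plus a small tool-dispatch loop. Everything here is an illustrative guess at what such copy-paste files contain, not the actual code:

```python
# Sketch: minimal framework-free LLM boilerplate - pass in any
# text-completion function and a dict of tools; no framework needed.
import json
from typing import Callable

Tools = dict[str, Callable[[str], str]]

def run_step(complete: Callable[[str], str], prompt: str, tools: Tools) -> str:
    # Ask the model; if it replies with a JSON tool call, dispatch it
    # and feed the result back once. Real boilerplate would loop and
    # add retries/validation.
    reply = complete(prompt)
    try:
        call = json.loads(reply)
        if isinstance(call, dict) and call.get("tool") in tools:
            result = tools[call["tool"]](call.get("arg", ""))
            return complete(f"{prompt}\nTool result: {result}")
    except json.JSONDecodeError:
        pass
    return reply

# Stub model: requests a search on the first call, answers on the second.
def fake_llm(prompt: str) -> str:
    if "Tool result:" in prompt:
        return "Found 3 matching invoices."
    return json.dumps({"tool": "search", "arg": "invoices"})

tools = {"search": lambda q: f"3 hits for '{q}'"}
print(run_step(fake_llm, "How many invoices match?", tools))
```

Because `complete` is just a callable, swapping GPT-4 for a local model is a one-line change, which fits the mixed local/hosted setup described above.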




