Show HN: Cognita – open-source RAG framework for modular applications (github.com/truefoundry)
142 points by supreetgupta 14 days ago | 34 comments
Hey HN, exciting news! Our RAG framework, Cognita (https://github.com/truefoundry/cognita), born from collaborations with diverse enterprises, is now open-source. Currently, it offers seamless integrations with Qdrant and SingleStore.

In recent weeks, numerous engineers have explored Cognita, providing invaluable insights and feedback. We deeply appreciate your input and encourage ongoing dialogue (share your thoughts in the comments – let's keep this ‘open source’).

While RAG is undoubtedly powerful, the process of building a functional application with it can feel overwhelming. From selecting the right AI models to organizing data effectively, there's a lot to navigate. While tools like LangChain and LlamaIndex simplify prototyping, an accessible, ready-to-use open-source RAG template with modular support is still missing. That's where Cognita comes in.

Key benefits of Cognita:

1. Central repository for parsers, loaders, embedders, and retrievers.
2. User-friendly UI that empowers non-technical users to upload documents and engage in Q&A.
3. Fully API-driven for seamless integration with other systems.
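To make benefit 1 concrete, here is a minimal sketch of what a component registry pattern can look like. The names (`PARSERS`, `register_parser`, `MarkdownParser`) are illustrative only, not Cognita's actual API; see the repo for the real registration mechanism.

```python
# Hypothetical sketch of a central component registry; Cognita's real
# registration API may differ.
PARSERS = {}

def register_parser(name):
    """Decorator that adds a parser class to the central registry."""
    def wrapper(cls):
        PARSERS[name] = cls
        return cls
    return wrapper

@register_parser("markdown")
class MarkdownParser:
    def parse(self, raw: str) -> list[str]:
        # Naive chunking: split on blank lines.
        return [chunk for chunk in raw.split("\n\n") if chunk.strip()]

# Pick a parser by name at runtime, as a config-driven pipeline would.
parser = PARSERS["markdown"]()
chunks = parser.parse("# Title\n\nFirst paragraph.\n\nSecond paragraph.")
```

The same pattern extends to loaders, embedders, and retrievers: each lives in one registry, and a config file selects which implementation runs.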

We invite you to explore Cognita and share your feedback as we refine and expand its capabilities. Interested in contributing? Join the journey at https://www.truefoundry.com/cognita-launch.




Congrats on the launch!

I find it relevant to what I want to do next, so I put in some time to understand the application versus other tools, e.g. LangChain. If my understanding is correct, what this tries to do is:

For a lot of typical web services, there are non-realtime, batch-processing data pipelines, e.g. a search engine's crawler and indexer, a database's OLAP system, Hadoop, Spark, etc. Once their processing is done, they output data in a relevant, easy-to-use form for real-time web services to consume, e.g. a search engine's index, or a list of an e-commerce site's best-selling items.

If we extend this analogy to today's LLM RAG applications and compare with an out-of-the-box LangChain or LlamaIndex implementation, we realize everything runs in one process. Of course, for demo purposes, it has to.

Cognita tries to fit in by splitting the process into real-time and non-real-time parts on top of existing LangChain and LlamaIndex, and comes with an API endpoint for each part plus a web UI for user querying.
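In other words, the shape is roughly: an offline ingestion step that writes to the vector store, and a separate online query path that only reads from it. A toy sketch of that split (my own illustration, with a dict standing in for Qdrant/SingleStore and a fake "embedding"; not Cognita's code):

```python
# Illustrative sketch of the offline/online split described above.
VECTOR_STORE = {}  # stand-in for Qdrant/SingleStore

def ingest(doc_id: str, text: str) -> None:
    """Offline path: parse, embed, and index. Run on a schedule."""
    embedding = [float(len(w)) for w in text.split()]  # toy "embedding"
    VECTOR_STORE[doc_id] = (embedding, text)

def query(question: str) -> str:
    """Online path: retrieves only; never re-indexes."""
    # Toy retrieval: return the longest stored document.
    best = max(VECTOR_STORE.values(), key=lambda pair: len(pair[1]))
    return best[1]

ingest("doc1", "short note")
ingest("doc2", "a much longer internal wiki page about deployments")
answer = query("how do we deploy?")
```

The point is that `ingest` and `query` can live in separate processes with separate API endpoints, so re-indexing never blocks user queries.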

For my use case, I'm looking into setting up a very basic RAG-based internal doc QA app, to see if it helps with some of our notoriously bad wikis. So I'm likely going to use this UI and just shovel whatever simple LangChain or LlamaIndex implementation into it; I'm not that interested in the modular design.

Honestly, I can see a couple of different ways each market segment approaches such a problem: for demos, mainly-static documents, or low-stakes applications, the need to periodically refresh the vector DB is non-existent; companies with enough engineering expertise will likely put the data processing part into their existing data processing framework; and the remaining segment can probably get away with putting the whole offline data processing into a very long Python script, setting up cron, and calling it a day.

---

I haven't looked into RAG in a year or so, but my overall impression is this:

1. The RAG layer (on top of a vector DB) isn't technically difficult compared with, say, OS development or database development; after all, text manipulation has been around since the 60s.
2. Since LLM generation is very sensitive to the prompt, an early, too-rigid abstraction likely does more harm than good.


You might want to look at https://github.com/danswer-ai/danswer as well, as it sounds like their UI might be better suited for your use case.


Cognita works on top of LangChain! For your use case, you might not even need to develop anything: just index your data and you're good to go.

Try out different retrievers and test the accuracy and effectiveness for your use-case.


Hello, a very interesting project. Congratulations on putting everything together. I have shared some thoughts in the discussions section of the Cognita GitHub repo: https://github.com/truefoundry/cognita/discussions/146 It would be great if the maintainers could reply.


Sure! I’ll check those :) Thank you for the suggestions; hoping for some awesome contributions from you :P

Thanks, looking forward to your answers

Congratulations on the launch! Will give this a try!

We were looking for a solution that would help our team test out the LLMs & prompts for repeatability and identifying edge cases.

The UI looks interesting, like a playground on top of the RAG framework, allowing the team to test out various prompts / configurations to handle edge cases, without requiring a lot of tech bandwidth!


Yeah! Do give it a try :) Experiment and develop great use cases!

Looks like a great product. I'll have to give it a try!

I like that the product seems to solve the RAG need only and not be an "everything framework" for LLMs. It makes for a richer-seeming product for RAG while leaving other aspects of AI apps open for the user to choose their approach.


Yes, the product is intended to solve specifically for the RAG use case in production.


Whatever you do, never say "free software"!!!

That "freedom" stuff is communism...


Agreed, we should acknowledge that every open-source release by a company carries some intent to drive adoption of their core platform!

Does a "web" data source only scrape the individual page or linked pages as well? I'm assuming the former. What would be the least painful way to ingest a knowledgebase (say a wiki-like site) from the web?

It can scrape linked pages too by defining the depth, but make sure the depth parameter isn't too large, or it will consume too much memory and time.

Playing around with the UI, I cannot see where that depth would be set. Is it not a per-datasource variable?

Is the "scrape linked pages" configured to be "sandboxed" within a url hierarchy (so adding example.com/foo/ would add all linked pages that are also under example.com/foo/) or not (so it would also include linked pages to other domains or subfolders)?
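For reference, the crawl behaviour I'd want is roughly this (a toy sketch in Python with a fake link graph; not Cognita's actual implementation, just the "sandboxed within a URL prefix, up to a depth limit" semantics I'm asking about):

```python
# Toy sketch of sandboxed, depth-limited crawling.
# Fake link graph standing in for fetched pages.
LINKS = {
    "https://example.com/foo/": ["https://example.com/foo/a", "https://other.com/x"],
    "https://example.com/foo/a": ["https://example.com/foo/b"],
    "https://example.com/foo/b": [],
}

def crawl(root: str, max_depth: int) -> set[str]:
    """Visit pages under `root` only, up to `max_depth` hops away."""
    seen, frontier = set(), [(root, 0)]
    while frontier:
        url, depth = frontier.pop()
        if url in seen or depth > max_depth:
            continue
        if not url.startswith(root):  # the "sandbox" rule
            continue
        seen.add(url)
        for link in LINKS.get(url, []):
            frontier.append((link, depth + 1))
    return seen
```

With this behaviour, adding example.com/foo/ never pulls in other.com, and the depth cap bounds memory and time.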


This product appears to be promising. I'm intrigued to test it out. I appreciate that it focuses solely on addressing the RAG requirement and doesn't attempt to be a one-size-fits-all solution for LLMs.


Indeed! There is no one-size-fits-all; the more you customise, the closer you get to your use case!

Interesting, is there any feature roadmap for future reference?

Hey Hitesh, thanks to our contributors, we've introduced some exciting new features to Cognita:

1. Added a VLM-based PDF parser.
2. Integrated an intelligent summary query controller. Now you can input multiple questions at once, and the controller will break them down into individual queries, answering each in a streaming format. Finally, it provides a summary of all responses.

Roadmap / Anticipated Contribution Scope:

1. Enabling hybrid and sparse vector search support
2. Implementing embedding quantization support
3. Integrating with graph DBs and relevant retrievers
4. Enabling RAG evaluation across various retrievers
5. Implementing RAG visualization features

...and many other enhancements are awaiting.

Excited for the community's backing! Let's maintain the momentum of open source.


Congratulations and good luck. Will give this a try!


Thanks! Awaiting your feedback.

Many of the links are broken and lead to https://www.truefoundry.com/cognita-launch#

I tried on Firefox and Chrome.

I would make the GitHub link more prominent.

Congratulations and good luck.


Thanks for highlighting that! Here’s the GitHub link: https://github.com/truefoundry/cognita


Congrats on the launch Supreet! Can you talk about how Cognita compares against competitors like RAGFlow?


While a lot of RAG frameworks like RAGFlow, LangChain, and LlamaIndex help in the development phase of RAG, Cognita is developed to help productionize them well. In fact, it’s not Cognita or the others, but Cognita with the others. Cognita leverages existing, amazing open-source frameworks and helps you organize the code in a manner that is easy to productionize.

The API endpoints for all modules are a major plus. Besides, the UI for testing out different configurations is helpful for debugging, improvement, and sharing with the rest of the world.


Congratulations on the launch. I am building a GenAI application. Will explore it.

You could try it locally, or even a hosted version, Vivek. Let us know if you face any issues. For early-stage start-ups, there's a free tier that operates by connecting to any of your cloud accounts.

What’s best practice to integrate this in a Ruby on Rails application?


It seems to be a Python app, so probably set it up as a separate microservice with its own REST API.


Best practice is to NOT integrate this in a Ruby on Rails application.


But you can run Cognita as is and you’ll get a FastAPI server up and running. With that, you can utilize the REST endpoints from your Ruby app.
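To make that concrete, here's a hedged sketch of building such a REST call, shown in Python for illustration; a Ruby app would do the same with Net::HTTP. The endpoint path and payload shape here are hypothetical assumptions, not Cognita's documented schema; check the FastAPI /docs page of a running server for the real one.

```python
# Sketch of calling a hypothetical Cognita query endpoint from another
# service. The path "/retrievers/answer" and the {"query": ...} payload
# are assumptions for illustration only.
import json
from urllib.request import Request

def build_query_request(base_url: str, question: str) -> Request:
    payload = json.dumps({"query": question}).encode()
    return Request(
        f"{base_url}/retrievers/answer",  # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_query_request("http://localhost:8000", "How do we deploy?")
# urllib.request.urlopen(req) would send it; from Rails you'd issue the
# equivalent POST with Net::HTTP or Faraday.
```

Since the integration boundary is plain HTTP+JSON, the Rails side stays framework-agnostic.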


What is RAG?


Retrieval Augmented Generation.

The best explanation I can give as a non-expert is: it's used when you have a general-purpose LLM but want to give it some domain-specific knowledge. The query sent to the LLM is run through what's effectively a search engine that catches relevant terms etc, to find useful snippets of knowledge to send to the LLM alongside the query, so the query is _augmented_ with potentially useful information for answering the query.
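A toy sketch of that flow, with keyword overlap standing in for real vector search and a stub in place of the LLM call (all names here are made up for illustration):

```python
# Minimal RAG flow: retrieve relevant snippets, then prepend them to
# the prompt before "calling the model".
DOCS = [
    "Deploys go through the staging cluster first.",
    "Lunch menu: soup on Mondays.",
    "Rollbacks are triggered from the deploy dashboard.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank docs by word overlap with the query (toy retrieval)."""
    words = set(query.lower().split())
    scored = sorted(DOCS, key=lambda d: -len(words & set(d.lower().split())))
    return scored[:k]

def llm(prompt: str) -> str:
    """Stub: a real app would call a model API with the prompt."""
    return prompt.splitlines()[-1]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}"
    return llm(prompt)

result = answer("deploys and rollbacks")
```

The "augmentation" is just the `Context:` block: the model sees the retrieved snippets alongside the question instead of having to rely on what it memorized in training.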


And really, almost always it's because LLMs are really good at summarization, OK at extrapolation, and generally lie a lot otherwise.




