I started getting very serious about AI when I got early access to GPT-3. Before that I was using Google Dialogflow to build AI-powered chatbots. Since then I've downloaded, installed, and tested literally hundreds of AI-related software applications. At first this was really exciting ... I thought "wow, this software is going to change everything" ... then a few days later another application would be released, and another and another, until I became paralyzed by the number of choices available. This caused a huge amount of stress; for months I had analysis paralysis. Eventually I settled on Supabase/pgvector for RAG (mostly because it provides Row Level Security for data/vectors), and that gave me a strong foundation to build on. Now I use Supabase, the Vercel AI SDK with Svelte, and n8n, with Claude 3.5 Sonnet as my main LLM (OpenAI o1 for more complex tasks, GPT-4o mini for cheaper inference). I also use Cursor (with the same models) every day, which is amazing.
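For anyone wondering what pgvector is actually doing in a RAG setup like this: underneath, retrieval is just nearest-neighbor search over embeddings. Here's a toy, self-contained sketch of that step in plain Python (the made-up 3-d vectors and document names are illustrative; a real stack would get embeddings from a model and let pgvector do the ranking in SQL):

```python
# Toy sketch of RAG retrieval: rank stored documents by cosine
# similarity to a query embedding and return the top-k matches.
import math

def cosine_similarity(a, b):
    # Standard cosine similarity: dot product over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_vec, docs, k=2):
    # docs: list of (doc_id, embedding) pairs; rank by similarity to the query.
    ranked = sorted(docs, key=lambda d: cosine_similarity(query_vec, d[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Made-up embeddings just to show the mechanics.
docs = [
    ("pricing.md",  [0.9, 0.1, 0.0]),
    ("security.md", [0.1, 0.9, 0.2]),
    ("intro.md",    [0.5, 0.5, 0.5]),
]
print(top_k([0.2, 0.8, 0.1], docs))  # security.md ranks first
```

The Row Level Security angle is just that the `docs` table lives in Postgres, so the same `WHERE`-style policies that gate your rows also gate which vectors a user's query is allowed to match against.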
I’ve been running n8n through Onestack.cloud for automation, which really helps simplify some of the workflow chaos. They also host tools like LibreChat, which has been a game changer for all-in-one LLM needs.
Cursor is indeed an amazing tool. I use it daily as well. Have you tried using it with any other LLMs, or are you sticking with the Claude 3.5 Sonnet and OpenAI o1 combo?
Btw have you explored any AI frameworks like JAX or Haiku?
Cursor for sure. If your code is all in one repo (e.g. frontend and backend), it's awesome for context.
I bought the SlimSaaS kit, and I just pump out non-stop products with Cursor. Also, if you use a UI component library, the AI does a much better job at design.
My favourite is Link Suggestions, which recommends relevant pages and databases to link. It helps build a more interconnected knowledge base in my Notion.