Ask HN: Has anyone trained a personal LLM using their personal notes?
138 points by Erazal 5 months ago | 71 comments
Some here comment on notes they've taken over their entire lives, often using powerful note-taking systems.

Of course, there's a ton of note-taking systems out there. Org-Mode [1], Obsidian [2], plain .txt, ...

And it's become quite simple to integrate such systems with LLMs.

Whether it's adding that data to an LLM [3], using the LLM for formatting, or visualizing the notes and using the LLM as a personal partner. For the latter, there's also a ton of open-source UIs, such as Chatbot-ui [4] and Reor [5].

And that's just the tip of the iceberg.

Personally, I haven't been consistent enough through the years in note-taking.

So, I'm really curious to learn more about those of you who were and implemented such pipelines.

I'm sure there's a ton of cool interaction experiences.

[1] https://orgmode.org/ [2] https://obsidian.md/ [3] https://ollama.com/ [4] https://github.com/mckaywrigley/chatbot-ui [5] https://github.com/reorproject/reor




Maybe not exactly what you’re asking, but I started doing talk therapy last year. It’s done virtually and I record the session with OBS. As soon as the recording finishes, the following happens:

- The audio is preprocessed (chunked) and sent to Whisper to generate a transcript

- The transcript is sent to GPT-4 to generate a summary, action items, and concepts introduced, with additional information

- The next meeting’s date/time is added to my calendar

- A chatbot is created that allows me to chat with each session, including having it play the role of the therapist and continue the conversation (with the entire context of what I actually talked about)

It’s been exceedingly helpful to be able to review all my therapy sessions this way.
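For anyone curious, here's a minimal sketch of the first two steps, assuming the official OpenAI Python SDK (the file name and prompt wording are just placeholders):

    # Sketch: transcribe a session recording, then summarize it with GPT-4.
    # Assumes the `openai` package (v1+) and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    # 1. Send the (pre-chunked) audio to Whisper for a transcript.
    with open("session-recording.mp3", "rb") as audio:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=audio
        ).text

    # 2. Ask GPT-4 for a summary, action items, and concepts introduced.
    summary = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": (
                "Summarize this therapy session. List action items and any "
                "concepts the therapist introduced, with short explanations."
            )},
            {"role": "user", "content": transcript},
        ],
    ).choices[0].message.content
    print(summary)

The "chat with each session" bot then mostly amounts to prepending the stored transcript as context for each new conversation.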


I'm sincerely happy you're finding value in this and it's a very impressive workflow. The idea of sending my therapy sessions to OpenAI sounds terrifying though.


I hope you're the patient in this scenario, otherwise this is an egregious HIPAA violation.


It might still be a violation even if they're the patient, unless the therapist's and their employer's consent is given, and of course depending on the relevant jurisdiction (IANAL).


I don’t know about the legality of it, but as a comical skit it’d be hilarious: a patient gets in deep shit with their doctor for violating patient-doctor confidentiality. Sounds straight out of Curb Your Enthusiasm!

Edit: It seems it's straight out of Curb, because it is! https://youtu.be/YH55dFlF_Rg?si=kOLC5rGq5fi8tke2


This is really interesting. Are you comfortable with OpenAI having your personal details in this case?


This is where having our own LLMs and stacks running locally will save and empower us, IMHO.


OpenAI's privacy claims are fine. I wouldn't worry about this any more than I worry about my email provider.


wow - really cool.

I'm actually the founder of an AI Meeting Bot company, and we're thinking of open-sourcing so you could run exactly this setup locally with perfect diarization/recording while also maintaining privacy [1].

I'm currently creating code examples, and just finished the "chat with each session". Would love to know how you implemented it.

[1] https://aimeetingbot.com


We're currently trying out https://aimeetingbot.com. Impressive!


I'm obviously biased as the founder, but check out https://recall.ai/ for a meeting bot API.

We serve over 300 companies, process millions of hours of recording a year, and are officially partnered with Zoom.


Did you discover anything interesting by being able to review all the therapy sessions?


Curious about the code. (A friend is a psychiatrist, and she's noticed difficulties with multiple languages and device translations.)

This flow could help improve fluency in her sessions: e.g., she has an expensive hardware translation device that has significant issues auto-translating, since it misses context a lot.

E.g., when "grieving" is incorrectly translated between Dutch and Polish, it defeats a bit of the purpose of being fluent in your native language.

Reducing the error rate would help a lot.


I’d love to replicate your workflow. Any luck with speaker diarization using Whisper? I’ve tried WhisperX several times, but it didn’t work.


I've created an AI Meeting Bot API to do just that [1].

At the moment it runs on AWS, and we're thinking of open-sourcing so you could also run it locally to maintain 100% privacy of such conversations.

You'd get speaker diarization, with names on top of the recording [2].

[1] https://aimeetingbot.com [2] https://spoke-1.gitbook.io/ai-meeting-bot

Happy to get in touch and have you run it.




I read this as the OP being the patient, not the doctor. Pretty sure a patient is free to violate this confidentiality.


Parent was probably referring to OpenAI having access to the therapy notes once GP sent them, with no legal responsibility to keep them private.


I'm sure OP knows this, thought about the risk, guessed the probabilities, assessed the nature of her therapy sessions, and decided the system she was building is worth the risk of her therapy content being available with an intuitive chat interface.

Not every therapy session is embarrassing, incriminating, or particularly interesting to anyone but you.


Off topic, but where did that gendering come from in your response? I am generally indifferent to personal pronouns, but somehow this came across as particularly jarring, as I guess it felt like a projection of misogynistic views of female weakness/need for therapy/emotionality/etc. They/them might have been more appropriate here.


English is not my first language, so I beg your forgiveness for not using they/them. In my language, there are no gendered pronouns, so it's hard for me to guess the correct way.

When I use he/him, people tell me I assume everyone on HN and tech is a male.

When I use she/her, you come and label me as misogynistic, too.

At some point I just give up. Or, you know, you could also assume positive intent and not label someone misogynistic based on a comment where gender was not at all important to the discussion.


You can use "they" everywhere today, but I don't know whether someone will then accuse you of supporting the trans community.


Another possibility is to use "op", although it can feel a bit forumish.


I did use OP, though I didn't want to write "OP" every time I referenced this person; I thought it would sound strange: OP, OP, OP...

Anyway, I'm glad this thread turned into "how to avoid being called a misogynist in a throwaway comment on HN". I guess from now on I'll filter my comments through Gemini to make sure my English is immaculate and native English speakers don't castigate me for a small mistake.


Mattlondon probably could have done without the color commentary, but I wouldn't take anything you did as wrong in English.

English's lack of a dedicated singular neuter pronoun (beyond singular 'they') sucks.


I've been disciplined (perhaps obsessive at times) with keeping a daily diary for many years and I was interested in being able to query my diary locally via AI. I found a solution that works surprisingly well using GPT4ALL.

I found GPT4ALL (https://gpt4all.io) to have a nice-enough GUI; it runs reasonably quickly on my M1 MacBook Air with 8 GB of RAM, and it can be set up as a completely local solution, not sending your data to the Goliaths.

GPT4ALL has an option to access local documents via the SBERT text-embedding model (RAG).

My specific results have been as follows: using Nous Hermes 2 Mistral DPO and SBERT, I indexed 153 days of my daily writing (most days I write between two and three thousand words).

Asking a simple question like "what are the challenges faced by the author?" produces remarkable, almost spooky results (which I won't share here), which in my opinion are spot-on regarding my own challenges over that period, and SBERT provides references to the documents it used to generate the answer. Options are available to reference an arbitrary number of documents; the default is 10. Ideally I'd like it to reference all 153 documents in the query. I'm not sure if it's a RAM or a token issue, but increasing the number of documents referenced has resulted in machine lock-ups.
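For anyone who'd rather script this than use the GUI, here's a rough sketch of the same local flow, with the gpt4all and sentence-transformers Python packages doing the heavy lifting; the model filename and the diary path are assumptions:

    # Sketch of a LocalDocs-style flow: embed diary entries locally,
    # retrieve the closest ones, and stuff them into a local model's prompt.
    from pathlib import Path
    from gpt4all import GPT4All
    from sentence_transformers import SentenceTransformer, util

    embedder = SentenceTransformer("all-MiniLM-L6-v2")         # small SBERT model
    model = GPT4All("Nous-Hermes-2-Mistral-7B-DPO.Q4_0.gguf")  # assumed filename

    entries = [p.read_text() for p in Path("diary").glob("*.md")]
    vecs = embedder.encode(entries, convert_to_tensor=True)

    def ask(question, k=10):  # k=10 mirrors the plugin's default
        q = embedder.encode(question, convert_to_tensor=True)
        hits = util.semantic_search(q, vecs, top_k=k)[0]
        context = "\n---\n".join(entries[h["corpus_id"]] for h in hits)
        prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
        return model.generate(prompt, max_tokens=512)

    print(ask("What are the challenges faced by the author?"))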

Anyhow - that's my experience - hope it's helpful to someone.


This might seem impressive because of the subjectiveness. I also imagine you aren't mentioning the times it was completely incorrect because you used a negative in the sentence.

This is regular embeddings + LLM.

At the end of the day, you are basically just adding a preprompt to a search. Not to mention, the Mistral models are barely useful for logic.

I'm not really sure what you are getting out of it. I'm wondering if you are reading some mostly generic Mistral output with a few words from your pre-prompt/embedding.


>I also imagine, you arent mentioning the times it was completely incorrect because you used a negative in the sentence.

I haven't yet observed it being completely incorrect - I keep the queries simple without negation.

>This might seem impressive because of the subjectiveness.

It's surprising how it can summarise my relationship with another person, for example. If I ask "who is X?" it will deliver quite a succinct summary of the relationship, using my own words at times.

>I'm not really sure what you are getting out of it.

Mostly it's useful for self-reflection; it's helped me to see challenges I was facing from a more generalised perspective, particularly in my relationships with others. I'm also terribly impressed by the technology: being able to query in natural language and receive a sensible, often insightful response feels like the future to me.


Is the diary digital? I prefer writing on paper, and I'd like to try this. Wonder if there's any decent OCR app that'll help me do it.


Yes, completely digital, in Markdown format. I use iA Writer on the Mac.


Nice one, I'm going to give it a try.

When you say SBERT, do you mean the GPT4All LocalDocs plugin?


Yes, check out advanced settings for the plug-in after it's installed.


Played around with fine-tuning, but ended up just experimenting with RAG.

One thing I haven’t worked out yet is the agent reliably understanding whether it should do a “point retrieval query” or an “aggregation query.”

Point query: embed and do vector lookup with some max N and distance threshold. For example: “Who prepared my 2023 taxes?”

Aggregation query: select a larger collection of documents (1k+) that possibly don’t fit in the context window and reason over the collection. “Summarize all of the correspondence I’ve had with tax preparation agencies over the past 10 years”

The latter may be solved with just a larger max N and larger context window.

Almost like it’s a search lookup vs. a map reduce.
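One crude way to handle the routing is to just ask the model which kind of query it is, then dispatch. A hedged sketch, where embed(), vector_search(), and llm() are hypothetical stand-ins for your embedding model, vector store, and chat model:

    # Sketch: route a question to point retrieval or map-reduce aggregation.
    # embed(), vector_search(), and llm() are hypothetical helpers.

    def route(question: str) -> str:
        verdict = llm(
            "Answer POINT if this question asks for one specific fact, or "
            "AGGREGATE if it requires reading many documents:\n" + question
        )
        return "AGGREGATE" if "AGGREGATE" in verdict else "POINT"

    def answer(question: str) -> str:
        if route(question) == "POINT":
            # Point query: small max N plus a distance threshold.
            docs = vector_search(embed(question), top_n=5, max_distance=0.4)
            return llm(f"Context:\n{docs}\n\nQuestion: {question}")
        # Aggregation query: map over a large selection, then reduce.
        docs = vector_search(embed(question), top_n=1000)
        partials = [llm(f"Summarize w.r.t. {question!r}:\n{d}") for d in docs]
        return llm("Combine these partial answers:\n" + "\n".join(partials))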


Interesting, been waiting for some free time to do this myself.

Mind sharing how you set up your RAG pipeline and which (presumably FOSS) components you incorporated?


I want this for my photos.

I'm not a good photographer, but I have taken tens of thousands of photos of my family. I would love to provide a prompt for a specific day and specific people and have it create a photo that I was never able to take. I don't mind that it's not "real", because I find photography to be philosophically unreal as it is. I want it to look good and inspire my mind to recreate the day however it can imagine.

And I want to do it locally, without giving away my family's data and identity.


This is basically what all the headshot generator apps do. It is pretty simple to achieve if you can spin up GPU instances (https://huggingface.co/docs/diffusers/v0.13.0/en/training/te....)

However, I find it challenging to achieve on MacBooks, despite all the neural-core horsepower I have. If anyone has achieved this with non-NVIDIA setups, I'd love to hear!
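FWIW, Hugging Face's diffusers does run inference on Apple Silicon via PyTorch's "mps" backend; as far as I know, it's the training side that still hurts on non-NVIDIA hardware. A minimal sketch using the stock SD 1.5 checkpoint (the prompt and output path are placeholders):

    # Sketch: run a Stable Diffusion pipeline on an Apple Silicon Mac.
    # Assumes the `diffusers` and `torch` packages are installed.
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    pipe = pipe.to("mps")            # Apple's Metal backend instead of CUDA
    pipe.enable_attention_slicing()  # eases memory pressure on smaller Macs

    image = pipe("a family picnic on the beach, golden hour").images[0]
    image.save("picnic.png")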


Interesting idea.

Completely unrelated, but I just read a sci-fi story in which a technology was developed that could revive dead bodies for a short while so they could pose for family photos that hadn't been taken before the person passed away.

https://clarkesworldmagazine.com/liu_03_23/

That's obviously going way overboard for something AI can do today, but the author probably wrote the story before this became possible.


Going to wait for longer-context local models. Fine-tuning/training is lossy compression of your notes into the model weights; there isn't much value in a vaguely remembered copy of some of my notes. This is why other comments are pointing you towards Retrieval-Augmented Generation instead, where the relevant notes are losslessly added to the prompt.


I think you're right.


PrivateGPT is a nice tool for this. It's not exactly what you're asking for, but it gets part of the way there.

https://github.com/zylon-ai/private-gpt


I used PrivateGPT with our internal markdown-based developer portal; the results are OK-ish, but closer to a fancy search than to a chat.


I’ve tried several different systems; nothing really stands out.

That being said, I’m trying to document as much of my life as I can in anticipation of such programs existing in the near future. I’m not going overboard, but, for example, I never really kept a personal diary, and now I try to jot down something every day: my thought processes on things, what actions were taken and why.

I’m looking forward to a day where I have an AI assistant (locally hosted and under my control of course) who can help me with decision-making based on my previous actions. Would be neat to compare/contrast how I do things now, compared to the future me.


This is also the idea I've been thinking about. At present, I'm trying to do applied research in this area. If it can provide substantial help, it will be very meaningful.


Gianluca Nicoletti, an Italian journalist, writer, and radio host, is training an LLM on all of his writings as a support for his autistic child for when he won’t be here anymore. The software will speak with his voice.

https://www.lospessore.com/13/07/2023/una-chatbot-per-contin...


Very interesting.

Since presumably "all writings" refers to all his writings during his lifetime, I'd hope it can account for those times in his life at which he changend his mind on certain topics?


Yes, I guess that “all writings” (sorry, I meant all of his writings; I'm not a native English speaker) means everything he wrote, and also his speeches, taking into account the times when he changed his mind. Knowing how meticulous he is, I would be surprised if this weren’t the case. Worth considering that he was recently diagnosed with a mild form of autism himself.


Not mine, and not an endorsement (I haven't played with it beyond the initial installation), but Khoj has an open-source offering for this. Check it out: https://khoj.dev


Hi! Founder of Khoj, happy to add more context.

Khoj lets you plug in your Obsidian vault, any plaintext files on your machine, or your Notion workspace. After you share the relevant data, it creates embeddings and uses them for RAG, so you get appropriately contextual responses from your LLM.

This is the best place to start for self-hosting: https://docs.khoj.dev/get-started/setup


I started fine-tuning GPT-3.5 on a decently large corpus of my text messages and emails, and it pretty much generated schizophrenic output. I don’t think I did a very good job of curating the text it ended up fine-tuning on, and I want to try again.


It sounds like you want RAG instead of training or even fine tuning a model.

Have you looked into the OpenAI APIs? They make it relatively easy to do assuming you have some limited programming knowledge.


I'm currently looking to implement RAG locally, using QDrant [1] for instance.

Just playing around for now, but it makes sense to have a runnable example for our users too :) [2].
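In case it's useful, here's a minimal qdrant-client sketch of the index-and-search half, reusing a small local sentence-transformers model for embeddings (the collection name and sample notes are placeholders):

    # Sketch: a local RAG index using Qdrant's in-memory mode.
    from qdrant_client import QdrantClient
    from qdrant_client.models import Distance, PointStruct, VectorParams
    from sentence_transformers import SentenceTransformer

    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim vectors
    client = QdrantClient(":memory:")  # or path="./qdrant-data" for on-disk

    client.create_collection(
        collection_name="notes",
        vectors_config=VectorParams(size=384, distance=Distance.COSINE),
    )

    notes = ["Met the accountant about 2023 taxes.", "Renewed the car insurance."]
    client.upsert(
        collection_name="notes",
        points=[
            PointStruct(id=i, vector=embedder.encode(n).tolist(),
                        payload={"text": n})
            for i, n in enumerate(notes)
        ],
    )

    hits = client.search(collection_name="notes",
                         query_vector=embedder.encode("who did my taxes?").tolist(),
                         limit=3)
    for hit in hits:
        print(hit.score, hit.payload["text"])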

[1] https://qdrant.tech/ [2] https://aimeetingbot.com


I recently took a brief look at the Assistants API (if that's what you're referring to), but it seems relatively new, if I'm not mistaken.


Are you referring to the "Assistants" API?


I was not, that's relatively new, though if I'm not mistaken it might make the process easier.

I mean calling the embeddings API and then having software locally that finds and appends documents to your queries.
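Concretely, something along these lines; a sketch assuming the OpenAI Python SDK and numpy, with the notes list as a placeholder:

    # Sketch: embed documents via OpenAI's embeddings API, find the nearest
    # ones locally with cosine similarity, and append them to the query.
    import numpy as np
    from openai import OpenAI

    client = OpenAI()

    def embed(texts):
        resp = client.embeddings.create(model="text-embedding-3-small",
                                        input=texts)
        return np.array([d.embedding for d in resp.data])

    notes = ["...your notes here...", "...more notes..."]
    note_vecs = embed(notes)

    def ask(question, k=3):
        q = embed([question])[0]
        sims = note_vecs @ q / (np.linalg.norm(note_vecs, axis=1)
                                * np.linalg.norm(q))
        context = "\n".join(notes[i] for i in np.argsort(sims)[-k:])
        return client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user",
                       "content": f"Notes:\n{context}\n\nQuestion: {question}"}],
        ).choices[0].message.content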



Hadn't seen your repo yet [1]; adding it to my list right now.

Your blog post is really neat on top of that; thanks for sharing.

[1] https://github.com/eugeneyan/obsidian-copilot


Folks at GitBook are kind enough to give me an LLM over my notes: https://til.bhupesh.me


Oh damn, I'm actually using GitBook too, for our users [1]. Not open source, but I will definitely try it out ASAP.

[1] https://spoke-1.gitbook.io/ai-meeting-bot


Do you have a Pro plan?


Somewhat related, for those of us who don’t take extensive notes: are there nicely packaged plugins for RAG in email, especially for, e.g., Outlook or Apple Mail?


Has anyone seen or used something that can train on a complete iMessage history?

Presumably, I have more than enough messages from me along with responses from others to chat with a version of myself that bears an incredible likeness to how I speak and think. In some cases, I'd expect to be able to chat with an LLM of a given contact to see how they'd respond to various questions as well.
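In case it helps anyone get started: on macOS the history lives in a local SQLite database, so extracting reply pairs for chat-style fine-tuning data is mostly a query. A sketch (the schema details are from memory, so verify against your own chat.db; the output format targets OpenAI-style chat fine-tuning):

    # Sketch: pull iMessage history from macOS's chat.db and emit chat
    # fine-tuning examples (others' messages as "user", mine as "assistant").
    # Verify the schema locally before relying on this.
    import json, sqlite3
    from pathlib import Path

    db = sqlite3.connect(Path.home() / "Library/Messages/chat.db")
    rows = db.execute(
        "SELECT text, is_from_me FROM message "
        "WHERE text IS NOT NULL ORDER BY date"
    ).fetchall()

    with open("imessage-finetune.jsonl", "w") as out:
        for (prev, prev_mine), (cur, cur_mine) in zip(rows, rows[1:]):
            if not prev_mine and cur_mine:  # someone wrote, then I replied
                out.write(json.dumps({"messages": [
                    {"role": "user", "content": prev},
                    {"role": "assistant", "content": cur},
                ]}) + "\n")

Note this naive pairing ignores which conversation each message belongs to, so you'd want to group by thread (via chat_message_join) before training on it.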


I've implemented a POC on exactly this and am working on something more sophisticated right now. Can I reach out to discuss more?


Yes, I'd gladly trial something like this.

It must run locally and require no network requests. I can run it on an M2 with 24 GB or an M3 with 36 GB.

My email is in my profile here.


Came here to ask the same question. I've played around with a few implementations, but nothing with results that were close to useful.

Anyone had success RAG-ing a chat history??


Not on my notes, but I have used GPT4All to chat with the documentation of Dapr. I downloaded the .md files from the docs GitHub repo and loaded the directory into GPT4All.

It's not "training" a model, but it works pretty great.


I have tried BookStack (https://notes.folktaler.com/) for taking notes, and https://www.danswer.ai/ for retrieving useful information from those notes. You could also use Google Drive or blogs instead of BookStack.


I have a large org-roam note system. I would like to create a pipeline where I can ask natural-language questions, and it will build SQLite queries to efficiently crawl through the database and find what I want. I haven't gotten around to it though.
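A hedged sketch of the shape that pipeline could take, where llm() is a hypothetical call into whatever model you use and the schema summary is approximate (org-roam's actual tables vary between versions, so dump yours first):

    # Sketch: let an LLM write SQLite queries against the org-roam db.
    # The schema hint is approximate; executing model-written SQL blindly
    # is risky, so consider validating it (e.g., allow only SELECT).
    import sqlite3
    from pathlib import Path

    SCHEMA_HINT = """
    Tables (approximate org-roam schema):
      nodes(id, file, title, level)
      links(source, dest, type)
      tags(node_id, tag)
    """

    def ask(question: str):
        sql = llm(f"{SCHEMA_HINT}\nWrite one SQLite SELECT statement "
                  f"answering: {question}\nReturn only the SQL.")
        db = sqlite3.connect(Path.home() / ".emacs.d/org-roam.db")
        return db.execute(sql).fetchall()

    print(ask("Which notes link to the note titled 'GTD'?"))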


Not exactly what you're looking for, but a few months ago I spent a day building a llama-index pipeline against my markdown notes with a really primitive note-crawling implementation, and had surprisingly good results for question answering.

I don't use an org-roam note system but I've been working on a similar and highly opinionated note system that I'm always making tools for. And I'm always interested in seeing people's ideal note systems.

my crude WIP Obsidian / Markdown note RAG tool: https://github.com/bs7280/markdown-embeddings-search


I have heard good things about the Notion AI addon although I haven’t tried it myself.


I've never found it powerful enough; or am I just setting too high a bar?

I have a ton of databases in Notion (with all my team's conversation transcripts, meeting to-dos, etc.), and the global AI search just isn't there.

I haven't found a way there (but have elsewhere using open source) to create a kick-ass search.


I’m literally working on it right now; DM me on X if you wanna pair or something.


I just DM'ed you.



