Show HN: Reor – An AI note-taking app that runs models locally (github.com/reorproject)
411 points by samlhuillier 9 months ago | 102 comments
Reor is an open-source AI note-taking app that runs models locally.

The four main things to know are:

1. Notes are connected automatically with vector search: you can do semantic search, and related notes are linked for you automatically.

2. You can do RAG Q&A on your notes using the local LLM of your choice.

3. Embedding model, LLM, vector db and files are all run or stored locally.

4. Point it to a directory of markdown files (like an Obsidian vault) and it works seamlessly alongside Obsidian.

Under the hood, Reor uses Llama.cpp (via the node-llama-cpp integration), Transformers.js and LanceDB to power the local AI features.
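
For the curious, the core loop is conceptually something like this (a simplified sketch, not the actual Reor code; it assumes the Xenova/all-MiniLM-L6-v2 embedding model and LanceDB's vectordb Node client):

    import { pipeline } from "@xenova/transformers";
    import * as lancedb from "vectordb";

    // Embed note chunks locally with Transformers.js.
    const embedder = await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2");
    const vectorOf = async (text: string): Promise<number[]> => {
      const out = await embedder(text, { pooling: "mean", normalize: true });
      return Array.from(out.data as Float32Array);
    };

    // Store chunks + vectors in a local LanceDB table on disk.
    const db = await lancedb.connect("./.reor-vector-db");
    const table = await db.createTable("notes", [
      { vector: await vectorOf("My first note"), text: "My first note", path: "first.md" },
    ]);

    // Semantic search: embed the query, pull the nearest chunks,
    // then hand them to node-llama-cpp as context for generation.
    const hits = await table.search(await vectorOf("what did I write about X?")).limit(5).execute();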

Reor was built right from the start to support local models. The future of knowledge management involves using lots of AI to organize pieces of knowledge - but crucially, that AI should run as much as possible privately & locally.

It's available for Mac, Windows & Linux on the project's GitHub: https://github.com/reorproject/reor




This is a good reminder of why storing Obsidian notes as individual Markdown files is much more useful than stuffing those notes in a database and having Markdown as an export format. The direct manipulation of files allows multiple apps to coexist and do useful things on top of the same files.


That was the reason I gave up on Joplin very quickly. The last Joplin thread here on Hacker News also showed once again that some people still do not understand why "But Joplin can export Markdown from the database!" is not the same as simple, flat Markdown files.


Last time I used Joplin (many years ago) it stored notes as flat Markdown files with YAML headers. I stopped using it because it gave me lots of headaches, and at the end of the day your favorite file browser + your favorite text editor is a far superior solution to a jack of all trades that excels at none. My notes stack is neovim + fzf + git.


Yeah, that's also why I dropped it. Got too complicated when I wanted to start linking my notes into my work timesheets.


May I ask what you switched to? Running into the same issue.


Not OP but Obsidian (as mentioned) and Logseq are both good options.


On desktop, my favorite text editor plus my favorite file browser. On Android, Markor.


Obsidian is what I stuck with after seriously trying 10+ notetaking apps.

It has native clients for Linux and Android, and all files are plain MD.

I can freely sync everything with third party apps. Total freedom.


Not OP, but I switched to Notable, which also uses plain markdown.


I use Obsidian and MarkDownload (a browser plugin).


It's very possible to have multiple apps coexisting using a database. Although I'll certainly concede that it's probably a lot easier with just a bunch of Markdown files.


Sure, it's possible, but whichever app owns the database ultimately controls the data, the schema, etc. The file system provides a neutral database that all apps can cooperate within.


I guess what really matters is ultimate ownership of the data. If it's an SQLite-like DB or a bunch of markdown files on my machine, I can work with them; but if it's in a cloud (someone else's computer), then I'm doomed.


Why does any app have to "own" the database? I don't see this as being a restriction any more than it is with Markdown. Arguably even less so with a database since you have access to transactions.


If you made a new app what is "the" database you would write to? There isn't an existing standard for this aside from the file system.


That's a fair point in the sense that you're always guaranteed to have some file system already :)


Yes it was one of the best product decisions y'all made. Been so useful to have direct access to the files and options on how my data is processed and backed up.


The OP and your comment just made me cancel my Milanote subscription, export all my notes to markdown and start using Obsidian (to later experiment with this Reor).

As a side effect, I just noticed that I prefer a long markdown file with proper headings (and an outline on the side) to Milanote's board view, which initially felt like a freer form better suited to the unorganized thoughts and ideas for writing that I had (I use it for my fiction writing).

I can still have documents as a list of loose thoughts, but once I am ready to organize my ideas, I just use well-written, organized headers and edit the content, and now I have a really useful view of my idea.


Is a filesystem not a database with a varchar unique primary key, a blob data attribute and a few more metadata fields?


Files seem less useful for small bits of information. I feel the urge to fill a file with a minimum threshold. A database makes more sense for that.


>I feel the urge to fill a file with a minimum threshold.

Honestly that's more you subjectively than database v files.


Everything about database v. files is subjective like that. A filesystem is a database, just with a more established tradition around schemas and usage patterns, and system-level APIs.

On the other hand, you get to implement concurrent access yourself. Multiple apps working on the same files simultaneously only works when none of them makes a mistake with caching or locking.
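
In practice that usually means advisory locking, which only protects the apps that opt in. A rough sketch in Node (assuming the proper-lockfile package):

    import lockfile from "proper-lockfile";
    import { promises as fs } from "fs";

    // Advisory lock: cooperating apps take the lock before writing,
    // but an app that ignores the convention can still clobber the file.
    const release = await lockfile.lock("notes/today.md", { retries: 3 });
    try {
      await fs.appendFile("notes/today.md", "\n- new entry");
    } finally {
      await release();
    }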


I got an iOS journaling app in beta. It's offline: no sign-in, no lock-in, no social features, etc. Saves to plain text. Syncs to your desktop if needed.

https://xenodium.com/an-ios-journaling-app-powered-by-org-pl...


Absolutely! Really respect the work you folks are doing.


"crucially, that AI should run as much as possible privately & locally"

Just wanted to say thank you so much for this perspective and fighting the good fight.


Thank you!


Great job!

I played around with this on a couple of small knowledge bases using an open Hermes model I had downloaded. The "related notes" feature didn't provide much value in my experience; often the links were so weak they were nonsensical. The Q&A mode was surprisingly helpful for querying notes and providing overviews, but asking anything specific typically just resulted in unhelpful or false answers. I'm sure this could be improved with a better model etc.

As a concept, I strongly support the development of private, locally-run knowledge management tools. Ideally, these solutions should prioritise user data privacy and interoperability, allowing users to easily export and migrate their notes if a new service better fits their needs. Or better yet, be completely local, but have functionality for 'plugins' so a user can import their own models or combine plugins. A bit like how Obsidian[1] allows for user-created plugins to enable similar functionality to Reor, such as the Obsidian-LLM[2] plugin.

[1] https://obsidian.md/ [2] https://github.com/zatevakhin/obsidian-local-llm


Yeah, this is exciting - I'd much rather have it as a plugin for Obsidian though! I have my workflow with that, all the features I need. Having some separate AI notes app isn't what I would like to use.


Thank you for your feedback!

Working hard on improving the chunking to improve the related notes section. RAG is fairly naive right now, with lots of improvements coming in the next few weeks.


I left an issue explaining this in more detail, but I don't think the problem is chunking. The issue is the prompt. The local LLM space does itself no favors by treating prompts as an afterthought.

IME, the prompt should be front and center in terms of importance; it's the key to unlocking the model's potential. It's one of the main reasons why Textgen-Webui is sooooo good. You can really dial in the prompt, from the template itself to working with the system message, then begin futzing with the myriad of other parameters to achieve fantastic results.
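
For RAG the skeleton of the prompt matters as much as the sampling parameters. Something in this spirit (a hypothetical sketch; the template has to match whatever format the model was fine-tuned on, e.g. ChatML for OpenHermes):

    // Hypothetical ChatML-style RAG prompt builder - the wording and
    // structure are the point here, not the code.
    function buildPrompt(question: string, chunks: string[]): string {
      return [
        "<|im_start|>system",
        "You answer strictly from the user's notes. If the notes do not",
        "contain the answer, say so - do not guess.<|im_end|>",
        "<|im_start|>user",
        "Notes:",
        ...chunks.map((c, i) => `[${i + 1}] ${c}`),
        "",
        `Question: ${question}<|im_end|>`,
        "<|im_start|>assistant",
      ].join("\n");
    }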


Which model exactly did you use and how large? I feel like even the best 7b models are just a bit too dumb for most things that I have tried. A 70b model or Mixtral or sometimes 34b seem to be adequate for some things. But those are several times larger and don't run on my oldish hardware.


OpenHermes 2.5 Mistral 7B


Interesting project, wishing you all the best!

If you are using Obsidian, Smart Connections v2 (1) also supports local embeddings and shows related notes based on semantic similarity.

It's not super great on bi/multi-lingual vaults (DE + EN in my case), but it's improving rapidly and might soon support embedding models that cater for these cases as well.

(1) https://github.com/brianpetro/obsidian-smart-connections


Does the future of knowledge management involve using lots of AI to organize pieces of knowledge?

I think "here be dragons", and that over-relying on AI to do all your organization for you will very possibly (probably?) cause you to become worse at thinking.

No data to back this up because it is still early days in the proliferation of such tools, but historically making learning and thinking and "knowledge management" more passive does not improve outcomes.


> I think "here be dragons", and that over-relying on AI to do all your organization for you will very possibly (probably?) cause you to become worse at thinking.

Socrates said exactly this.

But when they came to writing, Theuth said: “O King, here is something that, once learned, will make the Egyptians wiser and will improve their memory; I have discovered a potion for memory and for wisdom.” Thamus, however, replied: “O most expert Theuth, one man can give birth to the elements of an art, but only another can judge how they can benefit or harm those who will use them. And now, since you are the father of writing, your affection for it has made you describe its effects as the opposite of what they really are. In fact, it will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing, which is external and depends on signs that belong to others, instead of trying to remember from the inside, completely on their own. You have not discovered a potion for remembering, but for reminding; you provide your students with the appearance of wisdom, not with its reality. Your invention will enable them to hear many things without being properly taught, and they will imagine that they have come to know much while for the most part they will know nothing. And they will be difficult to get along with, since they will merely appear to be wise instead of really being so.”


Fair, but the difference is that "remembering from the inside" and "writing stuff down" are still both activities that you are doing. And in spite of this quote, writing does make the process of remembering/synthesizing information more active – you are engaging more parts of the brain in order to think about and write down the material. We have seen this on fMRIs, and there is a decent amount of evidence that handwriting works even better for this than typing, due to the higher level of spatial awareness involved (that's the theory).

An AI doing the work for you is the opposite of that.


> > I think "here be dragons", and that over-relying on AI [...]

> Socrates said exactly this.

I roughly recalled where you were going with that, but I couldn't help doing a spit take at it, given some of the quotes he does get credited with!


So if you only converse with LLMs (and never write or read anything), is the problem solved?


I don't think the problem would be becoming worse at thinking, but I see a possible related problem. Each one of us has their own way of organizing things, one that looks logical to us but not necessarily to others: think about how you organize things inside your home vs. where other people put their stuff. A similar issue could arise with AI tools, which will classify and organize documents based on their logic, which doesn't necessarily align with ours.


I agree with this.

In some cases, hard thinking and searching for things manually can really enhance understanding and build your knowledge.

In other cases, particularly when ideating for example, you want to be given "inspiration" from related ideas so you can build upon ideas you've had previously.

I think it's a mix of both - reaching for AI as and when you need it - but avoiding it intentionally at times as well.


Honest discussion point: do you think organisational stuff is important thinking? IME it's precisely this sort of stuff that distracts me from thinking about hard stuff - the urgent displacing the important.


You’ve discovered the dirty secret of PKM… it’s most useful for shuffling stuff around and feeling productive to avoid doing real work


I think you want to organize your own knowledge graph and then use the LLM to find novel connections or answer questions based upon it.


But if you are the one finding connections in your knowledge graph, then the neurons are not only connected on your machine but in your brain as well.

Probably a moot point once we have brain-machine interfaces, but we're not quite there yet.


Some suggestions:

- Create multiple independent "vaults" (like Obsidian).

- Append links to related notes, so you can use (Obsidian's) graph view to map the AI connections.

- "Minimize" the UI to just the chat window.

- Read other formats (mainly pdfs).

- Integrate with browser history/bookmarks (maybe just a script to manually import them as markdown?)

Thanks for Reor !


Thanks for your feedback!

- Multiple vaults is in fact in a PR right now: https://github.com/reorproject/reor/pull/28

- Manual linking is coming.

- Minimizing the UI to chat is interesting. Right now I guess you can drag chat to cover anything - but yes perhaps a toggle between two modes could be interesting.

- Reading other formats is also in the pipeline. Just need to sort out the editor itself to support something like this. Perhaps PDFs would just be embedded into the vector DB but not accessible to the editor.

- Integrating with browser history and bookmarks is a big feature. Things like web clipping and bringing in context from different places are interesting...


The problem with PDFs is that text isn't necessarily text. Most RAG implementations that support them either don't do any OCR at all or use local offline OCR implementations that have really low accuracy.
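
For reference, the typical local pipeline is roughly this (a sketch assuming tesseract.js; PDF pages first have to be rasterized to images, e.g. with pdfjs-dist, and render DPI plus scan quality is where most of the accuracy goes):

    import Tesseract from "tesseract.js";

    // OCR one rasterized PDF page, fully offline. "page-001.png" is a
    // hypothetical image produced by a separate PDF-to-image step.
    const { data } = await Tesseract.recognize("page-001.png", "eng");
    console.log(data.text); // this text is what would get chunked and embedded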


Literally yesterday I spun up a project with the intent to build something exactly like this for Obsidian.

Excited to see something already far more realized, and I’m looking forward to trying this out.

I’ve been working on a larger than small writing project using Obsidian, and my ultimate goal is to have conversations with the corpus of what I’ve written, and to use this to hone ideas and experiment with new ways of exploring the content.

Not sure if local LLMs are powerful enough yet to enable meaningful/reliable outcomes, but this is the kind of stuff that really excites me about the future of this tech.


There are these plugins:

https://github.com/zatevakhin/obsidian-local-llm

https://github.com/hinterdupfinger/obsidian-ollama

Which already exist and if nothing else are decent starting points.

> Not sure if local LLMs are powerful enough yet to enable meaningful/reliable outcomes

I've dabbled, briefly, with Ollama running Mistral locally on an M1 MacBook Pro with 32GB of unified memory, and throwing a couple of hundred markdown documents at it via RAG resulted in quite decent output to prompts asking questions about abstract contents/summaries based on those docs.

So I'd say we're already at a point where you can have meaningful outcomes; reliability is a whole other issue though.
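
The glue code for that experiment is tiny these days. Roughly (a sketch using the ollama npm client; the chunk retrieval is assumed to have happened already):

    import ollama from "ollama";

    // Pretend these came back from a vector search over the markdown docs.
    const retrievedChunks = ["2023-05-01: decided to migrate the blog to Hugo ..."];
    const question = "What did I decide about the blog?";

    const res = await ollama.chat({
      model: "mistral", // any model you've pulled locally
      messages: [
        { role: "system", content: "Answer only from the provided notes." },
        { role: "user", content: `Notes:\n${retrievedChunks.join("\n---\n")}\n\nQuestion: ${question}` },
      ],
    });
    console.log(res.message.content);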


Thanks for sharing these; I’ll definitely check these out. I somehow missed these during my initial search for similar projects.

I recently got my hands on an RTX 3090 for my Linux workstation and I’m planning to try getting some kind of remote setup going for my MacBook Air.

Great to hear about decent output. Reliability is negotiable as long as there’s some value and hopefully a path to future improvements.


A starting point for that might be the way I am doing it. I am using LocalAI to expose OpenAI-compatible endpoints and then access those via Tailscale.
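
Concretely, any OpenAI client can just be pointed at the tailnet hostname. A sketch (the hostname and model name here are hypothetical; LocalAI listens on port 8080 by default):

    import OpenAI from "openai";

    // LocalAI speaks the OpenAI API, so the official client works as-is.
    const client = new OpenAI({
      baseURL: "http://workstation:8080/v1", // Tailscale MagicDNS name
      apiKey: "not-needed-locally",          // LocalAI ignores it by default
    });

    const res = await client.chat.completions.create({
      model: "mistral-7b-instruct", // whatever LocalAI is configured to serve
      messages: [{ role: "user", content: "Summarize my meeting notes." }],
    });
    console.log(res.choices[0].message.content);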


Great to see something like this actualized. I'm a huge fan of Obsidian and its graph-based connections for note-taking.

I always see parallels drawn between Obsidian note structures and the whole "2nd brain" idea for personal knowledge management; it seemed like a natural next step to implement note retrieval for intelligent references. Will have to check this out.


Super interesting project. I like its focus. Wondering if the author looked into CozoDB, or other databases that combine vector + graph/triples, since neuro-symbolic is probably the best path. https://docs.cozodb.org/en/latest/releases/v0.6.html talks about this idea.


Interesting. Thanks for sharing, will take a look!


Extremely interesting read, thanks for sharing.


I have been looking for a while for a better way to take notes; what I was using worked fine, but it did tend to end up being a black hole.

I just downloaded this, and I realize it's still a new tool. But I think a critical feature needs to be context: the ability to have completely separate contexts of notes, maybe even completely different databases.

That way, notes that sound similar to an LLM but are contextually different don't get brought up. I figured that's what "new directory" did, but it does not appear that way.

So are there any plans to implement a database switcher? I can't find a way to change where it is right now.

But doing some quick tests importing some notes into it, it does seem very promising and I really like where you are taking it. It just confuses notes that should be in distinct contexts.

Edit: I see this is already in PR! Awesome.


Which local model works best for folks? I'm sort of intimidated by the large number of models on Hugging Face, and it's hard to conceptualize which of the variants works best.

I downloaded:

mistral-7b-v0.1.Q4_K_M.gguf (Q4_K_M, 4-bit, 4.37 GB file, ~6.87 GB RAM; "medium, balanced quality - recommended")

Was that a good choice?


Yes (imho); just be sure to get the instruct or chat version of any LLM you try. There is an awesome snapshot of what models people are using here:

https://openrouter.ai/rankings


I really like this idea and the app, but beware: when used on your existing Logseq folder, it will mess up the structure/indentation/bullet points of the notes.


Really wish I'd read this before I gave it a go.


Don't you use git with Logseq?


I think I struggle to see any application of LLMs for my notes that wouldn't, in practice, be just as easily implemented as a search facility.

My main challenge with my notes (that I've been collecting for about 15 years) is remembering to consult them before I google.

I suppose a unified interface to both my notes via LLM and internet search would help, but then I get that with my Apple Notes and the Mac's systemwide search, if I remember to use it.


It's not the application of LLMs for your notes, it's the application of your notes for an LLM. Like if you're running a custom code-generation LLM, it could refer back to parts of your notes using retrieval-augmented generation to get some more context on the work you're having it do.

But yes, a good application is probably a ways away. Still, LLM vector embeddings make a good search engine pretty easy to implement, especially if you're working with small sets of well curated data where exact keyword matching might not work great.

Like if you search for "happy" you could get your happiest journal entries, even if none of them explicitly mention the word happy.
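
The core of it is just cosine similarity over embedding vectors. A sketch (embed() here stands in for any local embedding model):

    // Cosine similarity: ~1 means same meaning-direction, ~0 means unrelated.
    const cosine = (a: number[], b: number[]): number => {
      let dot = 0, na = 0, nb = 0;
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        na += a[i] * a[i];
        nb += b[i] * b[i];
      }
      return dot / (Math.sqrt(na) * Math.sqrt(nb));
    };

    declare function embed(text: string): Promise<number[]>; // any local model

    // "Great day at the beach" should outrank the traffic entry for the
    // query "happy", even though neither entry contains the word.
    const texts = ["Great day at the beach", "Stuck in traffic for hours"];
    const entries = await Promise.all(
      texts.map(async (text) => ({ text, vector: await embed(text) })),
    );
    const query = await embed("happy");
    const ranked = entries
      .map((e) => ({ ...e, score: cosine(query, e.vector) }))
      .sort((a, b) => b.score - a.score);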


Interesting. Yes I see, I guess :-) Thanks for the reply.


Can I still just run grep on my notes? Not trying to be snide, just wondering if the raw text remains available for simple text operations.


I did my usual test for these things - I tossed in the Markdown source for my site, which has 20 years of notes (https://taoofmac.com/static/graph).

Surprisingly, indexing sort of worked. But since I have an index.md per folder (so that media is grouped with text for every note) the editor is confused, and clicking on links always took me to a blank screen.

Also, pretty much every question gives an error message that says "Error: The default context shift strategy did not return a history that fits the context size", likely because there is too much context...

Edit: Fixed most of it by using a Mistral instruct model. But the editor does not know what front matter is (neither when editing nor in previews, where front matter looks like huge heading blocks).


Also, it destroyed front matter on a few files I clicked on, which is a big no-no. Filed that as an issue.


Reor is a really interesting project with admirable goals. I believe this is just the beginning, but you have already done a great job!

I have been working on my note-taking application (https://github.com/dvorka/mindforger) for some time and wanted to go in the same direction. However, I gave up (for now). I used ggerganov/llama.cpp to host LLM models locally on a CPU-only machine with 32GB RAM, and used them for both RAG and note-taking use cases (like https://www.mindforger.com/index-200.html#llm). However, it did not work well for me - the performance was poor (high hardware utilization, long response times, failures, and crashes) and the actual responses were rarely useful (off-topic and impractical responses, hallucinations). I tried llama-2 7B with 4-bit quantization and a couple of similar models. Although I'm not happy about it, I switched to an online commercial LLM because it performs really well in terms of response quality, speed, and affordability. I frequently use the integrated LLM in my note-taking app as it can be used for many things.

Anyway, Reor "only" uses the locally hosted LLM in the generation phase of the RAG, which is a nicely constraint use case. I believe that a really lightweight LLM - I'm thinking about a tiny base model fine-tuned for summarization - could be the way to go (fast, non-hallucinating). I'm really curious to know if you have any suggestions or if you will have any in the future!

As for the vector DB, considering the resource-related problems I mentioned earlier, I was thinking about something similar to facebookresearch/faiss, which, unlike LanceDB, is not a fully-fledged vector DB. Have you made any experiments with similarity search projects or vector DBs? I would be interested in the trade-offs, similar to the small/large/hosted LLM trade-offs.

Overall, I think that both RAG with my personal notes as a corpus and a locally hosted general-purpose LLM for the use cases I mentioned above can take personal note-taking apps to a new level. This is the way! ;)

Good luck with your project!


So if I point this at my existing Obsidian library, what happens? Does this add to existing files, or add new files, to store the output of things generated by the AI? Does the chunking of the files only happen within the vector database? What if I later edit my files in Obsidian and only open up Reor afterwards - does the full chunking happen every time, or can it notice that only a few new files exist?

Just wondering what the interaction might be for someone who uses Obsidian but might turn to this occasionally.


It's a 1:1 filesystem mapping, basically the same thing Obsidian does when you open a vault. You can create new files with Reor, create directories and edit existing files. Chunking happens only in the vector DB, and everything is synced automatically, so you shouldn't notice anything if you reopen Reor after using Obsidian.

In short, yes it'd work seamlessly if you wanted to use it occasionally.
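
That kind of sync is typically just a file watcher feeding the index. A rough sketch (assuming chokidar; the reindexFile/removeFromIndex helpers are hypothetical - chunk, embed, upsert into the vector DB):

    import chokidar from "chokidar";

    declare function reindexFile(path: string): void;     // hypothetical: chunk, embed, upsert
    declare function removeFromIndex(path: string): void; // hypothetical: drop stale vectors

    // Watch the vault; only touched files get re-embedded.
    chokidar.watch("/path/to/vault", { ignoreInitial: true })
      .on("add", reindexFile)         // note created (e.g. by Obsidian)
      .on("change", reindexFile)      // note edited
      .on("unlink", removeFromIndex); // note deleted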


I tried it and it doesn't really work; the models have no knowledge of my notes.

I tried llama-2-7b-chat.Q4_K_M.gguf and phi-2.Q4_K_M.gguf and neither showed any knowledge of the notes I added to the folder.

Does anyone know of a good way to test if it's working (a prompt?) and does anyone know of other projects like this?


I had been researching stuff related to this for some time. Interesting project! Why not an obsidian plugin to tap into the ecosystem?


Two reasons:

1. The libraries I used to run models locally didn't work inside a plugin.

2. I believe AI is a fairly big paradigm shift that requires new software.


Seconded. I like this idea but wouldn't want to trade the Obsidian UI. Would love to see something like this as a plugin.


Seems promising, but I couldn't get it to work at all. Seems like I'm not the only one having issues: https://github.com/reorproject/reor/issues


Running a local LLM on an M1 Mac, this completely locked up my system for minutes. I tried to let it run, because the progress bar was ticking every now and then, but after 10 minutes I gave up and killed it.


You are running the wrong model; get a smaller one. Macs are usually OK performance-wise, since the GPU has access to main memory. A decent Nvidia GPU with lots of memory is still king though.

I run codellama:7b on a MacBook Air, and even autocomplete is partially usable.


I wouldn't recommend it unless you've got at least 16 GB of RAM (though possibly more is needed depending on what model is used).


I do have 16 GB of RAM.


I like the idea. Unfortunately, I could not get it to work on Linux. Making a note caused a crash. Searching notes crashed. LLM chat would crash too. Hope to see it work some time.


Seems cool, but it didn't utilize my GPU? At any rate, definitely a futuristic POC, and a prototype for the way I see desktop software going in the next few years.


Yes unfortunately not implemented yet. Will be coming soon though :)


This is really cool! Something I've actually been thinking about for a while.

Would you mind a pull request that spruces up the design a bit?


Absolutely! Would love your help.


Is there a roadmap for features/improvements that you're wanting to make? What's your vision for the future of the app?


It doesn't seem to view my plain text notes. What file formats are currently supported, if plain text is not?


Just markdown right now; plain text is coming.


Since LLMs are compute-intensive, you should add some hardware requirements to the README.


Is this really fully open source? What is the catch / what is the proprietary part?


No catch pal


Serious question: when do you ever have your own notes and can't find the answer?

I would call it bad note-taking to not be able to recall an answer you put into your notes.

I like the idea of something like this, but I've struggled to find a real use case.


Oh I definitely have cases where I'm quite sure that I have had something in my notes and it takes quite an effort to find them.

It is probably relevant to note that I have more than a decade of notes, and more than half of them are not written by me but are excerpts from various relevant resources, scientific papers, etc.; so there are cases where, for example, I know that many years ago I saw an interesting application of method X and should have it in my notes, but I don't necessarily remember the exact keywords with which it was discussed back then.


You underestimate the number of bad notetakers.

I’m one of them.

This tool might actually make me take better notes.

Maybe.


Could I share an idea (note) with a friend, and we grow the idea together?


Not yet!


How would this run on, say, a M2 Pro MBP with 32GB RAM?


That should be more than enough. I've been running Ollama on an M1 Max with 64GB of RAM without issue.


Wow cool, can I import my OneNote notebooks?!!??


You can use Obsidian to create markdown from OneNote.

https://help.obsidian.md/import/onenote


If you can convert your OneNote notes to markdown files then yes. On startup, you'll be asked to choose your vault directory - which needs to be a directory full of markdown files.


How would I use this as a mobile user?


Wait some years until mobile hardware is sufficiently powerful to run similar models locally.

Or don't do this at all, and rely on cloud models like most other solutions - running things locally has some benefits for privacy, control and perhaps cost, but you can get all of the same functionality without a local model.



