Great point; this is the one thing I wish were more intuitive. You don't have to do this if main.go is the only file in the main package and all other code lives in packages that main imports.


"you must show the notification when you receive it", is it not possible to filter the notifications on the server before sending them down?


That's what they did: a 10-second delay before sending out a "new message" notification, to give it time to be marked as read.
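In other words, hold the push server-side and drop it if a read receipt arrives first. A minimal sketch of that idea (asyncio-based; the read-receipt store and push sender here are made-up placeholders):

    import asyncio

    READ: set[str] = set()  # placeholder read-receipt store

    async def send_push(message_id: str) -> None:
        print(f"push: new message {message_id}")  # stand-in for the real push service

    async def notify_with_grace_period(message_id: str, delay: float = 10.0) -> None:
        # Hold the notification; if the message gets marked read in the
        # meantime, skip the push entirely.
        await asyncio.sleep(delay)
        if message_id not in READ:
            await send_push(message_id)

    asyncio.run(notify_with_grace_period("msg-1", delay=0.1))  # demo with a short delay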


I have been building something similar [1] with the goal of making it easy to save and share things you find. I will not use SaaS for saving content anymore, as I find myself wanting to build larger compositions of work and dealing with APIs just isn't it. Maybe the SaaS works today, but you will always end up wanting a plaintext, offline-first experience.

[1] https://github.com/justshare-io/justshare


This looks cool! A few notes on our approach to Zenfetch:

1. We're not trying to become your personal knowledge management solution; we want to be the layer on top that helps you retrieve and use the knowledge you've stored.

2. The biggest reason we don't do this locally is that we've found hosted models are much more powerful for the knowledge retrieval and synthesis use cases. However, we do hope to move more local over time.


Depending on who your market is (sales leads, researchers, developers), it may not make sense to support local/OSS. Posting this on HN will always bring a bearded dev, me, into the comments to rant about the way things should be, but this doesn't always reflect the path to product success. Right now your tool is too general; find a use case and crush it. Find some inspiration from this ext: https://chrome.google.com/webstore/detail/workona-spaces-tab...


Thanks for the thoughts. I actually use Workona and love it haha


Does this use ML to search the data?


Care less about the idea, and more about the effort. When your goal is to work on something that makes you personally happy, it is something you return to every day organically. The effort you put in doesn't feel like work. Even if you don't reach your goal, what you have learned makes the next objective/project that much easier. The more you learn, the faster you can iterate, whatever the task may be.


^ This human gets it!

In my experience cool ideas are happy accidents that tend to show up as a side effect of doing something else. The more stuff I make, the more likely I am to have a good idea.


That's the spirit!


we love happy accidents


Wow, awesome analogy. I got rejected because my LeetCode solution wasn't fast enough, for a security engineer position. The person who referred me was furious. Meanwhile, another interviewer did just this to me and it let me flex my knowledge.

Would love to see more of this. There is also the inevitable future where someone publishes a book on it and it gets cargo-culted. C'est la vie.


What has always blown my mind is the lack of documentation/open source projects. Given such powerful data we come across while browsing the web, it would only make sense to me that there would be more tools to use and extend in this space. Browsing history is especially undervalued. Even though the data technically exists, it is quite difficult to retrieve pages you have visited, imo because of poor UX. Most people keep every Internet journey open in tabs in hopes they will remember to return to it. I have been taking a stab at improving the UX with a history browser extension [1], which I have legitimately found value in using (a first for my personal projects lol). The raw data is just a SQLite file, by the way; see the sketch below.

[1] https://github.com/lunabrain-ai/lunabrain/tree/main/js/exten...
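For example, you can read Chrome's History database directly (the path below is the usual Linux location and is an assumption; copy the file first, since Chrome locks it while the browser is running):

    import sqlite3

    # Usual Linux location (assumption); copy it first, Chrome locks the live file.
    HISTORY_DB = "/home/you/.config/google-chrome/Default/History"

    # last_visit_time is microseconds since 1601-01-01 (the WebKit epoch);
    # subtracting 11644473600 seconds converts it to a Unix timestamp.
    QUERY = """
        SELECT url, title, last_visit_time / 1000000 - 11644473600 AS visited_unix
        FROM urls ORDER BY last_visit_time DESC LIMIT 20
    """

    with sqlite3.connect(HISTORY_DB) as conn:
        for url, title, visited in conn.execute(QUERY):
            print(visited, title, url)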


what is going on with those Dockerfiles?


They're just used to test different Python environments; two of them are for RPM building and testing on RHEL 8 & 9.

They all play a role in merge requests during (GitHub Actions) pipeline testing and code coverage.


It seems like there are a lot of extensions that are being built for sqlite. I would like to use these extensions, but I am skeptical about their support over time. I like sqlite for how freakin stable it is. How do people feel about sqlite extensions?


My limitation with it is that it means I have to recompile sqlite for my use case. That's sometimes easy and obvious to do, but it's a lot more of a pain if it's, say, the sqlite embedded in my language interpreter and I just signed myself up for compiling a custom Python to support my project.

That said, I just googled it and it turns out I'm being a bit dramatic; it's actually not super hard to dynamically link Python to a custom-built sqlite: https://charlesleifer.com/blog/compiling-sqlite-for-use-with...
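And if the SQLite your Python links against was built with loadable-extension support, you may not need a rebuild at all; the sqlite3 module can load a compiled extension at runtime (some Python builds disable this API entirely, and the extension path here is hypothetical):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.enable_load_extension(True)   # off by default for safety
    conn.load_extension("./stats")     # hypothetical compiled extension (.so/.dylib/.dll)
    conn.enable_load_extension(False)  # turn it back off once loaded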


SQLite being as stable as it is means that even an unmaintained extension will probably continue to work for a very long time.


That is what I was thinking about, which is encouraging.


I guess it depends on what kind of support timescale you're after?

Popular curated extension collections like sqlean (https://github.com/nalgeon/sqlean) seem like they'll have a shelf life of many years.


I didn't understand the purpose of document stores until the past couple of years; they are fabulous for building POCs. Enhanced JSON support will help a lot in making sqlite a suitable document store.

I get full type support by serializing and deserializing protobuf messages from a db column, and now making this column JSONB means I can filter on this column too, instead of having to flatten the searchable data into other columns.
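A sketch of that pattern, with a hypothetical generated note_pb2 module:

    import sqlite3
    from google.protobuf import json_format
    # Hypothetical module generated from:
    #   message Note { string title = 1; string body = 2; }
    from note_pb2 import Note

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, data TEXT)")

    note = Note(title="sqlite tricks", body="json support is handy")
    conn.execute("INSERT INTO notes (data) VALUES (?)",
                 (json_format.MessageToJson(note),))

    # Filter on the JSON itself instead of flattening fields into columns.
    row = conn.execute(
        "SELECT data FROM notes WHERE json_extract(data, '$.title') = ?",
        ("sqlite tricks",)).fetchone()
    restored = json_format.Parse(row[0], Note())  # fully typed again on the way out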


Yeah, as long as you're reading and writing to the database with the same language, and that language has good type safety, having your database schema effectively defined by the same types as the rest of your code is pretty nice for a lot of use cases.

You just have to be vigilant about correctly migrating existing data to the current shape if you ever make breaking changes to types.


This. It would be nice if there were a framework (in Go, or Python with pydantic) that would help me migrate data written with old structs to new structs, and also deal with the transaction; a hand-rolled version might look like the sketch below.

For now I use sqlite to deal with transactions and only make backward-compatible updates to structs. Brittle, but it is a toy app anyway.

(Normally I use Django to deal with models and migrations, but wanted to do something different)
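A minimal pydantic sketch of that hand-rolled approach (model names and the version field are made up):

    from pydantic import BaseModel

    class PersonV1(BaseModel):  # the old shape
        version: int = 1
        name: str

    class PersonV2(BaseModel):  # the current shape
        version: int = 2
        first: str
        last: str

    def migrate(raw: dict) -> PersonV2:
        # Upgrade a stored record, whatever version it was written with.
        if raw.get("version", 1) == 1:
            old = PersonV1(**raw)
            first, _, last = old.name.partition(" ")
            return PersonV2(first=first, last=last)
        return PersonV2(**raw)

    print(migrate({"version": 1, "name": "Ada Lovelace"}))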


Yeah, migrations are the biggest issue for me. I really don't like not knowing what the actual shape of the document will be. Missing transactions and not-great relationship performance make modelling some systems more hassle than it's worth.

I gave using Mongo and Firestore a good go for a few projects, but after a year or two of experimenting I'll be sticking to SQL-based DBs unless there are super clear and obvious benefits to using a document-based model.


There's a gradual approach there, where you start out with a JSONB column, and then as each piece of the data structure stabilizes* you move it out of json fields and into its own columns/tables (sketch below).

* meaning, when there's enough code that depends on it that changing it would require some planning
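Concretely, the promotion step might look like this in SQLite (table and field names are made up; assumes a build with the JSON functions, which is the default in recent versions):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, data TEXT)")
    conn.execute("""INSERT INTO events (data) VALUES ('{"kind": "click", "x": 3}')""")

    # The `kind` field has stabilized, so promote it to a real column
    # and backfill it from the JSON blob.
    conn.execute("ALTER TABLE events ADD COLUMN kind TEXT")
    conn.execute("UPDATE events SET kind = json_extract(data, '$.kind')")

    print(conn.execute("SELECT id, kind FROM events").fetchall())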


This is the way I build all my apps now. Eventually the jsonb field stores nothing as it all gets moved to defined fields.


Alternatively, keep it in the JSON/JSONB column until you need to search/filter/query on it, in which case you pull it out into a column.


Even that may not be immediately necessary. I don't think SQLite has it yet, but Postgres can build partial indexes on JSONB fields (example below).

Though most of the time, in that situation, I would pull it out.
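Something like this (psycopg 3 assumed; the table and field names are made up):

    import psycopg  # assumes an existing `events` table with a jsonb `data` column

    with psycopg.connect("dbname=app") as conn:
        # Expression index on one JSON field, made partial so it only
        # covers rows where that field is actually present.
        conn.execute(
            "CREATE INDEX IF NOT EXISTS events_kind_idx "
            "ON events ((data->>'kind')) "
            "WHERE data->>'kind' IS NOT NULL")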


Most commonly I see people use Alembic to migrate SQLAlchemy and Pydantic models (example below).

But I tend to just use Django. Every time I try piecing together the parts (e.g. FastAPI, Pydantic, Alembic) I reach a point where I realize I'm recreating a half-baked Django, and kick myself for not starting with Django in the first place.
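An Alembic migration module typically looks like this (identifiers here are made up):

    from alembic import op
    import sqlalchemy as sa

    revision = "abc123"    # made-up revision ids
    down_revision = None

    def upgrade() -> None:
        op.add_column("users", sa.Column("last_name", sa.String(), nullable=True))

    def downgrade() -> None:
        op.drop_column("users", "last_name")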


For the past 7 years I have been running a murder mystery competition where you solve cybersecurity challenges [1]. We come up with a story and then build an evidence graph for competitors to traverse. Challenges can be as simple as a Caesar cipher, or more involved, like analyzing a clone of Twitter for clues. The challenge infrastructure is all defined with configs and gets deployed automatically to Kubernetes. It is all open source too if you want to check it out [2].

[1] https://mcpshsf.com/

[2] https://github.com/xctf-io/chalgen

