Hacker News | cricketlover's comments

there's kanban


Do we need another language for querying relational data? Why not just use SQL?


Datalog is a subset of Prolog, both of which predate SQL. By your logic, we should be asking what was the point of SQL.


> what was the point of SQL.

IBM's get-it-out-the-door release of a partial (if not outright broken) implementation of Codd's relational-algebra model, which incorporates parts of predicate logic, set theory, and more besides. It kicked off an industry-wide, multi-vendor revolution in information storage, processing, and retrieval.

...but these days I'm convinced the point of SQL is to give other programming languages something to point to when deflecting criticism: of being slow or unwilling to adopt new features, of being specifically unergonomic, or, most of all, of being entirely unresponsive to the day-to-day needs of present-day application developers.


Datalog is strictly more powerful than SQL, and it makes recursive, self-referential queries far easier to understand than CTEs.
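To make the recursion point concrete, here's a minimal sketch (in Python, with made-up `parent` facts) of the naive fixpoint evaluation a Datalog engine would perform for the classic two-rule program `ancestor(X,Y) :- parent(X,Y).` and `ancestor(X,Z) :- parent(X,Y), ancestor(Y,Z).` — two one-line rules standing in for the boilerplate of a `WITH RECURSIVE` CTE:

```python
# Naive fixpoint evaluation of the Datalog program:
#   ancestor(X, Y) :- parent(X, Y).
#   ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
# (the facts below are invented for illustration)

parent = {("alice", "bob"), ("bob", "carol"), ("carol", "dave")}

def ancestors(parent_facts):
    """Apply the rules repeatedly until no new facts appear (a fixpoint)."""
    anc = set(parent_facts)              # rule 1: every parent is an ancestor
    while True:
        derived = {(x, z)
                   for (x, y) in parent_facts
                   for (y2, z) in anc
                   if y == y2}           # rule 2: join parent with ancestor
        if derived <= anc:               # nothing new: fixpoint reached
            return anc
        anc |= derived

print(sorted(ancestors(parent)))
```

Real engines use semi-naive evaluation (only joining against newly derived facts each round), but the declarative shape is the same.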


Datalog is rad and has some neat ergonomics. It’s not so much a need as a want kind of thing. It’s definitely worth looking into.

I’ve only ever seen it used in Clojure(Script) projects although there’s support in other languages too.


I understand what you're saying, but in the accounting example, we could also solve this problem using NoSQL, because the most important feature we're talking about there is transaction support. Similarly, schema-on-write can be provided by a library.

To me it seems like NoSQL works better when there is less to normalize, which is the case with microservices. Those services struggle whenever they need a distributed transaction. The same problem is easily solved in SQL (assuming the data isn't sharded to completely denormalize everything for performance).

Note that this normalization problem also shows up with schema-on-write: if people from multiple teams contribute to a schema, it becomes hard to maintain.
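A minimal sketch of what "schema-on-write as a library" could look like: validate each document against a declared schema before it reaches a schemaless store. The schema, fields, and in-memory store here are all hypothetical, purely for illustration:

```python
# Hypothetical sketch: application-level schema-on-write in front of a
# schemaless document store (a plain dict stands in for the store).

SCHEMA = {"account_id": str, "amount": int, "currency": str}

def validate(doc, schema=SCHEMA):
    """Reject documents with missing/extra fields or wrong types."""
    extra = set(doc) - set(schema)
    missing = set(schema) - set(doc)
    if extra or missing:
        raise ValueError(f"extra fields: {extra}, missing fields: {missing}")
    for field, expected in schema.items():
        if not isinstance(doc[field], expected):
            raise ValueError(f"{field}: expected {expected.__name__}")
    return doc

def write(store, key, doc):
    store[key] = validate(doc)   # the schema check happens at write time

store = {}
write(store, "txn-1", {"account_id": "a-9", "amount": 125, "currency": "USD"})
```

The catch the parent comment alludes to still applies: once several teams edit `SCHEMA`, coordinating changes to this library becomes its own maintenance problem.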


Great work and it is also something I've been trying to build. Is this open source? Any plans on sharing the high level design?


Good post but given my lack of experience, I could only follow about half of it.

What would be a good place for me to learn how query processing engines execute and optimise queries in general? I want to see and understand the actual system calls behind them.


I can recommend the lecture on implementing database systems by Thomas Neumann, who spearheaded the Umbra system that CedarDB builds on. The slides and lecture recordings are available online:

https://db.in.tum.de/teaching/ss21/moderndbs/


Agreed. It's very unclear why they wouldn't simply use a secure socket, or why a user-space tunnel would be needed.

I surmise the reason might be that a user-space tunnel is faster (maybe they can do UDP over TCP or something to gain speed).

Good post nevertheless.


Does anybody have an archive link? These days my only social media is Hacker News.


How can we prepare for such a future? And secondly, the most important thing in software is not the actual writing of code, but making sure that it solves the business problem, makes the right trade-offs, and is maintainable, testable, and bug-free. 90% of the time is not spent writing code.

Yes, LLMs can write some pieces of code. But can they maintain a million-line codebase? Can they prioritize issues, commit to timelines, talk to other people to resolve ambiguities, make the trade-offs, push to production, debug in production?

None of this actually involves writing code. At best, LLMs can replace interns and SDE1s, but not more than that (think architects and staff-plus roles).


Pardon my ignorance but I was hung up on this line.

> Out-of-sync document stores could lead to subtle bugs, such as a document being present in one store but not another.

But then the article suggests uploading synchronously to S3/DDB and then syncing asynchronously to the actual document stores. How does this solve the out-of-sync issue? It doesn't. My thinking is that it can't be solved.

> Data, numbers

How much data are we talking about?


Great post. I was asked this question in an interview, which I completely bombed: the interviewer wanted me to think about flaky networks while designing an image-upload system. I talked about things like chunking, but didn't cover timeouts, variable chunk sizes, or simply sizing up the network conditions and adjusting those parameters accordingly.

Not to mention having a good UX: explaining to the customer what's going on and helping with session resumption. I regret it. Couldn't make it through :(
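The adaptive-chunking idea above can be sketched roughly like this. Everything here is hypothetical (the `send_chunk` transport, the size bounds, the 0.5 s "fast link" threshold): shrink the chunk and retry on failure, grow it when the link looks healthy, and give up after a retry budget:

```python
# Hypothetical sketch of a flaky-network-aware chunked upload.
# send_chunk(offset, data) is the caller-supplied transport; it may raise IOError.
import time

def upload(data, send_chunk, chunk_size=64 * 1024,
           min_size=4 * 1024, max_size=1024 * 1024, max_retries=3):
    """Upload `data` in chunks, adapting the chunk size to observed conditions."""
    offset = 0
    while offset < len(data):
        chunk = data[offset:offset + chunk_size]
        for attempt in range(max_retries):
            start = time.monotonic()
            try:
                send_chunk(offset, chunk)
            except IOError:
                chunk_size = max(min_size, chunk_size // 2)   # back off on failure
                chunk = data[offset:offset + chunk_size]
                continue
            if time.monotonic() - start < 0.5:                # fast transfer:
                chunk_size = min(max_size, chunk_size * 2)    # size up next chunk
            break
        else:
            raise IOError(f"chunk at offset {offset} failed {max_retries} times")
        offset += len(chunk)                                  # resume point
    return offset
```

Session resumption falls out of the same structure: persist `offset` somewhere durable, and a reconnecting client can restart the loop from the last acknowledged chunk instead of byte zero.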


The deadline for YC's W25 batch is 8pm PT tonight. Go for it!

