
Is this a competitor to Temporal? I admit that I have never used either, but it strikes me as odd that these things bring their own data layer. Is the workload not possible using a general purpose [R]DBMS?


Disclaimer: I work on Restate together with @p10jkle.

You can absolutely do something similar with a RDBMS.

I tend to think of building services in state machines: every important step is tracked somewhere safe, and causes a state transition through the state machine. If doing this by hand, you would reach out to a DBMS and explicitly checkpoint your state whenever something important happens.

To achieve idempotency, you'd end up peppering your code with prepare-commit type steps where you first read the stored state and decide, at each logical step, whether you're resuming a prior partial execution or starting fresh. This gets old very quickly and so most code ends up relying on maybe a single idempotency check at the start, and caller retries. You would also need an external task queue or a sweeper of some sort to pick up and redrive partially-completed executions.
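
Roughly, the hand-rolled version looks something like this - an illustrative sketch, not Restate code, with made-up table and step names:

    # Checkpoint every step in a table so a retried execution resumes
    # instead of repeating side effects. Illustrative only.
    import json
    import sqlite3

    db = sqlite3.connect("workflow.db")
    db.execute("""
        CREATE TABLE IF NOT EXISTS steps (
            execution_id TEXT,
            step_name    TEXT,
            result       TEXT,
            PRIMARY KEY (execution_id, step_name)
        )
    """)

    def run_step(execution_id, step_name, fn):
        # Resuming: if this step already ran for this execution, replay its result.
        row = db.execute(
            "SELECT result FROM steps WHERE execution_id = ? AND step_name = ?",
            (execution_id, step_name),
        ).fetchone()
        if row is not None:
            return json.loads(row[0])
        # Starting fresh: do the work, then checkpoint before moving on.
        result = fn()
        db.execute(
            "INSERT INTO steps VALUES (?, ?, ?)",
            (execution_id, step_name, json.dumps(result)),
        )
        db.commit()
        return result

    def process_order(execution_id, order):
        # Each lambda stands in for a real side effect (payment API, email, ...).
        payment = run_step(execution_id, "charge", lambda: {"charged": order["total"]})
        return run_step(execution_id, "receipt", lambda: {"emailed": order["customer"]})

Every step needs that read-then-maybe-write dance, plus something external to re-invoke process_order after a crash - which is exactly the boilerplate you'd like to hide.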

The beauty of a complete purpose-built system like Restate is that it gives you a durable journal service that's designed for the task of tracking executions, and also provides you with an SDK that makes it very easy to achieve the "chain of idempotent blocks" effect without hand-rolling a giant state machine yourself.

You don't have to use Restate to persist data, though you can - and you get the benefit of having the state changes automatically commit with the same isolation properties as part of the journaling process. But you could easily orchestrate writes into external stores such as an RDBMS, a K-V store, or queues with the same guaranteed-progress semantics as the rest of your Restate service. Its execution semantics make this easier and more pleasant as you get retries out of the box.

Finally, it's worth mentioning that we expose a PostgreSQL protocol-compatible SQL query endpoint. This allows you to query any state you do choose to store in Restate alongside service metadata, e.g. to inspect active invocations.


That's definitely a good question. A few thoughts here (I am one of the authors). The "bring your own data layer" has several goals:

(1) it is really helpful in getting good latencies.

(2) it makes the system self-contained, so it's easy to start and run anywhere

(3) There is a simplicity in the deeply integrated architecture, where consensus of the log, fencing of the state machine leaders, etc. goes hand in hand. It removes the need to coordinate between different components with different paradigms (pub-sub-logs, SQL databases, etc) that each have their own consistency/transactions. And coordination avoidance is probably the best one can do in distributed systems. This ultimately leads also to an easier to understand behavior when running/operating the system.

(4) The storage is actually pluggable, because the internal architecture uses virtual consensus. So if the biggest ask from users turns out to be "let me use Kafka or SQS FIFO", that's doable.

We'd love to go about this the following way: we aim to provide an experience that users end up preferring over maintaining multiple clusters of storage systems (like Cassandra + ElasticSearch + X server and Y queues), thanks to this integrated design. If that turns out not to be what anyone wants, we can still relatively easily work with other systems.


Nothing prevents you from using your own data layer, but part of the power of Restate is the tight control over the short-term state and the durable execution flow. This means that you don't need to think a lot about concurrency control, dirty reads, etc.


Counter-anecdote: none of my Linux PCs have python.


Debian comes prepackaged with Python. If there are distros that are good enough for a server almost out of the box, surely Debian stable is one.


Not sure who's to "blame", but I was super surprised a few days ago when I installed Kubuntu 24.04 (minimal) and Python was missing. It was fine though, as I strictly use Python via pipx and miniconda, but still surprising.


Counter-counter-anecdote: my toaster has python.


I am sorry for your toaster.


For this page in particular, which is now on the front page of HN, does any of the HN load reach the Pi? Or is it completely handled by Cloudflare?

Search still does seem responsive, which I find impressive. Curious what the search load is right now.


CPU is about 10% according to grafana and that seems to have been the peak.

Search load is approx 1,000 per hour but there are spikes of a few dozen per second here and there, mostly from bots (there's an amazonbot?).

That CPU spike appears to be correlated with the search spikes.


Did Sonic start offering native IPv6? Last I looked, it was only available via tunnel.


They turned on native IPv6 at the end of last year, at least in some areas (including Berkeley, where I live). You wouldn't know it from their help pages, but there are some posts in the forum to that effect.


I checked, and I indeed have an IPv6 address assigned to my router! Now I just need to figure out how to make it request a prefix.


When I was experimenting with IPv6 on my lan, router advertisements indeed worked great!

But the big loss was that I had no control to reserve a particular IPv6 address for a particular MAC address inside the DHCP server, or assign DNS names automatically, etc., since it's basically one-way: the device receives an RA and then configures itself with a random address.
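
For what it's worth, you can see why it's one-way in how SLAAC addresses get formed: the router only advertises the /64 prefix, and the device derives (or randomizes) the interface ID on its own, so the router never knows the address ahead of time. Rough sketch of the classic EUI-64 derivation (most OSes default to randomized "privacy" addresses these days, which is exactly why MAC-based reservations don't carry over):

    # Classic EUI-64 SLAAC: interface ID derived from the MAC, appended
    # to the advertised /64 prefix. Illustrative values only.
    import ipaddress

    def slaac_eui64(prefix: str, mac: str) -> ipaddress.IPv6Address:
        b = bytearray(int(x, 16) for x in mac.split(":"))
        b[0] ^= 0x02                                       # flip the universal/local bit
        iid = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])    # insert ff:fe in the middle
        return ipaddress.IPv6Network(prefix)[int.from_bytes(iid, "big")]

    print(slaac_eui64("2001:db8:1:2::/64", "52:54:00:12:34:56"))
    # -> 2001:db8:1:2:5054:ff:fe12:3456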


You could give the device a static address and let duplicate address detection do its thing.


> Not like - python/pip rust/cargo dotnet/nuget javascript/npm java/maven - because with each such different language specific build layer you lose how to express depedencies between them.

It's funny you highlight this as a feature, because my #1 piece of feedback for the bazel ecosystem is that it really sucks to have to give up language-native tools. Whether it's LSP servers, or documentation that says "add this to your gradle file" or "add this to your build.sbt", it's always an uphill battle.

And then it's an uphill battle convincing product engineers to forget everything they know and do it the bazel way instead.

I get why bazel is cool, but I think it's worth highlighting that for a lot of folks it very well can make their inner development loop worse than without it, particularly if they don't have someone (or several people) dedicated solely to developer experience.


Can you explain further? Yes, it takes more effort to express, or rather to create .bzl files to express these dependencies (e.g. let's say we have Python using C/C++) - but once it's done, it gives you a higher-level (BUILD) language where this is easy to express... There are still many rough corners (especially when it comes to dealing with non-monorepo deps), but I'd rather use this than a hodge-podge of shell scripts/makefiles/cmake/etc., where it's not even clear which steps to run first, and there's no confidence that it's going to be reproducible.


I am not really talking about cross-language dependencies. I mean simple things like Java product engineers who know gradle and want to use gradle and get grumpy when they don't know how to add dependencies to WORKSPACE. These people need to be trained to use different tools rather than use the skills they already know. That's fine, but it's definitely a cost to using bazel.
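
To make that concrete: the gradle one-liner they already know becomes something like this with rules_jvm_external (a rough sketch; the coordinates and version are just for illustration, and it assumes rules_jvm_external is already set up):

    # WORKSPACE - third-party jars get pinned here...
    load("@rules_jvm_external//:defs.bzl", "maven_install")

    maven_install(
        artifacts = ["com.google.guava:guava:33.0.0-jre"],
        repositories = ["https://repo1.maven.org/maven2"],
    )

    # BUILD - ...and referenced by a mangled string label
    java_library(
        name = "my_lib",
        srcs = glob(["src/main/java/**/*.java"]),
        deps = ["@maven//:com_google_guava_guava"],
    )

None of it is hard, but none of it looks like anything a gradle user has seen before.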

Or if you are trying to maintain bazel for an org. Invariably, you will receive requests of the form "I would like to achieve [well-documented thing in language-specific build tool], please help me do that in Bazel". The net result for me, who was once in that role, is that understanding the original build system and then bending bazel to match a certain behavior (usually with no or inadequate docs) is a huge amount of effort. Enough that I am now skeptical of the value-add of bazel.


For what it’s worth, gradle is probably the only other build tool that has good inter-language capabilities (and it is heavily used by the android toolchain). Most other build tools are specialists.


I think the lack of true laziness will be a big performance problem for a large build graph.

On the other hand, the monolithic nature of the nixpkgs package set is one of the author's gripes with nix, so performance at that scale may be a non-goal.


I'd definitely like to have good performance even for large build graphs! I'm hoping the laziness exists "where it counts". To walk through an example: if you build your backend, and your backend calls the function `postgres()`, and that calls `openssl()`, and THAT calls `gcc()`, etc., each function is basically building an object to represent its chunk of the build graph (each function returns a "recipe"). Nothing gets built until that object gets returned from the top-level function and the runtime does something with it.

In other words, the eager part is basically constructing the build graph. Maybe I'm wrong, but I don't think this would necessarily be slower than the lazy version. In practice, the most complex build graph I've made is basically the full chain of Linux From Scratch builds (that's the basis for my toolchain currently), and I think that takes about 400-500ms to evaluate. It's about 160 build steps, so it's not _simple_, but I know build graphs can also get a lot more complex, so I'll just have to keep an eye on performance as I start to get into more and more complex builds.

Maybe I'm missing something, but intuitively I'd expect this approach to be fairly efficient, as long as build scripts only call these functions when they're used as part of the build graph.
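
Roughly, the shape is something like this - an illustrative Python sketch of the idea, not the actual implementation or API:

    # Calling functions eagerly constructs Recipe objects (the graph);
    # nothing is built until the runtime walks the returned graph.
    from __future__ import annotations
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Recipe:
        name: str
        deps: tuple[Recipe, ...] = ()

    def gcc() -> Recipe:
        return Recipe("gcc")

    def openssl() -> Recipe:
        return Recipe("openssl", deps=(gcc(),))

    def postgres() -> Recipe:
        return Recipe("postgres", deps=(openssl(),))

    def backend() -> Recipe:
        # Cheap to evaluate: only objects get created here.
        return Recipe("backend", deps=(postgres(),))

    def build(recipe: Recipe, done: set[str] | None = None) -> None:
        # The runtime's job: walk the graph, run each step once.
        done = set() if done is None else done
        for dep in recipe.deps:
            build(dep, done)
        if recipe.name not in done:
            print("building", recipe.name)
            done.add(recipe.name)

    build(backend())   # gcc, openssl, postgres, backend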


I think it really depends on your definition of "large". I don't think strict eval + full build graph can scale to something the size of nixpkgs, for example.

I mentioned in another comment that this is why Bazel uses simple strings to form dependencies on other targets. That way Bazel can manage the laziness and only evaluate what is needed without needing to use or invent a language with lazy evaluation.

But that is also the big downside (in my opinion) - the full build graph necessarily can't exist purely in starlark (at least for Google-scale projects) which increases complexity of the tool overall.

Edit: I'd like to add, though, that I think it's perfectly fine to not scale to Google scale or nixpkgs scale! Many many projects could still benefit from a great build tool.


Honestly, I think the "stringly-typed targets" thing isn't too bad, having used Buck2 quite a bit, and being a Nix user for 10+ years. If anything, it's a small price to pay for some of the other smart things you get in return, like the far more fine-grained action graph and the tooling around BUILD files like querying. One weird benefit of that stringly-typed bit is that the BUILD files you have don't even have to meaningfully evaluate or even parse correctly, so you can still build other subsets of the tree even when things are broken; at ridiculous-scale it's nearly impossible to guarantee that, and it's something Nix does worse IMO since evaluation of the full nixpkgs tree is slow as hell in my experience but a requirement because a single eval error in the tree stops you dead in your tracks.

Also, no matter how much I might not like it as a language nerd, I think Starlark is simply far more "familiar" for your-average-bear than the Nix language is, which matters quite a bit? It might be more complex in some dimension, but the problem space is fundamentally complex I think. So other factors like how approachable the language is matters. (And at least in Buck2, you can use MyPy style typing annotations, thank God.)


> One weird benefit of that stringly-typed bit is that the BUILD files you have don't even have to meaningfully evaluate or even parse correctly, so you can still build other subsets of the tree even when things are broken; at ridiculous-scale it's nearly impossible to guarantee that, and it's something Nix does worse IMO since evaluation of the full nixpkgs tree is slow as hell in my experience but a requirement because a single eval error in the tree stops you dead in your tracks.

I think you get more or less the same property with Nix. You can have all kinds of errors, even certain syntax errors in the same file, but if they are unneeded for the current evaluation, they won't cause any problems.

As for language familiarity/approachability - this will always be a matter of opinion, but I personally don't think it makes sense to optimize for the casual contributor. Plenty of people know python, but I never see casuals making anything besides trivial changes to bazel build files. I don't think they gain anything by familiarity with python; they could just as well copy-paste nix or any other language. And if they get into trouble, they will call in the experts.


Yes, and Bazel makes some very serious trade-offs in order to make starlark work. Notably, dependencies are referenced as stringly-typed labels as a poor substitute for lazy evaluation (starlark itself has strict evaluation semantics).
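
Concretely, something like this (simplified BUILD file, made-up targets):

    # deps are just strings here; nothing forces //third_party/openssl to be
    # loaded or evaluated until Bazel actually needs this target.
    cc_library(
        name = "server",
        srcs = ["server.cc"],
        deps = [
            "//third_party/openssl",
            "//base:logging",
        ],
    )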

This in turn requires additional tooling to catch errors early, and also means that a starlark-repl for Bazel will never really be all that useful, since the build graph doesn't exist in starlark alone.

In my experience, this makes Bazel a significantly harder build system to truly grok, tho perhaps easier to use it without understanding it.

Contrast with nix, where the entire build graph exists as a nix expression. In my experience, you can gain a surprisingly deep understanding of nix armed only with knowledge of nix-the-language (and without knowing any implementation details of nix-the-binary-that-builds-derivations).


> Many people now want more new stuff cheaper.

Yes of course, and that's why our grandparents got fewer things that cost more, because that's what they wanted.


I'm pretty sure if they tried in Japan, the guards would remove them.

