> RonDB provides Class 6 Availability, meaning its system is operational 99.9999% of the time, thus no more than 30 seconds of downtime per year. This ensures that RonDB is always available.
I get that marketers stretch the truth all the time, but they can't possibly be serious.
Such uptimes aren't in the realm of impossibility. They remain very hard to design and engineer for, however.
Famously, Amazon gives Route 53 a 100% uptime SLA (for its data plane) [0] (not sure any other AWS service comes close). So here's at least one KV store that one-ups RonDB.
That's an SLA where they expect to end up paying customers back for outages. They've done an admirable job with only a few global outages, but subsets of customers have experienced plenty of outages.
A single 5-minute outage would blow through nearly a decade of error budget at 6 9s. As far as I'm aware, there does not exist a service that has been up for more than a decade with fewer than 5 minutes of downtime, and that definitely includes Route 53.
In order to achieve 6 9s you need two levels of replication: synchronous replication with instant failover, plus asynchronous replication to handle site failover. RonDB provides both. NDB is used in lots of telecom services where you can't make a phone call unless NDB is up. These services definitely can at times run for decades without downtime. RonDB is built on top of NDB.
Correct. Availability is the fraction of time you can read and write your data; durability is about not losing that data once written. The metrics mentioned in this post are about availability. Most cloud vendors provide SLAs of 99.95% availability. One problem to solve when working with a cloud vendor is that they need to upgrade their OS images every now and then, so to get the highest availability one must integrate with the cloud APIs that announce those changes.
Note that it is the dominant subscriber database in the telecom space. There is a high likelihood you are using it as part of a Home Location Register or similar whenever you use your phone.
Here's a benchmark comparison with Redis (outperforms it on a single node):
Here's a YCSB benchmark where it beats all other well known key-value stores (not reproducible, but all database vendors (except RonDB) have a DeWitt Clause):
These numbers are based on NDB customer experiences from operating tens of thousands of NDB clusters for more than 10 years. Obviously, achieving 99.9999% uptime requires operational competence as well as software capable of it. This is why we are building this operational competence, to make those numbers accessible to anyone.
Six nines is completely doable if you have the money. Availability is more limited by cost than technical difficulty.
Also consider that availability just means "the service is still running". It may be practically unusable but still available. Always read the fine print.
(Actually their math is off: six nines allows about 31.5 seconds of downtime per year, since 365 × 24 × 3600 × 0.000001 ≈ 31.5 s, not 30.)
One tough thing about rule-of-zero classes is that many, if not most, end up having an implicit 'the object has not been moved from' precondition on their methods. The classic example is a class with a unique_ptr member that is never null except after a move, so none of the methods have null checks.
I understand why the committee was unable to get destructive moves into the standard, but I do think that is one thing Rust got right that I really wish C++ had.
People bang on about borrow checking etc., but really it's move semantics that's the million-dollar idea in Rust - it cuts through a huge Gordian knot of complexity in one stroke.
In Rust you cannot write a move constructor, which proves limiting. But a Rust move also cannot fail, which is liberating.
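To make that concrete, here's a minimal sketch of the destructive-move behavior (plain standard-library Rust, nothing project-specific): a move is a shallow transfer with no user hook that could fail, and the compiler statically rejects any later use of the moved-from binding, so there's no 'valid but unspecified' state to guard against.

    fn consume(v: Vec<i32>) -> usize {
        // Takes ownership of `v`; the caller's binding is statically dead afterwards.
        v.len()
    }

    fn main() {
        let v = vec![1, 2, 3];
        let n = consume(v); // `v` is moved here; no user-defined move code runs
        println!("{}", n);
        // println!("{:?}", v); // compile error (E0382): use after move
    }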
Rust cannot abandon backward compatibility with itself, so it will increasingly be stuck with old choices, like C++, and will soon be just as complex and infuriating. But it will, like C++, usually remain more useful than the alternatives.
Agree. Can't really blame C++ for it being difficult to retrofit something like that. It's akin to the lack of const in Java: they'd add it if it were practical to do so.
Especially since caught panics amount to exceptions and most Golang code isn't exception safe. It's a recipe for leaking database connections, deadlocks, etc.
Go error handling is awful, but this isn't a fair criticism.
In Go, if you write code like the following:
conn, err := db.Connect()
defer conn.Close()
That defer will be run during a panic. Same thing as 'defer mutex.Unlock()'
Yes, like most of Go, it's manual, painful, and poorly thought out, but most people do follow these patterns, so for the most part Go code will safely unwind from a panic.
Most of the time, sure, but there are lots of cases where `defer` being function-scoped leads to manual unlocks, e.g. inside loops or for short critical sections in the middle of a function. Yes, you can use closures or other workarounds, but few codebases are that disciplined. Those edge cases are plentiful enough that catching panics is dangerous.
it can be called with an S as the single argument and produces something of the type Step parameterized with S and A, and it does *not* outlive the 's lifetime.
's refers to a minimum lifetime, not a maximum limit.
Even so, that's only 100k jobs. There are already more than 4 million jobs in the city alone. A 2.5% increase in jobs is just not going to change that much. Cyclical employment changes alone are much, much larger than that.
So much this. Every time I want to make a function async, I have to write my 5010th `ErrorOr` struct to send over a channel. Even if they just added the multiple-return hack to channels, that would save me a bunch of pain.
> The author seems to be confusing typeclass composition with OOP.
I agree and would like to expound on that idea. One of my biggest frustrations with inheritance in traditional statically typed languages (I program in C++ for a living) is that inheritance is performing two functions at once: code reuse and typing. Confusing the two seems to cause a lot of pain. Inheritance as a type system is describing the kinds of things the object can do. Inheritance is (usually) a sufficient condition to say that the types can be used interchangeably. Making inheritance the only way to express the type information often forces some very unfortunate code.
The author is trying to use traits as a code re-use mechanism. He wants the trait to be able to see into the implementation and be a function of the implementation's private data. If that were allowed, that would invite all of the pain of inheritance for that kind of trait. Types with a different internal implementation would end up being awkward at best.
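To illustrate the boundary with a toy sketch (hypothetical names, not from the article): a trait's provided method can carry shared code, but it can only be written against the trait's own required methods, never against an implementor's private fields.

    trait Area {
        fn width(&self) -> f64;
        fn height(&self) -> f64;

        // Shared code lives here, but it can only reach the implementor
        // through the required methods above, not its private data.
        fn area(&self) -> f64 {
            self.width() * self.height()
        }
    }

    struct Rect {
        w: f64, // fields stay private outside this module
        h: f64,
    }

    impl Area for Rect {
        fn width(&self) -> f64 { self.w }
        fn height(&self) -> f64 { self.h }
    }

    fn main() {
        println!("{}", Rect { w: 2.0, h: 3.0 }.area()); // prints 6
    }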
No, it's supposed to teach people how to use the language. When a large fraction of your user base will be bringing a specific skill set, then you should generate documentation aimed at translating those skills!
> > What IS the idiomatic Rust way to do a cyclical directed graph?
> Unfortunately, the correct idiomatic way is that you try very hard to avoid doing so. Rust is all about clear ownership, and it doesn't like circular references.
> The usual solution is to put the cyclic graph code into a library, and to use pointers and about 20 lines of unsafe code.
The point about C++ is important. Cyclic graphs are hard in any non-GC'ed language because they break the ownership story.
Safe Rust lets you do an incredible amount while providing strong guarantees against the sorts of problems that plague lower-level languages. Since it isn't suitable for absolutely everything, we have unsafe Rust to fill in the gaps in the few areas where it's needed. Unsafe Rust isn't "bad", it just doesn't provide the same guarantees as safe Rust. And if something goes wrong, you can at least narrow down your search to the unsafe areas of the code.
This isn't the sign of fundamental language design flaws. It's the sign of a phenomenally well-designed language, where the downsides of seldom-needed yet powerful features are limited to only those areas where they're used.
I'm not convinced that cyclic data structures are exotic enough to warrant breaking the language's contracts. This thread alone shows they're a common occurrence in tree structures. There will likely be other structures as well.
So I'd think a better approach than saying "don't do that - or if you have to, you're on your own" would be to analyze use cases and see which scenarios the language can satisfy. (As another poster suggested, making "parent pointers" an explicit language feature.)
Agree. The language's memory safety system needs to support more common use cases. At least:
- Back pointers. (They're not owning pointers, and they have a well defined relationship with some other pointer. The language may need some way to explicitly say that.)
- Collections where not all slots are initialized. (This requires simple proofs by induction as you grow a collection into previously allocated unused space.)
You can always use std::rc::Weak if you need a cyclic data structure with weak references. That covers, I suspect, the majority of the use cases for this type of structure.
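For instance, here's a minimal sketch of the usual parent-pointer pattern (a toy `Node` type, not from any particular crate): children are owned through Rc, the back edge is a non-owning Weak, so the cycle never keeps nodes alive.

    use std::cell::RefCell;
    use std::rc::{Rc, Weak};

    struct Node {
        value: i32,
        parent: RefCell<Weak<Node>>,      // non-owning back pointer
        children: RefCell<Vec<Rc<Node>>>, // owning edges downward
    }

    fn main() {
        let root = Rc::new(Node {
            value: 0,
            parent: RefCell::new(Weak::new()),
            children: RefCell::new(Vec::new()),
        });
        let child = Rc::new(Node {
            value: 1,
            parent: RefCell::new(Rc::downgrade(&root)),
            children: RefCell::new(Vec::new()),
        });
        root.children.borrow_mut().push(Rc::clone(&child));

        // Upgrading the Weak yields Some(parent) only while the parent is still alive.
        if let Some(p) = child.parent.borrow().upgrade() {
            println!("child {} has parent {}", child.value, p.value);
        }
    }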
For real, honest-to-god cyclic data structures, just use unsafe Rust. Again, unsafe Rust isn't "bad". It's just unsafe. More care will be expected of the code to ensure that it exports a safe interface, but nothing in the language stops you from doing it. There's quite literally no loss of expressive power.
> And if something goes wrong, you can at least narrow down your search to unsafe areas of the code.
No. That's only true if the unsafe code presents a completely safe interface to its callers. If the unsafe code opens a hole in Rust's protection system, which is very easy to do, you can end up with C-style no-idea-where-it-is bugs.
Yes, the crash can come from any line of code, but the origin of the bug is in the unsafe code.
That's why unsafe code has to be carefully inspected to be sure it presents a safe interface, and by reducing the dangerous area to only a few lines, that inspection becomes far easier.
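As a tiny sketch of what a safe interface over an unsafe block looks like (a hypothetical helper, not from any real crate): the function enforces the invariant the unsafe block relies on, so an audit only has to read these few lines.

    /// Returns the first `n` bytes of `bytes`, or all of them if `n` is too large.
    fn first_n(bytes: &[u8], n: usize) -> &[u8] {
        let n = n.min(bytes.len()); // establish the invariant the unsafe block needs
        // SAFETY: `n <= bytes.len()`, so the range is in bounds.
        unsafe { bytes.get_unchecked(..n) }
    }

    fn main() {
        assert_eq!(first_n(b"hello", 3), b"hel");
        assert_eq!(first_n(b"hi", 10), b"hi");
    }

In real code a plain `&bytes[..n]` would do, of course; the point is only that callers can never reach the unchecked indexing without the bound having been established first.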
All languages have design flaws. That's why people keep making new languages. Arguing that Rust does not have design flaws is going to arouse more skepticism than anything else.
It is not mandated that you 'must' write safe code, only that you 'can' write safe code. So there really is no fundamental assurance that Rust and its libraries are in fact 'safe' by Rust's own definition in any meaningful sense.
You could make the same style of argument that C is a memory-'safe' language too, as long as you stick to automatic storage and never call malloc. Of course, that would be a rather disingenuous claim to make.
Nobody's arguing that Rust doesn't have design flaws.
That said, people also need to understand that many such instances are not design flaws but design tradeoffs. The OP indicated that having to switch to unsafe Rust to represent a cyclic data structure is a design flaw. My response is that it's a tradeoff that pays dividends in every other piece of code.
Can Rust expand the set of problems that can be solved in safe rather than unsafe Rust? Absolutely! Is it worth cordoning off a few cases that don't technically need unsafe behavior, to ensure all other Rust code performs safely? Emphatically yes.
> All languages have design flaws. That's why people keep making new languages. Arguing that Rust does not have design flaws is going to arouse more skepticism than anything else.
As somebody who has used Rust since late 2012, I completely agree. It seems that most of the more sweeping claims come from new and excited users - hopefully over time this will soften and become more reasonable. Hype definitely needs to be kept in check, though, as it more often than not results in a backfire.
It indicates that idiomatic Rust doesn't like cyclic references, and thus you must step outside safe, idiomatic Rust to build them. Every language has an idiomatic road to stay on, and not every way of doing things fits the idioms - those approaches are still possible, just not idiomatic.
Given how the ownership system works, it makes sense.
You may say that that indicates a flaw in the ownership system, but it doesn't.
The ownership system can do certain things, but not all of them - it's designed to prevent the majority of common memory safety violations, but it can't prove everything. That doesn't indicate a design flaw; it indicates a system limitation, the same way a GC pause isn't a design flaw but rather a limitation of a well-designed system.