I didn't watch the video but I read the PDF transcript. I noticed that the contract example (if x > 50 then x+50 > 100) can fail because of integer overflow. This is an area that Ada seems to take much more seriously than Rust does.
Anyone know what happened with AdaCore and Ferrocene? Are they no longer working together? It sounded like a good collaboration earlier on, given the Rust juggernaut.
Also I notice no mention of anything like SPARK for Rust.
I still feel like Rust and Ada are designed for differing domains. The archetypal Ada app in particular might malloc some memory (and possibly fail) during an initialization phase, but post-initialization it must never fail, which, among other things, basically means never use malloc. The post-initialization part is basically a control loop using those fixed buffers to keep a jet engine spinning or whatever.
Rust's archetypal application, on the other hand, is a web browser: tons of dynamic allocation, possible memory exhaustion depending on what the user tries to do, and the requirement is to never fail ungracefully, as opposed to never failing at all. "Never fail at all" is not exactly more stringent or complicated: it just means you're not allowed to even attempt certain things, because they might fail.
Ada definitely targets the same domain as Rust, it just _also_ targets additional domains like low level control systems. It has a lot of flexibility and is quite a good language design that people seem irrationally prejudiced against because it has Algol/Pascal-flavored syntax instead of C-flavored syntax.
From what I can tell, trying to write a browser in Ada would be madness. Ada's memory management stuff is very primitive compared to Rust's. That may change as Ada keeps evolving, of course.
From what I can tell, you have to do special crap in Rust to deal with overflow. Saturating is almost never what you want with integers. "Integers" itself, the math term, denotes the infinite set ..., -2, -1, 0, 1, 2, ..., and while computer arithmetic is necessarily limited to a finite fragment, computer integer calculations should always give the same result that you would get using the unbounded mathematical integers. When that doesn't work, you should get an overflow exception, similar to running out of memory.
In Ada, I think the relevant exception is Constraint_Error, but I don't think Rust has a way to do that. By "that" I mean ordinary addition like "a+b", where a and b are i32 variables, gets overflow checked without having to call a.checked_add(b) instead of a+b. Is that mistaken?
> In Ada, I think the relevant exception is Constraint_Error, but I don't think Rust has a way to do that. By "that" I mean ordinary addition like "a+b", where a and b are i32 variables, gets overflow checked without having to call a.checked_add(b) instead of a+b. Is that mistaken?
So, the answer to this is yes and no, in different senses.
So first of all, Constraint_Error is an exception, and Rust doesn't use exceptions for error handling like this. It does have panics, but a Rust-native API wouldn't use a panic in this situation, and so the type signature of checked_add looks like this, for i8:
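pub const fn checked_add(self, rhs: i8) -> Option<i8>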
This returns an Option type, which you can then handle as you wish. So it's true, you invoke this like
a.checked_add(b)
Now, there are two other common semantics for checked addition: saturating, and wrapping. They look like this:
a.saturating_add(b)
a.wrapping_add(b)
Okay. At this point we can actually get into the interesting part: yes, typing this is not as nice as a + b. However, for both saturating and wrapping addition, there are wrapper types:
use std::num::Saturating;
let max = Saturating(u32::MAX);
let one = Saturating(1u32);
max + one
to get a u32 back out, you use .0, since this is just a wrapper struct:
(max + one).0
So! What about checked? Well... there isn't one. It appears that the reason is that it was proposed, and then there wasn't a lot of enthusiasm, so it got closed. What's interesting is that Saturating was in that RFC as well, but I guess it made it in later.
I want Pattern Types for an entirely selfish reason, I want to write BalancedI8 and similar types -- as user defined types, but in stable Rust and have them Just Work™.
BalancedI8 is similar to i8 (an 8-bit signed integer) and to NonZeroI8 (the same type, but with a niche where zero should be). Instead of removing zero, which we often have a use for, let's remove the annoying most negative value, i8::MIN.
Why "BalancedI8"? Because now that we've removed this most negative number (to provide a niche) the remaining value is symmetrical, it goes from -127 to +127
Now, I profess that I want these types but I clearly don't want them enough to do very much about it, so that's not great.
My first job was mostly SPARK Ada, subtypes were so useful, both in terms of making contracts clearer for the human and for the various analysis tools.
Rust is high on my list of languages to learn when I can make some time, having something like this available will be great.
So, you want an integer, but one value sacrificed to denote not-a-number? Is this what "niche" means?
So that with these, you can have Option[integer8, None], an option type that either contains a -127 to +127 8-bit integer or contains the empty value, and still consumes only 8 bits of memory? Or are there other uses for this?
And Rust already has this, and with the memory optimization, but for some reason only the version where they sacrificed the value 0.
You're correct that Rust provides NonZeroI8, which in fact I called out in my text. If you look at the source, you'll see that the Rust standard library is allowed to make this type but you can't in stable Rust.
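Roughly how it has looked in the standard library source (older versions, simplified):

#[rustc_layout_scalar_valid_range_start(1)]
#[repr(transparent)]
pub struct NonZeroI8(i8);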
#[rustc_layout_scalar_valid_range_start(1)] means, "Hey, Rust compiler, I promise it's fine to just assume I never have this value: 0". You absolutely could write BalancedI8... if you're the Rust standard library. Unfortunately you aren't allowed to do that in your own stable Rust software, because it needs these compiler internals.
There were discussions about stabilising this idea, and they were knocked back because if you do that you're basically stuck with it forever.
One day in the future NonZeroI8 could "just" be a Pattern Type, like the ones you can make at home, but today there are no Pattern Types and NonZeroI8 needs compiler magic.
There are other Rust core types which you could make yourself. Ipv4Addr, for example, is just a 32-bit value; you could make your own, but yours wouldn't be on everybody's Rust system. Option<&T> is just a sum type over &T and None; you genuinely could make your own, call it Maybe<&T>, and it would work exactly the same - same optimisations even, although not all the compiler diagnostic messages would be as good if you use it wrong, I think. But today NonZeroI8 isn't like them, it actually needs a sprinkle of compiler magic.
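Here's a quick sketch to check that claim (Maybe is a made-up name, not a std type):

use std::mem::size_of;

// A home-made clone of Option.
#[allow(dead_code)]
enum Maybe<T> {
    Just(T),
    Nothing,
}

fn main() {
    // The all-zeroes bit pattern is impossible for a reference, so the
    // compiler uses it as the niche for Nothing/None in both types.
    assert_eq!(size_of::<Maybe<&u8>>(), size_of::<&u8>());
    assert_eq!(size_of::<Option<&u8>>(), size_of::<&u8>());
}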
The main use is not having to deal with INT_MIN, the two chief problems of which are that a) you can't sensibly abs() it, and b) most people tend to forget about point a). This problem in my experience most commonly arises in naive implementations of itoa/etc., which tend to assume that they can just call abs() and concentrate on the positive half-range of signed integers. Nope, you can't do this: not all negative integers have a positive counterpart.
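A minimal Rust demonstration of that footgun:

fn main() {
    let n = i8::MIN; // -128: the one value with no positive counterpart
    // n.abs() panics in debug builds ("attempt to negate with overflow")
    // and returns -128 again in optimized release builds.
    assert_eq!(n.checked_abs(), None);
    assert_eq!((-127i8).checked_abs(), Some(127));
}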
They're sort of used in graphics APIs. When a value in [-1.0, 1.0] is quantized to an i8, the result is a BalancedI8: -1.0 maps to -127, 1.0 to 127, and -128 is unused; if -128 is used in the dequantizer, it is treated the same as -127. This replaced the old scheme where -1.0 mapped to -128 in which 0.0 was unrepresentable.
The value of encoding this case in the type system seems minimal to me though.
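For concreteness, a sketch of the snorm-style scheme just described (function names are mine):

// Quantize [-1.0, 1.0] to a signed byte; -128 is never produced.
fn quantize(x: f32) -> i8 {
    (x.clamp(-1.0, 1.0) * 127.0).round() as i8
}

// Dequantize, treating -128 the same as -127 per the convention above.
fn dequantize(q: i8) -> f32 {
    (q.max(-127) as f32) / 127.0
}

fn main() {
    assert_eq!(quantize(-1.0), -127);
    assert_eq!(quantize(1.0), 127);
    assert_eq!(dequantize(-128), -1.0);
}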
The reason to encode it in the type system is it allows for optimization. Option<NonZeroI8> uses a single byte because it knows 0 is available to represent the None state. Option<BalancedI8> could do the same thing with -128 representing None.
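The NonZeroI8 half of that is easy to check today:

use std::mem::size_of;
use std::num::NonZeroI8;

fn main() {
    assert_eq!(size_of::<Option<NonZeroI8>>(), 1); // 0 is the None encoding
    assert_eq!(size_of::<Option<i8>>(), 2);        // no niche: extra tag byte
}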
If you're really attached to it being min you'd have to copy that library.
Edit to add: we actually use these in one of my main Rust codes; they're useful, but I'm not sure they're so useful I'd want them built into the language.
Yeah, the XOR trick†. It's very affordable on modern hardware (your CPU can cheerfully XOR two registers while doing other things, barely measurable) but it's more the principle of the thing. Maybe I should just put up with the XOR trick and stop thinking about Pattern Types but I don't think I'll be able to.
† Just in case you've never looked inside those types, they have a constant value and when you want a NonFoo you just take a NonZero and you XOR it with that constant, in other respects they work the same way as any other wrapper type. This is one reason the NonZero types exist, they make the XOR trick very easy to do.
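A sketch of the trick applied to the hypothetical BalancedI8 from upthread (not a real type):

use std::num::NonZeroI8;

// Store value XOR i8::MIN: the forbidden value maps to 0, which
// NonZeroI8 already excludes, so its niche is reused for free.
#[derive(Clone, Copy)]
struct BalancedI8(NonZeroI8);

impl BalancedI8 {
    fn new(value: i8) -> Option<Self> {
        NonZeroI8::new(value ^ i8::MIN).map(Self)
    }
    fn get(self) -> i8 {
        self.0.get() ^ i8::MIN
    }
}

fn main() {
    assert!(BalancedI8::new(i8::MIN).is_none()); // the excluded value
    assert_eq!(BalancedI8::new(-127).unwrap().get(), -127);
    assert_eq!(std::mem::size_of::<Option<BalancedI8>>(), 1);
}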
What are pattern types? Cannot watch the video, too slow.
Edit: Thanks for the explanations. From what I see, pattern types are subtypes of a given type, such that the subtype predicate can be specified as a pattern match. This way, pattern types are subtypes that can be checked at compile time (at least, if the used pattern can be checked at compile time, I don't know if all Rust patterns are compile-time checked).
With Ada/SPARK (and hopefully Rust) this can be checked at compile time to formally prove code. You can also catch the runtime exceptions and handle them accordingly if you're not using SPARK.
The presentation right before the Rust one[1] actually did a bit of a dive into this.
I recently created a UUID library in Ada, and I'm able to validate an input string on the datatype level without ever having to worry about doing so in the function itself:
subtype UUID_String is String (1 .. 36)
  with Dynamic_Predicate =>
    (for all I in UUID_String'First .. UUID_String'Last =>
       (case I is
           when 9 | 14 | 19 | 24 => (UUID_String (I) = '-'),
           when others =>
              (UUID_String (I) in '0' .. '9' | 'A' .. 'F' | 'a' .. 'f')));
Now my function simply needs to be:
function From_String (From : UUID_String) return UUID
  with Pre => From in UUID_String;
And I can code the function with the confidence that it won't be processing a malformed string.
In Rust the bit patterns which are not occupied by a value are free for some other use, Rust calls this a "niche" and uses it extensively in its sum types.
This lets us have all the performance advantages of using magic sentinel values occupying those bit patterns, but with the same ergonomics as for an ordinary sum type.
For example, in C a Unix file descriptor is just an integer. 0, 100, 1000 - all perfectly reasonable file descriptors. But -1 is not a valid file descriptor, so Rust's OwnedFd is internally just an ordinary C-style integer, except it's never -1. As a result, Rust's Option<OwnedFd> is the same size as the C integer, and you'll get the same machine code as with the C integer; but in C you need to remember to check it's not -1 before using it, while in Rust you won't make that mistake, because that's not Some(fd), that's None.
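On a Unix target you can check the layout claim directly:

use std::mem::size_of;
use std::os::fd::OwnedFd;
use std::os::raw::c_int;

fn main() {
    // -1 is the niche, so None needs no extra space.
    assert_eq!(size_of::<Option<OwnedFd>>(), size_of::<c_int>());
}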
Rust does this with its references, Option<&T> is the same size as &T, depending on what
exactly T is that's probably "really" a machine address in a CPU register, and so None is the same CPU register with an all-zeroes bit representation.
My favourite non-standard library use of this feature is CompactString. CompactString is the SSO (Small String Optimisation) made famous in C++ but applied to Rust's strings. Rust's native String type is as simple as possible, thus no SSO; it's actually internally Vec<u8> plus rules to ensure it is always UTF-8 encoded text. SSO in C++ standard libraries means that "Dog" or "Cheese" are stored inline in the type itself, no need for a heap allocation. CompactString takes that to an extreme. While a typical C++ std::string might allow you to store "ycombinator.com" inside the 32 byte data structure, CompactString fits "https://ycombinator.com/" in just 24 bytes!
It does this by being able to distinguish whether that last byte is a valid final UTF-8 code unit, if it is then this is a 24 byte string, but if it's not then it signals how long the rest of the string is and how the other 23 bytes should be interpreted.
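Assuming the compact_str crate (where CompactString lives), the claims check out like this:

use compact_str::CompactString; // third-party crate: compact_str
use std::mem::size_of;

fn main() {
    // Same footprint as String (pointer + length + capacity on 64-bit)...
    assert_eq!(size_of::<CompactString>(), 24);
    // ...yet a full 24-byte string is still stored inline, no heap needed.
    let s = CompactString::new("https://ycombinator.com/");
    assert_eq!(s.len(), 24);
}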
Minor correction about C++ SSO: different C++ implementations have different small string capacities. In clang and libc++, you can store 22 bytes of short string when the data structure itself is 24 bytes. This is more than what MSVC or gcc can do.
* Splitting large enumerated types into smaller ones
* Using the same range for array indexes and for the "for" loop parameters which index those arrays. In those cases a good compiler removes the useless run-time checks!
Ada range types can have bounds that are known only at run-time. That was not possible in Pascal.
To see subtypes in action: another FOSDEM presentation:
> To be honest, it wasn't particularly useful, because it requires runtime checks which panic if they fail.
Checking at runtime is 100% still incredibly useful. That's how you enforce critical program invariants to avoid security vulnerabilities or prevent invalid states that could ruin the program's data.
Also, sometimes you can arrange that the "check" is just the natural way to express what you meant in the language, so whether it's a runtime check or not, the ergonomics are improved enormously, it feels natural.
In Rust NonZeroI8::new(n).unwrap() is either a NonZeroI8 or, if n was zero, it panics because we asked to unwrap the Option and that's None.
What's inside NonZeroI8::new? Literally a cast. When compiled this is nothing at all; no machine code is generated. It is relying on the fact that Option<NonZeroI8> has bit pattern 00000000 for None, and that's also the bit pattern for the integer zero.
So if we ask at compile time, NonZeroI8::new(FOO).unwrap() for some constant FOO, the compiler will, at compile time, examine the FOO bit pattern. If it's all zeroes this code is just a guaranteed runtime panic, and if it's on the clear through-line (e.g. the only code in the main function) by default the compiler says well, that's not going to work, here's a compiler error [if you actually want a program which just panics when run you can ask for that with a compiler setting, good luck to you].
If FOO isn't zero then, still at compile time, we've now got a NonZeroI8 with value FOO.
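A small sketch, using a const item to force the check to compile time:

use std::num::NonZeroI8;

// Evaluated at compile time; with new(0) the panic becomes a compile error.
const FOO: NonZeroI8 = match NonZeroI8::new(5) {
    Some(n) => n,
    None => panic!("FOO must not be zero"),
};

fn main() {
    println!("{}", FOO);
}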
According to the talk it's more about Dynamic_Predicates, i.e. arbitrary conditions on a type. He's showing an Ada type definition for prime numbers, not just simple subranges.
They're called pattern types because the intention is to allow them to match a Rust pattern expression (more or less, no idea how conditionals would work).
As an update to my previous comment in the linked thread, the aforementioned minimal version of pattern types have been merged and generic support is currently being implemented[0]. Slowly but surely, we're getting there!
> Why "BalancedI8"? Because now that we've removed this most negative number (to provide a niche) the remaining value is symmetrical, it goes from -127 to +127
Actually, instead of just removing the most negative number i8::MIN you can appropriate its bit pattern to denote "missing value" (or "null" if you wish), so that your new data type is symmetric and nicely "nullable".
I'm seeing a lot more Ada posts on the front page recently. Was there a big investment in Ada recently or an acquisition where its marketing machinery is starting to chug along? Or was there a meaningful release or something?
Even with the administration change in the US, there is a long-term initiative across multiple agencies to eventually move to more robust software, and a part of it is the push towards safer programming languages (where safe means fewer memory issues, fewer vulnerabilities, fewer bugs in general, etc.). Similar government initiatives are now starting in Europe and elsewhere in the world, too.
This regulatory change is not new; it has been going on for years and will take more years to finish. And even without regulatory pressure, the migration would happen eventually. Look at cars. As they get more features like adaptive cruise control and various driver assistance features, the need for software that runs with no lag and reacts to surroundings correctly and quickly becomes absolutely critical. Car companies can now go out of business simply because their software is buggy. The car vendors now produce more software than ever, and they are in more dire need of better programming tools than ever.
Languages like Java, Scala, C#, Go, etc. cover many scenarios like cloud services and, for example, car entertainment systems. But for devices, microcontrollers, real-time low latency systems, etc., C and C++ have been the go-to languages for decades, and now that is starting to change, because it turns out it is very, very hard to write correct, bug-free, safe, and secure software in these languages.
Rust is one language that is getting into this market: Rust Foundation established a Safety-Critical Rust Consortium last year, for example, and Ferrocene compiler has a bunch of certifications done already. Ada is another option that would work for many such use-cases, too. The selling point of Ada is that it's been around for a long time and many vendors already have established compilers and tools with all appropriate certifications and qualifications done for all sorts of industries.
So, it's not really "Ada is interesting" and more "Languages that can replace C and C++ are interesting".
A lot of this movement continued to happen under the previous Trump administration; we have no idea what is going to happen, there are good arguments that it may go away, but also ones that argue it will accelerate or intensify. We’ll see.
Am I a bad programmer if Rust is really difficult for me? I desperately want to learn a low-level language, since I think I'm missing something hanging out with my buddies C#, Python and NodeJS.
C and C++ are too difficult, Rust is also hard. I think my best bet is Zig at this point.
No. Rust gives you a lot to wrap your head around very quickly; it took me three or four epiphanies to get it, and I tried to learn it and stopped twice before I got far enough along that it became easy to think about and work with. It doesn't make you a bad programmer, but I'd still suggest giving it another shot every 6 months or so as your time allows, and seeing if you're getting farther each time. As long as you're still learning each time you bounce off of it, it was probably time well spent: most Rust concepts will show up in other languages, and the ones that won't are still useful paradigms to be aware of, so you can contrast them with other paradigms as you gain experience.
C is small enough as to be very valuable to know. It's not the "high level assembler" that it was originally sold as, but as a conceptual model it's invaluable. It's also very valuable to run through CVEs, bug reports and such to find out how it falls down, and why safer languages have value. You can learn all that without even learning it to such a degree that you can write code in it.
I know you said C seems too hard - I felt the same way some 30 years ago. Then I learned Pascal, and found learning Ada to be a bit of a breeze after that. After that, coming back to C was a different experience - very natural. You don't have to force anything, and you can learn whatever you want whenever you want (or not at all).
Sometimes your capabilities can quickly level up in unpredictable and exciting ways - which I find to be one of the most fun parts of programming. And it tends to unfold like a horizon - you learn low-level stuff, then you get interested in high-reliability software, or real-time systems, or highly parallel / functional systems, etc. This is all great, and no one brain is big enough to hold it all - so have fun!
If you have written multithreaded C# code which encountered deadlocks, race conditions and had to build a mental model of which thread is responsible for what and currently "owns" a resource and is allowed to modify them... then you already kind of know a bunch of things implicitly that rust makes explicit and can transfer that knowledge.
At least that's how things clicked for me when I came from Java and I already had to debug lots of concurrent code before.
Similarly, if one has fought with a garbage collector and thought about efficient memory representations, where things get allocated and deallocated that transfers to some extent.
If you have been living blissfully in effectively-single-thread-land, then yeah, there'll be a bunch of new concepts to digest.
At least for the work I typically do, including my hobbyist projects in Unity, I almost never have to use threads. Praise be to Async, which has abstracted a lot of this.
I couldn't really grasp programming until I found JavaScript. Even now my criteria for learning a language is one of two things.
One, do I want to build something and this language would be the best tool.
Or, two, is this language going to make me a ton of money.
I really want to find a good use case for Rust that isn't superseded by what I can do in the languages I already know. I might take some time to make a game in Rust one day...
> Or, two, is this language going to make me a ton of money.
I don't think any language per se is ever going to make you a ton of money. Even COBOL is not enough to make a ton of money. What might make you a ton of money at some point is being highly skilled at developing in some broad area where that particular language happens to be used.
> I couldn't really grasp programming until I found JavaScript.
This is a bit surprising to me, because JavaScript is not at all an easy language to work with. You might want to look into TypeScript as a way to make quicker progress in that same domain, with Rust as more of a side-project to help you grok lower-level programming in general.
> I might take some time to make a game in Rust one day...
Game development in Rust tends to be especially challenging, since the main competition is C++ - another low-level language. Take a look at this example of a very simple Breakout game in Bevy, one of the most well-known game engines (though still very much at a highly experimental stage!) https://github.com/bevyengine/bevy/blob/latest/examples/game... - this might help you realize the level of effort that's typically involved.
I've been a salaried software engineer for a long time at this point.
JavaScript just clicked. A lot of the more complicated things you need to understand for C and C++ are just handled for you.
To be fair I was technically using Unity Script, which has everything in a nice game engine sandbox. Want to move a cube, takes 1 line of code.
I am working with Typescript on my latest project. It definitely helps when dealing with more complex projects. My current project is basically a website built in React.
I still like Unity, but I don't trust the company so I'm looking for a new engine.
Note that Rust only covers such scenarios for in-process data; for any kind of resource that is external to the process, there is no help at all from Rust's type system in concurrency or threading.
Which always gets forgotten in fearless concurrency discourse.
No, Rust is definitely hard. One of the best memes I’ve seen about rust goes like “Other languages have a garbage collector, with Rust you are the garbage collector.”
I also haven’t had as much fun learning another language in a long time. Keep with it! It’s worth it.
Rust is very different from Python or C#, let alone NodeJS. I'm not sure what exactly is so difficult about Rust for you, but the basic feature-set of Rust as "your first programming language" (i.e. no prior experience with C/C++) is one where the main emphasis to begin with should be on passing stuff by value and explicit copying, almost like a functional language - avoiding both references and interior mutability. These latter features should be looked into after you've become familiar with the basics, so as to avoid any confusion.
If you encounter specific diagnostics that are cryptic, file tickets. A lot of things we're aware of, and we improve them regularly - faster when easy, slower when complex - but we eventually get to them. We can't fix cases we haven't been made aware of, so reports from the field are very useful (sometimes users are more creative when it comes to misusing Rust than even fuzzers are).
Rust has a steep initial learning curve followed by a plateau of enlightenment.
The language has a lot of corners though -- not so much messy edge cases like C++ but just emergent complexity from its sophisticated grammar and type system.
As with all such languages: you do not have to make use of all language features everywhere, and in fact you should not.
The D programming language is far safer than C/C++, with the same amount of power. The syntax/semantics are such that it's very easy to learn if one is familiar with C/C++.
For example, although one can use pointers in D with abandon, it's better to use slices, where array bounds overflows cannot happen. Array bounds overflows are the #1 cause of security bugs in shipped C/C++ software.
The people who do use D really like it. It's a mature language with solid compilers.
I haven't experienced a memory corruption problem with it in maybe a decade, and I write D code every day. That's a big change from my days writing C and C++.
> The selling point of Ada is that it's been around for a long time and many vendors already have established compilers and tools with all appropriate certifications and qualifications done for all sorts of industries.
This makes it sound like the field is dominated by proprietary/closed-source tooling. That is a huge red flag for the health of the language if true.
Well, yes. But that's the feature in this context. The point is: places where Ada is used require certain levels of qualification and certification, all the way down to the compiler. You have to be able to prove that that binary fragment was produced by that particular code fragment. Think aircraft, nuclear reactors, rockets.
So this tooling is already certified and it fills the "compiler + various verification tools" space. Otherwise you'd have to certify your tool chain yourself every time it changes.
Say you are building an aircraft. The whole software stack is part of the aircraft certification, all the way down to the compiler. A complete aircraft is a system composed of other systems which themselves are composed of other systems. Eventually, there's a software system. Systems are certified independently and for a complete configuration.
Of course, requirements get stricter as the criticality of the component increases. Not every component has such strict requirements as discussed above. But a lot of them do.
The popularity of Rust has caused a lot of people to start taking safety more seriously. Ada is the safety language for everything except memory and thread safety (they're pretty safe still, but not completely like in Rust).
I think some of it also comes from the number of new programming languages that have come out over the last few years. It's common for people to point at Ada as a reference for how to address problems that come up in C-like languages since Ada already solved so many problems in the 80's.
My assumption has been that Rust's rise to prominence here has led people to investigate and reflect more on prior art in the realm of "safety" in the PL world. This leads them to Ada. And as there's a general trend on HN to look at less traveled roads, it starts popping up here. The Baader-Meinhof phenomenon then kicks in and creates a feedback loop.
No, if it was for the bandwagon in social media, everybody would be forced to use Rust. However companies that work on government contracts have for a long time used Ada, and they have a lot of software and investment there. So, with the recent government advisories, the Ada tech is coming back to fill that opportunity.
Maybe the rust folks realized that 40 more years and billions of dollars wasted on "we'll get pointers right this time and not leave the same old gaping holes in our new software" left people receptive to new things?
They probably have another good reason to advocate too - 90% of anti-rust sentiment is "Gross, people like using it so much they tell others that they like it".
> 90% of anti-rust sentiment is "Gross, people like using it so much they tell others that they like it".
More like "Gross, people like using it so much they imply (or just outright say) other languages are inferior."
edit: apparently I'm "posting too fast", so I'll reply in an edit. What you (sophocles) said is effectively equivalent to "other languages are inferior", although I appreciate how you twisted the words up like that. Computers can be used effectively in a wide variety of languages. Do some languages age poorly? Sure. But not all of them.
Nah, that's just the listener's ego overreacting to someone acknowledging that the obsolete and factually inferior language they like is in fact obsolete and factually inferior. For a long time the world bought into "C is best for X, Y and Z things and it's the only real choice"... so people did that human thing and tied their identity to using that particular language. Their subconscious has decided that they are being called inferior, because they think the tool is their value rather than their ability to reason deeply about what computers are going to do.
I don't know if "putting lesser languages to shame", but https://hubris.oxide.computer/ is professional grade, built in just a few years.
But I absolutely take umbrage at the idea that "build a modern production OS" is the bar to clear for any new language to prove themselves because modern OSes are rare regardless of language of implementation. How many "professional grade OSes" have even been released in the last 10 years?
Right. After Java in the mid-90s, Rust has become the most hyped language. Meanwhile C/C++ have been quietly adding in wisdom from lessons learned in the trenches and have gotten much better.
Next time a rustacean starts jumping up and down, show them Raphael Finkel's book Advanced Programming Language Design so that they can understand there is nothing new under the sun.
IMO the language has merits regardless of a few (if not very few) bad apples in the community, and its technical merits should be the primary lens for evaluating it as a choice.
I'm more fed up with smug rustaceans acting like everything Rust does is somehow novel. Just because *you're* not aware that a language feature already existed earlier doesn't mean Rust invented it.
The whole point of Rust was to take developments in programming language theory from 20 years ago or so and make them practical for widespread use. Practically nothing Rust does is new, because "new" stuff means that the overall implications of that language feature have yet to be figured out. Which is bad if you want to keep your language elegant and easy to learn.
That's my point in a nutshell. It's a cool language. But there's a growing number of converts for whom this is their first introduction to these concepts, and they don't seem to have the intellectual curiosity to dive deeper.
I also see this behavior with Typescript and "powerful type systems"
People will point out features they like about Rust, sure. But I've never seen anyone be smug about it, or claim they're novel features.
Edit: Maybe these types of low quality comments exist on social networks, but... Isn't that just the norm for social networks? You can say the same about cooking advice on social media, or "life hacks" content, etc
Is there another language out there which encodes lifetime semantics and affine types besides Rust? AFAIK there isn't (happy to hear otherwise though).
I have had some back-and-forth with some rustaceans here on HN and most seem to be cargo culting. I am all for enthusiasm/evangelism when there really is something noteworthy but not blind religious fervour. Graydon Hoare/Manish Goregaokar are on record saying they took good ideas from old (and lesser known) languages and putting them together in Rust based on their view of how a modern language should be like. Obviously this is not a bad thing but folks need to be aware of history and how they got to where they are now.
Rustaceans would do well to read something like Raphael Finkel's book Advanced Programming Language Design where a whole bunch of languages and their unique features are shown/compared.
I think Rust opened up a ton of interest in languages that are as fast as C++ but safer... The fact that Ada was doing a lot of stuff that Rust is trying to do now but decades earlier is particularly interesting.
There was an interesting conference where Ada was presented. Ada marketing has gotten better over the years as well. Still not enough that I'm trying it, but I remain intrigued.
I would say Ada got a lot nicer when Ada 2012 was released, and interest picked up since then. Around the same time, C++ got a lot nicer with the release of C++11.
Hardware products like sensor and control devices. I also use it for desktop tooling whenever a script wants a for loop. Unfortunately I use Flutter for GUI stuff, because I hate JS and because it has Android/iOS plugins that I would have to write if using Gnoga.
A huge portion of Ada's modern ecosystem is MIT/apache or similar licensed, and available via Alire: https://alire.ada.dev/crates.html
A great deal of those are driver libraries for SBCs and other embedded frameworks, but there's plenty for things like web services, and even a few game engines in there.
But the biggest advantage of Ada is the syntax and the good compilers that can empower that syntax. These compilers are the expensive part and the most worrying. Availability of the libraries does not matter that much if you can't access the good compilers. I would be particularly interested in the features of open-source compilers vs. the commercial ones. Also, whether any effective verification/proof tools are available on the open-source compilers.
Ada was my first (non hobby use) language. Mostly on VAX/VMS with LSE as the "IDE", and I use the term IDE loosely :-). Although I encounter it rarely I'd encourage anyone to take a look.
The vendor list is really telling of the issues with Ada's adoption:
- From my quick scan, only AdaCore supports Ada 2012 (where a lot of the really good stuff is implemented); the rest are stuck on Ada 95.
- None of the vendors seem to list transparent pricing.
If you are only selling based on volume and through a sales rep, then you are off the bat excluding most startups, SMEs, and just generally bootstrap-curious folks.
Adacore technically retired the Community version a few years ago, so the most up-to-date free version would be GNAT from FSF (which is what Alire brings in automatically with `alr toolchains --select`).
Predicates are great but most of the good stuff including privacy, proper subtyping and abstract data types came with Ada 83. Rust can't hold a candle even to Ada 83 imo.
Ada has practically no mindshare, Rust does. Just like say, Scala, things can be technically 'good', but without adoption it isn't going to get the attention and visibility to compete with everything that does.
We're talking about mindshare, not commercial incentives. There are plenty of things sold to small groups of buyers with no significant mindshare. Mindshare does not equal commercial viability or sales numbers.
As for who would pay for a Rust compiler: probably not a whole lot of people, unless that compiler has something special to offer that the normal compiler does not.
The same goes for a C compiler, there are Intel compilers that are supposed to be 'better', but as it turns out, in most cases not 'better' enough for people to pay for them. But even then, I would not be surprised if more people pay for the ICC than for Ada (but I would also not be surprised if the Ada compiler sales rack up more money than the ICC sales).
I don't know shit about law, but I'm assuming it's not allowed for someone to build their own compiler and make their own business out of the Ada programming language?
I don't think there's anything preventing anyone to sell an Ada compiler, tooling. It's an ISO standard and you can read its spec (the holy Ada Reference Manual) and rationale for free.
I'd say the biggest hurdle is that Ada is a very complex language to implement. I've been writing Ada for more than 20 years, and writing static analysis tools for 5 years with libadalang (an amazing toolbox...) now, and I still find it hard to grasp the whole combinatorics of features.
How much harder would it be to implement than say, JS? 3 orders of magnitude more difficult?
I wonder if there's a deno/bun style business in here. Would be cool to see Ada compete with Go/nodejs/java etc. Or do you think that would be a terrible fit for the language?
This probably isn't the best metric, and I may be missing some files as I'm not very familiar with GCC internals, but if I'm getting this right then we can see that just the Ada frontend has more lines than the C, C++, and D frontends combined.
> I wonder if there's a deno/bun style business in here.
You could do this with a library, without anything special in Ada. You don't need to dig deep into the internals like you do with JS.
I think Ada is seen as some esteemed language that only the military and space industry would have the expertise to use. I had that opinion for maybe a decade, but it was very wrong. The other issue was that it is so powerful that compilers had bugs, and by the time an affordable open source compiler came along, I guess C++ was gaining libraries - and the militaries don't release libraries.
The ironic thing is that Ada was designed to be cost effective over a projects lifetime. C costs you a lot further down the line.
One of the reasons for the whole safety hype going on is that companies have finally started mapping developer salaries, devops and infra costs to fixing all those CVEs.
> I think Ada is seen as some esteemed language that only the military and space would have the expertise to use.
I know Boeing is technically a space company these days, but they weren't when they created the 777. Several of its systems are written in Ada, including the cockpit interfaces, electrical generator systems, and primary flight computers.
With respect, Rust isn't a great choice for web either. At least, not for web _sites_. I would still argue it's not very good for 90% backend API development either, but that depends on what's being done in the backend.
Rust is a bad choice for web and a meh/bad one for game development. It's certainly not good enough at either to have much of a technical edge over Ada.
There are worse choices in the compiled web language niche. Most of the competitors don't qualify, because people don't want to do formal verification for their side projects.
I think with the recent flurry of languages focusing on safety, Ada has been making a comeback (see: ada-lang.io, getada.dev, alire, etc).
This presentation in particular was in the Ada dev room at FOSDEM (I gave a presentation in that room as well), and there were over 100 people there; we actually ran out of seats in the auditorium.
FOSDEM was packed overall. I can't think of a subject track I went to that had spare seats. I noped out of more than one because I just couldn't get in.
There are many other factors that influence language popularity besides technical quality, like:
- marketing;
- big companies using it;
- familiarity;
- history of the creators;
- history of the influencing languages;
- timing;
- luck;
- regional usage;
- etc.
Despite some programmers seeing themselves as fully rational making cold decisions, we're like everyone else.
> The street cred of "hey, large company X is using it, it must be good" is also very important.
Yes, and also, "large company X is spending lots of money on it, so they aren't just going to abandon it once it's no longer the newest, coolest thing."
> It was literally first programming language taught at university.
Do you mean "It was literally first programming language I was taught at university."? because the first language ever taught was more likely to be one of the autocoder/assembly variants, or FORTRAN.
"First language" is generally taken as a term of art in this sector. It is the first language we're teaching students who we expect to learn other languages as well, so emphasis on "first" here unlike for say a "taster" course in another discipline where you're learning only say, Python, with no expectation you will ever learn other languages.
Edited to expand: For a First Language you can choose to pick a language that you don't expect your students will actually end up using, for pedagogic reasons, just as we might spend time proving fundamental things in Mathematics even though those are already proved and you'll never do that "in real life" after studying. A language which has good properties for learning about programming is not necessarily also the right language to actually write yet another web site, database front end, AI chat bot and video streaming service.
I've spent considerable time thinking about this and I believe Oxford and Cambridge were right to choose an ML as First Language. The MLs have desirable properties for teaching, even if you expect your students to end up writing Python or C++ after they graduate. I am entirely certain that my exposure to an ML at University made it much easier to pick up Rust than it was for people whose nearest previous languages were C and C++
It's indeed hard to imagine that the first programming language taught at any university would be Ada. That would at least mean that the university started teaching programming and computer science very late.
There's been a number of main programming languages taught over the years. Back in the late seventies/early eighties, some universities in my region used Simula in their programming courses, for example.
Agreed. My mum learned basic Fortran at university in the early 1970s, before Ada existed. (It was done on punch cards, and they had to wait a day or so to find out if their programs worked!)
My university had a model train set, hooked up to some ancient 386 machines (and we're talking late 2010s here) and it was used for a real time programming course which was taught in Ada.
Unfortunately the lecturer that ran the unit retired the year I started my degree and by the time I had the prereqs required to do the course the faculty had run the course once without the original lecturer and it was apparently a disaster so they canned the unit until it could be rewritten from scratch, sans train set ... and in Java.
Years later, I still think about missing out on programming a train set.
The University of Waterloo has a similar course, CS452: Real-time Programming.
It’s not quite the same as having physical access to the train set, but a student eventually wrote a simulator for the Märklin train set [0]. Another student wrote an emulator for the TS-7200 used for the class [1] if you don’t want to test your kernel in QEMU.
The university where I work used to teach Hard Real Time to electronics students with toy elevators. When (not if) you screw up your elevator smashes into the roof of the elevator tower and self-destructs the toy, reminding you that this is serious business and if you fuck up that badly in real life you'll be lucky if the victims live to sue you...
It is a good thing for a language to have a test suite, but from a business perspective it increases the barrier to entry. Why? 1/ you start your new compiler with 10000 bugs (the failing tests in the public test suite); 2/ you get no clients, since clients want a compiler that passes the public test suite; 3/ no clients means no money, and this lasts until you fix the 10000 bugs.
With programming languages that do not have a credible public test suite you can get away with shipping the compiler ASAP and get money from customers, then concentrate on fixing customer impacting bugs.
All in all a blessing and a curse, life is full of compromises :)
- if most users are "specialists", then you don't need compiler/tooling/docs with very good UX for non-specialists; but until you have very neat UX even for non-specialists, you will not get a lot of traction with non-specialists, and in turn it might seem pointless to fix the UX
I say half problem because in my experience fixing it is also beneficial for "specialists".
This problem also overlaps with perceived (and sometimes actual) gate keeping and/or elitism.
Basically the same reasons why e.g. Haskell is much less widely used than it could be (and why most usage is in companies largely filled with people from universities which have it as a required, or strongly recommended, course).
Just with more issues (listed in some of the other responses) added on top of it.
> I say half problem because in my experience fixing it is also beneficial for "specialists".
This is critical. Once I realized that even experts didn't necessarily have homogeneous advanced knowledge of the entire language, it became easier to justify to myself spending time on improving the diagnostics of uncommon errors for clarity and learnability. An expert requires less explanation, but they still require one.
And I also suspect that spending time on better DX at the right time of the adoption curve has an outsized impact on it.
The bad part is that it is a hard, slow, endless and somewhat thankless job (you're going against the grain of the compiler design half the time, adding logic that needs to be maintained, and for every case you handle ten more pop up elsewhere that need to be handled in a different way).
It hurts my eyes (more seriously, the Pascal family looks lost vs. the C one; it's a popularity thing) and, from my understanding, it took too long to become truly usable in the FOSS world (a bit like CL).
This is a common complaint I read, and I have never understood it.
My eyes are not strong: I wear spectacles of about -6.5 dioptres.
If text size is fairly small, it is REALLY difficult to distinguish
(...)
from
{...}
... on large screensful of text. And
[...]
... is not much more visible. Making that significant is terse, yes, but Sendmail is terse. Terseness is not an unambiguous virtue.
Secondly, indentation: I learned 2 or 3 languages in the 1980s before I learned C, and it's a given that you indent opening and closing control structures to match. That's how it works: you structure blocks of code to show the hierarchies.
But most curly-bracket language coders have their own weird schemes where the opening { is in a totally different column from the closing }. And they fight like alley cats about it.
A pair of readable words, words of different lengths that are easily distinguishable without reading them character-by-character, are far more legible than a pair of line-noise characters -- especially if those characters on their own have wildly different meanings in the language, and -- to make matters even worse -- are also extremely close to the characters for code comments!
Look, to spell this out:
You have your own opinion and that is fine. You are perfectly entitled to it.
But you are making out that your preference is some global truth, and I think you need to realise that what is easier or clearer for you is in fact less clear and much LESS readable for other people.
Those words were chosen for good reasons and have persisted through whole generations of programming languages for something like half a century, and there are good reasons for this.
The C designers chose things shorter and easier to type and that is their privilege. I personally, and thousands of others, dislike their choice and prefer Wirth's.
Nobody is absolutely objectively right or wrong here.
The point is that there are good reasons for both choices, and nobody -- including me -- gets to go and say one is better and clearer and the other is worse.
What you like, I hate. What I like, you hate. But we both have our reasons. I think mine are good reasons. Presumably you think yours are, although you haven't explained them. You just assert them as obvious global truths. They are not.
My broad feelings on many of the issues in Rust, including what's presented here, is that they come from starting with a C-like language as a base. The memory safety and thread safety features in Rust are very impressive, but many other things feel like they're lacking and non-safety features that were added later feel tacked on rather than being a core part of the language.
I feel that if Rust had started with a language like Ada and applied their innovations there then we would have ended up with a much nicer result.
Probably true, although I wonder how much comes from acting like C semantically and how much comes from looking like C. If Rust had taken Ada and made it look like C then would it still be so popular?
My guess is that it would be since there's already a steep learning curve for a C developer to pick up Rust. For a developer who's willing to go through all that I don't think a few more new concepts would be a major issue.
Or it could have killed it on the vine because the syntax was too esoteric and would have pushed the target demographic away before they even tried it. It's hard to test the counter factual in this case.
A common topic of conversation in the rust project is the concept of "weirdness budget": the idea that you can only be so different from what your expected userbase already knows before you become too hard to learn. I don't think you could introduce the borrow checker on a language like Erlang if you're aiming at systems programming. But now you could make a language that is slightly more weird (introducing other features) than Rust when targeting existing Rust programmers.
Rust started with OCaml as a base, then evolved in a more C-like direction later on, mainly as a familiarity-with-the-target-programmers thing. It wasn't originally envisioned as the systems language that it became; it was much more high level, with a planned GC and everything, then the lifetimes stuff got developed and the contributors at the time pushed it into more and more control in the hands of the programmer.
Which C-like language do you believe was the base of Rust?
If you're judging because the syntax looks like a semi-colon language that's just a thin disguise, to make it easier to onboard people from those languages.
Not a specific C-like language, but that family of languages in general and the design decisions from them:
A number is just a number even if two numbers represent completely incompatible things. If one integer represents a chmod value and one represents a number of files then you can add them together and the compiler will do nothing to stop you even though adding the two is always nonsensical. There's ways around this by creating your own types, but it's not a trivial process. In Ada I can just declare two different numeric types with identical ranges (type A is range 1..200) and the two will be incompatible.
Somewhat related to the above, all array indexes are just numbers. In Ada we can define a type and then use that type as the index of an array type, so if a value is of the index type then it must be a valid array index. In Rust, if I declare an array as having 5 elements then pass it to a function where I try to access element 6, nothing will raise an error at compile time.
Continuing on from the last point, arrays are only indexed by numbers rather than any discrete type. In Ada an enumeration is a valid type to use as an array index, or a range from 5 to 15, or -123 to 7. I'm sure this is something you can do with a generic in Rust, but it's going to be more clunky than having native language support.
Structs are just basic collections of fields. In Ada we have discriminated records that allow for some struct fields which can only be set at instantiation to control things like the number of elements in another field which is an array. An example of where this could be used is in a ring buffer which is fully contained within a simple struct without the need for extra memory allocation. (I'm aware this conflicts with the other examples about arrays; in short, there's a second type of array declaration with a variable subrange as an index.) Maybe you can do this with generics in Rust, but it's not as clean, especially if you want to, for example, add a procedure to the ring buffer that takes another ring buffer as a parameter.
These are just off the top of my head as someone who's only briefly looked at Rust; I'm sure there's many more examples. The exact syntax might not match C, but many of the concepts come directly from C and derivatives.
The Ada examples I've given exist in other languages too in various forms, I just happen to be an Ada dev so I know exactly how they work there.
3. Maybe const generics? e.g., struct Foo<const N: usize> { array: [u32; N] }, used as Foo::<3>, or something - see the sketch below. You could also do this with less typed-ness with Foo::new_with_capacity(n) and some private size management.
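A rough sketch of that idea (a fixed-capacity ring buffer with const generics; all names are made up):

// Fixed-capacity ring buffer; the capacity lives in the type, no allocation.
struct RingBuffer<const N: usize> {
    data: [u32; N],
    head: usize,
    len: usize,
}

impl<const N: usize> RingBuffer<N> {
    fn new() -> Self {
        Self { data: [0; N], head: 0, len: 0 }
    }
    // Push a value, overwriting the oldest element when full.
    fn push(&mut self, v: u32) {
        self.data[(self.head + self.len) % N] = v;
        if self.len < N {
            self.len += 1;
        } else {
            self.head = (self.head + 1) % N;
        }
    }
}

fn main() {
    let mut rb: RingBuffer<8> = RingBuffer::new();
    rb.push(1);
    rb.push(2);
    assert_eq!(rb.data[0], 1);
}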
It definitely takes aesthetically from the C-family of languages in the same way that Java and Go are all braces and semicolon languages. I can't exactly minimize this - because I am comfortable with languages that look like this and less comfortable with some languages that don't, so I recognize the effect is very real - but it doesn't affect the features of the language much.
Well you say "that family" but you only give examples of why Rust isn't Ada.
Is the situation that you consider there are two families of languages, "Ada" and then "All of the other programming languages" and you've decided to label that second group C?
On that basis I agree, Rust has a "C base" as do Fortran, Logo and the Standard ML of New Jersey, but I don't think that's a useful way to understand anything about programming languages.
> In Rust, if I declare an array as having 5 elements then pass it to a function where I try to access element 6, nothing will raise an error at compile time.
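A minimal reconstruction of that experiment (my sketch, not the original code):

// The length is part of the array type, so this is checked at compile time.
fn get_sixth(a: [u32; 5]) -> u32 {
    a[6] // rejected: the deny-by-default `unconditional_panic` lint fires here
}

fn main() {
    println!("{}", get_sixth([1, 2, 3, 4, 5]));
}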
Looks like a compile time error to me. I suspect you wrote a function which takes a slice reference, not an array. In Rust these types are named &[T] and [T; N], and sure enough we don't know the value of N in a slice because, well, it's not part of the type. Ada can't magically know this either; in Ada you've probably been writing an array here, and if that's what you meant you can do that in Rust too, with similar effect.
You can't do it in C (the array types do exist in C, but they decay to pointers at the edge of any function, so we can't pass these types between functions) but you can in Rust.
> arrays are only indexed by numbers rather than any discrete type
I guess you're thinking about core::slice::SliceIndex<[T]> ? This is (at least for now and perhaps forever) an unstable trait, so, you're not "allowed" to go implement this in stable Rust, but if you did anyway you can cheerfully implement SliceIndex for your own type, Rust won't barf although you can probably undermine important safety features if you try because none of this is stabilized.
Far from being "only numbers" core::slice::SliceIndex<[T]> is already implemented for the ranges (both kinds) and for a binding pair (lower_bound, upper_bound)
So in terms that maybe seem more obvious to you, we can do foo[(Bound::Unbounded,Bound::Unbounded)] as well as foo[3..=4] and foo[3..] in addition to the obvious foo[3] -- these are all defined on slices rather than arrays though because in practice that's what you actually want and we can just decay the array to a slice during compilation.
Overall I still think it really is the syntax that threw you off.
> Well you say "that family" but you only give examples of why Rust isn't Ada.
Ada is the language I work with daily, but I think other languages that have been designed for the ground up for safety have similar features.
Aside from arrays vs slices, the solutions to these problems all seem less ergonomic than they could be if they were a part of the core design of the language, which is my major issue here rather than whether it is possible at all to do such things.
> You can't do it in C (the array types do exist in C, but they decay to pointers at the edge of any function).
You can pass around a pointer to an array in C (int (*)[3]), but I don't think it's worth considering, as the language and standard library are not designed to make working with it easy. This is the same as my issue with Rust: if declaring a new type to represent a number of files isn't as easy as just using a u32, then no one will use it, etc.
My opinion is that the absolute safest way to do something needs to also be the easiest way to ensure that users always pick the safest option. Ideally the user would even have to go out of their way to pick the more unsafe option. Rust is really good at this with memory and thread safety, just not as much with the rest of the language (although it's certainly better than C).
As an addendum to my last example in the GP, I suppose the borrow checker prevents any safety issues from such a type requiring a separate allocation without generics that I was originally considering, although it does not help with leaks.
> This is the same as my issues with Rust, if declaring a new type to represent a number of files isn't as easy as just using a u32 then no one will use it, etc..
The first thing I'd do, and likely get away with, is
struct FileCount(u32);
The annotations (derives, operator impls) aren't needed until you make these part of your API and you want to communicate specific actions, but it's likely close to what you'd end up with.
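A fuller sketch of where that leads (Mode is a second, hypothetical newtype for contrast):

// Same representation as u32, but the two are now incompatible types.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct FileCount(u32);

#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct Mode(u32);

fn main() {
    let files = FileCount(3);
    let mode = Mode(0o644);
    // `files + mode` does not compile: neither type implements Add,
    // let alone across types. Mixing them requires deliberate .0 access:
    let nonsense = files.0 + mode.0; // still possible, but now visibly a choice
    println!("{nonsense}");
}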
> My opinion is that the absolute safest way to do something needs to also be the easiest way to ensure that users always pick the safest option.
I am always confused about the terminology. I know RangeTypes or RangedTypes from Pascal, but I think in Rust they are something else. The talk mentions subtypes in the Ada context, which I know from an OO context, but again it seems to be different. Then there are refinement types, which seem to be more powerful and subsume the others. And now we also have pattern types.
> The MISRA guidelines for Rust are expected to be released soon but at the earliest at Embedded World 2025. This guideline will not be a list of Do’s and Don’ts for Rust code but rather a comparison with the C guidelines and if/how they are applicable to Rust.
> In particular, it should be noted that using the debug or release compilation profile changes integer overflow behavior. In the debug configuration, overflows cause the termination of the program (panic), whereas in the release configuration the computed value silently wraps around the maximum value that can be stored.
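A tiny demonstration of the difference the quoted guideline describes (black_box just keeps the compiler from spotting the overflow at compile time):

fn main() {
    let x: u8 = std::hint::black_box(255);
    let y = x + 1; // debug profile: panics with "attempt to add with overflow";
                   // release profile (default settings): silently wraps to 0
    println!("{y}");
}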