Ada 95: The Craft of Object-Oriented Programming (adaic.org)
138 points by stefankuehnel 6 months ago | 75 comments



I don't know why integer derived types don't exist in every language, from what I understand they are a compile time feature that removes large swaths of potential errors.

We all agree that string enums are generally a good idea when we want to limit passed in configuration values to a limited set of valid strings, so why not ensure that if 2 (or 5...) integers are passed into a constructor (looking at you Java) that they are actually all of the proper "type" of integer?

For those who don't know what I am talking about, Ada lets you do stuff like (taken from the linked book)

    type Age is new Natural range 0..150;
and now you can say that the age parameter on a function is of type Age, and if you try to pass some other integer type in, it will fail.
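A minimal sketch of what that rejection looks like (Set_Age and Years are made-up names):

    procedure Set_Age (A : Age);
    Years : Integer := 42;
    -- Set_Age (Years);      -- rejected at compile time: expected type Age
    Set_Age (Age (Years));   -- explicit, range-checked conversion required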

Ada's type system is so powerful that the metric system of measurements, with physics units, speed, acceleration, all that jazz, can be expressed in it.
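(GNAT has dimensional-analysis aspects for the full units story; a much simpler sketch of the underlying idea, using plain derived types with names of my choosing, is:

    type Metres  is new Float;
    type Seconds is new Float;
    D : Metres  := 100.0;
    T : Seconds := 9.58;
    -- X : Metres := D + T;  -- rejected at compile time: Metres and Seconds don't mix
)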

I've seen C++ people say "but templates can do that!" but if I ask about how much eye bleed is included, they always change the topic...


Rust is considering this under the name of "pattern types" [0][1]. If the full idea is realized, you should be able to handle strings and enums as well.

There is also a PR for a minimal implementation using a macro and targeting integers specifically[2]. It's experimental and there is a chance this doesn't get stabilized, but here's hoping it makes it through!

[0]: https://github.com/rust-lang/rust/pull/107606 [1]: https://cohost.org/oli-obk/post/165584-ranged-integers-via [2]: https://github.com/rust-lang/rust/pull/120131


I’m glad to hear that. We’re moving stuff to Rust at work and it’s great at memory safety. Much better than relying on Unchecked_Deallocation. But that said, there are a lot of times when something can’t be out of a range, like voltage is from 0.00 to 1.85 in steps of 0.01. Ada is much better at safely specifying that range, while Rust will happily take values that are outside the acceptable range.
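For what it's worth, that voltage example is close to a one-liner in Ada as a fixed-point type (a sketch; the compiler picks a power-of-two small no larger than 0.01 unless you pin it with a 'Small clause):

    type Voltage is delta 0.01 range 0.00 .. 1.85;
    V : Voltage := 1.23;   -- fine, checked on assignment
    -- V := 2.00;          -- out of range, caught by the checks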


That's unfortunate. Ada is so much nicer to work with and Spark now has deallocation safety and memory leak prevention.

https://youtu.be/97G1V2U8Drk?si=wNqNs1GiuO9ZdGwP


I chose Ada over Rust for my company a few years ago and couldn't be happier. My entire embedded and tooling code bases are now Spark compatible. I don't believe any language can match Ada for drivers, memory register and network protocol handling.


NVidia did as well for their car automation firmware.


Almost as though in order to maintain runtime type invariants on values you must use some kind of function to “construct” them...


Ada checks the ranges on types assigned to each other at compile time and then invariants on assignment and when passed as parameters. You can turn the runtime side of these checks on or off individually at the module level.
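A minimal sketch of the knobs involved (standard pragmas; placement and granularity vary by need):

    pragma Suppress (Range_Check);        -- permission to omit range checks here
    pragma Unsuppress (Overflow_Check);   -- revoke any suppression of overflow checks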


Agreed!

Expanded type guards can replace a huge number of unit tests in a way that is more concise and provides a stronger promise across more of the code.

It's about orientation. Orienting around the data makes so much more sense. Why test every function's email address param when you can instead guard via an email address type?

TypeScript is the most popular language to try this in, and its type system is a lot of fun. However, the compiler messages remind me of working with Boost libs in C++. It's also unfortunate that TypeScript provides no guarantees at runtime, so I never have complete confidence like I do with real statically typed languages.

You can also get to a point in typescript where you go down a type system rabbit hole and find yourself a code philosopher. It's been a personal black hole for time wasting on a project for me.

Libs like Zod go a long way in JS and they accomplish the same, but better, for me. They leverage TypeScript and you get both compile-time and runtime support.

Vogen in C# provides compile-time warnings but uses generated code to achieve it. It is impressive what the new code generation support in C# can do now.

Still, it would be nice to have first class support from a language.

https://github.com/colinhacks/zod

https://github.com/SteveDunn/Vogen


I think this is a good example of how Ada handles data. The approach is to describe the data structures and let the compiler handle the best implementation.

I don’t know if that always leads to the most efficient code, but it does make you think about your data first, before you start programming. It’s a different type of prototyping. Once that is done, a lot of the program flows naturally around it.

Nothing stops you from doing that in other languages, of course, but Ada makes it quite easy and part of the concept.

Together with pre- and post-contracts you get a long way toward writing code that avoids some of the usual error groups without much effort.

Plus it is (in my view) easier to read code if you can quickly check the range of an integer or read the pre/post contracts of functions. It is surprising to me how much of an implementation is fixed once that sort of thing is clear.
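A minimal sketch of such a contract in Ada 2012 aspect syntax (the subprogram is made up):

    function Checked_Sqrt (X : Float) return Float with
      Pre  => X >= 0.0,
      Post => Checked_Sqrt'Result >= 0.0;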


While people keep using Ada for ranged types examples, this was already present in Pascal and Modula-2.

    type Age = 0..150;

    type Age = [0..150];
It is a pity that it has taken so many years for people to start appreciating type-driven design.


A feature that combines well with range and enum types are typed indexes of arrays:

  TYPE
   TWeekday = (Monday, Tuesday, Wednesday, Thursday, Friday, Saturday,
    Sunday);
  VAR
   WeekdayNameIndex : ARRAY [TWeekday] OF ShortString = ('Mon', 'Tue',
    'Wed', 'Thu', 'Fri', 'Sat', 'Sun');
Now `WeekdayNameIndex` is a properly type- and range-checked look-up table.

It's really sad that these simple joys of Pascal are still lacking from a lot of mainstream languages.


This is an awesome feature; combined with appropriate type and bounds checking it prevents so many errors. It can also avoid resorting to a heavier-weight map type.

Ada has this as well, including using any arbitrary contiguous range for array indexing, which handles remapping indices for you. E.g. if the key range is 20-40, the language handles associating it with array indices for you.
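A sketch of that in Ada (names are illustrative):

    type Key is range 20 .. 40;
    Table : array (Key) of Natural := (others => 0);
    Table (25) := 7;     -- fine
    -- Table (19) := 7;  -- outside the index type's range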


Indeed, I used that kind of stuff a lot.


I also used it, but wouldn't go back.


Me neither, but that is because now I have F#, Scala, Haskell, Rust,... :)


I think Ada, due to its DoD origins and mandated use in some defense projects, is more common than Pascal or Modula-2.

Funnily enough though, I took a basic computer science course back in 1995 where Pascal was the first language that was taught! I still remember the instructor telling us to imagine how we would instruct an alien how to use a vending machine, we could not assume it knew anything and had to lay out each logical step and it would do exactly as we told it, so we better get it right. He said that was the mindset we needed for programming.

Fast-forward to 2004 when I resumed my further education and now the first language taught in that university was Visual Basic, followed by Java for the object-oriented programming class and then straight into pure C for Operating Systems which was a bit of a jolt. Oh, and then back to Java when we had some horrible UML class with design software that auto generated code.

Apologies for the off-topic meandering, the mention of Pascal brought back memories!


Ada was a love-hate of mine. We had to use it for avionics/defence stuff and it always took a week just to get a project started, writing pages of type definitions. But:

1) It made you really think about the ideas. Sometimes it revealed misunderstandings in the requirements gathering or translation. That was the optimal time to find problems in the requirements spec and bounce them back before it got too late.

2) Once you get moving in a very strongly typed language, the rate of serious obstacles falls off rapidly. It really does pay to do the work up front. I think of it as having to tidy up your room before you're allowed to play.


Ada has been shown time and again to be more cost-effective than C, C++ or Java, even in the old days before Ada 2022 and SPARK 2014's recent improvements.

http://sunnyday.mit.edu/16.355/cada_art.html


More common?

Apollo, Lisa and Mac OS were originally written in Pascal.

Apple's Object Pascal was adopted by Borland and left its mark in the PC computing world, still present to this day in Delphi and Free Pascal.

The MS-DOS demoscene mostly had Turbo Pascal as a companion to the otherwise Assembly-written demos.

One really needs to be deep into caring about security to learn Ada, due to how costly getting access to a compiler was during the last century, before GNAT came to be.

I had the books and no compilers to actually write the code on.


The Windows 16-bit APIs used the PASCAL calling convention, and MS sold a Pascal compiler at the same time as, or possibly earlier than, their C compiler.


Yes. "long far pascal" was part of C function prototypes for those APIs used in Windows programming.

They also sold MS Fortran and MS Lisp.


> They also sold MS Fortran and MS Lisp.

1980s Microsoft Lisp was just reselling Soft Warehouse muLISP, minus the compiler. In 1999, Soft Warehouse was purchased by Texas Instruments. Microsoft also resold their computer algebra system, muMATH, which was built on top of muSIMP (muLISP with an Algol-like syntax).

MS Fortran, by contrast, I believe was always an in-house Microsoft-developed product. So, originally, was Microsoft COBOL versions 1 and 2, but version 3 onwards was rebadged MicroFocus COBOL.


Interesting, didn't know, thanks.


I don’t know if Niklaus Wirth (the designer of Pascal, Modula, Modula-2, and Oberon) participated in the DoD competition, but Ada was heavily inspired by those ideas. The shame was that Dr. Wirth (of ETH in Switzerland) didn’t define enhancements to Pascal. That left vendors to plug those gaps (like Object Pascal on the Mac or at Borland), which made those extensions non-standard. But he wasn’t making a business out of it; he was purely academic.


He collaborated somehow with Apple on Object Pascal and Clascal.


In the early 90s I ended up taking a Pascal course in high school for a math credit. Afterwards I joined the military in a computer software MOS and was trained in Ada. Of course... my actual day to day was often working with Visual Basic or VBA. The government employees assigned to my unit regularly worked in COBOL or Fortran... we had a single Ada system that required rebooting into 16-bit DOS to compile, but ran under Windows 95. Imagine the fun that was to debug from a single computer!

I am happy to contribute to your off topic meandering.


Julia has Value types for when a primitive type can be used during compilation: https://docs.julialang.org/en/v1.10/manual/types/#%22Value-t...

They aren't the same thing; I mostly brought them up to point out that Ada can only throw compile-time errors for range types under limited circumstances, otherwise you're expressing a runtime bounds check using the type system. Which I agree is a nice feature for a static language, which Julia is not; Ada won't let you leave out the runtime check, so numbers outside the range are unrepresentable in the program.

It should be possible to use Value types to write a generated function which instantiates a constructor for a primitive type which is restricted to a range. All type errors in Julia are at runtime, but like I said, Ada has to push many range exceptions to runtime as well, and the property that a number outside of the range is unrepresentable is preserved in both cases. Although I can't point you at an implementation of this in Julia, I'm increasingly confident that it could be done.

Julia's type system is also powerful enough to express units https://github.com/PainterQubits/Unitful.jl


Ada is very flexible and does let you leave out the runtime check. However, by default the program will be stopped, or at least raise an exception, if a logic error creates an invalid value whose validity you haven't checked. SPARK can be used for a higher degree of value analysis at compile time, because flow analysis is obviously needed in many cases. Volatility can still be an issue, but in most cases Ada knows the inputs, such as for API usage validity.


I would rather say that Ada allows you to remove the runtime check on ranges. This is the opposite of enforcing a range with a home-rolled check of the values; that's what I mean by "won't let you leave it out": without deliberate action, the range will either be proven by static analysis to be in-bounds, or that invariant will be checked at runtime.

Julia has a similar system for array bounds: by default, accessing an array is checked for range restriction, and an error is thrown if the value is out of range. Range checks inside functions (including the system `getindex` function) can be annotated with the @boundscheck macro, and user code which knows an index is in-range can use @inbounds to elide that check. If this is done wrong, Julia will segfault or corrupt memory. In C, if it isn't all done perfectly correctly, same thing: opt-in, not opt-out. Julia is more like Ada in that respect.


I wish Rust had learned from that and also kept range/overflow checks on by default, not only in debug mode.


One of the key principles of Rust is zero-overhead abstractions. The more runtime code they insert the harder it would be to sell the language to C/C++ programmers.


They do keep runtime array indexing checks on. I'm pretty sure I remember reading a comment from Steve Klabnik on here that they only turned off overflow checks in release builds by default because array range checks stay on. With neither over/underflow checking nor array index checks, it's a lot more likely you'd accidentally use an out-of-bounds index.

For example, there was an iMessage vulnerability a while ago that relied on unsigned overflow to create an undersized buffer so that data would be written out of bounds later [0]. If it had been written in Rust (or Ada), with overflow checking on, it would have panicked upon overflow, and with it off, it would have panicked when using an out-of-bounds index. With both off, you get this vulnerability.

[0]: https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-i...


And yet bounds checks were kept. Other compilers for other languages gave the option to disable some or all of the range checks, or to enable thorough validity checks. Not an all-or-nothing 'debug' toggle, but project- or file-level options for runtime checks.

On most code I've worked on, the bounds checks really don't have a large performance impact, especially on modern processors, and it's been a very rare occasion that I've had to remove them, most recently by converting the code to SPARK and proving the absence of runtime errors. If people are OK pleasing the borrow checker, I'm sure they'll enjoy interacting with the prover (which farms most of the work out to Why3 and SMT solvers) to make sure their optimisation is actually safe.


> the code to SPARK and proving the absence of runtime errors

I wrote Rob Pike's simple regex from the "Practice of Programming" in Ada/SPARK and it blew my mind that I actually managed to prove an absence of runtime error (guaranteed no overflow or out of bounds).


That's very interesting!

I have my copy of "Practice of Programming" about to be delivered. Is your regex implementation available somewhere?


I found it on a past comment, for whoever might be interested:

https://github.com/pyjarrett/simple_regex


What I really like about SPARK is the pragmatism. It's not all or nothing: you can call non-SPARK code, and you don't have to go for full functional verification to profit from the SPARK toolset.

edit: if you feel like sharing your code and experience there, I'm pretty sure some people would be interested.


> you don't have to go full functional verification

The amazing thing to me is that Ada code can call SPARK code just fine, and there's crates of SPARK code in Alire that you can use. It's a huge boost of confidence in the quality of a library that you're using when it has some form of verification.


You can also switch runtime checks off in Ada. The difference is that, in Ada, the default is to have them on, while in Rust, they are off by default.


BTW they've been on by default only since around 2010. IIRC AdaCore checked the performance impact on most of their codebases and didn't really see one (probably around the time branch prediction really improved on x86 and the Core architecture made branch misses far less painful).

They also pioneered validity checks injected by the compiler.

For the brave souls, shameless plug: I talk a bit about Ada's runtime checks there: https://blog.adacore.com/running-american-fuzzy-lop-on-your-...


You could potentially do the same thing with a class but it'd be really verbose:

  class Age {
    private Integer value = null;

    public boolean setValue(int value) {
      if (value >= 0 && value <= 150) {
        this.value = value;
        return true;
      }
      return false;
    }

    public Integer getValue() {
      return this.value;
    }
  }

That's approximately what the equivalent would look like in any language that supports classes (any errors are because it was typed on my phone). Being able to do that in 1 line would be much more convenient. That's why most programmers working in those languages just define age as an int and move on.


That's a runtime check which does part of what Ada will do. Ada will also perform compile time checks related to uses of these restricted range variables. SPARK can go even further and detect potential overflow/underflow errors (and other things) forcing you to add preconditions (one resolution) to functions that declare that the values being added (or otherwise operated on) are small enough that overflow won't happen.

Ada will also use a reasonable storage size. If Age is ranged [0,150] then it can place it in a single byte (you can also influence the storage size so you can increase this if you want for some reason), there is less memory overhead than an object. Since it's a range it brings in (automatically) all the arithmetic operations you'd expect.

EDIT: Regarding storage. That's technically implementation defined. GNAT, at least, will use a reasonable storage size by default, you have to override it and ask for something bigger. I had occasion to use Green Hills years ago and it did the same as I recall. I'd expect any other commercial implementation to use an appropriate size and not something absurdly large like 64 bits for a range that easily fits in 8 bits; it would not fit their general market (safety-critical, performance-critical, real-time, and embedded systems). Poor use of memory could cripple the utility of a compiler in these kinds of systems (especially performance-critical, real-time, and embedded).
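If the default ever matters, a sketch of pinning the size with a representation clause:

    type Age is range 0 .. 150;
    for Age'Size use 8;   -- insist on a single byte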


You can’t generally make it a strictly compile-time feature, as far as I know. You do need what amounts to overflow checks, basically.


Yes, but Ada does offer some compile-time checks. That example only does the runtime part of range checking, though it does get compile-time checking for type mismatches (or would in most statically typed OO languages). And SPARK offers more compile-time checks than plain Ada (but it restricts what you can do since it's a subset of Ada: useful within specific packages or for libraries, but not necessarily globally, depending on circumstances).

Ada also offers the option to turn off the checks if you think you know what you're doing (you've proved out your system with SPARK, a lot of testing, or a lot of other analysis).

For instance the class based version won't catch this until runtime (I'll assume some operator overloading for ease of demonstration, syntax is C-ish just because it's more engrained in my fingers):

  // suppose range is 0..127
  my_type val = 200; // runtime error with the class version, compile time in Ada

  my_type a, b, c;
  // somehow a and b get assigned values
  c = a + b; // SPARK can warn about potential overflow here, class based version won't, straight Ada won't


Not only is it one line, but you also don't need to wrap your function argument in a class instantiation; you can just pass the integer into the function you are calling and the compiler takes care of the rest.

And that is a problem with many class-based languages: a class can't work as a primitive; compare Java's int vs Integer.

    program TestType;
    type age = 0..150;

    procedure test(a: age);
    begin
      writeln(a);
    end;

    begin
      test(100);
      test(200);  { out of range: rejected at compile time or caught by the range check }
    end.


This is horrible design. It’s mutable and you're returning Boolean values for whether or not your operation was a success. Use exceptions.

If you want to take the pure public data approach the better approach would probably be:

    record Age(int value) {
        Age {
            if (value < 0 || value > 150)
                throw new IllegalArgumentException("age out of range: " + value);
        }
    }

Ultimately what we need is Valhalla to deliver primitive classes.


> and if you try to pass some other integer type in, it will fail.

But since that value still has to be passed in order to complete the programmer's task, the fix will be to stick in a coercion. Over time, those things will pile up and uglify the program.

Then an age shows up that is 151, because of non-human animals and objects. Oops!


It doesn’t compare, but TypeScript can do this for small ranges, or small sets, very well.


Ada also has decimal values. I don't understand why all languages don't have them. Floats are a PITA when dealing with monetary values.
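For reference, a decimal fixed-point sketch (the delta/digits values are illustrative):

    type Money is delta 0.01 digits 14;   -- exact cents, no binary-float rounding
    Price : Money := 19.99;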


In this example, how do you get an instance of `Age`? E.g. if a user is asked for their age and they type in a number.


Ada has had generics since the start.

https://learn.adacore.com/courses/intro-to-ada/chapters/gene... - Generics in Ada. This is from a good intro course for Ada and the site has several other courses as well.

https://learn.adacore.com/courses/intro-to-ada/chapters/gene... - generic IO portion of the above.

A sketch: you'd need to place these lines in valid locations since it won't run directly like this, but these are the lines you'd use:

  type Age is range 0..150;
  package Age_IO is new Ada.Text_IO.Integer_IO (Age);

  -- declared somewhere
  A : Age;
  -- from user input
  Age_IO.Get(A);
  -- if we "use" the above
  use Age_IO;
  -- now we can skip the prefix
  Get(A);
This would result in a runtime error if the read value is out of range.


C++ can do this very well; no eye bleed is involved.


I’m skeptical that C++ can manage anything as clear and concise as:

    type Age is new Natural range 0..150;


Well, for the end user it'd look like

    using age = in_range<int, 0, 150>;
Not the end of the world


Does this really define a new range-checked type? I've never used in_range, but it seems to be a function [1] that you must call whenever you want to check one variable, while in Pascal all the checks are injected automatically every time you modify a variable of that type.

[1] https://en.cppreference.com/w/cpp/utility/in_range


If I understand the other comments correctly, they just say it's possible to do in C++ with templates/operator overloading/constexpr. It's not an existing type in the standard library. Such a type could work in simple situations but will probably not be as good as a native type in more complex ones.


Calling in_range explicitly at every spot, versus letting the compiler do the work, isn't the same.


That’s a function, not a type.


It's a hypothetical. The actual `in_range` template function doesn't work like that at all and would be annoying to use for this if you tried (it returns true/false so you have to wrap all operations in a conditional). jcelerier is suggesting we'd have something like:

  template<typename T, int min, int max> // probably not int for min/max, but whatever
  class in_range {
    ...
  };
which would be instantiated with `using <my_type_name> = ...;` and would have all the necessary operator overloads and checks. Still, it's only bringing in the runtime, not compile time, checks and can't be "turned off" (well, maybe take a fourth boolean value and have two code paths everywhere that can be optimized down to one if you want to turn off the checks) like it can in Ada (whether that's a good idea or not depends on how well you've proved out your code). It's also introducing all the overhead of an object which isn't necessary in Ada.


> Still, it's only bringing in the runtime, not compile time, checks

it's doable to have both compile-time and run-time checks:

https://gcc.godbolt.org/z/oT3adre86

some compile-time interval calculus library would enable more checks and does not seem much harder to implement than a run-time one.


A little bigger, more complete example would help; this is something I have never seen in C++.

(note C++ is a big language that changes a lot)


For those of us who don't know C++, would you care to provide a code sample (or a link to one) demonstrating what it would look like?


   #include <stdexcept>
   #include <string>   // std::to_string

   template <int min, int max>
   class Range
   {
   public:
       Range(int i): i(i)
       {
           if (i < min)
               throw std::runtime_error("value lower than minimum " + std::to_string(min));
           if (i > max)
               throw std::runtime_error("value higher than maximum " + std::to_string(max));
       }
       int to_int() const { return i; };
   private:
       int i;
   };

   using Month = Range<1, 12>;

   int main()
   {
       Month month(-1);
   }
    terminate called after throwing an instance of 'std::runtime_error'
      what():  value lower than minimum 1


My favorite thing about Ada is how it uses modules ("packages") for encapsulation and not classes. This separates it from a lot of other languages with object-oriented programming: you can split up and expose externally visible behavior in modules and submodules without making type internals visible, while still letting other parts of the module's (and submodules') implementation look inside.
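A minimal sketch of what that looks like (spec only, names made up; the body would live separately):

    package Counters is
       type Counter is private;                  -- name is visible, internals are not
       procedure Increment (C : in out Counter);
    private
       type Counter is record
          Value : Natural := 0;
       end record;
    end Counters;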


Most languages with OOP also do modules, all the way back to when Simula and Smalltalk started to influence the Mesa/Modula-2 derived languages.

Additionally, in a somewhat naive way, classes can be seen as extensible modules, something that is less needed when generics are available than in languages that don't support them.

When using something like Standard ML functors, classes aren't really needed.


Yes, they are "officially" ML (no, not that ML) style.

https://ada-lang.io/docs/learn/why-ada#feature-overview


The explicitly specialised generic packages are nice too
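E.g. a standard-library instantiation:

    with Ada.Containers.Vectors;
    package Integer_Vectors is new Ada.Containers.Vectors
      (Index_Type => Positive, Element_Type => Integer);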


It looks like Ada 2022 has a way to track at compile time whether functions block or not. Seems cool for async programming.
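The aspect in question looks roughly like this (Ada 2022; a sketch, since compiler support is still thin):

    procedure Poll (Ready : out Boolean) with Nonblocking;
    -- calls to potentially blocking operations inside Poll become illegal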

http://www.ada-auth.org/standards/22over/html/Ov22-2-2.html


I don't think any compiler implements that yet.


Do you believe it was designed and standardized as a reasonably implementable feature?


I'm far from an expert on compiler development, but I think it's just a matter of walking the AST to ensure that there aren't any calls to procedures with the Global or Nonblocking aspects inside a parallel block.

The bigger issue is that the `parallel do` and `parallel for` blocks added in Ada 2022 [1] haven't been implemented and as far as I know, nobody's working on it.
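For reference, the syntax in question looks like this (Ada 2022, per the overview below; the loop body is illustrative):

    parallel for I in Data'Range loop
       Data (I) := Data (I) * 2;
    end loop;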

I suspect that if we ever do get parallel support, it'll come from the GNAT-LLVM project [2], rather than GNAT-GCC. In the meantime, there's a CUDA compiler [3].

[1] http://www.ada-auth.org/standards/22over/Ada2022-Overview.pd...

[2] https://github.com/AdaCore/gnat-llvm

[3] https://github.com/AdaCore/cuda


Only recently have compilers outside AdaCore started catching up to Ada 2012 and SPARK, so this will take a while.


Ada 95 was the first language taught in my college CompSci program back in 1999 (this might have even been my textbook).

Everyone would complain about Ada because no one had heard of it before, but looking back it was the right call. A great language to learn the basics on and not get into trouble.

They’ve since moved to Java and now probably something else.


I'm guessing Python.



