Naturally you first implement your algorithm together with a correctness proof inside the Turing-complete type system. After that, writing the real implementation can be left as an exercise for the compiler :)
I'm sure that in 1986, before ANSI C was standardized, someone victoriously typed the same thing at his tty in a discussion about declaring parameter types in function declarations, rather than doing it à la K&R, where functions are called cowboy-style with just their name and no type checking at all.
Just because a system doesn't catch all bugs doesn't make it useless. For me, even if C++'s type system had caught only one bug, it would be worth it (and in most of my code, which mainly deals with fairly specific domain objects, the compiler saves me from myself pretty much daily).
In any case, I'd likely have something like this somewhere:
constexpr double negate(double x) {
    return -x;
}

// Example implementation for the sake of the comment;
// there are much more exhaustive property-testing libs out there.
constexpr bool sample(auto func) { // C++20 abbreviated function template
    bool ok = true;
    for (double x = -1000.; x < 1000.; x += 1.)
        ok &= func(x);
    return ok;
}

static_assert(sample([] (double x) { return negate(x) == -x; }));
This way I'd get a compile error if my cat walks on my keyboard and inadvertently removes the - (also a relatively common occurrence).
You're effectively just changing where you write your unit tests. So it's hard to agree that the type system is saving you from anything, when it's really just writing tests.
But no one said that using a type system exempts you from writing tests (and C++'s improvements over C aren't only about the type system).
It does, however, remove the need for a lot of the tests that weaker type systems require, mainly because you can make invalid data unrepresentable.
To give a few examples:
1/ In C++ you can write a type positive_int such that invalid values (negative ints) are not representable at all: either the operation a - b gives a correct, positive result (3 - 2 == 1), or you get an error that you can handle, for instance if you compute 2 - 3 (see the sketch below).
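A minimal sketch of what such a type could look like (positive_int and its API here are made up for illustration, not taken from a real library):

#include <optional>
#include <stdexcept>

// Illustrative sketch: a wrapper that can only ever hold values >= 0.
class positive_int {
    int value;
    explicit constexpr positive_int(int v) : value(v) {}
public:
    // The only way to construct one; rejects negative input.
    static constexpr positive_int make(int v) {
        if (v < 0) throw std::invalid_argument("negative");
        return positive_int(v);
    }
    constexpr int get() const { return value; }

    // Subtraction can fail, so the result is optional:
    // either a valid positive_int, or an error the caller must handle.
    friend constexpr std::optional<positive_int>
    operator-(positive_int a, positive_int b) {
        if (a.value < b.value) return std::nullopt;
        return positive_int(a.value - b.value);
    }
};

static_assert((positive_int::make(3) - positive_int::make(2))->get() == 1);
static_assert(!(positive_int::make(2) - positive_int::make(3)).has_value());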
You can also write things in such a way that the user only has to provide things declaratively.
2/ I'm working on an API for defining dataflow nodes for multimedia applications which tries to leave zero margin for error: the user simply has to define the inputs and outputs as types, which the C++ type machinery is able to digest and present back to the user.
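The code the comment points at isn't quoted here, so as a purely illustrative sketch (every name below is hypothetical, not the author's actual API), the declarative style could look something like this:

#include <string_view>

// Each port is a plain type carrying its own metadata; a framework can
// then walk the struct at compile time to enumerate and display ports.
struct audio_in  { static constexpr std::string_view name = "In";   const float* samples{}; };
struct audio_out { static constexpr std::string_view name = "Out";  float* samples{}; };
struct gain_knob { static constexpr std::string_view name = "Gain"; float value = 1.f; };

struct gain_node {
    static constexpr std::string_view name = "Gain";

    struct { audio_in audio; gain_knob gain; } inputs;
    struct { audio_out audio; } outputs;

    // Processing reads and writes through correctly-typed members;
    // no casts, no index bookkeeping.
    void operator()(int frames) {
        for (int i = 0; i < frames; ++i)
            outputs.audio.samples[i] = inputs.audio.samples[i] * inputs.gain.value;
    }
};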
In both the C++ and the C versions we are defining a dynamic dataflow node with various inputs / outputs (for instance an audio input and output, plus a control), which is then displayed in a user interface (thus things have to be reflected in some way).
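As a purely hypothetical illustration of the C counterpart (again, none of these names come from the actual code):

/* Untyped ports in arrays; the specification lives only in documentation. */
typedef struct port { void* data; } port;

typedef struct node {
    const char* name;
    port* inputs;  int num_inputs;   /* [0]: audio, [1]: gain -- by convention only */
    port* outputs; int num_outputs;  /* [0]: audio */
    void (*process)(struct node* self, int frames);
} node;

static void gain_process(node* self, int frames) {
    /* The programmer has to get every cast and every index right: */
    const float* in   = (const float*)self->inputs[0].data;
    float        gain = *(const float*)self->inputs[1].data;
    float*       out  = (float*)self->outputs[0].data;
    for (int i = 0; i < frames; ++i)
        out[i] = in[i] * gain;
}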
But in the C version, the onus is on the programmer to:
- cast the inputs / outputs to the correct types according to the object's specification when using them
- cast the values contained in the inputs / outputs to the correct types
- access the correct inputs / outputs in the arrays
There is just so much more opportunity for error, all of which disappears in C++.
3/ Another example that you can see in my code: the make_uuid function. It takes a UUID in string form and converts it to binary uint8_t[16] at compile time; it also checks that the UUID is valid.
Every time I use this function, that's a test I don't have to write: I know that all UUIDs in my system are valid, because I use a "uuid" type which the user can only create this way.
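A minimal sketch of how such a function can work (this is not the author's actual implementation, just an illustration of the technique):

#include <array>
#include <cstdint>
#include <stdexcept>
#include <string_view>

struct uuid { std::array<std::uint8_t, 16> bytes; };

constexpr std::uint8_t hex_digit(char c) {
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    throw std::invalid_argument("not a hex digit"); // compile error in constexpr use
}

// Parses "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" and rejects malformed input;
// in a constant-evaluated context the throw becomes a compile error.
constexpr uuid make_uuid(std::string_view s) {
    if (s.size() != 36 || s[8] != '-' || s[13] != '-' || s[18] != '-' || s[23] != '-')
        throw std::invalid_argument("malformed UUID");
    uuid u{};
    for (std::size_t i = 0, b = 0; i < 36; ++i) {
        if (s[i] == '-') continue;
        u.bytes[b / 2] = u.bytes[b / 2] * 16 + hex_digit(s[i]);
        ++b;
    }
    return u;
}

// Valid at compile time; an invalid string would not compile:
constexpr uuid id = make_uuid("123e4567-e89b-12d3-a456-426614174000");
static_assert(id.bytes[0] == 0x12);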
double negate(double x) { return x; }
?