But no one said that having a type system disqualifies you from writing tests (and C++'s improvements over C aren't only about the type system).
It does, however, remove the need for a lot of the tests that weaker type systems require, mainly because you can make invalid data unrepresentable.
To give a few examples:
1/ In C++ you can write a type positive_int such that invalid values (negative ints) are not representable at all: either the operation a - b gives a correct, positive result (3 - 2 == 1), or, for something like 2 - 3, you get an error value that you can handle. You can also design things so that the user only has to provide information declaratively.
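A minimal sketch of what such a positive_int might look like (my own illustration, not any specific library):

```cpp
#include <cstdint>
#include <optional>

// Sketch of a positive_int: negative values are unrepresentable by
// construction, and subtraction reports failure instead of underflowing.
class positive_int {
  std::uint64_t v;
  explicit constexpr positive_int(std::uint64_t x) : v{x} {}

public:
  // The only way to build one is from an unsigned value, so a
  // "negative positive_int" cannot exist.
  static constexpr positive_int of(std::uint64_t x) { return positive_int{x}; }
  constexpr std::uint64_t value() const { return v; }

  // a - b either yields a valid positive_int or an empty optional:
  // the error is a value the caller can process, not a silent wraparound.
  friend constexpr std::optional<positive_int> operator-(positive_int a,
                                                         positive_int b) {
    if (a.v < b.v) return std::nullopt;  // e.g. 2 - 3: reportable error
    return positive_int{a.v - b.v};      // e.g. 3 - 2 == 1: correct result
  }
};
```

Every call site that would otherwise need a "what if this goes negative?" test is now covered by the type itself.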
2/ I'm working on an API for defining dataflow nodes for multimedia applications which tries to leave zero margin for error: the user simply defines the inputs and outputs as types, which the C++ type machinery is able to digest and present back to the user:
https://github.com/jcelerier/score-simple-api-2/blob/main/Si...
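As a hypothetical illustration of the idea (the names and layout here are mine, not the actual API from the linked repository): the node author only declares typed inputs and outputs, and a generic host can reflect over them to build the UI and wire the dataflow.

```cpp
#include <string_view>

// Each port is a distinct type carrying its name and its payload type.
struct audio_in  { static constexpr std::string_view name = "In";   const float* samples{}; };
struct audio_out { static constexpr std::string_view name = "Out";  float* samples{}; };
struct gain_knob { static constexpr std::string_view name = "Gain"; float value = 1.f; };

// The node is purely declarative: inputs and outputs are just members.
struct gain_node {
  struct { audio_in audio; gain_knob gain; } inputs;
  struct { audio_out audio; } outputs;

  void run(int frames) {
    // No casts anywhere: the compiler knows every port's exact type.
    for (int i = 0; i < frames; ++i)
      outputs.audio.samples[i] = inputs.audio.samples[i] * inputs.gain.value;
  }
};
```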
In contrast, in C, one has to cast things left and right because the only tool available is void*; see for instance
https://cycling74.com/sdk/max-sdk-7.3.3/html/chapter_msgatta...
or
https://cycling74.com/sdk/max-sdk-7.3.3/html/chapter_msp_adv...
In both cases we are defining a dynamic dataflow node with various inputs/outputs (for instance an audio input and output, plus a control), which is then displayed in a user interface (thus things have to be reflected in some way). But in the C version, the onus is on the programmer to:
- cast the inputs/outputs to the correct types according to the object's specification when using them
- cast the values contained in the inputs/outputs to the correct types
- access the correct inputs/outputs in the arrays
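A C-style sketch of that burden (hypothetical, not the actual Max/MSP SDK API): everything is void*, so the convention linking each array slot to its real type lives only in the programmer's head.

```cpp
// Hypothetical C-style node: the meaning of each slot is pure convention.
struct c_node {
  void* inlets[2];   // [0] = float* samples, [1] = float* gain -- by convention only
  void* outlets[1];  // [0] = float* samples
};

void c_node_run(c_node* n, int frames) {
  // A wrong cast, a wrong index, or a wrong element type all compile silently.
  const float* in   = (const float*)n->inlets[0];
  float        gain = *(const float*)n->inlets[1];
  float*       out  = (float*)n->outlets[0];
  for (int i = 0; i < frames; ++i)
    out[i] = in[i] * gain;
}
```

Swapping the two inlet indices, or casting inlet 1 to int*, would compile without a warning; only a test (or a crash) would catch it.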
There is just so much more opportunity for error, all of which disappears in C++.
3/ Another example that you can see in my code: the make_uuid function. It takes a UUID in string form, converts it to binary (uint8_t[16]) at compile time, and also checks that the UUID is a valid one.
Every time I use this function, that is a test I do not have to write: I know that all UUIDs in my system are valid, because I use a "uuid" type which can only be created this way by the user.
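A rough sketch of how such a make_uuid could work (my own approximation, not the author's exact code): in a constexpr context the throw becomes a compile error, so a malformed UUID literal cannot even build.

```cpp
#include <array>
#include <cstdint>
#include <stdexcept>
#include <string_view>

constexpr std::uint8_t hex_nibble(char c) {
  if (c >= '0' && c <= '9') return std::uint8_t(c - '0');
  if (c >= 'a' && c <= 'f') return std::uint8_t(c - 'a' + 10);
  if (c >= 'A' && c <= 'F') return std::uint8_t(c - 'A' + 10);
  throw std::invalid_argument("bad hex digit in UUID");
}

// Parse "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" into 16 bytes.
// Evaluated at compile time, any validity failure is a compile error.
constexpr std::array<std::uint8_t, 16> make_uuid(std::string_view s) {
  if (s.size() != 36 || s[8] != '-' || s[13] != '-' || s[18] != '-' ||
      s[23] != '-')
    throw std::invalid_argument("malformed UUID");
  std::array<std::uint8_t, 16> out{};
  std::size_t o = 0;
  for (std::size_t i = 0; i < 36;) {
    if (s[i] == '-') { ++i; continue; }
    out[o++] = std::uint8_t((hex_nibble(s[i]) << 4) | hex_nibble(s[i + 1]));
    i += 2;
  }
  return out;
}
```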