In numerical analysis, "Hexadecimal float with an exponent" is not an obscure feature, it's a really nice one! If you want to communicate to somebody an exact number that your program has output, you need to either tell them the decimal number to enough digits + the number of digits (i.e., "Float32(0.1)", which is distinct from "Float64(0.1)"), or you can tell them the same number in full precision in binary, in which case the floating-point standard guarantees that that number is exactly correct and does not depend on how you interpret it. It's really nice for testing numerical code, especially with automated reproducible tests. Completely unambiguous, and I wish more languages had that (I saw the feature in Julia first).
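For example, C has had this since C99, both as literals and via printf's %a conversion:
#include <stdio.h>
int main(void)
{
    double x = 0x1.999999999999ap-4; /* hexadecimal float literal: exactly the double nearest to 0.1 */
    printf("%a\n", x);               /* prints the value back in hex-float form, completely unambiguous */
    printf("%.17g\n", x);            /* the decimal round trip needs up to 17 significant digits */
    return 0;
}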
I wish Javascript, etc. had hexadecimal floats. It’s annoying to worry about whether different implementations might parse your numbers differently, worrying about whether you need to write down 15 or 17 decimal digits, ...
Often the numbers (e.g. coefficients of some degree 10 polynomial approximation of a special function) are not intended to be human-readable anyway. Such values were typically computed in binary in the first place, and the only place they ever need to be decimal is when written into the code, where they will be immediately reconverted to binary numbers for use.
I mean, it's not too difficult to add them. You can parse, shove your result into a data view, Uint32Array, or whatever, then turn that into a Float64Array.
As the parent already said, the nice thing about hexadecimal floating-point is that the standard MANDATES that you get exactly the float/double you have uniquely described. The assertion “you can uniquely identify all floats by printing them with printf("%1.8e")” on the other hand relies on the compiler using round-to-nearest for conversions between binary and decimal, which the standard does not mandate and which, even when the compiler documents the choice, is sophisticated enough that the compiler/printf/strtod may get it wrong for difficult inputs:
The preprocessor trick of passing function macros as parameters is not that obscure. I have seen it used and I've used it myself. It is very useful when you have a list of static "things" that you need to operate on.
Say I have a static list of names and I would like to declare some struct type for each name. I also would like to create variables of these structs at some point, and I would always do so for the entire block of names. You could do something like this:
#define apply(fn) \
    fn(name1) \
    fn(name2) \
    fn(name3) \
    ... \
    fn(nameN)
#define make_struct(name) struct name##_t { ... };
#define make_variable_of(name) struct name##_t name;
...
apply(make_struct) // This defines all the structs.
void some_function(...) {
    apply(make_variable_of); // And this defines one variable of each type.
}
Yes, it is not pretty (it is the C preprocessor after all), but it can be very useful and clean.
I would call that the X macro pattern, but the wiki article's version doesn't pass the `fn` in as an argument. Not sure if that's important... https://en.wikipedia.org/wiki/X_Macro
Maybe at some point macros could not be passed as arguments? I honestly don’t know. Passing it as a parameter avoids all that define/undefine business.
Most of these are due to the cruft added in C99 and later.
Compile-time trees are possible without compound literals.
More than twenty years ago, I made a hyper-linked help screen system for a GUI app whose content was all statically declared C structures with pointers to each other.
At file scope, you can make circular structures, thanks to tentative definitions, which can forward-declare the existence of a name, whose initializer can be given later.
You can't do this purely declaratively in a block scope, because the tentative definition mechanism is lacking.
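A minimal sketch of the file-scope version (the struct and field names are just for illustration):
struct node { const char *text; struct node *next; };
struct node b;                    /* tentative definition: forward-declares b */
struct node a = { "first",  &b }; /* can already take b's address */
struct node b = { "second", &a }; /* the real definition closes the cycle */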
About macros used for include headers, those can be evil. A few years ago I had this:
#include ALLOCA_H /* config system decides header name */
Broke on Musl. Why? ALLOCA_H expanded to <alloca.h>. But unlike a hard-coded #include <alloca.h>, this <alloca.h> is just a token sequence that is itself scanned for more macro replacements: it consists of the tokens {<}{alloca}{.}{h}{>}. The <stdlib.h> on Musl defines an alloca macro (an object-like one, something like #define alloca __builtin_alloca, rather than a function-like one), and that got replaced inside <alloca.h>, resulting in a garbage header name.
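A stripped-down sketch of the failure mode (the two defines stand in for what the libc header and the config system provided):
#define alloca __builtin_alloca /* roughly what the Musl header does, object-like */
#define ALLOCA_H <alloca.h>     /* what the config system chose */
#include ALLOCA_H               /* after rescanning, the tokens become <__builtin_alloca.h>: garbage */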
Compound literals are fun. You can use them to implement labelled and optional parameters to functions too:
#include <stdio.h>
struct args { int a; char *b; };
#define fn(...) (fn_((struct args){__VA_ARGS__}))
void fn_ (struct args args) {
    printf ("a = %d, b = %s\n", args.a, args.b);
}
int main(void) {
    fn (0, "test");
    fn (1); // called with b == NULL
    fn (.b = "hello", .a = 2);
}
(As written this has a subtle catch that fn() passes undefined values, but you can get around that by adding an extra hidden struct field which is always zero).
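One way to read that workaround (a sketch; the dummy_ field name is made up): put the always-zero field first and let the macro supply it, so even an empty fn() yields a valid, fully zero-initialized compound literal:
struct args { int dummy_; int a; char *b; };
#define fn(...) (fn_((struct args){ 0, __VA_ARGS__ }))
/* fn()         -> (struct args){ 0, }          : a == 0, b == NULL */
/* fn(1)        -> (struct args){ 0, 1 }        : a == 1, b == NULL */
/* fn(.b = "x") -> (struct args){ 0, .b = "x" } : a == 0            */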
Yes; the "extern" is potentially confusing when the definition is located in the same file below. They usually inform the reader of the code to look in some other file for the definition, and usually appear in header files.
I'm not certain, but the `extern` variant probably doesn't reserve space at compile-time; it just says "the linker will know where to find these". So resolving those symbols might need to wait until link-time. The tentative definitions probably do reserve space (and hence an immediately-known address), and the later true definitions just supply the initial value to put in that space.
The order of evaluation among the initializers is unspecified, which makes the result of code like that unpredictable.
We don't have to use mixed-up designated initializers to run aground. Just simply:
{ int a = 0;
int x[2] = { a++, a++ }; }
This was also new in C99 (not to mention the struct literal syntax); in C90, run-time values couldn't be used for initializing aggregate members, so the issue wouldn't arise.
Code like:
{ int a = 0;
struct foo f = { a, a }; }
was supported in the GNU C dialect of C90 before C99 and of course is also a long-time C++ feature.
I was reading the source code for a NES assembler written in pre-C99 C, and there was an odd C feature used in it that I haven't really seen anywhere else.
It was before C had built-in booleans and the author had defined their own, but true was:
void * true_ptr = &true_ptr;
true_ptr is a pointer to itself. So however many times you dereference it, you still end up with the same non-null ("true") value.
I still think that it's neat that, even with ASLR, you have an address at compile time that you know won't collide with address space of malloc results, or the address space of your stack.
Also you can declare the pointer as const and the value it points to as const and, if your kernel faults on writing to readonly memory pages, you get a buggier version of a NULL pointer that only segfaults on write.
Also it takes a second to figure out why the position of the const matters even though the pointer's value is the address of the pointer itself, and why only one of these segfaults on write:
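Presumably the two declarations looked something like this (the self-referential initializers are my guess from the surrounding thread):
#include <stddef.h>
const void *const_pointer = &const_pointer; /* non-const variable holding a pointer-to-const: lives in .data */
void *const const_value   = &const_value;   /* const variable: lives in .rodata; the initializer drops a const */
int main(void)
{
    *(void **)const_pointer = NULL; /* writes to const_pointer itself, which is writable: no fault */
    *(void **)const_value   = NULL; /* writes to const_value itself, in read-only memory: usually faults */
}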
My (admittedly naive) understanding of the ordeal leads me to believe that the first would not segfault but the second will, since declaring it as a const pointer means it can be placed in read-only memory.
Testing it on my machine with the following code seems to validate this hypothesis.
In the first version, const_pointer is a non-const variable (so located in .data) holding a pointer to potentially constant data—you can't modify the data through that pointer without a typecast, but the actual location in memory may be mutable. That's why you don't get a segfault when you cast away the const and modify the data—the destination (const_pointer) is not const even though the const-qualified pointer would allow it to be.
In the second case the const_value variable itself is const-qualified and thus located in .rodata, but the pointer itself is not const-qualified so nothing prevents you from attempting to modify the data through that pointer. This is why you get a compiler warning about discarding the 'const' qualifier in the initialization. Since const_value is in .rodata, writing to it through the pointer causes a segfault.
As Sean1708 pointed out, it's more obvious what is going on if you place the 'const' qualifier immediately before the thing it's modifying, which is either the pointer operator or the variable name, never the type itself:
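Concretely (an illustrative snippet):
int x = 0;
int const *p = &x; /* const right before the '*': pointer to const int */
int *const q = &x; /* const right before the name: the pointer itself is const */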
What would something like "const int" even mean on its own, anyway? There is no such thing as a mutable integer. It's the memory location holding the integer which may be either mutable or immutable.
My assembly knowledge is limited, but it looks like both const_pointer and const_value get put into .rodata (read-only data). In both cases you're trying to change what is at the memory location that the pointer points to, in the first case it's the pointer that's in .rodata so you can change what it points to, but in the second case it's the value that's in .rodata so you can't change it.
Edit: Actually I don't think the pointer is put anywhere, rather its value is stored in .data (non-read-only data) so it can be mutated without issue. Again though, my assembly isn't amazing.
Related to trigraphs are the alternative logical operator keywords like `and` and `or`. I'm surprised people don't use them more often because they're nicer to read than && and ||. In C you must #include <iso646.h> to get them (they're macros there); in C++ they're built-in alternative tokens.
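For example, in C:
#include <iso646.h>
#include <stdio.h>
int main(void)
{
    int a = 1, b = 0;
    if (a and not b)   /* same as: if (a && !b) */
        printf("readable\n");
    return 0;
}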
Hmmm... I would not prefer using "and" and "or" because that syntactic sugar obscures whether the bitwise or logical operations are intended. You get really used to reading && and || as "and" and "or" in your head after the first two decades of C programming : )
After long debates (since IBM needs them on EBCDIC machines, like zSeries) trigraphs were dropped from C++ in C++17, so if you use them in a header this might cause incompatibilities.
The story I heard was that the trigraphs were added during standardization because ISO 646, the international version of ASCII, did not require the characters [ \ ] { | }.
Fun fact: ISO 646 is also the reason that IRC allows these characters in nicknames. IRC was created in Finland, and the Finnish national standard had placed the letters Ä Ö Å ä ö å at those code points.
Edit: That doesn't explain the trigraphs for # ^ ~. I'm guessing some EBCDIC variants lacked those. Or some other computer vendor on the committee still supported some other legacy character set.
I had to use them once on a truly ancient amber screen serial terminal that lacked { and } on the keyboard. That was back in the 90s and the terminal was completely obsolete at the time but I needed to write a tiny hack program to solve an immediate problem. I remember only knowing about them in the first place from an odd compiler warning I'd seen on a different program.
They weren't in K&R. I recall a story about a standardization meeting, on the way to ANSI C, where representatives from some European(?) country that didn't have some of the necessary punctuation on their country-specific keyboards, essentially snuck the trigraphs into the spec when the other representatives weren't looking.
A more obscure feature is the uncommon usage of the comma operator. We often use the comma operator in variable declarations and in for loops, but it can also be used in any expression.
For instance, the next line is a valid C construct:
return a, b, c;
This is particularly useful for setting a variable when returning after an error.
if (ret = io(x))
    return errno = 10, -1;
The possibilities are endless. Another example:
if (x = 1, y = 2, x < 3)
...
But the comma operator really shines when used in conjunction with macros.
Also, the comma operator forces a left to right evaluation order.
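For instance, a function-like macro can use it to squeeze a side effect and a value into one expression (TRACE is a made-up name):
#include <stdio.h>
#define TRACE(x) (printf("evaluating %s\n", #x), (x))
int main(void)
{
    int y = TRACE(2 + 2) * 10; /* prints the trace line, then y == 40 */
    printf("y = %d\n", y);
    return 0;
}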
Surprisingly, you can override it in C++. I haven't seen anyone do it, but you can. If you find a good, productive override for the comma operator, please post about it.
It's not the comma operator that is being noted as obscure, but particular usage of it, such as complex return statements that set a value and return another.
So, are you making the case that the examples present are common in JS as well? Because the whole point of the comment was in the uncommon usage of comma operator, as stated in the first sentence.
And to be clear, even if it is common in JS, that still doesn't reduce the usefulness of the original comment, because we aren't talking about JS, we're talking about C, and how common the features are in the C ecosystem. There are plenty of obscure, oft-ignored features in one language that are common in another. For example, taking someone's interesting C macro that allows some level of equivalence to functional map and apply and saying "that isn't very obscure, even Lisp has that" is missing the point.
Not really. The way it's usually introduced is that you get the same "reference" both ways. The fact that it's literally equivalent, and especially that there's no pointer type requirement on the left-hand-side, with the consequence of allowing ridiculous code like 2[array], is pretty obscure. Even more so because the equivalence doesn't work that way in C++ -- in general, features of C that aren't available in C++ tend to be not as widely known.
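A quick demonstration of the equivalence:
#include <stdio.h>
int main(void)
{
    int array[] = { 10, 20, 30 };
    /* array[2] is defined as *(array + 2), and addition commutes, so 2[array] names the same element */
    printf("%d %d\n", array[2], 2[array]); /* prints: 30 30 */
    return 0;
}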
Only for the most primitive types, though. For anything else, operator[] is used in C++ instead, without an operator+ fallback. So for example, if your array is a std::array instead of a C-style array, saying 1[array] will not work.
Yeah, that's how pointer arithmetic works. Adding an int and a pointer assumes an array and gives you a pointer to the nth element in that direction (which had better exist, for your sake). char pointers can be used for byte offsets if a byte is a char on your platform (and it's been quite a while since that wasn't true).
You can also subtract two pointers into the same array and get the distance (in elements, not bytes): &a[b - a] == b.
As the author continues, it becomes mildly weird only when you realize that you can write b[a] and it just works (tm). I've seen students saying the compiler somehow checks that the "a" is arrayish so the swapped version doesn't make sense.
Yeah, I did know 3 of them[1] but the rest were completely new to me! Not that I’m an expert or anything, but I tend to enjoy reading about obscure C features. That sizeof can have side effects is... a bit crazy although the multiple compatible function declarations are horrifying.
[1] Array designators, Preprocessor is a functional language and a[b] is a syntactic sugar.
Only the first two expressions on the right are null pointer constants (an integral constant expression with the value 0, optionally cast to void *), which can be used to initialize all pointer variables, including function pointers. The last one is merely a null pointer (to void), which can't be implicitly converted to a pointer to a function.
C++ has stricter rules for null pointer constants, and thus only the first version is valid C++.
Although calling the preprocessor "functional" is being too pleasant. The C preprocessor was always a text substitution system, so macro as parameter is not that "obscure". Of course, I may have missed something subtle in the example.
It's also not clear how to use that preprocessor example.
"If the size expression of a VLA has side effects, they are guaranteed to be produced except when it is a part of a sizeof expression whose result doesn't depend on it."
(from: https://en.cppreference.com/w/c/language/array)
In the example this is not a problem, because int[printf()] means you must call printf() to get the return value and determine the size of the array.
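A small demonstration (GCC and Clang both evaluate the size expression here, since the operand is a VLA type):
#include <stdio.h>
int main(void)
{
    size_t s = sizeof (int[printf("side effect!\n")]); /* printf runs; its return value is the element count */
    printf("%zu\n", s);                                /* that count times sizeof(int) */
    return 0;
}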
"If the type of the operand is a variable length array
type, the operand is evaluated; otherwise, the operand is not evaluated and the result is an integer constant."
But what does it mean to evaluate the operand?
If the operand is the name of an object:
int vla[n];
sizeof n;
what does it mean to evaluate `n`? Logically, evaluating it should access the values of its elements (since there's no array-to-pointer conversion in this context), but that's obviously not what was intended.
And what about this:
sizeof (int[n])
What does it mean to "evaluate" a type name?
It's not much of a problem in practice, but it's difficult to come up with a consistent interpretation of the wording in the standard.
> If the operand is the name of an object [...] what does it mean to evaluate `n`?
When you say "n", syntactically, you have a primary expression that is an identifier. So you follow the rules for evaluating an identifier, which will produce the value. C doesn't describe it very well, but the value of the expression is the value of the object. In terms of how it is implemented in actual compilers, this would mean issuing a load of the memory location, which is dead unless `n` is a volatile variable.
Sorry, in the first example I meant to write (adding a declaration and initialization for n):
int n = 42;
int vla[n];
sizeof vla;
not `sizeof n`. (It doesn't look like I can edit a comment.)
Logically, evaluating the expression `vla` would mean reading the contents of the array object, which means reading the value of each of its elements. But there's clearly no need to do that to determine its size -- and if you actually did that, you'd have undefined behavior since the elements are uninitialized. (There are very few cases where the value of an array object is evaluated, since in most cases an array expression is implicitly converted to a pointer expression.)
In fact the declaration `int vla[n];` will cause the compiler to create an anonymous object, associated with the array type, initialized to `n` or `n * sizeof (int)`. Evaluating `sizeof vla` only requires reading that anonymous object, not reading the array object. The problem is that the standard doesn't express this clearly or correctly.
But you can get more fun with variably modified array parameters instead of just constants like 42. For example,
double sum_a_weird_shaped_matrix(int n, double array[n][3*n]) {
    double total = 0;
    for (int x = 0; x < n; ++x) {
        for (int y = 0; y < 3 * n; ++y) {
            total += array[x][y];
        }
    }
    return total;
}
has a variable and a more complicated expression in those positions.
But those variably modified parameters can have arbitrary expressions in them, like
int last(size_t len, int array[restrict static (printf("getting the last element of an array of %zu ints\n", len), len--)]) {
    return array[len];
}
C++ denies us this particular joy which could have made function overload resolution even more fun.
Another neat note IIRC is that array parameter sizes don't actually do anything; they're just there for documentation purposes, and the parameter is treated as a raw pointer.
That's what the static keyword means in those array declarators.
void func(int x[static 10]);
must be called with an argument that is a pointer to the start of a big enough array of int. I can't get recent GCC or Clang to warn on violations of this, though.
The wording in the C FAQ is that arrays “decay” into pointers when you pass them to functions. Which they explain as the reason why you can’t know the size of a passed array (at least in standard C.)
The C FAQ is pretty old though, I’ve always wondered how much of that advice changed in C99/C11... from cursory googling things don’t seem to have changed much.
It's funny that K&R chose to have arrays "decay" to pointers, but to allow structs to be passed by value. Thus you can actually pass arrays by value if and only if you wrap them in a struct:
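Something like this (a sketch):
struct wrapped { int data[4]; };
void clobber(struct wrapped w) { w.data[0] = 99; } /* operates on a copy of the whole array */
int main(void)
{
    struct wrapped w = { { 1, 2, 3, 4 } };
    clobber(w);
    /* w.data[0] is still 1 here: the array travelled by value inside the struct */
}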
I figured that having arrays decay into pointers is one of those features that got grandfathered in because changing it would have broken the dozen or so C programs that existed at the time. It's a real shame too, because a working sizeof() for strings could have avoided a LOT of C exploits over the years.
I took the code sample of the article from a snippet intended to be as dirty as possible, and removed most madness out of it. But yeah you're right :D
I just wanted to avoid adding one more item in that bullet list.
A nice feature, though I guess more of a linker one: if you declare a function as __weak__, you can check it at runtime for == NULL to determine whether the application was built with the function defined.
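A sketch using the GCC/Clang attribute spelling (optional_feature is a made-up name):
#include <stdio.h>
void optional_feature(void) __attribute__((weak)); /* weak reference: may be left undefined */
int main(void)
{
    if (optional_feature)  /* resolves to NULL unless some object file in the link defines it */
        optional_feature();
    else
        printf("built without optional_feature\n");
    return 0;
}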
True, however, it is precisely the stupid linker tricks that make C such an interesting and powerful language nowadays. Weak symbols. Interposition (LD_PRELOAD). dlopen() and friends. Filters. Direct binding / versioned symbols. ELF semantics in general (which make the use of one flat symbol namespace safer).
This was C++ and not C, but it is a preprocessor pitfall.
I needed to compare an older and a newer version of some file from the RCS, so I saved temporary copies named "new" and "old". diff told me what I needed to know, but I failed to delete those temp files.
Hours later I typed "make" to build my program and got all sorts of errors deeply nested in some library function. Did someone misconfigure the server I was on? OK, maybe it is an incremental build problem? etc. It took too long to figure out the problem.
It turns out that during compilation, as one of the library .h files was being scanned, it contained #include <new>, which picked up the junk file in my working directory instead of the C++ standard header.
I don't like to think of it as compile-time trees. It basically only allows you to construct complicated structures but doesn't allow you to examine them at compile time (that would require constexpr functions in C++; can't be done in C). It's honestly not a very impressive feature.
> VLA typedef ... I have no clue how this could ever be useful.
I used this feature recently. I had several arrays of the same size and type, and the size was determined at runtime. The VLA typedef let me avoid duplicate type signatures which I find more readable.
int N = atoi(argv[1]);
typedef int grid[N][N];
grid board;
grid best;
grid cache;
That's hardly obscure: it's Javascript that's the odd one out of the languages with C-like syntax, with its wacky function-level scoping instead of variable shadowing when using 'var', and falling back to global scope when not declaring a variable properly. Also, C and C++ behave identically with your given example.
A perhaps obscure feature is that you can "unshadow" a global variable like this:
#include <stdio.h>
int global = 0;
int main()
{
    int global = 1;
    {
        extern int global;
        printf("%d\n", global); // prints 0
    }
}
> It is also why C++ is not a strict superset of C
Can you explain? That code in C++ also scopes ‘a’ to the block.
EDIT: I see you’ve edited the code, but I think it’s still true in C++. I’ve often done that for RAII and unless I’m mistaken it works just as well when shadowing variables like you’re doing as when not.
Agreed. And I have also used it in C++ for RAII purposes. In C++, braces introduce a scope, and objects local to that scope will be destructed upon exit.
sizeof ('a') doesn't reliably tell you whether you're compiling C or C++. It yields the same result in an implementation where sizeof (int) == 1 (which requires CHAR_BIT >= 16). (The difference is that character constants are of type int in C, and of type char in C++.)
So if sizeof ('a') == 1, then either you're compiling as C++ or you're compiling as C under a rather odd implementation.
Both POSIX and Windows require CHAR_BIT==8. The only systems I know of with CHAR_BIT>8 are for digital signal processors (DSPs).
If you want to tell whether you're compiling C or C++:
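Presumably the intended answer is the predefined macro:
#ifdef __cplusplus
    /* being compiled as C++ */
#else
    /* being compiled as C */
#endif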
Line comments have been part of the C language for a long, long time (they were added in C99); so long, in fact, that more and more often they predate some of the younger participants in these internet discussions.