Agreed. It strikes me as an inelegant abuse of notation even when it's used in a maths context, albeit a convenient one. It violates the way we can usually reason about infix operators.
a < b is an expression which gives you a boolean value, true or false. Why then are we asking whether that boolean is less than, or greater than, some number?
Unlike with addition or multiplication, chained comparison isn't associative:

a < b < c ≠ (a < b) < c

and likewise:

a < b < c ≠ a < (b < c)

What's actually meant is a conjunction:

a < b < c = (a < b) ∧ (b < c)
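To make the hazard concrete, here's a minimal C sketch (the variable values are just illustrative): C happily accepts a < b < c, but parses it as the left-associated form, so the result disagrees with the mathematical reading.

    #include <stdio.h>

    int main(void) {
        int a = 5, b = 3, c = 1;

        /* C parses a < b < c as (a < b) < c: a < b yields 0,
           and 0 < 1 yields 1, so the whole expression is "true"
           even though 5 < 3 < 1 is mathematically false. */
        printf("%d\n", a < b < c);       /* prints 1 */
        printf("%d\n", a < b && b < c);  /* prints 0 */
        return 0;
    }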
The same criticism does not apply to C's chained assignment expression, a = b = c, but I dislike that for another reason: if b has a narrower type than c, the value is silently truncated on the way through, so a may end up with something unexpected.
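A minimal sketch of that pitfall, assuming an 8-bit unsigned char (the specific values are hypothetical):

    #include <stdio.h>

    int main(void) {
        int a;
        unsigned char b;  /* narrower than int */
        int c = 300;

        /* a = b = c parses as a = (b = c). Converting 300 to an
           8-bit unsigned char wraps it modulo 256, giving 44, and
           that truncated value is what then gets assigned to a. */
        a = b = c;
        printf("%d\n", a);  /* prints 44, not 300 */
        return 0;
    }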