The revulsion a lot of folks express at one-based indexing is always bizarre to me. I write code in C, Python, and MATLAB. Switching between these is really not that hard. The two indexing models just seem to be convenient or painful for different things.
And yes, perhaps one-based indexing introduces a class of bugs when you need to call into C. But zero-based indexing has the same problem if you need to call into Fortran, and calling Fortran is really common for numerical code.
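To be concrete, the bug class is just the off-by-one you have to remember at the language boundary. A trivial sketch (the helper names are hypothetical, purely for illustration):

    # Hypothetical helpers, just to name the conversion at the boundary:
    # a 1-based index i corresponds to offset i - 1 in a 0-based API, and a
    # 0-based offset k corresponds to index k + 1 back on the 1-based side.
    to_zero_based(i) = i - 1
    to_one_based(k) = k + 1

    to_zero_based(1)  # == 0, the first element either way
    to_one_based(0)   # == 1

Forget the conversion in either direction and you silently read the wrong element; that hazard exists whichever default your language picks.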
I agree, and I would once have been a shrill critic of any 1-based indexing.
Most of the classic linear algebra algorithms are described using 1-based indexing. The last time I needed to use one of these algorithms, I stubbornly tried to translate everything into zero-based indexing, which was more difficult than you'd imagine, and it was hard to have confidence that I had correctly captured the algorithm.
I switched to using a 1-based matrix implementation and everything became trivial. It's not that hard to switch between looping from 0 to n (exclusive) and looping from 1 to n (inclusive).
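As a minimal sketch of what I mean (not the algorithm I actually needed, just an illustrative classic), textbook forward substitution transcribes almost verbatim once the array is 1-based:

    # Forward substitution for a lower-triangular system L*x = b, written in
    # Julia straight from the usual 1-based textbook description.
    # Assumes L is n-by-n lower triangular with a nonzero diagonal.
    function forward_substitution(L, b)
        n = length(b)
        x = zeros(n)              # result as Float64 for simplicity
        for i in 1:n              # rows 1..n, exactly as the books write it
            s = b[i]
            for j in 1:i-1        # subtract the already-solved terms
                s -= L[i, j] * x[j]
            end
            x[i] = s / L[i, i]
        end
        return x
    end

With a 0-based array, every bound and subscript in that inner loop has to be shifted, and each shift is a chance to get it subtly wrong.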
The issue is whether you want to cater to established conventions (1-based indexing is typical in mathematics, and non-mathematicians similarly count from 1), or to cater to what is most logical.
I suspect the main reason 1-based indexing seems convenient for some things is convention. Note that 0 wasn't even really used in mathematics until hundreds of years after our system of counting years was created.
Since our year counting is 1-based, we end up with odd things like "2019" being the "19th year" of the "21st century" as opposed to "2018" being "year 18" of "century 20". I suspect it's also fairly unintuitive that the current century began in the year "2001" rather than "2000".
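To make the off-by-one concrete, here's how the two conventions look as plain integer arithmetic (my own illustration, in Julia):

    # 1-based convention: the 21st century runs 2001-2100, so use ceiling
    # division for the century and mod1 for the year within it.
    cld(2019, 100)    # 21  -> 2019 is in the 21st century
    mod1(2019, 100)   # 19  -> the 19th year of that century
    cld(2000, 100)    # 20  -> 2000 is the last year of the 20th century

    # 0-based convention: century 20 runs 2000-2099, so use floor division
    # and plain mod.
    fld(2019, 100)    # 20  -> century 20
    mod(2019, 100)    # 19  -> year 19 of century 20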
Pretty much all languages designed for mathematics use 1-based indexing: Mathematica, R, MATLAB, Fortran, etc. Either you have to believe that the designers of these languages all made a mistake, or accept that it makes much more sense for mathematical computing to follow mathematical standards.
Is it possible that mathematics got it slightly wrong? The whole concept of 0 is relatively recent, and plenty of mathematics predates it, so presumably the pull to maintain convention was there for successive generations of mathematicians too.
It's not about right or wrong; they just work for different things, but programming languages, unlike math or human languages, have to pick one as the default. 1-based indexing is good for counting: if I want the first element up to the 6th element, I pick 1:6, which is more natural than 0:5 ("from the 0th to the 5th"). 0-based indexing is good for offsets: I was born in the first year of my life, but I wasn't born as a 1-year-old, I was born as a "0-year-old".
And since pointer arithmetic is based on offsets, it wouldn't make sense for C to use anything other than 0-based indexing. But mathematical languages aren't focused on mapping the hardware in any way; they map the mathematics, which already uses 1-based indexing for vectors and matrices. You can see how these languages are related in [1].
If you want to write generic code for arrays in Julia, you shouldn't hard-code indices anyway; use the iteration utilities [2], which let you work with arrays using whatever offset suits your problem. And for things that are tricky with 1-based indexing, like circular buffers, the base library already provides helpers (such as mod1()).
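For example, a minimal sketch of that style (only Base functions; the buffer bit is my own toy illustration):

    # Generic iteration: eachindex gives the valid indices of A whatever its
    # index range is, so the same code works for 1-based and offset arrays.
    function my_sum(A)
        s = zero(eltype(A))
        for i in eachindex(A)
            s += A[i]
        end
        return s
    end

    # Circular-buffer-style wraparound over a 1-based array using Base.mod1,
    # which maps into 1:n instead of 0:n-1.
    buf = collect(1:8)
    next_index(i) = mod1(i + 1, length(buf))
    next_index(8)   # == 1, wraps back to the start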
> 1:6 which is more natural than 0:5 (from the 0th to the 5th)
This is again just begging the question. When you want to refer to the initial element as the "1st", that is due to the established convention of starting to count from 1. The point is that the reasoning for starting from 1 might only be that: conventional, not based on some inherent logic.
You start counting with 1 because 0 is a term created later to indicate an absence of stuff to count. If I have one kid, I start counting at one; if I have 0 kids, I don't have anything to count.
But then, I agree that there is no inherent logic: math is invented, not discovered, and you could define it any way you want. If we all had 8 fingers, we would probably use base 8 instead of base 10, after all.
Actually we naturally count from 0, because that's the initial value of the counter.
It just so happens that this edge case of 0 things doesn't occur when we actually need to count something. Starting from 1 is kinda like how head is a partial function (bad!) in some functional programming languages. Practicality beats purity.
Does it matter if it's wrong? In mathematics it's a pretty standard, if unwritten, convention that, for example, the top left corner of a matrix has position (1, 1) and not (0, 0). If I read an equation and see an "a3" in it, I can safely assume that there exist an a1 and an a2, all three of which are constants of some sort. I can safely assume that there does not exist an a0, because that just isn't the convention. Furthermore, when I do encounter a 0 subscript (e.g., v0), it implicitly marks a special value: some reference point or the original starting value. That is different from seeing a 1 subscript, such as v1. For example, take the equations
f = v0 + x
f = v1 + x
Those are the same equation, right? Sure, but when I see v1 I'm not really sure what it is or could be, whereas if I see v0 I can assume it may be the initial velocity, which I can look up.