Safety. Without pointer arithmetic it's possible to create a language that can never derive an illegal address that succeeds incorrectly. Compiler and hardware technology have advanced to the point where a loop using array indices can be as efficient as a loop using pointer arithmetic. Also, the lack of pointer arithmetic can simplify the implementation of the garbage collector.
> Compiler and hardware technology have advanced to the point where a loop using array indices can be as efficient as a loop using pointer arithmetic.
I literally refactored some code from index arithmetic to pointer arithmetic and gained a 10% performance increase in an (admittedly silly) numerical performance contest on Friday, so I'm not convinced either LLVM or the "jit compiler that reorders operations inside the x86 chip" is that smart yet.
That said I would not doubt that most modern languages convert iterables that look like index arithmetic into pointer arithmetic, but if they do so I would suspect it's at an IR level above the general compiler backend.
In Rust iterators are actually the fastest way to safely iterate over arrays, because they will elide bounds checks. If you use `array[index]` you will get a mandatory bounds check (branch) on each access. Using pointer arithmetic would avoid that, but is unsafe in Rust for obvious reasons.
In C I would assume indexes and pointer arithmetic to have exactly the same performance, since `array[i]` is the same as `*(array + i)` and there are no mandatory bounds checks. Might be interesting to put your code into Godbolt and see what actually changes here.
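For contrast with both, Go keeps the check mandatory: an out-of-range index panics before any bad address is dereferenced, which is the FAQ's "never derive an illegal address that succeeds incorrectly" guarantee in practice. A minimal sketch (function name is mine):

```go
package main

import "fmt"

// sum iterates by index; the compiler can prove i is always in range
// here and elide the bounds checks in the loop body.
func sum(s []int) int {
	total := 0
	for i := 0; i < len(s); i++ {
		total += s[i]
	}
	return total
}

func main() {
	s := []int{1, 2, 3}
	fmt.Println(sum(s)) // 6

	defer func() {
		fmt.Println("caught:", recover()) // runtime error: index out of range
	}()
	_ = s[5] // out of range: panics instead of reading a bad address
}
```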
Ziglang. We looked at it in Tracy, so we confirmed that having a counter and a pointer is bad relative to having just a pointer. We don't want bounds checks because they are 1) expensive (remember, silly numerical computation challenge) and 2) we overshoot the end of the array anyway to be able to aggressively unroll as many writes as possible at compile time. Don't worry, we make sure to allocate extra slots that are mathematically guaranteed to be sufficient beforehand.
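That overshoot-with-padding trick isn't Zig-specific; a sketch of the idea in Go (the names and the unroll factor are my invention): round the allocation up to a multiple of the unroll factor, let the unrolled loop write into the padding, then trim.

```go
package main

import "fmt"

const unroll = 4

// fillSquares computes i*i for i in [0, n). The backing array is padded
// to a multiple of unroll, so the manually unrolled loop never needs a
// tail check: any overshoot past n lands in the padding.
func fillSquares(n int) []int {
	padded := (n + unroll - 1) / unroll * unroll // round up to multiple of unroll
	out := make([]int, padded)
	for i := 0; i < padded; i += unroll {
		out[i] = i * i
		out[i+1] = (i + 1) * (i + 1)
		out[i+2] = (i + 2) * (i + 2)
		out[i+3] = (i + 3) * (i + 3)
	}
	return out[:n] // trim; the extra writes in the padding are discarded
}

func main() {
	fmt.Println(fillSquares(6)) // [0 1 4 9 16 25]
}
```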
Yes. And in a more general sense, I noticed Go tried to remove everything that can obfuscate the code, make it harder to read, or become junior-unfriendly. At least that's my impression.
In general, that seems like the greatest improvement that can be made, because they optimized the language itself.
They made it efficient, but still readable, so you can just read or write the code without much/any Googling. I never really understood why simplicity was considered especially "junior-friendly", when actually more experienced people can benefit the most from it. It's much easier now to just visualize the algorithms used in code, without introducing tricky notations. It's also much easier to just write your code when you finally have a syntax and stdlib that cover the majority of typical cases, instead of a language where you need to either reinvent the world or use over-engineered libraries to solve common problems.
Contrary to this mindset of simplicity over everything else, abstraction is the very lifeblood of the whole field. While there is indeed accidental complexity we should strive to avoid, there is essential complexity which can only be managed through proper abstractions.
So that over-engineered library that actually solves the problem is the only real productivity benefit we can achieve, since languages don't beat each other in productivity to any significant degree (see Brooks's "No Silver Bullet").
So while one may not want to go into macro-magic with a semi-junior team, having no abstraction will kill a project just as surely: LOC will explode, there will be tremendous amounts of buggy, copy-pasted or repeatedly reimplemented code, and so on. And one of the few empirical facts we know about programming is that bug count correlates linearly with LOC.
> I noticed Go tried to remove everything which can obfuscate the code and make it harder to read, or become junior-unfriendly.
That is unsurprising:
> They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.
What he was saying in the preceding bit is that they wanted their new language, which we now know as Go, to be familiar to those who have learned Java, C, C++, etc.
This is opposed to it being derived from, say, Haskell, which, while brilliant, few are familiar with.
It’s not ironic that Go is similar to Java, especially in its earlier days. Those similarities are stated as an intent of Go’s design.
OCaml came out in '96. That would be PhD level in comparison. As for Python, it's interesting how it was considered a simple, easy-to-use language that focused on doing things one way back then. But now, nearing version 4, it's turned into a complex language with many ways to do things. Makes one wonder how long Go will be able to hold out. JS has gone the same route as Python. I guess Scheme managed to stay simple, but it was never that popular. OTOH, its cousin CL is complex.
To paraphrase Stroustrup:
There are two kinds of languages: the simple ones (Scheme, Smalltalk, early Python and JS), and the ones everyone uses.
Actually you _can_ do pointer arithmetic. It's just not straightforward, and the behavior is "undefined" in the sense that even if it works today, it might break in future versions.
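Right: through the `unsafe` package. A minimal sketch, assuming Go 1.17+ for `unsafe.Add` (the helper name `elementAt` is mine); this compiles and runs today, but it is exactly the kind of code the spec makes no promises about:

```go
package main

import (
	"fmt"
	"unsafe"
)

// elementAt reads s[i] via raw pointer arithmetic instead of indexing,
// sidestepping the compiler's bounds check entirely. Nothing stops a
// caller from passing an out-of-range i and reading garbage.
func elementAt(s []int32, i int) int32 {
	base := unsafe.Pointer(&s[0])
	return *(*int32)(unsafe.Add(base, uintptr(i)*unsafe.Sizeof(s[0])))
}

func main() {
	s := []int32{10, 20, 30, 40}
	fmt.Println(elementAt(s, 2)) // 30
}
```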
> Safety. Without pointer arithmetic it's possible to create a language that can never derive an illegal address that succeeds incorrectly. Compiler and hardware technology have advanced to the point where a loop using array indices can be as efficient as a loop using pointer arithmetic. Also, the lack of pointer arithmetic can simplify the implementation of the garbage collector.
<3