Take a look at how simply a CPU actually works, and by how many ORDERS OF MAGNITUDE the code size grows when any modern "programming paradigm" is involved.
Programs that are stripped down to the absolute minimum of arithmetic and logical operations in assembler can often run a thousand times faster than when written in higher-level languages.
I remember being shown how a classical computer science problem, the Josephus problem, can be solved in a single logic instruction, instead of a kilobyte-long Java program, which is the smallest solution possible when no bitwise operations are allowed.
I don't think you lose three orders of magnitude of performance with a high-level language. What a modern C compiler produces, with a little help from the programmer, should be reasonably close to what you can achieve by hand. Even slow languages like Python are not a thousand times slower than C; maybe a hundred times.
I'm not sure I'd go as far as "beat a human 99.9 times out of 100", certainly not beat by a significant amount, but definitely "at least equal a human 99.9 times out of 100". Of course, the remaining 0.1% where the human wins could be large wins, but then you have to consider whether they occur in positions that actually matter, i.e. tight loops.
1. CPUs are not simple by any means, as evidenced by Spectre and Meltdown.
2. Sure, assembly is faster than JavaScript, but good luck getting any abstraction or type safety in ASM. Your code is also impossible to debug and takes 15 times longer to work on.
> Take a look at how simply a CPU actually works, and by how many ORDERS OF MAGNITUDE the code size grows when any modern "programming paradigm" is involved.
Yeah, but you can't scale that to teams of programmers working on complex business logic.
Yep, just rotate the binary representation of the total number of members left by 1 bit
/**
 * Computes the safe position in the Josephus problem for k = 2.
 *
 * @param n the number of people standing in the circle (e.g. 41)
 * @return the safe position, i.e. the person who survives the execution
 *
 * ~Integer.highestOneBit(n * 2)
 *     multiply n by 2, take the highest set bit, and complement it
 * (n << 1) | 1
 *     left-shift n and set the lowest bit
 * ~Integer.highestOneBit(n * 2) & ((n << 1) | 1)
 *     bitwise AND keeps only the bits set in both operands
 */
public int getSafePosition(int n) {
    return ~Integer.highestOneBit(n * 2) & ((n << 1) | 1);
}
I've never heard of it either, but it's worth saying that if it can be done in one instruction, it's entirely possible that GCC or Clang will find that instruction on their own.