Static typing enables faster feedback loops. The kind of thing Bret Victor raves about.
In practice, I've been able to accomplish things with static typing that I hadn't the brainpower to do with dynamic typing, because even with a REPL, runtime errors came too late and didn't meaningfully inform me of my mistakes.
As for why we switch to dynamic typing as we scale up, some of it probably has to do with trust. Computers communicating over the internet cannot trust each other, if they can even verify whom they're talking to in the first place. They have no choice but to perform lots of runtime checks to make sure they don't get pwned by the slightest remote mishap. Once you get to that point, static verification becomes much less useful, and can often be skipped altogether.
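To make that concrete, here's a minimal Python sketch of what those boundary checks look like. The message shape (an `amount` field with a range limit) is invented for illustration; the point is that no compile-time guarantee survives the network boundary, so every field gets re-checked at runtime:

```python
import json

def handle_request(raw: bytes) -> int:
    """Parse and validate an untrusted message before acting on it."""
    try:
        msg = json.loads(raw)
    except ValueError:
        raise ValueError("malformed payload")
    # Every field must be checked at runtime; the sender's static
    # types (if it had any) mean nothing on this side of the wire.
    if not isinstance(msg, dict):
        raise ValueError("expected an object")
    amount = msg.get("amount")
    if not isinstance(amount, int) or not (0 <= amount <= 10_000):
        raise ValueError("amount missing or out of range")
    return amount

print(handle_request(b'{"amount": 42}'))  # -> 42
```

Once code is written in this defensive style, a static type on `raw` buys you very little: the real contract is enforced dynamically anyway.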
> As for why we switch to dynamic typing as we scale up, some of it probably has to do with trust.
I'll note that we switch to 'dynamic' well before the question of trust arises. Whenever you do IPC between two OS processes you fully trust, communicate through a database or file, wire up a multi-process system with config files, etc. Most internal code at an organization is trusted, yet the communication between OS processes within a company uses dynamic messaging semantics.
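A small sketch of what I mean, using two Python processes that trust each other completely. The record shape here is made up, but notice that the child still has to re-establish the types at runtime, because all that crosses the process boundary is bytes:

```python
import json
import subprocess
import sys

# Parent: holds a structure whose type is statically known here.
record = {"id": 7, "name": "widget"}

# Child: a fully trusted process, yet it receives only bytes.
child_src = r"""
import json, sys
msg = json.loads(sys.stdin.read())
# Trust doesn't help: the type must be re-derived dynamically.
assert isinstance(msg.get("id"), int)
print(msg["id"] * 2)
"""

out = subprocess.run(
    [sys.executable, "-c", child_src],
    input=json.dumps(record),
    capture_output=True,
    text=True,
)
print(out.stdout.strip())  # -> 14
```

The trust level between parent and child is total, yet the typing discipline at the boundary is exactly what you'd use with a stranger on the internet.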
I'll also point out that if you use a compiled shared library, or make any system call, you are forced to use the type system of C, which isn't particularly advanced.
It could be that we haven't developed dynamic messaging protocols that can bind safely and correctly. I'm arguing that if we had them, they might apply at smaller scales too (objects within a 'process', connections between shared libraries, etc.).
> Static typing enables faster feedback loops.
All of that only works on the small scale - where the type knowledge is fully shared within the entire program. As soon as you're writing code to call a service, static typing benefits are moot. The current solution is to create 'stubs', which doesn't scale well either. I am not arguing for 'dynamic typing everywhere', but rather for a different POV where we look at 'interconnections' at all levels the same way, and think about providing introspection and safe late binding in a standard, scalable way. Minimizing pre-shared knowledge would be one way to improve how this scales, for instance.
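As a toy illustration of 'minimizing pre-shared knowledge', here's a Python object that can describe its own interface, so a caller needs no pre-generated stub - only the discovery convention. The `Service`/`describe` names are invented; real systems would do this over a wire protocol rather than in-process:

```python
import inspect

class Service:
    """A toy service that exposes a machine-readable self-description."""

    def add(self, a: int, b: int) -> int:
        return a + b

    def describe(self) -> dict:
        # Introspect the interface instead of pre-sharing it as a stub.
        return {
            name: str(inspect.signature(fn))
            for name, fn in inspect.getmembers(self, inspect.ismethod)
            if not name.startswith("_") and name != "describe"
        }

svc = Service()
contract = svc.describe()
print(contract)                 # {'add': '(a: int, b: int) -> int'}
print(getattr(svc, "add")(2, 3))  # the caller binds only after discovery
```

The pre-shared knowledge shrinks to just the discovery convention (`describe`), rather than the full interface baked in at compile time.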
The point is that trust and typing seem orthogonal. E.g. I use static typing within one 'program' but 'dynamic' as soon as I split my program into two, even when the level of trust hasn't changed. Same thing when including my colleagues' code within my 'program' vs. calling it across program boundaries.
The point I'm trying to make is that 'static-typing style verification', which happens across different parts of a 'single program', doesn't extend to multiple programs or real systems. Maybe we should look at some kind of late-bound, dynamic verification - i.e. a protocol that is executed whenever one 'program' reaches out and connects to another, to determine if the connection is safe and correct. Do you think this has value?
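A hypothetical sketch of such a connect-time protocol, in Python. Everything here is invented (the schema format, the `handshake` function, the operation names); the idea is just that compatibility is verified once at bind time, not deferred to every call and not frozen at compile time:

```python
# What the client expects of the server, expressed as data.
EXPECTED = {"get_user": {"args": ["id:int"], "returns": "dict"}}

def handshake(client_expects: dict, server_offers: dict) -> bool:
    """Return True iff every operation the client needs is offered
    with a compatible shape. Runs once per connection, not per call."""
    for op, shape in client_expects.items():
        if server_offers.get(op) != shape:
            return False
    return True

server_schema = {
    "get_user": {"args": ["id:int"], "returns": "dict"},
    "delete_user": {"args": ["id:int"], "returns": "bool"},
}

print(handshake(EXPECTED, server_schema))   # True: safe to bind
print(handshake({"get_user": {"args": ["id:str"], "returns": "dict"}},
                server_schema))             # False: reject at connect time
```

This gives you something with the flavor of static checking (a whole-interface verification, failures surfaced before any real work happens) while remaining late-bound: neither side needs the other's code at build time.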