Lack of typing support is a major advantage. If you don't have types you don't need interfaces, generics, etc. The resulting code is shorter and less bug-prone.
You still need all of that. You just have to store that information in documentation and do the analysis in your head instead of relying on a static analyzer.
Languages like Python and TypeScript support 'any' as a type, so you can always opt out of static type checking where you want to. Most of the time, though, generics are preferable to things randomly dying at runtime.
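To make the trade-off concrete, here's a minimal Python sketch (function names are my own, for illustration):

```python
from typing import Any, List, TypeVar

T = TypeVar("T")

def first_any(items: Any) -> Any:
    # 'Any' opts out of checking: a checker like mypy accepts any
    # argument here, so a bad call only fails at runtime
    return items[0]

def first(items: List[T]) -> T:
    # the generic version keeps the element type, so a checker
    # flags a call like first(42) before the code ever runs
    return items[0]

# Identical behaviour at runtime when the input is fine:
first([1, 2, 3])      # 1 either way
first_any([1, 2, 3])  # 1 either way

# But only the Any version lets this slip past a static checker:
# first_any(42)  # TypeError at runtime: 'int' object is not subscriptable
```

Same runtime behaviour on good input; the difference is only in when the bad call gets caught.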
Things don't die very often due to typing issues. The unit tests always pick up typing problems.
Testing the code's behavior implies testing the code's typing. And if you are not testing the code's behavior then the code isn't really tested at all.
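A small sketch of that argument (the function is hypothetical, chosen for the example):

```python
def apply_discount(price, discount):
    return price - discount

# A test of behaviour is automatically a test of typing: this
# assertion only passes when both arguments really are numbers.
assert apply_discount(100, 25) == 75

# Had price arrived as the string "100", the same assertion would
# raise a TypeError instead of passing, so the type bug cannot hide:
# apply_discount("100", 25)  # TypeError: unsupported operand type(s)
```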
Does your code check every single variable is not null before using it? That every function argument is of the expected type? That every attribute exists before it is accessed? And do you have unit tests for all of this too?
If so - then you're writing a whole lot of manual checks that a strongly typed language would perform for you, at compile time. If not - well, then you're doing less testing than a strongly typed language would.
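Roughly, the two options look like this in Python (a hypothetical function; the guards are just examples of the checks in question):

```python
def area(width, height):
    # the dynamic version spells out the guards by hand, and each
    # guard wants its own unit test
    if width is None or height is None:
        raise ValueError("width and height must not be None")
    if not isinstance(width, (int, float)) or not isinstance(height, (int, float)):
        raise TypeError("width and height must be numbers")
    return width * height

def area_typed(width: float, height: float) -> float:
    # the annotated version: a checker such as mypy verifies every
    # call site before the program runs, so the guards above go away
    return width * height
```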
I agree that these issues are rare, and there is evidence that strongly typed languages have a similar number of bugs to dynamic ones. But suggesting that unit tests mean you don't need strong types seems a bit naive to me.
I don't actually care about any of that stuff. You are confusing technically wrong with not working. If the code works in production it doesn't matter that the code is technically wrong.
Testing via typing is very weak testing. Almost worthless. Type checking doesn't find many bugs in general.
Strongly typed languages have 2.5 times the number of bugs as dynamically typed languages per software feature.
Why would you intentionally add bugs to your code?
It doesn't make any sense to me.
Thinking that static typing in a scripting language is a good idea is pretty naive. It's like creating a version of Haskell with mutability.
> If the code works in production it doesn't matter that the code is technically wrong.
This is actually very insightful, but I'm afraid you won't be able to convince most people.
The only valid metric of code correctness is empirical: how many times code ran successfully in production. Everything else (unit tests, static typing) is theoretical and often close to useless.
I remember seeing a talk where someone analyzed all GitHub repos to find which languages produced the most reliable software. He found C++ to be the most reliable - but not because C++ itself is great as a language. The main reason was that many of the dependencies other languages rely on are written in C++, so C++ software happened to be the most battle-tested.
Once you've got five years of real-world experience and have found programs in production that work for the wrong reasons, you will understand. (And I'm not just talking about programs with dynamic typing, either.)
I'm aware of one case where a program traded 10 million dollars and made 5 million in profit. There was a serious, jaw-dropping error in it. Did it matter? (And if so, to whom?)
The alternative is pretending your house is a treasury and buying a lock that costs more than any other item in your house, while your roommate develops kleptomania.