Dynamic languages work because they are polymorphic across types. Your function is more general if it doesn't have to know whether it was passed a list, a tuple, or a dict; the implicit truthiness check works on all of them. Adding strict type checks quickly makes dynamic typing impossible to work with, and then you start requiring strict typing everywhere; at that point I'd rather not work in Python but in some other language.
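A minimal sketch of what I mean by the truthiness check being polymorphic (the `describe` helper is just an illustration, not from any real codebase):

```python
def describe(items):
    # Truthiness is polymorphic: an empty list, tuple, dict,
    # set, or str are all falsy, so one check covers them all.
    if not items:
        return "empty"
    return f"{len(items)} item(s)"

describe([])        # list -> "empty"
describe(())        # tuple -> "empty"
describe({})        # dict -> "empty"
describe([1, 2])    # "2 item(s)"
```

The function never asks what type it got, and it doesn't need to.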
The same goes for writing a library: your library will be easier to use if you are less strict about the inputs you accept, since that lets your users work in a more naturally dynamic way. I love static types, but I have worked on Python libraries, and there, accepting a wide range of inputs is an important part of usability.
On the other hand, big codebases without type hints rely on their engineers remembering the types of every argument, or on the functions being overly defensive.
To each their own; I prefer knowing what argument types a function accepts so I don't need to think about it and can focus on writing business logic. If the function could accept more types, I'd just improve it.
Isn't your first paragraph all about not being able to tell what argument types a function accepts just by looking at its declaration? Doesn't that defeat the purpose of using a dynamic language?
Type hints support generics and abstract interfaces; you use those to document what behaviour the function actually relies on, and then inside the function you do what you need as dynamically as possible.
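For example, hinting against an abstract interface like `Iterable` keeps the function just as dynamic as before while still documenting the behaviour it uses:

```python
from typing import Iterable

def all_uppercase(strings: Iterable[str]) -> list[str]:
    # The hint documents the behaviour used (iteration yielding
    # strings), not one concrete container type, so lists,
    # tuples, generators, etc. all satisfy it.
    return [s.upper() for s in strings]

all_uppercase(["a", "b"])       # a list works
all_uppercase(("a", "b"))       # so does a tuple
all_uppercase(s for s in "ab")  # and a generator
```

A caller passing something non-iterable gets flagged by the type checker, but nothing about the runtime behaviour is restricted.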
That works for library code; it might be too cumbersome for code with less reuse. I have never worked on a large Python codebase that wasn't a library, so I'm not sure what is best there.
This makes it easier for errors to go unnoticed in large codebases. If a function expects a list and someone passes a tuple, it's likely that they passed the wrong value by accident.
For beginners, and in toy examples, it's kind of neat when code bends over backwards to work. Take this example:
>>> def all_uppercase(lst):
...     return [s.upper() for s in lst]
...
>>> all_uppercase('hi')
['H', 'I']
>>>
Kind of neat, right? It's almost like a joke in code. Ha ha, iterating over a string gives you strings! But the charm of finding cute things to do with unexpected inputs doesn't scale. What is a helpful attitude at a small scale translates to "errors should manifest as far away as possible from the programming mistake that caused them" at a large scale. 99 times out of 100, if your code expects a list and somebody passes a tuple, they want a stack trace, not a return value.
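Concretely, the fail-fast version of the same function is one guard away (a sketch; whether an explicit `isinstance` check is the right tool is its own debate):

```python
def all_uppercase(lst):
    # Fail fast: a str is also iterable, so without this guard
    # all_uppercase('hi') would "work" and return ['H', 'I'].
    if not isinstance(lst, list):
        raise TypeError(f"expected list, got {type(lst).__name__}")
    return [s.upper() for s in lst]

all_uppercase(['hi', 'there'])  # fine
# all_uppercase('hi')           # raises TypeError at the call site
```

Now the stack trace points at the mistaken call instead of some downstream consumer of the weird return value.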
> I have worked on making python libraries and there accepting a wide range of inputs is an important part of usability
I work with a large Python codebase at work, and this is a frequent source of frustration. I frequently track down bugs and find that on some untested code path our code passes nonsensical values of the wrong type to a third-party library, and the library just... finds some way of interpreting it.
Even if all the code paths get tested, they can't be tested with every possible input. Property-based testing seems like overkill for our application, and our tests are already almost slow enough to be annoying. And what if the third-party library is side-effecting in a way that's hard to test? It gets mocked out. And I find that the mocks are configured to expect the nonsensical values, because the original programmer found that they "work."
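The failure mode can be sketched like this (`send_report` and `notify` are hypothetical stand-ins, not from any real library):

```python
from unittest.mock import Mock

# Hypothetical third-party function that tolerates the wrong type.
def send_report(user_ids):          # documented: a list of ids
    return [f"sent:{u}" for u in user_ids]

def notify(user_id):
    return send_report(user_id)     # bug: should be [user_id]

# A str is iterable, so the library quietly "interprets" it:
# notify("u42") sends one report per *character*, with no error.

# A mock written against the observed behaviour cements the bug:
mock_send = Mock()
mock_send("u42")                    # recorded as-is
mock_send.assert_called_with("u42") # passes; wrong value is now "expected"
```

The mock codifies whatever the buggy code happened to do, so the test suite actively defends the mistake.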
All because libraries don't want to make an unfriendly impression by throwing a stack trace.