PyCon US 2021 Recordings are Available (pycon.blogspot.com)
216 points by precern_harlan on June 4, 2021 | 101 comments



After watching this 2018 talk on typing: https://www.youtube.com/watch?v=hWV8t494N88

and this one last year: https://www.youtube.com/watch?v=ST33zDM9vOE&t=68s

and finally this one in 2021: https://www.youtube.com/watch?v=Lj_9TyT3V98

I think I'll finally start using type checking - it really seems to eliminate a whole class of potential bugs, and a class that often goes untested at that.


The thing with Python types is that they are “type hints” and not true types; “hints” is the key word. There are also two type checker implementations.

As they are hints, you can still pass in wrong values in some situations, and what type checks on one implementation may not type check on the other. So in Python the types are only superficial.

The paper “Python 3 types in the wild” at https://news.ycombinator.com/item?id=26788177 is a good read. I enjoyed reading it and found it useful.

I’m a fan of types, as the compiler tells me I made a mistake and blocks me from making mistakes. You can use the type system to make certain errors unrepresentable (impossible to make). With that, my preference is to use a language where types are first class rather than a language trying to retrofit some loose form of type checking on top, which is nothing more than linting.

Being a fan of strongly typed languages, I was eager to use types in Python when I had to involuntarily use Python. After a few instances of putting in type hints and having things type check when they shouldn’t, I mostly don’t bother with them now outside of data classes, as they weren’t catching the bugs I wanted, so I was still getting runtime errors.


You can have strict lint checks enforced with pre-commit hooks/CI checks. On my current team I've been adding type checker tests (mypy + pyright), and for now, as we're missing types, I have the type checkers on a nice rule: any PR you make must decrease the number of type errors in master (by 5, but it's up to you how harsh you want to be) to be merged, forcing people to add more types. Once the type errors eventually hit 0 on our current strictness settings, I'll gradually increase the strictness requirements. Full strictness, with no Anys anywhere including intermediates, isn't really possible, as a number of 3rd party packages lack types (tensorflow is especially sad with types). But we can get much of the strictness, and that should cover a large number of basic bugs.
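A minimal sketch of such a ratchet check (the baseline file name and the threshold of 5 are my placeholders, not a real tool):

    #!/usr/bin/env python3
    """Fail CI unless the mypy error count drops below the recorded baseline."""
    import subprocess
    import sys

    REQUIRED_DECREASE = 5  # how many errors each PR must eliminate

    # mypy prints one "error:" line per error it finds
    result = subprocess.run(["mypy", "."], capture_output=True, text=True)
    current = result.stdout.count("error:")

    with open("mypy_baseline.txt") as f:  # hypothetical file holding master's count
        baseline = int(f.read().strip())

    target = max(baseline - REQUIRED_DECREASE, 0)
    if current > target:
        print(f"{current} mypy errors; this PR must get down to {target}")
        sys.exit(1)
    print(f"OK: {current} errors (baseline {baseline})")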


> There’s also two type checker implementations.

Four: Mypy, Pyre, Pyright and Pytype.


I'd expect someone to name theirs Thypon


> The thing with Python types is that they are “type hints” and not true types

Python has several type checkers, as does Haskell, each with different soundness properties. What makes GHC's type checking more "true" than Mypy's or Pytype's or Pyright's?


Every correct Haskell ’98 or Haskell 2010 implementation (not that a lot of them are still alive) accepts exactly the same set of programs, because that set is described in the Report. Every correct Standard ML implementation (of which there are still a few, surprisingly) with extensions turned off accepts exactly the same set of programs, because that set is described in the Definition.

GHC with its innumerable extensions and impenetrable (if well-defined) typing rules is and was always intended to be a testbed for wild type system ideas; that whatever it accepts became the received superset of standard Haskell (of which it is the only implementation) is in my opinion a failure of the Haskell community and the very thing that turned me away from Haskell after several years.

On the other hand, the only real description of what Mypy accepts is the source code of Mypy, and while many interesting programs that fall into that set also happen to be accepted by Pytype or whatever else, it’s not in any useful sense a bug if some are not. There’s also no practical way to tell if an arbitrary program is accepted except to try throwing it into the typechecker, because there is no human-adapted description of what constitutes being accepted.

None of which is to say that Mypy is not useful. It is useful, tremendously so. But as far as maturity of type systems in Haskell or SML goes, it’s not even in the same league. (So that this is not taken as academic elitism, neither are Scala, OCaml, or Typed Racket.)


Sure, I'm just saying that Mypy is a real type checker. The distinction is one of degree, not of kind. Writing a spec for Mypy wouldn't make its types more "true", just as deleting Haskell's spec wouldn't make its types less true types.


Still not necessarily true.

The academic usage holds that the distinction between the true type systems and the pretenders is that the first are sound: a well-typed program either evaluates to a normal form (value) or fails to terminate, but can never get stuck (segfault, terminate with an uncaught exception, however your environment implements that).

Of course, a gradual system like Mypy can never both be sound in this sense and remain gradual (which is a major selling point for Mypy), so for them the rule is usually amended with “provided the program is fully annotated” (and it should be obvious whether any given program is fully annotated or not, otherwise the system is just user-hostile regardless of how sound it is; at the very least it must be decidable). That is still obviously not satisfied for Mypy, because it doesn’t track exceptions (and including uncaught exceptions in normal forms would be useless because then no Python programs would ever get stuck). The actual guarantee Mypy is supposed to provide sounds kind of wimpy, like “never gets a TypeError that is not explicitly raised in user code”, but would still be useful regardless... Except that soundness is usually a pretty non-trivial theorem, and the lack of a readable specification for Mypy’s type system precludes it from being proven.
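For a concrete instance of the exception gap (my example, not the commenter's): a fully annotated program that mypy accepts without complaint can still die with an uncaught exception:

    def parse_port(raw: str) -> int:
        # Fully annotated and well-typed as far as mypy is concerned,
        # but int() raises ValueError at runtime on non-numeric input.
        return int(raw)

    parse_port("not a number")  # passes the type checker, crashes when run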

While well-typed Haskell (or SML) programs can in fact crash from the user’s perspective, the possible sources of crashes are rare and avoidable enough (for Haskell, only non-exhaustive patterns and explicit use of undefined, error, or throw) that you can call them additional normal forms without making the whole thing trivial. Every other way an untyped program could get stuck is disallowed by the type system, and there is a proof (SML) or at least a proof sketch (Haskell) of that (although of course not every untyped program that can’t get stuck is allowed by the type system, that is impossible by Turing—Gödel). Even the horrendously complicated systems that GHC implements nowadays still have papers that prove their soundness for a good enough toy example that one walks away convinced that the real thing works just as well.

So yes, there is a sense in which Mypy’s system is less “true” than SML’s or Haskell’s, and it is directly caused by (though not completely reducible to) the lack of a complete human-readable spec for both Mypy and Python itself. (I love Python to bits, but its object model is surreal and its only complete description is Objects/object.c together with Objects/typeobject.c.) If a wizard waved his wand and made every copy of the Haskell Report disappear, its type system would technically remain sound, but illegibly so to human mathematicians, thus it would in fact become less “true” in this sense than previously. Multiple implementations help, but I expect that without a prose spec it wouldn’t be easy to tell that GHC, UHC and Hugs implement the same system, the same way I have no idea if Mypy and Pytype do.


Thanks, that makes sense to me.

It's hard to find people who are knowledgeable enough about both static and dynamic systems and able to explain in layman's terms. Is there somewhere (online?) I can go to talk with people like you?


Calling me “knowledgeable” on this stuff might be a bit of a stretch, but then my standard of “knowledgeable” here is probably somewhere around “did a thesis on it”.

The people on #haskell at Freenode (nowadays Libera) were extremely friendly and helpful when I frequented it about five to ten years ago, although of course there are subtle but important limits to initiating discussions that are interesting to people on the channel but not strictly on topic for it.

My more general advice is (for a double combo of both trite and condescending) read a book. Specifically, read Pierce’s “Types and programming languages”. I usually cherry-pick paragraphs and sections from books rather than read them from start to finish, but this particular book I basically gulped down in two sittings on two-hour flights (being trapped in economy seating helps) and found extremely enlightening both in that it dissolved the mystery around some lofty-sounding words I was always afraid of (“corecursion” and “coinduction”, “terminating” vs “productive”) and in that it contained completely elementary mathematical insights that have previously passed me by (Tarski fixed point theorem and order theory in general).

While I did have an advantage of having encountered, if not understood, basically all of the words inside it beforehand, I still believe it might be helpful even if you have absolutely no background in type systems. And don’t be put off by the use of Java in the case study, it is genuinely on point there.

For a more practical and yet more advanced (dependent types!) view, you can try “Type-driven programming in Idris” and “The little typer”, but I haven’t studied the former as carefully and have only skimmed the latter, so can’t recommend either with the same degree of certainty. I guess just follow the general advice on difficult literature: look around, follow references, don’t hesitate to put down things that don’t work for you, and if all else fails, let it stew for a week before trying again.


Python types are unprovable, since the type of a variable can change at runtime. It's the Halting problem. All you can do is enforce types on static function boundaries - but that doesn't prove you won't send a different type through the boundary at runtime.

GHC types are mathematically provable. Types are guaranteed to stay within your definitions, so you can analyze the type graph without ever having to run the program.


> Python types are unprovable, since the type of a variable can change at runtime

You can certainly statically identify places where such changes are possible and the possible changes, in fact, Python static typecheckers do this.

> It's the Halting problem.

The Halting problem is a real thing, but you can simply bail out on any path that reaches a certain depth without resolution and fall back to the broadest possible type (failing narrower constraints) to avoid it.


You can identify some places changes are possible, but the problem is fundamentally unprovable, because of the Halting problem. You can't just hand-wave that away by saying, "we can use tricks to get 95% coverage." Ok, but if the type graph is not a provable entity that you can derive without simulating the program, you've sacrificed the fundamental point of C-style types, and that means you can never operate on them in the same way you can with C.


> since the type of a variable can change at runtime.

Can you give an example of that? Mypy catches this:

    def addsome(val: int) -> int:
        val += 1.1 #  tmp.py:2: error: Incompatible types in assignment (expression has type "float", variable has type "int")
        return val

I'm sure there are cases where it doesn't notice, but afaik those are considered bugs. Doesn't Haskell have bugs?


Provided you don't circumvent strict typing, no, Haskell does not have typing bugs. Or rather, it has so few that you can functionally behave as if the type checking is flawless. One of the main benefits of strict typing is that confidence - 95% coverage is helpful, but it's just helpful. 100% coverage allows you to program completely differently.

It's also static - does Mypy throw that error when that program is statically analysed, or at runtime?


Mypy shows that error statically.


Haskell doesn't have "several type checkers". It has several compilers, and if you use a particular compiler then you use its type checker and not the type checker of another compiler.

(There is also Liquid Haskell, but it doesn't check Haskell types, it checks Liquid Haskell types.)


Could you explain the significance of the distinction you're making?


You first!


Python's type system, just like TypeScript's, is completely unsound. Haskell is also unsound because of unsafeCoerce, but that makes it very clear that what you are doing is unsafe. In Python you very often write unsound code that passes the type checker. This means you cannot trust the type checker to catch type errors.
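One common route to that (a minimal sketch of my own, not from the thread) is Any quietly defeating the checker:

    from typing import Any

    def load_config_value() -> Any:
        # e.g. the result of json.loads, which is typed as Any
        return "8080"

    port: int = load_config_value()  # accepted: Any is compatible with everything
    doubled = port * 2               # checker believes int; at runtime it's "80808080"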


The primary unsoundness is the lack of types. The type checkers have strictness settings that will error if you are missing any types, even types internal to a function or in a 3rd party library you use. At the highest strictness settings, what places are there that are still unsound? The very dynamic behavior that sometimes happens has two main impacts: either the type checker doesn't understand it and on high strictness settings will treat it as an error, or you need to extend the type system with plugins (mypy has plugins for very dynamic libraries that define custom rules).

Practically, most large programs today cannot pass maximal strictness settings, as too many libraries lack type hints. But that's not a fundamental issue with Python's type system, just a needed improvement of the ecosystem. Type hints are gradually growing, and some major libraries have had noticeable improvements in the past year (numpy only started to have some types around 1.20).


> As they are hints it means you can still pass in wrong values in some situations and what types check on one implementation may not type check with the other implementation. So the types are superficial only in Python.

Could you elaborate? When can you pass in wrong values?

The type checkers aren't always precisely the same (some do inference, mypy doesn't), but the cases where they disagree are usually because one is correct and one wants more information, not because you're able to do the wrong thing despite having annotations.


I believe GP means that they are not checked at runtime. But I don't think that matters much for finding bugs, because you run mypy and it will scream that you are assigning the wrong type.

I use it whenever I can, and it has helped me find many bugs in my code (especially failures to check whether a value is None with Optional). It also makes refactoring your code in IDEs that understand types (like PyCharm) much, much easier.
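The Optional case looks roughly like this (function and names invented for illustration):

    from typing import Optional

    def find_username(user_id: int) -> Optional[str]:
        # hypothetical lookup that may fail
        return "ada" if user_id == 42 else None

    name = find_username(7)   # returns None here
    name.upper()              # mypy: Item "None" of "Optional[str]" has no attribute "upper"
                              # (and indeed an AttributeError at runtime)
    if name is not None:
        name.upper()          # fine: mypy narrows Optional[str] to str in this branch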


> I believe GP means that they are not checked at runtime.

Haskell types aren't checked at runtime either, so that can't be it.

Python has runtime type checks that do happen at runtime, so maybe it's Haskell that doesn't have types.


What Haskell and other statically compiled languages do is figure out the types before the program runs. Once that's done, it is impossible for a value to have a different type in the code.

Python is dynamically typed. That means everything, including things like strings or integers, is stored as a structure carrying its type, and the checking of the type happens at runtime. This is a major reason why languages like Python are so much slower than statically typed languages.
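A quick illustration of that runtime dispatch (my own example):

    def double(x):
        return x * 2

    double(3)       # 6
    double("ab")    # "abab": same bytecode, the operand type is inspected at runtime
    double(None)    # TypeError, raised only when this line actually executes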

The annotated types in Python provide a mechanism for figuring out types before the code is run. This helps find bugs, but these types are not used during normal operation[1], which doesn't stop them from being useful for finding bugs in the code or helping with refactoring.

[1] There are some packages that implement runtime checks, but IMO they are a waste of time. Basically they ensure that type problems a tool like mypy would detect will also cause your code to crash. It adds a performance penalty as well.

There's also mypyc, code that compiles python code that using types increasing its performance. It is currently used to compile mypy increasing its performance by a factor of 4 I believe. It also makes some python features unavailable if you want to use it (https://mypyc.readthedocs.io/en/latest/differences_from_pyth...)


> The checking of the type happens at runtime.

Mypy checks the type before runtime, just like GHC does.

Python also checks during runtime, but Haskell doesn't have that feature. Right?

> This is major reason why languages like Python are so much slower than statically typed languages.

Julia's dynamic typing, and hence its ability to do specialization using runtime information, is part of why it can (sometimes) beat Fortran in numerical performance.


> Python also checks during runtime, but Haskell doesn't have that feature. Right?

I don't know Haskell so I can't answer that, but typically a statically typed language doesn't need that.

Sometimes languages still provide runtime checks, typically when their type system is lacking. For example in Go, if you use the interface{} type, the type checking happens at runtime. That's why its use is discouraged: it can be a reason your code slows down.

> Julia's dynamic typing, and hence its ability to do specialization using runtime information, is part of why it can (sometimes) beat Fortran in numerical performance.

I'm also not familiar with Julia, but from what I've read, for numerical types Julia uses native types instead of objects like Python does.

It looks like well-written Julia code enables the interpreter to infer types, and the JIT can then use that to optimize the code. Python doesn't have that functionality, at least not yet.


I had a situation where I gave a method argument a type hint, then called that method with a completely different type by mistake. The type checker didn't pick up the error, so I got a runtime exception where I'd expect a type checker to catch the issue.

I don’t have the specific example; it was something I observed not so long ago. I said "well, that's shit", fixed my code issue, and moved on. Not being a huge fan of Python outside of small scripts, and only using Python when told to use Python, I took little interest in the detail, as it was a case of just get the job done and move on.

It’s been a while since I read the paper I shared, but I think it mentions something similar, if I remember correctly.


I like typing in Python because I can choose how much I want to use it. If I want to script something, or prototype something, or test something in a repl, no overhead of typing needed.

It's also been my experience (C(++), Kotlin, Swift) that as the complexity of typing increases, 10% of the time I spend servicing types accounts for 90% of the sites I need typing, and the other 90% of the time spent on types is me cursing at the compiler trying to make it accept the odd 10%.

An optional type annotation system allows ME to pick where I want that effort/payback balance to be on a case-by-case basis.


I started a new project in Python in late 2019. Coming from a C++/Java background I elected from the start to use type checking and data classes heavily.

The project has now scaled to multiple developers and many thousands of lines of code. If I compare my experiences this time around with past experiences in Python, I feel those early choices are paying big dividends in terms of code readability and correctness.

In particular:

The ergonomics of data classes is great (IMO). They avoid a lot of boilerplate, integrate well with type checking and auto completion, and provide clear context to anyone reading the code.

I’ve found Mypy a bit flaky (sometimes it misses errors it seemingly should catch) and some things that cannot be expressed in the type system to be pain points, but overall having type annotations present and enforced helps document code, works well with IDE auto completion, and occasionally picks up errors that would have otherwise been missed.

In particular I’ve found it’s picked up a lot of errors around missing None checks (nullability). These seem to be particularly easy to miss in unit tests.

I’ve also found that PyCharm’s built-in code checking has often been able to pick up and highlight problems it would have otherwise missed, thanks to the type annotations. Having early feedback on problems is a nice productivity boost.

One hint: if you do use Mypy, make sure to enable check-untyped-defs. You’ll get much better coverage.
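For reference, that flag and a few related strictness options can live in a mypy config file. The exact set below is just one plausible starting point, not a canonical setup:

    # mypy.ini
    [mypy]
    # type check the bodies of unannotated functions too
    check_untyped_defs = True
    # require annotations on new functions
    disallow_untyped_defs = True
    # "x: int = None" must be declared Optional[int]
    no_implicit_optional = True
    # flag functions that leak Any into return values
    warn_return_any = True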


Great! I am so glad more people see types as a useful tool. Personally I see them as essential, so I struggle to understand the mindset of people who do not want to use them. I understand that people think it feels like a lot of work, but that is nothing compared to the 90% of time that people who don't use types spend on reading logs and fixing the same issues every day. I assume people have some sort of amnesia, thinking «oh, the value was undefined, I have never seen THAT error before», but that is probably just me being bitter. So I was wondering, since you seem like a super-fresh convert: did you have any specific reservations originally, and was there any specific solution to an issue that finally changed your mind?


> I struggle to understand the mind set of people who do not want to use them.

I've posted a few reasons before. I think I've refined the list slightly since last time:

* People's first exposure to programming often came from using C++ or Java, typically an old version with really crappy type errors. Schools are often conservative about updating their tech stacks.

* Types are most useful when reading or changing code. New learners are mostly writing new code.

* Writing `Person person = new Person()` seems highly redundant. In the kinds of small programs you are writing as a newbie, you are likely to write a lot of this.

The reasons so far make types seem like overhead: you are having to do more work and getting little in return for doing so. The next two get a bit deeper.

* In the small codebases you'll have as a newbie[0], a disproportionate amount of the codebase is interacting with the outside world. The outside world is untyped. In both typed and dynamic languages, you have to shuttle this untyped data into your language's type system. But when there is a type error in this process, I find it is generally easier to debug in a dynamic language. That makes sense to me: they need to have good workflows for dealing with runtime type errors.

* Classes tend not to teach technique[1]. Students are expected to invent it on their own as they learn programming. As professionals, we often don't teach technique to each other either. If you invent techniques that leverage the strengths of types, then the appeal of types will seem pretty obvious. If you don't, you may go years before you are exposed to these techniques. If you invent techniques where static types get in the way, then you may find static types a severe hindrance to getting things done.

[0] I think this also applies a lot of the time when starting a new project.

[1] By technique I mean the steps you do to produce working code. This doesn't just include generating new code, but understanding existing code and debugging code.


That makes a lot of sense. And I think the lack of teaching technique has to do with the broad variety of programming tasks. Techniques are way too specific to the domain, and we won't see more of them in school until «computer science» is split into more and more fine-grained studies or gets a bigger part of existing studies, like math has now. E.g. now I feel embedded, backend and frontend are distinct «directions». In the future I expect it to be further branched into, for example, finance, healthcare, chemical processing, and public infrastructure. This is gradually happening all the time, though, and «true» computer engineering will always exist, but I think it will be more and more niche, just like embedded and OS-level programming is more niche now but was the only option before. Anyway, I think types are key to making programming safe enough to bring it to this kind of broader workforce (in some distant future, with good IDE support).


Glad to share, but I'm not sure I'm representative - I'm a data analyst who is learning more about backend engineering.

I think if you're a SWE, have a CS education, or have worked with a compiled language before, you know exactly what kind of problem typing is solving. For me, I have to first discover the problem, slowly learn that there is a solution out there, and finally realize that my problem can be solved by this solution.

The formalization of the problem space + the discovery of solutions + the final matching of solutions to problems are non-trivial. I'm a bit embarrassed that it took so long, and a bit sad. But it is what it is.


That makes perfect sense actually, thank you for taking the time to answer. And I'm indeed a SW eng with a degree, so spot on. I do something similar to data analysis «for fun»/non-professionally, but my day job is payment/banking systems. Live payment systems and settlement systems are both very different from analysis tasks. They have all the focus on «correctness» and on integration interfaces with other organizations (which are ALWAYS wrong because «they» don't understand types/strict interfaces). Analysis, in my experience, is dealing with tons of dirty data sources, not in real time, where the code may even run just once (not millions of times). I always go to cleaning the input and validating the data, but I suspect that is because of my day job, and I feel it is less valuable to put in this effort there than it is for my day job. So I have myself also skipped using types many times to learn whatever library I need or to iterate quickly (minutes compared to days). For example, it is very comfortable to just let pandas give you 0 for NaN when you just want a plot. So I understand it takes more time because the value of types is probably much lower for you in absolute terms.


> I struggle to understand the mind set of people who do not want to use them

It usually comes from people who have never had experience with statically typed languages, don't work on projects with a large dev team or across teams, or don't work with a lot of 3rd party libraries. They don't realize the maintenance nightmare that types can ease.

Sure, if you write a one-off script, or something that is rarely run, don't use types; but when you have a 50k-line codebase that is actively developed and maintained, it's another story.

I'm so glad TypeScript changed that mindset of frontend developers and is supported by a lot of prominent open source projects.


That might be where it usually comes from, but that hasn't been my experience. My Python subcommunity's groupthink was against type annotations for years, despite the groupthink also saying that statically typed languages are better, all things equal, and most of the loudest folks having worked for some FAANG company.

Thinking that typing is good and buying that a pasted-on-after-the-fact typing system is good are different matters.


In my experience, it was quite jarring to see a Python function with a bunch of : -> Union[int,str], Optional, Any, etc.

Change is always hard.


I actually don't understand Optional in that mix. "Optional" basically means the value can be None. You typically use it like Optional[str], which means the value is a string or None.


Very true, but there is a gradient there. I have an example of the opposite extreme. A few years ago I worked with a person who would rather continue working in PHP 5.3 than start using Scala, which was hot at the time. He literally copy-pasted an old project instead of starting a new one in Scala (or ANY other language; this was a payment system and PHP does not even have strong enough crypto support in old versions). And I can tell you for sure he had at least 1 year of exposure to Scala, doing at least 1 full project. His reasoning for copy-pasting old PHP: he was «just too stupid to understand Scala». It is an extreme example, but most people I've met who don't like strong typing fall into that category (lazy? certainly not stupid), and I just don't know what to say to them, or what to think myself to accept them, or even how to use whatever hidden powers these people have, other than creating circular dependencies for the company by making obvious mistakes they themselves spend most of their time fixing. I'm not actually bitter, just frustrated that so many seem to just not care.


A lot of it depends on the work you do and what other kinds of tools you use. If you’re working with simpler code and use something like flake8 or have a reasonably fast test cycle, you can be pretty productive either way. If you work with harder-to-type data structures (e.g. nested JSON or XML), you’ll see fewer wins from typing than from validation (this is why Django apps tend to have fewer issues this way: the database validation reduces the amount of data motion before something validates it).

That’s not to say that typing isn’t useful - I use it daily, especially since VSC has made it a lot faster than mypy - but I do think the Python community is wide enough that you can find people legitimately saying it is or isn’t a big deal for them as a function of where they work. You also have a certain amount of PTSD from languages like Java, which make typing far more labor-intensive and, through some combination of limited expressiveness, dynamic logic, and culture, end up making the benefits less than anticipated.


I can absolutely see that there are many cases where the code is not run many times or the dataset is dirty, and that types actually get in the way. As an example, I cannot stand using stuff like VSCode with LSP because it is unbearably slow for both linting and code completion. I feel PyCharm is faster (actual typing is slow, but code completion means I rarely type more than 3 characters before completing, which makes it feel faster). I'm mentioning this because I would never dream of typing out a data structure, which is probably because my work is so different from an analysis job. For me, code for data structures should either be generated or be simple enough to type out in a few minutes, and there should be as few of them as possible. In sharp contrast to data engineering work, where data just is dirty because there is so much of it from everywhere. Anyway, in both cases I don't want to type a full variable name without some completion, ever, much less 100 times, which it seems some people are completely fine with. I'm showing my inexperience with Python here, but I would love it if there existed something like the F# type providers so I do not have to type so damn much :D


One of the PyCon typing talks was about type checking JSON. TypedDict or pydantic can both be used to deal with that fairly well now. For complex JSON, sure, the type might be long, but you write it once and use the TypedDict name after that. It feels similar to needing to write a protobuf or thrift def. Some of the JSON I work with is even given a spec with protobuf and loaded that way, which gives you type support; protobuf can generate Python type stubs with mypy-protobuf. One practical annoyance is that many libraries that use protobuf-generated code don't currently include type stubs. I currently just make the stubs myself, but that is something most people would probably get stuck on; it just needs more libraries to update their build scripts to include the stubs in wheels.
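A small sketch of the TypedDict approach for JSON (the payload shape here is invented):

    import json
    from typing import List, TypedDict

    class User(TypedDict):
        name: str
        tags: List[str]

    def load_user(raw: str) -> User:
        data: User = json.loads(raw)  # json.loads returns Any; we assert the shape
        return data

    user = load_user('{"name": "ada", "tags": ["math"]}')
    user["name"].upper()  # the checker knows this is a str
    # user["nme"]         # mypy: TypedDict "User" has no key "nme"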

XML could have a similar approach although I'm not sure if anyone has done it already.

Second talk (30 minute mark) of this video, https://www.youtube.com/watch?v=ld9rwCvGdhc


Yes, I use pydantic for that and it’s quite helpful - one of my favorite libraries in recent years. I do think your last point is key: this seems to be a community maturity point because a fair fraction of people will bounce off of a bad first experience and say it’s not worth the cost.


If you don’t have a really flexible type system, strong typing makes certain things complicated.

Think of, for example, a library like Pandas. Building something like that with a not-very-flexible type system would result in a much more cumbersome user experience. In many cases you would likely feel the types just getting in your way while working interactively.

Microsoft has done great things with C# on this frontier, but it has also taken many years and probably a lot of brainpower.


I personally don’t think Python type hints are particularly useful for solo projects, but are very useful for communicating the intent of my code to others.


Sometimes, "others" is yourself in a few months trying to look at the code :)

If my project is something beyond a small script that I plan on working on, on and off, for a while, I do find that annotations definitely help. Also, they fuel IDE completions between functions if you annotate your inputs.


I've never had problems that strong typing would have prevented, and I see no use for it in languages that originally didn't provide it. Type annotations add noise to the code and require more effort to read around.


Above all, it's much more usable now, especially with 3.10 around the corner:

- you can use list[] instead of typing.List[]. Same for dicts or sets.

- you will be able to use bool | str instead of typing.Union[bool, str]

- mypy has saner defaults, is better supported by IDEs, and is way faster

- a lot of edge cases are now covered, including protocols for duck typing, vartype, etc.

- fantastic libs such as pydantic, fastapi and typer make using them a ton of fun
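Concretely, the first two bullets look like this (PEP 585 landed in 3.9, PEP 604 in 3.10):

    # Python 3.9+: builtins usable as generics, no typing.List needed
    def tally(words: list[str]) -> dict[str, int]:
        counts: dict[str, int] = {}
        for w in words:
            counts[w] = counts.get(w, 0) + 1
        return counts

    # Python 3.10+: X | Y instead of typing.Union[X, Y]
    def parse_flag(value: bool | str) -> bool:
        return value if isinstance(value, bool) else value.lower() == "true"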


Use an editor with an LSP client, along with pyls-mypy[1], and you'll get type checking support as you write your code. I just use the typing module and not Mypy, and it still works great.

[1] https://github.com/tomv564/pyls-mypy


We added mypy last year and it’s been a huge benefit. It catches a lot of bugs and makes onboarding easier.

However, mypy is woefully limited when compared to TypeScript. Granted, TypeScript is maintained by a trillion dollar company while mypy is a community project.


Is this eliminated class of bugs just generally the ones that aren't caught by tests but could be caught by static type annotations, or was there a discussion of the kinds of bugs that had some convincing details?


I think there's considerable overlap between classes of bugs that could be caught by static type checking and those that could be caught by unit tests. That said, I view the two as complementary and use both extensively.

One particularly interesting (though in my experience, rare) class of bugs are typing errors _in unit tests_. For example, an API under test expects one thing and a unit test supplies something else. If the test passes, it's hard to spot the problem without static type checking.
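A contrived version of that (names invented): the test is green at runtime even though the argument type is wrong, and only the static checker complains:

    def get_order(order_id: str) -> dict:
        # hypothetical API: order ids are strings everywhere else in the system
        return {"id": order_id}

    def test_get_order() -> None:
        # mypy: Argument 1 to "get_order" has incompatible type "int"; expected "str"
        order = get_order(12345)
        assert order["id"] == 12345  # ...but the assertion passes, so the test is green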


Playlist: https://www.youtube.com/playlist?list=PL2Uw4_HvXqvYk1Y5P8kry... This page has full titles of the talks. No abbreviation.


way better! thank you!


As a young ambitious man I always knew programming was going to be my passion, but life had different plans and I ended up in less technical IT-related roles. I used to think that this ever-revolving door of advancement in tech would create an interesting career, but now that I realize programming is not my career path, I feel that in my mid 30s with a family it could become extremely cumbersome for those in the field.

I'm interested in the opinions of experienced, mid- to high-level devs with 15+ years of experience: how do the constant evolutions of technologies and their associated "metas", such as TDD and project mgmt systems such as agile, jibe with that mentality? Does the chase/"circle around" ever get old? I try to keep up with it all through HN, but wasn't the basis of a lot of languages such as Python and PHP to reduce that and get things to MVP? Maybe I'm just way off course.


I don't think it's cumbersome. After doing stuff long enough you get to the point where you realize you can't know everything, you are not perfect, and more importantly, NO ONE IS.


Also, if you go back and read the software engineering conference proceedings from 1968 and 1969, David Parnas' classic 1970s papers, and other instances of this foundational material, you realise that except that we have faster computers, higher-level languages, and version control, software engineering really hasn't changed since then.

We're still struggling with team organisation, verifying features with users before building them, choosing proper abstractions, testing for reliability and understanding, resource management, documentation, modeling the domain clearly, prototyping the right way, and so on.

All the actually difficult things today are just the same as they were 40-50 years ago.

While on the surface things look like they move quickly, what makes one a good software engineer is the same as it's always been.

Good software engineering is not about today's favourite technology. It's about building things that work and which people actually need. That is a hard-earned skill and you don't get it by chasing shiny things.


It's horrible and I wish I'd kept programming as a hobby. The continual 'evolution' is ridiculous, as we build and rebuild the same things with newer tools. Often project management is more important than building things and build tools became an obsession as more layers started to come between writing code and seeing results. Agile, TDD, Testing, Design Patterns are all topics that muddied the waters. And with people on both sides of the fence with these practices, it gets very tiring and you just want to find refuge outside of it all. A place where the tools don't change each year and a half, or where the best practices can easily be agreed upon.

Yes, it gets old and I can't wait to retire.


I’ve been a coder for over two decades, and here’s my take on this. First, good employers want smart engineers, and bet that they can learn things quickly. You don’t need to have N years of experience with Kubernetes. Second, most of your jobs later in your career will either come through your network, or will decide on you based on references from your network. Your network is the most important thing. Third, I’ve seen “new hot technologies” come and go without me ever learning them. Many are fads; you don’t need to learn them all, or even most of them. BUT, you do need to continuously learn, and ideally be able to display that in some way.


1. Learn something, be really good at it. Have demonstrable experience in that thing.

2. Learn a handful of things a little less well. Diversify your learnings. Some database stuff, some devops stuff, maybe some game stuff, network stuff, whatever. Try to have demonstrable experience in this stuff as well.

3. In 3-5 years, repeat step 1 and 2.

Do this over and over and you'll be competitive in tech indefinitely. It doesn't really matter what you pick to be good at or care about: pick something current (but not bleeding edge), and then dabble in a bunch of other current technologies and tools.

In general, don't worry about things like project management. Every company bastardizes any written practice, and every company is very different. Just do what your employer does, and then if you find a new employer, do what they do. Maybe have some opinions and reflect on what you liked and didn't like, because employers might pretend to care what you think about processes and improvements, but it's really all just buzzword nonsense. Regardless of what you pretend, it's all going to go how it's going to go.

r.e. "metas" - Play around with stuff. Spend some time reading about TDD, maybe give it a shot every now and then. Understand why it's good and understand why it's bad. Stay away from "never"s and "always"s.

As I write this, I realize you're already doing half of what I'm suggesting, which is just to pay attention to the industry and the comings and goings of tech/tools. The other half is to learn and get interested in various pieces of tech. If it feels "cumbersome", it might just not be the right career path for you. That's totally okay - the work you're doing now is totally fine! Do what you like. Don't force stuff you don't like. Most of the successful programmers I know think that toying with a new language or tool is fun/interesting. It doesn't feel cumbersome because it's both a job and a hobby.


> In general, don't worry about things like project management.

I agree with most of what you say but this sounds like terrible advice. Fixing bugs in software is cheaper the earlier they are fixed. Cheaper in testing than in prod, cheaper in design than in testing, and so on.

The cheapest place to fix them are in the processes that lead to the design.

Fixing things in the project management process is a hugely levered activity.


> Fixing things in the project management process is a hugely levered activity.

Respectfully, you're absolutely right, but at many, many employers devs' opinions on project management don't matter and are totally ignored. That's why I say don't worry about it. The devs I've been around who get upset and dogmatic about project management principles just get burnt out faster, because employers don't care/won't change. "Don't care" doesn't mean "don't know about", but more "don't get worked up about" or "don't be strongly opinionated". Of all the hills that exist in software to die on, project management is pretty much the worst one.


I hate it when YouTube channels put the subject of the video at the end of the title. It gets cropped and I can’t tell which videos are of interest to me without clicking on each one to see what it’s about.


This sounds like something that "increases engagement metrics" ...



Found this annoying too. I manually set the width of the playlist div to like 2000px just so the titles wouldn't get cropped.


Agreed, hover over it if you have a mouse and the tooltip should show the full name.


I’m on a phone :(


Turn it sideways


For those that care about performance while using Python,

"From 3 to 300 fps: NES Emulation in Python and Cython"

https://www.youtube.com/watch?v=3of9pY2vovA

"Restarting Pyjion, a general purpose JIT for Python- is it worth it?"

https://www.youtube.com/watch?v=YFeUUdKBrJ8

"Python Performance at Scale - Making Python Faster at Instagram"

https://www.youtube.com/watch?v=xGY45EmhwrE

and for some hardware fun,

"More Fun With Hardware and CircuitPython - IoT, Wearables, and more!"

https://www.youtube.com/watch?v=GnteZjiHVdA


Pyjion has some excellent documentation that makes the whole thing seem approachable/introspectable. (See the talk.)

See optimizations breakdown here https://pyjion.readthedocs.io/en/latest/optimizations.html


Question about YouTube: YouTube video thumbnails abbreviate long video titles to "..."

For example: TALK / Mariatta Wijaya / Oops! I Became an Open..

Is there a way to fix this?


The playlist is easier, but if you like the grid view, pasting the following into the browser console seems to work:

  style = document.createElement('style');
  style.innerText = 'html:not(.style-scope)[typography] { --yt-link-line-height: 4rem; }\n'
    + 'ytd-grid-video-renderer #video-title.yt-simple-endpoint.ytd-grid-video-renderer { -webkit-line-clamp: 10 !important }';
  document.getElementsByTagName("head")[0].appendChild(style);


Not the best, but: right click on one of the titles and inspect element; it should put you on a #video-title. You then just want to uncheck the -webkit-line-clamp and set a very high max-height, like 999999px.

Or just add the following to any CSS injecting extension:

#video-title { -webkit-line-clamp: 999 !important; max-height: 999999px !important; }



I'd fix it by skipping the talks and looking at the commit histories instead. Works for every PyCon and is quite instructive.


Been waiting for these!

There's a lightning talk from day 1 about the new Flask that explained some of the performance improvements, which actually answers a question I've had since its release.

Also really enjoyed the talk on the new profiler Scalene, I would love to try it out ASAP.

Looking forward to the rest.


where is that lightning talk link?


Phil Jones - What’s new in Flask (starts at 44:48): https://youtu.be/5zEn3Jta2Dg?t=2688


Any recommended talks to watch from this?


I liked the pyKnit presentation; it was a very relaxing experience, mostly because even though I don't knit, I've met very nice people who knit at conferences. https://www.youtube.com/watch?v=y7LEN2oqpkM


PyCon videos are such quality content. Best use of time; I always learn so much. Is there an equivalent JS conference with content like this? JS newbie here.


Any suggestions about good presentations?


Thanks for sharing this link


There’s some great stuff on there. I’m blown away by the pace of innovation in the Python community.


What stuff blew you away?


As someone who writes lots of code that wrangles messy raw data inputs via DataFrames, the talk on Pandera type checking for DataFrames was very interesting. Dunno if it will work in practice for my use case, but I’ll definitely be checking it out.


Cool -- what talk was that? Could you share a link? I've looked through the YouTube uploads but can't find the Pandera talk based on the titles.



Too many conference recordings without a touch on refactoring the broken map function?

Wish I could have a real `array.map`, just like in JavaScript.


Is this what you wish?

    ['10', '10', '10'].map(parseInt) === [10, NaN, 2]


What's your point here?


Scheme, which was an inspiration for JavaScript, works the same way Python does in this case. See page 57 of the spec: http://dspace.mit.edu/bitstream/handle/1721.1/5600/AIM-848.p...

That was written in 1985. If you're thinking of this as a mistake and waiting for it to be corrected, you're going to be waiting a long time.

Edit: Although it would be awesome if Python had extension functions like Kotlin, which would make this super easy to fix (either in your own code or the standard library). But that would require static typing to be more baked into the core language.


why is it broken?


Common sense says you just need to put a function into map, right?

x.map(fn)

In Python, map takes some arguments whose order and types confuse me.

Secondly, it breaks my thinking process.
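For comparison, Python's map takes the function first and passes it exactly one element from each iterable per call, so the parseInt gotcha from upthread can't occur:

    # map(function, iterable) -- the function receives elements only, never an index
    list(map(int, ['10', '10', '10']))  # [10, 10, 10]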


Inline code completion/IDE integration can show the parameter names or docstring inline in the editor. That helps me with map, specifically. :)

With that configured, a window pops up in Vim and says we have:

    map(func: Callable[[_T1], _S], iter1: Iterable[_T1], /) -> Iterator[_S]
    ——————————————————————————————————————————————————————————————————————————————
    map(func, *iterables) --> map object

    Make an iterator that computes the function using arguments from
    each of the iterables.  Stops when the shortest iterable is exhausted.

    ——————————————————————————————————————————————————————————————————————————————

There are things to criticize here. Note that the type hinting (this is Python 3.8 btw) doesn't agree with the full generality of the stated function interface (the multiple iterables part).


Having a different interface doesn't mean it's broken...


What's the TL;DWWWW?

(Too Long; Didn't Waste a Whole Week Watching)


I watched several talks at 2x speed. It's not comfortable, but some speakers speak so slowly that 2x sounds almost normal. Also, that way I can skip the long introductions and slow down where the interesting part starts.



