Hacker News

I've been programming for 34 years - 25 of those professionally. In the early days of my programming life it was super important to know how the code would run on the metal (and much of my time was spent writing assembler for that reason). Processors were slow, memory access was slow, and memory quantity was small (32K on my first computer). I spent a decade in the games industry building graphics engines and hand-interleaving assembler to get the maximum amount of juice out of whatever console I was writing for (I beat the performance figures Sony quoted for the PlayStation 1).

Then OOP happened and many of the early guarantees about how something ran went away; abstracting everything meant we couldn't reasonably know what was happening behind the scenes. However, that wasn't the biggest issue. The performance of CPUs and memory had improved significantly, to the point where virtual method calls weren't such a big deal. What was becoming important was the ability to manage the complexity of larger projects.

The big deal has come recently with the need to write super-large, stable applications. Sure, if you're writing relatively small applications like games or apps with limited functionality scope, then OOP still works (although it still has some problems). But, when applications get large the problems of OOP far outstrip the performance concerns. Namely: complexity and the programmer's inability to cognitively deal with it.

I started a healthcare software company in 2005 - we have a web application that is now on the order of 15 million lines of code. It started off in the OOP paradigm with C#. Around 2012 we kept seeing the same bugs over and over again, and it was becoming difficult to manage. I realised there was a problem. I (as the CTO) started looking into coping strategies for managing large systems; the crux of it was to:

* Use actor-model-based services - this helped significantly with cognition. A single thread mutating a single internal state object: nice. Everyone can understand that.

* Use pure functional programming and immutable types
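To make the first point concrete, here's a minimal sketch of the actor idea in C#. This is a hypothetical toy (not the language-ext `Actor` API): one inbox, one message loop on one thread, one private state object. No locks are needed because only the actor's own loop ever touches the state.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Toy actor: a single thread drains the inbox and is the only code
// allowed to mutate the single internal state object.
public sealed class CounterActor
{
    readonly BlockingCollection<int> inbox = new();
    int state;                          // the single internal state object
    readonly Task loop;

    public CounterActor() =>
        loop = Task.Run(() =>
        {
            foreach (var msg in inbox.GetConsumingEnumerable())
                state += msg;           // all mutation happens on this one thread
        });

    // Callers never touch the state; they only send messages.
    public void Tell(int msg) => inbox.Add(msg);

    public int Stop()
    {
        inbox.CompleteAdding();         // no more messages
        loop.Wait();                    // drain the inbox
        return state;
    }
}
```

Many callers can `Tell` concurrently, but because all mutation is serialised through the one loop there are no data races to reason about - which is exactly the cognitive win described above.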

The reason pure functional programming is better (IMHO) is that it allows for proper composition. The reason OOP is worse (IMHO) is because it doesn't. I can't reasonably get two interfaces and compose them in a class and expect that class to have any guarantees for the consumer. An interface might be backed by something that has mutable state and it may access IO in an unexpected way. There are no guarantees that the two interfaces will play nicely with each other, or that some other implementation in the future will too.

So the reality of packaging state and behaviour together is that there's no reliable composition. What happens instead is that, as a programmer, I have to go and look at the implementations to see whether the backing types will compose. Even if they will, it's still brittle and potentially problematic in the future. This lack of any kind of guarantee, and the ongoing potential for brittleness, is where the cognitive load comes from.
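A contrived C# example of the problem (all names are hypothetical): nothing in an interface's signature says whether an implementation is stateful or does IO, so the composed class can't promise anything to its consumer.

```csharp
using System;

// The signatures look like pure queries...
interface IPricer   { decimal Price(string sku); }
interface IDiscount { decimal Apply(decimal price); }

class FixedPricer : IPricer
{
    public decimal Price(string sku) => 100m;
}

// ...but this implementation hides mutable state:
// the answer changes every time you call it.
class EscalatingDiscount : IDiscount
{
    int calls;
    public decimal Apply(decimal price) => price - (++calls);
}

class Checkout
{
    readonly IPricer pricer;
    readonly IDiscount discount;
    public Checkout(IPricer p, IDiscount d) { pricer = p; discount = d; }

    // Composes the two interfaces; *looks* like a pure query, but isn't.
    public decimal Total(string sku) => discount.Apply(pricer.Price(sku));
}
```

Calling `Total("sku")` twice returns 99 then 98 - the interface told you none of that, which is why you end up reading the implementations.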

If I have two pure functions and compose them into a new function, then the result is pure. This is ultimately (for me) the big deal with functional programming. It allows me not to be concerned about the details within, and gives me stable and reliable building blocks which can be composed into larger stable and reliable building blocks. Turtles all the way down, pure all the way down.
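In C# that composition can be written as a one-line extension method (a generic sketch, not any particular library's API):

```csharp
using System;

static class Compose
{
    // Compose two functions left-to-right: if f and g are pure,
    // the composite is pure by construction.
    public static Func<A, C> Then<A, B, C>(this Func<A, B> f, Func<B, C> g) =>
        a => g(f(a));
}
```

Given `Func<int, int> inc = x => x + 1;` and `Func<int, int> sq = x => x * x;`, the composite `inc.Then(sq)` is just as trustworthy as its parts - no implementation-reading required.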

When it comes to performance I think it's often waaaay overstated as an issue. I can still (and have done) write something that's super optimised, but make the function that wraps it pure, or at least box it in some way that makes it manageable. Because our application is still C#, I had to develop a library to help us write functional C# [1]. I had to build immutable collections that were performant - the cost is negligible for the vast majority of use-cases.
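The "optimised inside, pure outside" pattern looks something like this sketch (a made-up example, not from language-ext): the internals mutate a scratch buffer in place, but nothing mutable escapes, so the function is observably pure - same input, same output, no side effects.

```csharp
using System;

static class Stats
{
    // Pure wrapper around an imperative core.
    public static double Median(ReadOnlySpan<double> xs)
    {
        var buf = xs.ToArray();         // private copy: caller's data untouched
        Array.Sort(buf);                // fast in-place sort on the private copy
        int n = buf.Length;
        return n % 2 == 1
            ? buf[n / 2]
            : (buf[n / 2 - 1] + buf[n / 2]) / 2.0;
    }
}
```

Callers can compose `Median` freely with other pure functions; the mutation is an invisible implementation detail.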

I believe our biggest problem as developers is complexity, not performance. We are still very much working with languages that haven't really moved on in 20+ years. Yes, there are improvements here and there, but we're probably writing approximately as many lines of code to solve a problem as we were 20 years ago, except now everyone expects more from our technology. And until we get the next paradigm shift in programming languages we, as programmers, need coping strategies to help us manage these never-ending projects and their ever-increasing complexity.

Does that mean OOP is dead as an idea? No, not entirely. It has some useful features around extensible polymorphic types. But shoehorning everything into that system is another billion-dollar mistake. Now when I write code it always feels right and correct; I always feel like I can trust the building blocks and can trust the function signatures to be honest. The OOP paradigm, by contrast, always left me feeling like I wasn't sure. I wish I hadn't lost 10+ years of my career to writing OOP, tbh, but that's life.

Is functional programming a panacea? Of course not, programming is hard. But it eases the stress on the weak and feeble grey matter between our ears to focus on the real issue of creating ever more impressive applications.

I understand that my reasons don't apply to all programmers in all domains. But when blanket statements about performance are wheeled out I think it's important to add context.

[1] https://github.com/louthy/language-ext



