
I like the philosophy of Rust (and some other languages) of "safe by default". Rust variables are immutable by default, but can be made mutable using "let mut". Rust is memory safe by default, but can be made unsafe using the (drumroll) "unsafe" keyword. As for null references, other languages such as Kotlin and TypeScript still have them but enforce strict null checking: they force you to verify a reference is not null before using it, but without the verbosity of an Optional or Maybe type.

Functional programming is great, but it's far from optimal in many situations. For example, implement a self-balancing binary search tree in a common imperative language with mutability. Then try implementing it again using pure functional programming. It's certainly possible, but it takes a lot more work when you're not allowed mutability.
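To make the comparison concrete, here is a minimal Haskell sketch of the pure version of plain (unbalanced) insertion; rebalancing is where the real extra work starts. Every insert copies the nodes along the search path instead of mutating them in place:

    data Tree a = Leaf | Node (Tree a) a (Tree a)

    -- pure insertion: the nodes on the path from the root to the
    -- insertion point are rebuilt, everything else is shared
    insert :: Ord a => a -> Tree a -> Tree a
    insert x Leaf = Node Leaf x Leaf
    insert x t@(Node l y r)
      | x < y     = Node (insert x l) y r
      | x > y     = Node l y (insert x r)
      | otherwise = t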




A note regarding your binary tree example:

I'd argue that FP is actually great for working with tree-like data structures. For example, the common implementations for sets and maps are based on balanced trees. IME it's graph-like or array-based stuff where the paradigm struggles.


I don't disagree that manipulating deep arrays and graph data can be harder in an immutable functional language, at least with the default operations. However, many FP languages have powerful and mature libraries that let you work with deep data structures very gracefully and correctly. Clojure has Specter, Haskell has lenses, and so on.
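For a taste of what that looks like, here is a small sketch using Haskell's lens package (the records and field names are made up purely for illustration):

    {-# LANGUAGE TemplateHaskell #-}
    import Control.Lens

    -- hypothetical nested records, just to show a deep update
    data Address = Address { _city :: String } deriving Show
    data Person  = Person  { _name :: String, _addr :: Address } deriving Show
    makeLenses ''Address
    makeLenses ''Person

    -- update a deeply nested field without hand-written record plumbing
    relocate :: String -> Person -> Person
    relocate newCity = addr . city .~ newCity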

For me, the FP dream (and we aren't quite there yet) is that your program is an airtight declaration of the desired functionality, and then we get out of the way and let the compilers and profiling runtimes optimize away.

How cool would it be if runtime stats could let the JIT either unroll and inline a function when it's called on small arrays from network buffers, or push it to the GPU when it's frequently called on huge in-RAM buffers? We're not there yet, but as the tower of complex tooling needed to work in a compute-parallel world keeps growing, we are going to rely more and more on the compiler/runtime to use that power optimally.


See my cousin comment; I just wanted to make the point that working with tree-like data structures in FP feels rather natural.


Tree data structures are pretty prevalent in compilers. Something like that, perhaps?


> For me, the FP dream (and we aren't quite there yet) is that your program is an airtight declaration of the desired functionality, and then we get out of the way and let the compilers and profiling runtimes optimize away.

I would guess we are at least 100 years away from this. Current imperative compilers miss obvious optimizations, and FP typically needs many layers of optimization before it becomes efficient. Any one layer failing to make progress results in a very slow program.

Currently, FP compilers generally don't even use successors for natural numbers, because it would be slow. Typical programs use data structures more complex than numbers that would need to be optimized into mutable structures. If FP compilers can't do it for integers, they definitely can't do it for FP programs as a whole.
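To make the successor point concrete, this is the representation that gets avoided (a toy Haskell sketch): with Peano-style naturals, even addition walks the whole first argument, so a compiler would have to recover machine integers from this to be competitive.

    data Nat = Z | S Nat

    -- O(n) in the size of the first argument, versus a single machine add
    add :: Nat -> Nat -> Nat
    add Z     m = m
    add (S n) m = S (add n m)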


Great for reading trees, but not for cases where nodes need to be added or removed. There's a "zipper tree" structure, but it's kind of a pain to implement:

https://github.com/xdavidliu/fun-problems/blob/main/zipper-t...
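For reference, the core of a binary-tree zipper is small; it's the bookkeeping around it that gets tedious. A generic sketch (not the linked code):

    data Tree a  = Leaf | Node (Tree a) a (Tree a)
    data Crumb a = LeftCrumb a (Tree a)    -- we went left; remember value and right sibling
                 | RightCrumb a (Tree a)   -- we went right; remember value and left sibling
    type Zipper a = (Tree a, [Crumb a])

    goLeft, goUp :: Zipper a -> Maybe (Zipper a)
    goLeft (Node l x r, bs) = Just (l, LeftCrumb x r : bs)
    goLeft (Leaf, _)        = Nothing

    goUp (t, LeftCrumb x r : bs)  = Just (Node t x r, bs)
    goUp (t, RightCrumb x l : bs) = Just (Node l x t, bs)
    goUp (_, [])                  = Nothing

    -- replace the focused subtree, e.g. to add or remove nodes locally
    putFocus :: Tree a -> Zipper a -> Zipper a
    putFocus t (_, bs) = (t, bs)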


Pay a log(n) slowdown and you have arrays in all their glory (and more!)

https://hackage.haskell.org/package/containers-0.6.5.1/docs/...
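Concretely, a small sketch with Data.Sequence from containers (I'm assuming that's roughly what the link points at): it's a persistent finger tree, so indexing and updates are O(log n) and both ends are cheap.

    import qualified Data.Sequence as Seq
    import Data.Sequence (Seq, (|>), (<|))

    xs :: Seq Int
    xs = Seq.fromList [1, 2, 3, 4]

    -- "array-ish" operations, each O(log n) or better, all persistent
    third  = Seq.index xs 2        -- 3
    bumped = Seq.update 2 30 xs    -- fromList [1,2,30,4]
    grown  = 0 <| (xs |> 5)        -- fromList [0,1,2,3,4,5]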


If we really need to, we can just use actual arrays, though:

https://hackage.haskell.org/package/primitive-0.7.4.0/docs/D...

That's what I meant: you can mostly use tree-like log(n) data structures rather nicely, but you'll have to isolate your mutating code in stuff like PrimMonad; see:

https://hackage.haskell.org/package/primitive-0.7.4.0/docs/D...

I like Haskell because it lets me write clean interfaces at a high level but fall back to byte-level stuff when needed.
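For instance, a minimal sketch (assuming something like Data.Primitive.Array from the primitive package linked above): runST keeps the mutation local, so the function stays pure from the outside.

    import Control.Monad.ST (runST)
    import Data.Primitive.Array
      (Array, newArray, writeArray, unsafeFreezeArray)

    -- all writes happen inside runST; callers only ever see an immutable Array
    squares :: Int -> Array Int
    squares n = runST $ do
      arr <- newArray n 0
      mapM_ (\i -> writeArray arr i (i * i)) [0 .. n - 1]
      unsafeFreezeArray arr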


It isn't just log(n), it's also L1 vs L3 access times and an extra level of pointer chasing.


Just to confirm, you are talking about L1 and L3 caches, right?


Yes, and the specific advantages of using contiguous in-memory representations for data which is accessed sequentially.


> but without the verbosity of an Optional or Maybe type

Without the safety of having that encoded in the type system.


No, Kotlin/TS do the same thing with different syntax.

See https://kotlinlang.org/docs/null-safety.html#safe-calls (TS is basically the same)


`T` and `T?` being different types is encoding in the type system.

It is a different encoding, though.
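For reference, the Maybe side of that comparison looks like this (a trivial Haskell sketch): absence is a wrapper type rather than a separate nullable type, but either way the compiler won't let you use the value without handling the missing case first.

    import qualified Data.Map as Map

    -- the type says the nickname may be absent; you can't use it
    -- without going through Just/Nothing first
    greet :: Map.Map String String -> String -> String
    greet nicknames user =
      case Map.lookup user nicknames of
        Just nick -> "hi, " ++ nick
        Nothing   -> "hi, stranger"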


Actually, implementing an AVL tree in a purely functional way is easy; I'd say it's easier than doing it in the mutable, in-place way. It will allocate more, but it will have the optimal big-O complexity.

Many other algorithms are much harder, though, especially those requiring fast array indexing (graph search, hash tables, ...)
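To back that up, the whole insertion side fits in a screenful (a sketch, with subtree heights cached in the nodes):

    data AVL a = Leaf | Node Int (AVL a) a (AVL a)   -- Int is the cached height

    height :: AVL a -> Int
    height Leaf           = 0
    height (Node h _ _ _) = h

    node :: AVL a -> a -> AVL a -> AVL a
    node l x r = Node (1 + max (height l) (height r)) l x r

    -- smart constructor that restores the AVL invariant after one insertion
    balance :: AVL a -> a -> AVL a -> AVL a
    balance l x r
      | height l > height r + 1 = case l of
          Node _ ll lx lr
            | height ll >= height lr   -> node ll lx (node lr x r)                  -- single right rotation
            | Node _ lrl lrx lrr <- lr -> node (node ll lx lrl) lrx (node lrr x r)  -- left-right rotation
          _ -> node l x r
      | height r > height l + 1 = case r of
          Node _ rl rx rr
            | height rr >= height rl   -> node (node l x rl) rx rr                  -- single left rotation
            | Node _ rll rlx rlr <- rl -> node (node l x rll) rlx (node rlr rx rr)  -- right-left rotation
          _ -> node l x r
      | otherwise = node l x r

    insert :: Ord a => a -> AVL a -> AVL a
    insert x Leaf = node Leaf x Leaf
    insert x t@(Node _ l y r)
      | x < y     = balance (insert x l) y r
      | x > y     = balance l y (insert x r)
      | otherwise = t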


You're allowed mutability, even in languages like Haskell.


A better example would be to implement an efficient HashMap in pure FP. This is simply impossible unless you fall back to some tree-based solution, which is less efficient.

Interested why this is downvoted... Hashmaps can have O(1) time complexity for lookups and inserts in the imperative world; in pure FP either lookup or insert can be no better than O(log(n)).
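For what it's worth, the standard Haskell answer is the HAMT in unordered-containers, which is exactly the tree-based fallback being described: persistent and hash-based, but underneath still a wide tree (O(log n) with a large branching factor) rather than a flat O(1) table. A small usage sketch:

    import qualified Data.HashMap.Strict as HM

    -- HAMT from unordered-containers: persistent, hash-based, but still a
    -- wide tree underneath rather than a flat O(1) table
    counts :: HM.HashMap String Int
    counts = HM.insertWith (+) "apple" 1 (HM.fromList [("apple", 2), ("pear", 5)])

    main :: IO ()
    main = print (HM.lookup "apple" counts)   -- Just 3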


> in pure FP either lookup or insert can be no better than O(log(n))

For single operations, yes. But what we really care about is usually the amortized performance over many operations, which can still be constant time.


No, this is not the case. It is actually for non-FP solutions that the amortized performance converges to constant time. For pure FP, there is no solution that performs in constant time, not even when amortized.



