Why not try to implement the iterative O(ln(n)) Fibonacci from SICP (Ex 1.19), rather than the old O(n) one? Or the matrix exponentiation solution (though he'd probably want to implement iterative matrix exponentiation too... it doesn't look like he clearly knows how to do that)?
https://en.wikipedia.org/wiki/Fibonacci_number#Matrix_form
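The matrix form from that link fits in a few lines. Here's a rough Python sketch (my own names, not from the article): since [[1,1],[1,0]]^n contains F(n) off-diagonal, iterative squaring gives O(log n) arithmetic operations.

```python
def mat_mul(x, y):
    """Multiply two 2x2 matrices given as ((a, b), (c, d))."""
    (a, b), (c, d) = x
    (e, f), (g, h) = y
    return ((a * e + b * g, a * f + b * h),
            (c * e + d * g, c * f + d * h))

def fib_matrix(n):
    """nth Fibonacci number via iterative matrix exponentiation."""
    result = ((1, 0), (0, 1))       # identity matrix
    base = ((1, 1), (1, 0))
    while n > 0:
        if n % 2 == 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)  # square the base, halve the exponent
        n //= 2
    return result[0][1]             # off-diagonal entry is F(n)
```

Note this is iterative in exactly the sense the parent wants: the loop squares the base and halves the exponent, with no recursion.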
Considering that the Fibonacci sequence is exponential, and hence the Nth Fibonacci number contains O(n) digits (well, in any sane base at least), you cannot compute the Nth Fibonacci number in under O(n) time.
Given that the guy's programming in JavaScript and not using arbitrary-precision arithmetic, he's always using 64 bits and representing his numbers as floating point, so I don't think you're reasoning correctly about the asymptotic complexity here. I mean, if you really want to argue this, then to quote Leibniz: "Calculemus!"
The JavaScript solution presented does everything in O(n) arithmetic operations, and the SICP way does it in O(ln(n)) arithmetic operations (actually, there are a lot of ways to do it in O(ln(n)) operations).
If we're going to stick to fixed-precision numbers, and we're just out to score maximum internet points, you can do it in O(1) by just computing a table in advance, and for not much storage space, since the series grows exponentially.
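Concretely (a toy sketch, assuming 64-bit signed integers as the fixed precision): F(92) is the last Fibonacci number that fits, so the whole table is 93 entries.

```python
# Precompute every Fibonacci number that fits in a signed 64-bit integer.
FIB_TABLE = [0, 1]
while FIB_TABLE[-1] + FIB_TABLE[-2] < 2**63:
    FIB_TABLE.append(FIB_TABLE[-1] + FIB_TABLE[-2])

def fib_lookup(n):
    """O(1) lookup; raises IndexError past the fixed-precision range."""
    return FIB_TABLE[n]
```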
I mean, if we're going to screw around with the Os let's do it properly.
If you like doing this sort of thing, there are a million little CS problems like this on codewars.com
One cool feature of the site is that you can vote on solutions; this lets you see a lot of neat solutions to problems, and learn a lot about how to write clean code.
I do this sort of thing sporadically in my free time too... it's still good to go back and do stuff smarter when you learn better rather than accepting whatever they taught you in CS101 as the end of the story.
Not corecursive, but here's the SICP O(ln(n)) algorithm:

    fibonacci = fib 1 0 0 1
      where
        fib a b p q n
          | n == 0    = b
          | even n    = fib a b p' q' (n `div` 2)
          | otherwise = fib a' b' p q (n - 1)
          where
            p' = p*p + q*q
            q' = 2*p*q + q*q
            a' = a*q + b*q + a*p
            b' = b*p + a*q
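For comparison, the same Ex 1.19 transformation translates directly into an iterative Python loop (my own sketch, same p/q squaring trick as above):

```python
def fib_sicp(n):
    """nth Fibonacci via SICP Ex 1.19: the transform T_pq maps
    (a, b) -> (bq + aq + ap, bp + aq); squaring it sends (p, q)
    to (p^2 + q^2, 2pq + q^2), halving the remaining exponent."""
    a, b, p, q = 1, 0, 0, 1
    while n > 0:
        if n % 2 == 0:
            p, q = p * p + q * q, 2 * p * q + q * q  # square the transform
            n //= 2
        else:
            a, b = b * q + a * q + a * p, b * p + a * q  # apply it once
            n -= 1
    return b
```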
Well, frankly, you can get along without knowing the "Gang of Four" design patterns and write Java. By the same token, you don't need to know about iterables and comprehensions to write Python, smart pointers to write C++, graph theory to use a graph database, macros to write Lisp, etc.
By analogy, you don't need to know abstract algebra and category theory to write Haskell. But as in the other cases, knowing helps.
"Different sorts of mappings" is NOT synonymous with "category theory". Not even close. Heck, Euclid knew about "different sorts of mappings".
Most everything in Gamma et al is arguably useful for everyday programming in Java. Maybe 5-10 pages of MacLane is useful for everyday programming in functional languages.
Unless by "Category Theory" you mean "5-10 pages of MacLane", Category Theory -- on the whole -- is a horrendously inefficient way of teaching about "different sorts of mappings useful in functional programming."
Unless you want to use functional programming as an environment for doing pure mathematics, there's no reason to actually study Category Theory.
I've really never needed an abstraction for semigroups, monoids, meet-semilattices, monads, comonads, arrows, or catamorphisms in Clojure, Common Lisp, Scheme, or Hy.
These concepts become more relevant when I program Haskell, Agda, Isabelle/HOL, or Coq.
I'd say a stronger analogy can be made between reading MacLane's Categories for the Working Mathematician and reading Hoyte's Let Over Lambda; you really only need to read a little bit of these books to get the core concepts. That being said, depending on what sort of functional programming you're doing, a strong background in category theory or meta-programming can be enabling (or not).
> when I program Haskell, Agda, Isabelle/HOL, or Coq.
That's fair. Although Haskell is a bit of an odd man out in that list, both in terms of its nature and in terms of its typical use case.
> That being said, depending on what sort of functional programming you're doing, a strong background in category theory or meta-programming can be enabling (or not).
This is where the analogy between the two books breaks down. When you're using a functional programming language as a proof assistant, category theory can be helpful. But this is far less common than meta-programming.
> It's a long road from "gee, I changed something, better copy everything" to something as good as FFTW or ATLAS.
CUDA implementations of FFTs and matrix operations are faster than both FFTW and ATLAS, and they are neither sequential nor functional.
CUDA, C, and Haskell each have domains in which they typically outperform the others. The math vs. simulation divide sketched in this blog post is more an expression of the author's own psychology than anything else.
To be fair, the FFTW/CUDA thing is due to fundamentally different hardware architectures which drove design constraints for these types of libraries. FFTW was never meant to run on a dedicated, ultra-parallel processor with highly optimized floating point instructions (GPU), but it is incredibly fast considering it runs on general purpose hardware. I am sure the FFTW authors could have done something to squeeze out more performance if they controlled both the hardware and software as NVidia does. And the transfer time to/from the GPU does matter, especially for smaller/more frequent operations...
All that aside, the psychology of pure functional vs. pure OOP vs. some hybrid methodology is really interesting, and even the view of what a "clean solution" is becomes tainted based on past experiences with other code written in that style.
For my own 2 cents, once I learned QuickCheck[1] and Parsec[2], I found myself slowly rewriting them for every language I program in.
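The core QuickCheck idea is small enough to carry between languages. Here's a toy Python sketch of what I mean (hypothetical names, no shrinking or size control): generate random inputs, check a property against each, and report the first counterexample.

```python
import random

def quick_check(prop, gen, trials=100):
    """Toy QuickCheck: test prop on random inputs drawn from gen."""
    for _ in range(trials):
        x = gen()
        if not prop(x):
            return f"Falsified by: {x!r}"
    return f"OK, passed {trials} tests."

# Property: reversing a list twice gives the list back.
gen_list = lambda: [random.randint(-100, 100)
                    for _ in range(random.randint(0, 20))]
print(quick_check(lambda xs: list(reversed(list(reversed(xs)))) == xs,
                  gen_list))
```

The real libraries add shrinking of counterexamples and type-driven generators, which is where most of the work goes when porting them.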
> Will it help me professionally?
Speaking as another professional, probably not.
I love Haskell, but my colleagues can't deal with it. They can't write it. They don't know functional programming. And to be entirely honest, they will never understand Monoids, Monads, or macro-hacking.
> ...Help me being a better programmer?
Maybe. If you already like to think of software axiomatically, then yes, it will make you think of software more conceptually.
But honestly, getting JavaScript, Ruby, and Go under your belt will also make you think differently about software, and those are more practical languages.
Microsoft's relationship with Linux is actually a bit more complicated. It was a top 5 [1] contributor to the Linux kernel in 2011 and a top 20 [2] contributor in 2012. The reason is that Linux virtualization is a big deal for Microsoft Azure.
Although the rationale here is fairly clear: having good Linux support for Azure helps Microsoft sell its cloud service. Having good Linux support for Microsoft filesystems helps people move away.
I'm aware of Microsoft's Linux kernel contributions; I've also been around long enough to remember the woes of NTFS compatibility, CIFS compatibility, the Halloween documents, the "virus" accusations, the SCO debacle, and, not least of all relevant to this current conversation, the fact that exFAT is basically barred from a FLOSS implementation due to onerous licensing and software patents.
Albert-Laszlo Barabasi talked about this sort of fear a few years ago in his book Bursts[1]. He figured Google would be the one to do it. Even with dropouts occurring when someone gets on a bus, people's movements tend to have low entropy[2]. Since most people's movements follow pretty predictable routines, over time the system can learn to predict most people's positions given noisy data.
It might not work well for criminals trying to avoid being seen by aerial cameras during the day, hiding out away from their usual haunts and obscuring their faces. It will probably be a big win for marketing data analytics firms.
A small wishlist:
- ClojureDocs: http://clojuredocs.org/
- Hoogle integration: http://www.haskell.org/hoogle/
- BroPages: http://bropages.org/