
I too have been programming professionally for nearly two decades. Much longer if you consider the time I spent making door games, MUDs, and terrible games in the 90s.

I think functional programming gives you powerful tools to reason about the construction of programs. Even down at the machine level, it's amazing how amortized functional data structures change the way you think about algorithmic complexity; I think laziness was the game changer here. And if you go all in with functional programming, it's surprising how much baseline performance you get with so little effort and how easy it is to scale to multiple cores and multiple hosts.
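To make that concrete, here's a minimal sketch (illustrative names, not from any particular library) of the classic two-list functional queue. The strict version below is amortized O(1) only for single-threaded use; laziness, à la Okasaki's banker's queue, is what preserves the bound when the structure is used persistently:

    -- A minimal sketch of a purely functional FIFO queue.
    data Queue a = Queue [a] [a]           -- front list, reversed back list

    emptyQ :: Queue a
    emptyQ = Queue [] []

    push :: a -> Queue a -> Queue a
    push x (Queue front back) = Queue front (x : back)

    pop :: Queue a -> Maybe (a, Queue a)
    pop (Queue [] [])      = Nothing
    pop (Queue [] back)    = pop (Queue (reverse back) [])  -- occasional O(n) reverse,
    pop (Queue (x:f) back) = Just (x, Queue f back)         -- amortized away over many pops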

There are some things, like vectorization, that most functional languages I know of are hard pressed to take advantage of, so we still reach for C for those things.

However, I think we're finally starting to learn enough about functional programming languages to build efficient compilers for them. Some interesting research that may be landing soon that has me excited would enable a completely pure program to do register and memory mutations under the hood, so to speak, in order to boost baseline performance. I don't think we're far off from seeing a dependently typed, pure, lazy functional language that can have bounded performance guarantees... and possibly be able to compile programs that don't even need run time support from a GC.

I grew up on an Amiga, and later IBM PCs, and that instinct to think about programs in terms of a program counter, registers, and memory is baked into me. It was hard to learn a completely different paradigm 18 or so years into my professional career. And to me, I think, that's the great accident that prevented FP from being the norm: several generations were simply not exposed to it early on, on our personal computers. We had no idea it was out there until some of us went to university or the Internet came along. And even then... to really understand the breakthroughs FP has made requires quite a bit of learning, and learning is hard. People don't like learning. I didn't. It's painful. But it's useful and worth it, and I'm convinced that FP will come to be the norm if some project can manage to overcome the network effects and the incumbents.




I would agree with this. I came up in the same time period, and we just programmed closer to the metal back then; we did not have the layers, and it was normal to think in terms of the machine's hardware (memory addresses, registers, interrupts, clock, etc.). This naturally leads to a procedural way of thinking: variables were a thin veil over the actual memory they addressed.

It actually takes a lot of unlearning to let go of control of the machine and let it solve the problem, when you are used to telling it how to solve the problem. I came to that conclusion when I dabbled in Prolog just to learn something different, and I had a really hard time getting my head around CL when I first got into it, due to wanting to tell the machine exactly how to solve the problem. I think it was just ingrained in those of us that grew up closer to the metal, and I think the Byte magazine reference in the talk has a lot to do with it: we just did not have that much exposure to other ideas, given that magazines and Barnes & Noble were our only sources of new ideas. That, and most of us were kids just hacking on these things alone in our bedrooms with no connectivity to anyone else.

I remember, before the web, getting on newsgroups and WAIS and thinking how much more info was available than on the siloed BBSes we used to dial into. Then the web hit, and suddenly all of these other ideas gained a broader audience.


OTOH, think of the vast hordes of new developers exposed to lots of FP and NOT having that background in Amiga and PC and bare-metal programming that you do.

FP has been largely introduced into the mainstream of programming through Javascript and Web Dev. Let that sink in.

End of the day, the computer is an imperative device, and your training helps you understand that.

FP is a perfectly viable approach for high-level specification or code generation, but you are aware of the leaky abstraction/blackish box underneath and how your code runs on it.

I see FP and the "infrastructure as code" movement as part and parcel of the same cool end goal, but I feel that our current industry weaknesses are related to hiding from and running away from how our code actually executes. Across the board.


"End of the day, the computer is an imperative device, and your training helps you understand that."

I mean... it's not though, is it? Some things happen synchronously, but this is not the same thing as being an imperative device. Almost every CPU out there is multi core these days, and GPUs absolutely don't work in an imperative manner, despite what a GLSL script looks like.

If we had changed the mainstream programming model years ago, perhaps chip manufacturers would have had more freedom to break free of the imperative mindset, and we could have radically different architectures by now?


Individual cores execute instructions speculatively these days!

Predicting how the program will be executed, even in a language such as C99 or C11, requires several layers of abstraction.

What most programmers using these languages are concerned about is memory layout, as that is the primary bottleneck these days. The same is true for developers of FP languages: most of the FP languages I've seen have facilities for unboxing types and working with flat arrays when you need them. It's a bit harder to squeeze the Haskell RTS onto a constrained platform, which is where I'd either simply write C... or better, compile a subset of Haskell without the RTS to a C program.
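For example (just a sketch, assuming the `vector` package is available), Data.Vector.Unboxed stores its elements flat in memory, so a traversal like this walks a contiguous array instead of chasing pointers to boxed values:

    import qualified Data.Vector.Unboxed as U

    -- Unboxed vectors are contiguous arrays of raw Doubles; no pointer chasing.
    dot :: U.Vector Double -> U.Vector Double -> Double
    dot xs ys = U.sum (U.zipWith (*) xs ys)

    main :: IO ()
    main = do
      let xs = U.enumFromN 1 1000000 :: U.Vector Double
      print (dot xs xs)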

What I find neat, though, is that persistent structures, memoization, laziness, and referential transparency gave us a lot of expressive power while also giving us a lot of performance out of the gate. Analogously to how modern CPU cores execute instructions speculatively while maintaining the promise of sequential execution from the outside, these structures combined with a pure, lazy runtime allow us to speculatively memoize and persist computations for efficiency. This lets me write algorithms that search infinite spaces using immutable structures and still get the optimal algorithm for the average case, since the data structures and lazy evaluation amortize the cost for me.
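A tiny illustration of that (just a sketch): an infinite, self-referential table of Fibonacci numbers where laziness and sharing mean each entry is computed at most once, and a search over that infinite structure only forces as much of it as it needs:

    -- Infinite list; zipWith over the list itself shares every computed element.
    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

    -- First Fibonacci number with at least 100 digits; only a finite prefix
    -- of the infinite list is ever evaluated.
    firstBig :: Integer
    firstBig = head (dropWhile (< 10 ^ (99 :: Int)) fibs)

    main :: IO ()
    main = print firstBig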

There's a good power-to-weight ratio there that, to me, we're only beginning to scratch the surface of.


> but this is not the same thing as being an imperative device. Almost every CPU out there is multi core these days

The interface to the CPU is imperative. Each core (or thread for SMT) executes a sequence of instructions, one by one. Even with out-of-order and speculation, the instructions are executed as if they were executed one by one.

> and GPUs absolutely don't work in an imperative manner, despite what a GLSL script looks like.

They do. Each "core" of the GPU executes a sequence of instructions, one by one, but each instruction manipulates several separate copies of the state in parallel; the effect is like having several identical cores which operate in lockstep.
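A toy model of that lockstep picture (purely illustrative, nothing like a real GPU ISA): one instruction stream, many private copies of the state, every lane stepping together:

    type Lane = Int   -- each lane's private state, kept trivially simple here

    -- One instruction stream, applied to every lane in lockstep.
    runLockstep :: [Lane -> Lane] -> [Lane] -> [Lane]
    runLockstep program lanes = foldl (\ls instr -> map instr ls) lanes program

    main :: IO ()
    main = print (runLockstep [(+ 1), (* 2)] [0, 1, 2, 3])   -- [2,4,6,8]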

> If we had changed the mainstream programming model years ago, perhaps chip manufacturers would have had more freedom to break free of the imperative mindset, and we could have radically different architectures by now?

The cause and effect are in the opposite direction. The "imperative mindset" comes from the hardware. Even Lisp machines used imperative machine code (see https://en.wikipedia.org/wiki/Lisp_machine#Technical_overvie... for an example).


> The interface to the CPU is imperative. Each core (or thread for SMT) executes a sequence of instructions, one by one. Even with out-of-order and speculation, the instructions are executed as if they were executed one by one.

That is, as in the traditional model of declarative programming, the given semantics are guaranteed, but the actual order of operations is not. So, in a sense, the CPU takes what could be construed as imperative code but treats it as declarative rather than imperative.


Exactly my point. With out of order execution, we execute as if they are in order, making sure that an item with a dependency on the outcome of another is executed in the correct order.

We end up having to rely heavily on compilers like LLVM, which have to work out exactly what depends on what and how best to lay out the instructions accordingly.

Imagine if the dominant programming style of the last few decades had been a declarative one. We wouldn't have had any of this nonsense about working out after the fact what depends on what; we could have been sending that information right down to the CPU so it could deal with it.


From Wikipedia:

> In computer science, imperative programming is a programming paradigm that uses statements that change a program's state.

All CPUs I know of are definitely imperative. My (limited) understanding of GPU instruction sets is that they are fairly similar, except that their instructions are all SIMD.


A GPU just means lots of cores, and the cores are composed of execution units too.

Even the most exotic architecture you can think of is imperative (systolic arrays, transport-triggered architectures, or... whatever).

There are instructions, and they are imperative.

I can vaguely remember some recent iterative AI of some kind that had to produce a functioning circuit to do XYZ, and the final netlist it produced for the FPGA was so full of latches, taking advantage of weird timing skew in the FPGA fabric and so on, that no engineer could make sense of it, but the circuit worked... I suppose when there's that level of non-imperative design, you can truly call it both declarative and magic.


Nope. At the end of the day there is a linear sequence of instructions being executed by any given part of the hardware.

OO and FP are just higher-level ways of organizing source code that gets reduced to a linear sequence of instructions for any given hardware execution unit.


Hardware is imperative at its lowest level. Sure, you could even say that the instructions are declarative if you are speaking from the perspective of the ALU with regard to the stuff you send to an FPU, for example...


> FP has been largely introduced into the mainstream of programming through Javascript and Web Dev. Let that sink in.

Not really.

"Confessions Of A Used Programming Language Salesman, Getting the Masses Hooked on Haskell"

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.72....


> FP has been largely introduced into the mainstream of programming through Javascript and Web Dev.

JavaScript's use of more and more functional patterns came with Underscore.js and CoffeeScript, which were both inspired by Ruby-based web dev!

I'd say the entire industry, Java included, has been moving towards more FP in a very sluggish fashion.


Having first-class functions and closures was what opened the can of worms. How much you can get done just passing callbacks is what got me wondering how much more there is to learn in FP.
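In Haskell terms (a trivial sketch), the "callback" is just an ordinary argument; same idea as passing a comparator function in JavaScript:

    import Data.List (sortBy)
    import Data.Ord (Down (..), comparing)

    -- The sorting strategy is passed in as a value, i.e. a callback.
    byLengthDesc :: [String] -> [String]
    byLengthDesc = sortBy (comparing (Down . length))

    main :: IO ()
    main = print (byLengthDesc ["fp", "imperative", "lazy"])  -- ["imperative","lazy","fp"]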


>End of the day, the computer is an imperative device, and your training helps you understand that.

Well... it's complicated. A CPU is imperative. An ALU is functional. A GPU is vectorized functional.
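To put that in code (a toy model, not any real ISA): the ALU is just a pure function from an opcode and operands to a result, with no hidden state:

    import Data.Bits (xor, (.&.), (.|.))
    import Data.Word (Word32)

    data Op = Add | And | Or | Xor

    -- A toy ALU: same inputs, same output, no state carried between calls.
    alu :: Op -> Word32 -> Word32 -> Word32
    alu Add a b = a + b
    alu And a b = a .&. b
    alu Or  a b = a .|. b
    alu Xor a b = a `xor` b

    main :: IO ()
    main = print (alu Add 2 3, alu Xor 0xFF 0x0F)   -- (5,240)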


True, well, that's complicated too, as that ALU likely runs microcode or has a lookup table. But presuming Boolean hardware logic underlying it somewhere, THAT level is declarative. I'm not sure what functional composition is involved there, but it's declarative programming of Boolean hardware where the actual imperative activity is occurring.

maybe the physics is imperative too lol


end of the day, some poor schmuck has to get up and DO something...lol


I suppose that since one is still only talking about the external interface to any given hardware execution unit (GPU, ALU, FPU), one could always present it in whatever format was useful or trendy.

But I'll contend that it's much more productive to basically wrap low-level functionality as modules that higher-level languages could compose. One could then optimize individual modules.

The mechanism of composition should lay it out as desired in memory for best efficiency, and hence the probable need for a layout step, presuming precompiled modules (it could use 'ld', for example). I'm not sure how you would optimize memory layout for black boxes, but perhaps some standard interface...

Most people here are doing this already without knowing it, if you look into the dependencies of your higher level programming tools and kit.

At the end of the day, OOP is a code-organization technique. FP is too. They are both useful, and we still have complexity. Some poster above needs actor models, etc.; it depends on the scale, I suppose, and on whether one is building a distributed healthcare application or trying to get audio/video not to glitch.


I am fully on board with you.

Learned to code in the mid-80's, Basic and Z80 FTW.

Followed up by plenty of Assembly (Amiga/PC) and systems-level stuff using Turbo Basic, Turbo Pascal, C++ (MS-DOS), TP and C++ (Windows), C++ (UNIX), and much else.

I was lucky enough that my early-'90s university exposed us to Prolog, Lisp, Oberon (and its descendants), Caml Light, Standard ML, and Miranda.

Additionally, the university library allowed me to dive into a parallel universe of programming ideas that seldom reach the mainstream.

Which was great: not only did I learn that it was possible to marry systems programming with GC-enabled languages, but also that it was possible to be quite productive with FP languages.

Unfortunately, this seems to be yet another area whose possibilities others only believe in after discovering them for themselves.


> Some interesting research that may be landing soon that has me excited would enable a completely pure program to do register and memory mutations under the hood, so to speak, in order to boost baseline performance. I don't think we're far off from seeing a dependently typed, pure, lazy functional language that can have bounded performance guarantees... and possibly be able to compile programs that don't even need run time support from a GC.

Is there any more info/links available about this?


I don't think they've finished writing the paper yet but I'll post it out there when it gets published.




