
OTOH, think of the vast hordes of new developers exposed to lots of FP who do NOT have the background in Amiga, PC, and bare-metal programming that you do.

FP has been largely introduced into the mainstream of programming through Javascript and Web Dev. Let that sink in.

End of the day, the computer is an imperative device, and your training helps you understand that.

FP is a perfectly viable high-level specification or code-generation approach, but you are aware of the leaky abstraction/blackish box underneath and how your code runs on it.

I see FP and the "infrastructure as code" movement as part and parcel of the same cool end goal, but I feel that our current industry weaknesses are related to hiding and running away from how our code actually executes. Across the board.




"End of the day, the computer is an imperative device, and your training helps you understand that."

I mean... it's not though, is it? Some things happen synchronously, but this is not the same thing as being an imperative device. Almost every CPU out there is multi core these days, and GPUs absolutely don't work in an imperative manner, despite what a GLSL script looks like.

If we had changed the mainstream programming model years ago, perhaps chip manufacturers would have had more freedom to break free of the imperative mindset, and we could have radically different architectures by now?


Individual cores execute instructions speculatively these days!

Predicting how the program will be executed, even in a language such as C99 or C11, requires several layers of abstraction.

What most programmers using these languages are concerned about is memory layout, as that is the primary bottleneck these days. The same is true for developers of FP languages. Most of these languages I've seen have facilities for unboxing types and working with arrays as you do. It's a bit harder to squeeze the Haskell RTS onto a constrained platform, which is where I'd either simply write in C... or better, compile a subset of Haskell without the RTS to a C program.
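
A minimal GHC-flavoured sketch of that unboxing point (my own example, not from the post; Vec2 and dot are invented names): strict, UNPACKed constructor fields keep the doubles stored inline rather than behind boxed heap pointers.

    -- Strict, unpacked fields avoid a heap-allocated box per Double.
    data Vec2 = Vec2 {-# UNPACK #-} !Double {-# UNPACK #-} !Double

    dot :: Vec2 -> Vec2 -> Double
    dot (Vec2 ax ay) (Vec2 bx by) = ax * bx + ay * by

    main :: IO ()
    main = print (dot (Vec2 1 2) (Vec2 3 4))   -- prints 11.0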

What I find neat, though, is that persistent structures, memoization, laziness, and referential transparency gave us a lot of expressive power while still giving us a lot of performance out of the gate. Analogously to how modern CPU cores execute instructions speculatively while maintaining the promise of sequential execution from the outside, these structures combined with a pure, lazy runtime allow us to speculatively memoize and persist computations for efficiency. This lets me write algorithms that can search infinite spaces using immutable structures and get the optimal algorithm for the average case, since the data structures and lazy evaluation amortize the cost for me.
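
A minimal Haskell sketch of that last point (my own illustration, not the poster's code): the lazy, immutable list memoizes each Fibonacci number as it is demanded, so a search over an "infinite" structure only pays for the prefix it actually consumes.

    -- Self-memoizing infinite list: shared nodes are computed at most once.
    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

    -- Laziness means we never build more of the list than the search needs.
    firstFibOver :: Integer -> Integer
    firstFibOver n = head (dropWhile (<= n) fibs)

    main :: IO ()
    main = print (firstFibOver 1000000)   -- prints 1346269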

There's a good power-to-weight ratio there that, to me, we're only beginning to scratch the surface of.


> but this is not the same thing as being an imperative device. Almost every CPU out there is multi core these days

The interface to the CPU is imperative. Each core (or thread for SMT) executes a sequence of instructions, one by one. Even with out-of-order and speculation, the instructions are executed as if they were executed one by one.

> and GPUs absolutely don't work in an imperative manner, despite what a GLSL script looks like.

They do. Each "core" of the GPU executes a sequence of instructions, one by one, but each instruction manipulates several separate copies of the state in parallel; the effect is like having several identical cores which operate in lockstep.
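
A toy Haskell model of that lockstep picture (mine, not the parent's; Instr, step, and runLockstep are invented names): one decoded instruction is applied to every lane's private copy of the state, so the program is still an imperative sequence even though many states change at once.

    type LaneState = Int

    data Instr = AddImm Int | MulImm Int

    -- One instruction transforms a single lane's state.
    step :: Instr -> LaneState -> LaneState
    step (AddImm n) s = s + n
    step (MulImm n) s = s * n

    -- The whole "warp": every lane executes the same instruction stream.
    runLockstep :: [Instr] -> [LaneState] -> [LaneState]
    runLockstep program lanes = foldl (\ls i -> map (step i) ls) lanes program

    main :: IO ()
    main = print (runLockstep [AddImm 1, MulImm 2] [0, 10, 20])   -- [2,22,42]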

> If we had changed the mainstream programming model years ago, perhaps chip manufacturers would have had more freedom to break free of the imperative mindset, and we could have radically different architectures by now?

The cause and effect are in the opposite direction. The "imperative mindset" comes from the hardware. Even Lisp machines used imperative machine code (see https://en.wikipedia.org/wiki/Lisp_machine#Technical_overvie... for an example).


> The interface to the CPU is imperative. Each core (or thread for SMT) executes a sequence of instructions, one by one. Even with out-of-order and speculation, the instructions are executed as if they were executed one by one.

That is, in the traditional model of declarative programming, the semantics given are guaranteed, but the actual order of operations is not. So, in a sense, the CPU takes what could be construed as imperative code, but treats it as declarative rather than imperative.


Exactly my point. With out of order execution, we execute as if they are in order, making sure that an item with a dependency on the outcome of another is executed in the correct order.

We end up having to rely heavily on compilers like LLVM, which boil down exactly what should depend on what and how best to lay out the instructions accordingly.

Imagine if the dominant programming style in the last few decades had been a declarative one. We wouldn't have had any of this nonsense about working out after the fact what depends on what; we could have been sending that information right down to the CPU level so that it could deal with it.


From wikipedia:

> In computer science, imperative programming is a programming paradigm that uses statements that change a program's state.

All CPUs I know of are definitely imperative. My (limited) understanding of GPU instruction sets is that they are fairly similar, except that all of their instructions are SIMD.


gpu just means lots of cores. the cores are composed of execution units too.

even the most exotic architecture you can think of is imperative (systolic arrays, or transport triggered architecture, or... whatever)

there are instructions and they are imperative.

I can vaguely remember some recent iterative AI of some kind that had to produce a functioning circuit to do XYZ, and the final netlist that it produced for the FPGA was so full of latches, taking advantage of weird timing skew in the FPGA fabric and stuff, that no engineer could understand the netlist as sensical, but the circuit worked... I suppose when there's that level of non-imperative design, you can truly call it both declarative and magic.


nope. end of the day there is a linear sequence of instructions being executed by any given part of the hardware.

OO and FP are just higher-level ways of organizing source code that gets reduced to a linear sequence of instructions for any given hardware execution unit.


hardware is imperative at its lowest level. sure, you can even say that the instructions are declarative if you are speaking from the perspective of the ALU with regards to stuff you send to an FPU, for example...


> FP has been largely introduced into the mainstream of programming through Javascript and Web Dev. Let that sink in.

Not really.

"Confessions Of A Used Programming Language Salesman, Getting the Masses Hooked on Haskell"

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.72....


> FP has been largely introduced into the mainstream of programming through Javascript and Web Dev.

JavaScript's use of more and more functional patterns came with Underscore.js and CoffeeScript, which were both inspired by Ruby-based web dev!

I'd say the entire industry, Java included, has been moving towards more FP in a very sluggish fashion.


Having first-class functions and closures was the can of worms. How much you can get done passing callbacks is what got me wondering how much more there is to learn in FP.
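
To make that concrete with a small hedged sketch (withEach is a made-up helper of mine, nothing from the thread): once functions are ordinary values, the "callback" pattern is just passing one around.

    -- Hand each item to a callback, the way event handlers or iterators do.
    withEach :: [a] -> (a -> IO ()) -> IO ()
    withEach xs callback = mapM_ callback xs

    main :: IO ()
    main = withEach ["first-class", "functions", "compose"] $ \word ->
             putStrLn ("got: " ++ word)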


>End of the day, the computer is an imperative device, and your training helps you understand that.

Well... it's complicated. A CPU is imperative. An ALU is functional. A GPU is vectorized functional.
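
A rough Haskell sketch of the "an ALU is functional" framing (my own toy model, not anything from the thread; AluOp and alu are invented names): the opcode and two inputs map to an output with no hidden state involved.

    import Data.Bits ((.&.), (.|.), xor)
    import Data.Word (Word8)

    data AluOp = Add | And | Or | Xor

    -- Pure function of its inputs; wraps modulo 256 like 8-bit hardware.
    alu :: AluOp -> Word8 -> Word8 -> Word8
    alu Add a b = a + b
    alu And a b = a .&. b
    alu Or  a b = a .|. b
    alu Xor a b = a `xor` b

    main :: IO ()
    main = print (alu Add 200 100)   -- prints 44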


true, well, that's complicated too: that ALU likely runs microcode or has a lookup table, but presuming boolean hardware logic underlying it somewhere, THAT level is declarative. I'm not sure what functional composition is involved here, but it's declarative programming of boolean hardware where the actual imperative activity is occurring.

maybe the physics is imperative too lol


end of the day, some poor schmuck has to get up and DO something...lol


I suppose that since one is still only talking about the external interface to any given hw execution unit (gpu, alu, fpu) one could always present it in whatever format was useful or trendy.

But I'll contend that it's much more productive to basically wrap low-level functionality as modules that higher-level languages could compose. One could then optimize individual modules.

The mechanism of composition should lay it out as desired in memory for best efficiency, and hence the probable need for a layout step, presuming precompiled modules (it could use 'ld', for example). I'm not sure how you would optimize memory layout for black boxes, but perhaps some standard interface...

Most people here are doing this already without knowing it, if you look into the dependencies of your higher level programming tools and kit.

End of the day, OOP is a code-organization technique. FP is too. They are both useful. We still have complexity. Some poster above needs actor models, etc.; it depends upon the scale, I suppose: is one considering a distributed healthcare application, or is one trying to get audio/video not to glitch, etc.



