
I wasn't exposed to spreadsheets until a few years into college back around 1996 or 1997 maybe (I had been programming in C/C++ for 7 or 8 years by then). I wasn't taught matrix math until pretty late in the curriculum, I want to say junior or senior year. Also I was lucky to have a semester of Scheme but they were transitioning to teaching Java around the time I graduated (I don't know if they ever switched back). And this was for a computer engineering degree at one of the best state universities for engineering in the US.

Honestly I think it might be time to phase out teaching imperative and object-oriented programming. Most of the grief in my career has come from them. I don't care if they're where the jobs are. The mental cost of tracing through highly-imperative code, especially the new implicit style of languages and frameworks like Ruby and AngularJS (which have logic flows having no obvious connection to one another, or transition through async handlers connected by a convention which isn't immediately obvious to the developer) is so high that the underlying business logic is effectively obfuscated.

I think we should get back to fundamentals. Maybe start with the shell and explain why separate address spaces connected by pipes are such a powerful abstraction, maybe touch on the Actor model, maybe show how spreadsheets and functional programming are equivalent, and how even written notation is superfluous. Really focus on declarative programming and code as media, and how that made the web approachable by nontechnical folks before it was turned into single page application spaghetti. There are so many examples from history of where better ways were found before the mainstream fell back into bad habits.




Computers are still imperative, so all functional code is arguably syntactic sugar over that core, causing a lot of leaky abstractions to show up all over the place.

I think the problem with object-oriented programming is that it's taught too soon. Start with imperative, then functional, then toss object-oriented into your senior year.


A ton of problems in software engineering, I am convinced (and isn't wild generalization one of the marks of our field! I at least want to own my own hypocrisy here) are communication problems. Nearly all the interesting ones, anyway.

And one of them is axioms no-one ever communicates. I've worked with programmers with at least three markedly different axiomatic bases, for want of some less pretentious – and less exaggerated-for-rhetorical-effect – way of putting this:

* `Electrical engineers`. "Computers are imperative because computers are just circuits". Functions, objects, mathematical abstractions in general are fundamentally leaky and to be regarded with deep suspicion. Paradigmatic languages: C, asm.

* `Pure mathematicians`. Computers are abstract machines for manipulating symbols. Programming is set theory; hardware is an implementation detail. State is just an artifact of insufficient precognition. Paradigmatic languages: ML, Haskell.

* `Approximators`. These folk rarely come from computer science backgrounds; instead they tend to come from "technical computing", meaning sciences, engineering, economics, statistics and the like. They're also on the rise, because this is the group of people who "get" machine learning. Computers are fancy and very fast slide rules; their job is to run approximate numerical methods very fast. The only true types are `float` and `bool`; programming is translating linear algebra into algorithms which estimate the correct results stably with bounded error, ideally by calling the right bits of LAPACK. Paradigmatic language: historically, Fortran; these days, whatever array language is embedded in the libraries they're using, surrounded by Python or (if statistician) R.

The point is: these are three markedly different worldviews and none of them are any more fundamentally wrong than the others – they're all useful and wildly incomplete. So unless you can get agreement – or at least empathy – within your team, you're going to spend a lot of time talking past each other.


That's a very interesting categorization. I would generally place myself in the 'electrical engineers' category. I usually code in Python, Javascript, or Java, but I see those languages as essentially C or machine code with productivity enhancements like garbage collection, objects, closures, and lots of interoperable libraries.

OTOH, I would also place myself somewhat in the 'mathematician' category because I see imperative code as functional code where each instruction is a function that takes the current machine state and some parameters and computes a new machine state. The new machine state is an input for the next instruction. When you see it that way, instruction pipelines in the CPU become very easy to comprehend: the CPU calculates one or more possible future machine states while waiting for a response from DRAM, then selects the correct state based on the response.

Seeing it that way also reveals the main issue with imperative programming: each instruction accepts too many inputs! Functional programming helps constrain the inputs so that it's easier to reason about what the code does and how it can change.
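To make that concrete, here is a toy Python sketch of the "each instruction is a function from machine state to machine state" view (the register names and instructions are made up purely for illustration):

    # Each "instruction" returns a pure function: old machine state -> new machine state.
    def load_const(reg, value):
        def step(state):
            new_state = dict(state)      # never mutate the incoming state
            new_state[reg] = value
            return new_state
        return step

    def add(dst, src_a, src_b):
        def step(state):
            new_state = dict(state)
            new_state[dst] = state[src_a] + state[src_b]
            return new_state
        return step

    program = [load_const("r1", 2), load_const("r2", 3), add("r0", "r1", "r2")]

    state = {"r0": 0, "r1": 0, "r2": 0}
    for instruction in program:
        state = instruction(state)       # thread the whole machine state through
    print(state)                         # {'r0': 5, 'r1': 2, 'r2': 3}

Every instruction implicitly receives the entire machine state, which is exactly the "too many inputs" problem described above.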


It's a pile of massive overgeneralizations, not least of which is that people don't fall neatly into these categories. But it's a useful framework for thinking about miscommunication, I've found.


I agree. Next time I have a disagreement with another coder, I might ask them whether they see computers as primarily circuits, set theory implementations, or AI. (I might add "filing cabinet" as another category.) It would help me understand which assumptions I can probably make.


I like that categorization. One semester during college I was taking both a computer architecture class, and an automata/theory of computation class--I still remember when I realized that there were 2 fundamentally different ways of approaching computing. Studying the history of computing, you'll also come to a similar realization that there are 2 fundamental approaches.

Personally, I've always loved both paradigms, and I've never been able to decide which one I prefer. I tend to move between the 2 depending on which one works better for the task at hand.

That realization (along with the fundamentals required to understand both paradigms) was one of the most useful things I got out of my degree.


Yeah, interestingly, I have this impression that Python is a modern-day Fortran. (Largely for the reasons you describe in the third category of programmers.)


At least among my colleagues (process engineering) the language is basically an afterthought. Your slide rule analogy is right on.

For a lot of the people I work with code is just a way of representing mathematics. So you end up with "Fortran written in every language"

Here is a snippet of code (this is written in C, but I guarantee most of my colleagues would write this the same regardless of whether it were Fortran, Python, Matlab, whatever).

    for (i = m1; i < m2; i++)
    {   // f(p)^2 = (u^T)(Q^T)(D^2)Qu and F(dF/dp) = (u^T)(Q^T)(D^2)Q(du/dp)
        g = h;
        h = (v[i+1] - v[i]) / (x[i+1] - x[i]);
        g = h - g - r1[i-1]*r[i-1] - r2[i-2]*r[i-2];
        ...
    }
Here's some SQL

SELECT t2.Line, t2.Slope, t1.X, t1.Y, t1.Y - (t2.Slope * t1.X) as Intercept /* B = Y -Mx */


On some level that's literally true: when you run NumPy or SciPy code, most of your basic array manipulations are done by BLAS, which is a set of very tightly optimized linear algebra routines originally written in Fortran.


A very large part of the answer to "why do people use Python?" is "why did physicists adopt Python en masse in the early 2000s?" and a large part of the answer to _that_ is "f2py exists"...


When you try to ignore the hardware, you find out that things like random bit errors occur. Thus pure symbol manipulation is unequivocally false.

Pretending a computer is something else can be a useful abstraction, but you can actually learn what's going on.


So where do Smalltalkers and Lispers go?


Part of the Lisp tradition was the 'knowledge engineer'. Identify the knowledge and processes necessary for problem solving and implement a machine for that: logic, rules, semantic networks, ... Bring the programming language near to the problem domain.


Outside in the car park playing hacky sack


presumably in the "mathematical" group


Mental health facility :(


Statistically insignificant.


I have heard critiques of functional programming, but not that there is a problem with leaky abstractions. Can you provide an example? Does it invalidate the discipline of trying to use pure functions when possible?

I learned to program using BASIC on an Apple II. All of the variables were global, and it wasn't until I got Apple Pascal that I had access to a language that had local variables. I immediately saw the advantage of the discipline of using local variables whenever possible because I had experienced the difficulty of tracking down where in the whole program a global variable might be changed.

It could be argued that local variables are just syntactic sugar over a global memory space, but no one credibly argues today that we should therefore only use global variables. Local variables make it easier to reason about the behavior of a program, and it appears to me that functional programming takes that idea even further.
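As a tiny, contrived Python sketch of that progression (global state versus keeping everything in the function's inputs and outputs):

    # Global-variable style: any call site anywhere may change `total`,
    # so reasoning about its value means reading the whole program.
    total = 0

    def add_to_total(x):
        global total
        total += x

    # Local/pure style: everything the function depends on is an argument,
    # and its only effect is its return value.
    def add(total, x):
        return total + x

    running = 0
    for value in [1, 2, 3]:
        running = add(running, value)
    print(running)  # 6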


Program execution time and memory footprint are great examples. Math does not care about these details, but just because two programs are logically equivalent does not mean they will behave identically.

I am very pro functional programming, but your mental model needs to be accurate to really dig into the details. As such you really need to understand ASM and procedural code or it will eventually bite you.


That's a curious place to draw the line. As we've seen in the past year, assembly isn't good enough. You need to understand your microcode or it will eventually bite you ... and you don't.

But that's all just nit-picking <<1% cases. When's the last time you saw a functional program whose execution time or memory footprint were unacceptable, and required knowledge of assembly language (or lower) to resolve? I can't say I ever have.


The best example I can give is not quite functional code. I have seen a lot of horrific SQL because someone did not really understand that joins, for example, have a cost based on some understandable criteria. You need to have a basic mental model of what the database does to write good SQL.
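As a rough sketch of why that mental model matters, here are two ways to compute the same logical join in Python; real databases are far more sophisticated, but the cost difference between a nested-loop plan and a hash-based plan is the kind of thing a bit of understanding buys you (the table sizes and names are made up):

    # The same join of `orders` to `customers` on customer id, two different plans.
    customers = [{"id": i, "name": "c%d" % i} for i in range(1_000)]
    orders = [{"customer_id": i % 1_000, "amount": i} for i in range(5_000)]

    # Nested-loop join: roughly len(orders) * len(customers) comparisons.
    def nested_loop_join():
        return [(o, c) for o in orders for c in customers
                if c["id"] == o["customer_id"]]

    # Hash join: build a lookup once, then roughly len(orders) + len(customers) work.
    def hash_join():
        by_id = {c["id"]: c for c in customers}
        return [(o, by_id[o["customer_id"]]) for o in orders
                if o["customer_id"] in by_id]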

I don't expect everyone to be proficient in ASM, let alone microcode, but I do expect them to at least understand that they exist. Beyond that, I expect any decent CS program to enable someone to build a useful mental model of what's going on. Plenty of subject matter experts write very valuable code without that knowledge, but they tend to hit a very real wall.


Perhaps I'm missing the point here, but my understanding is that functional programmers (and especially Haskell programmers) tend to have a different problem: knowing what code the compiler can optimise well.

From what I've heard about tuning Haskell code for performance, much of it depends on the particulars of the compiler, rather than on the underlying CPU.


I don't like how people equate the algebraic FP style of programming with "math" -- there are plenty of ways to model execution time and memory footprint using math, for example. A functional program is no more "mathematical" than an imperative one.


I mean as a mental model.

You can for example analyze a program assuming an ideal compiler (which I have seen people do), or you can understand the actual compiler used. The second is an implementation detail subject to change, but it’s also more accurate.

I recall one meeting when people said doing comparisons with a list of objects one comparison at a time should be as fast as doing the same thing one object at a time. I pointed out that due to cache issues doing each check an object at a time would be faster. A few people were shocked when the test showed a significant improvement.


Purely functional languages come with the promise that a sufficiently advanced compiler will see through all that monadic functional cruft and run your code as well as, or even better than, it would if written in an imperative language.

As we don’t yet have such an advanced compiler, impure hacks like `par` start seeping through the cracks.


What do you mean impure hacks like `par`? It's a parallelism primitive. You mean `seq`? In either case it's not impure. It's just a primitive that can't be expressed in the language itself. It's still referentially transparent (which is what people mean by pure).


> Computers are still imperative

"Computer Science is no more about computers than astronomy is about telescopes." — (Mis)attributed to Edsger Dijkstra, 1970.


> As a result, primarily in the U.S., the topic became prematurely known as "computer science"---which actually is like referring to surgery as "knife science"---and it was firmly implanted in people's minds that computing science is about machines and their peripheral equipment.

Edsger W Dijkstra. Mathematicians and computing scientists: The cultural gap. Abacus, 4(4): 26–31, June 1987. ISSN 0724-6722. URL http://dl.acm.org/citation.cfm?id=25596.25598.

Which is similar in spirit to the contested quote, and predates by a few years any of the references on Wikiquote.


In the SICP lectures, they explain why names like 'Computer Science' come into being.

They explain it by way of the origins of the word 'Geometry', which translates to 'Earth Measurement'. What they deduce is that when a field is young, it's hard to make a distinction between the science behind the field and the instruments/tools you spend time with to make the science happen. That is because you spend so much time with the tools that a big part of the work is tool expertise itself.

Until we arrive at the perfect computers and perfect languages, we are going to be stuck with this sort of a phenomenon for a very long time.


We had the perfect term early on: informatics

It is also the official term for computer science in many cultures. Mostly in European countries/languages.


That explains why astronomers are so incredibly enthusiastic about telescopes, and spend immense amounts of time trying to create better ones to overcome the limits of what they can do now.


Sure, everyone is enthusiastic about their tools. Still, it is not telescopes that are the subject of the science of astronomy.


It's not the subject, no, but the limitations of telescopes are what place limits on how much astronomy can do, and by improving their telescopes, and by improving how they use their telescopes, they can find out more than they could before.


Note that the detection of gravitational waves required modelling these waves first. So astronomy is needed as an input to the telescope, and also progresses as an output of it.


Meanwhile, in the real world...


I'm still convinced that (Mis)Quoting Dijkstra Considered Harmful. A Dijkstra quote derails a conversation about CS almost as surely as anything else.


Developing computer programs is not computer science, either. The application of science to solve practical problems is "engineering".

Unfortunately, real programs run on real computers and engineers need to deal with that. In theory, theory and practice are the same. In practice, they are not.


And what about humans? You are rarely writing code for computers, but for humans. Many companies optimize code for readability, and there is a reason: usually multiple people are working on the same code at the same time, or at least over time. It is the compiler's job to translate the code to the computer's language, which can be imperative; I would still prefer the human side to be functional and declarative. Maybe it is just me.

I have struggled with the concepts of Java for a long time, especially with the many keywords that I cannot relate to: private, final, static, void. Meanwhile I understood the majority of Clojure at first glance. I could not write Java without an IDE, but am more than happy to develop Clojure with vi. It appears to me that we overcomplicate things for no good reason and underestimate the power of simplicity.


“I don’t understand it, therefore it is inferior.”

I currently program for a living with Java. It clicked with me the moment I first learned it. (It was also my first introduction to OOP.) Right now I’m struggling to learn the nuances of functional programming with both Scala and Haskell. Do I consider functional programming, Scala, or Haskell inferior because they’re not as straightforward for me as Java (or C#, or C, or even ASM) was? Of course not.


To be fair, almost no one can (or should) write Java without an IDE. IntelliJ + IdeaVim has been a great compromise for me as a long, long time Vim user. I know I am not responding to your main point (with which I am sympathetic), but I figured I'd throw it out there anyway.


Education is there to teach you. I don’t think everyone will use SQL or ASM for example, but IMO every CS program needs to have at least one class covering each of them in depth for at least a few weeks.

Procedural code is even more central, and should be covered in depth.


> so all functional code is arguably syntactic sugar

No. "Syntatic sugar" implies that it is relatively trivial 1-1 translation that could in principle go backwards. Functional programming has many, many more optimizations and transformations.


I disagree that they should phase out imperative / OO programming, but from my experience there should be more emphasis on different languages, i.e. throw in an ML (as in Meta Language) and a Lisp, and most importantly some information about the trade-offs.

However, it would require a culture shift toward the university being a place to get you ready for a career in industry rather than academia.

Because you are prepping for academia, it is OK for them to postpone functional to later years or even the PhD.

You mention grief with OO - but I think you would get a lot of it with functional too, and it has more to do with the incentives and people structures in organisations that produce code. Usually praise goes to those who get an 8 hour Jira ticket done in 4, or a 5 week task done in 4 weeks, etc., and from the users' point of view 'it works'. The structure of the code is not discovered until later.

Code reviewers are working at the deckchairs level; they are unable to change the Titanic's direction, and again both parties in a code review have an incentive to do it as quickly as possible (while not looking like they obviously brushed over it), so it becomes mostly a potential bug hunting and syntactic cleanup exercise.

It's almost a running joke that refactoring rarely gets done, and if it does it is trivial, and it usually requires some stealth from a developer or manager to create cover to do the refactor.

The only hope we have is to work somewhere where there is a good understanding of code quality from top to bottom in the org, or at least when you cross the technical/non-technical boundary there is a high degree of trust to let the tech people do the right thing, and not KPI them into submission.


You summed it up nicely. After writing my comment, I realized that imperative programming is so much more difficult than functional programming that in a way, it's the majority of the work of programming. Anyone can learn to use a spreadsheet, but it takes years of dedication to master debugging enterprise software. So there will always be huge demand for that skill, and so colleges should probably continue building it in students.

I'm still in mourning though imagining how far programmers could go if they weren't stuck endlessly debugging imperative code that will never be deterministic or free of side effects. Lots of lost potential there. I'm coming up on 3 decades of experience doing that and it feels like well over 90% of the code I've written was a waste of time. I guess it paid the bills though.


> I realized that imperative programming is so much more difficult than functional programming that in a way, it's the majority of the work of programming. Anyone can learn to use a spreadsheet, but it takes years of dedication to master debugging enterprise software.

Anyone can learn to program spreadsheets, because spreadsheets realize that state is the most important thing, and put it front and center—hiding the calculations. Most programming paradigms are about manipulating the calculations, and the state is only visible when the program is running. As long as programming tries to avoid state, it'll be hard for most people to learn.


What you just said really struck a chord with me. I do tend to ignore state because it's less accessible. The result is that my code gets more esoteric and abstract while the changes I need to make to state come very slowly. On the other hand I've made spreadsheets as complicated as small applications I've made, but the cognitive load feels significantly lighter.

How can we bring state forward during development?


> from my experience more emphasis on different languages i.e. throw in a ML (as in Meta Language) and a Lisp, and most importantly some information about the trade offs

Most CS curricula have a course where you spend time programming in a variety of different programming languages, in different paradigms.


> Code reviewers are working at the deckchairs level; they are unable to change the Titanic's direction, and again both parties in a code review have an incentive to do it as quickly as possible (while not looking like they obviously brushed over it), so it becomes mostly a potential bug hunting and syntactic cleanup exercise.

And that's why I despise code reviews. There's obviously not enough time allocated to them for in-depth understanding of the reviewed code, so I'm mostly spending precious resources (attention, energy) on doing a half-assed review that is not going to do that much good. It feels like such futile work.


>Honestly I think it might be time to phase out teaching imperative and object-oriented programming.

I have seen plenty of universities teach Java and C++, haven't seen any that teach actual OOP. James Coplien aptly calls the current paradigm "class oriented programming".


It triggers me to no end when I watch an introductory course, for people with no previous exposure to any programming language, and the teacher starts with

"public static void main()"

In order to understand it, you need to have a good grasp of classes, static methods, access controls.

This is usually followed up by a request to ignore the entire line, which is one of the worst habits you can have as a developer.

Then you have to compile this. It used to be the case that people would copy and paste command line excerpts (bad habit!), but now they will get a pre-configured "IDE" where they can punch the compile button (ok, but I have met plenty of professionals who never left this stage).

Instead, they could avoid the whole thing by using something else, say, Python, or even Javascript. In both cases you can quickly drop to a repl and start trying stuff out.

Once the basics are there, then you can proceed to object orientation. And perhaps even to Java as a more advanced course, only you now have the proper concepts to understand the very first line you are supposed to write.

Why would universities teach Java and C++ anyway? They are either 'boring' from a computer science perspective, or too convoluted for teaching concepts.


To me, my intro to comp sci class nailed it by teaching Scheme. The first 30 min or so were spent introducing all the syntax we'd learn in the entire class. The remaining several weeks were spent on CS concepts and learning to express ourselves to the computer. I've never used Scheme/Lisp professionally, nor do I have an urge to. But it was the perfect teaching language because it puts so little between the student and learning to program a computer.


The thing I love about Python as a teaching language is that you don't need to tell people anything about objects to get them started, and even to do some more advanced stuff (e.g. turtle graphics). And then you can introduce objects, and they light up the mental image of the world that is already there - because objects were part of the ride all along, just staying in the background.


> Why would universities teach Java and C++ anyway? They are either 'boring' from a computer science perspective, or too convoluted for teaching concepts.

Not a fan of Java by any means, but they should teach Java so graduating students can put that on their resume and get a job so they won't live with their parents until they are 35....


This is presumably why they do teach Java.


This doesn't really apply to C++, but I know that Sun was pushing very hard for Java in the late 90s, to the point of basically giving away hardware and course material to universities who would teach Java.


> This is usually followed up by a request to ignore the entire line, which is one of the worst habits you can have as a developer.

That really bugged me when I learned Java at University. I was also unaware how the compiler built the code and ran it; it was all hidden by the IDE. I felt a bit satisfied by my data structures course, which had a pretty decent book that explained a bit of this in the first chapter. I then decided to just switch to using vim and Make to build and run my school assignments.


Anecdotally, I tried to teach myself programming in the summer before starting University. I went with Python since it's available by default on Linux, and managed to get a little CLI board game with a main loop that asked the user for their move, updated the game's state and printed the new board.

One of the first CS courses at University was OOP in Java. I really struggled to grasp OOP, for most of the first year. It didn't "click" until I tried doing it in Python; after that I went down the rabbit hole into meta-object protocols, Smalltalk/Self/Newspeak, etc.

Java seems to occupy the opposite of a sweet spot: it makes learning difficult for newcomers, yet it's very limited and restrictive for those with experience. Not only is it overly verbose and ceremonious compared to untyped languages like Python, but also compared to ML/Haskell, whilst being less expressive and less safe!


Why not just explain it in less detail? Public so that Java can see it. Static so it just lives here, no need to create objects. And void means no value is returned; we will just print our results. Java calls it as the program runs, thus main(). Btw, an object is something you can create and call methods on – functions of that object.

Static may be hard to understand, but it is something you can just do. You can say(hi) and that requires nothing, but you cannot drive() without a car.


You can't teach that without touching all the concepts you mention. These would be almost all later parts of the course.


> This is usually followed up by a request to ignore the entire line, which is one of the worst habits you can have as a developer.

It is usually a request to ignore it for now, because "we will get into each part later". I fail to understand what is so bad about it. It's just not practical to provide all the theoretical foundations upfront and postpone practice to the second semester.

Up to this day, I still think that the easiest way for me to learn a new language is to copy some hello world, and replace one line after the other once I get why they are there.


I was never very good at that--ignoring something until later when it's staring right at you.

I prefer teaching programming with a language that minimizes the boilerplate required to get started.


I think of it a bit like how physics starts with that frictionless vacuum, or skips relativistic stuff until later.


This thought is why for a long time, Programming 101 at my alma mater (ETH Zurich) was taught in Eiffel. Obscure, but strongly opinionated towards OOP by way of strongly enforcing the paradigm.

Stop reading here.

(Since Bertrand Meyer, driving force behind the language and course, retired, the course is now taught using Java.)


> Honestly I think it might be time to phase out teaching imperative and object-oriented programming. Most of the grief in my career has come from them. I don't care if they're where the jobs are. The mental cost of tracing through highly-imperative code, especially the new implicit style of languages and frameworks like Ruby and AngularJS (which have logic flows having no obvious connection to one another, or transition through async handlers connected by a convention which isn't immediately obvious to the developer) is so high that the underlying business logic is effectively obfuscated.

I don't think this is a fault of functional programming as much as it is a fault of implicit language constructs. There are common conventions where being able to chuck functions and closures around is actually useful (e.g. sorting). These language constructs/conventions are useful because they can be extremely expressive, but unfamiliar implicit behaviors cost me much more "working memory" to understand when I'm reading the code.

There's a version of object-oriented programming that mostly sticks to declarative behaviors and uses inheritance, unions, mix-ins, etc. to actually hide behavioral abstractions. The problem is that it often takes 5-10 years of experience before the programmer (myself included) begins to appreciate how important future readability of the code is, and to understand that the more exotic language features should be used sparingly.


"...have logic flows having no obvious connection to one another, or transition through async handlers connected by a convention which isn't immediately obvious to the developer"

This is very true. I hate parsing code written like this, or code written using a mashup of concepts, requiring mind gymnastics to fathom an implementation of a solution to a relatively simple problem. It makes the job so much more unnecessarily difficult.

Points for simplicity if you ask me.


Inverted control is a hallmark of many programming domains. It's not like you can get the user to be synchronous with respect to the program they are using (unless it is a non-interactive batch tool). Even functional paradigms like Rx have control flow running around all over the place; the only saving grace is if you can keep it encapsulated via a set of opaque data flow pipes (and often you can't).


>I think it might be time to phase out teaching imperative and object-oriented programming. Most of the grief in my career has come from them.

Good luck. Computer Science degrees are a balancing act between practical and theoretical. More importantly, students want to work with technologies that are popular in the industry today. You can try to stand against the tide, but you will lose.


Would you mind elaborating on "why separate address spaces connected by pipes are such a powerful abstraction", or point me to some sources? Likewise if there's something you can recommend for reading up on the Actor model?


CSP[1] is also related.

For a quick breakdown of why CSP and the Actor model are both Good Ideas:

Think about any concurrent system (i.e. multiple threads of execution). Any modification of a shared resource by one thread (one concrete example is a global variable, but filesystem, devices &c. all apply too) is implicit communication that can happen at any point whatsoever in another thread's execution.

The state of one thread being modified in unsynchronized ways by another inevitably leads to bugs that are very hard to reason about.

Both CSP and Actors involve removing the implicit communication and replacing it with explicit communication.

Two Unix processes connected by pipes are one example of this, since two processes do not typically share any writable memory.

I don't know if you've written any code designed to be used over a pipe. If you have, you may notice that you do not care at all when the process on the other side of a pipe writes to variables. You do not need locks or mutexes or semaphores or any of those constructs for IPC in this situation.

I don't know if you've ever written any multithreaded code with shared variables, but if you have, you've definitely noticed that you need to carefully hand-synchronize modifications to those variables.

Now of course, the only special thing about the unix processes example was the lack of mutable shared state, and the use of an explicit communication channel. You could write a single-process with many threads that communicate over such channels to avoid the IPC overhead of unix, but still get the simple-to-reason-about concurrency of processes.
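A minimal Python sketch of that idea, using a queue as the explicit channel between two workers instead of a shared variable (the names are illustrative):

    # The only communication between these workers is the explicit channel;
    # there is no shared mutable state to synchronize by hand.
    from multiprocessing import Process, Queue

    def producer(channel):
        for i in range(5):
            channel.put(i)      # explicit hand-off
        channel.put(None)       # sentinel: no more work

    def consumer(channel):
        while True:
            item = channel.get()
            if item is None:
                break
            print("got", item)

    if __name__ == "__main__":
        channel = Queue()
        p = Process(target=producer, args=(channel,))
        c = Process(target=consumer, args=(channel,))
        p.start(); c.start()
        p.join(); c.join()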

Pretty much all of the ideas above were published before 1980.

My dad read Hoare's CSP paper when getting his Master's degree, and he told me "80% of the class didn't understand it, and the remaining 20% of the class thought that manual synchronization was not a problem", which explains much of the buggy, highly concurrent software written since.

1: https://en.wikipedia.org/wiki/Communicating_sequential_proce...


Thank you for the insight. What you're describing reminds me pretty strongly of pure functions. Which I guess makes sense, since "Functional languages can do concurrency out of the box!!" (well yeah, but you need to learn how to write truly functional code first :D)

>"80% of the class didn't understand it, and the remaining 20% of the class thought that manual synchronization was not a problem"

Unfortunately it seems to be this way with many things in our field (/life?). And the 80% non-understanders group doesn't even stay the same people for every topic! :)


Pure functional programs work because what kills you isn't shared state, but mutable shared state, and if you have no mutation, then you have no mutable shared state.

Unix processes work on the opposite side: you can mutate everything, but very little is shared.

With threading and impure code you need to not mutate shared things, and limiting yourself to message-passing is a good way to achieve this (of course you can't message-pass shared pointers or you've missed the point).


The phrase probably refers to Unix pipes: https://wiki.tuhs.org/doku.php?id=features:pipes


Ya, in my mind, executables that do one thing well and are connected by pipes are one of the most proven models for getting things done. Even relatively nontechnical people can write shell scripts, batch files and macros that take some kind of data from a socket/file/pipe, pass it through a bunch of black boxes and spit out an answer. This is very similar to the Actor model and is much simpler to reason about than the shared-state threads of languages like Java. Optimizations like copy-on-write can eliminate most of the overhead of sending data between processes.

Here's a quick overview of how the Actor model is better than shared state (mainly by avoiding locks and nondeterminism): https://doc.akka.io/docs/akka/2.5.3/scala/guide/actors-intro...

And just as an aside, I haven't fully learned Rust yet but from what I understand, the borrow checker only applies to mutable data. If you write a fully immutable Rust program then you can avoid it altogether. So to me, this is a potential bridge between functional and imperative programming. It also might apply to monads, since (from what I understand) languages like ClojureScript can run in one-shot mode where all side effect-free code runs completely and then the Javascript runtime suspends execution, only starting it again once new data is ready. This might also be a bridge between functional and imperative code, because I've found monads to be one of the weakest links in functional programming and I've never quite wrapped my head around how they work or if they're even a good abstraction. Maybe someone can elaborate on them!

In short, the goal here is to reduce all concurrent code to a single thread of execution that is statically analyzable as much as possible. So a typical C++ desktop app may not see much benefit from this, but anyone who has found themselves in Javascript callback hell can certainly appreciate the advantages that async/await provides, since it more closely approximates the message passing of Erlang/Go, which is more similar to shell scripting. If we had a "perfect" language that optimized all code to be concurrent as much as possible, for example by converting for-loops into higher order functions like map and then running them on separate threads behind the scenes, then we could offload the wasted work currently done by humans onto the runtime. So that would mean that instead of writing programs that, say, access remote APIs, cache the data somehow, remember to invalidate it when the source of truth changes, etc etc etc, that complexity instead could be reduced to a dependency graph that works like a spreadsheet and updates all dependencies when something changes. This seems to be where the world is going with reactive programming, with a lot of handwaving and ugly syntax because it's reimplementing old concepts from Lisp etc: https://en.wikipedia.org/wiki/Reactive_programming
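For whatever it's worth, here is a toy Python sketch of that spreadsheet-style dependency idea: cells declare what they depend on, and reading a downstream cell after an input changes recomputes everything along the way (no real reactive framework, just the shape of the idea; the cell names are made up):

    # Minimal "spreadsheet": a cell is either a plain value or a formula over other cells.
    class Sheet:
        def __init__(self):
            self.values = {}
            self.formulas = {}

        def set_value(self, name, value):
            self.values[name] = value

        def set_formula(self, name, deps, fn):
            self.formulas[name] = (deps, fn)

        def get(self, name):
            # Recompute on demand by walking the dependency graph.
            if name in self.formulas:
                deps, fn = self.formulas[name]
                return fn(*[self.get(d) for d in deps])
            return self.values[name]

    sheet = Sheet()
    sheet.set_value("price", 10)
    sheet.set_value("quantity", 3)
    sheet.set_formula("subtotal", ["price", "quantity"], lambda p, q: p * q)
    sheet.set_formula("total", ["subtotal"], lambda s: s + 5)   # flat shipping

    print(sheet.get("total"))   # 35
    sheet.set_value("quantity", 5)
    print(sheet.get("total"))   # 55 -- downstream cells pick up the change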

Sorry this got a little long, I could ramble about the downsides of programming in these times for hours.


> I haven't fully learned Rust yet but from what I understand, the borrow checker only applies to mutable data

This is not correct. The borrow checker applies to references, both mutable and immutable.


I think the intention was that if you have no mut refs, then you have no borrow checker problems.


That’s not true. You still have things like dangling pointers being checked for.


Ok thanks for the followup, it was something I heard but hadn't tried in practice.


No problem!


Thank you for the follow-up and the links! I wonder if your suggestion would hold if it were put to the test and one were to implement a big and complex piece of software strictly with the independent processes + pipes model. I could well see that you would wind up in some other sort of hell, because the problem of understanding lies not in the way it's implemented but in the complexity of the tasks involved.

>Sorry this got a little long, I could ramble about the downsides of programming in these times for hours.

Haha, programmer talk is the best!


Real hardware is imperative. Object-oriented programming was once treated like functional programming. The languages which did implement OOP as it was originally conceived were horribly inefficient. So practical languages took the good idea out of OOP for convenient syntax sugars.

Functional Programming will follow the same path. You probably won't like it, for it will end up impure and just some syntax sugar that looks like FP, but the rest of the world will like it because it gets the job done.


> Real hardware is imperative.

Wrong.

> So practical languages took the good idea out of OOP for convenient syntax sugars.

Also wrong, unless by "syntactic sugars" you mean "nontrivial compiler rewrites"

> Functional Programming will follow the same path.

Doubt it.

> You probably won't like it, for it will end up impure

Purity does not constrain efficiency at all.


>Wrong.

At an abstraction level below digital circuits, you are correct. But thankfully the layers on top of that almost always do a very good job of hiding it.

>Doubt it.

Functional programming is already following the same path. Sure, there will be non-hybrid languages around if you want to use them, but take a look at the functional constructs that have made their way into almost every procedural language in use today.

Most people have never touched Haskell, but almost everyone has seen a map function.
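For example (a trivial Python sketch), here is the same computation written as an explicit loop, with map/filter, and as a comprehension:

    numbers = [1, 2, 3, 4, 5]

    # Explicit loop.
    squares_loop = []
    for n in numbers:
        if n % 2 == 0:
            squares_loop.append(n * n)

    # map/filter: the vocabulary borrowed from functional languages.
    squares_map = list(map(lambda n: n * n, filter(lambda n: n % 2 == 0, numbers)))

    # Comprehension: the idiomatic hybrid.
    squares_comp = [n * n for n in numbers if n % 2 == 0]

    assert squares_loop == squares_map == squares_comp == [4, 16]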

>Purity does not constrain efficiency at all.

Maybe not inherently, but there's a reason we have to break out and use mutable data structures when we really need performance. For the systems we are working on today purity does impact performance/efficiency.


> time to phase out teaching imperative

Knuth would probably disagree.


> show how spreadsheets and functional programming are equivalent

They are not in fact equivalent, unless the functional programming you're doing is trivial.


Could you elaborate on how you came to be programming in C/C++ for 7 or 8 years before encountering a spreadsheet? That seems an unusual path even back in the 90s.


Ya I picked up my first programming book when I was 12, then started with HyperCard and Visual Interactive Programming (VIP) on the Mac Plus with my friend (another guy named Zack):

https://en.wikipedia.org/wiki/HyperCard

http://mstay.com/images/screens/vpc1.gif

So I thought of code as a big flowchart where you fill in the boxes with business logic. I transitioned to C within a year or two and got incredibly deep into assembly language and low-level code for blitters back in the 486 era when DOOM came out. Unfortunately, moderately priced Macs were an order of magnitude too slow to play fullscreen scrolling games at 640x480x8 resolution until the 60 MHz PowerPC came out in the mid 90s. So I never made any real money on Mac shareware games, but I digress.

The Mac had an informal programming environment (no console or office suite with standardized inter-application communication) and I hadn't seen MS Access or FileMaker yet, or spreadsheets. When I first encountered them for writing reports in college and it finally clicked for me that code didn't have to be run imperatively, it was devastating in a way. But I made a full recovery in the early 2000s when PHP/MySQL got popular and I was able to get back into rapid application development with ORMs. I feel like the world is about ready to make the jump from hands-on tools like Laravel to WYSIWYG tools like Wix. Most of the attempts I've tried are frankly terrible, but I'm optimistic.


If you compare the ease of "authoring" in the top hypermedia environment of its day (HyperCard) with doing the same today (the web), it's plain as day that things have regressed.

Here is a straight up comparison someone made if anyone is interested:

https://twitter.com/ecgade/status/1029795513514774529



