This post itself is a prime example of how programming is writing, and writing is thinking. Some of the best programmers I have known have been philosophers and lit majors. Constructing an argument, telling a story: they all have to be broken down into cohesive parts. How many times have you come across code modeled after the style of a child, where it just says, "and then ___, and then ___, and then ___" until it stops and says it's over?
Until end users can construct their own languages, language designers have a long way to go.
>How many times have you come across code modeled after the style of a child, where it just says, "and then ___, and then ___, and then ___" until it stops and says it's over?
Few times, alas. Most of the time it's some over-engineered monstrosity by architecture astronauts or people trying to be clever.
My thoughts exactly, though it should be noted that the architecture astronauts are more common in customer-facing software. In most backend projects I've been involved in, the final state of the project is well known enough that the corner cases are covered by the chosen architecture. For frontend clients, however, the architectural choices mostly end up as tech debt.
Sometimes I prefer "and then, and then, and then" over "sign up an async observer, send it through our java-redux, oops, forgot to implement async actions, hack an and-then-and-then-and-then case into it" solutions.
"Some of the best programmers I have known have been philosophers and lit majors"
Or musicians, like Rich Hickey. Without a strong sense of aesthetics you can't build good software; that's where the "over-engineered" stuff comes from. Aesthetics and a sense of proportion work miracles.
I'm not sure what your definition of best programmer is, but creating your own language in order to solve a problem is, in my opinion, an extreme measure that is very rarely justified.
I like code that says "does this, then that, then that, etc." (as opposed to, e.g., the misuse of the concept of abstraction in Java development). It's easy to read and naturally builds into a better mental model of what's going on, which is especially important if you are reading code from somebody else. This is not a test of intelligence, as some programmers may be inclined to think. It's about using what solves the problem - easily - even if it is "this then that" style.
I don't want to come into work and fight to untangle the inner mind of a crazed developer. I'd rather come in and use my brain to add value (reading simple, clean code, and using my brain to add to it).
Goodness. If we had to learn a language just to be able to consume a fellow engineer's mental model, we would all be in trouble.
Isn't Lisp particularly good at making your own DSL?
Anyway, the majority of the programming community tends to dislike that sort of freedom, preferring maintainability and the ability to work as a team where every team member isn't off creating their own dialect.
>Isn't Lisp particularly good at making your own DSL?
Anyway, the majority of the programming community tends to dislike that sort of freedom,
The majority of the programming community doesn't use Lisp.
Lisp metaprogramming is very different from metaprogramming in other languages (extreme opposite example: C preprocessor macros). Lisp's homoiconicity makes macros easy to write and easy to maintain.
The majority of the programming community dislikes that sort of freedom, but the overwhelming majority of Lisp programmers considers it an essential freedom.
After all, we're all doing high-level programming. We need tools to abstract stuff, and the more complex the problem is, the more powerful the abstraction features should be.
The state of the art in abstracting stuff in 2018 is still either Hindley-Milner languages (the ML family), Prolog and friends (Erlang), or Lisp (Common Lisp, Scheme).
I suspect that LISP has the same problem that Smalltalk-72 had: creating DSLs is "too easy" and unconstrained. Language design is tricky and very abstract and nobody is a good language designer in the heat of the moment.
Oh, and while very unconstrained in some sense, the syntax also strongly "suggests" some sort of applicative model, so call/return again.
So maybe too constrained where it shouldn't be, and too unconstrained where it would need to be more constrained.
What exactly would be an alternative to call/return? I've seen that term thrown around before but I don't know how to Google for it effectively. Is it an alternative to other execution models like fork-join or something?
There are a number of architectural styles that aren't call/return:
1. Dataflow
With the popular variant, pipes + filters. There is also synchronous dataflow, which can be compiled to state machines. This has recently been bastardized as "FRP" (distinct from Conal Elliott's original FRP).
What's kind of amusing is that traditional programs typically get translated to dataflow variants twice: first the compiler turns procedures into a dataflow graph and generates code from that; later, most out-of-order speculative processors turn the computation into more or less a dataflow graph again.
2. Asynchronous messaging
See Erlang/Actor model. You send messages (calls), but they don't return.
3. Implicit invocation/broadcast
You send out messages/broadcasts. Clients can register. There is no 1:1 correspondence between call-sites and receivers. May also be asynchronous (but doesn't have to be).
4. Constraints
Both dataflow constraints (see spreadsheets) and "solver" constraints. You just specify the relations you want to hold.
5. Coroutines
Normal call/return runs the callee to completion; coroutines interleave (see the sketch at the end of this comment). The Floyd paper mentions nested coroutines, which I have to look at.
etc.
A good reference is Software Architecture: Perspectives on an Emerging Discipline by Shaw and Garlan.
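To make #1 and #5 a bit more concrete, here's a minimal Python sketch (the stage names are invented) of a pipes-and-filters dataflow built from coroutines. Each stage yields one item at a time and interleaves with the stages downstream of it, instead of running to completion and returning:

    # A pipes-and-filters dataflow built from Python generators.
    # Each stage is a coroutine: it does not run to completion and
    # return; it yields items one at a time, interleaving with the
    # stages downstream of it.

    def numbers(limit):
        # source filter: emit the integers 0..limit-1
        for n in range(limit):
            yield n

    def evens(stream):
        # filter: pass through only the even numbers
        for n in stream:
            if n % 2 == 0:
                yield n

    def squared(stream):
        # filter: transform each number into its square
        for n in stream:
            yield n * n

    # Wire up the pipeline; nothing runs until the sink pulls.
    pipeline = squared(evens(numbers(10)))
    print(list(pipeline))  # [0, 4, 16, 36, 64]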
That's probably because the majority of the programming community is building commodity products, which are optimized for delivering consistent results on schedule with cheap and replaceable staff. That is good for the business of making such commodity products, but it puts a limit on what kinds of products you can make.
Sure, and I'm not saying anything negative about Lisp, just pointing out to the parent that languages well suited for DSLs have existed for some time, which may or may not satisfy what the parent had in mind.
Hey, I took off and hid my "smug Lisp weenie" hat for a reason :).
Seriously, I replied to the second part of your comment. Lisp may be the pinnacle of DSL-friendliness, but, as you say, plenty of languages have various measures of metaprogramming support, and yet, as you note, those facilities tend to be universally disliked by the mainstream programming community :).
(I believe that another strong reason for this is that the mainstream programming community is, for better or worse, eternally stuck in "cater to the newcomers" mode. Effective use of metaprogramming requires a deep understanding of the more abstract principles behind programming and complexity management, and such facilities are indeed dangerous in the hands of beginners.)
There's an effect I've witnessed that I dubbed the "insecure intermediate metaprogrammer". Developers who have a strong desire to prove that they are no longer 'intermediate' programmers start taking advanced language features and trying to apply them to the problems they are given.
The idea, as I see it, is that they think that at a later date they can go "I'm not an intermediate programmer, I'm an expert because I have used advanced language features like metaprogramming". At the time they will probably not admit that this is a surreptitious attempt at career building.
Metaprogramming, by its nature, is sort of like amplified programming - it can be very powerful and save on tons of boilerplate if you use it at the right time in the right place. However, with that power comes a greater potential to create an almighty steaming pile of code shit that is way more horrible to reason about than even the worst code written by the most junior programmers. The intermediate programmer will likely not realize this.
Moreover, the intermediate doesn't actually have the kind of problems it solves. Metaprogramming is useful for building frameworks and I've seen it put to good use there on heavily tested open source which has hundreds of eyes on it to spot every issue and keep its natural tendency to get out of hand in check. But, however much many developers really want to believe otherwise, building frameworks is a very niche thing. It needs to be done well, but it does not need to be done often.
So the intermediate programmers then end up creating a glorious code turd of magnificent proportions in an attempt to prove that they are really experts. This creates code that gives juniors and experts alike nightmares. The juniors get conniptions because they think that they're just too dumb to know what is going on. The expert will (hopefully) be furious and demand that the shit is cleaned up. Hopefully at some point the intermediate will achieve the zen moment of "knowing when, not how to use metaprogramming is the truly important skill".
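To be fair to the technique itself: used at the right time in the right place, it really does save boilerplate. A minimal Python sketch of what I mean (the field names are invented for illustration) - a class decorator that generates validated properties instead of a dozen hand-written, near-identical getters and setters:

    # A class decorator that generates one validated property per
    # field name, replacing repetitive hand-written properties.
    def with_positive_fields(*names):
        def decorate(cls):
            for name in names:
                attr = "_" + name
                def getter(self, attr=attr):
                    return getattr(self, attr)
                def setter(self, value, attr=attr, name=name):
                    if value <= 0:
                        raise ValueError(name + " must be positive")
                    setattr(self, attr, value)
                setattr(cls, name, property(getter, setter))
            return cls
        return decorate

    @with_positive_fields("width", "height", "depth")
    class Box:
        def __init__(self, width, height, depth):
            self.width, self.height, self.depth = width, height, depth

    b = Box(2, 3, 4)
    print(b.width * b.height * b.depth)  # 24
    # b.width = -1 would raise ValueError

The intermediate's version is the same trick applied to one class with two fields, where a couple of plain properties would have been shorter and clearer.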
I think part of it is just achieving enough self-confidence that you no longer feel you have to prove yourself. I'll admit to inflicting overly-clever code on a previous employer when I first got into Scala (roughly when my career was at the 4-5 year mark).
I'm a bit embarrassed about it now, and it's exactly as you said: The older devs were quite visibly annoyed with me, the less experienced ones deferential :)
I realized my mistake after I had given my notice at that job and had to walk another dev through a module I had written :O
I'm much better now. I've been programming in Clojure for almost 2 years, and I don't think I've ever once used a defmacro in my own code :)
I think it's partly that, but I don't think it's purely about self-confidence, and it isn't limited to metaprogramming. I think it's also a market-driven effect. Developers will see advertised jobs that require doing X and pay a lot more than what they currently make, and think "I should gain some experience in X so I can sell it at my next job interview". Most developers have probably felt this instinct at some point, even if they're plenty self-confident.
I don't think it's such an easy problem to solve because, whatever X is, developers will probably have to fuck up using it first. Then they get good at it. The higher pay is going to keep them motivated to use it, and it's not usually practical to tell programmers that they should put their career development on hold because it's torturing your architecture.
Furthermore, sometimes X will more or less do the job ok even if it wasn't strictly necessary. In that case, was it really such a bad thing?
Anyway, I don't see it as something to be embarrassed about. Everybody screws up and learns from it.
haha, this reminds me of a chunk of self-rewriting machine code I wrote for the C64 (where such enhancements could be justified). Every time I wanted to expand on it I had to sit myself down and look at it FOR A FEW HOURS just to remember what the fuck was going on. And it wasn't even big. The hilarious thought that someone else would one day try to decipher it crossed my mind hundreds of times. "Okay, so this is where the code generation rewrites itself... again."
Sure, you can have conditionals but you can also replace them with no-ops and sure you can have no-ops but you can also move the lower part of the code up.... or should that be the upper part down? Which one is the bigger? How to keep track of this modification? surely moar code morphing is the answer?
Or, to put it another way, most of us aren't building abstract programming frameworks to build programming frameworks. We're building useful software people need.
JavaScript is modeled on Scheme. Can you create DSLs in JS? (Legitimate question; I'm not familiar enough with JS to know for sure. Theoretically, it seems possible.)
From your first link...
1) Minimalism.
2) Dynamic typing.
3) First-class functions and closures.
"Modeled on" <> "is"
Brendan Eich admired Scheme and was still very familiar with SICP at the time. Perhaps "inspired by Scheme" would clear up the misunderstanding?
My question was about creating DSLs in JS. Is it simple enough that people actually do it in professional software development?
If it is, then I call into question the assertion that the majority of the programming community doesn't want to use languages giving them that much freedom.
I've not seen a JS DSL attempt that wasn't merely a "toy", but there's probably more room to explore ideas than people give JS credit for. You could also argue that there are plenty of DSLs out there as things like Babel plugins.
I guess the objections are more about DSLs being the main abstraction once you escape structured programming, and less about DSLs being available at all.
It lacks coherence and overview and structure. I'm sure you'd agree that taking related "and thens" and wrapping them up in a procedure would be helpful. The next step after that is coming up with more informative relations than "and then".
A description like
"I went off the platform and then on the train and then I looked at the nearest seat and then I saw it was occupied and then I looked at the next seat and then I saw that was occupied too snd then I repeated that and then I saw an empty seat and then I walked toward it and then I sat down and then ..."
is begging for structure by wrapping up into procedures
"I boarded the train, and then I located the closest free seat, and then I sat down in it ..."
with the obvious definitions. That in turn can be further improved to
"After boarding the train I sat down in the first free seat ..."
where the relations between the things are more useful than "and then".
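In code, the same progression might look like this (a toy Python sketch, with invented names): the flat version narrates every step, while the refactored version names the intent and hides the seat-scanning loop behind a procedure:

    # seats is a list of booleans; True means occupied.

    # "And then" style: one undifferentiated sequence of steps.
    def ride_flat(seats):
        i = 0                  # I looked at the nearest seat,
        while seats[i]:        # and then I saw it was occupied,
            i += 1             # and then I looked at the next one...
        return i               # and then I sat down in seat i.

    # Wrapped into procedures with the obvious definitions:
    def first_free_seat(seats):
        # locate the closest free seat
        for i, occupied in enumerate(seats):
            if not occupied:
                return i
        raise LookupError("no free seat")

    def ride(seats):
        # "After boarding the train I sat down in the first free seat"
        return first_free_seat(seats)

    print(ride([True, True, False]))  # 2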
Just how I program after 10 years of practice. Tables and procedures mostly :-). No considerable developments in programming languages needed as far as I'm concerned.
How about a language that lets you put your procedures in tables?
I often wish for a richer way to lay out a complex 2-level decision tree. Dividing it up into sub-functions/classes scatters the logic, while putting it all together is too heavy. A 2D decision table, however, would often be perfect.
I remember reading someone's proposal for 2D decision tables in code quite a while ago, maybe 15 years.
I've just tracked it down. Here's something about it by the same author, Roedy Green (author of How to Write Unmaintainable Code):
http://mindprod.com/project/scid.html (look for the words ‘decision table’)
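For what it's worth, you can approximate the idea today with a table of procedures keyed by condition tuples. A minimal Python sketch (the conditions and actions are invented, purely to show the shape):

    # A 2D decision table: both axes of the decision are visible in
    # one place, instead of being scattered across nested if/else.
    def refund(order):   return "refund " + order
    def replace(order):  return "replace " + order
    def escalate(order): return "escalate " + order
    def reject(order):   return "reject " + order

    DECISIONS = {
        # (in_warranty, damaged): action
        (True,  True):  replace,
        (True,  False): refund,
        (False, True):  escalate,
        (False, False): reject,
    }

    def decide(order, in_warranty, damaged):
        return DECISIONS[(in_warranty, damaged)](order)

    print(decide("order-42", in_warranty=True, damaged=False))
    # refund order-42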
Any language that lets you put anything in tables lets you put procedures in tables. Most languages do not offer reflection for their own syntactic elements (probably for good reason); however, if you need that, you might want to make your own language anyway...
    # now call the reqd. function via string s read from somewhere: keyboard, file, etc. ...
    t = {"hello": (lambda: print("hello")), "bye": (lambda: print("bye"))}  # table of procedures
    s = "hello"  # stand-in for the string read in
    t[s]()  # error handling (KeyError) omitted
Kidding aside:
>Dividing it up into sub-functions/classes scatters the logic
What is the issue with scattering the logic? If you break up your problem/solution into different logical units, with good judgment as to the points of breakup (considering coupling, cohesion, etc.), then what is the issue? It's not clear to me.
Well, no. Each individual piece in the child-like code might be easy to understand, but you lose the forest for the trees. You, as the reader, have to reconstruct how all the simple (simplistic?) pieces fit together to accomplish, well, whatever the code is supposed to do. Compared to over-abstracted code, you've just shifted the difficulty from understanding an explicit structure created by the programmer to mentally extracting the implicit structure of the code.
A good point of comparison is technical writing. Think back to the best textbooks you read: they didn't have the flourishes and sophistication of literary writing, but they still had a significantly richer structure and organization than, say, a children's book. They needed this richer structure to get their ideas across effectively. You can't write a coherent textbook in the style of Where's Spot?
The same goes for code. You can be too clever, sure, but you can also not be clever enough. On a scale from "Salman Rushdie" to "Clifford the Big Red Dog", you don't want to be at either extreme.
Is this hard? Yes! Writing readable code is a skill unto itself, distinct from writing working code. People don't always agree on what is and isn't readable, but that's true for writing too. Still a worthy goal, just one that doesn't lend itself to simple rules. You just have to build up the right skills from experience.
Haha, great example—that's exactly what I want from my bash scripts too :). Just a simple list of commands.
Salman Rushdie style is reserved for libraries that make a massive impact. Learning abstractions like that is like learning a new part of the language, so it has to really pay off. Rare, but not impossible.
The Haskell lens[1] library is a great example. It's almost too clever for its own good and learning it is like learning a new programming language, but it is such an improvement for the entire codebase it's worth it. I use it widely at work and it pays for its own difficulty almost immediately. (It's also important that it can't achieve some of its core functionality without the "clever" things it does.)
The documentation could use a bit of work though :/.
I've always thought Haskell was later-James-Joyce clever :)
I'm not sure it is really appropriate for any code to be more than Robert Frost clever; I sometimes think that should be clever enough for anyone. But this is only a periodic opinion.
on edit: I guess that means I'm saying no language needs to be more clever than Python.
We don't talk about cgroups and compressed files with metadata; we talk about containers and images. We don't talk about individual machines running a number of highly specialized daemons with configuration files, cryptographic keys, and networks-in-networks; we talk about clusters and pods and gateways.
Notwithstanding that much of the code which has built up these sweeping generalizations is itself a tangle of red dogs, the abstractions are very high-level and mean we rarely have to discuss infrastructure at a very low level.
Of course, there's a twist to that as well, one that hearkens back to an earlier statement: the understanding has been limited to an extremely high level as well. The number of people who can troubleshoot a container cluster is small, and growing smaller. We've intentionally created CliffsNotes of "Salman Rushdie" novels and believe that's all the populace needs.
This style has quite a high cognitive load: how many "and then"s can you hold in your short-term memory before you lose the context completely?
I believe the art of programming is actually writing the code in a way that transitions the reader through context changes while keeping the amount of context necessary to understand what's going on to a minimum.
I strongly agree. I usually express this as "minimizing complexity", but I like the "guiding through context changes" phrasing.
One of the best things about humans is how easy it is for us to work at multiple levels of abstraction simultaneously, as long as we're guided to and between them properly. Good code - hell, good engineering - exploits this to create things which otherwise wouldn't fit in a person's head.
Or, how little can you make the reader have to think and still be able to really understand the code? There's a sweet spot there, where either simpler (or more simplistic) or more complicated is a net loss to the reader.
The problem is, the location of the sweet spot depends on the reader (or at least, on their familiarity with the idioms used...)
It's OK for short code/stories, but how many pages of that would you read? There's often a point where there are just better structural choices than the crude linear approach -- hence OOP, FP, RP and everything else.
Programming is specialized work (I'd call it a discipline, but then we'd have to argue whether it's more an engineering discipline or art discipline). You have to gain knowledge and experience to understand it.
A bridge blueprint is not understandable to all. It is understandable to all who took the time and effort to gain the education necessary to understand it.
(The motivation behind this comment is to counter the increasingly noticeable sentiment in our industry that everything needs to be dumbed down to the lowest common denominator; it's an understandable sentiment if you're selling something and want the biggest market, but it goes against what's needed to build great things.)
The thing is, people have been building bridges for thousands of years. What a bridge is and what it does changes very slowly. Almost all the specialised knowledge that bridge builders have is about _how_ to build a good bridge.
Software is not like that. We may be experts in programming languages, algorithms and data structures, but the things we create and the problems we solve keep changing all the time.
We're not usually experts in these problem domains and in some cases there are no such experts at all. So our code needs to be a lot more descriptive and written for readers that may not be very familiar with the problem at hand.
We are more like lawyers supporting law makers or even like law makers themselves.
We think bridges are a very well explored domain. Then something like the Tacoma Narrows incident happens, and we realize how much of a fallacy that really is.
Bridges are unique to their locations and their scale: what works over a creek will not work over a ravine. What works for a pedestrian will not work for a car. What works for a car will not work for a train. What works for a train will not work for a marching army.
The Tacoma bridge collapse has a Wikipedia entry precisely because bridge building is a very well explored problem and bridges don't usually collapse.
I don't doubt that each bridge comes with unique challenges. But the purpose, user interface and constraints of bridges have remained stable enough for long enough to allow specialisation. That is not the case in many areas of software development.
>everything needs to be dumbed down to the lowest common denominator
That kind of ought to be the default for most code. However, it's the natural tendency of code to become abstruse and unnecessarily complicated.
Since market forces will often tend to exploit rather than reward the work programmers do to "dumb their code down", and inadvertently reward "clever magic understood by a limited number of experts", there's more of an incentive to amplify this effect than to work against it - especially where money is involved.
I'm a bit conflicted because on the one hand I don't want to help developer compensation get ground down by billionaires or other business owners with an entitlement complex who consider developers to be "spoiled brats", but I do prefer working with clean, straightforward code.
There are times when you simply have hard problems.
For example, take code generation (I actually have in mind a SQL query generator, but a compiler back end would do just as well). The problem is already abstract - the input is code, you're naturally writing code that manipulates code, like you would with macros or reflection. The problem is complex: you can generate simple code, but it won't perform well. There's irreducible complexity here that cannot be simplified away.
You'd also like the code generation itself to be fast. That puts a limit on how much you can break it up into parts that can be understood individually and in isolation. Optimization works in opposition to abstraction because the optimal often requires steps that span multiple abstraction layers, and often requires multiple instances of the specific over few instances of the general.
Personally, I think the two biggest reasons code becomes unnecessarily complicated these days are (a) testing and (b) local modifications. Unit testing in particular encourages over-parametrization so that dependencies can be replaced for the purposes of testing; while normal software maintenance under commercial pressure leads to local modification because nobody has time to understand the whole. People instead make conservative local changes by adding parameters, extra if-statements, local lookup maps, etc.
I've found elegant solutions are often on the other side of a hill from over-engineering. You write specific solutions, then you climb a hill of abstraction as you add layers, indirections, parameters, etc., until you reach a summit where you can see the whole, and can then start boiling things back down again, only retaining abstraction where it's actually necessary, or perhaps replacing multiple abstractions with a single more powerful abstraction (I've found this to happen a lot with monads; another one is converting control flow into data flow).
>There are times when you simply have hard problems.
>
>For example, take code generation (I actually have in mind a SQL query generator, but a compiler back end would do just as well). The problem is already abstract - the input is code, you're naturally writing code that manipulates code, like you would with macros or reflection. The problem is complex: you can generate simple code, but it won't perform well. There's irreducible complexity here that cannot be simplified away.
Yeah, sometimes you do, and that is exactly the kind of problem that is irreducibly complex. But I think new kinds of problems like this don't tend to crop up in the wild very often, and when they do, they tend to show up in subtle and non-obvious ways.
The problem you've described is far from a new problem - it's the same problem space that is covered by ORMs. Furthermore, if I were working on a team where a developer had uttered the words "I've created my own ORM" (or something to that effect), my face would probably have already landed in my palms.
I'm in vigorous agreement with the rest of what you wrote, though - especially the parts about unit testing, local modifications and "the other side of the hill". I've seen all of that.
> The problem you've described is far from a new problem - it's the same problem space that is covered by ORMs.
This is a digression.
I have in mind something I wrote, the most complex piece of code I've written in the past couple of years. It isn't actually well covered by ORMs. ORMs are usually tuned for (a) static schemas, and (b) graph navigation in OO style. Give them a problem like "here's a filter in the form of a syntax tree; please give me the top 100 results from this 10-million-row table" - where the table schema is determined at runtime - well, most ORMs can't even answer the question, because schemas are assumed to be static. And if you want to control the join order using nested subtable joins, with predicates that don't need joins pulled out of the filter expression and pushed down - because MySQL's optimizer observably doesn't reliably do the right thing - ORMs don't give you that control.
It wasn't an ORM kind of problem; think more something like a read-only Excel spreadsheet, but with typed columns, and a rich autofilter, on the web, scaling to millions of rows. The output of the database query is a page of tuples, not objects in any behavioural sense.
In some ways a database seems like the wrong solution, but morphologically it's exactly right: the user data is rectangular, relational, has foreign keys to other tables, and needs to be sorted, filtered and joined with other user-defined schemas. Modelling the user schema as database columns performs better than any other database solution, and database solutions are preferred because of shared state, transactions, etc. Because the product is closer to being an actual database than a program using a database, ORMs aren't tuned for it.
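To give a flavour of the filter-as-syntax-tree part, here's a much-simplified sketch (not the actual code; the node shapes are invented) of compiling such a tree into a parameterized WHERE clause. The real thing also has to choose join order and push predicates down, which is where the irreducible complexity lives:

    # Compile a filter syntax tree into a parameterized SQL WHERE
    # clause. Nodes are ("and", ...), ("or", ...) or
    # ("cmp", column, op, value). Column names are assumed trusted.
    def compile_filter(node, params):
        kind = node[0]
        if kind in ("and", "or"):
            parts = [compile_filter(c, params) for c in node[1:]]
            return "(" + (" " + kind.upper() + " ").join(parts) + ")"
        if kind == "cmp":
            _, column, op, value = node
            params.append(value)  # value goes out-of-band as a parameter
            return column + " " + op + " %s"
        raise ValueError("unknown node kind: " + kind)

    params = []
    tree = ("and",
            ("cmp", "status", "=", "open"),
            ("or",
             ("cmp", "priority", ">", 3),
             ("cmp", "assignee", "=", "bob")))
    print(compile_filter(tree, params))
    # (status = %s AND (priority > %s OR assignee = %s))
    print(params)  # ['open', 3, 'bob']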
OK, so when you said "SQL query generator" I interpreted it as "a generic tool for SQL query generation", i.e. an ORM. Metaprogramming makes sense there.
Your rather unique use case certainly falls outside the remit of what ORMs provide (cutting down on generic SQL boilerplate), but I'm unconvinced that building it required especially powerful language features.
> I'm not sure what hard problems have to do with code readability.
Code can be hard to read because it's badly written, or it can be hard to read and understand because it deals with a hard problem.
> I've seen hard to read code that solves simple problems and I've seen easy to read code that solves hard problems.
I've seen easy to read code that solves simple problems that seem hard because of things like combinatorics (e.g. Sudoku solvers and the like). Actually hard problems don't have simple solutions; there's a complexity that doesn't go away no matter how you express the solution. This is doubly true when there are constraints on the solution in execution time and space, because such constraints limit how much you can break the problem down.
> Why are you advocating for the hard to read code?
Exactly the kind of over-engineering I'm talking about.
The objective is to solve business problems, not write tests. Warping design to introduce unnecessary abstractions and indirections for testing is what leads to Java-itis, with factory factories.
This is exactly why I only ever test at the edges of a project, using mock systems that are as realistic as possible given time and resource constraints. The industry has an unhealthy obsession with ultrafast, ultra-unrealistic, low level unit tests that tightly couple to implementation details rather than behavior.
I don't think this is over-engineering, though. Even if unit tests didn't warp designs they'd be a bad idea. I think it's just bad engineering based upon stupid dogma spread by the likes of J.B. Rainsberger and Uncle Bob.
The problem is, the converse is not always true. Problems that are very easy to formulate ("find the k shortest paths from my house to my office") might require very sophisticated, non-obvious, non-simple code to be solved in a non-naive way.
We all know that sometimes problems are easy to state and hard to solve. To my mind, the problem you describe sounds quite possibly impossible to do better than brute force, so any code at all that solved it would be to a certain extent insightful (but the simpler the better - and truly great code for that problem would be code that let me understand how an easier solution was possible).
I highly disagree with this, though I like your statement in principle.
Greatness is highly dependent on context. I think your statement on greatness would apply to code that helps you learn and think.
This is, however, NOT the kind of code I'd want to see at work. Great code in a business setting is simple, easy to understand, and has absolutely no subtlety.
If I was looking for a literary example, great code in a business setting would be like the writing style you'd see in a newspaper: written for easy consumption by the greatest number of people possible.
If you write in a child's language, you will only be able to write children's books.
To add to p2t2p's comment, procedural programming doesn't offer very powerful high-level abstractions. OOP on the other hand is so powerful that it's easy to do wrong.
> We have an unfortunate obsession with form over content
I partially agree with this observation but I also think that it’s a bit misleading: especially if used, as here, to refer to the distinction between programming languages (and their grammars), rather than paradigms.
In truth, different programming languages are highly coupled to different (classes of) paradigms, and the grammar of a programming language often prescribes, or at least culturally supports, a given set of paradigms. There might not be much difference between, say, Pascal and FORTRAN, to use Floyd's example. But asking a professor whether they teach Lisp, Java, Python or Haskell in their introductory programming course would tell you a lot about the paradigms they teach.
I think verbosity occurs when the language can't easily represent the paradigm. And rather than use the proper paradigm, programmers do what is easy to represent. Lightpost fallacy of hammer utilization.
When we have better language creation tools we will get more outsiders, who will bring more paradigms.
One thing I wish was addressed in general in all of education is the style and analysis of ways of solving problems. Something like solution-formulation aesthetics.
In high school, I didn't get to see a computer until after the introductory computer science course. And so I would assume that the basics are trivially the same, no matter which compiler is chosen to go along with the course.
E.g. Stanford switched from Scheme to Python, using the same workbook anyway. Does that tell you anything? Probably that there's a Lisp in every language.
“Then I look for a general rule for attacking similar problems, that would have led me to approach the given problem in the most efficient way the first time. Often, such a rule is of permanent value.”
I like this. It’s a rule for finding useful rules. I wonder if I can get good at this kind of analysis.
Would someone kindly explain which paradigms HDLs like VHDL and Verilog fall into? Has anyone enumerated these paradigms, including a study of the deficiencies in programming caused by the discouragement of unsupported paradigms?
Any pointers would be highly appreciated. Thanks :)
They belong to the category of synchronous programming languages; the terms synchronous dataflow, dataflow, and synchronous reactive are also variously used. See: https://en.wikipedia.org/wiki/Synchronous_programming_langua... . This paradigm has also been used in embedded and real-time software. The recently deceased Eve programming language combined the synchronous and relational paradigms. I think it's a neglected approach to concurrency, and I was sad to see Eve run out of money, because it seemed like the first time a synchronous programming language for software and "for the masses" was being developed. If you want a detailed, systematic account of programming paradigms (without detailed study of which applications they are good and bad for, I'm afraid), I would highly recommend Concepts, Techniques, and Models of Computer Programming by Van Roy and Haridi (the book leans towards dataflow-oriented paradigms, explaining a few other paradigms using that idea).
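As a toy illustration of the logical-tick idea in ordinary code (a Python sketch; the signals are made up): the program advances in discrete ticks, each tick reads one input sample and emits one output sample, and within a tick the output is computed "instantaneously" from the input and the previous state:

    # A toy synchronous program: a counter that reacts to its
    # environment in a sequence of logical ticks. State changes
    # only between ticks.
    def counter(button_presses):
        count = 0
        outputs = []
        for tick, pressed in enumerate(button_presses):
            if pressed:
                count += 1
            outputs.append((tick, count))
        return outputs

    print(counter([False, True, True, False, True]))
    # [(0, 0), (1, 1), (2, 2), (3, 2), (4, 3)]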
> The synchronous abstraction makes reasoning about time in a synchronous program a lot easier, thanks to the notion of logical ticks: a synchronous program reacts to its environment in a sequence of ticks, and computations within a tick are assumed to be instantaneous,
I wouldn't classify HDLs as synchronous or reactive programming explicitly, because the last assumption in the quote (instantaneous ticks) is emphatically false when doing HDL programming. Synchronous programming (as with FRP systems in Haskell, usage of MobX or Vue in JavaScript, or other such libraries) affords the programmer this tick-based, Excel-like approach to expressing cause-effect chains. You really can't do this in, say, Verilog if you want a functioning program.
You can definitely argue the toss about paradigms, since things tend to overlap and blur into each other unless you're careful. It's true that if you want a synthesisable design you need to meet timing. However, once you meet timing, you can assume your ticks are instantaneous. I would argue that, for me, there is still a somewhat similar feel to the parts of HDL that don't feel like hardware design, e.g. creating pipelines. This is a limitation of the hardware rather than the programming language per se.
This type of imprecision seems common when talking of programming (language) paradigms: are they based on languages in an idealised world, execution environments, language ecosystems, APIs exposed by the operating system? We can only really talk about them in an exact sense when we work in an artificial environment as in Roy and Haridi. Given the inherent imprecision, I don't see why when we deal with real systems, we shouldn't bend definitions a little bit and say a language "sort-of" or "mostly" exhibits a certain paradigm.
Maybe "pipeline programming" is a pretty apt description. But to your point, you could argue that VHDL and Verilog are just bad synchronous programming languages haha.
I think one of the issues is that most sellers of "new" programming paradigms try to sell theirs as "the one". As Floyd pointed out, there's tons, and putting on my conservative hat, there is probably a reason, at least partially good, that the currently dominant paradigm became dominant.
In fact, if I only had one paradigm to play with, I'd probably also want the current one, despite its limitations.
So what we then do is take the current paradigm and try to conservatively extend it. That also doesn't seem to work because you can't just bolt paradigms on top of each other. So putting on my revolutionary hat, we have to blow it all up.
I think the solution is generalization ("refactor to abstract superclass"): do something more general, then reproduce the dominant as well as other paradigms from within that generalization.
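To make the metaphor concrete, the literal version of the move looks something like this in Python (the classes here are invented): the specific cases exist first, the general superclass is extracted afterwards, and each original case is then reproduced as one instance of the generalization:

    # "Refactor to abstract superclass": extract the general frame,
    # then recover the dominant case and others as instances of it.
    from abc import ABC, abstractmethod

    class Traversal(ABC):
        # the generalization: visit items in some order
        @abstractmethod
        def order(self, items):
            ...

        def run(self, items, visit):
            for item in self.order(items):
                visit(item)

    class ForwardTraversal(Traversal):   # the old dominant case
        def order(self, items):
            return items

    class ReverseTraversal(Traversal):   # another case, same frame
        def order(self, items):
            return reversed(items)

    ReverseTraversal().run([1, 2, 3], print)  # prints 3, 2, 1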
Think about composing a song (an orchestral piece to be exact). You have beats and there are measures. The entire work has parts that are played by various instruments. The functions you write are melodic lines that are played by individual instruments. However, they all need to be timed in such a way that the whole thing occurs in concert with each other.
HDL programming has functional elements (combinational logic) that are reasonably declarative in nature: given this combination of input bits, produce this set of output bits. There is also the timing-sensitive imperative portion alluded to above. This would mirror your standard C programming (routines, finite state machines), but with the additional constraint that timing must be accounted for. Imagine concurrent/parallel programming, but instead of mutexes you can control the program flow at a lower level to avoid read/write hazards as appropriate. Of course, verification/testbench software helps validate that the whole thing will operate well on various pieces of hardware (and can even take things like temperature into account). There is a bit of knowledge regarding the underlying transistor logic that is helpful but not mandatory (for example, it explains what metastability is, or why a floating (high-impedance) input consumes more power).
It's a bit much for some, but I like it from a pure hobbyist standpoint.
For a more general analysis of programming paradigms, see the work of Peter Van Roy [2], which decomposes paradigms into the individual components [1] that are used or avoided in each particular paradigm.
My understanding is that traditional Verilog is declarative in the same sense that SQL is. SystemVerilog is that, but it can also be written as if it were C-like. Then the whole thing has C++ templates bolted on as well.
It's also more of a hardware design language than a programming language, so the paradigms of classical programming languages might simply not apply all that well.
Neat! I was just looking at this last week; I'm currently writing a paper on the problems our current monoculture is causing, with almost everything being a variant of the call/return architectural style.
"In evaluating each year’s crop of new programming languages, it is helpful to classify them by the extent to which they permit and encourage the use of effective programming paradigms."
Well, currently the answer to this is very Fordian: you can have any effective paradigm as long as it is call/return. And we will make all other paradigms ineffective, so that works out. :-)
"When we make our paradigms explicit, we find that there are a vast number of them."
And that's the issue: when our languages only support a single paradigm, we then have to encode every other paradigm in such a manner that it becomes implicit in the patterns that are used.
In fact, even OO isn't really supported by current languages: the crux of OO is the connections between the objects, but you generally cannot write down connected objects; you must write procedures that build the connected objects. I've always felt that the people who wanted to make "object construction" special were on to something, but then completely missed the point.
"Often our programming languages give us no help, or even thwart us, in using even the familiar and low level paradigms."
Hear, hear.
Anyway, my take on bringing in language support for appropriate paradigms: set "paradigm = architectural style", notice that "architectural style" ~= "language metasystem" (start with the MOP in OO and generalize), then make all these bits flexible and provide enough syntactic room to make the resulting adaptations as trivially usable as possible.
Yep, it's hard and it's a lot, not least because of this:
"To persuade me of the merit of your language, you must show me how to construct programs in it. I don’t want to discourage the design of new languages; I want to encourage the language designer to become a serious student of the details of the design process."
So there are three levels:
1. Write programs with certain paradigms
2. Create language support for those paradigms
3. Create meta-language support for writing language support for those paradigms.
Semi-related: a talk by Scott Meyers on designing good interfaces [1]. This applies at the programming/implementation level as well: "Make interfaces easy to use correctly and hard to use incorrectly."
What a horrible post. If the author loves Floyd so much, then why ruin his quotes with empty words that just distract and annoy the reader?
I opened the post expecting to learn about paradigms of programming; all I got was a very annoying collection of quotes that were hard to read because of the way they were presented.