The future is fewer people writing code? (techcrunch.com)
126 points by pratap103 on July 22, 2016 | 247 comments



Non-programmers make this mistake all the time: thinking that the syntax is the hard part of programming.

No, the hard part of programming is understanding in very specific and rigid details how to accomplish a task. What the author doesn't realize is the enormous amount of processing power, shared culture and empathy that goes into human interaction.

Even mighty Google doesn't have the compute power or the architecture to replicate this. Until computers can understand humans at a human level, this will not be possible. Several breakthroughs will be necessary, and even then I expect decades or centuries before it happens, if it happens at all. Essentially we're talking about the Singularity.


> Non-programmers make this mistake all the time: thinking that the syntax is the hard part of programming.

Programmers make it all the time too, or at least something very similar. See all those people who are on the endless quest for the "perfect" language, which usually means one which allows for writing the shortest code, sometimes at a very high cognitive cost for very little payoff. No, I really don't want to do a massive amount of mental backflips just to save a few characters, thanks.


Many people looking for the "perfect language" want one that minimizes cognitive costs. They accept an initial increase in cognitive cost (comprehensive type systems like those in the MLs; Rust's notion of ownership and lifetimes; SPARK Ada's commingling of specification, code, and types) which, once internalized, results in reduced cognitive cost for certain tasks. Less time spent developing tests. Less time debugging. Less time spent in maintenance (not because it's not needed, but because it goes faster when it's done).


> See all those people who are on the endless quest for the "perfect" language

To be fair, many programming language enthusiasts understand this and seek languages that properly frame the problems that need to be solved.

Sometimes languages seem to have a high cognitive cost because they expose that the underlying problem has a high cognitive cost. Some things that seem really simple are actually rather complex, like string manipulation and data sharing.


There isn't anything wrong with this kind of obsession tbh. We do need SOMEONE inventing, experimenting with new languages and if someone else is excited about doing so I'm quite ok with that :)


Quite true.

Apparently many programmers keep forgetting that the syntax is just a small part of:

- language itself

- toolchains (compilers, interpreters, AOT, JIT, hybrid)

- differences between implementations and specific behaviour

- libraries, both standard and the most well known third party ones

- IDEs

- build systems and deployment options

- extra tooling for correctness like static analysers

- culture of the community that has grown up around the language

Hence it is easy to dabble in and grasp concepts from multiple languages, but very hard to become truly proficient in even a few of them.


Do you write in assembly?

If not, you clearly see some gain in more terse syntax.

Seems like a good thing to explore, although perhaps it should not be one's sole focus in life.


The difference comes down to syntax vs. semantics. The problem with assembly isn't its syntax (mov eax, ebx is fairly readable), it's that the underlying semantics are too low level. Exploring new abstractions/semantics to use is very useful (local variables, first-class functions, algebraic data types, etc.), but optimizing solely for source program size leads to a language that's great for code golf and not much else.
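
To make the point concrete (a toy sketch of my own, in Python, not anything from the article): the same task written at two levels of semantics. The syntax of both versions is perfectly readable; what differs is the level of abstraction the language lets you work at.

  # Same task, same language syntax, different level of semantics.
  nums = [3, 1, 4, 1, 5, 9]

  # Mirroring machine-level semantics: one cell at a time, explicit
  # index, explicit accumulator -- readable, but low level.
  total = 0
  i = 0
  while i < len(nums):
      total = total + nums[i] * nums[i]
      i = i + 1

  # Higher-level semantics: "sum of the squares" as a single expression.
  total2 = sum(n * n for n in nums)

  assert total == total2  # both equal 133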


Not sure. I mean yes, you can abuse some languages and fit an entire program on one line (see any golf project). But I can generally grok a 100-line Scala program in the same time as a 100-line Java program, and the Scala program is usually about twice as dense. (So I am grokking Scala twice as fast.)


I generally find this to be true for myself, too. Although I would add the qualifier that I find Java to be an exceptionally wordy language, for lack of a better term. It seems to take a lot of talk to do a little in Java even compared to other object oriented languages.


Outside of J/K/APL, few languages truly attempt to optimize for source size, and few people deliberately optimize around source size outside those languages' programmers, demoscene types, and mathematicians/engineers-turned-programmers.


Add Ruby to the list. Many gems (and frameworks) advertise with how much you can pack into a single line of code, at the expense of seemingly impenetrable magic happening behind the scenes.


I am obsessed with programming paradigms, not languages.

Best way to learn a new programming paradigm is by learning a new programming language conducive towards a certain manner of problem-solving.

So yes, I am obsessed with programming languages. But not for the reasons you listed.


The language is the medium. I like writing code that I can look at and be happy, for no real reason. For traditional art, some prefer oils some prefer water-colors. It's the same for programmers. http://programmers.stackexchange.com/a/16164


Or learning one JavaScript framework after another, where you do the same thing with different syntax.


You agree with the author.

>> No, the hard part of programming is understanding in very specific and rigid details how to accomplish a task.

  The real benefit of something like Project Bloks is that it actually
  removes the code; it allows children to begin thinking
  programmatically, without the obstacle of syntax. And this is a
  tough distinction to make, because people often use “programming”
  and “coding” synonymously. But the fact of the matter is thinking
  programmatically needs to be divorced from writing code: the former
  offers large educational value to a broad range of students, while
  the latter offers very little.


Sure, we agree the code is not the important thing, but that's a very oblique point.

Where I disagree is that "removing the code" will help someone gain better understanding. It might help them dip their toes in and get at the core of what's important (hence MIT using Scheme for its intro CS class for 30 years). However, as soon as someone passes the beginner phase, the difficult thing is expressing the ideas succinctly and unambiguously. That's exactly what programming languages do.

A good analogy might be mathematics notation. You can explain basic math without any notation, but as soon as you get to more complex stuff you need the notation to help you reason about it and encapsulate the ideas succinctly. Similarly, graphical programming environments can do a ton of useful stuff (HyperCard is what got me into programming at the age of 10), but they will always hit a wall (until true human-level AI anyway). The author definitely doesn't get that part.


I agree with you. Programming languages are dramatically simpler than human languages. Anyone can learn the syntax and grammar for a programming language in a few months (at the very worst). Not only that, but they reduce the amount of expression possible so that you have to be very precise in your descriptions. While beginners find this difficult to deal with, it's actually a massive help because there are only a few ways to express something and the compiler/interpreter often tells you when you get it wrong. Again, compare that to a human language where you have a huge amount of expression available to you and it is often nearly impossible to realise if you are being precise enough to describe the problem/solution well. A good way to demonstrate this is simply to try to write a computer program in your native human language, without restricting yourself to pseudo code. It's pretty easy to leave out important details and to end up with a huge mess of incomprehensible garbage.
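
A tiny example of what I mean (my own sketch, in Python; the list is made up): the English instruction "remove the duplicates from the list" sounds complete, but turning it into code forces you to answer questions the English version silently skips.

  # "Remove the duplicates from the list" -- the code has to decide:
  # does the first occurrence win? is the original order preserved?
  items = ["b", "a", "b", "c", "a"]

  # One precise reading: keep the first occurrence, preserve order.
  seen = set()
  deduped = []
  for x in items:
      if x not in seen:
          seen.add(x)
          deduped.append(x)

  print(deduped)  # ['b', 'a', 'c']

  # Another precise reading: order doesn't matter.
  print(sorted(set(items)))  # ['a', 'b', 'c']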

There are many people who think that a visual language or some other abstraction will simplify the problem of coding. I really don't think so. While things like diagrams are great for helping people comprehend something, they are absolutely terrible for expressing something. Just look at how difficult it is to draw really good diagrams. Consider the fact that we have had cook books for hundreds of years, but almost none of them are described solely with diagrams. Words are how humans prefer to communicate.

I think we often fall into the same trap with programming languages or frameworks. We say, "Look how easy it is to do this trivial thing. It's 100 times easier to do trivial things with this framework than without it". We are completely blind to the fact that the framework makes it 100 times harder to do ordinary things and 1000 times harder to do complex things.


I think the author was saying something different, with which I agree.

There's a POV from which code is the details needed to get something done, and the less, the better. In my life, the less visible the code task has become, the more power I've gained. A couple of examples:

When word processors stopped needing to be told stuff like /b at the beginning and end of words I wanted boldfaced, it got better. I found the whole WYSIWYG thing enormously helpful. The portion of time I devoted to typography was released for improving content.

LabView made it possible for me to assemble data acquisition and control systems with drag/drop icons and drawing connections instead of coding a PID module and then a gain module and then a Kalman filter function and ... you get the idea. Less code to write, more stuff done.

By analogy: Today, I turn a dial to make a fire to cook my dinner. I turn a valve to get water. I drive a car to go places. Yes, I've rubbed sticks together, I've winched a bucket up from a well, and I maintained a horse for mobility when I was a kid. I'm pleased and empowered by no longer needing those skills.

In the same way, I look forward to the disappearance of coding as a prerequisite to creation. I'm immensely grateful to the people whose coding is making that happen.


This is a strawman. Coding has never been a prerequisite to creation. People were creative forever, and then computers came along and created new avenues for creativity. The vast majority of which is done with applications, and applications have improved by leaps and bounds every decade.

Only a very tiny portion of creative work requires actual programming, like demo scene programmers. Even video games and Pixar movies have far more artists than programmers. It's true that more and more programmers are required to extend and maintain the software, but the growth of the end-user base has far outpaced the programmers. The majority of programmers are implementing business logic over which they have minimal high-level creative input.

The things we can do with computers without code have only increased. Pontificating on how it would be better if we didn't have to write code is like someone who once used a circular saw pondering why artisanal wood carving can't also be done with simple, straightforward, easy-to-use tools instead of difficult-to-use hand tools. It's crazy how often we hear this refrain about coding as opposed to other professions. No one asks why we can't have a gadget to perform an appendectomy at home instead of paying a surgeon thousands of dollars, but for some reason people think that coding is somehow unnecessary magic which could be done away with by a bit of clever rethinking.

But code is not some Rube Goldberg device designed to obfuscate and impress; code is a medium, like a blank canvas or a sheet of typing paper, the only difference being that it can control physical things. It is not one concrete thing with a specific purpose which can be optimized like a word processor, or a stove, or a faucet, or a car. Less code doesn't mean a simpler world, it just means less of what code can do.


I appreciate the thoughtful reply, thank you. But it's not a strawman. That's when you advance an argument that wasn't at issue.

I supported the article's thesis, that fewer people will be coding in the future, and provided a couple of anecdata points from my experience. I don't think code is unnecessary magic to be done away with. It's essential magic that becomes invisible when done well, to everyone's benefit.

And to clarify, creative work isn't limited to the arts. The creative impulse is common and has multiple outlets. Engineering is every bit as creative as portraiture or sculpture or crafting LOTR on a ginormous graphics processor farm.

My observation is that as the need for coding has decreased due in large part to increased computational sophistication, it's gotten easier for a wider range of people to do contributive work.

Suppose you have something to say about climate change. If you need to model it, you have several global models you can run, without having to code FEA and planetary atmospheric physics from scratch.

Or pick an artistic field - I know a hand weaver who uses commercial software to visualize her designs before she heads off to her hand loom. She'll never code a simple if/then/else statement, but the (invisible to her) code in the app lets her do things she'd never have time or resources to try otherwise.

Even in your medical example - I'll guess that fewer than 2% of the surgeons who use a Da Vinci medical robot could do anything useful with the code that runs that system. And if learning to code the robot were a prerequisite to using it, it would be a failed product.

I like your thought of a device for home appendectomy. Why not, really? Ultrasound imaging, vital signs sensing, actuators for retractors, scalpels...and lots of invisible code to make it work well. Just the thing for clinics in parts of the world where there is appendicitis but no doctors.

Code may not be designed to obfuscate, but it does that to many who would otherwise be able to use computation to do something new and useful. I think the more sophisticated code becomes, the less it will be a critical skill for most people. Which is good. We'll be free to figure out the next important thing and work on making it trivial.


Once I figure out the difficult specific and rigid details, I just want the most productive way to communicate these details to the computer. Writing code is the most productive, because it is more exact and I am faster typing than drawing/dragging/clicking. For example, with Vim, I can do a quick "Ack def function_name" to find the definition of a function.

For learning purposes, Project Bloks looks great. But when it comes to real work, I'd happily learn the syntax, as it lets me be more productive.


I agree and I disagree at the same time.

For professional programmers, many people on this forum, the format that we use (text) is almost certainly the most efficient (yet conceived). It's made better with better tooling, of course, like IDEs that help us refactor, show errors in code as we edit, etc.

The author's point is that many more people will be programming in the future (hopefully) than are today. But not as professional programmers. For them, tools like (but not, of course, the same as) Project Bloks will be better.

Hell, we already do this today for ourselves. How many people get into the code-behind on GUIs regularly? Do you detail in code "button x will be placed to the left of button y, the center point between them will be ...". No, we often use some combination of markup language (XML-based like XAML, or something else) or visual designer (that may be generating a markup language version behind the scenes).

Then we connect the dots, the various objects to various actions or data sources, and off we go.

Just as 80% of my (early career) programming was really just gluing together a bunch of data sources for generating reports, much of what businesses need is in the same vein: relatively trivial applications (compared to the scale we professional programmers like to imagine our own work operates at) that exist at a relatively high level (they're not writing a new DB server), connecting pieces together based on logical rules.


From a first principles perspective, it's me and the computer, and I want the computer to do something. My laptop has the following input devices: keyboard, mouse, webcam, and microphone.

I mostly use the keyboard and mouse. There are some places where the mouse is better than the keyboard. I was playing online chess today, and I like to drag and drop the pieces. It feels more natural and not that much slower than inputting the coordinates.

But when I was coding today, I needed to navigate through my code quickly. The mouse would have been much slower than vim's CTRL-D, CTRL-U, and /search_keyword, so I used the keyboard.

For the non-professional programmers, Project Bloks might be more like chess's drag and drop. It's more natural and not that much slower for simple tasks. I can see why non-professional programmers may like it.


How is Project Bloks different from the multiple rather unsuccessful attempts at graphical (even drag-and-drop) programming languages of the past?


It's much faster to put together a functioning system than it was 10 years ago. For example, in RoR, install Devise and you already have a login system. I imagine things will keep getting incrementally easier/faster until one day we won't need a 'programmer' to do what we want.

Google isn't unaware. I'll try not to put words in his mouth, but a Google exec said something along the lines of: we aren't very far from needing half of the programmers/IT people we have today.


> Google isn't unaware. I'll try not to put words in his mouth, but a Google exec said something along the lines of: we aren't very far from needing half of the programmers/IT people we have today.

In one way, that's a scary idea (a lot of people will end up looking for work elsewhere). On the other, what programmer wants to do the stuff that can be automated away or done by a non-programmer? Or wants to reinvent the wheel because of NIH syndrome?


> what programmer wants to [...] reinvent the wheel because of NIH syndrome?

Almost all of them?


Hard to imagine, when I see all the stories about tiny modules in Node.js...but easy to imagine when I see how many JS frameworks are out there.

Still, so much of what we do these days is just tying together other people's libraries, connecting them together and maybe doing a bit of data conversion. My first step when I need to do something is to find out if someone else already wrote the software. Then I can just install+configure+move on to more interesting things.


Disagree that conclusion follows from quote. "Blocks" doesn't teach sufficiently deep thinking, and can only really solve problems within the domain of (foreseen by) the system itself.


Seems like a gross overestimation of the time needed for said advances and of the complexity of such systems. There is already a research-grade approach to learning to generate short programming-challenge-like programs from natural language descriptions: http://arxiv.org/abs/1510.07211

>This paper envisions an end-to-end program generation scenario using recurrent neural networks (RNNs): Users can express their intention in natural language; an RNN then automatically generates corresponding code in a character-by-character fashion. We demonstrate its feasibility through a case study and empirical analysis. To fully make such technique useful in practice, we also point out several cross-disciplinary challenges, including modeling user intention, providing datasets, improving model architectures, etc. Although much long-term research shall be addressed in this new field, we believe end-to-end program generation would become a reality in future decades, and we are looking forward to its practice.
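
To make the "character-by-character" idea concrete (with the caveat that this is a toy stand-in of my own in Python, not the paper's RNN): a character bigram model trained on a made-up snippet, sampling one character at a time.

  import random
  from collections import Counter, defaultdict

  # Toy stand-in for the paper's RNN: a character bigram model.
  corpus = "def add(a, b):\n    return a + b\n" * 20

  counts = defaultdict(Counter)
  for prev, nxt in zip(corpus, corpus[1:]):
      counts[prev][nxt] += 1

  def generate(seed="d", length=40):
      out = [seed]
      for _ in range(length):
          options = counts[out[-1]]
          if not options:
              break
          chars, weights = zip(*options.items())
          out.append(random.choices(chars, weights=weights)[0])
      return "".join(out)

  print(generate())  # emits vaguely code-shaped text, one character at a time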


Yeah, well, I could be wrong; I'm not up to speed on the research. However, the claims of general-purpose AI have been "just around the corner" for half a century now, so I'll take that bet.


Came here to say this. And text is still by far the most flexible modality, not physical blocks. Show me a templating or metaprogramming engine written in LEGO blocks and I'll show you a Grey Goo scenario.


What the author doesn't realize is the enormous amount of processing power, shared culture and empathy that goes into human interaction.

Much of this isn't even useful for getting a task done, but mainly serves to keep the human in the loop as to how things can proceed… wasteful. I personally can't wait for prevalent brain-computer interfaces. I don't think there is a need for computers to "understand" humans, as long as they can observe the things we are optimizing for and do it better by whatever means…

For me, if I could choose between typing into a keyboard, with latency on the seconds-to-hours timescale, vs. µs timescales via other transmission channels, I'd pick the latter every time, and twice on Sundays (which I guess is no surprise, since I actually worked on such systems, lol).


“50 years from now, I can’t imagine people programming as we do today. It just can’t be.”

Dear writer, let me introduce you to FORTRAN, COBOL, LISP, and BASIC. These languages are all still alive, and all 50+ years old.

Coding hasn't changed much. The languages, the methodologies, the ideas change, but the approach is the same, and whoever thinks this will change soon (50 years is not _that_ far off) has never had to debug something nasty. Doing that with voice commands is, in my opinion, significantly harder than with what we have now.

We will have tools, accessible, easy tools; Arduinos and Pis of the future; sure. But it will not replace, nor eliminate or reduce the amount of code written.


There's a serious gap in the writer's mind about computation and programming. It's like the author is suggesting that "eventually we won't need writing: it will be replaced by writing-thinking or picture-writing". It's completely absurd. Specific, complex ideas can only be described and communicated in text, not pictures. Blueprints, for example, have a pictorial element to them, but their fundamental value is our ability to use the formal language to analyze what's on the plan and whether it is correct or not. To the degree that a picture or a motion graphic can formally accomplish this is the degree to which it is supported by a specific language under the covers. Not the other way around.


Blueprints/schematics are far, far superior at conveying the information they do compared to a written narrative. Given the ease of preparing written text compared to drawing schematics nobody would go to the trouble of doing so if that weren't the case.


Of course, all of the pieces of information that blueprints and schematics are conveying are 2D layouts. Once you're out of the realm of things whose forms can be reproduced at reduced scale, or simplified to functional equivalents that lie on a plane, the types of useful visual representations of things are sharply reduced; they become extremely stylized, and symbolic. At that point, you've basically arrived at language again.


An interesting angle on the topic comes from my father who at one time was a project manager and designer in the construction industry. In the days before computers he would painstakingly hand-draw the design that was reproduced as blueprints, that was the role of the "draftsman".

But the drawing wasn't the source of stress, rather it was the project "specification" that he sweated. The issue was the spec was a legal, text-format document detailing the size of beams, type of wire, plumbing, fixtures, etc. He had to assure that beams were sufficient to support structure, electrical wiring was safe and up to code, etc. A mistake could expose the contractor and himself to legal liability if a component failed, so an accurate spec was a task he took seriously.

Of course the subject of program specifications is commonly discussed, though often doesn't have the same significance that my father experienced. I guess in most cases program crashes don't have the same impact that a roof caving in would entail. In situations where crashing can't be tolerated, the spec will mean a whole lot more.


I work in the same construction design industry. The drawings themselves are also contractually binding. Many smaller jobs forgo the written specifications altogether.


My father had mostly worked on larger projects, like tract houses and the like. Of course, times change, my recollection was of how things were a long time ago. My comment was just illustrating an instance where relying on a text description was still important even though there was a graphic format as well.

Your info was relevant to the idea that at some level of complexity it becomes necessary to use text vs. only graphic presentation. Maybe in construction that occurs when there are more than a few elevations to juggle, but you probably know much more about it than me.


If you had a blueprint of the whole of New York City, you surely would need some tool to abstract away the maze of individual lines and be able to refer to and work with concepts like "Central Park", "Harlem", or "the Brooklyn Bridge".

It is not about how much more information we can convey, but how much less data must be expended to present a tractable model of reality to the human operator. Conveying more details is worse than useless; it results in information overload and cognitive stagnation.

Historically, the way it happened in computer programming is that those tools are text based. This has as much to do with the early use of computers as clerical aids for processing business data as with the early synergies between computation and linguistics. Maybe it can be done another way, but it would require millions of man-hours to accomplish. And almost nobody wants to invest in doing so because of the opportunity cost.


Of course, there's the ability to zoom and pan to get the appropriate level of detail. There's a reason Google Maps isn't a text adventure.


In Google Maps, the ability to zoom relies heavily on an (unacknowledged) property of the problem domain: planar geometry. If every relevant detail is nicely clustered together and, more importantly, every irrelevant detail is nicely clustered far away from wherever you are zooming in, then sure!

If, on the other hand, you cannot ever be 100% sure that fixing one stop light in Brooklyn will cause a bunch of sewage lines to flush out onto the street in Long Island, then zooming does more harm than good. At the end of the day, you need the map to conform to the realities of the territory. If that gets in the way of that pretty abstraction of yours, then the abstraction - not reality - is wrong. And when that is the case, you need to start over and make a better map.

Text-based toolchains are, for all their limitations, a (sufficiently) reality-conformant map. That does not mean there cannot be others, but as of today I do not know of any suitable candidate.


When I write "2.5 mm" this is not narrative. If you want to explain "2.5 mm" without using text, how would you do that? The only way to do it is to use something literal from the real world. That's what we're talking about when we're comparing blueprints to programming. I think the word is literal. Can't avoid the need for text when it's precision we're after.


> Blueprints/schematics are far, far superior at conveying the information they do compared to a written narrative.

Blueprints don't change as much as software does. It's not generally interesting to diff, fork, reformat, or patch a blueprint.


Hmm, I don't think you've ever worked on designing a building. Being able to diff two sets of plans would be hugely beneficial.


Graphics can be useful in some domains but nothing beats text in the general case.


Just imagine a compiler that scans your diagram written by hand on a piece of paper, translates it into an AST, then interprets it or even produces an executable.


And then realize that your diagram was misinterpreted and you have a big bug in said executable.


It used to be that CPUs were designed with schematics (drawings). Today, they seem to be designed with text (VHDL or Verilog). I wonder why?


Basically all other electronics is developed with schematics though.


Why do you think textbooks (and ancient works) are written in text, not comics?


Why do you think Euclid drew diagrams and didn't write everything out in text?


I don't think we can distinguish text and pictures so easily. Look at Chinese, look at Egyptian hieroglyphs. Even when "hieroglyphics" is used as a term of abuse for programming language syntax -- it ends up pretty popular.

I thoroughly hated LabView when I had to program in it, but it did convince me that a graphical programming language could work -- if only it refrained from doing the f*cking stupid things that LabView did (such as the strongly typed editor that would automatically propagate any type error it found, but not your fixes).

In my current C++ work, I would dearly love a graphical tool that showed me where any given value came from, much like LabView does by its very nature.


"I don't think we can distinguish text and pictures so easily. Look at Chinese, look at Egytian hieroglyphs."

My understanding is linguistics research has pretty thoroughly debunked this idea.

Don't remember the experimental design (was a long time ago, sorry), but I believe a study showed Chinese readers basically translate the characters back into the sounds of spoken language in their heads, before any processing of meaning takes place. In other words, pictographic mnemonics may be helpful when first learning the characters, but play no role for a fluent reader.

I suspect a similar thing will be true with programming for a long time to come. Even if you try to replace keyboard characters with other icons, it will be just substituting one arbitrary association between symbols and meaning with another. (Which is basically what language boils down to, anyway.)


> I thoroughly hated LabView when I had to program in it, but it did convince me that a graphical programming language could work

That's funny. I came away with the opposite opinion. Text is much better at describing details and it's much easier to be consumed by various things: people, editors, analysis tools, web apps, test engines, code generators, code transformation tools, ... I could go on.

Languages like LabView never have a complete toolchain (Prove me wrong by posting a small piece of editable LabView in a reply to this HN comment). They work well as domain specific languages, but that's about it.


> I don't think we can distinguish text and pictures so easily. Look at Chinese, look at Egyptian hieroglyphs.

Based on these two sentences, I'm confident that you don't know the first thing about Chinese characters or Egyptian hieroglyphics.


I think we can distinguish. The ideograms and hieroglyphs have very, very specific rules about how they can recombine, and that has nothing to do with their pictorial aspects. It has to do with semantic/grammatical aspects.


As someone who is awful at Pictionary, I hope so as well. Just today, I defined a class with 4 functions. I had another function that created an instance of the class and called one of the functions. It changed a variable that would show up in the web browser formatted by CSS. And I can't even draw a dog in Pictionary...


Emojis = "picture-writing"


And with emojis you can describe how to build a bridge precisely?



You can't necessarily judge the future of a technology by its past. Consider transportation. Imagine it's 1936: automobiles have been around for 50 years, but there are still plenty of people getting around by horse. Some people are claiming that in another 50 years, by 1986, horses will be hardly used for transportation compared to cars; other people say that horses have been used for thousands of years and there's no way they'll ever go out of style.

Programming languages exist today because computers can't handle ambiguity and don't understand software design. In another 50 years, machines will be a lot smarter, more able to handle ambiguity, and better than people at designing workflows and catching potential errors. Like the horse, no doubt some people will still prefer to do things the old way, but there's a good chance this will be limited mostly to academic exercises.

All they're saying here is that the tools we have will progress a lot in the next 50 years. There are some obvious problems with the way we design software right now which are due to human limitations. The only way to fix those is to remove a lot of direct control from humans and give it to AI programmers. Manually writing JavaScript in 2066 will be like manually carving arrowheads today: still effective but not something you would do for a serious purpose.


Your example actually cuts the other way. Imagine it's 1966 and someone tells you that the cars, trains, and planes will "have to be dramatically different in 50 years," yet lo and behold a trip from NYC to LA takes about the same amount of time now as it did back then and the Northeast Regional is a hair slower than the Metroliner used to be.


I was focusing on 50 years into the development of the technology as a rough analogy. By 1966 it was much more mature, but look at how much things have changed. A mechanic from 1966 would find today's cars completely unrecognizable. They might appear somewhat similar from the outside, but on the inside they're basically just giant computers. We now have cars with varying levels of self-driving capabilities, drones replacing pilots, traditional pilots being essentially babysitters for autopilot systems, hyperloop designs. I'd say those are much bigger changes than 1936-1986.


Cars are not "basically just giant computers" on the inside. Computers are used to control various engine parameters, and aspects of the transmission and suspension, but all the parts that make the car go are just refined versions of what existed in the 1960s. Okay, so now we use computers to control valve timing instead of mechanical means. But the principles of what valves are and how they make the engine work are very similar to 1966.

And that computing horsepower mostly goes towards fuel efficiency and safety. Which is nice, but almost certainly not the kind of progress people in the 1960's thought we'd make in automobiles over 50+ years.

> traditional pilots being essentially babysitters for autopilot systems

The first autopilot takeoff/cruise/landing happened in 1947.

> hyperloop designs

But we don't have hyperloops.

> I'd say those are much bigger changes than 1936-1986.

By 1986 we had fully-digital fly-by-wire aircraft. Our big achievement since then has been about a 20% improvement in fuel efficiency.


The concept of which change is more or less significant can be pretty subjective. I'm not talking about things like fuel efficiency, although those are some really interesting facts. Autopilot in 1947! I didn't know that one. Yes, cars and jets still use the same basic architecture for what makes them move, but the control mechanisms for that architecture have completely changed.

To bring your comparison closer to the subject at hand, this article has nothing to do with the design of computers themselves. We could use the same basic Von Neumann architecture in 50 years and still get rid of traditional programming languages as a primary method of designing software, just like we use the same basic engine designs from 50 years ago but use entirely different methods of designing and controlling them now.

Take an engineer designing a jet in 1966 and put them with a 2016 team. They will have to learn an entirely different workflow. Now computers are heavily involved in the design process and most of what was done manually by engineers is now written into software. The same situation will happen 50 years from now for people who design software.

Take an extreme example like game creation. In 1966, you could make a computer game, but you were doing manual calculations and using punch cards. Now you download Unity and almost everything but the art design and game logic is done for you. Game design moved quickly toward these kinds of automated systems because they tend to have highly reusable parts and rely mostly on art and story for what separates them from the competition. But there's no reason why this same concept wouldn't apply to tools used for any kind of program.

The horse to car comparison was only meant to show that the development of a technology in the first 50 years (or any arbitrary number) will not necessarily look like the next 50 years. Well-established tools quickly fall out of use when a disruptive technology has reached maturity, even if that tool has been used for thousands of years. Right now, software design is difficult, buggy, and causes constant failures and frustrations. Once we have established and recorded best practices that can be automated instead of manually remaking them every time, there will be no need for manual coding in traditional programming languages. Machines are getting much better at understanding intent, and this will be built into all software design.


"Take an engineer designing a jet in 1966 and put them with a 2016 team. They will have to learn an entirely different workflow."

Send them to the "PCs for seniors" course at the local library to learn the basics of clicking around on a computer. Then a one or two week training course on whatever software is used to design planes these days.

Getting up to date on modern "workflow" is not going to be a major hurdle for someone smart enough to design a jet. Heck, it's very likely there could be someone who started designing jets in 1966 and still designs them today. (Post-retirement consultancy.)


My point was not that they wouldn't be able to learn it, only that the tools and methods of design have changed and become much more automated. That process has not stopped, only accelerated. The people in this article are saying that the process of making software in 50 years will be very different from the modern method. It will rely heavily on automation, and what was done manually by writing in programming languages will be integrated into systems in which the intent of the designer is interpreted by a machine.

You can see it in IDEs today. They already analyze and interpret code. This is extremely primitive compared to what we will have in 50 years. The progress of machine intelligence is clear and doesn't require any major breakthroughs to continue for the foreseeable future. It will be as irresponsible for most people to write everything manually in 50 years as it is not to use a debugger today. No doubt there will be people doing things the same way, just like we have traditional blacksmiths today, but we will not have billions of people typing into terminals in 50 years.

The criticism is against the idea that in the future, everyone will need to learn how to code in the same way as everyone needs basic arithmetic. That is not a plausible version of the future. It's trending the other way: more automation, more code reuse, less manual entry.


"Now you download Unity and almost everything but the art design and game logic is done for you."

Yes, Unity helps to visually organize your game's data, and there are built-in and downloadable components (which are all created by coders) that can be used to plug into your game, but it's just another set of abstractions. Most of the time you will be writing your own components in a traditional coding language or delving into others' component code to adapt it to actually make your game function. There ARE game creation systems intended for no coding required, but they come with the expected limitations of visual coding that people are bringing up in this thread. No, Unity doesn't really fall into this category, barring a few limited game domains.

Perhaps in 50 years every domain will be "mapped" in this way, with predefined components that work with each other and can be tweaked as needed, but I don't see how that could eliminate coding, or even displace it that much. Two reasons I think coding is here to stay:

1) Any sufficiently complex system needs its organization to be managed. At a certain complexity, whatever system is replacing coding will become something that looks a lot like coding. At that level of complexity, text is easier to manage than a visual metaphor.

2) Most pieces of software need custom components, even if only to stand out. Those game creation systems with no coding? No one is impressed by the games that are created in those systems. Not because the system cannot produce something worthwhile, but because with everything looking the same, the value of that output drops substantially.

I think coding will only go away when programming does. When the computer is as intelligent and creative as we are. And that's a point which I do not want to think about too much.


I think we'll reach that point in 50 years because we already have computers with certain types of intelligence that exceed ours. Translating a human intent into machine language does work with coding, but we have to admit that it's not ideal. There are too many mistakes and vulnerabilities. Even the smartest people create bugs.

This is like the shift in transportation. A lot of people love driving and mistrust autonomous vehicles. But the tech is almost to the point where it's safer than human drivers. In most situations, it already is.

Another comparison would be SaaS. For a lot of companies, it's about risk mitigation. Moving responsibilities away from internal staff makes business sense in many cases.

This is a criticism of the idea that we need to make coding a basic life skill that everyone should focus on. It looks a lot like denial to some people.

Let's go back to transportation. Imagine if people were pushing the idea that commercial driving needs to be in every high school because driving was such a big employment area. Some people might say that the autonomous vehicles look like a big threat to job prospects, so maybe it's not such a good idea to focus on those particular skills.

Coding is great, provides a lot of opportunities to the people that it attracts, but it's a pretty specialized skill that's going to be increasingly displaced by more natural and automatic interfaces this century in all likelihood.


Well, it's dramatic in the little things, but not so much in the big things.

Cars now go 100,000 miles between tune-ups. They used to go, what? 10,000 miles?

Cars are much safer in collisions than they used to be.

Most cars now have air conditioners. I've driven in a car without AC in Arizona in July; believe me, AC can be a really big deal.

Most cars now have automatic transmissions, power steering, and power brakes.

And cars get much better fuel economy.

Driving from NYC to LA takes less time due to interstates and higher road speeds (and cars that can comfortably handle those speeds). Not half the time, but still a significant improvement.

And yet, most cars are not dramatically different as far as the experience of driving them is concerned. Nothing in the last 50 years looks revolutionary. It's been an accumulation of improvements, but there has been no game changer.

I suspect that the next 50 years in computing will be similar.


>Cars now go 100,000 miles between tune-ups. They used to go, what? 10,000 miles?

I'm curious what your definition of tune-up is, because I don't believe there exists a car that can go that far unmaintained without doing lasting damage to various systems.

After a quick Google, my impression is that most 2016 cars have a first maintenance schedule around 5k-6k miles. Some as low as 3,750.


I don't think an oil change is a tune-up. Maybe it is. My Honda has 80k miles on it, and has had oil changes + tires replaced. That is it. Compare to a 1970s car and what it would need in the first 80k miles.

For even lower maintenance look at electric cars. I think Tesla has very very low maintenance requirements for the first years.


> I don't think an oil change is a tune up. Maybe it is.

It's not.

I have a couple of 60s Mustangs and several newer cars. My original '65 needs ignition service (what most people call a "tune up") every couple of years (of very modest usage). My '66, converted to electronic ignition, gets about twice as long (and 10x as many miles) before needing ignition service. They both end up fouling plugs because of the terrible mixture control and distribution inherent in their carbureted designs.

My wife's 2005 Honda CR-V gets about 100K to a set of plugs. (Fuel injection, closed loop mixture control, and electronic ignition are the key enhancements that enable this long a time between tune-ups.)

My diesel Mercedes and Nissan LEAF obviously never get tune ups.


> My diesel Mercedes and Nissan LEAF obviously never get tune ups.

You don't do valve adjustments on the Mercedes?


No. I have the OM606 engine. Hydraulic lifters eliminate the need for mechanical valve adjustments as on the older diesels.

About the only thing I've done abnormal on the car in 7 years is replace two glow plugs. (And when the second one went, I actually replaced the 5 that hadn't been changed yet, since they are cheap and I didn't want to take the manifold off again to change #3...)


Actually, the Nissan Leaf can, although I'd be really concerned about the brakes at that point.


There are currently no signs that what you think will happen will happen. Soft AI is the only place where anything is moving on that front and the movement is infinitesimally small. Here's an analogy for you: It took more than 1000 years (from Babylon to Archaic Greece) for us to go from writing with only consonants to using vowels for the first time.


Years don't make progress on their own; people working during those years push progress forward. The estimated population of ancient Babylon at its height was 200,000. Let's imagine that 1% of them were working on developing writing for at least 2 hours every week, and that those who came after them were able to maintain that level of work for 1000 years until ancient Greece: over 200 million hours of work. That's less time than the official Gangnam Style video has been watched on YouTube.

In 50 years, 99%+ of all the work ever done by civilization will be done after 2016.


As long as we're criticizing the analogies in the discussion (rather than the actual arguments) I'd say the hours spent do not have a consistent quality vis-a-vis solving hard problems. Because there are more absolute hours available does not mean that there are more hours available for solving hard AI problems. There are very likely less. And there has been virtually NO progress on the hard AI front.


Hard, human-level AI would help this a lot, but it isn't necessary. All that's required for traditional programming to become obsolete is for computers to be much better at understanding ambiguity and have a robust model for the flow of programs. With today's neural networks and technology, I have no doubt it would be possible to design something that would create good code based on all the samples on github. Not easy by any means or someone would have done it, but it doesn't require any breakthroughs of computer science, just lots of data and good design. The tools referenced in the articles are working primitive versions of this.


There's an important distinction though between being able to write a compiling (or even functional) program and being able to write a program that serves a particular purpose.


I'm talking about human-guided programming without using traditional programming language, creating a design document to lay out what it does and how data flows and allowing the computer to sort out the details based on a stored data set.


> I'm talking about human-guided programming without using traditional programming language, creating a design document to lay out what it does and how data flows and allowing the computer to sort out the details based on a stored data set.

Creating clear and accurate design documents is so much harder and more specialized a skill than programming that many places that do programming either avoid it entirely or make a pro-forma gesture (often after-the-fact) in its direction.

(I am only about half-kidding on the reasoning, and not at all about the effect.)


"creating a design document to lay out what it does and how data flows and allowing the computer to sort out the details based on a stored data set."

This is exactly what programmers do today. We just call the "design document" a "program".

Over time, our design documents become higher and higher level, with the programmer having to specify fewer details and leaving more of the work of sorting out the actual details to the computer.


Yes, exactly! That's what the article is claiming.


Why do you assume that this design document would be simpler to create than the traditional computer program? Because otherwise, this is exactly what happens now.


There are some fairly aspirational claims about how it might be different in this paper, which is a great read:

http://shaffner.us/cs/papers/tarpit.pdf

There has already been some significant progress on this front. E.g., SQL and logic programming let you describe what you want to happen, and let the computer figure out some of the details. Any compiler worth using does this, too. Smarter machines and smarter programs will mean smarter programming languages.
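
A small, concrete illustration (my own, using Python's built-in sqlite3; the table and data are made up): the query states what result is wanted, and the engine decides how to get it, index or scan, in whatever order it evaluates things internally.

  import sqlite3

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
  conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                   [(1, "ada", 30.0), (2, "bob", 12.5), (3, "ada", 99.5)])

  # Declarative: we say *what* we want (totals per customer), not how to
  # scan, group, or aggregate; the engine works out those details.
  rows = conn.execute(
      "SELECT customer, SUM(total) FROM orders GROUP BY customer ORDER BY customer"
  ).fetchall()
  print(rows)  # [('ada', 129.5), ('bob', 12.5)]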


Design is always going to be a part of creating something. What this article is arguing is that manual typing of text by humans using traditional programming languages will not be the primary means of implementing those designs in the future. We don't yet know how to make computers into good designers, but we know that we can create tools that translate designs into executable code that can be less error-prone and more reliable than people typing letters into a text editor.


My question is: how does drawing rather than writing simplify anything? I.e., what is the gain from moving from traditional programming to some sort of theoretical picture programming? Is it that you can draw lines between things rather than just assuming that the line from one symbol points to the next symbol on the line? Does that simplify things, or make them more complicated?

> we know that we can create tools that translate designs into executable code that can be less error-prone and more reliable than people typing letters into a text editor.

I disagree. Maybe you know, but I haven't seen any indication of the sort.


Drawing rather than writing is just one method. A lot of it will likely be conversational. I could imagine a designer with an AR overlay speaking to a computer which offers several prototypes based on an expressed intent. The designer chooses one and offers criticism just as a boss would review an alpha version and suggest changes. The machine responds to the suggestion and rewrites the program in a few fractions of a second. The designer continues the conversation, maybe draws out some designs with a pencil, describes a desire, references another program which the machine analyzes for inspiration, and the machine adjusts the code in response. This is just one of many possible examples.

The point is that software design is trending toward more automation. Coding is not a new essential skill that everyone will need in the future. Human-machine interactions are trending toward natural and automated methods, not manual code entry. Most people need to learn to be creative, think critically, and analyze problems, not learn the conventions of programming languages.


Analogies are always a rabbit hole. Haha.


> for us to go from writing with only consonants to using vowels for the first time

Speaking as someone who has studied cuneiform and Akkadian, I would say that this claim isn't true. Here's a vowel that predates the period that you mentioned[0].

[0] https://en.wikipedia.org/wiki/A_(cuneiform)


> Here's an analogy for you: It took more than 1000 years (from Babylon to Archaic Greece) for us to go from writing with only consonants to using vowels for the first time.

Where did you get this idea? Babylonian writing fully indicated the vowels. It always has. You're thinking of Egyptian / Hebrew / Arabic writing.

Even where a semitic language was written in cuneiform, vowels were always indicated, because the cuneiform system didn't offer you the option to leave them out. https://en.wikipedia.org/wiki/Akkadian_language#Vowels

(Old Persian was written in repurposed cuneiform, and therefore could have omitted the vowels, but didn't.)


>It took more than 1000 years (from Babylon to Archaic Greece) for us to go from writing with only consonants to using vowels for the first time

Yeah and it took "us" 60 years from discovering flight to landing a rocket on the moon. Took "us" 60 years from the first computer to globally-live video streaming in your pocket. Time is a pointless metric when it comes to technology. You don't know what someone is cooking up down in some basement somewhere that will be released tomorrow and shatter your concept of reality.


I wonder if 'leisure-person years' is a better metric of progress (where 'leisure' is defined as the number of hours you can spend neither raising/searching for food nor sleeping).

Be really hard to identify, though.



What do you mean by computers handling ambiguity? At the end of the day, for an idea to become crystallized it needs to be free from ambiguity. That is the case even in human interactions. When using ambiguous language, we iterate over ideas together to make sure everybody is on the same page. If by handling ambiguity you mean that computers can go back and forth with us to help us remove ambiguity from our thoughts, then they are basically helping us think, or in some sense doing the programming for us. That is a great future indeed! A future where AIs are actually doing the programming in the long run! But with this line of thought we might as well not teach anything to our kids, because one day computers will do it better. Especially if we already established that they can think better than us :)


Let's teach our kids the higher-level stuff that doesn't ever get old: thinking clearly, engaging in creativity, solving problems, whether through code or whatever means appeals to them. Let's give them options and opportunities, not just mandate memorizing specific facts. Let's teach kids computer science instead of just programming, creative writing instead of just grammar, mathematics instead of just algebra. Let's engage their imagination, not just their instinct to conform to expectations!


The best "programming" curricula aimed at general education teach (elements of both) generalized problem solving and computer science with programming in a particular concrete language or set of languages as a central component and vehicle for that (and often incidentally teach elements of a bunch of other domains through the particular exercises.)

This is particularly true, e.g., of How to Design Programs [0].

[0] http://www.ccs.neu.edu/home/matthias/HtDP2e/


Let's teach them computer science with programming as a fantastic way to concretely demonstrate its abstract ideas. (The same goes for math vs. arithmetic!)


Yes, definitely. Too often the application of the idea is taught without understanding the idea itself. Then we get standardized testing and focus not even on the application but in what ways the application of the idea will be stated on a test. We still need the conceptual framework to learn anything lasting!


I've said this before: the reason code and CLIs and texting and messaging, all these text-based modes of communicating with and controlling computers, are still popular is that they mirror one of the most intuitive and fundamental inventions humans have ever created: language, specifically written language. Even speech doesn't rival the written word in some contexts; for example, laws and organization rules and policies are still written.

You can't beat the written word. Corps are not lining up to rewrite their bylaws as a bunch of connected drag-and-droppable blocks. I really don't think it's just inertia; the preciseness, versatility, ease of examination and editing, and permanence of written language are hard to beat. Same with source code.


I appreciate the context of your argument when you discuss the use of text in by-laws, but it's worth noting that there are lots of examples of by-laws being enforced via non-text mediums:

1) road signs, which are predominantly graphic based

2) public information signs. Eg no smoking. Also usually picture based albeit does often contain text instruction as well

3) beach flags indicating where to swim etc.

All of these are enforcing by-laws, yet none is specifically text-driven. In fact, when conveying simple rules to people, it often makes more sense to explain them in meaningful images, as that enables anyone to understand the message even if they don't understand the written language (e.g. tourists).


Right, and note that in communication referring to written laws, people tend to use visual aids. This is similar to how people try to visualize source code with dependency graphs, inheritance graphs and such. In one case we are using visuals to remind people of the ideas, and in the other we are using them to aid comprehension.

However, the original specification, which in the case of the signs is the law and in the case of software is the code, is text, not visuals. Therein lies the difference. I think visual aids like dependency graphs will help us visualize code and communicate ideas, like the signs you mention, but for the reasons I mentioned previously text will still be the preferred method for specification, or in software engineering, programming. For example, visuals only go so far. The best visual specifications I can think of are blueprints, which I'd argue still require a little reading to understand. But in certain domains, as I said, text is a better medium.


I agree, though I think there are improvements that could make things better even for existing languages. It just doesn't seem to be a main focus of our industry.

I found a lot of ideas in this article to be pretty interesting: http://worrydream.com/#!/LearnableProgramming


>> We will have tools, accessible, easy tools; Arduinos and Pis of the future; sure. But it will not replace, nor eliminate or reduce the amount of code written.

I think something eventually will, though. My reasoning for this conclusion is simply that I don't believe a significantly larger percentage of people will learn to write production software than are able to do so now. At the same time the need for software in every sector continues to grow, leading to some varying levels of scarcity in programmers. That's a massive economic opportunity, and so people will continue to pound at that nut until it cracks.


We will keep making developers more and more productive. And if there aren't enough of us to solve all the problems, well, tough luck: leave them unsolved. Every profession is like that.

But if we create an AI that can understand people well enough to know what they want without clear instructions, then yes, we will have put ourselves out of the job market, together with everybody else.


Not all nuts are crackable. I agree, though, that people will continue to pound. Even if it doesn't crack, we may find a way for many people to get things done without learning to write "production" code.


> Coding didn't change much.

That's not true and even Sussman acknowledges this:

"The fundamental difference is that programming today is all about doing science on the parts you have to work with. That means looking at reams and reams of man pages and determining that POSIX does this thing, but Windows does this other thing, and patching together the disparate parts to make a usable whole.

Beyond that, the world is messier in general. There’s massive amounts of data floating around, and the kinds of problems that we’re trying to solve are much sloppier, and the solutions a lot less discrete than they used to be."


I don't agree with that sentiment.

50 years ago, it might have been conceivable to build an auto-scaling website that does something like Pinterest within a decade; now it can be built in hours.

I'm not just talking about scaffolding and API usage either; so much has changed in coding in the last 15 or so years as well. Think object-oriented programming, interfaces, Git, and other new and useful practices.

The way we store our data is different as well. I believe it was in the 70s that people still needed convincing that storing data in relational databases was a good thing.

Today even that is changing.


The actual practice of writing programs is, a few outliers aside, incredibly different in 2016 as compared to 1966.

(And even just looking at languages, Fortran 2008 is hardly recognizable as compared to FORTRAN IV)


50 years from now, I can't imagine people driving cars as we do today. I know that human-operated cars have been around a long time, but that doesn't mean we can't do better.


drivers:users::mechanics:programmers


And designing cars hasn't become easier, it's become exponentially harder as we demand more from them.


you missed assembler; it's still a thing


I did miss that, true; however, Assembly is heavily architecture-tied. Therefore x86 Assembly significantly differs from, for example, ARM assembly, but nonetheless I should have mentioned it.


It's annoyingly incompatible and RISC is more verbose, but really the concepts are pretty much the same: load something into a register, do some very basic operations on said registers, and save it out. An x86 programmer should be able to pick up other CPUs fairly easily. Although delay slots will probably piss them off every time.


>Doing that with voice commands in my opinion is significantly harder compared to what we have now.

You could have automations around that though. A lot of manual work could be replaced with little AI bots that do the work. And since this work is not really "creative", it could be done through AI.


Aside from specialty industries, the average programmer codes in a very different way than they would have 50 years ago.


Yes and no.

Yes, in that the tools are massively better. So is the hardware that it all runs on.

No, in that you still have to tell the computer precisely and unambiguously exactly what you want it to do, and how, mostly in text. The level of detail required today is somewhat less, due to better tools, but at a high level the work hasn't changed.


That has changed significantly too though. Sorry about this being a long post, but having programmed through most of the last 50 years, I've seen a massive shift in the way people code even from a language perspective:

1) There's a massive reliance on reusable libraries these days. Don't get me wrong, this is a good thing, but it means people spend less time rewriting the "boring" stuff (to use a lazy description) and more time writing their program logic.

2) Most people are coding in languages and/or with language features that are several abstractions higher than they were 50 years ago. Even putting aside web development - which is probably one of the most widely used areas of development these days - modern languages, and even modern standards of old languages, have templates, complex object systems, and all sorts of other advanced features that a compiler needs to convert into a runtime stack. Comparatively very few people write code that directly maps as closely to hardware as they did 50 years ago.

3) And expanding on my former point, a great many languages these days compile to their own runtime environment (as per the de facto standard language compiler): Java, Javascript, Python, Scala, Perl, PHP, Ruby, etc. You just couldn't do that on old hardware.

4) Multi-threaded / concurrency programming is also a big area people write code in that didn't exist 50 years ago. Whether that's writing POSIX threads in C, using runtime concurrency in languages like Go (goroutines) which don't map directly to OS threads, or even clustering across multiple servers using whatever libraries you prefer for distributed processing, none of this was available in the 60s when servers were a monolithic commodity and CPUs were single core. Hence why time sharing on servers was expensive and why many programmers used to write their code out by hand before giving it to operators to punch once computing time was allocated. (A short illustrative sketch follows this list.)
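To make point 4 concrete, here is a minimal, hedged sketch in plain Python (a generic illustration, not the C or Go approaches mentioned above; the work function is made up) of how cheaply modern runtimes hand you concurrency:

    # Run a made-up work function across a thread pool; concurrency in a few lines.
    from concurrent.futures import ThreadPoolExecutor

    def fetch(n):
        # Stand-in for I/O-bound work such as a network call.
        return n * n

    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(fetch, range(10)))

    print(results)

Nothing like this existed in the 60s environment described above.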

So while you're right that we still write statements instructing the computer, that's essentially the minimum you'd expect to do. Even in Star Trek, with its voice-operated computers, the users are commanding the computer with a series of statements. One could argue that is a highly intuitive REPL environment which mostly fits your "you still have to tell the computer precisely and unambiguously exactly what you want it to do..." statement, yet is worlds apart from the way we program today.

Expanding on your above quote, "mostly in text": even that is such a broad generalisation that it overlooks quite a few interesting edge cases that didn't exist 50 years ago:

1) Web development with GUI-based tools. (I know some people will argue that web development isn't "proper" programming, but it is one of the biggest areas in which people write computer code these days, so it can't really be ignored.) There are a lot of GUI tools that write a lot of that code for the developer / designer. Granted, hand-crafted code is almost always better, but the fact remains they still exist.

2) GUI mock-ups with application-orientated IDEs. I'm talking about Visual Basic, QtCreator, Android Studio, etc., where you can mock up the design of the UI in the IDE using drawing tools rather than creating the UI objects manually in code.

3) GUI-based programming languages (e.g. Scratch). Granted, these are usually intended as teaching languages, but they're still an interesting alternative to the "in text" style of programming language. There's also an esoteric language which you program with coloured pixels.

So your generalisation is accurate, but perhaps not fair given the number of exceptions.

Lastly: "The level of detail required today is somewhat less, due to better tools, but at a high level the work hasn't changed.":

The problem with taking things to that high a level is that it then becomes comparable with any instruction-based field. For example, cookbooks have a list of required "includes" at the start of the "program", and then a procedural stack of instructions afterwards. Putting aside joke esoteric languages like "Chef", you wouldn't class cooking instructions as a programming language, yet they precisely fit the high-level description you gave.

I think, as programming is a science, it pays to look at things a little more in depth when comparing how things have changed, rather than saying "to a lay-person the raw text dump of a non-compiled program looks broadly the same as it did 50 years ago". While it's true that things haven't changed significantly from a high-level overview, things have moved on massively in every specific way.

Finally, many will point out that languages like C and even Assembly (if you excuse the looser definition) are still used today, which is true. But equally, punch cards et al. were still in widespread use 50 years ago. So if we're going to compare the most "traditional" edge case of modern development now, then at least compare it to the oldest traditional edge case of development 50 years ago to keep the comparison fair, rather than comparing the newest of the old with the oldest of the new. And once you start comparing ANSI C to punch-inputted machine code, the differences between then and now become even more pronounced :P


It should be noted that that quote is not from the author, but from someone that the author is quoting.


For some perspective... 50 years is just twice the length of time since I've been coding (in some capacity)... and I'm only in my early 30s.


Agree completely!


Dear article writer,

Natural language sucks: it is ambiguous, difficult to manipulate, verbose, and has too many non-functional degrees of freedom. After all, that's why mathematics left natural language and adopted the mathematical syntax we have today.

Diagrams suck: they are ambiguous, difficult to manipulate, verbose, and have too many non-functional degrees of freedom. That's why cookbooks don't use diagrams to describe recipes.

The syntax will never die; it is the only sensible way we have to define programs.


> Diagrams suck

And yet anytime two or more programmers get together to talk about what they are creating, they start drawing diagrams on whiteboards.


And it is on a whiteboard because it is not useful enough to record in a longer term medium.

I'm not saying diagrams are useless, they just make a poor substitute for syntax.


I'm going to disagree on that. Every day I wish I could intermix textual and pictorial representations of logic in the programming I do. In particular, any series of computations that can be represented as a directed graph, e.g. a streaming data workflow, or state machine, is much more easily understood pictorially than textually.

The flowchart and the decision tree exist for a reason: to describe algorithms.
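To make that concrete, here's the kind of thing I mean - a toy state machine with made-up states, just a sketch. As text it's a table of transitions you have to assemble into a picture in your head; as a diagram it's three circles and three arrows:

    # Toy traffic-light state machine; the states and the event are hypothetical.
    transitions = {
        ("green",  "timer"): "yellow",
        ("yellow", "timer"): "red",
        ("red",    "timer"): "green",
    }

    def step(state, event):
        # Look up the next state; stay put on unknown events.
        return transitions.get((state, event), state)

    state = "green"
    for _ in range(4):
        state = step(state, "timer")
    print(state)  # "yellow" after four timer events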


> In particular, any series of computations that can be represented as a directed graph, e.g. a streaming data workflow, or state machine, is much more easily understood pictorially than textually.

As long as it is very simple. Electronics already have a highly developed visual language for describing their functions - but if what was going on inside every chip was illustrated just as what was going on between chips, it would be entirely unintelligible. Instead, any visual representation is at a particular scale, and well known portions are represented as blocks with cryptic textual notes next to each interface (ACK, EN, V0+, CLK, PT2, HVSD, WTFBBQ, etc.), labels to identify company or type, and an expectation that you know what they do or can find out on your own (and not an expectation that you understand how they do it.)

Anything simple enough to be completely expressed in human-comprehensible pictures should be exposed to the user and modifiable (even if not by using pictures, but forms.) I totally agree, if that's what you and this article are trying to say. My experiences in trying to encode actual human workflows in BPMN have taught me that when using pictures it's harder to express things of any sophistication than in words - because of words like "with" and "each" and "all", "if" and "when," and because of ways things change over time, and because of separate but overlapping/interacting flows that languages can express easily but pictures not so much.

In pictures, that involves looking all over your picture for different things, trying to figure out how to draw lines to them; if the condition is once or twice removed from the object of the search, it involves trying to untangle massive knots with your eyes and memory. Theoretically, that is. What it involves in practice is scrawling words all over your picture (just like in a circuit diagram.) Words that express the same types of relationships over time and type as the picture is trying to express projected onto a plane, words that could be easily expanded to include those relationships and eliminate the 18 types of lines, the 25 types of shapes, the 12 types of shape borders, the 16 color schemes and the long list of rules for connecting them that had to be invented to avoid coming up with a textual syntax.


Yes, and that's how I would use a language that would allow mixed picture and text logic flows. At a certain level of abstraction block diagrams greatly assist understanding program flow, and it is redundant that I have to write the code and then draw the block diagram later for documentation.

Going back to electronics, I don't think anyone would argue that schematic block diagrams are inferior to reading the raw netlist. Similarly, I feel programming could be improved if IDEs for popular languages allowed connecting functions together in a streaming manner. Of course, I am aware this exists (Simulink, LabVIEW, FPGA schematic workflows), but these are niche tools that I don't work in.


"I don't think anyone would argue that schematic block diagrams are inferior to reading the raw netlist."

Well, no, but some may well argue that reading the HDL is better then a diagram. I have experience working with both the HDL and schematic in the FPGA world, and in my estimation text-based HDL is way better than working with a diagram.

Of course, YMMV, my brain may just be more optimized for processing text instead of images.


Many times I've wished there were an HDL for PCB design input instead of schematic tools, now that there are often very few discrete/analog parts on a board, because large chips include almost everything needed and you mainly spend time connecting them together, possibly with a bit of plumbing but not much, and the only remaining discrete components are very repetitive: a ton of similar decoupling capacitors, pull-up/down resistors, termination resistors, a couple of voltage-divider resistors and a few other common functions.

That would be a great fit for a textual HDL instead of labouring through a schematic, mainly linking pins to pins again and again. It would even be much more expressive, now that we often have chips so big that they cannot be represented efficiently as a single symbol on a single sheet but are split into smaller blocks that look like HDL ports without the flexibility; and now that µCs, SoCs and other kinds of chips have pins that are so heavily muxed that they don't have a single clear, expressible function, meaning that grouping them into blocks is more of a random choice than a good solution. This multiplexing also means you'll often have to change, and change again, the connections of your wires in the schematic, and that would be much easier to do with an HDL.
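What I'm wishing for could even be as plain as text and data. A purely made-up sketch (not any real tool) of describing connectivity as nets instead of drawing pin-to-pin wires, where re-muxing a pin is a one-line edit:

    # Hypothetical: board connectivity as data rather than a drawn schematic.
    nets = {
        "VDD_3V3": [("U1", "VDD"), ("C1", "1"), ("C2", "1")],
        "I2C_SDA": [("U1", "PB7"), ("U2", "SDA"), ("R1", "2")],
        "I2C_SCL": [("U1", "PB6"), ("U2", "SCL"), ("R2", "2")],
    }

    # The kind of trivial check text makes easy: no dangling nets.
    for name, pins in nets.items():
        assert len(pins) >= 2, f"{name} is dangling"

And a textual form diffs cleanly, which ties into the tooling complaint below.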

-----

That's why my mind was blown when a software job forced me to use a graphical tool like Scade. It felt like going 20 years backwards, to when HDLs were not yet popular in electronics and we had to design FPGAs and such with schematics. And this was even worse, because the graphical representation looks parallel and concurrent, as an electronic schematic does, except that it doesn't match anything on the software side: first, the specification/design document you have to implement is generally sequential, not concurrent; and then the generated code and the way the CPU/computer works are sequential as well, not concurrent. So you have this weird-looking graphical part in the middle, which looks parallel but isn't really, and it messes with your brain because you have to perpetually translate from the sequential specification to it, and from it to what it really does sequentially.

An appalling moment, doing this job and discovering that they considered it an improvement on C/Ada/whatever regular programming. And I haven't even mentioned the tooling; like when what could have been a simple textual diff turns into an epic nightmare where you are never sure you can trust the result, if you manage to get a result at all.


> I'm going to disagree on that. Every day I wish I could intermix textual and pictorial representations of logic in the programming I do. In particular, any series of computations that can be represented as a directed graph, e.g. a streaming data workflow, or state machine, is much more easily understood pictorially than textually.

I've done this. It doesn't work. You need more details than can be cleanly represented on a diagram. How do you do namespacing for example? Which database schema will that box connect to? How will it reconnect?

All 'visual programming languages' fall back to text boxes constantly. Inevitably the contents of those text boxes are needed to understand or execute the visual representation of the program.


Sure, the contents of the text box is necessary. But treating the contents as a black box is not something new, and I don't see it as a problem. That's pretty much every function call ever - all I'm interested in is the calling signature. A combination of pictures and text would suit me far better than what we have today, which is text everywhere, and diagrams / flowcharts afterwards if you get around to writing documentation.


> To get there, programming tools should first use our language. For instance, to turn a button red, we shouldn’t have to write code. We should just be able to point to the button and select a shade of red.

We've had that for over twenty years.


Yeah, but web developers still tend to do it the "hard way" by editing the CSS directly. Why do web developers still do it that way even though visual tools exist?

My guess is that working with text is more efficient. The cognitive load of finding a setting in the UI, moving your mouse to it, and selecting the value is far greater than just typing.


Why do people play D&D instead of WoW?

Perhaps, in part, because one allows for a greater expressiveness -- albeit with a little more planning and cognitive load.


This does make me wonder if developers tend to be involved in more forms of creative expression (roleplaying, art, etc.) than in creative consumption (gaming, movies, etc.), what intersection(s) exist and why...


> Why do web developers still do it that way even though visual tools exist?

Visual editors don't work so well with pages where parts are static & parts are dynamically generated. (Like, a css class may be different based on the data that it's showing). And at this point, pretty much every page in a web app has dynamic parts.


Text can be version controlled and diffed.


And so can IDE generated code from drag and drop controls.

Visual Studio and many others have this out of the box but there's plenty of companies that do well selling exactly this like telerik.

I personally don't, and I've been doing it the "hard way" since the beginning because it feels less confining. And I don't want to rely on an IDE to build something. Plus the code it generates was always kind of kludgy.


Why do people still build business process tools when they could point and click in Salesforce?


Because there are no good visual designers for web apps.


I like smart tools. I can open CSS in IDEA. It'll highlight colour values. Then I can click on a colour and change it using the colour picker. I think that combines the best of both worlds.


Concentrating on the "our language" part, we could probably create something that changes settings based on natural language. You'd say

    Turn the color of button3 in pane4 red
Which would be equivalent to

    pane4.button3.color = 'red' 
Which is...actually a bit shorter, and a lot more precise. Who'd have thought. CSS is actually pretty close, descriptive and all.

    #pane4 #button3 { color:red;}
Neat coincidence on the number of WYSIWYG editors too.


That's exactly what Bubble does. Having played with it a bit it seems pretty nice ...basically the latest iteration of tools like Klik & Play or Mediator from the 1990s. You get a visual designer with a few basic widgets, a simple workflow editor that allows for basic sequences of actions and so on. It's programming stripped down to the absolute basics. You get a database, a simple visual query builder thingie that vaguely resembles English, etc.

I just showed it to my girlfriend who is learning Python. Her comment was "hmm but is this really easier than learning to code"? Well, it probably IS a lot easier, as long as your app fits within the constraints of what Bubble can do.

The problem with such tools is always that you very quickly hit the limits of what they can do, and then you're stuck. You can't easily peel back the abstraction and go deeper. You end up having to scrap the project or just give up on certain things. Bubble's "language" can't do looping, for instance. You can apparently write snippets of Javascript to do other stuff, but then you're back to needing to learn programming again.


Good! Now, the button should be a slightly different shade if the user is logged in. And it should be blue if the user is an administrator. And the customer requested that it shouldn't show up at all if the user lacks the 'foo' privilege.
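(For what it's worth, that whole pile of requirements is still only a handful of lines of ordinary code - a rough sketch with made-up user fields - which is exactly where point-and-click starts to strain:)

    # Rough sketch; the User fields here are invented for illustration.
    from collections import namedtuple

    User = namedtuple("User", "privileges is_admin logged_in")

    def warning_button_color(user):
        if "foo" not in user.privileges:
            return None                      # lacks the privilege: hide the button
        if user.is_admin:
            return "blue"
        return "darkred" if user.logged_in else "red"

    print(warning_button_color(User({"foo"}, False, True)))   # darkred
    print(warning_button_color(User(set(), True, True)))      # None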


Turn the color of button3 in pane4 red

COBOL had that idea, and AppleScript took it further. For example:

  tell application "Safari" to activate
It starts to get unwieldy because there are so many parts of speech, and remembering them all and how they fit together is a pain. Some people call AppleScript a "read only" language because of this.

"Looking like a natural language" doesn't make the language easier to use, at least until we have AI compilers.


The "natural language" programming thing has been done a few times. HyperTalk comes to mind for one example.


And never quite took off.

I think python strikes a good balance, naturalish language whenever possible, without forcing it into places where it doesn't fit. At least that's the feel I got from it.


I just had a waking nightmare of AppleScript.


I have been doing coding for twenty years now and I am kind of surprised that plain text is still the best representation of code.


"...so why are we having a serious conversation about grooming children to become software developers before they’ve even gone to middle school?"

We're not, really, but given the pervasiveness of computing technology we're recognizing that it's important for children to have some formal experience with software design concepts regardless of which career path they choose.

I'm a firm believer that at least some coding ability is beneficial in any profession. It's like writing, or vocabulary; you don't "need" it for some professions, per se, but being a good writer enhances both your professional and personal life in many ways, so it's worthwhile to teach. It's much the same with coding.


Exactly - teaching people how to use software to solve problems at an early age is not railroading them into a single career path, it's setting them up to be more effective in the career path that they eventually choose.

An administrative assistant who can write scripts to collate and email weekly reports to their boss is far more valuable than one who spends four hours a week combing through excel spreadsheets.
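As a hedged illustration of how small that kind of script can be (the file names and the "hours" column are made up, and the emailing step is left out for brevity):

    # Collate a numeric column across weekly CSV exports into one summary file.
    import csv, glob

    total = 0.0
    for path in glob.glob("weekly_report_*.csv"):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                total += float(row["hours"])

    with open("summary.txt", "w") as out:
        out.write(f"Total hours this month: {total}\n")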

A visual artist who can write their own plugins for Blender/Maya/Photoshop/etc will be much more flexible and productive than one who performs the same 2-minute-long string of commands hundreds of times a day.

A machinist who can quickly design a part in Solidworks and send it to a CNC mill or lathe will be able to serve a wider range of customer needs than one who outsources that design work or crafts the part by hand.

And so on. Why is there so much hostility to the idea of teaching children some basic programming skills at an early age?


This. Whether or not we need more professional devs is beside the point. At coder dojos, for example, they're not teaching design patterns, they're teaching the basics - assignment, loops, conditionals - the kinds of things that allow you to automate computation in other fields of endeavour


There are also a million other skills that would be beneficial.


Programming is done in code for the same reason mathematics is done in notation, for specificity.

Doing programming in plain English would be just as cumbersome as doing math in plain English.
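To pick a trivial illustration of my own: "take every number in the list, keep only the even ones, square each of them, and add the results together" is longer, looser, and easier to misread than the notation:

    numbers = [3, 4, 7, 10]                            # example data
    total = sum(n * n for n in numbers if n % 2 == 0)
    print(total)                                       # 4*4 + 10*10 = 116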

I don't think anyone can quite fully imagine the nightmare of trying to program in a recursively enumerable language[1].

[1] https://en.wikipedia.org/wiki/Chomsky_hierarchy


Hell, even trying to explain how a relatively simple shopping cart web app works, unambiguously, in plain English, to an executive, is extremely tedious and verbose, requires defining a lot of specific terms, and at the end of the day it still confuses the hell out of him.


you should work for better executives.


I think this is correct -- but.

In Utopia (aka Nowhere) we will have programs that a human being can know are correct. This might involve mathematical proofs, tests and whatever. But how do we know that those things are correct?

In some cases, it would help if we had an English text explaining what the program should do + computer verification that the program really does that. The English text is only one part of the picture -- but an important part.

This is what acceptance testing tools like Cucumber and Robot try to do; but they avoid actually parsing English. Computers are getting better at parsing human languages, so I expect improvement in this field.
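The trick those tools rely on is roughly this (a hand-rolled sketch of the idea, not Cucumber's or Robot's actual API): the "English" is a rigid pattern matched against a table of step implementations, so no real parsing of natural language happens.

    # Minimal sketch: English-looking steps dispatched by regex, not parsed.
    import re

    def deposit(account, amount):
        account["balance"] += int(amount)

    steps = [
        (re.compile(r"I deposit (\d+) dollars"), deposit),
    ]

    def run_step(text, account):
        for pattern, func in steps:
            match = pattern.fullmatch(text)
            if match:
                return func(account, *match.groups())
        raise ValueError(f"no step matches: {text!r}")

    account = {"balance": 0}
    run_step("I deposit 50 dollars", account)
    print(account["balance"])  # 50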


Isn't doing programming projects much less cumbersome when you hire a professional programmer and express a high-level project spec to him/her? Why can't a sufficiently advanced machine-learning model behave just like such a professional?

Modern models learn to achieve goals in increasingly sophisticated 3D environments and even learn to execute commands in some very limited form of natural language. You could say that achieving more humanlike performance may be "just" a question of engineering and scale.


Some of my coworkers have been using similar concepts to teach K-5 coding in a few school districts for a few years now. And the big, surprising impact has not been about the code, but about the problem solving... I'm getting the stories second-hand, but apparently the concept of debugging problems becomes so ingrained that they "debug" all their efforts. When they make a mistake in math, they debug their process to fix it. They debug what is wrong with their handwriting to improve it. They think of everything as a problem to be solved, work towards solving it, and are doing wonderfully across all subjects.

So the question of whether these kids will be coders as they grow up really doesn't seem to be that important of a question -- they are being taught how to succeed at whatever they try. I'm excited to see these types of programs move forward and become prevalent throughout our educational system.


> the concept of debugging problems

I've been seeing this same thing happen with my 7 year old son (I'm a programmer), and I always encourage him to find out why something didn't work out the way he expected it to.

Whether they grow up to become a coder or not, I'll be happy when more people look for the cause of the problem, rather than just simply treating the symptoms.


I think the author is missing the forest for the trees, here. Schools often don't have much of a computing curriculum, and these classes are great for improving general programming literacy. And if one kid finds out he really enjoys it when he wouldn't normally have through school, that's a win. It's true that most of them won't become programmers, but we don't teach biology in high school assuming you won't become a biologist, either.


"To add an example for clarity, think of the field of typography — until the Digital Age, typography was a specialized occupation. But with new programs like Microsoft Word coming into existence, typography (e.g. formatting a document, setting the margins, making sure the lettering is appealing, etc.) became something everyone could do easily without much thinking."

Without much thinking pretty much encapsulates what Word did to presentation standards, at least in my experience. Let us never forget WordArt.

Part of me always wants to make the argument that these things are difficult, not just because of an abstract syntax and arcane rules, but because these things are genuinely difficult to reason about - attempts to make difficult things easier by papering over the cracks result in a lot of pain for a lot of people. Bits ping off and people are left unable to even begin to solve the problem.

However the very, very, obvious flipside of this is that lowering barriers to entry is pretty much always a good thing. It invites unconventional perspectives and novel approaches - how could that be a bad thing? Sure, some people will make crappy things that shouldn't have ever existed but by the same token some people will make great things that never would have been without the lowered barriers.


Coding in a good language already consists of writing about the things you care about, not the things the computer cares about. Text (with a few symbols) turns out to be the best way to express computations, not to a computer but to a human reader/maintainer.

The future is most professionals writing code as part of their job, just as the present is most professionals writing as part of their job.


It saddens me that laypeople equate software development to writing code, that's like saying an architect just knows how to draw.


Which is an excellent example. Architects still do architecture, the tools just evolve.


"The future I imagine is a world in which programming is self-explanatory, where people talk to computers to build software. To get there, programming tools should first use our language."

But is: "For every button on the page that is a "warning" button, replace the background color to red."

Necessarily better than? $("button.warning").css("background-color", "red")


Inform 7 is a programming language that looks similar to natural English. As far as I can tell the only benefit is tricking beginners into thinking it's easy. By the time they figure out it's a normal programming language only with extra verbose syntax, it's too late, they're already a programmer.

If we had a real natural language based programming language it would have all the problems of law. Laws are written in a very formal style that takes a lot of training to understand, and despite this they contain enough ambiguity to support a whole industry of lawyers arguing about them. Making programming similar to law would not make anything easier.


I used to mess around a bit with Inform 6 (they had a really excellent tutorial[1]). It wasn't great, but since it was basically C-like with some extra DSL stuff sprinkled on top, you could usually figure out what it was doing without too much effort.

I tried to get into Inform 7, but the natural language syntax was so fuzzy, I was constantly trying to figure out how things worked and how I had to write things down to get the results I was trying to get.

EDIT: Since guess-the-verb was sometimes half the battle with old Infocom games, I suppose it's ironically fitting that the experience of writing IF with Inform 7 parallels that experience.

[1] http://inform-fiction.org/manual/about_ibg.html


For a previous attempt at the same idea, see COBOL. The original design goal was to make professional programming obsolete.


Applescript is worth looking at for the same reasons.


Yes - it's miles better. If you're unconsciously competent (i.e. good at your job) they might seem like they both take about the same cognitive effort to understand.

But think for a beginner:

What is $ ? Is it money?

What are () for?

What is CSS?

I want to shade it, so let's do background-shade... oh, only color works, so you have to know all the terms.

Can I do "background-color" "red blue white striped"

What is button.warning? Can I do button.border ?

These are all questions a newbie could ask - because they have no experience in the "domain language"


The question raised was "better," not "easier to learn." In the absence of everything else, being easy to learn is a point in favor of natural language--but the inefficiencies, ambiguities, and probable weakness of using natural language are big marks against it. Checkers is easier to learn than chess, but has fewer players.


I have to think of AppleScript "Tell xxx to yyy". That was a real winner :-)


I remember going to a mobile conference in the early 2000s and every single vendor there was saying that developing mobile apps using UML was the future. No code, just map out everything in a diagram.

Granted a smart phone was unheard of at this point so most mobile apps wouldn't even be called apps by today's standards.

A decade and a half later and mobile developer is a highly skilled _coding_ position.


in the early 2000s and every single vendor there was saying that developing mobile apps using UML was the future. No code, just map out everything in a diagram.

UML in later versions got so bad that even the original creators disowned it. Turned out that putting everything into a diagram was just as hard (and less convenient) than writing it as code.


And UML is... well, maybe not dead, but nobody thinks it's going to replace coding.


> For instance, to turn a button red, we shouldn’t have to write code. We should just be able to point to the button and select a shade of red.

Someone is going to have to write the code so that the end user can just click buttons. So if this future of programmatic interfaces is coming, it's going to require more people writing code to build it, not fewer!


Just point to the button and select a shade of being able to point at the button and click a color. (Yes, that is confusing.)

I don't think the author or the person interviewed has used Visual Basic, or understands why a developer might not want that.


Essentially what he's saying is that in the future people will just read and not write . . .


I don't know if I agree with the author, since they didn't seem to provide any evidence or argument for why the future is codeless. They did warn me that I wouldn't be able to understand from my vantage point in Silicon Valley, though, so maybe other people see the argument? But from my perspective code is only growing, without showing signs of abating.


First off, I agree. I agree that everyone who continually says "programming" is dying and that in 5-10 years "AI" will be writing code (or whatever else they can dream up) has no idea what they are talking about.

Programming is definitely going to get easier and I don't doubt that it will become a common skill.

I just doubt this idea that professional programming is a dying art or something. It is just silly to say that because tools that lower the barrier of entry are becoming more common, that the entire profession will soon be dead.

(Also, I am like 75% sure that it was a sponsored article to advertise for the company named in the article)


Programming won't be any easier. It's like claiming that in the future thinking will get easier. I'd claim otherwise: keeping in mind recent trends of "outsourcing" knowledge, many brains just don't have enough building blocks in working memory to do complicated thinking.


I believe that:

> It is just silly to say that because tools that lower the barrier of entry are becoming more common, that the entire profession will soon be dead.

Is actually due to labor supply vs. labor demand. I can envision a world where the majority of jobs are replaced by machines (from baristas to accountants), and web/software development (among other fields, such as specialized medicine) is one of the last places for people to remain self-sufficient off their own labor. With this over-abundance of supply, employers are capable of being extremely discretionary over their hires, and are able to pay just enough for their human capital to sustain themselves.


I think we are on the same page, but I was genuinely hoping someone could fill in the gaps the author seemed to be hand waving about. Is programming going to become an easier skill? Absolutely. What does that have to do with the means? What evidence do we have that drag and drop interfaces will replace the majority of coding? Why not something like swift playgrounds or some other IDE innovation that make the coding easier but get out of the way to expose the text if need be? Why is that harmful? Why is that not coding anymore?


As another SV resident, I can understand how, given WYSIWYG tools and CMSes like WordPress, it would "only seem logical" that eventually nobody needs to code and we just magically make things work.

I think the problem with that assumption is that there has been virtually no traction on "just telling the computer what to do" - whether text or voice. There is also the assumption that programming is not getting more difficult. While not everyone is working on "capital E" engineering efforts, it's a bit ridiculous to think that our jobs are getting easier to automate in the wake of Internet of Things, Virtual Reality, Augmented Reality, and projects like Web Assembly pushing browser-based development into interesting, new places.

Just because we made it easy to set up a blog doesn't mean we made it easy to "program with words".


Progress happens slow in the short term and fast in the long term.

In the 1980s, there was a consensus that "software components" enabled by object orientation were a pipe dream.

They were so long as you were using C++ which was barely binary compatible and where you couldn't reuse objects in a .so file without also having an .h file. It was awful, not at all a minimal viable product.

Then Java came along and a number of other languages that adopted essentially the same model for OO programming such as Python, PHP, Ruby, C#, etc.

Now you can cut and paste a few lines of XML into Maven and woohoo... You've incorporated a software component into your system.

People bitch that it has to be XML, but the sheer ease of doing so means it is not hard at all to get 100+ dependencies in a project and now the problem is dealing with the problems that come when you have 100+ dependencies.

(And of course the same is true with npm and every other language that has similar tools.)

Two big themes are: (i) tools that reduce the essential difficulty of software development and (ii) antiprofessionalism in software engineering.

Compilers like FORTRAN's mean you don't need to have the intimacy with the machine you need to write, say, macro assembler. That is mainstream, but other technologies, such as logic programming and rules engines, are still stillborn. In theory, tools like that mean the order of execution does not matter so much, so you don't need the skill to figure out what order to put the instructions in. Practically, they have yet to become vernacular tools that are palatable to programmers and non-programmers. (Anything programmers can't stand will be 10x more painful to non-programmers, I can tell you that!)

Anti-professionalism is another big theme. Had computers come around 20 years earlier we would probably have a programmer's union, licensing and other things that would make a big difference in our lives. As it is, the beef that programmers have is not that we don't get paid enough, it is that we are often forced into malpractice by management.


This is a very amusing article because I went in expecting fairly sophisticated arguments about Cloud, PaaS, DevOps, layers of abstraction, higher-level languages/paradigms, etc. Perhaps this was naive, given the domain of origin.

The irony is that the future probably will enable individual programmers to have an even more outsized ability to create value, and fewer programmers will be necessary to accomplish the same set of tasks. Sadly, cogent arguments about meaningful issues aren't exactly TechCrunch's forte.


There is an upper bound limit for how abstract a general purpose programming language can become. Programming languages mainly exist because of their ability to remove ambiguity. Our natural language on the other hand is very vague. Many people might read the same exact article and interpret it differently. This is natural language's great feature. This feature is why a kid, without fully formed thoughts, can learn and use a natural language. Hence I don't see a day programming languages will completely fade away. Programs are result of a careful thought process that cristalizes a concept into a process and that process is only complete when you can describe it in an unambiguous language. One may argue that natural languages are capable of being not ambiguous. A subset of a natural language can be used without ambiguity but that is just definition of a programming language. Arguing programming languages will fade away is the same as saying math one day will not be necessary because we can explain all concepts in physics or other sciences in natural language.


Are there graphical languages that advocates like?

The only graphical language that I've encountered professionally is LabVIEW, and I've yet to see an instance/programming style where it has been superior for anything but quick prototyping.

A language that's editable in both flowchart and traditional formats could be very useful, if executed in a way that doesn't cripple the traditional side of things.


People seemed to like Scratch [1] a lot a few years ago. Haven't heard much about it since, but IIRC it can switch between flowchart and text mode.

[1] https://en.wikipedia.org/wiki/Scratch_%28programming_languag...


I have an artist friend who does everything in Max/MSP. At first I was rather dismissive and thought he should just learn to code properly - but the patching environment makes him able to get results incredibly fast - much much faster than I can do with C++ (even using something like Openframeworks)


LabVIEW was eerily satisfying to write code in. Or at least some type of code in. Loops and logic blocks got a tiny bit weird, but overall you could do a lot with sub-vis.

I find myself craving it even 15 years later.


LabVIEW gets really weird once you get to things like loops or, even worse, threading. You will start craving some good old fashioned code very quickly.

I haven't seen any kind of visual coding environment that didn't fall apart quickly once you got to more complex scenarios.


I've heard tons of praise for https://docs.unrealengine.com/latest/INT/Engine/Blueprints/ and I've job-interviewed several people who have built simple games without bothering to type code.


A designer/fabricator friend develops a CNC CAM system in Grasshopper for Rhino, probably faster than I could in code. It is node-based, but more importantly changes can be made live, and with a 3D engine available for preview/visualization/debugging.


The writer presents an ideal vision for the future - one where people can "build software" purely through abstract thought, without needing to know the semantics of specific tools and programming languages.

If such a future is possible, that would be great. I would be all for it. But the people who are already in the field, working in the trenches, don't think it's a realistic vision for the near future. All attempts thus far to produce "layman friendly programming" have been either failures, or relegated to non-functional toys. Hence why we don't want to waste our scarce time and resources on such moonshots.

If the author and his peers disagree, they are free to found/invest in such ventures. And if they're right, they can make a fortune for themselves in the process. But just sitting in the sidelines and armchair quarterbacking is a pointless waste of time for everyone involved.


That's a bit cocky, considering the amount of knowledge and effort Google has behind AI, quantum computing and other fields.

But I guess the main point of the article was not that, but the Bubble plug.


Think of the other domain where people want to be precise: contracts.

Contracts are written down in text, so they can be edited, carefully read and referred to later.

He might as well also say, that in the future there will be no written contracts, we'll go back to debating and settling our issues verbally in public.


> that in the future there will be no written contracts, we'll go back to debating and settling our issues verbally in public.

This is a great metaphor. Only I would change "he thinks we should go back" to "he thinks we are going to invent a magical technology that lets lawyers from two companies drag and drop a few images together and BOOM, there's a legally binding contract."


The lawyers will explain what they want to the computer. They will have no idea what the computer made of it, but the result will look close enough that they won't be able to argue they didn't get what they asked for.


Nice sponsored article there.


I actually clicked the link for whatever visionary platform bubble is--it's a workflow engine! Real groundbreaking stuff there.


Visual Basic was a success until Microsoft killed it.

Remember when you could write HTML with a WYSIWYG editor?

Most sites can be built with Wordpress.

The hosting side of things is mostly automated and becoming more standardized.


To each their own. Yes, I'm sure one day more people will use software that allows them to put together amazing programs using complex GUIs that do wonderful magic. These may be business programs that do statistics, or games, and we've already seen both (hello Excel, hello game studios). The problem, as it always has been, is that point when you want to do more and suddenly find yourself in asm land or looking to talk to mysterious "drivers". After all, what's a port anyways? Does that mean my computer has Docker built-in? As I hint, people will hit that wall of mystery, and lower-level programming becomes inevitable. After that happens, they eventually become accustomed to this idea of telling a computer what to do by strict commands according to a specific protocol. For the average guy who never gets into it, you could tell him it's kind of like talking to a dog. "Sit." It sits. "Love." It wags its tail. "You're so smart, pooch." It gives you a blank stare... and keeps wagging its tail.

These days, when I look for a programming language, it's not about the syntax sugar, it's more about the feature set that comes with the language: things like the module and build system (e.g. Java is pretty easy), the ease with which both complex and trivial tasks can be accomplished, and the availability of a support community and libraries.


If it was another company doing this, there might be a chance it is meaningful. Google is not that company, however. They started and killed visual programming projects before, dumping everyone's data. The last one was called App Inventor. At least they made the code open source so MIT could write App Inventor 2 based on it. The best thing they could do to make me think Project Bloks will be meaningful is to give it to another company in a similar fashion. Otherwise I expect they would just kill it next year anyway like other projects of theirs.


The difference is that Bloks isn't a service that can be shut down. It's essentially a set of open source schematics.


Programming is really formalization. That is the hard part: the kind of thinking required to take something and express it as a set of logical and computable constraints. It doesn't matter how much money you throw at the problem; we are never going to have the entire population able to "program".

It is the same reason despite all the training in mathematics only a few people go on to get a PhD and come up with something novel in mathematics. The rest of the population gets by with basic algebra, not even calculus is required.


Dear writer, the future is fewer people writing sponsored content.


Maybe an unpopular opinion here, but I agree with the article overall.

My view is less that programming is going away, and more that all jobs are. Not immediately or anything, but I don't think we are going to magically produce programming jobs for all the masses who are going to need a job.

Having been at this over 15 years, I have single-handedly automated thousands of jobs, and a healthy handful of those were about making things efficient enough that a project needs fewer programmers, etc.

So while we will still need programmers, probably forever, I'm not sure why people think that the number of programming jobs will do anything but stay the same or decrease, while the number of candidates increases.

Tooling has come so far, and it's going to go farther. You don't need to know a lot to make something meaningful anymore.

Expecting to train all children to be programmers and thinking that by the time they are our age it will still be a lucrative field is silly, I think. My prediction is that all the new programmers coming into the field, intersecting with the tooling getting better, intersecting with a lot of other markets needing fewer people, leaves us with a crowded field of players where the average skill level is lower, because the tasks no longer require it to be all that high.

Programming is the new carpentry. Probably jobs for a long time , but training the children now like it's going to be the most amazing career path is short sighted I think.

Considering this, I hope my kids don't pick programming as a career. I would love to be wrong.


Supporting your opinion the BLS puts the job outlook of a programmer at -8% between 2014 and 2024.

I hold the same opinion as you. I think that new languages and tooling will empower people to do more with less while being easier to learn. Combine this with globalization plus stagnating economies and the outlook of programming as a career seems less lucrative.


"Programming" is the task of making other tasks irrelevant. Thus, if we have all the software we could want, it means there aren't any jobs for anybody, whatever profession they choose.


Programming is the new carpentry. Probably jobs for a long time , but training the children now like it's going to be the most amazing career path is short sighted I think.

Train your children to think. Give them the freedom to change career paths if necessary.


It seems like this is focused a lot on what people want to do, as opposed to what provides value. Not that that's a bad focus! But if you want to predict the future, I think market forces would be a better indicator; as much as we might want to move to a future where programming involves more intuitive tools, I still think it will be more powerful, and thus valuable, to be able to muck around in the code.


>But with new programs like Microsoft Word coming into existence, typography (e.g. formatting a document, setting the margins, making sure the lettering is appealing, etc.) became something everyone could do easily without much thinking.

Correction: most people do it without thinking. This does not mean the majority of the typography on the internet (and eventually in print) is good or well done.

My LaTeX setup produces fairly good typography for math work, without too much thinking on my part, after setting up the packages, fonts, etc. and learning LaTeX, which did take some time. (And ironically enough, writing LaTeX feels quite a bit like coding...) But first of all, deciding to use and learn LaTeX (or some other workable solution for producing good typography; it probably isn't the only one, but it's the one I'm familiar with) requires that you think about typography and realize that good typography is needed in the first place.


This article rubs me the wrong way. If any of the things described in there were possible, why the hell would I be writing any code? I'm a lazy programmer, for god's sake, and code I don't have to write is a win in my book.

Matter of fact, once I got past the point where the novelty of writing lots of code wore off, I'm spending most of my time trying to write less code.

The sidestep all these miracle solutions for bringing coding to the masses in one fell swoop and eliminating its tedium actually make is: "Hey, technically, if you draw pictures instead, it doesn't count as writing." Yes, technically true, totally useless. I personally believe that whoever comes up with this again and again deserves to be bludgeoned with a copy of "K&R The C Programming Language" turned into a picture book. All 70,000 pages of it, with the big glossy full-page, double-page foldout prints.

/rant


-on mobile so I apologize in advance for bad grammar and typos

I don't think the author understands the purpose of the project. Google wants more coders. As a company, coders are likely one of Google's largest expenses (at my job, staffing is 25% developers, and staffing is ~80% of our current budget). Does Google necessarily need more developers writing code? No; however, they could use more people who can code to solve the small problems they face daily.

I think it's more likely we'll have people writing code informally, and as a small part of their overall job.

>Writing code will become less and less necessary, making software development more accessible to everyone.

I agree with that sentiment, however I fail to see the link between more accessible development and fewer people writing code. This process has been happening for years.

>The real benefit of something like Project Bloks is that it actually removes the code.

But is that new? What if something more advanced is needed?

Excel is a good example of writing code for a job. Access is an example of programming without writing code (it's SQL with a GUI). Both tools are popular; however, people have a hard time doing advanced things with them. This is also perhaps due to the high cost of creating the building-block interface/software.

By thinking logically, people may not write code formally, or may not write any code at all; however, it will encourage them to create solutions to the problems most applicable to them. Maybe their solution is 90% the blocks provided by their program and 10% code they wrote to handle their edge case. Perhaps it's something they only engage in one day a month.

In the end I think we'll see more code, more people writing code, and programs that handle the common tasks as building blocks while letting you write code for the complex parts and plug it in where needed.
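
To make that 90%/10% idea concrete, here's a toy sketch (every name and file in it is invented, not taken from any real product): prebuilt "blocks" do most of the work, and the user writes one small function for the edge case the tool didn't anticipate.

    import csv

    def load_rows(path):
        # Prebuilt block (pretend it shipped with the tool): CSV -> list of dicts.
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def total_by(rows, key, field):
        # Prebuilt block: group rows by one column and sum another.
        totals = {}
        for row in rows:
            totals[row[key]] = totals.get(row[key], 0.0) + float(row[field])
        return totals

    def fix_legacy_codes(rows):
        # The user's 10%: old exports wrote "N/A" where "North" was meant.
        for row in rows:
            if row["region"] == "N/A":
                row["region"] = "North"
        return rows

    rows = fix_legacy_codes(load_rows("sales.csv"))
    print(total_by(rows, "region", "amount"))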


Thinking that turning buttons red is the major problem of programming is mistaking the interface for the substance. It's as if you wanted to teach people how to develop new automobile technology by selecting the shape of the steering wheel, the fabric in the interior, and the color of the paint job.

The problem with programming is that computers can't understand your intention. What amateur programmers need are side-effect-free functions, efficient abstraction over cores and memory management, and static analysis that makes functional bugs as obvious as a leak in a plastic bag. Computers will never understand your intention; programmers barely understand your intention, and they have a lot more in common with you.
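
A toy illustration of what I mean (my example, not the article's): the side-effecting version quietly mutates the caller's data, while the pure version is trivially testable and a static checker such as mypy can flag misuse of it from the type hints alone.

    def apply_discount_in_place(order: dict, pct: float) -> None:
        # Side effect: mutates the caller's dict; calling it twice by
        # accident silently double-discounts the order.
        order["total"] = order["total"] * (1 - pct)

    def discounted_total(total: float, pct: float) -> float:
        # Pure: same inputs always give the same output, nothing is mutated,
        # and a type checker can flag callers passing the wrong types.
        return total * (1 - pct)

    order = {"total": 100.0}
    apply_discount_in_place(order, 0.1)   # order changed underneath you
    print(discounted_total(100.0, 0.1))   # -> 90.0, original data untouched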


He's right but he doesn't understand why. We'll write less code in the future but it won't have anything to do with visual programming languages or other fancy tools for "programming without code". Code will always be the best way to write programs.

The reason we'll write less code in the future is we won't need as many programs. The future is in machine learning and machine teaching, which enable a single program to perform a huge variety of different tasks. We'll train the computers of the future by showing examples and correcting mistakes, as we do with our fellow humans. Machine teaching is a different thing entirely from programming.
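
Even with today's tools the difference is visible. A minimal sketch using scikit-learn (the data is made up for illustration): no rules are written, only examples are shown.

    from sklearn.tree import DecisionTreeClassifier

    # "Showing examples": (hours of daylight, temperature) -> season label.
    examples = [[14, 25], [15, 30], [9, 5], [8, 0]]
    labels = ["summer", "summer", "winter", "winter"]

    model = DecisionTreeClassifier()
    model.fit(examples, labels)        # the "teaching" step: no rules written
    print(model.predict([[10, 8]]))    # -> ['winter'], inferred from examples

    # "Correcting mistakes" today mostly means adding the misclassified case
    # to the examples and fitting again, not editing any program logic.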


Did the author not notice the text on the Project Bloks homepage he linked to that says

"creating new ways to teach computational thinking to kids."

Pretty disingenuous or arrogant to act like you're setting Google straight xD.

Also, "intersectionality."


Any time a new idea comes along, there's always someone who claims everyone needs to learn it, and that it needs to become a part of school curriculum.

In my opinion, the ability to think laterally is far more valuable than the ability to think 'computationally'. The latter consists of essentially one pattern of thinking -- procedural -- while the former opens one up to an infinite set of patterns with which to think.

The computer is a decent vehicle for exploring patterns or modes of thinking once you've discovered them, but the goal should be to explore the pattern, not the vehicle.


"Writing code will become less and less necessary, making software development more accessible to everyone."

I heard that same argument 30 years ago. "4GLs" and "expert systems" and "application generators" and "visual programming" were going to do away with the "engineering" aspect of software engineering.

However, in reality, we write more complex code for a simple business app now than ever before.

Once hard AI can extract requirements and transform them to systems, we can retire from coding, but probably not before then.


Programming is the act of "describing" a computation to a computing machine. In the past, people used gears and levers to do that for mechanical computers. The description of a computation can take many forms: formulas in an Excel sheet, a timeline describing an animation, or C code in a device driver. Each of these forms has its own particular context in which it is useful. So it doesn't make sense that in the future we will have "one particular way" of describing computations.


Is "computational thinking" really something that isn't already being taught in schools? There's a link to an article that spends three pages defining it: https://www.cs.cmu.edu/~15110-s13/Wing06-ct.pdf

It alludes to computer science, but really it just boils down to solving problems by breaking them into parts. In other words, problem-solving skills. It doesn't sound like anything special to me.


I don't think it's being taught adequately to lots of people who aren't destined for careers in tech or academia. Look at the nearly universal disdain for math and especially "story problems," as well as the mystical aura of "coding" outside of those tech / academic circles.

In that context, I do think early encounters with programming might be a great entry point to "computational thinking" for some who aren't well served by existing curricula. It certainly makes more sense to me than trying to train every child to write code for its own sake.


"[...] so why are we having a serious conversation about grooming children to become software developers before they’ve even gone to middle school?"

For the same reason we groom children to become mathematicians, scientists, readers, writers, historians, musicians, actors, etc. before middle school. Project Bloks grooms your child to be a Software Developer about as much as their first grade teacher grooms them to be a Quantitative Analyst.


Once upon a time, Excel didn't exist, and if you wanted to do that kind of thing, you needed to program it. So Excel replaced a segment of programmers.

At the same time, many other programming opportunities opened up. We have code in every single doorknob (at least, a lot of them). Some coding usages will be replaced, but others will always open up, at least for the foreseeable future.


The future isn't about writing less code or using building blocks, but about using AI and NLP to produce results from user intent. A perfect example would be Wolfram|Alpha and the Wolfram Language. It's no longer about developing a piece of software, a mini app, or a microservice, but more about getting answers, either textually or graphically.


Well, I disagree. When I started to program, in 2000, it was so easy that a 12-year-old child could do it (HTML, PHP, FTP). Although anyone can deploy something online today, I doubt it's as easy for a 12-year-old kid to learn the modern equivalents of what I used, like React, Node, and Git.

Edited: conclusion, programming is becoming harder, not easier.


> programming is becoming harder, not easier.

So true. I'm not sure why, it shouldn't be that way.


Both will exist; you need one to create the other. Eventually the graphical will become the textual representation used to create the next higher power.

There is always a lower and higher power, even if they are equal in design.


The linked article by Jeannette Wing is more interesting than the techcrunch article. She wrote really interesting stuff on subtyping in the 90s so her name jumped out right away.


I'm sorry, but the future is clear: Perl 6 will be the last programming language. The only way coding will become visual is when we use Unicode symbols to draw diagrams in Perl 6.


As someone who has had to "productionize" or "modernize" an Excel spreadsheet or Access database several times now, this is one of those prognostications people make from time to time and that, in my experience, tend to be incorrect.

The issue with "computational thinking", at least as this article seems to want to teach it and as most schools already do teach it in the real world, is the tendency to stop at the basics: Office applications and just enough VBA/macros to give people a feeling of competency, without giving them a glimpse into the real depths of programming and what software developers really do.

I keep wanting to make an XKCD-style sketch graph of the idea. But there are a lot of Dunning-Kruger, over-confident business people who think all the software they need to run their business is spreadsheets, and spreadsheets pretending to be databases, like Access. To them, real software developers seem overpaid, based on their experience of the Lovecraftian "systems" they can hack together with what they think they know.

That's a very real and dangerous place for business people to be, but it is unsurprisingly common. Those people don't respect programming as a discipline and a craft, and sometimes those are the people out in the corporate world controlling software developer salaries or morale...

It's also the same lack of knowledge about software as a craft (as engineering, in a very classical sense) that leads people time and time again to the well of "well in the future people won't be coding because [ Excel will do it all | There will be a visual tool everyone will easily understand | AI will do all the programming based on natural language queries | Insert some other magic idea here ]".

There's as much art to software development as there is science, and forgetting that the art will still need artists and will not make itself is a strange thing that is surprisingly common.

To be fair, a lot of software developers have themselves played into this delusion, and it's something of a trap a software developer can easily fall into. We're trained to break down systems and try to automate them to their fullest potential, and it's sometimes hard to avoid the meta-leap of wanting to do that to our own systems. We fall into building "Business Rules Engines" that we think some business users might be able to understand and that might obfuscate away the need for programming. We experiment with boondoggles like visual programming languages and "auto-coding" experiences. We get grandiose visions of the machine, or software product, or great AI that will make it all more accessible...

The future will probably look like the present in that regard. We'll still have the Dunning-Kruger folks building mission-critical applications out of complex webs of Excel, Access, and other past and future productivity tools built with the goal of making programming more accessible. We'll still have software developers eventually hired to clean up the messes and craft versions that can last sustainably or operate reliably outside the hacked-together environment in which they were originally built. There will continue to be software developers who think they can build the one environment to rule them all and save everyone time (and who meanwhile eat up so much of the software development budget and schedule building it)... And all of these groups will still have a hard time communicating to each other the real risks and efforts involved in any of it.


So long as it's the right people no longer writing code, that future sounds pretty nice to me.


Is the future of math to get rid of notation, greek letters, etc?


Anyone else stop reading at "intersectionality"?


Well, that's what they said forty years ago, so...


It's not about code. When you, as a programmer, consider the purpose of programming, and how computer-illiterate people view it, things are different. A programmer's job is not to type code. It's to find out what is holding other people back from achieving what they want and find a way to remove that barrier. It's to solve problems and help people make more valuable use of their time.

It could be the analyst who spends hours of drudgery printing out things from one system and retyping them into another (then going back to fix the typos), when what they want to be doing, what would enable them to provide value, is analyzing the output from the second system. It could be the junior exec who spends countless hours manually collating data to build spreadsheets and presentations about their projects, when what they want to be doing is trying out new ideas and refining those projects.

Many people were raised to "write each vocabulary word 30 times" and see drudgery as a necessary, albeit frustrating, part of their jobs. Programmers automate that away so those people can do more important and useful things and produce more value. It's not just drudgery though. People have hard problems, often vaguely defined, and programmers help them understand, specify, clarify, and solve those problems.
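
For the analyst example above, "automating the drudgery away" is often nothing more exotic than a short script like this sketch (file and column names are hypothetical): export from system A, reshape, and produce a file system B can import.

    import csv

    with open("system_a_export.csv", newline="") as src, \
         open("system_b_import.csv", "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=["account", "amount", "date"])
        writer.writeheader()
        for row in reader:
            writer.writerow({
                "account": row["Account Number"].strip(),
                "amount": row["Amount (USD)"],
                "date": row["Posting Date"],
            })
    # No typos to go back and fix, and the analyst's hours go into analysis.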

Younger generations are more computer literate, but still often just use computers and don't realize how much control they could have over them. They may use programs that don't do quite what they need, not realizing how easy that would be to fix. Even if they're not the ones doing the programming, just recognizing that a programmer could help solve their problems is valuable.

At the same time, many non-programmers don't realize how hard some things are to program, or how clearly, precisely, and unambiguously things need to be defined for computers to produce the desired result. They assume something must be simple when they can't even define what that something is. Or they assume that because a program exists, any programmer could make an equivalent but slightly different program quickly and easily.

We don't need to teach everyone to code, and we certainly don't need to teach everyone the syntax of some specific language. But we really do need to teach them to think in these terms: What problems do you face? Which of those could be automated or streamlined? How would you specify it clearly and unambiguously? What edge cases and special conditions do you need to deal with? And so on. Given that line of thinking, those who are interested will learn to program and those who aren't will at least understand it. Some hypothetical pictocode/vocalcode/AIcode doesn't really matter. People need to understand the basic concepts of problem-solving and automation, how they can be useful, and what makes them relatively easy or difficult.


Same as always, right?


And in this thread you can see programmers getting defensive and flustered over the notion that they too might be vulnerable to automation.


Or recall all the other times a silver bullet has been hyped. The first one I recall was "The Last One" in the late '70s / early '80s.



