What will programming look like in 2020? (2012) (lambda-the-ultimate.org)
139 points by slbtty on Sept 25, 2021 | 146 comments



http://lambda-the-ultimate.org/node/4655#comment-73772

> ... the beginnings of intelligent ... assistants in our IDEs ... specialize (sic) in ... C/C++, Java, Mobile. They will have intimate knowledge of common APIs ... trained on tens of thousands of code projects pulled from the open repositories across the web (google code, github, bitbucket,...). In addition to having 'read' more orders of magnitude more code then any human could in a lifetime, they will also have rudimentary ability to extract programmer intent, and organizational patterns from code. ... The computer automatically bringing up example code snippets, suggesting references to existing functionality that could be reused.

This person (Marc DeRosa) predicted Github Copilot within a margin of one year. Incredible.


Cherry picking only the accurate prediction makes it seem like the predictor is really good!

Look at the rest of that quote:

> The human, computer pair, will also interactively suggest, confirm and fine tune specifications of mathematical properties and invariants at points in the the program.

That hasn't happened and as far as I am aware, is not even close.

> This will help the computer assistant to not only better understand the program but also to to generate real time, verification test runs using SAT technology,

Ditto.

> Interactive testing will eliminate whole classes of logic bugs, making most non ui code correct by construction.

Ditto..

So, 1 out of 4 predictions correct, which makes it worse than random chance?


Someone imagined an AI mathematician. What we've got is a parrot…


I don't know, some of that is at least starting to happen with Idris 2, though not in the exact way predicted: https://www.youtube.com/watch?v=mOtKD7ml0NU (Type Driven Development with Idris 2)

Short summary of linked video: when your type system is powerful enough you can restrict the set of possible implementations to the point that the compiler can make a decent guess as to what the program should be to satisfy the type signature.
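To give a rough flavour of the idea in a more mainstream setting (a hand-written Rust sketch, not Idris's actual interactive hole-filling): a sufficiently polymorphic signature can leave essentially only one reasonable implementation, which is what lets a compiler or assistant guess the body.

  // Parametricity at work: knowing nothing about A or B, the only way to
  // produce a B is to apply `f` to `x`, so the body is forced.
  fn apply<A, B>(f: impl Fn(A) -> B, x: A) -> B {
      f(x)
  }

  // Likewise, the only total implementation of this signature swaps the pair.
  fn swap<A, B>(pair: (A, B)) -> (B, A) {
      (pair.1, pair.0)
  }

  fn main() {
      assert_eq!(apply(|n: i32| n + 1, 41), 42);
      assert_eq!(swap((1, "two")), ("two", 1));
  }

Idris goes further by letting types depend on values, so much more of the program is pinned down, but the parametricity example above captures why richer types shrink the search space.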


Can you explain how this is worse than random chance?


It is, for people who think that the chance of meeting a dinosaur on the street is 50% (you either do or don’t)

/s


Does that account for multiverses?


They're just using a flat prior, which is maybe inappropriate.


A flat prior over a probability space with only four possible events, but in reality there are of course many more possible events from which those four were originally picked.


Yes, if we're being pedantic then they were far off.

I was just pointing out something fun.


> That hasn't happened and as far as I am aware, is not even close.

Isn't that just the Rust borrow checker?


In terms of end result, yes, but not in terms of means.


> Leveraging the strengths of the computer and human will lead to an order of magnitude improvement in programmer productivity. Interactive testing will eliminate whole classes of logic bugs, making most non ui code correct by construction.

I'd like to see evidence / experience reports backing up this part. Certainly Copilot exists, but what I've read about it is pretty mixed.


More like Brooks was the one who was right (for like 3 decades now) with his ‘No Silver Bullet’ article, in that we will not have another order-of-magnitude productivity change in programming after high-level languages became a thing.


Brooks wrote "No Silver Bullet" in 01987. The examples of high-level languages he mentions in his paper are Ada, Modula (not sure whether the 01975 version or Modula-2), Simula-67, APL, Pascal, Smalltalk, Fortran, COBOL, PL/I, and Algol, and of these he thinks Ada is the most promising. Does his thesis hold up in the face of modern programming language advances?

I am not confident that there are programs I can write in Python, JS, or OCaml in one hour that nobody can write in Ada in ten hours, and programs in those languages in ten hours that nobody can write in Ada in 100 hours. I'm even less confident that they beat Smalltalk by as much. The exception is for very small programs: you can write things in Python in 5 minutes that nobody can write in Ada in an hour.

http://canonical.org/~kragen/sw/dev3/rpneact.py is maybe one example, which I wrote over the course of about 12 hours on January 2. It's a simple interactive calculator app that includes a general-purpose numerical equation solver; from the Python standard library it gets floating-point and complex arithmetic, regular expressions, and command-line editing. I don't think it would take me even as long as 120 hours to write it in C, which is roughly the same level as Ada but more painful to debug.

Similarly, http://canonical.org/~kragen/sw/dev3/lmu.py is a very limited MUD, where multiple users can connect, interact, and collaboratively build a textual world by creating rooms and other objects and setting their descriptions: sort of like a Wiki for interactive fiction, though without much richness of interaction. I wrote this between July 18 and July 24 last year, probably taking more than 20 hours in total. Writing it in C would have been slower but surely would not have taken 200 hours.

The main thing that's happened since 01987 is that computers have gotten a lot faster and bigger, so we can get by without as much attention to efficiency, and software libraries are also enormously more powerful. It's not that much easier now to write a SQL database than it was in 01987, but it's a hell of a lot easier to link in SQLite. It's not that much easier to write a regular expression matcher, but every modern language has one in the standard library already. Rendering Telugu text properly still involves a lot of hairy cases, but whether you're in a terminal or a browser, generally all you need to do is emit some UTF-8 bytes and you're golden.


I may not really get your point: are you trying to prove or disprove the original article?

In my opinion it does hold up, since indeed we can hardly even be 2x more productive on a barebones project with different programming languages, let alone an order of magnitude.

And what you are getting at was, as far as I remember, the escape hatch mentioned by Brooks himself: that the only significant improvement in productivity will come from the ecosystem. And I think we are behind even on that, in that we unfortunately have many NIH projects doing the exact same thing across language ecosystems, with not much interoperability. E.g. the JVM, Python, and JS ecosystems have barely any shareable elements. Hopefully something like Graal’s polyglot support might change that in the future.


Well, I was hoping to disprove it, and then I looked at things I'd written recently in high-level languages, for which I had logs of how long it took me, and came to the opposite conclusion: Python might be more productive than Ada, but not by an order of magnitude. Of course someone else might be a much better programmer than I am in Python, but in all likelihood their improvement over me in lower-level languages would be even bigger.

The library ecosystem is a great bright spot: the cost of modularity has fallen to such a point that leftPad is its own NPM module. The fact that Python has a separate leftPad as the .rjust method on strings seems less significant than the reduction of duplication within each ecosystem. Historically if you wanted your library to be usable from anywhere, like D-BUS, you wrote it in C, and that's still a viable option; but Rust is shaping up to be a better alternative there.

I do think we can be more than 2× more productive in a barebones project with better programming languages, because I've been programming in assembly language this week, and it is pretty slow going. But I reluctantly came to the conclusion that Brooks was right: the big leap in raw productivity was from assembly to Algol 60, not Algol 60 to JS. (And, separately, from batch mode to interactive programming.)


Why do you write years with a preceding zero?



Availability of internet & resources like Stack Overflow can be considered a silver bullet, if you consider less proficient / junior programmers.


Does anyone actually use Copilot day to day? How much time does it actually save?


I do; it's helpful most of the time, mostly because it behaves like an intelligent text expander, and a couple of times a day I'm consciously aware of how it helped me in an even more "intelligent" way. There were also situations where it was distracting/confusing, but I can deal with that. I'm working as a web dev, with TS, JS and PHP, and in my case there is no place for great solutions that I wouldn't have thought of myself (fewer than 5 times have I used something generated from a comment); it's simply predicting what I want to do next in the line, and instead of writing 50 chars to finish it I can hit tab. Honestly, I don't want to work without it anymore.

EDIT: regarding those situations where it doesn't predict what I wanted, I feel that my brain is getting better at not focusing on that and moving on. At first it was distracting; now I subconsciously know that the suggested solution might be wrong, and I decide faster whether it's something I should accept or skip. Your tool is adapting to you and you're adapting to your tool to find the right balance :)


Perhaps controversial, but maybe that's an issue with the PL - if higher level, domain specific, more powerful abstractions were baked into the language, maybe all the boilerplate would not be necessary.

Of course, getting a new language to widespread adoption is far from trivial, but that's a separate problem.


Writing scrapers with it is also amazing: everything from the Puppeteer code to the response interface and casting to a local class model. Very time-saving indeed.


How close are you to being fully obsolete?


The correct response to becoming obsolete is to learn something new, not to fight the thing making you obsolete. Doing the latter is about as productive as yelling at the tide to not come in.


Yes, I guess it's like complaining about nuclear weapons. Best to embrace them.

If we can do it, we should do it. All technology is progress. All technology improves the human existence.


It's irrelevant whether the technology is good or bad. There's nothing you can do to stop it happening, so do you want to complain, or do you want to embrace the change and stay ahead of the curve?


I work on deep learning professionally, at the most important company in the space. I even have open source deep learning projects, most recently:

https://NN-512.com

I profit from your obsolescence. I'm making you obsolete. I know what I'm doing. I'm making you worthless. The fact that you can fail to do even a basic check into me speaks volumes.


Very impressive, but if I spent my days googling the random hex usernames of every person I replied to on hacker news I'd be obsolete even faster :)


I am the one who knocks.


Writing boring chores or repetitive code is amazing. Need to handle all cases in a switch statement in pretty much the same way but changing only one thing? Write the first one; the rest is auto-coded perfectly.


> Need to handle all cases in a switch statement in pretty much the same way but changing only one thing? Write the first one; the rest is auto-coded perfectly.

As someone who has suffered through inheriting a code base of excessive copy/paste with things done "pretty much the same way but changing only one thing", it's a nightmare to diff it all and try to refactor out the common code. In my case I had to fix a bug in their copy/paste mess and found six separate copies of a whole function's worth of code duplicated.

I suspect part of the reason is that, for the initial developer, it's a great feeling to churn out tons of code that all kind of works without having to think about how to make a block of code generic (avoiding the temptation to just pass in a global state struct, dealing with function pointers in the worst cases, etc.). But having suffered the after-effects, I implore you: PLEASE don't repeat yourself (DRY).
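For what it's worth, a minimal sketch of the kind of extraction being argued for (written in Rust with hypothetical names; the inherited code base was presumably something else): the duplicated blocks collapse into one helper parameterised by the one thing that varies, with no global state struct or raw function pointers needed.

  // Before: N near-identical copies differing only in the processing step.
  // After: one generic helper; the varying part is passed in as a closure.
  fn load_and_process<T>(path: &str, process: impl Fn(&str) -> T) -> std::io::Result<T> {
      let raw = std::fs::read_to_string(path)?;
      Ok(process(raw.as_str()))
  }

  fn main() -> std::io::Result<()> {
      // Hypothetical call sites; only the closure differs between them.
      let line_count = load_and_process("notes.txt", |s| s.lines().count())?;
      let shouted = load_and_process("notes.txt", |s| s.to_uppercase())?;
      println!("{} lines, shouted: {}", line_count, shouted);
      Ok(())
  }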


Sounds like a technical debt generator. I feel for the devs who will have to clean up the legacy from this in 10 years.


Sounds like a job creator :P


But isn't it much better to manually refactor it into a reusable function and use that repeatedly, instead of letting the AI generate all the boilerplate code and cause maintenance headaches down the road?


> boring chores or repetitive code

> manually code it into reusable function and use it repeatedly

Refactoring every place where you do x into a reusable function that does x is a boring repetitive chore.


Can you clarify whether this is from your personal experience using Copilot, i.e. that you are using it to do those things, successfully, in your routine work?


Sounds like a really limited programming language if there aren’t smarter ways around this in the code.


Depends on the task; writing e2e tests using Copilot is amazing.


I do every day. I code 5 times faster on problems several times harder.

You also get good at 'using Copilot' just like you can be 'good at googling'. So if you're not already doing it, start now.

IMO: There's literally no point coding without it; you are completely wasting your time. However, it has limits. It only helps you write code; architecture is still down to you. If it could read your whole codebase rather than just the page you are on, and if you could supply it prompts via URLs, i.e. preload it with hints from other code bases, then it would really become something else.


> There's literally no point coding without it.

Except maybe that it is still not generally available.


True. My son has been on the waiting list for several weeks now. But honestly, traditional coding is the past. Soon they might be asking for someone with 12 months of Copilot experience. I don't mind the negative downvotes from luddites. Copilot is smarter and faster than you. Accept it and learn to use it.


If copilot is smarter and faster than you, that says more about you than it does about copilot :)


If you don't think it is, that says a great deal about you too :)


I don't just think it isn't, I know it isn't. I've tried it, and watched friends try it too.

Copilot is decent at speeding up some commonly occurring boilerplate in a common language, and pretty bad at anything else.

FWIW, there was also this https://challenge.openai.com/codex/leaderboard, and Copilot's 'time' was laughably bad compared to any decent human.

Does this mean that it couldn't potentially be a useful tool? No. I think it could be, and for some projects I would probably use it myself to speed up some laborious tasks. But currently, claiming it can help with anything remotely more complex is just untrue.

If you have evidence to the contrary, I'd like to see it.


Not sure what you would take as evidence. You seem convinced. I have GitHub repos. If you like, take any app idea, in any language you want. We can both spend 2 weeks building it. Screen record yourself making it; I'll use Copilot, you don't. We can see whose is better?

edit.

- Ok, so you conceded in your sentence that it's faster and that you would use it for that reason.

So your only disagreement is that it can't help you solve harder problems?

- The average dev is hitting Stack Overflow 10x a day for that very reason, and that data is embedded in Copilot. I've literally typed the names of functions I was going to have to spend 20 minutes googling to figure out how to achieve, typed the pseudo-code comments, and boom: Copilot has done it before I even had to think. If not, I just have to change how I prompt it by feeding it a few variable names or changing my comments. As I said, you can get good at using it. If Copilot is crap, it's because of the person using it. I.e. if you are looking for a bike and you google 'chicken sandwich', well, you ain't gonna get a bike.


That sounds like an interesting idea, but to me a more convincing thing would be seeing the same person do something with + without copilot and see if it makes a big difference when used correctly.

To me 'smarter and faster' means being able to solve harder problems + accurately reason about things, etc. (And not introduce security issues, which is a whole other thing...)

If it is just a replacement for stackoverflow, it's probably useful and it could save time, but I'm not sure it's 'smarter and faster' any more than any other feature in an IDE (you could also create a hotkey or something to search SO which would probably be just as fast too).


You really notice when you don't have it anymore after using it for several weeks. Writing each line again, each painful character one at a time. I keep accidentally opening Sublime, going to type, then realising it doesn't have Copilot, and thinking ugh.

It's nice to reason about your actual problem and not the features of code.


Or trivial matters like “license compliance”.


People only seem to have heard about GitHub Copilot. But Microsoft has a similar solution (that is not as... invasive) that was first released in preview in 2018. It's called IntelliCode, available for multiple languages both for Visual Studio Code[1] and regular Visual Studio[2] (but not enabled by default last time I checked -- which was a while ago, granted).

JetBrains has their own experimental thing that is not enabled by default.

And there's also Codota and Tabnine.

Copilot is actually fairly late to the party here.

[1]: https://marketplace.visualstudio.com/items?itemName=VisualSt...

[2]: https://github.com/MicrosoftDocs/intellicode/blob/master/doc...


Well, people were making API calls to Stack Overflow at the time to pull the most popular answer based on code comments. There were several Sublime plugins. So this wasn't so hard to imagine.


FYI, specialize is the American spelling.


Yep, I'm an American. It was missing the 'D' on the end ("specialized") hence the (sic).


It's honestly not that hard to predict: with a bit of statistics and some insight into early deep learning, it's obvious that a lot of data with discernible patterns will produce good results.


What will programming look like in 2028?


2028 will be the year of Linux on the desktop, so…


We'll all be using Deno, Crystal, Nim, and Zig, and lament with great nostalgia the days of Node, Ruby, Rust, and C/++.


I understand that the value proposition of Rust is a better C++, and Deno a better Node. Crystal's "better Ruby" doesn't sound like much given Ruby is niche anyway. What is it for Nim and Zig?


Zig is a better C in many ways: memory handling, error handling, security, but most importantly it can be used together with C. No need to port your code base; just swap to Zig and continue. Of course, now you have a code base in Zig and C... :/


What makes you think Rust will be out the door? None of those languages are as fast, while also being memory safe


The “Some safe and some bold predictions” comment is almost exactly my view on how programming should evolve (functional, reactive, going toward dependent types, etc.). Interesting how in 2012 it was already so clear!

I think we mostly have gone in that direction, even if probably even slower than the (already cautious) commenter predicted.

Honest question: why are we as a community so slow at evolving a good, solid programming environment?

I get that each time a new language is introduced, porting over all the existing code + training the people is an immense task. But this is _exactly the problem_, right?

Why aren’t we able to stop for a while and sort out once and for all a solid framework for coding?

It’s not a theoretical ideal: I’m very convinced that the dumb piling up of technologies/languages/frameworks that we use now is significantly slowing down the _actual daily work_ we do of producing software. Definitely >>50% of my time as a programmer is spent on accidental complexity, that I know for sure.

It’s very practical: at this point this whole thing simply feels like very bad engineering, tbh?


There are many different types of programming. For example, there is an increasing number of people who program but do not have programming-related job titles.

- A designer might use HTML/CSS/JS directly or with "low code" tools (AKA graphical IDEs, they are still essentially programming) to implement frontends or a bit of scripting to automate stuff in their Adobe tools.

- An analyst might use Excel (a programming environment) or SQL or some hybrid tool.

- Mathematicians and scientists are increasingly programming.

- An electrical technician or engineer programs their installations.

- And so on...

There isn't one paradigm or even a set of paradigms that fits them all. Some languages are deliberately straightforward and provide minimal abstractions, other languages have strong runtime guarantees, others enable flexible, composable abstractions.

I think programming will become even more diverse in the future.


> The “Some safe and some bold predictions” comment is almost exactly my view on how programming should evolve (functional, reactive, going toward dependent types, etc.). Interesting how in 2012 it was already so clear!

Given that languages used broadly in the industry are lagging behind research by 20+ years, that's not a good prediction. It's just the way things are going.

BTW: There's still no mainstream language with full dependent typing… (I count Haskell as mainstream; Scala seems closest¹ but still a long way to go).

Actually, people are already so overwhelmed by all that really old stuff now coming to languages that some of them resort to even more basic approaches on the level of the '70s, like Go, thinking programming is otherwise "too complicated". To add to that: I don't think "the average dude" will ever use, for example, dependent types, even if they appeared in some more broadly used language.

My guess for the future is more that we will see a kind of split between a big group of people using advanced low-code-like tools for programming day-to-day things and data exploration, and a much smaller group of "experts" doing the "hard things" needed to make those tools work. On the one hand, this will bring programming further "to the masses". But on the other hand, the hard parts will not only remain, they will get even harder and much less approachable for arbitrary people.

¹ https://arxiv.org/abs/2011.07653


> Given that languages used broadly in the industry are lagging behind research by 20+ years

This is obviously true in abstract, but the real breakthrough happens when you make those concepts ergonomic for the working developer. The theory behind dependent types is well established, but I can't really write my next project in Idris, can I?

Similarly, there was a time when C was the only sensible choice to write anything non-academic, non-toy. It's not like people didn't know about OOP or functional programming back then, but it wasn't a realistic possibility.

Or parametric polymorphism, also known by its common name "generics". The concept has been around at least since the 70s; C++ templates weren't widely used before, what, 1990?

> My guess for the future is more that we will see a kind of split between a big group of people using advanced low-code-like tools for programming day-to-day things

This has arguably already happened: we call them data scientists. Many of them are technical and have some light scripting skills, but they couldn't be put to work on, say, your average backend project. Obviously this is a gross generalization (titles mean literally nothing); I'm pretty sure there exist data scientists who kick ass at coding.


> > Given that languages used broadly in the industry are lagging behind research by 20+ years

> This is obviously true in abstract, but the real breakthrough happens when you make those concepts ergonomic for the working developer.

That's of course correct. That's actually why we're lagging behind research by such a long distance. It's not only type systems: the "20-year lag behind research" seems to be a quite general phenomenon in IT. I'm not judging; it's an observation.

> This has arguably already happened, we call them data scientists.

I think this is only a facet of what I had in mind. I guess it will become more ubiquitous to use some "coding-related" tools in a lot of places! But what those people will do won't be software engineering, for sure. I was thinking more in the direction of e.g. MS PowerApps, or something on the spectrum between such a thing and Jupyter notebooks.


> Similarly, there was a time when C was the only sensible choice to write anything non-academic, non-toy. It's not like people didn't know about OOP or functional programming back then, but it wasn't a realistic possibility.

But the elephant in the room? What about LISP?

To try to answer that by myself:

My guess is that (too?) "advanced" technology (at some point in time) doesn't get any traction on the mass market. That's another reason mainstream languages and tools lag significantly behind academia, in my opinion. If something is called "academic" that implies "not pragmatic enough for use" for a lot of people, I suspect. (The LISP story has more to it, but that would be largely off-topic so I'm not going into it.)


> But the elephant in the room? What about LISP?

My input on lisp:

I think as far as "practical" programming is concerned, Lisp kind of missed the critical window. There have been three phenomena occurring simultaneously:

- Computers become so powerful that it's now justifiable to use inefficient languages. Ecosystem develops around said languages (perl, ruby, python...)

- The web becomes the primary way to ship software. The OS C API isn't something you necessarily have to keep in mind at all times.

- Mainstream languages incorporate 'academic' features.

The combination of the first two made it so that up until the turn of the century (and realistically well into the 00s) C, and to an extent C++, were first-class citizens, and everything else was at most ancillary. The most efficient Lisp compilers out there produce code that's within a factor of 3 of C++. That's amazing by today's standards, but if you're programming Pentiums (and earlier 486s, 386s) you don't really have a factor of 3 to spare.

As for the capabilities of the language itself, even around 2000 Lisp really was secret alien technology. But by the time you could actually either afford the performance penalty on the desktop or be web-based, the "inefficient" bunch was also an option, offering more expressivity than C/C++ and better libraries/batteries than Lisps.

> My guess is that (too?) "advanced" technology (at some point in time) doesn't get any traction on the mass market.

The point here is, IMO, that IT in general and programming languages in particular are very hard to innovate. PL innovations by definition don't offer more features than the status quo ante. Once you have a programming language that's sufficiently high level and with a sufficiently efficient compiler (in our timeline that was C), programming languages are effectively "feature-complete", there's never going to be a language with a feature that can't be replicated in C. In other words, the leap from ASM to C (or Fortran, Cobol, whatever, I'm using C for the sake of argument) is a 10x improvement, but everything after that has significantly diminishing returns.

What you're really doing is improving developer productivity: not every project needs C, so maybe write Python and be done in a tenth of the time, with a tenth of the bugs and without buffer overruns.

However, what the market is after is "total" productivity: if my codebase is already in C, it's a huge productivity loss if we move to $NEW_LANGUAGE, regardless of the new features, and because language innovation has diminishing returns, it becomes increasingly difficult to justify switching.

> If something is called "academic" that implies "not pragmatic enough for use" for a lot of people, I suspect.

With that said, you are correct that there is sometimes a knee-jerk reaction. See for example the unreasonable pretense that "monad" is some kind of black magic.


Your analysis is quite astute. Lisp systems became big and resource-hungry ahead of newer, more affordable, less powerful machines being able to keep up.

Lisps were developed on departmental and corporate "big iron" computers from the beginning. Initially out of necessity, because those were the only viable computers that existed. Then later out of necessity because those were the only computers on which they would run well.

Very few languages (or other software!) that were popular on expensive big iron in the 1960-1985 era transitioned into the microcomputer era with their popularity intact, or at all.

For a window of time, microcomputer enthusiasts were simply not able to make any use of the big iron software at all. Those who programmed big iron at work and micros on the side did not pass on the knowledge from the big iron side to the newcomers who only knew consumer microcomputers. They just passed on stories and folklore. You can't pass on the actual knowledge without giving people the hands-on experience. And so the microcomputer culture came up with its own now iconic software.

Today we run descendants of Unix because Unix was actually developed pretty late into that big iron era, on smaller iron hardware; like a clumsy but workable ballerina, Unix readily made the hops to workstations having a few megabytes of memory like early Suns, to microcomputers like better-equipped 386 boxes.

There are stories from the 1980s of people developing some technology using Lisp, but then crystallizing it and rewriting it in something else, like C, to actually make it run on inexpensive hardware so they could market it. CLIPS is one example of this; there are others.

I don't think that people had no cycles to spare on 286 and 386 boxes. This is not true, because even languages much slower than Lisp were used for real programming. People used programs written in BASIC on 1 MHz 8-bit micros for doing real work. By and large, most of those people had no exposure to Lisp. BASIC was slow, but it fit resource-wise. Not fitting well into memory is the deal-breaker. Performance is not always a deal-breaker.

The Ashton-Tate dBase languages were another example of slow languages, yet widely used. They ran well on business microcomputers, and carried the selling story of being domain specific languages for database programming, something tremendously useful to a business.

All that said, our thinking today shouldn't be shackled by some historical events from 1980 to 1990.


My take on this:

1. Societal issues. Microsoft wanted a Java they could control and change; C# it is, then. Google is moving away from Java to Kotlin because of disagreements with Oracle.

2. Desire for different trade-offs: fast-to-learn vs feature-full vs ease-of-use vs configurability vs portability vs speed vs safety vs developer-friendly vs user-friendly vs admin-friendly vs development speed vs program correctness.

3. Low proof-of-concept cost. If some lib involves a lot of boilerplate for my use case, it is easy to create my own improved version and feel a sense of achievement (my version might be just a wrapper at first, but the gate has opened).

4. Reducing the programming world's complexity is at the bottom of everybody's priority list.


Google moving away from Java has nothing to do with Oracle. That lawsuit was about an old Sun license of Java that Google was alleged to have violated.

Since then, Java has been completely open-sourced, with the same license as the Linux kernel, so there is nothing stopping Android from using it. The preference for Kotlin comes from the fact that Android's Java is barely at OpenJDK's Java 8 level, making syntactic sugar all the more important there.


> Google is moving away from Java to Kotlin because of disagreements with Oracle.

Java and its VM are open source, so that argument doesn't make any sense.

Also, I don't think Google would, or even could, move away from one of their primary languages.

Kotlin, on the other hand, is only a significant trend in Android development, and it will get dropped like a hot potato in favor of Dart as soon as Fuchsia arrives as Android's successor, I guess.

Kotlin has a problem: It tries to be "the better Java", but Java is picking up (slowly) all the features. The space for a significantly more powerful JVM language is already taken by Scala. So in the long run there won't be much space left for Kotlin: as soon as Java gets "more modern features", Kotlin will have a hard time competing, as likely only syntax differences will remain.


> Kotlin, on the other hand, is only a significant trend in Android development, and it will get dropped like a hot potato in favor of Dart as soon as Fuchsia arrives as Android's successor, I guess.

Do you have any idea just how much Android/Java/Kotlin code there is now?

> Kotlin has a problem: It tries to be "the better Java", but Java is picking up (slowly) all the features.

2016 called; it wants your opinion back. Kotlin is sooo much more now than a "better Java": it is a multiplatform, efficient and pragmatic language that is just pleasant to work with, unlike even modern Java.

It has first-party support for iOS, Android, desktop and JS. It has a modern multiplatform GUI framework that rivals React and Flutter.

It has countless syntactic features and built-in null safety.

Java will be forever stuck on the server and in niche products like IntelliJ.


> Do you have any idea just how much Android/Java/Kotlin code there is now?

Does Google care? Ever had a look at their graveyard?

The rest sounds just like the usual marketing yada-yada and doesn't stand up to reality…


> Does Google care? Ever had a look at their graveyard?

Please, Google graveyard meme here?

Name at least one DEVELOPMENT project that was killed by them. Even the infamous GWT is still alive on life support.

In case you're serious, it seems you don't understand or know the scale of the Android project. Android is one of Google's most important projects; it rivals Chrome and YouTube.

> The rest sounds just like the usual marketing yada-yada and doesn't stand up to reality…

Please, launch your Java application on Node.js or in a browser, or on iOS, or on Android for that matter. Or create a modern client-side application that runs on all major platforms.

And no, Codename One/Gluon/JavaFX don't count. Even the worst of the worst, Ionic, runs better than those.


> Name at least one DEVELOPMENT project that was killed by them.

Google code. Google code search.


That's why you're using a throwaway account?

Codename One works as well as Ionic when used by the right developer.


> Codename One works as well as Ionic when used by the right developer.

Oh I'm sure it is. Both of them are awfully terrible compared to even React Native/Flutter, let alone native.


It's also as good as Flutter. It's a matter of developer skill, not the tool. Flutter has a better default look and feel; that's about it. React Native isn't a write-once-run-anywhere tool. As a result it has a lot of different problems which IMO make it way worse than all of the above.


It's a waste of time to engage with a troll. It won't result in any curious discussion.


> It's also as good as Flutter.

It's not. Last time I used it, everything was super janky, and the 90s look of the apps also didn't impress me.

The DX is awful compared to Flutter's neat single binary.

> React Native isn't a write-once-run-anywhere tool. As a result it has a lot of different problems which IMO make it way worse than all of the above.

I only care about result, not semantics.


Reactive is a mistake. It's a tool for a specific job, sure, but it's too big and opinionated of an abstraction to use it everywhere. The future of programming should be reality based.

> Why aren’t we able to stop for a while and sort out once and for all a solid framework for coding?

There's never going to be one "solution" here. There will always be tradeoffs depending on the constraints of a given domain or problem space. One-size-fits-all solutions end up not being great for anything, since they have to make so many compromises.

Also, great tools are made through solving real problems. If we just went to Plato's heaven and dreamed up a "perfect" programming environment, we would end up with something which solves our problems in theory. But the issue with this is that our problems don't exist in theory; they exist in reality.


For a mere 8 year timeframe, the predictions seem rather poor. It feels like there was such a push among people to have the forward thinking ideas that they overestimated how much would change.

Looking back, the biggest changes between now and 2012 are:

* Git (and github) took over the world in the version control. Git was already the leader in 2012, but mercurial was doing ok and svn was still around to a much greater extent.

* Docker/Kubernetes and the container ecosystem. There was a guess here about app servers, but the poster seemed to be thinking more of PaaS platforms and Java app servers like Jetty. I guess you could say "serverless" is sort of in that vein, but it's far from being the majority of use cases, as the poster predicted it would be.

* Functional programming ideas became mainstream, except in Go, which is a sort of reactionary back to basics language.

Overall though:

Good predictions:

* The IDE/editor space gets a shake up, though maybe not in the way any of the specific predictions guessed (the rise of VS code)

* Machine learning gets bigger

* Apple introduces a new language to replace Objective-C

* Some sort of middle ground to the dynamic/static divide (static languages have got local type inference, dynamic languages have got optional typing)

Bad predictions:

* No-code tools are still no further along mainstream adoption than 2012

* Various predictions that Lisp/ML/Haskell get more mainstream rather than just having their most accessible ideas cherry picked.

* A new development in version control displaces git

* DSLs, DSLs everywhere. DSLs for app development, DSLs for standard cross-database NoSQL access.


> * No-code tools are still no further along mainstream adoption than 2012

In 200 years, people will still be predicting the rise of no code solutions.

If you are executing diagrams and schematics, you still have code. And the people maintaining that code are still coding. That is, they are coders. They're just working in a whole new stack that doesn't have git, diff, a variety of editors, open standards for encoding, etc.


There's no such thing as no-code, it's just a question of the properties of the coding scheme. For example C.S. Peirce proposed a turing-complete "no-code" graphical logic (aka programming) language[1] in the 19th century.

[1] http://www.jfsowa.com/pubs/egtut.pdf


true, it's mostly a matter of altitude on the abstraction ladder, and not dealing with textual notation (it's all graphs underneath anyway)


This is true, and there are also coders today whose code does not run on computers, but in people's heads, as the implementation of diagrams and schematics.


> Functional programming ideas became mainstream, except in Go, which is a sort of reactionary back to basics language.

I feel like there is also a return to basic imperative programming, with OO and functional where it makes sense.


Imperative programming will always be a thing as long as computer processors work the way they do (mutating and incrementing things).


Due to all the out of order executions and pipelines, they are quite functional at the same time - so that is not a good way to look at it.


I agree with your summary, but predictions are very hard (especially about the future).

Except for the perpetual no-code and DSL memes, whether git would dominate as much as it did was pretty much up in the air in 2012. Same with functional languages. In 2012 the mainstream was what, Java 1.6? ML/Haskell was like a breath of fresh air; it's hard to overstate how much pure OOP a la early Java sucks. The fact that some functional features would become mainstream wasn't a given back then; if anything, that is the surprising turn of events.


I've given Go lots of shit, but once generics land and are widespread, I suspect it will undergo a culture shift.


A couple small threads from back then:

What will programming look like in 2020? - https://news.ycombinator.com/item?id=4962694 - Dec 2012 (3 comments)

Ask HN: What will programming look like in 2020? - https://news.ycombinator.com/item?id=4931774 - Dec 2012 (12 comments)


> Unit testing will be built directly into languages as an intrinsic part, like compiler optimization or GC.

Rust and Zig have this, right?
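Rust does, yes: the test runner ships with the toolchain and tests live alongside the code, e.g.:

  pub fn add(a: i32, b: i32) -> i32 {
      a + b
  }

  #[cfg(test)]
  mod tests {
      use super::*;

      #[test]
      fn adds_two_numbers() {
          assert_eq!(add(2, 2), 4);
      }
  }

  // Run with `cargo test`; no external testing framework is needed.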


And Clojure!


The App Servers prediction is actually a lot more accurate than I first thought:

> You'll abstract most applications with a DSL, structured of applets or formlets operating on an abstract client API. The same app can then be 'compiled' and served to multiple mediums - as a web-page, an android app, a Chrome app, or even a desktop app.

This is effectively React with Electron, React Native, etc. Of course the author was overly optimistic about how polished and effective cross platform apps would be, but it's still the same idea. React is a UI DSL with a reactive runtime that can run on many different platforms.


Off topic, but is there a new site that replaced lambda-the-ultimate? It's such an amazing source of PL discussion... but mainly from 10 years ago.


+1. I was about to post an almost identical comment.


> At a guess, people will use something with:
> - Strong tooling and libraries
> - An accessible type system
> - Deterministic memory behaviour
> - By-default strict evaluation
> - Commercial backing
>
> Every mainstream functional language is lacking in at least one of these areas.

This user casually predicted Rust.


Accessible type system? Hm, I would think this would be more about TypeScript than Rust.


I wouldn't call a "mutability first" language which relies mostly on side-effects "functional".


Rust’s type system is very close to Haskell’s and takes a lot of inspiration from functional programming.

I’m not sure I’d call Rust mutable-first either. As for side-effects, I suspect the next leap in PL development will be an efficient algebraic-effects system with a good dev experience (or at least a trade-off so appealing it overcomes the necessary friction).


> Rust’s type system is very close to Haskell’s and takes a lot of inspiration from functional programming.

Well… No.

It misses some landmark features like HKTs, and it will likely never get them.

Inspiration? Sure. But what lang today doesn't take that?

> I’m not sure I’d call Rust mutable-first either.

So what do the affine types track? ;-)

> As for side-effects, I suspect the next leap in PL development will be an efficient algebraic-effects system with a good dev experience (or at least a trade-off so appealing it overcomes the necessary friction).

Let's make "some effect system" out of it and I'm with you.

But it likely won't be Rust getting such features in the near or midterm future (no clue about such a possibility in the far future, though).

Other languages, on the other hand, are almost there. Scala for example will get "ability tracking" real soon™ now.

Rust may seem a little bit like an FP lang at first if you're coming from one. But after writing some code you're going to realize that Rust's philosophy is quite different from an FP lang's: Rust embraces mutability and side effects at its core! It "just" tries to make them safer.


Thing is, the constraints that make mutability and side effects "safe" are pretty much indistinguishable from functional programming. You can opt out of those in Rust via "interior mutability", but that comes with a corresponding increase in complexity.
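For anyone unfamiliar, "interior mutability" refers to types like RefCell that move the borrow checks from compile time to run time; a minimal sketch of the trade-off:

  use std::cell::RefCell;

  fn main() {
      // `log` is not declared `mut`, yet its contents can be mutated
      // through a shared reference; borrow rules are enforced at runtime.
      let log = RefCell::new(Vec::new());

      log.borrow_mut().push("first entry");
      log.borrow_mut().push("second entry");

      // Holding two overlapping mutable borrows would panic at runtime
      // rather than fail to compile; that's the added complexity.
      println!("{:?}", log.borrow());
  }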


I don't think Rust can enforce referential transparency, nor does it have any focus on doing this manually. But I would say referential transparency is one of the most important properties in functional programming, if not the most distinguishing one.

Referential transparency is the one feature that makes reasoning about a program easy. You can think about referentially transparent programs purely in the substitution model. You can move referentially transparent expressions around freely as you please.

You can't think of a Rust program this way. It's inherently procedural.

Rust gets a lot of things quite right! But it's not an FP language. It's a better C. It's about shoveling bits and bytes around, as safely and efficiently as possible.
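To make the substitution-model point concrete, here is a small illustrative sketch (Rust, purely for demonstration): the pure expression can be bound to a name and duplicated without changing the result, while the effectful one cannot.

  use std::cell::Cell;

  fn square(x: i32) -> i32 { x * x }

  fn main() {
      // Referentially transparent: naming the expression and substituting
      // it back are interchangeable.
      let a = square(3) + square(3);
      let b = { let s = square(3); s + s };
      assert_eq!(a, b); // both are 18

      // Not referentially transparent: each call observes hidden state,
      // so the same substitution changes the outcome.
      let counter = Cell::new(0);
      let next = || { counter.set(counter.get() + 1); counter.get() };
      let c = next() + next();            // 1 + 2 = 3
      let d = { let n = next(); n + n };  // 3 + 3 = 6
      assert_ne!(c, d);
  }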


> Strong tooling and libraries

* Long compile times
* Too many micro-libraries

> An accessible type system

As a sibling comment mentioned, typescript covers this more appropriately.

> - Deterministic memory behaviour - By-default strict evaluation

For most practical purposes, Golang and Java cover this. Rust / C++ is great for systems stuff; I am not going to use it for application-layer stuff, with all the complexities that come with it.

> Commercial backing

Not much, really.


I find it pretty hilarious that "too many micro-libraries" is a criticism, but then in the next sentence you recommend TypeScript, which runs on Node.js, the progenitor of micro-libraries.

We’ll ignore every other bullet point containing fundamentally incorrect information also.


Didn't really say TypeScript is perfect. In terms of type system it's just more accessible than Rust.

I think Rust is a great language, but the community is the worst thing about it.


Server has apparently fallen over. Wayback Machine link:

https://web.archive.org/web/20210925211554/http://lambda-the...

P.S. these are predictions made in 2012 of what 2020 was going to be like.


No good deed goes unpunished.


I hope that made you feel good!

Seriously, someone goes to the trouble of finding the archive link because the server is having trouble (note that someone else had mentioned it too, not just me), and it gets modded down. Why? What possible justification is there for that?

Then even mentioning the ridiculous modding down gets modded down further. Why? (and no, "because those are the rules" isn't a reason... did it ever occur to you that sometimes rules are wrong?).


Complaints about downvotes are boring, and are often made redundant when the comment is upvoted again, as happened here.


The reality is programming has taken a back seat while everyone spends all their time on containerization, k8s, cloud security and keeping up with what's hot right now.


>What will programming look like in 2020?

It will be a complete shit.

http://lambda-the-ultimate.org/node/4655#comment-73750


The reply made me smile:

  So...

  Nothing changed, then.
http://lambda-the-ultimate.org/node/4655#comment-73758


“HCI-ware (and wear)

While increasing heterogeneous and concurrent architectures will certainly impact languages and frameworks (and there is much progress there already, e.g. to leverage clouds or GPUs), I'm more interested in how things like Project Glass, LEAP motion, Emotiv, and Touché might impact programming. Project Glass could provide pervasive access to eye-cams, voice, and HUD feedback. Something like that would be a suitable basis for pen and paper programming, or card-based programming, or various alternatives that involve laying out objects in the real world and clarifying with voice (perhaps non-speech voice).”

still no progress on this. Keyboard and mouse.

Minority Report is almost 20 years old.

I had high hopes for Project Soli:

https://atap.google.com/soli/


Personally I feel that Minority-Report-y predictions are foolish, because they ignore the human aspect in the name of glitz. Specifically, how tiring it is to hold your arms in the air all day.

I think the example of mail is instructive. Contrary to some "zeerust"-y predictions, we don't buy virtual stamps from the virtual store to virtually lick them before virtually walking to the virtual mailbox. Even with the vastly improved capabilities of 30-40 years in 3D technology, the concept is immediately bizarre.

In practice, we instead boiled away most of the extra bits and ended up with e-mail, which tries to minimize even the body-movement needed.


One person predicted the AI IDE assistance. They were right that it is here, although I don’t think it is very accepted yet.

No one mentioned containers or anything related.


I use tabnine every day, and it's phenomenal. It isn't Copilot, but it's good.


One comment does:

“I think you're right about virtualization, too. By 2020, hardware virtualization will be ubiquitous to the point that it's done for each application”


Predictably…’s prediction is basically Rust?


> how the hardware will evolve: several (4-16) heterogeneous(not all with the same performance, some with specialized function unit) CPUs

Isn’t this basically the architecture of the Apple M1 chips? They’ve got efficient cores and high-power cores.


ARM big.LITTLE (which the M1 is at least somewhat based on) was announced the year before the prediction.


It is interesting to read some of these comments as an indication of what people actually hoped for, and then compare it to the reality of what they actually got.


Programming changes at human speed because it requires programmers to learn.

Maybe in 10 more years we will all be writing in ML-family languages!


Rust in 2021 is basically the functional programming language this commenter was predicting:

> I predict that functional programming will continue to gain ground as people discover the benefits of immutability and easy parallelism—but the functional language will not be Haskell, nor Scala, nor Clojure. At a guess, people will use something with:

>Strong tooling and libraries

>An accessible type system

>Deterministic memory behaviour

>By-default strict evaluation

>Commercial backing

>Every mainstream functional language is lacking in at least one of these areas.


Rust is basically C++++ so I doubt it's what that commenter had in mind


One of the replies to the comment was that C++11 was what the commenter wanted, whereupon the commenter mentioned ways it did not meet the criteria.

If you follow C++ development, you can see a huge effort to make it support more functional paradigms (see for example the efforts around structural pattern matching). However, its backward-compatibility requirements have limited it a lot. So even if Rust were just C++++ without the backward-compatibility baggage, it could easily reach these ideals.


More like (C)++++ rather than (C++)++.


Eh

To me that's C++ snobbery talking. Long compile times, confusing type system, RAII as guiding design light

Just admit that it's basically C++


C++-C


Without a sequence point the result of that expression is undefined behaviour. Is that what you're trying to say about Rust?


Except that it’s not really “gaining ground”, all things considered Rust is not that popular.


It seems like it's gotten some use lately. Isn't AWS a big user of it now, along with a handful of mid-size tech companies?


To me 2021 feels like the year Rust went mainstream. There is use in major companies like Amazon, Google and Microsoft, lots of second tier tech companies like Cloudflare, System76, Dropbox are big on it, serious work in including it in the Linux kernel, I've gotten multiple job contacts based on my listed Rust experience (sadly all from the crypto startup market, not really interested in that industry), and there are so many smaller projects that HN commenters are getting frustrated by them. Really the one gap in saying Rust has "made it" is the lack of a really big open source project like Go has Docker/Kubernetes that people can point to and show "see, they used it and got big" (Firefox not counted, since it's only ~10% Rust and added it after they were big).


But more and more people are using things built in Rust, which is what I'd call gaining ground. I was playing around in Deno; yes, it's JS, but it was built in Rust. In some respects, that's the true test of a low-level language: things built in it. C isn't "popular" in that sense, but you'd never say it's unpopular, because it's what everything is built on.


A lot more people get a paycheck every month writing C than Rust. C is an incredibly relevant and popular language.


Link seems busted for me...


I read a lot of the comments there; basically the devs back then did not know it, but they were hoping for Docker and its applications, for Rust, and some for Node =)


Almost exclusively JavaScript frameworks.


Nobody predicted GDPR or the need for multicloud.

Wonder what the big new grunt work for 2030 will be.


Well, GDPR is almost 1:1 identical to the regulations that have been in place "since forever" in Germany.

What actually changed is that it's now more enforceable. Before GDPR there were all those laws, but nobody really cared, as there weren't proper punishments in place.

My idea for 2030: We're going to tackle all that web-development legacy as we leave the time-sharing mainframe model again (actually for the same reasons as last time… ;-)). But maybe I'm too optimistic, and it'll be 2040 by then.


I had a play around with Logo the other day. In the end I decided not to dive too deeply into it, but I came away thinking that there is basically nothing wrong with the language. It was invented in 1967. 1967! Yet it has everything "there". I think there's even a university professor who teaches Logo to undergraduates, too, claiming it is much better than Pascal.

Pascal. That could be used as a systems programming language. Maybe it's not perfect - C++ wins when it comes to resource management - but that doesn't mean it's unfixable. Just fix the bits you don't like. Sorted.

I could go on: Dylan, Ada, even Basic. They all "work". We've been inventing language after language after language, and yet all the key ideas were already there.

Now, it's true, many are lacking the kinds of libraries we want. And most of these languages don't receive the love they need to smooth over the rough edges that have programmers reaching for the more popular languages.

Let us now consider programming languages that can be programmed graphically. These kinds of tools get announced with great hype, only to be ignored when everyone realises that it's more fuss than it's worth.

And yet, it's maybe not a complete wash. NodeRed looked practical, and Scratch seemed to have a place. In principle, the whole idea that a program can be viewed as a schematic has appeal.

Then there's the "MX" tool for STM32 MCUs (microcontrollers). It can be used to configure the pins and peripherals of your MCU. I hated it at first, but now I'm seeing its advantage in simplifying configuration, even if it does spew out a ton of stuff.

One thing about MX is that it "understands" the chip, so it knows about conflicts of pins and the limitations of the chip. The key thing here is that it's a metatool, a kind of Domain Specific "Language" (although it's actually graphical). It can achieve something that general-purpose languages can't. There's no magic here, of course, the MX only understands the chips because it has been programmed to.

MX is completely useless for making things like database applications, for example.

So DSLs /can/ work - maybe - but they tend to suffer from the same problems as graphical programming tools. They are also niche products, so they don't tend to get much interest. They also tend to suffer from the problem of "walled gardens" - whereby anticipated things are trivial, but unanticipated things are impossible.

One of my own ideas is a kind of "compilation language". It would be a language, possibly a subset of a larger language, in which you could give developers something that "understands" their problem domain. I'm thinking something along the lines of C++'s notion of "constexpr all the things". Perhaps we could push this idea so far that we could build a set of libraries that the compiler itself executes and can use to validate other developers' code.
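Rust's const evaluation already gives a rough taste of this idea (offered only as an analogy, not the C++ mechanism the comment mentions; the names below are hypothetical): library code that the compiler itself executes, rejecting invalid developer input before anything is built.

  // A small "domain-aware" check the compiler runs during compilation.
  const fn valid_port(port: u32) -> bool {
      port >= 1024 && port <= 65535
  }

  // Hypothetical configuration value validated at compile time:
  // change it to 80 and the crate simply refuses to build.
  const SERVER_PORT: u32 = 8080;
  const _: () = assert!(valid_port(SERVER_PORT), "SERVER_PORT out of range");

  fn main() {
      println!("listening on port {}", SERVER_PORT);
  }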


I also like MX but only use it to handle constraints and find a suitable use of a chip's resources. After I have all the resources planned, it's CMSIS all the way; not that complex.


2012


Added. Thanks!



