Hacker News

No, there's definitely an underlying substrate of significant commonality between the various programming languages and technologies. If you're at 10 years in and you still feel like a beginner, you're doing something wrong.

Obviously I can't pick up a brand new technology and instantly expect to be a wizard, but I do expect that I can pick up a new technology and be functioning at a high level in a week or two, tops, because it's almost certainly just a respelling/reskinning of some technology I've used before.

(The whole "young guys who know way more than their old-fogey elders" was, in my opinion, an isolated one-time event when we transitioned from mainframe tech to desktop tech. Despite its recurrence on HN, I think "age discrimination" is naturally receding and will continue to fade as the people on this side of that transition continue to age, and skill up.)




If you pick up a new tech and it's "just a respelling/reskinning of some technology I've used before" you are doing something very silly or are not using new tech at all. If it's basically the same there is no reason to switch.


Don't tell me... tell the people who keep pushing old ideas in new guises on me!

"Oh, look, the JS community has discovered $TECHNIQUE. Ah, yes, I remember playing with this in 2003. Do any of them remember the ways in which it went bad and never took off, or are they just spouting hype? Ah, I see they've opted for hype. Well, this ends predictably."

Not that $TECHNIQUE is necessarily bad, mind you, it's just that none of these ideas are new and it would be nice to see one of these frameworks pop out every so often written by someone who up-front acknowledges the previous weaknesses of $TECHNIQUE and tries to address them, even if only through user education, somehow.

(And while no language community is immune to this, the last two years of JS have been noticeably worse about this than any other community I know.)


I've been hearing for years about how the javascript community is constantly rehashing things from the mainframe world or the desktop world. I haven't really been programming long enough to see it happen, though (my first programming book was along the lines of "how to AJAX").

Could you be so kind as to mention some examples of javascript libraries or techniques that are recycling failed concepts from past decades?


"Failed" is too strong. Many of them are good ideas for certain use cases, but also have certain well-known problems, which is frankly true of everything.

Event-based programming was not discovered by Node. It was the dominant paradigm for decades, plural, on the desktop, and still is how all GUIs work, on all platforms, current and past. I've got a big blog post on deck about this one, actually, so I'll save the well-known pitfalls for that.
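To make "event-based" concrete, here's a sketch of the register-handlers-then-dispatch pattern that desktop GUI toolkits and Node-style servers share. All names here are illustrative, not any real toolkit's API:

```javascript
// Minimal event dispatcher: register handlers for named events, then a
// loop (or the runtime) invokes them when events arrive.
class Emitter {
  constructor() { this.handlers = {}; }
  on(event, fn) {
    (this.handlers[event] = this.handlers[event] || []).push(fn);
  }
  emit(event, payload) {
    for (const fn of this.handlers[event] || []) fn(payload);
  }
}

const button = new Emitter();
const log = [];
button.on("click", (e) => log.push(`clicked at ${e.x},${e.y}`));
button.emit("click", { x: 10, y: 20 });
// log is now ["clicked at 10,20"]
```

The structure is identical whether "event" means a mouse click on a desktop widget or a request arriving at a socket; only the event source changes.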

All of the async stuff that they've come up with has been tried before, and none of it is a miracle cure, though some of them are certainly better than callback hell. Still, many of them have well-known problems with composability and program flow comprehension. This has been a rich source of people overestimating how green the grass is on the other side; for instance, Python has everything ES6 is going to have anytime soon, and it's still fairly clunky in many of these cases, IMHO. Not to mention that I've been outright stunned to see people in 2015 rehashing claims that cooperative multitasking is superior to preemptive multitasking because it gives you "more control" over the performance, which is roughly up there with seeing someone actively advocate for spaghetti programming because it gives you "more control".
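The composability difference in miniature... the same two hypothetical async steps written as nested callbacks and then as async/await (readUser/readOrders are made-up names, not a real API):

```javascript
// Callback style: each step nests inside the last, and error handling
// must be repeated at every level.
function handle(err) { console.error(err); }
function readUser(id, cb) { setTimeout(() => cb(null, { id }), 0); }
function readOrders(user, cb) { setTimeout(() => cb(null, [user.id]), 0); }

readUser(1, (err, user) => {
  if (err) return handle(err);
  readOrders(user, (err, orders) => {
    if (err) return handle(err);
    // ...deeper nesting for each further step
  });
});

// Promise/async style: same steps, linear flow, one place to catch errors.
const readUserP = (id) => new Promise((res) => setTimeout(() => res({ id }), 0));
const readOrdersP = (user) => Promise.resolve([user.id]);

async function main() {
  const user = await readUserP(1);
  const orders = await readOrdersP(user);
  return orders;
}
```

Note that the await version still runs cooperatively on one event loop; it flattens the syntax without changing the underlying scheduling model the comment is talking about.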

Reactive programming dates back to VisiCalc (note the fourth paragraph of the wiki page cites spreadsheets as an example); it has well-known problems with cyclic dependencies, which are shockingly easy to introduce accidentally. It strikes me as likely that the nature of web pages will tend to contain this problem, unless you're literally building a spreadsheet; this strikes me as one of the better tech fits. (The more you partition your problems structurally with "pages" and discrete "submissions" to the server, the less likely you are to spaghetti tangle your data flows accidentally, the way a single-spreadsheet application can so easily. The structure induced by the web is in this case harnessed to your advantage.) I wouldn't try to build a game with reactive programming, though.
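A toy illustration of that cyclic-dependency pitfall, using made-up spreadsheet-style cells (not any real reactive library):

```javascript
// Cells hold a value; dependent cells recompute when a source changes.
function cell(initial) {
  return { value: initial, deps: [], compute: null };
}
function define(c, sources, fn) {
  c.compute = fn;
  for (const s of sources) s.deps.push(c); // s changing recomputes c
}
function set(c, v, seen = new Set()) {
  if (seen.has(c)) throw new Error("cyclic dependency detected");
  seen.add(c);
  c.value = v;
  for (const dep of c.deps) set(dep, dep.compute(), seen);
}

const a = cell(1), b = cell(0);
define(b, [a], () => a.value + 1);
set(a, 2);                          // b recomputes to 3: fine so far
// Now close the loop without noticing: a also depends on b...
define(a, [b], () => b.value * 2);
// set(a, 5) would now recurse a -> b -> a and throw
```

Each `define` looks locally reasonable; the cycle only exists in the combination, which is exactly why it's so easy to introduce by accident in a large sheet of bindings.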

The idea of using "binding" in your UI, as in Angular, was actually tried by multiple GUI technologies, and generally was hard to work with at scale (it made easy things easy and medium things very hard). I'm pretty sure Microsoft had it at least twice on the desktop and once in ASP.NET; none of them stuck. The same effect may save you here, though... GUIs are generally in "pages" as well. Less sure about that. Adding binding to a pre-existing language can also cause the inner-platform effect, where you have to embed a full language for expressions inside the original programming language [1]. But this is one of those cases where advances may make something more practical than it used to be... dynamic scripting languages require a lot less work to make that work than in the olden days, where the fact that you were literally writing a new inner language really sucked (i.e., lots of new bugs the outer language didn't have).
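A minimal sketch of that inner-platform effect... once a binding layer accepts expression strings, it is hosting a second little language inside the first. This is a hypothetical evaluator, not Angular's actual implementation, and it cheats by using Function(), which real frameworks avoid for security reasons:

```javascript
// Evaluate a template expression string against a scope object.
// A real framework must parse, scope-resolve, and sandbox this itself:
// a whole new "inner language" with its own bugs and error messages.
function evaluate(expr, scope) {
  return Function(...Object.keys(scope), `return (${expr});`)(
    ...Object.values(scope)
  );
}

const scope = { user: { name: "Ada" }, count: 2 };
evaluate("user.name", scope);  // "Ada"
evaluate("count + 1", scope);  // 3
```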

Going to a bit of a wider range, NoSQL databases preceded SQL databases, which are called SQL databases precisely because there were databases, then there were SQL databases. Non-SQL databases had problems with being non-standard and causing your application to be too intimately tied to one of the very pieces of the tech stack most likely to fail one of your requirements and need replacing. That said, I will also point out that SQL databases really were in some sense too successful, and they should never have been the "only" choice. (But too many underinformed people still read too much hype and flee SQL databases when they really shouldn't.)

And I'll end my message with a clear restatement of my point, so it's both the beginning and the end: The point is very much NOT that any of these things are "bad" or "failed"... the point is that they are not new, and it would be advantageous to look back at our historical experiences with the tech where possible to learn what the pitfalls are. This is something that both the people writing the techs ought to be doing, and even if they do, the people using the techs really ought to as well, especially when it comes time to decide which tech to use.

[1]: https://docs.angularjs.org/guide/expression


To expand on the database point, there is a great paper by Stonebraker and Hellerstein covering the lessons learned and forgotten over the last 35 years of database research - http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.113....


Thanks. I agree that people sometimes present these things as more "shiny and new" than they are.


This was a very pleasant read, thank you!


Everything comes down to data structures and algorithms. It's just a matter of how much syntactic sugar you want on top of it. Image processing? Machine Learning? Real time communications? Functional programming? HTML/CSS/Javascript? Meteor? NoSQL?

It's all the same thing under the hood, just better syntax and tooling. Data structures and algorithms.


jerf is right.. you have a mental model of how a programming language works, and a new language is just changing the syntax used to represent those same concepts.

Does that mean there's no reason to switch? No.. since some languages are better at representing some ideas; some languages have better abstractions for certain ideas; etc.

I think starting with a lower level language helps with this way of thinking. If you learn everything about C (for example), and later learn a higher level language, it's easy to think of how you would implement a certain feature of the higher level language.


> a new language is just changing the syntax used to represent those same concepts.

I can think of two counter examples for this.

First, Lisp with its meta-programming is completely unlike any other language that doesn't support such concepts. No programming I did (in C-like languages) ever prepared me for something like Lisp.

Secondly, Haskell with laziness, immutability and a powerful type system is completely unlike any other mainstream language out there. You would struggle to even begin to express your programs in Haskell if you are not familiar enough with it. Imperative code and Haskell code are almost always completely different. If you have some experience with imperative languages, most of that experience would translate over to other imperative languages. However, an extremely small portion of my imperative language experience translated over to Haskell.


> I think starting with a lower level language helps with this way of thinking. If you learn everything about C (for example), and later learn a higher level language, it's easy to think of how you would implement a certain feature of the higher level language.

I cannot agree with this more. A lot of people seem to think that a "better" language to learn programming with is one that is "easier" or "more forgiving", but everyone I know who started with C became excellent programmers whereas ability among the group who started with something else is somewhat more hit or miss.


Yes.. I think it's because C forces you to think about things you would never have to think about in a higher lang.

For example, I know exactly how garbage collection works.. since I once had to write a GC for a project. So when I use a higher lang, that part isn't magic... it's just something someone else already wrote for me.

Whereas, if you started with a higher level lang, you could get by without ever learning how a GC works. Yes, you could dive into the details of your language, but there's no requirement for you to do it.
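For the curious, a toy mark-and-sweep pass over a made-up object graph, roughly the kind of exercise described above (illustrative only; real collectors walk actual stack and heap memory):

```javascript
// A "heap" of objects with explicit child pointers, plus a root set.
function makeHeap() { return { objects: new Set(), roots: new Set() }; }
function alloc(heap, children = []) {
  const obj = { children, marked: false };
  heap.objects.add(obj);
  return obj;
}
function collect(heap) {
  // Mark: everything reachable from the roots survives.
  const stack = [...heap.roots];
  while (stack.length) {
    const obj = stack.pop();
    if (obj.marked) continue;
    obj.marked = true;
    stack.push(...obj.children);
  }
  // Sweep: unmarked objects are garbage; reset marks for survivors.
  for (const obj of [...heap.objects]) {
    if (obj.marked) obj.marked = false;
    else heap.objects.delete(obj);
  }
}

const heap = makeHeap();
const a = alloc(heap);
const b = alloc(heap, [a]);   // b points to a
const orphan = alloc(heap);   // reachable from nowhere
heap.roots.add(b);
collect(heap);
// heap.objects now contains b and a; orphan was swept
```

Once you've written even this much by hand, "the GC will handle it" in a higher-level language stops being magic and starts being a known cost.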

And I think that explains what you've noticed... it's hit or miss because those who chose to dive into the details of their lang eventually became excellent developers...


> everyone I know who started with C became excellent programmers whereas ability among the group who started with something else is somewhat more hit or miss.

How much of this is survivorship bias?



