Retiring Python as a Teaching Language (dadgum.com)
280 points by xngzng on Jan 21, 2015 | hide | past | favorite | 235 comments



When I took an introduction to Computer Science in college, it was taught in Racket. Over winter break, I then learned Python, and I was baffled -- why would any curriculum not start with Python? Python was easy, expressive, and allowed the user to get lots done with little effort. It was great at motivating programming by showing its use and power. It seemed like an ideal introductory language.

Over the last few years, however, I've back-tracked on this opinion: if you really intend to teach, not just motivate, then Python is a poor choice. A language like Racket is great for teaching because it lends itself to exploring so many core concepts: complexity of algorithms, functional programming techniques, recursion, data structures, etc.

A language like C (the next course I was to take in the introductory sequence) is also an excellent language because it forces the user to understand programming at the systems-level.

I don't really think Python is very good for illustrating either the theoretical or the systems-level CS concepts. It's a nice language for motivating programming and showing off how much you can do with little serious investment, but it doesn't really motivate learning Computer Science, i.e. learning Programming on a 'deeper' level.

Thus, I don't really think that learning Python offers solid programming foundations. Consequently, if you really aim to teach, not just to motivate, then use languages that bring the student closer to core CS concepts.


> Thus, I don't really think that learning Python offers solid programming foundations.

I agree. But there is a huge class of people that this argument tends to miss: all the people who learn programming as a tool rather than a way of life, like scientists or engineers from other departments. For them, it's ideal to learn just one language, and it's OK if they don't completely understand CS. They're better served by a language with a wide range of practical applications.

So unless you are absolutely sure that you're going to learn CS, I would still suggest Python as a first (and possibly only) language.

I would also say that building a GUI in something like PyQt is really easy, but the documentation and tutorials are lacking.


Yes, that's a good point. Some people might not actually need to cultivate a deep understanding of CS -- they just need a tool to get the job done. Not everyone who uses a hammer needs to take carpentry lessons. In that sense, Python can be a very appropriate choice.


> Some people might not actually need to cultivate a deep understanding of CS

Most people :)


Some people who do computer programming might not need CS was the point I believe.


Exactly, for the others there's Software Carpentry: http://software-carpentry.org/lessons.html


This is a really good point.

I am not a programmer at all, but did learn a little bit of ASP.NET at university. I never use it anymore, but I ended up teaching myself PHP.

I use PHP regularly, not to build anything grand - but it gets me by. I can write little scripts like working out who's unfollowed me on Twitter, small information systems, reminder applications, hack Wordpress plugins, etc.

I wish I had been taught Python at uni right out of the blocks. I've tried to learn it a few times, but ended up forgetting to do lessons and just going back to the crutch that is PHP.

The theory of CS is way beyond me at this stage of life, but a little bit of programming knowledge is a very useful thing (even if it is only PHP).


If you can understand the core ideas of data structures and how to manipulate them (arrays, loops, conditionals, objects, etc.), and can put that understanding into practice to solve a problem, you really are 90% of the way there.

Learning another language is easy, just try it :) Also a language like Python really is a breath of fresh air if your only experience is PHP.


It's important to ask "teach what?" There are a lot of foundations that simply aren't going to be relevant to large subsets of students. Teaching C, for instance, is nearly useless if you're trying to convey FP concepts or object-orientation.

I certainly support including C as a core part of any university's Computer Science curriculum. But such a curriculum should include as many languages as possible, each deliberately picked for what it can emphasize and convey. For a Computer Science curriculum, any discussion about language choice that doesn't end in at least three picks is, to me, a failure.

But there is a lot of learning to program that happens outside of university settings. Students who have no explicit professor or TA to talk with need wider community support. They need more basic guidance. In these cases, a language like C would do poorly: there are too many systems-level concepts that need to be understood before it stops feeling like magic. For this, Python is far superior to C, because it makes understanding what you're doing far more accessible far sooner.

Whether or not Javascript is a better choice for this than Python is something I'm undecided on. I can see the merits offered in the OP, but I'm not convinced they outweigh the faults in JS. The newbies I give recommendations to generally have an interest in going into data analysis. I think Python is the better choice for that, as a language that can form a bridge between "whoa, hello world" to, say, playing with scikit, and if they end up going in a different direction, they're unlikely to have picked up any particularly bad habits. Javascript? I am not convinced.

(This sort of stopped being a response to you in particular and turned into a response to the OP, but it'd be confusing to move it now. So... whatever.)


> Teaching C, for instance, is nearly useless if you're trying to convey FP concepts or object-orientation.

C is superb for teaching how to manually implement polymorphic behavior in the absence of object-oriented features. Using it to demonstrate how (class-based) OO languages work should be standard in any course that has CS in its title. (Using it to demonstrate how to implement prototype-based inheritance would be an advanced course).


> C is superb for teaching how to manually implement polymorphic behavior in the absence of object-oriented features.

Building a complex language from scratch is fine, but doing it by extending C isn't the best way.

I happen to like C. I'm good at C. I understand the C mindset, to the extent it has one, and it isn't especially conducive to OO. (It's absolutely horrible with FP, because the fact you have to manage memory manually does terrible things to function composition.) Which would be fine, if C were actually a low-level language. It isn't.

C gives you a simplified, high-level machine model. It isn't that close to the machine; if it were, you'd have access to things like SIMD hardware and everything else C can't give you because it would break portability.

(OK, you could possibly have access to them if you used C to implement an interpreter for a virtual machine. But that kind of defeats the purpose.)

So sure, implementing a non-trivial programming language is a fine goal; however, I'd do it by generating assembly language, because doing it in terms of C forces the programmer to adopt C's limitations (every function has a single return value because there's no real stack access; there's no access to processor flags; no access to parallelism of any kind; etc. etc. etc.) for no real benefit in that context. (School projects don't have to be portable.)

Edited to add: And if you're writing a compiler, C isn't especially interesting. Haskell and parser combinators are interesting; Common Lisp and macros are interesting; Python is, again, not especially interesting, but at least with Python you're debugging errors related to the problem domain, as opposed to errors related to the intricacies of manual memory management.

(Edited for comprehensibility as well.)
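To make the "problem-domain errors" contrast concrete, here's a minimal sketch (the toy grammar and all names are invented for illustration) of the kind of small language-implementation exercise under discussion, done in Python: a recursive-descent evaluator for arithmetic, where a mistake surfaces as a `SyntaxError` about the input rather than a memory bug.

```python
import re

TOKEN = re.compile(r"\s*(\d+|[()+*])")

def tokenize(text):
    # Split "1+2*(3+4)" into tokens; anything unrecognized is a
    # problem-domain error, reported as such.
    tokens, pos = [], 0
    while pos < len(text):
        m = TOKEN.match(text, pos)
        if not m:
            raise SyntaxError(f"bad character at {pos}: {text[pos]!r}")
        tokens.append(m.group(1))
        pos = m.end()
    return tokens

def parse_expr(tokens):
    # expr := term ('+' term)*
    value = parse_term(tokens)
    while tokens and tokens[0] == "+":
        tokens.pop(0)
        value += parse_term(tokens)
    return value

def parse_term(tokens):
    # term := atom ('*' atom)*
    value = parse_atom(tokens)
    while tokens and tokens[0] == "*":
        tokens.pop(0)
        value *= parse_atom(tokens)
    return value

def parse_atom(tokens):
    # atom := number | '(' expr ')'
    tok = tokens.pop(0)
    if tok == "(":
        value = parse_expr(tokens)
        tokens.pop(0)  # consume the closing ')'
        return value
    return int(tok)

def evaluate(text):
    return parse_expr(tokenize(text))
```

Every failure mode here is a parse error about the expression being parsed; none of the student's debugging time goes to dangling pointers or leaks.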


> Using it to demonstrate how (class-based) OO languages work should be standard in any course that has CS in its title.

I'm not sure if you meant to put it that way, but that is a horrible standard for "any course that has CS in its title". I would not like to walk into a course on, say, intro to AI, and have a session demonstrating how class-based OO languages work using C.

For the general thrust of your point, cbd1984 pretty much covered everything I would have said in response.


If "programming foundations" must include memory management etc., you're certainly correct. However, there's a reason they use Python for intro classes at, e.g., MIT[0]: the language gets out of the way so you can focus on CS basics. Once you get control flow and the like, you can start to explore complexity et al.

[0] http://ocw.mit.edu/courses/electrical-engineering-and-comput...


No, Scheme was that language. MIT now uses Python because, when the dot-com crash cratered enrollment (it dropped by more than half after decades of being steady and high), the EECS department and, I gather, some higher-ups panicked. They also made the 2 new mandatory introductory courses much more EE- and "gadget"-heavy.

(Another input might have been the school having committed an insane quarter billion dollars to build a "showpiece" (but not very functional) building for CS research.)


Teaching is made of two components: access to knowledge, and motivation. Today, access to knowledge is almost entirely irrelevant: pretty much all knowledge is freely accessible from anywhere, anytime. You just need the motivation to look it up. Teaching yourself can be easy, if only you have the drive to keep at it.

That wasn't the case in the past. In the Middle Ages, before the printing press, teaching was 99% passing on knowledge, and motivation was assumed to be a given. Today, in the post-internet age, access to knowledge is a given, and teaching is 99% motivation.


Honestly, no. If you've got no overview, you'll not "surf the internet" but drown in the irrelevant or outdated stuff which exists in abundance.


I think there are other components that are at least as important. You need to know what to learn and be able to organise your time properly. A teacher sets a roadmap and intermediary objectives.


> pretty much all knowledge is freely accessible from anywhere, anytime.

If only that were true.

http://data.worldbank.org/indicator/IT.NET.USER.P2/countries...

It's getting better every year but we're definitely not there yet.


Guided presentation of knowledge. Not access to knowledge.

Admittedly a lot of uni teaching is just a data dump. But the data dump at least tells which topics to investigate, and regular reviews tell you when you're not getting it.

Without that you may never know why - say - compiler design is important in CS, and not just to compiler writers.


That's the approach Berkeley CS takes (and a lot of other CS programs do too, but I can only speak to UCB's coursework). The first CS course teaches fundamentals of computer science, using (mainly) Python, but also Scheme and a declarative language to explore the concepts. The subsequent coursework exploring data structures/algorithms uses Java, and then systems-level architecture uses C and MIPS.

Every professor is extremely careful to make sure nobody leaves the course believing "this was a Python course" or "this course taught me Java"; rather, they emphasize the topics covered, using whichever language as the tool for exploration. Python is a great choice in this regard BECAUSE of how rapidly prototyping happens, perfect for a course built for exploration.


Scheme is excellent, but Python only lacks a couple features that can mostly be bolted on or used from a different interpreter (tail calls, Stackless, etc).

Some languages make exploring certain concepts easier, but the lack of that feature isn't a hard barrier.

I think learning efficiency is an exponential function of the latency between hypothesis and result. Python just happens to have great tooling to minimize that.


> A language like Racket is great for teaching because it lends itself to exploring so many core concepts: complexity of algorithms, functional programming techniques, recursion, data structures, etc. ...

>I don't really think Python is very good for illustrating either the theoretical or the systems-level CS concepts. It's a nice language for motivating programming and showing off how much you can do with little serious investment, but it doesn't really motivate learning Computer Science, i.e. learning Programming on a 'deeper' level.

Could you explain what you mean by Racket lending itself to algorithm complexity, recursion, etc better than Python? I really don't see why one or the other is more conducive in this respect.


Racket is a descendant of Scheme, and shares its roots as a great playground for fundamental functional programming concepts, while also allowing for impure approaches as well, making it a great chance to compare and contrast styles. It's also tail-call optimized just as Scheme is, so recursion is generally safe.

The documentation is also almost obscenely robust, the teaching languages provide a great sandbox for easing into various concepts, and the learning material in How to Design Programs is more or less a complete first-year zero-programming-required CS text.


Racket has hygienic macros.


That's an assumption that tends to divert the focus to something else, like math, for example (a lot of "intros" to programming are really math with a computer).

My education was FAR different. I started with FoxPro 2.6 (DOS) and Access. By the end of the semester, we had a FULLY finished application: menus, forms, small procedures, a database, reports, with colors and a very crude notion of custom controls. From the Access side, we got an idea of how a Windows application was built.

This was NOT at the university. The university teaching was useless in comparison (I was the one teaching the OO classes, because the teacher didn't have a clue about them. And that was the 2nd semester).

And the instruction in algorithms was terrible. And so on.

---- The point of this rant?

Good, focused, UNDIVERTED teaching matters far more than the choice of language. I have done some teaching, and know a dozen languages. I believe Python/Pascal are far better to start with (probably Pascal, if we want a C-like understanding without the baggage of slow tools, weird syntax, and the craziness of the C family, but well, that's my thing), and I don't see why it would be problematic to teach any of the fundamental concepts of programming with either.

The key thing, apart from the quality of the teaching, is the focus. If the goal is to teach "programming" in the general sense, Python/Pascal are better. C, C++, Haskell, and Java require a lot of diversion of attention (setting up and maintaining a project, writing makefiles, wait... wait... wait, "monads"?, etc.).

In contrast, if the goal is to learn how "a computer works", then C/C++/Pascal/Ada could be great. Again, it's about the focus.


I am really happy that I took the time to teach myself Racket as a supplement to (really, more importantly than) my introductory programming classes in Java. I was much better equipped to design programs well because of it.


One of the first steps in teaching is motivation.


Beginners who aren't in a formal class setting (and his description more closely matches mentoring someone outside the classroom, like a son or daughter) have not yet committed to learning things from first principles the way a university student has. So for these beginners there must be some kind of reward or immediate application for their efforts if we expect them to continue past the early-burnout stage. This is what the post speaks to.


Many university students aren't locked in either, and so are still in need of motivation.


Home computers used to boot to a BASIC interpreter. To do anything you needed to use a little bit of BASIC.

Looking at HN threads we see many people who claim this one thing created very many programmers.

But back when it was happening we had teachers giving remedial classes to force BASIC users to "unlearn" bad habits. (There's probably a "BASIC considered harmful" somewhere).

What language would you recommend for 8 year olds?


> (There's probably a "BASIC considered harmful" somewhere).

Edsger Dijkstra: "It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration."

I do not subscribe to this opinion. I think BASIC (at least the flavor Dijkstra was talking about) does encourage bad programming habits, but everything can be remedied with exposure to better languages and practices.

I actually agree with you: BASIC helped make a lot of people interested in programming. I know GW Basic, and later Quick Basic, did this for me.


I agree with the article that JavaScript is a good choice. Nothing to install. No way (or there should be no way) to accidentally wipe the disk. Easy to share your code.

Now, JavaScript certainly has some ugly parts, but the difference between it and the old-school BASICs is that it's possible (with care) to avoid most of the ugly parts in JavaScript. That wasn't possible with BASIC.


I totally agree, but when they want to use the disk it requires Node.js or similar, and then you can wipe the disk again. But it's easy to start safe.


If you just want to persist data between sessions, you can do quite a bit with HTML5 localStorage.


> A language like Racket is great for teaching because it lends itself to exploring so many core concepts: complexity of algorithms, functional programming techniques, recursion, data structures, etc.

You can explore all of those in Python, even though some may not be idiomatic in Python (many FP approaches), and some may not be suitable for non-trivial use in Python (recursion). And, in fact, there are plenty of introductory CS courses that do use Python to explore all of these.
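The recursion caveat is easy to demonstrate: CPython has no tail-call elimination, so even a tail-recursive function grows the call stack until it hits the interpreter's recursion limit (function names here are illustrative).

```python
import sys

def count_down(n):
    # Tail-recursive in form, but CPython still adds a stack frame per call.
    if n == 0:
        return "done"
    return count_down(n - 1)

def count_down_loop(n):
    # The idiomatic Python rewrite: a loop, which uses constant stack space.
    while n > 0:
        n -= 1
    return "done"

limit = sys.getrecursionlimit()  # 1000 by default in CPython
hit_limit = False
try:
    count_down(limit + 100)  # deeper than the limit
except RecursionError:
    hit_limit = True
# hit_limit is now True; the loop version handles any depth fine.
```

This is why a Scheme-style deep-recursion exercise in Python usually gets rewritten as a loop or an explicit stack.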

> Thus, I don't really think that learning Python offers solid programming foundations.

Languages don't offer foundations, curricula do that. And for a solid programming foundation, I think a curriculum that does that can be organized around almost any language -- there's a small set of essential features, but very few languages don't have serviceable implementations of what is minimally needed.


And does JavaScript, the replacement he is going with, address all the points you articulated that make Python not the best language for bringing students closer to core CS concepts?


All of the concepts you say Racket is good for could be done in Python (ok, functional programming not so much, but I certainly do bits of functional programming with it).

At the end of the day most jobs in the real world want to get stuff done rather than explore CS concepts. Visual Basic was a very popular choice for many years despite being a poor language. I am sure it was simple enough for many people to start programming (that has benefits as well as downsides).


I agree with everything you said about what makes Racket great for a fundamentals class, but I think you left out a key advantage: most people haven't seen Racket. Kids who took some basic CS course beforehand, but have to take the college introductory course, are practically level with kids who haven't seen anything.


As much as I can't disagree with the choice (the realpolitik of programming has made it the most correct choice), I am saddened that the language the choice falls on is one not chosen based on any merit, but merely due to the historical accident of being the first to be implemented in a browser, with no appreciable opposition because browsers at the time were in the hands of corporations.

In "A Deepness in the Sky", Vernor Vinge describes a future with a profession called "Programmer-at-Arms", necessitated by the thousands of years of layered indirection and cruft that make up the software of that time: even simple changes to a weapon system have to be done by someone who can dig through all of those layers. It seems his prediction is getting closer and closer to becoming reality.


>...i am saddened that the language on which the choice falls is one not chosen based on any merit, but merely due the historical accident of being the first...

Couldn't agree more. Javascript is one of those things that I wish would just go the hell away and be replaced by something that isn't awful.


What we need is some kind of JVM-like VM binary standard that several languages can compile to. Right now that role falls to JavaScript, but it's not very good for this purpose.


In case anyone hasn't heard of it, Asm.JS is basically trying to turn JavaScript into a similar standard. Combine that with sourcemaps in browser debugger tools and we're really close. I've been using clojurescript recently, and I can even connect a REPL to the browser and make on-the-fly changes just like we've always been able to do with JavaScript.


What I really wish someone would build is this: Create a platform-independent machine language which is designed to be translated into arbitrary other machine languages. Then put the equivalent of FX!32[1] in the browser to translate it to whatever architecture the client is running on. And have a compiler that will compile to both that and to asm.js, with asm.js being used (with consequent lower performance) for legacy browsers.

The point here is that it isn't a VM at all. It's just an intermediary platform-independent binary code. So it doesn't care what kind of language you're compiling from or what kind of hardware it's running on, but it still doesn't need a huge slow insecure runtime environment.

Basically it should do what Java originally tried to do, minus the fatal mistake Java made in tying the bytecode to a specific high level language (and thus all the changes made over the years which have made the JVM big and complicated).

[1] http://en.wikipedia.org/wiki/FX!32


> Create a platform-independent machine language which is designed to be translated into arbitrary other machine languages.

> The point here is that it isn't a VM at all. It's just an intermediary platform-independent binary code.

PNaCl?


Basically, yeah -- just add a target for asm.js so the same code can run in browsers that don't support PNaCl.


I made a 0.1 release of just this idea back in 2001, then abandoned it because I'm lazy: http://wry.me/~darius/software/idel/

Well, minus the asm.js, which I kind of wish I'd thought of at the time.


This is known as UNCOL, and many attempts at it have been made, dating back to the 1960s.


LLVM.


Unfortunately platform-specific stuff leaks into the intermediate representation of LLVM.


Language of choice for beginners need not mean only language of choice. If Javascript is great to get started with, great. Once someone has that down, they can still continue on to learn C, Haskell, and other more interesting languages that explore CS fundamentals & theory. Or so I would imagine, anyway?


Interestingly, it's already the Sergeant At Arms in most US legislatures who deals with things like IT, certificates, etc.


It's not just the browser. People often want to create mobile apps today, and there is no standard language that works for all platforms (you can try C# with Xamarin, but it is not the native choice on any of the platforms). Nor are there always good, still-developed, standardized UI libraries for every language. Many people starting with programming want to see something on the screen, and then you need this. And there are usually some people who want to develop server-side web code, which should be supported by the same language. If Xamarin were open source or free, I think it together with C# would be a great way to start programming. For now, JavaScript works.


re: if open source

Free for open-source contributors => http://resources.xamarin.com/open-source-contributor.html

re: if free

Free for students => https://xamarin.com/student

re: if cheaper

So $25/month is too much for the capability to ship on all three mobile platforms, with native development, using a shared codebase written in one of the nicest languages (F#/C#) in existence? If you're a startup with at least three people on your team, just send them an email to work something out in your favour.


I thought Java was the first to be implemented in the browser?


Java was introduced in Netscape Navigator 2 at the same time as JavaScript. But Java was really not integrated with the html at all. It lived in its own isolated box on the page. JavaScript was integrated with html and with the browser environment.



Maybe as a plugin, but from what I can tell JavaScript was the first to be supported natively: http://www.javascripter.net/faq/javascr3.htm


I think the TCL plugin predates the Java one.


I'm saddened by the lack of academic rigour displayed in this thread. If you have not taught introductory computer science courses, then you have NO CLUE what makes a good introductory language choice. It appears James Hague isn't working as a teacher either, just giving advice to newbies, so his opinion is just that -- opinion.

What you want is field studies: "This year we used Haskell and 75% of all students gave up on programming, stating that it was too hard. Last year we used Visual Basic and only 20% did. Ergo Visual Basic is probably a better first language than Haskell."

Lots of research is being done on the subject, see this survey: https://www.seas.upenn.edu/~eas285/Readings/Pears_SurveyTeac...


Pretty much everybody I know that eventually went on to write code for a living started hacking on little toy projects long before they got anywhere near a computer science course. JS in the browser is a great environment for people to start experimenting and dabbling, in the same way that things like LOGO and BASIC were in the early PC era.


I am not as confident in your predictor or badge of authority -- that teaching actually tells you how to teach. Most "teachers" you're talking about are researchers first, and lecturers second. They don't really think about pedagogical techniques, and they're not going to start.

Plus, they most definitely aren't going to take motivational issues seriously. Most professors assume you have motivation, and if you don't, they aren't going to care. They're going to let policy take care of you.

Also, like I've mentioned before, most professors are not teachers. They are lecturers. The difference is that a lecturer just spits out the information in a clean and accurate manner, but it's your job from there to handle your learning.

On a gloomy note, I would even say that most teachers probably don't care about the state of pedagogical research. They probably already have very strong opinions about how to model the student population or the best learning methodologies.


> What you want is field studies: "This year we used Haskell and 75% of all students gave up on programming, stating that it was too hard. Last year we used Visual Basic and only 20% did. Ergo Visual Basic is probably a better first language than Haskell."

Why is that necessarily a good thing? What are we optimizing for here, understanding or popularity?

SICP review by Peter Norvig:

> Donald Knuth says he wrote his books for "the one person in 50 who has this strange way of thinking that makes a programmer". I think the most amazing thing about SICP is that there are so FEW people who hate it: if Knuth were right, then only 1 out of 50 people would be giving this 5 stars, instead of about 25 out of 50. Now, a big part of the explanation is that the audience is self-selected, and is not a representative sample. But I think part of it is because Sussman and Abelson have succeeded grandly in communicating "this strange way of thinking" to (some but not all) people who otherwise would never get there.

http://www.amazon.com/review/R403HR4VL71K8/ref=cm_cr_dp_titl...

EDIT: People seem to be downvoting a sentiment Knuth and Norvig expressed, rather than reflecting on the question posed: what should we optimize for in an intro to CS class, and why?


I loved SICP when I worked through it, but I was already an experienced programmer at that time. I venture to guess that most of the people praising SICP actually had programming experience before reading it, even if the book purports to be an introductory course.

I would probably have given up on programming if SICP had actually been my introduction to programming.


Maybe 1 person in 50 has what it takes to reach the level of programming where they can read and understand TAOCP and where doing so makes sense for them.

However, almost everybody smart enough to muddle through college has what it takes to learn enough programming to solve some simple everyday problems they'll face in whatever job they end up in.


> I think the most amazing thing about SICP is that there are so FEW people who hate it: if Knuth were right, then only 1 out of 50 people would be giving this 5 stars, instead of about 25 out of 50.

This presumes that the people reviewing the book are a random sampling of the population, rather than being significantly skewed toward a subpopulation in which the "the one person in 50 who has this strange way of thinking that makes a programmer" was overrepresented compared to the population at large.

People who have enough interest in programming to pick up a text on it, enough concern about programming texts to bother giving feedback on one, and -- more importantly -- all that for a text with SICP's reputation for rigor and its focus on a language that isn't the currently-industrially-popular language du jour, are going to include people who have the "strange way of thinking that makes a programmer" at, one would expect, a much higher rate than the general population.

> what should we optimize for in a intro to CS class and why?

That depends, among other things, on the role of CS in the broader curriculum. There's more than one right answer: SICP has one focus, HtDP has another -- and both are appropriate focuses for an intro to CS course. SICP represents a focus that is appropriate for a first course for majors in an environment where CS isn't part of the "core", HtDP has a focus that is appropriate for a first course in CS as part of the core curriculum.


In our environment, everyone should have a rudimentary understanding of programming. If even 10% of your complete beginners class drops out, you're failing to achieve that, so for this purpose, you should be looking for the lowest common denominator. Javascript is that LCD.

The Art of Computer Programming is a college level work - it makes sense for Don Knuth to aim it at the 1 person in 50 who could be a great programmer, someone who could progress the science forward, but his audience is very different to that described in the OP.


Depends on which 75% you are weeding out. What if you are just weeding out people who don't have prior experience in a more accessible language?


Exactly this. Why are we trying to raise an army of procedural, object-oriented programmers?


Easy for corporations to replace cheaply.


>I'm saddened by the lack of academic rigour displayed in this thread. If you have not taught introductory computer science courses, then you have NO CLUE what makes a good introductory language choice.

Judging from what most academic institutions teach as a first language, which seems to move with the fads of each period and what's "hot in the industry" (e.g. Java a decade and a half ago), that's also the case if you HAVE taught introductory computer science courses.

>It appears James Hague isn't working as a teacher either, just giving advice to newbies, so his opinion is just that -- opinion.

As is most "informed opinion" of academic teachers. CS is mostly a sub-field of mathematics, the rest of it is, as Alan Kay says, pop culture.


As a proponent of python and a full time python developer I fully endorse this decision.

There is a compelling need to be able to distribute graphical Python applications that isn't being met (and the fact that https://www.python.org/dev/peps/pep-0441/ isn't even a step in the right direction is an indication of how far away a solution is).

Don't get me wrong, Python is a great language, but it's not the right solution for teaching programming, for exactly these reasons.


From the link:

> This feature is not as popular as it should be mainly because no one’s heard of it

Count me as one. As someone who does a lot of development in Python, I'm surprised I never heard of that feature either.

Perhaps another reason this feature goes unnoticed is because it still requires the end user to have Python installed. Therefore, when distributing a Python application, most people just bundle it into an EXE/ELF/etc and don't bother looking for other solutions.


Basically that's because you can't bundle C libraries in a zip file like this, so in practice you can't use it for any of the reasons you might want to use it.

(.exe bundlers, by comparison, bind the shared library into the executable or with it)


I'm sorry, your comment isn't making clear what teaching programming has to do with distributing graphical applications. Could you clarify what 'these reasons' are?


Teaching intro programming courses with "parse a CSV file" or "implement a text based calculator" is outdated. Modern students have grown up with smart phones and are used to being able to click around and have the computer react graphically. Students learning with C or Python have a large chasm of GUI libraries they have to cross before they can make anything like the UIs they are used to seeing. Being able to make shapes move around from day 1 in a JavaScript course seems like a much more compelling experience to me.


That's exactly it. There's an argument for teaching programming from the top down, starting by modifying GUI programs, rather than from the bottom up, at the bit level. See: http://radar.oreilly.com/2014/03/a-concrete-approach-to-lear...

Javascript now has all the major language features Python does. It's not quite as ugly as it used to be. There's strict mode, which disables some of the worst legacy crap. You can use Javascript on servers, desktop clients, and phone clients. Try doing that with Python. Compilers for Javascript generate amazingly fast code today.
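As a small example of the legacy behavior strict mode disables -- implicit global creation from a typo (the function and variable names below are made up for illustration):

```javascript
"use strict";

// In sloppy mode, assigning to an undeclared name silently creates a
// global variable; strict mode turns the same assignment into a
// ReferenceError, so the typo is caught immediately.
function sloppyBugBecomesError() {
  try {
    misspelledCounter = 1; // typo: this variable was never declared
    return "silently created a global";
  } catch (e) {
    return e.constructor.name;
  }
}

console.log(sloppyBugBecomesError()); // "ReferenceError"
```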

Javascript has a tradition of bad, obfuscated code with no comments, but that's not the fault of the language.

(I used Python for years, and prefer it to Javascript. But Python remains limited. Just distributing a Python program is a pain. Yes, there are several systems for making an installable executable from a Python program. They sort of work. It's not considered a standard operation.)


Looks like there is some school of thought that differentiates 'programming' from Computer Science. This seems wrong, because you're not teaching 'how', you're teaching 'what'. The time students spend learning how to write HTML or what CSV is isn't spent on what bools, operators, statements, blocks, and procedures are. They're learning 'Web Programming', which won't help them nearly as much when they have to do anything else.


I think this author has lost his focus. Maybe there's only 1 choice if you want something to sit "on top", but anything can interact with "an unprecedentedly ubiquitous cross-platform toolkit for layout, typography, and rendering". Including Python.

For games, an introductory class using Python with Kivy or PyGame doesn't fit the bill because they may not always be around? If a student really wants to go deeper into game dev, they're going to need to learn at least one of Swift/ObjC/Java/C# and their respective native libraries anyway. You're not indoctrinating them into a platform like the web or iOS, you're introducing them to programming.

The rant about libraries not always being maintained is really brilliant considering the library churn in the JS space. After all, they may be "well-intentioned today but unsupported tomorrow". TkInter? That was around before Angular 1.x was a glimmer in daddy GOOG's eye. It'll be around after Angular 2.0, and long after 2.0 is buried and the next JS framework/library rears its ugly head only to be put down in short time. Given the choice of having invested my time into TkInter or Angular 1.x, I know which I'd rather have dumped my time into.

So what's the goal here, to teach programming with something you can show others? If so, then you can do it with almost anything, especially Python.

Use what you want to teach, but don't delude yourself with half-baked reasoning to justify what you clearly always considered an inferior choice to do it in -- for the last 2 decades.

Back to the drawing board, professor.


It sounds like the goal is to get people programming. That means a combination of teaching them, and getting them excited. Being able to show the thing you created to your friends is a powerful motivator.


While I disagree with the author that this is the ultimate goal, one that should override all other concerns, I'll bite for the sake of discussion. If that's the goal, then Javascript was obliterated before the OP woke up to write this ill-conceived blog.

What people are building with Python on Kivy (chose the nearest example)-

https://itunes.apple.com/us/app/deflectouch/id505729681?mt=8

Vs.

What people are building on the webstack (also chose the nearest example)-

http://prog21.dadgum.com/203.html

I do give the author credit for not contorting the page into an eye cancer causing abomination like most web developers are doing these days. While he's espousing JS, oddly enough he may share my preference for a web that's as JS-light as possible. :) But our professor does already have something to show his friends! So download your full blown games, accepted into the AppStore (or Google Play) to show your pals what you built. Then pass out your little URL. Let me know how thrilled they are then.

In sum, if you're looking for a great general purpose PL to start with and maybe grow with, with plenty of ways to build cool stuff- Always Bet On Python.


Just because someone doesn't have a javascript-heavy blog (which is designed to be mostly text) in no way means they must think the whole web should be kept JS-light, or that JS isn't a worthwhile technology to build interactive applications with. They could believe in an appropriate use of technologies.

There are plenty of technologies for getting JS based apps into app stores, just as there is for Python based apps.

If your argument is "Here is a highly interactive Python app and a low-interactivity blog post, therefore Python is more compelling" then:

What people are building with Python-

http://learnpythonthehardway.org/book/ex1.html

Vs.

What people are building on the webstack-

https://itunes.apple.com/us/app/biolab-disaster/id433062854

I agree that some of the arguments such as library longevity and health are debatable, but in terms of being able to get your application out to people, JS undoubtedly wins.


Yeah, unfortunately the discussion was around getting people to program so they had something to show others, not getting your application out there to as many people as possible. Sounds more like a kneejerk defense of your chosen platform than an honest contribution to me.

Not that you've proven this new point either. At $99 per student for impact.js just to learn on, I think the only thing undoubted is that you're desperately grasping at straws.

Paid software, compared to Python's ecosystem that provides something better for free -- along with more varied libraries and a better language too. Maybe as a teacher you could buy just one copy, pop the hood, and show them the source code for how this is built... oh wait, it's not open source. Also the slight snag that impact.js doesn't have support for Android, which happens to be the largest mobile platform in the world. For only $99.99! You may as well have sent me Xamarin to compare with Kivy and told me how C# was the way to go.

Use what you want to teach with. But using these harebrained reasons concocted as to why you'd want to push something like JS over a PL of Python's caliber make no sense at all.


If you get excited by showing your friend some silly webpage, but aren't absolutely thrilled by function composition in haskell then you're a coder, not a programmer. You're in the wrong class.

The point of programming classes should not be to hide what programming is and show this silly mockery of it which javascript is. It shouldn't veil true programming issues, like correctly reasoning about the structure of your code, with a language which makes it easy to code in without ever programming.

If you need to motivate someone to be a programmer, they should not be one at all. They should not be going on to write poorly implemented software because they didn't have a passion for implementing proper reference counting in a legacy C codebase, but instead wanted to write an iPhone app for their friends.

If you need to motivate someone to be a programmer by teaching them to draw on a canvas or manipulate the dom, rather than teaching them the wonders of basic lambda calculus, then you're doing them a massive disservice by using the word "programming" to refer to something else entirely.


slow clap

It's rants like these that really harm the perception of the functional programming community. It just portrays you as the archetypal 'misunderstood genius' trying to show everyone in the world how wrong they are and how only the 'true programmers' do what you think is right. Yet none of this is backed up by evidence that the benefits of functional programming lead to better software.


Yet none of this is backed up by evidence that the benefits of functional programming lead to better software.

Amen. I've been hearing exactly these kinds of grandiose claims from FP advocates going back as far as 1998. The funny thing is, the same people almost never seem to get around to actually writing any code of any substance. As far as I'm concerned there are some good ideas coming from the FP community, but the more extreme implementations of FP, like Haskell, have yet to prove their worth in production environments.


> FP like Haskell have yet to prove their worth in production environments

Huh?

What is your definition of proving worth in a production environment?

Did you hear recently about bond? It's written in haskell by Microsoft and used in Xbox Live as well as other infrastructure.

There's also Facebook's Haxl...

Remember Bump file/etc transfers in Samsung Galaxy commercials? That was written in haskell too.


You are completely assuming that everyone who wants to program is intending to do so professionally, or that they won't be able to grow from a basic intro of "here is how to present your thoughts programmatically" and then cover the deeper subjects later, as they need to.

This is especially relevant as we start teaching programming at younger and younger ages. I hope it's obvious that when kids start programming in 4th grade, it's much more intuitive to start them on Logo than lambda calculus...

See, for example: https://neil.fraser.name/news/2013/03/16/

Also, if you look at the amount of money that "Flappy Bird" made, perhaps making brain-dead iPhone apps isn't such a bad deal...


So much anger


It's with all of this in mind that my recommended language for teaching beginners is now Javascript. I know, I know, it's quirky and sometimes outright weird, but overall it's decent and modern enough.

Javascript is a flawed, quirky language, but with the upcoming ES6 and ES7 standards it's going to be much, much better. I'd call it a tossup comparing it to Python or Ruby. I wouldn't have recommended JS to a beginner a few years ago either, but that's where the momentum is now and, as the author says, the ease with which a newbie can get something interactive up on screen trumps smaller concerns. For a little more rigor you can use something like TypeScript.


I'll be the first to admit that ES6 is nicer than Python in terms of the language itself. But I feel like it's still going to take years of heavy lifting to get the same level of high quality libraries, web frameworks, learning resources, etc. And it's hard for me to see js replacing Python as the obvious choice for beginners until that happens. It might be more fun to code in than Python, which is probably at least in part why there is more js code being written on github, but the larger ecosystem still seems like kind of a mess.


What's pretty funny is that the new features in ES6 and ES7 make a lot of the patterns/mishaps/X-parts from the last 15 years pretty irrelevant or less obviously 'the way'.

Examples:

  * "Class" syntax
  * Comprehensions
  * Modules
  * `let` scoping
  * Generators
It's a whole new ball game.
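A minimal sketch of a few of those features side by side, runnable on any ES6 engine (the names are invented for illustration):

```javascript
// `class` syntax instead of manual prototype wiring
class Counter {
  constructor(start = 0) { this.n = start; }
  increment() { return ++this.n; }
}

// `let` is block-scoped, unlike the function-scoped `var`
let total = 0;
for (let i = 1; i <= 3; i++) total += i;

// Generators compute values lazily, on demand
function* naturals() {
  let n = 0;
  while (true) yield n++;
}
const gen = naturals();
const firstThree = [gen.next().value, gen.next().value, gen.next().value];

console.log(new Counter(41).increment()); // 42
console.log(total);                       // 6
console.log(firstThree);                  // [0, 1, 2]
```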

I think a lot (over 50%) of JavaScript devs 5 years from now will still be using <= ES5, but only because they've so settled in their ways over the last 10 years.


> I think a lot (over 50%) of JavaScript devs 5 years from now will still be using <= ES5, but only because they've so settled in their ways over the last 10 years.

Funny, I'd make the opposite prediction... the JavaScript devs I've known have been among the most curious and open-minded developers about different ways of doing things, interested in experimenting with different idioms made available by the flexibility of the fundamental language, and quick to adopt them when they found them useful (possibly too quick). They're people who figured out how to get many of the benefits of the language features you've listed above using the language features that were available years ago.

Which kind of makes sense: everybody else was too busy complaining about function scope instead of block scope, or how they couldn't get their heads around Prototypes instead of Classes, or having to reason about "this" scope or truthy/falsey values, or whichever difference/wart scared them off, to bother learning to use the tools at hand.

No, my bet would be that ES6 adoption will pick up a lot quicker than, say, Python 3's.

(and most of the lagging will probably happen around frameworks with imported paradigms -- Angular 1.x, I'm looking at you in particular -- and other aging codebases).


> I think a lot (over 50%) of JavaScript devs 5 years from now will still be using <= ES5, but only because they've so settled in their ways over the last 10 years.

5 years from now, there will still be people whose customers require IE6 support, too. We're still working on that.


The customer I am on still uses IE 8!


Is rigor even that important? Is a flawed and quirky language really a problem when you are learning? I'd expect that keeping the learner's momentum going is more important – if they keep producing they will be able to engage with the crap, or make their own new kinds of crap. Quantity over quality, you know? With that in mind, stuff like developer tools is more important... and maybe community (which is eclectic for Javascript – there's lots of material out there about Javascript, but a lot of it is crap material).


A quirky language might slow a learner's momentum by making it necessary to explain all the quirks. As a simple example, javascript's rules for casting values to boolean are quirkier than scheme's rule, in the sense that scheme's rule ("#f fails conditional tests, everything else passes") is shorter to explain than javascript's ("NaN, false, 0, etc. fail conditional tests, except that new Boolean(false) passes, etc."). As an experienced javascripter I've grown used to the quirky parts of JS, and some of the quirks are even rather convenient; however, it's that much more that needs to be explained to the beginner that isn't strictly necessary.
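A small sketch of that asymmetry; the boxed-Boolean case is the one that usually surprises beginners:

```javascript
// The falsy primitives: each of these fails a conditional test.
const falsy = [false, 0, "", null, undefined, NaN];
console.log(falsy.every(v => !v)); // true

// But every object is truthy -- including a boxed false.
const boxed = new Boolean(false);
if (boxed) {
  console.log("new Boolean(false) passes the test"); // this line runs
}

// Scheme needs none of these special cases: only #f is false.
```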


We can have good, rigorous semantics and simple syntax in the same language. Python is, indeed, a good example. Ruby as well. I would even say Scheme is a very rigorous language that is quite easy to learn -- if, first, you haven't been tainted by curly-bracket languages, and second, if there were a Scheme implementation with as many and as complete a selection of libraries as Python's.

A flawed, quirky language is a problem when learning, because it increases the number of things that have to be learned.

But yes, your other points are correct. Tooling is far more important.


> ... a Scheme implementation with as many and as complete of a selection of libraries as Python.

It's not quite the same size, but the Racket standard library is fairly close in size to Python's. It also has better GUI support, IIRC. Its selection of other libraries isn't quite as broad, but it's decently sized, and you can use some things written for other implementations (well, mainly ones written in pure Scheme).


I used Racket for my personal projects for well over a year. It was a lot of fun, so it definitely filled a good niche for me at the time, but I eventually gave it up as I started to want to be more productive and finish projects, rather than just writing any ol' random code.

It is definitely the most complete Scheme available, which I believe is why they gave up the PLT Scheme name: to disassociate from the "bare bones" assumption of Scheme. And yes, its GUI support is relatively better than Python's. But it's not objectively good. It tends to have mostly bare-minimum implementations of libraries. For example, the work done for connecting to databases is basically nothing more than a DB connection. Web development is basically no more than HTTP request parsing.

There's no Django or Rails or even Express for Racket.


+1 on the interactivity aspect. Not having to go through a bunch of compiler configuration (C/C++ land) to get a basic GUI up and running is a huge plus. And HTML is a lot easier to understand for people with no experience than a bunch of tkInter classes.


I kind of believe all languages should have some ability to run with a browser-based GUI... why bother with all the harebrained implementations (Swing, Qt, etc.) when we could just have a "desktop mode" (altered security) version of Chrome pop up to provide the GUI? It wouldn't even have to follow the same rules as the browser (navigation could be via buttons only, no back/refresh), just be based on HTML and perhaps JS to allow a bit of advanced nav logic (though hopefully not enough to start polluting app logic).

I believe Chrome apps may have taken some steps toward this, though I'm not sure... at one point I considered trying to make a browser-based GUI that does all local RPC communication to a C++ app, but I feel like, you know... there should be a nice standard way of doing this out there.


Chrome apps are already the desktop mode :) There's also stuff like Chromium Embedded Framework[1] which Atom used to use[2]. AppJS[3] sounds the most similar to what it sounds like you want, though. I've never tried it, but it seems like it'd work well enough (Chromium-based).

1. https://code.google.com/p/chromiumembedded/ 2. http://blog.atom.io/2014/05/06/atom-is-now-open-source.html 3. http://appjs.com


I think this is easier than you think. About 8 years ago I was working on a big project in Qt that was two parts -- one part a 3d rendering engine, one part an interactive web view. Having the two interoperate was surprisingly straightforward, and we could do all kinds of HTML/CSS layouts and use JavaScript hooks to call to the Qt classes. I can't imagine it's regressed much since then, though I'll admit it's been a while since I touched it.

PhoneGap provides this kind of experience for mobile.

The issue always ends up being performance -- running JS runtimes in sandboxed webviews is always slower than native code.


Well, Java 8 ships with WebKit for JavaFX, but almost nobody uses it.


Given that most enterprises are still migrating to Java 6, that is to be expected.


I hope you at least wait until those standards are thoroughly supported "in the wild" because the wild provides the entire point of choosing Javascript. Even then, the argument that ES6+ will make everything better ignores that those standards layer more stuff on top of a lot of bad old stuff that isn't going away. And the community status quo is bad documentation, messy APIs and things breaking all the time. If I did not HAVE to use this stuff I would happily choose to avoid the frustration.

There is no competition to Javascript for programming browsers, but why should programming education always center on programming browsers? For education in particular, there are a number of languages designed for education and Javascript (even ES6) has never been one of them.

No accounting for taste, I guess.


Programming education doesn't have to center on browsers, but it's very likely that someone interested in programming as a career in 2015 will have to make programs which interact with browsers in some way. So, while class chicken inherits from Animal and can override Animal's speak() method, that's not quite as fun as having .chicken.animal::after put chicken memes all over your homework.
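The textbook example alluded to above, written out in ES6 class syntax (the class and method names are just the comment's own illustration):

```javascript
class Animal {
  speak() { return "..."; }
}

// Chicken overrides Animal's speak() method
class Chicken extends Animal {
  speak() { return "cluck"; }
}

const pet = new Chicken();
console.log(pet.speak());           // "cluck"
console.log(pet instanceof Animal); // true
```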

Visual Basic was also pretty awesome for this, and I'm sad to see that give way to e.g. Java, which has probably turned many people off initially to how fun and rewarding programming can be.

TL;DR public static void main is not as fun as making more internet.


You can program in Python using Kivy [1] to create apps for Android, iOS, and the desktop. I made a simple app and took it through the entire process; it is now on the Android app store. You are still using Python, so you can focus on programming and not language features, but you get the benefit of programming something your students can show off.

[1] http://kivy.org/docs/gettingstarted/intro.html


Thanks for pointing this out. I've been using pygame with my students but kivy looks great.


JavaScript (or rather: a subset of it) is a fine teaching language too, especially since it's easy to publish runnable code - even for beginners.

However, there is no language that can do everything equally well. It's a common reaction displayed by the proponent base of many languages, even here on HN, to advocate the use of their language for every conceivable need. This can't possibly work well.

Programming teachers should make a habit of pointing out that writing code is a journey that will require students to learn new tools all the time.


>It's a common reaction displayed by the proponent base of many languages, even here on HN, to advocate the use of their language for every conceivable need. This can't possibly work well.

Is that some law of nature, or merely a consequence in how we've designed languages thus far?

A well-designed, type-inferred, statically typed language with a REPL/interpreter, a fast AOT compiler, an optional GC, good documentation and IDE support, and a big API library could work well for all kinds of domains, from scripts and websites to network application servers and systems programming.

Note also that the money we spend on language and tooling development is minuscule and laughable compared to the IT industry's size.

When there have been some decent money involved we had good results. Namely:

With SUN/Java we got the fastest, most mature VM out there, with a very good GC and a ton of tooling available.

With Javascript, we got V8/JSC/etc, that made the language 10-100x faster compared to the nineties.

With MS, we got C#, a great, modern language that can cover tons of ground for what it's designed for, with a huge ecosystem of tools and APIs.

And from Apple, we got the great Cocoa libs and Swift (still a work in progress but very promising).


> Is that some law of nature, or merely a consequence in how we've designed languages thus far?

It's a game of tradeoffs, same as everything else in nature, isn't it? I would rarely say that a given language's design or implementation is universally flawed, instead it's the result that comes out of a set of premises and decisions. Yes, some of them may be objectively bad, but mostly they are just decisions made to solve specific problems.

In a way the whole point of my post was that languages don't primarily exist on a one-dimensional spectrum of "good" to "bad". They're optimized for different things. The assumption that given a list of possible decisions during language design you just have to select the right one every time is based on the fallacious premise that right and wrong are the actual choices you get.

> A well designed type-infered statically typed language [...]

...and those are already some key decisions you made that reflect what's important to you personally. Every time you choose one of these properties, you open doors and close others behind you. While you may personally believe these are the minimum requirements which every "good" language absolutely must have, you should also recognize that you are doing exactly the same thing as every single language designer in history.


>It's a game of tradeoffs

I find that less and less true. The canonical example for me is algebraic data types. They're pretty much free at the runtime level, enable clean code, and allow for more static checking. Basically a superpowered version of enums. I can't see any argument against having them in the language unless the express goal is to have no syntax (Lisp).


Again, that's a personal preference, not a statement of fact. There is a cost associated with different type systems, and that cost can vary with the problem you are trying to solve. It's not that I argue against the benefits of having such a system in place, but you should be aware of the fact that this too might not represent the end-all-be-all for all programmers.

One of the basic mistakes made here seems to be the notion "if programmers only _knew_ about the one true way of doing it, they'd all use the same language and tools, namely: mine".


Again, this is not about personal preference.

In the real world, 90% of programmers use either C#, Java, C/C++, JS or Python/Ruby/Perl in their jobs.

We could restrict ourselves to this set of languages and syntax style, and design a super-set language that does everything, gets rid of their historical warts, and can go from scripting to HPC.

Stuff like using significant whitespace or not is BS bikeshedding, which we can bypass.

The thing is, where it matters (speed, expressivity, availability of a REPL and IDE, a large SDK), nothing but money prevents us from making such a language...

Is there anything preventing a top-notch team from working for 10 years and coming up with one ready, including all the trims and works: JIT, AOT native compiler, ports to 2-3 architectures, batteries et al?

Sure, some people would still like their Java or their Lisp or whatever.

But there's no logical impossibility preventing us from creating a language that's better for 90% of the tasks than what's out there.

In a sense, that's what MS did with C# -- but they stopped too soon because they have their own agenda. So it was Windows only, in the CLR with no supported AOT option, etc.


> Again, this is not about personal preference.

There is no need for passive aggressiveness. I get that aping the exact phrase used in a parent comment is used to communicate disrespect, and it's duly noted, but all things being equal I would prefer not to go down that route.

My argument here is that people designing languages have made choices based on their personal preference as well. It's not really appropriate to take one specific set of features you like and declare it to be the objective winner.

> In the real world, 90% of programmers use either C#, Java, C/C++, JS or Python/Ruby/Perl in their jobs.

That's a No True Scotsman-like argument. Also, what's the meaning of the three different types of separators you used there? But for the sake of convergence, yes, let's assume every important real-world language is in that list.

I'm still not sure what to say about this without repeating myself, except for: go and do it. You say it's a matter of investment, but on the other hand consider the benefits if someone pulled it off. If you really do believe you are the person who has this all figured out, please go ahead and implement this. Heck, make it a Kickstarter project or something, I'm in!


>There is no need for passive aggressiveness. I get that aping the exact phrase used in a parent comment is used to communicate disrespect, and it's duly noted, but all things being equal I would prefer not to go down that route.

What passive aggressiveness and/or disrespect?

The repeat is used to communicate disagreement with a specific thing attributed to what I said. Since it was attributed twice, with no regard to the arguments I made, it was mostly necessary.

If anything is "passive aggressive", it's this sudden ad hominem and armchair-psychology attempt.

I only responded to the abstract issue under discussion; I didn't make any judgements about the participants. Can we keep it at that level?


>My argument here is that people designing languages have made choices based on their personal preference as well. It's not really appropriate to take one specific set of features you like and declare it to be the objective winner.

And my point is that I'm not doing that: I'm not picking features based on personal preference (my actual preferences are different), but to describe how a language can encompass the whole range of applications (be the right tool for most jobs, as much as possible).

This wasn't about "my dream language" (that would be something like Smalltalk) but about whether a language can cover all/most bases or we're forever doomed to use a babel tower of "right tools for the right job".

And the "cover all bases" thing I tried to tackle from a technical features standpoint. At this conceptual level of the argument I don't even care much if programmers will like the end result.

>That's a No True Scotsman-like argument

No, it's just a statistical observation. 90% of programmers do use these languages. I don't say the rest are not programmers or not true programmers -- just that we can cover the majority of programmers doing professional work with a language in that vein.

>I'm still not sure what to say about this without repeating myself, except for: go and do it. You say it's a matter of investment, but on the other hand consider the benefits if someone pulled it off. If you really do believe you are the person who has this all figured out, please go ahead and implement this. Heck, make it a Kickstarter project or something, I'm in!

Now, that's disrespect and passive aggressiveness!

I never said it's just me that "has all that figured out". In fact lots of people say the same thing; inside every language community, people are trying to fix the same pain points to make the language more universal.

Lots of Python people, for example, wanted to make it GIL-less / capable of good async operation / faster / typed / etc. All this is for handling different kinds of scenarios that it currently does not handle.

People using JS asked for more speed, then for server-side/native interfacing (Node and co.), then for "programming in the large" features (ES6), types, things like asm.js to get native memory management, etc.

So, what I wrote was that having a language extend to almost all jobs is not impossible, and gave a laundry list of features (taken from observations such as the above), that could accomplish that.


> So it was Windows only, in the CLR with no supported AOT option, etc.

NGEN was there since day one.

Spec#, the Singularity systems programming language (based on C#), only has AOT compilation to native code.

This work was the basis of Windows Phone 8 .NET, which only compiles to native code, in a PE format known as MDIL.

This work was then continued to bring static code compilation to the Windows 8 for tablets and now for desktop store apps.

It is part of the upcoming .NET 4.6.

The only deployment format not supported for static code compilation are the traditional desktop and the compact framework.

In parallel to this work, Dafny, the systems programming language for Ironclad (Singularity's successor), also produces static executables.


Cool, I missed that NGEN was there from the start. I was mostly following Mono at the time, where the AOT option came quite a bit later.


>It's a game of tradeoffs, same as everything else in nature, isn't it?

My question is whether tradeoffs are inherent in programming language design (some mathematical inevitability) or due to lack of resources and other "real world" concerns that can be overcome given enough care and money.

I'm not convinced by anything that I've seen that we don't just have the latter.

E.g. one could say in 1995 "JS is an awful language for development, and it's unsuitable for anything that needs to be fast. It's an inevitable tradeoff, use the right language for that etc".

And then the big corps got interested in optimizing JS, and we can now write full 3D games, and even video encoding, in it, and we have asm.js and the like that bring the performance of JS to within 2-3x of C for most things, as opposed to 100 times slower back in the day.

And we got node in the server side, and tons of libs etc. And with ES6/7 we get tons of language improvements.

Even the "fundamental" issues with JS, like bizarro type coercion, global-by-default, no integer types, etc., could be corrected; it's just compatibility concerns that keep us from fixing them, not some inherent impossibility of getting a great language without those issues.

Heck, we could even introduce optional gradual typing for JS, if it wasn't for those backward compatibility concerns, and even AOT compilers to native code.

Those things maybe wouldn't make JS perfect and useful for everything, but it would make it an order of magnitude better than what it is.

And if we started from scratch, without all those compatibility constraints at all, we could get a new language pretty close to perfect with enough money and a great team.

>...and those are already some key decisions you made that reflect what's important to you personally. Every time you choose one of these properties, you open doors and close others behind you.

What door exactly closes? In the "mathematical necessity" sense, not the "some people only like dynamic typing" sense.

Because the thing under discussion was whether a language good for everything (or close to it) could be produced.

I chose those attributes not because I like them personally, but with this end goal in mind. Namely:

1) Without static types, you can't get the last mile of performance and safety checks, for using it for high performance apps and systems programming. They also help having a better IDE experience (autocomplete, suggestions etc) for those who like that.

2) If the types are not inferred, the language will feel too verbose and weighty to people wanting to use it for quick scripts and the like.

>you should also recognize that you are doing exactly the same thing as every single language designer in history.

Well, I don't have any beef with any language designer in history.

My basic problem with current languages is not that they made some choices, but that they didn't do the additional work they could PILE ON TOP of the previous choices to get much better.


I think the issue here is that we have two opposing theses on how this works. I posit that the programming ecosystem is very much like any other ecosystem in that there are many organisms within it, for a good reason. For example, looking at nature, you might ask yourself why biology hasn't converged on a single lifeform yet, and whether it's not simply an issue of having all the right genes and discarding those that are not so good. But that's obviously not how it works. There are niches and different strategies for solving different problems. There is no one genome that solves everything. And yes, every single codon is a decision that opens doors and closes others as well.

If I understood you correctly, your position is the inverse of that thesis. You argue that there is a base abstraction that should work equally well in every context, and that specialization could come in the form of optionals piled on top of the one true base. There is no mathematical reason why this should be unworkable, but a solution so far eludes both programmers and evolutionary processes alike. That doesn't mean you shouldn't go ahead and try solving it. It's a worthy project. If you believe you can come up with common denominators that cover all aspects of all previous languages, by all means: give it a try!


>For example, looking at nature, you might ask yourself why biology hasn't converged on a single lifeform yet, and whether it's not simply an issue of having all the right genes and discarding those that are not so good. But that's obviously not how it works. There are niches and different strategies for solving different problems. There is no one genome that solves everything.

And yet, nature has already sort of "converged" on humans, who are "masters of all trades" of a kind, and somewhat analogous to the "full-spectrum language" I'm talking about.

And as with that language and other languages, the existence of humans doesn't mean all other lifeforms will perish or disappear.

>There is no mathematical reason why this should be unworkable, but a solution so far eludes both programmers and evolutionary processes alike.

Well, for evolutionary processes we have humans. And soon, if we are to believe some pundits, the "singularity".

As for languages, we have some damn near all-rounders, but my observation is that it's not because it has "eluded programmers" that we don't have it, but because of business reasons (e.g. some company wants to only target segment X), OSS being underfunded, narrow scope, etc.


Optional gc is not something I've seen work in practice. Apple abandoned their attempt because of the difficulty getting gc'd and non-gc'd code to play together nicely. Rust has (and may again) offer something like optional GC, but it will operate at the value level rather than the program level and thus doesn't really solve the library problem.

Programming languages are designed in a huge, multi-dimensional space. Should the language be static or dynamically typed (or optionally typed)? Interpreted or compiled (or JIT'd)? Manually memory managed or gc'd? Powerful or simple type system? Large (C++) or small (scheme)? Batteries included? Hosted on a runtime?

Each of those questions (and the dozens more that go into designing a language) involves tradeoffs. Manual memory management makes a language suitable for domains like kernels and games, but inevitably requires the user to be aware of where and how memory is being allocated and deallocated. Small dynamically typed programs tend to be (in my experience) faster and simpler to write, but static typing makes maintaining large codebases much easier.

I strongly believe there will never be a language that is the best available for all domains. Currently I find that I need three languages to cover my bases:

  * A dynamic, batteries-included scripting language (Ruby)
  * A static, fast, gc'd language (Scala)
  * A static, fast, non-gc'd language (C++)
I'm hoping that Rust will collapse the last two, but there are still a lot of tradeoffs.


>Optional gc is not something I've seen work in practice. Apple abandoned their attempt because of the difficulty getting gc'd and non-gc'd code to play together nicely. Rust has (and may again) offer something like optional GC, but it will operate at the value level rather than the program level and thus doesn't really solve the library problem.

Well, you could make 2 compilers for languages with the same syntax, and 2 sets of libraries (GC and no-GC) with mostly the same APIs, divergent only in whether they have GC or not.

For low level stuff you use "GCLESSLANG" (as a better C) and for the other stuff you use "GCLANG".

Otherwise, 99% of the syntax is the same, and the two langs are designed to easily call into one another. They could share most of the parser and compiler too.

This would basically give you the feel that you use one and the same language, switching between GC and no-GC version.

It's a money and resources thing, not some "cannot make this work" thing.


I think Rust attempted to do a much more conservative version of this (same language, different sigils on pointers depending on whether pointers were refcounted or ownership-transferred) and determined that it didn't work in practice. Libraries have to use one scheme or the other, nobody made APIs that exposed both schemes, and calling between them was too much of a hurdle. So in practice people ended up settling on a single pointer type, which was -- somewhat surprisingly for the language designers -- the non-refcounted one. And the language got rid of syntax for the refcounted one, and everything became more usable.

I could believe that with more money and resources everything would have been rosy, but my understanding of the facts doesn't particularly support this.


A well-designed type-inferred statically-typed language either cannot support the following (which is valid Python), or can only do so with tradeoffs that either stretch the definition of "statically-typed", stretch the definition of "well-designed", or would make the language even more intimidating than monads make Haskell:

    def fizzbuzz(i):
        if i % 3 == 0:
            return "fizz"
        elif i % 5 == 0:
            return "buzz"
        else:
            return i

    for i in range(100):
        print(fizzbuzz(i))
Now you might argue that it's a good thing for people to care about proper typing for large, maintainable codebases, and I would agree; in some of the better Python code I've seen, it's social convention to add type signatures in a comment (I once worked at a place where people added Haskell type signatures) and enforce proper typing. And yes, there are a few Python APIs that return either a single item or a list, instead of a one-item list, and they're annoying.

But it's a thing people do, it's especially a thing that's useful in a teaching language or in a scripting language that's trying to fill PHP's niche, and I don't think a language that refuses to type-check such a thing will fit every conceivable need. Sometimes the goal is just to get something done quickly, not be large or maintainable.


>A well-designed type-inferred statically-typed language either cannot support the following (which is valid Python), or can only do so with tradeoffs that either stretch the definition of "statically-typed", stretch the definition of "well-designed", or would make the language even more intimidating than monads make Haskell

A statically typed language can still have an escape-hatch dynamic type. C# for example has dynamic: https://msdn.microsoft.com/en-us/library/dd264736.aspx, and Objective-C has id.

You get static types whenever you need them and dynamic flexibility whenever you need it. No need to invoke something Haskelly at all.

For more type checking safety in this particular case, you could also say (or infer) that the return type is a union that's either "String | Integer".

I was talking about a possible very flexible uber-language here. Why would it miss those two oldest tricks in the book, that are already present in tons of existing languages?

That said, I didn't say it would have to express all idioms. Just that it would be applicable for as many domains as possible, from low (drivers, etc), to high (scripting, etc).

This particular idiom, as you also note, I don't find particularly useful anyway, but rather a code smell. But if it were needed, the two ways described above could solve it without much issue.

When you need to be quick and flexible, use the "dynamic" type; when you want to be fast and safe and take little memory, switch it to a specific type. The language still covers both cases.
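For what it's worth, Python itself can sketch this union-return idea with its (later-added) typing module; a checker in the mypy vein can then verify that callers handle both cases, while the code stays as terse as the original:

```python
from typing import Union

# The return type is exactly the "String | Integer" union discussed above;
# a static checker can flag callers that forget one of the two cases.
def fizzbuzz(i: int) -> Union[str, int]:
    if i % 3 == 0:
        return "fizz"
    elif i % 5 == 0:
        return "buzz"
    else:
        return i

for i in range(1, 101):
    print(fizzbuzz(i))
```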


I'm not sure what your complaint is.

    import Control.Monad (forM_)
    
    fizzbuzz i | i `mod` 3 == 0 = Left "fizz"
               | i `mod` 5 == 0 = Left "buzz"
               | otherwise      = Right i
    
    main = forM_ [1..100] (\i -> either putStrLn print (fizzbuzz i))


Yeah, that certainly works, but one of Python's particular strengths (and in particular what the original article was about) is as a teaching language. I certainly understand what you're doing there, but I wouldn't want to explain it on day 3 of a high school CS class. :)

The slightly cleaner thing here would be to take advantage of both strings and integers implementing Show. I'm thinking a bit about whether that would actually work well enough in a teaching language; possibly. (Although if I'm remembering my Haskell well enough, you need to enable existentially quantified types to make this work, which, again not really day-3 material. Probably works fine in like Rust, though.)


It's because languages are designed to be niche. There are so many languages that, to get adopted, a language just needs to be good at one or two things.

The "universal" language would be fairly low level, but would allow you to create your own DSLs. So you'd really be designing your own language for each task. That language would be 100% suitable to the task at hand. For example, you'd embed SQL directly into your code, not as a string, but actually parsed by someone's library so it's syntax checked.
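A toy sketch of that idea in Python (every name here is invented for illustration): instead of shipping SQL to the database as an opaque string, a small library can check the query as it is built, so typos fail at build time rather than at the database:

```python
class Query:
    """Build a SELECT statement from Python objects instead of a raw string."""

    def __init__(self, table):
        self.table = table
        self.conditions = []

    def where(self, column, op, value):
        # Mistakes like an unknown operator surface here, at build time,
        # not later inside the database driver.
        if op not in ("=", "<", ">", "<=", ">=", "<>"):
            raise ValueError("unknown operator: " + op)
        self.conditions.append((column, op, value))
        return self

    def to_sql(self):
        sql = "SELECT * FROM " + self.table
        params = [value for _, _, value in self.conditions]
        if self.conditions:
            clauses = ["%s %s ?" % (col, op) for col, op, _ in self.conditions]
            sql += " WHERE " + " AND ".join(clauses)
        return sql, params

sql, params = Query("users").where("age", ">", 18).to_sql()
print(sql, params)  # SELECT * FROM users WHERE age > ? [18]
```

A real embedded DSL (as in the parent's vision) would go further and make the host compiler do this checking, but the principle is the same.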


> This can't possibly work well.

Not that I disagree, but could you elaborate your reasoning here?


There are many decisions that influence language design, it's not just syntax. Differing decisions are made about runtime behavior, memory management, levels of abstraction, and the efficiency of idiomatic data structures.

Even objectively, a good language to write an OS kernel in isn't necessarily equally well suited to, say, game development, or web development, or scientific computing. That doesn't mean there won't be heroic efforts to make a language work in every arena, but at some point most people draw a line based on pragmatism.

Also, often overlooked, there is the aspect of a language mapping well to a given developer's brain. Different people are drawn to different languages for different reasons, including borderline-intangible ones where they just feel more "at home", which translates into tangible outcomes like productivity.


I think there are a lot of commentators here that are failing to appreciate that this is not about which programming language provides the best introduction to Computer Science or Software Engineering, but which language will inspire students to continue programming.

The readership of HN is self-selecting. Almost everyone here was drawn to the world of programming. Very few of you probably had programming thrust upon them.

I teach in the UK where all students from the age of 5 to 14 are now required to study computing. I personally teach students from the ages of 11 upwards, so when I'm looking at the best programming language to use, I don't even think about which will provide the best introduction to Software Engineering.

My primary concern is what will interest an eleven-year-old, whose only experience with a computer is using his iPhone, and whose primary interest in life is football. Or how I can make this interesting for a student whose ambition in life is to perform on stage, and who has already decided not to continue studying Computing for their GCSEs.


Just the opposite. Many teachable subjects, from journalism to data science, biology to finance, increasingly include Python. NLTK, Beautiful Soup, Pandas, IPython Notebook, Matplotlib... Programming is a tool for getting things done, and few languages offer students more tools for doing than Python. On HN we sometimes forget the ubiquity and pure value of automation and analysis. We focus perhaps too much on software engineering (in this case gaming).


> Programming is a tool for getting things done

...and some of those things, Python isn't any good at.

To be fair, it's very very good at others, but you can't argue with the points he's making. They're spot on.

It turns out, people learning programming in school aren't interested in processing scientific data. They're interested in games, and in showing their friends what they've made on their phones.

Python is flat out bad at those things. It was never the intended purpose of python, and it's not a priority for the core team.

Why not teach kids with a tool that excites them, instead of one that frustrates them?


I've been through the Udacity CS101 and progressed well with their programming languages and design of programs courses. These are all in Python but I have covered interpreters, compilers, regex, recursion, types, etc. These all treat python as a tool to learn CS not just the language in itself. Alongside these courses I've also been building my own websites using Django and I'm now at a stage where I feel I can solve any problem in any technology (even though it will take me longer than someone experienced in that technology to learn it and get stuff done).

I've started exploring other languages (C, Lisp, even Java at work, Haskell next, javascript) and feel much more confident with the software engineering stuff that CS courses don't necessarily cover, having managed development teams (git, deployment, CI, TDD, etc). I agree with other commenters that python gets out of the way so that you can explore CS concepts.

Javascript could be a good intro language but students might want to move onto something else fairly quickly. I already tried js first and quickly started CS courses because I didn't feel that learning javascript was helping me understand how to solve problems with programming. I also don't think you can teach all of the core CS concepts with javascript as effectively as you can with other languages (e.g. OOP), so when you do start teaching js they will still come to you and say "how do I put this on a website", "use node", "ok but how does that work? why does that work? etc, etc". And when it comes to websites are you going to teach students to build a webserver in vanilla js (?!) or reach for node that abstracts so much that is going on under the hood?

Smart students are always going to ask questions and I've found in my experience that I've been able to either get the answers or know where to get the answers, by knowing core CS concepts. So whatever language you choose to start off with it needs to be couched in terms of what CS it exposes.


Funny, I have done a lot of game/graphics-related programming using Python and Panda3D, which is written in C++ and has nice Python bindings [0]. Sure, you can't just do "import wheemakegames" but the tools are out there. The same is true for GUI apps. If you want to click on stuff, just have them use PyQt or PySide. I think that teaching students about other packages and how software actually works together is very important. Having said that, packaging a Python program into an executable installer or even just an executable is a pain in the ass ("What sane person DOESN'T have at least one working Python interpreter?!?").

0. http://www.panda3d.org/manual/index.php/Main_Page


One of the issues I've had using Python to teach in the last few years has been the glacial transition to Python 3 for some packages, often with some annoying cross platform inconsistencies (trying to keep up with two or three different tool kits for one purpose because of different support for Windows vs OS X is annoying).

I really enjoy Python as a language, I enjoy using it for teaching fundamentals, but I really don't enjoy using it for teaching 'the fun stuff' as far as high school kids are concerned.

I might consider switching to JS in my new school. I was looking forward to perhaps using Swift but the new place is Windows-based.


Nothing got people into programming more easily than good old BASIC did.

MODE 4; LINE 10,10, 250,250

Instant attraction.

Beginner steps need to be small and need to give an immediate reward. BASIC got that right; almost every other language stood so far away from the available devices that the spark failed to jump the gap in many places.

Smalltalk is another language that got this right, people - especially beginners - need instant gratification to move from curiosity to immersion.


Yes. BASIC also had an awesome help system, where an eight year old could learn how to draw things on the screen without any teacher, even if they barely spoke English.


I've had good success teaching Lua with https://love2d.org/. Super easy to create click-and-run executables that work on OS X and Windows. Mobile's a different story (obviously), but as a teaching language + environment I find it, well, lovely. :)


There are some mobile game SDKs that use Lua, like Gideros and Corona. It's possible to do live coding in a simulator if you use ZeroBrane Studio, but I'm not sure if it's possible on a real phone.


I agree. As a beginner - and even as a professional, in an average-case situation - what you need in terms of abstractive power and optimization potential tends to be something akin to a classic microcomputer Basic, plus some modern API calls. You're already way ahead of the game when you consider JS in that light. You can write a lot of practical, maintainable JS that only uses arithmetic, string manipulation, branches, loops, arrays and simple function calls. POD objects, too, if you get a little more fancy.

Teaching towards that subset is also a healthy instruction in the rule of least power - once you grasp how to use the basics, you have a fighting chance of picking up most other languages too. It's way more unhealthy to guide newbies into cargo-cult thoughts about "proper" something-something design (insert favorite methodology here) - a learner has to make their own mistakes to learn properly, but they should do so from within the subset of programming concepts that predate current languages, because those are the things that have lasted through multiple generations of software.

If we consider Python, Ruby, JS, C#, and Java - the design of most of their core features was locked down all the way back in the 1990's. So all the new ideas they brought to the table tend to be "baby implementations", frozen in a half-correct state of refinement. But the things they got _really_ right tend to be the things that were there and usable, but not refined so well in prior languages.


As already mentioned in the thread, it depends on what you want your students to learn. My university's CS courses (aside from theoretical stuff) started with Racket for the basic concepts, a good choice. We were then taught Java to handle a small project, which was questionable. But after that we had no problems with the language when we had to implement algorithms, which would look quite similar in most other OO languages, maybe with added memory management.

I think JavaScript's prototype-based inheritance and global scope would make it hard to abstract concepts learned in it to other languages. Strict ES6 might help a bit. When switching from Scheme + Java to JS, you can get the Java+Scheme mix, that JavaScript is in my eyes, and use it accordingly. Switching the other way round you would basically have to relearn the concepts, because JavaScript does not express one distinct enough, but is an exotic mix of them (or an ugly hack as others might say).

Outside of CS though, I think introducing people to JavaScript, PHP or even advanced Excel is highly beneficial.


A lot of the discussion in here revolves around "not being able to get an interactive GUI up and running quickly". If this is indeed the main problem with teaching programming with Python, there are many ways of tackling this problem without resorting to teaching Javascript. Don't get me wrong, Javascript is an interesting language, but it has many oddities that could throw off a newcomer.

Sites like OpenProcessing[1] and ideone[2] already allow users to compile and share code in the browser. Why not create a Python GUI "toolkit" that interacts with the browser and can be accessed from an interpreter implemented in Javascript? Granted, this isn't the "vanilla" Python experience, but if you're just ramping up, having shiny bells and whistles to show off to friends is definitely a plus.

[1]: http://www.openprocessing.org/

[2]: https://ideone.com/


An even simpler solution would be Python mode for Processing, which has most (if not all) of the benefits of Python and Processing in one:

http://py.processing.org/


The simplest solution of all would be to just use JS. I don't think that any syntactical or semantic niceties in Python matter much to an absolute beginner, and the extra layers of indirection you guys propose will just lead to more moving parts and frustration.


Wouldn't something like pyjs (http://pyjs.org) solve all of the author's concerns, while still teaching Python?


Brython is even better.


> A month later, more questions: "How can I give this game I made to my friend? Even better, is there a way can I put this on my phone so I can show it to kids at school without them having to install it?"

Giving a Pygame game to your friend is pretty easy. It usually has very few dependencies and should be trivial to package. And it's multiplatform (Mac, Windows, Linux, including even exotic devices like the Raspberry Pi).

Putting a python game on a phone is definitely more of a problem, but at the same time, something that is NOT made for a phone's touch controls will perform poorly anyway if you intend to do a demonstration with it.
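To substantiate "trivial to package": a minimal freeze script along these lines turns the game into a standalone executable (this sketch assumes cx_Freeze; py2exe and PyInstaller work along the same lines, and the file names here are made up):

```python
# setup.py -- hypothetical cx_Freeze script; build with "python setup.py build".
from cx_Freeze import setup, Executable

setup(
    name="mygame",
    version="0.1",
    # Bundle the pygame package and the game's asset folder alongside the exe.
    options={"build_exe": {"packages": ["pygame"], "include_files": ["assets/"]}},
    executables=[Executable("mygame.py")],
)
```

The resulting build directory can be zipped up and handed to a friend with no Python install required.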


> Putting a python game on a phone is definitely more of a problem

Not necessarily. kivy[1] is an alternative to pygame (for desktop builds, it uses pygame/sdl under the hood iirc) but it supports exporting to android as well as desktop platforms. I haven't done a ton with it so take this with a grain of salt, but it looks like it could be a viable option in this case.

[1] https://github.com/kivy/kivy


I might just not have played with it enough at the beginning of 2014, but Kivy had some serious weirdness in how it approached its layouts, which I feel was down to how it tried to be multi-platform. This was mostly when developing a basic desktop app which I could then shift over to mobile as well.


I have a concern with Python as a teaching language in contexts where people are not necessarily into STEM and are inexperienced or marginally experienced as programmers.

There's no neat and tidy way to deal with Python 2 versus 3. You can say use 2. You can say use 3. You can kick the issue down the road by requiring programming via web browser. But in the end anyone who catches the programming bug winds up facing a decision that even seasoned professionals struggle with.

Python as a teaching language is great for replacing MATLAB. After that it entails tradeoffs as fundamental as character encoding. Not that I'm saying JavaScript is the answer.

If I were trying to get someone juiced about programming, and solving the problems that MATLAB does wasn't a feature the audience cared about, I'd pick Processing. It does interesting things with sound and images and exports to HTML and JavaScript to run in the browser. All that, and it's small, well documented, and Googles up a low-noise search results page. Yet it runs as deep as you want to dive into the JVM and its languages.


Telling someone to use Python 2 is a neat and tidy solution. What problems would this possibly cause? Python 3 is fine too.


Python 2 is not nice as soon as you face something containing Unicode characters.


My high school's student laptops were super restricted (I was suspended for installing Eclipse on an unrestricted computer), so I played with JavaScript a lot as a senior. I didn't have a good understanding of the virtues of classical polymorphism and whatnot at the time, so JavaScript's unconventional OOP didn't bother me; I was content writing procedurally, treating objects as structs. JS was also my introduction to the notion of first-class functions, and it made them seem so natural and essential that I get angry when languages don't have them. I think JavaScript just barely misses the mark as a great programming language. But its new standard might change that and make it the best language for learning. If a language like Swift were as ubiquitous as JS I would have a different opinion.


Even if "runs in the browser" is a hard requirement, surely there are options like Dart that don't have as many horrible quirks and edge cases - quirks that are probably no trouble for an experienced developer, but can be very confusing to a newcomer - as Javascript does?


This is always a hotly debated topic. When I learned to program, I started because I wanted to understand how computers work. Javascript in a Web browser is so many layers above the basics of a computer; I just don't think it is a good choice for a first language. Obviously, it depends on your goals. If you are changing careers because you just heard that you can make all this money web consulting, and you couldn't care less about computer science, then maybe go for Javascript; I don't know much about people who got into programming via that route.

I think Python is probably a useful first programming language for someone interested in doing mathematics or algorithms, but I still think, and this is totally against the grain, that C is a good language for beginners to programming. However, it may not be the best language to teach undergrads if you expect as many as possible to stick with CS; then again, maybe that is not a terrible thing? Elitist though that is, I think C provides enough of an abstraction on top of the machine while still allowing you to get some idea of how computers and memory work, from a CS rather than an EE perspective.

The other benefit is that once someone wants to know how to create a game or how to make a GUI program ... the resources are readily available and in fact most of the libraries will be written in either C or C++. I think C over C++ because OOP concepts and syntax of C++ can be needlessly overwhelming for a newcomer to programming.

I feel like it also gives the beginner the feeling of actually creating a program. Those interested in learning to program will always be familiar with normal executables such as EXEs on Windows or binaries that can be executed on Linux. Using C means you can create something just like those programs they are familiar with; it gives the learner a sense of really accomplishing something, in my opinion. Creating something you have to run in conjunction with another program, and that does not seem like a "typical executable", can leave the user feeling like "OK, how do I create a real program, though?" This is not to knock interpreted languages, at all; it's just something I think is important for some learners.


It's a fair point that the article makes. However, I started about 1.5 years ago with a programming group for kids in high school, and my choice was Python. I have since not regretted this decision because of one factor: fun. I could have gone with a weekly lecture on all sorts of theoretical things they will learn again if (or when) they start at university.

Instead, I wrote a wrapper around OpenGL that provided some functions like "drawRectangle()" or "drawImage()", and they have used it to build all sorts of things. Additionally, they are constantly wanting to try other stuff. I don't think they would have done this if they didn't enjoy the process of writing code.


I looked at a bunch of things for teaching and I settled on JavaScript. It's not ideal as a language, but it ticks enough of the required boxes. The primary benefit was being able to teach someone without having to install any software. Quite often I would have to use locked down school computers, so doing everything from the browser was crucial.

I put together a simple library that has global functions to let the very first programs do interesting things while remaining simple.

I made a MediaWiki plugin so that I could have code examples that you can edit and run within the page. http://fingswotidun.com/code/index.php/Main_Page

Having simple examples that you can see, run and change helps a great deal.

This is my standard starter program.

http://jsbin.com/camocarohi/1/edit?js,output

It does something, and the entire program can be viewed on the one screen, so you can keep it in one place and point to the parts to show what is happening. When that question comes, "How can I write a game?" (and it _will_ come before the first lesson is finished), I can direct them to the list of available functions (http://fingswotidun.com/code/index.php/API), point out the first entry, "clear()", then put that in front of the fillCircle() call. The program goes from a drawing program to a moving-dot game. From there you just go with the flow of "How do I make it do (X)?"
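
The drawing-to-game progression described above can be sketched like this; clear() and fillCircle() are hypothetical stand-ins for the helper library the commenter describes, mocked here so the sketch is self-contained:

```javascript
// Stand-ins for the drawing helpers described above (hypothetical
// signatures; the real library draws to a canvas).
var screen = [];
function clear() { screen = []; }
function fillCircle(x, y, r) { screen.push({ x: x, y: y, r: r }); }

var dotX = 0;
function frame() {
  clear();       // wiping the screen turns a drawing program into a moving-dot game
  dotX += 1;     // move the dot a little each frame
  fillCircle(dotX, 50, 10);
}

// Simulate three frames of the game loop.
frame(); frame(); frame();
console.log(dotX, screen.length); // 3 1
```

The whole point is that adding one call (clear) to a two-line frame function changes what kind of program it is, which is exactly the "go with the flow" teaching moment.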

This is where a single hour lesson with my own daughter (age 11) ended up after starting with that base http://jsbin.com/coguvokupu/1/edit?js,output

The single greatest downside to using JavaScript is that it is non-blocking: calls don't wait. A beginner will always assume that

    var spaceShip=loadImage("myship.png")
will have the image loaded when the next line of code is reached. Events and callbacks are not principles you want to deal with as a first step. Similarly, you have problems when they try

    while(true){drawCircle(rand(),rand(),rand())};   
The reasons why they can't do things the way they assume are at a more advanced level.
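
The loading trap can be made concrete with a sketch; this loadImage is a hypothetical callback-style version, mocked with setTimeout so it runs anywhere:

```javascript
// Hypothetical callback-style loadImage, mocked with setTimeout to
// simulate an asynchronous load.
function loadImage(name, onReady) {
  setTimeout(function () {
    onReady({ name: name, width: 32, height: 32 });
  }, 0);
}

var spaceShip = null;
loadImage("myship.png", function (img) {
  spaceShip = img; // only safe to use the image from here on
});

// The classic beginner trap: on the very next line, nothing is loaded yet.
console.log(spaceShip); // null
```

The while(true) loop has the same flavor of problem: the browser only repaints between events, so a busy loop freezes the page, and animation has to be handed off to something like setInterval or requestAnimationFrame instead.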


Since I first came across Scratch, the programming language / environment produced by MIT, I've recommended it to a number of friends who wanted their kids to learn programming. It covers the basics of software development (conditionals, loops, etc.) using intuitive and attractive GUI elements which can be combined and manipulated to easily create interactive programs such as animations and games.

It also integrates well with the web (Scratch 2 programs can be created from within the browser) so it's very easy for users to show the results of their work to others.

http://scratch.mit.edu/


JavaScript has been around for close to two decades. I'm somewhat curious whether there's a change in JavaScript that changed your mind, or is this something you think has been true for the past 10+ years?


It is amazing how much of an IDE a web browser can be. I use TypeScript these days to prototype stuff and it is going pretty well. Reload, and your changes are up. Nothing else even comes close.


I think any REPL system does the same thing, though.


To some extent. I think pry comes closest to re-creating a truly interactive development experience but the browser is slightly better especially with source maps.


For getting people into programming with graphics quickly: Processing (http://processing.org/). Since Processing is a thin wrapper around Java, the programming concepts transfer easily. Processing also has a Python mode, and there's Processing.js for the JavaScript fans. If you're a Racket/Scheme type person, ClojureScript might be an option, with Quil for doing Processing-type graphics.


As much as I fear a Dijkstra apocalypse and the brain damage newbies will sustain by starting with JavaScript (and thinking it perfectly OK for arrays to be hash maps), I started with 8-bit AppleSoft BASIC, and I have to say that, while the concepts of functions and recursion only clicked for me after my introduction to Pascal, I turned out well enough. Also, I had a good idea of how machine language programs operated; they map very neatly onto what BASIC could do.


I don't think mobile gaming should play a dominant role in teaching programming. (If I wanted to teach that, though, I wouldn't reach for Python.)


One important reason for the success of C was its library. One of the reasons for the success of Python is its big library and the wealth of add-on modules.

But the UI side is Python's weak point. The other problem today is the trouble with the Py2->Py3 migration, though in the long run the UI problem may cause even more trouble. Python is still a good language on the server side, even though PHP was faster and still has a bigger market share.


I learned in QuickBasic and later Pascal, and for that purpose they were great. In my first stab at Python I had my proof of concept running within an hour, but I did not like the hoops I had to jump through: mostly the tabs, the v2/v3 incompatibilities, and the lingo. PHP and C# are also good languages for starters. Today it's JavaScript, but my impression is that becoming a software architect in JS is not that easy.


I taught my son JavaScript (along with html and css) because I think the browser is a great platform to expose new programmers to programming. I also think it's the most relevant because any knowledge and experience acquired is likely going to count in some way in their future.

The JavaScript adventure can continue when we look at node on the server and even use it to control embedded devices like Tessel and Espruino.


Let's run with the original article for a bit.

What textbook is there that is easily available in English that could support a 30 lecture introductory course?

What 'environment' would you use? Text editor plus browser or a browser plugin for JavaScript editing or something else?

What base selection of libraries would be included with each project in your opinion?

Possibly linked with my first question, what kinds of projects/assignments would you suggest?


My impression is that Python is certainly not a better language than JavaScript. They both have their quirks, so it ends up being a matter of preference. For example, Python doesn't have real lambdas; on the other hand, it has named parameters.

The only reason to use Python that I can think of is maybe SciPy.

It made me sad to witness Python gaining so much popularity as a beginner language in the last year.


The idea that readability matters influences the whole Python ecosystem and community. Python 3 also eliminated quite a handful of quirks (e.g. division, Unicode). By the way, Python has been used for introductory courses for years now, not just since last year.


You can 'fake' real lambdas in Python, exploiting the things that it does allow. My approach uses continuation-passing style: https://github.com/yawaramin/lambdak


I'm curious about this sentiment about python lambdas. What about them is not real?


I suspect much of it is a conflation of the properties of anonymous functions with the properties of first class functions. People see first class functions implemented as lambdas and end up focusing on the wrong part of it (rather than fully understanding what is going on).

I realize that more powerful anonymous functions can make for prettier code, but Python's single expression anonymous functions cover an awful lot of the situations where making a name would make the code substantially uglier.


It's been a while, but doesn't it have restrictions, like you can only use it for one liners?

Edit: found this via Google - "lambda functions can not contain commands, and they can not contain more than one expression"


> and they can not contain more than one expression

I don't believe that is different from e.g. scheme. Ideally, a lambda expression would look like a normal function body, but I definitely wouldn't discount them as "not real" on this basis.


Iirc in scheme you can put anything into a lambda that you want.

Of course the Python lambda is "real", the correct word to use would have been "limited" I suppose.


I still recommend Ruby as the first language because it has all of the features you want to know about as a newbie, and at the same time there are very few quirks. The things you sacrifice are performance and memory efficiency. I have seen junior programmers pick up Ruby and be able to write production-ready code in a couple of weeks. I guess the learning curve is not as steep.


I don't think it's a performance and memory issue that the author is raising. Ruby, much like Python, mostly wraps some lower level libraries for game dev. In both cases, you either make sure your users have dependencies installed, or you use one of the clumsy "compiler" solutions that don't always work well.

Consequently, Ruby isn't any better for the author's usage case (kids wanting to learn to write games that are super easily portable). It can be done, but it's not going to be as easy as some of the other languages/toolkits (for that specific usage case).


> for the author's usage case (kids wanting to learn to write games that are super easily portable)

This is an artificial use case. Kids wanting to learn to write games should be concerned with whatever works on the hardware they have available. Portability is premature optimization at this point.


Really? You don't think they'd want to be able to show their friends?

Mobile might be an artificial use case or a premature optimization, but it is solved with his choice of JS.


> Portability is premature optimization at this point.

It really isn't. If I am a new developer, you're raining on my parade by telling me that I or my friends will have to jump through a bunch of hoops to run my game.

In the context of learning to program as a younger person, this is a pretty big deal. This is anything but an artificial usage case, and is what the author is specifically looking at (education).


Expectations, expectations.

Mobile means stuff looks easier than it really is. If you cannot be bothered to jump through hoops, this is not the career for you. What I recall of my novice days is literally one hoop after another. It is just that some people enjoy jumping through (this particular type of) hoops.

And regarding the social aspect, think of the days of microcomputers before the PC. I am too young to have experienced it first hand, but I recall my uncle having a Commodore 64 and letting me use it. What I recall is that he had two different circles of friends: the C64 guys he traded knowledge and warez with, and his actual friends, who really needed to come over to his home to be shown the new feat (if they were interested at all).


I would not recommend Ruby over Python, as the core language and libraries make the Ruby API set much larger than Python's for doing the simplest things. Example: how many APIs do you need to traverse an Enumerable in Ruby versus a list in Python? I'd suspect it's easier to just learn 1 API for them instead of 20.

For now, if they want to learn something as a start, even though it still has its quirks, have them write in Swift. You get a bit of everything, plus you can code some UI without deep-diving into ObjC.


I battle with the same problem with my students. I love teaching Python, and I think it is a fantastic teaching language. JavaScript, although harder to learn, is much more exciting for the students to play with, and libraries are always just a link away.

My main reason for not switching completely over to teaching JS is the nightmare that is debugging simple syntax errors.


Most editors have a JSHint plugin, which will check your code as you go and point out when you've made a syntax error or used a bad practice.

I use it with Syntastic + Vim, and it's been a long time since I've loaded up code in a browser and hit a syntax error. Definitely worth checking out if that's the last hurdle before switching.
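
For what it's worth, a minimal .jshintrc sketch along these lines (the three option names are from JSHint's documented options; pick whichever warnings suit beginners):

```
// .jshintrc -- JSHint reads this JSON file from the project directory,
// and it tolerates comments in its config file.
{
  "undef": true,   // warn when a variable is used without being declared
  "unused": true,  // warn about variables that are declared but never used
  "eqeqeq": true   // require === and !== instead of == and !=
}
```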


I would actually solve the game/show problem a different way. I would teach kids how to build MVC web apps and set up a cheap server so that apps could be "shown" to anyone on any device. If you use a mobile-first design philosophy for the webapps then phones are not likely to be an issue.

Yes, there is still the issue of response time but no language works for everything. You have to accept certain limitations and make compromises.

Actually, I might no longer even use webapps. Nowadays it is pretty cheap to give every kid a Raspberry Pi, so wxPython or Pygame can be used for everything. The showing problem boils down to borrowing a monitor, keyboard and mouse. You could actually carry around a small portable Bluetooth keyboard and a mouse in your backpack with the Raspberry Pi.

So, why stick with Python? It is because Python is a leader in moving up the evolutionary curve towards functional languages and functional reactive programming. I still think that Python is easier to get into than Erlang or Scala, because you can "make mistakes" but still have a working program. And by examining the implications of mistakes you have a learning opportunity too. Note that JavaScript has been following Python's lead in adding more functional features.

There is a general language evolution going on and Python is in the thick of it, maintaining compatibility with the old imperative style beloved of data-crunching scientists along with the functional goodies that Haskell, Erlang and Scala focus on. It is not out of date. It is not mired in the past like Perl and PHP. What was good about Python 5 years ago is still good.


> I would teach kids how to build MVC web apps and set up a cheap server so that apps could be "shown" to anyone on any device.

That's a pretty big hurdle for a newbie to get over before "Hello World".

JS in the browser has the massive advantage of being pre-installed and requiring no special tools, with an immediate edit-run feedback loop. Wrestling with a server and code deployment etc is a problem they can learn to solve once they've been bitten by the programming bug.


>I would actually solve the game/show problem a different way. I would teach kids how to build MVC web apps and set up a cheap server so that apps could be "shown" to anyone on any device. If you use a mobile-first design philosophy for the webapps then phones are not likely to be an issue.

Emm, MVC web apps are not appropriate for games. They could be the backend for SOME games (quizzes, RPGs maybe, etc.), but for the vast majority of games you need a canvas to draw on.

So that would still require using Javascript, not just Python on the server side.


I don't hear many 13-year-olds wanting a computer with no screen, keyboard, or mouse. I do hear a lot of kids with smartphones who want to play games on them.


I'm not sure you're responding to the problem the OP has. He isn't saying Python isn't good for all the things you've listed. He is saying that for starting off in an introductory classroom environment, Python is too complicated for simple interactive applications.

An MVC web app certainly goes above and beyond Tkinter in complexity. If you need to develop the web app, then you'll probably end up teaching JavaScript anyway to accomplish what he is addressing. Having tutored friends in introductory courses before, functional programming would certainly also go over their heads.

So while I agree with your points, I also agree with the OP that perhaps Python isn't the best language to use for an introductory course (if interactive GUIs are on the to-do list, at least), but maybe bring it in during the second or third course.


You realize all these "new" concepts you think exist in these "modern" programming languages have been around since the early 1960s, right? MVC Web Apps ugh ... Maybe for some ... I just think there is a class of learner that wants to actually learn about computers and not just show web sites to their friends.


There are other languages that run in a browser and might be suitable for teaching. How about Dart or Scratch?


A few months back I wondered whether one could have an introduction to programming based on the language equivalent of "Evoland"[1].

For those who don't know this, it basically recreates the history of (console) role-playing games. You start out in something that resembles Zelda, then you'll get color, 3D movement and other perks.

One could do the same with a programming language, by using different features, or even slightly tweaking syntax and semantics. One approach might be similar to how some BASICs operate: line numbers and goto in the beginning, then subroutines, objects, etc.

Or something where people don't have to follow actual history, and we go at things from a more mathematical/functional point of view (it shouldn't be a problem to do this with Racket dialects).

Then again, IANAT. Never mind that I would think that the environment is actually more important than the language itself (for most normal values of "language"). So please don't inflict Eclipse on newbies.

1: http://evoland.shirogames.com/


I figured this out when I was teaching my nephew programming. I started him on Python. He got bored quickly because he really couldn't see the results of what he did and he really wanted to make simple games.

Javascript turned out to be much better for making simple games and doing simple puzzles.



I completely disagree with the argument here.

The class isn't a class to teach graphics or to teach how to make android games or how to put something on a phone. Beginning computer science classes explicitly should not be about those sorts of things.

Beginning computer science classes should explicitly be about what a program is, what a function is, abstractions, and so on. Not "what does code look like". Not "how do I make a hello world program". No, what a program is. Lambda calculus or a context sensitive grammar and lexer or what have you. If you look at SICP, it got it 100% right there.

Now, you say, he said programming, not computer science, and your argument presumes computer science. Even so, Python and JavaScript are terrible languages for beginners if we ever want to see our craft improve. If someone wants to be a good programmer, they should be able to understand the full implications of having a proper type system, what memory safety and management is, how functions work, and so on. They should be exposed to Haskell, Lisp, Rust, or the like.

If we teach a beginning programmer to use C or Python or Javascript, we're simply teaching a new generation that memory leaks and buffer overflows are not a solved problem. That our languages are so poorly designed you have to constantly check if your type is what you think it is. That resource ownership is not a solved problem. That functions are weak and uncomposable. By God, you're withholding the true beauty of programming from them and showing them some mockery.

If your students find the idea of having a webpage or a game that they produced more appealing than delving into resource allocation issues and understanding what a type system is doing, then they shouldn't be told they're professional programmers; they should be hobbyist coders. I'm all for people writing toys like games or webpages in unsafe languages that have learnt little from the last 50 years of CS research... coding but not programming. That's great. I'm not at all for these same people being taught that they should be writing security software, banking software, and doing this stuff professionally when they have no clue what referential transparency is.

Truly, one of the reasons professionally developed software is rife with issues is that we don't teach programming or computer science in our computer science courses; we teach some coding and software development, and never even let students know what true programming is until the theory of computation course in senior year.


Beginners who aren't in a formal class setting (and his description more closely matches mentoring someone outside the classroom, a son or daughter) have not yet decided to invest in learning things from first principles, as a university student has already committed to doing. So for these beginners there must be some kind of reward or immediate application for their efforts if we expect them to continue past the stage of early burnout. This is what the post speaks to.


Do note that the author is not a Professor teaching CS. If you click through the link in the first sentence of the post, this is the question he was trying to answer:

"What's a good first programming language to teach my [son / daughter / other-person-with-no-programming-experience]?"

His post wasn't really about teaching CS to students who are paying for years of classes. For his target audience, motivating them and giving them tools to quickly build usable software is important. Being able to trivially share said software is another plus.


Everything the GP said above is correct, though. Since so many people's first exposure to programming is through one of these languages, many people are simply unaware that it's possible to do better. Many people find it hard to switch from their first language. I agree with you that Haskell, Rust, and (to a much lesser extent) Lisp have the problem that it's hard to get started with easy graphical libraries aimed at beginners, though I don't agree that that's the case for Python. This is why historically there were languages like Logo explicitly designed as "Lisp for beginners," but today most such attempts seem to simplify programming to the point that it can hardly be described as programming at all (thinking of tools like Scratch).


If the main focus of the class is to code games for smartphones, the choice should be the Unity engine. BUT that would be entirely the wrong choice for learning programming. Games are multi-discipline projects; programming is just a small part of them. There are many distractions when writing a game, getting graphics, sound and AI to 'look and sound right', and the visual 'rapid prototyping' workflow of modern game engines gives you quick results, but you don't learn anything about programming. You just cobble together code snippets from Stack Overflow and tinker around until it feels right.

Python is a much better choice for learning programming than JavaScript IMHO; then take a quick tour through Lisp, Forth and C, and then maybe worry about creating games or UI apps.


By the way, I earn my living with games programming.


Thanks for introducing me to dadgum. I read a number of his older articles and I must say his opinions resonate with me. I'm going back in for some more. :)


A language like Perl 6 would probably be a good first programming language if some good libraries become available for it.

Printing one line of text is one line of code. Then you can start teaching branching, loops etc.

Object-orientation is also baked in.

Oh, and by the way, there is this thing called "types". It helps you catch some kinds of bugs early and can potentially make your programs compile, and thus run faster, in some cases. You can learn about that too without switching languages.

Some functional programming concepts can also be taught.

Also, it has C-style syntax, so switching to another similar language later on would be easier.


You could also not use Perl 6 for that.

Today, you could use almost any language for that.

Thanks, people who got stuck with Perl 5 code and now have to deal with it. May the whole concept die in a fire.


I'm sorry to come off as harsh, but I -- without any joke -- thought I would have better things to do with my time on this planet than deal with some scumbag's Perl 5 code. I don't think the solution to the "shitbrain writing Perl" problem is more Perl. Fuck that in the ass.

Well, you say, isn't that a problem in every language?

Perhaps. But I don't have to deal with it in every language. And every other language seems to have a type system that will let me know when something idiotic is being done, or exception handling, or "well, that pointer was null, so I'm done for the day" behavior that Perl doesn't have.

And just as I've (sometimes reluctantly) given up on old relationships I'd invested a lot of time in as being not for me, or anyone for that matter, so it goes with fucking Perl. It's not FOR ANYONE.


The purpose of a teaching language must be to show students the core concepts of programming. Since JavaScript doesn't have proper polymorphism, inheritance, hell, even types, I don't see how they would benefit from being introduced to it initially. We started with Java, and while it was a pain in general, the concepts were clear and you could easily move on to any other language afterwards. We also had some Scheme and C, just to have a look at other programming languages, but didn't do much with them.


>The purpose of a teaching language must be to show students core concepts of programming.

Wrong. If the student isn't inherently interested in CS, teaching them all of that up front will mask the fact that it actually takes very little to get basic programs up and running.

For example, a sysadmin might want to learn some programming to automate some basic tasks. You don't need to learn about polymorphism or type systems to learn enough to become productive.


Just add HTML and a web framework at that point and all his concerns fade into the background. Teaches marketable skills too.


JavaScript/HTML/CSS is the new lingua franca, "understood" by all computers and all OSes. It is always useful to learn, whatever the purpose of the student. If the purpose is to learn computer science, SICP remains the best source of inspiration, even if Scheme is out of fashion. Do you know if Scala is used for teaching?



I don't like this concept of "first teaching language".

It does not matter at all which one one learns first! It's not the language, it's the ideas and the way of thinking...

This is like debating whether it is better for a baby to speak English or Chinese in order to learn how to walk.


Warning: this is a very opinionated comment.

One reason I haven't learned how to write web apps is that (sorry, but) I consider the web-app software ecosystem to be not very good. One of the modern web's defects relative to other "software ecosystems" is that it is too specialized IMHO for media consumption, media distribution, entertaining and being entertained, and persuading and being persuaded by "emotional" appeals (more precisely, appeals to System I rather than System II). Another one of its defects is that it is more complex (and consequently harder to adapt to new situations and new purposes) than most other software ecosystems.

The topic under discussion is what programming language (and by extension, what programming environment and what software stack) to recommend to beginners. Here is my suggestion: it depends on whether the beginner has the kind of mind that can stay engaged while alone and while staring at a white screen containing only black text all day. If the beginner can do so, then I suggest that Emacs (the version from the FSF) is the best software stack for most beginners to learn.

Emacs is often considered complex, but its complexity is of a different kind than the complexity of the modern web. Emacs consists of a small core, which anyone would have to master to be able to modify and build upon Emacs, and a large number of optional add-ons. This kind of complexity is more acceptable than the kind of complexity the web has, as long as the beginner is competent enough at deciding which optional add-ons are worthy of his attention. In the case of Emacs, 95% of them are probably not. (More on this 3 paragraphs down.)

Most of the changes I want to make to my personal software environment (as a heavy Emacs user and experienced Emacs-Lisp programmer) can be made in Emacs. In other words, Emacs is a kind of non-proprietary "middleware", just as the web is a kind of middleware (which is admittedly non-proprietary in the parts most essential for a beginning programmer) and just as terminal-emulation apps plus TTYs and PTYs plus ssh plus the ncurses library and cursor addressing using ANSI terminal codes is a kind of non-proprietary middleware on which a wide range of software applications (or the user interfaces of the applications) can be built. And I recommend Emacs over the "platform" consisting of terminals, TTYs, cursor addressing and traditional Unix shells (including rc) because (very briefly) too many of that platform's design decisions came out of an environment so constrained in computing resources as to be mostly irrelevant today, and because the common mouse idioms really did represent an advance in the state of the user-experience-design art. Hey, since I've already risked offending a large HN contingent (web programmers), why not risk offending a second large contingent (those who like apps that use a terminal for interacting with the user)!

I am aware that not all prospective programmers can stay engaged while staring at plain text all day. They would get bored. To stay engaged, they need more stimulation, e.g., colors, graphical elements, things that move around on the screen (or they need a social element, social interaction being very stimulating for humans). For them, maybe the web is the best environment in which to learn how to program (since it is difficult to learn programming while continuously staying engaged socially). I concede that I have not really studied the issue, so I am (provisionally) willing to believe the general consensus on this page -- but only when it comes to beginners who cannot stay engaged while staring at plain text all day.

Since 95% of Emacs's add-ons (including almost everything in ELPA and MELPA and much of what is distributed in the Emacs app itself) are not worth learning or embracing, perhaps I should add to my recommendation of Emacs the further recommendation that before tackling Emacs, the beginner should first explore and learn how to use OS X so as to give him or her a fighting chance at learning how to tell which 5% of Emacs is worth learning and worth adopting. (I consider most popular apps on OS X to be well-enough designed to be worth learning, and my hope in making this recommendation would be that those examples of good-enough design will rub off on the beginner to the extent that he or she would be semi-competent in deciding what parts of Emacs to learn. TextMate in particular has a very well-designed user interface, but TextMate is just a text editor for coders, not a flexible middleware like Emacs is, so stay with TextMate only long enough to get an idea of what good user-interface design is.) Certainly, I believe that most beginners should stick to graphical Emacs so that the common mouse idioms (moving the insertion point, selecting by dragging) are available, leaving Emacs running in a terminal to experienced employed programmers who need to ssh in to servers a lot. Also, stick to dired mode as opposed to other ways of navigating and making changes to the file-name space, and avoid org mode, shell mode, eshell mode and especially term mode: org mode makes you learn too many new commands and keyboard shortcuts, and shell mode, eshell mode and especially term mode are too rooted in the very old "ANSI terminal, cursor addressing and Unix command line" way of doing things, which IMHO is overrated on HN unless the goal is to become a professional sysadmin or maybe devops person. And all 4 of those packages/libraries, org mode in particular, do not allow the user to use the mouse enough IMHO.

A more important reason I haven't learned how to write web apps is because the web is our civilization's most important medium for reading, publishing writings and browsing (in the old sense of "browsing" as in browsing through a section of a book store) through collections of textual documents, and IMHO the more people writing web apps, the less suited the web becomes for this essential civilizational function -- because as a general principle, when a software ecology such as the web is adapted for a purpose A then it naturally tends to become less useful and less good for purpose B. In other words, a complex software system serving 2 masters serves those masters less well than a system serving only one master.

Again, I realize that not everybody finds reading static textual documents on the web stimulating enough to keep at for large blocks of time. However, for those of us who do, and who are using the web for a serious purpose (e.g., learning, teaching, inventing, creating new scientific information), the web of 1998 (when the web was already pretty good for reading, publishing and the old pre-electronic sense of "browsing", but had not been significantly adapted or specialized for writing web apps) was generally superior to the web of today. And I believe that the people for whom the web of 1998 was generally superior to the web of today deserve some consideration, and I choose to show them some consideration by refraining personally from learning how to create web apps.

I am receptive to follow-up questions via email, especially from beginning programmers.


I did not sufficiently explain what I mean above by including "traditional Unix shells" in the list of things over which using Emacs is preferred.

In my .emacs are calls to such Unix "command line" programs as Rscript, OS X's open, sudo, rm, gzip, chown, chmod, /opt/mozilla/bin/firefox, OS X's screencapture, fetchmail, OS X's system_profiler, shutdown, ping, ps and growlnotify.

A popular and very much traditional way to call such programs (even in languages like Ruby) is via a traditional Unix shell (invoked usually in languages like C, Perl and Ruby by the "system" function), but there is no need to do that. In Emacs, for example, one can bypass the Unix shells by writing something like (call-process "sudo" nil nil nil "mv" "-i" "/foo" "/bar") -- the three nils are call-process's standard infile, destination and display arguments. call-process in turn does a fork and exec, and the exec gets the command name (in this case "sudo") and the command's arguments (here, "mv" "-i" "/foo" "/bar").

Actually, "sudo" might have been a bad example because for all I know it is implemented by calling out to a traditional Unix shell (though one can certainly implement it without doing so). I'm not a purist striving to eliminate every use of a traditional Unix shell on my machine -- that would be a waste of my time.

Point is, I do not write shell functions and shell scripts anymore -- I write Emacs Lisp code instead.

I used to make heavy use of traditional Unix shells and other parts of "TTY world": from 1997 to 2005 or so, my only interaction with my computer, a Linux box, was via the Linux console. For the 5 years before that, my only interaction with Unix and with the internet was through an old-school actual terminal and (later) an IBM PC XT running a terminal-emulation app and almost no other apps. But I've discovered, slowly over the years, that I'm happier writing Emacs Lisp even for things that are traditionally done with shell scripts and with shell functions defined in ~/.bashrc.

For many experienced Emacs users, Emacs is just another component of "TTY world". These users typically use Emacs in "text mode" or in "-nw mode" (i.e., without a windowing system) and perhaps they use shell mode, eshell mode or ANSI-Term mode, all of which pretty much require the use of a traditional Unix shell. But it's not difficult to use Emacs without making constant use of the components of TTY world.


Life isn't all about games; sometimes it's about matrix multiplication, and NumPy and SciPy do this exceptionally well.


Sure, but that's not how you start.


- No memory management

- No pointers

- No cache

- No branch predictor

- No interrupts

- No endianness

- No alignment

What exactly are we teaching students to program? It's certainly not a computer.
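To make one item on that list concrete: endianness is exactly the kind of detail these teaching languages hide, though it can be surfaced even from a high-level language. An illustrative JavaScript sketch (not from the comment) using typed arrays:

```javascript
// Typed arrays expose byte order even in a high-level language.
const buf = new ArrayBuffer(4);

// Write 0x11223344 big-endian (third argument false = big-endian):
new DataView(buf).setUint32(0, 0x11223344, false);
const big = Array.from(new Uint8Array(buf)); // [0x11, 0x22, 0x33, 0x44]

// Write the same value little-endian:
new DataView(buf).setUint32(0, 0x11223344, true);
const little = Array.from(new Uint8Array(buf)); // [0x44, 0x33, 0x22, 0x11]
```

The same 32-bit value produces two different byte sequences depending on the order chosen, which is the whole concept in four bytes.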


The best part is you can't forEach or map an HTMLCollection object. You have to [].forEach.call it. lol
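The [].forEach.call trick works because HTMLCollection is array-like. A sketch (using a plain stand-in object with the same shape, since there is no DOM in a plain JS runtime):

```javascript
// HTMLCollection is array-like (indexed entries plus a length) but does
// not inherit Array methods. This object is a stand-in with the same shape:
const collection = { 0: "img1", 1: "img2", length: 2 };

// Borrow Array.prototype.forEach with .call:
const seen = [];
[].forEach.call(collection, function (el) { seen.push(el); });
// seen is ["img1", "img2"]

// Or copy into a real array first, then use any Array method:
const copy = [].slice.call(collection);
```

Array methods like forEach and slice are deliberately generic: they only read indexed properties and length, so any array-like object will do as a receiver.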


HTMLCollection is a messed-up puppy, sadly. Consider this markup:

    <img name="forEach">
    <script>
        document.images.forEach.src = "something";
    </script>
This works, today, in all browsers. Adding "forEach" to the proto chain of HTMLCollection would make this code no longer work, so no browser is willing to add it for fear of breaking sites.

The answer is to stop using HTMLCollection.

What's worse is that NodeList doesn't have this named-lookup issue, but putting Array.prototype on its proto chain (the obvious simplest way to get all the array goodies) again breaks websites, because some people use "instanceof Array" to assume things about how an object will interact with Array.prototype.concat.
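A sketch of the concat behavior at issue (using a plain array-like object as a stand-in for a NodeList):

```javascript
// Array.prototype.concat spreads true arrays element-by-element but
// treats array-like objects as single elements. Code that checks
// `instanceof Array` to predict this behavior breaks if array-likes
// like NodeList start inheriting from Array.prototype:
const arrayLike = { 0: "a", 1: "b", length: 2 };

const spread = [].concat(["a", "b"]);   // ["a", "b"] -- spread out
const notSpread = [].concat(arrayLike); // [arrayLike] -- one element
```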

Long story short, making improvements to the DOM is very hard because of the two decades of weird website hacks that are supposed to not get broken in the process...


Turing machines don't have any of those things either, and they are certainly computers. Universal ones, in fact.


I would almost suggest CoffeeScript in place of JavaScript. In fact it feels more like Python. My main concern would be lack of Google-able resources compared to plain JS.


CoffeeScript is clearly in the same vein as Ruby: designed by experienced programmers for experienced programmers. The list of caveats and gotchas for both languages is a mile long, and the grammar of both is ambiguous. I love both languages dearly for letting me be a trixy wizard but would never recommend either to a beginner.


I would argue that CoffeeScript has fewer gotchas than JavaScript. And, while you can be a trixy wizard with CoffeeScript, it certainly isn't required. The leaner and more linguistic syntax would aid beginners.
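One family of JavaScript gotchas CoffeeScript does compile away is loose-equality coercion (an illustrative sketch; the behavior shown is plain JS):

```javascript
// In JS, `==` coerces its operands before comparing, with famously
// surprising results; `===` compares without coercion:
const loose = (0 == "");   // true -- "" coerces to the number 0
const strict = (0 === ""); // false -- different types, no coercion

// CoffeeScript's `==` always compiles to JavaScript's `===`, so the
// coercing comparison simply isn't expressible there.
```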


CoffeeScript wouldn't be pragmatic since they would have to learn JavaScript anyway.


So incredibly in favor of this.

I love python, it's probably my favorite language. I taught a class on it in Boston. But these exact questions have indeed brought doubts into my mind.

Year after year I'm disappointed by what python simply cannot provide in any straightforward way. The options are very few for making games. The options are very few for desktop GUI (I tried to do PyQt and then discovered QML with JavaScript and quickly left python behind for that project). The options are few for back-end web development-- yes, there's Django, but it just doesn't have the clarity, documentation, and community that Rails has, or Wordpress (different type of entity, I know), or many others. Flask is amazing but small in scope and community.

I'm a professional JS developer by day and I used to think JS was awful for teaching, because the functional side of it is nightmarishly confusing IN GENERAL, let alone to someone new to programming. Its prototype-based object system takes quite a bit of getting used to, also, and it has many other quirks.
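One of the quirks that reliably trips up beginners is `this` binding (an illustrative sketch, not from the comment):

```javascript
// A classic quirk: a method loses its receiver when passed around as
// a bare function reference.
const counter = {
  count: 0,
  inc: function () { this.count += 1; }
};

const inc = counter.inc;  // just a function reference -- no receiver attached
// Calling inc() here would not update counter (in strict mode it throws,
// because `this` would be undefined).

inc.call(counter);        // supply the receiver explicitly
// counter.count is now 1

const bound = counter.inc.bind(counter); // or bind the receiver once
bound();
// counter.count is now 2
```

Nothing about the syntax `counter.inc` warns you that the receiver is bound at the call site, not at the property lookup, which is why this confuses newcomers.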

But, for the EXACT reasons mentioned in this linked post, I changed my mind. The fact that JS can be used for almost anything nowadays, is ubiquitously in demand, has a massive, massive worldwide "community," and allows easy GUI manipulation via HTML/CSS, makes it the best candidate for teaching by far.

It'll just take some careful consideration as for how to approach it best.


You had me until you said that Django doesn't have the clarity or documentation of Rails. Django is easily one of the best and most meticulously documented pieces of software I've come across in my 18+ years of writing software.

It also, IMHO, has a much higher "clarity" quotient - Rails is a lot of magic for me, while Django is, in the end, much less religion and much more clear. (I will acknowledge that I chose Python and subsequently Django because they spoke to me personally, but seriously, Django's documentation is second perhaps only to Redis.)


And this is why I'd argue that Python 3 was a big mistake. In exchange for 6 years and counting of freezing and bifurcating the ecosystem, we got a smattering of minor improvements that don't substantially change the developer experience. What could we have had if that effort had instead been focused on things like real performance optimization and native browser support?


This makes the false assumption that resources put into Python 3 somehow prevented "native browser support," something that has always been firmly in the hands of browser vendors and was never blocked by insufficient loyalty to Python 2. The world where just giving up on Python 3 would magically bring huge benefits never existed.

If you don't want to use Python 3, keep using Python 2 through its EOL in 2020+ but why interfere with other people happily using and preferring Python 3 for years now? It is not interfering with you.


> This makes the false assumption that resources put into Python 3 somehow prevented "native browser support,"

Agreed, browser support was probably unrealistic.

However, the other points still stand.

Python 3 was a mistake. It came at the wrong time (too late) and didn't bring enough. The talks, the time, and the drama around it weren't worth it just to get dict comprehensions and Unicode support.

A 10x speed improvement (heck, just a 2x) would have been worth it. Better IO handling via something sane like lightweight threads (via an already existing framework like gevent or eventlet). Better packaging. A better default GUI library. Refreshing the standard library with something like requests, IPython...

Python had a golden opportunity and it squandered it. There were


There were, there were. I think Node.js is the way of the future with its evented IO platform.


> I think Node.js is the way of the future with their evented IO platform.

But can I use it to build microservices?



