The Long-Term Problem with Dynamically Typed Languages (chadaustin.me)
45 points by messorian on April 25, 2015 | 93 comments



Bollocks. Maintainable code is maintainable. Unmaintainable code is unmaintainable. In 20 years of writing software, I've dealt with Java code that was impossible to understand, which made it painful to make even simple changes, and I've dealt with Python code that was trivial to reason about, so changes were easy and fast.

Invariably, the ugliest code I've seen has always come from "we must move fast at the expense of everything else" startups. I worked at a startup that combined the worst of architectural temple building with the worst of "maintaining code is a Maserati problem" (e.g. by the time somebody has to maintain it, I'll be driving a Maserati and not caring). The company almost folded due to the weight of the code base preventing features from being released. Did I mention this was all Java?

Many fast-moving startups choose dynamic languages to do their ultra-fast, sloppy iterations on. Fast-moving startups tend to write code that is unmaintainable. Later in the lifecycle, as the maintenance nightmare hits, these same companies move to static languages to "fix their problem", but they've also introduced a lot more discipline in their engineering process. Afterwards, the dynamically typed language gets the blame for the lack of maintainability and the static language gets props for being maintainable.

Correlation is not causation. The engineering practice has far more to do with maintainability than the language itself. It just happens that, in the last 10 years, dynamic languages have been chosen more often during periods when companies write poor code. My Java example above was from 1998 to 2001, when Java was the hot language for building the dotcom boom.


Hi, author here. I agree you can have bad code in any language. I also agree that it's possible to build large applications in dynamic languages.

Really, the main point I'm trying to make is that when you have millions of lines of code spanning a decade, it's WAY easier to have a compiler guiding you through certain types of refactorings than a test suite that takes half an hour to run. (If you search the internet for things like IMVU tests buildbot continuous integration you'll see how large our test infrastructure is.)

Our newest language at IMVU is Haskell, and the earliest Haskell written was quite nasty. We hadn't figured out the structure we wanted. But it's shockingly refreshing to be able to lean on the compiler to quickly restructure all of that old code.


> Unmaintainable code is unmaintainable.

Sure, but not all unmaintainable code is equal.

Code without types is orders of magnitude harder to maintain and modify, even with extensive code coverage (because you can never really know how extensive that coverage is).

Also, the absence of type annotations makes such code much harder for a human simply to understand.


I hear you. Overuse of "Spring" configuration makes me crazy. It's like some terrorist organization where you never know your contact's real name, and nobody knows anybody else in the organization, in case the "coupling police" try to bust up a cell. "We always just called him Otto" (autoinject)


Are you just giving your anecdotal experience or do you have something to support your claims?


Every claim that's been made in this thread is based on anecdotal experience. Maybe in 50 years software will be mature enough that we'll have actual empirical data to back up our design/tooling decisions. But for now, almost no one is seriously interested in testing this stuff with well designed (and well funded) studies.


I realize this. I'm just hoping someone has come across some knowledge that I'm not aware of.


Seems like a perfectly reasonable counter-point to someone else's anecdote...


Except that one person actually offered an anecdote and the other is making unsubstantiated claims. I would like to know whether there is any truth to their claims or whether it's just another anecdote.


Programs that contain more semantic information are easier for both humans and tools to reason about; it didn't even occur to me that this would be a controversial statement.


Worse than code without types, is code without a solid vision for what was intended.


Unfortunately my experience. If you want to learn how to code professionally, don't work at a startup (except as a lesson on how not to do it).


Your point isn't well supported by citing Java code though. Java's type system gives you almost nothing in terms of what modern type systems promise, and it does so at the expense of a vast amount of unnecessary boilerplate.


At least with Java 8 lambdas FINALLY coming out, maybe we'll get more frameworks like "Spark" (register callbacks in actual code) and fewer like "Spring" (reflect in classes from XML gobble-de-gook).


I think the author's lack of distinction between static vs dynamic and strong vs weak typing detracts from his point. Static vs dynamic typing boils down to whether typechecking is done at compile time or at run time (for various definitions of "compile time" and "run time"). Strong vs weak typing is whether the language will raise a type error or try to silently convert types. Python is dynamic but also strongly typed.

I feel like PHP's weak typing contributes to the maintainability issues mentioned - passing the wrong types to an API causes silent failures rather than generating some kind of error message. I feel like this makes it more difficult to design a good API, because you have to handle more cases rather than simply throw an exception when you get something badly typed.

Since programming in a statically typed language is going to require a test suite anyway, I'm not sure it buys you much over a strongly, dynamically typed language like Python. That said, an argument could easily be made the other way - it doesn't cost much to use a strongly, statically typed language like Java. You are going to have to spend time on type safety either way; the question is just whether you spend it getting a prototype to compile or making sure a mature project exhaustively tests arguments of illegal types (to pick the worst case for each side).


What I've found in discussing these topics with some colleagues is that the terms "strong typing" and "weak typing" are so vague that almost everyone has a different definition.

The academic literature uses "types" to mean checked at compile time, and "tags" to mean checked at runtime. Python calls them types but checks at runtime. I think the fuzzy terminology can work as long as we all declare in advance what we mean. :)

In general, I agree with your points. Where strong type systems start to help a lot is when you have super-complicated data flows and invariants, where many different types might actually have the same representation (say, a 32-bit integer or small string), but only some operations are valid on some types. That stuff gets tricky in Python, even if you implement a unique class per type.
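
A contrived sketch of what I mean (the names are made up, not our actual code):

  -- Two types that share a representation (a plain Int), but the compiler
  -- refuses to let you mix them up or apply the wrong operation.
  newtype UserId = UserId Int
  newtype RoomId = RoomId Int

  kickFromRoom :: UserId -> RoomId -> IO ()
  kickFromRoom (UserId u) (RoomId r) =
    putStrLn ("kicking user " ++ show u ++ " from room " ++ show r)

  -- kickFromRoom (RoomId 7) (UserId 3)  -- rejected at compile time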


One could look at it as two orthogonal questions, yielding four classes of language. Question 1: does the runtime receive interpretation semantics of values from the compiler (yes = "dynamic", no = "static")? Question 2: does the enforcer of interpretation semantics ("types") do so liberally or conservatively (liberally = "weak", conservatively="strong")?

So, dynamic strong would be Lisp[0], dynamic weak would be PHP, static strong would be Haskell, static weak would be C.

I suppose there's even a fifth option of "no interpretation semantics to begin with", which would be something like Forth.

[0] As you seem to allude: a dynamic language makes this question more tricky.


Actually, in the type theory community, Python would be uni-typed and strongly tagged. They use a formal definition of type that doesn't have a dynamic qualifier (so call them tags instead).

I guess my point is that we can be pedantic, but each one of us will probably have a different interpretation of the terms.


The long term problem with bashing less statically typed languages and constructs is that, long term, they have worked. The more statically typed languages are infants in the world of abstracting and solving problems.

Now, this is in large part because static type checking is a rather expensive operation, the "price" of which has been dropping significantly as computers get faster. But I can't help seeing a large amount of hubris in people proclaiming that dynamically typed languages don't lead to solid solutions.

Especially when some of the largest train wrecks I have ever had the pleasure of working on were abuses of higher-kinded types.


As much as I try to remain objective, it's really hard to see any future for dynamically typed languages (and by that, I mean languages that are not verified at compile time, in order to avoid the silly, pedantic, distracting argument about statically versus strongly typed languages).

The bottom line is that statically typed languages (STL) have been much better at getting good at what dynamically typed languages (DTL) shine at than the other way around, to the point that there is hardly any good reason left to use a DTL.

These days, STLs are terse, fast, and expressive, supported by tremendous tool chains including amazing IDEs, while DTLs pretty much suck in all these areas.


Your bottom line says more about the resources that have gone into statically typed languages than about anything else. It also ignores the ocean that is Javascript.

Seriously, think about that example for a minute. During a time when statically typed languages were getting more and more tooling than you can shake a stick at, browsers take off and prove that the ultimate tool for any language is a widely distributed runtime.

So, where is the difficulty in seeing that dynamic runtimes that can adapt to changes in the system have a very valid place in the future? I realize I'm an emacs person, so biased as can be. But there is something inspiring about finding a problem in my editor/runtime, popping over to the function, fixing it, reevaluating it, and then continuing without a restart. It's ridiculously nice.


Couldn't agree more with your last point; it's just that we've been able to do this with statically typed languages for years now.

Javascript still reigns supreme, no argument there, but the contenders that are trying to replace it all have something in common: they add types to Javascript. All of them.

I think the trend is pretty clear.


You're ignoring all the people that aren't trying to replace Javascript because they're... using Javascript. It has a lot of warts but with some mindfulness it's entirely usable. No one can argue that people aren't getting things done with it and being successful.


Thank you. All of these BlahBlahScript wrappers don't change the fact that I still have to debug (and thus read, anyway) Javascript in the browser. Javascript isn't assembler, it's actually mostly OK to read - I kind of like it.

I only recently discovered JSDoc, and it goes a long way towards documenting types when they matter, especially if your IDE/editor (e.g. - WebStorm) recognizes the annotations and displays them when you click on a symbol.


Sadly, if there is a trend, it is that none of them have succeeded. So, if that is the path to success, it will only be after it was forced through more effort than really makes sense.

And, correct me if I'm wrong, but Haskell and vanilla Java, along with many other common statically typed languages, cannot hotswap. Well, to be fair, vanilla Java can, under limited conditions.


Not being able to hotswap when debugging is a productivity pain for me. I'm in the C# world, and edit-n-continue is severely limited. JRebel is very good at doing hotswap with Java, but it has its limitations too. Ideally, I'd like a programming environment to go into interpreted mode so that if a change is made and it's horribly wrong, it just throws an exception.


The fact that none of them have succeeded is more due to inertia than anything else. Again, this doesn't take anything away from the fact that statically typed languages are the future and even Javascript won't be dodging this evolution for much longer.

Ruby and Python were the latest victims of this trend, Javascript will clearly be next, even if it might take years.


Isn't Javascript younger than all of the poster-child static languages? Why didn't inertia help any of them beat it? (It is amusing that it was called JavaScript to capitalize on the popularity of Java.)

I mean, ultimately, I think I agree with you. But at the rate things are moving, I expect my children will still learn JavaScript. And probably C.


The reason why JavaScript has succeeded where many other languages, whether statically typed or not, have failed, is that it was bundled with a runtime platform - the web browser - that became ubiquitous.

Millions of people want to write programs that run in a web browser, and it's easiest to do that in JavaScript.

JavaScript is a textbook example of "worse-is-better", and you are right, for compatibility reasons we are going to be stuck with it until long past its best-before date, just like C.


That was the point I was making when I said that the ultimate tooling a language needs is a runtime that is widely distributed.



Thank you, I recently configured my blog software and clearly missed some of the caching settings. Working on it.


This seems like one of those "Zen" things, where you eventually realize you are more effective when you use both kinds (compile time types and runtime-only types/tags) of programming together, rather than trying to pick only one and label the other as evil.

Now I must "reflect" upon the wisdom of all the strong typing in my Java programs, and how a simple refactor always safely finds all the use cases of a class/method/variable for me. (NOT -- although IntelliJ is pretty good at finding many common hiding places in various texts for program identifiers)

I really want one language that combines optional (dynamic) and mandatory (static) compile-time type info, with some pragmas at the top of a module warning me that the module uses or supports dynamic types, rather than having to resort to XML DSL monstrosities.

One final rant: The author mentions an FP language (Haskell), and then talks about data flow analysis. Strangely, however, he seems more concerned with types than immutability and explicit data passing / folding. Favoring explicit input/output values over side effects is a huge factor in seeing data flow (and orthogonal to compile vs runtime types). Alas, the "enterprise" development squad seems to actively resist learning FP, preferring to cling to "COBOL with namespaces and compilation units" as state-of-the-art programming methodology. I think this problem is the real (productivity) killer.

(OTOH, he could tacitly assume pure functions, etc, are a good thing, and should be the norm. Hard to say)


As far as I understand, dynamically 'typed' languages are actually the way we can get the best of both worlds, since dynamically typed languages can be 'upgraded' to benefit from type annotations (see Dart, Hack/PHP, Python, …) whereas languages with static typing are not easy to 'downgrade' to behave dynamically (?).

The author runs into a few logical fallacies like underestimating how important early agility is for making a product successful (if your product isn't successful there is not much to 'fortify' using static typing) or conflating static/dynamic with strong/weak.

The bottom line imho is that we want languages that are both easily toolable (type systems can help here) as well as allowing us to do fast prototyping. It seems like starting with a dynamic language and to add optional/gradual type annotations is the way to go.


> whereas languages with static typing are not easy to 'downgrade' to behave dynamically (?).

That's not true. It's trivial to represent Python's "type system" in Haskell. An incomplete example of what that might look like:

  data Value = None | String Text | Dict (HashMap Text Value) | List [Value]
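Here's a sketch of what code over such values might look like (illustrative; assumes the usual Data.Text and Data.HashMap.Strict imports):

  -- Python-style key lookup over the Value type above
  getKey :: Text -> Value -> Maybe Value
  getKey k (Dict m) = HashMap.lookup k m
  getKey _ _        = Nothing
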
Also, consider things like the very useful Data.Dynamic: https://hackage.haskell.org/package/base-4.3.1.0/docs/Data-D...

On the importance of early agility: I'm actually 100% in agreement that early agility is critical. In fact, I directly cite Bruce Eckel's arguments to that effect. However, if a business is going to remain agile in the long term, it should probably invest in "correctness tooling" sooner rather than later.


Thanks for the elaboration.


Static types help with early agility. I don't have to get everything right at the outset, because I can change my code and know what I broke. That's possible in a code base that can't be statically type checked if you pin things down with sufficient tests... but the tests don't rewrite themselves when I make a change to the code - the types do (at least, when working in a decent language).

Note that I'm not at all saying that every statically typed language is superior to every dynamically typed language. Many - particularly older, popular, and oversold statically typed languages - asked for a lot of boilerplate in exchange for not terribly valuable guarantees.


I think we are in agreement but have different things in mind when we use the word agile in this case (which is not surprising).

What I meant was the ability to write new code and see if some functionality of the product 'works' before I have to satisfy the type checker. In statically typed languages I have to satisfy the type checker before I can run the code; that is a barrier.

Regarding refactoring, good tooling supported by type annotations is definitely helpful. I guess what I'm trying to say is that we now have languages that give us the best of both worlds, so there is no more need to pick either extreme of the spectrum, or to use a dynamically typed language in the beginning and rewrite everything in another language later.


I appreciate the charity, but I think we do actually have a disagreement.

Let's go too far, and define "agility" to be "things that make me faster in the first 5 minutes working on a problem."

I am saying that when I am writing Python in that context, Haskell's type system is something I find conspicuously missing.


It shouldn't be that hard to add a 'variant' type to a statically typed language. In fact, OO languages usually just use a common superclass for just this purpose.


Or if you are "Go", you just make up an interface with a "quack" method (and maybe "walk", "swim" and "fly", too), without the concrete types ever knowing / caring that there was such an interface, all while still getting the needed type checking on the function/method calls.


And of course Haskell now has Dynamic, it's just not very useful.
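
Roughly what it gives you, for reference (a minimal sketch):

  import Data.Dynamic (Dynamic, toDyn, fromDynamic)

  bag :: [Dynamic]
  bag = [toDyn (1 :: Int), toDyn "two", toDyn [3.0 :: Double]]

  firstAsInt :: Maybe Int
  firstAsInt = fromDynamic (head bag)
  -- Just 1; asking for the wrong type gives Nothing at runtime,
  -- not an error at compile time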


It's probably useful if you wanted to implement a dynamic language using Haskell. :P


I'm not sure if this was intended in jest, but it's a bit interesting to treat it seriously so I'm doing so... I think it would not be a very good choice to implement a dynamic language in Haskell, because what it makes flexible is choice of Haskell type, and you likely don't want to tie implemented-language type to Haskell type like that. It would seem more likely to be useful if you wanted to host a dynamic language within Haskell.


If you want to get a feeling for what a dynamically typed language + support for type annotations can give you, have a look at this:

https://dartpad.dartlang.org/

On the top right you can select a Dart code sample. DartPad is similar to jsfiddle but it gives you semantic autocompletion, warnings and auto-correction like IDEs for languages with static typing would do.


I do believe Hacker News killed the site by too much traffic.


no, the oom_killer did


Using PHP to characterize dynamic languages is like using a Lada to characterize cars.


I think of it like typing the units in physics class. Sure, you can leave them out. But how much time are you really saving?

A lot of these languages make for lovely blog posts and also happen to be fantastic for gluing things together. But I fear there's an entire generation that's unaware that C# looks like a lot of boilerplate in a textbook but actually requires fewer keystrokes than most equivalents. And once you get fast at it, it's pretty amazing.

I use lots of Python. And Matlab. And of course Javascript.

But as you get older you realize that tools matter. And the tooling around dynamic languages by definition cannot beat what's possible when you declare a more rigid contract of what you're actually trying to get the machine to do.


If you are writing avionics and/or are dealing with lots of conversions, then units are very useful in both dynamic and static languages. But if you are writing a simple game without many or any unit conversions, you might not find them useful at all.


Interesting choice as game engines are famously C++ and the scripting languages -- even the one-offs custom made by the devs -- are less "scripty" than you'd think given the development pressures. From Wolf3D to Unity they land on a consistent set of squares here.

Games are an interesting example because of 1) strong tooling requirements 2) small but diverse teams 3) cross-platform requirements and 4) performance requirements.

Under these constraints they made their choices. And it wasn't because Python wasn't available.


Python has plenty of games too, as do C#, Java, and so on. Ask a game developer if they need unit types, and I'd think it wouldn't be a top priority (though indeed useful).


Ah, my metaphor wasn't clear.

I wasn't arguing for unit types (user-defined literals didn't appear in C++ until C++11...)

I was pointing out that reconciling units in physics is common sense. People don't debate whether writing down "meters" or "seconds" next to a quantity is useful or not. You just do it, then cancel the units as a basic sanity check.

Yet so much of the "strong typing is for people with weak memories" debate incorrectly centers around the equivalent of "len = 3.1" vs "Kilometers len = 3.1" (or, heck, "auto Distance = 3.1_km").
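
To make that concrete - a sketch in Haskell syntax purely for brevity, with made-up type names:

  newtype Meters  = Meters  Double
  newtype Seconds = Seconds Double

  -- "cancelling the units": the only way to combine them is an
  -- operation that says what it means
  speed :: Meters -> Seconds -> Double
  speed (Meters d) (Seconds t) = d / t

  -- speed (Seconds 60) (Meters 100)  -- caught before the program ever runs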

I'm not sure why people think this is some sort of ordeal but my hunch is they'll learn. Timezones, Unicode, Database Schema, Float vs Decimal: you can deal with it now or you can deal with it later. If there were a lightweight language that actually solved this we'd all be using it, not arguing about it.


If type systems weren't so darn annoying, I'm sure everyone would be using them. As it is now, it's a trade off and not always a clear win. Give game developers easy access to unit types, for example, and they would probably use and benefit from them. But if they are a PITA to use, then of course, they would just say no way!

For type systems, we really need much better type inference than we have now...something that works with the subtyping that many of us prefer to work with.


Agreed. Nerds in general tend to make a lot of ... odd ... short-term decisions. Perhaps it's time to admit that a lot of what we do all day is "annoying" while balancing that with the fact that we're some of the best-paid people on the planet.

Enjoying that economic windfall while doing what's all but scientifically provable as a half-assed job raises issues around quality and ethics. (Something as a community we're also a bit famous for tiptoeing around...)


The current state of PL is very annoying - less so than a decade ago, but still annoying. Most programmers don't really care; there are plenty of other annoying ways to spend life for less money. But improvement is so possible: not everything significant in PL was invented 20-30 years ago, yet that is what we are stuck on.


By definition? Many of the tooling tricks you probably like about static languages are just as possible in so called dynamic languages. The trick is usually to hook into the environment while it is running. At this point, there are precious few things that can't be known by the editing environment.


Rubbish. Create a new library function that takes 'fruit' as a parameter. What methods are available on 'fruit'? The IDE can't help you; the contract is unspecified.

In addition to Java, I regularly write Python, Ruby, and Javascript using IDEA. The code navigation and code completion functions in those last three are utterly hopeless and fall back to grep-like behavior most of the time. Ruby deserves special mention because pretty much anything can get redefined at any time, anywhere. The poor IDE doesn't have a chance.


And this is where things are kind of funny. You are, of course, correct in a major sense, especially a static one. However, for many programs it is entirely conceivable that the IDE can tell you all of the members available on all objects that are actually passed to said function. Or, inverted, check that all of the necessary items are available on any item you call it with.

Now, in dynamic languages this is typically done at runtime. That is kind of the whole point of dynamic runtimes; that you can hook into them. Curious what all properties are available on the object represented by a given symbol? Essentially, ask it.

However, as computers get more and more powerful, this can be done by simulation statically for many programs. No, not all. Just many. Is it enough, though?


(I don't think that's the whole point of dynamic languages. C++ has RTTI and .NET shipped with reflection as a fundamental construct...)

I need to write code that talks to libraries today. I press dot and within 10 milliseconds I see an encyclopedic list of functions, complete with verifiable hints as to their structure. I type what looks like it makes sense, run a compiler that takes about as much time to compile as python.exe takes to load, and a lot of the time, if the code compiles it just runs perfectly.

I do not need to wade through verbose Doxygen boilerplate, nor page through source code, nor dance around fake json schemas. I do not need to write unit tests that, in many cases, serve merely to exercise the interpreter to make sure it hits everywhere I might have forgotten a parenthesis. It just friggin' works because a community has decided it's important to do things this way.

So many of these "time-saving" languages are a full decade behind this reality. Right now it's definitely not enough. There's 30 years of idiotic cruft behind Windows bitmaps and it's STILL faster and easier to resize a jpeg in C# than using the insane Python libraries.

Meanwhile C++ got "auto" and C# got "var" and "dynamic." At that point I picked my winners, there are of course other options.

And when I write a library, doing it this way just seems like basic human respect. Sure: you're welcome to the source and here are some lovely docs. But if the only way to say whether something takes a string or bytes and that it varies between Python 2.7 and 3.x is in a README file, something's gone wrong. Like environmentalism we need to look at "total cost" before we declare something a net savings of energy.


This is a method of programming that just doesn't work for me. If I am at the point where I don't have a good idea of what functions I will be using, then the "encyclopedic list of functions" is just not going to be a help. Often, I find I will just put together a fairly long list of things where a much shorter construct would have worked.

Which is not to say it is bad that you do this. Just, to each their own. I'm much better served by working before getting to the keyboard. Other folks aren't.


You presume there is a runtime to be had. As a library developer I have no idea what else will be in the runtime and what objects could get passed into my function. Bigger projects need more modularization... and with each independently developed module, the contracts get fuzzier.


The thing I described would work just as well in library development. Though, yes, it would likely be easier to just describe some contracts at border points in a program.

That is, the inverted method would allow a way to say what all has to be on any object that is passed to you. If they have more, so be it.


They are languages for quick prototyping (consider Arc's philosophy - a core language for quick, bottom-up prototyping).

Most of the time, the prototype is good enough for production. If not - there are implementation languages (C, C++).

So, I cannot see any problem.


I see a problem. Neither C nor C++ have garbage collection and other similar runtime features that every scripting language has. As such, if you rely a lot on these features, you could run into issues when translating into a lower level - either in implementation or in memory safety.


Modern C++ actually makes for a pretty nice prototyping language if you use auto and std::shared_ptr everywhere, turn on RTTI, and make every method virtual. That gives you mostly-hidden compile-time type deduction, explicit type-based dispatch, reference-counted garbage collection, and extensible classes. Then, once you've got things working, you can refine the ownership semantics and internal interfaces.


No, thank you.)


OK, but simple, straight-forward memory management is not that hard. I could even rip out pool allocations and buffers from nginx/src, or reuse lib9 and friends (which were at the core of golang before 1.5). There is also the Boehm GC.

And if I didn't mention Golang, it doesn't mean that I am unaware of its existence.)


The prototype may be good enough for production, but is probably not very maintainable. When you need a large team to be able to fix bugs and add features, static typing often wins.

If you don't choose static typing early, you might never be able to again.


I always hear this: "statically typed languages make for more maintainable projects than dynamically typed languages."

Is there any empirical evidence to back this up?


COBOL was statically typed, and they've been maintaining those programs for decades!


What is wrong with those Common Lisp programs?)


If you are employed at a bank to manage a small team of Java coders, how could it be otherwise?)


Dynamic code can be an order of magnitude more compact and readable.

   schiptsov@MSI-U270:~/arc3.1$ du -sh .
   472K	.
Anyway, at least for me, "static typing is a must" is a marketing meme from the time of the Java madness.

And if I really need it, I could re-write my prototype in no time (like translating a finished book from one language to another).


What's wrong with this?))

Don't you know how this very site was made? Ignorance is strength.


Hacker News is a relatively small site with very few features that is managed by a very small team, and up until a year or so ago often ran rather poorly. I'm not sure it's the best argument for using a dynamically typed language.

That being said, I say use what works for you. Dynamic typing is obviously a viable solution, and there are plenty of successful projects that use dynamically typed languages.

However, after almost 10 years of using dynamically typed languages I can tell you that I no longer believe the trade-off is worth it. Even on small single person side projects, I personally think that after a thousand lines of code or so, the benefits of static typing outweigh the cost.


The point about this site and Arc is that they (pg and rtm) practice what they preach. It is, basically, a real project based on the philosophy from the famous "Beating the Averages" essay and the "On Lisp" book - a quick, bottom-up process of bootstrapping several layers of DSLs (an idea popularized by SICP and refined in On Lisp) - with a new dialect of Lisp as a by-product.) Everything in half a MB of text.

OK, they reused mzscheme's runtime, because writing a runtime is hard (even Clojure delegated everything to the JRE), but nevertheless, Arc and this site are remarkably successful examples of a real-world project with all the benefits of dynamic languages.

Sorry for the delay, we had a quake that day.


I absolutely agree.

Look, you don't need every single type that C implemented to have a type system. You can just have number, w_string, w_char, byte, object and maybe DateTime as core types. Then build on those. But typeless languages are just maintenance nightmares that just get worse as time moves on. They are also inherently incompatible with any system which does use types (which is most), like databases, XML, libraries, OS APIs, and so on.

I know I am going to get stomped into the ground for this, but JavaScript is bad and I wish we had something, anything, else instead. JavaScript was made in like a weekend and copied Java badly, that's fine, but we've carried this legacy long enough and the longer it takes us to implement the replacement the longer we'll have to keep using JavaScript until every browser/device catches up (10-20+ years from now).

There are many things that could replace JavaScript, but all I want is a strongly typed language, with real classes, strong focus on libraries, a MUCH better core DateTime library, and with maybe some functional programming concepts to make it more maintainable and more optimizable (at runtime).

PS - I am aware of ECMAScript 6, but it is only a bandaid on a bad language at the core.

PPS - Dart is already dead. And "X converts to JavaScript" is also a bandaid.


TypeScript takes an afternoon to learn, and is strongly typed JavaScript. It's not perfect, but it's really very good. The JS produced is predictable, clean, and makes sense, with virtually zero overhead.


Typescript can't guess types with libraries that are not written in Typescript. That's not the solution. But maybe that's the best you'll ever get in the JS world.


All the ones I've wanted to use have had bindings already written for them though...


TypeScript as of today is supported by a total of zero browsers, with zero planning to add support in the near future. It doesn't replace JavaScript in any way; it just builds a larger house on the sand.


You don't know anything about TypeScript, yet you choose to weigh in on it. Interesting. I think you must be confusing TypeScript with something else, like Dart maybe?


Please clarify what I said that was incorrect? I said that TypeScript doesn't have browser support (true) and that they have no plans to add browser support (true), and I added that TypeScript is like building a house on sand (i.e. it is all built on top of JavaScript, since there is no browser support for TypeScript).

Please, go ahead, and correct my ignorance. Since I don't know anything...


The stated goal of TypeScript was never to run in the browser. That's what confused me about your statement and made me wonder if you were confusing it with Dart. TypeScript wraps JavaScript with static type checking and some other useful constructs. It compiles to very clean and readable JavaScript. It's a tool, not a browser script.


C/C++ doesn't have x86 machine support. Get it now?


    > It doesn't replace JavaScript in any way
Except for the important place, my code tree


> They are also inherently incompatible with any system which does use types (which is most), like databases, XML, libraries, OS APIs, and so on.

This is obviously untrue in practice by any usual definition of "incompatible", as all of those things can be -- and are -- used successfully from languages everywhere on both the strong/weak and static/dynamic axes of typing.


I agree with your sentiment, but your facts are kind of wrong.

Javascript has pretty much nothing to do with Java. It's not a copy of Java in any way. The semantics of it are vastly different and much more influenced by Smalltalk, Self, and Lisp. It's an extremely functional language, which is one of the things that has ultimately saved it. Without higher-order functions, you don't have jQuery, Underscore/Lodash, AMD, Backbone, and all the other innovations that made using JS not suck nearly as much.

Don't get me wrong, it's a pretty flawed language. But "something better" is practically here now, in the form of ES6 and TypeScript, and to a lesser extent, Google's "use strong" proposal, which basically deprecates a whole bunch of old cruft. We're about a year away from being unshackled from the worst aspects of JS. You can pick TypeScript or Babel today and have many of the best benefits of future JS.


> Javascript has pretty much nothing to do with Java. It's not a copy of Java in any way.

I never said it was. I said:

> JavaScript was made in like a weekend and copied Java badly

The original author knocked out JavaScript's overall design in a weekend, then they tried to shoehorn in Java-like features, badly. That's why we have artifacts like Date() having months start at 0 (like Java). They took a whole bunch of Java concepts and tried to clone them into JS after the naming.

My post would have made more sense if you understood a little more about JavaScript's history (both before and after naming). There was definitely a sense of making it more Java-y after it was named that.

> It's an extremely functional language

No, it just isn't. JavaScript has state all over the darn place. I mean heck, it has a global scope with no namespaces at all; that alone is a death knell for calling anything a "functional language." I have no way of writing a function in JavaScript which guarantees that it has no internal state, so the optimiser cannot make assumptions about that function either (e.g. inlining it). In fact I can dynamically change a function's definition at runtime, so it never runs the same code twice.


If it's just Date semantics, that's a fairly insignificant amount of copying. I'm not saying there isn't superficial similarity. But just look at the fact that Javascript until ES6 lacked a useful `class` keyword, without which you can't even write a Java program.

I suppose "functional" is in the eye of the beholder. I see it in terms of what a language allows you to do, not what it prohibits you from doing. You can absolutely choose not to use shared mutable state.


>But typeless languages are just maintenance nightmares that just get worse as time moves on. They are also inherently incompatible with any system which does use types (which is most), like databases, XML, libraries, OS APIs, and so on.

I can't load the article, so I have limited context here, but note that being dynamically typed is orthogonal to being weakly typed/typeless.


It sounds like you would really like Dart. Have you tried it?

https://dartpad.dartlang.org/



