Announcing TypeScript 2.0 Beta (microsoft.com)
359 points by DanRosenwasser on July 11, 2016 | 90 comments



    let lowerCased = strs!.map(s => s.toLowerCase());
I'm not a big fan of this; it's really starting to change JS semantics. It's not just type annotations + ES6 anymore. It's starting to look like its own language. Some might like that; I do not.

They should be a bit more cautious before introducing these features. What if ECMAScript in the future uses ! as an operator for a totally unrelated purpose? It's like decorators: they are not in the spec, and there is no plan yet to officially add them to the spec, yet Angular 2, which is written in TypeScript, abuses them. What if in two years they are introduced in ES2019 with different semantics? How is TypeScript going to handle that?


This post is full of misplaced FUD.

> It's not just type annotations anymore + ES6

How not?

! is just an inline type coercion.

> What if Ecmascript in the future uses ! as an operator for a totally unrelated purpose ?

This seems unlikely because ! is already an operator in JS.

But really, this argument could be made against any operator TS uses. What if ES introduces : as an operator? Or if they introduce some sort of ambiguity with the other myriad type definitions? This is not a new issue, and you shouldn't let null checks drive you away from the language.

In any case, both null checks and decorators are behind flags. The decorators flag is literally called "experimental decorators". So it's not like the TS team is pretending that it's going to be stable forever.


>! is just an inline type coercion.

Are you sure about that? It sounds more like a signal to the compiler to "trust me, I know what I'm doing" and would blow up if you passed undefined anyway. Maybe the author didn't explain it very well. That is, not a type coercion so much as a hint to not bother checking this access.


When would code with ! ever compile differently than code without ! ?


Umm never? That's not the point. It's not changing (coercing) the type of anything, it's just allowing the call without flagging it as an error. The call at run time might still get passed an undefined and fail. It's like choosing an unsafe partial function in Haskell rather than using a total function that forces you to deal with Maybe.

At least that's what it looks like. I've never used typescript or I'd test it right now. I'm on my phone and haven't tried it.

(Edit: my answer didn't quite make sense! It's different since without ! it compiles as an error because, in this case, you can't map over undefined. With !, it just ignores that possibility although it could still happen. The compiler cannot verify all run time possibilities since your sum type says it really can be undefined.)

(Re edit! Verifying the types vs skipping that verification is not the same thing as coercing the types)


It's coercing the type from, e.g., string | undefined to string.


  let lowerCased = (s : string[] | undefined) => s!.map((str : string) => str.toLowerCase());

  console.log(lowerCased(['A','B']));
  console.log(lowerCased(undefined)); //boom!
It doesn't coerce anything. All it does is let the code pass without error. "s!.map" and "s.map" compile to the exact same Javascript code, which means there is no type coercion going on. Are you sure you know what type coercion is? The first log prints out fine and the second blows up, which proves that undefined isn't being coerced to a string[]. If that was the case, the resulting JS would be checking for undefined and probably creating an empty array.


My bad, I meant conversion, not coercion.


TypeScript has dealt with this in the past by adapting to the introduction of `<Foo>` from JSX, which clashed with TypeScript's casting operator. If this ends up being a problem I'm sure that the TypeScript team will just pick a different syntax.

In my experience, the overhead caused by TypeScript is far outweighed by benefits in tooling and confidence that my code works (mostly) after refactors.


AFAIK, this is actually happening with decorators, which have changed a lot in how they're defined (the latest I can find is the actual spec changes: https://github.com/tc39/proposal-decorators/commit/c32ea1815...)

Even so, I've been pretty impressed with the TypeScript team's ability to manage this kind of change and uncertainty, and to keep up with and move closer to standard JavaScript where there have been differences.

Specifically regarding the `!` operator*: to convert to standard JavaScript, just remove the `!` operator, like you would remove a cast or type annotation. It's still within the TypeScript spirit to me.

* I personally would have rather seen it either be a type of cast (maybe `<!>strs.map(...);`), or add a null-safe property accessor: `strs?.map(...)` which doesn't assert that `strs` is non-null, but makes the expression null-safe, returning null if `strs` is null.
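For concreteness, here is a sketch of the difference between the two options (the null-safe form is written out by hand, since nothing like `?.` exists in TypeScript 2.0; the function names are made up for illustration):

```typescript
// Sketch, assuming --strictNullChecks. Contrast the non-null assertion
// with a hand-rolled null-safe access.
function lengthsUnsafe(strs: string[] | undefined): number[] {
  return strs!.map(s => s.length); // compiles, but throws if strs is undefined
}

function lengthsSafe(strs: string[] | undefined): number[] | undefined {
  // What a null-safe `strs?.map(...)` would mean: short-circuit to undefined.
  return strs === undefined ? undefined : strs.map(s => s.length);
}

console.log(lengthsSafe(["ab", "c"])); // [ 2, 1 ]
console.log(lengthsSafe(undefined));   // undefined, no throw
```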


That null-safe property accessor (`?`) is being discussed for ES, and it's reasonable to assume that it will ship eventually. They wouldn't add that and risk having different semantics.


Being realistic, it might take more than a decade to see a standard ECMAScript version with type annotations available in all major browsers. Until then, transpiling is the only choice. So I don't think diverging from the (possible) standard is too bad. We don't know what the state of web development will be at that time, and probably the standard won't ever catch up with TypeScript, so transpiling will be required for quite a long time until something like WebAssembly really takes off.


The type definitions in Flow are pretty much the same, IIRC. Also, decorators are already an ES proposal; the only concern was the order of application of decorators vs. declaration, and they've been applied pretty consistently.

As to the additional checks for non-null types, I'd say that async/await would be far more welcome at this point... it's one of the few reasons that those using TS are still running their output through babel after.

Personally, I'm not a fan of TS, but can see why others would like it... I'm pretty good with ES2016 + Stage-1 via Babel...


I don't plan to use that feature. If they remove or change it, my code will still work.


TypeScript is a superset of JavaScript, but it's not only JavaScript plus types. Some features like async/await could end up being implemented differently in ECMAScript. It seems inevitable that there could be incompatibilities in the future. But the same happens with Babel or other transpilers when they introduce features that are not yet standardized.


Do you have the same complaints about optional parameters too?

    let x = (id: number, name?: string) => { return; };
Is it abuse to use interfaces as well? They are not present in vanilla JS.

How about React's move away from React.createClass({}) to ES6 classes extending React.Component? Do they abuse classes?

Async/await is coming too and it will turn JS on its head, does eschewing callbacks for async functions also count as abuse?

I won't even get into generics as that's a whole different can of worms.

Microsoft and the TS team have issued a mandate that TypeScript will always be a superset of EcmaScript and if ES2019 includes decorators that are incompatible with TypeScript's then it will be addressed at that time.

TypeScript was never meant to be just type annotations+ES6, there is Flow[1] for that.

[1] https://flowtype.org/


> let x = (id: number, name?: string) => { return; };

Hmm, it would make more sense if it were written like this:

> let x = (id: number, name: string?) => { return; };

But I guess it's more or less OK. I definitely feel uncomfortable with the !, though.

> How about React's move away from React.createClass({}) to ES6 classes extending React.Component? Do they abuse classes?

I don't use React.

> Async/await is coming too and it will turn JS on its head, does eschewing callbacks for async functions also count as abuse?

The probability is higher for ES to implement async/await than the ! operator. The former is a safe bet, the latter isn't, by a long shot.

> Microsoft and the TS team have issued a mandate that TypeScript will always be a superset of EcmaScript and if ES2019 includes decorators that are incompatible with TypeScript's then it will be addressed at that time.

It means the language will break. Not very good if you write large TS codebases today.

> TypeScript was never meant to be just type annotations+ES6, there is Flow[1] for that.

That's your opinion. I'm not interested in Flow.

Ultimately TS is successful because it more or less looks like ActionScript 3/ES4/JScript.NET and JavaScript itself. Make it too alien and the people who refused to use CoffeeScript because of its semantics will not want to use TS. And FYI, I use TS today. I may reconsider that.


> > let x = (id: number, name?: string) => { return; };

> hmm, it would make more sense if it was written like that

> > let x = (id: number, name: string?) => { return; };

> But I guess it's more or less OK. I definitely feel uncomfortable with the !, though.

I think the point is that TypeScript currently supports the optional-parameter syntax shown, and JS could be extended in a way that conflicts with that as well.

In the end, if you are using something that extends JS and you expect both that any code you write will continue to work in future versions and that the tool will be altered to stay as close to standard JS as possible as extended features are incorporated into vanilla JS, you're most likely deluding yourself. Much better to accept that the tool should either track vanilla JS, in which case you may need to refactor occasionally, or that it will eventually diverge but your code will continue to work, and then choose a tool that implements whichever strategy is acceptable to you.


Personally, I'd love to see some of the operators that F# has make it into ES stage-0 proposals...

    |> 
    <| 
    combinedFn = function1 >> function2
etc... Though the last is mostly handled with fat-arrow syntax


> I'm not interested in Flow.

Curious: Why not?


I'm not him, but the Flow compiler not supporting Windows is a dealbreaker for some.


Thanks. The Flow team is planning to announce official Windows support, hopefully in the next few weeks.


Is there a reason to use Flow rather than Typescript now?

It seems to have interprocedural type inference, which is really cool, but I'm not sure that's a feature I'd even want in Typescript, since I feel like it would make code less self documenting.

Overall it seems like there is more community support for Typescript than Flow, which is essentially just group inertia, but it makes me not want to switch.


You're right, it was never meant to just be annotations + ES6. Though when you talk to someone about TypeScript, they'll argue until they're blue in the face that it's just "ES6 with types". I guess you can argue semantics here, but don't be too surprised when people argue back that way.


Uhh, yes it is meant to be just annotations + ES6.

https://github.com/Microsoft/TypeScript/wiki/TypeScript-Desi...

The exclamation mark is a forced cast to another type, just like "x as otherType". So this is still in the same category.
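A rough sketch of that equivalence (the function names are made up for illustration; assumes --strictNullChecks):

```typescript
// Sketch: `strs!` and a type assertion behave the same at compile time,
// and both emit identical JavaScript at runtime.
function upperWithAssertion(strs: string[] | undefined): string[] {
  return (strs as string[]).map(s => s.toUpperCase());
}

function upperWithBang(strs: string[] | undefined): string[] {
  return strs!.map(s => s.toUpperCase());
}

console.log(upperWithBang(["a", "b"])); // [ 'A', 'B' ]
```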


Well it depends how you use it also. To them it really could be ES6 with annotations.


There is no difference between TypeScript's and Flow's design goals. Flow using the same extension as regular JavaScript is just a dirty marketing trick. Microsoft would be lynched for "embrace-extend-extinguish" if they did the same.

But since it's Facebook, it's completely fine that they're trying to co-opt standards. No problem at all.


Agreed – this may now be my top reason not to use TypeScript (followed by inadequate inference / support for gradual typing, differences from Babel, and a few random features), which I would otherwise very much like to use.


This is a poor reason to not use TS.

If in the unlikely worst case scenario the TS team drops/renames the ! syntax at some point, given that these are compile time checks, you will just manually go through a list of compile time errors and fix the issues one by one.

More likely scenarios:

- They will make the change in a backwards-compatible way

- They will release a jscodeshift script to do the upgrade for you

- Etc.


Okay, fair. Perhaps I overreacted.

I think the general idea of adding syntax outside of declarations really rubs me the wrong way, but perhaps it shouldn't.


I agree. I feel like this is becoming Kotlin (not a bad thing), but the syntax is quite a divergence from ES5 + types, less so with ES6. Your sentiments are bang on.


Another interesting addition is discriminated union types: https://github.com/Microsoft/TypeScript/pull/9163

This will greatly facilitate a more functional style of programming.


Union types without destructuring/pattern matching seems really strange, doesn't it? Special casing a certain property name and converting types based on a string comparison?


It's definitely not as elegant as Scala or Haskell, but I think their justification is very reasonable. The current implementation is maximally compatible with existing usages of tagged unions (e.g. Redux actions), because it allows you to choose whatever discriminant you like ('type', 'kind', whatever). Keeping very close to JS seems to be a core tenet of TypeScript and is quite valuable in my opinion. It means they can adopt new ECMAScript syntax without clashes and also ensures that newbies have a smaller learning curve. Also, they can always add in some syntactic sugar later if they want to!
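A small sketch of the feature as described in the linked PR (the names here are illustrative; 'kind' could be 'type' or anything else):

```typescript
// Discriminated (tagged) union sketch: the discriminant property name is
// whatever the author chooses; narrowing happens on string-literal comparison.
interface Square { kind: "square"; size: number; }
interface Circle { kind: "circle"; radius: number; }
type Shape = Square | Circle;

function area(s: Shape): number {
  switch (s.kind) {
    case "square": return s.size * s.size;               // s is Square here
    case "circle": return Math.PI * s.radius * s.radius; // s is Circle here
  }
}

console.log(area({ kind: "square", size: 3 })); // 9
```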


Yeah, the key here is JS-compatibility. Unfortunately, that does mean that TypeScript is sort of a dead end in terms of how far you can go wrt. the fabulousness of your code. (Of course, once everything is statically typed you'll have a much easier time just converting everything to $OTHER_STATICALLY_TYPED_FRONTEND_LANGUAGE, perhaps Scala.js, perhaps PureScript, perhaps js_of_ocaml, perhaps even GHCJS if you're adventurous.)

(Aside: I'm moderately disappointed the 'fabulousness' was not in my Chromium's spell checker dictionary. It is now.)


If by «fabulousness» you mean in terms of static typing, the language may not meet your expectations, but it seems to me it's a bit harsh to say it's a dead-end, considering it's on par with many other languages, and still improving.

Also, sorry if you meant something else; «fabulousness» is a very subjective term, not unlike «elegant» (as in «elegant code»). Because it does not have a specific and commonly shared meaning (what is fabulous to you may not be fabulous to others), it's not something I'm very comfortable discussing, although I think I have understood what you meant by it, with a low level of certainty.


> In TypeScript 2.0, the new --strictNullChecks flag changes that. string just means string and number means number.

This is great. However, it would be nice to know if this feature will become opt-out in the future instead of opt-in. In theory, if you're a TS user (as opposed to JS) it's because you want these nice features _by default_.
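To illustrate what the flag changes, a minimal sketch (variable names are made up):

```typescript
// Sketch of --strictNullChecks behavior: null and undefined stop being
// assignable to every type unless the type says so explicitly.
let firstName: string = "Ada";
// firstName = null; // error under --strictNullChecks, allowed without it

let nickname: string | null = null; // an explicit union still permits null
nickname = "Countess";
console.log(nickname.toUpperCase()); // COUNTESS
```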


I appreciate TypeScript keeping all of these things opt-in (that seems a clear goal of the project; --noImplicitAny has been like that since very early on), but it would be nice to have some sort of intentional signifier flag, --maximumStrictness or whatever you want to call it, that does a rolling opt-in to all the strictest checks as new ones are invented.


It's a tough call, and I can see arguments for both sides. On the other side, one of the big selling points of TypeScript is that you can take your existing huge JavaScript codebase, and with little effort turn it into TypeScript with all the types being "any", then gradually add strong types as time permits. This wouldn't work if --strictNullChecks was the default and would add one more step to the migration process.


`null` and `undefined` are subtypes of `any`, so you can still gradually convert to typescript.

(according to https://github.com/Microsoft/TypeScript/pull/7140)


I don't use TypeScript, but with Flow (which has always disallowed nulls in built-in and user-defined types by default), "null" is a subtype of "any", so that would still work. Is this not the case in TypeScript with --strictNullChecks?


> and with little effort turn it into TypeScript

Good point, but then how about an opt-out warning (instead of error)?


Wouldn't that break third party libraries? You'd also need a compiler flag to enable it only for your own code.


I feel that TypeScript, being only at 2.0, should be in a position to break things once in a while.

If you pay too much attention to keeping legacy code 'alive', you end up complicating matters with either compiler flags all over the place or redundant APIs, e.g. the Win32 API (old + new + newer versions of the same function).


The fact that the flag is global could actually slow down adoption of the new type system.

Let's say I want to use the new type system, but some library I depend on hasn't been updated. If I enable the new type system, I'll get both false positives and negatives when type checking.

If a library writer updated to the new type system, they'd break compatibility with callers that haven't updated. The safest option would be to maintain two almost-identical copies of the library -- one for the old type system and one for the new one.

A better option would be to allow specifying the type system mode at the top of each file (like "use strict"). This allows me to use the new type system even if not all my libraries have updated yet. This also allows libraries writers to use the new type system features without breaking existing users.


> In 2.0, we’ve started using control flow analysis to better understand what a type has to be at a given location.

Sounds like Flow. Curious to see how they converge/diverge over time.

I know there are people who hate the tooling spaghetti that modern JS development involves, but I appreciate that I can plug Flow into a babel/webpack stack and have it just do the error checking. I also have high hopes for things like tree shaking and its ilk that control flow analyzers (Flow, and now TypeScript) will bring to JS going forward.

I'm sure you could have Babel just do the experimental transforms and pipe the result into TypeScript, but having two separate sugar/transformation systems seems not-great.


> I also have high hopes for things like tree shaking

They're really starting to put their efforts behind control flow analysis. They already did a little bit of that before (with type guards), but it seems they're really serious about it now. 2.0 also adds verification for unused declarations with `--noUnusedLocals` and `--noUnusedParameters`, so I want to believe they're moving towards tree shaking.
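For illustration, the kind of dead declarations those flags would catch (a hypothetical snippet; without the flags it compiles fine):

```typescript
// With --noUnusedLocals / --noUnusedParameters the compiler would flag
// `extra` and `scratch` below as errors; without the flags it compiles.
function greet(name: string, extra: number): string {
  const scratch = 42; // unused local
  return "hello " + name;
}

console.log(greet("world", 0)); // hello world
```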


It seems to me, since Babel 6, that Flow could be a plugin/extension for Babel, and the rest of what TypeScript offers could simply expand on that... I'd love to see a bit of convergence in this space as well.

Then again, I was hoping the same thing with webpack, then we get rollup, etc... it's interesting, though hard to make some choices while staying pragmatic.


There are a _lot_ more features than non-nullable types: https://github.com/Microsoft/TypeScript/wiki/What%27s-new-in...


"Non-nullable Types"

This is the first filter I check when I am deciding whether to learn a programming language these days. Almost all real-world code I've seen has random null checks everywhere.


One thing I don't like as much in JS is those times where a number is a valid input, or a string that is a number, but not null, undefined, or other types of values.


This is fantastic. Kind of gutted they delayed Async/Await until Typescript 2.1 releases.

That's the one killer feature I'm missing.

TypeScript has been nothing short of amazing though!


In the meantime you can use async with TypeScript's ES6 emitter, which transforms async into generators, then run the output through Babel to transform the generators into plain old ES3/ES5. It's messy but it works.


If anyone is interested in this setup, I wrote a blog post recently about setting up Typescript with Webpack and React which describes how to set it up so that the Typescript output passes through Babel: http://blog.tomduncalf.com/posts/setting-up-typescript-and-r...


Thanks, it's very useful, as now I doubt async/await will ever land in TS.


How hard is it to automate this and what impact on build times does it have?


It's not hard to automate with raw gulp or webpack, but it does add a significant amount of time to compilation. It roughly doubled our incremental compile times. My team decided it wasn't worth it, but it's possible that a cleverer person could decrease the overhead.


I ask because I'm debating between trialing TypeScript or Elm for a personal project (I primarily work in Rust and Go on the backend/infrastructure), and iteration times are important when trying to get a frontend UI "just right".


You could just use TypeScript's ES6 output directly during development, using a browser that supports ES6 natively, then add Babel to the build chain for public builds that need to work on older browsers.


Do I generally need to use prerelease browsers for ES6 support? Sorry, it's been a long time since I've been on the frontend.


The stable versions of Chrome, Firefox and Edge all support ES6 generators. Safari stable doesn't support them yet but the tech preview does.

https://kangax.github.io/compat-table/es6/


It takes about 20 min. It's very easy to get TS to output ES6 (a single line in a config file). It's very easy to get Babel to output ES5 from ES6.

The only thing that might be time-consuming is setting your paths if you don't already have a plan for keeping things separate.
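The "single line" in question is presumably the compiler target; a sketch of the relevant tsconfig.json fragment (assumed, not taken from the post):

```json
{
  "compilerOptions": {
    "target": "es6"
  }
}
```

With that in place, TypeScript emits ES6 and Babel handles the rest of the down-compile to ES5.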


The control-flow analysis of types is very clever.

I've been slowly switching away from Coffeescript to Typescript and have been mostly happy with it.

Only thing I still struggle with is grappling the namespace/modules mess in Javascript that's been inherited by Typescript.


Is it inherited from JS though? I was under the impression that Typescript was doing their own thing (like /// <reference)


TypeScript picked up a few features that were ultimately removed from JS at the draft spec stage. AFAIK the XML-style references were never in a JS spec, but the convention has been around a while. TS also supports import etc.


The thing that bugs me the most is the inconsistent implementation of default imports with ES6. Makes it hard to work with a lot of libraries.


> This release includes plenty of new features, such as our new workflow for getting .d.ts files[0]

Does this mean that using Typings is no longer necessary, or is there some additional benefits that Typings still offers?

0. https://blogs.msdn.microsoft.com/typescript/2016/06/15/the-f...


Typings is only necessary when you cannot find the d.ts files from http://microsoft.github.io/TypeSearch/.


Typings is still useful for .d.ts files on package repositories other than NPM (such as directly from GitHub, Bitbucket, etc). Even then, for the moment it has more flexibility for NPM typings than Typescript on its own. Plus, it's still somewhat nice to keep your type definition dependencies in a separate typings.json rather than mixed in with package.json dependencies.


Also, for me it's still unclear how versioning is handled. It always seems like you have to use the latest version of the .d.ts and hope it works with your actual package version.


`.d.ts` packages should align on major.minor version numbers, so you'll be able to snap to a version and update appropriately.


So how are you guys building TS for web apps? I've been using VS Code, putting classes into namespaces, and it builds with CommonJS, which I concatenate into a single file that I load into a website.

That last part is in need of change because you're not supposed to concatenate CommonJS. So how have you been delivering your code to the browser?


We're experimenting with using TypeScript 2.0 for compilation, and webpack 2 for bundling. You can configure TypeScript 2.0 to compile most TS/ES6 syntax to ES5, but to leave the module statements (e.g. import and exports) as ES6.

Webpack can then analyse your ES6 module tree and perform "tree shaking", a form of dead code elimination where unused modules are excluded from the compiled bundle. This will become increasingly useful as more libraries are authored in ES6 / TypeScript, as it will allow you to import the whole of a framework like Angular or React, but only ship the parts of it that you actually use in your application.
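A sketch of the tsconfig.json settings that setup implies (assumed; compile syntax down to ES5 but keep ES6 module statements for webpack 2 to analyse):

```json
{
  "compilerOptions": {
    "target": "es5",
    "module": "es6"
  }
}
```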


And I wanted to add that, while webpack can appear a little intimidating at first, it's actually very easy to set up, and in many ways much simpler than the kind of jerry-rigged concatenation and minification workflows we used to build with Gulp and Grunt.


In the past I've used TS with AMD/RequireJS. These days I've been using TS with JSPM and SystemJS, with various combinations of Babel in the mix and JSPM/System bundler.

I have projects that configure SystemJS to use TS itself as the only ES2015/TS transpiler (and thus no emit necessary in development, let the browser compile it, which makes for a fairly nice rebuild). You can even use TS directly as your bundler in this case if all you want to bundle are your TS sources as it supports the SystemJS registration format.

I also have some projects that still use Babel as a transpiler with TS emitting ES2015+JSX to it (still in the JSPM/SystemJS stack). I've particularly needed this because TSX is great, but its support for non-React TSX emission is still somewhat lacking in my opinion versus what easily works in Babel.


I've been using Typescript with Webpack (and Babel) for the last 9 months or so and in general been very happy with it, I'd highly recommend it as you get all the Webpack and Babel goodness (hot module reloading for React, async/await, the option to use Babel plugins etc.) on top of Typescript. I've written a step by step guide to setting up Typescript, Webpack and React at http://blog.tomduncalf.com/posts/setting-up-typescript-and-r... if you're interested to try the set up out.


I'm currently using a "TypeScript loader" for either Webpack or Browserify.

If that seems too much, the other option is to output the compiled code as AMD modules [1]. But I have not tried it myself. Regardless, the option is still there, and, IMO, it seems like a painless transition away from concatenation.

[1] http://stackoverflow.com/a/36320145/538570


Never make clever use of undefined in JavaScript, as it will always result in a bug when someone decides to be a good citizen and gives the variable a value at the same time it's declared.

Always be explicit, like var foo=-1; if(foo!=-1) instead of only var foo; if(foo). It will also help the optimizer


> null and undefined are two of the most common sources of bugs in JavaScript. Before TypeScript 2.0, null and undefined were in the domain of every type.

So true. Happy to see this. Makes me think of how Rust works.


I'd love to see partial classes. It would be so helpful combined with generated classes. Does anyone know any alternative to partial classes with the same functionality as partials in C#?


Check out this:

http://justinfagnani.com/2015/12/21/real-mixins-with-javascr...

Might provide you with some functionality you're looking for. Since TypeScript is an ES6 superset, I'm assuming this will work in that context.


Explain please, why anybody would want both null AND undefined?


Similar to why you would want to have both false and 0; it is possible to have a situation in which you wish to define something as explicitly being null.

I do think it's a little weird, but it's useful to be able to say that "this key exists and its value is null" as opposed to "there is no such key".
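A tiny sketch of that distinction (object shapes are made up for illustration):

```typescript
// Sketch: the `in` operator distinguishes "key present with value null"
// from "no such key at all".
const withKey: { middleName: string | null } = { middleName: null };
const without: { [k: string]: string | null } = {};

console.log("middleName" in withKey); // true  (key exists, value is null)
console.log("middleName" in without); // false (no such key)
```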


I thought about that too, but this breaks down once you realize that setting obj.key = undefined; will mutate an object into something which is neither "the key exists and its value is null" nor "there is no such key". The difference can, e.g., bite you in things that iterate over the keys of an object, or functions that do that internally (like deepEquals implementations).

I guess in TS2 there would now also be a difference between a field defined as key: string | null | undefined and a field defined as key?: string | null | undefined, because the first one requires setting the key to one of the values on initialization and the other one does not.
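A sketch of that distinction, simplified to `string | undefined` for brevity (assumes --strictNullChecks; the interface names are made up):

```typescript
// A required-but-undefinable field vs. an optional field.
interface WithKey  { key: string | undefined; } // key must be written out
interface MaybeKey { key?: string; }            // key may be omitted entirely

const a: WithKey = { key: undefined }; // omitting `key` would be an error
const b: MaybeKey = {};                // fine

console.log("key" in a); // true, even though the value is undefined
console.log("key" in b); // false
```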


I usually find it useful in APIs. When you send an object across, undefined means no value was sent; null means a null value was sent.

That is the difference between not changing a value and changing it to null.


It's supposed to represent intention. Since all functions in JS are variadic, there has to be a way to differentiate between a parameter not being passed and an "empty" value that is explicitly passed. Technically, you should never explicitly assign `undefined` to anything yourself, but you can propagate it.
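A sketch of how that distinction is observable in practice (function name is made up; note that `arguments` only works in a regular function, not an arrow function):

```typescript
// Because JS functions are variadic, arguments.length can tell
// "not passed at all" apart from "explicitly passed undefined".
function describeArg(x?: number): string {
  if (arguments.length === 0) return "not passed";
  return x === undefined ? "passed undefined" : "passed " + x;
}

console.log(describeArg());          // not passed
console.log(describeArg(undefined)); // passed undefined
console.log(describeArg(7));         // passed 7
```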


It can be useful when calling a function and you want to say "I don't care about parameter X" when null can be an otherwise valid value for that parameter.

I've also found it useful when a function needs to indicate it failed but is in a situation where exceptions are not useful. This is code where the consumer of the function can throw its own much more useful exception or can try to recover from the function failing (eg: it can try and fix the fault). This helps people avoid using exceptions for flow control of the application and lets exceptions be truly exceptional.

Examples:

  //3rd party code could dictate 'bar' be in the spot that it is
  function needsUndef(foo, bar, other){
    if(bar === undefined) bar = expensive_query();
    //rest of code
  }

  needsUndef(42, cachedResultExpensiveQuery, something);
  needsUndef(42, undefined, something);
and:

  foo = getFromDisk(foo_path);
  if (foo === undefined)
    foo = getFromNetwork(last_resort);

is much nicer than:

  try {
    foo = getFromDisk(foo_path);
  } catch (ex) {
    // Yes, I know you can filter exceptions, but what if someone forgets
    // or casts too wide a net?
    foo = getFromNetwork(last_resort);
  }

or:

  if (path_isValid(foo_path) && canReadFromPath(foo_path))
    foo = getFromDisk(foo_path);
  else
    foo = getFromNetwork(last_resort);


For the second case, most people use options in languages that have them.


They'd want it indirectly, because they want to use or target javascript, which has always had both. There's very little reason why someone would have a primary goal of using both.


undefined typically means it has yet to be defined, whereas null means it does not have a proper value set. For example, I use null where there is an exception and I cannot set a variable to a sane value.

When I see null it means a value could not be set or that it was intentionally unset. undefined means it was never set in the first place.



