I just did some refactoring on a medium-sized code base and here are a few things to watch out for when adopting optional chaining and the new nullish coalescing operator:
foo && await foo();
is not the same as
await foo?.();
This will work in most cases, but there is a subtle difference: the await wraps the undefined case into a Promise, while the original code would skip the await altogether.
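To make the timing difference concrete, here is a small ordering sketch (my own example, with a hypothetical foo that happens to be undefined):

declare const foo: (() => Promise<void>) | undefined; // pretend foo is undefined at runtime

async function original() { foo && await foo(); console.log("original done"); }
async function rewritten() { await foo?.(); console.log("rewritten done"); }

original(); rewritten(); console.log("sync code");
// With foo undefined this logs "original done", then "sync code", then "rewritten done":
// the rewritten version always yields to the microtask queue because of the await.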
String regular expression matching returns null, not undefined, so rewriting code such as:
const match = str.match(/reg(ex)/);
return match && match[1];
is not the same thing as:
return match?.[1];
because the latter returns undefined, not null, in case of match failure. This can cause problems if subsequent code expects null for match failure. An equivalent rewrite would be:
return match?.[1] ?? null;
which is longer than the original and arguably less clear.
A common idiom to catch and ignore exceptions can interact poorly with optional chaining:
const v = await foo().catch(_ => {});
return v?.field; // property 'field' does not exist on type 'void'
This can be easily remedied by changing the first line to:
const v = await foo().catch(_ => undefined);
Of course, these new operators are very welcome and will greatly simplify and help increase the safety of much existing code. But as with all things syntax, being judicious about usage of these operators is important to maximize clarity.
Going between Rust and TS, it's the judicious use of enums that JS really feels like it's missing. The whole hoopla with undefined/null/NaN/etc. could be avoided with a simple enum type. Not to mention the entire concept of Exceptions.
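For what it's worth, you can get most of the way there today with a discriminated union; a rough sketch (the Option type and names below are made up):

type Option<T> = { kind: "some"; value: T } | { kind: "none" };

function firstGroup(str: string, re: RegExp): Option<string> {
  const m = str.match(re);
  return m ? { kind: "some", value: m[1] } : { kind: "none" };
}

const r = firstGroup("abc", /a(b)c/);
if (r.kind === "some") console.log(r.value); // the compiler forces you to handle the "none" case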
Nullish alone had me back to == instead of === once I went TS. There's no reason to care about identity when TS makes sure I don't compare a string to a number, and treating undefined == null as true is what I want 99.9% of the time; for the other 0.1% I should be explicit that I do care about treating them differently.
Sorry, JS? While JS might get this stuff one day, these language features are for TypeScript, which is its own language. It's strongly typed and just happens to interop with and in some scenarios transpile down to JavaScript. Its whole existence is to deal with that billion dollar mistake you mentioned.
Speaking of which, optional chaining and null coalescence are core language features of some very good languages. Kotlin and C# for instance. Kotlin, much like TypeScript, interops with a "broken" language (Java and the JVM in its case) and attempts to address some core deficiencies in that ecosystem. Let us have nice things!! :)
I am really excited about these new features and hope they do land in JS sooner rather than later. I hope they do the pipe operator next! `pipe |> operator |> plz`
> TypeScript, which is its own language. It's strongly typed and just happens to interop with and in some scenarios transpile down to JavaScript. Its whole existence is to deal with that billion dollar mistake you mentioned.
I'm afraid literally everything in this snippet is incorrect. The TypeScript website opens with:
> TypeScript: JavaScript that scales. TypeScript is a typed superset of JavaScript that compiles to plain JavaScript.
TypeScript is JavaScript. It's a superset, and as such its improvements are additive-only, by definition. The purpose of its existence is not to change or replace any JS features, only to augment them.
In the same way as programming languages are not organisms.
TypeScript is ECMAScript in the same way as Netscape JavaScript is ECMAScript, in the same way as ActionScript is JavaScript.
In the same way as Scheme is Lisp.
In a similar but not identical way as American English is English (similar but not identical since programming languages and spoken languages are both languages, but not actually the same thing: the analogy cannot map perfectly).
> optional chaining and null coalescence are core language features of some very good languages
They are features of Maybe/Option too, just in a more consistent, extensible way.
val a = Some(thing)                          // Option[Thing]
val b = a.flatMap(_.part).flatMap(_.subpart) // stops at the first None along the chain
val c = Some(None) // look Ma! a nested option!
> While JS might get this stuff one day
I appreciate the attempt at pedantry, but TypeScript generally only implements JS language features or TC39 Stage 3 proposals.
> I hope they do the pipe operator next!
Great example. After many hundreds of upvotes, the status is "waiting for TC39". [1] I.e. it will not be implemented until JS has it or is close to having it.
I had actually listed decorators as a rare counterexample, but then removed it from my comment for brevity.
1. Decorators are currently TC39 Stage 2. [1]
2. Because of its unusual status, TypeScript lists decorators as "experimental, subject to change". Once the ECMAScript proposal advances, TS will ensure it is compatible with the ES standard and then remove the experimental sticker.
3. If it doesn't affect runtime, then you shouldn't expect JS to have the feature. TS private members are no different than public members at runtime; there is no need for JS to have that feature. (Note that TypeScript deliberately chooses not to name-mangle members.)
4. Sure, call it a "feature and syntactic superset", but that doesn't make TS any less beholden to JS. Dedicated adherence to that property commits them to support every feature and syntax that JS adds; future-proofing means they can't really add anything unless JS already has it or is going to have it.
5. Note that TS abandoned having a standard long ago. The behavior and validity of a TypeScript program is determined by "whatever tsc does" and the ES standard.
6. Again, if you need convincing that TS effectively only implements JS features, refer to the linked pipeline issue, locked as "waiting for TC39."
> and in some scenarios transpile down to JavaScript
It always transpiles to JavaScript and always runs as JavaScript. There is no such thing as a TypeScript runtime engine.
TypeScript is a superset of JavaScript. Therefore the OP's point is still valid. Any "mistakes" JavaScript might have made about having null AND undefined are also issues for TypeScript.
I don't know what you mean here, but it is certainly possible for TypeScript code to use JavaScript libraries and vice versa, which is presumably what most people mean by "TypeScript interops with JavaScript".
> It always transpiles to JavaScript and always runs as JavaScript.
> TypeScript is a superset of JavaScript. Therefore the OP's point is still valid. Any "mistakes" JavaScript might have made about having null AND undefined are also issues for TypeScript.
TypeScript adds type checking to JavaScript. The Billion Dollar Mistake is having unchecked nulls; TypeScript supports checked nulls, so it's not an issue. TypeScript's nulls are much more similar to Maybe/Option than unchecked nulls are.
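Concretely, with strictNullChecks on, you can't touch a nullable value until the null case is handled; a minimal sketch:

function shout(name: string | null): string {
  // return name.toUpperCase();        // error: Object is possibly 'null'
  return name !== null ? name.toUpperCase() : "ANONYMOUS";
}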
> I don't know what you mean here, but it is certainly possible for TypeScript code to use JavaScript libraries and vice versa, which is presumably what most people mean by "TypeScript interops with JavaScript".
Ah, I can see what you/they mean by that. The point I was trying to get across was: TypeScript doesn't exist when code is actually executing (which is what I think of as interop - it's happening at execution time.) At execution time - it's all just JavaScript.
I have found (working in a TypeScript team currently) that this fact is ignored, primarily by people who "look down" on JavaScript, but it is a VERY important point to remember when you are writing TypeScript, mostly because there is only compile-time type checking, not run-time.
Heh, yes - as soon as I posted I realised that was silly. The word "always" is almost "always" incorrect! I should have said: "It usually transpiles to JavaScript and usually runs as JavaScript"
> The Billion Dollar Mistake is having unchecked nulls; TypeScript supports checked nulls so it's not an issue. TypeScript's nulls are much more similar to Maybe/Option than unchecked nulls
Good point in theory but my practical experience hasn't borne this out. That is because TypeScript is an "optionally typed" language and it hasn't been true in practice because of excessive use of explicit or implicit "any"s.
> > The Billion Dollar Mistake is having unchecked nulls; TypeScript supports checked nulls so it's not an issue. TypeScript's nulls are much more similar to Maybe/Option than unchecked nulls
> Good point in theory but my practical experience hasn't borne this out. That is because TypeScript is an "optionally typed" language and it hasn't been true in practice because of excessive use of explicit or implicit "any"s.
I think that's a matter of your team's discipline. It's good practice, I think, to enable TypeScript's strict checks, including noImplicitAny, and, to the best of your ability, to keep people who don't understand types away from explicit any and to fail any code that uses it. `any` is basically never necessary, even when typing existing code - if you genuinely don't know what the type is at a certain point, you should probably write a type like `unknown`.
If you take any of TypeScript's options to "ease the transition", you're taking TypeScript's options to continue the difficulties. One moves to TypeScript because JavaScript's runtime errors are a problem, so it is natural that you will have novel compile-time errors.
AssemblyScript is not TypeScript.
Especially so if you don't consider TypeScript to be JavaScript; I'd argue that the difference between TypeScript and AssemblyScript is bigger than that between TypeScript and JavaScript.
This does not necessarily invalidate your wider points, but just FYI:
> It does not interop with JavaScript.
Hm, this depends on your definition of "interop". My JavaScript and TypeScript are languages that exchange information. The execution model ultimately involves JavaScript when I use tsc, but ultimately it also includes an interpreter. My user-space syntax doesn't care.
> There is no such thing as a TypeScript runtime engine.
V8 is a JavaScript runtime that compiles JavaScript on the fly, so it seems reasonable to say that a runtime that compiles TypeScript on the fly is a TypeScript runtime.
These features are copied from TC39 proposals for JavaScript. Presumably they were deemed safe to add to TS now that they have reached Stage 3 as JS proposals.
Not so much 'copied', considering it was the TS team that has gotten them added to JS. They just didn't want to commit to anything in the TS codebase that wasn't at least stage 3 in JS.
Out of curiosity, have there been features that were thought to be part of JS one day, added by TypeScript, and then abandoned for JS again, leading TS to rip them out as well? Or do such vestiges remain in TypeScript and have to be changed into different constructs on compilation?
I don't see it as two kinds of null: there is a null value, and then there is the fact that no value has ever been defined, which is undefined.
It can be useful to have the latter case distinguished in a dynamic language because it can enable certain powerful patterns. At the end of the day, compressing both these cases to a single concept of null would be lossy. This may have certain advantageous implications for simplicity, but you're trading that off for language power. Which you favour more of course depends on the usecase.
There is yet a third type of missing value in JS: empty array slot. This appears when you create an array with a length but no values for indexes in that length, e.g. `new Array(100)`.
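A quick illustration of how holes differ from explicit undefined (my own example, output shown as Node prints it):

const holes = new Array(3);                // length 3, but no elements at all
const undefs = [undefined, undefined, undefined];

console.log(0 in holes);                   // false - the slot simply doesn't exist
console.log(0 in undefs);                  // true  - the slot exists and holds undefined
console.log(holes.map(() => 1));           // [ <3 empty items> ] - map skips holes
console.log(undefs.map(() => 1));          // [ 1, 1, 1 ]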
You have to watch out for the first and last ones in JavaScript, but not in TypeScript, where it isn't possible to make that mistake because you have to type it as a promise in the first case, and as void in the last one.
You can even avoid the problem in the second one by using NonNullable TypeScript types, but I admit that's not common, so it's still likely to arise.
The first example can happen in TypeScript; foo has type:
(() => Promise<void>) | undefined
Admittedly it may not be all that common to have a function-valued variable that may be undefined, but it happened in the code base I was working with.
In the last example, you're right that TypeScript will catch this at compile time. My point was to show how this compile time error can happen from refactoring to use optional chaining, and one easy solution in this case.
One of my remaining gripes with JavaScript/TypeScript is the try/catch mess around await. It makes assignment of a const a pain for async calls that may reject.
e.g.
let result: SomeType | undefined;
try {
  result = await funcThatReturnSomeType();
} catch (err) {
  doSomethingWithErr(err);
}
// at this point result is `SomeType | undefined`
if (result) {
  doSomething(result);
}
I really want some kind of structure that allows me to make `result` constant. In some cases I've rolled my own Maybe/Either wrapper and moved the try/await/catch into a function, but that is still a pain.
This is such a common pattern in my code ... I wish there was a more elegant way to deal with it.
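For reference, one possible shape for such a wrapper, as a sketch (the names here are made up, not from my real code):

type Result<T> = { ok: true; value: T } | { ok: false; error: unknown };

async function attempt<T>(promise: Promise<T>): Promise<Result<T>> {
  try {
    return { ok: true, value: await promise };
  } catch (error) {
    return { ok: false, error };
  }
}

// const result = await attempt(funcThatReturnSomeType());
// if (result.ok) doSomething(result.value); else doSomethingWithErr(result.error);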
Can you hoist the `if (result)` into the `try` part of the statement? (Without seeing more context hard to know why that wouldn't work for you).
Another pattern to avoid the above is to remember that async functions return promises and that .catch() also returns a promise. So your above logic can be written as:
const result = await funcThatReturnSomeType().catch(doSomethingWithErr);
if (result) {
  doSomething(result);
}
You can also get rid of `if(result){}` by setting the return type of "doSomethingWithErr" to "never":
function doSomethingWithErr(err: any): never {
  throw new Error("Oops");
}

let result: SomeType;
try {
  result = await funcThatReturnSomeType();
} catch (err) {
  doSomethingWithErr(err);
}
// because doSomethingWithErr has return type "never", result will be definitely assigned.
doSomething(result);
Interestingly when I was encountering this myself recently, I discovered that JS finally blocks can return after a function has nominally already returned. Consider the following closure.
(() => {
  try {
    // finally will return prior to this console.log
    console.log('this try was executed and the return ignored')
    return 'try block'
  } catch (e) {
    return 'error block'
  } finally {
    return 'finally block'
  }
})()
I don't think that's the right way of thinking about it. The behavior I see is consistent with my understanding of `finally` from other languages.
Basically, `finally` gives you a guarantee that it will actually run once the `try` block is exited. Likewise, `return` effectively assigns the return value and exits. But it doesn't (cannot and should not) breach the contract of try-finally, since the purpose of try-finally is to ensure resources are managed correctly, so it either exits to the caller or it exits to the finally block, depending on which one is most recent.
In your case, a return value is assigned and the `try` block is exited using `return`. We then have to continue into the `finally` block, since that is the core meaning of `finally` - we run it after we leave `try`. And then with `return`, we reassign the return value and finally leave the whole function. At this point, the return value is the second one that was assigned to it.
Maybe thinking of it like this is helpful, although I somewhat hope it isn't. You can see that "return" is reassigned before we have a chance to read it. I've simplified by removing any consideration of errors, but I console.logged the final output.
// this is the function call at the end of your IIFE
next_code.push(AfterMe)
goto Anonymous

// this is the function definition
Anonymous:
  // this is the try-finally idiom
  next_code.push(FinallyBlock);
  // this is the try
  console.log("this try was executed");
  // these two lines are the first return
  var return = 'try block';
  goto next_code.pop();

// this is the finally
FinallyBlock:
  var return = 'finally block'
  goto next_code.pop();

// this code gets executed from the FinallyBlock's goto and is as if you had a console.log(..) around your whole definition.
AfterMe:
  console.log(return)
It's probably either this behavior or a statement in the finally block that doesn't run, even though it's guaranteed to. Either way, some assumption of normal program behavior is invalidated.
Unless one goes the PowerShell way and just forbids returning from finally.
> Can you hoist the `if (result)` into the `try` part of the statement?
And now you wrap the function call to doSomething() in the try/catch too. Often (usually?) the try/catch specifically is for the asynchronous function. Usually that's because the async. stuff might fail due to expectable (even if undesirable) runtime conditions (e.g. "file not found"), while synchronous code should work for the most part (external condition errors vs. coding errors - and your catch is about the former, because, for example, you might want coding errors to just crash the app and be caught during testing).
Sure, you can claim that you check for specific errors that could only happen in that function, so that any errors occurring in doSomething() don't matter/don't change the outcome, or that doSomething() never throws because you are sure of the code (the async function may throw based on runtime conditions, but you may have development-time control over any issues in doSomething()). But if you start going down that path, having to rely on the developer doing their job perfectly for each such construct, later maintainability and readability go down the drain. You would have to make sure such a claim is still valid whenever you later come across the construct. That is why you really don't want anything inside the try/catch from which you don't want to see errors in the catch block. So my policy is to never do that, even if in the given context it would work - it places additional work on whoever is going to read that section later (or they are ignorant of the problem and won't see it, which is not any better).
I’ve run into the exact same situation and continue to repeatedly. In fact this is a stupidly common pattern in a library I’m working on at work right now.
My solutions have involved casts (and comments explaining the assumptions involved) instead of the ‘if’ statement more times than I’d like, but it ends up with the same result without the added (however small) compute of the conditional, since it can be safe to assume the value is not undefined. It’s not perfect, but it at least omits unnecessary runtime code.
This right here is exactly why Errors should be enums, instead of this wacky exception handling system. Try/catch is why I frequently prefer plain old .then and .catch instead
Yep. Doesn't even feel so different from using error-first callbacks. I had to go back to using them for something recently, and feel like most of my problems back then stemmed from excessive deep nesting, and overreliance on the closure scope. Don't get me wrong, I love the Promise API, but if TS were popular pre-Promises, we wouldn't have had half the issues with callbacks for concurrency. You'd be able to type functions that take callbacks, know what errors they can receive, see if anything's out of scope or misused, etc.
IIFEs are a nice go-to in TS in general, because type inference is so handy and I'd rather not have a big pile of typed `let`s at the top that I have to assign into later. Even something as simple as a switch plays nicer if you wrap it in an IIFE and use it like an expression (similar to a match in Rust).
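For example, a sketch of the switch-as-expression trick (the `kind` input is made up):

declare const kind: "a" | "b" | "c";

const label = (() => {
  switch (kind) {
    case "a": return "first";
    case "b": return "second";
    case "c": return "third";
  }
})();
// label gets a useful inferred type without a `let` declared up front;
// forget a case and undefined sneaks into the type, which strict code will flag.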
Another commenter pointed out that .catch is perfectly fine in the async/await world, even though you pretty much never use .then anymore. Here's another way of doing it:
Right, which goes back to the question shtylman asked about why doSomething couldn't be part of the try block. I'm tossing this form out there in case it helps. People sometimes forget that await can go wherever an expression goes.
I really like the optional chaining operator for statically typed languages. Especially in TypeScript, where you have the nullability information baked into the type system.
However, in JS itself, it might cause developers to lose track of what can be null/undefined in their code. In case they start to "fix" stuff by throwing in some "?." because they don't know better, the code maintainability will degrade a lot.
Maybe I'm just pessimistic. Let's see how it will perform in the field!
It's nice for situations where you want to access a deeply nested prop, and you only care whether the whole path is there or not. It saves you having to add a separate check for every level of the hierarchy.
e.g. You can do:
foo?.bar?.baz || "default";
Rather than:
(foo && foo.bar && foo.bar.baz) || "default";
Agree that developers can be careful about nullability (in fact I pulled someone up on this in a code review earlier today), but I don't think this feature makes that any worse.
The problem is that if you find yourself needing deep accessors, something is very wrong with your scopes. You are reaching across many levels of concerns which is a code smell.
So, by making it “nice” you are making a code smell less smelly, which feels good in the moment, at the syntax level, but makes your code worse at the architecture level.
This is roughly the story for all of ES6... make it “nice” to work with bad code, allowing bad code to look more similar to good code, until everything looks “nice” at the syntax level but you are surrounded by footguns that are impossible to find, and you need more and more static analysis tools (like TypeScript) to even be able to comprehend your control structures.
Callback hell isn’t bad because of indentation, it’s bad because there are too many handoffs in a small space. Promises make it easier to pack more handoffs into a small space, and guess what? Now the problem is even worse.
This new ? operator will make it easier than ever to pass on undefined values. In other words, it will make the problem it solves even worse.
> if you find yourself needing deep accessors, something is very wrong with your scopes
I'm not sure I agree with that statement in all scenarios. For code you control, sure.
But there are many APIs that return very deeply nested structures that are inconsistent in their shape. That, in my view, is the most common place devs will need deep accessors where parts may be null/undefined somewhere in between the root object and the key they are trying to access.
Sure, they could write functions that, similar to get in lodash, expose just the values needed, at which point chaining wouldn't be needed at all in the code that deals with that value. Or it could be serialized into a class object, but again, the chain would be dealt with in the serialization. At some point, the chain needs to be dealt with and often the JSON structures from APIs are not something that's always under our control.
I agree that the main scenarios people are excited for are probably the exact opposite of the ones we should be looking to solve with this. Indeed my last comment right before posting this was about how callbacks are better than we remember (and if we were on ts when using them, probably wouldn't have minded at all).
Deeply nested object props are often bad, except for when it's a big dynamically-defined object (eg a directory tree that you would traverse with lodash get). It's also great when you're not deeply nested, you're simply checking for the inclusion of something in a collection that is itself optional. Eg a Map#has on an optional map.
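E.g. a tiny made-up example:

declare const seen: Map<string, number> | undefined;

if (seen?.has("user-42")) {
  console.log("already processed");
}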
It came to my mind because I've seen some Angular templates (they had this syntax before TypeScript did) where the dev just threw in some "?." to fix only the symptoms of a bug. This resulted in some ngIf condition always being falsy and never showing a specific element.
The real bug was that the property in question should never be null/undefined in the first place. It was a lot harder to find the error.
Also, I've seen some typos (or missed renames) go unnoticed without any errors. This caused some weird behavior in the frontend where there was no exception but there was definitely something wrong. Or, worse, the bug is never noticed and other code comes to depend on that behavior.
So if the language is statically typed, the feature is awesome because the cases mentioned above will trigger a compile time error.
I've seen some misuse in dynamically typed ones.
I agree, I’m dreading the question mark in my codebase. Everyone is just going to start throwing it everywhere and you’ll never be able to reason about what exists and what doesn’t.
If something is null or undefined, you should deal with it in one place, the correct place. And downstream code should be able to assume all data exists.
Is it not bizarre that you're blocked on a syntax formatter to make a release? If that's what's happening, it sounds like prettier should be core and non-optional, the same way gofmt is.
The commenter was talking about a release of create-react-app, which is a starter-kit/scaffold. For such a project with the goals of being a packaged collection of tools/configurations, it seems prudent to only release if your set of tools present a consistent experience.
Many js developers see formatters as a sane default, and create-react-app is a layer in the ecosystem that incorporates it as a "core" piece in the way that you meant above.
I really like the new optional operator; this might be what gets me to bite the bullet and start moving some of my projects over to TypeScript - dealing with potential undefined objects in those chains is one of the things I actively dislike about writing in vanilla JavaScript.
I'm assuming by optional operator you're referring to optional chaining? If so, it's very cool but strikes me as an odd reason to move to TypeScript, because it's a stage 3 proposal in JavaScript too, so is likely to be widely supported soon.
The TypeScript team only implements features whose chance of landing in JS is either 100% or 0%. Therefore, they wait for Stage 3.
If they start implementing features at an earlier stage, there is the risk of implementing a feature with different semantics in TS than in JS, since the JS spec can still change (or even get rejected). Both cases would result in diverging languages, which is something they try to avoid.
Edit:
In the past, that didn't work that well. TS 3.8 is planned to ship with JS private fields support (the one with the # syntax). TS had private fields for a long time. In fact, it's one of the first things that the language had.
However, these private fields behave entirely differently from what ended up in the ES private fields spec. They can't change it afterwards. So in the future, we will have two semantically different ways of declaring a private field in a class in TS.
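To make the difference concrete, a small sketch of the two flavours (illustrative only, not from the 3.8 docs):

class Counter {
  private ticks = 0; // TypeScript `private`: checked at compile time, erased in the emitted JS
  #clicks = 0;       // ES private field: enforced at runtime by the engine
}

const c = new Counter();
// (c as any).ticks   -> works at runtime (it's an ordinary property); only the type checker objects
// (c as any).#clicks -> not even valid syntax outside the class body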
Another case are decorators. They are still in stage 2 and may change. They already exist in TS, behind a flag. But every Angular application depends on their current implementation in TS. If the spec changes, it will get interesting.
I would argue that TypeScript absolutely can change things that would be considered breaking changes. And they do have breaking changes in just about every release (sure wish they followed semver for that reason). They'd just want to be careful with big things like private fields or decorators, making sure the breaking changes get communicated publicly and loudly.
Not just Angular, TypeORM and NestJS and a bunch of other server frameworks written in TypeScript make heavy use of decorators. The JavaScript spec for them has been rewritten twice since what TypeScript has so there is going to be a lot of trouble.
This is correct. Move to TypeScript for the types. If all you want is the latest syntactic developments from ES, babel is a better fit. It tends to be further ahead than TS for the bleeding edge features.
You can even make your own definitions and syntax sugar using Babel. (If we continue to add more syntax sugar to JS, it will soon be more complicated than C++.)
You can as well in TypeScript, it's just never been promoted or talked about much. Which is a shame. When I get a decent chunk of free time I plan to play with it, I'm curious if it's possible to write a translation layer that would allow using babel plugins in TS.
I can almost guarantee you'll like it. Kotlin has this same feature and I've found it saves a good amount of boilerplate null checking when inter-operating with lousy legacy Java code.
>Just be sure to not be too strict in your tsconfig.json in the beginning.
Do you mean to avoid using the "strict" rule, or are you referring to other rules? My number one Typescript suggestion is to make sure you start with "strict": true in your tsconfig.json to be sure your code has nullability correctly enforced.
I'm talking about "noImplicitAny", "noImplicitReturns", "noImplicitThis"... they get in my way when I'm editing. When I'm about to push my code I activate them back.
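For anyone wondering, these are ordinary compilerOptions flags; roughly what the "switched back on" state looks like in tsconfig.json (values are just an example):

{
  "compilerOptions": {
    "strict": true,
    "noImplicitAny": true,
    "noImplicitReturns": true,
    "noImplicitThis": true
  }
}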
`get` will destroy type inference, there's no workaround. Your output types are always `any`. The new optional chaining fixes that problem, giving you the same terseness as `get`
In lodash's defense, there is a @types package for it, which has a typed _.get. However, in my experience, the lodash typings are really difficult to use correctly.
Definitely will be replacing usage of _.get with optional chaining soon.
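To illustrate the difference (this is how the typings behave as far as I can tell, so take it with a grain of salt):

import * as _ from "lodash";

declare const obj: { a?: { b?: { c: number } } };

const viaGet = _.get(obj, "a.b.c"); // with a dotted string path this comes back typed as any
const viaChain = obj?.a?.b?.c;      // number | undefined - inference survives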
I feel really excited about 3.7. Optional chaining and null coalescing will clean up a TON of code.
... but with that being said, 3.7 seems to have broken many aspects of the `Promise.all` interface. Right now the largest issue seems to be that if any `Promise` result in `Promise.all` is nullable, all of the results are nullable.
Yeah, thank you. I think it's questionable that they shipped with this kind of regression present (especially having known about it a month before release), but it's an easy fix to just override the core types with the ones in #33707.
This is a moment where the postfix `!` operator comes in handy.
It is a well hidden secret, and one that I'm not going to try to look up the docs for on mobile, but the idea is that the operator strips null/undefined from the type of whatever is before it.
So you can do something like this:
const [ a, b ] = await Promise.all([
  async () => ({ foo: 'bar' }),
  () => null
])
console.log(a!.foo) // `a!` strips the nullable off `a`
When I plug in your example (in Chrome console and in the TypeScript compiler), `a` is type `() => Promise<{foo: string}>` and `b` is type `() => any`.
I thought you had to actually provide a Promise to `Promise.all`, not functions that return promises (you have to invoke the functions):
const [ a, b ] = await Promise.all([
  (async () => ({ foo: 'bar' }))(),
  (() => null)(),
])
When I do this, `a` and `b` have their proper types.
Same with Swift. It is one of my favorite features and one I miss most after moving mostly to React Native. I also love `guard` and `if let` in the same vein of boilerplate-reducing sugar.
Definitely. It's a shame that dart gets so much unwarranted hate. I mean, I also think it's an absolutely awful language but it's really proven to be a valuable source of data for what other programming languages should do and perhaps more importantly: not do. I really hope we see a lot more things like Dart, and not so much negativity.
We don't tend to have detailed roadmaps because it risks setting people up for disappointment when schedules change. But what's roughly happening right now is:
- Non-nullable types are well underway. All but a few corners of the design are pinned down, much of the static checking is implemented, the core libraries have been mostly migrated, and we're working through the runtime implementation, migration tests, etc.
- Next up after that, the current plan (which may change) is control over variance and stuff around pattern matching.
What's the end state for TypeScript? Does it one day become feature complete and slow down? One of the frustrating things about trying to find help with TS today as a beginner is the plethora of SO and blog posts talking about much older versions. If you're lucky there'll be some comment "As of 2.6 you can now do ...", but even 2.6 is a lot of versions back - how do I know that's still the best way to do it in 3.7?
I'm for progress, but it makes me a little wary to start my team of no-previous-TS-experience JS devs on a TypeScript project when there's still a new version every few months. Keeping up with Webpack is enough of a hassle...
JavaScript itself keeps changing, so we need to keep pace with that, for one. Beyond that, we relentlessly seek to improve how people interact with our editor tools, and make additions to the (type) language and compiler to support that. .d.ts files from js files, for example, are highly motivated by a desire to better support incremental compilations with .js inputs, as .d.ts files are used as incremental metadata. Assertion signatures were added to better express the cross-call control flow patterns some (assertion) libraries already use, to make using them in a well-typed way more ergonomic.
Generally speaking, it's not often we add something that _invalidates_ the old way to do something (in the language) - our additions are usually made to make new patterns possible to express.
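For anyone who hasn't seen assertion signatures yet, the 3.7 feature mentioned above looks roughly like this:

function assertIsDefined<T>(value: T, name: string): asserts value is NonNullable<T> {
  if (value === null || value === undefined) {
    throw new Error(`Expected ${name} to be defined`);
  }
}

declare const maybeTitle: string | undefined;
assertIsDefined(maybeTitle, "maybeTitle");
maybeTitle.toUpperCase(); // narrowed to string after the assertion call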
Thanks for the response. I do appreciate the efforts of the team, and am planning to continue learning TypeScript :)
Maybe this exists, but one thing that would be super helpful is a document that explains any places where there's a "new best way" of doing things, and also details what the old way was.
Here's an example from yesterday: I was Googling about extending a type (I think) and the first SO result (top of Google) says "you can't do that in TypeScript"... but reading the comments someone had said "actually you can, in 2.6 or later". That's the kind of thing that it would be great to have summarised in one place. Not necessarily all the changes (ie, not just all the release notes) but specifically where the language has changed and an old way of doing things, or a previous restriction, is gone. Preferably with examples. (I'd try to create this, but I don't have the skills to do so).
We actually primarily use StackOverflow for this (as it's the first resource many devs go to, as you did, and it's collaborative). We even pre-seed questions and answers for releases, sometimes.
If you find an answer on SO is out of date - suggest a new answer (or ask for one) and get it updated (there's far, far too many for us to keep explicit track of them, there's only a handful of us on the team)! :D
> how do I know that's still the best way to do it in 3.7?
Maybe don't worry about the "best" way to do something and use what works? Typescript has really good backward compatibility, it's very rare for something to stop working that previously worked. If it turns out there is an easier way to do something, you'll likely have reason to want to learn it, but at least in my experience with Typescript you'll rarely find a "need" to learn it or your code will break.
Javascript doesn't change at anything like the pace that TypeScript does, so it hasn't been much of a hassle.
But even so we (like many teams with large codebases, I suspect) have callbacks, promises, and async/await all mixed together, depending on when the code was (re)written. It's on our list.
Javascript formally releases once a year, but individual feature adoption in browsers is totally separate from that, and happens.... whenever they feel like it. v8 (chrome's js engine) doesn't have a set release schedule, but has already had 6 formal minor releases this year, each partially adopting some new JS runtime features, all of which have already shipped to chrome.
At our current pace, we formally release 4 times a year, for reference, which is actually a far cry from the more continuous deployment browser vendors currently have setup.
The main difference is that most people kinda ignore new JavaScript stuff for awhile until it trends or gains sufficient rollout. It's an interesting world - I can't really say that people are actually JavaScript version aware, beyond, maybe, what compatibility presets @babel/preset-env gives them.
TypeScript really isn't the most exciting language, but it's very, very helpful. Compared to regular JavaScript it saves me a lot of time, and so many pains and headaches every day.
Does the function argument (i.e. the template string) get evaluated regardless of the optional chaining or does it match up with the “roughly equivalent” code?
log?.(`Request started at ${new Date().toISOString()}`);
// roughly equivalent to
//   if (log != null) {
//     log(`Request started at ${new Date().toISOString()}`);
//   }
I copied the example from the article but it says “roughly” so it’s not entirely clear.
The situation that popped into my mind was something like: f?.(++a)
Normally you’d expect the side-effecting incrementation to occur prior to the start of the function invocation. If the function is not called, it’s not clear from the article whether the increment will occur.
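As far as I can tell from a quick test, the whole call (arguments included) is skipped when the callee is nullish:

function getLogger(): ((n: number) => void) | undefined {
  return undefined; // pretend the logger is not configured
}

let a = 0;
const f = getLogger();

f?.(++a);       // f is nullish here, so the entire call short-circuits
console.log(a); // logs 0 - the ++a was never evaluated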
I'm super excited about this release, but it got off to a rocky start for me. Prior to 3.7, all our tests worked fine, but something in 3.7 changed that caused the type-checker to fail previously valid code. What's worse is that I can't reproduce the issue in a playground. Thankfully the workaround is "make your code more explicit", which is fine, but it was just a surprise to see something like this break.
For those that are curious, here's the error that 3.7 introduced:
Type 'SinonStub<[string, RequestBody, (RequestOptions | undefined)?], KintoRequest>' is not assignable to type 'SinonStub<any[], any>'.
Type 'any[]' is missing the following properties from type '[string, RequestBody, (RequestOptions | undefined)?]': 0, 1
I think the issue here is that `[string, RequestBody, (RequestOptions | undefined)?]` is a tuple type, and `any[]` is an array type. That being said, though, I'd expect that a tuple would satisfy an `any[]` type.
Yeah, that's my assumption as well. It would explain why there's no mention of it in breaking changes, and why being specific about the generics in `SinonStub` fixes the error.
Makes me wonder what other unsoundness the compiler isn't catching.
Using `as const` will report both the 2nd and 3rd lines as type errors. You've been able to do this in TypeScript for a while even before they introduced the `as const` syntax.
Yes there is: you can either use `as const` or define the type as a tuple:
const x = [1,2] as const;
//const x: [number, number] = [1,2];
const y = x[666];//Tuple type 'readonly [1, 2]' of length '2' has no element at index '666'.
const z = y + 3;//Object is possibly 'undefined'.
Thanks. Is there a pattern for array access that helps with my example? Is the only option to define your own safe access function like "function safeGetFromArray<T>(array: T[], index: number): T | undefined"?
I haven't tried it yet but it looks like the new optional element access feature is only checking if the array itself is defined, not if the array index is defined.
I can't seem to find the GitHub issue for it off hand, but I believe you can override the default indexing signature for arrays to return possibly undefined.
Otherwise, you're left creating a wrapper function and a custom ESLint rule.
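A sketch of the wrapper-function route (signature borrowed from the parent comment, with the array typed as T[]):

function safeGetFromArray<T>(array: readonly T[], index: number): T | undefined {
  return index >= 0 && index < array.length ? array[index] : undefined;
}

const xs = [1, 2];
const maybe = safeGetFromArray(xs, 666); // number | undefined, so the compiler forces a check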
This is going to be one of my favorite releases since 2.8, which added conditional types. So many bad utility functions will be able to go away. The only thing it seems to be missing is variadic type generics.
function dispatch(x: string | number): SomeType {
  if (typeof x === "string") {
    return doThingWithString(x);
  }
  else if (typeof x === "number") {
    return doThingWithNumber(x);
  }
  process.exit(1);
}
It's very embarrassing in my opinion that they haven't done anything yet about this horrible kind of type checking; comparing against a string that holds the type name? "string", "number"? It's completely ludicrous.
I will not take this language seriously until this is fixed. (For sure it's still better than JavaScript, but that's about it.)
function isString(x: any): x is string {
  return typeof x === "string";
}

function isNumber(x: any): x is number {
  return typeof x === "number";
}

function dispatch(x: string | number): SomeType {
  if (isString(x)) {
    return doThingWithString(x);
  } else if (isNumber(x)) {
    return doThingWithNumber(x);
  }
  process.exit(1);
}
TypeScript seeks to manage JavaScript's horrors, but importantly, it does not seek to hide them behind leaky abstractions. This is not an embarrassment, but TypeScript's strength and weakness.
There is a long list of other languages that compile down to JavaScript or WASM, if you want a language built from clean foundations. But if you want to add gradual typing as a means of slowly reining in your existing JavaScript behemoth? There is only one TypeScript.