I wrote the JavaScript regex engine in Hermes [1], and also regress, a JS regex engine in Rust. I want to explain what's sucky about this decision, but first some ES regexp background.
The regexp grammar portion of the ES spec is different from the rest, in that it's not sufficient. You can implement JavaScript from the ES spec, but NOT regexp. The grammar doesn't really make sense [2]. The Chakra team found the same thing [3].
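To make [2] concrete: inside a character class, "-" is sometimes a range operator and sometimes a literal, and the productions that are supposed to keep those cases apart are the ones that don't hang together. A quick console sketch of the behavior the grammar is trying to describe (engine behavior, not spec text):

    /[a-z]/.test("m");  // true: "-" acts as a range operator
    /[a-]/.test("-");   // true: the same "-" is a literal when no range fits
    /[-a]/.test("-");   // true: likewise at the front of the class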
I have not confirmed this but I have been told the regexp portion is tightly controlled by Google, more than most parts of the spec. When v8's regexp implementation is found to deviate from the spec, the spec gets changed. This makes sense (why risk breaking websites), but it illustrates how little weight the regexp spec actually has.
With Firefox using irregexp, it further cements this unfortunate reality.
> You can implement JavaScript from the ES spec, but NOT regexp. The grammar doesn't really make sense [2]. The Chakra team found the same thing [3].
I haven't tried to implement the RegExp section, so I can't speak to that, but this is largely why we have Test262 [1].
Also that Chakra blogpost is from 10 years ago, talking about ES5. Things have improved considerably since then, though of course we have to live with historical baggage.
> With Firefox using irregexp, it further cements this unfortunate reality.
Is there a specific unfortunate reality here? I don't spot any RegExp entry in the latest spec's Incompatibilities section [2] that seems like a bad change, or even one having to do with regex grammar at all.
Yes, thank goodness for Test262. One implements JS-compliant regexps by doing the natural thing, and then making Test262 pass.
Some things have indeed improved. The 'u' flag introduced in ES6 provides a much cleaner grammar, for example. However, the process is still busted.
Here is a concrete example. /{1}/ is permitted by the ES6 spec (including Annex B). But in practice, "web browsers" rejected it. Rather than fixing the browsers, the spec was changed in ES8 to reflect "web-reality". https://github.com/tc39/ecma262/pull/303
Browsers could have become ES6 compliant here, with zero compatibility risk. But when there is non-compliance, it is the spec that moves.
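If you want to see the web-reality outcome in a console, here's a sketch of what shipping engines do today (not spec text):

    // ES6 Annex B said /{1}/ should match the literal text "{1}".
    // Browsers threw instead, and ES8 codified the throw:
    new RegExp("{1}");             // SyntaxError ("nothing to repeat")

    // A brace that doesn't form a quantifier shape is still a literal:
    new RegExp("{x}").test("{x}"); // true

    // The 'u' flag opts into the strict grammar, where both are errors:
    new RegExp("{x}", "u");        // SyntaxError

    // Same Annex B flavor: a lone "]" is legal without 'u', not with it:
    /]/.test("]");                 // true; /]/u is a SyntaxError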
> I have not confirmed this but I have been told the regexp portion is tightly controlled by Google, more than most parts of the spec. When v8's regexp implementation is found to deviate from the spec, the spec gets changed. This makes sense (why risk breaking websites), but it illustrates how little weight the regexp spec actually has.
A few responses:
1. This was not at all my experience working with the V8 team. For example, when I noticed that Irregexp did not properly implement the Canonicalize algorithm for non-unicode case-insensitive comparisons [1], they were happy to accept my patches [2][3]. Nobody ever suggested changing the spec. (See the sketch after this list for the behaviour in question.)
2. Nothing changes in the JS spec without consensus from all the major stakeholders. See, for example, the second sentence in the TC39 process document here [4]: "The committee operates by consensus and has discretion to alter the specification as it sees fit." Mozilla can block (and has blocked) proposals that it feels are bad for the web platform. Google has no special power to bend the committee to its will.
3. Another way of saying "when V8's regexp implementation is found to deviate from the spec, the spec gets changed" is "when the spec is found to deviate from the consensus of implementations, we correct the spec". The Chakra team's post notes: "In practice, all browsers accept regular expressions such as `/]/` and web developers write them." The purpose of the spec is to make the web work better by maximizing compatibility between implementations. In cases where all implementations agree, and real websites depend on that behaviour, the spec should change to match reality.
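(The sketch promised in point 1, assuming current spec behaviour: without the 'u' flag, Canonicalize upcases via toUpperCase but refuses to map a non-ASCII character to an ASCII one; with 'u' it uses Unicode simple case folding.)

    // U+017F LATIN SMALL LETTER LONG S uppercases to "S":
    /\u017f/i.test("S");   // false: the non-ASCII -> ASCII mapping is blocked
    /\u017f/iu.test("S");  // true: simple case folding maps both sides to "s"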
Don't get me wrong: Chromium monoculture is a real problem. If we thought that Google was going to add a bunch of new regexp features without going through the standards process, or if we had plans to prototype regexp features that we feared Google wouldn't accept upstream, we might have made a different decision. But regexps are not a plausible battleground, and the compatibility benefits of sharing code outweigh the harms.
PS: regress looks really sweet, and I am excited to see how it turns out.
Great reply, thank you for sharing your perspective.
First, I confess to some skepticism of the standardization process. For example, consider ES2018 lookbehinds. At the point of standardization, only one browser implemented them: v8. And in the standardized form (arbitrary width) they are very difficult to retrofit onto existing engines. Moddable had to throw their engine out and start over [1]. JSC's YARR still doesn't support them; neither does SpiderMonkey. Did all stakeholders really agree on this feature and then not implement it? Or was this just v8 in the driver's seat?
Second, regexps actually do have a role in driving monoculture. You can't polyfill regexp syntax. At one point, Steam's website only worked on Chrome because only v8 implemented lookbehinds [2] - again because it's hard to retrofit.
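A sketch of what "can't polyfill" means in practice: lookbehind is syntax, so an engine that lacks it rejects the whole script at parse time, and the only escape hatch is constructing the pattern at runtime:

    // With ES2018 lookbehind support:
    /(?<=\$)\d+/.exec("price: $42")[0];   // "42"

    // Without it, the literal above is a parse-time SyntaxError for the
    // entire script. Feature detection means dodging the parser:
    let re = null;
    try {
      re = new RegExp("(?<=\\$)\\d+");
    } catch (e) {
      // engine lacks lookbehind; no library can add the syntax
    }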
Lastly, it remains a problem that one cannot implement a conforming JS regexp engine from the spec alone. The rest of JavaScript does not have that problem, only regexp. And I disagree with the "real websites depend on that behaviour" qualifier. No website depended on /{1}/ being invalid syntax, yet it was made so in ES8.
Incidentally, it's amusing that v8's Canonicalize was buggy. Non-Unicode case-insensitive char ranges are THE ugliest part of regexp (QuickJS doesn't even try [3]) and I just assumed it was codifying v8's implementation. Guess it was someone else's, hah.
The TC39 minutes are all online [1]. You can see the initial introduction of lookbehind assertions [2], the meeting where they reached stage 3 [3], a status update while waiting for more implementations [4], and then the eventual agreement to move them to stage 4 (full standardization)[5].
They reached that point because: a) everybody agreed that they were a good feature, with precedent in other languages; b) the spec text had been scrutinized by the people who like to spend time scrutinizing spec text; and c) two implementations (Moddable and V8) had successfully implemented them. That's the process, and it was followed to the letter.
Does that put a burden on other engines to keep up? Sure. But that's what we signed up for. The webcompat bug you linked is from April 2020, more than two years after lookbehind assertions were standardized. That's nobody's fault but ours, and we did this project so that we don't end up in a similar situation in the future.
I'm not sure what your source is for JS regexps being impossible to implement from the spec, but if you have specific points of concern, you should open an issue [6]. Writing a spec is hard work, and things get missed, but the people working on this are genuinely trying their best.
(As far as I can tell, Canonicalize is what everybody thought they were implementing. V8 tried to get clever with ICU and ran facefirst into some of the dark corners of Unicode.)
I want to say four things. First, I think Mozilla probably made the right decision here. Mozilla is an org with limited resources and must direct them to what matters - and as you say, regexps are not a plausible battleground. I wish I had said so in my top-level post.
Second, I cast no aspersions against any of the people involved in the ES spec. I recognize they are engaged in hard and unrewarding work driven by a sincere effort to push JavaScript forward, while everyone's a critic, especially myself. It's awesome that the meeting minutes are available. Thank you for the links.
Third, I honestly believe the meeting minutes support my point. I risk overstepping, because I had no involvement, but from your third link:
> We have two implementations. One in V8 and one in the Dart VM. It's not a JS implementation, but it does implement this feature.
This supports my speculation that it's all about v8. Later:
> I would assume Chakra or V8 would have said something by now if they had issues
SpiderMonkey and JSC are chopped liver? This feature really seems to have been driven by the implementors.
And this one feature (lookbehinds) was so disruptive, so damn hard to implement, that Mozilla abandoned their implementation and now just uses v8's. The linked blog post cites this feature as an impetus to switch.
Was that the goal? Did SpiderMonkey engineers give the nod to lookbehinds with the intention of abandoning their engine in favor of irregexp? I have to believe not; I speculate it was the very human response of conflict avoidance. Easier to say "yes" especially if you do not appreciate how much work you are signing up for. But it means ES2018 regexp advanced the monoculture. One less regexp engine in the wild.
Fourth, I really do know that the ES regexp spec is not useful for implementors, because I am an implementor. Ok, it's not really a question, huh; I should just make a PR.
Or another reading is that Google is yet again abusing their dominant position to alter the spec to suit their needs instead of the needs of people using non-dominant browsers.
Nice to see the V8 and FF teams working together. And preventing FF from falling behind on features is good.
On the other hand, the more dependent Mozilla is on the Chromium base, the more power it gives Google (even if at this point Google already acts like they own the internet and do things that break in FF at will).
I agree, but in the article, it was mentioned that the V8 team already wanted to make their RegExp engine more independent. Also, since Mozilla runs on limited resources, it's great that they will have less maintenance to do in the future.
Hopefully we start seeing more "modularized" browser components. It's a huge detriment IMO that we are stuck with 3-4 (depending on who you ask) monolithic browser engines.
I honestly don't understand why Mozilla went the route of a monolithic, Firefox-only browser engine. They had a lead before Chrome showed up, and refused to cater to devs. Now they're paying for it, and so is everybody else.
[regexp] Remove trivial assertion
The assertion in BytecodeSequenceNode::ArgumentMapping cannot fail,
because size_t is an unsigned type. This triggered static analysis
warnings in SpiderMonkey.
Does that mean Chromium doesn't use any static analysis tool? Or one that does not work for this trivial assertion?
> Or one that does not work for this trivial assertion?
Or one that does work for this trivial assertion, which is to say it does not emit a diagnostic.
What ends up happening in a situation like this is that somebody removes the assertion to quiet the compiler diagnostic. Then the types change and all of a sudden it can fail, but the assertion is gone.
You might say, "well, now that one of the operands is size_t how could the types ever change to reintroduce an issue?" That's the wrong question to ask. The whole point of the assertion is to avoid having to answer such a tricky question, or at least to encode your answer in a way that if the unforeseen happens things break loudly instead of silently. Anyhow, you'd be surprised by how things can break. I personally religiously use size_t for anything related to object size, but many other developers don't (including at Google and Mozilla), and so you often see a mix of, e.g., size_t and uint64_t, size_t and uint32_t, size_t and int, etc, and regular tweaks back-and-forth, which can easily introduce regressions.
I understand why compilers emit a warning--the idea is that if the assertion couldn't possibly be false, maybe it has a bug. But, IME, the opposite is usually true--it's well-written and deliberate, because the developer is trying to catch spooky action at a distance where the type, which is defined far away, is changed, accidentally or intentionally. I don't know where to draw the line in terms of second-guessing the code to help catch bugs, but GCC and clang need to provide far more succinct constructs to tell the compiler to shut up. Currently you're stuck with inline pragmas, __extension__, statement expressions, and other weird convolutions that require far more code than the assertion or operation itself. The issue makes writing arithmetic overflow-safe code more tedious and error prone.
(Other languages simply prohibit mixing integer operands of different types, so if a type is changed far away then code will break loudly even without any assertions. But in codebases like SpiderMonkey and V8 that use a wide range of integer types for space and performance optimizations, that tends to encourage casting, which has the exact same problems.)
But the bug is not in that function: if a function accepts size_t, then that value can never be < 0. If someone were to pass (size_t)-1 to it, it would still be positive. The issue is in the caller of the function, which must have converted a negative value of a signed type into size_t, and that is the actual bug in this situation. Perhaps those conversions should trigger at least a compiler warning.
Asserting things which are guaranteed by the types is a waste. Or do you think we should assert that a value of type bool is really either true or false, and fail the program otherwise?
I love Rust. I've lost track of the number of times I've stared at a chunk of C++ code, considered how much nicer it would look in Rust, and sighed. At the beginning of this project, we did consider whether it was the right place to use Rust.
There are no existing JS-compatible regexp crates. BurntSushi's regex crate is great, but finite automata don't do backreferences, which is a dealbreaker for JS. After this code landed, somebody brought https://github.com/ridiculousfish/regress to my attention. It looks promising, but still has a long way to go before it's production-ready, and it didn't exist when we made our decision.
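(For readers wondering why backreferences are the dealbreaker: a minimal sketch. The engine must remember what an earlier group captured, which a pure finite automaton cannot; the regex crate rejects such patterns at compile time.)

    // \1 must match whatever group 1 actually captured:
    /(["'])(.*?)\1/.exec(`say "hi" and 'bye'`);
    // -> [ '"hi"', '"', 'hi' ]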
If we had written our own replacement, it would likely have been Rust. SpiderMonkey has a lot of cross-cutting issues (GC especially) that make it hard to replace individual C++ components with Rust, but the regexp engine has a pretty clean API boundary. It's the same reason that it was feasible to swap in Irregexp in the first place.
Ultimately we decided that writing a new engine wasn't the best use of time. A regexp engine is a complicated beast. Writing a new high-performance, JS-compatible engine in Rust could have been person-years of effort, with a long tail of corner cases and performance issues. We haven't had many memory safety bugs in regexp code. As sstangl points out in a sibling comment (hi Sean!), doing JIT compilation undermines some of Rust's safety guarantees.
When it comes down to it, the regexp engine is not a place where SpiderMonkey is looking to push the state of the art. We have to be reasonably fast and feature-complete, but beyond that nobody is going to notice marginal gains. There are higher-leverage opportunities elsewhere.
So another up-and-coming crate in this space is Raph Levien's fancy-regex [0], which uses a hybrid model to support backreferences. No idea how compatible the syntaxes are.
JIT engines like this one don't receive much benefit from being rewritten in a memory-safe language. Errors typically occur in the generated machine code, not in the compiler itself. The benefit would be small.
There are more benefits to Rust than just its memory safety. It being a non-GCed language with ADTs is a big one, and that's been nice for the couple of JITs I've written in Rust.
And the partial memory safety over the metadata around the actual JITed code is a big win as well.
Yes, Rust doesn't get you there 100%, but IMO it gets you closer than C or C++.
Having used both (and in this space, for writing JITs particularly), C++'s support for ADTs is very weak. Like most things in C++ you _can_ get there by straitjacketing yourself in a particular way, with static analysis rules backing maybe 80% of it and manual code review the other 20%, but it's difficult to maintain.
Rust gives you that more or less by default and for free wrt tooling. It's sort of the classic "Rust makes you write the C++ you should have been writing all along", which makes it a net win IMO.
Given the fact that Mozilla is the primary sponsor of Rust, and Rust has been sneaking its way into Chrome as well, I'd say the authors of those browsers disagree with you.
I'm not them, but I suspect that they're slowly switching not because of its slightly better abstract data types but because it offers better memory safety.
Those are one and the same. The ADTs are how the shape and validity of the data are described to the compiler in a lot of cases. Rust wouldn't be able to be memory safe without them.
C++'s ADTs are easy to subvert even accidentally; Rust's can't be without explicitly calling it out as unsafe.
It allows you to describe transformations of state in a formal, set-theoretical way. You should check out formally verified software like CompCert and seL4 and their heavy use of ADTs internally to achieve that. Rust obvs isn't fully formally verified, but it's a neat 80/20 in that direction.
That's exactly the point I wanted to make (but failed to) a couple months ago [1] - that for the (few) tasks you'd use C for (such as writing language runtimes) Rust might not be a good fit. If it isn't a good fit for regexp, it might not be a good fit for JavaScript either.
It's not that Rust wasn't a good fit. It's that writing a regex engine from scratch in Rust, to be integrated into their C++ codebase, wasn't the right choice for them when compared against integrating an existing C++ solution.
> Open source is all about collaboration and this code redundancy is a historic accident.
I totally, 100% disagree. Software monocultures are extremely harmful: implementation bugs get baked in as standard features, innovation stagnates without competition, and single providers get outsized power. The best case study here is when Internet Explorer was dominant: you think that was good for the web? Software is like everything else: diversity and having a large "gene pool" is incredibly healthy.
> Would rust or go benefit from totally redundant implementations?
Of course they would, that would be awesome! Look at Python: it's been very good for the language to have a large variety of implementations (Jython, PyPy, IronPython, etc.) in addition to the standard distribution. Of course none of them are as popular, but the fact that they exist is a sign of a healthy language.
I wouldn't even pick Python, but the JVM, where there are multiple very successful options and several businesses built on top of "totally redundant implementations."
Actually GraalVM isn't that recent; its history goes back to the Maxine VM, started in 2005, so GraalVM has 15 years of research work put into it.
J9 is used a lot with IBM customers (banks + other finance on WebSphere aka Websfear). I'm not going to defend Java here, but Java/JVM might very well be the last flagship of a huge language + API + testsuite with multiple interworking implementations. If you don't like that, then there will be only proprietary single-vendor languages going forward.
> Now take half rustc devs and move them to another redundant compiler implementation. You've slowed down progress far too much, criminally even I would say.
This assumes most of development time is spent writing the actual code.
The counter argument is that most of the time spent on such advanced projects is figuring out which path to take. Perhaps writing four or five different implementations and weighing the discovered pros and cons.
Having multiple javascript engines arguably allows exploring the design space much more efficiently than having all devs working on the same codebase.
> This assumes most of development time is spent writing the actual code.
> The counter argument is that most of the time spent on such advanced projects is figuring out which path to take.
is a great wording of the problem. Added to my favorite comments.
> nonsense, writing a design document and a list of the possible competing implementations take time but order of magnitude less than writing and maintaining it.
But writing a design document may not be enough to determine if it's a good idea. Sometimes you need to try something to find out.
I think you're missing my point. If you have multiple implementations, you can be trying out implementations of different paths concurrently. If you only have one implementation, that's very hard to do.
I'm not saying that's what will happen, just that the point about multiple paths works better with different teams on different code bases.
> If you have multiple implementations, you can be trying out implementations of different paths concurrently.

And so can you with git branches.
The difference is that with multiple implementations, each team will most of the time not be aware of what the other team has tried (and vice versa), leading to an inefficient overlap in what has been explored in the design space, as I explained in another comment.
If both teams worked on the same project, through the same documentation and communication channels, they would be far more aware of what has been tried, and I don't see the issue with git branches.
Different git branches don't allow the features to be vetted by users. At least not by many users. It's going to be much harder to get users to use different builds of Chrome than it is to have different (or even the same) people using Chrome and Firefox.
Also, what management team is going to ask their developers to work on two conflicting features in parallel? It's a people matter as much as it is a technical one.
FWIW, Rust not having multiple implementations is exactly the reason why I'm not looking into it for serious projects (1). OTOH, JavaScript being ubiquitous is the reason I can spend time on projects with a certain depth and commitment (though ES6+ JavaScript isn't nearly as ubiquitous as ES5 was, with only two major implementations left), knowing that a multi-year development effort isn't torpedoed by language churn.
1) and I personally don't believe in a "one language to rule them all" behemoth but rather in small, focused languages and tools in a polyglot/Unix fashion
The multiple-implementation story of Rust isn't as great as that of some older languages, but it's getting there.
There are at least two e2e implementations of Rust:
- rustc, the reference implementation.
- mrustc, an alternative written in C++ [1].
There are also multiple implementations of Rust's mid-level IR (MIR):
- rustc compiles MIR to LLVM.
- A backend that compiles MIR to Cranelift [2].
- Miri, a MIR interpreter used for dynamic analysis [3].
mrustc doesn't perform the borrow check, so it can't prevent you from triggering UB the way rustc can. But you can use rustc for the borrow check and mrustc for codegen.
Could you explain what benefits the existence of an additional implementation would give a user? I use rustc. Now whether mrustc exists or not, how does it affect me? FWIW, mrustc does exist, so I'm wondering in what way I've benefited.
> language churn
Rust is stable. If there are breaking changes, they are explicitly opt-in. If you're concerned about future changes, you could go ahead and use a specific version of the compiler for years.
I believe Rust is fine, but it's not stable in relation to "multi-year development effort" (as in, didn't even exist), and has had inevitable churn during its forming years. Thing is, a programming language is a means to an end; choosing Rust you're tied to its success and need to take decisions early on in your project (for example, wrt target platforms) when that's something you'd rather postpone until you absolutely have to, which you can do with C and C++. Plus, I've seen languages that tried to be too clever which not only results in you fighting the compiler but also potentially causing generational churn. Like, a fresh generation of developers might hate having to deal with your super-intensive language, wanting to do its own thing instead. Basically, using <rustc-or-whatever-lang> might just suck in another way.
> Like, a fresh generation of developers might hate having to deal with your super-intensive language, wanting to do its own thing instead.
I suspect it's more likely that a fresh generation of developers will hate having to deal with C++! It's all very well if you have 10 years of experience and you're used to its quirks. But it's a lot to learn if you're new to it. Rust is significantly easier.
Forget Rust, the language. What's the blocker in using rustc 1.44 for your "multi-year development effort"? It's a binary on your hard drive, it won't change. Similar with any library you're depending on. The specific versions you're depending on will be available in perpetuity.
There's a very good argument for separate teams working on the same problem being able to produce novel solutions that bundling everyone together in one team wouldn't produce. Too many cooks comes to mind.
Humanity would also be way more efficient if we all lived in identical houses, wore identical clothes, ate identical meals, and worked identical jobs. Look at all the redundant work we do with different industries, restaurants, clothing, architecture, families, and identities!
Unfortunately, I don't think the bugfix is scheduled to land until Singularity 1.0.0 RC 1.
Mostly the same. An accident arising from a licensing philosophy difference.
Wow, LLVM brought modularity, but I bet that if 5% of the human resources that went into LLVM had gone into modularising GCC, the issue would have been fixed.
GCC nowadays is getting modular.
After all these years GCC is still faster on average, and LLVM gets slower with each release, or at best does not progress.
I'm fairly certain a good chunk of gcc improvements wouldn't have happened without clang existing and showing that it can be done better and that people care.
Yeah, this is an attitude I come across at work a lot. A lot of pointless work is justified as "unifying" several projects. Of course, a nod to the xkcd on Standards [1].
This is why we end up with monocultures (e.g. in agriculture), which all work great... until they don't (RIP Cavendish; say hello to Gros Michel).
Even in a large organization multiple projects that ostensibly do the same thing can be good. They can have different strengths and weaknesses and work from different design decisions and philosophies. Then the "market" (often the rest of the company) can decide which to use. This often produces much better results than simply anointing a winner.
Anointing winners rewards politics rather than technical merit.
Of course this can go too far but so can anything. There can be too many competing products. That tends to highlight a problem with your organization's incentives structure (eg heaping rewards on new projects rather than maintaining things). Competition can also be toxic.
You may think it's a waste to have both clang and gcc. I couldn't disagree more. I think the existence of clang has inarguably made gcc better. Would gcc have gotten more modular in the absence of clang? Maybe... but maybe not.
I'll leave you with this quote:
"A foolish consistency is the hobgoblin of simple minds."
There is far more web content than there are native computer programs. Making that content renderer-agnostic will become harder the more entrenched the Blink monopoly becomes and the more websites come to rely on Blink-specific bugs. One day even blink devs themselves can't change their own code any more because it'll break the content out there otherwise.
> One day even blink devs themselves can't change their own code any more because it'll break the content out there otherwise
One day? That's already a problem. There are cases where Blink's behavior is clearly incorrect, but there is great resistance to fixing it, because there's existing content that (accidentally) depends on the broken behavior.
Is it not also a fallacy to fail to fully consider alternative explanations (see: fallacy fallacy, argument by dismissal)? Most engineers are fully comfortable with being corrected in cases of objective fact. What you're doing here is not that, it's instead insisting that any analysis other than your own is mushy-brained nonsense with a near-total lack of social niceties.
Consider the alternative explanation that people don't like the way you're aggressively presenting your ideas and failing to (at least outwardly) seriously consider the ideas of others in kind.
Have you considered that most of the people who point out logical fallacies incorrectly come off as incredibly rude and overbearing and this has nothing to do with Hacker News's "echo chamber"?
You appear to currently be at the level of "rationalist" mindset where you understand some ways in which people's reasoning is not ideal, and you haven't yet understood and accepted that "rationalism is by definition choosing approaches that win", and that being obnoxious tends to lose. Don't deliberately choose approaches that lose. (That's quite separate from things like "there is value in defaulting to treating people decently".)
You can point out a problem or fallacy without reveling in abrasiveness. Consider how good the brain is at pattern matching, what patterns people will match your comments to, and how you could more successfully convince others compared to your current strategy. Before assuming that you're just better than everyone downvoting you, carefully consider the accuracy of your priors, and the biases that make people generally tend to give more weight to their own beliefs. Consider how rarely people actually change their minds; seriously consider what it would take to convince you that a different approach would be more successful. Consider what failure modes you could fall into if you assume that you couldn't possibly be wrong in your approach to interaction with others. Consider whether, in the course of making rationality a sacred value (not by any means a bad thing), you have also come to treat "disregarding feelings and empathy" as a sacred value, rather than valuing empathy as a critical input.
If you want the world to be more rational, you are not only not convincing people, you are actively harming that cause through the example you're setting; you're giving people associations and exemplars for "rationalist" that will make people want nothing to do with it. If you think that people shouldn't care about feelings and should be pleased to get your feedback, then 1) you're committing the is-ought fallacy by not recognizing how the world actually works and acting accordingly, and 2) you should question your "ought", and what a world where people don't value empathy looks like.
(Lest this be misinterpreted: Choosing to value empathy does not mean choosing not to value truth, or to value it less.)
I've been where you seem to be. I've lived the is-ought fallacy, on multiple topics. It took me a long time (and various helpful writings) to learn to question some of my assumptions and hypotheses, test them, and improve my worldview. (One of the first ones I took a while to learn was "yes, how you dress matters, people will judge you accordingly, stop resenting it, accept it and use it to your advantage".)
I think as technologists we generally vastly underestimate squishy problems like jealousy, pride, leadership, organization, collaboration, and psychological safety.
I see your argument as akin to "communism would work if..." or "a command economy is more efficient when...". But human nature matters, and humans prefer to pick tribes, rebel against the status quo, etc. You need to fix that problem (hell, you'll need to convince people it is a problem) before you can even dream of fixing this particular issue in software development.
1: https://hermesengine.dev
2: Example: the NonemptyClassRangesNoDash production may include a dash
3: https://docs.microsoft.com/en-us/archive/blogs/ie/chakra-int...