It was. But he had 9 minutes vs more than an hour for Gukesh. The entire match has been Ding defending miraculously; I thought it was a matter of time before he eventually failed. The fact that it happened on the last moves of the last game is definitely hard for Ding, but fair for Gukesh IMO.
Overall I agree, the entire match seemed to be Ding defending. Gukesh kind of failed to capitalise the whole way through though.
wrt the time, this is kind of a bread and butter endgame. Ding shouldn't have blundered here with 10 minutes on the clock. Highly unlikely he would have blundered this two years ago.
This was game 14, they were tied almost the whole way, and this was the only time Gukesh won with the black pieces.
Before the match, the expectation was that Gukesh would take an early lead and never look back, with the match ending before game 14. This morning, the expectation would be that Ding would make an easy draw with white (as he has done in 5 of his games as white already, winning the other), and it would go to tiebreaks.
Having the championship decided by a decisive final classical game is pretty rare. The last time it happened was 2010.
ding was attacking though. it's just crazy that he was looking to play for a draw with the white pieces, when he was in a great position to play for the win earlier, before he forced a trade of all the pieces.
ding may have lost to a blunder late in the game, but i think he lost the game and the match early, when he traded down to try to play for a draw.
Presumably the classical world championship should be determined by classical chess games, and this was the last one before the shorter tiebreak games. Ding looked like he would’ve started losing more if there were more classical games, who knows though.
x86 is certainly not dead, and I don't think it will be anytime soon, but it's still behind Apple's M3 in terms of performance per watt. And the M4 is about to arrive. I'm a bit disappointed because I really want more competition for Apple, but they're just not there yet, neither x86 nor Qualcomm.
Apple will also run into diminishing returns, but they will retain the real killer advantage over general-purpose CPU vendors that they have in other hardware areas: being able to retire or rework old or misconceived parts of the architecture entirely in future versions, unilaterally. If they want to drop M1 support in some future macOS version, all it will take is a WWDC announcement that the next version simply won't work on that generation of machines or earlier.
Yeah, that all makes sense. But having legacy software that you support has been a big advantage for x86 for, basically, forever. For a lot of purposes, that is way more important than performance per watt.
I think they're running out of optimizations (hacks). They moved memory on-package and bought the latest TSMC node. I guess they could keep buying the latest node, but I don't expect any more large leaps.
Take it with a grain of salt. Other reviewers such as Hardware Canucks [1] have mentioned that they have not been able to get such long battery life. Their numbers are closer to 15 hours.
YouTube Premium was already quite expensive. They increased my family plan from €17.99 to €25.99. That's almost a 45% hike! Compare that to Netflix, which is €13.99.
I only pay this to skip ads, but a lot of content creators still have their own ads in the videos. There doesn't seem to be enough value to justify such a price hike. I am likely to cancel my subscription.
It's possible to write some pretty unreadable code with Clojure, just like it's possible in any programming language.
I can tell you that this code is very easy to read if you are familiar with Clojure. In fact, this example is lovely! Seeing this code really makes me want to try this library! Clojure has this terse yet readable aesthetic that I like a lot.
But I completely understand you, because at some point Clojure code also looked alien to me. You are not broken just because you are only familiar with certain styles of code. Familiarity is something you can acquire at any time, and then this code will be easy to read.
Truly hard-to-read code is code that is hard to understand even when you have mastered the language it is written in.
Nice article. A couple of years ago I also implemented Lox in Rust, and I faced the exact same issues that the author describes here; I also ended up with a very similar implementation.
I ended up having two implementations: one in purely safe Rust and another one with unsafe.
Note that if you're interested in the "object manager" approach mentioned, I did that in my safe implementation; you can take a look at https://github.com/ceronman/loxido
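For anyone curious what that looks like in practice, here's a minimal sketch of the index-based idea in safe Rust: objects live in a central Vec, and the rest of the interpreter passes around small copyable handles instead of references. The names are made up for illustration (this is not loxido's actual API) and garbage collection is left out entirely.

    // Hypothetical sketch of the index-based "object manager" idea in safe Rust.
    // The manager owns every object; the interpreter passes around Copy handles.

    #[derive(Clone, Copy)]
    struct ObjectId(usize);

    enum Object {
        Str(String),
        // ... closures, classes, instances, etc.
    }

    #[derive(Default)]
    struct ObjectManager {
        objects: Vec<Object>,
    }

    impl ObjectManager {
        fn alloc(&mut self, obj: Object) -> ObjectId {
            self.objects.push(obj);
            ObjectId(self.objects.len() - 1)
        }

        fn get(&self, id: ObjectId) -> &Object {
            &self.objects[id.0]
        }
    }

    fn main() {
        let mut heap = ObjectManager::default();
        let id = heap.alloc(Object::Str("hello".to_string()));
        if let Object::Str(s) = heap.get(id) {
            println!("{s}");
        }
    }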
The author seems very anxious because Rust is getting traction and they don't like Rust. They're afraid that one day Rust will become a "monoculture" and everything will be written in it.
I like Rust, but I consider this very, very unlikely.
Rust has actually brought more choice to the programming language landscape. If we're talking about monoculture, let's talk about C/C++. For decades this was the only viable option for systems programming. All new languages were focusing on a higher level; languages for lower-level stuff were rare. There was D, but it never got enough traction.
Then Rust appeared and there is finally an alternative. And not only that: because of it, other language designers decided to create new systems languages, and now we have Zig, and Odin, and Vale, etc.
So if anything, Rust is helping in breaking the monoculture, not creating it. C and C++ are not going away, but now we have alternatives.
And I think it's important to acknowledge that even if you don't like a language, if you see a bunch of software being written in that language, it's because the language is useful. I don't like C++ but I admit it's damn useful! People are writing interesting software in Rust because they find it useful.
Rust is challenging people because it calls out several long-standing inadequacies of C/C++ (security issues, dependency management), and provides alternatives which show that it doesn't have to be like that.
The rewrites will inevitably be long and painful. Rewrites always are. But the onus is now on anti-Rust people to demonstrate a better language to rewrite in first, rather than just sitting in the status quo waiting for the steamroller driven by a crab to very slowly run them over.
D is interesting but seems to be a solo project; I'm not sure why it hasn't had traction. Maybe it's not different _enough_.
This is not meant as a critique of you, but your comment includes a hint of what bothers me with some Rust evangelists. I would call it "slightly entitled over-optimism".
I have a C++ service running in production. It's been in production for 10-ish years with minimal updates. It'll probably keep running just fine for the next 10 years.
With that in mind, "the onus is now on anti-Rust people to demonstrate a better language to rewrite in first, rather than just sitting in the status quo waiting for the steamroller driven by a crab to very slowly run them over" just doesn't make much sense to me. If the status quo is fine, there's no "onus", there's no difficult decision to be made, there just isn't any rewrite. The anti-Rust people will probably be fine by doing ... nothing.
Is it on a network (or other) security boundary, exposed to attack from the Internet?
Is it deployed on millions of machines worldwide?
_Those_ are the primary targets for replacement, because the networked environment is a very hostile place, and people are fed up with the consequences of that. Regular announcements of "sorry all your private data has been leaked lol". Constant upgrade treadmill to fix the latest CVEs.
(The poster child for this was really Shockwave Flash, later owned by Adobe, which had so many RCE exploits that everyone united behind Apple killing it off, even though this meant obsoleting a whole era of media and games that relied on it. That wasn't rewritten, it was just killed.)
I'd guess most services where performance matters are in the background. And this particular C++ service is only accessible over internal LAN, to be used by other back-end servers.
I agree with you that if ha-proxy and nginx didn't exist yet, they would be prime candidates for being implemented in Rust. But now that they already exist and work reliably, I'm not sure there is enough pain for them to get replaced anytime soon. BTW, the last ha-proxy CVE was them deviating from the HTTP spec and accepting the # character in additional URL components, which is something that probably no compiler could have flagged.
3. https://github.com/cloudflare/pingora says "Pingora keeps a rolling MSRV (minimum supported Rust version) policy of 6 months." so for anyone who dislikes the "constant upgrade treadmill", this won't help much.
So my summary would be that Pingora is a great Rust library which one day might be used for replacing nginx and/or ha-proxy.
But the main advantages of Pingora - which are the reason why CloudFlare is using it now - have nothing to do with Rust. Obviously, a software architecture designed in 2022 can take advantage of modern hardware in a way that an architecture from 2004 cannot. (Yes, nginx is that old.) Intel's TBB library brought "work-stealing" to C++ around 10 years ago. The other big improvement in Pingora is moving from multi-process workers to multi-threaded pools. Again, C++ has had thread pools for years.
So Pingora is probably great and it's written in Rust. But the business benefits that it brings aren't coming from Rust. They are coming from the fact that it's a modern architecture.
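To make the work-stealing idea concrete, here's a toy sketch using the rayon crate (my own illustration, not Pingora's actual code, which as far as I know is built on async Rust): a pool of worker threads shares the items, and idle workers steal pending work from the queues of busy ones.

    // Toy sketch of work-stealing with the rayon crate (add `rayon` to Cargo.toml).
    // Each item is processed on whichever worker thread is free; idle workers
    // steal pending items from the queues of busy ones.
    use rayon::prelude::*;

    fn main() {
        let requests: Vec<u64> = (0..1_000).collect();

        let total: u64 = requests
            .par_iter()
            .map(|r| r * 2) // stand-in for real request handling
            .sum();

        println!("processed {total} units of work");
    }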
> Think of Pingora as an engine that can power a car while you have to build the car yourself. Nginx is a complete car you can drive. River is a faster, safer, and an easily customizable car.
The title is being pedantic for effect. It doesn't say what you say it's saying.
This is an issue opened by someone on an open source repository. They aren't talking about how Cloudflare itself uses it, but about how they want to use it.
> so for anyone who dislikes the "constant upgrade treadmill", this won't help much.
Similar to above, this is moving the goalposts. Sure, that might be true, but it's unrelated to the original topic.
> But the main advantages of Pingora - which are the reason why CloudFlare is using it now - have nothing to do with Rust.
> We chose Rust as the language of the project because it can do what C can do in a memory safe way without compromising performance.
Cloudflare has been a vocal proponent of Rust for years now. Many years ago, they suffered a very serious bug, Cloudbleed, that Rust would have prevented. And so they've been using Rust instead of C and C++ for a long time.
They of course would also very much agree that the architecture matters, but that doesn't mean that the implementation language doesn't matter either. If they chose to implement Pingora in, say, Ruby, that wouldn't have accomplished their goals.
I don't think anyone believes there will be no C++ codebases in the future - that's crazy talk. What could happen in a decade or two is that there'll be no _new_ C++ codebases. Popular languages are retired to legacy status from time to time, and C++ is completely outclassed by Rust.
Since C++ still evolves and changes, I guess greenfield C++ projects in the future can limit themselves to a subset of the newer, improved language; that way C++ will continue living on as well.
The C++ community is talking about evolving in the following ways in this space:
1. contracts
2. profiles
3. successor languages
4. borrow checking
The idea of a "subset of the language" is one that's often talked about, but there seems to be an explicit rejection of the idea of a subset, at least by several important committee members. It's not clear to me why this is the case.
I’m not sure why you’re getting downvoted. The committee members have a clear interest in keeping things as they are. However, from a practical perspective I suspect that a “subset lang” will happen. One just needs a linter or compiler flags to do that.
The thing is, there's a difference between a true subset and "some flags that reject certain things." Because that creates a number of different sets that may relate to each other in a variety of ways, some subsets, some overlapping.
But beyond the specific definitions here, "profiles" is that sort of approach, so something like it will happen, probably. It seems to have a lot of support.
"Subset language" is already an option. The developer can choose a safe(r) subset of C++ to constrain themselves to. Many (most?) C++ shops already do this, and go to varying lengths to enforce that only their blessed subset is used. We don't really need a committee to create a new "subset language" to accomplish this.
C++ already passed that line with C++17 and newer iterations. Not only does it evolve way faster starting with C++17, the modern code looks sufficiently different that it requires relearning some parts of C++ from scratch.
I've written my biggest project with C++11, and C++14 was just out back then. Now, I plan to reimplement that project (and improve it), again with C++, but I need to look at so-called "modern C++" to do it correctly and in a more future-proof way.
...and I'm glad I have to do that, because while I love (old school) C++, seeing it evolve makes me happy. Because systems evolve, software scale evolves, and most importantly hardware and the ways to get maximum performance from it evolve.
There's no need to write old-school C++ anymore. All these features were developed for a reason, and they should be used.
Fluid dynamics codes are still being written in Fortran.
Notably, there are still improvements happening in the Fortran space, and there's been a bit of a revival of sorts. There are still features in Fortran that make it nicer to write in than C++ (and while I'd hoped Rust might be a good Fortran replacement, I feel Rust has taken a different path and remains no better than C++).
I would like to echo this sentiment. I like Rust, but I can't see a Rust monoculture.
From my experience, Rust is an absolute improvement in developer experience in so many areas. I'm looking forward to the future where Rust is well established and boring, and all its rough edges have been smoothed out, even if that means adopting another new and exciting language :)
IMO there was only a brief period of monoculture, and only if you consider C & C++ to be part of the same culture, which is a stretch. It started in the mid-80s when people stopped writing programs in Pascal and/or assembly, and stopped in the mid-90s when Java & Perl started to get used extensively.
> if you consider C & C++ to be part of the same culture, which is a stretch
That’s my main gripe with most pro-Rust comments of this kind, to be honest. There are a few ways to write C and a lot of ways to write C++, and most of the C is quite unlike most of the C++. (I’m not counting marginal cases like raw GObject or raw COM as C here, I think those count as basically separate languages.)
The problem is, the Rust I’ve read (and read about) is methodologically and stylistically a replacement for most of the C++, but not a lot of the C. I don’t dislike the theory behind Rust—I’ve written Haskell, I’ve written SML, I read the Tofte&Talpin regions paper and some of the subsequent research more than a decade ago. I do dislike when people ask me to switch or even try to, of all things, shame me into switching from C to what presents itself as a better C++, in largely the same way that I dislike attempts to switch to C++ that claim it’s the same thing as C. No it isn’t. And I largely tune out when I read “C/C++”, because it implies the author does not get it.
(I’m aware there are other people that do get it, some of whom work on other programming languages. They just don’t write posts proposing Rust replace C.)
I was talking about a monoculture specifically in the systems programming area. Java, Perl, PHP, Python, Ruby, JavaScript, C#, Go: these got popular but they use garbage collection and have limitations for systems programming. C and C++ were the only options for a long while.
Systems programming is just a niche. A large niche, but just a niche. But between when Pascal & assembly stopped being used widely and when Java started being used widely, C & C++ were used for pretty much everything.
Windows used to be cool! As a kid I remember the Dangerous Creatures CD [1] came with a custom theme for Windows 95. It would change all the icons to cool animal stuff. The "My Computer" icon would change to a frog, the Recycle bin icon would change to a fish, and my favorite, the waiting icon for the mouse cursor would change to a Wasp!
> The result is a ~300 KiB statically linked executable, that requires no libraries, and uses a constant ~1 MiB of resident heap memory (allocated at the start, to hold the assets). That’s roughly a thousand times smaller in size than Microsoft’s. And it only is a few hundred lines of code.
And even with this impressive reduction in resource usage, it's actually huge for 1987! A PC of that age probably had 1 or 2 MB of RAM. The Super NES from 1990 only had 128 KB of RAM. Super Mario World is only 512 KB.
A PlayStation from 1994 had only 2MB of system RAM. And you had games like Metal Gear Solid or Silent Hill.
A PC in 1987 was more likely to have at most 640 KB of RAM; the "AT compatibles" (286 or better) were still expensive. We had an XT clone (by the company that later rebranded as Acer) bought in 1987 with 512 KB of RAM.
Yes. I wrote a version of Minesweeper for the Amstrad CPC, a home computer popular in 1987 (though I wrote it a few years later). I think it was about 5-10 KB in size, not 300. The CPC only had 64 KB of memory anyway, though a 128 KB model was available.
In 1987, I think you'd be very lucky to have that much RAM. 4 MB and higher only started becoming standard as people ran Windows more - so Win 3.1 and beyond, and that was only released in 1992.
4 MB was considered a large amount of memory until the release of Windows 95. There were people who had that much, but it tended to be the domain of the workplace or people who ran higher end applications.
If I recall correctly, home computers tended to ship with between 4 MB and 8 MB of RAM just before the release of Windows 95. There were also plenty of people scrambling to upgrade their old PCs to meet the requirements of the new operating system, which was a minimum of 4 MB RAM.
It was over $100/MB for RAM in 1987. The price was declining until about 1990, then froze at about $40/MB for many years due to cartel-like behavior, then plummeted when competition increased around 1995. I was there when the price of RAM dropped 90% in a single year.
Like others have said, that would only be available on what would be a very costly machine for '87.
I distinctly remember the 386SX-16 I got in late 1989 came with 1 megabyte and a 40 MB hard drive for just under $4k from Price Club (now Costco), which was an unusually good price for something like that at the time.
Since we are talking about software written today, not just software available in 1987, X386 (which came out with X11R5 in 1991) was more than capable of running on a 386-class machine from 1987. Granted, a 386-class machine with 1 MB of RAM and a hard disk would have been pushing $10k in 1987 (~$27k in 2024 dollars), so it wasn't a cheap machine.
Very interesting! They say they went from a stack-based IR, which provided faster translation but slower execution, to a register-based one, which has slightly slower translation but faster execution. This is in contrast with other runtimes, which provide much slower translation times but very fast execution via JIT.
I assume that for very short-lived programs, a stack based interpreter could be faster. And for long-lived programs, a JIT would be better.
This new IR seems to be targeting a sweet spot of short, but not super short lived programs. Also, it's a great alternative for environments where JIT is not possible.
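For anyone unfamiliar with the trade-off, here's a toy Rust sketch (my own simplified encoding, not wasmi's actual IR): a stack machine needs more, smaller instructions because operands are implicit, while a register machine needs fewer but wider instructions with explicit operands, which take a bit longer to produce during translation.

    // Toy encodings (not wasmi's actual IR) for the expression `a + b`.

    // Stack-based: operands are implicit, taken from a value stack.
    enum StackOp {
        LocalGet(u32), // push a local onto the stack
        Add,           // pop two values, push their sum
    }

    // Register-based: operands are explicit, so fewer (but wider) instructions.
    enum RegOp {
        Add { dst: u16, lhs: u16, rhs: u16 },
    }

    fn main() {
        // Three small instructions vs. one wider instruction.
        let stack_program = vec![StackOp::LocalGet(0), StackOp::LocalGet(1), StackOp::Add];
        let reg_program = vec![RegOp::Add { dst: 2, lhs: 0, rhs: 1 }];
        println!("{} vs {} instructions", stack_program.len(), reg_program.len());
    }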
I'm happy to see all these alternatives in the Wasm world! It's really cool. And thanks for sharing!
Even faster startup times can be achieved by so-called in-place interpreters that do not even translate the Wasm binary at all and instead directly execute it without adjustments. Obviously this is slower at execution compared to re-writing interpreters.
As there seem to be even more runtimes popping up (great job with wasmi, btw), it seems like a fun, maybe even full-time, job to keep up with them all.