ceronman's comments | Hacker News

It was. But he had 9 minutes vs more than an hour for Gukesh. The entire match has been Ding defending miraculously, I thought it was a matter of time before he eventually failed. The fact that it happened on the last moves of the last game, it's definitely hard for Ding, but fair for Gukesh IMO.


Overall I agree, the entire match seemed to be Ding defending. Gukesh kind of failed to capitalise the whole way through though.

wrt the time, this is kind of a bread and butter endgame. Ding shouldn't have blundered here with 10 minutes on the clock. Highly unlikely he would have blundered this two years ago.


Well it sounds like an instance of "your keys are always in the last place you look, because then you stop looking"


This was game 14, they were tied almost the whole way, and this was the only time Gukesh won with the black pieces.

Before the match, the expectation was that Gukesh would take an early lead and never look back, with the match ending before game 14. This morning, the expectation would be that Ding would make an easy draw with white (as he has done in 5 of his games as white already, winning the other), and it would go to tiebreaks.

Having the championship decided by a decisive final classical game is pretty rare. The last time it happened was 2010.


The match was more than one game


ding was attacking though. it's kinda crazy that he was looking to play for a draw with the white pieces, when he was in a great position to play for the win earlier, before he forced a trade of all the pieces.

ding may have lost because of a blunder late in the game, but i think he lost the game and match early, when he traded down to try to play for a draw.

gukesh played every game for a win


I don't get the "fair" argument. Would it be unfair if Ding did not blunder the rook? How so?


Presumably the classical world championship should be determined by classical chess games, and this was the last one before the shorter tiebreak games. Ding looked like he would’ve started losing more if there were more classical games, who knows though.


So the argument is some of the rules are unfair?


Agree completely.


x86 is certainly not dead, and I don't think it will be anytime soon, but they are still behind the Apple M3 in terms of performance per watt. And the M4 is about to arrive. I'm a bit disappointed because I really want more competition for Apple, but they're just not there yet, neither x86 nor Qualcomm.


Apple will also run into diminishing returns, but they will retain the real killer advantage over general purpose CPU vendors that they have in other hardware areas: being able to unilaterally retire or rework old or misconceived parts of the architecture in future versions. If they want to drop M1 support in some future macOS version, all it will take is a WWDC announcement that the next version simply won't work on that generation of machines or earlier.


Yeah, that all makes sense. But supporting legacy software has been a big advantage for x86 for, basically, forever. For a lot of purposes, that is way more important than performance per watt.


> x86 is certainly not dead, and I don't think it will anytime soon, but they are still behind Apple M3 in terms of performance per watt.

Do you have a source for this? I haven't seen a good review of Lunar Lake yet. The article this HN story links to is pretty bad.


I think they're running out of optimizations (hacks). They moved memory on package and bought the latest TSMC node. I guess they could keep buying the latest node but I don't expect any more large leaps.


Take it with a grain of salt. Other reviewers such as Hardware Canucks [1] have mentioned that they have not been able to get such long battery life. Their numbers are closer to 15 hours.

[1] https://www.youtube.com/watch?v=CxAMD6i5dVc


The type of test definitely matters but 15 hours ain't too shabby either


Yeah, especially since my ~12-hour XPS does maybe 7-8 hours in typical usage. Going from 24 to 15 hours seems roughly par for the course.


YouTube Premium was already quite expensive. They increased my family plan from €17.99 to €25.99. This is almost a 45% hike! Compare that to Netflix, which is €13.99.

I only pay this to skip ads, but a lot of content creators still have their own ads on the videos. There doesn't seem to be enough value to justify such price hike. I am likely to cancel my subscription.


It's possible to write some pretty unreadable code with Clojure, just like it's possible in any programming language.

I can tell you that this code is very easy to read if you are familiar with Clojure. In fact, this example is lovely! Seeing this code really makes me want to try this library! Clojure has this terse, yet readable aesthetic that I like a lot.

But I completely understand you, because at some point Clojure code also looked alien to me. You are not broken for only being familiar with a certain style of code. Familiarity is something you can acquire at any time, and then this code will be easy to read.

True hard-to-read code is one that is hard to understand even if you master the language it is written in.


Nice article. A couple of years ago I also implemented Lox in Rust. And I faced the exact same issues that the author describes here, and I also ended up with a very similar implementation.

I also wrote a post about it: https://ceronman.com/2021/07/22/my-experience-crafting-an-in...

I ended up having two implementations: One in purely safe Rust and another one with unsafe.

Note that if you're interested in the "object manager" approach mentioned, I did that in my safe implementation; you can take a look at https://github.com/ceronman/loxido
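
For anyone who hasn't seen this style before, here is a minimal sketch of the idea in safe Rust (illustrative only, not the actual loxido code; the `ObjectManager` and `Handle` names are made up for this example). Objects live in a Vec and are referenced by index-based handles instead of raw pointers, so no unsafe is needed:

  // Illustrative sketch: objects are stored in a Vec and referenced by
  // plain indices ("handles") instead of raw pointers.
  struct Handle(usize);

  enum Object {
      Str(String),
      // ... closures, classes, instances, etc.
  }

  struct ObjectManager {
      objects: Vec<Option<Object>>,
  }

  impl ObjectManager {
      fn alloc(&mut self, obj: Object) -> Handle {
          self.objects.push(Some(obj));
          Handle(self.objects.len() - 1)
      }

      fn get(&self, handle: &Handle) -> &Object {
          self.objects[handle.0].as_ref().expect("use after free")
      }

      fn free(&mut self, handle: Handle) {
          // A real GC would also reuse slots and trace reachability.
          self.objects[handle.0] = None;
      }
  }

The trade-off is that every object access goes through an extra indirection and a bounds check, which is typically what costs performance compared to an unsafe, pointer-based approach.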


I'd read your article, and it was lovely. It nudged me to just go unsafe and implement some of the data structures from scratch.


The author seems very anxious because Rust is getting traction and they don't like Rust. They're afraid that one day Rust will become a "monoculture" and everything will be written in it.

I like Rust, but I consider this very, very unlikely.

Rust has actually brought more choice to the programming language landscape. If we're talking about monoculture, let's talk about C/C++. For decades this was the only viable option for systems programming. All new languages were focusing on a higher level. Languages for lower-level stuff were rare. There was D, but it never got enough traction.

Then Rust appeared and there was finally an alternative. And not only that: because of it, other language designers decided to create new systems languages, and now we have Zig, Odin, Vale, etc.

So if anything, Rust is helping in breaking the monoculture, not creating it. C and C++ are not going away, but now we have alternatives.

And I think it's important to acknowledge that even if you don't like a language, if you see a bunch of software being written in that language, it's because the language is useful. I don't like C++ but I admit it's damn useful! People are writing interesting software in Rust because they find it useful.


Rust is challenging people because it declares several long-tolerated things about C/C++ to be inadequate (security issues, dependency management), and provides alternatives which show that it doesn't have to be like that.

The rewrites will inevitably be long and painful. Rewrites always are. But the onus on anti-Rust people is now to demonstrate a better language to rewrite in first, rather than just sitting in the status quo waiting for the steamroller driven by a crab to very slowly run them over.

D is interesting but seems to be a solo project; I'm not sure why it's not had traction. Maybe it's not different _enough_.


This is not meant as a critique of you, but your comment includes a hint of what bothers me with some Rust evangelists. I would call it "slightly entitled over-optimism".

I have a C++ service running in production. It's been in production for 10-ish years with minimal updates. It'll probably keep running just fine for the next 10 years.

With that in mind, "the onus on anti-Rust people is now to demonstrate a better language to rewrite in first, rather than just sitting in the status quo waiting for the steamroller driven by a crab to very slowly run them over." just doesn't make much sense to me. If the status quo is fine, there's no "onus", there's no difficult decision to be made, there just isn't any rewrite. The anti-Rust people will probably be fine by doing ... nothing.


> I have a C++ service running in production

Is it on a network (or other) security boundary, exposed to attack from the Internet?

Is it deployed on millions of machines worldwide?

_Those_ are the primary targets for replacement, because the networked environment is a very hostile place, and people are fed up with the consequences of that. Regular announcements of "sorry all your private data has been leaked lol". Constant upgrade treadmill to fix the latest CVEs.

(the poster child for this was really Shockwave Flash, later owned by Adobe, which had so many RCE exploits everyone united behind Apple killing it off. Even if this meant obsoleting a whole era of media and games which relied on it. That wasn't rewritten, it was just killed.)


I'd guess most services where performance matters are in the background. And this particular C++ service is only accessible over internal LAN, to be used by other back-end servers.

I agree with you that if ha-proxy and nginx didn't exist yet, they would be prime candidates for being implemented in Rust. But now that they already exist and reliably work, I'm not sure there is enough pain for them to get replaced anytime soon. BTW the last ha-proxy CVE was them differing from the HTTP spec and accepting the # character in additional URL components, which is something that probably no compiler could have flagged.


> I'm not sure there is enough pain for them to get replaced anytime soon.

https://blog.cloudflare.com/how-we-built-pingora-the-proxy-t...

https://blog.cloudflare.com/pingora-open-source


That sounds like a great project :) but

1. "Pingora is Not an Nginx Replacement" https://navendu.me/posts/pingora/

2. you still put it behind ha-proxy https://github.com/cloudflare/pingora/issues/132

3. https://github.com/cloudflare/pingora says "Pingora keeps a rolling MSRV (minimum supported Rust version) policy of 6 months." so for anyone who dislikes the "constant upgrade treadmill", this won't help much.

So my summary would be that Pingora is a great Rust library which one day might be used for replacing nginx and/or ha-proxy.

But the main advantages of Pingora - which are the reason why CloudFlare is using it now - have nothing to do with Rust. Obviously, a software architecture designed in 2022 can take advantage of modern hardware in a way that an architecture from 2004 cannot. (Yes, nginx is that old). Intel's TBB library brought "work-stealing" to C++ around 10 years ago. The other big improvement in Pingora is moving from multi-process to multi-threading pools. Again, C++ had thread pools for years.

So Pingora is probably great and it's written in Rust. But the business benefits that it brings aren't coming from Rust. They are coming from the fact that it's a modern architecture.


> But the business benefits that it brings aren't coming from Rust. They are coming from the fact that it's a modern architecture.

This is moving the goalposts. Your original post said

> I'm not sure there is enough pain for them to get replaced anytime soon.

Yet, here one of them is, being replaced at a company that powers ~10% of the traffic on the Internet.

But beyond that, your links:

> 1. "Pingora is Not an Nginx Replacement" https://navendu.me/posts/pingora/

Here's what the start of the post actually says:

> Think of Pingora as an engine that can power a car while you have to build the car yourself. Nginx is a complete car you can drive. River is a faster, safer, and an easily customizable car.

The title is being pedantic for effect. It doesn't say what you say it's saying.

> 2. you still put it behind ha-proxy https://github.com/cloudflare/pingora/issues/132

This is an issue opened by someone on an open source repository. They aren't talking about how Cloudflare itself uses it, but about how they want to use it.

> so for anyone who dislikes the "constant upgrade treadmill", this won't help much.

Similar to above, this is moving the goalposts. Sure, that might be true, but it's unrelated to the original topic.

> But the main advantages of Pingora - which are the reason why CloudFlare is using it now - have nothing to do with Rust.

This is not what Cloudflare themselves would say. They chose Rust for very specific reasons when building Pingora: (repeating the link from above) https://blog.cloudflare.com/how-we-built-pingora-the-proxy-t...

> We chose Rust as the language of the project because it can do what C can do in a memory safe way without compromising performance.

Cloudflare has been a vocal proponent of Rust for years now. Many years ago, they suffered a very serious bug, CloudBleed, that Rust would have prevented. And so they've been using Rust instead of C and C++ for a long time.

They of course would also very much agree that the architecture matters, but that doesn't mean that the implementation language doesn't matter either. If they chose to implement Pingora in, say, Ruby, that wouldn't have accomplished their goals.


I don't think anyone believes there will be no C++ codebases in the future - that's crazy talk. What could happen in a decade or two is that there'll be no _new_ C++ codebases. Popular languages are retired to legacy status from time to time, and C++ is completely outclassed by Rust.


People love to think that C++ is only used in systems programming; the thing is, C++ is used everywhere.

FORTRAN is being developed and improved, and new code, most notably in the scientific domain, is still being written.

What Rust did to C++ is what clang did to GCC: wake the giant up. Rust isn't going anywhere, but the same is true for C++.

Thinking that C++ will just fade to black is wishful thinking.


Since C++ still evolves and changes, I guess greenfield C++ projects in the future can limit themselves to a subset of the newer, improved language, and C++ will continue living on that way as well.


The C++ community is talking about evolving in the following ways in this space:

  1. contracts
  2. profiles
  3. successor languages
  4. borrow checking
The idea of a "subset of the language" is one that's often talked about, but there seems to be an explicit rejection of the idea of a subset, at least by several important committee members. It's not clear to me why this is the case.


I’m not sure why you’re getting downvoted. The committee members have a clear interest in keeping things as they are. However, from a practical perspective I suspect that a “subset lang” will happen. One just needs a linter or compiler flags to do that.


It's all good, I have too much karma anyway.

The thing is, there's a difference between a true subset and "some flags that reject certain things." Because that creates a number of different sets that may relate to each other in a variety of ways, some subsets, some overlapping.

But beyond the specific definitions here, "profiles" is that sort of approach, so something like it will happen, probably. It seems to have a lot of support.


"Subset language" is already an option. The developer can choose a safe(r) subset of C++ to constrain themselves to. Many (most?) C++ shops already do this, and go to varying lengths to enforce that only their blessed subset is used. We don't really need a committee to create a new "subset language" to accomplish this.


C++ already passed that line with C++17 and newer iterations. Not only does it evolve way faster starting with C++17, but modern code looks sufficiently different that it requires relearning some parts of C++ from scratch.

I've written my biggest project with C++11, and C++14 was just out back then. Now, I plan to reimplement that project (and improve it), again with C++, but I need to look at so-called "modern C++" to do it correctly and in a more future-proof way.

...and I'm glad I have to do that, because while I love (old school) C++, seeing it evolve makes me happy. Because systems evolve, software scale evolves, and most importantly hardware and the ways to get maximum performance from it evolve.

There's no need to write old-school C++ anymore. All these features are developed for a reason; they should be used.


What new scientific applications are being written in Fortran?


Fluid dynamics codes are still being written in Fortran.

Notably, there are still improvements happening in the Fortran space, and there's been a bit of revival of sorts. There are still features in Fortran that make it nicer to write in than C++ (and while I'd hoped rust might be a good Fortran replacement, I feel rust has taken a different path, and remains no better than C++).


New CFD codes? Links?


I would like to echo this sentiment. I like Rust, but I can't see a Rust monoculture.

From my experience, Rust is an absolute improvement in developer experience around so many corners. I'm looking forward to the future where Rust is well established and boring, and all its rough edges have been smoothed out, even if that means adopting another new and exciting language :)


IMO there was only a brief period of monoculture, and only if you consider C & C++ to be part of the same culture, which is a stretch. It started in the mid-80s when people stopped writing programs in Pascal and/or assembly, and stopped in the mid-90s when Java & Perl started to get used extensively.


> if you consider C & C++ to be part of the same culture, which is a stretch

That’s my main gripe with most pro-Rust comments of this kind, to be honest. There are a few ways to write C and a lot of ways to write C++, and most of the C is quite unlike most of the C++. (I’m not counting marginal cases like raw GObject or raw COM as C here, I think those count as basically separate languages.)

The problem is, the Rust I’ve read (and read about) is methodologically and stylistically a replacement for most of the C++, but not a lot of the C. I don’t dislike the theory behind Rust—I’ve written Haskell, I’ve written SML, I read the Tofte&Talpin regions paper and some of the subsequent research more than a decade ago. I do dislike when people ask me to switch or even try to, of all things, shame me into switching from C to what presents itself as a better C++, in largely the same way that I dislike attempts to switch to C++ that claim it’s the same thing as C. No it isn’t. And I largely tune out when I read “C/C++”, because it implies the author does not get it.

(I’m aware there are other people that do get it, some of whom work on other programming languages. They just don’t write posts proposing Rust replace C.)


I was talking about a monoculture specifically in the systems programming area. Java, Perl, PHP, Python, Ruby, JavaScript, C#, Go: these got popular, but they use garbage collection and have limitations for systems programming. C and C++ were the only options for a long while.


Systems programming is just a niche. A large niche, but just a niche. But between when Pascal & Assembly stopped being used widely and before Java started being used widely C & C++ were used for pretty much everything.


FWIW it was LLVM that catalyzed the modern florescence of programming language innovation. Rust is just another result of that shift, not its cause.


Windows used to be cool! As a kid I remember the Dangerous Creatures CD [1] came with a custom theme for Windows 95. It would change all the icons to cool animal stuff. The "My Computer" icon would change to a frog, the Recycle bin icon would change to a fish, and my favorite, the waiting icon for the mouse cursor would change to a Wasp!

[1] https://en.wikipedia.org/wiki/Microsoft_Dangerous_Creatures


Power Toys! It was the age of the Weezer video on the Windows 95 CD. Whole different attitude ...


> The result is a ~300 KiB statically linked executable, that requires no libraries, and uses a constant ~1 MiB of resident heap memory (allocated at the start, to hold the assets). That’s roughly a thousand times smaller in size than Microsoft’s. And it only is a few hundred lines of code.

And even with this impressive reduction in resource usage, it's actually huge for 1987! A PC of that age probably had 1 or 2 MB of RAM. The Super NES from 1990 only had 128 KB of RAM. Super Mario World is only 512 KB.

A PlayStation from 1994 had only 2MB of system RAM. And you had games like Metal Gear Solid or Silent Hill.


A PC in 1987 was more likely to have a max of 640 KB of RAM; the "AT compatibles" (286 or better) were still expensive. We had an XT clone (by the company that later rebranded as Acer) bought in 1987 with 512 KB of RAM.


Yes. I wrote a version of Minesweeper for the Amstrad CPC, a home computer popular in 1987 (though I wrote it a few years later). I think it was about 5-10 KB in size, not 300. The CPC only had 64 KB of memory anyway, though a 128 KB model was available.


7yo me could not understand how people could possibly make software but I knew I wanted to be part of it. I loved my CPC 6128.


Even the Windows 95 Minesweeper was only a 24 kilobyte program.


As long as you did not count the large libraries it was calling into.


Probably a little later but I had an Amstrad 8086 as a teen. I think it was the first computer I bought with my own money.


16 KB of the 64 KB was reserved for the screen buffer, if I remember correctly.


In 1987, I think you'd be very lucky to have that much RAM. 4MB and higher only started becoming standard as people ran Windows more - so Win 3.1 and beyond, and that was only released in 1992.


4 MB was considered a large amount of memory until the release of Windows 95. There were people who had that much, but it tended to be the domain of the workplace or people who ran higher end applications.

If I recall correctly, home computers tended to ship with between 4 MB and 8 MB of RAM just before the release of Windows 95. There were also plenty of people scrambling to upgrade their old PCs to meet the requirements of the new operating system, which was a minimum of 4 MB RAM.


It was over $100/MB for RAM in 1987. The price was declining until about 1990, then froze at about $40/MB for many years due to cartel-like behavior, then plummeted when competition increased around 1995. I was there when the price of RAM dropped 90% in a single year.


Like others have said, that would only be available on what would be a very costly machine for '87.

I distinctly remember the 386sx-16 I got in late 1989 came with 1 megabyte and a 40 MB hard drive for just under $4k from Price Club (now Costco), which was an unusually good price for something like that at the time.


By comparison, the original from https://minesweepergame.com/download/windows-31-minesweeper.... is 28 KB. Might be interesting to disassemble; surely somebody's done that?


A lot of the work being done here by the program code was done in dynamically linked libraries in the original game.


A PC in 1987 didn't run X11 either though.

You needed something way more expensive to run X11 before 1990.


Yes and no.

Since we are talking about software written today, not just software available in 1987, X386 (which came out with X11R5 in 1991) was more than capable of running on a 386-class machine from 1987. Granted, a 386-class machine with 1 MB of RAM and a hard disk would have been pushing $10k in 1987 (~$27k in 2024 dollars), so it wasn't a cheap machine.



I wonder how big the binary would be on a 1987 Amiga 500 then


Also, the PlayStation was notable in game development for being the first games console with a C SDK; until then it was Assembly only.

When arcade games started to be written in C, development was still done on mainframes and micros, with downlink cables to the development boards.


Very interesting! They say they went from a stack-based IR, which provided faster translation but slower execution, to a register-based one, which has slightly slower translation but faster execution. This is in contrast with other runtimes, which provide much slower translation times but very fast execution via JIT.

I assume that for very short-lived programs, a stack based interpreter could be faster. And for long-lived programs, a JIT would be better.

This new IR seems to be targeting a sweet spot of short, but not super short lived programs. Also, it's a great alternative for environments where JIT is not possible.
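
To make the trade-off concrete, here is a hypothetical illustration (these are not wasmi's actual instructions) of how an expression like a + b * c could be encoded in the two IR styles:

  // Hypothetical stack-based vs register-based encodings of `a + b * c`.
  enum StackOp {
      LocalGet(u32), // push a local onto the implicit value stack
      Mul,           // pop two values, push their product
      Add,           // pop two values, push their sum
  }

  enum RegOp {
      Mul { dst: u16, lhs: u16, rhs: u16 },
      Add { dst: u16, lhs: u16, rhs: u16 },
  }

  fn main() {
      // Stack-based: more, but simpler, instructions; almost a 1:1 copy of the Wasm.
      let _stack = [
          StackOp::LocalGet(0), // a
          StackOp::LocalGet(1), // b
          StackOp::LocalGet(2), // c
          StackOp::Mul,
          StackOp::Add,
      ];

      // Register-based: fewer, fatter instructions with explicit operand slots,
      // so the interpreter spends less time on dispatch and stack shuffling.
      let _reg = [
          RegOp::Mul { dst: 3, lhs: 1, rhs: 2 }, // r3 = b * c
          RegOp::Add { dst: 0, lhs: 0, rhs: 3 }, // r0 = a + r3
      ];
  }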

I'm happy to see all these alternatives in the Wasm world! It's really cool. And thanks for sharing!


Thank you, that's a good summary of the article!

Even faster startup times can be achieved by so-called in-place interpreters that do not even translate the Wasm binary at all and instead directly execute it without adjustments. Obviously this is slower at execution compared to re-writing interpreters.

Examples for Wasm in-place interpreters are toywasm (https://github.com/yamt/toywasm) or WAMR's classic interpreter.
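
As a very rough sketch of the in-place idea (this reflects no particular runtime's internals, and real Wasm uses LEB128 immediates, validation, structured control flow, etc.), the program counter simply walks the raw bytes and decodes each opcode every time it executes:

  // Toy direct-execution loop over raw bytecode.
  fn run(code: &[u8], stack: &mut Vec<i64>) {
      let mut pc = 0;
      while pc < code.len() {
          match code[pc] {
              0x41 => { // i32.const (immediate simplified to one byte here)
                  stack.push(code[pc + 1] as i64);
                  pc += 2;
              }
              0x6a => { // i32.add
                  let b = stack.pop().unwrap();
                  let a = stack.pop().unwrap();
                  stack.push(a + b);
                  pc += 1;
              }
              0x0b => return, // end
              _ => panic!("unsupported opcode"),
          }
      }
  }

Decoding on every execution is what makes this slower than a rewriting interpreter, but there is essentially zero translation cost up front.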


There is also this (https://github.com/mbbill/Silverfir) and Wizard (https://github.com/titzer/wizard-engine).

I wrote a paper about Wizard's in-place interpreter and benchmarked lots of runtimes in (https://dl.acm.org/doi/abs/10.1145/3563311).

As there seem to be even more runtimes popping up (great job with wasmi, btw), it seems like a fun, maybe even full-time, job to keep up with them all.


Also great job on Wizard btw! Your article about Wizard was a really interesting read to me.

The abundance of Wasm runtimes is a testament to how great the WebAssembly standard really is!

