Running Doom on anything and in any form imaginable really is an art form, but in this case I can't help but be reminded of the recent "Brainfuck as a Service" post ( https://zserge.com/posts/bfaas/ )
That's absolutely hilarious, thanks for sharing.
"Made with :heart: by a Blockchain Expert who wrote like 100 lines of Solidity in 2017 (which didn't work)"
I don’t know how large the game state/save game is, but if it’s small enough for a URL parameter, they might even use their CDN for caching, since I would assume the same state always results in the same game output. If you don’t deviate from the paths of previous players, you could sail along a cached path, no computation required :)
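The idea above can be sketched in a few lines: if the engine is deterministic, the same serialized state always yields the same rendered frame, so a hash of the state bytes can serve as the CDN cache key. This is a minimal, hypothetical sketch; the state layout shown is invented for illustration.

```python
import hashlib

def cache_key(state: bytes) -> str:
    """Derive a CDN cache key from serialized game state.

    Assumption: rendering is a pure function of state, so
    identical state bytes can safely share one cached frame.
    """
    return hashlib.sha256(state).hexdigest()

# Two players in the exact same state hit the same cache entry...
same_a = cache_key(b"e1m1|health=100|x=1056|y=-3616")
same_b = cache_key(b"e1m1|health=100|x=1056|y=-3616")
assert same_a == same_b

# ...while any deviation, however small, is a cache miss.
diff = cache_key(b"e1m1|health=99|x=1056|y=-3616")
assert diff != same_a
```

The catch, as pointed out downthread, is that the full state (position, angle, enemy states) makes exact collisions between players vanishingly rare past the first few frames of a level.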
Somewhat related: the PlayStation port of DOOM only allowed saving at the end of a level. It used alphanumeric sequences for its saves. You'd write down the sequence and manually re-enter it on your next session, avoiding the need for a memory card.
Presumably it only needed to store <level ID, difficulty, weapons, ammo levels, health, armour, checksum>.
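A password save like that is easy to sketch: pack the handful of fields into an integer, append a check digit, and render it in an alphanumeric base. The field names, widths, and mod-97 checksum here are invented for illustration; the real PSX DOOM password format differs.

```python
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encode(level: int, skill: int, health: int, armor: int) -> str:
    # Pack four 8-bit fields, then append a simple mod-97 check digit
    # so a mistyped password is (usually) rejected instead of loaded.
    packed = (level << 24) | (skill << 16) | (health << 8) | armor
    value = packed * 100 + packed % 97
    digits = ""
    while value:
        value, rem = divmod(value, 36)
        digits = ALPHABET[rem] + digits
    return digits or "0"

def decode(password: str):
    value = 0
    for ch in password:
        value = value * 36 + ALPHABET.index(ch)
    packed, checksum = divmod(value, 100)
    if packed % 97 != checksum:
        raise ValueError("corrupt password")
    return (packed >> 24, (packed >> 16) & 0xFF,
            (packed >> 8) & 0xFF, packed & 0xFF)

# Round trip: level 3, skill 2, 100 health, 50 armor.
assert decode(encode(3, 2, 100, 50)) == (3, 2, 100, 50)
```

Since everything fits in a few dozen bits, the resulting password stays short enough to write down on paper.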
DOOM still used fixed-point arithmetic at the time for speed, so the set of possible values could be large, but not as big as with floats. You could also drop a few bits of precision off the LSB end and make your search space that much smaller.
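The precision-dropping idea looks roughly like this. DOOM's coordinates are 16.16 fixed-point; masking off the low bits collapses nearby positions into one representative, shrinking the number of distinct states per coordinate by a factor of 2^drop. A hypothetical sketch (the quantization scheme is mine, not from the DOOM source):

```python
FRACBITS = 16  # DOOM's 16.16 fixed-point format

def to_fixed(x: float) -> int:
    """Convert a float to 16.16 fixed-point."""
    return int(x * (1 << FRACBITS))

def quantize(fx: int, drop: int) -> int:
    """Zero out the `drop` least significant bits.

    Nearby positions now map to the same value, so a cache of
    previously seen states needs far fewer entries per coordinate.
    """
    return fx & ~((1 << drop) - 1)

a = to_fixed(10.0001)
b = to_fixed(10.0002)
assert a != b                              # distinct at full precision
assert quantize(a, 8) == quantize(b, 8)    # equal after dropping 8 bits
```

Even so, as the replies note, multiplying the quantized positions by angle, level, and enemy states still blows the space up far past anything cacheable.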
That would approximate infinity quite fast once you move around a few steps :-)
I guess limiting rotation to 64 steps would be enough, but even then the state space grows exponentially. So it's not really useful in the end, but that fits the project, I guess.
But you have to multiply that by the possible locations on the map and the enemy states. I imagine it would be possible to get a few hits at the starts of levels but I suspect that the vast majority of the frames for any play-through have never been seen before.
This is one of those projects that sounds neat at first, but the more you think about it, the more you're left with an overwhelming question of why. Since serverless platforms charge you per API call, this would be tremendously more expensive than just about any other approach that already exists today, as the game would be making tens of thousands of API calls per user per session.
Neat idea, but the term "serverless" is still incredibly stupid and misleading. Go ahead and run any of this stuff without a server. Go ahead. I'll wait.
You can choose to deny it but you are using a very outdated definition of serverless. The modern definition is that you don't need to think about physical machines. You only worry about the code and running that code is someone else's problem.
I agree that this isn't the best word for it, because the etymology suggests that there are no servers, but language is about how it is used, not the letters that make it up.
That's all well and good, but I've taken the same position as OP when discussing this with colleagues, and they unironically held fast that the serverless model is essentially closer to a functional/pure programming model because it's "serverless", i.e. stateless.
I mean, I don't think they're wrong here. When all persistent state is hidden from your code by the runtime, you're as close to stateless as you can really get. Sure, you can argue that there is state somewhere in the stack, but that's always true, so it's not very useful to point that out.
That's exactly my point, and is exactly why I'm pointing it out. The argument from this group of people ignores that the state still exists. The state is not being eliminated, it's being shifted somewhere else. A lot of the time, the process of shifting that state somewhere else is not worth the cost.
It does sound like an intentionally misleading name invented by the marketing / PR department of a cloud service provider though. IMHO it would be better to come up with a better name that describes what's actually happening.
This already exists with FaaS -- function as a service. But then the common usage of "serverless" is FaaS plus all the additional services you need to actually make a real app.
Because the middle man does stuff for you like emergency OS patching, secure log handling, request routing, TLS termination and cert distribution, zonal redundancy and failover, access control, etc.
I grew up having to do all of that myself. It's really not that hard; it's a minor part of running a website/service/app. Why would I give up more control? Just a machine (bare metal or virtual) and the domain are enough... Guess I'm old now, heh.
As a business, you give up control for critical infrastructure to be managed by dedicated engineers who are experts in those areas, who can let you reuse their already-audited and compliance-certified infrastructure so that you don't have to do that yourself.
I agree it's not hard. It's also not hard to accidentally store your logs unencrypted, ruining your FIPS or HIPAA compliance and putting your company at legal risk.
Used by whom? A handful of marketroids? We're a technical community, ostensibly we favor precise language. We should push back against this sort of humpty-dumptyism, especially the new "descriptive language means you are forbidden to call out linguistic idiocy" kind.
If this is "serverless" then a taxi ride is "carless" and a restaurant meal is "stoveless". There is a distinct semantic difference between "you don't have to deal with X directly" and "X does not exist". Imagine the U.S. having a proxy war by supporting insurgents who bled and died for their cause, and then having the audacity to call it "bloodless" because the insurgents weren't Americans!
I just checked the source of the OP linked page, and this link is not in the page anywhere.
Likely an unintentional omission?
I'd suggest callouts in <h1>s myself, ideally at the start and the end of the article.
--
As an aside, it's... reasonably fast, but not playable. And all the monsters appear to have quietly been turned off, and because the game loop can't see your exact keystrokes, you can't cheat-code them back on :P
I think any potential customers would understand that scaling isn't the point of the demo. It's unreasonable to assume that they'd infinitely scale a processing- and bandwidth-hog like this just for fun.
As a potential customer, scalability is definitely top of mind. If you can't scale a tech demo, then I'm not left with a good impression of a CDN that is supposed to handle millions of requests per second.
It's not like Doom is that intensive, and the bandwidth is certainly less than a single 4K stream!
Scalability is probably a concern for a lot of people, but it still doesn't make it the point of the demo. They want to show that their service is capable of even running a workload so classically local and native as a full game with half-decent latency only using edge functions. And it's a mystery why you bring up streaming and CDNs. If you think that a 4K stream poses the same challenges as stateless, real-time, non-cacheable game engine processing targeting 60 req/s, then I don't know where to start.
This runs super well for me.
Looking at Chrome developer tools, it's sending out a new frame request about every 38 ms (roughly 26 fps).
Runs much nicer than the top two google results for "play doom online".
I was wondering about that. The DOOM savegame is not a perfect representation of the game state. A major one: monsters forget who they were targeting after a reload.
I can't play it right now but assume some consequences are: Monsters forget about you after you turn around a corner. Probably no monster infighting. Probably the revenant fireball can't home in on you either.
There was also some rounding error where things on the edge of a ledge drop down after a reload.
If kentonv or someone else at Cloudflare is paying attention, I think Doom would make a good demo for Cloudflare's new edge containers (currently in private preview) [1].
Edit: Or maybe Doom would be a good demo of Workers Unbound and Durable Objects. After all, Fastly just demonstrated that Doom can be ported to wasm, so it clearly doesn't need a whole Linux container.