I can shed some light on this. It's got little to do with physics in this case.
Whereas QuakeWorld uses an acknowledgement-based network update model, Quake II pumps out complete states of all visible entities/objects at a steady 10 Hz and assumes the client interpolates between those states (primarily to make animation easier for singleplayer, as variable animation rates plus interpolation can be a lot to keep track of per entity). Since Quake II uses UDP and has to deal with packet loss, it was seen as easier to just keep sending full entity/object states with no regard for deltas. John Carmack later considered this a mistake [1], and they tried a lot harder in Quake III Arena. It remains a thorn in the side of many people who modify Quake II, because addressing it means breaking compatibility with the entire mod ecosystem. It would have been nice to see this rectified in the re-release.
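To make the interpolation half of that concrete, here's roughly what a client does between two of those 10 Hz states. This is just an illustrative sketch, not the actual id code; the snapshot_t struct and the names are made up:

    /* Hypothetical, simplified snapshot; the real protocol carries far more state. */
    typedef struct {
        float  origin[3];    /* entity position as sent by the server */
        double servertime;   /* server timestamp of this 10 Hz state  */
    } snapshot_t;

    /* Blend the two most recent server states so a 60+ fps renderer doesn't
       visibly step at 10 Hz. 'now' is the client clock held back by roughly
       one server frame (100 ms) so both endpoints have already arrived.    */
    static void lerp_origin(const snapshot_t *from, const snapshot_t *to,
                            double now, float out[3])
    {
        double span = to->servertime - from->servertime;           /* nominally 0.1 s */
        double frac = (span > 0.0) ? (now - from->servertime) / span : 1.0;
        if (frac < 0.0) frac = 0.0;
        if (frac > 1.0) frac = 1.0;                                /* clamp: no extrapolation */

        for (int i = 0; i < 3; i++)
            out[i] = from->origin[i] + (float)frac * (to->origin[i] - from->origin[i]);
    }

The delay is the price of interpolation: you always render slightly in the past so both endpoints of the blend are already in hand.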
With the re-release you're basically sending out 4x the rate of the original, and there are still slow connections that might chug on all that data. Some parts of the US still deal with speeds closer to modem than broadband.
Anecdotal evidence: some years ago I was advising a fellow Quake II modder on how they could fix some limitations of the network model. They took the easy way and bumped the update rate to something that felt smooth enough for them, which rendered anything but listen/local games unplayable because it sent far too much data over the network. Eventually they gave up and started over on top of a QuakeWorld-based engine to avoid reinventing the wheel.
Also some fun trivia: In the original Quake II, physics are rounded down to network precision. They are not rounded to nearest, mind you - which results in a drunken walk. Unless you're around the [0,0,0] vector on the map's coordinate system, you will be unable to walk forward/back in a straight line.
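If I remember the original encoding right, coordinates go over the wire as 1/8-unit fixed point, and the float-to-network conversion is a plain C cast, i.e. truncation toward zero rather than round-to-nearest. An illustrative sketch (the function names are mine, not the exact id source):

    #include <stdio.h>

    /* Network coords as 1/8-unit fixed point. The original conversion is a
       plain cast, i.e. truncation toward zero, not round-to-nearest.       */
    static short coord_truncated(float f) { return (short)(f * 8.0f); }

    /* What round-to-nearest would look like instead (not what the game does). */
    static short coord_rounded(float f)
    {
        return (short)(f * 8.0f + (f >= 0.0f ? 0.5f : -0.5f));
    }

    static float coord_decode(short c) { return c * 0.125f; }

    int main(void)
    {
        /* Away from the origin the truncation error always points toward zero,
           so small per-frame movements drift instead of averaging out.        */
        float pos = 1000.37f;
        printf("truncated: %.3f   rounded: %.3f\n",
               coord_decode(coord_truncated(pos)),   /* 1000.250 */
               coord_decode(coord_rounded(pos)));    /* 1000.375 */
        return 0;
    }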
Creative Cloud supposedly has a web version of Photoshop [1], and then there's Office 365, which has been around for a good long while now. I suppose one could use those if need be.
Regardless of any special events going on, check out Ground Kontrol Arcade. They've got really nice drinks and a large selection of authentic arcade/pinball machines, new and old.
Great suggestion, thanks! This is different. I mostly have things like the tea garden and the raptor center under consideration. It's all a bit touristy I guess, so I was wondering what else there is.
That's like comparing a horse and carriage to a modern-day truck though, no? This can be fully automated, generating photoreal content on its own. You couldn't airbrush a photo via a cron job the way you can now automate a model generating thousands of images of people rioting/looting for authoritarian purposes. Who will go through the effort of verifying every one of them? Other language models with precision issues?
> You couldn't airbrush a photo via a cron job the way you can now automate a model generating thousands of images of people rioting/looting for authoritarian purposes.
I get how this could be a problem, but it seems to me that it would only be marginally more effective, not exponentially so as some assume. The reason I think this is that we already have this kind of thing going on without AI [1]. And while it does work, it's not clear to me whether making the fake more realistic actually does anything for the kind of people who get worked up about this. If you want to claim Seattle is burning and it's not, just grab a picture of another place burning, and you're good. Narrative achieved.
If you can pass off a photo taken somewhere else at another time as authentic, then what does it matter whether you can generate a new one? Are you going to trick more people? Maybe. But I have a feeling the people who are most likely to get tricked would have been tricked by far more mundane fraud.
People aren’t divided into “most likely to be tricked” and “least likely to be tricked”.
When you generate “photos” of Big Ben burning and throw them on Twitter, you will grab the attention of many more Londoners than you could with a random image of a burning house or whatever, and many of them would at least take the time to verify. Just look at the Balenciaga Pope images.
By grabbing the attention of those people you are essentially stepping into the realm of “historical alterations”.
That is only the game logic (which has been 'open' since 1999), not the engine. The engine is under a proprietary license and, per the terms of that license, incompatible with the code the projects here are using (which is GPL, as they take code from the open-source Quake engines and from community projects created over the past two decades).
However, the projects in question are outside US jurisdiction, so that doesn't really matter. They are in violation of id Software's and Valve's copyright, though.
You can find the original HL SDK license here https://pastebin.com/pAVKk1NL
The copy on GitHub is just a find-and-replace of the Source Engine license, which is not compatible either. Seems like human error either way.
To clarify, this mixes proprietary code with GPL code from id Software. Valve has spoken out about the engine in question before [1] and has barred it from being used for mods on Steam.
At Planimeter, back when we were "Team Sandbox," we ran into similar issues when attempting to publish Half-Life 2: Sandbox.[1] People can still find the source and game code on our repository listings from when we migrated off Google Code.
Valve Business Development did not give us the go-ahead.
Notably, ours was the first use of LuaJIT in the Source Engine; we think GMod only caught up later. Before then you couldn't use native Lua modules at all, only with our mod.
Half-Life: Update failed for the same reasons we did, and our team was in our 20s when we were about ready to publish, too.
Wow! Fascinating. Source needed an open-source sandbox thing from the beginning. Can't believe things went the way they did, but it doesn't surprise me. The original HL SDK license also prohibited you from releasing anything but game code in object form. Thank you for this effort.
Wow, I remember playing around with your LOVE2D engine ages ago. It was too complex for my use case, but I remember it being one of the most fascinating projects on the love2d forums.