If only someone would invent a co-processor that we could offload our rendering to. A "graphics processing unit," so to speak. Oh well. One day!
If you want to update some physics model at 60 FPS, you will be updating every 16 ms. An occasional 10 ms pause is not going to prevent that. Anyway, any reasonable game design needs to deal intelligently with pauses caused by network congestion, which can often be longer than 16 ms anyway.
Dude you have NO idea what you are talking about. An occasional 10ms pause absolutely will "prevent that". Those of us who work on software that does a lot of rendering using these fabled GPUs you mention know that feeding the GPU properly is a big problem and requires a lot of code to be running on the, err, CPU.
I don't even know what you are talking about wrt network congestion. What are you talking about??
If you're playing a multi-player game, you could easily lose touch with the servers or other players for more than 10 ms. So the game engine needs to be able to handle these pauses and extrapolate what happened with physics during the time you lost touch. It's the same thing here.
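To be concrete about the kind of extrapolation I mean: the usual trick is dead reckoning, i.e. advance the last state you actually received by however long it has been since you heard anything. A rough sketch (the names and structure here are illustrative, not from any particular engine):

    // Dead-reckoning sketch: render from the last known state, advanced by
    // the time since the last server update. Illustrative names only.
    struct EntityState {
        float x, y;             // last known position
        float vx, vy;           // last known velocity
        double lastUpdateTime;  // seconds, when we last heard from the server
    };

    EntityState extrapolate(const EntityState& known, double now) {
        double dt = now - known.lastUpdateTime;  // could easily exceed 10 ms
        EntityState guess = known;
        guess.x += known.vx * static_cast<float>(dt);
        guess.y += known.vy * static_cast<float>(dt);
        return guess;  // draw this; blend back toward the truth when packets arrive
    }

The point is that the renderer never waits on the network; it just draws the best guess it has right now.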
I developed games for Android at one point. GC pauses were common there. In the 1.0 version of Android, the pauses could be up to 200 ms. Now, it's more like 5 or 6 ms on average, as they improved the Dalvik VM. The GC hasn't stopped a lot of good games from being written for Android.
People need to be aware of the realities of scheduling on commodity computer hardware. A 10 ms guarantee is actually very good. You can't get much better than 10 ms timing on commodity PC hardware anyway. Even a single hard disk seek or SSD garbage collection event will probably block your thread for more than 10 ms.
Whether this guarantee is good enough for you depends on what you're doing. But I guarantee you that 99% of the crowd here doesn't know what you need to do to get better guarantees than 10 ms anyway (hint: it involves a custom kernel, not doing any disk I/O, and realtime scheduling).
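For the curious, the path to better-than-10 ms guarantees looks roughly like this on Linux (a sketch only, not production code; you need root or CAP_SYS_NICE, realistically a PREEMPT_RT kernel underneath, and the priority value here is an arbitrary illustration):

    #include <sched.h>      // sched_setscheduler, SCHED_FIFO
    #include <sys/mman.h>   // mlockall
    #include <cstdio>

    int main() {
        // Move this process into the realtime FIFO class so ordinary
        // timesharing work can't preempt it.
        sched_param param{};
        param.sched_priority = 80;  // 1..99, arbitrary here
        if (sched_setscheduler(0, SCHED_FIFO, &param) != 0) {
            std::perror("sched_setscheduler");  // usually: missing privileges
            return 1;
        }
        // Lock current and future pages in RAM so a page fault can't
        // stall us behind a disk seek.
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            std::perror("mlockall");
            return 1;
        }
        // ...and from here on, no disk I/O on this thread.
        return 0;
    }

None of which is something you get to rely on when you ship a game to a typical end user's machine, which is the point.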
Extrapolation is not at all the same thing. When you aren't getting network packets, yes, you run locally to do some kind of approximation of what is happening (in fact, for a good game, your frame rate is never contingent upon what is coming in over the network; you just always do your best to display whatever info you have right now).
But that is totally different from the GC situation. With an STW GC you cannot even run, so how do you expect to be able to display anything on the screen? Even with a non-STW GC, the reason the GC has to collect is that you are running out of memory (unless you are massively over-provisioned), and if you are out of memory for the moment, how are you going to compute things in order to put stuff on the screen?
Accessing disk/network/etc induces latency, yes, but that is why you write your program to be asynchronous to those operations! But this is a totally different case than with GC. To be totally asynchronous to GC, you would need to be asynchronous to your own memory accesses, which is a logical impossibility. I do not see how you even remotely think you can get away with drawing an analogy between these two situations.
We both know that to achieve 60 FPS, you need to update every 16 ms. A 10 ms pause should not prevent you from doing this, provided that your code is fast enough to run in the remaining 6 ms. And if your code is not that fast, you're going to have a lot of trouble dealing with older and slower machines anyway.
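Just to spell the arithmetic out: at 60 FPS the frame budget is ~16.7 ms, so a worst-case 10 ms pause leaves you roughly 6-7 ms for your own work. A toy loop showing the constraint (the numbers and the updateAndRender name are purely illustrative):

    #include <chrono>
    #include <thread>
    #include <cstdio>

    // Hypothetical stand-in for game logic plus draw submission. For 60 FPS
    // to survive a worst-case 10 ms pause, this has to fit in roughly 6 ms.
    static void updateAndRender() {
        std::this_thread::sleep_for(std::chrono::milliseconds(4));  // pretend work
    }

    int main() {
        using clock = std::chrono::steady_clock;
        const auto frameBudget = std::chrono::microseconds(16667);  // ~1/60 s

        for (int frame = 0; frame < 120; ++frame) {
            const auto frameStart = clock::now();
            updateAndRender();
            const auto elapsed = clock::now() - frameStart;
            if (elapsed > frameBudget)
                std::printf("frame %d blew its budget\n", frame);  // dropped frame
            else
                std::this_thread::sleep_until(frameStart + frameBudget);
        }
    }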
Having the latency capped at 10 ms is an extremely powerful guarantee. It means that even if your machine is not fast enough to play game X this year, next year's machine will be, because the GC latency doesn't change, and the rest of the program logic will run 2x as fast thanks to more memory and cores.
If you want to act shocked, shocked that GC has an overhead, then go ahead. If you want to pretend that nobody can ever build a game in a GC language (despite the fact that hundreds of thousands have, on Android and other platforms), then go ahead. Hell, even if you want to continue to use C or C++ to squeeze out every last drop of performance, then go ahead. But I find these comments really disingenuous.