This is still Euler integration, which has poor accuracy whenever the derivative varies with time. The standard numerical integration method is fourth-order Runge-Kutta (RK4), which is also popular for solving many other kinds of differential equations.
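For reference, a minimal sketch of one classical RK4 step for a 1D state (position, velocity), with the acceleration supplied as a callback; the names and the AccelFn signature are illustrative, not from the article:

    typedef struct { double x, v; } State;

    /* acceleration as a function of time, position and velocity (illustrative) */
    typedef double (*AccelFn)(double t, double x, double v);

    /* one classical RK4 step of size dt for dx/dt = v, dv/dt = a(t, x, v) */
    static State rk4_step(State s, double t, double dt, AccelFn a)
    {
        double k1x = s.v;
        double k1v = a(t, s.x, s.v);
        double k2x = s.v + 0.5 * dt * k1v;
        double k2v = a(t + 0.5 * dt, s.x + 0.5 * dt * k1x, s.v + 0.5 * dt * k1v);
        double k3x = s.v + 0.5 * dt * k2v;
        double k3v = a(t + 0.5 * dt, s.x + 0.5 * dt * k2x, s.v + 0.5 * dt * k2v);
        double k4x = s.v + dt * k3v;
        double k4v = a(t + dt, s.x + dt * k3x, s.v + dt * k3v);

        State out;
        out.x = s.x + (dt / 6.0) * (k1x + 2.0 * k2x + 2.0 * k3x + k4x);
        out.v = s.v + (dt / 6.0) * (k1v + 2.0 * k2v + 2.0 * k3v + k4v);
        return out;
    }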
Actually it's a form of Verlet integration, which performs much better than Euler's method while being much cheaper than RK4.
Euler's method does not perform well with any form of acceleration and should not be used when acceleration is present. The equations here will remain accurate under constant gravity.
Actually this is the 'midpoint' method. Verlet is different in that it eliminates the need for the first-order term (velocity) altogether; at any instant it is recovered from the previous and current positions (the zeroth-order term).
But yes, Verlet is much more powerful for game physics, especially when coupled with constraints, since it allows all kinds of non-trivial behaviour (e.g. angular momentum) to fall out for free.
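For the curious, a minimal position-Verlet sketch (illustrative names, not from any particular engine); note there is no stored velocity at all, it is implicit in the difference between the current and previous positions:

    /* position (Stormer-)Verlet: velocity is implicit in (x - x_prev) */
    typedef struct { double x, x_prev; } VerletBody;

    static void verlet_step(VerletBody *b, double a, double dt)
    {
        double x_next = 2.0 * b->x - b->x_prev + a * dt * dt;
        b->x_prev = b->x;
        b->x = x_next;
    }

Constraints then become trivial to bolt on: clamp or project x after the step, and the implied velocity adjusts itself.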
RK4 is indeed a popular and standard method. However for this particular equation or other equations with a conserved quantity a Symplectic integrator [1] should be used, as they are much better at keeping the numerical solution close to the real solution since they conserve the quantity of interest (energy, etc.).
RK4 is hard to write, and a symplectic integrator looks even harder. (I'm not familiar with symplectic integration, but I've just read the Wikipedia article and it's not immediately obvious how to go from the math there to working code. I have written an RK4 integrator in the past, and the RK4 Wikipedia article looks much easier to apply if you're writing the integrator for a physics engine.)
Euler, Euler midpoint, and Verlet are all very easy to code.
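For what it's worth, the simplest symplectic method is just as easy: semi-implicit (symplectic) Euler is ordinary Euler with the velocity updated before the position. A sketch, assuming a per-step acceleration a:

    /* semi-implicit (symplectic) Euler: update v first, then x with the NEW v */
    v += a * dt;
    x += v * dt;

Swapping those two lines back gives you plain explicit Euler, with all its energy drift.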
Also, you have to keep in mind the operation count versus the numerical stability. For the case under discussion -- constant acceleration -- Euler midpoint is perfectly suitable, as it gives the same answer as an exact analytical solution for the case where x(t) and y(t) are quadratic polynomials. RK4 or the like would only result in longer and slower code with no actual benefit in this particular application.
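Concretely, the update in question is just the constant-acceleration kinematics formula applied per step, so the step size drops out of the answer (a sketch, not the article's exact code):

    /* exact when a is constant over the step: x(t) is a quadratic polynomial */
    x += v * dt + 0.5 * a * dt * dt;
    v += a * dt;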
Correct me if I'm wrong, but there is no reason to numerically integrate this. This is not a differential equation; the integral solution is a simple function that can just be evaluated.
Leaving it as a differential equation gives you some room to experiment more easily. For example, suppose you want to add a swimming mechanic to your game: you might end up with a completely different closed form solution, but if you left it as a differential equation it could just be a couple extra additive terms.
I think the article is handling it this way for pedagogical reasons. It's more clearly in the realm of "Software Engineering" to say "This is what our solution's trying to approximate, and the easiest approximation causes practical problems, so this is a better way to approximate the same thing (in an only slightly more complicated way) by adding an extra term." The author's approach is easier to understand because it's merely a small patch on an existing design.
If you say "By the way, there's an exact solution to this, it involves something called Integrals that's usually the focus of at least three semesters of Calculus in college, but you can't really understand it without a few courses in Real Analysis, Differential Equations, and Numerical Methods..." then it seems like you're talking too much about Math instead of Software Engineering. As a result, you lose the audience whose main interest is making games for fun and profit, and don't care about math (or so they think). It's much harder to understand because it's a complete redesign of the integrator that relies on a non-trivial body of theory.
> This is still Euler integration, which has poor accuracy whenever the derivative varies with time.
Which is why decoupling the physics delta from the rendering framerate is important. The author mentions Quake (id Tech 1, 2, or 3?), and I seem to recall that id Tech 4 (Doom 3) was the first id Tech engine to implement that.
I have a vague recollection that it was fixed, or mostly eliminated, in Quake 3. It was definitely still an issue in Quake 2, and one relatively well-known cheap trick was leading people towards particularly architecturally dense parts of the map and using a weapon such as the hyper-blaster, a high-rate-of-fire laser gun that, for some unfortunate reason, was implemented as a discrete particle/projectile per shot. This would sometimes slow down the opponent(s) significantly enough to make a difference.
There were also special moves, especially rocket jumps and double-jumps, that were impossible below 60fps and got easier towards 100+. (This being in the days of 200MHz Pentiums and software rendering, the "Monster 3D 4MB", and intense envy of those who could afford 2x 12MB Voodoo II cards.)
And yet another reason why the serious competitors would turn off almost all graphics so that all you saw were colored boxes running around in colored boxes.
Use an accumulator to get a fixed dt no matter the framerate. With a variable step size you risk all kinds of weird bugs tied to the hard-to-debug rendering context. The size of dt should be considered a system parameter, tuned for your game and then fixed in concrete.
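A minimal sketch of that accumulator loop, in the spirit of the well-known fixed-timestep pattern; now_seconds, update_physics, and render are placeholder functions:

    const double dt = 1.0 / 60.0;        /* system parameter: tune, then fix */
    double accumulator = 0.0;
    double prev_time = now_seconds();    /* placeholder clock */

    while (running) {
        double time = now_seconds();
        accumulator += time - prev_time;
        prev_time = time;

        while (accumulator >= dt) {
            update_physics(dt);          /* always the same step size */
            accumulator -= dt;
        }
        render();
    }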
This problem involves performance, precision, code stability, and player perception, so there are no hard and fast rules applicable everywhere, and dt certainly does not always need to be fixed in concrete. A game can mix variable dt and fixed dt in different subsystems (render, logic, physics, user input, network, ...).
I have even managed to change these settings and parameters dynamically on the fly: which subsystems use which method, the lower and upper limits for variable dt, the number of iterations per render frame, and even the value to use for fixed dt, all adjusting depending on framerate and the state of the game.
A classic example: during a big explosion you may need maximum precision in physics (fixed dt with a low value), can afford variable dt for behaviour and interface (since stuff is just blowing up in the air), and can benefit from a low maximum dt that causes a bit of slow motion (John Woo style!).
When you're doing multiplayer there's a lot less you can afford to change, because you need to keep timing sane and synced across clients & server. Everything depends on the game, the engine, the platform, and the dynamics of what the player is seeing.
When your game does regain control, it can run the simulation for N steps (however many it usually does for the given dt), then finally render the scene using the latest state. (so the game will appear to pause, then skip ahead). Or it can try to play catch-up, it does 1 tick and render at a time, but with a reduced delay between frames until it's "caught up". (so the game will pause, then appear fast-forwarded for a bit).
Either way Tlark is right, you really don't want the game logic to be affected by framerate.
Then the game runs in slow motion. There's not much you can do on a slow system.
Or if you mean that you may be forced to draw at (for instance) 55fps, then the solution is to precalculate the next physics frame and linearly interpolate between the previous and next frame when drawing.
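A sketch of that interpolation, assuming a fixed-step accumulator loop that keeps the last two physics states around (prev_x, next_x, accumulator, and dt as in such a loop):

    /* alpha is how far render time has advanced into the next physics step */
    double alpha = accumulator / dt;                 /* in [0, 1) */
    double render_x = prev_x + (next_x - prev_x) * alpha;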
This is still basically wrong for general game physics. I.e. you should not blindly follow the author's suggestion to use it for all forces.
It happens to be an exact solution for one very specific situation -- the case of a constant force that is always applied. In this case, unvarying gravity with no air resistance.
This is typically one of the first things you learn in a Classical Mechanics course, because they can teach it using just Kinematics (the definitions of displacement, velocity, and acceleration) before introducing Dynamics (forces).
To prove it, you can just integrate the definition of acceleration twice and recognize that the integration constants are your initial position and velocity.
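Spelled out, with x0 and v0 as the integration constants:

    a(t) = a                          (constant)
    v(t) = v0 + a*t                   (integrate once)
    x(t) = x0 + v0*t + (1/2)*a*t^2    (integrate again)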
If the time-step changes, or if forces are due to input or other changing factors, then this is still a pretty terrible method.
Ah ... I should clarify ... the method is not so terrible, but rather the author's explanation and rationale are. Basically, it slightly improves on one little piece of the puzzle, and completely ignores the real issues like fixed time steps, render/physics/network/game logic loop decoupling, and stiff systems.
FWIW, I shipped two hit games this year that only used Euler integration and worse hacks. It made life painful, though.
Udacity has a course, "Differential Equations in Action", that's about numerical solutions of equations of motion and other differential equations from physics, biology, and so on. http://www.udacity.com/overview/Course/cs222/CourseRev/1
Does anyone have any insight into the strengths and weaknesses of each one? I've been unable to find any comprehensive review online about them. I don't have time to do both simultaneously, but I can do one first and the other later, or alternate between them.
I don't have a strong calculus background, though I'm above the average "programmer" or compsci graduate. I'm interested in simulation and numerical problems (especially the finite element method), but theoretical background is welcome when it's not overwhelming (i.e. when it's there to help you understand but isn't the focus of the course).
The Python necessary for the Udacity one is a bit of a pig if you've never coded in Python before. They help out a lot by essentially presenting complete programmes with you needing only to fill in some extra calculations, so you can concentrate on the important bit. Nonetheless, if you're used to being able to identify the type of an object by looking at its creation point, it's a bit of a mystery to begin with, especially since the first module uses 2D arrays to represent location and velocity. There are some handy Python functions to turn them into distances/vectors from various points, but if you're not familiar with them you'll spend too much time wrestling with Python instead of thinking about the DEs.
There's also the occasional big gap between the video lecture and what you're expected to do. Indeed, sometimes it's actually quite tricky just to work out what you're expected to do, despite the helpful comments in the code; I've found that the hardest aspect is not solving the problem, but getting a clear picture of the question being asked and translating the solution into Python. For language reference, I am an experienced coder in C and C-related languages (C++ and non-Cocoa Obj-C).
Still, it's early days and I expect these things will be ironed out over time.
For other simple methods of numerical integration, look at the Trapezoidal Rule and Simpson's Rule, two staples of high school (or college) calculus.
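For illustration, composite versions of both rules for integrating a function f over [a, b] with n panels (n must be even for Simpson's); this is a generic quadrature sketch, not game code:

    /* composite trapezoidal rule over [a, b] with n panels */
    static double trapezoid(double (*f)(double), double a, double b, int n)
    {
        double h = (b - a) / n;
        double sum = 0.5 * (f(a) + f(b));
        for (int i = 1; i < n; i++)
            sum += f(a + i * h);
        return sum * h;
    }

    /* composite Simpson's rule; n must be even */
    static double simpson(double (*f)(double), double a, double b, int n)
    {
        double h = (b - a) / n;
        double sum = f(a) + f(b);
        for (int i = 1; i < n; i++)
            sum += f(a + i * h) * ((i % 2) ? 4.0 : 2.0);
        return sum * h / 3.0;
    }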
Since we're talking about gaming, it bears noting that Box2D (and most physics engines, for that matter) uses the Semi-implicit Euler method (http://en.wikipedia.org/wiki/Symplectic_Euler_method). The author of Box2D mentions that this is a better method than Verlet integration because calculating friction requires knowing velocity.
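A sketch of why that matters: with semi-implicit Euler the velocity is an explicit state variable, so a velocity-dependent force drops in naturally (simple linear damping here as a stand-in for friction; the names and coefficient are made up):

    /* semi-implicit Euler with a velocity-dependent force */
    double a = gravity - friction_coeff * v;   /* the force needs v explicitly */
    v += a * dt;
    x += v * dt;

In position Verlet there is no v to hand the friction model, which is the Box2D author's point.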
This same thing is used in molecular dynamics simulations. For instance, there is an algorithm called RESPA that is used to break integrations of different types of particle interactions into appropriate timestep intervals. Bond vibrations must be calculated much more frequently than non-bonded interactions.
The algorithm (reversible RESPA) is formally derived from the Liouville operator (which governs the time evolution of any property):
A(t) = exp(iLt) * A(0)
For instance, A(t) can be position or momentum. The Liouville operator must be symmetric in order to generate a reversible numerical integration algorithm.
This was also colloquially known as the "leapfrog" algorithm and is the simplest of a class of integrators that are symmetric (in simulation time) and symplectic, which are crucial properties for some simulations. I take it that RESPA is the partitioning of the time evolution operator into components with different force gradients, and the application of different integration schemes to those components.
One can also generalise leapfrog to integrate the momentum (or some part) with n steps of dt/n.
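A sketch of the kick-drift-kick form (velocity Verlet), assuming the force depends only on position; each step is symmetric in time:

    /* kick-drift-kick leapfrog for dx/dt = v, dv/dt = a(x) */
    static void leapfrog_step(double *x, double *v, double dt,
                              double (*accel)(double))
    {
        *v += 0.5 * dt * accel(*x);   /* half kick */
        *x += dt * (*v);              /* full drift */
        *v += 0.5 * dt * accel(*x);   /* half kick */
    }

The generalisation mentioned above replaces each kick with n sub-steps of dt/n.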
The metric used to describe their accuracy is the degree to which they violate the conservation of energy, which can be shown to be an odd integral power of the timestep dt per step [0]. The error in leapfrog goes like the 3rd power.
Higher-order integration schemes can be derived, e.g. [1]. They may not be useful in practice, depending on the cost of computing the individual terms and the accumulation of finite-precision errors. But the known scaling behaviour provides a nice way of verifying the calculation of the evolution operators.
Another nice thing to do is to compute the "round trip", i.e. integrate forwards in time, and then backwards. With a symmetric integrator you should end up where you started in terms of position, momentum and energy, regardless of step size, so computing a suitable difference and seeing how it scales with trajectory length can be informative. (e.g. one can compute a Lyapunov exponent from such round-trips to see if the underlying dynamics are chaotic).
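A toy, self-contained version of that round-trip check using a harmonic oscillator (a(x) = -x); the leapfrog step is exactly reversible in exact arithmetic, so any residual is floating-point error:

    #include <stdio.h>
    #include <math.h>

    static double accel(double x) { return -x; }     /* harmonic oscillator */

    /* symmetric leapfrog step; negate dt to integrate backwards */
    static void step(double *x, double *v, double dt)
    {
        *v += 0.5 * dt * accel(*x);
        *x += dt * (*v);
        *v += 0.5 * dt * accel(*x);
    }

    int main(void)
    {
        double x = 1.0, v = 0.0, dt = 0.1;
        for (int i = 0; i < 1000; i++) step(&x, &v,  dt);   /* forward  */
        for (int i = 0; i < 1000; i++) step(&x, &v, -dt);   /* backward */
        printf("round-trip error: %g %g\n", fabs(x - 1.0), fabs(v));
        return 0;
    }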
It's an interesting post, but perhaps a simpler way of looking at the whole thing is you should be using average velocity over the time elapsed rather than the just-calculated new velocity.
It makes immediate, intuitive sense (at least it does to me) and doesn't require even thinking about differential equations (at least for constant forces).
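In code, that reading of it (which expands to exactly the extra a * dt * dt / 2 term for constant a):

    double v_new = v + a * dt;
    x += 0.5 * (v + v_new) * dt;   /* average of old and new velocity */
    v = v_new;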
His improved graph actually still doesn't hit the peak at all frame rates. The right way to do things from the usability perspective would be to calculate the peak and make sure the player can hit exactly that at some point; otherwise areas that are supposed to be reachable may not be, as he says. The code for that would be a lot more complex, though, so it may be the wrong thing from a business perspective, spending large amounts of your dev time on a small edge case of users and user situations.
You can cut down the amount of error even further by not iterating. Instead of iteratively updating the height, just store the initial position, velocity and time. You still compute the current position as pi + vi * dt - a * dt * dt/2, but intermediate results are discarded to avoid compounding floating point errors.
Of course, you will need to update the initial position/velocity/time whenever the jump is interrupted or modified. The reduction in error is also quite small.
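A sketch of that scheme; jump_t0, h0, v0, and g (a positive downward gravity magnitude, matching the sign convention above) are hypothetical names stored when the jump starts:

    /* re-evaluate the arc from the takeoff state; nothing accumulates */
    double t = now - jump_t0;
    height   = h0 + v0 * t - 0.5 * g * t * t;
    velocity = v0 - g * t;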
That assumes that there are no other forces. Your solution doesn't handle momentum changes (throwing an object, or taking a bullet, or being close to an explosion) while in the air, and it assumes that there's no maximum or terminal velocity in the game. It also becomes trickier to compute when your objects hit other objects.
You say that "you will need to update the initial [state vector] whenever the jump is interrupted or modified" - that's exactly what numerical integration does. So your solution seems like it would use a closed-form solution for some cases, and numerical integration for others, which would make the dynamics code easily twice as complicated.
Switching from Euler to leapfrog integration, by contrast, is a couple of lines of change for a better approximation.
Why does this even matter? If you have a large delta time then your game is fucked anyway. Collision detection will likely also be broken, and the game is unplayably choppy, so it doesn't even matter.
I remember a car trick (do a barrel roll) in GTA: San Andreas that, for the life of me, I could not accomplish. I found out that this variable physics was likely the problem: I dropped the graphics settings to minimum, and I was able to do the trick.
This is on a quad-core machine with 8GB RAM and a 1GB video card; it's no slouch. But the subtle difference was enough to make my task impossible.
This is incorrect. The suggested approach makes the game significantly more accurate at the cost of three multiplications and an addition. That does not make the game noticeably slower. If you're concerned about three multiplications and an addition, you're going to need to show me profiler data that identifies this as a specific bottleneck in the game logic.
No, you are talking about adding all of these operations per object in the game world.
It doesn't stop at gravity either. Virtually everything in the game world is moving via acceleration. If you were going to do this in a consistent manner you would need to do this for every single physics calculation for every single game world object.
You are correct that you would have to profile it to determine how much of an issue it would be, but it sounds like potentially a lot of dead weight to add to the game, especially since games often slow down precisely because there are a lot of objects in the game world simultaneously.
You'll have small variations at higher framerates too. Why not just avoid variation in this altogether? It's almost no extra work, and it makes things more consistent.
There's a large difference in feel between 30 Hz and 60 Hz. So much so that some devs will be absolutely inflexible about dropping below 60.
Also, it's very important to people's perceptions of performance (whoa) to maintain a consistent frame-rate. The variations are much more noticeable than the absolute rate. Finally, on lots of hardware, you can't actually display at 59Hz, say. You'll either be at 60 or 30, and will oscillate between the two in a most annoying way.
That is my point. You are sacrificing the accuracy of a double for the speed of a float.
You are already storing all these values in floats which is inherently inaccurate. Also if the dt becomes very large you have bigger problems than gravity anyway.
So, why is it important to have a fixed deltaTime, except in cases where determinism really does matter (e.g. lock-step networking)?
Because without fixed deltaTime, if the framerate is too low, you can't jump in Quake. With the "new algorithm", in the 3fps example, you still have to luck out and get a tick at the right time.
And with a constant delta, if you choose it right, it can be easier to get collision detection right. You can possibly get away with checking whether two things are colliding right this tick, instead of checking whether they might have run through each other between ticks. With a variable delta, if you want to get it right, you might have to check where things were half a frame ago and such, and at that point it might be easier to just check more often instead.
And I think you should probably ask the same question about variable deltaTime. Assuming that coupling rendering and game logic is not some best practice that you should default to, why would you want variable deltaTime? (There may or may not be some good answers to that, and maybe it totally depends on the game and so on.)
Constant delta time does not solve collisions. You should always be using continuous collision detection for things you care about, or eventually something will go too fast even for a fixed dT.
One good argument for variable dT is if you do enough logic that game time may be a major performance bound. If a machine running fixed dT cannot perform all the computation in the allotted time, the game falls behind schedule; how do you then resolve having a wall clock 10, 100, or more frames ahead of simulated time when you can't keep up? In that regard variable dT degrades more gracefully.
The point being that it doesn't matter whether you believe people will not play the game because of its performance; if they really like it, they will ignore it. And if they paid for your game, you should at least try to give them as good an experience as possible.
Yeah, in a perfect world all your buyers will have them. In real life, sometimes they don't.
Plus there are a lot of things you don't know; maybe they meet the minimum requirements but are running a lot of background processes because they installed a bunch of things they don't use.
If the game is so bogged down that the fixed timestep cycle is slowing down, the game is broken for so many other reasons. Objects will start tunneling through other objects.
No, I am just being realistic as an actual game developer; this is not a conceptual discussion. It doesn't make sense to implement this change, because if the fixed timestep of your physics system is breaking down, then gravity not calculating correctly is the least of your problems: the controls would be unresponsive at such a large dt anyway. You are talking about more than doubling the number of operations for all acceleration-based movement, which is just about every movement and applied force in the game. If you thought the framerate was hurting before because too many objects were on screen, doubling the physics calculations is not going to help you. It might not hurt that badly; it depends, and you would need to profile it. It doesn't matter, though, because tunneling would be another, bigger issue that you would also have to address for this to even matter, and checking for that would definitely hurt.
In the vast majority of use cases the timestep is fine, and you are just creating dead weight by doing this. Plus, I guess you are now doing tunneling checks as well, creating further dead weight for the rare occasion when someone has framerate issues.
I am looking at this in the sense that I would actually implement it in a game I make, and I would not, because the upside is basically "if the game is already fucked, I want it to be maybe not as fucked, while still having many other big issues with pretty much every other component of the game logic", versus a bunch of dead weight when the game is running correctly, which should be 99.5% of use cases.
Am I adding assumptions here? I guess so, but they are real ones for real game developers. Unless you are doing something very unusual, these would be your concerns.
This is not an academic exercise for me like it maybe is for you and most of the commenters it seems like.
> the game is unplayably choppy anyway so it doesn't even matter.
My point originally was about this statement of yours, which is purely conceptual, and you skew the discussion to address the other part of your comment (or you thought that was the point but didn't say so in your answers).
But anyway, one assumption is that every game should have a fixed timestep. In many single-player games (real single-player games) you don't want that, because it is preferable for the user to see the ball coming towards the player rather than have it magically appear behind the player, even if the latter is "timely correct"; the ball/interface going slowly is a lot less frustrating than losing without realizing why. PC platformers with (virtually) high-speed movement come to mind as a common example of this.
Even multiplayer games suffer from this; in most online FPSes, if the server suddenly slows down, all the players start experiencing lag and everyone looks like they are "teleporting". With a non-fixed timestep you could slow everyone down, so the problem becomes a lot less frustrating: players still have complete control over and understanding of their in-game character, just in a bit of slow-mo until the server speeds back up to normal. Teleporting should be used only to sync with the server when it is the client connection that is having trouble updating.
Oh well, "unplayably choppy" is more subjective, but I was going with the real problems a large dt would create, like collision detection failing and controls not working.
I can't think of a framework that doesn't have a fixed timestep, except maybe GameMaker or some of the simpler frameworks. Maybe some of the HTML5 stuff uses a non-fixed timestep, but that is a field that is just coming into its own right now.
None of the bigger 2D or 3D physics engines today that I know of use a non-fixed timestep.
In platformers with high-speed movement, fixed timesteps are even more important because of tunneling. You can often account for tunneling in some of the better physics engines, but it is sloooow.
In FPS games there is a lot of interpolation going on, with the server correcting player and object positions. If the server lags, you and everything else are going to teleport no matter what. The server is not even accepting your control input at that point, so either you are going to desync or you are going to teleport when the server tells you your real position. Generally in FPS games only critical physics objects are server-corrected anyway; this is usually a small fraction of the physics objects. For example, if an enemy fires a rocket at you, that is not usually server-corrected. If you get hit in the server's calculations, then you got hit, whether or not you think you dodged it client-side.
My point is that if the server hiccups, your jump being slightly off is not the big problem; the big problem is that the controls are unresponsive. Also, when the server hiccups, it is rarely the actual fixed timestep server-side that is the problem; it is the internet. If your computer hiccups, then it is not updating positions from the server, so all critical objects are going to teleport. Could there maybe be slightly less teleporting of non-player-controlled critical physics objects? Maybe, but if you are losing that many timesteps, things are going to jump around on your screen no matter what, because your framerate is like 5.
> in most online FPS if the server suddenly slows down all the players start experiencing lag and everyone looks like they are "teleporting"
Why would server lag affect FPS? I can only see how it would affect the reported positions of the other players, because you aren't getting updates to changes in their movement. Server lag should never affect your FPS.
It isn't that they cut the interval in half. It happens that, for constant acceleration, this midpoint lies on the actual solution. Notice that there is only one summing of position, and two of velocity.
Even if something was first published in 1969, some of us are seeing it for the first time. Anyhow, no one was claiming that this was a "new development".
X is time, Y is height. So something like, X time after hitting the jump button, the guy is Y above the ground.
(Or you can pretend that the guy is moving towards the right at constant speed, and jumping, and the points in the graphs are the different positions he'll be at :)
(Edit: I'm not very sure about the pictures to the left though. Maybe higher jumps/longer time or somethingsomething.)
Yeah, but shouldn't a delta of 1/3s be the same as 3fps? But the 1/3s to the left does not look like the 3fps to the right.
And it does say that the picture to the right is just like in Quake. So I was guessing that maybe the one to the left is not. Like, it has higher initial velocity and max height, and longer time spent in the air. Like the picture to the right is a jump that lasts for <1s while the one to the left is a way longer one.
Would be nice to have like axes with labels and things on them.
There are several ways to do "game gravity right" and not just one answer.
For example, some games have entirely reproducible game states which depend on only two things: the seed given to the PRNG and the inputs made by the player (and the times at which they happened).
There are a lot of games (probably most of them) which have gravity but which aren't "real" physics simulations and/or which definitely do not need a "real" physics simulation to be, well, fun games.
Some of them simply use integer math and precomputed "gravity lookup tables" consisting entirely of integers. You then use the elapsed time to know where to look in your table, and you can of course compute a value "between two lookup indices".
The advantage of integer math (with or without a gravity lookup table), compared to floating-point math, is that your game engine can stay deterministic even if the floating-point implementation varies from one platform / virtual machine to another.
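A sketch of such a table, with made-up fixed-point units (heights in 1/256 pixel, one entry per tick); the interpolation between two indices is plain integer math:

    /* precomputed jump heights, one entry per tick (values are illustrative) */
    static const int jump_table[] = { 0, 120, 232, 336, 432, 520 /* ... */ };

    /* interpolate between entries; frac is in [0, 256); caller keeps tick in range */
    static int jump_height(int tick, int frac)
    {
        int a = jump_table[tick];
        int b = jump_table[tick + 1];
        return a + ((b - a) * frac) / 256;
    }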
A good summary is here: http://gafferongames.com/game-physics/integration-basics/