Here's a common problem that shows up in implementations of games:

    class Time {
        uint32 m_CycleCount;
        float m_CyclesPerSec;
        float m_Time;
        
    public:
        Time() {
            m_CyclesPerSec = CPU_GetCyclesPerSec();
            m_CycleCount = CPU_GetCurCycleCount();
            m_Time = 0.0f;
        }

        float GetTime() { return m_Time; }

        void Update() {
            // note that this is expected to wrap
            // during the lifetime of the game --
            // modular math works correctly in that case
            // as long as Update() is called at least once
            // every 2^32-1 cycles.
            uint32 curCycleCount = CPU_GetCurCycleCount();
            float dt = (curCycleCount - m_CycleCount) / m_CyclesPerSec;
            m_CycleCount = curCycleCount;
            m_Time += dt;
        }
    };

    void GAME_MainLoop() {
        Time t;
        while( !GAME_HasQuit() ) {
            t.Update();
            GAME_step( t.GetTime() );
        }
    }
The problem is that the longer the game runs, the larger m_Time becomes relative to dt. Each m_Time += dt has to round the result to the precision of the larger operand, so more and more of dt's low bits are thrown away. Worse, as CPUs and GPUs get faster and the game's framerate rises, dt becomes smaller. So something that looks completely fine during development (where m_Time stays small and dt is large thanks to debug builds) turns into a literal time bomb as users play longer and upgrade their hardware.

At 300 fps each frame's dt is only ~1/300 s, which shrinks below the spacing between adjacent float values as m_Time grows. Time will literally stop advancing once the game has been running for around 18 hours (when m_Time passes 2^16 seconds and dt rounds away to nothing), and in-game things that depend on frame timing can become noticeably jittery well before then.
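
To make the failure concrete, here is a small standalone sketch (not part of the original code; the 300 fps rate and the uptime values are just assumptions) that accumulates dt into a float the same way Update() does:

    #include <cstdio>

    int main() {
        const float dt = 1.0f / 300.0f;   // ~3.3 ms per frame at an assumed 300 fps

        // ~19 hours of accumulated game time: dt is now smaller than half the
        // gap between adjacent floats, so every add rounds straight back to t.
        float t = 70000.0f;
        float before = t;
        for (int i = 0; i < 300; ++i)     // simulate one more second of frames
            t += dt;
        printf("at ~19h uptime: advanced %f s (expected 1.0)\n", t - before);

        // ~8 hours in: time still advances, but each add is rounded up to the
        // nearest representable step, so it runs fast and jittery.
        t = 30000.0f;
        before = t;
        for (int i = 0; i < 300; ++i)
            t += dt;
        printf("at ~8h uptime:  advanced %f s (expected 1.0)\n", t - before);
        return 0;
    }

With standard IEEE 754 floats this prints roughly 0.0 for the first loop and about 1.17 for the second: time first runs fast and jumpy, then stops entirely.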




Though inside game logic you should probably default to double, which would easily avoid this problem.
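
A minimal sketch of that double-based version, reusing the hypothetical CPU_* helpers and uint32 type from the code above; only the accumulator really needs to change:

    class Time {
        uint32 m_CycleCount;
        double m_SecsPerCycle;
        double m_Time;   // 53-bit mantissa: still nanosecond-level resolution
                         // after a year of uptime

    public:
        Time() {
            m_SecsPerCycle = 1.0 / CPU_GetCyclesPerSec();
            m_CycleCount = CPU_GetCurCycleCount();
            m_Time = 0.0;
        }

        double GetTime() const { return m_Time; }

        void Update() {
            // same wrap-safe modular subtraction as before,
            // but accumulated into a double
            uint32 curCycleCount = CPU_GetCurCycleCount();
            m_Time += (curCycleCount - m_CycleCount) * m_SecsPerCycle;
            m_CycleCount = curCycleCount;
        }
    };

After a year of continuous uptime (~3.2e7 s) a double's ULP is still only a few nanoseconds, so dt at any realistic framerate is nowhere near the rounding threshold.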


If I'm going to use a 64-bit type for time, I'd probably just use int64 microseconds, have over 250,000 years of uptime before overflowing, and not have to worry about the precision changing the longer the game is active.
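
A sketch of that approach under the same assumptions (hypothetical CPU_* helpers, uint32/int64 typedefs); the counter has a fixed 1 µs resolution no matter how long the game runs:

    class Time {
        uint32 m_CycleCount;
        double m_MicrosPerCycle;
        int64  m_TimeMicros;   // overflows after roughly 292,000 years

    public:
        Time() {
            m_MicrosPerCycle = 1.0e6 / CPU_GetCyclesPerSec();
            m_CycleCount = CPU_GetCurCycleCount();
            m_TimeMicros = 0;
        }

        int64  GetTimeMicros() const { return m_TimeMicros; }
        double GetTimeSecs()   const { return m_TimeMicros * 1.0e-6; }

        void Update() {
            // wrap-safe modular subtraction, as in the float version; note that
            // truncating to whole microseconds can drop up to 1 us per frame --
            // a production version would carry the fractional remainder forward
            uint32 curCycleCount = CPU_GetCurCycleCount();
            m_TimeMicros += (int64)((curCycleCount - m_CycleCount) * m_MicrosPerCycle);
            m_CycleCount = curCycleCount;
        }
    };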


So, fixed point. You could do that, but you can't make every single time-using variable fixed point without a lot of unnecessary work, and without sufficient care you end up with less precision than floating point. If you don't want to spend a ton of upfront time carefully choosing a scale for every variable just to avoid wasting 10 exponent bits, default to double.



