Why do some old games run much too quickly on modern hardware?
I've got a few old programs I pulled off an early 90s-era Windows computer and tried to run them on a relatively modern computer. Interestingly enough, they ran at a blazing fast speed - no, not the 60 frames per second kind of fast, rather the oh-my-god-the-character-is-walking-at-the-speed-of-sound kind of fast. I would press an arrow key and the character's sprite would zip across the screen much faster than normal. Time progression in the game was happening much faster than it should. There are even programs made to slow down your CPU so that these games are actually playable.
I've heard that this is related to the game depending on CPU cycles, or something like that. My questions are:
- Why do older games do this, and how did they get away with it?
- How do newer games not do this and run independently of the CPU frequency?
Solution 1:
I believe they assumed the system clock would run at a specific rate, and tied their internal timers to that clock rate. Most of these games probably ran on DOS in real mode (with complete, direct hardware access) and assumed you were running, IIRC, a 4.77 MHz system for PCs, or whatever standard processor that model shipped with for other systems like the Amiga.
They also took clever shortcuts based on those assumptions, including saving a tiny bit of resources by not writing internal timing loops inside the program. They also took up as much processor power as they could – which was a decent idea in the days of slow, often passively cooled chips!
Initially, one way to get around differing processor speeds was the good old Turbo button (which, despite the name, slowed your system down). Modern applications run in protected mode, and the OS tends to manage resources – it wouldn't allow a DOS application (which runs in NTVDM on a 32-bit system anyway) to use up all of the processor in many cases. In short, OSes have gotten smarter, as have APIs.
Heavily based on this guide on Oldskool PC where logic and memory failed me – it's a great read, and probably goes more in depth into the "why".
Tools like CPUkiller use up as many resources as possible to "slow down" your system, which is inefficient. You'd be better off using DOSBox to manage the clock speed your application sees.
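For example, a minimal [cpu] section in dosbox.conf might look like this (the cycle count here is illustrative – you'd tune it per game):

[cpu]
# emulate roughly this many CPU cycles per millisecond;
# a fixed value like 3000 approximates an early-90s machine,
# while cycles=auto or cycles=max let DOSBox decide
cycles=3000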
Solution 2:
As an addition to Journeyman Geek's answer (because my edit got rejected), for the people who are interested in the coding part/developer perspective:
From the programmer's perspective, the DOS days were a time when every CPU tick counted, so programmers kept their code as fast as possible.
A typical scenario where any program runs at maximum CPU speed is this simple pseudo-C:
int main()
{
    while(true)
    {
        // spin forever, consuming all the CPU time it can get
    }
}
This will run forever.
Now, let's turn this code snippet into a pseudo-DOS game:
int main()
{
    bool GameRunning = true;
    while(GameRunning)
    {
        ProcessUserMouseAndKeyboardInput();
        ProcessGamePhysics();
        DrawGameOnScreen();

        // close game
        if(Pressed(KEY_ESCAPE))
        {
            GameRunning = false;
        }
    }
}
Unless the DrawGameOnScreen function uses double buffering/V-sync (which was quite expensive back in the days when DOS games were made), the game will run at maximum CPU speed. On a modern mobile i7, this loop would run at around 1,000,000 to 5,000,000 iterations per second (depending on the laptop configuration and current CPU usage).
This means that if I could get such a DOS game working on my modern CPU under 64-bit Windows, I could get more than a thousand (1000!) FPS – and since the physics processing "assumes" it runs at between 50 and 60 FPS, the game itself would run roughly 20 times too fast, far beyond what any human can play.
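To see why, consider this hypothetical piece of movement code (PlayerX and the 5-pixels-per-frame figure are made up for illustration):

// movement is tied to the frame rate: at the 60 FPS the developer
// tested with, the sprite moves 300 px per second – at 1000+ FPS
// on a modern CPU, the very same code moves it 5000+ px per second
if(Pressed(KEY_RIGHT))
{
    PlayerX += 5; // 5 pixels per *frame*, not per second
}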
What modern-day developers (can) do
- Enable V-Sync in the game (not available for windowed applications* – i.e., only available in full-screen apps)
- Measure the time since the last update and adjust the physics processing accordingly, which effectively makes the game/program run at the same speed regardless of the CPU speed
- Limit the framerate programmatically
* depending on the graphics card/driver/OS configuration, it may be possible.
For the first option, I won't show any examples because it's not really "programming" – it's just using the graphics features.
As for the other two options, I will show the corresponding code snippets and explanations.
Measure the time since last update
int main()
{
    bool GameRunning = true;
    long long LastTick = GetCurrentTime();
    long long TimeDifference;

    while(GameRunning)
    {
        TimeDifference = GetCurrentTime() - LastTick;
        LastTick = GetCurrentTime();

        // process movements based on time passed and keys pressed
        ProcessUserMouseAndKeyboardInput(TimeDifference);

        // pass the time difference to the physics engine, so it can calculate anything time-based
        ProcessGamePhysics(TimeDifference);

        DrawGameOnScreen();

        // close game if escape is pressed
        if(Pressed(KEY_ESCAPE))
        {
            GameRunning = false;
        }
    }
}
Here you can see that the user input and physics take the time difference into account, yet you can still get 1000+ FPS on screen because the loop runs as fast as possible. Because the physics engine knows exactly how much time has passed, it no longer has to assume any particular frequency, so the game will run at the same speed on any CPU.
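For example, ProcessGamePhysics might use the time difference like this (a hypothetical sketch – PlayerX and PlayerSpeed are made-up names, not part of the code above):

// TimeDifference is in milliseconds, so convert it to seconds first
void ProcessGamePhysics(long long TimeDifference)
{
    double Seconds = TimeDifference / 1000.0;

    // move at PlayerSpeed pixels per *second*, no matter how often this
    // runs: at 300 px/s, each step is ~5 px at 60 FPS, ~0.3 px at 1000 FPS
    PlayerX += PlayerSpeed * Seconds;
}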
Limit the framerate programmatically
What developers can do to limit the framerate to, for example, 30 FPS isn't any more difficult – just take a look:
int main()
{
    bool GameRunning = true;
    long long LastTick = GetCurrentTime();
    long long TimeDifference;
    double DESIRED_FPS = 30;

    // how many milliseconds need to pass before the next draw so we get the framerate we want
    double TimeToPassBeforeNextDraw = 1000.0/DESIRED_FPS;

    // note to geek programmers: this is pseudo code, so I don't care about variable types and return types
    double LastDraw = GetCurrentTime();

    while(GameRunning)
    {
        TimeDifference = GetCurrentTime() - LastTick;
        LastTick = GetCurrentTime();

        // process movements based on time passed and keys pressed
        ProcessUserMouseAndKeyboardInput(TimeDifference);

        // pass the time difference to the physics engine, so it can calculate anything time-based
        ProcessGamePhysics(TimeDifference);

        // if a certain number of milliseconds has passed...
        if(LastTick - LastDraw >= TimeToPassBeforeNextDraw)
        {
            // ...draw our game
            DrawGameOnScreen();
            // and save when we last drew it
            LastDraw = LastTick;
        }

        // close game if escape is pressed
        if(Pressed(KEY_ESCAPE))
        {
            GameRunning = false;
        }
    }
}
What happens here is that the program counts the milliseconds passed, and when enough have accumulated (1000/30 ≈ 33 ms), it redraws the game screen, effectively capping the frame rate at about 30 FPS.
Also, the developer may choose to limit all processing to 30 FPS by slightly modifying the above code:
int main()
{
    bool GameRunning = true;
    long long LastTick = GetCurrentTime();
    long long TimeDifference;
    double DESIRED_FPS = 30;

    // how many milliseconds need to pass before the next draw so we get the framerate we want
    double TimeToPassBeforeNextDraw = 1000.0/DESIRED_FPS;

    // note to geek programmers: this is pseudo code, so I don't care about variable types and return types
    double LastDraw = GetCurrentTime();

    while(GameRunning)
    {
        LastTick = GetCurrentTime();
        TimeDifference = LastTick - LastDraw;

        // if a certain number of milliseconds has passed, do *all* the processing
        if(TimeDifference >= TimeToPassBeforeNextDraw)
        {
            // process movements based on time passed and keys pressed
            ProcessUserMouseAndKeyboardInput(TimeDifference);

            // pass the time difference to the physics engine, so it can calculate anything time-based
            ProcessGamePhysics(TimeDifference);

            // draw our game
            DrawGameOnScreen();

            // and save when we last drew the game
            LastDraw = LastTick;

            // close game if escape is pressed
            if(Pressed(KEY_ESCAPE))
            {
                GameRunning = false;
            }
        }
    }
}
Other alternatives
There are a few other methods, and some of them I really do hate – for example, using sleep(NumberOfMilliseconds). I know this is one way to limit the framerate, but what happens when your game processing takes 3 milliseconds or more, and then you execute the sleep? You end up with a lower framerate than the one sleep() alone should produce.
Take a sleep time of 16 ms, for example. On its own that would make the program run at roughly 60 Hz. Now say that processing the data, input, drawing and all the rest takes 5 milliseconds. That brings one loop iteration to 21 milliseconds, which results in slightly less than 50 Hz, even though you could easily still be at 60 Hz – but because of the hardcoded sleep, that's impossible.
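In code, the naive version being criticized looks something like this sketch (the 16 ms figure matches the example above):

while(GameRunning)
{
    ProcessUserMouseAndKeyboardInput();
    ProcessGamePhysics();
    DrawGameOnScreen();

    // hardcoded sleep: every iteration now takes 16 ms *plus*
    // however long the processing above took, so the real rate
    // drops below the intended 60 Hz
    Sleep(16);
}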
One solution is an "adaptive sleep": measure the processing time and subtract it from the desired sleep interval, which fixes our "bug":
int main()
{
    bool GameRunning = true;
    long long LastTick = GetCurrentTime();
    long long TimeDifference;
    long long NeededSleep;

    while(GameRunning)
    {
        TimeDifference = GetCurrentTime() - LastTick;
        LastTick = GetCurrentTime();

        // process movements based on time passed and keys pressed
        ProcessUserMouseAndKeyboardInput(TimeDifference);

        // pass the time difference to the physics engine, so it can calculate anything time-based
        ProcessGamePhysics(TimeDifference);

        // draw our game
        DrawGameOnScreen();

        // close game if escape is pressed
        if(Pressed(KEY_ESCAPE))
        {
            GameRunning = false;
        }

        // sleep only for whatever is left of the 33 ms frame budget
        NeededSleep = 33 - (GetCurrentTime() - LastTick);
        if(NeededSleep > 0)
        {
            Sleep(NeededSleep);
        }
    }
}
Solution 3:
One main cause is a delay loop that is calibrated when the program starts: the program counts how many times a loop executes in a known amount of time, then divides that count to generate smaller delays. This can be used to implement a sleep() function that paces the game's execution. The problems start when the calibration counter overflows because modern processors run the loop so much faster, so the computed delay ends up being far too small. In addition, modern processors change speed based on load, sometimes even on a per-core basis, which throws the delay off even more.
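A sketch of how such a startup calibration might look (simplified pseudo-C, reusing the GetCurrentTime pseudo-function from Solution 2; real implementations read a hardware tick counter):

// calibrated delay loop: measured once at startup
long long LoopsPerMs;

void Calibrate()
{
    long long Count = 0;
    long long Start = GetCurrentTime();
    // spin for a known 100 ms and count the iterations
    while(GetCurrentTime() - Start < 100)
    {
        Count++;
    }
    LoopsPerMs = Count / 100; // iterations per millisecond on *this* CPU
}

void DelayMs(int Milliseconds)
{
    // busy-wait: burn the calibrated number of iterations
    for(long long i = 0; i < LoopsPerMs * Milliseconds; i++)
    {
        // spin
    }
}

On a processor thousands of times faster than the one the code was written for, Count can overflow its type during calibration, and the computed per-millisecond value becomes garbage – exactly the failure mode described above.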
Really old PC games simply ran as fast as they could, with no regard for pacing. This was mostly the case in the IBM PC/XT days, however – hence the Turbo button, which slowed the system down to match a 4.77 MHz processor for exactly this reason.
Modern games and libraries like DirectX have access to high-precision timers, so they don't need calibrated delay loops.
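On Windows, for instance, one such high-precision timer is exposed through the QueryPerformanceCounter API; a minimal sketch of measuring one frame's elapsed time:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER Frequency, Start, Now;
    QueryPerformanceFrequency(&Frequency); // ticks per second, fixed at boot
    QueryPerformanceCounter(&Start);

    // ... one iteration of the game loop would go here ...

    QueryPerformanceCounter(&Now);
    double ElapsedSeconds = (double)(Now.QuadPart - Start.QuadPart)
                          / (double)Frequency.QuadPart;
    printf("Elapsed: %f s\n", ElapsedSeconds);
    return 0;
}

The elapsed time can then be fed into the physics processing exactly as in Solution 2, with no calibration against the CPU's speed required.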