I thought this might interest some people.
I had Outpost 2 running full screen, and I was thinking my units were moving rather sluggishly. So I decided to reduce the window size to about a quarter of the screen, and suddenly my units started moving a lot faster. That got me curious about just how long it takes to render a scene. (After all, my computer is a number of times faster than what was on the market when OP2 was released, so why the slowdown?)
I ended up hooking the virtual function responsible for drawing the detail pane and adding some timing code, using the RDTSC instruction (Read Time Stamp Counter), to see just how long it took to draw a single frame. This covered only the detail pane drawing, not the mini map or command pane, or any of the rest of the game logic. For some perspective, my computer runs at 800 MHz, and RDTSC tells you how many processor cycles elapsed between two reads of the counter. (Note that it's hard to get accurate results for small sections of code with this instruction, possibly due to caching or pipelining issues, I'm not really sure, but this wasn't a very small section of code being tested, so hopefully that's not much of an issue.) I used the full 64-bit precision, since 32 bits can roll over pretty fast, although even 32 bits would have been enough for over a second of timing on even the fastest CPUs. The tests were done on a single-CPU machine, so the timing values likely include time spent executing other processes, plus context switch time. Basically, these results kind of blow, and add to that the fact that I don't really know anything about collecting good statistical results. But numbers are usually interesting to look at anyways. =)
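For anyone curious what that looks like, here's a minimal sketch of the timing wrapper. This isn't my actual code: the draw function's name and signature are made up, and it uses MSVC's __rdtsc intrinsic instead of inline assembly.

```cpp
#include <intrin.h>   // __rdtsc (MSVC); GCC/Clang have it in <x86intrin.h>
#include <cstdio>

// Hypothetical typedef for the detail pane draw method. On 32-bit MSVC,
// __fastcall with a dummy second parameter mimics __thiscall ('this' in ecx).
// The real name and signature in OP2 will differ; this is just illustration.
typedef void (__fastcall *DrawDetailPaneFn)(void* thisPtr, void* edxDummy);
static DrawDetailPaneFn originalDraw = 0;  // saved when the vtable is patched

// Replacement that the patched vtable entry points at: read the time stamp
// counter before and after calling the original, and log the 64-bit delta.
static void __fastcall TimedDraw(void* thisPtr, void* edxDummy)
{
    unsigned long long start = __rdtsc();
    originalDraw(thisPtr, edxDummy);
    unsigned long long cycles = __rdtsc() - start;

    printf("Detail pane draw: %llu cycles\n", cycles);
}
```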
I'm afraid some of the results were also a bit too sporadic to draw much of a conclusion from, so take everything with a grain of salt. Mind you, I haven't done much more than simply eyeball the numbers so far either. Typical timing results between tests ranged anywhere from about 1 million cycles up to about 65 million cycles, with a few sporadic results outside of that range.
When running full screen and doing nothing but looking at my base, it took around 13 million to 19 million cycles, with an average probably somewhere in the 14 million range. When scrolling in full screen, those values shot up to about 65 million cycles or so. That's about 81 ms for a single frame at 800 MHz. Definitely more time than is needed to keep the game running at full speed.
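For reference, the conversion from cycles to time is just a division by the clock rate; as a tiny sketch (hard-coding my machine's 800 MHz clock):

```cpp
// Convert an RDTSC cycle count to milliseconds, assuming a fixed 800 MHz
// clock: cycles / 800,000,000 cycles-per-second * 1000 ms-per-second.
double CyclesToMs(unsigned long long cycles)
{
    return cycles / 800000.0;   // e.g. 65,000,000 cycles -> ~81 ms
}
```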
With a smaller window, the results seemed a bit more sporadic. Some runs typically took about 2 million to 4 million cycles, with an average of about 3 million cycles, although these results didn't replicate very well between runs; the average and typical range were sometimes a few million cycles higher (about 5 to 8 million cycles). When looking at some random part of the map without units, the times ranged from about 1.1 million cycles to about 1.5 million cycles. Scrolling brought the time up to about 17 million to 26 million cycles. The average seemed to remain consistently higher after scrolling around for a while.
I also tried checking the effects of day and night. The numbers looked about 0.3 million cycles higher on average for an empty nighttime screen than for an empty daytime screen.
The number of units on the map also seems to have a big impact. On a map with no units and no day/night, it took about 0.1 million cycles per frame. On a map with over 2000 Lynx, about half of which were in view (and no day/night), it took about 26 million cycles per frame (about 32 ms per frame).
One of the reasons I wanted to look into this is that the drawing code I've seen is a bit odd. Plus, certain functions look like they have a lot of overhead before they even get down to copying the bitmaps around. When it does get to copying the bitmaps, it does so using an indirect function call to draw each scanline. I suspect that function call overhead may be rather significant.
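To illustrate the pattern I mean (a simplified sketch, not the actual OP2 code; all names here are made up), the blit loop makes one indirect call per scanline, which pays call/return overhead, and likely an indirect branch misprediction, hundreds of times per bitmap:

```cpp
// Simplified sketch of the per-scanline pattern (not the real OP2 code;
// the names are made up for illustration).
typedef void (*CopyScanlineFn)(unsigned char* dest, const unsigned char* src,
                               int pixelCount);

static void CopyScanlinePlain(unsigned char* dest, const unsigned char* src,
                              int pixelCount)
{
    for (int i = 0; i < pixelCount; i++)
        dest[i] = src[i];
}

// Chosen at runtime based on pixel format, lighting, transparency, etc.
static CopyScanlineFn copyScanline = CopyScanlinePlain;

static void BlitBitmap(unsigned char* dest, int destPitch,
                       const unsigned char* src, int srcPitch,
                       int width, int height)
{
    // One indirect call per scanline: for a full-height pane, that's
    // hundreds of calls per bitmap, each paying call/return overhead and
    // (on CPUs of this era) a possible misprediction on the indirect jump.
    for (int y = 0; y < height; y++)
    {
        copyScanline(dest, src, width);
        dest += destPitch;
        src  += srcPitch;
    }
}
```

If my suspicion is right, hoisting the dispatch out of the loop (picking a whole-bitmap blit routine once per format) would pay that cost once per bitmap instead of once per scanline.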