Yes, it sounds like what you'd proposed for reducing research contribution for each level was mathematically equivalent to increasing research costs for each level. Instead of dividing the contribution by a certain amount, multiply the cost by the same amount to get an equivalent system where research contribution remains fixed, but research costs grow (exponentially, or not, depending on the sequence of numbers you choose). Well, equivalent aside from rounding error.
Regarding integer math, avoid division whenever possible. There are two main reasons:
1) Division is much slower than other basic mathematical operations (addition, subtraction, multiplication).
2) Division introduces rounding errors.
For instance, instead of checking if ((x / 10) < y), you could check if (x < y * 10).
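To make that concrete, here's a minimal sketch of the transformation. The function name is mine. The equivalence holds for non-negative integers because integer division truncates toward zero; you'd want to watch for overflow in y * 10 if y can be large:

```cpp
// Division-free threshold check: equivalent to (x / 10) < y for
// non-negative x and y, since integer division truncates toward zero.
// Beware overflow in y * 10 for large y.
bool belowTenths(int x, int y) {
    return x < y * 10;
}
```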
A slightly related topic: avoid floating point whenever possible. You really need to know what you're doing to work with floating point properly. Many mathematical properties that people assume hold actually fail for floating point, and you can often do equivalent work with integer math. Here are some common issues:
1) The spacing between consecutive floating point numbers is not fixed. For large numbers, the minimum distance between representable values can be greater than one. For a 32-bit floating point value I believe that threshold is at 2^24; for a 64-bit value I believe it's at 2^53. This of course means rounding errors greater than 1. Try it:
float f = 16777216; // 2^24
f++; // No change; 2^24 + 1 rounds back down to 2^24
2) Floating point addition is not necessarily associative.
Associative means ((A + B) + C) == (A + (B + C))
See the previous point. If one value is large, say 2^24, and the other two are 1, then in ((2^24 + 1) + 1) the inner sum rounds back to 2^24, and adding the second 1 rounds back again, leaving you at 2^24. Add the small numbers first and you get (2^24 + (1 + 1)) = (2^24 + 2), which is exactly representable, with no rounding errors.
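A quick sketch of that failure at the 2^24 boundary, using 32-bit floats (the function names are mine):

```cpp
// Above 2^24, consecutive floats are 2.0f apart, so 2^24 + 1.0f
// rounds back to 2^24 (round-to-nearest, ties to even).
float sumLargeFirst() { return (16777216.0f + 1.0f) + 1.0f; } // both adds round away
float sumSmallFirst() { return 16777216.0f + (1.0f + 1.0f); } // 2^24 + 2 is exact
```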
3) Common decimal values are not exactly representable in floating point. Take, say, "0.1". Powers of 2 work fine (including negative powers, which produce fractions less than 1), so 0.5 = 2^(-1) is no problem, much like how 0.25 = 2^(-2) is no problem. Sums of powers of 2 also work, such as 0.75 = 0.5 + 0.25. When working in base 10, remember that 10 = 2 * 5, and that factor of 5 throws things off. It's like dividing by 3 or 7 in decimal and getting a repeating answer. The same thing happens in binary when you divide by 5: since 5 is not a factor of the base, the representation is a repeating sequence of bits, representing an infinite sum of (negative) powers of 2. Registers hold values in a fixed number of bits, so that repeating pattern gets truncated.
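The classic demonstration of this, in doubles: summing the inexact 0.1 ten times doesn't land exactly on 1.0 (function name is mine):

```cpp
// 0.1 has no finite binary expansion, so each addition carries a tiny
// rounding error; ten of them do not sum to exactly 1.0.
double sumTenths() {
    double total = 0.0;
    for (int i = 0; i < 10; ++i) {
        total += 0.1;
    }
    return total;
}
```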
4) Exact value comparisons often fail. This can be due to rounding errors in calculations, or rounding errors in converting decimal literals in source code to actual floating point values (see above). Instead, a close-enough approach is often needed.
if (x == y) { /*Code not likely to execute when expected*/ }
if (fabs(x - y) < epsilon) { /*This should work for an appropriately small epsilon value*/ }
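One refinement worth mentioning: a fixed epsilon only suits values near 1.0; it's too tight for large values and too loose for tiny ones. A tolerance that scales with the operands is often more robust. This is a sketch, and relTol is an assumed tuning knob:

```cpp
#include <algorithm>
#include <cmath>

// Relative comparison: the allowed difference scales with the larger
// operand's magnitude, so it behaves sensibly across a wide range.
bool nearlyEqual(double x, double y, double relTol = 1e-9) {
    return std::fabs(x - y) <= relTol * std::max(std::fabs(x), std::fabs(y));
}
```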
5) Conversion between floating point and integers can be slow.
6) Different processors implement floating point to different precision, which means results often differ when the same algorithm is run on different CPU families. On x86 you have a choice of 4 byte floats (32-bit), 8 byte floats (64-bit) and 10 byte floats (80-bit). I'm not aware of other processors that offer the last option.
Not surprisingly, many code optimizers won't touch floating point code. Consider what reordering operations could do: the operations aren't associative, and rounding errors build up over a sequence of operations and will almost certainly differ if the operations are reordered.
It often makes sense to avoid floating point when possible. Consider this: to calculate a Cartesian distance, you can use the Pythagorean theorem along with a square root.
dx = x1 - x2;
dy = y1 - y2;
distance = sqrt(dx*dx + dy*dy);
The stock square root function uses floating point, so even if your distances were integers, you'd convert to floating point, perform the square root, and then possibly convert back to integer. You could use a custom integer square root routine, but it's still going to be a bit slow.
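For reference, an all-integer square root can be done with Newton's method. This is a rough sketch (name is mine), still slower than a multiply, but with no float conversions at all:

```cpp
#include <cstdint>

// Integer square root: largest s such that s * s <= n, computed with
// Newton's method entirely in integer math. A sketch, not a tuned routine.
uint32_t isqrt(uint64_t n) {
    if (n == 0) { return 0; }
    uint64_t guess = n;                    // any starting guess >= sqrt(n) works
    uint64_t next = (guess + n / guess) / 2;
    while (next < guess) {                 // strictly decreases until converged
        guess = next;
        next = (guess + n / guess) / 2;
    }
    return static_cast<uint32_t>(guess);
}
```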
But if you're only checking a distance against a threshold, like say a proximity check for enemy units or a range check for firing a weapon, that's a different problem. You only need a yes/no answer, not the exact distance. So square both sides of the equation.
dx = x1 - x2;
dy = y1 - y2;
if (dx*dx + dy*dy <= distance*distance) { /*Within range*/}
The above uses only integer math, and avoids a slow square root calculation. Note that it never actually calculates the distance between two objects. It only determines if two objects are within a certain distance. Also, if many objects are being tested for proximity, the square of the distance threshold does not change, so that multiplication can be lifted out of the loop.
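Put together, such a proximity loop might look like this sketch; the Point type and all names here are mine, not from any particular codebase:

```cpp
#include <vector>

struct Point { int x, y; };

// Counts units within `range` of `origin` using only integer math.
// The squared threshold is computed once, outside the loop.
int countInRange(const Point& origin, const std::vector<Point>& units, int range) {
    const int rangeSquared = range * range;  // hoisted; never recomputed per unit
    int count = 0;
    for (const Point& u : units) {
        const int dx = u.x - origin.x;
        const int dy = u.y - origin.y;
        if (dx * dx + dy * dy <= rangeSquared) {
            ++count;
        }
    }
    return count;
}
```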
But that was a bit of a side tracked rant.
As for Runescape, depending on the skill, most levels unlock a new ability. Smithing had a whole set of weapons and armour for, I think, 6 different materials. It got crowded near level 99, in that the last level unlocked a whole bunch of things; there weren't enough levels to distribute all of it evenly. Skills like fletching had fewer abilities to unlock, so they were spaced out more. That unlocking of new abilities made each level more rewarding.
Edit: Meant "associative", not "commutative".
Perfect is subjective. Technically, code that compiles is syntactically perfect: code won't compile with syntax errors, thus the syntax is technically perfect.
Now you're playing semantics. You know exactly what I was talking about. :P
Rushing something out the door to make it playable isn't always a good idea though.
Except that's not what I was suggesting. Rushing something out the door gets you Outpost. Or Daikatana. Or E.T. on Atari.
What I was saying is that sometimes you need to just make a decision and start plodding through the code. If you don't, you will never finish. You won't always have good code. Sometimes you have mostly good code with a few very ugly sections. It happens. Note what's ugly and why it's ugly, then move on. E.g., in my own code for OutpostHD I have several things like this:
Structure* _s = reinterpret_cast<Structure*>(tile->thing());
Robot* _r = reinterpret_cast<Robot*>(tile->thing());
Structure and Robot both derive from Thing. Tiles only store Things. In this case I'm casting from a Thing to a Structure or a Robot and proceeding. This is terrible practice in modern programs (though it's really common in a lot of legacy C programs).
How do I fix this? Stop storing the base class Thing in Tile and store Structure and Robot instead, with their own get/set functions. I know the fix. I know how to clean it up. But it would require additional checks and more code in Tile to handle it all properly. It's not really a headache to do, but for now this works, and I know how to fix it later when I've got other things fleshed out.
This is an example of "It does the job even though it's bad but I can fix it later and focus on other more important things."
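For what it's worth, the cleaner Tile described above might look roughly like this; the class shapes and accessor names are my assumptions for illustration, not OutpostHD's actual code:

```cpp
// Sketch of storing derived types directly instead of a base Thing
// pointer, so no casts are needed. Names here are hypothetical.
class Thing {};
class Structure : public Thing {};
class Robot : public Thing {};

class Tile {
public:
    void structure(Structure* s) { mStructure = s; }
    Structure* structure() const { return mStructure; }

    void robot(Robot* r) { mRobot = r; }
    Robot* robot() const { return mRobot; }

    bool empty() const { return !mStructure && !mRobot; }

private:
    // nullptr means "nothing of that kind here".
    Structure* mStructure = nullptr;
    Robot* mRobot = nullptr;
};
```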
Coding Savant as in someone who is naturally good at coding...
Really? -smh- You're taking things too literally. :P