Hmm, this is kind of neat, and potentially quite useful. I remember Eddy-B had to deal with the static memory issue for Renegades. I'll have to look over the code when I get some time.
I had considered this problem myself, but had come up with a different idea on how to address it. I was thinking some kind of stream object could be passed to a custom DLL function (if it existed), allowing the DLL to write free-form to the stream, as much data as it wanted. Similarly, a stream object could be passed to the DLL to load the data back. A custom section could be added to the saved game file format to store the stream data, and the passed stream object could ensure any read/write was restricted to that section of the file.

A possible implementation might be a memory stream object, so the data could be length-prefixed when flushed to disk using a small header for the data block. That should play nice if other expansions to the saved game file format are desired, though it may require more memory than is strictly needed in some cases.

There are alternatives. One is to assume the data block extends to the end of the file, but that could limit future format expansions. You could also place a marker at the end of the section, but that either requires special encoding rules to prevent a premature end of section, or you simply use it as a verification check: the stream could read past the end, or fall short of it, but then an error will (likely) occur when it tries to verify the end tag. You might also leave it up to the level to figure out how much data it should read, basically passing the responsibility for determining how much data to read/write on to the level read/write code.

Personally, I think the memory stream object would work best, and the amount of data that would need to be stored in it would likely be negligible compared to how much RAM computers have these days. Mind you, that was all just an idea. I never had any demo code.
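To make the length-prefixed memory stream part of that concrete, here's a rough sketch. All the names here (BlockHeader, MemoryStream, flush, the tag field) are made up for illustration; this is just one way the buffering and header could be arranged, not a real API.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical header for the custom save-file section
struct BlockHeader {
    uint32_t tag;    // identifies the custom DLL data section
    uint32_t length; // bytes of DLL data that follow the header
};

class MemoryStream {
    std::vector<uint8_t> buf;
public:
    // The DLL writes free-form data here, as much as it wants
    void write(const void* data, size_t len) {
        const uint8_t* p = static_cast<const uint8_t*>(data);
        buf.insert(buf.end(), p, p + len);
    }
    // Flush to a length-prefixed block, ready to append to the save file;
    // a reader can use the header to bound the section or skip it entirely
    std::vector<uint8_t> flush(uint32_t tag) const {
        BlockHeader h{tag, static_cast<uint32_t>(buf.size())};
        std::vector<uint8_t> out(sizeof h + buf.size());
        std::memcpy(out.data(), &h, sizeof h);
        std::memcpy(out.data() + sizeof h, buf.data(), buf.size());
        return out;
    }
};
```

Since the length is known before anything hits disk, later additions to the save format can follow the block without any special end-marker encoding.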
I also had an idea to collapse read and write code into a single serialization function, so you didn't need to write nearly identical code for both reading and writing. That technique seemed to apply more generally to save/load code of all kinds. I always felt a lot of stream implementations lost something by combining read and write abilities into the same object, then exposing two separate functions, one for reading and one for writing. When you think about it, there isn't much difference between a read and a write: the function parameters are the same, a length and a pointer to a buffer, and the only difference is the direction of the data copy. But the stream itself knows which direction it should go, and that can be handled by virtual function dispatch.

The only catch is that you need to perform memory allocation when reading, but not when writing. A stream can be queried for its direction though, so the allocation can be done just before the serialize call when it's a read stream. I've never quite understood where the concept of a "bidirectional" stream came from. The abstraction is nonsense, and very limiting when you consider what can be re-used if the read and write methods share the same virtual function table entry.
Idea (off the top of my head):
// Library/support code
// =============
#include <cstddef>
#include <fstream>

struct Stream {
    enum class Dir { Read, Write };
    virtual void serialize(size_t length, void* buffer) = 0;
    virtual Dir direction() const = 0;
    virtual ~Stream() {}
};

class StreamRead : public Stream {
    std::ifstream file;
public:
    StreamRead(const char* fileName) : file(fileName, std::ios::binary) {}
    void serialize(size_t length, void* buffer) override {
        file.read(static_cast<char*>(buffer), length);
    }
    Dir direction() const override { return Dir::Read; }
};

class StreamWrite : public Stream {
    std::ofstream file;
public:
    StreamWrite(const char* fileName) : file(fileName, std::ios::binary) {}
    void serialize(size_t length, void* buffer) override {
        file.write(static_cast<const char*>(buffer), length);
    }
    Dir direction() const override { return Dir::Write; }
};

// Client code
// =======
// Note: There are potentially lots of classes, and so potentially lots of functions like this
void SomeClass::serialize(Stream& stream) {
    // Serialize (read or write) some fixed-size struct
    stream.serialize(sizeof(header), &header);
    // If we are reading, allocate dynamic storage before filling it
    if (stream.direction() == Stream::Dir::Read) {
        someArray = new SomeObject[header.numSomeObject];
    }
    // Serialize (read or write) the dynamic array; use the element count,
    // since sizeof(someArray) would only give the size of the pointer
    stream.serialize(sizeof(SomeObject) * header.numSomeObject, someArray);
    // Serialize (read or write) a sub object
    someSubObject.serialize(stream); // This works recursively quite nicely
}

// To start the read/write process; note the direction is only chosen here
void SaveData(const char* fileName) {
    StreamWrite stream(fileName);
    rootObject.serialize(stream);
}
void LoadData(const char* fileName) {
    StreamRead stream(fileName);
    rootObject.serialize(stream);
}
That should save a lot of almost identical, and usually separate, read/write code across a whole collection of objects, provided the Stream interface supports a common serialize method for both reading and writing, with the direction determined by the concrete type of the stream. You only need to care about the direction for dynamic memory allocations when reading, and when you initially open the stream to start the serialization process. The bulk of the object read/write code shouldn't care about direction at all.