Alright - here goes. None of the below is final, working code; the whole idea of having a learning system is purely theoretical right now, and may or may not work. Here's my take on things.
Possibility number 1) We could create 'scripted' AI just like that of the triggers, only have these scripted systems actually call the AI functions instead of just handing out an arbitrary research topic at an arbitrary time, etc.
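Just to make that idea concrete, here's a tiny sketch, assuming a hypothetical trigger callback and a hypothetical decision function. Neither name comes from any real API; they're placeholders only.

```cpp
// Minimal sketch of possibility 1: a trigger that asks an AI decision
// function what to research, instead of issuing a hard-coded order at a
// hard-coded time. All names here are invented for illustration.
#include <iostream>

// Hypothetical AI decision function: picks a topic from game state rather
// than returning a value fixed at map-design time.
int ChooseBestResearchTopic(int currentTick)
{
    // Placeholder logic: alternate between two made-up topic IDs.
    return (currentTick % 2 == 0) ? 101 : 102;
}

// Hypothetical trigger callback, fired by the game's trigger system.
void OnResearchTrigger(int currentTick)
{
    int topic = ChooseBestResearchTopic(currentTick);
    std::cout << "Trigger fired: start research on topic " << topic << '\n';
}

int main()
{
    // Simulate the trigger firing a few times.
    for (int tick = 0; tick < 4; ++tick)
        OnResearchTrigger(tick);
}
```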
Possibility number 2) We could create a hard-coded AI setup that consists of three sets of loops (a rough code skeleton follows the three outlines below).
Loop set 1 (named HomeBase for future reference): Outer loop
{
Builds population (inner loop 1, includes research towards pop. building sciences.)
Builds morale (inner loop 2, includes research towards morale building sciences.)
}
Loop set 2 (named buildAttackDefense for future reference): Outer loop
{
Researches weapons (inner loop 1, usually runs simultaneously with inner loop 2 in order to create a weapons base.)
Builds weapons (inner loop 2, builds a crapload of the best weapons available to the AI... while also building walls and defenses and such)
}
Loop set 3 (named searchAndDestroy for future reference): Outer loop
{
Scout seeking (inner loop 1, runs until it finds the nearest enemy colony, searching with scouts just to add insult to injury.)
Complete and utter destruction (inner loop 2, launches a full-scale attack using all mobile weapon units on the nearest base it just found. Does not cease until base=vaporized.)
}
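As promised, a very rough skeleton of what those three loop sets might look like. The names HomeBase, buildAttackDefense, and searchAndDestroy come from the outlines above; everything else (the GameState flags and the placeholder loop bodies) is invented, since the real versions would call the game's actual build/research/attack functions instead.

```cpp
// Rough skeleton of possibility 2: three loop sets, each with two inner
// loops. All conditions and actions are placeholders.
#include <iostream>

struct GameState {
    bool populationSatisfied = false;
    bool moraleSatisfied     = false;
    bool weaponsResearched   = false;
    bool weaponsBuilt        = false;
    bool enemyBaseFound      = false;
    bool enemyBaseDestroyed  = false;
};

// Loop set 1: build up the home colony.
void HomeBase(GameState& s)
{
    while (!s.populationSatisfied)   // inner loop 1: population + related research
        s.populationSatisfied = true;    // placeholder for real build/research calls
    while (!s.moraleSatisfied)       // inner loop 2: morale + related research
        s.moraleSatisfied = true;
}

// Loop set 2: research and mass-produce weapons and defenses.
void buildAttackDefense(GameState& s)
{
    while (!s.weaponsResearched)     // inner loop 1: weapons research
        s.weaponsResearched = true;
    while (!s.weaponsBuilt)          // inner loop 2: build weapons, walls, defenses
        s.weaponsBuilt = true;
}

// Loop set 3: find the nearest enemy base and wipe it out.
void searchAndDestroy(GameState& s)
{
    while (!s.enemyBaseFound)        // inner loop 1: scout until a base is found
        s.enemyBaseFound = true;
    while (!s.enemyBaseDestroyed)    // inner loop 2: attack until it is gone
        s.enemyBaseDestroyed = true;
}

int main()
{
    GameState state;
    HomeBase(state);
    buildAttackDefense(state);
    searchAndDestroy(state);
    std::cout << "All three loop sets completed.\n";
}
```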
Of the above three sets of loops, the sub-loops all run entangled with each other. Example: you can have a standard lab researching Environmental Psychology at the same time an Adv. Lab is researching Mobile Weap. Platforms, assuming you have enough scientists. (Note: the above AI is NOT linear, meaning different loops can run at different times.) Another example: say it has finished the pop.-building loop and someone destroys enough residences to push demand above 100%; the loop runs some more and rebuilds the destroyed buildings. (This can apply to any building/research loop.)
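To show that non-linear idea in code, here's a minimal sketch of a loop that stays quiet until its condition slips, then wakes up and runs again. The demand percentages and the tick loop are made up purely for illustration.

```cpp
// Sketch of a re-entrant loop: it does nothing while residence demand is
// fine, and starts rebuilding again if buildings get destroyed.
#include <iostream>

struct Colony {
    int residenceDemandPercent = 80;   // under 100 means supply is adequate
};

// Re-entrant population loop: only acts when demand is over 100%.
void runPopulationLoop(Colony& c)
{
    while (c.residenceDemandPercent > 100) {
        std::cout << "Rebuilding a residence (demand "
                  << c.residenceDemandPercent << "%)\n";
        c.residenceDemandPercent -= 20;   // placeholder for a real build order
    }
}

int main()
{
    Colony colony;

    for (int tick = 0; tick < 3; ++tick) {
        if (tick == 1)
            colony.residenceDemandPercent = 140;   // simulate residences being destroyed

        runPopulationLoop(colony);                 // other loop sets would run here too
    }
}
```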
Possibility number 3) Build all of the different AI loops as described above, but rather than hard-coding the links between the loops, make the links dynamic, so that if one strategy fails, a new order of links is created and thus a new strategy is attempted. These link setups will differ from game type to game type... and, until the optimal strategy is found, from game to game. Use two data structures to handle all of these links between the loops: a hash function to store and retrieve the strategies and their links, and a queue the links get lined up in as they're retrieved, so the computer can then run in linear mode, following the strategy it has just loaded. How it's going to try different links for different strategies is still up for debate... and that's the reason I'm posting.
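Here's a minimal sketch of those two data structures, assuming a hash table keyed by a game-type/strategy name and a queue the chosen links get lined up in. The strategy keys and loop IDs are placeholders.

```cpp
// Sketch of possibility 3's storage: a hash map of strategies (ordered
// lists of loop links) and a queue the chosen strategy is copied into so
// it can be run linearly.
#include <iostream>
#include <queue>
#include <string>
#include <unordered_map>
#include <vector>

enum class LoopLink { HomeBase, BuildAttackDefense, SearchAndDestroy };

int main()
{
    // Hash table: strategy key -> ordered links between loop sets.
    std::unordered_map<std::string, std::vector<LoopLink>> strategies = {
        {"land_rush", {LoopLink::BuildAttackDefense, LoopLink::SearchAndDestroy,
                       LoopLink::HomeBase}},
        {"turtle",    {LoopLink::HomeBase, LoopLink::BuildAttackDefense,
                       LoopLink::SearchAndDestroy}},
    };

    // Retrieve one strategy and line its links up in a queue.
    std::queue<LoopLink> plan;
    for (LoopLink link : strategies.at("turtle"))
        plan.push(link);

    // The AI now runs linearly through the queue it just loaded.
    while (!plan.empty()) {
        std::cout << "Running loop set " << static_cast<int>(plan.front()) << '\n';
        plan.pop();   // a real AI would call the matching loop set here
    }
}
```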
Would the links all be pre-programmed into the hash table and pulled out at random until the optimal strategy is formed? And say that 'optimal strategy' is beaten... should it be possible to figure out where the strategy failed and modify it accordingly using pattern-recognition code? Again, this third possibility is all theoretical and may not even be possible given a relatively small time constraint.
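If the answer to that first question were "yes", the simplest version might look like this: pull a pre-programmed strategy out at random each game, record how it did, and keep the best score as the "optimal" one. The random win/loss below is obviously a stand-in for an actual game result (and for any real pattern-recognition code).

```cpp
// Sketch of random strategy trials with a running win/loss record.
#include <iostream>
#include <random>
#include <string>
#include <vector>

struct StrategyRecord {
    std::string name;
    int wins   = 0;
    int losses = 0;
};

int main()
{
    std::vector<StrategyRecord> records = {{"land_rush"}, {"turtle"}};
    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<std::size_t> pick(0, records.size() - 1);

    // Simulate a few games: choose a strategy at random, record the outcome.
    for (int game = 0; game < 5; ++game) {
        StrategyRecord& chosen = records[pick(rng)];
        bool won = (rng() % 2 == 0);             // placeholder for a real match result
        (won ? chosen.wins : chosen.losses)++;
    }

    for (const auto& r : records)
        std::cout << r.name << ": " << r.wins << " wins, "
                  << r.losses << " losses\n";
}
```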
The upside of this possibility is that, in order to have a working, intelligent AI in your map, you won't have to spend two months developing it. You load a hash table, probably from an encoded *.ini file, import the AILoops class into your project, and, depending on what type of map you're making, pass a certain set of variables into the hashing function; out pop your links, directly into a queue, which is then run through, grabbing functions out of the AILoops class and running them.
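From the map maker's side, the workflow might look something like this. The AILoops class and the *.ini file come from the description above, but every method shown (loadStrategies, linksForMapType, run) is invented just to illustrate the flow, not an existing API.

```cpp
// Sketch of the intended map-maker workflow for possibility 3.
#include <iostream>
#include <queue>
#include <string>

class AILoops {
public:
    // Would decode the *.ini file and fill the internal hash table.
    void loadStrategies(const std::string& iniPath)
    {
        std::cout << "Loaded strategies from " << iniPath << '\n';
    }

    // Would hash the map-type variables and return the matching links.
    std::queue<std::string> linksForMapType(const std::string& mapType)
    {
        std::queue<std::string> links;
        links.push("HomeBase");
        links.push("buildAttackDefense");
        links.push("searchAndDestroy");
        std::cout << "Selected links for map type '" << mapType << "'\n";
        return links;
    }

    // Would look up and execute the loop-set function named by the link.
    void run(const std::string& link) { std::cout << "Running " << link << '\n'; }
};

int main()
{
    AILoops ai;
    ai.loadStrategies("strategies.ini");

    std::queue<std::string> plan = ai.linksForMapType("last_one_standing");
    while (!plan.empty()) {
        ai.run(plan.front());
        plan.pop();
    }
}
```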
Any comments, questions, answers, or any other type of response at all is appreciated.
(thumbsup) I'm dying for input on these ideas.