Author Topic: Adaptive AI  (Read 3023 times)

Offline DarekStar

  • Jr. Member
  • **
  • Posts: 79
Adaptive AI
« on: July 17, 2007, 04:24:31 PM »
I've played against C&C 3's adaptive AI and it's very fun. I've won only 1 in 100 games; it's hard, and I think it would be a nice feature for OP3. But the question remains: how hard is it to make an adaptive AI?


I'll give an example:

_________________
|......|.....X.....|........|
|......|............|........|
|......-----..-----........|
|...........|..|.............|
|............................|
|............................|
|---------.....---------|
|...........|...|...........|
|............................|
|--....----------....----|
|..|..|.............|..|....|
|............................|
|............................|
|..O.................O ....|
__________________

The map is Unfair Advantage.

I'm the X and the enemies are the O's.


I had a great defence of 20 anti-tank weapons and 16 anti-infantry units.

The enemy launched wave after wave to no avail, and then I stopped hearing from it. I learned through a scout ship that it had turned its attention to gathering resources and bottling me into that area, so I doubled my defences around the opening. About ten minutes later I was attacked at the opening, and I sent all my units to engage the attacking enemies, as they were massive: 20 units that were doing good damage. Then I heard from the computer :: HARVESTER UNDER ATTACK :: and looked to see that they had flown an even more massive armada in behind me. I fended it off but lost 7/10ths of my base in the attack, though I was able to maintain my defences and resource gathering.
GASP. Long paragraph; my English and typing teachers would have a fit at seeing that. :P
Anyway, I held for 2 hours and it did not do much more. Then, out of the blue, I was attacked over the walls again. I was ready and fended it off, only to come under attack from the opposite side less than a minute later. I fended that off too, only to be attacked yet again from the bottleneck and both walls, with a hidden special forces team coming in to wipe out my power, my refineries, and my CC as well...
I lost, and that instant I heard

"You have failed. Our colony is doomed." in the back of my head...


Advantages: a new depth of play, since you cannot learn the AI's path and strategy as it keeps adapting. That has only been seen once before.

Disadvantages: it may well mean a lot of coding, and it's extremely hard to play against, but it is fun.


Bah, rules. OK, uh:

The AI can choose its path: either focus on resources and survival, or on both attacking you and survival.

My brain hurts after recalling that match...
« Last Edit: July 17, 2007, 06:40:20 PM by DarekStar »
-Darek

Caution: this user refuses to spell check or even try to spell correctly unless posting a story.

So bear with him.

Offline Arklon

  • Administrator
  • Hero Member
  • *****
  • Posts: 1269
Adaptive AI
« Reply #1 on: July 17, 2007, 06:51:47 PM »
That doesn't sound like an "adaptive" AI to me. I was thinking more along the lines of a learning AI (using a "genetic algorithm").
« Last Edit: July 17, 2007, 06:52:01 PM by Arklon »

Offline Hooman

  • Administrator
  • Hero Member
  • *****
  • Posts: 4955
Adaptive AI
« Reply #2 on: July 17, 2007, 07:42:31 PM »
A learning AI isn't necessarily going to use a genetic algorithm. In the context of a learning computer game AI for an RTS, it's almost certainly not a genetic algorithm.

Genetic algorithms tend to be more of a topic of study for computer science researchers, and their theoretical notion of Artificial Intelligence really has very little to do with computer games. When they study "games", the games are usually simple and have perfect rules (chess, checkers, Go, etc.), not something as complex as an RTS. Or they study other abstract problems that likely aren't most people's idea of fun. The answers in that field of research often take many hours to compute. Many of the AI techniques you hear about from that field don't relate very well to computer game AIs.

Quite often those techniques don't apply because of how long they take to compute an answer. You don't want to wait a few hours between frames, nor do you want the computer to sit there for an hour while you play before it makes its next move.

The other main problem is that the simple rules of the "games" they study often lead to good fitness functions, which are needed to train those learning methods. A typical RTS has much too fuzzy a notion of what a good move is, or what the objective even is (which may of course change from level to level), so programming a fitness function is pretty much an impossible task. Without one, those traditional AI techniques are essentially useless and unable to learn. (They start off in a fairly useless state, and if they can't learn and converge on a better solution, they'll remain useless.)

I would also point out that even with a good fitness function (one that hopefully doesn't take too long to compute), the algorithms typically need many, many iterations to converge on an acceptable solution; that is, an acceptable learned state, never mind the output for a given situation, which may also be slow to compute. Even if the fitness function were fast to compute, the number of times the algorithm must iterate to produce an acceptable solution would make it dead slow.
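To give a feel for the iteration counts involved, here is a minimal sketch in Python (a made-up toy problem, nothing OP3-specific): a tiny genetic algorithm evolving a 32-bit string to match a hidden target. Even with a fitness function this cheap and well-defined, which is exactly what an RTS does not give you, it still takes a good number of generations to converge.

Code:
import random

# Toy problem: evolve a 32-bit string to match a hidden target.
TARGET = [random.randint(0, 1) for _ in range(32)]

def fitness(genome):
    # Count matching bits; a real RTS has no clean measure like this.
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=20, mutation_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(32)] for _ in range(pop_size)]
    generation = 0
    while True:
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == 32:
            return generation
        # Keep the best half, refill with mutated crossovers of survivors.
        survivors = pop[:pop_size // 2]
        while len(survivors) < pop_size:
            a, b = random.sample(survivors[:pop_size // 2], 2)
            cut = random.randrange(32)
            child = [1 - g if random.random() < mutation_rate else g
                     for g in a[:cut] + b[cut:]]
            survivors.append(child)
        pop = survivors
        generation += 1

print("generations to converge:", evolve())

Run it a few times: even this trivial problem usually takes dozens of generations, each one evaluating the whole population, and the fitness function here costs essentially nothing to compute.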



Although it is possible to write a good RTS AI, I don't really expect anyone here to have the skill or knowledge to create the kind of AI you're suggesting.  

Offline Brazilian Fan

  • Sr. Member
  • ****
  • Posts: 302
Adaptive AI
« Reply #3 on: July 18, 2007, 11:01:24 AM »
How about using an ANN? It could be trained with a competition-like system (much like an evolutionary system).
 
« Last Edit: July 18, 2007, 11:39:17 AM by Freeza-CII »

Offline Hooman

  • Administrator
  • Hero Member
  • *****
  • Posts: 4955
Adaptive AI
« Reply #4 on: July 18, 2007, 08:00:26 PM »
An artificial neural net is also one of those more theoretical tools. I doubt it would be of much use for an RTS, for many of the same reasons I stated in the post above. Granted, it will probably apply to certain tasks a little more easily than genetic algorithms will. A small one that requires little or no training might work for certain things, but there are still likely better techniques.


Keep in mind that neural nets and genetic algorithms are just approximating techniques. If you have some unknown function, but do know certain properties of it, then you can try to converge on it through some kind of iterative technique, where the fitness function is some measure of the known properties.

In the case of chess, the unknown function might be one that finds the best next move given the current state of the board. The input is the current state of the board, the output is the move to take, and the fitness function might be some measure of what you would stand to lose by making a given move. I.e., if the "NextMove" function outputs a move that puts your opponent in checkmate, then it's a good function and shouldn't be changed. If the "NextMove" function lets your opponent put you in checkmate, then it's a very bad function and needs to be altered somehow so it doesn't produce that bad output.

Keep in mind that you are trying to achieve a global approximation, mapping all possible inputs to good outputs, using only some measure of the goodness of individual function outputs. This is a big problem for functions with discrete inputs and outputs (such as in the above example). Suppose we know the function needs to be changed to avoid a bad output at a given point. How do we change the function to improve on that point (that input) without disrupting (too much) the other points (the other inputs) which already give good output? Also, what if the input space is very large (like the configurations of a chess board), so that it's essentially impossible to test your function on every input? How would you converge on a global solution using only a few input points? It helps to have some idea of "closeness" for the inputs, so that an adjustment to the function that improves the output for a given input should hopefully also improve the output for nearby inputs that you might never test on.

Ideally what you want is output that varies continuously with the input; i.e., if the input changes only slightly, then the output should change only slightly. This doesn't seem to be the case with the chess example: if you move one piece slightly (or remove it), that can completely change the "goodness" of a given next move.

Basically, those techniques (neural nets and genetic algorithms) don't really apply very well to the problem at hand.

Where those techniques might come in handy is the following setting. Suppose you have an electrical circuit where you push a button, and it makes a light blink unsteadily for a short period of time and then stop. We might want to know the voltage on the wire (the output) at a given time after the button has been pressed (the input value to the function). We don't know how this voltage varies; it is the unknown function we wish to approximate. (Maybe it varies like a sine wave, but we don't know that at this point.) Now suppose we can take some sort of measurement at specific points in time. There are two settings I can see.

One is that we can measure the voltage directly (with a voltmeter). This, however, is a little trivial in that there is no real "learning" process. We simply record the (input, output) pairs and then interpolate between the measured points, perhaps with lines, or perhaps with a cubic interpolation to smooth things out a bit. If we have enough data points we can create an approximation to the original function. Here the interpolation uses the continuity concept, in that nearby inputs should produce nearby outputs.

The other, more interesting case, which requires a bit of learning, is the following. We are now only allowed to measure the voltage on the line indirectly, through the intensity of the light. We do not know the exact voltage given the intensity, but we are allowed to drive a wire at any voltage we want and measure its light intensity for comparison purposes (perhaps with a second apparatus, so we can run the two simultaneously). We start with a guess, say a function with constant output (a constant voltage level). We run the comparison, and if at a given point our light is dimmer than the one we are approximating, we adjust the voltage of our guess up a little at that point; if it's brighter, we adjust it down a little; and if they are the same, we don't adjust at all. (If there is a big difference, maybe we vary the voltage more than if there was a small one.) We iterate this a few times (pushing the start button and measuring). Maybe we only get one point per run, maybe we get a random point in time, or maybe we get some fixed sequence of points each run, but we do eventually get more data from running it more times. To extend these points to a function over the whole interval, we again interpolate between them. Eventually our guess function should start to resemble the original function.
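Here's a minimal sketch of that second setting in Python. The circuit, the brightness law, and all the constants are invented for illustration; the point is just the adjust-up/adjust-down loop driven by a relative comparison.

Code:
import math
import random

def true_voltage(t):
    # The hidden circuit we pretend we can't measure directly.
    return 2.5 + 2.0 * math.sin(4.0 * t)

def intensity(v):
    # Observed brightness: monotone in voltage, but the exact law is
    # unknown to the learner; only dimmer/brighter comparisons are used.
    return v ** 3

SAMPLES = 50
times = [i / SAMPLES for i in range(SAMPLES + 1)]
guess = [2.5] * len(times)   # start with a constant voltage guess
STEP = 0.05

for run in range(5000):
    i = random.randrange(len(times))          # one measured point per run
    ref = intensity(true_voltage(times[i]))   # the light we're imitating
    ours = intensity(guess[i])                # our comparison apparatus
    if ours < ref:
        guess[i] += STEP     # our light is dimmer: nudge the voltage up
    elif ours > ref:
        guess[i] -= STEP     # brighter: nudge it down

# See how closely the guess function resembles the hidden one.
err = max(abs(g - true_voltage(t)) for g, t in zip(guess, times))
print("worst-case error after training:", round(err, 3))

Between the sampled times you would interpolate, as described above; and note that the reference voltage is never observed directly, only which of the two lights is brighter.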

We could try doing the above example with a neural net. In this case, the entire neural net IS the approximating function. It would have one input, the time, and one output, the voltage. We train the net by iterating the blinking cycle a few times. The fitness function is the relative comparison of the light intensities. We use this comparison between the output of the net (our current guess function) and the desired output from the reference system to train the net (using some training algorithm that adjusts the weights on the internal nodes of the net). There are a number of training algorithms, which basically correspond to how the weights are adjusted internally given the output of the fitness function. The algorithm selected will affect the convergence speed, the stability (it might not ever converge), and how close to "optimal" the results are (how closely our guess function imitates the reference system).
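A sketch of that, continuing the same toy problem: a tiny one-input, one-output net nudged by nothing more than the dimmer/brighter comparison. The network size, learning rate, and iteration count are arbitrary choices here, and real training code would be more careful about all of them.

Code:
import math
import random

def true_voltage(t):                  # the hidden reference circuit
    return 2.5 + 2.0 * math.sin(4.0 * t)

def intensity(v):                     # brightness, monotone in voltage
    return v ** 3

H, LR = 16, 0.02
w1 = [random.uniform(-1, 1) for _ in range(H)]   # input -> hidden weights
b1 = [random.uniform(-1, 1) for _ in range(H)]   # hidden biases
w2 = [random.uniform(-1, 1) for _ in range(H)]   # hidden -> output weights
b2 = 0.0

def net(t):
    hidden = [math.tanh(w1[j] * t + b1[j]) for j in range(H)]
    return hidden, b2 + sum(w2[j] * h for j, h in enumerate(hidden))

for step in range(50000):
    t = random.random()
    hidden, out = net(t)
    # The fitness signal: is our light too bright (+1) or too dim (-1)?
    diff = intensity(out) - intensity(true_voltage(t))
    direction = (diff > 0) - (diff < 0)
    for j in range(H):
        grad_h = direction * w2[j] * (1 - hidden[j] ** 2)  # through tanh
        w2[j] -= LR * direction * hidden[j]
        w1[j] -= LR * grad_h * t
        b1[j] -= LR * grad_h
    b2 -= LR * direction

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, "net:", round(net(t)[1], 2), "actual:", round(true_voltage(t), 2))

Note how many iterations even this one-input net takes, and how sensitive the outcome is to the learning rate; that's the convergence and stability issue in miniature.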

One thing to note about the above example is that we actually had the reference system to compare with. We don't actually need the reference system; we only need the fitness function. It's easy to develop a fitness function given a reference system, but the fitness function could be developed another way. Maybe instead of trying to imitate the voltage on a wire, we've come to realize that blinking lights may increase the happiness of the local lab rat.  :o So instead we'd like to make the light blink in such a way as to maximize the happiness of our lab rat.  :P To do this, we've implanted electrodes into the brain of the lab rat to measure the level of certain chemicals associated with happiness.  :whistle: Our fitness function is then the level of chemicals found in the rat's brain.  (thumbsup) Although probably a more interesting and irregular function, it would likely be very hard to converge on, particularly since the "reference function" might not be the same from run to run. After all, there are other inputs to the rat that will affect its happiness, and these have not been captured by the training of the neural net. So in this case we have another problem: we will probably fail to achieve stability. Stability is unlikely here since we aren't capturing all the input that affects the behavior we are modelling, although even in cases where we are monitoring all relevant input and the reference function isn't changing from run to run, it's still possible to fail to achieve stability, particularly if the wrong training algorithm is used.

A better example might be to vary the voltage on a wire controlling a motor to get a vehicle up to a certain speed while using as little power as possible. Maybe constant acceleration isn't the best. What effect does "flooring it" have?
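A sketch of how that might be set up (the motor model and all the numbers are invented): hill-climb a voltage profile, where the fitness function rewards hitting the target speed and penalizes energy use.

Code:
import random

STEPS, TARGET_SPEED = 20, 10.0

def simulate(profile):
    # Invented motor model: speed lags toward the applied voltage,
    # and energy spent grows with the square of the voltage.
    speed, energy = 0.0, 0.0
    for v in profile:
        speed += 0.3 * (v - speed)
        energy += v * v
    return speed, energy

def fitness(profile):
    speed, energy = simulate(profile)
    # Heavy penalty for missing the target speed, light penalty for energy.
    return -abs(speed - TARGET_SPEED) * 100.0 - energy * 0.1

best = [5.0] * STEPS           # start with a constant-voltage profile
for _ in range(20000):
    cand = list(best)
    i = random.randrange(STEPS)
    cand[i] = max(0.0, cand[i] + random.uniform(-1.0, 1.0))
    if fitness(cand) > fitness(best):
        best = cand

speed, energy = simulate(best)
print("speed:", round(speed, 2), "energy:", round(energy, 1))
print("profile:", [round(v, 1) for v in best])

Whether the winning profile "floors it" early or ramps up gently falls out of the fitness function's trade-off between the speed penalty and the energy penalty.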

The examples given so far have been simple functions with one input and one output. They could be modelled as a 2D graph of a function, with the aim of making a guess function look like the reference function. We could of course add more inputs; typically neural nets have many inputs. This just corresponds to a higher-dimensional graph, like a 3D graph of a 2-input function. The same concept still applies: the techniques are basically just trying to approximate some unknown mathematical function.

Genetic algorithms seem even worse, in that they can be much more difficult to get to converge and to keep stable. At heart they're a pretty stupid "let's guess a random function and see" type of method. The basic idea is very similar to trying to sort a list by randomly choosing two elements, swapping them, and then checking if the list is sorted; if it's not, you go back and choose another two random elements. You might never get where you want to go, and you only really get there by getting lucky. Your chances aren't very good in most cases, so expect many, many iterations. Learning by guess-and-check can be very slow, especially when there are many things to check. Granted, I'm sure there are techniques people have used to try to improve on the convergence aspect, but in general it seems to be very poor for this method. It does offer a much greater variety of output guess functions, though, which may have properties you won't get from a neural net function, or at least not from one of any reasonable size.
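That sorting analogy takes only a few lines of Python to demonstrate, and shows how quickly guess-and-check blows up as the problem grows:

Code:
import random

def random_swap_sort(items):
    # Swap two random elements, check if sorted, repeat. Count attempts.
    attempts = 0
    while items != sorted(items):
        i = random.randrange(len(items))
        j = random.randrange(len(items))
        items[i], items[j] = items[j], items[i]
        attempts += 1
    return attempts

for n in (4, 5, 6, 7):
    trials = [random_swap_sort(random.sample(range(n), n)) for _ in range(5)]
    print(n, "elements:", trials, "attempts")

The attempt counts grow roughly like the number of possible orderings (n factorial), which is the same kind of explosion a naive genetic search faces as the space of candidate functions grows.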


In short, don't expect to code a game AI just by plugging in a neural net or genetic algorithm. These aren't magic methods that can do anything; they're also pretty much impossible to debug if something isn't quite right. And if you're looking for a learning AI, you generally want it to learn after the player has done something bad to your AI once, maybe 3 or 4 times, but certainly not 100000 times, which is generally the order of how many iterations some of these techniques take to learn a new behavior. If you're going to use something like a neural net, you probably want to hardcode it to a specific manner of operation, or if you insist on training it, train it beforehand and then run it statically in the game. It's fairly unlikely you're going to train one of those things during a game in any useful way.
 

Offline Freeza-CII

  • Administrator
  • Hero Member
  • *****
  • Posts: 2308
Adaptive AI
« Reply #5 on: July 19, 2007, 11:52:00 AM »
You guys have to remember that those games have financial backing and whole teams of coders. Genesis has a handful of people. Do not expect an impressive AI like in those games.

Offline Hooman

  • Administrator
  • Hero Member
  • *****
  • Posts: 4955
Adaptive AI
« Reply #6 on: July 19, 2007, 06:22:56 PM »
Yeah, with just a few people with little experience and no money, who are also busy with other things like school or a job that actually pays, we'd be lucky just to expect a game, never mind a game with AI, and especially not one with a good AI.  

Offline Stormy

  • Hero Member
  • *****
  • Posts: 678
    • http://www.op3game.net
Adaptive AI
« Reply #7 on: July 19, 2007, 11:35:41 PM »
Quote
Yeah, with just a few people with little experience and no money, who are also busy with other things like school or a job that actually pays, we'd be lucky just to expect a game, never mind a game with AI, and especially not one with a good AI.
Indeed.

I should be posting more often once I'm back from vacation this week. I've also been animating for some people and figuring out how things will work as we move into the engine. (at least, that's my goal to figure out  :P )

Til then,

stormy
`·.¸¸.·´¯`·.¸¸.·´¯`·.¸¸.·´¯`·.¸¸.·´¯`·.¸¸.·´¯`·.¸¸.·´¯`·.¸¸.·
3D artist in Blender, MS3D, and Terragen.
Trying to get good with Scene composition and lighting.

Offline Marukasu

  • Full Member
  • ***
  • Posts: 110
Adaptive AI
« Reply #8 on: November 22, 2007, 09:52:54 AM »
On the topic of AI:
I have always liked the idea of giving a computer traits like a human.

Now, that sounds crazy, but I've done something like this before.

Each trait on the list would have a value sliding between one extreme and the other.

Trait list

0 - - - - - - - - - - 100 (how long the AI waits before its first attack)
Rusher-----------Builder (doesn't attack early in the game)


0 - - - - - - - - - - 100 (chance of an event (or other trigger) changing the AI)
Traditionalist-----Progressive (determines the computer's rate of personality change in response to events)


100 - - - - - - - - - - - - - - 0 (distance from base that building is allowed)
Expansionist------------(I'm not too sure what the opposite is called, but it's where you build up your base in a space-saving manner and destroy unused or out-of-date buildings. In Outpost 2 it would be like replacing a Residence with a Reinforced Residence.)


0 - - - - - - - - - - 100 (additional units required before the AI even considers attacking)
Squad------------Battalion (late-game average attack force size)


0 - - - - - - - - - - 100 (chance, once the required units are ready, of sending them off (also the percentage of ore not allocated to defense))
Defender---------Attacker (if Defender and Battalion are both high, then the force builds both a good defence and a large attack)


One way to implement this: while the game is loading, the AI is set up and a large number of building locations are placed (future potential locations). Then, as the AI's "personality" changes, the "goal" build sites are switched on and off.

Example: if the AI started off with a defensive approach, and either it was attacked or it simply changed (became more progressive), then the building goals for walls would be switched off (or at least some or most of them) and replaced with a building goal of a Vehicle Factory (or something; maybe just switched off).
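A minimal sketch of that idea in Python; every name here (Personality, BuildGoal, the thresholds) is made up for illustration, not anything from an actual engine.

Code:
import random

class Personality:
    # Each trait slides between 0 and 100, as in the list above.
    def __init__(self):
        self.builder = random.randint(0, 100)      # Rusher(0) .. Builder(100)
        self.progressive = random.randint(0, 100)  # Traditionalist .. Progressive
        self.attacker = random.randint(0, 100)     # Defender .. Attacker

    def react_to_event(self, severity):
        # Progressive AIs shift their personality more in response to events.
        shift = severity * self.progressive / 100.0
        self.attacker = min(100.0, self.attacker + shift)

class BuildGoal:
    # Potential build sites placed at load time; toggled by personality.
    def __init__(self, kind, location):
        self.kind = kind
        self.location = location
        self.active = False

def update_goals(ai, goals):
    for g in goals:
        if g.kind == "wall":
            g.active = ai.attacker < 50     # defenders keep wall goals on
        elif g.kind == "vehicle_factory":
            g.active = ai.attacker >= 50    # attackers build factories instead

ai = Personality()
goals = [BuildGoal("wall", (10, 4)), BuildGoal("vehicle_factory", (12, 8))]
update_goals(ai, goals)
print([(g.kind, g.active) for g in goals])

ai.react_to_event(severity=40)   # e.g. our base just got attacked
update_goals(ai, goals)
print([(g.kind, g.active) for g in goals])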

These are just a few things that could be built into a 2D AI. 3D would add half again as many starting-location variables, and it would also increase the lines of code needed to prevent overlapping by a fairly large amount. Also, one of the far more challenging parts of making an AI is having it know what is defendable and what isn't (and change its approach accordingly).
« Last Edit: November 23, 2007, 10:53:45 PM by Marukasu »