Wednesday, March 6, 2013

Search in Game AI

In my experience search has had very limited use in game AI.  For the most part it's used for navigation and for the rare special-case action/point selection.  This strikes me as not nearly enough.  As AI developers, how are we supposed to escape the local maximum that is the current state of game AI if we throw one of our most powerful tools out the window?

I think we should flip the situation entirely.  Our default setup should be built around search.  Everything in the AI should be hooked up with search in mind.  Action selection, animation selection, dialog selection, target selection, and so on should all be set up as searches. 

I can already hear my colleagues screaming.  "Search is too slow!"  "It uses too much memory!"  "Don't search if you already know the path!"  And of course all of those statements are perfectly correct.  My contention is that even with this in mind we should still be using search.  Essentially I think if we develop the 'craft' of using search we will find that we can mitigate many of the problems of search while gaining most of the benefits.

When I say the craft of search I'm trying to describe how you use and implement search.  If you implement search naively, its time/space behavior will be prohibitive.  Instead you need to architect your search to work in a real-time environment.  To do this we need to do four things.
  1. Pick the right algorithm
  2. Constrain the search space
  3. Expand the solution space
  4. Guide the search
Picking the right search algorithm is the most important step.  Can you beam it?  Is A* a possibility?  Can you use means-ends analysis?  Your particular situation will require careful thought about what type of search you need. 

Constraining the search space means limiting the number of possible states we consider.  How much you constrain the space depends on your problem.   The craft/trick/magic is constraining your search space without excluding your solution from the remaining space.  In the general case this can't be done.  In most game situations I think it absolutely can. 

Expanding the solution space means accepting more possible solutions.  This has the same effect as constraining the search space.  So instead of looking for one specific end state, you consider a range of acceptable states, or simply the best available.

Guiding the search will speed it up in the general case.  A heuristic or hard-coded expert knowledge is perfect for most game situations.  If before you could explicitly script the AI to the desired action, now you can guide the search to the exact same result.
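To make the four steps concrete, here's a minimal, hypothetical sketch (mine, not anything from a shipped game): a beam search over a toy 1D position space, where the beam width constrains the search space, the tolerance expands the solution space, and a distance heuristic guides expansion.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

struct node { int pos; int steps; };

// Beam search over a toy 1D position space.
int beam_search(int start, int goal, int beam_width, int tolerance)
{
    std::vector<node> beam = { { start, 0 } };
    for (int depth = 0; depth < 64; ++depth)
    {
        std::vector<node> candidates;
        for (const node& n : beam)
        {
            // expanded solution space: anything within tolerance is a goal
            if (std::abs(n.pos - goal) <= tolerance)
                return n.steps;
            for (int action : { 1, 2, -1 }) // successor states
                candidates.push_back({ n.pos + action, n.steps + 1 });
        }
        // guide: sort candidates by the distance-to-goal heuristic
        std::sort(candidates.begin(), candidates.end(),
                  [goal](const node& a, const node& b)
                  { return std::abs(a.pos - goal) < std::abs(b.pos - goal); });
        // constrain: keep only the best beam_width states
        if ((int)candidates.size() > beam_width)
            candidates.resize(beam_width);
        beam = candidates;
    }
    return -1; // depth bound hit without a solution
}
```

The knobs map directly onto the list above: shrink beam_width to pay less per frame, grow tolerance to accept more end states, sharpen the heuristic to reach the goal in fewer expansions.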

Given there is some overhead to using search that would not exist in a non-search setup, you might ask what we gain for all this hard work.  You might also be thinking "I'm not a dumbass, I know all this already and you're still wrong."  In either case, let me explain what I think the benefits are, or could be, of taking a more search-oriented approach.

  • AI would be smarter
  • AI would be more robust
  • AI would be easier to develop

The AI would be smarter because it would consider more of the state space than you could ever hope to explicitly encode.  The best example I can think of is considering future actions.  It is hard enough to encode all the responses to the current state without also having to encode responses to future states.  Searching trades some runtime cycles exploring this space to find better, smarter solutions.

The AI would be more robust because it could find solutions to situations we have not explicitly encoded.  We might have a rule for jumping on turtle heads, but what if those turtles have wings?  Searching would allow us to find solutions to problems we haven't planned for.  It would also enable us to escape local minima.  The AI could get stuck in a valley, never able to escape, because the tuned thresholds don't account for the fact that turning around just hits the same threshold on the other side.  Searching lets us get past this issue into the state space of correct solutions.

Search is easier to develop because it requires less expert knowledge from the AI developer.  You don't have to add as many tuning parameters, and better yet you don't have to tune them.  Instead of the hand-authored heuristic of "jump at velocity X, height Y, and time T", we can search for the best point to jump without the AI programmer ever spending time devising strategies and tuning values for when to jump.
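As an illustration of that trade, here's a hypothetical sketch (the names and the toy ballistic physics are mine, invented for this example): rather than hand-tuning a "jump here" threshold, brute-force search the candidate takeoff points and keep the latest one that still clears the gap.

```cpp
#include <cassert>
#include <cmath>

// Does jumping at takeoff_x clear a gap spanning [gap_start, gap_end]?
// Toy ballistic model: flight time until landing back at takeoff height.
bool clears_gap(float takeoff_x, float gap_start, float gap_end,
                float run_speed, float jump_speed, float gravity)
{
    float air_time = 2.0f * jump_speed / gravity;
    float landing_x = takeoff_x + run_speed * air_time;
    return takeoff_x <= gap_start && landing_x >= gap_end;
}

// Brute-force search over candidate takeoff points: return the latest
// takeoff that still clears the gap, or -1 if none does.
float search_takeoff(float gap_start, float gap_end,
                     float run_speed, float jump_speed, float gravity)
{
    const int samples = 50;
    float best = -1.0f;
    for (int i = 0; i <= samples; ++i)
    {
        float x = gap_start * (float)i / (float)samples;
        if (clears_gap(x, gap_start, gap_end, run_speed, jump_speed, gravity))
            best = x; // keep the latest viable takeoff point
    }
    return best;
}
```

No tuned jump threshold exists anywhere in that code; change the physics model and the answer updates for free, which is exactly the point.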

There you go.  That's my manifesto on why we should be using search in games.  I've done hacky one-off search stuff but never anything on the scale I'm proposing.  I do have the architecture ready to go, though, if I ever get something close to a "sure, let's try it out on project X."

As a follow-up, I found this article today.  I'm not totally alone.

https://www.skatgame.net/mburo/ps/aiide12-combat.pdf

Sunday, March 3, 2013

Game AI Behavior

I'm just going to list out things I think will improve game AI from a behavior standpoint, ignoring all the nitty-gritty details.  For example, I don't think it matters how your AI decides what to do.  Some mechanisms are more powerful than others and will help empower you as the developer to make the AI better, but ultimately what matters is what the AI decides to do, not how it makes that decision. 

AI's should always be purposeful.
  • Don't use random 
  • Don't use wander
  • Don't use idle
Start your model as close to reality as possible
  • If you want an airplane to look like an airplane it helps to start with the basic mechanics of airplane flight. 
Don't model bullshit
  • Bullshit doesn't scale.  Trying to tune a bunch of AI's to look natural by giving them some half assed model of human intentions is never going to work.  You are far better off just telling them to do what you want to achieve your desired effect rather than tuning some goofy model. 
Respond to everything
  • Car wrecks, gun shots, staring, you name it and your AI should respond.
Foreshadow everything
  • Everything the AI does should be signaled. The AI is leaving cover to get a better vantage point? Don't just fucking do it. Signal the player that you're doing it and why. It should be clear at all times why an AI is doing something.
Execution is king
  • Doesn't matter how smart your decision maker is; if the AI looks like shit doing it, then don't bother.
Flicker is the devil
  • Once your AI decides to do something, it's better to keep doing it long after whatever reasonable reason for doing it has passed than to stop doing it and then decide to start again a short time later.
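One simple way to enforce this is a commit timer; here's a hypothetical sketch (the behaviors, threshold, and commit time are all invented for illustration): the selector refuses to change behaviors until the current one has run for a minimum time, so a briefly flickering input can't cause a flickering output.

```cpp
#include <cassert>

enum behavior { COVER, ATTACK };

// Once a behavior is chosen, stay committed to it for min_commit seconds,
// even after the trigger that caused it has faded.
struct behavior_selector
{
    behavior current = COVER;
    float time_in_current = 0.0f;
    float min_commit = 2.0f; // seconds to stay committed

    behavior update(float threat, float dt)
    {
        time_in_current += dt;
        behavior desired = (threat > 0.5f) ? ATTACK : COVER;
        // flicker guard: ignore the new desire until commit time elapses
        if (desired != current && time_in_current >= min_commit)
        {
            current = desired;
            time_in_current = 0.0f;
        }
        return current;
    }
};
```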
Spatial hacks are good.
  • Allow the world to be annotated to tune specific behavior
Global hacks are bad
  • Don't let the AI's be universally tuned
Don't randomly select behaviors
  • Don't 50% of the time attack and 50% retreat.  See the first point.  Be purposeful!
Ok, I'm out of pearls of wisdom/bs on AI behavior design.  As more come to me I might come back and update this post, repost it, etc.  Not sure what works better from a blog perspective. 

Economics is bunk


What does this survey mean?  To me this complete lack of consensus on a fairly basic issue means that either the entire field of economics is fundamentally rubbish or it's fundamentally corrupt.  My tendency is to lean towards the corruption side, but I don't think that is the truth.  There are no boogeymen, just people being people.  That means the profession is bunk.  The inability to test models or reproduce results means all models are equally accurate or inaccurate.  Krugman and Schiff have equal predictive power. 

As someone who strives to be rational I'm forced to admit that my strong economics views can only be bias based and irrational.

Thursday, February 28, 2013

Soccer Analytics

I'm back.  Honestly I forgot I had this blog.  I would say everybody else forgot too but that would require others to know it existed in the first place.  I suppose this is more of a journal if I'm the only one that reads it but... oh well.

On to the topic of the day.  Soccer Analytics.

I'm psyched to try my hand at some spatial sports analysis.  The MIT sports analytics conference is in town and although I'm not going it's rekindled my desire to do some data analysis.

The idea.

Compare player actions on the field with some idealized sim.  So basically, write some soccer AI, then feed it some sample game state and see what choice your AI would make vs. what choice the real live player made. 

I'm actually all ready to try this out as I have a c++ AI framework and a fancy C# data visualizer app.  See below


So in the app I would load up one second of the gameplay.  In the console I would type:

    insert (@(../)) (#(run simulation_soccer ($(@(./)))))

where the commands are:
  • insert - insert item into knowledge database at some location
  • @ - this grabs key value pair at this location
  • # - deserialize data into key value pair
  • run - execute external .exe
  • $ - serialize object to text
So the command above does the following
  1. grabs the current key value pair that is selected
  2. serializes that item to text
  3. executes external .exe simulation_soccer with text data as argument
  4. deserializes that data back to a key value pair
  5. inserts that data into the parent of the current node
Hopefully you can see the workflow here.  I'm hoping to fancy this up to use in video game AI but I think it will work as a general purpose AI simulator/visualizer. 
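To make the workflow concrete, here's a hypothetical C++ sketch of those five steps, with the external .exe stood in for by a plain function and serialization reduced to a toy string round trip (every name here is mine, not the console's actual implementation):

```cpp
#include <cassert>
#include <map>
#include <string>

// Toy knowledge database keyed by path-like strings.
using kv_store = std::map<std::string, std::string>;

// @ / $ : grab the selected value and "serialize" it to text (already text here)
std::string serialize(const kv_store& db, const std::string& key)
{
    return db.at(key);
}

// run : stand-in for launching simulation_soccer.exe with the text as argument
std::string run_simulation(const std::string& input)
{
    return input + "|simulated"; // pretend the sim appended its result
}

// # / insert : "deserialize" the result and store it at the parent location
void insert_result(kv_store& db, const std::string& parent,
                   const std::string& text)
{
    db[parent] = text;
}

// the whole `insert (@(../)) (#(run simulation_soccer ($(@(./)))))` flow
void run_pipeline(kv_store& db, const std::string& current,
                  const std::string& parent)
{
    std::string text = serialize(db, current); // steps 1-2
    std::string result = run_simulation(text); // step 3
    insert_result(db, parent, result);         // steps 4-5
}
```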

So what I need is some sample data so I can start writing simulation_soccer.exe.  This will obviously be extremely difficult, but there are some things that make it not so terrible.  For one, I don't really have to do everything all at once.  I could do simulate_soccer_left_back_on_break_aways.exe or some other reduction of the problem.  I also already have an AI framework that may or may not be fast enough to use in real-time games but will definitely work for this.

AI Framework

Basically it's the above knowledge format with save/load etc. plus libraries for querying, searching, and even planning.  You might ask yourself, why planning?  Well, because planning is awesome, and how better to evaluate actions than to consider what might have been?  Passing left or right at a particular instant might seem arbitrary, but if the next pass on the left is possibly across the face of goal while the right is back to the fullback, well, then the value becomes more clear.

So how does the API look, you might ask?  Well, I'll try to show you.

BOOST_AUTO_TEST_CASE( walk_10m )
{
 allocator::stack<20*KILOBYTES> a;
 model_test::model_factory f(a);
 model_test::model_locomotion m(a);
 model_test::condition_distance condition(a);
 model_test::heuristic_distance heuristic(a);

 aisearch_astar::search::algorithm alg(a, condition, heuristic);
 aisearch_astar::search::state s(a);
 aisearch_astar::search::context context(s);
 aisearch::search::result result(a);
 aisearch::vm::machine machine;

 aiplan::search::monitor monitor(a, m, f);
 aiplan::graph::edge e1;
 aiplan::graph::vertex v1(a);
 aiplan::model::op_noop noop;
 aiplan::model::state start = f.create_state();
 start.store(aiplan_test::pose_key, aiplan_test::standing);
 start.store(aiplan_test::position_key, aiquery::float3(0.0f, 0.0f, 0.0f));
 start.store(aiplan_test::target_key, aiquery::float3(10.0f, 0.0f, 0.0f));
 start.store(aiplan_test::forward_key, aiquery::float3(1.0f, 0.0f, 0.0f));

 v1.setstate(start);
 e1.setoperator(noop);
 e1.setvertex(v1);

 machine.start(monitor, context, alg, e1);
 while( machine.step(monitor, context, alg) == aisearch::vm::machine::kprocessing )
 {
 }
 machine.stop(monitor, context, alg);
}


The above code is the unit test I use for testing animation planning for walking 10m.  There is a ton missing, but hopefully it makes some tiny bit of sense to the C++ devs out there.  Basically it's just a bunch of declarations, followed by the creation of the start state, followed by the while loop that searches to find the plan. 

The interesting bit here is the model that is planned over.  model_locomotion does two things: it selects the operators for the plan when a node is expanded, and it scores the operators for sorting the fringe of the search.

  virtual void select(aiplan::model::model::operators& o_operators, const aiplan::model::state& in_state) const
  {
   o_operators.push_back(&op_walkstart);
   o_operators.push_back(&op_walkforward);
   o_operators.push_back(&op_walkstop);
  }

  virtual float score(const aiplan::model::op& in_operator, const aiplan::model::state& in_prestate, const aiplan::model::state& in_poststate) const
  {
   allocator::stack<2*KILOBYTES> a;
   aiquery::state s(a);
   aiquery::context c(s);
   aiquery::binding b;
   aiquery::predicate1<aiquery::float3> position(s, aiplan_test::position_key);
   aiquery::predicate1<aiquery::float3> target(s, aiplan_test::target_key);
   aiquery::value<float> value = aiquery::distance(position(b), target(b));
   b.bind(in_prestate);
   float distance;
   value.evaluate(distance, c);

   if ( &in_operator == &op_walkstop
    && distance > 1.0f )
   {
    return distance;
   }


   return 0.0f;

  }

So how will I use this for generating soccer simulations and analyzing performance?  Well, step one is to create the operators.  I think this is actually a really hard step, because coming up with a good set of planning operators will determine whether this methodology is worth a shit.  Step two is to come up with a way to score the operators based on the coach's model.  Step three is to compare the output of the model vs. the next second/data step of gameplay.
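Those three steps can be sketched in miniature (a hypothetical toy, not the actual framework above — the operators, the state, and the coach model are all invented for illustration):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy game state: ball distance to the opposing goal, in meters.
struct game_state { float dist_to_goal; };

// Step one: a (deliberately tiny) operator set.
struct op { std::string name; float delta; };

std::vector<op> make_operators()
{
    return { { "pass_left",  -2.0f },
             { "pass_right", +1.0f },
             { "dribble",    -5.0f } };
}

// Step two: a coach-model score -- prefer actions that advance toward goal.
float score(const game_state& s, const op& o)
{
    return -(s.dist_to_goal + o.delta); // closer to goal scores higher
}

// Step three: what the sim would pick, compared to the real player's choice.
std::string best_action(const game_state& s)
{
    std::vector<op> ops = make_operators();
    const op* best = &ops[0];
    for (const op& o : ops)
        if (score(s, o) > score(s, *best))
            best = &o;
    return best->name;
}

bool player_agreed(const game_state& s, const std::string& player_choice)
{
    return best_action(s) == player_choice;
}
```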

That's the master plan.