AI Rework – Prologue

The AI desperately needs a rework, because having creatures with the intellectual capacity of mildly concussed bacteria is proving generally detrimental to the simulation as a whole. It unfairly advantages the ‘viral’ survival strategy of producing hundreds of children and hoping that some of them randomly beat the odds and survive.

This isn’t a problem in and of itself (“it’s a viable strategy!”), but it dampens the other effects of natural selection. It would be more interesting and less random to see the creatures exercising a bit more influence over their own survival. Plus, having goals we humans can empathise with might make them somewhat interesting to watch on an individual level.

I’ll try to tell the saga of intelligence as it happens, somewhat, but to start with, here’s a technical summary of what we’re currently doing with AI, and what we’re planning to do by 0.8.0.

The current AI implementation is, for the most part, a Finite State Machine. As detailed in this somewhat embarrassing 3-year-old post, a finite state machine is a machine with a finite number of states. /captain obvious.

The Finite State Machine implementation in Species is effectively just a glorified switch statement: more complex implementations can and do exist, but for simple AI they often aren’t necessary. The operative phrase there, though, is “for simple AI”.
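In rough, illustrative terms, the whole machine boils down to something like the following. This is a hypothetical Python sketch, not the actual Species code; all the state names and methods are made up.

from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    MOVING = auto()
    EATING = auto()
    ATTACKING = auto()
    DEAD = auto()

class Creature:
    def __init__(self):
        self.state = State.IDLE

    def update(self):
        # The whole "machine" is one big dispatch on the current state, once per tick.
        match self.state:
            case State.IDLE:      self.idle()
            case State.MOVING:    self.move()
            case State.EATING:    self.eat()
            case State.ATTACKING: self.attack()
            case State.DEAD:      pass  # dead creatures do nothing

    # Stubs standing in for the real per-state behaviour:
    def idle(self): pass
    def move(self): pass
    def eat(self): pass
    def attack(self): pass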

The problem with Finite State Machines is that they are by their very definition rigid and inextensible. They provide a control and organisation framework for building an AI, but extending them into something larger and more complex is difficult (though not impossible).

In the current implementation, the biggest result of this inextensibility is a lack of foresight on the creatures’ part. Creatures don’t differentiate between chasing another creature, approaching a tree, or wandering randomly: they’re all just “Moving” to them. They don’t know or care *why* they’re moving, and only when they reach whatever the target is do they decide what to do with it.

Earlier in the year, my plan to extend the AI had involved multiple finite state machines. At first, this was going to be simply two: one for “plan” and another for “action”. So at any one moment a creature would be in two separate states: “Hunting : Attacking Creature” or “Mating : Moving To Target”.

That gives each creature two states to work with: “why” and “what”. This should allow them to perform a sequence of actions to achieve a goal; “Hunting”, for instance, would be [“Find”, “Approach”, “Kill”, “Eat”].
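A quick sketch of how that two-machine setup might have looked (again, hypothetical Python rather than actual Species code; the plan and action names are just placeholders):

from enum import Enum, auto

class Plan(Enum):       # the "why"
    HUNTING = auto()
    MATING = auto()

class Action(Enum):     # the "what"
    FIND = auto()
    APPROACH = auto()
    KILL = auto()
    EAT = auto()
    MOVE_TO_TARGET = auto()

# Each plan is a fixed sequence of actions to step through:
PLAN_SEQUENCES = {
    Plan.HUNTING: [Action.FIND, Action.APPROACH, Action.KILL, Action.EAT],
    Plan.MATING:  [Action.FIND, Action.MOVE_TO_TARGET],
}

class Creature:
    def __init__(self):
        self.plan = Plan.HUNTING
        self.step = 0

    @property
    def action(self) -> Action:
        # Reads as "Hunting : Approach", "Mating : Move To Target", and so on.
        return PLAN_SEQUENCES[self.plan][self.step]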

After a while, I realised this would require a third, even higher state, which would be chosen based on emotion. “Hunting” would be a sub-state of “Seeking food”, along with “Grazing Grass”, “Grazing Trees” and “Scavenging”. And it was around this point that I realised a hierarchy of state machines was just going to get more and more restrictive. While you have 4 sub-states for hunting, each with their own set of sub-states, you only have 1 for “dead” (maybe 2 if we add a “dying” animation NO, BAD FEATURECREEP! DOWN!).

Food => Hunt => Approach
Dead => Dead => Dead //redundant states, just to comply with the 'levels' of the FSM hierarchy

The complexity of intelligence I wanted in the game simply couldn’t be represented by a set number of hierarchy levels and a set number of states on each. The finite state machine abstraction was simply too rigid to encompass everything I wanted it to do.

It was while I was realising this that I stumbled onto the latest Next Big Thing in game dev: Behaviour Trees.

I first learned about Behaviour Trees here, and got going by modifying some code from here.

The first thing I feel I should do is completely disillusion everyone. Behaviour Trees aren’t some magic bullet that makes all your AI instantly awesome. They’re an organisational tool for complex AI: well suited to simulations and complex NPC behaviour, but complete overkill for basic, attack-on-sight enemies.

Now that the cynicism’s out of the way… OMG Behaviour Trees! They’re like this magic bullet that’s gonna make the Species AI instantly awesome!

Behaviour trees don’t enforce horizontal rigidity the way state machines do. Where the state machine implementation required redundant structures like the Dead=>Dead=>Dead above, in a behaviour tree the first “Die” node can simply be an action node.

But that’s far from the biggest advantage. The biggest advantage of a behaviour tree is in the way it handles failure.

In a finite state machine, each state has to handle its transitions to other states. The “idle” state needs code that goes to “moving” when a creature gets bored of standing around. The “moving” state needs code that goes to “eat”, “attack”, “mate” and a variety of other actions depending on what they reach. And every living state needs code that goes to “dead” when a creature runs out of hit points. (Or “dying” if we implement that animation state. I SAID DOWN!)
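Continuing the hypothetical sketch from earlier, that duplication looks something like this (the helper predicates are just stubs; none of these names are real Species code):

class Creature:
    def __init__(self):
        self.state = State.IDLE
        self.hp = 10

    def idle(self):
        if self.hp < 0:
            self.state = State.DEAD    # death check, copy #1
        elif self.is_bored():
            self.state = State.MOVING

    def move(self):
        if self.hp < 0:
            self.state = State.DEAD    # death check, copy #2
        elif self.reached_target():
            self.state = State.EATING  # ...or ATTACKING, or something else, depending on the target

    def eat(self):
        if self.hp < 0:
            self.state = State.DEAD    # death check, copy #3
        elif self.is_full():
            self.state = State.IDLE

    # Stub predicates so the sketch stands on its own:
    def is_bored(self): return False
    def reached_target(self): return False
    def is_full(self): return False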

Behaviour trees don’t work this way. Each node in (a standard implementation of) a behaviour tree can only do one of three things: it can succeed, fail, or continue running. Eat() doesn’t have to know how to respond if it’s attacked: all it has to know is that taking damage causes it to fail, and the tree above it will take care of finding the appropriate response.
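As a rough sketch (the three statuses are the standard behaviour tree pattern; the Eat node and the creature attributes are hypothetical, not real Species code):

from enum import Enum, auto

class Status(Enum):
    SUCCESS = auto()
    FAILURE = auto()
    RUNNING = auto()

class Node:
    """Every node, leaf or branch, reports exactly one of the three statuses each tick."""
    def tick(self, creature) -> Status:
        raise NotImplementedError

class Eat(Node):
    def tick(self, creature) -> Status:
        if creature.took_damage:                     # hypothetical attribute
            return Status.FAILURE                    # don't decide how to respond; just fail upward
        if creature.energy >= creature.max_energy:   # hypothetical attributes
            return Status.SUCCESS                    # finished eating
        creature.energy += 1
        return Status.RUNNING                        # still chewing; tick again next frame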

Better yet, these failure conditions can be implemented much higher in the tree than the action states. The “HP < 0” test, for example, doesn’t have to be implemented in Idle(), Graze(), Eat() and Attack() separately: with a bit of inventiveness, it can be implemented in a Live() node that encompasses all of these. Failing at Live() automatically causes all the lower-level actions to fail.
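Building on the sketch above, a hypothetical Live() node could be as simple as this (hp is again just an illustrative attribute name):

class Live(Node):
    """Wraps a whole subtree of living behaviour; one shared death check covers all of it."""
    def __init__(self, child: Node):
        self.child = child

    def tick(self, creature) -> Status:
        if creature.hp < 0:              # the single, shared "HP < 0" test
            return Status.FAILURE        # Idle, Graze, Eat, Attack... all fail along with Live()
        return self.child.tick(creature)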

This failure system allows for something that would be difficult to implement otherwise: fallback plans. A scavenger seeking meat might check its Field of Vision for delicious corpses*. If it succeeds, good: eat that. If it fails, it could check for easily defeatable creatures to kill. If it fails at that, it might consider trying to eat inefficient food sources like grass and trees, and if it fails at that, it might wander to adjust its FOV before recursively calling itself to begin the original sequence over again.

*footnote: I had a strange blast of perspective while typing this sentence. Energy sources aren’t actually delicious, and creatures don’t actually have any emotional response to them, but they *act* as if they do, approaching and eating them when they find them. It sounds weird, but I’m strangely proud of that.
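That “try this, and if it fails, try the next thing” pattern is usually handled by a selector node. Here’s a rough sketch of the scavenger’s fallback chain, still building on the classes above (the leaf names are placeholders, not actual Species behaviours):

class Selector(Node):
    """Ticks children in order: returns the first SUCCESS or RUNNING, fails only if every child fails."""
    def __init__(self, *children: Node):
        self.children = list(children)

    def tick(self, creature) -> Status:
        for child in self.children:
            status = child.tick(creature)
            if status != Status.FAILURE:
                return status            # this fallback is working; stop here
        return Status.FAILURE            # every fallback failed; let the parent figure it out

class Placeholder(Node):
    """Stand-in leaf so the sketch holds together; a real leaf would inspect the world and act."""
    def __init__(self, name): self.name = name
    def tick(self, creature) -> Status: return Status.FAILURE

seek_meat = Selector(
    Placeholder("EatNearbyCorpse"),    # 1. delicious corpses in the field of vision
    Placeholder("HuntWeakCreature"),   # 2. easily defeatable prey
    Placeholder("GrazeGrassOrTrees"),  # 3. inefficient but edible fallbacks
    Placeholder("WanderAndRetry"),     # 4. adjust the FOV, then start the whole sequence again
)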

The end result of the behaviour tree system is an extremely versatile mechanism for intelligently achieving a goal, or gracefully handling a failure if it can’t find any way to achieve it. It’s not perfect for every situation, and is probably over-engineered for many, but I’m convinced it will fit Species well.

Next post: The AI saga begins, like all good stories, with a lobotomy.
