AI Rework – Behavior Tree Level 3

Something a little different this time: rather than extending the tree, I’m taking measures to make it more versatile.

Last time we were here, we had this:


That’s all well and good for finding the closest tree, but what if we want to find the closest creature? We end up writing 5 more routines simply to do something we’ve already done.

That’s a waste of code, and as everyone knows, code is extremely expensive. It’s like printer ink. That’s why programmers get paid so much, cause of all the code they go through.

3 Kilobytes to the Euro

Seriously though, “find the closest X” is a routine I expect to be using a lot. “Find the closest genetically-compatible creature”, “find the closest corpse”, “find the closest prey”, “find the closest rocket-propelled chainsaw”, “find the closest chainsaw-propelled rocket” and so on. You get the idea.

So with that in mind, it’d be cool if I didn’t have to rewrite 5 routines for every variation on a simple premise. Good code is reusable code, and right now the Behavior Tree system is making it a lot harder to write good code.

So, the obvious thing to do would be change this:

    new CheckClosestEdibleTree()

… to this

    new CheckVariable(ClosestTree)

Rather than hard coding ClosestTree, I’m feeding it in as a parameter. Each of the dependent methods store it and act on it when their Act() method is called.

This is a fairly elegant solution, with only one minor drawback: the fact that it will not work.

The reason it won’t work is a bit complex, but I’ll try to summarise it with an example. Suppose we’re trying to define a routine that will get the creature to walk towards the “ClosestCorpse“. When the creature is born, its AI is initialised and Context.ClosestCorpse is fed into the MoveTo routine:


The creature then Moves during its life. As it moves, the game updates the Context.ClosestCorpse object.


But the MoveTo routine was set back when the AI was initialised: it still references the original Corpse A! If the creature is told to approach the closest corpse now, it’ll wander off in the wrong direction…


This is where Lazy Evaluation comes in. We don’t want a reference to whatever corpse was fed in when the AI was initialised, we want a reference to whatever corpse is currently stored in Context.ClosestCorpse. Luckily C# provides a fairly neat way of doing this: rather than providing a Corpse to the MoveTo routine, we provide a Function That Returns A Corpse (Func&lt;Corpse&gt;).

Then there’s a neat syntax to generate exactly that:
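In rough C#, the idea looks something like this (a sketch only: the class and context names are my illustrations, not the actual Species code):

```csharp
// Illustrative only. The key point: the routine stores the *getter*,
// not the value the getter returns.
public class MoveTo : Routine
{
    private readonly Func<Corpse> getTarget;

    public MoveTo(Func<Corpse> getTarget)
    {
        this.getTarget = getTarget;
    }

    public override Result Act()
    {
        // Re-evaluated every time Act() runs, so it always sees
        // whatever is *currently* stored in Context.ClosestCorpse.
        Corpse target = getTarget();
        // ... move towards target ...
        return Result.Success;
    }
}

// At tree-construction time, a lambda defers the variable lookup:
// new MoveTo(() => parent.Context.ClosestCorpse)
```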


And we’re done! MoveToClosestCorpse() can now be changed to the much more generic MoveTo() command, and can use any variable we want to feed into it.

Except, we’re not quite done. This system works exceptionally well for *getting* a variable, but what if we want to set one?

Back to our original example. Some fiddling with the above syntax means we can now generate a FindClosestMatch() method, which identifies the closest Creature or Tree that meets a given criteria. For example, this:

new FindClosestCreatureMatch(parent,
    new Inverter(new IsAlive(() => parent.Context.NextCreature)),
    new Inverter(new IsEntityBehindFence(() => parent.Context.NextCreature)))
… identifies a creature that is a) dead, and b) inside the map boundaries, without me having to create a whole new FindClosestCorpse() routine.

But this result doesn’t go into the “ClosestCorpse” variable, because FindClosestMatch() can’t just set any variable it wants. It’s a predefined routine: it stores its result in a generic “ClosestMatch” variable.

This is still somewhat workable, so long as we only need the ClosestMatch variable for one thing, within the routine in which it is defined. It just means that the value can’t be retained for later use without custom routines.

But can we do better? Can we use a similar syntactic trick to lazy evaluation, but for a setter? As it turns out, yes! But… it looks like this…

No seriously, what?

Can you read that? I can’t read that. I don’t even know how it’s supposed to be read. Great Cthulhu that’s a hideous pile of mess.

Sadly, this is a case where reusability and readability are diametrically opposed requirements. If I were to refactor for reusability using this trick, the code would be rendered unreadable.

So, I think in this case I shall err on the side of readability. While I do have a SetVariable routine in case I’m ever that desperate to reuse code, I think I’m better off creating custom “SetClosestCorpse” style routines for storing variables and using the temporary “ClosestMatch” variable wherever possible.

Next Time On Somebody Help He’s Forcing Me To Write His Blog Posts: Hunting and Fleeing! Plus: we find out what happens when 250 identical Primum specium become predators who prefer live prey. (hint: they die. A lot)



AI Rework – EmotionController

So, I may have skipped ahead a little in the previous post. We need a system to control which emotion is currently strongest in a creature, and what they do as a result.

Before we go into that though, let me introduce you to the Other Most Important Node in BehaviorTrees. The first one was the “Sequence” node, which we originally discussed back here.

The second one is the “Selector“. It’s basically an inverted Sequence:

- A Sequence runs through the nodes in its purview, advancing to the next one when the previous one succeeds, and Failing if one fails.
- A Selector does the opposite: it runs through the nodes in its purview, advancing to the next one if the previous one fails and Succeeding if any of them succeed.
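In code, the two composites are near mirror images. A minimal sketch (assuming a simple Routine base class like the one from earlier posts; real nodes also track a Running state so they can span multiple frames, which I’ve omitted here for brevity):

```csharp
public enum Result { Success, Failure }

public abstract class Routine
{
    public abstract Result Act();
}

// Sequence: fail fast; succeed only if every child succeeds.
public class Sequence : Routine
{
    private readonly Routine[] children;
    public Sequence(params Routine[] children) { this.children = children; }

    public override Result Act()
    {
        foreach (var child in children)
            if (child.Act() == Result.Failure)
                return Result.Failure;
        return Result.Success;
    }
}

// Selector: succeed fast; fail only if every child fails.
public class Selector : Routine
{
    private readonly Routine[] children;
    public Selector(params Routine[] children) { this.children = children; }

    public override Result Act()
    {
        foreach (var child in children)
            if (child.Act() == Result.Success)
                return Result.Success;
        return Result.Failure;
    }
}
```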

This is brilliant for when you have multiple ways of achieving the same goal: it means that if the first method fails, they can fall back to the second, then the third and so on. For example, for an omnivore that prefers meat:


The selector would start the Hunt behavior. If Hunt failed, maybe because there was no prey to be found or the would-be prey had really big, scary teeth, it would advance to Scavenge. If Scavenge also failed, it would advance to Eat Tree, and so on.

Only when one of these food-finding strategies succeeded would the ‘Seek Food‘ selector as a whole succeed.

Selectors are rarer than Sequences, but no less important: without them, the tree wouldn’t be capable of ‘intelligent’ decision making.

Okay, now that you’re familiar with that concept, let me introduce you to “EmotionController“.


This is a unique composite node near the top of the tree. It has six “Emotion” nodes.

Each Emotion is a Selector, and will be filled with ways of sating that emotion: the ‘Seek Food‘ selector above would belong with the Hunger emotion. At this point it becomes closer to a cause-and-effect Needs system rather than a fuzzier Emotion one. I’m okay with this: it makes the creature’s behavior a bit more understandable.

It was this structure that led to me requiring a way for creatures to respond to Pain, which led to the implementation of Sleep, as discussed in the last post.

The EmotionController itself isn’t a selector though: its behavior is a lot more complicated. I’ll explain it as best I can…

Each Emotion has a Value, determined from a constantly evaluated Base equation (for example, hunger is always inversely proportional to energy), and a more dynamic Floating one (for example, being attacked triggers a flash of fear which fades over time). This value determines the current priority of the emotion.
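In code terms, an emotion’s value might combine the two components something like this (a sketch of my description above; the actual equations and constants in Species are more involved):

```csharp
// Illustrative sketch: Value = constantly evaluated Base + decaying Floating.
public class Emotion
{
    public Func<float> BaseEquation;   // e.g. hunger: () => 1.0f - creature.Energy
    public float Floating;             // spiked by events, fades over time

    public float Value { get; private set; }

    private const float DecayRate = 0.1f; // made-up constant

    public void Update(float dt)
    {
        Floating = Math.Max(0f, Floating - DecayRate * dt);
        Value = BaseEquation() + Floating;
    }

    // e.g. being attacked triggers a flash of fear
    public void Trigger(float spike) { Floating += spike; }
}
```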

very placeholder such art

When any emotion succeeds or fails, the creature will reassess the urgency of all its emotions and act on the strongest. The concept is similar to a Repeat node, except with a variety of options to choose from depending on the strongest emotion at the time.

At this stage, the design makes the creatures very single-minded. If they start seeking food, they will continue seeking food until they either find it or exhaust every possible food-seeking strategy (or die a horrifying death, but that’s a given). This is a good strategy in general, but it makes no concessions to interruptions. Waiting to finish your meal before reacting to the Murderus deathicus that is currently tearing your limbs off is not the model of intelligence that I was going for.

So I introduced an interrupt mechanism to the EmotionController. If an emotion exceeds the Currently-Being-Acted-Upon emotion (I may need to work out a more concise name for that), the creature will immediately fail the previous emotion (and any child routines that happen to be running) and begin acting on the new one.

This strategy is better, but it has a major problem: it leads to behavior loops, where a creature gets stuck between two states. Imagine a creature with low Energy and Health: it eats for a frame to raise Energy, then interrupts to pain, sleeps for a frame to raise Health, and interrupts back to hunger. Rinse and repeat.

The solution I’m using for this is an urgency threshold for each emotion, specifically for interrupts. Fear and Anger, for instance, have a very low threshold: if they exceed the currently prioritised emotion, the creature will react *immediately* and flee or fight whatever triggered the interruption. Pain, hunger, discomfort and amorousness all have much larger modifiers: to interrupt hunger and cause sleep, pain needs to exceed hunger plus a large modifier.
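The interrupt check with per-emotion thresholds boils down to something like this fragment (illustrative; field names and values are made up):

```csharp
// Each Emotion carries its own interrupt threshold: near zero for fear
// and anger, a large modifier for pain, hunger, discomfort, amorousness.
foreach (Emotion candidate in emotions)
{
    if (candidate.Value > active.Value + candidate.InterruptThreshold)
    {
        active.Fail();       // fail the running emotion and any child routines
        active = candidate;  // begin acting on the new strongest emotion
        break;
    }
}
```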

This doesn’t actually prevent loops: it just lengthens them. Hypothetically, a hungry creature on the edge of a strong temperature gradient like a lava-field could venture into the field to eat, only to be interrupted by discomfort, retreat back to the comfortable area, and interrupt back to hunger again. But with a sufficiently high urgency threshold on the relevant emotions, this behavior is likely to be much rarer.

It’s not particularly unrealistic either: biological organisms get stuck in loops all the time…


I’ve been spending quite a bit of time tweaking the emotional systems, to the point that it now works reasonably well. Some emotions require unique attributes to ensure correct behavior: for instance, “discomfort“, “pain” and “fear” have no maximum, so that in extreme situations (spreading lava, for instance) they will be prioritised even if other, more limited emotions (like hunger or amorousness) are near 100%.

Even with very basic placeholder behaviors (the ones implemented so far are “eat tree“, “wander“, “sleep” (idle + healing) and “move to more comfortable area“*), the system is proving its worth. We can plonk down a temperature device and watch the creatures respond to the added discomfort and flee. Plus there’s signs of improvement in their life cycle: rather than the uncontrolled randomness of before, new creatures act sanely: they seek food initially, then sleep to convert that food to health, which they can use to begin a pregnancy and produce babies.

Additionally, if they manage to gather food well enough to reach a “well fed and healed” state, they then start to act on the lower-priority emotions: discomfort and amorousness.

Finally, the new system is much more communicable. The debug method for showing their priority emotion at the moment is simply printing text above their head, which looks terrible above crowds, but “hungry: seeking food” and “exhausted: sleeping” are infinitely more interesting than “approaching tree because REASONS” and “attacking own offspring because SHUT UP THAT’S WHY“. I’ll need to find a way to communicate that information, because it really adds to the sense of character you get from the creatures.

It’s rapidly becoming apparent that this update is going to have some far-reaching consequences on the game as a whole. And, as always, I have no way of predicting what they’ll be. It’s as much a process of discovery for me as anyone!

I love this job.

*footnote: I really need a more concise term for “move to a more comfortable area”. A term for travelling to a more hospitable climate… oh, of course! “Migrate“!

… and I’ve implemented a much-requested feature without realising it again, haven’t I?


AI Rework – Rapidly losing control…

It’s fascinating how one thing in development leads to another, which leads to another, which in turn means a major change in gameplay.

In implementing the emotional systems, I’ve subconsciously stumbled into a Needs-like system, requiring that every emotion be coupled to a specific action: creatures eat when hungry, flee when afraid, mate when amorous, etc.

This has led to me having to work out the need for Pain. It’s a given that a Pain meter needs to be implemented (because reasons. SCIENCE reasons), but what actions can a creature take to reduce pain? They could flee the source, but that’s what fear does: we don’t need a duplicate of fear. And pain fades on its own: there’s no real *decision* one can make to reduce pain after it occurs: pain in real life is a preventative measure. There’s no appropriate biological response to it except to…

… rest and recover.

And thus, the creatures in Species will, as of 0.8.0, have the ability (and need) to sleep.

Pain will generally reflect the percentage of health lost, while Hunger will generally reflect the combined percentage of both health and energy. Thus, a starving creature (no Energy and low HP) will have a higher hunger than pain value and will prioritise food, while an injured one (high Energy, low HP) will have a much higher pain.
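As back-of-envelope formulas (my reading of the above, not the exact code):

```csharp
// Both meters as 0..1 fractions. Starving (Energy ~ 0, low HP):
// hunger > pain, so food wins. Injured but fed (high Energy, low HP):
// pain > hunger, so sleep wins.
float pain   = 1f - creature.Health / creature.MaxHealth;
float hunger = 1f - (creature.Health + creature.Energy)
                  / (creature.MaxHealth + creature.MaxEnergy);
```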

This high pain value will cause the creature to lose consciousness, which will trigger rapid healing: creatures’ healing rate will be vastly reduced when awake.

So, why not call it ‘tiredness’ then? (aside from the absolute necessity of having a pain meter for aforementioned SCIENCE reasons). Well, Pain has the advantage of being triggered by other means, such as rapid dealing of damage. This provides an interesting selection pressure for combat-oriented creatures: rather than trying to kill their prey outright, they could aim for maximum pain so as to knock their prey unconscious. The Pain Threshold gene is thus exposed to two competing selection pressures: high enough to ensure prompt healing, but low enough to prevent the loss of consciousness when attacked.

Hyperbole and a Half

‘To the pain’ also becomes a viable solution for non-lethal intra-species disputes, opening up feature creep possibilities like fighting for food and mating rights.

This does lead to an odd situation, though. If creatures sleep to heal, they will empty their energy bar. That energy bar is what they use to reproduce. Thus, it would probably be to their benefit NOT to sleep, and just act like baby factories, eating food and immediately converting it to babies in order to maximise production and put out hunnards o babbies.

The obvious solution is to treat ‘energy’ the way it was originally supposed to be treated: as a measure of the amount of undigested food in their stomach. I don’t intend to use it *quite* like this just yet: I still intend to take some metabolic and walking energy requirements from it. But moving the reproduction energy requirement to health seems like a good idea: eat, sleep, digest, THEN make baby.

Which in turn leads to another odd situation. Pain is triggered by rapid drops of health. Giving birth takes quite a bit of energy from health in an instant. Did I just accidentally a pain of childbirth?

That’s… actually pretty cool. Or possibly cruel, I often get those two mixed up.

That said, the instant energy drop is a bit absurd: it triggers just as much pain as being slaughtered by a malevolent god (as determined by statistically significant testing). Well, I can at least reduce that by making the energy drop over a certain period of time, rather than- I just did it again, didn’t I?

So, um… pregnancy! That’s a thing now.
I think I need to repeat myself here:

“It’s fascinating how one thing in development leads to another, which leads to another, which in turn means a major change in gameplay.”

I am so glad I finally got around to upgrading the AI.


Since I wrote this post, it just. kept. happening. Moving reproduction loss to health had side effects, and I’ve had to keep making changes so that creatures could remain capable of reproduction without killing themselves in the process. The pain value is looking more and more like “exhaustion” and may have to be renamed, creatures are deciding of their own volition not to reproduce in uncomfortable zones, a starving creature actually falls unconscious shortly before dying, and NONE OF THIS WAS PLANNED.

I had planned to liken this to how Dr Frankenstein must have felt when he lost control of his monster, but frankly I’ve lost control of my creations and accidentally set them loose on innocent townspeople before, and this feels different. Less frantic somehow. Maybe it’s the lack of lightning.


AI Rework – Level 2 Tree

Okay, now we get to the fun stuff! And by fun I of course mean “densely technical and hard-to-follow”.

With creatures wandering randomly until they invariably died horribly, I wanted my next focus to be giving them the ability to move towards things. This means a lot more than just having a MoveToPlant() routine, though: they first have to be able to find a plant in the world.

I could probably have implemented this as a simple black-box method that takes a creature and returns the nearest edible tree, but I wanted to test out my understanding of behavior trees. Additionally, implementing this as a behavior tree opens up the possibility of alternate methods for finding an edible plant in the future: checking memory, other senses, or maybe even finding and asking other creatures where food may be found.

So anyways, this was the result:

You'll have to imagine your own funny comment, I got nuttin.

… and the code looks summit like this:

new Sequence(
	new SortLocalTrees(parent),
	new RepeatUntilFail(
		new Sequence(
			new GetNextTree(),
			new Inverter(
				new Sequence(
					new IsTreeEdibleSize(),
					new IsTreeInMapBounds(),
					new Inverter(new IsTreeBehindFence()),
					new StoreClosestEdibleTree())))),
	new CheckClosestEdibleTree())

A bit more complex than last time! But don’t worry, it’s the same basic concepts. Let’s run through it:


SortLocalTrees() is a fairly simple routine, and is called once when the root sequence starts up. It gathers up all the trees in the local area, puts them into a List, and sorts them by distance. It doesn’t actually *do* anything with items in the list: just sorts them.

If SortLocalTrees() fails (say, because there are no trees in the area), the parent Sequence fails, and the entire FindClosestEdibleTree() routine Fails. Back in the main AI, this causes the creature to fall back to Wandering. If SortLocalTrees() succeeds, however, it moves on to…


RepeatUntilFail()

A curious node: this one will keep on repeating so long as its child succeeds. It is actually incapable of failing: if the child fails, this routine succeeds; if it succeeds, it just keeps going.

This can actually be something of a liability: it’s why we need the CheckClosestEdibleTree() routine at the end, so that if there are no edible trees in the area, the routine doesn’t return a misleading success.

It also has the potential to get caught in an infinite loop: simply tie it to a routine that never fails.
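RepeatUntilFail itself is a tiny decorator. A sketch, using the same Routine/Result shape as before (a real implementation would spread iterations across frames rather than looping within one Act() call):

```csharp
public class RepeatUntilFail : Routine
{
    private readonly Routine child;
    public RepeatUntilFail(Routine child) { this.child = child; }

    public override Result Act()
    {
        // Re-run the child until it fails; its Failure becomes our Success.
        // Note the liability: if the child never fails, this never returns.
        while (child.Act() == Result.Success) { }
        return Result.Success;
    }
}
```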


GetNextTree() will get the next tree from the local tree list, succeeding when it returns a new tree, and failing if it’s at the end of the list. The Tree itself is stored in the creature’s “NextTree” field, so future routines can access it.

Combined with the RepeatUntilFail() above, this means that the behavior tree will go through the local tree list one by one, continually pulling out a new one and performing the following sequence on it. Since we sorted the list earlier, it will do so in order from the closest to the furthest.


Inverter()

A simple NOT gate. If the child fails, the inverter succeeds. If the child succeeds, the inverter fails. The reason for this will become apparent in a moment.

IsTreeEdibleSize(), IsTreeInMapBounds(), IsTreeBehindFence()

Succeeds if the tree is edible/in map bounds/behind a fence, fails if it’s not. The inverter on BehindFence is there because we want the find to succeed if it’s not behind a fence.


StoreClosestEdibleTree()

Copies the NextTree variable into the ClosestEdibleTree variable, so that later routines (like MoveTo() and Eat()) can target it.


CheckClosestEdibleTree()

As mentioned above, this ensures the routine as a whole fails if the RepeatUntilFail() Succeeded without filling out ClosestEdibleTree.

So, here is where the entire routine finally comes together:

- The list is sorted, and GetNextTree() pulls out the closest tree.
- IsTreeEdible() Fails: this tree is not edible (maybe it’s too high for the creature to reach).
- The Sequence above it fails, so the Inverter above that succeeds. Thus, the entire GetNextTree() Sequence succeeds, and RepeatUntilFail() starts it over.
- GetNextTree pulls out the second closest tree.
- IsTreeEdible Succeeds this time. We have found an edible tree!
– For the sake of clarity, the tree is also in the map bounds and not behind a fence, so they succeed too.
- The Sequence stores the current tree in the “ClosestEdibleTree” variable, and succeeds.
- The Inverter gets a Successful “tree found and stored” signal, and turns it into a Failure().
- This Fail() feeds up the tree to the RepeatUntilFail(), which converts it back into a Success().
- When RepeatUntilFail() succeeds, it advances on to the final check: CheckClosestEdibleTree().
– If there’s something in this variable, it means the whole routine succeeded. If there’s not, it means the routine went through every local tree and didn’t find a single edible one.
- And with that, the FindClosestEdibleTree() routine succeeds. Future routines can now make use of the ClosestEdibleTree variable with impunity.

Now, I freely admit implementing a single-frame routine like this as ordinary code would have been a heck of a lot simpler. A sort(), a foreach() loop, the same checks as above and break when the checks passed. It could even be encompassed in a behavior tree wrapper, to succeed or fail depending on what it returned.
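For comparison, that single-frame version would look roughly like this (illustrative helper names, not real Species methods):

```csharp
// The straight-line equivalent of the whole tree above: sort, loop,
// check, break on the first match.
Tree FindClosestEdibleTree(Creature creature)
{
    List<Tree> local = GetLocalTrees(creature);
    local.Sort((a, b) => Distance(creature, a).CompareTo(Distance(creature, b)));

    foreach (Tree tree in local)
    {
        if (IsEdibleSize(tree) && IsInMapBounds(tree) && !IsBehindFence(tree))
            return tree;   // plays the role of StoreClosestEdibleTree()
    }
    return null;           // no match: CheckClosestEdibleTree() would Fail
}
```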

But the goal here wasn’t to program something efficiently, but to learn how to use behavior trees. And I think I’m getting the feel for them: it’s becoming easier for me to think my way through them and identify the problems I’m having when I test them.

The big advantage to Behavior Trees though is that, unlike code, they can be data driven, are inherently extensible, and most importantly can span multiple frames. Non-indicative example: it would be a matter of moments to change this to have creatures wait a frame between getting each new tree, reducing the per-frame computation and making it possible to actually see creatures weighing their options when they make decisions.

They can also be manipulated on the fly and even stored genetically, but I’m not sure how far down this rabbit hole I want to go. A simple system that manipulates behavior via emphasising or disregarding certain emotions or leaves of the tree is likely to be a lot more intuitive and communicative than genetic behavior trees.

I’m rambling and can’t think of a good segue to end the post on, so I’ll cut it short here.

Next time: And Suddenly There Was Gameplay Changes. Seriously: a completely new behavior and several major changes to existing ones, none of which I was planning to implement. It just sorta happened. Stay tuned!


AI Rework – Level 1 Tree

The next step after the lobotomy was to begin work on the Behaviour Tree system. I needed a first-attempt tree that would give me a chance to set up and test the basic functions: I went for this one.

So complex much behaviour tree.

This is a nice simple tree, and great for starting up. It’s hardly ancient, occult code transmitted from the blackest depths of humanity’s soul, but everyone has to start somewhere. It’s like the first gun you get in an RPG. Later, when I’m killing skyscraper-sized programming problems with a gun that shoots lightning, I’ll look back on this moment and be like “heh”.

What it does should be fairly self explanatory, but I’ll go through it step by step. This might not be necessary for *this* tree, but things’ll get more and more complex in the future, and if I tell you all about it now, the future posts will be somewhat easier to follow.


Repeat()

Repeats. This ensures that every time the creature finishes its task (in this case, the sequence below it), it starts a new one. Simple!


Sequence()

One of the foundational concepts of behaviour trees. A sequence works by starting up the first routine in its list (Wander) and waiting for it to succeed. When it does succeed, the sequence will continue to the next item in the list, and then the next, and then the next. Only after all items in the list Succeed will the Sequence itself succeed and return control to the parent (which in this case, given that the parent is a Repeat, will cause the sequence to start over).

If any routine fails (say Wander fails because there’s a fence between the creature and the wander location), the entire Sequence will fail. If Wander fails, the creature will NOT idle for 1 second: it will immediately go up to Repeat and search for a new location to wander to.

The practical upshot of this is that Sequences are great for things that need to be done in order. Hunting, for example: a hunter needs to [find food, catch food, kill food, eat food]. If any one of these operations fails; say the hunter has poor senses, or the prey escapes, or the hunter spontaneously decides to become a vegetarian; then the entire Hunt() sequence fails.


Wander()

This is really just a wrapper for the MoveTo method. All it does is reset the target location, then call MoveTo(), which does all the actual work of moving the creature.

That said, this is also a method ripe for improvement. The previous Wander method, hidden away somewhere in the Finite State Machine code, simply picked a random (x,y) location in a square around the creature. The new method picks a location in a 180 degree arc in *front* of the creature, meaning they backtrack less and maximise the ground they cover.
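Picking a point in a 180-degree arc in front of the creature is a couple of lines of trig. A sketch (assuming XNA’s Vector2/MathHelper and that Facing is an angle in radians; names are illustrative):

```csharp
// Random point within wanderRadius, confined to the half-circle
// ahead of the creature: Facing +/- 90 degrees.
float angle = creature.Facing
            + (float)(random.NextDouble() - 0.5) * MathHelper.Pi;
float dist  = (float)random.NextDouble() * wanderRadius;

Vector2 target = creature.Position
               + new Vector2((float)Math.Cos(angle), (float)Math.Sin(angle)) * dist;
```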

In the long term, this won’t be the only upgrade to this method: it will also have code to disincentivise (shut up spellcheck, I know that one’s a word) them from wandering into inhospitable biomes like lava, deserts and oceans. Rather than just hard-coding a list of ‘inhospitable biomes’, I plan to implement this as an aspect of the “Discomfort” emotion: they will avoid wandering in directions that make them less comfortable, such as areas too hot/cold or dry/wet for them.

I don’t want to completely eliminate attempts to cross the desert/ocean, though, so Wander may have to become even more complex than that to facilitate migration behaviors.


MoveTo()

This actually moves the creature to a target, Succeeding when the creature reaches the target, and Failing in a variety of other situations (such as an obstacle, like a fence or a homicidal rover).

This is another task that, while simplistic, is ripe for extension. “Move” could mean a lot of things: Walk, Run, Stalk, Pounce, Fly, Swim: it’s entirely possible we’ll end up doing away with the “MoveTo” leaf node entirely, replacing it with a sub-tree that chooses between all of those variations depending on circumstance.

For now though, it serves its purpose.


Idle()

A simple node that causes the creature to wait around for a set period of time. Nothing complex about it: it just prevents the creature from immediately re-wandering.

So, in totality…

… this tree has a creature walk in a random direction, then wait for a second, and repeat.

Wow. I murdered a lot of words for that. Oh well, words are cheap and delicious.

This tree might not be particularly impressive, but it’s a foundation for something a lot more complex. Its existence means we have a Routine base class, with leaf (MoveTo and Idle), decorator (Wander and Repeat) and composite (Sequence) nodes. And importantly, it’s a way to test and debug each of these individual types of nodes before we start putting together something more interesting.

With this working, we can move on to more complex behavior, and finally begin unlocking the Strange Occult Power of Behavior Trees. In the name of SCIENCE. I have it on good authority that strange occult powers are very scientific.

NEXT TIME on BoringTechnicalCrapNobodyCaresAbout: FindClosestEdibleTree();

Are you excited? I’m excited.


AI Rework – Lobotomy

The Brain object in Species on the 2nd (yes, I may be lagging behind somewhat with my blogging. Call it a buffer) looked something like this:

Imagine another 20 pages or so of this...

(Okay technically that’s a picture of the Finite State Machine object, but really, who wants to look at brains).

The Brain object in Species on the 3rd looked something like this:

My god, it's full of stars.

Okay, the deletion process wasn’t quite as simple as that makes it out to be: due to the fact that I am terrible at everything ever, there was quite a bit of behavioral responsibility tied up in the FSM object that had to be carefully tweezed out and re-assigned (how is it that spellchecker accepts “tweeted” and “twee zed”, but not “tweezed”?).

This was another reason I decided to nuke the entire AI from orbit rather than attempting a salvage operation: I want the brain as decoupled as possible so I can re-use bits and pieces of it.

I honestly don’t know what else I might use their brain for yet, but that’s kinda the point of decoupling: I don’t know, so I should make it possible to use it in anything. One possible example might be ‘food creatures’: schools of small fish or swarms of bugs that don’t have to do much besides move around and be edible. Being able to simply and easily create a cut-down version of the creature’s brain that works in non-creatures would be a huge asset in this scenario.

Anyway, by the end of the fixing and subsequent deletion, we had a fairly successful population of vegetables: all 250 of the initial creatures stopped moving, eating, breeding and, after their health ran out due to normal starvation, living.

I suppose technically it wasn’t necessary to test this repeatedly, but I had to make sure there were no other side effects to the lobotomy. And I suppose technically the maniacal laughter probably wasn’t necessary either, but if we’re going to get hung up on technicalities we’ll never get anywhere. Moving on!

Next step: Behaviour Tree Mk 1.0.


AI Rework – Prologue

The AI desperately needed a rework, because having creatures with the intellectual capacity of mildly concussed bacteria is proving generally detrimental to the simulation as a whole. It unfairly advantages the ‘viral’ survival strategy of producing hundreds of children and hoping that some of them randomly beat the odds and survive.

This isn’t a problem in and of itself (“it’s a viable strategy!”), but it dampens other effects of natural selection. It would be more interesting and less random to see them exercising a bit more influence over their own survival. Plus, having goals we humans can empathise with might make them somewhat interesting to watch on an individual level.

I’ll try to tell the saga of intelligence as it happens, somewhat, but to start with, here’s a technical summary of what we’re currently doing with AI, and what we’re planning to do by 0.8.0.

The current AI implementation is, for the most part, a Finite State Machine. As detailed in this somewhat embarrassing 3-year-old post, a finite state machine is a machine with a finite number of states. /captain obvious.

The Finite State Machine implementation in Species is effectively just a glorified switch statement: more complex implementations can and do exist, but for simple AI they often aren’t necessary. The operative phrase there, though, is “for simple AI”.
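“Glorified switch statement” is close to literal. The pattern looks something like this (a paraphrase of the idea, not the actual Species code; state and method names are made up):

```csharp
// One case per state: do the state's action, then check transitions.
switch (state)
{
    case State.Wandering:
        MoveRandomly();
        if (SeesFood()) state = State.Moving;
        break;

    case State.Moving:
        MoveTowardTarget();
        if (ReachedTarget()) state = State.Eating;  // decide on arrival
        break;

    case State.Eating:
        Eat();
        if (Full()) state = State.Wandering;
        break;

    // ... and so on, one case for every state the creature can be in.
}
```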

The problem with Finite State Machines is that they are by their very definition rigid and inextensible. They provide a control and organisation framework for building an AI, but extending them into something larger and more complex is difficult (though not impossible).

In the current implementation, the biggest result of this inextensibility is a lack of foresight on the creatures’ part. Creatures don’t differentiate between chasing another creature, approaching a tree, or random wandering: they’re all just “Moving” to them. They don’t know or care *why* they’re moving, and only when they reach whatever the target is do they decide what to do with it.

Earlier in the year, my plan to extend the AI had involved multiple finite state machines. At first, this was going to be simply two: one for “plan” and another for “action”. So at any one moment a creature would be in two separate states: “Hunting : Attacking Creature” or “Mating : Moving To Target”.

That gives each creature two states to work with: “why” and “what”. This should allow them to perform a sequence of actions to achieve a goal: for instance, “Hunting” would be ["Find", "Approach", "Kill", "Eat"].

After a while, I realised this would require a third, even higher state, which would be chosen based on emotion. “Hunting” would be a sub-state of “Seeking food”, along with “Grazing Grass”, “Grazing Trees” and “Scavenging”. And it was around this point that I realised a hierarchy of state machines was just going to get more and more restrictive. While you have 4 sub-states for hunting, each with their own set of sub-states, you only have 1 for “dead” (maybe 2 if we add a “dying” animation NO, BAD FEATURECREEP! DOWN!).

Food => Hunt => Approach
Dead => Dead => Dead //redundant states, just to comply with the 'levels' of the FSM hierarchy

The complexity of intelligence I wanted in the game simply couldn’t be represented by a set number of hierarchy levels and a set number of states on each. The finite state machine abstraction was simply too rigid to encompass everything I wanted it to do.

It was while I was realising this that I stumbled onto the latest Next Big Thing in game dev: Behaviour Trees.

I first learned about Behaviour Trees here, and got going by modifying some code from here.

The first thing I feel I should do is completely disillusion everyone. Behaviour Trees aren’t some magic bullet that makes all your AI instantly awesome. They’re an organisational tool for complex AI: well suited to simulations and complex NPC behaviour, but complete overkill for basic, attack-on-sight enemies.

Now that the cynicism’s out of the way… OMG Behaviour Trees! They’re like this magic bullet that’s gonna make the Species AI instantly awesome!

Behaviour trees don’t enforce horizontal rigidity the way state machines do. Where the State Machine implementation required redundant structures like the Dead=>Dead=>Dead above, in a behaviour tree the first “Die” node can simply be an action node.

But that’s far from the biggest advantage. The biggest advantage of a behavioural tree is in the way it handles failure.

In a finite state machine, each state has to handle its transitions to other states. The “idle” state needs code that goes to “moving” when a creature gets bored of standing around. The “moving” state needs code that goes to “eat”, “attack”, “mate” and a variety of other actions depending on what they reach. And every living state needs code that goes to “dead” when a creature runs out of hit points. (or “dying” if we implement that animation state- I SAID DOWN!)

Behaviour trees don’t work this way. Each node in (a standard implementation of) a behaviour tree can only do one of three things: it can succeed, fail, or continue running. Eat() doesn’t have to know how to respond if it’s attacked: all it has to know is that taking damage causes it to fail, and the tree above it will take care of finding the appropriate response.
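As a rough sketch of that three-result contract (again Python, with a hypothetical Eat leaf; the real tree code looks different), note that the node never decides what happens next, it only reports how it went:

```python
from enum import Enum, auto

class Status(Enum):
    SUCCESS = auto()
    FAILURE = auto()
    RUNNING = auto()

class Eat:
    """Leaf action: ticks until the meal is finished, fails if interrupted."""
    def __init__(self, bites_needed=2):
        self.bites = 0
        self.bites_needed = bites_needed

    def act(self, creature):
        if creature.get("took_damage"):
            # Don't pick a response here; just fail upward and let
            # the tree above choose what to do about it.
            return Status.FAILURE
        self.bites += 1
        if self.bites >= self.bites_needed:
            return Status.SUCCESS
        return Status.RUNNING
```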

Better yet, these failure conditions can be implemented much higher in the tree than the action states. The “hP < 0” test, for example, doesn’t have to be implemented in Idle(), Graze(), Eat() and Attack() separately: with a bit of inventiveness, it can be implemented in a Live() node that encompasses all of these. Failing at Live() automatically causes all the lower-level actions to fail.
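One way to picture that Live() node is as a decorator wrapping the rest of the tree: the health check lives there once, and any child beneath it fails automatically when it trips. A toy Python sketch (the Graze leaf and the hp bookkeeping are invented for illustration):

```python
from enum import Enum, auto

class Status(Enum):
    SUCCESS = auto()
    FAILURE = auto()
    RUNNING = auto()

class Live:
    """Decorator node: the hp test is implemented here once, instead
    of separately inside Idle(), Graze(), Eat() and Attack()."""
    def __init__(self, child):
        self.child = child

    def act(self, creature):
        if creature["hp"] <= 0:
            return Status.FAILURE  # everything below fails with us
        return self.child.act(creature)

class Graze:
    """Leaf action: nibble away, recovering a little health each tick."""
    def act(self, creature):
        creature["hp"] += 1
        return Status.RUNNING
```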

This failure system allows for something that would be difficult to implement otherwise: fallback plans. A scavenger seeking meat might check its Field of Vision for delicious corpses*. If it succeeds, good: eat that. If it fails, it could check for easily defeatable creatures to kill. If it fails at that, it might consider trying to eat inefficient food sources like grass and trees, and if it fails at that, it might wander to adjust its FOV before recursively calling itself to begin the original sequence over again.

*footnote: I had a strange blast of perspective while typing this sentence. Energy sources aren’t actually delicious, and creatures don’t actually have any emotional response to them, but they *act* as if they do, approaching and eating them when they find them. It sounds weird, but I’m strangely proud of that.
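That scavenger’s fallback chain maps naturally onto what behaviour tree literature usually calls a “selector” composite: try each child in order, and only fail if every fallback fails. Another toy Python sketch (the Check leaf and the sense keys are hypothetical):

```python
from enum import Enum, auto

class Status(Enum):
    SUCCESS = auto()
    FAILURE = auto()
    RUNNING = auto()

class Selector:
    """Composite node: returns the first non-FAILURE result from its
    children, in priority order. Fails only if every fallback fails."""
    def __init__(self, *children):
        self.children = children

    def act(self, creature):
        for child in self.children:
            result = child.act(creature)
            if result != Status.FAILURE:
                return result
        return Status.FAILURE

class Check:
    """Hypothetical leaf: succeeds when the named thing is in view."""
    def __init__(self, key):
        self.key = key

    def act(self, creature):
        return Status.SUCCESS if creature.get(self.key) else Status.FAILURE

# Corpses first, then weak prey, then (reluctantly) grass.
scavenge = Selector(Check("corpse_in_view"),
                    Check("weak_prey_in_view"),
                    Check("grass_in_view"))
```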

The end result of the behaviour tree system is an extremely versatile mechanism for intelligently achieving a goal, or gracefully handling a failure if it can’t find any way to achieve it. It’s not perfect for every situation, and is probably over-engineered for many, but I’m convinced it will fit Species well.

Next post: The AI saga begins, like all good stories, with a lobotomy.


