It’s fascinating how one thing in development leads to another, which leads to another, which in turn means a major change in gameplay.
In implementing the emotional systems, I’ve subconsciously stumbled into a Needs-like system, requiring that every emotion be coupled to a specific action: creatures eat when hungry, flee when afraid, mate when amorous, etc.
This has led to me having to work out the need for Pain. It’s a given that a Pain meter needs to be implemented (because reasons. SCIENCE reasons), but what actions can a creature take to reduce pain? They could flee the source, but that’s what fear does: we don’t need a duplicate of fear. And pain fades on its own: there’s no real *decision* one can make to reduce pain after it occurs: pain in real life is a preventative measure. There’s no appropriate biological response to it except to…
… rest and recover.
And thus, the creatures in Species will, as of 0.8.0, have the ability (and need) to sleep.
Pain will generally reflect the percentage of health, while Hunger will generally reflect the combined percentage of both health and energy. Thus, a starving creature (no Energy and low HP) will have a higher hunger than pain value and will prioritise food, while an injured one (high Energy, low HP) will have a much higher pain value.
This high pain value will cause the creature to lose consciousness, which will trigger rapid healing: a creature’s healing rate will be vastly reduced while awake.
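In rough numbers, the relationship might look something like this. (A Python sketch with made-up formulas and a made-up threshold: the post doesn’t give exact numbers, and the game itself is presumably implemented quite differently.)

```python
def hunger(health_frac, energy_frac):
    # Hunger reflects the combined shortfall of health and energy.
    return 1.0 - (health_frac + energy_frac) / 2.0

def pain(health_frac):
    # Pain reflects missing health only.
    return 1.0 - health_frac

def next_action(health_frac, energy_frac, pain_threshold=0.75):
    # Above the Pain Threshold, the creature loses consciousness and
    # sleeps (triggering rapid healing); otherwise the stronger need wins.
    if pain(health_frac) > pain_threshold:
        return "sleep"
    if hunger(health_frac, energy_frac) >= pain(health_frac):
        return "eat"
    return "rest"
```

A starving creature (low health, no energy) ends up with hunger above pain and goes looking for food; an injured one (low health, full energy) crosses the pain threshold and passes out.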
So, why not call it ‘tiredness’ then? (aside from the absolute necessity of having a pain meter for aforementioned SCIENCE reasons). Well, Pain has the advantage of being triggered by other means, such as rapid dealing of damage. This provides an interesting selection pressure for combat-oriented creatures: rather than trying to kill their prey outright, they could aim for maximum pain so as to knock their prey unconscious. The Pain Threshold gene is thus exposed to two competing selection pressures: high enough to ensure prompt healing, but low enough to prevent the loss of consciousness when attacked.
‘To the pain’ also becomes a viable solution for non-lethal intra-species disputes, opening up feature creep possibilities like fighting for food and mating rights.
This does lead to an odd situation, though. If creatures sleep to heal, they will empty their energy bar. That energy bar is what they use to reproduce. Thus, it would probably be to their benefit NOT to sleep, and just act like baby factories, eating food and immediately converting it to babies in order to maximise production and put out hunnards o babbies.
The obvious solution is to treat ‘energy’ the way it was originally supposed to be treated: as a measure of the amount of undigested food in their stomach. I don’t intend to use it *quite* like this just yet: I still intend to take some metabolic and walking energy requirements from it. But moving the reproduction energy requirement to health seems like a good idea: eat, sleep, digest, THEN make baby.
Which in turn leads to another odd situation. Pain is triggered by rapid drops of health. Giving birth takes quite a bit of energy from health in an instant. Did I just accidentally a pain of childbirth?
That’s… actually pretty cool. Or possibly cruel, I often get those two mixed up.
That said, the instant energy drop is a bit absurd: it triggers just as much pain as being slaughtered by a malevolent god (as determined by statistically significant testing). Well, I can at least reduce that by making the energy drop over a certain period of time, rather than- I just did it again, didn’t I?
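The fix described above amounts to making pain depend on the *rate* of health loss rather than the total amount lost. A Python sketch with illustrative numbers only (nothing here is actual game code):

```python
def pain_spike(health_lost, seconds):
    # Pain triggered by damage scales with health lost per unit time,
    # so the same cost paid gradually produces far less pain.
    return health_lost / seconds

# The same hypothetical 30-point birth cost, paid instantly vs. gradually:
instant = pain_spike(30, 0.1)   # one-frame drop: a massive spike
gradual = pain_spike(30, 5.0)   # spread over a 'pregnancy': mild
```

Spreading the drop over time keeps the spike below the knock-out threshold, which is exactly the pregnancy accident described above.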
So, um… pregnancy! That’s a thing now.
I think I need to repeat myself here:
“It’s fascinating how one thing in development leads to another, which leads to another, which in turn means a major change in gameplay.”
I am so glad I finally got around to upgrading the AI.
Since I wrote this post, it just. kept. happening. Moving reproduction loss to health had side effects, and I’ve had to keep making changes so that creatures could remain capable of reproduction without killing themselves in the process. The pain value is looking more and more like “exhaustion” and may have to be renamed, creatures are deciding of their own volition not to reproduce in uncomfortable zones, a starving creature actually falls unconscious shortly before dying, and NONE OF THIS WAS PLANNED.
I had planned to liken this to how Dr Frankenstein must have felt when he lost control of his monster, but frankly I’ve lost control of creations and accidentally set them loose on innocent townspeople before, and this feels different. Less frantic somehow. Maybe it’s the lack of lightning.
Okay, now we get to the fun stuff! And by fun I of course mean “densely technical and hard-to-follow”.
With creatures wandering randomly until they invariably died horribly, I wanted my next focus to be giving them the ability to move towards things. This means a lot more than just having a MoveToPlant() routine, though: they first have to be able to find a plant in the world.
I could probably have implemented this as a simple black-box method that takes a creature and returns the nearest edible tree, but I wanted to test out my understanding of behavior trees. Additionally, implementing this as a behavior tree opens up the possibility of alternate methods for finding an edible plant in the future: checking memory, other senses, or maybe even finding and asking other creatures where food may be found.
So anyways, this was the result:
… and the code looks summit like this:
new Sequence(
    new SortLocalTrees(parent),
    new RepeatUntilFail(
        new Sequence(
            new GetNextTree(),
            new Inverter(
                new Sequence(
                    new IsTreeEdibleSize(),
                    new IsTreeInMapBounds(),
                    new Inverter(new IsTreeBehindFence()),
                    new StoreClosestEdibleTree()
                )
            )
        )
    ),
    new CheckClosestEdibleTree()
)
A bit more complex than last time! But don’t worry, it’s the same basic concepts. Let’s run through it:
SortLocalTrees() is a fairly simple routine, and is called once when the root sequence starts up. It gathers up all the trees in the local area, puts them into a List, and sorts them by distance. It doesn’t actually *do* anything with items in the list: just sorts them.
If SortLocalTrees() fails (say, because there are no trees in the area), the parent Sequence fails, and the entire FindClosestEdibleTree() routine Fails. Back in the main AI, this causes the creature to fall back to Wandering. If SortLocalTrees() succeeds, however, it moves on to…
RepeatUntilFail()

A curious node: this one will keep on repeating so long as its child succeeds. It is actually incapable of failing: if the child fails, this routine succeeds; if it succeeds, it just keeps going.
This can actually be something of a liability: it’s why we need the CheckClosestEdibleTree() routine at the end, so that if there are no edible trees in the area, the routine doesn’t return a misleading success.
It also has the potential to get caught in an infinite loop: simply tie it to a routine that never fails.
GetNextTree will get the next tree from the local tree list, succeeding when it returns a new tree, and failing if it’s at the end of the list. The Tree itself is stored in the creature’s “NextTree” field, so future routines can access it.
Combined with the RepeatUntilFail() above, this means that the behavior tree will go through the local tree list one by one, continually pulling out a new one and performing the following sequence on it. Since we sorted the list earlier, it will do so in order from the closest to the furthest.
Inverter()

A simple NOT gate. If the child fails, the Inverter succeeds. If the child succeeds, the Inverter fails. The reason for this will become apparent in a moment.
IsTreeEdible(), IsTreeInMapBounds(), IsTreeBehindFence()

Succeeds if the tree is edible/in map bounds/behind a fence, and fails if it’s not. The Inverter on IsTreeBehindFence() is there because we want the check to succeed if the tree is *not* behind a fence.
StoreClosestEdibleTree()

Copies the NextTree variable into the ClosestEdibleTree variable, so that later routines (like MoveTo() and Eat()) can target it.
CheckClosestEdibleTree()

As mentioned above, this ensures the routine as a whole fails if the RepeatUntilFail() succeeded without filling out ClosestEdibleTree.
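Stripped of the multi-frame “Running” state and other real-world details, the node types above boil down to something like this. (A Python sketch for illustration only: Species itself is not implemented this way, and a full behaviour tree returns Success/Failure/Running rather than plain booleans.)

```python
class Sequence:
    """Succeeds only if every child succeeds, in order."""
    def __init__(self, *children):
        self.children = children
    def run(self, ctx):
        return all(child.run(ctx) for child in self.children)

class Inverter:
    """A NOT gate: flips the child's result."""
    def __init__(self, child):
        self.child = child
    def run(self, ctx):
        return not self.child.run(ctx)

class RepeatUntilFail:
    """Re-runs its child until it fails, then succeeds. Incapable of failing."""
    def __init__(self, child):
        self.child = child
    def run(self, ctx):
        while self.child.run(ctx):
            pass
        return True

class Action:
    """Leaf node wrapping a plain function that returns True/False."""
    def __init__(self, fn):
        self.fn = fn
    def run(self, ctx):
        return self.fn(ctx)
```

Plug leaf Actions into these and you can assemble the FindClosestEdibleTree() structure shown earlier out of ordinary objects.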
So, here is where the entire routine finally comes together:

- The list is sorted, and GetNextTree() pulls out the closest tree.
- IsTreeEdible() fails: this tree is not edible: maybe it’s too high for the creature to reach.
- The Sequence above it fails, so the Inverter above that succeeds. Thus, the entire GetNextTree() Sequence succeeds, and RepeatUntilFail() starts it over.
- GetNextTree() pulls out the second-closest tree.
- IsTreeEdible() succeeds this time. We have found an edible tree!
- For the sake of clarity, the tree is also in the map bounds and not behind a fence, so those checks succeed too.
- The Sequence stores the current tree in the “ClosestEdibleTree” variable, and succeeds.
- The Inverter gets a successful “tree found and stored” signal, and turns it into a Failure().
- This Fail() feeds up the tree to the RepeatUntilFail(), which in turn turns it back into a Success().
- When RepeatUntilFail() succeeds, it advances on to the final check: CheckClosestEdibleTree().
- If there’s something in this variable, it means the whole routine succeeded. If there’s not, it means the routine went through every local tree and didn’t find a single edible one.
- And with that, the FindClosestEdibleTree() routine succeeds. Future routines can now make use of the ClosestEdibleTree variable with impunity.
Now, I freely admit implementing a single-frame routine like this as ordinary code would have been a heck of a lot simpler. A sort(), a foreach() loop, the same checks as above and break when the checks passed. It could even be encompassed in a behavior tree wrapper, to succeed or fail depending on what it returned.
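For comparison’s sake, that ordinary-code version might look something like this. (A Python sketch with made-up fields; the real checks live in the game’s own routines.)

```python
def find_closest_edible_tree(creature_pos, trees):
    """trees: list of dicts with 'pos', 'edible', 'in_bounds', 'fenced'."""
    def dist(tree):
        (x, y), (tx, ty) = creature_pos, tree["pos"]
        return ((x - tx) ** 2 + (y - ty) ** 2) ** 0.5

    # Sort by distance, then return the first tree passing every check.
    for tree in sorted(trees, key=dist):
        if tree["edible"] and tree["in_bounds"] and not tree["fenced"]:
            return tree   # success: closest tree passing all checks
    return None           # failure: no edible tree in the area
```

Same sort, same checks, same early exit: just without the extensibility and multi-frame potential of the tree version.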
But the goal here wasn’t to program something efficiently, but to learn how to use behavior trees. And I think I’m getting the feel for them: it’s becoming easier for me to think my way through them and identify the problems I’m having when I test them.
The big advantage to Behavior Trees though is that, unlike code, they can be data driven, are inherently extensible, and most importantly can span multiple frames. Non-indicative example: it would be a matter of moments to change this to have creatures wait a frame between getting each new tree, reducing the per-frame computation and making it possible to actually see creatures weighing their options when they make decisions.
They can also be manipulated on the fly and even stored genetically, but I’m not sure how far down this rabbit hole I want to go. A simple system that manipulates behavior via emphasising or disregarding certain emotions or leaves of the tree is likely to be a lot more intuitive and communicative than genetic behavior trees.
I’m rambling and can’t think of a good segue to end the post on, so I’ll cut it short here.
Next time: And Suddenly There Was Gameplay Changes. Seriously: a completely new behavior and several major changes to existing ones, none of which I was planning to implement. It just sorta happened. Stay tuned!
The next step after the lobotomy was to begin work on the Behaviour Tree system. I needed a first-attempt tree that would give me a chance to set up and test the basic functions: I went for this one.
This is a nice simple tree, and great for starting up. It’s hardly ancient, occult code transmitted from the blackest depths of humanity’s soul, but everyone has to start somewhere. It’s like the first gun you get in an RPG. Later, when I’m killing skyscraper-sized programming problems with a gun that shoots lightning, I’ll look back on this moment and be like “heh”.
What it does should be fairly self explanatory, but I’ll go through it step by step. This might not be necessary for *this* tree, but things’ll get more and more complex in the future, and if I tell you all about it now, the future posts will be somewhat easier to follow.
Repeat()

Repeats. This ensures that every time the creature finishes its task (in this case, the sequence below it), it starts a new one. Simple!
Sequence()

One of the foundational concepts of behaviour trees. A sequence works by starting up the first routine in its list (Wander) and waiting for it to succeed. When it does succeed, the sequence will continue to the next item in the list, and then the next, and then the next. Only after all items in the list succeed will the Sequence itself succeed and return control to the parent (which in this case, given that the parent is a Repeat, will cause the sequence to start over).
If any routine fails (say Wander fails because there’s a fence between the creature and the wander location), the entire Sequence will fail. If Wander fails, the creature will NOT idle for 1 second: it will immediately go up to Repeat and search for a new location to wander to.
The practical upshot of this is that Sequences are great for things that need to be done in order. Hunting, for example: a hunter needs to [find food, catch food, kill food, eat food]. If any one of these operations fails; say the hunter has poor senses, or the prey escapes, or the hunter spontaneously decides to become a vegetarian; then the entire Hunt() sequence fails.
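In code terms, a Sequence is just short-circuit evaluation of its children in order. A quick Python sketch (the Hunt() steps here are stand-in lambdas, not actual game code):

```python
def sequence(*steps):
    """Run steps in order; fail on the first failure, succeed if all succeed."""
    def run(ctx):
        return all(step(ctx) for step in steps)
    return run

# A hypothetical Hunt(): find food, catch food, kill food, eat food.
hunt = sequence(
    lambda c: c["found_prey"],    # find food
    lambda c: c["caught_prey"],   # catch food
    lambda c: c["killed_prey"],   # kill food
    lambda c: True,               # eat food
)
```

If the prey escapes (catch fails), the kill and eat steps never run and the whole Hunt() fails, exactly as described above.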
Wander()

This is really just a wrapper for the MoveTo method. All it does is reset the target location, then call MoveTo(), which does all the actual work of moving the creature.
That said, this is also a method ripe for improvement. The previous Wander method, hidden away somewhere in the Finite State Machine code, simply picked a random (x,y) location in a square around the creature. The new method picks a location in a 180 degree arc in *front* of the creature, meaning they backtrack less and maximise the ground they cover.
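The front-arc picker is simple enough to sketch. (Python, hypothetical function and parameter names; the real implementation lives in the game’s own Wander code.)

```python
import math
import random

def wander_target(x, y, heading, max_dist):
    """Pick a random point in the 180-degree arc in front of the creature.

    heading is the creature's facing direction in radians.
    """
    # Deviate up to 90 degrees either side of the current heading,
    # so the creature never picks a point behind itself.
    angle = heading + random.uniform(-math.pi / 2, math.pi / 2)
    dist = random.uniform(0.0, max_dist)
    return x + dist * math.cos(angle), y + dist * math.sin(angle)
```

Compared with picking a point in a square around the creature, this keeps targets in the forward half-plane, which is why backtracking drops and ground coverage improves.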
In the long term, this won’t be the only upgrade to this method: it will also have code to disincentivise (shut up spellcheck, I know that one’s a word) them from wandering into inhospitable biomes like lava, deserts and oceans. Rather than just hard-coding a list of ‘inhospitable biomes’, I plan to implement this as an aspect of the “Discomfort” emotion: they will avoid wandering in directions that make them less comfortable, such as areas too hot/cold or dry/wet for them.
I don’t want to completely eliminate attempts to cross the desert/ocean, though, so Wander may have to become even more complex than that to facilitate migration behaviors.
MoveTo()

This actually moves the creature to a target, succeeding when the creature reaches the target, and failing in a variety of other situations (such as an obstacle, like a fence or a homicidal rover).
This is another task that, while simplistic, is ripe for extension. “Move” could mean a lot of things: Walk, Run, Stalk, Pounce, Fly, Swim: it’s entirely possible we’ll end up doing away with the “MoveTo” leaf node entirely, replacing it with a sub-tree that chooses between all of those variations depending on circumstance.
For now though, it serves its purpose.
Idle()

A simple node that causes the creature to wait around for a set period of time. Nothing complex about it: it just prevents the creature from immediately re-wandering.
So, in totality…
… this tree has a creature walk in a random direction, then wait for a second, and repeat.
Wow. I murdered a lot of words for that. Oh well, words are cheap and delicious.
This tree might not be particularly impressive, but it’s a foundation for something a lot more complex. Its existence means we have a Routine base class, with leaf (MoveTo and Idle), decorator (Wander and Repeat) and composite (Sequence) nodes. And importantly, it’s a way to test and debug each of these individual types of nodes before we start putting together something more interesting.
With this working, we can move on to more complex behavior, and finally begin unlocking the Strange Occult Power of Behavior Trees. In the name of SCIENCE. I have it on good authority that strange occult powers are very scientific.
NEXT TIME on BoringTechnicalCrapNobodyCaresAbout: FindClosestEdibleTree();
Are you excited? I’m excited.
The Brain object in Species on the 2nd (yes, I may be lagging behind somewhat with my blogging. Call it a buffer) looked something like this:
(Okay technically that’s a picture of the Finite State Machine object, but really, who wants to look at brains).
The Brain object in Species on the 3rd looked something like this:
Okay, the deletion process wasn’t quite as simple as that makes it out to be: due to the fact that I am terrible at everything ever, there was quite a bit of behavioral responsibility tied up in the FSM object that had to be carefully tweezed out and re-assigned (how is it that spellchecker accepts “tweeted” and “twee zed”, but not “tweezed”?).
This was another reason I decided to nuke the entire AI from orbit rather than attempting a salvage operation: I want the brain as decoupled as possible so I can re-use bits and pieces of it.
I honestly don’t know what else I might use their brain for yet, but that’s kinda the point of decoupling: I don’t know, so I should make it possible to use it in anything. One possible example might be ‘food creatures’: schools of small fish or swarms of bugs that don’t have to do much besides move around and be edible. Being able to simply and easily create a cut-down version of the creature’s brain that works in non-creatures would be a huge asset in this scenario.
Anyway, by the end of the fixing and subsequent deletion, we had a fairly successful population of vegetables: all 250 of the initial creatures stopped moving, eating, breeding and, after their health ran out due to normal starvation, living.
I suppose technically it wasn’t necessary to test this repeatedly, but I had to make sure there were no other side effects to the lobotomy. And I suppose technically the maniacal laughter probably wasn’t necessary either, but if we’re going to get hung up on technicalities we’ll never get anywhere. Moving on!
Next step: Behaviour Tree Mk 1.0.
The AI desperately needed a rework, because having creatures with the intellectual capacity of mildly concussed bacteria is proving generally detrimental to the simulation as a whole. It unfairly advantages the ‘viral’ survival strategy of producing hundreds of children and hoping that some of them randomly beat the odds and survive.
This isn’t a problem in and of itself (“it’s a viable strategy!”), but it dampens the other effects of natural selection. It would be more interesting and less random to see them exercising a bit more influence over their own survival. Plus, having goals we humans can empathise with might make them somewhat interesting to watch on an individual level.
I’ll try to tell the saga of intelligence as it happens, somewhat, but to start with, here’s a technical summary of what we’re currently doing with AI, and what we’re planning to do by 0.8.0.
The current AI implementation is, for the most part, a Finite State Machine. As detailed in this somewhat embarrassing 3-year-old post, a finite state machine is a machine with a finite number of states. /captain obvious.
The Finite State Machine implementation in Species is effectively just a glorified switch statement: more complex implementations can and do exist, but for simple AI they often aren’t necessary. The operative phrase there, though, is “for simple AI”.
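For illustration, a “glorified switch statement” FSM looks something like this. (A Python sketch with made-up states and fields, not the game’s actual code.)

```python
def update(creature):
    # One tick of a switch-style finite state machine. Every state has
    # to know its own exits, and every living state needs a death check.
    state = creature["state"]
    if state == "idle":
        if creature.get("bored"):
            creature["state"] = "moving"
    elif state == "moving":
        if creature.get("at_target"):
            creature["state"] = "eating"   # ...or attack, or mate, etc.
    elif state == "eating":
        if creature.get("full"):
            creature["state"] = "idle"
    # The transition every living state has to duplicate:
    if creature.get("hp", 1) <= 0:
        creature["state"] = "dead"
```

Fine for a handful of states; increasingly painful as the state count and the transition-handling duplication grow.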
The problem with Finite State Machines is that they are by their very definition rigid and inextensible. They provide a control and organisation framework for building an AI, but extending them into something larger and more complex is difficult (though not impossible).
In the current implementation, the biggest result of this inextensibility is a lack of foresight on the creatures’ part. Creatures don’t differentiate between chasing another creature, approaching a tree, or random wandering: they’re all just “Moving” to them. They don’t know or care *why* they’re moving, and only when they reach whatever the target is do they decide what to do with it.
Earlier in the year, my plan to extend the AI had involved multiple finite state machines. At first, this was going to be simply two: one for “plan” and another for “action”. So at any one moment a creature would be in two separate states: “Hunting : Attacking Creature” or “Mating : Moving To Target”.
That gives each creature two states to work with: “why” and “what”. This should allow them to perform a sequence of actions to achieve a goal: for instance, “Hunting” would be [“Find”, “Approach”, “Kill”, “Eat”].
After a while, I realised this would require a third, even higher state, which would be chosen based on emotion. “Hunting” would be a sub-state of “Seeking food”, along with “Grazing Grass”, “Grazing Trees” and “Scavenging”. And it was around this point that I realised a hierarchy of state machines was just going to get more and more restrictive. While you have 4 sub-states for hunting, each with their own set of sub-states, you only have 1 for “dead” (maybe 2 if we add a “dying” animation NO, BAD FEATURECREEP! DOWN!).
Food => Hunt => Approach
Dead => Dead => Dead  // redundant states, just to comply with the 'levels' of the FSM hierarchy
The complexity of intelligence I wanted in the game simply couldn’t be represented by a set number of hierarchy levels and a set number of states on each. The finite state machine abstraction was simply too rigid to encompass everything I wanted it to do.
It was while I was realising this that I stumbled onto the latest Next Big Thing in game dev: Behaviour Trees.
The first thing I feel I should do is completely disillusion everyone. Behaviour Trees aren’t some magic bullet that makes all your AI instantly awesome. They’re an organisational tool for complex AI: well suited to simulations and complex NPC behaviour, but complete overkill for basic, attack-on-sight enemies.

Now that the cynicism’s out of the way… OMG Behaviour Trees! They’re like this magic bullet that’s gonna make the Species AI instantly awesome!
Behaviour trees do not enforce horizontal rigidity the way state machines do. Where the state machine implementation required redundant structures like the Dead => Dead => Dead above, in a behaviour tree the first “Die” node can simply be an action node.
But that’s far from the biggest advantage. The biggest advantage of a behavioural tree is in the way it handles failure.
In a finite state machine, each state has to handle its transitions to other states. The “idle” state needs code that goes to “moving” when a creature gets bored of standing around. The “moving” state needs code that goes to “eat”, “attack”, “mate” and a variety of other actions depending on what they reach. And every living state needs code that goes to “dead” when a creature runs out of hit points. (or “dying” if we implement that animation state- I SAID DOWN!)
Behaviour trees don’t work this way. Each node in (a standard implementation of) a behaviour tree can only do one of three things: it can succeed, fail, or continue running. Eat() doesn’t have to know how to respond if it’s attacked: all it has to know is that taking damage causes it to fail, and the tree above it will take care of finding the appropriate response.
Better yet, these failure conditions can be implemented much higher in the tree than the action states. The “HP < 0” test, for example, doesn’t have to be implemented in Idle(), Graze(), Eat() and Attack() separately: with a bit of inventiveness, it can be implemented in a Live() node that encompasses all of these. Failing at Live() automatically causes all the lower-level actions to fail.
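A sketch of that idea: a guard condition implemented once, high in the tree, that fails every action beneath it. (Python, with hypothetical names; the game’s Live() node, if it ends up existing, will presumably look nothing like this.)

```python
def guarded(condition, child):
    """Fail the whole subtree whenever the high-level condition fails."""
    def run(ctx):
        # Short-circuit: if the guard fails, the child never even runs.
        return condition(ctx) and child(ctx)
    return run

# A hypothetical Live() guard: any action fails the instant HP hits 0,
# without Idle()/Graze()/Eat()/Attack() each needing their own check.
live = lambda c: c.get("hp", 0) > 0
eat = guarded(live, lambda c: True)
```

One death check instead of one per action state: exactly the duplication the FSM version couldn’t avoid.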
This failure system allows for something that would be difficult to implement otherwise: fallback plans. A scavenger seeking meat might check its Field of Vision for delicious corpses*. If it succeeds, good: eat that. If it fails, it could check for easily defeatable creatures to kill. If it fails at that, it might consider trying to eat inefficient food sources like grass and trees, and if it fails at that, it might wander to adjust its FOV before recursively calling itself to begin the original sequence over again.
*footnote: I had a strange blast of perspective while typing this sentence. Energy sources aren’t actually delicious, and creatures don’t actually have any emotional response to them, but they *act* as if they do, approaching and eating them when they find them. It sounds weird, but I’m strangely proud of that.
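For what it’s worth, that kind of fallback chain is what behaviour tree literature usually calls a Selector (or Fallback) node: try each child in order, succeed on the first success, fail only if everything fails. A Python sketch with made-up sensing flags:

```python
def selector(*options):
    """Try each option in order; succeed on the first success, fail if all fail."""
    def run(ctx):
        return any(opt(ctx) for opt in options)
    return run

# Hypothetical scavenger plan: corpses first, then weak prey, then grass.
seek_meat = selector(
    lambda c: c.get("corpse_in_fov", False),
    lambda c: c.get("weak_prey_in_fov", False),
    lambda c: c.get("grass_nearby", False),
)
```

It’s the mirror image of a Sequence: `any()` instead of `all()`, which is the whole trick behind graceful fallbacks.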
The end result of the behaviour tree system is an extremely versatile mechanism for intelligently achieving a goal, or gracefully handling a failure if it can’t find any way to achieve it. It’s not perfect for every situation, and is probably over-engineered for many, but I’m convinced it will fit Species well.
Next post: The AI saga begins, like all good stories, with a lobotomy.
The time has finally come: the creatures’ artificial stupidity has been affecting the results of natural selection since 0.4.0, and it’s finally time to change that. We will at last be upgrading the creatures’ brainz.
- Perception – Will be simplified and streamlined into a more reactive, less proactive system, which should be both easier on the computer and more responsive when it counts (such as responding to attacks and seeking food). The system will be based on hearing and smell: the idea being that it’s the responsibility of the entity being detected to emit a noise/scent that creatures in the vicinity can detect and act upon.
- Stimuli – Creatures will become more responsive: stimuli like pain, temperature, starvation, sexiness and being attacked by enormous scary things will prompt actual responses from them.
- Emotions – Each creature will have a suite of emotions: the strongest will determine what task they attempt to perform next. At this stage I plan to include hunger, fear, anger, amorousness, and discomfort, as well as make the emotional system easily extensible for additional future requirements.
- Forethought – When a creature has decided to do something, they will need to plan out a series of actions. Fixing hunger doesn’t just involve moving to the nearest object and sucking on it: creatures will identify the best form of food for themselves, seek it out, kill it (if necessary) and then eat. This will be done through a Behavior Tree, allowing for complex fallback plans in the event the initial attempt fails, easy extensibility, and possibly even opening up the possibilities of modding.
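To illustrate the Emotions point above: “the strongest emotion determines the task” is only a few lines of logic. (A Python sketch; the emotion names come from the list above, but the emotion-to-task mapping is my own guess, not the game’s.)

```python
def choose_task(emotions):
    """Pick the task matching the strongest emotion."""
    tasks = {
        "hunger": "seek_food",
        "fear": "flee",
        "anger": "attack",
        "amorousness": "seek_mate",
        "discomfort": "seek_comfort",
    }
    strongest = max(emotions, key=emotions.get)
    return tasks[strongest]
```

Extensibility here is just adding a key to each dict, which is presumably the point of making the emotional system easy to extend.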
All of this will hopefully lead to interesting creatures: not just on a population level, but on an individual level. The plan is that when this is done, an individual creature’s struggle for survival will be a lot less random and a lot more interesting to observe.
Additionally, this should make creatures in general much more capable of survival, allowing me to reduce the number of trees and food available.
I had originally intended to release an additional second set of features with 0.8.0, focused around making some overdue improvements to the gameplay cycle, but have since decided against it: if I focus solely on the AI, the update will be out quicker and I’ll be less likely to get distracted. We won’t be addressing anything else this update: the AI deservedly takes center stage.
And here we are again. A new version of Species: Artificial Life, Real Evolution with more rules governing survival, new gameplay features, general improvements and cheese.
Free as always, at least until we get out of alpha development. Changelog is as follows:
- FISSICKS! A full skeletal analysis is now performed on each creature’s body plan, determining internal forces, torques and balances, and applying energy costs to each. Now, instead of abusing the wonky, overly-simplistic physics to produce insane and hideous body plans, creatures will abuse the wonky, overly-complex physics to produce insane and beautiful body plans! Click on a creature and bring up the details to see the forces being applied to its skeleton.
- Accurate mass and volume values. Yes, we measured. Be grateful, these little buggers are slippery and have a tendency to suck viciously when disturbed. Primum specium is 62 centimeters long and weighs 2.7kg, or to put it another way, they’re as long as an adult human arm and weigh as much as my laptop. This mass is calculated by multiplying their volume with a (placeholder) density of 0.9196 kg/L.
- Fences. Yes, ladies and gentlemen, we have spared no expense in bringing you the very latest in rotting post and cheap wire technology. The Fence(TM, patent pending) is guaranteed to stop all creatures from passing it, even when they’re big enough to just step over. We’re, uh… well, okay, we’re not entirely sure why, but our scientists urge us not to question it lest it stops working. Useful for geographically separating populations, building race-tracks for your rovers, and trapping creatures in a tiny area and watching them starve to death. Which Would Be Wrong.
- Climate Control Devices. So apparently, and despite the wishes of many, when you pump a crapton of chemicals into the atmosphere it DOES STUFF. Useful for creating specific areas of cold, hot, fertile or dead ground without glaciating or frying your entire world.
- Rover Programming. Rovers now have the ability to deforest or fertilize trees, manage the population in a crisis to keep it from going extinct, or feed/kill creatures based on their genetic distance to a target.
- Genome Editor. Now an official part of the game! Because nothing says “ethical scientific advancement” like genetic experimentation. Primarily added to allow people to create ‘targets’ for the rover programming option, but for now you can import them into the game as well if you want to.
- Creature Descriptions. In the details of each creature, in addition to the exact values like 0.2 and 7.9, you can see descriptions like “Mostly Harmless” and “Barely Multicellular”.
- Neck Rotation Gene, +Animation. Creatures now move their neck and head about to bring it in line with food sources, and can only eat food they can bring their head into range of.
- Background improvements. A load of changes to the codebase to make the game more data driven, easier and faster to extend and add features to, and generally better to work with.
And now, onwards! To glory and 0.8.0!