Posts Tagged gaming

Old Projects

Exhausted right now, so instead of writing something new, I thought I’d show off a few older prototypes and games to demonstrate my programming bona fides a bit. And hey, might as well let you guys download and make fun of them: they’re a little embarrassing, especially some of the older ones.

I make no guarantees that any of these games, prototypes or technical demos will work on your machine. Very much provided as-is.

Chaotic (Game)

Made this little Actionscript game when I was still in senior year at school, 7 years ago now. It was one of my first projects on a larger scale than mouse-controlled point and click games (I’m not going to post any of those, they’re embarrassing), and it’s buggy as all hell: but it’s more the endearing, entertaining kind of buggy than the crappy crash-and-burn kind of buggy.

A stick figure fighting a scorpion. Because I could.

It’s also a surprising amount of fun with two players, although you’re both stuck on the same keyboard. The single player AI is utterly merciless, but easily killable once you learn its bugs (it was programmed before I learned about Finite State Machines, so it’s a massive mess of if statements).
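For anyone wondering what I mean by Finite State Machines: each behaviour gets its own state with its own transition rules, instead of one giant tangle of nested ifs. A tiny sketch (in Python for readability, with made-up states; the actual Chaotic AI is nothing this tidy):

```python
# Minimal FSM sketch -- hypothetical states, not the actual Chaotic AI,
# which predates this pattern and is a pile of if statements.

class FighterAI:
    def __init__(self):
        self.state = "idle"  # name of the current state

    def update(self, distance, my_health):
        # Each state owns its own transition rules; update just
        # dispatches to whichever state we're currently in.
        handler = getattr(self, "state_" + self.state)
        self.state = handler(distance, my_health)

    def state_idle(self, distance, my_health):
        return "approach" if distance > 2 else "attack"

    def state_approach(self, distance, my_health):
        if my_health < 20:
            return "retreat"
        return "attack" if distance <= 2 else "approach"

    def state_attack(self, distance, my_health):
        if my_health < 20:
            return "retreat"
        return "attack" if distance <= 2 else "approach"

    def state_retreat(self, distance, my_health):
        return "idle" if distance > 5 else "retreat"

ai = FighterAI()
ai.update(distance=10, my_health=100)  # idle -> approach
```

The win over an if-statement pile is that each behaviour is self-contained: you can add or debug one state without re-reading the whole decision tree.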

Download

Controls
– You’ll need a keyboard with a num-pad. Controls can be found in the tutorial.

Untitled (Game)

Showing off my leet naming skills here. It’s another little two-player flash game: a lot cleaner than Chaotic, and it has very basic destructible terrain. No single player, sorry.

Real men shoot pink death lasers.

Download

Controls:
– Can also be found in-game. Click the Help button.

RT-22 (Prototype)

This was one of my initial forays into 3d games programming. Built in Blitz3D and abandoned when I moved to XNA and started work on Species, the idea was to make a non-generic third person shooter. The result was the little floating robot you see here, which I labelled the RT-22 droid.

Blue ‘splosions!

The game was to be small (seven fairly short levels), and have a comedic futuristic setting inspired by Douglas Adams (I’d been reading The Hitchhiker’s Guide to the Galaxy at the time). It never got past the testing phase, since I changed programming environments before I was done with it and didn’t want to bother with porting.

Download

It features 4 different weapons (middle click to switch), some pretty neat hover-flight controls, and my very first attempt at Finite State Machine AI in the Debug Squids (that’s their name canonically, too).

Controls:
– Mouse: Aim
– WASD: Move.
– Left Click: Fire Weapon
– Middle Click: Change Weapon
– Right Click: Zoom.
– “+/=”: Toggle Turbocharge

Voxel OcTerrain (Technical Demo)

The heightmap QuadTerrain class in Species isn’t my only attempt at terrain: after I built that, I tried my hand at Voxel Terrain for another project.

To quickly clarify: a “voxel” is a single cube in a 3d grid. You might be familiar with the term thanks to Minecraft. This terrain isn’t cubey though: voxels are the method by which its geometry is split. Imagine putting a 3d object through one of those slicers that cuts things into squares: along the cuts is where this method of rendering will create polygon edges. This method of terrain generation is used by, among other things, Starforge, Crysis and the NVidia Cascades Demo.

It’s a good way to create 3d procedural geometry, and you can use it to make caves and overhangs and… well quite frankly, any 3d object can be rendered with enough voxels. It’s far more versatile than a height map: you can render vertical towers, round planets… anything you can think of.
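To make the “slicer” idea concrete: you sample a density field at grid points, and wherever the field changes sign between two neighbouring samples, the surface must cross that edge, so you interpolate a point there. A minimal 2D sketch (Python for readability; the field is a made-up example, and the real 3D version, marching-cubes style, also stitches the crossings into triangles):

```python
# 2D sketch of grid-based surface extraction: sample a density field at
# grid corners, and wherever the sign flips between neighbours, place an
# interpolated surface crossing on that edge. Marching cubes does the
# same thing with 3D cells. Hypothetical field: a circle of radius 5
# (negative inside, positive outside).

def density(x, y):
    return x * x + y * y - 25.0  # signed "inside/outside" field

def crossings(size=12):
    points = []
    for x in range(-size, size):
        for y in range(-size, size):
            d = density(x, y)
            # Check the two axis neighbours; a sign change means the
            # surface passes between the two samples.
            for dx, dy in ((1, 0), (0, 1)):
                n = density(x + dx, y + dy)
                if (d < 0) != (n < 0):
                    t = d / (d - n)  # linear interpolation along the edge
                    points.append((x + dx * t, y + dy * t))
    return points

pts = crossings()
# Every crossing sits (approximately) on the circle x^2 + y^2 = 25.
```

The same interpolation is what keeps voxel terrain from looking cubey: the polygon edges land *between* the grid samples, not on them.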

Naturally, I went and extended the concept to be editable. As you can see here, I succeeded fairly well: it’s quite possible to carve out canyons, mountains and paths with this little editor.

Download

Blob world

Not included in the sample: I also went and implemented a save/load system and an octree LOD system similar to the quadtree system in Species, but with geomorphing (geomorphing is a method to stop the terrain geometry from ‘popping’ from high detail to low detail as you move away from it).

Voxel Terrains are not the easiest system to geomorph. It fought me every step of the way, and still has occasional gap artefacts in that mode, but I’m proud of it anyway because, seriously, Geomorphing Voxel Terrain. That’s not the sort of thing most people don’t code because it’s hard, it’s the sort of thing most people don’t code because it’s frikkin’ nuts.
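For the curious, the core of geomorphing itself is simple: each vertex gets blended between its high-detail position and the position it collapses to in the low-detail mesh, driven by distance from the camera. A sketch (Python for readability, illustrative numbers, and none of the painful voxel-specific gap-plugging included):

```python
# Geomorphing sketch: instead of snapping between LOD meshes, each
# high-detail vertex is blended toward the position it would occupy in
# the low-detail mesh, based on camera distance. Names and numbers are
# illustrative, not the actual OcTerrain code.

def morph_factor(distance, lod_near, lod_far):
    # 0.0 = full high detail, 1.0 = fully collapsed to low detail.
    t = (distance - lod_near) / (lod_far - lod_near)
    return min(1.0, max(0.0, t))

def morph_vertex(high_pos, low_pos, t):
    # Linear blend per component; on the GPU this is a one-line lerp.
    return tuple(h + (l - h) * t for h, l in zip(high_pos, low_pos))

# Halfway between the LOD bands, a vertex sits halfway between meshes:
t = morph_factor(distance=150.0, lod_near=100.0, lod_far=200.0)
v = morph_vertex((1.0, 4.0, 2.0), (1.0, 2.0, 2.0), t)
```

The hard part with voxel terrain isn’t this blend, it’s deciding *which* low-detail position each high-detail vertex should collapse to, which is where the gap artefacts come from.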

Swarm (Prototype)

An attempt to make something epic-but-unpolished with a two week deadline. It came out pretty cool, but also with some ridiculous performance issues. Features my first attempt at instanced models, flocking behavior, and insane amounts of… whatever those things are. Kamikaze robots or something.

They’re everywhere!

Download

WASD to walk, Mouse to aim, Left Click to shoot.

Project Aria (Technical Demo)

Project Aria was the first project I did in cooperation with Jade (the artist working with us on Species), and it’s noticeable. In addition to the environment, character models and textures, Jade kept pushing me to implement various graphical effects: fancy water, shadows and lighting, normal maps, bloom and even a special sorting method for rendering the character’s hair. This resulted in the game demo you see here, which I rather think speaks for itself.

Sooooo preeeetttyyyyy…

Download

I should warn you though: it has a large number of effects as well as realistic physics, and nothing is optimised: I was more interested in getting all the features in than in keeping it from lagging. It’s pretty, but its performance is a dog.

WASD to walk, Mouse to aim, Right click to change modes
– Telekinetic mode: Left Click to pick up objects, Mousewheel/Q/E to push/pull them. You can use that to throw them ridiculous distances, by releasing as you roll the wheel.
– Gun mode: Left Click to shoot stuff.
B – Blur Shadows (performance setting)
M – Draw Shadows (performance setting)

Welp, uploading all that was a lot harder than writing a new post. I’m goin’ to bed now.

Cheers all.
Qu

Our IndieGoGo project is still here:

Contribute, Share, help us out!

The rise hasn’t been exactly meteoric, but there’s still a month for it to pick up. Besides, we can use any amount of funding. Species getting made is not dependent on us making our goal. Funding is a way to accelerate our progress towards the goals and give us more chances to polish the game when we reach them.



GRASS GRASS GRASS GRASS

GRASS GRASS GRASS GRASS GRASS GRASS

There’s not too much to say about the obvious mechanisms of the 0.5.0 in-game grass: it uses the billboard system which I explained a while back. I could mention the other technical details: it only draws a few, nearby, vegetation nodes, grazing affects the length of the entire node, it fades out at a distance because there’s way too much of it to draw it across the entire map… okay, done that, now what?
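Since I’ve just rattled those details off, here’s roughly what the draw rules look like in sketch form (Python for readability; the constants are made up, not the real Species values):

```python
# Sketch of the grass draw rules described above: only nearby
# vegetation nodes are drawn at all, and billboards fade out toward
# the draw limit. Illustrative constants, not the real ones.

GRASS_DRAW_RADIUS = 80.0   # metres; nodes beyond this are skipped
GRASS_FADE_START = 60.0    # metres; alpha ramps down from here

def grass_alpha(distance):
    if distance >= GRASS_DRAW_RADIUS:
        return 0.0  # not drawn at all
    if distance <= GRASS_FADE_START:
        return 1.0  # fully opaque up close
    # Linear fade between the two radii.
    return (GRASS_DRAW_RADIUS - distance) / (GRASS_DRAW_RADIUS - GRASS_FADE_START)

def visible_nodes(nodes, camera_pos):
    # Keep only the nodes inside the draw radius, paired with their alpha.
    out = []
    for pos in nodes:
        d = sum((p - c) ** 2 for p, c in zip(pos, camera_pos)) ** 0.5
        a = grass_alpha(d)
        if a > 0.0:
            out.append((pos, a))
    return out
```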

Well, seeing as how talking about implementation is boring, I guess we might as well talk about the design aspects.

The moment I decided on grazing as a feature for 0.5.0, I knew I’d have to represent it somehow. The grass itself, though, wasn’t originally meant to be that representation. Indeed, I’m still not entirely convinced that grass is the best representation for it: rendering grass has limitations that make it less than ideal.

But…

Uh…

Hmmm…

Okay, I’m in the middle of typing up this post and I’m beginning to realise that the height of the billboard grass really isn’t the best way to represent grazable material. That’s what I get for blogging about a feature I’m in the middle of coding. Oh the joys of an evolving project.

Okay, new approach: I’m going to use this post as an opportunity to get my thoughts in order before I go off and play with the code a bit more.

The plan (prior to about 30 seconds ago) was to include a grazables container or bucket within each square of terrain. This container would hold all the energy that creatures could graze from, and how ‘full’ it was would determine how long the grass in that square was. Fertility loss due to grazing would occur when the bucket was empty and had to ‘buy’ more energy to regrow.
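In sketch form, the bucket plan looked something like this (Python for readability; the names and numbers are illustrative, not the actual development code):

```python
# Sketch of the (since-abandoned) bucket design: each terrain square
# holds a pool of grazable energy, grass length reflects how full the
# pool is, and an emptied pool 'buys' regrowth at a fertility cost.
# Illustrative constants throughout.

class GrazableSquare:
    CAPACITY = 100.0

    def __init__(self, fertility=1.0):
        self.energy = self.CAPACITY  # the 'bucket'
        self.fertility = fertility

    def grass_length(self):
        # Grass height is simply the fill fraction of the bucket.
        return self.energy / self.CAPACITY

    def graze(self, amount):
        taken = min(amount, self.energy)
        self.energy -= taken
        if self.energy <= 0.0:
            # The empty bucket 'buys' regrowth at the cost of fertility.
            self.fertility = max(0.0, self.fertility - 0.1)
            self.energy = self.CAPACITY * self.fertility
        return taken
```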

Documenting my design decisions. I am a professional and I act like a professional.

The big pro to this approach is that grass height shows you at a glance how much grazable energy the local area has. Unfortunately, there’s also a lot of cons:

– all grazable scatter-material grows like grass and is edible, regardless of what it actually looks like. This includes pebbles on rocky terrain, shells on the beach, salt in a salt plain, and lava rocks in lava.
– grass in an area is all the same height. Since an ‘area’ is an exact square of roughly 10m x 10m, this would be especially noticeable at borders where grass on one side is short and grass on the other is long.
– fertility loss is applied on an area-scale, not on a local one. A creature eating at the very corner of an area will affect the fertility of ground 14.1421m away (+/- 10m, anyway. Oh who am I kidding, I have no idea how big the vegetation squares are), while not affecting the ground just behind it.
– Grass is invisible at a distance, so you can’t see the direct effects of grazing from far away.

Yes, the rocks are edible. Why wouldn’t the rocks be edible? Stop being so carbon-based in your thinking.

Now, all of these are things that can be dealt with to eliminate or reduce their effect: scattering a square’s vegetation a bit beyond its border would blur the straight line between squares, and applying fertility loss on a macro level would make it less apparent that it’s related to overall area and not to the actions of individual creatures.

But what if we could deal with all of these problems just by changing the way ‘grazable energy’ is stored? This is the idea I’ve just had:

Eliminate the energy buckets in terrain squares, and effectively remove all terrain-based control over grazable energy. Grass no longer has any limit: creatures can just keep grazing and grazing within their biome… until the biome changes.

OH-EM-GEE I’M SO FRIKKIN PROFESSIONAL YOU GUISE!!! LIKE TOTALLY!!!!!

With this system, a creature emits a ‘death aura’ while grazing, gradually reducing the fertility in a small area directly underneath itself. Eventually, the biome under it degrades. This introduces a direct correlation between fertility and energy: a creature absorbs fertility from the ground and gains energy in exchange.
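In sketch form (Python for readability; the radius and drain rate are made-up numbers, not the real ones):

```python
# Sketch of the 'death aura' idea: while grazing, a creature drains
# fertility from terrain samples in a small radius underneath itself,
# and gains energy in direct proportion. Illustrative constants only.

def graze_tick(creature_pos, terrain, radius=2.0, drain_rate=0.01):
    energy_gained = 0.0
    for pos, cell in terrain.items():
        dx = pos[0] - creature_pos[0]
        dz = pos[1] - creature_pos[1]
        if dx * dx + dz * dz <= radius * radius:
            # Fertility lost by the ground == energy gained by the creature.
            drained = min(cell["fertility"], drain_rate)
            cell["fertility"] -= drained
            energy_gained += drained
    return energy_gained
```

The nice property is that the fertility loss is local to the creature doing the grazing, rather than smeared across a whole 10m square.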

Do vegetarians usually suck dry the very soul of the planet? Oh no wait that’s vegans. Ba-dum tchch! (teasing vegans. I’m so original)

Since grazables are no longer dependent on area but on biome, we’ll be able to introduce a variety of biome-dependent statistics (starting with a simple isGrazable boolean) as a central function of the simulation, rather than as something tacked on afterwards as was originally planned.

A convenient bonus is the fact that the ‘death aura’ code already exists, in the form of biome stabilisation from trees. The only difference is that where trees stabilise the habitat they’re best suited for (with the exception of some unbalanced pioneering species, which make the simulation more dynamic by stabilising towards biomes they can’t survive in), grazing creatures stabilise the habitat towards arid, desert biomes, and then have to move on to find more grass.

Of course, I’ll have to rework some of the code for this: the ‘buckets’ system already exists in the development version. But that’s the nature of prototyping.

[The following day]

Welp, that’s done. This actually makes the environment feel a lot more ‘directed’, since you can now pinpoint the source of every fertility change: it’s either grazing creatures, trees or water. I might have to add a few more fertility-change sources, just to make it less predictable.

Ultimately, this change leaves the grass itself as little more than an aesthetic item. Oh well: the system’s in place now, it’s useful in defining the presence and quantity of grazable material in each biome, and when even the placeholder art looks good, you know you’re doing something right.

GRASS GRASS GRASS GRASS GRASS GRASS

Cheers,
Qu

He doesn’t mean it about the vegans. We actually think vegans are pretty awesome: it must take a lot of willpower and strength of conviction.

Please don’t kill us with your psychic vegan powers.



IndieGoGo Campaign

Species will be on IndieGoGo by the end of the week.

This crowdfunding campaign has been a while in development: the FAQ has been promising it since at least April, and the backstory is long, arduous and painful, so I’ll spare you that much and move on to what we hope to achieve with it.

The obvious one is funding for basic expenses. Species has website costs, business costs and software costs to cover. At the moment the project is acting like that housemate who eats all your food and doesn’t pay any rent. Friggin’ pre-teens. GET A REAL JOB.

Less obvious, but waaay more exciting, is expansion costs. Development so far has been done part-time by myself, our artist Jade, and our sound developer Brain Sugar. Paying them, spending more time on it, and possibly (if things go really well and we find the right person) expanding the team, could accelerate the game’s development significantly.

And more exciting than either of those for me personally, is publicity. I’ll never be able to measure Species’ success by money: instead I want to see people playing it, enjoying it, talking about it, studying it, learning from it, raging at it, calling it science, calling it blasphemy, giving me suggestions and ideas I can steal (er, use) to make the game even better, and generally just PLAYING IT. Wait, I already said that one.

Anyway, this campaign is the perfect excuse to go against my introverted nature and shout madly from the rooftops, and encourage everyone who supports the project to do their own shouting until everyone who wants Species to succeed is raising their voices to heaven in a beautiful, amazing chorus of… no, who am I kidding, that would sound like someone torturing a load of cats. But it’d still be pretty cool.

There will of course be tiered perks/rewards for donors, including the classic preorder-the-game entry-level perk. Above that, we’ve limited ourselves to stuff we can deliver digitally to keep production costs down. This will include (among other things) the soundtrack, “playtester” status, several ways to get your name into the game, an exclusive copy of 0.4.1 I’ve been working on in the background, and a few chances to influence the development of the game for high-tier donors.

Oh, and we’re also makkin’ a proper trailer and upgrading the website to look less like a dog’s breakfast. That’s why the forums are down (actually they’re down ’cause I’m an idiot who really should learn to read installation instructions before trying to upgrade stuff. But we’re working on fixing them. Will keep you updated).

Cheers,
Qu.

PS: And don’t worry: 0.5.0 development will continue apace through the course of the campaign. MULTITASKING!

PPS: The vast majority of humans think they’re pretty good multitaskers. They’re not. They’re embarrassingly terrible at it. But they think they’re multitaskers because they’re pretty fast at switching from one train of thought to another, something the average human brain is actually remarkably good at. So yeah, I won’t actually be multitasking, I’ll be switching, but shouting that as a way to punctuate the end of a sentence is nowhere near as much fun.

PPPS: Spare time? What’s that?



Depth

I’ve written this post over and over again, and dumped what I’ve written just as many times. There are so many different aspects to “depth” that I strongly doubt my ability to articulate them all. Nevertheless…

“Depth” means different things depending on what you’re talking about. Depth of setting, for instance, is a very different thing to depth of characterisation… and both are aspects of depth of narrative. The whole concept is like a fractal algorithm of amateurish literary criticism.

This post is about depth of gameplay, which narrows things down, but not by much: is the gameplay in Species about how the player interacts with the simulation, or about how the simulation interacts with itself?

If it’s the player, we have a problem: 0.4.1 doesn’t really have much in the way of that sort of gameplay, so depth there is pretty meaningless. Sure you can dramatically affect the lives of individual creatures, but unless you spend a lot of time playing about with the feed and kill buttons, your influence on the simulation as a whole will be negligible.

Which leads us to how the simulation affects itself, and to yet another facet of gameplay depth: is it about the individual creatures, or the simulation as a whole?

On a “simulation as a whole” level, Species has a surprising and impressive amount of depth. Population explosions, extinctions, punctuated equilibrium, convergence, speciation, biogeography… by recreating the basic mechanisms of evolution, we managed to unlock a smorgasbord of unpredictable, but understandable, results from a variety of complex, interacting simulations. This most certainly qualifies as depth.

And on an “individual creature” level the game still feels shallow to me. It took me a while, but I think I’ve finally worked out why, and it has to do with what the term “Depth” actually refers to.

When it comes to depth of any variety, what matters is layers.

“Video games are like onions…”

Consider characterisation: a shallow character is exactly what they appear to be: when you peel back the first layer there’s nothing underneath it. If, even during their most vulnerable moment, the badass action hero is still spouting one-liners and generally acting tough, then they’re a shallow, single-layer character.

Deeper characters might have two layers: maybe the badass character is secretly afraid, but disguises it until the moment you see through the mask. That’s a two-layer character. The mark of a truly great character-writer is the ability to write many of these layers, onioned on top of each other… is a character a selfish bastard or a selfless hero? Intensely loyal, or aware of the need to make sacrifices? Hilarious or serious?

Or all of them at once?

All depth follows this basic rule, it’s just that the layers change their nature: in gameplay, the layers are interacting systems.

Here’s an example: guns. The shallowest possible first person shooter would only have a single type of gun, where the only difference is in damage. The next layer up has one or two competing stats: do you sacrifice damage for fire rate (SMG), or the inverse (Sniper Rifle)? From there, every interacting system that you add on increases the depth: do critical hits like headshots do extra damage? Are different types of guns effective against different types of enemies? And every time you add a new system, the “best” result becomes less predictable, more open to player choice and creativity. At the upper end of the scale you have games like Borderlands, where gun-depth is taken to its logical extreme: between wildly varying enemies, elemental effects, level and rarity, and the insanely unbalanced stats of each of the different manufacturers, there simply are no “best” weapons in the game.

The depth in Species as an evolution simulator is similar: once you peel back the “evolution” layer, you find mutation, variation and natural selection. And once you peel back “natural selection”, you find the feeding rules, the walking rules, the speed and stamina stats, combat, eyesight, behavioral modifiers… depth.

But that’s at the high-end statistical level. From a closer vantage, when you’re just watching creatures walk about and interact, you’re seeing all of those systems directly. You don’t see how a higher walk rate affects a creature’s survival and reproduction, you just see them walking faster.

Remember our earlier example of an FPS whose only variable stat was damage? That’s what the combat system in Species currently consists of: a damage value and a hit-points value. It’s as shallow as it possibly can be, because the depth is layered on top of it rather than underneath it. It’s a subsystem of the evolution simulator, not a system in-and-of itself.

And if the only aspect of the game anyone was interested in was the statistical evolution, that would probably be fine. The combat allows creatures to kill each other, which affects their natural selection: it’s done its duty. But people are going to watch the combat, because it’s a far more interesting, far more personal narrative. And they’re going to get bored with it, because independently of the evolution simulation, it’s shallow.

This applies to most of the specific-creature systems: they’re all no more than one or two layers deep, if you ignore the evolutionary layers on top of them. Genetics are represented by a simple floating point number list. Creature behavior and statistics all come down to equations. Health and energy are represented as scalar values.

For me, this is a large priority for improvement: adding depth at the individual-creature level not only makes the creatures’ lives more interesting, it cascades up the system and affects their evolution in usually-unexpected ways. Every layer added improves the game’s realism, making the top-most simulation deeper and more interesting.

It’s not the only area for improvement, of course: as already mentioned, player interaction is currently limited to individual creatures. This results in a nasty discrepancy between the gameplay on an individual level and the simulation-depth on a statistical level. Making the creatures’ lives more interesting is one way to deal with this: the other is giving the player ways to interact on a global level.

Both of those, then, will be major priorities once 0.5.0 is out of the way. I’d say I’m more than halfway done with that: everything’s in place, it just needs a lot of tweaking (for example, trees should probably have a stabilising effect on the biome under them, rather than simply sucking it dry as they grow).

And possibly some art assets to replace the placeholder grass:

GRASS GRASS GRASS GRASS GRASS GRASS GRASS GRASS GRASS GRASS GRASS GRASS

Cheers,
Qu

PS: Dammit, there’s so much else I want to say on the subject of depth, like how physical laws provide a depth barrier the best simulations manage to reach and model (e.g. mass = density * volume), and how even an apparently ultra-shallow game like Serious Sam manages to combine enough systems and tactics to be deeper than it appears, and how the deeper a simulation is, the more you have to learn about it in order to do what you want to within it (tangential learning)…

Who’d have thought the subject of “Depth” would be so deep?

And he didn’t use Spore as an example even once. Huh. That was… unexpected.



State Batching and Instanced Models

First of all everyone, sorry about the late post. Long story short, I’ve been trying to set up a Kickstarter campaign for Species to pay the people working with me, cover the website costs, and fund further expansion of the game. That failed miserably a week or two back due to Amazon Payments not supporting non-US applicants (AT. ALL. Trust me on this: if the name on the account is not a US-based individual, it doesn’t matter how many hoops you jump through they’ll still turn you down), so my motivation for blogging was somewhat sapped.

I’ve since moved to IndieGoGo, who aren’t as popular but support international projects (and who deserve a bit more love), and am gradually getting my motivation back.

Don’t worry, the campaign shouldn’t interfere with my work on 0.5.0. If anything, it’ll result in some functionality earlier: I’m working on a super secret version of 0.4.1 with fancy new prototype features as an IndieGoGo exclusive. The campaign should be going live sometime in November.

Alright, on to the post.

For all that the Species graphics engine doesn’t look like much, it’s actually quite complex. This is because, even though it’s not performing any fancy tricks like real-time shadows or normal mapping (both things I’ve implemented in other engines), it has to send a lot more information to the graphics card than most games.

For example, consider what a generic FPS might have on screen. The sky and terrain, a few types of trees and environmentals, and maybe three types of enemies underneath the wrapping of browny bloomy violence.

Also a destroyed city at least 60% of the time

Now compare that to Species. The sky and terrain, a few types of trees and environmentals… and then hundreds of unique types and shapes of torsos, limbs, heads, tails, necks and feet.

Note that I’m specifying *types* of object, not quantity. That’s because there are actually quite a few well-known tricks for drawing lots and lots of similar models to the screen. That’s how Crysis (the good one) can render more vegetation per-square meter than exists in 99% of western civilisation.

I’ll take this opportunity to discuss two of those tricks, both implemented in Species to some degree. I’ve already discussed billboards, so this will be about 3d models.

Firstly, it’s important to understand how the CPU tells the GPU to render a model. It’s not a matter of just saying “Draw Model”: there’s a lot of ancillary information and overhead that passes between the two processors. If it helps, think of it as painting a picture: you don’t just start painting, you first need to set up the easel, get out your paints, etc.

//Model 1               
[Specify shader]               [Set up easel]
[Send Model 1 Vertex List]     [Decide what to paint]
[Send Model 1 Index List]      [Work out what colours you'll need]
[Send Textures]                [Get out your colours]
[Set Shader Parameters]        [Put canvas on easel]
[Draw]                         [Paint]

//Model 2
[Specify shader]               [Set up easel]
[Send Model 2 Vertex List]     [Decide what to paint]
[Send Model 2 Index List]      [Work out what colours you'll need]
[Send Textures]                [Get out your colours]
[Set Shader Parameters]        [Put canvas on easel]
[Draw]                         [Paint]

//Model 3
[Specify shader]               [Set up easel]
[Send Model 3 Vertex List]     [Decide what to paint]
[Send Model 3 Index List]      [Work out what colours you'll need]
[Send Textures]                [Get out your colours]
[Set Shader Parameters]        [Put canvas on easel]
[Draw]                         [Paint]

As you can see, this is quite an expensive routine. All of these processes add to the amount of time it takes to draw each of these models. But as I said, there are tricks to make it cheaper. The first, and simplest to implement, is State Batching.

State Batching involves simply working out what things you only need to send to the GPU once. Let us suppose that in the above example all of the models are identical. In that case, it would be possible to rearrange things to set the common features (the shader, vertex lists, index lists and textures) only once:

//Starting pass
[Specify Shader]               [Set up easel]
[Send Model Vertex List]       [Decide what to paint]
[Send Model Index List]        [Work out what colours you'll need]
[Send Textures]                [Get out your colours]
                    
//Model 1
[Set Shader Parameters]        [Put canvas on easel]
[Draw]                         [Paint]

//Model 2
[Set Shader Parameters]        [Change canvas]
[Draw]                         [Paint]

//Model 3
[Set Shader Parameters]        [Change canvas]
[Draw]                         [Paint]

Since the position, rotation and scale can be sent as Shader Parameters, this can be used to render hundreds of similar objects more cheaply.

The torso rendering in Species uses a similar system: sending the torso model only once, but specifying colour, width and height values as shader parameters. It’s not quite as simple as this example (torso models can have a variety of textures, so the models need to be sorted by texture in order to avoid sending them every time), but it makes rendering lots of them much cheaper.
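The sort-by-texture part is worth a sketch of its own: group the models so each texture only needs to be bound once per frame, and send the per-model differences as cheap shader parameters. (Python for readability; the ‘gpu’ list just records commands, and everything here is illustrative rather than actual Species code.)

```python
# Sketch of sort-by-texture state batching: models are grouped so that
# expensive state changes (texture binds) happen once per group rather
# than once per model. The 'gpu' command list is a stand-in for real
# GPU calls; all names here are illustrative.

def draw_batched(models, gpu):
    state_changes = 0
    current_texture = None
    # Sorting by texture guarantees each texture is bound exactly once.
    for model in sorted(models, key=lambda m: m["texture"]):
        if model["texture"] != current_texture:
            gpu.append(("bind_texture", model["texture"]))
            current_texture = model["texture"]
            state_changes += 1
        # Per-model data (colour, width, height) travels as cheap
        # shader parameters instead of a full state setup.
        gpu.append(("draw", model["name"]))
    return state_changes

models = [
    {"name": "torso_a", "texture": "fur"},
    {"name": "torso_b", "texture": "scales"},
    {"name": "torso_c", "texture": "fur"},
]
gpu = []
changes = draw_batched(models, gpu)  # 2 texture binds for 3 draws
```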

But this system still calls the Draw() method for every single model. What if you want to go cheaper still, for thousands of trees? How can you draw all these models for the cost of a single Draw() method?

//Instanced Models
[Send ALL OF THE THINGS]       [Umm...]
[Draw ALL OF THE THINGS]       [Yeah, I got nothing]

Instanced models defy the analogy for a variety of reasons, but they essentially boil down to sending all the data at once, and drawing all the objects as if they were a single model. Without any of the overhead associated with more than one model, they draw much, much quicker.

How they do this depends on the type of instancing you use, which I’m not going to get into: suffice it to say, older machines do instancing differently from newer machines, and consoles are different again. What they share in common is sending all the data required to draw all the models to the GPU in one hit.
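That common core, packing all the per-instance data into one buffer, can be sketched like so (Python for readability; the per-instance layout here, position plus tint colour, is just an example, not the actual Species vertex format):

```python
# Sketch of the core instancing idea: pack the per-instance data into
# one flat buffer alongside the single shared mesh, then issue one draw
# for the whole batch. Field layout is illustrative.

def build_instance_buffer(trees):
    buffer = []
    for tree in trees:
        # 3 floats of position + 3 floats of colour per instance;
        # the GPU reads this next to the shared tree mesh.
        buffer.extend(tree["position"])
        buffer.extend(tree["colour"])
    return buffer

trees = [
    {"position": (0.0, 0.0, 0.0), "colour": (0.2, 0.8, 0.2)},
    {"position": (5.0, 0.0, 3.0), "colour": (0.3, 0.7, 0.1)},
]
buf = build_instance_buffer(trees)  # 12 floats, one draw call for both
```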

But the natural restriction on this is that all the models have to be the same. I am able to draw lots of identical trees this way, but not limbs, because they’re shaped and animated on an individual basis.

So naturally, given that the trees need to be identical to use this system, I decided to give the 3d trees in Species 0.5.0 unique hereditable features. Thankfully I’ll be handling those with colours, which with some modification the system can handle, so it’s still quite viable.

I’m still working with placeholder meshes: the final result will be prettier than this.

Of course, with the hereditable features working, I decided to push my luck a bit further by giving the trees consumable foliage. Rather than the 3d trees shrinking when creatures eat them, in 0.5.0 the creatures will eat away at the foliage instead, leaving a stem or trunk behind. I’ve yet to see how that turns out, but in theory at least it should be quite possible.

I don’t really have a snappy conclusion to this post, so here’s a mystery picture:

What.

“What is that I don’t even…”


The Science is Already Awesome

Loads of stuff going on behind the scenes right now: in addition to 0.5.0 (Got biomes and 3d trees done! Currently working on the interaction between the two) I’m also working hard on two parallel Species-related goals.

One has been mentioned both explicitly and implicitly several times in the last few months (I expect it to grow our audience a little), but I’m going to keep it under wraps for now because of unforeseen difficulties and explosions. The other is a super-secret prototype.

So, I promised a post on science communication at the end of that comment on Bill Nye (that whole thing is still going on, btw. Ken Ham of Answers in Genesis is really pissy about it. It’s hilarious). I can’t promise the following won’t come out a little bit rambly (okay, a lot bit rambly), but here are my thoughts on the subject…

Science communication is a strange and somewhat tainted field, mostly because many of the people actively engaged in it don’t seem to realise they’re engaged in it. It’s deceptively easy to categorise the world into scientists and non-scientists, but most science communicators aren’t actually scientist communicators: they’re teachers, journalists, authors, TV personalities and (disturbingly) politicians and pundits.

There are some wonderful exceptions: scientist bloggers and writers like… (EDIT: Nevermind. I started writing this list and couldn’t stop, plus then I went researching and holy mother of cheeses there are a lot of them out there and I don’t read nearly enough of them regularly). But their audience is the people who go looking for scientist communicators: the vast majority of people don’t read proudly nerdy stuff like science blogs, and “proudly nerdy” is a good description of most scientist communicators.

In short, the general public get their info from other sources.

And those other sources usually aren’t scientists. In fact, I’d say the vast majority of the time they aren’t scientists. In general, they’re either…

a) people with a moderate understanding of science, tasked to transfer a preset curriculum of facts to a group of uninterested teenagers so they can pass their tests and promptly forget everything but the most trivial framework, or…
b) people with a barely rudimentary understanding of science, tasked to produce something they think other people with a barely rudimentary understanding of science would want to read/watch/play, or…
c) people with no understanding of science, who misheard something with sciencey sounding jargon in it and latched onto their misunderstanding as a certitude.

So that’s why we need dedicated science communicators: not just underpaid and overworked journalists tasked with getting a hyped up article about a discovery they don’t understand out in a few hours, and not just underpaid and overworked teachers tasked with making sure their students do okay on a standardised test at the end of the term. We need to expand the field.

This is especially vital in our modern society. We live in a world with unprecedented knowledge, and unprecedented access to that knowledge, yet science is still seen as an esoteric concept: the domain of nerds and geeks who use multisyllabic words like “esoteric” and “multisyllabic”. In a world where 5 minutes on wikipedia can inform anyone of things you used to need a bachelor’s degree to know, somehow people in general trust science even less than they used to.

So we need science communicators. I trust the scientists themselves to keep pushing at the boundaries, but if all they’re doing is pushing the boundaries further and further away from the public, instead of bringing the public along for the ride, then science will suffer and society as a whole will suffer. On a societal level, education is something with no negative consequences and oh so many benefits.* Inversely, ignorance tears us all down.

*note: Okay, hypothetically, there is a level at which too many people are educated and with the surplus of skilled labour not enough are willing to do unskilled labour, and the country collapses. In reality, no society has yet reached that level: if the US had, for example, this chart would show equal levels of unemployment at all levels of education. It’s an interesting concept for science fiction to explore, though.

The alternative to an education-induced apocalypse: equally interesting.

However, we also have to be wary. Science communication is an easy thing to fail at. It requires two skill sets that, stereotypically at least, are diametrically opposed: a logical, analytic mind to understand the specifics of the science in the first place, and an ability to market yourself and your subject: to communicate enthusiasm and empathise with your audience. It requires you to be a Spock and a McCoy at the same time. (This blog’s first Star Trek analogy. Oh… yeeeaaah.)

To showcase this, here are a few examples of both successes and failures.

First of all, the Mythbusters. Indubitably a success. They might lack basic rigour, but as Zombie Feynman says: they got a whole generation interested in science. They made science cool. Ergo, if we want to communicate science, we should follow their example: sciencey stuff + funny hosts + blowing stuff up. Right?

No. None of these things were what made Mythbusters cool. I only ever saw one of the subsequent copycat shows, a series called Brainiac hosted by Richard Hammond (who, for the record, is actually pretty good at communicating this stuff in more scripted shows, like documentaries), and it demonstrated quite thoroughly that “making science cool” is probably one of the worst things you can do to it. The show had its moments, but generally it was just a bunch of unconnected science-skits wrapped up in hype and silliness. If you try to make science cool you fail at both science and coolness.

If you take a closer look at most episodes of Mythbusters you see fairly quickly that they’re not trying to be cool, and the moments when they are, are painfully scripted. Adam, Jamie and the Build Team are by far at their best when they’re improvising, debating, making mistakes and being silly: in other words, acting like human beings. That human face, in addition to the usually excellent pacing of each episode (the show follows a pacing structure which should be familiar to anyone who has done a rudimentary literature course or remembers their high-school English), is what really made the Mythbusters popular. It was more than just a bunch of guys faffing about with science trivia and ‘splosions: it was a story, built around the scientific method.

Stand there and look cool for the camera. No, that’s constipated, try again.

Let’s take a look at another example, this time not of a success but of a failure, and not a particular work, but an entire genre. Edutainment.

For those of you who didn’t just hiss and cringe away from your computer, and thus we can assume were spared the horror of actually playing one of these games, edutainment was (and to an extent still is) the product of a bunch of people (likely older people) who saw that kids liked video games and hated being taught stuff, and thought “We can combine the two to make kids like being taught stuff!”

Unfortunately, the people put in charge of designing and making the resultant wave of educational video games didn’t understand video games. Based on the examples I’ve seen, it’s possible they didn’t understand teaching either. In some of the worst cases, I am forced to wonder if they had ever actually met a human child. For the most part, the games were what you’d get if you took a generically poor ‘memorise this’ classroom lecture and made the teacher stand behind a cardboard cutout of a cartoon character.

But it’s unconstructive (fun, but unconstructive) for me to keep insulting edutainment games without exploring why they failed. And to explore that, I need some successes to compare to. Now I’m sure that there are some edutainment games which are entertaining, but I’m not familiar with enough of them to know which ones those are. The only edutainment game I remember genuinely enjoying was an aquarium one, where you could gather fish by solving math puzzles, which appealed to my latent OCD in the same way that Pokemon did for cooler kids than I (yes, I was the kid who wasn’t cool enough to give a crap about Pokemon).

But I’m not really looking for a successful edutainment game: I’m looking for a successful game which educates. And those are surprisingly common, once you realise that games don’t have to try to educate in order to do so. This is due to a thing called Tangential Learning (yes, another link to Extra Credits. If you’re at all interested in games and you’re not watching the series, you should be).

My very first proper game, when I was in primary school, was The Incredible Machine. Anyone else remember T.I.M? It was basically a 2d Rube Goldberg Machine-maker, where you could place balls, platforms, trampolines, switches, lasers… a whole variety of things. And as a result of that game, long before I would have been capable of understanding a word with as many syllables as “algorithm”, I was making them. “The bowling ball falls to this light switch, which activates the fan, which blows the tennis ball off its platform…” The same logical, sequential thought patterns that game worked by would later come in handy when I was learning how to code.

When I was a little older, I picked up Sim City. Sim City taught me about complex, interacting systems in society: how doing one thing in one area could have dire consequences in another, and how the easy route (borrowing money) can get you into hot water later down the track. (Although mostly what I remember learning from it is that giant spider robots are bad news and that you can get money for nothing if you type F-U-N-D-S.)

And just to prove that games like these aren’t a product of the past, I highly encourage everyone to check out Kerbal Space Program. For all that I thought I understood orbital physics, I never really grasped them intuitively until playing this game, which is also a whole load of fun (especially if you like explosions, and let’s face it, who doesn’t?).

At this point, you might be noticing the common thread: they’re all simulations. This means that what the game teaches you isn’t something tacked on afterwards, like a quiz or a cutscene: it’s a fundamental part of the game mechanics. By building a game around a simulation, they’ve improved both: the simulation provides depth to the game, and the game makes the simulation entertaining. And because the game mechanics revolve around the simulation, simply playing games like these tests your understanding of the simulation in a way conventional educational curricula are simply incapable of.

This is actually a similar message to the one we took from Mythbusters earlier: you don’t have to make the science/learning fun/cool, like awesomeness is something you have to tack on to science in order to sell it, or worse: like science is mutually exclusive with awesomeness and you need to sacrifice one for the other in order to be accessible (they know who they are). The science is already awesome: what makes a Science Communicator good is their ability to show us how awesome it truly is.

That’s what we need to get across. We shouldn’t be teaching people with games as if you can just pour information into their brain: we should be showing them how awesome the information is, letting them drink it up of their own volition, and then telling them where they can find more awesomeness of the same nature. That’s what the best communicators (the Neil deGrasse Tysons, the David Attenboroughs, the Carl Sagans) keep telling us.

Already awesome.

And, ultimately that’s what I’m trying to do with Species: not create a game that’s awesome and scientific, but create a game that’s awesome because it’s scientific.

Cheers,
Qu

“The optimism, IT BURNS!”



Multi-texturing and Biomes

One of the priorities for 0.5.0 (the environmental update) has been to overhaul the multi-texturing system.

In 0.4.1 the best the engine could do was 4 textures (desert, grass, forests and cliffs), and those textures were placed at world-start, never changing. The end result was a very static map and a very boring environment: basic tree loss/growth was the entire extent of environmental dynamics.

All this changes in 0.5.0. The design plan calls for a dynamic, responsive environment, and by crikey that’s what we’re going to have. (Crikey is Australian for… something. I dunno. Honestly I don’t even know what the word means, but it’s a stipulation of citizenship that we all say it at least once a week)

The first thing to add is more biomes. Desert/grass/forest works well as a basic, semi-tropical biome set, but what about colder biomes? Or warmer ones? Or wetter ones or drier ones? My current biome map concept has more than 20 different biomes drawn on a temperature/fertility map, including some hilariously extreme ones.

If you can’t survive in a lava lake you’re simply not trying hard enough.

But that many biomes means a whole different approach to multi-texturing… or at least, I thought it did. This was when development started to go bad…

In 0.4.1 I used a multitexturing method called ‘texture splatting’. Imagine you’re painting the ground texture on a blank canvas. What texture splatting does is give you a ‘stencil’ for each texture: so you can paint forest through the red stencil, grass through the green stencil and desert through the blue one, and by the time you’ve used all the stencils you’ve painted the entire terrain. All these stencils are nicely wrapped up into a single texture, called the blend texture.
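The stencil idea boils down to a weighted sum per pixel. Here’s a minimal sketch in Python (the game itself is XNA/HLSL, so this is just the per-pixel logic; all the colour values are made up for illustration):

```python
# Texture splatting, per pixel: the blend texture's R/G/B channels act as
# stencils saying how much of each ground texture shows through.
def splat(blend_rgb, forest, grass, desert):
    """blend_rgb: (r, g, b) stencil weights summing to 1.0.
    forest/grass/desert: (r, g, b) colours sampled from each texture."""
    r, g, b = blend_rgb
    return tuple(r * f + g * gr + b * d
                 for f, gr, d in zip(forest, grass, desert))

# A pixel that's half forest, half grass, no desert:
colour = splat((0.5, 0.5, 0.0),
               forest=(0.1, 0.4, 0.1),
               grass=(0.3, 0.8, 0.2),
               desert=(0.9, 0.8, 0.5))
```

The fourth texture (cliffs) fits in the alpha channel the same way, which is exactly why four is the natural limit for a single blend texture.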

Unfortunately, this only works for a limited number of textures per draw call: you can’t load more than a set limit of textures onto the GPU without it having a fit. So in order to render an unlimited number of textures, I’d need to change how I was approaching it.

My first attempt at a replacement system was sort of like an extremely complicated colouring book, where each grid square is numbered 1, 2, 3, 4, etc. and our hypothetical artist fills in the grid with grass/desert/forest from a colour key. Since we can include as many numbers as we like in the key, we can have as many biomes as we want.

Quick! Chop down a tree with your fists!

Once that’s done, our artist is faced with a problem: (s)he has to blur these colours together smoothly without pixelation or sharp edges. And since it’s going to be animated, a simple linear interpolation just isn’t going to cut it.

As it turns out, this is HARD. My own approach was to sample the biome-legend multiple times to give each pixel *four* biomes, each with a weighting value, which I then folded into the same channel as the key (effectively, I designated the first digit of the channel as the key, and the remaining digits as the weight).
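To illustrate the digit trick, here’s my reconstruction of what that packing might look like. This is a guess at the scheme, not the actual Species code, and it assumes a channel with enough precision to hold three decimal digits:

```python
def pack(key, weight):
    """Fold a biome key (0-9) and a blend weight (0.00-0.99) into one
    channel value: the first digit is the key, the rest are the weight."""
    return key * 100 + int(round(weight * 100))

def unpack(value):
    """Recover the biome key and weight from a packed channel value."""
    return value // 100, (value % 100) / 100.0
```

One immediate wrinkle with a scheme like this: a weight of exactly 1.0 would roll over into the key digit, so full weight has to be represented as 0.99 (or handled as a special case). Easy to see why the interpolation got hairy.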

Honestly, the most surprising thing about this ridiculously complicated strategy is that it worked… mostly.

The grid is meant to be there.

But “works in general” doesn’t mean “works well enough to use”. On closer inspection, the system produced artifacts all over the terrain. (not a typo: graphical glitches are “artefacts”, ancient relics are “artifacts”. I read that somewhere and internalised it, so it must be true) I actually got it working perfectly wherever 2 biomes met, but that was the limit: at any intersection with more than 2 it produced hard edges and odd colours, and the moment it was animated these artefacts started jumping about and generally making themselves easy to spot.

How many artefacts can you count?

So I ended up scrapping the idea entirely, and starting over with a new strategy, one requiring less math and more art.

This new strategy was much simpler: we go back to using the ‘stencil painting’ system, but this time once we’re done painting with the first 4 stencils (biomes), we put a new, transparent canvas over the existing canvas and keep on painting on that with a new set of stencils. Rinse and repeat.

This method turned out to have its own set of pitfalls, chief among them alpha-blending and redrawing the entire terrain multiple times, with different textures each time. For an item which takes up as much of the screen as the terrain, this is a large graphics cost, and in a GPU-bound game probably would have spelled the end of this strategy. But Species is CPU-bound: it has GPU cycles to spare. So full-steam ahead.
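A rough sketch of how the layered passes combine, again in Python rather than shader code (the pass structure is my reading of the description above, not the actual renderer):

```python
def alpha_over(dst, src, alpha):
    """Standard 'over' blend: draw src on top of dst at the given opacity."""
    return tuple(s * alpha + d * (1.0 - alpha) for s, d in zip(src, dst))

def render_terrain(pixel_passes):
    """pixel_passes: list of (colour, alpha) per draw pass for one pixel.
    The first pass (the first 4 biomes) is opaque; each later pass is a
    transparent layer splatting its own set of 4 biomes over the top."""
    colour, _ = pixel_passes[0]
    for src, alpha in pixel_passes[1:]:
        colour = alpha_over(colour, src, alpha)
    return colour
```

The appeal is that each pass only ever needs 4 textures plus a blend texture on the GPU at once, which neatly sidesteps the texture limit, at the cost of drawing the whole terrain once per biome-set.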

“Everlasting pure darkness” will not be a biome in the full game. (Unless someone mods it in).

The artefacts of this method also turned out to be quite different to the ones I faced with the other method. The other method loved to produce singular, localised artefacts: hard edges and biome colours where they shouldn’t be. This method’s artefacts usually affected the entire terrain. I’d say two in particular are worthy of noting here, mainly because I haven’t actually managed to fix them: double rendered polygons and biome-set edges.

Biome-set edges were where one transparent layer tried to fade out into another. I never had any trouble with the inter-set blending, but proper alpha blending is a temperamental thing. In this case, because the biome colour fades out at the same time as the opacity does, the end result was a faded-but-noticeable black ‘border’ between different blend-sets.

Oh so that’s what wars are fought over.

I managed to ‘fill-in’ the majority of these borders by extending the colour to the very edge, but there is still a faint one around the first draw pass. Thankfully it’s subtle, and dealing with it had some odd side effects, so I’m going to leave that one alone. It’s not hugely noticeable.

Double-rendered polygons, on the other hand, are a problem.

It’s zombie-triangle apocalypse! Only trigonometry geeks will survive.

This isn’t actually a problem with the rendering method: it’s a problem with the QuadTerrain itself, which I didn’t know about until the rendering method made it visible. See, when the terrain is completely opaque, rendering a polygon twice has no effect. The colour from both renderings is the same, so it’s an invisible artefact. But when you render a *transparent* object, like, say… one of the 4-biome passes of the terrain… *then* it becomes quite visible, as you can see above.

But fixing it means re-familiarising myself with the QuadTerrain class, which I haven’t touched in quite some time. I’ve already made a little progress, eliminating about half of the artefacts above with one fix. Hopefully the next fix will get the rest, but I doubt it: bugs like this are often inversely exponential. You might be able to fix the majority of them easily, but there are always one or two subtle, extremely well hidden ones that you have next-to-no chance of ever finding.

Oh well, best I can do is to make sure I get most of them.

Currently, I have 23 biomes defined as rough shapes on a temperature/fertility axis. This includes ‘extreme’ biomes, like lava and salt-plains, and a number of underwater biomes that will only be found… err, underwater. (Fertility will at least partly be determined by height: everything below the water plane will be water).

So far everything is coming together as planned. Biomes are done, 3d trees are done (I’ll do a proper post on them later), I’m midway through tying the two together, and soon the nanozombies will be unleashed on the unsuspecting… oh wait wrong blog. You didn’t hear that.

Qu

“Other stipulations of Australian citizenship include:

– Must defend vegemite no matter personal opinion of taste. (sticky salty gunk)
– Must be willing to throw foreign tourists in front of a croc to save yourself. (Or was that the other way around?)
– Must know how to defend against drop-bears (trick question: there is no way to defend against drop bears)”

