Posts Tagged temporally displaced self-hatred
a.k.a. Requiem for a crappy blogger.
Geez, I really suck at this “blogging” thing, don’t I? “You’re supposed to leave something here more than once a month, dumbass!”
Okay, revival time. I’ll throw up another cheatpost-from-the-past for today and work on having something actually worth reading here for every weekend from now on with more links and maybe something else, at least until the world ends in 2012. No promises, though: I have it on good authority that the world might definitely totally end in October, this time for realz, and if that happens I won’t be able to keep up the once-a-week schedule until the world ends in 2012. In fact, the schedule is already on thin ice, what with the world ending two months ago in May and what-not. But I’ll try to keep up with it anyway.
In the meantime, enjoy your cheatpost:
Written September 2008, prior to the billboard vegetation system
The next step will be vegetation. Trees in Species will be edible to creatures with herbivorous and omnivorous mouths: how much energy they get out of a tree will depend on its type and on their (mutable) digestive system.
One of the major features of Species will be the mutability of the organisms, and the complex effects of natural selection that result. As an example, the tree-nutrition system (as currently envisaged) will work like this: each tree type has a nutrient value and a digestibility value, and each species has an acid value. High acid levels would allow the creature to digest the indigestible, like bark or cacti, but would mean fewer nutrients could be converted to energy. In comparison, low acid would make it only able to survive on easily digestible vegetation, like fruit or green grass, but the creature would be able to take full advantage of the nutrients.
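For illustration, here’s a toy version of that trade-off in Python. The function name, the cutoff rule and every number are invented for this sketch, not taken from the actual design (and, as the next note explains, this system never shipped):

```python
def energy_gained(nutrients, digestibility, acid):
    """Toy model of the envisaged trade-off.  All values in [0, 1]:
    strong acid lets you digest tough food, but wastes nutrients."""
    if acid < 1.0 - digestibility:
        return 0.0              # food too tough for this stomach
    efficiency = 1.0 - acid     # strong acid destroys nutrients
    return nutrients * efficiency

# A low-acid grazer thrives on soft, rich food...
assert energy_gained(0.8, 0.9, 0.2) == 0.8 * 0.8
# ...but can't extract anything from bark:
assert energy_gained(0.5, 0.1, 0.2) == 0.0
```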
[Present me says: Okay, this requires comment. This sort of complexity does not exist in the vegetation or digestion systems. It was planned, but like many similar features, it was removed from the design prior to being implemented. The primary reason for this is extremely simple: Species is about seeing evolution happen. Features like this are invisible. End of story]
This level of complexity applies to all the organisms’ traits, but I’m trying to make it intuitive and visible. For example, shoulder muscle, arm muscle, arm length, hand type and hand size would all affect that arm’s damage value (which is complex), but in an intuitive manner: if you see a huge creature with massive, muscular arms and giant claws, you can instantly identify that it’s dangerous. In addition, since it will have formed by natural selection, you can draw other conclusions: the species has to fight fairly often with its arms in order to reproduce. If this conclusion turns out to be right, I’ll know I’ve set up the statistics system properly.
[Present me says: this is still mostly accurate, with one exception: combat is no longer limb-specific. It was getting too confusing and arbitrary to have creatures with 5 different damage values depending on what limb they used to attack (for example, a big muscley creature with spikes and claws and teeth could be killed by a far inferior opponent if it decided to attack said opponent with its tail), so this has been streamlined into a single value which takes modifiers from the limbs, head, tail and features]
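A sketch of that streamlined version, in Python. The base value and modifiers are made-up numbers, and the function is mine, not the game’s:

```python
def attack_damage(base, part_modifiers):
    """One pooled damage stat per creature, built from modifiers on
    its parts, instead of a separate value per limb."""
    return base + sum(part_modifiers.values())

# Every attack uses the same pooled value, so a heavily armed
# creature can't accidentally fight with its weakest limb.
spiky = attack_damage(10, {"arms": 6, "claws": 4, "teeth": 3, "tail": 1})
assert spiky == 24
```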
It’s not only the physical traits that will be mutable – behaviour will also be dictated by natural selection. All creatures will have one or two fixed behaviours (eat and mate are the obvious ones), but other behaviour will be dictated by a special AI system. Of course, these behavioural traits will be genetic (instinct): I had considered making individuals capable of ‘learning’ behaviours, but I suspect that would be both ridiculously complex and entirely useless: individual creatures don’t live very long, so learning wouldn’t do them much good. The species can learn, but not the individual. A larger brain size would allow more instinctual behaviours.
[Present me says: That wasn’t so bad. I was expecting a whole load of completely wrong detail on the behavioral system, but it’s vague enough to be fairly… well, vague. And eat and mate have been made mutable since then too!]
I could keep going on and on, but I won’t. You get the idea. I don’t really expect to appeal to a wide audience with this project, but it’s captured my imagination and I fully intend to follow it through. I’ll try to keep this blog up to date.
[Present me says: Hah! I said that back then too? I really wasn’t kidding when I said I suck at this blogging stuff, was I?]
… [/end cheatpost]
Please excuse me while I murder “Present Qu”’s face until he gives me my job back. Thank you for your continuing patience.
PS: [Present me says: OW! AAARRRGH! NOT THE FACE, NOT THE FACE! OKAY OKAY NO MORE CHEATPOSTS, I PROMISE! PLEASE, FOR THE LOVE OF ALL THAT IS SHINY, STOP DOING THAT!]
Written July 2008
In 3 dimensional graphics, a computer can only render so many triangles before it suffers death by framerate. This is a problem in most games: the artists can never put as much detail into their 3D models as they would like.
With games where you can see and walk to the horizon and beyond, this is especially awful. How can a computer render every triangle on a terrain many kilometers wide, and still get a decent framerate?
The answer is fairly simple: it can’t. But the computer can reduce the number of triangles to be rendered on the graphics card. A distant mountain doesn’t need the same level of detail as the ground under the player’s feet, after all.
This is the point of Level of Detail (LOD) Algorithms: render fewer triangles, but get a visual result very similar to what you would have had you rendered them all. There are many different styles of LOD Algorithm. Species uses a QuadTree.
A quadtree is a structure where each node is subdivided into 4 child nodes, each child node subdivided into 4 grandchild nodes, etc. In the context of a terrain, this means that the ‘root’ node would have a low detail mesh, the children would have a higher detail mesh, the grandchildren higher detail still, etc…
As the camera moves about the terrain, therefore, you can sink deeper into the quadtree and render high detail squares for nearby objects, and not sink anywhere near as far for distant objects, thus rendering them at a much lower detail.
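That traversal can be sketched in a few lines of Python. The node layout, the `depth=3` tree and the `threshold` multiplier are all invented for this sketch; the real quadtree sizes and distance test will differ:

```python
import math

def make_node(cx, cy, size, depth):
    """Build a quadtree: every node splits into four children, each
    covering a quarter of its parent's area, until depth runs out."""
    node = {"center": (cx, cy), "size": size, "children": []}
    if depth > 0:
        offset = size / 4   # child centres sit a quarter-size away
        for dx in (-offset, offset):
            for dy in (-offset, offset):
                node["children"].append(
                    make_node(cx + dx, cy + dy, size / 2, depth - 1))
    return node

def collect_render_list(node, cam, threshold=2.0, out=None):
    """Sink deeper into the tree near the camera; fall back to a
    node's own coarse mesh once the camera is far enough away."""
    if out is None:
        out = []
    dist = math.hypot(node["center"][0] - cam[0],
                      node["center"][1] - cam[1])
    if node["children"] and dist < node["size"] * threshold:
        for child in node["children"]:
            collect_render_list(child, cam, threshold, out)
    else:
        out.append(node)    # render this node at its own detail level
    return out

root = make_node(512, 512, 1024, depth=3)
nodes = collect_render_list(root, cam=(0, 0))
near = min(nodes, key=lambda n: math.hypot(*n["center"]))
far = max(nodes, key=lambda n: math.hypot(*n["center"]))
assert near["size"] < far["size"]   # fine mesh near, coarse mesh far
```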
[Present me says: this is where I seamlessly stitched in further detail from a more technically orientated post. You may not be able to follow much of the following without a background in graphics programming and/or wikipedia]
Of course, nothing is perfect. Two same-sized quadnode meshes next to each other will fit seamlessly, but imagine a high detail node next to a low detail node. The result will be artifacts known as gaps.
This can be fixed by ‘stitching’, where you either add or remove edge vertices from the nodes so that they match each other. In this case, I removed vertices from the higher detail node, creating a triangle pattern which connects to the lower detail node. In this screenshot, you can see it applied to every edge of every quadnode. In this one, it is applied correctly.
Of course, this isn’t as simple as it sounds: each quadnode is now accompanied by 9 meshes, 1 for no stitching, 4 for a single edge stitched, and another 4 for two stitched edges. Since these are generated at runtime, however, they aren’t a problem. [Present me says: well, they’re less of a problem than they would be if I tried to save them in memory. This is a case of putting up with longer loading times to reduce the memory footprint of the game]
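The selection between those nine pre-generated meshes boils down to a lookup on which edges border a coarser neighbour. A sketch (the variant names are mine, purely for illustration):

```python
# Nine index meshes per node: unstitched, four single-edge variants,
# and four adjacent-pair variants.  (With distance-based LOD a node
# never needs opposite edges stitched at once, hence 9 rather than 16.)
VARIANTS = {"none", "N", "E", "S", "W", "NE", "ES", "SW", "NW"}

def pick_variant(coarser_edges):
    """coarser_edges: the set of edges ('N', 'E', 'S', 'W') whose
    neighbouring node is one detail level coarser than this one."""
    key = "".join(e for e in "NESW" if e in coarser_edges) or "none"
    assert key in VARIANTS, "unsupported edge combination"
    return key

assert pick_variant(set()) == "none"
assert pick_variant({"E"}) == "E"
assert pick_variant({"N", "E"}) == "NE"
```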
It isn’t only distant objects that need to be LODed. Nearby nodes which are not in view, because you are looking in the opposite direction, need not be rendered at all.
By doing a collision test between the camera’s “bounding frustum” (a 3D shape representing the camera’s viewport) and a quadnode’s bounding box, we can determine which nodes are outside the view and quickly cut a whole heap of geometry from the render. By combining this with the distance testing, we can stop the traversal from sinking further into off-screen nodes as well!
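A simplified standalone sketch of that test (the node layout and plane representation are mine; in XNA the built-in BoundingFrustum/BoundingBox intersection tests presumably do the real work):

```python
def box_outside(planes, box_min, box_max):
    """Conservative AABB-vs-frustum test: the box is culled when all
    eight corners lie behind any single plane (normals point inward)."""
    corners = [(x, y, z)
               for x in (box_min[0], box_max[0])
               for y in (box_min[1], box_max[1])
               for z in (box_min[2], box_max[2])]
    return any(all(nx * x + ny * y + nz * z + d < 0
                   for (x, y, z) in corners)
               for (nx, ny, nz, d) in planes)

def cull(node, planes, out):
    """Skip a quadnode and its entire subtree when its bounding box
    falls outside the view frustum."""
    if box_outside(planes, *node["bounds"]):
        return                  # nothing below this node is drawn
    out.append(node)
    for child in node["children"]:
        cull(child, planes, out)

planes = [(1.0, 0.0, 0.0, 0.0)]   # one inward-facing plane: keep x >= 0
ahead  = {"bounds": ((1, -1, -1), (5, 1, 1)), "children": []}
behind = {"bounds": ((-5, -1, -1), (-1, 1, 1)), "children": []}
root   = {"bounds": ((-5, -1, -1), (5, 1, 1)), "children": [ahead, behind]}
visible = []
cull(root, planes, visible)
assert ahead in visible and behind not in visible
```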
Multiple Vertex Buffers
One of the things I spent a lot of time on was converting the terrain to run with more than one vertex buffer.
The vertex buffer contains all the vertices read off of the heightmap. In general, it’s best to use a single buffer, because each buffer must be rendered with a separate Draw call. Unfortunately, there is a maximum limit to the number of vertices that can be rendered at once on the graphics card, and going above this limit will force the call to be rendered on the CPU (resulting in death by frame rate).
This limit varies, but on my home computer over a million vertices (a 1024×1024 terrain) is just a bit too much.
So, I implemented a nasty and complex algorithm to separate the terrain into a number of vertex buffers, based on the idea that each quadnode could be entirely contained within a parent vertex buffer if we split it correctly. This (eventually) came out well: it is now possible to split your terrain based on a size value. A 1024×1024 terrain split at 512 will come out as four 512×512 vertex buffers, with a fifth for all quadnodes which cover an area greater than 512 (only the root node, in this case).
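The assignment rule at the heart of that split can be sketched like so (function and buffer names are mine; coordinates are taken to be the node’s minimum corner):

```python
def assign_buffer(node_x, node_z, node_size, split=512):
    """Pick a vertex buffer for a quadnode: nodes no bigger than the
    split size fit wholly inside one (column, row) tile buffer;
    anything larger goes into a shared 'oversize' buffer."""
    if node_size > split:
        return "oversize"
    return (node_x // split, node_z // split)

# A 1024x1024 terrain split at 512: four tile buffers, plus the
# oversize buffer for the root node.
assert assign_buffer(0, 0, 512) == (0, 0)
assert assign_buffer(512, 0, 256) == (1, 0)
assert assign_buffer(0, 0, 1024) == "oversize"
```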
It’s worth noting that I may have managed this whilst drunk, tired or otherwise incapacitated, as I cannot remember actually coding it, and have no real idea how or why it works. But it does, and seems to be bug free. [Present me says: Yes, that’s right. Apparently, the Ballmer Peak actually exists]
Global Normal Map
This shader was my first attempt at HLSL (High Level Shader Language), and came out brilliantly in my opinion. [Present me says: Hah! Oh you poor naive fool]
Dynamically changing the geometry can have a nasty effect: although the shape of a low detail node may be very similar to that of a high detail node, the shading can result in a fairly large difference in appearance if done per vertex.
To solve this, I took advantage of the power of HLSL, and told the terrain shader to use a global normal texture rather than per-vertex normals. The advantage was twofold: my vertex buffers halved in size and, more importantly, shading no longer changes between high and low detail quadnodes.
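Generating such a global normal map from the heightmap is simple enough to sketch. This is a toy Python version (y-up, unit grid spacing, central differences); the function name and vertical scale are mine, not the game’s:

```python
def normal_map(height, scale=1.0):
    """Build a per-texel normal map from a 2D heightmap via central
    differences, so shading is identical at every LOD."""
    h, w = len(height), len(height[0])
    normals = [[None] * w for _ in range(h)]
    for z in range(h):
        for x in range(w):
            # Height deltas across two texels (clamped at the borders):
            dx = (height[z][min(x + 1, w - 1)] - height[z][max(x - 1, 0)]) * scale
            dz = (height[min(z + 1, h - 1)][x] - height[max(z - 1, 0)][x]) * scale
            # Unnormalised normal of the surface y = height(x, z):
            nx, ny, nz = -dx, 2.0, -dz
            length = (nx * nx + ny * ny + nz * nz) ** 0.5
            normals[z][x] = (nx / length, ny / length, nz / length)
    return normals

flat = normal_map([[0, 0], [0, 0]])
assert flat[0][0] == (0.0, 1.0, 0.0)   # flat ground points straight up
```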
Detail Normal Map
Initially, I didn’t understand the mathematics behind normal mapping, and thought that by simply adding a detail normal map value to my global value in the shader I could create detail normal mapping. The result looked OK, but it wasn’t accurate: the detail normal ‘pulled’ the global normal upwards. When I tested with a high strength value for the detail normal, this was instantly apparent: the entire terrain was shaded as if it was a lot flatter than it truly was.
It wasn’t until after I’d finished the multitexturing that I worked out how to fix this, but when I did the difference was apparent (see below).
Multitexturing
Using a single texture for the entire terrain has two problems: the terrain is too big to be covered by a single texture of high enough resolution, and tiling the same ground texture over the entire terrain looks very bland. Therefore, I went for a multitexturing approach that made use of a blend texture.
In short, multitexturing is using more than one texture on the same object, and a blend texture uses its red, green and blue channels to define the amount of each texture to show. In the sample, blue represents sand, green grass, red foliage, and black displays as rock.
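A single texel of that blend, sketched in Python (the colour values are invented; the real work happens per-pixel in the shader):

```python
def blend_ground(blend_rgb, sand, grass, foliage, rock):
    """One texel of the multitexture blend: the blend texture's
    channels weight three ground colours, and whatever weight is
    left over (black) shows as rock.  Colours are (r, g, b) tuples."""
    r, g, b = blend_rgb
    rock_weight = max(0.0, 1.0 - (r + g + b))
    return tuple(b * s + g * gr + r * f + rock_weight * ro
                 for s, gr, f, ro in zip(sand, grass, foliage, rock))

SAND = (0.9, 0.8, 0.5)
GRASS = (0.2, 0.6, 0.2)
FOLIAGE = (0.1, 0.3, 0.1)
ROCK = (0.5, 0.5, 0.5)
assert blend_ground((0, 0, 1), SAND, GRASS, FOLIAGE, ROCK) == SAND  # pure blue
assert blend_ground((0, 0, 0), SAND, GRASS, FOLIAGE, ROCK) == ROCK  # black
```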
Quickly back on the subject of Detail Normal Mapping: you can see the difference between my original method, and my final method.
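To illustrate the flaw: below is the naive additive blend reconstructed in Python, next to one standard correction (UDN-style blending). The post doesn’t say which fix was actually used, so treat the second function as a stand-in, not the game’s method:

```python
def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def naive_blend(base, detail):
    """The flawed additive approach: summing the normals pulls the
    result toward straight-up, flattening the overall shading."""
    return normalize(tuple(b + d for b, d in zip(base, detail)))

def udn_blend(base, detail):
    """One standard correction (UDN blending): add only the detail
    normal's tangent-plane components, keeping the base slope."""
    return normalize((base[0] + detail[0], base[1] + detail[1], base[2]))

slope = normalize((1.0, 0.0, 1.0))   # a 45-degree base normal
flat_detail = (0.0, 0.0, 1.0)        # neutral detail texel (z = up)
# A neutral detail map should leave the base normal alone...
assert all(abs(a - b) < 1e-9
           for a, b in zip(udn_blend(slope, flat_detail), slope))
# ...but the naive sum tilts it toward vertical: the 'flatter' look.
assert naive_blend(slope, flat_detail)[2] > slope[2]
```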
“Overdraw” is a term used when the pixel shader renders a distant object, then renders a closer object which completely obscures the distant object. Obviously, rendering the distant object was unnecessary: it didn’t end up being drawn on the screen, and rendering it simply hurt the fill-rate.
The GPU can overcome this if the near object is drawn first: when rendering the distant object, it detects that a smaller Z-buffer value has already been written for those pixels, and skips the rendering process.
Therefore, it is to our advantage to render the near quadnodes before the far ones. Since the list of quadnodes to be drawn is built just before the quadnodes’ indices are compiled into the index buffer, it’s a relatively easy fix to add a sort function in between.
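The sort itself is tiny; a sketch in Python (node layout invented for illustration):

```python
import math

def sort_front_to_back(nodes, cam):
    """Order the render list by distance to the camera, so near
    geometry fills the Z-buffer first and occluded distant pixels
    fail the depth test instead of being shaded."""
    return sorted(nodes, key=lambda n: math.dist(n["center"], cam))

nodes = [{"center": (900, 900)}, {"center": (10, 10)}, {"center": (400, 80)}]
ordered = sort_front_to_back(nodes, cam=(0, 0))
assert [n["center"] for n in ordered] == [(10, 10), (400, 80), (900, 900)]
```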
Isn’t snarking in italics supposed to be my job?