
A Primer to Methods in Computer Gaming Graphics
by Mark Barry

Part I

Abstract

Modern computer gaming graphics push the limits of today's computers. As games become more realistic, interactive, and detailed, they demand ever more computing power. To make computer games run in real time while maintaining near-photorealistic graphics, programmers find it essential to use a number of efficient and clever algorithms. This primer focuses on the techniques used to make modern computer gaming graphics fast while keeping the graphics and visual effects realistic and convincing.

Introduction

Modern 3D computer games give the player a virtual world to navigate and explore. The popular "first person shooter" games are one example; flight simulators and car racing games are others. The player is given the freedom to interact with and experience the virtual world as if it were real. In such games, 3D models represent the objects of the virtual world, and a major goal in the creation of 3D computer games is to present these objects and their environments in a way that is realistic and convincing to the player. But modeling real-world phenomena in a computer is a challenging problem, and even more so for computer games specifically. Computer games consist of real-time, interactive animation in which frames must be rendered between 30 and 60 times per second; anything slower makes for jerky animation and an unhappy gamer. The demand for realism and detail while streamlining the rendering process presents a challenging problem for the programmer. But computer graphics has been an active research area since the dawn of computing, and many of the apparent problems and difficulties have been solved and improved upon over the years. This survey touches on several of the most common, crucial, and time-tested techniques used in computer game graphics today.

Progressive Meshes

Simple 3D models are a combination of points, or "vertices", in 3D space connected together to form polygons describing the object's surface, often called a "mesh". Since most real-world objects have curved surfaces, and polygons represent only flat surfaces, it takes many polygons joined together to fit the shape of a curved surface. Real-time games do not have the computational luxury of using a high number of polygons to accurately represent an object's complex surface. Instead, game designers must maintain a balance between good-looking models (high polygon count) and acceptable game performance.

One way game designers enhance performance and "cut corners" is to use a low-polygon-count model when the model appears very small on the screen. If the rendered model covers only a handful of pixels, it is not necessary to carry out the computation associated with rendering a highly complex model; the visual results will look the same. Likewise, when an object is very large on the screen and the player can make out fine shape details, it is appropriate to use a high-polygon-count model.

One widely used method of representing models at different levels of detail is called "progressive meshes", introduced in the work of Hugues Hoppe [8],[14]. As the name implies, meshes progress to different levels of detail as necessary within a game. An object appearing very far away is loaded and rendered with a very low polygon count. As the player moves closer, the object progressively becomes more detailed (more polygons are added) as it grows larger on the screen.

There are several major benefits to using progressive meshes. The first is that a progressive mesh model is not a series of separate meshes of varying detail. If it were, it would be inefficient to store all the variations, loading and unloading meshes each time a change in detail is needed. Instead, a progressive mesh starts as a very primitive, low-detail base mesh to which a succession of small changes can be applied quickly and efficiently, adding or subtracting detail. Transitions among levels of detail are also smooth, because a simpler mesh is a direct subset of a more complicated one: they share the same vertices, one simply having more than the other. These transitions are accomplished with a series of vertex splits or edge collapses, which avoid the discontinuous jumps that occur when switching between distinct models (a code sketch follows Figure 1).

The second major benefit of the progressive mesh representation is the accurate preservation of shape and visual quality at each level of detail. Across all levels of detail, important mesh features such as sharp edges and creases are very well preserved. These are often pivotal visual features that should be preserved even at a low level of detail. Other progressive-mesh-type representations do not preserve these features accurately but instead tend to smooth them out, retaining them only at the highest levels of detail. Because of Hoppe's research at Microsoft, the latest versions of DirectX have progressive meshes integrated into the API.

The downside to progressive meshes is that computing the progressive mesh representation of a given model takes a very long time, on the order of hours. Also, the simplification process preserves topology. This is not always desirable, because preserved topology is often not visually noticeable in a low-detail approximation, yet it limits how far the mesh can be simplified.

Figure 1: Progressive meshes. [8][14]
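
To make the refinement idea concrete, the following Python sketch represents a progressive mesh as a coarse base mesh plus an ordered list of refinement records. The class and field names are illustrative assumptions chosen for clarity, not Hoppe's actual encoding, which also re-wires the faces surrounding the split vertex; that bookkeeping is omitted here.

    # Illustrative progressive-mesh sketch: a base mesh plus an ordered
    # list of vertex-split records. Names are assumptions for clarity.

    class VertexSplit:
        def __init__(self, parent, new_pos, new_faces):
            self.parent = parent        # index of the vertex being split
            self.new_pos = new_pos      # position of the vertex the split adds
            self.new_faces = new_faces  # triangles introduced around it

    class ProgressiveMesh:
        def __init__(self, base_vertices, base_faces, splits):
            self.vertices = list(base_vertices)  # coarse base mesh
            self.faces = list(base_faces)
            self.splits = splits                 # ordered refinement records
            self.level = 0                       # number of splits applied

        def refine(self):
            """Apply one vertex split: add a vertex and its triangles."""
            s = self.splits[self.level]
            self.vertices.append(s.new_pos)
            self.faces.extend(s.new_faces)
            self.level += 1

        def coarsen(self):
            """Undo the last split (an edge collapse): remove them again."""
            self.level -= 1
            s = self.splits[self.level]
            if s.new_faces:
                del self.faces[-len(s.new_faces):]
            self.vertices.pop()

Choosing the level from the model's projected size on screen gives the distance-based detail control described above, a few cheap record applications at a time rather than a wholesale mesh swap.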

Subdivision Surfaces

Another form of the progressive mesh idea is the subdivision surface, whose benefits are similar. Like progressive meshes, subdivision surfaces start out as a simple base mesh whose polygons are iteratively subdivided to create a more detailed surface. The transitions in level of detail can be represented as a succession of changes applied to one mesh to reach another, and the transitions are smooth, all meshes being subsets of the others. The major downside to subdivision surfaces is that very few surfaces start out subdivision-capable: an irregular mesh must first be converted to a subdivision surface. An algorithm presented by Eck et al. [4] does a good job of converting an arbitrary mesh to a subdivision surface, but unlike progressive meshes, the subdivision surface does not preserve sharp edges and creases very well. As with progressive meshes, the computation time to make the conversion is long. One major benefit of subdivision surfaces is their ease of use in multiresolution editing. An artist can manipulate vertices on a low-detail surface to affect large sections of the mesh; subdividing to create more vertices then allows for finer editing. This has certainly been an excellent feature for artists modeling objects and characters for games. [10]
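
The refinement step itself is simple to sketch. The Python below performs one pass of midpoint subdivision, splitting each triangle into four. This is only the connectivity half of the story: real schemes such as Loop subdivision also reposition vertices so that repeated passes converge to a smooth limit surface. Function and variable names are illustrative.

    # One pass of midpoint triangle subdivision: each triangle becomes
    # four. Vertices are (x, y, z) tuples; faces are index triples.

    def midpoint(a, b):
        return tuple((a[i] + b[i]) / 2.0 for i in range(3))

    def subdivide(vertices, faces):
        vertices = list(vertices)
        new_faces = []
        edge_mid = {}  # maps an edge to the index of its midpoint vertex

        def mid_index(i, j):
            key = (min(i, j), max(i, j))
            if key not in edge_mid:
                vertices.append(midpoint(vertices[i], vertices[j]))
                edge_mid[key] = len(vertices) - 1
            return edge_mid[key]

        for (a, b, c) in faces:
            ab, bc, ca = mid_index(a, b), mid_index(b, c), mid_index(c, a)
            new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
        return vertices, new_faces

Each pass multiplies the triangle count by four, which is why a handful of subdivision levels is enough to move from a coarse editing cage to a screen-filling model.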

Texture Mapping

To better model 3D objects, a technique called texture mapping was developed [3]. The idea is simple: apply an image to a surface to model its material properties and make it look more realistic. For example, the top of a wooden desk can be a flat rectangular shape, but it won't look like wood until a wood-grain texture is applied. The idea of texture mapping has been around a long time, but it only became a standard in computer games about a decade and a half ago.

Texture mapping is important in that it makes up for the detail a 3D mesh itself lacks. Texture maps are relatively cheap to add to models, computationally far cheaper than adding more polygons. They are also easy to use, being simply images that can be conveniently stored, compressed, and edited. Artists can quickly "peel off" textures from a model, manipulate them in an image editing program, then easily reapply them to change the model's appearance. Editing the appearance of a model via texture maps is much easier than editing the model's geometry, which may be a very complicated mesh. Also, in a dynamic gaming environment, the appearance of objects may change. The classic example is shooting a character and causing its blood to splatter on the wall; applying the blood-spatter texture can be done quickly and easily.

Texture mapping is also useful for conveying small perturbations of surface geometry such as bumps, cracks, and scratches. Its benefit becomes evident when the alternative is editing the underlying geometry of the surface: modeling bumps into a flat surface would create far too many polygons to render efficiently. Also, small bumps and scratches are not meant to be examined closely and thus can be approximated with a texture map. The main technical drawback of textures is that they typically require more memory than the surfaces they cover, and a game with many high-resolution textures can use up a lot of memory. Today's latest off-the-shelf graphics acceleration hardware includes up to 512 megabytes of memory [7].
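
At its core, applying a texture is a lookup: each vertex carries a coordinate pair (u, v) in [0, 1], the rasterizer interpolates those coordinates across the triangle, and each pixel fetches the texel they point at. A minimal nearest-neighbor version, assuming an image stored as rows of RGB tuples (the names here are illustrative), might look like this:

    # Minimal sketch of a texture lookup: map (u, v) in [0, 1] x [0, 1]
    # to a texel. 'image' is assumed to be a list of rows of RGB tuples.

    def sample_texture(image, u, v):
        height = len(image)
        width = len(image[0])
        # Clamp to the valid range, then scale to pixel coordinates.
        u = min(max(u, 0.0), 1.0)
        v = min(max(v, 0.0), 1.0)
        x = min(int(u * width), width - 1)
        y = min(int(v * height), height - 1)
        return image[y][x]

Real hardware adds filtering (bilinear interpolation, mipmapping) and wrap modes, but the addressing idea is the same.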

Bump and Normal Mapping

As mentioned above, texture maps are good for conveying the general material properties of a surface along with some small surface perturbations. But their visual accuracy breaks down when the viewer, the lights, or the surface itself changes position. In reality, small bumps and cracks would appear slightly different as the viewer or lights move, due to shadows and specular highlights. Since a texture map does not model physical geometry, lighting effects cannot be accurately applied to the surface. Modeling a surface's fine geometry by applying a kind of texture, rather than modifying the geometry itself, has been a large area of study, and several techniques have been developed to address the problem.

James Blinn introduced the earliest technique in 1978, now referred to as bump mapping [1]. The idea is to apply to a surface a texture that represents perturbations of the surface rather than its colors. The "texture" applied is a grayscale image, with each pixel value giving the height to which the surface should be perturbed. Though applying this texture does not actually change the surface's underlying geometry, it does change the way light interacts with it, which is a key visual cue.

A variation of bump mapping is normal mapping. Instead of a grayscale image representing a height field, the normal map is an RGB image representing the surface normals, with each color channel corresponding to the x, y, or z component of a direction in 3D space. Normal mapping does an excellent job of conveying complex surface geometry mapped onto an otherwise flat surface. Many examples show that a highly complex mesh is visually indistinguishable from a relatively simple mesh with a normal map applied. Normal mapping is a feature of most of today's 3D games: designers can use low-polygon-count models while still maintaining a high level of detail simply by applying normal maps.

One problem with normal maps is that lighting calculations must be performed per pixel. Basic texture maps are also applied per pixel, but no per-pixel lighting calculation is needed for them because the surface is uniformly flat. Because per-pixel calculations become very computationally intensive under the demands of real-time graphics, this approach requires graphics acceleration hardware dedicated to performing such calculations very quickly. The earliest off-the-shelf 3D graphics hardware did not support normal mapping, but the feature has certainly become standard in modern graphics acceleration hardware.
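
The per-pixel lighting step is easy to sketch. Assuming the common encoding in which each RGB channel maps [0, 255] to a normal component in [-1, 1], and a light direction already expressed in the surface's tangent space and normalized, diffuse shading with a normal map reduces to one dot product per pixel. All names below are illustrative assumptions.

    import math

    # Illustrative per-pixel diffuse lighting with a normal map.
    # Assumes the common [0, 255] -> [-1, 1] channel encoding and a
    # normalized tangent-space light direction.

    def decode_normal(r, g, b):
        n = (r / 127.5 - 1.0, g / 127.5 - 1.0, b / 127.5 - 1.0)
        length = math.sqrt(sum(c * c for c in n))
        return tuple(c / length for c in n)  # renormalize after decoding

    def diffuse(normal_rgb, light_dir, base_color):
        n = decode_normal(*normal_rgb)
        # Lambertian term: brightness follows the angle between the
        # perturbed normal and the light direction.
        ndotl = max(0.0, sum(n[i] * light_dir[i] for i in range(3)))
        return tuple(c * ndotl for c in base_color)

Bump mapping differs only in where the normal comes from: it is derived on the fly from the slope of the grayscale height field rather than stored directly in the texture.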

Parallax Mapping

The problem with normal mapping is that the surface it is applied to is still flat, which becomes easily noticeable when the surface is viewed at steep angles, at which point the illusion of bumps deteriorates. Another problem with normal maps viewed at steep angles is that areas that should be occluded are not: at grazing angles, the bumps on a bumpy surface should occlude each other, and opposite sides of a deep impression should be visible from certain angles and not others. This phenomenon is known as parallax. Since a combination of texture maps and normal maps alone does not capture it, a new technique called parallax mapping was developed. Parallax mapping attempts to remedy the parallax problem and does a decent job, using per-pixel rather than per-vertex texture coordinate addressing. The visual results vary; the authors admit that the texture appears to distort as the viewing angle changes. But overall the effect is worth implementing, as it is very fast and cheap on modern graphics acceleration hardware. Still, parallax mapping only serves as a fancy texture on a flat surface, and its realistic effects fall apart, reverting to the appearance of a flat surface, when viewed at steep angles. [9]
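
The heart of the technique is a small shift of the texture coordinate along the view direction, proportional to the height sampled at the original coordinate. A minimal Python sketch, with assumed parameter names and default constants, and the view vector given in tangent space (z pointing out of the surface), is:

    # Minimal parallax-mapping sketch. 'height' is the value sampled
    # from a grayscale height map at (u, v), in [0, 1]; 'view' is the
    # normalized tangent-space view vector; 'scale' and 'bias' are
    # artist-tuned constants (names and defaults are assumptions).

    def parallax_offset(u, v, height, view, scale=0.04, bias=-0.02):
        h = height * scale + bias       # remap the raw height sample
        # Shift the lookup toward the viewer; dividing by view[2]
        # exaggerates the shift at grazing angles. "Offset limiting"
        # variants drop this division to reduce the distortion noted above.
        return (u + h * view[0] / view[2],
                v + h * view[1] / view[2])

The surface is then textured and normal-mapped at the shifted coordinate, which is what creates the impression of depth on what remains a flat polygon.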

Continue to Part II

