Mesh Simplification Methods With Application To Games

Modern 3D games demand a great amount of computing power. In real-time interactive games, new frames must be rendered 30 to 60 times per second to maintain a smooth visual appearance. Anything below this results in jerky animation, degrading the overall quality of the gaming experience. Although modern 3D games have several computationally intensive aspects, such as artificial intelligence, physics, and character animation, the most intensive is the rendering of the graphics. To get a sense of the computational power required, modern 3D games need at least some form of dedicated graphics processing hardware, designed specifically for rendering complex 3D scenes and special visual effects in real time. Even with graphics processing units now larger and more powerful than standard central processing units, today's games push the limits.

Given this high computational demand, programmers find it crucial to eliminate unnecessary computations that do not affect the visual outcome of the game. One important way of cutting these costs is to use low-polygon-count models (or "meshes"): the more polygons handed to the graphics card for processing, the longer the scene takes to render. A large research topic in computer graphics is how to convert a highly detailed, high-polygon-count model into a smaller, more convenient model while still preserving its overall shape and appearance [2]; this is also known as generating "multiresolution meshes". Much work has been (and continues to be) done on algorithms to automate the task of mesh simplification. All of these algorithms share the same basic task, but each has its strengths and weaknesses, some working better than others. Microsoft researcher Hugues Hoppe has written a number of papers in this area. Hoppe developed what are known as "progressive meshes" [5].
The major strength of Hoppe's algorithm is that, given a 3D mesh, it generates a series of mesh refinements that can be applied incrementally to a simple base mesh until the original mesh is restored. With Hoppe's method, a mesh can be represented at any desired level of detail (complexity). Although creating a progressive mesh representation from a given mesh is computationally intensive and takes on the order of hours, transitioning from one level of detail to another is very fast and can certainly be done in real time. Because of Hoppe's work, progressive meshes have been integrated into Microsoft's DirectX graphics API, a popular game-programming interface. Progressive meshes have other benefits. A distant object in a gaming environment can be represented as a simple, low-polygon-count mesh because it appears very small on screen. As the player moves closer, the object "progresses" to higher levels of geometric detail, which are better appreciated as the object grows larger on screen. Hoppe's simplification algorithm collapses edges by merging adjacent vertices in the given mesh [6]; iteratively merging vertices progressively reduces the complexity of the mesh. When "progressing" from the simplified mesh back to the complex one, vertices are iteratively split to create more vertices, eventually regenerating the original mesh. Progressive meshes have proven very fast and effective in gaming environments. Sander et al. (including Hoppe) have done further work to improve the usefulness of progressive meshes [8]. It is extremely useful, in games especially, to have a progressive mesh in which all meshes in the sequence share a common texture parametrization. When converting a complex mesh to a simple one, one does not want the associated texture map to be distorted and made useless in the process.
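The edge-collapse operation described above can be illustrated with a minimal sketch on an indexed triangle list. This is an assumption-laden simplification: it merges the two endpoints at their midpoint and drops degenerate triangles, whereas real progressive-mesh code also chooses which edge to collapse via an error metric and records the inverse vertex-split operation so the collapse can be undone.

```python
def collapse_edge(vertices, faces, i, j):
    """Collapse edge (i, j): merge vertex j into vertex i at their midpoint,
    then drop triangles that became degenerate.

    Sketch only: real progressive-mesh code also records the vertex split
    needed to reverse this collapse during refinement."""
    # Move vertex i to the edge midpoint (vertex j becomes unused).
    vertices[i] = [(a + b) / 2.0 for a, b in zip(vertices[i], vertices[j])]
    new_faces = []
    for f in faces:
        g = tuple(i if v == j else v for v in f)  # redirect references j -> i
        if len(set(g)) == 3:                      # keep only non-degenerate triangles
            new_faces.append(g)
    return vertices, new_faces
```

Each collapse removes one vertex and, for an interior edge of a manifold mesh, the two triangles sharing that edge, which is why repeating it steadily shrinks the mesh.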
A lower-detail geometric model should have roughly the same colors and textures, and appear much the same, as its high-detail equivalent. Sander et al. address this issue as it pertains to progressive meshes: they construct a common texture parametrization that minimizes overall metric dispersion and distortion. On a related subtopic, the paper by Cignoni et al. presents an algorithm for mapping textures from a complex mesh to a simplified mesh regardless of the simplification process used [1]. It is very general, requiring only that the complex and simplified meshes be closely matched in overall geometry (which is a goal of simplification anyway). This texture-preserving algorithm thus applies to any simplification process discussed here. Overall it does an excellent job and is considered a standard procedure for preserving textures across mesh simplification. Eck et al. develop a method for creating a subdivision surface mesh from a given arbitrary mesh [3]. The benefits of subdivision surfaces are similar to those of progressive meshes: a subdivision surface can start as a simple, low-polygon-count mesh and then repeatedly subdivide its surface polygons to obtain a finer, more detailed mesh. Like progressive meshes, the representation takes a long time to compute, but once computed, transitions among levels of detail are fast. The downside of this approach is that sharp ridges and creases in the original mesh, which are often defining features of a model, are not well preserved when converted to a subdivision surface. Progressive meshes do not exhibit this problem; creases and edges are preserved very well across all levels of detail. Other, more general algorithms exist for simplifying a mesh that are not necessarily suitable for "progressive" uses. Given a complex mesh, these algorithms produce a single simplified mesh at a certain, parameter-controlled, degree of detail.
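The refinement step underlying subdivision surfaces can be sketched as a 1-to-4 midpoint split, where each triangle is replaced by four smaller ones. This is a connectivity-only sketch: actual subdivision schemes such as Loop subdivision also smooth the vertex positions at each step, which is precisely what rounds off the sharp creases noted above.

```python
def subdivide(vertices, faces):
    """One 1-to-4 midpoint subdivision step: every triangle becomes four.

    Connectivity refinement only; schemes like Loop subdivision would also
    reposition vertices to produce a smooth limit surface."""
    vertices = list(vertices)
    midpoint_index = {}  # edge (a, b) -> index of its midpoint vertex

    def midpoint(a, b):
        key = (min(a, b), max(a, b))          # shared edges reuse one midpoint
        if key not in midpoint_index:
            vertices.append(tuple((p + q) / 2.0
                                  for p, q in zip(vertices[a], vertices[b])))
            midpoint_index[key] = len(vertices) - 1
        return midpoint_index[key]

    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        # Three corner triangles plus the central one.
        new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return vertices, new_faces
```

Each step multiplies the face count by four, which is why the precomputation is expensive while stepping between precomputed levels is cheap.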
Various levels of detail can be specified, but these algorithms do not provide a smooth progression among them. Nevertheless, they are still useful when a highly complex model needs to be simplified for use in a game. Garland and Heckbert develop an algorithm similar to the one Hoppe devises for progressive meshes [4]. Their algorithm uses iterative edge collapses to reduce the vertex count of a mesh, but in much less time than Hoppe's (seconds versus hours). It also differs from Hoppe's in that it allows non-adjacent collapses, meaning that vertices not connected by an edge may be merged. This lifts the restriction of maintaining mesh topology from the original mesh to the simplified one and allows non-manifold meshes to be used, which is a key benefit of their algorithm. Often a complex mesh's simplified equivalent does not need to preserve topology for the visual similarity to be convincing; this is usually the case when small disjoint surfaces need not be preserved. Overall, Garland and Heckbert's algorithm is very fast and efficient, yielding excellent results. Schroeder et al. present a simple and straightforward approach to reducing mesh complexity termed "decimation of triangle meshes" [9]. Their algorithm chooses a vertex on the mesh and removes its adjacent faces; the hole left behind is then retriangulated. Vertices are iteratively chosen and removed based on their offset from the local surface or the angle between adjacent polygons. The idea is to remove vertices that do not perturb the local surface by a significant amount and thus, upon removal, leave a surface not much different from the original. The downside of their algorithm is that it is restricted to manifold surfaces. This means mesh topology is maintained through simplification, which becomes a limitation when topology preservation is not necessary.
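Garland and Heckbert rank candidate collapses with a quadric error metric: each vertex accumulates a 4x4 quadric summed from the planes of its incident triangles, and the cost of placing a merged vertex at position v is the quadratic form v^T Q v. A minimal NumPy sketch of that error computation (the collapse loop and optimal-position solve are omitted):

```python
import numpy as np

def plane_quadric(a, b, c):
    """Fundamental quadric K_p = p p^T for the supporting plane of triangle (a, b, c)."""
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)           # unit normal of the triangle's plane
    p = np.append(n, -np.dot(n, a))     # plane coefficients [nx, ny, nz, d], n.v + d = 0
    return np.outer(p, p)               # 4x4 symmetric quadric

def vertex_error(Q, v):
    """Quadric error v^T Q v: sum of squared distances to the planes folded into Q."""
    vh = np.append(v, 1.0)              # homogeneous coordinates [x, y, z, 1]
    return float(vh @ Q @ vh)

# In the full algorithm, each vertex sums the quadrics of its incident
# triangles, and the cost of collapsing a pair (v1, v2) is evaluated
# against the combined quadric Q1 + Q2.
```

Because the error for a vertex is just one small matrix product, candidate collapses can be scored and re-scored cheaply, which is what makes the method run in seconds rather than hours.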
Overall, their algorithm remains simple and tidy, yielding quality simplified meshes for most meshes of interest. A method similar to mesh decimation, termed "vertex clustering", is described by Rossignac and Borrel [7]. The idea is to divide a complex mesh with a 3D grid and cluster together the vertices lying in each cube of the grid: all mesh vertices in a common cube are merged to form a single vertex. The grid may be tuned to a coarse or fine sampling, which in turn determines the complexity of the mesh produced. Though this algorithm is straightforward and fast, it has a couple of shortcomings. One major shortcoming is that the structure of the resulting simplified mesh varies with its orientation relative to the grid: rotating the original mesh by a few degrees places vertices in different grid cubes, resulting in a different clustering. This potential inconsistency among runs of the algorithm is clearly undesirable. Moreover, the algorithm tends to yield poor results, making it apparent that a uniform grid sampling is not sufficient to accurately preserve the defining large-scale features of a mesh.

In conclusion, many approaches have been taken to the mesh simplification problem. Each has its advantages and disadvantages, which the user must weigh according to issues such as quality versus speed and whether topology preservation is important. In the end, the subjective critique of mesh quality comes down to the gamer, the one who interacts with the meshes presented to him or her.

References

1. Cignoni, P., Montani, C., Rocchini, C., and Scopigno, R. 1998. A general method for preserving attribute values on simplified meshes. In Proceedings of IEEE Visualization '98. IEEE, 59-66.
2. Cohen, J., Olano, M., and Manocha, D. 1998. Appearance-preserving simplification. In Proceedings of SIGGRAPH '98. ACM Press, New York, NY, 115-122.
3. Eck, M., DeRose, T., Duchamp, T., Hoppe, H., Lounsbery, M., and Stuetzle, W. 1995. Multiresolution analysis of arbitrary meshes. In Proceedings of SIGGRAPH '95. ACM Press, New York, NY, 173-182.
4. Garland, M. and Heckbert, P. S. 1997. Surface simplification using quadric error metrics. In Proceedings of SIGGRAPH '97. ACM Press/Addison-Wesley, New York, NY, 209-216.
5. Hoppe, H. 1996. Progressive meshes. In Proceedings of SIGGRAPH '96. ACM Press, New York, NY, 99-108.
6. Hoppe, H., DeRose, T., Duchamp, T., McDonald, J., and Stuetzle, W. 1993. Mesh optimization. In Proceedings of SIGGRAPH '93. ACM Press, New York, NY, 19-26.
7. Rossignac, J. and Borrel, P. 1993. Multi-resolution 3D approximations for rendering. In Modeling in Computer Graphics. Springer-Verlag, 455-465.
8. Sander, P. V., Snyder, J., Gortler, S. J., and Hoppe, H. 2001. Texture mapping progressive meshes. In Proceedings of SIGGRAPH '01. ACM Press, New York, NY, 409-416.
9. Schroeder, W. J., Zarge, J. A., and Lorensen, W. E. 1992. Decimation of triangle meshes. In Proceedings of SIGGRAPH '92. ACM Press, New York, NY, 65-70.