Introduction to Zedalos Computer Graphics – The Rendering Pipeline

Hi, I’m Mark and I have been working on Zedalos and its predecessor Espadon Online since summer 2009. Internally, I’m the “Graphics Guy”, so I’m responsible for making the most of our graphics. Our project has made great progress with its rendering technology, and we at Zedalos want to share the experience and knowledge we have gained over the last four years. I decided to start with some general background, so that everyone can follow along. If this is too easy for you, then come back later. I promise you, we’ll discuss interesting stuff. As a side note: I’m not going to re-explain anything that was covered in an earlier article, so I assume you’ve read the previous articles. If you have any questions, remarks or suggestions, feel free to contribute them!

The Rendering Pipeline

The rendering pipeline is a model that describes how an object, given by its mathematical description (the vector model, often called a “mesh”), is displayed in a 3D scene on your screen. Most 3D applications follow this model, because it was built into hardware: the GPU. The older a GPU is, the more rigidly this model is fixed. In DirectX versions before 9.x the model is so inflexible that the pipeline is called the “fixed-function pipeline”. Modern DirectX versions (9, 10, 11) are more and more programmable, so that only a few basic elements remain fixed. Here is a basic model of a pipeline:

Example Rendering Pipeline

The procedure of turning the mathematical description of a 3D scene into a complete frame is called “rendering”. During rendering, all data goes through this pipeline like an assembly line. In the beginning there are (many) vertices. A vertex is a vector that describes a corner of an object. Every solid consists of many primitives (usually triangles). If you display only these triangles, you get a wireframe, which is why the object itself is called a mesh.
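
To make this a bit more concrete, here is a minimal sketch of what a single vertex could contain. The exact layout varies from engine to engine, so take the fields and names purely as an illustration, not as our actual vertex format.

```hlsl
// A minimal example of the data a single vertex could carry.
// The layout is an assumption for illustration only.
struct Vertex
{
    float3 Position : POSITION;   // corner position in local coordinates
    float3 Normal   : NORMAL;     // surface direction, used later for lighting
    float2 TexCoord : TEXCOORD0;  // where on a texture this corner lies
};
```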

Vertex Processing

Example Mesh

The vertices of a mesh are transferred to the GPU as a list of coordinates and other information (the “vertex buffer”). These coordinates are relative to the origin of their object, which is why they are called “local coordinates”. Of course, all local coordinates have to be transferred into one consistent space, so that the objects get a spatial relation to each other in the world. To get these “world coordinates”, we transform each vertex into world space with a matrix multiplication that takes rotation, translation and scaling into account. After this, all vertices are transformed into “camera space” or “view space”. The origin of this space is the position of the viewer, i.e. your position!

I know this can be a bit confusing for beginners, so let me tell you my mnemonic. Each object has its local space. All objects have a position in the world: that’s the god mode! Why? Because some people believe the world origin lies in god. You look from far, far above, so that you can see all objects! But sadly you’re not god. So you have to transfer everything into a space in which you, or more precisely your eyes, are the origin. Every human or player (our first player was a chicken :) ) sees the world from a different perspective, therefore the last transformation depends on your current position!

In a 3D scene every vertex position consists of three elements: the x-, y- and z-coordinates. In view space the z-coordinate is the distance of an object to the viewer. The scene is now in the right space, but still three-dimensional, and your screen can only display 2D coordinates. For this purpose all coordinates are transformed into screen space. The mathematical details of this procedure go beyond the scope of this article. For more information search the web or buy a book, because if you want to develop in the graphics sector, you will have to understand this procedure!
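
As a rough illustration, here is what this transformation chain could look like in an HLSL vertex shader for DirectX 9. The matrix names (World, View, Projection) and the helper function are placeholders I made up for this sketch; they are not taken from our engine.

```hlsl
// Sketch of the transformation chain; the matrices would be set by the application.
float4x4 World;       // local space -> world space (rotation, translation, scaling)
float4x4 View;        // world space -> view/camera space
float4x4 Projection;  // view space  -> projection/screen space

float4 TransformToScreen(float3 localPosition)
{
    float4 worldPos = mul(float4(localPosition, 1.0f), World);
    float4 viewPos  = mul(worldPos, View);
    return mul(viewPos, Projection);  // the perspective divide happens afterwards
}
```

In practice the three matrices are often pre-multiplied once on the CPU, so that the shader only has to apply a single combined matrix per vertex.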

Culling and Clipping

As soon as the vertices are in view space, the GPU performs so-called “backface culling”. Backface culling rejects all triangles that are not visible because of their orientation, i.e. all triangles whose normal points roughly in the same direction as the viewer is looking. For example, when you look at an object, its back side is not visible. You can test every triangle with the aid of the dot product between the triangle’s normal and the vector from the triangle to the viewer: if the dot product is negative, the triangle is rejected. Additionally, there are other culling methods like “frustum culling”, which are important for rendering only the triangles that are actually needed. Once the vertices are in screen space, polygons can be clipped. A common technique for hidden surfaces is called Z-buffering. If a fragment has a greater distance to the viewer (a bigger z-value) than another fragment at the same pixel, it will not be visible, because it is hidden. You need to check every pixel with this depth test to ensure that the frame is rendered correctly. Otherwise you would see objects that are far away on top of near objects, which is not only incorrect, but also looks weird.
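
To make the two tests a bit more tangible, here is a small conceptual sketch in HLSL-style syntax. The GPU performs both tests in hardware, so this is not code you would actually write; the helper functions are made up purely for illustration.

```hlsl
// Backface culling: reject a triangle whose normal points away from the viewer.
bool IsBackfacing(float3 faceNormal, float3 facePosition, float3 viewerPosition)
{
    float3 toViewer = viewerPosition - facePosition;  // vector from triangle to eye
    return dot(faceNormal, toViewer) < 0.0f;          // negative -> facing away -> reject
}

// Depth test (Z-buffering): a new fragment only survives if it is closer
// to the viewer than the depth already stored for that pixel.
bool PassesDepthTest(float newDepth, float storedDepth)
{
    return newDepth < storedDepth;  // smaller z = nearer to the viewer
}
```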

Rasterization

Rasterization

Rasterization solves the problem that all transformed coordinates have, in principle, infinite precision, while your screen only has a finite one, determined by its resolution. For this reason the scene has to be mapped onto your pixels. This procedure is called rasterization or sampling. In doing so, some artifacts (aliasing effects) may occur. To hide these artifacts you can use a technique called “anti-aliasing”.

Shader

Originally this pipeline was neither extensible nor programmable. Since the fixed pipeline had too many limitations, it has been improved more and more. For example, DirectX 9 came with support for a high-level shader language (HLSL). A shader is a small program that runs on the GPU. While a program on the CPU typically runs once for a whole data stream, a shader runs once for every element of the data stream. By now, DirectX and OpenGL support a lot of different shader types. Since I currently work with DirectX 9.x, I will limit myself to the two important types: the vertex shader and the pixel shader (sometimes called fragment shader).

Vertex Shader

A vertex shader replaces the transformation unit of the pipeline, i.e. all transformations have to be programmed manually. The shader runs once for every single vertex and is able to manipulate it in any way (e.g. shift it). The shader can also use custom parameters, which are set manually by our program, and pass values on to the pixel shader or use them itself. Thus, complex rendering methods like water rendering become possible. An important note here: a vertex shader cannot create or delete vertices!
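
Here is a very small DirectX 9 vertex shader as a sketch of such a manipulation. The parameter names (WorldViewProjection, WaveTime) are invented for this example and would be set manually by the application.

```hlsl
float4x4 WorldViewProjection;  // combined transformation, set by the application
float    WaveTime;             // animation time, set by the application

struct VS_INPUT
{
    float3 Position : POSITION;
    float2 TexCoord : TEXCOORD0;
};

struct VS_OUTPUT
{
    float4 Position : POSITION;
    float2 TexCoord : TEXCOORD0;
};

VS_OUTPUT MainVS(VS_INPUT input)
{
    VS_OUTPUT output;

    // Shift the vertex a little along the y-axis to fake a simple wave,
    // the kind of manipulation that methods like water rendering build on.
    float3 shifted = input.Position;
    shifted.y += sin(WaveTime + input.Position.x) * 0.1f;

    // The transformations are done manually, because the shader replaces
    // the fixed transformation unit of the pipeline.
    output.Position = mul(float4(shifted, 1.0f), WorldViewProjection);
    output.TexCoord = input.TexCoord;  // passed on to the pixel shader
    return output;
}
```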

Pixel Shader

A pixel shader replaces the lighting or shading unit of the pipeline. This shader is called after rasterization for every single pixel. The pixel shader is able to manipulate every single RGBA channel of a pixel. Usually you combine a lighting model and/or external texture sources for your own shading.
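
And here is a matching, equally minimal pixel shader sketch that combines a texture lookup with a simple Lambert lighting model. Again, the sampler and the light direction are assumptions and would be set by the application.

```hlsl
float3 LightDirection;                     // normalized, points from the surface towards the light
sampler2D DiffuseSampler : register(s0);   // texture bound by the application

struct PS_INPUT
{
    float2 TexCoord : TEXCOORD0;
    float3 Normal   : TEXCOORD1;
};

float4 MainPS(PS_INPUT input) : COLOR0
{
    // External texture source ...
    float4 baseColor = tex2D(DiffuseSampler, input.TexCoord);

    // ... combined with a simple Lambert lighting model.
    float lighting = saturate(dot(normalize(input.Normal), LightDirection));

    // The shader may manipulate every RGBA channel of the final pixel color.
    return float4(baseColor.rgb * lighting, baseColor.a);
}
```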

Author Description

Mark

I'm studying computer science at the RWTH Aachen University. I'm addicted to graphics, artificial intelligence and mobile development.
