Graphics Pipeline Flashcards
Stages in the graphics pipeline
- Vertex processing
- Clipping and primitive assembly
- Rasterization
- Fragment processing
Vertex processing
Each vertex is processed independently. The two major functions of this
stage are to carry out coordinate transformations and to compute a colour for each vertex.
Per-vertex lighting calculations can also be performed in this stage.
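A minimal sketch of the coordinate-transformation part of vertex processing, assuming column-major 4x4 matrices as WebGL uses; transformVertex is a hypothetical helper, not part of any WebGL API:

```javascript
// Apply a column-major 4x4 matrix to a homogeneous vertex [x, y, z, w].
// transformVertex is a hypothetical helper for illustration only.
function transformVertex(m, v) {
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      // Element (row, col) of a column-major matrix is m[col * 4 + row].
      out[row] += m[col * 4 + row] * v[col];
    }
  }
  return out;
}

// Translate the vertex (1, 2, 3) by (10, 0, 0).
const translate = [
  1, 0, 0, 0,   // column 0
  0, 1, 0, 0,   // column 1
  0, 0, 1, 0,   // column 2
  10, 0, 0, 1,  // column 3 carries the translation
];
console.log(transformVertex(translate, [1, 2, 3, 1])); // [11, 2, 3, 1]
```

In a real WebGL program this multiplication happens per vertex in the vertex shader; the sketch only shows the arithmetic.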
Clipping and primitive assembly
Sets of vertices are assembled into primitives, such as line
segments and polygons, before clipping can take place. In the synthetic camera model, a
clipping volume represents the field of view of an optical system. The projections of objects
inside this volume appear in the image; those that are outside do not (they are clipped out); and those
that straddle the edges of the clipping volume are partly visible. Clipping must be done on a
primitive-by-primitive basis rather than on a vertex-by-vertex basis. The output of this stage
is a set of primitives whose projections should appear in the image.
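A minimal sketch of why clipping is primitive-by-primitive, classifying a line segment against the cube -1 <= x, y, z <= 1 as a stand-in clipping volume. classifySegment is a hypothetical helper, not a WebGL function, and a real clipper would also compute intersection points for straddling primitives:

```javascript
// Cohen-Sutherland-style outcode: one bit per clip-cube face the point violates.
function outcode([x, y, z]) {
  let code = 0;
  if (x < -1) code |= 1;  if (x > 1) code |= 2;
  if (y < -1) code |= 4;  if (y > 1) code |= 8;
  if (z < -1) code |= 16; if (z > 1) code |= 32;
  return code;
}

// classifySegment is a hypothetical helper for illustration only.
function classifySegment(p0, p1) {
  const c0 = outcode(p0), c1 = outcode(p1);
  if ((c0 | c1) === 0) return "inside";        // both endpoints inside
  if ((c0 & c1) !== 0) return "clipped out";   // both outside one face
  return "partly visible";                     // straddles the volume
}

console.log(classifySegment([0, 0, 0], [0.5, 0.5, 0])); // "inside"
console.log(classifySegment([2, 0, 0], [3, 0, 0]));     // "clipped out"
console.log(classifySegment([0, 0, 0], [2, 0, 0]));     // "partly visible"
```

Note that the "partly visible" case can only be detected once both endpoints of the primitive are known, which is why clipping cannot be done vertex by vertex.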
Rasterization
The output of the
rasterizer is a set of fragments for each primitive. A fragment can be thought of as a
potential pixel that carries with it information, including its colour, location, and depth.
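A minimal sketch of rasterizing one primitive (a line segment) into fragments via a simple DDA walk, each fragment carrying a location and an interpolated depth. rasterizeLine is a hypothetical helper, not part of WebGL:

```javascript
// Walk from p0 to p1 in unit steps, emitting one fragment per step.
// rasterizeLine is a hypothetical helper for illustration only.
function rasterizeLine(p0, p1) {
  const steps = Math.max(Math.abs(p1.x - p0.x), Math.abs(p1.y - p0.y));
  const fragments = [];
  for (let i = 0; i <= steps; i++) {
    const t = steps === 0 ? 0 : i / steps;
    fragments.push({
      x: Math.round(p0.x + t * (p1.x - p0.x)),
      y: Math.round(p0.y + t * (p1.y - p0.y)),
      depth: p0.depth + t * (p1.depth - p0.depth), // interpolated per fragment
    });
  }
  return fragments;
}

const frags = rasterizeLine({ x: 0, y: 0, depth: 0.1 }, { x: 3, y: 0, depth: 0.4 });
console.log(frags.length); // 4 fragments, one potential pixel each
```

Whether a fragment actually becomes a pixel is decided later, in fragment processing.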
Fragment processing
This stage takes the fragments generated by the rasterizer and updates the
pixels in the frame buffer. Hidden-surface removal, texture mapping, bump mapping, and
alpha blending can be applied here.
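A minimal sketch of the hidden-surface-removal part of fragment processing: a per-fragment depth test against a z-buffer. writeFragment is a hypothetical helper, not a WebGL call; WebGL performs this test in hardware when depth testing is enabled:

```javascript
const WIDTH = 4, HEIGHT = 4;
const depthBuffer = new Float32Array(WIDTH * HEIGHT).fill(Infinity);
const colorBuffer = new Array(WIDTH * HEIGHT).fill(null);

// writeFragment is a hypothetical helper for illustration only.
function writeFragment(frag) {
  const i = frag.y * WIDTH + frag.x;
  if (frag.depth < depthBuffer[i]) {   // nearer than what is already stored?
    depthBuffer[i] = frag.depth;
    colorBuffer[i] = frag.color;       // the fragment becomes the pixel
    return true;
  }
  return false;                        // hidden surface: fragment discarded
}

writeFragment({ x: 1, y: 1, depth: 0.5, color: "red" });
writeFragment({ x: 1, y: 1, depth: 0.9, color: "blue" }); // farther: rejected
console.log(colorBuffer[1 * WIDTH + 1]); // "red"
```

Texture mapping, bump mapping, and alpha blending would modify the fragment's colour before or during this frame-buffer update.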
What is the graphics pipeline?
We start with a (possibly enormous) set of vertices which defines the
geometry of the scene. We must process all these vertices in a similar manner to form an image in
the frame buffer.
Immediate mode graphics
As vertices are generated by the application, they are sent directly to the graphics processor for rendering on the display. One consequence of immediate mode is that there is no memory of the geometric data. Thus, if we want to redisplay the scene, we would have to go through the entire creation and display process again.
Retained mode graphics
We compute all the geometric data first and store it in some data structure. We then display the scene by sending all the stored data to the graphics processor at once. This approach avoids the overhead of sending small amounts of data to the graphics processor for each vertex we generate, but at the cost of having to store all the data.
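A minimal sketch of the two modes side by side, using a hypothetical sendToGPU function standing in for a transfer to the graphics processor:

```javascript
// sendToGPU is a hypothetical stand-in for a transfer to the graphics
// processor; here it just counts how many transfers occur.
let gpuTransfers = 0;
function sendToGPU(vertices) { gpuTransfers++; }

// Immediate mode: each vertex is sent as it is generated; nothing is kept,
// so redisplaying the scene means regenerating every vertex.
function immediateModeDraw(generateVertex, n) {
  for (let i = 0; i < n; i++) sendToGPU([generateVertex(i)]);
}

// Retained mode: compute the geometry once, store it, then send it all in
// one transfer; redisplay just resends the stored data.
function retainedModeDraw(generateVertex, n) {
  const stored = [];
  for (let i = 0; i < n; i++) stored.push(generateVertex(i));
  sendToGPU(stored);
  return stored; // kept in memory for later redisplay
}

const gen = (i) => [i, i, 0];
immediateModeDraw(gen, 100); // 100 small transfers
retainedModeDraw(gen, 100);  // 1 transfer, at the cost of storing 100 vertices
console.log(gpuTransfers);   // 101
```

The trade-off in the counts mirrors the text: retained mode trades memory for far fewer, larger transfers.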
Primitive functions
Define the low-level objects or atomic entities that our system can display. WebGL supports only points, line segments, and triangles.
Attribute functions
Govern the way that a primitive appears on the display
Viewing functions
Allow us to specify various views. WebGL does not provide any viewing functions, but relies on the use of transformations in the shaders to provide the desired view.
Transformation functions
Allow us to carry out transformations of objects, such as rotation, translation, and scaling. In WebGL, we carry out transformations by forming transformation matrices in our applications and then applying them either in the application or in the shaders.
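A minimal sketch of forming such a transformation matrix in the application, assuming column-major 4x4 matrices as WebGL expects. rotationZ and apply are hypothetical helpers; libraries such as glMatrix provide equivalents:

```javascript
// Rotation about the z-axis, as a column-major 4x4 matrix.
// rotationZ and apply are hypothetical helpers for illustration only.
function rotationZ(angle) {
  const c = Math.cos(angle), s = Math.sin(angle);
  return [
    c, s, 0, 0,   // column 0
    -s, c, 0, 0,  // column 1
    0, 0, 1, 0,   // column 2
    0, 0, 0, 1,   // column 3
  ];
}

function apply(m, [x, y, z, w]) {
  return [0, 1, 2, 3].map(
    (row) => m[row] * x + m[4 + row] * y + m[8 + row] * z + m[12 + row] * w
  );
}

// Rotating (1, 0, 0) by 90 degrees about z carries it to (0, 1, 0).
const p = apply(rotationZ(Math.PI / 2), [1, 0, 0, 1]);
console.log(p.map((v) => Math.round(v))); // [0, 1, 0, 1]
```

In a typical WebGL program this matrix would be passed to a vertex shader as a uniform rather than applied in JavaScript.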
Input functions
Deal with input devices
Control functions
Enable us to communicate with the window system, to initialize our programs, and to deal with any errors that take place during the execution of our programs
Query functions
Allow us to obtain information about the operating environment, camera parameters, values in the frame buffer, etc.