Week 5 - OpenGL and Texture & Environment Mapping Flashcards
What is the OpenGL rendering pipeline?
Vertex specification
Vertex shader
Tessellation
Geometry shader
Vertex post-processing
Primitive assembly
Rasterization
Fragment shader
Per-sample operations
What do OpenGL’s “shaders” do?
They are small programs, written in a C-like language called the OpenGL Shading Language (GLSL), that are compiled at run time by the OpenGL application. The two main types are vertex shaders and fragment shaders
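As a sketch, this is how the run-time compilation looks with PyOpenGL; VERTEX_SOURCE and FRAGMENT_SOURCE are assumed Python strings holding GLSL code:
```python
# Compiling GLSL shaders at run time with PyOpenGL (hypothetical sources).
from OpenGL.GL import GL_VERTEX_SHADER, GL_FRAGMENT_SHADER
from OpenGL.GL.shaders import compileProgram, compileShader

program = compileProgram(
    compileShader(VERTEX_SOURCE, GL_VERTEX_SHADER),      # vertex shader
    compileShader(FRAGMENT_SOURCE, GL_FRAGMENT_SHADER),  # fragment shader
)
```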
What do vertex shaders do?
They perform basic processing of individual vertices in the scene (e.g. viewpoint transformation)
What do fragment shaders do?
They process the fragments generated by rasterization (e.g. computing each fragment's colour)
Name 3 GLSL data types.
Basic types (similar to C++: bool, int, uint, float, double)
Vector types
Matrix types
What are the GLSL vector operations?
cross, dot, length, normalize, distance
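A minimal sketch of these built-ins inside a GLSL function body, held as a Python string the way PyOpenGL shader sources usually are (all variable names are made up):
```python
GLSL_VECTOR_OPS = """
vec3 a = vec3(1.0, 0.0, 0.0);
vec3 b = vec3(0.0, 1.0, 0.0);
vec3  c = cross(a, b);      // vector perpendicular to a and b: (0, 0, 1)
float d = dot(a, b);        // scalar product: 0.0 here
float l = length(a);        // Euclidean norm: 1.0 here
vec3  n = normalize(a + b); // unit vector in the direction of a + b
float s = distance(a, b);   // length(a - b): sqrt(2) here
"""
```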
In terms of GLSL, what are uniforms?
Variables that are shared by all instances of the shader, denoted by the keyword uniform
In terms of GLSL, what are inputs?
Variables specific to each instance, denoted by keyword in
In terms of GLSL, what are outputs?
The result of the shader computation, sent down the rendering pipeline, denoted by keyword out
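A sketch of the three qualifiers as they appear in GLSL source (names are hypothetical):
```python
GLSL_QUALIFIERS = """
uniform mat4 mvp;   // uniform: shared by all instances of the shader
in vec3 position;   // in: specific to this instance (e.g. this vertex)
out vec3 colour;    // out: result, sent down the rendering pipeline
"""
```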
What is the purpose of vertex shaders?
To perform operations applied to all vertices in the model: every vertex (and its normal) needs to be transformed (i.e. rotated, translated) from world coordinates to the camera's view space
What are the default input and output for vertex shaders?
input: the index of the current vertex (gl_VertexID)
output: the vertex position in clip space, after projection (gl_Position)
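A minimal vertex shader sketch, with assumed attribute and uniform names, transforming both positions and normals as described above:
```python
VERTEX_SHADER_SOURCE = """
#version 330 core
uniform mat4 PVM;      // projection * view * model (assumed name)
uniform mat4 VM;       // view * model, used for the normals (assumed name)
in vec3 position;      // per-vertex attribute
in vec3 normal;
out vec3 view_normal;  // passed on to the fragment shader
void main() {
    // mat3(VM) is fine for rigid transforms; otherwise use the inverse transpose
    view_normal = normalize(mat3(VM) * normal);
    gl_Position = PVM * vec4(position, 1.0);  // default output: clip-space position
}
"""
```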
What is the purpose of fragment shaders?
To calculate shading for all fragments (i.e. pixels), e.g. using Gouraud, Phong or Blinn shading
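As a sketch, a fragment shader computing simple diffuse (Lambertian) shading, the building block of the Phong and Blinn models (all names assumed):
```python
FRAGMENT_SHADER_SOURCE = """
#version 330 core
uniform vec3 light_dir;  // direction towards the light, in view space (assumed)
in vec3 view_normal;     // interpolated output of the vertex shader
out vec4 frag_colour;    // the colour computed for this fragment
void main() {
    float diffuse = max(dot(normalize(view_normal), normalize(light_dir)), 0.0);
    frag_colour = vec4(vec3(diffuse), 1.0);
}
"""
```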
What is texture mapping?
Mapping a two-dimensional texture (image) onto the surface of a three-dimensional object
How are textures represented in OpenGL?
Textures are images, and each pixel is also called a texture element (texel)
Each vertex of the mesh is associated to a point in the texture ([0, 1] x [0, 1])
Colour is sampled from the texture at each fragment's interpolated location
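A fragment shader sketch sampling a 2D texture at the interpolated coordinates (sampler and variable names assumed):
```python
TEXTURED_FRAGMENT_SOURCE = """
#version 330 core
uniform sampler2D tex;  // the texture, bound to a texture unit
in vec2 uv;             // interpolated (s, t) coordinates in [0, 1] x [0, 1]
out vec4 frag_colour;
void main() {
    frag_colour = texture(tex, uv);  // sample the texture at (s, t)
}
"""
```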
What are the four texture wrapping options in OpenGL?
GL_REPEAT (the texture pattern is repeated periodically)
GL_MIRRORED_REPEAT (same as above, but the texture is flipped on alternate repeats)
GL_CLAMP_TO_EDGE (the coordinate is clamped to [0, 1])
GL_CLAMP_TO_BORDER (anything outside of [0, 1] is set to a specified border colour)
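A sketch of selecting a wrap mode with PyOpenGL, assuming the texture is already bound; the mode is set per coordinate direction (s and t):
```python
from OpenGL.GL import (glTexParameteri, GL_TEXTURE_2D,
                       GL_TEXTURE_WRAP_S, GL_TEXTURE_WRAP_T, GL_REPEAT)

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT)  # s direction
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT)  # t direction
```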
Where is texture coordinate information stored?
At vertex locations, and when rendering, each fragment’s texture coordinates are interpolated from the triangle it belongs to, e.g. using Barycentric interpolation
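As a sketch: if a fragment has barycentric coordinates (α, β, γ) with respect to the triangle's three vertices, its texture coordinates are the matching weighted average:
```latex
(s, t) = \alpha (s_1, t_1) + \beta (s_2, t_2) + \gamma (s_3, t_3),
\qquad \alpha + \beta + \gamma = 1, \quad \alpha, \beta, \gamma \ge 0
```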
How can you sample colour from a texture in OpenGL (as pixels don’t always align with textures)?
GL_NEAREST (returns the texel closest to (s, t))
or
GL_LINEAR (returns the weighted average of the four texels nearest to (s, t))
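A sketch of choosing the sampling filter with PyOpenGL, assuming the texture is already bound; minification and magnification can use different filters:
```python
from OpenGL.GL import (glTexParameteri, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                       GL_TEXTURE_MAG_FILTER, GL_NEAREST, GL_LINEAR)

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)   # texture magnified
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)  # texture minified
```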
What is forward mapping?
The texture space is mapped to object space via surface parameterization, and then it is mapped to the screen space via projection
What is inverse mapping?
Each pixel on the screen is mapped back to its pre-image in texture space
Name a benefit of forward mapping.
Simple to comprehend
Name 2 benefits of inverse mapping.
Suited to scan-line algorithms
Efficient (only required textures are computed)
How does OpenGL use textures?
- Create the texture in Python - create the texture object, bind it, and set the texture data
- In the shader, add texture coordinates per fragment, a texture sampler (a sampler2D uniform), and an output colour to display
- Sending the texture in Python - activate texture unit zero, get the location of the texture uniform by name, set this uniform to point to texture unit zero, and bind the texture we want to this unit
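A sketch of the Python side of these steps with PyOpenGL; program is an assumed compiled shader program, image an assumed numpy uint8 array of shape (height, width, 3), and "tex" an assumed sampler2D uniform name (the shader side is sketched under texture sampling above):
```python
from OpenGL.GL import *

# Create the texture object, bind it, and set the texture data.
texture = glGenTextures(1)
glBindTexture(GL_TEXTURE_2D, texture)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, image.shape[1], image.shape[0],
             0, GL_RGB, GL_UNSIGNED_BYTE, image)
# Without mipmaps, a non-default MIN filter is needed for completeness.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)

# Send the texture: activate texture unit zero, point the shader's sampler
# uniform at unit zero, and bind our texture to that unit.
glUseProgram(program)
glActiveTexture(GL_TEXTURE0)
glUniform1i(glGetUniformLocation(program, "tex"), 0)
glBindTexture(GL_TEXTURE_2D, texture)
```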
What is environment mapping?
Representing the distant “environment” as a texture
The intensity and colour of the reflected ray are found by texture mapping the environment map onto the rendered polygon
How does sphere mapping work?
Map the environment onto the surface of a distant sphere surrounding the scene
Simple calculations to find incident illumination
Inadequate resolution of texels near boundaries of map leads to distortions
How does cube mapping work?
The environment is mapped onto 6 faces of a cube
Simple to render map and calculate incident light
Each image easily rendered by placing “camera” at the centre of the cube facing each face in turn
6 textures need to be accessed (easy on GPUs)
Cube maps need to be recalculated for objects moving in the environment or changes in viewpoint
How is a cube map rendered?
Need to render the view from all six faces of the cube with framebuffers
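A sketch of this with PyOpenGL; cube_map is an assumed, already-allocated GL_TEXTURE_CUBE_MAP texture, and the six face enums are consecutive so they can be indexed by offset:
```python
from OpenGL.GL import *

fbo = glGenFramebuffers(1)
glBindFramebuffer(GL_FRAMEBUFFER, fbo)
for i in range(6):  # one render pass per cube face
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, cube_map, 0)
    # ... aim the camera through face i and draw the scene here ...
glBindFramebuffer(GL_FRAMEBUFFER, 0)  # return to the default framebuffer
```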
How do you sample from a cube map texture?
Calculate which texel from the cube map reflects onto which fragment by reflecting the view vector about the surface normal (done by the GLSL reflect() function)
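A fragment shader sketch of this lookup (variable names assumed); cube maps are indexed by a direction vector, so the reflected vector can be used directly:
```python
ENV_MAP_FRAGMENT_SOURCE = """
#version 330 core
uniform samplerCube env_map;  // the cube map texture
in vec3 view_dir;             // incident direction: camera towards the fragment
in vec3 normal;               // interpolated surface normal
out vec4 frag_colour;
void main() {
    vec3 r = reflect(normalize(view_dir), normalize(normal));
    frag_colour = texture(env_map, r);  // look up the environment along r
}
"""
```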
The result of what is a sequence of fragments?
Rasterization
We calculate texture coordinates (s, t) for each pixel using what operation on the texture coordinates at the triangle's vertices?
Barycentric interpolation
When the scissor test is enabled, it fails if the fragment's pixel lies outside of a specified rectangle of the screen. Where in the graphics pipeline is this test located?
Per-sample operations
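A sketch of enabling it with PyOpenGL; fragments whose pixels fall outside the rectangle are discarded:
```python
from OpenGL.GL import glEnable, glScissor, GL_SCISSOR_TEST

glEnable(GL_SCISSOR_TEST)
glScissor(0, 0, 320, 240)  # x, y, width, height of the allowed rectangle
```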