Graphics Flashcards

1
Q

What is a frame buffer?

A

A memory space that stores the image to be displayed as a grid of pixel values; the display is driven from its contents.

2
Q

What is double buffering?

A

The front buffer’s image is displayed on screen while the back buffer holds the image currently being rendered. The two buffers are then swapped, so the freshly rendered image is now displayed. WebGL does this for you automatically.
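
A minimal sketch of the resulting render loop (assuming a page with a canvas element; `drawScene` is a name chosen here, not a WebGL API):

```ts
// WebGL presents the back buffer automatically after each
// animation-frame callback returns: the implicit buffer swap.
const canvas = document.querySelector<HTMLCanvasElement>("canvas")!;
const gl = canvas.getContext("webgl")!;

function drawScene(): void {
  gl.clearColor(0.0, 0.0, 0.0, 1.0);
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  // ...draw calls render into the back buffer here...
  requestAnimationFrame(drawScene); // schedule the next frame
}
requestAnimationFrame(drawScene);
```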

3
Q

Vertex shader vs fragment shader

A

Vertex shader: the programmable shader that handles the processing of individual vertices (e.g. transforming their positions)

Fragment shader (= pixel shader): determines the colour of the pixels (fragments) that the rasterised primitive covers between the vertices
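
As a sketch, the smallest useful GLSL pair (WebGL 1 syntax); both are passed to WebGL as source strings:

```ts
// Vertex shader: runs once per vertex and outputs its clip-space position.
const vertexShaderSource = `
  attribute vec4 a_position;
  void main() {
    gl_Position = a_position;
  }
`;

// Fragment shader: runs once per covered pixel and outputs its colour.
const fragmentShaderSource = `
  precision mediump float;
  void main() {
    gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0); // constant orange
  }
`;
```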

4
Q

Vertex Buffer Objects

A

Contain the data WebGL needs to describe the geometry that will be rendered
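
A minimal sketch of creating and filling a VBO (the `gl` context is an assumption, declared here for completeness):

```ts
declare const gl: WebGLRenderingContext; // assumed WebGL context

// 2D positions for one triangle, interleaved as x, y pairs.
const positions = new Float32Array([
   0.0,  0.5,  // vertex 0
  -0.5, -0.5,  // vertex 1
   0.5, -0.5,  // vertex 2
]);

const vbo = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vbo);                       // make it the active VBO
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW); // upload to the GPU
```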

5
Q

Index Buffer Objects

A

Contain integers that are used as references pointing to data in VBOs, in order to enable the reuse of the same vertex
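
A sketch of how an IBO lets a quad share vertices between its two triangles (again assuming a `gl` context):

```ts
declare const gl: WebGLRenderingContext; // assumed WebGL context

// Two triangles describing a quad from only four stored vertices:
// vertices 0 and 2 are reused rather than duplicated in the VBO.
const indices = new Uint16Array([
  0, 1, 2,  // first triangle
  0, 2, 3,  // second triangle
]);

const ibo = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, ibo);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);
// Later: gl.drawElements(gl.TRIANGLES, 6, gl.UNSIGNED_SHORT, 0);
```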

6
Q

Local coordinates and world coordinates

A

Each object in a 3D scene is constructed in its own dedicated (local) coordinate system, which reduces the complication of 3D scene construction. Applying the model transform places an object into world coordinates, the single coordinate system shared by all objects in the scene. The view transform, which is the inverse of the camera’s placement in the world, then turns world coordinates into view coordinates relative to the camera.
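
In symbols (notation chosen here, not from the card), with $M_{\text{camera}}$ the matrix that places the camera in the world:

$$p_{\text{world}} = M_{\text{model}}\,p_{\text{local}}, \qquad p_{\text{view}} = M_{\text{view}}\,p_{\text{world}}, \qquad M_{\text{view}} = M_{\text{camera}}^{-1}$$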

7
Q

Perspective projection vs orthographic projection

A

Perspective: the view volume is a frustum, so the near and far planes differ in size and distant objects appear smaller
Orthographic: the view volume is a box, so the near and far planes are the same size and apparent size is independent of distance
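
For reference, the widely used OpenGL-style matrices for symmetric view volumes with near plane $n$, far plane $f$, half-width $r$ and half-height $t$ (standard conventions, not taken from the card):

$$P_{\text{persp}} = \begin{pmatrix} n/r & 0 & 0 & 0 \\ 0 & n/t & 0 & 0 \\ 0 & 0 & -\tfrac{f+n}{f-n} & -\tfrac{2fn}{f-n} \\ 0 & 0 & -1 & 0 \end{pmatrix} \qquad P_{\text{ortho}} = \begin{pmatrix} 1/r & 0 & 0 & 0 \\ 0 & 1/t & 0 & 0 \\ 0 & 0 & -\tfrac{2}{f-n} & -\tfrac{f+n}{f-n} \\ 0 & 0 & 0 & 1 \end{pmatrix}$$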

8
Q

View frustum

A

The visible region of the scene, bounded by six planes: near/far, top/bottom, left/right.

9
Q

Viewport transform

A

The viewport transform maps the projected view to the available space on the computer screen (the canvas).
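
In WebGL this is a single call (assuming a `gl` context):

```ts
declare const gl: WebGLRenderingContext; // assumed WebGL context

// Map normalised device coordinates ([-1, 1] in x and y) to pixels:
// x_window = x0 + (x_ndc + 1) / 2 * width, and similarly for y.
gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
```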

10
Q

Scaling operation

A

( a  0 )
( 0  b )

11
Q

Anticlockwise rotation

A

( cos θ  -sin θ )
( sin θ   cos θ )

12
Q

Shear along x

A

( 1  k )
( 0  1 )

13
Q

Reflect in x-axis

A

( 1   0 )
( 0  -1 )
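
A small sketch tying cards 10-13 together (names chosen here): applying a row-major 2×2 matrix to a point:

```ts
type Mat2 = [number, number, number, number]; // row-major: [a, b, c, d]
type Vec2 = [number, number];

// (x', y') = (a*x + b*y, c*x + d*y)
function apply(m: Mat2, p: Vec2): Vec2 {
  const [a, b, c, d] = m;
  const [x, y] = p;
  return [a * x + b * y, c * x + d * y];
}

// Anticlockwise rotation by 90 degrees sends (1, 0) to (0, 1).
const theta = Math.PI / 2;
const rotation: Mat2 = [
  Math.cos(theta), -Math.sin(theta),
  Math.sin(theta),  Math.cos(theta),
];
console.log(apply(rotation, [1, 0])); // approximately [0, 1]
```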

14
Q

Homogeneous coordinates

A

Coordinates in “projective space”. The w component scales all the others: the homogeneous 2D coordinate (x, y, w) has Cartesian form (x/w, y/w). Under homogeneous coordinates, 2D transformation matrices become 3×3. Multiplying a homogeneous coordinate by a non-zero scalar represents the same point as before.
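
For example (a standard result, not stated on the card), translation by $(t_x, t_y)$ is not a linear map in Cartesian 2D, but becomes a single 3×3 matrix in homogeneous coordinates:

$$\begin{pmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} x + t_x \\ y + t_y \\ 1 \end{pmatrix}$$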

15
Q

Perspective transformation

A

Transforms each object so that distant objects appear smaller. The frustum-shaped view volume becomes a regular parallelepiped (a box).

16
Q

Perspective transformation equations

A

x’ = dx/z
y’ = dy/z
z’ = z

d is the distance of the image plane from the centre of projection
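
A direct transcription of these equations (function name chosen here):

```ts
type Vec3 = [number, number, number];

// d is the distance of the image plane from the centre of projection;
// z is kept unchanged, as on the card (useful for depth comparisons).
function perspectiveProject([x, y, z]: Vec3, d: number): Vec3 {
  return [(d * x) / z, (d * y) / z, z];
}

// A point twice as far away projects to half the size:
console.log(perspectiveProject([2, 2, 4], 1)); // [0.5, 0.5, 4]
console.log(perspectiveProject([2, 2, 8], 1)); // [0.25, 0.25, 8]
```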

17
Q

Shading

A

The process of altering the colour of an object/surface/polygon based on the type of light source illuminating it.

18
Q

Flat shading

A

Assign a single colour to each face of an object

19
Q

Smooth shading

A

Apply lighting against the normal vector at each vertex to calculate vertex colours (vertex shader). The colours across a face are generated by interpolating the colours obtained at the face’s corner vertices (rasterisation). Also known as Gouraud shading.

20
Q

Phong shading

A

The normal vector at each point over an object surface is obtained by interpolating the normal vectors of the corner vertices of the surface (rasterisation). The colour of each surface point is then calculated by applying lighting against the interpolated normal vector at that point (fragment shader).
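
A sketch of the per-fragment half of Phong shading in GLSL (variable names chosen here): the rasteriser interpolates the per-vertex normal, and the fragment shader renormalises it before lighting:

```ts
const phongFragmentSource = `
  precision mediump float;
  uniform vec3 u_lightDir;   // direction towards the light
  uniform vec3 u_baseColor;  // base surface colour
  varying vec3 v_normal;     // normal interpolated by the rasteriser
  void main() {
    // interpolation shortens the normal, so renormalise per fragment
    vec3 n = normalize(v_normal);
    float diffuse = max(dot(n, normalize(u_lightDir)), 0.0);
    gl_FragColor = vec4(u_baseColor * diffuse, 1.0);
  }
`;
```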

21
Q

Types of light source

A

Directional light: like the sun, generates light from very far away, parallel rays
Point light: like a light bulb that emits light artificially in all directions from a point
Ambient light: represents indirect light, light emitted from all light sources and reflected by walls

22
Q

Calculating overall surface colour

A

surface colour by diffuse reflection + surface colour by ambient reflection

23
Q

Ambient reflection

A

Reflection of light from indirect light sources. It illuminates an object equally from all directions with the same intensity, so its brightness is the same at any position.

24
Q

Diffuse reflection

A

Reflection of light from a directional or point light. Light is reflected equally in all directions from where it hits (due to the rough surface). θ is the angle between the light direction and the surface normal (the direction perpendicular to the surface).

surface colour by diffuse = light colour * base colour of surface * cos θ
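
The formula as code (a sketch; for unit vectors, cos θ is the dot product of the normal and the light direction, clamped at zero for surfaces facing away from the light):

```ts
type Color = [number, number, number];
type Vec3 = [number, number, number];

function diffuseColour(light: Color, base: Color, normal: Vec3, lightDir: Vec3): Color {
  // cos(theta) via the dot product; both vectors assumed unit length
  const cosTheta = Math.max(
    normal[0] * lightDir[0] + normal[1] * lightDir[1] + normal[2] * lightDir[2],
    0,
  );
  return [
    light[0] * base[0] * cosTheta,
    light[1] * base[1] * cosTheta,
    light[2] * base[2] * cosTheta,
  ];
}
```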

25
Q

Using a point light object

A

The direction of the light from a point light source differs at each position in the 3D scene, so when calculating shading, you need to calculate the light direction at the specific position on the surface where the light hits. Pass the position of the light source and then calculate the light direction at each vertex position.
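
A GLSL vertex-shader sketch of that per-vertex calculation (uniform and attribute names chosen here):

```ts
const pointLightVertexSource = `
  attribute vec4 a_position;
  uniform mat4 u_modelMatrix;    // local to world transform
  uniform mat4 u_mvpMatrix;      // combined model-view-projection
  uniform vec3 u_lightPosition;  // point-light position in world space
  varying vec3 v_lightDir;
  void main() {
    gl_Position = u_mvpMatrix * a_position;
    vec3 worldPos = (u_modelMatrix * a_position).xyz;
    // the light direction differs per vertex for a point light
    v_lightDir = normalize(u_lightPosition - worldPos);
  }
`;
```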

26
Q

Different coordinates used in mapping

A

Parametric coordinates (u, v): a logical coordinate system for processing the surface and internal space of a 3D object
Texture coordinates (s, t): used to identify points in the image to be mapped
Local or world coordinates: used to position 3D objects
Window coordinates: where the output image is actually produced

27
Q

Forward texture mapping

A

Compute 3D positions of the texture points, and then project them onto the image plane. Mapping from texture coordinates to points on a surface requires three functions: x = x(s, t), y = y(s, t), z = z(s, t). The main problem is that adjacent texture points may project onto non-adjacent image points, creating an uncoloured area.

28
Q

Backwards texture mapping

A

Given a point on an object, we identify which point in the texture it corresponds to. We need a map of the form s = s(x, y, z), t = t(x, y, z). Good: makes sure every object point has a corresponding texel. Bad: mapping functions are hard to find.

29
Q

Two-part mapping

A

Map the texture to a simple intermediate surface, then map the textured intermediate surface to the object.

In the second phase, the correspondence can follow surface normals (mapped from the intermediate surface to the actual object, or vice versa) or vectors from the centre of the intermediate surface.

30
Q

MIP-mapping

A

MIP-mapping uses an image pyramid to precompute coarse versions of a texture. It solves the aliasing caused by under-sampling the texture (many texels mapping to a single pixel, e.g. on distant surfaces). It requires only 1/3 more storage, reduces the memory bandwidth consumed by running applications, and supports anti-aliasing.
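
In WebGL the pyramid is built with one call (sketch assuming a `gl` context and a loaded power-of-two `image`, as WebGL 1 requires for mipmaps):

```ts
declare const gl: WebGLRenderingContext;  // assumed WebGL context
declare const image: HTMLImageElement;    // assumed loaded, power-of-two image

const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
gl.generateMipmap(gl.TEXTURE_2D);  // precompute the image pyramid
// Trilinear filtering: blend between the two nearest MIP levels.
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);
```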

31
Q

Normal mapping

A

Normal vectors are encoded as an image, allowing us to generate a visually 3D effect by applying lighting to perturbed normal vectors on the object surface.
Advantage: we are using textures to alter the surface normal, which does not change the actual shape of the surface, so it imposes little performance overhead; the object is simply shaded as if it had a different shape.
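
A GLSL fragment sketch of unpacking and lighting the encoded normals (names chosen here; the tangent-space transform is omitted for brevity):

```ts
const normalMapFragmentSource = `
  precision mediump float;
  uniform sampler2D u_normalMap;  // normal vectors encoded as an image
  uniform vec3 u_lightDir;
  varying vec2 v_texCoord;
  void main() {
    // unpack colour channels from [0, 1] back to a [-1, 1] normal
    vec3 n = normalize(texture2D(u_normalMap, v_texCoord).rgb * 2.0 - 1.0);
    float diffuse = max(dot(n, normalize(u_lightDir)), 0.0);
    gl_FragColor = vec4(vec3(diffuse), 1.0);
  }
`;
```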

32
Q

Bump mapping

A

We treat the texture as a single-valued height function and compute the normal from the partial derivatives of the texture. The heights encode the amount by which to perturb N in the (u, v) directions of the space describing the object surface. Greyscale encodes height; harder to implement but easier to specify (the opposite of a normal map).
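
A sketch of deriving the perturbed normal from a height function via finite-difference partial derivatives (the function name and the flat-surface assumption are choices made here):

```ts
type Vec3 = [number, number, number];

// h(u, v) is the height function; eps is the finite-difference step.
function bumpNormal(h: (u: number, v: number) => number,
                    u: number, v: number, eps = 1e-3): Vec3 {
  const dhdu = (h(u + eps, v) - h(u - eps, v)) / (2 * eps);
  const dhdv = (h(u, v + eps) - h(u, v - eps)) / (2 * eps);
  // perturb the flat normal (0, 0, 1) by the height gradients
  const n: Vec3 = [-dhdu, -dhdv, 1];
  const len = Math.hypot(n[0], n[1], n[2]);
  return [n[0] / len, n[1] / len, n[2] / len];
}
```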

33
Q

Displacement mapping

A

Use texture map to actually move surface points, meaning geometry must be displaced before visibility is determined. Done as a preprocess or with complicated vertex/fragment shader implementation.

34
Q

Environment maps

A

We can simulate reflections by using the direction of the reflected ray to index a spherical texture map at infinity

35
Q

Texture maps for illumination

A

Also known as light maps.
The lighting is set up beforehand and all the expensive lighting calculations are done at pre-process time, avoiding runtime overhead. The visual quality of the lighting depends directly on the size of the light map textures. A diffuse texture map is applied first, and a light map is then multiplied with it. The light maps for different parts of an object are packed into one large light map texture.
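
The combination step as a GLSL fragment sketch (sampler and varying names chosen here):

```ts
const lightMapFragmentSource = `
  precision mediump float;
  uniform sampler2D u_diffuseMap;  // base colour texture
  uniform sampler2D u_lightMap;    // precomputed lighting, packed
  varying vec2 v_texCoord;         // diffuse texture coordinates
  varying vec2 v_lightMapCoord;    // coordinates into the packed light map
  void main() {
    vec4 base = texture2D(u_diffuseMap, v_texCoord);
    vec4 light = texture2D(u_lightMap, v_lightMapCoord);
    gl_FragColor = base * light;   // diffuse map modulated by the light map
  }
`;
```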

36
Q

Fog maps

A

Fog maps allow dynamic modification of light maps. We put fog objects into the scene, compute where they intersect the geometry, and then paint the fog density into a dynamic light map, using the same mapping as the static light map.