3D Graphics Flashcards

Covers a wide range of topics regarding 3D graphics and rendering

1
Q

List several differences between GPUs and CPUs.

A

CPU | GPU
---|---
Several cores | Many cores
Optimised for low latency | Optimised for high throughput
Good at serial processing | Good at parallel processing

2
Q

What components does a typical GPU consist of? How are they organised?

A

GPUs organise their cores into groups (called streaming multiprocessors on NVIDIA hardware, or compute units on AMD). Each group has the following:
- A set of cores
- Scheduler
- Register file
- Instruction cache
- Texture mapping units (TMUs)
- L1 cache

3
Q

Why are GPU and CPU memory incompatible?

A

GPU memory typically has a wide interface with more room for point-to-point connections, and runs at a higher clock speed than CPU memory; it is optimised for bandwidth. CPU memory tends to have lower latency, but lower bandwidth. These differing design trade-offs make the two incompatible.

4
Q

What bottlenecks can occur when transferring data between GPU and CPU?

A

Slow CPU, fast GPU - the CPU cannot feed data to the GPU quickly enough, leaving the GPU underutilised.
Fast CPU, slow GPU - the GPU cannot process and return the data it is sent quickly enough, stalling the CPU.

5
Q

List and briefly describe some modern graphics APIs.

A

OpenGL - A long-established cross-platform graphics API, originally from Silicon Graphics and now maintained by the Khronos Group.
OpenGL ES - OpenGL for Embedded Systems, a stripped-down subset of OpenGL. All ES apps will work on non-ES implementations, but not vice versa.
WebGL - A limited version of OpenGL ES, designed to run in a web browser.
Vulkan - Lower-level API by Khronos allowing more control and optimisation than OpenGL.
Metal - Graphics API designed specifically for Apple's hardware.
DirectX - Microsoft's Windows-only API family; Direct3D is the actual 3D graphics component (Direct3D 12 offers Vulkan-like low-level control), while DirectX also includes components for text, audio, video, raytracing etc.

6
Q

Explain how double-buffered rendering works, and why it is used.

A

This is a technique used to avoid stuttering, tearing and other artifacts. It accomplishes this by ensuring there is always a complete frame available to display.

The GPU renders into one of two buffers (the back buffer) while the other (the front buffer) is displayed on screen. When the next frame is due, the two buffers swap roles and the cycle repeats, so drawing never happens on the buffer being scanned out.
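
A minimal render loop using GLFW illustrates the swap (a sketch; GLFW windows are double-buffered by default):

```cpp
#include <GLFW/glfw3.h>

int main() {
    if (!glfwInit()) return -1;
    // GLFW creates a double-buffered framebuffer by default.
    GLFWwindow* window = glfwCreateWindow(800, 600, "Double buffering", nullptr, nullptr);
    if (!window) { glfwTerminate(); return -1; }
    glfwMakeContextCurrent(window);
    glfwSwapInterval(1); // sync swaps to the display refresh (vsync) to avoid tearing

    while (!glfwWindowShouldClose(window)) {
        glClear(GL_COLOR_BUFFER_BIT); // draw the next frame into the back buffer
        // ... draw calls go here ...
        glfwSwapBuffers(window);      // present: back buffer becomes the front buffer
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}
```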

7
Q

Describe the differences between vector and raster graphics.

A

Raster Graphics | Vector Graphics
---|---
Composed of pixel data | Composed of paths
Refreshes the whole image regardless of complexity | Refresh time tied to the number of lines displayed
Scan conversion required | Scan conversion not necessary
Scaling shows deformities | Scaling does not show deformities
Space occupied depends on image quality | Space occupied does not depend on quality

8
Q

Describe the steps of the rendering pipeline.

A

1) Vertex Specification - Prepares vertex data for processing.
2) Vertex Shader - Processes each input vertex to produce exactly one output vertex (1:1); see the sketch after this list.
3) Tessellation - Subdivides primitives, used to add higher level of detail (LOD) dynamically.
4) Geometry Shader - Processes whole primitives to modify them or emit new ones.
5) Vertex Post-Processing - Stores results (transform feedback) and clips primitives against the view volume.
6) Primitive Assembly - Groups vertices into primitives and performs face culling.
7) Rasterization - Scan-converts primitives into fragments.
8) Fragment Shader - Processes fragment data, outputs colour, depth and stencil values.
9) Per-Sample Operations - Includes the stencil and depth tests, blending and more.
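
Of these, the vertex and fragment shaders are the two stages almost every pipeline programs. A minimal sketch of compiling and linking them, assuming an OpenGL 3+ context and function loader (e.g. glad) are already set up; the GLSL sources are illustrative placeholders:

```cpp
// Assumes an OpenGL 3+ context and a function loader (e.g. glad) are set up.
const char* vsSrc = R"(#version 330 core
layout(location = 0) in vec3 aPos;   // stage 2: runs once per input vertex
void main() { gl_Position = vec4(aPos, 1.0); }
)";
const char* fsSrc = R"(#version 330 core
out vec4 FragColor;                  // stage 8: runs once per rasterized fragment
void main() { FragColor = vec4(1.0, 0.5, 0.2, 1.0); }
)";

GLuint compile(GLenum type, const char* src) {
    GLuint s = glCreateShader(type);
    glShaderSource(s, 1, &src, nullptr);
    glCompileShader(s);
    return s;
}

GLuint makeProgram() {
    GLuint vs = compile(GL_VERTEX_SHADER, vsSrc);
    GLuint fs = compile(GL_FRAGMENT_SHADER, fsSrc);
    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog); // fixed-function stages (assembly, rasterization,
    glDeleteShader(vs);  // per-sample operations) run between and after these
    glDeleteShader(fs);
    return prog;
}
```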

9
Q

Describe how transparency can be achieved in 3D rendering.

A

Transparency can be achieved by blending the fragments of semi-transparent surfaces with what has already been rendered behind them. Correct blending depends on draw order relative to the camera, so it is typically done with the following steps:

1) Draw all opaque objects first.
2) Sort all transparent objects by distance from the camera (furthest to nearest).
3) Draw the transparent objects in that back-to-front order.

This may be difficult to achieve depending on how the scene is set up, and it won't work with deferred rendering. Order-independent transparency techniques do exist, however.
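
A sketch of this draw-order logic in C++ with OpenGL blending state; the Renderable type and its draw() method are hypothetical stand-ins for whatever the engine provides:

```cpp
#include <algorithm>
#include <vector>
#include <glm/glm.hpp>
// OpenGL state calls below assume a context/loader (e.g. glad) is set up.

// Hypothetical object type: anything with a world position and a draw() method.
struct Renderable { glm::vec3 position; /* ... */ void draw() const; };

void drawScene(std::vector<Renderable*>& opaque,
               std::vector<Renderable*>& transparent,
               const glm::vec3& cameraPos) {
    // 1) Opaque objects first, in any order; depth testing handles occlusion.
    for (auto* obj : opaque) obj->draw();

    // 2) Sort transparent objects back-to-front (furthest first) so that
    //    nearer surfaces blend over those behind them.
    std::sort(transparent.begin(), transparent.end(),
              [&](const Renderable* a, const Renderable* b) {
                  return glm::distance(a->position, cameraPos)
                       > glm::distance(b->position, cameraPos);
              });

    // 3) Draw them with alpha blending; keep depth testing on, but disable
    //    depth writes so transparent surfaces don't occlude each other.
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE);
    for (auto* obj : transparent) obj->draw();
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
}
```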

10
Q

What are the differences between forward and deferred rendering?

A

In forward rendering, each object's geometry passes through the full pipeline (vertex and fragment shaders), lighting is computed per fragment, and the results are written to the final render target.

In deferred rendering, the fragment shaders omit lighting calculations. Instead they write surface attributes (position, normal, albedo etc.) into an intermediate G-buffer, and a separate pass then performs all lighting calculations before writing to the final render target.

The main benefit is performance. Instead of lighting every fragment (many of which are later discarded), lighting is computed once per screen pixel. Given that there are often far more fragments than pixels, this reduces the number of calculations required.

There are several disadvantages, however (a G-buffer setup sketch follows the list):
- Transparency isn't possible (a common workaround is to render transparent objects with forward rendering).
- The G-buffer used to store intermediate data can consume a lot of memory.
- Anti-aliasing is difficult to achieve without workarounds.
- Material variety is limited, since every surface must fit the same G-buffer layout.
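
A sketch of G-buffer creation in OpenGL (context and function loader assumed); the three-attachment layout of position, normal and albedo is a common but not universal choice:

```cpp
// A sketch of G-buffer creation (OpenGL 3+, function loader assumed).
// The geometry pass writes surface data here; the lighting pass reads it.
GLuint createGBuffer(int width, int height, GLuint out[3]) {
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    GLenum attachments[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1,
                              GL_COLOR_ATTACHMENT2 };
    GLint formats[3] = { GL_RGBA16F, GL_RGBA16F, GL_RGBA8 }; // position, normal, albedo
    for (int i = 0; i < 3; ++i) {
        glGenTextures(1, &out[i]);
        glBindTexture(GL_TEXTURE_2D, out[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, formats[i], width, height, 0,
                     GL_RGBA, GL_FLOAT, nullptr);
        glFramebufferTexture2D(GL_FRAMEBUFFER, attachments[i], GL_TEXTURE_2D, out[i], 0);
    }
    glDrawBuffers(3, attachments); // geometry pass writes all three targets at once
    // A depth attachment would also be added here in a full implementation.
    return fbo;
}
```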

11
Q

What are the differences between rasterization and raytracing?

A

Rasterization works by projecting each object's vertices onto the screen and scan-converting the resulting primitives into fragments. While this approach is fast, it does not model how light actually travels through the scene, which limits its ability to produce photorealistic images.

Raytracing works the opposite way - it fires rays from the camera through each screen pixel into the scene. Bounces off objects are followed until a light source is reached (though bounces are capped to keep calculations manageable). This allows raytracing to produce more photorealistic images. Pathtracing is a related Monte Carlo technique that randomly samples many such light paths per pixel to approximate global illumination.
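
As an illustration of the per-ray work, here is a minimal ray-sphere intersection test, the core geometric operation of a ray tracer (a sketch, not tied to any particular renderer):

```cpp
#include <cmath>
#include <optional>
#include <glm/glm.hpp>

// Returns the distance t along the ray to the nearest hit, if any.
// Solves |origin + t*dir - center|^2 = r^2, a quadratic in t (dir must be unit length).
std::optional<float> raySphere(const glm::vec3& origin, const glm::vec3& dir,
                               const glm::vec3& center, float radius) {
    glm::vec3 oc = origin - center;
    float b = glm::dot(oc, dir);                 // half the linear coefficient
    float c = glm::dot(oc, oc) - radius * radius;
    float disc = b * b - c;                      // quarter of the discriminant
    if (disc < 0.0f) return std::nullopt;        // ray misses the sphere
    float t = -b - std::sqrt(disc);              // nearest of the two roots
    if (t < 0.0f) return std::nullopt;           // sphere is behind the ray origin
    return t;
}
```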

12
Q

How is shadow mapping accomplished?

A

Shadow mapping is accomplished in two passes. The first pass renders the scene from the light source's perspective to generate a depth map. This requires a light-space matrix (built from the light's view and projection matrices) and the vertex positions; no fragment shader is needed, as depth values are written automatically per fragment. The result is rendered onto a texture of some chosen size.

The second pass then renders the scene normally using that shadow map texture. The texture is passed into the fragment shader, which projects each fragment into light space, tests whether it lies in shadow, and adjusts its brightness accordingly.
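
A sketch of the first-pass light-space matrix using glm; the orthographic bounds are illustrative placeholders suitable for a directional light:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// First pass: the matrix that transforms world positions into light clip
// space, so the scene can be depth-rendered from the light's point of view.
glm::mat4 lightSpaceMatrix(const glm::vec3& lightPos, const glm::vec3& target) {
    // Orthographic projection suits a directional light; bounds are scene-dependent.
    glm::mat4 lightProj = glm::ortho(-10.0f, 10.0f, -10.0f, 10.0f, 1.0f, 50.0f);
    glm::mat4 lightView = glm::lookAt(lightPos, target, glm::vec3(0.0f, 1.0f, 0.0f));
    return lightProj * lightView;
}

// In the second pass, the fragment shader projects each fragment into light
// space and compares its depth against the stored shadow map value, roughly:
//   float shadow = currentDepth > texture(shadowMap, uv).r ? 1.0 : 0.0;
```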

13
Q

What causes shadow map aliasing, and how can we combat it?

A

Shadow map aliasing refers to visual defects in shadows caused by a mismatch between shadow map resolution and a fragment's distance from the eye. Too high a pixel-to-texel ratio and shadows appear blocky; too low and other artifacts start getting introduced.

Cascaded shadow maps (CSM) are a technique that can be used to fix this. The view frustum is split into several sub-frusta, and a shadow map is rendered for each, with resolution decreasing for more distant cascades. The fragment shader then selects the appropriate cascade for each fragment's distance and shades it accordingly.
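
One common way to choose the cascade split distances blends logarithmic and uniform splits (the "practical split scheme"); a sketch, where the lambda parameter and its default are conventional choices rather than anything from this card:

```cpp
#include <cmath>
#include <vector>

// Computes split depths between nearZ and farZ for `count` cascades, blending
// a logarithmic and a uniform distribution by `lambda` (0 = uniform, 1 = log).
std::vector<float> cascadeSplits(float nearZ, float farZ, int count, float lambda = 0.5f) {
    std::vector<float> splits(count);
    for (int i = 1; i <= count; ++i) {
        float p = static_cast<float>(i) / count;
        float logSplit = nearZ * std::pow(farZ / nearZ, p); // dense near the camera
        float uniSplit = nearZ + (farZ - nearZ) * p;        // evenly spaced
        splits[i - 1] = lambda * logSplit + (1.0f - lambda) * uniSplit;
    }
    return splits;
}
```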

14
Q

List 3 culling techniques.

A

Frustum - Culls all objects lying entirely outside the viewing frustum.
Depth - Used for hidden-surface removal; compares per-fragment z-values to determine whether one surface lies in front of another (from the camera's perspective).
Backface - Culls polygons whose faces point away from the camera, determined from winding order or the face normal (see the sketch below).
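
Backface culling can be enabled in the API or done by hand with a dot product; a sketch of both (the manual version assumes counter-clockwise front-face winding):

```cpp
#include <glm/glm.hpp>

// API route (OpenGL): cull triangles whose vertices appear in clockwise order
// on screen, which are the back faces under the default convention.
//   glEnable(GL_CULL_FACE);
//   glCullFace(GL_BACK);

// Manual test: a triangle faces away from the camera if its normal points in
// roughly the same direction as the ray from the camera toward it.
bool isBackFace(const glm::vec3& v0, const glm::vec3& v1,
                const glm::vec3& v2, const glm::vec3& cameraPos) {
    glm::vec3 normal = glm::cross(v1 - v0, v2 - v0); // face normal (CCW winding)
    glm::vec3 toFace = v0 - cameraPos;               // camera-to-triangle direction
    return glm::dot(normal, toFace) > 0.0f;          // facing away: cull it
}
```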

15
Q

What buffers are available for each pixel?

A

Colour - Holds the pixel's colour data.
Depth - Holds the depth value for the pixel (AKA the Z-buffer).
Stencil - An extra per-pixel data mask used to determine whether the pixel gets drawn.

16
Q

What is Z-fighting? What techniques can be used to combat it?

A

This occurs when two coplanar (or nearly coplanar) polygons occupy almost the same space. Their depth values are so close that the depth buffer cannot consistently resolve which surface is in front, creating a flickering visual artifact. The following techniques can be used to deal with it (a projection-matrix sketch follows the list):
- Use a higher-resolution depth buffer.
- Move the polygons further apart.
- Reduce the ratio between the far and near plane distances to improve depth buffer accuracy.
- Use the stencil buffer to record which polygon should be displayed in front.
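
To illustrate the far/near-ratio technique, compare two glm projection matrices (all values are illustrative):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Depth precision depends heavily on the far/near ratio: pushing the near
// plane out helps far more than pulling the far plane in.
glm::mat4 badProj  = glm::perspective(glm::radians(60.0f), 16.0f / 9.0f, 0.01f, 1000.0f);
glm::mat4 goodProj = glm::perspective(glm::radians(60.0f), 16.0f / 9.0f, 1.0f,  1000.0f);
// The second matrix has a 100x smaller far/near ratio, concentrating depth
// precision where z-fighting between close surfaces actually occurs.
```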

17
Q

What is the difference between left and right-handed coordinates?

A

In both systems, the X-axis points 'right' and the Y-axis points 'up'.
In a left-handed system the Z-axis points 'forward' (into the screen); in a right-handed system it points 'backward' (out of the screen).

18
Q

What are the vector cross and dot products? What might we use each for?

A

Cross product - combines two vectors to produce a third vector perpendicular to both originals. One example of its use is computing a plane's normal for lighting calculations.

Dot product - combines two vectors to produce a scalar value. For unit vectors, this scalar corresponds to how closely aligned they are (1 when parallel, 0 when perpendicular, -1 when opposite), so it can be used to calculate the angle between two vectors. It is also useful for tasks such as ray-plane intersection tests.
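
A minimal sketch of both products on a hand-rolled Vec3 type (any maths library such as glm provides these already):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Cross product: a vector perpendicular to both a and b, e.g. a surface
// normal computed from two triangle edges.
Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Dot product: a scalar measuring alignment; for unit vectors it equals
// the cosine of the angle between them.
float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Example use: the angle between two arbitrary vectors via the dot product.
float angleBetween(const Vec3& a, const Vec3& b) {
    float lenA = std::sqrt(dot(a, a)), lenB = std::sqrt(dot(b, b));
    return std::acos(dot(a, b) / (lenA * lenB));
}
```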

19
Q

What is a unit vector? What might we use it for?

A

A unit vector is one whose magnitude is 1. It represents a pure direction with no scaling; normals and direction vectors are usually normalised to unit length so that, for example, dot products directly yield cosines of angles (see the sketch below).
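
A sketch of normalisation, reusing the Vec3 type and dot() from the previous card:

```cpp
// Normalising divides a vector by its own magnitude, leaving a direction
// of length 1. Assumes v is non-zero.
Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(dot(v, v)); // magnitude of v
    return { v.x / len, v.y / len, v.z / len };
}
```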

20
Q

Explain the Model-View-Projection matrix components, and what it is used for.

A

The MVP matrix is the product of three transformations that take vertices from model space to clip space.

Model - Composed of translation, rotation and scale matrices. Transforms vertices from model space to world space.
View - Describes the camera's own position and rotation; transforms vertices from world space to camera (view) space. It is typically premultiplied with the model matrix, since we usually render from a single camera.
Projection - Transforms from camera space to clip space. The matrix is either perspective (a truncated-pyramid frustum, where closer objects appear larger than further ones) or orthographic (a cuboid volume, where all objects appear the same size regardless of distance).
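
A sketch of building an MVP matrix with glm; all positions, angles and clip distances are illustrative placeholders:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Model: translate * rotate * scale, applied to each vertex right-to-left.
glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 1.0f, -5.0f))
                * glm::rotate(glm::mat4(1.0f), glm::radians(45.0f), glm::vec3(0, 1, 0))
                * glm::scale(glm::mat4(1.0f), glm::vec3(2.0f));
// View: derived from the camera's position and orientation.
glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 2.0f, 3.0f),  // camera position
                             glm::vec3(0.0f, 0.0f, 0.0f),  // look-at target
                             glm::vec3(0.0f, 1.0f, 0.0f)); // world up
// Projection: perspective here; glm::ortho would give an orthographic volume.
glm::mat4 proj = glm::perspective(glm::radians(60.0f), 16.0f / 9.0f, 0.1f, 100.0f);
glm::mat4 mvp = proj * view * model; // model -> world -> camera -> clip space
```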

21
Q

What is a quadtree? What might you use them for?

A

A quadtree is a tree data structure for storing 2D points (or objects) in a way that captures spatial closeness, making nearby items cheap to find. The 3D equivalent is an octree.

A quadtree is built by recursively repeating the following steps (a sketch follows the list):
1) Divide the 2D space into 4 boxes.
2) If a box contains more points than some capacity threshold, subdivide it into four children.
3) Recurse over each child.
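
A minimal sketch of point insertion with a capacity-based subdivision rule (the capacity of 4 is an arbitrary illustrative choice):

```cpp
#include <array>
#include <cstddef>
#include <memory>
#include <vector>

struct Point { float x, y; };
struct AABB {
    float cx, cy, half; // centre and half-extent of a square region
    bool contains(Point p) const {
        return p.x >= cx - half && p.x < cx + half &&
               p.y >= cy - half && p.y < cy + half;
    }
};

struct Quadtree {
    static constexpr std::size_t Capacity = 4; // subdivide past this many points
    AABB bounds;
    std::vector<Point> points;
    std::array<std::unique_ptr<Quadtree>, 4> children;

    explicit Quadtree(AABB b) : bounds(b) {}

    bool insert(Point p) {
        if (!bounds.contains(p)) return false;           // not our region
        if (!children[0] && points.size() < Capacity) {  // leaf with room
            points.push_back(p);
            return true;
        }
        if (!children[0]) subdivide();                   // step 2: over capacity
        for (auto& c : children)                         // step 3: recurse
            if (c->insert(p)) return true;
        return false;
    }

    void subdivide() { // step 1: split this region into four quarter boxes
        float h = bounds.half / 2;
        children[0] = std::make_unique<Quadtree>(AABB{bounds.cx - h, bounds.cy - h, h});
        children[1] = std::make_unique<Quadtree>(AABB{bounds.cx + h, bounds.cy - h, h});
        children[2] = std::make_unique<Quadtree>(AABB{bounds.cx - h, bounds.cy + h, h});
        children[3] = std::make_unique<Quadtree>(AABB{bounds.cx + h, bounds.cy + h, h});
        for (Point q : points)               // push existing points down
            for (auto& c : children)
                if (c->insert(q)) break;
        points.clear();
    }
};
```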

Quadtrees are typically used for static objects; dynamic objects would require constant, potentially expensive updates to the tree.

There are several applications of quadtrees, including:
- Frustum culling
- Collision detection
- Mesh generation
- Spatial indexing
