Volume Rendering - Week 9 Flashcards

1
Q

What is volume rendering?

A

Rendering the insides of things, rather than just their surfaces.

Drawing things that are amorphous or don’t have clear boundaries.

2
Q

Is a LiDAR / photo scanner / photogrammetry point cloud an example of the volume rendering covered in this course?

A

No. Point clouds describe surfaces, and Week 9 is not about surfaces.

3
Q

What does CAT stand for in CAT scanning?

A

Computed axial tomography

4
Q

Describe what a CAT scan does.

A

Uses X-rays to take a sequence of 2D cross-sectional images and computationally interpret that stack of images as a 3D volume.

The scanner returns the data as radiodensity values (numbers) measured in Hounsfield units.

5
Q

What is a transfer function?

A

A mapping that relates the numeric data to colours and opacities so it can be 3D rendered (e.g. a different colour for bone than for muscle tissue).
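
A minimal sketch of what such a mapping might look like, assuming the data are Hounsfield units; the thresholds and colours below are purely illustrative, not values from the course.

```python
import numpy as np

def transfer_function(hu):
    """Map a Hounsfield-unit value to an illustrative (R, G, B, A) tuple.

    The thresholds and colours are made up for illustration only.
    """
    if hu > 300:          # roughly bone-like densities
        return (1.0, 1.0, 0.9, 0.9)   # near-white, mostly opaque
    elif hu > 0:          # roughly soft-tissue densities
        return (0.8, 0.2, 0.2, 0.3)   # reddish, semi-transparent
    else:                 # air / very low density
        return (0.0, 0.0, 0.0, 0.0)   # fully transparent

# Apply the transfer function to every sample in a (toy) volume.
volume = np.random.uniform(-1000, 1000, size=(64, 64, 64))
rgba = np.array([transfer_function(v) for v in volume.ravel()]).reshape(64, 64, 64, 4)
```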

6
Q

What are the steps for direct volume rendering?

A
  1. Design a transfer function to classify the data points, assigning each one an opacity (alpha) and a colour.
  2. Raycast through the view plane to work out the colour of each pixel.

Stopping here gives images that look flat, like an X-ray: we have pretty much replicated the process that happens with an actual X-ray, reversing what the CAT scanner did in the first place. This is called additive reprojection.

  3. Generate faux surface normals from the data by looking at how the voxel values change along each of the three dimensions; the x, y and z rates of change form a vector that can be used as a kind of surface normal (see the sketch below). It doesn’t mean anything in a physical/absolute sense, but because the voxels are samples of a continuous real-world feature the data should vary smoothly, so the vector does, in some sense, represent the local surface for lighting models.

Plug these normals into local illumination models to get plausible shading without having to pin down a definite surface.

This is called direct volume rendering: we don’t make any hard decisions about the data, we let the human brain interpret it for us.
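
A minimal sketch of step 3, the faux normals: central differences of the voxel values approximate the gradient, which is then normalised and treated as a surface normal. Function and variable names are my own, not from the course.

```python
import numpy as np

def gradient_normals(volume):
    """Approximate surface normals as the normalised gradient of the voxel data.

    Uses central differences along x, y and z (np.gradient), so the normal at
    each voxel points in the direction the data changes fastest.
    """
    gx, gy, gz = np.gradient(volume.astype(float))
    normals = np.stack([gx, gy, gz], axis=-1)
    lengths = np.linalg.norm(normals, axis=-1, keepdims=True)
    # Avoid dividing by zero in perfectly flat regions of the data.
    return normals / np.maximum(lengths, 1e-8)

normals = gradient_normals(np.random.rand(32, 32, 32))
```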

7
Q

What is additive re-projection?

A
  1. Design a transfer function to classify the data points, assigning each one an opacity (alpha) and a colour.
  2. Raycast through the view plane, adding up the values along each ray to work out the colour of each pixel.

Stopping at this point gives images that look flat, like an X-ray: we have pretty much replicated the process that happens with an actual X-ray, reversing what the CAT scanner did in the first place. This is called additive reprojection.

We can use it to render the volume from lots of angles for an animation, which gives a good feeling for the object; the brain can probably deconstruct a single 2D frame reasonably well too.
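
A minimal sketch of the idea for an axis-aligned view, with illustrative names: casting a parallel ray through each pixel amounts to summing the voxel values along one axis, giving a flat, X-ray-like image.

```python
import numpy as np

def additive_reprojection(volume, axis=2):
    """Axis-aligned approximation of additive reprojection.

    Sums the voxel values along one axis (one parallel ray per pixel), then
    rescales to [0, 1] so the result can be shown as a flat image.
    """
    image = volume.sum(axis=axis)
    return (image - image.min()) / (image.max() - image.min() + 1e-8)

flat_image = additive_reprojection(np.random.rand(64, 64, 64))
```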

8
Q

What is a downside to direct volume rendering?

A

It is viewpoint dependent, so the rendering has to be redone whenever the viewpoint changes. CPUs/GPUs are getting faster at doing that, but it is still computationally fairly painful.

9
Q

What are iso-lines?

A

Lines of constant value, such as the contour lines used to denote height on maps or the isobars used for pressure on weather charts.

10
Q

What is marching cubes?

A

For each cube of eight neighbouring data points, it classifies each corner as “inside” or “outside” a surface defined by a threshold value, then works out which triangles separate the inside corners from the outside corners. In effect it generates a 3D analogue of iso-lines (an iso-surface).
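
A minimal sketch of the corner-classification step, assuming a threshold in the same units as the data: packing the eight inside/outside flags into one byte gives an index that identifies the cube’s configuration (used for the lookup on the next card). Names are illustrative.

```python
def cube_index(corner_values, iso):
    """Pack the inside/outside status of a cube's 8 corners into an 8-bit index.

    corner_values: the 8 data values at the cube's corners, in a fixed order.
    iso: the threshold that defines the surface.
    Returns an integer in [0, 255] identifying this inside/outside configuration.
    """
    index = 0
    for bit, value in enumerate(corner_values):
        if value < iso:            # this corner counts as "inside" the surface
            index |= 1 << bit
    return index
```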

11
Q

What is a triangulation table?

A

A dictionary-style lookup table that maps each of the 256 possible “inside”/“outside” configurations of a cube’s eight corners to the triangles that should be generated for that cube.
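
A hedged sketch of how such a table might be used with the cube index from the previous card. TRI_TABLE here is a stand-in for a real marching-cubes triangulation table, which has 256 entries, each listing which cube edges the triangle vertices lie on; only two illustrative entries are shown.

```python
# Placeholder table: a real one has all 256 entries, each a list of edge-index
# triples (one triple per triangle).
TRI_TABLE = {
    0: [],            # all corners outside: no triangles
    1: [(0, 8, 3)],   # one corner inside: a single corner-cutting triangle
}

def triangles_for_cube(index):
    """Look up the triangles to emit for a cube, given its 8-bit corner index."""
    return TRI_TABLE.get(index, [])

print(triangles_for_cube(1))   # [(0, 8, 3)]
```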

12
Q

How can we improve the marching cubes algorithm to stop it looking blocky because we always use the middle of the edge?

A

Interpolation: use the values at the two data points on either end of the edge (perhaps in Hounsfield units) to place the vertex where the iso-value is actually crossed, instead of always using the middle of the edge.
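
A minimal sketch of that linear interpolation, with illustrative names: given the iso-value and the data values at the edge’s two endpoints, the vertex is placed the corresponding fraction of the way along the edge.

```python
def interpolate_vertex(p0, p1, v0, v1, iso):
    """Place a vertex on the edge p0-p1 where the data crosses the iso-value.

    p0, p1: (x, y, z) positions of the edge's endpoints.
    v0, v1: data values (e.g. Hounsfield units) at those endpoints.
    """
    if abs(v1 - v0) < 1e-8:           # nearly equal values: fall back to midpoint
        t = 0.5
    else:
        t = (iso - v0) / (v1 - v0)    # fraction of the way along the edge
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

vertex = interpolate_vertex((0, 0, 0), (1, 0, 0), v0=100.0, v1=400.0, iso=300.0)
```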

13
Q

What is proxy geometry?

A

Rendering a stack of textured image slices on top of each other, blended together, to create the 3D effect.
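
A minimal sketch of the idea for an axis-aligned view, assuming the slices are already RGBA images: composite them back to front with standard alpha blending. Names and the blending setup are illustrative, not from the course.

```python
import numpy as np

def composite_slices(rgba_slices):
    """Blend a stack of RGBA slices back to front (the 'over' operator).

    rgba_slices: array of shape (num_slices, H, W, 4) with values in [0, 1],
    ordered from nearest to farthest.
    """
    h, w = rgba_slices.shape[1:3]
    out = np.zeros((h, w, 4))
    for sl in rgba_slices[::-1]:                 # start with the farthest slice
        alpha = sl[..., 3:4]
        out[..., :3] = sl[..., :3] * alpha + out[..., :3] * (1 - alpha)
        out[..., 3:4] = alpha + out[..., 3:4] * (1 - alpha)
    return out

image = composite_slices(np.random.rand(16, 64, 64, 4))
```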
