Volume Rendering - Week 9 Flashcards
What is volume rendering?
Techniques for drawing the insides of things.
Also for drawing things that are amorphous or don't have clear boundaries.
Is a LiDAR / photo-scanner / photogrammetry point cloud an example of the volume rendering in this course?
No: those techniques capture surfaces, and Week 9 isn't talking about surfaces.
What does CAT stand for in CAT scanning?
Computed axial tomography
Describe what a CAT scan does.
Uses X-rays to take a sequence of 2D cross-sectional images and computationally interpret that stack of images as a 3D volume.
The scanner returns these values as radiodensity values (numbers) in Hounsfield units.
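For context, the standard definition of the Hounsfield scale (general background, not lecture-specific): HU = 1000 × (μ − μ_water) / (μ_water − μ_air), where μ is the measured attenuation coefficient, so water sits at 0 HU and air at −1000 HU.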
What is a transfer function?
A mapping that relates the numeric data to colours and opacities so it can be 3D rendered (e.g. a different colour for bone than for muscle tissue).
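A minimal sketch of such a transfer function in Python/NumPy. The Hounsfield thresholds and colours here are illustrative assumptions, not values from the course:

```python
import numpy as np

def transfer_function(hu):
    """Map a NumPy array of Hounsfield values to RGBA (colour + opacity)."""
    rgba = np.zeros(hu.shape + (4,))
    soft = (hu >= -500) & (hu < 300)    # assumed soft-tissue range: reddish, mostly transparent
    bone = hu >= 300                    # assumed bone range: near-opaque off-white
    rgba[soft] = [0.8, 0.2, 0.2, 0.05]
    rgba[bone] = [1.0, 1.0, 0.9, 0.9]
    return rgba                         # everything below -500 HU (air) stays fully transparent
```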
What are the steps for a direct volumetric rendering?
- Design a transfer function to classify / assign opacities/alphas and colours to the data points
- Raycast through the viewplane to figure out the colour of each pixel
Stopping here gives images that look flat, like an X-ray: we have pretty much replicated the process that happens with an actual X-ray, reversing the process the CAT scanner used in the first place. This is called additive reprojection.
- Generate faux surface normals from the data: the changes in the voxel value along each of the three dimensions give the x, y and z components of a vector that can be used as a kind of surface normal. It doesn't mean anything in a physical/absolute sense, but if the data varies smoothly (as it should, since the voxels are samples of a continuous feature in the real world) it represents the surface well enough for lighting models.
- Plug these normals into local illumination models to get plausible shading without having to pin down a definite surface.
This whole pipeline is called direct volume rendering: we make no hard decisions about the data, and let the human brain interpret it for us. (A sketch of the pipeline follows.)
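A minimal ray-marching sketch, assuming a NumPy volume and the transfer_function sketched earlier; the faux normals come from per-axis central differences (np.gradient), and the axis-aligned orthographic rays and Lambert-only shading are simplifications, not the course's exact method:

```python
import numpy as np

def direct_volume_render(hu, transfer_function, light_dir=np.array([0.0, 0.0, -1.0])):
    """Direct volume rendering with one axis-aligned ray per (y, x) pixel.

    hu: (Z, Y, X) scalar volume; transfer_function maps it to (Z, Y, X, 4) RGBA.
    """
    rgba = transfer_function(hu)
    gz, gy, gx = np.gradient(hu.astype(float))       # faux surface normals, one component per axis
    Z, Y, X = hu.shape
    image = np.zeros((Y, X, 3))
    alpha = np.zeros((Y, X))
    for z in range(Z):                               # march every ray front to back
        n = np.stack([gx[z], gy[z], gz[z]], axis=-1)
        length = np.linalg.norm(n, axis=-1, keepdims=True)
        n = np.divide(n, length, out=np.zeros_like(n), where=length > 1e-6)
        lambert = np.clip(n @ light_dir, 0.0, 1.0)[..., None]   # local illumination term
        src_rgb = rgba[z, ..., :3] * lambert         # constant-density regions get no shading
        src_a = rgba[z, ..., 3]
        image += ((1.0 - alpha) * src_a)[..., None] * src_rgb   # front-to-back "over" compositing
        alpha += (1.0 - alpha) * src_a
    return image
```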
What is additive re-projection?
- Design a transfer function to classify / assign opacities/alphas and colours to the data points
- Raycast through the viewplane to figure out the colour of each pixel
Stopping here gives images that look flat, like an X-ray: we have pretty much replicated the process that happens with an actual X-ray, reversing the process the CAT scanner used in the first place. This is called additive reprojection (sketched below).
It can be used to render lots of angles for an animation, to get a good feeling for the object; the brain can probably deconstruct a single 2D frame reasonably well too.
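A minimal sketch, on the assumption that additive reprojection simply accumulates classified values along each ray with no occlusion; transfer_function is the illustrative one sketched earlier:

```python
import numpy as np

def additive_reprojection(hu, transfer_function):
    """X-ray-like image: sum colour (weighted by opacity) along each axis-aligned ray."""
    rgba = transfer_function(hu)               # (Z, Y, X, 4)
    weighted = rgba[..., :3] * rgba[..., 3:]   # weight each voxel's colour by its opacity
    image = weighted.sum(axis=0)               # integrate along the ray (z) axis, no occlusion
    return image / image.max()                 # normalise for display (assumes a non-empty volume)
```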
What is a downside to direct volume rendering?
It is viewpoint-dependent, so the whole volume has to be re-rendered for every new view; while CPUs/GPUs are getting faster at doing that, it's still computationally fairly painful.
What are iso-lines?
Lines of constant value, such as the contour lines used to denote height on maps, or the isobars denoting pressure on weather charts.
What is marching cubes?
For each cube of 8 neighbouring data points, it works out which triangles separate the corners that are “inside” the surface from the corners that are “outside”, with inside/outside defined by a threshold (isovalue). In effect it generates a 3D analogue of iso-lines: an iso-surface.
What is a triangulation table?
A lookup table that matches each possible pattern of “inside”/“outside” cube corners to the triangles the mesh needs there, so marching cubes becomes a simple dictionary lookup.
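A sketch of how that lookup key is usually formed, assuming the conventional encoding where each of the 8 corners contributes one bit; the 256-entry triangle table itself (TRI_TABLE here) is the standard published one and is omitted:

```python
def cube_index(corner_values, isovalue):
    """Pack the inside/outside test for the 8 cube corners into an 8-bit table index."""
    index = 0
    for bit, value in enumerate(corner_values):  # corners in the table's fixed order
        if value < isovalue:                     # this corner is "inside" the surface
            index |= 1 << bit
    return index

# triangles = TRI_TABLE[cube_index(corners, isovalue)]  # edge triples to turn into triangles
```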
How can we improve the marching cubes algorithm, to stop it looking blocky because we always use the middle of the edge?
Interpolation: use the values of the two data points (perhaps in terms of Hounsfield units) to place the vertex along the edge in proportion to where the threshold falls, instead of always using the middle.
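A minimal sketch of that linear interpolation along one cube edge (the names are mine, not from the course):

```python
def interpolate_vertex(p1, p2, v1, v2, isovalue):
    """Place the vertex where the data crosses the isovalue, not at the edge midpoint."""
    t = (isovalue - v1) / (v2 - v1)  # fraction along the edge; v1 and v2 straddle the isovalue
    return [a + t * (b - a) for a, b in zip(p1, p2)]
```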
What is proxy geometry?
Stacking slices of textured, semi-transparent images on top of each other so that, blended together, they create the 3D effect.
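A minimal compositing sketch, treating each slice as a semi-transparent 2D image; a real implementation would upload the slices as GPU textures on quads, this just shows the back-to-front blending maths:

```python
import numpy as np

def composite_slices(slices_rgba):
    """slices_rgba: list of (H, W, 4) images, ordered back to front."""
    image = np.zeros(slices_rgba[0].shape[:2] + (3,))
    for s in slices_rgba:                # back-to-front "over" blending
        a = s[..., 3:]
        image = s[..., :3] * a + image * (1.0 - a)
    return image
```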