02 - Computer Graphics Flashcards
Definition of Computer Graphics
Computer graphics are graphics created using computers and, more generally, the representation and manipulation of pictorial data by a computer; that is, methods and techniques for converting data to and from graphic displays via a computer.
How else can computer graphics be defined?
Perhaps the best way to define computer graphics is to find out what it is not. It is not a machine. It is not a computer, nor a group of computer programs. It is not the know-how of a graphic designer, programmer, a writer, a motion picture specialist, or a reproduction specialist. Computer graphics is all these - a consciously managed and documented technology directed toward communicating information accurately and descriptively
What is computer graphics used for?
Digital media and print media (web pages, apps, newspapers, advertising); special effects and computer-generated films (special effects in films, animated films, …); computer games (PC, consoles, mobile, …); VR and AR
How can we perceive colours in general?
Via electromagnetic waves with different wavelengths.
In which wavelengths of the electromagnetic spectrum can we see colours/light?
Between 380 nm and 740 nm
Which wavelengths can we no longer see and what are these ranges called?
Shorter than 380 nm: ultraviolet, X-ray
Longer than 740 nm: infrared, radio
What is the RGB colour model and how does it work?
Additive: Red, Green, and Blue light are combined to produce the other colours.
What is the RGBA model?
A = alpha channel (transparency)
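The additive RGB model and alpha blending can be sketched in a few lines; the choice of normalised float channels in [0, 1] and the function names are assumptions for the example (8-bit integer channels 0-255 are equally common):

```python
# Minimal sketch of additive RGB mixing and alpha blending.
# Colours are (r, g, b) tuples with channel values in [0, 1].

def add_rgb(c1, c2):
    """Additive mixing: combining light sources adds their channels."""
    return tuple(min(1.0, a + b) for a, b in zip(c1, c2))

def blend_over(src, dst, alpha):
    """'Over' blending: alpha controls the transparency of src over dst."""
    return tuple(alpha * s + (1.0 - alpha) * d for s, d in zip(src, dst))

red = (1.0, 0.0, 0.0)
green = (0.0, 1.0, 0.0)
print(add_rgb(red, green))          # red + green light gives yellow: (1.0, 1.0, 0.0)
print(blend_over(red, green, 0.5))  # half-transparent red over green: (0.5, 0.5, 0.0)
```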
What is important when choosing the colour model?
The type of display being used (LCD, plasma, OLED, …)
What is the graphics pipeline?
A model describing the processing steps to get from a mathematical model of an object to its depiction on the screen
How is the graphics pipeline structured?
Application → Geometry → Rasterisation → Screen
What happens in the application step of the graphics pipeline?
Contains the mathematical model and the manipulations performed on it. Runs on the CPU. Can include things such as interaction, animation, and physics simulation
What happens in the geometry step of the graphics pipeline?
Processing of the geometry.
Includes lighting, transformation, projection, and clipping
What happens in the rasterisation step of the graphics pipeline?
Generates a 2D image from the model produced by the geometry step
What happens in the screen step of the graphics pipeline?
The image is displayed on the screen
What are geometry primitives?
The simplest/most primitive geometric building blocks available
How are 2D/3D surfaces normally constructed?
2D or 3D surfaces are normally built from triangles
Which geometric primitives exist?
Points, lines, and triangles (more complex polygons and surfaces are built from these)
What does a triangle consist of in the context of geometric primitives?
3 vertices, connected by edges
1 normal vector indicating the front face
Additional normal vectors for the vertices

Why do we need to know where the front face of a triangle is?
For optimisation (see culling)
What are the normal vectors of the face and the vertices needed for?
To calculate the colour of the triangle (see shading)
How can polygons be stored?
There are various ways to store polygons. Typical formats: .obj, .3ds, .dae
e.g. vertex lists and face lists
(vertices are stored together with information about how they are connected)
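The vertex list + face list scheme can be sketched in memory; the concrete quad and the helper function are made up for the example:

```python
# Hypothetical in-memory version of a vertex list + face list, the scheme
# used by formats such as .obj: vertices are stored once, faces reference
# them by index.

vertices = [          # shared vertex list (x, y, z)
    (0.0, 0.0, 0.0),  # index 0
    (1.0, 0.0, 0.0),  # index 1
    (1.0, 1.0, 0.0),  # index 2
    (0.0, 1.0, 0.0),  # index 3
]

faces = [        # each face is a triangle given as three vertex indices
    (0, 1, 2),   # lower-right triangle of the quad
    (0, 2, 3),   # upper-left triangle of the quad
]

def face_coordinates(face):
    """Resolve a face's indices back to vertex coordinates."""
    return [vertices[i] for i in face]

print(face_coordinates(faces[0]))
```

The two triangles share the edge between vertices 0 and 2; those vertices are stored only once, which is the main advantage of this representation.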
Which coordinate systems exist?
2D and 3D coordinate systems
Which coordinate systems are 2-dimensional?
- Screen Coordinate System
- Viewport Coordinate System
Which coordinate systems are 3-dimensional?
- World Coordinate System
(Coordinate system of the virtual world; objects are located and placed inside this coordinate system)
- Object Coordinate System
(Local coordinates of each object; each object has its own coordinate system)
- Hierarchical Coordinate Systems (used in scene graphs)
What are left-handed and right-handed coordinate systems?
They differ in the orientation of the third axis: with x pointing right and y pointing up, the z-axis points towards the viewer in a right-handed system and away from the viewer in a left-handed system.
Which kind of coordinate system (left-handed, right-handed) do Unity and OpenGL use?
Unity = left handed
OpenGL = right handed
What are typical geometric transformations?
Translation, Rotation, Scaling
How is a 2D transformation (e.g. scaling) performed?
Via vector-matrix multiplication
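A minimal sketch of scaling as a matrix-vector product, using plain Python rather than a linear algebra library; the helper names are assumptions for the example:

```python
# Sketch of a 2D scaling transformation as matrix-vector multiplication.

def mat_vec(m, v):
    """Multiply a 2x2 matrix (list of rows) with a 2D vector."""
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

def scaling_matrix(sx, sy):
    """Scaling matrix: sx and sy scale along the x- and y-axis."""
    return [[sx, 0.0],
            [0.0, sy]]

p = (2.0, 3.0)
print(mat_vec(scaling_matrix(2.0, 0.5), p))  # -> (4.0, 1.5)
```

Rotation and translation work the same way; translation needs homogeneous coordinates (a 3x3 matrix for 2D points), which is why graphics pipelines use them throughout.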
Wie sind Projektionen in Computer Graphics aufgebaut und was ist der Unterschied zu Projektionen in der Fotografie?
● In photography, we usually have the centre of projection in-between the object and the image plane
● Image on film or sensor is recorded upside down
● In CG perspective projection, the image plane is in front of the camera
● Image in CG is not flipped
● In general the same mechanisms and rules apply for a perspective projection in graphics as in photography
● Virtual camera (centre of projection) has to be placed in world space as well as the objects in the scene
What is the frustum and how is it structured?
● The frustum describes a volume whose content is projected for rendering; areas outside the frustum are not rendered
● With perspective projection the frustum is a truncated pyramid
● Top, bottom, right and left planes restrict the area as well as the near and the far clipping plane
● Alternative naming convention for clipping planes
- Near Clipping Plane (hither)
- Far Clipping Plane (yon)
● With parallel projection we use a cuboid instead of a frustum, with one surface matching the shape of the display plane
Which parameters are relevant for the frustum and the perspective projection?
- Aspect Ratio -> width of the field of view divided by its height (4:3, 16:9)
- Field of View -> angle specifying how "open" the camera is
How do aspect ratio and field of view (FOV) affect the generated image?
A larger FOV shows more of the scene but increases perspective distortion; the aspect ratio determines the proportions of the image (a mismatch with the display stretches or squashes it).
Was ist das Level of Detail?
- Used to increase rendering performance by reducing the number of drawn triangles
- The same 3D object is reproduced with different numbers of triangles
- Choice of displayed object depending on distance between object and the camera
- Ideally the different displayed levels of detail are not distinguishable for the user
- Can be pre-processed or calculated dynamically
- Mesh simplification is complex, computationally intense, and error prone
- Variety of optimisation algorithms exist
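The distance-based choice of the displayed object can be sketched as a lookup; the thresholds and mesh names are made up for the example:

```python
# Illustrative distance-based level-of-detail (LOD) selection.

lod_levels = [
    (10.0, "statue_high"),         # closer than 10 units: full-detail mesh
    (50.0, "statue_medium"),       # closer than 50 units: reduced triangle count
    (float("inf"), "statue_low"),  # otherwise: coarsest mesh
]

def select_lod(distance):
    """Pick the mesh variant for the given camera-to-object distance."""
    for max_distance, mesh in lod_levels:
        if distance < max_distance:
            return mesh
    return lod_levels[-1][1]

print(select_lod(5.0))    # -> statue_high
print(select_lod(120.0))  # -> statue_low
```

Ideally the thresholds are chosen so that the switch between levels is not noticeable to the user.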
Was sind Scene Graphs, warum werden sie verwendet und wie sind sie aufgebaut?
● Used to provide a higher abstraction layer for graphics programming compared to low-level APIs (used as well in file formats and game engines)
● Scene graphs are used in 3 key areas to support graphics programming
- Object representation
- Interactivity
- Architecture
● Organised as a Directed Acyclic Graph (DAG)
● Follow the concepts of nested transformations
● Structure
- Internal nodes are used for hierarchy building
- Leaf nodes contain geometry
- Attributes are typically inherited top – down
- Parent nodes can have more than a single child node
● Hierarchy
- Structure lends itself naturally to intuitive spatial organisation
● Culling
- Removing everything from a scene that does not contribute to the final image
- Frustum culling compares object boundaries with viewing frustum
- Occlusion culling checks whether objects are hidden behind others (not often implemented in scene graphs)
● Bounding volume hierarchy
- A parent node's bounding volume contains all volumes of its child nodes
- If the parent is culled away, all of its children are culled away
● File I/O
- Reading and writing of 3D models from disk
- Internal data structure allows the application to easily manipulate dynamic 3D data
How is the scene graph traversed? (Traversal of the scene graph)
Visiting each node of the graph normally consists of 3 phases:
● Update or application
- The update traversal allows modifications of the scene graph
- Updates are applied either directly by the application or with callback functions assigned to nodes within the scene graph
- Used for animation or input processing
● Cull
- The scene graph tests the bounding volumes of all nodes
- If a leaf node is within the view, a reference to the leaf node's geometry is added to a final rendering list
- The list is sorted into opaque versus translucent, and translucent geometry is further sorted by depth
● Draw or render
- The list of geometry created during the cull traversal is traversed and the low-level graphics API is called to render that geometry
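The nested-transform idea behind a draw traversal can be sketched in Python; the node class, the 2D translation shorthand (a real scene graph would use 4x4 matrices), and the returned draw list are illustrative assumptions, not a real scene graph API:

```python
# Toy scene graph illustrating nested transforms and a draw traversal.

class Node:
    def __init__(self, name, translation=(0.0, 0.0), geometry=None):
        self.name = name
        self.translation = translation  # transform relative to the parent
        self.geometry = geometry        # leaf nodes carry geometry
        self.children = []              # parents may have many children

    def draw(self, parent_offset=(0.0, 0.0)):
        """Draw traversal: accumulate nested transforms top-down."""
        offset = (parent_offset[0] + self.translation[0],
                  parent_offset[1] + self.translation[1])
        drawn = []
        if self.geometry is not None:
            drawn.append((self.geometry, offset))
        for child in self.children:
            drawn.extend(child.draw(offset))
        return drawn

root = Node("world")
car = Node("car", translation=(10.0, 0.0))
wheel = Node("wheel", translation=(1.0, -0.5), geometry="wheel_mesh")
car.children.append(wheel)
root.children.append(car)
print(root.draw())  # wheel_mesh ends up at the accumulated position (11.0, -0.5)
```

Moving the `car` node automatically moves the `wheel` with it, which is the point of nested transformations in scene graphs.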
What is culling?
While clipping cuts off parts during the rendering process, culling is used earlier to remove whole polygons that are not visible
Why is culling needed?
Optimisation is often required or desired since calculation of colour and light can be expensive
Only visible parts will contribute to the final image
Which culling techniques exist?
Frustum culling
Backface culling (typically provides the biggest performance increase)
Occlusion culling (typically provides the lowest performance increase; becomes especially relevant in dense scenes with a high number of polygons)
Contribution culling (if objects become too small they are simply not rendered anymore)
How can frustum culling be tested?
● An object is culled if it lies outside the region of the clipping planes
● Simplified frustum culling test
Assuming our view vector is aligned with the z-axis:
p_near < z_view < p_far
X- and y-axes are centred inside the viewing volume:
-w_view < x_view < w_view
-h_view < y_view < h_view
● Two simple comparisons are required per axis
● Still not ideal since the test has to be performed for each triangle or primitive
● The naive frustum culling needs O(n) tests, where n = number of objects, primitives or triangles
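The simplified per-point test can be sketched directly from the inequalities above; the parameter names and concrete values are assumptions for the example:

```python
# Simplified frustum test, assuming the view direction is aligned with the
# z-axis and the x/y extents are centred on the viewing volume.

def inside_frustum(point, near, far, half_width, half_height):
    """Return True if a view-space point lies inside the simplified frustum."""
    x, y, z = point
    return (near < z < far
            and -half_width < x < half_width
            and -half_height < y < half_height)

print(inside_frustum((0.0, 0.0, 5.0), near=1.0, far=100.0,
                     half_width=4.0, half_height=3.0))   # inside -> True
print(inside_frustum((0.0, 0.0, 200.0), near=1.0, far=100.0,
                     half_width=4.0, half_height=3.0))   # behind the far plane -> False
```

As noted above, running this for every triangle is O(n); bounding volumes and hierarchies reduce the number of tests.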
Wie kann Frustum Culling optimiert werden?
● Use of bounding volumes
Instead of testing each triangle or primitive, a bounding volume encapsulating an object's primitives is used for testing; if the bounding volume is outside the frustum, no further tests are needed
Bounding volumes are typically axis aligned bounding boxes (AABB)
● Bounding volume hierarchy (BVH), as briefly mentioned in the scene graph section
If a hierarchical structure of the scene is available, the parent objects' bounding volumes encapsulate their children's bounding volumes
If a parent bounding volume is outside the frustum, no child volumes have to be tested
What is backface culling?
● The simplest approach to removing triangles from rendering is backface culling
● If the polygons of an object form a closed surface and we perceive it from the outside, there is no need to draw the back-facing polygons
● The polygons on the back side of objects face backwards
● We use the normal vector of the polygon to check its orientation
Typically normal vectors are stored in the face mesh structure
If not, we can calculate the normal vector as the cross product of 2 triangle edges
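The cross-product normal and the backface test can be sketched together; the counter-clockwise winding order and the view direction along -z are assumptions for the example:

```python
# Sketch of a backface test: the face normal is the cross product of two
# triangle edges; if it points away from the viewer, the face can be culled.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def is_backface(v0, v1, v2, view_dir=(0.0, 0.0, -1.0)):
    """Cull the triangle if its normal points along the view direction."""
    normal = cross(sub(v1, v0), sub(v2, v0))  # normal from two edges
    return dot(normal, view_dir) >= 0.0

# Triangle facing the camera (normal along +z, camera looks along -z):
print(is_backface((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # -> False (front face, kept)
```

Reversing the vertex order flips the computed normal, which is why a consistent winding order across the mesh is required for backface culling to work.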
What is occlusion culling?
● Objects that are completely hidden behind other objects in the scene do not have to be drawn
● Has to be considered globally, while backface culling and frustum culling can be considered locally
● Planar Occluders
Uses projection of objects on viewing plane blocking area for depth tests later on
Can be implemented by scene geometry or using separate occlusion proxies
● Shadow Volumes
Creation of a shadow volume between the viewpoint and the far clipping plane
More on that in texture and buffer part of the lecture
What does the intensity of light (colour) on a surface depend on?
- Light sources
- Structure and material properties of the object
- Reflection properties of the object (illumination, reflectance model)
- Shading properties of the model
What are light sources?
- Simulate physical light
- Have a set intensity and colour
- Have a position, type, and orientation
What is ambient light?
- Indirect
- Typical environmental light
- Light spreads evenly in all directions
- Produces images with low contrast
What is directional light?
- Direct
- Light rays are parallel
- Intensity is independent of the distance between object and light source
- Range is normally infinite
What is point light?
- Direct
- Light emanating from a single point in all directions
- Intensity decreases with distance
- Range is limited
What is spot light?
- Direct
- A point light restricted to a given cone angle
- Intensity decreases with distance
- Range is limited
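The distance falloff of point and spot lights can be sketched with the inverse-square law; the function name, the hard range cutoff, and the concrete numbers are assumptions (real engines often use tweaked attenuation terms):

```python
# Illustrative point-light intensity falloff with a limited range.
# Assumes distance > 0; the inverse-square law is the physical model.

def point_light_intensity(base_intensity, distance, max_range):
    """Intensity decreases with distance; beyond the limited range it is 0."""
    if distance >= max_range:
        return 0.0  # point/spot lights have a limited range
    return base_intensity / (distance * distance)

print(point_light_intensity(100.0, 2.0, 50.0))   # -> 25.0
print(point_light_intensity(100.0, 60.0, 50.0))  # -> 0.0 (outside the range)
```

A directional light, by contrast, would return the same intensity regardless of distance, matching the parallel-ray model above.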
What is flat shading or constant shading?
The whole polygon gets a single colour: the colour intensity is calculated once per face (using the face normal) and applied to the entire triangle.
What is Gouraud shading?
- Calculates the individual colours of the vertices
- The colour inside the triangle is computed by interpolation
How does a basic Gouraud shading algorithm work?
- Calculate the vertex normal vectors if not already stored or available (this is achieved by averaging the normals of all adjacent faces)
- Calculate the colour intensity of each vertex depending on the current local illumination model
- Calculate the colour intensities along the edges by linear interpolation of the vertex intensities
- Calculate the colour intensities inside the polygon by linear interpolation along the edges
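The interpolation at the heart of Gouraud shading can be sketched with barycentric weights (a common formulation equivalent to the edge/scan-line interpolation above); the vertex colours and weights are made up for the example:

```python
# Sketch of the Gouraud idea: per-vertex colour intensities are linearly
# interpolated across the triangle, here via barycentric weights.

def interpolate_colour(c0, c1, c2, w0, w1, w2):
    """Blend three vertex colours with barycentric weights (w0 + w1 + w2 = 1)."""
    return tuple(w0 * a + w1 * b + w2 * c for a, b, c in zip(c0, c1, c2))

# Vertex intensities computed beforehand by the local illumination model:
c0 = (1.0, 0.0, 0.0)
c1 = (0.0, 1.0, 0.0)
c2 = (0.0, 0.0, 1.0)

# A pixel at the triangle's centroid gets equal weights:
print(interpolate_colour(c0, c1, c2, 1/3, 1/3, 1/3))
```

Phong shading replaces the interpolated quantity: instead of the colour intensities, the normal vectors are interpolated the same way and the illumination model is evaluated per pixel.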
What is Phong shading?
Similar idea to Gouraud shading, but instead of interpolating the colour intensities, the normal vectors are interpolated
How does a basic Phong shading algorithm work?
- Calculate the vertex normal vectors if not already stored or available (by averaging the normal vectors of all adjacent faces). Same approach as in Gouraud shading
- Calculate the normal vectors along the edges by linear interpolation of the vertex normals and normalise them
- Calculate the normal vectors along the scan lines inside a polygon by linear interpolation of the edge normals, with normalisation
- Calculate the colour intensity at each point from its interpolated normal using a local illumination model
What is a texture?
● With shading colourful images of geometries can be created but often more detail is desired
● A texture in general is used to add surface details to a 3d model
● That could be hatching, patterns, whole images but also other attributes like wrinkling of the surface, terrain, reflections of the environment, transparencies
How do textures work?
● 2d images are used and mapped onto a 3d geometrical object
● Think of it as shrink wrap or gift wrapping paper
● Parts of the texture could be mapped individually only on parts of the object
How is it determined which triangles are displayed in depth (z-buffer/depth buffer)?
- For each polygon in the frustum:
  - Determine the pixels to be drawn
  - For each pixel to be drawn:
    - Compute the depth value z (distance to the camera) at the (x, y) position
    - If z(x, y) < current value (x, y), then current value (x, y) := z(x, y)
- Render image
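The depth-test loop can be sketched for a single pixel grid; the buffer size, the `plot` helper, and the colour names are made up for the example:

```python
# Minimal z-buffer sketch: keep the smallest depth (closest to the camera)
# per pixel and remember which colour produced it.

WIDTH, HEIGHT = 4, 4
depth_buffer = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
colour_buffer = [[None] * WIDTH for _ in range(HEIGHT)]

def plot(x, y, z, colour):
    """Draw the pixel only if it is closer than what is stored so far."""
    if z < depth_buffer[y][x]:
        depth_buffer[y][x] = z
        colour_buffer[y][x] = colour

plot(1, 1, 5.0, "red")    # far fragment drawn first
plot(1, 1, 2.0, "blue")   # closer fragment overwrites it
plot(1, 1, 9.0, "green")  # farther fragment is rejected
print(colour_buffer[1][1], depth_buffer[1][1])  # -> blue 2.0
```

Note that the strict `<` comparison means fragments at exactly equal depth keep whichever was drawn first; this is the situation that leads to the flickering (z-fighting) described below.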

How is a z-buffer structured and which problems can arise with it?
The stored number indicates how far away each pixel is. It becomes problematic when multiple surfaces overlap and are equally far away; this can lead to flickering of the surfaces (z-fighting).

How can flickering with the z-buffer be prevented?
By adjusting the objects so that they no longer lie directly on top of each other.