Lecture 5: Interaction, Tools & Collaboration Flashcards

1
Q

How do we design a good interaction?

A

The design of the interactions in a VE can heavily influence how it is perceived by the users.

Good interactions should be easy to learn and use and should not distract or hinder the user.

A good interaction is used intuitively without thinking about it too much!

Keep in mind: an interaction is usually used to achieve some kind of task!

2
Q

Which criteria should be considered when designing interaction for VR?

A
  • Effectiveness/Efficiency
  • Easy to learn
  • Easy to use
  • Comfort
3
Q

Explain the difference between Effectiveness and Efficiency.

A

Effectiveness:
How good is something at achieving the goal of some task.
Effectiveness only cares about the outcome, not the time, energy, etc.

Efficiency:
How good is something at achieving a task whilst using the least amount of resources. Resources may be anything, depending on the context!

For our purpose, an interaction in VR should be reliable, accurate and fast.

4
Q

What is the meaning of Easy to learn, Easy to use and Comfort?

A

Easy to learn
There is a learning curve to pretty much any non-common interaction method. The goal should be to keep this learning curve as flat and short as possible! Interaction should not require months to master.

Easy to use
An interaction is something that may potentially be executed many times and in short succession by the user, depending on what it does.
After a short learning curve, the user should be familiar with the interaction, executing it “automatically” and without being forced to think about it too much.

Comfort
Interaction in VR may be a lot more engaging than “classical” interfaces.
Especially 6DOF-based interaction may require a lot of physical activity. Keep in mind that an interaction may be executed a lot! Forcing users into uncomfortable positions, etc. can be very detrimental to the user experience!

5
Q

How can we classify interactions in VR? Give examples.

A

As VR is not limited by the constraints of the real world, we may also classify interaction based on whether it would be possible in the real world:

Natural or Supernatural (Magical)

Natural Interaction: Walking, grabbing, pointing, …

Supernatural Interaction: Teleportation, Flying, …
Even 3D GUIs might be considered supernatural interaction.

6
Q

Categorize how we can interact with the virtual world

A

Locomotion, Selection, Manipulation

7
Q

What is Locomotion?

A

Locomotion means moving and looking around a virtual environment.
Locomotion is not only limited to the use of a tracking system, but may be achieved via other inputs/methods as well! Looking around is the easy part.
Choosing the right method for locomotion is important, as it may cause cybersickness!

8
Q

Why not just use tracking for locomotion?

A

A 6DOF room-scale tracking system allows us to mirror movements in the real world within the virtual environment. The problem with the real world is, that it can be quite limiting and not reflect the possibilities we can have in VR. Using just room-scale would prevent any form of “magical” interaction.

9
Q

Describe an approach for extending the real world when a user is bound to a room-scale setup.

A

Often a hybrid approach is used: the room-scale setup is seen as a box in which the real-world movement can be mirrored into the virtual environment.

Some other form of movement may be used to change where this box is located within the virtual world.

Some input device may be used to move this “box”, e.g. by flying, teleportation, etc.
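The hybrid approach can be sketched in a few lines of illustrative Python (the function names and the 2D ground-plane simplification are my own, not from the lecture): tracked movement stays local to the "box", while "magical" locomotion only moves the box itself.

```python
import math

def virtual_pose(box_origin, box_yaw, local_pos):
    """Map a position tracked inside the room-scale 'box' into the
    virtual world: rotate by the box's yaw, then add its origin.
    Positions are (x, z) ground-plane tuples; yaw is in radians."""
    c, s = math.cos(box_yaw), math.sin(box_yaw)
    x, z = local_pos
    return (box_origin[0] + c * x - s * z,
            box_origin[1] + s * x + c * z)

def teleport_box(box_origin, delta):
    """'Magical' locomotion moves the box, not the tracked user."""
    return (box_origin[0] + delta[0], box_origin[1] + delta[1])
```

Teleporting the box by (5, 0) makes the user appear 5 m further along x in the virtual world without any physical movement.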

10
Q

Name and describe different locomotion techniques. Which techniques are suitable for a room-scale setup and which are not? What should be considered?

A

Flying:
Using some sort of input (like a controller) to allow the user to fly around in the virtual environment usually works well.

Flying not only works in combination with a room-scale setup, but is also a feasible solution to locomotion for standing/sitting VR (even with 3DOF).
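A single frame of flying locomotion can be sketched as moving the viewpoint along the controller's forward vector (a minimal sketch; the normalized forward vector and names are assumptions, not from the lecture):

```python
def fly_step(position, forward, speed, dt):
    """One frame of flying locomotion: move the viewpoint along the
    controller's (or head's) normalized forward vector.
    position/forward are (x, y, z) tuples; speed in m/s, dt in seconds."""
    return tuple(p + f * speed * dt for p, f in zip(position, forward))
```

Because the direction comes from device orientation alone, this also works on 3DOF systems.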

Teleportation:
Is also commonly used in today’s VR applications. Coupling teleportation with a room-scale setup works very well!
Often the transition when teleporting is "softened" by some sort of animation like a dash, sprint or blink. Teleporting large distances or to an unknown part of a virtual environment can be disorienting. Be careful with changing the user's orientation during teleportation, as it can be very confusing!
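Many teleport pointers visualize the target with a ballistic arc from the controller. A rough sketch using semi-implicit Euler integration and a flat ground plane at y = 0 (all names and parameters are illustrative, not from the lecture):

```python
def teleport_arc(origin, direction, speed, gravity=9.81, dt=0.02, max_t=3.0):
    """Sample a ballistic arc from the controller until it hits the
    ground plane (y == 0); the last sampled point is the teleport
    target. origin/direction are (x, y, z); speed in m/s."""
    x, y, z = origin
    vx, vy, vz = (d * speed for d in direction)
    points = [(x, y, z)]
    t = 0.0
    while y > 0.0 and t < max_t:
        vy -= gravity * dt            # gravity pulls the arc down
        x += vx * dt
        y += vy * dt
        z += vz * dt
        t += dt
        points.append((x, y, z))
    return points
```

The arc gives the user a clear preview of where they will land, which helps against disorientation.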

Redirected Walking:
Redirected Walking is based on the idea that humans are not very good at walking in a straight line.
Introducing a slight rotation when applying a real world movement to the motion in the virtual environment can trick a user to walk in a circle (IRL), but seemingly straight in VR. Best suited for large tracking spaces.
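The relationship between the injected rotation and the resulting real-world circle can be sketched as follows (illustrative Python; the roughly 22 m radius often cited as imperceptible is an assumption here, not from the lecture):

```python
import math

def injected_rotation(step_length_m, gain_deg_per_m):
    """Extra rotation (degrees) silently added for a step of the
    given length while the user believes they walk straight."""
    return step_length_m * gain_deg_per_m

def curvature_radius(gain_deg_per_m):
    """Radius of the real-world circle the user walks on while the
    virtual path stays straight: a full 360 degrees of injected
    rotation spreads over the circumference 2*pi*r."""
    return 360.0 / (2.0 * math.pi * gain_deg_per_m)
```

A gain of about 2.6 degrees per meter yields a circle of roughly 22 m radius, which is why redirected walking needs large tracking spaces.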

11
Q

Which technique applies the idea of redirected walking to smaller tracking spaces?

A

For smaller tracking spaces the idea of redirected walking might also be applied by decoupling the tracking and virtual environment when needed!
Using an input to decouple the tracking system from the movement in VR can allow the user to change rotation without moving in VR.

A user can walk to the edge of the room-scale space available, push a button that decouples, turn around in the real world (not mirrored in VR) and then release the button. The rotation in VR has not changed, but the user can now walk “back” to the other side of the room scale setup while walking straight in VR!
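This decoupling trick can be sketched as a small state machine over the yaw angle (illustrative Python, not from the lecture; while the button is held, real turning is frozen in VR and absorbed into an offset):

```python
class DecoupledYaw:
    """While the user holds a button, real-world turning is NOT
    mirrored in VR; the accumulated difference becomes an offset
    that is applied after release."""

    def __init__(self):
        self.offset = 0.0      # degrees added to real yaw -> virtual yaw
        self.decoupled = False
        self._held_yaw = 0.0   # virtual yaw frozen while decoupled

    def press(self, real_yaw):
        self.decoupled = True
        self._held_yaw = real_yaw + self.offset

    def release(self, real_yaw):
        self.offset = self._held_yaw - real_yaw
        self.decoupled = False

    def virtual_yaw(self, real_yaw):
        return self._held_yaw if self.decoupled else real_yaw + self.offset
```

A user facing 0° in both worlds presses the button, physically turns 180°, and releases: their virtual heading is still 0°, so walking "back" in the real room continues straight ahead in VR.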

12
Q

When thinking about selection in VR, what should be considered when choosing the right selection method?

A

The available devices (controllers, 3DOF vs. 6DOF, etc.) and the application.

13
Q

Name and describe different selection techniques.

A

Gaze-based
Gaze-based selection works by turning our head to look at some object for a certain amount of time. This method works with 3DOF systems as well and does not require a controller. Gaze-based selection usually only takes head rotation into account, not what part of the image we focus on. Often a dot or crosshair is displayed to help us with selection.
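The "certain amount of time" is usually implemented as a dwell timer; a minimal sketch (class and method names are my own, not from the lecture):

```python
class DwellSelector:
    """Gaze-based selection via a dwell timer: an object is selected
    once the gaze stays on it for `dwell` seconds; looking away or
    switching targets resets the timer."""

    def __init__(self, dwell=1.0):
        self.dwell = dwell
        self.target = None
        self.elapsed = 0.0

    def update(self, gazed_object, dt):
        """Call once per frame with the currently gazed object (or
        None). Returns the object when the selection fires."""
        if gazed_object is not self.target:
            self.target, self.elapsed = gazed_object, 0.0
        elif gazed_object is not None:
            self.elapsed += dt
            if self.elapsed >= self.dwell:
                self.elapsed = 0.0
                return gazed_object
        return None
```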

Pointing or raycasting
Pointing or raycasting basically turns some tracked device into a laser pointer. The rotation of the device determines where our ray is cast to. Often a button is used to select what we are pointing at. Commonly used with many systems that come with a controller, but can be imprecise for selection over long distances.

Grabbing (with controllers)
With controllers we can also mirror the real world and implement grabbing-based selection. Grabbing is a natural and intuitive solution, but requires a 6DOF system and usually only works within the tracking space. Also keep in mind that often there is no haptic feedback!
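For pointing/raycasting, selection boils down to intersecting the controller ray with object bounds. A sketch using ray-sphere intersection as a common simplification (names and the sphere bounds are illustrative, not from the lecture):

```python
import math

def pick(ray_origin, ray_dir, spheres):
    """Return the index of the nearest sphere (center, radius) hit by
    the normalized ray, or None if nothing is hit."""
    best, best_t = None, math.inf
    ox, oy, oz = ray_origin
    dx, dy, dz = ray_dir
    for i, ((cx, cy, cz), r) in enumerate(spheres):
        lx, ly, lz = cx - ox, cy - oy, cz - oz
        t_ca = lx * dx + ly * dy + lz * dz          # projection onto ray
        d2 = lx * lx + ly * ly + lz * lz - t_ca * t_ca
        if t_ca > 0 and d2 <= r * r:
            t = t_ca - math.sqrt(r * r - d2)        # near intersection
            if t < best_t:
                best, best_t = i, t
    return best
```

Note how the lateral miss distance grows linearly with distance along the ray: the same small angular error in the controller's rotation misses far-away targets by much more, which is why raycasting gets imprecise over long distances.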

14
Q

What does Manipulation mean in VR?

A

Manipulation means changing something about an object or part of a virtual environment. Manipulation might just be modifying the position but may also be some complex change. Similar to selection, being precise with 3DOF or 6DOF devices can be very difficult. How manipulation works exactly is highly dependent on the application and the hardware used.

15
Q

Name a challenge regarding selection in VR.

A

Pixel-perfect selection (as we know it from screens) is very difficult in VR.

16
Q

Describe the possible role of GUIs in VR

A

Some elements (like text) are best presented in 2D and can be integrated into a 3D environment. Integrating a 2D GUI in a 3D environment while maintaining immersion, good usability and readability can be challenging!

The software for most HMDs uses 2D GUIs for settings, storefronts, etc.
They may be attached to the head rotation, static or movable by the user.

17
Q

How are GUIs often controlled in VR?

A

GUIs are often controlled using pointing/raycasting or gaze.

18
Q

What should we consider when we integrate 2D text into our Virtual Environment?

A

2D text can be used in VR, but readability should be taken into consideration!
In VR, the user position might change, so keep in mind from what distance the text should be easily readable! Target hardware is another issue! Especially the first generation of today's HMDs suffers from lower resolutions and the screen-door effect, affecting the readability of text!
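Readable text size can be estimated from the visual angle the glyphs should subtend at the expected viewing distance; a sketch (the 1° angle used in the example is a rough rule of thumb, not from the lecture):

```python
import math

def text_height_for_angle(distance_m, visual_angle_deg):
    """Physical text height needed so glyphs subtend the given visual
    angle at the viewer's distance: h = 2 * d * tan(angle / 2)."""
    return 2.0 * distance_m * math.tan(math.radians(visual_angle_deg) / 2.0)
```

At 2 m distance, a 1° visual angle corresponds to text roughly 3.5 cm tall; on low-resolution HMDs you may need considerably more.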

19
Q

What do runtimes or APIs usually provide?

A

Runtimes or APIs usually provide low-level access to the specific hard-/software systems.

20
Q

What is OpenXR?

A

OpenXR is an open, cross-platform standard for VR/AR hardware with the first release in 2019. OpenXR is specified by the Khronos Group, who also specify OpenGL, Vulkan, OpenCL, etc. In order for hardware to work with OpenXR, manufacturers need to implement the standard (similar to OpenGL).

21
Q

What is SteamVR/OpenVR?

A

SteamVR/OpenVR is Valve’s ecosystem for VR hardware.
SteamVR is closed source and requires a Steam account; the OpenVR source code can be found on GitHub, provides an API and allows adding additional hardware to the SteamVR ecosystem. While SteamVR is proprietary, there are interfaces for all major VR devices (Oculus, Microsoft WMR), so they can be used with SteamVR. A SteamVR application can therefore run on most headsets!

22
Q

Which runtimes exist besides the cross-device APIs? What should we keep in mind?

A

Besides the APIs that provide access to multiple devices, Oculus, Microsoft WMR and some of the VIVE systems (Cosmos, etc.) come with vendor specific SDKs/APIs/runtimes. These are proprietary systems that only allow access to the manufacturer’s devices and are usually not cross-compatible.

Keep in mind: even when using SteamVR, these vendor-specific runtimes have to be active, as SteamVR does not access the hardware directly, but interacts with the vendor’s ecosystem.

23
Q

Name and describe different Game Engines

A

Unity Engine
Originally designed for macOS, Unity now supports most desktop, console and VR platforms. Unity is considered to be one of the main competitors to Unreal, at least for game engines available to the public.

CryEngine & Amazon Lumberyard
CryEngine is a game engine with a licensing model similar to Unreal (royalty based). Amazon licensed CryEngine and developed its own engine (Lumberyard), which is a modified/extended version of CryEngine.

24
Q

What is WebXR?

A

Originally developed in 2014 by Mozilla as WebVR, WebXR is now specified by the W3C. WebXR is a standard that allows access to VR hardware from a browser. While browser-based applications suffer from many limitations like low performance, just opening a webpage and having content in your headset is a very easy and convenient way to access VR!

25
Q

Give one example of WebXR.

A

Mozilla Hubs is a social platform that includes support for WebXR.

26
Q

List Scientific Software for VR Applications

A
27
Q

What does NVE stand for? Briefly describe it.

A

NVEs (Networked Virtual Environments) are software systems that can support multiple users, who can interact both with each other and with the environment in real time, and that aim to provide users with a high sense of realism by incorporating 3D graphics and multimedia.

For our purpose, an NVE is a virtual environment that is accessed by multiple users at the same time. NVEs are usually accessed via a network, allowing users to be physically located at various places, while experiencing a common virtual environment.

28
Q

Which challenges are amplified by NVEs?

A

VR in general can be a challenge due to the requirements for real-time, visual fidelity, latency, etc. With NVEs, these challenges extend to cover all users within the virtual environment. Additionally, new challenges arise like the representation of other people, network characteristics like latency and a multi-system setup (server-client, peer-to-peer, etc.).

29
Q

Which common features usually apply to NVEs?

A

There are some common features that usually apply to NVEs:
- Shared sense of space: all users feel like they are in the same space
- Shared sense of presence: users feel like they are part of the same virtual environment
- Shared sense of time: users feel like they are simultaneously experiencing the same events
- Interaction: Users can interact with the virtual environment and each other

30
Q

What is important for presence in terms of social VR? How is this also called?

A

In order to achieve good presence, we need to find a way to represent the various users – an avatar.

The virtual representation of a user is also known as “embodiment”.

31
Q

Which principles also apply to NVEs?

A

The basic principles of VR also apply to NVEs: we want to achieve immersion and good presence. This means that an NVE should have good responsiveness and be consistent. For tracking systems we mentioned latency as an important factor. The same applies to NVEs, but this time it is network latency.

32
Q

Which network issues can degrade an NVE? What is necessary for good presence?

A

latency, jitter and limited bandwidth
(Similar to multiplayer games)

For good presence, other users’ inputs should be transmitted with low latency and high accuracy. This is especially true if collaboration means multiple users interacting with a single object!

33
Q

What is the Metaverse?

A

The metaverse is a network of multiple virtual environments, a bit like the internet is a network of many websites. The idea of the metaverse is to create a single virtual space by connecting many smaller virtual environments.