L3 - Feedback Cycles: Input and Output Flashcards
Why feedback?
informing users about what has happened so they know what to do next
-> the computer "talks" to the user
-> it's cyclic (consider the human action cycle)
-> interaction between computer and user via the interface: the user does something, observes the result, and adjusts behavior if necessary (the computer does something and sends the result back over the interface)
How does HCI feedback take place (over which channels)?
- visual (text, color change, …)
- auditory (sounds)
- tactile (vibration, …)
-> combinations of these
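A minimal sketch of how these channels can be combined for a single event; the channel functions are hypothetical stand-ins for real platform APIs (screen drawing, audio playback, vibration motors), not an actual library:

```python
# Sketch of multimodal feedback dispatch.
# The channel functions are hypothetical placeholders for real
# platform back-ends (display, speaker, vibration motor).

def visual(message: str) -> str:
    return f"[screen] show banner: {message}"

def auditory(message: str) -> str:
    return f"[speaker] play chime for: {message}"

def tactile(message: str) -> str:
    return f"[motor] short vibration for: {message}"

def give_feedback(message: str, channels) -> list[str]:
    """Send the same event over several channels at once."""
    return [channel(message) for channel in channels]

# Combining channels: a saved file confirmed visually and audibly.
for event in give_feedback("file saved", [visual, auditory]):
    print(event)
```

The same event can be routed to any combination of channels, which is exactly the "combinations of these" point above.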
What properties should feedback have?
- prompt (immediate, timely)
- clear and easy to understand
Between which parts of the (human) action cycle does the Gulf of Execution take place? (going from user (task goal) to computer system)
- intent to act (identify goal in terms of the computer system)
- plan action(s) (identify necessary actions to accomplish goal)
- execute action(s) (within the interface)
Between which parts of the (human) action cycle does the Gulf of Evaluation take place? (going from the computer system back to the user's task goal)
- perceive system state (perceive form of output)
- interpret perception
- evaluate the interpretation (evaluate feedback in terms of the task goals)
What is the Gulf of Execution?
the difference between the intentions of the users and what the system allows them to do (or how well the system supports those actions)
How can the Gulf of Execution be narrowed?
- help pages, constraints, use of conventions/standards
- use affordances/real-world mapping (skeuomorphic design)
- for novice user: make goal identification and actions to get there visible and discoverable
- for expert user: reduce number of steps
-> heuristic of flexibility (supporting different user groups)
- provide feedforward: show users what will happen if the action continues
- minimize effort needed to execute each action (execution as easy and useful as possible)
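The feedforward point can be made concrete with a toy sketch: before executing a destructive action, the interface previews its effect. The function names here are illustrative, not from any real toolkit:

```python
# Sketch of feedforward: preview what an action WILL do before doing it.

def preview_delete(filenames: list[str]) -> str:
    """Feedforward: tell the user what the action would do."""
    return f"This will permanently delete {len(filenames)} file(s)."

def delete(filenames: list[str], confirmed: bool) -> list[str]:
    """Only execute once the user has seen the preview and confirmed."""
    return [] if confirmed else filenames

files = ["a.txt", "b.txt"]
print(preview_delete(files))         # shown BEFORE anything happens
files = delete(files, confirmed=True)
```

The preview narrows the Gulf of Execution: the user can check, before acting, whether the action matches their intention.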
What is the Gulf of Evaluation?
difficulty of assessing the state of the system and how well the artifact supports the discovery and interpretation of that state
A definition found online:
disparity between the user's perception (or discovery) of the system state and the actual system state
How can the Gulf of Evaluation be narrowed?
- giving feedback frequently (keeping updated about system status)
- giving immediate feedback (right after action was taken)
- balance feedback with actions (no overload or big feedback for tiny action)
- vary the feedback (vary type and sensory channel; makes the context of the feedback easier to understand)
- being specific
- allowing direct manipulation (feeling that action is taken)
-> use visible actions (signifiers), feedforward signals and feedback
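The "frequent and immediate" points above can be sketched as a long-running task that reports its state after every step. This is a toy example, not tied to any real framework:

```python
# Sketch: keeping the system state visible during a long-running task.

def copy_with_progress(n_chunks: int) -> list[str]:
    """Report progress after every chunk, so the state stays visible."""
    updates = []
    for i in range(1, n_chunks + 1):
        percent = round(100 * i / n_chunks)
        updates.append(f"copying... {percent}%")  # immediate, frequent feedback
    updates.append("done")                        # final state, clearly signaled
    return updates

for line in copy_with_progress(4):
    print(line)
```

Without the per-chunk updates, the user is left in the Gulf of Evaluation: they cannot tell whether the system is working or stuck.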
What is important to consider when using the gulf of execution/evaluation concept?
It is just one model for examining these principles -> different models are available (and they may contradict each other)
Actions and feedback in relation to input and output?
- input devices mediate input (actions) to a computer system; the software provides affordances for how to use the device (guides actions)
- output devices mediate feedback from the computer system (device feedback informs evaluation)
-> feedback in computer systems is much more constrained than in physical devices
What are characteristics of input devices?
- size and shape
- wireless or tethered (without or with a cable; a trade-off between their pros and cons)
- degrees of freedom (how many different types of input the device can accept; more degrees of freedom allow direct mapping (more action possibilities) -> more intuitive)
- relative vs. absolute (whether the device reports movement deltas, like a mouse, or absolute positions in space, like a touchscreen or tracker)
- hand-based (ex. glove, controller, …)
- body-worn
- world-grounded (ex. washing machine, treadmill, bike, ...)
-> which one is used depends on the device (object)
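The relative vs. absolute distinction can be made concrete with a toy 1-D cursor; the update rules below are the generic ones, not any specific device driver:

```python
# Relative input (e.g. a mouse) reports movement deltas; absolute input
# (e.g. a touchscreen or tracked controller) reports positions directly.

def apply_relative(position: float, delta: float) -> float:
    """Relative device: new position = old position + reported movement."""
    return position + delta

def apply_absolute(position: float, reading: float) -> float:
    """Absolute device: the reading replaces the position outright."""
    return reading

pos = 10.0
pos = apply_relative(pos, +2.5)   # mouse moved right -> 12.5
pos = apply_absolute(pos, 4.0)    # finger touched at 4.0 -> 4.0
print(pos)
```

This is why lifting and repositioning a mouse does not move the cursor (only deltas count), while lifting and re-touching a touchscreen jumps the cursor to the new contact point.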
VR: why and how do tracking and sensing take place?
most VR input relies on special sensors to keep track of user’s behavior
- head tracking (detect head turning)
- full body capture (tracking the user's whole body)
- hand worn/held
- tracked controllers
- physical input (ex. shaking the phone)
- position/orientation (VR is real world oriented and user expects that as well (things represented as in real world))
- eye tracking
- HR, EMG, GSR, etc. (heart rate, muscle signals, skin conductance; can indicate stress, for example)
-> every piece of input has to be evaluated by the software in order to generate an appropriate response (this happens inside the system)
-> it's not the computer giving the signal, it's the software
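Head tracking, for example, boils down to the software turning sensor readings into a new view direction. A minimal sketch with a single yaw angle (real systems use full 3-D orientations, typically quaternions):

```python
import math

# Sketch: rotate the user's forward view vector by the tracked head yaw.
# Real head tracking uses full 3-D orientation (usually quaternions);
# a single yaw angle keeps the idea visible.

def rotate_yaw(forward: tuple[float, float], yaw_degrees: float) -> tuple[float, float]:
    """Rotate a 2-D (x, z) forward vector by the head's yaw angle."""
    yaw = math.radians(yaw_degrees)
    x, z = forward
    return (x * math.cos(yaw) - z * math.sin(yaw),
            x * math.sin(yaw) + z * math.cos(yaw))

view = rotate_yaw((0.0, 1.0), 90.0)   # head turned 90 degrees to the side
print(view)                            # approximately (-1.0, 0.0)
```

The sensor only delivers an angle; it is this software step that makes the virtual world appear to stay put while the head turns.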
What are characteristics of output devices?
- sensory channel(s)
- resolution/range
- spatialization (sounds from different directions enrich the experience)
- head mounted (ex. headphones, AR glasses, …)
- world fixed (ex. computer screen, speaker, …)
- hand-held
- body-worn (ex. smart watch)
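Spatialization can be sketched with equal-power stereo panning: a sound source's direction is mapped to left/right channel gains so the total power stays constant. This is the standard equal-power formula, simplified to a single pan value:

```python
import math

# Sketch of equal-power stereo panning: a sound's direction is mapped
# to left/right channel gains so total power (L^2 + R^2) stays constant.

def pan_gains(pan: float) -> tuple[float, float]:
    """pan in [-1, 1]: -1 = fully left, 0 = center, +1 = fully right."""
    angle = (pan + 1) * math.pi / 4        # map [-1, 1] -> [0, pi/2]
    return (math.cos(angle), math.sin(angle))

left, right = pan_gains(0.0)               # centered source
# both gains are cos(45 deg) = sin(45 deg), so the sound is balanced
```

Full 3-D spatial audio adds distance attenuation and head-related filtering, but the principle is the same: direction in the scene determines what each ear receives.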
What is meant by passive haptics and substitutional reality?
using real-world objects to convince people the VR experience is real
- texture is important here
-> a certain degree of divergence between the real object and its virtual representation is acceptable (the brain can be tricked)