Observation Flashcards
two basic classes / times for studies and evaluation
- formative:
at the beginning, to inform about the context and to explore possible options
- summative:
to judge the impact of an HCI design
(a summative evaluation of one design may be a formative one for the next step)
why, what, where and when to evaluate
why: the study question (check users' requirements, that they can use the product, and that they like it)
what:
a conceptual model, early prototypes of a new system and later, more complete prototypes, human behaviour…
where:
in natural and laboratory settings
when:
* formative: throughout design;
* summative: finished products can be evaluated to collect information to inform new products
three classes of measures
user effectiveness
user efficiency
user satisfaction
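These three measures are often operationalized as task completion rate, time on task, and an average subjective rating; a minimal sketch under those assumptions (the data and the exact operationalization are purely illustrative):

```python
from statistics import mean

# Illustrative usability-test sessions: (task completed?, time on task in seconds, satisfaction rating 1-5)
sessions = [
    (True, 42.0, 4),
    (False, 90.0, 2),
    (True, 55.0, 5),
]

# Effectiveness: share of tasks completed successfully
effectiveness = sum(done for done, _, _ in sessions) / len(sessions)

# Efficiency: mean time on task, here restricted to successful attempts
efficiency = mean(t for done, t, _ in sessions if done)

# Satisfaction: mean of the subjective ratings
satisfaction = mean(r for *_, r in sessions)

print(f"effectiveness={effectiveness:.0%}, efficiency={efficiency:.1f}s, satisfaction={satisfaction:.1f}/5")
```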
evaluation classes
- setting
- evaluation time
- evaluation partner
- result type
controlled settings
- setting conditions are controlled
- non-controllable conditions are measured
- e.g. lab experiments, living labs
natural settings
- study in ‘everyday’ and natural conditions that cannot be controlled
- some, but not all non-controllable conditions can be measured
- e.g. field studies, in-the-wild studies
types of evaluation time
inspective:
* inspection / evaluation during the run of an experiment or during use
retrospective:
* evaluation after the run of the experiment or after use
short term: short session
long term: long session
evaluation partners
the user:
- gives direct feedback, e.g. on use
- best for gaining new insights into context
- if it is an experiment: the user is called a “subject”
the expert:
- allows for best practice information
- gathering the equivalent of reported expert experience may require collecting data from many users / test subjects
Result types
subjective:
* results cannot be directly compared between subjects
objective:
* results can be directly compared between subjects
quantitative:
* results are numbers
qualitative:
* results are text
Interviews - Five key issues
- setting goals
decide how to analyze the data once collected
- Identifying participants
decide who to gather data from
- relationship with participants
clear and professional, informed consent when appropriate
- Triangulation
look at data from more than one perspective
collect more than one type of data, e.g. quantitative from experiments and qualitative from interviews
- Pilot studies
small trial of the main study
Data recording
- notes, audio, video, photographs can be used individually or in combination
- always capture a visual impression
- different challenges and advantages with each combination
three types of interviews
structured interviews
- pre-developed questions
- strictly following the wording
- easy to carry out, but limited to the question set
- more precise to evaluate
semi-structured interviews
* structured part + ‘open’ questions
unstructured interviews
- used when little background information available
- minimizes the influence of the questioner
Running the interview - structure
Introduction - introduce yourself, explain the goals of the interview, reassure about the ethical issues, ask to record, present the informed consent form
warm-up - make first questions easy and non-threatening
main body - present questions in a logical order
a cool-off period - include a few easy questions to defuse tension at the end
closure - thank the interviewee, signal the end, e.g. switch off the recorder
encouraging a good response
- make sure purpose of study is clear
- promise anonymity
- ensure questionnaire is well designed
- follow-up with emails, phone calls, letters
- provide an incentive
- 40% response rate is good, 20% is often acceptable
Standard questionnaires used in HCI
SUS - System Usability Scale
TLX - NASA Task Load Index
QUIS - Questionnaire for User Interface Satisfaction
CSUQ - Computer System Usability Questionnaire
SUS - benefits and restrictions
+ very easy to score (Likert scale)
+ useful with small sample sizes, still gives o.k. results
+ validity o.k. (you see differences between bad and good designs)
- score range 0-100 invites misinterpretation as a percentage (see the scoring sketch below)
- not diagnostic, just classifies
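The 0-100 range comes from the standard SUS scoring rule; a minimal sketch (assuming the usual 10 items answered on a 1-5 Likert scale):

```python
def sus_score(responses):
    """Compute the SUS score (0-100) from ten 1-5 Likert responses.

    Odd-numbered items (positively worded) contribute (response - 1),
    even-numbered items (negatively worded) contribute (5 - response);
    the summed contributions are scaled by 2.5 to reach 0-100.
    """
    assert len(responses) == 10
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i = 0, 2, ... are items 1, 3, ... (odd-numbered)
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# Example: a fairly positive set of responses
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```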
problems with online questionnaires
- sampling is problematic if population size is unknown
- preventing individuals from responding more than once can be a problem
- individuals have also been known to change questions in email questionnaires
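One common way to limit repeat responses is to issue single-use invitation tokens; a minimal sketch (the in-memory storage here is purely illustrative, and it cannot stop someone who holds several invitations):

```python
import secrets

issued_tokens = set()  # tokens sent out with invitations
used_tokens = set()    # tokens already redeemed by a response

def issue_token():
    """Create a single-use token to embed in one questionnaire invitation."""
    token = secrets.token_urlsafe(16)
    issued_tokens.add(token)
    return token

def accept_response(token):
    """Accept a response only if its token was issued and has not been used yet."""
    if token not in issued_tokens or token in used_tokens:
        return False
    used_tokens.add(token)
    return True
```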
Types of observation
direct observation in the field
- structuring frameworks
- degree of participation
- ethnography
direct observation in controlled environments
indirect observation: tracking user’s activities
- diaries, experience sampling method
- interaction logging
- video and photographs collected remotely by drones or other equipment
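For interaction logging, the core idea is to record timestamped user events during use; a minimal sketch (the event names and log format are illustrative):

```python
import json
import time

def log_event(logfile, event, **details):
    """Append one timestamped interaction event as a JSON line."""
    record = {"t": time.time(), "event": event, **details}
    logfile.write(json.dumps(record) + "\n")

# Example: logging a few interactions during a session
with open("interaction.log", "a") as f:
    log_event(f, "button_click", target="save")
    log_event(f, "page_view", page="settings")
```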
Planning and conducting observation in the field
- decide on how involved you will be: passive observer to active participant
- how to gain acceptance
- how to handle sensitive topics, e.g. culture, private spaces, etc.
- how to collect the data:
* what data to collect
* what equipment to use
* when to stop observing
Ethnography
Goal: to experience the participants and their context
Ethnographers immerse themselves in the culture that they study
analyzing video and data logs can be time-consuming
collections of comments, incidents and artifacts are made
co-operation of people being observed is required
informants are useful
data analysis is continuous
interpretivist technique
questions get refined as understanding grows
reports usually contain examples
online ethnography
online interaction differs from face-to-face interaction
virtual worlds have persistence that physical worlds do not have
ethical considerations and presentations of results are different
observations and materials that might be collected
- activity or job descriptions
- rules and procedures
- descriptions of activities
- recordings
- informal interviews
- diagrams (of the physical layout,…)
- photographs, videos, workflow diagrams, process maps, …