UI Design - Test 3 Flashcards
Requirements Analysis
1. Needs assessment
If we decide to go on:
2. Functional requirements
3. Non-functional requirements
4. Data requirements
5. Environmental requirements (context of use)
6. Social environmental requirements
7. Organizational environmental requirements
Needs Assessment
Burton & Merrill’s 4 steps (1991)
1) Identify a broad range of possible goals
2) Rank goals in order of importance
3) Identify discrepancies between expected and actual performance
4) Set priorities for action
Preference Metrics
(affective data)
commonly use a Likert scale: five labeled points (e.g., "A lot," "A little," ...)
often require special statistical measures (responses are ordinal, not interval)
often correlate weakly, if at all, with performance metrics
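A minimal sketch of the "special measures" point, assuming a 5-point coding (1 = low, 5 = high): Likert data is ordinal, so median and mode are safer summaries than a mean.

```python
from statistics import median, mode

# Assumed 5-point Likert coding (illustrative, not from the card):
# 1 = "Not at all" ... 5 = "A lot"
responses = [4, 5, 3, 4, 2, 5, 4]

# Ordinal data: summarize with median/mode rather than a mean.
print("median:", median(responses))  # 4
print("mode:  ", mode(responses))    # 4
```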
SUSS: Subjective Usability Scales for Software
a. Valence-> liking or personal preference
b. Aesthetics -> attractiveness
c. Organization -> graphical design + layout
d. Interpretation -> ease of understanding what the interface presents
e. Acquisition -> ease of learning
f. Facility -> overall ease of use
Performance Metrics
- Completeness -> given a set series of tasks, how many are completed in a certain time period
- Correctness-> how many were done correctly
- Effectiveness-> how much work was done correctly out of the total
- Efficiency-> what was the effectiveness relative to some time period
- Proficiency-> compared to an expert, what is the efficiency of subject
- Productiveness -> of all the time spent on the task, how much was productive (TimeProductive / TimeTotal)
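A sketch putting the definitions above into code; the names and sample numbers are illustrative assumptions, not real data.

```python
# Illustrative inputs (assumed sample values)
tasks_given = 10          # tasks in the set series
tasks_done = 8            # tasks completed in the time period
tasks_correct = 6         # of those done, how many were done correctly
time_total = 30.0         # minutes spent in total
time_productive = 24.0    # minutes judged productive

completeness = tasks_done / tasks_given        # 0.80
correctness = tasks_correct / tasks_done       # 0.75
effectiveness = tasks_correct / tasks_given    # 0.60 (correct out of total)
efficiency = effectiveness / time_total        # effectiveness per minute

expert_efficiency = 0.9 / 20.0   # assumed expert: 90% effective in 20 minutes
proficiency = efficiency / expert_efficiency   # subject relative to expert

productiveness = time_productive / time_total  # 0.80

print(completeness, correctness, effectiveness,
      efficiency, proficiency, productiveness)
```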
Predictive Metrics
- predict how "good" the final product will be.
Criteria for a good metric
- Easy to calculate
- Apply to passive prototypes as well as active prototypes
- Must have a strong rationale and conceptual basis
- Must be able to discriminate between designs
- Offer guidance for design
- Must effectively predict actual usability
- Directly indicate the quality of the design
Essential Efficiency
a procedural metric based on essential use cases
EE = 100 * S_essential / S_enacted (S_enacted, the enacted step count, is often larger)
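A one-function sketch of EE; the step counts below are illustrative.

```python
def essential_efficiency(s_essential: int, s_enacted: int) -> float:
    """EE = 100 * S_essential / S_enacted."""
    return 100.0 * s_essential / s_enacted

# e.g., 5 essential steps that take 12 enacted steps in the actual UI:
print(essential_efficiency(5, 12))  # ~41.7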
Task Concordance
A measure of how many pairs of tasks are correctly ordered versus incorrectly ordered, e.g., ranking by expected frequency versus ranking by measured difficulty.
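A sketch of the pair counting; the scaling convention (concordant minus discordant pairs over total pairs, giving a -100 to +100 range) is an assumption.

```python
from itertools import combinations

def task_concordance(expected_rank, difficulty_rank):
    """Correctly vs. incorrectly ordered task pairs, scaled to [-100, 100].

    Assumed convention: TC = 100 * (concordant - discordant) / total pairs.
    """
    idx_pairs = list(combinations(range(len(expected_rank)), 2))
    d = 0
    for i, j in idx_pairs:
        prod = ((expected_rank[i] - expected_rank[j])
                * (difficulty_rank[i] - difficulty_rank[j]))
        d += (prod > 0) - (prod < 0)   # +1 concordant, -1 discordant, 0 tie
    return 100.0 * d / len(idx_pairs)

# Three tasks ranked by expected frequency vs. measured difficulty:
print(task_concordance([1, 2, 3], [1, 3, 2]))  # 33.3 (one pair out of order)
```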
Task Visibility
a procedural metric based on essential use cases. Measures how visible the tools and components of a task are when they are needed.
- 1.0 - entirely available/visible (i.e. a button that directly performs the task)
- 0.5 - retrievable from a menu
- 0.0 - not obvious at all
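A sketch aggregating the step scores above; the TV = 100 * (sum of visibilities) / (number of steps) form is an assumption about how the card's scale is combined.

```python
# Visibility per enacted step, using the scale above:
# 1.0 = directly visible, 0.5 = behind a menu, 0.0 = not obvious.
step_visibility = [1.0, 1.0, 0.5, 0.0, 1.0]

# Assumed aggregation: TV = 100 * sum(V) / number of steps
tv = 100.0 * sum(step_visibility) / len(step_visibility)
print(tv)  # 70.0
```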
Layout Uniformity
a structural metric: layout uniformity is lowered by a highly chaotic interface/design with many distinct widget sizes and positions. A good score is 50-85% (perfect uniformity is not the goal either).
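A simplified sketch of the idea (an illustration, not the exact published formula): count the distinct widths, heights, and positions used across widgets; more repetition means more uniformity.

```python
# Hypothetical widgets as (left, top, width, height) tuples:
widgets = [(10, 10, 80, 24), (10, 44, 80, 24), (10, 78, 120, 24)]

n = len(widgets)
columns = zip(*widgets)                           # lefts, tops, widths, heights
distinct = sum(len(set(col)) for col in columns)  # distinct values actually used
worst = 4 * n   # every widget differs on every attribute (chaos)
best = 4        # all widgets identical on every attribute (total uniformity)
lu = 100.0 * (worst - distinct) / (worst - best)
print(round(lu, 1))  # 62.5 -- inside the 50-85% "good" band
```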
Gathering Data - Categories
- Structured - all questions set up ahead of time
- Unstructured - some questions set up ahead of time, others arise from responses and comments
- Open-ended - elicit whatever responses the person may want to give us
- Closed-ended - all answers are chosen from a fixed set of options
Gathering Data - Basic Rules
- Make them short + sweet, 1 page is best
- Easy questions - closed-ended, structured
- Open-ended questions - if any at all, should come last
- Do a pilot study to verify that all questions are understood - usually 7-10 people
- Can use methods to determine reliability and validity
- All users complete the instrument
- Privacy is good if it can be managed
- Explain your purpose - get buy in
- Thank them for their participation
Surveys - Advantages & Disadvantages
Advantages: cheap, everyone can participate.
Disadvantages: what do we do when answers are left blank? Responses may reflect how a question is worded rather than what we're trying to measure.
Methods of Gathering Info
- Surveys/Questionnaires
- Interviews (person-to-person questionnaires)
- Focus Groups
- Naturalistic Observation - Ethnography
- Studying Historical Documents
Usability Testing (Evaluation)
- Set up goals and measurable objectives
- Determine the methods for monitoring results to ensure accuracy and reliability.
- Create any testing instruments and validate with a pilot study.
- If necessary, train expert evaluators.
- Set up carefully monitored locations
- Pick the sample - random samples if possible
- Run the experiments - be consistent
- Pool the data and perform statistics
- Draw conclusions, if possible
Expert Evaluation
- a subjective analysis of the interface by a paid expert
○ May find 60%-70% of errors
○ Fairly quick
○ All you get is a list of errors
○ Methods used
A. Studied Ignorance - look at the interface as a rank novice would (a good method for finding immediate errors)
B. Over the Edge - try to overuse the system and see how it reacts
C. Every Mountain - test every menu option, combination of options, button, tool, etc.
Peer Evaluation
- get your pal at work to test the system
A. May see things the way a developer does, not the way other users would
B. Bias - for or against your product
C. Politics and status
D. Their time costs money too
Inspections (Evaluation)
Intended ONLY to identify errors.
- Heuristic Evaluations - evaluate using Jakob Nielsen's original 10 heuristics (95% of errors fall under those rules)
Cognitive Walkthrough (Evaluation)
Task scenarios are identified and success criteria are set. Experts watch a walk-through and find problems, keeping the tasks and users in mind.
Collaborative Usability Inspection (Evaluation)
A systematic examination of a finished product, design, or prototype from the point of view of its usability by the intended end users. (Quality Assurance technique)