5. Privacy Interfaces Flashcards
Just-in-time privacy interfaces
Just-in-time interfaces are shown in the same transactional context as the data practice they pertain to. This supports reasoning in the moment and means they can be specific and short, communicating only the most relevant information and choices.
Context-dependent privacy interfaces
Context-dependent privacy interfaces are triggered by certain aspects of the user’s context.88 For instance, being in physical proximity to an IoT sensor may cause the device to announce its presence (e.g., flashing an LED, beeping, sending a description of its data practices to the user’s phone). Other context triggers might be someone accessing the user’s previously uploaded information, or an analysis of the user’s previous privacy-settings behavior that warns about potentially unintended privacy settings. For example, Facebook displays a warning that a post is about to be shared publicly when the audience is still set to public from the user’s last post but the user is not typically in the habit of posting publicly.
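A minimal sketch of the kind of check behind such a warning, with a hypothetical data model and lookback window:

```python
# Minimal sketch (hypothetical data model and lookback window): warn before a post
# goes public when the user's recent history suggests public sharing is unintended.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    audience: str  # "private", "friends" or "public"

def should_warn_about_audience(recent_posts: list, new_audience: str,
                               lookback: int = 10) -> bool:
    """Trigger a context-dependent warning if the selected audience is public
    but none of the user's recent posts were public."""
    if new_audience != "public":
        return False
    recent = recent_posts[-lookback:]
    return bool(recent) and all(p.audience != "public" for p in recent)

history = [Post("weekend photos", "friends"), Post("family dinner", "private")]
if should_warn_about_audience(history, "public"):
    print("Heads up: this post will be visible to everyone. Keep it public?")
```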
Periodic reminders
Periodic reminders make users aware of data practices they agreed to previously and allow consent to be renewed if the practice is still ongoing. They are especially useful when the data practice occurs in the background, invisible to the user.
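A minimal sketch of the triggering logic, assuming a hypothetical one-year reminder interval:

```python
# Minimal sketch (hypothetical one-year interval): decide whether an ongoing
# background data practice should trigger a periodic reminder to renew consent.
from datetime import datetime, timedelta

REMINDER_INTERVAL = timedelta(days=365)

def needs_reminder(practice_active: bool, last_consent: datetime) -> bool:
    """Return True if the practice is still ongoing and consent is older than the interval."""
    return practice_active and datetime.utcnow() - last_consent >= REMINDER_INTERVAL

if needs_reminder(practice_active=True, last_consent=datetime(2023, 1, 15)):
    print("Reminder: location data is still collected in the background. "
          "Review, renew or withdraw your consent.")
```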
Persistent privacy indicators
Persistent privacy indicators are shown whenever a data practice is active. For instance, cameras often have lights to indicate when the camera is recording. Persistent indicators can provide an unobtrusive cue about especially critical data practices, but there’s also a risk that the indicator will not be noticed.89
On-demand privacy information and controls
On-demand privacy information and controls allow users to seek out and review privacy information or their privacy settings and opt-outs at any time. On-demand interfaces should be made available in a well-known or easily findable location (e.g., a website should have a “privacy” subdomain or “/privacy/” folder and provide links to privacy controls from its privacy policy and in relevant menus).
Channels to deliver privacy interfaces
Privacy interfaces can be delivered through different communication channels:
Primary channel (the device or system the user is interacting with)
Secondary channel (a different device or channel, used when the primary device is constrained, e.g., an IoT device without a screen whose notices are shown on the user’s phone)
Public channels (e.g., posted signs informing about video surveillance in an area)
Modalities to present privacy interfaces
Privacy interfaces can be presented in different ways, with different interaction modalities:
Visual privacy interfaces
Auditory privacy interfaces
Haptic and other modalities may also be leveraged as privacy interfaces. For instance, device vibration could be used as an indicator for data collection.
Olfactory displays could be used to inform about privacy risks with different scents (e.g., lavender scent when visiting a privacy-friendly website; sulphur when visiting a privacy-invasive one). Ambient lights could also serve as privacy indicators. Could taste or skin conduction be used in privacy interfaces? Although it might not be immediately apparent how less conventional modalities could be used for privacy interfaces, the important point is to creatively explore even unconventional design opportunities. Such exploration often leads to helpful insights that can inform practical solutions.
Machine-readable specifications of privacy notices and controls enable the consistent presentation and aggregation of privacy information and controls from different systems or apps. Mobile apps have to declare their permission requests in a machine-readable format, and the mobile operating system is responsible for providing permission prompts and managing the user’s consent. IoT devices that lack screens and input capabilities could broadcast their machine-readable privacy notices to smartphones of nearby users, which then present the respective privacy information and choices to the user.
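As an illustration, the sketch below uses a hypothetical, non-standardized JSON schema for a broadcast notice and shows how a companion phone app might render it as a short, human-readable summary with a pointer to controls:

```python
# Sketch of the idea with a hypothetical, non-standardized JSON schema: an IoT
# device broadcasts a machine-readable notice; a nearby phone renders it for the user.
import json

broadcast_payload = json.dumps({
    "device": "Lobby occupancy sensor",
    "practices": [
        {"data": "presence (motion)", "purpose": "adjust lighting", "retention_days": 0},
        {"data": "video", "purpose": "security recording", "retention_days": 30},
    ],
    "controls_url": "https://example.com/privacy/lobby-sensor",
})

def render_notice(payload: str) -> str:
    """Turn the machine-readable notice into a short, human-readable summary."""
    notice = json.loads(payload)
    lines = [f"{notice['device']} nearby collects:"]
    for p in notice["practices"]:
        lines.append(f"- {p['data']} for {p['purpose']} (kept {p['retention_days']} days)")
    lines.append(f"Manage your choices: {notice['controls_url']}")
    return "\n".join(lines)

print(render_notice(broadcast_payload))
```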
Control of privacy notices
User choices, consent dialogs and privacy settings can be delivered in different ways that affect how users interact with them.
Blocking privacy controls force users to interact with the privacy interface in order to be able to proceed. Blocking controls are useful when the user must make a choice, e.g., when consent is needed. However, how choices are presented affects whether the interaction is actually an expression of the user’s preference, and so should be considered carefully. For example, presenting a complex privacy policy and providing only the options to accept or not use the app is not suitable for eliciting consent, as users are likely to click such a dialog away without reading it. Preferable are prompts that are specific to a single data practice and offer options to either allow or deny the practice (e.g., mobile permission requests). All choices should be equally easy for the user to exercise.
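A minimal sketch of such a blocking, practice-specific prompt, where allowing and denying are equally easy and nothing is preselected (a console prompt stands in for a real consent dialog):

```python
# Minimal sketch of a blocking, practice-specific prompt; a console prompt stands in
# for a real consent dialog. "Allow" and "deny" are equally easy, with no preselection.
def ask_consent(practice: str) -> bool:
    """Block until the user makes an explicit choice about one specific data practice."""
    while True:
        answer = input(f"Allow {practice}? [allow/deny] ").strip().lower()
        if answer in ("allow", "deny"):
            return answer == "allow"
        print("Please answer 'allow' or 'deny'.")

if ask_consent("access to your approximate location for local weather"):
    print("Location-based weather enabled.")
else:
    print("Continuing without location access.")
```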
Nonblocking privacy controls do not interrupt the interaction flow but are rather integrated as user interface elements into the UX. For example, social media apps might provide an audience selector (e.g., private, friends, public) within the interface for creating a post. The control is available but does not have to be used and, at the same time, reminds the user of their current privacy settings.
Decoupled privacy controls are not part of the primary UX. They are useful to provide the user the opportunity to inspect their privacy settings or the data the system has collected about the user. Common examples are privacy settings and privacy dashboards. The advantage of decoupled privacy controls is that they can be more comprehensive and expressive than integrated controls; the downside is that users need to actively seek them out. Good practice is to provide decoupled privacy controls at a central place and then point to them from other, more concise privacy notices and controls where appropriate.
Notice and choice model
Transparency and user rights are core concepts of privacy legislation and guidelines globally, ranging from the Organisation for Economic Co-operation and Development (OECD) privacy guidelines to the U.S. Federal Trade Commission’s (FTC’s) fair information practice principles, Europe’s GDPR and privacy legislation in many other countries.9 While specific requirements may vary, companies that collect or process personally identifiable information (PII) typically have to be transparent about their data practices and inform data subjects about their rights and options for controlling or preventing certain data practices. This is often known as the notice and choice model.
Privacy harm
A negative impact of data processing on the data subject’s privacy. Sometimes the harm is tangible and observable, but often data subjects are not aware of it. Privacy harms may also not manifest immediately but only much later.
Control paradox
The control paradox describes how perceived control over privacy may lead to increased sharing, which in turn may increase privacy risks.
Bounded rationality
Generally, humans are limited in their ability and time to acquire, memorize and process all information relevant to making a fully informed and rational decision. Behavioral economists call this deviation from the ideal of a fully rational actor bounded rationality. To compensate for the inability and impracticality of considering all potential outcomes and risks, humans rely on heuristics in their decision-making to reach a satisfactory solution rather than an optimal one. However, decision heuristics can lead to inaccurate assessments of complex situations.23 Rational decision-making is further affected by cognitive and behavioral biases—systematic errors in judgment and behaviors.
Bounded rationality also applies to privacy decision-making.
Dark Patterns
Dark patterns are interface or system designs that purposefully exploit cognitive and behavioral biases in order to get people to behave a certain way regardless of whether that behavior aligns with their preferences. Some common privacy dark patterns include the following:
Default settings—Default settings frequently exploit status quo bias. Similarly, preselecting a certain option nudges users towards accepting that choice.
Cumbersome privacy choices—Making it more difficult, arduous and lengthy to select a privacy-friendly choice compared with a privacy-invasive one deters users from privacy-friendly options.
Framing—How a choice is described and presented can affect behavior. An emphasis on benefits, a de-emphasis on risks or the presentation of trust cues may lead to people making riskier privacy decisions than they would with a neutral presentation of choices.
Rewards and punishment—Users are enticed to select a service’s preferred choice with rewards or are deterred from privacy-friendlier choices with punishments.
Forced action—Users must accept a data practice or privacy choice in order to continue to a desired service, regardless of whether they actually agree with the practice. Because the benefit of access is immediate while potential privacy harms lie in the future, this exploits hyperbolic discounting; users are forced to act against their privacy preference.
Norm shaping—Other people’s observed information-sharing behavior, say, on a social media service, shapes the perceived norms of information sharing on a platform and individuals’ own disclosure behavior. For example, a controlled experiment showed that people who see more revealing posts in the feed of a photo-sharing service tend to consider such photos more appropriate and are more likely to share more revealing information themselves than those who see less revealing photos.42 Thus, the algorithm for determining news feed content, which might purposefully highlight or suppress certain content to activate such anchoring effects, has a lot of power in steering users’ behavior, regardless of whether the displayed posts are actually representative of user behavior.
Distractions and delays—Even small distractions or delays can create a distance between awareness of privacy risks and behavior that can cancel out the effects of a privacy notice.
Such manipulations of privacy behavior are unethical, as they constrain people in their self-determination and agency over their privacy. Moreover, they further exacerbate the misalignment between people’s privacy preferences and expectations, their actual behavior, and the data practices to which they are subject.
Usability
ISO 9241-11 defines usability as the “extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.”
Five components determine a system’s usability:
Learnability—How easy is it for users to accomplish basic tasks the first time they encounter the system?
Efficiency—Once users have learned the system, how quickly can they perform tasks?
Memorability—When users return to the system after a period of not using it, how easily can they reestablish proficiency?
Errors—How many errors do users make, how severe are these errors and how easily can they recover from the errors?
Satisfaction—How pleasant is it to use the system?
Utility
Utility is about functionality. Does the system support users in satisfying their needs and accomplishing their goals? An interface can be very usable, but it is useless if it does not align with users’ actual needs and expectations.
For example, an unsubscribe mechanism might be easy, fast and pleasant to use (great usability) but only give users the option to unsubscribe from all of an organization’s communications or none of them, even though some users might want to unsubscribe from marketing emails but continue receiving important notifications about their account activity. As a result, people may not use the very usable opt-out mechanism because it is not useful for them.
A system with high utility meets the exact needs of users. A useful system has both good utility and good usability.
User Experience
Usability is important but, on its own, often not sufficient to characterize what constitutes a good or bad experience for users. UX design takes a more holistic perspective that places users and their needs at the center and “encompasses all aspects of the end-user’s interaction with the company, its services, and its products.”47 This might include the actual product; terms of service and privacy policies tied to the product; the product’s purchase, unboxing and sign-up experience as well as customer support, documentation, manuals, privacy settings and so on.
A system’s UX encompasses the extent to which a system meets users’ needs (utility), usability, aesthetics and simplicity, and the joy, emotional reactions and fulfillment a system provides. UX design therefore integrates user interface design, engineering, visual design, product design, marketing and branding as well as business models and related considerations.
User-centered design process
UX design follows a principled and systematic process. At the center of the design process are users—typically a set of anticipated user populations and stakeholders. While different methodologies follow slightly different steps, generally the user-centered design process consists of three phases—research, design, evaluation—with projects typically going through multiple iterations of the user-centered design process to iteratively refine designs and better align them with user needs.
UX Design research & analysis phase
UX design starts with research and analysis. The goal of this phase is to understand the context in which a certain system and its UX will operate and function. An important aspect of this is identifying a system’s user populations and then analyzing their specific characteristics and needs as they relate to both the system and its context of use. This often includes learning about people’s mental models of systems or processes in order to identify and understand potential misconceptions.
Common user research methods and activities include semistructured interviews, diary studies, contextual inquiry, survey research and usability tests (with the current system or related competitors). User research is typically complemented by desk research, involving competitive analysis, heuristic evaluation and review of relevant academic research and literature. Personas, scenarios, user journeys and affinity diagrams are tools used for synthesizing findings into higher-level insights that can be used and referenced throughout the design process.48
Based on the research findings, the requirements for the UX design are defined.
UX Design Phase
The requirements identified in the research phase inform the design phase. UX design aims to find and create solutions that meet user needs as well as other system requirements, and good UX design takes users’ cognitive and physical characteristics into account. The process is highly iterative and user-centric: potential solution designs are put into the hands of users early on and throughout the refinement process to ensure that designs properly support user needs. Designs typically start with very basic prototypes, often sketches and paper prototypes, which look far from the final product but make it possible to simulate and test interaction flows before investing time, effort and resources in further development.
UX Evaluation
Throughout the design process, designs should be iteratively evaluated with current or future users of a system. As such, the design phase and the evaluation phase are closely interlinked. The purpose of UX evaluation is to validate that the system’s designs and prototypes indeed meet the user needs and requirements identified in the research phase. Evaluation methods are often the same or similar to the user research methods mentioned in Section 5.3.2.1, with the addition of A/B testing and production deployment of developed solutions. UX validation may include both quantitative assessments (e.g., log file analysis, interaction times, success rates) and qualitative assessments (e.g., perceived usability, perceived cognitive load, joy of use, comprehension), with both providing important insights to evaluate and further refine designs and potentially also the design requirements.
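As a small illustration of the quantitative side, the sketch below assumes a hypothetical log format and computes a task success rate and the median completion time of successful attempts:

```python
# Sketch of a quantitative assessment over a hypothetical log format: task success
# rate and median completion time of successful attempts from usability-test records.
from statistics import median

sessions = [
    {"participant": "P1", "task": "find privacy settings", "success": True,  "seconds": 42},
    {"participant": "P2", "task": "find privacy settings", "success": False, "seconds": 180},
    {"participant": "P3", "task": "find privacy settings", "success": True,  "seconds": 61},
]

success_rate = sum(s["success"] for s in sessions) / len(sessions)
median_time = median(s["seconds"] for s in sessions if s["success"])

print(f"Success rate: {success_rate:.0%}; median time of successful attempts: {median_time}s")
```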
Value-sensitive design
Value-sensitive design is a design approach that accounts for ethical values, such as privacy, in addition to usability-oriented design goals.51 Value-sensitive design methods help to systematically assess the values at play in relation to a specific technology and respective stakeholders and the ways the technology might meet or violate those values. They also help to iteratively develop designs that are sensitive to and respectful of those values.
Privacy reminders
Organizations may choose or be required to remind people about data practices they are subject to. For instance, financial institutions in the United States are required to provide an annual privacy notice to their customers under the Gramm-Leach-Bliley Act (GLBA). However, privacy reminders can also take a more proactive shape and make users aware of data practices they had previously agreed to or nudge them to check and update their privacy settings.
Privacy notices
Privacy notices aim to provide transparency about an organization’s data practices and other privacy-related information, such as measures taken to ensure security and privacy of users’ information. Privacy notices need to be provided to users—typically before a data practice takes place—and explain what information about data subjects is being collected, processed, retained or transferred for what purpose. Laws and regulations in different countries may pose specific transparency requirements in terms of what information needs to be provided, when it needs to be provided and how it needs to be provided. Privacy notices can take different shapes and forms.
Privacy policies
Privacy policies are probably the most common type of privacy notice. A privacy policy holistically documents an organization’s data collection, processing and transfer practices and also includes other privacy-related information. While most common, privacy policies are also among the most ineffective privacy user interfaces. Most people do not read privacy policies, as they have little incentive to do so.