1. & 2. Privacy Flashcards
Security definition
Security is traditionally defined as a set of activities that supports three different quality attributes:
- confidentiality, which ensures that information is only accessible by authorized individuals;
- integrity, which ensures that information has not been modified, either accidentally or without authorization;
- availability, which ensures that information is readily available whenever it is needed.
Some have argued that privacy is simply a subset of security, because privacy includes restricting access to information or ensuring confidentiality. This view is convenient, as it suggests that organizations that follow good security practices have already addressed privacy.
Privacy risks
Privacy risks concern the likelihood that a privacy threat will exploit an IT vulnerability and the impact of this exploit on the individual and organization that retains information on the individual. The source of a threat, called the threat agent, may be internal to an organization (i.e., an insider threat), or it may be external.
Identity theft
Conducting fraudulent transactions on a person’s behalf, using stolen personal data.
Phishing
Phishing is a form of social engineering that uses a routine, trusted communication channel to capture sensitive information from an unsuspecting employee. In a typical phishing attack, the victim is tricked into logging in to what they believe is a legitimate site, but which is actually a front set up by the attacker to collect users’ login credentials.
Spear-phishing or whaling
Phishing is called spear-phishing or whaling when the activity targets high-profile personnel, such as corporate executives or HR managers who have more extensive access or access to more sensitive information.
Data Privacy Principles (historical development)
The more prominent principles that developers should be familiar with include the following:
The Fair Information Practice Principles (FIPPs) (1977), published by the U.S. Federal Trade Commission (FTC) and used as guidance to businesses in the United States
The Guidelines on the Protection of Privacy and Transborder Flows of Personal Data (1980), published by the Organization for Economic Cooperation and Development (OECD)
The Privacy Framework (2005), published by the Asia-Pacific Economic Cooperation (APEC)
The Generally Accepted Privacy Principles (GAPP) (2009), published by the American Institute of Certified Public Accountants (AICPA) and the Canadian Institute of Chartered Accountants (CICA)
NISTIR 8062, An Introduction to Privacy Engineering and Risk Management in Federal Systems (2017), published by the U.S. National Institute of Standards and Technology (NIST)
The 1980 OECD Guidelines provide a foundational and international standard for privacy. The guidelines contain principles that are not found in the FTC’s FIPPs, such as the collection limitation principle, and the GAPP largely refines the guidelines into more concrete privacy controls, in a manner similar to the NIST privacy controls.
The Data Life Cycle
The Data Life Cycle:
- Consent & Notice
- Collection -> Disclosure
- Processing -> Retention -> Destruction
Types of data collection
(1) First-party collection, when the data subject provides data about themselves directly to the collector, e.g., in a web-based form that is only submitted when the data subject clicks a button;
(2) surveillance, when the collector observes data streams produced by the data subject without interfering with the subject’s normal behavior;
(3) repurposing, which occurs when previously collected data is assigned to be used for a different purpose, e.g., reusing a customer’s shipping address for marketing; and
(4) third-party collection, when previously collected information is transferred to a third party to enable a new data collection.
Each of the above four collection types may be either active, which occurs when a data subject is aware of the collection, or passive, when the data subject is unaware.
Mechanisms to obtain consent
Various consent mechanisms exist to engage the data subject in the collection activity and make the collection more overt. The best practice is to obtain consent prior to the collection, to avoid any misconceptions and to allow the data subject to opt out of, or opt in to, the collection before it occurs. With explicit consent, the individual must take an affirmative action to communicate consent.
Passive or implied consent is generally obtained by including a conspicuous link to a privacy notice that describes the collection activities.
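As a minimal sketch of how explicit, prior consent might be enforced in code (the ConsentStore class, purpose names and collect_email function are invented for illustration, not drawn from any particular framework):
```python
# Hypothetical sketch only: ConsentStore, record_consent and collect_email are
# invented names, not part of any standard library or framework.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str        # e.g., "order_fulfillment", "marketing"
    granted: bool
    timestamp: datetime


class ConsentStore:
    def __init__(self) -> None:
        self._records: dict[tuple[str, str], ConsentRecord] = {}

    def record_consent(self, subject_id: str, purpose: str, granted: bool) -> None:
        # Explicit consent: the data subject takes an affirmative action (opt in).
        self._records[(subject_id, purpose)] = ConsentRecord(
            subject_id, purpose, granted, datetime.now(timezone.utc))

    def has_consent(self, subject_id: str, purpose: str) -> bool:
        record = self._records.get((subject_id, purpose))
        return record is not None and record.granted


def collect_email(store: ConsentStore, subject_id: str, email: str, purpose: str) -> None:
    # Consent is checked *before* the collection occurs, per the best practice above.
    if not store.has_consent(subject_id, purpose):
        raise PermissionError(f"no explicit consent from {subject_id} for {purpose}")
    print(f"collected {email} for purpose: {purpose}")
```
The key design point is that the consent check gates the collection itself, rather than being recorded after the fact.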
Media-appropriate techniques for sanitizing storage devices and destroying data
The U.S. NIST Special Publication 800-88, Appendix A, describes several media-appropriate techniques for sanitizing storage devices and destroying data. These range from clearing the data by overwriting it with pseudorandom data, to degaussing electromagnetic media, to incinerating the physical media. The level of destruction required is determined by the sensitivity of the data; in many situations, simply deleting the data may offer adequate protection.
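As a rough illustration of the lowest rung, clearing, the sketch below overwrites a file with pseudorandom data before deleting it; clear_file is a hypothetical helper, and a single application-level overwrite is generally not sufficient on SSDs, journaling file systems or backed-up storage, which is why SP 800-88 also covers purging and physical destruction.
```python
import os


def clear_file(path: str) -> None:
    """Overwrite a file with pseudorandom data, then delete it (simplified 'clear')."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(os.urandom(size))    # replace contents with pseudorandom bytes
        f.flush()
        os.fsync(f.fileno())         # push the overwrite to the storage device
    os.remove(path)                  # remove the now-meaningless file
```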
Role of the Area Specialist
The area specialist has several responsibilities: to collect critical regulatory requirements from lawyers, to validate that marketing requirements are consistent with laws and social norms, to meet with designers to discuss best practices when translating requirements into design specifications, and to collect user feedback and monitor privacy blogs, mailing lists and newspapers for new privacy incidents. As a privacy engineer, the area specialist develops a community of practice—“a collective process of learning that coalesces in a shared enterprise,” such as reducing risks to privacy in technology. To bridge Scrum and privacy, the area specialist can participate in developing user stories to help identify privacy risks and harms and then propose strategies to mitigate those risks. Furthermore, the area specialist may review the sprint backlog, which contains the list of stories that will be implemented during the current sprint, to ensure that the working increment produced by the iteration does not contain major privacy risks.
Methods for engineering privacy into systems
Methods for engineering privacy into systems are essentially specialized life cycles in their own right. They include:
Privacy Management Reference Model and Methodology (PMRM)—promulgated by the Organization for the Advancement of Structured Information Standards (OASIS)
Preparing Industry to Privacy-by-design by supporting its Application in REsearch (PRIPARE) privacy and security-by-design methodology, funded by the European Commission.
The LINDDUN threat modeling method developed at KU Leuven in Belgium
Privacy Risk Assessment Methodology (PRAM) developed by the U.S. National Institute of Standards and Technology (NIST).
LINDDUN & PRAM are much more atomic and aimed at specific engineering activities
Defect
A flaw in the requirements, design or implementation that can lead to a fault.
Fault
An incorrect step, process or data definition in a computer program.
Error
The difference between a computed, observed or measured value or condition and the true, specified or theoretically correct value or condition.
Failure
The inability of a system or component to perform its required functions within specified performance requirements.
Harm/hazard
The actual or potential ill effect or danger to an individual’s personal privacy, sometimes called a hazard.
Functional violation of privacy
A functional violation of privacy results when a system cannot perform a necessary function to ensure individual privacy.
For example, this occurs when sensitive, personally identifiable information (PII) is disclosed to an unauthorized third party. In this scenario, the defect is the one or more lines of computer source code that do not correctly check that an access attempt is properly authorized, and the fault is the execution of that source code that leads to the error. The error is the unauthorized access, which is an observed condition that is different from the correct condition—“no unauthorized access will occur.” The failure is the unauthorized third-party access; failures are often described outside the scope of source code and in terms of business or other practices. Privacy harms may be objective or subjective: An objective harm is “the unanticipated or coerced use of information concerning a person against that person”; a subjective harm is “the perception of unwanted observation,” without knowing whether it has occurred or will occur.1
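To make the chain of terms concrete, here is a minimal sketch (record contents, role names and function names are invented, not from the source): the missing authorization check is the defect, executing that code for an unauthorized caller is the fault, the resulting unauthorized disclosure is the error, the system’s inability to protect the record is the failure, and any ill effect on the data subject is the harm.
```python
# Illustrative only: the record contents, role names and function names are invented.
RECORDS = {"alice": {"ssn": "***-**-1234"}}       # sensitive PII
AUTHORIZED_ROLES = {"records_clerk"}


def get_record_buggy(subject_id: str, caller_role: str) -> dict:
    # Defect: the caller's role is never checked, so PII can be disclosed to anyone.
    return RECORDS[subject_id]


def get_record_fixed(subject_id: str, caller_role: str) -> dict:
    # Corrected behavior: refuse access unless the caller is authorized.
    if caller_role not in AUTHORIZED_ROLES:
        raise PermissionError("caller is not authorized to view this record")
    return RECORDS[subject_id]
```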
Definition of Risk
Risk is defined as a potential adverse impact along with the likelihood that this impact will occur. The classic formulation of risk is an equation: risk = probability of an adverse event × impact of the event.
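A small worked example (the event names and numbers below are invented for illustration) shows how the equation yields scores that can be ranked:
```python
# Worked example of risk = probability of an adverse event x impact of the event.
# The events and numbers are invented for illustration only.
events = {
    "re-identification of pseudonymous data": (0.10, 9.0),   # (probability, impact)
    "marketing email sent without consent":   (0.60, 2.0),
    "breach of stored payment data":          (0.05, 10.0),
}

risk_scores = {name: probability * impact
               for name, (probability, impact) in events.items()}

# Rank events by risk score, highest first, to prioritize mitigation.
for name, score in sorted(risk_scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:5.2f}  {name}")
```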
Risk comparisons
Risk comparisons are used to prioritize risks, nominally on the basis of the risk score, but sometimes based primarily on the highest impact or highest probability. However, it is often the case that a technical or empirical basis for one or both of these numbers is nonexistent, in which case an ordinal measure is used, such as assigning a value of low, medium or high impact to an adverse event.
Ordinal measures are subject to the limitations of human perception and bias, as are numerical measures, and all measures with the same level (e.g., low) remain contextually relative and not easily comparable. One approach is to identify a relative median event that a risk analyst can use to assign values to other events (e.g., event X is higher or lower impact than event Y). However, caution should be used when treating such measures as quantitative data, because normal arithmetic may not be applied to this data: One cannot, for example, take the sum of two or more ordinal values, that is, low + high ≠ medium, though explicit relations that map combinations of ordinal values to a resultant value can be rationalized. Hubbard and Seiersen as well as Freund and Jones, though, assert that when properly approached, quantitative measures are readily ascertainable and that any imprecision pales in comparison to that of qualitative, ordinal measures. In pursuit of a quantitative risk score, Bhatia and Breaux introduced an empirically validated privacy risk scale based on the theory of perceived risk described by Paul Slovic.
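For instance, an explicit relation over ordinal values can be written down as a lookup table rather than computed arithmetically; the matrix below is one illustrative assignment, not a standard.
```python
# One possible mapping from (likelihood, impact) ordinal pairs to a resultant level.
# The assignments are illustrative; ordinal values cannot simply be added or
# multiplied, so the relation must be stated explicitly.
RISK_MATRIX = {
    ("low", "low"): "low",       ("low", "medium"): "low",       ("low", "high"): "medium",
    ("medium", "low"): "low",    ("medium", "medium"): "medium", ("medium", "high"): "high",
    ("high", "low"): "medium",   ("high", "medium"): "high",     ("high", "high"): "high",
}


def ordinal_risk(likelihood: str, impact: str) -> str:
    return RISK_MATRIX[(likelihood, impact)]


print(ordinal_risk("medium", "high"))  # -> high
```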