1. & 2. Privacy Flashcards

1
Q

Security definition

A

Security is traditionally defined as a set of activities that supports three different quality attributes:
- confidentiality, which ensures that information is accessible only by authorized individuals;
- integrity, which ensures that information has not been modified in an unauthorized or unintended way;
- availability, which ensures that information is readily available whenever it is needed.
Some have argued that privacy is simply a subset of security, because privacy includes restricting access to information or ensuring confidentiality. This view is convenient, as it suggests that organizations with good security practices have already addressed privacy.

2
Q

Privacy risks

A

Privacy risks concern the likelihood that a privacy threat will exploit an IT vulnerability, and the impact of that exploit on the individual and on the organization that retains information about the individual. The source of a threat, called the threat agent, may be internal to an organization (i.e., an insider threat), or it may be external.

3
Q

Identity theft

A

Conducting fraudulent transactions on a person’s behalf, using stolen personal data.

4
Q

Phishing

A

Phishing is a form of social engineering that uses a routine, trusted communication channel to capture sensitive information from an unsuspecting employee. In a typical phishing attack, the victim is tricked into logging in to what they believe is a legitimate site, but which is actually a front set up by the attacker to collect users’ login credentials.

5
Q

Spear-phishing or whaling

A

Phishing is called spear-phishing or whaling when the activity targets high-profile personnel, such as corporate executives or HR managers who have more extensive access or access to more sensitive information.

6
Q

Data Privacy Principles (historical development)

A

The more prominent principles that developers should be familiar with include the following:
The Fair Information Practice Principles (FIPPs) (1977), published by the U.S. Federal Trade Commission (FTC) and used as guidance to businesses in the United States
The Guidelines on the Protection of Privacy and Transborder Flows of Personal Data (1980), published by the Organization for Economic Cooperation and Development (OECD)
The Privacy Framework (2005), published by the Asia-Pacific Economic Cooperation (APEC)
The Generally Accepted Privacy Principles (GAPP) (2009), published by the American Institute of Certified Public Accountants (AICPA) and the Canadian Institute of Chartered Accountants (CICA)
NISTIR 8062, An Introduction to Privacy Engineering and Risk Management in Federal Systems (2017), published by the U.S. National Institute of Standards and Technology (NIST)
The 1980 OECD Guidelines provide a foundational and international standard for privacy. The guidelines contain principles that are not found in the FTC’s FIPPs, such as the collection limitation principle, and the GAPP largely refines the guidelines into more concrete privacy controls, in a similar manner to the NIST privacy controls.

7
Q

The Data Life Cycle

A

The Data Life Cycle:
- Consent & Notice
- Collection -> Disclosure
- Processing -> Retention -> Destruction
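
As a rough illustration, the life cycle can be modeled as a state machine. This is a minimal sketch; the stage ordering and allowed transitions are one interpretation of the flow above, not a normative model:

```python
from enum import Enum, auto

class Stage(Enum):
    """Stages of the data life cycle (names are illustrative)."""
    CONSENT_AND_NOTICE = auto()
    COLLECTION = auto()
    PROCESSING = auto()
    DISCLOSURE = auto()
    RETENTION = auto()
    DESTRUCTION = auto()

# One plausible reading of the flow: consent precedes collection,
# collected data may be processed or disclosed, and retained data
# is eventually destroyed.
ALLOWED_TRANSITIONS = {
    Stage.CONSENT_AND_NOTICE: {Stage.COLLECTION},
    Stage.COLLECTION: {Stage.PROCESSING, Stage.DISCLOSURE},
    Stage.PROCESSING: {Stage.RETENTION, Stage.DISCLOSURE},
    Stage.RETENTION: {Stage.DESTRUCTION},
    Stage.DISCLOSURE: set(),
    Stage.DESTRUCTION: set(),
}

def may_transition(current: Stage, nxt: Stage) -> bool:
    """Return True if moving data from `current` to `nxt` follows the modeled flow."""
    return nxt in ALLOWED_TRANSITIONS[current]
```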

8
Q

Types of data collection

A

(1) First-party collection, when the data subject provides data about themselves directly to the collector, e.g., in a web-based form that is only submitted when the data subject clicks a button;
(2) surveillance, when the collector observes data streams produced by the data subject without interfering with the subject’s normal behavior;
(3) repurposing, which occurs when previously collected data is assigned to a different purpose, e.g., reusing a customer’s shipping address for marketing; and
(4) third-party collection, when previously collected information is transferred to a third party to enable a new data collection.

Each of the above four collection types may be either active, which occurs when a data subject is aware of the collection, or passive, when the data subject is unaware.

9
Q

Mechanisms to obtain consent

A

Various consent mechanisms exist to engage the data subject in the collection activity and make the collection more overt. The best practice is to obtain consent prior to the collection, to avoid any misconceptions and to allow the data subject to opt out of or opt in to the collection before it occurs. With explicit consent, the individual is required to expressly act to communicate consent.
Passive or implied consent is generally obtained by including a conspicuous link to a privacy notice that describes the collection activities.

10
Q

Media-appropriate techniques for sanitizing storage devices and destroying data

A

The U.S. NIST Special Publication 800-88, Appendix A, describes several media-appropriate techniques for sanitizing storage devices and destroying data. These range from clearing the data by overwriting it with pseudorandom data, to degaussing electromagnetic devices, to, finally, incinerating the physical media. The level of destruction required is determined by the sensitivity of the data; in many situations, simply deleting the data may offer adequate protection.
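
As a rough illustration of the weakest technique (clearing by overwrite), here is a Python sketch. Note that on SSDs and on journaling or copy-on-write file systems an in-place overwrite gives no guarantee the old blocks are gone, so this is not a substitute for the SP 800-88 guidance:

```python
import os

def clear_by_overwrite(path: str, passes: int = 1) -> None:
    """Overwrite a file with pseudorandom bytes, then delete it.

    Illustrates the 'clear' technique only; media-appropriate
    sanitization should follow NIST SP 800-88.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # pseudorandom overwrite
            f.flush()
            os.fsync(f.fileno())       # push the new bytes to disk
    os.remove(path)
```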

11
Q

Role of the Area Specialist

A

The area specialist has several responsibilities: to collect critical regulatory requirements from lawyers, to validate that marketing requirements are consistent with laws and social norms, to meet with designers to discuss best practices when translating requirements into design specifications, and to collect user feedback and monitor privacy blogs, mailing lists and newspapers for new privacy incidents. As a privacy engineer, the area specialist develops a community of practice—“a collective process of learning that coalesces in a shared enterprise,” such as reducing risks to privacy in technology. To bridge Scrum and privacy, the area specialist can participate in developing user stories to help identify privacy risks and harms and then propose strategies to mitigate those risks. Furthermore, the area specialist may review the sprint backlog, which contains the list of stories that will be implemented during the current sprint, to ensure that the working increment produced by the iteration does not contain major privacy risks.

12
Q

Methods for engineering privacy into systems

A

Methods for engineering privacy into systems basically amount to specialized life cycles themselves. These include the following:
Privacy Management Reference Model and Methodology (PMRM)—promulgated by the Organization for the Advancement of Structured Information Standards (OASIS)
Preparing Industry to Privacy-by-design by supporting its Application in REsearch (PRIPARE) privacy and security-by-design methodology, funded by the European Commission.
The LINDDUN threat modeling method developed at KU Leuven in Belgium
Privacy Risk Assessment Methodology (PRAM) developed by the U.S. National Institute of Standards and Technology (NIST).
LINDDUN and PRAM are much more atomic and aimed at specific engineering activities.

13
Q

Defect

A

A flaw in the requirements, design or implementation that can lead to a fault.

14
Q

Fault

A

An incorrect step, process or data definition in a computer program.

15
Q

Error

A

The difference between a computed, observed or measured value or condition and the true, specified or theoretically correct value or condition.

16
Q

Failure

A

The inability of a system or component to perform its required functions within specified performance requirements.

17
Q

Harm/hazard

A

The actual or potential ill effect or danger to an individual’s personal privacy, sometimes called a hazard.

18
Q

Functional violation of privacy

A

A functional violation of privacy results when a system cannot perform a necessary function to ensure individual privacy.
For example, this occurs when sensitive, personally identifiable information (PII) is disclosed to an unauthorized third party. In this scenario, the defect is the one or more lines of computer source code that do not correctly check that an access attempt is properly authorized, and the fault is the execution of that source code that leads to the error. The error is the unauthorized access, which is an observed condition that is different from the correct condition—“no unauthorized access will occur.” The failure is the unauthorized third-party access; failures are often described outside the scope of source code and in terms of business or other practices. Privacy harms may be objective or subjective: An objective harm is “the unanticipated or coerced use of information concerning a person against that person”; a subjective harm is “the perception of unwanted observation,” without knowing whether it has occurred or will occur.

19
Q

Definition of Risk

A

Risk is defined as a potential adverse impact along with the likelihood that this impact will occur. The classic formulation of risk is an equation: risk = probability of an adverse event × impact of the event.
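
A worked example of the equation, with hypothetical numbers:

```python
def risk_score(probability: float, impact: float) -> float:
    """Classic formulation: risk = probability of an adverse event x its impact."""
    return probability * impact

# Illustrative only: a breach with a 5% annual likelihood and a
# $200,000 impact outscores one with a 20% likelihood but a
# $10,000 impact.
print(risk_score(0.05, 200_000))  # 10000.0
print(risk_score(0.20, 10_000))   # 2000.0
```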

20
Q

Risk comparisons

A

Risk comparisons are used to prioritize risks, nominally on the basis of the risk score, but sometimes based primarily on the highest impact or highest probability. However, it is often the case that a technical or empirical basis for one or both of these numbers is nonexistent, in which case an ordinal measure is used, such as assigning a value of low, medium or high impact to an adverse event.
Ordinal measures are subject to the limitations of human perception and bias, as are numerical measures, and all measures with the same level (e.g., low) remain contextually relative and not easily comparable. One approach is to identify a relative median event that a risk analyst can use to assign values to other events (e.g., event X is higher or lower impact than event Y). However, caution should be used when treating such measures as quantitative data, because normal arithmetic may not be applied to this data: One cannot, for example, take the sum of two or more ordinal values, that is, low + high ≠ medium, though explicit relations that map combinations of ordinal values to a resultant value can be rationalized. Hubbard and Seiersen as well as Freund and Jones, though, assert that when properly approached, quantitative measures are readily ascertainable and that any imprecision pales in comparison to that of qualitative, ordinal measures. In pursuit of a quantitative risk score, Bhatia and Breaux introduced an empirically validated privacy risk scale based on the theory of perceived risk described by Paul Slovic.
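
A sketch of such an explicit relation: a lookup table mapping ordinal (likelihood, impact) pairs to a resultant level. The table entries are an illustrative policy choice, not a computation — which is exactly the point about ordinal arithmetic:

```python
# Explicit relation over ordinal values; combinations are assigned,
# not added or multiplied.
RISK_MATRIX = {
    ("low", "low"): "low",          ("low", "medium"): "low",
    ("low", "high"): "medium",      ("medium", "low"): "low",
    ("medium", "medium"): "medium", ("medium", "high"): "high",
    ("high", "low"): "medium",      ("high", "medium"): "high",
    ("high", "high"): "high",
}

def combined_risk(likelihood: str, impact: str) -> str:
    """Resolve an ordinal pair via the agreed relation, not arithmetic."""
    return RISK_MATRIX[(likelihood, impact)]

assert combined_risk("low", "high") == "medium"
```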

21
Q

Risk management framework and options

A

Risk management frameworks provide a process for applying a risk model to a specific information system in order to identify and address risks. Risk models directly address the domain-specific issues, while risk management frameworks are more about process and are therefore more generic. ISO 31000 is an example of an existing generic risk management framework. Indeed, ISO 31000 is so generic that it essentially includes construction of a risk model as part of the process.
A risk management framework is a step-by-step process for identifying threats, vulnerabilities and risks and deciding how best to manage those risks.
Conventional risk management options can include (1) accepting the risk as is, (2) transferring the risk to another entity (which is how property insurance works), (3) mitigating the risk by introducing an appropriate privacy control or a system design change or (4) avoiding the risk (e.g., by abandoning particular functionality, data or the entire system).
A risk-risk trade-off is not risk avoidance per se, but the result of interdependent risk mitigation and acceptance decisions.

22
Q

Risk models

A

A risk model is what an analyst uses to identify and align threats with the system vulnerabilities that those threats may exploit to yield risks, which are adverse events associated with degrees of likelihood and impact.
The most frequently used and long-standing privacy risk models are the:
Compliance model
Fair Information Practice Principles (FIPPs).
Over the last decade or so, other models have been introduced, including:
Calo’s subjective/objective dichotomy,
Solove’s taxonomy of privacy problems
Nissenbaum’s contextual integrity heuristic.
NIST’s Privacy Engineering Program promulgated its own privacy risk model.
There are likely diminishing marginal returns to integrating each new privacy risk model into an organization’s risk management framework; a privacy analyst should instead pursue a limited but advantageous combination. Both the S/OD and taxonomy of privacy problems models provide a set of potential adverse privacy events but do little to assist in finding the threats and corresponding vulnerabilities that could lead to these events. Combining either of these models with the contextual integrity heuristic could provide mechanisms for recognizing both vulnerabilities and events, leaving only threat identification as an exercise for the analyst.

23
Q

Compliance Privacy Risk Model

A

The compliance model is relatively straightforward; risks are delineated as the failure to do what is required or to avoid what is prohibited. Under the compliance model, identification and alignment of threats and vulnerabilities amounts to examining the elements of the system that relate to each specific legal or policy requirement (e.g., GDPR or HIPAA). To maintain a record of compliance, the privacy risk analyst can employ a traceability matrix.
Legal regimes, which include statutory and regulatory mandates at any level of government, usually prescribe or proscribe certain aspects of a system in terms of what data it contains, what the system does with that data and how the system protects that data.
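
A minimal sketch of such a traceability matrix in Python; the requirement IDs and system element names are hypothetical:

```python
# Map each legal or policy requirement to the system elements that
# satisfy it; an empty mapping flags a potential compliance gap.
trace_matrix: dict[str, list[str]] = {
    "GDPR-Art17-erasure": ["account_service.delete_user", "backup_purge_job"],
    "HIPAA-164.312-access-control": ["auth_middleware", "audit_logger"],
}

def untraced(requirements: list[str]) -> list[str]:
    """Requirements with no implementing element -- candidate compliance risks."""
    return [r for r in requirements if not trace_matrix.get(r)]

# A requirement absent from the matrix surfaces immediately:
print(untraced(["GDPR-Art17-erasure", "CCPA-1798.105-deletion"]))
# -> ['CCPA-1798.105-deletion']
```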

24
Q

Fair Information Practice Principles (FIPPs) Privacy Risk Model

A

The FIPPs are a collection of widely accepted principles that agencies use when evaluating information systems, processes, programs and activities that affect individual privacy. The FIPPs are not requirements; rather, they are principles that should be applied by each agency according to the agency’s particular mission and privacy program requirements. A closely related set of principles was published in 1980 by the Organization for Economic Cooperation and Development (OECD), and a number of countries agreed upon them in principle. Several of the principles listed in the FIPPs appear in important privacy frameworks such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). The FIPPs mostly prescribe, and in some cases proscribe, specific qualities and behaviors of systems that handle personal information. However, because the FIPPs sit at a higher level of abstraction than legal and policy strictures typically do, and because most of the principles are relative to the purpose of the system, significant interpretation by analysts and developers is necessary to determine how the FIPPs should manifest in a given system.

25
Q

Subjective/Objective Dichotomy (S/OD) Privacy Risk Model

A

Ryan Calo’s subjective/objective dichotomy (S/OD) focuses on privacy harms, which he argues fall into two categories: Subjective harms are grounded in individual perception (irrespective of its accuracy) of unwanted observation, while objective harms arise out of external actions that include the unanticipated or coerced use of that person’s information. Subjective privacy harms amount to discomfort and other negative feelings, while objective privacy harms are actual adverse consequences. To assess the potential for subjective and objective harm, an analyst may examine elements of the system that relate to individuals’ expectations of how their information may be used, actual usage—including surveillance or tracking—and consent or lack thereof to the collection and use of that information.

26
Q

Solove’s Taxonomy of Privacy Problems Privacy Risk Model

A

The taxonomy consists of 16 distinct privacy problems, organized into four categories: information collection, information processing, information dissemination, and invasions (intrusion and decisional interference). https://wiki.openrightsgroup.org/wiki/A_Taxonomy_of_Privacy

27
Q

Contextual Integrity Heuristic Privacy Risk Model

A

Contextual integrity is defined as maintaining personal information in alignment with informational norms that apply to a particular context. The contextual integrity heuristic posits that privacy problems arise out of disruptions to these informational norms. Contexts are socially constructed settings characterized by, among other things, norms or rules and internal values in the form of purposes or goals. Context-relative informational norms involve actors (information senders, recipients and subjects), attributes (information types) and transmission principles that govern the flows of information. When an IT system violates or otherwise disrupts a context’s informational norms, this can result in a perceived privacy problem. Using the contextual integrity heuristic entails analysis to surface what norms govern a given context.

28
Q

NIST Privacy Risk Model

A

The specifics of NIST’s privacy risk model are embedded in its Privacy Risk Assessment Methodology (PRAM). In NIST’s model, vulnerabilities are problematic data actions. These describe system behaviors with privacy implications that, while they may be authorized, create the potential for adverse events. These adverse events are prosaically termed problems for individuals and represent how problematic data actions may affect individuals in negative ways.
While not necessarily exhaustive, NIST’s catalog of problematic data actions is wide ranging:
- Appropriation occurs when personal information is used in ways beyond what is expected or authorized by the individual
- Distortion involves the use or dissemination of inaccurate or misleading personal information
- Induced disclosure takes place when individuals are pressured to provide personal information
- Insecurity involves lapses in data security
- Surveillance occurs when personal information is tracked or monitored out of proportion to system objectives
- Unanticipated revelation is unexpected exposure of facets of an individual as a result of processing
- Unwarranted restriction involves imposition of unjustified constraints on individuals regarding access to the system and its information as it relates to them
The catalog of problems for individuals is equally expansive:
- Loss of autonomy involves self-imposed restrictions on behavior
- Exclusion denies individuals knowledge about their personal information or the ability to act upon that knowledge
- Loss of liberty improperly raises the possibility of arrest or detainment
- Physical harm is direct bodily harm to an individual
- Stigmatization links information to an identity so as to stigmatize the person associated with that identity
- Power imbalance enables abusive or unfair treatment of an individual
- Loss of trust can result from violations of implicit or explicit expectations or agreements regarding the treatment of personal information
- Economic loss involves direct or indirect financial loss
29
Q

Risk Management Framework Steps

A
  1. Characterization
    Describe the purpose of the system, the primary actors, how data flows through the system and what technologies are in place
  2. Threat, vulnerability and event identification
    Use privacy risk models to identify and analyze threats, vulnerabilities and events
  3. Risk assessment
    Assign likelihoods and impacts to previously identified events, which yields risks. Likelihood may be expressed as an ordinal value (low, medium, high) or as a numerical value (0.0–1.0). Likelihood is sometimes assumed to mean the probability that the vulnerability would be exploited to yield the given event. Where a specific threat can exploit a specific vulnerability, the likelihood associated with that risk is significant; where a vulnerability does not align with any specific threat, the likelihood may be less significant.
  4. Risk response determination
    1. Accept the risk
    2. Transfer the risk (if other entities can do a better job managing the risk, transferring it may be the best option)
    3. Mitigate the risk
    4. Avoid the risk
  5. Risk control implementation
    1. Administrative controls
    2. Technical controls
    3. Physical controls
  6. Monitor and review
    1. Periodic reviews
    2. Review before a change is initiated
Example of a risk management framework: ISO 31000
30
Q

Engineering for Privacy Requirements

A

Requirements describe constraints on software systems and their relationship to precise specifications that change over time and across software families. Any changes to reengineer a software application in response to a known privacy threat will be more costly than addressing the privacy threat during requirements engineering. Thus, it is critical for engineers to capture privacy requirements by participating in the community of practice and monitoring existing sources of information about privacy.

31
Q

Types of engineering requirements

A
  1. Functional requirements
  2. Non-functional requirements

Requirements can be written to be reusable across multiple systems. Reusable repositories of privacy requirements allow the area specialist to coordinate standard approaches across their organization and to discern strategies for handling exceptions, such as a novel technology that deviates from traditional norms. In addition to textual documentation, privacy requirements may be specified using visual models. These include process diagrams, information flow diagrams, role and permission matrices and state diagrams, to name a few. These models serve to make relationships between the objects of discourse (actors, data, systems and processes) explicit and enable additional analysis.

32
Q

Sources of Privacy Requirements

A

Privacy requirements may be acquired from multiple, diverse sources. This includes eliciting requirements from stakeholders using interviews, case studies and focus groups, as well as extracting or mining text documents—such as contracts, standards, laws, newspapers and blogs—for requirements. There are standard elicitation techniques for working with subject-matter experts, and established interview and survey techniques for conducting surveys and focus groups. In addition to elicitation, privacy standards, such as the FIPPs and the NIST privacy control catalog, serve as a source of requirements.

33
Q

Managing Privacy Requirements

A

The requirements engineer uses trace matrices for encoding relationships between requirements and other software artifacts. Each trace link has a special type that describes the meaning of the link. Separate trace matrices are also used to trace requirements to downstream artifacts, such as software designs, source code and test cases. In privacy, trace matrices should also trace requirements to user agreements, such as privacy policies, terms of use (ToU) agreements, end-user license agreements (EULA) and so on. Whenever a requirement or IT system component changes, the trace matrices should be consulted to determine the impact of the change on other parts of the system, including privacy policies.
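
A minimal sketch of typed trace links and a change-impact query; the artifact names and link types are hypothetical:

```python
from collections import defaultdict

# Each trace link carries a type describing its meaning, including
# links from requirements to user agreements such as privacy policies.
links: list[tuple[str, str, str]] = [
    ("REQ-12 data minimization", "implements", "signup_form.py"),
    ("REQ-12 data minimization", "tested_by", "test_signup_fields"),
    ("REQ-12 data minimization", "stated_in", "privacy_policy#collection"),
]

by_artifact = defaultdict(list)
for req, link_type, artifact in links:
    by_artifact[artifact].append((req, link_type))

def impact_of_change(artifact: str) -> list[tuple[str, str]]:
    """Which requirements (and via which link types) a changed artifact touches."""
    return by_artifact[artifact]

# If signup_form.py changes, REQ-12 -- and the privacy policy statement
# traced to it -- should be re-checked.
print(impact_of_change("signup_form.py"))
```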

34
Q

Analysing Privacy Requirements

A

Requirements analysis describes activities to identify and improve the quality of requirements by analyzing the system and deployment environment for completeness and consistency. This includes identifying relevant stakeholders to ensure no one was overlooked and examining user stories and requirements for ambiguities, conflicts and inconsistencies.

35
Q

Privacy Completeness Arguments

A

Completeness arguments can be constructed for a specific data life cycle, wherein the argument asserts that every step in the data life cycle was visited for a particular data element or dataset. At each step in the data life cycle for a specific data type, the engineer considers whether the data type requires special consideration.
Some omissions lead to unwanted behavior, such as privacy harms. To improve requirements quality in general, construct completeness arguments that ensure limited aspects of the requirements are complete. These arguments are built using stepwise analyses to ensure that a finite list of concerns has been reviewed in its entirety. For privacy, this includes reviewing requirements trace matrices to ensure that all privacy standards, guidelines and laws have been traced to a requirement. Completeness arguments can be used to cover every step in the data life cycle for especially sensitive data types and to expand one’s interpretation of a privacy law or regulation. Completeness arguments can also be constructed for privacy policies, wherein the argument determines whether tracing is complete from privacy policy statements to the software artifacts that implement those statements.
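
A sketch of one such stepwise check, verifying that every life-cycle step for a sensitive data type is covered by at least one requirement; the coverage table and requirement IDs are hypothetical:

```python
# The finite list of concerns to review in its entirety.
LIFE_CYCLE = ("collection", "processing", "disclosure", "retention", "destruction")

# (data type, life-cycle step) -> requirements covering that step.
coverage = {
    ("health_record", "collection"): ["REQ-3"],
    ("health_record", "processing"): ["REQ-4"],
    ("health_record", "retention"): ["REQ-9"],
    # disclosure and destruction deliberately left uncovered
}

def gaps(data_type: str) -> list[str]:
    """Life-cycle steps with no requirement -- where the completeness argument fails."""
    return [step for step in LIFE_CYCLE if not coverage.get((data_type, step))]

print(gaps("health_record"))  # ['disclosure', 'destruction']
```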

36
Q

Threat Modelling

A

IT developers use threat modeling to identify risks to the system based on concrete scenarios. Threat modeling considers the negative outcomes enabled by a particular threat agent or type of agent. There are several examples of security threat analysis artifacts and methods available to the engineer, such as anti-goals and misuse and abuse cases, that can be adapted to privacy.

37
Q

Anti-Goal Analysis

A

Anti-Goals are an attacker’s own goals or malicious obstacles to a system. Goal-oriented analysis to identify anti-goals begins with the engineer identifying the system’s positive goals, before identifying anti-goals that describe how an attacker could limit the system’s ability to maintain or achieve the positive goals.
Steps: (1) identify the anti-goals that obstruct relevant privacy goals, such as confidentiality and integrity goals; (2) identify the attacker agents who would benefit from each anti-goal and (3) for each attacker agent and anti-goal pair, elicit the attacker’s higher-level goal that explains why they would want to achieve this anti-goal. (This step continues in a series of how and why questions to elaborate the anti-goal graph.) The final steps are: (4) derive anti-models that identify the attacker, object of the attack and anti-goals and (5) operationalize the anti-model in terms of potential capabilities that the attacker agent may use in this scenario.

38
Q

Misuse and abuse Analysis

A

For IT developers preferring a lighter-weight method, misuse and abuse cases were developed to adapt the existing use case methodology to describe negative intents. Similar to anti-goals, an abuse case describes a complete interaction between a user and the system that results in a harmful outcome. Although misuse and abuse cases were originally developed to describe security attacks, the same notation can be used to describe potential privacy pitfalls.

39
Q

Tagging

A

Tagging involves adding labels or tags to data so that they can be easily categorized and analyzed by computers.
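
A minimal sketch of tagging in Python; the tag vocabulary is illustrative, not drawn from any standard:

```python
from dataclasses import dataclass, field

@dataclass
class TaggedValue:
    """A value carrying privacy labels so downstream code can filter on them."""
    value: object
    tags: set[str] = field(default_factory=set)

record = [
    TaggedValue("alice@example.com", {"pii", "contact"}),
    TaggedValue("2024-11-03", {"metadata"}),
]

# Categorize and analyze by tag, e.g., select PII fields for redaction.
pii_fields = [tv for tv in record if "pii" in tv.tags]
```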

40
Q

Static vs ephemeral cryptographic keys

A

Cryptographic keys may be either static (designed for long-term use) or ephemeral (designed to be used only for a single session or transaction). The crypto-period (i.e., lifetime) of static keys may vary from days to weeks, months or even years, depending on what they are used for. In general, the more a key is used, the more susceptible it is to attack, and the more data is at risk should it be revealed, so it is important to ensure keys are replaced when required (a process called updating or cycling).
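
A sketch of a crypto-period check; the 90-day period is an arbitrary example, and real cycling policy depends on the key's use and the sensitivity of the data it protects:

```python
from datetime import datetime, timedelta, timezone

# Arbitrary example crypto-period for a static key.
CRYPTO_PERIOD = timedelta(days=90)

def needs_cycling(created_at: datetime, now: datetime | None = None) -> bool:
    """True if a static key has outlived its crypto-period and should be replaced."""
    now = now or datetime.now(timezone.utc)
    return now - created_at > CRYPTO_PERIOD

key_created = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(needs_cycling(key_created))  # True once 90 days have passed
```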

41
Q

OECD Privacy Principles

A

Collection Limitation Principle
Data Quality Principle
Purpose Specification Principle
Use Limitation Principle
Security Safeguards Principle
Openness Principle
Individual Participation Principle
Accountability Principle
http://www.oecdprivacy.org/#participation

42
Q

k-anonymity

A

The concept of k-anonymity was introduced into information security and privacy back in 1998. It’s built on the idea that by combining sets of data with similar attributes, identifying information about any one of the individuals contributing to that data can be obscured. k-Anonymization is often referred to as the power of “hiding in the crowd.” Individuals’ data is pooled in a larger group, meaning information in the group could correspond to any single member, thus masking the identity of the individual or individuals in question.
The k in k-anonymity refers to a variable—think of the classic x in your high school algebra class. In this case, k refers to the number of times each combination of values appears in a data set. If k=2, the data is said to be 2-anonymous. This means the data points have been generalized enough that there are at least two occurrences of every combination of values in the data set. For example, if a data set features the locations and ages of a group of individuals, the data would need to be generalized to the point that each age/location pair appears at least twice. Note that k-anonymity does not provide an absolute guarantee of privacy protection.
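
A sketch that measures the k of a small table over a chosen set of quasi-identifiers; the rows and attribute names are made up for illustration:

```python
from collections import Counter

def k_of(dataset: list[dict], quasi_identifiers: list[str]) -> int:
    """Smallest group size over the quasi-identifier combinations.

    The dataset is k-anonymous for the returned k: every combination
    of quasi-identifier values appears at least k times.
    """
    counts = Counter(tuple(row[q] for q in quasi_identifiers) for row in dataset)
    return min(counts.values())

rows = [
    {"age": "20-30", "city": "Leuven"},
    {"age": "20-30", "city": "Leuven"},
    {"age": "30-40", "city": "Ghent"},
    {"age": "30-40", "city": "Ghent"},
]
print(k_of(rows, ["age", "city"]))  # 2 -> the table is 2-anonymous
```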

43
Q

WORM Media

A

In computer media, write once, read many, or WORM, is a data storage technology that allows data to be written to a storage medium a single time and prevents the data from being erased or modified. Data stored on a WORM-compliant device is considered immutable; authorized users can read the data as often as needed, but they cannot change it.
There are no specific requirements for how it is delivered, except for the following three rules:
The data can be written only one time.
It must be immutable.
Authorized users must be able to read the data as often as needed.
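
A sketch of WORM semantics as an application-level guard; real WORM guarantees come from the storage medium or device firmware, not from code like this:

```python
class WormStore:
    """In-memory sketch of write once, read many semantics."""

    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}

    def write(self, key: str, value: bytes) -> None:
        # Rule 1: the data can be written only one time.
        if key in self._data:
            raise PermissionError(f"{key!r} already written; WORM data is immutable")
        self._data[key] = value

    def read(self, key: str) -> bytes:
        # Rule 3: reads may repeat as often as needed.
        return self._data[key]

store = WormStore()
store.write("audit/2024-01", b"log entries")
store.read("audit/2024-01")
# store.write("audit/2024-01", b"tampered")  # would raise PermissionError
```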

44
Q

Record management best practices

A

Create a records management strategy to capture, record, track and monitor
Regular review of document inventory
Adhere to document retention lifecycles and destroy records when necessary
https://www.archive-vault.co.uk/best-practice-for-records-management

45
Q

Federated Identity Management

A

Federated identity management is the practice of outsourcing authentication not just to a different service within an organization, but to a different organization. Federated identity management can go beyond just authenticating users: Identity providers can also supply service providers with attributes that describe users.
Federated identity management also has other advantages, for both users and service providers: Users no longer have to remember multiple sets of authentication credentials, and service providers are relieved of the burden of implementing authentication and protecting users’ authentication information. This comes with a cost, however. Every time a service provider needs to authenticate a user, the identity provider will be consulted; hence, the identity provider learns all the service providers each user visits as well as in what order, at what times of day and from what locations. This can add up to a far more detailed picture of a user’s behavior and preferences than users may be comfortable sharing with an identity provider—clearly a privacy concern.

46
Q

Identity provider

A

The service that authenticates users is called the identity provider (IdP)

47
Q

Service provider

A

The services that rely on the IdP to authenticate users are called service providers (SPs).

48
Q

Cross-enterprise authentication and authorization

A

In cross-enterprise authentication and authorization, two enterprises may each run their own identity provider, primarily intended to authenticate users within each enterprise. For business reasons, however, the two (or more) enterprises may decide to trust each other’s identity providers. This enables company A to allow users from company B to access A’s computer systems while relying on B’s identity provider to authenticate those users. Unlike the case in typical uses of federated identity management, these identity providers would be configured to work only with service providers from the two enterprises. In a cross-enterprise setting, it is also more common for identity providers to give service providers information about the roles and other attributes of authenticated users.

49
Q

OAuth

A

OAuth is a technical standard for authorizing users. It is a protocol for passing authorization from one service to another without sharing the actual user credentials, such as a username and password. With OAuth, a user can sign in on one platform and then be authorized to perform actions and view data on another platform.
OAuth makes it possible to pass authorization from one application to another regardless of what the two applications are. OAuth is one of the most common methods used to pass authorization from a single sign-on (SSO) service to another cloud application, but it can be used between any two applications. Other protocols can perform this function as well, although OAuth is one of the most widely used ones.
https://www.cloudflare.com/en-gb/learning/access-management/what-is-oauth/
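
A sketch of the OAuth 2.0 authorization-code flow; the endpoints, client ID and secret below are hypothetical placeholders, while the parameter names come from RFC 6749:

```python
import secrets
from urllib.parse import urlencode

import requests  # third-party; pip install requests

# Hypothetical identity provider and client registration.
AUTHORIZE_URL = "https://idp.example.com/oauth/authorize"
TOKEN_URL = "https://idp.example.com/oauth/token"
CLIENT_ID = "my-client-id"
CLIENT_SECRET = "my-client-secret"
REDIRECT_URI = "https://app.example.com/callback"

# Step 1: send the user's browser to the identity provider to sign in.
state = secrets.token_urlsafe(16)  # CSRF protection, checked on return
login_url = AUTHORIZE_URL + "?" + urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "profile",
    "state": state,
})

# Step 2: after the user approves, the IdP redirects back with ?code=...;
# the app exchanges that code for an access token without ever seeing
# the user's password.
def exchange_code(code: str) -> str:
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]
```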
