Transparency Flashcards

1
Q

What is meant by opacity?

A

In relation to AI or computer programs, opacity means that we do not know what (exactly) they do, why they do it, or how they work.

2
Q

Name three kinds of opacity as specified by Burrell.

A
  1. Opacity due to secrecy or confidentiality: companies may deliberately keep their systems opaque in the interest of the secrecy of their business processes.
  2. Opacity due to computer illiteracy: most people are not literate when it comes to computers and the digital world; understanding computer systems and how code works currently lies in the hands of a small number of people in the world.
  3. Algorithmic opacity: algorithms, especially machine learning models, are often opaque by their very nature. From the intermediate steps of the algorithm, such as a hidden layer of a neural network, we can often not extract human-interpretable patterns (see the sketch below).
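
A minimal sketch of this third kind of opacity in Python (assuming numpy and scikit-learn are installed; the XOR task and the network size are illustrative choices, not taken from Burrell): even for a tiny trained network, the learned hidden-layer weights are just a matrix of numbers from which no human-readable rule can be read off.

import numpy as np
from sklearn.neural_network import MLPClassifier

# A tiny network trained on XOR, a toy stand-in for an opaque learned system.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])
model = MLPClassifier(hidden_layer_sizes=(4,), solver="lbfgs",
                      max_iter=5000, random_state=0)
model.fit(X, y)

print(model.predict(X))  # the input-output behavior is easy to observe...
print(model.coefs_[0])   # ...but the hidden-layer weights that produce it
                         # reveal no human-interpretable pattern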
3
Q

What is meant by the ecosystem roles of creators, executors, operators, data subjects, decision subjects, and examiners in relation to data-driven decision-making?

A

Creators: develop the system
Executors: make data-driven decisions based on the system’s outputs
Operators: enter inputs and receive outputs (e.g. bank employees)
Data subjects: provide data for training (e.g. previous customers)
Decision subjects: receive the decisions
Examiners: evaluate decisions against relevant norms (e.g. regulators)

4
Q

Explain how creators, executors, examiners, and decision subjects are affected by opacity.

A
  • Creators have to develop and maintain a system. They select the architecture, the algorithms, and the data, but the way the input-output mapping is done is not designed by them.
  • Executors consider the system’s outputs in order to make well-reasoned decisions. If the system is opaque, they do not know “why the system does what it does” and thus cannot use its output to make well-reasoned decisions, which reduces the quality of data-driven decisions.
  • If examiners do not know “why” the system does what it does, they cannot determine whether a software tool, for example, “punishes” individuals with certain attributes such as a specific gender or a particular ethnic or racial background, which hinders regulatory oversight.
  • Decision subjects are (usually) laypeople who are affected by data-driven decisions. They must agree to being affected in this way and must know how to bring about different decisions in the future. They want to know “why the decision was made” so that they can be sure the decision was fair, and so that they can (if possible) change things in the future. For them, opacity leads to reduced autonomy and trust.
5
Q

Name four examples of effects of opacity that are ethical in nature.

A
  1. AI systems that perform poorly can be unsafe.
  2. Bad data-driven decisions might be biased.
  3. A lack of regulatory oversight can lead to a lack of accountability.
  4. AI systems that leave users unsatisfied and unsure of what to do are untrustworthy.

Keywords: safety, bias, accountability, trust

6
Q

Summarize three different methods that could increase the transparency of technology, as stated in the paper.

A

Education in digital literacy, computer science, data science, and evidence-based methods may give non-expert stakeholders a greater appreciation of how their data can be used.

Regulatory instruments can mandate insight into the workings of relevant software tools. For example, it should be possible to receive an explanation for a decision that was made using an AI system in a socially critical application, such as a mortgage application.

Technical solutions might be able to give different stakeholders greater insight into the inner workings of opaque systems.

7
Q

What type of transparency-giving method is explainable AI? Also provide a description of ‘explainable AI’ and give examples of how it might be implemented in practice.

A

Explainable AI is a technical transparency-giving method. Explainable AI (XAI) refers to artificial intelligence systems that can provide clear and transparent explanations for their decisions and actions. This could either be through the use of interpretable systems or by applying post-hoc analysis to explain opaque systems’ behavior.

Examples:

  1. Linear approximation methods can approximate an opaque system’s behavior, allowing us to predict its decision on the basis of just a few inputs. This might be used to give a decision subject a simple story of “why” a certain decision was reached (see the first sketch below).
  2. Saliency maps can help us interpret a system’s behavior by helping us recognize meaningful patterns that the system has learned to track. They can tell creators, executors, or examiners that a system is focusing on certain features rather than others, answering questions about “why” the system does what it does (see the second sketch below).

(activation-maximization and counterfactual explanations are also examples)
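
Below are two minimal sketches of these ideas in Python (assuming numpy and scikit-learn are installed; all function and variable names, such as black_box and score, are illustrative assumptions rather than the API of any particular XAI library).

First, a local linear surrogate in the spirit of LIME: fit a simple linear model to perturbed copies of a single input in order to approximate the opaque model’s behavior around that one decision.

import numpy as np
from sklearn.linear_model import Ridge

def black_box(X):
    # Stand-in for an opaque model's scoring function.
    return np.sin(X[:, 0]) + X[:, 1] ** 2

x0 = np.array([0.5, 1.0])                            # the decision to explain
rng = np.random.default_rng(0)
X_local = x0 + rng.normal(scale=0.1, size=(500, 2))  # perturbations around x0

# The surrogate's coefficients give a simple local "why" story for x0.
surrogate = Ridge().fit(X_local, black_box(X_local))
print(surrogate.coef_)

Second, a saliency map computed via finite differences: estimate how sensitive the opaque model’s score is to each input pixel, so that high-saliency pixels mark the features the system is focusing on.

import numpy as np

def score(img):
    # Stand-in opaque model: responds most strongly to the image center.
    weights = np.outer(np.hanning(img.shape[0]), np.hanning(img.shape[1]))
    return float((img * weights).sum())

img = np.random.default_rng(0).random((8, 8))
eps = 1e-4
saliency = np.zeros_like(img)
for i in range(img.shape[0]):
    for j in range(img.shape[1]):
        bumped = img.copy()
        bumped[i, j] += eps
        saliency[i, j] = (score(bumped) - score(img)) / eps  # d(score)/d(pixel)

print(np.round(saliency, 2))  # large values mark pixels the model focuses on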

Keywords: Technical, explanation, interpretability, post-hoc analysis, approximation, saliency-maps
