Lecture 8 Flashcards
Transparency
Refers to the willingness and ability of an organization developing or using an ML system to be transparent and share information with all its stakeholders about the whole ML pipeline (the data and its provenance, the annotation process, the algorithms used, the purpose, and the risks).
Explainability
Refers to the process of “providing information” about how the ML model reached a given output, often based on input data and/or features.
Examples of harms
* Asymmetrical concentration of power (think of Google, Facebook, Amazon, and Apple)
* Social sorting and discrimination
* Profiling and mass manipulation
* Minority erasure
Reliability
When a good thing happens: the “system” does what it is supposed to do, as it is “intended”.
Safety
When no bad things happen: the “system” and its usages do not cause “unintended” harm (including mental-health harm, discrimination, poor decision-making, etc.) to users, non-users, society, the environment, etc.
Accountability
The party “responsible” for a failure should be held accountable and bear the burden, e.g., repairing the resulting consequences and damages.
Liability
The harmed parties are owed a “duty of care”: indemnities, medical care, mending reputational damage, etc. This often depends on the nature of the harm but also on its causes.
What is the difference between ideal and actual ML transparency?
Ideal transparency:
* implies accountability
* ensures fairness and justice
* increases trust
Actual ML transparency:
- transparency is seriously lacking
- many papers and research projects address this topic
- often makes the news headlines
What is the difference between ideal explainability and actual explainability?
Ideal explainability:
* explains the outputs, including trust and uncertainty
* explains biases and helps alleviate them
* collaborates with and augments human knowledge and intelligence –> helps humans improve and converge on knowledge and intelligence
Actual explainability:
* involves a lot of debugging
* improves the model accuracy-wise
What is the difference between interpretability and explainability?
There is no clear answer, but:
Interpretability
* is dependent on the model architecture (model-specific)
* is computationally fast
* can be inferred (deduced, concluded) from the model alone
whereas:
Explainability
* focuses on model-agnostic methods (where the structure of the model is not taken into account; a black-box approach –> focuses solely on observable elements)
* is computationally intensive
* is based on feature values and predictions
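The model-agnostic, feature-and-prediction-based approach above can be sketched with permutation importance: shuffle one feature at a time and measure how much the prediction error grows, without ever looking inside the model. This is a minimal illustration; the black-box function and the data below are invented for the example.

```python
import random

# Hypothetical black box: we only get to call it, not inspect it.
# (Secretly it depends strongly on feature 0, weakly on feature 1,
# and not at all on feature 2.)
def black_box(x):
    return 3.0 * x[0] + 0.5 * x[1]

def mse(model, X, y):
    """Mean squared error of the model's predictions."""
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(model, X, y, feature, seed=0):
    """Model-agnostic: shuffle one feature column and report
    how much the prediction error grows as a result."""
    rng = random.Random(seed)
    column = [x[feature] for x in X]
    rng.shuffle(column)
    X_perm = [list(x) for x in X]          # copy rows
    for row, v in zip(X_perm, column):
        row[feature] = v                   # replace one feature
    return mse(model, X_perm, y) - mse(model, X, y)

rng = random.Random(42)
X = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
y = [black_box(x) for x in X]

# Importance per feature: error increase when that feature is shuffled.
scores = [permutation_importance(black_box, X, y, f) for f in range(3)]
```

Note that the explanation uses only observable elements (inputs and predictions): feature 0 comes out most important, feature 2 contributes nothing, exactly as the hidden model would suggest.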
Stakeholders
Who needs explainability, and who are we explaining to?
Three groups are most often proposed:
* Practitioners: data scientists and ML engineers
* Observers: business stakeholders and regulators
* Users: domain experts
Additional groups:
* Decision subjects: persons directly affected by ML usage
* Non-users: persons or groups directly or indirectly affected by ML usage
* Civil-society watchdogs
What are desiderata?
The desirable properties or characteristics that a model or algorithm should possess.
What will explainable AI (XAI) bring?
Explainability desiderata
Today, the machine learning process is opaque: it yields a learned function that performs a task or makes a decision or recommendation, but it is unclear to users what happened and why those choices were made.
In the future (with XAI), a new machine learning process will yield an explainable model with an explanation interface, ensuring that the user can understand what the model did (or did not do) and why, and what happened inside the model.
Why is XAI needed?
Model improvement
* the quality of the model will increase
Verify performance
* confirmation that the model behaves as expected
Build trust
* increased confidence in the reliability of the model
Remediation
* understanding what actions to take to alter a prediction
Understand model behaviour
* can be used to construct a simplified model in the user’s mind, which serves as a surrogate for understanding the model’s behaviour
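That “simplified surrogate” idea can be made literal with a global surrogate model: fit a simple, readable model to a black box’s behaviour and check how faithfully it tracks it. A minimal sketch, assuming an invented black-box function and a one-feature ordinary-least-squares line as the surrogate:

```python
import random

# Hypothetical black box whose behaviour we want to approximate.
def black_box(x):
    return 2.0 * x + 0.3 * x ** 2   # mildly nonlinear

rng = random.Random(0)
xs = [rng.uniform(-1, 1) for _ in range(500)]
ys = [black_box(x) for x in xs]

# Global surrogate: a line y ~ a*x + b fitted by ordinary least
# squares; "a" and "b" are directly readable by a human.
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
    / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

# Fidelity (R^2): how well the surrogate tracks the black box.
ss_tot = sum((y - my) ** 2 for y in ys)
ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
r2 = 1 - ss_res / ss_tot
```

A fidelity score close to 1 means the simple line is a trustworthy mental model of the black box on this data; a low score warns that the simplification hides important behaviour.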
Monitoring (and accountability)
* ongoing assessment that a model’s performance remains acceptable and compliant (with the eventual standards and regulations)