Week 6: Algorithms and the challenge to diversity Flashcards
How does Copson refer to the role of diversity in humanism in relation to discrimination?
Humanism requires that we organise society in a way that promotes the freedom, prosperity, creativity and fulfilment of all, regardless of class, colour, race, sex, or status.
What are algorithms?
a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.
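A minimal sketch of what such a "set of rules" looks like as code (a hypothetical illustration, not from the course material), using Euclid's classic algorithm for the greatest common divisor:

```python
# Euclid's algorithm: a finite, unambiguous set of steps that
# computes the greatest common divisor of two integers.
def gcd(a: int, b: int) -> int:
    while b != 0:          # repeat until no remainder is left
        a, b = b, a % b    # replace the pair with (b, a mod b)
    return a

print(gcd(48, 18))  # 6
```

Each step is fully specified in advance, which is what distinguishes an algorithm from informal problem-solving.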
For what can algorithms be used?
recommendation, personalisation, filtering, profiling, classification
Are algorithms only used by computers?
No, they also feature in daily life (e.g. when crossing the street or brushing your teeth).
Why is it important that all steps are carefully formulated and evaluated?
When an algorithm runs into problems, human beings can usually fall back on common sense to find a solution.
This is not the case for computers and AI.
It is therefore important that all steps are carefully formulated and evaluated.
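A small hypothetical example of why every step must be spelled out: a human asked to average some numbers would handle an empty list sensibly, but a computer only does what the steps say, so the edge case has to be formulated explicitly.

```python
def average(numbers):
    # Without this explicit step, the computer would divide by zero;
    # a human would catch this case by common sense.
    if not numbers:
        raise ValueError("cannot average an empty list")
    return sum(numbers) / len(numbers)

print(average([2, 4, 6]))  # 4.0
```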
(Mittelstadt et al) What is the main aim of the article?
Mapping out the different ethical concerns raised by algorithmic technologies. The authors identify how the ethics of algorithmic technologies can be further developed, and aim to bring greater clarity to the debate by separating concerns that are often treated as a cluster.
(Mittelstadt et al) What do they identify in their map of the ethics of algorithms?
In their map of the ethics of algorithms, Mittelstadt and his colleagues identify six types of ethical concerns about technologies that use algorithms.
(Mittelstadt et al) Which 6 types of ethical concerns do they have?
Three are epistemic, concerning the quality of the evidence on which the outcome is based: inconclusive evidence, inscrutable evidence, and misguided evidence.
Two are normative, concerning whether the actions or decisions made by the algorithm are fair and how they affect the way we understand the world: unfair outcomes and transformative effects.
The sixth, traceability, refers to how well we can identify the cause of, and responsibility for, harm done by algorithmic technologies.
What is relevant when analysing algorithms?
Relevant for our analysis today is the idea that certain ethical concerns about algorithmic technologies are related to the technology itself, while others arise from human bias.
(…)
The second is misguided evidence, which arises when users give the algorithm morally problematic input (‘garbage in, garbage out’).
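A hypothetical sketch of ‘garbage in, garbage out’: even a perfectly neutral rule (recommend the most frequent past decision) reproduces whatever bias is present in its input data.

```python
from collections import Counter

# Assumed, made-up historical data in which one outcome dominates.
biased_history = ["hire_men", "hire_men", "hire_men", "hire_women"]

def recommend(history):
    # The rule itself is neutral: return the most common past decision.
    return Counter(history).most_common(1)[0][0]

print(recommend(biased_history))  # 'hire_men' — biased input yields biased output
```

The algorithm does nothing "wrong" in a technical sense; the morally problematic outcome comes entirely from the input it was given.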
(Mittelstadt et al) What do they point out regarding ‘garbage out’ outcomes?
They point out that algorithms also affect the way we see the world by providing us with ways to categorise and conceptualise it.
The autocomplete function of Google, for example, does not just reflect the ways in which many people looked for information but also guides this process by suggesting what users may be looking for.
These suggestions present a way of seeing the world which is not morally neutral as shown by the UN Women campaign on oppressive autocomplete Google search suggestions.
(Noble) What does she argue?
She argues that search engines, such as Google, reinforce racism and sexism by using biased algorithms that perpetuate discrimination against marginalised groups in society.
Noble is concerned with the role of algorithms in masking and deepening social inequality.
Algorithms are often perceived as neutral, but they are shaped and influenced by the values and biases of their developers, as well as by the values and biases in the underlying datasets.
(Noble) Why are problematic outcomes of algorithmic technologies not glitches?
Noble argues that the persistent nature of these failures, and the fact that they repeatedly affect the same marginalised groups, points to a systematic issue, which she calls algorithmic oppression.
In this way, even neutral search terms related to such groups “offer up racism and sexism as the first results” (Noble, p. 5).
(Noble) What is her main argument?
Noble calls attention to how social groups are represented in distinct ways by information systems.
Wrongful, stereotypical and discriminatory portrayals may affect not only self-perception but also social beliefs and expectations, thereby contributing to (implicit) negative perceptions of members of particular social groups; these perceptions in turn influence decision-making and change the social order.
(Heinrichs) What are two ways of understanding discrimination?
1) as treating someone differently because they belong to a particular social group
2) as treating someone worse because they belong to a particular group, when we believe that this disadvantageous treatment cannot be justified by other relevant considerations.
(Heinrichs) What does he argue?
That the use of artificial intelligence and automated decision-making processes can make moral concerns about discrimination more pertinent.