Module 4 Flashcards
Use AI Responsibly
What is responsible AI?
It is the principle of developing and using AI ethically with the intent of benefiting people and society, while avoiding harm.
What can affect the output from an AI tool?
The output from an AI tool can be affected by both systemic bias and data bias.
What is systemic bias?
A tendency upheld by institutions that favors or disadvantages certain outcomes or groups, existing within societal systems like healthcare, law, education, and politics.
What is data bias?
Data bias occurs when systematic errors or prejudices in the data lead to unfair or inaccurate information, resulting in biased outputs.
What is allocative harm?
An AI system’s use or behavior that withholds opportunities, resources, or information, affecting a person’s well-being.
Example: Misidentification in tenant screening leading to lost opportunities and financial loss.
What is quality-of-service harm?
An AI system’s performance that is significantly worse for certain groups of people based on their identity.
Example: Speech recognition technology failing for people with disabilities.
What is representational harm?
An AI system’s reinforcement of the subordination of social groups based on their identities.
Example: Gender-biased translations in language translation apps.
What is social system harm?
Macro-level societal effects that amplify existing disparities or cause physical harm due to AI development or use.
Example: Deepfakes influencing elections and public opinion.
What is interpersonal harm?
The use of technology to create a disadvantage for certain people, negatively affecting their relationships or sense of self.
Example: Sharing private information that leads to surveillance or loss of agency.
What is a deepfake?
AI-generated fake photos or videos of real people saying or doing things they didn’t do.
What is privacy in the context of AI?
The right for a user to control how their personal information and data are collected, stored, and used by AI systems.
What is security in the context of AI?
The practice of safeguarding personal information and private data from unauthorized access, use, or disclosure.
What are 3 measures needed to protect privacy and security?
- Be Aware of Terms of Use and Privacy Policies
- Avoid Inputting Personal or Confidential Information
- Stay Updated on the Latest Tools and Security Strategies
What is drift in AI models?
The decline in an AI model’s prediction accuracy due to changes over time that are not reflected in its training data; a common contributing factor is the model’s knowledge cutoff.
What is knowledge cutoff in AI?
The point in time up to which a model was trained; the model lacks knowledge of events or information that occurred after that date.