AI Flashcards
Is AI currently regulated?
- While there are no UK laws explicitly written to regulate AI, it is partially regulated through a patchwork of legal and regulatory requirements built for other purposes, which now also capture uses of AI technologies.
- For example, UK data protection law includes specific requirements around ‘automated decision-making’ and the broader processing of personal data, which also covers processing for the purpose of developing and training AI technologies.
Why is AI ethics not enough?
- Not all uses of AI are savoury or built on palatable values.
- AI could become ‘god-like’ in nature: left to its self-proclaimed ethical safeguards, AI has been shown to be discriminatory and subversive (unfair algorithms; biased data produces biased results).
- Imposing mandatory rules on AI would help prevent the technology from infringing human rights. Regulation has the potential to ensure that AI has a positive, not a negative, effect on lives.
Transformer language model used to output text-based or image-based content
How adaptive is it?
How autonomous is it?
What are the potential AI-related regulatory implications?
Adaptive: transformer models have a large number of parameters, typically learned from data drawn from the public internet. This can harness the collective creativity and knowledge present online, enabling the creation of stories and rich, highly specific images on the basis of a short textual prompt (see the sketch below).
Autonomous: These models generate their output automatically, based on the text input, and produce impressive multimedia with next to no detailed instruction or ongoing oversight from the user.
Regulatory implications: security and privacy concerns from inferred training data; inappropriate or harmful language or content output; reproduction of biases or stereotypes present in the training data.
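To make the "short prompt in, rich output out" behaviour concrete, here is a minimal sketch using the open-source Hugging Face transformers library; the model choice (gpt2) and the prompt are illustrative assumptions, not taken from the source.

# Minimal sketch: generating text from a short prompt with a pretrained
# transformer, illustrating the low-oversight behaviour described above.
from transformers import pipeline

# Load a small pretrained language model (gpt2 is an illustrative choice).
generator = pipeline("text-generation", model="gpt2")

# A short textual prompt is the only instruction the user provides;
# the model produces the rest with no ongoing oversight.
prompt = "Once upon a time, a self-regulating robot"
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)

print(result[0]["generated_text"])

Running this yields a continuation of the prompt with no further user involvement, which is exactly the autonomy this card describes.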
Self-driving car control system
How adaptive is it?
How autonomous is it?
What are the potential AI-related regulatory implications?
Adaptive: These systems use computer vision and learn iteratively from real-time driving data to build a model capable of understanding the road environment and determining what actions to take in given circumstances (a simplified control-loop sketch follows this card).
Autonomous: These models directly control the speed, motion and direction of a vehicle.
Regulatory implications: safety and control risks if presented with unfamiliar input; assignment of liability for decisions in an accident or dispute; opacity regarding decision-making and a corresponding lack of public trust.
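As a rough illustration of the perceive-decide-act loop described above, here is a deliberately simplified sketch; every function in it is a hypothetical placeholder, not a real autonomous-driving API.

# Hypothetical perception -> decision -> actuation loop for a self-driving
# control system. All functions are illustrative placeholders; real systems
# involve many more sensors, models, and safety layers.

def get_camera_frame():       # placeholder: read sensor input
    ...

def detect_obstacles(frame):  # placeholder: computer-vision model
    ...

def plan_action(obstacles):   # placeholder: choose speed/steering
    ...

def apply_controls(action):   # placeholder: actuate the vehicle
    ...

def control_loop():
    while True:
        frame = get_camera_frame()           # perceive the road environment
        obstacles = detect_obstacles(frame)  # interpret it with a learned model
        action = plan_action(obstacles)      # decide speed, motion, direction
        apply_controls(action)               # act with no human in the loop

The regulatory questions above map directly onto this loop: unfamiliar input enters at perception, liability attaches to the decision step, and opacity concerns the learned model inside it.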
Who should lead AI standardisation?
Technical standardisation is taking the lead on the regulation of AI through associations like the IEEE and ISO, and through standards bodies such as NIST in the US and CEN, CENELEC, AFNOR, Agoria and Dansk Standard in Europe.
In these settings, one key issue is the extent of government involvement.
* Are politicians capable enough to understand and make complex decisions about how to regulate technology?
The application and optimisation of technical standards require collaboration between lawmakers, policymakers, academics and engineers, and the support of different stakeholder groups, such as corporations, citizens, and human rights groups. Without this balance, Big Tech lobbyists or geopolitics will have a disproportionate influence.
Who should lead the charge? Probably the EU and the US. However, they would have to agree and work together. How likely is this?
Why is global regulation a challenge?
Complexity and Rapid Advancement
Lack of Consensus: Different countries and regions have diverse perspectives, priorities, and values when it comes to AI regulation
Jurisdictional Issues
Balancing Innovation and Risk
Technical Complexity and Understanding
Cross-Sectoral Impact: AI has applications across many different sectors
Compliance and Enforcement
What is the EU's regulatory framework?
- The proposed rules will:
– address risks specifically created by AI applications;
– propose a list of high-risk applications;
– set clear requirements for AI systems for high risk applications;
– define specific obligations for AI users and providers of high risk applications;
– propose a conformity assessment before the AI system is put into service or placed on the market;
– propose enforcement after such an AI system is placed in the market;
– propose a governance structure at European and national level.
- The Regulatory Framework defines 4 levels of risk in AI (summarised in the sketch below):
- Unacceptable risk
- High risk
- Limited risk
- Minimal or no risk
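The tiered logic of the framework can be paraphrased as a simple lookup from risk level to regulatory consequence; the mapping below is an informal summary for study purposes, not legal text.

# Illustrative mapping of the EU framework's four risk tiers to their
# regulatory consequences, paraphrased from the proposal (not legal text).
RISK_TIERS = {
    "unacceptable": "banned outright (e.g. government social scoring)",
    "high": "strict obligations and conformity assessment before market entry",
    "limited": "specific transparency obligations (e.g. chatbots must disclose)",
    "minimal": "free use (e.g. AI-enabled video games, spam filters)",
}

def obligations_for(tier: str) -> str:
    """Look up the regulatory consequence for a given risk tier."""
    return RISK_TIERS[tier]

print(obligations_for("high"))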
How does the EU define unacceptable risk?
- Unacceptable risk: all AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned, from social scoring by governments to toys using voice assistance that encourages dangerous behaviour.
How does the EU define and mitigate high risk?
AI systems identified as high-risk include AI technology used in:
– critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
– educational or vocational training, that may determine access to education and the professional course of someone's life (e.g. scoring of exams);
– safety components of products (e.g. AI application in robot-assisted surgery);
– employment, management of workers and access to self-employment (e.g. CV-sorting software for recruitment procedures);
– essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan);
– law enforcement that may interfere with people's fundamental rights (e.g. evaluation of the reliability of evidence);
– migration, asylum and border control management (e.g. verification of the authenticity of travel documents);
– administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).
Mitigate:
- High-risk AI systems will be subject to strict obligations before they can be put on the market:
– adequate risk assessment and mitigation systems;
– high quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
– logging of activity to ensure traceability of results (see the sketch after this list);
– detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
– clear and adequate information to the user;
– appropriate human oversight measures to minimise risk;
– high level of robustness, security and accuracy.
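As a rough sketch of the activity-logging obligation, a system might record each decision together with its inputs so that results stay traceable; the decision function below is a hypothetical stand-in, and only Python's standard json and logging modules are used.

import json
import logging

# Minimal sketch of activity logging for traceability: each decision is
# recorded with its inputs and output so results can be reconstructed later.
logging.basicConfig(filename="ai_decisions.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

def decide(input_features: dict) -> str:
    """Hypothetical stand-in for a high-risk AI system's decision function."""
    decision = "approve" if input_features.get("score", 0) > 0.5 else "deny"
    # Log input and output together so every result is traceable.
    logging.info(json.dumps({"inputs": input_features, "decision": decision}))
    return decision

print(decide({"score": 0.7}))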
How does the EU define and mitigate limited risk?
- Limited risk: AI systems with specific transparency obligations. When using AI systems such as chatbots, users should be aware that they are interacting with a machine so they can make an informed decision to continue or step back.
Mitigate:
- Providers must meet specific transparency obligations, for example making users aware that they are interacting with an AI system such as a chatbot, so they can decide whether to continue.
How does the EU define and mitigate minimal or no risk?
- The proposal allows the free use of minimal-risk AI. This includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.
What was the USA's initial regulation in 2022?
- Rise of specific AI use cases:
– New York joined a number of states, including Illinois and Maryland, in regulating automated employment decision tools (AEDTs) that leverage AI to make, or substantially assist, candidate screening or employment decisions.
– Under New York's law, AEDTs must undergo an annual “bias audit” (see the sketch below), and the results of this audit need to be made publicly available.
- The Equal Employment Opportunity Commission (EEOC) launched an “algorithmic fairness” initiative in employment.
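New York's law does not fix a single audit method, but one common starting point is the adverse impact ratio: each group's selection rate divided by the highest group's rate, with values below 0.8 (the informal “four-fifths rule”) often treated as a red flag. The sketch and its sample counts below are illustrative assumptions, not from the source.

# Illustrative "bias audit" metric: the adverse impact ratio compares each
# group's selection rate to the highest group's rate. A ratio below 0.8
# (the informal "four-fifths rule") is often treated as a red flag.
# The counts below are made-up sample data.

selected = {"group_a": 45, "group_b": 30}    # candidates advanced by the AEDT
applicants = {"group_a": 100, "group_b": 100}

rates = {g: selected[g] / applicants[g] for g in applicants}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "POTENTIAL BIAS" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")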
State privacy laws concerning AI in 2023?
Consumer Rights for AI-Powered Decisions: essentially, state privacy laws will grant consumers opt-out rights when AI algorithms make high-impact decisions.
AI Transparency: proposed Colorado privacy regulations would require companies to include AI-specific transparency in their privacy policies. Privacy policies would need to list all high-impact “decisions” that are made by AI and are subject to opt-out rights.
AI Governance via Impact Assessments: when data processing presents a “heightened risk of harm to consumers,” companies must internally conduct and document a “data privacy impact assessment” (DPIA).
Proposed federal AI regulation?
- At the federal level, AI-focused bills have been introduced in Congress but have not gained significant support or interest.
- The Federal Trade Commission (FTC) has, however, taken an interest, producing several publications on how statutes it enforces capture AI. This means the following must occur (under the remit of the Fair Credit Reporting Act, the Equal Credit Opportunity Act, and the FTC Act):
– Make sure AI is trained using data sets that are representative and do not “miss information from particular populations.”
– Test AI before deployment, and periodically thereafter, to confirm it works as intended and does not create discriminatory or biased outcomes (a sketch of such a check follows below).
– Ensure AI outcomes are explainable, in case AI decisions need to be explained to consumers or regulators.
– Create accountability and governance mechanisms to document fair and responsible development, deployment, and use of AI.
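As one illustration of the “test before deployment” point, a team might compare positive-outcome rates across groups on a held-out test set and block release when the gap is too large; the sample predictions, the 0.1 threshold, and the pass/fail rule below are all illustrative assumptions.

# Sketch of a pre-deployment fairness check: compare positive-outcome
# rates between groups and fail the release if the gap is too large.
# Predictions and the 0.1 threshold are made-up illustrative values.

predictions = [  # (group, model_output) pairs from a held-out test set
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def positive_rate(group: str) -> float:
    """Share of positive model outcomes for one group."""
    outcomes = [y for g, y in predictions if g == group]
    return sum(outcomes) / len(outcomes)

gap = abs(positive_rate("group_a") - positive_rate("group_b"))
print(f"outcome gap between groups: {gap:.2f}")

# Block deployment if the disparity exceeds an agreed threshold.
if gap > 0.1:
    print("FAIL: disparity too large, do not deploy without review")
else:
    print("PASS: disparity within agreed threshold")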