Responsible AI Flashcards
HIC
High-Income Countries
LMIC
Low- and Middle-Income Countries
Artificial Intelligence (AI)
The ability of algorithms encoded in technology to learn from data so that they can perform automated tasks without every step in the process having to be programmed explicitly by a human.
6 key ethical principles for the use of AI for health
- Protecting human autonomy
- Promoting human well-being and safety and the public interest
- Ensuring transparency, explainability and intelligibility
- Fostering responsibility and accountability
- Ensuring inclusiveness and equity
- Promoting AI that is responsive and sustainable
Protecting human autonomy
One of the 6 key ethical principles for the use of AI for health that stipulates that:
the use of AI or other computational systems does not undermine human autonomy, i.e. that humans remain in control of health-care systems and medical decisions.
providers have the information necessary to make safe, effective use of AI systems and that people understand the role such systems play in their care.
privacy and confidentiality are protected and valid informed consent is obtained through appropriate legal frameworks for data protection.
Promoting human well-being and safety and the public interest
One of the 6 key ethical principles for the use of AI for health that stipulates that:
AI technologies should not harm people; they should not cause mental or physical harm that could be avoided by use of an alternative practice or approach.
Ensuring transparency, explainability and intelligibility
One of the 6 key ethical principles for the use of AI for health that stipulates that:
AI technologies should be intelligible or understandable to developers, medical professionals, patients, users and regulators.
Transparency requires that sufficient information be published or documented before the design or deployment of an AI technology and that such information facilitate meaningful public consultation and debate on how the technology is designed and how it should or should not be used.
AI technologies should be explainable according to the capacity of those to whom they are explained.
Fostering responsibility and accountability
One of the 6 key ethical principles for the use of AI for health that stipulates that:
AI stakeholders are responsible for ensuring that AI can perform its tasks and that AI is used under appropriate conditions and by appropriately trained people.
Responsibility can be assured by application of “human warranty”, which implies evaluation by patients and clinicians in the development and deployment of AI technologies. Human warranty requires application of regulatory principles upstream and downstream of the algorithm by establishing points of human supervision.
If something goes wrong with an AI technology, there should be accountability. Appropriate mechanisms should be available for questioning and for redress for individuals and groups that are adversely affected by decisions based on algorithms.
Ensuring inclusiveness and equity
One of the 6 key ethical principles for the use of AI for health that stipulates that:
AI for health be designed to encourage the widest possible appropriate, equitable use and access, irrespective of age, sex, gender, income, race, ethnicity, sexual orientation, ability or other characteristics protected under human rights codes. AI technologies should:
be available to meet needs in both HIC and LMIC.
avoid biases to the disadvantage of identifiable groups, especially groups that are already marginalized.
minimize inevitable disparities in power that arise between providers and patients, between policy-makers and people and between companies and governments that create and deploy AI technologies and those that use or rely on them.
be monitored and evaluated to identify disproportionate effects on specific groups of people.
Promoting AI that is responsive and sustainable
One of the 6 key ethical principles for the use of AI for health that stipulates that:
designers, developers and users continuously, systematically and transparently assess AI applications during actual use to determine whether AI responds adequately and appropriately to communicated, legitimate expectations and requirements.
AI systems should be designed to minimize their environmental consequences and increase energy efficiency.
Who are the primary stakeholders for responsible AI?
The development, adoption and use of AI require an integrated, coordinated approach among these stakeholders:
Gov’t health agencies - determine how to introduce, integrate and harness these technologies for the public good while restricting or prohibiting inappropriate use
Gov’t Regulatory agencies - validate and define whether, when and how such technologies are to be used
Gov’t Educational agencies - teach current and future health-care workforces how such technologies function and are to be integrated into everyday practice
Gov’t Information Technology - facilitate the appropriate collection and use of health data and narrow the digital divide
Government Legal systems - ensure that people harmed by AI technologies can seek redress
Non-Gov’t - medical researchers, scientists, health-care workers and, especially, patients
Technologists and software developers
Companies, universities, medical associations and international organizations
What are some examples where AI can improve the delivery of health care?
Prevention
Diagnosis and treatment of disease
Augment the ability of health-care providers to improve patient care
Optimize treatment plans
Support pandemic preparedness and response
Inform the decisions of health policy-makers or allocate resources within health systems
Empower patients and communities to assume control of their own health care and better understand their evolving needs
Enable resource-poor countries, where patients often have restricted access to health-care workers or medical professionals, to bridge gaps in access to health services.
Supervised learning
A subcategory of Machine Learning (ML) where data used to train the model are labelled (the outcome variable is known), and the model infers a function from the data that can be used for predicting outputs from different inputs.
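As a minimal sketch of the idea, the toy 1-nearest-neighbour classifier below infers a prediction function from labelled data (all values here are made up for illustration):

```python
# Minimal supervised-learning sketch: a 1-nearest-neighbour classifier.
# The training data are labelled (each input has a known outcome), and the
# inferred "function" predicts the label of the closest training point.

def predict(train, x):
    """Return the label of the training example nearest to x."""
    nearest = min(train, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

# Labelled training data: (input, known outcome)
train = [(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")]

print(predict(train, 1.5))  # → low
print(predict(train, 8.5))  # → high
```

The labels are what make this supervised: the model's quality can be judged directly against known outcomes.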
Unsupervised learning
A subcategory of Machine Learning (ML) that does not involve labelling the data (unlike supervised learning) but instead identifies hidden patterns in the data.
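A minimal sketch of the idea: k-means clustering with two centres on unlabelled 1-D data (the data and starting centres are illustrative only):

```python
# Minimal unsupervised-learning sketch: k-means with k=2 on unlabelled data.
# No outcomes are given; the algorithm discovers a hidden grouping itself.

def kmeans_1d(data, centers, iters=10):
    """Cluster 1-D points by alternating assign/update steps."""
    for _ in range(iters):
        clusters = [[], []]
        for x in data:
            # Assign each point to its nearest current centre.
            idx = min(range(2), key=lambda i: abs(x - centers[i]))
            clusters[idx].append(x)
        # Move each centre to the mean of its assigned points.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

data = [1.0, 1.2, 0.8, 8.0, 8.3, 7.7]
print(kmeans_1d(data, [0.0, 5.0]))  # centres converge near 1.0 and 8.0
```

Nothing told the algorithm there were two groups of patients, readings, or images; the structure emerged from the data alone.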
Reinforcement learning
A subset of Machine Learning (ML) in which a machine learns by trial and error, being “rewarded” or “penalized” depending on whether its inferences advance or hinder achievement of an objective.
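A minimal sketch of the trial-and-error loop, using a toy two-armed bandit (the reward scheme and learning rate are arbitrary choices for illustration):

```python
import random

# Minimal reinforcement-learning sketch: a two-armed bandit learned by
# trial and error. Arm 1 is "rewarded" (+1), arm 0 is "penalized" (-1);
# the agent's value estimates converge toward the better action.

random.seed(0)
values = [0.0, 0.0]          # estimated value of each action
alpha = 0.1                  # learning rate

for step in range(200):
    # Explore randomly 10% of the time, otherwise exploit the best estimate.
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: values[a])
    reward = 1 if action == 1 else -1
    values[action] += alpha * (reward - values[action])

print(max(range(2), key=lambda a: values[a]))  # → 1 (the rewarded arm)
```

No labelled examples are ever provided; the reward signal alone steers the agent toward the objective.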
Deep learning or Deep structured learning
A subcategory of Machine Learning (ML) that is based on the use of multi-layered models to progressively extract features from data. Deep learning can be supervised, unsupervised or semi-supervised. Deep learning generally requires large amounts of data to be fed into the model.
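As a minimal sketch of the multi-layered idea, the toy network below passes an input through two dense layers with a ReLU nonlinearity in between; the weights are fixed arbitrary values for illustration, whereas a real deep model would learn them from large amounts of data:

```python
# Minimal deep-learning sketch: a forward pass through a multi-layered model.
# Each layer applies weighted sums then a nonlinearity, progressively
# transforming the raw input into higher-level features.

def relu(v):
    return [max(0.0, x) for x in v]

def layer(weights, inputs):
    """One dense layer: a weighted sum of the inputs per output unit."""
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

# A tiny two-layer network: 2 inputs -> 3 hidden units -> 1 output.
# These weights are illustrative, not learned.
w1 = [[0.5, -0.2], [0.3, 0.8], [-0.6, 0.1]]
w2 = [[1.0, -1.0, 0.5]]

hidden = relu(layer(w1, [1.0, 2.0]))   # first layer extracts features
output = layer(w2, hidden)             # second layer combines the features
print(output)                          # a 1-element output vector
```

Stacking more such layers is what makes the model “deep”; training would adjust `w1` and `w2` by gradient descent, which this sketch omits.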