Healthcare Compliance Risk Areas Flashcards
What Is Artificial Intelligence in Relation to Compliance Programs?
Artificial intelligence (AI) is simply the application of computer processing to simulate the actions of a person. One of the earliest AI systems was, in fact, a medical application called “MYCIN.” The program was designed to diagnose bacterial infections and recommend appropriate medications, with the dosage adjusted for the patient’s body weight. By current standards, MYCIN was quite primitive, using an inference engine with approximately 600 rules derived from interviews with expert human diagnosticians. MYCIN was originally written as part of a doctoral dissertation at Stanford University and was never used in actual medical practice for legal and ethical reasons (along with limitations of the technology of the day). But it formed the basis for continued experimentation and development. Aspects of AI are continuously evolving, but there are some basic terms that are worth understanding.
What is Machine Learning?
This is a subset of AI in which the computer’s algorithms (essentially the AI computer program) are able to modify the computer’s actions with the objective of improving through experience. In many settings (medicine, aviation, or automobiles, for example), learning through actual experience could be counterproductive. Imagine explaining that a number of airplane crashes happened because the airplane’s computer program had not yet learned to deal with unexpected turbulence. So, machine-learning systems are typically given what is called “training data” in order to learn how to function. Provided with data and outcomes, the software should be able to modify its processing to produce better, or more accurate, performance.
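To make the idea concrete, here is a minimal sketch of supervised machine learning, assuming the scikit-learn library and an invented toy data set. The model improves by fitting to labeled training data rather than by learning "live" in production.

```python
# Minimal supervised-learning sketch (assumes scikit-learn is installed).
# The model "learns" from labeled training data, and performance is then
# measured on held-out data it has never seen.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical historical records: [feature_1, feature_2] -> outcome (0 or 1)
X = [[0.2, 1.0], [0.9, 0.3], [0.4, 0.8], [0.8, 0.1], [0.1, 0.9], [0.7, 0.2]]
y = [0, 1, 0, 1, 0, 1]

# Hold out a third of the data so accuracy reflects unseen examples.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0, stratify=y
)

model = LogisticRegression()
model.fit(X_train, y_train)  # learning from the training data
print("accuracy on held-out data:", model.score(X_test, y_test))
```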
What is Rule-Based Machine Learning?
This involves systems that evolve a set of rules by which a program makes decisions. MYCIN, for example, had hundreds of rules. In a rule-based system, the program uses its experience to identify which rules are more or less useful and to modify the rules or the weights given to them to improve processing outcomes.
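A toy illustration of the idea, with hypothetical rules and weights (not MYCIN’s actual ruleset): each rule votes with a weight, and the weights are nudged up or down depending on whether the rule agreed with a known outcome.

```python
# Toy rule-based learner (rules and weights are hypothetical, not MYCIN's).
rules = {
    "fever_suggests_infection": lambda case: case["temp_f"] > 100.4,
    "high_wbc_suggests_infection": lambda case: case["wbc"] > 11.0,
}
weights = {name: 1.0 for name in rules}

def predict(case):
    """Weighted vote of all rules that fire on this case."""
    score = sum(weights[name] for name, rule in rules.items() if rule(case))
    return score >= 1.0  # True = flag as likely infection

def learn(case, actual):
    """Reward rules that agreed with the known outcome; penalize the rest."""
    for name, rule in rules.items():
        if rule(case) == actual:
            weights[name] += 0.1
        else:
            weights[name] = max(0.0, weights[name] - 0.1)

learn({"temp_f": 101.2, "wbc": 8.0}, actual=True)  # one labeled training case
print(weights)                                     # weights have shifted
print(predict({"temp_f": 102.0, "wbc": 12.5}))     # use the updated rules
```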
What are Deep Learning Systems?
These systems are generally characterized as having multiple layers of processing, with layers that move from general to specific analysis, often applied to large volumes of unstructured data. An example might be a system designed to read human handwriting. Experience tells us this isn’t easy, as there are as many variations in handwriting as there are people. But there are generalizations that can be used for preliminary analysis (for example, that a given character is uppercase), which can lead to deeper analysis to determine which character is being represented.
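As a rough sketch of that general-to-specific layering, assuming the TensorFlow/Keras library, a small network for 28x28 handwritten-digit images might look like the following; the layer sizes are illustrative.

```python
# Sketch of a layered network for handwritten digits (assumes TensorFlow).
# Early layers pick up general features (edges, strokes); later layers
# combine them into increasingly specific shapes, ending at a digit.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),          # a 28x28 grayscale character
    layers.Conv2D(16, 3, activation="relu"),  # general: edges and strokes
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),  # more specific shape fragments
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),   # specific: which digit, 0-9
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```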
What is Cognitive Computing?
This is generally thought of as an alternative name for AI. There is no widely accepted definition, but you may run into the term as a synonym for AI.
What is Computer Vision?
This is a subset of AI that focuses on how computers use digital images (still or video) in their processing. An assembly line for drug packaging can use computer vision technologies, for example, to inspect sterile vials of injectable medication to ensure that labels have been affixed and that the top is properly sealed. This can be done at the speed of the assembly line, with a mechanical “kicker” used to eject vials not meeting the specifications.
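A highly simplified sketch of such a check, assuming the OpenCV and NumPy libraries; the region coordinates and brightness threshold are invented for illustration, and a production inspection system would be far more sophisticated.

```python
# Simplified vial-label check (assumes OpenCV and NumPy). The pattern:
# grab a region of interest, test it against a threshold, and signal
# the "kicker" to eject a failing vial. Coordinates are illustrative.
import numpy as np
import cv2

def label_present(frame, roi=(50, 20, 100, 60), min_bright_ratio=0.5):
    """True if the label region of the vial image is mostly light pixels."""
    x, y, w, h = roi
    region = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    bright = cv2.countNonZero(cv2.threshold(region, 200, 255, cv2.THRESH_BINARY)[1])
    return bright / (w * h) >= min_bright_ratio

# Synthetic test frame: dark vial with a white "label" patch attached.
frame = np.zeros((120, 200, 3), dtype=np.uint8)
frame[20:80, 50:150] = 255
print("pass vial" if label_present(frame) else "eject vial")
```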
What is Natural Language Processing (NLP)?
This is the part of AI that focuses on enabling interactions with humans by interpreting their language. It includes automated language understanding and interpretation, automated language generation, speech recognition, and responding with spoken responses. In the past few years, this has gone from the lab to millions of homes, with digital assistants like Siri and Alexa ready to listen and respond to requests. In many cases, the vendors of these systems seek users’ permission to use recordings of these interactions to improve the systems’ performance. This has been recognized as a privacy issue. In at least one case, recordings of interactions with a digital assistant have been subpoenaed in connection with a murder trial.
What are Chatbots?
These are very similar to natural language processors, although they were developed to replace human operators in online text-based chat systems. For example, a chat system could be fielded to answer routine questions and to forward difficult or complex ones to human operators, thus reducing the workload on the humans. In some cases, these can use text-to-speech processing to enable spoken responses.
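A minimal sketch of the routing idea, with invented keywords and canned answers: routine questions get an automated reply, and everything else is forwarded to a human operator.

```python
# Minimal chatbot routing sketch: answer routine questions automatically,
# forward anything unrecognized to a human operator. Keywords and canned
# answers are invented for illustration.
ROUTINE_ANSWERS = {
    "hours": "Our office is open 8 a.m. to 5 p.m., Monday through Friday.",
    "address": "We are located at the address shown on your statement.",
}

def handle_message(text):
    lowered = text.lower()
    for keyword, answer in ROUTINE_ANSWERS.items():
        if keyword in lowered:
            return ("bot", answer)          # routine: the bot replies
    return ("human", "Routing your question to a staff member.")

print(handle_message("What are your hours?"))
print(handle_message("I want to dispute a charge on my bill."))
```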
What are Graphics Processing Units (GPUs)?
These are specialized processors, designed to process image data, that operate within computers. A GPU could be used to create the images displayed on a computer’s screen. However, these powerful units have been put to many other uses. A current example is that GPUs are often used to process cryptocurrency transactions (a process known as mining, which can be very profitable). Specialized computers using massive numbers of GPUs have been developed as mining machines for cryptocurrency processing.
What is the Internet of Things (IoT)?
This is a term that refers to the abundance of devices that can connect to a network that are not traditional computers (or smartphones or tablets). Ranging from smart lightbulbs to cameras to refrigerators, they enable remote control and monitoring of connected devices. There has been enormous growth in the number of medical devices that can connect to a network. Unfortunately, there are serious security concerns that have resulted in Food and Drug Administration (FDA) warnings relating to several devices, including network-connected infusion pumps.
What are Application Programming Interfaces (APIs)?
This refers to the connections between devices and the rules by which these connections are made and interpreted. So, for example, if an AI-based analytic engine is to be given access to a particular database, an API defines the way the systems interact, how requests are made, and how they are responded to.
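As a small illustration, assuming Python’s widely used requests library and a purely hypothetical endpoint with invented field names, an API interaction might look like this; a real API’s documentation defines the actual contract.

```python
# Hypothetical API interaction (assumes the "requests" library).
# The endpoint, parameters, and response fields are invented; a real
# API's documentation defines how requests are made and interpreted.
import requests

response = requests.get(
    "https://example.com/api/v1/claims",       # hypothetical endpoint
    params={"status": "denied", "limit": 10},  # how a request is made
    headers={"Authorization": "Bearer <token>"},
    timeout=10,
)
response.raise_for_status()
for claim in response.json().get("claims", []):  # how a response is read
    print(claim["id"], claim["status"])
```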
AI and the Compliance Function
When it comes to AI, compliance professionals are presented with what could be characterized as a double-edged sword. On one hand, AI represents an opportunity for compliance professionals to automate certain compliance activities. AI software can perform a compliance function within a given automated process. For example, an AI system could be instructed to issue a report (or email or text message) to a compliance officer if certain values rise above or fall below specified thresholds. If regular reports from multiple people are required, the system can monitor whether it has received the reports. It can be programmed to send a notice to those who have not made their report, and eventually to the compliance officer if reports are not received within a specified time period. The system can adjust processing based on an individual reporter’s performance. So, for example, more leeway might be given to someone who always files their reports on time versus someone who is frequently late.
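A minimal sketch of that report-monitoring idea, with placeholder names, dates, and thresholds; the notify() function stands in for whatever email, text, or reporting mechanism an organization actually uses.

```python
# Sketch of automated report monitoring with escalation (all names,
# dates, and thresholds are placeholders).
from datetime import date, timedelta

def notify(recipient, message):
    print(f"to {recipient}: {message}")  # stand-in for email/text/report

def check_reports(expected, received, on_time_history, today,
                  escalate_after_days=3):
    for person, due in expected.items():
        if person in received:
            continue
        # Reliable reporters (90%+ on time) get one extra day of leeway.
        leeway = timedelta(days=1 if on_time_history.get(person, 0) >= 0.9 else 0)
        if today > due + leeway:
            notify(person, "Your compliance report is overdue.")
        if today > due + timedelta(days=escalate_after_days):
            notify("compliance_officer", f"{person} has not reported.")

check_reports(
    expected={"alice": date(2024, 1, 5), "bob": date(2024, 1, 5)},
    received={"alice"},
    on_time_history={"bob": 0.95},
    today=date(2024, 1, 10),
)
```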
For compliance officers, using AI represents what might be called a force multiplier, in that it enables compliance tasks to be assigned to a machine rather than requiring a human to track and identify reports not received on a timely basis. Because typical budgets for compliance are never enough to do everything a compliance officer might like, automating some processes can make those resources go further, which can be a valuable part of the overall compliance process in an organization.
On the other hand, AI software cannot exist in a vacuum. It needs to be properly controlled and carefully examined by a compliance professional, who should be involved in the development or adoption of the AI software, along with its customization and testing. Compliance professionals should not underestimate the importance of being involved in testing. Problems with the data used to train the system can produce results that seem completely appropriate to the AI technical team but that compliance specialists may recognize as reflecting inherent biases implicit in the training data. Such data is often historic in nature and may have been collected during periods when issues like racial or gender bias were not yet recognized. The technical people involved in the AI development process may not be sensitive to these issues. Compliance professionals must be, and they can serve as a vital system of checks and balances to assure that old problems are not carried forward into the new AI-based system.
AI and deep-learning systems can impact the traditional compliance function. Compliance professionals can both protect the organization from AI-related problems and take advantage of AI’s potential capability to enhance and serve the compliance function.
AI Risk Area Governance
In thinking about AI systems, remember that the entire spectrum of AI is still an emerging area of technology. As a result, few if any laws at this time specifically regulate or otherwise uniquely address AI systems.
AI systems, however, can violate laws. Consider an AI system designed by a bank to make decisions on mortgage loan applications. During the development and training of the system, the AI could determine that a significant predictor of whether a mortgage will be successfully paid is the postal code of the borrower. From a technical standpoint, it might seem reasonable to let the system make loan decisions (including the interest rate and other terms of the loan) with significant weight given to the borrower’s postal code. But doing so might be determined to be an unlawful practice called redlining, which is defined as denying a service to someone on the basis that they live in an area believed to be a financial risk to the lender. This discriminatory practice was generally outlawed by the Fair Housing Act of 1968 and the Community Reinvestment Act of 1977.[3],[4] But those developing the AI system may be experts in technology, not in banking or the application of those laws. This is an example of a system that could perpetuate bias if the problem went unrecognized.
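One kind of pre-deployment check a compliance team might ask for is a comparison of the model’s approval rates across postal codes, to surface a possible redlining proxy. The sketch below uses invented data and field names.

```python
# Sketch of a disparity check on model decisions (invented data and
# field names): a large gap in approval rates between postal codes is
# a signal for legal and compliance review, not proof of redlining.
from collections import defaultdict

def approval_rates_by_group(applications, decisions, group_field="postal_code"):
    totals, approved = defaultdict(int), defaultdict(int)
    for app, decision in zip(applications, decisions):
        group = app[group_field]
        totals[group] += 1
        approved[group] += 1 if decision == "approve" else 0
    return {g: approved[g] / totals[g] for g in totals}

apps = [{"postal_code": "10001"}, {"postal_code": "10001"},
        {"postal_code": "60612"}, {"postal_code": "60612"}]
decisions = ["approve", "approve", "deny", "deny"]
print(approval_rates_by_group(apps, decisions))
```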
It is necessary to consider AI in terms of the risks associated with:
Any compliance system (manual or automated)
Applicable laws and regulations
The need for controls over that system
The requirement that a compliance function be able to provide assurance that the necessary controls are working as intended.
What are the common compliance risks of AI?
AI Development Team May Not Include Sufficient Input from Counsel: AI systems are subject to all relevant laws, regulations, contractual agreements, and company/agency policies covering both the subject matter of the system and any technological issues relating to the system. If a system is created by external specialists as a work made for hire, or is acquired under some form of license agreement from the developer/owner, the acquisition, ownership, and duties of the parties raise legal issues of their own. For these reasons, compliance professionals should consider whether the development of the AI system has received sufficient input from either in-house or outside counsel to assure that the system complies with the applicable laws, regulations, and contractual agreements (such as those covering remote storage, data breach incidents, and privacy and security requirements). Compliance professionals are more aware than the average person of the potential impact of a system that violates laws, regulations, or contractual agreements, so it is important that they help assure the AI system has been subject to appropriate review and follow-up by counsel.
AI Development Team May Not Include Sufficient Input from Compliance Professionals: It is not unusual for an AI development team to be largely composed of technology and AI specialists. They are not compliance professionals, and one must not assume that this kind of technology team will adequately design or implement the needed compliance controls. Consider the extent to which non-AI systems require compliance oversight; AI systems often have greater freedom of action, based on their rulesets and the experience they gain during operation. Compliance professionals have to review in detail the controls being implemented in the AI system to determine whether those controls, properly implemented, are sufficient. If they aren’t, the compliance professional must take whatever steps are necessary to get those controls into the system, or to develop compensating controls that can replace the missing ones.
AI System May Not Be Designed to Retain All Records Required by Law or Regulation: There are many records that a company or governmental body must retain for specified periods of time, as required by law, regulation, or contractual provision. Tax-related information is a good example, but not the only relevant one. AI systems being built or licensed may not take all of the relevant laws and regulations into account. Legal and compliance professionals must work together to understand what the requirements are and the extent to which the existing system design accurately reflects them.
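As a very rough sketch of building a retention schedule into a system’s design, assuming counsel has supplied the record types and periods (the ones below are placeholders, not legal guidance):

```python
# Sketch of a retention schedule wired into system design (record types
# and periods below are placeholders, to be supplied by counsel).
from datetime import date, timedelta

RETENTION_PERIODS = {
    "tax": timedelta(days=7 * 365),
    "audit_log": timedelta(days=3 * 365),
}

def eligible_for_deletion(record_type, created, today):
    period = RETENTION_PERIODS.get(record_type)
    if period is None:
        return False  # unknown record types are retained pending review
    return today - created > period

print(eligible_for_deletion("tax", date(2015, 1, 1), date(2024, 1, 1)))
```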
AI System May Not Be Designed to Retain Records That Could Become Important Evidence in the Event of Litigation Relating to the System’s Operations: The information that counsel wants preserved in logs or other records of an AI system may go beyond the requirements set by laws, regulations, or contractual provisions. For example, there is very little legal guidance on exactly what data an autonomous vehicle has to maintain. But counsel may have some very specific ideas on what should be available if, as has happened, a self-driving car kills a pedestrian. What were the sensors seeing? What was the ruleset that led the car to hit the pedestrian? If the data is not stored in a log or other record, it won’t be available, and that fact may, in and of itself, be seen as problematic if litigation ensues. History tells us that AI systems are no less likely than other systems to result in litigation, so thinking in terms of the evidence that counsel would like to have in the event of litigation is very important. The compliance department needs to ensure that those records are being created by the AI system and stored for the time period designated by counsel.
AI System’s Learning and Testing Data Sets May Be Ineffective in Preventing Unwanted Behavior or in Identifying Potential Issues with the AI System’s Performance: AI systems described as having machine-learning or deep-learning attributes differ from traditional AI programs in that they modify their functionality over time based on experience, simulating the learning that would happen in a human. The set of rules that is part of the software determines how the system can change as it “learns.” What limits are set for these changes? Who has looked at the data used to train and test the system? Unless these questions are actively examined, you can’t simply assume that everything will be OK. For example, an AI computer-vision system that inspects vials to ensure the label was properly attached might need to be adjusted if the label size or the dimensions of the vials change, to avoid rejecting vials that are acceptable.

Consider the example of AI facial recognition systems. At first, there was a general assumption that these systems worked well. But as facts emerged, that assumption had to be challenged. A federal study demonstrated that facial recognition systems misidentified people of color more often than white people.[5] According to a report in The Washington Post, “Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search.”[6] This was not a study of systems being considered for use; these systems were in actual use and were misidentifying people of color. That raises the question of how this could have happened and why it was not noticed. Certainly, there was inherent bias in the data, but it is also important to note that the people building these systems either did not understand that or chose to ignore it. What might seem like an academic question of how the system works can in reality result in life-or-death situations. For example, an innocent person inaccurately identified as a dangerous criminal who has resisted arrest in the past might face a rapidly escalating, and potentially deadly, situation when police attempt an apprehension.
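The kind of disaggregated testing that could surface such a problem is straightforward to sketch: measure the misidentification rate separately for each demographic group rather than reporting a single overall accuracy. The records below are invented for illustration.

```python
# Sketch of disaggregated accuracy testing (invented records): compute
# the misidentification rate per demographic group instead of a single
# overall figure that can hide large group-level disparities.
from collections import defaultdict

def error_rates_by_group(results):
    """results: list of (group, was_misidentified) pairs."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, missed in results:
        totals[group] += 1
        errors[group] += 1 if missed else 0
    return {g: errors[g] / totals[g] for g in totals}

results = [("group_a", False), ("group_a", False), ("group_a", True),
           ("group_b", True), ("group_b", True), ("group_b", False)]
print(error_rates_by_group(results))  # per-group rates, not one average
```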
Updates and Changes to the AI System May Impact the System’s Operations in a Way That Presents Increased Risks That Must Be Evaluated by Counsel and Compliance Professionals: AI and machine-learning systems, like all systems, will be updated at some point or on a regular basis. The changes could result from a change in the underlying operating systems of the computers on which they run or from a change in the desired functionality of the system. Regardless, it’s important that compliance personnel be involved to provide assurance that the changes won’t result in a degradation of controls or reporting mechanisms.
How are AI Compliance Risks Addressed?
During the Development Phase of the AI System: The compliance function plays (or at least should play) an important role during the development (i.e., programming, installation, or customization) of the AI system. Taking an active role to understand what the system does, how it does it, any limitations on the system’s freedom of action, designed-in controls, reporting, data logging and preservation, and error reporting is key to being able to accurately report to management on how well the system is controlled and how those controls can be overseen.
During the Testing Phase of AI System Development: The compliance function should be involved in testing, and the objective of testing should be to detect problems. All too often, developers want the system to be accepted and may take shortcuts. For example, the developers may have a large file of data relevant to the system; they can take half of the file and use it for training the system, then use the other half as the test set. The problem with this is that any problems or bias consistent throughout the file will most likely not be caught, since the same error that is in the test set was also in the data used for training. Making sure that the test data actively challenge the system is important, as the sketch below illustrates.
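Here is a minimal sketch of that concern: a random half-and-half split inherits whatever systemic bias the source file contains, so evaluation should also include a separately curated challenge set. All names and records are illustrative.

```python
# Sketch of the half-and-half split pitfall (illustrative records): the
# random test half mirrors the training half, so a bias present in the
# whole file survives testing. A curated challenge set probes it instead.
import random

def split_in_half(records, seed=0):
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]  # both halves share any bias

records = [{"id": i, "source": "historic_file"} for i in range(10)]
train_set, test_set = split_in_half(records)

# Built independently of the historic file to probe edge cases and
# suspected bias, rather than mirror the training data.
challenge_set = [{"id": "edge-1", "source": "curated"},
                 {"id": "edge-2", "source": "curated"}]
evaluation_data = test_set + challenge_set
print(len(train_set), "training records,", len(evaluation_data), "evaluation records")
```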
During the Operational Phase of the AI System: During the operational life of the system, the compliance function must examine reports coming from the system to understand potential problems. Compliance professionals looking at AI systems must do what they are good at: asking the “What if?” questions that may have been overlooked by the development team. Additionally, those using AI systems may not like the discipline the systems impose and may develop ways to bypass them or render them less effective. Recognizing this possibility can lead a compliance professional to closely examine how the AI system is operating and to be on the lookout for behaviors their experience tells them may be present. At the same time, sufficient testing must be developed to determine that the right controls are in place and that they are working. AI systems are no different from any other corporate system in this regard.
Updating and Maintaining AI Systems: As AI systems are updated and maintained, compliance specialists should be involved to understand the changes, to ensure that they do not negatively impact the controls in place, and to determine whether additional controls are required and whether the overall system of controls will continue to work properly.