Healthcare Compliance Risk Areas Flashcards
What Is Artificial Intelligence in Relation to Compliance Programs?
Artificial intelligence (AI) is simply the application of computer processing to simulate the actions of a person. One of the earliest AI systems was, in fact, a medical application called “MYCIN.” The program was designed to diagnose bacterial infections and recommend appropriate medications, with the dosage adjusted for the patient’s body weight. Viewed from the perspective of current technology, MYCIN was quite primitive, using an inference engine with approximately 600 rules derived from interviews with expert human diagnosticians. MYCIN was originally written as part of a doctoral dissertation at Stanford University and was never used in actual medical practice for legal and ethical reasons (along with limitations related to the technology of the day). But it formed the basis for continued experimentation and development. Aspects of AI are continuously evolving, but some basic terms are worth understanding.
What is Machine Learning?
This is a subset of AI in which the computer’s algorithms (essentially the AI computer program) are able to modify the computer’s actions with the objective of improving through experience. In many settings (medicine, aviation, or automobiles, for example), learning through actual experience could be counterproductive. For example, imagine explaining that airplane crashes happened because the airplane’s computer program hadn’t yet learned to deal with unexpected turbulence. So, typically, machine-learning systems are given what is called “training data” in order to learn how to function. Provided with data and outcomes, the software should be able to modify its processing to produce better—or more accurate—performance.
What is Rule-Based Machine Learning?
This involves systems that evolve a set of rules by which a program makes decisions. MYCIN, for example, had hundreds of rules. In a rule-based system, the program uses its experience to identify which rules are more or less useful and to modify the rules or the weights given to them to improve processing outcomes.
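The weight-adjustment idea can be sketched in a few lines. This is a hypothetical illustration assuming a simple scheme in which each rule’s weight is nudged up or down depending on whether it contributed to a correct decision; the rule names and the update step are invented, not drawn from MYCIN or any real system.

```python
# Hypothetical sketch of rule-weight adjustment in a rule-based learner.
# Rule names and the learning rate are illustrative assumptions.

def update_weights(rules, outcomes, lr=0.1):
    """Nudge each rule's weight up when it contributed to a correct
    decision and down when it contributed to an incorrect one,
    keeping weights in the range [0, 1]."""
    for rule, correct in outcomes:
        if correct:
            rules[rule] = min(1.0, rules[rule] + lr)
        else:
            rules[rule] = max(0.0, rules[rule] - lr)
    return rules

rules = {"fever_suggests_infection": 0.5, "age_adjusts_dose": 0.5}
observed = [("fever_suggests_infection", True),
            ("age_adjusts_dose", False)]
updated = update_weights(rules, observed)
print(updated)
```

Over many observed outcomes, rules that consistently lead to good decisions accumulate weight, while unhelpful rules fade in influence.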
What are Deep Learning Systems?
These systems are generally characterized as having multiple layers of processing, with layers that move from general to specific analysis, often applied to large volumes of unstructured data. An example might be a system that is designed to read human handwriting. Clearly, experience tells us that this isn’t easy, as there are as many variations in handwriting as there are people. But there are generalizations that can be used to do some preliminary analysis (for example, that a given character is uppercase) that can lead to deeper analysis to try to figure out which character is being represented.
What is Cognitive Computing?
This is generally thought of as an alternative name for AI. There is no widely accepted definition, but you may run into the term as a synonym for AI.
What is Computer Vision?
This is a subset of AI that focuses on how computers use digital images (still or video) in their processing. An assembly line for drug packaging can use computer vision technologies, for example, to inspect sterile vials of injectable medication to ensure that labels have been affixed and that the top is properly sealed. This can be done at the speed of the assembly line, with a mechanical “kicker” used to eject vials not meeting the specifications.
What is Natural Language Processing (NLP)?
This is the part of AI that focuses on enabling interactions with humans by interpreting their language. It includes automated language understanding and interpretation, automated language generation, speech recognition, and responding with spoken responses. In the past few years, this has gone from the lab to millions of homes, with digital assistants like Siri and Alexa ready to listen and respond to requests. In many cases, the vendors of these systems seek users’ permission to use recordings of these interactions to improve the system’s performance. This has been recognized as a privacy issue. In at least one case, recordings of interactions with a digital assistant have been subpoenaed in connection with a murder trial.
What are Chatbots?
These are very similar to natural language processors, although they were developed to replace human operators in online text-based chat systems. For example, a chat system could be fielded to answer routine questions and to forward difficult or complex ones to human operators, thus reducing the workload on the humans. In some cases, these can use text-to-speech processing to enable spoken responses.
What are Graphics Processing Units (GPUs)?
These are specialized processors, operating within computers, that are designed to process image data. A GPU could be used to create the images displayed on a computer’s screen. However, these powerful units have been used for many other purposes. A current example is that GPUs are often used to process cryptocurrency transactions (a process known as mining, which can be very profitable). Specialized computers using massive numbers of GPUs have been developed as mining machines for cryptocurrency processing.
What is the Internet of Things?
This is a term that refers to the abundance of devices that can connect to a network that are not traditional computers (or smartphones or tablets). Ranging from smart lightbulbs to cameras to refrigerators, they enable remote control and monitoring of connected devices. There has been enormous growth in the number of medical devices that can connect to a network. Unfortunately, there are serious security concerns that have resulted in Food and Drug Administration (FDA) warnings relating to several devices, including network-connected infusion pumps.
What are Application Programming Interfaces (APIs)?
This refers to the connections between devices and the rules by which these connections are made and interpreted. So, for example, if an AI-based analytic engine is to be given access to a particular database, an API defines the way the systems interact, how requests are made, and how they are responded to.
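The idea of an API as an agreed-upon contract can be sketched with a stand-in function. Everything here is illustrative: the endpoint name, the required `max_results` field, and the record shapes are assumptions, not a real interface.

```python
# Hypothetical sketch of an API contract between an analytic engine
# and a database service. The endpoint, fields, and records are
# illustrative assumptions, not a real system's interface.

def query_patients_api(request: dict) -> dict:
    """A stand-in for a database service endpoint: the API defines
    what a valid request looks like and what the response contains."""
    if "max_results" not in request:
        # The contract requires this field; malformed requests get a
        # structured error rather than silent failure.
        return {"status": "error", "message": "max_results is required"}
    records = [{"id": 1, "age": 54}, {"id": 2, "age": 61}]  # mock data
    return {"status": "ok", "results": records[: request["max_results"]]}

resp = query_patients_api({"max_results": 1})
print(resp["status"], len(resp["results"]))
```

The value of the contract is that both sides can be built and tested independently, so long as each honors the agreed request and response shapes.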
AI and the Compliance Function
When it comes to AI, compliance professionals are presented with what could be characterized as a double-edged sword. On one hand, AI represents an opportunity for compliance professionals to automate certain compliance activities. AI software can perform a compliance function within a given automated function. For example, an AI system could be instructed to issue a report (or email or text message) to a compliance officer if certain values are exceeded or fall below a specified threshold. If regular reports from multiple people are required, the system can monitor whether it has received the reports. It can be programmed to send a notice to those who have not made their report, and eventually to the compliance officer if reports are not received within a specified time period. The system can adjust processing based on an individual reporter’s performance. So, for example, more leeway might be given to someone who always files their reports on time versus someone who is frequently late.
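The report-tracking behavior described above can be sketched as follows. This is a hypothetical illustration; the reporter names, the 90% on-time threshold, and the one-day grace period are invented parameters, not a prescribed design.

```python
from datetime import date, timedelta

# Hypothetical sketch of the reporting monitor described above.
# Names, the on-time threshold, and the grace period are illustrative.

def overdue_reporters(due: date, received: dict, history: dict, today: date):
    """Return reporters who should be sent an overdue notice.
    Reporters who are usually on time (on_time_rate >= 0.9) are
    given a one-day grace period before being flagged."""
    notify = []
    for person, on_time_rate in history.items():
        if person in received:
            continue  # report already filed
        grace = timedelta(days=1) if on_time_rate >= 0.9 else timedelta(0)
        if today > due + grace:
            notify.append(person)
    return sorted(notify)

history = {"alice": 0.95, "bob": 0.60}  # fraction of past reports on time
received = {}                           # no reports filed yet
print(overdue_reporters(date(2024, 3, 1), received, history, date(2024, 3, 2)))
```

Here the habitually punctual reporter is not yet flagged the day after the deadline, while the frequently late one is, mirroring the leeway described above; escalation to the compliance officer could be a second, later threshold.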
For compliance officers, using AI represents what might be called a force multiplier, in that it enables compliance tasks to be assigned to a machine rather than requiring a human to track and identify reports not received on a timely basis. Because typical budgets for compliance are never enough to do everything a compliance officer might like, automating some processes can make those resources go further, which can be a valuable part of the overall compliance process in an organization.
On the other hand, AI software cannot exist in a vacuum. It needs to be properly controlled and carefully examined by a compliance professional. This person should be involved in the development or adoption of the AI software, along with its customization and testing. Compliance professionals should not underestimate the importance of being involved in testing. Problems with data used to train the system can produce results that might seem completely appropriate to the AI technical team, but may be recognized by compliance specialists as reflecting, for example, inherent biases that may be implicit in the training data, which is often historic in nature and may have been obtained from periods where various issues (like racial or gender bias) may not have been recognized. The technical people involved in the AI development process may not be sensitive to these issues. Compliance professionals must be—and can serve as—a vital system of checks and balances to assure that old problems are not carried forward into the new AI-based system.
AI and deep-learning systems can impact the traditional compliance function. Compliance professionals can both protect the organization from AI-related problems and take advantage of AI’s potential capability to enhance and serve the compliance function.
AI Risk Area Governance
In thinking about AI systems, remember that the entire spectrum of AI is still an emerging area of technology. As a result, there are no laws at this time specifically regulating or otherwise uniquely addressing AI systems.
AI systems, however, can violate laws. Consider, for example, an AI system designed by a bank to make decisions on mortgage loan applications. During the development and training of the system, AI could determine that a significant predictor of whether a mortgage will be successfully paid is the postal code of the borrower. From a technical standpoint, it might seem reasonable to let the system make loan decisions, including the interest rate and other terms, with significant weight given to the borrower’s postal code. But doing so might be determined to be an unlawful practice called redlining, which is defined as denying a service to someone on the basis that they live in an area believed to be a financial risk to the lender. This discriminatory practice was generally outlawed by the Fair Housing Act of 1968 and the Community Reinvestment Act of 1977.[3],[4] But those developing the AI system may be experts in technology—and not in banking or the application of those laws. This is an example of a system that could perpetuate bias if the problem went unrecognized.
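A compliance reviewer could run a simple first-pass check for this kind of disparity. The sketch below is purely illustrative, using made-up application data and no fixed threshold; real fair-lending analysis is far more involved and requires legal guidance.

```python
# Hypothetical sketch: a first-pass check for whether loan approval
# rates differ sharply by postal code. Data is invented; real
# fair-lending analysis is far more rigorous than this.

def approval_rates_by_zip(decisions):
    """Compute the approval rate per postal code from a list of
    (zip_code, approved) pairs."""
    rates = {}
    for zip_code, approved in decisions:
        n, k = rates.get(zip_code, (0, 0))
        rates[zip_code] = (n + 1, k + (1 if approved else 0))
    return {z: k / n for z, (n, k) in rates.items()}

decisions = [("10001", True), ("10001", True), ("10001", False),
             ("60612", False), ("60612", False), ("60612", True)]
rates = approval_rates_by_zip(decisions)
print(rates)  # large gaps between areas warrant compliance review
```

A gap in approval rates between areas does not prove unlawful conduct, but it is exactly the kind of signal a compliance function should surface for legal review before the system goes into production.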
It is necessary to consider AI in terms of the risks associated with:
Any compliance system (manual or automated)
Applicable laws and regulations
The need for controls over that system
The requirement that a compliance function be able to provide assurance that the necessary controls are working as intended
What are the common compliance risks of AI?
AI Development Team May Not Include Sufficient Input from Counsel: AI systems are subject to all relevant laws, regulations, contractual agreements, and company/agency policies covering both the subject matter of the system and any technological issues relating to the system. If a system is created by external specialists as a work made for hire, or the system is acquired under some form of a license agreement from the developer/owner of the system, the acquisition, ownership, and duties of the parties raise additional legal issues. For that reason, compliance professionals should consider whether the development of the AI system has received sufficient input from either in-house or outside counsel to assure that the system is in compliance with the applicable laws, regulations, and contractual agreements (such as for remote storage, data breach incidents, and privacy and security requirements). Compliance professionals are more aware of the potential impact of a system that violates laws, regulations, or contractual agreements than the average person, so it is important that they help assure the AI system has been subject to appropriate review and follow-up by counsel.
AI Development Team May Not Include Sufficient Input from Compliance Professionals: It is not unusual for an AI development team to be largely composed of technology and AI specialists. They are not compliance professionals, and one must not assume that this kind of technology team will adequately design or implement the needed compliance controls. Consider the extent to which non-AI systems require compliance oversight. AI systems often have greater freedom of action based on their rulesets and the experience that they gain during their operation. Compliance professionals have to review in detail the controls being implemented in the AI system to determine whether those controls are properly implemented and sufficient. If they aren’t, the compliance professional must take whatever steps are necessary to get those controls into the system, or to develop compensating controls that can replace missing controls within AI systems.
AI System May Not Be Designed to Retain All Records Required by Law or Regulation: There are many records that a company or governmental body must retain for specified periods of time, as required by law, regulation, or contractual provision. Tax-related information is a good example, but not the only one that is relevant. AI systems being built or licensed may not take all of the relevant laws and regulations into account. Both legal and compliance professionals must work together to understand what the requirements are and the extent to which the existing system design accurately reflects those requirements.
AI System May Not Be Designed to Retain Records That Could Become Important Evidence in the Event of Litigation Relating to the System’s Operations: The information that counsel wants preserved in logs or other records of an AI system may go beyond requirements set by laws, regulations, or contractual provisions. For example, there is very little legal guidance on exactly what data an autonomous driving vehicle has to maintain. But counsel may have some very specific ideas on what should be available if—as has happened—the self-driving car kills a pedestrian. What were the sensors seeing? What was the ruleset that led the car to hit the pedestrian? If the data is not stored in a log or other record, it won’t be available, and that fact may, in and of itself, be seen as problematic if litigation ensues. History tells us that AI systems are no less likely than other systems to result in litigation, and as a result, thinking in terms of the evidence that counsel would like to have in the event of litigation is very important. The compliance department needs to ensure that those records are being created by the AI system and stored for the time period designated by counsel.
AI System’s Learning and Testing Data Sets May Be Ineffective in Preventing Unwanted Behavior or in Identifying Potential Issues with the AI System’s Performance: AI systems referred to as having machine-learning or deep-learning attributes are different from traditional AI programs in that these systems modify their functionality over time based on experience. These systems simulate the learning that would happen to a human. The set of rules that is part of the software determines how the system can change as it “learns.” What limits are set for these changes? Who has looked at the data used to train and test the system? Unless you actively look at these issues, you can’t simply assume that everything will be OK. For example, an AI computer-vision system that inspects vials to ensure that the label was properly attached might need to be adjusted if the label size or the dimensions of the vials change, to avoid rejecting vials that are acceptable. Consider the example of AI facial recognition systems. At first, there was a general assumption that these systems worked well. But as facts emerged, that assumption had to be challenged. A federal study demonstrated that facial recognition systems misidentified people of color more often than white people.[5] According to a report in The Washington Post, “Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search.”[6] This was not a study of systems that were being considered for use. These systems were in actual use and were misidentifying people of color. It raises the question of how that could have happened and why it was not noticed. Certainly, there was inherent bias in the data, but it’s also important to note that the people building these systems either did not understand that or chose to ignore it. What might seem like an academic issue of how the system works can in reality result in life-or-death situations.
For example, inaccurately identifying an innocent person as a dangerous criminal who has resisted arrest in the past might lead to a rapidly escalating, and potentially deadly, situation when police attempt to apprehend that innocent person.
Updates and Changes to the AI System May Impact the System’s Operations in a Way That Presents Increased Risks That Must Be Evaluated by Counsel and Compliance Professionals: AI and machine-learning systems, like all systems, will be updated at some point or on a regular basis. The changes could be a result of changing the underlying operating systems of the computers on which they run or a change in the desired functionality of the system. Regardless, it’s important that compliance personnel be involved to provide assurance that the changes won’t result in a degradation of controls or in reporting mechanisms.
How are AI Compliance Risks Addressed?
During the Developmental Phase of AI System Development: The compliance function plays (or at least should play) an important role during the development (i.e., programming, installation, or customization) of the AI system. Taking an active role to understand what the system does, how it does it, any limitations on the system’s freedom of action, designed-in controls, reporting, data logging and preservation, and error reporting is key to being able to accurately report to management on how well the system is controlled and how those controls can be overseen.
During the Testing Phase of AI System Development: The compliance function should be involved in testing. The objective of testing should be to detect problems. All too often, developers want the system to be accepted, and may take shortcuts. For example, the developers may have a large file of data that is relevant to the system. They can take half of the file and use it for training the system, and then use the other half of the file as the test set. The problem with this is that any problems or biases that are consistent throughout the file will most likely not be caught, since the same error that is in the test set was also in the set of data used for training. Making sure that the test data actively challenge the system is important.
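The pitfall described above can be demonstrated concretely. In this hypothetical sketch, a single file carries a consistent bias (urban applications are always denied); splitting that file in half yields perfect test accuracy, while an independently constructed challenge set exposes the problem. The trivial “learner” and all of the data are invented for illustration.

```python
from collections import Counter, defaultdict

# Hypothetical demonstration: a biased file split in half looks fine
# at test time, while an independent challenge set does not.
# The "model" is a trivial rule learner; all data is made up.

biased_file = ([({"zip_is_urban": 1}, "deny")] * 50 +
               [({"zip_is_urban": 0}, "approve")] * 50)
train, test = biased_file[::2], biased_file[1::2]  # same file, split in half

def fit(data):
    """Trivial learner: predict the majority label seen for each
    value of the (spurious) feature."""
    counts = defaultdict(Counter)
    for x, y in data:
        counts[x["zip_is_urban"]][y] += 1
    return {k: c.most_common(1)[0][0] for k, c in counts.items()}

def accuracy(model, data):
    return sum(model[x["zip_is_urban"]] == y for x, y in data) / len(data)

model = fit(train)
print(accuracy(model, test))       # 1.0 — the shared bias goes unnoticed
challenge = [({"zip_is_urban": 1}, "approve")] * 10  # creditworthy urban apps
print(accuracy(model, challenge))  # 0.0 — the bias is exposed
```

The perfect score on the held-out half is exactly the false comfort described above: the test data inherited the training data’s bias, so only data gathered or constructed independently can challenge the system.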
During the Operational Phase of AI System: During the operational life of the system, the compliance function must examine reports coming from the system to understand potential problems. Compliance professionals looking at AI systems must do what they are good at—asking the “What if?” questions that may have been overlooked by the development team. Additionally, those using AI systems may not like the discipline imposed by these systems and develop ways to bypass them or render them less effective. Recognizing this possibility can lead a compliance professional to closely examine how the AI system is operating and be on the lookout for behaviors their experience tells them may be present. At the same time, sufficient testing must be developed that can determine that the right controls are in place and that they are working. AI systems are no different than any other corporate system in this regard.
Updating and Maintaining AI Systems: As AI systems are updated and maintained, compliance specialists should be involved to understand the changes to ensure that the systems do not negatively impact the controls in place and determine whether they require additional controls and whether the overall system of controls will continue to work properly.
What are the possible penalties of AI noncompliance?
Legal Penalties
While no current laws focus specifically on AI, there are no exemptions for those systems either. Any violations of law that are attributable to the operation of AI systems are subject to the same penalties as any other violation. Depending on their functionality, AI systems can be subject to multiple laws in multiple countries.
Reputational Costs
Cases in which driverless cars have been involved in accidents—and, in at least one case, killed a pedestrian—are examples of incidents involving AI systems that can produce substantial reputational damage. The same can be said of the revelations that facial recognition systems have differing error rates depending on the race of the individual being matched, which had a substantial effect on the developers of those systems. Reputational risk is always a factor to be considered.
Reputational risks cover a broad range of issues. Some examples include:
Negative publicity from the AI-related incident resulting in a loss of customer confidence.
Negative publicity from fines or other sanctions imposed by regulators in response to a data breach or other incident. This can also result in loss of confidence or a general degradation of the overall perceived reputation of the organization.
Loss of market valuation if the incident results in a reduction in stock prices. This can be sudden and precipitous. It can then result in higher borrowing costs or reduced access to capital markets.
The initiation of shareholder and related civil actions based on an incident can also contribute to reputational damage.
Loss of reputation of executives and board members, which can have a personal effect on them.
What Are Financial Conflicts of Interest in Clinical Research?
Financial conflicts of interest (FCOIs) in clinical research are external interests, financial in nature, held by the research investigator (and in some cases by the institution) that could directly and significantly affect, or appear to affect, the design, conduct, and/or reporting of research. Individual FCOIs are specifically defined in federal regulations that apply to Public Health Service (PHS)-funded research as external interests of the research investigator that reasonably appear related to their institutional responsibilities and are considered significant financial interests (SFIs).[2] FCOIs can be complex, considering that interactions have become increasingly common among academia, industry/private sector, and government agencies in pursuit of advancing scientific discoveries using cutting-edge science and technology. The Bayh–Dole Act, enacted in 1980, was the key piece of legislation that changed the landscape at academic institutions by allowing institutions and faculty to retain rights to inventions from federally funded research.[3] This provided a pathway toward commercialization through technology licensing and transfer and also opened up a wider door for interactions with industry. Therefore, it is critical to understand where FCOI risks may occur in such interactions and to employ strategies to ensure that research remains objective and is ethically conducted.
In clinical research, FCOIs are important to detect and effectively manage in order to prevent bias from negatively or inappropriately affecting the design, conduct, or outcome of the research. These steps are also essential to maintaining the public’s trust in the research produced by healthcare or research institutions and individual researchers. Healthcare institutions often participate in clinical research through government, industry, or internal funding mechanisms. The federal government funds clinical research at healthcare institutions through the form of grants that require recipient institutions and research investigators to comply with the terms and conditions of the award, including FCOI disclosure and management. Industry-sponsored clinical research is governed by contracts between the institution and company developing or marketing the drug, device, or biologic under study, and is often bound by disclosure and reporting requirements pursuant to the Food and Drug Administration (FDA) financial disclosure regulations. There may also be other state and local laws or institutional policies that govern conflict of interest (COI) disclosure and management pertaining to business interactions and transactions. Clinical researchers and institutions have a shared responsibility in ensuring compliance with regulatory and other local requirements to promote objectivity in the research conducted at their organizations, which is why this remains an important and complex risk area for compliance professionals.
A number of factors that affect institutional FCOI risk and risk tolerance in clinical research include the overall maturity of compliance and research integrity programs; leadership support and investment; nature and breadth of research programs, including funding sources; degree of interactions with industry and commercialization activity; institutional culture; reporting mechanisms; and reputational impacts. The increasing pressures and complexities of the academic and scientific environments combined with heightened public scrutiny regarding FCOIs require more sophisticated oversight programs to ensure transparency, accountability, and effective management of conflicts.
A comprehensive lens should be applied when evaluating FCOI risks associated with clinical research. This is because other types of individual conflicts (e.g., conflict of commitment, role-based conflicts, conflict of conscience) may arise or be comingled with FCOI in the context of clinical research. Researchers in healthcare environments often have multiple roles, including that of a healthcare provider, faculty member or student, administrator, and institutional official, or serve on institutional review committees. They may have external interests (e.g., start-up companies) and collaborations or relationships (e.g., advisory board roles) that may intersect or conflict with their institutional responsibilities. Growing concerns by the US government over inappropriate influence by foreign governments on federally funded research have led to reinforcement by the National Institutes of Health (NIH) of appropriate disclosure by researchers and review by institutions of foreign support, relationships, and activities that represent an FCOI or conflict of commitment.[4] Institutional FCOIs may also arise from institutionally held investments or equity, royalties, significant donations from or interactions with industry, or from institutional officials who have substantial purchasing or business decision-making authority. There are currently no federal regulations that govern FCOIs on an institutional level, which are often left up to institutional policies. Despite this, there have been mounting concerns over increased institutional FCOIs at academic institutions and the need to ensure effective oversight and management of this particular risk area.[5] Therefore, healthcare institutions should be attuned to the various types of conflicts that may occur and employ ways to comprehensively review and manage these risks in clinical research.
Risk Area Governance of Financial Conflicts of Interest in Clinical Research
FCOI federal regulations were promulgated in 1995 by the Office of the Secretary of the U.S. Department of Health & Human Services (HHS) to promote objectivity of PHS-funded research. HHS revised the regulations and issued a final rule on August 25, 2011, requiring compliance by institutions applying for or receiving PHS funding by August 24, 2012.
Codified at:
42 C.F.R. §§ 50.601-50.607 (Subpart F), Responsibility of Applicants for Promoting Objectivity in Research for which Public Health Service Funding is Sought and Responsible Prospective Contractors
45 C.F.R. §§ 94.1-94.6, Responsible Prospective Contractors[8]
The FCOI regulations apply to institutions and research investigators that are recipients of funding from PHS funding agencies such as the NIH. Research investigators are defined by the regulations as the “project director or principal Investigator and any other person, regardless of title or position, who is responsible for the design, conduct, or reporting of research funded by the PHS, or proposed for such funding, which may include, for example, collaborators or consultants.”[9] These regulations do not apply to Phase I Small Business Innovation Research (SBIR) and Small Business Technology Transfer Research (STTR) applicants.
Investigators are responsible for complying with their institutional FCOI policy and disclosing any external interests (including those of their spouse and dependent children) that are reasonably related to their professional responsibilities at their institutions and considered SFIs. Review of SFI disclosures must occur no later than at the time of applying for PHS funding, at least annually during the period of the award, and within 30 days of acquiring or discovering a new SFI. Investigators must also complete FCOI training before engaging in PHS-funded research, at least every four years thereafter, and under certain circumstances (for example, when institutional FCOI policies change in a way that affects investigator requirements, or when an investigator is found to be noncompliant). Institutions have additional responsibilities under the regulations, including review of SFIs disclosed by investigators, identifying any COIs that require management or reduction or elimination of the interest as appropriate, and reporting FCOIs to the PHS awarding component prior to expenditure of funds and subsequently as required. Part of this process involves a designated institutional official(s) who determines whether the investigator’s SFI could directly and significantly affect the design, conduct, or reporting of the PHS-funded research and therefore represents an FCOI. Institutions must also comply with other requirements under the regulations relating to oversight, policy, education and training, and the handling of noncompliance.
SFIs are defined in the PHS regulations and include remuneration received from, or the value of an equity interest in, a publicly traded entity that, when aggregated over the 12 months preceding the disclosure, equals $5,000 or more. SFIs also include interests in non-publicly traded entities where remuneration exceeds $5,000 or where the investigator holds any equity interest, as well as intellectual property rights and interests upon receipt of income, and reimbursed or sponsored travel from certain entities. Since interactions between industry and researchers may be sporadic or ongoing, new information representing an SFI should be disclosed to institutions within 30 days during the period of the research.
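The PHS thresholds described above can be sketched as a simple screening function. This is a hypothetical illustration only; the field names are assumptions, travel and intellectual property interests are omitted, and an actual SFI/FCOI determination rests with the designated institutional official under institutional policy.

```python
# Hypothetical sketch of the PHS significant financial interest (SFI)
# screening thresholds described above. Field names are illustrative,
# and real determinations involve institutional review, not a formula.

def is_sfi(entity_public: bool, remuneration_12mo: float,
           equity_value: float) -> bool:
    if entity_public:
        # Publicly traded entity: remuneration plus equity value,
        # aggregated over the prior 12 months, of $5,000 or more.
        return remuneration_12mo + equity_value >= 5000
    # Non-publicly traded entity: remuneration exceeding $5,000,
    # or ANY equity interest at all.
    return remuneration_12mo > 5000 or equity_value > 0

print(is_sfi(True, 4000, 1500))   # True: aggregate reaches $5,000
print(is_sfi(False, 0, 100))      # True: any equity in a private company
print(is_sfi(True, 3000, 0))      # False: below the threshold
```

A screen like this only flags interests for review; whether a flagged SFI actually constitutes an FCOI requiring management is a judgment made by the institution under the regulations.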
Institutions must be aware of any other applicable federal regulations, state or local laws, funding agency requirements, and institutional policies, especially if they differ from PHS rules, impose additional requirements, or govern other types of COIs and transactions. For example, the National Science Foundation, which is a federal agency that provides research funding, requires investigator disclosures of certain external interests in accordance with a higher SFI threshold amount of more than $10,000.[12] The FDA also requires clinical investigators (including their spouse and dependent children) to disclose certain financial interests, payments, or arrangements to the sponsor of a covered clinical study; however, their threshold amounts differ from PHS rules. Investigator interests that require reporting and disclosure to the FDA include equity interests in the sponsor and, for publicly held companies, any interest of more than $50,000 in value, any significant payments of other sorts of more than $25,000 from the sponsor, proprietary interests in the tested product, and other compensation that could be affected by the study outcome.
Requirements for other conflict review and management areas may depend on the institution (e.g., state-funded institutions, nonprofits) and type of individuals covered (e.g., state employees, healthcare providers, institutional officials, key employees). The Physician Payments Sunshine Act, enacted in 2010 as part of the Affordable Care Act, requires applicable drug, device, biological, or medical supply manufacturers and group purchasing organizations (GPOs) to report annually to the Centers for Medicare & Medicaid Services (CMS) any payments or transfers of value to physicians and teaching hospitals worth more than $10. The legislation was meant to increase transparency of financial relationships between physicians and teaching hospitals and industry. CMS publishes this information on its website.
Certain institutions apply PHS regulations only to research funded by PHS, whereas others apply them to all research activities, regardless of funding source, per institutional policy. Whether an institution extends regulatory requirements more broadly depends on its risk strategy, which reflects the type of institution, the makeup of its researchers, its funding, and the type of research conducted. Many institutions also have policies that cover both individual and institutional FCOIs and include other types of conflicts that may occur within the clinical research environment.
What are the common compliance risks of Financial Conflicts of Interest in Clinical Research?
Lack of Effective Organizational Oversight
Organizations that receive PHS research funding are responsible for complying with FCOI regulations, informing investigators about their FCOI policies, and ensuring effective oversight and management. The authorized organizational representative certifies, when submitting a PHS grant application, that the applicant institution is in compliance with the regulations. Other requirements and types of conflicts, including institutional FCOIs, require review and management at an organizational level. Due to the complexity of this space, an effective organizational oversight structure for COIs in research must be in place to ensure compliance and mitigate risks.
Not Maintaining Up-to-Date Policies or Education and Training
FCOI policies are required to be written, up to date, and available to the general public via an accessible website or provided within five business days of a public request. Education and training should be ongoing and regularly updated to ensure that they contain relevant information and effectively address any conflicts of interest that may arise in the organizational environment. Research investigators must also complete FCOI training requirements for PHS-funded research.
Not Identifying or Managing FCOIs in a Timely Fashion
External financial interests, including consulting, employment, remuneration, service on boards, equity interests (such as stock, stock options, or company ownership interests), and any others considered SFIs, should be disclosed by investigators in a timely fashion and evaluated for FCOI before submission of a grant application, prior to engaging in any research activity, and annually or regularly thereafter. These steps reduce the risk that a researcher's judgment may be compromised by financial ties with industry, negatively affecting the objectivity or integrity of the research.
Failure to Fulfill Federal Reporting Requirements
The institution is required to identify, manage, and report FCOIs to the PHS funding agency via initial and annual FCOI reports submitted through eRA Commons. The reports must be submitted prior to expenditure of the funds and when renewals are granted for ongoing projects. Institutions must also report any retrospective reviews conducted in cases where FCOIs were not previously disclosed by the investigator and bias was found in the conduct of the research, or in instances of noncompliance with the management plan. Clinical investigators who are also considered sponsors of covered clinical trials (sponsor-investigators) must ensure that they fulfill reporting requirements under the FDA financial disclosure rules.
Not Managing Risk of Subrecipient FCOIs
Collaborative PHS-funded research requires subrecipients (e.g., subcontractors or consortium members) either to certify that their own FCOI policy complies with the regulations or to rely on the awardee's FCOI policy, and the written agreement must specify which institution's policy will apply for identifying and managing investigator FCOIs.
Inadequate Compliance Monitoring
Ongoing institutional review of compliance with FCOI management plans is required until the completion of the PHS-funded research.
Not Evaluating Overall COI Risk in Relation to Business and Environmental Changes
When macro-level changes occur that affect the nature of the business at organizations or when the regulatory environment or public perceptions change, this may lead to downstream impacts on an organization’s COI compliance risk profile. Certain types of arrangements may increase risk in clinical research:
Individual or institutional FCOIs related to clinical research, especially high-profile ones that involve significant financial gains or greater-than-minimal-risk human subject research.
Researcher or faculty start-ups; employment or financial interests in companies seeking Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) funding that subcontract part of the research work to the researcher's department or institution; or arrangements involving licensing activities and clinical research at the institution.
Organizational investment arms that seek to invest in investigator or institutional start-ups or innovations that are tied to sponsored research at the organization.
Foreign collaborations, activities, or support that represent an FCOI or other type of conflict.
How are compliance risks addressed for Financial Conflicts of Interest in Clinical Research?
Effective Program Oversight and Implementation
FCOIs must be managed by institutions that receive PHS funding. This means investing adequate resources in a COI research compliance program, including staff, committees, and designated official(s) who can effectively review investigator disclosures, identify any SFIs representing FCOIs that require mitigation or management, and fulfill reporting obligations. Conducting regular compliance risk assessments to confirm that the institution has adequate resources and an appropriate level of oversight is key. Noncompliance can cause reputational harm and jeopardize research integrity and the rights, safety, and welfare of research participants, which is why robust and effective COI compliance programs are important. Areas to evaluate include the following.
Program and Governance Structure
Ensure programs are structured appropriately and include the following elements:
Overseen by a centralized department with compliance oversight that is supported by leadership and organization-wide COI policies and procedures.
Led by an individual with executive-level and board-reporting responsibilities and a close connection to clinical research activities occurring at the organization.
Adequate resources, systems, and training for COI staff and committees to review both individual and institutional disclosures and mitigate or manage FCOIs.
Standardized procedures and mechanisms (e.g., hotline, nonretaliation policy) to report, investigate, and handle noncompliance.
Compliance coordination with other departments and offices across the organization, including, but not limited to: Human Research Protection Programs, Institutional Review Boards (IRBs), Grants and Contracts, Procurement, Foundations, Technology Transfer, Legal, Ventures and Innovations, and Academic and Medical Affairs.
Awareness and Education
Complexities involving both individual and institutional COIs in today’s environment require ongoing education and training efforts to reduce risks to objectivity and integrity in clinical research. Education and training on institutional COI policies can raise investigator awareness of regulatory requirements, enhance conflict identification, and foster better FCOI mitigation or management strategies. Education and training can be facilitated through organization-wide learning management systems or programs that track training and notify investigators prior to expiration.
Centralized Disclosure and Review Using Technology
Use of organization-wide technology, such as web-based platforms and electronic systems, to centrally capture and manage investigator disclosures allows for the following:
Timely disclosure of external interests by investigators, including any updates and real-time review of the information in relation to anticipated or ongoing grants and research activities.
Documenting FCOI review determinations and any other institutional actions.
Easy access and maintenance of records for at least three years from the date of submission of the final PHS expenditure report or where otherwise required.
Cross-referencing other sources of information, such as Open Payments, as part of the review process for any physician researchers.
Coordinating and sharing up-to-date disclosures and FCOI management plans through automated feeds or reports with IRB offices and committees, grants and contracts offices, and any other organizational departments requiring the information.
Facilitating the posting and updating of FCOI information for senior or key personnel, including the required elements, on a publicly available website, or fulfilling written public requests within five business days.
Running reports to facilitate compliance monitoring and to evaluate organizational risks over time.
Effective FCOI Management Strategies
FCOIs in clinical research should be reviewed by the designated official(s) and/or COI committee, and management plans should be developed if they cannot be reduced or eliminated. Management plan strategies should comprehensively cover individual and any institutional conflicts and require certain conditions or restrictions for conducting the research, depending on the nature of the study. These can include, but are not limited to, the following:
Restricting conflicted individuals from participating in certain aspects of the research study, such as recruiting, enrolling, and obtaining consent from research participants; collecting or analyzing data; or assessing adverse events and safety monitoring.
Removing conflicted individuals (including institutional officials) from oversight of the research, from reporting lines tied to the research, or from supervising individuals involved in the design, conduct, or reporting of the research.
Recusal of conflicted individuals who serve on any institutional research review or other committee when a review is related to the entity or product in which they have a financial interest.
Modifying the research plan to reduce risk of bias resulting from the FCOI, such as randomization or blinding procedures, independent third-party analysis, or validation of results.
Ensuring the research team and research participants can approach an unconflicted individual or a compliance representative for any COI concerns.
For institutional interests, requiring an external IRB review, independent safety monitor/board, or monitoring body.
Employing an independent monitor or data reviewer or requiring independent audits to ensure the design, conduct, and reporting of the research is protected against bias.
Disclosing the FCOI:
To potential research participants by including language in the informed consent form
To collaborators and sponsors
To procurement
In publications and presentations
To any other parties deemed necessary
Compliance Monitoring and Handling of Noncompliance
Institutions are required to establish adequate and appropriate enforcement mechanisms to ensure compliance. This includes ensuring timely disclosure of SFIs and adherence to FCOI management plans by investigators. The following are ways to address these risks:
Develop institutional policies and procedures for escalation of identified noncompliance and any necessary reporting to IRBs, PHS funding agencies, institutional officials or committees, research integrity officers, and any others as required.
Perform regular monitoring of compliance with FCOI management plans. This can be done through regular check-in questionnaires with investigators, audits of research documentation at research sites, comparing publicly available information or publications against FCOI management plans, or requesting regular reports from independent monitors.
Create a tool for retrospective reviews of research, which are required within 120 days when an FCOI was not disclosed by the researcher. If the institution determines through the review that there was bias in the design, conduct, or reporting of the research during the noncompliant period, it must develop a mitigation report that includes the actions taken to eliminate or mitigate the bias. A PHS mitigation report template that includes all required regulatory elements should be developed.
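The compliance clocks that recur in these rules (30-day SFI disclosure, 120-day retrospective review, three-year record retention) can be tracked with simple date arithmetic. A minimal sketch, assuming the day counts stated in this text; the function names are hypothetical, and actual deadlines depend on the regulatory specifics of each case:

```python
from datetime import date, timedelta

def sfi_disclosure_deadline(acquired_or_discovered: date) -> date:
    """New SFIs must be disclosed within 30 days (per the PHS rules above)."""
    return acquired_or_discovered + timedelta(days=30)

def retrospective_review_deadline(noncompliance_identified: date) -> date:
    """Retrospective review is required within 120 days when an FCOI
    was not disclosed by the researcher."""
    return noncompliance_identified + timedelta(days=120)

def record_retention_until(final_expenditure_report: date) -> date:
    """Records must be kept at least three years from submission of the
    final PHS expenditure report (Feb 29 edge case ignored in this sketch)."""
    return final_expenditure_report.replace(year=final_expenditure_report.year + 3)
```

A disclosure system could use helpers like these to notify investigators and COI staff before each window closes, rather than relying on manual tracking.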
Evaluate and Identify Other Risk Areas
Regularly review COI processes that may touch other departments to detect any information or process gaps requiring improvements or enhanced coordination.
Provide additional education and training to sponsor-investigators holding an investigational new drug application or investigational device exemption, who are required by the FDA to collect information regarding financial interests and report appropriately.
Confirm ongoing review and management of subrecipient investigator FCOIs and any required reporting to PHS by the awardee institution.
Ensure effective measures are taken to manage other types of conflicts that may arise, such as institutional, commitment, role-based, procurement/purchasing, or others. This may require other disclosure and review mechanisms and additional COI management strategies to be applied in the context of the research.
What are the possible penalties for Financial Conflicts of Interest in Clinical Research?
Noncompliance with PHS regulations by investigators may result in the PHS awarding component imposing special award conditions, suspending funding, or taking other enforcement action. Institutional-level sanctions could also occur and affect an investigator's ability to conduct research at the organization. Sanctions can depend on the seriousness and severity of the noncompliance, taking into account the reasons for the noncompliance, whether it is continuing, and its impact on the objectivity and integrity of the research and on human subject protections. Remedial measures could include retraining, increased monitoring of the investigator's compliance, or individual disciplinary measures. On a broader level, noncompliance may result in reputational harm to the institution or researcher and erosion of public trust.
What Are Human Research Protections in Clinical Research?
Human Research Protections were founded on ethical principles that evolved over time as a result of past atrocities involving humans in research experiments. The Nuremberg Code and the World Medical Association’s Declaration of Helsinki were developed after the World War II Nuremberg trials and established ethical codes such as explicit and voluntary consent from patients and guiding principles for physicians. The Belmont Report was published in 1979 by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research and described the basic ethical principles of respect for persons, beneficence, and justice. These principles collectively provided a framework for research ethics that led to today’s regulatory framework designed to protect human research participants.
Today, clinical research requires review by an institutional review board (IRB), which is a committee constituted by a group of individuals that ensures that any proposed research involving human subjects is ethical; adheres to established principles and rules; and has procedures in place to adequately protect the rights, safety, and welfare of humans participating in the research. Informed consent from participants is also a requirement under the regulatory framework. Clinical research is critical to contributing to scientific knowledge and advancing medicine. Over time, clinical research has become more fast-paced and complex as a result of advanced technology and expansion to multiple sites due to increased collaborative research efforts with industry and government agencies.
Healthcare institutions often participate in clinical research due to factors such as ties to research institutes and medical schools, provision of options for patients, and prestige. Institutions may participate in research that is funded internally, by the government, or by industry. Externally funded research is governed by contracts and agreements that require adherence to various rules and regulations pertaining to human research protections and other areas of research conduct. Clinical researchers at healthcare institutions must navigate a complex regulatory environment because research regulations add another layer on top of the already highly regulated healthcare environment. Therefore, depending on the complexity and extent of the research, appropriate levels of monitoring and oversight should be implemented to ensure compliance with regulations and adequate human subject protections during the research period.
Compliance risks for organizations depend on the nature and scale of the research, institutional oversight and culture, researcher qualifications and experience, populations involved, funding mechanisms, and legal and regulatory requirements that apply. Other factors that may affect human research protections, ethical conduct, or objectivity of the research include those related to academic pressures, researcher or institutional financial conflicts of interest, therapeutic misconception from research participants, community and cultural differences, and adequate resources to support and conduct the research. Thus, it is important to understand compliance risks more holistically when evaluating human research protections and consider both internal and external factors.
Risk Area Governance for Human Research Protections in Clinical Research
Federal regulations that govern human research protections were promulgated by the Department of Health & Human Services (HHS) and apply to research conducted or supported by HHS.[5] The subparts of the regulation include:
The Common Rule;
Additional protections for pregnant women, human fetuses, and neonates;
Additional protections for prisoners; and
Additional protections for children.
The Common Rule, which is the federal policy for human research protections that defines ethical standards in human subject research, was revised in 2017 with compliance dates of January 19, 2018, and January 20, 2020.
The Office for Human Research Protections (OHRP) within the Office of the Secretary of HHS provides regulatory and compliance oversight and develops policies, guidance, and education for human subject protections. Institutions that are required to comply with federal regulations must promptly report to OHRP “(1) Any unanticipated problems involving risks to human subjects or others; (2) any…serious or continuing noncompliance with [the] regulations or the requirements or determinations of the IRB; or (3) any suspension or termination of IRB approval.”[8] Institutions that receive HHS support for research involving human subjects must also have a Federal-wide Assurance (FWA) or commitment to comply with federal regulations signed by an institutional official and designate an IRB that is registered with OHRP. Institutions may choose to apply Common Rule requirements to all research or just those that are federally funded.
Institutions that operate an IRB must ensure regulatory requirements are met and establish policies and procedures. This includes an appropriate IRB committee constitution and meeting IRB review and approval criteria under the Common Rule. IRB approval criteria include: ensuring that the risks to participants are minimized and reasonable in relation to anticipated benefits, an equitable selection of subjects, ensuring that informed consent is sought from the prospective participant or their legally authorized representative and documented, ensuring that adequate privacy protections are in place to maintain confidentiality of the data, and monitoring the data to ensure subject safety where appropriate. IRBs must also comply with Food and Drug Administration (FDA) regulations.
There are various FDA regulations that apply to clinical research that involves drugs, devices, and biologics. The following are FDA regulatory subparts that are more applicable to human research protections and IRBs:
21 C.F.R. Part 50 (Informed consent)[9]
21 C.F.R. Part 54 (Financial disclosure by clinical investigators)[10]
21 C.F.R. Part 56 (IRBs)[11]
21 C.F.R. Part 312 (Investigational new drug application)[12]
21 C.F.R. Part 812 (Investigational device exemptions)[13]
FDA-regulated clinical trials must also adhere to good clinical practice (GCP), which is an "international ethical and scientific quality standard for designing, conducting, recording and reporting trials that involve the participation of human subjects."[14] GCP is a set of international standards developed by the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) to ensure that the rights, safety, and well-being of human participants in clinical trials are protected in accordance with ethical principles and that the data is credible. There are a variety of ICH GCP guidance documents, and the most relevant one to healthcare institutions conducting clinical research is ICH E6(R2), which was amended to take into account the modern complexities of clinical research and the use of electronic records.
Registration of “applicable clinical trials” on ClinicalTrials.gov, including summary results, is required per the FDA Amendments Act of 2007 and final rule. Registration is required for National Institutes of Health (NIH)-funded clinical trials and posting of a consent form is required for any clinical trial conducted or supported by a Common Rule agency. These requirements are part of efforts to provide the public with greater transparency and access to information about clinical research.
Institutions supported by Public Health Service (PHS) funding must comply with other federal regulations. This includes research misconduct regulations that serve to promote the responsible conduct of research. The Office of Research Integrity oversees PHS research integrity–related activities, and institutions are required to submit reports pertaining to research misconduct when certain criteria are met. Research misconduct is defined as “fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results,” and any such allegations require prompt review by institutions. Other PHS regulations govern financial conflict of interest (FCOI) and aim to promote objectivity in the design, conduct, and reporting of research.
Other federal regulations that pertain to protecting the privacy and security of information may apply to clinical research. The Health Insurance Portability and Accountability Act (HIPAA) Privacy and Security rules govern the privacy and security of protected health information (PHI) for institutions that are considered covered entities.[19] Use and disclosure of PHI for research purposes by covered entities can occur through signed HIPAA authorizations (which may be combined with the research consent form) from research participants, or through waivers or alterations of HIPAA authorization granted by a privacy board or IRB. Covered entities must also implement appropriate administrative, physical, and technical safeguards to ensure the confidentiality, security, and integrity of electronic PHI. Research participant information may also be protected by researchers who have obtained a certificate of confidentiality (CoC) for research funded by HHS agencies (e.g., NIH), which serves to protect the privacy of research participants by prohibiting the disclosure of identifiable, sensitive research information.
Institutions must be aware of any other applicable federal regulations, state and local laws, or funding agency policies that apply to clinical research activities. This will depend on the type of institution that is conducting the research, nature and type of research, funding source, and contract and agreement terms. Special attention should be paid to high-risk or early-phase trials evaluating safety or research involving vulnerable and critically ill populations. Also, international research will require broader evaluation of other human research and data protection rules and regulations specific to the local and cultural context of the locations and populations.
What are the common compliance risks for Human Research Protections in Clinical Research?
Lack of Effective Organizational Oversight
Organizations that conduct clinical research are responsible for complying with all applicable federal, state, and local requirements as well as contractual agreements, and therefore, must ensure effective oversight. Lack of knowledge regarding the clinical research portfolio (e.g., federally funded, sponsored by industry, investigator initiated), policies and procedures, or institutional-level infrastructure for oversight and monitoring will leave organizations vulnerable to compliance risks.
Not Maintaining Up-to-Date Policies or Education and Training
Up-to-date and accessible institutional policies and procedures should be available to researchers. Education and training should be ongoing and also updated to ensure that they contain relevant information and effectively address risk areas that may arise in the organizational environment. Research investigators should complete research training in accordance with institutional or funding agency requirements.
Investigator Noncompliance
This can occur for a variety of reasons, including lack of training, qualifications, or experience; absence of standard operating procedures (SOPs); or inadequate supervision of the research. Noncompliance and protocol deviations or violations can potentially affect the integrity of the research data or the safety of the research participants.
Inadequate Protections for Research Participants
Protecting the rights, safety, and welfare of research participants is a principal tenet of human research protections. These protections are at risk if research is initiated without IRB review and approval or if informed consent is not obtained appropriately from research participants. Risks to the safety and welfare of research participants can arise if adverse events or unanticipated events that represent a risk to subjects or others are not assessed or reported to the IRB and other parties as required (e.g., sponsor, FDA).
Inadequate Protections of Privacy and Confidentiality
Breaches of research information can cause risks to research participant privacy and confidentiality of sensitive information.
Lack of Institutional IRB Compliance and Reporting
Organizations that operate their own IRB committee(s) and receive federal support for research must register their IRB with OHRP and comply with FWA requirements that include reporting serious and continuing noncompliance to OHRP. IRBs that review research involving FDA-regulated products require adherence to FDA regulations and reporting and are subject to routine FDA inspections.
Failure to Comply with FDA Rules
Clinical research involving FDA-regulated products requires adherence to FDA regulations. FDA rules also apply to other uses of investigational products for treatment use under expanded access (including emergency use) or emergency use authorization pathways. Clinical investigators who conduct clinical research involving FDA-regulated products are routinely inspected by the FDA.
Lack of Procedures to Handle Reports of Research Noncompliance
Organizations that lack a process to handle and investigate such reports expose themselves to risk. This is an important element of institutional oversight: institutions should have a process to investigate allegations and, if an allegation is substantiated, complete any required reporting to federal agencies and institutional officials. Noncompliance reports or complaints from whistleblowers, research participants, or the public can also be submitted directly to federal agencies such as OHRP, which will then take necessary actions to investigate. Institutions will likewise need to investigate any such allegations and may need to report back to the agency. PHS-supported institutions must have a program in place for reporting and investigating allegations of research misconduct.
Inadequate Compliance Monitoring
Ongoing institutional review of compliance with regulatory and institutional requirements is necessary to quickly identify issues that may affect human research protections, evaluate organizational risks, and inform education and training of researchers.