DOJ Releases AI-Related Compliance Guidance
On September 23, 2024, the Department of Justice updated its guidance on the Evaluation of Corporate Compliance Programs to include questions specifically focused on companies’ use and implementation of artificial intelligence. While the guidance does not have the force of law, it provides a useful set of considerations for executives, board members, compliance officers, and other key stakeholders in companies that use AI for commercial purposes or in compliance programs.
Background: What is the DOJ’s guidance on the “Evaluation of Corporate Compliance Programs”?
Under U.S. law, a company can be held criminally responsible for the actions of its agents and employees. Prosecutors, however, wield discretion as to whether, and how, to hold a company responsible for those actions. To guide the exercise of that discretion, DOJ policy instructs federal criminal prosecutors to consider a number of factors, including “the adequacy and effectiveness of the corporation’s compliance program at the time of the offense, as well as at the time of a charging decision” and the corporation’s remedial efforts “to implement an adequate and effective corporate compliance program or to improve an existing one.”
Several years ago, the DOJ’s Criminal Division began making publicly available the factors it tells prosecutors to consider when evaluating corporate compliance programs, in a document titled “Evaluation of Corporate Compliance Programs.” While not legally binding, the guidance provides useful insights for companies in considering how to implement and improve their own compliance systems. At a high level, it directs prosecutors to consider (i) whether “the corporation’s compliance program [is] well designed,” (ii) whether “the program [is] being applied earnestly and in good faith,” such that the program is “adequately resourced and empowered to function effectively”; and (iii) whether the program “work[s] in practice.”
DOJ has periodically published updates to the guidance. The most recent update was issued on September 23, 2024. Among other things, in asking whether the program is “well designed,” the guidance now directs prosecutors to consider how the company is managing “emerging risks to ensure compliance with applicable law,” including risks related to AI.
What Does It Now Say About AI?
The updated guidance asks nine questions about AI. While these will not apply in the same way to every company (indeed, DOJ has expressly cautioned against the use of “out of the box” compliance programs), they provide useful insight into how DOJ expects companies to manage AI-related risk. The nine questions are:
- “How does the company assess the potential impact of new technologies, such as artificial intelligence (AI), on its ability to comply with criminal laws?”
- “Is management of risks related to use of AI and other new technologies integrated into broader enterprise risk management (ERM) strategies?”
- “What is the company's approach to governance regarding the use of new technologies such as AI in its commercial business and in its compliance program?”
- “How is the company curbing any potential negative or unintended consequences resulting from the use of technologies, both in its commercial business and in its compliance program?”
- “How is the company mitigating the potential for deliberate or reckless misuse of technologies, including by company insiders?”
- “To the extent that the company uses AI and similar technologies in its business or as part of its compliance program, are controls in place to monitor and ensure its trustworthiness, reliability, and use in compliance with applicable law and the company's code of conduct?”
- “Do controls exist to ensure that the technology is used only for its intended purposes? What baseline of human decision-making is used to assess AI?”
- “How is accountability over use of AI monitored and enforced?”
- “How does the company train its employees on the use of emerging technologies such as AI?”
What’s Next?
It’s worth noting that DOJ here is not taking a pro- or anti-AI stance. The guidance, in particular question six above, seems to recognize that there are ways AI can help with compliance. But the guidance also instructs companies to seriously consider, and mitigate, the potential for misuse of AI.
At bottom, it is important that companies consider how emerging technologies affect their compliance risk profile, document that consideration, and, if necessary, update their compliance policies in accordance with that risk assessment.
While a well-designed compliance program will not necessarily prevent all wrongdoing by employees or contractors, showing that the company has a well-designed program can mitigate or reduce the risk of financial or other penalties (such as debarment) that the company might face as a result of a government investigation. Companies should consider the guidance in light of their use of AI, both commercially and in their compliance programs, and determine whether any changes or improvements may be necessary. When it comes to front-end compliance, the adage “an ounce of prevention is worth a pound of cure” is always apt.
ABOUT BAKER BOTTS L.L.P.
Baker Botts is an international law firm whose lawyers practice throughout a network of offices around the globe. Based on our experience and knowledge of our clients' industries, we are recognized as a leading firm in the energy, technology and life sciences sectors. Since 1840, we have provided creative and effective legal solutions for our clients while demonstrating an unrelenting commitment to excellence. For more information, please visit bakerbotts.com.