Thought Leadership

AI Legal Watch: March 3, 2025

Client Updates

EU AI Act Draft Guidelines
Joe Cahill
On February 4, 2025, the European Commission issued draft guidelines clarifying the prohibited AI practices under the EU Artificial Intelligence Act. Article 5 of the AI Act prohibits certain AI practices considered to pose unacceptable risks, including AI systems that manipulate or exploit individuals, engage in social scoring, or infer emotions in workplaces and educational settings. Although non-binding until formally adopted, the guidelines offer concrete examples—for instance, they clarify that using AI to determine insurance premiums or assess creditworthiness based on unrelated personal characteristics may amount to social scoring, while also distinguishing between harmful practices and those with legitimate applications.

The guidelines also outline responsibilities for AI providers, emphasizing that they must ensure their systems are not “reasonably likely” to be used for prohibited purposes and must adopt safeguards to prevent foreseeable misuse. These safeguards may include implementing technical measures, providing clear user instructions, and continuously monitoring system compliance. The guidance and its detailed examples serve to assist providers in aligning with the Act’s requirements and help businesses anticipate and mitigate potential legal risks.

Federal District Court Holds AI Training Not Protected by Fair Use
Ben Bafumi*
In Thomson Reuters Enterprise Centre GmbH et al. v. ROSS Intelligence Inc., a federal district court found that ROSS’s use of Plaintiff’s proprietary Westlaw headnotes to develop its competing legal research tool infringed Thomson Reuters’ copyrights and was not a fair use under the Copyright Act.

The fair use factors, weighed by courts as a balancing test, are: (1) the purpose and character of the use (e.g., commercial vs. nonprofit); (2) the nature of the copyrighted work; (3) the amount and substantiality of the portion taken relative to the work as a whole; and (4) the effect of the use on the work’s value or its market. Here, the Court found that factors two and three favored ROSS but, on balance, rejected ROSS’s claim of fair use based largely on the first and fourth factors. As to the first factor, the Court found that ROSS’s use was commercial and non-transformative, rejecting its reliance on the “intermediate copying” case law, which turns on whether the copying was necessary to reach the underlying ideas—unlike ROSS’s copying, which merely made its development easier. As to the fourth factor, the Court identified the relevant markets as legal research platforms (the primary market) and data to train legal AI models (a potential derivative market) and determined that the unauthorized use of Westlaw’s content impacted, or would impact, both.

The Court carefully emphasized that this case does not involve generative AI; however, its impact may prove significant for all uses of third-party content to train AI. Those who develop, train, and use AI models should continue to exercise caution with respect to where and how training data is sourced, including through investigation and suitable contractual protections.
*Ben Bafumi is a law clerk at Baker Botts

DeepSeek Under Scrutiny 
As we reported in our last newsletter, everyone is talking about DeepSeek for its increased performance and efficiency. With that attention come additional risks: DeepSeek is subject to Chinese national law, and many questions remain unanswered, including what data was used in training, how much the model cost to develop, and what additional risks may arise from using foreign-sourced AI technologies. Recently, South Korea announced that it has temporarily banned new downloads of DeepSeek due to concerns over data privacy, among other things. This decision highlights the growing scrutiny of DeepSeek and its compliance with data protection and other laws. Beyond the chatter, DeepSeek has sparked debates about security and the geopolitical implications of AI advancements. As South Korea works to ensure the app meets its stringent privacy standards, other countries, including Taiwan and Australia, have expressed similar concerns and advised government employees against using DeepSeek.

Quick Links
For additional insights on AI, check out Baker Botts’ thought leadership in this area:

  1. AI Counsel Code Podcast: Maggie Welsh welcomes Parker Hancock to explore AI advancements and upcoming regulatory shifts for 2025 in "AI Outlook for 2025." They delve into key technologies like DeepSeek models and test-time compute, while unpacking crucial updates on AI regulations, including the EU AI Act and state-level U.S. laws. The conversation also covers how companies can navigate AI risks, with a particular focus on copyright and application-specific compliance.
  2. Copyright Office Releases Part 2 of Artificial Intelligence Report: Senior Associate Nick Palmieri reviews part two of the U.S. Copyright Office's report on Artificial Intelligence ("AI"), which addresses the topic of “copyrightability” as it relates to AI.
  3. Client Update: Federal District Court Holds AI Training Not Protected by Fair Use: Read more key takeaways from Judge Stephanos Bibas' opinion in Thomson Reuters Enterprise Centre GmbH et al. v. ROSS Intelligence Inc.

For additional information on our Artificial Intelligence practice, experience and team, please visit our page here.


ABOUT BAKER BOTTS L.L.P.
Baker Botts is an international law firm whose lawyers practice throughout a network of offices around the globe. Based on our experience and knowledge of our clients' industries, we are recognized as a leading firm in the energy, technology and life sciences sectors. Since 1840, we have provided creative and effective legal solutions for our clients while demonstrating an unrelenting commitment to excellence. For more information, please visit bakerbotts.com.

Related Professionals