AI Legal Watch: May 2, 2025

Florida Bar Passes Pioneering Cybersecurity Recommendation
Michelle Molner
On March 28, 2025, the Florida Bar unanimously approved Recommendation 25-1, which was proposed by its Cybersecurity & Privacy Law Committee and encourages all Florida Bar members and their firms to adopt certain proactive cybersecurity measures. Specifically, the Recommendation urges firms to perform a Data Mapping Survey and Cybersecurity Maturity Assessment within two years, and to develop an Incident Response Plan (IRP) within three years.

Data mapping helps firms identify what sensitive information they possess, where it resides, and how it moves through their systems. Maturity assessments evaluate a firm’s current cybersecurity posture, establish a baseline, and highlight areas for improvement. The Recommendation’s cornerstone is the development of an IRP, which prepares a firm to respond promptly to cyber incidents, minimize operational disruption, protect client and third-party data, and reduce liability exposure.

Although purely voluntary, Recommendation 25-1 reflects cybersecurity best practices and positions Florida as the first U.S. state bar to formally adopt cyber resilience guidelines. It comes amid several high-profile breaches at law firms, including at Gunster (a 2022 attack impacting 9,000 individuals and resulting in an $8.5 million settlement) and at Orrick, Herrington & Sutcliffe (a 2023 attack impacting 600,000 individuals and resulting in an $8 million settlement).
 
AI Governance in the Agent Era
Coleman Strine
“AI governance” is a rapidly developing field of research that focuses on the risks and controls related to AI platforms. Recently, a team of researchers from the Institute for AI Policy and Strategy has proposed a framework for such governance in the “agent era.” Notably, the risks associated with AI agents present unique challenges, as agents can cooperate with one another and perform real-world tasks independently of their principals.

In particular, the framework includes five categories of “agent interventions”:

  1. Agents should be aligned with their principals’ values. This may be accomplished by incorporating reinforcement learning, calibrating risk tolerances, and paraphrasing chain-of-thought outputs.
  2. Principals should maintain strict control over their agents’ behaviors. For example, principals should develop tools to void or roll back their agents’ actions, shut down or interrupt their agents’ tasks, and restrict the actions available to their agents (illustrated in the sketch following this list).
  3. Principals should ensure that the behavior, capabilities, and actions of their agents are observable and understandable to users. These measures may include providing unique IDs for each agent, logging agent activities, and publishing detailed reports on the reward mechanisms for any reinforcement learning-based agents.
  4. Principals should employ security and robustness measures to mitigate any external threats to agentic systems or their underlying data. For example, standard access control and sandboxing measures should be implemented, and adversarial testing and rapid response defenses should be deployed on a consistent basis.
  5. Agents should be integrated with social, political, and economic systems. The researchers suggest that agents be subject to internal liability regimes mirroring relevant legal schemes, that principals provide mechanisms allowing agents to enforce agreements with one another (e.g., smart contracts), and that principals ensure access to agentic services is provided to users on an equitable basis. Additionally, principals should implement detailed evaluations and monitoring at each stage of the development and deployment processes to ensure compliance with all of the above measures.
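
To make the control and visibility interventions (items 2 and 3) more concrete, below is a minimal Python sketch of how a principal might wrap an agent with a restricted action set, an activity log tied to a unique agent ID, an interrupt switch, and compensating rollbacks. The names here (ControlledAgent, ActionRecord, the undo-callback pattern) are illustrative assumptions; the researchers’ framework describes goals, not an implementation.

    import logging
    import uuid
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    logging.basicConfig(level=logging.INFO)

    @dataclass
    class ActionRecord:
        """One logged action, kept with a compensating callback so it can be voided later."""
        name: str
        args: tuple
        undo: Callable[[], None]

    class ControlledAgent:
        """Hypothetical wrapper a principal might place around an agent (illustrative only)."""

        def __init__(self, allowed_actions: Dict[str, Callable]):
            self.agent_id = str(uuid.uuid4())       # unique ID per agent (intervention 3)
            self.allowed = allowed_actions          # restricted action set (intervention 2)
            self.history: List[ActionRecord] = []   # activity log (intervention 3)
            self.interrupted = False                # principal-held kill switch (intervention 2)
            self.log = logging.getLogger(f"agent.{self.agent_id[:8]}")

        def act(self, name, *args, undo=lambda: None):
            """Execute an action only if the agent is running and the action is allowlisted."""
            if self.interrupted:
                raise RuntimeError("agent has been shut down by its principal")
            if name not in self.allowed:
                self.log.warning("blocked disallowed action %r", name)
                return None
            self.log.info("executing %r with args %s", name, args)
            result = self.allowed[name](*args)
            self.history.append(ActionRecord(name, args, undo))
            return result

        def interrupt(self):
            """Shut down the agent: no further actions will execute."""
            self.interrupted = True
            self.log.info("agent %s interrupted", self.agent_id)

        def rollback(self):
            """Void completed actions by running their undo callbacks in reverse order."""
            for record in reversed(self.history):
                self.log.info("rolling back %r", record.name)
                record.undo()
            self.history.clear()

    # Example: the principal allowlists a single action and retains rollback control.
    drafts = []
    agent = ControlledAgent({"save_draft": drafts.append})
    agent.act("save_draft", "memo v1", undo=drafts.pop)
    agent.act("delete_files")   # not allowlisted; logged and blocked
    agent.rollback()            # drafts is empty again
    agent.interrupt()           # any further act() call now raises

One design choice worth noting: because agents act in the real world, “rolling back” an action usually means executing an offsetting action rather than restoring a snapshot, which is why each action in this sketch is registered alongside a compensating undo callback.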

This framework represents many of the current best practices for the development of AI agents and should be considered by any company seeking to develop or deploy such systems.

Quick Links
For additional insights on AI, check out Baker Botts’ thought leadership in this area:

  1. AI Counsel Code: In the episode “Protecting Trade Secrets in the AI Era,” Maggie Welsh and Julie Albert discuss the challenges of protecting trade secrets in AI. They cover the definition of trade secrets, methods for safeguarding valuable information, and the risks posed by unauthorized access and data scraping. Emphasizing the importance of understanding and securing AI tools, they provide insights on maintaining confidentiality in the evolving AI landscape.
  2. Federal Circuit Refines Section 101 Eligibility of Machine Learning Inventions: Nick Palmieri and Chris Palermo examine the significant decision in Recentive Analytics, Inc. v. Fox Corp., the Federal Circuit's first-ever patent eligibility decision involving machine learning.
  3. Federal Lawmakers Reintroduce NO FAKES Act to Combat Unauthorized Digital Replicas: In early April 2025, a bipartisan group of U.S. Senators reintroduced the NO FAKES Act, signaling renewed momentum for federal legislation addressing the rise of unauthorized digital replicas powered by artificial intelligence. Read more from Joe Cahill in the linked article. 