AI Legal Watch: February 6, 2025
New AI Executive Order Signals U.S. AI Policy Transition
Joe Cahill
On January 23, 2025, the White House issued an Executive Order titled “Removing Barriers to American Leadership in Artificial Intelligence.” The Order aims to sustain U.S. leadership in AI innovation by “revok[ing] certain existing AI policies and directives that act as barriers to American AI innovation.” The affected policies are to be identified by the Assistant to the President for Science and Technology, in consultation with agency heads, and a new AI action plan must be developed within 180 days.
Prior to signing this Executive Order, President Trump signed an Order rescinding President Biden’s AI Executive Order of October 30, 2023. Biden’s Order had been designed to regulate the development, deployment, and governance of artificial intelligence in the United States, and it urged companies to take issues such as data privacy, worker safety, and possible discrimination seriously when developing or using AI.
In contrast, the Trump administration’s new direction signals a shift toward deregulation and private-sector collaboration. The recently unveiled “Stargate” partnership, involving major industry players such as OpenAI, SoftBank, and Oracle, underscores this pivot by committing to the development of $100 billion worth of technology infrastructure. This move is intended to bolster U.S. competitiveness as debates continue over balancing innovation with necessary safeguards.
Although the rescission of Biden’s Executive Order is largely symbolic—given that it did not impose direct requirements on employers—it may influence how companies approach AI safety measures. With federal agencies now expected to issue revised guidance in due course, stakeholders should closely monitor these developments and adjust their internal policies accordingly.
Copyrightability: Updates to USCO’s Report on Copyright and AI
Ben Bafumi*
Late in January 2025, the U.S. Copyright Office issued Part 2 (of 3) of its Report on Copyright and Artificial Intelligence. Part 1 discussed unauthorized digital replicas (e.g., AI-generated music, voice impersonations, and doctored photographs) and recommended responsive legislation to safeguard those falsely depicted. Part 2 goes deeper, addressing the copyrightability of generative AI outputs by analyzing the “type and level of human contribution” needed to warrant copyright protection.
In response to thousands of comments, the USCO offered the following key conclusions: (1) the copyrightability of AI-generated works can be resolved under current law, so no legislative intervention is needed at this time; and (2) copyright protection still does not extend to solely AI-generated works, and whether an AI-generated output is copyrightable turns on whether the associated human contribution is “sufficient to constitute authorship.” On the second point, the report notes that “prompts alone” do not supply this level of control, but “expressive inputs” may. If expressive elements of an already copyrightable work are perceptible in the output, the output may reflect the level of control needed for copyright protection, at least with respect to those particular creative elements.
The report also notes that copyrightability similarly extends to works where humans creatively select, coordinate, arrange, or modify output materials.
While we await the Office’s publication of Part 3, which will address the training of AI models on copyrighted works, creators using generative AI should remain mindful of the level of control they exert over the output and, to the extent possible, rely on “expressive inputs” or other strong forms of control.
*Ben Bafumi is a law clerk at Baker Botts
Leading Financial Organizations Review Emerging AI-Enabled Cybersecurity Risks
Coleman Strine
Recently, both the World Economic Forum and the Financial Industry Regulatory Authority (“FINRA”) released papers outlining updates on cybersecurity risks stemming from advancements in AI technology. These papers provide stakeholders with an overview of the current threat landscape, as well as several mitigation techniques to combat these threats.
On January 21, 2025, the World Economic Forum released a white paper detailing several AI-enabled risks to financial firms, including prompt injection, data leakage, data reliability, and training data poisoning. The paper suggests that firms implement a suite of techniques to mitigate such risks, including regular digital asset inventories, information governance policies, incident response strategies, output verification, and adversarial testing.
On January 28, 2025, FINRA released its 2025 Annual Regulatory Oversight Report, which provides firms with insights into FINRA’s regulatory findings over the past year. The report discusses a number of trending AI threats, including investment club scams, account fraud, business email compromise, imposter scams, and market manipulation. FINRA suggests that firms mitigate these threats by identifying and investigating suspicious account activities, conducting regular risk assessments, enhancing customer identity verification techniques, and providing additional training to personnel.
In light of the increasingly rapid advancement of AI, it is more important than ever that firms across all industries stay up to date on the latest AI and cybersecurity threats and regularly update and maintain their security strategies in line with industry best practices.
Quick Links
For additional insights on AI, check out Baker Botts’ thought leadership in this area:
- What is DeepSeek, and why does it matter? Parker Hancock provides some of the basic facts around DeepSeek and identifies a few new issues and opportunities that may be relevant to corporate cybersecurity and AI adoption efforts. Read the full article here.
- AI For Patent Drafting in 2025: Can AI be used to draft a patent application? The answer is complicated. Read the full article by Parker Hancock and Christopher Palermo to get their insights.
- How law firms are using tech to draft invalidity claim charts: Partner Bethany Salpietra discusses in this article from Managing IP how AI-powered software can generate an initial assessment of the strength of a prior art reference, assigning it a numerical score.
- "Help! Have AI and Cybersecurity Changed My Ethical Duties?": Partner Paul Morico will be participating in a discussion at the ABA-IPL Section Annual Meeting examining the rapid evolution of AI and digital tools and their profound impact on legal practice.
- AI Counsel Code Podcast: Listen to the podcast to stay up to date on all the artificial intelligence legal issues arising in the modern business frontier.
For additional information on our Artificial Intelligence practice, experience and team, please visit our page here.
ABOUT BAKER BOTTS L.L.P.
Baker Botts is an international law firm whose lawyers practice throughout a network of offices around the globe. Based on our experience and knowledge of our clients' industries, we are recognized as a leading firm in the energy, technology and life sciences sectors. Since 1840, we have provided creative and effective legal solutions for our clients while demonstrating an unrelenting commitment to excellence. For more information, please visit bakerbotts.com.