The EU AI Act: What Energy Executives Should Know Before August 2026
The Compliance Window Is Closing
The EU AI Act is in force. Its most demanding obligations—those applicable to high-risk AI systems—will apply starting August 2, 2026.
For energy companies operating in or serving the EU market, this is not a niche compliance issue. Many AI systems already used across exploration, production, transport, power generation, and grid operations may fall within the Act’s “high risk” category. Companies that have not yet begun structured compliance efforts should treat August 2, 2026 as a critical deadline. Penalties for non-compliance can reach €15 million or up to 3% of global annual turnover, whichever is higher.
Why Energy AI Systems Are Often “High Risk”
The AI Act takes a risk-based approach. Its most onerous obligations apply to AI systems designated as high-risk under Annex III.
For energy companies, two provisions of the EU AI Act work together to determine high-risk status. Annex III, point 2 classifies as high-risk any AI system that functions as a “safety component” in the management or operation of “critical infrastructure,” including electricity, gas, heating, and other essential energy services. The Act adopts a broad definition of “critical infrastructure” from the Critical Entities Resilience Directive (EU) 2022/2557, encompassing both physical and digital assets across the energy value chain—from upstream operations through transmission, distribution, and retail supply. An AI system qualifies as a “safety component” where its failure or malfunction could result in physical damage to infrastructure or harm to persons or property. AI systems used solely for cybersecurity purposes are expressly excluded.
Annex I provides a second, independent pathway to high-risk classification. Where an AI system is embedded as a safety component in a product already subject to third-party conformity assessment under EU harmonization legislation—such as the Machinery Regulation, the Pressure Equipment Directive, or the ATEX Directive—that AI system is independently classified as high-risk under Article 6(1). Energy companies should be aware that a single AI system may trigger high-risk classification under both Annex I and Annex III, and that each classification carries its own compliance obligations.
Where classification is borderline, the cost of under-classification—particularly given the infrastructure context—may outweigh the burden of treating a system as high-risk. Regulators are unlikely to read “safety component” narrowly when enforcement activity turns to energy.
AI Systems To Consider Carefully
The following examples are illustrative, not exhaustive. The common thread is operational consequence—these are systems whose failure may, under certain facts and circumstances, affect safety, supply continuity, or infrastructure integrity.
Upstream (Exploration & Production): Automated well control, pressure monitoring, and blowout prevention systems; predictive structural integrity analytics; AI-assisted offshore platform safety monitoring.
Midstream (Pipelines, Storage & Transport): SCADA-integrated AI systems controlling or monitoring pipeline operations; automated leak-detection and anomaly-detection platforms; pipeline and storage integrity management systems.
Downstream (Refining, Distribution & Retail): AI process control and safety monitoring in refineries; automated hazard detection at terminals; equipment integrity monitoring with automated response functions.
LNG: AI safety monitoring for liquefaction and regasification operations; automated detection and control across cryogenic infrastructure.
Power Generation & Utilities: AI control and safety systems for thermal, nuclear, and renewable generation; grid management, load forecasting, and real-time dispatch tools; automated fault detection, isolation, and restoration systems.
Energy companies should also assess whether systems fall within other Annex III categories—particularly AI deployed as safety components in regulated products, or biometric systems used for facility access control, health monitoring of employees, or workforce safety systems.
Core Compliance Obligations
Energy companies may act as providers (developing or placing systems into service), deployers (using third-party systems), or—commonly—both. Provider obligations are the most extensive, though deployers also carry meaningful duties.
For each high-risk AI system, providers must implement and document:
Governance and Oversight
- A documented, lifecycle-spanning risk management system (Article 9)
- Design-level human oversight enabling monitoring, intervention, and override (Article 14)
Technical Readiness
- Robust data governance, including data quality and bias controls (Article 10)
- Built-in logging and record-keeping (Article 12)
- Demonstrated accuracy, robustness, and cybersecurity appropriate to infrastructure risk (Article 15)
Regulatory Readiness
- Comprehensive technical documentation and Annex IV compliance files (Article 11)
- Clear instructions for use and transparency materials for deployers (Article 13)
- Completion of a conformity assessment and registration in the EU high-risk AI database before deployment (Articles 43, 49, 71)
What to Do Now
Compliance cannot begin without operational visibility. No company can satisfy the Act’s requirements without first knowing what AI systems are in use, where they are deployed, and who built or procured them. That inventory is the prerequisite for everything else—and for many companies, completing it alone will take longer than expected.
- Conduct a structured AI inventory across all EU-facing operations and business units—this is the non-negotiable first step
- Classify each system against Annex I and Annex III conservatively, with documented rationale for every in-scope and out-of-scope determination
- Map provider versus deployer status for each in-scope system, with particular attention to internally customized or integrated platforms
- Audit AI vendor contracts for compliance gap allocation, indemnification, and pass-through obligations—most existing contracts were not drafted with the Act in mind
- Assign cross-functional AI governance ownership (legal, engineering, operations, procurement) before the technical compliance work begins
- Initiate technical documentation for known high-risk systems in parallel with the broader inventory—do not wait for the inventory to close
- Assess human oversight architecture for existing systems and identify any redesign requirements
- Plan EU database registration timelines now; the registration process assumes all preceding documentation is complete
How We Can Help
Energy companies are finding that AI Act compliance is less about any single system and more about coordinating legal, engineering, operations, and procurement teams across the organization.
Baker Botts’ AI Governance practice advises energy clients on AI inventories, high-risk classification, technical documentation, conformity assessments, vendor contract strategy, and regulatory engagement. We support compliance efforts across all segments of the energy sector, with a focus on practical, defensible implementation ahead of the August 2, 2026 deadline.
Key Contacts
Samir A. Bhavsar | Partner, Privacy, Cybersecurity & AI Governance | Dallas | AIGP · CIPP/US · CIPP/E
Samir counsels organizations on the full spectrum of AI governance and regulatory compliance: assessing AI system risk under emerging legal frameworks, designing and implementing AI governance programs, advising on high-risk AI system obligations under the EU AI Act, and managing AI-related regulatory exposure in operationally complex environments. He advises energy clients on high-risk AI system compliance, governance program design, EU AI Act conformity assessment preparation, and the intersection of AI deployment with traditional energy regulatory risk.
Matthew R. Baker | Partner & Chair, Privacy, Cybersecurity & AI Governance | San Francisco | CIPM · CIPP/US · CIPP/E
Matt leads Baker Botts’ data privacy and cybersecurity practice, bringing deep expertise in global privacy compliance—GDPR, CCPA, and the full spectrum of international data protection frameworks—that forms the regulatory foundation for AI governance programs operating across jurisdictions. His practice covers data governance strategy, privacy regulatory compliance, cyber incident response, and the privacy dimensions of AI system design and deployment.
This client alert is for informational purposes only and does not constitute legal advice. It reflects the EU AI Act and publicly available guidance as of March 2026. Organizations should seek specific legal counsel regarding their individual circumstances.
ABOUT BAKER BOTTS L.L.P.
Baker Botts is an international law firm whose lawyers practice throughout a network of offices around the globe. Based on our experience and knowledge of our clients' industries, we are recognized as a leading firm in the energy, technology and life sciences sectors. Since 1840, we have provided creative and effective legal solutions for our clients while demonstrating an unrelenting commitment to excellence. For more information, please visit bakerbotts.com.

