As artificial intelligence becomes increasingly integrated into business operations, governments around the world are stepping up oversight and launching a wave of regulations to address AI's risks. Global businesses now face a patchwork of complex and often conflicting legal requirements.
In-house legal departments are uniquely equipped to lead organizational AI governance, given their understanding of regulatory risk, their operational context, and their experience with cross-functional collaboration.
Regulatory trends
AI regulation spans several legal areas, including comprehensive national and regional laws, sector-specific guidance, and broader frameworks such as consumer protection, employment law, privacy, and security. In the absence of comprehensive federal legislation, U.S. federal agencies have begun to shape AI oversight through rulemaking and enforcement actions.
Earlier this year, the Federal Trade Commission filed a complaint against Air AI for "AI washing," the practice of misrepresenting the AI capabilities of a product or service. Previously, the Consumer Financial Protection Bureau issued a rule regulating the use of AI and algorithms in home appraisals and valuations.
The Equal Employment Opportunity Commission also previously settled with iTutorGroup over allegations that the company programmed its AI to discriminate against certain job applicants. These cases demonstrate growing federal interest in corporate AI practices pursued through existing consumer protection and anti-discrimination frameworks.
Nearly every state has introduced or enacted some form of AI legislation applicable to various industries and sectors. Most notably, California has led the charge with SB 53, the Transparency in Frontier Artificial Intelligence Act, a comprehensive law aimed at advanced AI models. Other states have implemented AI laws of their own, many of which regulate the use of AI in specific industries or sectors.
Globally, countries and regions are also making progress toward comprehensive AI laws. For example, the European Union's sweeping AI Act has been rolled out gradually across Member States since its adoption in 2024, with most of its substantive requirements taking effect in 2026.
Italy is the first EU country to implement a national AI law aligned with the EU framework. In China, newly effective generative AI labeling rules apply to internet information service providers and require AI-generated content to carry both explicit and implicit labels, as appropriate.
Companies that use AI and whose products or services span multiple jurisdictions need to understand which of these regulations apply to them. In-house legal teams are naturally equipped to interpret overlapping legal frameworks, anticipate and respond to enforcement trends, and integrate AI governance into overall business strategy. Their proximity to the business, combined with their legal expertise, positions them ideally to help companies navigate this landscape.
Partnerships for AI governance
Cross-functional collaboration. Legal's working knowledge of both the business and the underlying technology helps unify key risk information and regulatory considerations, enabling a more coordinated approach to implementation.
Because legal teams routinely work across departments, stakeholders, and subject matter experts, legal departments are uniquely positioned to build or obtain a comprehensive inventory of AI uses and risks. This cross-functional reach allows legal teams to coordinate AI governance policies that align with regulatory obligations and the company's overall business strategy. It also strengthens their ability to oversee and evaluate the use of AI in due diligence and in contracting with vendors and third parties.
Privileged protections. As AI accelerates and optimizes internal processes and external customer offerings, companies are re-evaluating their own systems and procedures to respond more quickly, and at greater scale, to growing customer needs. The development and implementation of AI is therefore highly proprietary in nature.
With in-house legal teams included in these conversations, decision-makers can speak candidly about AI innovation and its legal risks under the protection of attorney-client privilege. That privilege can help shield competitively sensitive business strategy, rather than leaving such discussions as ordinary, discoverable business communications.
Managing AI at the board level. Senior legal leaders engage with the board and management teams, translating complex legal and technical issues into strategic business insights.
This experience enables legal departments to elevate AI risks to governance conversations, further guiding the development and implementation of AI frameworks within the enterprise. Legal teams can use this strategic position to ensure that enterprise-wide AI initiatives are not only compliant, but also aligned with broader business objectives.
Operationalizing AI
In-house legal teams can position themselves as strategic business partners and prepare their organizations for responsible AI in several ways.
Develop a comprehensive internal governance framework. Establish clear policies that define responsible AI implementation, and consider ethical implications across all business units.
Perform initial AI assessments. Conduct impact assessments to understand the scope, context, and decisions that current or future AI tools will influence within the enterprise. Understand AI model limitations and maintain up-to-date documentation on use case justifications.
Stay informed. Monitor the use and impact of AI within your organization by tracking changing regulations, raising awareness of policy updates among leaders and employees, and understanding how third-party vendors are using AI in ways that could implicate your business.
Protect confidentiality and trade secrets. Adopt safeguards against sensitive data leaks and steps to preserve privilege. Use AI tools that encrypt sensitive data and enforce access controls.
Stimulate innovation and competition. Corporate counsel should draw on their AI fluency to guide teams toward ethical innovation and compliant implementation, applying the safeguards necessary to support strategic growth while minimizing legal exposure.
This article does not necessarily reflect the views of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax and Bloomberg Government, or its owners.
Author information
Whitney Ford is corporate counsel at Sanofi, advising on US market access strategy, regulatory innovation and AI integration.