The European Union has introduced a voluntary code of practice for general-purpose artificial intelligence. The guidelines are intended to help companies comply with the bloc's AI Act, which is expected to come into force next month.
The new rules target a small number of powerful technology companies such as OpenAI, Microsoft, Google and Meta, which develop foundational AI models used across many products and services.
Although the code is not legally binding, it sets out requirements on transparency, copyright protection and safety. Officials say companies that adopt the code will benefit from a "reduced administrative burden and increased legal certainty".
Focus on transparency and risk
Under the code, companies must detail the content used to train their AI systems, an issue long raised by publishers and rights holders. Companies will also have to carry out risk assessments to flag potential misuse, including scenarios such as the development of biological weapons.
The rules stem from the broader AI Act, adopted last year. Although certain parts of the law take effect on August 2, penalties for non-compliance will not be enforced before August 2026. Violations could lead to fines of up to 35 million euros ($41 million) or 7% of global revenue.
The European Commission said the new guidelines were aimed at making advanced AI systems "not only innovative but also safe and transparent".
Henna Virkkunen, the Commission's executive vice-president for tech sovereignty, security and democracy, said in a statement: "Today's publication of the final version of the Code of Practice for general-purpose AI marks an important step in making the most advanced AI models available in Europe not only innovative but also safe and transparent."
Some technology companies are still reviewing the code. OpenAI and Google said they were studying the final text.
CCIA Europe, a trade group representing Amazon, Google and Meta, criticized the guidelines. The group said the code "imposes a disproportionate burden on AI providers", The New York Times reported.
Industry resistance and lobbying concerns
Critics argue that the final version of the code was watered down to appease large companies.
Nick Moës, executive director of The Future Society, said technology companies had successfully pushed for softer rules. "The lobbying they did to change the code really let them determine what is acceptable to do," he told The New York Times.
Despite the pushback, the Commission shows no sign of delaying implementation.
Earlier this year, more than 40 European companies, including Airbus, Mercedes-Benz, Philips and Mistral, signed an open letter urging a two-year postponement.
They cited "unclear, overlapping and increasingly complex" EU regulations that they say threaten Europe's AI competitiveness.
The Trump administration has also weighed in. U.S. Vice President JD Vance, speaking in Paris earlier this year, warned against "excessive regulation" that could stifle innovation.
Europe remains determined to press ahead with AI regulation, even though it relies heavily on systems developed abroad.
The voluntary code marks one of the first concrete steps in turning its broader AI legislation into action.