Following last week's rollout of the AI Act, Meta has announced that it will not make its upcoming AI model, or future models, available in the European Union. The decision, first reported by Axios, is attributed to EU authorities' failure to provide legal clarity.
This move marks the latest development in the ongoing power struggle between Meta and the EU, which has introduced a series of regulations for AI and platform services in recent years. Meta's decision underscores the growing willingness of US tech companies to withhold products from European customers when they deem it necessary.
“We will be launching a multimodal Llama model in the coming months, but not in the EU due to the unpredictable European regulatory environment,” said a Meta spokesperson in a statement to Axios.
The AI Act, recently enacted by the European Union, represents a comprehensive legal framework for the use and development of artificial intelligence. The goal of the legislation is to ensure that AI systems are safe, transparent, and ethically sound. The AI Act imposes strict requirements on the development and implementation of AI technologies, particularly concerning risk assessment, data protection, and the avoidance of discrimination. These stringent regulations are designed to protect consumers and build trust in AI systems. However, critics argue that the complex and sometimes vague provisions of the AI Act could hamper the innovation and competitiveness of European companies.
Meta’s new multimodal models are designed to process video, audio, images, and text. They will be integrated into various Meta products, but European companies will not be able to use them, even though they are released under an open license. As a consequence, companies outside the EU that build on the new Meta AI models will also be unable to offer those services in Europe.
Similarly, US tech giant Apple announced last month that it would not release its Apple Intelligence features in Europe due to regulatory concerns.