Global AI Regulation Changes: Impacts and Future Directions


In 2025, artificial intelligence (AI) regulation is undergoing significant change worldwide, driven by the major economic and technological powers. National policies in the United States, the European Union, the United Kingdom, and China are shaping the direction of AI development. This article explores the key factors driving global AI regulation and their potential impacts.

Key factors that could significantly impact global AI regulations

Musk’s influence on U.S. policy

Currently, the U.S. lacks a comprehensive federal AI law, relying instead on a patchwork of state and local regulations. AI-related bills have been introduced in 45 states, as well as in Washington D.C., Puerto Rico, and the U.S. Virgin Islands.

U.S. President Donald Trump has appointed Elon Musk as co-lead of the Department of Government Efficiency. Musk has already influenced AI and cryptocurrency policy, and his expertise may help shape federal AI regulation. While Musk has previously warned of AI's potential risks to humanity, his involvement could drive more strategic risk-mitigation measures and foster legislation that supports AI development.

The EU AI Act

The EU AI Act, which came into force in 2024, was the first comprehensive legislation to regulate AI systems based on risk levels. AI applications such as self-driving cars, medical devices, and remote biometric identification are classified as high risk and subject to stricter oversight. The regulation affects major U.S. technology companies developing advanced AI, creating tensions that could hinder innovation in Europe.

In December 2024, the EU's newly established AI Office released a second draft of the General-Purpose AI Code of Practice for models like OpenAI's GPT. Set to begin taking effect in February 2025, the updated framework will cover high-risk AI applications, including remote biometric identification, loan decisions, and educational scoring.

European tech leaders worry that strict EU enforcement against U.S. tech firms could prompt retaliatory action from Trump, which may in turn pressure the EU to ease its approach, including antitrust enforcement targeting U.S. tech giants.

UK Copyright Review

The U.K. has been wary that legislation as strict as the EU's AI Act could prove too restrictive. Even so, Keir Starmer's government plans to draft its own AI regulations focused on model developers, and it is expected to take a principles-based approach rather than adopting the EU's risk-based framework.

Additionally, the government is planning measures to regulate the use of copyrighted content in AI training. The proposal would create an exception to copyright law for AI model training while allowing rights holders to opt out, an approach intended to balance transparency and access for developers with protection for creators.

U.S.-China tensions may escalate further

As governments worldwide seek to regulate AI, tensions between the U.S. and China persist. Trump has pursued strict policies toward China, particularly on economic matters, which could affect AI development. Both nations are advancing AI systems that may surpass human intelligence and operate independently, posing potential risks. To mitigate these dangers, each country will need robust AI safety regulation that keeps AI development beneficial to humanity.

2025 marks a pivotal moment for global AI regulation. The U.S. is likely to adjust its policies under Elon Musk’s influence, while the EU is enforcing stricter controls. The U.K., meanwhile, aims to balance innovation with copyright protection. At the same time, escalating U.S.-China tensions could fuel AI competition. While regulation is crucial, international cooperation remains essential for ensuring a safe and sustainable future for AI technology.

Source: CNBC
