Artificial intelligence (AI) is getting smarter every day. But who decides how we use this powerful new technology? That’s a huge question facing the world.
Here are the key points:
- Countries are creating new laws to control AI and keep it safe
- The European Union (EU) just made extensive rules about using AI responsibly
- The United States (US) and other nations don’t have as many official regulations yet
- Experts disagree on how much to regulate AI to allow innovation while preventing risks
The EU Takes the Lead
Earlier this week, leaders from all 27 European Union countries approved the EU’s first AI laws. So what do these new rules say?
According to Margrethe Vestager, the EU’s competition commissioner, the goal is to “regulate the use of technology” rather than limiting AI development.
For uses of AI that could seriously affect people's lives, such as:
- Job interviews
- Getting a mortgage loan
- Deciding medical treatments
The new EU law will require AI systems to:
- Involve human oversight
- Explain their decision-making logic clearly
That way, essential choices affecting citizens are never left entirely up to computer code.
US and Others Playing Catch-Up
While the EU moves ahead with AI regulation, the United States federal government has been much slower in its action.
Miles Taylor, an AI expert advising US lawmakers, says the European and American approaches are "still quite far" apart.
He points out that many US tech entrepreneurs worry that “overly regulating AI” could hinder helpful, even “life-saving” innovations.
So, for now, Congress has only released a basic framework that leaves AI oversight to existing regulators in their respective industries, such as:
- Healthcare oversight agencies for medical AI
- Financial watchdogs for banking/lending AI
- And so on
However, critics argue that this piecemeal strategy lacks the comprehensive safeguards of the EU's new law.
Global Cooperation or Conflict?
Some experts call for an international AI authority to align regulations across countries and companies. But national self-interests may prevent this.
For example, the US seems reluctant to enforce stricter rules out of fear China could then “speed ahead” in developing AI first.
So, for the time being, we have a regulatory patchwork. The EU has established baseline AI requirements. The US favors a sector-by-sector approach. Tech hubs like China and the UAE remain fairly unrestricted.
Will nations cooperate on unified AI governance to manage risks? Or will competing for supremacy in this transformative technology breed conflict? Only time will tell.
Creativity Unleashed or Chaos?
One area where cooperation may prove challenging is around intellectual property rights. We’re already seeing disputes, like:
- Actress Scarlett Johansson suing after her voice was recreated in an AI system without consent
- Bands like Pink Floyd, on the other hand, encouraging AI remixes and animations of their classic albums
As AI capabilities expand rapidly, controlling and regulating its creative capacities grows more complex. How do we promote innovation while still protecting rights?
The “Who controls AI?” dilemma impacts everything from job security to artistic expression. Resolving this issue will require careful cooperation between nations, companies, and citizens worldwide. How we navigate this challenge will shape our shared future significantly.
What should our priorities be as we develop this powerful new technology responsibly?