Global Regulators Push for Standards on AI-Generated Images

Regulatory scrutiny of AI-generated images is increasing as the EU, the US, and other jurisdictions propose new watermarking and detection standards. The aim is to counter deepfakes and misinformation while still fostering innovation, underscoring the urgent need for international cooperation on AI policy.

Deepfakes have rapidly become a significant issue, with AI-generated images deceiving viewers and spreading false information. To mitigate these risks, authorities are considering standards that require AI-generated images to carry identifiable watermarks. Detection tools are also under development to help differentiate between authentic and AI-created visuals. Policymakers believe that these measures will promote transparency and trust in digital content.
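To make the watermarking idea concrete, here is a minimal sketch of one simple technique sometimes used to mark images: embedding a payload in the least significant bits of pixel values. This is purely illustrative (the pixel list and payload are invented for the example); the standards under discussion favor robust, cryptographically signed provenance metadata rather than fragile bit-level marks.

```python
def embed_bits(pixels, bits):
    """Embed a bit sequence into the least significant bits of pixel values."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b  # clear the LSB, then set it to the payload bit
    return out

def extract_bits(pixels, n):
    """Read back the first n embedded bits."""
    return [p & 1 for p in pixels[:n]]

mark = [1, 0, 1, 1]                          # watermark payload (illustrative)
stamped = embed_bits([200, 13, 87, 54, 99], mark)
print(extract_bits(stamped, 4))              # → [1, 0, 1, 1]
```

Because each pixel changes by at most one intensity level, the mark is invisible to viewers, but it is also easily destroyed by compression or resizing, which is why real proposals lean on signed metadata and statistical detectors instead.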

The conversation isn’t just about regulation, though. It’s also a call for cross-border coordination, ensuring that policies are harmonised and implementable on a global scale. This is crucial for maintaining a level playing field and preventing any one region from being disproportionately exposed to AI-related risks. Discussions are already under way around the European Union’s proposed AI Act and similar legislative efforts in the United States.

For more detail on the European Union’s efforts, see its proposed European Approach to Artificial Intelligence. In the United States, the National Institute of Standards and Technology (NIST) has also been actively developing AI standards.