CfEPS in the media

Heavenly AI Promises and Real-World Side Effects

With OpenAI’s latest “innovations” at the forefront, an increase in disinformation seems inevitable, and current regulations and technical solutions often fall short. Jürgen Pfeffer of TU Munich and Matthias Pfeffer from the Council for European Public Space argue that AI companies must prove the benefits of their products before they cause further harm to democracy.


OpenAI recently introduced “Voice Engine,” a tool that can clone a voice from a 15-second sample. This follows reports of voters in New Hampshire being misled by a fake call mimicking Joe Biden’s voice. While OpenAI acknowledges the risks and promises caution, its text-to-video tool “Sora” has already faced skepticism regarding its capabilities. Despite these advancements, the spread of misinformation remains a critical concern, especially with numerous significant elections approaching. Industry leaders such as OpenAI’s Sam Altman and Google’s Sundar Pichai call for regulation, but political and lobbying challenges hinder progress. While Big Tech proposes internal ethics committees and technical fixes, experts argue for more rigorous testing and clear rules to mitigate AI’s risks and secure its benefits.

Supported by grants from the European Cultural Foundation.

The Council for European Public Space is registered in the Transparency Register of the European Commission.