Members of the European Parliament have voted in favor of creating a new legal framework for the music streaming sector. The framework would require platforms to inform listeners when the songs they are playing are artificial intelligence (AI)-generated, and it urges that deep fakes be curbed. It would also require streaming platforms to make their algorithms and recommendation tools transparent, to prevent unfair practices like the manipulation of streaming figures. Such practices are allegedly used to reduce artists’ fees. The framework is meant to address revenue allocation imbalances in the music streaming industry, which leave artists with low compensation.
This isn’t the first time the EU has flagged its concerns about deep fakes. Back in September last year, European Commission Vice President Vera Jourova raised the alarm about the potential of realistic AI products to create and disseminate disinformation. She urged platforms to put effective safeguards in place, particularly in the context of elections.
Concerns about deep fakes influencing elections are rising in other parts of the world as well. In India, the IT Ministry recently announced plans to update its platform governance laws (the IT Rules, 2021) to regulate generative AI and artificial intelligence companies.
Is content labeling the solution?
According to Gautham Koorma, a researcher at UC Berkeley, labeling content as it is uploaded to a platform can pose a variety of challenges. “The computational complexity associated with [labeling], when you’re uploading a video, every time you have to analyze it using many models, plus even if you do that, the accuracy is relatively not at the level that they would want to productionize,” he explained at MediaNama’s recent discussion on deep fakes and democracy.
Further, even if companies were to spend the money and put labeling systems in place at the source, the solution would only be effective on the streaming service itself. Koorma explained that once a piece of content is uploaded to social media, it becomes much harder to detect whether or not it is a deep fake. “When you upload an audio clip to Facebook, or when you send it on WhatsApp, each of these platforms do something that’s called transcoding, essentially changing the bit rate, changing some properties of the media. And once that happens, we see that the accuracy of detection drops a lot,” he explained. So, even if YouTube Music labels a song as AI-generated, if that song is downloaded using one of the plethora of YouTube downloaders circulating on the internet and re-uploaded to a social media platform, the purpose of labeling would be defeated. The re-upload would also make it harder for detection tools to accurately tell whether the song was AI-generated or not.
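To see why transcoding breaks labeling, consider a minimal sketch (not any platform’s actual pipeline). It generates a toy PCM audio clip, crudely mimics lossy re-encoding by discarding bit depth, and shows that a byte-level fingerprint of the file no longer matches, so any label or watermark tied to the exact file is lost:

```python
import hashlib
import math
import struct

# One second of a 440 Hz tone as 16-bit PCM samples: a stand-in for an
# uploaded audio clip (the tone itself is arbitrary).
RATE = 16000
samples = [int(32767 * math.sin(2 * math.pi * 440 * t / RATE)) for t in range(RATE)]
original = struct.pack(f"<{len(samples)}h", *samples)

def transcode(pcm: bytes) -> bytes:
    """Crudely mimic lossy transcoding: zero out the low 8 bits of each
    16-bit sample. Real platforms re-encode with codecs like AAC or Opus,
    but the effect is similar -- the stored bytes, and subtle properties
    of the signal that detectors rely on, change."""
    vals = struct.unpack(f"<{len(pcm) // 2}h", pcm)
    degraded = [(v >> 8) << 8 for v in vals]  # discard low-order bits
    return struct.pack(f"<{len(degraded)}h", *degraded)

reencoded = transcode(original)

# The fingerprint of the re-encoded clip differs from the original's,
# even though the two sound nearly identical to a human listener.
print(hashlib.sha256(original).hexdigest() == hashlib.sha256(reencoded).hexdigest())
```

This prints `False`: exact-match identification fails after a single re-encode, which is why detectors must instead rely on fragile statistical features of the signal, and why their accuracy drops as those features are degraded.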
Also read:
- Why Tech Companies In Europe Want EU Lawmakers To Revise The Recently Adopted AI Act
- YouTube Announces New Measures For AI-Generated Content, Users Can Ask For Takedown Of Deepfakes
- 11 Talking Points From MediaNama’s ‘Deep Fakes And Democracy’ Discussion #NAMA