Can AI Threaten the Integrity of Creative Arts?

Picture yourself as a music producer with a penchant for deception. You devise an ingenious plan. When you upload music, Spotify pays you royalties based on the number of streams. You realize that by creating and managing fake accounts, you can stream your own songs on repeat, earning more in royalty payouts than the operation costs to run. It’s a money-printing machine.

However, an obstacle arises. Spotify actively works to detect and eliminate fraudulent streams. Scaling the operation risks raising suspicion about why certain songs have suddenly become so popular. Fortunately, a solution exists: artificial intelligence.

This is the method reportedly employed by a US music producer charged this month with large-scale streaming fraud. The indictment alleges that he bought up to 10,000 AI-generated songs a month from an AI music company, uploaded them to various streaming services, and spread his fake streams across them to evade detection. The scheme allegedly netted him $12 million over five years.

When the period covered by the indictment began, there were only about five AI music companies, and I was the CEO of one of them (not the one involved in this case). Someone running this scheme today wouldn’t even need to work with an AI music company or spend any money on tracks. It is now possible to generate as many songs as you like with AI, for free.

Many people are thrilled by the newfound ability to effortlessly create any kind of digital media, from images and video to text, speech and music, at a quality that can rival the best human output. This could mean accessible AI tools, a greater volume of art, and perhaps even AI that outpaces human creativity. But the transformation also brings serious challenges for which we currently have no solutions.

Fraud, of which more instances are likely to surface, is just the beginning. Deepfakes are another alarming problem. Reports indicate that children from the US to South Korea are being sent deepfaked pornographic images of themselves by their peers. Deepfaked pornographic images of Taylor Swift garnered 27 million views on X (formerly known as Twitter) before being removed. People could create such content before, but it was typically difficult to produce. Thanks to AI, that obstacle has been eliminated.

The flood of AI-generated content also jeopardizes our ability to distinguish truth from falsehood. A Google image search for Beethoven returns an AI-generated image as the top result. AI-generated images falsely showing Taylor Swift fans backing Trump were shared by the former president himself. AI chatbots, which some view as replacements for Google, routinely blend fact with fiction, a flaw common to all of them. Add to this the difficulty of academic assessment in the age of ChatGPT, and the likely impact on creative jobs of an impending surplus of content.

Having spoken with numerous people at AI firms, I am convinced that AI could allow anyone to author a novel or influence the outcome of an election. Long before we confront dystopian scenarios, we are entering a period in which anyone can generate images, fabricate audio or write articles within seconds. As someone who generally supports technology, I believe it is vital for even its advocates to ask: have we truly considered the implications of these advancements?

Ed Newton-Rex is the CEO of the AI non-profit Fairly Trained and a composer.
