Journalists and Artists Lose Out to AI Corporations as Trump Fires Copyright Director

Senior reporter Karina Montoya warns that Shira Perlmutter’s firing reflects Big Tech’s campaign to undermine copyright safeguards, as AI giants seek to freely exploit creative works without consent or accountability.


The abrupt firing of U.S. Copyright Office Director Shira Perlmutter by President Trump, following the agency’s draft report on copyright and generative artificial intelligence, marks a new chapter in the battle to prevent Silicon Valley from advancing an AI business model built on using copyrighted works to train its systems without the consent of — or compensation to — their creators.

Perlmutter’s firing has sparked speculation about the motivations behind it. Some saw it as a power play by Elon Musk, given both his close relationship with Trump and his new AI business venture. More recent reports, though, show that Google and Meta also paid lobbyists to lead a campaign against Perlmutter while her office prepared its AI report. What’s clear is that the dominant AI corporations don’t want copyright law to stop them from using other people’s work for their own private purposes.

In the draft report, the Copyright Office focused on whether AI companies should compensate copyright holders for using their works to train AI models, following a 2023 public consultation (in which Open Markets participated). That question is also at the heart of more than 20 lawsuits making their way through U.S. courts. The Copyright Office’s opinion is not legally binding, but courts routinely rely on such expert research to make decisions.

Google, Meta, Amazon, and Microsoft, as well as some of their AI rivals, fiercely contend that fair use should apply to the internet content and databases they use to build their AI models. They also argue that enforcing copyright law or implementing a new content licensing regime would impede “innovation” and stall progress on generative AI.

Critics of copyright enforcement for the AI market often point to how some corporations have used the law to fortify their market power. In recent decades, for instance, U.S. copyright law has often benefited dominant entertainment companies rather than the original individual creators.

In the draft report, the Copyright Office said the first key question in assessing fair use of copyrighted works is what the AI model will ultimately be used for. For instance, using copyrighted books to train an AI model to remove harmful content online is very different from using those same books — or images or videos — to train an AI model to produce content “substantially similar to copyrighted works in the dataset.”

The agency also calls for developing a consent framework that goes beyond the opt-out standard, under which tech companies collect user data first and ask permission later to profit from it. Dominant AI corporations have treated this standard, which puts the onus on users to opt out of data collection, as a license to gather, store, and profit from copyrighted works. Even when creators specifically opt out of the use of their copyrighted materials, AI corporations may stop further collection but continue using works they have already collected.

The report also warns that AI models trained on copyrighted works can hurt original creators’ property rights in various ways. This includes by preventing them from licensing the use of their works to others, and by flooding the market with stylistic imitations that diminish the value of their original works.

The Copyright Office’s guidance came at a pivotal time for AI regulation around the world. In February, in Thomson Reuters v. ROSS, a U.S. federal court rejected a fair use defense for the use of copyrighted works to train AI and machine learning systems, setting a potentially important precedent for similar cases involving generative AI.

In the UK, a massive campaign by news media and creators to raise awareness of the same risks the U.S. Copyright Office describes led the UK Parliament to reconsider changes in legislation that would have hurt creators and journalism. Last week, the California Assembly passed the AI Copyright Transparency Act, a first step toward transparency and accountability for the use of copyrighted works in AI model training.

In both cases, though, legislatures are still placing too much of the burden on creators to detect and challenge misuse of their works in AI. Big Tech’s data monopolies in AI pose a real and growing threat to creative industries and journalism. The time has come to build a solid new framework that makes copyright work for the creators it’s meant to protect in the first place.

This article was featured in The Corner Newsletter: May 23rd, 2025.