
Newsom Signs Laws to Regulate Use of Deepfakes in Political Ads

September 20, 2024

California Gov. Gavin Newsom has signed a set of laws aimed at regulating the use of AI-generated deepfakes, particularly in political advertisements.

The laws come in response to a viral parody video of Vice President Kamala Harris, which gained widespread attention online and sparked concerns about the potential misuse of AI in politics. Critics, however, are now questioning whether the measures strike the right balance between free speech and protection against disinformation, as the Daily Wire reports.

The controversy began in late July when an anonymous X (formerly Twitter) account named “Mr Reagan USA” uploaded a satirical video that portrayed Harris in a less-than-flattering light. The video labeled the vice president “a ladder-climbing moron” and “the ultimate diversity hire.”

Despite being clearly marked as a parody with the caption “Kamala Harris Campaign Ad PARODY,” the ad quickly gained traction, racking up over 60 million views and resonating even with prominent Democrats.

Newsom’s Outrage and Legislative Response

The viral success of the video did not go unnoticed by California’s governor. Newsom expressed his outrage over the ad within two days of its release, calling for immediate action to curb what he described as manipulative content. "Manipulating a voice in an 'ad' like this one should be illegal," Newsom stated publicly, emphasizing that his administration was already working on legislation to address the issue.

On Tuesday, Newsom signed three new laws related to deepfakes and elections. These laws target AI-generated content in political campaigns, aiming to regulate what Newsom and his supporters see as deceptive practices. One of the laws explicitly bans "deceptive" deepfakes and allows individuals to file civil suits over such posts.

Laws Target AI Content in Political Campaigns

The second law mandates that any AI-generated political content must be labeled as such starting in January. This requirement is intended to provide transparency and prevent voters from being misled by manipulated content that appears authentic. Additionally, the third law compels social media platforms with over 1 million users to either label or remove deceptive AI-generated content within 72 hours of a complaint being lodged.

While the laws aim to crack down on misinformation, they also raise concerns about free speech. Critics argue that the legal definitions of "deceptive" content are vague and leave too much room for subjective interpretation. According to the new laws, materially deceptive content includes any media that is digitally modified in such a way that it could mislead a reasonable person into believing it is authentic. However, minor modifications, such as adjustments to brightness or background noise, are exempt.

Concerns About Free Speech and Subjectivity

Opponents of the legislation worry that the rushed nature of the bills—triggered by a satirical video—leaves little room for meaningful debate on how AI should be regulated. The subjective nature of determining what is "deceptive" could result in uneven enforcement and hinder legitimate political discourse. "Why waste your time with a politician unless they’re going to do something for you? That’s how easy it is to govern," Newsom said in defense of the laws. Critics counter that this approach could have unintended consequences.

The economic impact of over-regulating AI is also a major concern. Some argue that stringent laws could stifle innovation in the United States, particularly when competitors like China are advancing rapidly in AI development. The fear is that over-regulation could leave the U.S. trailing in this critical technological race, potentially damaging the country’s global competitiveness.

Irony in Kamala Harris Campaign’s Practices

Another layer of irony in this situation is the criticism aimed at the Harris campaign itself. A CNN headline noted that her campaign had previously been accused of misleading edits and captions in its own social media content. This raises questions about the consistency of applying such laws, and whether similar practices by political campaigns will also be scrutinized under the new regulations.

Moreover, the subjective nature of determining what constitutes "deceptive" content creates potential pitfalls in enforcement. There is concern that political bias or selective interpretation of the law could lead to unfair treatment of some individuals or organizations while allowing others to skirt the rules.

Potential Impact on Political Speech and Debate

The laws are expected to have far-reaching implications, particularly as the 2024 election season approaches. Political campaigns have increasingly relied on AI-generated content for advertising and outreach. While the goal of these laws is to protect voters from manipulation, there is growing concern that they may instead limit legitimate political expression.

The lack of clear, objective standards for what constitutes "deception" could lead to legal challenges and confusion among content creators. Some legal experts argue that the laws, while well-intentioned, are not precise enough to ensure fair enforcement, particularly when it comes to satire or parody—forms of expression that have long been protected under the First Amendment.

Conclusion: Balancing Free Speech and Misinformation Concerns

Gavin Newsom's decision to sign laws regulating deepfakes in political ads reflects growing concern over the impact of AI on elections and public discourse.

However, the rush to pass these laws in response to a viral parody video has left many questioning whether they strike the right balance between preventing misinformation and preserving free speech.

With the subjective nature of "deception" and potential economic ramifications, the debate over AI regulation in politics is far from settled.

As these laws take effect, their real-world implications will likely shape the future of political campaigning and media in the digital age.