That unregulated proliferation of AI-altered messaging is making some Democrats uncomfortable, and could spur movement on Beacon Hill to tamp down on how and when campaigns can use AI-generated content.
House leaders are teeing up a pair of bills for votes Wednesday, including one that would bar campaigns from spreading “materially deceptive” synthetic media intended to “injure [a] candidate’s reputation” within 90 days of an election. A Republican-led proposal would require campaigns and PACs to disclose when AI is deployed in a campaign “communication.”
“People want to have assurances that what they’re consuming and what they’re reading is what their eyes tell them it is,” said Representative Dan Hunt, chair of the committee on election laws.
“When you open up social media you almost expect it to be fake or AI. You see someone throw a football from one end zone outside of the stadium, it’s fake,” the Dorchester Democrat said. “But using someone’s image or likeness . . . it’s not always easy to tell.”
Hunt said the House’s goal is to have the law in place for this fall’s election. Some Senate leaders, however, said they’re willing to go further in regulating how AI is deployed, including barring its use year-round.
“If we don’t stop it, I think this is going to be a part of campaigns and something, I believe, is out of bounds,” said Senator Barry Finegold, who cochairs the Legislature’s emerging technologies committee.
The Andover Democrat pointed to the various AI-generated content targeting Healey as an example of why lawmakers should place limits on AI-generated imagery before this fall’s election.
“When we saw what happened with Governor Healey, we want to move this quicker,” he said.
About half of states currently regulate the use of AI in political messaging, and some, both Republican-led (Texas) and Democratic-led (Minnesota), outright ban such content close to an election. Massachusetts is not among them.
In 2024, state lawmakers passed, and Healey signed, a measure that was designed to prevent so-called deepfakes in campaign videos and ads by barring campaigns from spreading deceptive images with “actual malice” — in other words, knowing that something was false and publishing it anyway.
But the measure was designed to be only temporary. It expired a year ago, and given the timing of its passage, the law was never actually in effect for any state election.
One of the bills the House is planning to vote on this week includes similar language, without the expiration date. It also includes exemptions for materials that “constitute satire or parody.”
Senator Michael Moore, a Millbury Democrat who sponsored the 2024 measure the Legislature passed, also offered a similar bill this session. He said the first iteration was temporary because some legislators were leery of language that would have a “long-term effect.”
But he said there’s more urgency to pass something as campaign attacks increasingly shift away from using unflattering, albeit real, photos of an opponent to publishing images or audio created out of whole cloth.
“Right now they’re testing the waters,” Moore said of the Republican attacks on Healey.
Those examples, he said, show “exactly why we should have something in place right now. They can say it’s parody or comedy. But people who are watching it don’t necessarily know that.”
Healey said in Worcester on Tuesday that she will not use AI in her own campaign messaging and that she supports the legislation moving through Beacon Hill.
“The use of AI when it comes to political campaigns and the way that it has been contorted and abused and misused, I understand why the Legislature is taking that up,” she said.
Brian Shortsleeve, one of three GOP candidates vying for the party’s gubernatorial nomination, has released videos depicting Healey as the Grinch and a vampire. The Commonwealth Unity PAC, a separate group backing Shortsleeve, posted what appears to be an AI-generated video on Christmas morning depicting a woman resembling Healey rushing to open a gift, only to find coal.
Shortsleeve’s campaign said its videos are “clearly” parodies that are designed to make a point, not intentionally deceive viewers. One highlights news stories about how Healey’s former aide, LaMar Cook, received a $31,000 payout after his arrest on cocaine trafficking charges. The video shows Cook, in one moment, standing with his hands behind his back in a real image taken in a courtroom, before an AI-generated video begins depicting him dancing with joy as money drops from the ceiling.
Shortsleeve’s campaign said its AI videos collectively have 350,000 views online.
“Based on the feedback we receive, voters get the joke and they love it,” said Patrick Nestor, a spokesperson for Shortsleeve’s campaign.
Nestor said the campaign’s policy is to include a disclaimer “any time AI is used to represent a real person in a way that would not be obvious to a reasonable viewer.”
Yet most of the clips from Shortsleeve’s campaign don’t include such a disclosure, including a satirical “radio ad” released in late January on the day Healey formally launched her reelection campaign. The campaign did include a disclaimer in a video posted Friday, telling viewers it features an AI-generated replica of Healey’s voice listing off Massachusetts companies that have reported laying off employees.
The content is parody, the note says, “but the information presented is factual.”
“Brian supports clear rules that respect the constitutional right to free speech, including the Supreme Court’s clear protection of the use of parody and satire, and our campaign already follows those principles,” Nestor said in a statement.
Nestor said the firm Swiftkurrent has produced the campaign’s AI-generated content. Shortsleeve has paid the company more than $14,000 since the summer.
The Commonwealth Unity PAC backing Shortsleeve operates separately from his campaign. But it, too, has leaned heavily into AI-generated content, typically with Healey as the target, and is already spending significant sums.
The super PAC reported spending nearly $430,000 by the end of December, including $34,000 on production costs and digital advertising opposing Healey. In October, it released a digital ad and social media post that included images of the Democratic incumbent wearing a sombrero and criticizing her for, among other things, providing “free hotels for migrants.”
Lydia Goldblatt, a Republican operative and the chair of the super PAC, did not respond to multiple requests for comment.
AI’s proliferation is happening far beyond Massachusetts politics. President Trump’s media team has long pumped out AI imagery. Some of it consists of cartoonish versions of the president himself, while critics say other pieces, such as an edited photo of a civil rights attorney, have distorted reality in dangerous ways.
California Governor Gavin Newsom has increasingly wielded AI, too, mainly as a means to mock Trump. New York Governor Kathy Hochul, another Democrat, is pushing to bar campaigns from spreading AI-generated images of people ahead of an election.
So far, 26 states have passed some type of law targeting the use of “deepfake” or “deceptive” media in politics, with the vast majority requiring disclosure when it’s used, according to the National Conference of State Legislatures, a bipartisan organization that tracks legislation nationwide.
Some states, such as Rhode Island and New Hampshire, passed their laws in the wake of a robocall that circulated in New Hampshire mimicking President Joe Biden’s voice and urging voters to skip the state’s presidential primary in January 2024. The political consultant who sent it was ultimately acquitted on charges of voter suppression and impersonating a candidate.
The spread of AI through campaign messaging has worried those who study it.
“When people can’t tell what’s real and what’s fake, in order for elections to establish credibility, we should eliminate the fake stuff,” said Jim Glass, a senior research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory.
Matt Stout can be reached at matt.stout@globe.com. Follow him @mattpstout.