Microsoft Warns Of The Use Of AI To Disrupt Elections

Tech giant Microsoft has expressed concern over the use of AI-generated content in upcoming elections in major countries such as South Korea, India, and the U.S. The AI content, as reported by the Guardian, is purportedly produced in China.

Takeaway Points:

  • Microsoft has raised concerns over the involvement of AI-generated content in upcoming elections in countries like the U.S., India, and South Korea.
  • The strategies these actors employ involve creating AI-generated fake images and audio clips on contentious issues such as immigration and racial tensions.
  • The report stated that China had already attempted an AI-generated disinformation campaign in the Taiwan presidential election in January.

AI to Disrupt Elections

China will attempt to disrupt elections in the US, South Korea, and India this year with artificial intelligence-generated content after making a dry run with the presidential poll in Taiwan, Microsoft has warned.

The US tech firm said it expected Chinese state-backed cyber groups to target high-profile elections in 2024, with North Korea also involved, according to a report by the company’s threat intelligence team published on Friday.

“As populations in India, South Korea, and the United States head to the polls, we are likely to see Chinese cyber and influence actors, and to some extent, North Korean cyber actors, work towards targeting these elections,” the report reads.

Microsoft said that “at a minimum,” China will create and distribute, through social media, AI-generated content that “benefits their positions in these high-profile elections.”

The company added that the impact of such AI-made content has so far been minor, but warned that this could change.

“While the impact of such content in swaying audiences remains low, China’s increasing experimentation in augmenting memes, videos, and audio will continue and may prove effective down the line,” said Microsoft.

AI’s Impact on Elections

Microsoft said in the report that China had already attempted an AI-generated disinformation campaign in the Taiwan presidential election in January. The company said this was the first time it had seen a state-backed entity using AI-made content in a bid to influence a foreign election.

A Beijing-backed group called Storm 1376, also known as Spamouflage or Dragonbridge, was highly active during the Taiwanese election. Its attempts to influence the vote included posting fake audio on YouTube of election candidate Terry Gou, who had bowed out of the race in November, appearing to endorse another candidate. Microsoft said the clip was “likely AI generated”. YouTube removed the content before it reached many users.

The Beijing-backed group also pushed a series of AI-generated memes about William Lai, the pro-sovereignty candidate opposed by Beijing who ultimately won the election, levelling baseless claims that he had embezzled state funds.

Increasing Depth and Complexity

There was also an increased use of AI-generated TV news anchors, a tactic that has also been used by Iran, with the “anchor” making unsubstantiated claims about Lai’s private life, including fathering illegitimate children.

Microsoft said the news anchors were created by the CapCut tool, which is developed by Chinese company ByteDance, the owner of TikTok.

Microsoft added that Chinese groups continue to mount influence campaigns in the US. It said Beijing-backed actors are using social media accounts to pose “divisive questions” and attempt to understand issues dividing US voters.

“This could be to gather intelligence and precision on key voting demographics ahead of the US Presidential election,” said Microsoft in a blog post accompanying the report.

Global Concerns and Reactions

While Russian state actors have been noted for their disinformation tactics, China’s rapid improvement in this arena is raising alarms internationally. The use of AI-generated news anchors and the expansion of disinformation content targeting Taiwan demonstrate a concerted effort to refine and amplify these influence operations.

Despite Beijing’s denials of producing and spreading false information, the evidence presented by cybersecurity researchers paints a different picture, one of a strategic and well-resourced campaign to sway public opinion and electoral outcomes globally.
