Big Tech’s New Rules of the Road for AI and Elections

  • Legislators and election officials share deep concerns about the impact of misinformation created by AI technology.
  • A group of the world’s largest technology companies have signed a voluntary accord to address this problem.
  • Threats posed by AI-generated content can only be contained by the companies behind the tools used to create and distribute it. The impact of this voluntary commitment will be determined by the resources they devote to upholding it.
    Partisans of all stripes may disagree on many election questions, but they share an unease about the new kinds of disruption artificial intelligence could bring to upcoming elections. Since the start of the year, more than a dozen states have introduced bills to combat AI-generated threats such as deepfake videos, images and robocalls. But states have little control over the technology that makes any of these possible in the first place.

    Earlier this month, major tech companies including Amazon, Google, Meta, Microsoft, Adobe and TikTok signed off on “A Tech Accord to Combat Deceptive Use of AI in 2024 Elections.” Because these firms build digital infrastructure used around the world, their pledge has ramifications for elections in which an estimated 4 billion voters will participate.

    The companies hope to limit the risks posed by deceptive content at every stage: its generation, its often unclear provenance, and the dissemination of AI-created misinformation across platforms. No other sector has the collective resources to take this on. Unlike earlier technology breakthroughs, AI was developed by private companies.

    Social media is already awash with misinformation about candidates and elections. Can these companies do enough, fast enough, to limit confusion about the truth and provenance of communications created using AI?

    Doing Rather Than Talking

    No single company, even a technology giant, can control all the pieces of the technology infrastructure that come into play with AI-generated content.

    That’s why it’s significant that companies that make AI-producing software, such as OpenAI, are partnering with content platforms like Meta and TikTok, according to Rachel Orey, senior associate director at the Bipartisan Policy Center. This sets the stage for them to share what they learn about emerging or unexpected risks and how to mitigate them, as well as ways that AI itself is being used to circumvent safeguards.

    Noah Praetz, president of The Elections Group, agrees that cooperation between these companies may be the only path toward the positive outcomes election officials would hope to see. The baseline principles are lofty, but the story of how the parties adhere to them is yet to be told.

    Orey notes that the parties have only agreed to “seek” to detect and address deceptive content. “Seeking is one thing, doing is another,” she says. Achieving these goals will require devoting significant internal resources at a time when tech companies have been laying off trust and safety teams.

    The companies can ease this concern by being transparent with one another and the public about their work. “Will they report, on at least a quarterly basis, the progress they are making, what they are doing, how much they are investing?” says Lawrence Norden, senior director of elections and government at the Brennan Center for Justice, a think tank at the NYU School of Law.


    Members of the Georgia House Technology and Infrastructure Innovation Committee approved legislation to address the use of AI technology in elections.


    Still Plenty of Misinformation

    AI may be bringing new dimensions to misinformation, but false or misleading campaign messages, video and photos are not new. The current information environment is already a “dumpster fire,” Praetz says. AI can act as an accelerant to this, but he’s not sure restricting its use will leave voters less angry or less confused.

    Even if the tech companies fully commit to and deliver on all of their promises, Norden says, this will slow the deceptive use of AI rather than “save” the election from its impacts. For one thing, the accord does not mention unsecured, open source AI systems. “These tools can be used by bad actors to interfere in our elections,” says Norden. “The commitments by these companies cannot prevent that.”

    The agreement’s definition of “deceptive AI election content” encompasses AI-generated audio, video and images. It does not mention AI-generated text, or AI tools that push messages into the information stream, already a major source of election misinformation.

    The most powerful strategy set forward in the accord might be “engaging in shared efforts to educate the public about media literacy best practices.” To the extent that this is done in a timely manner — and draws on the enormous communications power that resides in the parties — it could have a major influence on voters’ ability to detect and evaluate misinformation of all kinds.

    Tech companies have reasons beyond safeguarding democracy to build trust in AI. Some forecasters estimate generative AI could become a $1 trillion market over the next decade. That brand won’t be helped if high-profile controversies give the public the idea that AI is a tool for people with bad intentions, or create the impression that the companies developing it are ambivalent about its misuse.

    Kent Walker, Google’s president of global affairs, made this point in an announcement about the accord. “We can’t let digital abuse threaten AI’s generational opportunity to improve our economies, create new jobs, and drive progress in health and science,” he said.
