A Culture-War Test for AI
Do both candidates secretly agree on the technology?
This is Atlantic Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age.
You might think, given the extreme pronouncements regularly voiced by Silicon Valley executives, that AI would be a top issue for Kamala Harris and Donald Trump. Tech titans have insisted that AI will change everything—perhaps the nature of work most of all. Truck drivers and lawyers alike may see aspects of their professions automated before long. But although Harris and Trump have had a lot to say about jobs and the economy, they haven’t said much about AI on the campaign trail.
As my colleague Matteo Wong wrote yesterday, that may be because this is the rare issue that the two actually agree on. Presidential administrations have steadily built AI policy since the Barack Obama years; Trump and Joe Biden both worked “to grow the federal government’s AI expertise, support private-sector innovation, establish standards for the technology’s safety and reliability, lead international conversations on AI, and prepare the American workforce for potential automation,” Matteo writes.
But there is a wrinkle. Trump and his surrogates have recently lashed out against supposedly “woke” and “Radical Leftwing” AI policies supported by the Biden administration—even though those policies directly echo executive orders on the technology that Trump signed himself. Partisanship threatens to halt years of bipartisan momentum, though there’s still a chance that reason will prevail.
Something That Both Candidates Secretly Agree On
By Matteo Wong
If the presidential election has provided relief from anything, it has been the generative-AI boom. Neither Kamala Harris nor Donald Trump has made much of the technology in their public messaging, and they have not articulated particularly detailed AI platforms. Bots do not seem to rank among the economy, immigration, abortion rights, and other issues that can make or break campaigns.
But don’t be fooled. Americans are very invested in, and very worried about, the future of artificial intelligence. Polling consistently shows that a majority of adults from both major parties support government regulation of AI, and that demand for regulation might even be growing. Efforts to curb AI-enabled disinformation, fraud, and privacy violations, as well as to support private-sector innovation, are under way at the state and federal levels. Widespread AI policy is coming, and the next president may well steer its direction for years to come.
What to Read Next
- The slop candidate: “In his own way, Trump has shown us all the limits of artificial intelligence,” Charlie Warzel writes.
- The near future of deepfakes just got way clearer: “India’s election was ripe for a crisis of AI misinformation,” Nilesh Christopher wrote in June. “It didn’t happen.”
P.S.
Speaking of election madness, many people will be closely watching the results not just because they’re anxious about the future of the republic but also because they have a ton of money on the line. “On Polymarket, perhaps the most popular political-betting site, people have wagered more than $200 million on the outcome of the U.S. presidential election,” my colleague Lila Shroff wrote in a story for The Atlantic yesterday. So-called prediction markets “sometimes describe themselves as ‘truth machines,’” Lila writes. “But that’s a challenging role to assume when Americans can’t agree on what the basic truth even is.”
— Damon