Was Sam Altman Right About the Job Market?

Tech companies are unleashing AI products that do much more than answer questions.

The automated future just lurched a few steps closer. Over the past few weeks, nearly all of the major AI firms—OpenAI, Anthropic, Google, xAI, Amazon, Microsoft, and Perplexity, among others—have announced new products that are focused not on answering questions or making their human users somewhat more efficient, but on completing tasks themselves. They are being pitched for their ability to “reason” as people do and serve as “agents” that will eventually carry out complex work from start to finish.

Humans will still nudge these models along, of course, but they are engineered to help fewer people do the work of many. Last month, Anthropic launched Claude Code, a coding program that can do much of a human software developer’s job but far faster, “reducing development time and overhead.” The program actively participates in the way that a colleague would, writing and deploying code, among other things. Google now has a widely available “workhorse model,” and three separate AI companies have products named Deep Research, all of which quickly gather and synthesize huge amounts of information on a user’s behalf. OpenAI touts its version’s ability to “complete multi-step research tasks for you” and accomplish “in tens of minutes what would take a human many hours.”

AI companies have long been building and benefiting from the narrative that their products will eventually be able to automate major projects for their users, displacing jobs and perhaps even entire professions or sectors of society. As early as 2016, Sam Altman, who had recently co-founded OpenAI, wrote in a blog post that “as technology continues to eliminate traditional jobs,” new economic models might be necessary, such as a universal basic income; he has warned repeatedly since then that AI will disrupt the labor market, telling my colleague Ross Andersen in 2023 that “jobs are definitely going to go away, full stop.”

Despite the foreboding nature of these comments, they have remained firmly in the realm of speculation. Two years ago, ChatGPT couldn’t perform basic arithmetic, and critics have long harped on the technology’s biases and mythomania. Chatbots and AI-powered image generators became known for helping kids cheat on homework and flooding the web with low-grade content. Meaningful applications quickly emerged in some professions—coding, fielding customer-service queries, writing boilerplate copy—but even the best AI models were clearly not capable enough to precipitate widespread job displacement.

[Read: A chatbot is secretly doing my job]

Since then, however, two transformations have taken place. First, AI search became standard. Chatbots exploded in popularity because they could lucidly—though frequently inaccurately—answer human questions. Billions of people were already accustomed to asking questions and finding information online, making this an obvious use case for AI models that might otherwise have seemed like research projects: Now 300 million people use ChatGPT every week, and more than 1 billion use Google’s AI Overviews, according to the companies. Further underscoring the products’ relevance, media companies—including The Atlantic—signed lucrative deals with OpenAI and others to add their content to AI search, bringing both legitimacy and some additional scrutiny to the technology. Hundreds of millions of people have been habituated to AI, and at least some of them have found the technology helpful.

But although plain chatbots and AI search introduced a major cultural shift, their business prospects were always small potatoes for the tech giants. Compared with traditional search algorithms, AI algorithms are more expensive to run. And search is an old business model that generative AI could only enhance—perhaps resulting in a few more clicks on paid advertisements or producing a bit more user data for targeting future advertisements.

Refining and expanding generative AI to do more for the professional class—not just students scrambling on term papers—is where tech companies see the real financial opportunity. And they’ve been building toward seizing it. The second transformation that has led to this new phase of the AI era is simply that the technology, while still riddled with biases and inaccuracies, has legitimately improved. The slate of so-called reasoning models released in recent months, such as OpenAI’s o3-mini and xAI’s Grok 3, has impressed in particular. These AI products can be genuinely helpful, and their applications to advancing scientific research could prove lifesaving. Economists, doctors, coders, and other professionals are widely commenting on how these new models can expedite their work; a quarter of tech start-ups in this year’s cohort at the prestigious incubator Y Combinator said that 95 percent of their code was generated with AI. Major firms—McKinsey, Moderna, and Salesforce, to name just a handful—are now using it in basically every aspect of their businesses. And the models continue getting cheaper, and faster, to deploy.

[Read: The GPT era is already ending]

Tech executives, in turn, have grown blunt about their hopes that AI will become good enough to do a human’s work. In a Meta earnings call in late January, CEO Mark Zuckerberg said, “2025 will be the year when it becomes possible to build an AI engineering agent” that’s as skilled as “a good, mid-level engineer.” Dario Amodei, the CEO of Anthropic, recently said in a talk with the Council on Foreign Relations that AI will be “writing 90 percent of the code” just a few months from now—although still with human specifications, he noted. But he continued, “We will eventually reach the point where the AIs can do everything that humans can,” in every industry. (Amodei, it should be mentioned, is the ultimate techno-optimist; in October, he published a sprawling manifesto, titled “Machines of Loving Grace,” that posited AI development could lead to “the defeat of most diseases, the growth in biological and cognitive freedom, the lifting of billions of people out of poverty to share in the new technologies, a renaissance of liberal democracy and human rights.”) Altman has used similarly grand language recently, imagining countless virtual knowledge workers fanning out across industries.

These bright visions have dimmed considerably when put into practice: Elon Musk and the Department of Government Efficiency’s efforts to replace human civil servants with AI may be the clearest and most dramatic execution of this playbook yet, with massive job loss and little more than chaos to show for it so far. Meanwhile, generative-AI models’ issues with bias, inaccuracy, and poor citations all remain, even as the technology has advanced. OpenAI’s image-generating technology still struggles at times to produce people with the right number of appendages. Salesforce is reportedly struggling to sell its AI agent, Agentforce, to customers because of issues with accuracy and concerns about the product’s high cost, among other things. Nevertheless, the company has pressed on with its pitch, much as other AI companies have continued to iterate on and promote products with known issues. (In a recent earnings call, Salesforce CEO Marc Benioff said the firm has “3,000 paying Agentforce customers who are experiencing unprecedented levels of productivity.”) In other words, flawed products won’t stop tech companies’ push to automate everything—the AI-saturated future will be imperfect at best, but it is coming anyway.

The industry’s motivations are clear: Google’s and Microsoft’s cloud businesses, for instance, grew rapidly in 2024, driven substantially by their AI offerings. Meta’s head of business AI, Clara Shih, recently told CNBC that the company expects “every business” to use AI agents, “the way that businesses today have websites and email addresses.” OpenAI is reportedly considering charging $20,000 a month for access to what it describes as Ph.D.-level research agents.

Google and Perplexity did not respond to a request for comment, and a Microsoft spokesperson declined to comment. An OpenAI spokesperson pointed me to an essay from September in which Altman wrote, “I have no fear that we’ll run out of things to do.” He could well be right; the Bureau of Labor Statistics projects that AI will substantially increase demand for computer and business occupations through 2033. A spokesperson for Anthropic referred me to the start-up’s initiative to study and prepare for AI’s effect on the labor market. The effort’s first research paper analyzed millions of conversations with Anthropic’s Claude model and found that the bot was used to “automate” human work, such as identifying and fixing a software bug, in 43 percent of cases.

Tech companies are revealing, more clearly than ever, their vision for a post-work future. ChatGPT started the generative-AI boom not with an incredible business success, but with a psychological one. The chatbot was, and may still be, losing the company money, but it exposed internet users around the world to the first popular computer program that could hold an intelligent conversation on any subject. The advent of AI search may have performed a similar role, presenting limited opportunity for immediate profits but habituating—or perhaps inoculating—millions of people to bots that can think, write, and live for you.