The Technology That Actually Runs Our World
The most dominant algorithms aren’t the ones choosing what songs Spotify serves you.
You might have heard that algorithms are in control of everything you hear, read, and see. They control the next song on your Spotify playlist, or what YouTube suggests you watch after you finish a video. Algorithms are perhaps why you can’t escape Sabrina Carpenter’s hit song “Espresso” or why you might have suddenly been struck by the desire to buy one of those pastel-colored Stanley cups. They dictate how TV shows are made and which books get published—a paradigm shift that’s become fully entrenched in the arts and media, and isn’t going away anytime soon.
In 2024, culture is boring and stale due to the algorithms calling the shots on what gets produced and praised—or so the critics say. The New Yorker staff writer Kyle Chayka wrote an entire book about how Big Tech has successfully “flattened culture” into a series of facsimile coffee shops and mid-century-modern furniture. The critic Jason Farago argued in The New York Times Magazine that “the plunge through our screens” and “our submission to algorithmic recommendation engines” have sapped culture of its momentum. Pinning the blame on new inventions isn’t a fresh argument either: In a 1923 essay, Aldous Huxley pointed to the ease of cultural production, driven by a growing middle-class desire for entertainment, as a major culprit for why mass-market books, movies, and music were so unsatisfying. “These effortless pleasures, these ready-made distractions that are the same for everyone over the face of the whole Western world,” he wrote, “are surely a worse menace to our civilization than ever the Germans were.”
And yet, cultural algorithms are only downstream of the larger, intractable forces that shape how art is made and supported. It’s not that Spotify has made music more boring or that Instagram has made art more stale, but rather that skyrocketing rents and yawning inequality have destroyed many of the conditions culture needs to take root and mature. Galleries and playhouses have closed; music venues get turned into luxury condos and bank branches. Some of these outcomes are the results of algorithms—albeit ones that garner far less attention than those powering TikTok or YouTube.
Part of the fixation on cultural algorithms is a product of the insecure position in which cultural gatekeepers find themselves. Traditionally, critics have played the dual role of doorman and amplifier, deciding which literature or music or film (to name just a few media) is worthwhile, then augmenting the experience by giving audiences more context. But to a certain extent, they’ve been marginalized by user-driven communities such as BookTok and by AI-generated music playlists that provide recommendations without the complications of critical thinking. Not all that long ago, you might have paged through a music magazine’s reviews or asked a record-store owner for their suggestions; now you just press “Play” on your Spotify daylist, and let the algorithm take the wheel.
In that way, some consumers have yielded to a type of techno-fatalism. People know that algorithms exist and often dictate how culture is disseminated to them—and that there’s not much they can do about it, save for abandoning the platforms altogether and embracing a retro-Luddism about their consumption choices. (Not a bad outcome, actually.) But algorithms aren’t just being used to feed you content. They’re also employed to fix real-estate prices, make probation and asylum decisions, determine Uber prices during a hurricane, dictate whether elderly people receive potentially lifesaving medical care, assess the risk posed by an abusive partner, and decide who gets targeted in a war zone.
In the United States, algorithms are now embedded in companies and various levels of government, speeding along processes that used to be handled by humans. In the private sector, algorithms are attractive because they can automate such tasks as price adjustment in real time. Consider Walmart, which is replacing its price tags with electronic ones that can be changed remotely, ostensibly to save on labor costs. But in 2021, researchers found that algorithmically derived pricing does not pass those savings on to the customer; after looking at how prices change on e-commerce sites such as Amazon, Target, and Walmart, they discovered that “high-frequency pricing algorithms can soften competition and increase profits,” helping rich firms get richer.
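To make that mechanism concrete, here is a minimal sketch in Python of the dynamic the research describes, under simplified, hypothetical assumptions (it is not the researchers’ actual model): two sellers of an identical good, one of which can reprice far faster than the other. Because the fast seller instantly undercuts any price cut, the slow seller gains nothing by competing on price, so it settles high, and both firms end up well above cost.

```python
# A minimal sketch (not the study's actual model) of how asymmetric
# repricing speed can soften price competition. Assumptions: identical
# goods, the "fast" firm can always undercut the "slow" firm by a cent,
# and 30 percent of shoppers don't comparison-shop. All numbers are
# hypothetical.

COST = 5.00            # marginal cost per unit
EPSILON = 0.01         # the fast firm's undercut
LOYAL_SHARE = 0.3      # shoppers who buy from the slow firm regardless

def fast_firm_price(slow_price: float) -> float:
    """The fast algorithm reprices instantly: undercut, never below cost."""
    return max(slow_price - EPSILON, COST)

def slow_firm_profit(slow_price: float) -> float:
    """With the fast firm always a cent cheaper, the slow firm keeps only
    its loyal shoppers, so cutting price wins it no new customers."""
    demand = max(0.0, 20.0 - slow_price)   # toy linear demand curve
    return LOYAL_SHARE * demand * (slow_price - COST)

# The slow firm's best response is to price high and milk its loyal
# segment; the fast firm then shadows it a cent below.
candidates = [COST + 0.01 * k for k in range(1, 1001)]
best_slow = max(candidates, key=slow_firm_profit)
print(f"slow firm settles at ${best_slow:.2f}; "
      f"fast firm shadows at ${fast_firm_price(best_slow):.2f}")
# Both prices land far above the $5 cost: competition is "softened"
# even though each algorithm is just maximizing its own profit.
```

In this toy setup the slow firm settles around $12.50 and the fast firm a cent below, both far above the $5 cost. The point is only that repricing speed itself can blunt price competition; no collusion is required.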
These sorts of algorithms are a lot more sinister, and harder to untangle from our everyday lives, than the ones making every coffee shop look the same. There’s also a lack of legal frameworks with which to challenge the impact of these algorithmic manipulations. When redlining—the longtime practice of denying government-backed home loans to Black Americans—was outlawed in 1968 with the Fair Housing Act, the rationale was simple: People shouldn’t be denied the ability to purchase a home because of their skin color. That was a more straightforward consideration when mortgages were a face-to-face business, but the majority of them today begin online, where applications are run through software meant to improve risk management for the lender.
The digitization of the mortgage industry has been hailed for its potential to level the playing field for prospective homeowners—automated systems ostensibly don’t come with the same prejudices as human loan officers. But because these online mortgage brokerages use algorithms that place a lot of weight on things like an applicant’s assets and credit score, both of which have their own discriminatory history, they run the risk of perpetuating structural inequities and biases. And those perils aren’t theoretical: A 2021 investigation by The Markup found that Black applicants were 80 percent more likely to be denied a loan by an algorithmic underwriting engine, compared with white applicants with similar financial backgrounds.
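A toy scoring function shows how this can happen, with weights and thresholds that are purely illustrative (no real underwriting engine is this simple): race never appears as an input, yet because the score leans heavily on credit history and accumulated assets, an applicant living with the downstream effects of redlining can be denied while an otherwise identical applicant is approved.

```python
# A toy underwriting score with purely illustrative weights (no real
# lender's model). Race never appears as an input, but the inputs the
# score leans on, credit history and accumulated wealth, carry the
# residue of decades of discriminatory lending.

from dataclasses import dataclass

@dataclass
class Applicant:
    credit_score: int   # 300-850; shaped by past access to credit
    assets: float       # savings and wealth, much of it inherited
    income: float       # annual income

def approval_score(a: Applicant) -> float:
    """Hypothetical linear scoring rule; the weights sum to 1."""
    return (0.5 * (a.credit_score - 300) / 550      # 50% on credit history
            + 0.3 * min(a.assets / 50_000, 1.0)     # 30% on accumulated wealth
            + 0.2 * min(a.income / 100_000, 1.0))   # 20% on income

APPROVE_THRESHOLD = 0.6

# Two applicants with identical incomes. Applicant B inherits the
# downstream effects of redlining: a thinner credit file, less family wealth.
applicants = {
    "A": Applicant(credit_score=760, assets=60_000, income=80_000),
    "B": Applicant(credit_score=660, assets=10_000, income=80_000),
}
for name, applicant in applicants.items():
    s = approval_score(applicant)
    verdict = "approved" if s >= APPROVE_THRESHOLD else "denied"
    print(f"Applicant {name}: score={s:.2f} -> {verdict}")
```

Equal incomes, different histories, different outcomes. And because the model never sees race, the disparity is easy to wave away as just the math.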
Renters in Massachusetts also recently won a settlement in a class-action lawsuit against a tenant-screening company called SafeRent, which uses an algorithm to provide landlords with a “score” that speaks to the risk level of potential renters. SafeRent’s algorithm relied on flawed factors such as credit scores, and the plaintiffs accused the company of discriminating against Black and Hispanic applicants who used federal housing vouchers by not considering how those vouchers would help them pay rent. But few laws in the United States defend against this sort of prejudicial treatment by technology, making cases like the one in Massachusetts an uphill battle.
There are some signs that the government is at least trying to limit these sorts of practices. Early last year, the Department of Justice warned landlords and tenant-screening companies that they aren’t “absolved from liability” if an algorithm they’re using ends up violating fair-housing laws by, say, disproportionately denying people of color a place to live. The DOJ’s statement was not so much a threat of immediate prosecution as it was a signal that the government was paying attention to cases like the one against SafeRent.
Additionally, tech executives have a genuine nemesis in Federal Trade Commission Chair Lina Khan, who has been suing giants such as Amazon and Meta for antitrust violations. In July, the agency said that it would be studying “surveillance pricing,” whereby individuals are served different price tags based on their browsing history. Last year, President Joe Biden issued an executive order requiring several federal agencies to create a regulatory framework for how they plan to establish guardrails against discrimination when using algorithmically powered systems to, for example, hire contractors or award housing grants. That said, like many efforts to rein in tech, the executive order turned out to be more like a committee to discuss a meeting about a plan than real action. And the incoming Trump administration is stuffed with right-wing tech doyens, including the Peter Thiel disciple J.D. Vance and the odd coupling of Elon Musk and Vivek Ramaswamy, who were recently selected to run something called the Department of Government Efficiency. (They may find themselves at odds with potential FCC chair Brendan Carr, who has been a broad, albeit selective, critic of consolidation in the tech industry.)
Maybe the most worrying thing about algorithms is that they give institutional actors a degree of plausible deniability. If the decisions dictating our lives are being made by equations rather than people, then the blame for those decisions shifts from something concrete to something abstract. As consumers, we are conditioned to think of technology as the product of companies rather than a collection of individuals making decisions about what those companies do—and because these equations tend to be cloaked in logos and branding, we can lose sight of the fact that algorithms are susceptible to the biases of the people who create them.
That makes ignoring these powerful currents—and focusing on the algorithms dictating what song or video is served up next—easier. When artists argue that their song didn’t catch a tailwind from the right keywords, or that their book isn’t selling well because it didn’t reach the right influencers, they are really expressing a fear that there is no room left for artists to exist at all. Musicians and writers and painters have not somehow become less interesting in the course of a single generation. Instead, the ground has shifted. Rents are too high; wages are too meager; wealth is too concentrated. Artists are forced to focus on their survival rather than their work, leaving little time or space to cultivate their skills. Shifting those foundational structures can feel impossible, so by the time the work reaches consumers, the critiques are laid at the feet of delivery mechanisms such as Spotify and Instagram.
It’s not that we should all learn to stop worrying and love the algorithmic-recommendation engine, but this constant harping on how boring everything has become does distract us from the larger issues. Some technologies really are eroding the viability of public pursuits, and perhaps enjoying or criticizing cultural works on their merits could help us focus our attention a bit. Asking yourself whether you like something because you genuinely enjoy it can help slough off the feeling of ambient dread that some critics have called “algorithmic anxiety.”
I’ve always found this term too small for what people feel now; it suggests that the source of our neuroses can be found in the soft glow of a smartphone screen rather than the superstructures of power and influence that surround us. What drives many of us mad is the inability to change those power dynamics that are so plain to see. Art—good art—is still being made, but without some massive changes, that may not be the case for long. Too many of us are stuck on one station, and it’s getting harder to turn the dial.