Why Colorado’s artificial intelligence law is a big deal for the whole country
Nearly a year after it passed, it remains the only one of its kind. But with revisions still planned, could Colorado’s law become a model to emulate — or avoid?


All state Sen. Robert Rodriguez wanted to do was add some guardrails for one of the biggest and fastest-developing technologies to hit mainstream America since smartphones.
But a month after last year’s passage of Colorado’s artificial intelligence law, which goes into effect Feb. 1, there was so much pushback from the tech community that even Gov. Jared Polis and Colorado Attorney General Phil Weiser asked for revisions to avoid “unintended consequences” that could dull the state’s appeal to startups and tech businesses that bring higher-paying jobs.
Not that consumer advocates were pleased with how Senate Bill 205, passed by the legislature in 2024, turned out either. Some wanted even stricter regulations.
A special task force was set up and monthly meetings began in August. In February, the task force shared a five-page report of recommendations that had four points everyone agreed on, four that needed work and seven that were nowhere near consensus. Smaller meetings began. The wait continued. A group spearheaded by local tech leaders last week began spreading a “Pause SB-205” campaign.
Rodriguez, a Denver Democrat and the Senate majority leader, has been drafting a bill to revise the law for months. It’s almost ready.
“It’s in editing. I just finished the last tweaks this morning,” Rodriguez said Tuesday.
The senator, who said he’s being hounded by national media as well, said the earliest he would introduce the measure is late this week or early next week.
“It depends on the feedback I get,” he said, declining to discuss what’s in the legislation.
Time is running out in this legislative session, which ends May 7. If a revised bill doesn’t pass, the law will remain untouched. And that’s a big deal, more so for those on the industry side.
Consumer advocates, who said they tried to find consensus with industry, fear that further delays will harm consumers, said Matthew Scherer, senior policy counsel at the Center for Democracy and Technology.
“What that’s really code for,” said Scherer, who also served on the task force, “is we want another year to bring in armies of industry lobbyists to pressure Colorado legislators to repeal the law.”
Colorado’s AI law: Model to emulate or avoid?
Nationwide, Colorado is a leader with its model law to regulate how AI systems might harm consumers. All but seven states have proposed some sort of AI-related legislation, though not at the level of Colorado’s, according to law firm BCLP, which is tracking legislation state by state.
Not that other states haven’t tried.
Last year, Connecticut’s bill, which Rodriguez took inspiration from, stalled after Gov. Ned Lamont, a Democrat, said he’d veto it. A similar bill from California made it all the way to Democratic Gov. Gavin Newsom before he vetoed it last fall. Last month, Virginia’s governor, Glenn Youngkin, a Republican, vetoed his state’s AI bill, which was also similar to Colorado’s.
“It’s very meaningful to have Colorado be the first in this country to enact this sort of law,” said Margot Kaminski, a professor at the University of Colorado Law School who also sat on the legislative task force. “Multiple blue states around the country will look to this bill as a model. And I think that’s why it has attracted so much attention from national-level tech lobbying groups to make it a blueprint, and something that they’d feel more comfortable with having enacted across blue states.”
But why not red states? “Because it deals with anti-discrimination,” she said, alluding to the Trump administration’s upheaval of any policy that hints of promoting diversity.
The nation doesn’t quite have an AI law, though the Biden administration took a stab at one with an executive order, which the Trump administration revoked in favor of its own executive order calling for an AI action plan. Whatever results will likely not be as restrictive on industry as the European Union’s AI Act, passed last year. That one is more of an “omnibus approach” that basically tries to cover everything, including general-purpose AI and foundation models, Kaminski said. Colorado’s is not that.
The national mood has changed, said Amy de La Lama, a Boulder-based lawyer at BCLP who specializes in privacy and technology law. And the complexity of the Colorado law has been noticed. While the law doesn’t go into effect until next year, she said she’s not seeing the rush from clients or other organizations to prepare for the state’s AI requirements, as they did a few years ago with data privacy and security laws.
“Six months ago, I would have said absolutely we will see significant legislation (from other states). There were hundreds of bills proposed across the nation,” she said. “I, truthfully, don’t have a great feel now. … I think the types of legislation we will see would be a little bit less ambitious.”
What does Colorado’s AI law do?
As is, Colorado’s law targets developers and companies that use “high-risk” AI technologies to make a “consequential decision,” such as who gets a job, an apartment or health insurance. Eight categories are called out in the law as areas in which a person could be harmed by AI discrimination:
- Education
- Employment
- Loans or other financial service
- Health care
- Housing
- Insurance
- Legal services
- Essential government services
The law also mentions a few examples of what won’t be affected, like AI video games, robocall filters and anti-fraud technology that doesn’t use face recognition. Those aren’t considered high risk because they don’t make decisions that hurt consumers (though if they do make such decisions, the exemption doesn’t apply).
The law also doesn’t count newer forms of generative AI, like the consumer-friendly ChatGPT, as high risk, allowing “technology that communicates with consumers in natural language for the purpose of providing users with information,” so long as it has a use policy that “prohibits generating content that is discriminatory or harmful,” the law reads. While such services might have trained on biased data, they’re OK as long as they’re not making critical and discriminatory decisions about someone’s fate.
But sticking points for local businesses include notification requirements, disclosures and ongoing updates. And if a consumer doesn’t like the result, they can ask for an explanation, though the company isn’t required to reverse the decision. The company, however, must provide “the degree” AI contributed to the decision and “the type of data that was processed” to make the decision, according to the law.
The responsibility is on developers and companies that use high-risk AI systems, even if they didn’t develop the systems themselves. They need to do their best to make sure their technology doesn’t discriminate against protected classes, like race, gender and age. Just take “reasonable care,” Kaminski said.
“Some of the law is basically like the Colorado Privacy Act, asking the state attorney general’s office to come up with more detailed rules for application,” she said. “It’s not actually that onerous as far as legal obligations go.”
Complaints would be turned over to the Colorado Attorney General’s Office, which is tasked with protecting Colorado residents.
Companies that do use AI to make critical decisions that could harm consumers are exempt if they employ fewer than 50 full-time workers.
Concern that all companies that “deploy” AI are affected
That exemption just doesn’t sit well with Jon Nordmark, the Denver-based founder of Iterate.ai. His company developed tools including Generate AI, a personal generative AI tool that trains on the user’s own data and keeps it all private. But it’s not his tech he’s worried about. It’s the burden on small businesses, and his is not that small anymore, with about 100 people.
“What I think is being missed is that we don’t want to become uncompetitive on the global scene and we don’t want to become less competitive on the national scene,” said Nordmark, also a member of the legislative task force. “And in some of the ways the bill is written, it could harm every company in Colorado.”
The bootstrapped company has invested in engineers, not office support staff. Only recently did it hire its first HR manager. Certain job openings tend to attract 800 to 1,000 applicants each. Right now, Iterate has four openings but is managing more than 4,600 active applications. It gets candidates from LinkedIn, which has an AI-powered job-matching service to help job seekers find suitable employers.
“If even half of those applicants — say 2,000 — requested a detailed explanation about why they weren’t selected, that would be a massive resource burden,” Nordmark said. “And under a law like SB 205, we could see a number of people making that request just because the law gives them that right.”

Lawyers asked about this hiring scenario said the market would likely figure it out, with third-party job services that vet candidates handling the appeals. But companies still need to notify users of AI’s use in hiring and respond to questions — just as they would if AI weren’t used. And the AG tends to pursue the larger violators first.
The AI law, by the way, still needs the AG’s office to come up with the rules, a process that includes public comment and hearings. The office declined to comment on how Weiser interprets the law.
“It’s a disclosure law, not an anti-discrimination law,” Nordmark said.
In their letter calling for revisions to the law, Polis, Weiser and Rodriguez asked that the task force focus on developers, not “those smaller companies.” But the task force reached no consensus on what deployers’ obligations should be.
And that’s problematic for small companies, especially startups, said Chris Erickson, cofounder of venture-capital firm Range Ventures.
Spotting startups that could change the world is something Range specializes in. It was the first venture capital firm to offer capital to local high school principal Adeel Khan for MagicSchool AI, which provides AI tools for overworked teachers. But MagicSchool, like other startups, builds on technology developed by others, such as OpenAI’s GPT-4o, Anthropic’s Claude or Google’s Gemini. The startup simply figured out a way to build an app that helps teachers.
“I don’t think (adding guardrails) is a terrible place to be, given some of the perceptions,” Erickson said. “If we would have started with a law that said, ‘Hey, this is going to be about developers and making sure they measure these things,’ I think we would be in a place where instead of having this long task force process with a lot of disagreement, we would have gotten to a place of alignment earlier.”
Multiple sides, no concessions
But even consumer advocates aren’t satisfied.
Allowing smaller companies to discriminate because they have fewer employees almost makes the law obsolete, said Kjersten Forseth, a lobbyist for the Colorado AFL-CIO, who also was a task force member.
“Their intention here is because AI is a black box and is very difficult to parse out — in addition to the biases that might be inherently built into the system — that doesn’t mean we can’t do it,” Forseth said. “This idea that we just have to live with the inherent problems that AI is going to create in our society … because it’s AI? I mean, come on!”
A Consumer Reports survey last year found the majority of Americans were uncomfortable with AI being used in employment, housing and health care decisions. And in one task force meeting, Rodriguez said he was hopeful some understanding could be reached between the various sides because dealing with AI harms legislatively is much more productive than a ballot measure that leaves it up to voters.
The AFL-CIO initially opposed last year’s bill, but after amendments were added to allow consumers to appeal adverse decisions, the labor union came around to a position of “not happy with it but it’s OK, a step forward,” Forseth said.
But pausing the law now would be meaningless, she said, because the industry failed to offer any concessions.
“It’s really frustrating to have this conversation over and over again. I’ve been in the building since the end of 2012, when Uber and Lyft came in in 2014. They were saying the same thing, ‘We’re innovative. You can’t regulate us too much because once you do, it kills the model.’ And now we’ve got one of our legislators getting sexually assaulted in one of those vehicles because of the lack of regulation,” Forseth said. “Both drivers and consumers are paying for it and that was all done under this whole same thing of, ‘It’s innovation.’”
The thing is, the AI bill already passed
Another member of the task force took a different approach. The bill already passed. This was a starting point, said Vivek Krishnamurthy, a professor at the University of Colorado Law School who advises governments and others on the human rights impacts of new technologies.
“I was much more interested in making it better and accepting the premise that this does need to be regulated in a distinctive way,” Krishnamurthy said.
The law is quite clear in many areas, he said. Will spell check be regulated? No. Will photo-editing software be regulated because it uses AI to add in a background? No. Will OpenAI’s generative technology ChatGPT, which has enthralled the world with its humanlike responses to questions or digital chores, be regulated? No, he said.
“Let’s be clear about what the law does. The law applies to AI systems that are a substantial factor, which is the first set of important words, in making a consequential decision in eight areas: education, employment, finance, government services, health care, housing, insurance and legal services,” Krishnamurthy said. “If you’re not in those eight areas, the law does not apply to you.”
Existing laws already tackle discrimination, but machines and data appear to discriminate differently from humans. An AI system may not reject a job candidate due to their skin color or gender. But if the system was trained on biased data, it might. A decade ago, Amazon scrapped an AI-based recruiting tool trained on 10 years of resumes, according to a report by Reuters in 2018. The system favored words like “executed” and “captured” that appeared on male engineers’ resumes and eventually penalized resumes that used the word “women’s.”
The point is that those systems need to be monitored in case a new form of discrimination is detected, Krishnamurthy said.
After sitting on the task force, reading the law carefully and hearing from the AG, Krishnamurthy called the law “very strong and thoughtful.”
“The whole point of this sort of mode of regulation is to get companies of various kinds to think carefully about the risks that their products entail and then to go and mitigate those and develop systems to sort of monitor the performance of their AI when deployed out there in the world. And if problems are identified, to fix them. You’re not going to be hit with massive fines the moment you make a small mistake,” he said. “The entire point of this is to incentivize continuous improvement over time.”
Politics reporter Jesse Paul contributed to this story.