Colorado Legislators Advance Bill to Require Labels for AI-Generated Content

More than 100 members of the AI industry are rallying against the bill, but some say it's necessary to build trust.
A photo from the documentary What Jennifer Did that is accused of being AI-generated. Netflix
While watching Netflix's true-crime documentary What Jennifer Did, viewers noticed something peculiar. Photos of the titular Jennifer Pan appeared deformed: her fingers blended into one another, her teeth were strangely elongated, her dress looked painted on.

Although the documentary presents the photos as genuine, viewers and tech-industry experts alleged they were created or manipulated using artificial intelligence. After a week of controversy and debate about the morality of using AI-generated content without revealing it to the audience, it's still unclear whether the photos are real or not.

In Colorado, lawmakers want to eliminate that uncertainty.

Senate Bill 24-205 seeks to require that AI-generated content like images, videos and audio be clearly marked as synthetic beginning in 2026. The bill passed its first vote 3-2 in the Senate Judiciary Committee on Wednesday, April 24. It will now go to the full Senate for consideration.

The bill also attempts to restrict bias in AI decision-making by mandating that certain AI developers and deployers take steps to protect against algorithmic discrimination beginning in October 2025. The steps include performing risk assessments, disclosing the type of data used to train the AI system, and disclosing known risks of discrimination that arise from using the system.

Democratic Senator Robert Rodriguez, the bill's sponsor, said these regulations would help protect Colorado from AI scandals — like UnitedHealth allegedly using an AI system with a 90 percent error rate to deny health care to patients, or the tutoring company iTutorGroup programming recruitment software to automatically reject older applicants.

"Those are just a few examples of the ways AI has caused harm and the negative outcomes we will continue to mount unless action is taken," Rodriguez said during Wednesday's committee hearing. "Although the federal government has yet to respond, Colorado can take action to put in place common-sense requirements that anyone in the AI industry already should be doing."

Colorado's bill largely aligns with AI regulation currently being considered in Connecticut. In addition to Colorado and Connecticut, lawmakers in Texas, Alaska, Georgia and Virginia have been considering legislation on the issue.

The effort has received major pushback from the AI industry, however. More than 100 members of Colorado's AI and technology industries signed a letter to Rodriguez opposing the measure, arguing that it would be overly burdensome and cause businesses that rely on AI to leave the state.

"We're at a critical juncture with generative AI. The technology is literally changing on a weekly basis, just like the early days of the World Wide Web," Kyle Shannon, CEO of Storyvine, a video storytelling software, and co-founder of AI Salon, said during a press conference on April 23. "How stifling a bill like this would have been in the mid-’90s."

Shannon said the requirement to mark AI-generated content doesn't align with "the nuanced reality of how AI is actually used." He questioned, for example, whether a passage of text in which AI was used to complete a sentence would have to be labeled. The bill lays out exceptions, such as for AI systems that perform standard assistance or editing tasks, but Shannon claimed the requirement would still discourage the use of AI.

"If we make every small business have to think about every single thing they output or every single way they use [AI], I just think people won't use it," he said. "Our ability to experiment with these technologies is critical."

Kelly Kinnebrew, founder of the AI life-coaching platform Minerva, said the bill's requirement to disclose the data used to train AI systems would take away her business's competitive advantage.

"If this [bill] becomes law, Minerva may have to leave the state. The state that I love, the state that I was born and raised in," Kinnebrew said during the press conference. "I don't think it would work for any business that's trying to build in this space."

However, not everyone in the AI industry is against the measure. Beth Rudden, CEO of conversational technology firm Bast AI, testified in support of the bill during Wednesday's committee hearing.

Rudden said these kinds of regulations are essential to building trust in the AI industry at a time when the public is still very wary. More than half of Americans say they feel more concerned than excited about the increased use of AI, while only 10 percent say they are more excited than concerned, according to a Pew Research Center survey from last August.

At the same time, AI is becoming harder to recognize: in a February 2023 survey, only 30 percent of adults were able to correctly identify examples of AI in everyday life.

"The stipulations...are exactly the type of oversight needed to build trust in AI technology," Rudden said. "Such measures are not only about compliance, but about fostering a culture of accountability and ethical consideration within the AI community."

"We should be concerned when people have concerns with these basic principles of disclosure," Rodriguez added. "It's not excessively burdensome, other than [saying] they should be responsible."

All three of the committee's Democratic members voted in support of the measure, while the two Republican members voted against it. The opponents raised concerns about pushing through such a complicated bill with less than two weeks left in the legislative session. The bill will need approval from the full Senate and House by May 8 to become law. 

Rodriguez argued that Colorado cannot wait another year to take action on AI.

"It's just the fears of what this information can do and this technology can do," Rodriguez said. "The amount of information that it can extract that's out there that can be used to manipulate people and to harm people — that's what we're trying to get at."

Shannon said it's easy to fall into those fears, "but what doesn't get reported enough is the other side of the fear. What if AI actually makes us better?"