The dawn of the AI era has sparked a wide range of reactions, from exhilaration over the technology’s capabilities to deep distress.
Such responses to a new communication technology are nothing new. And yes, AI presents unique challenges that will demand deep thought and sensitivity.
But a heavy-handed congressional response that erodes long-standing American freedoms isn’t the answer. The Senate Judiciary Committee’s passage last week of SB 3062, the GUARD Act, shows the substantial risk that Congress’ “do something” energy poses to free speech.
The bill regulates AI chatbots — especially so-called “AI companion” systems — through access limits, design mandates, and disclosure requirements, backed by civil and criminal penalties of up to $100,000 per violation.
If enacted, the bill would put federal officials squarely in the position of deciding how this technology is built and used, limiting engagement with information and compelling speech along the way.
Growing calls for a federal solution to the fragmented landscape of state regulations reflect a clear political appetite for legislative action. And a single national standard has obvious appeal for an industry seeking consistency across jurisdictions. But consistency isn’t the same as constitutionality.
If federal proposals like the GUARD Act replicate the speech restrictions found in state laws, they just hardwire those problems into federal law.
Take the bill’s age verification requirements. The GUARD Act forces Americans to create accounts and prove their ages. Existing accounts are frozen until verified, and companies are required to recheck users’ ages periodically.
Age-verification mandates like this one force individuals to disclose their identity to seek answers and thus give up anonymity, a right the Supreme Court has repeatedly recognized as central to free expression.
Faced with mandatory identity disclosure, many people will think twice before asking sensitive questions. Would someone trapped in an abusive relationship be more or less willing to seek advice from a chatbot if she had to surrender her privacy first? What about the employee who endures constant harassment at work but worries about the consequences of seeking help?
There’s a reason that the Federalist Papers were written under a pseudonym. Even public debate sometimes requires distance from the speaker’s identity. That protection is still needed today, allowing people to seek information, test ideas, and ask sensitive questions without fear of legally required exposure.
Then there are rules about content. The bill makes it unlawful to design, deploy, or make available chatbots that, in the government’s view, “encourage” or “promote” certain categories of constitutionally protected speech.
Who do we want to be in charge of determining that? Those restrictions violate the First Amendment by regulating the protected editorial decisions of developers and by infringing on individuals’ rights to create and receive lawful expression.
Proposals like the GUARD Act dictate how chatbots respond and intrude on editorial judgment by putting Congress’ thumb on the scale of what is acceptable speech. This means control over who can speak, what can be said, and how ideas are expressed.
Those choices shape the substance of speech and risk reducing a chorus of voices to a single, government-shaped note.
Finally, disclaimer mandates can cross constitutional lines by compelling speech. The GUARD Act requires chatbots to deliver federally imposed messages in every interaction. Though such disclaimers may inform users, requiring them in every circumstance alters the content and flow of the communication itself.
All of this points to a deeper reality: AI systems cannot perfectly predict or control every output. That is not a defect. It is a core feature of how these models generate responses from probabilistic patterns.
Artificial intelligence, and chatbots in particular, has become Washington’s latest political punching bag. Accusations of manipulation and harm are driving a slew of legislative proposals to censor this emerging technology. The GUARD Act isn’t alone. The recently introduced CHATBOT Act presents many of the same threats.
The same impulse to move quickly in Congress is playing out nationwide, with proposals in states like Minnesota, Florida, and Washington targeting chatbots through access restrictions, disclosure mandates, and content-related rules.
The Constitution doesn’t permit any government to address concerns about AI by broadly restricting protected expression. The First Amendment demands solutions that target illegal conduct without burdening the exchange of ideas.
This article was originally published by RealClearPolitics and made available via RealClearWire.
