States should work with AI, not against it

For decades, Americans have been conditioned to fear AI. From big-budget blockbusters portraying apocalyptic scenarios to TV shows and books that cast the technology as a villain, AI has been depicted negatively ever since HAL refused to open the pod bay doors.

This Hollywood-driven fear has produced real policy change at the state level. The problem is that many of these policies are overly restrictive, rooted in fear rather than objectivity.

These policies come from an understandable place, of course. AI has been known to hallucinate legal cases and run roughshod over privacy law, and it can be used in abusive and hurtful ways. It is imperative that humans remain involved in decision-making and that strong safeguards against misuse are in place. The White House recently called for such policies in the National AI Legislative Framework.

But the Trump administration has also recognized that regulations can be a hindrance.

This is why President Trump issued an executive order to establish a federal framework for AI regulation last December. “My Administration must act with the Congress to ensure that there is a minimally burdensome national standard — not 50 discordant State ones,” he wrote in the order. “The resulting framework must forbid State laws that conflict with the policy set forth in this order. … A carefully crafted national framework can ensure that the United States wins the AI race, as we must.”

The order also directed the secretary of commerce to publish a report examining AI regulations from coast to coast. It will identify state AI laws the administration considers “onerous,” creating a targeting map that will inform the priorities of the Justice Department’s AI Litigation Task Force.

Colorado — which is already in the administration’s crosshairs, according to the executive order — and other states whose laws make the list (such as California, New York, and Illinois) could lose significant federal dollars.

Although President Trump’s order targets states, cities aren’t in the clear. The DOJ recently created a new Enforcement and Affirmative Litigation Branch within the Civil Division that is tasked with “filing lawsuits against states, municipalities, and private entities that interfere with or obstruct federal policies,” underscoring the administration’s intent to challenge local laws that appear to violate the Supremacy Clause.


Centralizing AI oversight makes sense. Without a deep understanding of artificial intelligence and machine learning, city and state leaders can inadvertently hinder technological progress, for example by restricting the use of aged, anonymized data for algorithm training.

Regardless of the federal funding at stake, city and state statutes governing AI should be reviewed for conflicts with federal policy, which is being carefully designed to allow growth across industries where, today, progress is often powered by AI.

For the good of America’s economic engine, AI innovators should have one set of rules to follow nationwide, rather than being forced to tailor products and services according to a patchwork of laws.

The future is here, and we should not be afraid of it. AI is a powerful driver for progress in business, science, medicine, and a variety of other fields. Efficiency, accuracy, productivity, creativity, and analysis are magnified and elevated by this technology.

Cities and states should seek to harness this tool and use it for their people. The way forward is smart, federally driven guardrails that allow innovation to flourish, not a giant stop sign.
