Public distrust of AI platforms has reached an all-time high, especially when it comes to users’ private data. Not only are people leery of the AI platforms themselves, but they’re also concerned about how the government can access and use this information. Until recently, AI development felt like the Wild West, with little regulation to get in the way of supposed innovation. That’s all about to change, however, with a brand-new federal bill based on the national AI framework President Trump rolled out earlier.
The study
In late March, cybersecurity software company Malwarebytes conducted a survey among its newsletter readers, asking them to share their thoughts on AI and user privacy. After collecting 1,200 responses, the results were clear: A staggering 90% of readers simply don’t trust AI with their data.
One of the main distrust drivers stems from fears of “personal data being used inappropriately” by both corporations and the government. Recent large data breaches that put user credentials directly on the dark web, along with commercial surveillance campaigns, have further contributed to the problem.
To combat the distrust, users have turned to privacy tools, such as VPNs, ad blockers, and identity theft protection services. However, these only go so far when AI platforms can circumvent the usual digital defense mechanisms to learn more about your online presence than they were ever meant to know.
The only firm solution to public distrust is to give AI companies legal guardrails that keep them from overstepping people’s rights and security. The majority of respondents agreed that they “support national laws regulating how companies can collect, store, share, or use our personal data.”
The bill
A new federal bill proposed by Senator Marsha Blackburn aims to bridge the gap between AI’s fast-paced development and shielding the public from emerging dangers brought on by these rapidly evolving platforms. Dubbed the Trump America AI Act, the bill would codify President Trump’s previously touted National AI Legislative Framework, focusing on two major goals, protection and empowerment, across five key areas:
1. Protections for children
AI developers are required to “prevent and mitigate foreseeable harm to users” with an emphasis on protecting the data, mental health, and safety of minors. AI developers can also be held legally accountable for any damage done by a failure to comply.
Pro: AI providers would be immediately responsible for their chatbots encouraging and even coaching young and vulnerable people on how to take their own lives.

Con: AI developers will need a way to verify the age of their users, potentially leading to the mass collection of user IDs, resulting in a digital database. Also, while the bill protects minors’ user data, adults are exempt, failing to solve the concerns expressed in Malwarebytes’ survey.
2. Protections for communities
U.S. companies across all industries, as well as federal agencies, must send quarterly reports to the Department of Labor that detail AI-related impacts on the workforce, including job layoffs and displacements. Data centers are also barred from siphoning energy resources from communities and driving up prices.
Pro: As AI-driven layoffs have already begun, this is a good first step to highlight how AI realistically affects the unemployment rate.

Con: Although companies must report AI-related job losses, the bill doesn’t prevent the displacement of employees outright. Companies can still fire employees en masse in a way that would cripple the workforce, impact the economy, and drive the U.S. toward forced universal basic income.
3. Protections for intellectual property
AI companies are prohibited from feeding their LLMs with copyrighted materials, including books, movies, music, and more. Under the bill, AI is excluded from fair use under the Copyright Act.
Pro: Content creators no longer have to worry about their creative catalogs being stolen, uploaded, copied, and remixed into new bodies of work when prompted by users on any given platform. They retain full ownership of their content without threat of misappropriation.

Con: A lack of copyrighted information could lead to gaps in AI platforms’ knowledge graphs, potentially slowing or even stifling development.
4. Protections for conservatives
AI companies are banned from injecting woke ideologies into their large language models, and AI chatbots are no longer allowed to express biases against conservative ideas and values, all of which will be verified through third-party audits.
Pro: Despite efforts to attract support on the right, many Big Tech giants are still dominated by left-wing elites. These safeguards will ensure that their personal and factional beliefs don’t poison the datasets behind their platforms, instead aiming to support truth and facts.

Con: The bill isn’t specific enough in defining the woke ideas, political biases, and discrimination it aims to prevent. Unless the bill intends to leave a loophole for the left to exploit, these exact parameters need to be spelled out, lest they be left open to interpretation.
5. Protections for innovation
One of President Trump’s biggest AI goals is to secure America’s place as the global leader in AI technology. As such, this bill encourages partnerships between the government, businesses, and education to accelerate research and development with limited barriers to the infrastructure needed for rapid growth.
Pro: This piece of the bill ultimately centralizes the resources underpinning the United States’ AI development, including computing power, datasets, and advanced infrastructure. By combining the knowledge and experience of multiple groups across various areas of expertise with the best technology available, our AI program will theoretically evolve even faster than it already has over the last several years.

Con: Centralized AI development that happens too quickly could potentially lead to developmental mistakes with big consequences, such as launching untested models that underperform, building agents that aren’t fully capable of completing the jobs they’re designed to do, and even causing economic instability should an AI bot or agent run rogue within critical infrastructure, such as businesses, medical facilities, and even military applications.
History in the making
This is a unique time in history. Society has never witnessed a more disruptive technology than generative artificial intelligence, and it takes a lot of watching, waiting, debating, and legislating to get the regulations right for a piece of tech that will touch nearly every facet of modern life.
The Trump America AI Act is merely a launch pad — a starting point — that will guide America’s future of AI research, development, and execution for decades to come. There are a lot of good things in the bill, but it falls short in other key areas:
It doesn’t protect adult users’ privacy, especially in terms of user data and surveillance.

It doesn’t protect human workers from mass layoffs and unemployment.

It indirectly encourages a digital ID database for age verification with no clear guidelines on how IDs should be gathered, stored, or deleted.
That said, AI regulation has to start somewhere, and the Trump America AI Act is still in its infancy. There will be opportunities to amend the bill as it moves through the legislative process. For now, this version offers a solid foundation for governing the AI tech of tomorrow.