Chatbot users' selfies are reportedly being analyzed, not only screened for "suspiciousness" but also checked to see whether they match the faces of any public figures.
Video gamers staged a collective rebellion when they discovered that Discord, the dominant gamer chat platform, had slipped a pilot program into the U.K. user experience that could route personal information to the government via a company called Persona, which is linked to OpenAI. Discord quickly backtracked, frustrated that new age verification laws in the Anglosphere have made it difficult to find partners that pass muster with users. But the controversy rages on.
A group of researchers say they stumbled upon publicly exposed code from OpenAI that reveals an in-depth system for analyzing user facial data, including a check for whether the user has hijacked a dead person's identity.
‘269 checks. For wanting to use a chatbot in 2026.’
Researchers from the website Vmfunc recently revealed that they came across 53 megabytes of "unprotected source maps" configured for government use. The researchers also stated that any suspicious user activity would be filed with federal authorities, and that user selfies are analyzed and screened using facial recognition.
The data is reportedly collected through Persona's Know Your Customer service, which is, simply put, an identity verification program.
Not only does OpenAI publicly state that it uses Persona as a “trusted third-party company” to “help verify age,” but Persona itself announced it is authorized to “serve federal agencies where the loss of confidentiality, integrity, or availability of processed data could result in limited adverse effects.”
It was Persona's system that the researchers poked fun at after revealing a complex verification pipeline that checks user selfie data.
RELATED: Sam Altman slams ICE in message to OpenAI employees: ‘What’s happening … is going too far’
“So you uploaded a selfie to use a chatbot. Congratulations!!!” the report joked. “It’s now being compared against a database of every politician, head of state, and their extended family tree on Earth. Similarity scored. Low, medium, high. The machine looked at your face and asked itself: ‘Does this person resemble the deputy finance minister of Moldova?’ And it answered. And it wrote the answer down.”
The report then described 269 verification checks that perform acts like comparing a user’s selfie to their ID or other existing accounts.
Other checks, like "public figure detection," allegedly determine whether the user looks like someone famous, while "suspicious entity detection" reportedly checks whether the user looks "suspicious."
In total, the researchers allege there are 43 government ID checks and 27 database checks that cross-reference Social Security numbers, phone carriers, and death databases.
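To make the report's description concrete: a "public figure detection" check of this kind typically reduces each face to a numeric embedding, compares embeddings with a similarity measure, and buckets the result into bands like the low/medium/high scores the researchers describe. The sketch below is purely illustrative; the embeddings, thresholds, and watchlist names are invented, and none of this is OpenAI's or Persona's actual code.

```python
import math

def cosine_similarity(a, b):
    """Standard cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def similarity_band(score, low=0.4, high=0.7):
    # Assumed thresholds: the report only says scores are labeled
    # low, medium, or high, not where the cutoffs sit.
    if score >= high:
        return "high"
    if score >= low:
        return "medium"
    return "low"

def screen_selfie(selfie_embedding, watchlist):
    """Compare one selfie embedding against a watchlist of named embeddings."""
    return {
        name: similarity_band(cosine_similarity(selfie_embedding, emb))
        for name, emb in watchlist.items()
    }

# Toy 3-dimensional "embeddings" (real systems use hundreds of dimensions).
watchlist = {
    "public_figure_a": [0.9, 0.1, 0.1],
    "public_figure_b": [0.1, 0.9, 0.2],
}
print(screen_selfie([0.88, 0.12, 0.15], watchlist))
```

The point of the bucketing step is that a raw similarity number is rarely reported as-is; it is mapped to a coarse label that downstream rules (or human reviewers) can act on, which matches the "Similarity scored. Low, medium, high" language in the researchers' report.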
“269 checks. For wanting to use a chatbot in 2026,” researchers wrote.
RELATED: ChatGPT says it is not sharing your conversations with advertisers, but there’s a catch
Photo by David Burnett/Newsmakers via Getty Images
Neither OpenAI nor Persona responded to Return’s request for comment. However, Persona founder Rick Song has publicly stated he would cooperate with the researchers and answer their questions.
After stating he had an “online crashout” in response to misinformation, Song said his dialogue with Vmfunc is ongoing and shared several emails he has exchanged with the company. One of the emails stated that OpenAI does not use Persona’s “biometrics for Watchlists” or products related to identifying politically exposed persons.
He also noted that Persona retains data for a maximum of three years, while OpenAI's retention policy is just one year.
For additional information on how OpenAI treats user data, please visit its website.
