AI and LLMs appear to be in a bit of a slump, with the latest revelation coming out of a major study showing that large language models, the closest we’ve come yet to so-called artificial general intelligence, are degraded in their capacities when they are subjected to low-quality, “junk” content.
The study, from a triad of university computer science departments including the University of Texas, set out to determine the relationship between data quality and performance in LLMs. The scientists trained their LLMs on viral X.com/Twitter data, emphasizing high-engagement posts, and observed a reduction of more than 20% in reasoning capacity, a 30% falloff in contextual-memory tasks, and — perhaps most ominously, since the study tested for measurable personality traits like agreeableness and extraversion — a leap in output that can technically be characterized as narcissistic and psychopathic.
Sound familiar?
The paper analogizes LLM performance to human cognitive performance and refers to this degradation in both humans and LLMs as “brain rot,” a “shorthand for how endless, low-effort, engagement-bait content can dull human cognition — eroding focus, memory discipline, and social judgment through compulsive online consumption.”
There is no great or agreed-upon utility in cognition-driven analogies between human and computer performance. The temptation persists for computer scientists and builders to read in too much, making categorical errors with respect to cognitive capacities, definitions of intelligence, and so forth. The temptation is to imagine that our creative capacities ‘out there’ are somehow reliable mirrors of the totality of our beings ‘in here,’ within our experience as humans.
We’ve seen something similar this year with the prevalence of so-called LLM psychosis, which — in yet another example of confusing terminology applied to already confused problems — describes neither psychosis embedded in LLMs nor psychosis measured in their “behavior,” but rather the severe mental illness reported by many people after investing themselves, their attention, and their belief in computer-contained AI “personages” such as Claude or Grok. Why do they need names anyway? LLM 12-V1, for example, would be fine …
The “brain rot” study rather proves, if anything, that the project of creating AI is getting a little discombobulated within the metaphysical hall of mirrors its creators, backers, and believers have, so far, barged their way into, heedless of old-school measures like maps, armor, transport, a genuine plan. The whole project reeks of hubris, reeks of avarice and power. Yet, on the other hand, the inevitability of the integration of AI into society, into the project of terraforming the living earth, isn’t really being approached by a politically, or even financially, authoritative and responsible body — one which might perform the machine-yoking, human-compassion measures required if we’re to imagine ourselves marching together into and through that hall of mirrors to a hyper-advanced, technologically stable, and human-populated civilization.
So, when it’s observed here that AI seems to be in a bit of a slump — perhaps even a feedback loop of idiocy, greed, and uncertainty coupled, literally wired-in now, with the immediate survival demands of the human species — it’s not a thing we just ignore. A signal suggesting as much erupted last week from a broad coalition of high-profile media, business, faith, and arts voices brought under the aegis of the Statement on Superintelligence, which called for “a prohibition on the development of superintelligence, not lifted before there is 1. broad scientific consensus that it will be done safely and controllably, and 2. strong public buy-in.”
There’s a balance, there are competing interests, and we’re all still living under a veil of commercial and mediated fifth-generation warfare. There’s a sort of adults-in-the-room quality we are desperately lacking at the moment. But the way the generational influences lie on the timeline isn’t helping. With boomers largely tech-illiterate but still hanging on, with Xers tech-literate but stuck in the middle (as ever), with huge populations of highly tech-saturated Millennials, Zoomers, and so-called Generation Alpha waiting for their promised piece of the social contract, the friction heat is gathering. We would do well to recognize the stakes and thus honor the input of those future humans who shouldn’t have to be born into or navigate a hall of mirrors their predecessors failed to escape.
