Author: Cory Doctorow

  • Pluralistic: Three more AI psychoses (12 Mar 2026)

    Today’s links



A cross-section of a man's head. His brain has been replaced with an intricate mass of wooden gearing, being pumped and cranked by three 16th-century drudges. Behind them is a blown up view of a microchip. Behind the head is a stylized illustration of grey matter, blown out with lots of saturation and blended in places with tumbled rocks.

    Three more AI psychoses (permalink)

    “AI psychosis” is one of those terms that is incredibly useful and also almost certainly going to be deprecated in smart circles in short order because it is: a) useful; b) easily colloquialized to describe related phenomena; and c) adjacent to medical issues, and there’s a group of people who feel very strongly any metaphor that implicates human health is intrinsically stigmatizing and must be replaced with an awkward, lengthy phrase that no one can remember and only insiders understand.

    So while we still can, let us revel in this useful term to talk about some very real pathologies in our world.

    Formally, “AI psychosis” describes people who have delusions that are possibly induced, and definitely reinforced and magnified, by a chatbot. AI psychosis is clearly alarming for people whose loved ones fall prey to it, and it has been the subject of much press and popular attention, especially in the extreme cases where it has resulted in injury or death.

    It’s possible for AI psychosis to be both a new and alarming phenomenon and also to be on a continuum with existing phenomena. Paranoid delusions aren’t new, of course. Take “Morgellons Disease,” a psychosomatic belief that you have wires growing in your body, which causes sufferers to pick at their skin to the point of creating suppurating wounds. Morgellons emerged in the 2000s, but the name refers to a 17th-century case-report of a patient who suffered from a similar delusion:

    https://en.wikipedia.org/wiki/A_Letter_to_a_Friend

    Morgellons is both a 400-year-old phenomenon and an internet pathology. How can that be? Because the internet makes it easier for people with sparsely distributed traits to locate one another, which is why the internet era is characterized by the coherence of people with formerly fringe characteristics into organized blocs, for better (gender minorities, #MeToo) and worse (Nazis).

    Morgellons is rare, but if you suffer from it, it’s easy for you to locate virtually every other person in the world with the same delusion and for all of you to reinforce and egg on your delusional beliefs.

    Morgellons isn’t the only delusion that the internet reinforces, of course. “Gang stalking delusion” is a belief in a shadowy gang of sadistic tormentors who sneak hidden messages into song lyrics and public signage and innuendo in overheard snatches of other people’s conversations. It is an incredibly damaging delusion that ruins people’s lives.

    Gang stalking delusion isn’t new, either – as with Morgellons, there are historical accounts of it going back centuries. But the internet supercharged gang stalking delusion by making it easy for GSD sufferers to find one another and reinforce one another’s beliefs, helping each other spin elaborate explanations for why the relatives, therapists, and friends who try to help them are actually in on the conspiracy. The result is that GSD sufferers end up ever more isolated from people who are trying mightily to save them, and more connected to people who drive them to self-harm.

    Enter chatbots. Ready access to eager-to-please LLMs at every hour of the day or night means that you don’t even have to find a forum full of people with the same delusion as you, nor do you have to wait for a reply to your anguished message. The LLM is always there, ready to fire back a “yes-and” improv-style response that drives you deeper and deeper into delusion:

    https://pluralistic.net/2025/09/17/automating-gang-stalking-delusion/

    It’s possible that there are delusions that are even more rare than GSD or Morgellons that AI is surfacing. Imagine if you were prone to fleeting delusional beliefs (and whomst amongst us hasn’t experienced the bedrock certainty that we put something down right here, only to find it somewhere else and not have any idea how that happened?). Under normal circumstances, these cognitive misfires might be fleeting moments of discomfort, quickly forgotten. But if you are already habituated to asking a chatbot to explain things you don’t understand, it might well yes-and you into an internally consistent, entirely wrong belief – that is, a delusion.

    Think of how often you noticed “42” after reading Hitchhiker’s Guide to the Galaxy, or how many times “6-7” crops up once you’ve experienced a baseline of exposure to adolescents. Now imagine that an obsequious tale-spinner was sitting at your elbow, helpfully noting these coincidences and fitting them into a folie-a-deux mystery play that projected a grand, paranoid narrative onto the world. Every bit of confirming evidence is lovingly cataloged, all disconfirming evidence is discounted or ignored. It’s fully automated luxury QAnon – a self-baking conspiracy that harnesses an AI in service to driving you deeper and deeper into madness.

    That’s the original “AI psychosis” that the term was coined to describe. As Sam Cole notes in her excellent “How to Talk to Someone Experiencing ‘AI Psychosis,’” mental health practitioners are not entirely comfortable with the “psychosis” label:

    https://www.404media.co/ai-psychosis-help-gemini-chatgpt-claude-chatbot-delusions/

    “Psychosis” here is best understood as an analogy, not a diagnosis, and, as already noted, there is a large cohort of very persistent people who make it their business to eradicate analogies that make reference to medical or health-related phenomena. But these analogies are very hard to kill, because they do useful work in connecting unfamiliar, novel phenomena with things we already understand.

    It’s true that these analogies can be stigmatizing, but they needn’t be. As someone with an autoimmune disorder, I am not bothered by people who describe ICE as an autoimmune disorder in which antibodies attack the host, threatening its very life. I am capable of understanding “autoimmune disorder” as referring to both a literal, medical phenomenon and a figurative, political one. I have never found myself confusing one for the other.

    “AI psychosis” is one of those very useful analogies, and you can tell, because “AI psychosis” has found even more metaphorical uses, describing other bad beliefs about AI. Today, I want to talk about three of these AI psychoses, and how they relate to one another: the investor AI delusion, the boss AI delusion, and the critic AI delusion.

    Let’s start with the investors’ delusion. AI started as an investment project from the usual suspects: venture capitalists, private wealth funds, and tech monopolists with large cash reserves and ready access to loans during the cheap credit bubble. These entities are accustomed to making large, long-shot bets, and they were extremely motivated to find new markets to grow into and take over.

    Growing companies need to keep growing, but not because they have “the ideology of a tumor.” Growing companies’ imperative to keep growing isn’t ideological at all – it’s material. Growth companies’ stock trades at a high price-to-earnings (P/E) multiple, which means that they can use their stock like money when buying other companies and hiring key employees.

    But once those companies’ growth slows down, investors revalue those shares at a much lower P/E multiple, which makes individual executives at the company (who are primarily paid in stock) personally much poorer, prompting their departure, while simultaneously kneecapping the company’s ability to grow through acquisition and hiring, because a company with a falling share price has to buy things with cash, not stock. Companies can make more of their own stock on demand, simply by typing zeroes into a spreadsheet – but they can only get cash by convincing a customer, creditor or investor to part with some of their own:

    https://pluralistic.net/2025/03/06/privacy-last/#exceptionally-american
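To see why multiple compression matters so much, here is a toy calculation (every dollar figure is hypothetical, chosen only for illustration): the same earnings, valued first at a growth-stock multiple and then at a mature-stock multiple, and what that repricing does to a stock-funded acquisition.

```python
# Toy illustration of P/E compression. All figures are hypothetical.
# A company's market cap is (roughly) its earnings times its P/E multiple.

def market_cap(earnings_billion: float, pe_multiple: float) -> float:
    """Market capitalization in $B for given annual earnings and P/E multiple."""
    return earnings_billion * pe_multiple

earnings = 10.0  # $10B annual earnings -- identical in both scenarios

growth_cap = market_cap(earnings, 40)  # priced as a growth stock
mature_cap = market_cap(earnings, 12)  # repriced once growth stalls

# How much of the company must be handed over to fund a $20B all-stock deal:
deal = 20.0
print(f"Growth multiple: ${growth_cap:.0f}B cap -> deal costs {deal / growth_cap:.0%} of the company")
print(f"Mature multiple: ${mature_cap:.0f}B cap -> deal costs {deal / mature_cap:.0%} of the company")
```

Nothing about the business changed between the two lines – only the multiple – yet the same acquisition goes from a rounding error to a painful dilution, which is why executives paid in stock will do almost anything to keep the growth story alive.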

    Tech companies have absurdly large market shares – think of Google’s 90% search dominance – and so they’ve spent 15+ years coming up with increasingly absurd gambits to convince investors that they will continue to grow by capturing other markets. At first, these companies claimed that they were on the verge of eating one another’s lunches (Google would destroy Facebook with G+; Facebook would do the same to YouTube with the “pivot to video”).

    This has a real advantage in that one need not speculate about the potential value of Facebook’s market – you only have to look at Facebook’s quarterly reports. But the downside is that Facebook has its own ideas about whether Google is going to absorb its market, and they are prone to forcefully make the case that this won’t happen.

    After a few tumultuous years, tech giants switched to promoting growth via speculative new markets – metaverse, web3, crypto, blockchain, etc. Speculative new markets are speculative, and the weakness of that is that no one can say how big those markets might be. But that’s also the strength of those markets, because if no one can say how big those markets might be, then who’s to say that they won’t be very big indeed?

    There’s a different advantage to confining your concerns to imaginary things: imaginary things don’t exist, so they don’t contest your public statements about them, nor do they make demands on you. Think of how the right concerns itself with imaginary children (unborn babies, children in Wayfair furniture; children in nonexistent pizza parlor basements, children undergoing gender confirmation surgery). These are very convenient children to advocate for, since, unlike real children (hungry children, children killed in the Gaza genocide, children whose parents have been kidnapped by ICE, children whom Matt Gaetz and Donald Trump trafficked for sex, children in cages at the US border, trans kids driven to self-harm and suicide after being denied care), nonexistent children don’t want anything from you and they never make public pronouncements about whether you have their best interests at heart.

    But as the AI project has required larger and larger sums to keep the wheels spinning, the usual suspects have started to run out of money, and now AI hustlers are increasingly looking to tap public markets for capital. They want you to invest your pension savings in their growth narrative machine, and they’re relying on the fact that you don’t understand the technology to trick you into handing over your money.

    There’s a name for this: it’s called the “Byzantine premium” – that’s the premium that an investment opportunity attracts by being so complicated and weird that investors don’t understand it, making them easy to trick:

    https://pluralistic.net/2022/03/13/the-byzantine-premium/

    AI is a terrible economic phenomenon. It has lost more money than any other project in human history – $600-700b and counting, with trillions more demanded by the likes of OpenAI’s Sam Altman. AI’s core assets – data centers and GPUs – last 2-3 years, though AI bosses insist on depreciating them over five years, which is unequivocal accounting fraud, a way to obscure the losses the companies are incurring. But it doesn’t actually matter whether the assets need to be replaced every two years, every three years, or every five years, because all the AI companies combined are claiming no more than $60b/year in revenue (that number is grossly inflated). You can’t reach the $700b break-even point at $60b/year in two years, three years, or five years.
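The arithmetic is stark enough to check on the back of an envelope. This sketch uses the figures from the paragraph above ($700B sunk, $60B/year claimed revenue, hardware lifetimes of two to five years); the model itself is a deliberate oversimplification that generously treats every revenue dollar as pure profit.

```python
# Back-of-the-envelope break-even check. Dollar figures are the article's
# estimates; the zero-cost-revenue assumption is deliberately generous.

sunk_capital = 700.0    # $B committed so far
annual_revenue = 60.0   # $B/year claimed industry-wide (likely inflated)

# The GPUs and data centers wear out, so the whole capital stock must be
# earned back within one hardware lifetime just to stand still:
for lifetime_years in (2, 3, 5):
    revenue_over_lifetime = annual_revenue * lifetime_years
    shortfall = sunk_capital - revenue_over_lifetime
    print(f"{lifetime_years}-year life: ${revenue_over_lifetime:.0f}B revenue "
          f"vs ${sunk_capital:.0f}B to replace -> ${shortfall:.0f}B short")
```

Even under the most flattering five-year depreciation schedule, total revenue over a hardware lifetime is less than half the replacement cost – which is the point: no honest choice of lifetime makes the numbers work.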

    Now, some exceptionally valuable technologies have attained profitability after an extraordinarily long period in which they lost money, like the web itself. But these turnaround stories all share a common trait: they had good “unit economics.” Every new web user reduced the amount of money the web industry was losing. Every time a user logged onto the web, they made the industry more profitable. Every generation of web technology was more profitable than the last.

    Contrast this with AI: every user – paid or unpaid – that an AI company signs up costs them money. Every time that user logs into a chatbot or enters a prompt, the company loses more money. The more a user uses an AI product, the more money that product loses. And each generation of AI tech loses more money than the generation that preceded it.
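The contrast reduces to the sign of one number: the marginal contribution of each additional use. A minimal sketch (the per-use dollar amounts are invented for illustration; the web-positive/AI-negative split is the article's claim):

```python
# Unit economics as a sign check: does each additional use shrink the
# losses or deepen them? All per-use dollar amounts are hypothetical.

def marginal_contribution(revenue_per_use: float, cost_per_use: float) -> float:
    """Profit (or loss, if negative) contributed by one additional use."""
    return revenue_per_use - cost_per_use

web_page_view = marginal_contribution(revenue_per_use=0.05, cost_per_use=0.01)
ai_prompt     = marginal_contribution(revenue_per_use=0.02, cost_per_use=0.10)

print(f"Web: each use contributes ${web_page_view:+.2f}")
print(f"AI:  each use contributes ${ai_prompt:+.2f}")
```

With a positive sign, scale is the cure: growth carried the web to profitability. With a negative sign, scale is the disease: every new user and every new prompt moves the break-even point further away.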

    To make AI look like a good investment, AI bosses and their pitchmen have to come up with a story that somehow addresses this phenomenon. Part of that story relies on the Byzantine premium: “Sure, you don’t understand AI, but why would all these smart people commit hundreds of billions of dollars to AI if they weren’t confident that they would make a lot of money from it?” In other words, “A pile of shit this big must have a pony underneath it somewhere!”

    This is a great narrative trick, because it turns losing money into a virtue. If you’ve convinced a mark that the upside of the project is a multiple of the capital committed to it, then the more money you’re losing, the better the investment seems.

    So this is the first AI psychosis: the idea that we should bet the world’s economy on these highly combustible GPUs and data centers with terrible unit economics and no path to break-even, much less profitability.

    Investors’ AI psychosis is cross-fertilized by our second form of AI psychosis, which is the bosses’ AI psychosis: bosses’ bottomless passion for firing workers and replacing them with automation.

    Bosses are easy marks for anything that lets them fire workers. After all, the ideal firm is one that charges infinity for its outputs (hence the market’s passion for monopolies) and pays nothing for its inputs (e.g. “academic publishing”).

    This means that the fact that a chatbot can’t do your job isn’t nearly as important as the fact that an AI salesman can convince your boss to fire you and replace you with a chatbot that can’t do your job. Bosses keep replacing humans with defective chatbots, with catastrophic consequences, like Amazon’s cloud service crashing:

    https://www.techradar.com/pro/recent-aws-outages-blamed-on-ai-tools-at-least-two-incidents-took-down-amazon-services

    Bosses are haunted by the ego-shattering knowledge that they aren’t in the driver’s seat: if the boss doesn’t show up for work, everything continues to operate just fine. If the workers all stay home, the business grinds to a halt. In their secret hearts, bosses know that they’re not in the driver’s seat – they’re in the back seat, playing with a Fisher Price steering wheel. AI dangles the possibility of wiring that toy steering wheel directly into the drive-train, so that the company’s products go directly from the boss’s imagination to the public without the boss having to ask people who know how to do things to execute their cockamamie schemes:

    https://pluralistic.net/2026/01/05/fisher-price-steering-wheel/#billionaire-solipsism

    This is a powerfully erotic proposition for bosses, the realization of the libidinal fantasy in which sky-high CEO salaries can be justified by the fact that everything that happens in the company is truly, directly attributable to the boss. Like the delusional person who can be led deeper and deeper into a fantasy world by a chatbot, a boss’s delusion that they are worth thousands of times more than their workers makes them easy prey for a chatbot salesman that pushes them deeper and deeper into that delusion, until they bet the whole company on it.

    Now we come to the third and final novel AI psychosis, the critics’ psychosis, that AI is an abnormally terrible technology. This is a species of “criti-hype,” which is when critics repeat the hyped-up claims of the companies they’re targeting, but as criticism (think of all the people who believed and uncritically amplified the ad-tech industry’s self-serving claims of being able to control our minds by “hacking our dopamine loops”):

    https://peoples-things.ghost.io/youre-doing-it-wrong-notes-on-criticism-and-technology-hype/

    AI is a normal technology. The people who made it, and the circumstances under which it was made, are normal. Its uses and abuses are normal. That doesn’t make it good, but it does make it unexceptional:

    https://www.normaltech.ai/p/a-guide-to-understanding-ai-as-normal

    The exceptional part of AI isn’t the technology, it’s the bubble. There’s nothing about AI per se that makes it exceptionally prone to devouring our natural resources, or endangering our jobs, or abetting war crimes. That’s all because of the bubble, and the bubble relies on the idea that AI is exceptional, not normal. Repeating and amplifying claims about AI’s exceptionalism helps the AI companies, because they rely on exceptionalism to keep the capital flowing and the bubble inflating.

    AI is a normal technology. It’s normal for a technology to be invented by unlikable and immoral people and institutions. Not every technology is invented by a shitty person, but shitty people and institutions are well represented (and possibly disproportionately represented) in the history of technology. Charles Babbage invented the idea of general purpose computers as a way of improving labor control on slave plantations:

    https://logicmag.io/supa-dupa-skies/origin-stories-plantations-computers-and-industrial-control/

    Ada Lovelace wasn’t interested in making slavery more efficient, but neither was she driven by pure scientific inquiry. She invented programming to help her bet on the horses (it didn’t work):

    https://en.wikipedia.org/wiki/Ada_Lovelace

    The silicon transistor was co-invented by William Shockley, one of history’s great pieces of shit, a eugenicist committed to exterminating all non-white people – and a manager so toxic that he never managed to ship a commercial product:

    https://pluralistic.net/2021/10/24/the-traitorous-eight-and-the-battle-of-germanium-valley/

    IBM built the tabulators for Auschwitz. HP were the Pentagon’s go-to contractors for any tech project that was so dirty no one else would touch it. We only got Unix because Bell Labs committed so many antitrust violations that they weren’t allowed to productize it themselves.

    It’s not exceptional for AI companies to have terrible, piece-of-shit founders. It’s not exceptional for these companies to participate in war crimes. It’s not exceptional for these founders to want to pauperize workers. It’s not exceptional for these companies to lie about their products, bankrupt naive investors through stock swindles, and pitch themselves to investors as a way for capital to win the class war.

    None of this means that AI companies are good, it just means that they are not exceptional. And because they aren’t exceptional, the same dynamics that govern other technologies apply to AI companies’ products. Their utility is a function of what they do, not who made them or how they were sold. The utility of AI products is based on whether people find ways to use them that make them happy – not whether the people who made those technologies are good people, or whether the funding for the technology was fraudulent, or whether other people use the technology to harm others.

    Automation comes in two flavors: there’s automation that produces things more quickly (and hence more cheaply), and there’s automation that makes better things. Generally, capital prefers to use automation to increase the pace at which things are made, while workers prefer to use automation to improve the quality of the things they make.

    Think of a hobbyist who pines for an automated soldering machine. That hobbyist longs to make board-level repairs and modifications that require precision that humans struggle to match. The hobbyist is a centaur, using a machine to help achieve human goals.

    Now think of a factory owner who invests in an assembly line of the same machines: that boss wants to fire a bunch of workers and make the survivors of the purge take up the slack. The boss wants to achieve corporate goals, to “sweat the assets,” making maximum use of the soldering machines. The pace at which the line runs is set to be the maximum that the workers can match. The workers on the line are “reverse centaurs” – humans who are pressed into service as peripherals for machines, at a pace that is constantly at the very limit of their endurance.

    Reverse centaurs are trapped in capital’s automation plan – to make everything faster and cheaper. But that’s the result of bosses. It’s not the result of technology.

    This is not to say that technology is apolitical. Only a fool would imagine that there are no politics embedded in technology. But you’d be a far greater fool if you asserted that the politics of a technology were simple, clear, and immutable.

    Nor is this to say that when workers get to decide when and how to use technology, we will always make wise decisions. Perhaps the hobbyist who opts for an automated soldering machine will lose out on the opportunity to refine their hand-eye coordination in ways that will have many other benefits to their practice.

    Or perhaps attempting to improve their hand-eye coordination to that point will wreck so many projects that they grow discouraged and give up altogether. Others’ choices that seem unwise to you might have perfectly good explanations that aren’t visible from your perspective. Ultimately, the world is a better place where workers get to decide which parts of their jobs they want to automate and which parts they want to lean into.

    This is an extremely normal technological situation: for a new technology to be promoted and productized by shitty people who have grandiose goals that would be apocalyptic should they ever come to pass – and for some people to find uses of that technology that are nevertheless beneficial to them and their communities.

    The belief that AI is an exceptionally bad technology (as opposed to an exceptionally bad economic bubble) drives AI critics into their own absurd culs-de-sac.

    There are many, many skilled and reliable practitioners of technical and creative trades who’ve found extremely reasonable, normal ways in which AI has automated some part of their job. They aren’t hyperventilating about how AI has changed everything forever and the world is about to end. They’re not mistaking AI for god, or a therapist.

    They’re just treating AI like a normal technology, like a plugin. Programmers’ tools have acquired useful automation plugins at regular intervals for decades – syntax checkers, advanced debuggers, automated wireframe utilities. For many programmers – including several of my acquaintance, whom I know to be both thoughtful and skilled – AI is another plugin, one they find useful enough to be modestly enthusiastic about.

    It is nuts to deny the experiences these people are having. They’re not vibe-coding mission-critical AWS modules. They’re not generating tech debt at scale:

    https://pluralistic.net/2026/01/06/1000x-liability/#graceful-failure-modes

    They’re just adding another automation tool to a highly automated practice, and using it when it makes sense. Perhaps they won’t always choose wisely, but that’s normal too. There’s plenty of ways that pre-AI automation tools for software development led programmers astray. A skilled, centaur-configured programmer learns from experience which automation tools they should trust, and under which circumstances, and guides themselves accordingly.

    It’s only the belief that AI is exceptional – exceptionally wicked, but exceptional nevertheless – that leads critics to decide that they are a better judge of whether a skilled worker should or should not use certain automation tools, and to make that judgment not based on the quality of the work in question, but on the moral character of the tool itself.

    AI is just normal. The bubble is what drives the environmental costs. If the only LLMs were a couple big data-centers at Sandia National Labs, no one would be particularly exercised about the water and energy demands they represented. Big scientific endeavors – from NASA launches to the Large Hadron Collider – often come with immense material and energy needs. The bubble causes massive, wasteful, duplicative efforts that chase diminishing returns through farcical scale.

    Nor are AI bros exceptional. The stock swindlers who’ve blown $700b (and counting) on AI aren’t cyber-Svengalis with the power to cloud investors’ minds. They’re just running the same con that tech has been running ever since its returns started to taper off and survival became a matter of ginning up enthusiasm for speculative new ventures.

    That doesn’t mean those people aren’t awful shits. Fuck those people. It just means that they’re normal awful shits. We don’t have to burnish their reputations by elevating them to the status of archdemons who taint everything they touch with unwashable sin. Sam Altman isn’t Lex Luthor. He’s just a conman:

    https://open.substack.com/pub/garymarcus/p/breaking-sam-altmans-greed-and-dishonesty?r=8tdk6&utm_medium=ios

    The fact that these bros are just normal assholes means that we don’t have to treat everything they do as a sin. Scraping the entirety of human knowledge to make something new out of it isn’t “stealing.” Depending on why you’re doing it, it can be archiving, or making a search engine:

    https://pluralistic.net/2023/09/17/how-to-think-about-scraping/

    Too many AI critics have started from the undeniable fact that these guys are odious creeps who boast about wanting to ruin the lives of workers and then worked backwards to find the sin. The sin isn’t performing mathematical analysis on all the books ever written. That’s actually kind of awesome. It’s the kind of thing Aaron Swartz used to do – like when he ingested every law review article ever published and used it to trace the way that oil companies’ donations to law schools resulted in profs writing articles about why Big Oil can’t be held liable for trashing the planet:

    https://web.archive.org/web/20111129181943/https://www.stanfordlawreview.org/print/article/punitive-damages-remunerated-research-and-legal-profession

    AI bros’ sin isn’t making copies of published works. Hammering servers with badly behaved crawlers is a dick move and fuck them for doing it. But if these jerks made well-behaved scrapers that placed no abnormal demand on servers, it’s not like their critics would say, “Oh, I guess it’s fine, then.”

    AI bros’ sin is running an economy-destroying, planet-wrecking stock swindle whose raison d’etre is pauperizing every worker and transferring 100% of the dying world’s wealth to a small cadre of morbidly wealthy, eminently guillotineable plutes. Making plugins? That’s not exceptional. It’s just normal.

    The fact that something is normal doesn’t make it good. There’s a lot of normal things that I’d like to throw into the Sun. But we don’t do ourselves any favors when we amplify our enemies’ self-aggrandizing narratives by accusing them of being exceptional, even when we mean “exceptionally evil.” They’re normal assholes.

    Fuck ’em.

    (Image: ZeptoBars, CC BY 3.0, modified)


    Hey look at this (permalink)



    A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

    Object permanence (permalink)

    #15yrsago Notorious financier gets a “super-injunction” prohibiting the press from revealing that he is a banker https://www.telegraph.co.uk/finance/newsbysector/banksandfinance/8373535/Sir-Fred-Goodwin-former-RBS-chief-obtains-super-injunction.html

    #10yrsago Shortly after her death, Harper Lee’s heirs kill cheap paperback edition of To Kill a Mockingbird https://newrepublic.com/article/131400/mass-market-edition-kill-mockingbird-dead

    #10yrsago Web security company breached, client list (including KKK) dumped, hackers mock inept security https://arstechnica.com/information-technology/2016/03/after-an-easy-breach-hackers-leave-tips-when-running-a-security-company/

    #10yrsago Microsoft spams corporate users with messages denigrating their IT departments https://web.archive.org/web/20160309195537/https://www.infoworld.com/article/3042397/microsoft-windows/admins-beware-domain-attached-pcs-are-sprouting-get-windows-10-ads.html

    #10yrsago Cycle and Recycle: gorgeous photos of the European recycling process https://www.wired.com/2016/03/paul-bulteel-cycle-recyle-europe-recycles-tons-of-waste-and-its-pretty-gorgeous/

    #10yrsago Fellowships for “Robin Hood” hackers to help poor people get access to the law https://web.archive.org/web/20160304221459/https://labs.robinhood.org/fellowship/

    #10yrsago 3D printed battle-armor for cats https://web.archive.org/web/20160311224139/http://sinkhacks.com/making-3d-printed-cat-armor/

    #10yrsago Great moments in the history of black science fiction https://web.archive.org/web/20160308034421/http://www.fantasticstoriesoftheimagination.com/a-crash-course-in-the-history-of-black-science-fiction/

    #1yrago Daniel Pinkwater’s “Jules, Penny and the Rooster” https://pluralistic.net/2025/03/11/klong-you-are-a-pickle-2/#martian-space-potato


    Upcoming appearances (permalink)

    A photo of me onstage, giving a speech, pounding the podium.



    A screenshot of me at my desk, doing a livecast.

    Recent appearances (permalink)



    A grid of my books with Will Stahle covers..

    Latest books (permalink)



    A cardboard book box with the Macmillan logo.

    Upcoming books (permalink)

    • “The Reverse-Centaur’s Guide to AI,” a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026
    • “Enshittification: Why Everything Suddenly Got Worse and What to Do About It” (the graphic novel), First Second, 2026

    • “The Post-American Internet,” a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

    • “Unauthorized Bread”: a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, First Second, 2027

    • “The Memex Method,” Farrar, Straus, Giroux, 2027



    Colophon (permalink)

    Today’s top sources:

    Currently writing: “The Post-American Internet,” a sequel to “Enshittification,” about the better world the rest of us get to have now that Trump has torched America (1081 words today, 48461 total)

    • “The Reverse Centaur’s Guide to AI,” a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
    • “The Post-American Internet,” a short book about internet policy in the age of Trumpism. PLANNING.

    • A Little Brother short story about DIY insulin PLANNING


    This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

    https://creativecommons.org/licenses/by/4.0/

    Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


    How to get Pluralistic:

    Blog (no ads, tracking, or data-collection):

    Pluralistic.net

    Newsletter (no ads, tracking, or data-collection):

    https://pluralistic.net/plura-list

    Mastodon (no ads, tracking, or data-collection):

    https://mamot.fr/@pluralistic

    Bluesky (no ads, possible tracking and data-collection):

    https://bsky.app/profile/doctorow.pluralistic.net

    Medium (no ads, paywalled):

    https://doctorow.medium.com/

    Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

    https://twitter.com/doctorow

    Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

    https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

    “When life gives you SARS, you make sarsaparilla” -Joey “Accordion Guy” DeVilla

    READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (“BOGUS AGREEMENTS”) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

    ISSN: 3066-764X

  • ‘I missed my chemo and have a £12,000 hotel bill’: British holidaymakers stranded by Iran war

    Flights are restricted due to the conflict, leaving people stuck and running up bills for rooms and food.
  • A.B. 1043’s Internet Age Gates Hurt Everyone

    EFF has long warned against age-gating the internet. Such mandates strike at the foundation of the free and open internet. They create unnecessary and unconstitutional barriers for adults and young people to access information and express themselves online. They hurt small and open-source developers. And none of the available age verification options are perfect in terms of protecting private information, providing access to everyone, and safely handling sensitive data. 

    Last year, EFF raised concerns about A.B. 1043 as one of several bills in the California legislature that took the wrong approach to protecting young people online—by focusing on censorship rather than privacy. Now that A.B. 1043 is set to go into effect in 2027, we’ve received a lot of questions about its possible effects. 

    A.B. 1043’s Censorship Trap

    Even proposals that may not explicitly mandate age verification, such as A.B. 1043, can still create many of the same censorship problems. A.B. 1043 requires all operating systems and app stores to create age-bracketing systems that segment their users based on their ages. Users are then required to provide operating systems and apps their birth date or age so that they can be placed in their respective age bracket. A.B. 1043 also requires application and software developers to collect this age-bracket information when a user wants to use that software or application.

    A.B. 1043 treats the age-bracket signal sent by a user as giving the application or service actual knowledge of users’ ages. Knowledge that the user is a minor could provide the basis for liability under other laws, such as the California Age-Appropriate Design Code.

    The result is a recipe for censorship. Applications and software developers for operating systems may interpret A.B. 1043 and its potential enforcement by the California Attorney General as requiring them to exclude users who say they are minors or who don’t fit in a specific age bracket they believe is acceptable to use their application or software. But minors have a First Amendment right to access the vast majority of these apps and services. What California has done is essentially outsource censorship to developers, who are likely to lean into over-censorship.

    Broad Language Undercuts Policy Goals

    A.B. 1043’s one-size-fits-all approach is also problematic because it disregards the many ways in which we make and use digital tools. It assumes the internet and digital devices begin and end with the dominant technology companies and device makers, when we know that’s not the case. Additionally, many families share devices, especially in low-income households. These proposals do not account for situations where there is more than one user of a device.

    Additionally, broad proposals that demand the implementation of such censorship tools under the guise of protecting young people’s safety force developers to reach for imperfect solutions—or risk being found non-compliant and pushed out of markets. Many of these mandates imagine technology that does not currently exist. Such poorly thought-out mandates, in truth, cannot achieve the purported goal of age verification. Often, they are easy to circumvent and many also expose consumers to real data breach risk.

    Squeezing Small and Open-Source Developers Hurts Everyone

    A.B. 1043’s burdens fall particularly heavily on developers who aren’t at large, well-resourced companies, such as those developing open-source software. Not recognizing the diversity of software development when thinking about liability in these proposals effectively limits software choices—which is especially harmful at a time when computational power is being rapidly concentrated in the hands of the few. This harms users’ and developers’ right to free expression, their digital liberties, privacy, and ability to create and use open platforms. It also, perversely, entrenches the dominance of major operating system developers and device makers.

    A.B. 1043 and similar proposals also raise considerable implementation issues because they cast a potentially wide net. A.B. 1043, for example, carves out “broadband internet access service,” “telecommunications service,” and the “use of a physical product,” whereas “mobile devices” and “computers” are covered. However, so many devices could fall into these categories; people consider smart watches to be computers, for example. Virtually every digital device that runs software built in the past three decades could fall into that category. This means that consumers may have to submit age information to more companies than ever, again increasing the possibility of data misuse and data breach.

    There Is Still A Better Way

    Legislators do not need to sacrifice their constituents’ First Amendment rights and privacy to make a safer internet; they can still address many of the harms these proposals seek to mitigate. Many lawmakers have recognized these approaches, such as data minimization, in their proposals. Rather than creating age gates, a well-crafted privacy law that empowers all of us—young people and adults alike—to control how our data is collected and used would be a crucial step in the right direction.

  • Rep. Finke Was Right: Age-Gating Isn’t About Kids, It’s About Control

    When Rep. Leigh Finke spoke last month before the Minnesota House Commerce Finance and Policy Committee to testify against HF1434, a sweeping proposal to age-gate the internet, she began with something disarming: agreement.

    “I want to support the basic part of this,” she said, referring to the shared goal of protecting young people online. That goal is not controversial: everyone wants kids to be safe. But HF1434, Minnesota’s proposed age-verification bill, simply won’t “protect children.” It mandates that websites hosting speech that is protected by the First Amendment for both adults and young people verify users’ identities, often through government IDs or biometric data. As we’ve discussed before, the bill’s definition of speech that lawmakers deem “harmful to minors” is notoriously broad—broad enough to sweep in lawful, non-pornographic speech about sexual orientation, sexual health, and gender identity.

    Rep. Finke, an openly transgender lawmaker, next raised a point that her critics have since tried to distort: age-verification laws like the Minnesota bill are already being used to block young LGBTQ+ people from exercising their First Amendment rights to access information that may be educational, affirming, or life-saving. Referencing the Supreme Court case Free Speech Coalition v. Paxton, she noted that state attorneys general have been “almost jubilant” about the ability to use these laws to restrict queer youth from accessing content. “We know that ‘prurient interest’ could be, for many people, the very existence of transgender kids,” she added, referring to the malleable legal standard that would govern what content must be age-gated under the law.

    But despite years’ worth of evidence to back her up, Finke has faced a wave of attacks from countless media outlets and religious advocacy groups for her statements. Rep. Finke’s testimony was repeatedly mischaracterized as not having young people’s best interests in mind, when really she was accurately describing the lived reality of LGBTQ+ youth and advocating in support of their access to vital resources and community.

    In fact, this backlash proves her point. Beyond attempting to silence queer voices and to scare other legislators from speaking up against these laws, it reveals how age-verification mandates are part of a larger effort to give the government much greater control of what young people are allowed to say, read, or see online. 

    Rep. Finke was also right that these proposals are bad policy—they prevent all young people from finding community online—and that they violate young people’s First Amendment rights.

    Why FSC v. Paxton Matters

    Rep. Finke was similarly right to bring up the Paxton case, because beyond the troubling Supreme Court precedent it produced, Texas’s age-verification law also drew an extraordinary number of supporting amicus briefs from anti-LGBTQ organizations (some even designated hate groups by the Southern Poverty Law Center).

    In FSC v. Paxton, the Supreme Court gave Texas the green light to require age verification for sites where at least one-third of the content is sexual material deemed “harmful to minors,” which generally means explicit sexual content. This ruling, premised on the idea that young people do not have a First Amendment right to access explicit sexual content, allows states to enact onerous age-verification rules that will block adults from accessing lawful speech, curtail their ability to be anonymous, and jeopardize their data security and privacy. These are real and immense burdens on adults, and the Court was wrong to ignore them in upholding Texas’s law.

    But laws enacted by other states and Minnesota HF 1434 go further than the Texas statute. Rather than restricting minors from accessing sexual content, these proposals expand what the state deems “harmful to minors” to include any speech that may reference sex, sexuality, gender, and reproductive health. But young people have a First Amendment right both to speak on those topics and to access information online about them.

    We will continue to fight against all online age restrictions, but bills like Minnesota’s HF 1434, which seek to restrict minors from accessing speech about their bodies, sexuality, and other truthful information, are especially pernicious.

    EFF and Rep. Finke are on the same page here: age verification mandates create immense harm to our First Amendment rights, our right to privacy, as well as our online safety and security. These proposals also fully ignore the reality that LGBTQ young people often rely on the internet for information they cannot get elsewhere. 

    But the Paxton case, and the coalition behind it, illustrates exactly how these laws can be weaponized. They weren’t there just to stand up for young people’s privacy online—they were there to argue that the state has a compelling interest in shielding minors from material that, in practice, often includes LGBTQ content. Ultimately, these groups would like to age-gate not just porn sites, but also any content that might discuss sex, sexuality, gender, reproductive health, abortion, and more.

    Using Children as Props to Enact Censorship 

    The coalition of organizations that filed amicus briefs in support of Texas’s age verification law tells us everything we need to know about the true intentions behind legislating access to information online: censorship, surveillance, and control. After all, if the race to age-gate the internet was purely about child safety, we would expect its strongest supporters to be child-development experts or privacy advocates. Instead, the loudest advocates are organizations dedicated to policing sexuality, attacking LGBTQ+ folks and reproductive rights, and censoring anything that doesn’t fit within their worldview.

    Below are some of the harmful platforms that the organizations supporting the age-gating movement are advancing, and how their arguments echo in the attacks on Rep. Finke today:

    Policing sexuality, bodily autonomy, and reproductive rights

    Many of the organizations backing age-verification laws have spent decades trying to restrict access to accurate sexual health information and reproductive care.

    Groups like Exodus Cry, for example, which filed a brief in support of the Texas AG in the SCOTUS case, frame pornography as part of a broader moral crisis. Founded by a Christian dominionist activist, Exodus Cry advocates for the criminalization of porn and sex work, and promotes a worldview that defines “sexual immorality” as any sexual activity outside marriage between one man and one woman. Its leadership describes the internet as a battleground in a “pornified world” that has to be reclaimed.

    Another brief in support of the age-verification law was filed by a group of organizations including the Public Advocate of the United States (an SPLC-designated hate group) and America’s Future. America’s Future is an organization that was formed to “revitalize the role of faith in our society” and fiercely advocates in favor of trans sports bans.

    These groups see age-verification laws as attractive solutions because they create a legal mechanism to wall off large swaths of content that merely mentions sex from not only young people but millions of adults, too.

    Attacking LGBTQ+ Rights

    Several of the most prominent legal advocates behind age-verification laws have also led the crusade against LGBTQ+ equality. The internet that these groups envision is one that heavily censors critical and even life-saving LGBTQ+ resources, community, and information. 

    The Alliance Defending Freedom (ADF), for instance (another SPLC-designated hate group), built its reputation on litigation aimed at rolling back LGBTQ+ protections—including allowing businesses to refuse service to same-sex couples, criminalizing same-sex relationships abroad, and restricting transgender rights.

    Then there are other groups, like Them Before Us and the Women’s Liberation Front, both of which submitted amicus briefs in support of the Texas Attorney General and are devoted to upending LGBTQ+ rights in the United States. Them Before Us says it’s “committed to putting the rights and well-being of children ahead of the desires and agendas of adults.” But it’s also running a campaign to “End Obergefell,” the 2015 Supreme Court case that upheld the right to same-sex marriage, and has been on the cutting edge of transphobic campaigning and pseudoscientific fearmongering about IVF and surrogacy. The Women’s Liberation Front, meanwhile, has a long track record of supporting transphobic policies such as bathroom bills, bans on gender-affirming healthcare, and efforts to define “sex” strictly as the biological sex assigned at birth.

    Through cases like FSC v. Paxton, groups like these three continue to advance a vision of society that creates government mandates to enforce their worldviews over personal freedom, while hiding behind a shroud of concern for children’s safety. But when they also describe LGBTQ+ people as “evil” threats to children and run countless campaigns against their human rights, they are being clear about their intentions. This is why we continue to say: the impact of age verification measures goes beyond porn sites.

    Expanding censorship beyond the internet into real-life public spaces

    As we’ve said for years now, the push to age-gate the internet is part of a broader campaign to control what information people can access in public life both on- and offline. Many of the same organizations advancing these proposals claim to be acting on behalf of young people, but their arguments consistently use children as props to justify giving the government more control over speech and information.

    Many of the organizations advocating for online age verification have also supported book bans, attacks on DEI policies and education, and efforts to remove LGBTQ+ materials from schools and libraries. Two of the organizations that supported the Texas Attorney General, Citizens Defending Freedom and the Manhattan Institute, have led campaigns around the country to “abolish DEI” and ban classic books like “The Bluest Eye” by Toni Morrison from school libraries. These efforts are no different from the efforts to restrict access to the internet—they reflect a broader strategy to restrict access to ideas or information that these groups find objectionable. And they discourage free thought, inquiry, and the ability for people to decide how to live their lives.

    These campaigns rely on the same core argument: that certain ideas are inherently dangerous to young people and must therefore be restricted. But that framing misrepresents an important reality: if lawmakers genuinely want to address harms that young people experience online, they should start by listening to young people themselves. When EFF spoke directly with young people about their online experiences, they overwhelmingly rejected restrictions on their access to the internet and came back with nuanced and diverse perspectives. Once that principle—that certain ideas are inherently dangerous—is accepted, the internet, once a symbol of free expression, connection, creativity, and innovation, becomes the next logical target. 

    This also wouldn’t be the first time a vulnerable group is used as a prop to advance internet censorship laws. We’ve seen this playbook during the debate over FOSTA/SESTA, where many of the same advocates claimed to speak for trafficking victims/survivors and sex workers, while pushing legislation that ultimately censored online speech and harmed the very communities it invoked. It’s a familiar pattern: invoke a vulnerable group, frame certain speech as a threat, and use that as a way to expand government control over the flow of information. And as we said in the fight against FOSTA: if lawmakers are serious about addressing harms to particular communities, they should start by talking to those communities. This means that lawmakers seeking to address online harms to young people should be talking to young people, not groups who claim their interests. 

    Rep. Finke Was Not Radical. She Was Right.

    The Paxton case, and the coalition backing age verification laws in the U.S., shows us exactly why the messaging around these laws draws superficial support from parents and lawmakers. But we’ve heard the quiet part said out loud before. Marsha Blackburn, a sponsor of the federal Kids Online Safety Act, has said that her goal with the legislation was to address what she called “the transgender” in society. When lawmakers and advocacy groups frame queer existence itself as a threat to young people, age-verification laws become ideological enforcement instead of regulatory policy.

    In defending free speech, privacy, and the right of young people to access truthful information about themselves, Rep. Leigh Finke was not radical—she was right. She was warning that broad, ideologically driven laws will be used to erase, silence, and isolate young people under the banner of child protection. 

    What’s at stake in the fight against age verification is not just a single bill in a single state, or even multiple states, for that matter. It’s about whether “protecting children” becomes a legal pretext for embedding government control over the internet to enforce specific moral and religious judgments—judgments that deny marginalized people access to speech, community, history, and truth—into law. 

    And more people in public office need the courage of Rep. Finke to call this out.

  • Court Officially Orders U.S.-Based IPTV Operator to Pay Amazon & Netflix $18.75 Million

    In March of 2024, the Dallas-based IPTV operator William Freemon was sued for copyright infringement by Amazon, Netflix, and several major Hollywood studios.

    Freemon defended himself but failed to hire a lawyer for his company, Freemon Technology Industries (FTI). Instead, he responded by filing various motions while refusing to formally answer the copyright infringement complaint.

    With the case not moving forward, the movie companies eventually had enough and requested a default judgment of $18,750,000 in copyright damages.

    Last month, a Texas magistrate judge recommended granting this in full, and this week, the order was formally adopted by U.S. District Judge Sam A. Lindsay.

    Judge Grants $18,750,000 Judgment

    As detailed in our earlier coverage, Freemon allegedly operated four unauthorized streaming services: Streaming TV Now, TV Nitro, Instant IPTV, and Cash App IPTV. In addition, he was accused of running a pirate IPTV reseller operation called Live TV Resellers.

    ‘Streaming TV Now’ was the most popular of the IPTV services, according to the legal paperwork. It first appeared online in 2020 and offered access to 11,000 live channels, as well as on-demand access to over 27,000 movies and 9,000 TV series.

    The studios identified a sample of 125 copyrighted works that were available through the IPTV services, including Universal’s Oppenheimer. As damages compensation, the court granted the recommended statutory maximum of $150,000 per work for willful infringement, for a total of $18,750,000.

    This judgment amount will continue to grow, as the court approved a 3.51% annual post-judgment interest rate until the amount is paid in full. In addition, the attorneys’ fee award has yet to be determined and will also add to the total.
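    The arithmetic behind the award and its growth is straightforward: 125 works at the $150,000 statutory maximum, then 3.51% interest on the balance. A minimal sketch (assuming annual compounding purely for illustration; the actual computation is governed by the court’s order):

    ```python
    # Illustrative only: how the $18.75M award grows at the court-approved
    # 3.51% annual post-judgment interest rate, assuming annual compounding.

    def judgment_balance(principal: float, rate: float, years: int) -> float:
        """Balance after a number of full years of annually compounded interest."""
        return principal * (1 + rate) ** years

    principal = 125 * 150_000   # 125 works x $150,000 statutory maximum
    print(f"Principal: ${principal:,}")   # $18,750,000

    for years in (1, 3, 5):
        balance = judgment_balance(principal, 0.0351, years)
        print(f"After {years} year(s): ${balance:,.0f}")
    ```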

    From the default judgment

    In addition to the damages, Judge Lindsay also entered a permanent injunction, which bars Freemon and FTI from reproducing, distributing, or publicly performing any of the plaintiffs’ copyrighted works, and from assisting others in doing so.

    Injunction Targets Domain Names

    The signed injunction also requires eight domain names to be transferred immediately to the studios’ control: instantiptv.net, streamingtvnow.com, streamingtvnow.net, tvnitro.net, cashappiptv.com, livetvresellers.com, stncloud.ltd, and stnlive.ltd.

    The associated domain registrars have five days to facilitate these transfers. If they fail to do so, the TLD registries can be ordered to either transfer the domains to a registrar of the studios’ choosing, or place them on registry hold, which would make them inaccessible too.

    To address a potential whack-a-mole scenario, the studios can also return to court to add further domains to the injunction, as long as evidence shows Freemon operates them.

    All in all, the court order is a clear victory for the movie companies. Whether the defendant will be able to pay over $18 million in damages is another matter. The domain seizure order does not have an immediate effect either, as all the mentioned domains have been offline for a while already.

    That said, if Freemon ever attempts to relaunch the services, the movie companies will come prepared.

    A copy of the default judgment, signed March 11, at the U.S. District Court for the Northern District of Texas, is available here (pdf).

    From: TF, for the latest news on copyright battles, piracy and more.

  • Global Coalition Dismantles Massive Cybercriminal Proxy Network

    An international law enforcement operation has dismantled a sprawling, illicit IP proxy network used by cybercriminals to mask their digital footprints, Eurojust announced Thursday. The coordinated strike, spanning eight countries and supported by Europol, successfully knocked the hidden service offline.

    The network is believed to have infected about 369,000 modems and routers across 163 countries with malware, allowing its roughly 124,000 customers to seamlessly route their internet traffic through the compromised devices without the owners’ knowledge. Customers paid for the proxy access anonymously using a dedicated cryptocurrency platform that generated more than five million euros ($5.76 million) in illicit revenue for the administrators.

  • NIH Files Reveal Broader Coronavirus Engineering Research Before COVID-19

    Years before the first known COVID-19 cases in Wuhan, China, a loose network of scientists — backed by U.S. government grants and linked through recurring collaborations — was already running experiments to answer a deceptively simple question: What would it take for newly discovered bat coronaviruses to infect humans?

    The question surfaced repeatedly in grant applications, emails and internal National Institutes of Health reviews. Researchers proposed modifying viral spike proteins — the part that latches onto host cells — to test whether bat viruses could bind more tightly to human receptors. They also explored changes to the viruses’ cleavage sites, molecular switches that allow them to unlock and enter cells more efficiently.

    Those kinds of viral modifications would soon become entangled in the pandemic’s most contentious question: Did SARS-CoV-2 emerge through natural spillover from animals to humans, or through a laboratory incident tied to research intended to anticipate the next outbreak? A flashpoint in that debate has been Project DEFUSE — a 2018 grant proposal submitted to the Pentagon’s Defense Advanced Research Projects Agency, or DARPA.

    Led by the U.S. nonprofit EcoHealth Alliance alongside collaborators including University of North Carolina virologist Ralph Baric, a pioneer in coronavirus reverse genetics, and Zhengli Shi of the Wuhan Institute of Virology, who led China’s largest bat coronavirus sampling program, DEFUSE outlined plans to test spike-protein swaps and cleavage-site insertions in bat coronaviruses. The project was never funded.

    After details of the proposal surfaced in the wake of the pandemic, some proponents of the lab-leak hypothesis argue that DEFUSE reads like a blueprint for SARS-CoV-2. They point to the virus’ distinctive furin cleavage site — a spike feature absent from its closest known relatives that can boost human infectivity and transmissibility — and to DEFUSE’s aim to insert similar human-specific cleavage sites into bat coronavirus spikes. Critics counter that the proposal is irrelevant because DARPA rejected it.

    But a more fundamental question is whether DEFUSE was an isolated idea or part of a broader, ongoing line of research already underway.

    Newly obtained NIH records suggest that the experimental concepts later spotlighted in DEFUSE — tuning bat coronavirus infectivity through spike swaps, receptor-binding changes and cleavage-site insertions — were already embedded in multiple U.S.-funded coronavirus research projects years before the pandemic.

    Late last month, members of the World Health Organization’s Scientific Advisory Group for the Origins of Novel Pathogens (SAGO) addressed DEFUSE directly in a Nature comment.

    “Even if the DEFUSE grant application had been approved,” SAGO wrote, “it is scientifically implausible for SARS-CoV-2 to have been derived from the genome elements in the chimeric vaccine backbone or proposed spike protein.” The group stressed that the broader origins investigation remains open and that critical data gaps persist.

    The records, recently obtained by U.S. Right To Know, do not contradict SAGO’s narrow conclusion about whether SARS-CoV-2 could have arisen directly from the DEFUSE proposal as written. But they show that multiple research proposals were already exploring similar approaches to altering how bat coronaviruses enter cells — experiments aimed at testing whether genetic tweaks could expand the viruses’ ability to infect new hosts, including humans.

    Internal NIH reviews also show that agency scientists recognized the potential hazards. As early as 2016, reviewers warned that modifying spike binding or cleavage sites in recombinant coronaviruses could produce “novel and unexpected” viral traits, even as similar work continued under other federally funded grants.

    The documents show that projects with clear conceptual overlaps were proposed, debated, revised, rejected and approved in various combinations in the years leading up to 2019 — with the same researchers repeatedly involved.

    “That’s absolutely true,” said Dr. Stanley Perlman, a University of Iowa coronavirus researcher, when asked if the broader scientific ecosystem pushed to explore how bat coronaviruses could become more infectious. “There’s no question about that.”

    The newly released records include internal NIH correspondence from the Obama-era pause on funding certain gain-of-function experiments, as well as previously blacked-out University of Minnesota emails now unredacted after five years. Together, they reconstruct how U.S.-Chinese coronavirus collaborations operated and how federal officials assessed the risks.

    At the center of many of those exchanges is Fang Li, a University of Minnesota virologist whose laboratory became a key structural biology hub connecting multiple research groups. Li’s expertise in mapping how spike proteins interact with receptors and antibodies positioned his team to identify key viral features while collaborators constructed recombinant viruses, swapped spikes and ran animal-infection studies.

    One of Li’s collaborations with Baric in 2016 drew scrutiny inside the NIH. Records show agency reviewers concluded that a proposed experiment altering receptor binding in SARS-like bat coronaviruses could generate a virus with enhanced risk — and ultimately blocked the work under the federal gain-of-function funding pause.

    “Novel and unexpected” risks

    In spring 2016, the White House-ordered research funding pause was reshaping virology, and NIH officials were pressing grant applicants to clarify whether their coronavirus experiments might fall within the restrictions.

    The moratorium applied to influenza, SARS and MERS viruses. Closely related bat coronaviruses sometimes fell outside the pause because they had not yet been shown to infect humans. 

    That distinction became central in early 2016 when the NIH reviewed a renewal application from Li and Baric titled “Receptor recognition and cell entry of coronaviruses.”

    Baric, a University of North Carolina virologist widely known for pioneering coronavirus reverse-genetics systems, had collaborated for years with Li and other international researchers studying how coronaviruses adapt to new hosts.

    On March 31, 2016, the NIH notified the University of Minnesota that the Li-Baric grant might include research subject to the funding pause and asked the scientists to clarify whether any proposed work could result in “enhanced pathogenicity and/or transmissibility in mammals via the respiratory route.”

    A few days later, Li and Baric sent their response.

    Most of the grant, they wrote, involved structural biology and pseudovirus systems. But one component — “Experiment 4” — proposed creating live recombinant SARS-like viruses with mutations designed to test how efficiently the viruses could infect different species. In particular, the experiments would test how efficiently the engineered virus could use ACE2 receptors — proteins on the surface of cells that some coronaviruses use as a doorway to infect them.

    The scientists argued that stronger receptor binding — which their experiment was designed for — did not necessarily translate into increased “pathogenicity,” or the virus’ ability to cause disease. They also promised to halt the work if engineered viruses showed substantial increases in replication, meaning the virus multiplied more rapidly in infected cells.

    Inside the NIH, the proposal’s reviewers saw potential risk.

    In an internal “biohazard comment,” a grants manager warned that recombinant coronaviruses engineered to enhance spike cleavage or strengthen ACE2 binding “may have novel and unexpected virulence phenotypes” — or, new and unpredictable traits that could make the virus more dangerous.

    The comment recommended allowing the work under biosafety level 3 precautions before the proposal went to the agency’s internal gain-of-function oversight committee.

    On May 18, 2016, NIH officials made a decision.

    While allowing other parts of the grant to proceed, the agency blocked Experiment 4. Engineering SARS-like viruses with enhanced receptor binding, the NIH concluded, fell under the federal gain-of-function funding pause and “may not be conducted under this grant,” according to a letter signed by NIH program officer Erik Stemmy, who oversaw coronavirus grants at the agency.

    The determination marked an early instance in which the NIH formally concluded that altering receptor binding in bat SARS-like viruses could plausibly create a more dangerous pathogen.

    It also exposed inconsistencies in how the pause was applied. While the NIH halted Li and Baric’s proposed “enhanced affinity” experiments, the agency allowed closely related work to proceed under an EcoHealth Alliance grant involving the Wuhan Institute. In that case, reviewers — including Stemmy — concluded that the bat coronaviruses under study had not yet been shown to infect humans and therefore fell outside the pause’s scope, despite internal questions about the experiments.

    The decision to block Li and Baric’s experiment did not settle the broader scientific debate — or stop the scientists from proposing similar experiments.

    A regulatory gray zone

    Despite the NIH’s 2016 rejection, Li and Baric continued exploring related ideas as federal policy evolved.

    In March 2017 — two months after the first Trump administration took office — Li contacted Stemmy about a new proposal.

    Baric, Li wrote, was “considering making a synthetic construct of a chimeric bat SARS-like coronavirus.”

    Li forwarded a letter signed by Baric formally proposing the idea: Create a hybrid virus by combining most of one bat SARS-like coronavirus, SHC014, with the spike protein from another bat virus recently discovered in Uganda. Emails show Baric had obtained the full sequence of the Uganda virus a month earlier from Columbia University virologist Simon Anthony, who at the time was collaborating with the EcoHealth Alliance and others on the PREDICT Project — a 10-year U.S. Agency for International Development-funded effort led by researchers at the University of California, Davis to catalog emerging viruses in wildlife.

    The new Baric and Li proposal also called for modifying the spike region that binds to ACE2 receptors — essentially adjusting how tightly the Uganda hybrid could attach to cells of different species.

    The team planned to test whether the engineered virus could infect cells carrying human, mouse, bat or civet receptors. If the modified virus replicated efficiently, the researchers proposed constructing full versions of the Uganda virus incorporating those receptor-enhancing mutations.

    In their letter, the scientists argued the experiment would not meet the federal definition of a potential pandemic pathogen, because neither of the original bat viruses had been shown to cause disease in humans.

    They also promised to stop the work if any engineered virus replicated more than 10 times better than the SARS virus used as a benchmark.

    Two weeks later, Stemmy replied that the NIH’s internal committee had not yet reached a decision.

    The proposal, he wrote, had landed in a regulatory transition.

    “Nothing to report yet on this,” Stemmy wrote. “Our internal committee hasn’t met yet. It’s a little bit of a gray area at the moment since the GoF research funding pause is still technically in effect while the department implements the P3CO [Potential Pandemic Pathogen Care and Oversight] policy that will replace it.”

    The available records do not indicate what decision the NIH ultimately reached on the Uganda chimera proposal.

    Neither Li nor Baric responded to requests for comment. The NIH and Stemmy also did not respond to questions about the agency’s grant decisions.

    The exchange illustrates how researchers continued to explore new combinations of bat coronavirus genomes and spike proteins even as federal officials struggled to define where the gain-of-function boundary lay.

    Green lights amid red flags

    The Li-Baric grant was not unique.

    Across multiple projects during the same period, the NIH reviewed — and frequently approved — experiments designed to alter receptor-binding domains, swap spike proteins between viruses or modify cleavage sites that influence how coronaviruses infect cells.

    Under a grant led by Baric and Vanderbilt University virologist Mark Denison, researchers proposed generating mouse-adapted versions of the SARS-related bat viruses WIV1 and SHC014. Internal reviewers noted the resulting strains were likely to show enhanced pathogenicity or transmissibility in animals. An early draft response questioned whether the work fit the grant’s aims, but a later version prepared to approve the experiments.

    Another Baric project constructed SARS chimeras bearing SHC014 or WIV1 spike proteins. The NIH concluded those experiments did not meet its gain-of-function definition at the time, even though the purpose of such spike swaps was to test whether bat viruses could infect new hosts.

    Earlier approvals in 2015 included experiments inserting MERS cleavage-site sequences into the related bat virus HKU4 and altering receptor-binding regions to increase how efficiently the virus could attach to cells. Alterations to MERS cleavage sites — modifications that can influence viral entry efficiency — also were allowed under the same grant.

    Taken together, the records show the NIH blocking certain receptor-binding experiments under the gain-of-function pause while allowing closely related spike-swap and cleavage-site research to proceed under other grants.

    By early 2018, that regulatory landscape formed the backdrop for a more ambitious proposal: DEFUSE.

    Informal scientific exchanges

    The day-to-day emails also show how ideas and genetic sequences moved informally among collaborators.

    In one February 2018 exchange, Li told Shi, the renowned Wuhan Institute coronavirus researcher, that Baric wanted access to her unpublished sequence of a spike protein from a MERS-like bat coronavirus known as “422.”

    Li wrote that he had “mentioned” Shi’s recent work with the virus to Baric, who asked if he could “get the sequence of the 422 spike protein.”

    “I said that I would need to check with you,” Li added.

    Shi replied that she saw “no problem” sharing the information if Baric’s work didn’t overlap too much with her own. She also described experiments her lab already had attempted.

    “We have already done this swapping on the MERS-CoV backbone,” she wrote.

    The recombinant virus, Shi said, could be rescued after transfection, but “couldn’t grow in the following passage.”

    “We have stop here for the moment,” Shi added. “I would encourage him to have a try.”

    The exchange offers a look inside the collaborative scientific ecosystem reflected throughout the records, one in which American and Chinese researchers shared sequences, experimental ideas and preliminary findings in real time.

    The following month, Baric and Shi joined EcoHealth Alliance in submitting the DEFUSE proposal to DARPA.

    Scientists debate the risks

    Perlman, the University of Iowa coronavirus researcher who collaborates with Li, said the broader research ecosystem described in the records was real, though loosely organized.

    Researchers, he said, were trying to determine which bat coronaviruses “could infect human cells” and therefore posed pandemic risk.

    At the same time, Perlman said he believes some of the scientific questions driving chimera experiments could often be addressed using simpler approaches.

    “It wasn’t necessary to make chimeric viruses to get some information,” said Perlman, who added he believes SARS-CoV-2 originated naturally. “Chimeric viruses are not my favorite way.”

    Two other scientists who separately reviewed the newly surfaced NIH files were more critical.

    Simon Wain-Hobson, a British-French virologist who has long opposed gain-of-function research, called one of Baric’s proposed recombinant virus concepts “bonkers.”

    “After the GoF flu virus controversy,” he said, referencing a decade-long debate over whether researchers should intentionally make deadly viruses more contagious to study them, “this shows that Baric has learned nothing.”

    Steve Massey, a bioinformatics professor who also examined the records, said they reveal what he sees as a recurring pattern: researchers — particularly Baric — pushing experimental boundaries, using technical language to “bamboozle” reviewers and persuade them to approve studies he believes constituted gain-of-function research.

    Massey also pointed to what he sees as a throughline: proposals to alter “human protease cleavage sites” in MERS, which he said resemble later cleavage-site engineering debates around SARS-CoV-2.

    “Such experiments could easily enhance pathogenicity or transmissibility,” Massey said. “This is playing with fire.”

    Post outbreak: Testing the furin site

    The records also include a proposal drafted early in the pandemic that focused on one of SARS-CoV-2’s most debated features: its furin cleavage site.

    In early 2020, Baric and Li proposed experiments inserting that cleavage site into RaTG13 — the bat coronavirus most closely related to SARS-CoV-2 — along with additional mutations affecting spike binding and viral entry.

    The goal was to test whether those changes could allow the virus to infect new species or make it infect cells more easily.

    The researchers acknowledged the possibility that such work might require additional review under the Potential Pandemic Pathogen Care and Oversight rules, the federal framework implemented after the gain-of-function funding pause.

    They proposed conducting the experiments under strict biosafety conditions while also pursuing loss-of-function studies designed to weaken the virus.

    But the proposal also made clear what the scientists expected the mutations might do.

    “We anticipate,” the researchers wrote, that inserting the furin cleavage site into RaTG13 may increase the virus’s ability to infect and cause disease in living organisms.

    What the records show

    The newly surfaced documents do not prove that SARS-CoV-2 was engineered or escaped from a laboratory.

    But they provide contemporaneous evidence of how researchers and federal officials were thinking about coronavirus engineering years before the outbreak began.

    NIH reviewers warned that modifying spike proteins could create “novel and unexpected” viral traits. Scientists debated how far those experiments should go. And proposals to reshape receptor binding or cleavage sites appeared across multiple grants.

    By the time the pandemic began, and even before DEFUSE was rejected, the tools and scientific concepts for tuning how coronaviruses enter human cells were no longer speculative. They had already been proposed, debated in federal oversight letters and pursued across an international network of collaborating laboratories.

    The post NIH Files Reveal Broader Coronavirus Engineering Research Before COVID-19 appeared first on Truthdig.

  • AI Accelerates UK Fraud Cases to a Record 444,000 in 2025

    Fraud in the United Kingdom surged to unprecedented levels in 2025, with reported cases hitting a record 444,000, according to new data released Thursday by the fraud prevention service Cifas.

    In its annual Fraudscape report, the agency noted that an average of more than 1,200 fraud cases were recorded daily. Identity fraud remained the most prevalent threat, accounting for 54 percent of all filings, followed by misuse of facility, which represented 24 percent of cases.

    The agency cautioned that fraud has become an industrialized, cross-border threat, with criminal syndicates now “mimicking the size and structure of large corporations,” according to Cifas Chief Executive Mike Haley.

    In its assessment, Cifas projected that online fraud will become increasingly “sophisticated, supercharged by AI-powered impersonation, synthetic media and accessible fraud-as-a-service tools that are likely to ensure that identity fraud and account takeover remain major threats.”

    Haley warned that AI is accelerating fraud “that is increasingly digital, organised and international,” with the technology being used to automate attacks on users and bypass detection.

    “We anticipate more use of AI to personalise attacks and build credible, long-term profiles – reinforcing the need for cross-sector collaboration to spot patterns earlier,” said Cifas’ director of intelligence.

    In a move to combat this surge in sophisticated cybercrime, Meta removed 10.9 million accounts on Facebook and Instagram associated with criminal scam centers and disabled more than 150,000 accounts linked to operations in Southeast Asia. This crackdown also led to the arrest of 21 individuals by Thai police, according to Meta.

    In a Wednesday press release, Meta announced the rollout of new AI protective tools across its social media platforms, including a warning system on WhatsApp to alert users to potential scammers.

    Meta launched the first of such tools in March last year, beginning with an AI advertisement scanner to detect fraudulent images of celebrities. In 2025, the company reportedly removed more than 159 million scam advertisements.

    The industry of cyber scams has evolved into a borderless threat, with the prevalence of global call center scams. In March 2025, OCCRP published Scam Empire, an investigation revealing how criminal call centers use fake investment schemes to target thousands of victims worldwide, exposing the inner workings of these industrial-scale operations.

  • Former Rapper Officially Declared Victor in Nepal’s Post-Uprising Election

    Balendra “Balen” Shah, a 35-year-old former rapper, structural engineer, and mayor of Kathmandu, is set to become Nepal’s next Prime Minister after his landslide victory in the country’s general election was announced Thursday.

    Nepal’s Election Commission released its final report, showing Balen’s Rastriya Swotantra Party (RSP) won 182 of 275 seats in parliament, just two seats shy of a two-thirds supermajority. 

    Running head-to-head against former premier KP Sharma Oli in the constituency of Jhapa-5, Balen beat him by nearly 50,000 votes. Balen received 68,348 votes while Oli got 18,734.

    The former prime minister’s Communist Party of Nepal (Unified Marxist–Leninist) won 25 seats in parliament. Meanwhile, Nepali Congress won 38 seats, Nepali Communist Party won 17 seats, Shram Sanstrikti Party won 7 seats, Rastriya Prajatantra Party won 5 seats, and an independent candidate won one seat.

    The election follows the September 2025 dissolution of the House of Representatives, brought down by a widespread “Gen-Z” protest movement that was sparked by a government ban on social media and fueled by anger over corruption and governance failure.

    While the September protests forced the resignation of KP Sharma Oli, who had been elected prime minister four times previously, more than 2,000 were injured and 77 killed, many of them shot by security forces. Amid the violence, crowds burned a number of buildings, including the Supreme Court and parliament itself.

    Following the election, protest leaders say they are hopeful for the new government to address anti-corruption issues.

    “We, who raised questions on the streets for good governance, voted for change,” Gen-Z leader Rakshya Bam told OCCRP. “Our generation has reclaimed democracy. The issue of anti-corruption is not only for Gen-Z, but for all Nepali people. To ignore this would be to insult the people.”

    Like nearly every candidate in this month’s election, Balen vowed to stamp out graft. He declared his candidacy for prime minister and joined the RSP in January, declaring in his election manifesto: “I will stand at the forefront of Parliament against irregularity and corruption.”

    OCCRP followed Balen on his campaign in Jhapa-5, where he was surrounded by social media creators and he went door-to-door to meet voters.

    The sunglasses-wearing Balen, who has 3.7 million followers on Facebook, refused to give interviews to the media during his campaign and even after his victory.

    During his three-and-a-half-year tenure as the Mayor of Kathmandu, he also refused interviews with local media and faced criticism for the bulldozing of a squatter settlement in the city and for allegedly only listening to a small group of aides.

    However he emerged as a popular national candidate when he expressed solidarity with the Gen-Z protests, writing in a September 7 Facebook post: “Tomorrow’s rally is clearly and spontaneously for Gen Z, they are under 28 years of age, for whom I still look old. I also want to understand their wishes, objectives, and thoughts. … I have full support. Dear Gen Z, tell me what kind of country do you want to see?”

    In its own election manifesto, Balen’s party pledges to legally and transparently investigate the assets of individuals who have held significant public positions since 1990.

    “Any wealth proven to be acquired illegally will be confiscated and nationalized through a clear working procedure,” the RSP manifesto reads.

    RSP is a centrist party established in 2022 whose chairman, Rabi Lamichhane, also won a seat in the election. Lamichhane, however, faces several ongoing court cases on charges of cooperative fraud, money laundering, and organized crime.

    A hearing on the writ petition challenging the withdrawal of the money laundering and organized crime case against Lamichhane is set to begin next week in Nepal’s Supreme Court.

    Meanwhile, a high-level commission led by retired judge Gauri Bahadur Karki, formed to investigate the deadly crackdown during the September 8–9 protests, has provided its report to the government. The report’s findings have not yet been disclosed.

    As Nepal’s new government takes office, those who took to the streets in protest last year said they will be closely monitoring whether it takes meaningful steps toward rooting out entrenched corruption.

    “We hope the government will work on good governance,” Abhishek Shrestha, who was shot in the leg during the Gen-Z protest, told OCCRP. “We will be working as outside watchdogs over it.”

  • Does this video show a ‘Tibetan snow lion’?

    One user claimed the footage depicted the majestic animal climbing Mount Everest.