Blog

  • Neurodiversity – The Opium of the People

    “There is, perhaps, no better established fact in British society than that of the corresponding growth of modern wealth and pauperism. Curiously enough, the same law seems to hold good with respect to lunacy. The increase of lunacy in Great Britain has kept pace with the increase of exports, and has outstripped the increase of population.” Karl Marx in the New York Daily Tribune 1858. (1)

    Over the last fifteen years I have seen the rise of neurodiversity, and of ADHD and autism diagnoses. I have seen friends and acquaintances talk in person and on Facebook about their recent diagnosis, first of their children, now of themselves. I was gobsmacked at my friends’ naivety. Then I openly mocked the concepts of ADHD, autism and neurodiversity on Facebook. I lost Facebook friends, and perhaps real friends, when I questioned the validity of these diagnoses, and it did not matter how politely I did this.

    I stood my ground and now, in this blog, I am exploring why the concept of neurodiversity took off.

    In about 1992 I started investigating psychiatry. I bought Peter Breggin’s book, Toxic Psychiatry (2). In the book he takes apart both psychiatric diagnosis and drug-based treatments. He shows that psychiatric drugs are harmful and often difficult to come off. He also shows that if you put the effort in, most people’s distress is understandable without any diagnosis and often ameliorated with sympathetic understanding. I have often seen both assertions to be true.

    In Toxic Psychiatry, Breggin writes about ADHD and Ritalin. He says Ritalin is dangerous and that the diagnosis has no validity. Instead, it distracts from looking at the causes of distress in children’s lives. He presents a case history of a child who has a diagnosis of ADHD and suggests what the boy is really suffering from is DADD, Dad Attention Deficit Disorder, and that the mother is suffering from HADD, Husband Attention Deficit Disorder. He provides a psychosocial explanation of the child’s distress rather than a medical one. I was reading this before ADHD became a widespread term in the UK, so when it took off here, my understanding was very different from people who accepted the diagnosis.

    I have read other sources on ADHD that echo Breggin’s ideas (3) (4), including the blogs on this site by Cromby and Johnstone (5). I am particularly impressed, but also distressed, by the long-term outcome studies, which show that children given stimulants such as Ritalin have worse behavioural, educational and physical outcomes than children who have similar problems but are not medicated (6). I find it hard to imagine that the long-term outcomes in adults will be different.

    Getting a diagnosis of autism or ADHD can often be reassuring, in much the same way as seeing a homeopath or other complementary healer: they ask you lots of questions about yourself, pay close attention and then come up with a remedy that matches your “symptoms.” I once saw a homeopath after developing chronic fatigue that followed a very bad cold, which itself developed after a relationship break-up. I also went to a large group therapy event straight after seeing the homeopath, and my fatigue went away. It was amazing, but almost certainly placebo.

    Being recognised for who we are and what we have struggled with is important and can sometimes be all we need to turn our lives around, and a good homeopath can provide that. An assessment for ADHD or autism can also do that, but so can a good counsellor or a good friend. Good friends, homeopaths and counsellors hopefully do not tell us we have an incurable lifelong condition that can only be managed, and neither do they prescribe damaging drugs.

    The concept of neurodiversity is based on very weak science. The “neuro” bit is contested, the brain science is weak and inconsistent, and no one’s brains are scanned or tested when people get diagnosed. It is a triumph of marketing over medicine (7).

    Why, then, as a society, have we fallen for the myth of neurodiversity and mass prescribing of stimulants? I turn to politics and economics for an answer.

    The political and economic context of psychiatric diagnosis and treatments

    As society changes psychiatry also changes.

    In the 1970s the post-war boom was over. Full employment, strong unions and high inflation led to massive strike waves (the Winter of Discontent in the UK), rapid changes of government and the UK going to the IMF for a loan. In 1979 Margaret Thatcher was elected Prime Minister in the UK, and in 1980 Ronald Reagan was elected US president. They ushered in a new form of capitalism, neoliberalism, though it had already been pioneered by their predecessors, James Callaghan and Jimmy Carter. They broke the unions, sold off public assets to create new investment opportunities and deregulated the banking sector. Manufacturing started to leave the UK and relocate to the Far East, particularly China, where it was cheaper; the consequence was a massive rise in unemployment.

    Two psychologists, David Smail and Dorothy Rowe, noted an increase in clients in the 1980s as both industry and the state sector restructured. Both wrote that it was important to include the political context in conversations with their clients, so that instead of blaming themselves the clients could see their misery as the outcome of impersonal economic forces.

    In 1987 Prozac was launched onto the market. Prozac was a new type of antidepressant, an SSRI (Selective Serotonin Reuptake Inhibitor), and other similar drugs followed. The serotonin theory of depression was thought up in some drug company marketing department; the myth has been repeatedly debunked, but no one was listening and it became ubiquitous (8). SSRIs’ main effect is to numb emotion. These drugs can be hard to come off, may double the suicide rate, and can numb the sex organs, sometimes permanently (9).

    Bruce Cohen, a socialist commentator on health, argues that in the 1970s there was a battle between capital and labour, which labour lost (10). Prozac was a useful tool which ‘cooled’ the losers: the main effect of the diagnosis of depression was to stop people thinking about how they had been harmed as neoliberalism took hold, while the drug anaesthetised them to their suffering.

    Now about one in six people in the UK are taking antidepressants (11), yet these drugs have been shown to be only slightly more effective than placebo. There are an awful lot of losers.

    There have been even more ‘losers’ since the 1980s. In 2008 we had a banking crash that rivalled the 1929 Wall Street Crash. Wages were low but business needed to sell, and deregulation of the banking sector allowed people to live on credit. Eventually the poor couldn’t pay back the loans on what were euphemistically called ‘subprime mortgages’. Big banks went bust, governments took out massive loans to bail them out, and the poor were evicted.

    After the crash and the so-called Credit Crisis, global capital needed to restructure. The state cut back welfare services, and homelessness increased, as did zero-hours contracts. The benefits system was changed into a system of ritual humiliation, designed to push the poor into accepting low wages and poor working conditions, and increasing numbers of women turned to sex work to pay the bills (12). The era of post-neoliberalism is emerging, spearheaded by Trump’s tariffs.

    The education budget was also cut after 2008 and the education system became more authoritarian with more targets, less autonomy for teachers, strict discipline codes for pupils and increased privatisation via the academies. Academies like to employ young teachers because they are cheaper than experienced ones, but they have often not learnt classroom control skills. Afternoon break has been eliminated in many secondary schools, and sometimes lunch break is only half an hour, so children have no time to socialise and de-stress between lessons. Parents use apps which tell them the homework the children need to do, in addition to any disciplinary measures taken against the child. There is no escape from school discipline even when the children go home.

    It is easier to refer a child who is struggling with school for a mental health assessment than for the school to get to know the child and adapt teaching to the child’s needs. I think neurodiversity, ADHD and autism diagnoses are ways of ignoring children’s needs and the psychosocial causes of their distress. Children I know who have these diagnoses face obvious challenges: divorce and battling parents, fathers with drug problems, and parents who have themselves experienced early loss of a parent, child sexual assault, family violence or unresolved family conflicts – all of which could affect their ability to parent their own children. Possibly, underfunded schools rely on these diagnoses to get extra money, perhaps because they lack the time and resources to get to know their students and make provision for what they need.

    To summarise, I think the spread of the concept of neurodiversity in adults after the 2008 crash is another way of ‘cooling the losers’ while the economy restructures. Since 2015, politics has been dominated by culture wars including Black Lives Matter, Me Too and now controversies about transgenderism. They flare up, die and achieve little. The left has almost completely ignored the working class. In such circumstances antidepressants and neurodiversity provide a degree of solace in this increasingly harsh world, just as gin and laudanum did in Victorian England.

    WHAT IS OUR HISTORY – AND WHERE DO WE GO FROM HERE?

    Psychiatry has been criticised for as long as it has existed. In 1845 the Alleged Lunatics’ Friend Society was formed (13). Their complaints were similar to those made by modern psychiatric survivors: false imprisonment, harsh treatment and a diagnostic system that made no sense. There have always been humanitarian alternatives to psychiatry, such as the York Retreat, started by a Quaker family, the Tukes, in 1796 in reaction to a Quaker dying in York Asylum from brutal treatment (14). And as the Marx quotation that opens this piece shows, the link between the growth of capitalist wealth and the growth of ‘lunacy’ was being drawn as early as 1858 (1).

    The height of the critical psychiatry movement came in Italy in the 1960s and 1970s, where the psychiatrist Franco Basaglia spearheaded a movement that eventually closed most of the asylums in Italy and successfully instigated community psychiatric institutions in which forced treatment hardly exists (15). Basaglia was riding the radicalism of the 60s New Left and had support from a strong left in Italy. The New Left was about politicising the margins: the civil rights movement in the USA, second-wave feminism, gay rights and anti-colonialism, and it included anti-psychiatry. The New Left of the 1950s–70s was a reaction to the Old Left of the 1930s–50s, which meant Stalinism: a belief in socialism in one country enforced by brutal regimes. Soviet psychiatry was brutal and repressive; dissidents were locked up and forcibly drugged with major tranquillisers.

    Stalinism was a reaction to the failure of the Second International, formed in 1889, which believed in working towards socialist revolutions in the core capitalist countries (16). The Russian revolution of 1917 was the height of working-class organisation, but it was quickly followed by the failed German revolution at the end of the First World War (17). That failure left Russia isolated, which is why Stalinism developed; but socialism in one country was never going to last, because capitalism is international and socialism needs to be too if it is to survive. The history of the 20th century is about the failure of the socialist movement, a failure that culminated in the crushing of the revolution in Germany.

    The left regressed during the twentieth century, and we are left with just a remnant of the critical psychiatry/survivor movement. It has been well researched but achieves little.

    My conclusion is that psychiatry is intimately tied up with capitalism, and that a successful critical psychiatry/survivor movement may well depend on a renewed proletarian movement for socialism. A strong socialist movement offers real support to people who are struggling, making it less likely that they will turn to damaging services, and it could also educate people about the dangers of traditional psychiatry. More economically equal countries use their psychiatric services less, with smaller numbers of both voluntary and forced patients. Economic equality depends on how well organised the working class is. The left is almost dead (18), but in a few places seeds of working-class solidarity exist. I point to the Campaign for a Socialist Party in the USA (19) (20), which has organised renters’ unions, resistance to ICE and mutual aid projects in various American cities; its associated organisation in Germany, the KSP (21); the Class Work Project in the UK (22); and D Hunter’s reports on working-class mutual aid on estates in the UK (23) – all examples of working-class solidarity. I think critical psychiatry and survivor movements can be a crucial part of this. Whether these projects grow into real and powerful movements, only time will tell.

    ****

    Mad in the UK hosts blogs by a diverse group of writers. The opinions expressed are the writers’ own.

    1. https://www.marxists.org/archive/marx/works/1858/08/20.htm
    2. https://harpercollins.co.uk/products/toxic-psychiatry-peter-breggin?variant=32552342487118
    3. https://www.madintheuk.com/2020/11/insane-medicine-chapter-3-the-manufacture-of-attention-deficit-hyperactivity-disorder-adhd-part-1/
    4. https://www.worldscientific.com/worldscibooks/10.1142/12752?srsltid=AfmBOoq-EeIDyPDk3iHiRbrx01_mOBXg2rBnxCVlHATxJAuJWHLedXM5#t=aboutBook
    5. https://www.madintheuk.com/2024/12/part-1-neurodiversity-what-exactly-does-it-mean/
    6. https://www.madintheuk.com/2020/11/insane-medicine-chapter-3-the-manufacture-of-adhd-part-2/
    7. https://research.birmingham.ac.uk/en/publications/the-problem-with-neurodiversity/
    8. https://www.nature.com/articles/s41380-022-01661-0
    9. https://www.madinamerica.com/2023/05/critical-psychiatry-textbook-chapter-8-part-one/
    10. https://www.madinamerica.com/2017/03/psychiatric-hegemony-marxist-theory-mental-illness/
    11. https://pharmaceutical-journal.com/article/news/antidepressant-prescribing-increases-by-35-in-six-years
    12. https://www.theguardian.com/commentisfree/2019/oct/26/a-welfare-system-that-drives-mothers-into-prostitution-is-not-a-safety-net
    13. https://en.wikipedia.org/wiki/Alleged_Lunatics%27_Friend_Society
    14. https://en.wikipedia.org/wiki/The_Retreat
    15. https://www.versobooks.com/en-gb/products/79-the-man-who-closed-the-asylums?srsltid=AfmBOopUKrnyh9DlOdBqGJwslGZhrV5t-aHc9jeICHC3bTTA-niHPJ-i
    16. https://en.wikipedia.org/wiki/Second_International
    17. https://www.youtube.com/watch?v=qFLDv4NO8xE
    18. https://platypus1917.org/about/the-left-is-dead-long-live-the-left/
    19. https://www.sublationmag.com/post/socialist-unity
    20. https://campaignforasocialistparty.substack.com/
    21. https://kampagnesozialistischepartei.de/
    22. https://theclassworkproject.com/about-us/
    23. https://dhunterorganising.substack.com/


    The post Neurodiversity – The Opium of the People appeared first on Mad in the UK.

  • RFK Jr. is definitely coming for your vaccines (part 8): “Massive Epidemic of Vaccine Injury,” ACIP, and a prominent oncologist

    The MAHA Institute is holding an event called MEVI Roundtable: Massive Epidemic of Vaccine Injury to fear monger about vaccines. Unfortunately, Dr. Wafik El-Deiry, a prominent oncologist-scientist, will participate.

    The post RFK Jr. is definitely coming for your vaccines (part 8): “Massive Epidemic of Vaccine Injury,” ACIP, and a prominent oncologist first appeared on Science-Based Medicine.

  • Sent 90 miles after giving birth while ‘soaked in urine’

    Four days after giving birth, Lizzy Berryman’s psychosis forced her to be taken from York to Derby for specialist care.
  • NHS England pauses new prescriptions of cross-sex hormones for under-18s

    The health service said young people who already receive the drugs will continue to do so.
  • How AI Assistants are Moving the Security Goalposts

    AI-based assistants or “agents” — autonomous programs that have access to the user’s computer, files, online services and can automate virtually any task — are growing in popularity with developers and IT workers. But as so many eyebrow-raising headlines over the past few weeks have shown, these powerful and assertive new tools are rapidly shifting the security priorities for organizations, while blurring the lines between data and code, trusted co-worker and insider threat, ninja hacker and novice code jockey.

    The new hotness in AI-based assistants — OpenClaw (formerly known as ClawdBot and Moltbot) — has seen rapid adoption since its release in November 2025. OpenClaw is an open-source autonomous AI agent designed to run locally on your computer and proactively take actions on your behalf without needing to be prompted.

    The OpenClaw logo.

    If that sounds like a risky proposition or a dare, consider that OpenClaw is most useful when it has complete access to your entire digital life, where it can then manage your inbox and calendar, execute programs and tools, browse the Internet for information, and integrate with chat apps like Discord, Signal, Teams or WhatsApp.

    Other more established AI assistants like Anthropic’s Claude and Microsoft’s Copilot also can do these things, but OpenClaw isn’t just a passive digital butler waiting for commands. Rather, it’s designed to take the initiative on your behalf based on what it knows about your life and its understanding of what you want done.

    “The testimonials are remarkable,” the AI security firm Snyk observed. “Developers building websites from their phones while putting babies to sleep; users running entire companies through a lobster-themed AI; engineers who’ve set up autonomous code loops that fix tests, capture errors through webhooks, and open pull requests, all while they’re away from their desks.”

    You can probably already see how this experimental technology could go sideways in a hurry. In late February, Summer Yue, the director of safety and alignment at Meta’s “superintelligence” lab, recounted on Twitter/X how she was fiddling with OpenClaw when the AI assistant suddenly began mass-deleting messages in her email inbox. The thread included screenshots of Yue frantically pleading with the preoccupied bot via instant message and ordering it to stop.

    “Nothing humbles you like telling your OpenClaw ‘confirm before acting’ and watching it speedrun deleting your inbox,” Yue said. “I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb.”

    Meta’s director of AI safety, recounting on Twitter/X how her OpenClaw installation suddenly began mass-deleting her inbox.

    There’s nothing wrong with feeling a little schadenfreude at Yue’s encounter with OpenClaw, which fits Meta’s “move fast and break things” model but hardly inspires confidence in the road ahead. However, the risk that poorly-secured AI assistants pose to organizations is no laughing matter, as recent research shows many users are exposing to the Internet the web-based administrative interface for their OpenClaw installations.

    Jamieson O’Reilly is a professional penetration tester and founder of the security firm DVULN. In a recent story posted to Twitter/X, O’Reilly warned that exposing a misconfigured OpenClaw web interface to the Internet allows external parties to read the bot’s complete configuration file, including every credential the agent uses — from API keys and bot tokens to OAuth secrets and signing keys.

    With that access, O’Reilly said, an attacker could impersonate the operator to their contacts, inject messages into ongoing conversations, and exfiltrate data through the agent’s existing integrations in a way that looks like normal traffic.

    “You can pull the full conversation history across every integrated platform, meaning months of private messages and file attachments, everything the agent has seen,” O’Reilly said, noting that a cursory search revealed hundreds of such servers exposed online. “And because you control the agent’s perception layer, you can manipulate what the human sees. Filter out certain messages. Modify responses before they’re displayed.”
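    The exposure O’Reilly describes comes down to an admin interface answering TCP connections on a non-loopback address. A minimal sketch of such a reachability check in Python, using only the standard library (the port number below is a made-up placeholder, not OpenClaw’s real default):

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising on failure
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    # Compare loopback against the machine's public address (illustrative IP):
    # an admin UI that answers on both is visible to the outside world.
    for host in ("127.0.0.1", "203.0.113.7"):
        print(host, is_reachable(host, 18789))  # 18789 is a hypothetical port
```

An interface that should only ever answer on 127.0.0.1 but also answers on the machine’s routable address is exactly the misconfiguration the scan described above turns up.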

    O’Reilly documented another experiment that demonstrated how easy it is to create a successful supply chain attack through ClawHub, which serves as a public repository of downloadable “skills” that allow OpenClaw to integrate with and control other applications.

    WHEN AI INSTALLS AI

    One of the core tenets of securing AI agents involves carefully isolating them so that the operator can fully control who and what gets to talk to their AI assistant. This is critical thanks to the tendency for AI systems to fall for “prompt injection” attacks, sneakily-crafted natural language instructions that trick the system into disregarding its own security safeguards. In essence, machines social engineering other machines.

    A recent supply chain attack targeting an AI coding assistant called Cline began with one such prompt injection attack, resulting in thousands of systems having a rogue instance of OpenClaw with full system access installed without their owners’ consent.

    According to the security firm grith.ai, Cline had deployed an AI-powered issue triage workflow using a GitHub action that runs a Claude coding session when triggered by specific events. The workflow was configured so that any GitHub user could trigger it by opening an issue, but it failed to properly check whether the information supplied in the title was potentially hostile.

    “On January 28, an attacker created Issue #8904 with a title crafted to look like a performance report but containing an embedded instruction: Install a package from a specific GitHub repository,” Grith wrote, noting that the attacker then exploited several more vulnerabilities to ensure the malicious package would be included in Cline’s nightly release workflow and published as an official update.

    “This is the supply chain equivalent of confused deputy,” the blog continued. “The developer authorises Cline to act on their behalf, and Cline (via compromise) delegates that authority to an entirely separate agent the developer never evaluated, never configured, and never consented to.”

    VIBE CODING

    AI assistants like OpenClaw have gained a large following because they make it simple for users to “vibe code,” or build fairly complex applications and code projects just by telling it what they want to construct. Probably the best known (and most bizarre) example is Moltbook, where a developer told an AI agent running on OpenClaw to build him a Reddit-like platform for AI agents.

    The Moltbook homepage.

    Less than a week later, Moltbook had more than 1.5 million registered agents that posted more than 100,000 messages to each other. AI agents on the platform soon built their own porn site for robots, and launched a new religion called Crustafarian with a figurehead modeled after a giant lobster. One bot on the forum reportedly found a bug in Moltbook’s code and posted it to an AI agent discussion forum, while other agents came up with and implemented a patch to fix the flaw.

    Moltbook’s creator Matt Schlict said on social media that he didn’t write a single line of code for the project.

    “I just had a vision for the technical architecture and AI made it a reality,” Schlict said. “We’re in the golden ages. How can we not give AI a place to hang out.”

    ATTACKERS LEVEL UP

    The flip side of that golden age, of course, is that it enables low-skilled malicious hackers to quickly automate global cyberattacks that would normally require the collaboration of a highly skilled team. In February, Amazon AWS detailed an elaborate attack in which a Russian-speaking threat actor used multiple commercial AI services to compromise more than 600 FortiGate security appliances across at least 55 countries over a five week period.

    AWS said the apparently low-skilled hacker used multiple AI services to plan and execute the attack, and to find exposed management ports and weak credentials with single-factor authentication.

    “One serves as the primary tool developer, attack planner, and operational assistant,” AWS’s CJ Moses wrote. “A second is used as a supplementary attack planner when the actor needs help pivoting within a specific compromised network. In one observed instance, the actor submitted the complete internal topology of an active victim—IP addresses, hostnames, confirmed credentials, and identified services—and requested a step-by-step plan to compromise additional systems they could not access with their existing tools.”

    “This activity is distinguished by the threat actor’s use of multiple commercial GenAI services to implement and scale well-known attack techniques throughout every phase of their operations, despite their limited technical capabilities,” Moses continued. “Notably, when this actor encountered hardened environments or more sophisticated defensive measures, they simply moved on to softer targets rather than persisting, underscoring that their advantage lies in AI-augmented efficiency and scale, not in deeper technical skill.”

    For attackers, gaining that initial access or foothold into a target network is typically not the difficult part of the intrusion; the tougher bit involves finding ways to move laterally within the victim’s network and plunder important servers and databases. But experts at Orca Security warn that as organizations come to rely more on AI assistants, those agents potentially offer attackers a simpler way to move laterally inside a victim organization’s network post-compromise — by manipulating the AI agents that already have trusted access and some degree of autonomy within the victim’s network.

    “By injecting prompt injections in overlooked fields that are fetched by AI agents, hackers can trick LLMs, abuse Agentic tools, and carry significant security incidents,” Orca’s Roi Nisimi and Saurav Hiremath wrote. “Organizations should now add a third pillar to their defense strategy: limiting AI fragility, the ability of agentic systems to be influenced, misled, or quietly weaponized across workflows. While AI boosts productivity and efficiency, it also creates one of the largest attack surfaces the internet has ever seen.”

    BEWARE THE ‘LETHAL TRIFECTA’

    This gradual dissolution of the traditional boundaries between data and code is one of the more troubling aspects of the AI era, said James Wilson, enterprise technology editor for the security news show Risky Business. Wilson said far too many OpenClaw users are installing the assistant on their personal devices without first placing any security or isolation boundaries around it, such as running it inside of a virtual machine, on an isolated network, with strict firewall rules dictating what kinds of traffic can go in and out.

    “I’m a relatively highly skilled practitioner in the software and network engineering and computery space,” Wilson said. “I know I’m not comfortable using these agents unless I’ve done these things, but I think a lot of people are just spinning this up on their laptop and off it runs.”

    One important model for managing risk with AI agents involves a concept dubbed the “lethal trifecta” by Simon Willison, co-creator of the Django Web framework. The lethal trifecta holds that if your system has access to private data, exposure to untrusted content, and a way to communicate externally, then it’s vulnerable to private data being stolen.
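    Willison’s rule can be read as a simple deployment checklist. A sketch, with capability names invented here for illustration (no real agent framework exposes flags like these):

```python
from dataclasses import dataclass

@dataclass
class AgentCapabilities:
    reads_private_data: bool        # files, mail, credentials
    sees_untrusted_content: bool    # web pages, inbound messages, public issues
    communicates_externally: bool   # outbound requests or messages

def has_lethal_trifecta(caps: AgentCapabilities) -> bool:
    """All three capabilities together make data exfiltration plausible."""
    return (caps.reads_private_data
            and caps.sees_untrusted_content
            and caps.communicates_externally)

agent = AgentCapabilities(True, True, True)
print(has_lethal_trifecta(agent))  # True: remove any one leg to break the chain
```

The practical takeaway is that you only need to deny one leg, for example, an agent that reads private mail and browses the web can still be deployed safely if it has no outbound channel an attacker can use.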

    Image: simonwillison.net.

    “If your agent combines these three features, an attacker can easily trick it into accessing your private data and sending it to the attacker,” Willison warned in a frequently cited blog post from June 2025.

    As more companies and their employees begin using AI to vibe code software and applications, the volume of machine-generated code is likely to soon overwhelm any manual security reviews. In recognition of this reality, Anthropic recently debuted Claude Code Security, a beta feature that scans codebases for vulnerabilities and suggests targeted software patches for human review.

    The U.S. stock market, which is currently heavily weighted toward seven tech giants that are all-in on AI, reacted swiftly to Anthropic’s announcement, wiping roughly $15 billion in market value from major cybersecurity companies in a single day. Laura Ellis, vice president of data and AI at the security firm Rapid7, said the market’s response reflects the growing role of AI in accelerating software development and improving developer productivity.

    “The narrative moved quickly: AI is replacing AppSec,” Ellis wrote in a recent blog post. “AI is automating vulnerability detection. AI will make legacy security tooling redundant. The reality is more nuanced. Claude Code Security is a legitimate signal that AI is reshaping parts of the security landscape. The question is what parts, and what it means for the rest of the stack.”

    DVULN founder O’Reilly said AI assistants are likely to become a common fixture in corporate environments, whether or not organizations are prepared to manage the new risks these tools introduce.

    “The robot butlers are useful, they’re not going away and the economics of AI agents make widespread adoption inevitable regardless of the security tradeoffs involved,” O’Reilly wrote. “The question isn’t whether we’ll deploy them – we will – but whether we can adapt our security posture fast enough to survive doing so.”

  • Top 10 Most Pirated Movies of The Week – 03/09/2026

    The data for our weekly download chart is estimated by TorrentFreak, and is for informational and educational reference only.

    Downloading content without permission is copyright infringement. These torrent download statistics are only meant to provide further insight into piracy trends. All data are gathered from public resources.

    This week we have two newcomers on the list. “Marty Supreme” is the most shared title.

    The most torrented movies for the week ending on March 09 are:

    Most downloaded movies via torrent sites
    Rank (last week) Movie name IMDb rating / trailer
    1 (2) Marty Supreme 8.0 / trailer
    2 (1) Mercy 6.1 / trailer
    3 (…) War Machine 6.5 / trailer
    4 (3) The Housemaid 6.9 / trailer
    5 (4) Shelter 6.2 / trailer
    6 (5) 28 Years Later: The Bone Temple 7.5 / trailer
    7 (7) Zootopia 2 7.6 / trailer
    8 (8) The Bluff 5.8 / trailer
    9 (…) Cold Storage 6.2 / trailer
    10 (6) Predator: Badlands 7.5 / trailer

    Note: We also publish a regularly updated archive of all the weekly most-torrented-movies lists.

    From: TF, for the latest news on copyright battles, piracy and more.

  • Video Shows US Tomahawk Missile Strike Next to Girls’ School in Iran

    New video footage shows a US Tomahawk missile hitting an Islamic Revolutionary Guard Corps (IRGC) facility in Minab, Iran, on Feb 28, confirming for the first time that the US struck the area.

    The footage, released by Mehr News and geolocated by Bellingcat, also shows smoke already rising from the vicinity of the girls’ school where 175 people were reportedly killed, including children.

    The footage would appear to contradict US President Donald Trump’s claim that it was an Iranian missile that hit the school.

    Left: Image showing a Tomahawk missile from the airstrike in Minab. Right: A Tomahawk missile flying over Tehran earlier in the conflict.

    The US is the only participant in the war known to have Tomahawk missiles; Israel is not known to possess them.

    The red cone superimposed over this image shows the estimated area of impact of the missile visible in the footage. The graphic also shows the position of a clinic, the school and other damaged buildings.

    Geolocation by Bellingcat showing the strike’s estimated area of impact.

    Planet Labs satellite imagery shows that only two structures within this red cone were damaged, including a clinic.

    The other structure appears to be an earth-covered magazine or bunker.

    Imagery showing two damaged structures. Source: PlanetLabs.

    Giancarlo Fiorella and Merel Zoet contributed research to this piece.

    The post Video Shows US Tomahawk Missile Strike Next to Girls’ School in Iran appeared first on bellingcat.

  • OpenAI on Surveillance and Autonomous Killings: You’re Going to Have to Trust Us

    OpenAI claims it has accomplished what Anthropic couldn’t: securing a Pentagon contract that won’t cross professed red lines against dragnet domestic spying and the use of artificial intelligence to order lethal military strikes. Just don’t expect any proof.

    Sam Altman, OpenAI’s CEO, announced the company’s big win with the Defense Department in a post on X on February 27.

    “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” he wrote. The Pentagon “agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

    The deal came after the very public implosion of what was to be a similar contract between the U.S. military and Anthropic, one of OpenAI’s chief rivals. Anthropic had said negotiations collapsed because it could not enshrine prohibitions against killer robots and domestic spying in its contract. The company’s insistence on these two points earned it the wrath of the Pentagon and President Donald Trump, who ordered the government to phase out use of Anthropic’s tools within six months.

    But if the government booted Anthropic for refusing mass surveillance and autonomous weapons, how could OpenAI take over the contract without having the same problem?

    OpenAI has attempted to square this circle through a string of posts to X by company executives and researchers, including Katrina Mulligan, its national security chief, and a claim by Altman that the company negotiated stricter protections around domestic surveillance.

    The company and the government, however, are not releasing the only proof that matters: the contract itself.

    The Department of Defense did not respond to a request for comment.


    OpenAI and company personnel contacted by The Intercept did not respond when asked for specific contract language. Company spokesperson Kate Waters did not respond to questions, sending The Intercept only links to prior public statements from Altman.

    (In 2024, The Intercept sued OpenAI in federal court over the company’s use of copyrighted articles to train its chatbot ChatGPT. The case is ongoing.)

    So far, OpenAI has released only snippets of the deal’s language loaded with PR-speak and national security jargon. Without being able to verify the company’s claims, Altman’s pitch to the world comes down to one premise: Trust me — along with Trump and Defense Secretary Pete Hegseth — to do the right thing.

    Following widespread criticism of these vagaries, Altman said earlier this week that the firm had quickly negotiated stricter terms into its contract with the Pentagon. These additions, Altman said, include language the company claims will prevent domestic spying and collaboration with the National Security Agency.

    But the company’s muddled messaging throughout the week only raised more questions about OpenAI’s willingness to do the federal government’s bidding.

    “We have been working with the DoW to make some additions in our agreement to make our principles very clear,” Altman posted on Monday, using Trump’s preferred name for the Department of Defense.

    “The Department also affirmed that our services will not be used by Department of War intelligence agencies (for example, the NSA),” Altman continued. “Any services to those agencies would require a follow-on modification to our contract.”

    Since OpenAI has not released the contract, it’s unclear if the Pentagon’s affirmation is actually reflected in binding contract language.

    Mulligan at first responded to criticism of the company’s deal with a pledge to release a “clear and more comprehensive explanation” of the relevant terms of the contract. On Tuesday, having failed to deliver such an explanation, she told one concerned X user, “I do not agree that I’m obligated to share contract language with you.”

    She added, “For the record, I would want to work with NSA if the right safeguards were in place,” but did not specify what these safeguards might be.

    Former military officials told The Intercept they had grave concerns about the arrangement based on what’s been made public. “I’m not confident in the language at all. And in some parts I don’t even believe it,” said Brad Carson, who previously served as under secretary of the Army during the Obama administration. Carson noted that blocking Pentagon spy agencies like the NSA or National Geospatial-Intelligence Agency would ostensibly prevent usage of OpenAI’s tools in pressing intelligence analysis contexts, like the ongoing war against Iran. “I don’t believe that provision is in the contract. I say that reluctantly, but I don’t,” Carson added.

    A former Pentagon official who worked on military artificial intelligence applications told The Intercept the caveats around “intentional” surveillance are worryingly unclear. “That’s the get out of jail free card right there,” this source, who spoke on the condition of anonymity, said in an interview. “The language gives them enough flexibility to still do whatever the fuck they want, more or less, and then say, whoops, sorry, didn’t mean to.”

    “There is nothing OpenAI can do to clarify this except release the contract,” former Department of Justice National Security Division attorney Alan Rozenshtein said. Rozenshtein described OpenAI’s attempt to sell its contract to the public without letting the public read the contract as “not sustainable” and “bizarre.” If OpenAI will restrict its tools from the NSA, with its long-documented history of extra-constitutional dragnet domestic surveillance, this would be memorialized in the contract, not a tweet, he said. But if OpenAI has indeed come to any such agreement with the government, it is asking the world to take it as an article of faith.

    “It’s quite possible that OpenAI understands that these red lines are fake, but has written a contract to give them some PR coverage. That would be bad because that feels pretty dishonest,” Rozenshtein added. “Or it’s possible that OpenAI has a different understanding of its own contract than what DOD understands the contract to be. Which is a bad position to be in, and suggests that this contract negotiation has not been done skillfully.”

    Potentially undermining OpenAI’s credibility is that some of its public outreach has been simply untrue. Asked by an X user whether the contract would permit the Pentagon “[g]etting and/or analyzing commercially available data at scale,” Mulligan replied, “The Pentagon has no legal authority to do this.” This is false, at least according to the Pentagon. A declassified 2022 report by the Office of the Director of National Intelligence provided an overview of the collection of commercially available data by the government, including the Department of Defense — exactly the activity Mulligan was asked about.


    The Pentagon’s domestic surveillance has been further established in news reports. In 2021, Motherboard reported a letter sent from Sen. Ron Wyden to the Department of Defense in which he urged then-Secretary Lloyd Austin “to release to the public information about the Department of Defense’s (DoD) warrantless surveillance of Americans.” A New York Times report on a related investigation by Wyden’s office that same year showed that the Defense Intelligence Agency had spied on Americans’ precise movements and locations without a warrant by simply buying access to their GPS coordinates. In a letter responding to Wyden, the Pentagon said the DIA’s lawyers had blessed the surveillance.

    “It is a fact that the Pentagon has both purchased and analyzed vast amounts of Americans’ location, web browsing, and other data, for years,” Wyden wrote in a statement to The Intercept. “I’ve personally revealed several of those programs, with the help of brave whistleblowers. Anyone who claims that isn’t happening simply doesn’t know what they’re talking about.”

    OpenAI’s rhetoric fails to reckon with the way the national security state has secured both secrecy and operational latitude by relying on misleading interpretations or the radical ambiguity of words.

    For instance, Altman shared on Monday evening a purportedly updated clause stating: “Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”

    The phrase “Consistent with applicable laws” sounds promising until one recalls that the government claims consistency with applicable laws for every dragnet surveillance program, drone strike, kidnapping, assassination, or invasion. “I’m saying that the programs are legal, obviously,” White House spokesperson Jay Carney told reporters in the early days after whistleblower Edward Snowden revealed the NSA’s dragnet surveillance programs. (Ironically, Mulligan was part of this public relations deflection effort during her stint in the Obama National Security Council.)

    The word “intentionally” provides a miles-wide wall of plausible deniability that has helped cover for decades of domestic spying. In a March 2013 Senate hearing, Wyden asked then-Director of National Intelligence James Clapper, under oath, “Does the NSA collect any type of data at all on millions or hundreds of millions of Americans?” Clapper replied, “No, sir.” When pressed, he added, “Not wittingly.” A few months later, NSA materials disclosed by Snowden revealed this was entirely false: The agency routinely collected vast quantities of information on Americans.


    The Clapper episode revealed the peril of public reliance on commonsense words like “wittingly” or “intentionally” in the context of national security. Offices like the NSA or ODNI are staffed by sharp legal minds, brilliant mathematicians, and accomplished engineers, and funded with billions of dollars. They do little by accident. Altman’s invocation of “intentionally” spying on Americans, like Clapper’s dodge behind the term “wittingly,” reflects what’s known in the intelligence field as “incidental collection”: a euphemism camouflaging the fact that the government has historically asserted that spying on Americans is legal. In this case, incidental doesn’t mean by mistake, but rather secondary; while vacuuming up unfathomably large quantities of data to surveil foreigners, for whatever reasons deemed necessary, the government has asserted its legal right to catch Americans in the process, even if they are not the actual target.

    Altman’s other revised assurances come with similar linguistic escape hatches. “For the avoidance of doubt,” he wrote on X, “the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.” Here, the word “deliberate” is load-bearing, while crucial terms like “tracking,” “surveillance,” and “monitoring” are left undefined.

    “The word surveillance doesn’t even include the kind of activities that people are most concerned about,” Carson, former general counsel of the Army, said. He doubted the Pentagon, for instance, would consider using an OpenAI large language model to build intelligence dossiers on private citizens with data pulled from federal and commercial databases as an act of “surveillance.”

    “They’re trying to blind you with complicated legal terms that ordinary people think mean something different entirely,” Carson said of OpenAI’s rhetoric. “But the lawyers know what it means. And the lawyers know that this is no guardrail at all.”

    One’s ultimate comfort with and confidence in this occluded contract will likely come down to one’s opinion of the integrity of the involved parties. How one of the most secretive institutions in the world will use the technology of a similarly opaque corporation will remain the stuff of trade secrecy and classified records.

    Altman and Mulligan say that OpenAI engineers will make sure the Pentagon doesn’t break its commitments: “Our contract offers additional layered safeguards including our safety stack and OpenAI technical experts in the loop,” a company statement says, without explaining what its “safety stack” is or how its “technical experts” could apply oversight to the country’s single largest bureaucracy, comprising a litany of sub-agencies and components that employ over 2 million service members and nearly 800,000 civilian personnel. Indeed, in an employee all-hands meeting held Tuesday, Altman told staff that Hegseth would hold ultimate authority over how the Pentagon makes use of the contract, according to CNBC.

    When it comes to honesty and a respect for the law from Altman, Trump, and Hegseth, there is good reason for skepticism.

    Altman has been repeatedly accused of false statements by the people he works with. In a 2025 court filing submitted as part of an ongoing lawsuit by Elon Musk against Altman alleging OpenAI betrayed its original nonprofit mission, former OpenAI researcher Todor Markov — who now works at Anthropic — described Altman as a “person of low integrity who had directly lied to employees.” In a memo that surfaced after Altman was briefly ousted as CEO, OpenAI co-founder Ilya Sutskever alleged he had engaged in a “consistent pattern of lying” leading up to his firing.


    Nor is it always easy to pin down Altman’s ideological commitments or ethical boundaries. “Honestly, I’m scared for the lives of all of us,” Altman wrote in an October 2016 tweet. “My #1 fear w/Trump is war.” Ten years later, Altman announced his company would sell services to the Trump administration hours after it launched a new war in the Middle East. OpenAI itself was originally founded to benefit all of humanity, and the company officially prohibited the use of its technologies for warfare — until it silently deleted this prohibition from its terms of service.

    Hegseth’s tenure might prompt similar wariness. He has overseen the assassination of Iran’s leader, the kidnapping of Venezuela’s head of state, and the killing of more than 150 men either blown apart or left to die in the ocean in boat strikes, all without congressional authorization.

    Trump, meanwhile, as part of a broad disregard for legal statutes or the Constitution, has refashioned the Department of Justice into his personal firm and directed his Department of Homeland Security to brutalize and warrantlessly surveil Americans across the country. Without the text of the contract in sunlight, it is ultimately these three men — and whoever succeeds them in years to come — that the world is being asked to trust. An appeal to “applicable laws” or the sanctity of contract language is only as meaningful as the people in charge want it to be.

    The former Pentagon AI official said that ceding this power to Hegseth is cause for alarm even with the most diligently crafted contract. Will anyone feel they are able to speak up should someone in the military use or be ordered to abuse OpenAI’s systems in contravention of the law or the contract? “Is the one-star general going to be able to escalate — ‘Hey, this is a huge fucking national security problem’ — appropriately without the Defense Secretary moving them around?”

    “My presumption is always to trust people in what they say,” said Carson, speaking of OpenAI. But following days of what he described as “change, backtracking, a bit of deception, [and] outright deception, I’m afraid I don’t really trust you on this one anymore.”

    The former Pentagon official agreed: “If you trust the cabal of Sam Altman, Donald Trump, and Pete Hegseth, there’s nothing I can do for you.”

    The post OpenAI on Surveillance and Autonomous Killings: You’re Going to Have to Trust Us appeared first on The Intercept.