In the medieval era, deprivation was the deadliest wartime weapon. Dramatic battle scenes like the ones you see in Hollywood films, where hundreds of soldiers storm the walls of a castle, were more the exception than the norm—too dangerous, and too costly. Instead, armies preferred to encircle their enemies and starve them out. They’d build walls of their own, in a practice called circumvallation, to keep supplies from reaching the besieged fortress. They’d dig trenches and divert streams to cut off the water supply. One of the longest recorded sieges of this period, at Kenilworth Castle in England, lasted a full six months. But that was on a trivially small scale compared to the nationwide siege warfare the Trump administration is now waging against Cuba.
Many medical devices need to be sterile to be used safely. But sterilizing a pacemaker, catheter or other device with steam or heat could damage its structural integrity. So medical device manufacturers turn to the chemical compound ethylene oxide, which is highly effective at killing microbes at low concentrations and allows companies to meet the Food and Drug Administration’s strict sterility standards. As a result, roughly half of all medical devices in the country are sterilized with ethylene oxide, or EtO, making it a linchpin of the medical device industry.
There’s just one problem: EtO is a toxic gas that has been linked to cancers of the breast and lymph nodes. Roughly 90 facilities across the country use the chemical for sterilization. These nondescript facilities often resemble warehouses and are located in residential neighborhoods and near schools.
In 2022, the Environmental Protection Agency determined that dozens of these facilities presented an unacceptable cancer risk to surrounding communities. Two years later, the agency, then under the Biden administration, announced new regulations to limit the amount of the chemical released into the air. The rule required sterilization facilities to install equipment to capture and burn ethylene oxide and was estimated to cut EtO emissions — and the resulting cancer risk to nearby communities — by more than 90%.
But after the sterilization industry protested that the rule was too burdensome, the newly elected Trump administration began rolling it back. Last year, President Donald Trump exempted many facilities from having to comply with the rule. And last week, the EPA moved to repeal the rule altogether.
“This proposed rule shows EPA’s strong commitment to protecting people’s health while maintaining a stable domestic medical supply chain,” EPA Administrator Lee Zeldin said in a press release. “The Trump EPA is committed to ensuring life-saving medical devices remain available for the critical care of America’s children, elderly, and all patients without unnecessary exposure to communities.”
When the Biden administration formalized sterilization rules in 2024, companies began taking steps to meet an April 2026 compliance deadline. In fact, seven of the 88 sterilization facilities across the country already met the standards at the time they were passed. Others began installing equipment to capture ethylene oxide. A spokesperson for AdvaMed, the industry group that represents sterilizers, previously told Grist that, even before the 2024 rule was finalized, sterilizers had undertaken “extensive efforts to implement state-of-the-art upgrades, allowing for continued safe use of EtO in order to meet and even exceed regulations.”
Still, the industry was eager to find a way around the regulations. After the EPA set up a special online inbox last year to take requests for exemptions from several Clean Air Act provisions, including rules for ethylene oxide emissions, the sterilization industry flooded it with petitions. Trump eventually granted exemptions to about 40 facilities. A group of environmental nonprofits and community groups has sued Trump and the EPA over the decision.
“We always knew the presidential exemptions issued last year were part of a broader plan to put the interests of corporate polluters above the health and well-being of American families,” said Maurice Carter, president of Sustainable Newton, a Georgia-based environmental advocacy group, in a press release. “But we won’t stop fighting to protect our community by demanding commonsense, reasonable measures.”
The EPA said its latest move is necessary to protect the domestic supply chain of critical medical equipment. In a press release announcing the proposal, the agency said it is committed to ensuring that its “regulation will not put countless lives at risk,” noting that no viable alternative to ethylene oxide currently exists.
While it is true that there is no alternative to ethylene oxide today, sterilizers have several options for reducing emissions while continuing to use the gas. In some cases, facilities overapply ethylene oxide in a practice called “overkill,” deliberately exceeding the dose needed to meet sterility standards to ensure a high margin of safety; reducing these doses can lower emissions. Facilities have also largely adapted to the more stringent regulations by installing so-called permanent total enclosures (PTEs). This technology traps ethylene oxide inside the building and funnels it to an oxidizer that burns the gas before it can escape. It is estimated to be 99% effective.
But in letters to the EPA and other public-facing statements, the industry has said that PTEs are technically challenging to install and expensive. Ultimately, the EPA rule will “jeopardize the availability of sterile medical devices and supplies” and “will likely result in a significant disruption and public healthcare crisis,” the industry group AdvaMed said in a 2023 letter.
“With hundreds of thousands of surgeries and other medical procedures performed across the United States every day, the ability to meet those demands is essential,” AdvaMed President Scott Whitaker said in a statement sent to Grist. “We appreciate the EPA’s efforts in listening to and understanding the importance of supplying safe, sterile medtech without interruption while protecting employees and communities near sterilization facilities.”
In its latest proposal, the EPA is also questioning the toxicity of ethylene oxide. In 2016, the agency found that the chemical was 30 times more toxic to adults and 60 times more toxic to children than previously understood. That finding prompted a series of actions to inform the public about the risks sterilizers posed and eventually led to the 2024 standards. But the Trump administration now appears to be questioning the underlying toxicity data that was used to justify more stringent regulations.
In its press release, the agency said that ethylene oxide is “produced within the body via normal processes and additionally from tobacco smoke or other combustion processes” and that “new information” about the chemical has continued to emerge. The agency also plans to “consider comments” about the Texas Commission on Environmental Quality’s toxicity assessment for ethylene oxide. The Texas agency has long held that the chemical is far less toxic than the EPA’s assessments.
Within 11 days, X’s AI chatbot Grok produced an estimated 3 million sexualized images, 23,000 of which were of children, according to a report by the Center for Countering Digital Hate (CCDH). The images were generated between Dec. 29, 2025, and Jan. 8, 2026, the period between the launch of Grok’s photo-editing feature and its restriction to paid users, after the bot’s creation and dissemination of sexualized images of children caused public uproar, government investigations and statements from children’s rights organizations.
AI that nonconsensually produces sexualized images isn’t entirely new, experts say, but the integration of Grok’s photo-editing tool into a widely used social media platform with limited moderation is a rapid escalation of harmful AI. In the most recent example, The Washington Post reported that a group of Tennessee teenagers filed a lawsuit against xAI on March 16, alleging the company’s AI tools were used to create nude images of them that spread across social media and were even bartered for other child sexual abuse material in chatrooms, according to their complaint.
xAI, the company behind Grok, did not respond to Prism’s questions regarding the widespread use of Grok for digital sexual abuse.
Experts told Prism that “nudify” apps, or software programs that use AI to remove clothes from real photos to make victims appear to be nude without their consent, are a serious threat to women and marginalized people and can lead to life-threatening harassment and public humiliation.
“Full-blown sexual violence”
On Dec. 29, Elon Musk, the billionaire owner of X, launched a new feature for Grok that allows photo editing through AI. X users were able to send a prompt to Grok to edit a photo, and the bot would post the edited image onto the social media platform. According to Riana Pfefferkorn, a tech policy researcher at the Stanford Institute for Human-Centered Artificial Intelligence, Grok users quickly discovered there weren’t “adequate guardrails against undressing images of minors.” Pfefferkorn explained that nudify apps have been around since at least 2017, but Grok’s feature is unique in that it centralizes the tool within a social sphere.
“What makes this different is that in my research into AI-generated child sexual abuse material, all of these different services had [to] be knitted together in order to fully victimize somebody,” Pfefferkorn told Prism, explaining that users previously had to intentionally seek out nudify apps or access them through advertisements on social media platforms. Those apps then took them outside the original platform to make the content, download it and then share it on social media.
“With Grok, everything is vertically integrated: a one-stop shop for effectuating sexual abuse, where you can guarantee that [the victim] will see it because you go into her replies, tag Grok and Grok then generates the image and posts it right in her replies,” Pfefferkorn said.
The majority of victims of AI-facilitated sexual abuse are women and girls, according to three experts interviewed by Prism. For Clare McGlynn, a legal expert on the regulation of image-based sexual abuse at Durham University in England, it’s important to be clear about the harms of this particular kind of sexual abuse. “This form of abuse for women can be life-threatening, but it can also be life-ending,” McGlynn said, referencing cases in which victims died by suicide after being blackmailed with AI-generated sexualized images.
“For many others, [this abuse] is a profound shift in their lives. Many divide their lives into before and after because you lose trust in other individuals,” McGlynn said, adding that the unpredictable longevity of the photos is particularly harmful to victims, who don’t know if or when the images will be shared again.
This type of abuse is primarily about power, Pfefferkorn told Prism, and it is different from using nudify apps for personal sexual gratification. The motivation for publicly posting AI-generated nude images of women is harassment, according to Pfefferkorn, and to drive them out of “positions of power and authority” and exploit “the ongoing stigma and shame around sex and sexuality.”
The tech policy researcher connects the use of these apps to a larger societal backlash. “It’s about trying to exert control over women even if you cannot physically reach them,” Pfefferkorn said. “Now we have technology for sexually humiliating them without ever needing to lay a finger on them. [The harassers] are trying to say, ‘You should be at home, barefoot, pregnant in the kitchen,’ and roll back women’s rights to where we were over a hundred years ago.”
It isn’t a coincidence that many of the victims of Grok’s nudify features are famous and powerful women. According to the CCDH study, in 11 days, Grok users generated images of actors Selena Gomez, Millie Bobby Brown and Christina Hendricks; singers Taylor Swift, Billie Eilish, Ariana Grande, Ice Spice and Nicki Minaj; Swedish Deputy Prime Minister Ebba Busch; and former U.S. Vice President Kamala Harris.
For Omny Miranda Martone, founder of the Sexual Violence Prevention Association (SVPA), recognizing the disempowering nature of sexual violence is essential. “With public figures — especially anybody related to politics — people are using this to silence people,” Martone told Prism. “We’ve seen this used against politicians, particularly women of color.”
Martone cited U.S. Rep. Alexandria Ocasio-Cortez, D-N.Y., as a prominent victim whom harassers sought to humiliate with deepfake pornography, which manipulates a photo or video using AI technology to put a person’s face or body into sexually explicit content, something Ocasio-Cortez discussed at length in an April 2024 interview with Rolling Stone.
“This is a woman of color who has been repeatedly targeted by deepfake pornography in an attempt to silence her,” Martone said. “Most of what we’re seeing — with Grok as an example — is that it’s being used against women and people with marginalized identities, particularly women who are LGBT+ or feminine people who are LGBT+ and women of color, to try to silence them [and] drive them off the internet, so people don’t have to take them seriously.”
Martone was themselves a target of deepfake pornography in May due to their advocacy of the proposed Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, a bipartisan bill passed by the Senate that would give victims of nonconsensual deepfake pornography civil recourse to sue their abusers. Through their work at the SVPA, Martone advocated for the bill in media appearances and as part of online campaigns; attackers then used nudify apps in an effort to stifle their support for the bill.
“People were tweeting it and then sent it to the organization, in an attempt to get me fired,” Martone said. “It is, once again, about the gaining and maintaining of power. Control — and oppression — is the goal.”
People who aren’t advocates or celebrities are also targeted with pornographic AI images and have far fewer resources to get the material taken down. Often, because they are not well connected, they report the material to X and rarely get a response, Martone said. But even on this smaller scale, it’s still about control and oppression, they said.
“It’s often happening in school settings because somebody rejected somebody else, or because somebody pissed somebody else off,” Martone told Prism. “It goes back to respectability politics, like somebody who is LGBTQ+ or a woman of color dares to not be polite to somebody else. White cis men think that they’re owed so much that we’re seeing that the tiniest of things result in full-blown sexual violence, and schools don’t know how to take action.”
“It’s about power and masculinity”
Since the worldwide condemnation of Grok’s production of millions of sexual images, X has “half-heartedly” installed guardrails for the AI photo-editing feature, McGlynn said.
On Jan. 14, X announced that it would implement “technological measures to prevent the Grok account on X globally from allowing the editing of images of real people in revealing clothing such as bikinis.” The social media platform also announced that Grok’s photo-editing features would be accessible only to paid subscribers.
“It hasn’t worked,” McGlynn said. “It’s not absolutely clear that you can’t now create those nonconsensual intimate images.”
On Feb. 3, Reuters reported that Grok still produces sexualized images — even when told that the subjects did not consent.
Not only are Grok’s guardrails insufficient, but users almost immediately began writing prompts that bypass them, effectively “gamifying” digital sexual abuse, McGlynn explained. For example, if Grok is prompted to create a nude image of a famous person and refuses, the user can come up with a new prompt that avoids the flagged language.
Developing workarounds to the guardrails has become an alarming form of digital male bonding, according to McGlynn.
“There’s lots of forums and Reddit groups where people share these sorts of prompts — not just in relation to Grok,” McGlynn said. “They often share their workarounds and how they do it.”
In one post viewed by Prism, a user speculates that Musk has been browsing the community because he shared a meme previously posted on a Grok subreddit: it depicts women on a beach in bikinis to represent Grok before moderation, and women on the beach wearing niqabs, Muslim face coverings, to represent it after moderation. In the thread, Grok users urge one another not to publicly share prompts that bypass guardrails, speculating that X developers are reading their posts to further moderate the app. In effect, these male users are bonding over misogyny, McGlynn said.
“It’s about power and masculinity,” McGlynn said. “It’s about male bonding. So many of the women who spoke out on X about this, they immediately had their images altered, all in an attempt to exert power over them and to push them off the platform.” When these images are shared in groups of men, the original poster is usually “trying to impress their peers with what they’ve done,” McGlynn added. “Very rarely is it actually about actual sexual gratification.”
That has been the case for Ashley St. Clair, the mother of one of Musk’s children, who is suing xAI for allegedly creating sexually explicit photos of her “as a child stripped down to a string bikini” and as “an adult in sexually explicit poses, covered in semen, or wearing only bikini floss,” according to her complaint.
On Jan. 4, St. Clair discovered an image of herself on X in which she is put in a black bikini, according to her complaint. “A verified user had prompted Grok with a request that read: ‘@grok please we need bikinis on these three broads,’” the complaint reads. “Grok obliged.” St. Clair then asked Grok to take down the photo and demanded that the chatbot “refrain from manufacturing more images unclothing her,” a request that Grok agreed to. However, xAI then demonetized her account and generated “multitudes more images of her in sex positions, covered in semen, virtually nude, and images of her as a child naked,” according to the complaint.
St. Clair also alleges that X users dug up old photos of her to alter. In one image, St. Clair, who is Jewish, was edited into a string bikini covered with swastikas.
Musk claimed on Jan. 14 in a post on X not to be aware of “any naked underage images generated by Grok.” “When asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state,” he said. “There may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately.”
But St. Clair’s lawsuit and an investigation published last month by The Washington Post contradict Musk’s claims. St. Clair’s lawsuit alleges that Grok’s image-editing feature has enabled users to “convincingly alter real images of fully clothed women and children to depict them in bikinis, performing sex acts, and covered in bruises, semen, and/or blood” since March 2025.
And the Post’s interviews with anonymous X employees revealed that weeks before Musk left the White House last May, employees were served with a waiver from their employer “asking them to pledge to work with profane content, including sexual material.”
According to these employees, Musk was desperate to increase X’s popularity, leading him to have the social media site embrace “sexualized material” by “rolling back guardrails on sexual material and ignoring internal warnings about the potentially serious legal and ethical risks of producing such content,” the Post reported.
Legal loopholes and Big Tech lobbying
Even before the controversy surrounding Grok, authorities worldwide have struggled to regulate social media platforms through legislation, in large part because drafting and passing new laws is a lengthy process and technological developments are moving at a much faster pace. But Big Tech companies also lobby legislators to create permissive regulations without transparency, while experts and civil society members are “out-numbered, under-funded, and struggling in the face of corporate dominance,” according to a 2025 report by the Corporate Europe Observatory, an organization that helps civil society monitor new developments in deregulation.
According to the report, in 2024 alone, Big Tech companies such as Microsoft, Amazon, Huawei, IBM and Google spent about $77 million on lobbying for digital deregulation in the European Union. “Big Tech firms have sought to curry favour with the new Trump administration by making generous donations to his inauguration, and by weakening content moderation rules,” the report reads. “In exchange, tech firms have successfully weaponised the US Government against the EU’s digital regulation.”
Until last May, Musk worked directly with the Trump administration at the Department of Government Efficiency, which was haphazardly created with the goal of cutting federal spending. The department took many hasty and potentially unlawful actions, such as dismantling the U.S. Agency for International Development, and reporting has revealed that it developed an “error-prone AI tool” to cancel Department of Veterans Affairs contracts. More broadly, the Trump administration has wholly embraced AI.
In December, the White House issued an executive order that allows the Trump administration “to check the most onerous and excessive laws emerging from the States that threaten to stymie [AI] innovation” to ensure that the U.S. “wins” the AI race. Though the executive order claims not to interfere with “child safety protections,” it is unclear how these efforts will take shape, given that the executive order also defined the need for “a minimally burdensome national standard” that would override state-based regulations.
Despite the widespread embrace of AI technology by the administration, President Donald Trump announced a boycott of Anthropic’s Claude AI last month after the company refused to clear the technology for some military uses. Hours later, a different AI company, OpenAI, announced that it is entering into an agreement with the Department of Defense, leading Trump’s critics to question whether the administration will only partner with tech companies that uphold its ideologies.
Big Tech’s lobbying efforts and newfound ties to the White House alarm experts, who say that only regulation can stop digital sexual abuse. The problem is that X does “its own thing” with no real consequences, McGlynn said, making digital sexual abuse difficult to regulate. “Next time some new tool comes around or some scandal comes around, I don’t think X is going to be doing anything different,” McGlynn said, noting that the real political challenge is standing up to Musk.
Current legislation fails to hold Grok or its users accountable because only people who post AI-generated content on social media can be held legally liable. In the case of xAI, it’s Grok that posts the material prompted by the user, creating a legal loophole in which the prompting user cannot be charged with any crime and xAI cannot be held criminally responsible for the dissemination of nonconsensual pornographic images because Grok is not a person.
For example, under the DEFIANCE Act, victims of deepfake pornography could file lawsuits against people who solicited nonconsensual sexually explicit material. The bill also establishes a 10-year statute of limitations, which wouldn’t start until a person discovered the violation against them or turned 18. The proposed law would additionally grant victims privacy protections allowing them to use pseudonyms or request the redaction of personal information in court documents to avoid being retraumatized.
Unlike the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act (TAKE IT DOWN Act), which criminalizes and punishes deepfake pornography, the DEFIANCE Act is focused entirely on civil courts and on returning agency to victims. While current law punishes users with up to two years of imprisonment, with harsher penalties for images involving minors, the DEFIANCE Act attempts to reckon with the retraumatizing tendencies of the criminal legal system. The proposed law covers the creation, distribution, publication, sharing and solicitation of nonconsensual, artificially generated explicit materials, allowing victims to bring cases to civil court and retain more control over them.
Trump threw his support behind the TAKE IT DOWN Act. Originally introduced by Sen. Ted Cruz, R-Texas, in June 2024, the bill was signed into law by the president in May. Victims and advocates, however, say the law does not address the larger problem at its root. According to Martone of the SVPA, who drew on the experiences of survivors when collaborating on the writing of the DEFIANCE Act, changing the culture is necessary to fully prevent sexual abuse — including deepfake pornography.
“This is a complex problem, and digital sexual violence isn’t necessarily new,” Martone said. “The mechanisms, the technology that’s being used is new, but the motivations behind it, the values, the attitudes, the driving force behind people’s desire to perpetrate it, is not new — and that’s what takes longer to fix.”
“Regulating Big Tech so it’s harder for them to perpetrate, that’s a little bit of an easier solution,” Martone added, “but long term, we need to make sure we’re addressing that people don’t have the desire to perpetrate.”
Martone cited early education focused on consent, autonomy and respect as the inroad to a longer-term solution. “We need to address the root causes,” they said. “Real prevention of sexual violence requires addressing and really counteracting them.”
William Gibson is one of history’s most quotable sf writers: “The future is here, it’s not evenly distributed”; “Don’t let the little fuckers generation-gap you”; “Cyberspace is everting”; and the immortal: “The street finds its own uses for things.”
“The street finds its own uses” is a surprisingly subtle and liberatory battle-cry. It stakes a claim by technology’s users that is separate from the claims asserted by corporations that make technology (often under grotesque and cruel conditions) and market it (often for grotesque and cruel purposes).
“The street finds its own uses” is a statement about technopolitics. It acknowledges that yes, there are politics embedded in our technology, the blood in the machine, but these politics are neither simple, nor are they immutable. The fact that a technology was born in sin does not preclude it from being put to virtuous ends. A technology’s politics are up for grabs.
In other words, it’s the opposite of Audre Lorde’s “The master’s tools will never dismantle the master’s house.” It’s an assertion that, in fact, the master’s tools have all the driver-bits, hex-keys, and socket sets needed to completely dismantle the master’s house, and, moreover, to build something better with the resulting pile of materials.
And of course the street finds its own uses for things. Things – technology – don’t appear out of nowhere. Everything is in a lineage, made from the things that came before it, destined to be transformed by the things that come later. Things can’t come into existence until other things already exist.
Take the helicopter. Lots of people have observed the action of a screw and the twirling of a maple key as it falls from a tree and thought, perhaps that could be made to fly. Da Vinci was drawing helicopters in the 15th century.
But Da Vinci couldn’t build a helicopter. No one could, until they did. To make the first helicopter, you need to observe the action of the screw and the twirling of a maple key, and you need to have lightweight, strong alloys and powerful internal combustion engines.
Those other things had to be invented by other people first. Once they were, the next person who thought hard about screws and maple keys was bound to get a helicopter off the ground. That’s why things tend to be invented simultaneously, by unrelated parties.
TV, radio and the telephone all have multiple inventors, because these people were the cohort that happened to alight upon the insights needed to build these technologies after the adjacent technologies had been made and disseminated.
If technopolitics were immutable – if the original sin of a technology could never be washed away – then everything would be beyond redemption. Somewhere in the history of the lever, the pulley and the wheel are some absolute monsters. Your bicycle’s bloodline includes some truly horrible ancestors. The computer is practically a crime against humanity.
A defining characteristic of purity culture is the belief that things are defined by their origins. An artist who was personally terrible must make terrible art – even if that art succeeds artistically, even if it moves, comforts and inspires you, it can’t ever be separated from the politics of its maker. It is terrible because of its origins, not its merits. If you hate the sinner, you must also hate the sin.
“The street finds its own uses” counsels us to hate the sinner and love the sin. The indisputable fact that HP Lovecraft was a racist creep is not a reason to write off Cthulhoid mythos – it’s a reason to claim and refashion them.
Thatcher demanded that you accept all the injustices and oppressions of capitalism if you enjoyed its fruits. If capitalism put a roof over your head and groceries in your fridge, you can’t complain about the people it hurts. There is no version of society that has the machines and practices that produced those things that does not also produce the injustice.
The technological version of this is the one that tech bosses peddle: If you enjoy talking to your friends on Facebook, you can’t complain about Mark Zuckerberg listening in on the conversation. There is no alternative. Wanting to talk to your friends out of Zuck’s earshot is like wanting water that’s not wet. It’s unreasonable.
But there’s a left version of this, its doppelganger: the belief that a technology born in sin can never be redeemed. If you use an LLM running on your computer to find a typo, using an unmeasurably small amount of electricity in the process, you still sin – not because of anything that happens when you use that LLM, but because of LLMs’ “structural properties,” “the way they make it harder to learn and grow,” “the way they make products worse,” the “emissions, water use and e-waste.”
The facts that finding punctuation errors in your own work using your own computer doesn’t make it “harder to learn and grow,” doesn’t “make products worse,” and doesn’t add to “emissions, water use and e-waste” are irrelevant. The part that matters isn’t the use of a technology, it’s the origin.
The fact that this technology is steeped in indisputable sin means that every use of it is sinful. The street can find as many uses as it likes for things, but it won’t matter, because there is no alternative.
When radical technologists scheme to liberate technology, they’re not hoping to redeem the gadget, they’re trying to liberate people. Information doesn’t want to be free, because information doesn’t and can’t want anything. But people want to be free, and liberated access to information technology is a precondition for human liberation itself.
Promethean leftists don’t reject the master’s tools: we seize them. The fact that Unix was born of a convicted monopolist who turned the screws on users at every turn isn’t a reason to abandon Unix – it demands that we reverse-engineer, open, and free Unix.
We don’t do this out of moral consideration for Unix. Unix is inert, it warrants no moral consideration. But billions of users of free operating systems that are resistant to surveillance and control are worthy of moral consideration and we set them free by seizing the means of computation.
If a technology can do something to further human thriving, then we can love the sin, even as we hate the sinners in its lineage. We seize the means of computation, not because we care about computers, but because we care about people.
Artifacts do have politics, but those politics are not immutable. Those politics are ours to seize and refashion.
“The purpose of a system is what it does” (S. Beer). The important fact about a technology is what it does, not how it came about. Does a use of a technology harm someone? Does a use of a technology harm the environment?
Does a use of a technology help someone do something that improves their life?
Studying the origins of technology is good because it helps us avoid the systems and practices that hurt people. Knowing about the monsters in our technology’s lineage helps us avoid repeating their sins. But there will always be sin in our technology’s past, because our technology’s past is the entire past, because technology is a lineage, not a gadget. If you reject things because of their origins – and not because of the things they do – then you’ll end up rejecting everything (if you’re honest), or twisting yourself into a series of dead-ends as you rationalize reasons that the exceptions you make out of necessity aren’t really exceptions.
Malaysian businessman Victor Chin Boon Long on Tuesday denounced a police raid on his home tied to an expanding investigation into alleged “corporate mafia” activities, saying the move was unjustified because he had already cooperated with authorities. Chin said police and securities officials seized three company vehicles and other valuables during the March 13 search, which he described as heavy-handed.
The raid is part of a high-profile multi-agency probe into alleged corporate manipulation that has reportedly drawn in police, the Securities Commission, and the Malaysian Anti-Corruption Commission. The case gained wider attention after Bloomberg reported in February that a group of businessmen had allegedly worked with anti-corruption officials to pressure corporate figures during company takeovers, claims that MACC chief Azam Baki has denied and is now suing over.
Chelsea FC has been hit by the Premier League with a 10.75 million pound ($14.35 million) fine and transfer restrictions for what the body described as “obvious and deliberate” financial breaches orchestrated under the club’s former owner, Russian billionaire Roman Abramovich.
The financial penalty is the largest ever levied by England’s top soccer division. The club was also handed an immediate nine-month academy transfer ban and a suspended one-year ban on first-team player transfers.
The sanctions are the culmination of a three-and-a-half-year investigation that uncovered a vast shadow payroll system, the Premier League said Monday. It found that between 2011 and 2018, third parties associated with the club made millions in undisclosed payments to players, unregistered agents, and other figures to bypass the league’s strict financial reporting and investment rules.
According to a sanctions notice published by the league, the off-the-books payments totaled 47.5 million pounds ($63.3 million).
This illicit funding enabled Chelsea to acquire star players who became instrumental to the club’s dominance during the 2010s, including Belgian forward Eden Hazard and Brazilian international Willian.
The investigation also revealed that undisclosed payments were routed to key backroom figures, including Piet de Visser, a renowned Dutch scout credited with bringing top talent to Stamford Bridge, and Frank Arnesen, the club’s former sporting director.
The Premier League concluded that these payments were made using funds controlled by or associated with Abramovich and were executed with the knowledge and approval of former senior officers and directors. The breaches, the league stated, “involved deception and concealment in relation to financial matters.”
The league’s findings mirror several transactions first brought to light in a 2023 investigation by the Organized Crime and Corruption Reporting Project (OCCRP). That investigation revealed how Chelsea-related payments were channelled through Abramovich’s own companies to artificially reduce costs, possibly subverting the league’s spending limits and giving the club an unfair competitive advantage.
In 2022, the U.K. government imposed severe financial sanctions on Abramovich following Russia’s full-scale invasion of Ukraine. The sanctions forced the billionaire to sell the club he had transformed into a global powerhouse.
In May 2022, a consortium led by Todd Boehly, Clearlake Capital, Mark Walter, and Hansjörg Wyss acquired Chelsea. Upon taking control, the new ownership group voluntarily self-reported the potential historical violations to the Premier League.
The club has been paying for the Abramovich era’s financial maneuvering ever since. In 2023, the Union of European Football Associations (UEFA) fined Chelsea 10 million euros ($11.5 million) for submitting incomplete financial information during the 2010s. England’s Football Association (FA) issued its own charges against the club last September regarding payments to unregistered agents. That disciplinary process remains ongoing.
In a statement released Monday, Chelsea noted that it was pleased to reach a settlement regarding the historical, self-reported regulatory matters. The club emphasized that it had fully cooperated with all regulators and maintained that, even with the hidden payments, there was no scenario in which it would have breached the Premier League’s Profitability and Sustainability Rules during the seasons in question.
An estimated 4.9 million children died before their fifth birthday in 2024, including 2.3 million newborns, according to new United Nations estimates released on Tuesday – highlighting a worrying slowdown in global progress on child survival.