Author: tio
-

Activist Rachel Cohen on Confronting ICE and Greg Bovino
Rachel Cohen first made headlines last year when she left her job at the high-powered law firm Skadden, Arps, Slate, Meagher & Flom after organizing more than 600 of her fellow lawyers to sign an open letter condemning Donald Trump’s threats to the legal profession. Since then, she’s continued to call out the administration’s actions both on the ground and online. From her account @cohen.489, Cohen posts about her community organizing work and protests against ICE in Chicago, as well as the news. She recently went viral for a video in which she confronted Border Patrol commander Greg Bovino at a convenience store.

-
Protecting Our Right to Sue Federal Agents Who Violate the Constitution
Federal agencies like Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) have descended into utter lawlessness, most recently in Minnesota. The violence is shocking. So are the intrusions on digital rights. For example, we have a First Amendment right to record on-duty police, including ICE and CBP, but federal agents are violating this right. Indeed, Alex Pretti was exercising this right shortly before federal agents shot and killed him. So were the many people who filmed agents shooting and killing Pretti and Renee Good – thereby creating valuable evidence that contradicts false claims by government leaders.
To protect our digital rights, we need the rule of law. When an armed agent of the government breaks the law, the civilian they injure must be made whole. This includes a lawsuit by the civilian (or their survivor) against the agent, seeking money damages to compensate them for their injury. Such systems of accountability encourage agents to follow the law, whereas impunity encourages them to break it.
Unfortunately, there is a gaping hole in the rule of law: when a federal agent violates the U.S. Constitution, it is increasingly difficult to sue them for damages. For these reasons, EFF supports new statutes to fill this hole, including California S.B. 747.
The Problem
In 1871, at the height of Reconstruction following the Civil War, Congress enacted a landmark statute empowering people to sue state and local officials who violated their constitutional rights. This was a direct response to state-sanctioned violence against Black people that continued despite the formal end of slavery. The law is codified today at 42 U.S.C. § 1983.
However, there is no comparable statute empowering people to sue federal officials who violate the U.S. Constitution.
So in 1971, the U.S. Supreme Court stepped into this gap, in a watershed case called Bivens v. Six Unknown Named Agents of Federal Bureau of Narcotics. The plaintiff alleged that federal narcotics agents unlawfully searched his home and used excessive force against him. Justice Brennan, writing for a six-Justice majority of the Court, ruled that “damages may be obtained for injuries consequent upon a violation of the Fourth Amendment by federal officials.” He explained: “Historically, damages have been regarded as the ordinary remedy for an invasion of personal interests in liberty.” Further: “The very essence of civil liberty certainly consists of the right of every individual to claim the protection of the laws, whenever he receives an injury.”
Subsequently, the Court expanded Bivens in cases where federal officials violated the U.S. Constitution by discriminating in a workplace, and by failing to provide medical care in a prison.
In more recent years, however, the Court has whittled Bivens down to increasing irrelevance. For example, the Court has rejected damages litigation against federal officials who allegedly violated the U.S. Constitution by strip searching a detained person, and by shooting a person located across the border.
In 2022, the Court by a six-to-three vote rejected a damages claim against a Border Patrol agent who used excessive force when investigating alleged smuggling. In an opinion concurring in the judgment, Justice Gorsuch conceded that he “struggle[d] to see how this set of facts differs meaningfully from those in Bivens itself.” But then he argued that Bivens should be overruled because it supposedly “crossed the line” against courts “assuming legislative authority.”
Last year, the Court unanimously declined to extend Bivens to excessive force in a prison.
The Solution
At this juncture, legislatures must solve the problem. We join calls for Congress to enact a federal statute, parallel to the one it enacted during Reconstruction, to empower people to sue federal officials (and not just state and local officials) who violate the U.S. Constitution.
In the meantime, it is heartening to see state legislatures step forward to fill this hole. One such effort is California S.B. 747, which EFF is proud to endorse.
State laws like this one do not violate the Supremacy Clause of the U.S. Constitution, which provides that the Constitution is the supreme law of the land. In the words of one legal explainer, this kind of state law “furthers the ultimate supremacy of the federal Constitution by helping people vindicate their fundamental constitutional rights.”
This kind of state law goes by many names. The author of S.B. 747, California Senator Scott Wiener, calls it the “No Kings Act.” Protect Democracy, which wrote a model bill, calls it the “Universal Constitutional Remedies Act.” The originator of this idea, Professor Akhil Amar, calls it a “converse 1983”: instead of Congress authorizing suit against state officials for violating the U.S. Constitution, states would authorize suit against federal officials for doing the same thing.
We call these laws a commonsense way to protect the rule of law, which is a necessary condition to preserve our digital rights. EFF has long supported effective judicial remedies, including support for nationwide injunctions and private rights of action, and opposition to qualified immunity.
We also support federal and state legislation to guarantee our right to sue federal agents for damages when they violate the U.S. Constitution.
-
Period blood test could offer less invasive alternative to cervical screening
Looking for signs of the cancer in a more convenient way could help women access the test and prevent the disease from occurring, researchers say.
-
Smart AI Policy Means Examining Its Real Harms and Benefits
The phrase “artificial intelligence” has been around for a long time, covering everything from computers with “brains”—think Data from Star Trek or HAL 9000 from 2001: A Space Odyssey—to the autocomplete function that too often has you sending emails to the wrong person. It’s a term that sweeps a wide array of uses into it—some well-established, others still being developed.
Recent news shows us a rapidly expanding catalog of potential harms that may result from companies pushing AI into every new feature and aspect of public life—like the automation of bias that follows from relying on a backward-looking technology to make consequential decisions about people’s housing, employment, education, and so on. Complicating matters, the computation needed for some AI services requires vast amounts of water and electricity, leading to sometimes difficult questions about whether the increased fossil fuel use or consumption of water is justified.
We are also inundated with advertisements and exhortations to use the latest AI-powered apps, and with hype insisting AI can solve any problem.
Obscured by this hype, there are some real examples of AI proving to be a helpful tool. For example, machine learning is especially useful for scientists looking at everything from the inner workings of our biology to cosmic bodies in outer space. AI tools can also improve accessibility for people with disabilities, facilitate police accountability initiatives, and more. There are reasons why these problems are amenable to machine learning and why excitement over these uses shouldn’t translate into a perception that just any language model or AI technology possesses expert knowledge or can solve whatever problem it’s marketed as solving.
EFF has long fought for sensible, balanced tech policies because we’ve seen how regulators can focus entirely on use cases they don’t like (such as the use of encryption to hide criminal behavior) and cause enormous collateral harm to other uses (such as using encryption to hide dissident resistance). Similarly, calls to completely preempt state regulation of AI would thwart important efforts to protect people from the real harms of AI technologies. Context matters. Large language models (LLMs) and the tools that rely on them are not magic wands—they are general-purpose technologies. And if we want to regulate those technologies in a way that doesn’t shut down beneficial innovations, we have to focus on the impact(s) of a given use or tool, by a given entity, in a specific context. Then, and only then, can we even hope to figure out what to do about it.
So let’s look at the real-world landscape.
AI’s Real and Potential Harms
Thinking ahead about potential negative uses of AI helps us spot risks. Too often, the corporations developing AI tools—as well as governments that use them—lose sight of the real risks, or don’t care. For example, companies and governments use AI to do all sorts of things that hurt people, from price collusion to mass surveillance. AI should never be part of a decision about whether a person will be arrested, deported, placed into foster care, or denied access to important government benefits like disability payments or medical care.
There is too much at stake, and governments have a duty to make responsible, fair, and explainable decisions, which AI can’t reliably do yet. Why? Because AI tools are designed to identify and reproduce patterns in data that they are “trained” on. If you train AI on records of biased government decisions, such as records of past arrests, it will “learn” to replicate those discriminatory decisions.
And simply having a human in the decision chain will not fix this foundational problem. Studies have shown that having a human “in the loop” doesn’t adequately correct for AI bias, both because the human tends to defer to the AI and because the AI can provide cover for a biased human to ratify decisions that agree with their biases and override the AI at other times.
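The mechanics of this pattern replication are simple enough to show with a toy sketch. This is a deliberately artificial example in plain Python with invented data; no real model, agency, or dataset is implied:

```python
from collections import defaultdict

# Hypothetical, deliberately biased "historical decisions" dataset:
# (neighborhood, outcome) pairs, where 1 = arrested and 0 = released.
# Neighborhood A was policed far more aggressively than neighborhood B.
history = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 0), ("B", 0), ("B", 0), ("B", 0), ("B", 1),
]

def train(records):
    """A trivially simple "model": predict the majority outcome
    observed for each group in the training records."""
    counts = defaultdict(lambda: [0, 0])
    for group, outcome in records:
        counts[group][outcome] += 1
    return {g: (1 if c[1] > c[0] else 0) for g, c in counts.items()}

model = train(history)
# The "model" now recommends arrest for everyone from A and release
# for everyone from B: the skew in the records has become the rule.
print(model)  # {'A': 1, 'B': 0}
```

Real machine-learning systems are vastly more complex, but the failure mode is the same in kind: the training objective rewards reproducing whatever patterns the historical data contains, discriminatory ones included.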
These biases don’t just arise in obvious contexts, like when a government agency is making decisions about people. They can also arise in equally life-affecting contexts like medical care: whenever AI is used for analysis in a context with systemic disparities, and whenever the costs of an incorrect decision fall on someone other than those deciding whether to use the tool. For example, dermatology has historically underserved people of color because of a focus on white skin, and that bias carries over into AI tools trained on the existing, skewed image data.
These kinds of errors are difficult to detect and correct because it’s hard or even impossible to understand how an AI tool arrives at individual decisions. These tools can sometimes find and apply patterns that a human being wouldn’t even consider, such as basing diagnostic decisions on which hospital a scan was done at. Or determining that malignant tumors are the ones where there is a ruler next to them—something that a human would automatically exclude from their evaluation of an image. Unlike a human, AI does not know that the ruler is not part of the cancer.
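The ruler problem comes down to the fact that pattern-finding rewards whatever correlates with the label, not whatever is medically meaningful. A toy sketch in plain Python, with made-up features and data, makes the shortcut visible:

```python
# Hypothetical toy dataset: each record is
# (has_ruler, irregular_border, malignant).
# In these invented "training images," malignant tumors happened to be
# photographed with a ruler, so "has_ruler" is a perfect -- but
# medically meaningless -- predictor.
data = [
    (1, 1, 1), (1, 0, 1), (1, 1, 1),
    (0, 0, 0), (0, 1, 0), (0, 0, 0),
]

def best_single_feature(records):
    """Pick the feature whose value most often matches the label --
    a crude stand-in for what pattern-finding training optimizes."""
    n_features = len(records[0]) - 1
    scores = []
    for f in range(n_features):
        matches = sum(1 for r in records if r[f] == r[-1])
        scores.append((matches, f))
    return max(scores)[1]

# The procedure latches onto the ruler (feature 0), not the actual
# clinical sign (feature 1, the irregular border), because the ruler
# matches the label more often in this data.
print(best_single_feature(data))  # 0
```

A human reviewer would discard the ruler as irrelevant on sight; a purely statistical learner has no basis for doing so unless the training data or the auditing process forces the issue.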
Auditing and correcting for these kinds of mistakes is vital, but in some cases, might negate any sort of speed or efficiency arguments made in favor of the tool. We all understand that the more important a decision is, the more guardrails against disaster need to be in place. For many AI tools, those don’t exist yet. Sometimes, the stakes will be too high to justify the use of AI. In general, the higher the stakes, the less this technology should be used.
We also need to acknowledge the risk of over-reliance on AI, at least as it is currently being released. We’ve seen shades of a similar problem before online (see: “Dr. Google”), but the speed and scale of AI use—and the increasing market incentive to shoe-horn “AI” into every business model—have compounded the issue.
Moreover, AI may reinforce a user’s pre-existing beliefs—even if they’re wrong or unhealthy. Many users may not understand how AI works, what it is programmed to do, and how to fact check it. Companies have chosen to release these tools widely without adequate information about how to use them properly and what their limitations are. Instead they market them as easy and reliable. Worse, some companies also resist transparency in the name of trade secrets and reducing liability, making it harder for anyone to evaluate AI-generated answers.
Other considerations that may weigh against AI uses are its environmental impact and potential labor-market effects. Delving into these is beyond the scope of this post, but they are important factors in determining whether AI is doing good somewhere and whether any benefits from AI are equitably distributed.
Research into the extent of AI harms and means of avoiding them is ongoing, but it should be part of the analysis.
AI’s Real and Potential Benefits
However harmful AI technologies can sometimes be, in the right hands and circumstances, they can do things that humans simply can’t. Machine learning technology has powered search tools for over a decade. It’s undoubtedly useful for machines to help human experts pore through vast bodies of literature and data to find starting points for research—things that no number of research assistants could do in a single year. If an actual expert is involved and has a strong incentive to reach valid conclusions, the weaknesses of AI are less significant at the early stage of generating research leads. Many of the following examples fall into this category.
Machine learning differs from traditional statistics in that the analysis doesn’t make assumptions about what factors are significant to the outcome. Rather, the machine learning process computes which patterns in the data have the most predictive power and then relies upon them, often using complex formulae that are unintelligible to humans. These aren’t discoveries of laws of nature—AI is bad at generalizing that way and coming up with explanations. Rather, they’re descriptions of what the AI has already seen in its data set.
To be clear, we don’t endorse any products and recognize initial results are not proof of ultimate success. But these cases show us the difference between something AI can actually do versus what hype claims it can do.
AI Advancements in Scientific and Medical Research
Researchers are using AI to discover better alternatives to today’s lithium-ion batteries, which require large amounts of toxic, expensive, and highly combustible materials. Now, AI is rapidly advancing battery development by allowing researchers to analyze millions of candidate materials and generate new ones. New battery technologies discovered with the help of AI have a long way to go before they can power our cars and computers, but this field has come further in the past few years than it had in a long time.
AI tools can also improve weather prediction. AI forecasting models are less computationally intensive and often more reliable than traditional tools based on simulating the physical thermodynamics of the atmosphere. Questions remain, though, about how they will handle especially extreme events or systemic climate changes over time.
For example:
- The National Oceanic and Atmospheric Administration has developed new machine learning models to improve weather prediction, including a first-of-its-kind hybrid system that uses an AI model in concert with a traditional physics-based model to deliver more accurate forecasts than either model does on its own.
- Several models were used to forecast a recent hurricane. Google DeepMind’s AI system performed the best, even beating official forecasts from the U.S. National Hurricane Center (which now uses DeepMind’s AI model).
Researchers are using AI to help develop new medical treatments:
- Deep learning tools, like the Nobel Prize-winning model AlphaFold, are helping researchers understand protein folding. Over 3 million researchers have used AlphaFold to analyze biological processes and design drugs that target disease-causing malfunctions in those processes.
- Researchers used machine learning to simulate and computationally test a large range of new antibiotic candidates, hoping they will help treat drug-resistant bacteria, a growing threat that kills millions of people each year.
- Researchers used AI to identify a new treatment for idiopathic pulmonary fibrosis, a progressive lung disease with few treatment options. The new treatment has successfully completed a Phase IIa clinical trial. Such drugs still need to be proven safe and effective in larger clinical trials and gain FDA approval before they can help patients, but this new treatment for pulmonary fibrosis could be the first to reach that milestone.
- Machine learning has been used for years to aid in vaccine development—including the development of the first COVID-19 vaccines—accelerating the process by rapidly identifying potential vaccine targets for researchers to focus on.
AI Uses for Accessibility and Accountability
AI technologies can improve accessibility for people with disabilities. But, as with many uses of this technology, safeguards are essential. Many tools lack adequate privacy protections, aren’t designed for disabled users, and can even harbor bias against people with disabilities. Inclusive design, privacy, and anti-bias safeguards are crucial. But here are two very interesting examples:
- AI voice generators are giving people their voices back, after losing their ability to speak. For example, while serving in Congress, Rep. Jennifer Wexton developed a debilitating neurological condition that left her unable to speak. She used her cloned voice to deliver a speech from the floor of the House of Representatives advocating for disability rights.
- Those who are blind or low-vision, as well as those who are deaf or hard-of-hearing, have benefited from accessibility tools while also discussing their limitations and drawbacks. At present, AI tools often provide information in a more easily accessible format than traditional web search tools and the many websites that are difficult to navigate for users who rely on a screen reader. Other tools can help blind and low-vision users navigate and understand the world around them by providing descriptions of their surroundings. While these visual descriptions may not always be as good as the ones a human may provide, they can still be useful in situations when users can’t or don’t want to ask another human to describe something. For more on this, check out our recent podcast episode on “Building the Tactile Internet.”
When there is a lot of data to comb through, as with police accountability, AI is very useful for researchers and policymakers:
- The Human Rights Data Analysis Group used LLMs to analyze millions of pages of records regarding police misconduct. This is essentially the reverse of harmful use cases relating to surveillance: when the power to rapidly analyze large amounts of data is used by the public to scrutinize the state, there is a potential to reveal abuses of power and, given the power imbalance, very little risk that undeserved consequences will befall those being studied.
- An EFF client, Project Recon, used an AI system to review massive volumes of transcripts of prison parole hearings to identify biased parole decisions. This innovative use of technology to identify systemic biases, including racial disparities, is the type of AI use we should support and encourage.
It is not a coincidence that the best examples of positive uses of AI come in places where experts, with access to infrastructure to help them use the technology and the requisite experience to evaluate the results, are involved. Moreover, academic researchers are already accustomed to explaining what they have done and being transparent about it—and it has been hard won knowledge that ethics are a vital step in work like this.
Nor is it a coincidence that other beneficial uses involve specific, discrete solutions to problems faced by those whose needs are often unmet by traditional channels or vendors. The ultimate outcome is beneficial, but it is moderated by human expertise and/or tailored to specific needs.
Context Matters
It can be very tempting—and easy—to make a blanket determination about something, especially when the stakes seem so high. But we urge everyone—users, policymakers, the companies themselves—to cut through the hype. In the meantime, EFF will continue to work against the harms caused by AI while also making sure that beneficial uses can advance.
-
Jeffrey Epstein’s Home District Congresswoman Defends Contacts and Contributions
A U.S. congresswoman who exchanged texts with Jeffrey Epstein and received campaign contributions from him issued an unapologetic statement on Wednesday, denying any wrongdoing and calling him a “demon.”
Democratic Congresswoman Stacey Plaskett represents the U.S. Virgin Islands, where Epstein owned a private island at the center of the sex-trafficking allegations against him.
In her video statement, posted to Facebook, Plaskett addressed a campaign donation she solicited in September 2018 from Epstein to promote Democrats in races across the country.
“He had previously been a donor to the party,” she said. “When the details over his crimes were exposed in November of 2018, I gave the money to women’s organizations in the Virgin Islands.”
In November 2018, the Miami Herald ran a series of articles called Perversion of Justice, which highlighted a plea deal he struck in 2008 after he was charged with soliciting a prostitute and procuring a child for prostitution.
The sweetheart deal allowed him to plead guilty to lesser state charges in Florida of solicitation, and have federal charges dropped against him. He was allowed to work from home, and spend nights in jail as part of his unusual sentencing.
The Miami Herald series made Epstein a household name, and the renewed attention led to his July 2019 arrest in New York on charges of sex trafficking of minors. His death in a jail cell in August 2019, before he could go to trial, was ruled a suicide.
Plaskett acknowledged texting with Epstein in February 2019 during a congressional hearing where she and other Democrats were grilling former Trump Organization lawyer Michael Cohen about alleged hush money payments made on behalf of Donald Trump to porn star Stormy Daniels, and other women.
“I was trained as a narcotics prosecutor. And I’ve learned to receive information from sources I do not like to obtain information that helps me get to the truth,” Plaskett explained.
“I do, however, recognize that even texting with that man was a bad idea,” the congresswoman conceded.
Plaskett began her statement by saying that “Epstein was a demon and, like you all, I’m disgusted by his deviant behavior.”
Plaskett did not respond to questions from OCCRP before publication.
But her lengthy video statement rejects suggestions that she enabled Epstein. She stated that she had been general counsel for the Economic Development Authority (EDA) of the U.S. Virgin Islands from 2006 to 2012. In 1999, the EDA had granted tax breaks to Epstein, who had purchased and lived on Little St. James Island.
“I was not at EDA when a certificate for tax benefits was initially given, and I was not at EDA when a certificate for tax benefits [was] renewed,” she said. “So, did I give Epstein tax benefits when I was at EDA? No, that would have been impossible.”
Plaskett has represented the U.S. territory on Capitol Hill since 2015. The territory elects a non-voting member of Congress, who is technically a delegate.
-
‘No Family More Evil’: The Sacklers’ Second Act
The following story is co-published with Matt Bivens’ Substack newsletter, The 100 Days.
A worried-looking man in his 50s recently arrived at our emergency department by ambulance. The paramedics wheeled his cot in with thinly contained exasperation. Their handoff to nursing said the patient “fell asleep on the couch holding a drink and then called 911 when he spilt it.” To me, the patient offered a plausible concern that he might have just had a seizure: He’d been watching his grandchild and then woke feeling strange, in a different position on the couch, his drink on the floor, the toddler screaming in fear.
He’d never before had a seizure, but he said he was sure he’d suffered brain damage from abusing opioids. Years ago, he’d hurt his back working as a roofer and was prescribed OxyContin®. Conventional wisdom at that time, ginned up by lying pharmaceutical companies, had held that one would not get addicted to this modern opioid, and so it would be wrong — immoral, even — for a doctor to deny it to a patient in pain. So, he was prescribed a course of opioid addiction. Only for the past few years had he escaped it.
“Why would you have brain damage?” I asked.
“All of the overdoses,” he replied. He said he’d been revived from some of them at our hospital. (Possibly even by me, although I didn’t remember his face.) “Once, I stopped breathing for seven minutes. It was not good.”
I wondered how he knew it was for seven minutes, but instead of asking, I just agreed with him: “Yeah, that’s not good.” We each nodded regretfully, and I opined that he could indeed have experienced a first-time seizure, as the consequence of a remote anoxic brain injury.
“I’m sure that’s it,” he said. “I haven’t been right since that day. My thinking, I mean.”
The Trump Justice Department let the Sacklers skate
The long, grim saga of Purdue Pharma is coming to an end of sorts, courtesy of a $7 billion bankruptcy deal approved this winter. Some hopelessly doomed appeals have been lodged by a handful of individuals, but the process will start grinding them underfoot this week and conclude at some point in the coming months.
As a footnote, this will also eventually trigger a formal sentencing hearing in the U.S. District Court of New Jersey — the same court where, six years ago, Purdue entered guilty pleas in a non-prosecution deal with the first Trump Administration. That plea deal acknowledged there had been illegal and dishonest work selling opioid addictions, and then Justice accepted a little cash to make the problem go away.
Final approval of that plea deal, as a formality, was linked to the bankruptcy hearing. The deal stipulated that a “Sentencing Hearing Date take place no earlier than seventy-five days following the date of confirmation of the … Purdue Bankruptcy.” So, sometime in March?
I have wistful fantasies of a New Jersey district judge this spring going full Joffrey Baratheon, rejecting the deal and demanding a Sackler head. But that’s unlikely.
Purdue and its Sackler family owners made billions by methodically and scientifically getting ordinary people across the country addicted to opioids; they did this over more than 20 years, despite repeated and serious warnings. The consequences for them? Few worth mentioning.
Sure, over the next 15 years the Sacklers will, per the bankruptcy plan, now have to grudgingly give back some of the billions they’ve gathered. And as a family they’ve been publicly shamed. But they remain billionaires, free to travel the world, apparently unrepentant.
The firestorm of addiction they ignited, for money, has destroyed millions of lives.
The manufactured epidemic known as the Opioid Crisis was not solely the fault of Purdue & the Sacklers. They were eagerly joined in that lucrative adventure by many, including rival opioid manufacturers like Johnson & Johnson, Mallinckrodt and the infamous Insys (a rare company where the executives did go to jail); pharmaceutical wholesalers like Cardinal Health and McKesson; big-chain pharmacies like Rite Aid and CVS; and various other camp followers, from the McKinsey & Company consultancy, to academic centers like Mass General Hospital and Tufts medical school, to untold numbers of individual physicians who were confused, cajoled and at times even bribed into writing all of those prescriptions.
That said: everyone else was largely an imitator or a facilitator. The spark of genius that lit the flames, and then fanned them furiously ever higher, even as the Feds were closing in, was that of Purdue & the Sacklers. The firestorm of addiction they ignited, for money, has destroyed millions of lives.
Smoldering rage about this casually-engineered catastrophe informs our politics in under-appreciated ways. It feeds into everything: from the suspicion and loathing many feel toward public health authorities and their (our) precious vaccines, to the widespread, sullen indifference about raining Hellfire missiles down on “drug boat people” clinging to wreckage in the Caribbean.
But perhaps better days await? Purdue this year is on schedule to rise from its own ashes and start do-gooding. It will be reborn as Knoa Pharma.
That’s pronounced “No-ah”, if you care. The AI slop-mind says that this name, apparently chosen by Purdue’s enemies, is meant to evoke “knowledge”. Does that mean we’ve learned something?
“Soon, Purdue will cease to exist,” says Purdue’s board chairman in a press release. “Knoa Pharma, a new independent company owned by a foundation, will receive valuable assets and expertise from the old company, and will carry forth.” Its mission: to clean up the mess left behind by Purdue and the Sacklers.
Well, that, and to keep selling OxyContin® for a few more years. Proceeds will no longer go to the Sacklers, but instead to public-service projects dictated by their victorious enemies. (Purdue did not reply to multiple requests for comment.)
Personally, I’d have preferred to see every Purdue building torched to the ground and the earth beneath plowed over and salted. I’d have also welcomed seeing corporate executives and Sackler family representatives do jail time, which is what we usually insist upon when we roll up an organized crime ring that’s killed a bunch of people.
But that’s not how things went.
Five years ago, in the waning days of the first Trump Administration, individual Sacklers paid a mere $225 million to buy off the U.S. Justice Department. That’s pocket change for the crew Justice dubbed “the Named Sacklers”: Richard, David, Kathe, Jonathan and Mortimer. Their family-owned company raked in $34 billion over the years, almost all of it from OxyContin®, and the Sacklers paid themselves fabulously from the proceeds. The family’s estimated worth is north of $11 billion. (Their empire includes, to this day, U.K.-based Mundipharma, which earns far more than $1 billion a year selling OxyContin® in China and other parts foreign.)
“The Named Sacklers”: Dr. Richard Sackler, who was deeply involved in running the company; his son David Sackler and cousin Dr. Kathe Sackler, as seen together on a 2020 Congressional hearing Zoom call; (the Estate of) Jonathan Sackler (Richard’s brother, who died in 2020 of cancer); and Mortimer D.A. Sackler (Richard’s cousin).
The government says the Sacklers also gutted Purdue in the final years before its 2019 bankruptcy, taking out 75% of revenues a year. The $11 billion the family frantically “milked” (the Supreme Court’s word) out of the company in its dying days came on top of billions paid out in the earlier years.
So, the Justice Department’s $225 million fine — the price the Sacklers paid not to be criminally prosecuted — represented perhaps 1% of the billions the Sacklers have enjoyed.
Put another way, it left untouched 99% of the Sackler family’s ill-gotten opioid gains. But it was enough to resolve Federal allegations that Sackler-run Purdue had made billions illegally slinging dope; and that the Sacklers had then hurriedly siphoned its final billions off in “fraudulent transfers … made to hinder future creditors.”
It left untouched 99% of the Sackler family’s ill-gotten opioid gains.
(The “future creditors” the Sacklers sought to outmaneuver were mostly America’s state and local governments, which had filed hundreds of indignant lawsuits demanding compensation for the suffering and death Purdue had created in their city, town or state; but also many individuals, including more than 130,000 who had filed personal injury claims via Purdue’s bankruptcy proceedings.)
Probably the country would have shrieked in rage if it had understood the Sacklers were escaping prosecution in return for perhaps 1% of their ill-gotten gains. But the Justice Department played us all masterfully, with “but wait, there’s more!” infomercial-style addendums.
First, yes, Justice was deferring prosecution in return for a small check, but as Justice explicitly stated, it could still criminally prosecute everyone involved in this mess. A defiant press release back then stated that “years of hard work by the FBI” had found Purdue & the Sacklers guilty of “illegal and inexcusable activity,” and so Justice reserved the right to bring criminal charges, including specifically against the Sacklers, at any point. (Then why wait? If it was illegal and inexcusable, why excuse it?)
Second, the Justice Department’s announcement touted the Sacklers’ personal fine of $225 million but also bandied about various other confusing billion-dollar sums. These totaled a startling $8.3 billion, which Justice claimed Purdue had agreed that it owed the government. But that money was never paid: this Purdue IOU to the Feds was only good for generating self-congratulatory headlines in the day’s news coverage.

Overly generous headlines about money Purdue never paid, as seen in The Boston Globe (courtesy of Associated Press) and on CNBC. These billions include “the largest penalties ever levied against a pharmaceutical manufacturer,” the Trump 1.0 Justice Department crowed proudly. But once the applause died down, the $8 billion-and-change invoice was folded into the bankruptcy proceeding, where it largely evaporated.
Good luck figuring out what was actually paid to the Feds. By my reporting, and review of the 245-page “18th amended” version of the plan (PDF here), the $8 billion-plus waters down to about $275 million. There’s a $225 million payment from Purdue to the Justice Department — that must be the standard “make my crime go away, please” fee:

— and then $25 million more upfront —

— and another $25 million on an installment plan.

Finally, the Feds insisted that Purdue, which at that time had just entered the bankruptcy proceedings, would be punitively dissolved.
The company would be destroyed! Cue the celebrations!
Well, no, not “destroyed.” Not exactly. More like “improved.” Post-bankruptcy, the Justice Department said even back then, Purdue would emerge repurposed as “a public benefit company,” one dedicated to tidying up after Purdue’s wild party.
But no one ever went to jail.
We let the Sacklers keep their freedom and their billions.
True, two Sacklers, David and Kathe, did have to sit through a 2020 Congressional hearing, where they faced bipartisan contempt and rage. They were likened to El Chapo, described as “sickening,” and told by one Kentucky Congressman that there was “no family more evil than yours.” Of course, they didn’t actually have to sit through any of that, because it was the la-la-land of lockdowns, and thus, in honor of COVID-19, this national Congressional hearing was a Zoom call.
In other words: We let the Sacklers keep their freedom and their billions, but we did also yell at them on Zoom.
To be fair, we actually got to yell at them on Zoom twice: The U.S. Bankruptcy Court mediating the battle for Purdue’s assets also made three Sacklers sit through Three Minutes of Hate from ordinary Americans who had lost loved ones to opioids. The Sacklers were not allowed to respond, only to listen as they were called “scum of the earth,” “greedy billionaire cowards,” and so on.
But again, this 2022 hearing was … virtual. Two of the Sacklers listened on Zoom; one, Richard Sackler, got to call in by telephone. The Associated Press studied its computer screen and noted the two observable Sacklers seemed to be listening with neutral expressions. Perhaps they were thinking about how amazingly rich they are.
The death and brain damage go on
Opioid deaths accelerated dramatically during COVID-19, as did all of the other miseries one might expect from locking people inside their homes. (There was the skyrocketing alcoholism, for example, as well as a “horrifying global surge” in spousal abuse that prompted the United Nations secretary general to call for a “domestic violence ceasefire”.)
Today, as COVID-19 has been downgraded to just-another-flu, opioid overdose deaths have also fallen precipitously. We’re supposed to celebrate this, because we’re back to the “only 80,000 or so” deaths a year we’d been seeing right before the coronavirus pandemic. That’s right, more than 200 Americans die every day of an opioid overdose, and this is progress.
How many more are revived or otherwise survive, but suffer anoxic brain injuries? That, we don’t know. It’s just one more mysterious facet of the ongoing public health crisis that Purdue 1.0 helped create, and that Purdue 2.0, Knoa Pharma, is now going to fix for you. Remember to say “thank you”!
The post ‘No Family More Evil’: The Sacklers’ Second Act appeared first on Truthdig.
-
Pluralistic: Justin Key’s “The Hospital at the End Of the World” (04 Feb 2026)
Today’s links
- Justin Key’s “The Hospital at the End Of the World”: A biopunk medical thriller from a major new talent.
- Hey look at this: Delights to delectate.
- Object permanence: Coconut volunteers; Astro Noise; Rich old men behind “Millennials Rising”; Stop the “Stop the Steal” steal; “Chasing Shadows.”
- Upcoming appearances: Where to find me.
- Recent appearances: Where I’ve been.
- Latest books: You keep readin’ em, I’ll keep writin’ ’em.
- Upcoming books: Like I said, I’ll keep writin’ ’em.
- Colophon: All the rest.
Justin Key’s “The Hospital at the End Of the World” (permalink)
Justin C. Key is one of the most exciting new science fiction writers of this decade, and today HarperCollins publishes his debut novel, The Hospital at the End of the World:
I’ve followed Key’s work for more than a decade, ever since I met him as a student while teaching at the Clarion West writers’ workshop in Seattle. At the time, Key impressed me – a standout writer in a year full of standouts – and I wasn’t surprised in the least when HarperCollins published a collection of his afrofuturist/Black horror stories, The World Wasn’t Ready For You, in 2023:
https://pluralistic.net/2023/09/19/justin-c-key/#clarion-west-2015
This is virtually unheard of. Major genre publishers generally don’t publish short story collections at all, let alone short story collections by writers who haven’t already established themselves as novelists. The exceptions are rare as hell, and they’re names to conjure with: Ted Chiang, say, or Kelly Link:
https://pluralistic.net/2024/02/13/the-kissing-song/#wrack-and-roll
But anyone who read World Wasn’t Ready immediately understood why Key’s work qualified him for an exception to this iron law of publishing. Key is an MD and a practicing psychiatrist, and he combines keen insights into personal relations and human frailty with a wild imagination, deep compassion, and enviable prose chops.
Hospital at the End of the World is Key’s first novel, and it’s terrific. Set in a not-so-distant future in which an AI-driven health monopolist called The Shepherd Organization controls much of the lives of everyday Americans, Hospital follows Pok, a young New Yorker who dreams of becoming an MD. Pok’s father is also a doctor, famous for his empathic, human-centric methods and his scientific theories about the role that “essence” (a psychospiritual connection between doctors and patients) plays in clinical settings.
The story opens with Pok hotly anticipating an acceptance letter from The Shepherd Organization, and the beginning of his new life as a medical student. But when word arrives, Pok learns that he has been rejected from every medical school in the TSO orbit. In desperate confusion, he works with shadowy hackers in a bid to learn why his impeccable application and his top grades resulted in this total rejection. That’s when he learns that someone had sabotaged his application and falsified his grades, and, not long thereafter, he learns that the saboteur was his father.
To make things worse, Pok’s father has fallen grievously ill – so ill, in fact, that he ends up in a Shepherd Organization hospital, despite his deep enmity for TSO and its AI-driven practice of medicine. Pok doesn’t accompany his father, though – he has secured a chance to sit a make-up exam in a desperate bid to get into med school. By the time he is finished with his exam, though, he learns that his father has died, and all that is left of him is an AI-powered chatbot that is delivered to Pok’s apartment along with a warning to flee, because he is in terrible danger from the Shepherd Organization.
Thus begins Pok’s tale as he goes underground in a ubiquitous AI surveillance dystopia, seeking sanctuary in New Orleans and hoping to make it to the Hippocrates, the last holdout from America’s AI-driven system of medicine and surveillance. Pok’s father learned to practice medicine at Hippocrates, and had urged Pok to study there, even securing a full-ride scholarship for him. But Pok had no interest in the mystical, squishy, sentimental ethos of the Hippocrates, and had been determined to practice the Shepherd Organization’s rigorous, cold, data-driven form of medicine.
Now, Pok has no choice. Hitchhiking, hopping freight cars, falling into company with other fugitives, Pok makes his way to New Orleans, a city guarded by tall towers that radiate energy that dampens both the punishing weather events that would otherwise drown the city and the data signals by which the Shepherd Organization tracks and controls the American people.
This is the book’s second act, a medical technothriller that sees Pok as an untrusted outsider in the freshman class at Hippocrates med school, amidst a strange and alarming plague that has sickened the other refugees from TSO America who have taken up residence in New Orleans. Pok has to navigate factions within the med school and in New Orleans society, even as he throws himself into the meat grinder of med school and unravels the secrets of his father and his own birth.
What follows is a masterful and suspenseful work of science fiction informed by Key’s own medical training and his keen sense of the human psyche. It’s one part smart whodunnit, one part heist thriller, and one part revolutionary epic, and at its core is a profound series of provocations and thought experiments about the role that deep human connection and empathy play in medical care. It’s a well-structured, well-paced sf novel that probes big, urgent contemporary themes while still engrossing the reader in the intimate human relations of its principals. A wonderful debut novel from a major new writer.
Hey look at this (permalink)

- Ken MacLeod: Imagined Futures https://plutopia.io/ken-macleod-imagined-futures/
- Elbows Up: How Canada Can Disenshittify Its Tech, Reclaim Its Sovereignty, and Launch a New Tech Sector Into a Stable Orbit https://archive.org/details/disenshittification-nation
- HOPE IS NOW A 501(C)(3) NON-PROFIT ORGANIZATION https://2600.com/content/hope-now-501c3-non-profit-organization
- Department of Justice appeals Google search monopoly ruling https://www.theverge.com/tech/873438/google-antitrust-case-doj-states-appeal
- List of Kennedy Center cancellations during the Trump administration https://en.wikipedia.org/wiki/List_of_Kennedy_Center_cancellations_during_the_Trump_administration (h/t Amanda Marcotte)
Object permanence (permalink)
#20yrsago AOL/Yahoo: our email tax will make the net as good as the post office! https://www.nytimes.com/2006/02/05/technology/postage-is-due-for-companies-sending-email.html
#20yrsago Volunteers ferry 15k coconuts every day to Indian temple http://news.bbc.co.uk/2/hi/south_asia/4677320.stm
#15yrsago Wikileaks ACTA cables confirm it was a screwjob for the global poor https://arstechnica.com/tech-policy/2011/02/secret-us-cables-reveal-acta-was-far-too-secret/
#10yrsago Laura Poitras’s Astro Noise: indispensable book and gallery show about mass surveillance https://www.wired.com/2016/02/snowdens-chronicler-reveals-her-own-life-under-surveillance/
#10yrsago How to prepare to join the Internet of the dead https://archive.org/details/Online_No_One_Knows_Youre_Dead
#10yrsago Who funds the “Millennials Rising” Super PAC? Rich old men. https://web.archive.org/web/20160204223020/https://theintercept.com/2016/02/04/millennials-rising-super-pac-is-95-funded-by-old-men/
#10yrsago They promised us a debate over TPP, then they signed it without any debate https://www.techdirt.com/2016/02/03/countries-sign-tpp-whatever-happened-to-debate-we-were-promised-before-signing/
#5yrsago Stop the “Stop the Steal” steal https://pluralistic.net/2021/02/04/vote-machine-tankies/#ess
#5yrsago Organic fascism https://pluralistic.net/2021/02/04/vote-machine-tankies/#pastel-q
#5yrsago Ron Deibert’s “Chasing Shadows” https://pluralistic.net/2025/02/04/citizen-lab/#nso-group
Upcoming appearances (permalink)

- Salt Lake City: Enshittification at the Utah Museum of Fine Arts (Tanner Humanities Center), Feb 18
https://tanner.utah.edu/center-events/cory-doctorow/
- Montreal (remote): Fedimtl, Feb 24
https://fedimtl.ca/
- Victoria: 28th Annual Victoria International Privacy & Security Summit, Mar 3-5
https://www.rebootcommunications.com/event/vipss2026/
- Berkeley: Bioneers keynote, Mar 27
https://conference.bioneers.org/
- Berlin: Re:publica, May 18-20
https://re-publica.com/de/news/rp26-sprecher-cory-doctorow
- Berlin: Enshittification at Otherland Books, May 19
https://www.otherland-berlin.de/de/event-details/cory-doctorow.html
- Hay-on-Wye: HowTheLightGetsIn, May 22-25
https://howthelightgetsin.org/festivals/hay/big-ideas-2
Recent appearances (permalink)
- Why Everything Got Worse and What to Do About It (Jordan Harbinger)
https://www.jordanharbinger.com/cory-doctorow-why-everything-got-worse-and-what-to-do-about-it/
- How the Internet Got Worse (Masters in Business)
https://www.youtube.com/watch?v=auXlkuVhxMo
- Enshittification (Jon Favreau/Offline)
https://crooked.com/podcast/the-enshittification-of-the-internet-with-cory-doctorow/
- Why Big Tech is a Trap for Independent Creators (Stripper News)
https://www.youtube.com/watch?v=nmYDyz8AMZ0
- Enshittification (Creative Nonfiction podcast)
https://brendanomeara.com/episode-507-enshittification-author-cory-doctorow-believes-in-a-new-good-internet/
Latest books (permalink)
- “Canny Valley”: a limited edition collection of the collages I create for Pluralistic, self-published, September 2025
- “Enshittification: Why Everything Suddenly Got Worse and What to Do About It,” Farrar, Straus, Giroux, October 7 2025
https://us.macmillan.com/books/9780374619329/enshittification/
- “Picks and Shovels”: a sequel to “Red Team Blues,” about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels)
- “The Bezzle”: a sequel to “Red Team Blues,” about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (thebezzle.org)
- “The Lost Cause”: a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org)
- “The Internet Con”: a nonfiction book about interoperability and Big Tech, Verso, September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245)
- “Red Team Blues”: “A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before.” Tor Books (http://redteamblues.com)
- “Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid,” with Rebecca Giblin, on how to unrig the markets for creative labor, Beacon Press/Scribe, 2022 (https://chokepointcapitalism.com)
Upcoming books (permalink)
- “Unauthorized Bread”: a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, First Second, 2026
- “Enshittification: Why Everything Suddenly Got Worse and What to Do About It” (the graphic novel), First Second, 2026
- “The Memex Method,” Farrar, Straus, Giroux, 2026
- “The Reverse-Centaur’s Guide to AI,” a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026
Colophon (permalink)
Today’s top sources:
Currently writing:
- “The Post-American Internet,” a sequel to “Enshittification,” about the better world the rest of us get to have now that Trump has torched America (1011 words today, 21655 total)
- “The Reverse Centaur’s Guide to AI,” a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
- “The Post-American Internet,” a short book about internet policy in the age of Trumpism. PLANNING.
- A Little Brother short story about DIY insulin. PLANNING.

This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
“When life gives you SARS, you make sarsaparilla” -Joey “Accordion Guy” DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (“BOGUS AGREEMENTS”) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
-
Latvia: Seven Detained in Hospital Procurement Collusion Probe
Latvia’s anti-corruption bureau said on Tuesday it had detained seven people and carried out searches at two major hospitals and more than 20 other locations as part of an investigation into suspected bid-rigging in public procurement.
The Corruption Prevention and Combating Bureau (KNAB) said the proceedings combine three criminal cases, including two initiated by the European Public Prosecutor’s Office (EPPO) and one opened by KNAB. EPPO investigates crimes affecting the European Union’s financial interests.
Investigators suspect officials at Ogre District Hospital and Liepāja Regional Hospital colluded with at least two medical goods suppliers to steer multiple contracts to those companies.