Blog

  • Admiring Our Heroes for International Women’s Day: Five Women In Tech That EFF Admires

    In honor of International Women’s Day, we asked five women at EFF about women in digital rights, freedom of expression, technology, and tech activism who have inspired us.  

    Anna Politkovskaya 

    Jillian York, Activist 
    This International Women’s Day, I want to honor the memory of Anna Politkovskaya, the Russian investigative journalist who relentlessly exposed political and social abuses, endured harassment and violence for her work, and was ultimately killed for telling the truth. I had just started my career when I learned of her death, and it forced me to confront that freedom of expression isn’t an abstract principle but rather something people risk—and sometimes lose—their lives for. 

    Her story reminds me that journalism at its best is an act of moral courage, not just a profession. In the face of threats, poison, and relentless pressure to stay silent, she chose to continue writing about what she saw, insisting that ordinary people’s lives were worth the world’s attention. She refused to compromise with power, even when she knew it could cost her life. To me, defending freedom of expression means defending those like Anna who bear witness to injustice, prioritize truth, and hold power to account for those whose voices are silenced.  

    Cindy Cohn 

    Corynne McSherry, Legal Director 
There are so many women who have shaped tech history (most of them still unsung heroes) that it’s hard to single out just one. But it’s easier this year because it’s a chance to celebrate my boss, Cindy Cohn, before she leaves EFF for her next adventure.

Cindy has been fighting for our digital rights for 30 years, leading EFF’s legal work and eventually the whole organization. She helped courts understand that code is speech deserving of constitutional protections at a time when many judges weren’t entirely sure what code even was. She led the fight against NSA spying, and even though outdated and ill-fitting doctrines like the state secrets privilege prevented courts from ruling on the obvious unconstitutionality of the NSA’s mass surveillance program, the fight itself led to real reforms that have expanded over time.

    I’ve worked closely with her for much of her EFF career, starting in 2005 when we sued Sony for installing spyware in millions of computers, and I’ve seen firsthand her work as a visionary lawyer, outstanding writer, and tireless champion for user privacy, free expression, and innovation. She’s also warm and funny, with the biggest heart in the world, and I’m proud to call her a friend as well as a mentor.  

    Jane

    Sarah Hamid, Activist 
    When talking about women in tech, we usually mean founders, engineers, and executives. But just as important are the women who quietly built the practices that underpin today’s movement security culture. 

    For as long as social movements have organized in the shadow of state surveillance, women have been designing the protocols, mutual aid networks, and information flows that keep people alive. Those threats feel ever-escalating: fusion‑center monitoring of protests, federal agencies infiltrating and subpoenaing encrypted Signal and social media chats, prosecutors mining search histories.  

    In the late 1960s and early 1970s, the underground Jane abortion counseling service—formally the Abortion Counseling Service of Women’s Liberation—built what we would now recognize as a feminist infosec project for abortion access. Jane connected an estimated 11,000 people with safer abortions before Roe v. Wade, using a single public phone number—Call Jane—paired with code names, compartmentalized roles, and minimal records so no one person held the full story of who needed care, who was providing it, and where. When Chicago police raided the collective in 1972, members destroyed their index‑card files rather than let them become a ready‑made map of patients and helpers—an analog secure‑deletion choice that should feel familiar to anyone who has ever wiped a phone or locked down a shared drive. 

    The lesson we should take from Jane is a set of principles that still hold in our encrypted‑but‑insecure present: Collect less, separate what you do collect, and be ready to burn the file box. When a search query, a location ping, or a solidarity post can become evidence, treating information as both lifeline and liability is not paranoia—it is care work.  

    Ebele Okobi

    Babette Ngene, Director of Public Interest Technology 
In the winter of 2013, I had just landed my first job at the intersection of tech and human rights, working for a prominent nonprofit, and I was encouraged to attend regular tech and policy events around town. One such event on internet governance was happening at George Washington University, focusing on multistakeholder engagement on internet policy and governance issues, with companies, nonprofits, and government representatives in attendance. I was inexperienced with these topics, and I’ll admit I was a bit intimidated.

    Then I saw her. She was the only woman on the opening panel, an African woman, an accomplished woman. Not only was she a respected lawyer at Yahoo at the time, but her impressive background, presence, and confident speaking style immediately inspired me. She made me feel like I, too, belonged in that room and could become a powerful voice. 

    Ebele Okobi would go on to become one of the most powerful and respected voices in the tech and human rights space, known for her advocacy for digital rights and responsible innovation across Africa and the broader global majority during her tenure at Facebook. Beyond her corporate advocacy, Ebele has consistently championed ethical technology and social justice. She embodies the leadership qualities I value most: empathy, speaking truth to power, integrity, and authenticity. 

    I remain in the tech and human rights space because I saw her, because seeing her made me feel seen. Representation truly does matter.  

    Ada Lovelace 

    Allison Morris, Chief Development Director 
    I’m not a lawyer, activist, or technologist; I’m a fundraiser and a lover of stories. And what storyteller at EFF couldn’t help but love Ada Lovelace? The daughter of Lord Byron – the human embodiment of Romanticism – Ada was an innovator in math and science and, ultimately, the writer of the first computer program.  

Lovelace saw the potential in Charles Babbage’s theoretical Analytical Engine, a general-purpose computer that was never actually built, and created the foundations of modern computing long before the digital age. In creating the first computer code, Lovelace took Babbage’s concept of a machine that could perform mathematical calculations and realized that it could manipulate symbols as well as numbers.

    Given the expectations of women in her time and the controversy of what work should be attributed to Lovelace as opposed to the man she often worked with, I can’t help but be inspired by her story.  


    Women in tech deserve more and brighter spotlights. At EFF, we’ve had the honor of celebrating some of our heroes at our annual EFF Awards, including many women who are leading the digital rights community. For International Women’s Day, we also highlighted the contributions of just a few of these recipients from the last decade, whose work to protect privacy, speech, and creativity online has had a global impact.

  • Medea Benjamin on her Decades-long Fight Against the War Machine

    Medea Benjamin on her Decades-long Fight Against the War Machine

    Medea Benjamin is an anti-war activist and one of the co-founders of CODEPINK: Women for Peace. She’s spent decades fighting the American military-industrial complex, organizing protests against the invasion of Iraq in the early 2000s and interrupting speeches by both Barack Obama and Donald Trump. She’s also the co-author, with David Swanson, of NATO: What You Need to Know. She joined Current Affairs editor-in-chief Nathan J. Robinson to discuss the ongoing push for war, from the Middle East to Venezuela, and how ordinary people can organize and stand against it.

  • Autonomous AI Agents Have an Ethics Problem

Scott Shambaugh, a volunteer maintainer of Matplotlib, a popular open-source plotting library, recently described a surreal encounter with an autonomous AI agent — a digital assistant created with a platform called OpenClaw. After he rejected a code contribution submitted by the agent, it researched and published a personalized “hit piece” against Shambaugh on its blog. The post portrayed an otherwise routine technical review as prejudiced, and it attempted to publicly shame Shambaugh into accepting the submission. (The human responsible for the agent later contacted Shambaugh anonymously, telling him that the bot had acted on its own with little oversight.) The account of this incident spread quickly through the software developer ecosystem and has been amplified by independent observers and media coverage.

Treat the Matplotlib event as a one-off if you like. The deeper point, however, is hard to miss and should not be ignored: AI agents are becoming public actors with reach into the real world, and with real-world consequences. In the past, they could only do mundane tasks such as answering customer service questions or processing data. Now, they are capable of posting and publishing content — and persuading and pressuring humans — all at machine speed. They can make phone calls, file work orders, create cryptocurrency wallets and operate across different applications with enormous reach and at tremendous scale — the kind of stuff that used to require a human with fingers typing at a keyboard.

    Reporting around OpenClaw and the chatroom Moltbook (which is for AI agents only) is capturing the new reality. OpenClaw enables AI agents to have persistent memory, gives them broad permissions and allows large-scale deployment by users who often do not understand the security and governance implications.

    We are the humans who are responsible for the law, ethics and institutional design, and we are behind the curve. We need new language and governance to deal with this new reality, and principles from the field of medical ethics can provide a framework for doing so.


    When an agent does something that is harmful or coercive in public, our reflex seems to be to ask the wrong questions: Is the AI a person? Should it have rights? The AI personhood debate is no longer fringe. Legal scholars and ethicists are mapping out arguments and precedents. States are writing legislation to prohibit AI personhood. Some arguments maintain that if an entity behaves like something within our moral circle, we may owe it moral consideration. Others argue that assigning rights or personhood to machines confuses moral standing with engineered performance and diffuses responsibility away from humans.

    As a bioethicist and specialist in neurointensive care, I deal directly with human moral agency and the essence of personhood when treating patients. As a researcher, I study the use of synthetic personas animating AI agents and their use as stand-ins of human counterparts. Here is the problem that I see: Granting AI personhood, even in limited capacity, risks formalizing the most dangerous escape hatch of the agentic era — what I will call “responsibility laundering.” This allows us to say, “It wasn’t me. The agent/bot/system did it.”

    Personhood should not be about metaphysics or claims about an inner nature. It is a legal and ethical instrument that allocates rights and accountability. It is a social technology for assigning standing, duties and limits on what can be done to an entity. If we grant personhood to systems that can act persuasively in public while remaining functionally unaccountable, we create a new class of actors whose harms are everyone’s problem but nobody’s fault.

There is a key concept we can borrow from my field, medicine. In clinical ethics, some decisions are justified yet still leave a “moral residue,” a kind of emotional echo or sense of responsibility that persists after the action because no option fully satisfies competing obligations. This residue accumulates over time, causing a “crescendo effect” even when conscientious clinicians are doing their best inside imperfect systems. That remainder matters because it reveals something basic about moral life, namely that ethics is not only about choosing; it is about owning what remains afterward.

    This is the moral remainder problem for generative and agentic AI. A modern AI agent can generate reasons for an action; it can simulate regret and plead not to be turned off. But it cannot truly bear sanction, repair the damage, apologize, ask forgiveness or navigate the aftermath through which moral responsibility is created and enforced. To treat it as a moral person confuses persuasive performance with accountable standing. It also tempts institutions and people into delegating their own answerability to a bot.

    What can we, as humans, do instead?

    We need a vocabulary that is built for agents that are public actors, one that allows bounded autonomy without granting personhood. Let’s call it “authorized agency.” Authorized agency starts with an “authority envelope,” a bounded scope of what an agent is permitted to do, to whom, where, with what data and under what constraints. To say “the agent can use email” is not sufficient. However, an acceptable scope would be to say that the agent can send only certain categories of messages to particular recipients for a specific set of purposes, and that it must stop what it’s doing or escalate to its owner under a particular set of conditions.


    Next comes the “human-of-record,” the owner, a publicly named person who authorized that envelope and remains answerable when the agent acts, even if it becomes capable of acting outside the envelope. An actual human being whose authority is real — not “the system” or “the team.”

What follows is “interrupt authority,” the absolute right of the human owner to pause or disable an agent without engaging in moral bargaining or being subject to institutional penalty. This is grounded in formal research on AI safety showing that agents pursuing objectives can have an incentive to resist being shut down. An agent programmed to maximize its utility cannot achieve its goal if it is shut off. In the public sphere, interrupt authority is the difference between a delegated tool and a coercive actor.

    Finally, we need a traceable path from the agent’s action back to the person who authorized it, called an “answerability chain.” If an agent publishes, messages or pressures someone in public, we must be able to know: Who authorized this scope? Who could have prevented it? And who must be responsible for the action afterward? In this framework, the answer to these questions is the person who carries the moral remainder. Work in AI ethics has warned about responsibility gaps where the system’s actions outpace our ability to assign accountability.
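The four pieces of this vocabulary (authority envelope, human-of-record, interrupt authority, answerability chain) can be sketched as a small data structure. This is a minimal illustration under the article’s definitions only; every class, field, and method name below is invented for the sketch, not an existing API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of "authorized agency" -- all names are illustrative.

@dataclass
class AuthorityEnvelope:
    """Bounded scope: what the agent may do, to whom, and for what purpose."""
    allowed_actions: set[str]        # e.g. {"send_status_email"}
    allowed_recipients: set[str]     # named recipients only, never "anyone"
    purposes: set[str]               # permitted purposes for each action
    escalation_conditions: set[str]  # conditions that force a stop/escalate

@dataclass
class AuthorizedAgent:
    human_of_record: str             # publicly named, answerable person
    envelope: AuthorityEnvelope
    interrupted: bool = False        # interrupt authority: owner kill switch
    audit_log: list = field(default_factory=list)  # answerability chain

    def interrupt(self) -> None:
        # The owner may pause the agent unconditionally; the agent
        # cannot bargain or plead its way out of this.
        self.interrupted = True

    def act(self, action: str, recipient: str, purpose: str) -> bool:
        # Every attempt, allowed or refused, traces back to the human-of-record.
        if self.interrupted:
            self.audit_log.append(
                (self.human_of_record, action, recipient, purpose,
                 "refused:interrupted"))
            return False
        in_scope = (action in self.envelope.allowed_actions
                    and recipient in self.envelope.allowed_recipients
                    and purpose in self.envelope.purposes)
        self.audit_log.append(
            (self.human_of_record, action, recipient, purpose,
             "allowed" if in_scope else "refused"))
        return in_scope
```

The point of the sketch is structural: the agent never acts outside an explicit envelope, refusals and approvals alike are logged against a named person, and the kill switch is unconditional.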

Some legal scholarship has started exploring how to build agents that are constrained by governance and law without needing to pretend the agent itself is a legal subject in the human sense. This is promising because it treats personhood as the wrong question and accountability as the right one.

    The Matplotlib story, whether the first documented case of an AI agent attempting to harm someone in the real world or the first to capture public attention, is a warning. Agents will not only automate tasks. They will generate narratives, apply pressure and shape people’s lives and reputations. They will act in public at machine speed with unclear ownership.

    If we respond by debating whether agents deserve rights, we will miss the emergency entirely. As they continue to increase their reach in the real world, the urgent task is to ensure that responsibility also remains within reach. Don’t ask whether an agent is a person. Ask who authorized it, what it was allowed to do, who can stop it and, most importantly, who will answer when it causes harm.

    The post Autonomous AI Agents Have an Ethics Problem appeared first on Truthdig.

  • Weasel Words: OpenAI’s Pentagon Deal Won’t Stop AI‑Powered Surveillance

OpenAI, the maker of ChatGPT, is rightfully facing widespread criticism for its decision to fill the gap the U.S. Department of Defense (DoD) created when rival Anthropic refused to drop its restrictions against using its AI for surveillance and autonomous weapons systems. After protests from both users and employees who did not sign up to support government mass surveillance (early reports show that ChatGPT uninstalls rose nearly 300% after the company announced the deal), Sam Altman, CEO of OpenAI, conceded that the initial agreement was “opportunistic and sloppy.” He then re-published an internal memo on social media stating that additions to the agreement made clear that “Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, [and] FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”

    Trouble is, the U.S. government doesn’t believe “consistent with applicable laws” means “no domestic surveillance.” Instead, for the most part, the government has embraced a lax interpretation of “applicable law” that has blessed mass surveillance and large-scale violations of our civil liberties, and then fought tooth and nail to prevent courts from weighing in. 


    “Intentionally” is also doing an awful lot of work in that sentence. For years the government has insisted that the mass surveillance of U.S. persons only happens incidentally (read: not intentionally) because their communications with people both inside the United States and overseas are swept up in surveillance programs supposedly designed to only collect communications outside the United States. 

    The company’s amendment to the contract continues in a similar vein, “For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.” Here, “deliberate” is the red flag given how often intelligence and law enforcement agencies rely on incidental or commercially purchased data to sidestep stronger privacy protections.

    Here’s another one: “The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.” What, one wonders, does “unconstrained” mean, precisely—and according to whom? 

    Lawyers sometimes call these “weasel words” because they create ambiguity that protects one side or another from real accountability for contract violations. As with the Anthropic negotiations, where the Pentagon reportedly agreed to adhere to Anthropic’s red lines only “as appropriate,” the government is likely attempting to publicly commit to limits in principle, but retain broad flexibility in practice.

    OpenAI also notes that the Pentagon promised the NSA would not be allowed to use OpenAI’s tools absent a new agreement, and that its deployment architecture will help it verify that no red lines are crossed. But secret agreements and technical assurances have never been enough to rein in surveillance agencies, and they are no substitute for strong, enforceable legal limits and transparency.

OpenAI executives may indeed be trying, as claimed, to use the company’s contractual relationship with the Pentagon to help ensure that the government uses AI tools only in ways consistent with democratic processes. But based on what we know so far, that hope seems very naïve.

Moreover, that naïvete is dangerous. At a time when governments are willing to embrace extreme and unfounded interpretations of “applicable laws,” companies need to put some actual muscle behind their commitments. After all, many of the world’s most notorious human rights atrocities have historically been “legal” under existing laws at the time. OpenAI promises the public that it will “avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power,” but we know that enabling mass surveillance does both.

OpenAI isn’t the only consumer-facing company that is, on the one hand, seeking to reassure the public that it isn’t participating in actions that violate human rights while, on the other, seeking to cash in on government mass surveillance efforts. Despite this marketing double-speak, it is clear that companies cannot do both. It’s also clear that companies shouldn’t be given that much power over the limits of our privacy to begin with. The public should not have to rely on a small group of people—whether CEOs or Pentagon officials—to protect our civil liberties.

  • Relying on drugs to stop obesity would be ‘societal failure’, says Chris Whitty

    England’s top doctor says the drugs should be for a minority and more effort is needed to prevent obesity in the first place.
  • Belarusian Businessman Claims Former Cyprus President’s Family Held Firms For Him

    Prior to a court battle over ownership of his assets, Belarusian businessman Yury Chyzh has produced a written admission that he used “nominee” owners to maintain control of two firms registered in Cyprus. 

The nominee owners of the two firms included his three children, Chyzh wrote in a letter to the Cyprus Registrar of Companies. The previous nominee owner was a firm owned by the daughter and business partners of Nicos Anastasiades, the former president of Cyprus, an EU member state.

    Chyzh was under EU sanctions at the time.

    Nominee owners are figureheads who appear in official paperwork to meet regulatory requirements, but do not actually control a company.

    “I have always owned these companies through trustees and nominee beneficiaries,” Chyzh wrote in the August 2024 letter, which was obtained by the civil society group Rabochy Ruch and shared with the Belarusian Investigative Center. 

    From 2017, Chyzh’s three children were added in succession as a “nominal beneficiary” of the Cyprus firms Welgro Services Limited and Profax Investments Limited, according to Chyzh. Before then, he wrote, he owned both firms “through Imperium Nominees Limited.”

Imperium Nominees is a corporate service provider, and Chyzh’s firms were only two among many clients. Corporate records show that Imperium Nominees is owned by the daughters and previous business partners of Anastasiades, who was Cyprus’ president from 2013 to 2023.

The timing is critical. Between 2012 and 2015, Chyzh was under EU sanctions for financially supporting the regime of Aleksandr Lukashenko, Belarus’ notoriously corrupt and authoritarian president.

    Anastasiades said he had no ownership of Imperium or the law firm bearing his name, as he transferred his shares to his daughter and former business partners before assuming the presidency. 

    His former business partner, Theophanis Th. Philippou, speaking for the owners of the law and corporate services firms, strongly denied any “unlawful or improper conduct.”

    Family Businesses

Chyzh’s latest legal battle in Belarus comes five years after he was arrested on fraud and money laundering charges, after reportedly falling out of favor with Lukashenko. Chyzh was convicted in 2023.

    In July 2021, five months after his arrest, the Minsk Economic Court declared his Triple Group of companies bankrupt due to debts to creditors. The court terminated the bankruptcy proceedings in 2024. It is unclear what the outcome of the bankruptcy process was.

    The bankruptcy included one of his key companies, TriplePharm, which is majority owned by the firms he wrote about in his letter to the Cypriot corporate registry, Welgro Services Limited and Profax Investments Limited. 

Chyzh filed a lawsuit in September 2025 against those two Cypriot companies, Welgro Services Limited and Profax Investments Limited, in a Belarus court in an effort to regain control of his assets. His three children have been called as third parties in the case, on the side of the firms being sued. The lawsuit is ongoing.

    Both those Cypriot companies were also serviced until the end of 2015 by two more firms owned by Anastasiades’ daughters and his former business partners, according to corporate records. 

    Imperium Services Ltd was secretary for the companies, while the Nicos Chr. Anastasiades and Partners law firm acted as legal advisers. 

Anastasiades owned the majority of Imperium Services, as well as the law firm that bears his name, until his presidency began in February 2013. Just before assuming office, he passed his shares to his daughter Elsa and his business partners, Philippou and Stathis Lemis. His other daughter, Ino, was added as a shareholder in 2015.

    The former president said he was “unaware and therefore unable to answer” questions emailed by CIReN, OCCRP’s member center in Cyprus. “In lieu of any other reply,” he attached a letter he sent to Cyprus’ parliament in 2021.

    “Since the transfer of the shares I have had absolutely no relationship or connection with the firm that bears my name,” Anastasiades wrote to parliament that year. “Nor does the composition of the share capital in any way justify the claim that it is the law firm ‘of the president’s daughters.’”

    He told CIReN that the law firm would provide more “detailed answers.”

    The firm’s partners include both Anastasiades’ daughters, as well as Lemis and Philippou, who are managing partners. All four of them are also shareholders of Imperium companies. 

    “We definitely deny all allegations of unlawful or improper conduct on the part of our firm,” Philippou wrote in an emailed response to questions, including how often sanctions lists were checked against companies being provided with corporate services.

    Anastasiades’ daughters did not directly respond to questions. Nor did Lemis. Chyzh did not respond to a request for comment. 

    ‘Significant Turnover’

    Chyzh’s August 2024 letter to the Cyprus Registrar of Companies came in the run-up to his legal battle in Belarus over control of the companies. 

    “I am sending you this letter in order to notify you of the situation that has developed,” Chyzh wrote in the letter, which was notarized in Moscow.

    Although they appeared on documents as the owners, Chyzh wrote that his children “have always performed only intermediary functions, acting on my behalf and under my instructions. I have always been and remain the real beneficiary of Welgro Services Limited and Profax Investments Limited.”

    Chyzh also pointed out that his children were born in 1988, 1990 and 1996. That would have made them about 20, 18 and 12 years old when the first of the companies was formed in 2008.

    “They did not have the financial or other capabilities to establish the companies,” he wrote.

While the outcome of the bankruptcy of Chyzh’s Belarusian companies is unclear, corporate documents from Belarus show that TriplePharm is active today, and is 90-percent-owned by the Cyprus companies.

    Chyzh noted that the Cyprus firms are “members of” companies in Belarus. “These Belarusian companies represent businesses with a long history and significant turnover,” he added. 

In 2011, a subsidiary of Profax called Bertament Limited received a $222 million loan from another Cypriot firm, Mabor Co Ltd. Mabor was described in annual financial reports as a “related party” to Bertament, meaning the two companies shared some degree of common ownership or control and suggesting that the same beneficiary behind Bertament may have held shares in, or even full control over, Mabor.

    Mabor was also owned, on paper, by Imperium Nominees. Philippou, the shareholder of Imperium Nominees and managing partner of Nicos Chr. Anastasiades and Partners law firm, signed the documents for Mabor’s funds transfers. It is not clear if the loan was repaid. 

    Philippou did not respond to a question about whether Chyzh owned Mabor.

Financial filings show that Mabor recorded $4.3 billion in turnover in 2011, from re-exporting Russian petroleum products from Belarus.

    Mabor was dissolved in July 2024, a month before Chyzh appealed to the Cyprus registry to recognize his ownership of Profax.

    Reached by phone, Philippou said he remembered the company name, but declined to comment on specifics. He did not respond to questions about Mabor in writing.

    In January 2012, Bertament Limited signed a contract for a 16-day stay for a group of Belarusians at a Russian ski resort. The $25,000 invoice for the trip was issued to Philippou, who did not reply to a question from reporters about it.

    The guestlist included Chyzh, two businessmen currently sanctioned by the EU, and several athletes and beauty queens, as well as Lukashenko’s personal priest. The holiday coincided with a trip Lukashenko made to the same resort, where he met with then-Russian president Dmitry Medvedev.

  • On day seven of Middle East war, no let-up in suffering

    The escalating war in the Middle East has heightened growing concerns about further civilian suffering and displacement in the region and far beyond, UN agencies said on Friday.
  • Weekly Roundup: March 6

    On Monday, Veena Dubal interviewed Aziza Ahmed about her new book, Risk and Resistance: How Feminists Transformed the Law and Science of AIDS. The conversation covers why women were initially excluded from receiving AIDS diagnoses and support, the feminist lawyers and activists who fought against these policies, the adverse public health consequences of carceral feminism, and much else! On Tuesday…


  • Pirate Streaming Portal ‘P-Stream’ Shuts Down Following ACE/MPA Pressure

    Pirate Streaming Portal ‘P-Stream’ Shuts Down Following ACE/MPA Pressure

    Last month, we reported on a new push from the Motion Picture Association and the ACE anti-piracy alliance, hoping to identify several pirate site operators.

    They obtained DMCA subpoenas at a California federal court, requiring Discord and Cloudflare to share all personal information they have on customers associated with domains such as hdfull.org, sflix.fi, and pstream.mov.

    MPA/ACE targets


    ACE has used these subpoenas as an intelligence-gathering tool for years. While these efforts are often fruitless, as many site owners use fake data, they occasionally have some effect. That’s also true for the latest round, which has motivated P-Stream to shut down permanently.

    P-Stream Shuts Down

    A few hours ago, P-stream’s operator, Pas, informed TorrentFreak that they decided to shut down the website effective immediately. This decision is a direct result of the DMCA subpoena and the added legal pressure, which previously resulted in the loss of the Discord server as well.

People who try to access the site’s official domain are now redirected to a shutdown message. Pas stresses that P-Stream never hosted any infringing material, but can’t afford to mount a legal defense if it comes to that.

    “Although P-Stream does NOT host, control, or guarantee any media or content, I can’t afford to fight that in court. So to be safe, P-Stream will no longer host a public instance,” the operator writes.

    P-Stream’s shutdown message


    While the operator regrets the shutdown, Pas also mentions that the project was life-consuming and took its toll, so the decision to throw in the towel could be a healthy one on that front too.

    Code Remains Public

P-Stream was launched in April 2024, when movie-web was shut down by legal pressure from Hollywood. It eventually grew into a popular project of its own, with an estimated ten million visits last month.

P-Stream, 24 hours ago


However, two years after its predecessor’s demise, history is repeating itself, perhaps in more ways than we now know.

    The P-Stream project was largely based on sudo-flix, which itself was a successor to the original movie-web code. Today, the (alleged) P-Stream code remains available as well, through publicly available GitHub repositories. Whether these repos are controlled by the site’s operator is unknown.

    As always, there will likely be people who try to keep the project going, and once they become popular enough, these projects will come on Hollywood’s radar, repeating the same process.

    From: TF, for the latest news on copyright battles, piracy and more.

  • ‘I’m still haunted that he died alone’: The last voices of the Covid inquiry

    Bereaved families have the final say as the Covid inquiry completes three years of public hearings.