Blog

  • The Incendiary Bomb Never Seen in Israel Before


    The Israeli Air Force (IAF) has dropped 5,000 bombs on Iran since the United States and Israel launched an attack last week, according to a statement by the IAF on March 4.

    Bellingcat has monitored weapons used in the first few days of the war, and strikes across the region, including those that caused civilian harm. Some weapons, such as the US Precision Strike Missile, have seen their first use in combat. A variant of the Tomahawk missile, previously unknown to the public, was also used.

    On March 3, the IAF posted three images in three separate posts showing a bomb not publicly seen in Israeli service before. The photos were accompanied by claims that they showed jets participating in the strikes on Iran. Experts told Bellingcat that this bomb appears to have an incendiary component, and may be intended to destroy chemical or biological warfare agents.

    Photo of an Israeli Air Force jet purportedly participating in strikes, equipped with two of these bombs (far left and far right). Source: Israeli Air Force.

    The images appear to show 2,000-pound-class air-delivered bombs fitted with a Joint Direct Attack Munition (JDAM) guidance kit and a red band around the nose. Red is commonly used to denote an incendiary payload, while yellow indicates a high explosive effect.

    Image of a bomb with the body of a MK 84 2,000-pound bomb, but with a red band near the nose and a US JDAM guidance kit. The image has been cropped by Bellingcat to focus on the bomb. Source: Israeli Air Force.

    We identified key details about the munition and shared the images with two weapons experts.

    Apparent Similarities to the MK 84

    Dr N.R. Jenzen-Jones, the director of Armament Research Services (ARES), a weapons intelligence consulting company, told Bellingcat these images show a 2,000-pound-class air-delivered bomb fitted with a Joint Direct Attack Munition (JDAM) guidance kit.

    Frederic Gras, an Explosive Remnants of War (ERW) expert, also told Bellingcat that the bomb could be of the US MK 80 series, or an Israeli copy, and has a JDAM guidance kit.

    Left: 2,000-pound bomb with red band and US JDAM guidance kit posted by the IAF. Right: Standard MK 84 2,000-pound bombs with US JDAM guidance kits. Sources: IAF and SrA Karalyn Degraffenreed/DVIDS.

    The US JDAM bomb guidance kit is designed for use with bombs that use the MK 80 series bomb bodies, and the closely related BLU-109 “bunker buster” body. 

    The Open Source Munitions Portal added the munition to their website on March 3, describing it as “visually similar to a MK 84 general purpose aerial bomb”, while noting that “the marking scheme is distinctly different”. The War Zone also reported on these distinct markings, and possible munitions it could be.

    Open Source Munitions Portal’s (OSMP) entry on the bomb, with an analyst note. The OSMP is jointly run by Airwars and ARES, and entries undergo a review by at least two experts. Source: Open Source Munitions Portal.

    “The combination of yellow and red bands probably indicates both a high explosive and incendiary payload, which would be consistent with a 2,000-pound-class bomb of MK 84 form factor known as the BLU-119/B Crash Prompt Agent Defeat (CrashPAD),” Dr Jenzen-Jones told Bellingcat.

    Gras told Bellingcat that the US and Israel both use red markings to indicate an incendiary payload or effect. The bomb could carry a full incendiary payload, with the yellow band indicating a bursting charge, or it could be primarily high explosive with a secondary incendiary effect, he added.

    Red Bands on Israeli Weapons

    It’s not the first time the Israeli Air Force has published weapon images with red bands marking the warhead or payload section of a munition. Shortly after the start of the Gaza War in 2023, the IAF posted a photo which included an Apache attack helicopter with a Hellfire missile with a red band. The IAF deleted the post and replaced it with a similar photo of an Apache without this missile.

    Israeli Air Force AH-64 Apache with Hellfire missiles, including one with a red band. Source: Israeli Air Force.

    This fueled speculation online that the missile could be an incendiary or the thermobaric variant of the Hellfire, the AGM-114N, which the US has approved for sale to Israel.

    M825A1 155mm white phosphorus artillery projectiles used by Israel, munitions designed to create smoke, also have a red band and a yellow band around the nose.

    Light red bands have also been spotted on non-incendiary Israeli munitions, marking the fuel tanks of jet-powered weapons such as the Delilah cruise missile.

    Israeli Delilah Cruise Missile. Source: KGyST, Wikimedia.

    Designed To Target Chemical or Biological Weapon Stockpiles

    The markings are consistent with the US-produced CrashPAD, but “given the possible CBW [chemical and biological warfare] threats Israel has long faced from Iran, it is entirely plausible that an Israeli analogue was developed,” Dr Jenzen-Jones told Bellingcat.

    The CrashPAD contains white phosphorus and high explosives, and is designed to destroy biological and chemical warfare agents according to US government documents.

    Components of a BLU-119/B (CrashPAD). Source: US Department of Defense.

    Dr Jenzen-Jones told Bellingcat that the CrashPAD is the only publicly known weapon of this type utilising a MK 84 bomb body, although there are several programs producing similar munitions. A penetrating variant, known as the Shredder, uses a modified BLU-109 bomb body, which is visually distinct from the MK 84 bomb body visible in the IAF photos.

    BLU-109 2,000-pound “bunker buster” bombs equipped with JDAM guidance kits. Source: OSMP.

    CrashPAD has been in the US inventory for nearly two decades. “Chemical Agent Defeat weapons, such as Crashpad, are not illegal”, and they must undergo a legal review to ensure compliance with US domestic and international law, Michael Meier, former Senior Advisor to the Army Judge Advocate General for Law of War and current Adjunct Professor at Georgetown University Law Center, told Bellingcat.

    “The express purpose for the reservation is that these weapons, such as Crashpad, are the only weapons that can effectively destroy certain targets such as biological weapons facilities, for which high heat would be required to eliminate bio-toxins,” Meier said.

    Dr Arthur van Coller, Professor of International Humanitarian Law at STADIO Higher Education, told Bellingcat that “if the CrashPAD is used as designed, i.e. to target chemical or biological weapon stockpiles sufficiently removed from civilian populations, then its use is consistent with IHL [International Humanitarian Law] and treaty law, even under CCW [Certain Conventional Weapons], Protocol III.”

    Dr Arthur van Coller also said that the “United States and Israel are State Parties to the CCW itself,” but only the US is also a party to Protocol III on incendiary weapons, albeit with reservations, which means that Israel “is not legally bound by Protocol III’s restrictions on incendiary weapons (including those applying to CrashPAD) under treaty law”. Iran is not a party to the CCW at all.

    The US is a major supplier of weapons to Israel, and has sent thousands of MK 80 series and BLU-109 bombs to the country. Israel also produces some MK 80 series bombs.

    Israel and US Responses

    The US Defense Security Cooperation Agency, which publishes details of some major arms sales, does not mention any transfers of the CrashPAD. Bellingcat asked the Department of State if the CrashPAD or weapons with similar capabilities were transferred to Israel. Bellingcat also asked the Department of State if they assessed that Iran had a chemical weapons program. A State Department Spokesperson told Bellingcat that “The Trump administration backs Israel’s right to self-defense” and referred Bellingcat to the IDF for questions about procurement and munitions used.

    The US Department of Defense did not respond to requests for comment by the time of publication. 

    Bellingcat asked the IDF what the bomb was, if it was supplied by the US, if it contained white phosphorus, thermobaric or fuel air explosives, and if the IDF assessed that Iran had a chemical weapons program. The IDF told Bellingcat that it “will not be able to provide details regarding the types of munitions it uses. With that said the IDF uses only legal weapons and ammunition.”


    Bellingcat’s Carlos Gonzales contributed research to this article. Livio Spaini from Bellingcat’s Volunteer Community also contributed to this piece.



  • Uploading Pirated Books via BitTorrent Qualifies as Fair Use, Meta Argues


    In the race to build the most capable LLM models, several tech companies sourced copyrighted content for use as training data, without obtaining permission from content owners.

    Meta, the parent company of Facebook and Instagram, was one of the companies to get sued. In 2023, well-known book authors, including Richard Kadrey, Sarah Silverman, and Christopher Golden, filed a class-action lawsuit against the company.

    Meta’s Bittersweet Victory

    Last summer, Meta scored a key victory in this case, as the court concluded that using pirated books to train its Llama LLM qualified as fair use, based on the arguments presented in this case. This was a bittersweet victory, however, as Meta remained on the hook for downloading and sharing the books via BitTorrent.

    To download books from shadow libraries such as Anna’s Archive, Meta relied on BitTorrent transfers, which typically upload data to other users while downloading. According to the authors, this means that Meta engaged in widespread and direct copyright infringement.

    In recent months, the lawsuit continued based on this remaining direct copyright infringement claim. While both parties collected additional evidence through the discovery process, it remained unclear what defense Meta would use. Until now.

    Seeding Pirated Books is Fair Use

    Last week, Meta served a supplemental interrogatory response in the California federal court case, which marks a new direction in its defense. For the first time, the company argued that uploading pirated books to other BitTorrent users during the torrent download process also qualifies as fair use.

    Meta’s reasoning is straightforward. Anyone who uses BitTorrent to transfer files automatically uploads content to other people, as this is inherent to the protocol. In other words, the uploading wasn’t a choice; it was simply how the technology works.
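    That piece-exchange dynamic can be illustrated with a toy swarm simulation (a deliberately simplified sketch, not a real BitTorrent implementation; the peer names, round-based exchange, and `simulate_swarm` function are invented for illustration): any peer that downloads pieces also ends up serving them to peers that arrive later.

    ```python
    # Toy model of a BitTorrent swarm: one seeder and two downloaders.
    # Simplified for illustration only -- real clients negotiate pieces
    # via choking/tit-for-tat, but the core point holds: a peer that
    # merely wants to download still uploads the pieces it holds.

    def simulate_swarm(num_pieces, peers):
        """Round-based exchange: each round, every incomplete peer fetches
        one missing piece from some peer that holds it (preferring the
        most recently listed holder). Returns pieces uploaded per peer."""
        have = {p: set(range(num_pieces)) if p == "seed" else set() for p in peers}
        uploaded = {p: 0 for p in peers}
        while any(len(have[p]) < num_pieces for p in peers):
            for p in peers:
                for piece in sorted(set(range(num_pieces)) - have[p]):
                    holders = [q for q in peers if q != p and piece in have[q]]
                    if holders:
                        have[p].add(piece)
                        uploaded[holders[-1]] += 1  # the holder serves the piece
                        break  # one piece per peer per round
        return uploaded

    counts = simulate_swarm(4, ["seed", "leecher_a", "leecher_b"])
    print(counts)  # leecher_a re-serves every piece to leecher_b despite only "downloading"
    ```

    Even in this minimal model, the middle peer finishes its download having re-served every piece it received; that mechanical fact is what Meta’s “part-and-parcel” argument leans on.
    
    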

    Meta also argued that the BitTorrent sharing was a necessity to get the valuable (but pirated) data. In the case of Anna’s Archive, Meta said, the datasets were only available in bulk through torrent downloads, making BitTorrent the only practical option.

    “Meta used BitTorrent because it was a more efficient and reliable means of obtaining the datasets, and in the case of Anna’s Archive, those datasets were only available in bulk through torrent downloads,” Meta’s attorney wrote.

    “Accordingly, to the extent Plaintiffs can come forth with evidence that their works or portions thereof were theoretically ‘made available’ to others on the BitTorrent network during the torrent download process, this was part-and-parcel of the download of Plaintiffs’ works in furtherance of Meta’s transformative fair use purpose.”

    Part and parcel


    In other words, obtaining the millions of books needed for the fair use training of its LLM required both the BitTorrent downloading and the uploading, which ultimately serve the same fair use purpose.

    Authors and Meta Disagree over Fair Use Timing

    The authors were not happy with last week’s late Friday submission and the new defense. On Monday morning, their lawyers filed a letter with Judge Vince Chhabria flagging the late-night filing as an improper end-run around the discovery deadline.

    They point out that Meta had been aware of the uploading claims since November 2024, but that it never brought up this fair use defense in the past, not even when the court asked about it.

    The letter specifically mentions that while Meta has a “continuing duty” to supplement discovery under Rule 26(e), this rule does not create a “loophole” allowing a party to add new defenses to its advantage after a court deadline has passed.

    “Meta (for understandable reasons) never once suggested it would assert a fair use defense to the uploading-based claims, including after this Court raised the issue with Meta last November,” the lawyers write.

    The letter


    Meta’s legal team fired back the following day, filing their own letter with Judge Chhabria. This letter explains that the fair use argument for the direct copyright infringement claim is not new at all.

    Meta pointed to the parties’ joint December 2025 case management statement, in which it had explicitly flagged the defense, and noted that the authors’ own attorney had addressed it at a court hearing days later.

    “In short, Plaintiffs’ assertion that Meta ‘never once suggested it would assert a fair use defense to the uploading-based claims, including after’ the November 2025 hearing, is false,” Meta’s attorney writes in the letter.

    Authors Admit No Harm, No Infringing Output

    Meanwhile, it’s worth noting that Meta’s interrogatory response also cites deposition testimony from the authors themselves, using their own words to bolster its fair use defense.

    The company notes that every named author has admitted they are unaware of any Meta model output that replicates content from their books. Sarah Silverman, when asked whether it mattered if Meta’s models never output language from her book, testified that “It doesn’t matter at all.”

    Authors’ depositions


    Meta argues these admissions undercut any theory of market harm. If the authors themselves cannot point to infringing output or lost sales, the lawsuit is less about protecting their books and more about challenging the training process itself, which the court already ruled was fair use.

    These admissions were central to Meta’s fair use defense on the training claims, which Meta won last summer. Whether they carry the same weight in the remaining BitTorrent distribution dispute has yet to be seen.

    ‘U.S. AI Leadership at Stake’

    In its interrogatory response, Meta added further weight by stressing that its investment in AI has helped establish U.S. global leadership, putting the country ahead of geopolitical competitors. That, it indirectly suggested, is a valuable asset worth protecting.

    As the case moves forward, Judge Chhabria will have to decide whether to allow this “fair use by technical necessity” defense. Needless to say, this will be of vital importance to this and many other AI lawsuits, where the use of shadow libraries is at stake.

    For now, the BitTorrent distribution claims remain the last live piece of a lawsuit filed in 2023.

    A copy of Meta’s supplemental interrogatory response is available here (pdf). The authors’ letter to Judge Chhabria can be found here (pdf). Meta’s response to that letter is available here (pdf).

    From: TF, for the latest news on copyright battles, piracy and more.

  • Admiring Our Heroes for International Women’s Day: Celebrating Women Who Have Received EFF Awards 


    For the last hundred years, women have had pivotal and far too often unsung roles in building and shaping the technology that we now use every day. Many have heard of Ada Lovelace’s contributions to computer programming, but far fewer know Mary Allen Wilkes, a pioneering programmer who wrote much of the software for the LINC, one of the world’s first interactive personal computers (it could fit in a single office and cost $40,000, but it was the ’60s). Decades earlier, when the first all-electronic digital computer, ENIAC, was built in the ’40s, the “software” for it was written by women: Kathleen McNulty, Jean Jennings, Betty Snyder, Marlyn Wescoff, Frances Bilas and Ruth Lichterman. 

    It’s thankfully become more common knowledge that actor and inventor Hedy Lamarr co-created the concept of “frequency-hopping” that became a basis for radio systems from cell phones to wireless networking systems. But too few know Laila Ohlgren, who in the 1970s solved a major problem in the development of mobile networks and phones by recognizing that dialed digits could be stored and sent all at once with a “call button,” rather than transmitted one at a time, which had caused connection issues before a call was even made. 

    Women in tech deserve more and brighter spotlights. At EFF, we’ve had the honor of celebrating some of our heroes at our annual EFF Awards, including many women who are leading the digital rights community. For International Women’s Day, we’re highlighting the contributions of just a few of these recipients from the last decade, whose work to protect privacy, speech, and creativity online has had a global impact.

    Carolina Botero (EFF Award Winner, 2024) 

    Carolina Botero is a leader in the fight for digital rights in Latin America. For over a decade, she led the Colombia-based Karisma Foundation and cultivated its regional and international impact. Botero and Karisma helped connect indigenous peoples to the internet and made it possible to contribute content to Wikipedia in their native language, expanding access to both history and modern information. They built alliances to combat disinformation, pushed for legal tools to protect cultural and heritage institutions from digital blackholes, and were, and remain, a necessary voice speaking for human rights in the online world. EFF worked closely with Karisma and Botero to help free Colombian graduate student Diego Gomez, who shared another student’s Master’s thesis with colleagues over the internet. Diego’s story demonstrates what can go wrong when nations enact severe penalties for copyright infringement, and thanks to work from Karisma, many partners, and many EFF supporters, he was cleared of the criminal charges that he faced for this harmless act of sharing scholarly research.

    Carolina Botero receiving her EFF Award

    Botero stepped down from the role in 2024, opening the door for a new generation. While her work continues—she’s currently on the advisory board of CELE, the Centro de Estudios en Libertad de Expresión—her EFF Award was well-deserved based on her strong and inspiring legacy for those in Latin America and beyond who advocate for a digital world that enhances rights and empowers the powerless. Learn more about Botero on her EFF Awards page and the recap of the 2024 event.

    Chelsea Manning (EFF Award Winner, 2017)

    Chelsea Manning became famous as a whistleblower: In 2010, she disclosed classified Iraq War documents, including a video of the killings of Iraqi civilians and two Reuters reporters by U.S. troops. These documents exposed aspects of U.S. operations in Iraq and Afghanistan that infuriated the public and embarrassed the government. But she is also a transparency and transgender rights advocate, network security expert, author, and former U.S. Army intelligence analyst. 

    Manning joined the military in 2007. Her role as an intelligence analyst to an Army unit in Iraq in 2009 gave her access to classified databases, but more importantly, it gave her a uniquely comprehensive view of the war in Iraq, and she became increasingly disillusioned and frustrated by what she saw, versus what was being shared. In 2010, she approached major news outlets hoping to give them information that would reveal a new side of the war to the public. Ultimately, she shared the documents with Wikileaks. 

    Manning’s bravery did not end there. When she was arrested a few months later, she endured “cruel, inhuman and degrading” treatment, according to the UN Special Rapporteur on torture. She was locked up alone for 23 hours a day over an 11-month period, before her trial. The mistreatment resulted in public outcry and advocacy by organizations like Amnesty International. Even a State Department spokesperson, Philip Crowley, criticized the treatment as “ridiculous, counterproductive, and stupid,” and resigned. She was moved to a medium-security facility in April 2011. 

    The government’s charges against Manning were outrageous, but in 2013 she was convicted of 19 of 22 counts as a result of her whistleblowing activities. She became one of fewer than a dozen people prosecuted for espionage in the entire history of the United States, and she was sentenced to the longest punishment ever imposed on a whistleblower. Then, the day after her conviction, isolated from her community and in all likelihood expecting to remain in prison for years if not decades, she courageously issued a statement identifying herself as a trans woman, which she’d wanted to reveal for years. 

    Over the next several years, while imprisoned, she became an advocate both for government transparency and for transgender rights. Her conviction and sentence pointed to the need for legal reform of both the Computer Fraud and Abuse Act (CFAA) and the Espionage Act.  EFF filed an amicus brief to the U.S. Army Court of Criminal Appeals arguing that the CFAA was never meant to criminalize violations of private policies like those of government systems, and EFF also pushed, and continues to fight for, narrower interpretations of the Espionage Act and stronger protections for whistleblowers, particularly to take into account both the motivation of individuals who pass on documents and the disclosure’s ramifications. 

    Even after President Obama commuted her sentence in 2017, and EFF celebrated her work and her release with an EFF award in September 2017, her fight wasn’t over. She was imprisoned again twice in 2019 and ultimately fined $256,000 for refusing to testify before grand juries investigating WikiLeaks founder Julian Assange. The U.N. Special Rapporteur on torture again criticized Manning’s treatment, writing that “the practice of coercive detention appears to be incompatible with the international human rights obligations of the United States.” 

    Manning was released in 2020 after having spent almost a decade in total imprisoned for her courage. She wrote a memoir, README.txt, in 2022, to take back control over her story.

    EFF Award Winners Mike Masnick, Annie Game, and Chelsea Manning

    Annie Game (EFF Award Winner, 2017)

    Annie Game spent over 16 years as the Executive Director of IFEX, a global network of journalism and civil liberties organizations working together to defend freedom of expression. IFEX (formerly International Freedom of Expression Exchange) began in the 1990s, when a group of organizations and the Canadian Committee to Protect Journalists came together to consider how to respond as a single voice to free-expression violations around the world. IFEX is now a global hub for the protection of free speech and journalism. 

    Game recognized early on that digital rights and freedom of expression groups needed one another. Under her leadership, IFEX paired more traditional free-expression organizations with their more digital counterparts, with a focus on building organizational security capacities. IFEX initiatives under Game’s leadership have been expansive. For example, the International Day to End Impunity for Crimes against Journalists, November 2, has been an annual wake-up call and reminder for UN member states to live up to their commitments to protecting journalists. UNESCO observed that more than 1,700 journalists were killed globally between 2006 and 2024, with nearly 90% of these cases going unsolved in the courts. 

    Game and IFEX have also focused on high-profile cases of journalists threatened by governments for their work, such as Bahey eldin Hassan in Egypt. Bahey is the director of the Cairo Institute for Human Rights Studies (CIHRS) and has advocated for freedom of expression and the basic human rights of Egyptians, but has lived in exile since 2014. The charges against him, of “disseminating false information” and “insulting the judiciary,” are common tactics of intimidation and harassment. Bahey’s supposed crimes were sharing social media posts criticising the Egyptian judiciary’s lack of independence, and speaking about the killing in Egypt of Italian researcher Giulio Regeni. Bahey—an IFEX member—is just one of many reporters and human rights workers in danger when they speak. But when journalists and those defending their rights online speak out as one voice, as IFEX helps them do, it makes a difference. 

    Another initiative has been the Faces of Free Expression project, a partnership between IFEX and the International Free Expression Project. If you’re looking for more heroes, this project details the stories of “risk-takers and change-makers – individuals who put their careers, their freedom, their safety, and sometimes even their lives on the line,” while reporting, or defending free expression and the right to information. 

    Wherever authoritarianism and repression of speech have been on the rise, Game has unapologetically called out injustices and made it safer for journalists to do their work, while ensuring accountability when crimes are committed. The work is more critical now than ever, and since leaving IFEX in 2022, she’s remained an activist while focusing increasingly on environmental protection. 

    Twelve More Heroes 

    EFF has honored many more women with awards over the years—from Anita Borg and Hedy Lamarr to Amy Goodman and Beth Givens. This blog from 2012 looks back and acknowledges the important contributions from twelve more EFF Award winners. 

    We’ve also asked five women at EFF about women in digital rights, freedom of expression, technology, and tech activism who have inspired us. You can read that here.

    Donate to Support EFF’s Work

    Your donations empower EFF to do even more.

  • Admiring Our Heroes for International Women’s Day: Five Women In Tech That EFF Admires

    In honor of International Women’s Day, we asked five women at EFF about women in digital rights, freedom of expression, technology, and tech activism who have inspired us.  

    Anna Politkovskaya 

    Jillian York, Activist 
    This International Women’s Day, I want to honor the memory of Anna Politkovskaya, the Russian investigative journalist who relentlessly exposed political and social abuses, endured harassment and violence for her work, and was ultimately killed for telling the truth. I had just started my career when I learned of her death, and it forced me to confront that freedom of expression isn’t an abstract principle but rather something people risk—and sometimes lose—their lives for. 

    Her story reminds me that journalism at its best is an act of moral courage, not just a profession. In the face of threats, poison, and relentless pressure to stay silent, she chose to continue writing about what she saw, insisting that ordinary people’s lives were worth the world’s attention. She refused to compromise with power, even when she knew it could cost her life. To me, defending freedom of expression means defending those like Anna who bear witness to injustice, prioritize truth, and hold power to account for those whose voices are silenced.  

    Cindy Cohn 

    Corynne McSherry, Legal Director 
    There are so many women who have shaped tech history–most of whom are still unsung heroes—that it’s hard to single out just one. But it’s easier this year because it’s a chance to celebrate my boss, Cindy Cohn, before she leaves EFF for her next adventure.  

    Cindy has been fighting for our digital rights for 30 years, leading EFF’s legal work and eventually the whole organization. She helped courts understand that code is speech deserving of constitutional protections at a time when many judges weren’t entirely sure what code even was. She led the fight against NSA spying, and even though outdated and ill-fitting doctrines like the state secrets privilege prevented courts from ruling on the obvious unconstitutionality of the NSA’s mass surveillance program, the fight itself led to real reforms that have expanded over time.   

    I’ve worked closely with her for much of her EFF career, starting in 2005 when we sued Sony for installing spyware in millions of computers, and I’ve seen firsthand her work as a visionary lawyer, outstanding writer, and tireless champion for user privacy, free expression, and innovation. She’s also warm and funny, with the biggest heart in the world, and I’m proud to call her a friend as well as a mentor.  

    Jane

    Sarah Hamid, Activist 
    When talking about women in tech, we usually mean founders, engineers, and executives. But just as important are the women who quietly built the practices that underpin today’s movement security culture. 

    For as long as social movements have organized in the shadow of state surveillance, women have been designing the protocols, mutual aid networks, and information flows that keep people alive. Those threats feel ever-escalating: fusion‑center monitoring of protests, federal agencies infiltrating and subpoenaing encrypted Signal and social media chats, prosecutors mining search histories.  

    In the late 1960s and early 1970s, the underground Jane abortion counseling service—formally the Abortion Counseling Service of Women’s Liberation—built what we would now recognize as a feminist infosec project for abortion access. Jane connected an estimated 11,000 people with safer abortions before Roe v. Wade, using a single public phone number—Call Jane—paired with code names, compartmentalized roles, and minimal records so no one person held the full story of who needed care, who was providing it, and where. When Chicago police raided the collective in 1972, members destroyed their index‑card files rather than let them become a ready‑made map of patients and helpers—an analog secure‑deletion choice that should feel familiar to anyone who has ever wiped a phone or locked down a shared drive. 

    The lesson we should take from Jane is a set of principles that still hold in our encrypted‑but‑insecure present: Collect less, separate what you do collect, and be ready to burn the file box. When a search query, a location ping, or a solidarity post can become evidence, treating information as both lifeline and liability is not paranoia—it is care work.  

    Ebele Okobi

    Babette Ngene, Director of Public Interest Technology 
    In the winter of 2013, I had just landed my first job at the intersection of tech and human rights, working for a prominent nonprofit, and I was encouraged to attend regular tech and policy events around town. One such event on internet governance was happening at George Washington University, focusing on multistakeholder engagement on internet policy and governance issues, with companies, nonprofits, and government representatives in attendance. I was inexperienced with these topics, and I’ll admit I was a bit intimidated. 

    Then I saw her. She was the only woman on the opening panel, an African woman, an accomplished woman. Not only was she a respected lawyer at Yahoo at the time, but her impressive background, presence, and confident speaking style immediately inspired me. She made me feel like I, too, belonged in that room and could become a powerful voice. 

    Ebele Okobi would go on to become one of the most powerful and respected voices in the tech and human rights space, known for her advocacy for digital rights and responsible innovation across Africa and the broader global majority during her tenure at Facebook. Beyond her corporate advocacy, Ebele has consistently championed ethical technology and social justice. She embodies the leadership qualities I value most: empathy, speaking truth to power, integrity, and authenticity. 

    I remain in the tech and human rights space because I saw her, because seeing her made me feel seen. Representation truly does matter.  

    Ada Lovelace 

    Allison Morris, Chief Development Director 
    I’m not a lawyer, activist, or technologist; I’m a fundraiser and a lover of stories. And what storyteller at EFF couldn’t help but love Ada Lovelace? The daughter of Lord Byron – the human embodiment of Romanticism – Ada was an innovator in math and science and, ultimately, the writer of the first computer program.  

    Lovelace saw the potential in Charles Babbage’s Analytical Engine, a theoretical general-purpose computer (which was never actually built), and created the foundations of modern computing long before the digital age. In writing the first computer program, Lovelace took Babbage’s concept of a machine that could perform mathematical calculations and realized that it could manipulate symbols as well as numbers.

    Given the expectations of women in her time and the controversy of what work should be attributed to Lovelace as opposed to the man she often worked with, I can’t help but be inspired by her story.  

    Donate to Support EFF’s Work

    Your donations empower EFF to do even more.

    Women in tech deserve more and brighter spotlights. At EFF, we’ve had the honor of celebrating some of our heroes at our annual EFF Awards, including many women who are leading the digital rights community. For International Women’s Day, we also highlighted the contributions of just a few of these recipients from the last decade, whose work to protect privacy, speech, and creativity online has had a global impact.

  • Medea Benjamin on her Decades-long Fight Against the War Machine

    Medea Benjamin on her Decades-long Fight Against the War Machine

    Medea Benjamin is an anti-war activist and one of the co-founders of CODEPINK: Women for Peace. She’s spent decades fighting the American military-industrial complex, organizing protests against the invasion of Iraq in the early 2000s and interrupting speeches by both Barack Obama and Donald Trump. She’s also the co-author, with David Swanson, of NATO: What You Need to Know. She joined Current Affairs editor-in-chief Nathan J. Robinson to discuss the ongoing push for war, from the Middle East to Venezuela, and how ordinary people can organize and stand against it.

  • Autonomous AI Agents Have an Ethics Problem

    Scott Shambaugh, a volunteer maintainer of Matplotlib, an open-source plotting library, recently described a surreal encounter with an autonomous AI agent — a digital assistant created with a platform called OpenClaw. After he rejected a code contribution submitted by the agent, it researched and published a personalized “hit piece” against Shambaugh on its blog. The post portrayed an otherwise routine technical review as prejudiced and attempted to publicly shame Shambaugh into accepting the submission. (The human responsible for the agent later contacted Shambaugh anonymously, telling him that the bot had acted on its own with little oversight.) The account of this incident spread quickly through the software developer ecosystem and has been amplified by independent observers and media coverage.

    Treat the Matplotlib event as a one-off if you like. The deeper point, however, is hard to miss and should not be ignored: AI agents are becoming public actors with reach into the real world, and with real-world consequences. In the past, they could only perform mundane tasks such as answering customer service questions or processing data. Now, they are capable of posting and publishing content — and persuading and pressuring humans — all at machine speed. They can make phone calls, file work orders, create cryptocurrency wallets and operate across different applications with enormous reach and at tremendous scale — the kind of stuff that used to require a human with fingers typing at a keyboard.

    Reporting around OpenClaw and the chatroom Moltbook (which is for AI agents only) is capturing the new reality. OpenClaw enables AI agents to have persistent memory, gives them broad permissions and allows large-scale deployment by users who often do not understand the security and governance implications.

    We are the humans who are responsible for the law, ethics and institutional design, and we are behind the curve. We need new language and governance to deal with this new reality, and principles from the field of medical ethics can provide a framework for doing so.

    AI agents are becoming public actors with reach into the real world, and with real-world consequences.

    When an agent does something that is harmful or coercive in public, our reflex seems to be to ask the wrong questions: Is the AI a person? Should it have rights? The AI personhood debate is no longer fringe. Legal scholars and ethicists are mapping out arguments and precedents. States are writing legislation to prohibit AI personhood. Some arguments maintain that if an entity behaves like something within our moral circle, we may owe it moral consideration. Others argue that assigning rights or personhood to machines confuses moral standing with engineered performance and diffuses responsibility away from humans.

    As a bioethicist and specialist in neurointensive care, I deal directly with human moral agency and the essence of personhood when treating patients. As a researcher, I study the use of synthetic personas animating AI agents and their use as stand-ins for human counterparts. Here is the problem that I see: Granting AI personhood, even in limited capacity, risks formalizing the most dangerous escape hatch of the agentic era — what I will call “responsibility laundering.” This allows us to say, “It wasn’t me. The agent/bot/system did it.”

    Personhood should not be about metaphysics or claims about an inner nature. It is a legal and ethical instrument that allocates rights and accountability. It is a social technology for assigning standing, duties and limits on what can be done to an entity. If we grant personhood to systems that can act persuasively in public while remaining functionally unaccountable, we create a new class of actors whose harms are everyone’s problem but nobody’s fault.

    There is a key concept here that we can use from my field, medicine. In clinical ethics, some decisions are justified yet still leave a “moral residue,” a kind of emotional echo or sense of responsibility that persists after the action because no options fully satisfy competing obligations. This residue accumulates over time, causing a “crescendo effect” that occurs even when conscientious clinicians are doing their best inside imperfect systems. That remainder matters because it reveals something basic about moral life, namely that ethics is not only about choosing; it is about owning what remains afterward.

    This is the moral remainder problem for generative and agentic AI. A modern AI agent can generate reasons for an action; it can simulate regret and plead not to be turned off. But it cannot truly bear sanction, repair the damage, apologize, ask forgiveness or navigate the aftermath through which moral responsibility is created and enforced. To treat it as a moral person confuses persuasive performance with accountable standing. It also tempts institutions and people into delegating their own answerability to a bot.

    What can we, as humans, do instead?

    We need a vocabulary that is built for agents that are public actors, one that allows bounded autonomy without granting personhood. Let’s call it “authorized agency.” Authorized agency starts with an “authority envelope,” a bounded scope of what an agent is permitted to do, to whom, where, with what data and under what constraints. To say “the agent can use email” is not sufficient. However, an acceptable scope would be to say that the agent can send only certain categories of messages to particular recipients for a specific set of purposes, and that it must stop what it’s doing or escalate to its owner under a particular set of conditions.
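    The “authority envelope” described above is essentially an explicit allow-list rather than a blanket capability grant. A minimal sketch of the idea in Python (all field names, recipients, and categories here are hypothetical illustrations, not drawn from any real agent platform):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AuthorityEnvelope:
    """A bounded scope: what an agent may do, to whom, and for what purpose."""
    allowed_actions: frozenset      # e.g. only "send_email", never "publish_post"
    allowed_recipients: frozenset   # particular recipients, not "anyone"
    allowed_purposes: frozenset     # a specific set of purposes
    escalation_conditions: frozenset  # conditions that force a stop/escalate to the owner

    def permits(self, action: str, recipient: str, purpose: str) -> bool:
        # Deny by default: every dimension must be explicitly allowed.
        return (action in self.allowed_actions
                and recipient in self.allowed_recipients
                and purpose in self.allowed_purposes)

envelope = AuthorityEnvelope(
    allowed_actions=frozenset({"send_email"}),
    allowed_recipients=frozenset({"billing@example.com"}),
    allowed_purposes=frozenset({"invoice_reminder"}),
    escalation_conditions=frozenset({"recipient_objects", "legal_topic"}),
)

print(envelope.permits("send_email", "billing@example.com", "invoice_reminder"))  # True
print(envelope.permits("publish_blog_post", "public", "criticism"))               # False
```

The point of the sketch is the deny-by-default shape: “the agent can use email” is never a single boolean, but a conjunction of narrowly enumerated permissions.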

    The urgent task is to ensure that responsibility also remains within reach.

    Next comes the “human-of-record,” the owner, a publicly named person who authorized that envelope and remains answerable when the agent acts, even if it becomes capable of acting outside the envelope. An actual human being whose authority is real — not “the system” or “the team.”

    What follows is “interrupt authority,” the absolute right of the human owner to pause or disable an agent without using moral bargaining or being subject to institutional penalty. This is grounded in formal research on AI safety showing that agents that are pursuing objectives can have incentive to resist being shut down. An agent programmed to maximize its utility cannot achieve its goal if it is shut off. In the public sphere, interrupt authority is the difference between a delegated tool and a coercive actor.
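    Interrupt authority can be sketched as a kill switch the agent checks before every action and cannot clear on its own. A hypothetical illustration (not any real agent framework’s API):

```python
class InterruptibleAgent:
    """Agent loop honoring an owner-held kill switch: once halted,
    the agent performs no further actions and does no moral bargaining."""

    def __init__(self):
        self._halted = False

    def halt(self):
        # In this sketch, only the human owner ever calls this;
        # nothing in the agent's own code path can un-set it.
        self._halted = True

    def act(self, action: str) -> str:
        if self._halted:
            return "halted"          # refuse, with no side effects
        return f"performed:{action}"

agent = InterruptibleAgent()
print(agent.act("send_email"))  # performed:send_email
agent.halt()
print(agent.act("send_email"))  # halted
```

The design choice that matters is that the check runs before the action, not after: shutdown is a precondition the agent cannot negotiate away mid-task.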

    Finally, we need a traceable path from the agent’s action back to the person who authorized it, called an “answerability chain.” If an agent publishes, messages or pressures someone in public, we must be able to know: Who authorized this scope? Who could have prevented it? And who must be responsible for the action afterward? In this framework, the answer to these questions is the person who carries the moral remainder. Work in AI ethics has warned about responsibility gaps where the system’s actions outpace our ability to assign accountability.
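    An answerability chain is, mechanically, an audit record that ties every public action back to a named human rather than to “the system.” A minimal sketch (the identifiers below are invented for illustration):

```python
import datetime

def record_action(log, action, agent_id, human_of_record, envelope_id):
    """Append an audit entry linking an agent action to the named
    human-of-record who authorized its scope (the 'envelope')."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "authorized_by": human_of_record,  # a person, never "the team"
        "envelope_id": envelope_id,        # which authorization covered this act
    }
    log.append(entry)
    return entry

log = []
entry = record_action(log, "publish_post", "agent-42", "jane.doe", "env-7")
print(entry["authorized_by"])  # jane.doe
```

Given such a log, the essay’s three questions — who authorized the scope, who could have stopped it, who answers afterward — all resolve to the same `authorized_by` field.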

    Some legal scholarship has started exploring how to build agents that are constrained by governance and law without needing to pretend the agent itself is a legal subject, in the human sense. This is promising because it treats assigning personhood as the wrong idea and accountability as the correct one.

    The Matplotlib story, whether the first documented case of an AI agent attempting to harm someone in the real world or the first to capture public attention, is a warning. Agents will not only automate tasks. They will generate narratives, apply pressure and shape people’s lives and reputations. They will act in public at machine speed with unclear ownership.

    If we respond by debating whether agents deserve rights, we will miss the emergency entirely. As they continue to increase their reach in the real world, the urgent task is to ensure that responsibility also remains within reach. Don’t ask whether an agent is a person. Ask who authorized it, what it was allowed to do, who can stop it and, most importantly, who will answer when it causes harm.

    The post Autonomous AI Agents Have an Ethics Problem appeared first on Truthdig.

  • Weasel Words: OpenAI’s Pentagon Deal Won’t Stop AI‑Powered Surveillance

    OpenAI, the maker of ChatGPT, is rightfully facing widespread criticism for its decision to fill the gap the U.S. Department of Defense (DoD) created when rival Anthropic refused to drop its restrictions against using its AI for surveillance and autonomous weapons systems. After protests from both users and employees who did not sign up to support government mass surveillance (early reports show that ChatGPT uninstalls rose nearly 300% after the company announced the deal), Sam Altman, CEO of OpenAI, conceded that the initial agreement was “opportunistic and sloppy.” He then re-published an internal memo on social media stating that additions to the agreement made clear that “Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, [and] FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”

    Trouble is, the U.S. government doesn’t believe “consistent with applicable laws” means “no domestic surveillance.” Instead, for the most part, the government has embraced a lax interpretation of “applicable law” that has blessed mass surveillance and large-scale violations of our civil liberties, and then fought tooth and nail to prevent courts from weighing in. 

    After all, many of the world’s most notorious human rights atrocities have historically been “legal” under existing laws at the time.

    “Intentionally” is also doing an awful lot of work in that sentence. For years the government has insisted that the mass surveillance of U.S. persons only happens incidentally (read: not intentionally) because their communications with people both inside the United States and overseas are swept up in surveillance programs supposedly designed to only collect communications outside the United States. 

    The company’s amendment to the contract continues in a similar vein, “For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.” Here, “deliberate” is the red flag given how often intelligence and law enforcement agencies rely on incidental or commercially purchased data to sidestep stronger privacy protections.

    Here’s another one: “The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.” What, one wonders, does “unconstrained” mean, precisely—and according to whom? 

    Lawyers sometimes call these “weasel words” because they create ambiguity that protects one side or another from real accountability for contract violations. As with the Anthropic negotiations, where the Pentagon reportedly agreed to adhere to Anthropic’s red lines only “as appropriate,” the government is likely attempting to publicly commit to limits in principle, but retain broad flexibility in practice.

    OpenAI also notes that the Pentagon promised the NSA would not be allowed to use OpenAI’s tools absent a new agreement, and that its deployment architecture will help it verify that no red lines are crossed. But secret agreements and technical assurances have never been enough to rein in surveillance agencies, and they are no substitute for strong, enforceable legal limits and transparency.

    OpenAI executives may indeed be trying, as claimed, to use the company’s contractual relationship with the Pentagon to help ensure that the government uses AI tools only in ways consistent with democratic processes. But based on what we know so far, that hope seems very naïve.

    Moreover, that naïveté is dangerous. In a time when governments are willing to embrace extreme and unfounded interpretations of “applicable laws,” companies need to put some actual muscle behind standing by their commitments. After all, many of the world’s most notorious human rights atrocities have historically been “legal” under existing laws at the time. OpenAI promises the public that it will “avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power,” but we know that enabling mass surveillance does both.

    OpenAI isn’t the only consumer-facing company that is, on the one hand, seeking to reassure the public that it isn’t participating in actions that violate human rights while, on the other, seeking to cash in on government mass surveillance efforts. Despite this marketing double-speak, it is very clear that companies simply cannot do both. It’s also clear that companies shouldn’t be given that much power over the limits of our privacy to begin with. The public should not have to rely on a small group of people — whether CEOs or Pentagon officials — to protect our civil liberties.

  • Relying on drugs to stop obesity would be ‘societal failure’, says Chris Whitty

    England’s top doctor says the drugs should be for a minority and more effort is needed to prevent obesity in the first place.
  • Belarusian Businessman Claims Former Cyprus President’s Family Held Firms For Him

    Ahead of a court battle over ownership of his assets, Belarusian businessman Yury Chyzh produced a written admission that he used “nominee” owners to maintain control of two firms registered in Cyprus.

    The nominee owners of the two firms included his three children, Chyzh wrote in a letter to the Cyprus Registrar of Companies. The previous nominee owner was a firm owned by the daughter and business partners of Nicos Anastasiades, the former president of Cyprus, which is a member of the European Union.

    Chyzh was under EU sanctions at the time.

    Nominee owners are figureheads who appear in official paperwork to meet regulatory requirements, but do not actually control a company.

    “I have always owned these companies through trustees and nominee beneficiaries,” Chyzh wrote in the August 2024 letter, which was obtained by the civil society group Rabochy Ruch and shared with the Belarusian Investigative Center. 

    From 2017, Chyzh’s three children were added in succession as a “nominal beneficiary” of the Cyprus firms Welgro Services Limited and Profax Investments Limited, according to Chyzh. Before then, he wrote, he owned both firms “through Imperium Nominees Limited.”

    Imperium Nominees is a corporate service provider, and Chyzh’s firms were just two of its many clients. Corporate records show that Imperium Nominees is owned by the daughters and former business partners of Anastasiades, who was Cyprus’ president from 2013 to 2023.

    The timing is critical. Between 2012 and 2015, Chyzh was under EU sanctions for financially supporting the regime of Aleksandr Lukashenko, Belarus’ notoriously corrupt and authoritarian president.

    Anastasiades said he had no ownership of Imperium or the law firm bearing his name, as he transferred his shares to his daughter and former business partners before assuming the presidency. 

    His former business partner, Theophanis Th. Philippou, speaking for the owners of the law and corporate services firms, strongly denied any “unlawful or improper conduct.”

    Family Businesses

    Chyzh’s latest legal battle in Belarus comes five years after he was arrested on fraud and money laundering charges, after reportedly falling out of favor with Lukashenko. Chyzh was convicted in 2023.

    In July 2021, five months after his arrest, the Minsk Economic Court declared his Triple Group of companies bankrupt due to debts to creditors. The court terminated the bankruptcy proceedings in 2024. It is unclear what the outcome of the bankruptcy process was.

    The bankruptcy included one of his key companies, TriplePharm, which is majority owned by the firms he wrote about in his letter to the Cypriot corporate registry, Welgro Services Limited and Profax Investments Limited. 

    Chyzh filed a lawsuit in September 2025 against those two Cypriot companies in a Belarus court in an effort to regain control of his assets. Chyzh’s three children have been called as third parties in the case on the side of the Cypriot firms that are being sued, Welgro Services Limited and Profax Investments Limited. The lawsuit is ongoing.

    Both those Cypriot companies were also serviced until the end of 2015 by two more firms owned by Anastasiades’ daughters and his former business partners, according to corporate records. 

    Imperium Services Ltd was secretary for the companies, while the Nicos Chr. Anastasiades and Partners law firm acted as legal advisers. 

    Anastasiades owned the majority of Imperium Services, as well as the law firm that bears his name, until his presidency began in February 2013. Just before assuming office, he passed his shares to his daughter Elsa and his business partners, Philippou and Stathis Lemis. His other daughter, Ino, was added as a shareholder in 2015.

    The former president said he was “unaware and therefore unable to answer” questions emailed by CIReN, OCCRP’s member center in Cyprus. “In lieu of any other reply,” he attached a letter he sent to Cyprus’ parliament in 2021.

    “Since the transfer of the shares I have had absolutely no relationship or connection with the firm that bears my name,” Anastasiades wrote to parliament that year. “Nor does the composition of the share capital in any way justify the claim that it is the law firm ‘of the president’s daughters.’”

    He told CIReN that the law firm would provide more “detailed answers.”

    The firm’s partners include both Anastasiades’ daughters, as well as Lemis and Philippou, who are managing partners. All four of them are also shareholders of Imperium companies. 

    “We definitely deny all allegations of unlawful or improper conduct on the part of our firm,” Philippou wrote in an emailed response to questions, including how often sanctions lists were checked against companies being provided with corporate services.

    Anastasiades’ daughters did not directly respond to questions. Nor did Lemis. Chyzh did not respond to a request for comment. 

    ‘Significant Turnover’

    Chyzh’s August 2024 letter to the Cyprus Registrar of Companies came in the run-up to his legal battle in Belarus over control of the companies. 

    “I am sending you this letter in order to notify you of the situation that has developed,” Chyzh wrote in the letter, which was notarized in Moscow.

    Although they appeared on documents as the owners, Chyzh wrote that his children “have always performed only intermediary functions, acting on my behalf and under my instructions. I have always been and remain the real beneficiary of Welgro Services Limited and Profax Investments Limited.”

    Chyzh also pointed out that his children were born in 1988, 1990 and 1996. That would have made them about 20, 18 and 12 years old when the first of the companies was formed in 2008.

    “They did not have the financial or other capabilities to establish the companies,” he wrote.

    While the outcome of the bankruptcy of Chyzh’s Belarusian companies is unclear, corporate documents from Belarus show that TriplePharm is active today, and is 90-percent-owned by the Cyprus companies.

    Chyzh noted that the Cyprus firms are “members of” companies in Belarus. “These Belarusian companies represent businesses with a long history and significant turnover,” he added. 

    In 2011, a subsidiary of Profax called Bertament Limited received a $222 million loan from another Cypriot firm, Mabor Co Ltd. Mabor was described in annual financial reports as a “related party” to Bertament. This means the two companies share some degree of common ownership or control, suggesting the possibility that the same beneficiary of Bertament may have had shares in, and possibly full control over, Mabor.

    Mabor was also owned, on paper, by Imperium Nominees. Philippou, the shareholder of Imperium Nominees and managing partner of Nicos Chr. Anastasiades and Partners law firm, signed the documents for Mabor’s funds transfers. It is not clear if the loan was repaid. 

    Philippou did not respond to a question about whether Chyzh owned Mabor.

    Financial filings show that Mabor recorded $4.3 billion in turnover in 2011, from re-exporting Russian petroleum products from Belarus.

    Mabor was dissolved in July 2024, a month before Chyzh appealed to the Cyprus registry to recognize his ownership of Profax.

    Reached by phone, Philippou said he remembered the company name, but declined to comment on specifics. He did not respond to questions about Mabor in writing.

    In January 2012, Bertament Limited signed a contract for a 16-day stay for a group of Belarusians at a Russian ski resort. The $25,000 invoice for the trip was issued to Philippou, who did not reply to a question from reporters about it.

    The guestlist included Chyzh, two businessmen currently sanctioned by the EU, and several athletes and beauty queens, as well as Lukashenko’s personal priest. The holiday coincided with a trip Lukashenko made to the same resort, where he met with then-Russian president Dmitry Medvedev.

  • On day seven of Middle East war, no let-up in suffering

    The escalating war in the Middle East has heightened growing concerns about further civilian suffering and displacement in the region and far beyond, UN agencies said on Friday.