Blog

  • More On Raw Milk

    HHS Secretary Robert F. Kennedy, Jr. has been pushing the narrative that raw unpasteurized milk is both safe and better for your health than pasteurized milk. As usual, he is objectively wrong.

    The post More On Raw Milk first appeared on Science-Based Medicine.

  • Hundreds feared dead in Lebanon strikes

    The United Nations has strongly condemned airstrikes by the Israeli military across Lebanon on Wednesday which have resulted in significant casualties and destruction.  
  • Over 1,000 humanitarians have been killed in three years, Security Council hears

    At least 326 humanitarians were killed in the line of duty across 21 countries during 2025, bringing the total killed over three years to over 1,010. The International Red Cross warned the Security Council on Wednesday that “we are losing our humanity in war.”
  • The Student Loan Conjuncture

    From one point of view, the student loan program has returned to normal. Tens of millions of people continue to take out loans to pay for their education, and tens of millions more have resumed making monthly payments after more than four years of forbearance. Struggles over the future of student debt—over its terms, its cancellation and so on—have receded from the political spotlight with so much…

  • Hospitals coping well with doctors’ strike so far – NHS boss

    Resident doctors in England – the new name for junior doctors – are taking part in their 15th walkout in a long-running pay dispute.
  • Digital Hopes, Real Power: How the Arab Spring Fueled a Global Surveillance Boom

    This is the third installment of a blog series reflecting on the global digital legacy of the 2011 Arab uprisings. You can read the first post here, and the second here.

    When people remember the 2011 uprisings across the Middle East and North Africa (MENA), they picture crowded squares, raised phones, and the feeling that the internet had finally shifted the balance of power toward ordinary people. But the past decade and a half is also a story about how governments, companies, and platforms turned those same tools into the backbone of a powerful state surveillance apparatus.

    For activists, journalists, and everyday users, that means now living with a constant threat: the phone in your pocket, the platforms you organize on, and the systems you rely on for safety and connection can be weaponized at the flip of a switch. A global surveillance industry has treated repression by many MENA governments as a growth opportunity, and the tactics refined there now shape digital authoritarianism worldwide. This essay traces how that shift unfolded: security agencies upgraded older systems of repression with new surveillance tools and permanent monitoring infrastructure; cybercrime laws and mercenary spyware markets turned digital control into standard operating procedure; and biometrics, facial recognition, and ‘smart city’ projects laid the groundwork for AI‑driven surveillance that now shapes protests, borders, and everyday life far beyond the region. 

    Remembering the Arab Spring today means seeing the events of 2011 as both a remarkable moment of movement history when people leveraged networked tools in their fight for freedom and the beginning of a long, grinding effort to turn those same tools into mechanisms of state control.

    Old‑School Repression, New‑School Tools

    Long before Facebook and Twitter, regimes in places like Egypt and Syria already knew how to crush dissent. They leaned on informant networks, physical surveillance, and wiretaps, backed by emergency laws that let security agencies monitor and detain critics with almost no restraint. Research on the use of surveillance technology in MENA shows that, even before the Arab Spring, states were layering early digital tools like internet monitoring, deep packet inspection, and interception centers on top of that older machinery of control.

    At the same time, connectivity was racing ahead. Cheap smartphones and social media suddenly let people share information at scale, coordinate protests, and broadcast abuses in real time. In 2011, EFF described both the excitement around “Facebook revolutions” and the early signs that governments were scrambling to upgrade their capacity to watch and disrupt popular dissent.

    After the uprisings, Western critics endlessly debated how much credit to give social media itself. In the background, meanwhile, security agencies across several MENA states reached a much simpler conclusion: if networked communication could help topple a dictator, then they needed to embed themselves deep inside those networks. Analyses of the rise of digital authoritarianism in MENA show how quickly officials pivoted from being surprised by online organizing to building systems to monitor and pre‑empt it.

    In the years after 2011, governments across the region poured money into expanding internet monitoring and deep packet inspection, investing heavily in tools that let them systematically watch what people said and did on major platforms. Foreign vendors set up monitoring centers and interception systems that let security agencies block tens of thousands of sites, scrape and analyze social media at scale, monitor activist pages and online communities, and track activists in real time. They took the lesson of 2011 and built a new, pre‑emptive model of digital control, one that assumes the state should see as much as possible, as early as possible.

    As we noted in 2011, exporting permanent surveillance infrastructure to already‑abusive governments doesn’t “modernize” public safety; it locks in an architecture of control that is primed to abuse dissidents, journalists, and marginalized communities.

    Domestic Lawfare and Cyber-Mercenaries

    The surveillance tech stack was only half the story. After the uprisings, a number of governments also rewrote the rules that govern online life. Cybercrime laws, “fake news” provisions, and overbroad public‑order and ‘morality’ offences gave prosecutors and security agencies legal cover to act with impunity. Governments in Saudi Arabia, Tunisia, Jordan, and Egypt combined counterterrorism, cybercrime, defamation, and protest laws into a legal thicket designed to make online dissent feel dangerous and costly. Morality laws and cybercrime provisions are used to target queer and trans people based on identity and expression.​

    At the United Nations, a new global cybercrime convention now risks baking this logic into international law. The convention was adopted by the UN General Assembly in late 2024, despite serious human rights concerns raised by civil society. Echoing our partners, EFF warned at the time that the UN cybercrime draft convention remained too flawed to adopt and urged states to reject the draft language because it legitimized expansive surveillance powers and criminalized legitimate expression, security research, and everyday digital practices around the world. 

    On paper, these instruments gesture toward “public safety” objectives; in practice, they function as pathways for state security agencies to monitor, prosecute, and silence the communities most at risk. For state-targeted communities, that makes being visible online a calculated risk, not a neutral choice.

    But criminal codes are only half the story. Mercenary tech is the other. 

    As governments worldwide looked for ways to outpace their critics, a parallel market emerged to help them infiltrate and take over devices. Companies like NSO Group marketed Pegasus and similar tools as off‑the‑shelf capabilities for governments that wanted to hack a target’s phone or other devices to read messages, turn on microphones, and monitor entire social networks while bypassing the courts.

    In 2019, UN Special Rapporteur David Kaye called for a global moratorium on the sale and transfer of private surveillance tools until real, enforceable safeguards exist. Two years later, forensic work by Amnesty and media partners showed how the same spyware used to hack the phones of Palestinian human‑rights defenders was used to surveil journalists, activists, lawyers, and political opponents across dozens of countries.

    Regional groups responded by demanding an end to the sale of surveillance technology to autocratic governments and security agencies, arguing that you cannot keep selling “lawful intercept” tools into systems where law itself is an instrument of repression. Commercial spyware is at the center of digital repression, not at its margins. Surveillance vendors are not neutral suppliers. Safeguards remain weak, fragmented, or nonexistent in most of the countries buying these tools, yet vendors continue seeking new contracts and new militarized “use cases.” In other words, the companies that design, market, and maintain these systems profit from and help entrench authoritarian power precisely because their products enable this kind of control.

    Biometrics, Facial Recognition, and AI‑Powered Surveillance Cities

    On top of this rapidly intensifying interception and spyware stack, governments and companies began layering biometrics and face recognition into everyday systems, creating pathways for bulk data collection, automated analysis, and risk profiling. In parts of MENA, national ID schemes, border and migration controls, and centralized biometric databases have been rolled out in environments with weak or captured data‑protection laws, making it easy to link people’s movements, services, and political activity to a single, persistent identifier.​

    Humanitarian programs are not exempt from this pattern. In Jordan, Syrian refugees have been required to submit iris scans and biometric data to access cash assistance and food, turning “consent” into a precondition for survival. When access to aid depends on enrollment in centralized biometric systems, any breach, misuse, or repurposing of that data can have severe, life‑altering consequences for people who have no realistic way to opt out. Investigations into surveillance‑tech firms complicit in abuses in MENA show that vendors profit from supplying biometric and surveillance tools for migration management and internal security, even when those tools are used in discriminatory or abusive ways.

    Mass, indiscriminate surveillance technologies were first piloted in MENA on people who are already criminalized or made vulnerable by poverty, but their use quickly expanded from narrow, security‑framed deployments at borders and checkpoints to routine use in welfare offices, aid distribution sites, and city streets. As hardware for sensors, cameras, and data storage got cheaper and “smart city” surveillance systems promised seamless security and services, it became easier and less politically contentious to keep these systems running everywhere, all the time.​

    Unlike targeted hacking tools, these broad, city‑wide surveillance infrastructures built on camera networks, persistent sensors, and biometric databases erase any practical line between people under investigation and the broad public, normalizing bulk, indiscriminate monitoring of public space and everyday movement. In the Gulf, facial recognition and dense sensor networks are increasingly built into high‑profile “smart city” and mega‑project plans that lean heavily on biometric and AI‑driven monitoring. These are security‑first development projects where biometric and sensor infrastructures are designed from the outset to embed policing, migration control, and commercial tracking into the urban fabric. In this vision of the Gulf’s “smart city” future—often sold as seamless services and digital opportunity—“smart” is the branding, and pervasive monitoring is the operating principle.​​

    EFF has consistently opposed government use of face recognition and biometric surveillance, in some instances calling for outright bans. In contexts that treat peaceful dissent as a security threat, embedding biometric surveillance into everyday infrastructure locks in a balance of power that favors militarized policing and state control. That infrastructure is now the starting point for a new set of risks. Surveillance systems built over the last decade are being repackaged as the foundation for a new generation of “AI‑enabled” defense and security products. 

    Companies that once focused on video management or perimeter security now advertise “defense applications” for AI‑driven situational awareness and threat detection, using computer‑vision models to scan camera feeds, compare against existing watchlists, and flag “suspicious” people or behaviors in real time. Drone and sensor platforms are being upgraded with embedded AI that tracks and classifies targets autonomously and with “drone‑based AI threat detection and intelligent situational awareness,” turning aerial surveillance into a continuous data feed for security agencies and militaries. In smart‑city and defense expos from the Gulf to Europe and North America, similar systems are marketed as neutral efficiency upgrades or tools to “protect critical infrastructure,” even where they are explicitly designed to scale up border enforcement, protest surveillance, and internal security operations.

    As these systems are folded into AI‑driven defense products, the line between “civilian” infrastructure and militarized surveillance disappears, turning streets, borders, and aid sites into continuous input for security operations. That is the landscape that human rights and accountability efforts now have to confront.

    Templates of Control, Networks of Resistance

    The patterns established in heavily securitized MENA states after the Arab Spring now shape how states monitor and crush more recent uprisings, from Iran’s use of location data and facial recognition to track down protesters to long‑running crackdowns elsewhere in the region. This model of “digital authoritarianism” built on spyware, data‑hungry ID systems, platform control, and emergency‑style security laws has emerged everywhere from Latin America to Eastern Europe to here in the United States. As the new UN Cybercrime Convention moves toward implementation, its broad offences and surveillance powers risk turning this ad hoc toolkit into a formal template for cross‑border data‑sharing, repression, and an all‑purpose global surveillance instrument.

    For people on the ground, none of this is theoretical. Human‑rights defenders, journalists, and ordinary users across the region face arrest, long prison sentences, and exile based on their digital traces. In that landscape, commercial spyware is not a side issue but part of the core machinery of repression. Pegasus has been used to hack journalists’ phones through zero‑click exploits and compromise human‑rights defenders and watchdog organizations themselves, including staff at Amnesty’s Pegasus Project partners and Human Rights Watch. These deployments give practical effect to the “cybercrime” and “terrorism” frameworks described earlier: person‑by‑person campaigns against particular communities, contacts, and networks, rather than neutral, generalized security measures.

    Under these conditions, everyday security becomes a second job. People describe carrying multiple phones, keeping one for relatively “clean” uses and others for riskier conversations, splitting identities across platforms, using coded language, and moving their organizing off mainstream services when possible. Pushing this burden onto users is a political choice: states, platforms, and vendors could build systems that are safe by design; instead, they externalize risk to the people they watch and punish.

    Even against that backdrop, civil society organizations have refused to cede the terrain to security agencies and vendors. Regional coalitions have demanded strict export controls and outright bans on selling intrusive surveillance tech to autocratic governments.

    Advocates have also pushed companies to do more than box‑ticking “due diligence.” Work with surveillance‑tech firms in the context of migration and border control has repeatedly shown that most are still far from serious human‑rights assessments, let alone willing to turn down these lucrative contracts.

    Many of the same governments that have been critical of others on the issue of human rights have hosted or licensed companies that build these tools, in some cases buying similar capabilities for their own security agencies. European authorities, for instance, have investigated FinFisher’s export of spyware “made in Germany” to Turkey and other non‑EU governments. Meanwhile, the NSO Group has at least 22 Pegasus contracts with security and law‑enforcement agencies in 12 EU countries. This is a transnational industry, not a localized problem.

    Against near impossible odds, people continue finding pathways to freedom. The global surveillance sector reinforces the same hierarchies and violence that people have found ways to survive against for generations. Queer activists and others at the sharpest edges of this system have had to develop their own forms of resistance, including against biometric and data‑driven targeting. Encryption, circumvention tools, and security training are not silver bullets, but they remain essential for anyone trying to organize, document abuses, or simply exist online with a bit less risk. Resources like EFF’s Surveillance Self‑Defense are one piece of that ecosystem, alongside trainers and groups who have been doing this work on the ground for years.​

    Remembering the Arab Spring in this context means not only tracing how surveillance expanded in its wake, but lifting up the people and coalitions who are still pushing back against that infrastructure today.​

    Defending the Future of Digital Dissent

    The Arab Spring is often remembered through images of packed squares and hopeful tweets. But living with its aftermath means confronting the surveillance architecture built in its shadow: laws that turn online speech into a crime, spyware and biometric systems that turn phones and faces into tracking beacons, and platform practices that routinely sacrifice the people most at risk. None of that is inevitable, and none of it is confined to one part of the world.

    Accountability has to reach both governments and the companies that profit from arming them with these tools. That means pushing for far stronger limits on how surveillance tech is built, sold, and deployed; demanding meaningful transparency when these systems are used; and defending the tools people rely on to communicate and organize safely, including robust encryption and secure channels. It also means taking direction from people in the region who have been navigating and resisting this landscape for years, rather than only paying attention once similar abuses show up elsewhere.

    Surveillance itself is transnational: tools are exported, playbooks are copied, and data moves across borders as easily as money. And so we continue our work, documenting abuses, sharing security knowledge, and collectively organizing against these violent systems.

    This is the third installment of a blog series reflecting on the global digital legacy of the 2011 Arab uprisings. Read the rest of the series here.

  • EU Parliament Blocks Mass-Scanning of Our Chats—What’s Next?

    The EU’s so-called Chat Control plan, which would mandate mass scanning and other encryption-breaking measures, has run into welcome setbacks lately. The most controversial idea, mandatory scanning of encrypted messages, was dropped by EU member states. And now, another win for privacy: the EU Parliament has dealt a real blow to voluntary mass-scanning of chats by voting not to prolong an interim derogation from e-Privacy rules in the EU. These rules temporarily allowed service providers to scan private communications.

    But no one should celebrate just yet. We said there is more to it, and voluntary scanning is a key part. Unlike in the U.S., where there is no comprehensive federal privacy law, the general and indiscriminate scanning of people’s messages is not legal in the EU without a specific legal basis. The e-Privacy derogation law, which gave (limited) cover for such activities, has now expired. Does that mean mass scanning will stop overnight?  

    Not really. 

    Companies have continued similar scanning practices during past gaps. Google, Meta, Microsoft, and Snap have already signaled in a joint statement that they will “continue to take voluntary action on our relevant Interpersonal Communication Services.” Whether this indicates continued scanning of our private communication is not entirely clear, but what is clear is that such activity would now risk breaching EU law. Then again, lack of compliance with EU data protection and privacy rules is nothing new for big tech in Europe.

    Most importantly, the “Chat Control” proposal for mandatory detection of child sexual abuse material (CSAM) is still alive and being negotiated. It has shifted the focus toward so-called risk mitigation measures, such as problematic age verification and voluntary activities. If platforms are expected to adopt these as part of their compliance, the measures risk no longer being truly voluntary. While mass scanning may be gone on paper, some broader concerns remain.

    So, where does this leave us? The immediate priority is to make sure the expired exception for mass scanning is not revived. At the same time, lawmakers need to pull the teeth from the currently negotiated Chat Control proposal by narrowing risk mitigation measures. This means ensuring that age verification does not become a default requirement and “voluntary activities” are not turned into an expectation to scan our communications.   

    As we said before, this is a zombie proposal. It keeps coming back and must not be allowed to return through the back door. 

  • Ross Douthat’s Shoddy Arguments For Religion

    According to Pew’s most recent Religious Landscape Study, a growing share of Americans identify as atheists, agnostics, or “nothing in particular.” These so-called “nones” made up 16 percent of the population in 2007, but 29 percent in the latest survey, from 2023-24. The trend among younger Americans is even more striking. In this latest survey, 43 percent of those born in the ’90s and early aughts identified as nones.

  • Russia Hacked Routers to Steal Microsoft Office Tokens

    Hackers linked to Russia’s military intelligence units are using known flaws in older Internet routers to mass harvest authentication tokens from Microsoft Office users, security experts warned today. The spying campaign allowed state-backed Russian hackers to quietly siphon authentication tokens from users on more than 18,000 networks without deploying any malicious software or code.

    Microsoft said in a blog post today it identified more than 200 organizations and 5,000 consumer devices that were caught up in a stealthy but remarkably simple spying network built by a Russia-backed threat actor known as “Forest Blizzard.”

    How targeted DNS requests were redirected at the router. Image: Black Lotus Labs.

    Also known as APT28 and Fancy Bear, Forest Blizzard is attributed to the military intelligence units within Russia’s General Staff Main Intelligence Directorate (GRU). APT28 famously compromised the Hillary Clinton campaign, the Democratic National Committee, and the Democratic Congressional Campaign Committee in 2016 in an attempt to interfere with the U.S. presidential election.

    Researchers at Black Lotus Labs, a security division of the Internet backbone provider Lumen, found that at the peak of its activity in December 2025, Forest Blizzard’s surveillance dragnet ensnared more than 18,000 Internet routers, mostly unsupported, end-of-life devices or ones far behind on security updates. A new report from Lumen says the hackers primarily targeted government agencies, including ministries of foreign affairs and law enforcement, as well as third-party email providers.

    Black Lotus Security Engineer Ryan English said the GRU hackers did not need to install malware on the targeted routers, which were mainly older Mikrotik and TP-Link devices marketed to the Small Office/Home Office (SOHO) market. Instead, they used known vulnerabilities to modify the Domain Name System (DNS) settings of the routers to include DNS servers controlled by the hackers.

    As the U.K.’s National Cyber Security Centre (NCSC) notes in a new advisory detailing how Russian cyber actors have been compromising routers, DNS is what allows individuals to reach websites by typing familiar addresses, instead of associated IP addresses. In a DNS hijacking attack, bad actors interfere with this process to covertly send users to malicious websites designed to steal login details or other sensitive information.
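    The hijack described above works because clients trust whatever resolver their router hands them. One way to illustrate it, and heuristically detect it, is to send the same A-record query directly to two resolvers (the router-advertised one and a known-good public one) and compare the answers. The sketch below is a minimal, illustrative DNS client using only the Python standard library; the transaction ID and the resolver addresses in the usage note are placeholder assumptions, not values from the article, and a mismatch is only a red flag, since CDNs legitimately return different IPs from different vantage points.

    ```python
    import socket
    import struct

    def build_query(domain: str, txid: int = 0x1234) -> bytes:
        """Build a minimal DNS query packet for an A record."""
        # Header: ID, flags (recursion desired), 1 question, 0 answer/authority/additional
        header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
        # QNAME: length-prefixed labels terminated by a zero byte
        qname = b"".join(bytes([len(p)]) + p.encode() for p in domain.split(".")) + b"\x00"
        return header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

    def parse_a_records(response: bytes) -> list[str]:
        """Extract IPv4 addresses from the answer section of a DNS response."""
        qdcount, ancount = struct.unpack(">HH", response[4:8])
        pos = 12
        for _ in range(qdcount):            # skip the echoed question section
            while response[pos] != 0:
                pos += response[pos] + 1
            pos += 1 + 4                    # terminating zero byte + QTYPE/QCLASS
        ips = []
        for _ in range(ancount):
            if response[pos] & 0xC0 == 0xC0:    # compressed name: 2-byte pointer
                pos += 2
            else:                               # uncompressed name: walk the labels
                while response[pos] != 0:
                    pos += response[pos] + 1
                pos += 1
            rtype, _rclass, _ttl, rdlen = struct.unpack(">HHIH", response[pos:pos + 10])
            pos += 10
            if rtype == 1 and rdlen == 4:       # A record carries a 4-byte IPv4 address
                ips.append(".".join(str(b) for b in response[pos:pos + 4]))
            pos += rdlen
        return ips

    def resolve(domain: str, resolver_ip: str, timeout: float = 3.0) -> list[str]:
        """Query one specific resolver directly over UDP port 53."""
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(timeout)
            s.sendto(build_query(domain), (resolver_ip, 53))
            data, _ = s.recvfrom(512)
        return parse_a_records(data)
    ```

    Comparing, say, `resolve("login.example.com", "192.168.1.1")` against `resolve("login.example.com", "9.9.9.9")` and alerting on disjoint answer sets approximates the check, where 192.168.1.1 stands in for whatever resolver the router advertises.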

    English said the routers attacked by Forest Blizzard were reconfigured to use DNS servers that pointed to a handful of virtual private servers controlled by the attackers. Importantly, the attackers could then propagate their malicious DNS settings to all users on the local network, and from that point forward intercept any OAuth authentication tokens transmitted by those users.
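    Because the malicious settings propagate to every client on the local network, one simple client-side check is to compare the resolvers a machine is actually using against an allowlist. On many Linux systems the active resolvers appear as `nameserver` lines in /etc/resolv.conf; the sketch below assumes that format, and the allowlist is a hypothetical example rather than a recommendation of particular providers.

    ```python
    import ipaddress

    # Hypothetical allowlist: the resolvers you expect this network to use
    TRUSTED_RESOLVERS = frozenset({"1.1.1.1", "8.8.8.8", "9.9.9.9"})

    def unexpected_resolvers(resolv_conf: str,
                             allowlist: frozenset = TRUSTED_RESOLVERS) -> list[str]:
        """Return nameserver entries that are valid IPs but not on the allowlist."""
        flagged = []
        for line in resolv_conf.splitlines():
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                try:
                    ipaddress.ip_address(parts[1])
                except ValueError:
                    continue  # malformed entry; ignore rather than flag
                if parts[1] not in allowlist:
                    flagged.append(parts[1])
        return flagged
    ```

    Running `unexpected_resolvers(open("/etc/resolv.conf").read())` would surface any off-list resolver, which on a hijacked router would include the attacker-controlled servers pushed out via DHCP.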

    DNS hijacking through router compromise. Image: Microsoft.

    Because those tokens are typically transmitted only after the user has successfully logged in and gone through multi-factor authentication, the attackers could gain direct access to victim accounts without ever having to phish each user’s credentials and/or one-time codes.

    “Everyone is looking for some sophisticated malware to drop something on your mobile devices or something,” English said. “These guys didn’t use malware. They did this in an old-school, graybeard way that isn’t really sexy but it gets the job done.”

    Microsoft refers to the Forest Blizzard activity as using DNS hijacking “to support post-compromise adversary-in-the-middle (AiTM) attacks on Transport Layer Security (TLS) connections against Microsoft Outlook on the web domains.” The software giant said while targeting SOHO devices isn’t a new tactic, this is the first time Microsoft has seen Forest Blizzard using “DNS hijacking at scale to support AiTM of TLS connections after exploiting edge devices.”

    Black Lotus Labs engineer Danny Adamitis said it will be interesting to see how Forest Blizzard reacts to today’s flurry of attention to their espionage operation, noting that the group immediately switched up its tactics in response to a similar NCSC report (PDF) in August 2025. At the time, Forest Blizzard was using malware to control a far more targeted and smaller group of compromised routers. But Adamitis said the day after the NCSC report, the group quickly ditched the malware approach in favor of mass-altering the DNS settings on thousands of vulnerable routers.

    “Before the last NCSC report came out they used this capability in very limited instances,” Adamitis told KrebsOnSecurity. “After the report was released they implemented the capability in a more systemic fashion and used it to target everything that was vulnerable.”