Author: tio

  • Congress Is Dropping the Ball with a Clean Extension of FISA

    Two years ago, Congress passed the “Reforming Intelligence and Securing America” Act (RISAA), which included nominal reforms to Section 702 of the Foreign Intelligence Surveillance Act (FISA). The bill unfortunately included some problematic expansions of the law, but it also included a relatively big victory for civil liberties advocates: Section 702 authorities were extended for only two years, allowing Congress to continue the important work of negotiating a warrant requirement for Americans as well as some other critical reforms.

    However, Congress clearly did not continue this work. In fact, it now appears that Congress is poised to consider another extension of this program without even attempting to include necessary and common sense reforms. Most notably, Congress is not considering a requirement to obtain a warrant before looking at data on U.S. persons that was indiscriminately and warrantlessly collected. House Speaker Mike Johnson confirmed that “the plan is to move a clean extension of FISA … for at least 18 months.” 

    Even more disappointing, House Judiciary Chair Jim Jordan, who has previously been a champion of both the warrant requirement and closing the data broker loophole, told the press he would vote for a clean extension of FISA, claiming that RISAA included enough reforms for the moment.

    It’s important to note RISAA was just a reauthorization of this mass surveillance program with a long history of abuse. Prior to the 2024 reauthorization, Section 702 was already misused to run improper queries on peaceful protesters, federal and state lawmakers, Congressional staff, thousands of campaign donors, journalists, and a judge reporting civil rights violations by local police. RISAA further expanded the government’s authority by allowing it to compel a much larger group of people and providers into assisting with this surveillance. As we said when it passed, overall, RISAA is a travesty for Americans who deserve basic constitutional rights and privacy whether they are communicating with people and services inside or outside of the US.

    Section 702 should not be reauthorized without any additional safeguards or oversight. Fortunately, there are currently three reform bills for Congress to consider: SAFE, PLEWSA, and GSRA. While none of these bills are perfect, they are all significantly better than the status quo, and should be considered instead of a bill that attempts no reform at all. 

    Mass spying (accessing a massive amount of communications by and with Americans first, and sorting out targets second, and secretly) has always been a problem for our rights. It was a problem when President George W. Bush first authorized it in secret without Congressional or court oversight. And it remained a problem even after the passage of Section 702 in 2008 created the possibility of some oversight. Congress was right that this surveillance is dangerous, and that’s why it set Section 702 up for regular reconsideration. That reconsideration has not occurred, even as the leadership of the NSA, Justice Department, and FBI has radically changed. Reform is long overdue, and now it’s urgent.

  • Inside the Culture of Silence in Washington

    Dr. Annelle Sheline knows firsthand what it means to act on principle. In March 2024, under President Biden, she resigned from the State Department in protest of U.S. support for the Gaza genocide, announcing that she could no longer “serve an administration that enables such atrocities.” Now a senior research fellow at the Quincy Institute and a senior nonresident fellow at the Arab Center in Washington, D.C., Sheline continues to speak out against U.S. militarism.

  • Alleged South American Kingpin Denied Bail After Extradition to U.S.

    In his first public court appearance since his recent extradition from Bolivia, accused South American drug lord Sebastián Marset was ordered detained without bail by a federal judge in Virginia at a detention hearing on Friday.

    Dressed in a dark green prison jumpsuit and black sneakers, the tattoo-covered Uruguayan national did not speak during the proceeding at the Albert V. Bryan United States Courthouse. 

    Magistrate Judge William B. Porter denied bail to Marset, agreeing with prosecutors’ view that he represented a flight risk.

    Marset, 34, was arrested in the early morning of March 13 in a residential neighborhood of Santa Cruz, Bolivia, and extradited to the U.S. hours later.

    Marset is alleged to be one of the most powerful drug barons in the Southern Cone of South America; the U.S. had announced a $2 million reward for information leading to his capture.

    Prosecutors allege Marset laundered millions of dollars in global narcotics proceeds through the U.S. and European banking systems, along with fellow Uruguayan co-conspirator Federico Ezequiel Santoro Vassallo, who is now serving a 15-year sentence in the U.S. for money laundering conspiracy.

    The U.S. Attorney’s Office for the Eastern District of Virginia alleges Marset led “a large-scale drug trafficking organization that distributed thousands of kilograms of cocaine, including as many as 10 tons at a time, from South America typically to Europe.”

    “The Marset drug trafficking organization allegedly traffics cocaine in Bolivia, Paraguay, Uruguay, Brazil, Belgium, the Netherlands, Portugal, and elsewhere,” the U.S. Attorney’s Office said.

    Marset’s prosecution in the U.S. state of Virginia is uncommon, as the majority of the country’s significant drug cases are tried in New York City or Miami. 

    According to court documents, at least one of the bank wire transfers made by his alleged co-conspirator Santoro was routed through a U.S. correspondent bank’s server located in Richmond, Virginia. That gave the Justice Department a venue where Marset and Santoro could be charged and tried for money laundering.

    Marset’s arrest and extradition to the U.S. appear to be the result of renewed regional anti-narcotics cooperation, coming just months after the DEA resumed operations in Bolivia following a 17-year absence. They also follow Bolivia’s participation in an anti-narcotics summit convened by President Donald Trump on March 7.

    Marset’s next hearing is expected within the next two weeks.

  • Thousands get meningitis vaccine as experts wait to see outbreak peak

    The outbreak, which has killed two people, is thought to have originated at a Canterbury nightclub.
  • The Generative Fog of War

    The following story is co-published with Nolan Higdon’s Substack.

    “Tel Aviv, stripped of illusion, as you have never witnessed it,” read the caption above a viral March 2026 video showing missiles hammering the Israeli city as explosions burst across the night sky. To the casual scroller, it appeared to be a harrowing document of modern conflict. The problem, however, was that the video was a deepfake.

    Deepfakes are synthetic media edited or generated using artificial intelligence. According to The New York Times, a “cascade of AI fakes about war with Iran” has proliferated across social media since the United States and Israel reignited military actions with Iran on Feb. 28, 2026. Indeed, the digital landscape is increasingly saturated with synthetic fabrications, as false videos of boisterous celebrations, frantic airport evacuations, devastating bombings and graphic casualties flood users’ feeds in a relentless stream of misinformation.

    As these digital fabrications blur the line between reality and simulation, the necessity for critical artificial intelligence literacy (CAIL) has moved from an educational luxury to a vital requirement. We are currently navigating a landscape where the “fog of war” is no longer just a metaphor for confusion on the battlefield, but a literal description of an information environment choked by so-called AI slop. Indeed, one study found that more than 20% of the content on YouTube is AI-generated. Without a robust, systemic effort to instill CAIL, the public remains defenseless against sophisticated psychological operations. We must understand not just how to use these tools, but the sociopolitical structures that own them and the inherent biases they encode.

    From Trojan horses to Tonkin

    The deployment of false information is not a modern phenomenon; it has been a foundational staple of conflict since the ancient world. From the Greeks’ legendary construction of a hollow wooden horse to infiltrate Troy, to Genghis Khan’s Mongol cavalry utilizing feigned retreats to lure enemies into fatal disarray, strategic deception has always defined the battlefield.

    In modern democracies like the U.S., leaders have frequently refined these tactics into “false news” designed to manufacture public consent for intervention. This pattern of deception is evident in the phantom attack in the Gulf of Tonkin used to escalate the Vietnam War and in the infamous false claims of weapons of mass destruction that prefaced the 2003 invasion of Iraq. Beyond initiating conflict, misinformation serves to artificially sustain public morale and project an illusion of progress. This was notoriously exemplified by the White House during the Vietnam War, where official reports continuously claimed the U.S. was winning even as internal assessments acknowledged a deepening quagmire. Similarly, President George W. Bush’s “Mission Accomplished” declaration, delivered from the deck of an aircraft carrier just weeks into the 2003 invasion of Iraq, provided a false sense of finality to a war that would ultimately span decades.

    The architecture of synthetic media

    While the intent to deceive is ancient, AI and social media have complicated these issues by allowing anyone to create slick, convincing content at scale. Even before the recent escalation, the Russia-Ukraine war and the geopolitical tensions between Israel and Iran were already inundated with AI-generated misinformation.

    The proliferation of deepfakes does more than just spread lies; it erodes the very foundation of objective truth by fostering universal skepticism. This phenomenon allows genuine evidence of suffering to be dismissed as mere simulation. For instance, NBC News reported on an exhaustive investigation confirming that a video of starving Gazans awaiting food in May 2025 was entirely authentic; nonetheless, a barrage of social media users reflexively dismissed the footage as a deepfake. When the public can no longer distinguish between a sophisticated fabrication and a documented reality, the truth becomes a matter of partisan convenience rather than empirical fact.

    In high-stakes environments, the fog of war creates panic and visceral reactions where people feel their decision-making is a matter of life or death. If the information they consume is incorrect, it could be the difference between a peaceful protest and an individual becoming radicalized toward violence.

    For content creators and platform algorithms, the incentives are skewed toward chaos. Social media platforms are designed to amplify content that triggers intense emotional reactions. Because fake news is often more sensational than the nuanced truth, it spreads faster and wider.

    While the ideal response is for the public to wait and investigate before passing judgment, this is a tall order when individuals believe they are witnessing an active massacre. Some deepfakes can be debunked quickly, such as the video of Israeli Prime Minister Benjamin Netanyahu that showed him with six fingers. In many cases, verifying information takes time; one must geolocate footage, check metadata and often accept the uncomfortable conclusion that there is not yet enough evidence to be certain. AI has made this truth-finding mission exponentially harder for the average citizen who lacks the resources for deep digital forensics.
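
    To make the metadata step concrete, here is a minimal sketch of an EXIF check in Python using the Pillow library (the filename is hypothetical). Note the limits of the technique: most social platforms strip EXIF data on upload, so an empty result is not, by itself, evidence of fabrication.

      # Minimal sketch of the "check metadata" verification step (Pillow).
      from PIL import Image
      from PIL.ExifTags import TAGS

      def dump_exif(path: str) -> dict:
          """Return whatever EXIF tags survive in an image file."""
          exif = Image.open(path).getexif()  # empty if no EXIF block remains
          return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

      tags = dump_exif("viral_frame.jpg")  # hypothetical filename
      if not tags:
          print("No EXIF metadata: stripped on upload, re-encoded, or synthetic.")
      for name, value in tags.items():
          print(f"{name}: {value}")  # e.g. DateTime, Model (camera), Software

    Surviving tags such as the capture time, camera model, or editing software can corroborate or contradict a clip’s claimed origin, but their absence proves nothing either way.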

    Ironically, many people now rely on AI to tell them if content is AI-generated. This reliance illustrates a profound lack of AI literacy. What we commonly call AI today is more accurately described as large language models (LLMs). These are not “intelligent” in any human sense; they are pattern-recognition engines that memorize and predict sequences of data. They are only as good as the data fed into them, and as a result, they reflect human biases, often amplified to a dangerous degree.
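
    The “pattern-recognition engine” point can be made concrete with a toy bigram model in Python. This is an illustrative sketch only, vastly simpler than a real LLM: it predicts the next word purely from frequency counts in its training text, so whatever skew exists in that text is reproduced verbatim in its output.

      # Toy bigram "language model": pure pattern-matching over training data.
      # Illustrative only; real LLMs use neural networks, but the core idea of
      # predicting the next token from previously observed patterns is the same.
      from collections import Counter, defaultdict

      def train(corpus: str) -> dict:
          """Count which word follows which in the training text."""
          counts = defaultdict(Counter)
          words = corpus.lower().split()
          for prev, nxt in zip(words, words[1:]):
              counts[prev][nxt] += 1
          return counts

      def predict(counts: dict, word: str) -> str:
          """Return the successor word seen most often in training."""
          followers = counts.get(word.lower())
          return followers.most_common(1)[0][0] if followers else "<unknown>"

      # The model can only echo its data: feed it a slanted corpus and its
      # "predictions" are slanted in exactly the same way.
      model = train("the enemy is dangerous and the enemy is everywhere")
      print(predict(model, "enemy"))  # -> "is"
      print(predict(model, "is"))     # -> "dangerous" (first-seen tie-break)

    Nothing in this toy, nor in its billion-parameter cousins, checks its output against the world; it can only reproduce the patterns it was fed.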

    Studies consistently show that AI responses can be factually inaccurate about half the time. These models frequently “hallucinate,” fabricating information and citations that do not exist. A study by The Intercept highlighted this absurdity, showing how Google Gemini gave conflicting responses about whether a specific text was AI-generated, even when the text in question was something Gemini itself had produced. When news outlets cite AI detectors as definitive proof, they are often building their conclusions on a foundation of sand.

    The CAIL framework: Interrogating power

    This AI illiteracy compounds decades of neglected media literacy. While many nations have made media literacy a compulsory part of their national curriculum, the U.S. has largely left it to the discretion of local communities. Media literacy is the ability to access, analyze, evaluate, create and act using all forms of communication, from print to digital media. Without this foundation, the public is ill-equipped to handle the nuances of the algorithmic age.

    Critical AI literacy is an evolving framework that goes beyond simply knowing how to prompt a chatbot. It teaches students to interrogate ownership: Who owns the AI, and how does that ownership shape its bias, ideology and purpose? If a corporation owns the model, will it prioritize profit over democratic stability?

    A critical approach also examines representation. We must ask how AI-generated images reflect the biases of their training data, such as the white supremacist or extremist content occasionally surfaced by unmoderated models like Grok AI. Furthermore, it reminds us that Big Tech is often fundamentally anti-human in its philosophy, viewing human beings as buggy systems that need to be fixed or optimized by code.

    Choosing our reality: A mandate for the common good

    As researcher Gary Smith suggests, AI will only surpass human intelligence if humans continue to use it in ways that degrade our own cognitive abilities. Studies show that prolonged, uncritical reliance on AI and screens contributes to a decline in cognitive abilities such as memory and focus. CAIL points out that humans are the smart ones; the platforms are merely tools.

    In a time of war, the absence of this literacy has deadly consequences. If deepfakes and hallucinating bots are shaping our emotions and our interpretations of international conflict, we are living in a state of perpetual, manufactured crisis. We cannot afford to repeat the mistakes of previous decades, where we naively assumed that simply having access to technology would make the world more connected and smarter.

    The goal of critical AI literacy is not to make us run from technology, but to understand it so it can be harnessed for the common good. We must decide if AI will be a partner in automating meaningless tasks to improve the human condition, or an exploitative force that dictates the citizenry’s reality. That is a decision for an informed public to make, not for Big Tech executives. If people remain AI illiterate, they will remain dependent on the very narratives designed to exploit them.

  • FCC Chair Carr’s Threats to Punish Broadcasters Are Unconstitutional

    EFF joined other digital rights and civil liberties organizations in calling out the unconstitutionality of Federal Communications Commission chair Brendan Carr’s recent threats to punish broadcasters for airing statements he disagrees with. 

    Carr’s recent threats, like his past threats, are unconstitutional efforts to coerce news coverage that favors President Donald Trump. He wrongly claims that the FCC’s “public interest” standard allows him and the commission to revoke the licenses of broadcasters who publish news that is unflattering to the government, a position that is anathema to our country’s core constitutional values.

    The First Amendment constrains the FCC’s authority to force broadcasters to toe the government’s line, even though broadcast licensees are required to operate in the “public interest, convenience, and necessity.” Restrictions on licensees’ speech, especially viewpoint-based limitations, are still subject to First Amendment scrutiny, even if, in some circumstances, that scrutiny differs somewhat from that applied to non-broadcast media. And the “public interest” requirement has never been interpreted to allow the type of viewpoint-based punishment that Carr has threatened here.

    Everyone agrees that news reporting should strive for accuracy, but Carr’s threats have little to do with that. Instead, his allegations of “falsity” are a proxy for retaliation based on (1) Carr’s subjective policy disagreements; (2) any criticism of Trump and the administration broadly; and (3) treatment of anything that is not the official US government line about the Iran War as “false.”

    We join the call for Carr to withdraw these threats.

  • The UN housing development that challenged the segregated United States of the 1940s

    At a time when some state laws dictated where different races could live, Parkway Village, built to house some of the first UN staff in New York in 1947, led the way in eliminating racially segregated housing in the United States.
  • Weekly Roundup: March 20

    On Monday, Beau Baumann posed the question that too few are asking: What would a Russell Vought of the Left look like? On Wednesday, Hal Singer laid out one reason that antitrust enforcement has become so difficult: as courts have broadened the rule of reason and raised the evidentiary bar for proving market power, defendants increasingly force plaintiffs into costly disputes over how to define…

  • Lab-grown food pipe offers new hope for young patients

    UK scientists have grown fully functioning food pipes and successfully transplanted them into mini pigs, paving the way for human trials.
  • Data Centers Are Military Targets Now

    Iran has responded to the ongoing U.S.–Israeli war with a novel form of counterattack. For the first time in military history, private sector data centers came under deliberate attack.

    In an era when companies known for e-commerce, social networks, and search engines have also become close collaborators with militaries, is bombing their servers fair game?

    Three days after the U.S. and Israel began their joint bombardment, the Islamic Revolutionary Guard Corps launched kamikaze drone strikes against Amazon-owned data centers in the United Arab Emirates and Bahrain that provide an array of cloud computing services to customers throughout the Middle East. The impacts and subsequent fires “caused structural damage, disrupted power delivery to our infrastructure, and in some cases required fire suppression activities that resulted in additional water damage,” according to Amazon, resulting in service outages across the region.

    The motive behind the attack, according to Iranian state television, was not to block people from ordering groceries or posting to social media, but rather to highlight “the role of these centers in supporting the enemy’s military and intelligence activities.” Though only Amazon’s centers are known to have come under fire, a March 11 tweet from the quasi-official Tasnim News Agency listed dozens of regional facilities, including data centers owned by Microsoft, Google and others, deemed “Enemy Technology Infrastructure” suitable for targeting.

    It’s unclear if the Amazon data centers struck by Iranian drones are used for military purposes, civilian purposes, or both. And it’s unknown if the attacks in any way hindered the militaries of the U.S., Israel, or their allies in the Gulf from using AI or other cloud-based services in their war efforts. But with Amazon, Google, and even Facebook parent company Meta all eager partners of the Pentagon, augmenting the destructive power of the United States in Iran and elsewhere, server farms may now have the same status as factories building bombs and warplanes.

    Scholars of international law and the laws of armed conflict say that when a military runs on the cloud, the cloud becomes a legal military target. But the cloud is an abstraction, not a physical site — a global network of millions of chips in servers spread across hundreds of massive buildings across the planet, servicing both civilian apps and state tools used to surveil and kill. Separating the former from the latter is an extremely difficult task.

    “The legality turns on whether the specific facility, at the specific moment, is genuinely serving the military operations of a party to the conflict in a way that offers a concrete and definite advantage to the attacker,” explained León Castellanos-Jankiewicz, a lawyer with the Asser Institute for International and European Law in The Hague.

    Sometimes the split between military and civilian use is straightforward. Microsoft, for example, helps run the Joint Warfighter Cloud Capability, which the Pentagon says provides it with “greater lethality.” This work involves the processing of classified data, which the government does not want commingling with civilian tech. Cloud computing services are generally offered via geographically distinct “regions,” each made up of many physical data centers. Customers typically select the region that is closest to them to minimize lag time. Microsoft’s US DoD Central and US DoD East regions are “reserved for exclusive [Department of Defense] use,” according to the company, and are serviced by data centers in Des Moines, Iowa, and Northern Virginia, respectively.
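
    For readers unfamiliar with how region selection works in practice, the snippet below is a minimal sketch using AWS’s boto3 SDK (an illustrative assumption; the paragraph above describes Microsoft’s cloud, whose SDK exposes regions analogously). The point is that a region name maps to specific physical buildings: the same API call is routed to entirely different data centers simply by naming a different region.

      # Minimal sketch of cloud region selection, using the AWS boto3 SDK.
      # Illustrative assumption: other providers' SDKs work analogously.
      import boto3

      # Each client is pinned to a named geographic region; "me-south-1"
      # is the AWS region physically hosted in Bahrain.
      s3_bahrain = boto3.client("s3", region_name="me-south-1")
      s3_virginia = boto3.client("s3", region_name="us-east-1")

      for client in (s3_bahrain, s3_virginia):
          print(client.meta.region_name)  # where this client's requests route

    A workload pinned to a given region lives in that region’s physical facilities, which is precisely what makes the question of a building’s military versus civilian use answerable, at least in principle, on a per-region basis.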

    Amazon offers similar cloud regions exclusive for Pentagon use, though the location of these data centers is not public. Oracle, another JWCC provider, operates Pentagon-specific facilities in Chicago, Phoenix, and Virginia. Companies are understandably tight-lipped about where exactly on the map these facilities stand, in no small part because Iran, or any country at war with the U.S., would have reason to target them.

    “A data center that is used solely or primarily for military applications is targetable,” said Ioannis Kalpouzos, an international law scholar and visiting professor at Harvard Law, “and a center that supports the Pentagon’s JWCC falls in that category.”


    The march of data center construction has become a point of contention across the United States and around the world, with communities frequently — and sometimes successfully — rallying to block what they view as enormous resource-draining eyesores. But for those living in the widening shadow of data centers, planned or built, their status as military targets may be unsettling beyond concerns over water and energy consumption.

    And as Defense Secretary Pete Hegseth aggressively shoehorns AI tools into the military wherever possible, the rapid expansion of data centers means the potential proliferation of legitimate military targets across the United States.

    With comparisons between the destructive power of AI-augmented warfare and nuclear weaponry becoming more common, the ever-expanding network of American data centers may recreate Cold War anxieties around intercontinental ballistic missile, or ICBM, silo placement. The country’s nuclear launch capabilities were famously clustered in the relatively sparsely populated Upper Midwest, forming a so-called “nuclear sponge” that would draw Soviet nukes away from population centers and toward rural areas and farmland.

    But the legal calculus around most data centers will be less clear. Google, for example, says the Pentagon uses both its general purpose public cloud and smaller specialized air-gapped networks that don’t touch the public internet, depending on the sensitivity of the data involved. Even cloud work involving Top Secret military data “can operate within Google’s trusted, secure, and managed data centers.” The company also sells modular mini-data centers for use closer to battlefields or bases.

    These arrangements, shrouded in both military and trade secrecy, make it hard to assess whether a server is hosting a student’s homework or Air Force R&D, blurring the legality of attacking data centers that may host both. Google may have little control over how governments use its cloud tools; The Intercept has previously reported that Google executives worried internally they wouldn’t be able to tell how the Israeli military was deploying its cloud services.

    “The practical challenge is that cloud infrastructure is often technically opaque, even to providers themselves,” Castellanos-Jankiewicz said. “The services a given data center supports may not be readily ascertainable from the outside or even inside, which complicates the attacker’s legal obligations considerably.”

    Amazon and Google’s Project Nimbus similarly provides cloud computing services across the Israeli government, including both civilian agencies and the Ministry of Defense, along with state-owned weapons companies.

    “The picture becomes more legally complex when a data center functions as a so-called ‘dual-use’ object,” one simultaneously hosting military data or capabilities alongside civilian services, Castellanos-Jankiewicz told The Intercept. “Once a facility is found to make an effective contribution to military action, the entire physical object can, under the dominant legal view, qualify as a military objective.”

    The embrace of commercial cloud computing by the U.S. and others has muddled an already murky legal picture, Castellanos-Jankiewicz explained. “A military’s decision to store classified data or run AI-enabled military systems on commercial cloud infrastructure shared with civilian services could itself raise legal concerns — particularly if the commingling of military and civilian uses makes a strike more likely or increases the foreseeable harm to civilians when one occurs.”


    Determining whether a given data center can be legally attacked under international humanitarian law — itself composed of various treaties that not every country adheres to — relies on a complex series of balancing tests that rarely produce concrete answers. To begin with, every object and person is generally presumed civilian and exempt from attack under this framework. Before launching a strike, a country is supposed to have a verifiable reason to believe a data center contributes to the enemy war effort, and reason to believe an attack will appreciably harm that effort. What “effectively contributes to military action” will, of course, be a source of disagreement.

    Anthropic’s Claude large language model was reportedly used to accelerate American airstrikes against Iran; Claude, in turn, was built in part using 500,000 chips housed in an $11 billion Amazon data center in Indiana. If Claude is now arguably a weapon, is this Indiana site the data equivalent of a bomb factory? Kalpouzos, the Harvard Law visiting professor, told The Intercept it depends on the facts at the moment the bomb hits, not past usage. “If the facility is currently used in the training of the LLM that is used in the conduct of military operations — for example, by fine-tuning object classification or user-interaction features — then this could render it targetable,” he said.

    In a recent article for Just Security, Klaudia Klonowska and Michael Schmitt said that the law calls for proportionality and restraint even against military targets. An attack against a data center that provided both military and civilian computing would need to be precise enough to destroy the former while minimizing harm to the latter, they argued. But international law may call for a degree of carefulness that militaries have little interest in. “If it were possible to attack only the area of the data center where servers hosting military data are located without destroying the entire center, the attacker would need to do so,” they wrote.

    These requirements can be hard to observe in reality. The U.S. and Israel both tout the extreme precision of airstrikes that regularly slaughter civilians. And neither country, nor Iran, is a signatory to some of the relevant legal frameworks that make up the so-called “laws of armed conflict” in the first place.

    Indiscriminate warfare as practiced by the U.S. and Israel has also, ironically, been instrumental in reshaping how these laws are interpreted, effectively loosening them. Throughout the Israeli genocide in Gaza, Israel’s military and the Pentagon both made clear that they consider it acceptable to destroy an apartment block or hospital if one first claims there is a genuine military target inside.

    The second Trump administration in particular has been keen to more tightly integrate Silicon Valley into the global American killing apparatus, a plan to which the industry has shown itself to be largely amenable. Even after Anthropic was thoroughly maligned by the administration following the collapse of its Pentagon deal over purported disagreements around safety guardrails, CEO Dario Amodei issued a public statement making clear he still wanted in on military spending: “Anthropic has much more in common with the Department of War than we have differences. We both are committed to advancing US national security and defending the American people, and agree on the urgency of applying AI across the government.” That attitude, now commonplace across the tech sector, will see the further commingling of consumer tech and warfare, both in the abstract and under sprawling data center rooftops across the country.

    “These [data centers] are further melding military and civilian infrastructure,” said Kalpouzos, “and together with the increasingly permissive rules of engagement adopted by the U.S. and Israel, are potentially drawing in larger sectors of the economy and society in what is targeted and destroyed.”
