Author: tio

  • Kazakh Authorities Accuse Businessman of Running Shadow Loan Network

    Kazakh authorities have accused businessman Sergey Koynov and his associates of operating a network of commission shops that illegally issued high-interest loans secured by personal property and then laundered the proceeds.

    Investigators say the alleged scheme relied on a licensed pawnshop, Active Lombard, and several affiliated firms — including One Billion Sales, The Best Seller and Shark in Sales — that operated under the single “Active Market” brand. While the outlets were presented as ordinary commission shops, authorities say they effectively functioned as unlicensed lenders.

    Earlier, Agency for Financial Monitoring Deputy Chair Zhenis Elemesov said the commission agreements used with customers were “formal” and “fictitious,” with fees amounting to roughly 1.73% to 2.07% per day — well above Kazakhstan’s legal cap for this type of lending.

    Investigators estimate that more than 1.9 million contracts were signed and that suspected criminal income exceeded about $38 million.

    To disguise the origin of the funds, the case file describes repeated transfers between accounts and the use of sham contracts with affiliated entities, followed by dividend payouts totaling nearly $11 million.

    A separate company, Zeus Co — nominally an IT business — is alleged to have received about $5 million under inflated software-services contracts. Most of that money was then paid out to Koynov as dividends, according to investigators.

    With court authorization, authorities froze assets valued at about $18 million, including four apartments and two houses in Astana, four apartments in the United Arab Emirates, six luxury vehicles — among them a Rolls-Royce Cullinan and a Bentley Bentayga — jewelry, and more than $5 million held in bank deposits.

    Koynov is being held in pretrial detention, while four alleged accomplices were placed under travel restrictions. The agency said further details could not be disclosed under Kazakhstan’s criminal procedure rules.

  • The Internet Still Works: Yelp Protects Consumer Reviews

    Section 230 helps make it possible for online communities to host user speech: from restaurant reviews, to fan fiction, to collaborative encyclopedias. But recent debates about the law often overlook how it works in practice. To mark its 30th anniversary, EFF is interviewing leaders of online platforms about how they handle complaints, moderate content, and protect their users’ ability to speak and share information.

    Yelp hosts millions of reviews written by internet users about local businesses. Most reviews are positive, but over the years, some businesses have tried to pressure Yelp to remove negative reviews, including through legal threats. Since its founding more than two decades ago, Yelp has fought major legal battles to defend reviewers’ rights and preserve the legal protections that allow consumers to share honest feedback online.

    Aaron Schur is General Counsel at Yelp. He joined the company in 2010 as one of its first lawyers and has led its litigation strategy for more than a decade, helping secure court decisions that strengthened legal protections for consumer speech. He was interviewed by Joe Mullin, a policy analyst on EFF’s Activism Team. 

    Joe Mullin: How would you describe Section 230 to a regular Yelp user who doesn’t know about the law?   

    Aaron Schur: I’d say it is a simple rule that, generally speaking, when content is posted online, any liability for that content lies with the person who created it, not the platform that is displaying it. That allows Yelp to show your review and keep it up if a business complains about it. It also means that we can develop ways to highlight the reviews we think are most helpful and reliable, and mitigate fake reviews, without creating liability for Yelp, because we’re allowed to host third-party content.

    The political debate around Section 230 often centers around the behavior of companies, especially large companies. But we rarely hear about users, even though the law also applies to users. What is the user story that is getting lost? 

    Section 230 at heart protects users. It enables a diversity of platforms and content moderation practices—whether it’s reviews on Yelp, videos on another platform, whatever it may be. 

    Without Section 230, platforms would face heavy pressure to remove consumer speech when we’re threatened with legal action—and that harms users, directly. Their content gets removed. It also harms the greater number of users who would access that content. 

    The focus on the biggest tech companies, I think, is understandable but misplaced when it comes to Section 230. We have tools that exist to go after dominant companies, both at the state and the federal level, and Congress could certainly consider competition-based laws—and has, over the last several years. 

    Tell me about the editorial decisions that Yelp makes regarding the highlighting of reviews, and the weeding out of reviews that might be fake.  

    Yelp is a platform where people share their experiences with local businesses, government agencies, and other entities. People come to Yelp, by the millions, to learn about these places.

    With traffic like that come incentives for bad actors to game the system. Some unscrupulous businesses try to create fake reviews, or compensate people to write reviews, or ask family and friends to write reviews. Those reviews will be biased in a way that won’t be transparent. 

    Yelp developed an automated system to highlight reviews we find most trustworthy and helpful. Other reviews may be placed in a “not recommended” section where they don’t affect a business’s overall rating, but they’re still visible. That helps us maintain a level playing field and keep user trust. 

    Tell me what your process for handling complaints about user reviews looks like. 

    We have a reporting function for reviews. Those reports get looked at by an actual human, who evaluates the review and looks at data about it to decide whether it violates our guidelines. 

    We don’t remove a review just because someone says it’s “wrong,” because we can’t litigate the facts in your review. If someone says “my pizza arrived cold,” and the restaurant says, no, the pizza was warm—Yelp is not in a position to adjudicate that dispute. 

    That’s where Section 230 comes in. It says Yelp doesn’t have to [decide who’s right]. 

    What other types of moderation tools have you built? 

    Any business, free of charge, can respond to a review, and that response appears directly below it. They can also message users privately. We know when businesses do this, it’s viewed positively by users.

    We also have a consumer alert program, where members of the public can report businesses that may be compensating people for positive reviews—offering things like free desserts or discounted rent. In those cases, we can place an alert on the business’s page and link to the evidence we received. We also do this when businesses make certain types of legal threats against users.

    It’s about transparency. If a business’s rating is inflated because the business is threatening reviewers who rate it less than five stars with lawsuits, consumers have a right to know what’s happening. 

    How are international complaints, where Section 230 doesn’t come into play, different? 

    We have had a lot of matters in Europe, in particular in Germany. It’s a different system there—it’s notice-and-takedown. They have a line of cases that require review sites to basically provide proof that the person was a customer of the business. 

    If a review was challenged, we would sometimes ask the user for documentation, like an invoice, which we would redact before providing it. Often, they would do that, in order to defend their own speech online. Which was surprising to me! But they wouldn’t always—which shows the benefit of Section 230. In the U.S., you don’t have this back-and-forth that a business can leverage to get content taken down. 

    And invariably, the reviewer was a customer. The business was just using the system to try to take down speech. 

    Yelp has been part of some of the most important legal cases around Section 230, and some of those didn’t exist when we spoke in 2012. What happened in the Hassel v. Bird case, and why was that important for online reviewers?

    Hassel v. Bird was a case where a law firm got a default judgment against an alleged reviewer, and the court ordered Yelp to remove the review—even though Yelp had not been a party to the case. 

    We refused, because the order violated Section 230, due process, and Yelp’s First Amendment rights as a publisher. But the trial court and the appellate court both ruled against us, allowing a side-stepping of Section 230. 

    The California Supreme Court ultimately reversed those rulings, and recognized that plaintiffs cannot accomplish indirectly [by suing a user and then ordering a platform to remove content] what they could not accomplish directly by suing the platform itself.

    We spoke to you in 2012, and the landscape has really changed. Section 230 is really under attack in a way that it wasn’t back then. From your vantage point at Yelp, what feels different about this moment? 

    The biggest tech companies got even bigger, and even more powerful. That has made people distrustful and angry—rightfully so, in many cases. 

    When you read about the attacks on 230, it’s really politicians calling out Big Tech. But what is never mentioned is little tech, or “middle tech,” which is how Yelp bills itself. If 230 is weakened or repealed, it’s really the biggest companies, the Googles of the world, that will be able to weather it better than smaller companies like Yelp. They have more financial resources. It won’t actually accomplish what the legislators are setting out to accomplish. It will have unintended consequences across the board. Not just for Yelp, but for smaller platforms. 

    This interview was edited for length and clarity.

  • The Internet Still Works: Wikipedia Defends Its Editors

    Section 230 helps make it possible for online communities to host user speech: from restaurant reviews, to fan fiction, to collaborative encyclopedias. But recent debates about the law often overlook how it works in practice. To mark its 30th anniversary, EFF is interviewing leaders of online platforms about how they handle complaints, moderate content, and protect their users’ ability to speak and share information. 

    A decade ago, Wikimedia Foundation, the nonprofit that operates Wikipedia, received 304 requests to alter or remove content over a two-year period, not including copyright complaints. In 2024 alone, it received 664 such takedown requests. Only four were granted. As complaints over user speech have grown, Wikimedia has expanded its legal team to defend the volunteer editors who write and maintain the encyclopedia. 

    Jacob Rogers is Associate General Counsel at the Wikimedia Foundation. He leads the team that deals with legal complaints against Wikimedia content and its editors. Rogers also works to preserve the legal protections, including Section 230, that make a community-governed encyclopedia possible. 

    Joe Mullin: What kind of content do you think would be most in danger if Section 230 was weakened? 

    Jacob Rogers: When you’re writing about a living person, if you get it wrong and it hurts their reputation, they will have a legal claim. So that is always a concentrated area of risk. It’s good to be careful, but I think if there were a looser liability regime, people could become too careful—so careful they couldn’t write important public information. 

    Current events and political history would also be in danger. Writing about images of Muhammad has been a flashpoint in different countries, because depictions are religiously sensitive and controversial in some contexts. There are different approaches to this in different languages. You might not think that writing about the history of art in your country 500 years ago would get you into trouble—but it could, if you’re in a particular country, and it’s a flash point. 

    Writing about history and culture matters to people. And it can matter to governments, to religions, to movements, in a way that can cause people problems. That’s part of why protecting editors’ pseudonymity and their ability to work on these topics is so important. 

    If you had to describe to a Wikipedia user what Section 230 does, how would you explain it to them? 

    If there was nothing—no legal protection at all—I think we would not be able to run the website. There would be too many legal claims, and the potential damages of those claims could bankrupt the company. 

    Section 230 protects the Wikimedia Foundation, and it allows us to defer to community editorial processes. We can let the user community make those editorial decisions, and figure things out as a group—like how to write biographies of living persons, and what sources are reliable. Wikipedia wouldn’t work if it had centralized decision making. 

    What does a typical complaint look like, and how does the complaint process look? 

    In some cases, someone is accused of a serious crime, or of certain types of wrongdoing or scams, and there’s a debate about the sources. There are also debates about people’s politics, where someone is accused of being “far-right” or “far-left.” 

    The first step is community dispute resolution. At the top of every article on Wikipedia there’s a button that translates to “talk.” If you click it, that gives you space to discuss how to write the article. When editors get into a fight about what to write, they should stop and discuss it with each other first. 

    If page editors can’t resolve a dispute, third-party editors can come in, or ask for a broader discussion. If that doesn’t work, or there’s harassment, we have Wikipedia volunteer administrators, elected by their communities, who can intervene. They can ban people temporarily, to cool off. When necessary, they can ban users permanently. In serious cases, arbitration committees make final decisions. 

    And these community dispute processes we’ve discussed are run by volunteers, with no Wikimedia Foundation employees involved? Where does Section 230 come into play?

    That’s right. Section 230 helps us, because it lets disputes go through that community process. Sometimes someone’s edits get reversed, and they write an angry letter to the legal department. If we were liable for that, we would have the risk of expensive litigation every time someone got mad. Even if their claim is baseless, it’s hard to make a single filing in a U.S. court for less than $20,000. There’s a real “death by a thousand cuts” problem, if enough people filed litigation. 

    Section 230 protects us from that, and allows for quick dismissal of invalid claims. 

    When we’re in the United States, then that’s really the end of the matter. There’s no way to bypass the community with a lawsuit. 

    How does dealing with those complaints work in the U.S.? And how is it different abroad? 

    In the US, we have Section 230. We’re able to say, go through the community process, and try to be persuasive. We’ll make changes, if you make a good persuasive argument! But the Foundation isn’t going to come in and change it because you made a legal complaint. 

    But in the EU, they don’t have Section 230 protections. Under the Digital Services Act, once someone claims your website hosts something illegal, they can go to court and get an injunction ordering us to take the content down. If we don’t want to follow that order, we have to defend the case in court. 

    In one German case, the court essentially said, “Wikipedians didn’t do good enough journalism.” The court said the article’s sources weren’t strong enough. The editors used industry trade publications, and the court said they should have used something like German state media, or the country’s top newspapers, not a “niche” publication. We disagreed with that. 

    What’s the cost of having to go to court regularly to defend user speech? 

    Because the Foundation is a mission-driven nonprofit, we can take on these defenses in a way that’s not always financially sensible, but is mission sensible. If you were focused on profit, you would grant a takedown. The cost of a takedown is maybe one hour of a staff member’s time. 

    We can selectively take on cases to benefit the free knowledge mission, without bankrupting the company. To do litigation in the EU costs something on the order of $30,000 for one hearing, to a few hundred thousand dollars for a drawn-out case.

    I don’t know what would happen if we had to do that in the United States. There would be a lot of uncertainty. One big unknown is—how many people are waiting in the wings for a better opportunity to use the legal system to force changes on Wikipedia? 

    What does the community editing process get right that courts can get wrong? 

    Sources. Wikipedia editors might cite a blog because they know the quality of its research. They know what’s going into writing that. 

    It can be easy sometimes for a court to look at something like that and say, well, this is just a blog, and it’s not backed by a university or institution, so we’re not going to rely on it. But that’s actually probably a worse result. The editors who are making that consideration are often getting a more accurate picture of reality. 

    Policymakers who want to limit or eliminate Section 230 often say their goal is to get harmful content off the internet, and fast. What do you think gets missed in the conversation about removing harmful content? 

    One is: harmful to whom? Every time people talk about “super fast tech solutions,” I think they leave out academic and educational discussions. Everyone talks about how there’s a terrorism video, and it should come down. But there’s also news and academic commentary about that terrorism video. 

    There are very few shared universal standards of harm around the world. Everyone in the world agrees, roughly speaking, on child protection, and child abuse images. But there’s wild disagreement about almost every other topic. 

    If you take down something to comply with UK law, the takedown is global. And you’ll be taking away the rights of someone in the US or Australia or Canada to see that content. 

    This interview was edited for length and clarity. EFF interviewed Wikimedia attorney Michelle Paulson about Section 230 in 2012.

  • Don’t Let Elon Musk Implant a Device in Your Skull

    The richest man in the world wants to implant a chip in your brain. You wouldn’t be the first: as of last September, 12 people had already undergone the operation and 10,000 more were on the waiting list.

  • On Its 30th Birthday, Section 230 Remains The Lynchpin For Users’ Speech

    For thirty years, internet users have benefited from a key federal law that allows everyone to express themselves, find community, organize politically, and participate in society. Section 230, which protects internet users’ speech by protecting the online intermediaries we rely on, is the legal support that sustains the internet as we know it.

    Yet as Section 230 turns 30 this week, there are bipartisan proposals in Congress to either repeal or sunset the law. These proposals seize upon legitimate concerns with the harmful and anti-competitive practices of the largest tech companies, but then misdirect that anger toward Section 230.

    But rolling back or eliminating Section 230 will not stop invasive corporate surveillance that harms all internet users. Killing Section 230 won’t end the dominance of the current handful of large tech companies—it would cement their monopoly power.

    The current proposals also ignore a crucial question: what legal standard should replace Section 230? The bills provide no answer, refusing to grapple with the tradeoffs inherent in making online intermediaries liable for users’ speech.

    This glaring omission shows what these proposals really are: grievances masquerading as legislation, not serious policy. That is especially true because the speech problems with alternatives to Section 230’s immunity are readily apparent, both in the U.S. and around the world. Experience shows that those systems result in more censorship of internet users’ lawful speech.

    Let’s be clear: EFF defends Section 230 because it is the best available system to protect users’ speech online. By immunizing intermediaries for their users’ speech, Section 230 benefits users. Services can distribute our speech without filters, pre-clearance, or the threat of dubious takedown requests. Section 230 also directly protects internet users when they distribute other people’s speech online, such as when they reshare another user’s post or host a comment section on their blog.

    It was the danger of losing the internet as a forum for diverse political discourse and culture that led to the law in 1996. Congress created Section 230’s limited civil immunity because it recognized that promoting more user speech outweighed potential harms. Congress decided that when harmful speech occurs, it’s the speaker that should be held responsible—not the service that hosts the speech. The law also protects social platforms when they remove posts that are obscene or violate the services’ own standards. And Section 230 has limits: it does not immunize services if they violate federal criminal laws.

    Section 230 Alternatives Would Protect Less Speech

    With so much debate around the downsides of Section 230, it’s worth considering: What are some of the alternatives to immunity, and how would they shape the internet?

    The least protective legal regime for online speech would be strict liability. Here, intermediaries always would be liable for their users’ speech—regardless of whether they contributed to the harm, or even knew about the harmful speech. It would likely end the widespread availability and openness of social media and web hosting services we’re used to. Instead, services would not let users speak without vetting the content first, via upload filters or other means. Small intermediaries with niche communities may simply disappear under the weight of such heavy liability.

    Another alternative: Imposing legal duties on intermediaries, such as requiring that they act “reasonably” to limit harmful user content. This would likely result in platforms monitoring users’ speech before distributing it, and being extremely cautious about what they allow users to say. That inevitably would lead to the removal of lawful speech—probably on a large scale. Intermediaries would not be willing to defend their users’ speech in court, even if it is entirely lawful. In a world where any service could be easily sued over user speech, only the biggest services would survive. They’re the ones that would have the legal and technical resources to weather the flood of lawsuits.

    Another option is a notice-and-takedown regime, like what exists under the Digital Millennium Copyright Act. That will also result in takedowns of legitimate speech. And there’s no doubt such a system will be abused. EFF has documented how the DMCA leads to widespread removal of lawful speech (https://www.eff.org/takedowns) based on frivolous copyright infringement claims. Replacing Section 230 with a takedown system will invite similar behavior, and powerful figures and government officials will use it to silence their critics.

    The closest alternative to Section 230’s immunity provides protections from liability until an impartial court has issued a full and final ruling that user-generated content is illegal, and ordered that it be removed. These systems ensure that intermediaries will not have to cave to frivolous claims. But they still leave open the potential for censorship because intermediaries are unlikely to fight every lawsuit that seeks to remove lawful speech. The cost of vindicating lawful speech in court may be too high for intermediaries to handle at scale.

    By contrast, immunity takes the variable of whether an intermediary will stand up for their users’ speech out of the equation. That is why Section 230 maximizes the ability for users to speak online.

    In some narrow situations, Section 230 may leave victims without a legal remedy. Proposals aimed at those gaps should be considered, though lawmakers should pay careful attention that in vindicating victims, they do not broadly censor users’ speech. But those legitimate concerns are not the criticisms that Congress is levying against Section 230.

    EFF will continue to fight for Section 230, as it remains the best available system to protect everyone’s ability to speak online.

  • Record 40-Tonne Cannabis Haul Found in North Macedonia

    Authorities in North Macedonia have seized a record 40 tonnes of cannabis after inspections at licensed medicinal plant companies revealed that the methods of production, storage, and handling indicated the drug was actually meant for illegal activities.

    Officials suspect the cannabis is connected to a recent record seizure in Serbia of about five tonnes of marijuana and weapons, which may have originated in North Macedonia.

    The Interior Ministry said inspections at a Skopje-based company uncovered about nine tonnes of marijuana and more than 1,300 bottles of cannabis oil. Investigators also found over 31 tonnes of marijuana and plant biomass at several companies in the eastern part of the country.

    The scrutiny intensified after one of the suspects detained in Serbia less than two weeks ago was linked to the Skopje company’s ownership. Health inspectors found major discrepancies between recorded and actual quantities, prompting prosecutors to seize all cannabis on the premises.

    The seizures are under investigation by a joint team of police, prosecutors from the Basic Public Prosecutor’s Office for Organized Crime and Corruption, customs officials, and the Health Ministry’s supervisory commission.

    On January 29, Serbian police reported seizing about five tonnes of marijuana near Kruševac, along with automatic rifles and a rocket launcher. Interior Minister Ivica Dačić said the haul, valued at 7 to 10 million euros ($8.3–11.9 million), involved two suspects arrested for illegal drug production and weapons violations.

    North Macedonia’s Prime Minister Hristijan Mickoski called the uncovered irregularities “catastrophic” and said authorities are continuing the investigation. He noted that the Serbia seizure was part of the same network and that the drugs seized in Serbia probably originated from North Macedonia.

  • Op-ed: Weakening Section 230 Would Chill Online Speech

    (This appeared as an op-ed published Friday, Feb. 6 in the Daily Journal, a California legal newspaper.)

    Section 230, “the 26 words that created the internet,” was enacted 30 years ago this week. It was no rush-job—rather, it was the result of wise legislative deliberation and foresight, and it remains the best bulwark to protect free expression online.

    The internet lets people everywhere connect, share ideas and advocate for change without needing immense resources or technical expertise. Our unprecedented ability to communicate online—on blogs, social media platforms, and educational and cultural platforms like Wikipedia and the Internet Archive—is not an accident. In writing Section 230, Congress recognized that for free expression to thrive on the internet, it had to protect the services that power users’ speech. Section 230 does this by preventing most civil suits against online services that are based on what users say. The law also protects users who act like intermediaries when they, for example, forward an email, retweet another user or host a comment section on their blog.

    The merits of immunity—both for internet users who rely on intermediaries, from ISPs to email providers to social media platforms, and for internet users who are themselves intermediaries—are readily apparent when compared with the alternatives.

    One alternative would be to provide no protection at all for intermediaries, leaving them liable for anything and everything anyone says using their service. This legal risk would essentially require every intermediary to review and legally assess every word, sound or image before it’s published—an impossibility at scale, and a death knell for real-time user-generated content.

    Another option: giving protection to intermediaries only if they exercise a specified duty of care, such as where an intermediary would be liable if they fail to act reasonably in publishing a user’s post. But negligence and other objective standards are almost always insufficient to protect freedom of expression because they introduce significant uncertainty into the process and create real chilling effects for intermediaries. That is, intermediaries will choose not to publish anything remotely provocative—even if it’s clearly protected speech—for fear of having to defend themselves in court, even if they are likely to ultimately prevail. Many Section 230 critics bemoan the fact that it prevented courts from developing a common law duty of care for online intermediaries. But the criticism rarely acknowledges the experience of common law courts around the world, few of which adopted an objective standard, and many of which adopted immunity or something very close to it.

    Another alternative is a knowledge-based system in which an intermediary is liable only after being notified of the presence of harmful content and failing to remove it within a certain amount of time. This notice-and-takedown system invites tremendous abuse, as seen under the Digital Millennium Copyright Act’s approach: It’s too easy for someone to notify an intermediary that content is illegal or tortious simply to get something they dislike depublished. Rather than spending the time and money required to adequately review such claims, intermediaries would simply take the content down.

    All these alternatives would lead to massive depublication in many, if not most, cases, not because the content deserves to be taken down, nor because the intermediaries want to do so, but because it’s not worth assessing the risk of liability or defending the user’s speech. No intermediary can be expected to champion someone else’s free speech at its own considerable expense.

    Nor is the United States the only government to eschew “upload filtering,” the requirement that someone must review content before publication. European Union rules avoid this also, recognizing how costly and burdensome it is. Free societies recognize that this kind of pre-publication review will lead risk-averse platforms to nix anything that anyone anywhere could deem controversial, leading us to the most vanilla, anodyne internet imaginable.

    The advent of artificial intelligence doesn’t change this. Perhaps there’s a tool that can detect a specific word or image, but no AI can make legal determinations or be prompted to identify all defamation or harassment. Human expression is simply too contextual for AI to vet; even if a mechanism could flag things for human review, the scale is so massive that such human review would still be overwhelmingly burdensome.

    Congress’ purposeful choice of Section 230’s immunity is the best way to preserve the ability of millions of people in the U.S. to publish their thoughts, photos and jokes online, to blog and vlog, post, and send emails and messages. Each of those acts requires numerous layers of online services, all of which face potential liability without immunity.

    This law isn’t a shield for “big tech.” Its ultimate beneficiaries are all of us who want to post things online without having to code it ourselves, and who want to read and watch content that others create. If Congress eliminated Section 230 immunity, for example, we would be asking email providers and messaging platforms to read and legally assess everything a user writes before agreeing to send it. 

    For many critics of Section 230, the chilling effect is the point: They want a system that will discourage online services from publishing protected speech that some find undesirable. They want platforms to publish less than what they would otherwise choose to publish, even when that speech is protected and nonactionable.

    When Section 230 was passed in 1996, about 40 million people used the internet worldwide; by 2025, estimates ranged from five billion to north of six billion. In 1996, there were fewer than 300,000 websites; by last year, estimates ranged up to 1.3 billion. There is no workforce and no technology that can police the enormity of everything that everyone says.

    Internet intermediaries—whether social media platforms, email providers or users themselves—are protected by Section 230 so that speech can flourish online.

  • Hong Kong Jails Jimmy Lai for 20 Years Under National Security Law

    A Hong Kong court on Monday sentenced Jimmy Lai, a prominent critic of Beijing whose pro-democracy newspaper closed in 2021 following police raids and arrests, to 20 years in prison under the city’s national security law.

    Lai, the former publisher of Apple Daily, was found guilty in December of conspiring to publish seditious material and conspiring to “collude with a foreign country or external elements.” The verdict is widely seen as a landmark ruling and a major blow to press freedom in the special administrative region.

    Judges deemed the foreign-collusion charges “grave” and identified Lai as the “mastermind” behind the offenses, while granting limited mitigation for his age, health, and time spent in solitary confinement.

    He is already serving a separate 5-year, 9-month sentence for an unrelated fraud case, and the court ordered 18 years of the new term to run consecutively to that previous sentence.

  • Proposed amnesty law offers political prisoners in Venezuela an ‘opportunity’

    A draft amnesty law in Venezuela, aimed at granting immediate clemency to people jailed for participating in political protests or criticizing public figures, has been welcomed by a UN commission of experts, although they have emphasised that the Venezuelan people need to be at the centre of the process.

  • UN calls for release of Hong Kong publisher Jimmy Lai following 20-year sentence

    The UN human rights chief on Monday called for the immediate release of Hong Kong media magnate Jimmy Lai after a Hong Kong court handed him a 20-year prison sentence under the city’s national security legislation, warning the verdict violates international human rights law.