Proton, the company behind the eponymous email provider Proton Mail, has won itself a loyal fanbase of dissidents, investigative journalists, and others skeptical of the prying eyes of government or Big Tech. Headquartered in Switzerland, the service describes itself as “a neutral and safe haven for your personal data, committed to defending your freedom.”
So it came as a surprise last month when Proton CEO Andy Yen praised the Republican Party in a post on X, declaring that “10 years ago, Republicans were the party of big business and Dems stood for the little guys, but today the tables have completely turned.” When the tweet went viral, Proton’s official Reddit account posted a now-deleted comment stating that “Until corporate Dems are thrown out, the reality is that Republicans remain more likely to tackle Big Tech abuses.”
Within hours, Proton deleted its response across social media accounts, stating that the post — which started with the words “Here is our official response” — was in fact “removed because it was not actually an official statement.” The reply went on to say: “Our policy is that official accounts cannot be used to express personal political opinions. If it happens by mistake, we correct it as soon as we notice it.”
A screenshot of Proton’s now-deleted official response to CEO Andy Yen’s post in support of the Republican Party. Screenshot: Reddit
Yen further claimed that the post had been an “internal miscommunication,” later also writing that Proton is “politically neutral.”
He followed it up with a longer statement explaining that “while we may share facts and analysis, our policy going forward will be to share no opinions of a political nature. The line between facts, analysis, and opinions can be blurry at times, but we will seek to better clarify this over time through your feedback and input.” Yen didn’t specifically address whether the deleted post had constituted opinion or analysis.
In response to a request for comment, Proton reiterated the claim that it is a “politically neutral organization,” then went on to state that “regardless of one’s views about the wider Republican platform, if you agree that action is needed on antitrust then the appointment of Gail Slater is a positive thing,” referring to President Donald Trump’s choice to head the Justice Department’s antitrust division. Proton further stated that “Big Tech CEOs are tripping over themselves to kiss the ring precisely because Trump represents an unprecedented challenge to their monopolistic dominance.”
When Governments Ask for Data
Yen has repeatedly described Proton as a “privacy-first” company, and its homepage touts that “With Proton, your data belongs to you, not tech companies, governments, or hackers.” However, Proton has in the past revealed user information to authorities. For instance, Proton once handed over a user’s IP address to Swiss police after French authorities routed a request through Europol. Yen wrote a Twitter post at the time, stating, “Proton must comply with Swiss law. As soon as a crime is committed, privacy protections can be suspended and we’re required by Swiss law to answer requests from Swiss authorities.”
Proton’s information for law enforcement page states that it requires a copy of a “police report or court order,” which may be either foreign or domestic. For its part, Proton told The Intercept that “Proton does not comply with US subpoenas, it doesn’t matter if it’s Biden or Trump in power.”
Andy Yen, co-founder of Proton Mail. Photo: Stuart Cahill/MediaNews Group/Boston Herald via Getty Images
While Proton states that it “cannot read any of your messages or hand them over to third parties,” the same doesn’t apply to email subjects; sender or recipient names and email addresses; the time a message was sent; or other information in the “header” section of email messages. Proton explicitly states that “if served with a valid Swiss court order, we do have the ability to turn over the subjects of your messages.”
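To make the distinction concrete, here is a minimal sketch using Python’s standard-library email parser on an entirely hypothetical message. The fields shown (From, To, Subject, Date, Message-ID) are the kind of “header” metadata described above; this illustrates the general structure of email, not what Proton specifically logs or discloses.

```python
# A minimal sketch of which parts of an email are "header" metadata versus
# body content, using Python's standard-library parser. The message below
# is entirely hypothetical.
from email import message_from_string

raw = """\
From: Alice Example <alice@example.com>
To: Bob Example <bob@example.com>
Subject: Meeting notes
Date: Mon, 20 Jan 2025 09:30:00 +0000
Message-ID: <abc123@example.com>

The body text is what end-to-end encryption protects; headers are not.
"""

msg = message_from_string(raw)

# Header metadata: typically visible to a provider even when the body
# is end-to-end encrypted.
for field in ("From", "To", "Subject", "Date", "Message-ID"):
    print(f"{field}: {msg[field]}")

# Body content: what the provider cannot read when messages are
# end-to-end encrypted.
print("Body:", msg.get_payload())
```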
Under Trump’s previous term, the Department of Justice sought to clandestinely obtain “non-content” communications records, including phone and email records, of reporters at a variety of news outlets such as CNN and the New York Times. While the subject of an email is considered “content,” non-content records include metadata such as the date and time a message was sent, as well as the sender and recipient of an email.
The prior behavior of a Trump-led DOJ, coupled with the praise and efforts by tech CEOs to curry favor with the Trump camp, has raised the question of how amenable the industry will be to data requests from the incoming administration. It’s a particularly important question for the types of users who have flocked to Proton — the kind fearful of exposing sensitive sources or persecuted individuals to state surveillance. (The Intercept uses Proton Mail as its email provider.)
“Platforms inherently occupy a position of trust because we want them to have users’ backs when the government comes knocking for data,” said Andrew Crocker, surveillance litigation director at the Electronic Frontier Foundation, which has published annual “Who Has Your Back?” reports analyzing companies’ acquiescence to government requests for user data.
“It’s reasonable to worry that tech companies’ backbone for protecting users in this way might soften when they get too politically involved with any one administration,” Crocker said.
When Tulsi Gabbard filed ethics paperwork to serve as Donald Trump’s director of national intelligence, she promised to sell her holdings of bitcoin, Cronos, Ethereum, and Solana cryptocurrencies. For decades, such pledges have been a routine part of the standard government hiring process. Congress passed a law in 1962 criminalizing conflicts of interest, and the Office of Government Ethics singled out cryptocurrencies as a concern in 2022.
But the ethics rules that restrict members of his administration who could sway the price of crypto don’t apply to President Donald Trump himself.
Before his inauguration, Trump cashed in on his election win with a meme coin, signaling the beginning of a new era as crypto companies push the government to allow financial regulators to get in on crypto trading themselves.
Even if that does not come to pass, one ethics watchdog said he already had grave concerns about Trump selling meme coins at the same time that he appoints the heads of the Securities and Exchange Commission and Commodity Futures Trading Commission.
“If you’ve got a direct, personal financial connection to the crypto industries, there’s a self-interested motivation to create the easiest possible path for the crypto world,” said Dylan Hedtler-Gaudette, the director of government affairs at the Project on Government Oversight.
Coins for Me but Not for Thee
Trump has won legions of fans in the crypto world by promising to create industry-friendly regulations. After months of watching scammers make money off rumors that one coin or another was an official Trump product, Trump himself got into the game two days before the inauguration.
Legal disclaimers for what is branded as a “meme card” stipulate that it is “not intended to be” used as an investment opportunity, but punters have snapped up enough of $TRUMP to give it a nominal market capitalization above $5 billion as of Monday.
Trump’s business group, the Trump Organization, is affiliated with the company that launched the token. That company and another have retained an 80 percent stake in the token, meaning they could sell off more of it in the future. It’s unclear how much Trump himself owns, Forbes has reported.
His entry into the crypto world earlier this month was so brazen that it alarmed even his fanboys in Silicon Valley, who worried that it might tarnish the entire industry. One investor called the move “very grifty and cheap.”
For many government employees, however, merely owning — let alone creating — a new cryptocurrency would be out of bounds.
In 2022, the Office of Government Ethics released an opinion warning officials that crypto posed a potential conflict of interest. The bulletin advised that owning crypto could violate a federal criminal law if the official in question oversees a matter “when there is a real possibility that the matter will result in a gain or loss to the employee’s digital assets.”
Kathleen Clark, a professor at the Washington University in St. Louis law school, said that advice was a clear-cut application of federal ethics law.
“We should expect presidents to be at least as ethical as the lowliest executive branch official.”
Luckily for Trump, the law does not apply to him. Congress granted presidents an exception in the 1980s at the urging of George H.W. Bush’s White House counsel, C. Boyden Gray.
“We should expect presidents to be at least as ethical as the lowliest executive branch official. But thanks to Boyden Gray, we no longer have that statutory requirement imposed on presidents or vice presidents,” Clark said.
The law also does not apply to members of Congress, who set and enforce their own ethics rules. At present, members are allowed to trade crypto, although few of them do.
In response to Trump’s new venture, one crypto-friendly Democrat has proposed creating a clear-cut rule for elected officials across the board.
“Elected officials must be barred from having meme coins by law,” Rep. Ro Khanna, D-Calif., said on X.
Cracking Open the Door
As Trump takes the reins, the crypto industry is pushing an agenda that includes shifting regulatory oversight from the SEC to the more thinly staffed and industry-friendly CFTC, as well as loosening the rules that prevent big banks from holding digital assets.
More recently, the industry is calling for a shift in the application of ethics laws. At present, government officials are barred from holding a cryptocurrency if their official duties include overseeing it. That grates on crypto industry figures.
“Imagine designing an FAA safety protocol without ever seeing a plane, or legislating lightbulb efficiency without ever flipping a switch. It may be possible, but it certainly won’t yield sensible policy,” employees of the crypto investment firm Paradigm wrote in a 2023 blog post.
After Trump’s victory in November, the D.C.-based industry group the Digital Chamber wrote a letter to the Office of Government Ethics asking for an exception allowing regulators to maintain “minor” crypto holdings, “limited to a threshold that poses no risk of conflict of interest.”
The industry has compared such a “de minimis” exception to current rules that might allow government officials to hold small amounts of stock in a particular company.
Clark, the professor who studies ethics law, said the industry’s arguments were flawed. Allowing government officials to dabble in crypto, she said, would make them both “more informed and more biased.”
The crypto industry was also overlooking a key difference between digital assets — many of which still operate in a Wild West world outside the reach of regulators — and publicly traded stocks overseen by the SEC.
“Digital assets, they can be junk. I suppose publicly traded companies could be junk, but there is a much greater risk, it seems to me, of pump-and-dump schemes,” Clark said. “OGE is drawing a distinction between digital assets and publicly traded companies. One of those things is not like the other. One of them has actual value.”
Rupert Murdoch’s News Corporation has misled the Australian Parliament and is liable to prosecution — not that government will lift a finger to enforce the law, reports Michael West Media.
SPECIAL REPORT: By Michael West
Rupert Murdoch’s News Corporation has misled the Australian Parliament. In a submission to the Senate, the company claimed, “Foxtel also pays millions of dollars in income tax, GST and payroll tax, unlike many of our large international digital competitors”.
However, an MWM investigation into the financial affairs of Foxtel has shown Foxtel was paying zero income tax when it told the Senate it was paying “millions”. The penalty for lying to the Senate is potential imprisonment, although “contempt of Parliament” laws are never enforced.
The investigation found that NXE, the entity that controls Foxtel, paid no income tax in any of the five years from 2019 to 2023. During this time it generated $14 billion of total income.
The total tax payable across this period is $0. The average total income is $2.8 billion per year.
Foxtel Submission to the Senate Environment and Communications Legislation Committee Inquiry into The Broadcasting Legislation Amendment (2021 Measures No.1) Bill. Image: MWM screenshot
Why did News Corporation mislead the Parliament? The plausible answers are in its Foxtel Submission to the Senate Environment and Communications Legislation Committee Inquiry into The Broadcasting Legislation Amendment.
In May 2021, when the transgression occurred, the media executives for the American tycoon were lobbying a Parliamentary committee to change the laws in their favour.
By this time, Netflix had leap-frogged Foxtel Pay TV subscriptions in Australia, and Foxtel was complaining it had to spend too much money on producing local Australian content under the laws of the time. It also complained that Netflix paid almost no tax.
Big-league tax dodger
They were correct in this. Netflix, which is a big-league tax dodger itself, was by then making bucketloads of money in Australia but with zero local content requirements.
Making television drama and so forth is expensive. It is far cheaper to pipe foreign content through your channels online. As Netflix does.
The misleading of Parliament by corporations is rife, as the PwC inquiry demonstrated last year, and contempt laws need to be enforced. Corporations and their representatives routinely lie in pursuit of their corporate objectives.
If democracy is to function better, the information provided to Parliament needs to be reliable beyond doubt. Former senator Rex Patrick has made the point in these pages.
Even in this short statement to the committee of inquiry (published above), there are other misleading statements. Like many companies defending their failure to pay adequate income tax, Foxtel claims that it “paid millions” in GST and payroll tax.
Companies don’t “pay” GST or payroll tax. They collect these taxes on behalf of governments.
Little regard for laws
Further to the contempt of Parliament, corporations show so little regard for the laws of Australia that the local American boss of Tamboran Resources, a small gas fracking company controlled by a US oil billionaire, didn’t even bother turning up to give evidence when asked.
This despite being rewarded with millions in public grant money.
Politicians need to muscle up, as Greens Senator Nick McKim did when grilling former Woolies boss Brad Banducci for prevaricating over providing evidence to the supermarket inquiry.
Michael West established Michael West Media in 2016 to focus on journalism of high public interest, particularly the rising power of corporations over democracy. West was formerly a journalist and editor with Fairfax newspapers, a columnist for News Corp and even, once, a stockbroker. This article was first published by Michael West Media and is republished with permission.
RNZ International (RNZI) began broadcasting to the Pacific region 35 years ago — on 24 January 1990, the same day the Auckland Commonwealth Games opened.
Its news bulletins and programmes were carried by a brand new 100kW transmitter.
The service was rebranded as RNZ Pacific in 2017. However, its mission remains unchanged: to provide news of the highest quality and to be a trusted service to local broadcasters in the Pacific region.
Although RNZ had been broadcasting to the Pacific since 1948, in the late 1980s the New Zealand government saw the benefit of upgrading the service. Thus RNZI was born, with a small dedicated team.
The first RNZI manager was Ian Johnstone. He believed that the service should have a strong cultural connection to the people of the Pacific. To that end, it was important that some of the staff reflected the parts of the region to which RNZ Pacific broadcast.
He hired the first Pacific woman sports reporter at RNZ, the late Elma Ma’ua.
Linden Clark (from left) and Ian Johnstone, former managers of RNZ International now known as RNZ Pacific, and Moera Tuilaepa-Taylor, current manager of RNZ Pacific . . . strong cultural connection to the people of the Pacific. Image: RNZ
The Pacific region is one of the most vital areas of the earth, but it is not always the safest, particularly from natural disasters.
Disaster coverage
RNZ Pacific covered events such as the 2009 Samoan tsunami, and during the devastating 2022 Hunga Tonga-Hunga Haʻapai eruption, it was the only news service that could be heard in the kingdom.
Cyclones have become more frequent in the region, and RNZ Pacific provides vital weather updates, as the late Linden Clark, RNZI’s second manager, explained: “Many times, we have been broadcasting warnings on analogue shortwave to listeners when their local station has had to go off air or has been forced off air.”
RNZ Pacific’s cyclone watch service continues to operate during the cyclone season in the South Pacific.
As well as natural disasters, the Pacific can also be politically volatile. Since its inception RNZ Pacific has reported on elections and political events in the region.
Some of the more recent events include the 2000 and 2006 coups in Fiji, the Samoan Constitutional Crisis of 2021, the 2006 pro-democracy riots in Nuku’alofa, the revolving door leadership changes in Vanuatu, and the 2022 security agreement that Solomon Islands signed with China.
Human interest, culture
Human interest and cultural stories are also a key part of RNZ Pacific’s programming.
The service regularly covers cultural events and festivals within New Zealand, such as Polyfest. This was part of Linden Clark’s vision, in her role as RNZI manager, that the service would be a link for the Pacific diaspora in New Zealand to their homelands.
Today, RNZ Pacific continues that work. Currently its programmes are carried on two transmitters — one installed in 2008 and a much more modern facility, installed in 2024 following a funding boost.
Around 20 Pacific region radio stations relay RNZ Pacific’s material daily. Individual shortwave listeners and internet users around the world tune in directly to RNZ Pacific content, which can be received as far away as Japan, North America, the Middle East and Europe.
This content originally appeared on Asia Pacific Report and was authored by Pacific Media Watch.
The International Court of Justice heard last month that, once reconstruction is factored in, Israel’s war on Gaza will have emitted 52 million tonnes of carbon dioxide and other greenhouse gases, a figure equivalent to the annual emissions of 126 states and territories.
It seems somehow wrong to be writing about the carbon footprint of Israel’s 15-month onslaught on Gaza.
The human cost is so unfathomably ghastly. A recent article in the medical journal The Lancet put the death toll due to traumatic injury at more than 68,000 by June of last year (40 percent higher than the Gaza Health Ministry’s figure).
An earlier letter to The Lancet by a group of scientists argued the total number of deaths — based on similar conflicts — would be at least four times the number directly killed by bombs and bullets.
Seventy-four children were killed in the first week of 2025 alone. More than a million children are currently living in makeshift tents with regular reports of babies freezing to death.
Nearly two million of the strip’s 2.2 million inhabitants are displaced.
Ninety-six percent of Gaza’s children feel death is imminent and 49 percent wish to die, according to a study sponsored by the War Child Alliance.
Truly apocalyptic
I could, and maybe should, go on. The horrors visited on Gaza are truly apocalyptic and have not received anywhere near the coverage by our mainstream media that they deserve.
The contrast with the blanket coverage of the LA fires that have killed 25 people to date is instructive. The lives and property of those in the rich world are deemed far more newsworthy than those living — if you can call it that — in what retired Israeli general Giora Eiland described as a giant concentration camp.
The two stories have one thing in common: climate change.
In the case of the LA fires the role of climate change gets mentioned — though not as much as it should.
But the planet-destroying emissions generated by the genocide committed against the Palestinians rarely make the news.
Incredibly, when the State of Palestine — which is responsible for 0.001 percent of global emissions — told the International Court of Justice in The Hague last month that the first 120 days of the war on Gaza resulted in emissions of between 420,000 and 650,000 tonnes of carbon dioxide and other greenhouse gases, it went largely unreported.
For context that is the equivalent to the total annual emissions of 26 of the lowest-emitting states.
Fighter plane fuel
Jet fuel burned by Israeli fighter planes contributed about 157,000 tonnes of CO2 equivalent.
Transporting the bombs dropped on Gaza from the US to Israel contributed another 159,000 tonnes of CO2e.
Those figures will not appear in the official carbon emissions of either country due to an obscene exemption for military emissions that the US insisted on in the Kyoto negotiations. The US military’s carbon footprint is larger than any other institution in the world.
Professor of law Kate McIntosh, speaking on behalf of the State of Palestine, told the ICJ hearings, on the obligations of states in respect of climate change, that the emissions to date were just a fraction of the likely total.
Once post-war reconstruction is factored in the figure is estimated to balloon to 52 million tonnes of CO2e — a figure higher than the annual emissions of 126 states and territories.
Far too many leaders of the rich world have turned a blind eye to the genocide in Gaza; others have actively enabled it. But as the fires in LA show, there’s no escaping the impacts of climate change.
The US has contributed more than $20 billion to Israel’s war on Gaza — a huge figure but one that is dwarfed by the estimated $250 billion cost of the LA fires.
And what price do you put on tens of thousands who died from heatwaves, floods and wildfires around the world in 2024?
The genocide in Gaza isn’t only a crime against humanity, it is an ecocide that threatens the planet and every living thing on it.
Jeremy Rose is a Wellington-based journalist and his Towards Democracy blog is at Substack.
In less than 30 minutes on Monday, Elon Musk and his so-called Department of Government Efficiency were hit with three different lawsuits over the legal status of the effort to find federal regulations to eliminate and federal employees to fire.
The lawsuits landed as Musk rubbed elbows with fellow billionaires at President Donald Trump’s inauguration. As Trump crowed during his speech about DOGE and sending astronauts to Mars, government watchdogs and civil society organizations filed litigation claiming DOGE violates federal law because of its structure and secrecy.
“DOGE is operating unchecked, without authorization or funding from Congress and is led by unelected billionaires.”
“Currently, DOGE is operating unchecked, without authorization or funding from Congress and is led by unelected billionaires who are not representative of ordinary Americans,” said Citizens for Responsibility and Ethics in Washington, in a statement announcing one of the lawsuits, which it filed alongside the American Federation of Teachers and other groups.
Another lawsuit was filed by National Security Counselors, a nonprofit law firm. The third lawsuit came courtesy of Public Citizen, a consumer protection group, and the American Federation of Government Employees, the largest union for federal workers. Unions have spent the months since the election steeling themselves for a fight over DOGE.
Although DOGE is styled as a “department,” Trump lacks the legal authority to create official departments without legislation from Congress. (During his speech, Trump also said he would establish an “External Revenue Service” to collect his promised tariffs, which would also require a statute.)
The three lawsuits, filed in federal court in Washington, all allege DOGE flouts the Federal Advisory Committee Act. The law requires certain committees that advise the federal government to follow particular procedures, including drafting a formal charter and holding public meetings, which DOGE has not done.
“The advice and guidance that Mr. Trump has charged DOGE with producing is sweeping and consequential,” said Public Citizen in an emailed statement. “DOGE — the members of which currently do not represent the interests of everyday Americans — will be considering cuts to government agencies and programs that protect health, benefits, consumer finance, and product safety.”
In its statement, CREW said, “DOGE representatives have reportedly already been speaking with agency officials throughout the federal government, and communication is allegedly taking place on Signal, a messaging app known for its auto-delete features.”
The initial fight will be over whether DOGE fits the criteria of the Federal Advisory Committee Act. The litigants argue it does since it is “an advisory committee charged by Mr. Trump with providing advice or recommendations to the President and to one or more federal agencies regarding regulatory and fiscal matters,” as Public Citizen asserts in its filing.
Since Trump’s victory in November, Musk and Vivek Ramaswamy, whom Trump also tapped to lead DOGE, have been busy staffing up the effort with Silicon Valley types and finding office space, including potentially inside the federal Office of Management and Budget. (Ramaswamy is expected to step away later this month to run for governor in Ohio.)
DOGE’s “intended goal is clear,” according to the National Security Counselors’ suit, which named both Musk and Ramaswamy personally as defendants, along with Trump and other officials: “recommendations made by unaccountable outsiders without transparent deliberations which will reduce the size of the federal workforce by whatever means necessary.”
CREW’s lawsuit names DOGE, the federal Office of Management and Budget, and the acting head of OMB as defendants, while Public Citizen’s names just Trump and OMB.
Tens of millions of people face the loss of an internet service they use to consume information from around the world. Their government says the block is for their own good, necessitated by threats to national security. The internet service is dangerous, they say, a tool of foreign meddling and a menace to the national fabric — though they furnish little evidence. A situation like this, historically, is the kind of thing the U.S. government protests in clear terms.
When asked, for instance, about Chinese censorship of Twitter in 2009, President Barack Obama was unequivocal. “I can tell you that in the United States, the fact that we have free Internet — or unrestricted Internet access — is a source of strength, and I think should be encouraged.” When the government of Nigeria disconnected its people from Twitter in 2021, the State Department blasted the move, with spokesperson Ned Price declaring, “Unduly restricting the ability of Nigerians to report, gather, and disseminate opinions and information has no place in a democracy.”
But with the Supreme Court approving on Friday a law that would shut off access to TikTok, the U.S. is poised to conduct the exact kind of internet authoritarianism it has spent decades warning the rest of the world about.
Since the advent of the global web, this has been the standard line from the White House, State Department, Congress, and an infinitude of think tanks and NGOs: The internet is a democracy machine. You turn it loose, and it generates freedom ex nihilo. The more internet you have, the more freedom you have.
The State Department in particular seldom misses an opportunity to knock China, Iran, and other faraway governments for blocking their people from reaching the global communications grid — moves justified by those governments as necessary for national safety.
In 2006, the State Department presented the Bush administration’s Global Internet Freedom strategy of “defending Internet freedom by advocating the availability of the widest possible universe of content.” In a 2010 speech, then-Secretary of State Hillary Clinton cautioned that “countries that restrict free access to information or violate the basic rights of internet users risk walling themselves off from the progress of the next century.” She emphasized that the department sought to encourage the flow of foreign internet data into China “because we believe it will further add to the dynamic growth and the democratization” there.
The U.S. has always viewed the internet with something akin to national pride, and for decades has condemned attempts by authoritarian governments — especially China’s — to restrict access to the worldwide exchange of unfettered information. China has become synonymous with internet censorship for snuffing whole websites or apps out of existence with only the thinnest invocation of national security.
But after years of championing “Digital Democracy,” “the Global Village,” and an “American Information Superhighway” shuttling liberalism and freedom to every computer it touches, the U.S. is preparing a dramatic about-face. In a move of supreme irony, it will attempt to shield its citizens from Chinese government influence by becoming itself more like the government of China. American internet users must now get accustomed to sweeping censorship in the name of national security as an American strategy, not one inherent to our “foreign adversaries.”
In a move of supreme irony, the U.S. will attempt to shield its citizens from Chinese government influence by becoming itself more like the government of China.
For decades, China has justified its ban against American internet products on the grounds that the likes of Twitter and Instagram represent a threat to Chinese state security and a corrupting influence on Chinese society. That logic has now been seamlessly co-opted by U.S. politicians who see China as the great global evil, but with little acknowledgment of how their rhetoric matches that of their enemy.
“Authoritarian and illiberal states,” President Joe Biden’s State Department warned soon after he signed the TikTok ban bill into law, “are seeking to restrict human rights online and offline through the misuse of the Internet and digital technologies” by “siloing the Internet” and “suppressing dissent through Internet and telecommunications shutdowns, virtual blackouts, restricted networks, and blocked websites.”
While TikTok’s national security threat has never been made public — alleged details discussed by Congress remain classified — those who advocate banning the app make clear their concern isn’t merely cybersecurity but also free speech. The Chinese Communist Party “could also use TikTok to propagate videos that support party-friendly politicians or exacerbate discord in American society,” former GOP Rep. Mike Gallagher and Sen. Marco Rubio warned in a 2022 Washington Post op-ed. Their argument perfectly mimicked the unspecified threats to Chinese “national unity” that the country has cited to defend its blocking of American internet services.
“It’s highly addictive and destructive and we’re seeing troubling data about the corrosive impact of constant social media use, particularly on young men and women here in America,” Gallagher told NBC in 2023.
If politicians are conscious of this contradiction between declarations of America as the home of digital democracy and the rising American firewall, there’s little acknowledgment. In a 2024 opinion piece for Newsweek (“Mr. Xi, Tear Down This Firewall”), Rep. John Moolenaar decried China’s “dystopian” practice of censoring foreign information: “The Great Firewall inhibits contact between Chinese citizens and the outside world. Information is stopped from flowing into China and the Chinese people are not allowed to get information out. Facebook, X, Instagram, and YouTube are blocked.”
Following the Supreme Court’s ruling Friday, Moolenaar, chair of the House Select Committee on the Chinese Communist Party, announced he “commends” the decision, one he believes “will keep our country safe.” His language echoes that of a Chinese foreign ministry spokesperson, who once told reporters the country’s national blockade of American websites was similarly necessary to “safeguard the public.”
It’s unclear whether they see irony in the scores of Americans now flocking to VPN software to bypass a potential national TikTok ban — a technique the State Department has long promoted abroad for those living under repressive regimes.
Nor does there seem to be any awareness of how effortlessly the national security argument deployed against TikTok could be turned against any major American internet company. If the U.S. believes TikTok is a clear and present danger to its citizens because it uses secret algorithms, cooperates with spy agencies, changes speech policies under political pressure, and conducts dragnet surveillance and data harvesting against its clueless users, what does that say about how the rest of the world should view Facebook, YouTube, or X?
To his credit, Gallagher is open about the extent to which the anti-TikTok movement is based less on principle than brinkmanship. The national ideals of open access to information and unbridled speech remain, to Gallagher, but subordinate to the principle of “reciprocity,” as he’s put it. “It’s worth remembering that our social media applications are not allowed in China,” he said in a 2024 New York Times interview. “There’s just a basic lack of reciprocity, and your Chinese citizens don’t have access to them. And yet we allow Chinese government officials to go all over YouTube, Facebook and X spreading lies about America.” The notion that foreign lies — China’s, or anyone else’s — should be countered with state censorship, rather than counter-speech, marks an ideological abandonment of the past 30 years of American internet statecraft.
“Prior to this ban, the U.S. had consistently and rightfully so condemned when other nations banned communications platforms as fundamentally anti-democratic,” said David Greene, senior staff attorney and civil liberties director at the Electronic Frontier Foundation. “We now have lost much of our moral authority to advance democracy and the free flow of information around the world.”
Should TikTok actually become entirely unplugged from the United States, it may grow more difficult for the country to proselytize for an open internet. So too will it grow more difficult for the U.S. to warn of blocking apps or sites as something our backward adversaries, fearful of our American freedoms and open way of life, do out of desperation.
That undesirable online speech can simply be disappeared by state action was previously dismissed as anti-democratic folly: In a 2000 speech, Bill Clinton praised the new digital century in which “liberty will spread by cell phone and cable modem,” comparing China’s “crack down on the internet” to “trying to nail Jello to the wall.” Futile though it may remain, the hammer at least no longer appears un-American.
What’s shaping up to be one of the worst wildfire disasters in U.S. history had many causes. Before the blazes raged across Los Angeles last week, eight months with hardly any rain had left the brush-covered landscape bone-dry. Santa Ana winds blew through the mountains, their gusts turning small fires into infernos and sending embers flying miles ahead. As many as 12,000 buildings have burned down, some hundred thousand people have fled their homes, and at least two dozen people have died.
As winds picked up again this week, key questions about the fires remain unanswered: What sparked the flames in the first place? And could they have been prevented? Some theorize that the Eaton Fire in Pasadena was caused by wind-felled power lines, or that the Palisades Fire was seeded by the embers of a smaller fire the week before. But the list of possible culprits is long — even a car engine idling over dry grass can ignite a fire.
“To jump to any conclusions right now is speculation,” said Ginger Colbrun, a spokesperson for the U.S. Bureau of Alcohol, Tobacco, Firearms and Explosives, the lead agency investigating the cause of the Palisades Fire, to the Los Angeles Times. Figuring it out will likely take months. It took the bureau more than a year to conclude that the fire in August 2023 that devastated Maui, which was similarly lashed by high winds, was started by broken power lines.
Even given enough time, the causes of the Los Angeles fires might remain a mystery. According to a recent study, authorities never find the source of ignition for more than half of all wildfires in the Western U.S. — a knowledge gap that can hamper prevention efforts even as climate change ramps up the frequency of these deadly events. If authorities can anticipate likely causes of a fire, they can help build more resilient neighborhoods and educate the public on how to avoid the next deadly event.
“Fire research is so incredibly difficult. It’s more difficult than looking for a needle in the haystack,” said Costas Synolakis, a professor at the University of Southern California who studies natural disasters. Synolakis said fires with especially high temperatures, such as those in Los Angeles, often obliterate the evidence. “That’s why it’s so challenging to mitigate fire losses,” he said. “You just don’t know what triggers them.”
The U.S. Forest Service is teaming up with computer scientists to see if artificial intelligence can help crack old cases. A study led by data scientists at Boise State University, published in the journal Earth’s Future earlier this month, analyzed the conditions surrounding more than 150,000 unsolved wildfire cases from 1992 to 2020 in Western states and found that 80 percent of wildfires were likely caused by people (whether accidentally or intentionally), with lightning responsible for just 20 percent. According to Cal Fire, people have caused 95 percent of California’s wildfires.
Karen Short, a research ecologist with the Forest Service who contributed to the study and maintains a historical database of national wildfire reports, says understanding why fires start is essential for preventing them and educating the public. Strategic prevention appears to work: According to the National Fire Protection Association, house fires in the U.S. have decreased by nearly half since the 1980s.
In 2024, Short expanded her wildfire archive to include more information useful to investigators, such as weather, elevation, population density, and a fire’s timing. “We need to have those things captured in the data to track them over time. We still track things from the 1900s,” she said.
According to Short, wildfire trends across the Western United States have shifted with human activity. In recent decades, ignitions from power lines, fireworks, and firearms have become more common, in contrast with the railroad- and sawmill-caused fires that were once more common.
Signage warns against the use of illegal fireworks in Pasadena, in June 2022.
David McNew / Getty Images
The study found that vehicles and equipment are likely the number one culprit, potentially causing 21 percent of wildfires without a known cause since 1992. Last fall, the Airport Fire in California was just such an event, burning over 23,000 acres. And an increasing number of fires are the result of arson and accidental ignition — whether from smoking, gunfire, or campfires — which together make up another 18 percent. In 2017, an Arizona couple’s choice of a blue smoke-spewing firework for a baby gender reveal party lit the Sawmill Fire, torching close to 47,000 acres.
But these results aren’t definitive. Machine learning models such as those used for the study are trained to predict the likelihood of a given fire’s cause, rather than prove that a particular ignition happened. Although the study’s model showed 90 percent accuracy in selecting between lightning and human activity as the ignition source when tested on fires with known causes, it had more difficulty determining exactly which of 11 possible human behaviors was to blame, getting it right only half the time.
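For readers curious what a model of this general type looks like, here is a minimal sketch: a random forest trained to separate lightning-caused from human-caused ignitions. All data is synthetic, and the four features are illustrative stand-ins for the kinds of variables the study tracked (weather, elevation, population density, timing); this is not the study’s actual pipeline.

```python
# A minimal sketch of the kind of classifier described above: predicting a
# wildfire's ignition source from surrounding conditions. All data here is
# synthetic, and the feature set is illustrative, not the study's actual one.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 5000

# Hypothetical features: temperature (deg C), elevation (m), people per
# sq km, and day of year. Real studies use many more variables.
X = np.column_stack([
    rng.normal(25, 8, n),      # temperature
    rng.uniform(0, 3000, n),   # elevation
    rng.exponential(50, n),    # population density
    rng.integers(1, 366, n),   # day of year
])

# Synthetic labels: 0 = lightning, 1 = human-caused. We fake a weak signal
# (human ignitions cluster near populated areas) purely for illustration.
y = (X[:, 2] + rng.normal(0, 30, n) > 40).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Accuracy on held-out fires with "known" causes, mirroring how the study
# validated its model before applying it to unsolved cases.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```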
Yavar Pourmohamad, a data science Ph.D. researcher at Boise State University who led the study, says that knowing the probable causes of a fire could help authorities warn people in high-risk areas before a blaze actually starts. “It could give people a hint of what is most important to be careful of,” he said. “Maybe in the future, AI can become a trustworthy tool for real-world action.”
Synolakis, the USC professor, says Pourmohamad and Short’s research is important for understanding how risks are changing. He advocates for proactive actions like burying power lines underground where they can’t be buffeted by winds.
A 2018 study found that fires set off by downed power lines — such as the Camp Fire in Paradise, California, that same year — have been increasing. The authors note that while power lines do not account for many fires, they’re associated with larger swaths of burned land.
“We have to really make sure that our communities are more resilient to climate change,” Synolakis said. “As we’re seeing with the extreme conditions in Los Angeles, fire suppression alone doesn’t do it.”
Keir Starmer’s plan for artificial intelligence (AI) falls well short of the main point of the fourth industrial revolution. That is to liberate people from labouring work they don’t want to do, providing more time for social, creative, and intellectual endeavours.
Instead, Starmer is largely pursuing private sector for-profit ownership of AI that threatens to rob people of their work without fairly sharing the fruits that robots have created.
In his speech on AI, the prime minister paid some lip service to the issue, without saying how for-profit, private ownership of AI will facilitate it:
Who gets the benefits?
Just those at the top – or working people everywhere?
AI: utopia or apocalypse?
The thing is, there will be a point where capital increasingly becomes labour: where money spent on machines buys what human labour once provided. That’s because, in the long term, the economy will be mainly automated rather than consisting of people providing the labour.
Without public ownership of that automation or some kind of citizens’ dividend, the opportunity for the utopia explored in Aaron Bastani’s Fully Automated Luxury Communism will be replaced by its opposite: AI-controlling overlords and a poverty-stricken public.
Yet the public research and development funding for AI that Starmer has announced is a drop in the ocean compared to the private investment we are seeing.
The government’s AI plan also mentions the environmental factor, given that AI generation is an energy-intensive process. But corporate lobbyists and fossil fuel donations have played a role in reducing the government’s green energy strategy to near zero, compared to what’s necessary.
In Labour’s first budget, the government issued just £100m to Great British Energy (GBE) for two years. That’s despite its already low pledge of £8bn by 2029. GBE aims to crowd in private investment in renewables. It falls far short of what we need. And it squanders the opportunity for a publicly funded Green New Deal.
The people’s work and funding to fuel private, for-profit tech
Starmer’s plan further proposes a national data library to power for-profit AI. That includes data from publicly funded institutions such as the British Library, the NHS, the BBC (through the TV licence), the Natural History Museum and the National Archives. It also includes reworking copyright law so AI can use people’s work, whether academic or creative.
All of this is essential for the development of AI. But why should people’s work and public funding fuel AI if the outcomes are going to be privately-owned profit generation? It only makes sense for public ownership of the outcomes: AI machines that could be liberating.
At present, the British people are already paying twice for education and information.
Once, to create research (for example, through Research Council funding).
Then, we pay again to buy back the research through online journal subscriptions, university fees and public library costs: despite funding the research, the taxpayer must pay again for access.
The for-profit agenda for AI threatens to prolong this reality but at a much greater scale, across the entire economy.
A webpage on the State of Louisiana’s official site appears to be advertising “animal porn Porn Videos.” The online home of the Federal Judicial Center offers “free how to sex videos,” with a closed captioning feature. The Centers for Disease Control and Prevention’s SimpleReport, identified as an “official website of the United States government” in a banner at the top of the page, provides “Desi Girl Xxx Video sex Videos,” while the City of Bethlehem, Pennsylvania, points to “Sexy Beautiful European Porn.”
These are just a few examples of the wide range of U.S. government websites inadvertently directing visitors to hardcore porn content. Other examples can readily be discovered when searching for pornographic keywords like “xxx” and utilizing Google’s “site:” search operator to query only U.S. government domains.
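For instance, a Google query along the following lines (the keyword is illustrative) restricts results to pages hosted on .gov domains:

```
xxx site:.gov
```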
In some cases, the content appears to violate the very laws of the governments whose sites they have taken over. Pages hosted on the State of Louisiana’s official government site that now redirect to porn, for instance, don’t require visitors to provide proof-of-age verification, as is required under Louisiana’s controversial age verification law. The Supreme Court is due this week to hear a case about the constitutionality of age verification laws.
Spammers have in the past exploited the redirection functionalities of government websites to steer traffic to pornographic content — meaning the government sites themselves never actually hosted malicious content. But this recent wave of porn spam appears to be using a more complex technique: uploading to government pages rogue content that transports website visitors to malicious sites.
The new attacks work by tricking the site into attempting to load a nonexistent image. Doing so invokes what’s called an onerror event in the HTML code, which instructs the web browser to pull up a third-party website if an image won’t load. This exploit transports users from the government page to a third-party site, which in turn redirects to yet another site hosting porn and soliciting signups with referral codes and affiliate links. If the user ultimately signs up for an account on one of these sites, the owner may receive a cash incentive.
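The pattern is simple enough to sketch. Below is a hypothetical reconstruction of the injected markup, paired with a minimal Python scanner that flags it using only the standard library. The filename and URL are invented; this illustrates the technique described above rather than code recovered from any affected site.

```python
# A minimal sketch of the onerror redirect technique described above, plus
# a simple scanner for it. The markup is a hypothetical reconstruction of
# the pattern, not code recovered from any affected government site.
from html.parser import HTMLParser

# Shape of the injected markup: a broken image whose onerror handler
# bounces the visitor to a third-party site when the image fails to load.
INJECTED = '<img src="missing.png" onerror="window.location=\'https://redirect.example\'">'

class OnErrorScanner(HTMLParser):
    """Flags tags carrying an onerror attribute that performs a redirect."""
    def __init__(self):
        super().__init__()
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "onerror" and value and "location" in value:
                self.suspicious.append((tag, value))

scanner = OnErrorScanner()
scanner.feed(INJECTED)
print(scanner.suspicious)
# [('img', "window.location='https://redirect.example'")]
```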
In some instances, visitors end up on a page to purchase antivirus software from vendors such as McAfee. In response to questions from The Intercept about a specific ad redirected from a Bethlehem city government website, a McAfee spokesperson said the company would “be taking action to remove this ad.” McAfee did not respond to a question about how much the spammer had made through the affiliate program.
The rogue webpages in some cases appear to have been uploaded to government websites that use older versions of the Kentico content management system, which previously allowed any user to upload files to the website.
Users on forums such as BlackHatWorld, which describes itself as “the global forum and marketplace for cutting edge digital marketing techniques and methods to help you make money in digital marketing today,” routinely advise each other to use the Kentico exploit to inject their content into websites.
Kentico disputed that such attacks point to a vulnerability in its systems, stating that its default settings allow any user to upload files and that it is up to its clients’ website administrators to restrict upload permissions. Kentico confirmed to The Intercept that “media libraries are not secured by default” and that the “default admin account has no password.”
The company pointed The Intercept to its official documentation. “By default, files in media libraries are NOT secured,” the documentation states. “It is up to the user’s discretion when using some feature to read the documentation. E.g. when creating a media library, secure it according given project’s needs and goals.”
None of the impacted governments responded to requests for comment; all pages flagged by The Intercept were taken offline shortly after our outreach.
Meta is now granting its users new freedom to post a wide array of derogatory remarks about races, nationalities, ethnic groups, sexual orientations, and gender identities, training materials obtained by The Intercept reveal.
Examples of newly permissible speech on Facebook and Instagram highlighted in the training materials include:
“Immigrants are grubby, filthy pieces of shit.”
“Gays are freaks.”
“Look at that tranny (beneath photo of 17 year old girl).”
Meta’s newly appointed global policy chief Joel Kaplan described the effort in a statement as a means to fix “complex systems to manage content on our platforms, which are increasingly complicated for us to enforce.”
While Kaplan and Meta CEO Mark Zuckerberg have couched the changes as a way to allow users to engage more freely in ideological dissent and political debate, the previously unreported policy materials reviewed by The Intercept illustrate the extent to which purely insulting and dehumanizing rhetoric is now accepted.
The document provides those working on Meta user content with an overview of the hate speech policy changes, walking them through how to apply the new rules. The most significant changes are accompanied by a selection of “relevant examples” — hypothetical posts marked either “Allow” or “Remove.”
When asked about the new policy changes, Meta spokesperson Corey Chambliss referred The Intercept to remarks from Kaplan’s blog post announcing the shift: “We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate. It’s not right that things can be said on TV or the floor of Congress, but not on our platforms.”
Kate Klonick, a content moderation policy expert who spoke to The Intercept, contests Meta’s framing of the new rules as less politicized, given the latitude they provide to attack conservative bogeymen.
“Drawing lines around content moderation was always a political enterprise,” said Klonick, an associate professor of law at St. John’s University and scholar of content moderation policy. “To pretend these new rules are any more ‘neutral’ than the old rules is a farce and a lie.”
She sees the shifts announced by Kaplan — a former White House deputy chief of staff under George W. Bush and Zuckerberg’s longtime liaison to the American right — as “the open political capture of Facebook, particularly because the changes are pandering to a particular party.”
Meta’s public Community Standards page says that even under the new relaxed rules, the company still protects “refugees, migrants, immigrants, and asylum seekers from the most severe attacks” and prohibits “direct attacks” against people on the basis of “race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity, and serious disease.” But the instructive examples provided in the internal materials show a wide variety of comments that denigrate people based on these traits that are marked “Allow.”
At times, the provided examples appear convoluted or contradictory. One page notes “generalizations” about any group remain prohibited if they make a comparison to animals or pathogens — such as “All Syrian refugees are rodents.” But comparisons to “filth or feces” are now downgraded from hate speech to a less serious form of “insult,” which violates company rules only if directed at a protected group. According to examples provided by Meta, this change now allows users to broadly dehumanize immigrants with statements like “Immigrants are grubby, filthy pieces of shit,” despite language elsewhere in the document that claims “comparisons to subhumanity” remain banned.
The company’s policy around nausea-based hate follows a particularly fine line: “Migrants are no better than vomit” is allowed, according to the materials, while “Muslims make me want to throw up” ought to be removed because it claims a group “causes sickness.”
While general comparisons to animals are still against the rules, many other kinds of broad, hateful stereotyping are now allowed. “ALL behavioral statements (qualified and non-qualified)” are now no longer against Meta’s rules, the document reads, allowing sweeping generalizations connecting entire races or ethnic groups to criminality or terrorism. The document offers as examples of acceptable racial generalizations: “These damn immigrants can’t be trusted, they’re all criminals,” “I bet Jorge’s the one who stole my backpack after track practice today. Immigrants are all thieves,” and “Japanese are all Yakuza.” It notes, however, that the statement “Black people are all drug dealers” remains prohibited under the new rules.
Other sections of the materials provide examples of forbidden “insults about sexual immorality,” such as “Jewish women are slutty.” But the document also provides ample examples of newly permissible insults aimed at specific gender identities or sexual orientations, including “Gay people are sinners” and “Trans people are immoral.” A post stating “Lesbians are so stupid” would remain prohibited as a “mental insult,” though “Trans people are mentally ill” is marked as allowed.
Generalizations about superiority and inferiority are similarly convoluted, though attacks on immigrants tend to get a pass. Examples of banned content include: “Christian men are totally useless,” “Is it me? Or are all autistic women ugly?” and “Hispanics are as dirty as the ground we walk on.” Meanwhile, “Mexican immigrants are trash!” is now deemed acceptable.
Overall, the restrictions on claims of ethnic or religious supremacy have been eased significantly. The document explains that Meta now allows “statements of superiority as long as the statements do not refer to inferiority of another [protected characteristic] group (a) on the basis of inherent intellectual ability and (b) without support.” Allowable statements under this rule include “Latinos are the best!” and “Black people are superior to all others.” Also now acceptable are comparative claims such as “Black people are more violent than Whites,” “Mexicans are lazier than Asians,” and “Jews are flat out greedier than Christians.” Off-limits, only because it pertains to intellectual ability, is the example “White people are more intelligent than black people.”
But general statements about intellect appear to be permitted if they’re shared with purported evidence. For example, “I just read a statistical study about Jewish people being smarter than Christians. From what I can tell, it’s true!” It’s unclear if one would be required to link to such a study, or merely claim its existence.
Rules around explicit statements of hate have been loosened considerably as well. “Statements of contempt, dislike, and dismissal, such as ‘I hate,’ ‘I don’t care for,’ and ‘I don’t like.’ are now considered non-violating and are allowed,” the document explains. Included as acceptable examples are posts stating “I don’t care for white people” and “I’m a proud racist.”
The new rules also forbid “targeting cursing” at a protected group, which “includes the use of the word ‘fuck’ and its variants.” Cited as an example, a post stating “Ugh, the fucking Jews are at it again” violates the rules simply because it contains an obscenity (the new rules permit the use of “bitch” or “motherfucker”).
“Referring to the target as genitalia or anus are now considered non-violating and are allowed.”
Another policy shift: “Referring to the target as genitalia or anus are now considered non-violating and are allowed.” As an example of what is now permissible, Facebook offers up: “Italians are dickheads.”
While many of the examples and underlying policies seem muddled, the document shows clarity around allowing disparaging remarks about transgender people, including children. Noting that “‘Tranny’ is no longer a designated slur and is now non-violating,” the materials provide three examples of speech that should no longer be removed: “Trannies are a problem,” “Look at that tranny (beneath photo of 17 year old girl),” and “Get these trannies out of my school (beneath photo of high school students).”
According to Jillian York, director for international freedom of expression at the Electronic Frontier Foundation, Meta’s hate speech protections have historically been well-intentioned but deeply flawed in practice. “While this has often resulted in over-moderation that I and many others have criticized, these examples demonstrate that Meta’s policy changes are political in nature and not intended to simply allow more freedom of expression,” York said.
Meta has faced international scrutiny for its approach to hate speech, most notably after the role that hate speech and other dehumanizing language on Facebook played in fomenting genocide in Myanmar. Following criticism of its mishandling of Myanmar, where the United Nations found Facebook had played a “determining role” in the violence that drove more than 650,000 Rohingya Muslims from the country, the company spent years touting its investment in preventing the spread of similar rhetoric in the future.
“The reason many of these lines were drawn where they were is because hate speech often doesn’t stay speech, it turns into real world conduct,” said Klonick, the content moderation scholar.
It’s a premise that Meta purported to share up until this week. “We have a responsibility to fight abuse on Facebook. This is especially true in countries like Myanmar where many people are using the internet for the first time and social media can be used to spread hate and fuel tension on the ground,” wrote company product manager Sara Su in a 2018 blog post. “While we’re adapting our approach to false news given the changing circumstances, our rules on hate speech have stayed the same: it’s not allowed.”
Elon Musk takes the stage during a campaign rally for Donald Trump at Madison Square Garden on Oct. 27, 2024, in New York City. Photo: Michael M. Santiago/Getty Images
Elon Musk banned me from X for my journalism. No one should be surprised about it in this era, when the prevailing view in Silicon Valley is “Free speech for me but not for thee.”
That ethos reinforces why we should be concerned about Musk’s takeover of the platform and, more to the point, oligarchs’ control of our main forms of communication.
Musk’s X, and now increasingly Meta, are shirking the responsibilities of owning and operating the world’s venues for sharing information, while taking advantage of these platforms for their own agenda-driven objectives.
My own brush with these hypocrisies went like this: My X account was suspended on Sunday because of a news story I authored. X accused me of violating its “doxing” rule — meaning that I shared someone’s personal, private information without permission.
The rule is rarely, and inconsistently, enforced, and more to the point, no reasonable person could think my story violated it.
Moderation on X doesn’t seem to exist to help keep anyone safe. My news story put no one and nothing in danger.
Zuckerberg’s move is in line with X’s capriciously enforced doxing regulations: At best, it’s simply about controlling information; at worst, it’s about serving a self-interested agenda. And with Donald Trump going back to the White House, it’s clear where these billionaires’ self-interests lie.
Notably, Zuckerberg’s recent detente with Trump comes after the Meta chief faced years of right-wing attacks for the company’s decision to restrict access to news stories about Hunter Biden. In perhaps the most famous link-blocking episode in social media history, both X and Meta suppressed a New York Post story about Biden’s laptop out of skepticism about its veracity and for fear that it was part of a foreign intelligence operation targeting the 2020 election.
Now, Musk has done the same thing to my news story about the Adrian Dittmann account on X not belonging to Elon Musk but likely belonging to … a man named Adrian Dittmann.
How I Got Banned
Published in The Spectator — ironically a right-leaning news outlet — my story examined the Adrian Dittmann account, which rose to meme-worthiness based on speculation that it was an “alt” for Musk himself. Rumors of Musk’s secret links to the sycophantic account had become so prevalent that even major media outlets had touched on it.
For years, Musk had himself made light of the conspiracy theory, using it to discredit what he calls “legacy media” for its gullibility.
My research showed that, indeed, the account was likely not Musk’s. I found a man living in Fiji named Adrian Dittmann — whose life and history were consistent with many claims from the X account bearing his name.
My story did not share any personal details like phone numbers. All my research — to compare claims by Dittmann on X with Dittmann in Fiji — was done using publicly accessible information. And the subject clearly rose to the level of newsworthiness; Musk, after all, operates at the highest echelons of American politics.
Then I got suspended, and links to my story were banned. (After I appealed and X users protested, the company reduced my suspension from 30 days to seven. I am still required to delete my tweets about the story, though the link is now unbanned. X offered no further explanations.)
The conclusion of X’s logic is a place where the powerful and their associations are shielded from any scrutiny. Pulling the mask of anonymity away is not always doxing; context and execution are what distinguish ethical journalism from harmful doxing.
What’s more, the purported “alt” was not even anonymous: The user went by his real name.
What’s Really Going On
Real “doxing” runs rampant on X.
In September, I checked in on some posts from the previous year that listed the addresses and family members of judges in president-elect Trump’s civil and criminal trials, some with photos of the judges’ homes and exhortations to harass them. As of Monday, all the posts were still visible, despite being reported by users.
As an independent investigative journalist, I’ve had people post my personal data, stolen personal images, and my family members’ information online. X never helped me, even when I feared my family might be in danger and had my attorneys send letters to their general counsel.
So what’s really going on? These social media giants are dancing on a delicate line that they rely on to stay in business.
X and Meta want us to believe that they are our public airwaves, not the television stations themselves. The distinction allows social media companies to skirt the civil liabilities that come with being a publisher. That’s because of a legal loophole, the infamous Section 230 of 1996’s Communications Decency Act, which confers immunity on digital platforms for third-party content.
On the one hand, it behooves these giant companies to limit their moderation, since controlling the content themselves increasingly points to their status as a publisher. On the other hand, you don’t exert power over the discourse if you don’t control the content.
The companies, however, don’t have to choose one path or the other. Instead, they choose both: in each case, doing what is politically and financially expedient for them. That’s how Musk can argue for Hunter Biden articles to run free while having no compunction about blocking a piece on his supposed “alt.”
Section 230 simply doesn’t account for what X and Meta have become. Instead, it takes away accountability while still allowing a handful of private companies to have a tremendous influence on our discourse, with huge ramifications to the public interest.
The power they have is actually far beyond that of a publisher. Using the TV analogy, they don’t merely own the airwaves, they control them — exerting more influence over the content than the television stations. It’s a dictatorial power. Did you elect Musk or Zuck king? Would you?
A few legislators have called for reform of Section 230 in recent years, concerned about how easily bad foreign actors like China or Russia can abuse social media platforms popular with Americans, such as TikTok.
What if, though, the call is coming from inside the house?
There are limits to the First Amendment, under established U.S. Supreme Court precedent. There is no constitutional protection for inciting violence, committing perjury, or distributing child pornography, for example. But when the justices convene on Friday to consider legislation that would effectively ban the video-based social media app TikTok in the United States as of January 19, they will be asked to carve out another exception, at least implicitly: for speech that the government says might threaten national security.
Civil liberties groups warned that the TikTok ban cannot be squared with the First Amendment, and that the lower court that upheld the ban in December improperly deferred to the government’s speculative arguments about the app’s potential national security risks.
“Although the government invokes ‘national security’ to justify its sweeping ban, that does not alter the applicable First Amendment standards,” argued a civil liberties coalition, including the American Civil Liberties Union and the Electronic Frontier Foundation, in a brief supporting TikTok and a group of TikTok creators suing to block the law. “In fact, the judiciary has an especially critical role to play in ensuring that the government meets its burden when the government invokes national security.”
The TikTok ban case pits free speech against the specter of foreign threats. And many fear the Supreme Court will tip the balance against the First Amendment rights of TikTok’s millions of American users and creators.
The TikTok ban took shape during the first Trump administration, and it culminated with passage of the Protecting Americans From Foreign Adversary Controlled Applications Act in April 2024. The bill was tacked onto a foreign aid package for Ukraine and Israel, and it passed overwhelmingly in Congress. (President-elect Donald Trump, who endorsed banning TikTok during his first term, reversed course after joining the platform last summer; in a bizarre brief to the Supreme Court, he asked the justices to pause proceedings until he’s back in the White House on January 20, which is unlikely.)
The legislation singles out TikTok by name as a “foreign adversary controlled application” because of its Chinese parent company, ByteDance. The government argues that this ownership structure ultimately makes TikTok “subject to the control of the People’s Republic of China” and thus poses “grave national-security threats.” TikTok strenuously rejects this characterization, telling the Supreme Court in its brief that “the Government seeks to prophylactically silence” its more than 170 million users in the U.S. “based on fear that China could someday wield control over [TikTok’s] foreign affiliates.”
Under the law, unless ByteDance sells TikTok by January 19, it will be illegal for third-party vendors like Apple or Google to distribute or maintain the app.
TikTok vowed to challenge the law as unconstitutional, and it filed a federal lawsuit last May, as did a group of TikTok creators. The U.S. Court of Appeals for the D.C. Circuit sided with the government in early December.
“We recognize that this decision has significant implications for TikTok and its users,” wrote Judge Douglas H. Ginsburg, noting that if ByteDance didn’t divest from TikTok, “its platform will effectively be unavailable in the United States, at least for a time.”
TikTok and the creators petitioned the Supreme Court for emergency review, which was quickly granted to squeeze in oral arguments less than two weeks before the sale deadline.
The TikTok case is a battle of framing. By the government’s argument, the ban doesn’t implicate the First Amendment at all, because ByteDance is a foreign company without constitutional rights, and the legislation doesn’t regulate speech on TikTok directly — only who can own and control the platform itself.
“TikTok may continue operating in the United States and presenting the same content from the same users in the same manner if its current owner executes a divestiture that frees the platform from the PRC’s control,” the Biden administration wrote in its brief defending the law. “And TikTok users likewise have no First Amendment right to post content on a platform controlled by a foreign adversary.”
The D.C. Circuit ruled that the First Amendment does apply, rejecting the government’s argument to the contrary, which the judges deemed “ambitious.” ByteDance’s U.S.-based subsidiary, TikTok, Inc., has First Amendment rights, the court held, and “the curation of content on TikTok is a form of speech” under a unanimous Supreme Court decision from last summer.
The three-judge panel — a cross-ideological crew composed of Ginsburg, a Reagan appointee; Chief Judge Sri Srinivasan, an Obama nominee; and Judge Neomi Rao, a Trump nominee — found, however, that the TikTok ban didn’t violate the First Amendment, even under the highest level of constitutional scrutiny.
“The Government has offered persuasive evidence demonstrating that the Act is narrowly tailored to protect national security,” Ginsburg wrote. In a concurring opinion, Srinivasan wrote that the TikTok ban was “in step with longstanding restrictions on foreign control of mass communications channels,” such as radio.
The D.C. Circuit’s deference to national security arguments alarmed civil liberties advocates, since the government has acknowledged that TikTok’s potential geopolitical risks to the U.S. are, at least for now, hypothetical.
“Affirming the lower court’s holding would signal a sea change,” argued a group of prominent First Amendment and internet law scholars in a brief, “namely, that the Government need offer only ambiguous evidence and conjecture to support the suppression of free and controversial speech.”
TikTok hawks have offered two primary risk vectors: foreign surveillance and covert influence. Like many websites and apps, TikTok collects a significant amount of data on users — and the company has previously been caught dipping into the data, including to track journalists’ movements and smoke out corporate leakers. The fear is that the Chinese government might access this data too, which could facilitate mass monitoring, blackmail, and espionage against the U.S., as then-hawk Trump wrote in an executive order in 2020.
The government also warns that the Chinese government might leverage TikTok for influence operations targeted at Americans by ordering TikTok to tweak its recommendation algorithm or to censor certain content.
There is no evidence in the public record that the company has coordinated with the Chinese government in either way against the United States. But the D.C. Circuit deferred explicitly to the government’s national security arguments, citing a 2010 decision of the Supreme Court about material support for terrorism, writing that the assessments of the executive branch and Congress about TikTok’s risks were “entitled to significant weight.”
Where the Supreme Court will land is difficult to predict. In the past decade, the court has consistently recognized the importance of protecting online speech, albeit in cases lacking a national security veneer.
But critics worry that allowing a sweeping ban based on predictions rather than more concrete proof of TikTok’s security risks sets a precedent in line with repressive regimes, including, ironically, China.
“The list of countries that have banned TikTok should itself be a warning because these countries do not share American commitments to a free and open internet,” wrote the Knight First Amendment Institute at Columbia University in a brief to the Supreme Court joined by Free Press and PEN America.
Three weeks after Donald Trump’s reelection victory sent cryptocurrencies on a bull run, Rep. Mike Collins, R-Ga., spotted an opportunity.
Collins started buying thousands of dollars’ worth of a meme coin called Ski Mask Dog. His legally mandated disclosure of those purchases helped drive the coin’s price up more than 100 percent.
The purchases once again raised the question of whether members of Congress should trade assets that might fall under their oversight. They also highlighted just how rare it is for members of Congress to buy or sell crypto — despite the growing, bipartisan enthusiasm for unleashing it on Americans.
Only four members of Congress, including Collins, reported buying or selling cryptocurrencies over the past two years, according to a review of trading information compiled by data provider 2iQ. All four were members of the House: two Democrats and two Republicans.
The lack of trading activity suggests that the ranks of lawmakers holding crypto have not changed much since December 2021, when a Wall Street Journal review found that only 11 members of Congress were invested in it.
Members of Congress and industry observers pointed to a range of factors that could explain the reticence, from skittish financial advisers to conflict-of-interest concerns. One crypto skeptic, meanwhile, said he thought the lack of trading activity might simply be a case of “do as I say, not as I do.”
“If those numbers are accurate, and only a few members of Congress own it, it reflects where most Americans are,” said Mark Hays, a senior policy analyst at Americans for Financial Reform. “Most Americans don’t see a lot of utility and are worried about how speculative and risky crypto looks.”
Opinions on the Hill vary wildly as to why members of Congress hold so little cryptocurrency — and whether they should. One enthusiastic senator told The Intercept that Congress was just too old to get what crypto was all about, and a more skeptical one said she was glad so few members created conflicts of interest by owning securities they are tasked with regulating.
“Crypto Capital”
The dearth of crypto trading on Capitol Hill is especially stark because the last Congress adopted crypto as one of its pet causes, and the new Congress seems even more eager to push crypto into the mainstream.
In May 2024, the House overwhelmingly passed the industry’s favorite piece of legislation. The bipartisan bill, called the Financial Innovation and Technology for the 21st Century Act, would transfer oversight of many digital assets from the Securities and Exchange Commission to a more lightly staffed regulator seen as friendlier to the industry, the Commodity Futures Trading Commission.
The Senate never took the bill up, but the November election made clear that crypto would have many champions in both chambers of Congress.
Ahead of the ballot, crypto-allied super PACs amassed a $200 million war chest and spent with abandon to prop up candidates from both parties willing to toe the industry line. The results were very good for crypto, which pulled off a complete sweep in contested Senate elections and racked up a winning record in House races.
“Our Founding Fathers would have been bitcoiners,” said Republican Ohio Senate candidate Bernie Moreno, who unseated the incumbent Democrat Sherrod Brown. “They believed in decentralization of power and control. That’s what this is.”
Moreno said he held bitcoin but sold it all before the election.
Then there was the man at the top of the ticket — Donald Trump — who has promised to make the U.S. the “crypto capital of the planet.”
Sen. Ted Cruz, R-Texas, speaks during the Empower Energizing Bitcoin conference in Houston on March 31, 2022. Photo: Mark Felix/Bloomberg via Getty Images
Crypto Congress
The hyperbolic statements of support from many elected officials about crypto contrast sharply with what they actually hold in their portfolios.
Back in 2021, a Wall Street Journal review found that only two out of 100 senators and nine out of 435 House representatives held crypto.
In the Congress that just ended, only four members bought or sold crypto, despite an industrywide rebound in the wake of the collapse of Sam Bankman-Fried’s FTX fraud.
Besides Collins, the other crypto traders in the last Congress were Reps. Barry Moore, R-Ala.; Shri Thanedar, D-Mich.; and Jeff Jackson, D-N.C.
The ignorance, said Sen. Ted Cruz, R-Texas, could pose a problem for would-be backers of regulations: “It’s one of the dangerous things, when Congress tries to regulate crypto, is there are so few members that have any familiarity with it, that there’s an enormous danger of wreaking havoc and unintended consequences, which is one of the reasons I have been the leading advocate of a light touch from government on crypto to give it space to grow.”
That line of argument is grating for crypto skeptics like Hays.
“We have argued that, for the most part, the industry already has a regulatory framework that it needs to follow, because even though it offers products via new technological platforms, these products aren’t that different from existing securities,” Hays said.
Meanwhile, Sen. Cynthia Lummis, R-Wyo., thought the lack of crypto investors on the Hill might be due to how such purchases would be perceived.
“I don’t think they want to buy the brain damage of criticism,” she said in an interview. “We’re already beat up over owning stocks, so why add another asset that you’re just going to get beat up over? I think that’s part of it.”
Lummis is one of the most bombastically enthusiastic members of Congress when it comes to bitcoin, and has said that she owned five bitcoins before putting her holdings into a blind trust.
Since Trump’s election, bitcoin has soared from a price of about $60,000 over the summer to around $100,000 this week. Asked whether she still holds any of the cryptocurrency, Lummis said, “I hope it’s still there, obviously.”
The “Let’s Go Brandon” Debacle
One of the thinning number of elected officials to call for strict crypto regulation is Sen. Elizabeth Warren, D-Mass., who said she was grateful that few members dabble themselves.
“I think it’s a terrible conflict of interest to own individual investments where Congress has important oversight,” she told The Intercept. “It’s just a plain old conflict of interest.”
If lawmakers needed an object lesson in how quickly crypto trading can create conflict-of-interest allegations, the saga of Madison Cawthorn and the “Let’s Go Brandon” coin provided one.
Cawthorn, who represented North Carolina’s 11th Congressional District for a single scandal-stained House term, bought 180 billion of the anti-Joe Biden meme coins in December 2021 at prices below market value. He then went on to promote the coin and sell it, all the while failing to follow disclosure rules. The House Ethics Committee ultimately fined Cawthorn $14,000.
During the meme coin’s monthlong life, Let’s Go Brandon — named for a right-wing slogan used as code for “Fuck Joe Biden” — soared to a $570 million market capitalization on claims that it would sponsor a NASCAR driver before plummeting to $0 on news that it would not. Litigation over the coin lives on.
According to Garrick Hileman, an independent analyst who previously served as the head of research at Blockchain.com, such rapid changes of fortune are not uncommon for meme coins.
“What people typically mean when they discuss a meme coin is something that doesn’t even actually pretend to offer anything other than entertainment,” Hileman said. “Something may be funny, something that captures a moment. It’s not even pretending to solve something like the high costs of cross-border payments.”
Hileman said some digital assets seem to have found their footing for certain uses, such as Bitcoin as an alternative to gold.
“Meme coins, on the other hand, have literally no usefulness beyond their buzz, their lulz, whatever narrative is propping it up,” Hileman said. “And if that narrative or attention shifts, then you see the price collapse.”
None of that appears to have dissuaded Collins, the representative from Georgia, from buying up Ski Mask Dog. A classic meme coin, Ski Mask Dog promotes itself with pictures of dogs in ski masks and calls itself a bulwark against “manipulation by powerful entities.”
After news of Collins’s first buy became public, the coin more than doubled in value to gain over $100 million in market capitalization, a shift that cryptocurrency outlets attributed to his purchase.
Collins did not return a request for comment, but he’s previously given a curt explanation for buying Ski Mask Dog.
“I liked the coins, so I bought them,” Collins told the outlet Decrypt. “Washington and Wall Street have stigmatized emerging technology in the crypto ecosystem for far too long, and it’s about time that we start treating this industry with the respect it deserves.”
The enthusiasm for meme coins divides Collins from many in the crypto community. Investor and financial adviser Ric Edelman said that there was a “level of skepticism” about meme coins from the rest of the industry.
“The vast majority of them are more speculative in nature, don’t have a legitimate business use case behind them and often are involved in scams and manipulative trading activities,” he said. “So it’s not merely that members of Congress shouldn’t be doing this; I don’t think the vast majority of Americans should be doing this.”
Edelman said he would support forcing members of Congress to put all of their holdings — crypto or otherwise — in a blind trust so they would not be able to personally manage their investments.
During the last session of Congress, then-Rep. Abigail Spanberger, D-Va., sponsored legislation that would have mandated the blind trusts. Dozens of bipartisan co-sponsors signed on, but the bill never got a vote.
Hileman, the crypto researcher, said he was wary of members of Congress buying meme coins, though he called the issue “more intellectually complicated” than straightforward bribery.
“My knee-jerk reaction is, of course they shouldn’t be doing this,” Hileman said. “It just seems like the signaling value of a sitting member of Congress is quite powerful, and the potential for abuse is significant.”
Meta CEO Mark Zuckerberg arrives in Seoul on Feb. 27, 2024. Photo: SeongJoon Cho/Bloomberg via Getty Images
In a shameless act of genuflection toward the incoming Trump administration, Meta CEO Mark Zuckerberg announced Tuesday that his social media platforms — which include Facebook and Instagram — will be getting rid of fact-checking partners and replacing them with a “community notes” model like that found on X.
There could be little doubt about whom Meta aimed to please with these changes: Donald Trump and his far-right political movement.
In a video message explaining the announcement, Zuckerberg framed the new policies in the Republican lexicon of “free expression” against “censorship,” echoing right-wing talking points about how the social media platform’s third-party fact checkers have been prone to “political bias.”
And ending the fact-checking program was a direct demand of Trump’s pick for Federal Communications Commission chair and current FCC commissioner, Brendan Carr, according to The Verge.
Then there was the venue: News of the changes was first shared by Meta’s chief global affairs officer Joel Kaplan in an exclusive on “Fox & Friends,” Trump’s favorite show.
Zuckerberg and his executives’ naked pandering is worthy of contempt. As is the tech mogul’s decision last month to donate $1 million to Trump’s inauguration fund.
Zuck is just one of the most prominent Silicon Valley billionaires making moves to lick the president-elect’s boots. OpenAI CEO Sam Altman and Jeff Bezos’s Amazon both donated $1 million to the Trump fund. And Elon Musk’s ultra-MAGA performance needs no mention. There’s nothing surprising about the machinations of the mega-rich when it comes to aligning with power.
The full effects of these shifts toward less accountability remain to be seen, but we can be confident they will poison the discourse with even more right-wing garbage.
Meta platforms will now follow in the footsteps of X and become more filled with unchecked, reliably racist conspiracy theories, a proliferation of neo-Nazi accounts, hate speech, and violence. Zuckerberg himself admitted in his announcement that “we’re going to catch less bad stuff.”
None of this, though, should lead us to draw the wrong conclusions about the value of social media fact-checking, or fact-checking more broadly, when it comes to combating the far right and the appeal of its conspiratorial world view. For a decade now, liberals have wrongly treated Trump’s rise as a problem of disinformation gone wild, and one that could be fixed with just enough fact-checking.
A case in point is Trump’s forthcoming second term itself: He won back the White House while spewing unfounded, racist lies about Haitian immigrants stealing and eating pets, among other falsehoods — lies that were again and again debunked by every establishment media outlet.
An entire liberal cottage industry of fact-checking Trump and his allies on news and social media, even removing Trump from major social media platforms, did not diminish his support nor expunge dangerous disinformation from the echo chambers primed to receive and propagate it.
The end of the fact check era, however, is worth examining because of how it heralds another liberal failure with little to offer in the way of alternatives. It is just another capitulation in the battle against fascism. Liberals, it turned out, were never really the “resistance” that they pretended they were.
The idea that Zuckerberg is acting out of a renewed, conveniently timed commitment to “free speech” is laughable, and we’d be wise to expect further bending to Trump and Republican whims.
Big Fact Check
Facebook introduced its third-party fact-checking program in 2016, following Trump’s first election victory. The system relied on 90 organizations worldwide to address “viral misinformation.”
In 2021, in response to Trump’s role in the January 6 Capitol attack, Meta banned the then-president from its platforms. Around that time, over 800 QAnon conspiracy groups were also removed from Facebook. Social media censorship became a hot button for the grievance-driven Trump and his far right.
None of the right wing’s agenda, however, was about free speech for all. Consider that, at the same time, the right was rallying behind book bans in schools. They didn’t utter a peep when, as The Intercept reported in 2020, dozens of left-wing and antifascist groups were also banned from Facebook. And Meta has been engaging in what Human Rights Watch called “systematic and global” censorship of Palestinian and Palestine-solidarity content on its platforms.
Nonetheless, the right has successfully created a victim narrative out of content moderation.
Enter Zuckerberg and the utter lack of subtlety in his announcement. These new policies were clearly not meant to serve the political left or censored pro-Palestinian users. “We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate,” Zuckerberg said, issuing a thinly veiled signal that anti-trans, anti-immigrant hate would face fewer roadblocks.
With history as a guide, it’s hard to imagine that pro-Palestinian speech, alongside speech for environmental, racial, and gender justice, won’t face policing under a Trump administration. The Republican-led Congress is already champing at the bit to condemn such activism as terrorism.
Smashing Liberal Shibboleths
On the eve of Trump’s second term, liberal shibboleths about speaking truth to power are worse than outdated. Meta imitating X’s permissive approach to right-wing fearmongering is not a welcome development, nor is the loss of funding that journalistic and research organizations got for partnering with Meta on fact checks. Yet fact checks were never going to deliver us from the political context in which far-right propaganda thrives — one of alienation, austerity, inequality, and fearfulness.
I’m not the first to point out that narratives about the current scourge of disinformation, largely propagated by establishment media outlets fearful of their diminished authority, failed to account for why certain conspiracies and falsehoods were able to appeal to huge but specific swaths of the population.
Disinformation, though, has been a convenient narrative for a Democratic establishment unwilling to reckon with its own role in upholding anti-immigrant narratives, repeating baseless fearmongering over crime rates, and failing to support the multiracial working class.
In an essay questioning popular narratives around “big disinformation,” Joe Bernstein recounted that posts labeled as false by Facebook only saw an 8 percent reduction in sharing — showing how the designation doesn’t stop information from spreading. Bernstein noted that the story of disinformation was one that tech giants could use to their advantage, as its very premise — that social media content has a nearly all-powerful ability to convince and persuade users — is a helpful narrative when appealing to advertisers. It’s also largely unfounded.
The persuasion power of social media posts has been overstated, while the political and socioeconomic contexts in which conspiracies thrive have been significantly understated in the disinformation discourse. QAnon appeals disproportionately to evangelicals, for instance, and Covid skepticism gained a foothold because of the experiences that formed Americans’ opinions of public health authorities. “There is nothing magically persuasive about social-media platforms,” Bernstein wrote.
The nails are firmly in the coffin, and the coffin has been buried — so long dead is the idea that social media platforms like X or Instagram are either trustworthy news publishers, sites for liberatory community building, or hubs for digital democracy. Instead, we need to think about the internet as a place driven exactly by the motives of the people who own — and profit from — these platforms.
“The internet may once have been understood as a commons of information, but that was long ago,” wrote media theorist Rob Horning in a recent newsletter. “Now the main purpose of the internet is to place its users under surveillance, to make it so that no one does anything without generating data, and to assure that paywalls, rental fees, and other sorts of rents can be extracted for information that may have once seemed free but perhaps never wanted to be.”
Social media platforms are huge corporations for which we, as users, produce data to be mined as a commodity to sell to advertisers — and government agencies. The CEOs of these corporations are craven and power-hungry.
Zuckerberg, lest we forget, is still facing a Federal Trade Commission antitrust lawsuit over claims that Meta bought Instagram and WhatsApp to crush competition. Luckily for him, Trump responds well to bootlicking.
Every day, meteorologist Hannah Wangari takes the free graphs and maps produced by the five forecasting models she subscribes to and interprets what she sees. “What’s the likelihood of rain in different parts of the country?” she might wonder. “How much of it is likely to fall within the next 24 hours?” Answering such questions quickly and accurately is essential to the potentially life-saving work she and others do at the Kenya Meteorological Department.
As climate change drives ever more frequent and intense extreme weather, the need for faster, more precise predictions will only grow. Heavy rain and floods wreaked havoc this year, killing hundreds and displacing countless more in the United States, Spain, central Europe, and a great swath of Africa, where over 7.2 million people have been affected. An estimated 267 people died in Kenya alone and another 278,000 were displaced as floods impacted 42 of the nation’s 47 counties last year. With torrential storms projected to intensify by 7 percent for each 1.8 degrees Fahrenheit of warming, predicting precisely when and where such events will happen is key to saving lives and livelihoods.
Yet that can be a time-consuming and expensive endeavor. Traditional forecasting relies upon a method called numerical weather prediction. This physics-based technique, developed in the 1950s, requires multimillion-dollar supercomputers capable of solving complex equations that mimic atmospheric processes. The intensive number-crunching can take hours to produce a single forecast and is out of reach for many forecasters, particularly in the developing world, leaving them to rely upon data produced by others.
Tools driven by artificial intelligence are becoming a faster, and in many cases more accurate, alternative easily produced on a laptop. They use machine learning that draws from 40 years of open-source weather data to spot patterns and identify trends that can help predict what’s coming. “They’re using the past to train the model to basically learn the physics,” said computer scientist Amy McGovern, who leads the NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography at the University of Oklahoma.
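To make that train-on-history idea concrete, here is a minimal sketch in Python with NumPy and scikit-learn. The data is synthetic, standing in for the decades of reanalysis records that real systems such as GraphCast train on, and every variable name here is illustrative, not drawn from any actual forecasting codebase.

```python
# Illustrative only: a toy version of "learn the dynamics from history."
# Real systems train deep networks on decades of reanalysis data;
# synthetic numbers stand in for that record here.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Pretend each row is a coarse atmospheric state (pressure, temperature,
# humidity at a few grid points) observed at some time t...
states_t = rng.normal(size=(10_000, 12))

# ...and the historical archive records the state six hours later.
# A fixed linear drift plus noise plays the role of the true physics.
true_dynamics = 0.1 * rng.normal(size=(12, 12))
states_t6 = states_t @ true_dynamics + rng.normal(scale=0.01, size=(10_000, 12))

# Fit a model that emulates the dynamics purely from examples.
emulator = Ridge(alpha=1.0).fit(states_t, states_t6)

# Forecasting is then just repeated application of the learned step,
# which is cheap enough to run on a laptop once training is done.
state = states_t[0]
for _ in range(4):  # 4 steps of 6 hours = a 24-hour forecast
    state = emulator.predict(state.reshape(1, -1))[0]
print("toy 24-hour forecast:", np.round(state[:4], 3))
```

The expensive part, as with the real models, is the one-time training; producing a new forecast afterward is just a handful of fast matrix operations.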
AI-powered methods developed by the likes of Google, Oxford University, and NVIDIA can provide accurate forecasts within minutes, giving governments more time to prepare and respond. “More frequent updates help agencies monitor rapidly evolving conditions like storm paths,” Dion Harris, who leads the Accelerated Data Center at NVIDIA, told Grist. “This improves decision-making for evacuation planning, infrastructure protection, and resource allocation.”
Users like the government meteorologists in Nairobi can augment these models with local data on things like ground temperature and humidity and free satellite data to tailor forecasts to specific geographic areas. The Kenya Meteorological Department is working with Oxford, the European Center for Medium-Range Weather Forecasts, Google, and the World Food Programme on an AI model that improves the accuracy of rainfall forecasts.
Of the five traditional models the Kenya Meteorological Department uses, four provide only the free charts and maps Wangari studies so closely. Accessing forecast data requires paying a licensing fee or owning a supercomputer with which to run models. Instead, she and her colleagues analyze the open-source data they receive to ascertain what’s coming. The machine-learning model developed with Oxford allows them to assess actual forecasting data to determine the likelihood of extreme weather. “For the first time, we’re able to produce what you call probabilistic forecasts,” she said. “People are more likely to take action if you give them the probability of something happening.”
“Now we can say things like, ‘This region is going to experience two inches of rain in the next 24 hours and there’s a 75 percent probability that this threshold will be exceeded,’” she said.
AI models only take minutes to produce a forecast, providing the ability to run many more of them and survey a wider range of possible outcomes. That allows authorities to play what McGovern calls “the what-if game” and say, “If this happens, we need to evacuate this area” or “If that happens, we might want to take this action.” They can anticipate the most likely scenario or prepare for the worst case by, say, preemptively evacuating people with disabilities.
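As a rough illustration of how such a probabilistic forecast can be derived, the short Python sketch below treats a batch of cheap model runs as an ensemble and counts how many exceed a rainfall threshold. The rainfall values are synthetic stand-ins for real model output, and the ensemble size and threshold are assumptions chosen for the example.

```python
# Illustrative only: turning an ensemble of fast model runs into a
# probabilistic statement such as "a 75 percent chance of exceeding
# the threshold." The values are synthetic stand-ins for model output.
import numpy as np

rng = np.random.default_rng(7)

# Suppose 50 perturbed runs each predict 24-hour rainfall (in inches)
# for one location; fast AI inference makes large ensembles practical.
ensemble_rainfall = rng.gamma(shape=4.0, scale=0.6, size=50)

threshold_inches = 2.0
p_exceed = (ensemble_rainfall > threshold_inches).mean()

print(f"ensemble mean rainfall: {ensemble_rainfall.mean():.1f} inches")
print(f"chance of exceeding {threshold_inches} inches: {p_exceed:.0%}")
```

The exceedance probability is simply the fraction of ensemble members above the threshold, which is why being able to run many forecasts quickly translates directly into more useful risk estimates.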
The machine-learning method that Oxford developed and Wangari uses has proven more effective than other methods of forecasting rainfall. That is not unusual. Google’s GenCast, unveiled last month, outperformed traditional forecasting models on 97 percent of 1,320 metrics. Its predecessor, GraphCast, proved more accurate than the world’s premier conventional tool, run by the European Center for Medium-Range Weather Forecasts. “AI produces much better results than the physics-based models,” said Florian Pappenberger, deputy director general of the European Center, which plans to launch its own AI model this year. It does so more quickly, too. GenCast produced 15-day forecasts within eight minutes, and NVIDIA claims its FourCastNet is 45,000 times faster than numerical weather prediction.
AI has also proven more accurate in predicting hurricane tracks. GraphCast correctly predicted where Hurricane Lee, which raced through the North Atlantic in September 2023, would make landfall nine days before it hit Nova Scotia — and three days before traditional forecasting methods, a Google scientist told the Financial Times. Two machine-learning models closely predicted Hurricane Milton’s track across the Gulf of Mexico, although they underestimated the storm’s wind gusts and barometric pressure, said Shruti Nath, a climate researcher on the Oxford project. However, these tools are expected to improve as errors are corrected and the models are fine-tuned.
Of course, forecasts are only as useful as the anticipatory actions they lead to. Researchers developing them must work with local meteorologists and others with regional expertise to understand what they mean for communities and respond accordingly, Nath said.
Questions remain about how well machine learning can predict edge cases like once-in-a-century floods that lie beyond the data sets used to train them. However, “they’re actually representing the extremes much better than many of us predicted initially,” Pappenberger said. “Maybe they have learned more physics than we assumed they would.” These tools also do not yet produce all the outputs that a forecaster typically uses, including cloudiness, fog, and snowfall, but Pappenberger is confident that will come in time.
Users may also benefit from hybrid models, like Google’s NeuralGCM, which combine machine learning with physics, an approach that offers the benefits of AI, like speed, with the long-term forecasting ability and other strengths of numerical weather prediction.
While the improved forecasts are meant to help respond to climate change, they also risk contributing to it. The data centers required to run AI consume so much energy that companies like Google and Microsoft are resorting to nuclear power plants to provide it. Still, the supercomputers needed to run numerical weather prediction are energy intensive as well, and GraphCast could be 1,000 times cheaper in terms of energy consumption.
To realize the AI models’ potential to democratize forecasting, McGovern thinks cross-sector collaborations will be key. The computing power needed to train the models lies primarily with the industry, whereas academia — which writes a lot of the code and offers it on the public software platform GitHub — has the luxury of not having to provide quarterly reports, and the government, as the ultimate end user, knows what’s needed to save lives, she explained.
For now, researchers and the private sector are working together closely to refine the technology. “There’s a lot of collaboration, a lot of copying from each other, and trying to improve based on what other people have produced,” said Pappenberger. Many of these tools are freely available to researchers, but their accessibility to others varies from no-cost to low-cost to a price dependent upon the features used or the purchase of specific hardware. Still, the models are cheaper than a supercomputer, and would allow entities like the Kenya Meteorological Department to quickly and easily create forecasts tailored to their local needs at a fraction of the cost of physics-based models.
Crafting a forecast relevant to people in, say, Nairobi or Mombasa using conventional tools requires zooming in on the global maps to obtain more detail, then manually analyzing a lot of data. “With machine learning, you can produce a forecast for a specific point as long as you have the exact coordinates,” Wangari said. That will make it a whole lot easier for her, and others doing similar work, to see what the weather has in store and, ultimately, save lives.
The Democratic Republic of Congo (DRC) has filed criminal cases against Apple, accusing the US-based global tech giant of fueling the war in the country’s eastern region by using in its products what have been deemed “blood minerals”.
“Year after year, Apple has sold technology made with minerals sourced from a region whose population is being devastated by grave violations of human rights,” maintains Robert Amsterdam, founding partner of Amsterdam & Partners LLP.
The Washington DC-based law firm was retained by DRC’s government late last year to investigate supply chains for illegally extracted and siphoned minerals from Congo, especially the “3T” minerals: tin, tantalum, and tungsten.
We continue to discuss the new HBO Original film Surveilled and explore the film’s investigation of high-tech spyware firms with journalist Ronan Farrow and director Matthew O’Neill. We focus on the influence of the Israeli military in the development of some of the most widely used versions of these surveillance technologies, which in many cases are first tested on Palestinians and used to enforce Israel’s occupation of Palestine, and on the potential expansion of domestic U.S. surveillance under a second Trump administration. Ever-increasing surveillance is “dangerous for democracy,” says Farrow. “We’re making and selling a weapon that is largely unregulated.” As O’Neill emphasizes, “We could all be caught up.”
Advanced mobile forensics products being used to illegally extract data from mobile devices, Amnesty finds
Police and intelligence services in Serbia are using advanced mobile forensics products and previously unknown spyware to illegally surveil journalists, environmental campaigners and civil rights activists, according to a report.
The report shows how mobile forensic products from the Israeli firm Cellebrite are used to unlock and extract data from individuals’ mobile devices, which are then infected with a new Android spyware system, NoviSpy.
The hacking attempts from overseas had, however, affected a couple of local companies in the hospitality industry, whose systems were compromised, he said.
“We were able to provide support to reduce any damage caused by these cyber security threats,” Brown said.
The Financial Transactions Reporting Amendment Bill’s primary purpose is to implement the recommended actions put forth by the Global Forum on Transparency and Exchange of Information for Tax Purposes.
The Forum conducts peer reviews and assessments across more than 130 jurisdictions, of which the Cook Islands is one. The aim of these reviews is to evaluate the country’s ability to cooperate effectively under established standards, Brown explained.
‘Increasing collaboration’
“The financial transactions reporting requirements that our country have signed up to is an example of the increasing collaboration among international jurisdictions to share information. Additionally, the need to protect the integrity of our financial centres and enhance our cybersecurity measures will only intensify as the world increasingly moves toward digital currencies.
“Our initial peer reviews took place in 2017, and the Cook Islands received a very positive rating for its capacity to exchange information.
“In light of the subsequent growth and improvements in both the quality and quantity of information exchanges, as well as enhancements to the standards themselves, a second round of assessment was initiated just last year. This latest round includes a legal framework assessment and peer reviews that also cover technical, operational, and information security aspects.”
Brown said that during this process several gaps in the legal framework were identified, and the Global Forum provided recommendations aimed at helping the country maintain a positive rating.
He said Cook Islands is required to address these recommendations by implementing the necessary legislative amendments by the 31st of this month in order to qualify for another round of onsite assessments and reviews in 2025.
The Prime Minister said the security of information is very important, and the security of tax information, in particular, is of significant importance to the Global Forum.
He added that some of the areas identified for improvement extend beyond legislative requirements.
Security codes
“For example, all doors in the RMD (Revenue Management Division) office that hold tax information must have security codes. The staff that work there must have proper identification cards with ID cards to swipe and allow access to these rooms,” Brown said.
“It is a big change from how our public service has operated for many years and maybe we do not see the actual need for this level of security. However, the Global Forum has its standards to maintain and we are obligated to maintain those standards, so we must follow suit.
“Not only that but now there’s also a requirement for proper due diligence to be conducted on employees or people who will work inside these departments. It is these sorts of requirements that compels us in our government agencies, many of them now, to change the way we do things and to be mindful of increased security measures that are being imposed on our country.”
Justice Minister Vaine “Mac” Mokoroa, who presented the Bill to Parliament, said: “The key concern here is to ensure that the Cook Islands continues to be a leader in the trust industry . . . our International Trust Act has been at the forefront of the Cook Islands Offshore Financial Services Industry since its enactment 40 years ago, establishing the Cook Islands as a leader in wealth protection and preservation.”
“At that time, these laws were seen as innovative and ground-breaking, and their success is evident in the growth and development of the sector, as well as in the number of jurisdictions that have copied them, either in whole or in part.”
Mokoroa said that the Cook Islands Trust Companies Association, which comprises seven Trustee Companies licensed under the Trustee Companies Act, along with the Financial Supervisory Commission, conducted a thorough review of the International Trust Act and recommended necessary changes. These changes were reflected in the Financial Transactions Reporting Amendment Bill.
Hackers have gained sweeping access to U.S. text messages and phone calls — and in response, the FBI is falling back on the same warmed-over, bad advice about encryption that it has trotted out for years.
In response to the Salt Typhoon hack, attributed to state-backed hackers from China, the bureau is touting the long-debunked idea that federal agents could access U.S. communications without opening the door to foreign hackers. Critics say the FBI’s idea, which it calls “responsibly managed encryption,” is nothing more than a rebranding of a government backdoor.
“It’s not this huge about-face by law enforcement,” said Andrew Crocker, the surveillance litigation director at the Electronic Frontier Foundation. “It’s just the same, illogical talking points they have had for 30+ years, where they say, ‘Encryption is OK, but we need to be able to access communications.’ That is a circle that cannot be squared.”
The Hack
At least eight telecommunications companies were compromised in the hack, which was first made public in September and has been described as ongoing by U.S. officials.
The hackers have swept up vast amounts of data about phone calls and text messages in the Washington, D.C., area, officials said at a press conference last week. That information includes details about when and where calls were placed and to whom, but not their contents.
A smaller circle of about 150 people had the contents of their communications hacked, including real-time audio of calls, according to a report in the Washington Post last month. The targets of that hack included Donald Trump, JD Vance, Trump’s lawyer, and the Kamala Harris campaign.
Another “vector” of the attack, according to government officials, was the interface where law enforcement agencies request wiretaps from telecom companies under the 1994 Communications Assistance for Law Enforcement Act.
Essentially, the CALEA system may have given hackers a shopping list of people who have fallen under FBI suspicion.
It was a development long predicted by privacy advocates. In a blog post last month, encryption expert Susan Landau said CALEA had long been a “national security disaster waiting to happen.”
“If you build a system so that it is easy to break into, people will do so — both the good guys and the bad. That’s the inevitable consequence of CALEA, one we warned would come to pass — and it did,” she said.
The Elusive Golden Key
The FBI has pushed back on the idea that CALEA was the only “vector” for Chinese hackers. It has also rejected the larger moral drawn by privacy advocates, which is that only fully end-to-end encrypted communications are secure.
End-to-end encrypted communications make sure that a written message or voice call is protected from the moment it leaves your device to the moment it arrives at its destination, by ensuring that only the sender and the recipient can decrypt the messages, which are unreadable by any third party — whether that happens to be a Chinese hacker or the FBI.
That type of encryption does not protect communications if a third party has gained access to one of the communication endpoints, such as a phone or a laptop. Hackers can still plant spyware on phones, and the FBI, civil liberties advocates have long noted, can still get into phones by a variety of methods, just on a case-by-case basis.
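For readers curious what that guarantee looks like in practice, here is a minimal sketch of public-key end-to-end encryption in Python, assuming the PyNaCl library. Real messengers such as Signal layer key ratcheting and identity verification on top of primitives like these, so this is a simplified illustration, not a description of any particular app.

```python
# Illustrative only, assuming the PyNaCl library (pip install pynacl).
# Real messengers add key ratcheting and identity checks on top of this.
from nacl.public import Box, PrivateKey

# Each party generates a key pair; private keys never leave the device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at the usual place")

# Only Bob, holding his private key, can decrypt what Alice sent.
receiving_box = Box(bob_private, alice_private.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet at the usual place'

# Anyone in the middle (a carrier, a hacker, an agency) sees only the
# ciphertext; without a private key, decryption raises CryptoError.
```

The point of the design is that no intermediary, telecom or otherwise, ever holds material that can decrypt the message, which is exactly the property a mandated backdoor would break.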
Major tech companies such as Apple have endorsed end-to-end encryption in recent years, to the dismay of law enforcement agencies. The feds have complained loudly about criminals “going dark” on them, by using the same veil of encryption that protects ordinary people from scammers, pirates, and eavesdroppers.
The FBI and other agencies have long maintained that there might be some way to give them special access to communications without making things easier for hackers and spies. Security experts say the idea is hogwash. Call it a backdoor, a “golden key,” or something else, those experts say, it can’t work.
In their advice to the public last week, federal officials gave a strong endorsement to encryption.
“Encryption is your friend, whether it’s on text messaging or if you have the capacity to use encrypted voice communication,” said Jeff Greene, the executive assistant director for cybersecurity at the Cybersecurity and Infrastructure Security Agency.
Yet notably, an FBI official on the same call fell back on the idea of “responsibly managed” encryption. The FBI says this encryption would be “designed to protect people’s privacy and also managed so U.S. tech companies can provide readable content in response to a lawful court order.”
From a practical perspective, it is unclear what programs, if any, the FBI has in mind when it calls on people to use “responsibly managed” encryption. The FBI did not respond to a question about any apps that would comply with its advice.
Sean Vitka, the policy director at the progressive group Demand Progress, said the hack has once again provided damning evidence that government backdoors cannot be secured.
“If the FBI cannot keep their wiretap system safe, they absolutely cannot keep the skeleton key to all Apple phones safe,” Vitka said.
Going Dark Is Good, Actually
In a statement, longtime privacy hawk Sen. Ron Wyden, D-Ore., said it was time for government agencies to endorse end-to-end encryption.
“It’s concerning that federal cybersecurity agencies are still not recommending end-to-end encryption technology — such as Signal, WhatsApp, or FaceTime — which is the widely regarded gold standard for secure communications,” Wyden said.
Wyden has teamed up with Sen. Eric Schmitt, R-Mo., to call on the Department of Defense inspector general to probe why the Pentagon did not use its massive buying power to push cellphone carriers to better secure their services when it signed a $2.7 billion contract with AT&T, Verizon, and T-Mobile.
“Government officials should not use communications services that allow companies to access their calls and texts. Whether it’s AT&T, Verizon, or Microsoft and Google, when those companies are inevitably hacked, China and other adversaries can steal those communications,” Wyden said in his statement.
Privacy advocates say that the best thing that people can do to protect themselves from prying eyes is to use some of the same apps recommended by Wyden, such as Signal or WhatsApp.
They added that in light of Salt Typhoon, it is time for law enforcement to call it quits on its long-running campaign in Congress to thwart stronger encryption. Landau, in a November 21 blog post, noted that even former NSA and CIA Director Michael Hayden has endorsed end-to-end encryption.
“For decades, technologists have been making the point that the strongest and best form of communications security is provided by end-to-end encryption; it is well past time for law enforcement to embrace its widespread public use. Anything less thwarts the nation’s basic security needs,” Landau said.
What was your career like before Hello Ruby? Where did your interest in the different strands that came together in the book—coding, illustration, and writing—start for you?
I’m a kid of the 1990s, so computers have always been in my life. My father brought home what was then called a “laptop computer,” but really it was a luggable computer, something you had to push around. He said, “There’s nothing you can do with the computer that can’t be fixed or undone.” Which meant that we grew up with a very fearless attitude towards computers. I always loved creating little worlds with computers. It was like an evolution from Polly Pocket to Sim City to Hello Ruby — the idea that I can control an entire universe with my hands.
When it came time to think about what I would do when I grew up, I never for a moment considered computer science. For some reason, the world of technology or programming never clicked for me. I felt it was dull. It was removed from the world. It was intensely mathematical. It was the early 2000s in Finland, which was known for Nokia and the mobile phone boom, and I went to business school thinking that that’s what I would become: a middle manager, a business person at Nokia.
Then I was lucky enough to do an exchange at Stanford University. I did this mechanical engineering course that is legendary over there. It’s about getting gritty and building something with your hands. And I discovered this world of technology and startups, and this very optimistic Bay Area culture of the early 2010s. I started a nonprofit that taught women programming, because at the time I was rediscovering my childhood passion for creating things with computers. And then I ended up working in New York for a company that was democratizing coding education. So all of these trends came together.
How did you make space for drawing and writing while you were in business school, or thinking about integrating it into this career that might have been less creative?
I think we all have these curiosities that keep coming back to us. Often, ideas nag me for years before they actually become anything. It was the same with the Hello Ruby book series. It was something that I started when I was at Stanford, doing little doodles in the margins of my computer science books, and then it just gathered more ideas, like a snowball. It kept growing and growing and growing.
In very practical terms, in New York the Codecademy day started at around 10 A.M., so I would set an alarm and use an hour in the morning to carve out time to do something new. It’s a learning curve every time you try out a new medium, so there needs to be enough time reserved for exploration and wondering and trying out new things. You need to have these folders in your head and gather a lot of ideas from a lot of different industries. With Hello Ruby, I spent a lot of time reading children’s books and looking at programming curriculums before jumping into illustrating it and making the book happen.
What gave you the desire, or even the confidence, to take it from an idea that was just for you and make it into a children’s book?
Sometimes I wished someone else would have done it. But the idea just kept coming back to me and saying, “No, wake up, you need to make it happen.”
I think the confidence came from the fact that I had a very unique perspective. Every single project I’ve been good at has been a very niche thing where my personal, subjective experience has been the thing that makes it work. I would be terrible at writing a K–12 curriculum for computer science, or writing a research paper on how to do urban planning for playgrounds. I’m always going for projects where it’s an asset that I have a single, curious viewpoint into the world. That’s where my confidence comes from — that no one else will have this same exact taste or the same exact view.
Hello Ruby is so whimsical. Everything in the world of computer science has a character, from GPUs and CPUs to the computer mouse. Did that idea arrive fully formed in your mind, or was it a more painstaking process of creating a narrative?
The character-driven idea came immediately. I’ve always had this very animistic sense of the world, where a lot of the things around me have a sense of agency and character—this almost Japanese sense that there are souls in the rocks and so forth. So in that sense it was very easy for me to anthropomorphize the concepts and ideas.
But even before the characters came to be, I started to think about how to make computer science more tactile. How can we make it more understandable with our fingers? That’s the way I explore new ideas. The little girl character, Ruby, was born because Ruby was the first programming language that I felt very comfortable expressing myself in, and every time I would run into a problem, like what is object-oriented programming or what is a linked list, I would do a little doodle of the Ruby character. But I think the most profound part of Hello Ruby is the fact that all these storybooks also have activities that help you explore computation for play or crafting or narration. That also came quite early on, because that’s the way I would have preferred to learn.
It seems like the playful or self-expressive side of technology is rarely part of computer science education. Why do you think that’s missing?
I think computer scientists love to stay in their heads, whereas when we are four years old, we explore the world with our fingers. Touch is missing from a lot of computer science curriculums, and touch is profound when we are learning.
We are so in love with the idea that you can just transfer knowledge from one person’s brain to another—this Matrix-like downloadable idea—that we forget that a lot of our learning happens through narrative, through context, through great educators. It’s not as linear a process as some technologists want to make it seem, especially in early childhood.
The final thing that is often missing in computer science education is reciprocity. Knowledge happens not in a transfer but together. As much as the educator is there to teach, the child is also there to teach.
There’s very little open-endedness in computer science curriculums. There’s project-based learning, but still, there are often rubrics that say that this is right or this is wrong, and there’s only one way of solving a problem. Whereas for me, the whole beauty and joy of computer science and programming is the fact that there are multiple ways of solving a problem and expressing yourself. Some of them are more elegant than others. But the teacher doesn’t need to be the person who transfers the knowledge. The student can bring their own experiences and ideas, and there’s constant reciprocity between the one who teaches and the one who learns.
So much of your work has centered around early childhood education. What drew you to that field?
I suppose the age I associate with most is four years old. Four is the pinnacle of life, when children are like philosopher kings. You can’t have a mundane conversation with a four-year-old or a six-year-old.
A lot of our foundations are laid in early childhood. I used to talk about a study that said that around the age of 12, people start to have these self-limiting ideas, for example whether they are math people or non-math people. But actually, there are more studies that say it happens even earlier. It’s around the age of six that kids start to say, “No, I’m not a person who can learn coding.” It’s such a pity, because kids around the age of six can be anything, but they start to self-define at that age already.
Hello Ruby was a real inflection point in your career. It’s been published around the world and translated into dozens of languages. How did things change for you after it was published? Did it set you on a new trajectory?
Absolutely. I wasn’t trained as a children’s book author. I didn’t belong in the industry. I still don’t think I do. But at least I have the credibility to be working on this now. I also recognize that it’s a huge privilege that I’ve been able to fund this work throughout the years. I know a lot of children’s book authors need to have a lot of different strategies to make a living.
Coming from this background of Silicon Valley and Stanford, I put a lot of pressure on myself. What does success look like? What do the next steps look like? Should I open a school? Should this be a big company that employs a lot of people? One of the choices I’m most proud of is that early on, I realized that what success looks like for me is freedom and curiosity and the ability to follow whatever path I take. A lot of the time, that’s not what building scalable companies looks like. For me, the past decade has been a very curiosity-led decade.
I don’t play a lot of video games, but I play Zelda and I only do the side quests. I’m not at all interested in completing the game. I just like meandering in the world and doing silly, mundane things, like collecting apples and learning to make all the recipes in the cooking side quest.
That sounds like the Platonic ideal of a creative life, being able to have a career that lets you go on those creative side quests.
A big part of it is that I’ve been able to talk about all of this. The way I funded a lot of my work is by doing speaking gigs. Books definitely don’t pay enough to sustain everything. I’ve been lucky in that technology companies are curious and interested in this. That’s something no one could have predicted in advance — that you can become a children’s book author who speaks about these topics for leadership at change management companies. Maybe that’s part of the curiosity: keeping your eyes open and mixing and matching things.
You’re now involved in playground design. The first one you helped design just opened in Finland. Tell me about your role in this project, and what you were aiming to bring to it.
The idea for the playground started with the second Hello Ruby book in 2016. I wanted to do an Alice in Wonderland-like book where Ruby falls inside the computer while trying to help the mouse find the missing cursor. The Computer History Museum had an exhibition where you could go inside this gigantic computer and learn how it works, and I thought it would be so cool to do something like that for a museum. I applied for funding but nothing really clicked.
Then, in 2020, early in the pandemic, playgrounds were the only thing that was open in Helsinki, and they became such a lifeline. I noticed that kids on the playground were doing these behaviors that I had always connected with the ideal school environment. They were self-directed, they were doing project-based learning, they were solving problems on their own. Grownups were there, but we were not on a podium telling them what they should be learning. That’s where the idea for the playground started.
There’s a long lineage of artists working with playgrounds. There’s Yinka Ilori, a Nigerian-British artist, who created a playground in London. There’s Aldo van Eyck, who was also an architect, who created these very striking abstract playgrounds in postwar Amsterdam. There’s Isamu Noguchi, a Japanese-American artist, who created these intense and whimsical and abstract playgrounds. It’s an interesting place between public space and public art, and also very physical and very educational.
In Finland, we have another layer around playgrounds that is underutilized globally. We have the play structures, the actual physical things, and we put a lot of effort into thinking about how those play structures could mimic the ideas of computer science. That’s the “hardware” side of it. But then there’s the “software” side of playgrounds, which in Finland are run by playground instructors. They are often university-educated people whose sole task is to think about programs and educational content, for example for first-time mothers with small kids or for afterschool programs for nearby schools. That gave us a huge opportunity to think about pedagogical content and materials we could create for the park. I hope we start to see more pedagogical content being created around everyday neighborhood playgrounds.
You’ve found ways to write and draw as part of what you do for a living, but is there any creative outlet that you keep that’s just for you, that you do just for the joy of it?
Cooking, I think. It’s meditative. It happens on a daily basis. It creates a sense of community, of familiarity. And it’s intensely creative. The best kind of cooking is when you have a fridge with five different things, and then you make something out of that — it’s the constraints.
I would never want to be a food influencer or turn that into something that influences my work publicly in any way. But almost everything else goes into those folders of ideas.
You’ve lived in Helsinki, New York, and now Paris. How did the cultures of those places influence your work, or the way you think about your work?
I absolutely think that place influences the way we see the world. When I was a young student in post-Nokia Helsinki, my options looked very different from when I was a student at Stanford or when I was an early-stage employee at a startup in New York. Helsinki gave me my personality and the unique vision of what I want to do and how I want to do things. Then New York gave me the permission to put those things together. I remember hearing people say, “I’m a barista slash actress slash dog sitter,” and I’m like, “Oh, you can do that!” You can conjugate and add new ideas and identities to one another. In Finland, you’re allowed to be one thing — you’re either a teacher or a children’s book author or a playground maker.
I’m still figuring Paris out. But I think because Paris is such a historic city, and computer science and technology as a field is very uninterested in history, I’ve noticed myself being interested in where ideas come from and how they grow. Paris is a wonderful place to observe those ideas, because it’s so full of art and history. An engineering mindset clashes with the culture, which is more about thinking about things over the very long term, valuing art and taste, and being able to converse in many different disciplines as opposed to being narrowly very good at what you do.
What advice would you give to someone who would like a similar career to yours, or any career where they can translate all their different passions into something sustainable?
Pay attention to unlikely niches and unlikely combinations, and choose topics that accrue over time. There are certain disciplines where you need to be young or you need to be in a certain geographic location in order to succeed. It takes time to change education or write books, and I think my secret has been that I’ve always chosen projects that benefit from time as opposed to requiring a very fast execution. And the combinations can be very weird.
Philip Glass said that the legacy is not important, it’s the lineage. Who are the people who came before you? There’s no one who did exactly what I did, but there’s Björk, who combined nature and technology together, and there’s Tove Jansson from my native Finland, who built a beloved children’s brand around Moomins. There’s David Hockney, who has a deep curiosity around technology and perception. There are countless examples of people who have taken a certain path. Have a little hall of fame of those people. Thinking of your work as a continuation or a lineage of those people helps sustain you on the days when it gets tough.
Linda Liukas recommends:
Cooking in Zelda: Tears of the Kingdom: I love the almost meditative experience of gathering herbs, fish, and wild truffles in the game, with zero interest in finishing the actual quests.
Björk’s Biophilia: Biophilia remains, for me, one of the most imaginative ways to combine science and art. Each song explores natural phenomena like gravity, tectonic shifts, or crystal formations, paired with musical ideas such as rhythm, arpeggios, or chords. Björk even made an app (in 2011!) and a pedagogical curriculum to accompany the album. Her approach—allowing students to experience something profound without explicitly telling them they’re learning—has inspired me greatly.
Books for the curious: Every scientific discipline should have at least one writer who presents the field in a literary, experimental way. I want to understand the beauty before the formulas and equations. My current favorites include Carlo Rovelli’s Seven Brief Lessons on Physics for physics, Hannu Rajaniemi’s Darkome for synthetic biology, Benjamin Labatut’s Maniac for the history of physics, and Robin Wall Kimmerer’s Gathering Moss: A Natural and Cultural History of Mosses for botany.
Hours after Luigi Mangione was arrested for his alleged role in the killing of UnitedHealthcare CEO Brian Thompson, a clip appeared on a YouTube page bearing the suspect’s name and image. Titled “The Truth,” it began with a countdown timer and the message: “If you see this, I’m already under arrest.”
Next, on the bottom-right corner of the screen, appeared the word “Soon…” Then, flashing briefly, was the date “Dec 11th.” The video concluded with a final message: “All is scheduled, be patient. Bye for now.”
The cryptic video has since been deleted from YouTube. It had been uploaded to a channel titled PepMangione — the same username as Mangione’s Twitter account. The YouTube page contained publicly known information about Mangione, such as his age and alma mater.
In the frenzy of news coverage following Mangione’s arrest, the video went viral. But a review of video forensics conducted by The Intercept proves this clip was a hoax.
The video premiered on YouTube on December 9, 2024, at 2:35 p.m. ET. Mangione had been taken into custody hours earlier, at 9:15 a.m. ET. YouTube’s “premiere” function allows a user to schedule a date and time to broadcast a video in advance. That meant it was theoretically possible that Mangione had scheduled the video to be published at a specified time in the future, perhaps postponing the publication time each day until he was no longer able to push it back because he was in custody.
That was the theory originally put forth in an article about the video by Newsweek — prior to an edit that removed this speculation.
But an analysis of video timestamps reveals that the file was only uploaded to YouTube minutes before its premiere time, ruling out the possibility that Mangione had timed its release himself.
There are a few ways to establish when a video was uploaded to YouTube. By querying YouTube, it’s possible to determine the exact time a video was published on the website — in other words, the first moment it could be viewed.
It’s also possible to determine the exact time a video was first encoded by YouTube — meaning the moment its creator initially uploaded the clip to YouTube’s servers.
Analyzing video metadata using tools such as ExifTool and MediaInfo, The Intercept determined that this video was encoded and last modified on December 9, 2024, at 2:33 p.m. ET — two minutes before it was published on YouTube. By that point, Mangione had already been in custody for hours. Unless he was editing clips while in police detention, the video couldn’t have been uploaded by Mangione.
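As a rough illustration of how this kind of check can be reproduced, the sketch below shells out to ExifTool from Python on a locally saved copy of a clip. The filename “video.mp4” is hypothetical, and timestamps in files re-encoded by YouTube are reported in UTC, so they must be converted before comparison with Eastern Time.

```python
# Dump every date/time tag from a locally saved video with ExifTool
# (the exiftool binary must be installed; "video.mp4" is hypothetical).
import subprocess

result = subprocess.run(
    ["exiftool", "-time:all", "-G1", "-a", "video.mp4"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
# For YouTube-encoded files, tags such as "Media Create Date" and
# "Media Modify Date" reflect the server-side encode time in UTC;
# in December, subtract five hours to compare against ET.
```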
Another red flag is the YouTube channel itself. The channel was created on January 20, 2024, nearly a year before Thompson’s killing. YouTube channels, however, can be renamed at will — meaning this channel may not have always been named PepMangione. That’s indeed what happened in this case, according to YouTube.
“The channel’s metadata was updated following widespread reporting of Luigi Mangione’s arrest,” a YouTube spokesperson told Newsweek in its updated article, “including updates made to the channel name and handle.”
YouTube did not immediately respond to a request for comment from The Intercept.
The YouTube channel wasn’t the only dubious social media account to pop up after the suspect’s arrest. An analysis of online platforms shows that a number of accounts with the same PepMangione name on services such as Bluesky and Telegram were created on Monday.
We discuss the new HBO Original film Surveilled and explore the film’s investigation of high-tech spyware firms with journalist Ronan Farrow and director Matthew O’Neill. We focus on the influence of the Israeli military in the development of some of the most widely used versions of these surveillance technologies, which in many cases are first tested on Palestinians and used to enforce Israel’s occupation of Palestine, and on the potential expansion of domestic U.S. surveillance under a second Trump administration. Ever-increasing surveillance is “dangerous for democracy,” says Farrow. “We’re making and selling a weapon that is largely unregulated.” As O’Neill emphasizes, “We could all be caught up.”
For two years, Hannah Byrne was part of an invisible machine that determines what 3 billion people around the world can say on the internet. From her perch within Meta’s Counterterrorism and Dangerous Organizations team, Byrne helped craft one of the most powerful and secretive censorship policies in internet history. Her work adhered to the basic tenet of content moderation: Online speech can cause offline harm. Stop the bad speech — or bad speakers — and you have perhaps saved a life.
In college and early in her career, Byrne had dedicated herself to the field of counterterrorism and its attempt to catalog, explain, and ultimately deter non-state political violence. She was most concerned with violent right-wing extremism: neo-Nazis infiltrating Western armies, Klansmen plotting on Facebook pages, and Trumpist militiamen marching on the Capitol.
In video meetings with her remote work colleagues and in the conference rooms of Menlo Park, California, with the MAGA riot of January 6 fresh in her mind, Byrne believed she was in the right place at the right time to make a difference.
And then Russia invaded Ukraine. A country of under 40 million found itself facing a full-scale assault by one of the largest militaries in the world. Standing between it and Russian invasion were the capable, battle-tested fighters of the Azov Battalion — a unit founded as the armed wing of a Ukrainian neo-Nazi movement. What followed not only shook apart Byrne’s plans for her own life, but also her belief in content moderation and counterterrorism.
Today, she is convinced her former employer cannot be trusted with power so vast, and that the systems she helped build should be dismantled. For the first time, Byrne shares her story with The Intercept, explaining why the public should be as disturbed by her work as she came to be.
Through a spokesperson, Meta told The Intercept that Byrne’s workplace concerns “do not match the reality” of how policy is enforced at the company.
Good Guys and Bad Guys
Byrne grew up in the small, predominantly white Boston suburb of Natick. She was 7 years old when the World Trade Center was destroyed and grew up steeped in a binary American history of good versus evil, hopeful she would always side neatly with the former.
School taught her that communism was bad, Martin Luther King Jr. ended American racism, and the United States had only ever been a force for peace. Byrne was determined after high school to work for the CIA in part because of reading about its origin story as the Nazi-fighting Office of Strategic Services during World War II. “I was a 9/11 kid with a poor education and a hero complex,” Byrne said.
And so Byrne joined the system, earning an undergraduate degree in political science at Johns Hopkins and then enrolling in a graduate research program in “terrorism and sub-state violence” at Georgetown University’s Center for Security Studies. Georgetown’s website highlights how many graduates from the Center go on to work at places like the Department of Defense, Department of State, Northrop Grumman — and Meta.
It was taken for granted that the program would groom graduates for the intelligence community, said Jacq Fulgham, who met Byrne at Georgetown. But even then, Fulgham remembers Byrne as a rare skeptic willing to question American imperialism: “Hannah always forced us to think about every topic and to think critically.”
Part of her required reading at Georgetown included “A Time to Attack: The Looming Iranian Nuclear Threat,” by former Defense Department official Matthew Kroenig. The book advocates for preemptive air war against Iran to end the country’s nuclear ambitions. Byrne was shocked that the premise of bombing a country of 90 million — presumably killing many innocent people — to achieve the ideological and political ends of the United States would be considered within the realm of educated debate and not an act of terrorism.
That’s because terrorism, her instructors insisted, was not something governments do. Part of terror’s malign character is its perpetration by “non-state actors”: thugs, radicals, militants, criminals, and assassins. Not presidents or generals. Unprovoked air war against Iran was within the realm of polite discussion, but there was never “the same sort of critical thinking to what forms of violence might be appropriate for Hamas” or other non-state groups, she recalls.
As part of her program at Georgetown, Byrne studied abroad in places where “non-state violence” was not a textbook topic but real life. Interviews with former IRA militants in Belfast, ex-FARC soldiers in Colombia, and Palestinians living under Israeli occupation complicated the terrorism binary. Rather than cartoon villains, Byrne met people who felt pushed to violence by the overwhelming reach and power of the United States and its allies. Wherever she went, Byrne said, she met people victimized, not protected, by her country. This was a history she had never been taught.
Despite feeling dismayed about the national security sector, Byrne still harbored a temptation to fix it from within. After receiving her master’s and entering a State Department-sponsored immersion language class in India, still hopeful for an eventual job at the CIA or National Security Agency, she got a job at the RAND Corporation as a defense analyst. “I hoped I’d be able to continue to learn and write about ‘terrorism,’ which I now knew to be ‘resistance movements,’ in an academic way,” Byrne said. Instead, her two years at RAND were focused on the traditional research the think tank is known for, contributing to titles like “Countering Violent Nonstate Actor Financing: Revenue Sources, Financing Strategies, and Tools of Disruption.”
“She was all in on a career in national security,” recalled a former RAND co-worker who spoke to The Intercept on the condition of anonymity. “She was earnest in the way a lot of inside-the-Beltway recent grads can be,” they added. “She still had a healthy amount of sarcasm. But I think over time that turned into cynicism.”
Unfulfilled at RAND, Byrne found what she thought could be a way to both do good and honor her burgeoning anti-imperial politics: Fighting the enemy at home. She decided her next step would be a job that let her focus on the threat of white supremacists.
Facebook needed the help. A mob inflamed by white supremacist rhetoric had stormed the U.S. Capitol, and Facebook yet again found itself pilloried for providing an organizing tool for extremists. Byrne came away from job interviews with Facebook’s policy team convinced the company would let her fight a real danger in a way the federal national security establishment would not.
Instead, she would come to realize she had joined the national security state in microcosm.
Azov on the Whitelist
Byrne joined Meta in September 2021.
She and her team helped draft the rulebook that applies to the world’s most diabolical people and groups: the Ku Klux Klan, cartels, and of course, terrorists. Meta bans these so-called Dangerous Organizations and Individuals, or DOI, from using its platforms, but further prohibits its billions of users from engaging in “glorification,” “support,” or “representation” of anyone on the list.
Byrne’s job was not only to keep dangerous organizations off Meta properties, but also to prevent their message from spreading across the internet and spilling into the real world. The ambiguity and subjectivity inherent to these terms has made the “DOI” policy a perennial source of over-enforcement and controversy.
A full copy of the secret list obtained by The Intercept in 2021 showed it was disproportionately made up of Muslim, Arab, and Southeast Asian entities, hewing closely to the foreign policy crosshairs of the United States. Much of the list is copied directly from federal blacklists like the Treasury Department’s Specially Designated Global Terrorist roster.
A 2022 third-party audit commissioned by Meta found the company had violated the human rights of Palestinian users, in part, due to over-enforcement of the DOI policy. Meta’s in-house Oversight Board has repeatedly reversed content removed through the policy, and regularly asks the company to disclose the contents of the list and information about how it’s used.
Meta’s longtime justification of the Dangerous Organizations policy is that the company is legally obligated to censor certain kinds of speech around designated entities or it would risk violating the federal statute barring material support for terrorist groups, a view some national security scholars have vigorously rejected.
Top/Left: Hannah Byrne on a Meta-sponsored trip to Wales in 2022. Bottom/Right: Byrne speaking at the NOLA Freedom Forum in 2024, after leaving Meta. Photo: Courtesy of Hannah Byrne
Byrne tried to focus on initiatives and targets that she could feel good about, like efforts to block violent white supremacists from using the company’s VR platform or running Facebook ads. At first she was pleased to see that Meta’s in-house list went further than the federal roster in designating white supremacist organizations like the Klan — or the Azov Battalion.
Still, Byrne had doubts about the model because of the clear intimacy between American state policy and Meta’s content moderation policy. Meta’s censorship systems are “basically an extension of the government,” Byrne said in an interview.
She was also unsure of whether Meta was up to the task of maintaining a privatized terror roster. “We had this huge problem where we had all of these groups and we didn’t really have … any sort of ongoing check or list of evidence of whether or not these groups were terrorists,” she said, a characterization the company rejected.
Byrne quickly found that the blacklist was flexible.
In February 2022, as Russia prepared its full-scale invasion of Ukraine, Byrne learned firsthand just how mercurial the corporate mirroring of State Department policy could be.
As an armed white supremacist group with credible allegations of human rights violations hanging over it, Azov had landed on the Dangerous Organizations list, which meant the unit’s members couldn’t use Meta platforms like Facebook, nor could any users of those platforms praise the unit’s deeds. But with Russian tanks and troops massing along the border, Ukraine’s well-trained Azov fighters became the vanguard of anti-Russian resistance, and their status as international pariahs a sudden liability for American geopolitics. Within weeks, Byrne found the moral universe around her inverted: The heavily armed hate group sanctioned by Congress since 2018 were now freedom fighters resisting occupation, not terroristic racists.
As a Counterterrorism and Dangerous Organizations policy manager, Byrne’s entire job was to help form policies that would most effectively thwart groups like Azov. Then one day, this was no longer the case. “They’re no longer neo-Nazis,” Byrne recalls a policy manager explaining to her somewhat shocked team, a line that is now the official position of the White House.
Shortly after the delisting, The Intercept reported that Meta rules had been quickly altered to “allow praise of the Azov Battalion when explicitly and exclusively praising their role in defending Ukraine OR their role as part of the Ukraine’s National Guard.” Suddenly, billions of people were permitted to call the historically neo-Nazi Azov movement “real heroes,” according to policy language obtained by The Intercept at the time.
Byrne and other concerned colleagues were given an opportunity to dissent and muster evidence that Azov fighters had not in fact reformed. Byrne said that even after gathering photographic evidence to the contrary, Meta responded that while Azov may have harbored Nazi sympathies in recent years, posts violating the company’s rules had sufficiently tapered off.
The odds felt stacked: While their bosses said they were free to make their case that the Battalion should remain blacklisted, they had to pull their evidence from Facebook — a platform that Azov fighters ostensibly weren’t allowed to use in the first place.
“Key to that assessment — which everyone at Facebook knew, but coming from the outside sounds ridiculous — is that we’re actually pretty bad at keeping content off the platform. Especially neo-Nazi content,” Byrne recalls. “So internally, it was like, ‘Oh, there should be lots of evidence online if they’re neo-Nazis because there’s so many neo-Nazis on our platform.’”
Though she was not privy to deliberations about the choice to delist the Azov Battalion, Byrne is adamant in her suspicion that it was done to support the U.S.-backed war effort. “I know the U.S. government is in constant contact with Facebook employees,” she said. “It is so clear that it was a political decision.” Byrne had taken this job to prevent militant racism from spilling over into offline violence. Now, her team was instead loosening its rules for an armed organization whose founder had once declared Ukraine’s destiny was to “lead the white races of the world in a final crusade … against Semite-led Untermenschen.”
It wasn’t just the shock of a reversal on the Azov Battalion, but the fact that it had happened so abruptly — Byrne estimates that it took no more than two weeks to exempt the group and allow praise of it once more.
She was aghast: “Of course, this is going to exacerbate white supremacist violence,” she recalls worrying. “This is going to make them look good. It’s going to make it easier to spread propaganda. Ultimately, I was afraid that it was going to directly contribute to violence.”
In its comments to The Intercept, Meta reiterated its belief that the Azov unit has meaningfully reformed and no longer meets its standards for designation.
Azov Regiment soldiers are seen during weapons training on June 28, 2022, in the Kharkiv region, Ukraine. Photo: Paula Bronstein/Getty Images
Byrne recalled a similar frustration around Meta’s blacklisting of factions fighting the government of Syrian President Bashar al-Assad, but not the violent, repressive government itself. “[Assad] was gassing his civilians, and there were a couple Syrians at Facebook who were like, ‘Hey, why do we have this whole team called Dangerous Organizations and Individuals and they’re only censoring the Syrian resistance?’” Byrne realized there was no satisfying answer: National governments were just generally off-limits.
Meta confirmed to The Intercept that its definition of terrorism doesn’t apply to nation states, reflecting what it described as a legal and academic consensus that governments may legitimately use violence.
At the start of her job, Byrne was under the impression right-wing extremism was a top priority for the company. “But every time I need resources for neo-Nazi stuff … nothing seemed to happen.” The Azov exemption, by contrast, happened at lightning speed. Byrne recalls a similarly rapid engineering effort to tweak Meta’s machine learning-based content scanning system that would have normally removed the bulk of Azov-friendly posts. Not everyone’s algorithmic treatment is similarly prioritized: “It’s infuriating that so many Palestinians are still being taken down for false-positive ‘graphic violence’ violations because it’s obvious to me no one at Meta gives a shit,” Byrne said.
Meta pushed back on Byrne’s broader objections to the Dangerous Organizations policy. “This former employee’s claims do not match the reality of how our Dangerous Organizations policies actually work,” Meta spokesperson Ryan Daniels said in a statement. “These policies are some of the most comprehensive in the industry, and designed to stop those who seek to promote violence, hate and terrorism on our platforms, while at the same time ensuring free expression. We have a team of hundreds of people from different backgrounds working on these issues every day — with expertise ranging from law enforcement and national security to human rights, counterterrorism and academic studies. Our Dangerous Organizations policies are not static, we update them to reflect evolving factors and changing threat landscapes, and we apply them equally around the world while also complying with our legal obligations.”
Malicious Actors
But it wasn’t the Azov reversal that ended Byrne’s counterterror career.
In the wake of the attack on the Capitol, Meta had a problem: “It’s tough to profile or pinpoint the type of person that would be inclined to participate in January 6, which is true of most terrorist groups,” Byrne said. “It’s an ideology, it lives in your mind.”
But what if the company could prevent the next recruit for the Proud Boys, or Three Percenters, or even ISIS? “That was our task,” Byrne said. “Figure out where these groups are organizing, kind of nip it in the bud before they carry out any further real-world violence. We need to make sure they’re not in groups together, not friending each other, and not connecting with like-minded people.”
She was assigned to work on Meta’s Malicious Actor Framework, a system intended to span all its platforms and identify “malicious actors” who might be prone to “dangerous” behavior using “signals,” Byrne said. The approach, she said, had been pioneered at Meta by the child safety team, which used automated alarms to alert the company when it seemed an adult might be attempting inappropriate contact with a child. That tactic had some success, but Byrne recalls it also mistakenly flagged people like coaches and teachers who had legitimate reason to interact with children.
Posting praise or admiring imagery of Osama bin Laden is relatively easy to catch and delete. But what about someone interested in his ideas? “The premise was that we need to target certain kinds of individuals who are likely to sympathize with terrorism,” Byrne said. There was just one problem, as Byrne puts it today: “What the fuck does it mean to be a sympathizer?”
In the field, this Obama-era framework of stopping radicalization before it takes root is known as Countering Violent Extremism, or CVE. It has been criticized as both pseudoscientific and ineffective, undermining the civil liberties of innocent people by placing them under suspicion for their own good. CVE programs generally “lack any scientific basis, are ineffective at reducing terrorism, and are overwhelmingly discriminatory in nature,” according to the Brennan Center for Justice.
Byrne had joined Meta at a time when the company was transitioning “from content-based detection to profile-based detection,” she said. Screenshots of team presentations Byrne participated in show an interest in predicting dangerousness among users. One presentation expresses concern about Facebook’s transition to encrypted messaging, which would prevent authorities (and Meta itself) from eavesdropping on chats: “We will need to move our detection/enforcement/investigation signals more upstream to surfaces we do have insight into (eg., user’s behaviors on FB, past violations, social relationships, group metadata like description, image, title, etc) in order to flag areas of harm.”
Meta specifically wanted the ability to detect and deter “risky interactions” between “dangerous individuals” or “likely-malicious actors” and their “victims” vulnerable to radicalization — without being able to read the messages these users were exchanging. The company hoped to use this capability, according to these meeting materials, to stop “malicious actors distributing propaganda,” for example. This would be accomplished using machine learning to recognize dangerous signals on someone’s profile, according to these screenshots, like certain words in their profile or whether they’d been members of a banned group.
Byrne said the plan was to incorporate this policy into a companywide framework, but she departed Meta too soon to know what ultimately came of it.
Meta confirmed the existence of the malicious actor framework to The Intercept, explaining that it remains a work in progress, but disputed its predictive nature.
Byrne has no evidence that Meta was pursuing a system that would use overtly prejudiced criteria to determine who is a future threat, but feared that any predictive system would be based on thin evidence and unconsciously veer toward bias. Civil liberties scholars and counterterror experts have long warned that because terrorism is so extremely rare, any attempt to predict who will commit it is fatally flawed because there simply is not enough data. Such efforts often regress, wittingly or otherwise, into stereotypes.
“I brought it up in a couple meetings, including with my manager, but it wasn’t taken that seriously,” Byrne said.
Byrne recalls discussion of predicting such radicalism risk based on things like who your friends are, what’s on your profile, who sends you messages, and the extent to which you and your network have previously violated Meta’s speech rules. Given that enforcement of those rules has been shown to be biased along national or ethnic lines and plagued by technical errors, Byrne feared the worst for vulnerable users. “If you live in Palestine, all of your friends are Palestinians,” Byrne said. “They’re all getting flagged, and it’s like a self-licking ice cream cone.”
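In the abstract, the kind of profile-based scoring described above can be sketched in a few lines. The toy below is hypothetical: it is not Meta’s system, and every feature name and weight is invented for illustration. It does, though, show how biased inputs compound through the network.

```python
# Hypothetical toy sketch of "profile-based detection": scoring an account
# from upstream signals instead of message content. NOT Meta's system;
# all feature names and weights here are invented.
from dataclasses import dataclass

@dataclass
class ProfileSignals:
    past_policy_violations: int    # prior takedowns on this account
    banned_group_memberships: int  # groups removed under the DOI policy
    flagged_profile_keywords: int  # watchlisted terms in bio or posts
    flagged_friends: int           # friends who were themselves flagged

WEIGHTS = {
    "past_policy_violations": 0.9,
    "banned_group_memberships": 1.5,
    "flagged_profile_keywords": 0.7,
    "flagged_friends": 0.4,
}

def risk_score(signals: ProfileSignals) -> float:
    # Weighted sum over profile features; a real system would learn weights.
    return sum(w * getattr(signals, name) for name, w in WEIGHTS.items())

# The network feature encodes the feedback loop Byrne describes: if
# enforcement is biased, flagged friends raise your score, and your
# flag in turn raises theirs.
print(risk_score(ProfileSignals(2, 0, 1, 5)))  # -> 4.5 with these toy weights
```

Because genuine attackers are vanishingly rare, nearly everything a scorer like this flags will be a false positive, which is the base-rate objection civil liberties scholars raise.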
In the spring of 2022, investigators drawn from Meta’s internal Integrity, Investigations, and Intelligence team, known as i3, began analyzing the profiles of Facebook users who had run afoul of the Dangerous Organizations and Individuals policy, Byrne said. They were looking for shared traits that could be turned into general indicators of risk. “As someone who came from a professional research background, I can say it wasn’t a good research methodology,” Byrne said.
Part of her objection was pedigree: People just barely removed from American government were able to determine what people could say online, whether or not those internet users lived in the United States. Many of these investigators, according to Byrne’s recollection and LinkedIn profiles of her former colleagues she shared with The Intercept, had arrived from positions at the Defense Department, FBI, and U.S. intelligence agencies, institutions not known for an unbiased approach to counterterror.
Over hours of interviews, Byrne never badmouthed any of her former colleagues nor blamed them individually. Her criticism of Meta is systemic, the sort of structural ailment she had hoped to change from within. “It was people that I personally liked and trusted, and I trusted their values,” Byrne said of her former colleagues on Meta’s in-house intelligence team.
Byrne feared implementing a system so deeply rooted in inference could endanger the users she’d been hired to protect. She worried about systemic biases, such as “the fact that Arabic language just wasn’t really represented in our data set.”
She worried about generalizing about one strain of violent extremism and applying it to drastically different cultures, contexts, and ideologies: “We’re saying Hamas is the exact same thing as the KKK with absolutely no basis in logic or reason or history or research.” Byrne encountered similar definitional headaches around “misinformation” and “disinformation,” which she says her team studied as potential sources of terror sympathy and wanted to incorporate into the Malicious Actor Framework. But like terrorism itself, Byrne found these terms simply too blunt to be effective. “We’re taking some of the most complex, unrelated, geographically separated, just different fucking things, and we’re literally using this word terrorism, or misinformation, or disinformation, to treat them as a binary.”
Private Policy, Public Relations
Toward the end of her time at Meta, Byrne began to break down. The prospect of catching enemies of the state had energized her at first. Now she faced the grim, gradual realization that she wasn’t accomplishing the things she hoped she would. Her work wasn’t making Facebook safer, nor the people using it. Far from manning the barricades against extremism, Byrne quickly found herself just another millennial in a boring tech job.
But as she worked on the Malicious Actor Framework, those feelings of futility gave way to something worse: “I’m actually going to be an active participant in harm,” she recalls thinking. The speech of people she’d met in her studies abroad was exactly the kind her job might suppress. Finally, Byrne decided “it felt impossible to be a good actor within that system.”
Spiraling mental health struggles resulted in a leave of absence in the spring of 2023 and months of partial hospitalization. Away from her job, grappling with the nature of her work, Byrne realized she couldn’t go on. She returned at the end of the summer for a brief stretch before finally quitting on October 4. Her departure came just days before the world would be upended by events that would quickly implicate her former employer and highlight exactly why she fled from it.
For Byrne, watching the Israeli military hailed by her country’s leaders as it kills tens of thousands of civilians in the name of fighting terror exposes everything she believes is wrong and fraudulent about the counterterrorism industry. Meta’s Dangerous Organizations policy doesn’t take lives, but she sees it as rooted in that same conceptual injustice. “The same racist, bullshit dynamics of ‘terrorism’ were not only dictating who the U.S. was allowed to kill, they were dictating what the world was allowed to see, who in the world was allowed to speak, and what the world was allowed to say,” Byrne explained. “And the system works exactly as the U.S. law intends it to — to silence resistance to its violence.”
In conversations, what seems to gall Byrne most is the contrast between how malleable Meta’s Dangerous Organizations policy proved for Ukraine and how draconian it has felt for those protesting the war in Gaza, or trying to document it happening around them. Following the Russian invasion of Ukraine, Meta not only moved swiftly to allow users to cheer on the Azov Battalion, but also loosened its rules around incitement, hate speech, and gory imagery so Ukrainian civilians could share images of the suffering around them and voice their fury against it. Byrne recalls seeing a video on Facebook of a Ukrainian woman giving an invading Russian soldier seeds, telling him to keep them in his pockets so they’d flower from his corpse on the battlefield. Were it a Palestinian woman taunting an Israeli soldier, Byrne said, “that would be taken down for terrorism so quickly.”
Today, Byrne remains conflicted about the very concept of content moderation. On the one hand, she acknowledges that violent groups can and do organize via platforms like Facebook — the problem that brought her to the company in the first place. And there are ways she believes Meta could easily improve, given its financial resources: more and better human moderators, more policy drafted by teams equipped to meet the contours of the hundreds of different countries where people use Facebook and Instagram.
While Byrne and her colleagues were supposed to be preventing harm from occurring in the world, they often felt like they were a janitorial crew responding to bad press. “An article would come out, all my team would share it, and then it would be like ‘Fix this thing’ all day. I’d be glued to the computer.” Byrne recalls “my boss’s boss or even Mark Zuckerberg just like searching things, and screenshotting them, and sending them to us, like ‘Why is this still up?’” She remembers her team, contrary to conventional wisdom about Big Tech, “expressing gratitude when there would be [media] leaks sometimes, because we’d all of a sudden get all of these resources and ability to change things.”
Militant neo-Nazi organizations represent a real, violent threat to the public, she readily admits. Still, watching pro-Palestinian speech be restricted by companies like Meta since October 7, while glorification of Israeli state violence flows unfettered, pushed her to speak out publicly about the company’s censorship apparatus.
In her life post-Meta, galvanized by the ongoing Israeli bombardment of Gaza, Byrne has become active in pro-Palestinian protest circles and outspoken in her criticism of her former employer’s role in suppressing speech about the war. In February, she gave a presentation on Meta’s censorship practices at the NOLA Freedom Forum, a New Orleans activist group, providing an insider’s advice on how to avoid getting banned on Instagram.
She’s still maddened by the establishment’s circular logic of terrorism, which casts non-state actors as terrorists while condoning the same behaviors from governments. “The scales of acceptable casualties are astronomically different when we’re talking about white, state-perpetrated violence versus brown and black non-state-perpetrated violence.”
Unlike past Big Tech dissidents like Frances Haugen, Byrne doesn’t think her former employer can be reformed with tweaks to its algorithms or greater transparency. Rather, she fundamentally objects to an American company policing speech — even in the name of safety — for so much of the planet.
So long as U.S. foreign policy and federal law brands certain acts of violence beyond the pale depending on politics and not harm — and so long as Meta believes itself beholden to those laws — Byrne believes the machine cannot be fixed. “You want military, Department of State, CIA people enforcing free speech? That is what is concerning about this.”