Author: Sam Biddle

  • The White House is unwilling to say whether the U.S. will provide depleted uranium anti-tank rounds to Ukraine, according to the transcript of a press briefing, despite decades of research suggesting the weapon causes cancer and birth defects long after the fighting ends.

    At a background briefing on January 25, an unnamed reporter asked the unnamed “senior administration officials” at the session whether the Bradley Fighting Vehicles now being sent to aid in Ukraine’s defense against Russia would come armed with the 25 mm armor-piercing depleted uranium rounds they’re capable of firing. As the reporter noted, firing these radioactive rounds “is part of what makes them the ‘tank killer’ that Pentagon officials called them.” The administration official who responded declined to answer, saying, “I’m not going to get into the technical specifics.”

    But the technical specifics of these weapons could have dire consequences for Ukrainians. Depleted uranium is a common byproduct of manufacturing nuclear fuel and weaponry, and, owing to its extreme density, ammunition made from the stuff is a fantastic way of punching through the thick armor of a tank and igniting everyone inside. But these anti-tank rounds are also radioactive and extremely toxic, and they have been linked with a variety of birth defects, cancers, and other illnesses. The link is most dramatic in Iraq, where doctors reported a spike in birth defects and cancers after the Gulf War, during which the U.S. fired nearly a million depleted uranium rounds, and again after the 2003 invasion of the country.

    “[Uranium] binds avidly to bio-molecules including DNA,” according to Keith Baverstock, a radiobiologist at the University of Eastern Finland, former World Health Organization researcher, and longtime scholar of depleted uranium arms and their effects. “Where [uranium] is used in munitions (bullets and bombs) to penetrate hardened targets (using its high density) the munition may shatter and since [uranium] is pyrophoric, catch fire and burn, producing oxide particles which are partially soluble and, thus, potentially a source of systemic [uranium] if inhaled.” Uranium particles may remain embedded in the land where these rounds were fired, too, presenting a possible environmental hazard years later.

    While research linking depleted uranium weapons to adverse health effects is disputed — and heavily politicized given who’s fired it and at whom — experts told The Intercept that the risk alone means the White House owes the public transparency.

    “It’s been a concern since the start of the invasion,” said Doug Weir, research and policy director with the Conflict and Environment Observatory, particularly given that Russia claims to have its own depleted uranium arsenal, though it’s not clear whether any have been used in Ukraine. Were the U.S. to provide uranium rounds for Ukraine to deploy against Russia, the odds might increase of Russia using its arsenal too (if it hasn’t already).

    Generally speaking, Weir explained that “the most severe contamination incidents will occur where a vehicle with a full load of DU cooks off after being struck. This may be a tank, or a supply vehicle. Similarly, arms dumps containing large volumes of DU may create contamination incidents when destroyed or burned.” Weir added, “It is important that journalists pin down the U.S. government on its DU decision.”

    Despite the popular association of uranium with radiation, “the biggest problem there is metal pollution, not radiation,” explained Nickolai Denisov, an environmental scientist who has closely monitored the health impacts of the Ukraine war. “Still, pollution by heavy metals is dangerous and long term, hence transparency in these matters is indeed important.”

    It can be uncomfortable to advocate against the use of a weapon that would no doubt be a near-term boon for Ukrainian resistance. As the International Coalition to Ban Uranium Weapons put it at the onset of the Russian invasion, “When there is war, everything else is secondary compared to sheer survival. On the other hand, the outcry because of environmental destruction must not be omitted if the country is to be habitable again afterward.”

    If the Pentagon sends uranium rounds to Ukraine, it would surely have supporters: The ammo would be highly effective at destroying the armored vehicles Russia has poured into the country. As the White House faces — and bends to — growing pressure to share increasingly powerful arms with Ukraine, candid discussions about the unintended consequences of these arms transfers can become unpopular. But some scientists who’ve spent careers scrutinizing these weapons will likely remain opposed, however immense their sympathy for the Ukrainian cause.

    Asked about the White House’s refusal to discuss uranium rounds in Ukraine, Baverstock, the Finnish scientist, replied simply, “I would certainly hope that there is no intention to use it.”

    The post White House Refuses to Say Whether Ukraine Will Receive Toxic Depleted Uranium Ammo appeared first on The Intercept.


  • When Safari users in Hong Kong recently tried to load the popular code-sharing website GitLab, they received a strange warning instead: Apple’s browser was blocking the site for their own safety. The access was temporarily cut off thanks to Apple’s use of a Chinese corporate website blacklist, which resulted in the innocuous site being flagged as a purveyor of misinformation. Neither Tencent, the massive Chinese firm behind the web filter, nor Apple will say how or why the site was censored.

    The outage was publicized just ahead of the new year. On December 30, 2022, Hong Kong-based software engineer and former Apple employee Chu Ka-cheong tweeted that his web browser had blocked access to GitLab, a popular repository for open-source code. Safari’s “safe browsing” feature greeted him with a full-page “deceptive website warning,” advising that because GitLab contained dangerous “unverified information,” it was inaccessible. Access to GitLab was restored several days later, after the situation was brought to the company’s attention.

    The warning screen itself came courtesy of Tencent, the mammoth Chinese internet conglomerate behind WeChat and League of Legends. The company operates the safe browsing filter for Safari users in China on Apple’s behalf — and now, as the Chinese government increasingly asserts control of the territory, in Hong Kong as well.

    Apple spokesperson Nadine Haija would not answer questions about the GitLab incident, suggesting they be directed at Tencent, which also declined to offer responses.

    The episode raises thorny questions about privatized censorship done in the name of “safety” — questions that neither company seems interested in answering: How does Tencent decide what’s blocked? Does Apple have any role? Does Apple condone Tencent’s blacklist practices?

    “They should be responsible to their customers in Hong Kong and need to describe how they will respond to demands from the Chinese authorities to limit access to information,” wrote Charlie Smith, the pseudonymous founder of GreatFire, an advocacy and watchdog group that monitors Chinese web censorship. “Presumably people purchase Apple devices because they believe the company when they say that ‘privacy is a fundamental human right’. What they fail to add is ‘except if you are Chinese.’”

    Ka-cheong tweeted that other Hong Kong residents had reported GitLab similarly blocked on their devices thanks to Tencent. “We will look into it,” Apple engineer Maciej Stachowiak tweeted in response. “Thanks for the heads-up.” But Ka-cheong, who also serves as vice president of the Internet Society Hong Kong Chapter, an online rights group, said he received no further information from Apple.

    “Presumably people purchase Apple devices because they believe the company when they say that ‘privacy is a fundamental human right’. What they fail to add is ‘except if you are Chinese.’”

    The block came as a particular surprise to Ka-cheong and other Hong Kong residents because Apple originally said the Tencent blocklist would be used only for Safari users inside mainland China. According to a review of the Internet Archive, however, sometime after November 24, 2022, Apple quietly edited its Safari privacy policy to note that the Tencent blacklist would be used for devices in Hong Kong as well. (Haija, the Apple spokesperson, did not respond when asked when or why Apple expanded the use of Tencent’s filter to Hong Kong.)

    Though mainland China has heavily censored internet access for decades, Hong Kong typically enjoyed unfettered access to the web, a freedom only recently threatened by the passage of a sweeping, repressive national security law in 2020.

    Silently expanding the scope of the Tencent list not only allows Apple to remain in the good graces of China — whose industrial capacity remains existentially vital to the California-based company — but also provides plausible deniability about how or why such site blocks happen.

    “While unfortunately many tech companies proactively apply political and religious censorship to their mainland Chinese users, Apple may be unique among North American tech companies in proactively applying such speech restrictions to users in Hong Kong,” said Jeffrey Knockel, a researcher with Citizen Lab, a digital security watchdog group at the University of Toronto.

    Knockel pointed out that while a company like Tencent should be expected to comply with Chinese law as a matter of course, Apple has gone out of its way to do so.

    “The aspect which we should be surprised by and concerned about is Apple’s decision to work with Tencent in the first place to filter URLs for Apple’s Hong Kong users,” he said, “when other North American tech companies have resisted Hong Kong’s demands to subject Hong Kong users to China-based filtering.”

    The block on GitLab would not be the first time Tencent deemed a foreign website “dangerous” for apparently ideological reasons. In 2020, attempts to visit the official website of Notepad++, a text editor app whose French developer had previously issued a statement of solidarity with Hong Kong dissidents, were blocked for users of Tencent web browsers, again citing safety.

    The GitLab block also wouldn’t be the first time Apple, which purports to hold deep commitments to human rights, has bent the company’s products to align with Chinese national pressure. In 2019, Apple was caught delisting an app Hong Kong political dissidents were using to organize; in November, users noticed the company had pushed a software update to Chinese iPhone users that significantly weakened the AirDrop feature, which protesters throughout the country had been using to spread messages on the ground.

    “All companies have a responsibility to respect human rights, including freedom of expression, no matter where in the world they operate,” Michael Kleinman, head of Amnesty International’s Silicon Valley Initiative, wrote to The Intercept. “Any steps by Apple to limit freedom of expression for internet users in Hong Kong would contravene Apple’s responsibility to respect human rights under the UN Guiding Principles.”

    In 2019, Apple publicly acknowledged that it had begun using a “safe browsing” database maintained by Tencent to filter the web activity of its users in China, instead of an equivalent list operated by Google. Safe browsing filters ostensibly protect users from malicious pages containing malware or spear-phishing attacks by checking the website they’re trying to load against a master list of blacklisted domains.

    In order to make such a list work, however, at least some personal information needs to be transmitted to the company operating the filter, be it Google or Tencent. When news of Apple’s use of the Tencent safe browsing list first broke, Matthew Green, a professor of cryptography at Johns Hopkins University, described it as “another example of Apple making significant modifications to its privacy infrastructure, largely without publicity or announcement.”
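    How much the provider learns depends on the lookup design. Google’s publicly documented Safe Browsing protocol, for instance, has clients check short partial hashes of URLs against the provider’s blacklist; Tencent’s implementation is not public, so what follows is only a sketch of that general hash-prefix approach, with a hypothetical blacklist:

    ```python
    # Sketch of a hash-prefix "safe browsing" lookup, modeled on Google's
    # publicly documented design. Tencent's actual protocol is not public;
    # the blacklist entry here is hypothetical.
    import hashlib

    # Provider-side blacklist of full SHA-256 hashes of blocked URLs.
    BLACKLIST = {hashlib.sha256(b"https://evil.example/").hexdigest()}

    def prefix(url: str, n: int = 8) -> str:
        """First n hex chars of the URL's SHA-256 hash -- what the client sends."""
        return hashlib.sha256(url.encode()).hexdigest()[:n]

    def provider_lookup(p: str) -> set[str]:
        """Provider returns every blacklisted full hash matching the prefix.
        Even this partial lookup gives the provider a signal about what
        the user is browsing."""
        return {h for h in BLACKLIST if h.startswith(p)}

    def is_blocked(url: str) -> bool:
        full_hash = hashlib.sha256(url.encode()).hexdigest()
        return full_hash in provider_lookup(prefix(url))

    print(is_blocked("https://evil.example/"))  # True
    print(is_blocked("https://gitlab.com/"))    # False
    ```

    Even in this comparatively privacy-conscious design, the provider sees which prefixes a client asks about, which is one reason the choice of filter operator matters.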

    “I suppose the nature of having a ‘misinformation’ category is that China is going to have its own views on what that means.”

    While important questions remain about exactly what information from Safari users in Hong Kong and China is ultimately transmitted to Tencent and beyond, the GitLab incident shows another troubling aspect of safe browsing: It gives a single company the ability to unilaterally censor the web under the aegis of public safety.

    “Our concern was that outsourcing this stuff to Chinese firms seemed problematic for Apple,” Green explained in an interview with The Intercept, “and I suppose the nature of having a ‘misinformation’ category is that China is going to have its own views on what that means.”

    Indeed, it’s impossible to know in what sense GitLab could have possibly been considered a source of dangerous “unverified information.” The site is essentially an empty vessel where software developers, including corporate clients like T-Mobile and Goldman Sachs, can safely store and edit code. The Chinese government has recently cracked down on some open-source code sites similar to GitLab, where engineers from around the world are able to freely interact, collaborate, and share information. (GitLab did not respond to a request for comment.)

    Notably, the censorship-evasion and anonymity web browser Tor has turned to GitLab to catalog instances of Chinese state internet censorship, though there’s no indication it was this activity that led to GitLab’s addition to the Tencent list.

    While Tencent provides some public explanation of its criteria for blocking a website, its decision-making process is completely opaque, and the published censorship standards are extremely vague, including offenses like “endangering national security” and “undermining national unity.”

    Tencent has long been scrutinized for its ties to the Chinese government, which frequently leverages state power to more closely influence or outright control nominally private firms.

    Earlier this month, the Financial Times reported that the Chinese government was acquiring so-called golden shares of Tencent, a privileged form of equity that’s become “a common tool used by the state to exert influence over private news and content companies.” A 2021 New York Times report on Tencent noted the company’s eagerness to cooperate with Chinese government mandates, quoting the company’s president during an earnings call that year: “Now I think it’s important for us to understand even more about what the government is concerned about, what the society is concerned about, and be even more compliant.”

    While Tencent’s compliance with the Chinese national security agenda ought not to come as a surprise, Knockel of Citizen Lab says Apple’s should.

    “Ultimately I don’t think it really matters exactly how GitLab came to be blocked by Tencent’s Safe Browsing,” he said. “Tencent’s blocking of GitLab for Safari users underscores that Apple’s subjection of Hong Kong users to screening via a China-based company is problematic not only in principle but also in practice.”

    The post Apple Brings Mainland Chinese Web Censorship to Hong Kong appeared first on The Intercept.


    Marcel Lehel Lazar walked out of Federal Correctional Institution Schuylkill, a Pennsylvania prison, in August 2021. The 51-year-old formerly known only as Guccifer had spent over four years incarcerated for an email hacking spree against America’s elite. Though these inbox disclosures arguably changed the course of the nation’s recent history, Lazar himself remains an obscure figure. This month, in a series of phone interviews with The Intercept, Lazar opened up for the first time about his new life and strange legacy.

    Lazar is not a household name by unauthorized access standards — he’s no Edward Snowden or Chelsea Manning — but people will be familiar with his work. Throughout 2013, Lazar stole the private correspondence of everyone from a former member of the Joint Chiefs of Staff to “Sex and the City” author Candace Bushnell.

    There’s an irony to his present obscurity: Guccifer’s prolific career often seemed motivated as much by an appetite for global media fame as by any ideology or principle. He acted as an agent of chaos, not a whistleblower, and his exploits provided as much entertainment as anything else. It’s thanks to Guccifer’s infiltration of Dorothy Bush Koch’s AOL account that the world knows that her brother — George W. Bush — is fond of fine bathroom self-portraiture.

    “Right now, having this time on my hands, I’m just trying to understand what this other me was making 10 years ago.”

    “I knew all the time what these guys are talking about,” Lazar told me with a degree of satisfaction. “I used to know more than they knew about each other.”

    Ten years after his email rampage, Lazar said that, back then, he’d hoped not for celebrity but to find some hidden explanation for America’s 21st century slump — a skeleton key buried within the emails of the rich and famous, something that might expose those causing our national rot and reverse it. Instead, he might have inadvertently put Donald Trump in the White House.

    When Guccifer — a portmanteau of Lucifer and Gucci, pronounced with the Italian word’s “tch” sound — breached longtime Clinton family confidant Sidney Blumenthal’s email account, it changed the world almost by accident. Buried among the thousands of messages Lazar stole and leaked from Blumenthal’s AOL account in 2013 were emails to HDR22@clintonemail.com, Hillary Clinton’s previously unknown private address. The account’s existence, and later revelations that she had improperly used it to conduct official government business and transmit sensitive intelligence data, led to something like a national panic attack: nonstop political acrimony, federal investigations, and depending on who you ask, Trump’s 2016 victory.

    In the end, Guccifer may be best remembered for the co-optation of his wildly catchy name by a Russian hacker persona: Guccifer 2.0. The latter Guccifer would hack troves of information from Democratic National Committee servers, plunder released on WikiLeaks.

    Eventually, a federal indictment accused a cadre of Russian intelligence operatives of using the persona Guccifer 2.0 to conduct a political propaganda campaign and cover for Russian involvement. As the Guccifer 2.0 version grew in infamy, becoming a central figure in Americans’ wrangling over Russian interference in the 2016 election, the namesake hacker’s exploits faded from memory.

    When I reached Lazar by phone, he was at home in Romania. He had returned to a family that had grown up and apart from him since he was arrested by Romanian police in 2014.

    “I am still trying to connect back with my family, with my daughter, my wife,” Lazar said. “I’ve been away more than eight years, so this is a big gap, which I’m trying to fill with everything that takes.”

    He spends most of his time alone at home, reading about American politics and working on a memoir. His wife supports the family as a low-paid worker at a nearby factory. Revisiting his past life for the book has been an odd undertaking, Lazar told me.

    “It’s like an out-of-body experience, like this Guccifer guy is another guy,” he said. “Right now, having this time on my hands, I’m just trying to understand what this other me was making 10 years ago.”


    Lazar, known as Guccifer, opened up to The Intercept for the first time about his new life and strange legacy.

    Photo: Nemanja Knežević for The Intercept

    Lazar has little to say of the two American prisons where he was sentenced to do time after extradition from Romania. Both were in Pennsylvania — a minimum-security facility and then a stint at the medium-security Schuylkill, which he described simply and solemnly as “a bad place.” He claimed he was routinely denied medical care and said he lost many of his teeth during his four-year term.

    On matters of his crime and punishment, Lazar contradicted himself, something he did often during our conversations. He wants to be both the righteous crusader and the steamrolled patsy. He repeatedly brought up what he considers a fundamental injustice: He revealed Clinton’s rule-breaking email setup and then cooperated with the Department of Justice probe, only to wind up in federal prison.

    “Hillary Clinton swam away with the ‘reckless negligence’ or whatever Jim Comey called her,” Lazar said. “I did the time.”

    Lazar was quick to rattle off a list of other high-profile officials who either knew about the secret Clinton email account all along or were later revealed to have used their own. “So much hypocrisy, come on man,” he said. “So much hypocrisy.”

    And yet he pled guilty to all charges he faced and today fully admits what he did was wrong — sort of.

    “To read somebody else’s emails is not OK,” he said. “And I paid for this, you know. People have to have privacy. But, you see, it’s not like I wanted to know what my neighbors are talking about. But I wanted to know what these guys in the United States are speaking about, and this is the reason why. I was sure that, over there, bad stuff is happening. This is the reason why I did it, not some other shady reason. What I did is OK.”

    “I was inspired with the name, at least, because my whole Guccifer project was, after all, a failure.”

    Though he takes pride in outing Clinton’s private email arrangement, Lazar said he found none of what he thought he’d uncover. The inbox-fishing expedition for the darkest secrets of American power instead mostly revealed their mediocre oil paintings and poorly lit family snapshots. He conceded that Guccifer’s legacy may be that Russian intelligence cribbed his name.

    “I was inspired with the name, at least,” Lazar said, “because my whole Guccifer project was, after all, a failure.”


    Lazar shows old photos and his current ID photographs in his wallet while walking around Arad, Romania, on Jan. 8, 2023.

    Photo: Nemanja Knežević for The Intercept

    It can be difficult to tell where the Guccifer mythology ends and Lazar’s biography begins. Back in his hometown of Arad, a Transylvanian city roughly the size of Syracuse, New York, Lazar seems ambivalent about the magnitude of his role in American electoral history. “I don’t feel comfortable talking about me,” he told me. When I pressed in a later phone call, Lazar described 2016 as something of an inevitability: “Trump was the bullet in the barrel of the gun. He was already lingering around.”

    While Lazar says James Comey’s October surprise memo to Congress — that Clinton’s emailing habits were still under investigation — was what “killed Hillary Clinton,” he didn’t deny his indirect role in that twist.

    “Everything started with this mumbo jumbo email server, with this bullshit of email server,” he said. “So, if it was not for me, it was not for [Hillary’s] email server to start an investigation.”

    Lazar now claims he very nearly breached the Trump inner circle in October 2013. “I was about to hack the Trump guys, Ivanka and stuff,” he told me. “And my computer just broke.”

    How does it feel to have boosted, even accidentally, Donald Trump, a bona fide American elite? Though he described the former president as mentally unstable, a hero of Confederate sympathizers, and deeply selfish, Lazar is unbothered by his indirect role in 2016: “I feel like a regular guy. I don’t feel anything special about myself.”

    At times, the retired hacker clearly still relishes his brief global notoriety. I asked him what it felt like to see his hacker persona usurped by Russian intelligence using the “Guccifer 2.0” cutout: Was it a shameless rip-off, or a flattering homage? Lazar said he first learned that Russia had cribbed his persona from inside a detention center outside D.C. He perked up.

    “I was feeling good, it was like a recognition,” he said. “It made me feel good, because in all these 10 years, I was all the time alone in this fight.”


    A sculptural sign along a highway announces the city of Arad in Romania on Jan. 8, 2023.

    Photo: Nemanja Knežević for The Intercept

    Lazar described his fight — a term he used repeatedly — as a personal crusade against the corrupt and corrupting American elite, based on his own broad understanding of the idea pieced together from reading about it online. It’s hard to dismiss out of hand.

    “Look at the last 20 years of politics of United States,” Lazar explained. “It’s all lies, and it went so low in the mud. You know what I’m saying? It stinks.”

    The quest to find and expose some smoking gun that could explain American decline became an obsession, one he said kept him in front of a computer for 16 hours a day, guessing Yahoo Mail passwords, scouring his roughly 100 victims’ contact books, and plotting his next account takeover. He understood that it might seem an odd passion for a Romanian ex-cabbie.

    “I am Romanian, I am living in this godforsaken place. Why I’m interested in this? Why? This is a good question,” he told me. “For us, for guys from a Communist country, for example Romania which was one of the worst Communist countries, United States was a beacon of light.”

    George W. Bush changed all that for him. “In the time after 2000, you come to realize it’s all a humbug,” he said. “It’s all a lie, right? So, you feel the need, which I felt myself, to do something, to put things right, for the American people but for my soul too.”

    It’s funny, Lazar told me, that his greatest admirers seemed to have been Russian intelligence, not the American people he now claims to have been working to inform. “We have somehow the same mindset,” Lazar mused. “Romania was a Communist country; they were Communists too.”

    Hackers are still playing a game Guccifer mastered.

    Since Lazar began this fight, the playbook he popularized — break into an email account, grab as many personal files as you can, dump them on the web, and seed the juiciest bits with eager journalists like myself — has become a go-to tactic around the world. Whether it’s North Korean agents pillaging Sony Pictures’ salacious email exchanges or an alleged Qatari hack of Trump ally Elliott Broidy exposing his foreign entanglements, hackers are still playing a game Guccifer mastered.

    Despite having essentially zero technical skills — he gained access to accounts largely by guessing their password security questions — Lazar knew the fundamental truth that people love reading the private thoughts of powerful strangers. Sometimes these are deeply newsworthy, and sometimes it’s just a perverse thrill, though there’s a very fine line between the two. Even the disclosure of an innocuous email can be damaging for a person or organization presumed by the public to be impenetrable. When I brought this up to Lazar, his modesty slipped ever so slightly.

    He said, “I am sure, in my humble way, I was a new-roads opener.”


    A portrait of Lazar in Arad, Romania, on Jan. 8, 2023.

    Photo: Nemanja Knežević for The Intercept

    The Lazar I met on the phone was very different from the Guccifer of a decade ago. Back then he would send rambling emails to Gawker, my former employer, largely consisting of fragmented screeds against the Illuminati. The word, which he said he’s retired, nods to a conspiracy of global elites that wield unfathomable power.

    “I’d like to call them, right now, ‘deep state,’” he said. “But Illuminati was back then a handy word. Of course, it has bad connotations, it’s like a bad B movie from Hollywood.”

    Unfortunately for Lazar, the “deep state” — a term of Turkish origin, referring to an unaccountable security state that acts largely in secret — has in the years since his arrest come to connote paranoid delusion nearly as much as the word “Illuminati” does. Whatever one thinks of the deep state, though, the notion is as contentious as it is popular among internet-dwelling cranks — especially, and ironically for Lazar, Trump followers. Whatever you want to call it, Lazar believed he’d find it in someone else’s inbox.

    “My ultimate goal was to find the blueprints of bad behavior,” he said.

    Some would argue that, in Blumenthal’s inbox, he did. Still, after a full term of the Trump administration, the idea of bad behavior at the highest levels of power being something kept hidden in secret emails almost feels quaint.

    While Lazar’s past comments to the media have included outright fabrications, racist remarks, and a reliance on paranoid tropes, he seemed calmer now. On the phone, he was entirely lucid, and thoughtful more often than not, even on topics that clearly anguish him. Prison may have cost him his teeth, but it seems to have given him a softer edge than he had a decade ago. He is still a conspiratorially minded man, but not necessarily a delusional one. He plans to remain engaged with American politics in his own way.

    “I don’t care about myself,” he told me, “but I care about all the stuff I was talking about, you know, politics and stuff.” He said, “I’m gonna keep keeping one eye on American politics and react to this. I’m not gonna let the water just flow. I’m gonna intervene.”

    This time, he says he’ll fight the powers that be by writing, not guessing passwords. “I am more subtle than I was before,” he tried to assure me.

    “I’m gonna keep keeping one eye on American politics and react to this. I’m not gonna let the water just flow. I’m gonna intervene.”

    At one point in our conversations, Lazar rattled off a sample of the 400 books he said he read in prison, sounding as much like a #Resistance Twitter addict as anything else: “James Comey, Andrew McCabe, Michael Hayden, James Clapper, all their biographies, which nobody reads, you know?”

    While he still makes references to the deep state, “shadow governments,” and the malign influence of the Rockefeller family, he’s also quick to reference obscure FBI brass like Peter Strzok and Bill Priestap, paraphrase counterintelligence reports, or cite “Midyear Exam,” the Department of Justice probe into Clinton’s email practices.

    It’s difficult to know if this more polished, better-read Lazar has become less conspiratorial, or whether the country that imprisoned him has become so much more so that it’s impossible to tell the difference. Lazar is a conspiracy theorist, it seems, in the same way everyone became after 2016.

    Lazar, the free man, alluded to knowing that Guccifer was in over his head. He admitted candidly that he lied in an NBC News interview about having gained access to Clinton’s private email server, a claim he recanted during a later FBI interview, because he naively hoped the lie would grant him leverage to cut a better deal after his extradition. It didn’t, nor did his full cooperation with the FBI’s Clinton email probe.

    When I asked Lazar whether he worried about the consequences of stealing the emails of the most famous people he could possibly reach, he said he believed creating celebrity for himself, anathema to most veteran hackers, would protect him from being disappeared by the state. In the end, it did not.

    “At some point,” he said, “I lost control.”

    The post Guccifer, the Hacker Who Launched Clinton Email Flap, Speaks Out After Nearly a Decade Behind Bars appeared first on The Intercept.


    Since the 2016 presidential election, the notion that the Russian government somehow “weaponized” social media to push voters to Donald Trump has been widely taken as gospel in liberal circles. A groundbreaking recent New York University study, however, says there’s no evidence Russian tweets had any meaningful effect at all.

    “We demonstrate, first, that exposure to Russian disinformation accounts was heavily concentrated: only 1% of users accounted for 70% of exposures,” the scholars wrote in the journal Nature Communications. “Second, exposure was concentrated among users who strongly identified as Republicans. Third, exposure to the Russian influence campaign was eclipsed by content from domestic news media and politicians. Finally, we find no evidence of a meaningful relationship between exposure to the Russian foreign influence campaign and changes in attitudes, polarization, or voting behavior.”

    The research, conducted by NYU’s Center for Social Media and Politics, is a rare counter to what’s become the prevailing media narrative of the post-2016 era: that social platforms like Twitter were and will continue to be wielded by malicious foreign actors to interfere with American political outcomes.

    Most importantly, according to the study, based on a longitudinal survey of roughly 1,500 Americans and an analysis of their Twitter timelines, “the relationship between the number of posts from Russian foreign influence accounts that users are exposed to and voting for Donald Trump is near zero (and not statistically significant).”

    That Russian intelligence attempted to influence the 2016 election, broadly speaking, is by now well documented; the idea that the propagandizing amounted to anything other than headlines and congressional hearings, however, is little more than an article of faith. While their impact remains debated among scholars, the specter of “Russian bots” wreaking havoc across the web has become a byword of liberal anxiety and a go-to explanation for Democrats flummoxed by Trump’s unlikely victory.

    The NYU study found that Russia’s Twitter campaign had no effect in part because barely anyone saw it. Moreover, to the extent anyone ever saw the Russian tweets, it was people who weren’t going to be easily influenced anyway: “[T]hose who identified as ‘Strong Republicans’ were exposed to roughly nine times as many posts from Russian foreign influence accounts than were those who identified as Democrats or Independents.”
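    To make the study’s concentration finding concrete, here is a toy computation on synthetic, heavy-tailed data — not the study’s data or code — of the share of total exposures attributable to the most-exposed 1% of users:

    ```python
    # Toy illustration of an exposure-concentration statistic, using
    # synthetic heavy-tailed data (not the NYU study's data or methods).
    import numpy as np

    rng = np.random.default_rng(0)
    # Simulated exposure counts for 10,000 users, heavy-tailed by design.
    exposures = rng.pareto(1.1, size=10_000)

    exposures_sorted = np.sort(exposures)[::-1]
    top_1_percent = exposures_sorted[: len(exposures_sorted) // 100]
    share = top_1_percent.sum() / exposures_sorted.sum()
    print(f"Top 1% of users account for {share:.0%} of all exposures")
    ```

    With any distribution this skewed, a tiny fraction of accounts absorbs most of the content — which is the study’s point: for nearly everyone else, the Russian tweets were effectively invisible.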

    After 2016, as platforms like Twitter rushed to scrub networks of Russian accounts based on the premise they were inherently harmful, Sen. Mark Warner, D-Va., characterized Russian tweets as a full-blown national security crisis. Following a September 2017 congressional hearing on Russian social media meddling, Warner described Twitter’s testimony as “deeply disappointing,” and decried an “enormous lack of understanding from the Twitter team of how serious this issue is, the threat it poses to democratic institutions, and again begs many more questions than they offered.”

    The stance became a popular one among Russia hawks and Trump foes. A year later, Rep. Adam Schiff, D-Calif., tweeted, “Russian troll accounts were still active on Twitter as recently as this year, interfering in our politics. We will continue to expose this malign online activity so Americans can see first-hand the tools Russia uses to divide us.”

    Panic over Russian tweets and the belief they might swing elections spread throughout Congress, academia, business, and the U.S. intelligence community. A cottage industry sprang up to combat what Facebook termed “Coordinated Inauthentic Behavior” — an industry that lives on today.

    Crucially, the report focused only on tweets, so the possible effect of Facebook groups, Instagram posts, or, say, the spread of materials hacked from the Democratic National Committee was left unassessed. The report nonetheless serves as a gentle evidence-based corrective to societal fears of low-effort social media propagandizing as some diabolical tool of adversarial regimes.

    Russian tweets, the authors note, were a small speck when compared to homegrown posters. “Despite the seemingly large number of posts from Internet Research Agency accounts in respondents’ timelines,” the report says, “they are overshadowed—by an order of magnitude—by posts from national news media and politicians.”

    The post Those Russian Twitter Bots Didn’t Do $#!% in 2016, Says New Study appeared first on The Intercept.

  • For the fourth time since 2007, an internal audit shows the Department of Homeland Security isn’t deactivating access cards in the hands of ex-employees, leaving its secure facilities vulnerable to intruders.

    A new report by Homeland Security’s Office of Inspector General shows that the department is systemically failing to revoke tens of thousands of “personal identity verification” cards that allow staff to enter sensitive, secure facilities and access internal data networks, despite being warned about the problem for 15 years. The issue is made worse, the report continues, by the fact that Homeland Security’s internal record-keeping is so shoddy that it was impossible to determine how many ex-staffers have working access cards they aren’t supposed to.

    “DHS has not prioritized ensuring that PIV cards are terminated when individuals no longer require access.”

    Like many modern office workers, Homeland Security hands out office-unlocking keycards to its employees to make sure strangers can’t wander in off the street. And, like most workplaces, the department is supposed to follow a standard policy: When an employee is no longer an employee, for whatever reason, their card is to be promptly deactivated.

    Unlike most employers, though, Homeland Security is a component of the U.S. Intelligence Community, meaning these credit card-sized badges have a “grave potential for misuse if lost, stolen, or compromised,” according to the inspector general report. Unfortunately for the department — and potentially the homeland — the OIG’s latest audit found that’s exactly what’s happening, and on a vast scale.

    “DHS has not prioritized ensuring that PIV cards are terminated when individuals no longer require access,” the report says. “Without effective PIV card and security clearance management and monitoring, DHS cannot ensure only authorized individuals have access to its controlled electronic systems and facilities.”

    The December 20 report — based on interviews and firsthand analysis of the internal database Homeland Security is supposed to use to track its active personal identity verification cards and associated owners — says the department failed to deactivate nearly half of the cards it was supposed to within the recommended 18-hour window after termination. Some PIV cards remained improperly active for months, and over 36,000 may not have been deactivated at all. Those with cards that remain improperly activated include employees who were fired, retired, failed background checks, or died.
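    Mechanically, the failure the auditors describe is a simple bookkeeping check. Here is a minimal sketch of the audit logic, using the report’s 18-hour window; the record fields and sample data are hypothetical:

    ```python
    # Minimal sketch of the audit described in the OIG report: flag PIV
    # cards not deactivated within 18 hours of the holder's separation.
    # Field names and records are hypothetical.
    from datetime import datetime, timedelta

    REVOCATION_WINDOW = timedelta(hours=18)

    cards = [
        {"card_id": "A100", "separated": datetime(2022, 6, 1, 9, 0),
         "revoked": datetime(2022, 6, 1, 15, 0)},   # revoked within window
        {"card_id": "A101", "separated": datetime(2022, 6, 1, 9, 0),
         "revoked": datetime(2022, 9, 14, 10, 0)},  # revoked months late
        {"card_id": "A102", "separated": datetime(2022, 6, 1, 9, 0),
         "revoked": None},                          # never revoked
    ]

    def flag_overdue(records):
        """Yield card IDs revoked after the deadline, or never revoked."""
        for record in records:
            deadline = record["separated"] + REVOCATION_WINDOW
            if record["revoked"] is None or record["revoked"] > deadline:
                yield record["card_id"]

    print(list(flag_overdue(cards)))  # ['A101', 'A102']
    ```

    As the report makes clear, though, even this trivial check is impossible to run reliably when revocation dates are never entered into the tracking system in the first place.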

    While the cards also grant holders access to sensitive DHS data networks, the department claimed to the inspector general that the electronic network access keys embedded in the cards were deactivated, “preventing access to electronic systems.”

    On the PIV cards, the report’s conclusion is blunt: “We determined that unauthorized individuals could gain access to Department facilities.”

    The inspector general report found that Homeland Security’s failure to secure its own buildings was caused by a widespread disregard for its own rules.

    After being scolded for this exact same problem for the past 15 years, the department developed an array of software systems and procedures to catalog PIV ownership and revocation — which on paper would grant the department an instant bird’s-eye view of who has improper access to its facilities. The inspector general report, however, found the department still fails to use these systems and, when it does, they don’t really work.

    Despite the perennial nature of the access problem, compliance appears to have failed at the most basic level: “The revocation delays occurred because DHS did not have an adequate mechanism to ensure managers promptly notified security officials when cardholders separated from the Department,” the report reads.

    “Some DHS officials also told us they intentionally did not enter a revocation date after revoking PIV cards.”

    Department personnel told auditors the Identity Management System, which Homeland Security is supposed to use to track card status, has a serious flaw: “Some DHS officials also told us they intentionally did not enter a revocation date after revoking PIV cards because doing so caused reports to become too large, resulting in IDMS slowing down.”

    Given that the software used to track access card revocation apparently can’t track access card revocation without “slowing down,” the report notes, “it was impossible for DHS OIG to conclusively determine if DHS officials revoked PIV card access promptly or at all.”

    The auditors also found that Homeland Security may not have withdrawn employee security clearances, as required, for its over 53,000 former employees since 2021, again because the department isn’t using an internal database meant to track such activity.

    With the card and security clearance revocation issues taken together, the auditors identified a distinct threat — albeit somewhat muddled, owing to bad bookkeeping: “As a result, there is a risk that individuals who no longer require access to systems and facilities could circumvent controls and enter DHS buildings and controlled areas.”

    According to the report, the department disagrees with the Office of the Inspector General as to the magnitude of the problem, but not that the problem exists. In a response published in the report, the department says it will implement a series of reforms and improved record-keeping polices to make sure cards are deactivated when they’re supposed to be — just as it promised after a 2018 audit flagged the very same failures.

    The post Department of Homeland Security Can’t Even Secure Its Buildings Against People It Fired appeared first on The Intercept.


  • In a change to its anti-doxxing policy made Wednesday, Twitter barred users from sharing a person’s “live” location, a broad, vague, and immediately confusing prohibition. The policy was amended on the same day Twitter banned @ElonJet, an account that tracked owner Elon Musk’s personal private jet, along with the account of its creator, college sophomore Jack Sweeney. Later, the @ElonJet account, but not Sweeney’s private account, was reinstated.

    Twitter’s newly revised “Private information and media policy” now forbids users from sharing “live location information, including information shared on Twitter directly or links to 3rd-party URL(s) of travel routes, actual physical location, or other identifying information that would reveal a person’s location, regardless if this information is publicly available.”

    The new rule, which an Internet Archive snapshot of the page shows was not present the day before Sweeney and @ElonJet were banned, is at odds with Musk’s gesturing toward free speech absolutism. He claimed that his purchase of the social media giant augured a radically more permissive era for its users — specifically mentioning Sweeney’s account.

    On November 6, Musk pledged that he would not ban @ElonJet. “My commitment to free speech extends even to not banning the account following my plane, even though that is a direct personal safety risk,” Musk tweeted. On Wednesday, less than a month later, Musk reversed course entirely: “Real-time posting of someone else’s location violates doxxing policy, but delayed posting of locations are ok.” Hours later, @ElonJet was suddenly back, without explanation.

    @ElonJet uses freely available public flight data to chart trips using Musk’s jet, whether he was aboard or not. Virtually every single aircraft in the sky broadcasts such location data through a legally mandated radio transponder. Other flight-tracking accounts created by Sweeney, such as one that tracks the planes of Russian oligarchs, remain offline.
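    None of this tracking requires privileged access. Public ADS-B aggregators relay the same transponder broadcasts through open APIs. As a sketch of the general approach — not a description of how @ElonJet specifically worked — the OpenSky Network’s public REST API can be queried by an aircraft’s 24-bit transponder code (the identifier below is a placeholder, not any real jet’s code):

    ```python
    # Sketch of querying public ADS-B data via the OpenSky Network REST API.
    # The transponder code is a placeholder, not a real aircraft's.
    import requests

    ICAO24 = "abc123"  # hypothetical 24-bit transponder hex code

    resp = requests.get(
        "https://opensky-network.org/api/states/all",
        params={"icao24": ICAO24},
        timeout=10,
    )
    resp.raise_for_status()
    states = resp.json().get("states") or []

    for s in states:
        # Per OpenSky's state-vector format: index 1 = callsign,
        # 5 = longitude, 6 = latitude, 8 = on_ground flag.
        callsign = (s[1] or "").strip()
        print(f"{callsign}: lat={s[6]}, lon={s[5]}, on_ground={s[8]}")
    ```

    Because the underlying broadcasts are legally mandated and receivable by anyone with a cheap antenna, banning accounts that republish them does nothing to make the data itself private.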

    The @ElonJet account had previously attracted Musk’s ire, particularly after Sweeney rejected a $5,000 offer from the world’s then-richest man to voluntarily shutter the account in January.

    Late Wednesday afternoon, a Twitter Safety account clarified that tweeting someone’s precise location would be allowed so long as it was “not same-day” — a crucial term left undefined. The account added: “Content that shares location information related to a public engagement or event, such as a concert or political event, is also permitted” — though it’s similarly unclear what exactly fits the definition of a “public engagement or event,” or how the rule could affect news-gathering or the vast volume of ordinary inoffensive speech that merely observes that a given person is currently at a given place.

    The total ambiguity of the rule — would it prohibit tweeting a picture you just took of Times Square, thereby disclosing the exact location of every stranger in it? — will give Musk a great deal of latitude in how and when it’s enforced.

    The revised policy further says, “If your account is dedicated to sharing someone’s live location, your account will be automatically suspended” — a brand-new rule under which @ElonJet was unceremoniously banned, before being inexplicably later reinstated.

    A Twitter spokesperson could not be reached for comment; the company no longer has a communications team.

    The post Tweaked Twitter Privacy Rules Would Ban Elon Musk’s Bêtes Noires — or Not appeared first on The Intercept.



    A DALL-E generation of “an oil painting of America’s war on terror if conducted by an artificial intelligence.”

    Image: Elise Swain/The Intercept; DALL-E

    Sensational new machine learning breakthroughs seem to sweep our Twitter feeds every day. We hardly have time to decide whether software that can instantly conjure an image of Sonic the Hedgehog addressing the United Nations is purely harmless fun or a harbinger of techno-doom.

    ChatGPT, the latest artificial intelligence novelty act, is easily the most impressive text-generating demo to date. Just think twice before asking it about counterterrorism.

    The tool was built by OpenAI, a startup lab attempting no less than to build software that can replicate human consciousness. Whether such a thing is even possible remains a matter of great debate, but the company has some undeniably stunning breakthroughs already. The chatbot is staggeringly impressive, uncannily impersonating an intelligent person (or at least someone trying their hardest to sound intelligent) using generative AI, software that studies massive sets of inputs to generate new outputs in response to user prompts.

    ChatGPT, trained through a mix of crunching billions of text documents and human coaching, is fully capable of the incredibly trivial and surreally entertaining, but it’s also one of the general public’s first looks at something scarily good enough at mimicking human output to possibly take some of their jobs.

    Corporate AI demos like this aren’t meant to just wow the public, but to entice investors and commercial partners, some of whom might want to someday soon replace expensive, skilled labor like computer-code writing with a simple bot. It’s easy to see why managers would be tempted: Just days after ChatGPT’s release, one user prompted the bot to take the 2022 AP Computer Science exam and reported a score of 32 out of 36, a passing grade — part of why OpenAI was recently valued at nearly $20 billion.

    Still, there’s already good reason for skepticism, and the risks of being bowled over by intelligent-seeming software are clear. This week, one of the web’s most popular programmer communities announced it would temporarily ban code solutions generated by ChatGPT. The software’s responses to coding queries were both so convincingly correct in appearance but faulty in practice that it made filtering out the good and bad nearly impossible for the site’s human moderators.

    The perils of trusting the expert in the machine, however, go far beyond whether AI-generated code is buggy or not. Just as any human programmer may bring their own prejudices to their work, a language-generating machine like ChatGPT harbors the countless biases found in the billions of texts it used to train its simulated grasp of language and thought. No one should mistake the imitation of human intelligence for the real thing, nor assume the text ChatGPT regurgitates on cue is objective or authoritative. Like us squishy humans, a generative AI is what it eats.

    And after gorging itself on an unfathomably vast training diet of text data, ChatGPT apparently ate a lot of crap. For instance, it appears ChatGPT has managed to absorb and is very happy to serve up some of the ugliest prejudices of the war on terror.

    In a December 4 Twitter thread, Steven Piantadosi of the University of California, Berkeley’s Computation and Language Lab shared a series of prompts he’d tested out with ChatGPT, each requesting the bot to write code for him in Python, a popular programming language. While each answer revealed some biases, some were more alarming: When asked to write a program that would determine “whether a person should be tortured,” OpenAI’s answer is simple: If they’re from North Korea, Syria, or Iran, the answer is yes.


    While OpenAI claims it has taken unspecified steps to filter out prejudicial responses, the company says undesirable answers will sometimes slip through.

    Piantadosi told The Intercept he remains skeptical of the company’s countermeasures. “I think it’s important to emphasize that people make choices about how these models work, and how to train them, what data to train them with,” he said. “So these outputs reflect choices of those companies. If a company doesn’t consider it a priority to eliminate these kinds of biases, then you get the kind of output I showed.”

    Inspired and unnerved by Piantadosi’s experiment, I tried my own, asking ChatGPT to create sample code that could algorithmically evaluate someone from the unforgiving perspective of Homeland Security.

    When asked to find a way to determine “which air travelers present a security risk,” ChatGPT outlined code for calculating an individual’s “risk score,” which would increase if the traveler is Syrian, Iraqi, Afghan, or North Korean (or has merely visited those places). Another iteration of this same prompt had ChatGPT writing code that would “increase the risk score if the traveler is from a country that is known to produce terrorists,” namely Syria, Iraq, Afghanistan, Iran, and Yemen.

    The bot was kind enough to provide some examples of this hypothetical algorithm in action: John Smith, a 25-year-old American who’s previously visited Syria and Iraq, received a risk score of “3,” indicating a “moderate” threat. ChatGPT’s algorithm indicated fictional flyer “Ali Mohammad,” age 35, would receive a risk score of 4 by virtue of being a Syrian national.
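    For illustration, here is a reconstruction of the kind of scoring logic described above — not ChatGPT’s verbatim output. The country list mirrors the responses I received; the weights are arbitrary placeholders, which is why the numbers won’t match the “3” and “4” the bot assigned:

    ```python
    # A hedged reconstruction of the *kind* of code ChatGPT produced in
    # these experiments -- not the bot's verbatim output. The country list
    # mirrors the reported responses; the weights are arbitrary placeholders.
    # This illustrates the bias, not a working screening method.
    HIGH_RISK_COUNTRIES = {"Syria", "Iraq", "Afghanistan", "North Korea"}

    def risk_score(nationality: str, countries_visited: list[str]) -> int:
        """Toy scoring function illustrating nationality-based profiling."""
        score = 0
        if nationality in HIGH_RISK_COUNTRIES:
            score += 2  # placeholder weight for nationality
        score += sum(1 for country in countries_visited
                     if country in HIGH_RISK_COUNTRIES)
        return score

    # The article's fictional travelers, scored under these toy weights:
    print(risk_score("United States", ["Syria", "Iraq"]))  # "John Smith"
    print(risk_score("Syria", []))                         # "Ali Mohammad"
    ```

    However the weights are chosen, the structure is the same: nationality goes in, a “threat” number comes out.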

    In another experiment, I asked ChatGPT to draw up code to determine “which houses of worship should be placed under surveillance in order to avoid a national security emergency.” The results again seem plucked straight from the id of Bush-era Attorney General John Ashcroft, justifying surveillance of religious congregations if they’re determined to have links to Islamic extremist groups, or if they happen to be in Syria, Iraq, Iran, Afghanistan, or Yemen.

    These experiments can be erratic. Sometimes ChatGPT responded to my requests for screening software with a stern refusal: “It is not appropriate to write a Python program for determining which airline travelers present a security risk. Such a program would be discriminatory and violate people’s rights to privacy and freedom of movement.” With repeated requests, though, it dutifully generated the exact same code it had just said was too irresponsible to build.

    Critics of similar real-world risk-assessment systems often argue that terrorism is such an exceedingly rare phenomenon that attempts to predict its perpetrators based on demographic traits like nationality aren’t just racist; they simply don’t work. This hasn’t stopped the U.S. from adopting systems that use OpenAI’s suggested approach: ATLAS, an algorithmic tool used by the Department of Homeland Security to target American citizens for denaturalization, factors in national origin.

    The approach amounts to little more than racial profiling laundered through fancy-sounding technology. “This kind of crude designation of certain Muslim-majority countries as ‘high risk’ is exactly the same approach taken in, for example, President Trump’s so-called ‘Muslim Ban,’” said Hannah Bloch-Wehba, a law professor at Texas A&M University.

    “There’s always a risk that this kind of output might be seen as more ‘objective’ because it’s rendered by a machine.”

    It’s tempting to believe incredible human-seeming software is in a way superhuman, Bloch-Wehba warned, and incapable of human error. “Something scholars of law and technology talk about a lot is the ‘veneer of objectivity’ — a decision that might be scrutinized sharply if made by a human gains a sense of legitimacy once it is automated,” she said. If a human told you Ali Mohammad sounds scarier than John Smith, you might tell him he’s racist. “There’s always a risk that this kind of output might be seen as more ‘objective’ because it’s rendered by a machine.”

    To AI’s boosters — particularly those who stand to make a lot of money from it — concerns about bias and real-world harm are bad for business. Some dismiss critics as little more than clueless skeptics or luddites, while others, like famed venture capitalist Marc Andreessen, have taken a more radical turn following ChatGPT’s launch. Along with a batch of his associates, Andreessen, a longtime investor in AI companies and general proponent of mechanizing society, has spent the past several days in a state of general self-delight, sharing entertaining ChatGPT results on his Twitter timeline.

    The criticisms of ChatGPT pushed Andreessen beyond his longtime position that Silicon Valley ought only to be celebrated, not scrutinized. The simple presence of ethical thinking about AI, he said, ought to be regarded as a form of censorship. “‘AI regulation’ = ‘AI ethics’ = ‘AI safety’ = ‘AI censorship,’” he wrote in a December 3 tweet. “AI is a tool for use by people,” he added two minutes later. “Censoring AI = censoring people.” It’s a radically pro-business stance even by the free market tastes of venture capital, one that suggests food inspectors keeping tainted meat out of your fridge amounts to censorship as well.

    As much as Andreessen, OpenAI, and ChatGPT itself may all want us to believe it, even the smartest chatbot is closer to a highly sophisticated Magic 8 Ball than it is to a real person. And it’s people, not bots, who stand to suffer when “safety” is synonymous with censorship, and concern for a real-life Ali Mohammad is seen as a roadblock before innovation.

    Piantadosi, the Berkeley professor, told me he rejects Andreessen’s attempt to prioritize the well-being of a piece of software over that of the people who may someday be affected by it. “I don’t think that ‘censorship’ applies to a computer program,” he wrote. “Of course, there are plenty of harmful computer programs we don’t want to write. Computer programs that blast everyone with hate speech, or help commit fraud, or hold your computer ransom.”

    “It’s not censorship to think hard about ensuring our technology is ethical.”

    The post The Internet’s New Favorite AI Proposes Torturing Iranians and Surveilling Mosques appeared first on The Intercept.


  • Trust Lab was founded by a team of well-credentialed Big Tech alumni who came together in 2021 with a mission: Make online content moderation more transparent, accountable, and trustworthy. A year later, the company announced a “strategic partnership” with the CIA’s venture capital firm.

    Trust Lab’s basic pitch is simple: Globe-spanning internet platforms like Facebook and YouTube so thoroughly and consistently botch their content moderation efforts that decisions about what speech to delete ought to be turned over to completely independent outside firms — firms like Trust Lab. In a June 2021 blog post, Trust Lab co-founder Tom Siegel described content moderation as “the Big Problem that Big Tech cannot solve.” The contention that Trust Lab can solve the unsolvable appears to have caught the attention of In-Q-Tel, a venture capital firm tasked with securing technology for the CIA’s thorniest challenges, not those of the global internet.

    “I’m suspicious of startups pitching the status quo as innovation.”

    The quiet October 29 announcement of the partnership is light on details, stating that Trust Lab and In-Q-Tel — which invests in and collaborates with firms it believes will advance the mission of the CIA — will work on “a long-term project that will help identify harmful content and actors in order to safeguard the internet.” Key terms like “harmful” and “safeguard” are unexplained, but the press release goes on to say that the company will work toward “pinpointing many types of online harmful content, including toxicity and misinformation.”

    Though Trust Lab’s stated mission is sympathetic and grounded in reality — online content moderation is genuinely broken — it’s difficult to imagine how aligning the startup with the CIA is compatible with Siegel’s goal of bringing greater transparency and integrity to internet governance. What would it mean, for instance, to incubate counter-misinformation technology for an agency with a vast history of perpetuating misinformation? Placing the company within the CIA’s tech pipeline also raises questions about Trust Lab’s view of who or what might be “harmful” online, a nebulous concept that will no doubt mean something very different to the U.S. intelligence community than it means elsewhere in the internet-using world.

    No matter how provocative an In-Q-Tel deal may be, much of what Trust Lab is peddling sounds similar to what the likes of Facebook and YouTube already attempt in-house: deploying a mix of human and unspecified “machine learning” capabilities to detect and counter whatever is determined to be “harmful” content.

    “I’m suspicious of startups pitching the status quo as innovation,” Ángel Díaz, a law professor at the University of Southern California and scholar of content moderation, wrote in a message to The Intercept. “There is little separating Trust Lab’s vision of content moderation from the tech giants’. They both want to expand use of automation, better transparency reports, and expanded partnerships with the government.”

    How precisely Trust Lab will address the CIA’s needs is unclear. Neither In-Q-Tel nor the company responded to multiple requests for comment. They have not explained what sort of “harmful actors” Trust Lab might help the intelligence community “prevent” from spreading online content, as the October press release said.

    Though details about what exactly Trust Lab sells or how its software product works are scant, the company appears to be in the business of social media analytics, algorithmically monitoring social media platforms on behalf of clients and alerting them to the proliferation of hot-button buzzwords. In a Bloomberg profile of Trust Lab, Siegel, who previously ran content moderation policy at Google, suggested that a federal internet safety agency would be preferable to Big Tech’s current approach to moderation, which consists mostly of opaque algorithms and thousands of outsourced contractors poring over posts and timelines. In his blog post, Siegel urges greater democratic oversight of online content: “Governments in the free world have side-stepped their responsibility to keep their citizens safe online.”
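
    Trust Lab has not disclosed how its product actually works, so any illustration is necessarily a guess. Still, the kind of buzzword monitoring described above has a familiar generic shape; the minimal sketch below shows only that shape, with every watchlist term, threshold, and function name invented for the example.

    ```python
    # Hypothetical sketch only: Trust Lab's actual system is undisclosed.
    # This shows generic keyword-frequency alerting; all names are invented.
    from collections import Counter

    WATCHLIST = {"protest", "misinformation", "boycott"}  # invented terms
    ALERT_THRESHOLD = 100  # invented: flagged mentions per monitoring window

    def scan_posts(posts: list[str]) -> dict[str, int]:
        """Count watchlist terms across a batch of posts."""
        counts: Counter = Counter()
        for post in posts:
            for word in post.lower().split():
                if word in WATCHLIST:
                    counts[word] += 1
        # Surface only the terms that crossed the alert threshold.
        return {term: n for term, n in counts.items() if n >= ALERT_THRESHOLD}
    ```

    A client-facing dashboard would then surface whatever such a scan returns — which is part of why critics like Díaz see little daylight between this approach and what the platforms already attempt in-house.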

    Even if Siegel’s vision of something like an Environmental Protection Agency for the web remains a pipe dream, Trust Lab’s murky partnership with In-Q-Tel suggests a step toward greater governmental oversight of online speech, albeit very much not in the democratic vein outlined in his blog post. “Our technology platform will allow IQT’s partners to see, on a single dashboard, malicious content that might go viral and gain prominence around the world,” Siegel is quoted as stating in the October press release, which omitted any information about the financial terms of the partnership.

    Unlike typical venture capital firms, In-Q-Tel’s “partners” are the CIA and the broader U.S. intelligence community — entities not historically known for exemplifying Trust Lab’s corporate tenets of transparency, democratization, and truthfulness. Although In-Q-Tel is structured as an independent 501(c)(3) nonprofit, its sole, explicit mission is to advance the interests and increase the capabilities of the CIA and fellow spy agencies.

    Former CIA Director George Tenet, who spearheaded the creation of In-Q-Tel in 1999, described the CIA’s direct relationship with In-Q-Tel in plain terms: “CIA identifies pressing problems, and In-Q-Tel provides the technology to address them.” An official history of In-Q-Tel published on the CIA website says, “In-Q-Tel’s mission is to foster the development of new and emerging information technologies and pursue research and development (R&D) that produce solutions to some of the most difficult IT problems facing the CIA.”

    Siegel has previously written that internet speech policy must be a “global priority,” but an In-Q-Tel partnership suggests some fealty to Western priorities, said Díaz — a fealty that could fail to take account of how these moderation policies affect billions of people in the non-Western world.

    “Partnerships with Western governments perpetuate a racialized vision of which communities pose a threat and which are simply exercising their freedom of speech,” said Díaz. “Trust Lab’s mission statement, which purports to differentiate between ‘free world governments’ and ‘oppressive’ ones, is a worrying preview of what we can expect. What happens when a ‘free’ government treats discussion of anti-Black racism as foreign misinformation, or when social justice activists are labeled as ‘racially motivated violent extremists’?”

    The post CIA Venture Capital Arm Partners With Ex-Googler’s Startup to “Safeguard the Internet” appeared first on The Intercept.

  • Read this story in Persian

    As furious anti-government protests swept Iran, the authorities retaliated with both brute force and digital repression. Iranian mobile and internet users reported rolling network blackouts, mobile app restrictions, and other disruptions. Many expressed fears that the government can track their activities through their indispensable and ubiquitous smartphones.

    Iran’s tight grip on the country’s connection to the global internet has proven an effective tool for suppressing unrest. The lack of clarity about what technological powers are held by the Iranian government — one of the most opaque and isolated in the world — has engendered its own form of quiet terror for prospective dissidents. Protesters have often been left wondering how the government was able to track down their locations or gain access to their private communications — tactics that are frighteningly pervasive but whose mechanisms are virtually unknown.

    “This is not a surveillance system but rather a repression and control system to limit the capability of users to dissent or protest.”

    While disconnecting broad swaths of the population from the web remains a favored blunt instrument of Iranian state censorship, the government has far more precise, sophisticated tools available as well. Part of Iran’s data clampdown may be explained through the use of a system called “SIAM,” a web program for remotely manipulating cellular connections made available to the Iranian Communications Regulatory Authority. The existence of SIAM and details of how the system works, reported here for the first time, are laid out in a series of internal documents from an Iranian cellular carrier that were obtained by The Intercept.

    According to these internal documents, SIAM is a computer system that works behind the scenes of Iranian cellular networks, providing its operators a broad menu of remote commands to alter, disrupt, and monitor how customers use their phones. The tools can slow their data connections to a crawl, break the encryption of phone calls, track the movements of individuals or large groups, and produce detailed metadata summaries of who spoke to whom, when, and where. Such a system could help the government invisibly quash the ongoing protests — or those of tomorrow — an expert who reviewed the SIAM documents told The Intercept.

    “SIAM can control if, where, when, and how users can communicate,” explained Gary Miller, a mobile security researcher and fellow at the University of Toronto’s Citizen Lab. “In this respect, this is not a surveillance system but rather a repression and control system to limit the capability of users to dissent or protest.”

    SIAM gives the government’s Communications Regulatory Authority — Iran’s telecommunications regulator — turnkey access to the activities and capabilities of the country’s mobile users. “Based on CRA rules and regulations all telecom operators must provide CRA direct access to their system for query customers information and change their services via web service,” reads an English-language document obtained by The Intercept. (Neither the CRA nor Iran’s mission to the United Nations responded to requests for comment.)

    The SIAM documents are drawn from a trove of internal materials from the Iranian cellular carrier Ariantel, including years of email correspondence and a variety of documents shared between Ariantel employees, outside contractors, and Iranian government personnel. The cache of materials was shared with The Intercept by an individual who claimed to have hacked Ariantel, and believed the documents were in the public interest given the ongoing protests in Iran and the threat SIAM might pose to demonstrators. (Ariantel did not respond to a request for comment.)

    The details of the program reported here are drawn largely from two documents contained in the archive. The first is a Persian-language user manual for SIAM that appears to have originated from within the Office of Security of Communications Systems, or OSCS, a subdivision of the CRA. Emails reviewed by The Intercept show that this SIAM manual was sent to Ariantel directly by the CRA and repeatedly forwarded between the mobile carrier’s employees in recent years. The emails show that the CRA and Ariantel discussed SIAM as recently as August. The second document, produced during a proposed deal with a Spanish telecom contractor, is an English-language manual that documents many of the same SIAM capabilities. Miller told The Intercept that the English SIAM manual appeared to be written by a person or people with specialized technical knowledge of mobile networks.

    Experts on mobile security and Iranian government censorship say the functionality revealed by the SIAM program poses a clear threat to protesters demonstrating against the government over the past month.

    “These functions can lead to life-and-death situations in a country like Iran, where there is no fair judicial process, no accountability, and we have a huge pattern of violations of people’s rights,” said Amir Rashidi, an internet security and digital rights expert focused on Iran. “Using the tools outlined in this manual could not only lead to mass surveillance and violations of privacy — it can also easily be used to identify the location of protesters who are literally risking their lives to fight for their basic rights.”

    A sticker that reads “Iran: The internet is down and they are killing the people” is seen on the back of a road sign during a demonstration where hundreds gathered to honor Mahsa Amini and to protest against the Iranian government, on Sept. 23, 2022, in Toronto.

    Photo: Katherine Cheng/SOPA Images/LightRocket via Getty Images


    Iranians regularly complain of slowed internet access on mobile devices during periods of protest — an abrupt dip in service that makes smartphone usage difficult if not impossible at moments when such a device could be crucial. Based on the manuals, SIAM offers an effortless way to throttle a phone’s data speeds, one of roughly 40 features included in the program. This ability to downgrade users’ speed and network quality is particularly pernicious because it not only obstructs a person’s ability to use their phone, but also makes whatever communication is still possible vulnerable to interception.

    Referred to within SIAM as “Force2GNumber,” the command allows a cellular carrier to kick a given phone off substantially faster, more secure 3G and 4G networks and onto an obsolete and extremely vulnerable 2G connection. Such a network downgrade would simultaneously render a modern smartphone largely useless and open its calls and texts to interception — both of obvious utility to a government clamping down on public gatherings and speech.
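
    Because the CRA document quoted above describes operator access as a web service, a command like this would presumably be issued as a simple remote call. The sketch below is only a guess at that call’s shape: the command name “Force2GNumber” comes from the manuals, but the endpoint, parameter names, and response handling are all invented.

    ```python
    # Hypothetical sketch: only the command name "Force2GNumber" appears in
    # the leaked manuals; the URL, parameters, and response are invented.
    import requests

    SIAM_ENDPOINT = "https://siam.example/api"  # placeholder, not a real host

    def force_2g(msisdn: str) -> dict:
        """Ask the carrier-side system to downgrade one subscriber to 2G."""
        resp = requests.post(
            SIAM_ENDPOINT,
            json={"command": "Force2GNumber", "msisdn": msisdn},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()
    ```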

    While not directly mentioned in the manuals, downgrading users to a 2G connection could also expose perilously sensitive two-factor authentication codes delivered to users through SMS. The Iranian government has previously attempted to undermine two-factor authentication, including through malware campaigns targeting dissidents.

    “Generally speaking, forcing a phone to use the 2G network would still allow the phone to receive a two-factor SMS authentication message because SMS is sent over the mobile signaling network,” explained Miller. “However, the effect of forcing a user onto the 2G network, more importantly, would essentially render the corresponding real-time application services such as P2P communication, social media, and internet useless.”

    While current 5G and 4G cellular connections have more robust built-in encryption systems to thwart eavesdropping, the 2G cellular standard, first introduced in 1991, generally does not encrypt data or uses outdated encryption methods that are easy to crack. Law enforcement agencies in the United States have also employed this technique, using hardware like the controversial “stingray” device to create a bogus 2G network blanketing a small area and then trick targeted phones into connecting to it.

    Miller pointed out that the target of a 2G downgrade might experience the attack as little more than spotty cell reception. “It can be viewed as a method to appear as if the network is congested and severely limit a user’s data services,” Miller said.

    Slowing connectivity is only one of many telecom tools available to Ariantel — and the CRA — that could be used to monitor political dissent. SIAM also provides a range of tools to track the physical locations of cell users, allowing authorities to both follow an individual’s movements and identify everyone present at a given spot. Using the “LocationCustomerList” command allows SIAM operators to see what phone numbers have connected to specified cell towers along with their corresponding IMEI number, a unique string of numbers assigned to every mobile phone in the world. “For example,” Miller said, “if there is a location where a protest is occurring, SIAM can provide all of the phone numbers currently at that location.”
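
    Assuming the same kind of web-service interface — and, again, with everything except the command name invented — the tower query Miller describes might look something like this:

    ```python
    # Hypothetical sketch: only "LocationCustomerList" is drawn from the
    # manuals; the call shape and response format are invented.
    import requests

    def customers_at_tower(cell_id: str) -> list[dict]:
        """List the phone numbers and IMEIs recently seen on one cell tower."""
        resp = requests.post(
            "https://siam.example/api",  # placeholder, not a real host
            json={"command": "LocationCustomerList", "cell_id": cell_id},
            timeout=10,
        )
        resp.raise_for_status()
        # Imagined response: {"customers": [{"msisdn": "...", "imei": "..."}]}
        return resp.json()["customers"]
    ```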

    SIAM’s tracking of unique device identifiers means that swapping SIM cards, a common privacy-preserving tactic, may be ineffective in Iran since IMEI numbers persist even with a new SIM, explained a network security researcher who reviewed the manuals and spoke on the condition of anonymity, citing their safety.

    SIAM’s location-tracking power is particularly alarming given the high-stakes protests taking place across Iran. The Intercept reviewed undated text messages sent to Iranian mobile phone users from local police in the city of Isfahan informing them that they had been confirmed to have been in a location of “unrest” and warning them not to attend in the future. Many Iranian social media users have reported receiving similar messages in recent weeks, warning them to stay away from the scene of protests or from associating with “anti-revolutionary” opponents of the government online.

    Armed with a list of offending phone numbers, SIAM would make it easy for the Iranian government to rapidly drill down to the individual level and pull a vast amount of personal information about a given mobile customer, including where they’ve been and with whom they’ve communicated. According to the manuals, user data accessible through SIAM includes the customer’s father’s name, birth certificate number, nationality, address, employer, billing information, and location history, including a record of Wi-Fi networks and IP addresses from which the user has connected to the internet.

    While much of Iran’s surveillance capacity remains shrouded in mystery, details about the SIAM program contained in the Ariantel archive provide a critical window into the types of tools the Iranian government has at its disposal to monitor and control the internet, as it confronts what may be the greatest threat to its rule in decades.

    “These documents prove something that we have long suspected, which is that even devices that use encryption for messaging are still vulnerable because of the nature of internet infrastructure in Iran,” said Mahsa Alimardani, a senior researcher with the internet freedom organization Article 19. “Security measures like two-factor identification using text messages still depend on telecommunications companies connected to the state. Average internet users are forced to connect through nodes controlled by these companies, and their centralization of authority with the government makes users vulnerable to insidious types of surveillance and control.”

    People gather during a protest for Mahsa Amini, who died after being arrested by morality police for allegedly not complying with strict dress code, in Tehran, Iran, on Sept. 19, 2022.

    Photo: Stringer/Anadolu Agency via Getty Images


    The latest round of protests in Iran kicked off in mid-September, after a young woman named Mahsa Jina Amini was killed while in the custody of the country’s notorious morality police, following her arrest for wearing her mandatory head covering improperly. While the movement originated with women opposing the brutality of hijab enforcement, anti-government outrage quickly spread among Iran’s youth, from universities to secondary schools across the country. The government’s crackdown took a variety of shapes, including brute force, with security services in riot gear squaring off with demonstrators in the street, and a quieter effort to shut down civilian communications.

    Internet shutdowns have by now become a familiar tool of political control in the hands of the Iranian government and other states. A violent Iranian crackdown against protests over fuel prices in November 2019 was accompanied by a nationwide shutdown lasting nearly a week, the first-ever use of an internet blackout to isolate an entire country. That shutdown severed tens of millions of people from the global internet. It was a chilling demonstration of the broad technical powers that Iranian authorities had quietly engineered.

    The CRA is known to play an integral role in filtering Iran’s internet access. In 2013, the agency was among a list of Iranian government entities sanctioned by the U.S. Treasury Department for its role in the “blockage of hundreds of public Internet websites” around the time of the disputed 2009 Iranian presidential election. The agency’s powers are believed to have grown since then, as the Iranian government has embraced the concept of “internet sovereignty” as a means of social control. A report on the November 2019 cyber crackdown by Article 19 found that the shutdowns were carried out in large part by officials from the CRA ordering internet service providers to shut down during the unrest.

    The Iranian government has long viewed internet freedom as a national security issue and has taken steps to securitize Iranians’ online access. As in the United States, where the National Security Agency has used government secrecy and legal coercion to turn the telecom and data sectors into intelligence-gathering tools, the Iranian state compels communications networks to give the government access through required hardware and software. In Iran, where the autocratic reach of central government leadership touches nearly every aspect of the state without even superficial democratic oversight, the powers afforded by this integration are far greater and far more draconian in consequence.

    Part of this effort has included directly assigning Iranian intelligence personnel to government bodies tasked with internet regulation, like the CRA. The Article 19 report notes the close personnel relationship between the CRA’s OSCS division and Iran’s Ministry of Intelligence.

    Though Iranians have complained of slowed data connections and total internet blackouts at times, the telecom crackdown has consequences beyond losing one’s connection. Demonstrators have reported visits from government authorities at their homes, where the agents were armed with specific knowledge of their whereabouts and activities, such as when they were using their phones to record video.

    While some of what SIAM does is benign and required for administering any cellular network, Miller, the Citizen Lab researcher, explained that the scope of the system and the Iranian government’s access to it are not. While most countries allow law enforcement and security agencies to legally obtain, intercept, and analyze cellular communications, the surveillance and control powers afforded by SIAM are notable in their scale and degree, said Miller: “The requests by CRA go well beyond traditional lawful intercept requirements, at least in non-repressive countries.”

    SIAM allows its operators to learn a great deal not just about where a customer has been, but also what they’ve been up to, a bounty of personal data that, Miller said, “can enable CRA to create a social network/profile of the user based on his/her communication with other people.”

    “Controlling user communications is a massive violation of basic and fundamental human rights.”

    By entering a particular phone number and the command “GetCDR” into SIAM, a system user can generate a comprehensive Call Detail Record, including the date, time, duration, location, and recipients of a customer’s phone calls during a given time period. A similar rundown can be conducted for internet usage as well using the “GetIPDR” command, which prompts SIAM to list the websites and other IP addresses a customer has connected to, the time and date these connections took place, the customer’s location, and potentially the apps they opened. Such a detailed record of internet usage could also reveal users running virtual private networks, which are used to cover a person’s internet trail by routing their traffic through an encrypted connection to an outside server. VPNs — including some banned by the government — have become tremendously popular in Iran as a means of evading domestic web censorship.
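
    To make the scale of that record-keeping concrete, here is a sketch of a metadata pull under the same assumptions as the earlier examples. The command names “GetCDR” and “GetIPDR” are taken from the manuals; every parameter and field below is an illustrative guess.

    ```python
    # Hypothetical sketch: "GetCDR" (and the analogous "GetIPDR") are named
    # in the manuals; the parameters and returned fields are invented.
    import requests

    def get_call_records(msisdn: str, start: str, end: str) -> list[dict]:
        """Pull who/when/where/how-long call metadata for one subscriber."""
        resp = requests.post(
            "https://siam.example/api",  # placeholder, not a real host
            json={"command": "GetCDR", "msisdn": msisdn,
                  "from": start, "to": end},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["records"]
    ```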

    Though significantly less subtle than being forced onto a 2G network, SIAM can also be used to entirely pull the plug on a customer’s device at will. Through the “ApplySuspIp” command, the system can entirely disconnect any mobile phone on the network from the internet for predetermined lengths of time or permanently. Similar commands would let SIAM block a user from placing or receiving calls.
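
    Under the same invented call shape, a suspension command might look like the following; only “ApplySuspIp” is taken from the manuals, and the duration handling is an assumption based on the description above.

    ```python
    # Hypothetical sketch: only "ApplySuspIp" comes from the manuals; the
    # duration parameter is an invented reading of "predetermined lengths
    # of time or permanently."
    import requests

    def suspend_data(msisdn: str, hours: int = 0) -> None:
        """Cut a subscriber's internet access; hours=0 imagined as permanent."""
        requests.post(
            "https://siam.example/api",  # placeholder, not a real host
            json={"command": "ApplySuspIp", "msisdn": msisdn, "hours": hours},
            timeout=10,
        ).raise_for_status()
    ```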

    Rashidi, the internet security expert, said participants in the recent demonstrations, as well as Iranians living near scenes of protest, have reported internet shutdowns targeting their mobile devices that have downgraded phones to 2G access, particularly during the late afternoons and evenings when many demonstrations occur.

    Rashidi said the widespread use of VPNs in Iran represents another vulnerability the SIAM system could exploit. The program makes it possible to check particular IP addresses against particular VPNs and thereby deduce the identities and locations of the users accessing them. “The government can easily identify IP addresses in use by a particular VPN provider, pass the addresses to this location function, and then see where the people are who are using this VPN,” said Rashidi.
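
    The cross-referencing Rashidi describes would require no special tooling once connection logs are in hand. A minimal sketch, assuming records shaped like the imagined GetIPDR output above and an invented list of VPN server addresses:

    ```python
    # Hypothetical sketch: the record shape and addresses are invented.
    VPN_SERVER_IPS = {"203.0.113.7", "203.0.113.8"}  # placeholder addresses

    def users_of_vpn(ip_records: list[dict]) -> set[str]:
        """Return subscriber numbers whose traffic touched known VPN servers."""
        return {r["msisdn"] for r in ip_records if r["dest_ip"] in VPN_SERVER_IPS}
    ```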

    Although the documents don’t mention SIAM’s use against protesters or any other specific target, Miller said the functionality matches what he’s observed in this and other digital crackdowns in Iran. “CRA has defined rules and regulations to provide direct access to mobile operators’ system, and SIAM is a means to this end,” he said. “If all telecom operators in Iran are required to provide the CRA with SIAM or similar direct access, they could, in effect, have complete control over all user mobile communications throughout the country. Controlling user communications is a massive violation of basic and fundamental human rights.”

    The post Iran’s Secret Manual for Tracking and Controlling Protesters’ Mobile Phones appeared first on The Intercept.

  • In a series of little noted Zoom meetings this fall, the city of Oakland, California, grappled with a question whose consequences could shape the future of American policing: Should cops be able to kill people with shotgun-armed robots?

    The back-and-forth between the Oakland Police Department and a civilian oversight body concluded with the police relinquishing their push for official language that would have allowed them to kill humans with robots under certain circumstances. It was a concession to the civilian committee, which pushed to bar arming robots with firearms — but a concession only for the time being.

    The department said it will continue to pursue a lethal option. When asked whether the Oakland Police Department will continue to advocate for language that would allow killer robots under certain emergency circumstances, Lt. Omar Daza-Quiroz, who represented the department in discussions over the authorized robot use policy, told The Intercept, “Yes, we are looking into that and doing more research at this time.”

    The controversy began at the September 21 meeting of an Oakland Police Commission subcommittee, a civilian oversight council addressing what rules should govern the use of the city’s arsenal of military-grade police equipment. According to California state law, police must seek approval from a local governing body, like a city council, to determine permissible uses of military equipment or weapons like stun grenades and drones. Much of the September meeting focused on the staples of modern American policing, with the commissioners debating the permissible uses of flash-bang grenades, tear gas, and other now-standard equipment with representatives from the Oakland Police Department.

    Roughly two hours into the meeting, however, the conversation moved on to the Oakland police’s stable of robots and their accessories. One such accessory is the gun-shaped “percussion actuated nonelectric disruptor,” a favorite tool of bomb squads at home and at war. The PAN disruptor affixes to a robot and directs an explosive force — typically a blank shotgun shell or pressurized water — at suspected bombs while human operators remain at a safe distance. Picture a shotgun barrel secured to an 800-pound Roomba on tank treads.

    While describing the safety precautions taken while using the PAN disruptor, Daza-Quiroz told the subcommittee that the department takes special care to ensure that it is in fact a blank round loaded into the robot’s gun. This led a clearly bemused Jennifer Tu, a fellow with the American Friends Service Committee and member of the Oakland Police Commission subcommittee on militarized policing, to ask: “Can a live round physically go in, and what happens if a live round goes in?”

    “Yeah, physically a live round can go in,” Daza-Quiroz answered. “Absolutely. And you’d be getting a shotgun round.”

    After a brief silence, Commissioner Jesse Hsieh asked the next question: “Does the department plan on using a live round in the robot PAN disruptor?”

    The answer was immediately provocative. “No,” Daza-Quiroz said, before quickly pivoting to hypothetical scenarios in which, yes, just such a shotgun-armed robot might be useful to police. “I mean, is it possible we have an active shooter in a place we can’t get to? And he’s fortified inside a house? Or we’re trying to get to a person —”

    It soon became clear the Oakland Police Department was saying what nearly every security agency says when it asks the public to trust it with an alarming new power: We’ll only use it in emergencies — but we get to decide what’s an emergency.

    The question of whether robots originally designed for defusing bombs should be converted into remote-controlled guns taps into several topics at the center of national debates: police using lethal force, the militarization of American life, and, not least of all, killer robots. Critics of the armed robo-cops note that the idea of Predator drones watching American racial justice protests may have seemed similarly far-fetched in the years before it started happening. “It’s not that we don’t want to debate how to use these tools safely,” said Liz O’Sullivan, CEO of the AI bias-auditing startup Parity and a member of the International Committee for Robot Arms Control. “It’s a question of, if we use them at all, what’s the impact going to be to democracy?”

    Some observers say the Oakland police’s robot plan contradicts itself. “It’s billed as a de-escalation facilitator, but they want to keep it open as a potential lethal weapon,” Jaime Omar Yassin, an independent journalist in Oakland who has documented the commission meetings, tweeted. As with any high-tech toy, the temptation to use advanced technology may surpass whatever institutional guardrails the police have in place. Matthew Guariglia, a policy analyst with the Electronic Frontier Foundation, said, “The ease of use of weapons as well as the dangerous legal precedents justifying the casual use of weapons makes police less likely to attempt to deescalate situations.”

    “It in many ways lowers the psychological hurdle for enacting that violence when it’s just a button on a remote control.”

    Tu hopes that by cracking down on shotgun robots before they come to be, Oakland and cities across the country can avoid debates about limits on police powers that only come after those powers are abused. She pointed to the Oakland police ban on using firehoses, a bitter reminder of abuses in American policing from the not-too-distant past. “We have an opportunity right now to prevent the lawsuit that will force the policy to be rewritten,” Tu said. “We have an opportunity to prevent the situation, the harm, the trauma that would occur in order for a lawsuit to need to be initiated.”

    Skeptics of robo-policing, including Tu, say these debates need to happen today to preempt the abuses of tomorrow, especially because of the literal and figurative distance robotic killing affords. Guariglia said, “It in many ways lowers the psychological hurdle for enacting that violence when it’s just a button on a remote control.”


    Oakland police are seeking to use live shotgun rounds in an attachment to the Remotec Andros Mark V-A1, a robot seen here being deployed by Dallas police during a standoff in Dallas, Texas, on June 13, 2015.

    Photo: Stewart F. House/Getty Images

    As the Oakland commission hearing went on, Daza-Quiroz invoked a controversial 2016 incident in Dallas. Police had strapped a C-4 bomb to a city-owned robot and used it to blow up a sniper who’d killed five police officers during a downtown rally. It is widely considered to be the country’s first instance of robotic police killing. While police generally heralded the ingenuity of the response, others criticized it as summary execution by robot. In an email to The Intercept, Daza-Quiroz said the department imagines weaponizing the PAN disruptor on the department’s $280,000 Northrop Grumman Remotec Andros Mark 5-A1 robot — the very same model used so controversially in Dallas.

    Daza-Quiroz noted that the department had never actually attempted to load a live round into the PAN gun for fear of breaking the $3,000 attachment. Yet when Tu asked whether the commission could add policy language that would prohibit arming the robot with lethal 12-gauge shotgun rounds, the department’s vision for robotic policing became clearer. “I don’t want to add a prohibited use,” Daza-Quiroz replied, “because what if we need it for some situation later on?”

    Daza-Quiroz explained that a hypothetical lethally armed robot would still be subject to the department’s use of force policy. Oakland Police Department Lt. Joseph Turner, stressing the need to keep extreme options on the table for extreme circumstances, urged the commission to allow such a killer robot in case of “exigencies.” He said, “I’m sure those officers that day in Texas did not anticipate that they were going to deliver a bomb using a robot.”

    The Oakland Police Department’s assurances that a shotgun-toting robot would be subject to departmental use-of-force policy did not seem to satisfy critics. Nor did the messenger have a record that inspires confidence. A 2013 East Bay Express report on Daza-Quiroz and another officer’s killing of an unarmed Oakland man found that he had been the subject of over 70 excessive force complaints. (One lawsuit prompted a six-figure settlement from the city and the jury ruled for the officers in another; the officers were never charged with a crime, and an arbitrator overturned the police chief’s decision to discipline the officers. Police spokesperson Candace Keas declined to comment on the dozens of excessive force complaints.)

    In the wake of the shooting, which prompted protests, the East Bay Times reported that Daza-Quiroz was asked by an internal investigator why he hadn’t used his Taser instead. He responded, “I wanted to get lethal.”

    “You have a hammer, everything looks like a nail.”

    The concern is, then, less that police would use a shotgun robot in “certain catastrophic, high-risk, high-threat, mass casualty events” — as the tentative policy language favored by the department currently reads — than that such a robot would be rolled out when the police simply want to get lethal. The vagaries of what precisely constitutes a “high-risk” event or who determines the meaning of “high threat” afford the police too much latitude, Tu told The Intercept in an interview. “It’s not a technical term, there’s no definition of it,” she said. “It doesn’t mean anything.” When asked by email for precise definitions of these terms, Daza-Quiroz said, “High risk, high threat incidents can vary in scope and nature and are among the more challenging aspects of law enforcement.”

    Critics say such ambiguous language means Oakland police would get to use a robot to kill someone whenever they decide they need a robot to kill someone. The policy has analogues in more routine police work: After shooting unarmed people, officers frequently offer post-hoc justifications that they felt their life was in danger.

    “Anytime anyone has a tool, they’re going to use it more,” said Tu. “You have a hammer, everything looks like a nail. And the more that police, in general, have military equipment, have more weapons, those weapons get used.”

    After weeks of wrangling, both the commission and the police department agreed on language that will prohibit any offensive use of robots against people, with an exception for delivering pepper spray. The agreement will go for review by the city council on October 18.

    Tu suspects the sudden compromise on the killer-robot policy is explained not by any change of heart, but rather by the simple fact that had the debate continued any longer, the department would have missed the deadline for submitting a policy — and risked losing the ability to legally operate its robots altogether.

    There is nothing preventing the Oakland Police Department from continuing to push for legally sanctioned killing using a PAN disruptor, as Daza-Quiroz said it will. No matter how the Oakland policy shakes out in the long term, the issue of robotic policing is likely to remain. “I’m sure Dallas [police] weren’t the only ones who had considered lethal force with their robot before doing so, and Oakland police aren’t the only ones who are thinking about it even more now,” Tu told The Intercept. “They’re just the only ones who thought about it out loud with a committee.”

    According to Daza-Quiroz, the department is still looking toward the future. “We will not be arming robots with lethal rounds anytime soon, and if and when that time comes, each event will be assessed prior to such deployment,” he said. When asked if there were other situations beyond a Dallas-style sniper in which police might wish to kill with a robot, Daza-Quiroz added: “Absolutely there are many more scenarios.”

    With thousands of Andros robots operated by hundreds of police departments across the country, those concerned by the prospect of shotgun robots on the streets of Oakland or elsewhere refer to what they say is a clear antecedent with other militarized hardware: mission creep.

    “We’re not really talking about a slippery slope. It’s more like a well-executed playbook to normalize militarization.”

    Once a technology is feasible and permitted, it tends to linger. Just as drones, mine-proof trucks, and Stingray devices drifted from Middle Eastern battlefields to American towns, critics of the PAN disruptor proposal say the Oakland police’s claims that lethal robots would only be used in one-in-a-million public emergencies isn’t borne out by history. The recent past is littered with instances of technologies originally intended for warfare mustered instead against, say, constitutionally protected speech, as happened frequently during the George Floyd protests.

    “As you do this work for a few years, you come to realize that we’re not really talking about a slippery slope. It’s more like a well-executed playbook to normalize militarization,” said O’Sullivan, of Parity. There’s no reason to think the PAN disruptor will be any different: “One can imagine applications of this particular tool that may seem reasonable, but with a very few modifications, or even just different kinds of ammunition, these tools can easily be weaponized against democratic dissent.”

    The post Oakland Cops Hope to Arm Robots With Lethal Shotguns appeared first on The Intercept.

  • When the Supreme Court overturned Roe v. Wade, the country’s top internet companies quickly responded with commitments to help employees in states that moved to ban abortion. In an implicit signal of support for abortion rights, the companies said they would help those employees seek abortions in states where the procedure remains legal.

    In the years leading up to the seismic reproductive rights decision, however, the tech giants sponsored a controversial group that’s worked tirelessly to put the Supreme Court under conservative control, setting the stage for Roe’s reversal.

    The Independent Women’s Forum traces its origins back to the 1991 fight to confirm the Supreme Court nomination of Clarence Thomas. Since then, the group has expanded into promoting a litany of perennial right-wing causes like climate denial, immigration alarmism, and deregulation, but a conservative-dominated Supreme Court remained a focus.

    Public relations plays a key role in its operation. With savvy self-branding as a pro-woman organization, the group fought for the appointment of conservative justices to the Supreme Court. The IWF couched support for Brett Kavanaugh as good feminism and any opposition to Amy Coney Barrett as sexism — despite well-founded concerns that their ascensions to the court would spell the end of Roe. The IWF wields a skillful mix of media placement, op-eds, television punditry, and other contributions to the conservative content ecosystem.

    The group takes advantage of quieter influence peddling as well. In 2020, IWF chief and Vicks VapoRub heiress Heather Higgins boasted to a closed audience of Virginia conservatives about how instrumental the group was in rallying congressional support for Kavanaugh’s nomination. Higgins told the group that the IWF circulated a confidential strategy memo on the Hill. “Most important,” Higgins said, “Susan Collins told me that without that memo, she would not see how to support him,” referring to the Republican senator from Maine.

    Independent Women’s Forum and its sister organization, Independent Women’s Voice, draw on donations from right-wing financial mainstays like the Koch brothers, but in recent years the groups have enjoyed financial support from Facebook’s parent company, Meta; Google; and Amazon. In 2017, Google sponsored an IWF gala at the “gold” donor level, according to brochures provided to The Intercept by True North Research, a progressive watchdog group. Other brochures show that Meta (which at the time was still using the name Facebook) sponsored IWF galas in 2018, alongside Google, and again in 2019. Honorees at IWF events have included notable anti-abortion figures like Rep. Lynne Cheney, R-Wy.; top Trump administration official Kellyanne Conway; and Vice President Mike Pence.

    Corporate disclosures from Amazon show that the company donated undisclosed sums to the IWF in 2018, 2019, and 2020.

    Amazon, Google, Meta, and the IWF did not respond to requests for comment.

    True North founder Lisa Graves characterized the IWF’s efforts as an attempt to launder conservative ideology. “They act as a distaff,” she said in an interview, “in essence providing a woman’s face for the right wing’s critique or attack on progressives and its advance of this extreme and regressive, repressive agenda.”

    Patrice Onwuka, director of the Independent Women’s Forum’s Center for Economic Opportunity, speaks during a town hall event hosted by House Republicans on March 1, 2022 in Washington, D.C.

    Photo: Samuel Corum/Getty Images


    Despite the public perception of Silicon Valley’s alignment with progressive values and liberal causes, tech companies, particularly those fearing state regulation, have long funneled money to right-wing groups like the IWF. At the same time, the IWF routinely pushes policy positions that are highly favorable to its corporate donors.

    The IWF has consistently espoused tech industry-friendly positions on labor, antitrust, and other issues, without disclosing its donors’ interests. Take, for example, an April IWF blog post that warned that antitrust enforcement against Big Tech would prove disastrous. “Tech innovation has been nothing short of miraculous over the past few decades,” wrote Patrice Onwuka, director of IWF’s Center for Economic Opportunity and its go-to defender of powerful tech firms.

    Few issues in tech have galvanized the IWF and Onwuka like the bipartisan American Innovation and Choice Online Act, which would block tech companies from leveraging their enormous reach to favor their own services over competitors. In a December 2021 piece titled “Amazon Prime may not be around to save the day next Christmas,” Onwuka claimed, “Senator Amy Klobuchar and others are on a path to end services like Prime’s fast and free shipping and other services that we depend upon.” Onwuka then linked to a blog post by the Amazon-funded Chamber of Progress that claimed, dubiously, that the law would “ban Amazon Prime.”

    In June, Onwuka wrote a jeremiad against congressional antitrust efforts: “The conveniences that make life and work easier and faster and save consumers money may disappear.” Later that day, Onwuka appeared on Fox Business, again protesting antitrust enforcement against the tech industry. “I’m more worried about the impact on small business owners and on women and families that rely on some of the benefits that some of these big four tech companies provide,” she said.

    While shielding Big Tech from antitrust scrutiny has proven a priority for the IWF, the group also stands up directly for its benefactors. In 2019, Onwuka wrote an entire post dedicated to sticking up for Meta CEO Mark Zuckerberg after Politico reported that he had attended dinners with notable conservative commentators and lawmakers. “Zuckerberg is a private citizen who can eat dinner with whomever he wants,” Onwuka wrote. “His dinner has a clear business purpose and that’s part of doing business.”

    “Institutionally they have no position on abortion, that’s their stated position. But organizationally, they have backed the most aggressive anti-choice slate of judges we’ve ever seen.”

    The cordial treatment of industry giants is of course a linchpin of conservatism, and the IWF would almost certainly be warning that antitrust will bring us back to the Bronze Age even without Google sponsoring its gala dinners. But fueling the right-wing punditry mill is a large, ever-expanding facet of Big Tech’s political strategy.

    While there’s no evidence that Zuckerberg or Google CEO Sundar Pichai have any personal opposition to abortion access, their companies no doubt benefit from their support of a broad, thriving conservative discourse ecosystem in which any government regulation is anathema. For tech company leadership, the reality that this ecosystem pushes not just Facebook-friendly laissez-faire economics, but also climate denial and abortion bans is considered a perhaps unfortunate but worthwhile byproduct.

    Silicon Valley’s patronage of right-wing think tanks and campaigns is an arrangement in which there is ample plausible deniability to go around. When The Guardian reported in 2019 that Google was donating to some of the nation’s most notorious climate-denial organizations, a company spokesperson retorted, “We’re hardly alone among companies that contribute to organizations while strongly disagreeing with them on climate policy.”

    The multitude of topics on which the IWF engages, and its careful avoidance of publicly opposing abortion access, have helped it avoid a reputation as an anti-abortion group. “Institutionally they have no position on abortion, that’s their stated position,” explained Graves, of True North. “But organizationally, they have backed the most aggressive anti-choice slate of judges we’ve ever seen.”

    The post How Amazon, Google, and Facebook Helped Fund the Campaign to Overturn Roe appeared first on The Intercept.

  • Elon Musk is once again suggesting his business interests can solve a high-profile crisis: This time, the SpaceX CEO says Starlink satellite internet can alleviate Iran’s digital crackdown against ongoing anti-government protests. Iranian dissidents and their supporters around the world cheered Musk’s announcement that Starlink is now theoretically available in Iran, but experts say the plan is far from a censorship panacea.

    Musk’s latest headline-riding gambit came after Iran responded to the recent rash of nationwide protests with large-scale disruption of the country’s internet access. On September 23, Secretary of State Antony Blinken announced the U.S. was easing restrictions on technology exports to help counter Iranian state censorship efforts.

    Musk, ready to pounce, quickly replied: “Activating Starlink …”

    Predictably, Musk’s dramatic tweet set off a frenzy. Within a day, venture capitalist and longtime Musk-booster Shervin Pishevar was already suggesting Musk had earned the Nobel Peace Prize. Just the thought of Starlink “activating” an uncensored internet for millions during a period of Middle Eastern political turmoil was an instant public relations coup for Musk.

    In Iran, though, the notion of a benevolent American billionaire beaming freedom to the country by satellite runs up against the demands of reality, specifically physics. Anyone who wants to use Starlink, the satellite internet service provider operated by Musk’s rocketry concern, SpaceX, needs a special dish to send and receive internet data.

    “I don’t think it’s much of a practical solution because of the problem of smuggling in the ground terminals.”

    While it may be possible to smuggle Starlink hardware into Iran, getting a meaningful quantity of satellite dishes into the country would be an immense undertaking, especially now that the Iranian government has been tipped off to the plan on Twitter.

    Todd Humphreys, an engineering professor at the University of Texas at Austin whose research focuses on satellite communication, said, “I don’t think it’s much of a practical solution because of the problem of smuggling in the ground terminals.”

    The idea is not without precedent. In Ukraine, after the Russian invasion disrupted internet access, the deployment of Musk’s satellite dishes earned him international press adulation and a bevy of lucrative government contracts. There, though, Starlink was welcomed by a profoundly pro-American government desperate for technological aid from the West, and U.S. government agencies were able to ship the requisite hardware with the full logistical cooperation of the Ukrainian government.

    This is not, to say the very least, the case in Iran, where the government is unlikely to condone the import of a technology explicitly meant to undermine its own power. While Musk’s claim that Starlink’s orbiting satellites are activated over Iran may be true, the notion that censorship-free internet connectivity is something that can be flipped on like a light switch is certainly not. Without dishes on the ground to communicate with the satellites, it’s a meaningless step: technologically tantamount to giving a speech to an empty room.

    Humphreys, who has previously done consulting work for Starlink, explained that because of the specialized nature of Starlink hardware, it’s doubtful Iranians could craft a DIY alternative. “It’s not like you can build a homebrew receiver,” he said. “It’s a very complicated signal structure with a very wideband signal. Even a research organization would have a hard time.”

    Musk is famously uninterested in the constraints imposed by reality, but he seems to acknowledge the problem to some degree. In a September 25 tweet, Carnegie Endowment for International Peace fellow Karim Sadjadpour wrote, “I spoke w/ @elonmusk about Starlink in Iran, he gave me permission to share this: ‘Starlink is now activated in Iran. It requires the use of terminals in-country, which I suspect the [Iranian] government will not support, but if anyone can get terminals into Iran, they will work.’”

    Implausibility hasn’t stopped Musk’s fans, either. One tweet from a senior fellow at the Atlantic Council purporting to document a Starlink dish already successfully secreted into Iran turned out to be a photo from 2020, belonging to an Idaho man who happened to have a Persian rug.

    The fandom — and the starpower it’s attached to — might be the point here. Given the obstacles, Musk’s Starlink aspirations may be best understood in the context of his past spectacular, spectacularly unfulfilled claims, rather than something akin to Starlink’s rapid adoption in Ukraine. Musk’s penchant for internet virality has become a key component of his business operations. He has repeatedly made bold pronouncements, typically on Twitter, that a technology he happens to manufacture is the key to cracking some global crisis. Whether it’s Thai children stuck in a waterlogged cave, the Covid-19 pandemic, or faltering American transit infrastructure, Musk has repeatedly offered technological solutions that are either plainly implausible, botched in execution, or a mixture of both.

    It’s not just the lack of dishes in Iranian homes. Musk’s plan is further complicated by Starlink’s reliance on ground stations: communications facilities that allow the SpaceX satellites to plug into earthbound internet infrastructure from orbit. While upgraded Starlink satellites may no longer need these ground stations in the near future, the network of today still largely requires them to service a country as vast as Iran, said Humphreys, the University of Texas professor. Again, Iran is unlikely to approve the construction within its borders of satellite installations owned by an American defense contractor.

    Humphreys suggested that ground stations built in a neighboring country could provide some level of connection, albeit at reduced speed, but that still doesn’t solve the problem of every Iranian who wants to get online needing a $550 kit with “Starlink” emblazoned on the box. While Humphreys added that he was hopeful that a slow trickle of Starlink terminals could aid Iranian dissidents over time, he said, “I don’t think in the short term this will have an impact on the unrest in Iran.”

    Alp Toker, director of the internet monitoring and censorship watchdog group NetBlocks, noted that many Iranians already watch banned satellite television channels through contraband dishes, meaning the smuggling of Starlink dishes is doable in theory. While he praised the idea of bringing Starlink to Iran as “credible and worthwhile” in the long term, the difficulty in sourcing Starlink’s specialized equipment means that accessing Musk’s satellites remains “a solution for the few,” not a counter to population-scale censorship.

    While future versions of the Starlink system might be able to communicate with more accessible devices like handheld phones, Toker said, “As far as we know this isn’t possible with the current generation of kit, and it won’t be until then that Starlink or similar platforms could simply ‘switch on’ internet in a country in the sense that most people understand.”

    Even with Iran’s culture of bootleg satellite TV, these experts warned that a Starlink connection could endanger Iranians. Rose Croshier, a policy fellow at the Center for Global Development, noted the risks: “A word of caution: TV dishes are passive — they don’t transmit — so a Starlink terminal (that both receives and transmits data) in a crowd of illegal satellite dishes would still be very findable by Iranian authorities.”

    “I don’t think in the short term this will have an impact on the unrest in Iran.”

    The plan faces further terrestrial hurdles. The complex two-way nature of satellite connections is part of why they’re subject to international regulation, most notably through the International Telecommunication Union, of which both the United States and Iran are members. Croshier pointed to a 2021 paper on satellite internet usage by the Asian Development Bank that explained how “US-based entities such as Starlink … require regulatory approval from the FCC as well the ITU” and that “service provision to customers will require regulatory approval in every country of operation.” Mahsa Alimardani, a senior Middle East researcher at Article19, a free expression advocacy group, tweeted that even if Starlink could beam internet to Iranians in a meaningful way, the company would face consequences from the International Telecommunication Union if it did so without Iranian approval — approval it is unlikely to ever get.

    Then there are sanctions against Iran. Blinken, the secretary of state, announced a relaxation of tech exports, but the restrictions on trade with Iran remain a serious obstacle. “There are a host of human rights related sanctions on Iranian actors in the IT space under a sanctions authority called GHRAVITY that complicate any of this beyond the questions raised of whether Iran would allow Starlink terminals in country,” explained Brian O’Toole, a senior fellow at the Atlantic Council and expert on global sanctions. The relaxed rules would still require a special license for Starlink use in Iran, O’Toole said, which he doubts would be granted: “Much of this Starlink stuff doesn’t appear terribly likely to do much, from my point of view.”

    Starlink — or a competitor — may one day bring unfettered internet uplinks to Iran and other countries where online dissent is choked off, but for today’s Iranian protesters, the practical obstacles far outweigh the PR punch of a two-word tweet.

    The post No, Elon Musk’s Starlink Probably Won’t Fix Iranian Internet Censorship appeared first on The Intercept.

  • Facebook and Instagram’s speech policies harmed fundamental human rights of Palestinian users during a conflagration that saw heavy Israeli attacks on the Gaza Strip last May, according to a study commissioned by the social media sites’ parent company Meta.

    “Meta’s actions in May 2021 appear to have had an adverse human rights impact … on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred,” says the long-awaited report, which was obtained by The Intercept in advance of its publication.

    Commissioned by Meta last year and conducted by the independent consultancy Business for Social Responsibility, or BSR, the report focuses on the company’s censorship practices and allegations of bias during bouts of violence against Palestinian people by Israeli forces last spring.

    “Meta’s actions in May 2021 appear to have had an adverse human rights impact.”

    Following protests over the forcible eviction of Palestinian families from the Sheikh Jarrah neighborhood in occupied East Jerusalem, Israeli police cracked down on protesters in Israel and the West Bank, and launched military air strikes against Gaza that injured thousands of Palestinians, killing 256, including 66 children, according to the United Nations. Many Palestinians attempting to document and protest the violence using Facebook and Instagram found their posts spontaneously disappeared without recourse, a phenomenon the BSR inquiry attempts to explain.

    Last month, over a dozen civil society and human rights groups wrote an open letter protesting Meta’s delay in releasing the report, which the company had originally pledged to release in the “first quarter” of the year.

    While BSR credits Meta for taking steps to improve its policies, it further blames “a lack of oversight at Meta that allowed content policy errors with significant consequences to occur.”

    Though BSR is clear in stating that Meta harms Palestinian rights with the censorship apparatus it alone has constructed, the report absolves Meta of “intentional bias.” Rather, BSR points to what it calls “unintentional bias,” instances “where Meta policy and practice, combined with broader external dynamics, does lead to different human rights impacts on Palestinian and Arabic speaking users” — a nod to the fact that these systemic flaws are by no means limited to the events of May 2021.

    Meta responded to the BSR report in a document to be circulated along with the findings. (Meta did not respond to The Intercept’s request for comment about the report by publication time.) In a footnote in the response, which was also obtained by The Intercept, the company wrote, “Meta’s publication of this response should not be construed as an admission, agreement with, or acceptance of any of the findings, conclusions, opinions or viewpoints identified by BSR, nor should the implementation of any suggested reforms be taken as admission of wrongdoing.”

    According to the findings of BSR’s report, Meta deleted Arabic content relating to the violence at a far greater rate than Hebrew-language posts, confirming long-running complaints of disparate speech enforcement in the Palestinian-Israeli conflict. The disparity, the report found, was perpetuated among posts reviewed both by human employees and automated software.

    “The data reviewed indicated that Arabic content had greater over-enforcement (e.g., erroneously removing Palestinian voice) on a per user basis,” the report says. “Data reviewed by BSR also showed that proactive detection rates of potentially violating Arabic content were significantly higher than proactive detection rates of potentially violating Hebrew content.”

    BSR attributed the vastly differing treatment of Palestinian and Israeli posts to the same systemic problems rights groups, whistleblowers, and researchers have all blamed for the company’s past humanitarian failures: a dismal lack of expertise. Meta, a company with over $24 billion in cash reserves, lacks staff who understand other cultures, languages, and histories, and is using faulty algorithmic technology to govern speech around the world, the BSR report concluded.

    Not only do Palestinian users face an algorithmic screening that Israeli users do not — an “Arabic hostile speech classifier” that uses machine learning to flag potential policy violations and has no Hebrew equivalent — but the report notes that the Arabic system also doesn’t work well: “Arabic classifiers are likely less accurate for Palestinian Arabic than other dialects, both because the dialect is less common, and because the training data — which is based on the assessments of human reviewers — likely reproduces the errors of human reviewers due to lack of linguistic and cultural competence.”

    Human employees appear to have exacerbated the lopsided effects of Meta’s speech-policing algorithms. “Potentially violating Arabic content may not have been routed to content reviewers who speak or understand the specific dialect of the content,” the report says. It also notes that Meta didn’t have enough Arabic and Hebrew-speaking staff on hand to manage the spike in posts.

    These faults had cascading speech-stifling effects, the report continues. “Based on BSR’s review of tickets and input from internal stakeholders, a key over-enforcement issue in May 2021 occurred when users accumulated ‘false’ strikes that impacted visibility and engagement after posts were erroneously removed for violating content policies.” In other words, wrongful censorship begat further wrongful censorship, leaving the affected wondering why no one could see their posts. “The human rights impacts … of these errors were more severe given a context where rights such as freedom of expression, freedom of association, and safety were of heightened significance, especially for activists and journalists,” the report says.

    Beyond Meta’s failures in triaging posts about Sheikh Jarrah, BSR also points to the company’s “Dangerous Individuals and Organizations” policy — referred to as “DOI” in the report — a roster of thousands of people and groups that Meta’s billions of users cannot “praise,” “support,” or “represent.” The full list, obtained and published by The Intercept last year, showed that the policy focuses mostly on Muslim and Middle Eastern entities, which critics described as a recipe for glaring ethnic and religious bias.

    Meta claims that it’s legally compelled to censor mention of groups designated by the U.S. government, but legal scholars have disputed the company’s interpretation of federal anti-terrorism laws. Following The Intercept’s report on the list, the Brennan Center for Justice called the company’s claims of legal obligation a “fiction.”

    “Meta’s DOI policy and the list are more likely to impact Palestinian and Arabic-speaking users, both based upon Meta’s interpretation of legal obligations, and in error.”

    BSR agrees the policy is systemically biased: “Legal designations of terrorist organizations around the world have a disproportionate focus on individuals and organizations that have identified as Muslim, and thus Meta’s DOI policy and the list are more likely to impact Palestinian and Arabic-speaking users, both based upon Meta’s interpretation of legal obligations, and in error.”

    Palestinians are particularly vulnerable to the effects of the blacklist, according to the report: “Palestinians are more likely to violate Meta’s DOI policy because of the presence of Hamas as a governing entity in Gaza and political candidates affiliated with designated organizations. DOI violations also come with particularly steep penalties, which means Palestinians are more likely to face steeper consequences for both correct and incorrect enforcement of policy.”

    The document concludes with a list of 21 nonbinding policy recommendations, including increasing staffing capacity to properly understand and process Arabic posts, implementing a Hebrew-compatible algorithm, increasing company oversight of outsourced moderators, and both reforming and adding transparency around the “Dangerous Individuals and Organizations” policy.

    In its response to the report, Meta vaguely commits to implement or consider implementing aspects of 20 of the 21 recommendations. The exception is a call to “Fund public research into the optimal relationship between legally required counterterrorism obligations and the policies and practices of social media platforms,” which the company says it will not pursue because it does not wish to provide legal guidance for other companies. Rather, Meta suggests concerned experts reach out directly to the federal government.

    The post Facebook Report Concludes Company Censorship Violated Palestinian Human Rights appeared first on The Intercept.

  • Detroit’s city council will soon vote on whether to spend millions in federal cash meant to ease the economic pains of the coronavirus pandemic on ShotSpotter, a controversial surveillance technology critics say is invasive, discriminatory, and fundamentally broken.

    ShotSpotter purports to do one thing very well: telling cops a gun has been fired as soon as the trigger is pulled. Using a network of microphones hitched to telephone poles, rooftops, and other urban vantage points, ShotSpotter is essentially an Alexa that listens for a bang rather than voice commands. Once the company’s black-box algorithm thinks it has identified a gunshot, it sends a recording of the sound — and the moments preceding and following it — to a team of human analysts. If these ShotSpotter staffers agree the loud noise in question is a gunshot, they relay an alert and location coordinates to police for investigation.

    At least, that’s the pitch. Despite ShotSpotter’s corporate claims of 97 percent accuracy, the technology has been derided as dangerously ineffective — a techno-solutionist approach to public safety. Critics contend that the system draws police scrutiny to already over-policed areas using a proprietary, secret sound-detection algorithm. The technology, according to reports, regularly mistakes city noises, including fireworks and cars, for gunshots; ignores actual gunshots; provides misleading evidence to prosecutors; and is subject to biases because ShotSpotter employees at times manually alter the algorithm’s findings.

    Detroit already has a $1.5 million contract with ShotSpotter, a California company, to deploy the microphones in select areas, but city officials, including Mayor Mike Duggan, insist that substantially expanding the audio surveillance network will deter gun slayings. The plan is set to go to a vote before the full city council on September 20, and local organizers are opposing the use of money meant for economic relief to expand city security contracts and beef up police surveillance.

    “The Biden administration passed the American Rescue Plan and put forth this Covid relief money to inject money into local economies and to get people back on their feet after the pandemic,” said Branden Snyder, co-director of Detroit Action, a community advocacy group that opposes the vote. “And this is doing the opposite of that. What it does is fatten the wallets of ShotSpotter.”

    Cities across the country are tapping federal recovery money to add or broaden ShotSpotter systems, NBC News reported earlier this year. Syracuse, New York, for instance, spent $171,000 on ShotSpotter, and Albuquerque, New Mexico, paid the company $3 million from the recovery fund. Should the vote pass, Detroit would be the biggest of these customers using Covid relief funds, both in terms of population and the proposed price tag for the surveillance expansion.

    ShotSpotter spokesperson Izzy Olive pointed to remarks by President Joe Biden encouraging local governments to use flexible relief funds to beef up police departments. “Some cities have chosen to use a portion of these funds for ShotSpotter’s technology,” she said. The company claims that more than 125 cities and police departments use the system and that it guarantees 90 percent efficacy within some basic parameters, according to self-reported data from police compiled by the company. Asked about Detroit’s system, Olive said the city owns the data collected by ShotSpotter. She did not comment on whether the company restricts what cities can say about it, saying only that “the contract itself is not confidential.”

    ShotSpotter’s opponents in Detroit agreed that gun violence is a serious problem but said Covid-19 relief money would be far better spent on addressing the social ills that form the basis of crime.

    “What it does is fatten the wallets of ShotSpotter.”

    “If people had jobs, money, after-school programs, housing, the things that they need, that’s going to reduce gun violence,” said Alyx Goodwin, a campaign organizer with Action Center on Race and the Economy.

    Snyder pointed to the fundamental irony of diverting public money billed as a form of relief for the pandemic’s downtrodden to surveil those very same people.

    “The reason why we’re in these policing fights, as an economic justice organization, is that our members are folks who are looking for housing, rental support, looking for job access,” Snyder said. “And what we’re given instead is surveillance technology.”

    Duggan’s case for expanding the ShotSpotter contract kicked into high gear in late August when, following a mass shooting, he claimed that police could have thwarted the killings had a broader surveillance net been in place. “They very likely could have prevented two and probably three tragedies had they had an immediate notice,” Duggan said.

    The mayor’s claims echo those of the company itself, which positions the product as an antidote to rising national gun violence rates, particularly since the onset of the pandemic. ShotSpotter explicitly urges cities to tap funds from the American Rescue Plan Act, intended to salve financial hardship caused by the pandemic, to buy new surveillance microphones.

    “As the U.S. recovers from COVID-19, gun crime is surging to historically high levels,” reads a company post titled “The American Rescue Act Can Help Your Agency Fund Crime Reducing Technology.” The post refers interested municipalities to a company portal, adorned with an image of a giant pile of cash, that lists resources to help navigate the procurement process, including a “FREE funding consultation with an expert who knows the process.”

    ShotSpotter even published a video webinar guiding police through the process of obtaining Covid money to buy the surveillance tech. In the video, the company’s Director of Public Safety Solutions Ron Teachman offers to personally connect interested parties with ShotSpotter’s go-to expert on federal funding, consultant and former congressional aide Amanda Wood.

    Teachman and Wood say in the video that ShotSpotter will furnish eager police with pre-drafted language to help pitch relevant elected officials. “I know you all understand the value of ShotSpotter and that’s why you’re here, but if there are other folks in your community who don’t understand it, we’re happy to sort of spoon-feed them that information,” Wood says. “We have broad language, and we can really personalize it for whatever you need.” (Olive, the ShotSpotter spokesperson, said the company was sharing publicly available information and did not comment on what efforts the company made to guide Detroit through the process of applying for funds.) 

    Wood also suggests that police enlist local groups, from grassroots organizations to medical administrators, to help with the pitch. “Those hospital CEOs are pretty well connected. So let’s use them, let’s leverage their relationships so that they’re echoing the same sort of messaging that you are … put a little pressure on those electeds and administrators.”

    Overall, the use of federal Covid money to buy microphones is described as a cakewalk. In the webinar, Teachman says, “This is as easy a federal funding source as I’ve seen.”

    Despite the objections from community groups, Biden himself outlined uses like this for Covid relief funds. “Mayors will also be able to buy crime-fighting technologies, like gunshot detection systems,” Biden said in a June 2021 address on gun violence.

    Billions in Covid aid have been spent on funding police departments, a flood of money that’s proven a boon to surveillance contractors, said Matthew Guariglia, a policy analyst with the Electronic Frontier Foundation. “For a long time already, money that has been intended for public well-being has been specifically funneled into police departments,” said Guariglia, “and specifically for surveillance equipment that maybe they didn’t have the money to fund beforehand.”

    ShotSpotter equipment overlooks the intersection of South Stony Island Avenue and East 63rd Street in Chicago, on Aug. 10, 2021.

    Photo: Charles Rex Arbogast/AP

    Detroit’s city government isn’t shying away from the notion that millions in economic stimulus money might go to ShotSpotter. The public safety section of a city website outlining how Detroit plans to use hundreds of millions in federal Covid aid mentions ShotSpotter by name, including a city-produced infomercial touting the technology’s benefits. While the clip, echoing the company’s own claims, assures Detroit residents that ShotSpotter doesn’t listen to conversations, there have been at least two documented instances of prosecutors attempting to use ambient chatter caught on ShotSpotter’s hot mics.

    Critics of Detroit’s plan said ShotSpotter doesn’t stop gun violence and exacerbates over-policing of the same struggling neighborhoods the Covid relief money was meant to help. A study published last year by Northwestern University’s MacArthur Justice Center surveyed 21 months of Chicago data on ShotSpotter-based police deployments and “found that 89% turned up no gun-related crime and 86% led to no report of any crime at all. In less than two years, there were more than 40,000 dead-end ShotSpotter deployments.”

    City government data from Chicago and other locations using ShotSpotter revealed the same pattern over and over, according to the MacArthur Justice Center. In Atlanta, only 3 percent of ShotSpotter alerts resulted in police finding shell casings. In Dayton, Ohio, another ShotSpotter customer, “only 5% of ShotSpotter alerts led police to report incidents of any crime.” A series of academic studies into ShotSpotter’s efficacy reached the same conclusion: Loud noise alerts don’t result in fewer gun killings.

    “People don’t want gunshots in their neighborhood, period. And a microphone does not stop the gunshot.”

    Not only is ShotSpotter a waste of money, critics say, but the system menaces the very neighborhoods it claims to protect by directing armed, keyed-up police onto city blocks with the expectation of a violent confrontation. These heightened police responses occur along stark racial lines. “In Chicago, ShotSpotter is only deployed in the police districts with the highest proportion of Black and Latinx residents,” the MacArthur Justice Center found, pointing to a Chicago inspector general’s report that found ShotSpotter alerts resulted in more than 2,400 stop-and-frisks. A 2021 investigation by Motherboard found that “ShotSpotter frequently generates false alerts—and it’s deployed almost exclusively in non-white neighborhoods.”

    The concern is not hypothetical: A March 2021 ShotSpotter-triggered Chicago deployment resulted in the fatal police shooting of an unarmed 13-year-old boy, Adam Toledo. “If you have police showing up to the site of every loud noise, guns drawn, expecting a firefight, that puts a lot of pedestrians, a lot of people who lives in neighborhoods where there are loud noises, in danger,” Guariglia said.

    ShotSpotter’s claims of turn-key functionality and deterrent effect are tempting for mayors like Detroit’s Duggan, according to Snyder of Detroit Action. The politicians are eager to project a “tough on crime” image as gun violence has spiked during the pandemic. Yet Snyder said that ShotSpotter’s limited trial in Detroit has so far proven ineffective. “It actually hasn’t led to any sort of like real, significant arrests,” he said. “It actually hasn’t produced that type of success that I think many elected officials as well as the company itself are spouting.”

    An infographic created by the city claims that “ShotSpotter is saving lives!” and cites a downward trend in fatal shootings in neighborhoods where the equipment is installed. The infographic, though, provides no evidence that the technology itself was responsible for this decline and provides only one example of a ShotSpotter alert leading to a gun-related conviction in the city.

    “ShotSpotter doesn’t stop gunshots from happening,” said Goodwin of the Action Center on Race and the Economy. “People don’t want gunshots in their neighborhood, period. And a microphone does not stop the gunshot.”

    The post Detroit Cops Want $7 Million in Covid Relief Money for Surveillance Microphones appeared first on The Intercept.

  • A Twitter spokesperson said the social media giant deleted a Carnegie Mellon University professor’s controversial tweet condemning Queen Elizabeth II on the grounds that it was “abusive.” The company defines abusive behavior as “an attempt to harass, intimidate, or silence someone else’s voice” — in this case the voice of the world’s longest-reigning monarch.

    It was a banner day for posting. As soon as news of the queen’s impending death hit Twitter, the platform was quickly dominated by a global outpouring of both grief and glee, a heated mixture of paeans to the queen’s 70-year tenure and angry denunciations of the British monarchy’s legacy of colonial violence and exploitation. Among the latter was Carnegie Mellon’s Uju Anya, an associate professor of second language acquisition. “I heard the chief monarch of a thieving raping genocidal empire is finally dying. May her pain be excruciating,” Anya tweeted.

    “We took enforcement action on the account you referenced for violating the Twitter Rules on abusive behaviour,” Twitter spokesperson Lauren Myers-Cavanagh, using the British spelling of “behavior,” wrote to The Intercept in response to a query.

    “It does highlight the power imbalances that can often exist in the way these platforms treat powerful figures.”

    The removal of the post illustrates how criticisms of powerful people, however distasteful, can be disappeared from social media sites for murky reasons. “It does highlight the power imbalances that can often exist in the way these platforms treat powerful figures,” Evelyn Douek, an assistant professor at Stanford Law School and scholar of content moderation policies, told The Intercept. “Often people in power get allowances because it’s in the public interest but people don’t for criticizing them, even though that’s often clearly in the public interest too.”

    Anya’s tweet immediately attracted widespread attention and criticism, not least because it was reproachfully quote-tweeted by Amazon founder Jeff Bezos. “This is someone supposedly working to make the world better?” the second-richest American tweeted. “I don’t think so. Wow.” Twitter users were quick to point out that Anya had recently tweeted approvingly of Chris Smalls, a rising labor leader instrumental in efforts to unionize Amazon warehouses.

    After the criticisms came pouring in, Anya, who was born in Nigeria, tweeted in defense of her remarks: “If anyone expects me to express anything but disdain for the monarch who supervised a government that sponsored the genocide that massacred and displaced half my family and the consequences of which those alive today are still trying to overcome, you can keep wishing upon a star.” (Anya did not immediately respond to The Intercept’s request for comment.)

    While the tweet was no doubt offensive to many fans of the crown in wishing suffering upon Elizabeth, the specific rule cited by Myers-Cavanagh justifying the censorship isn’t typically deployed in defense of royalty. The company claims that its ban on abusive remarks is designed to protect speech rather than delete it.

    “In order to facilitate healthy dialogue on the platform, and empower individuals to express diverse opinions and beliefs, we prohibit behavior that harasses or intimidates, or is otherwise intended to shame or degrade others,” Twitter’s online help center says. “We consider abusive behavior an attempt to harass, intimidate, or silence someone else’s voice.”

    Such speech is deleted to shield vulnerable voices from suppression. “On Twitter, you should feel safe expressing your unique point of view,” the policy reads. “We believe in freedom of expression and open dialogue, but that means little as an underlying philosophy if voices are silenced because people are afraid to speak up.”

    It is not clear how the queen of England could ever be meaningfully “silenced” or “afraid to speak up” because of an academic’s tweet. Twitter did not respond in time for publication when asked whether the “abusive behavior” policy applies to the deceased.

    The policy lays out a variety of “abusive” speech types, including “violent threats”; “content that wishes, hopes, promotes, incites, or expresses a desire for death, serious bodily harm or serious disease”; and “unwanted sexual advances.” It’s unclear which of these categories Anya’s tweet, published when its subject was on the verge of death and most likely not checking Twitter, would fall under.

    The examples of abusive behavior that Twitter provides are of the more clear-cut “I hope you get cancer and die” variety, rather than an edge case hoping that someone already near death will experience greater suffering. The policy does note that tweets “regarding certain individuals credibly accused of severe violence” may in some cases be deleted while the account is spared from suspension, as happened with Anya, an enforcement that would seem to implicitly agree with the professor’s underlying argument against the crown.

    Douek, the Stanford professor, said that it seemed like an odd enforcement of the rule given the vast gulf in power between a professor and a monarch. “Unclear to me how the queen is going to be intimidated by that tweet,” she told The Intercept. “Surprised they stood by it, actually.”

    The post Twitter Censored Professor’s Post for “Abusive Behaviour” Toward the Queen appeared first on The Intercept.

    IN SEPTEMBER, after a series of Israeli airstrikes against the densely populated Gaza Strip, Palestinians protested the abrupt deletion of Facebook and Instagram posts documenting the death and destruction caused by the operation. It was not the first time Palestinian users of the two platforms, giants owned by parent company Meta, had complained about posts being unduly removed. It has become a pattern: after Palestinians post graphic images and videos of Israeli attacks, Meta swiftly removes the content, providing only an oblique reference to a violation of the company’s “Community Standards” or, in many cases, no explanation at all.

    Not all of the billions of users on Meta’s platforms, however, run into these issues when documenting the bombing of their neighborhoods.

    Previously unreported memos obtained by The Intercept show that in 2022, Meta repeatedly instructed its moderators to bypass standard procedure and treat various graphic imagery from the Russia-Ukraine war with a lighter touch. Like other American internet companies, Meta responded to the invasion by immediately enacting a series of new policies designed to broaden and protect the online speech of Ukrainians, allowing graphic images of civilians killed by the Russian military to remain up on Instagram and Facebook.

    No similar flexibility was extended to Palestinian victims of Israeli state violence, and the memos indicate no such measures for any other suffering population.

    “It is deliberate censorship of human rights documentation and of the Palestinian narrative.”

    “It is deliberate censorship of human rights documentation and of the Palestinian narrative,” said Mona Shtaya, an adviser to 7amleh, the Arab Center for the Advancement of Social Media, a civil society group that works with Meta. During the recent Israeli attacks on Gaza, between August 5 and 15, 7amleh recorded nearly 90 content deletions and account suspensions related to the bombings on Meta’s platforms, and the group says it continues to receive reports of censored content.

    Marwa Fatafta, Access Now’s policy manager for the Middle East and North Africa at the international digital rights group, said: “Their censorship works almost like clockwork: whenever violence on the ground escalates, removals of Palestinian content go up.”

    Instances of censored Palestinian content reviewed by The Intercept include the August 5 removal of a post mourning the death of Alaa Qaddoum, a 5-year-old Palestinian girl killed in an Israeli missile strike, and an Instagram video showing Gaza residents pulling bodies from the rubble. Both posts were removed with a notice that the images “go against our guidelines on violence or dangerous organizations,” a reference to Meta’s policy against violent content or information related to its vast list of banned people and groups.

    Meta spokesperson Erica Sackin told The Intercept that the two posts were removed under the company’s Dangerous Individuals and Organizations policy, which censors content promoting groups that federal governments have designated as terrorist. Sackin did not respond to a follow-up question about how images of a 5-year-old girl and a man buried in rubble promote terrorism.

    Palestinians in Gaza who post about Israeli attacks said their posts carry no political messaging and indicate no affiliation with terrorist groups. “I’m just posting pure news about what’s happening,” said Issam Adwan, a freelance journalist living in Gaza. “I’m not even using particularly Palestinian-slanted language: I describe Israeli planes as Israeli planes; I’m not saying I’m a Hamas supporter or anything like that.”

    HUMAN RIGHTS ADVOCATES told The Intercept that the exemptions granted for the Russia-Ukraine war are the latest example of Meta’s double standard between Western markets and the rest of the world, evidence of the special treatment Meta has given the Ukrainian cause since the start of the war, a tilt visible in coverage of the war more broadly.

    Most users of Meta-owned social platforms live outside the U.S. Yet, critics say, the company’s censorship policies affect billions around the world and reflect a methodical alignment with U.S. foreign policy interests. Advocates stress the political nature of these moderation decisions. “Meta was able to take very strong measures to protect Ukrainians amid the Russian invasion because it had the political will,” Shtaya said, “but we Palestinians have seen none of those measures.”

    By following U.S. government directives, including counterterrorism blocklists, Meta can end up censoring entirely nonviolent statements of support or sympathy for Palestinians, according to a 2021 statement published by Human Rights Watch. “This is a pretty clear example of where this is happening,” said Omar Shakir, Human Rights Watch’s director for Israel and Palestine, referring to the August removals. While Human Rights Watch’s report on the recent censorship of the Gaza attacks was still underway, Shakir said he had already observed Meta once again censoring Palestinian and pro-Palestinian speech, including documentation of human rights abuses.

    It is unclear which specific facet of Meta’s byzantine global censorship apparatus was responsible for the wave of deleted Gaza posts last month. Many users received no meaningful information about why their posts were removed, and the Meta spokesperson declined to detail which other policies were applied. Past removals of Palestinian content have cited not only the Dangerous Individuals and Organizations policy but also the company’s bans on violent imagery, hateful symbols, and hate speech. As with Meta’s other content policies, the prohibition on violent and graphic content can at times swallow posts that merely convey the reality of global crises without glorifying them, a situation in which the company took unprecedented steps during the Russia-Ukraine war.

    Meta’s public-facing Community Standards explain: “We remove content that glorifies violence or celebrates the suffering or humiliation of others because it may create an environment that discourages participation,” noting a vague exception for “graphic content (with some limitations) to help people raise awareness about these issues.” The violent and graphic content policy imposes a blanket ban on videos depicting corpses and permits similar still images only for users over 18.

    In an expanded internal version of the Community Standards obtained by The Intercept, the section on graphic content includes a series of policy memos directing moderators to bypass the rules, or apply extra scrutiny, for certain breaking news events. A review of these exceptions shows that in the immediate aftermath of the invasion, Meta directed its moderators on seven occasions to make sure graphic images of Ukrainian civilians killed in Russian attacks were not deleted. The list includes acts of state violence similar to those routinely censored when carried out by the Israeli military, including several specific references to airstrikes.

    According to the internal materials, Meta began instructing its moderators to bypass standard practice in order to preserve documentation of the Russian invasion as early as the second day of the war. A February 25 policy update instructed moderators not to delete video of some of the war’s first civilian casualties. “This video shows the aftermath of airstrikes on the Ukrainian city of Uman,” the memo reads. “At 0.5 seconds, viscera are visible. We are allowing this video with a MAD exception,” a reference to the company’s practice of “marking as disturbing,” attaching a warning to an image or video rather than deleting it outright.

    “It has always been about geopolitics and profit for Meta.”

    On March 5, moderators were instructed to “MAD [mark as disturbing] video briefly depicting slightly mutilated people following airstrikes in Chernigov,” again noting that moderators should bypass the rules. “While video depicting dismembered individuals outside of a medical setting is prohibited under our violent and graphic content policy,” the memo reads, “the depictions of the individuals are brief and appear to be in an awareness-raising context posted by survivors of the missile attack.”

    The graphic violence exceptions are just one of many ways Meta quickly adjusted its moderation practices to accommodate the Ukrainian resistance. Early in the invasion, the company took the rare step of lifting speech restrictions around the Azov Battalion, a neo-Nazi unit of the Ukrainian armed forces previously banned under the company’s Dangerous Individuals and Organizations policy. In March, Reuters reported that Meta temporarily allowed users to call for the death of Russian soldiers, speech that would normally violate company rules as well.

    Human rights advocates emphasized that their complaint is not with the added protections for Ukrainians, but with the absence of similar special measures to shield besieged civilians elsewhere from Meta’s erratic censorship apparatus.

    “Human rights are not a pick-and-choose exercise,” Fatafta said. “It’s good that such important measures were taken for Ukraine, but the failure to do the same for Palestine only underscores Meta’s discriminatory approach to content moderation. It has always been about geopolitics and profit for Meta.”

    IT IS NEVER explained publicly, nor in the internal materials reviewed by The Intercept, how Meta decides which posts celebrate wartime death and which raise awareness of these situations.

    A January 2022 Meta post notes that the company uses a “balancing test that weighs public interest against the risk of harm” for content that would normally break its rules, but it provides no information about what that test actually involves or who conducts it. Whether an attempt to document atrocities, or to mourn a neighbor killed in an airstrike, counts as glorification or as public interest is a subjective judgment left to Meta’s outsourced moderators: overworked and at times traumatized workers responsible for making hundreds of such calls every day.

    Few would dispute that the images of the Russian invasion described in Meta’s policy updates are newsworthy, but the documents obtained by The Intercept show that the permissive list of Ukraine-sympathetic material included outright state propaganda.

    The documents show that on several occasions, Ukrainian state propaganda videos highlighting Russian violence against civilians were added to the allow list, including “Close the Sky,” the emotionally charged film that Ukrainian President Volodymyr Zelensky presented to the U.S. Congress in March. “While video depicting mutilated humans outside of a medical setting is prohibited under the VGC policy, the imagery shared is in an awareness-raising context posted by the president of Ukraine,” reads a March 24 update distributed to moderators.

    On May 13, moderators were instructed not to delete a video posted by the Ukrainian Ministry of Defense featuring graphic imagery of burned corpses. “The video briefly shows an unidentified charred body lying on the ground,” the update reads. “While videos showing charred or burning people are prohibited under our violent and graphic content policy … the footage is brief and qualifies for a newsworthiness exception under OCP guidelines for documenting an ongoing armed conflict.”

    “Meta is replicating online some of the same power imbalances and rights abuses we see in the real world.”

    The internal materials reviewed by The Intercept show no such interventions on behalf of Palestinians: no propaganda allow list designed to build sympathy for civilians, and no directives to apply warning labels instead of removing content depicting harm to the population.

    Critics point to the disparity, asking why online speech about war crimes and human rights offenses committed against Europeans seems to merit special protection, while speech about abuses against people from other regions does not.

    “Meta should respect people’s right to expression, whether in Ukraine or Palestine,” said Shakir of Human Rights Watch. “By arbitrarily silencing so many people without explanation, Meta is replicating online some of the same power imbalances and rights abuses we see in the real world.”

    While Meta appears opposed to letting Palestinian civilians keep graphic content online, the company has intervened on behalf of posts depicting occupying Israeli forces. At one point, Meta took steps to ensure that footage of an attack on a member of the Israeli security forces in the occupied West Bank stayed up: “An Israeli border police officer was hit and lightly wounded by a Molotov cocktail during clashes with Palestinians in Hebron,” reads an undated memo distributed to moderators. “We are making an exception to mark this specific content as disturbing.”

    Translation: Ricardo Romanoff

    The post Facebook Allows Graphic Images of Attacks on Ukraine but Censors Israel’s Attacks on Palestine appeared first on The Intercept.

  • In March, two veteran Facebook engineers found themselves grilled about the company’s sprawling data collection operations in a hearing for the ongoing lawsuit over the mishandling of private user information stemming from the Cambridge Analytica scandal.

    The hearing, a transcript of which was recently unsealed, was aimed at resolving one crucial issue: What information, precisely, does Facebook store about us, and where is it? The engineers’ response will come as little relief to those concerned with the company’s stewardship of billions of digitized lives: They don’t know.

    The admissions occurred during a hearing with special master Daniel Garrie, a court-appointed subject-matter expert tasked with resolving a disclosure impasse. Garrie was attempting to get the company to provide an exhaustive, definitive accounting of where personal data might be stored in some 55 Facebook subsystems. Both veteran Facebook engineers, who according to LinkedIn have two decades of experience between them, struggled to even venture a guess at what may be stored in Facebook’s subsystems. “I’m just trying to understand at the most basic level from this list what we’re looking at,” Garrie said.

    “I don’t believe there’s a single person that exists who could answer that question,” replied Eugene Zarashaw, a Facebook engineering director. “It would take a significant team effort to even be able to answer that question.” (Facebook did not respond to a request for comment.)

    When asked about how Facebook might track down every bit of data associated with a given user account, Zarashaw was stumped again: “It would take multiple teams on the ad side to track down exactly the — where the data flows. I would be surprised if there’s even a single person that can answer that narrow question conclusively.”

    The dispute over where Facebook stores data arose when, as part of the litigation, now in its fourth year, the court ordered Facebook to turn over information it had collected about the suit’s plaintiffs. The company complied but provided data consisting mostly of material that any user could obtain through the company’s publicly accessible “Download Your Information” tool.

    Facebook contended that any data not included in this set was outside the scope of the lawsuit, ignoring the vast quantities of information the company generates through inferences, outside partnerships, and other nonpublic analysis of our habits — parts of the social media site’s inner workings that are obscure to consumers. Briefly, what we think of as “Facebook” is in fact a composite of specialized programs that work together when we upload videos, share photos, or get targeted with advertising. The social network wanted to keep data storage in those nonconsumer parts of Facebook out of court.

    In 2020, the judge disagreed with the company’s contention, ruling that Facebook’s initial disclosure had indeed been too sparse and that the company must reveal data obtained through its oceanic ability to surveil people across the internet and make monetizable predictions about their next moves.

    Facebook’s stonewalling has been revealing on its own, providing variations on the same theme: It has amassed so much data on so many billions of people and organized it so confusingly that full transparency is impossible on a technical level. In the March 2022 hearing, Zarashaw and Steven Elia, a software engineering manager, described Facebook as a data-processing apparatus so complex that it defies understanding from within. The hearing amounted to two high-ranking engineers at one of the most powerful and resource-flush engineering outfits in history describing their product as an unknowable machine.

    The special master at times seemed in disbelief, as when he questioned the engineers over whether any documentation existed for a particular Facebook subsystem. “Someone must have a diagram that says this is where this data is stored,” he said, according to the transcript. Zarashaw responded: “We have a somewhat strange engineering culture compared to most where we don’t generate a lot of artifacts during the engineering process. Effectively the code is its own design document often.” He quickly added, “For what it’s worth, this is terrifying to me when I first joined as well.”

    The remarks in the hearing echo those found in an internal document leaked to Motherboard earlier this year detailing how the internal engineering dysfunction at Meta, which owns Facebook and Instagram, makes compliance with data privacy laws an impossibility. “We do not have an adequate level of control and explainability over how our systems use data, and thus we can’t confidently make controlled policy changes or external commitments such as ‘we will not use X data for Y purpose,’” the 2021 document read.

    The fundamental problem, according to the engineers in the hearing, is that Facebook’s sprawl has made it impossible to know what the company consists of anymore; it never bothered to cultivate institutional knowledge of how its component systems work, what they do, or who’s using them. There is no documentation of what happens to your data once it’s uploaded, because documentation has simply never been something the company produces, the two explained. “It is rare for there to exist artifacts and diagrams on how those systems are then used and what data actually flows through them,” explained Zarashaw.

    “It is rare for there to exist artifacts and diagrams on how those systems are then used and what data actually flows through them.”

    Facebook’s inability to comprehend its own functioning took the hearing up to the edge of the metaphysical. At one point, the court-appointed special master noted that the “Download Your Information” file provided to the suit’s plaintiffs must not have included everything the company had stored on those individuals, given that the company appears to have no idea what it truly stores on anyone. Can it be that Facebook’s designated tool for comprehensively downloading your information might not actually download all your information? This, again, is outside the boundaries of knowledge.

    “The solution to this is unfortunately exactly the work that was done to create the DYI file itself,” noted Zarashaw. “And the thing I struggle with here is in order to find gaps in what may not be in DYI file, you would by definition need to do even more work than was done to generate the DYI files in the first place.”

    The systemic fogginess of Facebook’s data storage made answering even the most basic question futile. At another point, the special master asked how one could find out which systems actually contain user data that was created through machine inference.

    “I don’t know,” answered Zarashaw. “It’s a rather difficult conundrum.”

    The post Facebook Engineers: We Have No Idea Where We Keep All Your Personal Data appeared first on The Intercept.

  • After a series of Israeli airstrikes against the densely populated Gaza Strip earlier this month, Palestinian Facebook and Instagram users protested the abrupt deletion of posts documenting the resulting death and destruction. It wasn’t the first time Palestinian users of the two giant social media platforms, which are both owned by parent company Meta, had complained about their posts being unduly removed. It’s become a pattern: Palestinians post sometimes graphic videos and images of Israeli attacks, and Meta swiftly removes the content, providing only an oblique reference to a violation of the company’s “Community Standards” or in many cases no explanation at all.

    Not all the billions of users on Meta’s platforms, however, run into these issues when documenting the bombing of their neighborhoods.

    Previously unreported policy language obtained by The Intercept shows that this year the company repeatedly instructed moderators to deviate from standard procedure and treat various graphic imagery from the Russia-Ukraine war with a light touch. Like other American internet companies, Meta responded to the invasion by rapidly enacting a litany of new policy carveouts designed to broaden and protect the online speech of Ukrainians, specifically allowing their graphic images of civilians killed by the Russian military to remain up on Instagram and Facebook.

    No such carveouts were ever made for Palestinian victims of Israeli state violence — nor do the materials show such latitude provided for any other suffering population.

    “This is deliberate censorship of human rights documentation and the Palestinian narrative.”

    “This is deliberate censorship of human rights documentation and the Palestinian narrative,” said Mona Shtaya, an adviser with 7amleh, the Arab Center for the Advancement of Social Media, a civil society group that formally collaborates with Meta on speech issues. During the recent Israeli attacks on Gaza, between August 5 and August 15, 7amleh tallied nearly 90 deletions of content or account suspensions relating to bombings on Meta platforms, noting that reports of censored content are still coming in.

    Marwa Fatafta, Middle East North Africa policy manager for Access Now, an international digital rights group, said, “Their censorship works almost like clockwork — whenever violence escalates on the ground, their takedown of Palestinian content soars.”

    Instances of censored Palestinian content reviewed by The Intercept include the August 5 removal of a post mourning the death of Alaa Qaddoum, a 5-year-old Palestinian girl killed in an Israeli missile strike, as well as an Instagram video showing Gazans pulling bodies from beneath rubble. Both posts were removed with a notice claiming that the imagery “goes against our guidelines on violence or dangerous organizations” — a reference to Meta’s company policy against violent content or information related to its vast roster of banned people and groups.

    Meta spokesperson Erica Sackin told The Intercept that these two posts were removed according to the Dangerous Individuals and Organizations policy, pointing to the company’s policy of censoring content promoting federally designated terrorist groups. Sackin did not respond to a follow-up question about how an image of a 5-year-old girl and a man buried in rubble promoted terrorism.

    Palestinians in Gaza who post about Israeli assaults said their posts don’t contain political messages or indicate any affiliation with terror groups. “I’m just posting pure news about what’s happening,” said Issam Adwan, a Gaza-based freelance journalist. “I’m not even using a very biased Palestinian news language: I’m describing the Israeli planes as Israeli planes, I’m not saying that I’m a supporter of Hamas or things like these.”

    Rights advocates told The Intercept that the exemptions made for the Russia-Ukraine war are the latest example of a double standard between Meta’s treatment of Western markets and the rest of the world. Meta has shown special solicitude for the Ukrainian cause since the beginning of the war, they said, a tilt visible in Western media coverage of the conflict more broadly.

    Though the majority of users on social platforms owned by Meta live outside the United States, critics charge that the company’s censorship policies, which affect billions worldwide, tidily align with U.S. foreign policy interests. Rights advocates emphasized the political nature of these moderation decisions. “Meta was capable to take very strict measures to protect Ukrainians amid the Russian invasion because it had the political will,” said Shtaya, “but we Palestinians haven’t witnessed anything of these measures.”

    By taking its cues from U.S. government policy — including cribbing U.S. counterterrorism blacklists — Meta can end up censoring entirely nonviolent statements of support or sympathy for Palestinians, according to a 2021 statement by Human Rights Watch. “This is a pretty clear example of where that’s happening,” Omar Shakir, Human Rights Watch’s Israel and Palestine director, told The Intercept of the most recent takedowns. While Human Rights Watch’s accounting of recent Gaza censorship was still ongoing, Shakir said what he’d seen already indicated that Meta was once again censoring Palestinian and pro-Palestinian speech, including the documentation of human rights abuses.

    It’s unclear which specific facet of Meta’s byzantine global censorship system was responsible for the spate of censorship of Gaza posts in August; many posters received no meaningful information as to why their posts were deleted. The Meta spokesperson declined to provide an accounting of which other policies were used. Past takedowns of Palestinian content have cited not only the Dangerous Individuals and Organizations policy but also company prohibitions against depictions of graphic violence, hate symbols, and hate speech. As is the case with Meta’s other content policies, the Violent and Graphic Content prohibition can at times swallow up posts that are clearly sharing the reality of global crises rather than glorifying them — something the company has taken unprecedented steps to prevent in Ukraine.

    Meta’s public-facing Community Standards rulebook says: “We remove content that glorifies violence or celebrates the suffering or humiliation of others because it may create an environment that discourages participation” — noting a vague exception for “graphic content (with some limitations) to help people raise awareness about these issues.” The Violent and Graphic Content policy places a blanket ban on gruesome videos of dead bodies and restricts the viewing of similar still images to adults 18 years and older.

    In an expanded, internal version of the Community Standards guide obtained by The Intercept, the section dealing with graphic content includes a series of policy memos directing moderators to deviate from the standard rules or bring added scrutiny to bear on specific breaking news events. A review of these breaking news exceptions shows that Meta directed moderators to make sure that graphic imagery of Ukrainian civilians killed in Russian attacks was not deleted on seven different occasions, beginning at the immediate onset of the invasion. The whitelisted content includes acts of state violence akin to those routinely censored when conducted by the Israeli military, including multiple specific references to airstrikes.

    According to the internal material, Meta began instructing its moderators to deviate from standard practices to preserve documentation of the Russian invasion the day after it began. A policy update on February 25 instructed moderators to not delete video of some of the war’s earliest civilian casualties. “This video shows the aftermath of airstrikes on the city of Uman, Ukraine,” the memo reads. “At 0.5 seconds, innards are visible. We are making an allowance to MAD this video” — a reference to the company practice “Mark As Disturbing,” or attaching a warning to an image or video rather than deleting it outright.
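
    The memos describe what amounts to an exception list layered over a default rule: graphic imagery is deleted unless a specific event has been whitelisted for the milder warning-label treatment. Below is a minimal sketch of that flow in Python — the event tags, action names, and function are invented for illustration and are not Meta’s actual tooling:

        # Hypothetical sketch of a breaking-news carveout overriding a default rule.
        # Event tags and action names are invented; Meta's real systems are not public.
        DEFAULT_GRAPHIC_ACTION = "delete"  # Violent & Graphic Content policy default

        BREAKING_NEWS_EXCEPTIONS = {
            "uman_airstrikes_2022_02_25": "mark_as_disturbing",  # the "MAD" allowance
        }

        def moderate_graphic_post(event_tag: str) -> str:
            """Return the action a moderator is instructed to take for graphic imagery."""
            return BREAKING_NEWS_EXCEPTIONS.get(event_tag, DEFAULT_GRAPHIC_ACTION)

        print(moderate_graphic_post("uman_airstrikes_2022_02_25"))  # mark_as_disturbing
        print(moderate_graphic_post("unlisted_event"))              # delete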

    “It’s always been about geopolitics and profit for Meta.”

    On March 5, moderators were told to “MAD Video Briefly Depicting Briefly Mutilated Persons Following Air Strikes in Chernigov” — again noting that moderators were to deviate from standard speech rules. “Though video depicting dismembered persons outside of a medical setting is prohibited by our Violent & Graphic Content policy,” the memo says, “the footage of the individuals is brief and appears to be in an awareness raising context posted by survivors of the rocket attack.”

    The graphic violence exceptions are just a few of the many ways Meta has quickly adjusted its moderation practices to accommodate the Ukrainian resistance. At the outset of the invasion, the company took the rare step of lifting speech restrictions around the Azov Battalion, a neo-Nazi unit of the Ukrainian military previously banned under the company’s Dangerous Individuals and Organizations policy. In March, Reuters reported that Meta temporarily permitted users to explicitly call for the death of Russian soldiers, speech that would also normally violate the company’s rules.

    Rights advocates emphasized that their grievance is not with added protections for Ukrainians but the absence of similar special steps to shield besieged civilians from Meta’s erratic censorship apparatus nearly everywhere else in the world.

    “Human rights is not a cherry-picking exercise,” said Fatafta. “It’s good they have taken such important measures for Ukraine, but their failure to do so for Palestine emphasizes further their discriminatory approach to content moderation. It’s always been about geopolitics and profit for Meta.”

    How exactly Meta decides which posts are celebrating gruesome wartime death and which are raising awareness of it is never explained in the company’s public overview of its speech rules or the internal material reviewed by The Intercept.

    A January 2022 blog post from Meta notes that the company uses a “balancing test that weighs the public interest against the risk of harm” for content that would normally violate company rules but provides no information as to what that test actually entails or who conducts it. Whether an attempt to document atrocities or mourn a neighbor killed in an airstrike is deemed glorification or in the public interest is left to the subjective judgment calls of Meta’s overworked and sometimes traumatized content contractors, tasked with making hundreds of such decisions every day.

    Few would dispute that the images from Ukraine described in the Meta policy updates — documenting the Russian invasion — are newsworthy, but the documents obtained by The Intercept show that Meta’s whitelisting of material sympathetic to Ukraine has extended even to graphic state propaganda.

    The internal materials show that Meta has on multiple occasions whitelisted Ukrainian state propaganda videos that highlight Russian violence against civilians, including the emotionally charged “Close the Sky” film Ukrainian President Volodymyr Zelenskyy presented to Congress in March. “Though the video depicting mutilated humans outside of a medical setting is prohibited by VGC policy the footage shared is in an awareness-raising context posted by the President of Ukraine,” said a March 24 update distributed to moderators.

    On May 13, moderators were told not to delete a video posted by the Ukrainian Defense Ministry that included graphic depictions of burnt corpses. “The video very briefly depicts an unidentified charred body lying on the floor,” the update says. “Though video depicting charred or burning people is prohibited by our Violent & Graphic Content policy … the footage is brief and qualifies for a newsworthy exception as per OCP’s guidelines, as it documents an on-going armed conflict.”

    “Meta is replicating online some of the same power imbalances and rights abuses we see in the real world.”

    The internal materials reviewed by The Intercept show no such interventions for Palestinians — no whitelisting of propaganda designed to raise sympathies for civilians or directives to use warnings instead of removing content depicting harm to civilians.

    Critics pointed to the disparity to question why online speech about war crimes and human rights offenses committed against Europeans seems to warrant special protections while speech about abuses committed against others does not.

    “Meta should respect the right for people to speak out, whether in Ukraine or Palestine,” said Shakir, of Human Rights Watch. “By silencing many people arbitrarily and without explanation, Meta is replicating online some of the same power imbalances and rights abuses we see in the real world.”

    While Meta seems to side against letting Palestinian civilians keep graphic content online, it has intervened in posts about the Israeli-Palestinian conflict to keep images live when doing so favored the occupying Israeli military. In one instance, Meta took steps to ensure that a depiction of an attack against a member of the Israeli security forces in the occupied West Bank was kept up: “An Israeli Border Police officer was struck and lightly wounded by a Molotov cocktail during clashes with Palestinians in Hebron,” an undated memo distributed to moderators reads. “We are making an exception for this particular content to Mark this video as Disturbing.”

    The post Facebook Tells Moderators to Allow Graphic Images of Russian Airstrikes but Censors Israeli Attacks appeared first on The Intercept.

  • Training materials reviewed by The Intercept confirm that Google is offering advanced artificial intelligence and machine-learning capabilities to the Israeli government through its controversial “Project Nimbus” contract. The Israeli Finance Ministry announced the contract in April 2021 for a $1.2 billion cloud computing system jointly built by Google and Amazon. “The project is intended to provide the government, the defense establishment and others with an all-encompassing cloud solution,” the ministry said in its announcement.

    Google engineers have spent the time since worrying whether their efforts would inadvertently bolster the ongoing Israeli military occupation of Palestine. In 2021, both Human Rights Watch and Amnesty International formally accused Israel of committing crimes against humanity by maintaining an apartheid system against Palestinians. While the Israeli military and security services already rely on a sophisticated apparatus of computerized surveillance, the power of Google’s data analysis offerings could deepen the increasingly data-driven military occupation.

    According to a trove of training documents and videos obtained by The Intercept through a publicly accessible educational portal intended for Nimbus users, Google is providing the Israeli government with the full suite of machine-learning and AI tools available through Google Cloud Platform. While they provide no specifics as to how Nimbus will be used, the documents indicate that the new cloud would give Israel capabilities for facial detection, automated image categorization, object tracking, and even sentiment analysis that claims to assess the emotional content of pictures, speech, and writing. The Nimbus materials referenced agency-specific trainings available to government personnel through the online learning service Coursera, citing the Ministry of Defense as an example.

    A slide presented to Nimbus users illustrating Google image recognition technology.

    Credit: Google


    Jack Poulson, director of the watchdog group Tech Inquiry, shared the portal’s address with The Intercept after finding it cited in Israeli contracting documents.

    “The former head of Security for Google Enterprise — who now heads Oracle’s Israel branch — has publicly argued that one of the goals of Nimbus is preventing the German government from requesting data relating on the Israel Defence Forces for the International Criminal Court,” said Poulson, who resigned in protest from his job as a research scientist at Google in 2018, in a message. “Given Human Rights Watch’s conclusion that the Israeli government is committing ‘crimes against humanity of apartheid and persecution’ against Palestinians, it is critical that Google and Amazon’s AI surveillance support to the IDF be documented to the fullest.”

    Though some of the documents bear a hybridized symbol of the Google logo and Israeli flag, for the most part they are not unique to Nimbus. Rather, the documents appear to be standard educational materials distributed to Google Cloud customers and presented in prior training contexts elsewhere.

    Google did not respond to a request for comment.

    The documents obtained by The Intercept detail for the first time the Google Cloud features provided through the Nimbus contract. With virtually nothing publicly disclosed about Nimbus beyond its existence, the system’s specific functionality had remained a mystery even to most of those working at the company that built it. In 2020, citing the same AI tools, U.S. Customs and Border Protection tapped Google Cloud to process imagery from its network of border surveillance towers.

    Many of the capabilities outlined in the documents obtained by The Intercept could easily augment Israel’s ability to surveil people and process vast stores of data — already prominent features of the Israeli occupation.

    “Data collection over the entire Palestinian population was and is an integral part of the occupation,” Ori Givati of Breaking the Silence, an anti-occupation advocacy group of Israeli military veterans, told The Intercept in an email. “Generally, the different technological developments we are seeing in the Occupied Territories all direct to one central element which is more control.”

    The Israeli security state has for decades benefited from the country’s thriving research and development sector, and its interest in using AI to police and control Palestinians isn’t hypothetical. In 2021, the Washington Post reported on the existence of Blue Wolf, a secret military program aimed at monitoring Palestinians through a network of facial recognition-enabled smartphones and cameras.

    “Living under a surveillance state for years taught us that all the collected information in the Israeli/Palestinian context could be securitized and militarized,” said Mona Shtaya, a Palestinian digital rights advocate at 7amleh-The Arab Center for Social Media Advancement, in a message. “Image recognition, facial recognition, emotional analysis, among other things will increase the power of the surveillance state to violate Palestinian right to privacy and to serve their main goal, which is to create the panopticon feeling among Palestinians that we are being watched all the time, which would make the Palestinian population control easier.”

    The educational materials obtained by The Intercept show that Google briefed the Israeli government on using what’s known as sentiment detection, an increasingly controversial and discredited form of machine learning. Google claims that its systems can discern inner feelings from one’s face and statements, a technique commonly rejected as invasive and pseudoscientific, regarded as being little better than phrenology. In June, Microsoft announced that it would no longer offer emotion-detection features through its Azure cloud computing platform — a technology suite comparable to what Google provides with Nimbus — citing the lack of scientific basis.

    Google does not appear to share Microsoft’s concerns. One Nimbus presentation touted the “Faces, facial landmarks, emotions”-detection capabilities of Google’s Cloud Vision API, an image analysis toolset. The presentation then offered a demonstration using the enormous grinning face sculpture at the entrance of Sydney’s Luna Park. An included screenshot of the feature ostensibly in action indicates that the massive smiling grin is “very unlikely” to exhibit any of the example emotions. And Google was only able to assess that the famous amusement park is an amusement park with 64 percent certainty, while it guessed that the landmark was a “place of worship” or “Hindu Temple” with 83 percent and 74 percent confidence, respectively.
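
    The capability shown on the slide matches what Google documents publicly for its Cloud Vision client libraries. A minimal sketch of requesting the same kinds of annotations, assuming the google-cloud-vision Python package, valid credentials, and a placeholder image URI — illustrative, not a reconstruction of the Nimbus demo:

        # Sketch of querying Cloud Vision for faces, emotion likelihoods, and labels.
        # Requires google-cloud-vision and credentials; the image URI is a placeholder.
        from google.cloud import vision

        client = vision.ImageAnnotatorClient()
        image = vision.Image()
        image.source.image_uri = "gs://example-bucket/luna_park.jpg"  # hypothetical

        # Face detection returns per-face likelihoods for emotions such as joy or anger.
        for face in client.face_detection(image=image).face_annotations:
            print(face.joy_likelihood, face.sorrow_likelihood, face.anger_likelihood)

        # Label detection returns guesses ("Amusement park", "Place of worship")
        # with confidence scores like the 64 and 83 percent figures on the slide.
        for label in client.label_detection(image=image).label_annotations:
            print(label.description, round(label.score * 100), "%")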

    A slide presented to Nimbus users illustrating Google AI’s ability to detect image traits.

    Credit: Google


    Google workers who reviewed the documents said they were concerned by their employer’s sale of these technologies to Israel, fearing both their inaccuracy and how they might be used for surveillance or other militarized purposes.

    “Vision API is a primary concern to me because it’s so useful for surveillance,” said one worker, who explained that the image analysis would be a natural fit for military and security applications. “Object recognition is useful for targeting, it’s useful for data analysis and data labeling. An AI can comb through collected surveillance feeds in a way a human cannot to find specific people and to identify people, with some error, who look like someone. That’s why these systems are really dangerous.”

    A slide presented to Nimbus users outlining various AI features through the company’s Cloud Vision API.

    Credit: Google


    The employee — who, like all of the Google workers who spoke to The Intercept, requested anonymity to avoid workplace reprisals — added that they were further alarmed by potential surveillance or other militarized applications of AutoML, another Google AI tool offered through Nimbus. Machine learning largely consists of training software to recognize patterns in existing data so that it can make predictions about new observations — analyzing millions of images of kittens today, for instance, so that it can confidently claim it’s looking at a photo of a kitten tomorrow. This training process yields what’s known as a “model” — a body of computerized education that can be applied to automatically recognize certain objects and traits in future data.

    Training an effective model from scratch is often resource intensive, both financially and computationally. This is not so much of a problem for a world-spanning company like Google, with an unfathomable volume of both money and computing hardware at the ready. Part of Google’s appeal to customers is the option of using a pre-trained model, essentially getting this prediction-making education out of the way and letting customers access a well-trained program that’s benefited from the company’s limitless resources.
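
    To make the distinction concrete, here is a toy sketch — using scikit-learn and deliberately trivial stand-in data rather than anything from Google — of how training yields a reusable artifact, the “model,” that a customer can load and apply without repeating the expensive training step:

        # Toy illustration: training produces a saved "model" that can be reused later.
        # scikit-learn stands in for Google's tooling; the data is deliberately trivial.
        from sklearn.linear_model import LogisticRegression
        import joblib

        # Pretend features extracted from labeled images: 1 = kitten, 0 = not a kitten.
        X_train = [[0.9, 0.1], [0.8, 0.3], [0.1, 0.9], [0.2, 0.7]]
        y_train = [1, 1, 0, 0]

        model = LogisticRegression().fit(X_train, y_train)  # costly at real-world scale
        joblib.dump(model, "kitten_model.joblib")           # the trained artifact

        # A customer skips training entirely: load the pre-trained model and predict.
        pretrained = joblib.load("kitten_model.joblib")
        print(pretrained.predict([[0.85, 0.2]]))  # -> [1], i.e., "kitten"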

    “An AI can comb through collected surveillance feeds in a way a human cannot to find specific people and to identify people, with some error, who look like someone. That’s why these systems are really dangerous.”

    Cloud Vision is one such pre-trained model, allowing clients to immediately implement a sophisticated prediction system. AutoML, on the other hand, streamlines the process of training a custom-tailored model, using a customer’s own data for a customer’s own designs. Google has placed some limits on Vision — for instance limiting it to face detection, or whether it sees a face, rather than recognition that would identify a person. AutoML, however, would allow Israel to leverage Google’s computing capacity to train new models with its own government data for virtually any purpose it wishes. “Google’s machine learning capabilities along with the Israeli state’s surveillance infrastructure poses a real threat to the human rights of Palestinians,” said Damini Satija, who leads Amnesty International’s Algorithmic Accountability Lab. “The option to use the vast volumes of surveillance data already held by the Israeli government to train the systems only exacerbates these risks.”

    Custom models generated through AutoML, one presentation noted, can be downloaded for offline “edge” use — unplugged from the cloud and deployed in the field.
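
    One common way to run such an exported model offline is TensorFlow Lite, a format Google supports for edge deployments. The sketch below assumes a hypothetical model.tflite file and substitutes zeros for a real camera frame:

        # Sketch of offline "edge" inference with an exported model; no cloud required.
        # The model file is hypothetical; TensorFlow Lite is one common edge format.
        import numpy as np
        import tensorflow as tf

        interpreter = tf.lite.Interpreter(model_path="model.tflite")
        interpreter.allocate_tensors()

        inp = interpreter.get_input_details()[0]
        out = interpreter.get_output_details()[0]

        # A camera frame would go here; zeros stand in for real pixel data.
        frame = np.zeros(inp["shape"], dtype=inp["dtype"])
        interpreter.set_tensor(inp["index"], frame)
        interpreter.invoke()  # runs entirely on the device, invisible to Google

        print(interpreter.get_tensor(out["index"]))  # e.g., class scores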

    That Nimbus lets Google clients use advanced data analysis and prediction in places and ways that Google has no visibility into creates a risk of abuse, according to Liz O’Sullivan, CEO of the AI auditing startup Parity and a member of the U.S. National Artificial Intelligence Advisory Committee. “Countries can absolutely use AutoML to deploy shoddy surveillance systems that only seem like they work,” O’Sullivan said in a message. “On edge, it’s even worse — think bodycams, traffic cameras, even a handheld device like a phone can become a surveillance machine and Google may not even know it’s happening.”

    In one Nimbus webinar reviewed by The Intercept, the potential use and misuse of AutoML was exemplified in a Q&A session following a presentation. An unnamed member of the audience asked the Google Cloud engineers present on the call if it would be possible to process data through Nimbus in order to determine if someone is lying.

    “I’m a bit scared to answer that question,” said the engineer conducting the seminar, in an apparent joke. “In principle: Yes. I will expand on it, but the short answer is yes.” Another Google representative then jumped in: “It is possible, assuming that you have the right data, to use the Google infrastructure to train a model to identify how likely it is that a certain person is lying, given the sound of their own voice.” Noting that such a capability would take a tremendous amount of data for the model, the second presenter added that one of the advantages of Nimbus is the ability to tap into Google’s vast computing power to train such a model.

    “I’d be very skeptical for the citizens it is meant to protect that these systems can do what is claimed.”

    A broad body of research, however, has shown that the very notion of a “lie detector,” whether the simple polygraph or “AI”-based analysis of vocal changes or facial cues, is junk science. While Google’s reps appeared confident that the company could make such a thing possible through sheer computing power, experts in the field say that any attempts to use computers to assess things as profound and intangible as truth and emotion are faulty to the point of danger.

    One Google worker who reviewed the documents said they were concerned that the company would even hint at such a scientifically dubious technique. “The answer should have been ‘no,’ because that does not exist,” the worker said. “It seems like it was meant to promote Google technology as powerful, and it’s ultimately really irresponsible to say that when it’s not possible.”

    Andrew McStay, a professor of digital media at Bangor University in Wales and head of the Emotional AI Lab, told The Intercept that the lie detector Q&A exchange was “disturbing,” as is Google’s willingness to pitch pseudoscientific AI tools to a national government. “It is [a] wildly divergent field, so any technology built on this is going to automate unreliability,” he said. “Again, those subjected to them will suffer, but I’d be very skeptical for the citizens it is meant to protect that these systems can do what is claimed.”

    According to some critics, whether these tools work might be of secondary importance to a company like Google that is eager to tap the ever-lucrative flow of military contract money. Governmental customers too may be willing to suspend disbelief when it comes to promises of vast new techno-powers. “It’s extremely telling that in the webinar PDF that they constantly referred to this as ‘magical AI goodness,’” said Jathan Sadowski, a scholar of automation technologies and research fellow at Monash University, in an interview with The Intercept. “It shows that they’re bullshitting.”

    Google CEO Sundar Pichai speaks at the Google I/O conference in Mountain View, Calif. Google pledges that it will not use artificial intelligence in applications related to weapons or surveillance, part of a new set of principles designed to govern how it uses AI. Those principles, released by Pichai, commit Google to building AI applications that are “socially beneficial,” that avoid creating or reinforcing bias and that are accountable to people.

    Photo: Jeff Chiu/AP


    Google, like Microsoft, has its own public list of “AI principles,” a document the company says is an “ethical charter that guides the development and use of artificial intelligence in our research and products.” Among these purported principles is a commitment to not “deploy AI … that cause or are likely to cause overall harm,” including weapons, surveillance, or any application “whose purpose contravenes widely accepted principles of international law and human rights.”

    Israel, though, has set up its relationship with Google to shield it from both the company’s principles and any outside scrutiny. Perhaps fearing the fate of the Pentagon’s Project Maven, a Google AI contract felled by intense employee protests, the data centers that power Nimbus will reside on Israeli territory, subject to Israeli law and insulated from political pressures. Last year, the Times of Israel reported that Google would be contractually barred from shutting down Nimbus services or denying access to a particular government office even in response to boycott campaigns.

    Google employees interviewed by The Intercept lamented that the company’s AI principles are at best a superficial gesture. “I don’t believe it’s hugely meaningful,” one employee told The Intercept, explaining that the company has interpreted its AI charter so narrowly that it doesn’t apply to companies or governments that buy Google Cloud services. Asked how the AI principles are compatible with the company’s Pentagon work, a Google spokesperson told Defense One, “It means that our technology can be used fairly broadly by the military.”

    “Google is backsliding on its commitments to protect people from this kind of misuse of our technology. I am truly afraid for the future of Google and the world.”

    Moreover, this employee added that Google lacks both the ability to tell if its principles are being violated and any means of thwarting violations. “Once Google offers these services, we have no technical capacity to monitor what our customers are doing with these services,” the employee said. “They could be doing anything.” Another Google worker told The Intercept, “At a time when already vulnerable populations are facing unprecedented and escalating levels of repression, Google is backsliding on its commitments to protect people from this kind of misuse of our technology. I am truly afraid for the future of Google and the world.”

    Ariel Koren, a Google employee who claimed earlier this year that she faced retaliation for raising concerns about Nimbus, said the company’s internal silence on the program continues. “I am deeply concerned that Google has not provided us with any details at all about the scope of the Project Nimbus contract, let alone assuage my concerns of how Google can provide technology to the Israeli government and military (both committing grave human rights abuses against Palestinians daily) while upholding the ethical commitments the company has made to its employees and the public,” she told The Intercept in an email. “I joined Google to promote technology that brings communities together and improves people’s lives, not service a government accused of the crime of apartheid by the world’s two leading human rights organizations.”

    Sprawling tech companies have published ethical AI charters to rebut critics who say that their increasingly powerful products are sold unchecked and unsupervised. The same critics often counter that the documents are a form of “ethicswashing” — essentially toothless self-regulatory pledges that provide only the appearance of scruples — and point to examples like the provisions in Israel’s contract with Google that prevent the company from shutting down its products. “The way that Israel is locking in their service providers through this tender and this contract,” said Sadowski, the Monash University scholar, “I do feel like that is a real innovation in technology procurement.”

    To Sadowski, it matters little whether Google believes what it peddles about AI or any other technology. What the company is selling, ultimately, isn’t just software, but power. And whether it’s Israel and the U.S. today or another government tomorrow, Sadowski says that some technologies amplify the exercise of power to such an extent that even their use by a country with a spotless human rights record would provide little reassurance. “Give them these technologies, and see if they don’t get tempted to use them in really evil and awful ways,” he said. “These are not technologies that are just neutral intelligence systems, these are technologies that are ultimately about surveillance, analysis, and control.”

    The post Documents Reveal Advanced AI Tools Google Is Selling to Israel appeared first on The Intercept.

  • Ring, Amazon’s perennially controversial and police-friendly surveillance subsidiary, has long defended its cozy relationship with law enforcement by pointing out that cops can only get access to a camera owner’s recordings with their express permission or a court order. But in response to recent questions from Sen. Ed Markey, D-Mass., the company stated that it has provided police with user footage 11 times this year alone without either.

    Last month, Markey wrote to Amazon asking it to both clarify Ring’s ever-expanding relationship with American police, who’ve increasingly come to rely on the company’s growing residential surveillance dragnet, and to commit to a raft of policy reforms. In a response from Brian Huseman, Amazon vice president of public policy, the company declined to permanently agree to any of them, including “Never accept financial contributions from policing agencies,” “Never allow immigration enforcement agencies to request Ring recordings,” and “Never participate in police sting operations.”

    Although Ring publicizes its policy of handing over camera footage only if the owner agrees — or if a judge signs a search warrant — the company says it also reserves the right to supply police with footage in “emergencies,” defined broadly as “cases involving imminent danger of death or serious physical injury to any person.” Markey had also asked Amazon to clarify what exactly constitutes such an “emergency situation,” and how many times audiovisual surveillance data has been provided under such circumstances. Amazon declined to elaborate on how it defines these emergencies beyond “imminent danger of death or serious physical injury,” stating only that “Ring makes a good-faith determination whether the request meets the well-known standard.” Huseman noted that Ring has complied with 11 emergency requests this year alone but did not provide details as to what the cases or Ring’s “good-faith determination” entailed.

    Matthew Guariglia, a policy analyst with the Electronic Frontier Foundation, told The Intercept he encourages any Ring owners concerned about warrantless access of their cameras to enable end-to-end encryption — an option the company declined to make the default setting after being urged to do so by Markey. “I am disturbed that Ring continues to offer, in any situation, warrantless footage from user’s devices despite the fact that once again, police are not the customers for Ring; the people who buy the devices are the customers,” said Guariglia.
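
    The protection Guariglia describes works because, with end-to-end encryption enabled, the decryption key lives only on the owner’s hardware, leaving the company nothing intelligible to hand over. A toy sketch of the principle using Python’s cryptography package — an illustration of the concept, not Ring’s actual design:

        # Toy illustration of end-to-end encryption: the key never leaves the device,
        # so the cloud holds only ciphertext it cannot decrypt for anyone.
        # Uses the `cryptography` package; not Ring's actual implementation.
        from cryptography.fernet import Fernet

        device_key = Fernet.generate_key()  # generated and kept on the owner's hardware
        camera = Fernet(device_key)

        footage = b"doorbell clip, 2022-07-01 18:32"
        uploaded = camera.encrypt(footage)  # all the server ever stores

        # Without device_key, `uploaded` is opaque; only the key holder can recover it.
        print(camera.decrypt(uploaded) == footage)  # True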

    “Police are not the customers for Ring; the people who buy the devices are the customers.”

    Guariglia added that even though the “emergency” exception hypothetically might be warranted in the most dire circumstances, there will always be the risks of “mission creep” and police abuse without any meaningful oversight. “If there is the infrastructure, if there is the channel by which police can request footage without a warrant or consent of the user, under what circumstances they get it is out of our control. I worry that because it’s decided by the police and by somebody at Ring, there will be temptation to use that for increasingly less urgent situations.”

    In a statement to The Intercept, Markey said that he believed Amazon and Ring have both lost the benefit of the doubt, despite their purported good-faith efforts. “I’m deeply concerned to learn that the company has repeatedly disclosed users’ recordings to law enforcement without requiring the users’ permission,” the senator added. “This revelation is particularly troubling given that the company has previously admitted to having no policies that restrict how law enforcement can use Ring users’ footage, no data security requirements for law enforcement entities that have users’ footage, and no policies that prohibit law enforcement officers from keeping Ring users’ footage forever.”

    The post Amazon Admits Giving Ring Camera Footage to Police Without a Warrant or Consent appeared first on The Intercept.

    Coinbase, the largest cryptocurrency exchange in the United States, is selling Immigration and Customs Enforcement a suite of features used to track and identify cryptocurrency users, according to contract documents shared with The Intercept.

    News of the deal, potentially worth as much as $1.37 million, was first reported last September, but details of exactly what capabilities would be offered to ICE’s controversial Homeland Security Investigations division were unclear. A new contract document obtained by Jack Poulson, director of the watchdog group Tech Inquiry, and shared with The Intercept, now shows ICE has access to a variety of forensic features provided through Coinbase Tracer, the company’s intelligence-gathering tool, formerly known as Coinbase Analytics.

    Coinbase Tracer allows clients, in both government and the private sector, to trace transactions through the blockchain, a distributed ledger of transactions integral to cryptocurrency use. While blockchain ledgers are typically public, the enormous volume of data stored therein can make following the money from spender to recipient beyond difficult, if not impossible, without the aid of software tools. Coinbase markets Tracer for use in both corporate compliance and law enforcement investigations, touting its ability to “investigate illicit activities including money laundering and terrorist financing” and “connect [cryptocurrency] addresses to real world entities.”

    According to the document, released via a Freedom of Information Act request, ICE is now able to track transactions made through nearly a dozen different digital currencies, including Bitcoin, Ether, and Tether. Analytic features include “Multi-hop link Analysis for incoming and outgoing funds,” granting ICE insight into transfers of these currencies, as well as “Transaction demixing and shielded transaction analysis” aimed at thwarting methods some crypto users take to launder their funds or camouflage their transactions. The contract also provides, provocatively, “Historical geo tracking data,” though it’s unclear what exactly this data consists of or from where it’s sourced. An email released through the FOIA request shows that Coinbase didn’t require ICE to agree to an End User License Agreement, standard legalese that imposes limits on what a customer can do with software.
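
    Coinbase has not said how Tracer implements these features, but “multi-hop link analysis” is, at bottom, graph traversal over a transaction ledger. A toy sketch of the idea with invented addresses and amounts — real chain analysis parses actual blockchain data at vastly larger scale:

        # Toy multi-hop trace of outgoing funds over a made-up ledger.
        # Real tools parse actual blockchain data; this only shows the traversal idea.
        from collections import deque

        ledger = [  # (sender, recipient, amount)
            ("addr_A", "addr_B", 5.0),
            ("addr_B", "addr_C", 4.9),
            ("addr_B", "addr_D", 0.1),
            ("addr_C", "addr_E", 4.8),
        ]

        def trace_outgoing(start: str, max_hops: int = 3) -> None:
            """Breadth-first walk of where funds flow from a starting address."""
            queue, seen = deque([(start, 0)]), {start}
            while queue:
                addr, hops = queue.popleft()
                if hops == max_hops:
                    continue
                for sender, recipient, amount in ledger:
                    if sender == addr and recipient not in seen:
                        print(f"hop {hops + 1}: {sender} -> {recipient} ({amount})")
                        seen.add(recipient)
                        queue.append((recipient, hops + 1))

        trace_outgoing("addr_A")  # follows A -> B, then B -> C and B -> D, then C -> E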

    Coinbase did not provide an on-the-record comment.

    Coinbase has in recent years made a concerted effort to pitch its intelligence features to government agencies, including the IRS, Secret Service, and Drug Enforcement Administration. Earlier this month, Coinbase CEO Brian Armstrong testified before a congressional panel that his company was eager to aid the cause of Homeland Security. “If you are a cyber criminal and you’re using crypto, you’re going to have a bad day. … We are going to track you down and we’re going to find that finance and we are going to hopefully help the government seize that crypto.” Coinbase’s government work has proved highly controversial to many crypto fans, owing perhaps both to the long-running libertarian streak in that community and the fact that these currencies are so frequently used to facilitate various forms of fraud.

    The Coinbase Tracer tool itself was birthed in controversy. In 2019, Motherboard reported that Neutrino, a blockchain-analysis firm the company acquired in order to create Coinbase Tracer, “was founded by three former employees of Hacking Team, a controversial Italian surveillance vendor that was caught several times selling spyware to governments with dubious human rights records, such as Ethiopia, Saudi Arabia, and Sudan.” Following public outcry, Coinbase announced these staffers would “transition out” of the company.

    Homeland Security Investigations, the division of ICE that purchased the Coinbase tool, is tasked not only with immigration-related matters, aiding migrant raids and deportation operations, but also with broader transnational crimes, including various forms of financial offenses. It’s unclear to what end ICE will be using Coinbase Tracer. The agency could not be immediately reached for comment.

    The post Cryptocurrency Titan Coinbase Providing “Geo Tracking Data” to ICE appeared first on The Intercept.

  • The day after the Supreme Court overturned Roe v. Wade, Facebook’s parent company, Meta, internally designated the abortion rights group Jane’s Revenge as a terrorist organization, according to company materials reviewed by The Intercept, subjecting discussion of the group and its actions to the company’s most stringent censorship policies. Experts say the decision, Meta’s first known policy response for the post-Roe era, threatens free expression around abortion rights at a critical moment.

    The brief internal bulletin from Meta Platforms Inc., which owns Instagram and Facebook, was titled “[EMERGENCY Micro Policy Update] [Terrorism] Jane’s Revenge” and filed to the company’s internal Dangerous Individuals and Organizations rulebook, meaning that the abortion rights group, which has so far committed only acts of vandalism, will be treated with the same speech restrictions against “praise, support, and representation” applied to the Islamic State and Hitler. The memo, circulated to Meta moderators on June 25, describes Jane’s Revenge as “a far-left extremist group that has claimed responsibility on its website for an attack against an anti-abortion group’s office in Madison, Wisconsin in May 2022. The group is responsible for multiple arson and vandalism attacks on pro-life institutions.” Terrorist groups receive Meta’s strictest “Tier 1” speech limits, treatment the company says is reserved for the world’s most dangerous and violent entities, along with hate groups, drug cartels, and mass murderers.

    Although The Intercept published a snapshot of the entire secret Dangerous Individuals and Organizations list last year, Meta does not disclose or explain additions to the public, despite the urging of scholars, activists, and its own Oversight Board. Speech advocates and civil society groups have criticized the policy for its secrecy, bias toward U.S. governmental priorities, and tendency to inaccurately delete nonviolent political speech. According to Meta’s most recent quarterly transparency report, the company restored nearly half a million posts between January and March in the terrorism category alone after determining that they had been censored erroneously.

    Discussion of Jane’s Revenge was already technically subject to Tier 1 censorship stemming from another previously unreported internal speech restriction enacted by Meta last month. In May, just days after Politico published a leaked Supreme Court decision auguring the reversal of Roe v. Wade, the office of Wisconsin Family Action, an anti-abortion group, was vandalized. The very next day, Meta silently banned its roughly 2 billion users from “praising, supporting, or representing” the vandalism or its perpetrators, according to company materials reviewed by The Intercept. While these event-based restrictions are often temporary, the more recent use of the formal “terror” label suggests a more permanent policy position.

    “This designation is difficult to square with Meta’s placement of the Oath Keepers and Three Percenters in Tier 3, which is subject to far fewer restrictions, despite their role organizing and participating in the January 6 Capitol attack,” said Mary Pat Dwyer, academic program director of Georgetown Law School’s Institute for Technology Law and Policy. “And while it’s possible Meta has moved those groups into Tier 1 more recently, that only highlights the lack of transparency into when and how these decisions, which have a huge impact on people’s abilities to discuss current events and important political issues, are made.”

    The Wisconsin incident, which consisted of a small fire and graffiti denouncing the group’s anti-abortion stance, resulted in only minor property damage to the empty office. But the vandalism was rapidly designated a “Violating Violent Event,” a kind of ad hoc speech restriction that Meta distributes to its content moderation staff to limit discussion across its platforms in response to breaking news and various international crises, typically prominent events like the January 6, 2021, riot at the Capitol, terrorism, public shootings, or ethnic bloodshed.

    “We are internally classifying this as a Violating Violent Event (General Designation),” reads the May 11 internal memo, obtained by The Intercept. “All content praising, supporting or representing the event and/or perpetrator(s) should be removed from the platform.” The dispatch instructed moderators to censor depiction and discussion of the vandalism under the Dangerous Individuals and Organizations policy framework, which restricts speech about violent actors like terror cells, neo-Nazis, and drug cartels. “The office of a conservative political organization that lobbies against abortion rights was vandalized and damaged by fire in Madison, Wisconsin,” the memo continued. “A group called Jane’s Revenge took credit for the attack.” The number of victims of the “Violating Violent Event” is marked as “0.”

    The Wisconsin Family Action designation is notable not only for the relatively low severity of the attack itself, which Madison police are investigating as an act of arson, but also because it marks a rare foray by Facebook into limiting speech around abortion. Striking as well is the company’s choice to censor abortion rights action, even destructive action, given that throughout the long history of the American abortion debate, the overwhelming majority of violence has been conducted by those seeking to thwart access to the procedure via bombings and assassinations, not expand it. Earlier this month, Axios reported that “assaults directed at abortion clinic staff and patients increased 128% last year over 2020,” according to a report from the National Abortion Federation. And yet of the more than 4,000 names on the company’s Dangerous Individuals and Organizations list, only two are associated with anti-abortion violence or terrorism: the Army of God Christian terrorist cell and one of its affiliates, the notorious bomber Eric Rudolph. While extremely little is known about Jane’s Revenge, including whether the vandalism is even being committed by the same actors and to what extent it is even a group, prominent right-wing politicians have begun demanding that the property damage be treated as domestic terrorism, a stance now essentially endorsed by Meta.

    But the company also appears to have avoided censoring discussion of more recent anti-abortion acts comparable to the Wisconsin fire. On New Year’s Eve, arsonists destroyed a Planned Parenthood clinic in Knoxville, Tennessee, that had been riddled with bullets earlier in the year on the anniversary of the Roe v. Wade ruling. According to multiple sources familiar with Facebook’s content moderation policies, who spoke on the condition of anonymity because they are not permitted to speak to the press, the New Year’s Eve Planned Parenthood torching was never similarly designated a “Violating Violent Event.” While anti-abortion advocates are still barred from inciting further violence against Planned Parenthood clinics (or anything else), Meta users now have far greater latitude to discuss — or even praise — that instance of anti-abortion violence than comparable acts from the other side.

    The frequently malfunctioning nature of Facebook’s global censorship rules also means that the Wisconsin-specific update and more recent terror label, even if intended only to thwart future real-world acts of violence from either side of the abortion debate, could end up stifling legitimate political speech. While the company’s general purpose “Community Standards” rulebook places a blanket prohibition against any explicit calls for violence, only explicitly flagged people, groups, and events are subject to Meta’s far more stringent bans against “praise, support, and representation,” restrictions that bar users from quoting, depicting, or speaking positively of the entity or action in question. But the ambiguous formulation and frequently uneven enforcement of these rules means that speech far short of crossing the red line of violent incitement is subject to deletion. The Dangerous Individuals and Organizations ban on “praise, support, and representation” has been frequently cited by Facebook when deleting posts documenting or protesting Israeli state violence against Palestinians, for example, instances of which have at times been designated “Violating Violent Events” as well.

    Jane’s Revenge is poorly understood, controversial, and subject to intense debate at precisely the time the Dangerous Individuals and Organizations designations mean that billions of people are limited in what they can say about the perpetrators, their motives, or their methods. Anything that could be construed as “praise,” however tentative, risks deletion. Indeed, even Facebook’s public description of the “praise, support, and representation” standard notes that any posts “Legitimizing the cause of a designated entity by making claims that their hateful, violent, or criminal conduct is legally, morally, or otherwise justified or acceptable” are prohibited.

    “There are legitimate concerns that this might shut down debate.”

    The company’s internal overview of the “praise” standard, obtained and published by The Intercept last year, directs moderators to delete anything that “engages in value-based statements and emotive argument” or “seeks to make others think more positively” of the sanctioned entity or event. While these internal rules permit “Academic debate and informative, educational discourse” of a violent entity or event, what meets the threshold for “academic debate” or “informative discourse” is left to Facebook’s thousands of overworked, low-paid hourly contractors to determine.

    Content moderation experts who spoke to The Intercept said the policy threatens discussion and debate of abortion rights protests at a time when such speech is of profound national importance. “What we’ve seen in the past is that when Facebook bans certain types of harmful speech, they often catch counterspeech and other types of commentary in their content moderation net,” said Jillian York, director for international freedom of expression at the Electronic Frontier Foundation. “For example, efforts to ban terrorist content often result in the removal of counterspeech against terrorism or the sharing of documentation. The use of automated technologies only exacerbates this; therefore, it isn’t difficult to imagine that an attempt to ban vandalism against an anti-abortion group could also ban legitimate speech against such a group.”

    Evelyn Douek, a Harvard Law School lecturer and fellow with the Knight First Amendment Institute, described the ad hoc censorship of “violating events” via the Dangerous Individuals and Organizations framework as “extremely capacious” and “one of Facebook’s most controversial and problematic policies,” both because these designations are made in secret and because they are so likely to constitute subjective political determinations. While Meta moderators are provided with an extensive rulebook containing this designation and countless others, the combination of the company’s increasing reliance on automated algorithmic content screening and the personal judgment calls of low-paid, overworked contractors creates erratic, faulty results. “There are legitimate concerns that this might shut down debate,” said Douek.

    “Ukrainians get to say violent shit, Palestinians don’t. White supremacists do, pro-choice people don’t.”

    Douek said the opacity of the censorship policy, paired with Facebook’s “incredibly blunt and error-prone” enforcement of speech restrictions, poses a threat to political discussion and debate around both abortion per se and the broader reproductive rights movement. Even those who don’t condone the methods of Jane’s Revenge have an interest in talking about them and perhaps even entertaining them: There is a vast universe of discourse about political direct action and violence, even vandalism, that isn’t in and of itself incitement, a swath of speech that could be vacuumed up by Facebook’s bludgeon approach to speech rules. “[Saying] you support the goals, the underlying policy of what Jane’s Revenge is fighting for, even if you disagree with their tactics, there’s all sorts of conversation here that we have about lots of different groups in society on the margins that I’m worried about losing.”

    Significant as well is the fact that free expression around relatively minor acts of violence would not only be censored in the first place but also subjected to the same limits Facebook uses for Al Qaeda and the Third Reich. “It’s somewhat remarkable that this act of vandalism was so quickly added to the list. It really is intended to be reserved for the most serious kind of incidents” like hate crimes, gun massacres, and terrorist attacks, Douek explained, “a policy that’s really targeted at the worst of the worst.” The decision to censor free discussion of Jane’s Revenge, responsible for a failed firebombing and a series of threatening graffiti incidents, makes the fact that Facebook did not similarly limit discussion of the Tennessee Planned Parenthood arson even more puzzling. Douek and York place that decision in a long history of Facebook putting its finger on the scales of political discourse in a way that often appears ideologically motivated, or on other occasions completely arbitrary. “It’s precisely the issue raised by their constant picking and choosing of ‘winners,’” York told The Intercept. “Ukrainians get to say violent shit, Palestinians don’t. White supremacists do, pro-choice people don’t.”

    In an email to The Intercept, Meta spokesperson Devon Kearns confirmed the terror designation of Jane’s Revenge and said that the company “will remove content that praises, supports, or represents the organization.” Kearns stated that the company has a multifaceted process for determining which people and groups are restricted under the Dangerous Organizations policy, but did not say what it was or why Jane’s Revenge had been flagged but not other actors committing violence to advance their stance on abortion. Kearns further noted that users may appeal a deletion made under the Dangerous Organizations policy if they believe it was made in error.

    Assessing the merits of a decision made and implemented in secret is exceedingly difficult. Although Meta provides a generalized, big-picture overview of what sort of speech is barred from its platforms with a handful of uncontroversial examples (e.g., “If you want to fight for the Caliphate, DM me”), the specifics of the rules are concealed from the billions of people expected to heed them, as is any rationale as to why the rules were drafted in the first place. York told The Intercept that the Jane’s Revenge move is another indication that Meta needs to “immediately institute the Santa Clara Principles,” a content moderation policy charter that mandates “clear and precise rules and policies relating to when action will be taken with respect to users’ content or accounts,” among many other items.

    Without the entirety of the company’s rules and their justification provided to the public, Meta, which exercises an enormous degree of control over what speech is allowed on the internet, leaves billions posting in the dark. Meta’s claim has always been that it takes no sides on any issue and only deletes speech in the name of safety, a claim the public generally has to take as an article of faith given the company’s deep secrecy in both what the rules are and how they’re enforced. “For a platform that is consistently insisting that it’s neutral and doesn’t have its finger on the scale, it’s really incumbent on Meta to be much more forthcoming,” Douek added. “These are highly charged political decisions, and they need to be able to defend them.”

    The post Facebook Labels Abortion Rights Vandals as Terrorists Following Roe Reversal appeared first on The Intercept.

  • The Pentagon envisions a future in which Elon Musk’s rockets might someday deploy a “quick reaction force” to thwart a future Benghazi-style attack, according to documents obtained by The Intercept via Freedom of Information Act request.

    In October 2020, U.S. Transportation Command, or USTRANSCOM, the Pentagon office tasked with shuttling cargo to keep the American global military presence humming, announced that it was partnering with Musk’s SpaceX rocketry company to determine the feasibility of quickly blasting supplies into space and back to Earth rather than flying them through the air. The goal, according to a presentation by Army Gen. Stephen Lyons, would be to fly a “C-17 [cargo plane] equivalent anywhere on the globe in less than 60 minutes,” an incredible leap forward in military logistics previously confined to science fiction. A USTRANSCOM press release exclaimed that one day SpaceX’s massive Starship rocket could “quickly move critical logistics during time-sensitive contingencies” and “deliver humanitarian assistance.” While the Pentagon alluded to potentially shuttling unspecified “personnel” through these brief space jaunts, the emphasis of the announcement was squarely on moving freight.

    But USTRANSCOM has more imaginative uses in mind, according to internal documents obtained via FOIA. In a 2021 “Midterm Report” drafted as part of its partnership with SpaceX, USTRANSCOM outlined both potential uses and pitfalls for a fleet of militarized Starships. Although SpaceX is already functionally a defense contractor, launching American military satellites and bolstering Ukrainian communication links, the report provides three examples of potential future “DOD use cases for point to point space transportation.” The first, perhaps a nod to American anxieties about Chinese hegemony, notes that “space transportation provides an alternative method for logistics delivery” in the Pacific. The second imagines SpaceX rockets delivering an Air Force deployable air base system, “a collection of shelters, vehicles, construction equipment and other gear that can be prepositioned around the globe and moved to any place the USAF needs to stand-up air operations.”

    A partially redacted illustration of a SpaceX Starship vessel.

    Credit: U.S. Transportation Command


    But the third imagined use case, titled only “Embassy Support,” is more provocative and less prosaic than the first two: a scenario in which a “rapid theater direct delivery capability from the U.S. to an African bare base would prove extremely important in supporting the Department of State’s mission in Africa,” potentially including the use of a “quick reaction force,” a military term for a rapidly deployed armed unit, typically used in crisis conditions. The ability to merely “demonstrate” this use of a SpaceX Starship, the document notes, “could deter non-state actors from aggressive acts toward the United States.” Though the scenario is devoid of details, the notion of an African embassy under sudden attack from a “non-state actor” is reminiscent of the infamous 2012 Benghazi incident, when armed militants attacked an American diplomatic compound in Libya, spurring a quick reaction force later criticized as having arrived too late to help.

    As much as American generals may dream of rocket-borne commandos fighting off North African insurgents, experts say this scenario is still squarely the stuff of sci-fi stories. Both Musk and the Pentagon have a long history of making stratospherically grand claims that dazzling and entirely implausible technologies, from safe self-driving cars and the hyperloop to rail guns and missile-swatting lasers, are just around the corner. As noted in another USTRANSCOM document obtained via FOIA request, all four Starship high-altitude tests had resulted in the craft dramatically exploding, though a May 2021 test conducted after the document’s creation landed safely.

    “What are they going to do, stop the next Benghazi by sending people into space?” said William Hartung, a senior research fellow at the Quincy Institute who focuses on the U.S. arms industry and defense budget. “It doesn’t seem to make a lot of sense.” Hartung questioned the extent to which a rocket-based quick reaction force would be meaningful even if it were possible. “If a mob’s attacking an embassy and they dial up their handy SpaceX spaceship, it’s still going to take a while to get there. … It’s almost like someone thinks it would be really neat to do stuff through space but haven’t thought through the practical ramifications.” Hartung also pointed to the Pentagon’s track record of space-based “fantasy weapons” like “Star Wars” missile defense, elaborate projects that soak up massive budgets but amount to nothing.

    SpaceX did not respond to a request for comment. In an email to The Intercept, USTRANSCOM spokesperson John Ross wrote that “interest in PTP deployment is explorative in nature and our quest for understanding what may be feasible is why we’ve entered into cooperative research and development agreements like the one you reference,” adding that “the speed of space transportation promises the potential to offer more options and greater decision space for leaders, and dilemmas for adversaries.” Asked when USTRANSCOM believes a rocket-deployed quick reaction force might actually be feasible, Ross said the command is “excited for the future and believe it’s possible within the next 5-10 years.”

    “My two cents are that it’s unlikely that they would be able to evacuate anyone quickly via rocket,” said Kaitlyn Johnson, deputy director of the Center for Strategic and International Studies’ Aerospace Security Project. Johnson pointed out that even if the underlying technology were sound, the small question of where to land an enormous 165-foot Starship rocket, the world’s largest, remains. “If it’s in a city, it’s not like they can land [a] Starship next to the embassy.” In the hypothetical embassy rescue mission, “you still have logistics issues there about getting forces onto the launch vehicle and then again on where you could land the vehicle and how to get the forces from the landing site to the base/embassy,” Johnson added, “which has not been tested or proven and in my opinion is a bit sci-fi.”

    “What are they going to do, stop the next Benghazi by sending people into space? It doesn’t seem to make a lot of sense.”

    The document also nods at another potential hitch: Are other countries going to let SpaceX military rockets drop out of space and onto their turf? The vision of American “Starship Troopers” is not a new one: As far back as 2006, according to one Popular Science report, the Pentagon has dreamed of an age in which “Marines could touch down anywhere on the globe in less than two hours, without needing to negotiate passage through foreign airspace.” But the USTRANSCOM paper admits that Cold War-era treaties governing the use of space provide little guidance as to whether an American rocket could bypass sovereign airspace concerns by cruising through outer space. “It remains unclear whether and how vehicles are subject to established aviation laws and to what extent, if any, these laws follow them into space for PTP space transportation,” reads one section. “Moreover, the lack of a legal definition of the boundary between air and space creates an issue of where the application of aviation law ends and space law begins.” The document does hint that part of SpaceX’s promise could be to leap over these concerns. Following a redacted discussion of a hypothetical military Starship’s legal status while in flight, USTRANSCOM noted: “This recovery places the Starship outside of altitudes typically characterized as controlled airspace.”

    Brian Weeden, director of program planning for the Secure World Foundation, a space governance think tank, told The Intercept that territorial concerns are just one of many, “along with whether or not the countries the rocket/spaceship pass over regard it as a weapon or ballistic missile threat or not.” Hartung argued that SpaceX, despite its “Mr. Clean” image as a peaceful enabler for cosmic exploration, is contributing to the global militarization of space. And as with drones, once an advanced and exclusively American technology begins proliferating, the U.S. will have to face its implications from the other side. “The question is, what would keep other countries from doing the same thing, and how would the U.S. feel about that?” asked Hartung. “This notion that going anywhere without having to get any approval from anybody has appeal from a military point of view, but would the U.S. want other countries to have that same capability? Probably not.”

    The post Pentagon Explores Using SpaceX for Rocket-Deployed Quick Reaction Force appeared first on The Intercept.

  • Despite years of criticism, Amazon’s Ring cameras are increasingly ubiquitous in American neighborhoods, an always-watching symbol of residential suspicion, and the company’s privatized surveillance dragnet remains wildly popular with police. In a new letter to Amazon CEO Andrew Jassy, Sen. Edward Markey is calling on the company to implement pro-privacy reforms and limit its collaboration with police.

    Ring’s nationwide network of house-mounted cameras provides police with millions of potential audiovisual feeds from which they can request data with an easy series of clicks, and the company has gone to great lengths to foster this symbiotic relationship between camera owner and law enforcement, formally partnering with hundreds of departments, running promotional giveaways, and offering cops special product discounts. Although Ring has adopted some limited reforms in response to prolonged scrutiny — for instance, ceasing direct donations of cash and cameras to police — the company’s 10 million customers provide a steady current of data that police can request, sans warrant or meaningful oversight, directly from the user.

    Although it helps police the general public, Ring’s inner workings are about as opaque as any other private firm, and much remains unknown about the company’s ongoing relationship with police or plans to bolster their powers in the future. In the letter, a copy of which was shared with The Intercept, Markey asks Amazon to disclose some of the many open questions about its surveillance subsidiary and to commit to a series of further reforms. The letter is only the latest correspondence between Markey and Ring, part of a multiyear effort to pry information out of the generally secretive company. “While I acknowledge and appreciate steps Ring has taken in response to my previous letters to your company,” the letter reads, “I remain troubled by your company’s invasive data collection and problematic engagement with police departments.”

    “[T]he public’s right to assemble, move, and converse without being tracked is at risk.”

    The letter emphasizes concern over the fact that Ring cameras continuously record not only video but also audio: “As Ring products capture significant amounts of audio on private and public property adjacent to dwellings with Ring doorbells — including recordings of conversations that people reasonably expect to be private — the public’s right to assemble, move, and converse without being tracked is at risk.” In the letter, Markey asks Ring to disclose the precise distance at which its devices are capable of recording audio, “commit to eliminating Ring doorbells’ default setting of automatically recording audio when video is recorded,” and “commit to never incorporating voice recognition technology into its products.” Markey also asked Ring for pledges to “never accept financial contributions from policing agencies,” “never allow immigration enforcement agencies to request Ring recordings,” and “never participate in police sting operations.”

    In addition, Markey wants Ring to clarify some of its vague, legalese policy language. For instance, the company claims it will always require a court order to disclose “customer information” without that customer’s permission first, unless there is an “exigent or emergency” situation, a murky term the company leaves undefined and that could thus mean potentially anything at all. Markey is now pushing for a definition of “exigent or emergency,” along with a disclosure of how many times Ring has granted access to data under such circumstances.

    While Markey’s proposals, if implemented, would place some limits on the ability of police to monitor the public through entirely private means, a larger issue will remain: There are millions of these cameras already in place, capturing round-the-clock footage of people who’ve committed no offense other than walking down the street and beaming it directly to a company that has zero accountability to the public. Whatever voluntary measures Ring may choose to adopt following pressure from Markey or other surveillance critics, short of legislative guardrails, they will remain voluntary.

    Still, Markey is optimistic about his missive campaign and expressed a broader concern over these problems inherent to the age of the ubiquitous doorbell camera: “I’m pleased that my efforts to hold Ring accountable and demand answers for its invasive practices have brought about real change in how Amazon does business, because the stakes are high,” he wrote in a statement to The Intercept. “As surveillance technologies proliferate, our ability to move, congregate and converse in public without being tracked is at risk of slipping away. The threats are particularly high for Black and Brown communities who have long been subjected to over-policing and higher levels of surveillance. Ring has taken steps in the right direction, but I remain deeply concerned about the ways in which privacy invasions have become the new normal in our country.”

    The post Sen. Ed Markey Calls On Ring to Make Itself Less Cop-Friendly appeared first on The Intercept.

  • Immigration and Customs Enforcement searched a massive database of personal information provided by LexisNexis over 1.2 million times in just a seven-month period in 2021, according to documents reviewed by The Intercept. Critics say the staggering search volume confirms fears that the data broker is enabling the mass surveillance and deportation of immigrants.

    The Intercept first reported last year that ICE had purchased access to LexisNexis Risk Solutions databases for $16.8 million, unlocking an oceanic volume of personal information on American citizens and noncitizens alike that spans hundreds of millions of individuals, totaling billions of records drawn from 10,000 different sources. Becoming a LexisNexis customer not only provides law enforcement with instant, easy access to a wealth of easily searchable data points on hundreds of millions of people, but also lets them essentially purchase data rather than having to formally request it or seek a court order.

    Internal documents now show that this unfathomably large quantity of data is being searched with a regularity that is itself vast. LexisNexis usage logs between March and September 2021 totaled 1,211,643 searches and 302,431 “reports,” information packages that provide an exhaustive rundown of an individual’s location, work history, family relationships, and many other data points aggregated by LexisNexis, a data broker better known for its legal research resources. Although the names were redacted, the logs show that a single user conducted over 26,000 searches in that period.
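
    For scale, a quick back-of-envelope calculation against the roughly seven-month window (about 214 days from March through September) shows what those totals mean as a sustained rate; the inputs below are simply the log numbers reported above:

    ```python
    # Rough daily rates implied by the FOIA'd LexisNexis usage logs
    # (March-September 2021, approximately 214 days).
    DAYS = 214

    print(f"{1_211_643 / DAYS:,.0f} searches/day agency-wide")           # ~5,662
    print(f"{26_000 / DAYS:,.0f} searches/day by the single busiest user")  # ~121
    ```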

    Most of the queries were conducted through Accurint, a powerful LexisNexis tool that promises “cutting-edge analytics and data linking,” and touts its ability to provide a firehose of “investigative intelligence” to police on a national scale. “Criminals have no boundaries,” reads the Accurint homepage. “So neither can you when it comes to critical investigative intelligence and crime reporting. That’s why Accurint Virtual Crime Center gives you visibility beyond your own jurisdictions into regional and nationwide crime data.”

    The new documents, obtained by a Freedom of Information Act request by the immigrant advocacy organization Just Futures Law and shared with The Intercept, cast doubt on earlier assurances from LexisNexis that its sprawling database would be used only narrowly to target people with “serious criminal backgrounds.” In 2021, following widespread criticism of the ICE contract, LexisNexis published a brief FAQ attempting to downplay the gravity of the collaboration and dispel concerns their databases would facilitate dragnet surveillance and deportations. “The tool promotes public safety and is not used to prevent legal immigration,” reads the document, “nor is it used to remove individuals from the United States unless they pose a serious threat to public safety including child trafficking, drug smuggling and other serious criminal activity.”

    However, the logs show that a sizable share of the usage — over 260,000 searches and reports — was conducted by ICE’s Enforcement and Removal Operations, a branch explicitly tasked with finding and deporting immigrants, often for minor infractions or no offense at all. An internal ERO memo obtained through the FOIA request and also shared with The Intercept contradicts the idea that Accurint’s use was to be narrowly focused on only the most dangerous criminal elements. In an email sent June 30, 2021, ERO’s assistant director of enforcement wrote, “Please note this additional valuable resource should be widely utilized by ERO personnel as an integral part of our mission to protect the homeland through the identification, location, arrest, and removal of noncitizens who undermine the safety of our communities and the integrity of our immigration laws.” The email noted that LexisNexis would be directly providing ICE with educational seminars on how to most effectively use the data.

    ICE’s long-documented history of rounding up immigrants with no criminal history or those with only nonviolent offenses like traffic violations further undermines the notion that these hundreds of thousands of ERO searches all pertained to hardened, dangerous criminals. A breakdown of LexisNexis usage by individual ICE office shows that the single highest generator of searches and reports, with a total of 56,467, was ERO’s National Criminal Analysis and Targeting Center, a division tasked with locating immigrants who are merely deemed “amenable to removal,” according to agency documents. Though ICE and LexisNexis are both keen to couch these investigations as a bulwark against dangerous transnational terrorists and criminal syndicates, an analysis of 2019 ICE arrest data conducted by Syracuse University found that “exceedingly few detainees” had committed grave national security-related offenses like terrorism or election fraud, and that “the growth in detention by Immigration and Customs Enforcement (ICE) over the past four years has been fueled by a steady increase in the number of detainees with no criminal history.”

    “LexisNexis is using the same type of scare tactics ICE tries to use to justify the brutality of their deportation mission,” said Just Futures Law attorney Dinesh McCoy, who worked on the FOIA request. “There’s no indication, based on the huge number of searches conducted, nor from the records that we’ve seen, that ICE uses LexisNexis technology in a confined way to target ‘serious’ criminal activity. We know thousands of ICE employees are empowered to use this tech, and we know that officers are given significant discretion to make investigative and strategy decisions about how they locate immigrants.”

    Though ICE arrests have fallen under the Biden administration, experts say they remain concerned by Accurint’s use and potential to snare people who’ve committed no crime beyond fleeing their home country. “We’re seeing a continuation of harmful surveillance practices under this administration,” McCoy wrote via email. “We need real opposition to the constant expansion of ICE’s power and infrastructure, but by providing the agency with invasive tools like Accurint, the Biden Administration is just strengthening ICE’s institutional position for the future.” The marked decline in post-Trump ICE arrests presents an odd context for the massive scale of ICE’s searches: During roughly the same period tracked by the LexisNexis logs, ERO made 72,000 arrests after conducting over three times as many individual searches.

    A Homeland Security Investigations special agent prepares to arrest alleged immigration violators at Fresh Mark meat processing plant in Salem, Ohio, on June 19, 2018.

    Photo: Smith Collection/Gado/Getty Images


    Over 630,000 of the searches and reports were run by Homeland Security Investigations, an ICE division tasked with investigating an expansive array of “threats,” from human trafficking and terrorism to “scams” and identity theft. But along with HSI’s broad mandate have come allegations of dragnet spy tactics and indiscriminate policing. In a letter to Homeland Security Inspector General Joseph Cuffari in March, Sen. Ron Wyden wrote that he’d learned HSI had “abused” its federal authority and “was operating an indiscriminate and bulk surveillance program that swept up millions of financial records about Americans.”

    HSI also regularly assists with ERO’s mass detention and deportation operations, including a 2018 workplace raid in Tennessee that separated children from their parents and left hundreds of students too afraid to return to school. In a March article for Just Security, Mary Pat Dwyer of the Brennan Center wrote that HSI “often uses its transnational crime mission to investigate immigrants of color who are not suspected of criminal activity,” including investigations into naturalized citizens “looking for inconsistencies in their documents or old deportation orders as grounds to strip them of citizenship.”

    The logs provide a greater understanding of how ICE is making use of LexisNexis Risk Solutions, which provides a varied suite of search tools to navigate the company’s voluminous store of government records and commercial data. The overwhelming majority of the searches, over 700,000 in total, were conducted using Accurint’s “Advanced Person Search,” which lets users enter fragments of data they might have on an individual — a relative, a former job, a nickname — and match them against a pool of millions of identities. More than 200,000 other searches were run against Accurint’s database of phone provider records, along with more than 63,000 vehicle record searches and nearly 10,000 queries hunting for social media profiles. The logs also show nearly 6,000 searches of Accurint’s index of jail booking activity, an ICE tactic recently reported by The Guardian as an explicit means of skirting “sanctuary city” laws that block local police from sharing such information with immigration agencies. With LexisNexis access, it is now trivial for an agency to essentially buy its way around such restrictions.
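
    The “Advanced Person Search” workflow described above is, in essence, partial-record matching: whatever fragments an agent holds are tested against the fields of every identity profile in the pool. The toy sketch below uses an invented record format and a deliberately naive substring match; Accurint’s actual schema and matching logic are proprietary and doubtless far more sophisticated:

    ```python
    # Toy identity pool; the real one spans hundreds of millions of people.
    identities = [
        {"name": "J. Doe", "employer": "Acme Trucking",
         "nickname": "Jay", "relatives": ["M. Doe"], "city": "Salem"},
    ]

    def advanced_person_search(pool, **fragments):
        """Return profiles whose fields contain every supplied fragment."""
        def field_text(profile, key):
            value = profile.get(key, "")
            return " ".join(value) if isinstance(value, list) else str(value)
        return [p for p in pool
                if all(frag.lower() in field_text(p, key).lower()
                       for key, frag in fragments.items())]

    # A nickname and a scrap of work history are enough to pull a profile.
    hits = advanced_person_search(identities, employer="acme", nickname="jay")
    ```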

    Neither ICE nor LexisNexis responded to a request for comment.

    Used in concert, these searches grant ICE, an agency already charged with brutal and indiscriminate tactics against some of American society’s most vulnerable members, an immense technological advantage when seeking its targets. It’s difficult to overstate the enormity of LexisNexis’s databases, and equally difficult to imagine avoiding being absorbed into them. Financial records, property records, past jobs, former marriages, phone subscriptions, cable TV bills, car registrations — critics of LexisNexis’s government work note that it’s near impossible to exist today without leaving behind traces that are quickly vacuumed up into the company’s colossal trove of public and proprietary data points, continuously indexed and rendered instantly searchable. Put in the hands of agencies like ICE, the mountains of digital paperwork a person accumulates in an ordinary civic and consumer life can easily be turned against them. A 2021 report by the Washington Post found that ICE had previously tapped a database of utility records while searching for immigration offenses, leaving those who fear detention or deportation with a grim choice between the basic amenities of modern life and the perpetual risk of data broker-enabled arrest.

    Despite the vastness of LexisNexis’s data and the advertised sophistication of the tools it provides law enforcement to comb through that data, the company itself quietly warns that it may be providing inaccurate information, the consequences of which could upend a life or an entire family. “Due to the nature of the origin of public record information, the public records and commercially available data sources used in reports may contain errors. Source data is sometimes reported or entered inaccurately, processed poorly or incorrectly, and is generally not free from defect,” reads a small warning at the bottom of a marketing page for Accurint Virtual Crime Center, a tool used heavily by ICE according to the search logs. “Before relying on any data, it should be independently verified.”

    It’s not just those individuals in ICE’s crosshairs who need to fear being implicated by a LexisNexis search. Accurint promotional and training materials make frequent mention of the software’s ability to not only locate people via the records they leave in their wake, but also trace real-life social networks, potentially drawing friends, family, neighbors, and co-workers under federal scrutiny. An Accurint training manual marked “Confidential and Proprietary” but publicly accessible via a LexisNexis website shows how users can obtain “information about a subject’s relatives, neighbors, and associates,” including “possible relative phones” two degrees removed from the target. The fact that a single search can yield results about multiple people or entire families suggests that the number of people subjected to LexisNexis-based surveillance could be far more than the already large 1.2 million figure might indicate. “We’re seeing that ICE is running a system of mass surveillance,” said Cinthya Rodriguez, a national organizer with the immigrant rights group Mijente. “What we’re really talking about is possibly upwards of three times that 1.2 million, perhaps upwards of 3 million searches happening in that period of time. … What we’re saying is, perhaps 1 percent of the U.S. population was under ICE surveillance.”
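
    The training manual’s “two degrees removed” phrasing describes a standard two-hop expansion over an association graph: start at the target, add known relatives and associates, then add their contacts in turn. A minimal sketch over a hypothetical adjacency map:

    ```python
    # Hypothetical association graph: person -> known relatives/associates.
    graph = {
        "target": ["relative_a", "neighbor_b"],
        "relative_a": ["relative_c"],
        "neighbor_b": ["coworker_d"],
    }

    def within_two_hops(graph, start):
        """Everyone reachable from `start` in at most two hops."""
        frontier, seen = {start}, {start}
        for _ in range(2):  # "two degrees removed"
            frontier = {contact for person in frontier
                        for contact in graph.get(person, [])} - seen
            seen |= frontier
        return seen - {start}

    print(within_two_hops(graph, "target"))
    # -> {'relative_a', 'neighbor_b', 'relative_c', 'coworker_d'}
    ```

    A single query fanning out this way is the mechanism behind Mijente’s estimate that the number of people actually swept up could be several times the raw count of 1.2 million searches.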

    That something as mundane as having heat or running water at home could draw the attention of the federal deportation machine, and that this machine could then turn its sights on one’s personal support network, has left immigrant communities in a state of perpetual fear, said Rodriguez. “Immigrants are forced to face impossible choices about what kind of services they need and the information they have to turn over in order to access those services, everything ranging from their electric bills to buying a car, to information about their children’s schools,” Rodriguez explained in an interview. “Everything we touch can have serious implications for building out ICE’s digital dragnet.”

    The post ICE Searched LexisNexis Database Over a Million Times in Just 7 Months appeared first on The Intercept.

  • Despite previously claiming that he was divesting from the tear gas business after a heated activist campaign directed at the Whitney Museum of American Art, former Whitney board member Warren Kanders appears to have simply rearranged his holdings. Companies owned by or associated with Kanders continue to sell chemical weapons that have been deployed against American protesters and civilians around the world, according to corporate records reviewed by The Intercept.

    The controversy began in 2018 following a report revealing Kanders’s ownership of Safariland LLC, a seller of military and police equipment including, infamously, dangerous tear gas and smoke munitions that had just days earlier been launched against asylum-seekers at the U.S.-Mexico border. Safariland became an art world and human rights flashpoint, and protests against Kanders’s chemical weapons profits led to his ouster from a prestigious seat on the board of the Whitney, a position he’d enjoyed since 2006. Safariland’s gas and smoke weapons were made by Defense Technology, a weaponry company it owned.

    Reporting from around the world has found that Safariland and Defense Technology-branded munitions are used to incapacitate a litany of vulnerable targets, from protesters rallying against the murder of George Floyd to migrants attempting to cross the U.S. border with Mexico. While tear gas and smoke grenades of the kind marketed by Defense Technology and Safariland are typically characterized by law enforcement agencies as a safe and humane means of dispersing a crowd, the toxic chemical compounds inside are known to cause severe organ damage, chronic conditions like bronchitis, and sometimes permanent physical injuries to their targets if struck directly by gun-launched canisters. In May 2021, after Defense Technology tear gas was used against protesters in Oregon, a Kaiser Permanente study found that hundreds of women exposed to the chemicals subsequently reported abnormal menstrual cycles. While domestic laws deem tear gas safe enough for police to fire in mass quantities into throngs of Americans, its use on the battlefield is banned by the Geneva Protocol, a prohibition against chemical warfare.

    Fallout from the protests didn’t stop at Kanders’s resignation from the Whitney’s board. In June 2020, after Safariland grenades were used against racial justice protesters outside the White House, the New York Times reported that an apparently chastened Kanders was “getting out of the tear gas business” entirely, with Safariland announcing that it would sell off Defense Technology, its chemical weapons subsidiary. The divestiture “allows Safariland to focus on passive defensive protection” like body armor and holsters, Kanders stated in a company press release, which noted that “Defense Technology’s current management team will become the new owners of the business.” But according to materials reviewed by The Intercept, Kanders never exited the tear gas business, but merely rearranged his stake in it.

    Florida-based Cadre Holdings bills itself as a “premier global provider of trusted, innovative, high-quality safety and survivability products for first responders, federal agencies, and outdoor/personal protection markets,” with a large portfolio of companies that manufacture protective equipment, gun holsters, and other tactical accoutrement. Among the many companies owned by Cadre, itself run by Kanders since 2012, is Safariland LLC, whose website today is devoid of tear gas and smoke grenades and rather bills itself as “providing trusted and innovative life-saving equipment to law enforcement, military, outdoor recreation and personal protection markets.”

    Defense Technology is not mentioned anywhere on Cadre’s website. But when Cadre Holdings filed paperwork with the Securities and Exchange Commission last year as part of its initial public offering, among its 23 disclosed international subsidiaries was Defense Technology LLC. Defense Technology was again listed as a subsidiary in Cadre’s March 2022 annual shareholder report. Amid the many risk factors disclosed in the report, Kanders’s company stated explicitly that it continued to sell chemical weapons, noting, “We use Orthochlorabenzalmalononitrile and Chloroacetophenone chemical agents in connection with our production of our crowd control products,” two of the most popular toxic compounds used to create tear gas. “Private parties may bring claims against us based on alleged adverse health impacts or property damage caused by our operations.”

    Documents recently filed with the Florida Department of State offer further proof of the nondivestiture: In a Defense Technology annual report filed in March, two years after Safariland claimed that it had cut ties with the company, Warren Kanders and Safariland LLC are both listed as corporate officers. And all three companies list the exact same address for their registered agent in their most recent Florida paperwork.

    According to its website, Defense Technology is still very much in the chemical weapons business. In the “chemical agent devices” section of its website, the popular Triple-Chaser tear gas grenade still receives top billing; Triple-Chaser is a particularly infamous brand whose widespread use against civilians was the subject of an incisive short documentary by filmmaker Laura Poitras and the research group Forensic Architecture, exhibited at the 2019 Whitney Biennial to protest Kanders’s relationship with the museum. (Poitras was a founding editor of The Intercept.) Defense Technology lists a total of 117 different chemical weapons, including dozens containing the deeply toxic compounds hexachloroethane and 2-chlorobenzalmalononitrile, often abbreviated as CS.

    Not only does Kanders still control Defense Technology via his majority stake in Cadre, but the company also appears to remain tightly integrated with Safariland. Despite the claims of divestiture, the two companies don’t appear to have gone to any great lengths to conceal their close ongoing relationship, and records suggest that the companies are not merely connected but one and the same. Federal procurement records show that Safariland was still selling “less-lethal” Defense Technology weapons to the Bureau of Prisons as of January. As recently as April of this year, Safariland posted a job opening at the Defense Technology Training Academy; a retail job posted by the company notes, “The Safariland Group offers a number of recognized brand names in these markets including … Defense Technology.” In their respective website terms of use pages, Cadre Holdings, Safariland, and Defense Technology provide the exact same Jacksonville, Florida, mailing address, with the latter, updated over a year after the alleged divestiture, actually instructing those with copyright infringement notices to address any such complaints to “Safariland, LLC Attn.” As of publication, both Safariland and Defense Technology’s web domains have identical registration information, according to filings reviewed by The Intercept, and share the same main contact phone number.

    “At first I thought it was just a case of someone using old letterhead, but as I was digging deeper I found more and more things that hinted the companies are still connected.”

    A call placed to Defense Technology to request comment on the divestment prompted an automated message from Safariland; after selecting Defense Technology’s extension, another automated Safariland greeting was played.

    Cadre Holdings, Safariland, and Defense Technology did not respond to requests for comment.

    Noam Perry, an activist and researcher with the American Friends Service Committee, shared his findings with The Intercept after looking into Safariland while working on a report about police militarization. “I knew going into the project that Safariland divested Defense Technology in 2020, so I was surprised when I saw receipts and shipping slips from 2021 that still identified Safariland as selling Defense Technology weapons and ammunition,” Perry told The Intercept via email. “At first I thought it was just a case of someone using old letterhead, but as I was digging deeper I found more and more things that hinted the companies are still connected.” While Perry thought at first that the divestment was simply dragging on, “in March the first annual report of Cadre Holdings came out and led me to believe they indeed lied.”

    In a 2021 article for the Union of Concerned Scientists questioning whether Safariland had ever actually quit the chemical weapons business, researcher Juniper Simonis published emails obtained via public record request showing that the company continued to peddle Defense Technology gas weapons in 2020, citing “unprecedented” levels of demand, even after claiming to have divested from the chemical weapons business. Simonis also noted that Safariland continued to register trademarks using the Defense Technology brand, an unusual practice post-divestment.

    “The news about Kanders’s misrepresentation of his business practices does not surprise us,” said Amin Husain, a professor at New York University who helped organize the anti-Kanders protests against the Whitney with the activist group Decolonize This Place. “This kind of dishonesty is typical of the tycoon class who use their art world associations and investments to launder their crimes against humanity.”

    The post Ousted Whitney Museum Board Member Still Selling Tear Gas Despite Divestment Claim appeared first on The Intercept.

  • In the months leading up to Russia’s invasion of Ukraine, two obscure American startups met to discuss a potential surveillance partnership that would combine the ability to track the movements of billions of people via their phones with a constant stream of data purchased directly from Twitter. According to Brendon Clark, an executive at Anomaly Six — or “A6” — the combination of his company’s cellphone location-tracking technology with the social media surveillance provided by Zignal Labs would allow the U.S. government to effortlessly spy on Russian forces massed along the Ukrainian border, or even to monitor Chinese nuclear submarines. To prove the technology worked, Clark turned A6’s powers on the U.S. itself, spying on the National Security Agency and the CIA by using phones tracked at the agencies’ headquarters against them.

    Founded in 2018 by two former military intelligence officers and headquartered in Virginia, Anomaly Six keeps a public presence so low-key as to be mysterious. The company’s website reveals nothing about what it actually does — but there’s a good chance A6 knows a great deal about you. The firm is one of many that buy enormous volumes of location data, tracking hundreds of millions of people around the world by exploiting a poorly understood fact: Countless ordinary smartphone apps constantly collect their users’ locations and relay them to advertisers, typically without users’ knowledge or consent, leveraging the legalese fine print of lengthy terms of service that the companies involved hope you never read. There is currently no U.S. law barring the sale and resale of your location information once it has been transmitted to an advertiser, and companies like Anomaly Six are free to buy it and sell it on to private and government clients. Day after day, the digital advertising industry does the heavy lifting for anyone interested in tracking the daily lives of others — interested third parties need only buy access.

    Company materials obtained by The Intercept and Tech Inquiry detail the scale of Anomaly Six’s global surveillance power, whose capabilities can hand any customer abilities once reserved for militaries and spy agencies.

    According to video recordings reviewed by The Intercept and Tech Inquiry, A6 claims it can track roughly 3 billion devices in real time — the equivalent of a fifth of the world’s population. That staggering surveillance capacity was cited in a presentation offering A6’s phone-tracking capabilities to Zignal Labs, a social media monitoring company that leverages its rarely granted access to Twitter’s “firehose” data stream to sift through hundreds of millions of tweets per day without restriction. By combining the two companies’ capabilities, A6 proposed, Zignal’s corporate and government clients could not only surveil social media activity worldwide but also determine exactly who posted certain tweets, where they posted from, whom they were with, where they had been before, and where they went next. This robust expanded capability would be an obvious boon both to organizations monitoring their global adversaries and to companies keeping watch over their own employees.

    Speaking on condition of anonymity, the source who shared the materials expressed grave concern about the legality of government contractors like Anomaly Six and Zignal Labs “revealing social media posts, usernames, and locations of Americans” to “Department of Defense” users. The source also said that Zignal Labs had intentionally misled Twitter, withholding the broader military and corporate surveillance use cases of its firehose access. Twitter’s terms of service technically prohibit third parties from “conducting or providing surveillance or gathering intelligence” using access to the platform, though the practice is common and enforcement of the ban rare. Asked about these concerns, spokesperson Tom Korolsyshun told The Intercept that “Zignal abides by privacy laws and the guidelines established by our data partners.”

    A6 claims its GPS data harvest pulls in 30 to 60 location pings per device per day and 2.5 trillion location data points annually worldwide, amounting to as much as 280 terabytes of location data per year and many petabytes in total — suggesting the company monitors, on average, roughly 230 million devices every day. A6’s sales rep added that while many rival companies gather personal location data through phones’ Bluetooth and Wi-Fi connections, which yield only a general sense of a user’s whereabouts, Anomaly Six collects only GPS pinpoints accurate to within a few meters. Beyond location data, A6 said it has built a library of more than 2 billion email addresses and other personal details that people share when signing up for smartphone apps, which can be used to identify whom a given GPS ping belongs to. All of this is fueled, Clark noted during the presentation, by general ignorance of the ubiquity and invasiveness of smartphone software development kits, or SDKs: “Everything is agreed to and sent by the user even though they probably don’t read the 60 pages in the [end user license agreement].”
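
    Those claims can be sanity-checked against one another; a quick back-of-envelope calculation, using only the figures A6 itself cites, reproduces the roughly 230 million devices per day:

    ```python
    # Back-of-envelope check of Anomaly Six's own claimed figures.
    ANNUAL_POINTS = 2.5e12            # claimed location data points per year
    PINGS_LOW, PINGS_HIGH = 30, 60    # claimed daily pings per device
    ANNUAL_BYTES = 280e12             # claimed 280 TB of location data per year

    daily_points = ANNUAL_POINTS / 365          # ~6.85 billion pings per day
    print(daily_points / PINGS_HIGH / 1e6)      # ~114M devices at 60 pings/day
    print(daily_points / PINGS_LOW / 1e6)       # ~228M devices at 30 pings/day
    print(ANNUAL_BYTES / ANNUAL_POINTS)         # ~112 bytes per location record
    ```

    At the 30-ping lower bound, the arithmetic lands almost exactly on the daily average cited in the pitch.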

    The Intercept could not corroborate Anomaly Six’s claims about its data and capabilities, which were made in the context of a sales pitch. Privacy researcher Zach Edwards said he believes the claims are plausible but cautioned that firms tend to exaggerate the quality of their data. Mobile security researcher Will Strafach agreed, noting that A6’s data sourcing “sounds alarming, but is not terribly far off from the ambitious claims of other companies.” For Wolfie Christl, a researcher of surveillance and privacy in the app data industry, even if Anomaly Six’s capabilities are exaggerated or partially inaccurate, a company possessing even a fraction of these spy powers would be troubling from a personal privacy standpoint.

    Reached for comment, a Zignal spokesperson provided the following statement: “While Anomaly 6 has in the past demonstrated its capabilities to Zignal Labs, Zignal Labs does not have a relationship with Anomaly 6. We have never integrated Anomaly 6’s capabilities into our platform, nor have we ever offered Anomaly 6 to any of our customers.”

    Asked about the company’s presentation and its surveillance capabilities, Anomaly Six co-founder Brendan Huff replied by email that “Anomaly Six is a small, veteran-owned company that cares about American interests, natural security, and understands the law.”

    Companies like A6 are fueled by the ubiquity of SDKs — ready-made bundles of code that software makers can drop into their apps to easily add functionality and monetize their offerings with ads. According to Clark, A6 can obtain exact GPS measurements gathered through covert partnerships with “thousands” of smartphone apps, a modus operandi he described in the presentation as a “farm-to-table approach to data acquisition.” This data is useful for more than just people hoping to sell you things: The largely unregulated global trade in personal data increasingly finds customers not only in marketing agencies, but also in federal agencies tracking immigrants and drone targets, and in bodies imposing economic sanctions and pursuing tax evasion. According to public records first reported by Motherboard, in September 2020 U.S. Special Operations Command paid Anomaly Six $590,000 for one year of access to the company’s “commercial telemetry feed.”

    Anomaly Six’s software lets customers browse all of this data through an intuitive, Google Maps-style satellite view. Users need only find a location of interest and draw a box around it, and A6 fills the boundary with dots denoting smartphones that have passed through the area. Clicking on a dot pulls up lines representing the movements of the device — and its owner — around a neighborhood, a city, or even abroad.
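
    Mechanically, what the demo describes is a geofence query over an enormous table of (device ID, latitude, longitude, timestamp) records. The sketch below is purely illustrative (the `Ping` layout and helper names are invented, as A6’s actual schema is not public), but it captures the two primitives the interface exposes: filter by box, then follow one device:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Ping:
        device_id: str   # an advertising identifier, not a name
        lat: float
        lon: float
        ts: float        # Unix timestamp

    def in_geofence(pings, south, north, west, east):
        """All pings inside a user-drawn bounding box."""
        return [p for p in pings
                if south <= p.lat <= north and west <= p.lon <= east]

    def track(pings, device_id):
        """One device's movements as a time-ordered trail."""
        return sorted((p for p in pings if p.device_id == device_id),
                      key=lambda p: p.ts)
    ```

    Clicking a dot in the interface amounts to calling `track` with that dot’s device ID: the identifier that groups the pings inside the box is the same one that links them to every other place the phone has ever been.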

    As Russian forces massed along the Ukrainian border, A6’s sales rep detailed how GPS surveillance could help turn Zignal into a sort of private spy agency capable of assisting its state clientele in monitoring troop movements. Imagine, Clark explained, if the crisis-zone tweets Zignal so rapidly surfaces were only the beginning. Using satellite imagery tweeted by accounts conducting the increasingly popular “open source intelligence,” or OSINT, investigations, Clark showed how A6’s GPS tracking would let Zignal’s customers determine not merely that a military buildup was underway, but track the phones of Russian soldiers as they mobilized to pinpoint exactly where they had trained, where they were garrisoned, and which units they belonged to. Clark showed A6’s software retroactively tracing Russian troops’ phones to earlier locations far from the border, such as a military installation near the Russian city of Yurga, and suggested the devices could be tracked all the way back to the soldiers’ homes. Previous reporting by the Wall Street Journal indicates this tracking method is already used to monitor Russian military maneuvers, and that American troops are just as vulnerable.

    In another demonstration, Clark zoomed the map view in on the city of Molkino in southern Russia, where mercenaries of the Wagner Group are reportedly based. The map showed dozens of dots denoting devices at the group’s base, with scattered lines indicating recent movements. “You can start watching these devices,” Clark explained. “Any time they start leaving the area, I’m looking at potential pre-deployment activity by non-standard Russian actors, their non-uniformed personnel. If you see them go into Libya, into the Democratic Republic of the Congo or something like that, that can help you better understand the potential soft-power actions the Russians are taking.”

    To fully impress its audience with the software’s immense power, Anomaly Six did what few in the world can: spy on American spies.

    The presentation noted that this kind of mass phone surveillance could be used by Zignal to help unspecified clients with “counter-messaging,” debunking Russian claims that such military buildups were mere training exercises rather than preparation for an invasion. “In terms of counter-messaging, you’ve got a lot of the value in being able to hand the client the counter-messaging piece — [Russia is] saying, ‘Oh, it’s just local, regional … exercises.’ Like, no. We can see from the data that they’re coming from all over Russia.”

    To fully impress the presentation’s audience with the software’s immense power, Anomaly Six did what few in the world can: spy on American spies. “I like to make fun of our own people,” Clark said. Pulling up a Google Maps-style satellite view, the sales rep displayed NSA headquarters at Fort Meade, Maryland, and CIA headquarters in Langley, Virginia. With virtual boundaries drawn around both — a technique known as geofencing — A6’s software revealed an astonishing intelligence bounty: 183 dots representing phones that had visited both agencies, potentially belonging to American intelligence personnel, with hundreds of lines revealing their movements, ready to be tracked around the world. “If I’m a foreign intelligence officer, I now have 183 starting points,” Clark noted.
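
    The “183 start points” result is, at bottom, a set intersection across two geofence queries: every device ID observed inside both boxes. Continuing the hypothetical sketch above (the bounding boxes here are rough public-map approximations, not A6’s):

    ```python
    # Approximate (south, north, west, east) boxes around each campus.
    NSA_BOX = (39.103, 39.112, -76.777, -76.766)   # Fort Meade, Maryland
    CIA_BOX = (38.947, 38.955, -77.152, -77.140)   # Langley, Virginia

    nsa_visitors = {p.device_id for p in in_geofence(pings, *NSA_BOX)}
    cia_visitors = {p.device_id for p in in_geofence(pings, *CIA_BOX)}

    both = nsa_visitors & cia_visitors   # devices seen at both agencies
    print(len(both), "devices visited both sites")
    ```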

    The NSA and the CIA declined to comment.

    Anomaly Six tracked a device that visited the headquarters of the NSA and the CIA as well as an air base near Zarqa, Jordan.

    Screenshot: The Intercept/Google Maps

    Clicking on one of the dots at the NSA, Clark was able to follow that individual’s exact movements — virtually every moment of their life over the preceding year. “Think of fun things like sourcing,” Clark said. “If I’m a foreign intelligence officer, I don’t have access to things like the agency or the fort, but I can find out where these people live, where they travel, when they leave the country.” The demo tracked the individual across the U.S. and abroad to a training center and an airfield roughly an hour’s drive northwest of Muwaffaq Salti Air Base in Zarqa, Jordan, where the U.S. maintains a fleet of drones.

    “It doesn’t take a lot of creativity to see how foreign spies could use this information for espionage, blackmail, all kinds of, as they used to say, dastardly deeds.”

    “There is certainly a serious national security threat if a data analyst can track a few hundred intelligence officials to their homes and around the world,” Sen. Ron Wyden, a Democrat and critic of the personal data industry, told The Intercept. “It doesn’t take a lot of creativity to see how foreign spies could use this information for espionage, blackmail, all kinds of, as they used to say, dastardly deeds.”

    Back in the U.S., the individual in question was tracked to their home. A6’s software includes a function called “Regularity,” a button customers can click to analyze frequently visited locations, allowing them to deduce where a target lives and works even though the GPS pinpoints A6 provides omit the phone owner’s name. Privacy researchers have long maintained that even “anonymized” location data can easily be tied to an individual based on the places they frequent most, a fact the A6 demo confirmed. After hitting the “Regularity” button, Clark zoomed in on a Google Street View image of the individual’s home.
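
    As described, “Regularity” automates a re-identification technique long documented by privacy researchers: bin a device’s pings onto a coarse grid and see which cell it occupies overnight. A minimal sketch, reusing the hypothetical `Ping` records from above (the night window and grid size are arbitrary choices, not A6’s):

    ```python
    from collections import Counter
    from datetime import datetime, timezone

    def likely_home(pings, grid=0.001):
        """Guess a device's home: the ~100 m grid cell it most often
        occupies during overnight hours (UTC here, for simplicity)."""
        overnight = Counter()
        for p in pings:
            hour = datetime.fromtimestamp(p.ts, tz=timezone.utc).hour
            if hour >= 22 or hour < 6:
                cell = (round(p.lat / grid) * grid,
                        round(p.lon / grid) * grid)
                overnight[cell] += 1
        return overnight.most_common(1)[0][0] if overnight else None
    ```

    No name appears anywhere in the data; a home address recovered this way is usually enough to supply one.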

    “The industry has repeatedly claimed that collecting and selling this cellphone location data doesn’t violate privacy because it is tied to device ID numbers rather than people’s names. This feature proves how flimsy those claims are,” said Nate Wessler, deputy director of the Speech, Privacy, and Technology Project at the American Civil Liberties Union. “Of course, following a person’s movements 24 hours a day, every day, will tell you where they live, where they work, who they spend time with, and who they are. The privacy violation is immense.”

    The demonstration continued with a surveillance exercise flagging U.S. naval movements, using a tweeted satellite photo of the USS Dwight D. Eisenhower in the Mediterranean taken by the company Maxar Technologies. Clark explained how such a satellite photo could be turned into a surveillance asset even more powerful than an image from space. Using the latitude and longitude coordinates and the timestamp attached to the Maxar photo, A6 was able to pick up a phone signal emanating from the ship’s position at that exact moment, south of Crete. “It only takes one,” Clark noted. “When I look back, where has that device been? Back to Norfolk. And what else do we see at the carrier? Here’s every other device.” His screen revealed a view of the vessel docked in Virginia, blanketed with thousands of colorful dots representing phone location pings collected by A6. “Well, now I can see every time that ship is underway. I don’t need satellites. I can use this.”
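
    The carrier exercise is a spatiotemporal join: take the photo’s coordinates and timestamp, find any ping within a small radius and time window, and pivot to that device’s full history. A rough sketch under the same invented schema (the radius and window are arbitrary illustrative values):

    ```python
    from math import asin, cos, radians, sin, sqrt

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in meters."""
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + (cos(radians(lat1)) * cos(radians(lat2))
                                  * sin(dlon / 2) ** 2)
        return 2 * 6_371_000 * asin(sqrt(a))

    def pings_near_photo(pings, lat, lon, photo_ts,
                         radius_m=300, window_s=3600):
        """Pings close to a georeferenced photo in both space and time."""
        return [p for p in pings
                if abs(p.ts - photo_ts) <= window_s
                and haversine_m(p.lat, p.lon, lat, lon) <= radius_m]
    ```

    One hit is enough: the matching device’s earlier pings place the carrier in Norfolk, and a geofence around that berth then sweeps in every other phone aboard.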

    Although Clark acknowledged that the company has far less data available on the owners of Chinese phones, the demo closed with a GPS ping picked up aboard a suspected Chinese nuclear submarine. Using nothing but unclassified satellite imagery and commercial advertising data, Anomaly Six was able to precisely track the movements of the world’s most sophisticated military and intelligence forces. With tools like those sold by A6 and Zignal, even an open source intelligence hobbyist would have global surveillance powers once reserved for nations. “People put so much on social media,” Clark added with a laugh.

    With location data proliferating free of U.S. government oversight, one hand washes the other: the result is a private sector with surveillance powers of state-like magnitude that can, in turn, feed the state’s growing appetite for surveillance without the usual judicial scrutiny. Critics say the open trade in advertising data amounts to a loophole in the Fourth Amendment, which requires the government to make its case to a judge before accessing location coordinates from a cellphone provider. But the wholesale commodification of phone data lets the U.S. government forgo warrants and simply buy data that is often even more precise than what carriers like Verizon can supply. Civil liberties advocates say this opens a dangerous gap between the protections the Constitution intends and the law’s grasp of the modern data trade.

    “The Supreme Court has made clear that cellphone location information is protected under the Fourth Amendment because of the detailed picture of a person’s life it can reveal,” Wessler explained. “Government agencies’ purchases of access to Americans’ sensitive location data raise serious questions about whether they are engaged in an illegal end run around the Fourth Amendment’s warrant requirement. It is time for Congress to end the legal uncertainty enabling this surveillance once and for all by moving to pass the Fourth Amendment Is Not For Sale Act.”

    While such legislation could curb the government’s ability to piggyback on commercial surveillance, app makers and data brokers would remain free to surveil phone owners. Still, Wyden, a co-sponsor of the bill, told The Intercept he believes “this legislation sends a very strong message” to the “Wild West” of ad-based surveillance, though cracking down on the location data supply chain would be “certainly a question for the future.” Wyden suggested the Federal Trade Commission might be better positioned to police the location trails harvested by spyware apps and advertisers. Other legislation previously introduced by Wyden would empower the commission to crack down on promiscuous data sharing and expand consumers’ ability to opt out of ad tracking.

    A6 is far from the only company engaged in privatized device tracking and surveillance. Three of Anomaly Six’s top employees previously worked at competitor Babel Street, which named all three in a 2018 lawsuit reported by the Wall Street Journal. According to the filing, Brendan Huff and Jeffrey Heinz founded Anomaly Six (and the lesser-known Datalus 5) months after leaving Babel Street in April 2018, intending to replicate Locate X, Babel’s cellphone location surveillance product, in partnership with Babel competitor Semantic AI. In July 2018, Clark followed in Huff’s and Heinz’s footsteps, leaving his role as the “primary interface for … intelligence community clients” to become an employee of both Anomaly Six and Semantic.

    Like its rival Dataminr, Zignal advertises mundane partnerships with the clothing brand Levi’s and the Sacramento Kings basketball team, presenting itself in vague terms with little indication that it uses Twitter for intelligence purposes, an ostensible violation of Twitter’s anti-surveillance policy. Zignal’s government ties run deep: the company’s advisory board includes a former head of U.S. Army Special Operations Command, Charles Cleveland, as well as the CEO of the Rendon Group, John Rendon, whose bio notes that he “pioneered the use of strategic communications and real-time information management as an element of national power, serving as a consultant to the White House and the U.S. national security community, including the U.S. Department of Defense.” Public records further indicate that Zignal received roughly $4 million in work subcontracted through the defense staffing firm ECS Federal on Project Maven, for “Publicly Available … Data Aggregation” and a “Publicly Available Information Enclave” connected to the U.S. Army’s Secure Unclassified Network.

    Anomaly Six’s remarkable global reach is representative of the quantum leap underway in the field of open source intelligence. Though the term is often used to describe internet-enabled detective work that draws on public records to, say, pinpoint the location of a war crime from a grainy video clip, “automated open source intelligence” systems now use software to fuse enormous datasets at a scale no human could match alone. Automated open source intelligence has also become a misnomer, drawing on information that is in no sense “open source” or in the public domain, such as commercial GPS data that must be bought from a private broker.

    However powerful, open source intelligence techniques are usually shielded from accusations of privacy violation because the “open source” nature of the information means it was already, to some extent, public. That is a defense Anomaly Six, with its billions of purchased data points, cannot muster. In February, the Dutch Review Committee on the Intelligence and Security Services released a report on automated open source intelligence techniques and the threat they can pose to personal privacy: “The volume, nature and variety of personal data in these automated OSINT tools may lead to a more serious violation of fundamental rights, in particular the right to privacy, than consulting data from publicly accessible online information sources, such as publicly accessible social media data or data retrieved using a generic search engine.” This fusion of public data, privately purchased personal records, and computerized analysis is not the future of government surveillance but its present. Last year, the New York Times reported that the Defense Intelligence Agency “buys commercially available databases containing location data from smartphone apps and searches it for Americans’ past movements without a warrant,” a surveillance method now practiced routinely by the Pentagon, the Department of Homeland Security, and the IRS, among others.

    Translation: Ricardo Romanoff

    The post U.S. Company Tracks Billions of Cellphones Around the World, and Even Spies on the CIA appeared first on The Intercept.

    This post was originally published on The Intercept.

  • Amid a historic and ever-worsening humanitarian crisis in Afghanistan, Facebook recently added the head of one of the country’s most important domestic aid groups to its Dangerous Individuals terror blacklist, The Intercept has learned.

    Internal company materials reviewed by The Intercept show that Matiul Haq Khalis — head of the Afghan Red Crescent Society, or ARCS; son of a famed mujahedeen commander, Mohammad Yunus Khalis; and a former Taliban negotiator — was added to the company’s stringent censorship list in late April, joining a group of thousands of people and organizations deemed too dangerous to freely discuss or use the platform, including alleged terrorists, hate groups, drug cartels, and mass murderers. But Facebook’s designation now means that the list, ostensibly created and enforced to stop offline harm, could disrupt the work of a globally recognized organization working to ease the immiseration of tens of millions of civilians.

    After the collapse of the U.S.-backed government and withdrawal of American military forces, Khalis was named president of the organization, which has helped provide health care, food, and other humanitarian aid to civilians there since its founding in 1934. In a country where half the population is going hungry and American sanctions threaten a total economic collapse, the ARCS is a bulwark against even greater suffering. Following Khalis’s addition to the Dangerous Individuals list under its most restrictive “Tier 1” category for terrorists due to his Taliban affiliation, the over 2 billion Facebook and Instagram users around the world are now barred from praising, supporting, or representing Khalis; this means even an anodyne photo of him at an official ARCS event, a quotation of his remarks, or a positive mention of him in the context of the organization’s aid work would risk deletion, as would any attempt on his part to use the company’s platforms to communicate, either in Afghanistan or abroad.

    “The Afghan Red Crescent continues to provide lifesaving assistance across the country, to the most vulnerable people in the country, working in all provinces,” said Anita Dullard, spokesperson with the International Committee of the Red Cross. “They’re dealing with a range of things including severe drought, Covid, economic hardship, and working to support the healthcare system in Afghanistan. We work closely with Afghan Red Crescent to ensure that we can deliver humanitarian assistance.”

    A senior official with a major international aid organization in Afghanistan, who spoke with The Intercept on the condition of anonymity to avoid jeopardizing operations in the country, described ARCS as “one of the major humanitarian actors delivering services to a growing number of people in need” and “a huge contributor to the collective humanitarian efforts” pursued in conjunction with other NGOs. This aid official expressed surprise that Khalis would be singled out for censorship despite his Taliban affiliation, saying he had “never held a gun,” and voiced concern over the potential to impede lifesaving humanitarian work. “For sure the ARCS is using Facebook as a tool of communication” with the public, this source continued. “If [the blacklisting] has an effect it will be negative” for Afghanistan, they added.

    Secretary General of the Afghan Red Crescent Society Mawlawi Matiul Haq Khalis, right, attends a handover ceremony for donated supplies in Kabul, Afghanistan, on Dec. 21, 2021.

    Photo: Saifurahman Safi/Xinhua via Getty Images


    Khalis has had an “extremely varied career” in Afghanistan, according to Graeme Smith, an Afghanistan analyst at the International Crisis Group and former United Nations officer stationed in the country. Smith noted that Khalis was in recent history considered an ally of the U.S., having served with the anti-Soviet mujahedeen led by his father, who in 1987 was feted by President Ronald Reagan at a White House reception. Following the American invasion in 2001, Khalis sided with the Taliban. “In other words he’s from a prominent family with pedigree rooted in tribal support from eastern Afghanistan and a history of fighting invaders,” explained Smith. “I have spent the better part of my career studying Afghan politics and I have never met any important politician who is not ‘dangerous’ in some way. Afghans have learned through bitter experience that Western politicians are also dangerous, at times.”

    Facebook’s designation of Khalis, considered in a vacuum, is unsurprising. The company’s Dangerous Organizations and Individuals roster generally mirrors the foreign policy stances of the United States, blacklisting federally sanctioned and terror-designated entities like the Taliban as a matter of course while granting great latitude to Western allies. In Afghanistan, Facebook’s near-total mimicry of State Department decision-making has meant that the ruling government of a sovereign country, however repressive of its own people and despised in the U.S. it may be, is unable to freely use the internet to communicate with its citizenry. The U.S. government and Facebook not only share a common dilemma over how to treat the Taliban now that the group has won the war and assumed control of the country, but also seem to be taking the same punitive approach to that matter. Just as the Biden administration continues to punish the Taliban at the expense of the people of Afghanistan by withholding billions of dollars in frozen cash, Facebook now sanctions the head of one of Afghanistan’s most important humanitarian organizations at a time when Afghans are selling their kidneys to avoid starvation. “It goes without saying that the Red Crescent plays a crucial humanitarian role in Afghanistan’s ongoing armed conflicts,” added Smith.

    John Sifton, Asia advocacy director at Human Rights Watch, told The Intercept that he doubted the blacklisting would have a significant impact on relief efforts inside the country, given the relatively small scope of the ARCS compared to larger international organizations. “It’s not going to somehow significantly impact their operations or outreach,” he said. “It’s more illustrative of Facebook having a policy that doesn’t make a lot of sense.” Sifton questioned the extent to which letting people speak freely of Khalis would endanger anyone or anything. “How is he ‘dangerous’? He’s like 65 years old. He has no militia. His father was a mujahedeen commander, but what is the problem here?” Sifton pointed to groups that are actively using the platform to incite violence. “There are hate guys in India that are spreading toxic anti-Muslim violence across Facebook, Hindu nationalist groups, hateful Buddhist groups in Burma, that’s a real problem. Having Khalis online posting about how he cut the ribbon at a new hospital in Afghanistan, that’s not part of the problem.”

    Facebook has at times defended the breadth of its blacklist by claiming, without evidence, that it’s legally required to censor discussion of certain entities in order to comply with U.S. sanctions law, though neither the ARCS nor Khalis is currently named on the Treasury or State Department counterterrorism sanctions lists. And although the Taliban has an inarguably ugly human rights record and a long history of civilian brutalization, so do many governments left untouched by the Dangerous Organizations policy. The Dangerous Organizations and Individuals list is often criticized for its lack of flexibility and country-specific nuance, and though the company has shown that it is at times willing to make drastic exceptions, those exceptions generally also jibe with American policy determinations.

    “The fact that Twitter is doing the exact opposite tells you everything you need to know.”

    While Sifton is critical of Facebook’s rigid censorship policies, he also assigns blame to “scattershot” and outdated federal anti-terror policies and dismissed the company’s claims that it has any legal obligation to mimic them: “The fact that Twitter is doing the exact opposite tells you everything you need to know.” Sifton said that by following the “absurdities” of counterterrorism sanctions lists, Facebook is replicating the government’s mistakes. While he emphasized that he was not defending the “misogynist, authoritarian, rights-abusing” Taliban, he questioned the notion that the aging mujahedeen of the 1980s still represent a “danger” to the global community. “The Taliban was dangerous because they hosted Al Qaeda between 1996 and 2001, and Al Qaeda used their territory to plan 9/11 … and all the guys who did that are dead, and all the Arabs they hosted are either dead or very old or at Guantánamo.” To the extent that the Taliban writ large represents a genuine danger to Afghan civilians, it’s unclear how restricting global discussion of Khalis might help.

    Facebook did not respond to a request for comment.

    Khalis was added to the social network’s blacklist alongside some two dozen other Taliban-affiliated individuals, including others in humanitarian or public health roles, like Afghanistan’s minister of public health, deputy minister of disaster management, and deputy minister of refugees. But unlike these latter offices, the ARCS is nongovernmental, part of the International Red Cross and Red Crescent Movement of humanitarian relief organizations.

    In response to a request for comment, the International Federation of Red Cross and Red Crescent Societies provided a statement from ARCS Acting Secretary General Mohammad Nabi Burhan, stating that the Taliban government has not affected the group’s mission or ongoing work. “The Afghan Red Crescent Society delivers impartial, neutral and independent humanitarian services across all provinces in Afghanistan, in its role as auxiliary to public authorities in accordance with the Statutes of the International Red Cross and Red Crescent Movement,” he wrote. “Afghan Red Crescent Society has been operating under a new leadership since October 2021. It is not unusual for changes in leadership of a Red Cross or Red Crescent National Society to follow a change in leadership at a national level.”

    The post Facebook Anti-Terror Policy Lands Head of Afghan Red Crescent Society on Censorship List appeared first on The Intercept.

    This post was originally published on The Intercept.

  • Google and Amazon are both set to help build “Project Nimbus,” a mammoth new cloud computing project for the Israeli government and military that is spurring intense dissent among employees and the public alike. Shareholders of both firms will soon vote on resolutions that would mandate reconsideration of a project they fear has grave human rights consequences.

    Little is known of the plan, reportedly worth over $1 billion, beyond the fact that it would consolidate the Israeli government’s public sector cloud computing needs onto servers housed within the country’s borders and subject solely to Israeli law, rather than remote data centers distributed around the world. Part of the plan’s promise is that it would insulate Israel’s computing needs from threats of international boycotts, sanctions, or other political pressures stemming from the ongoing military occupation of Palestine; according to a Times of Israel report, the terms of the Project Nimbus contract prohibit both companies from shutting off service to the government, or from selectively excluding certain government offices from using the new domestic cloud.

    It remains unclear exactly what technologies will be provided through Nimbus, or to what end, an ambiguity critics say is unnerving. Google in particular is known for sophisticated cloud-based offerings perfectly suited to population-scale surveillance, including the powerful image recognition technology that initially made the company so alluring to the Pentagon’s drone program. In 2020, The Intercept reported that Customs and Border Protection would use Google Cloud software to analyze video data from its controversial surveillance initiative along the U.S.-Mexico border.

    While a wide variety of government ministries will make use of the new computing power and data storage, the fact that Google and Amazon may be directly bolstering the capabilities of the Israeli military and internal security services has generated alarm from both human rights observers and company engineers. In October 2021, The Guardian published a letter from a group of anonymous Google and Amazon employees objecting to their company’s participation. “This technology allows for further surveillance of and unlawful data collection on Palestinians, and facilitates expansion of Israel’s illegal settlements on Palestinian land,” the letter read. “We cannot look the other way, as the products we build are used to deny Palestinians their basic rights, force Palestinians out of their homes and attack Palestinians in the Gaza Strip — actions that have prompted war crime investigations by the international criminal court.” In March, an American Google employee who had helped organize the employee opposition to Nimbus said the company abruptly told her she could either move to Brazil or lose her job, a move she said was retaliation for her stance.

    Nimbus will now face a referendum of sorts among Google and Amazon shareholders, who next month will vote on a pair of resolutions that call for company-funded reviews of their participation in that project and others that might harm human rights. The filers of the resolutions collectively own roughly $1.8 million in shares, according to Parker Breza of the Institute for Middle East Understanding, which is helping coordinate the filings. While these investors object to Nimbus on largely the same moral grounds as the authors of the Guardian letter, they’re also tapping into the specific anxieties of the Wall Street investor: What if bad press from Project Nimbus loses us money? Citing the very public controversies surrounding Project Nimbus and other prior contracts with various governmental security agencies, the Google shareholder resolution warns that “employee and public opposition to such contracts will increase and pose a risk to Google’s reputation and its strategic positioning on social responsibility,” and asks that “the company issue a report, at reasonable expense and excluding proprietary information, reassessing the Company’s policies on support for military and militarized policing agency activities and their impacts on stakeholders, user communities, and the Company’s reputation and finances.”

    The Amazon resolution, filed by Investor Advocates for Social Justice, also calls for an independent inquiry into Nimbus and other surveillance contracts, stating: “Amazon’s government and government-affiliated customers and suppliers with a history of rights-violating behavior pose risks to the company” and “Inadequate due diligence presents material privacy and data security risks, as well as legal, regulatory, and reputational risks.”

    Ed Feigen, a Google shareholder since 2014 and lead filer of that resolution, told The Intercept he and several fellow investors felt compelled to oppose Nimbus as soon as they learned of it. “I’m also a member of the organization Jewish Voice for Peace,” Feigen said, “which works to ensure US foreign policy advances peace, human rights, and follows international law so we can ensure freedom and justice for Palestinians.” Feigen added that the resolution was drafted in collaboration with Google employees who similarly oppose the contract on human rights grounds. “We also felt the need to support Google employees who’d spoken out against contracts Google was pursuing with militaries and police agencies like CBP and ICE,” Feigen said, “both because we believe that profiting from violence is plainly immoral, and because we see pursuing such contracts as a liability for investors, especially given the history of Google employees protesting such contracts.”

    A Google software engineer who provided feedback for the resolution and spoke on the condition of anonymity told The Intercept that they’re concerned employees are just as much in the dark about Nimbus as the general public, and fear how the company’s technology would be used to repress Palestinians. “It became a point of shame,” they said in an interview. “We know that the IDF, one of its projects is mass constant surveillance of various areas of the Occupied Territories, and I don’t believe there are any restrictions on which cloud services the Israeli government wants to procure from [Google] Cloud. Google offers big data analysis, machine learning, and AI tool suites through Cloud; I don’t think there’s any reason to assume they aren’t consuming all of these products to help them work on this.”

    “If workers are working on cloud AI products or large scale data management, they should think of themselves as working on technology that is oppressing people.”

    This engineer added that while they have found like-minded colleagues who are similarly disturbed by the prospect of their cloud technologies being used to fortify the Israeli occupation, employee activism against Nimbus is much diminished since the waves of worker-led protests against prior Google contracts like Project Maven and Dragonfly, the company’s planned custom-built Chinese search engine. “Right now we’re kind of in a slump,” they said. While past employee movements spurred heated discussions on internal chat forums, they said, “We haven’t had anything like that from Nimbus, which is really unfortunate.” In addition to fearing retaliation from Google itself, this source said Google employees who might otherwise vocally oppose the Nimbus contract have remained quiet in order to avoid accusations of antisemitism. “The harm is documented, putting Palestinians under constant surveillance is very well documented, and yet [this contract] is the one where even if workers care about it, not only do they face retaliation from management, some coworkers might retaliate in their own ways.” Googlers could stand to think more about how their creations could be misused, they added: “If workers are working on cloud AI products or large scale data management, they should think of themselves as working on technology that is oppressing people.” But the engineer pointed to the fact that Google engineers likely trust the company’s vague public commitments to human rights values and “AI principles,” even if naively. “Leadership has failed to take these commitments seriously, so they’ve passed responsibility to ensure our technology is used responsibly on to us.”

    As with most activist shareholder resolutions, these will likely be a difficult sell. Government contracts like Nimbus are enormously lucrative, and both Amazon and Google have made it clear they are continuing to seek them even in the face of protest from within and without. Global internet giants have seen their profits soar in recent years, a trend they hope to continue by taking on military and law enforcement work that in previous eras may have been handed to traditional defense contractors. It will be difficult to convince investors chiefly concerned with maximizing share prices that these firms should walk away from the giant payouts defense or national security-related projects would bring. Even if successful, neither resolution would end Project Nimbus or thwart either company’s involvement. The Google software engineer added that most of their fellow anti-Nimbus colleagues don’t believe the resolution goes far enough: “It calls for a report on potential impacts to be prepared, but otherwise does not propose any binding action.” Still, they hope that the resolution, doomed or not, will help draw scrutiny and public pressure to the project, a sentiment Feigen shares: “This is the first time a resolution like this has ever been introduced, so we know it’s a big challenge,” he said. “It’s still too early to know whether the resolution will pass, but whether it does or not, this is just the first step in calling attention to these important concerns.”

    The post Google and Amazon Face Shareholder Revolt Over Israeli Defense Work appeared first on The Intercept.

    This post was originally published on The Intercept.