Author: Sam Biddle

  • Google and Amazon are both loath to discuss security aspects of the cloud services they provide through their joint contract with the Israeli government, known as Project Nimbus. Though both the Ministry of Defense and Israel Defense Forces are Nimbus customers, Google routinely downplays the military elements while Amazon says little at all.

    According to a 63-page Israeli government procurement document, however, two of Israel’s leading state-owned weapons manufacturers are required to use Amazon and Google for cloud computing needs. Though details of Google and Amazon’s contractual work with the Israeli arms industry aren’t laid out in the tender document, which outlines how Israeli agencies will obtain software services through Nimbus, the weapons makers are responsible for manufacturing drones, missiles, and other weapons Israel has used to bombard Gaza.

    “If tech companies, including Google and Amazon, are engaged in business activities that could impact Palestinians in Gaza, or indeed Palestinians living under apartheid in general, they must abide by their responsibility to carry out heightened human rights due diligence along the entirety of the lifecycle of their products,” said Matt Mahmoudi, a researcher at Amnesty International working on tech issues. “This must include how they plan to prevent, mitigate, and provide redress for possible human rights violation, particularly in light of mandatory relationships with weapons manufacturers, which contribute to risk of genocide.”

    Project Nimbus, which provides the Israeli government with cloud services ranging from mundane Google Meet video chats to a variety of sophisticated machine-learning tools, has already created a public uproar. Google and Amazon have faced backlash ranging from street protests to employee revolts.

    The tender document consists largely of legal minutiae, rules, and regulations laying out how exactly the state will purchase cloud computing services from Amazon and Google, which won the $1.2 billion contract in 2021. The Israeli document was first published in 2021 but has been updated periodically, most recently in October 2023.

    One of the document’s appendices includes a list of Israeli companies and government offices that are “required to purchase the services that are the subject of the tender from the winning bidder,” according to a translation of the Hebrew-language original.

    The tender document doesn’t require any of the entities to purchase cloud services, but if they need these services — ubiquitous in any 21st-century enterprise — they must purchase them from the two American tech giants. A separate portion of the document notes that any office that wants to buy cloud computing services from other companies must petition two government committees that oversee procurement for an explicit exemption.

    Some of the entities listed in the document have had relationships with other companies that provide cloud services. The status and future of those business ties are unclear.

    Obligatory Customers

    The list of obligatory cloud customers includes state entities like the Bank of Israel, the Israel Airports Authority, and the Settlement Division, a quasi-governmental body tasked with expanding Israel’s illegal colonies in the West Bank.

    Also included on the list are two of Israel’s most prominent, state-owned arms manufacturers: Israel Aerospace Industries and Rafael Advanced Defense Systems. The Israeli military has widely fielded weapons and aircraft made by these companies and their subsidiaries to prosecute its war in Gaza, which since October 7 has killed over 30,000 Palestinians, including 13,000 children.

    These relationships with Israeli arms manufacturers place Project Nimbus far closer to the bloodshed in Gaza than has been previously understood.

    Asked how work with weapons manufacturers could be consistent with Google’s claim that Project Nimbus doesn’t involve weapons, spokesperson Anna Kowalczyk repeated the claim in a statement to The Intercept.

    “We have been very clear that the Nimbus contract is for workloads running on our commercial cloud by Israeli government ministries, who agree to comply with our Terms of Service and Acceptable Use Policy. This work is not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services,” said Kowalczyk, who declined to answer specific questions. “Across Google, we’ve also been clear that we will not design or deploy AI applications as weapons or weapons systems, or for mass surveillance.”

    (A spokesperson for Amazon Web Services declined to comment. Neither Rafael nor IAI responded to requests for comment.)

    The Israeli document provides no information about exactly what cloud services these arms makers must purchase, or whether they are to purchase them from Google, Amazon, or both. Though the government’s transition to Google and Amazon’s bespoke cloud has hit lengthy delays, last June Rafael announced it had begun transitioning certain “unclassified” cloud needs to Amazon Web Services but did not elaborate.

    Google has historically declined to explain whether its various human rights commitments and terms of service prohibiting its users from harming others apply to Israel. After an April 3 report by +972 Magazine found that the Israeli military was using Google Photos’ facial recognition to map, identify, and create a “hit list” of Palestinians in Gaza, Google would not say whether it allowed this use of its software.


    Both Google and Amazon say their work is guided by the United Nations Guiding Principles on Business and Human Rights, which call on companies “to prevent or mitigate adverse human rights impacts that are directly linked to their operations, products or services by their business relationships, even if they have not contributed to those impacts.” The U.N. principles, which were endorsed by the U.N. Human Rights Council in 2011, say companies must “identify and assess any actual or potential” rights abuses related to their business.

    Michael Sfard, an Israeli human rights attorney, told The Intercept that these guidelines dictate that Google and Amazon should conduct human rights due diligence and vet the use of their technology by the Israeli government.

    “Without such deep and serious process,” Sfard said, “they can be seen as complicit in Israeli crimes.”

    “Spike” Missiles

    Rafael, a state-owned arms contractor, is a titan of the Israeli defense sector. The company provides the Israeli military with a broad variety of missiles, drones, and other weapons systems.

    It sells the vaunted Iron Dome rocket-defense system and the “Trophy” anti-rocket countermeasure system that’s helped protect Israeli military tanks during the ground offensive in Gaza.

    Israel also routinely fields Rafael’s “Spike” line of missiles, which can be fired from shoulder-carried launchers, jets, or drones. Effective against vehicles, buildings, and especially people, Spike missiles can be outfitted with a fragmentation option that creates a lethal spray of metal. Since 2009, analysts have attributed cube-shaped tungsten shrapnel wounds in civilians to Israel’s use of Spike missiles.

    Use of these missiles in Gaza continues, with military analysts saying that Spike missiles were likely used in the April 1 drone killing of seven aid workers affiliated with World Central Kitchen.

    The destroyed roof of a vehicle where World Central Kitchen aid workers were killed in an Israeli airstrike, in Deir Al-Balah, Gaza Strip, on April 2, 2024. Photo: Yasser Qudihe/Middle East Images via AFP

    Elta Systems, a subsidiary of IAI, is also named in the document as an obligatory Nimbus customer. The firm deals mostly in electronic surveillance hardware but co-developed the Panda, a remote-controlled bulldozer the Israeli military has used to demolish portions of Gaza.

    Israel Aerospace Industries, commonly known as IAI, plays a similarly central role in the war, its weapons often deployed hand in glove with Rafael’s.

    IAI’s Heron drone, for instance, is frequently armed with Spike missiles. The Heron provides the Israeli Air Force with the crucial capacity to persistently surveil the denizens of Gaza and launch airstrikes against them at will.

    In November, IAI CEO Boaz Levy told the Jerusalem Post, “IAI’s HERON Unmanned Aerial Systems stand as a testament to our commitment to innovation and excellence in the ever-evolving landscape of warfare. In the Iron Swords War” — referring to Israel’s name for its military operation against Hamas — “the HERON UAS family played a pivotal role, showcasing Israel’s operational versatility and adaptability in diverse environments.”

    Project Nimbus also establishes its own links between the Israeli security establishment and the American defense industry. While Nimbus is based on Google and Amazon’s provision of their own cloud services to Israel, the tender document says these companies will also establish “digital marketplaces,” essentially bespoke app stores for the Israeli government that allow them to access a library of cloud-hosted software from third parties.

    According to a spreadsheet detailing these third-party cloud offerings, Google provides Nimbus users with access to Foundry, a data analysis tool made by the U.S. defense and intelligence contractor Palantir. (A spokesperson for Palantir declined to comment.)

    Google began offering Foundry access to its cloud customers last year. While marketed primarily as civilian software, Foundry is used by military forces including U.S. Special Operations Command and the U.K. Royal Navy. In 2019, the Washington Post reported the U.S. Army would spend $110 million to use Foundry “to piece together thousands of complex data sets containing information on U.S. soldiers and the expansive military arsenal that supports them.”

    The Israeli military extensively uses Palantir software for targeting in Gaza, veteran national security journalist James Bamford reported recently in The Nation.

    Palantir has been an outspoken champion of the Israeli military’s invasion of Gaza. “Certain kinds of evil can only be fought with force,” the company posted on its social media during the first week of the conflict. “Palantir stands with Israel.”

    War Abroad, Revolt at Home

    That Project Nimbus includes a prominent military dimension has been known since the program’s inception.

    In 2021, the Israeli Finance Ministry announced the contract as “intended to provide the government, the defense establishment and others with an all-encompassing cloud solution.” In 2022, training materials first reported by The Intercept confirmed that the Israeli Ministry of Defense would be a Google Cloud user.

    Google’s public relations apparatus, however, has consistently downplayed the contracting work with the Israeli military. Google spokespeople have repeatedly told press outlets that Nimbus is “not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services.” Amazon has tended to avoid discussing the contract at all.

    The revelation that Google’s lucrative relationship with the Israeli state includes a mandated relationship with two weapons manufacturers undermines its claim that the contract doesn’t touch the arms trade.

    “Warfighting operations narrowly defined can only proceed through the wider communications and data infrastructures on which they depend,” Lucy Suchman, professor emerita of anthropology of science and technology at Lancaster University, told The Intercept. “Providing those infrastructures to industries and organizations responsible for the production and deployment of weapon systems arguably implicates Google in the operations that its services support, however indirectly.”

    Project Nimbus has proven deeply contentious within Google and Amazon, catalyzing a wave of employee dissent unseen since the controversy over Google’s now-defunct contract to bolster the U.S. military drone program.


    While workers from both companies have publicly protested the Nimbus contract, Google employees have been particularly vocal. Following anti-Nimbus sit-ins organized at the company’s New York and Sunnyvale, California, offices, Google fired 50 employees it said participated in the protests.

    Emaan Haseem, a cloud computing engineer at Google until she was fired after participating in the Sunnyvale protest, told The Intercept she thinks the company needs to be frank with its employees about what their labor ends up building.

    “A lot of us signed up or applied to work at Google because we were trying to avoid working at terrible unethical companies,” she said in an interview. Haseem graduated college in 2022 and said she consciously avoided working for weapons manufacturers like Raytheon or large energy companies.

    “Then you just naively join, and you find out it’s all the same. And then you’re just kind of angry,” she said. “Why are we acting any different? Why are we pretending that because my logo is colorful and has round letters that I’m any better than Raytheon?”

    The post Israeli Weapons Firms Required to Buy Cloud Services from Google and Amazon appeared first on The Intercept.


  • Microsoft last year proposed using OpenAI’s mega-popular image generation tool, DALL-E, to help the Department of Defense build software to execute military operations, according to internal presentation materials reviewed by The Intercept. The revelation comes just months after OpenAI silently ended its prohibition against military work.

    The Microsoft presentation deck, titled “Generative AI with DoD Data,” provides a general breakdown of how the Pentagon can make use of OpenAI’s machine learning tools, including the immensely popular ChatGPT text generator and DALL-E image creator, for tasks ranging from document analysis to machine maintenance. (Microsoft invested $10 billion in the ascendant machine learning startup last year, and the two businesses have become tightly intertwined. In February, The Intercept and other digital news outlets sued Microsoft and OpenAI for using their journalism without permission or credit.)

    The Microsoft document is drawn from a large cache of materials presented at an October 2023 Department of Defense “AI literacy” training seminar hosted by the U.S. Space Force in Los Angeles. The event included a variety of presentations from machine learning firms, including Microsoft and OpenAI, about what they have to offer the Pentagon.

    The publicly accessible files were found on the website of Alethia Labs, a nonprofit consultancy that helps the federal government with technology acquisition, and discovered by journalist Jack Poulson. On Wednesday, Poulson published a broader investigation into the presentation materials. Alethia Labs has worked closely with the Pentagon to help it quickly integrate artificial intelligence tools into its arsenal, and since last year has contracted with the Pentagon’s main AI office. The firm did not respond to a request for comment.

    One page of the Microsoft presentation highlights a variety of “common” federal uses for OpenAI, including for defense. One bullet point under “Advanced Computer Vision Training” reads: “Battle Management Systems: Using the DALL-E models to create images to train battle management systems.” Just as it sounds, a battle management system is a command-and-control software suite that provides military leaders with a situational overview of a combat scenario, allowing them to coordinate things like artillery fire, airstrike target identification, and troop movements. The reference to computer vision training suggests artificial images conjured by DALL-E could help Pentagon computers better “see” conditions on the battlefield, a particular boon for finding — and annihilating — targets.

    In an emailed statement, Microsoft told The Intercept that while it had pitched the Pentagon on using DALL-E to train its battlefield software, it had not begun doing so. “This is an example of potential use cases that was informed by conversations with customers on the art of the possible with generative AI.” Microsoft, which declined to attribute the remark to anyone at the company, did not explain why a “potential” use case was labeled as a “common” use in its presentation.

    OpenAI spokesperson Liz Bourgeous said OpenAI was not involved in the Microsoft pitch and that it had not sold any tools to the Department of Defense. “OpenAI’s policies prohibit the use of our tools to develop or use weapons, injure others or destroy property,” she wrote. “We were not involved in this presentation and have not had conversations with U.S. defense agencies regarding the hypothetical use cases it describes.”

    Bourgeous added, “We have no evidence that OpenAI models have been used in this capacity. OpenAI has no partnerships with defense agencies to make use of our API or ChatGPT for such purposes.”

    At the time of the presentation, OpenAI’s policies seemingly would have prohibited a military use of DALL-E. Microsoft told The Intercept that if the Pentagon used DALL-E or any other OpenAI tool through a contract with Microsoft, it would be subject to the usage policies of the latter company. Still, any use of OpenAI technology to help the Pentagon more effectively kill and destroy would be a dramatic turnaround for the company, which describes its mission as developing safety-focused artificial intelligence that can benefit all of humanity.


    “It’s not possible to build a battle management system in a way that doesn’t, at least indirectly, contribute to civilian harm,” said Brianna Rosen, a visiting fellow at Oxford University’s Blavatnik School of Government who focuses on technology ethics.

    Rosen, who worked on the National Security Council during the Obama administration, explained that OpenAI’s technologies could just as easily be used to help people as to harm them, and their use for the latter by any government is a political choice. “Unless firms such as OpenAI have written guarantees from governments they will not use the technology to harm civilians — which still probably would not be legally-binding — I fail to see any way in which companies can state with confidence that the technology will not be used (or misused) in ways that have kinetic effects.”

    The presentation document provides no further detail about how exactly battlefield management systems could use DALL-E. The reference to training these systems, however, suggests that DALL-E could be used to furnish the Pentagon with so-called synthetic training data: artificially created scenes that closely resemble germane, real-world imagery. Military software designed to detect enemy targets on the ground, for instance, could be shown a massive quantity of fake aerial images of landing strips or tank columns generated by DALL-E in order to better recognize such targets in the real world.

    Even putting aside ethical objections, the efficacy of such an approach is debatable. “It’s known that a model’s accuracy and ability to process data accurately deteriorates every time it is further trained on AI-generated content,” said Heidy Khlaaf, a machine learning safety engineer who previously contracted with OpenAI. “Dall-E images are far from accurate and do not generate images reflective even close to our physical reality, even if they were to be fine-tuned on inputs of Battlefield management system. These generative image models cannot even accurately generate a correct number of limbs or fingers, how can we rely on them to be accurate with respect to a realistic field presence?”

    In an interview last month with the Center for Strategic and International Studies, Capt. M. Xavier Lugo of the U.S. Navy envisioned a military application of synthetic data exactly like the kind DALL-E can crank out, suggesting that faked images could be used to train drones to better see and recognize the world beneath them.

    Lugo, mission commander of the Pentagon’s generative AI task force and member of the Department of Defense Chief Digital and Artificial Intelligence Office, is listed as a contact at the end of the Microsoft presentation document. The presentation was made by Microsoft employee Nehemiah Kuhns, a “technology specialist” working on the Space Force and Air Force.

    The Air Force is currently building the Advanced Battle Management System, its portion of a broader multibillion-dollar Pentagon project called Joint All-Domain Command and Control, which aims to network together the entire U.S. military for expanded communication across branches, AI-powered data analysis, and, ultimately, an improved capacity to kill. Through JADC2, as the project is known, the Pentagon envisions a near future in which Air Force drone cameras, Navy warship radar, Army tanks, and Marines on the ground all seamlessly exchange data about the enemy in order to better destroy them.

    On April 3, U.S. Central Command revealed it had already begun using elements of JADC2 in the Middle East.

    The Department of Defense didn’t answer specific questions about the Microsoft presentation, but spokesperson Tim Gorman told The Intercept that “the [Chief Digital and Artificial Intelligence Office’s] mission is to accelerate the adoption of data, analytics, and AI across DoD. As part of that mission, we lead activities to educate the workforce on data and AI literacy, and how to apply existing and emerging commercial technologies to DoD mission areas.”

    While Microsoft has long reaped billions from defense contracts, OpenAI only recently acknowledged it would begin working with the Department of Defense. In response to The Intercept’s January report on OpenAI’s military-industrial about-face, the company’s spokesperson Niko Felix said that even under the loosened language, “Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property.”


    Whether the Pentagon’s use of OpenAI software would entail harm or not might depend on a literal view of how these technologies work, akin to arguments that the company that helps build the gun or trains the shooter is not responsible for where it’s aimed or pulling the trigger. “They may be threading a needle between the use of [generative AI] to create synthetic training data and its use in actual warfighting,” said Lucy Suchman, professor emerita of anthropology of science and technology at Lancaster University. “But that would be a spurious distinction in my view, because the point is you’re contributing to preparation for warfighting.”

    Unlike OpenAI, Microsoft makes little pretense of forgoing harm in its “responsible AI” document and openly promotes the use of its machine learning tools in the military’s “kill chain.”

    Following its policy reversal, OpenAI was also quick to emphasize to the public and business press that its collaboration with the military was of a defensive, peaceful nature. In a January interview at Davos responding to The Intercept’s reporting, OpenAI vice president of global affairs Anna Makanju assured panel attendees that the company’s military work was focused on applications like cybersecurity initiatives and veteran suicide prevention, and that the company’s groundbreaking machine learning tools were still forbidden from causing harm or destruction.

    Contributing to the development of a battle management system, however, would place OpenAI’s military work far closer to warfare itself. While OpenAI’s claim of avoiding direct harm could be technically true if its software does not directly operate weapons systems, Khlaaf, the machine learning safety engineer, said, its “use in other systems, such as military operation planning or battlefield assessments” would ultimately impact “where weapons are deployed or missions are carried out.”

    Indeed, it’s difficult to imagine a battle whose primary purpose isn’t causing bodily harm and property damage. An Air Force press release from March, for example, describes a recent battle management system exercise as delivering “lethality at the speed of data.”

    Other materials from the AI literacy seminar series make clear that “harm” is, ultimately, the point. A slide from a welcome presentation given the day before Microsoft’s asks the question, “Why should we care?” The answer: “We have to kill bad guys.” In a nod to the “literacy” aspect of the seminar, the slide adds, “We need to know what we’re talking about… and we don’t yet.”

    The post Microsoft Pitched OpenAI’s DALL-E as Battlefield Tool for U.S. Military appeared first on The Intercept.


  • The Israeli military has reportedly implemented a facial recognition dragnet across the Gaza Strip, scanning ordinary Palestinians as they move throughout the ravaged territory, attempting to flee the ongoing bombardment and seeking sustenance for their families.

    The program relies on two different facial recognition tools, according to the New York Times: one made by the Israeli contractor Corsight, and the other built into the popular consumer image organization platform offered through Google Photos. An anonymous Israeli official told the Times that Google Photos worked better than any of the alternative facial recognition tech, helping the Israelis make a “hit list” of alleged Hamas fighters who participated in the October 7 attack.

    The mass surveillance of Palestinian faces resulting from Israel’s efforts to identify Hamas members has caught up thousands of Gaza residents since the October 7 attack. Many of those arrested or imprisoned, often with little or no evidence, later said they had been brutally interrogated or tortured. In its facial recognition story, the Times pointed to Palestinian poet Mosab Abu Toha, whose arrest and beating at the hands of the Israeli military began with its use of facial recognition. Abu Toha, later released without being charged with any crime, told the paper that Israeli soldiers told him his facial recognition-enabled arrest had been a “mistake.”

    Putting aside questions of accuracy — facial recognition systems are notoriously less accurate on nonwhite faces — the use of Google Photos’s machine learning-powered analysis features to place civilians under military scrutiny, or worse, is at odds with the company’s clearly stated rules. Under the header “Dangerous and Illegal Activities,” Google warns that Google Photos cannot be used “to promote activities, goods, services, or information that cause serious and immediate harm to people.”


    Asked how a prohibition against using Google Photos to harm people was compatible with the Israel military’s use of Google Photos to create a “hit list,” company spokesperson Joshua Cruz declined to answer, stating only that “Google Photos is a free product which is widely available to the public that helps you organize photos by grouping similar faces, so you can label people to easily find old photos. It does not provide identities for unknown people in photographs.” (Cruz did not respond to repeated subsequent attempts to clarify Google’s position.)

    It’s unclear how such prohibitions — or the company’s long-standing public commitments to human rights — are being applied to Israel’s military.

    “It depends how Google interprets ‘serious and immediate harm’ and ‘illegal activity’, but facial recognition surveillance of this type undermines rights enshrined in international human rights law — privacy, non-discrimination, expression, assembly rights, and more,” said Anna Bacciarelli, the associate tech director at Human Rights Watch. “Given the context in which this technology is being used by Israeli forces, amid widespread, ongoing, and systematic denial of the human rights of people in Gaza, I would hope that Google would take appropriate action.”

    Doing Good or Doing Google?

    In addition to its terms of service ban against using Google Photos to cause harm to people, the company has for many years claimed to embrace various global human rights standards.

    “Since Google’s founding, we’ve believed in harnessing the power of technology to advance human rights,” wrote Alexandria Walden, the company’s global head of human rights, in a 2022 blog post. “That’s why our products, business operations, and decision-making around emerging technologies are all informed by our Human Rights Program and deep commitment to increase access to information and create new opportunities for people around the world.”

    This deep commitment includes, according to the company, upholding the Universal Declaration of Human Rights — which forbids torture — and the U.N. Guiding Principles on Business and Human Rights, which notes that conflicts over territory produce some of the worst rights abuses.

    The Israeli military’s use of a free, publicly available Google product like Photos raises questions about these corporate human rights commitments, and the extent to which the company is willing to actually act upon them. Google says that it endorses and subscribes to the U.N. Guiding Principles on Business and Human Rights, a framework that calls on corporations “to prevent or mitigate adverse human rights impacts that are directly linked to their operations, products or services by their business relationships, even if they have not contributed to those impacts.”

    Walden also said Google supports the Conflict-Sensitive Human Rights Due Diligence for ICT Companies, a voluntary framework that helps tech companies avoid the misuse of their products and services in war zones. Among the document’s many recommendations is that companies like Google consider the risk of “Use of products and services for government surveillance in violation of international human rights law norms causing immediate privacy and bodily security impacts (i.e., to locate, arrest, and imprison someone).” (Neither JustPeace Labs nor Business for Social Responsibility, which co-authored the due-diligence framework, replied to a request for comment.)

    “Google and Corsight both have a responsibility to ensure that their products and services do not cause or contribute to human rights abuses,” said Bacciarelli. “I’d expect Google to take immediate action to end the use of Google Photos in this system, based on this news.”

    Google employees taking part in the No Tech for Apartheid campaign, a worker-led protest movement against Project Nimbus, called on their employer to prevent the Israeli military from using Photos’s facial recognition to prosecute the war in Gaza.

    “That the Israeli military is even weaponizing consumer technology like Google Photos, using the included facial recognition to identify Palestinians as part of their surveillance apparatus, indicates that the Israeli military will use any technology made available to them — unless Google takes steps to ensure their products don’t contribute to ethnic cleansing, occupation, and genocide,” the group said in a statement shared with The Intercept. “As Google workers, we demand that the company drop Project Nimbus immediately, and cease all activity that supports the Israeli government and military’s genocidal agenda to decimate Gaza.”

    Project Nimbus

    This would not be the first time Google’s purported human rights principles contradict its business practices — even just in Israel. Since 2021, Google has sold the Israeli military advanced cloud computing and machine-learning tools through its controversial “Project Nimbus” contract.

    Unlike Google Photos, a free consumer product available to anyone, Project Nimbus is a bespoke software project tailored to the needs of the Israeli state. Both Nimbus and Google Photos’s face-matching prowess, however, are products of the company’s immense machine-learning resources.

    The sale of these sophisticated tools to a government so regularly accused of committing human rights abuses and war crimes stands in opposition to Google’s AI Principles. The guidelines forbid AI uses that are likely to cause “harm,” including any application “whose purpose contravenes widely accepted principles of international law and human rights.”

    Google has previously suggested its “principles” are in fact far narrower than they appear, applying only to “custom AI work” and not the general use of its products by third parties. “It means that our technology can be used fairly broadly by the military,” a company spokesperson told Defense One in 2022.

    How, or if, Google ever turns its executive-blogged assurances into real-world consequences remains unclear. Ariel Koren, a former Google employee who said she was forced out of her job in 2022 after protesting Project Nimbus, placed Google’s silence on the Photos issue in a broader pattern of avoiding responsibility for how its technology is used.

    “It is an understatement to say that aiding and abetting a genocide constitutes a violation of Google’s AI principles and terms of service,” Koren, now an organizer with No Tech for Apartheid, told The Intercept. “Even in the absence of public comment, Google’s actions have made it clear that the company’s public AI ethics principles hold no bearing or weight in Google Cloud’s business decisions, and that even complicity in genocide is not a barrier to the company’s ruthless pursuit of profit at any cost.”

    The post Google Won’t Say Anything About Israel Using Its Photo Software to Create Gaza “Hit List” appeared first on The Intercept.


  • Citing the company’s “failure to provide answers to important questions,” Sens. Elizabeth Warren, D-Mass., and Bernie Sanders, I-Vt., are pressing Meta, which owns Facebook and Instagram, to respond to reports of disproportionate censorship around the Israeli war on Gaza.

    “Meta insists that there’s been no discrimination against Palestinian-related content on their platforms, but at the same time, is refusing to provide us with any evidence or data to support that claim,” Warren told The Intercept. “If its ad-hoc changes and removal of millions of posts didn’t discriminate against Palestinian-related content, then what’s Meta hiding?”

    In a letter to Meta CEO Mark Zuckerberg sent last December, first reported by The Intercept, Warren presented the company with dozens of specific questions about its Gaza-related content moderation efforts. Warren asked about the exact numbers of posts about the war, broken down by Hebrew or Arabic, that have been deleted or otherwise suppressed.

    The letter was written following widespread reporting in The Intercept and other outlets that detailed how posts on Meta platforms that are sympathetic to Palestinians, or merely depicting the destruction in Gaza, are routinely removed or hidden without explanation.

    A month later, Meta replied to Warren’s office with a six-page letter, obtained by The Intercept, that provided an overview of its moderation response to the war but little in the way of specifics or new information.

    Meta’s reply disclosed some censorship: “In the nine days following October 7, we removed or marked as disturbing more than 2,200,000 pieces of content in Hebrew and Arabic for violating our policies.” The company declined, however, to provide a breakdown of deletions by language or market, making it impossible to tell whether that figure reflects discriminatory moderation practices.

    Much of Meta’s letter is a rehash of an update it provided through its public relations portal at the war’s onset, some of it verbatim.

    Now, a second letter from Warren to Meta, joined this time by Sanders, says this isn’t enough. “Meta’s response, dated January 29, 2024, did not provide any of the requested information necessary to understand Meta’s treatment of Arabic language or Palestine-related content versus other forms of content,” the senators wrote.

    Both senators are asking Meta to again answer Warren’s specific questions about the extent to which Arabic and Hebrew posts about the war have been treated differently, how often censored posts are reinstated, Meta’s use of automated machine learning-based censorship tools, and more.

    Accusations of systemic moderation bias against Palestinians have been borne out by research from rights groups.

    “Since October 7, Human Rights Watch has documented over 1,000 cases of unjustified takedowns and other suppression of content on Instagram and Facebook related to Palestine and Palestinians, including about human rights abuses,” Human Rights Watch said in a late December report. “The censorship of content related to Palestine on Instagram and Facebook is systemic, global, and a product of the company’s failure to meet its human rights due diligence responsibilities.”

    A February report by Access Now said Meta “suspended or restricted the accounts of Palestinian journalists and activists both in and outside of Gaza, and arbitrarily deleted a considerable amount of content, including documentation of atrocities and human rights abuses.”

    A third-party audit commissioned by Meta itself previously concluded it had given short shrift to Palestinian rights during a May 2021 flare-up of violence between Israel and Hamas, the militant group that controls the Gaza Strip. “Meta’s actions in May 2021 appear to have had an adverse human rights impact … on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred,” said the auditor’s report.

    In response to this audit, Meta pledged an array of reforms, which free expression and digital rights advocates say have yet to produce a material improvement.

    In its December report, Human Rights Watch noted, “More than two years after committing to publishing data around government requests for taking down content that is not necessarily illegal, Meta has failed to increase transparency in this area.”

    The post Meta Refuses to Answer Questions on Gaza Censorship, Say Sens. Warren and Sanders appeared first on The Intercept.


  • Ten years ago, the internet platform X, then known as Twitter, sued the government in hopes of forcing transparency around abuse-prone surveillance of social media users. X’s court battle, though, clashes with an uncomfortable fact: The company is itself in the business of government surveillance of social media.

    Under the new ownership of Elon Musk, X continued the litigation until its defeat in January. The suit was aimed at overturning a governmental ban on disclosing the receipt of requests, known as national security letters, that compel companies to turn over everything from user metadata to private direct messages. Companies that receive these requests are typically legally bound to keep the request secret and can usually only disclose the number they’ve received in a given year in vague numerical ranges.

    In its petition to the Supreme Court last September, X’s attorneys took up the banner of communications privacy: “History demonstrates that the surveillance of electronic communications is both a fertile ground for government abuse and a lightning-rod political topic of intense concern to the public.” After the court declined to take up the case in January, Musk responded by tweeting, “Disappointing that the Supreme Court declined to hear this matter.”

    The court’s refusal to take the case on ended X’s legal bid, but the company and Musk had positioned themselves at the forefront of a battle on behalf of internet users for greater transparency about government surveillance.

    However, emails between the U.S. Secret Service and the surveillance firm Dataminr, obtained by The Intercept from a Freedom of Information Act request, show X is in an awkward position, profiting from the sale of user data for government surveillance purposes at the same time as it was fighting secrecy around another flavor of state surveillance in court.

    While national security letters allow the government to make targeted demands for non-public data on an individual basis, companies like Dataminr continuously monitor public activity on social media and other internet platforms. Dataminr provides its customers with customized real-time “alerts” on desired topics, giving clients like police departments a form of social media omniscience. The alerts allow police to, for instance, automatically track a protest as it moves from its planning stages into the streets, without requiring police officials to do any time-intensive searches.

    Although Dataminr defends First Alert, its governmental surveillance platform, as a public safety tool that helps first responders react quickly to sudden crises, the tool has been repeatedly shown to be used by police to monitor First Amendment-protected online political speech and real-world protests.

    “The Whole Point”

    Dataminr has long touted its special relationship with X as integral to First Alert. (Twitter previously owned a stake in Dataminr, though it divested before Musk’s purchase.) Unlike other platforms it surveils by scraping user content, Dataminr pays for privileged access to X through the company’s “firehose”: a direct, unfiltered feed of every single piece of user content ever shared publicly to the platform.

    Watching everything that happens on X in real time is key to Dataminr’s pitch to the government. The company essentially leases indirect access to this massive spray of information, with Dataminr acting as an intermediary between X’s servers and a multitude of police, intelligence, and military agencies.

    While it was unclear whether, under Musk, X would continue leasing access to its users to Dataminr — and by extension, the government — the emails from the Secret Service confirm that, as of last summer, the social media platform was still very much in the government surveillance business.

    “Dataminr has a unique contractual relationship with Twitter, whereby we have real-time access to the full stream of all publicly available Tweets,” a representative of the surveillance company wrote to the Secret Service in a July 2023 message about the terms of the law enforcement agency’s surveillance subscription. “In addition all of Dataminr’s public sector customers today have agreed to these terms including dozens who are responsible for law enforcement whether at the local, state or federal level.” (The terms are not mentioned in the emails.)

    According to an email from the Secret Service in the same thread, the agency’s interest in Dataminr was unambiguous: “The whole point of this contract is to use the information for law enforcement purposes.”

    Privacy advocates told The Intercept that X’s Musk-era warnings of government surveillance abuses are contradictory to the company’s continued sale of user data for the purpose of government surveillance. (Neither X nor Dataminr responded to a request for comment.)

    “X’s legal briefs acknowledge that communications surveillance is ripe for government abuse, and that we can’t depend on the police to police themselves,” said Jennifer Granick, the surveillance and cybersecurity counsel at the American Civil Liberties Union’s Speech, Privacy, and Technology Project. “But then X turns around and sells Dataminr fire-hose access to users’ posts, which Dataminr then passes through to the government in the form of unregulated disclosures and speculative predictions that can falsely ensnare the innocent.”


    “Social media platforms should protect the privacy of their users,” said Adam Schwartz, the privacy litigation director at the Electronic Frontier Foundation, which filed an amicus brief in support of X’s Supreme Court petition. “For example, platforms must not provide special services, like real-time access to the full stream of public-facing posts, to surveillance vendors who share this information with police departments. If X is providing such access to Dataminr, that would be disappointing.”

    “Glaringly at Odds”

    Following a 2016 ACLU investigation into the use of Twitter data for police surveillance, the company expressly banned third parties from “conducting or providing surveillance or gathering intelligence” and “monitoring sensitive events (including but not limited to protests, rallies, or community organizing meetings)” using firehose data. The new policy went so far as to ban the use of firehose data for purposes pertaining to “any alleged or actual commission of a crime” — ostensibly a problem for Dataminr’s crime-fighting clientele.

    These assurances have done nothing to stop Dataminr from using the data it buys from X to do exactly these things. Prior reporting from The Intercept has shown the company has, in recent years, helped federal and local police surveil entirely peaceful Black Lives Matter protests and abortion rights rallies.

    Neither X nor Dataminr has responded to repeated requests to explain how a tool that allows for the real-time monitoring of protests is permitted under a policy that expressly bans the monitoring of protests. In the past, both Dataminr and X have denied that monitoring the real-time communications of people on the internet and relaying that information to the police is a form of surveillance because the posts in question are public.

    Twitter later softened this prohibition by noting that surveillance applications were banned “Unless explicitly approved by X in writing.” Dataminr, for its part, remains listed as an “official partner” of X.

    Though the means differ, national security scholars told The Intercept that the ends of national security letters and fire-hose monitoring are the same: widespread government surveillance with little to no meaningful oversight. Neither the national security letters nor dragnet social media surveillance require a sign-off from a judge and, in both cases, those affected are left unaware they’ve fallen under governmental scrutiny.

    “While I appreciate that there may be some symbolic difference between giving the government granular data directly and making them sift through what they buy from data brokers, the end result is still that user data ends up in the hands of law enforcement, and this time without any legal process,” said David Greene, civil liberties director at EFF.


    It’s the kind of ideological contradiction typical of X’s owner. Musk has managed to sell himself as a heterodox critic of U.S. foreign policy and big government while simultaneously enriching himself by selling the state expensive military hardware through his rocket company SpaceX.

    “While X’s efforts to bring more transparency to the National Security Letter process are commendable, its objection to government surveillance of communications in that context is glaringly at odds with its decision to support similar surveillance measures through its partnership with Dataminr,” said Mary Pat Dwyer, director of Georgetown University’s Law Institute for Technology Law and Policy. “Scholars and advocates have long argued the Dataminr partnership is squarely inconsistent with the platform’s policy forbidding use of its data for surveillance, and X’s continued failure to end the relationship prevents the company from credibly portraying itself as an advocate for its users’ privacy.”

    The post Elon Musk Fought Government Surveillance — While Profiting Off Government Surveillance appeared first on The Intercept.


  • Among the many hawks on Capitol Hill, few have as effectively frightened lawmakers over Chinese control of TikTok as Jacob Helberg, a member of the U.S.–China Economic and Security Review Commission. Helberg’s day job at the military contractor Palantir, however, means he stands to benefit from ever-frostier relations between the two countries.

    Helberg has been instrumental in the renewed legislative fight against TikTok, according to the Wall Street Journal. “Spearheading the effort to create the bipartisan, bicoastal alliance of China hawks is Jacob Helberg,” the Journal reported in March 2023. The paper noted collaboration between Helberg, previously a policy adviser at Google, and investor and fellow outspoken China hawk Peter Thiel, as well as others in Thiel’s circle. The anti-China coalition, the Journal reported this past week, has been hammering away at a TikTok ban, and Helberg said he has spoken to over 100 members of Congress about the video-sharing social media app.

    From his position on the U.S.–China commission — founded by Congress to advise it on national security threats represented by China — Helberg has deployed rhetoric around TikTok as jingoistic as any politician’s. “TikTok is a scourge attacking our children and our social fabric, a threat to our national security, and likely the most extensive intelligence operation a foreign power has ever conducted against the United States,” he said in a February hearing held by the commission.

    Unlike those in government he’s supposed to be advising, however, Helberg has another gig: He is a policy adviser to Alex Karp, CEO of the defense and intelligence contractor Palantir. And Palantir, like its industry peers, could stand to profit from increased hostility between China and the United States. The issue has been noted by publications like Fortune, which reported in September 2023 that Palantir relies heavily on government contracts for AI work — a business that would grow in a tech arms race with China. (Neither the U.S.–China commission nor Palantir responded to requests for comment.)


    Experts told The Intercept that there didn’t appear to be a legal conflict that would exclude Helberg from the commission, but the participation of tech company officials could nonetheless create competing interests between sound policymaking and corporate profits.

    “It is a clear conflict-of-interest to have an advisor to Palantir serve on a commission that is making sensitive recommendations about economic and security relations between the U.S. and China,” said Bill Hartung, a senior research fellow at the Quincy Institute for Responsible Statecraft and scholar of the U.S. defense industry. “From their perspective, China is a mortal adversary and the only way to ‘beat’ them is to further subsidize the tech sector so we can rapidly build next generation systems that can overwhelm China in a potential conflict — to the financial benefit of Palantir and its Silicon Valley allies.”

    Big Tech’s China Hawks

    Helberg’s activities are part of a much broader constellation of anti-China advocacy orbiting around Peter Thiel, who co-founded Palantir in 2003 and is still invested in the company. (Helberg’s husband, the venture capitalist Keith Rabois, spent five years as a partner at Thiel’s Founders Fund). Thiel, also an early investor in Pentagon aerospace contractor SpaceX and weaponsmaker Anduril, has for years blasted the Chinese tech sector as inherently malignant — claims, like Helberg’s, made with more than trace amounts of paranoia and xenophobia.

    Thiel’s remarks on China are characteristically outlandish. Speaking at the MAGA-leaning National Conservatism Conference in 2019, Thiel suggested, without evidence, that Google had been “infiltrated by Chinese intelligence” — and urged a joint CIA–FBI investigation. In 2021, at a virtual event held by the Richard Nixon Foundation, Thiel said, “I do wonder whether at this point, bitcoin should also be thought [of] in part as a Chinese financial weapon against the U.S.”

    For Thiel’s camp, conflict with China is both inevitable and necessary. U.S.–China research cooperation on artificial intelligence, he says, is treacherous, and Chinese technology, generally, is anathema to national security. As he inveighs against Chinese tech, Thiel’s portfolio companies stand by with handy solutions. Palantir, for instance, began ramping up its own Made-in-the-USA militarized AI offerings last year. Anduril executives engage in routine fearmongering over China, all the while pitching their company’s weapons as just the thing to thwart an invasion of Taiwan. “Everything that we’re doing, what the [Department of Defense] is doing, is preparing for a conflict with a great power like China in the Pacific,” Anduril CEO Palmer Luckey told Bloomberg TV last year.

    Palantir is making a similar pitch. In a 2023 quarterly earnings call, Palantir Chief Operations Officer Shyam Sankar told investors that the company had China in mind as it continues to grow its reach into the Western Pacific. On another Palantir earnings call, Karp, the company’s CEO and Helberg’s boss, told investors the more dangerous and real the Chinese threat gets, “the more battle-tested and real your software has to be. I believe it’s about to get very real. Why? Because our GDP growth is significantly better than China’s.” Even marketing images distributed by Palantir show the company’s software being used to track Chinese naval maneuvers in the South China Sea.

    Thiel is not alone among Silicon Valley brass. At a February 2023 panel event, a representative of America’s Frontier Fund, a national security-oriented technology investment fund that pools private capital and federal dollars, said that a war between China and Taiwan would boost the firm’s profits by an order of magnitude. Private sector contributors to America’s Frontier Fund include both Thiel and former Google chair Eric Schmidt, whose China alarmism and defense-spending boosterism rival Thiel’s — and who similarly stands to personally profit from escalations with China.

    TikTok, Bad! China, Bad!

    Repeated often enough, anti-TikTok rhetoric from tech luminaries serves to reinforce the notion that China is the enemy of the U.S. and that countering this enemy is worth the industry’s price tags — even if the app’s national security threat remains entirely hypothetical.

    “Just like tech had to convince people that crypto and NFTs had intrinsic value, they also have to convince the Pentagon that the forms of warfare that their technologies make possible are intrinsically superior or fill a gap,” Shana Marshall, an arms industry scholar at George Washington University’s Elliott School of International Affairs, told The Intercept.

    Marshall said bodies like the U.S.–China Economic and Security Review Commission can contribute to such conflicts because advisory boards that encourage revolving-door moves between private firms and government help embed corporate interests in policymaking. “In other words,” she said, “it’s not a flaw in the program, it’s an intentional design element.”

    “The tensions with China/Taiwan are tailor made for this argumentation,” Marshall added. “You couldn’t get better cases — or better timing — so grifters and warmongers like Helberg and Schmidt are going to be increasingly integrated into Pentagon planning and all aspects of regulation.”

    Forcing divestiture or banning TikTok outright would not, on its own, trigger armed conflict between the U.S. and China, but the pending legislation to effectively ban the app is already dialing up hostility between the two countries. After the House overwhelmingly passed the bill last week to force the app’s sale out of Chinese hands, the Financial Times reported that Chinese foreign ministry spokesperson Wang Wenbin accused the U.S. of displaying a “robber’s logic” through legislative expropriation. An editorial in the Chinese government mouthpiece Global Times decried the bill as little more than illegal “commercial plunder” and urged TikTok parent company ByteDance not to back down.


    Of course, self-interest is hardly a deviation from the norm in the military-industrial complex. “This is the way Washington works,” said Scott Amey, general counsel at the Project on Government Oversight, a watchdog group. “There are a lot of defense contractors that are discussing the China threat with an eye out on their bottom line.”

    Although it’s common for governmental advisory boards like Helberg’s U.S.–China commission to enthusiastically court the private sector, Amey said Helberg should make clear, when making policy recommendations, whether he is speaking as an adviser to Palantir’s CEO or to Congress. Even then, disclosure wouldn’t negate Helberg’s personal interest in a second Cold War. Such disclosures have been uneven: While Helberg’s U.S.–China commission bio leads with his Palantir job, the company went unmentioned in the February hearing. Given the ongoing campaign to pass the TikTok bill, Helberg’s lobbying “certainly raises some red flags,” Amey said.

    He said, “The industry is hawkish on China but has a financial interest in the decisions that the executive branch or Congress make.”

    The post Tech Official Pushing TikTok Ban Could Reap Windfall From U.S.–China Cold War appeared first on The Intercept.


  • Facebook and Instagram’s parent company, Meta, is contemplating stricter rules around discussing Israeli nationalism on its platforms, a major policy change that could stifle criticism and free expression about the war in Gaza and beyond, five civil society sources who were briefed on the potential change told The Intercept.

    “Meta is currently revisiting its hate speech policy, specifically in relation to the term ‘Zionist,’” reads a January 30 email sent to civil society groups by Meta policy personnel and reviewed by The Intercept. While the email says Meta has not made a final determination, it is soliciting feedback on a potential policy change from civil society and digital rights groups, according to the sources. The email notes that “Meta is reviewing this policy in light of content that users and stakeholders have recently reported” but does not detail the content in question or name any stakeholders.

    “As an anti-Zionist Jewish organization for Palestinian freedom, we are horrified to learn that Meta is considering expanding when they treat ‘Zionism’ — a political ideology — as the same as ‘Jew/Jewish’ — an ethno-religious identity,” said Dani Noble, an organizer with Jewish Voice for Peace, one of the groups Meta has contacted to discuss the possible change. Noble added that such a policy shift “will result in shielding the Israeli government from accountability for its policies and actions that violate Palestinian human rights.”

    For years, Meta has allowed its 3 billion users around the world to employ the term “Zionist,” which refers to supporters of the historical movement to create a Jewish state in the Middle East, as well as backers of modern-day nationalism in support of that state and its policies.

    Meta’s internal rules around the word “Zionist,” first reported by The Intercept in 2021, show that company moderators are only supposed to delete posts using the term if it’s determined to be a proxy for “Jewish” or “Israeli,” both protected classes under company speech rules. The policy change Meta is now considering would enable the platform’s moderators to more aggressively and expansively enforce this rule, a move that could dramatically increase deletions of posts critical of Israeli nationalism.

    “We don’t allow people to attack others based on their protected characteristics, such as their nationality or religion. Enforcing this policy requires an understanding of how people use language to reference those characteristics,” Meta spokesperson Corey Chambliss told The Intercept. “While the term Zionist often refers to a person’s ideology, which is not a protected characteristic, it can also be used to refer to Jewish or Israeli people. Given the increase in polarized public discourse due to events in the Middle East, we believe it’s important to assess our guidance for reviewing posts that use the term Zionist.”

    In the months since October 7, staunchly pro-Israel groups like the Anti-Defamation League have openly called for treating anti-Zionism as a form of antisemitism, pointing out that the word is often used by antisemites as a stand-in for “Jew.” The ADL and American Jewish Committee, another pro-Israel, Zionist advocacy group in the U.S., have both been lobbying Meta to restrict use of the word “Zionist,” according to Yasmine Taeb, legislative and political director at the Muslim grassroots advocacy group MPower Change. In his statement, Chambliss responded, “We did not initiate this policy development at the behest of any outside group.”

    Taeb, who spoke to a Meta employee closely involved with the proposed policy change, said it would result in mass censorship of critical mentions of Zionism, restricting, for example, non-hateful, non-violent speech about the ongoing bloodshed in Gaza.

    While a statement as general as “I don’t like Zionists” could be uttered by an antisemitic Instagram user as a means of expressing dislike for Jews, civil society advocates point out that there is nothing inherently or necessarily anti-Jewish about the statement. Indeed, much of the fiercest political activism against Israel’s war in Gaza has been organized by anti-Zionist Jews, while American evangelical Christian Zionists are some of Israel’s most hardcore supporters.

    “The suppression of pro-Palestinian speech critical of Israel is happening specifically during the genocide in Gaza,” Taeb said in an interview. “Meta should instead be working on implementing policies to make sure political speech is not being suppressed, and they’re doing the exact opposite.”

    According to presentation materials reviewed by The Intercept, Meta has been sharing with stakeholders a series of hypothetical posts that could be deleted under a stricter policy, and soliciting feedback as to whether they should be. While one example seemed like a clear case of conspiratorial antisemitic tropes about Jewish control of the news media, others were critical of Israeli state policy or supporters of that policy, not Judaism, said Nadim Nashif, executive director of the Palestinian digital rights group 7amleh, who was briefed this week by Meta via video conference. Meta plans to brief U.S. stakeholder groups on Friday morning, according to its outreach email.

    Examples of posts Meta could censor under a new policy included the statements: “Zionists are war criminals, just look at what’s happening in Gaza”; “I don’t like Zionists”; and “No Zionists allowed at tonight’s meeting of the Progressive Student Association.” Nashif said that one example — “Just read the news every day. A coalition of Zionists, Americans and Europeans tries to rule the world.” — was described by Peter Stern, Meta’s director of content policy stakeholder engagement, as possibly hateful because it engaged in conspiratorial thinking about Jews.

    In an interview with The Intercept, Nashif disagreed, arguing that criticism of the strategic alliance and foreign policy alignment between the U.S., European states, and Israel should not be conflated with conspiratorial bigotry against Judaism, or collapsed into bigoted delusions of global Jewish influence. In their meeting, Nashif said, Stern acknowledged that Zionism is a political ideology, not an ethnic group, despite the prospect of enforcement that would treat it more like the latter. “I think it may actually harm the fight against antisemitism, conflating Zionism and the Israeli government with Judaism,” Nashif told The Intercept.

    It will be difficult or impossible to determine whether someone says they “don’t like” Zionists with a hateful intent, Nashif said, adding: “You’d need telepathy.” Meta has yet to share with those it has briefed any kind of general principles, rules, or definitions that would guide this revised policy or help moderators enforce it, Nashif said. But given the company’s systematic censorship of Palestinian and other Arab users of its platforms, Nashif and others familiar with the potential change fear it would make free expression in the Arab world even more perilous.

    “As anti-Zionist Jews, we have seen how the Israeli government and its supporters have pushed an agenda that falsely claims that equating ‘Zionist’ with ‘Jew’ or ‘Jewish’ will somehow keep Jews safe,” added Noble of Jewish Voice for Peace. “Not only does conflating anti-Zionism and antisemitism harm all people who fight for human rights in the world by stifling legitimate criticism of a state and military, it also does nothing to actually keep our community safe while undermining our collective efforts to dismantle real antisemitism and all forms of racism, extremism and oppression.”

    The post Meta Considering Increased Censorship of the Word “Zionist” appeared first on The Intercept.


  • Notorious Blackwater founder and perennial mercenary entrepreneur Erik Prince has a new business venture: a cellphone company whose marketing rests atop a pile of muddled and absurd claims of immunity to surveillance. On a recent episode of his podcast, Prince claimed that his special phone’s purported privacy safeguards could have prevented many of the casualties from Hamas’s October 7 attack.

    The inaugural episode of “Off Leash with Erik Prince,” the podcast he co-hosts with former Trump campaign adviser Mark Serrano, focused largely on the Hamas massacre and various intelligence failures of the Israeli military. Toward the end of the November 2 episode, following a brief advertisement for Prince’s new phone company, Unplugged, Serrano asked how Hamas had leveraged technology to plan the attack. “I think that when the post-op of this disaster is done, I think the main source of intel for Hamas was cellphone data,” Serrano claimed, without evidence. “How does Gaza access that data? I mean, Hamas?”

    Prince answered that location coordinates, commonly leaked from phones via advertising data, were surely crucial to Hamas’s ability to locate Israel Defense Forces installations and kibbutzim.

    Serrano, apparently sensing an opportunity to promote Prince’s $949 “double encrypted” phone, continued: “If all of Israel had Unplugged [phones] on October 7, what would that have done to Hamas’s strategy?”

    Prince didn’t miss a beat. “I will almost guarantee that whether it’s the people living on kibbutzes, but especially the 19, 20, 21-year-old kids that are serving in the IDF, if they’re not on duty, they’re on their phones and on social media, and that cellphone data was tracked and collected and used for targeting by Hamas,” he said. “This phone, Unplugged, prevents that from happening.”

    Unplugged’s product documentation is light on details, privacy researcher Zach Edwards told The Intercept, and the features the company touts can be replicated on most phones just by tinkering with settings. Both Android devices and iPhones, Edwards pointed out, allow users to deactivate their advertising IDs. It’s unclear what makes Unplugged any different, let alone a tool that could have thwarted the Hamas attack. “Folks should wait for proof before accepting those claims,” Edwards said.
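
    To illustrate the mechanism at issue, here is a minimal, hypothetical Kotlin sketch, assuming only the standard Google Play Services ads-identifier library (the helper name readAdvertisingId is invented for illustration): it shows how an ordinary app on a stock Android phone reads the advertising ID that ad networks use for cross-app tracking, and how the system already signals when a user has switched that tracking off in settings, the deactivation Edwards describes.

    ```kotlin
    // Hypothetical sketch, not Unplugged's code: reading the advertising ID that
    // ad networks use for cross-app tracking on a stock Android phone.
    // Assumes the play-services-ads-identifier dependency; getAdvertisingIdInfo()
    // must be called off the main thread.
    import android.content.Context
    import com.google.android.gms.ads.identifier.AdvertisingIdClient

    fun readAdvertisingId(context: Context): String? = try {
        val info = AdvertisingIdClient.getAdvertisingIdInfo(context)
        if (info.isLimitAdTrackingEnabled) {
            // The user has opted out in system settings; recent Android versions
            // hand back a zeroed-out ID in this case.
            null
        } else {
            info.id
        }
    } catch (e: Exception) {
        // Play Services unavailable or the call was made on the main thread.
        null
    }
    ```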

    “Simply Not True”

    This isn’t the first time Prince has used an act of violence as a business opportunity. Following the 1999 mass shooting at Columbine High School, Prince constructed a mock school building called R U Ready High School where police could pay to train for future shootings. In 2017, he pitched the Trump White House on a plan, modeled after the British East India Company, to privatize the American war in Afghanistan with mercenaries.

    With Unplugged, Prince’s main claim seems to be that, unlike most phones, his company’s devices don’t have advertising IDs: unique codes generated by every Android and iOS phone that marketers use to surveil consumer habits, including location. Unplugged claims its phones use a customized version of Android that strips out these IDs. But the notion that Prince’s phone, which is still unavailable for purchase more than a year after it was announced, could have saved lives on October 7 was contradicted by mobile phone security experts, who told The Intercept that just about every aspect of the claim is false, speculative, or too vague to verify.

    “That is simply not true and that is not how mobile geolocation works,” said Allan Liska, an intelligence analyst at the cybersecurity firm Recorded Future. While Prince is correct that the absence of an advertising ID would diminish to some degree the amount of personal data leaked by a phone, removing it by no means cuts off the flow entirely. So long as a device is connected to cellular towers — generally considered a key feature for cellphones — it’s susceptible to tracking. “Mobile geolocation is based on tower data triangulation and there is no level of operating system security that can bypass that,” Liska added.

    Unplugged CEO Ryan Paterson told The Intercept that Prince’s statement about how his phone could have minimized Israeli deaths on October 7 “has much to do with the amount of data that the majority of cell phones in the world today create about the users of the device, their locations, patterns of life and behaviors,” citing a 2022 Electronic Frontier Foundation report on how mobile advertising data fuels police surveillance. Indeed, smartphone advertising has created an immeasurably vast global ecosystem of intimate personal data, unregulated and easily bought and sold, that can facilitate state surveillance without judicial oversight.

    “Unplugged’s UP Phone has an operating system that does not contain a [mobile advertising ID] that can be passed [on], does not have any Google Mobile Services, and has a built-in firewall that blocks applications from sending any tracker information from the device, and delivering advertisements to the phone,” Paterson added in an email. “Taking these data sources away from the Hamas planners could have seriously disrupted and limited their operations effectiveness.”

    Unplugged did not respond to a request for more detailed information about its privacy and security measures.

    Neither Erik Prince nor an attorney who represents him responded to questions from The Intercept.

    Articles of Faith

    “While it’s true that anyone could theoretically find aggregate data on populated areas and possibly more specific data on an individual using mobile advertiser identifiers, it is completely unclear if Hamas used this and the ‘could have’ in the last sentence is doing a lot of work,” William Budington, a security researcher at the Electronic Frontier Foundation who regularly scrutinizes Android systems, wrote in an email to The Intercept. “If Hamas was getting access to location information through cell tower triangulation methods (say their targets were connecting to cell towers within Gaza that they had access to), then [Prince’s] phone would be as vulnerable to this as any iOS or Android device.”

    The idea of nixing advertising IDs is by no means a privacy silver bullet. “When he is talking about advertising IDs, that is separate from location data,” Budington noted. If a phone’s user gives an app permission to access that phone’s location, there’s little to nothing Prince can do to keep that data private. “Do some apps get location data as well as an advertising ID? Yes. But his claim that Hamas had access to this information, and it was pervasively used in the attack to establish patterns of movement, is far-fetched and extremely speculative,” Budington wrote.

    Liska, who previously worked in information security within the U.S. intelligence community, agreed. “I also find the claim that Hamas was purchasing advertising/location data to be a bit preposterous as well,” he said. “Not that they couldn’t do it (I am not familiar with Israeli privacy laws) but that they would have enough intelligence to know who to target with the purchase.”

    Hamas’s assault displayed a stunningly sophisticated understanding of the Israeli state security apparatus, but there’s been no evidence that this included the use of commercially obtained mobile phone data.

    While it’s possible that Unplugged phones block all apps from requesting location tracking permission in the first place, this would break any location-based features in the phone, rendering something as basic as a mapping app useless. But even this hypothetical is impossible to verify, because the phone has yet to leave Prince’s imagination and reach any actual customers, and its customized version of Android, dubbed “LibertOS,” has never been examined by any third parties.

    While Unplugged has released a one-page security audit, conducted by PwC Digital Technology, it applied only to the company’s website and an app it offers, not the phone, making its security and privacy claims largely articles of faith.

    The post Erik Prince Claims His Vaporware Super-Phone Could Have Thwarted Hamas Attack appeared first on The Intercept.


  • OpenAI this week quietly deleted language expressly prohibiting the use of its technology for military purposes from its usage policy, which seeks to dictate how powerful and immensely popular tools like ChatGPT can be used.

    Up until January 10, OpenAI’s “usage policies” page included a ban on “activity that has high risk of physical harm, including,” specifically, “weapons development” and “military and warfare.” That plainly worded prohibition against military applications would seemingly rule out any official, and extremely lucrative, use by the Department of Defense or any other state military. The new policy retains an injunction not to “use our service to harm yourself or others” and gives “develop or use weapons” as an example, but the blanket ban on “military and warfare” use has vanished.

    The unannounced redaction is part of a major rewrite of the policy page, which the company said was intended to make the document “clearer” and “more readable,” and which includes many other substantial language and formatting changes.

    “We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs,” OpenAI spokesperson Niko Felix said in an email to The Intercept. “A principle like ‘Don’t harm others’ is broad yet easily grasped and relevant in numerous contexts. Additionally, we specifically cited weapons and injury to others as clear examples.”

    Felix declined to say whether the vaguer “harm” ban encompassed all military use, writing, “Any use of our technology, including by the military, to ‘[develop] or [use] weapons, [injure] others or [destroy] property, or [engage] in unauthorized activities that violate the security of any service or system,’ is disallowed.”

    “OpenAI is well aware of the risk and harms that may arise due to the use of their technology and services in military applications,” said Heidy Khlaaf, engineering director at the cybersecurity firm Trail of Bits and an expert on machine learning and autonomous systems safety, citing a 2022 paper she co-authored with OpenAI researchers that specifically flagged the risk of military use. Khlaaf added that the new policy seems to emphasize legality over safety. “There is a distinct difference between the two policies, as the former clearly outlines that weapons development, and military and warfare is disallowed, while the latter emphasizes flexibility and compliance with the law,” she said. “Developing weapons, and carrying out activities related to military and warfare is lawful to various extents. The potential implications for AI safety are significant. Given the well-known instances of bias and hallucination present within Large Language Models (LLMs), and their overall lack of accuracy, their use within military warfare can only lead to imprecise and biased operations that are likely to exacerbate harm and civilian casualties.”

    The real-world consequences of the policy are unclear. Last year, The Intercept reported that OpenAI was unwilling to say whether it would enforce its own clear “military and warfare” ban in the face of increasing interest from the Pentagon and U.S. intelligence community.

    “Given the use of AI systems in the targeting of civilians in Gaza, it’s a notable moment to make the decision to remove the words ‘military and warfare’ from OpenAI’s permissible use policy,” said Sarah Myers West, managing director of the AI Now Institute and a former AI policy analyst at the Federal Trade Commission. “The language that is in the policy remains vague and raises questions about how OpenAI intends to approach enforcement.”

    While nothing OpenAI offers today could plausibly be used to directly kill someone, militarily or otherwise — ChatGPT can’t maneuver a drone or fire a missile — any military is in the business of killing, or at least maintaining the capacity to kill. There are any number of killing-adjacent tasks that an LLM like ChatGPT could augment, like writing code or processing procurement orders. A review of custom ChatGPT-powered bots offered by OpenAI suggests U.S. military personnel are already using the technology to expedite paperwork. The National Geospatial-Intelligence Agency, which directly aids U.S. combat efforts, has openly speculated about using ChatGPT to aid its human analysts. Even if OpenAI tools were deployed by portions of a military force for purposes that aren’t directly violent, they would still be aiding an institution whose main purpose is lethality.

    Experts who reviewed the policy changes at The Intercept’s request said OpenAI appears to be silently weakening its stance against doing business with militaries. “I could imagine that the shift away from ‘military and warfare’ to ‘weapons’ leaves open a space for OpenAI to support operational infrastructures as long as the application doesn’t directly involve weapons development narrowly defined,” said Lucy Suchman, professor emerita of anthropology of science and technology at Lancaster University. “Of course, I think the idea that you can contribute to warfighting platforms while claiming not to be involved in the development or use of weapons would be disingenuous, removing the weapon from the sociotechnical system – including command and control infrastructures – of which it’s part.” Suchman, a scholar of artificial intelligence since the 1970s and member of the International Committee for Robot Arms Control, added, “It seems plausible that the new policy document evades the question of military contracting and warfighting operations by focusing specifically on weapons.”

    Suchman and Myers West both pointed to OpenAI’s close partnership with Microsoft, a major defense contractor, which has invested $13 billion in the LLM maker to date and resells the company’s software tools.

    The changes come as militaries around the world are eager to incorporate machine learning techniques to gain an advantage; the Pentagon is still tentatively exploring how it might use ChatGPT or other large-language models, a type of software tool that can rapidly and dextrously generate sophisticated text outputs. LLMs are trained on giant volumes of books, articles, and other web data in order to approximate human responses to user prompts. Though the outputs of an LLM like ChatGPT are often extremely convincing, they are optimized for coherence rather than a firm grasp on reality and often suffer from so-called hallucinations that make accuracy and factuality a problem. Still, the ability of LLMs to quickly ingest text and rapidly output analysis — or at least the simulacrum of analysis — makes them a natural fit for the data-laden Defense Department.

    While some within U.S. military leadership have expressed concern about the tendency of LLMs to insert glaring factual errors or other distortions, as well as security risks that might come with using ChatGPT to analyze classified or otherwise sensitive data, the Pentagon remains generally eager to adopt artificial intelligence tools. In a November address, Deputy Secretary of Defense Kathleen Hicks stated that AI is “a key part of the comprehensive, warfighter-centric approach to innovation that Secretary [Lloyd] Austin and I have been driving from Day 1,” though she cautioned that most current offerings “aren’t yet technically mature enough to comply with our ethical AI principles.”

    Last year, Kimberly Sablon, the Pentagon’s principal director for trusted AI and autonomy, told a conference in Hawaii that “[t]here’s a lot of good there in terms of how we can utilize large-language models like [ChatGPT] to disrupt critical functions across the department.”

    The post OpenAI Quietly Deletes Ban on Using ChatGPT for “Military and Warfare” appeared first on The Intercept.


  • A small group of volunteers from Israel’s tech sector is working tirelessly to remove content it says doesn’t belong on platforms like Facebook and TikTok, tapping personal connections at those and other Big Tech companies to have posts deleted outside official channels, the project’s founder told The Intercept.

    The project’s moniker, “Iron Truth,” echoes the Israeli military’s vaunted Iron Dome rocket interception system. The brainchild of Dani Kaganovitch, a Tel Aviv-based software engineer at Google, Iron Truth claims its tech industry back channels have led to the removal of roughly 1,000 posts tagged by its members as false, antisemitic, or “pro-terrorist” across platforms such as X, YouTube, and TikTok.

    In an interview, Kaganovitch said he launched the project after the October 7 Hamas attack, when he saw a Facebook video that cast doubt on alleged Hamas atrocities. “It had some elements of disinformation,” he told The Intercept. “The person who made the video said there were no beheaded babies, no women were raped, 200 bodies is a fake. As I saw this video, I was very pissed off. I copied the URL of the video and sent it to a team in [Facebook parent company] Meta, some Israelis that work for Meta, and I told them that this video needs to be removed and actually they removed it after a few days.”

    Billed as both a fight against falsehood and a “fight for public opinion,” according to a post announcing the project on Kaganovitch’s LinkedIn profile, Iron Truth vividly illustrates the perils and pitfalls of terms like “misinformation” and “disinformation” in wartime, as well as the mission creep they enable. The project’s public face is a Telegram bot that crowdsources reports of “inflammatory” posts, which Iron Truth’s organizers then forward to sympathetic insiders. “We have direct channels with Israelis who work in the big companies,” Kaganovitch said in an October 13 message to the Iron Truth Telegram group. “There are compassionate ones who take care of a quick removal.” The Intercept used Telegram’s built-in translation feature to review the Hebrew-language chat transcripts.


    So far, nearly 2,000 participants have flagged a wide variety of posts for removal, from content that’s clearly racist or false to posts that are merely critical of Israel or sympathetic to Palestinians, according to chat logs reviewed by The Intercept. “In the U.S. there is free speech,” Kaganovitch explained. “Anyone can say anything with disinformation. This is very dangerous, we can see now.”

    “The interests of a fact checking or counter-disinformation group working in the context of a war belongs to one belligerent or another. Their job is to look out for the interests of their side,” explained Emerson Brooking, a fellow with the Atlantic Council’s Digital Forensic Research Lab. “They’re not trying to ensure an open, secure, accessible online space for all, free from disinformation. They’re trying to target and remove information and disinformation that they see as harmful or dangerous to Israelis.”

    While Iron Truth appears to have frequently conflated criticism or even mere discussion of Israeli state violence with misinformation or antisemitism, Kaganovitch says his views on this are evolving. “In the beginning of the war, it was anger, most of the reporting was anger,” he told The Intercept. “Anti-Israel, anti-Zionist, anything related to this was received as fake, even if it was not.”

    The Intercept was unable to independently confirm that sympathetic workers at Big Tech firms are responding to the group’s complaints or verify that the group was behind the removal of the content it has taken credit for having deleted. Iron Truth’s founder declined to share the names of its “insiders,” stating that they did not want to discuss their respective back channels with the press. In general, “they are not from the policy team but they have connections to the policy team,” Kaganovitch told The Intercept, referring to the personnel at social media firms who set rules for permissible speech. “Most of them are product managers, software developers. … They work with the policy teams with an internal set of tools to forward links and explanations about why they need to be removed.” While companies like Meta routinely engage with various civil society groups and NGOs to discuss and remove content, these discussions are typically run through their official content policy teams, not rank-and-file employees.

    The Iron Truth Telegram account regularly credits these supposed insiders. “Thanks to the TikTok Israel team who fight for us and for the truth,” read an October 28 post on the group’s Telegram channel. “We work closely with Facebook, today we spoke with more senior managers,” according to another post on October 17. Soon after a Telegram chat member complained that something they’d posted to LinkedIn had attracted “inflammatory commenters,” the Iron Truth account replied, “Kudos to the social network LinkedIn who recruited a special team and have so far removed 60% of the content we reported on.”

    Kaganovitch said the project has allies outside Israel’s Silicon Valley annexes as well. Iron Truth’s organizers met with the director of a controversial Israeli government cyber unit, he said, and its core team of more than 50 volunteers and 10 programmers includes a former member of the Israeli Parliament.

    “Eventually our main goal is to get the tech companies to differentiate between freedom of speech and posts that their only goal is to harm Israel and to interfere with the relationship between Israel and Palestine to make the war even worse,” Inbar Bezek, the former Knesset member working with Iron Truth, told The Intercept in a WhatsApp message.

    “Across our products, we have policies in place to mitigate abuse, prevent harmful content and help keep users safe. We enforce them consistently and without bias,” Google spokesperson Christa Muldoon told The Intercept. “If a user or employee believes they’ve found content that violates these policies, we encourage them to report it through the dedicated online channels.” Muldoon added that Google “encourages employees to use their time and skills to volunteer for causes they care about.” In interviews with The Intercept, Kaganovitch emphasized that he works on Iron Truth only in his free time, and said the project is entirely distinct from his day job at Google.

    Meta spokesperson Ryan Daniels pushed back on the notion that Iron Truth was able to get content taken down outside the platform’s official processes, but declined to comment on Iron Truth’s underlying claim of a back channel to company employees. “Multiple pieces of content this group claims to have gotten removed from Facebook and Instagram are still live and visible today because they don’t violate our policies,” Daniels told The Intercept in an emailed statement. “The idea that we remove content based on someone’s personal beliefs, religion, or ethnicity is simply inaccurate.” Daniels added, “We receive feedback about potentially violating content from a variety of people, including employees, and we encourage anyone who sees this type of content to report it so we can investigate and take action according to our policies,” noting that Meta employees have access to internal content reporting tools, but that this system can only be used to remove posts that violate the company’s public Community Standards.

    Neither TikTok nor LinkedIn responded to questions about Iron Truth. X could not be reached for comment.

    A Palestinian woman cries in the garden of Al-Ahli Arab Hospital after it was hit in Gaza City, Gaza, on Oct. 18, 2023.
    Photo by Mustafa Hassona/Anadolu via Getty Images

    “Keep Bombing!”

    Though confusion and recrimination are natural byproducts of any armed conflict, Iron Truth has routinely used the fog of war as evidence of anti-Israeli disinformation.

    At the start of the project in the week after Hamas’s attack, for example, Iron Truth volunteers were encouraged to find and report posts expressing skepticism about claims of the mass decapitation of babies in an Israeli kibbutz. They quickly surfaced posts casting doubt on reports of “40 beheaded babies” during the Hamas attack, tagging them “fake news” and “disinformation” and sending them to platforms for removal. Among a list of LinkedIn content that Iron Truth told its Telegram followers it had passed along to the company was a post demanding evidence for the beheaded baby claim, categorized by the project as “Terror/Fake.”

    But the skepticism they were attacking proved warranted. While many of Hamas’s atrocities against Israelis on October 7 are indisputable, the Israeli government itself ultimately said it couldn’t verify the horrific claim about beheaded babies. Similarly, Iron Truth’s early efforts to take down “disinformation” about Israel bombing hospitals now contrast with weeks of well-documented airstrikes against multiple hospitals and the deaths of hundreds of doctors from Israeli bombs.

    On October 16, Iron Truth shared a list of Facebook and Instagram posts it claimed responsibility for removing, writing on Telegram, “Significant things reported today and deleted. Good job! Keep bombing! 💪”

    While most of the links no longer work, several are still active. One is a video of grievously wounded Palestinians in a hospital, including young children, with a caption accusing Israel of crimes against humanity. Another is a video from Mohamed El-Attar, a Canadian social media personality who posts under the name “That Muslim Guy.” In the post, shared the day after the Hamas attack, El-Attar argued the October 7 assault was not an act of terror, but of armed resistance to Israeli occupation. While this statement is no doubt inflammatory to many, particularly in Israel, Meta is supposed to allow for this sort of discussion, according to internal policy guidance previously reported by The Intercept. The internal language, which detailed the company’s Dangerous Individuals and Organizations policy, lists this sentence among examples of permitted speech: “The IRA were pushed towards violence by the brutal practices of the British government in Ireland.”

    While it’s possible for Meta posts to be deleted by moderators and later reinstated, Daniels, the spokesperson, disputed Iron Truth’s claim, saying links from the list that remain active had never been taken down in the first place. Daniels added that other links on the list had indeed been removed because they violated Meta policy but declined to comment on specific posts.

    Under their own rules, the major social platforms aren’t supposed to remove content simply because it is controversial. While content moderation trigger-happiness around mere mentions of designated terror organizations has led to undue censorship of Palestinian and other Middle Eastern users, Big Tech policies on misinformation are, on paper, much more conservative. Facebook, Instagram, TikTok, and YouTube, for example, only prohibit misinformation when it might cause physical harm, like snake oil cures for Covid-19, or posts meant to interfere with civic functions such as elections. None of the platforms targeted by Iron Truth prohibit merely “inflammatory” speech; indeed, such a policy would likely be the end of social media as we know it.

    Still, content moderation rules are known to be vaguely conceived and erratically enforced. Meta, for instance, says it categorically prohibits violent incitement, and touts various machine learning-based technologies to detect and remove such speech. Last month, however, The Intercept reported that the company had approved Facebook ads calling for the assassination of a prominent Palestinian rights advocate, along with explicit calls for the murder of civilians in Gaza. On Instagram, users leaving comments with Palestinian flag emojis have seen those comments inexplicably vanish. 7amleh, a Palestinian digital rights organization that formally partners with Meta on speech issues, has documented over 800 reports of undue social media censorship since the war’s start, according to its public database.

    Disinformation in the Eye of the Beholder

    “It’s really hard to identify disinformation,” Kaganovitch acknowledged in an interview, conceding that what’s considered a conspiracy today might be corroborated tomorrow, and pointing to a recent Haaretz report that an Israel Defense Forces helicopter may have inadvertently killed Israelis on October 7 in the course of firing at Hamas.

    Throughout October, Iron Truth provided a list of suggested keywords for volunteers in the project’s Telegram group to use when searching for content to report to the bot. Some of these terms, like “Kill Jewish” and “Kill Israelis,” pertained to content flagrantly against the rules of major social media platforms, which uniformly ban explicit violent incitement. Others reflected stances that might understandably offend Israeli social media users still reeling from the Hamas attack, like “Nazi flag israel.”

    But many other suggestions included terms commonly found in news coverage or general discussion of the war, particularly in reference to Israel’s brutal bombardment of Gaza. Some of those phrases — including “Israel bomb hospital”; “Israel bomb churches”; “Israel bomb humanitarian”; and “Israel committing genocide” — were suggested as disinformation keywords as the Israeli military was being credibly accused of doing those very things. While some allegations against both Hamas and the IDF were and continue to be bitterly disputed — notably who bombed the Al-Ahli Arab Hospital on October 17 — Iron Truth routinely treated contested claims as “fake news,” siding against the sort of analysis or discussion often necessary to reach the truth.


    Even the words “Israel lied” were suggested to Iron Truth volunteers on the grounds that they could be used in “false posts.” On October 16, two days after an Israeli airstrike killed 70 Palestinians evacuating from northern Gaza, one Telegram group member shared a TikTok containing imagery of one of the bombed convoys. “This post must be taken down, he is a really annoying liar and the amount of exposure he has is crazy,” the member added. A minute later, the Iron Truth administrator account encouraged this member to report the post to the Iron Truth bot.

    Although The Intercept is unable to see which links have been submitted to the bot, Telegram transcripts show the group’s administrator frequently encouraged users to flag posts accusing Israel of genocide or other war crimes. When a chat member shared a link to an Instagram post arguing “It has BEEN a genocide since the Nakba in 1948 when Palestinians were forcibly removed from their land by Israel with Britain’s support and it has continued for the past 75 years with US tax payer dollars,” the group administrator encouraged them to report the post to the bot three minutes later. Links to similar allegations of Israeli war crimes from figures such as popular Twitch streamer Hasan Piker; Colombian President Gustavo Petro; psychologist Gabor Maté; and a variety of obscure, ordinary social media users have received the same treatment.

    Iron Truth has acknowledged its alleged back channel has limits: “It’s not immediate unfortunately, things go through a chain of people on the way,” Kaganovitch explained to one Telegram group member who complained a post they’d reported was still online. “There are companies that implement faster and there are companies that work more slowly. There is internal pressure from the Israelis in the big companies to speed up the reports and removal of the content. We are in constant contact with them 24/7.”

    Since the war began, social media users in Gaza and beyond have complained that content has been censored without any clear violation of a given company’s policies, a well-documented phenomenon long before the current conflict. But Brooking, of the Atlantic Council, cautioned that it can be difficult to determine the process that led to the removal of a given social media post. “There are almost certainly people from tech companies who are receptive to and will work with a civil society organization like this,” he said. “But there’s a considerable gulf between claiming those tech company contacts and having a major influence on tech company decision making.”

    Iron Truth has found targets outside social media too. On November 27, one volunteer shared a link to NoThanks, an Android app that helps users boycott companies related to Israel. The Iron Truth administrator account quickly noted that the complaint had been forwarded to Google. Days later, Google pulled NoThanks from its app store, though it was later reinstated.

    The group has also gone after efforts to fundraise for Gaza. “These cuties are raising money,” said one volunteer, sharing a link to the Instagram account of Medical Aid for Palestinians. Again, the Iron Truth admin quickly followed up, saying the post had been “transferred” accordingly.

    But Kaganovitch says his thinking around the topic of Israeli genocide has shifted. “I changed my thoughts a bit during the war,” he explained. Though he doesn’t agree that Israel is committing a genocide in Gaza, where the death toll has exceeded 20,000, according to the Gaza Health Ministry, he understands how others might. “The genocide, I stopped reporting it in about the third week [of the war].”

    Several weeks after its launch, Iron Truth shared an infographic in its Telegram channel asking its followers not to pass along posts that were simply anti-Zionist. But OCT7, an Israeli group that “monitors the social web in real-time … and guides digital warriors,” lists Iron Truth as one of its partner organizations, alongside the Israeli Ministry for Diaspora Affairs, and cites “anti-Zionist bias” as part of the “challenge” it’s “battling against.”

    Despite Iron Truth’s occasional attempts to rein in its volunteers and focus them on finding posts that might actually violate platform rules, getting everyone on board has proven difficult. Chat transcripts show many Iron Truth volunteers conflating Palestinian advocacy with material support for Hamas or characterizing news coverage as “misinformation” or “disinformation,” perennially vague terms whose meaning is further diluted in times of war and crisis.

    “By the way, it would not be bad to go through the profiles of [United Nations] employees, the majority are local there and they are all supporters of terrorists,” recommended one follower in October. “Friends, report a profile of someone who is raising funds for Gaza!” said another Telegram group member, linking to the Instagram account of a New York-based beauty influencer. “Report this profile, it’s someone I met on a trip and it turns out she’s completely pro-Palestinian!” the same user added later that day. Social media accounts of Palestinian journalist Yara Eid; Palestinian photojournalist Motaz Azaiza; and many others involved in Palestinian human rights advocacy were similarly flagged by Iron Truth volunteers for allegedly spreading “false information.”

    Iron Truth has at times struggled with its own followers. When one proposed reporting a link about Israeli airstrikes at the Rafah border crossing between Gaza and Egypt, the administrator account pointed out that the IDF had indeed conducted the attacks, urging the group: “Let’s focus on disinformation, we are not fighting media organizations.” On another occasion, the administrator discouraged a user from reporting a page belonging to a news organization: “What’s the problem with that?” the administrator asked, noting that the outlet was “not pro-Israel, but is there fake news?”

    But Iron Truth’s standards often seem muddled or contradictory. When one volunteer suggested going after B’Tselem, an Israeli human rights organization that advocates against the country’s military occupation and broader repression of Palestinians, the administrator account replied: “With all due respect, B’Tselem does publish pro-Palestinian content and this was also reported to us and passed on to the appropriate person. But B’Tselem is not Hamas bots or terrorist supporters, we have tens of thousands of posts to deal with.”

    Israeli flags fly in front of the Knesset, the unicameral parliament of the state of Israel, on Sept. 11, 2022, in Jerusalem.
    Photo: Christophe Gateau/AP

    Friends in High Places

    Though Iron Truth is largely a byproduct of Israel’s thriving tech economy — the country is home to many regional offices of American tech giants — it also claims support from the Israeli government.

    The group’s founder says that Iron Truth leadership have met with Haim Wismonsky, director of the controversial Cyber Unit of the Israeli State Attorney’s Office. While the Cyber Unit purports to combat terrorism and miscellaneous cybercrime, critics say it’s used to censor unwanted criticism and Palestinian perspectives, relaying thousands upon thousands of content takedown demands. American Big Tech has proven largely willing to play ball with these demands: A 2018 report from the Israeli Ministry of Justice claimed a 90 percent compliance rate across social media platforms.

    Following an in-person presentation to the Cyber Unit, Iron Truth’s organizers have remained in contact, and sometimes forward the office links they need help removing, Kaganovitch said. “We showed them the presentation, they asked us also to monitor Reddit and Discord, but Reddit is not really popular here in Israel, so we focus on the big platforms right now.”

    Wismonsky did not respond to a request for comment.

    Kaganovitch noted that Bezek, the former Knesset member, “helps us with diplomatic and government relationships.” In an interview, Bezek confirmed her role and corroborated the group’s claims, saying that while Iron Truth had contacts with “many other employees” at social media firms, she is not involved in that aspect of the group’s work, adding, “I took on myself to be more like the legislation and legal connection.”

    “What we’re doing on a daily basis is that we have a few groups of people who have social media profiles in different medias — LinkedIn, X, Meta, etc. — and if one of us is finding content that is antisemitic or content that is hate claims against Israel or against Jews, we are informing the other people in the group, and few people at the same time are reporting to the tech companies,” Bezek explained.

    Bezek’s governmental outreach has so far included organizing meetings with Israel’s Ministry of Foreign Affairs and “European ambassadors in Israel.” Bezek declined to name the Israeli politicians or European diplomatic personnel involved because their communications are ongoing. These meetings have included allegations of foreign, state-sponsored “antisemitic campaigns and anti-Israeli campaigns,” which Bezek says Iron Truth is collecting evidence about in the hope of pressuring the United Nations to act.

    Iron Truth has also collaborated with Digital Dome, a similar volunteer effort spearheaded by the Israeli anti-disinformation organization FakeReporter, which helps coordinate the mass reporting of unwanted social media content. Israeli American investment fund J-Ventures, which has reportedly worked directly with the IDF to advance Israeli military interests, has promoted both Iron Truth and Digital Dome.

    FakeReporter did not respond to a request for comment.

    While most counter-misinformation efforts betray some geopolitical loyalty, Iron Truth is openly nationalistic. An October 28 write-up in the popular Israeli news website Ynet — “Want to Help With Public Diplomacy? This is How You Start” — cited the Telegram bot as an example of how ordinary Israelis could help their country, noting: “In the absence of a functioning Information Ministry, Israeli men and women hope to be able to influence even a little bit the sounding board on the net.” A mention in the Israeli financial news website BizPortal described Iron Truth as fighting “false and inciting content against Israel.”

    Iron Truth is “a powerful reminder that it’s still people who run these companies at the end of the day,” said Brooking. “I think it’s natural to try to create these coordinated reporting groups when you feel that your country is at war in or in danger, and it’s natural to use every tool at your disposal, including the language of disinformation or fact checking, to try to remove as much content as possible if you think it’s harmful to you or people you love.”

    The real risk, Brooking said, lies not in the back channel, but in the extent to which companies that control the speech of billions around the world are receptive to insiders arbitrarily policing expression. “If it’s elevating content for review that gets around trust and safety teams, standing policy, policy [into] which these companies put a lot of work,” he said, “then that’s a problem.”

    The post Israeli Group Claims It’s Using Big Tech Back Channels to Censor “Inflammatory” Wartime Content appeared first on The Intercept.


  • A forthcoming drone made by Autel, a Chinese electronics manufacturer and drone-maker, is being marketed using images of the unmanned aerial vehicle carrying a payload of what appears to be explosive shells. The images were discovered just two months after the company condemned the military use of its flying robots.

    Two separate online retail preorder listings for the $52,000 Autel Titan drone, with a cargo capacity of 22 pounds and an hour of flight time, advertised a surprising feature: the ability to carry (and presumably fire) weapons.

    In response to concerns from China-hawk lawmakers in the U.S. over Autel’s alleged connections to the Chinese government and its “potentially supporting Russia’s ongoing invasion of Ukraine,” according to a congressional inquiry into the firm, Autel issued a public statement disowning battlefield use of its drones: “Autel Robotics strongly opposes the use of drone products for military purposes or any other activities that infringe upon human rights.” A month later, it issued a second, similarly worded denial: “Autel Robotics is solely dedicated to the development and production of civilian drones. Our products are explicitly designed for civilian use and are not intended for military purposes.”

    It was surprising, then, when Spanish engineer and drone enthusiast Konrad Iturbe discovered a listing for the Titan drone armed to the teeth on OBDPRICE.com, an authorized reseller of Autel products. While most of the product images are anodyne promotional photos showing the drone from various angles, including carrying a generic cargo container, three show a very different payload: what appears to be a cluster of four explosive shells tucked underneath, a configuration similar to those seen in bomb-dropping drones deployed in Ukraine and elsewhere. Samuel Bendett, an analyst with the Center for Naval Analyses, told The Intercept that the shells resembled mortar rounds. Arms analyst Patrick Senft said the ordnance shown might actually be toy replicas, as they “don’t resemble any munitions I’ve seen deployed by UAV.”

    Contacted by email, an OBDPRICE representative who identified themselves only as “Alex” told The Intercept: “The drone products we sell cannot be used for military purposes.” When asked why the site was then depicting the drone product in question carrying camouflage-patterned explosive shells, they wrote: “You may have misunderstood, those are some lighting devices that help our users illuminate themselves at night.” The site has not responded to further queries, but shortly after being contacted by The Intercept, the mortar-carrying images were deleted.

    A “heavy lift” drone made by Autel, a Chinese electronics manufacturer, listed for resale on eBay on Jan. 5, 2023; Autel’s renderings show the drone carrying a payload of camouflage-clad explosives.
    Screenshot: The Intercept

    Iturbe also identified a separate listing from an Autel storefront on eBay using the very same three images of an armed Titan drone. When asked about the images and whether the drone is compatible with other weapons systems, the account replied via eBay message: “The bombs shown in the listing for this drone is just for display. Pls note that Titan comes with a standard load of 4 kilograms and a maximum load up to 10 kilograms.”

    The images bear a striking resemblance to ordnance-carrying drones that have been widely used during the Russian invasion of Ukraine, where their low cost and sophisticated cameras make them ideal for both reconnaissance and improvised bombing runs. Autel’s drones in particular have proven popular on both sides of the conflict: A March 2023 New York Times report found that “nearly 70 Chinese exporters sold 26 distinct brands of Chinese drones to Russia since the invasion. The second-largest brand sold was Autel, a Chinese drone maker with subsidiaries in the United States, Germany and Italy.” A December 2022 report from the Washington Post, meanwhile, cited Autel’s EVO II model drone as particularly popular among volunteer efforts to source drones for the Ukrainian war effort.

    Last summer, researchers who’ve closely followed the use of drones in the Russia–Ukraine war documented an effort by two Russian nationals, self-chronicled via Telegram, to obtain Chinese drones for the country’s ongoing war in Ukraine. Their visit to Shenzhen resulted in a meeting at an Autel facility and the procurement, the individuals claimed, of military-purpose drones. 

    Autel’s New York-based American subsidiary did not respond to a request for comment.

    The post Drones From Company That “Strongly Opposes” Military Use Marketed With Bombs Attached appeared first on The Intercept.

  • In a letter sent Thursday to Meta chief executive Mark Zuckerberg, Sen. Elizabeth Warren, D-Mass., calls on the Facebook and Instagram owner to disclose unreleased details about wartime content moderation practices that have “exacerbated violence and failed to combat hate speech,” citing recent reporting by The Intercept.

    “Amidst the horrific Hamas terrorist attacks in Israel, a humanitarian catastrophe including the deaths of thousands of civilians in Gaza, and the killing of dozens of journalists, it is more important than ever that social media platforms do not censor truthful and legitimate content, particularly as people around the world turn to online communities to share and find information about developments in the region,” the letter reads, according to a copy shared with The Intercept.

    Since Hamas’s October 7 attack, social media users around the world have reported the inexplicable disappearance of posts, comments, hashtags, and entire accounts — even though they did not seem to violate any rules. Uneven enforcement of rules generally, and Palestinian censorship specifically, have proven perennial problems for Meta, which owns Facebook and Instagram, and the company has routinely blamed erratic rule enforcement on human error and technical glitches, while vowing to improve.

    Following a string of 2021 Israeli raids at the Al-Aqsa Mosque in occupied East Jerusalem, Instagram temporarily censored posts about the holy site on the grounds that it was associated with terrorism. A third-party audit of the company’s speech policies in Israel and Palestine conducted last year found that “Meta’s actions in May 2021 appear to have had an adverse human rights impact … on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred.”

    Users affected by these moderation decisions, meanwhile, are left with little to no recourse, and often have no idea why their posts were censored in the first place. Meta’s increased reliance on opaque, automated content moderation algorithms has only exacerbated the company’s lack of transparency around speech policy, and has done little to allay allegations that the company’s systems are structurally biased against certain groups.

    The letter references recent reporting by The Intercept, the Wall Street Journal, and other outlets on the widespread, unexplained censorship of Palestinians and the broader discussion of Israel’s ongoing bombardment of Gaza. Last month, for instance, The Intercept reported that Instagram users leaving Palestinian flag emojis in post comments had seen those comments quickly hidden; Facebook later told The Intercept it was hiding these emojis in contexts it deemed “potentially offensive.”

    “Social media users deserve to know when and why their accounts and posts are restricted, particularly on the largest platforms where vital information-sharing occurs.”

    These “reports of Meta’s suppression of Palestinian voices raise serious questions about Meta’s content moderation practices and anti-discrimination protections,” Warren writes. “Social media users deserve to know when and why their accounts and posts are restricted, particularly on the largest platforms where vital information-sharing occurs. Users also deserve protection against discrimination based on their national origin, religion, and other protected characteristics.” Outside of its generalized annual reports, Meta typically shares precious little about how it enforces its rules in specific instances, or how its policies are determined behind closed doors. This general secrecy around the company’s speech rules means that users are often in the dark about whether a given post will be allowed — especially if it even mentions a U.S.-designated terror organization like Hamas — until it’s too late.

    To resolve this, and “[i]n order to further understand what legislative action might be necessary to address these issues,” Warren’s letter includes a litany of specific questions about how Meta treats content pertaining to the war, and to what extent it has enforced its speech rules depending on who’s speaking. “How many Arabic language posts originating from Palestine have been removed [since October 7]?” the letter asks. “What percentage of total Arabic language posts originating from Palestine does the above number represent?” The letter further asks Meta to divulge removal statistics since the war began (“How often did Meta limit the reachability of posts globally while notifying the user?”) and granular details of its enforcement system (“What was the average response time for a user appeal of a content moderation decision for Arabic language posts originating from Palestine?”).

    The letter asks Meta to respond to Warren’s dozens of questions by January 5, 2024.

    The post Sen. Elizabeth Warren Questions Meta Over Palestinian Censorship appeared first on The Intercept.

  • A series of advertisements dehumanizing and calling for violence against Palestinians, intended to test Facebook’s content moderation standards, were all approved by the social network, according to materials shared with The Intercept.

    The submitted ads, in both Hebrew and Arabic, included flagrant violations of policies for Facebook and its parent company Meta. Some contained violent content directly calling for the murder of Palestinian civilians, like ads demanding a “holocaust for the Palestinians” and to wipe out “Gazan women and children and the elderly.” Other posts, like those describing kids from Gaza as “future terrorists” and a reference to “Arab pigs,” contained dehumanizing language.

    “The approval of these ads is just the latest in a series of Meta’s failures towards the Palestinian people.”

    “The approval of these ads is just the latest in a series of Meta’s failures towards the Palestinian people,” Nadim Nashif, founder of the Palestinian social media research and advocacy group 7amleh, which submitted the test ads, told The Intercept. “Throughout this crisis, we have seen a continued pattern of Meta’s clear bias and discrimination against Palestinians.”

    7amleh’s idea to test Facebook’s machine-learning censorship apparatus arose last month, when Nashif discovered an ad on his Facebook feed explicitly calling for the assassination of American activist Paul Larudee, a co-founder of the Free Gaza Movement. Facebook’s automatic translation of the text ad read: “It’s time to assassinate Paul Larudi [sic], the anti-Semitic and ‘human rights’ terrorist from the United States.” Nashif reported the ad to Facebook, and it was taken down.

    The ad had been placed by Ad Kan, a right-wing Israeli group founded by former Israel Defense Forces and intelligence officers to combat “anti-Israeli organizations” whose funding comes from purportedly antisemitic sources, according to its website. (Neither Larudee nor Ad Kan immediately responded to requests for comment.)

    Calling for the assassination of a political activist is a violation of Facebook’s advertising rules. That the post sponsored by Ad Kan appeared on the platform indicates Facebook approved it despite those rules. The ad likely passed through Facebook’s automated, machine learning-based filtering process, which allows its global advertising business to operate at a rapid clip.

    “Our ad review system is designed to review all ads before they go live,” according to a Facebook ad policy overview. As Meta’s human-based moderation, which historically relied almost entirely on outsourced contractor labor, has drawn greater scrutiny and criticism, the company has come to lean more heavily on automated text-scanning software to enforce its speech rules and censorship policies.

    While these technologies allow the company to skirt the labor issues associated with human moderators, they also obscure how moderation decisions are made behind secret algorithms.

    Last year, an external audit commissioned by Meta found that while the company was routinely using algorithmic censorship to delete Arabic posts, the company had no equivalent algorithm in place to detect “Hebrew hostile speech” like racist rhetoric and violent incitement. Following the audit, Meta claimed it had “launched a Hebrew ‘hostile speech’ classifier to help us proactively detect more violating Hebrew content.” Content, that is, like an ad espousing murder.

    Incitement to Violence on Facebook

    Amid the Israeli war on Palestinians in Gaza, Nashif was troubled enough by the explicit call in the ad to murder Larudee that he worried similar paid posts might contribute to violence against Palestinians.

    Large-scale incitement to violence jumping from social media into the real world is not a mere hypothetical: In 2018, United Nations investigators found violently inflammatory Facebook posts played a “determining role” in Myanmar’s Rohingya genocide. (Last year, another group ran test ads inciting against Rohingya, a project along the same lines as 7amleh’s experiment; in that case, all the ads were also approved.)

    The quick removal of the Larudee post didn’t explain how the ad was approved in the first place. In light of assurances from Facebook that safeguards were in place, Nashif and 7amleh, which formally partners with Meta on censorship and free expression issues, were puzzled.

    “Meta has a track record of not doing enough to protect marginalized communities.”

    Curious whether the approval was a fluke, 7amleh created and submitted 19 ads, in both Hebrew and Arabic, with text that deliberately and flagrantly violated company rules. The ads were designed to test the approval process and see whether Meta’s ability to automatically screen violent and racist incitement had improved, even when confronted with unambiguous examples of incitement.

    “We knew from the example of what happened to the Rohingya in Myanmar that Meta has a track record of not doing enough to protect marginalized communities,” Nashif said, “and that their ads manager system was particularly vulnerable.”

    Meta appears to have failed 7amleh’s test.

    The company’s Community Standards rulebook — which ads are supposed to comply with to be approved — prohibits not just text advocating for violence, but also any dehumanizing statements against people based on their race, ethnicity, religion, or nationality. Despite this, confirmation emails shared with The Intercept show Facebook approved every single ad.

    Though 7amleh told The Intercept the organization had no intention to actually run these ads and was going to pull them before they were scheduled to appear, it believes their approval demonstrates the social platform remains fundamentally myopic around non-English speech — languages used by a great majority of its over 4 billion users. (Meta retroactively rejected 7amleh’s Hebrew ads after The Intercept brought them to the company’s attention, but the Arabic versions remain approved within Facebook’s ad system.)

    Facebook spokesperson Erin McPike confirmed the ads had been approved accidentally. “Despite our ongoing investments, we know that there will be examples of things we miss or we take down in error, as both machines and people make mistakes,” she said. “That’s why ads can be reviewed multiple times, including once they go live.”

    Just days after its own experimental ads were approved, 7amleh discovered an Arabic ad run by a group calling itself “Migrate Now” that urged “Arabs in Judea and Sumaria” — the name Israelis, particularly settlers, use to refer to the occupied Palestinian West Bank — to relocate to Jordan.

    According to Facebook documentation, automated, software-based screening is the “primary method” used to approve or deny ads. But it’s unclear if the “hostile speech” algorithms used to detect violent or racist posts are also used in the ad approval process. In its official response to last year’s audit, Facebook said its new Hebrew-language classifier would “significantly improve” its ability to handle “major spikes in violating content,” such as around flare-ups of conflict between Israel and Palestine. Based on 7amleh’s experiment, however, this classifier either doesn’t work very well or is for some reason not being used to screen advertisements. (McPike did not answer when asked if the approval of 7amleh’s ads reflected an underlying issue with the hostile speech classifier.)

    Either way, according to Nashif, the fact that these ads were approved points to an overall problem: Meta claims it can effectively use machine learning to deter explicit incitement to violence, while it clearly cannot.

    “We know that Meta’s Hebrew classifiers are not operating effectively, and we have not seen the company respond to almost any of our concerns,” Nashif said in his statement. “Due to this lack of action, we feel that Meta may hold at least partial responsibility for some of the harm and violence Palestinians are suffering on the ground.”

    The approval of the Arabic versions of the ads comes as a particular surprise following a recent report by the Wall Street Journal that Meta had lowered the level of certainty its algorithmic censorship system needed to remove Arabic posts — from 80 percent confidence that the post broke the rules, to just 25 percent. In other words, Meta was less sure that the Arabic posts it was suppressing or deleting actually contained policy violations.
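
    To illustrate the arithmetic of such a change — a minimal, hypothetical sketch, not Meta’s actual system or code — imagine a classifier that assigns each post a probability of violating policy and suppresses anything at or above a cutoff. Dropping the cutoff from 80 percent to 25 percent sweeps in far more posts that the model is unsure about:

    ```python
    # Hypothetical illustration of a confidence-threshold takedown rule.
    # The scores, names, and thresholds here are invented for this sketch;
    # they do not represent Meta's actual systems.

    def should_suppress(confidence: float, threshold: float) -> bool:
        """Suppress a post when the classifier's estimated probability of a
        policy violation meets or exceeds the threshold."""
        return confidence >= threshold

    # Example classifier scores for five posts (probability each violates policy).
    scores = [0.30, 0.45, 0.60, 0.78, 0.90]

    old_threshold = 0.80  # 80 percent confidence required before suppression
    new_threshold = 0.25  # 25 percent confidence required before suppression

    suppressed_old = [s for s in scores if should_suppress(s, old_threshold)]
    suppressed_new = [s for s in scores if should_suppress(s, new_threshold)]

    print(len(suppressed_old), "of", len(scores), "suppressed at 80 percent")  # 1 of 5
    print(len(suppressed_new), "of", len(scores), "suppressed at 25 percent")  # 5 of 5
    ```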

    Nashif said, “There have been sustained actions resulting in the silencing of Palestinian voices.”

    The post Facebook Approved an Israeli Ad Calling for Assassination of Pro-Palestine Activist appeared first on The Intercept.

  • The popular data broker LexisNexis began selling face recognition services and personal location data to U.S. Customs and Border Protection late last year, according to contract documents obtained through a Freedom of Information Act request.

    According to the documents, obtained by the advocacy group Just Futures Law and shared with The Intercept, LexisNexis Risk Solutions began selling surveillance tools to the border enforcement agency in December 2022. The $15.9 million contract includes a broad menu of powerful tools for locating individuals throughout the United States using a vast array of personal data, much of it obtained and used without judicial oversight.

    “This contract is mass surveillance in hyperdrive.”

    Through LexisNexis, CBP investigators gained a convenient place to centralize, analyze, and search various databases containing enormous volumes of intimate personal information, both public and proprietary.

    “This contract is mass surveillance in hyperdrive,” Julie Mao, an attorney and co-founder of Just Futures Law, told The Intercept. “It’s frightening that a rogue agency such as CBP has access to so many powerful technologies at the click of the button. Unfortunately, this is what LexisNexis appears now to be selling to thousands of police forces across the country. It’s now become a one-stop shop for accessing a range of invasive surveillance tools.”

    A variety of CBP offices would make use of the surveillance tools, according to the documents. Among them is the U.S. Border Patrol, which would use LexisNexis to “help examine individuals and entities to determine their admissibility to the U.S. and their proclivity to violate U.S. laws and regulations.”

    Among other tools, the contract shows LexisNexis is providing CBP with social media surveillance, access to jail booking data, face recognition, and “geolocation analysis & geographic mapping” of cellphones. All this data can be queried in “large volume online batching,” allowing CBP investigators to target broad groups of people and discern “connections among individuals, incidents, activities, and locations,” handily visualized through Google Maps.

    CBP declined to comment for this story, and LexisNexis did not respond to an inquiry. Despite the explicit reference to providing “LexisNexis Facial Recognition” in the contract, a fact sheet published by the company online says, “LexisNexis Risk Solutions does not provide the Department of Homeland Security” — CBP’s parent agency — “or US Immigration and Customs Enforcement with license plate images or facial recognition capabilities.”

    The contract includes a variety of means for CBP to exploit the cellphones of those it targets. Accurint, a police and counterterror surveillance tool LexisNexis acquired in 2004, allows the agency to do analysis of real-time phone call records and phone geolocation through its “TraX” software.

    While it’s unclear how exactly TraX pinpoints its targets, LexisNexis marketing materials cite “cellular providers live pings for geolocation tracking.” These materials also note that TraX incorporates both “call detail records obtained through legal process (i.e. search warrant or court order) and third-party device geolocation information.” A 2023 LexisNexis promotional brochure says, “The LexisNexis Risk Solutions Geolocation Investigative Team offers geolocation analysis and investigative case assistance to law enforcement and public safety customers.”

    Any CBP use of geolocational data is controversial, given the agency’s recent history. Prior reporting found that, rather than request phone location data through a search warrant, CBP simply purchased such data from unregulated brokers — a practice that critics say allows the government to sidestep Fourth Amendment protections against police searches.

    According to a September report by 404 Media, CBP recently told Sen. Ron Wyden, D-Ore., it “will not be utilizing Commercial Telemetry Data (CTD) after the conclusion of FY23 (September 30, 2023),” using a technical term for such commercially purchased location information.

    The agency, however, also told Wyden that it could renew its use of commercial location data if there were “a critical mission need” in the future. It’s unclear if this contract provided commercial location data to CBP, or if it was affected by the agency’s commitment to Wyden. (LexisNexis did not respond to a question about whether it provides or provided the type of phone location data that CBP had sworn off.)

    The contract also shows how LexisNexis operates as a reseller for surveillance tools created by other vendors. Its social media surveillance is “powered by” Babel X, a controversial firm that CBP and the FBI have previously used.

    According to a May 2023 report by Motherboard, Babel X allows users to input one piece of information about a surveillance target, like a Social Security number, and receive large amounts of collated information back. The returned data can include “social media posts, linked IP address, employment history, and unique advertising identifiers associated with their mobile phone. The monitoring can apply to U.S. persons, including citizens and permanent residents, as well as refugees and asylum seekers.”

    While LexisNexis is known to provide similar data services to U.S. Immigration and Customs Enforcement, another division of the Department of Homeland Security, details of its surveillance work with CBP were not previously known. Though both agencies enforce immigration law, CBP typically focuses on enforcement along the border, while ICE detains and deports migrants inland.

    In recent years, CBP has drawn harsh criticism from civil libertarians and human rights advocates for its activities both at and far from the U.S.-Mexico border. In 2020, CBP was found to have flown a Predator surveillance drone over Minneapolis protests after the murder of George Floyd; two months later, CBP agents in unmarked vehicles seized racial justice protesters off the streets of Portland, Oregon — an act the American Civil Liberties Union condemned as a “blatant demonstration of unconstitutional authoritarianism.”

    Just Futures Law is currently suing LexisNexis over claims it illegally obtains and sells personal data.

    The post LexisNexis Sold Powerful Spy Tools to U.S. Customs and Border Protection appeared first on The Intercept.

  • A Google employee protesting the tech giant’s business with the Israeli government was questioned by Google’s human resources department over allegations that he endorsed terrorism, The Intercept has learned. The employee said he was the only Muslim and Middle Easterner who circulated the letter and also the only one who was confronted by HR about it.

    The employee was objecting to Project Nimbus, Google’s controversial $1.2 billion contract with the Israeli government and its military to provide state-of-the-art cloud computing and machine learning tools.

    Since its announcement two years ago, Project Nimbus has drawn widespread criticism both inside and outside Google, spurring employee-led protests and warnings from human rights groups and surveillance experts that it could bolster state repression of Palestinians.

    Mohammad Khatami, a Google software engineer, sent an email to two internal listservs on October 18 saying Project Nimbus was implicated in human rights abuses against Palestinians — abuses that fit a 75-year pattern that had brought the conflict to the October 7 Hamas massacre of some 1,200 Israelis, mostly civilians. The letter, distributed internally by anti-Nimbus Google workers through company email lists, went on to say that Google could become “complicit in what history will remember as a genocide.”

    “Strangely enough, I was the only one of us who was sent to HR over people saying I was supporting terrorism or justifying terrorism.”

    Twelve days later, Google HR told Khatami they were scheduling a meeting with him, during which he says he was questioned about whether the letter was “justifying the terrorism on October 7th.”

    In an interview, Khatami told The Intercept he was not only disturbed by what he considers an attempt by Google to stifle dissent on Nimbus, but also felt singled out because of his religion and ethnicity. The letter was drafted and internally circulated by a group of anti-Nimbus Google employees, but none of them other than Khatami was called by HR, according to Khatami and Josh Marxen, another anti-Nimbus organizer at Google who helped spread the letter. Though he declined to comment on the outcome of the HR meeting, Khatami said it left him shaken.

    “It was very emotionally taxing,” Khatami said. “I was crying by the end of it.”

    “I’m the only Muslim or Middle Eastern organizer who sent out that email,” he told The Intercept. “Strangely enough, I was the only one of us who was sent to HR over people saying I was supporting terrorism or justifying terrorism.”

    Marxen shared a screenshot of the virtually identical email he sent, also on October 18, with The Intercept. Though there are a few small changes — Marxen’s email refers to “a seige [sic] upon all of Gaza” whereas Khatami’s cites “the complete destitution of Gaza” — both contain verbatim language connecting the October 7 attack to Israel’s past treatment of Palestinians.

    Google spokesperson Courtenay Mencini told The Intercept, “We follow up on every concern raised, and in this case, dozens of employees reported this individual’s email – not the sharing of the petition itself – for including language that did not follow our workplace policies.” Mencini declined to say which workplace policies Khatami’s email allegedly violated, whether other organizers had gotten HR calls, or if any other company personnel had been approached by Employee Relations for comments made about the war.

    The incident comes just one year after former Google employee Ariel Koren said the company attempted to force her to relocate to Brazil in retaliation for her early anti-Nimbus organizing. Koren later quit in protest and remains active in advocating against the contract. Despite the dissent, Project Nimbus remains in effect, in part because of contractual terms set by Israel forbidding Google from cutting off service in response to political pressure or boycott campaigns.

    Dark Clouds Over Nimbus

    Dissent at Google is neither rare nor ineffective. Employee opposition to controversial military contracts has previously pushed the company to drop plans to help with the Pentagon’s drone warfare program and a planned Chinese version of Google Search that would filter out results unwanted by the Chinese government. Nimbus, however, has managed to survive.

    In the wake of the October 7 Hamas attacks against Israel and resulting Israeli counteroffensive, now in its second month of airstrikes and a more recent ground invasion, Project Nimbus is again a flashpoint within the company.

    With the rank and file disturbed by the company’s role as a defense contractor, Google has attempted to downplay the military nature of the contract.

    Mencini, the Google spokesperson, said that anti-Nimbus organizers were “misrepresenting” the contract’s military role.

    “This is part of a longstanding campaign by a group of organizations and people who largely don’t work at Google,” Mencini said. “We have been very clear that the Nimbus contract is for workloads running on our commercial platform by Israeli government ministries such as finance, healthcare, transportation, and education. Our work is not directed at highly sensitive or classified military workloads relevant to weapons or intelligence services.”

    Nimbus training documents published by The Intercept last year, however, show the company was pitching its use for the Ministry of Defense. Moreover, the Israeli government itself is open about the military applications of Project Nimbus: A 2023 press release by the Israeli Ministry of Finance specifically names the Israel Defense Forces as a beneficiary, while an overview written by the country’s National Digital Agency describes the contract as “a comprehensive and in-depth solution to the provision of public cloud services to the Government, the defense establishment and other public organizations.”

    “If we do not speak out now, we are complicit in what history will remember as a genocide.”

    Against this backdrop, Khatami, in coordination with others in the worker-led anti-Nimbus campaign, sent his October 18 note to internal Arab and Middle Eastern affinity groups laying out their argument against the project and asking like-minded colleagues to sign an employee petition.

    “Through Project Nimbus, Google is complicit in the mass surveillance and other human rights abuses which Palestinians have been subject to daily for the past 75 years, and which is the root cause of the violence initiated on October 7th,” the letter said. “If we do not speak out now, we are complicit in what history will remember as a genocide.”

    On October 30, Khatami received an email from Google’s Employee Relations division informing him that he would soon be questioned by company representatives regarding “a concern about your conduct that has been brought to our attention.”

    According to Khatami, in the ensuing phone call, Google HR pressed him about the portion of his email that made a historical connection between the October 7 Hamas attack and the 75 years of Israeli rights abuses that preceded it, claiming some of his co-workers believed he was endorsing violence. Khatami recalled being asked, “Can you see how people are thinking you’re justifying the terrorism on October 7th?”

    Khatami said he and his fellow anti-Nimbus organizers were in no way endorsing the violence against Israeli civilians — just as they now oppose the deaths of more than 10,000 Palestinians, according to the latest figures from Gaza’s Ministry of Health. Rather, the Google employees wanted to provide sociopolitical context for Project Nimbus, part of a broader employee-led effort of “demilitarizing our company that was never meant to be militarized.” To point out the relevant background leading to the October 7 attack, he said, is not to approve it.

    “We wrote that the root cause of the violence is the occupation,” Khatami explained. “Analysis is not justification.”

    Double Standard

    Khatami also objects to what he said is a double standard within Google about what speech about the war is tolerated, a source of ongoing turmoil at the company. The day after his original email, a Google employee responded angrily to the email chain: “Accusing Israel of genocide and Google of being complicit is a grave accusation!” This employee, who works at the company’s cloud computing division, itself at the core of Project Nimbus, continued:

    To break it down for you, project nimbus contributes to Israel’s security. Therefore, any calls to drop it are meant to weaken Israel’s security. If Israel’s security is weak, then the prospect of more terrorist attacks, like the one we saw on October 7, is high. Terrorist attacks will result in casualties that will affect YOUR Israeli colleagues and their family. Attacks will be retaliated by Israel which will result in casualties that will affect your Palestinian colleagues and their family (because they are used as shields by the terrorists)…bottom line, a secured Israel means less lives lost! Therefore if you have the good intention to preserve human lives then you MUST support project Nimbus!

    While Khatami disagrees strongly with the overall argument in the response email, he objected in particular to the co-worker’s claim that Israel is killing Palestinians “because they are used as shields by the terrorists” — a justification of violence far more explicit than the one he was accused of, he said. Khatami questioned whether widespread references to the inviolability of Israeli self-defense by Google employees have provoked treatment from HR similar to what he received after his email about Nimbus.

    Internal employee communications viewed by The Intercept show tensions within Google over the Israeli–Palestinian conflict aren’t limited to debates over Project Nimbus. One screenshot viewed by The Intercept shows an Israeli Google employee repeatedly asking Middle Eastern colleagues if they support Hamas, while another shows a Google engineer suggesting Palestinians worried about the welfare of their children should simply stop having kids. Another lamented “friends and family [who] are slaughtered by the Gaza-grown group of bloodthirsty animals.”

    According to a recent New York Times report, which found “at least one” instance of “overtly antisemitic” content posted through internal Google channels, “one worker had been fired after writing in an internal company message board that Israelis living near Gaza ‘deserved to be impacted.’”

    Another screenshot reviewed by The Intercept, taken from an email group for Israeli Google staff, shows employees discussing a post by a colleague criticizing the Israeli occupation and encouraging donations to a Gaza relief fund.

    “During this time we all need to stay strong as a nation and united,” one Google employee replied in the email group. “As if we are not going through enough suffering, we will unfortunately see many emails, comments either internally or on social media that are pro Hamas and clearly anti semitic. report immediately!” Another added: “People like that make me sick. But she is a lost cause.” A third chimed in to say they had internally reported the colleague soliciting donations. A separate post soliciting donations for the same Gaza relief fund was downvoted 139 times on an internal message board, according to another screenshot, while a post stating only “Killing civilians is indefensible” received 51 downvotes.

    While Khatami says he was unnerved and disheartened by the HR grilling, he’s still committed to organizing against Project Nimbus.

    “It definitely emotionally affected me, it definitely made me significantly more fearful of organizing in this space,” he said. “But I think knowing that people are dying right now and slaughtered in a genocide that’s aided and abetted by my company, remembering that makes the fear go away.”

    The post Google Activists Circulated Internal Petition on Israel Ties. Only the Muslim Got a Call from HR. appeared first on The Intercept.

  • In Phoenix, Austin, Houston, Dallas, Miami, and San Francisco, hundreds of so-called autonomous vehicles, or AVs, operated by General Motors’ self-driving car division, Cruise, have for years ferried passengers to their destinations on busy city roads. Cruise’s app-hailed robot rides create a detailed picture of their surroundings through a combination of sophisticated sensors, and navigate through roadways and around obstacles with machine learning software intended to detect and avoid hazards.

    AV companies hope these driverless vehicles will replace not just Uber, but also human driving as we know it. The underlying technology, however, is still half-baked and error-prone, giving rise to widespread criticisms that companies like Cruise are essentially running beta tests on public streets.

    Despite the popular skepticism, Cruise insists its robots are profoundly safer than what they’re aiming to replace: cars driven by people. In an interview last month, Cruise CEO Kyle Vogt downplayed safety concerns: “Anything that we do differently than humans is being sensationalized.”

    The concerns over Cruise cars came to a head this month. On October 17, the National Highway Traffic Safety Administration announced it was investigating Cruise’s nearly 600-vehicle fleet because of risks posed to other cars and pedestrians. A week later, in San Francisco, where driverless Cruise cars have shuttled passengers since 2021, the California Department of Motor Vehicles announced it was suspending the company’s driverless operations. The suspension followed a string of highly public malfunctions and accidents, but the immediate cause of the order, the DMV said, was that Cruise withheld footage from a recent incident in which one of its vehicles hit a pedestrian, dragging her 20 feet down the road.

    In an internal address on Slack to his employees about the suspension, Vogt stuck to his message: “Safety is at the core of everything we do here at Cruise.” Days later, the company said it would voluntarily pause fully driverless rides in Phoenix and Austin, meaning its fleet will be operating only with human supervision: a flesh-and-blood backup to the artificial intelligence.

    Even before its public relations crisis of recent weeks, though, previously unreported internal materials such as chat logs show Cruise has known about two pressing safety issues: Driverless Cruise cars struggled to detect large holes in the road and had so much trouble recognizing children in certain scenarios that they risked hitting them. Yet, until it came under fire this month, Cruise kept its fleet of driverless taxis active, maintaining its regular reassurances of superhuman safety.

    “This strikes me as deeply irresponsible at the management level to be authorizing and pursuing deployment or driverless testing, and to be publicly representing that the systems are reasonably safe,” said Bryant Walker Smith, a University of South Carolina law professor and engineer who studies automated driving.

    In a statement, a spokesperson for Cruise reiterated the company’s position that a future of autonomous cars will reduce collisions and road deaths. “Our driverless operations have always performed higher than a human benchmark, and we constantly evaluate and mitigate new risks to continuously improve,” said Erik Moser, Cruise’s director of communications. “We have the lowest risk tolerance for contact with children and treat them with the highest safety priority. No vehicle — human operated or autonomous — will have zero risk of collision.”

    “These are not self-driving cars. These are cars driven by their companies.”

    Though AV companies enjoy a reputation in Silicon Valley as bearers of a techno-optimist transit utopia — a world of intelligent cars that never drive drunk, tired, or distracted — the internal materials reviewed by The Intercept reveal an underlying tension between potentially life-and-death engineering problems and the effort to deliver the future as quickly as possible. With its parent company General Motors, which purchased Cruise in 2016 for $1.1 billion, hemorrhaging money on the venture, any setback for the company’s robo-safety regimen could threaten its business.

    Instead of seeing public accidents and internal concerns as yellow flags, Cruise sped ahead with its business plan. Before its permitting crisis in California, the company was, according to Bloomberg, exploring expansion to 11 new cities.

    “These are not self-driving cars,” said Smith. “These are cars driven by their companies.”

    Kyle Vogt — co-founder, president, chief executive officer, and chief technology officer of Cruise — holds an articulating radar as he speaks during a reveal event in San Francisco on Jan. 21, 2020.
    Photo: David Paul Morris/Bloomberg via Getty Images

    “May Not Exercise Additional Care Around Children”

    Several months ago, Vogt became choked up when talking about a 4-year-old girl who had recently been killed in San Francisco. A 71-year-old woman had taken what local residents described as a low-visibility right turn, striking a stroller and killing the child. “It barely made the news,” Vogt told the New York Times. “Sorry. I get emotional.” Vogt offered that self-driving cars would make for safer streets.

    Behind the scenes, meanwhile, Cruise was grappling with its own safety issues around hitting kids with cars. One of the problems addressed in the internal, previously unreported safety assessment materials is the failure of Cruise’s autonomous vehicles to, under certain conditions, effectively detect children. “Cruise AVs may not exercise additional care around children,” reads one internal safety assessment. The company’s robotic cars, it says, still “need the ability to distinguish children from adults so we can display additional caution around children.”

    In particular, the materials say, Cruise worried its vehicles might drive too fast at crosswalks or near a child who could move abruptly into the street. The materials also say Cruise lacks data around kid-centric scenarios, like children suddenly separating from their accompanying adult, falling down, riding bicycles, or wearing costumes.

    The materials note results from simulated tests in which a Cruise vehicle is in the vicinity of a small child. “Based on the simulation results, we can’t rule out that a fully autonomous vehicle might have struck the child,” reads one assessment. In another test drive, a Cruise vehicle successfully detected a toddler-sized dummy but still struck it with its side mirror at 28 miles per hour.

    The internal materials attribute the robot cars’ inability to reliably recognize children under certain conditions to inadequate software and testing. “We have low exposure to small VRUs” — Vulnerable Road Users, a reference to children — “so very few events to estimate risk from,” the materials say. Another section concedes Cruise vehicles’ “lack of a high-precision Small VRU classifier,” or machine learning software that would automatically detect child-shaped objects around the car and maneuver accordingly. The materials say Cruise, in an attempt to compensate for machine learning shortcomings, was relying on human workers behind the scenes to manually identify children encountered by AVs where its software couldn’t do so automatically.
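
    The arrangement the materials describe is a familiar pattern in perception systems: when the automated classifier’s confidence falls short, the detection is handed off to a human. A minimal, hypothetical sketch of that kind of fallback — with invented class names, thresholds, and functions, not Cruise’s actual software — might look like this:

    ```python
    # Hypothetical sketch of a perception pipeline with a human-review fallback.
    # Class names, thresholds, and the review function are invented for
    # illustration; this is not Cruise's software.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str         # e.g. "adult_pedestrian", "small_vru", "unknown"
        confidence: float  # model's confidence in the label, 0.0 to 1.0

    def request_human_review(det: Detection) -> str:
        """Placeholder for routing a detection to a remote human operator."""
        return "pending_human_review"

    def classify_or_escalate(det: Detection, min_confidence: float = 0.9) -> str:
        """Accept a high-confidence automated label; otherwise escalate the
        detection for manual identification."""
        if det.confidence >= min_confidence:
            return det.label
        return request_human_review(det)

    print(classify_or_escalate(Detection("small_vru", 0.95)))  # "small_vru"
    print(classify_or_escalate(Detection("small_vru", 0.40)))  # "pending_human_review"
    ```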

    In its statement, Cruise said, “It is inaccurate to say that our AVs were not detecting or exercising appropriate caution around pedestrian children” — a claim undermined by internal Cruise materials reviewed by The Intercept and the company’s statement itself. In its response to The Intercept’s request for comment, Cruise went on to concede that, this past summer during simulation testing, it discovered that its vehicles sometimes temporarily lost track of children on the side of the road. The statement said the problem was fixed and only encountered during testing, not on public streets, but Cruise did not say how long the issue lasted. Cruise did not specify what changes it had implemented to mitigate the risks.

    Despite Cruise’s claim that its cars are designed to identify children to treat them as special hazards, spokesperson Navideh Forghani said that the company’s driving software hadn’t failed to detect children but merely failed to classify them as children.

    Moser, the Cruise spokesperson, said the company’s cars treat children as a special category of pedestrians because they can behave unpredictably. “Before we deployed any driverless vehicles on the road, we conducted rigorous testing in a simulated and closed-course environment against available industry benchmarks,” he said. “These tests showed our vehicles exceed the human benchmark with regard to the critical collision avoidance scenarios involving children.”

    “Based on our latest assessment this summer,” Moser continued, “we determined from observed performance on-road, the risk of the potential collision with a child could occur once every 300 million miles at fleet driving, which we have since improved upon. There have been no on-road collisions with children.”

    Do you have a tip to share about safety issues at Cruise? The Intercept welcomes whistleblowers. Use a personal device to contact Sam Biddle on Signal at +1 (978) 261-7389, by email at sam.biddle@theintercept.com, or by SecureDrop.

    Cruise has known its cars couldn’t detect holes, including large construction pits with workers inside, for well over a year, according to the safety materials reviewed by The Intercept. Internal Cruise assessments claim this flaw constituted a major risk to the company’s operations. Cruise determined that at its current, relatively minuscule fleet size, one of its AVs would drive into an unoccupied open pit roughly once a year, and into a construction pit with people inside it about every four years. Without fixes to the problems, those rates would presumably increase as more AVs were put on the streets.

    It appears this concern wasn’t hypothetical: Video footage captured from a Cruise vehicle and reviewed by The Intercept shows one self-driving car, operating in an unnamed city, approaching a construction pit with multiple workers inside. Though the site was surrounded by orange cones, the Cruise vehicle drives directly toward it, coming to an abrupt halt. Though it can’t be discerned from the footage whether the car entered the pit or stopped at its edge, the vehicle appears to be only inches away from several workers, one of whom attempted to stop the car by waving a “SLOW” sign across its driverless windshield.

    “Enhancing our AV’s ability to detect potential hazards around construction zones has been an area of focus, and over the last several years we have conducted extensive human-supervised testing and simulations resulting in continued improvements,” Moser said. “These include enhanced cone detection, full avoidance of construction zones with digging or other complex operations, and immediate enablement of the AV’s Remote Assistance support/supervision by human observers.”

    Known Hazards

    Cruise’s undisclosed struggles with perceiving and navigating the outside world illustrate the perils of leaning heavily on machine learning to safely transport humans. “At Cruise, you can’t have a company without AI,” the company’s artificial intelligence chief told Insider in 2021. Cruise regularly touts its AI prowess in the tech media, describing it as central to preempting road hazards. “We take a machine-learning-first approach to prediction,” a Cruise engineer wrote in 2020.

    The fact that Cruise is even cataloguing and assessing its safety risks is a positive sign, said Phil Koopman, an engineering professor at Carnegie Mellon, emphasizing that the safety issues that worried Cruise internally have been known to the field of autonomous robotics for decades. Koopman, who has a long career working on AV safety, faulted the data-driven culture of machine learning that leads tech companies to contemplate hazards only after they’ve encountered them, rather than before. The fact that robots have difficulty detecting “negative obstacles” — AV jargon for a hole — is nothing new.

    “Safety is about the bad day, not the good day, and it only takes one bad day.”

    “They should have had that hazard on their hazard list from day one,” Koopman said. “If you were only training it how to handle things you’ve already seen, there’s an infinite supply of things that you won’t see until it happens to your car. And so machine learning is fundamentally poorly suited to safety for this reason.”

    The safety materials from Cruise raise an uncomfortable question for the company about whether robot cars should be on the road if it’s known they might drive into a hole or a child.

    “If you can’t see kids, it’s very hard for you to accept that not being high risk — no matter how infrequent you think it’s going to happen,” Koopman explained. “Because history shows us people almost always underestimate the risk of high severity because they’re too optimistic. Safety is about the bad day, not the good day, and it only takes one bad day.”

    Koopman said the answer rests largely on what steps, if any, Cruise has taken to mitigate that risk. According to one safety memo, Cruise began operating fewer driverless cars during daytime hours to avoid encountering children, a move it deemed effective at mitigating the overall risk without fixing the underlying technical problem. In August, Cruise announced the cuts to daytime ride operations in San Francisco but made no mention of its attempt to lower risk to local children. (“Risk mitigation measures incorporate more than AV behavior, and include operational measures like alternative routing and avoidance areas, daytime or nighttime deployment and fleet reductions among other solutions,” said Moser. “Materials viewed by The Intercept may not reflect the full scope of our evaluation and mitigation measures for a specific situation.”)

    A quick fix like shifting hours of operation presents an engineering paradox: How can the company be so sure it’s avoiding a thing it concedes it can’t always see? “You kind of can’t,” said Koopman, “and that may be a Catch-22, but they’re the ones who decided to deploy in San Francisco.”

    “The reason you remove safety drivers is for publicity and optics and investor confidence.”

    Precautions like reduced daytime operations will only lower the chance that a Cruise AV will have a dangerous encounter with a child, not eliminate that possibility. In a large American city, where it’s next to impossible to run a taxi business that will never need to drive anywhere a child might possibly appear, Koopman argues Cruise should have kept safety drivers in place while it knew this flaw persisted. “The reason you remove safety drivers is for publicity and optics and investor confidence,” he told The Intercept.

    Koopman also noted that there’s not always linear progress in fixing safety issues. In the course of trying to fine-tune its navigation, Cruise’s simulated tests showed its AV software missed children at an increased rate, despite attempts to fix the issues, according to materials reviewed by The Intercept.

    The two larger issues of kids and holes weren’t the only robot flaws potentially imperiling nearby humans. According to other internal materials, some vehicles in the company’s fleet suddenly began making unprotected left turns at intersections, something Cruise cars are supposed to be forbidden from attempting. The potentially dangerous maneuvers were chalked up to a botched software update.

    The Cruise Origin, a self-driving vehicle with no steering wheel or pedals, is displayed at Honda’s booth during the press day of the Japan Mobility Show in Tokyo on Oct. 25, 2023.
    Photo: Kazuhiro Nogi/AFP via Getty Images

    The Future of Road Safety?

    Part of the self-driving industry’s techno-libertarian promise to society — and a large part of how it justifies beta-testing its robots on public roads — is the claim that someday, eventually, streets dominated by robot drivers will be safer than their flesh-based predecessors.

    Cruise cited a RAND Corporation study to make its case. “It projected deploying AVs that are on average ten percent safer than the average human driver could prevent 600,000 fatalities in the United States over 35 years,” wrote Vice President for Safety Louise Zhang in a company blog post. “Based on our first million driverless miles of operation, it appears we are on track to far exceed this projected safety benefit.”

    During General Motors’ quarterly earnings call — the same day California suspended Cruise’s operating permit — CEO Mary Barra told financial analysts that Cruise “is safer than a human driver and is constantly improving and getting better.”

    In the 2022 “Cruise Safety Report,” the company outlines a deeply unflattering comparison of fallible human drivers to hyper-intelligent robot cars. The report pointed out that driver distraction was responsible for more than 3,000 traffic fatalities in 2020, whereas “Cruise AVs cannot be distracted.” Crucially, the report claims, a “Cruise AV only operates in conditions that it is designed to handle.”

    “It’s I think especially egregious to be making the argument that Cruise’s safety record is better than a human driver.”

    When it comes to hitting kids, however, internal materials indicate the company’s machines were struggling to match the safety performance of even an average human: Cruise’s goal was, at the time, merely for its robots to drive around children as safely as an average Uber driver — a goal the internal materials note it was failing to meet.

    “It’s I think especially egregious to be making the argument that Cruise’s safety record is better than a human driver,” said Smith, the University of South Carolina law professor. “It’s pretty striking that there’s a memo that says we could hit more kids than an average rideshare driver, and the apparent response of management is, keep going.”

    In a statement to The Intercept, Cruise confirmed its goal of performing better than ride-hail drivers. “Cruise always strives to go beyond existing safety benchmarks, continuing to raise our own internal standards while we collaborate with regulators to define industry standards,” said Moser. “Our safety approach combines a focus on better-than-human behavior in collision imminent situations, and expands to predictions and behaviors to proactively avoid scenarios with risk of collision.”

    Cruise and its competitors have worked hard to keep going despite safety concerns, public and nonpublic. Before the California Public Utilities Commission voted to allow Cruise to offer driverless rides in San Francisco, where Cruise is headquartered, the city’s public safety and traffic agencies lobbied for a slower, more cautious approach to AVs. The commission didn’t agree with the agencies’ worries. “While we do not yet have the data to judge AVs against the standard human drivers are setting, I do believe in the potential of this technology to increase safety on the roadway,” said commissioner John Reynolds, who previously worked as a lawyer for Cruise.

    Had there always been human safety drivers accompanying all robot rides — which California regulators let Cruise ditch in 2021 — Smith said there would be little cause for alarm. A human behind the wheel could, for example, intervene to quickly steer a Cruise AV out of the path of a child or construction crew that the robot failed to detect. Though the company has put them back in place for now, dispensing entirely with human backups is ultimately crucial to Cruise’s long-term business, part of its pitch to the public that steering wheels will become a relic. With the wheel still there and a human behind it, Cruise would struggle to tout its technology as groundbreaking.

    “We’re not in a world of testing with in-vehicle safety drivers, we’re in a world of testing through deployment without this level of backup and with a whole lot of public decisions and claims that are in pretty stark contrast to this,” Smith explained. “Any time that you’re faced with imposing a risk that is greater than would otherwise exist and you’re opting not to provide a human safety driver, that strikes me as pretty indefensible.”

    The post Cruise Knew Its Self-Driving Cars Had Problems Recognizing Children — and Kept Them on the Streets appeared first on The Intercept.

  • As Israel imposed an internet blackout in Gaza on Friday, social media users posting about the grim conditions have contended with erratic and often unexplained censorship of content related to Palestine on Instagram and Facebook.

    Since Israel launched retaliatory airstrikes in Gaza after the October 7 Hamas attack, Facebook and Instagram users have reported widespread deletions of their content, translations inserting the word “terrorist” into Palestinian Instagram profiles, and suppressed hashtags. Instagram comments containing the Palestinian flag emoji have also been hidden, according to 7amleh, a Palestinian digital rights group that formally collaborates with Meta, which owns Instagram and Facebook, on regional speech issues.

    Numerous users have reported to 7amleh that their comments were moved to the bottom of the comments section and require a click to display. Many of the remarks have something in common: “It often seemed to coincide with having a Palestinian flag in the comment,” 7amleh spokesperson Eric Sype told The Intercept.

    Users reported that Instagram had flagged and hidden comments containing the emoji as “potentially offensive,” as TechCrunch first reported last week. Meta has routinely attributed similar instances of alleged censorship to technical glitches. Meta confirmed to The Intercept that the company has been hiding comments that contain the Palestinian flag emoji in certain “offensive” contexts that violate the company’s rules.

    “The notion of finding a flag offensive is deeply distressing for Palestinians,” Mona Shtaya, a nonresident fellow at the Tahrir Institute for Middle East Policy who follows Meta’s policymaking on speech, told The Intercept.

    “The notion of finding a flag offensive is deeply distressing for Palestinians.”

    Asked about the contexts in which Meta hides the flag, Meta spokesperson Andy Stone pointed to the Dangerous Organizations and Individuals policy, which designates Hamas as a terrorist organization, and cited a section of the Community Standards rulebook that prohibits any content “praising, celebrating or mocking anyone’s death.”

    It remains unclear, however, precisely how Meta determines whether the use of the flag emoji is offensive enough to suppress. The Intercept reviewed several hidden comments containing the Palestinian flag emoji that had no reference to Hamas or any other banned group. The Palestinian flag itself has no formal association with Hamas and predates the militant group by decades.

    Some of the hidden comments reviewed by The Intercept only contained emojis and no other text. In one, a user commented on an Instagram video of a pro-Palestinian demonstration in Jordan with green, white, and black heart emojis corresponding to the colors of the Palestinian flag, along with emojis of the Moroccan and Palestinian flags. In another, a user posted just three Palestinian flag emojis. Another screenshot seen by The Intercept shows two hidden comments consisting only of the hashtags #Gaza, #gazaunderattack, #freepalestine, and #ceasefirenow.

    “Throughout our long history, we’ve endured moments where our right to display the Palestinian flag has been denied by Israeli authorities. Decades ago, Palestinian artists Nabil Anani and Suleiman Mansour ingeniously used a watermelon as a symbol of our flag,” Shtaya said. “When Meta engages in such practices, it echoes the oppressive measures imposed on Palestinians.”

    Faulty Content Moderation

    Instagram and Facebook users have taken to other social media platforms to report other instances of censorship. On X, formerly known as Twitter, one user posted that Facebook blocked a screenshot of a popular Palestinian Instagram account he tried to share with a friend via private message. The message was flagged as containing nonconsensual sexual images, and his account was suspended.

    On Bluesky, Facebook and Instagram users reported that attempts to share national security reporter Spencer Ackerman’s recent article criticizing President Joe Biden’s support of Israel were blocked and flagged as cybersecurity risks.

    On Friday, the news site Mondoweiss tweeted a screenshot of an Instagram video about Israeli arrests of Palestinians in the West Bank that was removed because it violated the dangerous organizations policy.

    Meta’s increasing reliance on automated, software-based content moderation may prevent people from having to sort through extremely disturbing and potentially traumatizing images. The technology, however, relies on opaque, unaccountable algorithms that introduce the potential to misfire, censoring content without explanation. The issue appears to extend to posts related to the Israel-Palestine conflict.

    An independent audit commissioned by Meta last year determined that the company’s moderation practices amounted to a violation of Palestinian users’ human rights. The audit also concluded that the Dangerous Organizations and Individuals policy — which speech advocates have criticized for its opacity and its overrepresentation of Middle Eastern, Muslim, and South Asian people and groups — was “more likely to impact Palestinian and Arabic-speaking users, both based upon Meta’s interpretation of legal obligations, and in error.”

    Last week, the Wall Street Journal reported that Meta recently dialed down the level of confidence its automated systems require before suppressing “hostile speech” to 25 percent for the Palestinian market, a significant decrease from the standard threshold of 80 percent.
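
    As a rough sketch of what such a threshold change means in practice, consider the toy example below. The posts and scores are invented for illustration; only the 80 percent and 25 percent thresholds come from the Journal’s reporting, and nothing here reflects Meta’s actual systems.

    ```python
    # Hypothetical sketch: how lowering a hostile-speech confidence threshold
    # changes automated suppression. Posts and scores are invented; only the
    # 80 percent and 25 percent thresholds come from the Journal's reporting.

    posts = {
        "post_a": 0.90,  # classifier is very confident this is hostile speech
        "post_b": 0.40,  # borderline score
        "post_c": 0.27,  # low-confidence score
        "post_d": 0.10,  # almost certainly benign
    }

    def suppressed(scores, threshold):
        """Return the posts an automated system would hide at a given threshold."""
        return [post for post, score in scores.items() if score >= threshold]

    print(suppressed(posts, 0.80))  # standard threshold -> ['post_a']
    print(suppressed(posts, 0.25))  # lowered threshold  -> ['post_a', 'post_b', 'post_c']
    ```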

    The audit also faulted Meta for implementing a software scanning tool to detect violent or racist incitement in Arabic, but not for posts in Hebrew. “Arabic classifiers are likely less accurate for Palestinian Arabic than other dialects … due to lack of linguistic and cultural competence,” the report found.

    “Since the beginning of this crisis, we have received hundreds of submissions documenting incitement to violence in Hebrew.”

    Despite Meta’s claim that the company developed a speech classifier for Hebrew in response to the audit, hostile speech and violent incitement in Hebrew are rampant on Instagram and Facebook, according to 7amleh.

    “Based on our monitoring and documentation, it seems to be very ineffective,” 7amleh executive director and co-founder Nadim Nashif said of the Hebrew classifier. “Since the beginning of this crisis, we have received hundreds of submissions documenting incitement to violence in Hebrew, that clearly violate Meta’s policies, but are still on the platforms.”

    An Instagram search for a Hebrew-language hashtag roughly meaning “erase Gaza” produced dozens of results at the time of publication. Meta could not be immediately reached for comment on the accuracy of its Hebrew speech classifier.

    The Wall Street Journal shed light on why hostile speech in Hebrew still appears on Instagram. “Earlier this month,” the paper reported, “the company internally acknowledged that it hadn’t been using its Hebrew hostile speech classifier on Instagram comments because it didn’t have enough data for the system to function adequately.”

    The post Instagram Hid a Comment. It Was Just Three Palestinian Flag Emojis. appeared first on The Intercept.

  • The very obscure, archaic technologies that make cellphone roaming possible also make it possible to track phone owners across the world, according to a new investigation by the University of Toronto’s Citizen Lab. The roaming tech is riddled with security oversights that make it a ripe target for those who might want to trace the locations of phone users.

    As the report explains, the flexibility that made cellphones so popular in the first place is largely to blame for their near-inescapable vulnerability to unwanted location tracking: When you move away from a cellular tower owned by one company to one owned by another, your connection is handed off seamlessly, preventing any interruption to your phone call or streaming video. To accomplish this handoff, the cellular networks involved need to relay messages about who — and, crucially, precisely where — you are.
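
    As a simplified illustration of that bookkeeping, the sketch below models a home carrier recording which visited network and tower is currently serving a subscriber. It is a conceptual toy, not a real protocol: actual networks exchange these updates over signaling systems such as SS7 and Diameter, in far richer formats, and every name here is invented.

    ```python
    # Toy model of the location bookkeeping behind a roaming handoff.
    # Conceptual only: real carriers exchange these updates over signaling
    # protocols such as SS7 and Diameter, and the records are far richer.

    from dataclasses import dataclass

    @dataclass
    class LocationUpdate:
        subscriber_id: str    # in real networks, an identifier like the IMSI
        serving_network: str  # the network the phone has just attached to
        cell_area: str        # coarse location: which tower or area serves the phone

    class HomeNetwork:
        """The subscriber's home carrier, which must know where to route calls."""
        def __init__(self):
            self.last_known = {}

        def receive_update(self, update: LocationUpdate):
            # Anyone able to read or request these records can track the subscriber.
            self.last_known[update.subscriber_id] = (update.serving_network, update.cell_area)

    home = HomeNetwork()
    home.receive_update(LocationUpdate("subscriber-123", "visited-network-A", "tower-17"))
    home.receive_update(LocationUpdate("subscriber-123", "visited-network-B", "tower-42"))
    print(home.last_known["subscriber-123"])  # ('visited-network-B', 'tower-42')
    ```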

    “Notably, the methods available to law enforcement and intelligence services are similar to those used by the unlawful actors and enable them to obtain individuals’ geolocation information.”

    While most of these network-hopping messages are sent to facilitate legitimate customer roaming, the very same system can be easily manipulated to trick a network into divulging your location to governments, fraudsters, or private sector snoops.

    “Foreign intelligence and security services, as well as private intelligence firms, often attempt to obtain location information, as do domestic state actors such as law enforcement,” states the report from Citizen Lab, which researches the internet and tech from the Munk School of Global Affairs and Public Policy at the University of Toronto. “Notably, the methods available to law enforcement and intelligence services are similar to those used by the unlawful actors and enable them to obtain individuals’ geolocation information with high degrees of secrecy.”

    The sheer complexity required to allow phones to easily hop from one network to another creates a host of opportunities for intelligence snoops and hackers to poke around for weak spots, Citizen Lab says. Today, there are simply so many companies involved in the cellular ecosystem that opportunities abound for bad actors.

    Citizen Lab highlights the IP Exchange, or IPX, a network that helps cellular companies swap data about their customers. “The IPX is used by over 750 mobile networks spanning 195 countries around the world,” the report explains. “There are a variety of companies with connections to the IPX which may be willing to be explicitly complicit with, or turn a blind eye to, surveillance actors taking advantage of networking vulnerabilities and one-to-many interconnection points to facilitate geolocation tracking.”

    This network, however, is even more promiscuous than those numbers suggest, as telecom companies can privately sell and resell access to the IPX — “creating further opportunities for a surveillance actor to use an IPX connection while concealing its identity through a number of leases and subleases.” All of this, of course, remains invisible and inscrutable to the person holding the phone.

    Citizen Lab was able to document several efforts to exploit this system for surveillance purposes. In many cases, cellular roaming allows for turnkey spying across vast distances: In Vietnam, researchers identified a seven-month location surveillance campaign using the network of the state-owned GTel Mobile to track the movements of African cellular customers. “Given its ownership by the Ministry of Public Security the targeting was either undertaken with the Ministry’s awareness or permission, or was undertaken in spite of the telecommunications operator being owned by the state,” the report concludes.

    African telecoms seem to be a particular hotbed of roaming-based location tracking. Gary Miller, a mobile security researcher with Citizen Lab who co-authored the report, told The Intercept that, so far this year, he’d tracked over 11 million geolocation attacks originating from just two telecoms in Chad and the Democratic Republic of the Congo alone.

    In another case, Citizen Lab details a “likely state-sponsored activity intended to identify the mobility patterns of Saudi Arabia users who were traveling in the United States,” wherein Saudi phone owners were geolocated roughly every 11 minutes.

    The exploitation of the global cellular system is, indeed, truly global: Citizen Lab cites location surveillance efforts originating in India, Iceland, Sweden, Italy, and beyond.

    While the report notes a variety of factors, Citizen Lab places particular blame on the laissez-faire nature of global telecommunications, generally lax security standards, and a lack of legal and regulatory consequences.

    As governments throughout the West have been preoccupied for years with the purported surveillance threats of Chinese technologies, the rest of the world appears to have comparatively avoided scrutiny. “While a great deal of attention has been spent on whether or not to include Huawei networking equipment in telecommunications networks,” the report authors add, “comparatively little has been said about ensuring non-Chinese equipment is well secured and not used to facilitate surveillance activities.”

    The post Vulnerabilities in Cellphone Roaming Let Spies and Criminals Track You Across the Globe appeared first on The Intercept.

  • Instagram and Facebook users attempting to share scenes of devastation from a crowded hospital in Gaza City claim their posts are being suppressed, despite previous company policies protecting the publication of violent, newsworthy scenes of civilian death.

    Late Tuesday, amid a 10-day bombing campaign by Israel, the Gaza Strip’s al-Ahli Hospital was rocked by an explosion that left hundreds of civilians killed and wounded. Footage of the flaming exterior of the hospital, as well as dead and wounded civilians, including children, quickly emerged on social media in the aftermath of the attack.

    While the Palestinian Ministry of Health in the Hamas-run Gaza Strip blamed the explosion on an Israeli airstrike, the Israeli military later said the blast was caused by an errant rocket misfired by militants from the Gaza-based group Islamic Jihad.

    While widespread electrical outages and Israel’s destruction of Gaza’s telecommunications infrastructure have made getting documentation out of the besieged territory difficult, some purported imagery of the hospital attack making its way to the internet appears to be activating the censorship tripwires of Meta, the social media giant that owns Instagram and Facebook.

    Since Hamas’s surprise attack against Israel on October 7 and amid the resulting Israeli bombardment of Gaza, groups monitoring regional social media activity say censorship of Palestinian users is at a level not seen since May 2021, when violence flared between Israel and Gaza following Israeli police incursions into Muslim holy sites in Jerusalem.

    Two years ago, Meta blamed the abrupt deletion of Instagram posts about Israeli military violence on a technical glitch. On October 15, Meta spokesperson Andy Stone again attributed claims of wartime censorship to a “bug” affecting Instagram. (Meta could not be immediately reached for comment.)

    “It’s censorship mayhem like 2021. But it’s more sinister given the internet shutdown in Gaza.”

    Since the latest war began, Instagram and Facebook users inside and outside of the Gaza Strip have complained of deleted posts, locked accounts, blocked searches, and other impediments to sharing timely information about the Israeli bombardment and general conditions on the ground. 7amleh, a Palestinian digital rights group that collaborates directly with Meta on speech issues, has documented hundreds of user complaints of censored posts about the war, according to spokesperson Eric Sype, far outpacing the deletion levels seen two years ago.

    “It’s censorship mayhem like 2021,” Marwa Fatafta, a policy analyst with the digital rights group Access Now, told The Intercept. “But it’s more sinister given the internet shutdown in Gaza.”

    In other cases, users have successfully uploaded graphic imagery from al-Ahli to Instagram, suggesting that takedowns are not due to any formal policy on Meta’s end, but a product of the company’s at times erratic combination of outsourced human moderation and automated image-flagging software.

    An Instagram notification shows a story depicting a widely circulated image was removed by the platform on the basis of violating guidelines on nudity or sexual activity.
    Screenshot: Obtained by The Intercept

    Alleged Photo of Gaza Hospital Bombing

    One image rapidly circulating on social media platforms following the blast depicts what appears to be the flaming exterior of the hospital, where a clothed man is lying beside a pool of blood, his torso bloodied.

    According to screenshots shared with The Intercept by Fatafta, Meta platform users who shared this image had their posts removed or were prompted to remove them themselves because the picture violated policies forbidding “nudity or sexual activity.” Mona Shtaya, nonresident fellow at the Tahrir Institute for Middle East Policy, confirmed she had also gotten reports of two instances of this same image deleted. (The Intercept could not independently verify that the image was of al-Ahli Hospital.)

    One screenshot shows a user notified that Instagram had removed their upload of the photo, noting that the platform forbids “showing someone’s genitals or buttocks” or “implying sexual activity.” The underlying photo does not appear to show anything resembling either category of image.

    In another screenshot, a Facebook user who shared the same image was told their post had been uploaded, “but it looks similar to other posts that were removed because they don’t follow our standards on nudity or sexual activity.” The user was prompted to delete the post. The language in the notification suggests the image may have triggered one of the company’s automated, software-based content moderation systems, as opposed to a human review.

    Meta has previously distributed internal policy language instructing its moderators to not remove gruesome documentation of Russian airstrikes against Ukrainian civilians, though no such carveout is known to have been provided for Palestinians, whether today or in the past. Last year, a third-party audit commissioned by Meta found that systemic, unwarranted censorship of Palestinian users amounted to a violation of their human rights.

    The post Instagram Censored Image of Gaza Hospital Bombing, Claims It’s Too Sexual appeared first on The Intercept.

  • Amid a heavy retaliatory air and artillery assault by Israel against the Gaza Strip on October 10, Israel Defense Forces spokesperson Avichay Adraee posted a message on Facebook to residents of the al-Daraj neighborhood, urging them to leave their homes in advance of impending airstrikes.

    It’s not clear how most people in al-Daraj were supposed to see the warning: Intense fighting and electrical shortages have strangled Palestinian access to the internet, putting besieged civilians at even greater risk.

    Following Hamas’s grisly surprise attack across the Gaza border on October 7, the Israeli counterattack — a widespread and indiscriminate bombardment of the besieged Gaza Strip — left the two million Palestinians who call the area home struggling to connect to the internet at a time when access to current information is crucial and potentially lifesaving.

    “Shutting down the internet in armed conflict is putting civilians at risk.”

    “Shutting down the internet in armed conflict is putting civilians at risk,” Deborah Brown, a senior researcher at Human Rights Watch, told The Intercept. “It could help contribute to injury or death because people communicate around what are safe places and conditions.”

    According to companies and research organizations that monitor the global flow of internet traffic, Gazan access to the internet has dramatically dropped since Israeli strikes began, with data service cut entirely for some customers.

    “My sense is that very few people in Gaza have internet service,” Doug Madory of the internet monitoring firm Kentik told The Intercept. Madory said he spoke to a contact working with an internet service provider, or ISP, in Gaza who told him that internet access has been reduced by 80 to 90 percent because of a lack of fuel and power, and airstrikes.

    Marwa Fatafta, a policy analyst with the digital rights group Access Now, cited Israeli strikes on office buildings housing Gazan telecommunications firms, such as the now-demolished Al-Watan Tower, as a major cause of the outages, along with damage to the electrical grid.

    Fatafta told The Intercept, “There is a near complete information blackout from Gaza.”

    Most Gaza ISPs Are Gone

    With communications infrastructure left in rubble, Gazans now increasingly find themselves in a digital void at a time when data access is most crucial.

    “People in Gaza need access to the internet and telecommunications to check on their family and loved ones, seek life-saving information amidst the ongoing Israeli barrage on the strip; it’s crucial to document the war crimes and human rights abuses committed by Israeli forces at a time when disinformation is going haywire on social media,” Fatafta said.

    “There is some slight connectivity,” Alp Toker of the internet outage monitoring firm NetBlocks told The Intercept, but “most of the ISPs based inside of Gaza are gone.”

    Though it’s difficult to be certain whether these outages are due to electrical shortages, Israeli ordnance, or both, Toker said that, based on reports he has received from Gazan internet providers, the root cause is the Israeli destruction of fiber optic cables connecting Gaza. The ISPs are generally aware of where their infrastructure is damaged or destroyed, Toker said, but ongoing Israeli airstrikes will make sending a crew to patch them too dangerous to attempt. Still, one popular Gazan internet provider, Fusion, wrote in a Facebook post to its customers that efforts to repair damaged infrastructure were ongoing.

    That Gazan internet access remains in place at all, Toker said, is probably due to the use of backup generators that could soon run out of fuel in the face of an intensified Israeli military blockade. (Toker also said that, while it’s unclear if it was due to damage from Hamas rockets or a manual blackout, NetBlocks detected an internet service disruption inside Israel at the start of the attack, but that it quickly subsided.)

    Amanda Meng, a research scientist at Georgia Tech who works on the university’s Internet Outage Detection and Analysis project, or IODA, estimated that Gazan internet connectivity has dropped by around 55 percent in recent days, meaning more than half the networks inside Gaza have gone dark and no longer respond to the outside internet. Meng compared this level of access disruption to what was previously observed in Ukraine and Sudan during recent warfare in those countries. In Gaza, activity on the Border Gateway Protocol, the obscure routing system that undergirds the entire internet by moving data from one network to another, has also been disrupted.
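
    As a crude illustration of the probe-based side of such measurements, the sketch below checks whether a sample of hosts still answers and computes the share that has gone dark. The addresses are placeholder documentation IPs, not real Gazan infrastructure, and the real monitoring projects combine this kind of probing with BGP route data and other signals.

    ```python
    # Rough sketch of outage estimation by active probing. The addresses are
    # reserved documentation IPs (placeholders), and real monitoring projects
    # combine probing with BGP route data, darknet traffic, and other signals.

    import socket

    SAMPLE_HOSTS = ["203.0.113.10", "203.0.113.22", "203.0.113.45"]

    def is_reachable(host: str, port: int = 443, timeout: float = 3.0) -> bool:
        """Attempt a TCP connection; treat success as 'this network still answers.'"""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def connectivity_drop(hosts) -> float:
        """Fraction of sampled hosts that no longer respond."""
        down = sum(1 for host in hosts if not is_reachable(host))
        return down / len(hosts)

    print(f"Share of sampled networks dark: {connectivity_drop(SAMPLE_HOSTS):.0%}")
    ```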

    “On the ground, this looks like people not being able to use networked communication devices that rely on the Internet,” Meng explained.

    Organizations like NetBlocks and IODA use differing techniques to measure internet traffic, and their results tend to vary. It’s also nearly impossible to tell from the other side of the world whether a sudden dip in service is due to an explosion or something else. Beyond methodological differences and the fog of war, however, there is an added wrinkle: Like almost everything else in Gaza, ISPs connect to the broader internet through Israeli infrastructure.

    “By law, Gaza internet connectivity must go through Israeli infrastructure to connect to the outside world, so there is a possibility that the Israelis could leave it up because they are able to intercept communications,” said Madory of Kentik.

    Fatafta, the policy analyst, also cited Israel’s power to keep Gaza offline, both in this war and in general. “Israel’s full control of Palestinian telecommunications infrastructure and long-standing ban on technology upgrades” is an immense impediment, she said. With the wider internet blockaded, “people in Gaza can only access slow and unreliable 2G services” — a cellular standard from 1991.

    While Israel is reportedly also using analog means to warn Palestinians, their effectiveness is not always clear: “Palestinian residents of the city of Beit Lahiya in the northern region of the Gaza Strip said Thursday that Israeli planes dropped flyers warning them to evacuate their homes,” according to the Associated Press. “The area had already been heavily struck by the time the flyers were dropped.”

    The post Israel Warns Palestinians on Facebook — But Bombings Decimated Gaza Internet Access appeared first on The Intercept.

  • The Heat Initiative, a nonprofit child safety advocacy group, was formed earlier this year to campaign against some of the strong privacy protections Apple provides customers. The group says these protections help enable child exploitation, objecting to the fact that pedophiles can encrypt their personal data just like everyone else.

    When Apple launched its new iPhone this September, the Heat Initiative seized on the occasion, taking out a full-page New York Times ad, using digital billboard trucks, and even hiring a plane to fly over Apple headquarters with a banner message. The message on the banner appeared simple: “Dear Apple, Detect Child Sexual Abuse in iCloud” — Apple’s cloud storage system, which today employs a range of powerful encryption technologies aimed at preventing hackers, spies, and Tim Cook from knowing anything about your private files.

    Something the Heat Initiative has not placed on giant airborne banners is who’s behind it: a controversial billionaire philanthropy network whose influence and tactics have drawn unfavorable comparisons to the right-wing Koch network. Though it does not publicize this fact, the Heat Initiative is a project of the Hopewell Fund, an organization that helps privately and often secretly direct the largesse — and political will — of billionaires. Hopewell is part of a giant, tightly connected web of largely anonymous, Democratic Party-aligned dark-money groups that, in an ironic turn, is now campaigning to undermine the privacy of ordinary people.

    “None of these groups are particularly open with me or other people who are tracking dark money about what it is they’re doing.”

    For experts on transparency about money in politics, the Hopewell Fund’s place in the wider network of Democratic dark money raises questions that groups in the network are disinclined to answer.

    “None of these groups are particularly open with me or other people who are tracking dark money about what it is they’re doing,” said Robert Maguire, of Citizens for Responsibility and Ethics in Washington, or CREW. Maguire said the way the network operated called to mind perhaps the most famous right-wing philanthropy and dark-money political network: the constellation of groups run and supported by the billionaire owners of Koch Industries. Of the Hopewell network, Maguire said, “They also take on some of the structural calling cards of the Koch network; it is a convoluted group, sometimes even intentionally so.”

    The decadeslong political and technological campaign to diminish encryption for the sake of public safety — known as the “Crypto Wars” — has in recent years pivoted from stoking fears of terrorists chatting in secret to child predators evading police scrutiny. No matter the subject area, the battle is being waged between those who think privacy is an absolute right and those who believe it ought to be limited to allow expanded oversight by law enforcement and intelligence agencies. The ideological lines pit privacy advocates, computer scientists, and cryptographers against the FBI, the U.S. Congress, the European Union, and other governmental bodies around the world. Apple’s complex 2021 proposal to scan cloud-bound images before they ever left your phone became divisive even within the field of cryptography itself.

    While the motives on both sides tend to be clear — there’s little mystery as to why the FBI doesn’t like encryption — the Heat Initiative, as opaque as it is new, introduces the obscured interests of billionaires to a dispute over the rights of ordinary individuals. 

    “I’m uncomfortable with anonymous rich people with unknown agendas pushing these massive invasions of our privacy,” Matthew Green, a cryptographer at Johns Hopkins University and a critic of the plan to have Apple scan private files on its devices, told The Intercept. “There are huge implications for national security as well as consumer privacy against corporations. Plenty of unsavory reasons for people to push this technology that have nothing to do with protecting children.”

    Apple’s Aborted Scanning Scheme

    Last month, Wired reported the previously unknown Heat Initiative was pressing Apple to reconsider its highly controversial 2021 proposal to have iPhones constantly scan their owners’ photos as they were uploaded to iCloud, checking to see if they were in possession of child sexual abuse material, known as CSAM. If a scan turned up CSAM, police would be alerted. While most large internet companies check files their users upload and share against a centralized database of known CSAM, Apple’s plan went a step further, proposing to check for illegal files not just on the company’s servers, but directly on its customers’ phones.
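
    The underlying technique, in its simplest server-side form, is matching uploaded files against a database of digests of known abuse material. The sketch below shows that bare-bones version with placeholder data; Apple’s actual 2021 design was considerably more elaborate, relying on perceptual hashing and cryptographic threshold schemes rather than exact digests, and moving the check onto the device itself.

    ```python
    # Bare-bones sketch of server-side blocklist matching with placeholder data.
    # Real systems use perceptual hashes so near-duplicates still match; Apple's
    # 2021 proposal added cryptographic thresholds and ran the check on-device.

    import hashlib

    KNOWN_BAD_DIGESTS = {
        # Placeholder entry; in practice the list comes from child-safety
        # organizations' databases, not from the cloud provider itself.
        "0" * 64,
    }

    def digest(file_bytes: bytes) -> str:
        return hashlib.sha256(file_bytes).hexdigest()

    def flag_on_upload(file_bytes: bytes) -> bool:
        """Return True if an uploaded file matches the blocklist."""
        return digest(file_bytes) in KNOWN_BAD_DIGESTS

    print(flag_on_upload(b"example upload contents"))  # False for this placeholder file
    ```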

    “In the hierarchy of human privacy, your private files and photos should be your most important confidential possessions,” Green said. “We even wrote this into the U.S. Constitution.”

    The backlash was swift and effective. Computer scientists, cryptographers, digital rights advocates, and civil libertarians immediately protested, claiming the scanning would create a deeply dangerous precedent. The ability to scan users’ devices could open up iPhones around the world to snooping by authoritarian governments, hackers, corporations, and security agencies. A year later, Apple reversed course and said it was shelving the idea.

    Green said that efforts to push Apple to monitor the private files of iPhone owners are part of a broader effort against encryption, whether used to safeguard your photographs or speak privately with others — rights that were taken for granted before the digital revolution. “We have to have some principles about what we’ll give up to fight even heinous crime,” he said. “And these proposals give up everything.”

    “We have to have some principles about what we’ll give up to fight even heinous crime. And these proposals give up everything.”

    In an unusual move justifying its position, Apple provided Wired with a copy of the letter it sent to the Heat Initiative in reply to its demands. “Scanning every user’s privately stored iCloud data would create new threat vectors for data thieves to find and exploit,” the letter read. “It would also inject the potential for a slippery slope of unintended consequences. Scanning for one type of content, for instance, opens the door for bulk surveillance and could create a desire to search other encrypted messaging systems across content types.”

    The strong encryption built into iPhones, which shields sensitive data like your photos and iMessage conversations even from Apple itself, is frequently criticized by police agencies and national security hawks as providing shelter to dangerous criminals. In a 2014 speech, then-FBI Director James Comey singled out Apple’s encryption specifically, warning that “encryption threatens to lead all of us to a very dark place.”

    Some cryptographers respond that it’s impossible to filter possible criminal use of encryption without defeating the whole point of encryption in the first place: keeping out prying eyes.

    Similarly, any attempt to craft special access for police to use to view encrypted conversations when they claim they need to — a “backdoor” mechanism for law enforcement access — would be impossible to safeguard against abuse, a stance Apple now says it shares.

    Sarah Gardner, head of the Heat Initiative, on Sept. 1, 2023, in Los Angeles.

    Photo: Jessica Pons for the New York Times

    Dark-Money Network

    For an organization demanding that Apple scour the private information of its customers, the Heat Initiative discloses extremely little about itself. According to a report in the New York Times, the Heat Initiative is armed with $2 million from donors including the Children’s Investment Fund Foundation, an organization founded by British billionaire hedge fund manager and Google activist investor Chris Hohn, and the Oak Foundation, also founded by a British billionaire. The Oak Foundation previously provided $250,000 to a group attempting to weaken end-to-end encryption protections in EU legislation, according to a 2020 annual report.

    The Heat Initiative is helmed by Sarah Gardner, who joined from Thorn, an anti-child trafficking organization founded by actor Ashton Kutcher. (Earlier this month, Kutcher stepped down from Thorn following reports that he’d asked a California court for leniency in the sentencing of convicted rapist Danny Masterson.) Thorn has drawn scrutiny for its partnership with Palantir and efforts to provide police with advanced facial recognition software and other sophisticated surveillance tools. Critics say these technologies aren’t just uncovering trafficked children, but ensnaring adults engaging in consensual sex work.

    In an interview, Gardner declined to name the Heat Initiative’s funders, but she said the group hadn’t received any money from governmental or law enforcement organizations. “My goal is for child sexual abuse images to not be freely shared on the internet, and I’m here to advocate for the children who cannot make the case for themselves,” Gardner added.

    She said she disagreed with “privacy absolutists” — a group now apparently including Apple — who say CSAM-scanning iPhones would have imperiled user safety. “I think data privacy is vital,” she said. “I think there’s a conflation between user privacy and known illegal content.”

    Heat Initiative spokesperson Kevin Liao told The Intercept that, while the group does want Apple to re-implement its 2021 plan, it would be open to other approaches to screening everyone’s iCloud storage for CSAM. Since Apple began allowing iCloud users to protect their photos with end-to-end encryption last December, however, this objective is far trickier now than it was back in 2021; to scan iCloud images today would still require the mass-scrutinizing of personal data in some manner. As Apple put it in its response letter, “Scanning every user’s privately stored iCloud content would in our estimation pose serious unintended consequences for our users.”

    Both the Oak Foundation and Thorn were cited in a recent report revealing the extent to which law enforcement and private corporate interests have influenced European efforts to weaken encryption in the name of child safety.

    Beyond those groups and a handful of names, however, there is vanishingly little information available about what the Heat Initiative is, where it came from, or who exactly is paying its bills and why. Its website, which describes the group only as a “collective effort of concerned child safety experts and advocates” — who go unnamed — contains no information about funding, staff, or leadership.

    One crucial detail, however, can be found buried in the “terms of use” section of the Heat Initiative’s website: “THIS WEBSITE IS OWNED AND OPERATED BY Hopewell Fund AND ITS AFFILIATES.” Other than a similarly brief citation in the site’s privacy policy, there is no other mention of the Hopewell Fund or explanation of its role. The omission is significant, given Hopewell’s widely reported role as part of a shadowy cluster of Democratic dark-money groups that funnel billions from anonymous sources into American politics.

    Hopewell is part of a labyrinthine billionaire-backed network that receives and distributes philanthropic cash while largely obscuring its origin. The groups in this network include New Venture Fund (which has previously paid salaries at Hopewell), the Sixteen Thirty Fund, and Arabella Advisors, a for-profit company that helps administer these and other Democratic-leaning nonprofits and philanthropies. The groups have poured money into a wide variety of causes ranging from abortion access to opposing Republican tax policy, along the way spending big on elections — about $1.2 billion total in 2020 alone, according to a New York Times investigation.

    The deep pockets of this network and mystery surrounding the ultimate source of its donations have drawn comparisons — by Maguire, the Times, and others — to the Koch brothers’ network, whose influence over electoral politics from the right long outraged Democrats. When asked by The Atlantic in 2021 whether she felt good “that you’re the left’s equivalent of the Koch brothers,” Sampriti Ganguli, at the time the CEO of Arabella Advisors, replied in the affirmative.

    “Sixteen Thirty Fund is the largest network of liberal politically active nonprofits in the country. We’re talking here about hundreds of millions of dollars.”

    “Sixteen Thirty Fund is the largest network of liberal politically active nonprofits in the country,” Maguire of CREW told The Intercept. “We’re talking here about hundreds of millions of dollars.”

    Liao told The Intercept that Hopewell serves as the organization’s “fiscal sponsor,” an arrangement that allows tax-deductible donations to pass through a registered nonprofit on its way to an organization without tax-exempt status. Liao declined to provide a list of the Heat Initiative’s funders beyond the two mentioned by the New York Times. Owing to this fiscal sponsorship, Liao continued, “the Hopewell Fund’s board is Heat Initiative’s board.” Hopewell’s board includes New Venture Fund President Lee Bodner and Michael Slaby, a veteran of Barack Obama’s 2008 and 2012 campaigns and former chief technology strategist at an investment fund operated by ex-Google chair Eric Schmidt.

    When asked who exactly was leading the Heat Initiative, Liao told The Intercept that “it’s just the CEO Sarah Gardner.” According to LinkedIn, however, Lily Rhodes, also previously with Thorn, now works as Heat Initiative’s director of strategic operations. Liao later said Rhodes and Gardner are the Heat Initiative’s only two employees. When asked to name the “concerned child safety experts and advocates” referred to on the Heat Initiative’s website, Liao declined.

    “When you take on a big corporation like Apple,” he said, “you probably don’t want your name out front.”

    Hopewell’s Hopes

    Given the stakes — nothing less than the question of whether people have an absolute right to communicate in private — the murkiness surrounding a monied pressure campaign against Apple is likely to concern privacy advocates. The Heat Initiative’s efforts also give heart to those aligned with law enforcement interests. Following the campaign’s debut, former Georgia Bureau of Investigations Special Agent in Charge Debbie Garner, who has also previously worked for iPhone-hacking tech firm Grayshift, hailed the Heat Initiative’s launch in a LinkedIn group for Homeland Security alumni, encouraging them to learn more.

    The larger Hopewell network’s efforts to influence political discourse have attracted criticism and controversy in the past. In 2021, OpenSecrets, a group that tracks money in politics, reported that New Venture Fund and the Sixteen Thirty Fund were behind a nationwide Facebook ad campaign pushing political messaging from Courier News, a network of websites designed to look like legitimate, independent political news outlets.

    Despite its work with ostensibly progressive causes, Hopewell has taken on conservative campaigns: In 2017, Deadspin reported with bemusement on an NFL proposal in which the league would donate money into a pool administered by the Hopewell Fund as part of an incentive to get players to stop protesting during the national anthem.

    Past campaigns connected to Hopewell and its close affiliates have been suffused with Big Tech money. Hopewell is also the fiscal sponsor of the Economic Security Project, an organization that promotes universal basic income founded by Facebook co-founder Chris Hughes. In 2016, SiliconBeat reported that New Venture Fund, which is bankrolled in part by major donations from the Bill and Melinda Gates Foundation and William and Flora Hewlett Foundation, was behind the Google Transparency Project, an organization that publishes unflattering research relating to Google. Arabella has also helped Microsoft channel money to its causes of choice, the report noted. Billionaire eBay founder Pierre Omidyar has also provided large cash gifts to both Hopewell and New Venture Fund, according to the New York Times (Omidyar is a major funder of The Intercept).

    According to Riana Pfefferkorn, a research scholar at Stanford University’s Internet Observatory program, the existence of the Heat Initiative is ultimately the result of an “unforced error” by Apple in 2021, when it announced it was exploring using CSAM scanning for its cloud service.

    “And now they’re seeing that they can’t put the genie back in the bottle,” Pfefferkorn said. “Whatever measures they take to combat the cloud storage of CSAM, child safety orgs — and repressive governments — will remember that they’d built a tool that snoops on the user at the device level, and they’ll never be satisfied with anything less.”

    The post New Group Attacking iPhone Encryption Backed by U.S. Political Dark-Money Network appeared first on The Intercept.

  • The social media giant Meta recently updated the rulebook it uses to censor online discussion of people and groups it deems “dangerous,” according to internal materials obtained by The Intercept. The policy had come under fire in the past for casting an overly wide net that ended up removing legitimate, nonviolent content.

    The goal of the change is to remove less of this material. In updating the policy, Meta, the parent company of Facebook and Instagram, also made an internal admission that the policy has censored speech beyond what the company intended.

    Meta’s “Dangerous Organizations and Individuals,” or DOI, policy is based around a secret blacklist of thousands of people and groups, spanning everything from terrorists and drug cartels to rebel armies and musical acts. For years, the policy prohibited the more than one billion people using Facebook and Instagram from engaging in “praise, support or representation” of anyone on the list.

    Now, Meta will provide a greater allowance for discussion of these banned people and groups — so long as it takes place in the context of “social and political discourse,” according to the updated policy, which also replaces the blanket prohibition against “praise” of blacklisted entities with a new ban on “glorification” of them.

    The updated policy language has been distributed internally, but Meta has yet to disclose it publicly beyond a mention of the “social and political discourse” exception on the community standards page. Blacklisted people and organizations are still banned from having an official presence on Meta’s platforms.

    The revision follows years of criticism of the policy. Last year, a third-party audit commissioned by Meta found the company’s censorship rules systematically violated the human rights of Palestinians by stifling political speech, and singled out the DOI policy. The new changes, however, leave major problems unresolved, experts told The Intercept. The “glorification” adjustment, for instance, is well intentioned but likely to suffer from the same ambiguity that created issues with the “praise” standard.

    “Changing the DOI policy is a step in the right direction, one that digital rights defenders and civil society globally have been requesting for a long time,” Mona Shtaya, nonresident fellow at the Tahrir Institute for Middle East Policy, told The Intercept.

    Observers like Shtaya have long objected to how the DOI policy has tended to disproportionately censor political discourse in places like Palestine — where discussing a Meta-banned organization like Hamas is unavoidable — in contrast to how Meta rapidly adjusted its rules to allow praise of the Ukrainian Azov Battalion despite its neo-Nazi sympathies.

    “The recent edits illustrate that Meta acknowledges the participation of certain DOI members in elections,” Shtaya said. “However, it still bars them from its platforms, which can significantly impact political discourse in these countries and potentially hinder citizens’ equal and free interaction with various political campaigns.”

    Acknowledged Failings

    Meta has long maintained the original DOI policy is intended to curtail the ability of terrorists and other violent extremists from causing real-world harm. Content moderation scholars and free expression advocates, however, maintain that the way the policy operates in practice creates a tendency to indiscriminately swallow up and delete entirely nonviolent speech. (Meta declined to comment for this story.)

    In the new internal language, Meta acknowledged the failings of its rigid approach and said the company is attempting to improve the rule. “A catch-all policy approach helped us remove any praise of designated entities and individuals on the platform,” read an internal memo announcing the change. “However, this approach also removes social and political discourse and causes enforcement challenges.”

    Meta’s proposed solution is “recategorizing the definition of ‘Praise’ into two areas: ‘References to a DOI,’ and ‘Glorification of DOIs.’ These fundamentally different types of content should be treated differently.” Mere “references” to a terrorist group or cartel kingpin will be permitted so long as they fall into one of 11 new categories of discourse Meta deems acceptable:

    Elections, Parliamentary and executive functions, Peace and Conflict Resolution (truce/ceasefire/peace agreements), International agreements or treaties, Disaster response and humanitarian relief, Human Rights and humanitarian discourse, Local community services, Neutral and informative descriptions of DOI activity or behavior, News reporting, Condemnation and criticism, Satire and humor.

    Posters will still face strict requirements to avoid running afoul of the policy, even if they’re attempting to participate in one of the above categories. To stay online, any Facebook or Instagram posts mentioning banned groups and people must “explicitly mention” one of the permissible contexts or face deletion. The memo says “the onus is on the user to prove” that they’re fitting into one of the 11 acceptable categories.

    According to Shtaya, the Tahrir Institute fellow, the revised approach continues to put Meta’s users at the mercy of a deeply flawed system. She said, “Meta’s approach places the burden of content moderation on its users, who are neither language experts nor historians.”

    Unclear Guidance

    Instagram and Facebook users will still have to hope their words aren’t interpreted by Meta’s outsourced legion of overworked, poorly paid moderators as “glorification.” The term is defined internally in almost exactly the same language as its predecessor, “praise”: “Legitimizing or defending violent or hateful acts by claiming that those acts or any type of harm resulting from them have a moral, political, logical, or other justification that makes them appear acceptable or reasonable.” Another section defines glorification as any content that “justifies or amplifies” the “hateful or violent” beliefs or actions of a banned entity, or describes them as “effective, legitimate or defensible.”

    Though Meta intends this language to be universal, equitably and accurately applying labels as subjective as “legitimate” or “hateful” to the entirety of global online discourse has proven impossible to date.

    “Replacing ‘praise’ with ‘glorification’ does little to change the vagueness inherent to each term,” according to Ángel Díaz, a professor at University of Southern California’s Gould School of Law and a scholar of social media content policy. “The policy still overburdens legitimate discourse.”

    “Replacing ‘praise’ with ‘glorification’ does little to change the vagueness inherent to each term. The policy still overburdens legitimate discourse.”

    The notions of “legitimization” or “justification” are deeply complex, philosophical matters that would be difficult to address by anyone, let alone a contractor responsible for making hundreds of judgments each day.

    The revision does little to address the heavily racialized way in which Meta assesses and attempts to thwart dangerous groups, Díaz added. While the company still refuses to disclose the blacklist or how entries are added to it, The Intercept published a full copy in 2021. The document revealed that the overwhelming majority of the “Tier 1” dangerous people and groups — who are still subject to the harshest speech restrictions under the new policy — are Muslim, Arab, or South Asian. White, American militant groups, meanwhile, are overrepresented in the far more lenient “Tier 3” category.

    Díaz said, “Tier 3 groups, which appear to be largely made up of right-wing militia groups or conspiracy networks like QAnon, are not subject to bans on glorification.”

    Meta’s own internal rulebook seems unclear about how enforcement is supposed to work, seemingly still dogged by the same inconsistencies and self-contradictions that have muddled its implementation for years.

    For instance, the rule permits “analysis and commentary” about a banned group, but a hypothetical post arguing that the September 11 attacks would not have happened absent U.S. aggression abroad is considered a form of glorification, presumably of Al Qaeda, and should be deleted, according to one example provided in the policy materials. Though one might vehemently disagree with that premise, it’s difficult to claim it’s not a form of analysis and commentary.

    Another hypothetical post in the internal language says, in response to Taliban territorial gains in the Afghanistan war, “I think it’s time the U.S. government started reassessing their strategy in Afghanistan.” The post, the rule says, should be labeled as nonviolating, despite what appears to be a clear-cut characterization of the banned group’s actions as “effective.”

    David Greene, civil liberties director at the Electronic Frontier Foundation, told The Intercept these examples illustrate how difficult it will be to consistently enforce the new policy. “They run through a ton of scenarios,” Greene said, “but for me it’s hard to see a through-line in them that indicates generally applicable principles.”

    The post Meta Overhauls Controversial “Dangerous Organizations” Censorship Policy appeared first on The Intercept.

  • The Texas Department of Public Safety purchased access to powerful software capable of locating and following people through their phones as part of Republican Gov. Greg Abbott’s “border security disaster” efforts, according to documents reviewed by The Intercept.

    In 2021, Abbott proclaimed that the “surge of individuals unlawfully crossing the Texas-Mexico border posed an ongoing and imminent threat of disaster” to the state and its residents. Among other effects, the disaster declaration opened a spigot of government money to a variety of private firms ostensibly paid to help patrol and blockade the state’s border with Mexico.

    One of the private companies that got in on the cash disbursements was Cobwebs Technologies, a little-known Israeli surveillance contractor. Cobwebs’s marquee product, the surveillance platform Tangles, offers its users a bounty of different tools for tracking people as they navigate both the internet and the real world, synthesizing social media posts, app activity, facial recognition, and phone tracking.

    “As long as this broken consumer data industry exists as it exists today, shady actors will always exploit it.”

    News of the purchase comes as Abbott’s border crackdown escalated to new heights, following a Department of Public Safety whistleblower’s report of severe mistreatment of migrants by state law enforcement and a Justice Department lawsuit over the governor’s deployment of razor wire on the Rio Grande. The Cobwebs documents show that Abbott’s efforts to usurp the federal government’s constitutional authority to conduct immigration enforcement have extended into the electronic realm as well. The implications could reach far beyond the geographic bounds of the border and into the private lives of citizens and noncitizens alike.

    “Government agencies systematically buying data that has been originally collected to provide consumer services or digital advertising represents the worst possible kind of decontextualized misuse of personal information,” Wolfie Christl, a privacy researcher who tracks data brokerages, told The Intercept. “But as long as this broken consumer data industry exists as it exists today, shady actors will always exploit it.”

    Like its competitors in the world of software tracking tools, Cobwebs — which sells its services to the Department of Homeland Security, the IRS, and a variety of undisclosed corporate customers — lets its clients track the movements of private individuals without a court order. Instead of needing a judge’s sign-off, these tracking services rely on bulk-purchasing location pings pulled from smartphones, often through unscrupulous mobile apps or in-app advertisers, an unregulated and increasingly pervasive form of location tracking.

    In August 2021, the Texas Department of Public Safety’s Intelligence and Counterterrorism division purchased a year of Tangles access for $198,000, according to contract documents, obtained through a public records request by Tech Inquiry, a watchdog and research organization, and shared with The Intercept. The state has renewed its Tangles subscription twice since then, though the discovery that Cobwebs failed to pay taxes owed in Texas briefly derailed the renewal last April, according to an email included in the records request. (Cobwebs declined to comment for this story.)

    A second 2021 contract document shared with The Intercept shows DPS purchased “unlimited” access to Clearview AI, a controversial face recognition platform that matches individuals to tens of billions of photos scraped from the internet. The purchase, according to the document, was made “in accordance/governed by the Texas Governor’s Disaster Declaration for the Texas-Mexico border for ongoing and imminent threats.” (Clearview did not respond to a request for comment.)

    Each of the three yearlong subscriptions notes Tangles was purchased “in accordance to the provisions outlined in the Texas Governor-Proclaimed Border Disaster Declaration signed May 22, 2022, per Section 418.011 of the Texas Government Code.”

    The disaster declaration, which spans more than 50 counties, is part of an ongoing campaign by Abbott that has pushed the bounds of civil liberties in Texas, chiefly through the governor’s use of the Department of Public Safety.

    Under Operation Lone Star, Abbott has spent $4.5 billion surging 10,000 Department of Public Safety troopers and National Guard personnel to the border as part of a stated effort to beat back a migrant “invasion,” which he claims is aided and abetted by President Joe Biden. The resulting project has been riddled with scandal, including migrants languishing for months in state jails without charges and several suicides among personnel deployed on the mission. Just this week, the Houston Chronicle obtained an internal Department of Public Safety email revealing that troopers had been “ordered to push small children and nursing babies back into the Rio Grande” and “told not to give water to asylum seekers even in extreme heat.”

    On Monday, the U.S. Justice Department sued Texas over Abbott’s deployment of floating barricades on the Rio Grande. Abbott, having spent more than two years angling for a states’ rights border showdown with the Biden administration, responded last week to news of the impending lawsuit by tweeting: “I’ll see you in court, Mr. President.”

    Despite Abbott’s repeated claims that Operation Lone Star is a targeted effort focused specifically on crimes at the border, a joint investigation by the Texas Tribune, ProPublica, and the Marshall Project last year found that the state was counting arrests and drug charges far from the U.S.-Mexico divide and unrelated to the Operation Lone Star mandate. Records obtained by the news organizations last summer showed that the Justice Department opened a civil rights investigation into Abbott’s operation. The status of the investigation has not been made public.

    Where the Department of Public Safety’s access to Tangles’s powerful cellphone tracking software will fit into Abbott’s controversial border enforcement regime remains uncertain. (The Texas Department of Public Safety did not respond to a request for comment.)

    Although Tangles provides an array of options for keeping tabs on a given target, the most powerful feature obtained by the Department of Public Safety is Tangles’s “WebLoc” feature: “a cutting-edge location solution which automatically monitors and analyzes location-based data in any specified geographic location,” according to company marketing materials. While Cobwebs claims it sources device location data from multiple sources, the Texas Department of Public Safety contract specifically mentions “ad ID,” a reference to the unique strings of text used to identify and track a mobile phone in the online advertising ecosystem.

    “Every second, hundreds of consumer data brokers most people never heard of collect and sell huge amounts of personal information on everyone,” explained Christl, the privacy researcher. “Most of these shady and opaque data practices are systematically enabled by today’s digital marketing and advertising industry, which has gotten completely out of control.”

    While advertisers defend this practice on the grounds that the device ID itself doesn’t contain a person’s name, Christl added that “several data companies sell information that helps to link mobile device identifiers to email addresses, phone numbers, names and postal addresses.” Even without extra context, tying a real name to an “anonymized” advertising identifier’s location ping is often trivial, as a person’s daily movement patterns typically quickly reveal both where they live and work.
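
    Christl’s point about movement patterns can be made concrete with a few lines of code. The sketch below is purely illustrative, written in Python against an invented data format of timestamped pings tied to a single ad ID; it is not Tangles or WebLoc code, just a demonstration of how overnight and business-hours pings give away a likely home and workplace.

    ```python
    # Purely illustrative sketch: infer a likely "home" and "work" location for a
    # pseudonymous advertising ID from timestamped GPS pings. The data format is
    # invented; this is not Tangles/WebLoc code.
    from collections import Counter
    from datetime import datetime


    def grid_cell(lat, lon, places=3):
        # Rounding to 3 decimal places bins pings into roughly 100-meter cells,
        # a crude stand-in for real spatial clustering.
        return (round(lat, places), round(lon, places))


    def infer_home_and_work(pings):
        """pings: list of (iso_timestamp, lat, lon) tuples tied to one ad ID."""
        night, day = Counter(), Counter()
        for ts, lat, lon in pings:
            hour = datetime.fromisoformat(ts).hour
            cell = grid_cell(lat, lon)
            if hour >= 21 or hour < 6:   # overnight pings -> likely home
                night[cell] += 1
            elif 9 <= hour < 17:         # business hours -> likely workplace
                day[cell] += 1
        home = night.most_common(1)[0][0] if night else None
        work = day.most_common(1)[0][0] if day else None
        return home, work


    sample = [
        ("2023-03-01T23:10:00", 30.2711, -97.7437),
        ("2023-03-02T02:40:00", 30.2712, -97.7436),
        ("2023-03-02T10:15:00", 30.2850, -97.7335),
        ("2023-03-02T14:05:00", 30.2851, -97.7334),
    ]
    print(infer_home_and_work(sample))
    ```

    Once a probable home cell is known, an ordinary reverse address lookup is typically enough to attach a name to the “anonymized” identifier.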

    Cobwebs advertises that WebLoc draws on “huge sums of location-based data,” and it means huge: According to a WebLoc promotional brochure, it affords customers “worldwide coverage” of smartphone pings based on “billions of data points to ensure maximum location based data coverage.” WebLoc not only provides the exact locations of smartphones, but also personal information associated with their owners, including age, gender, languages spoken, and interests — “e.g., music, luxury goods, basketball” — according to a contract document from the Office of Naval Intelligence, another Cobwebs customer.

    The ability to track a person wherever they go based on an indispensable object they keep on or near them every hour of every day is of obvious appeal to law enforcement officials, particularly given that no judicial oversight is required to use a tool like Tangles. Critics of the technology have argued that a legislative vacuum allows phone-tracking tools, fed by the unregulated global data broker market, to give law enforcement agencies a way around Fourth Amendment protections.

    The power to track people through Tangles, however, is valuable even in countries without an ostensible legal prohibition against unreasonable searches. In 2021, Facebook announced it had removed 200 accounts used by Cobwebs to track its users in Bangladesh, Saudi Arabia, Poland, and several other countries.

    “In addition to targeting related to law enforcement activities,” the company explained, “we also observed frequent targeting of activists, opposition politicians and government officials in Hong Kong and Mexico.”

    Beryl Lipton, an investigative researcher with the Electronic Frontier Foundation, told The Intercept that bolstering surveillance powers under the aegis of an emergency declaration adds further risk to an already fraught technology. “We need to be very skeptical of any expansion of surveillance that occurs under disaster declarations, particularly open-ended claims of emergency,” Lipton said. “They can undermine legislative checks on the executive branch and obviate bounds on state behavior that exist for good reason.”

    The post Texas State Police Purchased Israeli Phone-Tracking Software for “Border Emergency” appeared first on The Intercept.

  • A technology wish list circulated by the U.S. military’s elite Joint Special Operations Command suggests the country’s most secretive war-fighting component shares an anxiety with the world’s richest man: Too many people can see where they’re flying their planes.

    The Joint Special Operations Air Component, responsible for ferrying commandos and their gear around the world, is seeking help keeping these flights out of the public eye through a “‘Big Data’ Analysis & Feedback Tool,” according to a procurement document obtained by The Intercept. The document is one of a series of periodic releases of lists of technologies that special operations units would like to see created by the private sector.

    The listing specifically calls out the risk of social media “tail watchers” and other online observers who might identify a mystery plane as a military flight. According to the document, the Joint Special Operations Air Component needs software to “leverage historical and real-time data, such as the travel histories and details of specific aircraft with correlation to open-source information, social media, and flight reporting.”

    Armed with this data, the tool would help special operations forces gauge how much scrutiny a given plane has received in the past and how likely it is to be connected to them by prying eyes online.

    Rather than providing the ability to fake or anonymize flight data, the tool seems to be aimed at letting sensitive military flights hide in plain sight. “It just gives them better information on how to blend in,” Scott Lowe, a longtime tail watcher and aviation photographer, told The Intercept. “It’s like the police deciding to use the most common make of local car as an undercover car.”

    While plane tracking has long been a niche hobby among aviation enthusiasts who enjoy cataloging the comings and goings of aircraft, the public availability of midair transponder data also affords journalists, researchers, and other observers an effective means of tracking the movements and activities of the world’s richest and most powerful. The aggregation and analysis of public flight data has shed light on CIA torture flights, movements of Russian oligarchs, and Google’s chummy relationship with NASA.

    More recently, these sleuthing techniques gained international attention after they drew the ire of Elon Musk, the world’s richest man. After he purchased the social media giant Twitter, Musk banned an account that shared the movements of his private jet. Despite repeated promises to protect free speech on the platform — and a specific pledge not to ban the @ElonJet account — Musk proceeded to censor anyone sharing his plane’s whereabouts, claiming the entirely legally obtained and fully public data amounted to “assassination coordinates.”

    The Joint Special Operations Air Component’s desire for more discreet air travel, published six months after Musk’s jet data meltdown, is likely more firmly grounded in reality.

    The Joint Special Operations Air Component offers a hypothetical scenario in which special forces needing to travel with a “reduced profile” — that is to say, quietly — would put the tool to use.

    “When determining if the planned movement is suitable and appropriate,” the procurement document says, “the ‘Aircraft Flight Profile Management Database Tool’ reveals that the aircraft is primarily associated with a distinctly different geographic area” — a frequent tip-off to civilian plane trackers that something interesting is afoot. “Additionally, ‘tail watchers’ have posted on social media pictures of the aircraft at various airfields. Based on the information available, the commander decides to utilize a different airframe for the mission. With the aircraft in flight, the tool is monitored for any indication of increased scrutiny or mission compromise.”
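
    The procurement document describes desired behavior rather than an implementation, but the scenario it walks through boils down to a scoring problem. The sketch below is a hypothetical illustration of that logic; the fields, weights, regions, and tail numbers are all invented, and nothing in it comes from the still-notional tool.

    ```python
    # Hypothetical illustration only: nothing here comes from the actual JSOC tool,
    # which is still a wish-list item. Fields, weights, regions, and tail numbers
    # are invented to show the scoring logic the scenario implies.
    from dataclasses import dataclass
    from typing import List


    @dataclass
    class AircraftHistory:
        tail_number: str
        past_flight_regions: List[str]  # e.g., ["CONUS-Southeast", "Caribbean"]
        public_sightings: int           # spotter/social media posts naming the plane


    def scrutiny_score(history: AircraftHistory, mission_region: str) -> float:
        """Higher score = more likely to stand out to online observers."""
        if not history.past_flight_regions:
            return 1.0  # an aircraft with no history is itself conspicuous
        out_of_area = sum(
            1 for region in history.past_flight_regions if region != mission_region
        ) / len(history.past_flight_regions)
        # Cap the visibility term so a heavily photographed plane maxes out at 1.0.
        visibility = min(history.public_sightings / 50.0, 1.0)
        return 0.6 * out_of_area + 0.4 * visibility


    candidates = [
        AircraftHistory("N123AB", ["CONUS-Southeast"] * 40, public_sightings=2),
        AircraftHistory("N987CD", ["Europe-Central"] * 35, public_sightings=64),
    ]
    best = min(candidates, key=lambda h: scrutiny_score(h, "CONUS-Southeast"))
    print("Lowest-profile airframe for this mission:", best.tail_number)
    ```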

    The request is part of a broad-ranging list of technologies sought by the Joint Special Operations Command, from advanced radios and portable blood pumps to drones that can fly months at a time. The 85-page list essentially advertises these technologies for private-sector contractors, who may be able to sell them to the Pentagon in the near future.

    The document — marked unclassified but for “Further dissemination only as directed by the Office of the Secretary of Defense (OSD) Joint Capability and Technology Expo (JCTE) Team” — is part of an annual effort by Joint Special Operations Command to “inform and influence industry’s internal investment decisions in areas that address SOF’s most sensitive and urgent interest areas.”

    The anti-plane-tracking tool fits into a broader pattern of the military attempting to minimize the visibility of its flights, according to Ian Servin, a pilot and plane-tracking enthusiast. In March, the military removed tail numbers and other identifying marks from its planes.

    “What will be interesting is seeing how they change their operations after having this information,” Servin said. From a transparency standpoint, he added, “Those changes could be problematic or concerning.”

    The post Pentagon Joins Elon Musk’s War Against Plane Tracking appeared first on The Intercept.

  • The legal research and public records data broker LexisNexis is providing U.S. Immigration and Customs Enforcement with tools to target people who may potentially commit a crime — before any actual crime takes place, according to a contract document obtained by The Intercept. LexisNexis then allows ICE to track the purported pre-criminals’ movements.

    The unredacted contract overview provides a rare look at the controversial $16.8 million agreement between LexisNexis and ICE, a federal law enforcement agency whose surveillance of and raids against migrant communities are widely criticized as brutal, unconstitutional, and inhumane.

    “The purpose of this program is mass surveillance at its core,” said Julie Mao, an attorney and co-founder of Just Futures Law, which is suing LexisNexis over allegations it illegally buys and sells personal data. Mao told The Intercept the ICE contract document, which she reviewed for The Intercept, is “an admission and indication that ICE aims to surveil individuals where no crime has been committed and no criminal warrant or evidence of probable cause.”

    While the company has previously refused to answer any questions about precisely what data it’s selling to ICE or to what end, the contract overview describes LexisNexis software as not simply a giant bucket of personal data, but also a sophisticated analytical machine that purports to detect suspicious activity and scrutinize migrants — including their locations.

    “This is really concerning,” Emily Tucker, the executive director of Georgetown Law School’s Center on Privacy and Technology, told The Intercept. Tucker compared the LexisNexis contract to controversial and frequently biased predictive policing software, with the alarm heightened by ICE’s access to license plate databases. “Imagine if whenever a cop used PredPol to generate a ‘hot list’ the software also generated a map of the most recent movements of any vehicle associated with each person on the hot list.”

    The document, a “performance of work statement” made by LexisNexis as part of its contract with ICE, was obtained by journalist Asher Stockler through a public records request and shared with The Intercept. LexisNexis Risk Solutions, a subsidiary of LexisNexis’s parent company, inked the contract with ICE, a part of the Department of Homeland Security, in 2021.

    “LexisNexis Risk Solutions prides itself on the responsible use of data, and the contract with the Department of Homeland Security encompasses only data allowed for such uses,” said LexisNexis spokesperson Jennifer Richman. She told The Intercept the company’s work with ICE doesn’t violate the law or federal policy, but did not respond to specific questions.

    The document reveals that over 11,000 ICE officials, including within the explicitly deportation-oriented Enforcement and Removal Operations branch, were using LexisNexis as of 2021. “This includes supporting all aspects of ICE screening and vetting, lead development, and criminal analysis activities,” the document says.

    In practice, this means ICE is using software to “automate” the hunt for suspicious-looking blips in the data, or links between people, places, and property. It is unclear how such blips in the data can be linked to immigration infractions or criminal activity, but the contract’s use of the term “automate” indicates that ICE is to some extent letting computers make consequential conclusions about human activity. The contract further notes that the LexisNexis analysis includes “identifying potentially criminal and fraudulent behavior before crime and fraud can materialize.” (ICE did not respond to a request for comment.)

    LexisNexis supports ICE’s activities through a widely used data system named the Law Enforcement Investigative Database Subscription, or LEIDS. The contract document provides the most comprehensive window yet into what data tools might be offered to LEIDS clients. Other federal, state, and local authorities who pay a hefty subscription fee for the LexisNexis program could have access to the same powerful surveillance tools used by ICE.

    The LEIDS program is used by ICE for “the full spectrum of its immigration enforcement,” according to the contract document. LexisNexis’s tools allow ICE to monitor the personal lives and mundane movements of migrants in the U.S., in search of incriminating “patterns” and to help “strategize arrests.”

    The ICE contract makes clear the extent to which LexisNexis isn’t simply a resource to be queried but a major power source for the American deportation machine.

    LexisNexis is known for its vast trove of public records and commercial data, a constantly updating archive that includes information ranging from boating licenses and DMV filings to voter registrations and cellphone subscriber rolls. In the aggregate, these data points create a vivid mosaic of a person’s entire life, interests, professional activities, criminal run-ins no matter how minor, and far more.

    While some of the data is valuable for the likes of researchers, journalists, and law students, LexisNexis has turned the mammoth pool of personal data into a lucrative revenue stream by selling it to law enforcement clients like ICE, who use the company’s many data points on over 280 million different people to not only determine whether someone constitutes a “risk,” but also to locate and apprehend them.

    LexisNexis has long deflected questions about its relationship with ICE by citing the agency’s “national security” and “public safety” mission; ICE is responsible for both criminal and civil immigration violations, including smuggling, other trafficking, and customs violations. The contract’s language, however, indicates LexisNexis is empowering ICE to sift through a vast sea of personal data to do exactly what advocates have warned against: busting migrants for civil immigration violations, a far cry from thwarting terrorists and transnational drug cartels.

    ICE has a documented history of rounding up and deporting nonviolent immigrants without any criminal history, whose only offense may be something on the magnitude of a traffic violation or civil immigration violation. The contract document further suggests LexisNexis is facilitating ICE’s workplace raids, one of the agency’s most frequently criticized practices, by helping immigration officials detect fraud through bulk searches of Social Security and phone numbers.

    ICE investigators can use LexisNexis tools, the document says, to pull a large quantity of records about a specified individual’s life and visually map their relationships to other people and property. The practice stands as an exemplar of the digital surveillance sprawl that immigrant advocates have warned unduly broadens the gaze of federal suspicion onto masses of people.

    Citing language from the contract, Mao, the lawyer on the lawsuit, said, “‘Patterns of relationships between entities’ likely means family members, one of the fears for immigrants and mixed status families is that LexisNexis and other data broker platforms can map out family relationships to identify, locate, and arrest undocumented individuals.”

    The contract shows ICE can combine LexisNexis data with databases from other outside firms, namely PenLink, a controversial company that helps police nationwide request private user data from social media companies.

    A license plate reader, center, and surveillance camera, top right, are seen at an intersection in West Baltimore, Md., on April 29, 2020. Photo: Julio Cortez/AP

    The contract’s “performance of work statement” mostly avoids delving into the numerous categories of data LexisNexis makes available to ICE, but it does make clear the importance of one: license plates.

    The automatic scanning of license plates has created a feast for data-hungry government agencies, providing an effective means of tracking people. Many people are unaware that their license plates are continuously scanned as they drive throughout their communities and beyond — thanks to automated systems affixed to traffic lights, cop cars, and anywhere else a small camera might fit. These automated license plate reader systems, or ALPRs, are employed by an increasingly diverse range of surveillance-seekers, from toll booths to homeowners associations.

    Police are a major consumer of the ALPR spigot. For them, tracking the humble license plate is a relatively cheap means of covertly tracking a person’s movements while — as with all the data offered by LexisNexis — potentially bypassing Fourth Amendment considerations. The trade in bulk license plate data is generally unregulated, and information about scanned plates is indiscriminately aggregated, stored, shared, and eventually sold through companies like LexisNexis.

    A major portion of the LexisNexis overview document details ICE’s access to and myriad uses of license plate reader data to geolocate its targets, providing the agency with 30 million new plate records monthly. The document says ICE can access data on any license plate query going back years; while the time frames for different kinds of investigations aren’t specified, the contract document says immigration investigations can query location and other data on a license plate going back five years.

    The LexisNexis license plate bounty provides ICE investigators with a variety of location-tracking surveillance techniques, including the ability to learn which license plates — presumably including people under no suspicion of any wrongdoing — have appeared in a location of interest. Users subscribing to LexisNexis’s LEIDS program can also plug a plate into the system, and LexisNexis will automatically share updates on the car as they come in, including maps and vehicle images. ICE investigators are allowed to place up to 2,500 different license plates onto their own watchlist simultaneously, the contract notes.

    ICE agents can also bring the LexisNexis car-tracking tech on the road through a dedicated smartphone app that allows them to, with only a few taps, snap a picture of someone’s plate to automatically place them on the watchlist. Once a plate of interest is snapped and uploaded, ICE agents then need only to wait for a convenient push notification informing them that there’s been activity detected about the car.
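
    The contract does not include technical specifications, but the workflow it describes (snap a plate, add it to a capped watchlist, get notified at the next sighting) can be illustrated with a short, hypothetical sketch. The 2,500-plate cap is taken from the contract; the class names, fields, and alert format are invented and are not LexisNexis’s system.

    ```python
    # Hypothetical sketch of the watchlist-and-alert workflow described in the
    # contract. The 2,500-plate cap comes from the contract document; the class,
    # fields, and alert format are invented.
    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    MAX_WATCHLIST_SIZE = 2500  # concurrent plates per investigator, per the contract


    @dataclass
    class PlateSighting:
        plate: str
        timestamp: datetime
        latitude: float
        longitude: float
        camera_id: str


    class PlateWatchlist:
        def __init__(self):
            self.plates = set()

        def add(self, plate: str) -> bool:
            # Refuse new entries once the contractual cap is reached.
            if len(self.plates) >= MAX_WATCHLIST_SIZE:
                return False
            self.plates.add(plate.upper())
            return True

        def check(self, sighting: PlateSighting) -> Optional[str]:
            # A hit on a watched plate produces the "push notification" payload.
            if sighting.plate.upper() in self.plates:
                return (f"ALERT {sighting.plate.upper()}: seen by camera "
                        f"{sighting.camera_id} at ({sighting.latitude}, "
                        f"{sighting.longitude}), {sighting.timestamp:%Y-%m-%d %H:%M}")
            return None


    watchlist = PlateWatchlist()
    watchlist.add("ABC1234")  # e.g., a plate photographed via the mobile app
    hit = watchlist.check(
        PlateSighting("abc1234", datetime(2023, 6, 1, 14, 30), 29.76, -95.37, "cam-17")
    )
    print(hit)
    ```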

    Combining the staggering number of plates with the ability to search them from anywhere provides a potent tool with little oversight, according to Tucker, of Georgetown Law.

    Tucker told The Intercept, “This begins to look a lot like indiscriminate, warrantless real-time surveillance capabilities for ICE with respect to any vehicle encountered by any agent in any context.”

    LexisNexis’s LEIDS program is, crucially, not an outlier in the United States. For-profit data brokers are increasingly tapped by law enforcement and intelligence agencies for both the vastness of the personal information they collect and the fact that this data can be simply purchased rather than legally obtained with a judge’s approval.

    “Today, in a way that far fewer Americans seem to understand, and even fewer of them can avoid, CAI includes information on nearly everyone,” warned a recently declassified report from the Office of the Director of National Intelligence on so-called commercially available information. Specifically citing LexisNexis, the report said the breadth of the information “could be used to cause harm to an individual’s reputation, emotional well-being, or physical safety.”

    While the ICE contract document is replete with mentions of how these tools will be used to thwart criminality — obscuring the extent to which this ends up deporting noncriminal migrants guilty of breaking only civil immigration rules — Tucker said the public should take seriously the inflated ambitions of ICE’s parent agency, the Department of Homeland Security.

    “What has happened in the last several years is that DHS’s ‘immigration enforcement’ activities have been subordinated to its mass surveillance activities,” Tucker said, “which produce opportunities for immigration enforcement but no longer have the primary purpose of immigration enforcement.”

    The federal government allows the general Homeland Security apparatus so much legal latitude, Tucker explained, that an agency like ICE is the perfect vehicle for indiscriminate surveillance of the general public, regardless of immigration status.

    “That’s not to say that DHS isn’t still detaining and deporting hundreds of thousands of people every year. Of course they are, and it’s horrific,” Tucker said. “But the main goal of DHS’s surveillance infrastructure is not immigration enforcement, it’s … surveillance.

    “Use the agency that operates with the fewest legal and political restraints to put everyone inside a digital panopticon, and then figure out who to target for what kind of enforcement later, depending on the needs of the moment.”

    The post LexisNexis Is Selling Your Personal Data to ICE So It Can Try to Predict Crimes appeared first on The Intercept.

  • A program spearheaded by the World Bank that uses algorithmic decision-making to means-test poverty relief money is failing the very people it’s intended to protect, according to a new report by Human Rights Watch. The anti-poverty program in question, known as the Unified Cash Transfer Program, was put in place by the Jordanian government.

    Having software systems make important choices is often billed as a means of making those choices more rational, fair, and effective. In the case of the poverty relief program, however, the Human Rights Watch investigation found the algorithm relies on stereotypes and faulty assumptions about poverty.

    “The problem is not merely that the algorithm relies on inaccurate and unreliable data about people’s finances,” the report found. “Its formula also flattens the economic complexity of people’s lives into a crude ranking that pits one household against another, fueling social tension and perceptions of unfairness.”

    The program, known in Jordan as Takaful, is meant to solve a real problem: The World Bank provided the Jordanian state with a multibillion-dollar poverty relief loan, but it’s impossible for the loan to cover all of Jordan’s needs.  

    Without enough cash to cut every needy Jordanian a check, Takaful works by analyzing the household income and expenses of every applicant, along with nearly 60 socioeconomic factors like electricity use, car ownership, business licenses, employment history, illness, and gender. These responses are then ranked — using a secret algorithm — to automatically determine which applicants are poorest and most deserving of relief. The idea is that such a sorting algorithm would direct cash to the Jordanians in the most dire need. According to Human Rights Watch, the algorithm is broken.
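
    Neither the government nor the World Bank has published the formula, so any code can only gesture at its general shape. The sketch below is a deliberately simplified, hypothetical weighted ranking with invented indicators and weights; it is not the Takaful algorithm, but it shows how penalizing electricity use or car ownership mechanically pushes a struggling household down the list.

    ```python
    # Deliberately simplified, hypothetical sketch: the real Takaful formula, its
    # 57 indicators, and their weights are secret, so everything below is invented
    # to show the general shape of a weighted means-testing rank.
    HYPOTHETICAL_WEIGHTS = {
        "monthly_income_jod": -0.50,  # higher reported income -> ranked less poor
        "electricity_kwh": -0.15,     # the proxy the report criticizes as misleading
        "owns_car": -20.0,            # car ownership is heavily penalized
        "household_size": 5.0,        # more dependents -> ranked poorer
    }


    def poverty_score(household: dict) -> float:
        """Higher score = ranked 'poorer' by the hypothetical formula."""
        return sum(
            weight * float(household.get(field, 0))
            for field, weight in HYPOTHETICAL_WEIGHTS.items()
        )


    def rank_applicants(households: list) -> list:
        # Applicants are pitted against one another: only the top of the ranking
        # receives cash, however needy those further down may be.
        return sorted(households, key=poverty_score, reverse=True)


    applicants = [
        {"id": "A", "monthly_income_jod": 120, "electricity_kwh": 300, "owns_car": 1, "household_size": 6},
        {"id": "B", "monthly_income_jod": 150, "electricity_kwh": 90, "owns_car": 0, "household_size": 3},
    ]
    for rank, household in enumerate(rank_applicants(applicants), start=1):
        print(rank, household["id"], round(poverty_score(household), 1))
    ```

    In this toy example, the larger household that owns a car it cannot afford to fuel is ranked “less poor” than its smaller neighbor purely because of the electricity and car penalties, which is the kind of distortion the report documents.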

    The rights group’s investigation found that car ownership seems to be a disqualifying factor for many Takaful applicants, even if they are too poor to buy gas to drive the car.

    Similarly, applicants are penalized for using electricity and water based on the presumption that their ability to afford utility payments is evidence that they are not as destitute as those who can’t. The Human Rights Watch report, however, explains that sometimes electricity usage is high precisely for poverty-related reasons. “For example, a 2020 study of housing sustainability in Amman found that almost 75 percent of low-to-middle income households surveyed lived in apartments with poor thermal insulation, making them more expensive to heat.”

    In other cases, one Jordanian household may be using more electricity than their neighbors because they are stuck with old, energy-inefficient home appliances.

    Beyond the technical problems with Takaful itself are the knock-on effects of digital means-testing. The report notes that many people in dire need of relief money lack the internet access to even apply for it, requiring them to find, or pay for, a ride to an internet café, where they are subject to further fees and charges to get online.

    “Who needs money?” asked one 29-year-old Jordanian Takaful recipient who spoke to Human Rights Watch. “The people who really don’t know how [to apply] or don’t have internet or computer access.”

    Human Rights Watch also faulted Takaful’s insistence that applicants’ self-reported income match up exactly with their self-reported household expenses, which “fails to recognize how people struggle to make ends meet, or their reliance on credit, support from family, and other ad hoc measures to bridge the gap.”

    The report found that the rigidity of this step forced people to simply fudge the numbers so that their applications would even be processed, undermining the algorithm’s illusion of objectivity. “Forcing people to mold their hardships to fit the algorithm’s calculus of need,” the report said, “undermines Takaful’s targeting accuracy, and claims by the government and the World Bank that this is the most effective way to maximize limited resources.”

    The report, based on 70 interviews with Takaful applicants, Jordanian government workers, and World Bank personnel, emphasizes that the system is part of a broader trend by the World Bank to popularize algorithmically means-tested social benefits over universal programs throughout the developing economies in the so-called Global South.

    Compounding the dysfunction of an algorithmic program like Takaful is the increasingly common and naïve assumption that automated decision-making software is so sophisticated that its results are unlikely to be faulty. Just as dazzled ChatGPT users often accept nonsense outputs from the chatbot because the concept of a convincing chatbot is so inherently impressive, artificial intelligence ethicists warn the veneer of automated intelligence surrounding automated welfare distribution leads to a similar myopia.

    The Jordanian government’s official statement to Human Rights Watch defending Takaful’s underlying technology provides a perfect example: “The methodology categorizes poor households to 10 layers, starting from the poorest to the least poor, then each layer includes 100 sub-layers, using statistical analysis. Thus, resulting in 1,000 readings that differentiate amongst households’ unique welfare status and needs.”

    When Human Rights Watch asked the Distributed AI Research Institute to review these remarks, Alex Hanna, the group’s director of research, concluded, “These are technical words that don’t make any sense together.” DAIR senior researcher Nyalleng Moorosi added, “I think they are using this language as technical obfuscation.”

    As is the case with virtually all automated decision-making systems, while the people who designed Takaful insist on its fairness and functionality, they refuse to let anyone look under the hood. Though Takaful is known to use 57 different criteria to rank poverty, the report notes that the Jordanian National Aid Fund, which administers the system, “declined to disclose the full list of indicators and the specific weights assigned, saying that these were for internal purposes only and ‘constantly changing.’”

    While fantastical visions of “Terminator”-like artificial intelligences have come to dominate public fears around automated decision-making, other technologists argue civil society ought to focus on real, current harms caused by systems like Takaful, not nightmare scenarios drawn from science fiction.

    So long as the inner workings of Takaful and its ilk remain government and corporate secrets, the extent of those risks will remain unknown.

    The post Algorithm Used in Jordanian World Bank Aid Program Stiffs the Poorest appeared first on The Intercept.

    The precise locations of the U.S. government’s high-tech surveillance towers along the U.S.-Mexico border are being made public for the first time as part of a mapping project by the Electronic Frontier Foundation.

    While the Department of Homeland Security’s investment of more than a billion dollars into a so-called virtual wall between the U.S. and Mexico is a matter of public record, the government does not disclose where these towers are located, despite privacy concerns of residents of both countries — and the fact that individual towers are plainly visible to observers. The surveillance tower map is the result of a year’s work steered by EFF Director of Investigations Dave Maass, who pieced together the constellation of surveillance towers through a combination of public procurement documents, satellite photographs, in-person trips to the border, and even virtual reality-enabled wandering through Google Street View imagery. While Maass notes the map is incomplete and remains a work in progress, it already contains nearly 300 current tower locations and nearly 50 more planned for the near future.

    As border surveillance towers have multiplied across the southern border, so too have they become increasingly sophisticated, packing a panoply of powerful cameras, microphones, lasers, radar antennae, and other sensors designed to zero in on humans. While early iterations of the virtual wall relied largely on human operators monitoring cameras, companies like Anduril and Google have reaped major government paydays by promising to automate the border-watching process with migrant-detecting artificial intelligence. Opponents of these modern towers, bristling with always-watching sensors, argue the increasing computerization of border security will lead inevitably to the dehumanization of an already thoroughly dehumanizing undertaking.

    While American border authorities insist that the surveillance net is aimed only at those attempting to illegally enter the country, critics like Maass say they threaten the privacy of anyone in the vicinity. According to a 2022 estimate by the EFF, “about two out of three Americans live within 100 miles of a land or sea border, putting them within Customs and Border Protection’s special enforcement zone, so surveillance overreach must concern us all.” Taking the towers out of abstract funding and strategy documents and sticking them onto a map of the physical world also punctures CBP’s typical defense against privacy concerns, namely that the towers are erected in remote areas and therefore pose a threat to no one but those attempting to break the law. In fact, “the placement of the towers undermines the myth that border surveillance only affects unpopulated rural areas,” Maass wrote of the map. “A large number of the existing and planned targets are positioned within densely populated urban areas.”

    The map itself serves as a striking document of the militarization of the U.S. border and domestic law enforcement, revealing a broad string of surveillance machines three decades in the making, stretching from the beaches of Tijuana to the southeastern extremity of Texas.

    In 1993, federal officials launched Operation Blockade, a deployment of 400 Border Patrol agents to the northern banks of the Rio Grande between El Paso and Ciudad Juárez. The aim of the “virtual wall,” as it was described at the time, was to push the ubiquitous unauthorized crossing of mostly Mexican laborers out of the city — where they disappeared into the general population and Border Patrol agents engaged in racial profiling to find them — and into remote areas where they would be easier to arrest. Similar initiatives, Operation Gatekeeper in San Diego, Operation Safeguard in southern Arizona, Operation Rio Grande in South Texas, soon followed.

    Though undocumented labor was essential to industries in the Southwest and had been for generations, an increasingly influential nativist wing of the Republican Party had found electoral success in attacking the Democrats and the Clinton administration for a purported disinterest in tackling lawbreaking in border cities. The White House responded by ordering the Pentagon’s Center for Low-Intensity Conflict, which had spent the previous decade running counterinsurgency campaigns around the world, as well as the now-defunct Immigration and Naturalization Service, to devise a tactical response to the president’s political problem.

    The answer was “prevention through deterrence,” a combination of militarization and surveillance strategy that remains the foundation for border security thinking in the U.S. to this day. Bill Clinton’s unusual team of immigration and counterinsurgency officials saw the inherent “mortal danger” of pushing migrants into remote, deadly terrain as a strategic advantage. “The prediction is that with traditional entry and smuggling routes disrupted, illegal traffic will be deterred or forced over more hostile terrain, less suited for crossing and more suited for enforcement,” the officials wrote in their 1994 national strategy plan. The architects of prevention through deterrence accepted that funneling migrants into the most remote landscapes in the country would have deadly consequences, noting, “Violence will increase as effects of the strategy are felt.”

    Violence did increase, albeit in the slow and agonizing form one finds in the desiccated washes of the Sonoran Desert and the endless chaparral fields of South Texas. Before prevention through deterrence, the medical examiner’s office in Tucson, Arizona, averaged roughly 12 migrant death cases a year. After the strategy went into effect, that number skyrocketed to 155.

    The September 11 attacks made the already deadly situation far worse. In Washington, the cliched quip that “border security is national security” led to the Department of Homeland Security, the largest reorganization of the federal government since the creation of the CIA and the Defense Department. With the Department of Homeland Security up and running, U.S. taxpayers began funneling more money into the nation’s border and immigration agencies than the FBI; Drug Enforcement Administration; and the Bureau of Alcohol, Tobacco, Firearms and Explosives combined. Immigration offenses became the most common charge on the federal docket. An unprecedented network of for-profit immigration jails went up across the country.

    On the border itself, a massive new industry of surveillance tech, much of it repurposed from the war on terror, was born. The more money the U.S. government poured into interdiction on the border, the more money there was to be made in evading the U.S. government. For migrants, hiring a smuggler became unavoidable. For smugglers, engaging with Mexican organized crime, many with links to Mexican government officials, became unavoidable. With organized crime involved, U.S. agencies called out for more resources. These dynamics have been extremely lucrative for an array of individuals and interests, while at the same time making human migration vastly more dangerous, radically altering life in border communities, and exacting a heavy toll on borderland ecosystems.


    A close-up shot of an Integrated Fixed Tower camera lens, reflecting the desert landscape it overlooks below Coronado Peak in Cochise County, Ariz. Photo: Electronic Frontier Foundation

    Surveillance towers have been a significant part of that vicious cycle, even though, as Maass’s EFF report notes, their efficacy is far less certain than their considerable price tag.

    Nobody can say for certain how many people have died attempting to cross the U.S.-Mexico border in the recent age of militarization and surveillance. Researchers estimate that the minimum is at least 10,000 dead in the past two and a half decades, and most agree that the true death toll is considerably higher.

    Sam Chambers, a researcher at the University of Arizona, studies the relationship between surveillance infrastructure and migrant deaths in the Sonoran Desert and has found the two inextricable from one another. While the purpose of surveillance towers in theory is to collect and relay data, Chambers argues that the actual function of towers in the borderlands is more basic than that. Like the agents deployed to the Rio Grande in Operation Blockade or a scarecrow in a field, the towers function as barriers pushing migrants into remote areas. “It’s made in a way to make certain places watched and others not watched,” Chambers told The Intercept. “It’s basically manipulating behavior.”

    “People cross in more remote areas away from the surveillance to remain undetected,” he said. “What it ends up doing is making the journeys longer and more difficult. So instead of crossing near a community, somebody is going to go through a mountain range or remote area of desert, somewhere far from safety. And it’s going to take them more energy, more time, much more exposure in the elements, and higher likelihood of things like hyperthermia.”

    Last year was the deadliest on record for migrants crossing the southern border. While the planet is already experiencing a level of human migration unlike anything in living memory, experts expect human movement across the globe to increase even further as the climate catastrophe intensifies. In the U.S., where the nation’s two leading political parties have offered no indication of a will to abandon their use of deadly landscapes as force multipliers on the border, the multidecade wave of dying shows no sign of stopping anytime soon.

    “They’ve been doing this, prevention through deterrence, since the ’90s,” Chambers said. “There’s nothing to suggest anybody’s trying to make this humane in any manner.”

    The post Mapping Project Reveals Locations of U.S. Border Surveillance Towers appeared first on The Intercept.

  • U.S. Special Operations Command, responsible for some of the country’s most secretive military endeavors, is gearing up to conduct internet propaganda and deception campaigns online using deepfake videos, according to federal contracting documents reviewed by The Intercept.

    The plans, which also describe hacking internet-connected devices to eavesdrop in order to assess foreign populations’ susceptibility to propaganda, come at a time of intense global debate over technologically sophisticated “disinformation” campaigns, their effectiveness, and the ethics of their use.

    While the U.S. government routinely warns against the risk of deepfakes and is openly working to build tools to counter them, the document from Special Operations Command, or SOCOM, represents a nearly unprecedented instance of the American government — or any government — openly signaling its desire to use the highly controversial technology offensively.

    SOCOM’s next-generation propaganda aspirations are outlined in a procurement document that lists capabilities it’s seeking for the near future and solicits pitches from outside parties that believe they’re able to build them.

    “When it comes to disinformation, the Pentagon should not be fighting fire with fire,” Chris Meserole, head of the Brookings Institution’s Artificial Intelligence and Emerging Technology Initiative, told The Intercept. “At a time when digital propaganda is on the rise globally, the U.S. should be doing everything it can to strengthen democracy by building support for shared notions of truth and reality. Deepfakes do the opposite. By casting doubt on the credibility of all content and information, whether real or synthetic, they ultimately erode the foundation of democracy itself.”

    Meserole added, “If deepfakes are going to be leveraged for targeted military and intelligence operations, then their use needs to be subject to review and oversight.”

    The pitch document, first published by SOCOM’s Directorate of Science and Technology in 2020, established a wish list of next-generation toys for the 21st century special forces commando, a litany of gadgets and futuristic tools that will help the country’s most elite soldiers more effectively hunt and kill their targets using lasers, robots, holographs, and other sophisticated hardware.

    Last October, SOCOM quietly released an updated version of its wish list with a new section: “Advanced technologies for use in Military Information Support Operations (MISO),” a Pentagon euphemism for its global propaganda and deception efforts.

    The added paragraph spells out SOCOM’s desire to obtain new and improved means of carrying out “influence operations, digital deception, communication disruption, and disinformation campaigns at the tactical edge and operational levels.” SOCOM is seeking “a next generation capability to collect disparate data through public and open source information streams such as social media, local media, etc. to enable MISO to craft and direct influence operations.”

    SOCOM typically fights in the shadows, but its public reputation and global footprint loom large. Composed of elite units from the Army, Marine Corps, Navy, and Air Force, SOCOM leads the most sensitive military operations of the world’s most lethal nation.

    While American special forces are widely known for splashy exploits like the Navy SEALs’ killing of Osama bin Laden, their history is one of secret missions, subterfuge, sabotage, and disruption campaigns. SOCOM’s “next generation” disinformation ambitions are only part of a long, vast history of deception efforts on the part of the U.S. military and intelligence apparatuses.

    Special Operations Command, which is accepting proposals on these capabilities through 2025, did not respond to a request for comment.

    Though Special Operations Command has for years coordinated foreign “influence operations,” these deception campaigns have come under renewed scrutiny. In December, The Intercept reported that SOCOM had convinced Twitter, in violation of its internal policies, to permit a network of sham accounts that spread phony news items of dubious accuracy, including a claim that the Iranian government was stealing the organs of Afghan civilians. Though the Twitter-based propaganda offensive didn’t make use of deepfakes, researchers found that Pentagon contractors employed machine learning-generated avatars to lend the fake accounts a degree of realism.

    Provocatively, the updated capability document reveals that SOCOM wants to boost these internet deception efforts with the use of “next generation” deepfake videos, an increasingly effective method of generating lifelike digital video forgeries using machine learning. Special forces would use this faked footage to “generate messages and influence operations via non-traditional channels,” the document adds.

    While deepfakes have largely remained fodder for entertainment and pornography, the potential for more dire applications is real. At the onset of Russia’s invasion of Ukraine, a shoddy deepfake of Ukrainian President Volodymyr Zelenskyy ordering troops to surrender began circulating on social media channels. Ethical considerations aside, the legality of using militarized deepfakes in a conflict remains an open question and is not addressed in the SOCOM document.

    As with foreign governmental “disinformation” campaigns, the U.S. has spent the past several years warning against the potent national security threat represented by deepfakes. The use of deepfakes to deliberately deceive, government authorities warn regularly, could have a deeply destabilizing effect on civilian populations exposed to them.

    At the federal level, however, the conversation has revolved exclusively around the menace foreign-made deepfakes might pose to the U.S., not the other way around. Previously reported contracting documents show SOCOM has sought technologies to detect deepfake-augmented internet campaigns, a tactic it now wants to deploy itself.

    Perhaps as provocative as the mention of deepfakes is the section that follows, which notes SOCOM wishes to finely tune its offensive propaganda seemingly by spying on the intended audience through their internet-connected devices.

    Described as a “next generation capability to ‘takeover’ Internet of Things (IoT) devices for collect [sic] data and information from local populaces to enable breakdown of what messaging might be popular and accepted through sifting of data once received,” the document says that the ability to eavesdrop on propaganda targets “would enable MISO to craft and promote messages that may be more readily received by local populace.” In 2017, WikiLeaks published pilfered CIA files that revealed a roughly similar capability to hijack household devices.

    The technology behind deepfake videos first arrived in 2017, spurred by a combination of cheap, powerful computer hardware and research breakthroughs in machine learning. Deepfake videos are typically made by feeding images of an individual to a computer and using the resultant computerized analysis to essentially paste a highly lifelike simulacrum of that face onto another.

    Once the software has been sufficiently trained, its user can crank out realistic fabricated footage of a target saying or doing virtually anything. The technology’s ease of use and increasing accuracy has prompted fears of an era in which the global public can no longer believe what it sees with its own eyes.

    Though major social platforms like Facebook have rules against deepfakes, given the inherently fluid and interconnected nature of the internet, Pentagon-disseminated deepfakes might also risk flowing back to the American homeland.

    “If it’s a nontraditional media environment, I could imagine the form of manipulation getting pretty far before getting stopped or rebuked by some sort of local authority,” Max Rizzuto, a deepfakes researcher with the Atlantic Council’s Digital Forensic Research Lab, told The Intercept. “The capacity for societal harm is certainly there.”

    SOCOM’s interest in deploying deepfake disinformation campaigns follows recent years of international anxiety about forged videos and digital deception from international adversaries. Though there’s scant evidence Russia’s efforts to digitally sway the 2016 election had any meaningful effect, the Pentagon has expressed an interest in redoubling its digital propaganda capabilities, lest it fall behind, with SOCOM taking on a crucial role.

    At an April 2018 hearing of the Senate Armed Services Committee, Gen. Kenneth Tovo of the Army Special Operations Command assured the assembled senators that American special forces were working to close the propaganda gap.

    “We have invested fairly heavily in our psy-op operators,” he said, “developing new capabilities, particularly to deal in the digital space, social media analysis and a variety of different tools that have been fielded by SOCOM that allow us to evaluate the social media space, evaluate the cyber domain, see trend analysis, where opinion is moving, and then how to potentially influence that environment with our own products.”

    While military propaganda is as old as war itself, deepfakes have frequently been discussed as a sui generis technological danger, the existence of which poses a civilizational threat.

    At a 2018 Senate Intelligence Committee hearing discussing the nomination of William Evanina to run the National Counterintelligence and Security Center, Sen. Marco Rubio, R-Fla., said of deepfakes, “I believe this is the next wave of attacks against America and Western democracies.” Evanina, in response, reassured Rubio that the U.S. intelligence community was working to counter the threat of deepfakes.

    The Pentagon is also reportedly hard at work countering the foreign deepfake threat. According to a 2018 news report, the Defense Advanced Research Projects Agency, the military’s tech research division, has spent tens of millions of dollars developing methods to detect deepfaked imagery. Similar efforts are underway throughout the Department of Defense.

    In 2019, Rubio and Sen. Mark Warner, D-Va., wrote 11 American internet companies urging them to draft policies to detect and remove deepfake videos. “If the public can no longer trust recorded events or images,” read the letter, “it will have a corrosive impact on our democracy.”

    Nestled within the National Defense Authorization Act for Fiscal Year 2021 was a directive instructing the Pentagon to complete an “intelligence assessment of the threat posed by foreign government and non-state actors creating or using machine-manipulated media (commonly referred to as ‘deep fakes’),” including “how such media has been used or might be used to conduct information warfare.”

    Just a couple years later, American special forces seem to be gearing up to conduct the very same.

    “It’s a dangerous technology,” said Rizzuto, the Atlantic Council researcher.

    “You can’t moderate this tech the way we approach other sorts of content on the internet,” he said. “Deepfakes as a technology have more in common with conversations around nuclear nonproliferation.”

    The post U.S. Special Forces Want to Use Deepfakes for Psy-ops appeared first on The Intercept.

  • Like many other modern American corporations, Google professes a deep commitment to protecting the environment and combating climate change from the very top: In a September 2020 blog post, CEO Sundar Pichai nodded to a “carbon-free future” and outlined a plan to tackle the company’s own emissions with great urgency.

    “The science is clear: The world must act now if we’re going to avert the worst consequences of climate change,” Pichai wrote, and that meant phasing out Google’s use of fossil fuels in favor of clean, renewable power.

    Two months later, Google announced it was partnering with Saudi Aramco. The internet giant maintains that the joint venture with Aramco — one of human history’s most prolific producers of oil and gas — is entirely green, but critics question whether it’s possible to work for a fossil fuel powerhouse without being complicit in the very dirty business of fossil fuels.

    Google went into PR cleanup mode following the announcement, dispatching Thomas Kurian, head of the highly lucrative cloud-computing division, to deny allegations of climate hypocrisy. Yes, Google works with Big Oil, Kurian told Bloomberg TV’s Emily Chang, “but to the environmentally clean or green parts of these companies.” He added, “We have said that again and again that we don’t work with the oil and gas division within Aramco.”

    Within months of Kurian’s denial, Aramco was using Google Cloud to transport methane gas more efficiently. When burned as fuel, methane is a leading source of carbon emissions.

    This past November, Google Cloud hosted an “emissions hackathon” at the offices of Schlumberger, a Houston oil field services company. The winning team was none other than six Aramco oil and gas data scientists who’d devised a method of using Google Cloud’s machine learning features to detect and repair leaks in methane gas pipelines.

    Google spokesperson Ted Ladd stood by the Aramco partnership in a statement to The Intercept, defending the collaboration as a means of helping Aramco “protect the environment.” The claim cuts to the heart of an ongoing debate among climate advocates and policymakers, in the halls of power and boardrooms alike: Can fossil fuel use be made meaningfully cleaner through technology, or does so-called decarbonization only “greenwash” the irredeemable pursuit of fossil fuels that must be cast aside to preserve life on Earth?

    Whatever the Google-Aramco project offers in terms of environmental protection, this much is clear: The joint venture will be lucrative, bringing Google’s sophisticated cloud computing services to Saudi Arabia, an estimated $10 billion market, through the construction of a vast data center in Dammam — the very place where Saudi oil was first discovered in 1938 and where it continues to be pumped out by Aramco today.

    The Decarbonization Myth

    The notion that a company of Google’s immense influence can work with a world historical exporter of hydrocarbon fuels while championing a “carbon-free future” is controversial.

    Some observers of such deals say that if Google is making money helping a firm like Aramco even slightly reduce its emissions, society stands to benefit. “The sooner the world ditches gas and goes 100% renewable, the better it is for the environment and public health, but the transition won’t happen overnight,” Johanna Neumann, a senior director with renewable energy advocacy group Environment America, told The Intercept. “In the immediate term, oil and gas companies need to be held accountable for their methane pollution and the sooner they find and seal methane leaks, the better.”

    Others are less hopeful. “These efforts certainly sound like they are driven by the bottom-line and not the desire to align businesses with carbon free investments,” said Gregory Trencher, a professor of environmental studies at Kyoto University. “Not touching fossil fuels in any shape or form is generally expected by many divestment players, so helping lower the carbon intensity of transport does not come across as a very impactful action.” Still, he added that “methane leaks in existing infrastructure are very large source of anthropogenic methane emissions. So this is a difficult debate.”

    Google’s claims to be helping Aramco decarbonize are complicated by the fact the Saudi-owned company has no apparent interest in doing so. Climate scientists routinely criticize Aramco’s green energy rhetoric as little more than talk, a PR smoke screen obscuring the firm’s role in perpetuating climate change — a role that’s made Aramco a $6.7 trillion company.

    A recent New York Times investigation into Aramco found the firm’s eco-friendly initiatives are only part of a broader strategy to keep the planet addicted to Saudi fossil fuels for decades to come. The Times report noted that by reducing peripheral emissions, like methane leaks, Aramco gains the credibility needed to publicly pledge it will itself stop emitting greenhouse gases by 2050. All the while, the theory goes, the massive emissions caused by the continued global consumption of its chief products — oil and gas — will be ignored. “People would like us to give up on investment in hydrocarbons. But no,” Aramco CEO Amin Nasser told the Times.

    Aramco even appears to be accelerating its oil and gas work, not moving away from it. According to a 2022 report by Oil Change International, Saudi Aramco greenlighted more new gas and oil projects than any other energy company last year; it is on track to rank No. 3 in the world in expanding its oil and gas operations through 2025.

    While Google Cloud software may do what Kurian, the head of the division, says and help Aramco leak less methane into the atmosphere, it could also be helping the company push back on the global scientific consensus that fuel pipelines need to be ditched, not patched, to avoid a climate catastrophe.

    “Kurian’s statement that his team is ‘helping oil and gas companies decarbonize in a variety of ways’ is concerning on two levels,” Collin Rees, a senior campaigner with anti-fossil fuel group Oil Change International, told The Intercept. “First, if so they’re pretty bad at their jobs — because we’ve seen almost no meaningful decarbonization in the sector — and second, it implies a continued existence for the sector or an ‘acceptable’ level of emissions, when we know that level must be near-zero.”

    Working on ostensibly “green” projects for an oil giant like Aramco allows the company to pretend there is such a thing as eco-friendly fossil fuel, added Kelly Trout, research co-director at Oil Change International. While Google might help Aramco plug leaks, it is also helping Aramco obscure its role in climate destruction. “The danger of what Google is doing lies in the company misleading the public that technology can render oil and gas ‘safe’ for the climate when only phasing it out can do that,” Trout said.

    In other words, Google may be making Aramco’s operations more sustainable in terms of withstanding public relations pressure, not emissions.

    There are further signs Google’s joint cloud venture is broadly courting the oil and gas sector; for one, the companies say so themselves. Google Cloud is sold to the Saudi market via CNTXT, a regional middleman Aramco co-founded with a Norwegian software company that works largely with oil and gas firms. CNTXT’s website directly advertises “cloud-driven digital transformation solutions for Public and Private Sectors and industrial digital transformation solutions [for] asset-heavy industries including: Oil And Gas.”

    In his statement to The Intercept, Ladd, the Google spokesperson, defended the entirety of the company’s work for Aramco, including the pipeline project. “This is entirely consistent with the type of work Google Cloud does with energy companies — in this case, helping them track emissions and gas leaks to protect the environment,” Ladd said. “We are not doing work in the exploration and production business with energy firms.”

    This is a purported change from Google’s previous business strategy of engaging directly with the extraction portion of the chain. A landmark 2020 Greenpeace investigation accused Google of “helping Big Oil profit from climate destruction,” pointing to the company’s open courtship of upstream business. Prior to 2019, a now-deleted section of Google’s website touted a variety of ways in which the company had helped oil firms pump more oil, like Chevron using “Google’s AutoML Vision AI tools to parse Chevron’s vast data sets and revisit potential subsurface deposits that were previously passed over due to inconclusive or hard to parse data.”

    Following the Greenpeace report, Google claimed it would no longer build custom artificial intelligence tools to aid drilling and pumping. The company remains, however, listed as a current member of the Open Subsurface Data Universe, a consortium of oil and tech companies that collaborate using data to improve oil and gas extraction. As of 2020, Google maintained a corporate email address specifically to field requests from consortium members, according to the OSDU’s official newsletter, “In The Pipeline.”

    Upstream Versus Downstream

    Google’s defense is based on drawing a careful distinction between so-called upstream processes — pumping of oil and gas out of the ground — and mid- or downstream work, where oil and gas are moved down the supply chain, refined, sold, and eventually burned as fuel. Just a month after the emissions hackathon in Houston, CNTXT CEO Abdullah Jarwan pitched Google Cloud at the Aramco 2022 Downstream Technology & Digital Excellence Awards, according to a LinkedIn post.

    Google’s contention that helping only to transport fossil fuels keeps its hands clean from the act of pumping them is a fallacy, according to Josh Eisenfeld of the environmental advocacy group Earthworks. After all, every ounce of methane gas that Google might spare from leaking into the atmosphere is still destined to be burned for fuel.

    “Anything that looks at a specific part of the production supply chain without looking at whole chain is perpetuating this disconnect that allows the chain to look cleaner than it is,” Eisenfeld told The Intercept in an interview. “It’s like saying we don’t support the sale of tobacco but helping the transportation of it. You’re still helping that industry look better and exist longer than it should.”

    Greenpeace campaigner Xueying Wu agreed, telling The Intercept that “Google’s collaboration with Aramco works against the company’s climate commitments,” and that “it remains unclear how this effort would be separate from Aramco’s oil and gas business. It’s like eating organic food at home while collecting dividends from a pesticide business – there is a contradiction that is impossible to ignore.”

    There are other indications that Google’s refusal to aid the upstream supply chain isn’t quite ironclad. According to a LinkedIn post by Kera Gautreau, a senior director with a Houston-based oil consortium that helped judge the emissions hackathon, the involved “teams utilized BlueSky Resources, LLC datasets and Google’s geospatial analytics and machine learning pipelines to solve big decarbonization challenges in upstream oil and gas.”

    In late November 2022, CNTXT hosted an informational event for “Aramco Affiliates” with the stated goal of “keeping our heads in the cloud by adopting Google Cloud.” Despite Kurian’s claims that his cloud division wouldn’t do business with the upstream “exploration and production” side of Aramco, the people doing that work appear to have been in attendance. The very first comment on CNTXT’s LinkedIn post (“The event was very informative”) is from Mazhar Saeed Siddiqui, whose profile lists him as an exploration system specialist at Saudi Aramco. Also in attendance was Asem Radhwi, who, according to his LinkedIn profile, spent 16 years as a petroleum engineering systems analyst at Aramco.

    The Phantom Division

    Part of Kurian’s attempt to distance Google from images of blackened oil fields relied on his claim that, while Google does business with Aramco, it isn’t with the part of the company trashing the planet. “We work with Aramco system integration division, not with the oil and gas division,” he told Bloomberg. “We have said that again and again that we don’t work with the oil and gas division within Aramco.” It’s an odd claim, muddled by the fact that there’s no evidence an Aramco system integration division has ever existed. The term appears nowhere on the company’s website, nor has it ever been mentioned in its press releases.

    When asked for specific information about this division, Ladd, the Google spokesperson, told The Intercept, “Thomas was referring to a division within Saudi Aramco.” Ladd did not point to anything specific. Aramco’s media relations office, through an unnamed spokesperson, provided only a link to the company’s “digital transformation” webpage, which gives a loose overview of how the company uses various technologies to aid its oil and gas business. To the extent Aramco has any business operations whatsoever outside of directly pumping, transporting, and selling fossil fuels, they appear to be almost entirely technologies designed to aid the pumping, transporting, and selling of fossil fuels, such as developing corrosion-resistant pipeline materials.

    In its original announcement of the joint venture, Aramco noted the deal was struck “between Saudi Aramco Development Company, a subsidiary of Aramco, and Google Cloud.” Even if this distinction were meaningful, it doesn’t help Kurian’s case. When the European Commission granted its approval for Cognite, the Norwegian software firm that co-founded CNTXT, to take part in the Saudi cloud deal, it described the Saudi Aramco Development Company as “engaged in the exploration, production and marketing of crude oil and in the production and marketing of refined products and petrochemicals.”

    Google’s Aramco justifications require thinking of the energy firm as something other than what it is: a machine engineered to extract, refine, and sell hydrocarbon fuels around the world. Polishing up a portion of this machine may make it seem cleaner, environmentalists argue, but obscures the contraption’s entire purpose.

    While it may be hard to walk away from billions of petrodollars in missed cloud services revenue, advocates like Earthworks’s Eisenfeld say there can be no compromise when it comes to averting worst-case climate disaster. “It’s hard to make the argument any oil and gas company can be part of the solution,” he said.

    The fundamental business of an entity like Aramco, Eisenfeld explained, is incompatible with the scientific climate consensus. If Aramco wants to provide a “carbon-free future” for the planet of the kind Google is attempting internally, it will have to dismantle its pipelines, not simply keep them from leaking.

    “If you’re talking about decarbonizing and not talking about decommissioning, you’re saying, ‘I’m going to drive the climate off a cliff,’” Eisenfeld said.

    As Google CFO Ruth Porat put it in 2019, “It should be the goal of every business to protect our planet.”

    The post Google Greenwashes a Dirty Partnership with Climate-Destroying Saudi Aramco appeared first on The Intercept.

  • A war between China and Taiwan would be extremely good for business at America’s Frontier Fund, a tech investment outfit whose co-founder and CEO sits on both the State Department Foreign Affairs Policy Board and President Joe Biden’s Intelligence Advisory Board, according to audio from a February 1 event.

    The remarks occurred at a tech finance symposium hosted at the Manhattan offices of Silicon Valley Bank. According to attendee Jack Poulson, head of the watchdog group Tech Inquiry, an individual who identified himself as “Tom” attended the event in place of Jordan Blashek, America’s Frontier Fund’s president and chief operating officer.

    Following the panel discussion, “Tom” spoke with a gaggle of other attendees and held forth on AFF’s investment in so-called choke points: sectors that would spike in value during a volatile geopolitical crisis, like computer chips or rare earth minerals. It turns out, according to audio published by Poulson, that a war in the Pacific would be tremendous for AFF’s bottom line.

    “If the China-Taiwan situation happens, some of our investments will 10x, like overnight,” the person who identified as “Tom” said. “So I don’t want to share the name, but the one example I gave was a critical component that … the total market value is $200 million, but it is a critical component to a $50 billion market cap. That’s like a choke point, right. And so if it’s only produced in China, for example, and there’s a kinetic event in the Pacific, that would 10x overnight, like no question about it. There’s a couple of different things like that.”

    AFF is surely not the only venture fund that would see stratospheric returns across its portfolio in the case of a destabilizing global crisis, like a “kinetic event in the Pacific” — that is to say, war. Unlike most other investment firms, though, AFF is closely tied to the upper echelons of American power, the very people who would craft any response to such a war.

    Gilman Louie, AFF’s co-founder and current CEO, serves as chair of the National Intelligence University, advises Biden through his Intelligence Advisory Board, and was tapped for the State Department’s Foreign Affairs Policy Board by Secretary of State Antony Blinken in 2022. Louie previously ran In-Q-Tel, the CIA’s venture capital arm.

    In other words, AFF stands to massively profit from a geopolitical crisis while its CEO advises the Biden administration on geopolitical crises. (America’s Frontier Fund did not respond to a request for comment.)

    AFF was founded in 2021, according to its website, “to build the companies, platforms, and capabilities that will generate once-in-a-generation returns for investors, while ensuring long-term economic competitiveness for the U.S. and its allies.” Last year, the New York Times reported the techno-nationalist fund had met with U.S. lawmakers to request a $1 billion injection. AFF currently leads the Quad Investor Network, a White House-sponsored alliance of investors from the so-called Quad, a geopolitical bloc made up of the U.S., Australia, India, and Japan and aimed at countering Chinese hegemony.

    The fund also has close ties to some of the American private sector’s most vocal and influential China hawks. AFF was founded with support from former Google CEO Eric Schmidt, whose closeness to Biden’s government is attracting growing scrutiny and skepticism, and investor Peter Thiel. Both men have business interests in national security and defense that stand to profit immensely from war in the Pacific, and both have advocated for a more hostile national stance toward China.

    Schmidt is particularly dedicated to China alarmism, having spent much of his post-Google career thus far drumming up anti-China tensions: first at the National Security Commission on Artificial Intelligence, which he chaired, and today through his new think tank, the Special Competitive Studies Project, which regularly depicts China as a direct threat to the United States.

    AFF’s own Schmidt connections run deep: Louie, the CEO, worked alongside Schmidt at the National Security Commission on Artificial Intelligence, while Blashek, the fund’s COO, was previously an executive at Schmidt’s philanthropic fund.

    The post White House-Linked Venture Capital Fund Boasts China War Would Be Great for Business appeared first on The Intercept.