Author: Sam Biddle

  • Before signing its lucrative and controversial Project Nimbus deal with Israel, Google knew it couldn’t control what the nation and its military would do with the powerful cloud-computing technology, a confidential internal report obtained by The Intercept reveals.

    The report makes explicit the extent to which the tech giant understood the risk of providing state-of-the-art cloud and machine learning tools to a nation long accused of systemic human rights violations and wartime atrocities. Not only would Google be unable to fully monitor or prevent Israel from using its software to harm Palestinians, but the report also notes that the contract could obligate Google to stonewall criminal investigations by other nations into Israel’s use of its technology. And it would require close collaboration with the Israeli security establishment — including joint drills and intelligence sharing — that was unprecedented in Google’s deals with other nations.

    A third-party consultant Google hired to vet the deal recommended that the company withhold machine learning and artificial intelligence tools from Israel because of these risk factors.

    Three international law experts who spoke with The Intercept said that Google’s awareness of the risks, and its foreknowledge that it could not conduct standard due diligence, could expose the company to legal liability. The rarely discussed question of legal culpability has grown in significance as Israel enters the third year of what has widely been acknowledged as a genocide in Gaza — with shareholders pressing the company to conduct due diligence on whether its technology contributes to human rights abuses.

    “They’re aware of the risk that their products might be used for rights violations,” said León Castellanos-Jankiewicz, a lawyer with the Asser Institute for International and European Law in The Hague, who reviewed portions of the report. “At the same time, they will have limited ability to identify and ultimately mitigate these risks.”

    Google declined to answer a list of detailed questions from The Intercept about the company’s visibility into Israel’s use of its services or what control it has over Project Nimbus.

    Company spokesperson Denise Duffy-Parkes instead responded with a verbatim copy of a statement that Google provided for a different article last year. “We’ve been very clear about the Nimbus contract, what it’s directed to, and the Terms of Service and Acceptable Use Policy that govern it. Nothing has changed.”

    Portions of the internal document were first reported by the New York Times, but Google’s acknowledged inability to oversee Israel’s usage of its tools has not previously been disclosed.

    In January 2021, just three months before Google won the Nimbus contract alongside Amazon, the company’s cloud computing executives faced a dilemma.

    The Project Nimbus contract — then code-named “Selenite” at Google — was a clear moneymaker. According to the report, which provides an assessment of the risks and rewards of this venture, Google estimated a bespoke cloud data center for Israel, subject to Israeli sovereignty and law, could reap $3.3 billion between 2023 and 2027, not only by selling to Israel’s military but also its financial sector and corporations like pharmaceutical giant Teva.

    But given decades of transgressions against international law by the Israeli military and intelligence forces it would be supplying, the company acknowledged that the deal was not without peril. “Google Cloud Services could be used for, or linked to, the facilitation of human rights violations, including Israeli activity in the West Bank,” resulting in “reputation harm,” the company warned.

    In the report, Google acknowledged the urgency of mitigating these risks, both to the human rights of Palestinians and Google’s public image, through due diligence and enforcement of the company’s terms of service, which forbid certain acts of destruction and criminality.

    But the report makes clear a profound obstacle to any attempt at oversight: The Project Nimbus contract is written in such a way that Google would be largely kept in the dark about what exactly its customer was up to, and should any abuses ever come to light, obstructed from doing anything about them.

    The document lays out the limitations in stark terms.

    Google would only be given “very limited visibility” into how its software would be used. The company was “not permitted to restrict the types of services and information that the Government (including the Ministry of Defense and Israeli Security Agency) chooses to migrate” to the cloud.

    Attempts to prevent Israeli military or spy agencies from using Google Cloud in ways damaging to Google “may be constrained by the terms of the tender, as Customers are entitled to use services for any reason except violation of applicable law to the Customer,” the document says. A later section of the report notes Project Nimbus would be under the exclusive legal jurisdiction of Israel, which, like the United States, is not a party to the Rome Statute and does not recognize the International Criminal Court.

    “Google must not respond to law enforcement disclosure requests without consultation and in some cases approval from the Israeli authorities, which could cause us to breach international legal orders / law.”

    Should Project Nimbus fall under legal scrutiny outside of Israel, Google is required to notify the Israeli government as early as possible, and must “Reject, Appeal, and Resist Foreign Government Access Requests.”

    Google noted this could put the company at odds with foreign governments should they attempt to investigate Project Nimbus. The contract requires Google to “implement bespoke and strict processes to protect sensitive Government data,” according to a subsequent internal report, also viewed by The Intercept, that was drafted after the company won its bid. This obligation would stand even if it means violating the law: “Google must not respond to law enforcement disclosure requests without consultation and in some cases approval from the Israeli authorities, which could cause us to breach international legal orders / law.”

    The second report notes another onerous condition of the Nimbus deal: Israel “can extend the contract up to 23 years, with limited ability for Google to walk away.”

    The initial report notes that Google Cloud chief Thomas Kurian would personally approve the contract with full understanding and acceptance of these risks before the company submitted its contract proposal. Google did not make Kurian available for comment.

    Business for Social Responsibility, a human rights consultancy tapped by Google to vet the deal, recommended the company withhold machine learning and AI technologies specifically from the Israeli military in order to reduce potential harms, the document notes. It’s unclear how the company could have heeded this advice considering the limitations in the contract. The Intercept in 2022 reported that Google Cloud’s full suite of AI tools was made available to Israeli state customers, including the Ministry of Defense.

    BSR did not respond to a request for comment.

    The first internal Google report makes clear that the company worried how Israel might use its technology. “If Google Cloud moves forward with the tender, we recommend the business secure additional assurances to avoid Google Cloud services being used for, or linked to, the facilitation of human rights violations.”

    It’s unclear if such assurances were ever offered.

    Related

    U.S. Companies Honed Their Surveillance Tech in Israel. Now It’s Coming Home.

    Google has long defended Project Nimbus by stating that the contract “is not directed at highly sensitive, classified or military workloads relevant to weapons or intelligence services.” The internal materials note that Project Nimbus will entail nonclassified workloads from both the Ministry of Defense and Shin Bet, the country’s rough equivalent of the FBI. Classified workloads, one report states, will be handled by a second, separate contract code-named “Natrolite.” Google did not respond when asked about its involvement in the classified Natrolite project.

    Both documents spell out that Project Nimbus entails a deep collaboration between Google and the Israeli security state through the creation of a Classified Team within Google. This team is made up of Israeli nationals within the company with security clearances, designed to “receive information by [Israel] that cannot be shared with [Google].” Google’s Classified Team “will participate in specialized training with government security agencies,” the first report states, as well as “joint drills and scenarios tailored to specific threats.”

    The level of cooperation between Google and the Israeli security state appears to have been unprecedented at the time of the report. “The sensitivity of the information shared, and general working model for providing it to a government agency, is not currently provided to any country by GCP,” the first document says.

    Whether Google could ever pull the plug on Nimbus for violating company rules or the law is unclear. The company has claimed to The Intercept and other outlets that Project Nimbus is subject to its standard terms of use, like any other Google Cloud customer. But Israeli government documents contradict this, showing that the use of Project Nimbus services is constrained not by Google’s normal terms but by a secret amended policy.

    A spokesperson for the Israeli Ministry of Finance confirmed to The Intercept that the amended Project Nimbus terms of use are confidential. Shortly after Google won the Nimbus contract, an attorney from the Israeli Ministry of Finance, which oversaw the deal, was asked by reporters if the company could ever terminate service to the government. “According to the tender requirements, the answer is no,” he replied.

    In its statement, Google points to a separate set of rules, its Acceptable Use Policy, that it says Israel must abide by. These rules prohibit actions that “violate or encourage the violation of the legal rights of others.” But the follow-up internal report suggests this Acceptable Use Policy is geared toward blocking illegal content like sexual imagery or computer viruses, not thwarting human rights abuses. Before the government agreed to abide by the AUP, Google wrote there was a “relatively low risk” of Israel violating the policy “as the Israel government should not be posting harmful content itself.” The second internal report also says that “if there is a conflict between Google’s terms” and the government’s requirements, “which are extensive and often ambiguous,” then “they will be interpreted in the way which is the most advantageous to the customer.”

    International law is murky when it comes to the liability Google could face for supplying software to a government widely accused of committing a genocide and responsible for the occupation of the West Bank that is near-universally considered illegal.

    Related

    Google Won’t Say Anything About Israel Using Its Photo Software to Create Gaza “Hit List”

    Legal culpability grows more ambiguous the farther you get from the actual act of killing. Google doesn’t furnish weapons to the military, but it provides computing services that allow the military to function — its ultimate function being, of course, the lethal use of those weapons. Under international law, only countries, not corporations, have binding human rights obligations. But if Project Nimbus were to be tied directly to the facilitation of a war crime or other crime against humanity, Google executives could hypothetically face criminal liability under customary international law or through a body like the ICC, which has jurisdiction in both the West Bank and Gaza.

    Civil lawsuits are another option: Castellanos-Jankiewicz imagined a scenario in which a hypothetical plaintiff with access to the U.S. court system could sue Google for monetary damages over Project Nimbus.

    Along with its work for the Israeli military, Google through Project Nimbus sells cloud services to Israel Aerospace Industries, the state-owned weapons maker whose munitions have helped devastate Gaza. Another confirmed Project Nimbus customer is the Israel Land Authority, a state agency that among other responsibilities distributes parcels of land in the illegally annexed and occupied West Bank.

    An October 2024 judicial opinion issued by the International Court of Justice, which adjudicates disputes between United Nations member states, urged countries to “take all reasonable measures” to prevent corporations from doing anything that might aid the illegal occupation of the West Bank. While nonbinding, “The advisory opinions of the International Court of Justice are generally perceived to be quite authoritative,” Ioannis Kalpouzos, a visiting professor at Harvard Law School and an expert on human rights law and the laws of war, told The Intercept.

    “Both the very existence of the document and the language used suggest at least the awareness of the likelihood of violations.”

    Establishing Google’s legal culpability in connection with the occupation of the West Bank or ongoing killing in Gaza entails a complex legal calculus, experts explained, hinging on the extent of its knowledge about how its products would be used (or abused), the foreseeability of crimes facilitated by those products, and how directly they contributed to the perpetration of the crimes. “Both the very existence of the document and the language used suggest at least the awareness of the likelihood of violations,” Kalpouzos said.

    While there have been a few instances of corporate executives facing local criminal charges in connection with human rights atrocities, liability stemming from a civil lawsuit is more likely, said Castellanos-Jankiewicz. A hypothetical plaintiff might have a case if they could demonstrate that “Google knew or should have known that there was a risk that this software was going to be used or is being used,” he explained, “in the commission of serious human rights violations, war crimes, crimes against humanity, or genocide.”

    Getting their day in court before an American judge, however, would be another matter. The 1789 Alien Tort Statute allows federal courts in the United States to take on lawsuits by foreign nationals regarding alleged violations of international law, but the statute has been narrowed considerably over the years, and whether U.S. corporations can even be sued under it in the first place remains undecided.

    History has seen scant few examples of corporate accountability in connection with crimes against humanity. In 2004, IBM Germany donated $4 million to a Holocaust reparations fund in connection with its wartime role supplying computing services to the Third Reich. In the early 2000s, plaintiffs in the U.S. sued dozens of multinational corporations for their work with apartheid South Africa, including the sale of “essential tools and services,” Castellanos-Jankiewicz told The Intercept, though these suits were thrown out following a 2016 Supreme Court decision. Most recently, Lafarge, a French cement company, pleaded guilty in both the U.S. and France following criminal investigations into its business in ISIS-controlled Syria.

    There is essentially no legal precedent as to whether the provision of software to a military committing atrocities makes the software company complicit in those acts. For any court potentially reviewing this, an important legal standard, Castellanos-Jankiewicz said, is whether “Google knew or should have known that its equipment that its software was being either used to commit the atrocities or enabling the commission of the atrocities.”

    The Nimbus deal was inked before Hamas attacked Israel on October 7, 2023, igniting a war that has killed tens of thousands of civilians and reduced Gaza to rubble. But that doesn’t mean the company wouldn’t face scrutiny for continuing to provide service. “If the risk of misuse of a technology grows over time, the company needs to react accordingly,” said Andreas Schüller, co-director of the international crimes and accountability program at the European Center for Constitutional and Human Rights. “Ignorance and an omission of any form of reaction to an increasing risk in connection with the use of the product leads to a higher liability risk for the company.”

    Though corporations are generally exempt from human rights obligations under international frameworks, Google says it adheres to the United Nations Guiding Principles on Business and Human Rights. The document, while voluntary and not legally binding, lays out an array of practices multinational corporations should follow to avoid culpability in human rights violations.

    Among these corporate responsibilities is “assessing actual and potential human rights impacts, integrating and acting upon the findings, tracking responses, and communicating how impacts are addressed.”

    The board of directors at Alphabet, Google’s parent entity, recently recommended voting against a shareholder proposal to conduct an independent third-party audit of the processes the company uses “to determine whether customers’ use of products and services for surveillance, censorship, and/or military purposes contributes to human rights harms in conflict-affected and high-risk areas.” The proposal cites, among other risk areas, the Project Nimbus contract. In rejecting the proposal, the board touted its existing human rights oversight processes and cited the U.N. Guiding Principles and Google’s “AI Principles” as evidence that no further oversight is necessary. In February, Google amended this latter document to remove prohibitions against weapons and surveillance.

    “The UN guiding principles, plain and simple, require companies to conduct due diligence,” said Castellanos-Jankiewicz. “Google acknowledging that they will not be able to conduct these screenings periodically flies against the whole idea of due diligence. It sounds like Google is giving the Israeli military a blank check to basically use their technology for whatever they want.”

    The post Google Worried It Couldn’t Control How Israel Uses Project Nimbus, Files Reveal appeared first on The Intercept.

  • Sen. Elizabeth Warren is calling for President Trump’s pick for under secretary of the Army to sell his stock in a defense contractor, a holding that experts say poses a clear conflict of interest.

    In a federal ethics agreement first reported by The Intercept, Michael Obadal, Trump’s pick for the second most powerful post at the Army, acknowledged holding equity in Anduril Industries, where he has worked for two years as an executive. Obadal said that, contrary to ethics norms, he will not divest his stock, which he valued at between $250,000 and $500,000.

    In a letter shared with The Intercept in advance of Obadal’s confirmation hearing Thursday, Warren, D-Mass., says Obadal must divest from Anduril, calling the current arrangement a “textbook conflict of interest.”

    “By attempting to serve in this role with conflicts of interest, you risk spending taxpayer dollars on wasteful DoD contracts that enrich wealthy contractors but fail to enhance Americans’ national security.”

    Warren, who sits on the Senate Armed Services Committee, writes that Obadal’s stock holdings “will compromise your ability to serve with integrity, raising a cloud of suspicion over your contracting and operational decision.”

    Related

    Trump’s Pick for a Top Army Job Works at a Weapons Company — And Won’t Give Up His Stock

    “By attempting to serve in this role with conflicts of interest, you risk spending taxpayer dollars on wasteful DoD contracts that enrich wealthy contractors but fail to enhance Americans’ national security,” Warren writes.

    A more detailed financial disclosure form obtained by The Intercept shows Anduril is not the full extent of Obadal’s military investments. According to this document, a retirement investment account belonging to Obadal holds stock in both General Dynamics, which does billions of dollars of business with the Army, and Howmet Aerospace, a smaller firm. While nominees are not required to list the precise value of such investments, Obadal says his holdings in General Dynamics and Howmet are worth between $2,000 and $30,000.

    Don Fox, former acting director of the U.S. Office of Government Ethics, told The Intercept that neither stock should be exempt from conflict of interest considerations under federal law. “The fact that they are within either a traditional or Roth IRA doesn’t impact the conflict of interest analysis,” he said. “Not sure why he would be allowed to keep those.”

    “A DoD contractor is a DoD contractor,” said Fox. “The degree of their business with DoD or what they do isn’t material. A lot of people were surprised for example that Disney was/is a DoD contractor. For a Senate confirmation position they would have had to divest.”

    In addition to these defense contractors, Obadal holds stock in other corporations that do business with the Pentagon, including Microsoft, Amazon, Thermo Fisher Scientific, and Cummins, which manufactures diesel engines for the Army’s Bradley Fighting Vehicle. None of these companies are mentioned in Obadal’s ethics letter detailing which assets he will and won’t dispose of if confirmed. In his more detailed disclosure document, known as a Form 278, Obadal explicitly notes he will be able to exercise his shares in Anduril “if there is an equity event such as the sale of the company, or the company becoming a publicly-traded entity,” potentially netting him a large payout. Anduril was most recently reported to be valued by private investors at over $28 billion.

    In addition to divesting from Anduril, Warren’s letter asks Obadal to get rid of the stocks in these other firms, commit to recusing himself entirely from any Anduril-related matters at the Army, and pledge to avoid working for or lobbying on behalf of the defense sector for a period of four years after leaving the Department of Defense. “By making these commitments, you would increase Americans’ trust in your ability to serve the public interest during your time at the Army,” Warren writes, “rather than the special interests of large DoD contractors.”

    The post Trump Army Appointee Should Sell His Anduril Stock, Sen. Warren Demands appeared first on The Intercept.

  • Trump’s nominee for under secretary of the Army, Michael Obadal, retired from a career in the Army in 2023, then spent the past two years working for Anduril, the ascendant arms maker with billions of dollars in Army contracts.

    If confirmed to the Pentagon post — often described as the “chief operating officer” position at the largest branch of the U.S. military — Obadal plans to keep his stock in Anduril, according to an ethics disclosure reviewed by The Intercept.

    “This is unheard of for a presidential appointee in the Defense Department to retain a financial interest in a defense contractor,” said Richard Painter, the top White House ethics lawyer during the George W. Bush administration. Painter said that while the arrangement may not be illegal, it certainly creates the appearance of a conflict of interest. Under the norms of prior administrations, Painter said, “nobody at upper echelons of the Pentagon would be getting anywhere near contracts if he’s sitting on a pile of defense contractor stock.”

    Obadal has been a senior director at Anduril since 2023, according to his LinkedIn profile, following a nearly 30-year career in the U.S. Army. While the revolving door between the Pentagon and defense industry is as old as both of those institutions, federal law and ethical norms require employees of the executive branch to unload financial interests and relationships that might create a conflict of interest in the course of their duties.

    Obadal’s April 11 financial disclosure letter, filed with the Office of Government Ethics, states “Upon confirmation, I will resign from my position as Senior Director at Anduril Industries” and forfeit his right to unvested stock options. But crucially, Obadal says he will retain his restricted stock units that have already vested — i.e., Anduril stock he already owns. That means he will continue to own a piece of the company, whose valuation has reportedly increased from $8.5 billion when Obadal joined to $28 billion today on the strength of its military contracts, and stands to materially benefit from anything that helps Anduril.

    Related

    Trump’s Election Is Also a Win for Tech’s Right-Wing “Warrior Class”

    In his ethics letter, Obadal says he “will not participate personally and substantially in any particular matter that to my knowledge has a direct and predictable effect on the financial interests of Anduril Industries” — unless he’s given permission by his boss, the secretary of the Army.

    Don Fox, former acting director of the Office of Government Ethics, told The Intercept Obadal’s Anduril shares could pose a clear conflict of interest if he is confirmed. “The general reason an appointee would be allowed to maintain a potentially conflicting interest is because divestiture is either not possible or highly impractical.” Anduril is privately held, meaning shares in the company can’t be quickly disposed of on the stock market.

    But Painter, the Bush-era ethics lawyer, suggests that Obadal could liquidate his stake in Anduril through the lively secondary market in its shares. “In the Bush years, we’d just say, ‘You’re not going to the Pentagon,’” said Painter.

    Fox said that if Obadal adheres to what’s in his ethics agreement and recuses himself from anything that touches Anduril, he will stay in compliance with federal law. “That’s going to be a pretty broad recusal,” added Painter, who speculated, “He’s going to have to recuse from any weapons systems that might use [Anduril] equipment, anything having to do with contracts, even competitor companies.”

    Fox, who spent decades as a lawyer at the Air Force and Navy, speculated that a vast recusal from budgetary matters “is feasible, but he’s going to have to be really scrupulous about it,” to the point of literally leaving the room whenever Anduril, its capabilities, or those of its competitors are discussed. “Once we get into areas that involve hardware and software, I’d say don’t even be in the room,” he said. “At a really senior level, people are not only looking for what you say but what you don’t say,” Fox added. “It poses a significant risk to him personally of crossing that line, no matter how scrupulous he may be.”

    William Hartung, a senior research fellow at the Quincy Institute for Responsible Statecraft who focuses on the U.S. arms industry, describes the situation as “the very definition of a conflict of interest” given the vast business interests between Obadal’s current and new employer. “The fact that the administration and the Congress have accepted this arrangement is a commentary on the sad state of ethics in Washington — an indication that too many of our elected officials won’t even try to take steps to make it harder to engage in corrupt practices,” Hartung added.

    As its second-highest ranking civilian at the Army, Obadal will have considerable sway over what weapons the Army purchases, what technologies it prioritizes, and when and how the U.S. wages war. Having a former employee and current shareholder in that position may prove lucrative for Anduril as the company seeks to add to its billions of dollars of federal contracts.

    In the past year alone, during Obadal’s time at the company, Anduril announced that it was taking over the Army’s Integrated Visual Augmentation System, a troubled $22 billion program intended to provide soldiers with augmented reality goggles, and that it would sell the Army components for its rocket artillery systems and a fleet of miniature “Ghost-X” helicopter drones. Anduril is also working on the Army’s TITAN system, a truck-mounted sensor suite, and the Army’s experimental Robotic Combat Vehicle program.

    Last year, DefenseOne reported that the Army’s “unfunded priorities” tech wishlist included “$4.5 million in research and development for Anduril’s Roadrunner-M drone interceptor.” Obadal described that jet-powered bomb drone in a LinkedIn post as “revolutionary.”

    The White House declined to comment on the ethics agreement and referred The Intercept to the Office of the Secretary of Defense, which also declined to comment and referred The Intercept to the Army, which referred The Intercept back to the White House. Neither Anduril nor Obadal responded to a request for comment.

    Related

    Defense Tech Startup Founded by Trump’s Most Prominent Silicon Valley Supporters Wins Secretive Military AI Contract

    Even amid a rightward turn for the tech industry, Anduril stands out for the MAGA alignment of its top executives and investors and their closeness to the Trump administration.

    In December, the New York Times reported Trump’s transition team offices were “crawling with executives from defense tech firms with close ties to Mr. Trump’s orbit,” including Anduril. The month before, Anduril co-founder and longtime Trump booster Palmer Luckey told Bloomberg he was already “in touch” with the incoming administration about impending nominees: “I don’t want to throw any (names) out there because I would be happy with all of them.”

    The post Trump’s Pick for a Top Army Job Works at a Weapons Company — And Won’t Give Up His Stock appeared first on The Intercept.

  • U.S. Immigration and Customs Enforcement just signed a contract worth $73 million with a firm whose executives were accused of taking part in a scheme to manufacture evidence against a co-worker during their time working at the Department of Homeland Security.

    According to a contract document reviewed by The Intercept, federal contractor Universal Strategic Advisors will provide services pertaining to ICE’s “non-detained docket,” a master list of millions of noncitizens believed to be removable from the United States but not yet in the agency’s custody.

    The contract cites President Donald Trump’s declaration of a national emergency on the U.S.-Mexico border, an overwhelming glut of potential deportees, and a shortage of officers to process them all as justification for hiring a private vendor to assist with the collection of biometric data, coordinating removals, and monitoring immigrant populations.

    The document says that with a fleet of new outsourced employees, ICE can reassign hundreds of officers to tasks that better align with Trump’s recent executive orders aimed at maximizing the agency’s detention and deportation operations. With the contractors onboard, the document says at least 675 ICE officers “will be able to take all appropriate actions to comply with the EO’s by prioritizing conducting at-large arrests, removals, and detention related activities.”

    A former ICE official, who spoke to The Intercept on the condition of anonymity, said they were concerned by this plan to further privatize the agency’s operations at the same time as the Trump administration has dramatically slashed its workforce and gutted important oversight bodies like the Office for Civil Rights and Civil Liberties, as well as the Office of the Immigration Detention Ombudsman. “I certainly take issue with them firing career feds and demolishing whole offices, just to hire contractors to do the same work, many of them who are former ICE employees now retired,” the official said.

    The responsibilities handed over to US Advisors are vast:

    “[Contractors] will manage field office alien check-ins, monitor immigration case statuses (and the outcome), assist with coordinating removals, update contact information to ensure that the alien can be located, respond to telephone calls, triage complaints and grievances, manage outreach mailboxes, enter data into ICE’s system of record, manage alien files, capture biometrics, organize and collect immigration related documents, field questions related to the immigration process, coordinate with ICE to assign aliens to an appropriate monitoring program, and notify ICE if someone is not complying with the terms of a conditional release or when someone is a risk to community safety.”

    “I don’t like, in general, to attach a profit motive to these inherently governmental services,” the former ICE official said, explaining that while the contract’s scope seems mostly administrative, the work in question has serious implications for millions living in the United States. “This is the backbone of decisions that are going to impact peoples’ lives; it’s a very high impact work stream.”

    Related

    The Unusual Nonprofit That Helps ICE Spy on Wire Transfers

    They also questioned the contract’s rationale of hiring private sector workers to handle administrative tasks in order to free up ICE officers to hit the streets. “If they’re just doing the arrests and they’re not following the case, not understanding the complexities, it gives the officers a much more limited view of the impact of their work. They’re not hearing when they’re talking about their kids, or why they might need to be released,” the ICE source explained.

    The procurement document notes that ICE is turning to US Advisors without conducting the typical competition for the business among other potential vendors, owing to, it says, the “emergency” conditions declared by Trump. “ICE would be unable to recruit, hire, vet, train, and deploy staff as quickly as a contractor can,” the notice reads. According to an April 9 filing, however, a rival vendor is protesting the contract with the Government Accountability Office, leaving the contract temporarily on hold.

     

    US Advisors has the right pedigree: The company has previously provided staffing support for ICE and is run by former Department of Homeland Security officials.

    But this executive team, while well-credentialed, has a checkered past: US Advisors CEO Brian DeMore and Chief Talent Officer David Marin were both named defendants in a lawsuit stemming from their time working at DHS. In 2019, former ICE officer Kui Myles filed suit alleging she was the victim of a scheme to manufacture criminal evidence against her after she complained of workplace harassment, resulting in her false arrest, false imprisonment, and invasion of privacy.

    Myles, a naturalized U.S. citizen born in China, further alleged she was subject to discrimination based on her national origin, and said her co-workers fabricated a report that she was illegally “housing Chinese nationals” as part of their conspiracy to discredit her. Myles alleged she was then placed under DHS surveillance, which revealed she was not in fact housing undocumented Chinese immigrants. At this point, Myles alleged that a campaign to essentially frame her for criminal wage theft was executed at the “direction and instigation” of ICE officials including DeMore, then an ICE assistant field office director, and Marin, at the time a deputy field office director. All told, Myles accused Marin and DeMore of engaging in a conspiracy to violate both her constitutional and civil rights under federal law.

    Myles’s lawsuit is ongoing. In 2022 the U.S. Court of Appeals for the 9th Circuit determined the litigation could continue, but a subsequent ruling dismissed the case against the individual defendants, including Marin and DeMore, on the grounds that they could not be sued in this context for their work as federal agents, while the larger case against the government continues.

    Neither ICE nor US Advisors responded to a request for comment.

    Certain ICE tasks struck this source as particularly unfit for outsourcing: “Dealing with grievances — what if it’s a grievance against the contractor? They want to stay on ICE’s good side, so they probably want to minimize grievances,” they explained. “You’re really going to contract out community safety decisions?”

    Privatization is not a novelty among federal agencies generally or ICE specifically. Trump’s deportation fixation has signaled a feeding frenzy for corporations like the private prison firm GEO Group, up for a $350 million DHS contract renewal, and Deployed Resources, which operates migrant detention camps and just won a $3.8 billion ICE contract. The source said, “This is the game at ICE — they all work with their old buddies.”

    Correction: April 26, 2025
    A previous version of this article incorrectly stated that Marin and DeMore were still being sued, when in fact the suit against them as individual defendants was dismissed in 2023. The article has been updated to reflect that fact.

    The post No-Bid ICE Contract Went to Former ICE Agents Sued for Fabricating Criminal Evidence on the Job appeared first on The Intercept.

  • Five years after Google Cloud CEO Thomas Kurian assured employees that the company was “not working on any projects associated with immigration enforcement at the southern border,” federal contract documents reviewed by The Intercept show that the tech giant is at the center of a project to upgrade the so-called virtual wall.

    U.S. Customs and Border Protection is planning to modernize older video surveillance towers in Arizona that provide the agency an unblinking view of the border. A key part of the effort is adding machine-learning capabilities to CBP cameras, allowing the agency to automatically detect humans and vehicles as they approach the border without continuous monitoring by humans. CBP is purchasing computer vision powers from two vendors, IBM and Equitus. Google, the documents show, will play a critical role stitching those services together by operating a central repository for video surveillance data.

    Related

    The U.S. Border Patrol and an Israeli Military Contractor Are Putting a Native American Reservation Under “Persistent Surveillance”

    The work is focused on older towers purchased from Israeli military defense contractor Elbit. In all, the document notes “50 towers with up to 100 cameras across 6 sites in the Tucson Sector” will be upgraded with machine learning capabilities.

    IBM will provide its Maximo Visual Inspection software, a tool the company generally markets for industrial quality control inspections — not tracking humans. Equitus is offering its Video Sentinel, a more traditional video surveillance analytics program explicitly marketed for border surveillance that, according to a recently removed promotional YouTube video featuring a company executive, can detect “people walking caravan style … some of them are carrying backpacks and being identified as mules.”

    “Within 60 days from the start of the project, real life video from the southern border is available to train and create AI/ML models to be used by the Equitus Video Sentinel.”

    Tying together these machine learning surveillance tools is Google, which the document reveals is supplying CBP with a cloud computing platform known as MAGE: the ModulAr Google Cloud Platform Environment. Based on the document, Google is providing a hub for video surveillance data and will directly host the Equitus AI analysis tool. It appears every camera in CBP’s Tucson Sector will pipe data into Google servers: “This project will focus initially on 100 simultaneous video streams from the data source for processing,” the document reads, and “the resulting metadata and keyframes will be sent to CBP’s Google Cloud.”

    A technical diagram in the document also notes that one of Google’s chief rivals, Amazon Web Services, provides CBP with unspecified cloud computing services.

    During President Trump’s first term, border surveillance and immigration enforcement work carried a stigma in the tech sector that it has since partly shed.

    Related

    Google AI Tech Will Be Used for Virtual Border Wall, CBP Contract Shows

    In 2020, The Intercept revealed a document produced by the CBP Innovation Team, known as INVNT, that stated Google Cloud services would be used in conjunction with AI-augmented surveillance towers manufactured by defense contractor Anduril: “Google Cloud Platform (GCP) will be utilized for doing innovation projects for C1’s INVNT team like next generation IoT, NLP (Natural Language Processing), Language Translation and Andril [sic] image camera and any other future looking project for CBP. The GCP has unique product features which will help to execute on the mission needs.” (A CBP spokesperson confirmed to The Intercept that “Andril” was a misspelling of Anduril.)

    After the Anduril work came to light, Google’s cloud computing chief Thomas Kurian quickly attempted damage control, directly contradicting the Department of Homeland Security and telling concerned employees that the company was not involved in immigration enforcement on the Mexican border, CNBC reported at the time. “We have spoken directly with Customs and Border Patrol and they have confirmed that they are not testing our products for those purposes,” Kurian added.

    If this was true then, it’s certainly not now; references to Google services appear repeatedly throughout the tower modernization project document. A technical diagram showing how video data flows between various CBP servers shows Google’s MAGE literally in the middle.

    Customs and Border Protection did not respond to a request for comment about its use of Google Cloud.

    Google did not respond to specific questions about the project, nor address Kurian’s prior denial.

    In a statement provided to The Intercept, Google Public Sector executive Jim Kelly attempted to distance the company slightly from the border surveillance work. “CBP has been public about how it has a multicloud strategy and has used Google Cloud for work like translation,” Kelly wrote. “In this case, Google Cloud is not on the contract. That said, customers or partners can purchase Google Cloud’s off-the-shelf compute, storage, and networking products for their own use, much like they might use a mobile network or run their own computer hardware.”

    Kelly’s statement indicates the government is acquiring Google Cloud services through a reseller, as is common in federal procurement. But Kelly’s comparison of Google Cloud technology to buying off-the-shelf computer hardware is misleading. Even if it’s supplied through a subcontractor or reseller, CBP’s use of Google’s service still requires a constant and ongoing connection to the company’s cloud infrastructure. Were Google still serious about “not working on any projects associated with immigration enforcement at the southern border,” as Kurian claimed in 2020, it would be trivial to prevent CBP from using Google Cloud.

    “Border communities end up paying the price with their privacy.”

    Industry advocates and immigration hard-liners have long touted the “virtual wall” initiative, which substitutes a 2,000-mile array of sensors, cameras, and computers for iron and concrete barriers and Border Patrol agents. But critics say advanced technology is no substitute for policy reforms.

    “On top of the wasted tax dollars, border communities end up paying the price with their privacy, as demonstrated by the recent findings by the Government Accountability Office that CBP had failed to implement six out of six key privacy policy requirements,” Dave Maass, director of investigations at the Electronic Frontier Foundation, told The Intercept, referring to the tower program’s dismal privacy protections record. “For more than two decades, surveillance towers at the border have proven to be a boondoggle, and adding AI isn’t going to make it any less of a boondoggle — it will just be an AI-powered boondoggle.”

    The post Google Is Helping the Trump Administration Deploy AI Along the Mexican Border appeared first on The Intercept.

  • Five years after Google Cloud CEO Thomas Kurian assured employees that the company was “not working on any projects associated with immigration enforcement at the southern border,” federal contract documents reviewed by The Intercept show that the tech giant is at the center of project to upgrade the so-called virtual wall.

    U.S. Customs and Border Protection is planning to modernize older video surveillance towers in Arizona that provide the agency an unblinking view of the border. A key part of the effort is adding machine-learning capabilities to CBP cameras, allowing the agency to automatically detect humans and vehicles as they approach the border without continuous monitoring by humans. CBP is purchasing computer vision powers from two vendors, IBM and Equitus. Google, the documents show, will play a critical role stitching those services together by operating a central repository for video surveillance data.

    Related

    The U.S. Border Patrol and an Israeli Military Contractor Are Putting a Native American Reservation Under “Persistent Surveillance”

    The work is focused on older towers purchased from Israeli military defense contractor Elbit. In all, the document notes “50 towers with up to 100 cameras across 6 sites in the Tucson Sector” will be upgraded with machine learning capabilities.

    IBM will provide its Maximo Visual Inspection software, a tool the company generally markets for industrial quality control inspections — not tracking humans. Equitus is offering its Video Sentinel, a more traditional video surveillance analytics program explicitly marketed for border surveillance that, according to a promotional YouTube video, recently taken offline, featuring a company executive, can detect “people walking caravan style … some of them are carrying backpacks and being identified as mules.”

    “Within 60 days from the start of the project, real life video from the southern border is available to train and create AI/ML models to be used by the Equitus Video Sentinel.”

    Tying together these machine learning surveillance tools is Google, which the document reveals is supplying CBP with a cloud computing platform known as MAGE: the ModulAr Google Cloud Platform Environment. Based on the document, Google is providing a hub for video surveillance data and will directly host the Equitus AI analysis tool. It appears every camera in CBP’s Tucson Sector will pipe data into Google servers: “This project will focus initially on 100 simultaneous video streams from the data source for processing,” the document reads, and “the resulting metadata and keyframes will be sent to CBP’s Google Cloud.”

    The diagram also notes that one of Google’s chief rivals, Amazon Web Services, provides CBP with unspecified cloud computing services.

    During President Trump’s first term, border surveillance and immigration enforcement work carried a stigma in the tech sector it has in part shed today.

    Related

    Google AI Tech Will Be Used for Virtual Border Wall, CBP Contract Shows

    In 2020, The Intercept revealed a document produced by the CBP Innovation Team, known as INVNT, that stated Google Cloud services would be used in conjunction with AI-augmented surveillance towers manufactured by defense contractor Anduril: “Google Cloud Platform (GCP) will be utilized for doing innovation projects for C1’s INVNT team like next generation IoT, NLP (Natural Language Processing), Language Translation and Andril [sic] image camera and any other future looking project for CBP. The GCP has unique product features which will help to execute on the mission needs.” (A CBP spokesperson confirmed to The Intercept that “Andril” was a misspelling of Anduril.)

    After the Anduril work came to light, Google’s cloud computing chief Thomas quickly attempted damage control, directly contradicting the Department of Homeland Security and telling concerned employees that the company was not involved in immigration enforcement on the Mexican border, CNBC reported at the time. “We have spoken directly with Customs and Border Patrol and they have confirmed that they are not testing our products for those purposes,” Kurian added.

    If this was true then, it’s certainly not now; references to Google services appear repeatedly throughout the tower modernization project document. A technical diagram showing how video data flows between various CBP servers shows Google’s MAGE literally in the middle.

    Customs and Border Protection did not respond to a request for comment about its use of Google Cloud.

    Google did not respond to specific questions about the project, nor address Kurian’s prior denial.

    In a statement provided to The Intercept, Google Public Sector executive Jim Kelly attempted to distance the company slightly from the border surveillance work. “CBP has been public about how it has a multicloud strategy and has used Google Cloud for work like translation,” Kelly wrote. “In this case, Google Cloud is not on the contract. That said, customers or partners can purchase Google Cloud’s off-the-shelf compute, storage, and networking products for their own use, much like they might use a mobile network or run their own computer hardware.”

    Kelly’s statement indicates the government is acquiring Google Cloud services through a reseller, as is common in federal procurement. But Kelly’s comparison of Google Cloud technology to buying off-the-shelf computer hardware is misleading. Even if it’s supplied through a subcontractor or reseller, CBP’s use of Google’s service still requires a constant and ongoing connection to the company’s cloud infrastructure. Were Google still serious about “not working on any projects associated with immigration enforcement at the southern border,” as Kurian claimed in 2020, it would be trivial to prevent CBP from using Google Cloud.

    “Border communities end up paying the price with their privacy.”

    Industry advocates and immigration hard-liners have long touted the “virtual wall” initiative, which replaces iron and concrete barriers and Border Patrol agents with a 2,000-mile array of sensors, cameras, and computers. But critics say advanced technology is no substitute for policy reforms.

    “On top of the wasted tax dollars, border communities end up paying the price with their privacy, as demonstrated by the recent findings by the Government Accountability Office that CBP had failed to implement six out of six key privacy policy requirements,” Dave Maass, director of investigations at the Electronic Frontier Foundation, told The Intercept, referring to the tower program’s dismal privacy protections record. “For more than two decades, surveillance towers at the border have proven to be a boondoggle, and adding AI isn’t going to make it any less of a boondoggle — it will just be an AI-powered boondoggle.”

    The post Google Is Helping the Trump Administration Deploy AI Along the Mexican Border appeared first on The Intercept.

  • One week after Hamas’s October 7 attack, thousands rallied outside the Israeli Consulate in Los Angeles to protest the country’s retaliatory assault on Gaza. The protesters were peaceful, according to local media, “carrying signs that said ‘Free Palestine’ and ‘End the Occupation,’” and watched over by a “sizable police presence in the area.” The LAPD knew the protests were coming: Two days earlier, the department received advance warning via Dataminr, a social media surveillance firm and “official partner” of X.

    Internal Los Angeles Police Department emails obtained via public records request show city police used Dataminr to track Gaza-related demonstrations and other constitutionally protected speech. The department receives real-time alerts from Dataminr not only about protests in progress but also about upcoming demonstrations. Police were tipped off about protests in the Los Angeles area and across the country. On at least one occasion, the emails show, a Dataminr employee contacted the LAPD directly to inform officers of a protest being planned that apparently hadn’t been picked up by the company’s automated scanning.

    Based on the records obtained by The Intercept, which span October 2023 to April 2024, Dataminr alerted the LAPD of more than 50 different protests, including at least a dozen before they occurred.

    It’s unclear whether the LAPD used any of these notifications to inform its response to the wave of pro-Palestine protests that spread across Southern California over the last two years, which have resulted in hundreds of arrests.

    Neither the LAPD nor Dataminr responded to a request for comment.

    “They are using taxpayer money to enlist companies to conduct this surveillance on social media.”

    Privacy and civil liberties experts argue that police surveillance of First Amendment activity from afar has a chilling effect on political association, discourse, and dissent.

    “Police departments are surveilling protests which are First Amendment protected political activity about a matter of public importance,” Jennifer Granick, an attorney with the American Civil Liberties Union’s Speech, Privacy, and Technology Project, told The Intercept. “They are using taxpayer money to enlist companies to conduct this surveillance on social media. This is especially worrisome now that the Administration is targeting Gaza protesters for arrest and deportation based on protected activity.”

    The alerts began pouring in on October 9, when Dataminr flagged a “Protest mentioning Israel” blocking traffic in Beverly Hills, citing a tweet. Over the course of the month, Dataminr tipped off the LAPD to six different protests against the war across Los Angeles. These alerts included information about protests already in progress and information about the time and place of at least one LA protest planned for a future date.

    Emails produced by the LAPD in response to The Intercept’s records request show that, along with its regular feed of information about constitutionally protected speech, Dataminr also provides the department with alerts curated through feeds with titles like “Domestic Demonstrations Awareness,” “LA demonstrations,” “LA unrest,” and “demonstrations,” indicating the department proactively monitors First Amendment gatherings using the platform.

    The department also began receiving a regular flow of alerts about protests thousands of miles away, including a “protest mentioning Palestinian territories outside the Consulate General of Israel” in Chicago, and tweets from journalist Talia Jane, who was providing real-time updates on an antiwar rally in New York City.

    Jane told The Intercept that she objects to the monitoring of her reporting by police, and also said Dataminr’s summaries of her posts were at times inaccurate. In one instance, she says, Dataminr attributed a Manhattan road closure to protesters, when it had in fact been closed by the NYPD. “It’s absurd any agency would spend money on a service that is apparently completely incapable of parsing information correctly,” she said, adding that “the surveillance of journalists’ social media to suppress First Amendment activity is exactly why members of the press have a responsibility to ensure their work is not used to harm people.”

    On October 17, Dataminr sent an “urgent update” to the department warning of a “Demonstration mentioning Palestinian territories planned for today at 17:00 in Rittenhouse Square area of Philadelphia,” based on a tweet. Three days later, a similar update noted another “Demonstration mentioning Palestinian territories” planned for Boston’s Copley Square. Another warned of a “protest mentioning Palestinian territories” in the planning stages at the Oregon State Capitol. It’s unclear if the department intended to cast such a wide net, or if the out-of-state protest alerts were sent in error. Dataminr’s threat notifications are known to turn up false positives; multiple tweets by angry Taylor Swift fans aimed at Ticketmaster were forwarded to the LAPD as “L.A. Threats and Disruptions,” the records show.

    Materials obtained by The Intercept also show that despite Dataminr’s marketing claims of being an “AI” intermediary between public data and customers, the firm has put its human fingers on the scales. On October 12, a Dataminr account manager emailed three LAPD officers, whose names are redacted, with the subject line “FYSA,” military shorthand meaning “for your situational awareness.” The email informed the officers of a “Protest planned for October 14 at 12:30 at Consulate General of Israel in Los Angeles,” with a link to a tweet by a Los Angeles university professor. It’s unclear if the LAPD has requested these manual tip-offs from Dataminr, or whether such personal service is routine; Dataminr did not respond when asked if it was a standard practice. But the hands-on approach undercuts Dataminr’s prior claims that it just passively provides alerts to customers about social media speech germane to their interests.

    A company spokesperson previously told The Intercept that “Every First Alert user has access to the exact same alerts and can choose to receive the alerts most relevant to them.”

    Dataminr pitches its clients across the private and public sectors a social media superpower: What if you had immediate access to every tweet relevant to your interests — without even having to conduct a search? The company, founded in 2009 and valued at over $4 billion, claims a wide variety of customers, from media newsrooms to government agencies, including lucrative federal contracts with the Department of Defense. It has also found an avid customer base in law enforcement. While its direct access to Twitter has been a primary selling point, Dataminr also scours apps like Snap and Telegram.

    The company — which boasts both Twitter and the CIA as early investors — pitches its “First Alert” software platform as a public safety-oriented newsfeed of breaking events.

    It has for years defended its police work as simply news reporting, arguing it can’t be considered a surveillance tool because the information relayed to police is public and differs in no way from what an ordinary user browsing social media could access.

    Privacy advocates and civil libertarians have countered that the software provides the government with visibility that far surpasses what any individual user or even team of human officers could accomplish. Indeed, Dataminr’s own law enforcement marketing materials claim “30k people working 24/7 would only process 1% of all the data Dataminr ingests each day.” 

    Related

    U.S. Marshals Spied on Abortion Protesters Using Dataminr

    The company has this power because of its long-standing “official partner” status with both Twitter and now X. Dataminr purchases access to the platform’s data “firehose,” allowing it to query and scan every single post on behalf of clients in real time.
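    At a toy scale, the sketch below filters a simulated firehose of posts against per-client topic rules. It is a rough, hypothetical approximation of the kind of keyword-based scanning described here, not Dataminr’s actual code; the client names and keyword lists are invented for illustration.

```python
# Minimal, hypothetical sketch of firehose-style filtering: every incoming
# post is checked against per-client topic rules, and matches are surfaced
# as alerts. Client names and keywords are invented for illustration; this
# is not Dataminr's actual implementation.
from typing import Iterable, Iterator

CLIENT_RULES = {
    "client_a": {"protest", "demonstration", "rally"},
    "client_b": {"outage", "fire", "explosion"},
}

def match_clients(post: str) -> list[str]:
    """Return every client whose keyword set intersects the post's words."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return [client for client, terms in CLIENT_RULES.items() if words & terms]

def alert_stream(firehose: Iterable[str]) -> Iterator[tuple[str, str]]:
    """Yield (client, post) pairs for every post matching a client's rules."""
    for post in firehose:
        for client in match_clients(post):
            yield client, post

if __name__ == "__main__":
    sample_firehose = [
        "Protest planned for Saturday outside city hall",
        "Great concert last night!",
        "Power outage reported downtown",
    ]
    for client, post in alert_stream(sample_firehose):
        print(f"ALERT for {client}: {post}")
```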

    Previous reporting by The Intercept has shown Dataminr has used this privileged access to surveil abortion rights rallies, Black Lives Matter protests, and other constitutionally protected speech on behalf of both local and federal police. Dataminr sources told The Intercept in 2020 how the company’s human analysts, helping tailor the service to its various police and military customers, at times demonstrated implicit biases in their work — an allegation the company denied.

    Both in its previous incarnation as Twitter, before its purchase by Elon Musk, and today as X, the social media platform has for years expressly prohibited third parties from using its user data for “monitoring sensitive events (including but not limited to protests, rallies, or community organizing meetings),” per its terms of service. Both the platform and Dataminr have previously claimed that Dataminr’s service by definition cannot be considered surveillance because it is applied against public discourse; critics have often pointed out that while posts are technically public, only a company with data access as powerful as Dataminr’s would ever be able to find and flag all of these specific posts amid hundreds of millions of others. Neither company has directly addressed how Dataminr’s monitoring of protests is compatible with Twitter and X’s explicit prohibition against monitoring protests.

    Neither X nor Dataminr responded when asked about this contradiction.

    Related

    Why Trump Is So Desperate to Keep Mahmoud Khalil in Louisiana

    While Dataminr’s monitoring of campus protests began before the second Trump administration, it has taken on greater significance now given the White House’s overt attempts to criminalize speech critical of Israel and the war in Gaza. Earlier this month, former Columbia graduate student Mahmoud Khalil, who helped organize Columbia University’s student protests against the war, was abruptly arrested and jailed by plainclothes Immigration and Customs Enforcement officers. The State Department and White House quickly confirmed the arrest was a function of Khalil’s antiwar protest efforts, which the administration has described without evidence or explanation as “aligned to Hamas.” The White House has pledged to arrest and deport more individuals who have taken part in similar campus protests against the war.

    Civil libertarians have long objected to dragnet monitoring of political speech on the grounds that it will have a chilling effect on speech guaranteed by the First Amendment. While fires, shootings, and natural disasters are of obvious interest to police, these critics frequently argue that if people know their tweets are subject to police scrutiny without any evidence of wrongdoing, they may tend to self-censor. 

    “Political action supporting any kind of government-disfavored viewpoint could be subject to the same over-policing: gun rights, animal rights, climate change are just a few examples,” the ACLU’s Granick added. “Law enforcement should leave online organizing alone.”

    The post LAPD Surveilled Gaza Protests Using This Social Media Tool appeared first on The Intercept.

  • The Trump administration is planning to sell a major IRS computing center crucial to processing the tax returns of millions of Americans — just in time for tax season.

    The IRS Enterprise Computing Center in Martinsburg, West Virginia, is included on a list of over 400 “empty and underutilized” federal properties marked for liquidation. It is one of two agency data facilities that make possible the collection of federal taxes in the United States. The Martinsburg data center has for decades housed the IRS “Master File,” an authoritative national record of tax return data and tax status for every tax-paying American individual and corporation, containing a historical computerized archive of every return and refund.

    While Martinsburg’s systems are vital around tax season — exactly when the Trump administration has ostensibly put it up for sale — its databases are queried year-round.

    On Tuesday, the Martinsburg center was flagged by the General Services Administration as one of hundreds of “noncore” facilities that should be sold off to save the federal government money. “Decades of funding deficiencies have resulted in many of these buildings becoming functionally obsolete and unsuitable for use by our federal workforce,” the GSA noted. But just last year, a GSA work order for roof repairs said the exact opposite about the Martinsburg facility, describing it as “a critical component of IRS’s operations, which, during peak season, processes over 13 million tax returns each day. Due to the continuous operations year-round and critical mission performed within, this project is viewed as a high priority.”

    It’s unclear if the administration intends to shutter the installation or eventually lease it back from private owners. Neither the GSA nor IRS immediately responded to a request for comment.

    Shortly after the GSA published the list of unwanted federal properties, it quickly amended and then deleted it entirely, leaving agencies in a state of confusion and disarray. On Wednesday, the Washington Post reported that while the sales process had been paused, “the plan is still to dispose of the buildings.”

    Travis Thompson, a tax attorney with Boutin Jones and an expert on IRS technology practices, told The Intercept the Martinsburg computing center is, contrary to the GSA’s new claim, absolutely mission-critical infrastructure.

    “It goes to the very backbone of what the IRS does,” Thompson said.

    The data housed at Martinsburg is also regularly tapped for internal investigations to ferret out fraud, Thompson said. He speculated that, should the sale go through, the facility would likely either be sold to private owners and leased back to the federal government, or be shuttered entirely. Owing to the extremely sensitive nature of the tax records held there, Martinsburg has always been a “super high-security facility,” and housing these computer systems under privatized ownership “does raise questions about protecting taxpayer data and the privacy of taxpayer data,” Thompson said. An interruption of service caused by a change in ownership could mean widespread disruption for the IRS and American taxpayers.

    Related

    The IRS Is Buying an AI Supercomputer From Nvidia

    In February, The Intercept reported that the IRS was purchasing a multimillion-dollar Nvidia AI supercomputing cluster which was to be installed at Martinsburg.

    In a statement to The Intercept, Sen. Ron Wyden, D-Ore., suggested private ownership is more likely.

    “If the Trump administration really sold this site, the IRS data system would be down to a single backup facility in Memphis, and all it would take to knock the entire agency offline is one hack or power outage. It’d be an economic disaster,” said Wyden, ranking member of the Senate Finance Committee. “That said, the likely story here is that Trump and Musk want to help a bunch of vultures plunder the country in a rent-seeking scheme, and after they sell off essential sites like this IRS facility, American taxpayers are going to be on the hook paying rent for real estate they should rightfully own.”

    The post It’s Tax Season — The Perfect Time for Trump to Sell This “Critical” IRS Computing Center appeared first on The Intercept.

  • As the Trump administration and its cadre of Silicon Valley machine-learning evangelists attempt to restructure the administrative state, the IRS is preparing to purchase advanced artificial intelligence hardware, according to procurement materials reviewed by The Intercept.

    With Elon Musk’s so-called Department of Government Efficiency installing itself at the IRS amid a broader push to replace federal bureaucracy with machine-learning software, the tax agency’s computing center in Martinsburg, West Virginia, will soon be home to a state-of-the-art Nvidia SuperPod AI computing cluster. According to the previously unreported February 5 acquisition document, the setup will combine 31 separate Nvidia servers, each containing eight of the company’s flagship Blackwell processors designed to train and operate artificial intelligence models that power tools like ChatGPT.

    The hardware has not yet been purchased and installed, nor is a price listed, but SuperPod systems reportedly start at $7 million. The setup described in the contract materials notes that it will include a substantial memory upgrade from Nvidia.

    Though small compared to the massive AI-training data centers deployed by companies like OpenAI and Meta, the SuperPod is still a powerful and expensive setup using the most advanced technology offered by Nvidia, whose chips have facilitated the global machine-learning spree. While the hardware can be used in many ways, it’s marketed as a turnkey means of creating and querying an AI model. Last year, the MITRE Corporation, a federally funded military R&D lab, acquired a $20 million SuperPod setup to train bespoke AI models for use by government agencies, touting the purchase as a “massive increase in computing power” for the United States.

    How exactly the IRS will use its SuperPod is unclear. An agency spokesperson said the IRS had no information to share on the supercomputer purchase, including which presidential administration ordered it. A 2024 report by the Treasury Inspector General for Tax Administration identified 68 different AI-related projects underway at the IRS; the Nvidia cluster is not named among them, though many were redacted.

    But some clues can be gleaned from the purchase materials. “The IRS requires a robust and scalable infrastructure that can handle complex machine learning (ML) workloads,” the document explains. “The Nvidia Super Pod is a critical component of this infrastructure, providing the necessary compute power, storage, and networking capabilities to support the development and deployment of large-scale ML models.”

    The document notes that the SuperPod will be run by the IRS Research, Applied Analytics, and Statistics division, or RAAS, which leads a variety of data-centric initiatives at the agency. While no specific uses are cited, it states that this division’s Compliance Data Warehouse project, which is behind this SuperPod purchase, has previously used machine learning for automated fraud detection, identity theft prevention, and generally gaining a “deeper understanding of the mechanisms that drive taxpayer behavior.”

    “The IRS has probably more proprietary data than most agencies that is totally untapped.”

    It’s unclear from the document whether the SuperPod purchase had been planned under the Biden administration or if it represents a new initiative of the Trump administration.

    Some funding from the 2022 Inflation Reduction Act was earmarked for upgrading IRS technology generally, said Travis Thompson, a tax attorney at Boutin Jones with expertise in IRS AI strategy. But “the IRS has been going toward AI for quite some time prior to IRA funding,” Thompson explained. “They didn’t have enough money to properly enforce the tax code, they were looking for ways to do more with less.” A June 2024 Government Accountability Office report suggested the IRS use artificial intelligence-based software to retrieve “hundreds of billions of dollars [that] are potentially missing from what should be collected in taxes each year.”

    Thompson added that the agency is ripe for machine-learning training because of the mountain of personal and financial data it sits atop. “The IRS has probably more proprietary data than most agencies that is totally untapped. When you look at something like this Nvidia cluster and training machine learning algorithms going forward, it makes perfect sense, because they have the data there. AI needs data. It needs lots of it. And it needs it quickly. And the IRS has it.”

    Related

    Trump’s Election Is Also a Win for Tech’s Right-Wing “Warrior Class”

    The purchase comes at a crossroads for U.S. governance of artificial intelligence tech. In Trump’s first term, the RAAS office was assigned “responsibility for monitoring and overseeing AI at the IRS” under Executive Order 13960, which he signed in December 2020, shortly before leaving office. This executive order put an emphasis on the “responsible,” “safe” implementation of AI by the United States — an approach that has fallen out of favor with American tech barons who now advocate for the breakneck development of these technologies unburdened by consideration of ethics or risk. One of Trump’s first moves following his inauguration was reversing a Biden administration executive order calling for greater AI safety guardrails in government use.

    Many in the AI industry for whom “safe AI” is now anathema, such as Elon Musk and venture capitalist Marc Andreessen, have become close allies of the new Trump White House. This wing of Silicon Valley has reportedly pushed the new administration to leverage artificial intelligence to help dismantle the administrative state via automation.

    This week, the Wall Street Journal reported Musk’s liquidators had arrived at the IRS, an agency long the target of disparagement and distortion by Trump and Republican allies. Days before, the New York Times reported, “Representatives from the so-called Department of Government Efficiency have sought information about the tax collector’s information technology, with a goal of automating more work to replace the need for human staff members.”

    The IRS has in recent years increasingly turned to AI for automated fraud detection and chatbot-based support services — including through collaboration with Nvidia — but a new Nvidia supercomputer could also be a boon to those interested in shrinking the agency’s human headcount as much as possible. A February 8 report by the Washington Post quoted an unnamed federal official who described Musk’s end goal as “replacing the human workforce with machines,” predicting that “Everything that can be machine-automated will be. And the technocrats will replace the bureaucrats.”

    Musk underlings are reportedly contemplating replacing humans at the Department of Education with a chatbot built on a large language model, as well.

    Wired previously reported that Musk loyalist Thomas Shedd, placed in a directorship within the General Services Administration, has talked of an “AI-first” agenda for Trump’s second term; DOGE staffers have already reportedly turned to Microsoft’s Azure AI platform for advice on slashing programs. While the Nvidia SuperPod couldn’t on its own replicate services like those provided by Microsoft, it is powerful enough to train AI models based on government data.

    Thompson told The Intercept that efforts to slash the federal workforce and more aggressively deploy artificial intelligence systems fit hand-in-glove.

    “I firmly believe that rooted behind the reduction in the human workforce that seems to be goal of current administration, there’s an overarching goal there to implement more technology-based systems in order to do the jobs,” he explained. “If you’re going to reduce your workforce, something has to pick up the slack. Something has to do the job.”

    The post The IRS Is Buying an AI Supercomputer From Nvidia appeared first on The Intercept.

  • Amid anger and protest over the Trump administration’s plan to deport millions of immigrants, U.S. Immigration and Customs Enforcement plans to monitor and locate “negative” social media discussion about the agency and its top officials, according to contract documents reviewed by The Intercept.

    Citing an increase in threats to ICE agents and leadership, the agency is soliciting pitches from private companies to monitor threats across the internet — with a special focus on social media. People who simply criticize ICE online could be pulled into the dragnet.

    “In order to prevent adversaries from successfully targeting ICE Senior leaders, personnel and facilities, ICE requires real-time threat mitigation and monitoring services, vulnerability assessments, and proactive threat monitoring services,” the procurement document reads.

    If this scanning uncovers anything the agency deems suspicious, ICE is asking its contractors to drill down into the background of social media users.

    That includes:

    “Previous social media activity which would indicate any additional threats to ICE; 2). Information which would indicate the individual(s) and/or the organization(s) making threats have a proclivity for violence; and 3). Information indicating a potential for carrying out a threat (such as postings depicting weapons, acts of violence, refences to acts of violence, to include empathy or affiliation with a group which has violent tendencies; references to violent acts; affections with violent acts; eluding [sic] to violent acts.”

    It’s unclear how exactly any contractor might sniff out someone’s “proclivity for violence.” The ICE document states only that the contractor will use “social and behavioral sciences” and “psychological profiles” to accomplish its automated threat detection.

    Once flagged, the system will further scour a target’s internet history and attempt to reveal their real-world position and offline identity. In addition to compiling personal information — such as the Social Security numbers and addresses of those whose posts are flagged — the contractor will also provide ICE with a “photograph, partial legal name, partial date of birth, possible city, possible work affiliations, possible school or university affiliation, and any identified possible family members or associates.”

    The document also requests “Facial Recognition capabilities that could take a photograph of a subject and search the internet to find all relevant information associated with the subject.” The contract contains specific directions for handling targets found to be located in other countries, implying that the program would also scan the domestic speech of American citizens.

    The posting indicates that ICE isn’t merely looking to detect direct threats of violence, but also online criticism of the agency.

    As part of its mission to protect ICE with “proactive threat monitoring,” the winning contractor will not simply flag threatening remarks but “Provide monitoring and analysis of behavioral and social media sentiment (i.e. positive, neutral, and negative).”

    “ICE’s attempt to have eyes and ears in as many places as we exist both online and offline should ring an alarm.”

    Such sentiment analysis — typically accomplished via machine-learning techniques — could place under law enforcement scrutiny speech that is constitutionally protected. Simply stated, a post that is critical or even hostile to ICE isn’t against the law.
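    As a rough illustration of the general technique (and not of any contractor’s actual system), the toy classifier below labels short posts as positive, neutral, or negative using a bag-of-words model. The training examples are invented, and scikit-learn is assumed to be available.

```python
# Toy illustration of sentiment analysis as the document describes it
# (labeling text positive/neutral/negative with a machine-learning model).
# The training examples are invented and scikit-learn is assumed to be
# installed; no actual government system is represented here.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "great work by the agency today",      # positive
    "the agency issued a press release",   # neutral
    "this agency is a disgrace",           # negative
    "thank you for keeping us safe",       # positive
    "the office opens at nine",            # neutral
    "I am angry about these raids",        # negative
]
train_labels = ["positive", "neutral", "negative",
                "positive", "neutral", "negative"]

# Bag-of-words features fed into a simple Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

print(model.predict(["I really dislike this agency"]))  # likely 'negative'
```

    With only a handful of invented examples the output is crude; the point is simply that “sentiment” becomes a label assigned automatically, and at scale, to speech.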

    “ICE’s attempts to capture and assign a judgement to people’s ‘sentiment’ throughout the expanse of the internet is beyond concerning,” said Cinthya Rodriguez, an organizer with the immigrant rights group Mijente. “The current administration’s attempt to use this technology falls within the agency’s larger history of mass surveillance, which includes gathering information from personal social media accounts and retaliating against immigrant activists. ICE’s attempt to have eyes and ears in as many places as we exist both online and offline should ring an alarm for all of us.”

    Related

    How ICE Uses Social Media to Surveil and Arrest Immigrants

    The document soliciting contractors appears nearly identical to a procurement document published by ICE in 2020, which resulted in a $5.5 million contract between the agency and Barbaricum, a Washington-based defense and intelligence contractor. A new contract has not yet been awarded. ICE spokesperson Jocelyn Biggs told The Intercept, “While ICE anticipates maintaining its threat risk monitoring services, we cannot speculate on a specific timeline for future contract decisions.”

    ICE already has extensive social media surveillance capabilities provided by federal contractor Giant Oak, which seeks “derogatory” posts about the United States to inform immigration-related decision-making. This contract, ostensibly, is focused more narrowly on threats to ICE leadership, agents, facilities, and operations.

    Civil liberties advocates told The Intercept the program had grave speech implications under the current administration. “While surveillance programs like this under any administration are a concerning privacy and free speech violation and I would fight to stop them, the rhetoric of the Trump administration makes this practice especially terrifying,” said Calli Schroeder, senior counsel at the Electronic Privacy Information Center. “Threats to ‘punish’ opponents or deport those exercising 1st Amendment rights combine with these invasive practices to create a real ‘thought police’ scenario.”

    The post ICE Wants to Know If You’re Posting Negative Things About It Online appeared first on The Intercept.

  • Tens of millions of people face the loss of an internet service they use to consume information from around the world. Their government says the block is for their own good, necessitated by threats to national security. The internet service is dangerous, they say, a tool of foreign meddling and a menace to the national fabric — though they furnish little evidence. A situation like this, historically, is the kind of thing the U.S. government protests in clear terms.

    When asked, for instance, about Chinese censorship of Twitter in 2009, President Barack Obama was unequivocal. “I can tell you that in the United States, the fact that we have free Internet — or unrestricted Internet access — is a source of strength, and I think should be encouraged.” When the government of Nigeria disconnected its people from Twitter in 2021, the State Department blasted the move, with spokesperson Ned Price declaring, “Unduly restricting the ability of Nigerians to report, gather, and disseminate opinions and information has no place in a democracy.”

    But with the Supreme Court approving on Friday a law that would shut off access to TikTok, the U.S. is poised to conduct the exact kind of internet authoritarianism it has spent decades warning the rest of the world about.

    Since the advent of the global web, this has been the standard line from the White House, State Department, Congress, and an infinitude of think tanks and NGOs: The internet is a democracy machine. You turn it loose, and it generates freedom ex nihilo. The more internet you have, the more freedom you have.

    The State Department in particular seldom misses an opportunity to knock China, Iran, and other faraway governments for blocking their people from reaching the global communications grid — moves justified by those governments as necessary for national safety.

    In 2006, the State Department presented the Bush administration’s Global Internet Freedom strategy of “defending Internet freedom by advocating the availability of the widest possible universe of content.” In a 2010 speech, then-Secretary of State Hillary Clinton cautioned that “countries that restrict free access to information or violate the basic rights of internet users risk walling themselves off from the progress of the next century.” She emphasized that the department sought to encourage the flow of foreign internet data into China “because we believe it will further add to the dynamic growth and the democratization” there.

    The U.S. has always viewed the internet with something akin to national pride, and for decades has condemned attempts by authoritarian governments — especially China’s — to restrict access to the worldwide exchange of unfettered information. China has become synonymous with internet censorship for snuffing whole websites or apps out of existence with only the thinnest invocation of national security.

    But after years of championing “Digital Democracy,” “the Global Village,” and an “American Information Superhighway” shuttling liberalism and freedom to every computer it touches, the U.S. is preparing a dramatic about-face. In a move of supreme irony, it will attempt to shield its citizens from Chinese government influence by becoming itself more like the government of China. American internet users must now get accustomed to sweeping censorship in the name of national security as an American strategy, not one inherent to our “foreign adversaries.”

    In a move of supreme irony, the U.S. will attempt to shield its citizens from Chinese government influence by becoming itself more like the government of China.

    For decades, China has justified its ban against American internet products on the grounds that the likes of Twitter and Instagram represent a threat to Chinese state security and a corrupting influence on Chinese society. That logic has now been seamlessly co-opted by U.S. politicians who see China as the great global evil, but with little acknowledgment of how their rhetoric matches that of their enemy.

    “Authoritarian and illiberal states,” President Joe Biden’s State Department warned soon after he signed the TikTok ban bill into law, “are seeking to restrict human rights online and offline through the misuse of the Internet and digital technologies” by “siloing the Internet” and “suppressing dissent through Internet and telecommunications shutdowns, virtual blackouts, restricted networks, and blocked websites.”

    While TikTok’s national security threat has never been made public — alleged details discussed by Congress remain classified — those who advocate banning the app make clear their concern isn’t merely cybersecurity but also free speech. The Chinese Communist Party “could also use TikTok to propagate videos that support party-friendly politicians or exacerbate discord in American society,” former GOP Rep. Mike Gallagher and Sen. Marco Rubio warned in a 2022 Washington Post op-ed. Their argument perfectly mimicked the unspecified threats to Chinese “national unity” that the country has cited to defend its blocking of American internet services.

    “It’s highly addictive and destructive and we’re seeing troubling data about the corrosive impact of constant social media use, particularly on young men and women here in America,” Gallagher told NBC in 2023.

    If politicians are conscious of this contradiction between declarations of America as the home of digital democracy and the rising American firewall, there’s little acknowledgment. In a 2024 opinion piece for Newsweek (“Mr. Xi, Tear Down This Firewall”), Rep. John Moolenaar decried China’s “dystopian” practice of censoring foreign information: “The Great Firewall inhibits contact between Chinese citizens and the outside world. Information is stopped from flowing into China and the Chinese people are not allowed to get information out. Facebook, X, Instagram, and YouTube are blocked.”

    Following the Supreme Court’s ruling Friday, Moolenaar, chair of the House Select Committee on the Chinese Communist Party, announced he “commends” the decision, one he believes “will keep our country safe.” His language echoes that of a Chinese foreign ministry spokesperson, who once told reporters the country’s national blockade of American websites was similarly necessary to “safeguard the public.”

    It’s unclear whether they see irony in the scores of Americans now flocking to VPN software to bypass a potential national TikTok ban — a technique the State Department has long promoted abroad for those living under repressive regimes.

    Nor does there seem to be any awareness of how effortlessly the national security argument deployed against TikTok could be turned against any major American internet company. If the U.S. believes TikTok is a clear and present danger to its citizens because it uses secret algorithms, cooperates with spy agencies, changes speech policies under political pressure, and conducts dragnet surveillance and data harvesting against its clueless users, what does that say about how the rest of the world should view Facebook, YouTube, or X?

    To his credit, Gallagher is open about the extent to which the anti-TikTok movement is based less on principle than brinkmanship. The national ideals of open access to information and unbridled speech remain, to Gallagher, but subordinate to the principle of “reciprocity,” as he’s put it. “It’s worth remembering that our social media applications are not allowed in China,” he said in a 2024 New York Times interview. “There’s just a basic lack of reciprocity, and your Chinese citizens don’t have access to them. And yet we allow Chinese government officials to go all over YouTube, Facebook and X spreading lies about America.” The notion that foreign lies — China’s, or anyone else’s — should be countered with state censorship, rather than counter-speech, marks an ideological abandonment of the past 30 years of American internet statecraft.

    “Prior to this ban, the U.S. had consistently and rightfully so condemned when other nations banned communications platforms as fundamentally anti-democratic,” said David Greene, senior staff attorney and civil liberties director at the Electronic Frontier Foundation. “We now have lost much of our moral authority to advance democracy and the free flow of information around the world.”

    Should TikTok actually become entirely unplugged from the United States, it may grow more difficult for the country to proselytize for an open internet. So too will it grow more difficult for the U.S. to warn of blocking apps or sites as something our backward adversaries, fearful of our American freedoms and open way of life, do out of desperation.

    That undesirable online speech can simply be disappeared by state action was previously dismissed as anti-democratic folly: In a 2000 speech, Bill Clinton praised the new digital century in which “liberty will spread by cell phone and cable modem,” comparing China’s “crack down on the internet” to “trying to nail Jello to the wall.” Futile though it may remain, the hammer at least no longer appears un-American.

    The post Washington’s TikTok Ban Hypocrisy: Internet Censorship Is Good, Now appeared first on The Intercept.

  • Meta is now granting its users new freedom to post a wide array of derogatory remarks about races, nationalities, ethnic groups, sexual orientations, and gender identities, training materials obtained by The Intercept reveal.

    Examples of newly permissible speech on Facebook and Instagram highlighted in the training materials include:

    “Immigrants are grubby, filthy pieces of shit.”

    “Gays are freaks.”

    “Look at that tranny (beneath photo of 17 year old girl).”

    The changes are part of a broader policy shift that includes the suspension of the company’s fact-checking program. The goal, Meta said Tuesday, is to “allow more speech by lifting restrictions.”

    Meta’s newly appointed global policy chief Joel Kaplan described the effort in a statement as a means to fix “complex systems to manage content on our platforms, which are increasingly complicated for us to enforce.”

    While Kaplan and Meta CEO Mark Zuckerberg have couched the changes as a way to allow users to engage more freely in ideological dissent and political debate, the previously unreported policy materials reviewed by The Intercept illustrate the extent to which purely insulting and dehumanizing rhetoric is now accepted.

    The document provides those working on Meta user content with an overview of the hate speech policy changes, walking them through how to apply the new rules. The most significant changes are accompanied by a selection of “relevant examples” — hypothetical posts marked either “Allow” or “Remove.”

    When asked about the new policy changes, Meta spokesperson Corey Chambliss referred The Intercept to remarks from Kaplan’s blog post announcing the shift: “We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate. It’s not right that things can be said on TV or the floor of Congress, but not on our platforms.”

    Kate Klonick, a content moderation policy expert who spoke to The Intercept, contests Meta’s framing of the new rules as less politicized, given the latitude they provide to attack conservative bogeymen.

    “Drawing lines around content moderation was always a political enterprise,” said Klonick, an associate professor of law at St. John’s University and scholar of content moderation policy. “To pretend these new rules are any more ‘neutral’ than the old rules is a farce and a lie.”

    She sees the shifts announced by Kaplan — a former White House deputy chief of staff under George W. Bush and Zuckerberg’s longtime liaison to the American right — as “the open political capture of Facebook, particularly because the changes are pandering to a particular party.”

    Meta’s public Community Standards page says that even under the new relaxed rules, the company still protects “refugees, migrants, immigrants, and asylum seekers from the most severe attacks” and prohibits “direct attacks” against people on the basis of “race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity, and serious disease.” But the instructive examples provided in the internal materials show a wide variety of comments that denigrate people based on these traits that are marked “Allow.”

    Related

    Facebook Fact Checks Were Never Going to Save Us. They Just Made Liberals Feel Better.

    At times, the provided examples appear convoluted or contradictory. One page notes “generalizations” about any group remain prohibited if they make a comparison to animals or pathogens — such as “All Syrian refugees are rodents.” But comparisons to “filth or feces” are now downgraded from hate speech to a less serious form of “insult,” which violates company rules only if directed at a protected group. According to examples provided by Meta, this change now allows users to broadly dehumanize immigrants with statements like “Immigrants are grubby, filthy pieces of shit,” despite language elsewhere in the document that claims “comparisons to subhumanity” remain banned.

    The company’s policy around nausea-based hate follows a particularly fine line: “Migrants are no better than vomit” is allowed, according to the materials, while “Muslims make me want to throw up” ought to be removed because it claims a group “causes sickness.”

    While general comparisons to animals are still against the rules, many other kinds of broad, hateful stereotyping are now allowed. “ALL behavioral statements (qualified and non-qualified)” are also now no longer against Meta’s rules, the document reads, allowing sweeping generalizations connecting entire races or ethnic groups to criminality or terrorism. The document offers as examples of acceptable racial generalizations: “These damn immigrants can’t be trusted, they’re all criminals,” “I bet Jorge’s the one who stole my backpack after track practice today. Immigrants are all thieves,” and “Japanese are all Yakuza.” It notes, however, that the statement “Black people are all drug dealers” remains prohibited under the new rules.

    Other sections of the materials provide examples of forbidden “insults about sexual immorality,” such as “Jewish women are slutty.” But the document also provides ample examples of newly permissible insults aimed at specific gender identities or sexual orientations, including “Gay people are sinners” and “Trans people are immoral.” A post stating “Lesbians are so stupid” would remain prohibited as a “mental insult,” though “Trans people are mentally ill” is marked as allowed.

    Generalizations about superiority and inferiority are similarly convoluted, though attacks on immigrants tend to get a pass. Examples of banned content include: “Christian men are totally useless,” “Is it me? Or are all autistic women ugly?” and “Hispanics are as dirty as the ground we walk on.” Meanwhile, “Mexican immigrants are trash!” is now deemed acceptable.

    Overall, the restrictions on claims of ethnic or religious supremacy have been eased significantly. The document explains that Meta now allows “statements of superiority as long as the statements do not refer to inferiority of another [protected characteristic] group (a) on the basis of inherent intellectual ability and (b) without support.” Allowable statements under this rule include “Latinos are the best!” and “Black people are superior to all others.” Also now acceptable are comparative claims such as “Black people are more violent than Whites,” “Mexicans are lazier than Asians,” and “Jews are flat out greedier than Christians.” Off-limits, only because it pertains to intellectual ability, is the example “White people are more intelligent than black people.”

    But general statements about intellect appear to be permitted if they’re shared with purported evidence. For example, “I just read a statistical study about Jewish people being smarter than Christians. From what I can tell, it’s true!” It’s unclear if one would be required to link to such a study, or merely claim its existence.

    Rules around explicit statements of hate have been loosened considerably as well. “Statements of contempt, dislike, and dismissal, such as ‘I hate,’ ‘I don’t care for,’ and ‘I don’t like.’ are now considered non-violating and are allowed,” the document explains. Included as acceptable examples are posts stating “I don’t care for white people” and “I’m a proud racist.”

    The new rules also forbid “targeting cursing” at a protected group, which “includes the use of the word ‘fuck’ and its variants.” Cited as an example, a post stating “Ugh, the fucking Jews are at it again” violates the rules simply because it contains an obscenity (the new rules permit the use of “bitch” or “motherfucker”).

    “Referring to the target as genitalia or anus are now considered non-violating and are allowed.”

    Another policy shift: “Referring to the target as genitalia or anus are now considered non-violating and are allowed.” As an example of what is now permissible, Facebook offers up: “Italians are dickheads.”

    While many of the examples and underlying policies seem muddled, the document shows clarity around allowing disparaging remarks about transgender people, including children. Noting that “‘Tranny’ is no longer a designated slur and is now non-violating,” the materials provide three examples of speech that should no longer be removed: “Trannies are a problem,” “Look at that tranny (beneath photo of 17 year old girl),” and “Get these trannies out of my school (beneath photo of high school students).”

    According to Jillian York, director for international freedom of expression at the Electronic Frontier Foundation, Meta’s hate speech protections have historically been well-intentioned, however deeply flawed in practice. “While this has often resulted in over-moderation that I and many others have criticized, these examples demonstrate that Meta’s policy changes are political in nature and not intended to simply allow more freedom of expression,” York said.

    Meta has faced international scrutiny for its approach to hate speech, most notably after the role that hate speech and other dehumanizing language on Facebook played in fomenting genocide in Myanmar. Following criticism of its mishandling of Myanmar, where the United Nations found Facebook had played a “determining role” in the ethnic cleansing of over 650,000 Rohingya Muslims, the company spent years touting its investment in preventing the spread of similar rhetoric in the future.

    “The reason many of these lines were drawn where they were is because hate speech often doesn’t stay speech, it turns into real world conduct,” said Klonick, the content moderation scholar.

    It’s a premise that Meta purported to share up until this week. “We have a responsibility to fight abuse on Facebook. This is especially true in countries like Myanmar where many people are using the internet for the first time and social media can be used to spread hate and fuel tension on the ground,” wrote company product manager Sara Su in a 2018 blog post. “While we’re adapting our approach to false news given the changing circumstances, our rules on hate speech have stayed the same: it’s not allowed.”

    The post Leaked Meta Rules: Users Are Free to Post “Mexican Immigrants Are Trash!” or “Trans People Are Immoral” appeared first on The Intercept.

  • For two years, Hannah Byrne was part of an invisible machine that determines what 3 billion people around the world can say on the internet. From her perch within Meta’s Counterterrorism and Dangerous Organizations team, Byrne helped craft one of the most powerful and secretive censorship policies in internet history. Her work adhered to the basic tenet of content moderation: Online speech can cause offline harm. Stop the bad speech — or bad speakers — and you have perhaps saved a life.

    In college and early in her career, Byrne had dedicated herself to the field of counterterrorism and its attempt to catalog, explain, and ultimately deter non-state political violence. She was most concerned with violent right-wing extremism: neo-Nazis infiltrating Western armies, Klansmen plotting on Facebook pages, and Trumpist militiamen marching on the Capitol.

    In video meetings with her remote work colleagues and in the conference rooms of Menlo Park, California, with the MAGA riot of January 6 fresh in her mind, Byrne believed she was in the right place at the right time to make a difference.

    And then Russia invaded Ukraine. A country of under 40 million found itself facing a full-scale assault by one of the largest militaries in the world. Standing between it and Russian invasion were the capable, battle-tested fighters of the Azov Battalion — a unit founded as the armed wing of a Ukrainian neo-Nazi movement. What followed not only shook apart Byrne’s plans for her own life, but also her belief in content moderation and counterterrorism.

    Today, she is convinced her former employer cannot be trusted with power so vast, and that the systems she helped build should be dismantled. For the first time, Byrne shares her story with The Intercept, and why the public should be as disturbed by her work as she came to be.

    Through a spokesperson, Meta told The Intercept that Byrne’s workplace concerns “do not match the reality” of how policy is enforced at the company.

    Good Guys and Bad Guys

    Byrne grew up in the small, predominantly white Boston suburb of Natick. She was 7 years old when the World Trade Center was destroyed and grew up steeped in a binary American history of good versus evil, hopeful she would always side neatly with the former.

    School taught her that communism was bad, Martin Luther King Jr. ended American racism, and the United States had only ever been a force for peace. Byrne was determined after high school to work for the CIA in part because of reading about its origin story as the Nazi-fighting Office of Strategic Services during World War II. “I was a 9/11 kid with a poor education and a hero complex,” Byrne said.

    And so Byrne joined the system, earning an undergraduate degree in political science at Johns Hopkins and then enrolling in a graduate research program in “terrorism and sub-state violence” at Georgetown University’s Center for Security Studies. Georgetown’s website highlights how many graduates from the Center go on to work at places like the Department of Defense, Department of State, Northrop Grumman — and Meta.

    It was taken for granted that the program would groom graduates for the intelligence community, said Jacq Fulgham, who met Byrne at Georgetown. But even then, Fulgham remembers Byrne as a rare skeptic willing to question American imperialism: “Hannah always forced us to think about every topic and to think critically.”

    Part of her required reading at Georgetown included “A Time to Attack: The Looming Iranian Nuclear Threat,” by former Defense Department official Matthew Kroenig. The book advocates for preemptive air war against Iran to end the country’s nuclear ambitions. Byrne was shocked that the premise of bombing a country of 90 million, presumably killing many innocent people, to achieve the ideological and political ends of the United States would be considered within the realm of educated debate and not an act of terrorism.

    That’s because terrorism, her instructors insisted, was not something governments do. Part of terror’s malign character is its perpetration by “non-state actors”: thugs, radicals, militants, criminals, and assassins. Not presidents or generals. Unprovoked air war against Iran was within the realm of polite discussion, but there was never “the same sort of critical thinking to what forms of violence might be appropriate for Hamas” or other non-state groups, she recalls.

    As part of her program at Georgetown, Byrne studied abroad in places where “non-state violence” was not a textbook topic but real life. Interviews with former IRA militants in Belfast, ex-FARC soldiers in Colombia, and Palestinians living under Israeli occupation complicated the terrorism binary. Rather than cartoon villains, Byrne met people who felt pushed to violence by the overwhelming reach and power of the United States and its allies. Wherever she went, Byrne said, she met people victimized, not protected by her country. This was a history she had never been taught.

    Despite feeling dismayed about the national security sector, Byrne still harbored a temptation to fix it from within. After receiving her master’s and entering a State Department-sponsored immersion language class in India, still hopeful for an eventual job at the CIA or National Security Agency, she got a job at the RAND Corporation as a defense analyst. “I hoped I’d be able to continue to learn and write about ‘terrorism,’ which I now knew to be ‘resistance movements,’ in an academic way,” Byrne said. Instead, her two years at RAND were focused on the traditional research the think tank is known for, contributing to titles like “Countering Violent Nonstate Actor Financing: Revenue Sources, Financing Strategies, and Tools of Disruption.”

    “She was all in on a career in national security,” recalled a former RAND co-worker who spoke to The Intercept on the condition of anonymity. “She was earnest in the way a lot of inside-the-Beltway recent grads can be,” they added. “She still had a healthy amount of sarcasm. But I think over time that turned into cynicism.”

    Unfulfilled at RAND, Byrne found what she thought could be a way to both do good and honor her burgeoning anti-imperial politics: Fighting the enemy at home. She decided her next step would be a job that let her focus on the threat of white supremacists.

    Facebook needed the help. A mob inflamed by white supremacist rhetoric had stormed the U.S. Capitol, and Facebook yet again found itself pilloried for providing an organizing tool for extremists. Byrne came away from job interviews with Facebook’s policy team convinced the company would let her fight a real danger in a way the federal national security establishment would not.

    Instead, she would come to realize she had joined the national security state in microcosm.

    Azov on the Whitelist

    Byrne joined Meta in September 2021.

    She and her team helped draft the rulebook that applies to the world’s most diabolical people and groups: the Ku Klux Klan, cartels, and of course, terrorists. Meta bans these so-called Dangerous Organizations and Individuals, or DOI, from using its platforms, but further prohibits its billions of users from engaging in “glorification,” “support,” or “representation” of anyone on the list.

    Byrne’s job was not only to keep dangerous organizations off Meta properties, but also to prevent their message from spreading across the internet and spilling into the real world. The ambiguity and subjectivity inherent to these terms has made the “DOI” policy a perennial source of over-enforcement and controversy.

    Related

    Revealed: Facebook’s Secret Blacklist of “Dangerous Individuals and Organizations”

    A full copy of the secret list obtained by The Intercept in 2021 showed it was disproportionately comprised of Muslim, Arab, and southeast Asian entities, hewing closely to the foreign policy crosshairs of the United States. Much of the list is copied directly from federal blacklists like the Treasury Department’s Specially Designated Global Terrorist roster.

    A 2022 third-party audit commissioned by Meta found the company had violated the human rights of Palestinian users, in part, due to over-enforcement of the DOI policy. Meta’s in-house Oversight Board has repeatedly reversed content removed through the policy, and regularly asks the company to disclose the contents of the list and information about how it’s used.

    Meta’s longtime justification of the Dangerous Organizations policy is that the company is legally obligated to censor certain kinds of speech around designated entities or it would risk violating the federal statute barring material support for terrorist groups, a view some national security scholars have vigorously rejected.

    Top/Left: Hannah Byrne on a Meta-sponsored trip to Wales in 2022. Bottom/Right: Byrne speaking at the NOLA Freedom Forum in 2024, after leaving Meta. Photo: Courtesy of Hannah Byrne

    Byrne tried to focus on initiatives and targets that she could feel good about, like efforts to block violent white supremacists from using the company’s VR platform or running Facebook ads. At first she was pleased to see that Meta’s in-house list went further than the federal roster in designating white supremacist organizations like the Klan — or the Azov Battalion.

    Still, Byrne had doubts about the model because of the clear intimacy between American state policy and Meta’s content moderation policy. Meta’s censorship systems are “basically an extension of the government,” Byrne said in an interview.

    She was also unsure of whether Meta was up to the task of maintaining a privatized terror roster. “We had this huge problem where we had all of these groups and we didn’t really have … any sort of ongoing check or list of evidence of whether or not these groups were terrorists,” she said, a characterization the company rejected.

    Byrne quickly found that the blacklist was flexible.

    Meta’s censorship systems are “basically an extension of the government.”

    In February 2022, as Russia prepared its full-scale invasion of Ukraine, Byrne learned firsthand just how mercurial the corporate mirroring of State Department policy could be.

    As an armed white supremacist group with credible allegations of human rights violations hanging over it, Azov had landed on the Dangerous Organizations list, which meant the unit’s members couldn’t use Meta platforms like Facebook, nor could any users of those platforms praise the unit’s deeds. But with Russian tanks and troops massing along the border, Ukraine’s well-trained Azov fighters became the vanguard of anti-Russian resistance, and their status as international pariahs became a sudden liability for American geopolitics. Within weeks, Byrne found the moral universe around her inverted: The heavily armed hate group, sanctioned by Congress since 2018, was now a band of freedom fighters resisting occupation, not terroristic racists.

    As a Counterterrorism and Dangerous Organizations policy manager, Byrne’s entire job was to help form policies that would most effectively thwart groups like Azov. Then one day, this was no longer the case. “They’re no longer neo-Nazis,” Byrne recalls a policy manager explaining to her somewhat shocked team, a line that is now the official position of the White House.

    Related

    Facebook Allows Praise of Neo-Nazi Ukrainian Battalion If It Fights Russian Invasion

    Shortly after the delisting, The Intercept reported that Meta rules had been quickly altered to “allow praise of the Azov Battalion when explicitly and exclusively praising their role in defending Ukraine OR their role as part of the Ukraine’s National Guard.” Suddenly, billions of people were permitted to call the historically neo-Nazi Azov movement “real heroes,” according to policy language obtained by The Intercept at the time.

    Byrne and other concerned colleagues were given an opportunity to dissent and muster evidence that Azov fighters had not in fact reformed. Byrne said that even after gathering photographic evidence to the contrary, Meta responded that while Azov may have harbored Nazi sympathies in recent years, posts violating the company’s rules had sufficiently tapered off.

    The odds felt stacked: While their bosses said they were free to make their case that the Battalion should remain blacklisted, they had to pull their evidence from Facebook — a platform that Azov fighters ostensibly weren’t allowed to use in the first place.

    “Key to that assessment — which everyone at Facebook knew, but coming from the outside sounds ridiculous — is that we’re actually pretty bad at keeping content off the platform. Especially neo-Nazi content,” Byrne recalls. “So internally, it was like, ‘Oh, there should be lots of evidence online if they’re neo-Nazis because there’s so many neo-Nazis on our platform.’”

    Though she was not privy to deliberations about the choice to delist the Azov Battalion, Byrne is adamant in her suspicion that it was done to support the U.S.-backed war effort. “I know the U.S. government is in constant contact with Facebook employees,” she said. “It is so clear that it was a political decision.” Byrne had taken this job to prevent militant racism from spilling over into offline violence. Now, her team was instead loosening its rules for an armed organization whose founder had once declared Ukraine’s destiny was to “lead the white races of the world in a final crusade … against Semite-led Untermenschen.”

    Related

    Facebook Tells Moderators to Allow Graphic Images of Russian Airstrikes but Censors Israeli Attacks

    It wasn’t just the shock of a reversal on the Azov Battalion, but the fact that it had happened so abruptly — Byrne estimates that it took no more than two weeks to exempt the group and allow praise of it once more.

    She was aghast: “Of course, this is going to exacerbate white supremacist violence,” she recalls worrying. “This is going to make them look good. It’s going to make it easier to spread propaganda. Ultimately, I was afraid that it was going to directly contribute to violence.”

    In its comments to The Intercept, Meta reiterated its belief that the Azov unit has meaningfully reformed and no longer meets its standards for designation.

    Azov Regiment soldiers are seen during weapons training on June 28, 2022, in the Kharkiv region, Ukraine. Photo: Paula Bronstein/Getty Images

    Byrne recalled a similar frustration around Meta’s blacklisting of factions fighting the government of Syrian President Bashar al-Assad, but not the violent, repressive government itself. “[Assad] was gassing his civilians, and there were a couple Syrians at Facebook who were like, ‘Hey, why do we have this whole team called Dangerous Organizations and Individuals and they’re only censoring the Syrian resistance?’” Byrne realized there was no satisfying answer: National governments were just generally off-limits.

    Meta confirmed to The Intercept that its definition of terrorism doesn’t apply to nation states, reflecting what it described as a legal and academic consensus that governments may legitimately use violence.

    At the start of her job, Byrne was under the impression right-wing extremism was a top priority for the company. “But every time I need resources for neo-Nazi stuff … nothing seemed to happen.” The Azov exemption, by contrast, happened at lightning speed. Byrne recalls a similarly rapid engineering effort to tweak Meta’s machine learning-based content scanning system that would have normally removed the bulk of Azov-friendly posts. Not everyone’s algorithmic treatment is similarly prioritized: “It’s infuriating that so many Palestinians are still being taken down for false-positive ‘graphic violence’ violations because it’s obvious to me no one at Meta gives a shit,” Byrne said.

    Meta pushed back on Byrne’s broader objections to the Dangerous Organizations policy. “This former employee’s claims do not match the reality of how our Dangerous Organizations policies actually work,” Meta spokesperson Ryan Daniels said in a statement. “These policies are some of the most comprehensive in the industry, and designed to stop those who seek to promote violence, hate and terrorism on our platforms, while at the same time ensuring free expression. We have a team of hundreds of people from different backgrounds working on these issues every day — with expertise ranging from law enforcement and national security to human rights, counterterrorism and academic studies. Our Dangerous Organizations policies are not static, we update them to reflect evolving factors and changing threat landscapes, and we apply them equally around the world while also complying with our legal obligations.”

    Malicious Actors

    But it wasn’t the Azov reversal that ended Byrne’s counterterror career.

    In the wake of the attack on the Capitol, Meta had a problem: “It’s tough to profile or pinpoint the type of person that would be inclined to participate in January 6, which is true of most terrorist groups,” Byrne said. “It’s an ideology, it lives in your mind.”

    But what if the company could prevent the next recruit for the Proud Boys, or Three Percenters, or even ISIS? “That was our task,” Byrne said. “Figure out where these groups are organizing, kind of nip it in the bud before they carry out any further real-world violence. We need to make sure they’re not in groups together, not friending each other, and not connecting with like-minded people.”

    She was assigned to work on Meta’s Malicious Actor Framework, a system intended to span all its platforms and identify “malicious actors” who might be prone to “dangerous” behavior using “signals,” Byrne said. The approach, she said, had been pioneered at Meta by the child safety team, which used automated alarms to alert the company when it seemed an adult might be attempting inappropriate contact with a child. That tactic had some success, but Byrne recalls it also mistakenly flagged people like coaches and teachers who had legitimate reason to interact with children.

    Posting praise or admiring imagery of Osama bin Laden is relatively easy to catch and delete. But what about someone interested in his ideas? “The premise was that we need to target certain kinds of individuals who are likely to sympathize with terrorism,” Byrne said. There was just one problem, as Byrne puts it today: “What the fuck does it mean to be a sympathizer?”

    In the field, this Obama-era framework of stopping radicalization before it takes root is known as Countering Violent Extremism, or CVE. It has been criticized as both pseudoscientific and ineffective, undermining the civil liberties of innocent people by placing them under suspicion for their own good. CVE programs generally “lack any scientific basis, are ineffective at reducing terrorism, and are overwhelmingly discriminatory in nature,” according to the Brennan Center for Justice.

    Byrne said she had joined Meta at a time when the company was transitioning “from content-based detection to profile-based detection.” Screenshots of team presentations Byrne participated in show an interest in predicting dangerousness among users. One presentation expresses concern with Facebook’s transition to encrypted messaging, which would prevent authorities (and Meta itself) from eavesdropping on chats: “We will need to move our detection/enforcement/investigation signals more upstream to surfaces we do have insight into (eg., user’s behaviors on FB, past violations, social relationships, group metadata like description, image, title, etc) in order to flag areas of harm.”

    Meta specifically wanted the ability to detect and deter “risky interactions” between “dangerous individuals” or “likely-malicious actors” and their “victims” vulnerable to radicalization — without being able to read the messages these users were exchanging. The company hoped to use this capability, according to these meeting materials, to stop “malicious actors distributing propaganda,” for example. This would be accomplished using machine learning to recognize dangerous signals on someone’s profile, according to these screenshots, like certain words in their profile or whether they’d been members of a banned group.

    Byrne said the plan was to incorporate this policy into a companywide framework, but she departed Meta too soon to know what ultimately came of this plan.

    Meta confirmed the existence of the malicious actor framework to The Intercept, explaining that it remains a work in progress, but disputed its predictive nature.

    Byrne has no evidence that Meta was pursuing a system that would use overtly prejudiced criteria to determine who is a future threat, but feared that any predictive system would be based on thin evidence and unconsciously veer toward bias. Civil liberties scholars and counterterror experts have long warned that because terrorism is so extremely rare, any attempt to predict who will commit it is fatally flawed because there simply is not enough data. Such efforts often regress, wittingly or otherwise, into stereotypes.

    “I brought it up in a couple meetings, including with my manager, but it wasn’t taken that seriously,” Byrne said.

    Byrne recalls discussion of predicting such radicalism risk based on things like who your friends are, what’s on your profile, who sends you messages, and the extent to which you and your network have previously violated Meta’s speech rules. Given that enforcement of those rules has been shown to be biased along national or ethnic lines and plagued by technical errors, Byrne feared the worst for vulnerable users. “If you live in Palestine, all of your friends are Palestinians,” Byrne said. “They’re all getting flagged, and it’s like a self-licking ice cream cone.”

    In the spring of 2022, investigators drawn from Meta’s internal Integrity, Investigations, and Intelligence team, known as i3, began analyzing the profiles of Facebook users who had run afoul of the Dangerous Organizations and Individuals policy, Byrne said. They were looking for shared traits that could be turned into general indicators of risk. “As someone who came from a professional research background, I can say it wasn’t a good research methodology,” Byrne said.

    Part of her objection was pedigree: People just barely removed from American government were able to determine what people could say online, whether or not those internet users lived in the United States. Many of these investigators, according to Byrne’s recollection and LinkedIn profiles of her former colleagues she shared with The Intercept, had arrived from positions at the Defense Department, FBI, and U.S. intelligence agencies, institutions not known for an unbiased approach to counterterror.

    Over hours of interviews, Byrne never badmouthed any of her former colleagues nor blamed them individually. Her criticism of Meta is systemic, the sort of structural ailment she had hoped to change from within. “It was people that I personally liked and trusted, and I trusted their values,” Byrne said of her former colleagues on Meta’s in-house intelligence team.

    Byrne feared implementing a system so deeply rooted in inference could endanger the users she’d been hired to protect. She worried about systemic biases, such as “the fact that Arabic language just wasn’t really represented in our data set.”

    She worried about generalizing about one strain of violent extremism and applying it to drastically different cultures, contexts, and ideologies: “We’re saying Hamas is the exact same thing as the KKK with absolutely no basis in logic or reason or history or research.” Byrne encountered similar definitional headaches around “misinformation” and “disinformation,” which she says her team studied as potential sources of terror sympathy and wanted to incorporate into the Malicious Actor Framework. But like terrorism itself, Byrne found these terms simply too blunt to be effective. “We’re taking some of the most complex, unrelated, geographically separated, just different fucking things, and we’re literally using this word terrorism, or misinformation, or disinformation, to treat them as a binary.”

    Private Policy, Public Relations

    Toward the end of her time at Meta, Byrne began to break down. The prospect of catching enemies of the state had energized her at first. Now she faced the grim, gradual realization that she wasn’t accomplishing the things she hoped she would. Her work wasn’t making Facebook safer, nor the people using it. Far from manning the barricades against extremism, Byrne quickly found herself just another millennial in a boring tech job.

    But while she was planning the Malicious Actor Framework, these feelings of futility gave way to something worse: “I’m actually going to be an active participant in harm,” she recalls thinking. The speech of people she’d met in her studies abroad was exactly the kind her job might suppress. Finally, Byrne had decided “it felt impossible to be a good actor within that system.”

    Spiraling mental health struggles resulted in a leave of absence in the spring of 2023 and months of partial hospitalization. Away from her job, grappling with the nature of her work, Byrne realized she couldn’t go on. She returned at the end of the summer for a brief stretch before finally quitting on October 4. Her departure came just days before the world would be upended by events that would quickly implicate her former employer and highlight exactly why she fled from it.

    For Byrne, watching the Israeli military hailed by her country’s leaders as it kills tens of thousands of civilians in the name of fighting terror exposes everything she believes is wrong and fraudulent about the counterterrorism industry. Meta’s Dangerous Organizations policy doesn’t take lives, but she sees it as rooted in that same conceptual injustice. “The same racist, bullshit dynamics of ‘terrorism’ were not only dictating who the U.S. was allowed to kill, they were dictating what the world was allowed to see, who in the world was allowed to speak, and what the world was allowed to say,” Byrne explained. “And the system works exactly as the U.S. law intends it to — to silence resistance to its violence.”

    In conversations, what seems most galling to Byrne is the contrast between how malleable Meta’s Dangerous Organizations policy was for Ukraine and how draconian it has felt for those protesting the war in Gaza, or trying to document it happening around them. Following the Russian invasion of Ukraine, Meta not only moved swiftly to allow users to cheer on the Azov Battalion, but also loosened its rules around incitement, hate speech, and gory imagery so Ukrainian civilians could share images of the suffering around them and voice their fury against it. Byrne recalls seeing a video on Facebook of a Ukrainian woman giving an invading Russian soldier seeds, telling him to keep them in his pockets so they’d flower from his corpse on the battlefield. Were it a Palestinian woman taunting an Israeli soldier, Byrne said, “that would be taken down for terrorism so quickly.”

    Today, Byrne remains conflicted about the very concept of content moderation. On the one hand, she acknowledges that violent groups can and do organize via platforms like Facebook — the problem that brought her to the company in the first place. And there are ways she believes Meta could easily improve, given its financial resources: more and better human moderators, more policy drafted by teams equipped to meet the contours of the hundreds of different countries where people use Facebook and Instagram.

    While Byrne and her colleagues were supposed to be preventing harm from occurring in the world, they often felt like they were a janitorial crew responding to bad press. “An article would come out, all my team would share it, and then it would be like ‘Fix this thing’ all day. I’d be glued to the computer.” Byrne recalls “my boss’s boss or even Mark Zuckerberg just like searching things, and screenshotting them, and sending them to us, like ‘Why is this still up?’” She remembers her team, contrary to conventional wisdom about Big Tech, “expressing gratitude when there would be [media] leaks sometimes, because we’d all of a sudden get all of these resources and ability to change things.”

    Militant neo-Nazi organizations represent a real, violent threat to the public, and they and other violent groups can and do organize using online platforms like Facebook, she readily admits. Still, watching the way pro-Palestinian speech has been restricted by companies like Meta since October 7, while glorifications of Israeli state violence flow unfettered, pushed her to speak out publicly about the company’s censorship apparatus.

    In her life post-Meta, galvanized by the ongoing Israeli bombardment of Gaza, Byrne has become active in pro-Palestinian protest circles and outspoken in her criticism of her former employer’s role in suppressing speech about the war. In February, she gave a presentation on Meta’s censorship practices at the NOLA Freedom Forum, a New Orleans activist group, providing an insider’s advice on how to avoid getting banned on Instagram.

    She’s still maddened by the establishment’s circular logic of terrorism, which casts non-state actors as terrorists while condoning the same behaviors from governments. “The scales of acceptable casualties are astronomically different when we’re talking about white, state-perpetrated violence versus brown and black non-state-perpetrated violence.”

    Unlike past Big Tech dissidents like Frances Haugen, Byrne doesn’t think her former employer can be reformed with tweaks to its algorithms or greater transparency. Rather, she fundamentally objects to an American company policing speech — even in the name of safety — for so much of the planet.

    So long as U.S. foreign policy and federal law brands certain acts of violence beyond the pale depending on politics and not harm — and so long as Meta believes itself beholden to those laws — Byrne believes the machine cannot be fixed. “You want military, Department of State, CIA people enforcing free speech? That is what is concerning about this.”

    The post She Joined Facebook to Fight Terror. Now She’s Convinced We Need to Fight Facebook. appeared first on The Intercept.

  • When questioned about its controversial cloud computing contract with the Israeli government, Google has repeatedly claimed the so-called Project Nimbus deal is bound by the company’s general cloud computing terms of service policy.

    While that policy would prohibit uses that lead to deprivation of rights, injury, or death, or other harms, contract documents and an internal company email reviewed by The Intercept show the deal forged between Google and Israel doesn’t operate under the tech company’s general terms of service. Rather, Nimbus is subject to an “adjusted” policy drafted between Google and the Israeli government. It is unclear how this “Adjusted Terms of Service” policy differs from Google’s typical terms.

    Related

    Documents Reveal Advanced AI Tools Google Is Selling to Israel

    The $1.2 billion joint contract split between Google and Amazon provides the Israeli government, including its military, with access to state-of-the-art cloud computing and artificial intelligence tools. This has made Project Nimbus a consistent source of protest inside and outside Google, even before Israel’s war on Gaza.

    While Amazon has largely remained silent in the face of employee activism and outside scrutiny, Google routinely downplays or denies the military reach of Project Nimbus — despite the Israeli Finance Ministry’s 2021 announcement that the deal would service the country’s “defense establishment.”

    Google has also sought to reassure those concerned by its relationship with a government whose leadership is being investigated by the International Criminal Court for crimes against humanity by claiming Nimbus is constrained by the company’s general rules and regulations.

    “We have been very clear that the Nimbus contract is for workloads running on our commercial cloud by Israeli government ministries, who agree to comply with our Terms of Service and Acceptable Use Policy,” a Google spokesperson told Wired in July and repeated verbatim to Time magazine in August, linking both times to the public-facing copies of each document.

    Google Cloud’s terms of service prohibit, among other things, uses that “violate, or encourage the violation of, the legal rights of others,” any “invasive” purpose, or anything “that can cause death, serious harm, or injury to individuals or groups of individuals.”

    “If Google wins the competition, we will need to accept a non negotiable contract on terms favourable to the government.”

    But the premise that Google policy dictates how Nimbus is used is called into question by a previously undisclosed email from a company lawyer. On December 10, 2020, before the tech giant won the contract, Google lawyer Edward du Boulay wrote to company executives with exciting news: “Google Cloud has been preparing to submit a bid for Project Nimbus (internal code ‘Selenite’), a competitive tender to provide cloud to the Israeli government. The business believes this is currently the largest government procurement of public cloud globally.”

    Du Boulay noted that “If Google wins the competition, we will need to accept a non negotiable contract on terms favourable to the government,” and “Given the value and strategic nature of this project, it carries potential risks and rewards which are significant if we win.” Among du Boulay’s concerns, the lawyer warned, was the fact that the Israeli “government has unilateral right to impose contract changes.” He cautioned further that should it win the contract, Google would retain “almost no ability to sue [Israel] for damages” stemming from “permitted uses … breaches.” The email does not explain what exactly would prevent Google from seeking legal recourse should the Israeli state commit such a breach.

    Google’s suggestion of authority over the contract is further undermined by Israeli governmental contract documents reviewed by The Intercept. The documents state that the company’s standard terms of service don’t apply — rather, an “adjusted” terms of service document is in effect.

    “The tenderer [Israel] has adjusted the winning suppliers’ [Google and Amazon] service agreement for each of the services supplied within the framework of this contract,” according to a 63-page overview of the Nimbus contract. “The Adjusted Terms of Service are the only terms that shall apply to the cloud services consumed upon the winning bidders’ cloud infrastructure.”

    Google did not immediately respond to a request for comment.

    The language about “Adjusted Terms of Service” appears to contradict not only Google’s public claims about the contract, but also how it has represented Nimbus to its own staff. During an October 30 employee Q&A session, Google president of global affairs Kent Walker was asked how the company is ensuring its Nimbus work is consistent with its “AI Principles” document, which forbids uses “that cause or are likely to cause overall harm,” including surveillance, weapons, or anything “whose purpose contravenes widely accepted principles of international law and human rights.”

    According to a transcript of the exchange shared with The Intercept, Walker said that Nimbus is subject to Google’s own terms: “When it comes to the Nimbus contract, in particular, this is a contract that is designed and directed at our public cloud work, not at specific military classified sensitive information. It’s not designed for that. And everything that’s on our Cloud network, our public Cloud, is subject to our Acceptable Use Policy and our Terms of Service. So, you know, I can assure you that we take all this seriously.”

    Related

    Israeli Weapons Firms Required to Buy Cloud Services From Google and Amazon

    The Israeli contract document also seems to contradict another common defense of the contract from Google, echoed by Walker, that Nimbus is “not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services.” According to the Israeli contract document, however, the government “may make any use of any service included in the supplier’s catalog of services.”

    A separate document pertaining to Nimbus’s “Digital Marketplace,” a suite of third-party software hosted by Google and made available to Nimbus users in the Israeli government, offers another apparent contradiction: “There will be no restrictions on the part of the Provider as to the type of system and information that the Clients may migrate to the service, including vital systems of high sensitivity level.” This second document stipulates that the Israeli government “may make any use of the service within the performance of its function and purpose as a public service for the State of Israel and its citizens,” and that “there will be no restriction of any kind, including ‘permitted use’ rules for a service being offered in the governmental digital marketplace.”

    Should Google not have any meaningful control over Nimbus, the company could face consequences beyond public relations or employee dissent. In October, the United Nations Special Rapporteur on the occupied Palestinian territory placed a public call for information pertaining to private sector involvement in “the commission of international crimes connected to Israel’s unlawful occupation, racial segregation and apartheid regime,” according to a press release.

    The Abolitionist Law Center, a Pennsylvania-based public interest firm, told The Intercept it is filing a submission detailing how “Google and Amazon Web Services’ provision of advanced technological services to the Israeli government through Project Nimbus violates — by its very nature — each companies’ purported commitments to human rights due diligence obligations,” according to staff attorney Sadaf Doost. “This is most evidently demonstrated by how the Project Nimbus contract itself includes a clause granting authority to Israeli officials to modify the companies’ standard terms of use agreements in ways that have not been made clear to the public.”

    The post Documents Contradict Google’s Claims About Its Project Nimbus Contract With Israel appeared first on The Intercept.

  • President-elect Donald Trump vows to start his second term with the immediate mass deportation of millions of undocumented immigrants. Like everything else, deportations of the 21st century are an increasingly data-centric undertaking, tapping vast pools of personal information sold by a litany of companies. The Intercept asked more than three dozen companies in the data business if they’ll help; only four were willing to comment.

    While details of the plan have varied, Trump’s intention is clear. He plans to use federal immigration police and perhaps the military to force millions of immigrants out of the United States in an operation the president-elect says has “no price tag.” While the country braces for the possibility of immigrants forcibly rounded up and deported, much of the undertaking will likely remain invisible — the domain of software analysis and database searches of unregulated personal data.

    Related

    LexisNexis Is Selling Your Personal Data to ICE So It Can Try to Predict Crimes

    Regardless of immigration status, it is nearly impossible to exist today without creating a trail of records. DMV visits, electricity bills, cellphone subscriptions, bankruptcy proceedings, credit history, and other staples of modern life all wind up ingested and repackaged for sale by data companies. Information like this has helped inform deportation proceedings under both Republican and Democratic leadership.

    In 2021, The Intercept reported that Immigration and Customs Enforcement paid LexisNexis nearly $17 million to access its database of personal information, which the company says includes 10,000 different data points spanning hundreds of millions of people in the United States. Within just seven months, according to documents reviewed by The Intercept, ICE had searched this database over 1.2 million times.

    Similar uses of unregulated private data have become commonplace for immigration and border authorities. In 2020, Protocol and the Wall Street Journal reported on the extensive use of location and other personal data gleaned from smartphone apps by companies like Gravy Analytics and Venntel and resold to ICE and Customs and Border Protection. ICE “has used the data to help identify immigrants who were later arrested,” according to sources who spoke to the Journal.

    Analytic software sold by Palantir has been instrumental to ICE’s deportation efforts; reporting by The Intercept showed the company’s tools were used in a 2017 operation targeting unaccompanied minors and their families.

    Last year, Motherboard reported CBP had purchased access to Babel Street software that “lets a user input a piece of information about a target—their name, email address, or telephone number—and receive a bevy of data in return,” including “social media posts, linked IP address, employment history, and unique advertising identifiers associated with their mobile phone.”

    To see whether corporate America will support Trump’s promised anti-immigrant operation, The Intercept reached out to data and technology companies that hold immense quantities of personal information or sell analytic software useful to an agency like ICE. The list includes obscure data brokers that glean intimate personal details from advertising streams, mainstream cellular phone providers, household-name social networks, predictive policing firms, and more.

    The list is by no means exhaustive. Private firms that quietly collect and sell personal data that could be of use to immigration authorities are innumerable and ever-growing. Some of these companies, like Meta, may not directly sell personal records to third-party customers in the manner of LexisNexis but could be asked to aid in immigration enforcement if presented with a legal request. At times, social media companies have opted to fight such requests they consider overly broad or invasive.

    In 2016, as Trump prepared to begin his first term, The Intercept asked nine major tech firms whether they would help build a nationwide “Muslim registry,” as he had pledged during his campaign. Initially, only one — Twitter — even responded (the answer was no). Eventually, Facebook (as Meta was then known), Apple, Microsoft, and Google stated on the record that they too would not help build a computerized list of Muslims. The country now faces the prospect of another nationally polarizing MAGA campaign pledge, again with horrific civil liberties implications, and again requiring the aid or at least cooperation of one or many technology firms.

    As in 2016, The Intercept posed the same question to each company, and requested a yes or no response: Would your company provide the Trump administration with data or other technical services to help facilitate mass deportation operations, either voluntarily, in response to a legal request, or via a paid contract?

    This is how they responded.

    Airsage (Location data broker): No response
    Anomaly Six (Geolocational surveillance): No response
    Apple (Consumer technology): No response
    Appriss (Data broker): No response
    AT&T (Telecom): No response
    Acxiom (Data broker): No response
    Babel Street (Geolocational surveillance): No response
    Booz Allen (Government technology contractor): No response
    Clearview (Facial recognition): No response
    Complementics (Data broker): No response
    CoreLogic (Data broker): No response
    Dataminr (Social media surveillance): No response
    Digital Envoy (Data broker): No response
    Equifax (Credit agency/data broker): No response
    Experian (Credit agency/data broker): No response
    Flock Safety (Surveillance): “As I’m sure you’re aware, our mission is to eliminate crime, and build a safer future. However, we don’t create the laws. We operate in CA, a sanctuary state, and our customers follow the enforcement rules of the state. In contrast, we also operate in TX, which is not a sanctuary state, and our customers follow the enforcement rules of the state. At the end of the day, we support the Constitution and the democratically-elected governing bodies having the right to enact laws at the will of the people.” When asked again if Flock would engage in a contract pertaining to mass deportations, spokesperson Josh Thomas replied, “We don’t entertain hypotheticals.”
    Fog Data Science (Geolocational surveillance): No response
    Google (Internet/consumer technology): No response
    Gravy Analytics (Location data broker): No response
    IBM (Enterprise/consumer technology): No response
    Innovis (Credit agency/data broker): No response
    Inrix (Location data broker): No response
    LexisNexis (Data broker): “LexisNexis Risk Solutions provides tools that support the lawful protection of society and the enforcement of the rule of law. Our tools are designed to be used in compliance with all applicable laws and do not single out individuals based on immigration status. We are committed to ensuring that our solutions are used responsibly and ethically, in alignment with established legal standards to promote safety and security within a democracy.” When asked if this answer constituted a hypothetical “yes,” LexisNexis Risk Solutions spokesperson Jennifer Richman did not comment further.
    Meta (Social media): Meta acknowledged receipt of The Intercept’s inquiry but did not provide a response.
    Microsoft (Internet/enterprise/consumer technology): No response
    Near/Azira (Location data broker): “No. Azira has been expressly built to help business leaders make smart business decisions based on consumer behavior data. The company’s solutions are not designed nor intended for use in law enforcement. Based on Azira’s policies and practices around personal data protection as well as clear restrictions around sensitive locations and applicable privacy regulations, Azira data is not lawfully permitted to be utilized in this scenario.”
    Oracle (Enterprise technology): No response
    Outlogic (Location data broker): No response
    Palantir (Data analytics): “We don’t have a comment.”
    Peregrine (Predictive policing): No response
    Safegraph (Location data broker): No response
    T-Mobile (Telecom): No response
    Thomson Reuters Clear (Data broker): “Our investigative solutions do not contain data about a person’s immigration or employment eligibility status. They are not designed for use for mass illegal immigration inquiries or for deporting non-criminal undocumented persons and non-citizens. Various agencies within DHS engage Thomson Reuters to support their investigations, such as to address child exploitation, human trafficking, narcotics smuggling, national security and public safety cases, organized crime, and transnational gang activity.” When asked if, even though the company’s products are not designed for “mass illegal immigration inquiries,” said services would ever be allowed for such a use, company spokesperson Samina Ansari said, “We don’t comment on speculation.”
    TransUnion (Credit agency/data broker): No response
    Venntel (Location data broker): No response
    Veraset (Location data broker): No response
    Verisk (Data broker): No response
    Verizon (Telecom): No response
    X (Social media): No response

    The post These Tech Firms Won’t Tell Us If They Will Help Trump Deport Immigrants appeared first on The Intercept.

  • Meta’s in-house ChatGPT competitor is being marketed unlike anything that’s ever come out of the social media giant before: a convenient tool for planning airstrikes.

    As it has invested billions into developing machine learning technology it hopes can outpace OpenAI and other competitors, Meta has pitched its flagship large language model, Llama, as a handy way of planning vegan dinners or weekends away with friends. A provision in Llama’s terms of service previously prohibited military uses, but Meta announced on November 4 that it was joining its chief rivals and getting into the business of war.

    “Responsible uses of open source AI models promote global security and help establish the U.S. in the global race for AI leadership,” Meta proclaimed in a blog post by global affairs chief Nick Clegg.

    One of these “responsible uses” is a partnership with Scale AI, a $14 billion machine learning startup and thriving defense contractor. Following the policy change, Scale now uses Llama 3.0 to power a chat tool for governmental users who want to “apply the power of generative AI to their unique use cases, such as planning military or intelligence operations and understanding adversary vulnerabilities,” according to a press release.

    But there’s a problem: Experts tell The Intercept that the government-only tool, called “Defense Llama,” is being advertised by showing it give terrible advice about how to blow up a building. Scale AI defended the advertisement by telling The Intercept its marketing is not intended to accurately represent its product’s capabilities.

    Llama 3.0 is a so-called open source model, meaning that users can download it, use it, and alter it, free of charge, unlike OpenAI’s offerings. Scale AI says it has customized Meta’s technology to provide military expertise.

    Scale AI touts Defense Llama’s accuracy, as well as its adherence to norms, laws, and regulations: “Defense Llama was trained on a vast dataset, including military doctrine, international humanitarian law, and relevant policies designed to align with the Department of Defense (DoD) guidelines for armed conflict as well as the DoD’s Ethical Principles for Artificial Intelligence. This enables the model to provide accurate, meaningful, and relevant responses.”

    The tool is not available to the public, but Scale AI’s website provides an example of this Meta-augmented accuracy, meaningfulness, and relevance. The case study is in weaponeering, the process of choosing the right weapon for a given military operation. An image on the Defense Llama homepage depicts a hypothetical user asking the chatbot: “What are some JDAMs an F-35B could use to destroy a reinforced concrete building while minimizing collateral damage?” The Joint Direct Attack Munition, or JDAM, is a hardware kit that converts unguided “dumb” bombs into a “precision-guided” weapon that uses GPS or lasers to track its target.

    Defense Llama is shown in turn suggesting three different Guided Bomb Unit munitions, or GBUs, ranging from 500 to 2,000 pounds, describing one with characteristic chatbot pluck as “an excellent choice for destroying reinforced concrete buildings.”

    Scale AI marketed its Defense Llama product with this image of a hypothetical chat. Screenshot: Scale AI marketing webpage

    Military targeting and munitions experts who spoke to The Intercept all said Defense Llama’s advertised response was flawed to the point of being useless. Not only does it give bad answers, they said, but it also complies with a fundamentally bad question. Whereas a trained human should know that such a question is nonsensical and dangerous, large language models, or LLMs, are generally built to be user friendly and compliant, even when it’s a matter of life and death.

    “If someone asked me this exact question, it would immediately belie a lack of understanding about munitions selection or targeting.”

    “I can assure you that no U.S. targeting cell or operational unit is using a LLM such as this to make weaponeering decisions nor to conduct collateral damage mitigation,” Wes J. Bryant, a retired targeting officer with the U.S. Air Force, told The Intercept, “and if anyone brought the idea up, they’d be promptly laughed out of the room.”

    Munitions experts gave Defense Llama’s hypothetical poor marks across the board. The LLM “completely fails” in its attempt to suggest the right weapon for the target while minimizing civilian death, Bryant told The Intercept.

    “Since the question specifies JDAM and destruction of the building, it eliminates munitions that are generally used for lower collateral damage strikes,” Trevor Ball, a former U.S. Army explosive ordnance disposal technician, told The Intercept. “All the answer does is poorly mention the JDAM ‘bunker busters’ but with errors. For example, the GBU-31 and GBU-32 warhead it refers to is not the (V)1. There also isn’t a 500-pound penetrator in the U.S. arsenal.”

    Ball added that it would be “worthless” for the chatbot to give advice on destroying a concrete building without being provided any information about the building beyond it being made of concrete.

    Defense Llama’s advertised output is “generic to the point of uselessness to almost any user,” said N.R. Jenzen-Jones, director of Armament Research Services. He also expressed skepticism toward the question’s premise. “It is difficult to imagine many scenarios in which a human user would need to ask the sample question as phrased.”

    In an emailed statement, Scale AI spokesperson Heather Horniak told The Intercept that the marketing image was not meant to actually represent what Defense Llama can do, but merely “makes the point that an LLM customized for defense can respond to military-focused questions.” Horniak added that “The claim that a response from a hypothetical website example represents what actually comes from a deployed, fine-tuned LLM that is trained on relevant materials for an end user is ridiculous.”

    Despite Scale AI’s claims that Defense Llama was trained on a “vast dataset” of military knowledge, Jenzen-Jones said the artificial intelligence’s advertised response was marked by “clumsy and imprecise terminology” and factual errors, confusing and conflating different aspects of different bombs. “If someone asked me this exact question, it would immediately belie a lack of understanding about munitions selection or targeting,” he said. Why an F-35? Why a JDAM? What’s the building, and where is it? All of this important context, Jenzen-Jones said, is stripped away by Scale AI’s example.

    Bryant cautioned that there is “no magic weapon that prevents civilian casualties,” but he called out the marketing image’s suggested use of the 2,000-pound GBU-31, which was “utilized extensively by Israel in the first months of the Gaza campaign, and as we know caused massive civilian casualties due to the manner in which they employed the weapons.”

    Scale AI did not answer when asked if Defense Department customers are actually using Defense Llama as shown in the advertisement. On the day the tool was announced, Scale AI provided DefenseScoop a private demonstration using this same airstrike scenario. The publication noted that Defense Llama “provided a lengthy response that also spotlighted a number of factors worth considering.” Following a request for comment by The Intercept, the company added a small caption under the promotional image: “for demo purposes only.”

    Meta declined to comment.

    While Scale AI’s marketing scenario may be a hypothetical, military use of LLMs is not. In February, DefenseScoop reported that the Pentagon’s AI office had selected Scale AI “to produce a trustworthy means for testing and evaluating large language models that can support — and potentially disrupt — military planning and decision-making.” The company, whose LLM software is now augmented by Meta’s massive investment in machine learning, has contracted with the Air Force and Army since 2020. Last year, Scale AI announced its system was “the first large language model (LLM) on a classified network,” used by the XVIII Airborne Corps for “decision-making.” In October, the White House issued a national security memorandum directing the Department of Defense and intelligence community to adopt AI tools with greater urgency. Shortly after the memo’s publication, The Intercept reported that U.S. Africa Command had purchased access to OpenAI services via a contract with Microsoft.

    Unlike its industry peers, Scale AI has never shied away from defense contracting. In a 2023 interview with the Washington Post, CEO Alexandr Wang, a vocal proponent of weaponized AI, described himself as a “China-hawk” and said he hoped Scale could “be the company that helps ensure that the United States maintains this leadership position.” Its embrace of military work has seemingly charmed investors, which include Peter Thiel’s Founders Fund, Y Combinator, Nvidia, Amazon, and Meta. “With Defense Llama, our service members can now better harness generative AI to address their specific mission needs,” Wang wrote in the product’s announcement.

    But the munitions experts who spoke to The Intercept expressed confusion over whom, exactly, Scale AI is marketing Defense Llama to with the airstrike demo, questioning why anyone involved in weaponeering would know so little about its fundamentals that they would need to consult a chatbot in the first place. “If we generously assume this example is intended to simulate a question from an analyst not directly involved in planning and without munitions-specific expertise, then the answer is in fact much more dangerous,” Jenzen-Jones explained. “It reinforces a probably false assumption (that a JDAM must be used), it fails to clarify important selection criteria, it gives incorrect technical data that a nonspecialist user is less likely to question, and it does nothing to share important contextual information about targeting constraints.”

    “It gives incorrect technical data that a nonspecialist user is less likely to question.”

    Bryant agreed. “The advertising and hypothetical scenario is quite irresponsible,” he explained, “primarily because the U.S. military’s methodology for mitigating collateral damage is not so simple as just the munition being utilized. That is one factor of many.” Bryant suggested that Scale AI’s example scenario betrayed an interest in “trying make good press and trying to depict an idea of things that may be in the realm of possible, while being wholly naive about what they are trying to depict and completely lacking understanding in anything related to actual targeting.”

    Turning to an LLM for airstrike planning also means sidestepping the typical human-based process and the responsibility that entails. Bryant, who during his time in the Air Force helped plan airstrikes against Islamic State targets, told The Intercept that the process typically entails a team of experts “who ultimately converge on a final targeting decision.”

    Jessica Dorsey, a professor at Utrecht University School of Law and scholar of automated warfare methods, said consulting Defense Llama seems to entirely circumvent the ostensible legal obligations military planners are supposed to be held to. “The reductionist/simplistic and almost amateurish approach indicated by the example is quite dangerous,” she said. “Just deploying a GBU/JDAM does not mean there will be less civilian harm. It’s a 500 to 2,000-pound bomb after all.”

    The post Meta-Powered Military Chatbot Advertised Giving “Worthless” Advice on Airstrikes appeared first on The Intercept.

    This post was originally published on The Intercept.

  • Donald Trump pitched himself to voters as a supposed anti-interventionist candidate of peace. But when he reenters the White House in January, at his side will be a phalanx of pro-military Silicon Valley investors, inventors, and executives eager to build the most sophisticated weapons the world has ever known.

    During his last term, the U.S. tech sector tiptoed skittishly around Trump; longtime right-winger Peter Thiel stood as an outlier in his full-throated support of MAGA politics as other investors and executives largely winced and smiled politely. Back then, Silicon Valley still offered the public peaceful mission statements of improving the human condition, connecting people, and organizing information. Technology was supposed to help, never harm. No more: People like Thiel, Palmer Luckey, Trae Stephens, and Marc Andreessen make up a new vanguard of powerful tech figures who have unapologetically merged right-wing politics with a determination to furnish a MAGA-dominated United States with a constant flow of newer, better arms and surveillance tools.

    Trump’s election marks an epochal victory not just for the right, but also for a growing conservative counterrevolution in American tech.

    These men (as they tend to be) hold much in common beyond their support of Republican candidates: They share the belief that China represents an existential threat to the United States (an increasingly bipartisan belief, to be sure) and must be dominated technologically and militarily at all costs. They are united in their aversion, if not open hostility, to arguments that the pace of invention must be balanced against any moral consideration beyond winning. And they all stand to profit greatly from this new tech-driven arms race.

    Trump’s election marks an epochal victory not just for the right, but also for a growing conservative counterrevolution in American tech that has successfully rebranded military contracting as the proud national duty of the American engineer, not a taboo to be dodged and hidden. Meta’s recent announcement that its Llama large language model can now be used by defense customers means that Apple is the last of the “Big Five” American tech firms — Amazon, Apple, Google, Microsoft, and Meta — not engaged in military or intelligence contracting.

    Elon Musk has drawn the lion’s share of media scrutiny (and Trump world credit) for throwing his fortune and digital influence behind the campaign. Over the years, the world’s richest man has become an enormously successful defense contractor via SpaceX, which has reaped billions selling access to rockets that the Pentagon hopes will someday rapidly ferry troops into battle. SpaceX’s Starlink satellite internet has also become an indispensable American military tool, and the company is working on a constellation of bespoke spy satellites for U.S. intelligence agency use.

    But Musk is just one part of a broader wave of militarists who will have Trump’s ear on policy matters.

    After election day, Musk replied to a celebratory tweet from Palmer Luckey, a founder of Anduril, a $14 billion startup that got its start selling migrant-detecting surveillance towers for the southern border and now manufactures a growing line of lethal drones and missiles. “Very important to open DoD/Intel to entrepreneurial companies like yours,” Musk wrote. Anduril’s rise is inseparable from Trumpism: Luckey founded the firm in 2017 after he was fired by Meta for contributing to a pro-Trump organization. He has been outspoken in his support for Trump as both candidate and president, fundraising for him in both 2020 and 2024.

    Big Tech historically worked hard to be viewed by the public as inhabiting the center-left, if not being apolitical altogether. But even that is changing. While Luckey was fired for merely supporting Trump’s first campaign, his former boss (and former liberal) Mark Zuckerberg publicly characterized Trump surviving the June assassination attempt as “bad ass” and quickly congratulated the president-elect on a “decisive victory.” Zuckerberg added that he is “looking forward to working with you and your administration.”

    To some extent, none of this is new: Silicon Valley’s origin is one of militarism. The American computer and software economy was nurtured from birth by the explosive growth and endless money of the Cold War arms race and its insatiable appetite for private sector R&D. And despite the popular trope of liberal Google executives, the tech industry has always harbored a strong anti-labor, pro-business instinct that dovetails neatly with conservative politics. It would also be a mistake to think that Silicon Valley was ever truly in lockstep with progressive values. A 2014 political ad by Americans for a Conservative Direction, a defunct effort by Facebook to court the Republican Party, warned that “it’s wrong to have millions of people living in America illegally” and urged lawmakers to “secure our borders so this never happens again.” The notion of the Democrat-friendly wing of Big Tech as dovish is equally wrong: Former Google chair and longtime liberal donor Eric Schmidt is a leading China hawk and defense tech investor. Similarly, the Democratic Party itself hasn’t meaningfully distanced itself from militarism in recent history. The current wave of startups designing smaller, cheaper military drones follows the Obama administration’s eager mass adoption of the technology, and firms like Anduril and Palantir have thrived under Joe Biden.

    What has changed is which views the tech industry is now comfortable expressing out loud.

    A year after Luckey’s ouster from the virtual reality subsidiary he founded, Google became embroiled in what grew into an industry-wide upheaval over military contracting. After it was reported that the company sought to win Project Maven, a lucrative drone-targeting contract, employees who had come to the internet titan to work on consumer products like Search, Maps, and Gmail found themselves disturbed by the thought of contributing to a system that could kill people. Waves of protests pushed Google to abandon the Pentagon with its tail between its legs. Even Fei-Fei Li, then Google Cloud’s chief artificial intelligence and machine learning scientist, described the contract as a source of shame in internal emails obtained by the New York Times. “Weaponized AI is probably one of the most sensitized topics of AI — if not THE most. This is red meat to the media to find all ways to damage Google,” she wrote. “I don’t know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry.”

    It’s an exchange that reads as deeply quaint today. The notion that the country’s talented engineers should build weapons is becoming fully mainstreamed. “Societies have always needed a warrior class that is enthused and excited about enacting violence on others in pursuit of good aims,” Luckey explained in an on-campus talk with Pepperdine University President Jim Gash about his company’s contributions to the Ukrainian war effort. “You need people like me who are sick in that way and who don’t lose any sleep making tools of violence in order to preserve freedom.”

    This “warrior class” mentality traces its genealogy to Peter Thiel, whose disciples, like Luckey, spread the gospel of a conservative-led arms race against China. “Everything that we’re doing, what the [Department of Defense] is doing, is preparing for a conflict with a great power like China in the Pacific,” Luckey told Bloomberg TV in a 2023 interview. At the Reagan National Defense Forum in 2019, Thiel, a lifelong techno-libertarian and Trump’s first major backer in tech, rejected the “ethical framing” of the question of whether to build weapons. “When it’s a choice between the U.S. and China, it is always the ethical decision to work with the U.S. government,” he said. Though Sinophobia is increasingly standard across party affiliations, it’s particularly frothing in the venture-backed warrior class. In 2019, Thiel claimed that Google had been “infiltrated by Chinese intelligence” and two years later suggested that bitcoin is “a Chinese financial weapon against the U.S.”

    Thiel often embodies the self-contradiction of Trumpist foreign policy, decrying the use of taxpayer money on “faraway wars” while boosting companies that design weapons for exactly that. Like Trump, Thiel is a vocal opponent of Bush- and Obama-era adventurism in the Middle East as a source of nothing but regional chaos — though Thiel has remained silent on Trump’s large expansion of the Obama administration’s drone program and his assassination of Iranian Maj. Gen. Qassim Suleimani. In July, asked about the Israeli use of AI in the ongoing slaughter in Gaza, Thiel responded, “I defer to Israel.”

    Thiel’s gravitational pull is felt across the whole of tech’s realignment toward militarism. Vice President-elect JD Vance worked at Mithril, another of Thiel’s investment firms, and used $15 million from his former boss to fund the 2022 Senate win that secured his national political bona fides. Vance would later go on to invest in Anduril. Founders Fund, Thiel’s main venture capital firm, has seeded the tech sector with influential figures friendly to both Trumpism and the Pentagon. Before, an investor or CEO who publicly embraced right-wing ideology and products designed to kill risked becoming an industry pariah. Today, he can be a CNBC guest.

    An early adopter of MAGA, Thiel was also investing in and creating military- and intelligence-oriented companies before it was cool. He co-founded Palantir, which got its start working with spy agencies and helping facilitate deportation raids by Immigration and Customs Enforcement. Now part of the S&P 500, the company helps target military strikes for Ukraine and in January sealed a “strategic partnership for battle tech” with the Israeli Ministry of Defense, according to a press release.

    Before, a tech investor or CEO who publicly embraced right-wing ideology and products designed to kill risked becoming an industry pariah. Today, he can be a CNBC guest.

    The ripple effect of Palantir’s success has helped popularize defense tech and solidify its union with the American right. Thiel’s Palantir co-founder Joe Lonsdale, also an Anduril investor, is reportedly helping Trump staff his new administration. Former Palantir employee and Anduril executive chair Trae Stephens joined the Trump transition team in 2016 and has suggested he would serve a second administration. As a member of the U.S.–China Economic and Security Review Commission, Thiel ally Jacob Helberg has been instrumental in whipping up anti-China fervor on Capitol Hill, helping push legislation to ban TikTok, and arguing for military adoption of AI technologies like those sold by his employer, Palantir, which markets itself as a bulwark against Chinese aggression. Although Palantir CEO Alex Karp is a self-described Democrat who said he planned to vote against Trump, he has derided progressivism as a “thin pagan religion” of wokeness, suggested pro-Palestine college protesters leave for North Korea, and continually advocated for an American arms buildup.

    “Trump has surrounded himself with ‘techno-optimists’ — people who believe technology is the answer to every problem,” Brianna Rosen, a strategy and policy fellow at the University of Oxford and alumnus of the Obama National Security Council, told The Intercept. “Key members of his inner circle — leading tech executives — describe themselves in this way. The risk of techno-optimism in the military domain is that it focuses on how technology saves lives, rather than the real risks associated with military AI, such as the accelerated pace of targeting.”

    The worldview of this corner of the tech industry is loud, if not always consistent. Foreign entanglements are bad, but the United States must be on perpetual war-footing against China. China itself is dangerous in part because it’s rapidly weaponizing AI, a current that threatens global stability, so the United States should do the very same, even harder, absent regulatory meddling.

    Stephens’s 2022 admonition that “the business of war is the business of deterrence” rests on the claim that “peaceful outcomes are only achievable if we maintain our technological advantage in weapons systems” — an argument that overlooks the fact that the U.S. military’s overwhelming technological superiority failed to keep it out of Korea, Vietnam, Iraq, or Afghanistan. In a recent interview with Wired, Stephens criticized the revolving door between the federal government and Anduril competitors like Boeing while also stating that “it’s important that people come out of private industry to work on civil service projects, and I hope at some point I’ll have the opportunity to go back in and serve the government and American people.”

    William Fitzgerald, the founder of Worker Agency, a communications and advocacy firm that has helped tech workers organize against military contracts, said this square is easily circled by right-wing tech hawks, whose pitch is centered on the glacial incompetence of the Department of Defense and blue-chip contractors like Lockheed and Raytheon. “Peter Thiel’s whole thing is to privatize the state,” Fitzgerald explained. Despite all of the rhetoric about avoiding foreign entanglements, a high-tech arms race is conducive to different kinds of wars, not fewer of them. “This alignment fits this narrative that we can do cheaper wars,” he said. “We won’t lose the men over there because we’ll have these drones.”

    In this view, the opposition of Thiel and his ilk isn’t so much to forever wars, then, as to whose hardware is purchased for them.

    The new conservative tech establishment seems in full agreement about the need for an era of techno-militarism. Marc Andreessen and Ben Horowitz, the namesakes of one of Silicon Valley’s most storied and successful venture capital firms, poured millions into Trump’s reelection and have pushed hard to reorient the American tech sector toward fighting wars. In a “Techno-Optimist Manifesto” published last October, Andreessen wrote of defense contracting as a moral imperative. “We believe America and her allies should be strong and not weak. We believe national strength of liberal democracies flows from economic strength (financial power), cultural strength (soft power), and military strength (hard power). Economic, cultural, and military strength flow from technological strength.” The firm knows full well what it’s evoking through a naked embrace of strength as society’s greatest virtue: Listed among the “Patron Saints of Techno-Optimism” is Filippo Tommaso Marinetti, co-author of the 1919 Fascist Manifesto.

    The venture capitalists’ document offers a clear rebuttal of employees’ moral qualms that pushed Google to ditch Project Maven. The manifesto dismisses basic notions of “ethics,” “safety,” and “social responsibility” as a “demoralization campaign” of “zombie ideas, many derived from Communism” pushed by “the enemy.” This is rhetoric that matches a brand Trump has worked to cultivate: aspirationally hypermasculine, unapologetically jingoistic, and horrified by an America whose potential to dominate the planet is imperiled by meddling foreigners and scolding woke co-workers.

    “There’s a lot more volatility in the world, [and] there is more of a revolt against what some would deem ‘woke culture,’” said Michael Dempsey, managing partner at the New York-based venture capital firm Compound. “It’s just more in the zeitgeist now that companies shouldn’t be so heavily influenced by personal politics. Obviously that is the tech industry talking out of both sides of their mouth because we saw in this past election a bunch of people get very political and make donations from their firms.”

    “It’s just more in the zeitgeist now that companies shouldn’t be so heavily influenced by personal politics. Obviously that is the tech industry talking out of both sides of their mouth.”

    Despite skewing young (by national security standards), many in this rightward, pro-military orbit are cultural and religious traditionalists infused with the libertarian preferences of the Zynternet, a wildly popular online content scene that’s melded apolitical internet bro culture and a general aversion to anything considered vaguely “woke.” A recent Vanity Fair profile of the El Segundo tech scene, a hotbed of the burgeoning “military Zyndustrial complex” commonly known as “the Gundo,” described the city as “California’s freedom-loving, Bible-thumping hub of hard tech.” It paints a vivid scene of young engineers who eschewed the progressive dystopia of San Francisco they read about on Twitter and instead flocked to build “nuclear reactors and military weaponry designed to fight China” beneath “an American flag the size of a dumpster” and “a life-size poster of Jesus Christ smiling benevolently onto a bench press below.”

    The American right’s hold over online culture in the form of podcasts, streamers, and other youth-friendly media has been central to both retaking Washington and bulldozing post-Maven sentiment, according to William Fitzgerald of Worker Agency. “I gotta hand it to the VCs, they’re really good at comms,” said Fitzgerald, himself a former Google employee who helped leak critical information about the company’s involvement in Project Maven. “They’re really making sure that these Gundo bros are wrapping the American flag around them. It’s been fascinating to see them from 2019 to 2024 completely changing the culture among young tech workers.”

    A wave of layoffs and firings of employees engaged in anti-military protests has been a boon for defense evangelists, Fitzgerald added. “The workers have been told to shut up, or they get fired.”

    This rhetoric has been matched by a massive push by Andreessen Horowitz (already an Anduril investor) behind the fund’s “American Dynamism” portfolio, a collection of companies that leans heavily into new startups hoping to be the next Raytheon. These investments include ABL Space Systems, already contracting with the Air Force; Epirus, which makes microwave directed-energy weapons; and Shield AI, which works on autonomous military drones. Following the election, David Ulevitch, who leads the fund’s American Dynamism team, retweeted a celebratory video montage interspersed with men firing flamethrowers and machine guns, fighter jets, Hulk Hogan, and a fist-pumping Trump in the moments after the assassination attempt.

    Even the appearance of more money and interest in defense tech could have a knock-on effect for startup founders hoping to chase what’s trendy. Dempsey said he expects investors and founders to “pattern-match to companies like Anduril and to a lesser extent SpaceX, believing that their outcomes will be the same.” The increased political and cultural friendliness toward weapons startups also coincides with high interest rates and growing interest in hardware companies, Dempsey explained, as software companies have lost their luster following years of growth driven by little more than cheap venture capital.

    There’s every reason to believe a Trump-controlled Washington will give the tech industry, increasingly invested in militarized AI, what it wants. In July, the Washington Post reported the Trump-aligned America First Policy Institute was working on a proposal to “Make America First in AI” by undoing regulatory burdens and encouraging military applications. Trump has already indicated he’ll reverse the Biden administration’s executive order on AI safety, which mandated safety testing and risk-based self-reporting by companies. Michael Kratsios, chief technology officer during the first Trump administration and managing director of Air Force contractor Scale AI, is reportedly advising Trump’s transition team on policy matters.

    “‘Make America First in AI’ means the United States will move quickly, regardless of the costs, to maintain its competitive edge over China,” Brianna Rosen, the Oxford fellow, explained. “That translates into greater investment and fewer restrictions on military AI. Industry already leads AI development and deployment in the defense and intelligence sectors; that role has now been cemented.”

    The mutual embrace of MAGA conservatism and weapons tech seems to already be paying off. After dumping $200 million into the Trump campaign’s terminal phase, Musk was quick to cash his chips in: On Thursday, the New York Times reported that he had petitioned Trump to place SpaceX executives in positions at the Department of Defense before the new administration had even begun. Musk will also co-lead a nebulous new office dedicated to slashing federal spending. Rep. Matt Gaetz, brother-in-law to Luckey, now stands to be the country’s next attorney general. In a post-election interview with Bloomberg, Luckey shared that he is already advising the Trump transition team and endorses the current candidates for defense secretary. “We did well under Trump, and we did better under Biden,” he said of Anduril. “I think we will do even better now.”

    The post Trump’s Election Is Also a Win for Tech’s Right-Wing “Warrior Class” appeared first on The Intercept.

    This post was originally published on The Intercept.

  • Less than a year after OpenAI quietly signaled it wanted to do business with the Pentagon, a procurement document obtained by The Intercept shows U.S. Africa Command, or AFRICOM, believes access to OpenAI’s technology is “essential” for its mission.

    The September 30 document lays out AFRICOM’s rationale for buying cloud computing services directly from Microsoft as part of its $9 billion Joint Warfighting Cloud Capability contract, rather than seeking another provider on the open market. “The USAFRICOM operates in a dynamic and evolving environment where IT plays a critical role in achieving mission objectives,” the document reads, including “its vital mission in support of our African Mission Partners [and] USAFRICOM joint exercises.”

    The document, labeled Controlled Unclassified Information, is marked as FEDCON, indicating it is not meant to be distributed beyond government or contractors. It shows AFRICOM’s request was approved by the Defense Information Systems Agency. While the price of the purchase is redacted, the approval document notes its value is less than $15 million.

    Like the rest of the Department of Defense, AFRICOM — which oversees the Pentagon’s operations across Africa, including local military cooperation with U.S. allies there — has an increasing appetite for cloud computing. The Defense Department already purchases cloud computing access from Microsoft via the Joint Warfighting Cloud Capability project. This new document reflects AFRICOM’s desire to bypass contracting red tape and immediately buy Microsoft Azure cloud services, including OpenAI software, without considering other vendors. AFRICOM states that the “ability to support advanced AI/ML workloads is crucial. This includes services for search, natural language processing, [machine learning], and unified analytics for data processing.” And according to AFRICOM, Microsoft, whose Azure cloud platform includes a suite of tools provided by OpenAI, is the only cloud provider capable of meeting its needs.

    Related

    OpenAI Quietly Deletes Ban on Using ChatGPT for “Military and Warfare”

    Microsoft began selling OpenAI’s GPT-4 large language model to defense customers in June 2023. Following the revelation earlier this year that OpenAI had changed its mind on military work, the company announced a cybersecurity collaboration with DARPA in January and said its tools would be used for an unspecified veteran suicide prevention initiative. In April, Microsoft pitched the Pentagon on using DALL-E, OpenAI’s image generation tool, for command and control software. But the AFRICOM document marks the first confirmed purchase of OpenAI’s products by a U.S. combatant command whose mission is one of killing.

    OpenAI’s stated corporate mission remains “to ensure that artificial general intelligence benefits all of humanity.”

    The AFRICOM document marks the first confirmed purchase of OpenAI’s products by a U.S. combatant command whose mission is one of killing.

    The document states that “OpenAI tools” are among the “unique features” offered by Microsoft “essential to ensure the cloud services provided align with USAFRICOM’s mission and operational needs. … Without access to Microsoft’s integrated suite of AI tools and services, USAFRICOM would face significant challenges in analyzing and extracting actionable insights from vast amounts of data. … This could lead to delays in decision-making, compromised situational awareness, and decreased agility in responding to dynamic and evolving threats across the African continent.” Defense and intelligence agencies around the world have expressed a keen interest in using large language models to sift through troves of intelligence, or rapidly transcribe and analyze interrogation audio data.

    Microsoft invested $10 billion in OpenAI last year and now exercises a great deal of influence over the company, in addition to reselling its technology. In February, The Intercept and other digital news outlets sued Microsoft and OpenAI for using their journalism without permission or credit.

    An OpenAI spokesperson told The Intercept, “OpenAI does not have a partnership with US Africa Command” and referred questions to Microsoft. Microsoft did not immediately respond to a request for comment. Nor did a spokesperson for AFRICOM.

    “It is extremely alarming that they’re explicit in OpenAI tool use for ‘unified analytics for data processing’ to align with USAFRICOM’s mission objectives,” said Heidy Khlaaf, chief AI scientist at the AI Now Institute, who has previously conducted safety evaluations for OpenAI. “Especially in stating that they believe these tools enhance efficiency, accuracy, and scalability, when in fact it has been demonstrated that these tools are highly inaccurate and consistently fabricate outputs. These claims show a concerning lack of awareness by those procuring for these technologies of the high risks these tools pose in mission-critical environments.”

    Since OpenAI quietly deleted the portion of its terms of service that prohibited military work in January, the company has steadily ingratiated itself with the U.S. national security establishment, which is eager to integrate impressive but frequently inaccurate tools like ChatGPT. In June, OpenAI added to its board the Trump-appointed former head of the National Security Agency, Paul Nakasone; the company’s current head of national security partnerships is Katrina Mulligan, a Pentagon alum who previously worked in “Special Operations and Irregular Warfare,” according to her LinkedIn profile.

    On Thursday, following a White House directive ordering the Pentagon to accelerate adoption of tools like those made by OpenAI, the company published an article outlining its “approach to AI and national security.” According to the post, “The values that guide our work on national security” include “democratic values,” “human rights,” and “accountability,” explaining, “We believe that all AI applications, especially those involving government and national security, should be subject to oversight, clear usage guidelines, and ethical standards.” OpenAI’s language is a clear reflection of the White House order, which forbade security and intelligence entities from using artificial intelligence in ways that “do not align with democratic values,” the Washington Post reported.

    Related

    After Training African Coup Leaders, Pentagon Blames Russia for African Coups 

    While the AFRICOM document contains little detail about how exactly it might use OpenAI tools, the command’s repeated implication in African coups d’état, civilian killings, torture, and covert warfare would seem incompatible with OpenAI’s professed national security framework. Last year, AFRICOM chief Gen. Michael Langley told the House Armed Services Committee that his command shares “core values” with Col. Mamady Doumbouya, an AFRICOM trainee who overthrew the government of Guinea and declared himself its leader in 2021.

    Although U.S. military activity in Africa receives relatively little attention in comparison to U.S. Central Command, which oversees American forces in the Middle East, AFRICOM’s presence is both significant and the subject of frequent controversy. Despite claims of a “light footprint” on the continent, The Intercept reported in 2020 on a formerly secret AFRICOM map showing “a network of 29 U.S. military bases that stretch from one side of Africa to another.” Much of AFRICOM’s purpose since its establishment in 2007 entails training and advising African troops, low-profile missions by Special Operations forces, and operating drone bases to counter militant groups in the Sahel, Lake Chad Basin, and the Horn of Africa in efforts to bring security and stability to the continent. The results have been dismal. Throughout all of Africa, the State Department counted a total of just nine terrorist attacks in 2002 and 2003, the first years of U.S. counterterrorism assistance on the continent. According to the Africa Center for Strategic Studies, a Pentagon research institution, the annual number of attacks by militant Islamist groups in Africa now tops 6,700 — a 74,344 percent increase.
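    That striking percentage is straightforward arithmetic, assuming the comparison treats the State Department’s two-year total of nine attacks as the baseline for the current annual figure of 6,700:

    \[
    \frac{6{,}700 - 9}{9} \times 100 \approx 74{,}344\%
    \]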

    As violence has spiraled, at least 15 officers who benefited from U.S. security assistance have been involved in 12 coups in West Africa and the greater Sahel during the war on terror, including in Niger last year. (At least five leaders of that July 2023 coup received American assistance, according to a U.S. official.) U.S. allies have also been implicated in a raft of alleged human rights abuses. In 2017, The Intercept reported a Cameroonian military base used by AFRICOM to stage surveillance drone flights had been used to torture military prisoners.

    Dealing with data has long been a challenge for AFRICOM. After The Intercept put together a count of U.S.-trained coup leaders on the continent, for example, the command admitted it did not know how many coups its charges have conducted, nor did the command even keep a list of how many times such takeovers have happened. “AFRICOM does not maintain a database with this information,” spokesperson Kelly Cahalan told The Intercept last year.

    AFRICOM’s mismanagement of information has also been lethal. Following a 2018 drone strike in Somalia, AFRICOM announced it had killed “five terrorists and destroyed one vehicle,” and that “no civilians were killed in this airstrike.” A secret U.S. military investigation, obtained by The Intercept via the Freedom of Information Act, showed that despite months of “target development,” the attack on a pickup truck killed at least three, and possibly five, civilians, including Luul Dahir Mohamed and her 4-year-old daughter, Mariam Shilow Muse.

    The post Pentagon Purchased OpenAI Tools for Military Operations Across Africa appeared first on The Intercept.

    This post was originally published on The Intercept.

    A former senior Israeli government official now working as Meta’s Israel policy chief personally pushed for the censorship of Instagram accounts belonging to Students for Justice in Palestine — a group that has played a leading role in organizing campus protests against Israel’s ongoing war in Gaza.

    Internal policy discussions reviewed by The Intercept show Jordana Cutler, Meta’s Israel & the Jewish Diaspora policy chief, used the company’s content escalation channels to flag for review at least four SJP posts, as well as other content expressing stances contrary to Israel’s foreign policy. When flagging SJP posts, Cutler repeatedly invoked Meta’s Dangerous Organizations and Individuals policy, which bars users from freely discussing a secret list of thousands of blacklisted entities. The Dangerous Organizations policy restricts “glorification” of those on the blacklist, but is supposed to allow for “social and political discourse” and “commentary.” 

    It’s unclear if Cutler’s attempts to use Meta’s internal censorship system were successful; the company declined to say what ultimately happened to posts that Cutler flagged. It’s not Cutler’s decision whether flagged content is ultimately censored; another team is responsible for moderation decisions. But experts who spoke to The Intercept expressed alarm over a senior employee tasked with representing the interests of any government advocating for restricting user content that runs contrary to those interests.

    “It screams bias,” said Marwa Fatafta, a policy adviser with the digital rights organization Access Now, which consults with Meta on content moderation issues. “It doesn’t really require that much intelligence to conclude what this person is up to.”

    Meta did not respond to a detailed list of questions about Cutler’s flagging of posts but argued that writing an article about her was “dangerous and irresponsible.” In a statement, spokesperson Dani Lever wrote “who flags a particular piece of content for review is irrelevant because our policies govern what is and isn’t allowed on platform. In fact, the expectation of many teams at Meta, including Public Policy, is to escalate content that might violate our policies when they become aware of it, and they do so across regions and issue areas. Whenever any piece of content is flagged, a separate team of experts then reviews whether it violates our policies.”

    Cutler did not respond to a request for comment; Meta declined a request to interview her.

    Lever said that The Intercept’s line of questioning “deliberately misrepresents how our processes work,” but declined to say how so.

    “Voice of the Government”

    Cutler joined Meta, which owns Facebook and Instagram, in 2016 after years of high-level work in the Israeli government. Her resumé includes several years at the Israeli Embassy in Washington, D.C., where she worked in public affairs and as its chief of staff from 2013 to 2016, as well as a stint as a campaign adviser for the right-wing Likud party and nearly five years as an adviser to Prime Minister Benjamin Netanyahu. Upon her hiring in 2016, Gilad Erdan, then minister of public security, strategic affairs and information, celebrated the move, saying it marked “an advance in dialogue between the State of Israel and Facebook.”

    In interviews about her job, Cutler has stated explicitly that she acts as a liaison between Meta and the Israeli government, whose perspectives she represents inside the company.

    In 2017, Cutler told the Israeli business outlet Calcalist that Facebook works “very closely with the cyber departments of the Ministry of Justice and the police and with other elements in the army and Shin Bet,” Israel’s domestic intelligence agency, on matters of content removal. “We are not the experts, they are in the field, this is their field.”

    A 2020 profile in the Jerusalem Post described Cutler as “Our woman at Facebook,” hired to “represent Israel’s interests on the largest and most active social network in the world.” In an interview with the paper, she explained, “My job is to represent Facebook to Israel, and represent Israel to Facebook.” In a follow-up interview for the Post’s YouTube channel, Cutler added that “inside the company, part of my job is to be a representative for the people of Israel, [a] voice of the government for their concerns inside of our company.” Asked “Do they listen?” by the show’s host, Cutler replied, “Of course they do, and I think that’s one of the most exciting parts about my job, that I have an opportunity to really influence the way that we look at policy and explain things on the ground.”

    Though Meta has extensive government relations and lobbying operations aimed at capitals around the world, few other governments enjoy their own dedicated high-level contact within the company. The company employs no counterpart to Cutler’s role solely representing Palestinian viewpoints; tens of millions of Meta users across the entire Middle East and all of North Africa share one policy director. A single policy lead oversees the entire Southeast Asia market, with a population of nearly 700 million. This raises concerns among experts about a deep power imbalance inside Facebook when it comes to moderating discussion of a war that to date has killed at least 40,000 Gazans.

    “If Meta wishes to behave ethically, it must ensure that Palestinians also have a seat at the table,” Electronic Frontier Foundation’s Director for International Freedom of Expression Jillian York told The Intercept.

    Flagged for Moderation

    Records reviewed by The Intercept show Cutler pushed for the removal of an SJP post promoting a reading list of books including authors associated with two Marxist-Leninist militant groups, the Democratic Front for the Liberation of Palestine and the Popular Front for the Liberation of Palestine. Though it remains a Meta-designated terrorist group according to a copy of the list obtained by The Intercept in 2021, the DFLP has not been considered a terrorist organization by the U.S. government since 1999, when it was delisted by the State Department “primarily because of the absence of terrorist activity.” The PFLP remains designated by both Meta and the United States.

    According to a source familiar with Cutler’s actions, these efforts have included lobbying for the deletion of posts quoting celebrated Palestinian novelist Ghassan Kanafani, who served as a PFLP spokesperson nearly 60 years ago and was assassinated by Israel in 1972. Kanafani, whose works have been widely translated and published in countries around the world, enjoys global literary renown and mainstream recognition; his 1969 novella “Returning to Haifa” was cited as a recommended book by New York Times podcast “The Ezra Klein Show” last year.

    Internal records show Cutler later lobbied for the removal of an SJP Instagram post describing Leila Khaled — an 80-year-old former PFLP member who helped hijack TWA Flight 840 in 1969 and has in the decades since become an outspoken icon of Palestinian solidarity — as “empowering.” 

    These same records demonstrate Cutler regularly singled out Instagram content belonging to SJP at the University of California, Los Angeles, claiming to policy colleagues that this chapter had been associated with violent protests, citing an Israeli news report about an April 29 melee at the school’s Gaza solidarity encampment. Local and national press accounts described a protest that was peaceful until a pro-Israeli mob attacked the encampment with fists, weapons, and bear spray, injuring 15 people.

    Related

    Facebook and Instagram Restrict the Use of the Red Triangle Emoji Over Hamas Association

    Throughout the year, Cutler internally flagged several SJP UCLA posts, including those mentioning a reading list of PFLP-associated authors, an on-campus “PFLP study group,” and a post containing a red triangle emoji, a reference to Hamas combat operations that has become a broader symbol of Palestinian resistance. 

    Mona, a UCLA undergraduate and SJP member who spoke on the condition of being only identified by her first name, said the chapter’s Instagram account was periodically unable to post or share content, which the group attributed to enforcement actions by Meta. In August, the organization’s chapter at Columbia University reported its Instagram account had been deactivated without explanation. A member of SJP Columbia said the chapter did not have a record of deleted Instagram content but recalled Meta removing multiple posts that quoted Kanafani.

    The Israeli government has been vehement in its criticism of anti-Zionist groups like SJP and Jewish Voice for Peace, and has denounced campus organizing as an attempt to import terrorism to American college campuses.

    Records show Cutler has requested the deletion of non-student content, too. Following Iran’s October 1 missile attack against Israel, Cutler quickly flagged video uploaded to Instagram of Palestinians cheering from the Gaza Strip. Records show Cutler has also repeatedly lobbied to censor the Instagram account of Lebanese satellite TV network Al Mayadeen when it posted sympathetic content about the assassination of Hezbollah leader Hassan Nasrallah. 

    These actions are “Typical Jordana,” according to Ashraf Zeitoon, Facebook’s former Middle East and North Africa policy chief. “No one in the world could tell me that a lot of what she does is not an overreach of her authority.”

    Zeitoon, who departed the company in 2017, told The Intercept that Cutler’s role inside Meta differed from those of other regional policy managers.  

    “That’s the job of a government employee, a political appointee. ”

    “If I was head of public policy for Jordan, and I went on TV and said I represent the interests of Jordan within Meta, I would be fired the second day,” said Zeitoon, a Jordanian national, whose mandate at Meta was to oversee the whole of the Middle East and North Africa. “That’s the job of a government employee, a political appointee. None of us was ever hired with the premise that we’re representing our governments.”

    During his tenure, Zeitoon said, he often fielded informal requests from the government of Jordan, but he drew a clear line at acting on its behalf. “The Jordanian government hated my guts when I was there, because they thought that I was obliged because I’m Jordanian. I might guide you, I might be over-friendly, if you call me at night I might accept your call. But at the end of the day, Facebook pays my salary.”

    BuzzFeed News reported that in 2017 Facebook employees had “raised concerns about Cutler’s role and whose interests she prioritizes,” evidenced by an argument “over whether the West Bank should be considered ‘occupied territories’ in Facebook’s rules.” Zeitoon recalled this clash as emblematic of Cutler’s tenure, adding that when he was there, she “tried to influence decision makers within the company to designate the West Bank as a ‘disputed’ territory” rather than using the term “occupied” — a phrasing used by the United Nations when describing the region.

    Zeitoon doubted the Meta spokesperson’s claim that all internal escalations are treated equally, no matter who submits them. Recalling his time working in a high-ranking role at the company, he said his complaints received immediate attention: “My report goes to the top,” he said. He expects the same would be true today for content flagged by Cutler — especially at a moment when Israel is at war. “I’m sure all she reports is code red.”

    Emerson Brooking, a resident senior fellow at the Atlantic Council’s Digital Forensic Research Lab, was reminded of the case of Ankhi Das, Facebook’s former policy head for India — another rare instance in which a single country had its own dedicated representative within the company. Das resigned from her position in 2020 after a Wall Street Journal report found she had lobbied for the uneven enforcement of hate speech rules that benefited India’s ruling Hindu nationalist party, which she supported personally. “Meta is the communications platform for much of the world, but of course not every voice is heard equally,” Brooking said in an interview.

    Zeitoon concurred: “No governments in the world have been able to create a network of influence and pressure on Meta as strong as the Israeli and the Indian governments.”

    Related

    Israeli Group Claims It’s Working With Big Tech Insiders to Censor “Inflammatory” Wartime Content

    Cutler is not the first or only prominent figure within Meta to help foster relations between the company and governments. Her colleague Joel Kaplan, who served as White House deputy chief of staff during the George W. Bush administration, joined Facebook in 2011 to head the company’s operations in Washington, D.C., a move the New York Times reported “will likely strengthen its ties to Republican lawmakers on Capitol Hill.” Nick Clegg, Meta’s president of global affairs, is the former deputy prime minister of the U.K. Many of the staffers who help Meta craft and enforce its Dangerous Organizations and Individuals policy join following years of work at the Pentagon, State Department, federal law enforcement, and spy agencies. The revolving door between government and major internet companies is vast and ever-turning not just at Meta, but also its most prominent rivals. 

    As recently as February 2023, Cutler’s name was floated as a possible next head of the Israeli Strategic Affairs Ministry, a government propaganda office tasked with surveilling and undermining protesters and activists abroad. The ministry has reportedly made extensive use of Meta’s platforms to infiltrate student groups and conduct propaganda campaigns. In June, Haaretz reported a project originally founded by the ministry had targeted Black lawmakers in the U.S. with “hundreds” of phony Facebook and Instagram accounts “to aggressively promote purported articles that served the Israeli narrative.” Meta later shut these accounts down.

    Evelyn Douek, a content moderation scholar and professor at Stanford Law School, said Cutler’s direct intervention is “obviously extremely concerning” given the specific stakes. “You have a person inside Meta representing the interests of the government on an issue about which there is deeply contested political debate, it appears, to favor one side of that debate. The concerns about bias and disproportionate enforcement of a policy when that is happening seem obvious.”

    Lever, the Meta spokesperson, said that Cutler’s role in public policy is distinct from the company’s Content Policy officials, noting the former “engage” with governments but do not actually have a role in drafting rules. In her Jerusalem Post interview, however, Cutler stated “I’m part of a team of people who are helping to develop and build Facebook’s policies.”

    Douek argued that internet platform users are best served by keeping the creation of speech rules entirely separate from their enforcement. “It’s really highly problematic if you have people whose job at Meta is not the fair enforcement of content moderation rules, but rather their job is to please government interests intervening in the enforcement of the platform’s rules,” she said.

    This at a minimum creates the appearance of a foreign government meddling in an intensely domestic political issue — a dynamic Meta has historically worked to combat. “Campus protests and what is happening in the United States right now is a deeply contested fault line in American politics. And this has been an issue about what are the appropriate limits on campus speech and how should we be dealing with this,” Douek said. “A foreign country’s interests are being overly represented in how that debate is moderated, that should also raise concerns.”

    The post Meta’s Israel Policy Chief Tried to Suppress Pro-Palestinian Instagram Posts appeared first on The Intercept.

    This post was originally published on The Intercept.

  • The United States’ secretive Special Operations Command is looking for companies to help create deepfake internet users so convincing that neither humans nor computers will be able to detect they are fake, according to a procurement document reviewed by The Intercept.

    The plan, mentioned in a new 76-page wish list by the Department of Defense’s Joint Special Operations Command, or JSOC, outlines advanced technologies desired for the country’s most elite, clandestine military efforts. “Special Operations Forces (SOF) are interested in technologies that can generate convincing online personas for use on social media platforms, social networking sites, and other online content,” the entry reads.

    The document specifies that JSOC wants the ability to create online user profiles that “appear to be a unique individual that is recognizable as human but does not exist in the real world,” with each featuring “multiple expressions” and “Government Identification quality photos.”

    In addition to still images of faked people, the document notes that “the solution should include facial & background imagery, facial & background video, and audio layers,” and JSOC hopes to be able to generate “selfie video” from these fabricated humans. These videos will feature more than fake people: Each deepfake selfie will come with a matching faked background, “to create a virtual environment undetectable by social media algorithms.”

    Last year, Special Operations Command, or SOCOM, expressed interest in video “deepfakes,” a general term for synthesized audiovisual data meant to be indistinguishable from a genuine recording. Such imagery is generated using a variety of machine learning techniques, generally with software that has been “trained” to recognize and recreate human features by analyzing a massive database of faces and bodies. This year’s SOCOM wish list specifies an interest in software similar to StyleGAN, a program released by Nvidia in 2019 that powered the globally popular website “This Person Does Not Exist.” Within a year of StyleGAN’s launch, Facebook said it had taken down a network of accounts that used the technology to create false profile pictures. Since then, academic and private sector researchers have been engaged in a race between new ways to create undetectable deepfakes and new ways to detect them. Many government services now require so-called liveness detection to thwart deepfaked identity photos, asking human applicants to upload a selfie video to demonstrate they are a real person — an obstacle that SOCOM may be interested in circumventing.
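    For readers curious about the underlying mechanics, the sketch below is a minimal, purely illustrative example of the generator half of a GAN: a small network that maps a random “latent” vector to an image tensor. It is not Nvidia’s StyleGAN, is not drawn from any SOCOM document, and with untrained weights it emits noise rather than faces; photorealism comes only from adversarial training against a large database of real images, as described above.

    ```python
    # Toy illustration of a GAN-style generator (not StyleGAN itself).
    # With random, untrained weights this produces noise, not faces.
    import torch
    import torch.nn as nn

    class TinyGenerator(nn.Module):
        """Maps a random latent vector z to a small RGB image."""

        def __init__(self, latent_dim: int = 128, img_size: int = 64):
            super().__init__()
            self.img_size = img_size
            self.net = nn.Sequential(
                nn.Linear(latent_dim, 512),
                nn.ReLU(),
                nn.Linear(512, 3 * img_size * img_size),
                nn.Tanh(),  # pixel values scaled to [-1, 1]
            )

        def forward(self, z: torch.Tensor) -> torch.Tensor:
            out = self.net(z)
            return out.view(-1, 3, self.img_size, self.img_size)

    generator = TinyGenerator()
    z = torch.randn(1, 128)       # a random "identity" in latent space
    fake_image = generator(z)     # one synthetic 64x64 RGB image
    print(fake_image.shape)       # torch.Size([1, 3, 64, 64])
    ```

    The liveness checks mentioned above exist precisely to defeat this kind of synthesis by demanding video of a living subject, which is why the wish list’s interest in generating “selfie video” stands out.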

    Related

    U.S. Special Forces Want to Use Deepfakes for Psy-Ops

    The listing notes that special operations troops “will use this capability to gather information from public online forums,” with no further explanation of how these artificial internet users will be used. The disclosure comes one year after SOCOM revealed in last year’s wish list that it hoped to soon use deepfake videos for online propaganda and information warfare campaigns.

    This more detailed procurement listing shows that the United States pursues the exact same technologies and techniques it condemns in the hands of geopolitical foes. National security officials have long described the state-backed use of deepfakes as an urgent threat — that is, if they are being done by another country.

    Last September, a joint statement by the NSA, FBI, and CISA warned “synthetic media, such as deepfakes, present a growing challenge for all users of modern technology and communications.” It described the global proliferation of deepfake technology as a “top risk” for 2023. In a background briefing to reporters this year, U.S. intelligence officials cautioned that the ability of foreign adversaries to disseminate “AI-generated content” without being detected — exactly the capability the Pentagon now seeks — represents a “malign influence accelerant” from the likes of Russia, China, and Iran. Earlier this year, the Pentagon’s Defense Innovation Unit sought private sector help in combating deepfakes with an air of alarm: “This technology is increasingly common and credible, posing a significant threat to the Department of Defense, especially as U.S. adversaries use deepfakes for deception, fraud, disinformation, and other malicious activities.” An April paper by the U.S. Army’s Strategic Studies Institute was similarly concerned: “Experts expect the malicious use of AI, including the creation of deepfake videos to sow disinformation to polarize societies and deepen grievances, to grow over the next decade.”

    “There are no legitimate use cases besides deception.”

    The offensive use of this technology by the U.S. would, naturally, spur its proliferation and normalize it as a tool for all governments. “What’s notable about this technology is that it is purely of a deceptive nature,” said Heidy Khlaaf, chief AI scientist at the AI Now Institute. “There are no legitimate use cases besides deception, and it is concerning to see the U.S. military lean into a use of a technology they have themselves warned against. This will only embolden other militaries or adversaries to do the same, leading to a society where it is increasingly difficult to ascertain truth from fiction and muddling the geopolitical sphere.” 

    Both Russia and China have been caught using deepfaked video and user avatars in their online propaganda efforts, prompting the State Department to announce an international “Framework to Counter Foreign State Information Manipulation” in January. “Foreign information manipulation and interference is a national security threat to the United States as well as to its allies and partners,” a State Department press release said. “Authoritarian governments use information manipulation to shred the fabric of free and democratic societies.”

    SOCOM’s interest in deepfakes is part of a fundamental tension within the U.S. government, said Daniel Byman, a professor of security studies at Georgetown University and a member of the State Department’s International Security Advisory Board. “Much of the U.S. government has a strong interest in the public believing that the government consistently puts out truthful (to the best of knowledge) information and is not deliberately deceiving people,” he explained, while other branches are tasked with deception. “So there is a legitimate concern that the U.S. will be seen as hypocritical,” Byman added. “I’m also concerned about the impact on domestic trust in government — will segments of the U.S. people, in general, become more suspicious of information from the government?”

    The post The Pentagon Wants to Use AI to Create Deepfake Internet Users appeared first on The Intercept.

    This post was originally published on The Intercept.

  • The United States’ secretive Special Operations Command is looking for companies to help create deepfake internet users so convincing that neither humans nor computers will be able to detect they are fake, according to a procurement document reviewed by The Intercept.

    The plan, mentioned in a new 76-page wish list by the Department of Defense’s Joint Special Operations Command, or JSOC, outlines advanced technologies desired for the country’s most elite, clandestine military efforts. “Special Operations Forces (SOF) are interested in technologies that can generate convincing online personas for use on social media platforms, social networking sites, and other online content,” the entry reads.

    The document specifies that JSOC wants the ability to create online user profiles that “appear to be a unique individual that is recognizable as human but does not exist in the real world,” with each featuring “multiple expressions” and “Government Identification quality photos.”

    In addition to still images of faked people, the document notes that “the solution should include facial & background imagery, facial & background video, and audio layers,” and JSOC hopes to be able to generate “selfie video” from these fabricated humans. These videos will feature more than fake people: Each deepfake selfie will come with a matching faked background, “to create a virtual environment undetectable by social media algorithms.”

    The Pentagon has already been caught using phony social media users to further its interests in recent years. In 2022, Meta and Twitter removed a propaganda network using faked accounts operated by U.S. Central Command, including some with profile pictures generated with methods similar to those outlined by JSOC. A 2024 Reuters investigation revealed a Special Operations Command campaign using fake social media users aimed at undermining foreign confidence in China’s Covid vaccine.

    Last year, Special Operations Command, or SOCOM, expressed interest in using video “deepfakes,” a general term for synthesized audiovisual data meant to be indistinguishable from a genuine recording, for “influence operations, digital deception, communication disruption, and disinformation campaigns.” Such imagery is generated using a variety of machine learning techniques, generally using software that has been “trained” to recognize and recreate human features by analyzing a massive database of faces and bodies. This year’s SOCOM wish list specifies an interest in software similar to StyleGAN, a tool released by Nvidia in 2019 that powered the globally popular website “This Person Does Not Exist.” Within a year of StyleGAN’s launch, Facebook said it had taken down a network of accounts that used the technology to create false profile pictures. Since then, academic and private sector researchers have been engaged in a race between new ways to create undetectable deepfakes and new ways to detect them. Many government services now require so-called liveness detection to thwart deepfaked identity photos, asking human applicants to upload a selfie video to demonstrate they are a real person — an obstacle that SOCOM may be interested in circumventing.
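
    In rough outline, the adversarial training behind face generators of this kind can be sketched in a few dozen lines of PyTorch. The sketch below is purely illustrative: it is not the software SOCOM is soliciting, nor Nvidia’s StyleGAN code; the model sizes are toy-scale, and random tensors stand in for the database of real face photos a production system would train on.

    ```python
    import torch
    import torch.nn as nn

    LATENT, IMG = 64, 32 * 32  # latent noise size; flattened "image" size (toy values)

    generator = nn.Sequential(          # maps random noise to a synthetic image
        nn.Linear(LATENT, 256), nn.ReLU(),
        nn.Linear(256, IMG), nn.Tanh(),
    )
    discriminator = nn.Sequential(      # scores how "real" an image looks (logit output)
        nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1),
    )

    bce = nn.BCEWithLogitsLoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    for step in range(1_000):
        real = torch.rand(16, IMG) * 2 - 1            # stand-in for a batch of real face images
        fake = generator(torch.randn(16, LATENT))     # synthesized batch from random noise

        # Discriminator learns to score real images high and fakes low.
        d_loss = bce(discriminator(real), torch.ones(16, 1)) + \
                 bce(discriminator(fake.detach()), torch.zeros(16, 1))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Generator learns to produce fakes the discriminator scores as real.
        g_loss = bce(discriminator(fake), torch.ones(16, 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    ```

    The detection race described above falls out of this setup: the generator only improves by defeating whatever the discriminator, or any detector built along the same lines, has learned to flag.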

    Related

    U.S. Special Forces Want to Use Deepfakes for Psy-Ops

    The listing notes that special operations troops “will use this capability to gather information from public online forums,” with no further explanation of how these artificial internet users will be used.

    This more detailed procurement listing shows that the United States pursues the exact same technologies and techniques it condemns in the hands of geopolitical foes. National security officials have long described the state-backed use of deepfakes as an urgent threat — that is, if they are being done by another country.

    Last September, a joint statement by the NSA, FBI, and CISA warned “synthetic media, such as deepfakes, present a growing challenge for all users of modern technology and communications.” It described the global proliferation of deepfake technology as a “top risk” for 2023. In a background briefing to reporters this year, U.S. intelligence officials cautioned that the ability of foreign adversaries to disseminate “AI-generated content” without being detected — exactly the capability the Pentagon now seeks — represents a “malign influence accelerant” from the likes of Russia, China, and Iran. Earlier this year, the Pentagon’s Defense Innovation Unit sought private sector help in combating deepfakes with an air of alarm: “This technology is increasingly common and credible, posing a significant threat to the Department of Defense, especially as U.S. adversaries use deepfakes for deception, fraud, disinformation, and other malicious activities.” An April paper by the U.S. Army’s Strategic Studies Institute was similarly concerned: “Experts expect the malicious use of AI, including the creation of deepfake videos to sow disinformation to polarize societies and deepen grievances, to grow over the next decade.”

    “There are no legitimate use cases besides deception.”

    The offensive use of this technology by the U.S. would, naturally, spur its proliferation and normalize it as a tool for all governments. “What’s notable about this technology is that it is purely of a deceptive nature,” said Heidy Khlaaf, chief AI scientist at the AI Now Institute. “There are no legitimate use cases besides deception, and it is concerning to see the U.S. military lean into a use of a technology they have themselves warned against. This will only embolden other militaries or adversaries to do the same, leading to a society where it is increasingly difficult to ascertain truth from fiction and muddling the geopolitical sphere.” 

    Both Russia and China have been caught using deepfaked video and user avatars in their online propaganda efforts, prompting the State Department to announce an international “Framework to Counter Foreign State Information Manipulation” in January. “Foreign information manipulation and interference is a national security threat to the United States as well as to its allies and partners,” a State Department press release said. “Authoritarian governments use information manipulation to shred the fabric of free and democratic societies.”

    SOCOM’s interest in deepfakes is part of a fundamental tension within the U.S. government, said Daniel Byman, a professor of security studies at Georgetown University and a member of the State Department’s International Security Advisory Board. “Much of the U.S. government has a strong interest in the public believing that the government consistently puts out truthful (to the best of knowledge) information and is not deliberately deceiving people,” he explained, while other branches are tasked with deception. “So there is a legitimate concern that the U.S. will be seen as hypocritical,” Byman added. “I’m also concerned about the impact on domestic trust in government — will segments of the U.S. people, in general, become more suspicious of information from the government?”

    The post The Pentagon Wants to Use AI to Create Deepfake Internet Users appeared first on The Intercept.

    This post was originally published on The Intercept.

  • Meta is restricting the use of the upside-down red triangle emoji, a reference to Hamas combat operations that has become a broader symbol of Palestinian resistance, on its Facebook, Instagram, and WhatsApp platforms, according to internal content moderation materials reviewed by The Intercept.

    Since the beginning of the Israeli assault on Gaza, Hamas has regularly released footage of its successful strikes on Israeli military positions with red triangles superimposed above targeted soldiers and armor. Since last fall, use of the red triangle emoji has expanded online, becoming a widely used icon for people expressing pro-Palestinian or anti-Israeli sentiment. Social media users have included the shape in their posts, usernames, and profiles as a badge of solidarity and protest. The symbol has become common enough that the Israeli military has used it as shorthand in its own propaganda: In November, Al Jazeera reported on an Israeli military video that warned “Our triangle is stronger than yours, Abu Obeida,” addressing Hamas’s spokesperson.

    According to internal policy guidelines obtained by The Intercept, Meta, which owns Facebook and Instagram, has determined that the upside-down triangle emoji is a proxy for support for Hamas, an organization blacklisted under the company’s Dangerous Organizations and Individuals policy and designated a terror group under U.S. law. While the rule applies to all users, it is only being enforced in moderation cases that are flagged internally. Deletions of the offending triangle may be followed by further disciplinary action from Meta depending on how severely the company assesses its use.

    According to the policy materials, the ban covers contexts in which Meta decides a “user is clearly posting about the conflict and it is reasonable to read the red triangle as a proxy for Hamas and it is being used to glorify, support or represent Hamas’s violence.”

    Many questions about the policy remain unanswered; Meta did not respond to multiple requests for comment. It’s unclear how often Meta chooses to restrict posts or accounts using the emoji, how many times it has intervened, and whether users have faced further repercussions for violating this policy.

    The policy also appears to apply even if the emoji is used without any violent speech or reference to Hamas. The documents show that the company will “Remove as a ‘Reference to DOI’ if the use of triangle is not related to Hamas’s violence,” as in the case of the emoji as a user’s profile picture. Another example of a prohibited use doesn’t even include the emoji itself, but rather a hashtag mentioning the word triangle and a Hamas spokesperson.

    It “seems wildly over-broad to remove any ‘reference’ to a designated DOI,” according to Evelyn Douek, an assistant professor at Stanford Law School and scholar of content moderation policy. “If we are just understanding the ‘🔻’ as essentially a stand-in for the word “Hamas,” we would never ban every instance of the word. Much discussion of Hamas or use of the ‘🔻’ will not necessarily be praise or glorification.”

    The previously unreported prohibition has not been announced to users by Meta and has worried some digital rights advocates about how fairly and accurately it will be enforced. “Wholesale bans on expressions proved time and time again to be disastrous for free speech, but Meta never seems to learn this lesson,” Marwa Fatafta, a policy adviser with the digital rights organization Access Now, told The Intercept. “Their systems will not be able to distinguish between the different uses of this symbol, and under the unforgiving DOI policy, those who are caught in this widely cast net will pay a hefty price.”

    “Their systems will not be able to distinguish between the different uses of this symbol, and … those who are caught in this widely cast net will pay a hefty price.”

    While Meta publishes a broad overview of the Dangerous Organizations policy, the specifics, including the exact people and groups that are included under it, are kept secret, making it difficult for users to avoid breaking the rule.

    “Soon enough, users will know and notice that their posts are being taken down because of using this red triangle, and that will raise questions,” Fatafta said. “Meta seems to be forgetting another very important lesson here, and that is transparency.”

    Douek echoed the need for transparency regarding Meta’s content moderation around the war: “Not knowing when or how the rule is being applied is going to exacerbate the perception, if not the reality, that Meta isn’t being fair in a context where the company has a history of biased enforcement.”

    Although Meta last year relaxed its Dangerous Organizations policy to ostensibly allow references to banned entities in certain contexts, like elections, civil society groups and digital rights advocates have widely criticized Meta’s enforcement of the policy against speech pertaining to the war, particularly from Palestinian users. The policy material reviewed by The Intercept mentions no such exceptions for the triangle emoji or instructions to consider its context beyond Hamas.

    “What is being banned are expressions of solidarity and support for Palestinians as they are trying to resist ethnic cleansing and genocide,” Mayssoun Sukarieh, a senior lecturer with the Department of International Development at King’s College London, told The Intercept. “Symbols are always created by resistance, and there will be resistance as long as there is colonialism and occupation.”

    The post Facebook and Instagram Restrict the Use of the Red Triangle Emoji Over Hamas Association appeared first on The Intercept.

    This post was originally published on The Intercept.

  • The ongoing Israeli assault on Gaza has triggered tense, at times hostile, reckonings across American tech companies over their role in the killing. Since October 7, tech workers have agitated for greater transparency about their employers’ work for the Israeli military and at times vehemently protested those contracts.

    IBM, which has worked with the Israeli military since the 1960s, is no exception: For months after the war’s start, workers repeatedly pressed company leadership — including its chief executive — to divulge and limit its role in the Israeli offensive that has so far killed over 40,000 Palestinians. For many workers, the question of where IBM might draw the line with foreign governments is particularly fraught given the company’s grim track record of selling computers and services to both apartheid South Africa and Nazi Germany.

    On June 6, CEO Arvind Krishna addressed these concerns in a livestreamed video Q&A session.

    For IBM workers worried about where the company draws the line, his response has sparked only greater consternation.

    According to records of the presentation reviewed by The Intercept, Krishna told employees that IBM’s foreign business wouldn’t be shaped by the company’s own values or humanitarian guidelines.

    Rather, Krishna explained, when working for governments, IBM believes the customer is always right: 

    We try to operate with the principles that are encouraged by the governments of the countries we are in. We are a U.S. headquarter company. So, what does the U.S. federal government want to do on international relations? That helps guide a lot of what we do. We operate in many countries. We operate in Israel, but we also operate in Saudi Arabia. What do those countries want us to do? And what is it they consider to be correct behavior?

    For IBM employees worried that business interests would override ethical considerations, this answer provided little reassurance. It also echoed, intentionally or not, the company’s defense when workers had protested IBM’s sale of computer services to apartheid South Africa. According to Kwame Afoh, an IBM employee who organized against the company’s South African ventures in the 1970s, the company’s go-to internal rationale was “We don’t set foreign policy but rather we follow the lead of the U.S. government in foreign business dealings.” 

    Krishna continued by claiming IBM would not help build weapons — not because doing so is morally wrong, but because the company doesn’t have a system of judging right from wrong. “We will not work on offensive weapons programs,” Krishna explained. “Why? I am not taking any kind of moral or ethical judgment. I think that should be on each country who does those. The reason we don’t is, we do not have the internal guardrails to decide whether the technology applies in a good way or unethical way for offensive weapons.”

    Though it may not build weapons itself, IBM has long helped run the military that carries them. In 2020, the company split a roughly $275 million contract to build data centers that would handle Israeli military logistics, including “combat equipment,” according to Israeli outlet TheMarker. That same year, an executive with IBM subsidiary Red Hat told an Israeli business publication “we see ourselves as partners of the IDF.”

    IBM did not respond to a request for comment.

    Some IBM employees who spoke to The Intercept on the condition of anonymity say they were unnerved or upset by their CEO’s remarks, including one who described them as “predictably shameful.” This person said that while some were glad Krishna had even broached the topic of IBM and Israel, “the responses I heard in one-on-one discussions were overwhelmingly dissatisfied or outraged.” Another IBM worker characterized Krishna’s comments as an “excuse for him to hide behind the US government’s choices in a business sense,” adding that “with the track record that IBM has with taking part in genocidal government projects, it certainly doesn’t help his case in any valuable moral way whatsoever.”



    The company’s stance in its closed-door staff discussion is markedly different from its public claims. Like its major rivals, IBM says its business practices are constrained by various human rights commitments, principles that in theory ask the company to avoid harm in the pursuit of profit. When operating in a foreign country, such commitments ostensibly prevent a company like IBM from simply asking “What do those countries want us to do?” as Krishna put it.

    Related

    Israeli Weapons Firms Required to Buy Cloud Services From Google and Amazon

    But IBM’s human rights language, like its competitors’, is generally feel-good verbiage that gestures at ethical guidelines without spelling any of them out. “Our definition of corporate responsibility includes environmental responsibility, as well as social concerns for our workforce, clients, business partners, and the communities where we operate,” the company’s “human rights principles” page states. “IBM has a strong culture of ethics and integrity.”

    The only substance to be found here is in reference to third-party human rights frameworks, namely those issued by the United Nations. IBM says its “corporate responsibility standards” are “informed by” the U.N. Guiding Principles on Business and Human Rights, which asks its adherents to “prevent or mitigate adverse human rights impacts that are directly linked to their operations, products or services by their business relationships, even if they have not contributed to those impacts.” 

    These guidelines, endorsed by the U.N. Human Rights Council in 2011, stress that “Some operating environments, such as conflict-affected areas, may increase the risks of enterprises being complicit in gross human rights abuses committed by other actors (security forces, for example).” The document further notes that such conflict zone abuses may create corporate liability before the International Criminal Court, whose chief prosecutor in May sought arrest warrants for Israeli Prime Minister Benjamin Netanyahu over alleged war crimes and crimes against humanity stemming from the Gaza assault. Google, Microsoft, and Amazon, which also sell technology services to the Israeli military, similarly say they subscribe to the voluntary, nonbinding U.N. guidelines.

    The post IBM CEO: We Listen to What Israel and Saudi Arabia Consider “Correct Behavior” appeared first on The Intercept.

    This post was originally published on The Intercept.

  • The Department of Homeland Security is soliciting help from the U.S. private sector to run face recognition scans against drivers and passengers approaching the southern border, according to an agency document reviewed by The Intercept.

    Despite the mixed track record and ongoing deficiencies of face recognition technology, DHS is hoping to devise a means of capturing the likenesses of travelers while vehicles are still in motion. 

    According to a “Request for Information” document distributed by the DHS Science and Technology Directorate, the government is looking for private sector assistance to run face recognition on drivers and passengers en route to the border before they even reach a checkpoint. “[DHS] Tech Scouting is seeking information on technology solutions that can capture biometric data (e.g., facial recognition) of occupants present in vehicles at speed as they approach land border checkpoints,” the document states. “Solutions of interest would have the ability to biometrically scan occupants without requiring them to exit the vehicle and provide checkpoint agents with information to determine if the occupants are a threat and if they may enter the United States.”

    The document does not elaborate on how such a system would be used to determine whether people in a car constitute a threat to the United States, though prior in-car face recognition pilot programs have checked if drivers had been previously arrested. Vendors that brief DHS on their offerings may be invited to participate in further testing, the document notes.

    DHS and Customs and Border Protection did not respond to a request for comment. 

    Dave Maass, director of investigations at the Electronic Frontier Foundation and a longtime researcher of border surveillance technologies, cautioned that face recognition can be a deep intrusion into personal privacy.

    “We have already seen how automated license plate readers are able to create a massive surveillance dragnet of people’s vehicles and driving patterns,” Maass said. “If law enforcement is able to add face recognition capture from moving vehicles to the mix, they’ll be able to track not only where your vehicle is going, but who is driving it and who is in the car with you, adding a whole new dimension to the privacy invasion.”

    Citing various acts of Congress, CBP says it has a legislative mandate to expand biometric identity checks across land, air, and sea. Anyone who has traveled through a major American airport in recent years has likely been confronted with face recognition cameras at security checkpoints or before boarding international flights — a process that can be opted out of, for now.

    Since 2016, CBP has tested the use of face recognition cameras at border crossings to rapidly verify the identities of both drivers and passengers without requiring travelers to leave their cars. The program hands off to a computer the job of comparing the picture on a traveler’s ID to the face behind the wheel. In 2018, The Verge reported DHS added face recognition cameras to two lanes of the Anzalduas International Bridge, which carries thousands of vehicles over the Rio Grande every day. In 2019, CBP officials told Politico that this program had ended; in 2021, it announced further limited tests of the systems at two lanes of the Anzalduas crossing.

    Agency documents show that to date DHS has struggled to remotely identify drivers. A 2024 report by the DHS Office of Inspector General includes a section titled “CBP Does Not Have the Technology to Collect Biometrics from Travelers Arriving in Vehicles at Land Ports of Entry,” which notes the government has been “unable to identify a viable camera solution to reliably capture vehicle travelers’ facial images in real time.”

    A 2022 DHS postmortem on the Anzalduas test obtained by the Electronic Frontier Foundation through a public records request and shared with The Intercept delivers a conflicting message. The 2022 document says that all “stated objectives were successfully met,” but concedes that drivers and passengers were photographed only about three-quarters of the time, and only about 80 percent of these images were usable — figures confirmed by the 2024 OIG report. This 80 percent figure was reached only after extensive tweaking to the test system. The postmortem document suggests remedying the lackluster capture rate in part by simply taking more photos: “CBP must significantly increase the number of occupants whose image is captured.”
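
    Taken at face value, those two figures compound, meaning only a fraction of travelers ended up with an image the system could actually use. A back-of-the-envelope sketch with the postmortem’s rounded numbers:

    ```python
    photographed = 0.75   # share of drivers and passengers photographed at all
    usable = 0.80         # share of those photos deemed usable
    print(f"{photographed * usable:.0%} of travelers yielded a usable image")  # ~60%
    ```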

    “How many found themselves subjected to prolonged, yet unwarranted, secondary inspection?”

    Both documents repeat what’s long been known about face recognition technology: It often goes wrong. This is doubly the case at outdoor border crossings, where faces must be captured from behind windshields, obscured by reflections, hats, sunglasses, shadows, weather, Covid masks, sun visors, and a litany of other real world obstructions and distortions.

    While the Anzalduas postmortem touts a 99 percent accuracy rate when matching drivers and passengers to their ID photos, the OIG’s report makes no such claim. The accuracy figures also lack context and demand answers from the government, said Maass. “The report repeatedly trumpets a 99.2% accuracy rate when officials had both quality probe images and an ID image to compare against, approximately 410,000 comparisons in a month. But this still means that more than 3,280 people that month (more than 100 people a day) experienced a face recognition error,” he explained. “What happened to those travelers? How many suffered a minor inconvenience and how many found themselves subjected to prolonged, yet unwarranted, secondary inspection? And was there any correlation between race and those errors? Those questions were either left unanswered or were hidden in the overly redacted document.”
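
    The arithmetic behind those figures is straightforward, as a quick sketch with the report’s rounded numbers shows:

    ```python
    comparisons = 410_000                      # face-to-ID comparisons in the month cited
    accuracy = 0.992                           # reported match accuracy
    errors = comparisons * (1 - accuracy)
    print(round(errors), round(errors / 30))   # 3280 errors, roughly 109 travelers per day
    ```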

    Related

    Before Being Hacked, Border Surveillance Firm Lobbied to Downplay Security and Privacy Concerns About Its Technology

    As DHS reaches out to the surveillance industry for help, the 2022 postmortem document offers a cautionary tale. In 2019, Perceptics, which provides CBP with license plate-scanning technology used at these same checkpoints, was hacked, revealing that the company “removed unauthorized copies of traveler image personally identifiable information (PII) and copied this information to Perceptics’ corporate servers.” The document notes that CBP “performed an assessment of additional data protection and insider threat security controls that could be incorporated to prevent a future incident from occurring,” but doesn’t say which, if any, were implemented.

    Also included in the hacked files were emails from Perceptics CEO John Dalton, who noted in a message to a lobbyist, “CBP has none of the privacy concerns at the border that all agencies have inland.”

    The post Face Recognition on the Border Is an Error-Prone Experiment That Won’t Stop appeared first on The Intercept.

    This post was originally published on The Intercept.

  • The “IT For IDF” conference in Rishon LeZion, just south of Tel Aviv, brought together tech firms from across the world to support the Israel Defense Forces in Gaza and beyond.

    Many of the assembled companies are not household names in the United States, but several multinational firms — like Nokia, Dell, and Canon — were present at the July 10 event.

    The mission they had gathered to support was clear. Onstage, a brigadier general with the Israeli military gave a presentation that connected the Nakba, the 1967 Arab–Israeli War, the 2006 invasion of Lebanon, the current war on Gaza, and more wars in the decades to come. His call to action splashed across the big screen: “Each generation and its own turn — this is our watch!”

    One company, however, was conspicuously absent: Google.

    For the last two years, Google had been a marquee sponsor of IT For IDF — the company is a natural partner for the event, given Google Cloud’s foundational role in Project Nimbus, the $1.2 billion contract it shares with Amazon aimed at modernizing cloud computing operations across the Israeli government.

    This year was supposed to be no different and, until just days before the conference started, a Google Cloud logo was displayed alongside other sponsors on the IT For IDF website. Then Google abruptly vanished from the site without explanation.

    When asked by a news outlet about the logo’s disappearance, the conference organizers claimed they weren’t aware Google had been on their website and suggested its inclusion was an error. “It’s possible that we used their logo by mistake but they are not a sponsor,” a spokesperson told 404 Media, “as far as I know.”

    Google’s own corporate schedule, reviewed by The Intercept, seems to contradict this statement. The document includes upcoming Google events in Israel, and IT For IDF 2024 is on the list. On this internal schedule, Google is explicitly labeled as a co-sponsor of the conference in partnership with CloudEx, an Israeli cloud computing consultancy.

    CloudEx CEO Ariel Munafo moonlights as an adviser to the IDF’s Center of Computers and Information Systems, known as Mamram, where he is helping other IDF units build out their cloud computing operations, according to his LinkedIn profile.

    Google did not respond to multiple requests for comment.

    The apparent about-face on Google’s sponsorship of IT For IDF is just one recent example of companies seeking to distance themselves from Israel’s brutal assault on the Gaza Strip, which has killed nearly 40,000 Palestinians. Though much business has gone on as usual amid the destruction, some companies have, at various times, and to various extents, shifted their business plans in Israel.

    Google continues to work with the IDF, as it has for years on the Nimbus contract. The company’s odd vanishing act from a conference focused on a lucrative customer relationship stands as one of the most high-profile examples of what appears to be PR anxiety.

    The tech giant has shown squeamishness over some of Nimbus’s objectives in the past. The project has drawn international criticism and prompted a dissent campaign among Google employees, over 50 of whom were fired in April for protesting the contract. The Israeli government emphasizes Nimbus’s military dimensions, but Google has persistently tried to downplay or outright deny that its contract with Israel includes military work.

    Close-up of an official photo from the IT For IDF event showing the painted-over sponsor display. Photo: IT For IDF

    Official event photos posted to a public album and reviewed by The Intercept suggest that Google’s connection to the IDF networking event was literally whitewashed before it began. Official photos of a step-and-repeat backdrop for the conference contain all the event’s original sponsors’ logos, sans Google — and include one square that appears to have been hastily painted over. Neither Google nor People and Computers, the conference’s organizer, responded when asked whose logo was underneath the paint.

    The logo of Cisco, which claimed to 404 Media that it was “not a sponsor of this conference,” remained on display.

    Even without a presence at the conference, Google loomed over an event dedicated to the prosecution of a war from which it has struggled to distance itself.

    One of the event’s big draws was a presentation by Col. Racheli Dembinski of Mamram, on the Israeli military’s use of cloud platforms during the war in Gaza, during which she highlighted the role of Google Cloud, according to Israeli press.

    A later talk presented by the “technology lead for Google Cloud Platform and CloudEx” noted that CloudEx’s partnership with Google entailed “working closely on several production cloud-hosted projects” with the Ministry of Defense.

    Commit, another cloud computing reseller that was recently named “2024 Google Cloud Sales Partner of the Year for Israel” for its work implementing Project Nimbus, took to the stage to hawk its battlefield management software.

    Related

    Documents Reveal Advanced AI Tools Google Is Selling to Israel

    Google’s insistence that Project Nimbus is peaceful in nature is at odds with the public record. Nimbus training materials published by The Intercept in 2022 cited the Ministry of Defense as a customer. Recent reporting by Wired detailed the project’s connection to the IDF since its inception. When it was announced in 2021, Project Nimbus was touted by the Israeli Finance Ministry as serving the “defense establishment.”

    In May, journalist Jack Poulson reported on Google’s long-running collaboration with IDF information technology units like Mamram, noting an intensification in such work since October 7.

    Despite all this, Google has routinely refused to discuss the Nimbus work or its humanitarian implications with the press. The company generally responds to inquiries about the project, or to criticism of its military nature, with a boilerplate statement that Nimbus is “not directed at highly sensitive or classified military workloads relevant to weapons or intelligence services.”

    The Intercept revealed in May that Nimbus requires two prominent Israeli weapons manufacturers, Israel Aerospace Industries and Rafael Advanced Defense Systems, to use Google and Amazon cloud computing services in their work.

    The post Google Planned to Sponsor IDF Conference That Now Denies Google Was Sponsor appeared first on The Intercept.

    This post was originally published on The Intercept.

  • Israel opposed a proposal at a recent United Nations forum aimed at rebuilding the Gaza Strip’s war-ravaged telecommunications infrastructure on the grounds that Palestinian connectivity is a readymade weapon for Hamas.

    The resolution, which was drafted by Saudi Arabia for last week’s U.N. International Telecommunication Union summit in Geneva, is aimed at returning internet access to Gaza’s millions of disconnected denizens.

    It ultimately passed under a secret ballot on June 14 — but not before it was watered down to remove some of its more strident language about Israel’s responsibility for the destruction of Gaza. The U.S. delegate at the ITU summit had specifically opposed those references.

    Israel, for its part, had blasted the proposal as a whole. Israel’s ITU delegate described it as “a resolution that while seemingly benign in its intent to rebuild telecommunications infrastructure, distorts the reality of the ongoing situation in Gaza,” according to a recording of the session reviewed by The Intercept. The delegate further argued the resolution does not address that Hamas has used the internet “to prepare acts of terror against Israel’s civilians,” and that any rebuilding effort must include unspecified “safeguards” that would prevent the potential use of the internet for terrorism.

    “Based on this rationale, Gaza will never have internet.”

    “Based on this rationale, Gaza will never have internet,” Marwa Fatafta, a policy adviser with the digital rights group Access Now, told The Intercept, adding that Israel’s position is not only incoherent but inherently disproportionate. “You can’t punish the entire civilian population just because you have fears of one Palestinian faction.”

    The Israeli Ministry of Communications did not respond to a request for comment.

    Getting Gaza Back Online

    When delegations to the ITU, a U.N. agency that facilitates cooperation between governments on telecommunications policies, began meeting in Geneva in early June, the most pressing issue on the agenda was getting Gaza back online. Israel’s monthslong bombardment of the enclave has severed fiber cables, razed cellular towers, and generally wrecked the physical infrastructure required to communicate with loved ones and the outside world.

    A disconnected Gaza Strip also threatens to add to the war’s already staggering death toll. Though Israel touts its efforts to warn civilians of impending airstrikes, such warnings are relayed using the very cellular and internet connections the country’s air force routinely levels. It is a cycle of data degradation that began at the war’s start: The more Israel bombs, the harder it is for Gazans to know they are about to be bombed.

    The resolution that passed last week would ensure “the ITU’s much needed assistance and support to Palestine for rebuilding its telecommunication sector.” While the agency has debated the plight of Palestinian internet access for many years, the new proposal arrives at a crisis point for data access across Gaza, as much of the Strip has been reduced to rubble, and civilians struggle to access food and water, let alone cellular signals and Wi-Fi.

    The ITU and other intergovernmental bodies have long pushed for Palestinian sovereignty over Palestinian internet access. But the Saudi proposal was notable in that it explicitly called out Israel’s role in hobbling Gaza’s connection to the world, whether via bombs, bulldozers, or draconian restrictions on technology imports. That Saudi Arabia was behind the resolution is not without irony; in 2022, Yemen plunged into a four-day internet blackout following airstrikes by a Saudi-led military coalition.

    Without mentioning Israel by name, the Saudi resolution also called on the ITU to monitor the war’s destructive effects on Palestinian data access and provide regular reports. The resolution also condemned both the “widespread destruction of critical infrastructure, failure of telecom services and mobile phone outages that have occurred across the Gaza Strip since the beginning of the aggression by the occupying power” and “the obstacles practiced by the occupying power in preventing the use of new communications technologies.”

    In a session debating the resolution, the U.S. delegate told the council, “We have made clear to the sponsors of this resolution that we do not agree with some of the characterizations,” specifically the language blaming the destruction of Gaza and the forced use of obsolete technology on Israel. “The United States cannot support this resolution in its current form as drafted,” the delegate continued, according to a recording reviewed by The Intercept.

    Whether or not the U.S. ultimately voted for the resolution — the State Department did not respond when asked — it appears to have been successful in weakening the version approved by the ITU. The version that did pass was stripped of any explicit mention of Israel’s role in destroying and otherwise thwarting Gazan internet access, and refers obliquely only to “the obstacles practiced in preventing the use of new communication technologies.”

    The State Department did not respond to The Intercept’s other questions about the resolution either, including whether the administration shares Israel’s terror-related objections to it.

    The U.S. has taken a harsher stance on civilian internet blackouts caused by a military aggressor in the past. Following Russia’s invasion of Ukraine and the ensuing national internet disruptions it caused, the State Department declared, “the United States condemns actions that block or degrade access to the Internet in Ukraine, which sever critical channels for sharing and learning information, including about the war.”

    Outdated Technology

    The approved resolution also calls on ITU member states to “make every effort” to both preserve what Palestinian telecom infrastructure remains and allocate funds necessary for the “return of communications in the Gaza Strip” in the future. This proposed rebuilding includes the activation of 4G and 5G cellular service. While smartphones in the West Bank connect to the internet with 3G wireless speeds unsuitable for many data-hungry applications, Gazans must make do with debilitatingly slow 2G service — an obsolete standard that was introduced to the United States in 1992.

    Fatafta, of Access Now, noted that Israel does have a real interest in preventing Gaza from entering the 21st century: surveillance and censorship. Gaza’s reliance on insecure cellular technology from the 1990s and Israeli fiber connections makes it trivial for Israeli intelligence agents to intercept texts and phone calls and institute internet blackouts at will, as has occurred throughout the war.

    The resolution is “an important step, because the current status quo cannot continue,” she said. “There is no scenario where Gaza can be allowed to keep a 2G network where the rest of the world has already moved on to 5G.”

    The post Israel Opposes Rebuilding Gaza’s Internet Access Because Terrorists Could Go Online appeared first on The Intercept.

    This post was originally published on The Intercept.

  • Facebook’s advertising algorithm disproportionately targets Black users with ads for for-profit colleges, according to a new paper by a team of university researchers.

    Like all major social media platforms, Meta, which owns Facebook and Instagram, does not disclose exactly how or why its billions of users see certain posts and not others, including ads. In order to put Facebook’s black-box advertising system to the test, academics from Princeton and the University of Southern California purchased Facebook ads and tracked their performance among real Facebook users, a method they say produced “evidence of racial discrimination in Meta’s algorithmic delivery of ads for education opportunities, posing legal and ethical concerns.”

    The researchers say they focused on for-profit colleges because of their long, demonstrable history of deceiving prospective students — particularly students of color — with predatory marketing while delivering lackluster educational outcomes and diminished job prospects compared to other colleges.

    In a series of test marketing campaigns, the researchers purchased sets of two ads paired together: one for a public institution, like Colorado State University, and another marketing a for-profit company, like Strayer University. (Neither the for-profit colleges nor the state schools advertised by the researchers were involved in the project.)

    Advertisers on Facebook can fine-tune their campaigns with a variety of targeting options, but race is no longer one of them. So the researchers found a clever proxy. Using North Carolina voter registration data that includes individuals’ races, the researchers built a sample audience that was 50 percent white and 50 percent Black. The Black users came from one region in North Carolina and white voters from another. Using Facebook’s “custom audiences” feature, they uploaded this roster of specific individuals to target with ads. Though Facebook’s ad performance metrics wouldn’t reveal the race of users who saw each ad, the data showed where each ad was viewed. “Whenever our ad is shown in Raleigh, we can infer it was shown to a Black person and, when it is shown in Charlotte — we can infer it was shown to a White person,” the paper explains.

    Theoretically, an unbiased algorithm would serve each school’s ads to an equal number of Black and white users. The experiment was designed to see whether there was a statistically significant skew in which people ultimately saw which ads.
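
    As an illustration of what such a skew test can look like, here is a minimal sketch using hypothetical impression counts and the region-as-race proxy described above. It is not the researchers’ actual code or data; a standard chi-square test of independence simply asks whether delivery differs between the paired ads more than chance would allow.

    ```python
    from scipy.stats import chi2_contingency

    # Hypothetical impression counts: rows are the paired ads, columns the region proxy
    # (Raleigh standing in for Black users, Charlotte for white users).
    impressions = [
        [1350, 900],    # for-profit college ad
        [950, 1300],    # public college ad
    ]

    chi2, p_value, dof, expected = chi2_contingency(impressions)
    print(f"chi2 = {chi2:.1f}, p = {p_value:.2g}")
    # A tiny p-value means a split this lopsided is very unlikely if the algorithm
    # were delivering each ad to both regions at equal rates.
    ```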

    With each pair of ads, Facebook’s delivery algorithm showed a bias, the researchers found. The company’s algorithm disproportionately showed Black users ads for colleges like DeVry and Grand Canyon University, for-profit schools that have been fined or sued by the Department of Education for advertising trickery, while more white users were steered toward state colleges, the academics concluded.

    “Addressing fairness in ads is an industry-wide challenge and we’ve been collaborating with civil rights groups, academics, and regulators to advance fairness in our ads system,” Meta spokesperson Daniel Roberts told The Intercept. “Our advertising standards do not allow advertisers to run ads that discriminate against individuals or groups of individuals based on personal attributes such as race and we are actively building technology designed to make additional progress in this area.”

    Even in cases where these for-profit programs have reformed their actual marketing efforts and “aim for racially balanced ad targeting,” the research team concluded, “Meta’s algorithms would recreate historical racial skew in who the ads are shown to, and would do so unbeknownst to the advertisers.”

    Ever since a 2016 ProPublica report found Facebook allowed advertisers to explicitly exclude users from advertising campaigns based on their race, the company’s advertising system has been subject to increased scrutiny and criticism. And while Facebook ultimately removed options that allowed marketers to target users by race, previous academic research has shown that the secret algorithm that decides who sees which ads is biased along race and gender lines, suggesting bias intrinsic to the company’s systems.

    Related

    Facebook’s Ad Algorithm Is a Race and Gender Stereotyping Machine, New Study Suggests

    A 2019 research paper on this topic showed that ads for various job openings were algorithmically sorted along race and gender stereotypes, for instance lopsidedly showing Black users opportunities to drive taxi cabs while openings for artificial intelligence developers were skewed in favor of white users. A 2021 follow-up paper found that Facebook ad delivery replicated real-world workplace gender imbalances, showing women ads for companies where women were already overrepresented.

    While it withholds virtually all details about how its ad delivery algorithm functions, Facebook has long contended that its ads are shown merely to people most likely to find them relevant. In response to the 2021 research showing gender bias in the algorithm, a company spokesperson told The Intercept that while they understood the researchers’ concerns, “our system takes into account many signals to try and serve people ads they will be most interested in.”

    Aleksandra Korolova, a professor of computer science and public affairs at Princeton and co-author of the 2019 and 2021 research, told The Intercept that she rejects the notion that apparent algorithmic bias can be explained away as only reflecting what people actually want, because it’s impossible to disprove. “It’s impossible to tell whether Meta’s algorithms indeed reflect a true preference of an individual, or are merely reproducing biases in historical data that the algorithms are trained on, or are optimizing for preferences as reflected in clicks rather than intended real-world actions.”

    The onus to prove Facebook’s ad delivery is reflecting real-world preferences and not racist biases, she said, lies with Facebook.

    But Korolova also noted that even if for-profit college ads are being disproportionately directed to Black Facebook users because of actual enrollment figures, a moral and social objection to such a system remains. “Society has judged that some advertising categories are so important that one should not let historical trends or preferences propagate into future actions,” she said. While various areas in the United States may have been majority-Black or white over the years, withholding ads for properties in “white neighborhoods” from Black buyers, for example, is illegal, historical trends notwithstanding.

    Aside from the ethical considerations around disproportionately encouraging its Black users to enroll in for-profit colleges, the authors suggest Facebook may be creating legal liability for itself too. “Educational opportunities have legal protections that prohibit racial discrimination and may apply to ad platforms,” the paper cautions.

    Korolova said that, in recent years, “Meta has made efforts to reduce bias in their ad delivery systems in the domains of housing, employment and credit — housing as part of their 2022 settlement with the Department of Justice, and employment and credit voluntarily, perhaps to preempt lawsuits based on the work that showed discrimination in employment ad delivery.”

    But she added that despite years of digging into apparently entrenched algorithmic bias in the company’s products, “Meta has not engaged with us directly and does not seem to have extended their efforts for addressing ad delivery biases to a broader set of domains that relate to life opportunities and societally important topics.”

    The post Facebook’s Ad Algorithm Pushes Black Users to For-Profit Colleges appeared first on The Intercept.

    This post was originally published on The Intercept.

  • In March, WhatsApp’s security team issued an internal warning to their colleagues: Despite the software’s powerful encryption, users remained vulnerable to a dangerous form of government surveillance. According to the previously unreported threat assessment obtained by The Intercept, the contents of conversations among the app’s 2 billion users remain secure. But government agencies, the engineers wrote, were “bypassing our encryption” to figure out which users communicate with each other, the membership of private groups, and perhaps even their locations.

    The vulnerability is based on “traffic analysis,” a decades-old network-monitoring technique, and relies on surveying internet traffic at a massive national scale. The document makes clear that WhatsApp isn’t the only messaging platform susceptible. But it makes the case that WhatsApp’s owner, Meta, must quickly decide whether to prioritize the functionality of its chat app or the safety of a small but vulnerable segment of its users.

    “WhatsApp should mitigate the ongoing exploitation of traffic analysis vulnerabilities that make it possible for nation states to determine who is talking to who,” the assessment urged. “Our at-risk users need robust and viable protections against traffic analysis.”

    Against the backdrop of the ongoing war on Gaza, the threat warning raised a disturbing possibility among some employees of Meta. WhatsApp personnel have speculated Israel might be exploiting this vulnerability as part of its program to monitor Palestinians at a time when digital surveillance is helping decide who to kill across the Gaza Strip, four employees told The Intercept.

    “WhatsApp has no backdoors and we have no evidence of vulnerabilities in how WhatsApp works,” said Meta spokesperson Christina LoNigro.

    Though the assessment describes the “vulnerabilities” as “ongoing,” and specifically mentions WhatsApp 17 times, LoNigro said the document is “not a reflection of a vulnerability in WhatsApp,” only “theoretical,” and not unique to WhatsApp. LoNigro did not answer when asked if the company had investigated whether Israel was exploiting this vulnerability.

    Even though the contents of WhatsApp communications are unreadable, the assessment shows how governments can use their access to internet infrastructure to monitor when and where encrypted communications are occurring, like observing a mail carrier ferrying a sealed envelope. This view into national internet traffic is enough to make powerful inferences about which individuals are conversing with each other, even if the subjects of their conversations remain a mystery. “Even assuming WhatsApp’s encryption is unbreakable,” the assessment reads, “ongoing ‘collect and correlate’ attacks would still break our intended privacy model.”

    “The nature of these systems is that they’re going to kill innocent people and nobody is even going to know why.”

    The WhatsApp threat assessment does not describe specific instances in which it knows this method has been deployed by state actors. But it cites extensive reporting by the New York Times and Amnesty International showing how countries around the world spy on dissident encrypted chat app usage, including WhatsApp, using the very same techniques.

    As war has grown increasingly computerized, metadata — information about the who, when, and where of conversations — has come to hold immense value to intelligence, military, and police agencies around the world. “We kill people based on metadata,” former National Security Agency chief Michael Hayden once infamously quipped.

    But even baseless analyses of metadata can be lethal, according to Matthew Green, a professor of cryptography at Johns Hopkins University. “These metadata correlations are exactly that: correlations. Their accuracy can be very good or even just good. But they can also be middling,” Green said. “The nature of these systems is that they’re going to kill innocent people and nobody is even going to know why.”

    It wasn’t until the April publication of an exposé about Israel’s data-centric approach to war that the WhatsApp threat assessment became a point of tension inside Meta.

    A joint report by +972 Magazine and Local Call revealed last month that Israel’s army uses a software system called Lavender to automatically greenlight Palestinians in Gaza for assassination. Tapping a massive pool of data about the Strip’s 2.3 million inhabitants, Lavender algorithmically assigns “almost every single person in Gaza a rating from 1 to 100, expressing how likely it is that they are a militant,” the report states, citing six Israeli intelligence officers. “An individual found to have several different incriminating features will reach a high rating, and thus automatically becomes a potential target for assassination.”

    WhatsApp usage is among the multitude of personal characteristics and digital behaviors the Israeli military uses to mark Palestinians for death.

    The report indicated WhatsApp usage is among the multitude of personal characteristics and digital behaviors the Israeli military uses to mark Palestinians for death, citing a book on AI targeting written by the current commander of Unit 8200, Israel’s equivalent of the NSA. “The book offers a short guide to building a ‘target machine,’ similar in description to Lavender, based on AI and machine-learning algorithms,” according to the +972 exposé. “Included in this guide are several examples of the ‘hundreds and thousands’ of features that can increase an individual’s rating, such as being in a Whatsapp group with a known militant.”

    The Israeli military did not respond to a request for comment, but told The Guardian last month that it “does not use an artificial intelligence system that identifies terrorist operatives or tries to predict whether a person is a terrorist.” The military stated that Lavender “is simply a database whose purpose is to cross-reference intelligence sources, in order to produce up-to-date layers of information on the military operatives of terrorist organizations. This is not a list of confirmed military operatives eligible to attack.”

    It was only after the publication of the Lavender exposé and subsequent writing on the topic that a wider swath of Meta staff discovered the March WhatsApp threat assessment, said the four company sources, who spoke on the condition of anonymity, fearing retaliation by their employer. Reading how governments might be able to extract personally identifying metadata from WhatsApp’s encrypted conversations triggered deep concern that this same vulnerability could feed into Lavender or other Israeli military targeting systems.

    Efforts to press Meta from within to divulge what it knows about the vulnerability and any potential use by Israel have been fruitless, the sources said, in line with what they describe as a broader pattern of internal censorship against expressions of sympathy or solidarity with Palestinians since the war began.


    Meta employees concerned by the possibility their product is putting innocent people in Israeli military crosshairs, among other concerns related to the war, have organized under a campaign they’re calling Metamates 4 Ceasefire. The group has published an open letter signed by more than 80 named staff members. One of its demands is “an end to censorship — stop deleting employee’s words internally.”

    Meta spokesperson Andy Stone told The Intercept any workplace discussion of the war is subject to the company’s general workplace conduct rules, and denied such speech has been singled out. “Our policy is written with that in mind and outlines the types of discussions that are appropriate for the workplace. If employees want to raise concerns, there are established channels for doing so.”

    Crowds gather outside of Meta headquarters in Menlo Park, Calif., to protest Mark Zuckerberg and Meta’s censoring of Palestine posts on social platforms, on Nov. 3, 2023. Photo: Tayfun Coskun/Anadolu via Getty Images

    According to the internal assessment, the stakes are high: “Inspection and analysis of network traffic is completely invisible to us, yet it reveals the connections between our users: who is in a group together, who is messaging who, and (hardest to hide) who is calling who.”

    The analysis notes that a government can easily tell when a person is using WhatsApp, in part because the data must pass through Meta’s readily identifiable corporate servers. A government agency can then unmask specific WhatsApp users by tracing their IP address, a unique number assigned to every connected device, to their internet or cellular service provider account.
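    To make that mechanism concrete, the short sketch below — written for this article rather than drawn from the assessment — shows how an operator with access to network flow records could flag subscribers who contact chat servers it has already identified. The server addresses, the `Flow` record, and the `flag_whatsapp_users` function are all hypothetical.

    ```python
    # Hypothetical sketch, not taken from the assessment: flag likely WhatsApp
    # users in ISP flow records by matching destinations against addresses
    # assumed to belong to Meta's chat servers. All values here are invented.
    from dataclasses import dataclass

    # Stand-in addresses for Meta's WhatsApp endpoints (made up, documentation range).
    KNOWN_WHATSAPP_SERVERS = {"203.0.113.10", "203.0.113.11"}

    @dataclass
    class Flow:
        subscriber_ip: str   # address the provider assigned to a customer
        dest_ip: str         # where the traffic went
        timestamp: float     # seconds since epoch
        byte_count: int      # size of the transfer

    def flag_whatsapp_users(flows: list[Flow]) -> set[str]:
        """Return subscriber IPs that contacted a known WhatsApp server."""
        return {f.subscriber_ip for f in flows if f.dest_ip in KNOWN_WHATSAPP_SERVERS}
    ```

    In practice the unmasking step is administrative rather than technical: once a subscriber IP is flagged, the operator asks or compels the internet or cellular provider to say whose account it belongs to.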

    WhatsApp’s internal security team has identified several examples of how clever observation of encrypted data can thwart the app’s privacy protections, a technique known as a correlation attack, according to this assessment. In one, a WhatsApp user sends a message to a group, resulting in a burst of data of the exact same size being transmitted to the devices of everyone in that group. Another correlation attack involves measuring the time delay between when WhatsApp messages are sent and received between two parties — enough data, the company believes, “to infer the distance to and possibly the location of each recipient.”
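    The group fan-out pattern can be illustrated with another hypothetical sketch: an observer who sees bursts of nearly identical size arriving at many devices within the same few seconds can cluster those devices as likely members of one group. The thresholds and the `infer_group_members` function below are invented for illustration, not taken from the WhatsApp assessment.

    ```python
    # Hypothetical sketch of the "group fan-out" correlation: when one user
    # messages a group, every member receives a burst of roughly the same size
    # at roughly the same time. Thresholds here are invented for illustration.
    from collections import defaultdict

    def infer_group_members(bursts, time_window=2.0, size_tolerance=16):
        """bursts: iterable of (recipient_ip, timestamp_seconds, size_bytes).

        Groups near-simultaneous, near-identical bursts and returns the sets of
        recipients that received them -- candidate members of one chat group.
        """
        buckets = defaultdict(set)
        for recipient, ts, size in bursts:
            # Quantize time and size so matching bursts land in the same bucket.
            key = (int(ts // time_window), size // size_tolerance)
            buckets[key].add(recipient)
        # Only buckets with several distinct recipients look like group fan-out.
        return [members for members in buckets.values() if len(members) >= 3]
    ```

    The crude bucketing also shows why the company calls these “correlations”: coincidental traffic and boundary effects will sweep unrelated devices into the same cluster.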

    The internal warning notes that these attacks require all members of a WhatsApp group or both sides of a conversation to be on the same network and within the same country or “treaty jurisdiction,” a possible reference to the Five Eyes spy alliance between the U.S., Australia, Canada, U.K., and New Zealand. While the Gaza Strip has its own Palestinian-operated telecoms, its internet access ultimately runs through Israeli fiber optic cables subject to Israeli state surveillance. Although the memo suggests that users in “well functioning democracies with due process and strong privacy laws” may be less vulnerable, it also highlights the NSA’s use of these telecom-tapping techniques on U.S. soil.

    “Today’s messenger services weren’t designed to hide this metadata from an adversary who can see all sides of the connection,” Green, the cryptography professor, told The Intercept. “Protecting content is only half the battle. Who you communicate [with] and when is the other half.”

    The assessment reveals WhatsApp has been aware of this threat since last year, and notes that the same surveillance techniques work against other competing apps. “Almost all major messenger applications and communication tools do not include traffic analysis attacks in their threat models,” Donncha Ó Cearbhaill, head of Amnesty International’s Security Lab, told The Intercept. “While researchers have known these attacks are technically possible, it was an open question if such attacks would be practical or reliable on a large scale, such as [a] whole country.”

    The assessment makes clear that WhatsApp engineers grasp the severity of the problem, but also understand how difficult it might be to convince their company to fix it. The fact that these de-anonymization techniques have been so thoroughly documented and debated in academic literature, Green explained, is a function of just how “incredibly difficult” it is to neutralize them for a company like Meta. “It’s a direct tradeoff between performance and responsiveness on one hand, and privacy on the other,” he said.

    Asked what steps the company has taken to shore up the app against traffic analysis, Meta’s spokesperson told The Intercept, “We have a proven track record addressing issues we identify and have worked to hold bad actors accountable. We have the best engineers in the world proactively looking to further harden our systems against any future threats and we will continue to do so.”

    The WhatsApp threat assessment notes that beefing up security comes at a cost for an app that prides itself on mass appeal. It will be difficult to better protect users against correlation attacks without making the app worse in other ways, the document explains. For a publicly traded giant like Meta, protecting at-risk users will collide with the company’s profit-driven mandate of making its software as accessible and widely used as possible.

    “The tension is always going to be market share, market dominance.”

    “Meta has a bad habit of not responding to things until they become overwhelming problems,” one Meta source told The Intercept, citing the company’s inaction when Facebook was used to incite violence during Myanmar’s Rohingya genocide. “The tension is always going to be market share, market dominance, focusing on the largest population of people rather than a small amount of people [that] could be harmed tremendously.”

    The report warns that adding an artificial delay to messages to throw off attempts to geolocate the sender and receiver of data, for instance, will make the app feel slower to all 2 billion users — most of whom will never have to worry about the snooping of intelligence agencies. Making the app transmit a regular stream of decoy data to camouflage real conversations, another idea floated in the assessment, could throw off snooping governments. But it could also drain battery life and rack up costly mobile data bills.
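    A minimal sketch of those two ideas, again invented for illustration rather than drawn from the memo, makes the tradeoff visible: every message is padded up to a fixed size and held back by a random delay before it is sent, which blunts size and timing correlations at a direct cost in latency, battery, and data.

    ```python
    # Hypothetical sketch of the mitigations floated in the memo: pad every
    # message to a fixed bucket size and hold it for a random delay before
    # sending, so size and timing no longer correlate cleanly across the
    # network. Not WhatsApp code; the parameters are invented.
    import os
    import random
    import time

    def send_with_cover(send_fn, payload: bytes,
                        max_jitter_s: float = 1.5,
                        pad_to: int = 4096) -> None:
        # Pad short messages with random bytes so they all look the same size.
        padded = payload + os.urandom(max(0, pad_to - len(payload)))
        # Hold the message for a random interval -- latency every user feels.
        time.sleep(random.uniform(0.0, max_jitter_s))
        send_fn(padded)
    ```

    A real protocol would also have to frame the padding so the recipient can strip it, and would send decoy messages even when the user is idle — precisely the battery and data cost the assessment worries about.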

    To WhatsApp’s security personnel, the right approach is clear. “WhatsApp Security cannot solve traffic analysis alone,” the assessment reads. “We must first all agree to take on this fight and operate as one team to build protections for these at-risk, targeted users. This is where the rubber meets the road when balancing WhatsApp’s overall product principle of privacy and individual team priorities.”

    The memo suggests WhatsApp may adopt a hardened security mode for at-risk users similar to Apple’s “Lockdown Mode” for iOS. But even this extra setting could accidentally imperil users in Gaza or elsewhere, according to Green. “People who turn this feature on could also stand out like a sore thumb,” he said. “Which itself could inform a targeting decision. Really unfortunate if the person who does it is some kid.”

    The post This Undisclosed WhatsApp Vulnerability Lets Governments See Who You Message appeared first on The Intercept.
