Author: Sam Biddle

  • The Staten Island district attorney’s use of the highly controversial Clearview face recognition system included attempts to dig up the social media accounts of homicide victims and was paid for with equally controversial asset forfeiture cash, according to city records provided to The Intercept.

    Clearview has garnered international attention and intense criticism for its simple premise: What if you could instantly identify anyone in the world with only their picture? Using billions of images scraped from social media sites, Clearview sells police and other governmental agencies the ability to match a photo to a name using face recognition, no search warrant required — a power civil libertarians and privacy advocates say simply places too much unsupervised power in the hands of police.

    The use of Clearview by the Staten Island district attorney’s office was first reported by Gothamist, citing city records obtained by the Legal Aid Society. Subsequent records procured via New York State Freedom of Information Law request and provided to The Intercept now confirm the initial concerns about the tool’s largely unsupervised use by prosecutors. According to spokesperson Ryan Lavis, the DA’s office “completely stopped utilizing Clearview as an investigative tool last year.”

    Yet the documents provide new information about how Staten Island prosecutors used the notorious face recognition tool and show that the software was paid for with funds furnished by the Justice Department’s Equitable Sharing Program. The program lets state and local police hand seized cash and property over to a federal law enforcement agency, whereupon up to 80 percent of the proceeds are sent back to the original state or local department to pocket.

    A May 2 letter to Attorney General Merrick Garland by Reps. Jamie Raskin, D-Md., and Nancy Mace, R-S.C., alleged that the federal program is routinely abused by police. “We are concerned that the Equitable Sharing Program creates a loophole allowing state and local law enforcement to seize assets from individuals without bringing criminal charges or a conviction, even in states that prohibit civil asset forfeiture,” reads the letter, first reported by The Hill.

    Public records turned over to the Legal Aid Society in response to its request for information about how the Staten Island DA’s office paid for Clearview included a document titled “Guide to Equitable Sharing for State, Local, and Tribal Law Enforcement Agencies,” which outlines the program and how state entities can make use of it. In a letter sent to the Legal Aid Society and shared with The Intercept, the DA’s office confirmed that federal forfeiture proceeds had paid for its Clearview license. Asset forfeiture has become a contentious and frequently abused means of padding department budgets around the country, and critics say the Equitable Sharing Program gives police in states with laws constraining asset seizures a convenient federal workaround. While civil asset forfeiture is permitted in New York, the state places some limits on how and when seizures can be conducted, rules that the federal program could let a local district attorney skirt.

    “The revelation that the funds used to access the Clearview AI service was derived from property obtained without due process, from the same individuals who are most at risk to the devastating consequences of its flaws, is nearly dystopian,” said Diane Akerman, an attorney with the Legal Aid Society’s Digital Forensics Unit. “Perversely, the most overpoliced and targeted communities would be footing the bill for such surveillance through police seizures of their assets,” Akerman added.

    Albert Fox Cahn, executive director of the New York-based Surveillance Technology Oversight Project, told The Intercept that there’s a troubling aptness to the funding. “You have New Yorkers whose assets are being stolen by the police to pay for facial recognition software that works by stealing our faces from social media,” Cahn noted in an interview. To face recognition critics like Cahn, Clearview is emblematic of the technology’s ability to simultaneously eradicate privacy expectations and enhance the surveillance powers of the state. “There’s this pattern here of the public’s money and data being taken without consent in these ways that are deemed lawful but seem criminal. … These sorts of search tools not only destroy our privacy, but erode the bedrock of democracy.”

    Among the disclosed records is a long list, albeit almost entirely redacted, of Clearview searches conducted by the DA’s office from 2019 to 2021, including the general purpose of the queries and names of the targets, which The Intercept has redacted to protect the privacy of those scrutinized by the DA. These search logs indicate that on many occasions, Clearview was tapped not to identify suspects in criminal investigations but to find and search through the social media histories of people whose identities were already known, including homicide victims and unspecified “personnel.” A handwritten note appended to a search conducted in January 2020 also indicates that the DA’s office used Clearview to assist in a “deportation case” — a law enforcement investigation not typically within the DA’s remit, particularly given New York’s status as a so-called sanctuary city. “Despite what we claim as being a sanctuary city, there’s no law in New York whatsoever that stops a conservative DA’s office like Staten Island from partnering with ICE,” said Cahn, referring to U.S. Immigration and Customs Enforcement.

    The search records indicate that face recognition technology isn’t just proliferating among government agencies but is also being put to broader uses than the public might expect. “Typically, the NYPD’s use of facial recognition technology has been to attempt to identify unknown witnesses or suspects,” Akerman explained. “The Richmond County District Attorney’s Office” — Richmond County is coextensive with the Staten Island borough, and the DA operates as a county official — “is engaging in a new use of the technology — as a form of surveillance of a known person’s social media.” Akerman pointed out that the New York Police Department, the country’s largest police force, already uses face recognition technologies, and questioned why the smallest DA’s office in the city needed such a powerful tool, particularly given that prosecutors already routinely obtain intimately personal data about individuals during criminal investigations. “DA’s offices already obtain warrants, which are largely rubber-stamped, to search individuals’ cellphones, social media, phone location records, etc., regardless of whether there is a connection to the incident.”

    Right-wing billionaire Peter Thiel holds $100 bills as he speaks during the Bitcoin 2022 conference on April 7, 2022, in Miami.

    Photo: Marco Bello/Getty Images


    Although face recognition is a potentially invasive and dangerous technology no matter how or where it’s deployed, the Peter Thiel-backed Clearview and its right-wing founder have become emblematic of the threat that the powerful and typically unsupervised software poses, particularly given its rapid adoption by police forces across the country. While the company is already eagerly selling its software to surveillance-hungry police departments, its ambitions are far greater. In February, the Washington Post reported that Clearview recently boasted to investors that it was working toward growing its database of faces to 100 billion images by next year, a number it says would mean “almost everyone in the world will be identifiable” with a simple snapshot. In a sign that the company is expanding its clientele in addition to its capabilities, the Ukrainian military has reportedly begun using Clearview to identify Russian corpses.

    Critics of Clearview say the technology represents an untenable threat to personal privacy and, by virtue of the fact that it requires no judicial oversight, an assault on Fourth Amendment protections against undue searches. Clearview’s degree of accuracy is also unclear, a further alarm for civil liberties advocates no matter how well the software works: If the technology performs as advertised, its surveillance powers are an existential threat to privacy rights, but if it’s inaccurate, it risks implicating innocent people — particularly people of color — in crimes.

    The Staten Island DA’s office declined to answer questions about the expansive use of Clearview documented in the search logs.

    Cahn, of the Surveillance Technology Oversight Project, agreed that the disclosed records are a worrying sign that Clearview is being used far more broadly than initially advertised. “It’s increasingly clear that Clearview is not just a facial recognition tool, it’s a social media monitoring tool,” he said. “When so many people have social media accounts that they try to keep anonymous, where they try to keep their names off of the account, this becomes yet another tool to map out what people say, what they post, when they’re trying to keep their identities secret.”

    The post Staten Island DA Bought Clearview Face Recognition Software With Civil Forfeiture Cash appeared first on The Intercept.

  • In the months leading up to Russia’s invasion of Ukraine, two obscure American startups met to discuss a potential surveillance partnership that would merge the ability to track the movements of billions of people via their phones with a constant stream of data purchased directly from Twitter. According to Brendon Clark of Anomaly Six — or “A6” — the combination of its cellphone location-tracking technology with the social media surveillance provided by Zignal Labs would permit the U.S. government to effortlessly spy on Russian forces as they amassed along the Ukrainian border, or similarly track Chinese nuclear submarines. To prove that the technology worked, Clark pointed A6’s powers inward, spying on the National Security Agency and CIA, using their own cellphones against them.

    Virginia-based Anomaly Six was founded in 2018 by two ex-military intelligence officers and maintains a public presence that is scant to the point of mysterious, its website disclosing nothing about what the firm actually does. But there’s a good chance that A6 knows an immense amount about you. The company is one of many that purchases vast reams of location data, tracking hundreds of millions of people around the world by exploiting a poorly understood fact: Countless common smartphone apps are constantly harvesting your location and relaying it to advertisers, typically without your knowledge or informed consent, relying on disclosures buried in the legalese of the sprawling terms of service that the companies involved count on you never reading. Once your location is beamed to an advertiser, there is currently no law in the United States prohibiting the further sale and resale of that information to firms like Anomaly Six, which are free to sell it to their private sector and governmental clientele. For anyone interested in tracking the daily lives of others, the digital advertising industry is taking care of the grunt work day in and day out — all a third party need do is buy access.

    Company materials obtained by The Intercept and Tech Inquiry provide new details of just how powerful Anomaly Six’s globe-spanning surveillance powers are, capable of providing any paying customer with abilities previously reserved for spy bureaus and militaries.


    According to audiovisual recordings of an A6 presentation reviewed by The Intercept and Tech Inquiry, the firm claims that it can track roughly 3 billion devices in real time, equivalent to a fifth of the world’s population. The staggering surveillance capacity was cited during a pitch to provide A6’s phone-tracking capabilities to Zignal Labs, a social media monitoring firm that leverages its access to Twitter’s rarely granted “firehose” data stream to sift through hundreds of millions of tweets per day without restriction. With their powers combined, A6 proposed, Zignal’s corporate and governmental clients could not only surveil global social media activity, but also determine who exactly sent certain tweets, where they sent them from, who they were with, where they’d been previously, and where they went next. This enormously augmented capability would be an obvious boon to both regimes keeping tabs on their global adversaries and companies keeping tabs on their employees.

    The source of the materials, who spoke on the condition of anonymity to protect their livelihood, expressed grave concern about the legality of government contractors such as Anomaly Six and Zignal Labs “revealing social posts, usernames, and locations of Americans” to “Defense Department” users. The source also asserted that Zignal Labs had willfully deceived Twitter by withholding the broader military and corporate surveillance use cases of its firehose access. Twitter’s terms of service technically prohibit a third party from “conducting or providing surveillance or gathering intelligence” using its access to the platform, though the practice is common and enforcement of this ban is rare. Asked about these concerns, spokesperson Tom Korolsyshun told The Intercept “Zignal abides by privacy laws and guidelines set forth by our data partners.”

    A6 claims that its GPS dragnet yields between 30 and 60 location pings per device per day and 2.5 trillion locational data points annually worldwide, adding up to 280 terabytes of location data per year and many petabytes in total, suggesting that the company surveils roughly 230 million devices on an average day. A6’s salesperson added that while many rival firms gather personal location data via a phone’s Bluetooth and Wi-Fi connections, which provide only general whereabouts, Anomaly Six harvests only GPS pinpoints, potentially accurate to within several feet. In addition to location, A6 claimed that it has built a library of over 2 billion email addresses and other personal details that people share when signing up for smartphone apps, which can be used to identify the owner of a given GPS ping. All of this is powered, A6’s Clark noted during the pitch, by general ignorance of the ubiquity and invasiveness of smartphone software development kits, known as SDKs: “Everything is agreed to and sent by the user even though they probably don’t read the 60 pages in the [end user license agreement].”
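
    Taken at face value, A6’s own numbers roughly cohere. The back-of-envelope arithmetic below (a sketch; the per-record byte size is derived from the claimed totals, not something A6 stated) shows how the roughly 230 million devices-per-day figure follows from the pitch’s claims:

    ```python
    # Sanity check of Anomaly Six's claimed data volumes, using only the
    # figures cited in the pitch. The bytes-per-ping value is derived
    # here, not stated by A6.
    pings_per_year = 2.5e12          # claimed locational data points per year
    pings_per_device_per_day = 30    # low end of the claimed 30-60 range
    terabytes_per_year = 280         # claimed annual storage growth

    pings_per_day = pings_per_year / 365                        # ~6.8 billion
    devices_per_day = pings_per_day / pings_per_device_per_day
    bytes_per_ping = terabytes_per_year * 1e12 / pings_per_year

    print(f"devices per day: {devices_per_day:,.0f}")   # ~228 million
    print(f"bytes per ping:  {bytes_per_ping:.0f}")     # ~112 bytes per record
    ```

    At the high end of the claimed range, 60 pings per device per day, the same totals would imply closer to 115 million devices daily, so the 230 million figure tracks the low end of A6’s own numbers.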

    The Intercept was not able to corroborate Anomaly Six’s claims about its data or capabilities, which were made in the context of a sales pitch. Privacy researcher Zach Edwards told The Intercept that he believed the claims were plausible but cautioned that firms can be prone to exaggerating the quality of their data. Mobile security researcher Will Strafach agreed, noting that A6’s data sourcing boasts “sound alarming but aren’t terribly far off from ambitious claims by others.” According to Wolfie Christl, a researcher specializing in the surveillance and privacy implications of the app data industry, even if Anomaly Six’s capabilities are exaggerated or based partly on inaccurate data, a company possessing even a fraction of these spy powers would be deeply concerning from a personal privacy standpoint.

    Reached for comment, Zignal’s spokesperson provided the following statement: “While Anomaly 6 has in the past demonstrated its capabilities to Zignal Labs, Zignal Labs does not have a relationship with Anomaly 6. We have never integrated Anomaly 6’s capabilities into our platform, nor have we ever delivered Anomaly 6 to any of our customers.”

    When asked about the company’s presentation and its surveillance capabilities, Anomaly Six co-founder Brendan Huff responded in an email that “Anomaly Six is a veteran-owned small business that cares about American interests, natural security, and understands the law.”

    Companies like A6 are fueled by the ubiquity of SDKs, which are turnkey packages of code that software-makers can slip into their apps to easily add functionality and quickly monetize their offerings with ads. According to Clark, A6 can siphon exact GPS measurements gathered through covert partnerships with “thousands” of smartphone apps, an approach he described in his presentation as a “farm-to-table approach to data acquisition.” This data isn’t just useful for people hoping to sell you things: The largely unregulated global trade in personal data is increasingly finding customers not only at marketing agencies but also at federal agencies tracking immigrants and drone targets, as well as pursuing sanctions enforcement and tax evasion. According to public records first reported by Motherboard, U.S. Special Operations Command paid Anomaly Six $590,000 in September 2020 for a year of access to the firm’s “commercial telemetry feed.”

    Anomaly Six software lets its customers browse all of this data in a convenient and intuitive Google Maps-style satellite view of Earth. Users need only find a location of interest and draw a box around it, and A6 fills that boundary with dots denoting smartphones that passed through that area. Clicking a dot will provide you with lines representing the device’s — and its owner’s — movements around a neighborhood, city, or indeed the entire world.
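
    The Intercept has not examined A6’s code, but the core geofencing query Clark demonstrated is conceptually simple. Below is a minimal sketch of the idea; the record layout and function names are invented for illustration and are not taken from Anomaly Six’s product:

    ```python
    # Illustrative geofence query over ad-derived location pings.
    # Nothing here reflects Anomaly Six's actual schema or code.
    from dataclasses import dataclass

    @dataclass
    class Ping:
        device_id: str   # pseudonymous advertising ID, not a name
        lat: float
        lon: float
        ts: int          # Unix epoch seconds

    def devices_in_box(pings, south, west, north, east):
        """Drawing the box: every device that ever passed through the area."""
        return {p.device_id for p in pings
                if south <= p.lat <= north and west <= p.lon <= east}

    def track(pings, device_id):
        """Clicking a dot: the device's full movement history, in time order."""
        return sorted((p for p in pings if p.device_id == device_id),
                      key=lambda p: p.ts)
    ```

    Finding phones that visited two separate sites, as in the NSA and CIA demonstration described below, is then just the set intersection of two such box queries.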

    As the Russian military continued its buildup along the country’s border with Ukraine, the A6 sales rep detailed how GPS surveillance could help turn Zignal into a sort of private spy agency capable of assisting state clientele in monitoring troop movements. Imagine, Clark explained, if the crisis zone tweets Zignal rapidly surfaces through the firehose were only a starting point. Using satellite imagery tweeted by accounts conducting increasingly popular “open-source intelligence,” or OSINT, investigations, Clark showed how A6’s GPS tracking would let Zignal clients determine not simply that the military buildup was taking place, but track the phones of Russian soldiers as they mobilized to determine exactly where they’d trained, where they were stationed, and which units they belonged to. In one case, Clark showed A6 software tracing Russian troop phones backward through time, away from the border and back to a military installation outside Yurga, and suggested that they could be traced further, all the way back to their individual homes. Previous reporting by the Wall Street Journal indicates that this phone-tracking method is already used to monitor Russian military maneuvers and that American troops are just as vulnerable.

    In another A6 map demonstration, Clark zoomed in closely on the town of Molkino, in southern Russia, where the Wagner Group, an infamous Russian mercenary outfit, is reportedly headquartered. The map showed dozens of dots indicating devices at the Wagner base, along with scattered lines showing their recent movements. “So you can just start watching these devices,” Clark explained. “Any time they start leaving the area, I’m looking at potential Russian predeployment activity for their nonstandard actors, their nonuniform people. So if you see them go into Libya or Democratic Republic of the Congo or things like that, that can help you better understand potential soft power actions the Russians are doing.”

    The pitch noted that this kind of mass phone surveillance could be used by Zignal to aid unspecified clients with “counter-messaging,” debunking Russian claims that such military buildups were mere training exercises and not the runup to an invasion. “When you’re looking at counter-messaging, where you guys have a huge part of the value you provide your client in the counter-messaging piece is — [Russia is] saying, ‘Oh, it’s just local, regional, um, exercises.’ Like, no. We can see from the data that they’re coming from all over Russia.”

    To fully impress upon its audience the immense power of this software, Anomaly Six did what few in the world can claim to do: spied on American spies. “I like making fun of our own people,” Clark began. Pulling up a Google Maps-like satellite view, the sales rep showed the NSA’s headquarters in Fort Meade, Maryland, and the CIA’s headquarters in Langley, Virginia. With virtual boundary boxes drawn around both, a technique known as geofencing, A6’s software revealed an incredible intelligence bounty: 183 dots representing phones that had visited both agencies, potentially belonging to American intelligence personnel, with hundreds of lines streaking outward to reveal their movements, ready to be tracked throughout the world. “So, if I’m a foreign intel officer, that’s 183 start points for me now,” Clark noted.

    The NSA and CIA both declined to comment.

    Anomaly Six tracked a device that had visited the NSA and CIA headquarters to an air base outside of Zarqa, Jordan.

    Screenshot: The Intercept / Google Maps


    Clicking on one of the dots from the NSA allowed Clark to follow that individual’s exact movements, virtually every moment of their life, from the previous year until the present. “I mean, just think of fun things like sourcing,” Clark said. “If I’m a foreign intel officer, I don’t have access to things like the agency or the fort, I can find where those people live, I can find where they travel, I can see when they leave the country.” The demonstration then tracked the individual around the United States and abroad to a training center and airfield roughly an hour’s drive northwest of Muwaffaq Salti Air Base in Zarqa, Jordan, where the U.S. reportedly maintains a fleet of drones.

    “There is sure as hell a serious national security threat if a data broker can track a couple hundred intelligence officials to their homes and around the world,” Sen. Ron Wyden, D-Ore., a vocal critic of the personal data industry, told The Intercept in an interview. “It doesn’t take a lot of creativity to see how foreign spies can use this information for espionage, blackmail, all kinds of, as they used to say, dastardly deeds.”

    Back stateside, the person was tracked to their own home. A6’s software includes a function called “Regularity,” a button clients can press that automatically analyzes frequently visited locations to deduce where a target lives and works, even though the GPS pinpoints sourced by A6 omit the phone owner’s name. Privacy researchers have long shown that even “anonymized” location data is trivially easy to attach to an individual based on where they frequent most, a fact borne out by A6’s own demonstration. After hitting the “Regularity” button, Clark zoomed in on a Google Street View image of their home.
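
    Researchers have shown for years how little work this kind of deanonymization takes. A toy version of what a “Regularity”-style feature might do (an assumption for illustration; A6’s actual method is not public) simply looks for the grid cell a device occupies most often overnight:

    ```python
    # Toy home-inference over pseudonymous pings: the cell a device most
    # often occupies in the small hours is very likely its owner's home.
    # A guess at how a "Regularity"-style feature could work; Anomaly
    # Six's actual method is not public.
    from collections import Counter
    from datetime import datetime, timezone

    def infer_home(device_pings, night_hours=range(0, 6), precision=3):
        """device_pings: iterable of (lat, lon, unix_ts) for one device.
        Returns the ~100-meter grid cell most visited overnight.
        (Real tools would convert timestamps to local time first.)"""
        overnight = Counter(
            (round(lat, precision), round(lon, precision))
            for lat, lon, ts in device_pings
            if datetime.fromtimestamp(ts, tz=timezone.utc).hour in night_hours
        )
        cell, _count = overnight.most_common(1)[0]
        return cell  # ready to feed a reverse geocoder or Street View lookup
    ```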

    “Industry has repeatedly claimed that collecting and selling this cellphone location data won’t violate privacy because it is tied to device ID numbers instead of people’s names. This feature proves just how facile those claims are,” said Nate Wessler, deputy director of the American Civil Liberties Union’s Speech, Privacy, and Technology Project. “Of course, following a person’s movements 24 hours a day, day after day, will tell you where they live, where they work, who they spend time with, and who they are. The privacy violation is immense.”

    The demo continued with a surveillance exercise tagging U.S. naval movements, using a tweeted satellite photo of the USS Dwight D. Eisenhower in the Mediterranean Sea snapped by the commercial firm Maxar Technologies. Clark broke down how a single satellite snapshot could be turned into surveillance that he claimed was even more powerful than that executed from space. Using the latitude and longitude coordinates appended to the Maxar photo along with its time stamp, A6 was able to pick up a single phone signal from the ship’s position at that moment, south of Crete. “But it only takes one,” Clark noted. “So when I look back where that one device goes: Oh, it goes back to Norfolk. And actually, on the carrier in the satellite picture — what else is on the carrier? When you look, here are all the other devices.” His screen revealed a view of the carrier docked in Virginia, teeming with thousands of colorful dots representing phone location pings gathered by A6. “Well, now I can see every time that that ship is deploying. I don’t need satellites right now. I can use this.”
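
    Again, the mechanics of such a lookup are not exotic. A hedged sketch of the match Clark described (the radius, time window, and record layout are illustrative assumptions, not A6’s): filter the ping archive by the photo’s time stamp, then by great-circle distance from its coordinates:

    ```python
    # Matching a geotagged, time-stamped photo to nearby location pings.
    # The thresholds below are illustrative assumptions.
    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in kilometers."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * asin(sqrt(a))

    def pings_near_photo(pings, photo_lat, photo_lon, photo_ts,
                         radius_km=1.0, window_s=3600):
        """pings: iterable of (device_id, lat, lon, unix_ts). Returns
        devices seen near the photo's position around its time stamp."""
        return [p for p in pings
                if abs(p[3] - photo_ts) <= window_s
                and haversine_km(p[1], p[2], photo_lat, photo_lon) <= radius_km]
    ```

    A single hit is enough: the same history query that followed the NSA visitor can then trace that one device back to port in Norfolk.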

    Though Clark conceded that the company has far less data available on Chinese phone owners, the demo concluded with a GPS ping picked up aboard an alleged Chinese nuclear submarine. Using only unclassified satellite imagery and commercial advertising data, Anomaly Six was able to track the precise movements of the world’s most sophisticated military and intelligence forces. With tools like those sold by A6 and Zignal, even an OSINT hobbyist would have global surveillance powers previously held only by nations. “People put way too much on social media,” Clark added with a laugh.

    As location data has proliferated largely unchecked by government oversight in the United States, one hand washes the other, creating a private sector with state-level surveillance powers that can also feed the state’s own growing appetite for surveillance without the usual judicial scrutiny. Critics say the loose trade in advertising data constitutes a loophole in the Fourth Amendment, which requires the government to make its case to a judge before obtaining location coordinates from a cellular provider. But the total commodification of phone data has made it possible for the government to skip the court order and simply buy data that’s often even more accurate than what could be provided by the likes of Verizon. Civil libertarians say this leaves a dangerous gap between the protections intended by the Constitution and the law’s grasp on the modern data trade.

    “The Supreme Court has made clear that cellphone location information is protected under the Fourth Amendment because of the detailed picture of a person’s life it can reveal,” explained Wessler. “Government agencies’ purchases of access to Americans’ sensitive location data raise serious questions about whether they are engaged in an illegal end run around the Fourth Amendment’s warrant requirement. It is time for Congress to end the legal uncertainty enabling this surveillance once and for all by moving toward passage of the Fourth Amendment Is Not For Sale Act.”

    Though such legislation could restrict the government’s ability to piggyback off commercial surveillance, app-makers and data brokers would remain free to surveil phone owners. Still, Wyden, a co-sponsor of that bill, told The Intercept that he believes “this legislation sends a very strong message” to the “Wild West” of ad-based surveillance but that clamping down on the location data supply chain would be “certainly a question for the future.” Wyden suggested that protecting a device’s location trail from snooping apps and advertisers might be best handled by the Federal Trade Commission. Separate legislation previously introduced by Wyden would empower the FTC to crack down on promiscuous data sharing and broaden consumers’ ability to opt out of ad tracking.

    A6 is far from the only firm engaged in privatized device-tracking surveillance. Three of Anomaly Six’s key employees previously worked at competing firm Babel Street, which named all three of them in a 2018 lawsuit first reported by the Wall Street Journal. According to the legal filing, Brendan Huff and Jeffrey Heinz co-founded Anomaly Six (and lesser-known Datalus 5) months after ending their employment at Babel Street in April 2018, with the intent of replicating Babel’s cellphone location surveillance product, “Locate X,” in a partnership with major Babel competitor Semantic AI. In July 2018, Clark followed Huff and Heinz by resigning from his position as Babel’s “primary interface to … intelligence community clients” and becoming an employee of both Anomaly Six and Semantic.

    Like its rival Dataminr, Zignal touts its mundane partnerships with the likes of Levi’s and the Sacramento Kings, marketing itself publicly in vague terms that carry little indication that it uses Twitter for intelligence-gathering purposes, ostensibly in clear violation of Twitter’s anti-surveillance policy. Zignal’s ties to government run deep: Zignal’s advisory board includes a former head of the U.S. Army Special Operations Command, Charles Cleveland, as well as the CEO of the Rendon Group, John Rendon, whose bio notes that he “pioneered the use of strategic communications and real-time information management as an element of national power, serving as a consultant to the White House, U.S. National Security community, including the U.S. Department of Defense.” Further, public records state that Zignal was paid roughly $4 million to subcontract under defense staffing firm ECS Federal on Project Maven for “Publicly Available Information … Data Aggregation” and a related “Publicly Available Information enclave” in the U.S. Army’s Secure Unclassified Network.

    The remarkable world-spanning capabilities of Anomaly Six are representative of the quantum leap occurring in the field of OSINT. While the term is often used to describe the internet-enabled detective work that draws on public records to, say, pinpoint the location of a war crime from a grainy video clip, “automated OSINT” systems now use software to combine enormous datasets that far outpace what a human could do on their own. Automated OSINT has also become something of a misnomer, using information that is by no means “open source” or in the public domain, like commercial GPS data that must be bought from a private broker.

    While OSINT techniques are powerful, they are generally shielded from accusations of privacy violation because the “open source” nature of the underlying information means that it was already to some extent public. This is a defense that Anomaly Six, with its trove of billions of purchased data points, can’t muster. In February, the Dutch Review Committee on the Intelligence and Security Services issued a report on automated OSINT techniques and the threat to personal privacy they may represent: “The volume, nature and range of personal data in these automated OSINT tools may lead to a more serious violation of fundamental rights, in particular the right to privacy, than consulting data from publicly accessible online information sources, such as publicly accessible social media data or data retrieved using a generic search engine.” This fusion of publicly available data, privately procured personal records, and computerized analysis isn’t the future of governmental surveillance, but the present. Last year, the New York Times reported that the Defense Intelligence Agency “buys commercially available databases containing location data from smartphone apps and searches it for Americans’ past movements without a warrant,” a surveillance method now regularly practiced throughout the Pentagon, the Department of Homeland Security, the IRS, and beyond.

    The post American Phone-Tracking Firm Demo’d Surveillance Powers by Spying on CIA and NSA appeared first on The Intercept.

  • An unprecedented spree of policy changes and carveouts aimed at protecting Ukrainian civilians from Facebook’s censorship systems has earned praise from human rights groups and free expression advocates. But a new open letter addressed to Facebook and its social media rivals questions why these companies seem to care far more about some attempts to resist foreign invasion than others.

    In response to the Russian invasion of Ukraine, Meta Platforms, which owns Facebook and Instagram, rapidly changed its typically strict speech rules in order to exempt a variety of posts that would have otherwise been deleted for violating the company’s prohibition against hate speech and violent incitement.

    Internal Meta materials reviewed by The Intercept show that in early March the company temporarily enacted an exception to its hate speech policy permitting Facebook and Instagram users in Ukraine to call for the “explicit removal [of] Russians from Ukraine and Belarus,” posts that would have otherwise been deleted for violating the company’s ban on calling for the “exclusion or segregation” of people based on their national origin. The rule change, previously reported by the New York Times, was part of a broader package of carveouts that included a rare dispensation to call for the death of Russian President Vladimir Putin, use dehumanizing language against Russian soldiers, and praise the notorious Azov Battalion of the Ukrainian National Guard, previously banned from the platform due to its neo-Nazi ideology.

    While Meta has argued that these unusual steps are necessary to ensure that Ukrainian civilians can speak in their own defense online, critics say the changes highlight the extent to which non-Western civilian populations are neglected by the platform and illustrate the pitfalls of a California tech company dictating what’s permissible in a foreign war zone.

    In a statement signed by 31 civil society and human rights groups, this criticism is directed squarely at American internet titans like Facebook. “While we recognize the efforts of tech companies to uphold democracy and human rights in Ukraine, we call for long term investment in human rights, accountability, and a transparent, equal and consistent application of policies to uphold the rights of users worldwide,” reads the letter, which was shared with The Intercept ahead of publication. “Once platforms began to take action in Ukraine, they took extraordinary steps that they have been unwilling to take elsewhere. From the Syrian conflict to the genocide of the Rohingya in Myanmar, other crisis situations have not received the same amount of support even when lives are at stake.”

    The open letter calls on Facebook and other American social media platforms to increase the scope and transparency of their human rights due diligence and to apply it equitably. “Currently, platforms are devoting greater time, attention, and resources to their users in the United States and Western Europe,” the letter says. “This happens both because of the potential for greater regulation by the United States and the European Union and because media based in the United States plays a significant role in influencing public discourse about companies, prompting greater attention to issues of public interest for the United States.”

    Dia Kayyali, a researcher who studies the offline effects of content moderation at the nonprofit Mnemonic and organized the open letter, said, “While initially it seemed like platforms were responding rapidly and forcefully, it became clear that the reality was more complicated.” Kayyali noted that “like activists from so many other conflict zones, Ukrainian civil society tried to get platforms to take their concerns seriously after Russia’s initial invasion in 2014. It wasn’t until media pressure and interest from the U.S. and Western European countries that platforms really started taking action.” As a result, civilians suffering in conflict zones that draw comparably little attention from Western media are still waiting for hands-on treatment from the American companies that dominate so much of the internet.

    Signage in front of Meta Platforms headquarters in Menlo Park, California, on Jan. 31, 2022.

    Photo: David Paul Morris/Bloomberg via Getty Images


    The fact that Meta finds itself in the difficult position of policing the speech of a country defending itself against a foreign invader is a sign of its own incredible success and the extent to which it has consolidated control over global communications, with more than 2 billion users worldwide all subject to the company’s definitions of permitted speech. But the frequency of closed-door reversals and carveouts suggests that these rules are not rules at all, but buoys floated by the currents of popular opinion and American foreign policy. Critics say it’s no coincidence that Facebook’s decisions about global good guys and bad guys almost always match the official determinations of the United States, a national political bias that keeps the rules intact when an American ally (or the U.S. itself) is the invader, not the invaded.

    After Meta’s hate speech and incitement exemptions expired last month, a new internal policy memo distributed to Meta content moderators outlined the company’s current approach. Headlined “Removing ambiguity — Not permitting hate towards Russian citizens & allowing Ukrainians to call for self-defense from invasion,” the memo, reviewed by The Intercept, explained that the company is “allowing Ukrainians to call for national self defense in the context of the invasion. It applies only within Ukraine and only in the context of speech regarding the Russian military invasion of Ukraine.” Like the hate speech exemptions, the policy is a clear loosening of company rules to spare Ukrainians from running afoul of the company’s own twitchy censorship apparatus: “Our standard rules would limit our users ability to make their voices heard at a critical time in Ukrainian history,” as the company put it to moderators. It’s unclear how exactly Meta is defining a “call for self-defense from invasion,” and the only examples of the sort of speech the company is seeking to protect are very broad: “To call for national defense, discuss Ukraine’s military actions, and react to Ukrainian President Zelensky’s calls for civilians to take up arms in defense of their homes.”

    “The entire guidance and language reeks of double standards,” said Marwa Fatafta, head of Middle East and North Africa digital rights policy at Access Now, a signatory to the new statement. Fatafta noted that the new language was encouraging because it suggested that Meta is “listening carefully, attuning, and adjusting their policies as the situation evolves,” but that enshrining expressions of “national self-defense” in Ukraine should mean enshrining resistance to military aggression outside Europe too. “Meta agrees that they should respect national calls for self-defense for Ukrainians, but they have never granted that to Syrians or Palestinians,” Fatafta noted. “Imagine Facebook making an exception for Hamas calling for resistance or self-defense against the Israeli occupation. It is unthinkable.”

    Protecting the ability of Ukrainians suffering at the hands of an invading army to freely express their pain and hope against defeat is uncontroversial. But this license to openly root against the opposing team is not doled out to all civilians equally: “Are Yemenis allowed to call for Saudis to leave their country? Are Palestinians allowed to call for Israelis to leave their country?” asked Jillian York, director for international freedom of expression at the Electronic Frontier Foundation. Meta did not respond when asked by The Intercept if it provides similar latitude to other populations to call for self-defense in the face of a foreign invasion or occupation. “We know they have done the exact opposite in Palestine,” York said, citing deletions of content protesting the Israeli occupation. “They pretty consistently create policies that line up with Western goals.”

    Meta’s lopsided notions of whose “national self-defense” is worth protecting have spurred strife inside the company too. A Meta source familiar with the company’s content policy discussions told The Intercept that the changes “sparked heated and emotional debate on Meta’s internal Workplace,” a communication tool used at the company. “Voices of Russian employees were joined by the rest of Meta’s internal community, citing divergence from core principles and double standards in relation to other military conflicts.”

    Another Meta source, who also spoke to The Intercept on the condition of anonymity to protect their livelihood, noted that internally “there have been a lot of questions about why it is OK to do this to Russians but no one else” since “there are other wars. Why can’t someone say similar things about U.S. soldiers?”

    Internal attempts to get any answers out of Meta’s policymaking black box have been as futile as those outside the company. “There is no actual transparency here as to why specifically this is OK for Ukraine but [nowhere] else,” this source said, explaining that Meta’s explanations to corporate staff haven’t gone beyond CEO Mark Zuckerberg tapping Facebook policy chief Monika Bickert “to repeat the talking points” that are provided to the media. The source said that the company response to the war in Ukraine struck them as a reaction to outside pressure — “‘Meta Censors Ukrainian Freedom Fighters’ is a bad look,” they remarked — rather than a reflection of any consistent principles. “We twist ourselves into knots creating incoherent carveouts about ‘public interest’ and ‘right to know,’ but it’s just to save our own asses. There isn’t rhyme or reason otherwise. And that is really, really bad for a company that controls a significant portion of the internet.”

    Facebook’s swift tailoring of its speech rules to help the Ukrainian resistance has drawn particularly pointed comparisons to Palestine, where civilians who have long resisted a foreign military force appear to enjoy no such protections on Meta’s platforms.

    In Palestine, where the ongoing Israeli occupation has seen decades of violence against civilians and international human rights condemnations, Facebook and Instagram users who voice opposition to that unwanted foreign military presence often have their posts deleted without explanation or recourse. “Never, never, ever was there any carveout or exception for Palestinian speech on Facebook,” said Fatafta of Access Now. “I can say that with full confidence.”

    The open letter notes how “in May 2021, in anticipation of forcible evictions in the Sheikh Jarrah neighborhood in Jerusalem, Palestinians engaged in protests that Israeli security forces violently suppressed. Social media platforms removed massive amounts of content posted by Palestinians and their supporters, who were trying to document and share these human rights violations, as well as political discussions about Palestine around the world.” The Palestinian digital rights group 7amleh “documented more than 500 violations against Palestinian content on these platforms between May 6 and May 19, 2021. … Speech was removed in this context that may have been left up had it received the kind of contextual analysis platforms are claiming to do in Ukraine now.”

    Far from loosening its rules to help Palestinians speak out against their occupation, Facebook has in fact implemented rule changes to aid the occupier: Last year The Intercept reported that the company had created stricter hate speech rules around using the term “Zionist,” a move experts said would make it even more difficult for Palestinians to protest their treatment by Israel.

    Just as longtime observers of Meta’s Dangerous Individuals and Organizations policy say the blacklist of terrorists and criminals is an avatar of American foreign policy values, content moderation experts told The Intercept that exceptions to the rule are fundamentally political choices. Civil society groups, including those that have discussed these issues with Meta, say there is no indication that the company similarly fine-tunes its policies to ensure the speech of other civilian populations under occupation and bombardment. The Electronic Frontier Foundation’s York argued that in other Facebook-entangled conflicts, unlike Ukraine, “the U.S. has an interest in supporting the occupier. Facebook staff, especially their policy staff, are American and not immune to bias.”

    The post Facebook’s Ukraine-Russia Moderation Rules Prompt Cries of Double Standard appeared first on The Intercept.

  • As Amazon’s dominance of global e-commerce has grown, so has its vast fleet of vehicles carrying packages from warehouses to customers’ doorsteps around the world. To further expand its ever-growing logistics empire, the company has quietly become a part owner of Air Transport Services Group Inc., or ATSG, one of the most important companies in the air cargo industry, one that has helped the United States forcibly deport thousands of migrants and, its passengers allege, has at times subjected them to horrific abuse along the way.

    On March 9, 2021, after five years of using the service for chartered cargo flights, Amazon bought 19.5 percent of ATSG for $131 million and currently holds options that would allow it to expand that stake to 40 percent. Among ATSG’s various subsidiaries is Omni Air International, a passenger charter company that flies people on behalf of the U.S. federal government. Its two most prominent federal clients are the Department of Defense, which uses the firm to transport troops, and the Department of Homeland Security, which has paid the company reportedly exorbitant fees over the years to operate the so-called special high-risk charter flights that form part of “ICE Air.” U.S. Immigration and Customs Enforcement, or ICE, contracts with Omni through an intermediary, Classic Air Charter Inc., a flight logistics firm and subsidiary of another company that previously helped the CIA transport prisoners to secret sites where they would be tortured.

    The Department of Homeland Security defines these “high-risk” flights as any flight “scheduled to repatriate individuals who cannot be removed through commercial airlines to locations around the world, or due to other security concerns or risk factors.” According to documents reviewed by The Intercept, the definition of “high-risk” is so broad that it covers virtually anyone, “including, but not limited to, the following: unusual or long-distance destination, noncompliance with the removal procedure, high-profile removal, etc.” The notion that these deportees somehow pose a grave danger has created a pretext, the agency’s critics say, for them to be beaten, humiliated, and terrorized in the name of national security.

    An Amazon spokesperson acknowledged The Intercept’s request for comment on the allegations but never provided a response. ATSG, Omni, and ICE did not respond to repeated requests for comment.

    ICE Air’s particular reputation for brutality is well earned and exhaustively catalogued. In 2019, the University of Washington Center for Human Rights published a series of reports on the flights, documenting a “long list of indignities and illegalities” stretching back decades. The reports attribute the crisis of abuse on deportation flights directly to the opacity of companies like Omni: “over the past decade, the institutional infrastructure behind these flights has shifted from an operation run by the U.S. Marshals Service on government planes to a sprawling network of semi-secret flights on privately owned aircraft.”

    One of the most infamous examples is a notoriously botched 2017 removal flight to Somalia, which the center believes was operated by Omni, during which deportees alleged they suffered “physical beatings, the use of straitjackets, verbal abuse and threats, and the denial of bathroom access, which forced passengers to relieve themselves in their seats.” Though the Somalia flight was perhaps the most glaring case of ICE Air abuse, the Center for Human Rights concluded that “given the lack of effective oversight of ICE Air, it is likely that many other abuses go unreported. … Among the former deportees we interviewed, accounts of routine mistreatment were common, including the use of racist epithets and slurs and rough physical treatment during boarding.”

    Even without specific allegations of abuse, the flights carry an inherent brutality for deportees, who remain shackled and chained for the length of an international journey, sometimes with layovers of 30 hours or more. Roughly 100 formal allegations of abuse and mistreatment aboard ICE Air flights were filed with the Department of Homeland Security’s Office for Civil Rights and Civil Liberties between 2007 and 2018, including a 2012 incident in which an ICE detainee suffered a “miscarriage of her triplets” aboard a removal flight to El Salvador.

    A determination that the human cargo in question presents a “security concern” sufficient to warrant a “high-risk” deportation flight can result in even harsher treatment than on “normal” removal flights, according to deportation observers. Despite ICE Air’s efforts to operate in near-total secrecy, Sarah Towle, a London-based writer, immigrant advocate, and researcher, has relentlessly tracked these flights and given voice to their passengers, conducting dozens of interviews since 2020 to gain a rare glimpse of the treatment they received.

    Among other alleged horrors, Towle says one particular brutality inflicted during the flights motivates her to keep documenting the abuse of deportees and to press Amazon to divest from ATSG and its ICE-affiliated subsidiary: the use of the restraint system known as the WRAP. Sold to police departments across the United States and listed among “ICE-authorized restraint devices,” the WRAP incapacitates individuals by binding their legs and pinning their arms behind their backs in a semi-seated, mummy-like position, using a set of harnesses, locks, and chains. The device’s manufacturer, Safe Restraints Inc. of Diablo, California, says it was “invented by police and medical professionals to improve upon the method of safely restraining an individual” and that it offers a way to immobilize detainees with “respect” and “humane care,” according to training materials. The training document adds that there is no time limit on how long a person may be kept in a WRAP, and it does not rule out use of the device on pregnant detainees.

    A Capital & Main investigation published February 3 found that the United States has no “testing requirements, safety guidelines, or certifications for full-body restraint systems like the WRAP” and “identified 10 lawsuits brought by the families of people who died in police custody in incidents involving the WRAP since 2000,” though those deaths have not been definitively attributed to the device’s use. Safe Restraints’ CEO told Capital & Main that the company “operates under the highest standards that we believe are necessary to help people stay alive” and that “we have an incredibly high safety record,” but cited no evidence.

    A “cruel, inhuman, and degrading” safety device on flights

    Though the WRAP is marketed as a safety device meant to protect those restrained in it, people subjected to the restraint on ICE flights, including those operated by Omni, say it creates an agonizing ordeal that amounts to torture. A complaint filed last year with the Department of Homeland Security’s Office for Civil Rights and Civil Liberties by a coalition of immigrant advocacy groups and the Texas A&M University School of Law details the harrowing experience of three immigrants deported via Omni Air, alleging that ICE “is using the WRAP in a manner that constitutes torture or cruel, inhuman, and degrading treatment, in violation of the Convention Against Torture.” The complaint draws largely on interviews conducted by Towle. One man recounted that, after being thrown to the ground and struck with rubber bullets, he was placed in a WRAP and loaded onto an Omni flight, where his body remained locked at a 40-degree angle for roughly nine hours. “It was very painful,” he said. “The position put great stress on my body; my muscles ached through the entire bus ride and the flight back to Cameroon.”

    Two other men deported to Cameroon and interviewed by Towle recalled a similar experience on an Omni flight: “My lungs were compressed; I couldn’t breathe. I couldn’t sit up. I was immobilized,” one said. “My body was under enormous stress. I screamed: ‘You’re killing me!’ I truly felt I was meeting my end in that moment. Six agents, three on each side, picked me up and carried me onto the plane. They dumped me on the floor like a load of lumber, across a middle row of seats.” A third man included in the complaint told Towle that he was placed in a WRAP and left on the tarmac while other prisoners boarded a plane that would transfer them to the Omni deportation flight. “They attached a rope from a buckle at my chest to another buckle at my feet and pulled my upper body down, so that my face was at my knees. I couldn’t breathe properly. I was completely immobilized.” He added: “The day ICE put me in the WRAP, I wanted to die. I have never felt such horrible pain. It was torture.”

    A February 10 report from Human Rights Watch added further detail to the allegations of brutal treatment on Omni’s Cameroon flights. One deported Cameroonian interviewed by Human Rights Watch recalled his mistreatment on the journey: “[ICE] put me in a WRAP [or similar device] because I refused to board the plane. … They tie your legs and hands, one connected to the other, and you can’t sit properly. It’s a form of punishment. Then they put something like a net over my face. … I told them God would judge them. The ICE agents told me I could go to hell, that whatever complaint I filed would go nowhere, that they could do whatever they wanted.”

    Four months after these men were deported to Cameroon aboard Omni Air International, Amazon acquired its 19.5 percent stake in Omni’s parent company, ATSG, for $131 million. Amazon had leased cargo planes from ATSG since 2015, but in 2016 the two companies formed a long-term partnership that allows Amazon to expand its equity stake to 40 percent in the future and to appoint a member of ATSG’s board, as Bloomberg reported at the time. In a company blog post celebrating the 2016 launch of Amazon Prime’s own cargo fleet, operated in part by ATSG, Amazon executive Dave Clark wrote that “adding capacity for Prime members by developing a dedicated air cargo network ensures there is enough available capacity to provide customers with great selection, low prices and incredible shipping speeds for years to come.” According to U.S. Securities and Exchange Commission filings, Amazon was ATSG’s largest customer as of last September, accounting for 35 percent of the company’s revenue.

    “A disproportionate amount of the abuses that we have seen are on [Omni] flights to African countries. I wouldn’t be surprised if that’s a reflection of inherent anti-Blackness or racism.”

    Longtime ICE Air watchers say that Omni flights are known for a degree of inhumanity exceeding that of its rivals in the deportation charter market, even without the use of a WRAP. Angelina Godoy, director of the University of Washington Center for Human Rights, has spent years collecting Department of Homeland Security materials documenting deportation flights and cataloguing allegations of abuse. “The reporting of abuses on Omni flights is significantly worse,” Godoy told The Intercept in an interview, citing everything from allegations of physical violence to denied access to bathrooms on 18-hour flights.

    Godoy points to two particularly horrific, publicly reported ICE Air flights that her team’s research indicates were conducted by Omni: the notorious 2017 mission to Somalia and a 2016 flight on which deportees described being tased and treated like “sacks of vegetables” while bound in WRAPs. “Officers took cellphone videos of the prisoners as they lay on the ground, they said, then grabbed the bags by the handles and heaved them onto the plane, some landing with a thud,” according to a Los Angeles Times report. (An ICE spokesperson told the paper that the WRAP had been used because the deportees refused to comply with orders.) “A disproportionate amount of the abuses that we have seen are on [Omni] flights to African countries,” Godoy added. “I wouldn’t be surprised if that’s a reflection of inherent anti-Blackness or racism. I wonder what makes this a special high-risk charter other than the color of the skin of the people that are on the plane.”

    Since Amazon acquired its stake in Omni’s parent company last year, the airline has conducted at least 21 deportation and related flights, according to data compiled by ICE Air observer and researcher Tom Cartwright, who, like Towle, works with the migrant advocacy group Witness at the Border. In the fall of last year, following a violent and widely criticized crackdown against Haitian migrants fleeing political chaos and environmental disaster in their home country, the Biden administration carried out a mass expulsion, deporting nearly 4,000 people back to Haiti, including thousands of parents and children. All were denied the opportunity to apply for asylum in the United States. After being chased and, they alleged, whipped by Border Patrol agents on horseback, the captured Haitian migrants were funneled into a mass deportation operation that was facilitated in part by Omni, six months after Amazon assumed its stake; the operation involved carrying prisoners on charter flights from Del Rio, Texas, to El Paso, on the Texas-Mexico border, en route to Port-au-Prince, the Haitian capital. Most recently, Towle says, she and Cartwright identified a January 25 Omni flight on which 211 asylum-seekers, including 90 children, were deported back to Brazil. “Reports from there indicate the deported Brazilians believe their human rights were violated by ICE officers,” Towle told The Intercept.

    A report published by Amnesty International in December condemned the Omni-enabled mass deportations, noting that “many expelled Haitians have disembarked US deportation flights sick, handcuffed, hungry, traumatized, and disoriented only to find themselves in a ‘humanitarian nightmare’” back in Haiti. The administration’s use of Title 42, a Trump-era policy that deported asylum-seeking migrants on dubious coronavirus-related grounds, drew particular criticism. In October, U.S. State Department lawyer Harold Koh resigned following the Haitian deportation flights, which he denounced as “illegal” and “inhumane.” Koh’s letter accused ICE of committing “refoulement,” or deporting individuals to a country knowing that they fear persecution and violence there, in violation of international law, a charge echoed by groups such as Amnesty, Human Rights Watch, and the University of Washington Center for Human Rights.

    Amazon’s “Commitment to Immigrant Rights”

    Though they pale in comparison to Amazon’s global e-commerce footprint, Omni’s deportation flights are highly lucrative: In 2019, Quartz published an ICE document justifying the immense cost of operating ICE Air, far beyond market rates, explaining that “many carriers are discouraged by the potential of public backlash or negative media attention” associated with deportation flights and that “as a result, our carrier selection pool has been reduced to a single operator, Omni.” The document noted that ICE was paying Omni roughly double the expected rate for flights: A single deportation charter on November 18, 2019, “cost US taxpayers a total of $1.8 million—roughly $280,000 more than the original order price,” Quartz reported. In its most recent earnings call, ATSG reported record quarterly revenue of $466 million and told investors that “Omni Air was a notable contributor to those gains.”

    Despite the immense cost, private deportation flights give ICE the benefit of discretion. A 2020 Miami Herald report on private deportation flights noted that charter companies “can arrange flights for ICE that keep a low profile and avoid interference from local law enforcement, in part by departing from remote airfields.” According to the University of Washington Center for Human Rights’ 2019 reports on ICE Air, “deportation flights operated by Omni Air International do not begin with the RPN prefix” (the Federal Aviation Administration call sign typically used to identify ICE Air flights) “and are therefore more readily confused with other private charter flights operated by the same company.”

    Godoy says her research into ICE Air is further thwarted by institutional secrecy at the Department of Homeland Security, which has increasingly redacted the public records that would provide transparency about the flights and the abuse allegations surrounding them. “The degree of redactions is extreme,” Godoy said via email, explaining that many of the documents recently turned over to her team at the University of Washington through Freedom of Information Act requests have been almost entirely redacted. “These documents only serve to illustrate the secrecy surrounding these flights, [some] of which cost taxpayers around $1 million.”

    “These documents only serve to illustrate the secrecy surrounding these flights, [some] of which cost taxpayers around $1 million.”

    Omni’s deportation operations are masked further by the fact that the company uses the exact same planes to conduct both ICE Air flights and luxury charters. Last June, the company debuted “Omni Class,” a private service offering clients “180° of 360 Luxury” with “an unparalleled cocoon of passenger comfort and luxury that transforms into a 180-degree full flat bed at the touch of a button.” A marketing video promoting Omni Class shows passengers enjoying flutes of champagne and grilled steaks in their “stand-alone throne seats that provide the ultimate in luxury and privacy.” A promotional brochure for Omni Class reveals that one of the planes used for these pleasure trips carries the tail number N378AX, the same aircraft Omni used to execute a string of deportation flights in December, according to flight data provided by Cartwright. Omni did not comment on its practice of using the same aircraft for both deportations and luxury flights.

    Amazon’s stake in an airline implicated in the torture of immigrants would appear to be at odds with its staunchly pro-immigration public rhetoric. Last June, noting that “Immigrant Heritage Month provides an opportunity to honor the contributions of immigrants and celebrate the powerful ways that diversity enriches America’s culture, common identity, and economy,” Amazon’s corporate blog published a post titled “Renewing our commitment to immigrant rights and immigration reform,” emphasizing that the company continued to contemplate “how we can more effectively use our voice to advocate for the rights of immigrants.” The company has also published numerous written commitments to generalized notions of “human rights,” including within its own supply chain and logistics network, which of course includes shipping providers like ATSG. “We commit to embedding respect for human rights throughout our business,” according to the Amazon Global Human Rights Principles. “We strive to ensure the products and services we provide are produced in a way that respects internationally recognized human rights,” reads another corporate document. The deliberate infliction of painful and degrading positions like those described by the deportees placed in WRAPs has been described as a form of torture by the United Nations. Human Rights First, a nonprofit organization that advocates against the use of “stress positions” and has criticized ICE’s expansive deportation powers, counts Jay Carney, Amazon’s senior vice president for global corporate affairs, among the most prominent members of its board of directors. Human Rights First did not respond to a request for comment.

    Photo: U.S. Immigration and Customs Enforcement

    The Activists Tracking Deportations — and Amazon

    Amazon’s partial ownership of Omni is just one of many profitable entanglements the company has with the American deportation apparatus. Palantir’s ICM software, which The Intercept previously reported is used by the Department of Homeland Security to track and deport immigrants, has been running on servers leased from Amazon’s cloud computing business, and in 2018 the Daily Beast reported that Amazon had pitched its now-shelved facial recognition technology to ICE. Amazon’s willingness to build computer infrastructure integral to ICE operations has spurred waves of protest, both from immigrant advocacy groups and among the company’s own employees.

    Over the past two years, Towle, Cartwright, and their colleagues at Witness at the Border, along with other migrant advocacy groups, have built a sophisticated independent operation to track deportation flights that try to evade the public eye, document allegations of abuse, and organize against the deportation system. As ICE does ever more to conceal its deportations, this confederation of activists, researchers, and academics essentially reverse-engineers the flight plans, an effort coordinated in a WhatsApp group. “Perhaps because of Omni’s connection to the Department of Defense, the company masks its tail numbers, which isn’t strictly illegal. But it means that Tom [Cartwright] can only ‘see’ Omni planes once they are in the air,” Towle wrote in an email. “To anticipate when a new flight might be coming, he relied on clues provided by the detainees themselves, passed to the group chat by their advocates and attorneys.” Towle explained that the Omni watchers look for telltale signs of an impending deportation, such as a detainee being abruptly summoned for a Covid-19 test, having their commissary account locked, or being transferred to a migrant detention facility filled with fellow nationals. Between lawyers and activists on the ground and Cartwright watching the skies, the group has become adept at detecting unannounced deportation flights as they happen.

    Towle says that learning of the WRAP’s use on deportees and hearing firsthand about its debilitating and demeaning effects has spurred her work. “In the WhatsApp chats, I began to hear their advocates and attorneys use the same language. … Reports from the deported individuals included people being ‘stuffed into a sack,’ ‘bagged,’ ‘bagged and tied,’ or ‘tortilla’d.’” Armed with ample evidence of abuse gleaned from more than 50 detainee interviews, Towle and her fellow activists and observers say their next step is to press Amazon to end, or at least acknowledge, its role in keeping the ICE Air machine running. “First, I want Amazon to recognize its complicity, by virtue of its connection to Omni, in the commission of egregious human rights violations,” Towle told The Intercept. “Second, Amazon should sever Omni’s relationship with ICE,” a decision, however unlikely, that Towle hopes would send a message to other companies profiting from mass deportations. It is, as she points out, a small drop in the company’s very large bucket: “Amazon can cut ICE loose and never miss a cent.”

    Translation: Maíra Santos

    The post Amazon tem participação em companhia aérea acusada de torturar deportados appeared first on The Intercept.

    This post was originally published on The Intercept.

In early March, contractors working for Google to translate company text for the Russian market received an update from their client: Effective immediately, the ongoing Russian war against Ukraine could no longer be referred to as a war but rather only vaguely as “extraordinary circumstances.”

    The internal email, obtained by The Intercept, was sent by management at a firm that translates corporate texts and app interfaces for Google and other clients.

    The email passed along instructions from Google with the new wording. The instructions also noted that the word “war” should continue to be used in other markets and that the policy change was intended to keep Google in compliance with a Russian censorship law enacted just after the invasion of Ukraine.

    Asked about the guidance, Google spokesperson Alex Krasov told The Intercept, “While we’ve paused Google ads and the vast majority of our commercial activities in Russia, we remain focused on the safety of our local employees. As has been widely reported, current laws restrict communications within Russia. This does not apply to our information services like Search and YouTube.”

    According to a translator who spoke to The Intercept, the orders apply to all Google products translated into Russian, including Google Maps, Gmail, AdWords, and Google’s policies and communications with users. (The translator asked for anonymity to avoid reprisal by their employer.)

    The internal memo helps explain why some Google web pages, including an advertising policy and video help document found by The Intercept, use euphemistic terms like “emergency in Ukraine” in their Russian version but “war in Ukraine” in the English version.

    The censorship law, signed by Russian President Vladimir Putin on March 4, created harsh criminal penalties of up to 15 years in prison for disseminating so-called false information about the Russian military. This is widely believed to include referring to Russia’s assault on Ukraine as a war or invasion, given that the Kremlin had previously drawn a hard line against such terms. The Kremlin calls the war a “special military operation,” and its internet censorship board has reportedly threatened to block websites that use terms like “invasion.”

    Like many other American companies, Google swiftly declared its support of Ukraine and opposition to the Russian invasion after the attack began. And like several other Silicon Valley titans, it also implemented new policies to stifle the Kremlin’s ability to propagandize. A March 1 company blog post by Google global affairs chief Kent Walker stated, “Our teams are working around the clock to support people in Ukraine through our products, defend against cybersecurity threats, [and] surface high-quality, reliable information.” Walker added that Google had “paused the vast majority of our commercial activities in Russia,” including sales to Russian advertisers, sales of advertising directed at Russian YouTube viewers, sign-ups for Google Cloud in Russia, and “payments functionality for most of our services.”

    Western commentators have generally lauded Google’s efforts related to the invasion. But the email and translations in Google’s Help Center suggest that its principled stand against Russian state propagandizing is to some extent outweighed by the company’s interest in continuing to do business in Russia.

    In an English language version of a Google advertising policy update note titled “Updates to Sensitive events Policy,” dated February 27, 2022, the company explained it was freezing online ads from Russian state media outlets because of the “current war in Ukraine,” considered a “sensitive event.” But the Russian version of the post refers only to the “emergency in Ukraine” rather than a “war.”

    A Google advertising policy page in Russian describes the war in Ukraine as “current events that require special attention (emergency in Ukraine).”

    Screenshot: The Intercept

    In the Video Help Center, the post “Restricted Products and Services” repeats the warning: “Due to the ongoing war in Ukraine, we will temporarily pause the delivery of Google ads to users located in Russia.” In the Russian version, the warning is again changed: “Due to the emergency situation in Ukraine, we are temporarily suspending ad serving to users located in Russia.”

    A Google support document explains why the company is freezing certain online ad sales. The English version says it’s because of the “war in Ukraine,” while the Russian version refers to the “emergency situation in Ukraine.”

    Screenshot: The Intercept

    Another help post found by The Intercept shows a Russian-language version written in compliance with the new censorship law:

    A Google policy page, restricting advertising on certain content, references the “war in Ukraine” in the English version. The Russian version on March 10 referenced the “emergency” in Ukraine, and on March 23 was updated to state, “Due to the extraordinary circumstances in Ukraine, we are suspending the monetization of content that uses, denies or justifies the current situation.”

    Screenshot: The Intercept

    In some cases, Russian help pages include both a reference to “war” and a state-sanctioned euphemism; it’s unclear why.

    It’s possible an automated translation system is at fault. According to the translator, most translations are done automatically via software. In more sensitive cases — community rules and support pages — there is usually human oversight to ensure accuracy. This source added that any potential usage of the term “war” in the context of Ukraine would be censored across all Google products still available in the Russian market. They also said the euphemism policy would hypothetically apply beyond support page text to other Google products like Maps.
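
    The workflow the translator describes, machine translation plus market-specific overrides for sensitive wording, can be pictured concretely with a short sketch. This is purely illustrative, not Google’s actual pipeline; every name and string below is hypothetical, and real Russian-market strings would of course be in Russian rather than English.

    ```python
    # Illustrative model of a per-market terminology override applied after
    # machine translation, mirroring the behavior described in the reporting.
    # Hypothetical names and strings; not Google's actual pipeline.
    TERM_OVERRIDES = {
        "ru-RU": {
            "war in Ukraine": "extraordinary circumstances in Ukraine",
        },
    }

    def localize(translated_text: str, market: str) -> str:
        """Apply any market-specific terminology overrides to a translated string."""
        for term, replacement in TERM_OVERRIDES.get(market, {}).items():
            translated_text = translated_text.replace(term, replacement)
        return translated_text

    # The same source copy renders differently depending on the market:
    print(localize("Due to the war in Ukraine, ad serving is paused.", "en-US"))
    print(localize("Due to the war in Ukraine, ad serving is paused.", "ru-RU"))
    ```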

    The move is only the most recent instance of acquiescence to Russian censorship demands by Google and its major Silicon Valley peers. In 2019, Apple agreed to recognize the Russian annexation of Crimea in its iOS Maps app in response to Kremlin pressure. In 2021, Google disclosed that it had complied with 75 percent of content deletion requests it had received from the Russian government that year; that same year, both Google and Apple agreed to remove apps affiliated with prominent Putin critic Aleksey Navalny.

    The post Google Ordered Russian Translators Not to Call War in Ukraine a War appeared first on The Intercept.

    This post was originally published on The Intercept.

    Facebook will temporarily allow its billions of users to praise the Azov Battalion, a Ukrainian neo-Nazi paramilitary unit, The Intercept has learned. Posts praising the battalion were previously barred from circulating on the network under the company’s own Dangerous Individuals and Organizations policy.

    The policy change, made this week, is tied to the ongoing Russian invasion of Ukraine. The Azov Battalion, which functions as an armed wing of the broader Ukrainian white nationalist Azov movement, began as a volunteer anti-Russia militia before formally joining the Ukrainian National Guard in 2014. The regiment is known for its hardcore far-right ultranationalism and the neo-Nazi ideology pervasive among its members.

    Though it has tried to downplay these connections in recent years, the group’s neo-Nazi sympathies are not subtle: Azov soldiers march and train wearing uniforms bearing symbols of the Third Reich; its leaders have courted members of the American alt-right and neo-Nazis; and in 2010, the battalion’s first commander and a former Ukrainian parliamentarian, Andriy Biletsky, stated that Ukraine’s national purpose was to “lead the white races of the world in a final crusade against Semite-led Untermenschen [subhumans].”

    With Russian forces moving rapidly against targets throughout Ukraine, Facebook’s blunt, list-based approach to moderation has put the company in a bind: What happens when a group you consider too dangerous to be freely discussed is defending its own country against a full-scale assault?

    According to internal policy documents reviewed by The Intercept, Facebook will “allow praise of the Azov Battalion when explicitly and exclusively praising their role in defending Ukraine OR their role as part of the Ukraine’s National Guard.” Internally published examples of speech that Facebook now deems acceptable include: “Azov movement volunteers are real heroes, they are a much needed support to our national guard”; “We are under attack. Azov has been courageously defending our town for the last 6 hours”; and “I think Azov is playing a patriotic role during this crisis.”

    The documents stipulate that the Azov Battalion still cannot use Facebook’s platforms for recruiting purposes or for publishing its own statements, and that the regiment’s uniforms and banners will remain banned as hate symbol imagery, even though Azov soldiers may fight while wearing and displaying them. In a clear acknowledgment of the group’s ideology, the memo provides two examples of posts that would not be allowed under the new policy: “Goebbels, the Fuhrer and Azov, all are great models for national sacrifices and heroism” and “Well done Azov for protecting Ukraine and it’s white nationalist heritage.”

    In a statement sent to The Intercept, Facebook spokesperson Erica Sackin confirmed the company’s decision but did not answer questions about the new policy.

    Azov’s formal ban from Facebook came in 2019. The group, along with several associated individuals such as Biletsky, was added to the company’s blacklist of hate groups, subject to its harshest “Tier 1” restrictions, which bar users from engaging in “praise, support, or representation” of entities on the company’s secret list. That no-longer-secret list, published by The Intercept last year, categorized the Azov Battalion alongside groups like the Islamic State and the Ku Klux Klan, all Tier 1 groups because of their propensity for “serious offline harms” and “violence against civilians.” A 2016 report by the Office of the United Nations High Commissioner for Human Rights found that Azov soldiers had raped and tortured civilians during Russia’s 2014 invasion of Ukraine.

    The new exemption will no doubt create confusion for Facebook’s moderators, tasked with interpreting the company’s muddled and at times contradictory censorship rules under exhausting conditions. While Facebook users may now praise any future battlefield action by Azov soldiers against Russia, the new policy notes that “any praise of violence” committed by the group is still forbidden. It is unclear what sort of nonviolent warfare the company anticipates.

    Facebook’s new stance on the Azov Battalion is nonsensical in the context of its prohibitions against offline violence, according to Dia Kayyali, a researcher at the nonprofit Mnemonic who specializes in the real-world effects of content moderation. “It’s typical Facebook,” Kayyali added, noting that while the exemption allows ordinary Ukrainians to more freely discuss a catastrophe unfolding around them that might otherwise be censored, the fact that such policy tweaks are necessary reflects the dysfunctional state of Facebook’s secret blacklist. “Their assessments of what is a dangerous organization should always be contextual; there shouldn’t be some special carveout for a group that would otherwise fit the policy just because of a specific moment in time. They should have that level of analysis all the time.”

    Though the change may come as welcome news to critics who say that the sprawling, secret Dangerous Individuals and Organizations policy can stifle free expression online, it also offers further evidence that Facebook determines what speech is permissible based on the foreign policy judgments of the United States. Last year, for instance, Motherboard reported that Facebook carved out a similar exception to its censorship policies in Iran, temporarily allowing users to post “Death to Khamenei” for a two-week period. “I do think it is a direct response to U.S. foreign policy,” Kayyali said of the Azov exemption. “That has always been how the … list works.”

    The post Facebook permitirá elogios a paramilitares neonazistas da Ucrânia – desde que eles lutem contra a Rússia appeared first on The Intercept.

    This post was originally published on The Intercept.

  • Facebook will temporarily allow its billions of users to praise the Azov Battalion, a Ukrainian neo-Nazi military unit previously banned from being freely discussed under the company’s Dangerous Individuals and Organizations policy, The Intercept has learned.

    The policy shift, made this week, is pegged to the ongoing Russian invasion of Ukraine and preceding military escalations. The Azov Battalion, which functions as an armed wing of the broader Ukrainian white nationalist Azov movement, began as a volunteer anti-Russia militia before formally joining the Ukrainian National Guard in 2014; the regiment is known for its hardcore right-wing ultranationalism and the neo-Nazi ideology pervasive among its members. Though it has in recent years downplayed its neo-Nazi sympathies, the group’s affinities are not subtle: Azov soldiers march and train wearing uniforms bearing icons of the Third Reich; its leadership has reportedly courted American alt-right and neo-Nazi elements; and in 2010, the battalion’s first commander and a former Ukrainian parliamentarian, Andriy Biletsky, stated that Ukraine’s national purpose was to “lead the white races of the world in a final crusade … against Semite-led Untermenschen [subhumans].” With Russian forces reportedly moving rapidly against targets throughout Ukraine, Facebook’s blunt, list-based approach to moderation puts the company in a bind: What happens when a group you’ve deemed too dangerous to freely discuss is defending its country against a full-scale assault?

    According to internal policy materials reviewed by The Intercept, Facebook will “allow praise of the Azov Battalion when explicitly and exclusively praising their role in defending Ukraine OR their role as part of the Ukraine’s National Guard.” Internally published examples of speech that Facebook now deems acceptable include “Azov movement volunteers are real heroes, they are a much needed support to our national guard”; “We are under attack. Azov has been courageously defending our town for the last 6 hours”; and “I think Azov is playing a patriotic role during this crisis.”

    The materials stipulate that Azov still can’t use Facebook platforms for recruiting purposes or for publishing its own statements and that the regiment’s uniforms and banners will remain as banned hate symbol imagery, even while Azov soldiers may fight wearing and displaying them. In a tacit acknowledgement of the group’s ideology, the memo provides two examples of posts that would not be allowed under the new policy: “Goebbels, the Fuhrer and Azov, all are great models for national sacrifices and heroism” and “Well done Azov for protecting Ukraine and it’s white nationalist heritage.”

    In a statement to The Intercept, company spokesperson Erica Sackin confirmed the decision but declined to answer questions about the new policy.

    Azov’s formal Facebook ban began in 2019, and the regiment, along with several associated individuals like Biletsky, were designated under the company’s prohibition against hate groups, subject to its harshest “Tier 1” restrictions that bar users from engaging in “praise, support, or representation” of blacklisted entities across the company’s platforms. Facebook’s previously secret roster of banned groups and persons, published by The Intercept last year, categorized the Azov Battalion alongside the likes of the Islamic State and the Ku Klux Klan, all Tier 1 groups because of their propensity for “serious offline harms” and “violence against civilians.” Indeed, a 2016 report by the Office of the United Nations High Commissioner for Human Rights found that Azov soldiers had raped and tortured civilians during Russia’s 2014 invasion of Ukraine.

    The exemption will no doubt create confusion for Facebook’s moderators, tasked with interpreting the company’s muddled and at times contradictory censorship rules under exhausting conditions. While Facebook users may now praise any future battlefield action by Azov soldiers against Russia, the new policy notes that “any praise of violence” committed by the group is still forbidden; it’s unclear what sort of nonviolent warfare the company anticipates.

    Facebook’s new stance on Azov is “nonsensical” in the context of its prohibitions against offline violence, said Dia Kayyali, a researcher specializing in the real-world effects of content moderation at the nonprofit Mnemonic. “It’s typical Facebook,” Kayyali added, noting that while the exemption will permit ordinary Ukrainians to more freely discuss a catastrophe unfolding around them that might otherwise be censored, the fact that such policy tweaks are necessary reflects the dysfunctional state of Facebook’s secret blacklist-based Dangerous Individuals and Organizations policy. “Their assessments of what is a dangerous organization should always be contextual; there shouldn’t be some special carveout for a group that would otherwise fit the policy just because of a specific moment in time. They should have that level of analysis all the time.”

    Though the change may come as welcome news to critics who say that the sprawling, largely secret Dangerous Individuals and Organizations policy can stifle online free expression, it also offers further evidence that Facebook determines what speech is permissible based on the foreign policy judgments of the United States. Last summer, for instance, Motherboard reported that Facebook similarly carved out an exception to its censorship policies in Iran, temporarily allowing users to post “Death to Khamenei” for a two-week period. “I do think it is a direct response to U.S. foreign policy,” Kayyali said of the Azov exemption. “That has always been how the … list works.”

    The post Facebook Allows Praise of Neo-Nazi Ukrainian Battalion If It Fights Russian Invasion appeared first on The Intercept.

    This post was originally published on The Intercept.

  • As Amazon’s dominance of global e-commerce has grown, so has its vast fleet of vehicles shuttling packages from warehouse to doorstep around the world. To further expand its ballooning logistics empire, the company quietly became a partial owner of Air Transport Services Group Inc., a power player in the air cargo industry that has helped the United States forcibly deport thousands of migrants and, its passengers allege, at times subjected them to horrific abuse en route.

    On March 9, 2021, following five years of using the service for chartered cargo flights, Amazon purchased 19.5 percent of ATSG for $131 million and currently reserves options that would let it expand that stake to 40 percent. Among ATSG’s various aviation subsidiaries is Omni Air International, a passenger charter firm that moves humans on behalf of the federal government. Its two most prominent federal customers are the Department of Defense, which uses the firm for troop transports, and the Department of Homeland Security, which has paid the company reportedly exorbitant fees over the years in order to execute so-called special high-risk charter flights for its “ICE Air” deportation machine. Immigration and Customs Enforcement deals with Omni through an intermediary, Classic Air Charter Inc., a flight logistics firm whose parent company previously helped transport CIA prisoners to black sites to be tortured.

    Homeland Security defines these “high-risk” flights as any “scheduled to repatriate individuals who cannot be removed via commercial airlines to locations worldwide, or because of other security concerns or risk factors.” According to ICE Air contract documents reviewed by The Intercept, the definition of “high risk” is so broad as to include virtually anyone, “including, but not limited to, the following: uncommon or long-distance destination, failure to comply with removal proceeding, high profile removal, etc.” The notion that these deportees in some way pose a grave danger has created a pretext, agency critics allege, to beat, demean, and terrify them in the name of homeland security.

    An Amazon spokesperson acknowledged The Intercept’s request for comment on these allegations but did not provide any response. ATSG, Omni, and ICE did not respond to repeated requests for comment.

    ICE Air’s particular reputation for brutality is well earned and thoroughly catalogued. In 2019, the University of Washington Center for Human Rights published a string of reports on the flights, documenting a “long series of indignities and illegalities” stretching back decades. The deportation flight abuse crisis is attributed directly to the opacity of firms like Omni: “Over the past decade, the institutional infrastructure behind these flights has shifted from a government operation run by the US Marshals Service on government planes, to a sprawling, semi-secret network of flights on privately-owned aircraft.”

    One of the more infamous examples is a notoriously botched 2017 removal flight to Somalia, which the center believes was operated by Omni, on which deportees alleged “physical beatings, the use of straitjackets, verbal abuse and threats, and the denial of access to restrooms, which forced passengers to soil themselves in their seats.” Though the Somalia flight was perhaps the highest-profile instance of ICE Air abuse, the Center for Human Rights concluded that “given the lack of effective oversight of ICE Air, it is likely that many other abuses may go unreported. … Among the former deportees we interviewed, accounts of more routine ill-treatment were common, including the use of racist epithets and insults, and rough physical treatment upon boarding.”

    Even without specific allegations of abuse, the flights come with an inherent brutality for deportees, who remain bound and shackled for the entirety of an international flight, at times upward of 30 hours with stopovers. Nearly 100 formal allegations of abuse and mistreatment aboard ICE Air flights were filed to the Department of Homeland Security’s Office for Civil Rights and Civil Liberties between 2007 and 2018, including a 2012 incident in which an ICE detainee “miscarried her triplets” aboard a removal flight to El Salvador.

    The determination that the human cargo in question presents a “security concern” such that they warrant a “high-risk” deportation flight can result in even harsher treatment than “regular” removal flights, according to deportation watchers. Despite ICE Air’s efforts to operate in near-total secrecy, Sarah Towle, an author, immigrant advocate, and researcher based in London, has tirelessly tracked these flights and given a voice to their passengers, conducting dozens of interviews with deportees since 2020 to gain a rare glimpse into their treatment. While those deported by ICE typically

    Among other alleged horrors, Towle says one particular in-flight brutality motivates her to continue documenting deportee abuse and push Amazon to divest from ATSG and its ICE-affiliated subsidiary: the WRAP. Sold to police departments across the United States and part of ICE’s “authorized restraint devices,” the WRAP incapacitates individuals by binding their legs together and their arms behind their back in a semi-seated position using a mummy-like set of harnesses, locks, and chains. The device’s Diablo, California-based manufacturer, Safe Restraints Inc., says it was “invented by law enforcement and medical professionals to improve the method of safely restraining an individual” and provides a means of immobilizing detainees with “respect” and “humane care,” according to training materials. The same training document adds that there is no limit on how long a person can be kept in a WRAP, nor does it rule out the device’s use on pregnant detainees.

    An investigation by Capital & Main published February 3 found that no “testing requirements, safety guidelines or certifications exist in the United States for full body restraint systems like the WRAP” and “identified 10 lawsuits brought by families of people who died in police custody during incidents involving the WRAP since 2000,” though these deaths were not definitively attributed to the device’s use. Safe Restraints’s CEO told Capital & Main that the company “operate[s] under the highest standards that we believe are necessary to help people stay alive” and that “we have an incredibly high safety record,” but cited no evidence.

    “Cruel, Inhuman, and Degrading” Safety Device on Flights

    Though the WRAP is marketed as a safety device meant to protect those stuffed into it, people subjected to the restraint on ICE flights, including those operated by Omni, say it creates an agonizing ordeal amounting to torture. A complaint filed last year to the Department of Homeland Security’s Office for Civil Rights and Civil Liberties by a coalition of immigrant advocacy groups and the Texas A&M University School of Law details the harrowing experience of three migrants deported via Omni Air, alleging that ICE “is using The WRAP in a manner that constitutes torture or cruel, inhuman, and degrading treatment in violation of the Convention Against Torture.” The complaint is based largely on interviews with the deportees conducted by Towle. One man recounted that after being thrown to the ground and shot with rubber bullets, he was placed in a WRAP and loaded onto an Omni flight, where his body remained locked at a 40-degree angle for about nine hours. “It was so painful,” he said. “The position was very stressful on my body, my muscles were shot with pain the entire bus ride and flight back to Cameroon.”

    Two other men deported to Cameroon and interviewed by Towle recalled a similar experience on an Omni flight: “My lungs were compressed, I couldn’t breathe. I couldn’t sit up. I was immobilized,” said one. “My body was in so much stress. I shouted, ‘You’re killing me!’ I truly felt I was meeting my death in that moment. Six officers, three on each side, picked me up and carried me onto the plane. They plopped me down, like a load of wood, across a center row of seats.” A third man included in the complaint told Towle that he was placed in a WRAP and left on the tarmac as other prisoners were loaded onto the Omni flight. “They attached a cord from a buckle at my chest to a buckle at my feet and they pulled my upper body down so my face was in my knees. I could not breathe well. I was completely immobilized.” He added, “The day I was put in The WRAP by ICE, I wanted to die. I have never felt such horrible pain. It was torture.”

    “I have never felt such horrible pain. It was torture.”

    A February 10 report by Human Rights Watch further detailed allegations of brutalization on the Omni flights to Cameroon. One Cameroonian deportee interviewed by Human Rights Watch recounted his mistreatment on the flight: “[ICE] put me in a Wrap [or similar restraint] because I was refusing to get in the plane. … [T]hey tie your legs and your hands, each is connected to each and you can’t sit up straight. It’s a form of punishment. Then they put something like a net cap on my face. … I told them God will judge them. The ICE officers told me I should go to hell, that whatever complaint I do, the case will go nowhere, that they can do whatever they want.”

    Four months after these men were deported to Cameroon aboard Omni Air International, Amazon purchased a 19.5 percent stake in its parent company, ATSG, for $131 million. Amazon has leased Boeing 767 cargo aircraft from ATSG since 2015, but in 2016 the two companies formed a long-term partnership that allows Amazon to expand its ownership stake to 40 percent in the future and to appoint a member to ATSG’s board, Bloomberg reported at the time. In a company blog post celebrating the launch of its Amazon Prime-liveried cargo fleet in 2016, operated in part by ATSG, Amazon executive Dave Clark wrote, “Adding capacity for Prime members by developing a dedicated air cargo network ensures there is enough available capacity to provide customers with great selection, low prices and incredible shipping speeds for years to come.” According to Securities and Exchange Commission filings, Amazon was ATSG’s largest customer as of September, accounting for 35 percent of the company’s revenues.

    “A disproportionate amount of the abuses that we have seen are on [Omni] flights to African countries of origin. I wouldn’t be surprised if that’s a reflection of inherent anti-Blackness or racism.”

    Longtime ICE Air observers say that Omni flights are known for a degree of inhumanity exceeding its deportation charter rivals, even without the use of a WRAP. Angelina Godoy, director of the University of Washington Center for Human Rights, has spent years collecting Homeland Security materials documenting deportation flights and cataloguing allegations of abuse. “The reporting of abuses on Omni flights is significantly worse,” Godoy told The Intercept in an interview, citing everything from allegations of physical violence to denied access to bathrooms on 18-hour flights.

    Godoy points to two particularly horrific publicly reported ICE Air flights that her team’s research indicates were conducted by Omni: The notoriously botched 2017 Somalia mission and a 2016 charter in which deportees described being tased and treated like “sacks of vegetables” in WRAP restraints. “Officers took cellphone videos of the prisoners as they lay on the ground, they said, then grabbed the bags by the handles and heaved them onto the plane, some landing with a thud,” according to a Los Angeles Times report. (An ICE spokesperson told the Times that the WRAP had been used because deportees refused to comply with orders.) “A disproportionate amount of the abuses that we have seen are on [Omni] flights to African countries of origin,” Godoy added. “I wouldn’t be surprised if that’s a reflection of inherent anti-Blackness or racism. I wonder what makes this a special high-risk charter other than the color of the skin of the people that are on the plane.”

    ICE prepares 37 individuals for deportation to Cambodia aboard an Omni Air flight, July 3rd 2019

    Photo: U.S. Immigration and Customs Enforcement


    Since Amazon acquired its chunk of Omni’s corporate parent last year, the airline has conducted at least 21 deportation flights, according to data compiled by researcher and ICE Air observer Tom Cartwright, who like Towle works with the migrant advocacy group Witness at the Border. In the fall of last year, following a graphic and widely criticized crackdown against Haitian migrants fleeing political chaos and environmental disaster in their home country, the Biden administration executed a mass expulsion, deporting nearly 4,000 individuals back to Haiti, including thousands of parents and children, all of whom were denied the opportunity to apply for asylum in the United States. After being chased and, they alleged, whipped by Border Patrol agents on horseback, captured Haitian migrants were funneled into a mass deportation operation that was facilitated in part by Omni, six months after Amazon assumed its stake; the operation involved carrying prisoners in charter flights from Del Rio, Texas, to El Paso, on the Texas-Mexico border, en route to Port-au-Prince. Most recently, Towle says, she and Cartwright identified a January 25 Omni flight on which 211 asylum-seeking deportees, including 90 children, were deported back to Brazil. “Reports from there indicate the deported Brazilians believe their human rights were violated by ICE officers,” Towle told The Intercept.

    A report published by Amnesty International in December condemned the Omni-enabled mass deportations, noting, “Many expelled Haitians have disembarked US deportation flights sick, handcuffed, hungry, traumatized, and disoriented only to find themselves in a ‘humanitarian nightmare’” back in Haiti. The Biden administration’s use of Title 42, a Trump-era policy that deported asylum-seeking migrants on dubious coronavirus-related public health grounds, drew particular criticism. In October, State Department lawyer Harold Koh resigned following the Haitian deportation flights, which he slammed as “illegal” and “inhumane.” Koh’s letter accused ICE of committing “refoulement,” or deporting individuals to a country knowing that they fear persecution and harm there, in violation of international law, a charge echoed by groups like Amnesty, Human Rights Watch, and the University of Washington Center for Human Rights.

    A Boeing 767 jetliner belonging to Omni Air International lands at Harry Reid International Airport in Paradise, Nev., on Feb. 26, 2019.

    Photo: Larry MacDougal/AP

    Amazon’s “Commitment to Immigrant Rights”

    Though it pales in comparison to Amazon’s global e-commerce footprint, Omni’s deportation flights are highly lucrative: In 2019, Quartz published an ICE document justifying the immense cost of operating ICE Air, far beyond market rates, explaining that “Many carriers are discouraged by the potential of public backlash or negative media attention” associated with deportation flights, and “as a result, our carrier selection pool has been reduced to a single operator, Omni.” The document noted that ICE was paying Omni roughly double the expected rate for flights: A single deportation charter on November 18, 2019, “cost US taxpayers a total of $1.8 million—roughly $280,000 more than the original order price,” Quartz reported. In its most recent earnings call, ATSG reported a record $466 million quarterly revenue and told investors that “Omni Air was a notable contributor to those gains.”

    Despite their immense cost, private deportation flights give ICE the benefit of discretion. A 2020 Miami Herald report on private deportation flights noted that charter companies “can arrange flights for ICE that keep a low profile and avoid interference from local law enforcement, in part by departing from remote airfields.” According to the 2019 University of Washington Center for Human Rights reports on ICE Air, “deportation flights operated by Omni Air International do not begin with the RPN prefix” — the Federal Aviation Administration call sign typically used to identify ICE Air flights — “and are therefore more readily confused with other private charter flights operated by the same company.”
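
    The center’s observation suggests a simple filtering heuristic for flight watchers: typical ICE Air charters can be flagged by the RPN call-sign prefix, while Omni flights slip through. A minimal sketch of that heuristic, with made-up flight records (the Omni call sign shown is an assumption for illustration):

    ```python
    # Sketch of the call-sign heuristic described above: flights using the
    # "RPN" prefix are flagged as likely ICE Air charters, while Omni-operated
    # deportation flights, which fly under ordinary charter call signs, are
    # not distinguishable this way. The records below are invented.
    def flagged_as_ice_air(callsign: str) -> bool:
        """Return True for call signs carrying the RPN prefix."""
        return callsign.upper().startswith("RPN")

    flights = [
        {"callsign": "RPN461", "operator": "deportation charter"},
        {"callsign": "OAE742", "operator": "Omni Air International"},
    ]

    for flight in flights:
        print(flight["callsign"], flagged_as_ice_air(flight["callsign"]))
    # Only RPN461 is flagged; by call sign alone, the Omni flight looks like
    # any other private charter, which is the opacity the reports describe.
    ```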

    Godoy says her research into ICE Air is further thwarted by institutional secrecy at the Department of Homeland Security, which has increasingly redacted public records requests that would provide transparency about the flights and abuse allegations therein. “The degree of redactions is extreme,” Godoy said via email, explaining that many of the documents recently turned over to her team at the University of Washington through Freedom of Information Act requests have been almost entirely redacted. “These documents only serve to illustrate the secrecy surrounding these flights, [some] of which cost taxpayers around $1 million.”

    “These documents only serve to illustrate the secrecy surrounding these flights, [some] of which cost taxpayers around $1 million.”

    Omni’s deportation operations are masked further by the fact that the company uses the exact same planes to conduct both ICE Air flights and luxury charters. Last June, the company debuted “Omni Class,” a private service offering clients “180° of 360 Luxury” with “an unparalleled cocoon of passenger comfort and luxury that transforms into a 180-degree full flat bed at the touch of a button.” A marketing video for Omni Class shows guests enjoying flutes of champagne and grilled steaks from their “stand-alone throne seats that provide the ultimate in luxury and privacy.” A promotional brochure for Omni Class reveals that one of the aircrafts used for these pleasure cruises carries the tail number N378AX, the same plane Omni used to execute a string of deportation flights in December, according to flight data provided by Cartwright. Omni did not comment on the practice of using the same aircraft for both deportations and luxury travel flights.

    Amazon’s ownership of an airline implicated in the torture of immigrants would appear to be at odds with its solidly pro-immigrant public rhetoric. Last June, noting that “Immigrant Heritage Month provides an opportunity to honor the contributions of immigrants and celebrate the powerful ways that diversity enriches America’s culture, common identity, and economy,” Amazon’s corporate blog published a post titled “Renewing our commitment to immigrant rights and immigration reform,” emphasizing that the company continued to contemplate “how we can more effectively use our voice to advocate for the rights of immigrants.” The company has also published numerous written commitments to generalized notions of “human rights,” including within its supply chain and logistics network — which includes, of course, shipping providers like ATSG. “We commit to embedding respect for human rights throughout our business,” according to the Amazon Global Human Rights Principles. “We strive to ensure the products and services we provide are produced in a way that respects internationally recognized human rights,” reads another corporate document. The deliberate infliction of painful and degrading “stress positions” like those described by deportees placed in WRAPs has been described as a form of torture by the United Nations. Human Rights First, a nonprofit organization that has advocated against the use of “stress positions” and criticized ICE’s expansive deportation powers, counts Amazon Senior Vice President of Global Corporate Affairs Jay Carney among the most prominent members of its board of directors. Human Rights First did not respond to a request for comment.

    An unauthorized immigrant boards a plane for a removal flight on Oct. 23, 2015.

    Photo: U.S. Immigration and Customs Enforcement

    The Activists Tracking Deportations — and Amazon

    Amazon’s partial ownership of Omni is just one of many profitable entanglements the company has with the American deportation apparatus. Palantir’s ICM software, which The Intercept previously reported is used by Homeland Security to track and deport immigrants en masse, has been running on servers leased from Amazon’s sprawling cloud computing business, and in 2018 the Daily Beast reported that Amazon had pitched its now-shelved facial recognition technology to ICE. Amazon’s willingness to build computer infrastructure integral to ICE operations has spurred waves of protests, both in the advocacy community and among its own employees.

    Over the past two years, Towle, Cartwright, and their colleagues at Witness at the Border and other migrant advocacy groups have built a sophisticated grassroots operation that tracks deportation flights meant to evade the public eye, documents alleged abuses, and organizes against the deportation system. As ICE goes to great lengths to obscure its deportations from the public, the loose confederation of activists, researchers, and academics essentially reverse-engineer the flight plans, an effort they coordinate via a WhatsApp group. “Perhaps because of Omni’s DOD connection, it masks its tail numbers, which isn’t strictly illegal. But it means that Tom [Cartwright] can only ‘see’ Omni planes once they are in the air,” Towle wrote in an email. “To anticipate when a new flight might be coming, he relied on clues provided by the detainees themselves, passed to the group chat by their advocates and attorneys.” Towle explained that the Omni watchers look for tell-tale signs of an impending deportation, like a detainee being abruptly summoned for Covid-19 testing, having their commissary account locked, or being transferred to a migrant detention facility filled with fellow nationals. Between lawyers and activists on the ground and Cartwright watching the skies, the group has become adept at detecting unannounced deportation flights as they’re happening.
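
    Mechanically, the approach Towle describes reduces to matching a small watchlist of tail numbers against live position data once planes are airborne. A rough sketch under those assumptions follows; `fetch_airborne_aircraft()` is a hypothetical stand-in for a real ADS-B or flight-data feed, and the records are invented.

    ```python
    # Rough sketch of airborne-only tracking: because tail numbers are masked
    # on the ground, a watcher can only match a watchlist against live
    # position data once planes are in the air. The feed function is a
    # hypothetical placeholder; the records are illustrative.
    WATCHLIST = {"N378AX"}  # tail number reported on both Omni Class charters
                            # and December deportation flights

    def fetch_airborne_aircraft() -> list[dict]:
        """Placeholder for a live aircraft-position (e.g., ADS-B) feed."""
        return [
            {"tail": "N378AX", "lat": 29.4, "lon": -100.9, "alt_ft": 34000},
            {"tail": "N123AB", "lat": 40.6, "lon": -73.8, "alt_ft": 12000},
        ]

    def watchlist_hits() -> list[dict]:
        """Return currently airborne aircraft whose tails are watchlisted."""
        return [a for a in fetch_airborne_aircraft() if a["tail"] in WATCHLIST]

    for hit in watchlist_hits():
        print(f"{hit['tail']} airborne at {hit['lat']}, {hit['lon']}")
    ```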

    Towle says that learning of the WRAP’s use on deportees and hearing of its debilitating and demeaning effect firsthand has galvanized her work. “In the WhatsApp chat, I began to hear their advocates and attorneys use the same language. … Reports from the deported individuals included people being ‘stuffed into a sack,’ ‘bagged,’ ‘bagged and tied,’ or ‘tortilla’d.’” Armed with ample evidence of abuses gleaned from over 50 detainee interviews, Towle and her fellow activists and observers say their next step is pushing Amazon to cease, or at least acknowledge, its role in keeping ICE Air’s machine humming along. “First, I want Amazon to recognize its complicity, by virtue of its connection to Omni, in the commission of egregious human rights violations,” Towle told The Intercept. “Second, Amazon should sever Omni’s relationship with ICE,” a decision, however unlikely, that Towle hopes would send a message to other companies cashing in on mass deportations. It is, she points out, a small drop in the company’s very large bucket: “Amazon can cut ICE loose and never miss a cent.”

    The post Amazon Co-Owns Deportation Airline Implicated in Alleged Torture of Immigrants appeared first on The Intercept.

  • Facebook’s Dangerous Individuals and Organizations policy, a vast library of secret rules limiting the online speech of billions, is ostensibly designed to curtail offline violence. For the editors of the Tamil Guardian, an online publication covering Sri Lankan news, the policy has meant years of unrelenting, unexplained censorship.

    Thusiyan Nandakumar, the Tamil Guardian’s editor, told The Intercept that over the past several years, Facebook has twice suspended the publication’s Instagram account and removed dozens of its posts without warning — each time claiming a violation of the DIO policy. The censorship comes at a time of heightened scrutiny of this policy from free speech advocates, civil society groups, and even the company’s official Oversight Board.

    A string of meetings with Facebook has yielded nothing more than vague assurances, dissembling, and continued deletions, according to Nandakumar. Despite claims from the company that it would investigate the matter, Nandakumar says the situation has only gotten worse. Faced with ongoing censorship, the Guardian’s staff have decided to self-censor, no longer using the outlet’s Instagram account for fear of losing it permanently.

    Facebook admitted to The Intercept that some of the actions taken against the outlet had been made in error, while defending others without providing specifics.

    Civil liberties advocates who discussed the Tamil Guardian’s treatment said that it’s an immediately familiar dynamic and part of a troubling trend. Facebook moderators, whether in South Asia, Latin America, or any of the other places they patrol content, routinely take down posts first and ask questions later, the advocates said. They tend to lack expertise and local nuance, and their employer is often under pressure from local governments. In Sri Lanka, authorities have “picked up and harassed” Tamil journalists for critical coverage in real life, according to Steven Butler of the Committee to Protect Journalists, who called the Tamil Guardian’s Facebook experience “definitely a press freedom issue.” Indeed, experts said Facebook’s censorship of the Guardian calls into question its fundamental ability to sensibly distinguish “dangerous” content that can instigate violence from journalistic and cultural expression about groups that have engaged in violence.

    Sri Lanka’s Information Offensive

    The roots of the Tamil Guardian’s very 21st-century online content dilemma go back more than four decades, to the civil war that erupted between Sri Lanka’s government and members of its Tamil ethnic minority in 1983. It was then that the Liberation Tigers of Tamil Eelam began a 25-year, sporadically fought conflict to establish an independent Tamil state. During the war, the LTTE, also known as the Tamil Tigers, developed an increasingly ruthless reputation. To the ruling party of Sri Lanka and its allies in the West, the Tamil Tigers were a bloody, irredeemable militant group, described by the FBI in 2008 as “among the most dangerous and deadly extremists in the world.” But for many Sri Lankan Tamils, the Tigers were their army, a bulwark against a government intent on repressing them. “It was an organization that at the time became almost synonymous with Tamil demands for independence, as they were the group that was quite literally willing to die for it,” Nandakumar explained via email.

    Unquestionably, however, the LTTE was a violent organization whose tactics included the use of suicide bombings, torture, civilian assaults, and political assassinations. The government, meanwhile, perpetrated decades of alleged war crimes, including the repeated massacre of Tamil civilians, generating waves of bloodshed that dispersed Sri Lankan Tamils throughout the world. The Tamil Guardian was founded in London in 1998 to serve members of this diaspora as well as those who remained in Sri Lanka. Though it was often considered a pro-Tiger publication in contemporaneous reporting during the war, the Tamil Guardian of today runs editorials by the likes of David Cameron and Ed Miliband, and its work is cited by larger outlets in the Western political media mainstream.

    The Tigers were defeated and dissolved in 2009, bringing the civil war to a close after the deaths of an estimated 40,000 civilians. In the years since, Sri Lankan Tamils have observed Maaveerar Naal, an annual remembrance of those who died in the war, with ceremonies both at home in Sri Lanka and abroad. “When [Tigers] died or were killed, people lost family, friends, colleagues,” said Nandakumar. “They are people that many around the world still want to remember and commemorate.”

    Meanwhile, the Sri Lankan state has conducted what human rights observers have described as a campaign of brutal suppression against the memorialization of war casualties and other expressions of Tamil national identity. Mentions of the LTTE are subject to particularly fierce crackdowns by the hard-line government helmed by Gotabaya Rajapaksa, a former Sri Lankan defense secretary accused of directly ordering a multitude of atrocities during the war.

    The suppression campaign has included attempts to stifle unwanted online commentary. In September 2019, Gen. Shavendra Silva, Sri Lanka’s army chief, announced a military offensive against “information” at the nation’s Seventh Annual Cyber Security Summit. “Misguided youths sitting in front of the social media would be more dangerous than a suicide bomber,” Silva remarked. Soon after, Nandakumar says, the Tamil Guardian found itself unable to even mention the Tigers on Facebook without being subjected to censorship via the DIO policy. Nandakumar said that virtually any coverage from the Guardian related to the Tigers or even to sentiments of Tamil pride risks removal. Routinely stricken from the Tamil Guardian’s Facebook and Instagram accounts are posts covering Tamil nationalist political protests inside Sri Lanka as well as uploads merely depicting historically notable LTTE figures. Each time the Tamil Guardian has posts deleted or its account ejected, the only rationale provided is that the post somehow violated Facebook’s prohibition against “praise, support, or representation” of a dangerous organization, even though the policy is supposed to carry an exemption for journalism.
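    To see how bluntly such a rule can operate, consider the deliberately naive sketch below: a blacklist term match triggers removal unless a journalism exemption is affirmatively applied, which is exactly the step the Tamil Guardian says never happens in practice. The entity list and logic here are invented for illustration and are not Facebook’s actual moderation pipeline.

    ```python
    # Schematic illustration of blacklist-style moderation with an on-paper
    # journalism exemption. Invented example; not Facebook's real DIO system.
    BANNED_ENTITY_TERMS = {"ltte", "tamil tigers"}  # hypothetical blacklist entries

    def naive_dio_check(post_text, journalism_exemption_applied=False):
        """Remove any post mentioning a listed entity unless an exemption
        is explicitly applied; mere mention and 'praise' look identical."""
        mentions_banned = any(t in post_text.lower() for t in BANNED_ENTITY_TERMS)
        if mentions_banned and not journalism_exemption_applied:
            return "remove"
        return "allow"

    # A news obituary and a propaganda post are indistinguishable to this check:
    print(naive_dio_check("Obituary: former LTTE negotiator Anton Balasingham"))
    # -> "remove", unless a human or classifier flips the exemption flag
    ```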

    “We have never been accused of breaching any UK, or indeed U.S., laws particularly with regards to terrorism,” Nandakumar told The Intercept.

    On the Tamil Guardian’s overall experience with Facebook, spokesperson Kate Hayes would say only, via email: “We remove content that violates our policies, but if accounts continue to share violating content, we will take stronger action. This could include temporary feature blocks and, ultimately, being removed from the platform.”

    Though defunct, the Tigers are still a designated terror organization in the U.S., Canada, and the European Union, and Facebook cribs much of its DIO roster from these designations, blacklisting and limiting discussion of not only the Tigers but also 26 other allegedly affiliated persons and groups. Still, as Nandakumar points out, Western outlets like the BBC and U.K. Guardian routinely cover the same protests and remembrances as his publication, and write obituaries for the same ex-LTTE cadres, without their publications being deemed terrorist propaganda.

    Nandakumar is convinced that the government is monitoring the Tamil Guardian’s Instagram account and reporting anything that could be construed as pro-Tamil, Tiger or otherwise — although he concedes that he can’t prove the Sri Lankan state is behind the Facebook and Instagram suppression. In July 2020, Instagram removed a photo uploaded by the Tamil Guardian of Hugh McDermott, a member of the Australian Parliament, attending a Maaveerar Naal memorial event in Sydney, while a photo of a flower being laid at a similar event in London was deleted three months later. When the outlet published an article about Anton Balasingham, a former LTTE negotiator, in November 2020, on the anniversary of his death, an Instagram post promoting the article was quickly removed, as was a post that same month depicting the face of S. P. Thamilselvan, former head of the LTTE’s political wing and a peace negotiator who was killed by a Sri Lankan airstrike in 2007.

    Liberation Tigers of Tamil Eelam’s (LTTE) chief negotiator Anton Balasingham during a press conference at the Bogis-Bossey chateau in Celigny, Switzerland, on Feb. 23, 2006.

    Photo illustration: Soohee Cho for The Intercept, Francois Mori/AP

    Facebook Adds to Government Pressure

    In January 2021, following two years of vanishing posts and requests for more information from Facebook, Nandakumar was able to secure a meeting with the team responsible for DIO enforcement. “The meeting was cordial, with Facebook acknowledging that … their policy can sometimes be bluntly applied and that mistakes can occur,” Nandakumar said. “They encouraged us to send examples, assuring us that this was an issue of importance and one that they would look into.” Nandakumar says the outlet then submitted an 11-page brief documenting the removals and hoped for the best.

    Meanwhile, the deletions kept coming. “We continued to send over examples, ensuring Facebook was kept almost constantly aware of the number of times our news coverage was being unfairly removed,” said Nandakumar.

    Despite Facebook’s suggestion that the posts had been removed in error, Nandakumar says that in February 2021, the DIO team flatly told him that the Tamil Guardian account had in fact been properly punished for its “praise, support, and representation” of terrorism. “It was extremely disappointing,” recounted Nandakumar in an email to The Intercept. “We had what seemed like a productive meeting, sent over a detailed brief and repeatedly emailed extensive examples, yet received a curt and blunt response which failed to address any of the issues we had raised. We were being brushed off. We highlighted once more that some of the events we covered were actually taking place in the U.S., legally and with full permission, but were still inexplicably being removed. Their reasoning just did not hold.”

    “We had what seemed like a productive meeting … yet received a curt and blunt response which failed to address any of the issues we had raised.”

    The deletions continued apace: When Kittu Memorial Park in Jaffna, Sri Lanka, burned to the ground in March 2021, the Tamil Guardian wrote an article accompanied by an Instagram post reporting on the suspected arson attack. The park was named for a Tiger colonel who committed suicide in 1993, and Facebook deleted the Instagram post associated with the Guardian article. Two months later, when the outlet published a series revisiting the 2009 destruction of a civilian hospital, believed to have been perpetrated by the Sri Lankan government and described by Human Rights Watch as a war crime, the accompanying Instagram posts were removed.

    A photo of Kittu Memorial Park posted to Instagram by the Tamil Guardian in March 2021 and removed later that month.

    Tamil Guardian

    A photo of Australian MP Hugh McDermott attending a Sri Lankan civil war memorial event in Sydney posted by the Tamil Guardian’s Instagram account, removed by Facebook in July 2020.

    Tamil Guardian

    During the weekend of Maaveerar Naal this past November, the account was reopened, with an automated Facebook message saying that the suspension had been a mistake, only to be banned once more within the same 24-hour period. Though the account is currently reactivated, Nandakumar says the Tamil Guardian’s editors have decided that using it to reach and grow the publication’s audience of about 40,000 monthly readers isn’t worth the risk.

    Facebook’s Hayes wrote, “We removed the Tamil Guardian account in error but we restored it as soon as we realized our mistake. We apologize for any inconvenience caused.” The company did not answer questions about why the Tamil Guardian’s deleted posts had been removed if its overall suspension had been an error.

    The Tamil Guardian obtained a second meeting with Facebook this past October after a pressure campaign from Canadian and British parliamentarians and Reporters Without Borders. At that meeting, Facebook cited its obligation “to comply with U.S. government regulation,” Nandakumar said, and stated that “our content may have continued to breach their guidelines.” Experts say there is no law on the books in the U.S. stopping Facebook from letting journalists or ordinary users freely discuss or even praise LTTE figures, commemorate the war’s victims, or depict contemporary remembrances of the dead. “I know of no obligation under U.S. law, no requirement that they remove such material,” Electronic Frontier Foundation Civil Liberties Director David Greene told The Intercept. “For years they would say, ‘I’m sorry, we are required by law to take that down.’ And we would ask them for the law, and we wouldn’t get anything.”

    The Daunting Job and “Human Error” of Moderators

    It appears, then, to be Facebook, not the U.S. federal government, that is collapsing the LTTE and Sri Lankan Tamil nationalism into a single entity. The consequence is that exploring the country’s painful past and uncertain future from the perspective of the war’s losing side becomes a near impossibility on an internet where a presence on the company’s platforms is crucial to reaching an audience.

    Nandakumar said that the history of the Tigers and the future of Sri Lanka’s Tamils are impossible to untangle. “For newspapers and media organizations reporting on the conflict and the Tamil cause, it was impossible to avoid the LTTE – just as much as it would have been to avoid the Sri Lankan state,” he continued. Today, Nandakumar said, “alongside highlighting of the daily repression faced in the Tamil homeland, our role is to reflect and analyze the variety of Tamil political voices and opinion. We report on commemoration of historical or significant events as these remain important to the Tamil polity, who continue to mark these dates despite Sri Lanka’s attempts to stop them.”

    Tamil Guardian reporters, along with staff from other outlets, are frequently harassed and detained by Sri Lankan police, sometimes on the grounds that they’ve violated national anti-terror laws, according to a Reporters Without Borders report. In 2019, the Tamil Guardian’s Shanmugam Thavaseelan was arrested for “trying to cover a demonstration calling for justice for the Tamil civilians who disappeared during the civil war,” as the report put it.

    Nandakumar says he’s convinced that the Sri Lankan government has a hand in the Facebook deletions, in part because he’s learned that it has attempted similar tactics on other platforms: In December 2020, Twitter informed the Tamil Guardian that the Sri Lankan government had lobbied, unsuccessfully, to have the outlet’s tweets deleted on the platform. “This coincided with a ramping up of media suppression across the island and with the removal of our content on Facebook and Instagram.”

    “What is one person’s dangerous individual or organization is someone else’s hero.”

    “The action taken against The Tamil Guardian account was not in response to any government pressure or mass reporting,” said Facebook’s Hayes, adding that each of the two Instagram suspensions “was a case of human error.”

    Greene said that the Tamil Guardian’s treatment is illustrative of a fundamental parochialism behind the DIO policy: “What is one person’s dangerous individual or organization is someone else’s hero.” But before values come into play, there is the question of basic facts; a moderator overseeing Sri Lanka must know “who the Tamil Tigers were, what the political situation was, the fact that they don’t exist, what their ongoing legacy might be,” Greene said. “The amount of expertise that a company like Facebook is required to have on every single geopolitical situation around the world is really startling.”

    According to Jillian York, director for international freedom of expression at the Electronic Frontier Foundation, the rigidity of Facebook’s DIO roster risks causing what she described as “cultural and historical erasure,” a status quo under which one can’t publicly and freely discuss a group designated as an enemy by the U.S., even after that enemy ceases to exist. “We’ve seen this with some groups in Latin America that are still on the U.S. [terror] list, like FARC,” the Colombian guerrilla army that dissolved in 2017 but remains banned from free discussion under Facebook policy. “At some point, you have to be able to talk about these things.”

    The post Facebook’s Tamil Censorship Highlights Risks to Everyone appeared first on The Intercept.

  • Brinc, a rising company among the many jockeying to sell drones to police agencies, has an appealing founding myth: In the wake of a 2017 mass shooting in Las Vegas, the startup’s young creator decided to help police through the use of nonviolent robots. A promotional video obtained by The Intercept, however, reveals a different vision: selling drones equipped with stun guns to attack migrants crossing the U.S.-Mexico border.

    The company’s founder and CEO, Blake Resnick, recently appeared on Fox Business News to celebrate a venture capital coup: a $25 million investment from major Silicon Valley names like Sam Altman, Jeff Weiner — the former LinkedIn CEO and founder of Next Play Ventures — and former acting Secretary of Defense Patrick Shanahan. At 21, a Thiel fellow and a recent inductee to Forbes’s prestigious “30 Under 30” list in the social impact category, Resnick told Fox Business’s Stuart Varney that Brinc’s quadcopter drones are helping police defuse dangerous hostage situations on a near-daily basis. Resnick repeated his claim that his company had been founded “in large part” to save lives, in response to the 2017 Las Vegas massacre — an inspirational story that has attracted press coverage. With growing attention to the moral and bodily harms caused by militarized autonomous robots, the “Values & Ethics” page on Brinc’s website offers a balm, asserting a “duty to bring these technologies into the world responsibly” and a commitment to “never build technologies designed to hurt or kill.”

    But a 2018 promotional video for a border defense product that was never released shows that the startup’s original technological goals did contemplate hurting people. The video shows Resnick at an unidentified stretch of the U.S.-Mexico border explaining how his company’s flying robots could be used to detect, track, interrogate, and ultimately physically attack would-be migrants. “This is one of the most desolate parts of our southern border,” says a blazer-clad Resnick, standing beside a large metallic box adorned with solar panels. “Every year, over $100 billion of narcotics and half a million people flow through areas just like this one.” When the video was recorded, the Trump administration was beginning to invest in so-called virtual wall surveillance technologies, an alternative to building the physical wall promised during the presidential campaign. The government signed contracts with Brinc competitors such as Anduril Industries (also linked to Peter Thiel, the PayPal co-founder behind the Thiel Fellowship). “There’s no wall here, and it probably wouldn’t work anyway because of the rough terrain and eminent domain issues.” Luckily, “there is a solution,” says Resnick, gesturing to the metal chest.

    Resnick was about 18 at the time the video was made.

    In the video, he calls that solution the “Wall of Drones,” a system of glinting boxes that would be installed along the border, each housing a small robotic quadcopter with thermal and high-definition sensors, self-piloting abilities, human-detection software, and the crucial element: a stun gun. As soon as it detected a “suspicious” person, Brinc’s border drone would connect its sensors and speaker to a Border Patrol agent, who could remotely “interrogate” the “perpetrator.” In the demonstration video, a Latino actor identified as “José” walks through the desert. Approached by the drone, José refuses to show his documents, points a gun at the Brinc unit, and walks away. As he does, the drone shocks him with a Taser. José crumples to the ground.

    Actually implementing the Wall of Drones would involve hundreds or thousands of these armed robots constantly hunting for targets along the border, putting more weapons into an already highly militarized region.

    The AI-driven hunting and tasing of a migrant wandering the desert is not a scene easily reconciled with Brinc’s corporate vow: “Be mindful of the implications of our work — we won’t build a dystopia.” Today the company still develops sophisticated security-oriented drones aimed at police, the Department of Homeland Security, and defense customers, but without the armed variant shown in the video. Brinc’s main current offering for police and emergency responders is the LEMUR S drone, which closely resembles the Wall of Drones unit but has no weapon installed. The company describes it as a “tactical tool that can help to de-escalate, reduce risk, and save lives.” The company also sells the BRINC BALL, a spherical, cellphone-like device that police can throw into tense situations to listen and communicate remotely.

    Three years after his statements at the border, the Blake Resnick of 2021 says he regrets having worked on the system in the promotional video. Over email, he told The Intercept that the “video is immature, deeply regrettable and not at all representative of the direction I have taken the company in since.” He described the Wall of Drones system as a “prototype” that was “never fully developed, sold, or used operationally” and was discontinued in 2018 for being “prone to disastrous misuse.” “I agree that the technology as depicted is unethical and that is one of the reasons we created a set of Values and Ethics to guide our work,” he added, referring to that section of the company’s website.

    Resnick also said that “the video was faked”: the company “never built a drone with a functional taser.” According to him, compressed gas was used during filming to fire a Taser dart at the actor, but “without actually putting high voltage through the wires.”

    Even so, the company tried to sell the system. Resnick said there were “initial discussions with a very limited number of parties” about purchasing the Wall of Drones system, with the idea of building something cheaper than a border wall while reducing “the risk of gunfights between law enforcement and armed traffickers attempting to cross into the United States.” But “nothing ever progressed” with the project, and Resnick repeated his claim that, inspired by the Las Vegas attack, he sought “to pivot away from these uses” and serve emergency responders — though work on the Wall of Drones continued into the year after the massacre. He says that pivot and the company’s values statement predate the startup’s first employee, its revenue, its product deliveries, and its fundraising. He also says Brinc is committed to not selling weaponized drones.

    Despite the change of vision and the company’s unarmed orientation, sources told The Intercept that the fact that the technology was ever on the table raises serious concerns about the values, ambitions, and judgment of Brinc and its young CEO. Though the company’s founder says he has moved away from drones built to intercept and incapacitate migrants, Brinc’s original mission of selling flying robots to aid state security remains in place, situating the startup on an ethically unsettling new business frontier. The company recently hired a “federal capture and strategy director” previously employed by a defense contractor that sold drones to U.S. Special Operations Command, suggesting an interest in military applications.

    “He’s got this whole narrative about the shooting in Vegas, but the original idea was 100 percent to use drones to tase migrants,” said a source with direct knowledge of Brinc who requested anonymity to protect their livelihood. That person said that, at the time, Resnick showed little interest in drone “applications in the non-tasing immigrants business,” even though there are “a million things you can use drones for that don’t involve electrocuting people.”

    Referring to Brinc’s current emphasis on nonviolence and de-escalation, the source offered this assessment: “They only made that up when they raised funds from real investors like Sam Altman. The company puts out a good front about rescuing people and doing no harm, but imagine what is said to cops behind closed doors?”

    An actor portraying a migrant on the U.S.-Mexico border is struck with a stun gun, demonstrating the capabilities of Brinc’s “Wall of Drones” system.

    Still: The Intercept

    “Startups pivot all the time to where the money is,” the source added. “Google once said ‘don’t be evil.’ When the rubber hits the road, you’ve got paying customers, and those customers want things.”

    A patent in Resnick’s name, covering an expanded version of the system shown in the video, raises questions about his professed desire to move away from armed drones and about the company’s potential to use this technology in the future. Brinc filed a provisional application for the patent in 2017 and a formal one in June 2018, seven months after the Las Vegas attack that supposedly convinced Resnick to help emergency responders. The patent was awarded to Brinc last year. The application, for “Drone Implemented Border Patrol,” states: “If a person is detected, an onboard facial recognition algorithm will attempt to identify the person. … In one embodiment, the facial recognition algorithm works by comparing captured facial features with the U.S. Department of State’s facial recognition database.”

    “When the rubber hits the road, you’ve got paying customers, and those customers want things.”

    The patent specifies that the onboard weapon is a Taser X26, a powerful electroshock weapon that has been discontinued and linked to “higher cardiac risk than other models,” according to a 2017 Reuters investigation. But a stun gun was only one of many possible options. Other potential anti-migrant armaments described in the patent include pepper spray, tear gas, rubber bullets, plastic bullets, beanbag rounds, sponge grenades, and an “electromagnetic weapon, laser weapon, microwave weapon, particle beam weapon, sonic weapon and/or plasma weapon,” with “a sonic approach to incapacitate a target.”

    Civil liberties and immigrant rights advocates condemned the technology demonstrated in the video.

    “The Biden administration and Congress must not contract with companies like Brinc,” said Mitra Ebadolahi, a senior staff attorney with the American Civil Liberties Union of San Diego and Imperial Counties, in California, after watching the video. “Doing so promotes profits over people and does nothing to further human safety or security.” Ebadolahi added that the Wall of Drones system is “particularly horrifying when one considers potential targets: unaccompanied children, pregnant people, and asylum-seekers searching for safety.”

    She echoed the concerns of the source close to Brinc about a return to weaponized drones: “In an unregulated market, tech executives follow the money, and they engineer their products for buyers that promise large profits and little scrutiny. The most attractive government contracts are with our most over-funded and under-scrutinized agencies: law enforcement.”

    “You can tell very clearly that these companies are getting their inspiration from the killer drones that are used in other parts of the world.”

    In an interview with The Intercept, Jacinta Gonzalez of Mijente, a Latino advocacy and migrant rights group, described the Wall of Drones video as “absolutely horrifying.” “It’s terrifying to think that this is not just an awful idea that someone brings up in a brainstorming session, but [Brinc has] gone so far as to make the video,” which Gonzalez considers illustrative of “how blurry the line has become between war zones and a militarized border. You can tell very clearly that these companies are getting their inspiration from the killer drones that are used in other parts of the world.”

    Gonzalez said she was disturbed by the scenario depicted in the video, which she described as a “racist fantasy” that does not represent the border’s true humanitarian problems. “If there was a drone flying over, they would most likely be finding families and people who are going through a very difficult health crisis. … They would be confronting folks that might not be speaking English.” Forcing the average migrant at the southern U.S. border into an interrogation with a robot designed to electrocute them “just makes a dangerous journey all the more violent, all the more likely to result in death or harm.”

    Gonzalez is skeptical, over the long term, of Brinc’s current promise not to contribute to a robotic police dystopia: “You cannot trust a company that is even putting ideas like this out into the world.” Avoiding a future in which the U.S.-Mexico border is patrolled by armed flying robots “not only requires commitments from this company to say that they won’t produce this type of drone, but it also requires local police departments, and ICE, the Department of Homeland Security, and Border Patrol to all proactively say, ‘This is not the type of technology that we want to invest in, we would absolutely never implement something like this.’”

    Translation: Ricardo Romanoff

    The post Drone dá choque em migrante na fronteira em vídeo de startup dos EUA appeared first on The Intercept.

  • Brinc, a rising star among the many companies jockeying to sell drones to police, has a compelling founding mythology: In the wake of the 2017 Las Vegas mass shooting, its young founder decided to aid law enforcement agencies through the use of nonviolent robots. A company promotional video obtained by The Intercept, however, reveals a different vision: Selling stun gun-armed drones to attack migrants crossing the U.S.-Mexico border.

    The company’s ascendant founder and CEO, Blake Resnick, recently appeared on Fox Business News to celebrate a venture capital coup: $25 million from Silicon Valley A-listers like Sam Altman, ex-LinkedIn CEO Jeff Weiner’s Next Play Ventures, and former acting Secretary of Defense Patrick Shanahan. The 21-year-old Resnick, a Thiel fellow and a new inductee to the prestigious Forbes “30 Under 30” list in the category of social impact, told Fox Business’s Stuart Varney that Brinc’s quadcopter drones are helping police defuse dangerous hostage situations on a near-daily basis. Resnick repeated his longtime claim that the company had been founded “in large part” as a lifesaving response to the 2017 Las Vegas massacre, an inspirational story that’s made its way into press coverage of the startup. With increased scrutiny paid to the moral and bodily harms posed by autonomous militarized robots, Brinc’s “Values & Ethics” webpage offers a salve, asserting a “duty to bring these technologies into the world responsibly” and a commitment to “never build technologies designed to hurt or kill.”

    But a 2018 promotional video for an unreleased border security product shows that the startup’s original technological goals did involve hurting people. In the video, Resnick, standing at an unnamed stretch of the U.S.-Mexico border, demonstrates how his company’s flying bots could be used to detect, track, interrogate, and ultimately physically attack would-be migrants. “This is one of the most desolate parts of our southern border,” a blazer-clad Resnick says in the video, standing beside a large metallic box adorned with solar panels. “Every year, over $100 billion of narcotics and half a million people flow through areas just like this one.” When the video was made, the Trump administration had begun investing in so-called virtual wall surveillance technologies to obviate the need for the physical wall that Donald Trump had promised during his presidential campaign, inking contracts with Brinc competitors like Anduril Industries (also linked to Peter Thiel, the PayPal co-founder behind the Thiel Fellowship). “There’s no wall here,” notes Resnick, “and it probably wouldn’t work anyway because of the rough terrain and eminent domain issues.” Luckily, “there is a solution,” says Resnick, gesturing to the metal chest.

    Resnick would have been about 18 at the time the video was made.

    In the video, Resnick calls that solution the “Wall of Drones,” in which the glinting boxes would be deployed across the border, each harboring a small robotic quadcopter with high-definition and thermal sensors, self-piloting abilities, human-detection software, and, crucially, a stun gun. Once Brinc’s border drone detected a “suspicious” person, it was to connect its sensors and built-in speaker with a Border Patrol agent, who would then remotely “interrogate” the “perpetrator.” In the video demonstration, a Latino actor referred to as “José” is walking in the middle of the desert when he is approached by the Brinc drone. José then refuses to show identification to the drone, points a gun at it, and walks away, whereupon the drone is depicted firing a Taser into his back and shooting an electrical current through him. José crumples into the dirt.

    Fully realized, the Wall of Drones would have entailed hundreds or thousands of these armed robots constantly searching for targets along the border, adding more weapons to an already highly militarized stretch of the Earth.

    The artificial intelligence-powered hunting and tasing of a wandering migrant isn’t a scene that’s immediately easy to reconcile with Brinc’s corporate vow: “Be mindful of the implications of our work — we won’t build a dystopia.” Today the company is still engineering sophisticated security-oriented drones with an eye toward police, the Department of Homeland Security, and defense customers but without the weaponized variant shown off in the desert. Brinc’s current main offering to police and other first responders is the LEMUR S drone, which closely resembles the Wall of Drones unit but does not have a weapon installed. It’s described by the company as a “tactical tool that can help to de-escalate, reduce risk, and save lives.” The company also sells the BRINC BALL, a spherical cellphone-like device that can be tossed into dangerous situations by police to listen and communicate remotely.

    The Blake Resnick of today, three years removed from his borderland demonstration, is contrite over having worked on the border system. He told The Intercept over email that the “video is immature, deeply regrettable and not at all representative of the direction I have taken the company in since.” He described the Wall of Drones system as a “prototype” that was “never fully developed, sold, or used operationally” and was discontinued in 2018 because it is “prone to disastrous misuse. … I agree that the technology as depicted is unethical and that is one of the reasons we created a set of Values and Ethics to guide our work,” he added, referring to the website section.

    Resnick also said that “the video was faked” — the company “never built a drone with a functional taser.” The video, he said, used compressed gas to fire a Taser dart at the actor but “without actually putting high voltage through the wires.”

    Still, the company did try to sell the system: Resnick noted that “BRINC had initial discussions with a very limited number of parties” about purchasing the Wall of Drones system, explaining that the idea was to build something cheaper than a border wall that would reduce “the risk of gunfights between law enforcement and armed traffickers attempting to cross into the United States.” But “nothing ever progressed” with the project, and Resnick repeated his claim that he was inspired by the Las Vegas shooting “to pivot away from these uses” to serving emergency responders, though work continued on the Wall of Drones into the year following the massacre. That pivot and the company values statement predated the startup’s first employee, revenue, product delivery, and fundraising, he said. Brinc, he said, is committed to not selling weaponized drones.

    Despite Resnick’s change of heart and the company’s current unarmed tack, some who spoke to The Intercept say the fact that the technology was ever on the table raises serious concerns about the values, ambitions, and judgment of Brinc and its young CEO. And though Brinc’s founder says that he’s pivoted away from drones built to intercept and incapacitate migrants, the company’s original mission — selling flying robots to aid in state security — remains in place, situating the company in an ethically fraught new frontier of business. The company recently hired a “federal capture and strategy director,” previously employed by a defense contractor selling drones to U.S. Special Operations Command, suggesting an interest in military applications.

    “He’s got this whole narrative about the shooting in Vegas, but the original idea was 100 percent to use drones to tase migrants,” a source with direct knowledge of Brinc told The Intercept. The source, who asked to remain anonymous to protect their livelihood, said that Resnick at the time showed little interest in drone “applications in the non-tasing immigrants business” even though there are “a million things you can use drones for that don’t involve electrocuting people.”

    Referring to Brinc’s current emphasis on nonviolence and de-escalation, this person said, “They only made that up when they raised funds from real investors like Sam Altman. The company puts out a good front about rescuing people and doing no harm, but imagine what is said to cops behind closed doors?”

    An actor depicting a migrant on the U.S.-Mexico border is struck with a stun gun, demonstrating the capabilities of Brinc’s “Wall of Drones” system.

    Still: The Intercept

    “Startups pivot all the time to where the money is,” this source added. “Google once said ‘don’t be evil.’ When the rubber hits the road, you’ve got paying customers, and those customers want things.”

    A patent in Resnick’s name protecting an expanded version of the system from the video raises further questions about both his stated motivation for pivoting away from weaponized drones and the potential for the company to use such technology in the future. Brinc provisionally applied for the patent in 2017 but formally applied in June 2018 — seven months after the Vegas shooting that Resnick said convinced him to switch to helping emergency responders. The patent was awarded to Brinc last year. The patent application, for “Drone Implemented Border Patrol,” states: “If a person is detected, an onboard facial recognition algorithm will attempt to identify the person. … In one embodiment, the facial recognition algorithm works by comparing captured facial features with the U.S. Department of State’s facial recognition database.”
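    The identification step the patent describes follows a standard pattern in face recognition: reduce each face to a fixed-length embedding vector, then match a captured face against a database of known embeddings. The sketch below is a generic illustration of that comparison step only; it is not Brinc’s algorithm, and the model that would produce the embeddings is assumed.

    ```python
    # Generic nearest-neighbor face matching over embedding vectors.
    # Illustrates the comparison step described in the patent, not Brinc's
    # actual system; "embeddings" here are faked with random vectors.
    import numpy as np

    def best_match(probe, gallery, threshold=0.6):
        """Return the gallery identity with the highest cosine similarity
        to the probe embedding, or None if nothing clears the threshold."""
        probe = probe / np.linalg.norm(probe)
        best_id, best_score = None, threshold
        for identity, emb in gallery.items():
            score = float(np.dot(probe, emb / np.linalg.norm(emb)))
            if score > best_score:
                best_id, best_score = identity, score
        return best_id

    rng = np.random.default_rng(0)
    gallery = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
    probe = gallery["person_42"] + rng.normal(scale=0.05, size=128)  # noisy recapture
    print(best_match(probe, gallery))  # -> "person_42"
    ```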

    “When the rubber hits the road, you’ve got paying customers, and those customers want things.”

    The patent specifies that the onboard stun gun is a Taser X26, a powerful, discontinued electroshock weapon associated with “higher cardiac risk than other models,” according to a 2017 Reuters investigation. But a stun gun was only one of many possible options. Other potential anti-migrant armaments described in the patent include pepper spray, tear gas, rubber bullets, rubber buckshot, plastic bullets, beanbag rounds, sponge grenades, an “electromagnetic weapon, laser weapon, microwave weapon, particle beam weapon, sonic weapon and/or plasma weapon,” along with “a sonic approach to incapacitate a target.”

    Migrant and civil liberties advocates decried the technology demonstrated in the video.

    “The Biden administration and Congress must not contract with companies like Brinc,” said Mitra Ebadolahi, a senior staff attorney with the American Civil Liberties Union of San Diego and Imperial Counties, after reviewing the video. “Doing so promotes profits over people and does nothing to further human safety or security.” Ebadolahi added that the Wall of Drones system is “particularly horrifying when one considers potential targets: unaccompanied children, pregnant people, and asylum-seekers searching for safety.”

    She echoed the source’s concerns about a pivot back to weaponized drones, stating: “In an unregulated market, tech executives follow the money, and they engineer their products for buyers that promise large profits and little scrutiny. The most attractive government contracts are with our most over-funded and under-scrutinized agencies: law enforcement.”

    “You can tell very clearly that these companies are getting their inspiration from the killer drones that are used in other parts of the world.”

    Jacinta Gonzalez of Mijente, a Latino advocacy and migrant rights group, described the Wall of Drones video as “absolutely horrifying” in an interview with The Intercept. “It’s terrifying to think that this is not just an awful idea that someone brings up in a brainstorming session, but [Brinc has] gone so far as to make the video,” which she says is illustrative of “how blurry the line has become between war zones and a militarized border. You can tell very clearly that these companies are getting their inspiration from the killer drones that are used in other parts of the world.”

    Gonzalez said that she was disturbed by the scenario depicted in the video, which she described as a “racist fantasy” and not representative of the true humanitarian problems along the border. “If there was a drone flying over, they would most likely be finding families and people who are going through a very difficult health crisis. … They would be confronting folks that might not be speaking English.” Forcing the average southern border migrant into an interrogation with a robot designed to electrocute them “just makes a dangerous journey all the more violent, all the more likely to result in death or harm.”

    Gonzalez shared skepticism over how Brinc’s current pledge to not help build a robotic police dystopia might fare in the longer term: “You cannot trust a company that is even putting ideas like this out into the world.” Avoiding a future in which the southern border is patrolled by armed flying robots “not only requires commitments from this company to say that they won’t produce this type of drone, but it also requires local police departments, and ICE, the Department of Homeland Security, and Border Patrol to all proactively say, ‘This is not the type of technology that we want to invest in, we would absolutely never implement something like this.’”

    The post Startup Pitched Tasing Migrants From Drones, Video Reveals appeared first on The Intercept.

  • Following the U.S. military withdrawal from Afghanistan and the ascendance of the Taliban, Facebook has found itself with a power nearly unprecedented in history: an American corporation unilaterally controlling the most popular means through which an entire foreign government speaks to its people.

    After the Taliban assumed power in August, Facebook initially tightened its controls on the group, which it had already blacklisted. But internal company materials reviewed by The Intercept show that Facebook has carved out several exceptions to its Taliban ban, permitting specific government ministries to share content via the company’s platforms and contributing to a growing tangle of internal policies on how the Taliban posts.

    Facebook has for years officially barred the Taliban and myriad affiliates from using its platforms under the company’s Dangerous Individuals and Organizations policy, an internal blacklist published by The Intercept in September. The DIO blocks thousands of groups and people from Facebook platforms and dictates what billions of people can say about them there. But unlike other banned groups on the DIO list, like Al Qaeda or the Third Reich, the Taliban is now a sovereign government engaged in the very real business of administering an entire country with millions of inhabitants.

    An internal policy memorandum obtained by The Intercept shows that, at the end of September, the company created a DIO exception “to allow content shared by the Ministry of Interior.” The memo cited only “important information about new traffic regulations,” noting “we assess the public value of this content to outweigh the potential harm,” although it did not limit its exception to traffic updates only. A second DIO exception added at the same time provides a far narrower carveout: Two specific posts from the Ministry of Health would be permitted on the grounds that they contained information relevant to Covid-19. Despite the exceptions, however, Interior’s Facebook page was deleted at the end of October, as first reported by Pajhwok Afghan News agency, while the Health Ministry’s page hasn’t posted since October 2.

    The exception memo cited “important information about new traffic regulations,” noting “we assess the public value of this content to outweigh the potential harm.”

    While no other government offices are currently allowed to share information, other exceptions to the DIO policy reviewed by The Intercept were even narrower in scope: For just 12 days in August, government figures on Facebook were permitted to recognize the Taliban “as official gov of Afghanistan” without risking deletion or suspension, according to another internal memo, and a similarly brief stretch from late August to September 3 granted users the freedom to post the Taliban’s public statements without having to “neutrally discuss, report on, or condemn” these statements.
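    Taken together, the memos describe a default ban with narrow carveouts layered on top, each scoped to particular content and, in some cases, to a fixed time window. The sketch below encodes that structure; it is a schematic illustration only, not Facebook’s enforcement code, and the exact dates are approximations of the windows reported above.

    ```python
    # Schematic model of a blanket ban plus scoped, time-bounded exceptions.
    # Illustrative only; dates approximate the windows reported above.
    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class Carveout:
        scope: str            # the content the exception permits
        start: date
        end: Optional[date]   # None = open-ended

    TALIBAN_CARVEOUTS = [
        Carveout("posts by the Ministry of Interior", date(2021, 9, 30), None),
        Carveout("two specific Ministry of Health Covid-19 posts",
                 date(2021, 9, 30), None),
        Carveout("recognizing the Taliban as official gov of Afghanistan",
                 date(2021, 8, 20), date(2021, 8, 31)),  # the "12 days in August"
        Carveout("posting Taliban statements without neutral framing",
                 date(2021, 8, 25), date(2021, 9, 3)),
    ]

    def is_permitted(scope, on):
        """Default deny: allow only content matching an active carveout."""
        return any(c.scope == scope and c.start <= on
                   and (c.end is None or on <= c.end)
                   for c in TALIBAN_CARVEOUTS)

    print(is_permitted("posts by the Ministry of Interior", date(2021, 10, 1)))  # True
    print(is_permitted("posts by the Ministry of Justice", date(2021, 10, 1)))   # False
    ```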

    While exempting the Ministry of Interior would permit Afghans to receive information about a variety of important administrative functions like public security, driver’s licenses, and immigration matters, no such exceptions have been issued for other offices with responsibilities vital to the basic functioning of any country, like the ministries of agriculture, commerce, finance, and justice. Afghanistan is currently “on the brink of a humanitarian catastrophe,” according to a recent U.N. report, and the new Taliban administration is still struggling to establish itself.

    Facebook spokesperson Sally Aldous told The Intercept that the Taliban remains banned from the company’s services through the Dangerous Individuals and Organizations policy, adding, “We continue to review content and Pages against our policies and last month removed several Pages including those from the Ministry of Interior, Ministry of Finance and Ministry of Public Works. However, we’ve allowed some content about the provision of essential public services in Afghanistan, including, for example, two posts in August on the Afghan Health Page.”

    It’s unclear how Facebook has arrived at this piecemeal approach to its Taliban policy, or how exactly it determined which government ministries to permit. Aldous declined to explain how the company drafted these policy exceptions or why they weren’t publicly disclosed, but told The Intercept that “Facebook does not make decisions about the recognized government in any particular country but instead respects the authority of the international community in making these determinations,” adding, “We have a dedicated team, including regional experts, working to monitor the situation in Afghanistan. We also have a wide and growing network of local and international partners that we work with to alert us to emerging issues and provide essential context.”

    Experts who spoke to The Intercept say these exceptions, even if well-intentioned, demand a public disclosure not only of their existence, but also of how the determinations were reached. Others criticized the policy exceptions as arbitrary in nature, underscoring the unchecked power the American company holds over the functioning of another country’s government, particularly in a society like Afghanistan where a lack of internet infrastructure creates a greater reliance on Facebook products. In 2019, a New York Times report noted that Facebook messaging product “WhatsApp has become second only to Facebook as a way for Afghans to communicate with one another, and with the outside world.” While poorer countries are a lucrative and growing target for Facebook’s advertising operations, years of reporting show these markets are often an afterthought in terms of content policy and moderation.

    Masuda Sultan, co-founder of Women for Afghan Women, told The Intercept that while the potential for Taliban propagandizing is a concern, Facebook platforms in Afghanistan may present “the only communication that many people have in order to relay messages with the entities in power, or for these entities to hear them.” In August, Sultan made use of the now-shuttered Taliban WhatsApp hotline when her NGO’s Kabul office was attacked amid the chaos of the American pullout. “It was incredibly important for us to have access to them because the police had abandoned their posts and we had no one else to call,” she added. “Especially during an emergency, it is not helpful to have communications shut down between ordinary people and those in power.”

    Facebook platforms in Afghanistan may present “the only communication that many people have in order to relay messages with the entities in power.”

    While Facebook is a publicly traded company and at times consults and collaborates with both governmental experts and regional NGOs, it remains under the complete control of one man, founder and chief executive Mark Zuckerberg, and its policy decisions are ultimately his. It’s unclear to what extent the future of Afghanistan is a priority for Zuckerberg, even as his company’s undisclosed content policies continue to affect it.

    The company has stumbled through issues of national sovereignty in the past — throttling the Myanmar military junta’s access to Facebook and banning the sitting president of the United States early this year — but the magnitude of banning an entire government and then creating niche exceptions to that ban is a new test of the company’s de facto control over the flow of information to billions of people around the world. “Facebook has had to make these calls before,” explained Jane Esberg, a senior social media analyst at International Crisis Group, a Brussels-based think tank, but “the scale of it is new in the sense that it is both extremely political in the United States, and it is with an organization that is a designated terror organization.”

    While the Taliban is not listed as a terrorist entity by the State Department, it is subject to economic sanctions through the Treasury Department’s Specially Designated Global Terrorist roster, a list of entities on which Facebook’s own internal blacklist relies heavily. Facebook has repeatedly pointed to the SDGT list as the legal rationale behind its Dangerous Individuals and Organizations policy, claiming it has no choice but to limit such speech, though legal scholars deny the company is under any legal obligation to censor the Taliban or any other SDGT entity, let alone censor those who want to mention them.

    However, Facebook appears to be operating based on its own extremely broad and conservative interpretation of the law, one that critics say isn’t grounded in the actual statutes at play but rather the company’s corporate prerogatives. In a recent Twitter thread on this topic, Electronic Frontier Foundation senior attorney and civil liberties director David Greene wrote, “I can confirm that for years we’ve been asking Facebook to provide the specific legal authority that compels them to remove these groups (as opposed to just deciding they don’t want them). I’ve always said there is none. And we’ve never had a specific law cited to us.”

    By notable contrast, Twitter continues to permit the Taliban to use its platform without legal penalty of any kind. “It’s completely unclear what the political logic is and who’s driving the political logic internally,” said Esberg, who emphasized the importance of “some degree of transparency so that we understand what the logic is, what counts as information that the Afghan public needs to see” versus whatever speech is deemed too “dangerous” for the platform. In a recent article for Just Security, Faiza Patel and Mary Pat Dwyer of New York University’s Brennan Center for Justice rebutted the notion that the company’s hands are tied by anti-terror statutes and sanctions compliance, writing: “Facebook needs to set aside the distracting fiction that U.S. law requires its current approach.”

    The ad hoc exceptions for certain elements of the Taliban government make Facebook’s claims that it’s legally bound by the federal government to censor certain foreign groups even more untenable: If U.S. law mandates barring the Taliban regime from using its platforms, as the company and its executives repeatedly assert, then presumably these exceptions would violate Facebook’s expansive interpretation of its legal obligations. Aldous did not respond to a question on this point.

    Ashley Jackson, a former aid worker with the U.N. and Oxfam and co-director of the Centre for the Study of Armed Groups, also criticized the company’s approach. “Why not exempt the Ministry of Education, or whatever else that deals with essential services?” she asked. “The post-2001 republic collapsed. The Islamic Emirate of Afghanistan have absolute power over the government. It makes little sense to pick and choose.”

    The ban is all the more baffling because members of the Taliban can thwart it, Jackson added. “I know that the Taliban have used Facebook to spread propaganda and wage the war because I’ve seen it and written about it,” she said. “I’ve even used Facebook to connect with Taliban commanders. All [Facebook] are doing is covering themselves and obstructing information.”

    Still, Facebook is no doubt under political pressure at home to deny the Taliban any benefit whatsoever, even if it means keeping Afghans in the dark.

    “What they are doing is a cynical PR exercise — not actual safeguarding.”

    “Facebook’s stance reflects a much more contentious debate on ‘legitimizing’ the Taliban, which has been marked by total and utter policy incoherence on the part of Western States,” Jackson explained. “It makes sense that Facebook’s own policy is incoherent, but erring on the side of conservatism — they’re trying to avoid public criticism. No private company should have this power, of course, but what they are doing is a cynical PR exercise — not actual safeguarding.”

    Facebook’s Taliban problem began as the militant group took control of Kabul in August, pitting the social network’s opaque and U.S.-centric content moderation policies against the undeniable reality on the ground. As the last American planes were escaping the city and Taliban officials were setting up shop in government buildings, Facebook terminated a WhatsApp “emergency hotline” created by the group “for civilians to report violence, looting or other problems,” the Financial Times reported. The move immediately drew a mixed reaction, satisfying foreign policy hard-liners while disturbing others who said it would only deprive an already beleaguered Afghan public of receiving information from their new government, however loathed in the West.

    But even though Afghanistan now occupies a diminished space in the American public consciousness and media, Facebook’s role there remains no less fraught. “There’s a real tension between wanting to keep certain information on the platform, including propaganda and misinformation,” said Esberg, “and allowing these actors to actually govern, and not completely scuttling their attempts at governing a country that is already facing a pretty severe crisis in and of itself.”

    The post Facebook Grants Government of Afghanistan Limited Posting Rights appeared first on The Intercept.


  • Legislation introduced in both the House and Senate today will force cops to obtain a warrant before extracting information stored in the computers onboard modern cars, closing what the bill’s sponsors say is a glaring, outdated loophole in Fourth Amendment protections.

    Recent automobile models rely heavily on computers for everything from navigation to engine diagnostics to entertainment, and entice drivers to connect their smartphones for added features and convenience. These systems log drivers’ movements while also downloading deeply sensitive personal information from their smartphones over Bluetooth or Wi-Fi — typically silently, without their knowledge or consent.

    The conversion of cars into four-wheeled unprotected databases, with troves of information about owners’ travels and associates, has presented low-hanging fruit for law enforcement agencies, which are able to legally pull data off a vehicle without the owner’s knowledge. They are aided by a small but lucrative industry of tech firms that perform “vehicle forensics,” extracting not only travel data but often text messages, photos, and other private data from synced devices. Critics say this exploits a dangerous gap in the law: If police want to search the contents of your smartphone, the Fourth Amendment demands that they obtain a warrant first; if they want to search the computer built into your car, they don’t need any such permission, even if they end up siphoning data that originated on the exact same smartphone.

    The new legislation, titled “Closing the Warrantless Digital Car Search Loophole Act,” would bar such warrantless searches; evidence obtained through them would be inadmissible in court, could not be used to establish probable cause, and would be off-limits to regulatory agencies. The measure was introduced in the Senate by Oregon Democrat Ron Wyden and Wyoming Republican Cynthia Lummis, and in the House by Rep. Peter Meijer, the Republican representing West Michigan’s 3rd Congressional District, and Rep. Ro Khanna, the Democrat representing the San Francisco Bay Area’s 17th.

    “The idea the government can peruse digital car data without a warrant should sit next to the Geo Metro on the scrap heap of history,” Wyden said in an advance announcement shared with The Intercept.

    In May, The Intercept reported that U.S. Customs and Border Protection had contracted with MSAB, a Swedish company specializing in digital device cracking, to purchase vehicle forensics kits manufactured by Berla, an American firm. MSAB marketing materials make clear how powerful these kits are, touting the ability to pull “[r]ecent destinations, favorite locations, call logs, contact lists, SMS messages, emails, pictures, videos, social media feeds, and the navigation history of everywhere the vehicle has been,” as well as data that can be used to determine a target’s “future plan,” and “[i]dentify known associates and establish communication patterns between them.”

    CBP’s use of such tools is among the warrantless uses of car data that would be blocked by the new bill, Wyden spokesperson Keith Chu confirmed.

    “New vehicles are computers on wheels and should have the same 4th Amendment protections.”

    The bill protects a diverse range of data collected by today’s cars, including “all onboard and telematics data” in the vehicle or in attached “storage and communication systems,” including “diagnostic data, entertainment system data, navigation data, images or data captured by onboard sensors, or cameras, including images or data used to support automated features or autonomous driving, internet access, and communication to and from vehicle occupants.”

    There are carveouts; the bill exempts vehicles that require a commercial license to drive as well as traffic safety research and situations subject to “emergency provisions in the wiretap act and the USA Freedom Act, enabling the government to get a warrant after the fact,” according to an overview shared by Wyden’s office.

    The bill has endorsements from an array of left-leaning groups, including due process advocates like the American Civil Liberties Union and Electronic Frontier Foundation. But the Republican backing underlines that digital privacy and surveillance concerns resonate across party lines. “New vehicles are computers on wheels, and my constituents in Wyoming should have the same 4th Amendment protections for their vehicles as they do for their phones and home computers,” Lummis said in the announcement.

    The post Bipartisan Bill Seeks to Stop Warrantless Car Spying by Police appeared first on The Intercept.


  • The Treasury Department has in recent months expanded its digital surveillance powers, contracts provided to The Intercept reveal, turning to the controversial firm Babel Street, whose critics say it helps federal investigators buy their way around the Fourth Amendment.

    Two contracts obtained via a Freedom of Information Act request and shared with The Intercept by Tech Inquiry, a research and advocacy group, show that over the past four months, the Treasury acquired two powerful new data feeds from Babel Street: one for its sanctions enforcement branch, and one for the Internal Revenue Service. Both feeds enable government use of sensitive data collected by private corporations not subject to due process restrictions. Critics were particularly alarmed that the Treasury acquired access to location and other data harvested from smartphone apps; users are often unaware of how widely apps share such information.

    The first contract, dated July 15 at a cost of $154,982, is with Treasury’s Office of Foreign Assets Control, a quasi-intelligence wing responsible for enforcing economic sanctions against foreign regimes like Iran, Cuba, and Russia. According to contract documents, OFAC investigators can now use a Babel Street tool called Locate X to track the movements of individuals without a search warrant. Locate X provides clients with geolocation data gleaned from mobile apps, which often relay a phone’s coordinates to untold third parties via in-app advertising or pre-packaged code embedded to provide social networking features or usage analytics, typically without users’ knowledge. This commercial location data exists largely in a regulatory vacuum: harvested by countless apps, then bought, sold, and swapped across an incredibly vast and ever-growing ecosystem of ad tech firms and data brokers around the world, it eventually lands in the possession of Babel Street, which sells search access to government clients like OFAC.
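    One common conduit for such data is the “real-time bidding” ad auction, in which each ad request broadcast from an app can carry the device’s advertising ID and GPS coordinates. Below is a minimal Python sketch of what such a payload can look like; the field names follow the public OpenRTB specification, but the app, identifier, and coordinates are invented, and the contract does not establish which specific supply chains feed Locate X.

        # Hypothetical OpenRTB-style bid request, illustrating how a single
        # in-app ad call can expose a device's advertising ID and precise
        # coordinates to every bidder and broker downstream. All values are
        # invented; field names follow the public OpenRTB specification.
        bid_request = {
            "id": "auction-0001",
            "app": {"bundle": "com.example.weather"},  # app requesting the ad
            "device": {
                "ifa": "ab12cd34-0000-0000-0000-ef56ab78cd90",  # "anonymous" ad ID
                "geo": {
                    "lat": 38.8977,   # precise fix from the handset's GPS
                    "lon": -77.0365,
                    "type": 1,        # 1 = GPS/location services in OpenRTB
                },
            },
        }
        # Any recipient can log (ifa, lat, lon, timestamp) tuples over time;
        # aggregated across many apps and brokers, they become the raw
        # material of location products like Locate X.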

    Critics of the software say it essentially allows the state to buy its way past the Fourth Amendment, which protects Americans from unreasonable searches. The contract notes that OFAC’s Office of Global Targeting will use Locate X for “analysis of cellphone ad-tech data … to research malign activity and identify malign actors, conduct network exploitation, examine corporate structures, and determine beneficial ownership,” a rare public admission by the government of its use of personal location data acquired with cash rather than a warrant. The contract does not indicate whether Locate X will be used against U.S. persons or foreigners, nor any restrictions on such use.

    The contract provided to Tech Inquiry is heavily redacted in important sections that appear to discuss how Locate X will actually be used. Prior reporting on how other federal entities use Locate X, including a 2020 report by Protocol, indicates it allows agents to instantly determine which people were at a particular location at a particular time — and even where they arrived from and where they’d traveled in the preceding months.

    “My office has pressed Babel Street for answers. They won’t even put an employee on the phone.”

    Babel Street has claimed its location data is “anonymized,” meaning the harvested coordinates are tied not to an individual’s name but to a random string of characters. But researchers have found time and time again that deanonymizing precise historical location data is trivial. Indeed, in 2020 a company source told Motherboard “we could absolutely deanonymize a person” and that employees would “play with it, to be honest.” Oddly, despite the fact that the contract states Locate X will aid OFAC in “implementing its sanctions programs” — which is to say, enforcing the law — Babel Street’s terms of service, included in the FOIA response, state: “Due to the varied nature of Third-Party Data and Babel Street’s inability to attest to the accuracy of Third-Party Data (including any results Customer may obtain), Third-Party Data may be unsuitable for use in legal or administrative proceedings.”
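    The researchers’ point is easy to illustrate. In the minimal Python sketch below, with entirely invented pings, a device’s most frequent overnight coordinate, which for most people is home, is enough to begin tying an “anonymous” advertising ID back to a person via property records or a reverse-geocoding lookup.

        from collections import Counter

        # Invented "anonymized" pings: (advertising_id, lat, lon, hour_of_day).
        pings = [
            ("a1b2", 38.8977, -77.0365, 2),   # overnight hours
            ("a1b2", 38.8977, -77.0365, 3),
            ("a1b2", 38.9072, -77.0369, 14),  # daytime, elsewhere
        ]

        def likely_home(pings, ad_id, night=(0, 6)):
            """Most frequent coordinate seen during overnight hours --
            almost always the device owner's residence."""
            overnight = Counter(
                (round(lat, 4), round(lon, 4))
                for pid, lat, lon, hour in pings
                if pid == ad_id and night[0] <= hour < night[1]
            )
            return overnight.most_common(1)[0][0]

        print(likely_home(pings, "a1b2"))  # -> (38.8977, -77.0365)
        # A property-records or reverse-geocoding lookup on that coordinate
        # converts the "anonymous" ID into a name and street address.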

    In an emailed statement, Democratic Oregon Sen. Ron Wyden, a vocal critic of ad-based geolocation, told The Intercept, “As part of my investigation into the sale of Americans’ private data, my office has pressed Babel Street for answers about where their data comes from, who they sell it to, and whether they respect mobile device opt-outs. Not only has Babel Street refused to answer questions over email, they won’t even put an employee on the phone.”

    Neither the Department of Treasury nor Babel Street responded to a request for comment on either contract.

    Corporate Surveillance vs. Constitutional Rights

    The government has long been able to pinpoint your location by tracking your phone through your mobile carrier, but the Supreme Court’s 2018 decision in Carpenter v. United States made clear that it needs a warrant to do so. The sprawling and unscrupulous digital advertising industry, constantly vacuuming up details of your movements in order to better target you with ads, provides a convenient loophole. Locate X has drawn intense scrutiny and criticism for allowing government agents to sidestep constitutional hurdles like those provided by the Carpenter ruling.

    “It is clear that multiple federal agencies have turned to purchasing Americans’ data to buy their way around Americans’ Fourth Amendment rights,” Wyden added. Wyden is the co-sponsor of the Fourth Amendment Is Not For Sale Act, proposed legislation that would force law enforcement and intelligence agencies to obtain a court order for this sort of app data, rather than simply buying it from any willing broker.

    “OFAC’s use of ad-tech location tracking for economic sanctions is a troubling extension of the previously known usage by CBP, the FBI and Secret Service, IRS, and the DoD,” Jack Poulson, Tech Inquiry’s founder, told The Intercept (for which he once wrote an opinion piece). He added, “Rather than backing off their invasive surveillance of smartphone location data after a prominent vendor was caught spying on a popular Muslim prayer app, US intelligence and law enforcement agencies appear to be finding more use cases,” referring to the Motherboard investigation that found the government buying location data from a Locate X competitor harvested via a popular Quran app.

    Through a second contract, finalized on September 30 and totaling $150,000, Virginia-based Babel Street will provide another Treasury agency, the IRS, with software that “captures information from public facing digital media records” in order to detect individual and small business-owning tax dodgers through their online posts, a capability the agency has sought before. Though the contract language doesn’t mention specific social media platforms by name, the automated “monitoring” of sites like Twitter and Facebook is Babel Street’s bread and butter, a capability similar to that of rival Dataminr; a 2017 Motherboard report on Babel Street noted that it offers clients “access to over 25 social media sites, including Facebook, Instagram, and to Twitter’s firehose. … Babel Street’s filtering options are extremely precise, and allow for the user to screen for dates, times, data type, language, and—interestingly enough—sentiment.”

    That the IRS would want to track down those trying to avoid paying their share is of course unsurprising, but the contract provides scant information about how greatly expanding the surveillance of what Americans say online — Babel Street’s tool will be able to handle “at least 25,000 concurrent users” — will achieve this end. In its initial contract solicitation, the IRS cited capabilities, presumably now provided by Babel Street, far beyond simply searching tweets and Instagram posts, requiring that the winner of the contract provide “available bio-metric data, such as photos, current address, or changes to marital status” for individuals targeted by the agency, “provide publicly available information of taxpayers’ past or present locations,” as well as “reports showing that a taxpayer participated in an online chat room, blog, or forum, and reports showing the chat room or blog conversation threads.”

    “Babel Street’s support for the IRS increasing its surveillance of small businesses and the self employed — after the IRS has already largely given up on auditing the ultrawealthy — is an example of the U.S. surveillance industry being used to help shift the tax burden to the working class,” Poulson said.

    The post The U.S. Treasury Is Buying Private App Data to Target and Investigate People appeared first on The Intercept.


  • For at least a decade, to ward off accusations that it helps terrorists spread propaganda, Facebook has barred users from speaking freely about people and groups it says promote violence.

    The restrictions appear to trace back to late 2012, when, in the face of growing alarm in Congress and the United Nations about online terrorist recruiting, Facebook added to its Community Standards a ban on “organizations with a record of terrorist or violent criminal activity.” This modest rule has since ballooned into what’s known as the Dangerous Individuals and Organizations policy, or DIO, a sweeping set of restrictions on what Facebook’s nearly 3 billion users can say about an enormous and ever-growing roster of entities deemed beyond the pale.

    In recent years, the policy has been used at a more rapid clip, including against the president of the United States, and has taken on almost totemic power at the social network, trotted out to reassure users whenever paroxysms of violence, from genocide in Myanmar to riots at the Capitol, are linked to Facebook. Most recently, following a series of Wall Street Journal articles showing that the company knew it contributed to myriad offline harms, a Facebook vice president cited the policy as evidence of the company’s diligence in an internal memo obtained by the New York Times.

    Facebook’s DIO policy has become an unaccountable system that disproportionately punishes certain communities.

    But as with other attempts to limit personal freedoms in the name of counterterrorism, Facebook’s DIO policy has become an unaccountable system that disproportionately punishes certain communities, critics say. It is built atop a blacklist of 4,000 people and groups, including politicians, writers, charities, hospitals, hundreds of music acts, and long-dead historical figures.

    A range of legal scholars and civil libertarians have called on the company to publish the list so that users know when they risk having a post deleted or their account suspended for praising someone on it. The company has repeatedly refused, claiming that doing so would endanger employees and permit banned entities to circumvent the restrictions. Facebook did not provide The Intercept with information about any specific threat to its staff. Moreover, the company’s own hand-picked Oversight Board has recommended on multiple occasions, most recently in August, that the full list be published as a matter of public interest.

    The Intercept has obtained a snapshot of the full DIO list and is today publishing a reproduction of the material in its entirety, with only minor changes and edits to improve clarity. It is also publishing a set of associated policy documents, created to help moderators decide which posts to delete and which users to punish.

    “Facebook puts users in a near-impossible position by telling them they can’t post about dangerous groups and individuals, but then refusing to publicly identify who it considers dangerous,” said Faiza Patel, co-director of the Brennan Center for Justice’s liberty and national security program, who reviewed the material.

    The list and associated rules appear to be an embodiment of American anxieties, political concerns, and foreign policy values since September 11, 2001, experts said, even though the DIO policy is meant to protect all Facebook users and applies to those residing outside the United States (the vast majority). Nearly everyone and everything on the list is considered a foe or threat by the United States or its allies: Over half of it consists of alleged foreign terrorists, free discussion of whom is subject to Facebook’s harshest censorship.

    The DIO policy and blacklist are also far looser, the experts said, about commentary on predominantly white anti-government militias than on groups and individuals listed as terrorists, who are predominantly Middle Eastern, South Asian, and Muslim, or on those said to belong to violent criminal groups, who are predominantly Black and Latino.

    The materials show that Facebook applies “an iron fist for some communities and more of a measured hand for others,” said Ángel Díaz, a lecturer at the UCLA School of Law who has researched and written about the impact of Facebook’s moderation policies on marginalized communities.

    Facebook’s policy director for counterterrorism and dangerous organizations, Brian Fishman, said in a written statement that the company keeps the list secret because “this is an adversarial space, so we try to be as transparent as possible, while also prioritizing security, limiting legal risks and preventing opportunities for groups to get around our rules.” He added: “We don’t want terrorists, hate groups or criminal organizations on our platform, which is why we ban them and remove content that praises, represents or supports them. A team of more than 350 specialists at Facebook is focused on stopping these organizations and assessing emerging threats. We currently ban thousands of organizations, including over 250 white supremacist groups at the highest tiers of our policies, and we regularly update our policies and organizations who qualify to be banned.”

    Though the experts who reviewed the material say Facebook’s policy is unduly obscured and punitive toward users, it nonetheless reflects a genuine dilemma facing the company. After the genocide in Myanmar, the company recognized that it had perhaps become the most powerful system ever assembled for the global algorithmic distribution of violent incitement. Doing nothing in the face of that reality would be seen as grossly negligent by a large share of the public, even as Facebook’s attempts to control the speech of billions of internet users around the world are seen as the stuff of autocracy. The DIO list represents an attempt to strike that balance by a company with an unprecedented concentration of power over global speech.

    Harsher Restrictions for Marginalized and Vulnerable Populations

    The list, the foundation of Facebook’s Dangerous Individuals and Organizations policy, is in many ways what the company has described in the past: a collection of groups and leaders who have threatened or engaged in bloodshed. The snapshot reviewed by The Intercept is separated into the categories Hate, Crime, Terrorism, Militarized Social Movements, and Violent Non-State Actors. These categories were organized into a system of three tiers under rules rolled out by Facebook in late June, with each tier corresponding to speech restrictions of varying severity.

    But while labels like “terrorist” and “criminal” are conceptually broad, they look more like racial and religious proxies once you see how they are applied to the people and groups on the list, experts said, raising the likelihood that Facebook is placing discriminatory limitations on speech.

    Regardless of tier, no one on the DIO list is allowed to maintain a presence on Facebook’s platforms, nor are users allowed to present themselves as members of any listed group. The tiers determine instead what other users are allowed to say about the banned entities. Tier 1 is the most strictly limited; users may not express anything deemed praise or support of groups and people in this tier, even for nonviolent activities (as determined by Facebook). Tier 1 includes alleged terror, hate, and criminal groups and their alleged members, with terror defined as “organizing or advocating for violence against civilians” and hate as “repeatedly dehumanizing or advocating for harm against” people with protected characteristics. Tier 1’s criminal category is almost entirely made up of American street gangs and Latin American drug cartels, predominantly Black and Latino. Facebook’s terrorist category, which makes up 70 percent of Tier 1, consists overwhelmingly of Middle Eastern and South Asian organizations and individuals, who are disproportionately represented throughout the DIO list, across all tiers, where close to 80 percent of the individuals listed are labeled terrorists.

    Chart: Soohee Cho/The Intercept

    Facebook takes most of the names in its terrorism category directly from the U.S. government: About 1,000 of the names on the list note a “designation source” of “SDGT,” for Specially Designated Global Terrorists, a sanctions list maintained by the Treasury Department and created by George W. Bush in the immediate aftermath of the September 11 attacks. In many cases, entries on Facebook’s list include passport and phone numbers found on the official SDGT list, suggesting they were copied over directly.

    Other sources include the Terrorism Research & Analysis Consortium, or TRAC, a private, subscription-only database of purported violent extremists, and SITE, a private terror-tracking operation with a long and controversial history. “An Arabic word can have four or five different meanings in translation,” Michael Scheuer, the former head of the CIA’s Osama bin Laden unit, told the New Yorker in 2006, noting that SITE typically chooses “the most warlike translation.” Facebook also appears to have worked with its tech giant competitors to compile the DIO list: One entry carried a note that it had been “escalated by” a high-ranking Google staffer who previously worked in the executive branch on terrorism-related issues. (Facebook said it does not collaborate with other tech companies on its lists.)

    There are close to 500 hate groups in Tier 1, including the 250 white supremacist groups cited by Fishman, but Faiza Patel, of the Brennan Center, noted that hundreds of predominantly white, right-wing militia groups that resemble the hate groups are “treated with a light touch” and placed in Tier 3.

    Tier 2, “Violent Non-State Actors,” consists mainly of groups such as armed rebels that direct violence at governments rather than civilians, and includes many factions fighting in the Syrian civil war. Users may praise groups in this tier for their nonviolent actions but may not express any “substantive support” for the groups themselves.

    Tier 3 is for groups that are not violent but repeatedly engage in hate speech, appear poised to become violent soon, or repeatedly violate the DIO policies. Facebook users are free to discuss Tier 3 listees as they please. Tier 3 includes “Militarized Social Movements,” which, judging from its DIO entries, consists mostly of American anti-government militias, which are virtually entirely white.

    “The lists seem to create two disparate systems, with the heaviest penalties applied to heavily Muslim regions and communities.”

    “The lists seem to create two disparate systems, with the heaviest penalties applied to heavily Muslim regions and communities,” Patel wrote in an email to The Intercept. The differences in demographic composition between Tiers 1 and 3 “suggests that Facebook — like the U.S. government — considers Muslims to be the most dangerous.” By contrast, Patel pointed out, “hate groups designated as anti-Muslim hate groups by the Southern Poverty Law Center are overwhelmingly absent from Facebook’s lists.”

    Anti-government militias, among those receiving Facebook’s more measured interventions, “present the most lethal [domestic violent extremist] threat” to the U.S., intelligence officials concluded earlier this year, a view shared by many experts outside government. A crucial difference between alleged foreign terror groups and, say, the Oath Keepers is that domestic militias enjoy considerable political capital and support on the American right. The Militarized Social Movement entries “do seem to be created in response to more powerful organizations and ethnic groups breaking the rules pretty regularly,” said Ángel Díaz, of the UCLA School of Law. “[Facebook] felt there needed to be a response, but they didn’t want the response to be as broad as it was for the terrorism portion, so they created a subcategory to limit the impact on discourse from politically powerful groups,” he added. For example, the extreme-right movement known as “boogaloo,” which advocates for a second Civil War, is considered a Militarized Social Movement, subjecting it to the relatively lenient Tier 3 rules. Facebook has classified as Tier 1 only a subset of boogaloo, which it made clear was “distinct from the broader and loosely affiliated boogaloo movement.”

    Facebook categorically denied that it gives extreme-right groups in the U.S. special treatment because of their association with mainstream conservative politics. A company spokesperson said the classifications are based on the groups’ behavior: “Where American groups satisfy our definition of a terrorist group, they are designated as terrorist organizations (e.g., The Base, Atomwaffen Division, National Socialist Order). Where they satisfy our definition of hate groups, they are designated as hate organizations (for example, Proud Boys, Rise Above Movement, Patriot Front).”

    Facebook framed its treatment of militias as aggressive regulation rather than laxity, saying that its list of 900 such groups “is among the most robust” in the world: “The Militarized Social Movement category was developed in 2020 explicitly to expand the range of organizations subject to our DIO policies precisely because of the changing threat environment. Our policy regarding militias is the strongest in the industry.”

    On the question of Facebook’s classifications appearing to sort along racial and religious lines, the company cited the presence of the white supremacist and hate groups in Tier 1 and said that “focusing solely on” terrorist groups in Tier 1 “is misleading.” It added: “It’s worth noting that our approach to white supremacist hate groups and terrorist organizations is far more aggressive than any government’s. All told, the United Nations, European Union, United States, United Kingdom, Canada, Australia, and France only designate thirteen distinct white supremacist organizations. Our definition of terrorism is public, detailed and was developed with significant input from outside experts and academics. Unlike some other definitions of terrorism, our definition is agnostic to religion, region, political outlook, or ideology. We have designated many organizations based outside the Middle Eastern and South Asian markets as terrorism, including orgs based in North America and Western Europe (including the National Socialist Order, the Feurerkrieg Division, the Irish Republican Army, and the National Action Group).”

    On Facebook’s list, however, terrorist groups based in North America or Western Europe number only a few dozen out of more than a thousand.

    Though the list includes various Islamic State commanders and Al Qaeda militants whose danger to others is uncontroversial, it would be hard to argue that some entries pose a threat to anyone. Because Facebook mirrors federal terrorism sanctions, which are designed to punish international adversaries rather than to assess “dangerousness,” it is Facebook policy that organizations like Iran’s Tractor Manufacturing Company and the Palestinian Relief and Development Fund, a humanitarian aid organization based in the United Kingdom, are both considered too dangerous for free discussion on Facebook and placed in Tier 1 alongside terrorist organizations like Al-Shabaab.

    “When a large global platform chooses to align its policies with the United States, a country that has long exercised hegemony over much of the world (and, particularly over the past twenty years, over many predominantly Muslim countries), it is simply recreating those same power differentials and stripping agency from already vulnerable groups and individuals,” said Jillian York, the Electronic Frontier Foundation’s director of international freedom of expression, who also reviewed the Facebook material.

    Facebook’s list embodies a broad definition of “dangerous” from start to finish. It includes Mudassir Rashid Parray, a child soldier killed in Kashmir at age 14; more than 200 music acts; television channels; a video game studio; airlines; the Iranian medical university working on a Covid-19 vaccine; and long-dead historical figures such as Joseph Goebbels and Benito Mussolini. Including such figures is “fraught with problems,” a group of social media researchers at the University of Utah recently told Facebook’s Oversight Board.

    Troubling Enforcement Guidelines

    Facebook’s internal documents walk moderators through the process of censoring speech about the people and groups on the blacklist. The materials, portions of which were previously published by the Guardian and Vice, attempt to define what it means for a user to “praise,” “support,” or “represent” a DIO listee, and detail how to identify prohibited comments.

    Though Facebook publishes a public version of these guidelines, it provides only limited examples of what those terms mean, rather than definitions. Internally, it offers not just the definitions but also far more detailed examples, including a dizzying list of hypotheticals and borderline cases meant to help determine what to do with flagged content.

    The company expects its global content moderation workforce, an army of outsourced hourly workers frequently traumatized by the graphic nature of their job, to use these definitions and examples to figure out whether a given post constitutes prohibited “praise” or crosses the threshold of “support,” among other criteria, squeezing the speech of billions of people from hundreds of countries and countless cultures into a tidy framework decreed in Silicon Valley. Though these workers operate alongside automated software systems, determining what counts as “praise” often comes down to personal judgment calls about an author’s intentions. “Once again, this leaves the real hard work of trying to make Facebook safe to outsourced, underpaid, and overstretched content moderators, forced to review posts and do their best according to their particular geography, language, and context,” said Martha Dark, director of Foxglove, a legal aid group that works with moderators.

    In the internal documents, Facebook essentially says that users may speak of Tier 1 entities so long as the speech is neutral or critical, since any comment deemed positive could be construed as “praise.” Users are barred from doing anything that “seeks to make others think more positively about” or “legitimize” a Tier 1 dangerous group or person, or from “aligning” themselves with its cause, all of which count as “praise.” The documents say that “statements presented in the form of a fact about the entity’s motives” are acceptable, but anything that “glorifies the entity through the use of adjectives, phrases, imagery, etc.” is not. Users may say that a person Facebook considers dangerous “is not a threat, relevant, or worthy of attention,” but they may not say they “stand with” a listee they believe was included in error; that counts as aligning with the entry. It also falls to Facebook’s moderators to decide for themselves what constitutes dangerous “glorification” versus permitted “neutral speech,” or what counts as “academic debate” and “informative and educational speech,” on behalf of billions of people.

    Determining which content meets Facebook’s definitions of prohibited speech is a “struggle,” according to a Facebook moderator who works outside the United States and spoke to The Intercept on the condition of anonymity. This person said that analysts “generally struggle to recognize political speech and condemnation [of listed entities], which are permitted contexts under DIO.” They also noted the policy’s tendency to miss its mark: “Fictional representations of [dangerous individuals] are not allowed unless shared in a condemning or informative context, meaning that sharing a photo of Taika Waititi from [the film] Jojo Rabbit will get you banned, as will a meme of the actor playing Pablo Escobar (the one in the empty pool).”

    These challenges grow even more complex when a moderator must try to anticipate how fellow moderators would assess a post, since their decisions are benchmarked against one another’s. “An analyst has to try to predict what decision a quality reviewer, or the majority of moderators, would make, which often isn’t easy,” the moderator said.

    The rules pose “a serious risk to political debate and free expression,” Patel said, particularly in the Muslim world, where DIO-listed groups exist not simply as military foes but as part of the sociopolitical fabric. What looks like glorification from a desk in the U.S. “could, in a given context, be seen as a simple statement of fact,” agreed York, of the EFF. “People living in places where so-called terrorist groups play a role in governance need to be able to discuss those groups with nuance, and Facebook’s policy doesn’t allow that.”

    The moderator working outside the U.S. agreed that the list reflects an Americanized conception of danger: “The designations appear to be based on American interests,” which “does not represent the political reality of those countries” elsewhere in the world, the person said.

    As Patel put it, “A commentator on television could praise the Taliban’s promise of an inclusive government in Afghanistan, but couldn’t do so on Facebook.”

    Particularly confusing and censorious is Facebook’s definition of a “Group Supporting Violent Acts Amid Protests,” a subcategory of Militarized Social Movements barred from using the company’s platforms. Facebook describes such a group as “a non-state actor” that engages in “representing [or] depicting … acts of street violence against civilians or law enforcement,” as well as “arson, looting, or other destruction of private or public property.” As written, the policy would seem to give Facebook license to apply the label to virtually any news organization covering (that is, depicting) a street protest that results in property damage, or to punish any participant who uploads footage of such acts committed by others. Given the plaudits Facebook received a decade ago amid the belief that it had helped fuel the “Arab Spring” uprisings in North Africa and the Middle East, it is striking that, say, an Egyptian organization documenting violence at the 2011 Tahrir Square protests could qualify as a dangerous Militarized Social Movement under the 2021 rules.

    Díaz, of UCLA, told The Intercept that Facebook should disclose far more about how it applies these protest-related rules. Will the company immediately shut down pages organizing protests the moment any arson or property damage occurs? “The standards they’re articulating here suggest that [the DIO list] could swallow up a lot of active protesters,” Díaz said.

    Protest coverage may be related to the DIO listing of two anticapitalist media organizations, Crimethinc and It’s Going Down. Facebook banned both publications in 2020, citing the DIO policy, and both indeed appear on the list, designated as “Militarized Social Movements” and “Armed Militias.” A representative of It’s Going Down, who requested anonymity for security reasons, told The Intercept that “news organizations across the political spectrum report on street clashes, strikes, riots, and property destruction, but here Facebook seems to imply that if they don’t like the analysis … or opinion someone writes about why millions of people took to the streets last year during the pandemic, in the largest numbers in U.S. history, then they’ll simply remove you from the conversation.” They specifically denied that the group is an armed militia, or even an activist or social movement, explaining that it is in fact a media platform “featuring news, opinion, analysis, and podcasts from an anarchist perspective.” A representative of Crimethinc likewise denied that the group is armed or “‘militarized’ in any sense. It is a news outlet and book publisher, like Verso or Jacobin.” The representative requested anonymity, citing right-wing threats against the organization.

    Facebook did not comment on why these media organizations were internally classified as “armed militias”; asked about them, it instead reiterated its ban on such groups and on groups supporting violent acts amid protests.

    Facebook’s internal moderation documents also leave some intriguing loopholes. After the platform played a role in facilitating the genocide in Myanmar, company executive Alex Warofka wrote that “we agree that we can and should do more” to “prevent our platform from being used to foment division and incite offline violence.” But Facebook’s prohibition on incitement is relative, expressly permitting, in the documents obtained by The Intercept, calls for violence against “locations no smaller than a village.” For example, the statement “We should invade Libya” is cited as acceptable under the rules.

    A Facebook spokesperson said that “the purpose of this provision is to allow debate about military strategy and war, which is a reality of the world we live in,” and acknowledged that company policy permits calls for violence against a country, a city, or a terrorist group, citing an example of a permissible post: “We should kill Osama bin Laden.”

    Facebook headquarters in Menlo Park, California, on May 10, 2021.

    Photo: Nina Riggio/Bloomberg via Getty Images

    Harsh Suppression of Speech About the Middle East

    Enforcement of the DIO rules produces some surprising outcomes for a company that claims “free expression” as a core principle. In 2019, citing the DIO policy, Facebook blocked an online university symposium featuring Leila Khaled, who took part in two airplane hijackings in the 1960s in which no passengers were harmed. Khaled, now 77, still appears in the version of Facebook’s terrorism list obtained by The Intercept. In February, Facebook’s internal Oversight Board moved to reverse a decision to delete a post questioning the imprisonment of Abdullah Ocalan, the leftist Kurdish revolutionary, a DIO listee who was abducted by Turkish intelligence forces with U.S. assistance in 1999.

    In July, the journalist Rania Khalek posted a photo on Instagram of a billboard near Baghdad International Airport depicting the Iranian general Qasem Soleimani and the Iraqi military commander Abu Mahdi al-Muhandis, both assassinated by the United States and both on the DIO list. Khalek’s Instagram upload was swiftly deleted for violating what a notification called the policy on “violence or dangerous organizations.” Khalek told The Intercept by email that “my intention when I posted the photo was to show my surroundings” and that “the fact that [the billboard is] displayed so prominently at the airport where they were assassinated shows how they are perceived by even Iraqi officialdom.”

    More recently, Facebook’s DIO policy collided with the Taliban’s overthrow of the U.S.-backed government in Afghanistan. After the Taliban took control of the country, Facebook announced that the group was banned from maintaining a presence on its apps. Facebook now finds itself in the position of not only censoring the political leadership of an entire country but also placing serious restrictions on the population’s ability to discuss it, or even simply depict it.

    Other incidents suggest the DIO list may be too unwieldy an instrument for Facebook’s moderators to use effectively. In May, Facebook deleted a variety of posts by Palestinians attempting to document Israeli state violence at Al-Aqsa Mosque, the third-holiest site in Islam, because company personnel confused it with another organization on the DIO list that has “Al-Aqsa” in its name, according to an internal memo obtained by BuzzFeed News. Last month, Facebook censored an Egyptian user who posted an Al Jazeera article about the Al-Qassam Brigades, a group active in neighboring Palestine, along with a caption that read simply “Ooh” in Arabic. Al-Qassam does not appear on the DIO list, and Facebook’s Oversight Board wrote that “Facebook was not able to explain why two human reviewers judged the content to violate the [company’s] policy.”

    Though the past two decades have accustomed many around the world to secret watchlists and no-fly bans, Facebook’s privatized version of them suggests, to the EFF’s Jillian York, that “we’ve reached a point where Facebook isn’t just obeying or replicating U.S. policy, but going well beyond it.”

    Beyond that, York said, “we should never forget that nobody elected Mark Zuckerberg, a man who has never held a job other than CEO of Facebook.”

    Translation: Maíra Santos

    The post Esta é a lista secreta de grupos e pessoas ‘perigosas’ do Facebook appeared first on The Intercept.


  • To ward off accusations that it helps terrorists spread propaganda, Facebook has for many years barred users from speaking freely about people and groups it says promote violence.

    The restrictions appear to trace back to 2012, when in the face of growing alarm in Congress and the United Nations about online terrorist recruiting, Facebook added to its Community Standards a ban on “organizations with a record of terrorist or violent criminal activity.” This modest rule has since ballooned into what’s known as the Dangerous Individuals and Organizations policy, a sweeping set of restrictions on what Facebook’s nearly 3 billion users can say about an enormous and ever-growing roster of entities deemed beyond the pale.

    In recent years, the policy has been used at a more rapid clip, including against the president of the United States, and taken on almost totemic power at the social network, trotted out to reassure the public whenever paroxysms of violence, from genocide in Myanmar to riots on Capitol Hill, are linked to Facebook. Most recently, following a damning series of Wall Street Journal articles showing the company knew it facilitated myriad offline harms, a Facebook vice president cited the policy as evidence of the company’s diligence in an internal memo obtained by the New York Times.

    Facebook’s DIO policy has become an unaccountable system that disproportionately punishes certain communities.

    But as with other attempts to limit personal freedoms in the name of counterterrorism, Facebook’s DIO policy has become an unaccountable system that disproportionately punishes certain communities, critics say. It is built atop a blacklist of over 4,000 people and groups, including politicians, writers, charities, hospitals, hundreds of music acts, and long-dead historical figures.

    A range of legal scholars and civil libertarians have called on the company to publish the list so that users know when they are in danger of having a post deleted or their account suspended for praising someone on it. The company has repeatedly refused to do so, claiming it would endanger employees and permit banned entities to circumvent the policy. Facebook did not provide The Intercept with information about any specific threat to its staff.

    Despite Facebook’s claims that disclosing the list would endanger its employees, the company’s hand-picked Oversight Board has formally recommended publishing all of it on multiple occasions, as recently as August, because the information is in the public interest.

    The Intercept has reviewed a snapshot of the full DIO list and is today publishing a reproduction of the material in its entirety, with only minor redactions and edits to improve clarity. It is also publishing an associated policy document, created to help moderators decide what posts to delete and what users to punish.

    “Facebook puts users in a near-impossible position by telling them they can’t post about dangerous groups and individuals, but then refusing to publicly identify who it considers dangerous,” said Faiza Patel, co-director of the Brennan Center for Justice’s liberty and national security program, who reviewed the material.

    The list and associated rules appear to be a clear embodiment of American anxieties, political concerns, and foreign policy values since 9/11, experts said, even though the DIO policy is meant to protect all Facebook users and applies to those who reside outside of the United States (the vast majority). Nearly everyone and everything on the list is considered a foe or threat by America or its allies: Over half of it consists of alleged foreign terrorists, free discussion of which is subject to Facebook’s harshest censorship.

    The DIO policy and blacklist also place far looser prohibitions on commentary about predominately white anti-government militias than on groups and individuals listed as terrorists, who are predominately Middle Eastern, South Asian, and Muslim, or those said to be part of violent criminal enterprises, who are predominantly Black and Latino, the experts said.

    The materials show Facebook offers “an iron fist for some communities and more of a measured hand for others,” said Ángel Díaz, a lecturer at the UCLA School of Law who has researched and written on the impact of Facebook’s moderation policies on marginalized communities.

    Facebook’s policy director for counterterrorism and dangerous organizations, Brian Fishman, said in a written statement that the company keeps the list secret because “[t]his is an adversarial space, so we try to be as transparent as possible, while also prioritizing security, limiting legal risks and preventing opportunities for groups to get around our rules.” He added, “We don’t want terrorists, hate groups or criminal organizations on our platform, which is why we ban them and remove content that praises, represents or supports them. A team of more than 350 specialists at Facebook is focused on stopping these organizations and assessing emerging threats. We currently ban thousands of organizations, including over 250 white supremacist groups at the highest tiers of our policies, and we regularly update our policies and organizations who qualify to be banned.”

    Though the experts who reviewed the material say Facebook’s policy is unduly obscured from and punitive toward users, it is nonetheless a reflection of a genuine dilemma facing the company. After the Myanmar genocide, the company recognized it had become perhaps the most powerful system ever assembled for the global algorithmic distribution of violent incitement. To do nothing in the face of this reality would be viewed as grossly negligent by vast portions of the public — even as Facebook’s attempts to control the speech of billions of internet users around the world is widely seen as the stuff of autocracy. The DIO list represents an attempt by a company with a historically unprecedented concentration of power over global speech to thread this needle.

    Harsher Restrictions for Marginalized and Vulnerable Populations

    The list, the foundation of Facebook’s Dangerous Individuals and Organizations policy, is in many ways what the company has described in the past: a collection of groups and leaders who have threatened or engaged in bloodshed. The snapshot reviewed by The Intercept is separated into the categories Hate, Crime, Terrorism, Militarized Social Movements, and Violent Non-State Actors. These categories were organized into a system of three tiers under rules rolled out by Facebook in late June, with each tier corresponding to speech restrictions of varying severity.

    But while labels like “terrorist” and “criminal” are conceptually broad, they look more like narrow racial and religious proxies once you see how they are applied to people and groups in the list, experts said, raising the likelihood that Facebook is placing discriminatory limitations on speech.

    The tiers determine what other Facebook users are allowed to say about the banned entities.

    Regardless of tier, no one on the DIO list is allowed to maintain a presence on Facebook platforms, nor are users allowed to represent themselves as members of any listed groups. The tiers determine instead what other Facebook users are allowed to say about the banned entities. Tier 1 is the most strictly limited; users may not express anything deemed to be praise or support about groups and people in this tier, even for nonviolent activities (as determined by Facebook). Tier 1 includes alleged terror, hate, and criminal groups and alleged members, with terror defined as “organizing or advocating for violence against civilians” and hate as “repeatedly dehumanizing or advocating for harm against” people with protected characteristics. Tier 1’s criminal category is almost entirely American street gangs and Latin American drug cartels, predominantly Black and Latino. Facebook’s terrorist category, which is 70 percent of Tier 1, overwhelmingly consists of Middle Eastern and South Asian organizations and individuals — who are disproportionately represented throughout the DIO list, across all tiers, where close to 80 percent of individuals listed are labeled terrorists.
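    Schematically, the tier system amounts to a lookup from an entity’s classification to the speech rules imposed on everyone else. The Python sketch below is an editorial reconstruction of the rules as described in this article, not Facebook’s actual code or internal policy text.

        # Schematic reconstruction of the three-tier DIO speech rules
        # described above -- an editorial sketch, not Facebook's actual
        # implementation or policy language.
        TIER_RULES = {
            1: {  # alleged terror, hate, and criminal groups
                "presence_on_platform": False,
                "users_may_praise": False,       # even for nonviolent activity
                "users_may_support": False,
            },
            2: {  # "Violent Non-State Actors," e.g., armed rebel factions
                "presence_on_platform": False,
                "users_may_praise": "nonviolent actions only",
                "users_may_support": False,      # no "substantive support"
            },
            3: {  # Militarized Social Movements and similar
                "presence_on_platform": False,   # the ban itself still applies
                "users_may_praise": True,        # discussion is unrestricted
                "users_may_support": True,
            },
        }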

    Chart: Soohee Cho/The Intercept

    Facebook takes most of the names in the terrorism category directly from the U.S. government: Nearly 1,000 of the entries in the dangerous terrorism list note a “designation source” of “SDGT,” or Specially Designated Global Terrorists, a sanctions list maintained by the Treasury Department and created by George W. Bush in the immediate aftermath of the September 11 attacks. In many instances, names on Facebook’s list include passport and phone numbers found on the official SDGT list, suggesting entries are directly copied over.
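    That inference is straightforward to check in principle: identifiers like passport numbers are effectively unique, so overlap between the two lists is strong evidence of direct copying. A minimal Python sketch, with invented records and field names:

        # Invented records illustrating how shared passport numbers reveal
        # that one list was copied from another; names, numbers, and field
        # names are all fabricated for illustration.
        sdgt_entries = [{"name": "DOE, John", "passport": "A0000001"}]
        facebook_entries = [{"name": "John Doe", "passport": "A0000001",
                             "designation_source": "SDGT"}]

        sdgt_passports = {e["passport"] for e in sdgt_entries if e.get("passport")}
        copied = [e for e in facebook_entries if e.get("passport") in sdgt_passports]
        print(len(copied))  # -> 1: identifier overlap implies direct copying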

    Other sources cited include the Terrorism Research & Analysis Consortium, a private subscription-based database of purported violent extremists, and SITE, a private terror-tracking operation with a long, controversial history. “An Arabic word can have four or five different meanings in translation,” Michael Scheuer, the former head of the CIA’s Osama bin Laden unit, told the New Yorker in 2006, noting that he thinks SITE typically chooses the “most warlike translation.” It appears Facebook has worked with its tech giant competitors to compile the DIO list; one entry carried a note that it had been “escalated by” a high-ranking staffer at Google who previously worked in the executive branch on issues related to terrorism. (Facebook said it does not collaborate with other tech companies on its lists.)

    There are close to 500 hate groups in Tier 1, including the more than 250 white supremacist organizations Fishman referenced, but Faiza Patel, of the Brennan Center, noted that hundreds of predominantly white right-wing militia groups that seem similar to the hate groups are “treated with a light touch” and placed in Tier 3.

    Tier 2, “Violent Non-State Actors,” consists mostly of groups like armed rebels who engage in violence targeting governments rather than civilians, and includes many factions fighting in the Syrian civil war. Users can praise groups in this tier for their nonviolent actions but may not express any “substantive support” for the groups themselves.

    Tier 3 is for groups that are not violent but repeatedly engage in hate speech, seem poised to become violent soon, or repeatedly violate the DIO policies themselves. Facebook users are free to discuss Tier 3 listees as they please. Tier 3 includes Militarized Social Movements, which, judging from its DIO entries, is mostly right-wing American anti-government militias, which are virtually entirely white.

    “The lists seem to create two disparate systems, with the heaviest penalties applied to heavily Muslim regions and communities.”

    “The lists seem to create two disparate systems, with the heaviest penalties applied to heavily Muslim regions and communities,” Patel wrote in an email to The Intercept. The differences in demographic composition between Tiers 1 and 3 “suggests that Facebook — like the U.S. government — considers Muslims to be the most dangerous.” By contrast, Patel pointed out, “Hate groups designated as Anti-Muslim hate groups by the Southern Poverty Law Center are overwhelmingly absent from Facebook’s lists.”

    Anti-government militias, among those receiving more measured interventions from Facebook, “present the most lethal [domestic violent extremist] threat” to the U.S., intelligence officials concluded earlier this year, a view shared by many nongovernmental researchers. A crucial difference between alleged foreign terror groups and, say, the Oath Keepers is that domestic militia groups have considerable political capital and support on the American right. The Militarized Social Movement entries “do seem to be created in response to more powerful organizations and ethnic groups breaking the rules pretty regularly,” said Ángel Díaz, of UCLA School of Law, “and [Facebook] feeling that there needs to be a response, but they didn’t want the response to be as broad as it was for the terrorism portion, so they created a subcategory to limit the impact on discourse from politically powerful groups.” For example, the extreme-right movement known as “boogaloo,” which advocates for a second Civil War, is considered a Militarized Social Movement, making it subject to the relatively lenient Tier 3 rules. Facebook has classified as Tier 1 only a subset of boogaloo, which it made clear was “distinct from the broader and loosely-affiliated boogaloo movement.”

    Do you have additional information about how moderation works inside Facebook or other platforms? Contact Sam Biddle over Signal at +1 978 261 7389.

    A Facebook spokesperson categorically denied that Facebook gives extremist right-wing groups in the U.S. special treatment due to their association with mainstream conservative politics. They added that the company tiers groups based on their behavior, stating, “Where American groups satisfy our definition of a terrorist group, they are designated as terrorist organizations (E.g. The Base, Atomwaffen Division, National Socialist Order). Where they satisfy our definition of hate groups, they are designated as hate organizations (For example, Proud Boys, Rise Above Movement, Patriot Front).”

    The spokesperson framed the company’s treatment of militias as one of aggressive regulation rather than looseness, saying Facebook’s list of 900 such groups “is among the most robust” in the world: “The Militarized Social Movement category was developed in 2020 explicitly to expand the range of organizations subject to our DOI policies precisely because of the changing threat environment. Our policy regarding militias is the strongest in the industry.”

    On the issue of how Facebook’s tiers often seem to sort along racial and religious lines, the spokesperson cited the presence of the white supremacists and hate groups in Tier 1 and said “focusing solely on” terrorist groups in Tier 1 “is misleading.” They added: “It’s worth noting that our approach to white supremacist hate groups and terrorist organization is far more aggressive than any government’s. All told, the United Nations, European Union, United States, United Kingdom, Canada, Australia, and France only designate thirteen distinct white supremacist organizations. Our definition of terrorism is public, detailed and was developed with significant input from outside experts and academics. Unlike some other definitions of terrorism, our definition is agnostic to religion, region, political outlook, or ideology. We have designated many organizations based outside the Middle Eastern and South Asian markets as terrorism, including orgs based in North America and Western Europe (including the National Socialist Order, the Feurerkrieg Division, the Irish Republican Army, and the National Action Group).”

    On Facebook’s list, however, the number of listed terrorist groups based in North America or Western Europe amounts to only a few dozen out of over a thousand.

    Though the list includes a litany of ISIS commanders and Al Qaeda militants whose danger to others is uncontroversial, it would be difficult to argue that some entries constitute much of a threat to anyone at all. Due to the company’s mimicry of federal terror sanctions, which are meant to punish international adversaries rather than determine “dangerousness,” it is Facebook policy that the likes of the Iran Tractor Manufacturing Company and the Palestinian Relief and Development Fund, a U.K.-based aid organization, are both deemed too much of a real-world danger for free discussion on Facebook and are filed among Tier 1 terrorist organizations like al-Shabab.

    “When a major, global platform chooses to align its policies with the United States — a country that has long exercised hegemony over much of the world (and particularly, over the past twenty years, over many predominantly Muslim countries), it is simply recreating those same power differentials and taking away the agency of already-vulnerable groups and individuals,” said Jillian York, director for international freedom of expression at the Electronic Frontier Foundation, who also reviewed the reproduced Facebook documents.

    Facebook’s list represents an expansive definition of “dangerous” throughout. It includes the deceased 14-year-old Kashmiri child soldier Mudassir Rashid Parray, over 200 musical acts, television stations, a video game studio, airlines, the medical university working on Iran’s homegrown Covid-19 vaccine, and many long-deceased historical figures like Joseph Goebbels and Benito Mussolini. Including such figures is “fraught with problems,” a group of University of Utah social media researchers recently told Facebook’s Oversight Board.

    Troubling Guidelines for Enforcement

    Internal Facebook materials walk moderators through the process of censoring speech about the blacklisted people and groups. The materials, portions of which were previously reported by The Guardian and Vice, attempt to define what it means for a user to “praise,” “support,” or “represent” a DIO listee and detail how to identify prohibited comments.

    Although Facebook provides a public set of such guidelines, it publishes only limited examples of what these terms mean, rather than definitions. Internally, it offers not only the definitions, but also much more detailed examples, including a dizzying list of hypotheticals and edge cases to help determine what to do with a flagged piece of content.

    “It leaves the real hard work of trying to make Facebook safe to outsourced, underpaid and overworked content moderators who are forced to pick up the pieces and do their best.”

    Facebook’s global content moderation workforce, an outsourced army of hourly contractors frequently traumatized by the graphic nature of their work, is expected to use these definitions and examples to figure out if a given post constitutes forbidden “praise” or meets the threshold of “support,” among other criteria, shoehorning the speech of billions of people from hundreds of countries and countless cultures into a tidy framework decreed from Silicon Valley. Though these workers operate in tandem with automated software systems, determining what’s “praise” and what isn’t frequently comes down to personal judgment calls assessing posters’ intent. “Once again, it leaves the real hard work of trying to make Facebook safe to outsourced, underpaid and overworked content moderators who are forced to pick up the pieces and do their best to make it work in their specific geographic location, language and context,” said Martha Dark, the director of Foxglove, a legal aid group that works with moderators.

    In the internal materials, Facebook essentially says that users are allowed to speak of Tier 1 entities so long as this speech is neutral or critical, as any commentary considered positive could be construed as “praise.” Facebook users are barred from doing anything that “seeks to make others think more positively” or “legitimize” a Tier 1 dangerous person or group or to “align oneself” with their cause — all forms of speech considered “praise.” The materials say, “Statements presented in the form of a fact about the entity’s motives” are acceptable, but anything that “glorifies the entity through the use of approving adjectives, phrases, imagery, etc” is not. Users are allowed to say that a person Facebook considers dangerous “is not a threat, relevant, or worthy of attention,” but they may not say they “stand behind” a person on the list they believe was wrongly included — that’s considered aligning themselves with the listee. Facebook’s moderators are similarly left to decide for themselves what constitutes dangerous “glorification” versus permitted “neutral speech,” or what counts as “academic debate” and “informative, educational discourse” for billions of people.
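    To make concrete the kind of call a moderator faces, here is a deliberately naive Python sketch of the Tier 1 rules as the materials describe them. The keyword lists and function are invented for illustration; the actual policy relies on humans judging intent, not on keyword matching.

    # A naive sketch of the Tier 1 "praise" rules described above.
    # Keyword lists are hypothetical stand-ins for a human judgment call.
    APPROVING_LANGUAGE = {"great", "heroic", "admirable"}  # "approving adjectives"
    ALIGNMENT_PHRASES = {"i stand behind", "i support their cause"}

    def tier1_action(comment: str) -> str:
        text = comment.lower()
        # "Aligning oneself" with a listee's cause counts as praise: delete.
        if any(phrase in text for phrase in ALIGNMENT_PHRASES):
            return "delete"
        # Glorification "through the use of approving adjectives": delete.
        if any(word in text.split() for word in APPROVING_LANGUAGE):
            return "delete"
        # Neutral or critical statements, including that a listee is
        # "not a threat, relevant, or worthy of attention," are permitted.
        return "keep"

    print(tier1_action("They are heroic"))                   # delete
    print(tier1_action("They are not worthy of attention"))  # keep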

    Determining what content meets Facebook’s definitions of banned speech under the policy is a “struggle,” according to a Facebook moderator working outside of the U.S. who responded to questions from The Intercept on the condition of anonymity. This person said analysts “typically struggle to recognize political speech and condemnation, which are permissible context for DOI.” They also noted the policy’s tendency to misfire: “[T]he fictional representations of [dangerous individuals] are not allowed unless shared in a condemning or informational context, which means that sharing a Taika Waititi photo from [the film] Jojo Rabbit will get you banned, as well as a meme with the actor playing Pablo Escobar (the one in the empty swimming pool).”

    These challenges are compounded because a moderator must try to gauge how their fellow moderators would assess the post, since their decisions are compared. “An analyst must try to predict what decision would a quality reviewer or a majority of moderators take, which is often not that easy,” the moderator said.

    The rules are “a serious risk to political debate and free expression,” Patel said, particularly in the Muslim world, where DIO-listed groups exist not simply as military foes but as part of the sociopolitical fabric. What looks like glorification from a desk in the U.S. “in a certain context, could be seen as simple statements of fact,” EFF’s York agreed. “People living in locales where so-called terrorist groups play a role in governance need to be able to discuss those groups with nuance, and Facebook’s policy doesn’t allow for that.”

    As Patel put it, “A commentator on television could praise the Taliban’s promise of an inclusive government in Afghanistan, but not on Facebook.”

    The moderator working outside of the U.S. agreed that the list reflects an Americanized conception of danger: “The designations seem to be based on American interests,” which “does not represent the political reality in those countries” elsewhere in the world, the person said.

    Particularly confusing and censorious is Facebook’s definition of a “Group Supporting Violent Acts Amid Protests,” a subcategory of Militarized Social Movements barred from using the company’s platforms. Facebook describes such a group as “a non-state actor” that engages in “representing [or] depicting … acts of street violence against civilians or law enforcement,” as well as “arson, looting, or other destruction of private or public property.” As written, this policy would appear to give Facebook license to apply this label to virtually any news organization covering — that is to say, depicting — a street protest that results in property damage, or to punish any participant uploading pictures of these acts by others. Given the praise piled onto Facebook a decade ago for the belief it had helped drive the Arab Spring uprisings across North Africa and the Middle East, it’s notable that, say, an Egyptian organization documenting violence amid the protests in Tahrir Square in 2011 could be deemed a dangerous Militarized Social Movement under 2021’s rulebook.

    Díaz, of UCLA, told The Intercept that Facebook should disclose far more about how it applies these protest-related rules. Will the company immediately shut down protest organizing pages the second any fires or other property damage occurs? “The standards that they’re articulating here suggest that [the DIO list] could swallow up a lot of active protesters,” Díaz said.

    It’s possible protest coverage was linked to the DIO listing of two anti-capitalist media organizations: Crimethinc and It’s Going Down. Facebook banned both publications in 2020, citing DIO policy, and both are indeed found on the list, designated as Militarized Social Movements and further tagged as “armed militias.”

    A representative for It’s Going Down, who requested anonymity on the basis of their safety, told The Intercept that “outlets across the political spectrum report on street clashes, strikes, riots, and property destruction, but here Facebook seems to be implying if they don’t like what analysis … or opinion one writes about why millions of people took to the streets last summer during the pandemic in the largest outpouring in U.S. history, then they will simply remove you from the conversation.” They specifically denied that the group is an armed militia, or even an activist or social movement, explaining that it is instead a media platform “featuring news, opinion, analysis and podcasts from an anarchist perspective.” A representative of Crimethinc likewise denied that the group is armed or “‘militarized’ in any sense. It is a news outlet and book publisher, like Verso or Jacobin.” The representative requested anonymity citing right-wing threats to the organization.

    Facebook did not address questions about why these media organizations had been internally designated “armed militias” but instead, when asked about them, reiterated its prohibition on such groups and on Groups Supporting Violent Acts Amid Protests.

    Facebook’s internal moderation guidelines also leave some puzzling loopholes. After the platform played a role in facilitating a genocide in Myanmar, company executive Alex Warofka wrote, “We agree that we can and should do more” to “prevent our platform from being used to foment division and incite offline violence.” But Facebook’s ban against violent incitement is relative, expressly permitting, in the policy materials obtained by The Intercept, calls for violence against “locations no smaller than a village.” For example, cited as fair game in the rules is the statement “We should invade Libya.” The Facebook spokesperson said, “The purpose of this provision is to allow debate about military strategy and war, which is a reality of the world we live in,” and acknowledged that it would allow for calls for violence against a country, city, or terrorist group, giving as an example of a permitted post under the last category a statement targeting an individual: “We should kill Osama bin Laden.”

    Facebook’s headquarters in Menlo Park, Calif., on May 10, 2021.

    Photo: Nina Riggio/Bloomberg via Getty Images

    Harsh Suppression of Speech About the Middle East

    Enforcing the DIO rules leads to some surprising outcomes for a company that claims “free expression” as a core principle. In 2019, citing the DIO policy, Facebook blocked an online university symposium featuring Leila Khaled, who participated in two plane hijackings in the 1960s in which no passengers were hurt. Khaled, now 77, is still present in the version of Facebook’s terrorism list obtained by The Intercept. In February, Facebook’s Oversight Board moved to reverse a decision to delete a post questioning the imprisonment of leftist Kurdish revolutionary Abdullah Öcalan, a DIO listee whom the U.S. helped Turkish intelligence forces abduct in 1999.

    In July, journalist Rania Khalek posted a photo to Instagram of a billboard outside Baghdad International Airport depicting Iranian general Qassim Suleimani and Iraqi military commander Abu Mahdi al-Muhandis, both assassinated by the United States and both on the DIO list. Khalek’s Instagram upload was quickly deleted for violating what a notification called the “violence or dangerous organizations” policy. In an email, Khalek told The Intercept, “My intent when I posted the photo was to show my surroundings,” and “the fact that [the billboard is] so prominently displayed at the airport where they were murdered shows how they are perceived even by Iraqi officialdom.”

    More recently, Facebook’s DIO policy collided with the Taliban’s toppling of the U.S.-backed government in Afghanistan. After the Taliban assumed control of the country, Facebook announced the group was banned from having a presence on its apps. Facebook now finds itself in the position of not just censoring an entire country’s political leadership but placing serious constraints on the public’s ability to discuss or even merely depict it.

    Other incidents indicate that the DIO list may be too blunt an instrument to be used effectively by Facebook moderators. In May, Facebook deleted a variety of posts by Palestinians attempting to document Israeli state violence at Al Aqsa Mosque, the third holiest site in Islam, because company staff mistook it for an unrelated organization on the DIO list with “Al-Aqsa” in its name (of which there are several), judging from an internal memo obtained by BuzzFeed News. Last month, Facebook censored an Egyptian user who posted an Al Jazeera article about the Al-Qassam Brigades, a group active in neighboring Palestine, along with a caption that read simply “Ooh” in Arabic. Al-Qassam does not appear on the DIO list, and Facebook’s Oversight Board wrote that “Facebook was unable to explain why two human reviewers originally judged the content to violate this policy.”

    While the past two decades have inured many the world over to secret ledgers and laws like watchlists and no-fly bans, Facebook’s privatized version indicates to York that “we’ve reached a point where Facebook isn’t just abiding by or replicating U.S. policies, but going well beyond them.”

    “We should never forget that nobody elected Mark Zuckerberg, a man who has never held a job other than CEO of Facebook.”

    The post Revealed: Facebook’s Secret Blacklist of “Dangerous Individuals and Organizations” appeared first on The Intercept.

    This post was originally published on The Intercept.

  • Software used by the Department of Homeland Security to scan the records of millions of immigrants can automatically flag naturalized Americans to potentially have their citizenship revoked based on secret criteria, according to documents reviewed by The Intercept.

    The software, known as ATLAS, takes information from immigrants’ case files and runs it through various federal databases. ATLAS looks for indicators that someone is dangerous or dishonest and is ostensibly designed to detect fraud among people who come into contact with the U.S. immigration system. But advocates for immigrants believe that the real purpose of the computer program is to create a pretext to strip people of citizenship. Whatever the motivation, ATLAS’s intended outcome is ultimately deportation, judging from the documents, which originate within DHS and were obtained by the Open Society Justice Initiative and Muslim Advocates through Freedom of Information Act lawsuits.

    ATLAS helps DHS investigate immigrants’ personal relationships and backgrounds, examining biometric information like fingerprints and, in certain circumstances, considering an immigrant’s race, ethnicity, and national origin. It draws information from a variety of unknown sources, plus two that have been criticized as being poorly managed: the FBI’s Terrorist Screening Database, also known as the terrorist watchlist, and the National Crime Information Center. Powered by servers at tech giant Amazon, the system in 2019 alone conducted 16.5 million screenings and flagged more than 120,000 cases of potential fraud or threats to national security and public safety.

    Ultimately, humans at DHS are involved in determining how to handle immigrants flagged by ATLAS. But the software threatens to amplify the harm caused by bureaucratic mistakes within the immigration system, mistakes that already drive many denaturalization and deportation cases. “ATLAS should be considered as suspect until it is shown not to generate unfair, arbitrary, and discriminatory results,” said Laura Bingham, a lawyer with the Open Society Justice Initiative. “From what we are able to scrutinize in terms of the end results — like the disparate impact of denaturalization based on national origin — there is ample reason to consider ATLAS a threat to naturalized citizens.”

    “From what we are able to scrutinize in terms of the end results … there is ample reason to consider ATLAS a threat to naturalized citizens.”

    Some critics believe it’s no accident that ATLAS could go after individual immigrants for flimsy reasons. “The whole point of ATLAS is to screen and investigate so that the government can deny applications or refer for criminal or civil or immigration enforcement,” said Muslim Advocates’ Deborah Choi. “The purpose of the secret rules and predictive analytics and algorithms are to find things to investigate.”

    The Department of Homeland Security refuses to disclose to the public how exactly ATLAS works or what rules it uses to determine when an immigrant should be flagged to potentially have their citizenship revoked. This secrecy makes it nearly impossible to tell whether ATLAS is targeting immigrants baselessly or not. The Open Society Justice Initiative this week filed a new FOIA request with DHS and its United States Citizenship and Immigration Services, or USCIS, division seeking details on how the algorithm functions.

    The revelations about ATLAS come as policymakers await a review of denaturalization policies that the Biden administration began in February to “ensure that these authorities are not used excessively or inappropriately,” as the White House put it at the time. President Joe Biden came to office promising a more “humane” approach to immigration than former President Donald Trump, who stripped dozens of naturalized Americans of their citizenship. A deadline related to the review came and went in May. Months later, the administration has yet to publish the review or speak publicly about the matter.

    ATLAS originates within USCIS, a DHS division with responsibility for granting citizenship and other immigration benefits. USCIS has called the software its “primary background screening system,” but ATLAS appears to be a feature of a larger computer program that helps manage case information on every person in the immigration system: USCIS’s Fraud Detection and National Security Data System, or FDNS-DS. A 2020 DHS assessment of ATLAS’s privacy implications, one of the few public sources of information about ATLAS, shows that when an individual’s information is run through the software — a virtual certainty for any immigrant — ATLAS autonomously scours an array of databases, including some that contain classified materials. It looks for anything it has been programmed to consider derogatory, dangerous, or potentially fraudulent. DHS documents do not provide a full list of the databases it scours.

    ATLAS appears to scrutinize not just individual immigrants but also their wider social networks. A 2016 privacy assessment of FDNS-DS said that ATLAS “visually displays linkages or relationships among individuals to assist in identifying non-obvious relationships among individuals and organizations with a potential nexus to criminal or terrorist activities.”

    Amazon Web Services, the cloud computing division of the large online retailer, was hosting the ATLAS system as of 2020. That arrangement is one of many instances in which Amazon has sold its services to a controversial Homeland Security initiative targeting immigrants. Amazon has faced protests both from the general public and its own employees demanding that the company cease any further anti-immigrant work; the company did not return a request for comment.

    USCIS spokesperson Matthew Bourke declined to answer any questions about ATLAS.

    Illustration: Alexander Glandien for The Intercept

    Tracking Millions of Immigrants With Potentially Catastrophic Consequences

    It’s unknown how many individuals have been denaturalized via ATLAS. But a 2019 USCIS press release gave some sense of the program’s scale, noting that it processed more than 16 million “screenings” that year and generated 124,000 “automated potential fraud, public safety and national security detections requiring further analysis and manual review by USCIS officers.”

    Immigrants come into contact with ATLAS, according to the 2020 privacy assessment, when one “presents him or herself” to the USCIS for some reason, of which there are many; when “new derogatory information is associated with the individual in one or more U.S. Government systems”; or, according to the 2016 privacy document, whenever “FDNS performs an administrative investigation.” This apparently can happen even after an immigration-related decision has been made: Among the FOIA documents shared with The Intercept is a USCIS memo noting that ATLAS is used to detect “fraud patterns in immigration benefit filings … either pre- or post-adjudication,” suggesting that an immigrant could be subjected to algorithmic scrutiny indefinitely after their filing is approved.

    Once the system is triggered, ATLAS eventually decides whether to flag the immigrant in question, but it’s unclear exactly how it arrives at that decision. How ATLAS reasons — that is, its decision-making “algorithm” — is secret. And although DHS documents list a handful of data types ATLAS can potentially search, they do not indicate what sorts of personal information ATLAS will churn through to reach its decision.

    The 2020 privacy document states vaguely that “ATLAS contains a rules engine that applies pattern-based algorithms to look for indicators of fraud, public safety, and national security concerns,” a process described as “predictive.” It gives little information about these rules but does state that it is permissible to use ATLAS to target immigrants by race and ethnicity in “exceptional instances,” a term left glaringly undefined. The document claims that USCIS protects immigrants from discrimination by “limiting the consideration of an individual’s simple connection to a particular country, by birth or citizenship, as a screening criterion, unless such consideration is based on an assessment of intelligence and risk and in which alternatives do not meet security needs.” Caveats aside, the point is clear: ATLAS could be used to target certain ethnic groups or nationalities in “exceptional instances” or should DHS deem it a “security need.” Appealing to murky notions of “national security” and “fraud” is a long-standing tactic of the post-9/11 homeland security apparatus, and one that has historically permitted the state to justify efforts to harass or target marginalized communities in the U.S. under the auspices of public safety.
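    The assessment’s description of a “rules engine that applies pattern-based algorithms” suggests a common software pattern: a set of predicates run against each case file, any of which can raise a flag. The Python sketch below shows that generic pattern; because DHS keeps the real criteria secret, the rules here are invented placeholders, not ATLAS’s actual logic.

    # Generic rules-engine pattern suggested by the privacy assessment.
    # The rule contents are invented placeholders; ATLAS's are secret.
    from dataclasses import dataclass, field

    @dataclass
    class CaseFile:
        name: str
        watchlist_hit: bool = False
        fingerprint_mismatch: bool = False
        flags: list = field(default_factory=list)

    RULES = {
        "NS": lambda c: c.watchlist_hit,                       # national security
        "multiple_identities": lambda c: c.fingerprint_mismatch,
    }

    def screen(case: CaseFile) -> CaseFile:
        # Any matching rule produces a flag for manual review.
        for label, predicate in RULES.items():
            if predicate(case):
                case.flags.append(label)
        return case

    print(screen(CaseFile("example", fingerprint_mismatch=True)).flags)
    # ['multiple_identities']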

    If ATLAS produces a negative review, the next steps can lead to denaturalization, and a 2019 flowchart included in the FOIA documents provided to The Intercept illustrates how: When ATLAS finds something derogatory according to its secret list of rules, the software sends out a “System Generated Notification,” which is then “triaged” and forwarded directly to FDNS-DS if potentially “actionable.” From there, FDNS determines whether the notification constitutes a “possible criminal denaturalization referral,” and, if so, will “refer to ICE for criminal denaturalization action.” All told, going from an ATLAS notification to criminal denaturalization proceedings takes only four steps on the flowchart.

    An internal USCIS document shows an ATLAS scan as the first step in identifying cases for denaturalization.

    Document: FOIA
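    Read as a pipeline, the flowchart amounts to four handoffs between an automated hit and a criminal referral. Below is a minimal sketch under that reading; the function names are invented stand-ins for the offices involved, while the steps and their order come from the 2019 flowchart.

    # The four flowchart steps as a pipeline. Function names are invented.
    def system_generated_notification(case):   # 1. ATLAS flags the case
        return {"case": case, "actionable": None}

    def triage(notification):                  # 2. notification is triaged
        notification["actionable"] = True      #    (criteria are not public)
        return notification

    def fdns_review(notification):             # 3. forwarded to FDNS-DS
        return "possible criminal denaturalization referral"

    def refer_to_ice(finding):                 # 4. referral to ICE
        return f"ICE: {finding}"

    result = refer_to_ice(fdns_review(triage(system_generated_notification("case-001"))))
    print(result)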

    A USCIS spreadsheet summarizing the System Generated Notifications created in 2020, also obtained via FOIA litigation, cites 12 different categories of ATLAS alert. Though the meaning of these codes is unclear, the spreadsheet references notifications relating to “DACA,” presumably the Deferred Action for Childhood Arrivals policy that protects some undocumented immigrants from deportation; “DOD,” possibly referring to the Department of Defense; and two different “NS,” or national security, categories whose full names were redacted. Most of the notifications created in 2020 were in the “multiple identities” category, which refers to immigrants deliberately using false aliases.

    Legal scholars and technologists have widely criticized attempts to use software to predict national security threats, arguing that terrorism is so statistically rare as to be impossible to foresee by drawing “patterns” from a person’s biography. “Because the rules or factors underlying ATLAS’s screening functionality are unknown, there is no way to assess whether ATLAS is disproportionately flagging certain communities,” Choi of Muslim Advocates told The Intercept. “In fact, the Privacy Impact Assessment for ATLAS states that under certain circumstances, an individual’s country of birth or citizenship could be a screening criterion. As was the case in Operation Janus” — a DHS program that involved a review of past naturalization cases of people from “special interest countries” — “any rule based on country of origin is likely to target individuals from Muslim-majority countries.”

    The 2020 privacy document does little to dispel worries that ATLAS is making potentially life-ruining decisions on the basis of bad data. The document states that ATLAS’s output is subject to manual review by the agents who use it; it also notes that the accuracy of ATLAS’s input is taken as a given: “USCIS presumes the information submitted is accurate. … ATLAS relies on the accuracy of the information as it is collected from the immigration requestor and from the other government source systems. As such, the accuracy of the information in ATLAS is equivalent to the accuracy of the source information at the point in time when it is collected by ATLAS.” The document further notes that “ATLAS does not employ any mechanisms that allow individuals to amend erroneous information” and suggests that individuals directly contact the offices maintaining the various databases ATLAS uses if they wish to correct an error. The notion that someone struggling to navigate the U.S. immigration system would have the wherewithal to personally negotiate a correction of the FBI Terrorist Screening Database, or have an opportunity to learn of such an error to begin with, is questionable.

    An Opportunity To Stop the Denaturalization Wave

    The U.S. government’s use of denaturalization has varied widely over the last century. In the early to mid-1900s, the federal government pursued denaturalization for political, racist, and sexist reasons, even going after U.S.-born citizens. That changed after a 1967 U.S. Supreme Court decision vastly narrowed the potential uses of denaturalization. For nearly five decades afterward, the government brought denaturalization cases only sparingly, usually against accused war criminals and Nazis — up until the Trump presidency.

    In September 2017, the Department of Justice announced its intent to denaturalize three men it accused of lying about their immigration histories on their applications for citizenship. It was a loud proclamation of a new front in the Trump administration’s war on immigrants, one that would see nearly twice as many denaturalization cases filed over the following two years as had been filed from 2004 to 2016, according to a New York Times Magazine investigation.

    The infrastructure that helped the Trump Justice Department identify its first targets for denaturalization was years in the making. Under Operation Janus — an initiative that began at the end of George W. Bush’s presidency and continued under former President Barack Obama — the Department of Homeland Security began to digitize fingerprint data for about 315,000 people whose information was missing from a central database, ultimately identifying 1,029 people who had been naturalized after receiving final orders of deportation under another identity. According to a 2016 report from the DHS Office of Inspector General, U.S. Immigration and Customs Enforcement had begun the process of investigating some of those cases to decide whether the individuals should be denaturalized.

    “But the Obama administration proceeded with caution, instructing officials only to denaturalize those who appeared to pose a danger to the United States,” writes law professor Amanda Frost in her recent book, “You Are Not American: Citizenship Stripping from Dred Scott to the Dreamers.” “After the Trump administration took over, however, the program grew exponentially.”

    In early 2018, the Justice Department wrote in a press release that USCIS “has stated its intention to refer approximately an additional 1,600 for prosecution,” and later that year, USCIS announced the creation of a new office focused on denaturalization. (Asked about the status of that office, Bourke, the USCIS spokesperson, said that once the administration’s review of denaturalization policies is complete, “USCIS staffing will be adjusted accordingly to meet the needs of the agency.”) Ahead of the 2019 and 2020 fiscal years, the Department of Homeland Security asked for $207.6 million to fund, among other things, investigations into hundreds of additional leads under Operation Janus, as well as a review of another 700,000 immigrant files under Operation Second Look, a related program. In early 2020, the Justice Department created a new office to investigate “terrorists, war criminals, sex offenders, and other fraudsters who illegally obtained naturalization” for denaturalization.

    ATLAS is a direct descendant of these efforts to simultaneously digitize huge swaths of paper fingerprint records and sift through them en masse in order to find damning inconsistencies. One of the FOIA-produced documents shared with The Intercept, the USCIS memo on that office’s fingerprint digitization strategy, notes that ATLAS “will help to ensure USCIS is aware of cases with multiple identity fraud patterns so that officers can address this potentially derogatory information prior to final adjudication of immigration benefits.”

    Several of the documents obtained under FOIA suggest that deportation is the end goal of these recent efforts: A heavily redacted, undated USCIS presentation lists “Removal Proceedings (if Amenable)” as the final step in a denaturalization case, while a flow chart on the “Historical Fingerprint Enrollment Denaturalization Workflow” shows the second-to-last step as “Immigration Removal Proceedings Occur,” followed by a decision by an immigration judge. A 2018 USCIS memo states that a key consideration in settlement agreements is to determine if deportation “is a priority or if denaturalization is sufficient,” noting that deportation “would generally be within the enforcement priorities, where the subject is denaturalized with an admission or finding of fraud.” A 2009 ICE memo notes that in cases in which the Justice Department declines to criminally prosecute someone suspected of “identity and benefit fraud,” that person “must, if legally possible, be administratively arrested and placed in removal proceedings. Several of the subjects have been granted citizenship through naturalization. These cases should be given priority.” Additionally, a USCIS spreadsheet listing settlement proposals for 10 denaturalization cases in 2018 and 2019 (all of which were rejected) shows that all of the offers included some sort of protection from deportation — either explicitly or through an agreement to maintain permanent resident status.

    Denaturalization experts say that putting an immigrant’s paper trail through the algorithmic wringer can lead to automated punitive measures based not on that immigrant’s past conduct but the government’s own incompetence. Experts have long pointed out that using matches against shoddily maintained fingerprints, many collected on notecards decades ago, as evidence of deliberate “fraud” or malfeasance is likely to ensnare and punish innocent people.

    According to Choi, in some cases “denaturalization is sought on the basis of the mistakes of others, such as bad attorneys and translators, or even the government’s failures in record-keeping or the failures of the immigration system.” Bureaucratic blundering can easily be construed as a sign of fraud on an immigrant’s part, especially if decades have passed since the paperwork in question was filled out. If ATLAS finds that your name doesn’t match a name associated with your historical fingerprint record, you could be fast-tracked for denaturalization without ever realizing that there was an inconsistency in your paperwork, potentially through no fault of your own. “Many denaturalization cases are based on the government’s allegations of fraud, but the government has never substantiated its sweeping justification of fraud prevention to warrant the irreparable harm to American families and society that is caused by denaturalization,” Choi added.

    The Justice Department’s denaturalization prosecutions appeared to slow in 2020, when the coronavirus pandemic caused massive delays throughout the judicial system, according to a document obtained by the Open Society Justice Initiative. Another USCIS document obtained by the group, however, shows that there were thousands of cases in the pipeline: As of April 2020, the agency had produced 2,628 “affidavits of good cause,” which are a procedural requirement for initiating civil denaturalization cases, and had assigned 1,265 cases to the USCIS Office of Chief Counsel. Of those, 745 cases were pending with the OCC and 502 had been referred to the Justice Department’s Office of Immigration Litigation. Asked about the number of cases it is currently investigating or has referred to the Justice Department for prosecution, USCIS referred questions to the Justice Department. Justice Department spokesperson Danielle Blevins declined to comment on the department’s denaturalization caseload.

    Under Biden’s February executive order, the departments of State, Justice, and Homeland Security were due to submit a report to the president in early May. The State Department confirmed to The Intercept that it had completed its portion of the review and directed questions about if and when the report would be made public to the White House. Bourke of USCIS told The Intercept that the agency is working with DHS and the Justice Department on the review and that it would “potentially make adjustments following that assessment.” The White House did not respond to questions about the report.

    Advocates, meanwhile, have been pushing the administration to dismantle the denaturalization-focused infrastructure built by Trump and to restore the previous status quo of very limited pursuits of denaturalization. In May, Muslim Advocates was the lead signatory among 48 advocacy groups that detailed these demands in a letter to USCIS. The groups recommend that the agency halt its use of ATLAS until completing a “disparate impact review” and publicly release information on the rules ATLAS uses to flag people, demographic information about the people flagged by the system, and the number of screenings and flags, as well as their outcomes.

    Sameera Hafiz, policy director at the Immigrant Legal Resource Center, who has been involved in advocacy efforts related to denaturalization for several years, said she wants to see the administration do even more. “Our expectation is that the Biden administration will establish a clear process to immediately restore citizenship to all the individuals stripped of their citizenship during the Trump years and commit to dropping the pending denaturalization cases initiated by Trump,” she said. “Unfortunately, Biden’s immigration enforcement tactics continue to instill fear in our communities — this is one important step the administration must take to begin addressing the harms of the Trump years.”

    The post Little-Known Federal Software Can Trigger Revocation of Citizenship appeared first on The Intercept.

    This post was originally published on The Intercept.

  • FACEBOOK’S SECRET INTERNAL RULES for moderating the term “Zionist” allow the social network to suppress criticism of Israel amid a wave of abuses and violence committed by the country, according to people who reviewed the policies.

    The rules have reportedly been in place since 2019, which appears to contradict a claim the company made in March that it had not yet decided whether “Zionist” would be treated as a synonym for “Jew” when screening for possible “hate speech.” The policies, obtained by The Intercept, govern the use of “Zionist” in posts not only on Facebook but across its associated apps, including Instagram.

    Both Facebook and Instagram face accusations of censorship in the wake of the broad, arbitrary deletion of recent posts from pro-Palestine users critical of the Israeli government, including posts documenting instances of Israeli state violence.

    Israel and Gaza have been gripped by intense violence since last week. Tensions began amid Palestinian protests against planned evictions in occupied East Jerusalem to make way for Jewish settlers. At one point, Israeli security forces stormed the Al Aqsa mosque compound in Jerusalem’s old city, one of the holiest sites in Islam. The Palestinian militant group Hamas responded by firing rockets at Israel. Israel, in turn, launched intense aerial bombardment and artillery attacks against the occupied Palestinian Gaza Strip, reportedly leaving more than 200 people dead, including 63 children. At least 1,500 Palestinians have been injured. According to reports, ten people in Israel have died as a result of the fighting, including two children, with more than 500 injured.

    “Facebook claims that their policy on the word ‘Zionist’ is about Jewish safety,” Dani Noble, of the organization Jewish Voice for Peace, who reviewed the rules, told The Intercept. “But, according to their content policy excerpt, it seems Facebook decision-makers are more concerned with shielding Zionist Israeli settlers and the Israeli government from accountability for these crimes.”

    Though none of the content removals by Facebook or Instagram have been tied conclusively to the term “Zionist,” pro-Palestine users and activists were alarmed by disappearing posts and notices of policy violations over the last week. Facebook said the sudden deletion of deeply disturbing content documenting Israeli state violence was, as the company so often claims, just a big accident. Company spokesperson Sophie Vogel, in an email to The Intercept, blamed the deleted posts, many of them about recent attempts by Israeli settlers to seize Palestinian homes, on an unspecified “wider technical issue” within Instagram and on a series of “mistaken” deletions and “human error.”

    Another spokesperson, Claire Lerner, said: “We allow critical discussion of Zionists, but remove attacks against them when context suggests the word is being used as a proxy for Jews or Israelis, both of which are protected characteristics under our hate speech policy.” She added: “We recognize the sensitivity of this debate, and the fact that the word ‘Zionist’ is frequently used in important political debate. Our intention is never to stifle that debate, but to make sure that we’re allowing as much speech as possible, while keeping everyone in our community safe.”

    Facebook did not comment on when the rules were implemented or on the apparent contradiction with its public statements that the policy was still under review and not in active use.

    While some Palestinians’ Facebook and Instagram posts simply vanished, which would make a technical problem plausible, others reported receiving notifications that their posts had been removed for violating company rules against “hate speech or symbols.” Those alleged violations constitute just one of the prohibitions drawn from Facebook’s library of internal documents, which ostensibly dictate for its audience of billions what is permitted and what must be deleted.

    Though the company claims its content decisions are increasingly made automatically by machines, Facebook and Instagram still rely on legions of poorly paid contractors around the world, to whom decisions to delete or preserve posts are delegated through a mix of individual judgment calls and the application of intricate rule books, flowcharts, and hypothetical examples. Facebook had previously dissembled on the question of whether it would add “Zionist” to a matrix of terms used to protect categories of people, telling Palestinian activists in a March videoconference that there was still “no decision” on the matter. “We are researching whether in certain limited contexts it is accurate to consider that the word Zionist may be used as a substitute for ‘Jew’ in some hate speech cases,” Facebook human rights director Miranda Sissons said at the Palestine Digital Activism Forum. That does not appear to be entirely true. (Sissons could not be reached for comment.)

    Baffling Examples for Moderators

    An excerpt from one of the internal rule books reviewed by The Intercept walks Facebook and Instagram content moderators through the process of determining whether posts and comments using the term “Zionist” constitute hate speech.

    One Facebook moderator said the policy “leaves very little wiggle room for criticism of Zionism” at a time when precisely that ideology is the subject of close scrutiny and intense protest.

    Strictly speaking, “Zionism” refers to the movement that historically advocated for the creation of a Jewish state, or community, in Palestine, and today supports the nation that emerged from that push, Israel. A Zionist is a person who takes part in Zionism. Though “Zionist” and “Zionism” can be fraught terms, at times used by openly antisemitic people as a veiled synonym for “Jew” and “Judaism,” the words also have unequivocal historical and political meaning and clear, legitimate, and non-hateful uses, including in the context of criticism and discussion of the Israeli government and its policies. In the words of one Facebook moderator who spoke to The Intercept on the condition of anonymity to protect their job, the policy in practice “leaves very little wiggle room for criticism of Zionism” at a time when that very ideology is the subject of enormous attention and protest.

    The policy text on “Zionist” is just a short excerpt of a much larger document that walks moderators through the process of identifying a wide range of protected classes and the hate speech associated with them. It instructs moderators on how “to determine whether ‘Zionist’ is being used as a proxy for Israeli/Jew,” which would subject a post to deletion. Facebook says it does not currently consider “Zionist” a specific protected category. The text reads as follows:

    What are the indicators to determine whether “Zionist” is being used as a proxy for Israeli/Jew?
    We use the following Indicators to determine Proxy for Jew/Israeli:
    when main content explicitly mentions Jew or Israeli and comment contains “Zionist” as a target combined with a hate speech attack and no other context is available then presume Jew/Israeli and delete.

    Examples:

    Delete: Main Content, “Israeli settlers refuse to leave homes built on Palestinian territory”; Comment, “Fuck the Zionists!”
    No action: Main Content, “Zionist movement turns 60”; Comment, “Zionists are terrible, I hate them all”

    In scenarios involving visual or textual comparisons deemed dehumanizing that reference “rats,” should references to Zionist(s) be treated as a proxy for “Jew(s)”?
    Yes, under these conditions only, please treat “Zionist(s)” as a substitute for “Jew(s)” and act accordingly.
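    Reduced to its logic, the excerpt describes a single conjunctive test: the main content must mention Jews or Israelis, the comment must target “Zionist” with a hate speech attack, and no other context may be available. A hypothetical Python sketch of that test follows; the function and its keyword checks are illustrative only, since in practice a human moderator makes each of these judgments.

    # The indicator from the excerpt as one boolean test. All names and
    # keyword checks are illustrative; moderators apply this manually.
    def presume_proxy_and_delete(main_content: str, comment: str,
                                 other_context: bool) -> bool:
        mentions_jew_or_israeli = any(
            w in main_content.lower() for w in ("jew", "israeli"))
        # A hate speech attack targeting "Zionist"; judging what counts
        # as an attack is left to the human moderator.
        targets_zionist_with_attack = "zionist" in comment.lower()
        return (mentions_jew_or_israeli
                and targets_zionist_with_attack
                and not other_context)

    # First example from the excerpt: presumed proxy, delete.
    print(presume_proxy_and_delete(
        "Israeli settlers refuse to leave homes built on Palestinian territory",
        "Fuck the Zionists!", other_context=False))  # True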

    Critics noted that the first example is tied to a frequent and often violent real-world occurrence, the seizure of Palestinian homes by Israeli settlers, which is almost always carried out with justifications rooted in Zionist ideology or in Israeli government policies based on Zionism. Activists who question Facebook’s rules on the term “Zionist” worry that denunciations of those actions and state policies will be cemented as hate speech against Jews, making it harder to voice any criticism of Israel on the platforms.

    “The absurdity, futility, and politicized nature of this Facebook policy should now be clear as day, as we witness ethnic cleansing in occupied Jerusalem and a renewed war on the besieged population of Gaza,” said Dima Khalidi, director of the advocacy group Palestine Legal. “The fundamental problem is that Zionism is a political ideology that justifies exactly the kind of forced expulsion of Palestinians, one that has made some Palestinians refugees three times over, that we now see in Sheikh Jarrah and other occupied parts of East Jerusalem.”

    Colonialism and Colonizers

    Critics say that Facebook’s decision to treat “Zionist” as an ethnic identity obscures the fact that the term describes a concrete ideological choice and ignores how Palestinians and others have come to use the word in the context of their historical oppression by Israel. That focus chills the very political speech and worldwide protest Facebook claims to be protecting, according to Jillian York, director for international freedom of expression at the Electronic Frontier Foundation and a longtime critic of Facebook’s content moderation practices. “Given that ‘Zionist’ is used as a self-identification, its use by Jews and others (including evangelical Christians) demonstrates that it is not simply a synonym for ‘Jew,’ as Facebook has suggested,” she told The Intercept. “Furthermore, its use in the region is distinct: Palestinians use it as a synonym for ‘colonizer,’ not for ‘Jew.’” The policy amounts to a form of ideological “exceptionalism” toward Zionism, according to York, a treatment not extended to other political identities such as socialism and neoconservatism, “as a result of political and other pressures.”

    Though Facebook has said that no Instagram posts about the recent Israeli violence were removed at the request of the Israeli government, the country frequently makes such requests of the company, which largely complies with them. In addition, spontaneously organized brigades of pro-Israel volunteers, often coordinated through the phone app Act.IL, take part in mass-reporting campaigns that manage to trick Facebook’s automated moderation systems into flagging nonviolent political speech as hateful incitement. The company declined to publicly address questions about evidence of mass-reporting campaigns.

    The existence of rules for “Zionist” came as a surprise to Palestinian activists, who say Facebook had previously given the impression that limits on the term’s use were being weighed internally but had not been implemented. “We were led to believe that they were considering this policy, and therefore that they would consult civil society,” Marwa Fatafta, Middle East and North Africa policy director at the organization Access Now, told The Intercept. Fatafta said her opinion on the possibility of implementing such a policy was solicited in 2020, even though the document indicates that the rules on “Zionist” had already been distributed to moderators in 2019.

    After reviewing the policy herself, Fatafta said it reflects exactly the concerns she had when the idea was first put to her. “Zionism is a politically complex term that requires nuance,” she told The Intercept. “There is no way Facebook can moderate this content at scale without its systems spinning out of control, curbing legitimate political positions and silencing critics.”

    Translation: Deborah Leão

    The post Facebook apaga críticas a Israel que usem o termo ‘sionista’ appeared first on The Intercept.

    This post was originally published on The Intercept.

  • Facebook’s secret internal rules for moderating the term “Zionist” let the social network suppress criticism of Israel amid an ongoing wave of Israeli abuses and violence, according to people who reviewed the policies.

    The rules appear to have been in place since 2019, seeming to contradict a claim by the company in March that no decision had been made on whether to treat the term “Zionist” as a proxy for “Jew” when determining whether it was deployed as “hate speech.” The policies govern the use of “Zionist” in posts not only on Facebook but across its subsidiary apps, including Instagram.

    Both Facebook and Instagram are facing allegations of censorship following the erratic, widespread removal of recent posts from pro-Palestinian users critical of the Israeli government, including those who documented instances of Israeli state violence.

    Mass violence has gripped Israel and Gaza since last week. Tensions kicked off amid Palestinian protests against planned evictions in occupied East Jerusalem to make way for Jewish settlers. Eventually, Israeli security forces stormed the Al Aqsa mosque compound in Jerusalem’s old city, one of the holiest sites in Islam. The Palestinian militant group Hamas responded with rocket fire aimed at Israel. Israel, in turn, unleashed massive aerial bombardments and artillery attacks against the occupied Palestinian Gaza Strip, reportedly leaving more than 120 people, including 20 children, dead. At least 900 Palestinians have been injured since Monday. Reports said that in Israel, seven people, including a soldier and a child, had died as a result of the violence, with more than 500 injured.

    “Facebook claims that their policy on the word ‘Zionist’ is about Jewish safety,” Dani Noble, an organizer with Jewish Voice for Peace who reviewed the rules, told The Intercept. “But, according to their content policy excerpt, it seems Facebook decision-makers are more concerned with shielding Zionist Israeli settlers and the Israeli government from accountability for these crimes.”

    Though none of Facebook and Instagram’s content removal has been tied conclusively to the term “Zionist,” users and pro-Palestinian advocates were alarmed by disappearing posts and notices of policy violations over the last week. Facebook said the sudden deletion of deeply disturbing content documenting Israeli state violence was, as the company so often claims, just a big accident. Company spokesperson Sophie Vogel, in an email to The Intercept, blamed the deleted posts, many about the recent attempts to seize Palestinian homes by Israeli settlers, on an unspecified “wider technical issue” within Instagram and on a series of “mistaken” deletions and “human error.”

    Another spokesperson, Claire Lerner, said, “We allow critical discussion of Zionists, but remove attacks against them when context suggests the word is being used as a proxy for Jews or Israelis, both of which are protected characteristics under our hate speech policy.” She added, “We recognize the sensitivity of this debate, and the fact that the word ‘Zionist’ is frequently used in important political debate. Our intention is never to stifle that debate, but to make sure that we’re allowing as much speech as possible, while keeping everyone in our community safe.”

Facebook did not provide comment on when the rules were implemented or on the apparent contradiction with its public statements that such a policy was still under consideration and not being actively used.

    While some Palestinians’ Facebook and Instagram posts have simply vanished, suggesting a technical problem of some sort could plausibly be the cause, many others reported receiving a notification that their posts were removed because they violated company rules against “hate speech or symbols.” Those alleged violations constitute just one of many prohibitions drawn from a library of internal Facebook documents that ostensibly dictate what’s permitted and what should be deleted for the company’s multibillion-person audience.

    Though the company claims its content decisions are increasingly made automatically by machines, Facebook and Instagram still rely on legions of low-paid contractors around the world, left to delete or preserve posts through a mix of personal judgment calls and the application of byzantine rule books, flow charts, and hypothetical examples. Facebook has previously dissembled on the question of whether it would add “Zionist” to a master list it maintains of protected classes of people, telling Palestinian activists at a virtual conference in March that it had made “no decision” on the matter. “We are looking about whether in some certain limited contexts, is it is accurate to consider that the word Zionist may be a proxy for Jew in some certain hate speech cases,” Facebook’s human rights chief Miranda Sissons told the Palestine Digital Activism Forum. This does not appear to be entirely true. (Sissons could not be reached for comment.)

    Baffling Examples for Moderators

    A portion of one such internal rulebook reviewed by The Intercept walks Facebook and Instagram moderators through the process of determining whether posts and comments that make use of the term “Zionist” constitute hate speech.

    One Facebook moderator said the policy “leaves very little wiggle room for criticism of Zionism” at a time when precisely that ideology is subject to intense scrutiny and protest.

“Zionism,” strictly speaking, refers to the movement that advocated historically for the creation of a Jewish state or community in Palestine and more recently for the nation that emerged from that push, Israel. A Zionist is someone who participates in Zionism. Though “Zionist” and “Zionism” can be fraught terms, deployed at times by flagrantly antisemitic people as a wink-and-nod synonym for “Jew” and “Judaism,” the words also have unequivocal historical and political meaning and clear, legitimate, and non-hateful uses, including in the context of criticism and discussion of the Israeli government and its policies. In the words of one Facebook moderator who spoke to The Intercept on the condition of anonymity to protect their job, in practice the policy “leaves very little wiggle room for criticism of Zionism” at a time when precisely that ideology is subject to intense scrutiny and protest.

The policy text on “Zionist” is only a brief section of a much larger document that walks moderators through the process of identifying a wide variety of protected classes and associated hate speech. It provides moderators with instructions “to determine if ‘Zionist’ is used as a proxy for Israeli/Jew,” which would make a post subject to deletion. Facebook says it does not currently consider “Zionist” a protected class on its own. It reads as follows:

    What are the indicators to determine if “Zionist” is used as a proxy for Israeli/Jew?

    We use the following Indicators to determine Proxy for Jew/Israeli:

    1. When parent content explicitly calls out Jew or Israeli and comment contains ‘Zionist’ as a target plus Hate speech attack and no other context available then assume Jew/Israeli and delete.

    Examples:

    Delete: Parent Content, “Israeli settlers refuse to leave houses built on Palestinian territory”; Comment, “Fuck Zionists!”

    No Action: Parent Content, “Zionist movement turns 60”; Comment, “Zionists are awful, I really hate them all”

    In scenarios of visual or textual designated dehumanizing comparisons where there are references to “rats”, should the references to Zionist(s) be considered as a proxy for “Jew(s)”?

    Yes, only in these scenarios please consider “Zionist(s)” as a substitute to “Jew(s)” and action appropriately.
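To make the decision flow in this excerpt concrete, here is a minimal sketch of the two indicators expressed as code. It is an illustration only, under the assumption of naive keyword checks: every function name below is a hypothetical stand-in for a judgment the excerpt assigns to human moderators, and the rule’s “no other context available” condition is precisely the kind of call a keyword test cannot make.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    parent_text: str  # the post being commented on
    text: str         # the comment under review

# Hypothetical stand-ins for moderator judgment calls.
def mentions_jew_or_israeli(text: str) -> bool:
    t = text.lower()
    return "jew" in t or "israeli" in t

def targets_zionist(text: str) -> bool:
    return "zionist" in text.lower()

def is_hate_speech_attack(text: str) -> bool:
    # Placeholder for Facebook's separate hate speech criteria.
    return any(w in text.lower() for w in ("fuck", "hate", "awful"))

def compares_to_rats(text: str) -> bool:
    return "rats" in text.lower()

def moderate(c: Comment) -> str:
    """Apply the two quoted indicators in order."""
    # Indicator 1: parent explicitly calls out Jew or Israeli, and the
    # comment targets "Zionist" with a hate speech attack. (The rule's
    # "no other context available" clause is omitted here.)
    if (mentions_jew_or_israeli(c.parent_text)
            and targets_zionist(c.text)
            and is_hate_speech_attack(c.text)):
        return "delete"
    # Indicator 2: dehumanizing "rats" comparisons always treat
    # "Zionist(s)" as a substitute for "Jew(s)".
    if targets_zionist(c.text) and compares_to_rats(c.text):
        return "delete"
    return "no_action"

# The rulebook's own examples produce the outcomes it specifies:
print(moderate(Comment(
    "Israeli settlers refuse to leave houses built on Palestinian territory",
    "Fuck Zionists!")))                              # delete
print(moderate(Comment(
    "Zionist movement turns 60",
    "Zionists are awful, I really hate them all")))  # no_action
```

Even in this toy form, the structure shows why critics worry: in the first example, the deciding factor is the subject matter of the parent post, not anything about the comment itself.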

    Critics noted that the first example is tied to a frequent and often violent real-world event — seizures of Palestinian homes by Israeli settlers — almost always carried out with justifications rooted in ideological Zionism or Israeli government policies themselves rooted in Zionism. The advocates who question Facebook’s rules on the term “Zionist” worry they would collapse denunciations of such action and state policies into hate speech against Jews, making it difficult to criticize Israel online at all.

    “The absurdity, futility, and politicized nature of Facebook’s policy should be as clear as day right now, as we witness continued ethnic cleansing in occupied Jerusalem, and a new war on the besieged population of Gaza,” said Dima Khalidi, director of Palestine Legal, an advocacy group. “The fundamental problem is that Zionism is a political ideology that justifies exactly the kind of forced expulsion of Palestinians — making some Palestinians refugees 3-times over — that we’re seeing right now in Sheikh Jarrah and other occupied East Jerusalem neighborhoods.”

    Colonialism and Colonizers

Critics said Facebook’s decision to zero in on “Zionist” as an ethnic identity elides the fact that it describes a concrete ideological choice and ignores how Palestinians and others have come to use the word in the context of their historical repression by Israel. This focus inhibits the very political discourse and protest throughout the world that Facebook claims it’s protecting, according to Jillian York, the Electronic Frontier Foundation’s director for international freedom of expression and a longtime critic of Facebook’s moderation practices. “While ‘Zionist’ is used as a self-identity, its use by Jews and others (including many Evangelical Christians) demonstrates that it’s not purely a synonym for ‘Jewish’ as Facebook has suggested,” York told The Intercept. “Further, its use in the region is different — Palestinians use it as a synonym for ‘colonizer’, not for ‘Jew.’” The policy constitutes a form of ideological “exceptionalism” around Zionism, according to York, a treatment not provided to other political identities like socialism or neoconservatism “as the result of political and other pressure.”

    Though Facebook said that no Instagram posts about the recent Israeli violence were removed at the request of the Israeli government, the country routinely makes such requests to the largely compliant company. And brigades of loosely organized pro-Israel volunteers, many coordinating through the smartphone app Act.IL, participate in mass-reporting campaigns that can essentially trick Facebook’s automated moderation systems into flagging nonviolent political speech as hateful incitement. The company declined to comment on the record when asked about evidence of mass-reporting campaigns.

The existence of the “Zionist” rules comes as a surprise to Palestinian advocates who say Facebook previously created the impression that limits on using the term “Zionist” were being considered within the company but not actually implemented. “We were led to believe that they are considering this policy and therefore they were consulting with civil society,” Marwa Fatafta, Middle East and North Africa policy manager for Access Now, told The Intercept. Fatafta noted that she was asked to provide feedback on the possibility of such a policy in 2020, whereas the document containing the rules indicates they were released to moderators in 2019.

    After reviewing the policy for herself, Fatafta said it reflects precisely the concerns she held when it was posed to her as hypothetical. “Zionism is a politically complex term that requires nuance,” she told The Intercept. “There is no way for Facebook to moderate such content at scale without their systems running amok, curtailing legitimate political speech and silencing critical voices.”

    The post Facebook’s Secret Rules About the Word “Zionist” Impede Criticism of Israel appeared first on The Intercept.

    This post was originally published on The Intercept.

  • U.S. Customs and Border Protection purchased technology that vacuums up reams of personal information stored inside cars, according to a federal contract reviewed by The Intercept, illustrating the serious risks in connecting your vehicle and your smartphone.

    The contract, shared with The Intercept by Latinx advocacy organization Mijente, shows that CBP paid Swedish data extraction firm MSAB $456,073 for a bundle of hardware including five iVe “vehicle forensics kits” manufactured by Berla, an American company. A related document indicates that CBP believed the kit would be “critical in CBP investigations as it can provide evidence [not only] regarding the vehicle’s use, but also information obtained through mobile devices paired with the infotainment system.” The document went on to say that iVe was the only tool available for purchase that could tap into such systems.

    According to statements by Berla’s own founder, part of the draw of vacuuming data out of cars is that so many drivers are oblivious to the fact that their cars are generating so much data in the first place, often including extremely sensitive information inadvertently synced from smartphones.

    Indeed, MSAB marketing materials promise cops access to a vast array of sensitive personal information quietly stored in the infotainment consoles and various other computers used by modern vehicles — a tapestry of personal details akin to what CBP might get when cracking into one’s personal phone. MSAB claims that this data can include “Recent destinations, favorite locations, call logs, contact lists, SMS messages, emails, pictures, videos, social media feeds, and the navigation history of everywhere the vehicle has been.” MSAB even touts the ability to retrieve deleted data, divine “future plan[s],” and “Identify known associates and establish communication patterns between them.”

    The kit can discover “when and where a vehicle’s lights are turned on, and which doors are opened and closed at specific locations.”

    The kit, MSAB says, also has the ability to discover specific events that most car owners are probably unaware are even recorded, like “when and where a vehicle’s lights are turned on, and which doors are opened and closed at specific locations” as well as “gear shifts, odometer reads, ignition cycles, speed logs, and more.” This car-based surveillance, in other words, goes many miles beyond the car itself.

    iVe is compatible with over two dozen makes of vehicle and is rapidly expanding its acquisition and decoding capabilities, according to MSAB.

    Civil liberties watchdogs said the CBP contract raises concerns that these sorts of extraction tools will be used more broadly to circumvent constitutional protections against unreasonable searches. “The scale at which CBP can leverage a contract like this one is staggering,” said Mohammad Tajsar, an attorney with the American Civil Liberties Union of Southern California.

    MSAB spokesperson Carolen Ytander declined to comment on the privacy and civil liberties risks posed by iVe. When asked if the company maintains any guidelines on use of its technology, they said the company “does not set customer policy or governance on usage.”

    Getting Smartphone Data Without Having to Crack Into a Smartphone

    MSAB’s contract with CBP ran from June of last year until February 28, 2021, and was with the agency’s “forensic and scientific arm,” Laboratories and Scientific Services. It included training on how to use the MSAB gear.

    Interest from the agency, the largest law enforcement force in the United States, likely stems from police setbacks in the ongoing war to crack open smartphones.

    Attacking such devices was a key line of business for MSAB before it branched out into extracting information from cars. The ubiquity of the smartphone provided police around the world with an unparalleled gift: a large portion of an individual’s private life stored conveniently in one object we carry nearly all of the time. But as our phones have become more sophisticated and more targeted, they’ve grown better secured as well, with phone makers like Apple and phone device-cracking outfits like MSAB and Cellebrite engaged in a constant back-and-forth to gain a technical edge over the other.

    “We had a Ford Explorer … we pulled the system out, and we recovered 70 phones that had been connected to it. All of their call logs, their contacts and their SMS.”

But as our phones have grown in sophistication as small computers, so have a whole host of everyday objects and appliances, our cars included. Data-hungry government agencies have increasingly moved to exploit the rise of the smart car, whose dashboard-mounted computers, Bluetooth capabilities, and USB ports have, with the ascendancy of the smartphone, become as standard as cup holders. Smart car systems are typically intended to be paired with your phone, allowing you to take calls, dictate texts, plug in map directions, or “read” emails from behind the wheel. Anyone who’s taken a spin in a new-ish vehicle and connected their phone — whether to place a hands-free call, listen to Spotify, or get directions — has probably been prompted to share their entire contact list, presented as a necessary step to place calls but without any warning that a perfect record of everyone they’ve ever known will now reside inside their car’s memory, sans password.

    The people behind CBP’s new tool are well aware that they are preying on consumer ignorance. In a podcast appearance first reported by NBC News last summer, Berla founder Ben LeMere remarked, “People rent cars and go do things with them and don’t even think about the places they are going and what the car records.” In a 2015 appearance on the podcast “The Forensic Lunch,” LeMere told the show’s hosts how the company uses exactly this accidental-transfer scenario in its trainings: “Your phone died, you’re gonna get in the car, plug it in, and there’s going to be this nice convenient USB port for you. When you plug it into this USB port, it’s going to charge your phone, absolutely. And as soon as it powers up, it’s going to start sucking all your data down into the car.”

    In the same podcast, LeMere also recounted the company pulling data from a car rented at BWI Marshall Airport outside Washington, D.C.:

“We had a Ford Explorer … we pulled the system out, and we recovered 70 phones that had been connected to it. All of their call logs, their contacts and their SMS history, as well as their music preferences, songs that were on their device, and some of their Facebook and Twitter things as well. … And it’s quite comical when you sit back and read some of the text messages.”

    The ACLU’s Tajsar explained, “What they’re really saying is ‘We can exploit people because they’re dumb. We can leverage consumers’ lack of understanding in order to exploit them in ways that they might object to if it was done in the analog world.’”

    Exploiting the Wild “Frontier of the Fourth Amendment”

    The push to make our cars extensions of our phones (often without any meaningful data protection) makes them tremendously enticing targets for generously funded police agencies with insatiable appetites for surveillance data. Part of the appeal is that automotive data systems remain on what Tajsar calls the “frontier of the Fourth Amendment.” While courts increasingly recognize your phone’s privacy as a direct extension of your own, the issue of cracking infotainment systems and downloading their contents remains unsettled, and CBP could be “exploiting the lack of legal coverage to get at information that otherwise would be protected by a warrant,” Tajsar said.

    MSAB’s technology is doubly troubling in the hands of CBP, an agency with a powerful exception from the Fourth Amendment and a historical tendency toward aggressive surveillance and repressive tactics. The agency recently used drones to monitor protests against the police murder of George Floyd and routinely conducts warrantless searches of electronic devices at or near the border.

    “It would appear that this technology can be applied like warrantless phone searches on anybody that CBP pleases.”

    “It would appear that this technology can be applied like warrantless phone searches on anybody that CBP pleases,” said Mijente’s Jacinta Gonzalez, “which has been a problem for journalists, activists, and lawyers, as well as anyone else CBP decides to surveil, without providing any reasonable justification. With this capability, it seems very likely CBP would conduct searches based on intelligence about family/social connections, etc., and there wouldn’t seem to be anything preventing racial profiling.”

    Tajsar shared these concerns.

    “Whenever we have surveillance technology that’s deeply invasive, we are disturbed,” he said. “When it’s in the hands of an agency that’s consistently refused any kind of attempt at basic accountability, reform, or oversight, then it’s Defcon 1.”

    Part of the problem is that CBP’s parent agency, the Department of Homeland Security, is designed to proliferate intelligence and surveillance technologies “among major law enforcement agencies across the country,” said Tajsar. “What CBP have will trickle down to what your local cops on the street end up getting. That is not a theoretical concern.”

    The post Your Car Is Spying on You, and a CBP Contract Shows the Risks appeared first on The Intercept.

    This post was originally published on The Intercept.

  • The U.S. Marshals Service flew unmanned drones over Washington, D.C., in response to last summer’s Black Lives Matter protests, documents obtained by The Intercept via the Freedom of Information Act show.

    The documents — two brief, heavily redacted emails — indicate the Marshals flew the drones over Washington on June 5 and 7, when nationwide protests against police brutality in the wake of George Floyd’s murder were at their height. The surveillance flights occurred just days after the Trump administration ordered the mobilization of the near entirety of federal law enforcement against Washington’s protesters. The aggressive physical crackdown against Black Lives Matter rallies, particularly in Washington, D.C., spurred its own wave of outrage as police beat, chased, and chemically dispersed largely peaceful demonstrators. Less visible law enforcement responses to the rallies also drew intense criticism, including the use of social media surveillance and, in particular, the use of aerial surveillance over multiple cities by the Air National Guard and Department of Homeland Security. Government aircraft monitored 15 cities during the protests, according to the New York Times, filming demonstrators in New York, Philadelphia, and Dayton, Ohio; a Predator drone was deployed over Minneapolis.

One email provided by the Marshals Service is dated June 5 and carries the subject line “UAS Status for Protests,” apparently referring to Unmanned Aircraft Systems, common military jargon for drones. It contains only a few fragments of unredacted text but appears to have contained notes from a “UAS briefing in response to the protests” and states that a redacted entity “responded to Washington DC” and “conducted one flight,” the same day Mayor Muriel Bowser asked Donald Trump to “withdraw all extraordinary law enforcement and military presence from Washington, DC.” The June 7 email is similarly fragmentary and censored but notes that the redacted entity once again “responded to Washington DC” and “conducted several flights.”

    Marshals Service spokesperson James Stossel confirmed the flights to The Intercept but declined to answer any questions about their purpose or what data was collected, stating, “The USMS does not release details of operational missions,” and denying that the Marshals flew drones over the city on any other dates. Asked how the robotic aerial surveillance of protests conforms with the agency’s narrowly defined mission, Stossel said, “The Marshals Service conducts a broad array of missions as authorized by Federal Law which may include ensuring the rule of law is maintained during protests.” Press reports from this period describe the protests in question as peaceful.

The previously unreported flights raise the question of why the U.S. Marshals Service would be flying drones over mass gatherings of First Amendment-protected activity in the nation’s capital. The marshals are the oldest law enforcement branch in the United States, dating to the 18th century, and their present-day grab bag of responsibilities is more or less constrained to protecting courthouses, handling asset forfeitures, operating the Witness Protection Program, transporting prisoners, and hunting fugitives. The vestigial agency has historically been cagey about the existence or purpose of its drone program: In 2013, the Los Angeles Times reported, “In 2004 and 2005, the U.S. Marshals Service tested two small drones in remote areas to help them track fugitives,” but the test was “abandoned … after both drones crashed.”

    Documents obtained by the American Civil Liberties Union that same year via the Freedom of Information Act were also heavily redacted, providing only murky outlines of how the agency was conducting aerial surveillance. These ACLU documents stated that the Marshals possessed a “rapidly deployable overhead collection device that will provide a multi-role surveillance platform to assist in [redacted] detection of targets.” Another document provided to the ACLU noted that the marshals deployed surveillance drones through their Technical Operations Group, or TOG, which “provides the U.S. Marshal Service, other federal agencies, and any requesting state or local law enforcement agency, with the most timely and technologically advanced electronic surveillance and investigative intelligence available in the world,” according to the Marshals Service website. The Marshals’ spokesperson, however, told The Intercept, “No USMS UAS flights were conducted at the request of any other agency.”

    While the Marshals Service quietly acknowledged the existence of its drone surveillance “pilot program” in its 2020 annual report, the flights were largely described as tied to the agency’s core responsibility of apprehending fugitives. But the document does briefly note that “UAS operators also deployed … in support of the USMS mission during the nationwide civil unrest in Summer 2020.” The report doesn’t mention what exactly this drone-based “support” entailed, but the Marshals’ on-the-ground violence against protesters in Portland prompted widespread criticism last summer.

“Once again, high-tech tools sold for use against the worst criminals are deployed against peaceful protesters.”

    Experts say it’s still unclear why the U.S. Marshals are even in a position to conduct these flights in the first place. “How did it become part of the mission of U.S. Marshals Service to engage in aerial surveillance during a protest movement?” said Jay Stanley, senior policy analyst at the ACLU. “It’s hard to know with all the secrecy, but it looks like once again, powerful high-tech tools sold to the public for use against the worst criminals are now being deployed against peaceful protesters and activists.”

Matthew Guariglia, a policy analyst at the Electronic Frontier Foundation, told The Intercept that the fact there’s a Marshals Service drone program at all is indicative of how thoroughly crime-fighting agencies in the United States now resemble war-fighting forces: “The Marshals service has drones for much the same reason that many local police departments have tanks,” Guariglia said. “The slow militarization of local and federal law enforcement as a result of the war on crime, war on drugs, and war on terror have created dozens of desperate law enforcement agencies with advanced technology and bloated budgets.” The mere knowledge that a drone is or even could be watching demonstrators “threatens to chill our right to protest,” Guariglia added.

    Stanley also objected to the near-full redaction of the flight emails, which the Marshals Service argued was warranted on the basis that they would reveal secret investigative techniques and “could reasonably be expected to endanger the life or physical safety of any individual.” But as Stanley pointed out, it’s not as if flying a camera-packing drone over a throng of people is a new or secret technique in the year 2021. “How high are the chances they used their drones in some clever, innovative way they need to keep secret because nobody else has thought of it?” he explained. “No matter how they’re using it, the Marshals Service needs to be open and transparent given the relative novelty of drones as a law enforcement surveillance tool and their significant implications for our privacy. This kind of reflexively secretive behavior is one reason activists and communities tend not to give agencies the benefit of the doubt when they seek new surveillance technologies.”

    The post U.S. Marshals Used Drones to Spy on Black Lives Matter Protests in Washington, D.C. appeared first on The Intercept.

    This post was originally published on The Intercept.

  • New research from a team at the University of Southern California provides further evidence that Facebook’s advertising system is discriminatory, showing that the algorithm used to target ads reproduced real-world gender disparities when showing job listings, even among equally qualified candidates.

    In fields from software engineering to sales to food delivery, the team ran sets of ads promoting real job openings at roughly equivalent companies requiring roughly the same skills, one for a company whose existing workforce was disproportionately male and one that was disproportionately female. Facebook showed more men the ads for the disproportionately male companies and more women the ads for the disproportionately female companies, even though the job qualifications were the same. The paper concludes that Facebook could very well be violating federal anti-discrimination laws.

    “We confirm that Facebook’s ad delivery can result in skew of job ad delivery by gender beyond what can be legally justified by possible differences in qualifications,” the team wrote.

    The work builds on prior research that left Facebook reeling. A groundbreaking 2019 study from one member of the team provided strong evidence that Facebook’s ad algorithm isn’t just capable of bias, but is biased to its core. Responding to that study, and in the wake of widespread criticism over tools that could be used to run blatantly discriminatory ad campaigns, Facebook told The Intercept at the time, “We stand against discrimination in any form. We’ve made important changes to our ad targeting tools and know that this is only a first step. We’ve been looking at our ad delivery system and have engaged industry leaders, academics, and civil rights experts on this very topic — and we’re exploring more changes.”

    Based on this new research, it doesn’t appear that the company got very far beyond whatever that “first step” was. The paper — authored by USC computer science assistant professor Aleksandra Korolova, professor John Heidemann, and doctoral student Basileal Imana — revisits the question tackled in 2019: If advertisers don’t use any of Facebook’s demographic targeting options, which demographics will the system target on its own?

    The question is a crucial one, given that Facebook’s control over who sees which ads might determine who is provided with certain vital economic opportunities, from insurance to a new job to a credit card. This control is executed entirely through algorithms whose inner workings are kept secret. Since Facebook won’t provide any meaningful answers about how the algorithms work, researchers such as Korolova and her colleagues have had to figure it out.

    This time around, the team wanted to preempt claims that biased ad delivery could be explained by the fact that Facebook showed the ads to people who were simply more qualified for the advertised job, a possible legal defense against allegations of unlawful algorithmic bias under statutes like Title VII, which bars discrimination on the basis of protected characteristics like race and gender. “To the extent that the scope of Title VII may cover ad platforms, the distinction we make can eliminate the possibility of platforms using qualification as a legal argument against being held liable for discriminatory outcomes,” the team wrote.

    As in 2019, Korolova and her team created a series of advertisements for real-world job openings and paid Facebook to display these job listings to as many people as possible given their budget, as opposed to specifying a certain demographic cohort whose eyeballs they wanted to zero in on. This essentially left the decision of “who sees what” entirely up to Facebook (and its opaque algorithms), thus helping to highlight the bias engineered into Facebook’s own code.

    Facebook funneled gender-neutral ads for gender-neutral jobs to people on the basis of their gender.

    Even when controlling for job qualifications, the researchers found that Facebook automatically funneled gender-neutral ads for gender-neutral jobs to people on the basis of their gender.

    For example, Korolova’s team purchased Facebook ad campaigns to promote two delivery driver job listings, one from Instacart and another from Domino’s. Both positions are roughly equivalent in terms of required qualifications, and for both companies, “there is data that shows the de facto gender distribution is skewed”: Most Domino’s drivers are men, and most Instacart drivers are women. By running these ads with a mandate only to maximize eyeballs, no matter whose, the team sought to “study whether ad delivery optimization algorithms reproduce these de facto skews, even though they are not justifiable on the basis of differences in qualification,” with the expectation of finding “a platform whose ad delivery optimization goes beyond what is justifiable by qualification and reproduces de facto skews to show the Domino’s ad to relatively more males than the Instacart ad.” The results showed exactly that.

Left to its own devices, the team found, Facebook’s ad delivery algorithm took the Domino’s and Instacart listings, along with the ads from later experiments promoting software engineering and sales associate gigs at other companies, and showed them to online audiences that essentially reproduced the existing offline gender disparities: “The skew we observe on Facebook is in the same direction as the de facto skew, with the Domino’s ad delivered to a higher fraction of men than the Instacart ad.” And since the experiments were designed to take job qualification out of the picture, the team says, they strengthen “the previously raised arguments that Facebook’s ad delivery algorithms may be in violation of anti-discrimination laws.” As an added twist, the team ran the same set of ads on LinkedIn, but saw no evidence of systemic gender bias.
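The study’s core comparison is simple to state: run two otherwise-identical campaigns, record the gender breakdown of impressions Facebook reports for each, and test whether the difference in male share exceeds chance. Here is a minimal sketch of that arithmetic using a two-proportion z-test; the impression counts are invented placeholders, not figures from the paper, and the authors’ exact statistical methodology may differ.

```python
from math import sqrt
from statistics import NormalDist

def delivery_skew(m1: int, f1: int, m2: int, f2: int):
    """Compare the male share of impressions for two ad campaigns
    (say, a Domino's-style listing vs. an Instacart-style listing)."""
    n1, n2 = m1 + f1, m2 + f2
    p1, p2 = m1 / n1, m2 / n2
    pooled = (m1 + m2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test
    return p1, p2, z, p_value

# Invented impression counts by gender for two campaigns.
p1, p2, z, p = delivery_skew(m1=5200, f1=4800, m2=4300, f2=5700)
print(f"male share: {p1:.1%} vs {p2:.1%}  (z={z:.1f}, p={p:.2g})")
```

If the two jobs require the same qualifications and the advertiser asked only for maximum reach, a statistically significant gap of this kind is attributable to the delivery algorithm itself, which is the crux of the team’s argument.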

    Facebook spokesperson Tom Channick told The Intercept that “our system takes into account many signals to try and serve people ads they will be most interested in, but we understand the concerns raised in the report,” adding that “we’ve taken meaningful steps to address issues of discrimination in ads and have teams working on ads fairness today. We’re continuing to work closely with the civil rights community, regulators, and academics on these important matters.”

    Though the USC team was able to cleverly expose the biased results of Facebook ads, their methodology hits a brick wall when it comes to answering why exactly this happens. This is by design: Facebook’s ad delivery algorithm, like all the rest of the automated decision-making systems it employs across its billions of users, is a black-box algorithm, completely opaque to anyone other than those inside the company, workers who are bound by nondisclosure agreements and sworn to secrecy. One possible explanation for the team’s findings is that the ad delivery algorithm trains itself based on who has clicked on similar ads in the past — maybe men tend to click on Domino’s ads more than women. Korolova says “skew due to prior user behavior observed by Facebook is possible” but that “if despite the clear indication of the advertiser, we still observe skewed delivery due to historical click-through rates (as we do!), this outcome suggests that Facebook may be overruling advertiser desires for broad and diverse outreach in a way that is aligned with their own long-term business interests.”

    The post Research Says Facebook’s Ad Algorithm Perpetuates Gender Bias appeared first on The Intercept.

    This post was originally published on The Intercept.

A Black woman was passed over for a job at Facebook despite being exceptionally qualified for the position, according to a complaint filed with the U.S. Equal Employment Opportunity Commission (EEOC) and reviewed by The Intercept. She was interviewed only by white employees, was told by recruiters that she wouldn’t like the job, and heard that the company was looking for someone with a strong “culture fit.”

The job candidate joins three other people who have recently filed racism complaints against Facebook with the EEOC. The commission has opened a “systemic” investigation of the company to determine whether its policies contribute to discrimination, as Reuters reported earlier this month.

The complaint adds to the mounting evidence that large Silicon Valley companies are not diversifying their predominantly white and Asian workforces fast enough, especially in high-paying technical and managerial positions. Facebook’s most recent diversity report, published in July 2020, said that only 3.9 percent of its U.S. employees were Black, while 6.3 percent were Hispanic. Google disclosed that in 2020 its U.S. staff was 5.5 percent Black and 6.6 percent Latina/Latino and, like Facebook, has faced repeated accusations of racist practices, including, in the week before this story alone, that it fails to prioritize candidates from historically Black colleges and universities and that it advises employees who complain about racism to seek mental health treatment.

The woman, whose lawyers shared the complaint with The Intercept on the condition that she not be identified, alleges that in 2020 she was “subjected to Facebook’s pattern or practice of discrimination against Black applicants” over the course of a series of interviews for a managerial position. In the complaint, filed last December, she says that her prior experience and a doctorate directly related to the work made her especially well suited for the partnerships and program manager opening on Facebook’s Global Impact Partnerships team. She adds that her experience and education were mentioned in only one interview, with the hiring manager for the role, who allegedly told her: “You have a big brain, you wouldn’t like this job.”

“We believe it is essential to provide a respectful and safe work environment for all employees,” Facebook spokesperson Andy Stone told The Intercept. “We take any allegation of discrimination seriously and investigate every case.”

In the complaint, the candidate says she passed the initial screening and was invited to further interviews, culminating in a round of in-person meetings with white Facebook employees in San Francisco, California, during which she “felt that the interviewers were not prioritizing her interviews because they all seemed rushed, after making her wait for several hours.” She also notes in the complaint that she was not interviewed by a single Black person and that “the only Black Facebook employee whom [she] encountered during the entire hiring process was a receptionist.”

The candidate alleges that during one of the in-person interviews in California she was told: “There’s no doubt you can do the job, but we’re really looking for someone with a culture fit.” The term is a staple of tech-industry corporate culture, often described as hiring people you would want to socialize with or grab a beer with. The concept is frequently criticized, however, as a euphemism for racial or gender discrimination and a way for companies to sidestep accusations of bias in their hiring. Given the overwhelming racial homogeneity of American tech companies, and the widespread belief that they demand some kind of shared vision of the future as much as any technical skill, determining what the “culture” in question is, or how someone might “fit” into it, can be impossible if candidates don’t resemble a company’s founders or staff.

The rejected candidate’s complaint states that the company’s “general policy of discrimination against Black candidates” is built in part on “Facebook’s strong focus on ‘culture fit’ in hiring, without providing sufficiently objective guidance to managers and other employees on how to determine which candidates and employees will be a good ‘culture fit.’”

Three other recent EEOC complaints against Facebook make similar allegations that the company relied on culture fit and on the assessments of white and Asian employees to make hiring decisions. They were filed by Facebook employee Oscar Veneszee Jr. and by two other people who applied for and were rejected from jobs at the company. The four are represented by the law firms Gupta Wessler PLLC, Katz Marshall & Banks, and Mehri & Skalet.

Translation: Ricardo Romanoff

    The post Em entrevista de emprego, Facebook exigiu “fit cultural” de mulher negra com doutorado appeared first on The Intercept.

    This post was originally published on The Intercept.

  • The popular legal research and data brokerage firm LexisNexis signed a $16.8 million contract to sell information to U.S. Immigration and Customs Enforcement, according to documents shared with The Intercept. The deal is already drawing fire from critics and comes less than two years after the company downplayed its ties to ICE, claiming it was “not working with them to build data infrastructure to assist their efforts.”

    Though LexisNexis is perhaps best known for its role as a powerful scholarly and legal research tool, the company also caters to the immensely lucrative “risk” industry, providing, it says, 10,000 different data points on hundreds of millions of people to companies like financial institutions and insurance companies who want to, say, flag individuals with a history of fraud. LexisNexis Risk Solutions is also marketed to law enforcement agencies, offering “advanced analytics to generate quality investigative leads, produce actionable intelligence and drive informed decisions” — in other words, to find and arrest people.

    The LexisNexis ICE deal appears to be providing a replacement for CLEAR, a risk industry service operated by Thomson Reuters that has been crucial to ICE’s deportation efforts. In February, the Washington Post noted that the CLEAR contract was expiring and that it was “unclear whether the Biden administration will renew the deal or award a new contract.”

    LexisNexis’s February 25 ICE contract was shared with The Intercept by Mijente, a Latinx advocacy organization that has criticized links between ICE and tech companies it says are profiting from human rights abuses, including LexisNexis and Thomson Reuters. The contract shows LexisNexis will provide Homeland Security investigators access to billions of different records containing personal data aggregated from a wide array of public and private sources, including credit history, bankruptcy records, license plate images, and cellular subscriber information. The company will also provide analytical tools that can help police connect these vast stores of data to the right person.

    Though the contract is light on details, other ICE documents suggest how the LexisNexis database will be put to use. A notice posted before the contract was awarded asked for a database that could “assist the ICE mission of conducting criminal investigations” and come with “a robust analytical research tool for … in-depth exploration of persons of interest and vehicles,” including what it called a “License Plate Reader Subscription.”

LexisNexis Risk Solutions spokesperson Jennifer Richman declined to say exactly what categories of data the company would provide ICE under the new contract, or what policies, if any, will govern how the agency uses it, but said, “Our tool contains data primarily from public government records. The principal non-public data is authorized by Congress for such uses in the Drivers Privacy Protection Act and Gramm-Leach-Bliley Act statutes.”

    ICE did not return a request for comment.

    The listing indicated the database would be used by ICE’s Homeland Security Investigations agency. While HSI is tasked with investigating border-related criminal activities beyond immigration violations, the office frequently works to raid and arrest undocumented people alongside ICE’s deportation office, Enforcement and Removal Operations, or ERO. A 2019 report from the Brennan Center for Justice described HSI as having “quietly become the backbone of the White House’s immigration enforcement apparatus. Its operations increasingly focus on investigating civil immigration violations, facilitating deportations carried out by ERO, and conducting surveillance of First Amendment-protected expression.” In 2018, The Intercept reported on an HSI raid of a Tennessee meatpacking plant that left scores of undocumented workers detained and hundreds of local children too scared to attend school the following day.

    Department of Homeland Security budget documents show that ICE has used LexisNexis databases since at least 2016 through the National Criminal Analysis and Targeting Center, a division of ERO that assists in “locating aliens convicted of criminal offenses and other aliens who are amenable to removal,” including “those who are unlawfully present in the United States.”

    It’s exceedingly difficult to participate in modern society without generating computerized records of the sort that LexisNexis obtains and packages for resale. 

    It’s hard to wrap one’s head around the enormity of the dossiers LexisNexis creates about citizens and undocumented persons alike. While you can at least attempt to use countermeasures against surveillance technologies like facial recognition or phone tracking, it’s exceedingly difficult to participate in modern society without generating computerized records of the sort that LexisNexis obtains and packages for resale. The company’s databases offer an oceanic computerized view of a person’s existence; by consolidating records of where you’ve lived, where you’ve worked, what you’ve purchased, your debts, run-ins with the law, family members, driving history, and thousands of other types of breadcrumbs, even people particularly diligent about their privacy can be identified and tracked through this sort of digital mosaic. LexisNexis has gone even further than merely aggregating all this data: The company claims it holds 283 million distinct individual dossiers of 99.99% accuracy tied to “LexIDs,” unique identification codes that make pulling all the material collected about a person that much easier. For an undocumented immigrant in the United States, the hazard of such a database is clear.
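At bottom, the “LexID” described here is record linkage: disparate records are matched to one persistent identifier so that a single query returns everything tied to a person. As a toy illustration only (LexisNexis’s actual matching logic is proprietary and vastly more elaborate, and these records and field names are invented), consolidation keyed on a shared identifier can be sketched like this:

```python
from collections import defaultdict

# Invented records from different source systems, each already tagged
# with a persistent per-person identifier analogous to a LexID.
records = [
    {"pid": "A123", "source": "dmv",     "address": "12 Elm St"},
    {"pid": "A123", "source": "utility", "phone": "555-0100"},
    {"pid": "A123", "source": "courts",  "case": "bankruptcy, 2014"},
    {"pid": "B456", "source": "dmv",     "address": "9 Oak Ave"},
]

# One dossier per identifier: merge every record's fields together.
dossiers = defaultdict(dict)
for r in records:
    d = dossiers[r["pid"]]
    d.setdefault("sources", []).append(r["source"])
    d.update({k: v for k, v in r.items() if k not in ("pid", "source")})

print(dossiers["A123"])
# {'sources': ['dmv', 'utility', 'courts'], 'address': '12 Elm St',
#  'phone': '555-0100', 'case': 'bankruptcy, 2014'}
```

The power, and the hazard, is in the key: once the identifier exists, folding a new data source into the dossier is a one-line append rather than a research project.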

    For those seeking to surveil large populations, the scope of the data sold by LexisNexis and Thomson Reuters is equally clear and explains why both firms are listed as official data “partners” of Palantir, a software company whose catalog includes products designed to track down individuals by feasting on enormous datasets. This partnership lets law enforcement investigators ingest material from the companies’ databases directly into Palantir data-mining software, allowing agencies to more seamlessly spy on migrants or round them up for deportation. “I compare what they provide to the blood that flows through the circulation system,” explained City University of New York law professor and scholar of government data access systems Sarah Lamdan. “What would Palantir be able to do without these data flows? Nothing. Without all their data, the software is worthless.” Asked for specifics of the company’s relationship with Palantir, the LexisNexis spokesperson told The Intercept only that its parent company RELX was an early investor in Palantir and that “LexisNexis Risk Solutions does not have an operational relationship with Palantir.”

And yet compared with Palantir, which eagerly sells its powerful software to clients like ICE and the National Security Agency, Thomson Reuters and LexisNexis have managed to largely avoid an ugly public association with controversial government surveillance and immigration practices. They have protected their reputations in part by claiming that even though LexisNexis may contract with ICE, it’s not enabling the crackdowns and arrests that have made the agency infamous but actually helping ICE’s detainees defend their legal rights. In 2019, after hundreds of law professors, students, and librarians signed a petition calling for Thomson Reuters and LexisNexis to cease contracting with ICE, LexisNexis sent a mass email to law school faculty defending its record and seeming to deny that its service helps put people in jail. Describing this claim as “misinformation,” the LexisNexis email, which was shared with The Intercept, stated: “We are not providing jail-booking data to ICE and are not working with them to build data infrastructure to assist their efforts. … LexisNexis and RELX does not play a key ‘role in fueling the surveillance, imprisonment, and deportation of hundreds of thousands of migrants a year.’” (Emphasis in the original.) The email stated that “one of our competitors” was responsible for how “ICE supports its core data needs.” It went on to argue that, far from harming immigrants, LexisNexis is actually in the business of empowering them: Through its existing relationship with ICE, “detainees are provided access to an extensive electronic library of legal materials … that enable detainees to better understand their rights and prepare their immigration cases.”

    “Your state might be down to give you a driver’s license, but that information might get into the hands of a data broker.”

    The notion that LexisNexis is somehow more meaningfully in the business of keeping immigrants free rather than detained has little purchase with the company’s critics. Jacinta Gonzalez, field director of Mijente, told The Intercept that LexisNexis’s ICE contract serves the same purpose as CLEAR. Like CLEAR, LexisNexis provides an agency widely accused of systemic human rights abuses with the data it needs to locate people with little if any oversight, a system that’s at once invisible, difficult to comprehend, and nearly impossible to avoid. Even in locales where so-called sanctuary laws aim to protect undocumented immigrants, these vast privatized databases create a computerized climate of intense fear and paranoia for undocumented people, Gonzalez said. “You might be in a city where your local politician is trying to tell you, ‘Don’t worry, you’re welcome here,’ but then ICE can get your address from a data broker and go directly to your house and try to deport you,” Gonzalez explained. “Your state might be down to give you a driver’s license, but that information might get into the hands of a data broker. You might feel like you’re in a life or death situation and have to go to the hospital, but you’re concerned that if you can’t pay your bill a collection agency is going to share that information with ICE.”

    Richman, the LexisNexis spokesperson, told The Intercept that “the contract complies with the new policies set out in President Biden’s Executive Order 13993 of January 21, which revised Civil Immigration Enforcement Policies and Priorities and the corresponding DHS interim guidelines” and that “these policies, effective immediately, emphasize a respect for human rights, and focus on threats to national security, public safety, and security at the border.” But Gonzalez says it would be naive to think ICE is somehow a lesser menace to undocumented communities with Donald Trump out of power. “At the end of the day, ICE is still made up by the same agents, by the same field office directors, by the same administrators. … I think that it is really important for people to understand that, as long as ICE continues to have so many agents and so many resources, that they’re going to have to have someone to terrorize.”

    The post LexisNexis to Provide Giant Database of Personal Information to ICE appeared first on The Intercept.

  • A Black woman passed over for a job at Facebook told federal regulators that even though she was exceptionally qualified for the position, she was rushed through interviews with entirely white staffers, told she wouldn’t like the job, and advised that the company wanted a strong “culture fit,” according to a complaint to the U.S. Equal Employment Opportunity Commission provided to The Intercept.

    The woman joins three others who have recently complained to the EEOC about anti-Black racism at Facebook. The agency has begun conducting a “systemic” probe of Facebook, looking into whether the company’s own policies further discrimination, Reuters reported earlier this month.

    The complaint comes as evidence piles up that large Silicon Valley companies are not diversifying their predominantly white and Asian workforces quickly enough, particularly within high-paying technical and managerial roles. Facebook’s latest diversity report, from July, stated that only 3.9 percent of its U.S. employees were Black and 6.3 percent Hispanic. Google said that in 2020 its U.S. staff was 5.5 percent Black and 6.6 percent Latinx, and, like Facebook, has faced repeated accusations of racist practices, including, just in the last week, that it de-prioritizes applicants from historically Black colleges and universities and that it pushes people who complain about racism to seek mental health care.

    The woman, whose attorneys shared her complaint with The Intercept on the condition that she not be named, alleges she was “subjected to Facebook’s pattern or practice of discrimination against Black applicants” during a series of interviews for a managerial position at the company in 2020. In the complaint, filed in December, the applicant says her prior work experience and directly related doctoral degree made her particularly well-suited for the job: partnerships and program manager at Facebook’s Global Impact Partnerships. She adds that her experience and education were brought up only in an early interview with the position’s hiring manager, who she alleges told her, “You have a big brain, you wouldn’t like this job.”

    “We believe it is essential to provide all employees with a respectful and safe working environment,” Facebook spokesperson Andy Stone told The Intercept. “We take any allegations of discrimination seriously and investigate every case.”

    In the complaint, the applicant says she made it past the initial screening process and was granted further interviews, culminating in a round of in-person meetings with a group of all-white Facebook employees in San Francisco, during which she “sensed that the interviewers were not prioritizing her interviews because all of the interviews seemed rushed after making her wait for several hours.” The complaint notes that the applicant wasn’t interviewed by a single person of color and that the “only Black Facebook employee [she] encountered during the entire hiring process was a receptionist.”

    She further alleges that during one of the in-person interviews in California, she was told, “There’s no doubt you can do the job, but we’re really looking for a culture fit.” The term “culture fit” is common in corporate tech hiring, typically shorthand for hiring someone you’d want to hang out with socially or grab a beer with, but often criticized as little more than a euphemistic stand-in for racial or gender-based discrimination and a way for companies to deflect accusations of hiring bias. Given the overwhelming racial homogeneity of American tech companies and the pervasive belief that they require some sort of common vision of the future as much as any technical skill, determining what the “culture” in question even is or how one might “fit” into it can be impossible if an applicant doesn’t closely resemble a company’s founders or current staff.

    Indeed, the applicant’s complaint states that Facebook’s “general policy of discrimination against Black applicants” is built partly on “Facebook’s strong consideration of ‘culture fit’ in hiring, without providing sufficient objective guidance to managers and other employees on how to determine which applicants and employees will be a good ‘culture fit’ at Facebook.”

    The three other recent EEOC complaints about Facebook made similar allegations that Facebook relied on “culture fit” and evaluation by white and Asian employees to determine who made the cut. They were filed by Facebook employee Oscar Veneszee Jr. and by two applicants turned down for jobs. All four individuals are represented by attorneys from Gupta Wessler PLLC, Katz Marshall & Banks, and Mehri & Skalet.

    The post Facebook Told Black Applicant With Ph.D. She Needed to Show She Was a “Culture Fit” appeared first on The Intercept.

  • Photo: U.S. President Donald Trump uses his cellphone as he holds a roundtable discussion with governors in Washington, D.C., on June 18, 2020. (Saul Loeb/AFP/Getty Images)

    The swirling of the last dregs of the Trump administration around the drain has given some prominent Americans one last chance to prostrate themselves before the outgoing president. Facebook and Twitter’s decision to place the president in a temporary internet timeout following his incitement of a violent mob that trashed the U.S. Capitol is the perfect capstone to four years of appeasement and corporate cowardice.

    The advertising industry is generally acknowledged as one of the most risk-averse and craven industries on the planet, with decision-making guided largely by attempting to be as inoffensive as possible to as many people as possible, taking a position on an issue only in the weakest, safest, most carefully hedged terms available. Though companies like Facebook and Twitter hold the unfathomable power to control the distribution of information to billions of people around the world and like to think of themselves as helping bring humankind to some next level of consciousness, they are still very much in the advertising business.

    Cowardice runs deep in the souls of Twitter, Facebook, and Google, advertising companies that have spent the past four years looking the other way, equivocating, and contorting themselves into pretzels in an attempt to justify Trump’s unfettered access to the most powerful information distribution system in world history. Despite perennial speculation in the press as to what might psychologically or ideologically explain Mark Zuckerberg and Jack Dorsey’s total unwillingness to meaningfully act, there is just one factor: money. Twitter and Facebook are only worth anything as businesses if they can boast to advertisers of their access to an enormous swath of the American market, across political and ideological lines, and fear of a right-wing backlash has been enough to keep Peter Thiel on Facebook’s board and Trump’s voter suppression dispatches on Twitter’s servers.

    According to a Facebook moderator who spoke to The Intercept on the condition of anonymity for fear of employer retaliation, watching the company drag its feet, yesterday in particular, has been excruciating. According to internal communications reviewed by The Intercept, the Capitol break-in is now considered, for purposes of Facebook’s willy-nilly application of the rules, “a violating event,” and any “praise,” “support,” or even friendly “representation” is banned on the basis of the company’s “Dangerous Organizations” policies, which this moderator explained is typically applied to posts celebrating terrorist attacks, drug cartel murders, and Aryan street gangs. The policy update was relayed to moderators, this source said, around 4:30 p.m. in Washington, by which point the Capitol had already been violently occupied for hours and a woman shot dead. Just today, as the broken glass is being swept up in the Capitol, Facebook blasted out another moderator update, informing them that the company was “internally designating” the entire United States as a “temporary high risk location,” which adds heightened restrictions to posts inciting violence, backdated to yesterday and effective through the end of Thursday.

    Fearful of Trump even on his shameful way out, Facebook did the bare minimum when it was too late to mean much.

    As some Facebook observers have pointed out, had the company cared to look, it could have easily found that its platform was being used to plan an event it would eventually categorize alongside the Lockerbie bombing. Instead, fearful of Trump even on his shameful way out, Facebook did the bare minimum when it was too late to mean much. “Facebook treated this event correctly but Facebook is also complicit in this event,” the moderator said. “It’s all so blatantly obvious.”

    The president’s past half-decade of incitement against the perceived ethnic enemies of his base has been met with nothing more than risible warning labels and worthless “fact checks,” as have his more recent efforts to dupe his already deeply confused supporters about the outcome of the 2020 presidential election. There’s no reason to believe these barely-there penalties did anything at all to chasten Trump or deter his message; their utility existed only for the companies themselves, which could no longer be accused of doing literally nothing. Just as Facebook put off acknowledging its role in the genocide in Myanmar until it was too late to matter, and just as the company built an election interference “war room” and quickly disbanded it after some photo ops, the recent decisions to mildly inconvenience the world’s most powerful living person when he has 13 days left in power are the perfect distillation of Big Tech’s attempt to pantomime principles, halfheartedly pointing to the void where a conscience would be.

    “Slightly more than literally nothing” has been the unifying theme of Big Tech’s response to years of public concern that Trump would eventually use the platforms to get people killed, and yesterday, as his most rabid supporters puttered around the Capitol aimlessly pushing over chairs and reading House Speaker Nancy Pelosi’s mail, represented the appeasement strategy’s ultimate failure: Four people are dead following a mob that Trump incited and directed. Hours after it would have made any difference, Facebook and Twitter, his two favorite platforms, did what they were previously unwilling to do: risk upsetting the president by temporarily restricting his ability to broadcast.

    In a stirring gesture of corporate bravery, Twitter put Trump in the penalty box for 12 whole hours, suggesting that if perhaps eight people had been killed in the Capitol melee, or if he’d encouraged the mob to brawl its way into a second federal landmark, he might have gotten a whole day’s suspension. Facebook, also true to form, has banned Trump from posting “indefinitely,” a word that means absolutely nothing and will give the company the freedom to change its mind at any point in the future, in accordance with the shifting tides of governmental power and public opinion.

  • Court documents feature internal Facebook communications in which managers appear to admit to major flaws in the company’s ad targeting capabilities, including that ads reached the intended audience less than half of the time they were shown and that data behind a targeting criterion was “all crap.” Facebook says the material is presented out of context.

    “More than half the time we’re showing ads to someone other than the advertisers’ intended audience.”

    They emerged from a suit currently seeking class-action certification in federal court. The suit was filed by the owner of Investor Village, a small business that operates a message board on financial topics. Investor Village said in court filings that it decided to buy narrowly targeted Facebook ads because it hoped to reach “highly compensated and educated investors” but “had limited resources to spend on advertising.” But nearly 40 percent of the people who saw Investor Village’s ad lacked a college degree, did not make $250,000 per year, or both, the company claims. In fact, not a single Facebook user it surveyed met all the targeting criteria it had set for Facebook ads, it says.
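
    The arithmetic behind such a claim is simple to state: targeting precision is the share of people who actually saw an ad who match every criterion the advertiser paid for. Here is a minimal, hypothetical sketch of that calculation; the surveyed viewers and numbers below are invented for illustration and are not the plaintiff’s actual data.

    # Hypothetical sketch of the audience-precision arithmetic at issue in
    # the suit. The surveyed viewers below are invented for illustration.
    surveyed_viewers = [
        {"college_degree": True,  "income": 300_000},
        {"college_degree": False, "income": 260_000},
        {"college_degree": True,  "income": 120_000},
        {"college_degree": False, "income": 80_000},
        {"college_degree": True,  "income": 275_000},
    ]

    def meets_criteria(viewer):
        # The targeting Investor Village says it paid for: a college degree
        # AND $250,000+ in income.
        return viewer["college_degree"] and viewer["income"] >= 250_000

    matched = sum(meets_criteria(v) for v in surveyed_viewers)
    precision = matched / len(surveyed_viewers)
    print(f"Targeting precision: {precision:.0%}")  # 40% with this invented sample

    By this measure, the 41 percent “interest precision” reported in the internal memo quoted in the next paragraph would mean most impressions went to people outside the intended audience.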

    The complaint features Facebook documents indicating that the company knew its advertising capabilities were overhyped and underperformed. A “February 2016 internal memorandum” sent from an unnamed Facebook manager to Andrew Bosworth, a Zuckerberg confidant and powerful company executive who oversaw ad efforts at the time, reads, “[I]nterest precision in the US is only 41%—that means that more than half the time we’re showing ads to someone other than the advertisers’ intended audience. And it is even worse internationally. … We don’t feel we’re meeting advertisers’ interest accuracy expectations today.” The lawsuit goes on to quote unnamed “employees on Facebook’s ad team” discussing their targeting capabilities circa June 2016:

    One engineer celebrated that detailed targeting accounted for “18% of total ads revenue,” and $14.8 million on June 17th alone. Using a smiley emoticon, an engineering manager responded, “Love this chart! Although if the most popular option is to combine interest and behavior, and we know for a fact our behavior is almost all crap, does this mean we are misleading advertiser [sic] a bit? :)” That manager proceeded to suggest further examination of top targeting criteria to “see if we are giving advertiser [sic] false hope.”

    “Interest” and “behavior” are two key facets of the data dossiers Facebook compiles on us for advertisers; according to the company, the former includes things you like, “from organic food to action movies,” while the latter consists of “behaviors such as prior purchases and device usage.”

    The complaint also cites unspecified internal communications in which “[p]rivately, Facebook managers described important targeting data as ‘crap’ and admitted accuracy was ‘abysmal.’”

    Facebook has said in its court filings that these quotes are presented out of context. The company attempted to suppress the internal documents, obtained by the plaintiff through the legal discovery process, on the grounds that they were “confidential” and could be harmful if competitors were to read them — an argument rejected by the court, which in November ordered the filings unsealed with minor redactions. The social network argued further, in its rejected motion to dismiss the suit, that it has never guaranteed complete accuracy in its targeting, and that any claims of sophisticated targeting the plaintiff cited in its decision to buy Facebook ads were “generalized, promotional statements about Facebook’s advertising on which a reasonable consumer could not rely as guaranteeing a specific accuracy rate.”

    “Facebook’s argument that its ad-targeting regime is good for small businesses is not only self interested — it is plain wrong.”

    The lawsuit comes at an awkward time for Facebook, which has recently taken out full-page ads in several national newspapers claiming new iOS privacy safeguards will strangle American small businesses, which are already struggling with the economic cataclysm of the Covid-19 pandemic. Facebook calls this anti-Apple effort its “Speak Up for Small” campaign. The Investor Village lawsuit suggests that, far from being a pandemic panacea, Facebook’s ballyhooed ad targeting has actually wasted the time and money of small advertisers it now says it’s championing. “Facebook is no friend to small business,” said Steven Molo, an attorney representing the plaintiff. “As detailed in the allegations of our class action suit on behalf of advertisers, Facebook substantially misrepresented its ability to deliver ads accurately, to the dismay of its own employees.”

    Though Facebook would like you to think it’s not, its new spat with Apple is dead simple. Historically, companies like Facebook have been able to monitor the way you use your iPhone (or iPad), learning the details of your life on a vast scale in order to — as the pitch went to advertisers, at least — show you specific ads that reflect your private hopes, desires, friendships, movements, and so on. But starting in early 2021, Apple says, this persistent surveillance will no longer be turned on by default; rather, iPhone owners will have to explicitly opt in to being stalked by their apps, cutting off this firehose of deeply personal data. This is a major shift in an industry where surveillance is a given, rather than an option.
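
    To see what that default flip means in practice, here is a minimal, hypothetical sketch of the two regimes. It is illustrative only; Apple’s actual mechanism is a system-level permission prompt, not logic an advertiser controls.

    # Hypothetical sketch of the policy change described above: tracking
    # moves from on-by-default to off unless the user explicitly opts in.
    def may_track(user_opted_in: bool, opt_in_required: bool) -> bool:
        if opt_in_required:
            return user_opted_in  # the new regime: silence means no
        return True               # the old regime: tracking by default

    print(may_track(user_opted_in=False, opt_in_required=False))  # True  (before)
    print(may_track(user_opted_in=False, opt_in_required=True))   # False (after)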

    According to Dipayan Ghosh — a former Facebook executive and current co-director of the Digital Platforms and Democracy Project at Harvard Kennedy School’s Shorenstein Center on Media, Politics, and Public Policy — both Facebook’s anti-privacy PR blitz and this new lawsuit ironically lead to a similar conclusion: Facebook isn’t anyone’s friend but Facebook’s. “Facebook’s argument that its ad-targeting regime is good for small businesses is not only self interested — it is plain wrong,” Ghosh told The Intercept in an email. “Further, the recent [Investor Village] complaint indicates clearly that, if anything, Facebook advertising is not as effective for its advertising clients as it could be. The lack of transparency coupled with the perceived low bang-for-the-buck of advertising on Facebook has troubled marketers for many years — and it appears as though those perceptions may well be true.”

    Facebook could not be immediately reached for comment.
