Category: Technology

  • Rep. Pete Sessions, R-Texas, serves a primarily rural district anchored in Waco, a city of 150,000. It’s unclear why he is so interested in NSO Group, the infamous Israeli spyware firm that was blacklisted by the U.S. for its role in human rights abuses.

    Between February and July, though, Sessions and his team met eight times with lobbyists on behalf of NSO.

    One meeting was held for a “briefing on Bureau of Industry and Security Status” — the Department of Commerce office that blacklisted NSO in November 2021. Others were for “discussions of news articles reporting on NSO technology and the war in Gaza” and “NSO VISA issue,” according to documents filed at the end of August under the Foreign Agents Registration Act, or FARA.

    In July, on the same day that the lobbyists from the law firm Pillsbury Winthrop Shaw Pittman, a D.C. powerhouse, met with Sessions to “discuss NSO technology and human rights policies,” one of its lawyers, Greg Laughlin, a former House member from Texas, paid $1,000 by check to “Pete Sessions for Congress.” At other times, Laughlin — who is actively registered as a lobbyist for NSO, according to current filings — donated to the campaigns of other Texas Republicans whom NSO met with, including Rep. Dan Crenshaw.

    “This absolutely looks like part of NSO Group lobbyists’ ongoing efforts to reverse the firm’s blacklisting.”

    Against the backdrop of Israel’s war efforts and the looming possibility of a Trump administration, NSO is doubling down on its efforts to connect with members of Congress — almost exclusively with Republicans — as it makes a bid to reverse its blacklisting. NSO’s Pegasus spyware can infect and infiltrate cellphones and has been used by authoritarian governments to hack the devices of dissidents, journalists, and human rights activists, enabling grave abuses.

    The latest blitz in its yearslong campaign for delisting kicked off at the onset of the Israeli war in Gaza last year, when NSO tried to persuade Secretary of State Antony Blinken that its technology was of use to the American government.

    “It is, unfortunately, not uncommon for FARA registrants to make campaign contributions to the members of Congress they’re contacting on behalf of foreign interests,” Ben Freeman, director of the Democratizing Foreign Policy program at the Quincy Institute, told The Intercept. “And, even if that contribution occurs on the exact same day the meeting takes place, it’s perfectly legal.”

    Freeman said NSO’s focus on Republicans might arise from Democrats’ growing disillusionment with Israel.

    “This absolutely looks like part of NSO Group lobbyists’ ongoing efforts to reverse the firm’s blacklisting,” he said. “Frankly, they’re going to find Republicans an easier target than Democrats for putting pressure on the Commerce Department to delist NSO Group, with many Democrats souring on Israel because of the thousands of civilians they’ve killed in the Gaza war.”

    Related

    Israeli Spyware Firm NSO Demands “Urgent” Meeting With Blinken Amid Gaza War Lobbying Effort

    In a statement to The Intercept, Sessions spokesperson Matt Myams said, “Former Congressman Greg Laughlin has known and politically supported Congressman Sessions for many years. During that time, they have discussed a wide variety of topics, including general questions about how certain immigration laws work.” (NSO Group declined to comment.)

    So far this year, NSO has spent over $1.8 million on lobbying, according to FARA documents. Alongside Pillsbury, the D.C.-based Chartwell Strategy Group and the Los Angeles lobbying firm Paul Hastings have focused their efforts on connecting predominantly with Republican lawmakers on behalf of NSO.

    While NSO continues trying to rebuild its reputation in the U.S., others have thrown in the towel. Candiru, another Israeli spyware company that was blacklisted along with NSO, lost its U.S. contracts and terminated its Washington lobbying contract with ArentFox Schiff earlier this year.

    NSO, in contrast, continued its effort by using Israel’s war to boost its chances. The company marketed itself as a volunteer in the war on Gaza, claiming to help track down missing Israelis and hostages. The bid to persuade the American government to let it come back to the table has been called an attempt to “crisis-wash” NSO’s record.

    NSO in Court

    Even as NSO makes the case in Washington that it complies with U.S. requirements for discouraging rights abuses committed with its software, the company is separately accused in California of flagrantly defying a federal court order.

    Earlier this month, WhatsApp and its parent company Meta asked the judge in their case against NSO to award them a total win as punishment for NSO’s violations of discovery requirements. The spyware company has refused to produce internal email communications and the source code of its technology.

    In NSO’s response, filed on October 16, the firm said there was “no basis for any sanction, let alone terminating sanctions, because Defendants have not violated any order.” WhatsApp’s termination request, the filing said, was “ludicrous.” NSO said WhatsApp’s case was “the first of five ill-conceived lawsuits filed against Defendants in the United States amidst a wave of negative press coverage.”

    Related

    In Video From Gaza, Former CEO of Pegasus Spyware Firm Announces Millions for New Venture

    In its five-year-long case, WhatsApp this month made sweeping allegations about NSO’s refusal to produce internal email communications and Pegasus source code.

    “NSO’s discovery violations were willful, and unfairly skew the record on virtually every key issue in the case, from the merits, to jurisdiction, to damages, making a full and fair trial on the facts impossible,” the company said in the filing.

    Last year, NSO asked the court for a protective order to insulate it from the discovery process under Israeli law; the request was denied. At a hearing this February, the court said it “would not feel reluctant to impose sanctions” if NSO failed to meet its discovery obligations.

    “NSO Group has made a lot of arguments to resist discovery and kind of draw out the early stages of litigation in these cases as much as possible,” said Stephanie Krent, an attorney at Columbia University’s Knight First Amendment Institute.

    The next hearing in the WhatsApp case will take place on November 7.

    “We remain focused on protecting our users,” a WhatsApp spokesperson told The Intercept. “We firmly believe NSO’s operations violate U.S. law and they must be held accountable for their unlawful attacks.”

    Meanwhile, in addition to the case by WhatsApp, NSO is facing other hefty accusations in U.S. litigation.

    Last month, Apple asked a court in San Francisco to dismiss its three-year hacking suit against NSO. The California tech giant said its case was no longer viable after Israeli government officials took files from NSO’s headquarters in an apparent attempt to frustrate lawsuits in the U.S. Apple argued it may now never be able to get the most critical files about Pegasus spyware.

    U.K. Case

    As NSO continues to face problems in the U.S., the High Court in London ruled this month that a case against Saudi Arabia for its use of Pegasus can move forward, according to documents obtained by The Intercept. (The Saudi government did not respond to a request for comment.)

    The ruling came after four human rights defenders who were hacked with Pegasus on British soil submitted a report to the Metropolitan Police last month asking them to open an investigation and prosecute the company.

    Yahya Assiri, a Saudi human rights activist who has been granted refugee status in the U.K., had previously lodged a civil claim against Saudi Arabia. British courts must first approve serving such claims on foreign governments, a procedural check meant to protect diplomatic relations.

    “Violators’ impunity is the main power for repression to continue.”

    With this month’s ruling, Assiri has overcome that hurdle. His lawyers can now serve his claim against Saudi Arabia for using NSO’s Pegasus and another spyware product by Quadream — also an Israeli company founded by NSO veterans — to hack his phone multiple times between 2018 and 2020 while he was living in the U.K.

    The claim will be sent through diplomatic channels to the Saudi Ministry of Foreign Affairs.

    “Violators’ impunity is the main power for repression to continue. Accountability — bringing them before courts, the media, and the world — can at least partially deter them,” Assiri told The Intercept. “The evidence is strong, and the authorities’ request for immunity has been rejected. This means that the world must stop looking for justifications for violations — there is no justification for any violation.”

    The post Pegasus Spyware Maker Said to Flout Federal Court as It Lobbies to Get Off U.S. Blacklist appeared first on The Intercept.

    This post was originally published on The Intercept.

  • The United States’ secretive Special Operations Command is looking for companies to help create deepfake internet users so convincing that neither humans nor computers will be able to detect they are fake, according to a procurement document reviewed by The Intercept.

    The plan, mentioned in a new 76-page wish list by the Department of Defense’s Joint Special Operations Command, or JSOC, outlines advanced technologies desired for the country’s most elite, clandestine military efforts. “Special Operations Forces (SOF) are interested in technologies that can generate convincing online personas for use on social media platforms, social networking sites, and other online content,” the entry reads.

    The document specifies that JSOC wants the ability to create online user profiles that “appear to be a unique individual that is recognizable as human but does not exist in the real world,” with each featuring “multiple expressions” and “Government Identification quality photos.”

    In addition to still images of faked people, the document notes that “the solution should include facial & background imagery, facial & background video, and audio layers,” and JSOC hopes to be able to generate “selfie video” from these fabricated humans. These videos will feature more than fake people: Each deepfake selfie will come with a matching faked background, “to create a virtual environment undetectable by social media algorithms.”

    The Pentagon has already been caught using phony social media users to further its interests in recent years. In 2022, Meta and Twitter removed a propaganda network using faked accounts operated by U.S. Central Command, including some with profile pictures generated with methods similar to those outlined by JSOC. A 2024 Reuters investigation revealed a Special Operations Command campaign using fake social media users aimed at undermining foreign confidence in China’s Covid vaccine.

    Last year, Special Operations Command, or SOCOM, expressed interest in using video “deepfakes,” a general term for synthesized audiovisual data meant to be indistinguishable from a genuine recording, for “influence operations, digital deception, communication disruption, and disinformation campaigns.” Such imagery is generated using a variety of machine learning techniques, generally using software that has been “trained” to recognize and recreate human features by analyzing a massive database of faces and bodies. This year’s SOCOM wish list specifies an interest in software similar to StyleGAN, a tool released by Nvidia in 2019 that powered the globally popular website “This Person Does Not Exist.” Within a year of StyleGAN’s launch, Facebook said it had taken down a network of accounts that used the technology to create false profile pictures. Since then, academic and private sector researchers have been engaged in a race between new ways to create undetectable deepfakes, and new ways to detect them. Many government services now require so-called liveness detection to thwart deepfaked identity photos, asking human applicants to upload a selfie video to demonstrate they are a real person — an obstacle that SOCOM may be interested in thwarting.

    Related

    U.S. Special Forces Want to Use Deepfakes for Psy-Ops

    The listing notes that special operations troops “will use this capability to gather information from public online forums,” with no further explanation of how these artificial internet users will be used.

    This more detailed procurement listing shows that the United States pursues the exact same technologies and techniques it condemns in the hands of geopolitical foes. National security officials have long described the state-backed use of deepfakes as an urgent threat — that is, if they are being done by another country.

    Last September, a joint statement by the NSA, FBI, and CISA warned that “synthetic media, such as deepfakes, present a growing challenge for all users of modern technology and communications.” It described the global proliferation of deepfake technology as a “top risk” for 2023. In a background briefing to reporters this year, U.S. intelligence officials cautioned that the ability of foreign adversaries to disseminate “AI-generated content” without being detected — exactly the capability the Pentagon now seeks — represents a “malign influence accelerant” from the likes of Russia, China, and Iran. Earlier this year, the Pentagon’s Defense Innovation Unit sought private sector help in combating deepfakes with an air of alarm: “This technology is increasingly common and credible, posing a significant threat to the Department of Defense, especially as U.S. adversaries use deepfakes for deception, fraud, disinformation, and other malicious activities.” An April paper by the U.S. Army’s Strategic Studies Institute was similarly concerned: “Experts expect the malicious use of AI, including the creation of deepfake videos to sow disinformation to polarize societies and deepen grievances, to grow over the next decade.”

    “There are no legitimate use cases besides deception.”

    The offensive use of this technology by the U.S. would, naturally, spur its proliferation and normalize it as a tool for all governments. “What’s notable about this technology is that it is purely of a deceptive nature,” said Heidy Khlaaf, chief AI scientist at the AI Now Institute. “There are no legitimate use cases besides deception, and it is concerning to see the U.S. military lean into a use of a technology they have themselves warned against. This will only embolden other militaries or adversaries to do the same, leading to a society where it is increasingly difficult to ascertain truth from fiction and muddling the geopolitical sphere.” 

    Both Russia and China have been caught using deepfaked video and user avatars in their online propaganda efforts, prompting the State Department to announce an international “Framework to Counter Foreign State Information Manipulation” in January. “Foreign information manipulation and interference is a national security threat to the United States as well as to its allies and partners,” a State Department press release said. “Authoritarian governments use information manipulation to shred the fabric of free and democratic societies.”

    SOCOM’s interest in deepfakes is part of a fundamental tension within the U.S. government, said Daniel Byman, a professor of security studies at Georgetown University and a member of the State Department’s International Security Advisory Board. “Much of the U.S. government has a strong interest in the public believing that the government consistently puts out truthful (to the best of knowledge) information and is not deliberately deceiving people,” he explained, while other branches are tasked with deception. “So there is a legitimate concern that the U.S. will be seen as hypocritical,” Byman added. “I’m also concerned about the impact on domestic trust in government — will segments of the U.S. people, in general, become more suspicious of information from the government?”

    The post The Pentagon Wants to Use AI to Create Deepfake Internet Users appeared first on The Intercept.

    This post was originally published on The Intercept.

  • ClearVue Technologies has raised another $7.5 million to fuel its global export ambitions for technology that can transform windows and walls into solar energy producers. The ASX-listed company announced on Thursday that it had received “firm commitments from institutional, professional and sophisticated investors”, many of which are based in Hong Kong. The funds raised through…

    The post ClearVue raises $7.5m for solar glass expansion appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • There’s zero chance that a Republican will win this year’s Senate election in Massachusetts. That hasn’t stopped the crypto industry from lighting money on fire in a doomed bid to defeat Sen. Elizabeth Warren.

    Cryptocurrency figures have given the majority of the donations or loans to the effort to replace Warren, an industry scourge, with Republican John Deaton, campaign finance reports filed Tuesday show.

    Crypto industry big shots have given more than three times as much to Deaton’s campaign, or a super PAC supporting him, as small-dollar donors have. Their combined $3.6 million effort pales in comparison to the $19 million Warren has raised, but it shows how much her criticisms have enraged the crypto world.

    In their first debate Tuesday, Warren took a swipe at Deaton’s close ties to the industry.

    “His crypto buddies are going to want a return on their investment. He’s going to fight for crypto.”

    “I fight for everybody, fight for working people,” Warren said. “If John Deaton has a chance to go to Washington, his crypto buddies are going to want a return on their investment. He’s going to fight for crypto.”

    Warren trounced her Republican opponent by a 24-point margin six years ago and is gliding toward a similar result this time, according to recent polls.

    Still, cryptocurrency industry leaders have contributed heavily to the push to elect Deaton, an attorney who has sued the Securities and Exchange Commission on behalf of the industry. Crypto trade publications dubbed the contest “the first Bitcoin election.”

    When it comes to Deaton’s own campaign organization, he has loaned or donated more to it himself than he has raised from individual contributions.

    A pro-Deaton super PAC called the Commonwealth Unity Fund, which is bankrolled by the industry, has raised and spent more money than Deaton’s campaign.

    The San Francisco-based crypto company Ripple Labs gave the PAC $1 million in May. Bitcoin-boosting twins Tyler and Cameron Winklevoss chipped in $500,000 each in July. Days later, the co-founder of the Tether stablecoin, Philip Potter, contributed nearly $500,000 more. The PAC’s Tuesday filing shows that its only more recent receipt was from a crypto industry lawyer.

    A different crypto super PAC, Fairshake, has raised more than $160 million while spending its money on races expected to be far closer, such as the Republican bid to unseat Sen. Sherrod Brown, D-Ohio. The massive spending campaign is aimed at convincing Congress to relax regulations on crypto.

    During the Tuesday debate, Deaton painted the crypto industry as a way for ordinary Americans to escape bank fees — something Warren helped to do by spurring the creation of the Consumer Financial Protection Bureau.

    “I can’t help it,” he said, of the industry spending on his race. “When she goes to ban an entire industry, that they are motivated against her because they see a viable candidate.”

    The post Elizabeth Warren’s Crypto Haters Are Burning Cash in Her Senate Race appeared first on The Intercept.

    This post was originally published on The Intercept.

  • The Committee to Protect Journalists (CPJ) joined eight human rights and digital rights organizations on October 15 to provide comments to the U.S. Commerce Department in response to its proposed rules to strengthen surveillance technology export regulations.

    The joint comments assess and offer recommendations for the Commerce Department to help curb the proliferation of such surveillance technologies.

    The comments also note the U.S. government’s use of export controls to protect human rights, including through the Joint Statement on Efforts to Counter the Proliferation and Misuse of Commercial Spyware and the Export Controls and Human Rights Initiative.

    While these actions are welcome, the United States and other governments around the world must do more to curb the abuse of surveillance technologies.

    CPJ has repeatedly documented the use of surveillance technology, including spyware, to undermine press freedom and journalist safety around the world.

    Read the joint comments here.

    This content originally appeared on Committee to Protect Journalists and was authored by CPJ Staff.

    This post was originally published on Radio Free.

    When I wrote about cyberhate in my book Troll Hunting (2019), I wanted people to understand two important points. First, that trolls are rarely those stereotypical lonely guys, spitting out vitriol alone in their mothers’ basements. They’re more likely to be white-collar, professional types, working strategically in groups to silence or harm their victims, or drive them to self-harm. Second, that cyberhate doesn’t exist in some online bubble. Often it spills over into the “real world”, resulting in stalking, physical harm, and even terrorism. These insights were vital to convince governments and authorities that trolling has “real world” consequences and has to be taken seriously and properly regulated.

    Last month, when I attended a discussion between Australia’s Van Badham and Nina Jankowicz, an American disinformation expert, I was intrigued to learn that those who spread disinformation on the internet work similarly to trolls. The conversation was expertly hosted by the politician, lawyer, and author Andrew Leigh as part of the Australian National University’s Meet the Author series.

    Both Nina and Van agreed it is imperative governments, authorities and the general public understand what is happening because this online “info war” is nothing less than an ideological war against democracy, undertaken by groups and with real world consequences.

    Nina and Van’s discussion focused on the rise and impact of conspiracy theories and disinformation. Nina Jankowicz, author of How to Lose the Information War and How to Be a Woman Online, has worked as an adviser on disinformation for both the Ukrainian and American governments. Van Badham is a well-known Australian activist and writer whose book, QAnon and On exposed the conspiracy theories spread by a group which convinced thousands, possibly millions of people, that our governments have been compromised by a global cabal of paedophiles.

    Van explained that conspiracy theories are the tools used to build communities and mobilize people, both online and in real life.

    Van took a moment to explain the difference between “misinformation” and “disinformation”. Misinformation involves untruths spread by those who genuinely believe the veracity of what they’re posting – repeated without malign intent. Conversely, the aim of disinformation campaigns is to mobilize people toward believing things that are not true and acting on claims that are not true.

    Nina made it clear that disinformation campaigns are being waged with the clear intent to exploit fissures in society as a means of destabilizing democratically elected governments. Van added that what may appear to be “grassroots” movements are actually communities being assembled, “stoked, encouraged and provoked by organized pro-disinformation operations” aligned with the interests of authoritarian governments.

    In Australia last year, both speakers were horrified to see the disinformation campaign built around The Voice referendum. Watching the public debate, Van saw precise targeting by sponsored groups like Advance Australia around a “No” case “absolutely saturated with disinformation.”

    Van explained that the aim of the Voice disinformation campaign was to create uncertainty and confusion – noise – so that Australians would feel less confident about voting “Yes”. Those with a vested interest in derailing the Indigenous Voice to Parliament used a strategy famously described by Trumpist, Steve Bannon, as “flooding the zone with shit.”

    Watching this all play out, Van thought to herself, “Oh my God! It’s here. It’s come to Australia!”

    Now, she is seeing the same strategy being used in the debate about nuclear power stations in Australia.

    Both Nina and Van agreed that artificial intelligence technology is increasingly being used to build sophisticated disinformation campaigns designed to mislead, confuse and agitate the public. The rise of AI has “turbo-charged” disinformation campaigns. For example, Nina said that tools like Chat-GPT have made it easier for Russian disinformation to appear as if it’s written by native English speakers.

    Andrew Leigh, left, Nina Jankowicz, centre, and Van Badham, right, speaking in Canberra about disinformation. Picture: Ginger Gorman

    In this country, Van has been tracking the debate over nuclear power stations and discovered “quite discernible patterns of AI generated content that is targeting susceptible groups within the electorate to soften them on the issue of nuclear messaging.”

    Importantly, a more permissive social media environment, particularly on X (formerly Twitter) under the leadership of Elon Musk, has made it easier for fake personas and disinformation to proliferate.

    Democracies rely on public debate – it’s the way we decide what policies will most benefit our families, and society as a whole. This influences the way we vote. It’s perfectly reasonable for people to hold different views. But, when the well of information from which those views are formed is purposefully poisoned by foreign interests, the result is the kind of culture wars we now see driving a massive wedge in American society. Into this wedge step charismatic, authoritarian leaders who serve particular vested interests with voting blocs they have built through online disinformation campaigns.

    Van explained that one of the reasons she and Nina were touring the country was to raise consciousness about the “clear and present” dangers of disinformation to Australian democracy.

    Van warned we are all vulnerable to disinformation. She said, “I’ve been lured into disinformation. It’s not something to be ashamed of.” It’s easy to be manipulated, especially when Australia’s online environment is largely unregulated.

    She said, “I had the horror of my life seeing someone who I would have formerly considered a friend, sharing material that I knew was being produced by a Russian disinformation account.”

    Both Nina and Van acknowledged that speaking out against these bad actors is likely to result in a torrent of online abuse that may well spill into the real world. The aim is to frighten and silence opponents.

    Despite death threats, both have persisted, but they warn women, in particular, to learn and practice cyber-security measures and to step away from the computer or phone for a while if what’s happening online is affecting your mental health.

    Regulation of fake accounts and disinformation by platforms such as X and Facebook is desirable, but there is considerable pushback because dissent and chaos drive “clicks”, and “clicks” drive profits. Raising consciousness about disinformation campaigns amongst friends and family is something we can all do to combat this assault on our democracy.

    Working to heal the fissures – the open wounds which leave our societies vulnerable to attack – is another priority. Fact-check before you share information online. And all of us can exercise our democratic rights by contacting our local MP to demand they take the spread of disinformation seriously and pass legislation to control it. Recommend, perhaps, that they read Van Badham’s and Nina Jankowicz’s books – or send them a copy.

    • Picture at top: Australia’s Van Badham and Nina Jankowicz speaking together at the ‘Something Digital’ conference in Brisbane. Picture: Supplied

    The post The gutsy women fighting global disinformation online appeared first on BroadAgenda.

    This post was originally published on BroadAgenda.

  • After two elections where he bucked Ohio’s rightward trend, Democratic Sen. Sherrod Brown is clinging to the narrowest of polling leads. If he loses to unpopular car salesman Bernie Moreno next month, he might have crypto to blame.

    Cryptocurrency companies are pouring tens of millions of dollars into the race through a super PAC in response to Brown’s scathing criticism of the industry as Senate Banking Committee chair.

    Their leading role in the race shows how much money — no matter the “coin” — talks. 

    Crypto sat in the political doghouse after the collapse of Sam Bankman-Fried’s FTX fraud two years ago, but it drew broad bipartisan support for its top legislative priority this May as it showered money on congressional races.

    “Really their only avenue here to continue their scams is to get enough politicians to change the law.”

    If crypto can take down Brown next month, critics warn, it could lead to more success for an agenda that includes neutering the Securities and Exchange Commission and opening the door for more traditional banks to hold crypto.

    “They’re losing in the courts, they’re losing in the court of public opinion, so really their only avenue here to continue their scams is to get enough politicians to change the law,” said Dennis Kelleher, the CEO of financial reform nonprofit Better Markets. “The key to that is taking out anybody who opposes them.”

    Mad Money

    Operating through a cluster of blandly named super PACs, the crypto industry had made nearly half of all corporate donations in this year’s elections as of August. A single pro-crypto super PAC, Fairshake, has raised more than $200 million and spent more than $132 million this cycle.

    Fairshake and its affiliates have spent millions backing Democratic Senate candidates Reps. Ruben Gallego in Arizona and Elissa Slotkin in Michigan, along with House candidates on both sides of the aisle.

    Nowhere has crypto’s influence been more obvious than Ohio. In the last election cycle, a super PAC bankrolled by Bankman-Fried backed now-Rep. Shontel Brown, D-Ohio, over progressive Nina Turner. 

    This year, crypto is coming even harder into the state. A Fairshake affiliate, Defend American Jobs, has spent more than $38 million on ads boosting Moreno and blasting Brown, according to a recent Washington Post analysis.

    A Fairshake spokesperson did not return a request for comment, but the reasons for the attack ads are clear enough. Well before Bankman-Fried’s downfall, Brown was a vocal critic of cryptocurrency.

    “Stablecoins and crypto markets aren’t actually an alternative to our banking system,” he said in December 2021. “They’re a mirror of the same broken system — with even less accountability, and no rules at all.”

    The super PAC’s spending on a race that could hand control of the Senate to the GOP has made some Democratic industry leaders uncomfortable. A spokesperson for one of the PAC’s top donors, the crypto exchange Coinbase, said the PAC’s spending decisions are made independently, a claim echoed by Andreessen Horowitz, a venture capital firm that has invested billions in the crypto industry.

    Coinbase CEO Brian Armstrong said in a blog post that the company was making its donations in an effort to get “regulatory clarity.”

    In June, Armstrong said, “Crypto voters won’t be taken seriously until we send a clear message to political candidates that it is bad politics to be anti-crypto.”

    Taming the SEC

    Yet it isn’t just “clarity” that Armstrong and other industry players want. They also want specific legislation. “Getting the wrong kind of regulation is worse than none at all,” Armstrong said last month.

    Top of the list is legislation called the Financial Innovation and Technology for the 21st Century Act, or FIT 21, which would reclassify many kinds of crypto as commodities rather than securities.

    The obscure-sounding shift has broad implications. Observers generally consider the rules for commodities — items like corn and wheat — to be looser than those for securities such as stocks and bonds.

    “The CFTC was set up to regulate corn futures.”

    Just as important, crypto critics say, would be a corresponding shift in oversight. Under the congressional legislation, crypto would shed the SEC for the Commodity Futures Trading Commission, a body with fewer resources and a leaner regulatory staff.

    “The CFTC was set up to regulate corn futures,” said Mark Hays, a senior policy analyst with Americans for Financial Reform and Demand Progress. “The people they’re looking at are sophisticated hedge funds or ag traders, they are not set up to protect your cousin or your grandma logging onto their phone.”

    SEC Chair Gary Gensler, who has emerged as a crypto industry foil, warned of the bill’s consequences in a statement after it passed the House with bipartisan support in May. Scammers could label themselves crypto companies in order to evade government oversight, he said.

    Gensler said, “The crypto industry’s record of failures, frauds, and bankruptcies is not because we don’t have rules or because the rules are unclear. It’s because many players in the crypto industry don’t play by the rules.”

    Opening Up the Banks

    So far, the crypto industry’s favorite piece of legislation has not advanced in the Senate, although Majority Leader Chuck Schumer, D-N.Y., recently made supportive-sounding comments.

    Gensler also warned of the potential for broader contamination of U.S. capital markets. The issue arose in 2022, when SEC staffers tried to curb the danger with guidance advising financial institutions like banks to treat crypto as a liability rather than an asset on their balance sheets.

    The SEC’s thinking was that crypto is too vulnerable to theft, fraud, or lost wallet keys, but the crypto industry and bankers cried foul.

    Stand With Crypto, an industry advocacy group, said the guidance “disincentivizes banks from offering digital asset custody at scale and limits banks’ ability to develop safe, innovative use cases for blockchain technology.”

    While the guidance was nonbinding, banks that decided to follow it would have to increase other holdings in order to hold crypto for customers.

    Congress passed legislation overriding the guidance, only for President Joe Biden to veto it in June. For now, the guidance remains in place. Yet the crypto industry still harbors its larger ambition of making it easier for traditional financial institutions to hold crypto.

    Hays, the Americans for Financial Reform policy analyst, said, “They also want some of the other non-bank actors that provide crypto custody to be in the green.”

    “Stable” Coins

    Sometimes lost in the fallout from the Bankman-Fried saga is the story of an earlier crash involving a so-called stablecoin, TerraUSD, which was supposed to maintain a one-to-one peg with the dollar.

    In short, it didn’t. Investors who thought they were getting into crypto in the safest way possible had their savings wiped out.

    “A stablecoin is really nothing more than a crypto money market fund, with all the risks and dangers of a money market fund.”

    TerraUSD was an “algorithmic” stablecoin, meaning that it was not backed by actual assets. One of the industry’s best hopes in Congress is to get legislation passed authorizing stablecoins that are backed by concrete assets.

    Rep. Maxine Waters, D-Calif., the ranking member of the House Financial Services Committee and a frequent crypto skeptic, floated the idea last month of reaching a “grand bargain” with Republicans during the lame-duck Congress after the election.

    Although stablecoins seem to have more legislative legs than other crypto proposals, skeptics like Kelleher, of Better Markets, are wary. He likened them to money market funds, which had to be saved from collapse by the Federal Reserve in 2008 and 2020.

    “A stablecoin is really nothing more than a crypto money market fund, with all the risks and dangers of a money market fund,” he said. “Except it has even more, because it’s a crypto product that is not only unregulated, but because it’s also untransparent.”

    Editor’s Note: In September 2022, The Intercept received $500,000 from Sam Bankman-Fried’s foundation, Building a Stronger Future, as part of a $4 million grant to fund our pandemic prevention and biosafety coverage. That grant has been suspended. In keeping with our general practice, The Intercept disclosed the funding in subsequent reporting on Bankman-Fried’s political activities.

    The post Crypto Billionaires Could Flip the Senate to the GOP. Here’s What They Want. appeared first on The Intercept.

    This post was originally published on The Intercept.

  • When a hurricane like Helene or Milton ravages coastal communities, already-strained first responders face a novel, and growing, threat: the lithium-ion batteries that power electric vehicles, e-bikes, and countless gadgets. When exposed to the salty water of a storm surge, they are at risk of bursting into flames — and taking an entire house with them.

    “Anything that’s lithium-ion and exposed to salt water can have an issue,” said Bill Morelli, the fire chief in Seminole, Florida, and the bigger the battery, the greater the threat. That’s what makes EVs especially hazardous. “[The problem] has expanded as they continue to be more and more popular.”

    It is not yet clear how many vehicles might have caught fire in the wake of Hurricane Milton, which slammed into Tampa Bay on Wednesday, leaving at least 13 people dead and some 80,000 in shelters. But there have been 48 confirmed battery fires related to storm surge from Hurricane Helene, 11 of them associated with EVs.

    Morelli’s crews fought three of them. St. Petersburg Fire Rescue reported at least two, one from an electric bike and another from a Mercedes-Benz EQB300 that led to what a fire department representative called “major damage to the home.” CNN and other outlets reported on a fire in Sarasota sparked by a Tesla Model X. 

    Overall, such fires are far from common. Idaho National Laboratory found that of the 3,000 to 5,000 electric vehicles damaged by Hurricane Ian in 2022, about three dozen caught fire. Public awareness of the risk has mounted since then, with officials up to and including Florida Governor Ron DeSantis urging residents to move their EVs to higher ground ahead of storms. But the chemistry and construction of lithium-ion batteries make them especially prone to fires that are difficult for first responders to combat.

    “They burn hot, they burn fast, and they’re hard to extinguish,” Morelli said.

    St. Petersburg Fire Rescue responded to at least two electrical fires during Hurricane Helene, one of which involved an electric vehicle. St. Petersburg Fire Rescue

    The battery in an EV is composed of thousands of cells stacked and packed into a sealed enclosure. If salt water, which is particularly conductive, reaches the interior of a battery, it can cause a short circuit, which can generate excessive heat that jumps from cell to cell. “That’s called ‘thermal runaway,’” said Andrew Klock, senior manager of education and development at the National Fire Protection Association.

    As a battery heats up, it releases flammable gases that can ignite. Once the car starts burning, methods of putting out traditional vehicle fires — such as foam or thermal blankets to smother the flames — aren’t as effective. “Lithium-ion batteries generate their own oxygen and heat when they are on fire,” Klock said. “You can’t starve the fire.”

    Instead, first responders must direct high volumes of water at the battery pack as directly as possible in order to reduce the heat. The International Association of Fire Chiefs recommends having 3,000 to 8,000 gallons on hand — which can be difficult during a disaster, when hydrants may not be working properly and trucks have a limited supply aboard. 

    “They take tons and tons and tons and tons of water to extinguish,” said Morelli, who is working with other departments to acquire more thermal blankets. A ready supply of them could allow firefighters to smother the flames enough to move the car away from structures so it can burn itself out safely. 

    Klock said “training is paramount” to effectively fighting these fires. But of the roughly 1.2 million firefighters in the country, only 350,000 or so have completed the association’s training, he said. “There’s a lot of work to do.”

    The danger doesn’t end when a storm passes, either. According to the Department of Transportation, “the time frame in which a damaged battery can ignite varies, from days to weeks,” which is one reason Tesla urges owners not to operate their vehicle until a dealer inspects it. 

    The Alliance for Automotive Innovation, which represents 44 automakers and suppliers, declined to comment but cited a letter it sent to Republican Senator Rick Scott of Florida on the issue in 2022. It notes that “safety is a top priority for our members, which is why they have been engaged in long-standing efforts to address fire risks for both conventionally fueled vehicles and EVs.”  

    In the meantime, a range of efforts are underway to try to prevent these fires from occurring. The Federal Emergency Management Agency has funded research into emerging hazards of at-home battery storage systems. Other researchers are looking at how to make batteries safer, including Yang Yang, an associate professor of materials science and engineering at the University of Central Florida. His team developed a battery that, instead of fighting salt water, utilizes it as the main electrolyte. 

    “It can be soaked in the salty water and still works well,” said Yang, who started working on the project after living in Houston and Florida and seeing firsthand the problem floods present. While he said car companies have yet to contact him about his research, he’s optimistic that safer batteries could be on the market within the next few years. 

    Until then, storms like Helene and Milton may be among the biggest drivers of public attention to both the problem and prevention methods. Yang, for one, finds that possibility bittersweet at best: “I don’t want people to have any issues with their electric vehicles.”

    This story was originally published by Grist with the headline Helene and Milton reveal an emerging challenge for first responders: EV batteries catching fire on Oct 11, 2024.

    This post was originally published on Grist.

  • The Internet Archive recently was the target of a data breach that exposed information related to 31 million users, including their usernames and email addresses, among other materials. The group SN_Blackmeta has claimed responsibility for a concurrent DDoS attack that took the site offline. The party responsible for the data breach has not yet been identified.

    Related

    New York Times Doesn’t Want Its Stories Archived

    The nonprofit Internet Archive plays a vital role in online culture, preserving web content and other digitized materials and operating the popular Wayback Machine, which lets visitors see historic versions of websites.

    It is not yet clear how the data breach occurred, though some in the information security community have speculated that credentials for the Internet Archive’s servers may have been found in the logs of “information stealer” malware, which exfiltrates sensitive information from infected systems.  

    The recent data breach is not the only way that Internet Archive user email addresses have been vulnerable online. For more than a decade, the Internet Archive has been exposing the email addresses of anyone who uploaded a file to its library, despite its claims that it does not share uploader email addresses with anyone.

    When content is uploaded to the Internet Archive, a metadata file is automatically generated that includes a variety of information about the content, such as the date of upload, any user-entered description of the file contents, and the subject and media type. Alongside that metadata, however, there is an “uploader” field that shows the uploader’s email address. The metadata file is publicly viewable by clicking the “Show All” link on the main page of any uploaded content. The metadata can also be accessed by going to a specific metadata URL for the file.
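
    That metadata record can also be fetched programmatically. As a rough illustration (not drawn from the article), the minimal Python sketch below queries the Archive’s public metadata endpoint at archive.org/metadata/<identifier>; the item identifier is hypothetical, and whether an “uploader” field appears depends on the individual item.

      import requests

      # Minimal sketch: retrieve the public metadata record for an Internet Archive item.
      # "example-item" is a hypothetical identifier; the JSON record is served from
      # https://archive.org/metadata/<identifier>.
      def get_item_metadata(identifier: str) -> dict:
          response = requests.get(f"https://archive.org/metadata/{identifier}", timeout=30)
          response.raise_for_status()
          return response.json()

      record = get_item_metadata("example-item")
      # The "uploader" field, where present, is what exposed uploader email addresses.
      print(record.get("metadata", {}).get("uploader"))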

    Users have been raising concerns about the visibility of email addresses at Internet Archive for more than a decade. On its own site, in response to the question of “How can I contact the person / group who uploaded an item?”, the Internet Archive states that it is “unable to release any contact information for patrons.” Similarly, in a section of its guide titled “Why do you need my email address?”, the Internet Archive explains that it needs email addresses to verify accounts, allow users to log into accounts, help recover passwords, and receive notifications. The Archive goes on to “promise we will not share your data with anyone.”

    Despite these assurances, however, the Internet Archive appears to readily reveal the email address of content uploaders, ignoring support requests from users who flagged the issue for years. In 2013, a user made a post on the Archive’s support forums pointing out that uploader information, specifically the uploader’s email address, was made available in a metadata file the Archive generated for every upload. The post didn’t receive a response from anyone at the Archive. 

    In 2024, another user posted an issue on the Internet Archive’s GitHub page, referencing the earlier 2013 post and similarly detailing the fact that uploader emails are publicly viewable. “There is nothing on the website warning users that their email addresses are going to be exposed,” the post states. It goes on to describe this as a “betrayal of uploaders’ privacy.” Even if users subsequently updated the email address affiliated with their account, older uploads still revealed the email address which was associated with the account at the time of the upload, the user noted. As with the earlier post from 2013, no one from the Internet Archive publicly responded to the raised issue.

    The Internet Archive did not immediately respond to questions about the breach or about why uploader emails are made public, despite documentation stating that uploader emails are not shared with anyone.

    To mitigate the adverse impact of potential account leaks, users should have a unique, random password for each of their accounts. That way, if a particular service is breached, attackers cannot reuse the same password to break into other accounts, a technique known as a credential stuffing attack. In this case, password materials included in the breach were hashed, or scrambled, using a secure algorithm, meaning victims of the attack shouldn’t be immediately at risk.
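
    As a minimal illustration of that advice (not part of the original reporting), the Python sketch below generates a separate random password for each service, so a leak at one site cannot be replayed against another; the service names are hypothetical, and in practice a password manager handles this for you.

      import secrets
      import string

      # Minimal sketch: one unique, random password per service, so a breach of any
      # single site cannot be reused in a credential stuffing attack elsewhere.
      ALPHABET = string.ascii_letters + string.digits + string.punctuation

      def generate_password(length: int = 20) -> str:
          return "".join(secrets.choice(ALPHABET) for _ in range(length))

      # Hypothetical list of services.
      services = ["archive.org", "example-mail.com", "example-bank.com"]
      passwords = {site: generate_password() for site in services}
      for site, password in passwords.items():
          print(site, password)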

    To further safeguard yourself against data breaches, choose random and unique usernames for each online service. Setting up a unique email address for every online account makes things even more secure — and it isn’t as cumbersome as one might think thanks to new services offered by some email providers. 

    The post Internet Archive Was Exposing User Email Addresses for Years Before Recent Breach appeared first on The Intercept.

    This post was originally published on The Intercept.

  • Destructive displays of technological prowess in Lebanon serve to distract the Israeli public from the military’s failure to achieve its long-stated war aims.

    This post was originally published on Dissent Magazine.

  • Due to a quirk of geology, the purest quartz in all the world comes from the picturesque town of Spruce Pine, North Carolina. The mineral, created deep within the earth when silicon-rich magmas cooled and crystallized some 370 million years ago, is essential to the production of computer chips and solar panels.

    China, India, and Russia provide high purity quartz as well, but what’s mined there does not match the quality or quantity of what lies beneath the Blue Ridge Mountains. With Spruce Pine among the scores of Appalachian communities reeling from Hurricane Helene, the sudden closure of quartz mines that have supplied chip manufacturers for decades has rattled the global tech industry. But this quartz is vital to the solar industry too. And while industry experts expect companies to withstand the temporary closure of the town’s two mines, it highlights the precarity of a clean energy economy that relies on materials produced at a single location — especially in a world of increasingly ferocious natural disasters.

    Helene’s impact on Spruce Pine “absolutely lays bare the danger of having a monopoly in any part of the supply chain,” said Debra DeShong, head of corporate communications at solar manufacturer QCells North America. QCells, which manufactures photovoltaic panels in Georgia and is building an additional facility that will manufacture the components needed to assemble them, is evaluating whether the Spruce Pine mine closures will impact it.

    The industry relies on quartz primarily to make polysilicon, a highly refined type of silicon that forms the sunlight-harvesting cells in most photovoltaic panels. But the quartz from Spruce Pine serves another purpose: It is used to make the crucibles in which molten polysilicon crystallizes into cylindrical or rectangular ingots. Those ingots are sliced into the wafers that are further processed to produce the cells within panels.

    Forming solar ingots requires heating polysilicon to over 2,500 degrees Fahrenheit. Only the highest purity quartz sand provides the thermal stability needed to create the crucibles capable of enduring such heat, and the best of it is found in western North Carolina.

    “Spruce Pine is a very unusual quartz deposit and it is incredibly pure,” said Jenny Chase, the lead solar analyst at energy consultancy BloombergNEF. 

    BloombergNEF estimates that Spruce Pine supplies more than 80 percent of the ultra-pure quartz sand used to manufacture crucibles for both the solar and the semiconductor industry, as well as for optical and lighting applications. (There isn’t any public data on how much of the town’s quartz is used by each sector, but BloombergNEF estimates that in China, the world’s leading producer of photovoltaic panels, 80 percent of the high purity quartz it uses goes into solar applications.) Spruce Pine dominates this market, and supplies nearly all of the material that lines the inside of solar crucibles, which come in direct contact with molten silicon. There, purity is particularly important for ensuring high ingot yields and long crucible lifespans.

    The amount of quartz required to support solar crucible production is fairly small. Chase says that Spruce Pine produced about 20,000 tons of high purity quartz sand last year — more than enough to satisfy the demands of the solar industry. That same year, global polysilicon production stood at 1.52 million metric tons. Producing that much polysilicon likely required about 3 million metric tons of quartz, according to Chase. All of which is to say, Spruce Pine is, she said, “quite a small cog” in the solar supply chain.
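    For scale, a back-of-envelope calculation using only the figures Chase cites (a rough sketch, treating tons and metric tons as comparable) bears out the “small cog” description:

        # Back-of-envelope check using the approximate figures cited above.
        spruce_pine_hpq_tons = 20_000            # high purity quartz sand from Spruce Pine, last year
        global_polysilicon_tons = 1_520_000      # global polysilicon production, same year
        quartz_for_polysilicon_tons = 3_000_000  # quartz needed to produce that polysilicon
        print(f"Quartz per ton of polysilicon: "
              f"{quartz_for_polysilicon_tons / global_polysilicon_tons:.1f} tons")   # ~2.0
        print(f"Spruce Pine HPQ vs. total quartz feedstock: "
              f"{spruce_pine_hpq_tons / quartz_for_polysilicon_tons:.1%}")           # ~0.7%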

    Still, a small cog can become a big problem if there are no contingencies when it breaks down. But Chase suspects that most crucible manufacturers — an industry based largely in East Asia — have stockpiles of high purity quartz. May Haugen, who leads communications at The Quartz Corp, a Norwegian company that produces high purity quartz sand at Spruce Pine, confirmed this in an email to Grist.

    “The Quartz Corp operates in long value chains where everybody has learnt through Covid the importance of sizable safety stocks,” Haugen wrote. “Between our own safety stocks which are built in different locations and the ones down in the value chain, we are not concerned about shortages in the short or medium term.”

    In preparation for Hurricane Helene, The Quartz Corp halted all mining operations in Spruce Pine on September 26. So did the Belgian firm Sibelco, the town’s other producer.

    It is unclear when either company will resume mining: In an October 2 statement, The Quartz Corp wrote that while its plants do not seem to have been seriously damaged by the storm it is still “too early to tell” when they will reopen, “as this will also depend on the rebuilding of local infrastructure.” In an October 4 statement shared with Grist, Sibelco wrote that its facilities appear to have sustained “minor damage” and that the company hopes to “restart operations as soon as we can.”

    “Our dedicated teams are on-site, conducting cleanup,” the statement noted. “Our final product stock has not been impacted.” The company declined to say how the hurricane could impact its plan to double production capacity in Spruce Pine by 2025.

    Even if both mines remain shuttered for months, the solar industry could adapt, Chase said. The Japanese firm Mitsubishi Chemical Group manufactures high-purity synthetic silica for the semiconductor industry, and the material meets the standards required for solar crucibles, according to Chase. 

    However, production would need to ramp up. Mitsubishi Chemical Group representative Kana Nuruki told Grist in an email that the company currently does not have enough synthetic quartz to support the solar industry, and what it does produce is “considerably more expensive” than the real thing.

    Paying a premium for synthetic quartz would be a challenge for the price-sensitive solar industry, Chase said. “But if it had no choice, it would do it.” 

    Developing alternative supplies of high purity quartz, even ones that cost more, could help fortify the solar supply chain against the next climate-fueled disaster. “As solar becomes a larger piece of our electrification, it’s going to be increasingly important that we ensure we have a stable supply chain,” DeShong of QCells said.

    Still, manufacturing both semiconductors and solar panels in America is a key priority of the Biden administration, and it seems unlikely that Washington will want to see a critical cog in both supply chains move overseas. A spokesperson for the U.S. Department of Energy told Grist that the agency “is closely monitoring Hurricane Helene’s effects [on] the supply chain” while “advancing efforts to maintain the stability of America’s energy systems.”

    Spencer Bost, executive director of the community development organization Downtown Spruce Pine, said that quartz mining is the largest private employer in the county and restarting it quickly is “very important from a local economy perspective.” If the federal government cares about building clean energy in America, Bost said, “we have all the stuff here.” 

    “We have the people who need the jobs here,” he added. 

    This story was originally published by Grist with the headline The solar supply chain runs through this flooded North Carolina town on Oct 8, 2024.

    This post was originally published on Grist.

  • Meta is restricting the use of the upside-down red triangle emoji, a reference to Hamas combat operations that has become a broader symbol of Palestinian resistance, on its Facebook, Instagram, and WhatsApp platforms, according to internal content moderation materials reviewed by The Intercept.

    Since the beginning of the Israeli assault on Gaza, Hamas has regularly released footage of its successful strikes on Israeli military positions with red triangles superimposed above targeted soldiers and armor. Since last fall, use of the red triangle emoji has expanded online, becoming a widely used icon for people expressing pro-Palestinian or anti-Israeli sentiment. Social media users have included the shape in their posts, usernames, and profiles as a badge of solidarity and protest. The symbol has become common enough that the Israeli military has used it as shorthand in its own propaganda: In November, Al Jazeera reported on an Israeli military video that warned “Our triangle is stronger than yours, Abu Obeida,” addressing Hamas’s spokesperson.

    According to internal policy guidelines obtained by The Intercept, Meta, which owns Facebook and Instagram, has determined that the upside-down triangle emoji is a proxy for support for Hamas, an organization blacklisted under the company’s Dangerous Organizations and Individuals policy and designated a terror group under U.S. law. While the rule applies to all users, it is only being enforced in moderation cases that are flagged internally. Deletions of the offending triangle may be followed by further disciplinary action from Meta depending on how severely the company assesses its use.

    According to the policy materials, the ban covers contexts in which Meta decides a “user is clearly posting about the conflict and it is reasonable to read the red triangle as a proxy for Hamas and it is being used to glorify, support or represent Hamas’s violence.”

    Many questions about the policy remain unanswered; Meta did not respond to multiple requests for comment. It’s unclear how often Meta chooses to restrict posts or accounts using the emoji, how many times it has intervened, and whether users have faced further repercussions for violating this policy.

    The policy also appears to apply even if the emoji is used without any violent speech or reference to Hamas. The documents show that the company will “Remove as a ‘Reference to DOI’ if the use of triangle is not related to Hamas’s violence,” as in the case of the emoji as a user’s profile picture. Another example of a prohibited use doesn’t even include the emoji itself, but rather a hashtag mentioning the word triangle and a Hamas spokesperson.

    It “seems wildly over-broad to remove any ‘reference’ to a designated DOI,” according to Evelyn Douek, an assistant professor at Stanford Law School and scholar of content moderation policy. “If we are just understanding the ‘🔻’ as essentially a stand-in for the word ‘Hamas,’ we would never ban every instance of the word. Much discussion of Hamas or use of the ‘🔻’ will not necessarily be praise or glorification.”

    The previously unreported prohibition has not been announced to users by Meta and has worried some digital rights advocates about how fairly and accurately it will be enforced. “Wholesale bans on expressions proved time and time again to be disastrous for free speech, but Meta never seems to learn this lesson,” Marwa Fatafta, a policy adviser with the digital rights organization Access Now, told The Intercept. “Their systems will not be able to distinguish between the different uses of this symbol, and under the unforgiving DOI policy, those who are caught in this widely cast net will pay a hefty price.”

    “Their systems will not be able to distinguish between the different uses of this symbol, and … those who are caught in this widely cast net will pay a hefty price.”

    While Meta publishes a broad overview of the Dangerous Organizations policy, the specifics, including the exact people and groups that are included under it, are kept secret, making it difficult for users to avoid breaking the rule.

    “Soon enough, users will know and notice that their posts are being taken down because of using this red triangle, and that will raise questions,” Fatafta said. “Meta seems to be forgetting another very important lesson here, and that is transparency.”

    Douek echoed the need for transparency regarding Meta’s content moderation around the war: “Not knowing when or how the rule is being applied is going to exacerbate the perception, if not the reality, that Meta isn’t being fair in a context where the company has a history of biased enforcement.”

    Although Meta last year relaxed its Dangerous Organizations policy to ostensibly allow references to banned entities in certain contexts, like elections, civil society groups and digital rights advocates have widely criticized Meta’s enforcement of the policy against speech pertaining to the war, particularly from Palestinian users. The policy material reviewed by The Intercept mentions no such exceptions for the triangle emoji or instructions to consider its context beyond Hamas.

    “What is being banned are expressions of solidarity and support for Palestinians as they are trying to resist ethnic cleansing and genocide,” Mayssoun Sukarieh, a senior lecturer with the Department of International Development at King’s College London, told The Intercept. “Symbols are always created by resistance, and there will be resistance as long as there is colonialism and occupation.”

    The post Facebook and Instagram Restrict the Use of the Red Triangle Emoji Over Hamas Association appeared first on The Intercept.

    This post was originally published on The Intercept.

  • Researchers at the University of New South Wales have claimed a world-first breakthrough by demonstrating provable quantum entanglement between two atoms in silicon, a crucial step for scaling quantum computers. Entanglement between at least two qubits is the phenomenon that enables information to be encoded and processed in a quantum computer. This was demonstrated by…
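    For readers unfamiliar with the concept, the sketch below is a generic two-qubit entanglement example written with the open-source Qiskit library; it is purely illustrative and has nothing to do with UNSW’s silicon-atom hardware. It prepares a Bell state, the textbook entangled state in which the two qubits’ measurement outcomes are perfectly correlated.

        # Illustrative only: a Bell state on two simulated qubits using Qiskit.
        from qiskit import QuantumCircuit
        from qiskit.quantum_info import Statevector

        qc = QuantumCircuit(2)
        qc.h(0)      # put qubit 0 into an equal superposition
        qc.cx(0, 1)  # entangle qubit 1 with qubit 0

        state = Statevector.from_instruction(qc)
        print(state)  # amplitude only on |00> and |11>: outcomes are perfectly correlated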

    The post UNSW delivers atomic quantum computing breakthrough appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • Digital security appears to be a fixation of New York Mayor Eric Adams and his staff, at least according to his indictment on multiple charges, including soliciting and receiving campaign contributions from a foreign national, bribery, and wire fraud. 

    But then why were they so bad at it?

    Case in point: The indictment quotes a text message exchange between Adams and an unnamed staffer, in which the staffer allegedly tells Adams to “be o[n the] safe side Please Delete all messages you send me.” 

    Adams, according to the indictment, texts back, “Always do.”

    It goes without saying that this policy of deleting messages did not prevent investigators from discovering these communications.

    Nor did an alleged attempt by the same staff member to delete encrypted messaging apps after asking for a bathroom break during a meeting with FBI agents. The staff member, according to the indictment, asked to excuse herself from the conversation, then removed from her phone the apps she had used to communicate with Adams, a Turkish official who coordinated various dealings with Adams, and others. 

    This is not the first time the run-to-the-bathroom-to-flush-messages-down-the-toilet trick has been attempted. When Apple sued former iOS engineer Andrew Aude for allegedly leaking information on upcoming Apple products, the complaint noted that “Feigning the need to visit the bathroom mid-interview, Mr. Aude then extracted his iPhone from his pocket during the break and permanently deleted significant amounts of evidence from his device,” which included the popular encrypted messaging app Signal. 

    Much like how attempts to flush drugs down the toilet don’t always destroy incriminating evidence, there are a plethora of forensic techniques to recover lingering trace evidence of applications which have been installed on a phone even after the app may have been deleted from the device. There are also a number of ways to recover trace remnants of communications, even if those communications are conducted via encrypted messaging apps. 
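    One reason such traces persist is that many apps keep their data in local SQLite databases, and a “deleted” row is often just marked as free space rather than wiped from the file. The sketch below is a generic Python illustration of that behavior; it says nothing about Signal’s actual storage format.

        # Generic illustration (not Signal's real schema): deleted SQLite rows can
        # linger in the database file's free pages until the file is vacuumed.
        import sqlite3

        conn = sqlite3.connect("demo_messages.db")
        conn.execute("CREATE TABLE IF NOT EXISTS messages (id INTEGER PRIMARY KEY, body TEXT)")
        conn.execute("INSERT INTO messages (body) VALUES ('meet me at the usual place')")
        conn.commit()
        conn.execute("DELETE FROM messages")  # the app's notion of "deleted"
        conn.commit()
        conn.close()

        raw = open("demo_messages.db", "rb").read()
        print(b"usual place" in raw)  # frequently True: the bytes survive until VACUUM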

    Deleting messages, or even an entire app, may nonetheless leave an array of bread crumbs for investigators that would betray the fact that interactions between certain parties may have transpired, even if the actual contents of the conversations may no longer be recoverable. 

    Take Signal, for example. Signal offers a variety of options to delete messages, including the ability to delete a message you sent from the recipients’ devices, as well as the ability to set messages to disappear after a chosen length of time. However, these deletion measures come with critical caveats: They can still leave traces of the fact that communications between certain parties took place, which in some cases may be enough to pose problems for the people implicated.

    There are a plethora of forensic techniques to recover lingering trace evidence of applications, even after the app may have been deleted from the device.

    For example, although Signal offers the option for a sender to delete a message they sent to recipients, this feature comes with two notable asterisks. First, this “delete for everyone” feature can only be used within 24 hours of a message being sent. Second, the deleted messages are not deleted entirely but are instead replaced with boilerplate text that reads “This message was deleted” on recipients’ devices, or “You deleted this message” on the sender’s devices. Metadata about the original message, such as the time the original message was sent and received, is preserved as well. To effectively eliminate traces that a message had been sent and then deleted, both the sender and the recipients must individually tap on the deleted message placeholder and select “delete.”

    On an iPhone, Signal integrates with iOS so that Signal voice calls show up in the “recents” list of the Phone app. This means that forensic investigators can simply check the Phone app to see who an individual called on Signal without having to open the Signal app at all. Though this behavior doesn’t appear to be documented on Signal’s official support portal, it can be disabled in the Signal iOS app by going to Settings, then Privacy, and making sure “Show Calls in Recents” is turned off.

    This is all to say, if you find yourself in a situation where you need an impromptu bathroom break in the middle of an interrogation to delete messages, you’re already in deep shit.

    The post Encrypted Apps Can Protect Your Privacy — Unless You Use Them Like Eric Adams appeared first on The Intercept.

    This post was originally published on The Intercept.

  • The Israeli Defense Tech Conference, aimed at tech companies working with the Israeli military, was scheduled for November at the Google for Startups campus in Tel Aviv.

    The event, according to a listing posted on the event management app Luma, was pitched at “founders, investors and innovators” looking to network and learn more about the defense tech space. It was co-sponsored by Google; Fusion Venture Capital; Genesis, a startup accelerator; and the Israeli military’s research and development arm, known as the Directorate of Defense Research and Development (DDR&D, or Ma’fat). 

    When The Intercept contacted Google, the event page disappeared.

    Google was not only listed as the physical host of the event and one of its sponsors, but the event listing also included a notice that attendees “approve of sharing [their] details with the organizers (Fusion & Google)” as part of signing up.

    When The Intercept contacted Google and the other companies and venture capital firms on the event page, the event page disappeared. Google spokesperson Andréa Willis told The Intercept in an email, “Google is not associated with this event.” Willis did not respond when asked how this could be possible if Google is hosting and co-sponsoring the event, or why the event page went down. None of the other companies or venture capital firms on the event page responded to requests for comment.

    After months of sustained protests against Google’s relationship with Israel, the company appears to be trying to muddy that relationship, at least in the public eye, while continuing its collaboration with the Israeli military.

    In July, Google’s name was mysteriously removed from the website of a separate conference, IT for IDF, meant to highlight tech companies working with the Israel Defense Forces, which identified it as a co-sponsor. Conference organizers claimed Google’s inclusion was a mistake, but internal documents from Google name the company as an event co-sponsor. 

    According to the event listing, November’s Israeli Defense Tech Conference would take place at the Tel Aviv campus of Google for Startups, which offers resources for companies that work or partner with Google. Its lead speaker is listed as Nir Weingold, head of planning, economics, and IT for DDR&D, who was scheduled to talk about “trends in Israeli defense tech.” Weingold was followed by a panel of venture capitalists talking about investing in Israeli military startups.

    The event page also had a panel with executives of companies “leading Israeli defense tech.” One of these companies was SpearUAV, a company whose surveillance and explosive drones are used by the Israeli military. Another was Spectralx (formerly Polaris), a company that makes sensors, drones, and other military-grade technology. According to its website, it’s being used by the DDR&D and leading Israeli weapon manufacturer Elbit Systems, as well as the U.S. Navy and U.S. Special Operations Command. The third company, AIR, makes consumer-grade, single-passenger electric planes. To date, the company has not publicly disclosed a relationship with the Israeli military.

    Screenshots of the conference page on Luma, taken before it was scrubbed. Screenshot

    The conference, if it takes place, would also feature several venture capital firms that are funding companies that are publicly working with the U.S. military, but have not yet disclosed any contracts with Israeli forces. One of these firms is Tal Ventures, which funds Magnus Metal, winner of a U.S. Defense Department competition. Tal Ventures also funds Scribe, which got a contract with the U.S. Department of Homeland Security. 

    Another company that was scheduled to be at the conference is Intel Capital, the investment wing of the major chip manufacturer. The firm funds Syntiant Corp., which recently secured a contract with the U.S. Defense Innovation Unit for its targeting systems for unmanned vehicles. The firm has also published two pieces on its company blog about the importance of working with military and defense agencies.

    Venture capital firm 10D was also scheduled to be at the conference. It funds Exodigo, an underground mapping company that has not disclosed any military contracts, but has spoken publicly about the importance of working with Israel’s military. Similarly, venture capital firm TLV Partners was scheduled to be at the conference and has written on its company blog about the importance of private sector collaboration with the Israeli military.

    For the past year, Google-sponsored conferences have been the target of people protesting Project Nimbus, the $1.2 billion contract it shares with Amazon that involves providing cloud services and other tools to the Israeli government. 

    Shortly after news about Project Nimbus became public in 2021, hundreds of Google and Amazon workers signed and published an open letter condemning it, and the activist group No Tech for Apartheid formed. The group, made up of tech workers and organizers with MPower Change and Jewish Voice for Peace, has been protesting Project Nimbus since 2022.

    For years, Google has insisted that Project Nimbus is “not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services.” However, reporting has since revealed an extensive relationship between Project Nimbus and the IDF, accompanied by public paper trails.

    In March, Google Cloud engineer Eddie Hatfield protested Project Nimbus by interrupting the Google Israel managing director at Mind the Tech, an Israeli tech industry conference in New York that is co-sponsored by Google. Hatfield was fired days later. 

    Weeks later, Google employees staged sit-ins protesting Project Nimbus at company offices in Sunnyvale, California, and New York, with simultaneous protests taking place outside. Nine people occupying company office space were arrested, and fifty people were fired shortly after. They later filed a complaint with the National Labor Relations Board about the incident. 

    The post Google Was Set to Host an Israeli Military Conference. When We Asked About It, the Event Disappeared. appeared first on The Intercept.

    This post was originally published on The Intercept.

  • This story is part of a collaboration between FRONTLINE and ProPublica that includes an upcoming documentary.

    The recent crackdown on the social media platform Telegram has triggered waves of panic among the neo-Nazis who have made the app their headquarters for posting hate and planning violence.

    “Shut It Down,” one person posted in a white supremacist chat on Tuesday, hours after Telegram founder Pavel Durov announced he would begin sharing some users’ identifying information with law enforcement.

    With over 900 million users around the globe, Telegram has been both revered and reviled for its hands-off approach to moderating posted content. The platform made headlines this summer when French authorities arrested Durov, seeking to hold him responsible for illegal activity that has been conducted or facilitated on the platform — including organized drug trafficking, child pornography and fraud.

    Durov has called the charges “misguided.” But he acknowledged that criminals have abused the platform and promised in a Telegram post to “significantly improve things in this regard.” Durov’s announcement marked a considerable policy shift: He said Telegram will now share the IP addresses and phone numbers of users who violate the platform’s rules with authorities “in response to valid legal requests.”

    This was the second time in weeks that extremists had called on their brethren to abandon Telegram. The first flurry of panic followed indictments by the Justice Department of two alleged leaders of the Terrorgram Collective, a group of white supremacists accused of inciting others on the platform to commit racist killings.

    “EVERYONE LEAVE CHAT,” posted the administrator of a group chat allied with the Terrorgram Collective the day the indictments were announced.

    An analysis by ProPublica and FRONTLINE, however, shows that despite the wave of early panic, users didn’t initially leave the platform. Instead there was a surge in activity on Terrorgram-aligned channels and chats, as allies of the group tried to rally support for their comrades in custody, railed against the government’s actions and sought to oust users they believed to be federal agents.

    Federal prosecutors in the U.S. have charged Dallas Humber and Matthew Allison, two alleged leaders of the Terrorgram Collective, with a slew of felonies including soliciting the murder of government officials on Telegram.

    Humber has pleaded not guilty. She made a brief appearance in federal court in Sacramento, California, on Sept. 13, during which she was denied bail. Humber, shackled and clad in orange-and-white jail garb, said nothing. Allison, who has not yet entered a plea, was arrested in Idaho but will face trial in California.

    Attorneys for Humber and Allison did not respond to separate requests for comment.

    The two are alleged Accelerationists, a subset of white supremacists intent on accelerating the collapse of today’s liberal democracies and replacing them with all-white ethno-states, according to the indictment.

    Through a constellation of linked Telegram channels, the collective distributes books, audio recordings, videos, posters and calendars celebrating white supremacist mass murderers, such as Brenton Tarrant, who in early 2019 stormed two mosques in New Zealand and shot to death 51 Muslim worshippers.

    The group explicitly aims to inspire similar attacks, offering would-be terrorists tips and tools for carrying out spectacular acts of violence and sabotage. A now-defunct channel allegedly run by Humber, for example, featured instructions on how to make a vast array of potent explosives. After their arrests, channels allegedly run by Humber and Allison went silent.

    But within days of the indictments, an anonymous Telegram user had set up a new channel “dedicated to updates about their situation.”

    “I understand that some people may not like these two, however, their arrests and possible prosecution affects all of us,” the user wrote. The criminal case, they argued, “shows us that Telegram is under attack globally.”

    The channel referred to Humber and Allison by their alleged Telegram usernames, Ryder_Returns and Btc.

    A long-running neo-Nazi channel with more than 13,000 subscribers posted a lengthy screed. “We are very sad to hear of the egregious overreach of government powers with these arrests,” stated the poster, who used coded language to suggest that white supremacists should forcefully overthrow the U.S. government.

    One group closely aligned with the Terrorgram Collective warned like-minded followers that federal agents could be lurking. In a post, it said that it had been in contact with Humber since her arrest, and that she gave them information about an undercover FBI agent who had infiltrated the Accelerationist scene.

    “If this person is in your chats, remove them,” said one post, referring to the supposed agent. “Don’t threaten them. Don’t say anything to them. Just remove them from contacts and chats.”

    Matthew Kriner, managing director of the Accelerationism Research Consortium, said the Terrorgram Collective had already been badly weakened by a string of arrests in the U.S., Europe and Canada over the past two years. “Overall, the arrests of Humber and Allison are likely the final blow to the Terrorgram Collective,” Kriner said.

    In the U.S., federal agents this year have arrested at least two individuals who were allegedly inspired by the group. The first was Alexander Lightner, a 26-year-old construction worker who was apprehended in January during a raid on his Florida home. In a series of Telegram posts, Lightner said he planned to commit a racially or ethnically motivated mass killing, according to prosecutors. Court records show that agents found a manual produced by the Terrorgram Collective and a copy of “Mein Kampf” in Lightner’s home.

    Lightner has pleaded not guilty to charges of making online threats and possessing an illegal handgun silencer. His attorney declined to comment.

    This summer, prosecutors charged Andrew Takhistov of New Jersey with soliciting an individual to destroy a power plant. Takhistov allegedly shared a PDF copy of a different Terrorgram publication with an undercover agent. The 261-page manual includes detailed instructions for building explosives and encourages readers to destabilize society through murder and industrial sabotage. Takhistov has not yet entered a plea. His attorney did not respond to a request for comment.

    Durov’s August arrest also sent a spasm of fear through the extremist scene. “It’s over,” one user of a white supremacist chat group declared.

    “Does this mean I have to Nuke my Telegram account?” asked another member of the group. “I just got on.”

    Their concerns grew when Telegram removed language from its FAQ page stating that the company would not comply with law enforcement requests regarding users in private Telegram chats.

    Alarmed, Accelerationists on Telegram discussed the feasibility of finding another online sanctuary. Some considered the messaging service Signal, but others warned it was likely controlled by U.S. intelligence agencies. One post suggested users migrate to more obscure encrypted messaging apps like Briar and Session.

    In extremist circles, there was more discussion about fleeing Telegram after Durov’s announcement this week. “Time is running out on this sinking ship,” wrote one user. “So we’re ditching Telegram?” asked another.

    “Every time we have a success against one of them, they learn, they adapt, they modify,” said Don Robinson, who as an FBI agent conducted infiltration operations against white supremacists. “Extremists can simply pick up and move to a new platform once they are de-platformed for content abuses. This leaves law enforcement and intelligence agencies playing an endless game of Whac-a-Mole to identify where the next threat may be coming from.”

    This post was originally published on ProPublica.

  • With the InnovationAus Awards for Excellence finalists now locked in for 2024, voting has opened to the public to find a worthy winner of the prestigious People’s Choice Award. The People’s Choice Award is how we identify the crowd favourite from among all of our 2024 finalists. The voting is open to the public…

    The post Voting now open for InnovationAus People’s Choice Award appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • “In a peer reviewed study Scientists show that Nano structures self assemble after Pfizer/Moderna shots are injected into humans.” RealWorldNewsChannel.T.ME.

    The post Transhumanism Tripe first appeared on Dissident Voice.

    This post was originally published on Dissident Voice.

  • Anas Altikriti was in London, and busy, on the day in July 2020 when his phone was hacked. He frequently works as a hostage negotiator and, at the time, he was negotiating a deal to free a hostage being held on the Libya–Chad border. Altikriti also had a meeting with former Labour Party leader Jeremy Corbyn. But his schedule did not include having his phone infiltrated by Pegasus, the phone hacking software made by Israel’s NSO Group. 

    Four years later, Altikriti, an Iraqi-born British citizen and vocal critic of the United Arab Emirates, is filing a report to the Metropolitan Police in London accusing the Israeli spyware firm NSO Group of complicity in the targeted hacking of his phone. On Wednesday, he filed the complaint about NSO and its associates alongside three fellow U.K.-based human rights defenders whose phones were also hacked.

    “This case has some real legs,” said Leanna Burnard, a lawyer at the nonprofit Global Legal Action Network, who prepared the complaint. “The U.K. shouldn’t stand for the hacking of human rights defenders on its own soil.”

    Assembled with the help of advocates from GLAN on behalf of the victims, the extensively footnoted filing sent to the Metropolitan Police, which was obtained by The Intercept, puts the ball in the police’s court. The police now have discretion over whether to open an investigation and subsequently bring charges.

    “The U.K. shouldn’t stand for the hacking of human rights defenders on its own soil.”

    “Due to regulatory constraints, we cannot confirm or deny any alleged specific customers,” Gil Lanier, vice president for global communications at NSO, told The Intercept. “NSO complies with all laws and regulations and sells its technologies exclusively to vetted intelligence and law enforcement agencies. Our customers use these technologies daily, as Pegasus continues to play a crucial role in thwarting terrorist activities, breaking up criminal rings, and saving thousands of lives.”

    The Metropolitan Police declined to comment.

    The U.S. blacklisted NSO in 2021 after its software was accused of enabling human rights abuses by the company’s authoritarian government clients. Amnesty International has said NSO was complicit in many of these phone hackings. 

    The cyber spying firm, however, has never been sanctioned in the U.K., despite calls from members of Parliament. The failure to act was particularly jarring because the government itself had been a target of the software. In 2022, cybersecurity researchers at Citizen Lab said that the U.K. prime minister’s office and the Foreign Office likely had been victims of multiple Pegasus attacks, with the UAE as the main suspect. 

    While prosecutors around the world have investigated criminal claims against NSO in countries including Spain, Hungary, and Poland, so far there have been no formal charges.

    The complaint against NSO to London police has been two years in the making, since lawyers began investigating the hacking of victims on British soil. Lawyers on the case said they hoped the police report could lead to a landmark moment for human rights defenders who have been targeted. Altikriti, alongside the other complainants, certainly hopes so.

    “This has to be exposed,” he said. “We are now talking about a potential world where literally no one can ever claim to enjoy anything called privacy.”

    Hacked on British Soil

    Alongside Altikriti, the hacking victims include Azzam Tamimi, a Palestinian-born British journalist and academic who is a prominent critic of the Saudi regime; Mohammed Kozbar, a Lebanese-born British citizen and the leader of the Finsbury Park mosque; and Yusuf Al Jamri, a Bahraini human rights activist who was granted asylum in the U.K. All were hacked between 2018 and 2021 on British soil.

    Their complaint to the police is being made against NSO Group and its board members; the firm’s parent company, Luxembourg-based Q Cyber Technologies; and London-based private equity firm Novalpina, which bought NSO in 2019. The human rights activists allege that the people involved with NSO breached the U.K.’s Computer Misuse Act by enabling state actors to hack their phones using Pegasus. (Novalpina did not respond to a request for comment.)

    The hackers in question are believed to be the Kingdom of Saudi Arabia, the UAE, and the Kingdom of Bahrain. 

    The U.K. recently became more significant to NSO’s operations. In 2023, the management of five NSO-linked companies was moved to London and two U.K.-based officers were appointed. 

    Meanwhile, NSO continues to face a slew of civil cases in the U.S., with the company moving for dismissal in lawsuits by hacked Salvadoran journalists and Hanan Elatr Khashoggi, the widow of murdered journalist Jamal Khashoggi.

    Last week, Apple asked a court in San Francisco to dismiss its three-year hacking suit against NSO, after Israeli officials took files from NSO’s headquarters — an apparent attempt to frustrate lawsuits in the U.S. Apple argued it may now never be able to get the most critical files about Pegasus and that the revelation of its own defensive systems in court might aid other spyware companies. 

    “NSO is very vigorously defending these lawsuits,” said Stephanie Krent, attorney at the Knight First Amendment Institute. “It is trying to draw litigation out and really avoid being held to account.” 

    “Absolute Non-reaction”

    In July 2021, Altikriti was notified by The Guardian as part of its Pegasus Project that his number was on a leaked list of those suspected to be hacked. According to The Guardian, Altikriti’s phone number was on a list of people of interest to the UAE given to NSO. Altikriti was concerned but not surprised.

    For many years, he had been vocally critical of the UAE, where he previously lived. The UAE designated his organization, the Cordoba Foundation — which works to promote dialogue and rapprochement between Islam and the West — as a terrorist group in 2014. In response, the organization issued a statement calling the country a “despotic regime seeking to silence any form of dissent.” He made similar declarations about the UAE over the following years. 

    Around the time Altikriti was hacked in July 2020, he had been working on several hostage release deals, mainly in the Middle East. He alleges that phone hacking interfered with his communications related to one deal.

    After he was notified of the potential hack, Altikriti’s phone was tested by Amnesty International and Citizen Lab at the University of Toronto, which studies cyber issues affecting human rights. The hack was confirmed. Altikriti quickly went public about the cyberattack, posting a statement calling on the U.K. government to stand against the use of such spyware. Altikriti has since become increasingly frustrated by the lack of action. 

    “You think that the U.K. Government, having seen a number of its own citizens and those on its lands being violated in the way that we have evidence now, would do something,” Altikriti told The Intercept. “But so far we have seen an absolute non-reaction.”

    In 2022, Altikriti and Kozbar, one of the other human rights activists behind the complaint to police, sent a pre-claim notice to NSO, the UAE, and Saudi Arabia of their intention to file a civil suit over the alleged Pegasus phone hacking. In a formal response letter obtained by The Intercept, NSO said there was “no basis for the claims.”

    The company said that since Q Cyber Technologies Ltd and NSO Group Technologies Ltd are each Israeli companies and neither was present in England and Wales, English courts had no jurisdiction over them. They also argued that the claims were barred by state immunity because, if the alleged attacks happened, they were conducted on behalf of foreign governments who are immune from prosecution.

    In Wednesday’s complaint to police, other claimants have stories similar to Altikriti. Al Jamri was active on social media promoting awareness of human rights abuses and political issues in Bahrain. In 2011, he was politically active during the Arab Spring. In its wake, he was regularly subjected to interrogation and harassment by authorities. He was detained for the third time in August 2017 and subjected to torture. Upon his release, he sought asylum in the U.K. 

    Two years later, Al Jamri was targeted with Pegasus by servers traced to Bahrain, according to Citizen Lab. This happened around the same time he was posting about an incident at the British Embassy of Bahrain, when a dissident was allegedly assaulted by staff. In August 2019, like Altikriti, Al Jamri was notified by The Guardian, and his phone was subsequently tested and confirmed to have infections. He also went public about the hack.

    U.S. Lawsuits 

    Despite Apple’s attempt to withdraw its case, NSO still faces a slew of lawsuits in the U.S. In October 2019, WhatsApp filed a lawsuit against the Israeli company for using its platform to hack the phones of 1,400 of the chat app’s users. NSO has repeatedly tried to get the case thrown out, including by claiming sovereign immunity — that it acted as an agent of foreign governments — though that effort was rejected in January. 

    In November 2021, the same month NSO was blacklisted by the U.S. government for its role enabling human rights abuses, Apple also filed its case against NSO to hold it accountable for the surveillance and targeting of its users. On September 13, the company moved to dismiss its case, saying that Israeli officials’ seizure of NSO documents “were part of an unusual legal maneuver created by Israel to block the disclosure of information about Pegasus.” 

    NSO is known to have a close relationship with the Israeli government, which it claims to have been working with during Israel’s war on Gaza. In November, in an attempt to rehabilitate its image, NSO sent an urgent letter to request a meeting with Secretary of State Antony Blinken and officials at the U.S. State Department, citing the threat of Hamas. 

    In 2022, the Knight Institute filed its lawsuit on behalf of current and former journalists of El Faro, one of Central America’s leading independent news organizations, based in El Salvador. It was the first case filed by journalists against NSO in U.S. court. A judge dismissed the case in March, but it is currently on appeal. 

    “We felt it was important that victims have access to courts in order to hold NSO Group to account,” said Krent, the Knight attorney. “At the end of the day, they are facing the most serious threats from the use of this spyware.”

    The post Pegasus Spyware Victims Ask U.K. Police to Charge Shadowy NSO Group appeared first on The Intercept.

    This post was originally published on The Intercept.

  • Anas Altikriti was in London, and busy, on the day in July 2020 when his phone was hacked. He frequently works as a hostage negotiator and, at the time, he was negotiating a deal to free a hostage being held on the Libya–Chad border. Altikriti also had a meeting with former Labour Party leader Jeremy Corbyn. But his schedule did not include having his phone infiltrated by Pegasus, the phone hacking software made by Israel’s NSO Group. 

    Four years later, Altikriti, an Iraqi-born British citizen and vocal critic of the United Arab Emirates, is filing a report to the Metropolitan Police in London accusing the Israeli spyware firm NSO Group of complicity in the targeted hacking of his phone. On Wednesday, he filed the complaint about NSO and its associates alongside three fellow U.K.-based human rights defenders whose phones were also hacked.

    “This case has some real legs,” said Leanna Burnard, a lawyer at the nonprofit Global Legal Action Network, who prepared the complaint. “The U.K. shouldn’t stand for the hacking of human rights defenders on its own soil.”

    Assembled with the help of advocates from GLAN on behalf of the victims, the extensively footnoted filing sent to the Metropolitan Police, which was obtained by The Intercept, puts the ball in the police’s court. The police now have discretion over whether to open an investigation and subsequently bring charges.

    “The U.K. shouldn’t stand for the hacking of human rights defenders on its own soil.”

    “Due to regulatory constraints, we cannot confirm or deny any alleged specific customers,” Gil Lanier, vice president for global communications at NSO, told The Intercept. “NSO complies with all laws and regulations and sells its technologies exclusively to vetted intelligence and law enforcement agencies. Our customers use these technologies daily, as Pegasus continues to play a crucial role in thwarting terrorist activities, breaking up criminal rings, and saving thousands of lives.”

    The Metropolitan Police declined to comment.

    The U.S. blacklisted NSO in 2021 after its software was accused of enabling human rights abuses by the company’s authoritarian government clients. Amnesty International has said NSO was complicit in many of these phone hackings. 

    The cyber spying firm, however, has never been sanctioned in the U.K., despite calls from members of Parliament. The failure to act was particularly jarring because the government itself had been a target of the software. In 2022, cybersecurity researchers at Citizen Lab said that the U.K. prime minister’s office and the Foreign Office likely had been victims of multiple Pegasus attacks, with the UAE as the main suspect. 

    While prosecutors around the world have investigated criminal claims against NSO in countries, including Spain, Hungary, and Poland, so far there have been no formal charges.

    The complaint against NSO to London police has been two years in the making, since lawyers began investigating the hackings victims on British soil. Lawyers on the case said they hoped the police report could lead to a landmark moment for human rights defenders who have been targeted. Altikriti, alongside the other complainants, certainly hopes so. 

    “This has to be exposed,” he said. “We are now talking about a potential world where literally no one can ever claim to enjoy anything called privacy.”

    Hacked on British Soil

    Alongside Altikriti, the hacking victims include include Azzam Tamimi, a Palestinian-born British journalist and academic, a prominent critic of the Saudi regime; Mohammed Kozbar, a Lebanese-born British citizen and the leader of the Finsbury Park mosque; and Yusuf Al Jamri, a Bahraini human rights activist who was granted asylum in the U.K. All were hacked between 2018 and 2021 on British soil.

    Their complaint to the police is being made against NSO Group and its board members; the firm’s parent company Luxembourg-based Q Cyber Technologies; London-based private equity firm Novalpina, which bought NSO in 2019. The human rights activists are alleging the people involved with NSO breached the U.K.’s Computer Misuse Act by enabling state actors to hack their phones using Pegasus. (Novalpina did not respond to a request for comment.)

    The hackers in question are believed to be the Kingdom of Saudi Arabia, the UAE, and the Kingdom of Bahrain. 

    The U.K. recently became more significant to NSO’s operations. In 2023, the management of five NSO-linked companies was moved to London and two U.K.-based officers were appointed. 

    Meanwhile, NSO continues to face a slew of civil cases in the U.S., with the company moving for dismissal in lawsuits by hacked Salvadoran journalists and Hanan Elatr Khashoggi, the widow of murdered journalist Jamal Khashoggi

    Last week, Apple asked a court in San Francisco to dismiss its three-year hacking suit against NSO, after Israeli officials took files from NSO’s headquarters — an apparent attempt to frustrate lawsuits in the U.S. Apple argued it may now never be able to get the most critical files about Pegasus and that the revelation of its own defensive systems in court might aid other spyware companies. 

    “NSO is very vigorously defending these lawsuits,” said Stephanie Krent, attorney at the Knight First Amendment Institute. “It is trying to draw litigation out and really avoid being held to account.” 

    Absolute Non-reaction”

    In July 2021, Altikriti was notified by The Guardian as part of its Pegasus Project that his number was on a leaked list of those suspected to be hacked. According to The Guardian, Altikriti’s phone number was on a list of people of interest to the UAE given to NSO. Altikriti was concerned but not surprised.

    For many years, he had been vocally critical of the UAE, where he previously lived. The UAE designated his organization, the Cordoba Foundation — which works to promote dialogue and rapprochement between Islam and the West — as a terrorist group in 2014. In response, the organization issued a statement calling the country a “despotic regime seeking to silence any form of dissent.” He made similar declarations about the UAE over the following years. 

    Around the time Altikriti was hacked in July 2020, he had been working on several hostage release deals, mainly in the Middle East. He alleges that phone hacking interfered with his communications related to one deal.

    After he was notified of the potential hack, Altikriti’s phone was tested by Amnesty International and Citizen Lab at the University of Toronto, which studies cyber issues affecting human rights. The hack was confirmed. Altikriti quickly went public about the cyberattack, posting a statement calling on the U.K. government to stand against the use of such spyware. Altikriti has since become increasingly frustrated by the lack of action. 

    “You think that the U.K. Government, having seen a number of its own citizens and those on its lands being violated in the way that we have evidence now, would do something,” Altrikiti told The Intercept. “But so far we have seen an absolute non-reaction.” 

    In 2022, Altikriti and Kozbar, one of the other human rights activists behind the complaint to police, sent a pre-claim notice to NSO, the UAE, and Saudi Arabia, of their intention to file a civil suit over the alleged Pegasus phone hacking. In formal response letter obtained by The Intercept, NSO said there was “no basis for the claims.”

    The company said that since Q Cyber Technologies Ltd and NSO Group Technologies Ltd are each Israeli companies and neither was present in England and Wales, English courts had no jurisdiction over them. It also argued that the claims were barred by state immunity because, if the alleged attacks happened, they were conducted on behalf of foreign governments, which are immune from prosecution.

    In Wednesday’s complaint to police, the other claimants have stories similar to Altikriti’s. Al Jamri was active on social media promoting awareness of human rights abuses and political issues in Bahrain. In 2011, he was politically active during the Arab Spring. In its wake, he was regularly subjected to interrogation and harassment by authorities. He was detained for the third time in August 2017 and subjected to torture. Upon his release, he sought asylum in the U.K.

    Two years later, Al Jamri was targeted with Pegasus by servers traced to Bahrain, according to Citizen Lab. This happened around the same time he was posting about an incident at Bahrain’s embassy in London, when a dissident was allegedly assaulted by staff. In August 2019, like Altikriti, Al Jamri was notified by The Guardian, and his phone was subsequently tested and confirmed to have been infected. He also went public about the hack.

    U.S. Lawsuits 

    Despite Apple’s attempt to withdraw its case, NSO still faces a slew of lawsuits in the U.S. In October 2019, WhatsApp filed a lawsuit against the Israeli company for using its platform to hack the phones of 1,400 of the chat app’s users. NSO has repeatedly tried to get the case thrown out, including by claiming sovereign immunity — that it acted as an agent of foreign governments — though that effort was rejected in January. 

    In November 2021, the same month NSO was blacklisted by the U.S. government for its role enabling human rights abuses, Apple also filed its case against NSO to hold it accountable for the surveillance and targeting of its users. On September 13, the company moved to dismiss its case, saying that Israeli officials’ seizures of NSO documents “were part of an unusual legal maneuver created by Israel to block the disclosure of information about Pegasus.”

    NSO is known to have a close relationship with the Israeli government, which it claims to have been working with during Israel’s war on Gaza. In November, in an attempt to rehabilitate its image, NSO sent an urgent letter to request a meeting with Secretary of State Antony Blinken and officials at the U.S. State Department, citing the threat of Hamas. 

    In 2022, the Knight Institute filed its lawsuit on behalf of current and former journalists of El Faro, one of Central America’s leading independent news organizations, based in El Salvador. It was the first case filed by journalists against NSO in U.S. court. A judge dismissed the case in March, but it is currently on appeal. 

    “We felt it was important that victims have access to courts in order to hold NSO Group to account,” said Krent, the Knight attorney. “At the end of the day, they are facing the most serious threats from the use of this spyware.”

    The post These Human Rights Defenders Were Hacked by Pegasus. Now They Want Police to Charge the Spyware Maker. appeared first on The Intercept.

    This post was originally published on The Intercept.

  • Two large-scale, coordinated attacks this week rocked Lebanon — the latest iteration in a historical pattern of booby-trapping electronics.

    On Tuesday, one attack caused pagers to explode across Lebanon and Syria, injuring thousands of people and killing at least 12, including a 9-year-old girl and an 11-year-old boy. A second wave of bombings unfolded on Wednesday, when explosives detonated inside a slew of hand-held radios across the country, leaving nine dead and 300 wounded, according to reports.

    Israel, which is widely assumed to be behind both attacks, reportedly booby-trapped pagers used by Hezbollah members and carried out a similar feat with the hand-held radios. The bombings appear to be supply-chain attacks — meaning the gadgets were tampered with or outright replaced with rigged devices containing explosives and a detonator at some point prior to arriving in the hands of the targets. Hezbollah, which also attributes the exploding electronics to Israel, had reportedly recently switched en masse to using pagers for communications to evade Israeli surveillance.

    The scale of the coordinated attacks was shocking, but the tactic of turning an electronic gadget into an explosive device is not unprecedented. In fact, it dates back at least half a century, according to U.S. military documents.

    A diagram of a booby-trapped communications headset. U.S. Department of the Army Field Manual

    Field Manual 5-31, titled simply “Boobytraps” and first published by the U.S. Department of the Army in 1965, describes the titular objects as explosive charges “cunningly contrived to be fired by an unsuspecting person who disturbs an apparently harmless object or performs a presumably safe act.” The 130-page manual provides an array of intricate wiring diagrams and cross-sectional schematics for booby-trapping various devices ranging from office equipment like desks and telephone list finders (early phone directories) to kitchenware like pots and kettles, as well as items like televisions and beds. 

    The manual also describes a World War II-era booby-trapped communications headset “containing an electric detonator connected to the terminals on the back. The connection of the headset into the live communication line initiated detonation.”

    An earlier edition of the “Boobytraps” field manual from the same year includes a diagram of a desk phone rigged with an explosive charge in its base. It states that “a phony telephone can be manufactured that will detonate when an attempt is made to use the instrument.” This diagram was omitted from the later version of the manual.

    A schematic depicts an exploding desk phone. U.S. Department of the Army Field Manual

    Another Army manual from 1966, TM 31-200-1, covering “Unconventional Warfare Devices and Techniques,” further elaborates that while the “test history” of the booby-trapped headset is not known, the concept nonetheless “appears to be workable” and that “even a small charge of explosives detonated near the ear will cause serious injury.”

    Some 30 years later, in 1996, the Israeli Security Agency, also known as Shin Bet, is said to have used a similar technique to detonate a small charge of explosives near the ear of Hamas bomb-maker Yahya Ayyash. Knowing that Ayyash used the phones of his friends, orchestrators of the assassination managed to deliver a rigged phone to a relative of one of Ayyash’s childhood pals. When Ayyash answered the booby-trapped phone, reports suggest, his communications were intercepted by aerial surveillance and Shin Bet remotely detonated the device, killing him.

    Communications devices are not the only electronics that have been turned into explosives. An issue of Inspire from 2010, a magazine published by Al Qaeda, contains an article by Ikrimah Al-Muhajir of the “Explosives Department,” elaborating at length on how a printer was booby-trapped to include explosives in the ink cartridge. According to the article, bomb-makers used a circuit from a Nokia cellphone to allow the device to pass through airport security undetected.

    Al-Muhajir claimed that the molecular number of the printer’s toner was close to the molecular number of the explosive compound used, while the cellphone’s circuitry helped it blend in with the internal circuits of the printer. Two such devices were discovered owing to intelligence about the plot, but only after they had already made their way onto numerous planes.

    More recently, Ecuadorian journalists in 2023 were sent booby-trapped USB sticks, which, when plugged into their computers, exploded and injured a television presenter. The USB sticks are said to have used the same type of explosive compound, RDX, used in the assassination of Ayyash. 

    In the aftermath of the coordinated attacks in Lebanon and Syria, more details might emerge on how the devices were manipulated to explode. But the takeaway for now is that electronic communications tools aren’t merely susceptible to surveillance — they can, in fact, be turned into weapons themselves. 

    Update: September 18, 2024, 1:48 p.m. ET
    The story has been updated with new information regarding the two children who were killed in the pager attack on Tuesday as well as a similar coordinated attack on Wednesday in Lebanon that targeted hand-held radios.

    The post A Brief History of Booby-Trapping Electronics to Blow Up appeared first on The Intercept.

    This post was originally published on The Intercept.

  • Right now, much of the food sent to space is dehydrated or vacuum-packed, but fresh food might be on the horizon for space explorers. The University of California has genetically engineered a cherry tomato plant that is currently undergoing NASA observation at the Kennedy Space Center in Florida. Within the next year, researchers hope that its seeds will be the first ever to germinate in the International Space Station’s Advanced Plant Habitat laboratory, 260 miles above Earth.

    The hope is that the cherry tomato seeds will produce fruit, and then that fruit will produce more seeds, which will be planted again to produce more fruit. “It’s going to be a seed-to-a-seed-to-a-seed, which has never been done before in space,” said Robert Jinkerson, an associate professor of chemical and environmental engineering at the University of California.

    A person holding cherry tomatoes. Pexels

    Why NASA is growing plants in space 

    Eventually, the University of California’s cherry tomato seeds might provide astronauts with a sustained, tasty, and fresh source of nutrition—vital for the type of long space exploration trips NASA wants to see in the future. In the 2030s, the US government agency is hoping to send astronauts to Mars, for example.

    “A lack of vitamin C was all it took to give sailors scurvy, and vitamin deficiencies can cause a number of other health problems,” notes the NASA website. “Simply packing some multivitamins will not be enough to keep astronauts healthy as they explore deep space. They will need fresh produce.”

    There are many projects underway that aim to provide astronauts with fresh produce. On the International Space Station, there is a small vegetable production system called Veggie that can hold up to six plants at a time. In Veggie, each plant grows in its own special “pillow,” which is filled with a clay-like material and fertilizer and helps to give plant roots the right amount of air, water, and nutrients (without these pillows, water and air would behave strangely in space, either flooding the roots with too much water or trapping them in air bubbles).

    So far, Veggie has produced a range of plants, including red Russian kale, zinnia flowers, and three different types of lettuce.

    “Good food, proper food with a lot of variety, tailored to the needs of the individual astronauts is crucial for a successful deep space mission. I think people underestimate how important it is.” —Sonja Brungs, ESA’s astronaut operations deputy lead, to the BBC

    There’s also the Advanced Plant Habitat, where NASA hopes to plant cherry tomato seeds. It’s similar to Veggie, as it uses clay material and slow-release fertilizer, as well as LED lights, but it also differs in many ways. The habitat is fully enclosed and runs automatically, with more than 180 sensors that track conditions such as water, air, moisture, and temperature.

    The Advanced Plant Habitat system is controlled by a team at the Kennedy Space Center. When plants are ready to be studied, astronauts collect samples, preserve them, and send them back to Earth to see how space affects their growth.

    Lettuce growing at NASA. NASA/Cory Huston

    What other foods are suitable for space?

    Jinkerson’s focus isn’t just cherry tomatoes. The professor is also working on a mushroom that can be grown in space, after receiving $250,000 from NASA to create a compact system that produces enough mushrooms to provide astronauts with around 4,000 calories per day.

    Research suggests that fungi will be a key part of space diets in the future. “Fungi is very versatile,” Carlos Otero, who works in the R&D team at Mycorena, said. The Swedish food technology brand also has funding from NASA and is currently working on a mycoprotein made from a mix of fungi and microalgae. “It can grow on different substrates, it grows fast and you can design a small and efficient system capable of producing enough food for the crew,” he added. “It is also very robust, resistant to radiation, and easy to store and transport.”

    In 2022, a vegan fungi protein bioreactor from Chicago-based company Nature’s Fynd also orbited aboard SpaceX-25, a Commercial Resupply Services (CRS) mission to the International Space Station that was a collaboration between NASA and Elon Musk’s spacecraft company SpaceX.

    The bioreactor grew Fy, Nature’s Fynd’s own nutritional fungi protein. “Our breakthrough fermentation system is relatively simple, uses minimal energy and water, and delivers a nutritious protein that is easy to harvest, with little to no waste in a matter of days—as perfect for space as it is here on Earth,” said Thomas Jonas, CEO and co-founder of Nature’s Fynd at the time.

    Experts have also experimented with growing cultivated meat in space. After all, it would be extremely difficult to produce traditional meat on board a spaceship. In 2019, Aleph Farms worked with the International Space Station to grow the first-ever piece of cultivated meat in space.

    “Developed countries have the historical opportunity to move away from farming and killing animals, being a very inefficient process to produce food, unsustainable for the planet, dangerous for our health and raising more and more ethical concerns among the population.”—ESA engineer Paolo Corradi

    In 2023, the ESA revealed it was also exploring the idea of cultivated meat in space. Its study teams concluded that the idea is “not far-fetched,” although further research is required.

    ESA engineer Paolo Corradi noted that the research could lead to much-needed food system transformation on Earth, too. “The feeling is that we are at the beginning of a process that could transform the industry, making the conventional meat production model obsolete,” he said.

    This post was originally published on VegNews.com.

  • Nomophobia: the fear of being without one’s smartphone. What’s so smart about that?

    The post Nomophobia first appeared on Dissident Voice.

    This post was originally published on Dissident Voice.

  • In 2023, the fast fashion giant Shein was everywhere. Crisscrossing the globe, airplanes ferried small packages of its ultra-cheap clothing from thousands of suppliers to tens of millions of customer mailboxes in 150 countries. Influencers’ “#sheinhaul” videos advertised the company’s trendy styles on social media, garnering billions of views.

    At every step, data was created, collected, and analyzed. To manage all this information, the fast fashion industry has begun embracing emerging AI technologies. Shein uses proprietary machine-learning applications — essentially, pattern-identification algorithms — to measure customer preferences in real time and predict demand, which it then services with an ultra-fast supply chain.
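
    Shein has not disclosed how these systems work internally, but “predicting demand” of this kind generally means fitting a model to recent engagement and sales signals for each item and using the forecast to size the next small production batch. A minimal sketch of that general idea, using hypothetical feature names and invented numbers rather than anything from Shein’s actual pipeline, might look like this:

    ```python
    # Hypothetical sketch of short-horizon demand forecasting of the kind
    # described above. Feature names and numbers are invented for illustration;
    # this is not Shein's actual system.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Each row describes one item: [views_last_24h, add_to_carts_last_24h, units_sold_last_24h]
    recent_signals = np.array([
        [12_000, 480, 150],
        [3_000, 60, 20],
        [45_000, 2_100, 700],
    ])
    # Target: units the same items sold over the following 24 hours.
    next_day_sales = np.array([180, 15, 820])

    model = LinearRegression().fit(recent_signals, next_day_sales)

    # Forecast demand for a new item from its live signals, then size the first
    # production run to the forecast rather than to a months-ahead seasonal guess.
    new_item = np.array([[20_000, 900, 260]])
    print(f"forecast next-day demand: {model.predict(new_item)[0]:.0f} units")
    ```

    In practice such models are far more elaborate, but the loop of observing signals, forecasting, and producing a small batch is the “real-time” retail pattern described below.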

    As AI makes the business of churning out affordable, on-trend clothing faster than ever, Shein is among the brands under increasing pressure to become more sustainable, too. The company has pledged to reduce its carbon dioxide emissions by 25 percent by 2030 and achieve net-zero emissions no later than 2050. 

    But climate advocates and researchers say the company’s lightning-fast manufacturing practices and online-only business model are inherently emissions-heavy — and that the use of AI software to catalyze these operations could be cranking up its emissions. Those concerns were amplified by Shein’s third annual sustainability report, released late last month, which showed the company nearly doubled its carbon dioxide emissions between 2022 and 2023.

    “AI enables fast fashion to become the ultra-fast fashion industry, Shein and Temu being the fore-leaders of this,” said Sage Lenier, the executive director of Sustainable and Just Future, a climate nonprofit. “They quite literally could not exist without AI.” (Temu is a rapidly rising e-commerce titan, with a marketplace of goods that rival Shein’s in variety, price, and sales.)

    In the 12 years since Shein was founded, it has become known for its uniquely prolific manufacturing, which reportedly generated over $30 billion of revenue for the company in 2023. Although estimates vary, a new Shein design may take as little as 10 days to become a garment, and up to 10,000 items are added to the site each day. The company reportedly offers as many as 600,000 items for sale at any given time with an average price tag of roughly $10. (Shein declined to confirm or deny these reported numbers.) One market analysis found that 44 percent of Gen Zers in the United States buy at least one item from Shein every month. 

    That scale translates into massive environmental impacts. According to the company’s sustainability report, Shein emitted 16.7 million total metric tons of carbon dioxide in 2023 — more than what four coal power plants spew out in a year. The company has also come under fire for textile waste, high levels of microplastic pollution, and exploitative labor practices. According to the report, polyester — a synthetic textile known for shedding microplastics into the environment — makes up 76 percent of its total fabrics, and only 6 percent of that polyester is recycled.

    And a recent investigation found that factory workers at Shein suppliers regularly work 75-hour weeks, over a year after the company pledged to improve working conditions within its supply chain. Although Shein’s sustainability report indicates that labor conditions are improving, it also shows that in third-party audits of over 3,000 suppliers and subcontractors, 71 percent received a score of C or lower on the company’s grade scale of A to E — mediocre at best.

    Machine learning plays an important role in Shein’s business model. Although Peter Pernot-Day, Shein’s head of global strategy and corporate affairs, told Business Insider last August that AI was not central to its operations, he indicated otherwise during a presentation at a retail conference at the beginning of this year. 

    Peter Pernot-Day speaking at the Collision 2024 technology conference in June. Piaras Ó Mídheach / Sportsfile for Collision via Getty Images

    “We are using machine-learning technologies to accurately predict demand in a way that we think is cutting edge,” he said. Pernot-Day told the audience that all of Shein’s 5,400 suppliers have access to an AI software platform that gives them updates on customer preferences, and they change what they’re producing to match it in real time. 

    “This means we can produce very few copies of each garment,” he said. “It means we waste very little and have very little inventory waste.” On average, the company says it stocks between 100 to 200 copies of each item — a stark contrast with more conventional fast fashion brands, which typically produce thousands of each item per season, and try to anticipate trends months in advance. Shein calls its model “on-demand,” while a technology analyst who spoke to Vox in 2021 called it “real-time” retail.

    At the conference, Pernot-Day also indicated that the technology helps the company pick up on “micro trends” that customers want to wear. “We can detect that, and we can act on that in a way that I think we’ve really pioneered,” he said. A designer who filed a recent class action lawsuit in a New York District Court alleges that the company’s AI market analysis tools are used in an “industrial-scale scheme of systematic, digital copyright infringement of the work of small designers and artists” that scrapes designs off the internet and sends them directly to factories for production.

    In an emailed statement to Grist, a Shein spokesperson reiterated Peter Pernot-Day’s assertion that technology allows the company to reduce waste and increase efficiency and suggested that the company’s increased emissions in 2023 were attributable to booming business. “We do not see growth as antithetical to sustainability,” the spokesperson said.

    An analysis of Shein’s sustainability report by the Business of Fashion, a trade publication, found that last year, the company’s emissions rose at almost double the rate of its revenue — making Shein the highest-emitting company in the fashion industry. By comparison, Zara’s emissions rose half as much as its revenue. For other industry titans, such as H&M and Nike, sales grew while emissions fell from the year before. 

    Shein’s emissions are especially high because of its reliance on air shipping, said Sheng Lu, a professor of fashion and apparel studies at the University of Delaware. “AI has wide applications in the fashion industry. It’s not necessarily that AI is bad,” Lu said. “The problem is the essence of Shein’s particular business model.” 

    Other major brands ship items overseas in bulk, prefer ocean shipping for its lower cost, and have suppliers and warehouses in a large number of countries, which cuts down on the distances that items need to travel to consumers. 

    According to the company’s sustainability report, 38 percent of Shein’s climate footprint comes from transportation between its facilities and to customers, and another 61 percent comes from other parts of its supply chain. Although the company is based in Singapore and has suppliers in a handful of countries, the majority of its garments are produced in China and are mailed out by air in individually addressed packages to customers. In July, the company sent about 900,000 of these to the U.S. every day.

    A group of activists protesting during Black Friday in Barcelona, Spain, in November 2023. Marc Asensio / NurPhoto via Getty Images

    Shein’s spokesperson told Grist that the company is developing a decarbonization roadmap to address the footprint of its supply chain. Recently, the company has increased the amount of inventory it keeps stored in U.S. warehouses, allowing it to offer American customers quicker delivery times, and increased its use of cargo ships, which are more carbon efficient than cargo planes.

    “Controlling the carbon emissions in the fashion industry is a really complex process,” Lu said, adding that many brands use AI to make their operations more efficient. “It really depends on how you use AI.”

    There is research that indicates using certain AI technologies could help companies become more sustainable. “It’s the missing piece,” said Shahriar Akter, an associate dean of business and law at the University of Wollongong in Australia. In May, Akter and his colleagues published a study finding that when fast fashion suppliers used AI data management software to comply with big brands’ sustainability goals, those companies were more profitable and emitted less. A key use of this technology, Akter says, is to closely monitor environmental impacts, such as pollution and emissions. “This kind of tracking was not available before AI-based tools,” he said.

    Shein didn’t reply to a request for comment on whether it uses machine learning data management software to track emissions, which is one of the uses of AI included in Akter’s study. But the company’s much-touted usage of machine-learning software to predict demand and reduce waste is another of the uses of AI included in the research. 

    Regardless, the company has a long way to go before meeting its goals. Grist calculated that the emissions Shein reportedly saved in 2023 — with measures such as providing its suppliers with solar panels and opting for ocean shipping — amounted to about 3 percent of the company’s total carbon emissions for the year.

    Lenier, from Sustainable and Just Future, believes there is no ethical use of AI in the fast fashion industry. She said that the largely unregulated technology allows brands to intensify their harmful impacts on workers and the environment. “The folks who work in fast fashion factories are now under an incredible amount of pressure to turn out even more, even faster,” she said. 

    Lenier and Lu both believe that the key to a more sustainable fashion industry is convincing customers to buy less. Lu said if companies use AI to boost their sales without changing their unsustainable practices, their climate footprints will also grow accordingly. “It’s the overall effect of being able to offer more market-popular items and encourage consumers to purchase more than in the past,” he said. “Of course, the overall carbon impact will be higher.”

    This story was originally published by Grist with the headline As fast fashion giant Shein embraces AI, its emissions are soaring on Sep 10, 2024.

    This post was originally published on Grist.

  • The ongoing Israeli assault on Gaza has triggered tense, at times hostile, reckonings across American tech companies over their role in the killing. Since October 7, tech workers have agitated for greater transparency about their employers’ work for the Israeli military and at times vehemently protested those contracts.

    IBM, which has worked with the Israeli military since the 1960s, is no exception: For months after the war’s start, workers repeatedly pressed company leadership — including its chief executive — to divulge and limit its role in the Israeli offensive that has so far killed over 40,000 Palestinians. For many workers, the question of where IBM might draw the line with foreign governments is particularly fraught given the company’s grim track record of selling computers and services to both apartheid South Africa and Nazi Germany.

    On June 6, CEO Arvind Krishna addressed these concerns in a livestreamed video Q&A session.

    For IBM workers worried about where the company draws the line, his response has sparked only greater consternation.

    According to records of the presentation reviewed by The Intercept, Krishna told employees that IBM’s foreign business wouldn’t be shaped by the company’s own values or humanitarian guidelines.

    Rather, Krishna explained, when working for governments, IBM believes the customer is always right: 

    We try to operate with the principles that are encouraged by the governments of the countries we are in. We are a U.S. headquarter company. So, what does the U.S. federal government want to do on international relations? That helps guide a lot of what we do. We operate in many countries. We operate in Israel, but we also operate in Saudi Arabia. What do those countries want us to do? And what is it they consider to be correct behavior?

    For IBM employees worried that business interests would override ethical considerations, this answer provided little reassurance. It also echoed, intentionally or not, the company’s defense when workers had protested IBM’s sale of computer services to apartheid South Africa. According to Kwame Afoh, an IBM employee who organized against the company’s South African ventures in the 1970s, the company’s go-to internal rationale was “We don’t set foreign policy but rather we follow the lead of the U.S. government in foreign business dealings.” 

    Krishna continued by claiming IBM would not help build weapons — not because doing so is morally wrong, but because the company doesn’t have a system of judging right from wrong. “We will not work on offensive weapons programs,” Krishna explained. “Why? I am not taking any kind of moral or ethical judgment. I think that should be on each country who does those. The reason we don’t is, we do not have the internal guardrails to decide whether the technology applies in a good way or unethical way for offensive weapons.”

    Though it may not build weapons itself, IBM has long helped run the military that carries them. In 2020, the company split a roughly $275 million contract to build data centers that would handle Israeli military logistics, including “combat equipment,” according to Israeli outlet TheMarker. That same year, an executive with IBM subsidiary Red Hat told an Israeli business publication “we see ourselves as partners of the IDF.”

    IBM did not respond to a request for comment.

    Some IBM employees who spoke to The Intercept on the condition of anonymity say they were unnerved or upset by their CEO’s remarks, including one who described them as “predictably shameful.” This person said that while some were glad Krishna had even broached the topic of IBM and Israel, “the responses I heard in one-on-one discussions were overwhelmingly dissatisfied or outraged.” Another IBM worker characterized Krishna’s comments as an “excuse for him to hide behind the US government’s choices in a business sense,” adding that “with the track record that IBM has with taking part in genocidal government projects, it certainly doesn’t help his case in any valuable moral way whatsoever.”



    The company’s stance in its closed-door staff discussion is markedly different from its public claims. Like its major rivals, IBM says its business practices are constrained by various human rights commitments, principles that in theory ask the company to avoid harm in the pursuit of profit. When operating in a foreign country, such commitments ostensibly prevent a company like IBM from simply asking “What do those countries want us to do?” as Krishna put it.

    Related

    Israeli Weapons Firms Required to Buy Cloud Services From Google and Amazon

    But like its competitors, IBM’s human rights language is generally feel-good verbiage that gestures at ethical guidelines without spelling any of them out. “Our definition of corporate responsibility includes environmental responsibility, as well as social concerns for our workforce, clients, business partners, and the communities where we operate,” the company’s “human rights principles” page states. “IBM has a strong culture of ethics and integrity.”

    The only substance to be found here is in reference to third-party human rights frameworks, namely those issued by the United Nations. IBM says its “corporate responsibility standards” are “informed by” the U.N. Guiding Principles on Business and Human Rights, which asks its adherents to “prevent or mitigate adverse human rights impacts that are directly linked to their operations, products or services by their business relationships, even if they have not contributed to those impacts.” 

    These guidelines, endorsed by the U.N. Human Rights Council in 2011, stress that “Some operating environments, such as conflict-affected areas, may increase the risks of enterprises being complicit in gross human rights abuses committed by other actors (security forces, for example).” The document further notes that such conflict zone abuses may create corporate liability before the International Criminal Court, which in April charged Israeli Prime Minister Benjamin Netanyahu with crimes against humanity stemming from the Gaza assault. Google, Microsoft, and Amazon, which also sell technology services to the Israeli military, similarly say they subscribe to the voluntary, nonbinding U.N. guidelines.

    The post IBM CEO: We Listen to What Israel and Saudi Arabia Consider “Correct Behavior” appeared first on The Intercept.

    This post was originally published on The Intercept.

  • Artificial Intelligence, or AI for short, no longer exists strictly within the confines of tech companies and Hollywood blockbusters. Once a tenet of science fiction and futuristic horror films, artificial intelligence has been piquing the curiosity of scientists since the middle of the 20th century when Allen Newell, Cliff Shaw, and Herbert Simon unveiled the world’s first AI program aimed at proving mathematical theorems by mimicking human problem solving. But today, this field of study has rapidly evolved, becoming the hot topic on everyone’s mind—and it has come a long way from its mathematical origins. Now, AI is driving vegan innovation as plant-based companies use it to emulate the taste, texture, and functionality of animal products without the cruelty. Check out how three vegan companies are leveraging this new technology to help feed the world.

    Vegan Brie cheese. Climax Foods

    Climax Foods

    Their goal: Create a new generation of affordable, planet-friendly vegan foods
    What they make: Hyper-realistic cheese—already adopted by food industry veterans like Michelin-starred chef Dominique Crenn—with the help of the world’s first plant-based casein
    How they do it: The company’s AI-enabled Deep Plant Intelligence platform analyzes the characteristics of animal-based foods on a molecular level. “We continuously train our portfolio of machine intelligence tools to enable the plant-based recreation of any taste and texture, while optimizing for nutrition and lowering costs,” says Karthik Sekar, PhD, Head of Data Science. Employing its findings and using the vast ingredients available in the plant kingdom, Climax Foods is able to create delectable vegan cheeses—like its funky Blue and bloomy Brie varieties—that are indistinguishable from their dairy counterparts.

    Vegan chicken and steak. Meati Foods

    Meati

    Their goal: Harness the biotechnological power of fungi to meet the world’s sustainability and nutritional needs
    What they make: Healthier, tastier, more nutritious plant-based meat
    How they do it: Made from mycelium (the fast-growing root system of mushrooms), Meati’s whole-cut steaks and chicken cutlets are beloved for their realistic taste and texture. And now, the company is turning to AI to do even more. With studies showing a desire among consumers for improved meat alternatives—including increased protein and less sodium—Meati’s next phase will be to optimize the nutritional profile of its range. And its new partnership with AI food science company PIPA—allowing it to analyze vast amounts of data on the power of mycelium—will help do just that.

    Oscar Mayer vegan hot dog. The Kraft Heinz Not Company

    NotCo

    Their goal: Create and usher in a new food industry devoid of animal foods
    What they make: Ingenious formulations that replicate meat and cheese on a molecular level
    How they do it: From juicy chicken patties to nostalgic Kraft mac and cheese, NotCo’s AI platform Giuseppe is helping to break away from standard industry formulations to perfect plant-based alternatives by bringing over 300,000 edible plants and billions of combinations to NotCo’s fingertips. First, Giuseppe analyzes the molecular structure of animal foods and replicates them using only animal-free ingredients—a process that has resulted in unique formulations and near-identical alternatives (think pineapple and cabbage to recreate the flavor and mouthfeel of dairy milk). The AI then generates recipes for NotCo chefs and scientists to test and provide feedback—helping Giuseppe to learn, get faster, and improve even more.
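
    NotCo has not published Giuseppe’s internals, but conceptually the matching step it describes amounts to searching combinations of plant ingredients for a blend whose profile sits closest to a target profile of the animal product. A toy sketch of that search, with invented ingredient profiles and numbers rather than NotCo data, could look like this:

    ```python
    # Toy illustration of target-profile matching of the kind described above.
    # Ingredient profiles and numbers are invented; this is not NotCo's platform.
    from itertools import combinations
    import numpy as np

    # Simplified profiles: [sweetness, fat-like mouthfeel, sulfur notes]
    ingredients = {
        "pineapple": np.array([0.8, 0.1, 0.0]),
        "cabbage":   np.array([0.2, 0.1, 0.6]),
        "oat":       np.array([0.3, 0.4, 0.0]),
        "coconut":   np.array([0.4, 0.9, 0.0]),
    }
    dairy_milk_target = np.array([0.5, 0.5, 0.3])

    def distance_to_target(pair):
        """Distance between a two-ingredient blend's averaged profile and the target."""
        blend = (ingredients[pair[0]] + ingredients[pair[1]]) / 2
        return float(np.linalg.norm(blend - dairy_milk_target))

    # Try every two-ingredient combination and keep the closest match.
    best_pair = min(combinations(ingredients, 2), key=distance_to_target)
    print(best_pair, round(distance_to_target(best_pair), 3))
    ```

    A real system would search a vastly larger space and fold in the tasting feedback the article describes, but the core step is the same: score candidate combinations against the target and iterate on the closest ones.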

    This post was originally published on VegNews.com.

  • I am from rural America, sort of. I’m an intellectual, sort of. I’m certainly on the political left, but some comrades believe I’ve turned conservative.

    Like many people, I don’t fit easily into conventional labels used in today’s polarized political debates. To understand me—and anyone else—takes some sorting out. Here’s how I sort myself out.

    I was born in North Dakota and grew up mostly in the big city of Fargo (well, it’s the largest city in the state). I never lived in a rural area, but I was a part of a larger rural culture, in which most everyone had some connection to the countryside through family, friends, or business. After living in several big cities during my professional life, I now live in northern New Mexico outside the small town of Taos, in a county with a smaller population than the university where I used to teach. Recent imports like me live alongside farmers and ranchers, interacting regularly through the acequia irrigation system.

    I’m not rural, but I like to think I understand rural.

    I started my professional life as a newspaper journalist before earning a PhD and becoming a professor at the University of Texas at Austin. But once I secured the guaranteed employment that comes with tenure, I walked away from the scholarly world of academic journals and conferences. I continued to teach but wrote for a general audience, immersing myself in a variety of community organizing projects.

    I was an intellectual by profession, but I never really wanted to be part of formal intellectual life.

    I’ve met intellectuals who assume rural life is bereft of intellectual activity. And I’ve met rural people who assume that intellectuals are condescending and annoying. There’s a kernel of truth in both assumptions. Since moving to a rural area, I have fewer opportunities for certain kinds of intellectual engagement; I don’t go to as many scholarly lectures as I did in Austin. At the same time, I don’t find myself wishing I was back in a faculty meeting and dealing with academic status-seeking. But I’ve met too many smart rural people and too many wonderful professors to fall back on stereotypes.

    As I explain in It’s Debatable: Talking Authentically about Tricky Topics, perhaps most important to my identity is that I’m a radical. My politics are based on a critique of systems and structures of power that create impediments to meaningful social justice and real ecological sustainability: patriarchy, white supremacy, capitalism, First-World domination, and the worship of high-energy/high-technology gadgets in an industrial worldview. But how I apply these analyses makes me both a part of the left and alienated from the left.

    Let’s start with patriarchy. I was first politicized by the radical feminist movement to challenge the sexual-exploitation industries (pornography, prostitution, stripping—the ways men buy and sell objectified female bodies for sexual pleasure). That form of radical politics goes to the heart of systems and structures of male power. I also embraced what is typically called a radical analysis of racism, economic inequality, and imperialism. I thought that this kind of consistent critique—going to the root of problems by focusing on systems of power—was what it meant to be on the left, but over time I realized that most of my left comrades didn’t much care for radical feminism. Over time, more and more leftists not only rejected the critique of the sexual-exploitation industries but celebrated “sex work,” sometimes even portraying it as liberating.

    When I started offering a critique of the ideology of the transgender movement, an analysis rooted in that radical feminism, I found myself not only disagreeing with left comrades but effectively being banished from left organizing groups. I learned quickly, starting in 2014, that a radical feminist critique of trans politics was unacceptable, even seen as a sign of closet conservatism.

    But that shunning didn’t mean I wanted to find a home on the right. Conservatives weren’t much interested in a feminist critique of male domination—many on the right see patriarchy as the “natural” state of human societies. Conservatives might share a concern about the sexual-exploitation industries and transgender ideology, but for very different reasons than feminists.

    Meanwhile, my focus on ecology and a deepening critique of technological fundamentalism—the belief that more technology can solve all ecological problems, including those created by previous technologies—has put me at odds with both right and left. Those who believe in the miracle of the market usually dismiss any talk of ecological collapse because free enterprise will save us. My left friends take environmental degradation and climate change more seriously but routinely argue that a more participatory democracy in a more socialist economy will save us.

    Across the political spectrum, it’s hard to find anyone who agrees that a sustainable human future requires us to put dramatic limits on our consumption of energy and material resources, while we also dramatically reduce the human population. Conservatives often believe that is what leftists are secretly planning for, but I meet very few leftists who advocate those goals. The majority of left environmentalists I meet believe that renewable energy, combined with amazing yet-to-be-invented inventions, will allow us to dodge collapse.

    I think I am making consistent and coherent arguments. But many of my left friends think I have abandoned left politics, even though we still agree on many issues. Conservatives will accept my political positions that seem in line with their own, though typically they aren’t interested in the radical analysis behind those positions.

    I have changed my mind about specific policy proposals over the past four decades—as new information and insights emerge, reasonable people should adapt. But my analytical framework remains unchanged. I focus not merely on individual choices but on how systems work, and I don’t ignore the data that suggests collapse is all but inevitable on our current trajectory.

    This leaves me largely in agreement with left comrades, but dealing with uncomfortable tensions when we disagree. Meanwhile, I’m at odds with right opponents most of the time, and when there is apparent agreement on policy there is an uncomfortable tension underneath.

    How do I sort out all these political tensions, and sort out myself? To friends, I have started describing myself as an “intellectual hick.” I have no problem defending my intellectual contributions but also am happy to be living at a healthy distance from official intellectual spaces. Even with neighbors who don’t agree with my politics, our shared interest in caring for the land and water creates deep bonds.

    How I label myself is less important than realizing that we all would benefit from sorting out ourselves. Once we critically self-reflect about our identities and ideas, it’s a lot easier talking with others about how they have sorted themselves out.

    The post “Intellectual Hick”: Sorting Out Our Complex Identities first appeared on Dissident Voice.

    This post was originally published on Dissident Voice.