Category: Technology

  • Unlike any other point in history, hackers, whistleblowers, and archivists now routinely make off with terabytes of data from governments, corporations, and extremist groups. These datasets often contain gold mines of revelations in the public interest and in many cases are freely available for anyone to download. 

    Revelations based on leaked datasets can change the course of history. In 1971, Daniel Ellsberg’s leak of military documents known as the Pentagon Papers helped bring about the end of the Vietnam War. The same year, an underground activist group called the Citizens’ Commission to Investigate the FBI broke into a Federal Bureau of Investigation field office, stole secret documents, and leaked them to the media. Among those documents was a reference to something called COINTELPRO. NBC reporter Carl Stern used Freedom of Information Act requests to publicly reveal that COINTELPRO was a secret FBI operation devoted to surveilling, infiltrating, and discrediting left-wing political groups. This stolen FBI dataset also led to the creation of the Church Committee, a Senate committee that investigated these abuses and reined them in.

    Huge data leaks like these used to be rare, but today they’re increasingly common. More recently, Chelsea Manning’s 2010 leaks of Iraq and Afghanistan documents helped spark the Arab Spring, documents and emails stolen by Russian military hackers helped elect Donald Trump as U.S. president in 2016, and the Panama Papers and Paradise Papers exposed how the rich and powerful use offshore shell companies for tax evasion.

    Yet these digital tomes can prove extremely difficult to analyze or interpret, and few people today have the skills to do so. I spent the last two years writing the book “Hacks, Leaks, and Revelations: The Art of Analyzing Hacked and Leaked Data” to teach journalists, researchers, and activists the technologies and coding skills required to do just this. While these topics are technical, my book doesn’t assume any prior knowledge: all you need is a computer, an internet connection, and the will to learn. Throughout the book, you’ll download and analyze real datasets — including those from police departments, fascist groups, militias, a Russian ransomware gang, and social networks — as practice. Along the way, you’ll engage head-on with the dumpster fire that is 21st-century current events: the rise of neofascism and the rejection of objective reality, the extreme partisan divide, and an internet overflowing with misinformation.

    My book officially comes out January 9, but it’s shipping today if you order it from the publisher here. Add the code INTERCEPT25 for a special 25 percent discount.

    The following is a lightly edited excerpt from the first chapter of “Hacks, Leaks, and Revelations” about a crucial and often underappreciated part of working with leaked data: how to verify that it’s authentic.

    Photo: Micah Lee

    You can’t believe everything you read on the internet, and juicy documents or datasets that anonymous people send you are no exception. Disinformation is prevalent.

    How you go about verifying that a dataset is authentic completely depends on what the data is. You have to approach the problem on a case-by-case basis. The best way to verify a dataset is to use open source intelligence (OSINT), or publicly available information that anyone with enough skill can find. 

    This might mean scouring social media accounts, consulting the Internet Archive’s Wayback Machine, inspecting metadata of public images or documents, paying services for historical domain name registration data, or viewing other types of public records. If your dataset includes a database taken from a website, for instance, you might be able to compare information in that database with publicly available information on the website itself to confirm that they match. (Michael Bazzell also has great resources on the tools and techniques of OSINT.)
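
    As a small illustration of what one of these checks can look like in practice, here is a minimal Python sketch that queries the Wayback Machine’s public availability API for the archived snapshot of a page closest to a given date. The helper function and example URL are illustrative only:

    import requests

    def closest_snapshot(url, timestamp=None):
        # Ask the Wayback Machine for the snapshot of `url` closest to
        # `timestamp` (YYYYMMDD). Returns None if nothing is archived.
        params = {"url": url}
        if timestamp:
            params["timestamp"] = timestamp
        resp = requests.get("https://archive.org/wayback/available",
                            params=params, timeout=30)
        resp.raise_for_status()
        return resp.json().get("archived_snapshots", {}).get("closest")

    # Example: what did this page look like around June 2021?
    snap = closest_snapshot("example.com", "20210601")
    if snap:
        print(snap["timestamp"], snap["url"])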

    Below, I share two examples of authenticating data from my own experience: one about a dataset from the anti-vaccine group America’s Frontline Doctors, and another about leaked chat logs from a WikiLeaks Twitter group. 

    In my work at The Intercept, I encounter datasets so frequently I feel like I’m drowning in data, and I simply ignore most of them because it’s impossible for me to investigate them all. Unfortunately, this often means that no one will report on them, and their secrets will remain hidden forever. I hope “Hacks, Leaks, and Revelations” helps to change that. 

    The America’s Frontline Doctors Dataset

    In late 2021, in the midst of the Covid-19 pandemic, an anonymous hacker sent me hundreds of thousands of patient and prescription records from telehealth companies working with America’s Frontline Doctors (AFLDS). AFLDS is a far-right anti-vaccine group that misleads people about Covid-19 vaccine safety and tricks patients into paying millions of dollars for drugs like ivermectin and hydroxychloroquine, which are ineffective at preventing or treating the virus. The group was initially formed to help Donald Trump’s 2020 reelection campaign, and the group’s leader, Simone Gold, was arrested for storming the U.S. Capitol on January 6, 2021. In 2022, she served two months in prison for her role in the attack.

    My source told me that they got the data by writing a program that made thousands of web requests to a website run by one of the telehealth companies, Cadence Health. Each request returned data about a different patient. To see whether that was true, I made an account on the Cadence Health website myself. Everything looked legitimate to me. The information I had about each of the 255,000 patients was the exact information I was asked to provide when I created my account on the service, and various category names and IDs in the dataset matched what I could see on the website. But how could I be confident that the patient data itself was real, that these people weren’t just made up?

    I wrote a simple Python script to loop through the 72,000 patients (those who had paid for fake health care) and put each of their email addresses in a text file. I then cross-referenced these email addresses with a totally separate dataset containing personal identifying information from members of Gab, a social network popular among fascists, anti-democracy activists, and anti-vaxxers. In early 2021, a hacktivist who went by the name “JaXpArO and My Little Anonymous Revival Project” had hacked Gab and made off with 65GB of data, including about 38,000 Gab users’ email addresses. Thinking there might be overlap between AFLDS and Gab users, I wrote another simple Python program that compared the email addresses from each group and showed me all of the addresses that were in both lists. There were several.
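
    A minimal sketch of what that kind of cross-referencing script can look like, assuming hypothetical file names and a simple CSV layout (the real datasets were structured differently):

    import csv

    def load_emails(path, column="email"):
        # Read one column of a CSV into a normalised set of addresses.
        with open(path, newline="", encoding="utf-8") as f:
            return {
                row[column].strip().lower()
                for row in csv.DictReader(f)
                if row.get(column)
            }

    aflds_emails = load_emails("aflds_patients.csv")  # hypothetical file name
    gab_emails = load_emails("gab_users.csv")         # hypothetical file name

    # The set intersection holds every address present in both datasets.
    for address in sorted(aflds_emails & gab_emails):
        print(address)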

    Armed with this information, I started scouring the public Gab timelines of users whose email addresses had appeared in both datasets, looking for posts about AFLDS. Using this technique, I found multiple AFLDS patients who posted about their experience on Gab, leading me to believe that the data was authentic. For example, according to consultation notes from the hacked dataset, one patient created an account on the telehealth site and four days later had a telehealth consultation. About a month after that, they posted to Gab saying, “Front line doctors finally came through with HCQ/Zinc delivery” (HCQ is an abbreviation for hydroxychloroquine).

    Having a number of examples like this gave us confidence that the dataset of patient records was, in fact, legitimate. You can read our AFLDS reporting at The Intercept — which led to a congressional investigation into the group — here.

    The WikiLeaks Twitter Group Chat

    In late 2017, journalist Julia Ioffe published a revelation in The Atlantic: WikiLeaks had slid into Donald Trump Jr.’s Twitter DMs. Among other things, before the 2016 election, WikiLeaks suggested to Trump Jr. that even if his father lost the election, he shouldn’t concede. “Hi Don,” the verified @wikileaks Twitter account wrote, “if your father ‘loses’ we think it is much more interesting if he DOES NOT conceed [sic] and spends time CHALLENGING the media and other types of rigging that occurred—as he has implied that he might do.”

    A long-term WikiLeaks volunteer who went by the pseudonym Hazelpress started a private Twitter group with WikiLeaks and its biggest supporters in mid-2015. After watching the group become more right-wing, conspiratorial, and unethical, and specifically after learning about WikiLeaks’ secret DMs with Trump Jr., Hazelpress decided to blow the whistle on the whistleblowing group itself. She has since publicly come forward as Mary-Emma Holly, an artist who spent years as a volunteer legal researcher for WikiLeaks.

    To carry out the WikiLeaks leak, Holly logged in to her Twitter account, made it private, unfollowed everyone, and deleted all of her tweets. She also deleted all of her DMs except for the private WikiLeaks Twitter group and changed her Twitter username. Using the Firefox web browser, she then went to the DM conversation — which contained 11,000 messages and had been going on for two-and-a-half years — and saw the latest messages in the group. She scrolled up, waited for Twitter to load more messages, scrolled up again, and kept doing this for four hours until she reached the very first message in the group. She then used Firefox’s Save Page As function to save an HTML version of the webpage, as well as a folder full of resources like images that were posted in the group.

    Now that she had a local, offline copy of all the messages in the DM group, Holly leaked it to the media. In early 2018, she sent a Signal message to the phone number listed on The Intercept’s tips page. At that time, I happened to be the one checking Signal for incoming tips. Using OnionShare — software that I developed for this purpose — she sent me an encrypted and compressed file, along with the password to decrypt it. After extracting it, I found a 37MB HTML file — so big that it made my web browser unresponsive when I tried opening it and which I later split into separate files to make it easier to work with — and a folder with 82MB of resources.

    How could I verify the authenticity of such a huge HTML file? If I could somehow access the same data directly from Twitter’s servers, that would do it; only an insider at Twitter would be in a position to create fake DMs that show up on Twitter’s website, and even that would be extremely challenging. When I explained this to Holly (who, at the time, I still knew only as Hazelpress), she gave me her Twitter username and password. She had already deleted all the other information from that account. With her consent, I logged in to Twitter with her credentials, went to her DMs, and found the Twitter group in question. It immediately looked like it contained the same messages as the HTML file, and I confirmed that the verified account @wikileaks frequently posted to the group.

    Following these steps made me extremely confident in the authenticity of the dataset, but I decided to take verification one step further. Could I download a separate copy of the Twitter group myself in order to compare it with the version Holly had sent me? I searched around and found DMArchiver, a Python program that could do just that. Using this program, along with Holly’s username and password, I downloaded a text version of all of the DMs in the Twitter group. It took only a few minutes to run this tool, rather than four hours of scrolling up in a web browser.

    Note: After this investigation, the DMArchiver program stopped working due to changes on Twitter’s end, and today the project is abandoned. However, if you’re faced with a similar challenge in a future investigation, search for a tool that might work for you. 

    The output from DMArchiver, a 1.7MB text file, was much easier to work with than the enormous HTML file, and it also included exact time stamps. Here’s a snippet of the text version:

    [2015-11-19 13:46:39] <WikiLeaks> We believe it would be much better for GOP to win.

    [2015-11-19 13:47:28] <WikiLeaks> Dems+Media+liberals woudl then form a block to reign in their worst qualities.

    [2015-11-19 13:48:22] <WikiLeaks> With Hillary in charge, GOP will be pushing for her worst qualities., dems+media+neoliberals will be mute.

    [2015-11-19 13:50:18] <WikiLeaks> She’s a bright, well connected, sadistic sociopath.
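
    A timestamped log in this shape is straightforward to parse programmatically. Here is a minimal sketch; the pattern is inferred from the snippet above, and the file name is illustrative:

    import re
    from collections import Counter

    # Matches lines of the form "[YYYY-MM-DD HH:MM:SS] <sender> message".
    LINE_RE = re.compile(r"^\[(?P<timestamp>[^\]]+)\] <(?P<sender>[^>]+)> (?P<text>.*)$")

    def parse_log(path):
        messages = []
        with open(path, encoding="utf-8") as f:
            for line in f:
                match = LINE_RE.match(line.strip())
                if match:
                    messages.append(match.groupdict())
        return messages

    # Example: count messages per sender, e.g. to isolate @wikileaks posts.
    messages = parse_log("wikileaks_group.txt")
    print(Counter(m["sender"] for m in messages).most_common(5))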

    I could view the HTML version in a web browser to see it exactly as it had originally looked on Twitter, which was also useful for taking screenshots to include in our final report.

    A screenshot of the leaked HTML file.

    Along with the talented reporter Cora Currier, I started the long process of reading all 11,000 chat messages, paying closest attention to the 10 percent of them from the @wikileaks account — which was presumably controlled by Julian Assange, WikiLeaks’s editor — and picking out everything in the public interest. We discovered the following details:

    • Assange expressed a desire for Republicans to win the 2016 presidential election.
    • Assange and his supporters were intensely focused on discrediting two Swedish women who had accused him of rape and molestation, as well as discrediting their lawyers. Assange and his defenders spent weeks discussing ways to sabotage articles about his rape case that feminist journalists were writing.
    • After Associated Press journalist Raphael Satter wrote a story about harm caused when WikiLeaks publishes personal identifiable information, Assange called him a “rat” and said that “he’s Jewish and engaged in the ((())) issue,” referring to an antisemitic neo-Nazi meme. He then told his supporters to “bog him down. Get him to show his bias.”

    You can read our reporting on this dataset at The Intercept. After The Intercept published this article, Assange and his supporters also targeted me personally with antisemitic abuse, and Russia Today, the state-run TV station, ran a segment about me. 

    The techniques you can use to authenticate datasets vary greatly depending on the situation. Sometimes you can rely on OSINT, sometimes you can rely on help from your source, and sometimes you’ll need to come up with an entirely different method.

    Regardless, it’s important to explain in your published report, at least briefly, what makes you confident in the data. If you can’t authenticate it but still want to publish your report in case it’s real — or in case others can authenticate it — make that clear. When in doubt, err on the side of transparency.

    My book, “Hacks, Leaks, and Revelations,” officially comes out on January 9, but it’s shipping today if you order it from the publisher here. Add the code INTERCEPT25 for a special 25 percent discount.

    The post How to Authenticate Large Datasets appeared first on The Intercept.

    This post was originally published on The Intercept.

  • In a letter sent Thursday to Meta chief executive Mark Zuckerberg, Sen. Elizabeth Warren, D-Mass., calls on the Facebook and Instagram owner to disclose unreleased details about wartime content moderation practices that have “exacerbated violence and failed to combat hate speech,” citing recent reporting by The Intercept.

    “Amidst the horrific Hamas terrorist attacks in Israel, a humanitarian catastrophe including the deaths of thousands of civilians in Gaza, and the killing of dozens of journalists, it is more important than ever that social media platforms do not censor truthful and legitimate content, particularly as people around the world turn to online communities to share and find information about developments in the region,” the letter reads, according to a copy shared with The Intercept.

    Since Hamas’s October 7 attack, social media users around the world have reported the inexplicable disappearance of posts, comments, hashtags, and entire accounts — even though they did not seem to violate any rules. Uneven enforcement of rules generally, and Palestinian censorship specifically, have proven perennial problems for Meta, which owns Facebook and Instagram, and the company has routinely blamed erratic rule enforcement on human error and technical glitches, while vowing to improve.

    Following a string of 2021 Israeli raids at the Al-Aqsa Mosque in occupied East Jerusalem, Instagram temporarily censored posts about the holy site on the grounds that it was associated with terrorism. A third-party audit of the company’s speech policies in Israel and Palestine conducted last year found that “Meta’s actions in May 2021 appear to have had an adverse human rights impact … on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred.”

    Users affected by these moderation decisions, meanwhile, are left with little to no recourse, and often have no idea why their posts were censored in the first place. Meta’s increased reliance on opaque, automated content moderation algorithms has only exacerbated the company’s lack of transparency around speech policy, and has done little to allay allegations that the company’s systems are structurally biased against certain groups.

    The letter references recent reporting by The Intercept, the Wall Street Journal, and other outlets on the widespread, unexplained censorship of Palestinians and of the broader discussion of Israel’s ongoing bombardment of Gaza. Last month, for instance, The Intercept reported that Instagram users leaving Palestinian flag emojis in post comments had seen those comments quickly hidden; Facebook later told The Intercept it was hiding these emojis in contexts it deemed “potentially offensive.”

    “Social media users deserve to know when and why their accounts and posts are restricted, particularly on the largest platforms where vital information-sharing occurs.”

    These “reports of Meta’s suppression of Palestinian voices raise serious questions about Meta’s content moderation practices and anti-discrimination protections,” Warren writes. “Social media users deserve to know when and why their accounts and posts are restricted, particularly on the largest platforms where vital information-sharing occurs. Users also deserve protection against discrimination based on their national origin, religion, and other protected characteristics.” Outside of its generalized annual reports, Meta typically shares precious little about how it enforces its rules in specific instances, or how its policies are determined behind closed doors. This general secrecy around the company’s speech rules means that users are often in the dark about whether a given post will be allowed — especially if it even mentions a U.S.-designated terror organization like Hamas — until it’s too late.

    To resolve this, and “[i]n order to further understand what legislative action might be necessary to address these issues,” Warren’s letter includes a litany of specific questions about how Meta treats content pertaining to the war, and to what extent it has enforced its speech rules depending on who’s speaking. “How many Arabic language posts originating from Palestine have been removed [since October 7]?” the letter asks. “What percentage of total Arabic language posts originating from Palestine does the above number represent?” The letter further asks Meta to divulge removal statistics since the war began (“How often did Meta limit the reachability of posts globally while notifying the user?”) and granular details of its enforcement system (“What was the average response time for a user appeal of a content moderation decision for Arabic language posts originating from Palestine?”).

    The letter asks Meta to respond to Warren’s dozens of questions by January 5, 2024.

    The post Sen. Elizabeth Warren Questions Meta Over Palestinian Censorship appeared first on The Intercept.

    This post was originally published on The Intercept.

  • The post High-Tech Entrapments first appeared on Dissident Voice.

    This post was originally published on Dissident Voice.

  • As COP28 draws to a close – mired in inaction and controversy – a new app is launching to let the public take matters into their own hands. “Earthize” will allow people to avoid any companies that greenwash, invest in fossil fuels, or damage the environment. That is, you can now stop using corporations that fuel the climate and biodiversity crises.

    COP28: what’s the point?

    At the end of COP28 on Tuesday 12 December, the United Arab Emirates (UAE) promised to try again to strike a deal as the Dubai climate summit passed its deadline. At-risk nations and Western powers rejected a proposal that stopped short of phasing out fossil fuels.

    The 13-day COP28 summit in the glitzy metropolis built on petrodollars debated a historic first-ever global exit from oil, gas, and coal, the main culprits in the planetary warming crisis. But a draft put forward by COP28 president Sultan Al Jaber, himself the head of the UAE’s national oil company, fell well short. Instead, it presented reductions in fossil fuels as one of several options.

    Negotiators described a mood of anger and tension in talks that again ran through the night, with activists confronting delegates and island leaders saying their very existence was at risk. The Emirati hosts put a brave face on the outrage, saying they were working on a new draft and noting that UN rules require consensus from the nearly 200 countries at COP28.

    Scientists say the planet has already warmed by 1.2 degrees Celsius (2.2 Fahrenheit) from pre-industrial times and that 2023 — marked by lethal disasters including wildfires across the world — has likely been the warmest in 100,000 years. Yet COP28 still failed to properly address this.

    So, what can the rest of us do? Well, a new app will go at least some way to helping people make climate and biodiversity-informed choices.

    Earthize: making climate and biodiversity-informed choices

    Earthize is a new website and Android app launching Wednesday 13 December. It will allow users to search for eco-friendly brands, cutting through the greenwash and having a real impact on the environment:

    Users can type keywords into the search bar on Earthize such as “banks”, “mobile phones”, or “crisps”, and the app shows them a list of the most eco-friendly goods and services.

    Each entry in the app’s database has specific markers so that consumers can see whether products are vegan, plastic-free, or made with only renewable energy. More complex services like bank accounts or mobile phone contracts have a written description to inform users why they’re more eco-friendly than your average high street name.

    Unlike similar websites, Earthize doesn’t require users to sign up and hand over their personal details, there are no subscriptions, and it’s designed to avoid overloading the user with too much information.

    Earthize managing director Daniel Johnston said:

    Our politicians, lobbied heavily by fossil fuel companies, have failed us again at COP28. We developed Earthize to beat them at their own game. If the only thing on their minds is profit, then we’ll have to hit them in the pockets.

    People often don’t realise how much power they have in their own hands to fight climate change without turning their lives upside down. It’s about giving those eco-friendly brands a fighting chance and making sure your money doesn’t end up in a fossil fuel company’s offshore bank account.

    The website and Android app launch together on 13 December, just in time to check out the most eco-friendly places for last-minute Christmas shopping. The Earthize team plans to release an iPhone app in the future, as well as to add more brands and features to the existing app.

    Visit the Earthize website here. From 13 December you can download the Earthize Android app here.

    Additional reporting via Agence France-Presse

    Featured image and additional images via Earthize

    By Steve Topple

    This post was originally published on Canary.

    A suite of recent cybersecurity data breaches highlights an urgent need to overhaul how companies and government agencies handle our data. But these incidents pose particular risks to victim-survivors of domestic violence.

    In fact, authorities across Australia and the United Kingdom are raising concerns about how privacy breaches have endangered these customers.

    The onus is on service providers – such as utilities, telcos, internet companies and government agencies – to ensure they don’t risk the safety of their most vulnerable customers by being careless with their data.

    A suite of incidents

    Earlier this year, the UK Information Commissioner reported it had reprimanded seven organisations since June 2022 for privacy breaches affecting victims of domestic abuse.

    These included organisations revealing the safe addresses of the victims to their alleged abuser. In one case, a family had to be moved immediately to emergency accommodation.

    In another case, an organisation disclosed the home address of two children to their birth father (who was in prison for raping their mother).

    The UK Information Commissioner has called for better training and processes. This includes regular verification of contact information and securing data against unauthorised access.

    In 2021, the Australian Information Commissioner and Privacy Commissioner took action against Services Australia for disclosing a victim-survivor’s new address to her former partner.

    The commissioner ordered a written apology and a A$19,980 compensation payment. It also ordered an independent audit of how Services Australia updates contact details for separating couples with shared records.

    An earlier case involved a telecommunications company and the publisher of a public directory.

    The commissioner ordered them each to pay $20,000 to a victim of domestic violence whose details were made public, which jeopardised her safety.

    More recently, the Energy and Water Ombudsman Victoria reported a case where an electricity provider inadvertently provided a woman’s new address to her ex-partner. The woman had to buy security cameras for protection. The company has since revised its procedures.

    The Energy and Water Ombudsman Victoria has also reviewed complaints received in 2022-23 related to domestic violence. These include failing to flag accounts of victims who disclosed abuse, as well as potentially unsafe consumer automation and data governance processes.

    The Victorian Essential Services Commission accepted a court-enforceable undertaking from a water company that it would improve processes after allegations its actions put customers affected by family violence at risk.

    The commission found the company failed to adequately protect the personal information of two separate customers in 2021 and 2022, by sending correspondence with their personal information to the wrong addresses.

    In both cases, the customer had not disclosed their experience of domestic violence. Nevertheless, the regulator noted these “erroneous information disclosures put these customers at risk of harm”.

    Australia’s Telecommunications Industry Ombudsman received about 300 complaints involving domestic violence in 2022-23, with almost two-thirds relating to mobile phones.

    Complaints included instances of telcos disclosing the addresses of victim-survivors to perpetrators or of frontline staff not believing victim-survivors. There were also cases of telcos insisting a consumer experiencing family violence contact the perpetrator of family violence. The report noted:

    For example, one person was asked by her telco to bring her abusive ex-partner into a store to change her number to her new account. We’ve also had complaints about telcos disconnecting the services of a consumer experiencing family violence – sometimes at the request of the account holder who is the perpetrator of the violence – despite access to those services being critical to the consumer staying safe.

    The Australian Financial Complaints Authority resolved more than 500 complaints from people experiencing domestic and family violence in 2021-22, including those related to privacy breaches.

    Change is slowly under way

    In May, new national rules came into force to provide better protection and support to energy customers experiencing domestic violence.

    These rules mandate retailers prioritise customer safety and protect their personal information. This includes account security measures to prevent perpetrators from accessing victim-survivors’ sensitive data.

    They also prohibit the disclosure of information without consent. In issuing its rules, the Australian Energy Markets Commission noted the heightened risk of partner homicides following separations.

    The Telecommunications Industry Ombudsman has called for mandatory, uniform and enforceable rules. The current voluntary industry code and guidelines fall short in protecting phone and internet customers experiencing domestic violence.

    New rules should include training, policies and recognition of violence as a cause of payment difficulties. They should also factor in how service suspension or disconnection affects victim-survivors.

    The Australian Information and Privacy Commissioner said last year:

    Sadly, we continue to receive cases of improper disclosure of personal information off line by businesses to ex partners who target women in family disputes and domestic violence. All of these issues reinforce the need for privacy by design.

    In its response to a review of the Privacy Act, the government has agreed the Office of the Australian Information Commissioner should help develop guidance to reduce risk to customers.

    We must work harder to ensure data and privacy breaches do not leave victim-survivors of domestic violence at greater risk from perpetrators.

    The National Sexual Assault, Family and Domestic Violence Counselling Line – 1800 RESPECT (1800 737 732) – is available 24 hours a day, seven days a week for any Australian who has experienced, or is at risk of, family and domestic violence and/or sexual assault.

    Please note: Picture at top is a stock photo. Used under the Pixabay Content License. Credit: antonynjoro

    The post Data breaches can be extraordinarily dangerous for victim-survivors appeared first on BroadAgenda.

    This post was originally published on BroadAgenda.

    We’ve all heard the unbelievable gossip that seems to spread like wildfire across our screens – the story of a famous celebrity involved in a scandal that shocked the world, only for us to discover later that it was nothing more than a carefully spun web of lies. The impact of misinformation on public figures not only shapes perceptions but can also have dire consequences. These instances of fake news, meticulously crafted to deceive and captivate our attention, are not just occasional blips in our news feeds. They’ve become a pervasive part of our digital landscape, blurring the lines between reality and fiction, often at the expense of our favorite public figures.

    Understanding the landscape

    From viral hoaxes to manipulated images, fake news in the realm of celebrities manifests in multifaceted ways. Take Elon Musk, whose frequent presence in the public eye, particularly through his active involvement on his own social media platform X, has positioned him as the prime target in the 2023 Fake News Index. ExpressVPN reports an astonishing 157,385 engagements linked to fabricated stories about Musk, potentially reaching an audience of over 15 million people.

    Fabricated stories are often concocted to tarnish reputations, alongside manipulated photos designed to incite controversy. Remember the purported feuds between stars, amplified by misinterpreted quotes or doctored videos? These instances not only deceive but also thrive on the relentless hunger for sensationalism.

    But the impact extends beyond mere gossip. The repercussions of such misinformation on both celebrities and society cannot be overstated. False narratives can damage careers, strain relationships, and perpetuate harmful stereotypes, all while eroding trust in the veracity of information disseminated.

    Navigating the spectrum of fake news

    Within this labyrinth, distinguishing harmless satire from intentional deception becomes paramount. Genuine reporting errors, while unintended, can swiftly morph into viral falsehoods. Case in point: a misquoted statement quickly escalates into a headline-grabbing scandal, triggering a cascade of misinformation.

    Yet, the most insidious form remains deliberate falsehoods strategically crafted to mislead and manipulate. This, compounded by the ever-evolving digital landscape, poses a daunting challenge in sifting fact from fiction. Social media platforms, despite their immense connectivity, serve as breeding grounds for unchecked information, blurring the boundaries between truth and deception.

    Impact on celebrities

    Celebrities, often prime targets, bear the brunt of such misinformation campaigns. Instances abound where unfounded rumors or malicious fabrications have upended lives and careers. Consider the mental toll on individuals subjected to baseless scandals or the damage wrought on professional trajectories due to false accusations.

    The vulnerability of public figures in the face of fake news raises ethical concerns regarding the responsibility of media outlets and the public’s consumption of unverified information. Upholding journalistic integrity and ethical reporting becomes pivotal in mitigating these repercussions.

    Addressing the issue

    Combatting this pervasive issue demands a multifaceted approach. Promoting media literacy and fostering critical thinking skills are pivotal in arming individuals against the onslaught of misinformation. Fact-checking initiatives and stringent verification protocols within media circles stand as bulwarks against the unchecked proliferation of fake news.

    Furthermore, an informed and conscientious readership, one mindful of responsible information sharing, serves as the bedrock for a more discerning and vigilant society.

    Call to action

    As we draw the curtain on this discourse, the imperative lies in collective action. Each individual plays a pivotal role in shaping a more informed media landscape. Let us pledge to question, verify, and disseminate information responsibly. Together, we can navigate this intricate web, unveiling the truth behind the glare of fake news in celebrity culture.

    Join the movement, apply critical thinking skills, and cultivate a culture of responsible information consumption. Our collective efforts can serve as the beacon guiding us through the labyrinth of misinformation, steering us toward a more transparent and trustworthy media environment.

    Featured image via pixabay

    By The Canary

    This post was originally published on Canary.

  • Hundreds of protest actions against the advertising industry and Amazon have taken place across Europe and the US this week, as concerns grow over the human and environmental costs of Black Friday and Cyber Monday.

    Formed in 2017, Subvertisers International is a global movement of individuals and organisations concerned with how advertising affects society. It is made up of local and national groups of activists, artists, NGOs, not-for-profits, teachers, parents, scientists and doctors. Subvertisers International has over 18 active groups in Belgium, France, Germany, UK, Spain, Portugal, Argentina, the US, and Australia.

    Recently, it turned its attention to Black Friday and Cyber Monday.

    Anti-ad actions worldwide – as well as #MakeAmazonPay

    Over 100 actions specifically targeting the advertising industry have already taken place in Brussels, Paris, London, San Francisco, Hamburg, Liverpool, Birmingham and Bristol in the last two weeks.

    On Black Friday itself, 150 strikes and protests took place as part of the ‘Make Amazon Pay’ campaign in 30 countries.

    Meanwhile, the protests against advertising included:

    • Large billboards taken over in London in solidarity with striking Amazon workers #MakeAmazonPay:

    A Make Amazon Pay billboard installation, London 2023; protest at a Toyota advertising agency (credit: Angela Christofilou).

    • Digital screens in Reading covered with protest signs reading “Blackout Friday”.

    • Environmentalists repurposing bus stop advertising spaces in Birmingham with anti-consumerist messages such as “Don’t Buy Stuff, Enjoy Your Friends”.

    • Artists in Brussels replacing Black Friday posters (without permission) with paintings bearing appeals such as “I wish for a city free of advertising”.

    Actions were timed to precede Black Friday, which campaigners argue is a relatively new, unnecessary, and unwelcome pressure to consume beyond our needs. The ‘ZAP Games’ are “two weeks of affinity group actions”, coordinated by the Subvertisers’ International network of groups, encouraging activists to intervene in advertising sites, remove adverts, and replace them with art and creative messages promoting a world “beyond consumerism”.

    An annual targeting of Black Friday and Cyber Monday

    ‘ZAP’ (Zone Anti-Publicité, or “anti-advertising zone” in French) began in Brussels in 2020 and has since spread to become an annual event ahead of Black Friday and Cyber Monday. Activists target outdoor advertising such as bus stops, billboards, and digital screens – repurposing them for artistic purposes “to hush the relentless noise of consumerism”.

    Activists held ‘Award Ceremonies’ in London and Brussels on the evening of 25 November, with trophies distributed to winners of eight Action Categories for their protest interventions.

    Subvertisers International argue that Black Friday, as championed by Amazon, is the ultimate symbol of an economy dominated by large corporations that dodge taxes, exploit their workers, and accelerate climate and ecological collapse through a system of hyper-consumerism. On its current growth-based model, it will take Amazon until the year 2378 to reach its stated 2040 target of net zero emissions, according to the Make Amazon Pay coalition.

    Also in London, activists replaced ads on the Tube, highlighting the environmental cost of the industry:

    Replaced Tube adverts reading “Adverts fuel climate chaos”.

    Over the last decade, outdoor advertising has shifted from traditional paper and paste to digital screens. A smaller double-sided “six-sheet” screen of the kind seen on many high streets today consumes as much electricity as three average UK homes, while larger digital billboards can consume as much as 11 homes’ worth.

    As well as artistic protest actions, campaign groups in Subvertisers International are making progress in removing and banning new advertising screens in European cities.

    Ban them now

    In June 2023, the city government of Lyon, France approved plans to ban new digital advertising screens and remove 120 digital screens from the subway in Spring 2024. Large-format wrap-around tarp advertising on buildings and roof-top advertising will also be prohibited, as well as a reduction in the number and maximum size of advertising boards. This followed a five-year campaign from Résistance À L’Agression Publicitaire and Collectif Plein La Vue.

    In Bristol, England, a local anti-advertising group welcomed new planning measures in Bristol Council’s Local Plan which should make it harder for billboard companies to receive planning permission for digital ad screens.

    Last March, the city of Geneva in Switzerland came very close to fully banning commercial outdoor advertising in public space in a referendum. 48.1% voted in favour of the ban, with 51.9% against.

    Robbie Gillett from Adfree Cities, which campaigns to remove corporate outdoor advertising from public spaces, said:

    In the run up to Black Friday, our streets become filled with advertising messages compelling us to buy ever more things. But this relatively new festival on the consumerism calendar masks the human costs on workers for corporations like Amazon, who are also failing to pay their taxes and hastening the over-extraction of the planetary resources. And this is all just before the Christmas mega-marketing machine gets going.

    Black Friday should act as a warning sign of a world in overdrive. Sustainable production and consumption must become the new goal of our economy in the 21st century – and that means moving beyond the blinkered target of economic growth-for-growth’s sake, redistributing the amassed wealth of multinational corporations – and at a local level freeing our public spaces from the advertising industry’s pressure to consume.

    When we confront the outdoor advertising industry, we’re calling for system change, a new economy, a better world and a liveable future.

    Featured image and additional images via Subvertisers International

    By The Canary

    This post was originally published on Canary.

    The gaming industry has a unique mission: it lets you personally experience events and actions that, in real life, could not be experienced without extraordinary circumstances and skills. Unlike films and books, games make you an active participant in events, even when the plot is clearly fictional.

    You can take on the leading role in fictional or realistically recreated situations and feel at least a small share of the emotions experienced by the people who actually lived through them:

    Call of Duty Modern Warfare Series

    The developers at Activision invite players to experience a developing crisis in which major powers clash in a regional military conflict.

    You play as special forces operators from various countries and as soldiers of the regular army, each fulfilling their role in the conflict.

    The special forces, as you would expect, carry out precision operations behind enemy lines, while the regular armies roll through the territory of enemies and terrorists, carrying out orders and objectives.

    Another conflict erupts in the Middle East, with the risk that nuclear weapons will be used, so American Rangers arrive on the scene to find and destroy the enemy leader, and you are one of those soldiers.

    Among the more striking mechanics, you will witness a nuclear explosion, see it in action, and feel the full range of its effects. This is not a spoiler, but simply one of the experiences Call of Duty offers, and thankfully only through a monitor screen.

    In the storyline, the conflict and the actions of terrorists from Russia lead to a major war between the US and Russia (and for those who want an edge online, there are mw3 services to help you play better).

    Enemies suddenly invade the territory of the United States, and you take part in battles to defend key cities, including Washington, where Ranger and Delta Force units enter the fight to push the invaders back from the capital and then continue a series of operations to liberate the entire country.

    After a series of operations as a special operations operative and a Delta squad soldier, the Russian army is thrown back overseas, but the war does not end there: the fighting moves to European territory.

    You liberate Berlin and Paris and carry the fighting into the aggressor’s territory in order to reach and destroy the terrorist Makarov, and to rescue the legitimate president, who has been kidnapped and terrorised with the threatened murder of his daughter.

    Starting in 2019, Activision relaunched the Call of Duty series, adding new mechanics and new opportunities for carrying out missions and battles, whether in single-player mode or online (where some players turn to mw3 rank boosting services).

    More attention is now paid to tactical details: peeking out from behind cover, and detailed aiming at specific parts of the body for better control of the battle and elimination of opponents.

    In all other respects, it is the same conflict between the United States and Russia, but with different main characters and locations, updated graphics, and new gameplay specifics.

    All participants in battles now use more tactical techniques, and you will see in action many weapons that are periodically used in real wars, for example phosphorus munitions. These are incendiary weapons that burn through everything they touch, and their use is restricted by the laws of war; the game is visual entertainment that lets you see such moments and the consequences that orders to use such weapons can have.

    Operation Flashpoint Dragon Rising

    Codemasters offers players the chance to fully experience a conflict between China and the United States.

    Facing an acute shortage of resources for its further development, China invades Russia to seize them, and the US army arrives to help its unexpected allies.

    You play as a special forces operative leading a squad through a series of battles: operations behind enemy lines, direct defensive actions, and breakthroughs at the front.

    If you wish, you can play through the main campaign with friends as a combat group of four fighters.

    Although Dragon Rising is a fairly old project, its graphics and mechanics hold up well today.

    You will find small and large operations, from actions behind enemy lines to major assaults supported by other branches of the armed forces.

    You will be tasked with capturing key heights, covertly destroying enemy air defences, assaulting villages with the support of armoured vehicles and aircraft, and capturing an airfield to strengthen the position of the US Army.

    ARMA 3

    ARMA 3 is a complete and realistic military simulator focused on the real military experience of preparing for and conducting combat operations.

    You undergo basic training in all the fundamental mechanics of combat via an augmented reality helmet, technology that special forces use to train their soldiers, and then travel to a remote fictional island to take part in a peacekeeping mission keeping two armies from direct combat.

    The conflict quickly gets out of control, and the peacekeepers themselves become targets: they must survive on the island and wage a guerrilla campaign to replenish funds and ammunition and continue the fight.

    ARMA’s emphasis is on realism: every participant in a battle uses cover and camouflage, shoots accurately, fights to control the initiative, suppresses the enemy with fire, and makes use of grenades.

    You will quickly realise that here you can destroy an enemy with a single bullet, and die just as easily if you act carelessly, fail to play as a team with the other members of your group, or ignore cover and combat tactics.

    Injuries also play a role, and you need to respond to them by applying a field dressing, which temporarily stops the blood loss; if you do not receive proper medical help soon afterwards, you will die.

    A wound to the arm affects shooting accuracy, a wound to the leg affects movement speed, and with serious wounds your fighter will not be able to get up at all. Wounds to the head are almost always fatal; the rare exceptions are accidents rather than a pattern.

    Featured image and additional images supplied

    By The Canary

    This post was originally published on Canary.

    The U.S. has been using a dysfunctional app to “manage” a humanitarian crisis, and the situation is reaching a breaking point. Migrants have been using the CBP One app to get appointments with border officials since January. When Title 42 expired in May, the U.S. returned to Title 8, under which anyone who tries to cross between checkpoints without using the app faces a five-year ban.

    Source

    This post was originally published on Latest – Truthout.

    Defence Industry Minister Pat Conroy has dismissed concerns surrounding proposed reforms to the Defence Trade Controls Act, describing the changes as “a huge opportunity for Australian Industry”. The proposed Defence Trade Controls Amendment Bill 2023 is intended to support the creation of an export licence-free defence industrial base between Australia, the United States, and the United…

    The post Conroy talks up ‘huge opportunity’ of export control reforms appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

    Between 2013 and 2022, Turkey’s government agencies asked Google to remove content over 18.9k times, placing Turkey fourth globally, according to privacy protection company Surfshark. The most frequently cited reason in these requests was defamation. The US comes in sixth, with around 11k requests.

    Globally, governmental requests for content removal exceeded 355k over the last decade, and 2022 stands out as a record-breaking year, witnessing a 50% surge in removal requests.

    Turkey’s government requested removal of 90.4k items in a decade

    Over the past decade, the Turkish government made 18.9k requests for content removal from Google, averaging five requests per day. Turkey’s most frequent justification was defamation (40.1%), covering harm to reputation, including claims of libel, slander, and corporate defamation. This was followed by privacy and security (18.2%), covering claims that an individual user’s privacy or personal information had been violated, and obscenity/nudity (13.7%), concerning content that is not pornographic but may violate laws surrounding nudity.

    Each request often includes multiple items, significantly raising the content item count. In total, Turkey’s 18.9k requests over the last decade comprised 90.4k items for removal, averaging roughly five items per request. Since 2019, the three biggest sources of content removal requests to Google in Turkey have been court orders directed at Google, court orders directed at third parties, and the country’s information and communications authority.

    Compared to Greece, Turkey requested 82 times more content to be removed from Google over the last decade. Most of the content the country requested to remove was from YouTube (7.7k requests), Web Search (3.9k), and Blogger (3.7k).

    Government requests are on the rise: in 2022, the Turkish government submitted 22% more requests to Google than in 2021, and 2022 alone accounts for 14% of all requests over the last decade, making it a record year.

    Top countries by Google content removal requests

    Drawing from Google’s bi-annual content removal reports spanning 2009 to 2023, Surfshark’s study scrutinises a decade’s worth of data on removal requests, encompassing total counts, year-over-year trends, reasons, products, and requester types across 150 countries. The analysis, based on data collected as of 16 October 2023, prioritises request counts over individual items, as each request may encompass multiple items with a single selected product and reason.

    In the last decade, six countries have accounted for over 85% of the total content removal requests:

    • Russia is responsible for 215k requests, averaging 59 requests each day over the last 10 years. 
    • South Korea follows with 27k requests in total, averaging seven requests daily.
    • India has submitted 20k requests at a rate of 5.5 per day. 
    • Turkey has submitted a total of 19k requests, averaging five requests per day.
    • Brazil is responsible for 12k requests, averaging three requests per day. 
    • The US follows with 11k content removal requests, averaging three requests per day.

    Two-thirds of the analysed countries have submitted fewer than 100 requests in the past decade, emphasising the rarity of such requests in most nations.

    Governments have requested content removal from 50 different Google products — from Images and YouTube to Maps. The top products with the most requests are YouTube (175k), Google Search (104k), and Blogger (17k). 

    Surfshark: are governments ‘encroaching into censorship’?

    There are 22 justifications that allow governments to request the removal of content from Google products. The top 5 justifications frequently used by governments over the last decade were:

    1. National security – cited in over 27% of all the requests made over the last decade.
    2. Copyright – cited in almost 20% of all the requests over the last decade.
    3. Defamation – cited in slightly more than 10% of all the requests.
    4. Regulated goods and services – around 10% of all the requests.
    5. Privacy and security – around 10% of all the requests.

    Over the past decade, Google received an average of approximately 97 content removal requests per day from governments worldwide.
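
    Those per-day figures follow directly from dividing the cited totals over a ten-year window, as a quick back-of-the-envelope check shows (totals as quoted above; this is a sanity check, not Surfshark’s methodology):

    DAYS = 10 * 365  # approximate ten-year window

    totals = {  # request totals as cited in this article
        "Russia": 215_000,
        "South Korea": 27_000,
        "India": 20_000,
        "Turkey": 19_000,
        "Brazil": 12_000,
        "US": 11_000,
        "Worldwide": 355_000,
    }

    for name, total in totals.items():
        print(f"{name}: about {total / DAYS:.1f} requests per day")
    # Worldwide works out to roughly 97 per day, matching the figure above.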

    Agneska Sablovskaja, lead researcher at Surfshark, said:

    Government requests for content removal from Google products are on the rise. In the past decade, the global count has surged nearly 13 times — from 7k to 91k requests annually. National security stands out as the most frequently cited reason by governments globally seeking the removal of undesirable content. A notable increase in content removal requests to Google by governments around the world during times of international conflicts and wars prompts us to consider the balance between genuine interest in a country’s public safety and the potential encroachment into censorship.

    Featured image via Google – YouTube and Surfshark

    By The Canary

    This post was originally published on Canary.

    At every stage of history, people have been eager for emotion, fun, and, of course, wins. The UK is no exception: gambling here has a long history, dating back to the 15th century. Over all this time, the industry has gone through radical changes to reach the form we know today.

    Gambling in The UK Now

    First of all, let’s take a look at the current state of the UK gambling industry. Nowadays, new online casinos are constantly appearing, and it is becoming increasingly difficult for players to choose a genuinely good one. In this regard, the Wagering Advisors’ casino rankings are a useful resource, reviewing UK online casinos and providing players with information about licensing, game offers, bonuses, payment methods, and more.

    However, online casinos have only gained popularity in the last few decades. Before that, the gambling industry was mainly about land-based games, and for a long time gambling was available only to the rich and influential. Let’s see how the industry evolved over the centuries to become as accessible as it is today.

    Land-based Games: From the Tudors to the 17th Century

    Land-based gaming in the UK has its roots in the Tudor era (1485-1603), when Queen Elizabeth I launched the first national public lottery in 1566 to raise money for repairing harbours. This period also saw the first awareness of the negative effects this kind of entertainment can have when the principles of responsible gaming are not respected. As a consequence, gambling was banned.

    This did not stop the industry. Gambling continued to flourish in secret houses, at horse racing tracks, in spa resorts, and in other places where the rich and noble enjoyed lawn bowling, cards, dice, and other money games. Gambling was also popular among the common people, who bet on cockfighting and other forms of entertainment.

    The 17th-19th Centuries: A Boom in Diversification

    In the 18th and 19th centuries, gambling became more varied and accessible with the emergence of new forms of games such as betting houses, football pools, greyhound racing, and others. Football pools became a mass phenomenon in the 1920s when the working classes began betting on football.

    1990s to the Present Day: Online Industry

    Many experts associate the 1990s with the beginning of mass digitization. It was also the period when the UK industry began to develop in earnest: the first online casinos, poker rooms, betting sites, and other platforms offering real-time gambling appeared. Online gambling quickly gained popularity among players who valued the convenience, anonymity, variety, and security of playing online.

    It was no longer necessary to go anywhere to play your favorite slots or place sports bets. Everything became accessible directly from a computer over the internet. The growing popularity of online gambling led the UK government to introduce a series of laws to regulate the field.

    The most important piece of legislation was The Gambling Act (2005). This act identified three main objectives for the regulation of gambling in the UK: 

    • to prevent criminal and anti-social activities associated with gambling; 
    • to ensure the integrity and transparency of gambling; 
    • and to protect children and vulnerable people from being harmed by gambling.

    Gambling in the UK: Today’s Perspective

    The data provided by Statista best reflects the state of the online casino industry in the UK: online casinos accounted for over 40 percent of total gambling in the UK in 2020. Online gambling has also become more diverse and innovative, offering players many different types of games.

    Casinos are also becoming more and more accessible – nowadays it takes less than five minutes to create a player account. Registration at Bet9ja, for example, only requires filling out a registration form (4–5 minutes) and confirming your email address. Many online casinos also let you log in through your Gmail, X, or Facebook account.

    Plus, many casinos offer generous welcome packages. Gambling in the UK is accessible even to new players who don’t have huge bankrolls or years of online casino gaming experience.

    Conclusion

    As you can see, it took centuries for the casino industry to reach the format we know today. That long route has worked to players’ advantage – progress has left only the best casino games standing. Thanks to regulation, players’ rights are now respected and guaranteed by the UK government, and licensed online casinos offer a safe, honest, and transparent gaming experience.

    Many experts predict that gambling could soon become the most popular form of online entertainment. Gambling in the UK continues to evolve, adapting to new technologies and to players’ needs and preferences, while also facing new challenges and risks that require attention and solutions from all stakeholders.

    Featured image by FabrikaPhoto – Envato Elements 

    By The Canary

    This post was originally published on Canary.

  • The iGaming companies have undergone an aggressive expansion, and they are trying to maintain the momentum. As the market is becoming more competitive, operators do their best to win over potential players with new promotions and promises of instant withdrawals. Due to the tight regulatory framework, as well as KYC requirements, the payment processing time for gambling businesses hasn’t been great. This is especially true for offshore sites that accept players from the US, Australia, New Zealand, and Canada. Still, this hasn’t prevented operators from advertising as instant withdrawal casinos, and for many players who have experienced frustration due to slow payouts, this sounds very appealing.

    Having quick access to your winnings is technically possible, but in practice things don’t always go as planned. One prerequisite, of course, is to play on a site that offers an instant payout feature. You can find lists of the fastest-paying online casino sites from trusted reviewers, and even these sites will explain how exactly the process works and what withdrawal time you can realistically expect. Here we will take a look at these claims, explain whether fast cashouts are possible, and how they work.

    Why Are Withdrawals Slow in the First Place?

    Casinos that are locally regulated and fully compliant allow players to access their winnings almost instantly. That being said, if you hit a jackpot or a big win, you’ll likely have to wait a bit longer for the money to appear in your account. Most instant banking options have limits on payment volume, and bigger transfers generally warrant further investigation before they are cleared.

    When you play on offshore sites, you are receiving money from businesses based in high-risk jurisdictions. It’s still possible to receive payments from them, but banks are required to look into these transfers before giving them a green light.

    Casinos are also regulated and need to be transparent, so every payment request has to be cleared by their team as well. RNG (random number generator) algorithms are at the core of casino games, just as they are in regular video games, and the luck element they provide is what guarantees fairness. If the algorithm isn’t working properly, that fairness is gone. Given that a win could be the result of a game bug, casinos cannot afford fully automated payouts: they need to check that the winnings were obtained legitimately.
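
    To make that concrete, here is a minimal, purely illustrative Python sketch of the kind of sanity check an operator might run before clearing a payout: simulate a game and compare its observed return with its theoretical return-to-player (RTP). Every number and name here is a made-up assumption, not any casino’s actual logic.

    ```python
    # Illustrative only: a toy slot whose expected return is 0.96
    # (a 1-in-10 chance of a 9.6x win), checked against its spec.
    import random

    THEORETICAL_RTP = 0.96  # assumed: the game should return ~96% over time

    def spin(rng: random.Random) -> float:
        """One spin of the toy slot: returns the payout multiplier."""
        return 9.6 if rng.random() < 0.10 else 0.0

    rng = random.Random(42)  # seeded so the check is reproducible
    n_spins = 1_000_000
    observed_rtp = sum(spin(rng) for _ in range(n_spins)) / n_spins

    # A large gap between observed and theoretical RTP would suggest a
    # faulty RNG or a game bug, and wins would need manual review.
    print(f"observed {observed_rtp:.3f} vs theoretical {THEORETICAL_RTP:.3f}")
    ```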

    What’s more, the payment processor might require casinos to submit documents and justification for the transfer. So casinos will need some of your personal information at the ready if they are to expedite the process.

    How Are Instant Withdrawals Possible Then?

    Offshore sites are usually regulated by the gaming commission of Curacao, the only regulator that allows operators to use cryptocurrencies for deposits and withdrawals. Even though multiple crypto scandals, most recently the one involving Sam Bankman-Fried, have shaken trust in crypto trading, blockchain is still a more efficient network for payments. So one of the ways operators can improve processing times is by incentivizing players to use crypto.

    Another way to expedite transfers is to have your account fully verified, meaning you have submitted all of the verification documents before you even start playing. Finally, players should read the terms of service in detail. Review sites often list those terms, especially when writing about bonuses. Promotions almost always come with wagering requirements, which demand the bonus amount be wagered multiple times before you become eligible for a payout. Whenever you claim a bonus, the casino will have to check whether the requirements have been met, and this can take time.

    What Is the Fastest Withdrawal Time Possible?

    In theory, it is possible to cash out your winnings within a single business day. That being said, all of the following conditions should be met (a rough sketch of this eligibility check in code follows the list):

    • The withdrawal amount needs to be larger than the minimum withdrawal allowed by the casino.
    • Using blockchain as the banking option.
    • Not having any pending wagering requirements.
    • Having a fully verified account.
    • Submitting the withdrawal request early during the casino’s working hours.
    • Playing on a reputable site that doesn’t have a massive userbase (more users means more payout requests, which can slow down the clearing rate).
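
    A minimal sketch of that checklist in Python; the field names and structure are illustrative assumptions, since no operator publishes its real payout logic.

    ```python
    # Illustrative only: encoding the checklist above as a single check.
    # (Site reputation/userbase is a property of the casino, not of the
    # request, so it is omitted here.)
    from dataclasses import dataclass

    @dataclass
    class WithdrawalRequest:
        amount: float            # requested payout
        uses_blockchain: bool    # crypto/blockchain banking option
        pending_wagering: float  # bonus turnover still owed
        account_verified: bool   # KYC documents already approved
        in_working_hours: bool   # submitted early in the casino's day

    def same_day_payout_possible(req: WithdrawalRequest,
                                 min_withdrawal: float) -> bool:
        """True only if every condition from the list above holds."""
        return (req.amount >= min_withdrawal
                and req.uses_blockchain
                and req.pending_wagering == 0
                and req.account_verified
                and req.in_working_hours)

    # Example: a fully verified crypto withdrawal submitted in the morning.
    req = WithdrawalRequest(120.0, True, 0.0, True, True)
    print(same_day_payout_possible(req, min_withdrawal=20.0))  # True
    ```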

    It is tricky to meet all of these conditions at once, which is why operators state that users should allow up to 72 hours for the funds to arrive. The only way to further expedite the process is to be a VIP member: operators with a loyalty program usually list fast withdrawals as one of the perks, which typically means you get your own account manager and your requests receive higher priority. Since a fast payout is technically possible, casinos use that fact to market themselves as instant withdrawal operators.

    Featured image via Andy_Dean_Photog – Envato Elements

    By The Canary

    This post was originally published on Canary.

  • A series of advertisements dehumanizing and calling for violence against Palestinians, intended to test Facebook’s content moderation standards, were all approved by the social network, according to materials shared with The Intercept.

    The submitted ads, in both Hebrew and Arabic, included flagrant violations of policies for Facebook and its parent company Meta. Some contained violent content directly calling for the murder of Palestinian civilians, like ads demanding a “holocaust for the Palestinians” and to wipe out “Gazan women and children and the elderly.” Other posts, like those describing kids from Gaza as “future terrorists” and a reference to “Arab pigs,” contained dehumanizing language.

    “The approval of these ads is just the latest in a series of Meta’s failures towards the Palestinian people.”

    “The approval of these ads is just the latest in a series of Meta’s failures towards the Palestinian people,” Nadim Nashif, founder of the Palestinian social media research and advocacy group 7amleh, which submitted the test ads, told The Intercept. “Throughout this crisis, we have seen a continued pattern of Meta’s clear bias and discrimination against Palestinians.”

    7amleh’s idea to test Facebook’s machine-learning censorship apparatus arose last month, when Nashif discovered an ad on his Facebook feed explicitly calling for the assassination of American activist Paul Larudee, a co-founder of the Free Gaza Movement. Facebook’s automatic translation of the text ad read: “It’s time to assassinate Paul Larudi [sic], the anti-Semitic and ‘human rights’ terrorist from the United States.” Nashif reported the ad to Facebook, and it was taken down.

    The ad had been placed by Ad Kan, a right-wing Israeli group founded by former Israel Defense Force and intelligence officers to combat “anti-Israeli organizations” whose funding comes from purportedly antisemitic sources, according to its website. (Neither Larudee nor Ad Kan immediately responded to requests for comment.)

    Calling for the assassination of a political activist is a violation of Facebook’s advertising rules. That the post sponsored by Ad Kan appeared on the platform indicates Facebook approved it despite those rules. The ad likely passed through Facebook’s automated, machine learning-based filtering process, which allows its global advertising business to operate at a rapid clip.

    “Our ad review system is designed to review all ads before they go live,” according to a Facebook ad policy overview. As Meta’s human-based moderation, which historically relied almost entirely on outsourced contractor labor, has drawn greater scrutiny and criticism, the company has come to lean more heavily on automated text-scanning software to enforce its speech rules and censorship policies.

    While these technologies allow the company to skirt the labor issues associated with human moderators, they also obscure how moderation decisions are made behind secret algorithms.

    Last year, an external audit commissioned by Meta found that while the company was routinely using algorithmic censorship to delete Arabic posts, the company had no equivalent algorithm in place to detect “Hebrew hostile speech” like racist rhetoric and violent incitement. Following the audit, Meta claimed it had “launched a Hebrew ‘hostile speech’ classifier to help us proactively detect more violating Hebrew content.” Content, that is, like an ad espousing murder.

    Incitement to Violence on Facebook

    Amid the Israeli war on Palestinians in Gaza, Nashif was troubled enough by the explicit call in the ad to murder Larudee that he worried similar paid posts might contribute to violence against Palestinians.

    Large-scale incitement to violence jumping from social media into the real world is not a mere hypothetical: In 2018, United Nations investigators found violently inflammatory Facebook posts played a “determining role” in Myanmar’s Rohingya genocide. (Last year, another group ran test ads inciting against Rohingya, a project along the same lines as 7amleh’s experiment; in that case, all the ads were also approved.)

    The quick removal of the Larudee post didn’t explain how the ad was approved in the first place. In light of assurances from Facebook that safeguards were in place, Nashif and 7amleh, which formally partners with Meta on censorship and free expression issues, were puzzled.

    “Meta has a track record of not doing enough to protect marginalized communities.”

    Curious if the approval was a fluke, 7amleh created and submitted 19 ads, in both Hebrew and Arabic, with text deliberately and flagrantly violating company rules — a test for Meta and Facebook. The ads were designed to probe the approval process and see whether Meta’s ability to automatically screen violent and racist incitement had gotten better, even with unambiguous examples of violent incitement.

    “We knew from the example of what happened to the Rohingya in Myanmar that Meta has a track record of not doing enough to protect marginalized communities,” Nashif said, “and that their ads manager system was particularly vulnerable.”

    Meta appears to have failed 7amleh’s test.

    The company’s Community Standards rulebook — which ads are supposed to comply with to be approved — prohibits not just text advocating for violence, but also any dehumanizing statements against people based on their race, ethnicity, religion, or nationality. Despite this, confirmation emails shared with The Intercept show Facebook approved every single ad.

    Though 7amleh told The Intercept the organization had no intention to actually run these ads and was going to pull them before they were scheduled to appear, it believes their approval demonstrates the social platform remains fundamentally myopic around non-English speech — languages used by a great majority of its over 4 billion users. (Meta retroactively rejected 7amleh’s Hebrew ads after The Intercept brought them to the company’s attention, but the Arabic versions remain approved within Facebook’s ad system.)

    Facebook spokesperson Erin McPike confirmed the ads had been approved accidentally. “Despite our ongoing investments, we know that there will be examples of things we miss or we take down in error, as both machines and people make mistakes,” she said. “That’s why ads can be reviewed multiple times, including once they go live.”

    Just days after its own experimental ads were approved, 7amleh discovered an Arabic ad run by a group calling itself “Migrate Now” calling on “Arabs in Judea and Samaria” — the name Israelis, particularly settlers, use to refer to the occupied Palestinian West Bank — to relocate to Jordan.

    According to Facebook documentation, automated, software-based screening is the “primary method” used to approve or deny ads. But it’s unclear if the “hostile speech” algorithms used to detect violent or racist posts are also used in the ad approval process. In its official response to last year’s audit, Facebook said its new Hebrew-language classifier would “significantly improve” its ability to handle “major spikes in violating content,” such as around flare-ups of conflict between Israel and Palestine. Based on 7amleh’s experiment, however, this classifier either doesn’t work very well or is for some reason not being used to screen advertisements. (McPike did not answer when asked if the approval of 7amleh’s ads reflected an underlying issue with the hostile speech classifier.)

    Either way, according to Nashif, the fact that these ads were approved points to an overall problem: Meta claims it can effectively use machine learning to deter explicit incitement to violence, while it clearly cannot.

    “We know that Meta’s Hebrew classifiers are not operating effectively, and we have not seen the company respond to almost any of our concerns,” Nashif said in his statement. “Due to this lack of action, we feel that Meta may hold at least partial responsibility for some of the harm and violence Palestinians are suffering on the ground.”

    The approval of the Arabic versions of the ads comes as a particular surprise following a recent report by the Wall Street Journal that Meta had lowered the level of certainty its algorithmic censorship system needed to remove Arabic posts — from 80 percent confidence that the post broke the rules, to just 25 percent. In other words, Meta was less sure that the Arabic posts it was suppressing or deleting actually contained policy violations.
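
    A minimal sketch, assuming nothing about Meta’s actual systems, of how such a confidence threshold behaves: lowering it sweeps in posts the classifier is far less sure about.

    ```python
    # Illustrative only: a content classifier emits a confidence score
    # per post, and posts scoring at or above the threshold are removed.
    def should_remove(confidence: float, threshold: float) -> bool:
        return confidence >= threshold

    scores = [0.15, 0.30, 0.55, 0.85]  # hypothetical per-post confidences

    for threshold in (0.80, 0.25):
        removed = [s for s in scores if should_remove(s, threshold)]
        print(f"threshold {threshold}: removes posts scoring {removed}")
    # At 0.80 only the 0.85 post is removed; at 0.25, the 0.30 and 0.55
    # posts -- which the model is far less sure about -- are removed too.
    ```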

    Nashif said, “There have been sustained actions resulting in the silencing of Palestinian voices.”

    The post Facebook Approved an Israeli Ad Calling for Assassination of Pro-Palestine Activist appeared first on The Intercept.

    This post was originally published on The Intercept.

  • A joint project of Human Rights Watch and New York University to document human rights abuses in the Democratic Republic of the Congo has been taken offline after exposing the identities of thousands of vulnerable people, including survivors of mass killings and sexual assaults.

    The Kivu Security Tracker is a “data-centric crisis map” of atrocities in eastern Congo that has been used by policymakers, academics, journalists, and activists to “better understand trends, causes of insecurity and serious violations of international human rights and humanitarian law,” according to the deactivated site. This includes massacres, murders, rapes, and violence against activists and medical personnel by state security forces and armed groups, the site said.

    But the KST’s lax security protocols appear to have accidentally doxxed up to 8,000 people, including activists, sexual assault survivors, United Nations staff, Congolese government officials, local journalists, and victims of attacks, an Intercept analysis found. Hundreds of documents — including 165 spreadsheets — that were on a public server contained the names, locations, phone numbers, and organizational affiliations of those sources, as well as sensitive information about some 17,000 “security incidents,” such as mass killings, torture, and attacks on peaceful protesters.

    The data was available via KST’s main website, and anyone with an internet connection could access it. The information appears to have been publicly available on the internet for more than four years.

    Experts told The Intercept that a leak of this magnitude would constitute one of the most egregious instances ever of the online exposure of personal data from a vulnerable, conflict-affected population.

    “This was a serious violation of research ethics and privacy by KST and its sponsoring organizations,” said Daniel Fahey, former coordinator of the United Nations Security Council’s Group of Experts on the Democratic Republic of the Congo, after he was told about the error. “KST’s failure to secure its data poses serious risks to every person and entity listed in the database. The database puts thousands of people and hundreds of organizations at risk of retaliatory violence, harassment, and reputational damage.”

    “If you’re trying to protect people but you’re doing more harm than good, then you shouldn’t be doing the work in the first place.”

    “If you’re an NGO working in conflict zones with high-risk individuals and you’re not managing their data right, you’re putting the very people that you are trying to protect at risk of death,” said Adrien Ogée, the chief operations officer at the CyberPeace Institute, which provides cybersecurity assistance and threat detection and analysis to humanitarian nongovernmental organizations. Speaking generally about lax security protocols, Ogée added, “If you’re trying to protect people but you’re doing more harm than good, then you shouldn’t be doing the work in the first place.”

    The dangers extend to what the database refers to as Congolese “focal points” who conducted field interviews and gathered information for the KST. “The level of risk that local KST staff have been exposed to is hard to describe,” said a researcher close to the project who asked not to be identified because they feared professional reprisal. “It’s unbelievable that a serious human rights or conflict research organization could ever throw their staff in the lion’s den just like that. Militias wanting to take revenge, governments of repressive neighboring states, ill-tempered security services — the list of the dangers that this exposes them to is very long.”

    The spreadsheets, along with the main KST website, were taken offline on October 28, after investigative journalist Robert Flummerfelt, one of the authors of this story, discovered the leak and informed Human Rights Watch and New York University’s Center on International Cooperation. HRW subsequently assembled what one source close to the project described as a “crisis team.”

    Last week, HRW and NYU’s Congo Research Group, the entity within the Center on International Cooperation that maintains the KST website, issued a statement that announced the takedown and referred in vague terms to “a security vulnerability in its database,” adding, “Our organizations are reviewing the security and privacy of our data and website, including how we gather and store information and our research methodology.” The statement made no mention of publicly exposing the identities of sources who provided information on a confidential basis.

    In an internal statement sent to HRW employees on November 9 and obtained by The Intercept, Sari Bashi, the organization’s program director, informed staff of “a security vulnerability with respect to the KST database which contains personal data, such as the names and phone numbers of sources who provided information to KST researchers and some details of the incidents they reported.” She added that HRW had “convened a team to manage this incident,” including senior leadership, security and communications staff, and the organization’s general counsel.

    The internal statement also noted that one of HRW’s partners in managing the KST had “hired a third-party cyber security company to investigate the extent of the exposure of the confidential data and to help us to better understand the potential implications.” 

    “We are still discussing with our partner organizations the steps needed to fulfill our responsibilities to KST sources in the DRC whose personal information was compromised,” reads the statement, noting that HRW is working with staff in Congo to “understand, prepare for, and respond to any increase in security risks that may arise from this situation.” HRW directed staffers not to post on social media about the leak or publicly share any press stories about it due to “the very sensitive nature of the data and the possible security risks.”

    The internal statement also said that “neither HRW, our partners, nor KST researchers in the DRC have received any information to suggest that anybody has been threatened or harmed as a result of this database vulnerability.”

    The Intercept has not found any instances of individuals being harmed as a result of the security failures, but it is currently unknown whether any of the thousands of people exposed have been harmed.

    “We deeply regret the security vulnerability in the KST database and share concerns about the wider security implications,” Human Rights Watch’s chief communications officer, Mei Fong, told The Intercept. Fong said in an email that the organization is “treating the data vulnerability in the KST database, and concerns around research methodology on the KST project, with the utmost seriousness.” Fong added, “Human Rights Watch did not set up or manage the KST website. We are working with our partners to support an investigation to establish how many people — other than the limited number we are so far aware of — may have accessed the KST data, what risks this may pose to others, and next steps. The security and confidentiality of those affected is our primary concern.” 

    A peacekeeper of the United Nations Organization Stabilization Mission in the Democratic Republic of the Congo looks on in Sake, Democratic Republic of the Congo, on Nov. 6, 2023.
    Photo: Glody Murhabazi/AFP via Getty Images

    Bridgeway Foundation

    Two sources associated with the KST told The Intercept that, internally, KST staff are blaming the security lapse on the Bridgeway Foundation, one of the donors that helped conceive and fund the KST and has publicly taken credit for being a “founding partner” of the project.

    Bridgeway is the philanthropic wing of a Texas-based investment firm. Best known for its support for the “Kony 2012” campaign, the organization was involved in what a U.S. Army Special Operations Command’s historian called “intense activism and lobbying” that paved the way for U.S. military intervention in Central Africa. Those efforts by Bridgeway and others helped facilitate a failed $780 million U.S. military effort to hunt down Joseph Kony, the leader of a Ugandan armed group known as the Lord’s Resistance Army, or LRA.

    More recently, the foundation was accused of partnering with Uganda’s security forces in an effort to drag the United States into “another dangerous quagmire” in Congo. “Why,” asked Helen Epstein in a 2021 investigation for The Nation, “is Bridgeway, a foundation that claims to be working to end crimes against humanity, involved with one of Africa’s most ruthless security agencies?”

    One Congo expert said that Bridgeway has played the role of a “humanitarian privateer” for the U.S. government and employed tactics such as “private intelligence and military training.” As part of Bridgeway’s efforts to track down Kony, it helped create the LRA Crisis Tracker, a platform nearly identical to the KST that tracks attacks by the Ugandan militia. After taking an interest in armed groups in Congo, Bridgeway quietly pushed for the creation of a similar platform for Congo, partnering with NYU and HRW to launch the KST in 2017.

    While NYU’s Congo Research Group oversaw the “collection and triangulation of data” for the KST, and HRW provided training and other support to KST researchers, the Bridgeway Foundation offered “technical and financial support,” according to a 2022 report by top foundation personnel, including Tara Candland, Bridgeway’s vice president of research and analysis, and Laren Poole, its chief operations officer. In a report published earlier this year, Poole and others wrote that the foundation had “no role in the incident tracking process.” 

    Several sources with ties to KST staff told The Intercept that Bridgeway was responsible for contracting the companies that designed the KST website and data collection system, including a tech company called Semantic AI. Semantic’s website mentions a partnership with Bridgeway to analyze violence in Congo, referring to their product as “intelligence software” that “allows Bridgeway and their partners to take action to protect the region.” The case study adds that the KST platform helps Bridgeway “track, analyze, and counter” armed groups in Congo.

    Poole said that the KST had hired a cybersecurity firm to conduct a “comprehensive security assessment of the servers and hosting environment with the goal of better understanding the nature and extent of the exposure.” But it appears that answers to the most basic questions are not yet known. “We cannot currently determine when the security vulnerability occurred or how long the data was exposed,” Poole told The Intercept via email. “As recently as last year, an audit of the site was conducted that included assessing security threats, and this vulnerability was not identified.”

    Like HRW, Bridgeway disclaimed direct responsibility for management of the KST’s website, attributing that work to two web development firms, Fifty and Fifty, which built and managed the KST from its inception until 2022, and Boldcode. That year, Poole said, “Boldcode was contracted to assume management and security responsibilities of the site.” But Poole said that “KST project leadership has had oversight over firms contracted for website development and maintenance since its inception.”

    The Intercept did not receive a response to multiple messages sent to Fifty and Fifty. Boldcode did not immediately respond to a request for comment.

    Warnings of Harm

    Experts have been sounding the alarm about the dangers of humanitarian data leaks for years. “Critical incidents – such as breaches of platforms and networks, weaponisation of humanitarian data to aid attacks on vulnerable populations, and exploitation of humanitarian systems against responders and beneficiaries – may already be occurring and causing grievous harm without public accountability,” wrote a trio of researchers from the Signal Program on Human Security and Technology at the Harvard Humanitarian Initiative in 2017, the same year the KST was launched.

    A 2022 analysis by the CyberPeace Institute identified 157 “cyber incidents” that affected the not-for-profit sector between July 2020 and June 2022. In at least 60 cases, personal data was exposed, and in at least 28, it was taken. “This type of sensitive personal information can be monetized or simply used to cause further harm,” the report says. “Such exploitation has a strong potential for re-victimization of individuals as well as the organizations themselves.”

    In 2021, HRW itself criticized the United Nations Refugee Agency for having “improperly collected and shared personal information from ethnic Rohingya refugees.” In some cases, according to HRW, the agency had “failed to obtain refugees’ informed consent to share their data,” exposing refugees to further risk.

    Earlier this year, HRW criticized the Egyptian government and a private British company, Academic Assessment, for leaving the personal information of children unprotected on the open web for at least eight months. “The exposure violates children’s privacy, exposes them to the risk of serious harm, and appears to violate the data protection laws in both Egypt and the United Kingdom,” reads the April report.

    In that case, 72,000 records — including children’s names, birth dates, phone numbers, and photo identification — were left vulnerable. “By carelessly exposing children’s private information, the Egyptian government and Academic Assessment put children at risk of serious harm,” said Hye Jung Han, children’s rights and technology researcher and advocate at HRW at the time.

    The threats posed by the release of the KST information are far greater than the Egyptian breach. For decades, Congo has been beset by armed violence, from wars involving the neighboring nations of Rwanda and Uganda to attacks by machete-wielding militias. More recently, in the country’s far east, millions have been killed, raped, or driven from their homes by more than 120 armed groups.

    Almost all the individuals in the database, as well as their interviewers, appear to have confidentially provided sensitive information about armed groups, militias, or state security forces, all of which are implicated in grave human rights violations. Given the lawlessness and insecurity of eastern Congo, the most vulnerable individuals — members of local civil society organizations, activists, and residents living in conflict areas — are at risk of arrest, kidnapping, sexual assault, or death at the hands of these groups.

    “For an organization working with people in a conflict zone, this is the most important type of data that they have, so it should be critically protected,” said CyberPeace Institute’s Ogée, who previously worked at European cybersecurity agencies and the World Economic Forum.

    The KST’s sensitive files were hosted on an open “bucket”: a cloud storage server accessible to the open internet. Because the project posted monthly public reports on the same server that contained the sensitive information, the server’s URL often surfaced in search engine results related to the project.
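
    As a generic illustration (not the KST’s actual configuration), here is a minimal Python sketch of why an open bucket is so dangerous: anyone on the internet can enumerate its contents anonymously, without credentials. The bucket name is a hypothetical placeholder.

    ```python
    # Minimal sketch: listing a world-readable S3-style bucket with no
    # credentials. "example-public-bucket" is a made-up placeholder.
    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    # An unsigned client sends anonymous requests -- no account needed.
    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

    response = s3.list_objects_v2(Bucket="example-public-bucket")
    for obj in response.get("Contents", []):
        # Every file name and size in the bucket is visible to the world.
        print(obj["Key"], obj["Size"])
    ```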

    “The primary methodology in the humanitarian sector is ‘do no harm.’ If you’re not able to come into a conflict zone and do your work without creating any more harm, then you shouldn’t be doing it,” Ogée said. “The day that database is created and uploaded on that bucket, an NGO that is security-minded and thinks about ‘do no harm’ should have every process in place to make sure that this database never gets accessed from the outside.”

    The leak exposed the identities of 6,000 to 8,000 individuals, according to The Intercept’s analysis. The dataset references thousands of sources labeled “civil society” and “inhabitants” of villages where violent incidents occurred, as well as hundreds of “youth” and “human rights defenders.” Congolese health professionals and teachers are cited hundreds of times, and there are multiple references to students, lawyers, psychologists, “women leaders,” magistrates, and Congolese civil society groups, including prominent activist organizations regularly targeted by the government.

    “It’s really shocking,” said a humanitarian researcher with long experience conducting interviews with vulnerable people in African conflict zones. “The most important thing to me is the security of my sources. I would rather not document a massacre than endanger my sources. So to leave their information in the open is incredibly negligent. Someone needs to take responsibility.”

    Residents of Bambo in Rutshuru territory in the Democratic Republic of the Congo flee rebel attacks on Oct. 26, 2023.
    Photo: Alexis Huguet/AFP via Getty Images

    Breach of Ethics

    Since being contacted by The Intercept, the organizations involved have sought to distance themselves from the project’s lax security protocols. 

    In its internal statement to staff, HRW emphasized that it was not responsible for collecting information or supervising activities for KST, but was “involved in designing the research methodology, provided training, guidance and logistical support to KST researchers, and spot-checked some information.”

    “HRW does not manage the KST website and did not set up, manage or maintain the database,” the internal statement said.

    The Intercept spoke with multiple people exposed in the data leak who said they did not consent to any information being stored in a database. This was confirmed by four sources who worked closely with the KST, who said that gaining informed consent from people who were interviewed, including advising them that they were being interviewed for the KST, was not a part of the research methodology.

    Sources close to the KST noted that its researchers didn’t identify who they were working for. The failure to obtain consent to collect personal information was likely an institutional oversight, they said.

    “Obtaining informed consent is an undisputed core principle of research ethics,” the researcher who collaborated with the KST told The Intercept. “Not telling people who you work for and what happens to the information you provide to them amounts to lying. And that’s what has happened here at an unimaginable scale.”

    In an email to NYU’s Center on International Cooperation and their Human Research Protections Program obtained by The Intercept, Fahey, the former coordinator of the Group of Experts on the Democratic Republic of the Congo, charged that KST staff “apparently failed to disclose that they were working for KST when soliciting information and did not tell sources how their information would be cataloged or used.”

    In response, Sarah Cliffe, the executive director of NYU’s Center on International Cooperation, did not acknowledge Fahey’s concerns about informed consent, but noted that the institution takes “very seriously” concerns about the security of sources and KST staff exposed in the leak, according to an email seen by The Intercept. “We can assure you that we are taking immediate steps to investigate this and decide on the best course of action,” Cliffe wrote on November 1. 

    Fahey told The Intercept that NYU’s Human Research Protections Program did not respond to his questions about KST’s compliance with accepted academic standards and securing informed consent from Congolese informants. That NYU office includes the university’s institutional review board, or IRB, the body comprised of faculty and staff who review research protocols to ensure protection of human subjects and compliance with state and federal regulations as well as university policies.

    NYU spokesperson John Beckman confirmed that while the KST’s researchers received training on security, research methodology, and research ethics, “including the importance of informed consent,” some of the people interviewed “were not informed that their personally identifiable information would be recorded in the database and were unaware that the information was to be used for the KST.” 

    Beckman added, “NYU is convening an investigative panel to review these human subject-related issues.”

    Beckman also stated that the failure of Congolese “focal points” to provide informed consent tended to occur in situations that may have affected their own security. “Nevertheless, this raises troubling issues,” Beckman said, noting that all the partners involved in the KST “will be working together to review what happened, to identify what needs to be corrected going forward, and to determine how best to safeguard those involved in collecting and providing information about the incidents the KST is meant to track.”

    Fong, of HRW, also acknowledged failures to provide informed consent in all instances. “We are aware that, while the KST researchers appropriately identified themselves as working for Congolese civil society organizations, some KST researchers did not in all cases identify themselves as working for KST, for security reasons,” she told The Intercept. “We are reviewing the research protocols and their implementation.”

    “The partners have been working hard to try to address what happened and mitigate it,” Beckman told The Intercept, specifying that all involved were working to determine the safest method to inform those exposed in the leak.

    Both NYU and HRW named their Congolese partner organization as being involved in some of the original errors and the institutional response. 

    The fallout from the exposure of the data may extend far beyond the breach of academic or NGO protocols. “Given the lack of security on KST’s website, it’s possible that intelligence agencies in Rwanda, Uganda, Burundi, DRC, and elsewhere have been accessing and mining this data for years,” Fahey said. “It is also possible that Congolese armed groups and national security forces have monitored who said what to KST staff.”

    The post Online Atrocity Database Exposed Thousands of Vulnerable People in Congo appeared first on The Intercept.

    This post was originally published on The Intercept.

  • According to Government statistics, women make up only 27% of the workforce across all STEM industries. That’s why it can feel a bit lonely! And those professionals may long for a good support network. The non-profit Franklin Women is making a splash in Canberra: on November 29, there’ll be an event where women working in the health and medical research ecosystem can come together and connect. Ahead of the event, BroadAgenda editor Ginger Gorman had a chat with UC research psychologist Dr Janie Busby Grant, who is involved with Franklin Women. Yes, she works with robots. Super cool.

     

    If you were sitting next to someone at a dinner party, how would you explain your work and research in a nutshell?

    There’s a bunch of different ways of answering this! I usually say I’m a researcher and lecturer in cognitive psychology, so I investigate different aspects of how people think, using approaches like experiments, surveys, interviews and ‘living labs.’

    Some of my research looks at how we remember the past and imagine the future, and how this links in with mental health issues like anxiety. For example, in a recent analysis with my PhD student, Jessica Du, we found that people with high levels of anxiety imagine negative future events more vividly and in more detail than people with low levels of anxiety, and it’s the opposite pattern for positive events. This type of research can help us understand what drives or maintains anxiety in people, and help develop interventions to reduce anxiety, which is really relevant given the high rates of anxiety we’re seeing at the moment, especially among young people.

    I also do a lot of work in human-robot interaction, which is about how we engage with and think about robots. In our Collaborative Robotics Lab, the roboticist Damith Herath and I look at factors that affect how robots interact with people in environments like aged care homes and hospitals.

    I really like heading out ‘into the field’ with our team to see how robots are currently being used in different settings, and working out how we can improve the design, functions and roll-out of robots in those environments. It’s a really exciting field, because there’s so many questions we don’t have answers to yet, not just practical issues about where and when to deploy robots, but deeper questions about what people understand and believe about them, and how this can be shaped by different features of the interaction. There’s so much to do!


    The Collaborative Robotics Lab at the University of Canberra. Picture: Supplied

    How do you incorporate the insights and experiences of people into your work in a way that’s ethical? How do you get them interested in what you’re doing?

    The best way to figure out what’s going on in a particular situation is to approach it from a lot of different angles, and I try to do this in all of my work. For example, with my work on young people and anxiety, we have projects analysing data from big randomised controlled trials of interventions that could improve their wellbeing, but also smaller projects where we ask young people directly about their personal experience of the things they think affect their anxiety. Incorporating people’s perspectives in different ways not only gives you new ideas and helps stop you going down dead ends in a research sense, but also makes sure the people you’re working with are heard, giving them space to say in their own words what matters to them, which is critical.

    Similarly with our robotics work, we’ve got these highly structured lab-based studies where we try to control everything except the variable we’re interested in, but then out in real-world situations it’s about listening to people and trying to understand their perspectives on the situation or problem. One of our Lab students, Sharni Konrad, is great at this – finding and listening to people to help understand how or why a robot might be being used (or not used!). Without incorporating everyone’s voices, there just isn’t going to be a good outcome, whether you define a ’good outcome’ from a scientific, efficiency, accessibility or ethical standpoint.

    Let’s wind back the clock a bit. Why did you go into this field?  What was compelling about it?  

    Great question. At a really basic level, I guess I’m interested in why people do things – I think a lot of us feel this way! I’m just lucky enough to have a job where I get to do this. So I was always drawn to how we think, and what factors affect our thinking. Combined with my interest in technology, that led me into human-computer interaction, human-robot interaction, and artificial intelligence.

    Event information: Wed 29 November 2023, 5.30–8.00pm (AEDT)

    Venue: Hotel Realm, 18 National Circuit, Barton ACT 2600

    Tickets: Members – $29 (join here); Non-members – $55. Your ticket includes food and drinks!

    Book here

    What impact do you hope your work has? Or give concrete examples of where/how your work is making change? What do you want readers to know about?

    I wouldn’t be in this job if I didn’t think I could make a difference! I do like that I get to contribute in a lot of different ways. On a very basic level I’m contributing to what we know about how the mind works, and then there’s the potential applicability of this work in mental health treatment. Some recent analyses we did looking at what factors predict suicidal behaviours in young people are helping to build the case for anxiety as an important predictor of later behaviours, and anything we can contribute in that space is so important.

    I also think everyone recognises that we’re at a significant point with humans and technology, and my role here is to help focus on the human perspective – the more psychology and social science researchers we can get working in this space the better! We need multidisciplinary answers to these questions, and sometimes it can be hard to get all of the right people ‘in the room’.

    And I also teach a lot! I work with several hundred first year students every year, teaching them about how to conduct research, and it is really rewarding to give them lots of real-world examples of how to do research, and talk about why understanding and knowing how to do it is important.


    Janie Busby Grant. Picture: Supplied 

    Do you view yourself as feminist researcher? Why? Why not? What does the word mean to you in the context of your own values and also your work?

    Yes, definitely, I think of myself as a feminist always, including in my research. It pervades all aspects of how I go about my work: how I identify and learn from my participants, how I try to ensure I’m hearing from a diversity of sources, and how I interpret and report on my findings.

    And then there’s all the practical ways in which I try to support women in research, particularly through providing opportunities for up-and-coming female researchers – and seeking out mentoring from those women more experienced than me.

    Particularly in the technology space, there’s still a whole lot more we need to do for girls and women to get engaged in technology – we see a lot of great female representation in psychology, but… not so much once you start getting into the humans-and-tech space. So I’m working to bring in some great young female researchers into the field!

    Why did you get involved with Franklin Women?

    Speaking of supporting women and being supported… I’ve so much enjoyed being part of Franklin Women for the last year or so – it’s been a whole new way of finding interesting women to talk with about a whole range of work (and not-work!) issues. I started with their mentoring program last year, which was fantastic and really helped me rework what I want to achieve over the next few years. Since then, it’s been fantastic to find a dedicated space that is work-adjacent – supporting me within the research space, while being a social community as well. I’m super excited to be part of their new Canberra Peer Advisory Group next year!

    • Picture at top: Janie working with Softbank’s Pepper robot in the Collaborative Robotics Lab. Picture: Supplied 

    The post How we engage with and think about robots appeared first on BroadAgenda.

    This post was originally published on BroadAgenda.

  • The popular data broker LexisNexis began selling face recognition services and personal location data to U.S. Customs and Border Protection late last year, according to contract documents obtained through a Freedom of Information Act request.

    According to the documents, obtained by the advocacy group Just Futures Law and shared with The Intercept, LexisNexis Risk Solutions began selling surveillance tools to the border enforcement agency in December 2022. The $15.9 million contract includes a broad menu of powerful tools for locating individuals throughout the United States using a vast array of personal data, much of it obtained and used without judicial oversight.

    “This contract is mass surveillance in hyperdrive.”

    Through LexisNexis, CBP investigators gained a convenient place to centralize, analyze, and search various databases containing enormous volumes of intimate personal information, both public and proprietary.

    “This contract is mass surveillance in hyperdrive,” Julie Mao, an attorney and co-founder of Just Futures Law, told The Intercept. “It’s frightening that a rogue agency such as CBP has access to so many powerful technologies at the click of the button. Unfortunately, this is what LexisNexis appears now to be selling to thousands of police forces across the country. It’s now become a one-stop shop for accessing a range of invasive surveillance tools.”

    A variety of CBP offices would make use of the surveillance tools, according to the documents. Among them is the U.S. Border Patrol, which would use LexisNexis to “help examine individuals and entities to determine their admissibility to the US. and their proclivity to violate U.S. laws and regulations.”

    Among other tools, the contract shows LexisNexis is providing CBP with social media surveillance, access to jail booking data, face recognition and “geolocation analysis & geographic mapping” of cellphones. All this data can be queried in “large volume online batching,” allowing CBP investigators to target broad groups of people and discern “connections among individuals, incidents, activities, and locations,” handily visualized through Google Maps.

    CBP declined to comment for this story, and LexisNexis did not respond to an inquiry. Despite the explicit reference to providing “LexisNexis Facial Recognition” in the contract, a fact sheet published by the company online says, “LexisNexis Risk Solutions does not provide the Department of Homeland Security” — CBP’s parent agency — “or US Immigration and Customs Enforcement with license plate images or facial recognition capabilities.”

    The contract includes a variety of means for CBP to exploit the cellphones of those it targets. Accurint, a police and counterterror surveillance tool LexisNexis acquired in 2004, allows the agency to do analysis of real-time phone call records and phone geolocation through its “TraX” software.

    While it’s unclear how exactly TraX pinpoints its targets, LexisNexis marketing materials cite “cellular providers live pings for geolocation tracking.” These materials also note that TraX incorporates both “call detail records obtained through legal process (i.e. search warrant or court order) and third-party device geolocation information.” A 2023 LexisNexis promotional brochure says, “The LexisNexis Risk Solutions Geolocation Investigative Team offers geolocation analysis and investigative case assistance to law enforcement and public safety customers.”

    Any CBP use of geolocational data is controversial, given the agency’s recent history. Prior reporting found that, rather than request phone location data through a search warrant, CBP simply purchased such data from unregulated brokers — a practice that critics say allows the government to sidestep Fourth Amendment protections against police searches.

    According to a September report by 404 Media, CBP recently told Sen. Ron Wyden, D-Ore., it “will not be utilizing Commercial Telemetry Data (CTD) after the conclusion of FY23 (September 30, 2023),” using a technical term for such commercially purchased location information.

    The agency, however, also told Wyden that it could renew its use of commercial location data if there were “a critical mission need” in the future. It’s unclear if this contract provided commercial location data to CBP, or if it was affected by the agency’s commitment to Wyden. (LexisNexis did not respond to a question about whether it provides or provided the type of phone location data that CBP had sworn off.)

    The contract also shows how LexisNexis operates as a reseller for surveillance tools created by other vendors. Its social media surveillance is “powered by” Babel X, a controversial firm that CBP and the FBI have previously used.

    According to a May 2023 report by Motherboard, Babel X allows users to input one piece of information about a surveillance target, like a Social Security number, and receive large amounts of collated information back. The returned data can include “social media posts, linked IP address, employment history, and unique advertising identifiers associated with their mobile phone. The monitoring can apply to U.S. persons, including citizens and permanent residents, as well as refugees and asylum seekers.”

    While LexisNexis is known to provide similar data services to U.S. Immigration and Customs Enforcement, another division of the Department of Homeland Security, details of its surveillance work with CBP were not previously known. Though both agencies enforce immigration law, CBP typically focuses on enforcement along the border, while ICE detains and deports migrants inland.

    In recent years, CBP has drawn harsh criticism from civil libertarians and human rights advocates for its activities both at and far from the U.S.-Mexico border. In 2020, CBP was found to have flown a Predator surveillance drone over Minneapolis protests after the murder of George Floyd; two months later, CBP agents in unmarked vehicles seized racial justice protesters off the streets of Portland, Oregon — an act the American Civil Liberties Union condemned as a “blatant demonstration of unconstitutional authoritarianism.”

    Just Futures Law is currently suing LexisNexis over claims it illegally obtains and sells personal data.

    The post LexisNexis Sold Powerful Spy Tools to U.S. Customs and Border Protection appeared first on The Intercept.

    This post was originally published on The Intercept.

  • A Google employee protesting the tech giant’s business with the Israeli government was questioned by Google’s human resources department over allegations that he endorsed terrorism, The Intercept has learned. The employee said he was the only Muslim and Middle Easterner who circulated the letter and also the only one who was confronted by HR about it.

    The employee was objecting to Project Nimbus, Google’s controversial $1.2 billion contract with the Israeli government and its military to provide state-of-the-art cloud computing and machine learning tools.

    Since its announcement two years ago, Project Nimbus has drawn widespread criticism both inside and outside Google, spurring employee-led protests and warnings from human rights groups and surveillance experts that it could bolster state repression of Palestinians.

    Mohammad Khatami, a Google software engineer, sent an email to two internal listservs on October 18 saying Project Nimbus was implicated in human rights abuses against Palestinians — abuses that fit a 75-year pattern that had brought the conflict to the October 7 Hamas massacre of some 1,200 Israelis, mostly civilians. The letter, distributed internally by anti-Nimbus Google workers through company email lists, went on to say that Google could become “complicit in what history will remember as a genocide.”

    “Strangely enough, I was the only one of us who was sent to HR over people saying I was supporting terrorism or justifying terrorism.”

    Twelve days later, Google HR told Khatami they were scheduling a meeting with him, during which he says he was questioned about whether the letter was “justifying the terrorism on October 7th.”

    In an interview, Khatami told The Intercept he was not only disturbed by what he considers an attempt by Google to stifle dissent on Nimbus, but also felt singled out because of his religion and ethnicity. The letter was drafted and internally circulated by a group of anti-Nimbus Google employees, but none of them other than Khatami were called by HR, according to Khatami and Josh Marxen, another anti-Nimbus organizer at Google who helped spread the letter. Though he declined to comment on the outcome of the HR meeting, Khatami said it left him shaken.

    “It was very emotionally taxing,” Khatami said. “I was crying by the end of it.”

    “I’m the only Muslim or Middle Eastern organizer who sent out that email,” he told The Intercept. “Strangely enough, I was the only one of us who was sent to HR over people saying I was supporting terrorism or justifying terrorism.”

    Marxen shared a screenshot of the virtually identical email he sent, also on October 18, with The Intercept. Though there are a few small changes — Marxen’s email refers to “a seige [sic] upon all of Gaza” whereas Khatami’s cites “the complete destitution of Gaza” — both contain verbatim language connecting the October 7 attack to Israel’s past treatment of Palestinians.

    Google spokesperson Courtenay Mencini told The Intercept, “We follow up on every concern raised, and in this case, dozens of employees reported this individual’s email – not the sharing of the petition itself – for including language that did not follow our workplace policies.” Mencini declined to say which workplace policies Khatami’s email allegedly violated, whether other organizers had gotten HR calls, or if any other company personnel had been approached by Employee Relations for comments made about the war.

    The incident comes just one year after former Google employee Ariel Koren said the company attempted to force her to relocate to Brazil in retaliation for her early anti-Nimbus organizing. Koren later quit in protest and remains active in advocating against the contract. Project Nimbus, despite the dissent, remains in place, in part because of contractual terms put in place by Israel forbidding Google from cutting off service in response to political pressure or boycott campaigns.

    Dark Clouds Over Nimbus

    Dissent at Google is neither rare nor ineffective. Employee opposition to controversial military contracts has previously pushed the company to drop plans to help with the Pentagon’s drone warfare program and a planned Chinese version of Google Search that would filter out results unwanted by the Chinese government. Nimbus, however, has managed to survive.

    In the wake of the October 7 Hamas attacks against Israel and resulting Israeli counteroffensive, now in its second month of airstrikes and a more recent ground invasion, Project Nimbus is again a flashpoint within the company.

    With the rank and file disturbed by the company’s role as a defense contractor, Google has attempted to downplay the military nature of the contract.

    Mencini, the Google spokesperson, said that anti-Nimbus organizers were “misrepresenting” the contract’s military role.

    “This is part of a longstanding campaign by a group of organizations and people who largely don’t work at Google,” Mencini said. “We have been very clear that the Nimbus contract is for workloads running on our commercial platform by Israeli government ministries such as finance, healthcare, transportation, and education. Our work is not directed at highly sensitive or classified military workloads relevant to weapons or intelligence services.”

    Nimbus training documents published by The Intercept last year, however, show the company was pitching its use for the Ministry of Defense. Moreover, the Israeli government itself is open about the military applications of Project Nimbus: A 2023 press release by the Israeli Ministry of Finance specifically names the Israel Defense Forces as a beneficiary, while an overview written by the country’s National Digital Agency describes the contract as “a comprehensive and in-depth solution to the provision of public cloud services to the Government, the defense establishment and other public organizations.”

    “If we do not speak out now, we are complicit in what history will remember as a genocide.”

    Against this backdrop, Khatami, in coordination with others in the worker-led anti-Nimbus campaign, sent his October 18 note to internal Arab and Middle Eastern affinity groups laying out their argument against the project and asking like-minded colleagues to sign an employee petition.

    “Through Project Nimbus, Google is complicit in the mass surveillance and other human rights abuses which Palestinians have been subject to daily for the past 75 years, and which is the root cause of the violence initiated on October 7th,” the letter said. “If we do not speak out now, we are complicit in what history will remember as a genocide.”

    On October 30, Khatami received an email from Google’s Employee Relations division informing him that he would soon be questioned by company representatives regarding “a concern about your conduct that has been brought to our attention.”

    According to Khatami, in the ensuing phone call, Google HR pressed him about the portion of his email that made a historical connection between the October 7 Hamas attack and the 75 years of Israeli rights abuses that preceded it, claiming some of his co-workers believed he was endorsing violence. Khatami recalled being asked, “Can you see how people are thinking you’re justifying the terrorism on October 7th?”

    Khatami said he and his fellow anti-Nimbus organizers were in no way endorsing the violence against Israeli civilians — just as they now oppose the deaths of more than 10,000 Palestinians, according to the latest figures from Gaza’s Ministry of Health. Rather, the Google employees wanted to provide sociopolitical context for Project Nimbus, part of a broader employee-led effort of “demilitarizing our company that was never meant to be militarized.” To point out the relevant background leading to the October 7 attack, he said, is not to approve it.

    “We wrote that the root cause of the violence is the occupation,” Khatami explained. “Analysis is not justification.”

    Double Standard

    Khatami also objects to what he said is a double standard within Google about what speech about the war is tolerated, a source of ongoing turmoil at the company. The day after his original email, a Google employee responded angrily to the email chain: “Accusing Israel of genocide and Google of being complicit is a grave accusation!” This employee, who works at the company’s cloud computing division, itself at the core of Project Nimbus, continued:

    To break it down for you, project nimbus contributes to Israel’s security. Therefore, any calls to drop it are meant to weaken Israel’s security. If Israel’s security is weak, then the prospect of more terrorist attacks, like the one we saw on October 7, is high. Terrorist attacks will result in casualties that will affect YOUR Israeli colleagues and their family. Attacks will be retaliated by Israel which will result in casualties that will affect your Palestinian colleagues and their family (because they are used as shields by the terrorists)…bottom line, a secured Israel means less lives lost! Therefore if you have the good intention to preserve human lives then you MUST support project Nimbus!

    While Khatami disagrees strongly with the overall argument in the response email, he objected in particular to the co-worker’s claim that Israel is killing Palestinians “because they are used as shields by the terrorists” — a justification of violence far more explicit than the one he was accused of, he said. Khatami questioned whether widespread references to the inviolability of Israeli self-defense by Google employees have provoked treatment from HR similar to what he received after his email about Nimbus.

    Internal employee communications viewed by The Intercept show tensions within Google over the Israeli–Palestinian conflict aren’t limited to debates over Project Nimbus. One screenshot shows an Israeli Google employee repeatedly asking Middle Eastern colleagues if they support Hamas, while another shows a Google engineer suggesting Palestinians worried about the welfare of their children should simply stop having kids. Another lamented “friends and family [who] are slaughtered by the Gaza-grown group of bloodthirsty animals.”

    According to a recent New York Times report, which found “at least one” instance of “overtly antisemitic” content posted through internal Google channels, “one worker had been fired after writing in an internal company message board that Israelis living near Gaza ‘deserved to be impacted.’”

    Another screenshot reviewed by The Intercept, taken from an email group for Israeli Google staff, shows employees discussing a post by a colleague criticizing the Israeli occupation and encouraging donations to a Gaza relief fund.

    “During this time we all need to stay strong as a nation and united,” one Google employee replied in the email group. “As if we are not going through enough suffering, we will unfortunately see many emails, comments either internally or on social media that are pro Hamas and clearly anti semitic. report immediately!” Another added: “People like that make me sick. But she is a lost cause.” A third chimed in to say they had internally reported the colleague soliciting donations. A separate post soliciting donations for the same Gaza relief fund was downvoted 139 times on an internal message board, according to another screenshot, while a post stating only “Killing civilians is indefensible” received 51 downvotes.

    While Khatami says he was unnerved and disheartened by the HR grilling, he’s still committed to organizing against Project Nimbus.

    “It definitely emotionally affected me, it definitely made me significantly more fearful of organizing in this space,” he said. “But I think knowing that people are dying right now and slaughtered in a genocide that’s aided and abetted by my company, remembering that makes the fear go away.”

    The post Google Activists Circulated Internal Petition on Israel Ties. Only the Muslim Got a Call from HR. appeared first on The Intercept.


  • On November 7, NSO Group, the Israeli spyware company infamous for its Pegasus phone-tapping technology, sent an urgent email and letter by UPS to request a meeting with Secretary of State Antony Blinken and officials at the U.S. State Department. 

    “I am writing on behalf of NSO Group to urgently request an opportunity to engage with Secretary Blinken and the officials at the State Department regarding the importance of cyber intelligence technology in the wake of the grave security threats posed by the recent Hamas terrorist attacks in Israel and their aftermath,” wrote Timothy Dickinson, a partner at the Los Angeles-headquartered law firm Paul Hastings, on behalf of NSO. 

    In the last two years, NSO’s reputation has taken a beating amid revelations about its spyware’s role in human rights abuses. 

    As controversy was erupting over its role in authoritarian governments’ spying, NSO Group was blacklisted by the U.S. Department of Commerce in November 2021, “to put human rights at the center of US foreign policy,” the agency said at the time. A month after the blacklisting, it was revealed that Pegasus had been used to spy on American diplomats.

    NSO’s letter to Blinken — publicly filed as part of Paul Hastings’s obligation under the Foreign Agents Registration Act — is part of the company’s latest attempt to reinvent its image and stay afloat and, most importantly, a bid to reverse the blacklisting. (Neither the State Department nor Paul Hastings responded to requests for comment.)

    For NSO, the blacklisting has been an existential threat. The push to reverse it, which included hiring multiple American public relations and law firms, cost NSO $1.5 million in lobbying last year, more than the government of Israel itself spent. The effort focused heavily on Republican politicians, many of whom are now vocal in their support of Israel and against a ceasefire in the brutal war being waged by the country in the Gaza Strip. 

    Amid the Israeli war effort, NSO appears more convinced than ever that it is of use to the American government. 

    “NSO’s technology is supporting the current global fight against terrorism in any and all forms,” said the letter to Blinken. “These efforts squarely align with the Biden-Harris administration’s repeated messages and actions of support for the Israeli government.” 

    NSO is marketing itself as a volunteer in the Israeli war effort, allegedly helping track down missing Israelis and hostages. And at this moment, which half a dozen experts have described to The Intercept as NSO’s attempt at “crisis-washing,” some believe that the American government may create a space for NSO to come back to the table. 

    “NSO’s participation in the Israeli government’s efforts to locate citizens in Gaza seems to be an effort by the company to rehabilitate its image in this crisis,” said Adam Shapiro, director of advocacy for Israel–Palestine at Democracy for the Arab World Now, a group founded by the slain journalist Jamal Khashoggi to advocate for human rights in the Middle East. “But alarm bells should be ringing that NSO Group has been recruited in Israel’s war effort.”

    Documents obtained by The Intercept through FARA and public records requests illustrate the company’s intense lobbying efforts — especially among hawkish, pro-Israel Republicans.

    Working on NSO’s behalf, Pillsbury Winthrop Shaw Pittman, a New York-based law firm, held over half a dozen meetings between March and August with Rep. Pete Sessions, R-Texas, who sits on the House Financial Services Committee as well as Oversight and Reform. One was to “discuss status of Bureau of Industry and Security Communications, U.S. Department of Commerce appeal.” (Pillsbury did not respond to a request for comment.)

    “NSO’s participation in the Israeli government’s efforts to locate citizens in Gaza seems to be an effort by the company to rehabilitate its image in this crisis.”

    The lobbyists also had three meetings in March and April with Justin Discigil, then chief of staff to the far-right Rep. Dan Crenshaw, R-Texas, who sits on the House Permanent Select Committee on Intelligence. (Neither Sessions nor Crenshaw responded to requests for comment.)

    Public records about NSO’s push also offer concrete examples of something the company has been at pains to evade, and that the American government has routinely overlooked: the existing relationship between the Israeli state and the spyware company. 

    “NSO’s Pegasus tool is treated in Israel as a defense article subject to regulation by the country’s regulators, which conducts its own assessment of human rights risks in countries across the world,” the letter to Blinken said. 

    A previously unreported May 2022 email from Department of Commerce official Elena Love to lobbyists for NSO also draws a connection between the Israeli government and NSO. In her email, Love asked the lobbyists working to undo NSO’s blacklisting for permission to send a list of questions directly to Israeli officials. (The Department of Commerce said there is no change to the status of NSO on the blacklist and declined to comment further. NSO Group and the Israeli government did not respond to requests for comment.)

    Currently, in the war effort, the Israeli government is letting NSO sit up front. In an October 19 podcast by the Israeli news outlet Haaretz — podcasts are less heavily censored by the government than written articles — a reporter discusses how NSO has reported for duty, in essence taking on work for the Ministry of Defense.

    “What’s really, really important to understand is that these companies,” said Haaretz journalist Omer Benjakob in the podcast, “some of them have already been working with the state of Israel.”

    Hiring Lobbyists in D.C.

    By selling its spyware to authoritarian governments, NSO has facilitated a variety of human rights abuses: from its use by the United Arab Emirates to spy on Khashoggi, the journalist later killed by Saudi Arabia, to revelations just this week of its use to spy on Indian journalists. According to the research group Forensic Architecture, the use of NSO Group’s products has contributed to over 150 physical attacks against journalists, rights advocates, and other civil society actors, including some of their deaths. 

    Now the company is mounting a rapacious public relations push to undo the harm to its reputation. 

    NSO’s recent hiring of two lobbyists, Jeffrey Weiss and Stewart Baker, from the Washington-based white-shoe law firm Steptoe & Johnson, was made public at the end of October in a filing with the House of Representatives. On behalf of NSO, the firm was to address “US national security and export control policy in an international context.”

    Baker, former assistant secretary for policy at the Department of Homeland Security and a former National Security Agency general counsel, previously told The Associated Press, before representing NSO, that the blacklisting of the company “certainly isn’t a death penalty and may over time just be really aggravating.” 

    Weiss, for his part, had relevant experience to help get NSO off the Department of Commerce blacklist: He was deputy director of policy and strategic planning at the agency from 2013 to 2017. 

    Weiss and another Steptoe & Johnson partner, Eric Emerson, had also been hired by the Israeli government a few months earlier, according to previously unreported FARA documents. Weiss registered to provide services to the Economic and Trade Mission at the Embassy of Israel in July, and then to NSO in October. 

    Emerson, who has worked at Steptoe for over 30 years specializing in international trade law and policy issues, registered to engage with Natalie Gutman-Chen, Israeli minister of trade and economic affairs. Documents show that Steptoe’s annual budget for this work is $180,000.

    Demonstrators in support of Palestine gather at the Israeli Embassy in Washington, D.C., on Oct. 18, 2023.
    Photo: Ali Khaligh/Middle East Images/AFP via Getty Images

    Steptoe’s description of its work for the Israeli mission is similar to its goals for the NSO contract: to “provide advice on various international trade related matters affecting the State of Israel” which will be used “to develop its position w/re various U.S. policies.” 

    It is not illegal to register to lobby for two affiliated clients, and powerful law firms doing lobbying work often do so for efficiency, holding meetings on behalf of both clients at once.

    “It is not uncommon to kill two birds with one stone,” said Anna Massoglia, editorial and investigations manager at OpenSecrets, which tracks lobbying money in Washington. “It’s possible NSO got a discount because they already had Israel.” 

    “It’s possible NSO got a discount because they already had Israel.”

    On October 30, amid the Israeli onslaught against Gaza, Steptoe filed its supplemental statement with the Department of Justice, in which lobbyists are supposed to detail their meetings and outreach. It was left curiously blank, perhaps portending a later amendment to the filing. (“The filing covers what we have been asked to advise on and we can’t comment any further at this time,” Steptoe said in a statement.)

    “It’s hard to prove it’s deliberate,” Massoglia said. “But the timing is interesting.” 

    Ties to Israeli Government 

    Last year, a previously unreported email, obtained through a public records request, provided another example of the interweaving relationship between the Israeli government and NSO.

    In May 2022, Love, the acting chair of the End-User Review Committee at the Department of Commerce, emailed lobbyists at Steptoe and Pillsbury. Love sent along a list of questions for their client, NSO, about the company’s appeal to be removed from the blacklist.

    “We are also requesting permission to provide these questions to the government of Israel,” Love wrote. 

    The email, however, had been sent about a year and a half before Steptoe filed FARA registrations for its staff to lobby on behalf of NSO — and raises questions about adherence to the foreign lobbying law. (Pillsbury was registered under FARA at the time.)

    FARA requires lobbyists to register with the Department of Justice when taking on foreign principals — both governments and companies — as clients.

    “What has never been a gray area under FARA is if you are communicating directly with the U.S. government on behalf of a foreign principal, that’s a political activity,” said Ben Freeman, director of the democratizing foreign policy program at the Quincy Institute. Of the period when Steptoe was working for NSO but hadn’t registered yet, Freeman said, “By skirting FARA registration, they are really playing with fire.” 

    Though FARA cases have increased since 2016, charges brought by the Justice Department remain relatively rare. The statute itself is forgiving, the enforcement mechanisms like warning letters often render failures to register moot, and, with so little case law owing to so few indictments, prosecutors are loath to try their hand at bringing charges. (The Department of Justice did not respond to a request for comment.)

    “By skirting FARA registration, they are really playing with fire.”

    In a letter sent to the Justice Department in July of last year, Democracy for the Arab World Now called on the government to investigate what, at the time, was described as the firms’ lack of registration as agents for Israel under FARA. “We believe that misrepresentation to be intentional,” the letter said.

    None of the four companies hired by NSO said in their registrations that there is any Israeli government control over the spyware group, despite the evidence laid out by Democracy for the Arab World Now of Israeli influence on the company that meets the U.S. definition of government control. This includes the fact that all of NSO’s contracts are determined by the government of Israel, allegedly to serve political interests.

    The Department of Justice, however, does not give updates or responses to such referrals. Neither has it published an opinion or issued a penalty. 

    “Based on FARA filings, one would be under the impression that NSO was a run of the mill private corporate entity,” said Shapiro, of Democracy for the Arab World Now. “But given its role in spyware, understanding the government’s control is really important.”

    The post Israeli Spyware Firm NSO Demands “Urgent” Meeting With Blinken Amid Gaza War Lobbying Effort appeared first on The Intercept.


    In Phoenix, Austin, Houston, Dallas, Miami, and San Francisco, hundreds of so-called autonomous vehicles, or AVs, operated by General Motors’ self-driving car division, Cruise, have for years ferried passengers to their destinations on busy city roads. Cruise’s app-hailed robot rides create a detailed picture of their surroundings through a combination of sophisticated sensors and navigate through roadways and around obstacles with machine learning software intended to detect and avoid hazards.

    AV companies hope these driverless vehicles will replace not just Uber, but also human driving as we know it. The underlying technology, however, is still half-baked and error-prone, giving rise to widespread criticisms that companies like Cruise are essentially running beta tests on public streets.

    Despite the popular skepticism, Cruise insists its robots are profoundly safer than what they’re aiming to replace: cars driven by people. In an interview last month, Cruise CEO Kyle Vogt downplayed safety concerns: “Anything that we do differently than humans is being sensationalized.”

    The concerns over Cruise cars came to a head this month. On October 17, the National Highway Traffic Safety Administration announced it was investigating Cruise’s nearly 600-vehicle fleet because of risks posed to other cars and pedestrians. A week later, in San Francisco, where driverless Cruise cars have shuttled passengers since 2021, the California Department of Motor Vehicles announced it was suspending the company’s driverless operations. The suspension followed a string of highly public malfunctions and accidents, but the immediate cause of the order, the DMV said, was that Cruise withheld footage from a recent incident in which one of its vehicles hit a pedestrian, dragging her 20 feet down the road.

    In an internal address on Slack to his employees about the suspension, Vogt stuck to his message: “Safety is at the core of everything we do here at Cruise.” Days later, the company said it would voluntarily pause fully driverless rides in Phoenix and Austin, meaning its fleet will be operating only with human supervision: a flesh-and-blood backup to the artificial intelligence.

    Even before its public relations crisis of recent weeks, though, Cruise has known internally about two pressing safety issues, according to previously unreported internal materials such as chat logs: Driverless Cruise cars struggle to detect large holes in the road and have so much trouble recognizing children in certain scenarios that they risk hitting them. Yet, until it came under fire this month, Cruise kept its fleet of driverless taxis active, maintaining its regular reassurances of superhuman safety.

    “This strikes me as deeply irresponsible at the management level to be authorizing and pursuing deployment or driverless testing, and to be publicly representing that the systems are reasonably safe,” said Bryant Walker Smith, a University of South Carolina law professor and engineer who studies automated driving.

    In a statement, a spokesperson for Cruise reiterated the company’s position that a future of autonomous cars will reduce collisions and road deaths. “Our driverless operations have always performed higher than a human benchmark, and we constantly evaluate and mitigate new risks to continuously improve,” said Erik Moser, Cruise’s director of communications. “We have the lowest risk tolerance for contact with children and treat them with the highest safety priority. No vehicle — human operated or autonomous — will have zero risk of collision.”

    “These are not self-driving cars. These are cars driven by their companies.”

    Though AV companies enjoy a reputation in Silicon Valley as bearers of a techno-optimist transit utopia — a world of intelligent cars that never drive drunk, tired, or distracted — the internal materials reviewed by The Intercept reveal an underlying tension between potentially life-and-death engineering problems and the effort to deliver the future as quickly as possible. With its parent company General Motors, which purchased Cruise in 2016 for $1.1 billion, hemorrhaging money on the venture, any setback for the company’s robo-safety regimen could threaten its business.

    Instead of seeing public accidents and internal concerns as yellow flags, Cruise sped ahead with its business plan. Before its permitting crisis in California, the company was, according to Bloomberg, exploring expansion to 11 new cities.

    “These are not self-driving cars,” said Smith. “These are cars driven by their companies.”

    Kyle Vogt — co-founder, president, chief executive officer, and chief technology officer of Cruise — holds an articulating radar as he speaks during a reveal event in San Francisco on Jan. 21, 2020.
    Photo: David Paul Morris/Bloomberg via Getty Images

    “May Not Exercise Additional Care Around Children”

    Several months ago, Vogt became choked up when talking about a 4-year-old girl who had recently been killed in San Francisco. A 71-year-old woman had taken what local residents described as a low-visibility right turn, striking a stroller and killing the child. “It barely made the news,” Vogt told the New York Times. “Sorry. I get emotional.” Vogt offered that self-driving cars would make for safer streets.

    Behind the scenes, meanwhile, Cruise was grappling with its own safety issues around hitting kids with cars. One of the problems addressed in the internal, previously unreported safety assessment materials is the failure of Cruise’s autonomous vehicles to, under certain conditions, effectively detect children. “Cruise AVs may not exercise additional care around children,” reads one internal safety assessment. The company’s robotic cars, it says, still “need the ability to distinguish children from adults so we can display additional caution around children.”

    In particular, the materials say, Cruise worried its vehicles might drive too fast at crosswalks or near a child who could move abruptly into the street. The materials also say Cruise lacks data around kid-centric scenarios, like children suddenly separating from their accompanying adult, falling down, riding bicycles, or wearing costumes.

    The materials note results from simulated tests in which a Cruise vehicle is in the vicinity of a small child. “Based on the simulation results, we can’t rule out that a fully autonomous vehicle might have struck the child,” reads one assessment. In another test drive, a Cruise vehicle successfully detected a toddler-sized dummy but still struck it with its side mirror at 28 miles per hour.

    The internal materials attribute the robot cars’ inability to reliably recognize children under certain conditions to inadequate software and testing. “We have low exposure to small VRUs” — Vulnerable Road Users, a reference to children — “so very few events to estimate risk from,” the materials say. Another section concedes Cruise vehicles’ “lack of a high-precision Small VRU classifier,” or machine learning software that would automatically detect child-shaped objects around the car and maneuver accordingly. The materials say Cruise, in an attempt to compensate for machine learning shortcomings, was relying on human workers behind the scenes to manually identify children encountered by AVs where its software couldn’t do so automatically.
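
    The fallback described there, automated classification with a human backstop, can be sketched in a few lines of code. What follows is a minimal, purely hypothetical illustration of that kind of pipeline; the class, the function names, and the threshold are invented for this example and are not drawn from Cruise’s software.

    ```python
    # Hypothetical sketch of a detect-then-escalate pipeline: when an
    # onboard classifier cannot confidently label a small "vulnerable
    # road user" (VRU), the case is escalated to a human operator.
    # Names and the 0.9 threshold are illustrative, not Cruise's code.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str         # e.g. "adult", "small_vru", "unknown"
        confidence: float  # model confidence between 0.0 and 1.0

    def needs_human_review(det: Detection, threshold: float = 0.9) -> bool:
        """Escalate unknown or low-confidence small-VRU detections."""
        if det.label == "unknown":
            return True
        return det.label == "small_vru" and det.confidence < threshold

    print(needs_human_review(Detection("small_vru", 0.42)))  # True: human assist
    print(needs_human_review(Detection("adult", 0.97)))      # False: handled onboard
    ```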

    In its statement, Cruise said, “It is inaccurate to say that our AVs were not detecting or exercising appropriate caution around pedestrian children” — a claim undermined by internal Cruise materials reviewed by The Intercept and the company’s statement itself. In its response to The Intercept’s request for comment, Cruise went on to concede that, this past summer during simulation testing, it discovered that its vehicles sometimes temporarily lost track of children on the side of the road. The statement said the problem was fixed and only encountered during testing, not on public streets, but Cruise did not say how long the issue lasted. Cruise did not specify what changes it had implemented to mitigate the risks.

    Despite Cruise’s claim that its cars are designed to identify children to treat them as special hazards, spokesperson Navideh Forghani said that the company’s driving software hadn’t failed to detect children but merely failed to classify them as children.

    Moser, the Cruise spokesperson, said the company’s cars treat children as a special category of pedestrians because they can behave unpredictably. “Before we deployed any driverless vehicles on the road, we conducted rigorous testing in a simulated and closed-course environment against available industry benchmarks,” he said. “These tests showed our vehicles exceed the human benchmark with regard to the critical collision avoidance scenarios involving children.”

    “Based on our latest assessment this summer,” Moser continued, “we determined from observed performance on-road, the risk of the potential collision with a child could occur once every 300 million miles at fleet driving, which we have since improved upon. There have been no on-road collisions with children.”
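
    Cruise’s own figure, a potential collision with a child once every 300 million miles of fleet driving, can be put in context with simple rate arithmetic. The sketch below assumes a hypothetical annual fleet mileage (not a reported number) to show how a per-mile risk translates into expected events per year and how it scales with deployment.

    ```python
    # Back-of-envelope rate arithmetic for a once-per-N-miles risk.
    # The 300-million-mile interval is Cruise's own figure, quoted above;
    # the annual fleet mileage is a hypothetical assumption.
    RISK_INTERVAL_MILES = 300_000_000         # one potential collision per this many miles
    assumed_fleet_miles_per_year = 5_000_000  # hypothetical, not a reported figure

    events_per_year = assumed_fleet_miles_per_year / RISK_INTERVAL_MILES
    print(f"Expected events per year: {events_per_year:.3f}")        # 0.017
    print(f"About one event every {1 / events_per_year:.0f} years")  # ~60 years

    # The same arithmetic shows why risk grows with deployment: a fleet
    # driving ten times the miles sees the same event ten times as often.
    print(f"At 10x the mileage: one event every {1 / (10 * events_per_year):.0f} years")
    ```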

    Do you have a tip to share about safety issues at Cruise? The Intercept welcomes whistleblowers. Use a personal device to contact Sam Biddle on Signal at +1 (978) 261-7389, by email at sam.biddle@theintercept.com, or by SecureDrop.

    Cruise has known its cars couldn’t detect holes, including large construction pits with workers inside, for well over a year, according to the safety materials reviewed by The Intercept. Internal Cruise assessments claim this flaw constituted a major risk to the company’s operations. Cruise determined that at its current, relatively minuscule fleet size, one of its AVs would drive into an unoccupied open pit roughly once a year, and into a construction pit with people inside it about every four years. Without fixes to the problems, those rates would presumably increase as more AVs were put on the streets.

    It appears this concern wasn’t hypothetical: Video footage from a Cruise vehicle reviewed by The Intercept shows one self-driving car, operating in an unnamed city, driving directly up to a construction pit with multiple workers inside. Though the site was surrounded by orange cones, the vehicle proceeded toward it before coming to an abrupt halt. It can’t be discerned from the footage whether the car entered the pit or stopped at its edge, but the vehicle appears to be only inches away from several workers, one of whom attempted to stop the car by waving a “SLOW” sign across its driverless windshield.

    “Enhancing our AV’s ability to detect potential hazards around construction zones has been an area of focus, and over the last several years we have conducted extensive human-supervised testing and simulations resulting in continued improvements,” Moser said. “These include enhanced cone detection, full avoidance of construction zones with digging or other complex operations, and immediate enablement of the AV’s Remote Assistance support/supervision by human observers.”

    Known Hazards

    Cruise’s undisclosed struggles with perceiving and navigating the outside world illustrate the perils of leaning heavily on machine learning to safely transport humans. “At Cruise, you can’t have a company without AI,” the company’s artificial intelligence chief told Insider in 2021. Cruise regularly touts its AI prowess in the tech media, describing it as central to preempting road hazards. “We take a machine-learning-first approach to prediction,” a Cruise engineer wrote in 2020.

    The fact that Cruise is even cataloguing and assessing its safety risks is a positive sign, said Phil Koopman, an engineering professor at Carnegie Mellon, emphasizing that the safety issues that worried Cruise internally have been known to the field of autonomous robotics for decades. Koopman, who has a long career working on AV safety, faulted the data-driven culture of machine learning that leads tech companies to contemplate hazards only after they’ve encountered them, rather than before. The fact that robots have difficulty detecting “negative obstacles” — AV jargon for a hole — is nothing new.

    “Safety is about the bad day, not the good day, and it only takes one bad day.”

    “They should have had that hazard on their hazard list from day one,” Koopman said. “If you were only training it how to handle things you’ve already seen, there’s an infinite supply of things that you won’t see until it happens to your car. And so machine learning is fundamentally poorly suited to safety for this reason.”

    The safety materials from Cruise raise an uncomfortable question for the company about whether robot cars should be on the road if it’s known they might drive into a hole or a child.

    “If you can’t see kids, it’s very hard for you to accept that not being high risk — no matter how infrequent you think it’s going to happen,” Koopman explained. “Because history shows us people almost always underestimate the risk of high severity because they’re too optimistic. Safety is about the bad day, not the good day, and it only takes one bad day.”

    Koopman said the answer rests largely on what steps, if any, Cruise has taken to mitigate that risk. According to one safety memo, Cruise began operating fewer driverless cars during daytime hours to avoid encountering children, a move it deemed effective at mitigating the overall risk without fixing the underlying technical problem. In August, Cruise announced the cuts to daytime ride operations in San Francisco but made no mention of its attempt to lower risk to local children. (“Risk mitigation measures incorporate more than AV behavior, and include operational measures like alternative routing and avoidance areas, daytime or nighttime deployment and fleet reductions among other solutions,” said Moser. “Materials viewed by The Intercept may not reflect the full scope of our evaluation and mitigation measures for a specific situation.”)

    A quick fix like shifting hours of operation presents an engineering paradox: How can the company be so sure it’s avoiding a thing it concedes it can’t always see? “You kind of can’t,” said Koopman, “and that may be a Catch-22, but they’re the ones who decided to deploy in San Francisco.”

    “The reason you remove safety drivers is for publicity and optics and investor confidence.”

    Precautions like reduced daytime operations will only lower the chance that a Cruise AV will have a dangerous encounter with a child, not eliminate that possibility. In a large American city, where it’s next to impossible to run a taxi business that will never need to drive anywhere a child might possibly appear, Koopman argues Cruise should have kept safety drivers in place while it knew this flaw persisted. “The reason you remove safety drivers is for publicity and optics and investor confidence,” he told The Intercept.

    Koopman also noted that there’s not always linear progress in fixing safety issues. In the course of trying to fine-tune its navigation, Cruise’s simulated tests showed its AV software missed children at an increased rate, despite attempts to fix the issues, according to materials reviewed by The Intercept.

    The two larger issues of kids and holes weren’t the only robot flaws potentially imperiling nearby humans. According to other internal materials, some vehicles in the company’s fleet suddenly began making unprotected left turns at intersections, something Cruise cars are supposed to be forbidden from attempting. The potentially dangerous maneuvers were chalked up to a botched software update.

    The Cruise Origin, a self-driving vehicle with no steering wheel or pedals, is displayed at Honda’s booth during the press day of the Japan Mobility Show in Tokyo on Oct. 25, 2023.
    Photo: Kazuhiro Nogi/AFP via Getty Images

    The Future of Road Safety?

    Part of the self-driving industry’s techno-libertarian promise to society — and a large part of how it justifies beta-testing its robots on public roads — is the claim that someday, eventually, streets dominated by robot drivers will be safer than their flesh-based predecessors.

    Cruise cited a RAND Corporation study to make its case. “It projected deploying AVs that are on average ten percent safer than the average human driver could prevent 600,000 fatalities in the United States over 35 years,” wrote Vice President for Safety Louise Zhang in a company blog post. “Based on our first million driverless miles of operation, it appears we are on track to far exceed this projected safety benefit.”

    During General Motors’ quarterly earnings call — the same day California suspended Cruise’s operating permit — CEO Mary Barra told financial analysts that Cruise “is safer than a human driver and is constantly improving and getting better.”

    In the 2022 “Cruise Safety Report,” the company outlines a deeply unflattering comparison of fallible human drivers to hyper-intelligent robot cars. The report pointed out that driver distraction was responsible for more than 3,000 traffic fatalities in 2020, whereas “Cruise AVs cannot be distracted.” Crucially, the report claims, a “Cruise AV only operates in conditions that it is designed to handle.”

    “It’s I think especially egregious to be making the argument that Cruise’s safety record is better than a human driver.”

    When it comes to hitting kids, however, internal materials indicate the company’s machines were struggling to match the safety performance of even an average human: Cruise’s goal at the time was merely for its robots to drive as safely around children as an average Uber driver — a goal the internal materials note it was failing to meet.

    “It’s I think especially egregious to be making the argument that Cruise’s safety record is better than a human driver,” said Smith, the University of South Carolina law professor. “It’s pretty striking that there’s a memo that says we could hit more kids than an average rideshare driver, and the apparent response of management is, keep going.”

    In a statement to The Intercept, Cruise confirmed its goal of performing better than ride-hail drivers. “Cruise always strives to go beyond existing safety benchmarks, continuing to raise our own internal standards while we collaborate with regulators to define industry standards,” said Moser. “Our safety approach combines a focus on better-than-human behavior in collision imminent situations, and expands to predictions and behaviors to proactively avoid scenarios with risk of collision.”

    Cruise and its competitors have worked hard to keep going despite safety concerns, public and nonpublic. Before the California Public Utilities Commission voted to allow Cruise to offer driverless rides in San Francisco, where Cruise is headquartered, the city’s public safety and traffic agencies lobbied for a slower, more cautious approach to AVs. The commission didn’t agree with the agencies’ worries. “While we do not yet have the data to judge AVs against the standard human drivers are setting, I do believe in the potential of this technology to increase safety on the roadway,” said commissioner John Reynolds, who previously worked as a lawyer for Cruise.

    Had there always been human safety drivers accompanying all robot rides — which California regulators let Cruise ditch in 2021 — Smith said there would be little cause for alarm. A human behind the wheel could, for example, intervene to quickly steer a Cruise AV out of the path of a child or construction crew that the robot failed to detect. Though the company has put them back in place for now, dispensing entirely with human backups is ultimately crucial to Cruise’s long-term business, part of its pitch to the public that steering wheels will become a relic. With the wheel still there and a human behind it, Cruise would struggle to tout its technology as groundbreaking.

    “We’re not in a world of testing with in-vehicle safety drivers, we’re in a world of testing through deployment without this level of backup and with a whole lot of public decisions and claims that are in pretty stark contrast to this,” Smith explained. “Any time that you’re faced with imposing a risk that is greater than would otherwise exist and you’re opting not to provide a human safety driver, that strikes me as pretty indefensible.”

    The post Cruise Knew Its Self-Driving Cars Had Problems Recognizing Children — and Kept Them on the Streets appeared first on The Intercept.

  • Lawn care equipment — leaf-blowers, lawnmowers, and the like — doesn’t top most people’s lists of climate priorities. But a new report documents how, in aggregate, lawn care is a major source of U.S. air pollution. 

    Using the latest available data from the Environmental Protection Agency’s 2020 National Emissions Inventory, the report found that the equipment released more than 68,000 tons of smog-forming nitrogen oxides, roughly on par with the pollution from 30 million cars. Lawn equipment also spewed 30 million tons of climate-warming carbon dioxide, more than the total emissions of the city of Los Angeles.

    “When it comes to these small engines in lawn and garden equipment, it’s really counterintuitive,” said Kirsten Schatz, the lead author of the report and a clean air advocate at Colorado PIRG, a nonprofit environmental organization. “This stuff is really disproportionately causing a lot of air pollution, health problems and disproportionately contributing to climate change.”

    Lawn equipment also contributed to a litany of other air toxics, such as formaldehyde and benzene, according to the report, which is titled “Lawn Care Goes Electric.” But perhaps the most concerning pollutant it releases is the fine particulate matter known as PM2.5. 

    PM2.5 is far smaller than the width of a human hair and can lead to health problems ranging from cancer, reproductive ailments, and mental health problems to premature death. The report found that gas-powered lawn equipment belched 21,800 tons of PM2.5 in 2020 — an amount equivalent to the pollution from 234 million typical cars over the course of a year.
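
    As a sanity check, the report’s car-equivalence claim implies a per-car PM2.5 rate that can be recovered with simple division. The short sketch below assumes the comparison is a straight ratio of totals (the report’s methodology is not detailed here) and uses US short tons; the per-car figure is derived, not reported.

    ```python
    # Recover the per-car PM2.5 rate implied by the report's comparison,
    # assuming it is a straight ratio. Totals are the figures cited above.
    SHORT_TON_KG = 907.185           # one US short ton in kilograms

    lawn_pm25_tons = 21_800          # PM2.5 from gas lawn equipment, 2020
    equivalent_cars = 234_000_000    # "typical cars over the course of a year"

    grams_per_car = lawn_pm25_tons * SHORT_TON_KG * 1000 / equivalent_cars
    print(f"Implied PM2.5 per typical car: ~{grams_per_car:.0f} g/year")  # ~85 g
    ```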

    That outsize impact comes because gas-powered lawn equipment runs on different types of engines than passenger cars do. These engines — which come in two- and four-stroke versions, a reference to differences in their combustion cycles — are smaller and generally less efficient, with two-stroke engines being particularly problematic because they burn a mix of lubricating oil and gasoline.

    “[This] really inefficient engine technology is pound for pound more polluting than the cars and trucks,” said Schatz. “Outdoor equipment generates a pretty shocking amount of pollution.”

    Emissions also vary widely by state. California and Florida ranked highest for carbon dioxide emissions from lawn equipment, while Florida and Texas topped the list of PM2.5 pollution. While one might expect the sheer amount of lawn care in California, the most populous U.S. state, to rank it higher on PM2.5 pollution, it only comes in 29th. Lower two-stroke engine use accounts for the gap between the state’s carbon and particulate emissions, according to Tony Dutzik, a senior policy analyst at Frontier Group and contributor to the report.

    He explained that nationally, two-stroke engines are responsible for 82 percent of PM2.5 from lawn equipment but in California it’s only 41 percent. Researchers are not exactly sure why the use difference is so stark, but one theory is that California’s history of regulating small engines is paying off. 

    “California has consistently led on [small engine] emission standards since the mid-1990s,” said Dutzik. That leadership is ongoing: A statewide ban on small off-road engines, including lawn equipment, is set to go into effect next year. Schatz argues that the rest of the country should follow California’s lead and promote electric alternatives that run on rechargeable batteries.

    “We have so many cleaner, quieter electric alternatives available now,” said Schatz. “Battery technology has come a long way.”

    Many states and municipalities offer rebates on battery-powered lawn equipment, and more people are making the switch. That’s true even in the commercial lawn-care sector, which is responsible for the bulk of emissions but is more difficult to electrify because companies often need more powerful machines, with longer runtimes, than residential users. 

    Kelly Giard started the Clean Air Lawn Care company in 2006, at a time when he said the technology for commercial work was “limited.” But that’s rapidly changing and it’s helped his company grow. His franchisees now serve roughly 10,000 customers across 16 states. 

    “At this point,” said Giard of the performance of his electric fleet, “it’s very comparable to gas.”

    This story was originally published by Grist with the headline Lawn equipment spews ‘shocking’ amount of air pollution, new data shows on Oct 30, 2023.


  • As Israel imposed an internet blackout in Gaza on Friday, social media users posting about the grim conditions have contended with erratic and often unexplained censorship of content related to Palestine on Instagram and Facebook.

    Since Israel launched retaliatory airstrikes in Gaza after the October 7 Hamas attack, Facebook and Instagram users have reported widespread deletions of their content, translations inserting the word “terrorist” into Palestinian Instagram profiles, and suppressed hashtags. Instagram comments containing the Palestinian flag emoji have also been hidden, according to 7amleh, a Palestinian digital rights group that formally collaborates with Meta, which owns Instagram and Facebook, on regional speech issues.

    Numerous users have reported to 7amleh that their comments were moved to the bottom of the comments section and require a click to display. Many of the remarks have something in common: “It often seemed to coincide with having a Palestinian flag in the comment,” 7amleh spokesperson Eric Sype told The Intercept.

    Instagram has flagged and hidden comments containing the emoji as “potentially offensive,” TechCrunch first reported last week. Meta has routinely attributed similar instances of alleged censorship to technical glitches. Meta confirmed to The Intercept that the company has been hiding comments that contain the Palestinian flag emoji in certain “offensive” contexts that violate the company’s rules.

    “The notion of finding a flag offensive is deeply distressing for Palestinians,” Mona Shtaya, a nonresident fellow at the Tahrir Institute for Middle East Policy who follows Meta’s policymaking on speech, told The Intercept.

    “The notion of finding a flag offensive is deeply distressing for Palestinians.”

    Asked about the contexts in which Meta hides the flag, Meta spokesperson Andy Stone pointed to the Dangerous Organizations and Individuals policy, which designates Hamas as a terrorist organization, and cited a section of the Community Standards rulebook that prohibits any content “praising, celebrating or mocking anyone’s death.”

    It remains unclear, however, precisely how Meta determines whether the use of the flag emoji is offensive enough to suppress. The Intercept reviewed several hidden comments containing the Palestinian flag emoji that had no reference to Hamas or any other banned group. The Palestinian flag itself has no formal association with Hamas and predates the militant group by decades.

    Some of the hidden comments reviewed by The Intercept only contained emojis and no other text. In one, a user commented on an Instagram video of a pro-Palestinian demonstration in Jordan with green, white, and black heart emojis corresponding to the colors of the Palestinian flag, along with emojis of the Moroccan and Palestinian flags. In another, a user posted just three Palestinian flag emojis. Another screenshot seen by The Intercept shows two hidden comments consisting only of the hashtags #Gaza, #gazaunderattack, #freepalestine, and #ceasefirenow.

    “Throughout our long history, we’ve endured moments where our right to display the Palestinian flag has been denied by Israeli authorities. Decades ago, Palestinian artists Nabil Anani and Suleiman Mansour ingeniously used a watermelon as a symbol of our flag,” Shtaya said. “When Meta engages in such practices, it echoes the oppressive measures imposed on Palestinians.”

    Faulty Content Moderation

    Instagram and Facebook users have taken to other social media platforms to report other instances of censorship. On X, formerly known as Twitter, one user posted that Facebook blocked a screenshot of a popular Palestinian Instagram account he tried to share with a friend via private message. The message was flagged as containing nonconsensual sexual images, and his account was suspended.

    On Bluesky, Facebook and Instagram users reported that attempts to share national security reporter Spencer Ackerman’s recent article criticizing President Joe Biden’s support of Israel were blocked and flagged as cybersecurity risks.

    On Friday, the news site Mondoweiss tweeted a screenshot of an Instagram video about Israeli arrests of Palestinians in the West Bank that was removed because it violated the dangerous organizations policy.

    Meta’s increasing reliance on automated, software-based content moderation may prevent people from having to sort through extremely disturbing and potentially traumatizing images. The technology, however, relies on opaque, unaccountable algorithms that introduce the potential to misfire, censoring content without explanation. The issue appears to extend to posts related to the Israel-Palestine conflict.

    An independent audit commissioned by Meta last year determined that the company’s moderation practices amounted to a violation of Palestinian users’ human rights. The audit also concluded that the Dangerous Organizations and Individuals policy — which speech advocates have criticized for its opacity and its overrepresentation of Middle Eastern, Muslim, and South Asian people — was “more likely to impact Palestinian and Arabic-speaking users, both based upon Meta’s interpretation of legal obligations, and in error.”

    Last week, the Wall Street Journal reported that Meta recently dialed down the level of confidence its automated systems require before suppressing “hostile speech” to 25 percent for the Palestinian market, a significant decrease from the standard threshold of 80 percent.
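
    What the Journal describes is a classifier confidence threshold, and the effect of lowering one is easy to illustrate. The sketch below is a generic, hypothetical moderation gate, not Meta’s actual system; the scores are invented and `should_suppress` is an assumed name.

    ```python
    # Hypothetical confidence-threshold gate: a post is suppressed when an
    # automated "hostile speech" score meets the threshold. Lowering the
    # threshold from 0.80 to 0.25 sweeps in far more borderline content.
    def should_suppress(score: float, threshold: float) -> bool:
        return score >= threshold

    post_scores = [0.30, 0.55, 0.82, 0.10]  # invented model scores

    for threshold in (0.80, 0.25):
        hit = sum(should_suppress(s, threshold) for s in post_scores)
        print(f"Threshold {threshold:.0%}: {hit} of {len(post_scores)} posts suppressed")
    ```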

    The audit also faulted Meta for implementing a software scanning tool to detect violent or racist incitement in Arabic, but not for posts in Hebrew. “Arabic classifiers are likely less accurate for Palestinian Arabic than other dialects … due to lack of linguistic and cultural competence,” the report found.

    “Since the beginning of this crisis, we have received hundreds of submissions documenting incitement to violence in Hebrew.”

    Despite Meta’s claim that the company developed a speech classifier for Hebrew in response to the audit, hostile speech and violent incitement in Hebrew are rampant on Instagram and Facebook, according to 7amleh.

    “Based on our monitoring and documentation, it seems to be very ineffective,” 7amleh executive director and co-founder Nadim Nashif said of the Hebrew classifier. “Since the beginning of this crisis, we have received hundreds of submissions documenting incitement to violence in Hebrew, that clearly violate Meta’s policies, but are still on the platforms.”

    An Instagram search for a Hebrew-language hashtag roughly meaning “erase Gaza” produced dozens of results at the time of publication. Meta could not be immediately reached for comment on the accuracy of its Hebrew speech classifier.

    The Wall Street Journal shed light on why hostile speech in Hebrew still appears on Instagram. “Earlier this month,” the paper reported, “the company internally acknowledged that it hadn’t been using its Hebrew hostile speech classifier on Instagram comments because it didn’t have enough data for the system to function adequately.”

    The post Instagram Hid a Comment. It Was Just Three Palestinian Flag Emojis. appeared first on The Intercept.

    This post was originally published on The Intercept.

  • Human rights campaigners say the Pegasus initiative wrongly criminalises people of colour, women and LGBTQ+ people

    Some of Britain’s biggest retailers, including Tesco, John Lewis and Sainsbury’s, have been urged to pull out of a new policing strategy amid warnings it risks wrongly criminalising people of colour, women and LGBTQ+ people.

    A coalition of 14 human rights groups has written to the main retailers – also including Marks & Spencer, the Co-op, Next, Boots and Primark – saying that their participation in a new government-backed scheme that relies heavily on facial recognition technology to combat shoplifting will “amplify existing inequalities in the criminal justice system”.

    Continue reading…

    This post was originally published on Human rights | The Guardian.

  • This week’s News on China.

    • More US sanctions against Chinese chip industry
    • China tightens graphite export controls
    • Industrial renaissance in northeast China
    • China approves GM soybeans and corn

    This post was originally published on Dissident Voice.

  • Illustration: Chen Xia/GT

    The US doesn’t seem to hesitate for a second to sacrifice its allies in order to contain China. The tightening chip export restrictions show that it is willing to do whatever it takes to hinder China’s technological development. But that doesn’t mean its allies will unconditionally follow such an extreme approach toward China, especially when their own interests are put at greater risk.

    The heated debate in the Netherlands over the new US export restrictions involving ASML is the latest example of this conflict of interests. Several Dutch lawmakers on Tuesday challenged the Netherlands’ trade minister over whether the US acted correctly in unilaterally imposing new rules regulating the export to China of a chipmaking machine made by ASML, Reuters reported on Wednesday.

    Dutch media reports disclosed that Dutch political parties CDA, D66 and Volt called for the cabinet to advocate more strongly for chip machine manufacturer ASML when dealing with the US.

    The US last week announced new rules giving Washington the right to restrict the export of ASML’s Twinscan NXT1930Di machine if it contains any US parts at all.

    As a result, ASML needs to apply for a license from Washington to sell these machines, even though they could be exported without issues under Dutch rules.

    After the Huawei Mate 60, a smartphone made by Chinese tech giant Huawei using advanced chips, alarmed the US, reports emerged in Western media that the mysterious chips were produced on ASML chipmaking machines not on the US export restriction list. The new US rules are thus clearly aimed at further tightening restrictions on technology exports to China to stem the potential for Chinese technology companies to break through the semiconductor bottleneck.

    But this has also once again put the Netherlands in an awkward position, as the Dutch government must now craft a reasonable response to the legitimate demand that it protect the interests of domestic businesses.

    ASML has been prohibited from selling its most sophisticated chipmaking machines to China since 2019. Under US pressure this year, the Netherlands has introduced stricter export controls on high-end chipmaking equipment.

    China has become a major buyer of ASML equipment. In the third quarter, China’s purchases accounted for 46 percent of ASML’s sales, partly because Chinese companies rushed to place orders ahead of looming export controls. But from 2024, when the Dutch restrictions are set to take full effect, ASML will see decreased sales to China. Under such circumstances, further strengthening export restrictions on more equipment is expected to seriously harm the company’s interests.

    Indeed, ASML’s lower-than-expected orders and its warning of flat sales next year indicate the importance of the Chinese market.

    Based in the Netherlands, ASML has become an important part of the Dutch economy and a symbol of the country’s technology prowess. So the Dutch government knows clearly what a sudden and sweeping cutoff of ASML’s supplies to China would mean for the country.

    So it announced the export restriction but said it would be implemented next year, with a view to taking care of its own company in a flexible way.

    But the Dutch approach is anticipated to face a severe test because the US apparently won’t allow any time or opportunity for Chinese companies to achieve a breakthrough, even at the expense of hurting its allies’ interests.

    Now anger and concern about the potential damage of the new US rules to the Dutch economy have triggered disputes and debate within the Netherlands. How to protect ASML’s legitimate interests under US pressure will be a test of the Netherlands’ economic independence.

    As for China, it not only needs to speed up its own research and development, but also needs to strengthen communication and cooperation with third parties, such as the Netherlands, which are constrained by US policies.

    By offering them more favorable and open market access, as well as promoting economic and trade cooperation based on international norms, China, together with other partners, can jointly tackle US coercion.

    This post was originally published on Dissident Voice.

  • Elon Musk speaks to members of the media following the Senate AI Insight Forum on Capitol Hill in Washington, D.C., on Sept. 13, 2023.
    Photo: Al Drago/Bloomberg via Getty Images

    After Elon Musk finalized his purchase of Twitter on October 27, 2022, I wrote an article in which I warned, “We need to take seriously the possibility that this will end up being one of the funniest things that’s ever happened.”

    Today, I have to issue an apology: I was wrong. Musk’s ownership of Twitter may well be — at least for people who manage to enjoy catastrophic human folly — the funniest thing that’s ever happened. 

    Let’s take a look back and see how I was so mistaken.

    Musk began his tenure as Twitter’s owner by posting a message to the company’s advertisers, in which he said, “Twitter aspires to be the most respected advertising platform in the world that strengthens your brand and grows your enterprise. … Twitter obviously cannot be a free-for-all hellscape, where anything can be said with no consequences! In addition to adhering to the laws of the land, our platform must be warm and welcoming to all.”

    Musk had to say this for obvious reasons: 90 percent of Twitter’s revenues came from ads, and corporate America gets nervous about its ads appearing in an environment that’s completely unpredictable. 

    I assumed that Musk would make a serious effort here. But this was based on my belief that, while he might be a deeply sincere ultra-right-wing crank, he surely had the level of self-control possessed by a 6-year-old. He does not. Big corporations now comprehend this and are understandably anxious about advertising with a company run by a man who, at any moment, may see user @JGoebbels1488 posting excerpts from “The Protocols of the Elders of Zion” and reply “concerning!”

    The consequences of this have been what you’d expect. The marketing consultancy Ebiquity represents 70 of the 100 companies that spend the most on ads, including Google and General Motors. Before Musk’s takeover, 31 of their big clients bought space on Twitter. Last month, just two did. Ebiquity’s chief strategy officer told Business Insider that “this is a drop we have not seen before for any major advertising platform.” 

    This is why Twitter users now largely see ads from micro-entrepreneurs who are, say, selling 1/100th scale papier-mâché models of the Eiffel Tower. The good news for Twitter is that such companies don’t worry much about brand safety. But the bad news is that their annual advertising budget is $25. Hence, Twitter’s advertising revenue in the U.S. is apparently down 60 percent year over year.

    I also never imagined it possible that Musk would rename Twitter — which had become an incredibly well-known brand — to “X” just because he’s been obsessed with the idea of a company with that name since he was a kid. It’s as though he bought Coca-Cola and changed its name to that of his beloved childhood pet tortoise Zoinks. The people who try to measure this kind of thing claim that the rebrand destroyed between $4 billion and $20 billion of Twitter’s value. (As you see in this article, I refuse to refer to Twitter as X just out of pure orneriness.)

    Another of my mistaken beliefs was that Musk understood the basic facts about Twitter. The numbers have gone down somewhat since Musk’s purchase of the company, but right now, about 500 million people log on to Twitter at least once a month. Perhaps 120 million check it out daily; these average users spend about 15 minutes on it. A tenth of these numbers — that is, about 12 million people — are heavy users, who account for 70 percent of all the time spent by anyone on the app.

    Musk is one of these heavy users. He adores Twitter, as do some other troubled souls. But this led him to wildly overestimate its popularity among normal humans. A company with 50 million fanatically devoted users could possibly survive a collapse in ad revenue by enticing them to pay a subscription fee. But Twitter does not have such users and now never will, given Musk’s relentless antagonizing of the largely progressive Twitterati. 

    So how much is Twitter worth today? When Musk became involved with the company in the first months of 2022, its market capitalization was about $28 billion. He then offered to pay $44 billion for it, which was so much more than the company was worth that its executives had to accept the offer or they would have been sued by their shareholders. Now that the company’s no longer publicly traded — and so its basic financials don’t have to be disclosed — it’s more difficult to know what’s going on. However, Fidelity Investments, a financial services company, holds a stake in Twitter and has marked down its valuation of this stake by about two-thirds since Musk bought it. This implies that Twitter is now worth around $15 billion.

    The significance of this is that Musk and his co-investors only put up $31 billion or so of the $44 billion purchase price. The remaining $13 billion was borrowed by Twitter at high interest rates from Wall Street. In other words, Musk and company are perilously close to having lost their entire $31 billion.
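
    For readers who want to check the arithmetic, here is a back-of-the-envelope sketch of the figures above. It follows the article’s own simplification of applying Fidelity’s two-thirds markdown to the full $44 billion purchase price; the numbers come from the article, and only the calculation is mine:

        # Back-of-the-envelope arithmetic using the figures cited above (in billions).
        purchase_price = 44                           # what Musk agreed to pay
        debt = 13                                     # borrowed by Twitter from Wall Street
        equity_invested = purchase_price - debt       # what Musk and co-investors put up
        implied_value = purchase_price * (1 - 2 / 3)  # after Fidelity's two-thirds markdown
        equity_remaining = implied_value - debt       # debt holders get paid before equity

        print(f"Equity put up:     ${equity_invested:.0f} billion")   # ~$31 billion
        print(f"Implied valuation: ${implied_value:.1f} billion")     # ~$14.7 billion
        print(f"Equity left over:  ${equity_remaining:.1f} billion")  # ~$1.7 billion

    With roughly $13 billion of debt sitting ahead of roughly $15 billion in total value, the equity is worth only a billion or two, which is what “perilously close to having lost their entire $31 billion” means in concrete terms.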

    In the end, I did not understand Musk’s determination to torment himself by forcing his entire existence into an extremely painful Procrustean bed. The results have been bleak and awful for Twitter and the world, but not just bleak and awful: They have also been hilarious. Anyone who likes to laugh about human vanity and hubris has to appreciate his commitment to the bit.

    The post One Year After Elon Musk Bought Twitter, His Hilarious Nightmare Continues appeared first on The Intercept.

    This post was originally published on The Intercept.

  • Step into the street in most any major city and an e-bike carrying a commuter, a messenger, or a delivery is sure to whiz by. The zippy machines, which use electric motors to achieve speeds of up to 28 mph, are increasingly ubiquitous, particularly in New York City.

    Their popularity has exploded, with annual sales growing roughly tenfold between 2017 and 2022, according to data provided by the industry group People for Bikes — an increase propelled in part by state and local rebates and other incentives. That growth has been accompanied by an increase in rider injuries and, in some cases, bikes literally exploding. 

    The federal Consumer Product Safety Commission, or CPSC, identified a fire hazard in almost half of the 59 e-bike investigations it conducted last year. It also estimated that there were nearly 25,000 emergency room visits for e-bike injuries more broadly in 2022. The two-wheelers have also been involved in a spate of high-profile fatalities in recent years, especially in the Big Apple.

    “[E-bikes] gained momentum unexpectedly,” said Matt Moore, People for Bikes’ general and policy counsel, citing the pandemic as a key accelerant. The result, he explained, was a boom of new bikes and new companies. “That rapid entry into the market really led to a huge growth in very low quality products.”

    Although mishaps seem to be growing more slowly than sales, they are prompting calls for manufacturers to have their products certified by the likes of UL Solutions. Last month, New York City became the first jurisdiction in the nation to require exactly that, some 10 months after the CPSC sent a letter to e-bike companies urging them to seek such certification.

    “I urge you to review your product line immediately and ensure that all micromobility devices that you manufacture, import, distribute, or sell in the United States comply with the relevant [standards],” the letter read. “Failure to do so puts U.S. consumers at risk of serious harm and may result in enforcement action.”

    The industry has responded. The relevant standard in the U.S. and Canada is UL 2849, which was established in 2020 and examines a bike’s electric drive system for fire risk, charging performance, and performance in extreme cold and other conditions. (Separate standards apply to the batteries and general mechanical components.) “We have seen inquiries about [UL 2849] testing and certification go up substantially,” a representative of UL Solutions told Grist. The 13 companies that have achieved certification this year represent nearly twice the number seen in 2022.

    The hope is that certification steers people toward safer bikes, and ultimately leads to fewer accidents. The move toward certification, however, won’t happen quickly or without bumps.

    “I think everyone in the industry is aligned that we need to do something,” said Heather Mason, president of the National Bicycle Dealers Association. “The disagreements come down to what.”

    One issue is that many bike manufacturers already certify their bikes to the European benchmark, EN 15194, because they sell far more of them there. And while UL 2849 was based on its European counterpart and the two standards may eventually harmonize, significant discrepancies remain. For example, the European standard has lower power limits for the motor than in the U.S.

    Tweaking a design to meet UL 2849 could add time and significant cost. Moore says developing and certifying a drive system can cost $200,000 or more and take years.

    “Anytime there’s a change in regulation, and you raise that floor, there are compliance costs,” Mason said. “It’s a cost that the industry is more than willing to bear.”

    UL Solutions wouldn’t say what certification costs, but said it is far less than Moore’s estimate and usually takes only a month or two once it has received all of the components. Once a company’s drive system is certified, it can theoretically use that hardware in multiple models. For a large manufacturer, the cost per bike can be relatively minimal.

    But there are also potholes on the road toward certification. 

    Small but reputable manufacturers, for instance, may find the cost prohibitive. And, more immediately, the requirement creates an inventory backlog of bikes that are already built to high standards, but not certified to UL 2849, and so can’t be sold in New York City, where the rule took effect September 16. A UL Solutions or other seal of approval also won’t address every safety concern.

    “The highest risk factors are crashes or falls on roads,” said Chris Cherry, a civil engineering professor and e-bike expert at the University of Tennessee. “A certified battery, or certified something else, isn’t going to solve those problems.”

    But certification will allow consumers to shop more discerningly. An e-bike that meets the UL standard could be identified with a mark cast into the product or a sticker (though there have been reports of counterfeits). Bike shops should be able to identify models that meet UL 2849 as well, as would a company’s website. 

    “The end goal,” said Moore, “is to remove unsafe products from the market.”

    This story was originally published by Grist with the headline As e-bikes grow in popularity, so too do calls for safety certification on Oct 27, 2023.

    This post was originally published on Grist.

    The very obscure, archaic technologies that make cellphone roaming possible also make it possible to track phone owners across the world, according to a new investigation by the University of Toronto’s Citizen Lab. The roaming tech is riddled with security oversights that make it a ripe target for anyone who wants to trace the locations of phone users.

    As the report explains, the flexibility that made cellphones so popular in the first place is largely to blame for their near-inescapable vulnerability to unwanted location tracking: When you move away from a cellular tower owned by one company to one owned by another, your connection is handed off seamlessly, preventing any interruption to your phone call or streaming video. To accomplish this handoff, the cellular networks involved need to relay messages about who — and, crucially, precisely where — you are.

    “Notably, the methods available to law enforcement and intelligence services are similar to those used by the unlawful actors and enable them to obtain individuals’ geolocation information.”

    While most of these network-hopping messages are sent to facilitate legitimate customer roaming, the very same system can be easily manipulated to trick a network into divulging your location to governments, fraudsters, or private sector snoops.
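
    As a purely conceptual sketch (real roaming signaling runs over protocols such as SS7 and Diameter, and every name below is invented for illustration), the core weakness is that the lookup that serves a legitimate handoff is the same lookup that serves a tracker:

        # Conceptual sketch only: real roaming signaling (SS7/Diameter) is far more
        # involved, and these types and records are invented for illustration.
        from dataclasses import dataclass

        @dataclass
        class SubscriberRecord:
            phone_number: str
            serving_network: str  # which carrier's towers the phone is on right now
            serving_cell: str     # roughly where the subscriber is

        # The home network keeps a registry so calls and texts can be routed to
        # wherever the subscriber has roamed.
        registry = {
            "+15555550100": SubscriberRecord("+15555550100", "CarrierB", "cell-4012"),
        }

        def locate(phone_number: str) -> SubscriberRecord:
            """The same query a partner network sends to route a call also
            reveals the subscriber's whereabouts to whoever can send it."""
            return registry[phone_number]

        # A roaming partner uses this to deliver a call...
        print(locate("+15555550100").serving_network)
        # ...and an actor with access to the signaling network uses the
        # identical query to track the phone's owner.
        print(locate("+15555550100").serving_cell)

    The problem is less any single message than the lack of meaningful authentication around such queries: the network largely trusts that whoever asks has a legitimate routing reason to know.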

    “Foreign intelligence and security services, as well as private intelligence firms, often attempt to obtain location information, as do domestic state actors such as law enforcement,” states the report from Citizen Lab, which researches the internet and tech from the Munk School of Global Affairs and Public Policy at the University of Toronto. “Notably, the methods available to law enforcement and intelligence services are similar to those used by the unlawful actors and enable them to obtain individuals’ geolocation information with high degrees of secrecy.”

    The sheer complexity required to allow phones to easily hop from one network to another creates a host of opportunities for intelligence snoops and hackers to poke around for weak spots, Citizen Lab says. Today, there are simply so many companies involved in the cellular ecosystem that opportunities abound for bad actors.

    Citizen Lab highlights the IP Exchange, or IPX, a network that helps cellular companies swap data about their customers. “The IPX is used by over 750 mobile networks spanning 195 countries around the world,” the report explains. “There are a variety of companies with connections to the IPX which may be willing to be explicitly complicit with, or turn a blind eye to, surveillance actors taking advantage of networking vulnerabilities and one-to-many interconnection points to facilitate geolocation tracking.”

    This network, however, is even more promiscuous than those numbers suggest, as telecom companies can privately sell and resell access to the IPX — “creating further opportunities for a surveillance actor to use an IPX connection while concealing its identity through a number of leases and subleases.” All of this, of course, remains invisible and inscrutable to the person holding the phone.

    Citizen Lab was able to document several efforts to exploit this system for surveillance purposes. In many cases, cellular roaming allows for turnkey spying across vast distances: In Vietnam, researchers identified a seven-month location surveillance campaign using the network of the state-owned GTel Mobile to track the movements of African cellular customers. “Given its ownership by the Ministry of Public Security the targeting was either undertaken with the Ministry’s awareness or permission, or was undertaken in spite of the telecommunications operator being owned by the state,” the report concludes.

    African telecoms seem to be a particular hotbed of roaming-based location tracking. Gary Miller, a mobile security researcher with Citizen Lab who co-authored the report, told The Intercept that, so far this year, he’d tracked over 11 million geolocation attacks originating from just two telecoms in Chad and the Democratic Republic of the Congo alone.

    In another case, Citizen Lab details a “likely state-sponsored activity intended to identify the mobility patterns of Saudi Arabia users who were traveling in the United States,” wherein Saudi phone owners were geolocated roughly every 11 minutes.

    The exploitation of the global cellular system is, indeed, truly global: Citizen Lab cites location surveillance efforts originating in India, Iceland, Sweden, Italy, and beyond.

    While the report notes a variety of factors, Citizen Lab places particular blame on the laissez-faire nature of global telecommunications, generally lax security standards, and a lack of legal and regulatory consequences.

    As governments throughout the West have been preoccupied for years with the purported surveillance threats of Chinese technologies, the rest of the world appears to have comparatively avoided scrutiny. “While a great deal of attention has been spent on whether or not to include Huawei networking equipment in telecommunications networks,” the report authors add, “comparatively little has been said about ensuring non-Chinese equipment is well secured and not used to facilitate surveillance activities.”

    The post Vulnerabilities in Cellphone Roaming Let Spies and Criminals Track You Across the Globe appeared first on The Intercept.

    This post was originally published on The Intercept.