Category: Technology

  • If online casino advertising promises untold treasures and seduces players, there must be a force that opposes it. In gambling, this principle of containment manifests itself in the form of different types of blocking. With their help, players looking to get away from the temptation of non-stop gambling can return to a life free from addictions.

    Why do people need gambling assistance?

    Players love to visit online casinos and bookmakers not only because of the opportunity to win but also because of the vivid emotions and adrenaline. For people with a certain mentality type, these flashes of bright emotions can give rise to a painful addiction, which can lead to the collapse of their social life.

    Specialists of various profiles, including psychologists and psychiatrists, have comprehensively studied the features of such an unhealthy attachment. To help overcome it, they have developed special schemes that are being implemented in the form of national programs and apps. These schemes are called self-exclusion programs because for them to start, the player must take the first step toward salvation.

    Popular solutions to help with gambling issues

Assistance in overcoming gambling addiction should be multi-faceted. The best programs are pervasive; they block the different routes through which temptation can push a player back into addiction. Here are the most popular of them, which have proven effective in many cases of treatment. Most are free and aim to help as many players in difficult situations as possible.

    GamStop

    If you are looking for a one-stop solution without wanting to risk losing money and spending too much time on gambling, GamStop is an excellent choice. Participation in the program blocks access to all gambling establishments registered in the UK. This lock is so strong that even if the player wants to return to the game, they simply cannot do it. The self-exclusion period indicated at the outset cannot be subject to further negotiation and will be strictly enforced.

However, numerous gambling sites not on GamStop remain accessible in the UK to GamStop-blocked users. These sites have internal self-exclusion tools of their own, but vulnerable gamblers should also consider using blockers and other resources to create a safer gambling environment for themselves.

    Gamban

Unlike the free solutions on this list, Gamban is a commercial project, though the monthly fee is only £2.49. The program works like BetBlocker, seeking out all kinds of gambling resources on the internet and blocking access to them. Its peculiarity is that, along with traditional gambling, Gamban also blocks trading platforms, crypto networks, NFT marketplaces, and many other gaming pastimes that can become addictive.

    Moses

    The Moses self-exclusion program applies to bookmakers operating in the UK. It is activated for free after a call from a bettor who decided to stop this addiction. The Moses team finds out the list of bookmakers from which you want to protect yourself and negotiates with them to block a particular player. Thus, you get rid of the need for personal contact with all operators on this issue.

    Sense

Self-exclusion programs would be incomplete if they did not cover land-based casinos, and Sense does an excellent job of this. After registering with the program, the player can no longer visit land-based gambling establishments registered in the UK. If they try, the casino is obliged to bar them from the gaming halls immediately. And even if, by some trick, they manage to get in and start playing, they still won’t be able to collect any winnings.

    BetBlocker

GamStop has one drawback, which installing the BetBlocker application fully compensates for: its powers apply only to gambling sites registered in the UK, so even after registering with GamStop you can still use sports betting sites via NonGamStopBetting.org and similar platforms aimed at UK players. BetBlocker is a great alternative to GamStop that goes further, restricting access to all such resources on the internet regardless of their licence. Its peculiarity is that the application cannot be removed from the computer until the self-exclusion period specified during activation has expired.

    Conclusion

Each of these schemes has its advantages and drawbacks, so none of them is a universal solution for all occasions. However, they become more effective when used in combination. If you find that, even after registering with one of them, loopholes for temptation remain, supplement it with another program or application. The more containment mechanisms a player struggling with addiction activates, the faster they will overcome it.

    Featured image supplied 

    By The Canary

    This post was originally published on Canary.

  • Micropayments have changed the iGaming world once and forever, providing users with the chance to invest a minimum and get maximum benefits. The trend began developing when operators understood that high prices for entertainment distract potential players.

    Undeniably, the concept of low deposits increased customer engagement in online casinos, so this trend is rapidly evolving. Multiple gaming platforms implement this option in their activities, boosting demand.

    The appeal of low deposits at online casinos

The benefits of low deposits are evident, as users can enjoy the best real money games without investing a fortune. This feature makes virtual casinos accessible to more risk-takers globally. Even a £10 top-up is enough to try multiple titles; for instance, some slots allow users to bet £0.10 per spin.

Recently, multiple $10 deposit operators listed at Casino Deps have become incredibly popular among players from New Zealand and worldwide. In addition, gamblers can take advantage of various bonuses to boost their initial deposit.

    At first glance, only gamblers benefit from such a strategy, but this is far from the case. Gambling operators get the chance to attract more attention and engage customers to their platforms. Micropayments are a significant part of their income, which is advantageous for all parties.

Minimum deposits would be impossible without innovative payment options that allow users to top up their balances in a few clicks. Let’s explore the casino banking methods that are evolving fastest right now.

    Credit cards

    This payment solution is the most common in the current era: it’s difficult to imagine one’s life without credit and debit cards. Over 1.2 billion people globally own them, so it’s not surprising this method is in demand in online casinos.

    Currently, Visa and Mastercard are the leading providers operating globally. They are available on most online gaming sites and allow players to proceed with minimum top-ups. The smallest replenishment accepted by Mastercard is £10, while Visa allows money transfers starting from £5.

    The broad accessibility of these payment options in virtual casinos encourages gamblers to use them. Of course, they can take advantage of additional rewards and immediately boost their minimum stake.

    Prepaid cards

    Many prefer prepaid vouchers due to the possibility of depositing securely and anonymously. A user should purchase a card in advance and simply enter its number on the casino site to replenish the gaming balance. Therefore, there’s no need to reveal banking details during transactions.

    Paysafecard is the most common solution. Many online casinos offer Paysafe as one of the payment methods to their members. It allows low transactions starting from £10. This payment method minimises the risk of overspending, as a user cannot deposit more than the voucher’s value. Quite a good way to develop self-control and stick to responsible gambling rules!

    Digital wallets

E-wallets have steadily conquered the digital world, becoming a universal payment solution across many industries, and gambling is no exception. Available in over 200 countries globally, PayPal is the undisputed market leader. Online casino members choose it for its security, speedy transactions, and the possibility of making deposits as low as £1.

    PayPal is not the only e-wallet popular among gamblers. Neteller and Skrill are the common alternatives, providing reliable services and safe money transfers. Both options are also widespread in online casinos. However, keep in mind that the minimum top-up you can make via Neteller is £10; Skrill allows you to deposit £5 or more.

    Cryptocurrencies for microtransactions

Blockchain has taken the world by storm, and the gambling industry could not be left aside. Many online casinos allow their members to make instant, anonymous cryptocurrency transactions. Players often choose Bitcoin or its alternatives, as they remove many of the gambling world’s usual limitations.

Of course, low deposits are a significant benefit – cryptocurrencies can be divided into tiny fractions, so it’s easy to transfer the equivalent of £1 and enjoy casino games on a minimum budget. Online gaming establishments often provide special bonuses for crypto users, and they are usually higher than those for fiat currencies.

    Pay by mobile

    Gambling on smartphones is becoming more and more popular daily, as players can enjoy the best online casinos wherever they are. The chance to deposit with one click and have fun immediately benefits users. Therefore, many operators have already adapted their platforms for mobile devices and implemented corresponding payment options.

Google Pay and Apple Pay are the most popular solutions people use daily, and players can use these same methods for low deposits at online casinos. The minimum acceptable top-up via these banking options is £5, but the conditions depend on the platform, so check everything beforehand.

    The final verdict

    Low-deposit casinos are not the latest development, as they have been around for several years already. However, the trend doesn’t seem to lose its relevance among risk lovers: they have the chance to gamble with minimum expenses and explore all the benefits of the chosen gaming website.

    This micro-payment model is revolutionising online markets by making it easier to engage in small-scale transactions. The conditions vary: sometimes, the minimum top-up is £10, while in other instances, it’s enough to invest £1 to start playing. Check all the details and pick the best payment option before engaging in the world of real money games – and remember to keep an eye on your bankroll!

    Featured image supplied 

    By The Canary

    This post was originally published on Canary.

  • Romain Godin prides himself on being able to fix a wide variety of consumer devices. But recently, what was once a basic repair job for his Portland, Oregon-based business Hyperion Computerworks — replacing a cracked iPhone screen — has become needlessly complicated.

In the past, Godin would have replaced the broken screen on the spot with a working screen harvested from a dead phone, saving the customer from having to buy a brand new screen from Apple. But if Godin performs this simple procedure on one of the latest iPhone models, features such as True Tone, which adjusts screen brightness and color based on the ambient light conditions, won’t work anymore. What’s more, the phone will issue a repeated message warning the user that Apple cannot determine if the screen is genuine.

    This is because many replacement iPhone parts, including screens, must now be “paired” with the phone using Apple’s proprietary software before they will function properly. And Apple’s “parts pairing” software will only recognize replacement parts purchased directly from Apple for that specific repair job — meaning an independent shop would have to order the part when the customer comes in, then potentially wait days for it to arrive. Meanwhile, the customer could go to an Apple Store and get their phone fixed with authorized parts on the spot. Godin says he’s losing business because of the hurdles a customer might face getting their iPhone fixed at his shop.

    “It’s a lot of telling customers in advance, ‘We might run into this or that complication,’” Godin told Grist. “And the majority of the time, they don’t have us do the repair.”

    A man repairs a broken mobile phone display in Hoi An, Vietnam. Pascal Deloche / Godong / Universal Images Group via Getty Images

    As more and more states enact laws protecting consumers’ right to fix the devices they already have, or get them fixed at the repair shop of their choosing, tech titans are clashing over parts pairing. New York, Minnesota, and California all passed digital right-to-repair bills over the last two years, and repair advocates say Apple and trade associations it belongs to worked behind the scenes to weaken or block language that interfered with parts pairing. But while advocates expect Apple will fight new laws targeting the practice, Google came out in support of Oregon’s bill earlier this month — partly because of its ban on parts pairing. With tech giants now staking out opposing positions on parts pairing, repair advocates see a potential opportunity to gain ground.

    Godin testified at a recent hearing of the Oregon Senate’s Energy and Environment Committee, which is considering a right-to-repair bill that explicitly bans parts pairing. If passed into law, the Oregon bill would represent the strongest legal rebuke yet of a practice that many independent repair shops see as an existential threat to their businesses, and that repair advocates say is fueling electronic waste and unnecessary resource consumption. 

    Restricting parts pairing is “the next step that likely needs to happen for right to repair to really gain a lot more traction,” said Colorado state representative Brianna Titone, who is sponsoring a digital right-to-repair bill in her state for the fourth time this year.

    The right-to-repair movement is premised on the idea that when consumers have access to the parts, tools, and information needed to repair the devices they own, they can use those devices for longer. This both saves consumers money and reduces the environmental downsides of technology, which include electronic waste and the greenhouse gas emissions and resource consumption associated with manufacturing new products.

    But parts pairing threatens to undermine the benefits of repair. Parts pairing refers to when companies use software to track their parts and control how they are used. Companies can assign spare parts serial numbers and program those parts to work properly only after their installation has been authenticated using the manufacturer’s specialized software. The practice isn’t exactly new: For years, agricultural equipment maker John Deere has restricted access to the software tools needed to install replacement parts on its tractors, while some automakers have engaged in “VIN burning,” or using software to limit the installation of replacement parts to a single vehicle. But in the consumer technology realm, parts pairing is becoming a greater concern for independent repairers, largely due to Apple’s growing use of the practice for its iPhone and laptops.

    Apple claims its pairing process using the company’s “System Configuration” software tool is important for calibrating parts after their installation to ensure the best performance. But this kind of technology also represents a powerful tool for restricting repair. A person whose iPhone 15 battery dies can still go to an independent shop to get it swapped for a new one, or do the repair on their own. But in order to receive battery health updates and avoid nagging warning messages that the phone contains an unrecognizable part, that shop or individual needs to purchase the new battery from Apple, then pair it using System Configuration. This can cause repairs to take more time, in addition to driving up costs (while ensuring that Apple gets a cut). 

    Customers look at iPhones at Apple’s flagship store in Shanghai, China. Costfoto / NurPhoto via Getty Images

Parts pairing is perhaps an even bigger problem for refurbishers, who use secondhand parts to keep costs down as they’re restoring old devices for resale. If a customer buys a refurbished laptop only to encounter warning messages about non-genuine parts, they may return it, said Marie Castelli, head of public affairs at the online refurbished device store Back Market.

    “We have clients opening [return] tickets because they are afraid of messages” alerting them that the device cannot recognize a part like the screen, Castelli told Grist. 

    There’s an environmental cost to parts pairing, as well. When customers are steered away from used parts, those parts become e-waste. Meanwhile, demand for new parts rises, resulting in additional resource consumption and emissions tied to manufacturing. Right-to-repair is “about trying to keep things out of landfills; trying to reduce our dependence on all these minerals that go into all these parts,” Titone said. “And that’s really the big problem with parts pairing.”

    The right-to-repair movement has made considerable progress forcing manufacturers to make spare parts, tools, and repair documentation available through a recently passed wave of state bills. But advocates say that Apple, and its trade association allies, have been largely successful in keeping bans on parts pairing out of the bills that have passed so far. 

    In New York, language that would have interfered with Apple’s internet-based system for pairing replacement parts was removed from the state’s right-to-repair bill after the trade association TechNet — which Apple is a member of — requested its deletion. In California, a coalition of lawmakers, repair advocates, and industry representatives negotiating the text of the state’s new right-to-repair law reached an agreement with Apple on the bill text prior to its approval by the Senate, according to David Stammerjohan, chief of staff for California state senator and bill sponsor Susan Eggman. After that agreement was reached, some coalition members wanted to add language that explicitly addressed parts pairing.

“We discussed it with Apple, which indicated they wanted to stick to the terms of the agreement,” Stammerjohan told Grist in an email. “Like all major bill negotiations, there were things industry would have liked in the bill that did not get in and things we would have liked in the bill that did not get into the final agreement.” 

    Minnesota’s new right-to-repair law does address parts pairing more explicitly. The law requires device manufacturers to make spare parts, tools, and documentation available to the public on fair and reasonable terms that cannot include any “requirement that a part be registered, paired with, or approved by the original equipment manufacturer or an authorized repair provider before the part is operational.” But while this may sound like a ban on parts pairing, Gay Gordon-Byrne, executive director of the repair advocacy organization Repair.org, worries that the language isn’t airtight enough to deter a manufacturer from locking down certain functions using software and then arguing its case before the state’s attorney general if anyone complains.

    “I don’t have to be a legal expert to tell you I would expect that,” Gordon-Byrne told Grist.

    Repair.org has written a template right-to-repair bill that includes provisions its members would like to see incorporated into actual legislation. For the 2024 legislative cycle, the organization updated its template bill language to more explicitly define, and prohibit, parts pairing. Some of that stronger language made its way into the bill now under consideration in Oregon, which states that manufacturers cannot use parts pairing to reduce the functionality or performance of a device or cause the device to display “unnecessary or misleading alerts or warnings about unidentified parts.”

    This language is a key reason Google, whose more recent Pixel smartphones and Chromebook laptops would be covered under the latest version of Oregon’s bill, chose to throw its support behind the legislation, according to Stephen Nickel, who heads up repair operations at the company. In an interview with Grist, Nickel said that Google liked the “comprehensive” nature of the bill.

    A Google Pixel phone is displayed during a product launch event in New York in 2023. Ed Jones / AFP via Getty Images

    “It considers all aspects of repairability, and particularly … the issue of parts pairing,” he said, adding that Google is prepared to comply with “any and all” of the bill’s requirements.

    Google’s newfound opposition to parts pairing, Titone said, suggests that the closed-door negotiations between lawmakers and device manufacturers are now spilling out into a “turf war” among tech companies vying for customers “in a very competitive market.”

    “I think there is a market strategy that Google is trying to exploit right now,” she said. “Having [Pixel smartphone] repairability be really high is an edge.” 

    It remains to be seen whether Apple or others will fight the new Oregon bill or future ones that seek to restrict parts pairing. But all signs suggest a battle is brewing. Titone, the Colorado representative, said that when she spoke with Apple several months back concerning the right-to-repair bill she’s introducing this year, the company asked her to allow parts pairing in the text.

    In response to Grist’s request for comment, an Apple spokesperson shared remarks that AppleCare VP Brian Nauman made at a White House repair event in October affirming the company’s commitment to a uniform federal repair law that “balances repairability with product integrity, usability, and physical safety.” The spokesperson declined to address criticisms of Apple’s parts pairing practices or share the company’s position on the anti-parts pairing provisions in the new Oregon bill or Minnesota’s recently passed law.

    It isn’t just U.S. lawmakers seeking to limit parts pairing. The European Union is currently negotiating a new set of EU-wide rules on the right to repair that would make it easier and more cost-effective for consumers to repair devices versus replacing them. In November, the European Parliament adopted a draft version of the right-to-repair rules that, among other things, prohibits companies from using software to impede independent repair. The Parliament’s version of the rules must now be reconciled with draft versions proposed by the European Commission and European Council, with negotiations over the final text set to conclude in early February. But Castelli of Back Market, who is following the negotiations closely, said that “hopes are high in terms of what’s achievable” when it comes to parts pairing.

    “Everyone wants to land on something and is open to negotiations,” she said.

    This story was originally published by Grist with the headline Apple uses software to control how phones get fixed. Lawmakers are pushing back. on Jan 30, 2024.

    This post was originally published on Grist.

  • Chicago organizers will soon learn whether the city’s progressive mayor will honor a key campaign promise. Brandon Johnson campaigned on a pledge to end the city’s contract for SoundThinking’s gunshot detection service known as ShotSpotter. Former Chicago Mayor Rahm Emanuel entered into a $33 million contract with the company in 2018, saying the technology would help reduce crime.


    This post was originally published on Latest – Truthout.

Problem gambling is one of the main issues impacting both casinos (through their reputation) and gamblers, and the public purse suffers too: the overall losses related to gambling harm in 2022 were estimated at between £1.05bn and £1.77bn.

The UK gambling sector has come a long way toward becoming one of the most well-regulated and user-friendly in the global casino industry, from the implementation of the Gambling Act of 2005 and the GamStop program to modern biometric scans for personalised controls.

    Modern apps and web-based solutions allow you to block gambling-related content, limit the time spent playing games, exclude specific casino sites from being displayed by search engines, and so on. All this contributes to maintaining a flexible, responsible gambling approach based on the players’ individual characteristics and needs.

    Online tools for enhanced self-exclusion

Modern self-exclusion solutions work on different levels. There are cross-platform programs you can access from any device, like GamStop, and dedicated applications that require installation on your PC or mobile device (for example, BetBlocker). There are also tools that let you filter specific content and use additional services like parental controls (NetNanny).

    GamStop – streamlining self-exclusion across platforms

GamStop is one of the most popular centralised tools helping people self-exclude from multiple UK-based casinos. All you need to do is register on the site; there is no software to download.

After joining the program, your information is added to the databases of 128 casinos within 24 hours. It is a simple and, at the same time, cohesive approach to self-exclusion from all casinos controlled by the UKGC for a 6-month, 1-year, or 5-year period. Players looking for a gaming experience outside of UKGC-licensed platforms are also welcome to register with non-GamStop gambling sites listed on NonGamStopCasinos.org.

These sites are just as safe and stick to responsible gambling rules through proprietary self-exclusion programs and the ability to set deposit limits and cool-off periods.

    BetBlocker – device-level self-exclusion

BetBlocker is a handy tool for restricting access to gambling platforms across various devices and sticking to responsible gambling habits. It is a free solution you can use without signing up; you only need to download and install the app on your PC, laptop, tablet, or smartphone. BetBlocker reliably blocks more than 77,700 gambling sites and over 1,500 applications.

The best thing is that the tool’s database is updated weekly. BetBlocker also offers a scheduling feature to set restriction periods in advance and a parental control option for added convenience.

    NetNanny – filtering and restriction tools

Thanks to NetNanny, you can control access to specific (gambling-related) content, including casinos and bookmakers. Like BetBlocker, NetNanny is a cross-platform solution that helps remove gambling triggers, keeps you focused on everyday life, and provides distance from gambling during the cool-off period.

    Along with blocking gambling sites and apps, the platform supports screen time management, social media protection, a report feature, and location tracking. Unlike the previous option, it comes with pricing plans.

    Biometric verification for secure player protection

    Modern technologies allow self-exclusion and blocking tools to create a secure environment for gamblers. First, this relates to incorporating biometric checks, including facial recognition and fingerprint scans.

    Facial recognition technology

    Using the face recognition option, online casinos can quickly identify the user by comparing the photo with documents uploaded during the ID verification. This method is implemented through access to the device’s camera. Facial Recognition technology has obvious security advantages, such as preventing unauthorised access to the account to steal sensitive data.

    In addition, facial recognition allows casinos to filter out people who are on self-exclusion programs and block their access to gambling content. Thus, it helps people control their behaviour and prevent gambling-related problems initially.

    Fingerprint scans and personalised controls

Fingerprint scans, like Face Recognition technology, contribute to safe gambling. Most modern smartphones, tablets, and laptops support a Touch ID feature that helps identify the device’s owner. Information about clients cannot be transferred to external servers or cloud storage, nor used by unauthorised third parties and casino operators.

Along with these security measures, fingerprint scans allow casinos to identify a person by comparing them with databases of those who have problems with gambling behaviour. It is also a convenient mechanism for gathering information about betting patterns (the number of log-ins to the site, transaction confirmations, etc.).

    Positive impacts and future technological trends

    Device-level self-exclusion and blocking tools are now among the main trends in ensuring responsible gambling principles. First of all, it relates to solutions such as BetBlocker and Gamban. They offer a very flexible adaptation of self-exclusion to the needs of each user.

At the same time, the gambling industry has seen a decrease in the popularity of stricter self-exclusion methods such as GamStop, which ban you from all gambling platforms simultaneously. Gamblers also actively explore the deposit and time limit features incorporated into casino sites.

As for blockchain technology, it effectively ensures users’ security and the transparency of casino operations. However, it is not yet widely implemented, so there is still a long way to go. The same applies to biometrics, which are effective for identifying problem players and supporting safe gambling in general.

Nevertheless, this technology is not available on all gambling sites. Artificial intelligence and machine learning are among the main trends that will grow in the future. Using AI, casinos can process large amounts of data, set restrictions for individual players, and analyse game patterns to predict future violations of responsible gambling rules.

    Conclusion

    Responsible gambling principles are among the most important aspects contributing to a pleasant online casino gaming experience without negative consequences. At the same time, this approach is also critically important for the casinos themselves, as compliance with these rules affects their reputation.

    Modern technologies allow players to conveniently and flexibly set limits (deposit and time), use cool-off periods, and filter gambling content on their devices. This is realised thanks to both built-in tools on casino sites and global solutions such as GamStop.

    Moreover, there are multiple solutions (paid and free) to control the time spent playing games and deposit sums used. Modern technologies make it possible to make safe gambling convenient and customised for a specific user.

    Featured image supplied

    By The Canary

    This post was originally published on Canary.

Criminal networks in Southeast Asia are growing their illegal operations through the innovative use of technology, according to the UN Office on Drugs and Crime (UNODC). The trafficking of people and drugs, as well as money laundering and fraudulent scam activities, are being boosted by the use of cryptocurrencies, the dark web, artificial intelligence, and social media platforms, as criminals continue to base their operations in parts of the region where the rule of law is weak or non-existent. Laura Gil asked UNODC Regional Representative for Southeast Asia and the Pacific, Jeremy Douglas, how technology is changing the landscape of transnational organized crime in the region.


    This content originally appeared on UN News – Global perspective Human stories and was authored by UNODC/ Laura Gil.

    This post was originally published on Radio Free.

  • Notorious Blackwater founder and perennial mercenary entrepreneur Erik Prince has a new business venture: a cellphone company whose marketing rests atop a pile of muddled and absurd claims of immunity to surveillance. On a recent episode of his podcast, Prince claimed that his special phone’s purported privacy safeguards could have prevented many of the casualties from Hamas’s October 7 attack.

    The inaugural episode of “Off Leash with Erik Prince,” the podcast he co-hosts with former Trump campaign adviser Mark Serrano, focused largely on the Hamas massacre and various intelligence failures of the Israeli military. Toward the end of the November 2 episode, following a brief advertisement for Prince’s new phone company, Unplugged, Serrano asked how Hamas had leveraged technology to plan the attack. “I think that when the post-op of this disaster is done, I think the main source of intel for Hamas was cellphone data,” Serrano claimed, without evidence. “How does Gaza access that data? I mean, Hamas?”

    Prince answered that location coordinates, commonly leaked from phones via advertising data, were surely crucial to Hamas’s ability to locate Israel Defense Forces installations and kibbutzim.

    Serrano, apparently sensing an opportunity to promote Prince’s $949 “double encrypted” phone, continued: “If all of Israel had Unplugged [phones] on October 7, what would that have done to Hamas’s strategy?”

    Prince didn’t miss a beat. “I will almost guarantee that whether it’s the people living on kibbutzes, but especially the 19, 20, 21-year-old kids that are serving in the IDF, if they’re not on duty, they’re on their phones and on social media, and that cellphone data was tracked and collected and used for targeting by Hamas,” he said. “This phone, Unplugged, prevents that from happening.”

    Unplugged’s product documentation is light on details, privacy researcher Zach Edwards told The Intercept, and the features the company touts can be replicated on most phones just by tinkering with settings. Both Android devices and iPhones, Edwards pointed out, allow users to deactivate their advertising IDs. It’s unclear what makes Unplugged any different, let alone a tool that could have thwarted the Hamas attack. “Folks should wait for proof before accepting those claims,” Edwards said.

    “Simply Not True”

    This isn’t the first time Prince has used an act of violence as a business opportunity. Following the 1999 mass shooting at Columbine High School, Prince constructed a mock school building called R U Ready High School where police could pay to train for future shootings. In 2017, he pitched the Trump White House on a plan, modeled after the British East India Company, to privatize the American war in Afghanistan with mercenaries.

    With Unplugged, Prince’s main claim seems to be that, unlike most phones, his company’s devices don’t have advertising IDs: unique codes generated by every Android and iOS phone that marketers use to surveil consumer habits, including location. Unplugged claims its phones use a customized version of Android that strips out these IDs. But the notion that Prince’s phone, which is still unavailable for purchase more than a year after it was announced, could have saved lives on October 7 was contradicted by mobile phone security experts, who told The Intercept that just about every aspect of the claim is false, speculative, or too vague to verify.

    “That is simply not true and that is not how mobile geolocation works,” said Allan Liska, an intelligence analyst at the cybersecurity firm Recorded Future. While Prince is correct that the absence of an advertising ID would diminish to some degree the amount of personal data leaked by a phone, it by no means cuts it off entirely. So long as a device is connected to cellular towers — generally considered a key feature for cellphones — it’s susceptible to tracking. “Mobile geolocation is based on tower data triangulation and there is no level of operating system security that that can bypass that,” Liska added.
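To make Liska's point concrete: once a handset's distances to three towers are known, its position follows from basic geometry, regardless of what operating system the phone runs. Below is a simplified, noise-free 2D sketch of trilateration (illustrative only; real networks estimate distances from timing and signal strength across many towers, with substantial error):

```python
import math

def trilaterate(p1, d1, p2, d2, p3, d3):
    """Estimate a 2D position from exact distances to three known
    tower locations. Subtracting the circle equation around tower 1
    from the other two cancels the quadratic terms, leaving a 2x2
    linear system solvable by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# A handset 5, sqrt(65) and sqrt(45) units from three towers:
x, y = trilaterate((0, 0), 5.0, (10, 0), math.sqrt(65), (0, 10), math.sqrt(45))
print(round(x, 6), round(y, 6))  # 3.0 4.0
```

Nothing in this computation involves the phone's software at all: the inputs come from the network side, which is exactly why an OS-level privacy feature cannot defeat it.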

    Unplugged CEO Ryan Paterson told The Intercept that Prince’s statement about how his phone could have minimized Israeli deaths on October 7 “has much to do with the amount of data that the majority of cell phones in the world today create about the users of the device, their locations, patterns of life and behaviors,” citing a 2022 Electronic Frontier Foundation report on how mobile advertising data fuels police surveillance. Indeed, smartphone advertising has created an immeasurably vast global ecosystem of intimate personal data, unregulated and easily bought and sold, that can facilitate state surveillance without judicial oversight.

    “Unplugged’s UP Phone has an operating system that does not contain a [mobile advertising ID] that can be passed [on], does not have any Google Mobile Services, and has a built-in firewall that blocks applications from sending any tracker information from the device, and delivering advertisements to the phone,” Paterson added in an email. “Taking these data sources away from the Hamas planners could have seriously disrupted and limited their operations effectiveness.”

    Unplugged did not respond to a request for more detailed information about its privacy and security measures.

    Neither Erik Prince nor an attorney who represents him responded to questions from The Intercept.

    Articles of Faith

    “While it’s true that anyone could theoretically find aggregate data on populated areas and possibly more specific data on an individual using mobile advertiser identifiers, it is completely unclear if Hamas used this and the ‘could have’ in the last sentence is doing a lot of work,” William Budington, a security researcher at the Electronic Frontier Foundation who regularly scrutinizes Android systems, wrote in an email to The Intercept. “If Hamas was getting access to location information through cell tower triangulation methods (say their targets were connecting to cell towers within Gaza that they had access to), then [Prince’s] phone would be as vulnerable to this as any iOS or Android device.”

    The idea of nixing advertising IDs is by no means a privacy silver bullet. “When he is talking about advertising IDs, that is separate from location data,” Budington noted. If a phone’s user gives an app permission to access that phone’s location, there’s little to nothing Prince can do to keep that data private. “Do some apps get location data as well as an advertising ID? Yes. But his claim that Hamas had access to this information, and it was pervasively used in the attack to establish patterns of movement, is far-fetched and extremely speculative,” Budington wrote.

    Liska, who previously worked in information security within the U.S. intelligence community, agreed. “I also find the claim that Hamas was purchasing advertising/location data to be a bit preposterous as well,” he said. “Not that they couldn’t do it (I am not familiar with Israeli privacy laws) but that they would have enough intelligence to know who to target with the purchase.”

    Hamas’s assault displayed a stunningly sophisticated understanding of the Israeli state security apparatus, but there’s been no evidence that this included the use of commercially obtained mobile phone data.

    While it’s possible that Unplugged phones block all apps from requesting location tracking permission in the first place, this would break any location-based features in the phone, rendering something as basic as a mapping app useless. But even this hypothetical is impossible to verify, because the phone has yet to leave Prince’s imagination and reach any actual customers, and its customized version of Android, dubbed “LibertOS,” has never been examined by any third parties.

    While Unplugged has released a one-page security audit, conducted by PwC Digital Technology, it applied only to the company’s website and an app it offers, not the phone, making its security and privacy claims largely articles of faith.

    The post Erik Prince Claims His Vaporware Super-Phone Could Have Thwarted Hamas Attack appeared first on The Intercept.

    This post was originally published on The Intercept.

  • In an era dominated by visual content, accessibility has become a pivotal aspect of online communication. Social media platforms sit at the centre of our digital interactions, and ensuring that content is easily comprehensible to diverse audiences is crucial. CapCut, with its innovative text-to-speech (TTS) feature, emerges as a game-changer in making social media more inclusive and engaging for everyone.

    The power of CapCut’s text-to-speech

    CapCut’s text-to-speech generator offers a unique solution for transforming written content into spoken words. This feature is not merely a convenience; it’s a stepping stone towards a more inclusive digital landscape. By enabling users to convert text into voice, CapCut empowers content creators to reach a broader audience, including those with visual impairments or learning disabilities.

    • Seamless integration into the content

    The process of converting text to speech on CapCut is seamlessly integrated into the content creation workflow. Users can effortlessly upload their media files from various sources such as the computer, Google Drive, Dropbox, or even Myspace. This flexibility ensures that creators can bring diverse content into the tool without any hassle.

    • Customisation for a personalised experience

    One of the standout features of CapCut’s TTS is its customisation options. Creators can select the language, voice gender, and apply various effects to tailor the audio experience according to their audience’s preferences. Moreover, the inclusion of noise reduction, volume adjustment, fade in and fade-out options adds a layer of sophistication, enriching the audio content for a more immersive experience.

    A three-step guide to inclusive content creation

    • Step 1: simple upload process

    The journey towards inclusive content creation begins with the simple act of uploading your media files. Whether it’s a video, audio clip, or a compilation of visuals, CapCut supports various file formats and sources. The feature’s user-friendly interface ensures a smooth experience for creators, allowing them to focus on the substance of their content.

    • Step 2: transforming text to speech

    Once the media files are uploaded, creators can delve into the heart of the inclusive content creation process – transforming text to speech. CapCut offers a selection of text templates, enabling users to choose a style that complements their content. Inputting text is straightforward, and users have the flexibility to apply the text-to-speech feature to an entire video or specific clips within it. The ability to incorporate voice effects, adjust volume, and reduce noise adds a layer of creativity, making the content not only inclusive but also engaging.

    • Step 3: export and share with the world

    The final step involves setting parameters for export. Creators can choose file names, resolutions, formats, and quality according to their preferences. CapCut ensures that the inclusive content is not confined to its tool by allowing users to download their videos or share them directly on popular social media channels like TikTok. This seamless integration into broader social media platforms amplifies the reach of inclusive content, fostering a more diverse and connected digital community.
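CapCut's internal API is not public, so purely as a hypothetical illustration of the three-step workflow described above, the pipeline might be modelled like this (every function name here is invented, not part of any CapCut interface):

```python
# Hypothetical sketch of the upload -> text-to-speech -> export
# workflow. None of these names come from CapCut's actual tooling.

def upload(path):
    """Step 1: bring a media file into the (imaginary) project."""
    return {"source": path, "audio": None}

def text_to_speech(project, text, voice="en-GB-female", volume=1.0):
    """Step 2: attach a spoken-word track generated from text."""
    project["audio"] = {"text": text, "voice": voice, "volume": volume}
    return project

def export(project, fmt="mp4", resolution="1080p"):
    """Step 3: render with the chosen export parameters."""
    return f"{project['source']}.{resolution}.{fmt}"

video = upload("tutorial.mov")
video = text_to_speech(video, "Welcome to the tutorial")
print(export(video))  # tutorial.mov.1080p.mp4
```

The point of the sketch is only that each step is parameterised independently: the voice settings in step 2 and the export settings in step 3 can be changed without redoing the upload.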

    Elevating social media content creation with free screen recorder

    Additionally, CapCut's free screen recorder emerges as a game-changer, offering a myriad of features that empower creators to craft compelling content seamlessly. Let's delve into how this tool can revolutionise the social media content creation process.

    • High-quality visuals for enhanced engagement

    Capturing the audience’s attention begins with stunning visuals. CapCut’s screen recorder ensures top-notch recording quality, providing crisp resolution, smooth frame rates, and vibrant colors. This feature elevates the overall quality of social media content, making it visually appealing and professional. Whether showcasing tutorials, gameplay, or creative design processes, the recorder guarantees that every detail is captured with precision.

    • Unparalleled flexibility in content selection

    Social media content comes in diverse formats, and CapCut’s screen recorder offers unparalleled flexibility in content selection. Creators can choose to capture the entire screen, specific windows, or selected regions. This versatility allows for a tailored approach to content creation, ensuring that the final product meets the specific requirements of different platforms. The ability to integrate a webcam further enhances the personal touch, facilitating face-to-face interactions within the content.

    • Enhancing creativity with multi-layered recording

    Social media thrives on creativity, and CapCut’s screen recorder provides creators with multi-layered recording options. Whether combining screen recordings with webcam footage, integrating audio overlays, or incorporating annotations, the tool allows for a dynamic and creative approach to content creation. This flexibility empowers creators to experiment with diverse formats, adding a unique touch to their social media presence.

    • Streamlined collaborative content creation

    In the realm of social media, collaboration is increasingly common. CapCut’s screen recorder facilitates collaborative content creation by allowing multiple contributors to record their screens simultaneously. This is invaluable for remote teams, influencers working with editors, or joint content creation efforts. The recorder’s ability to capture audio from different sources enhances the collaborative process, ensuring a cohesive and well-coordinated final product.

    Conclusion

    In a world where digital communication reigns supreme, tools like CapCut’s text-to-speech are catalysts for change. By seamlessly integrating inclusivity into the content creation process, CapCut ensures that the power of social media is harnessed by all. Similarly, CapCut’s free online screen recorder emerges as a versatile and powerful tool for social media content creators. From ensuring top-tier visual quality to offering unmatched flexibility and collaboration features, this tool has the potential to transform the way content is conceptualised and presented in the ever-evolving landscape of social media.

    Featured image and additional images supplied

    By The Canary

    This post was originally published on Canary.

  • On 22 January 2024, Amnesty International published an interesting piece by Alex, a 31-year-old Romanian activist working at the intersection of human rights, technology and public policy.

    Seeking to use her experience and knowledge of tech for political change, Alex applied and was accepted onto the Digital Forensics Fellowship led by the Security Lab at Amnesty Tech. The Digital Forensics Fellowship (DFF) is an opportunity for human rights defenders (HRDs) working at the nexus of human rights and technology to expand their learning.

    Here, Alex shares her activism journey and insight into how like-minded human rights defenders can join the fight against spyware:

    In the summer of 2022, I watched a recording of Claudio Guarnieri, former Head of the Amnesty Tech Security Lab, presenting about Security Without Borders at the 2016 Chaos Communication Congress. After following the investigations of the Pegasus Project and other projects centring on spyware being used on journalists and human rights defenders, his call to action at the end — “Find a cause and assist others” — resonated with me long after I watched the talk.

    Becoming a tech activist

    A few days later, Amnesty Tech announced the launch of the Digital Forensics Fellowship (DFF). It was serendipity, and I didn’t question it. At that point, I had already pushed myself to seek out a more political, more involved way to share my knowledge. Not tech for the sake of tech, but tech activism to ensure political change.

    Alex is a 31-year-old Romanian activist, working at the intersection of human rights, technology and public policy.

    I followed an atypical path for a technologist. Prior to university, I dreamt of being a published fiction author, only to switch to studying industrial automation in college. I spent five years as a developer in the IT industry and two as Chief Technology Officer for an NGO, where I finally found myself using my tech knowledge to support journalists and activists.

    My approach to technology, like my approach to art, is informed by political struggles, as well as the questioning of how one can lead a good life. My advocacy for digital rights follows this thread. For me, technology is merely one of many tools at the disposal of humanity, and it should never be a barrier to decent living, nor an oppressive tool for anyone.

    Technology is merely one of many tools at the disposal of humanity. It should never be a barrier to decent living, nor an oppressive tool for anyone.

    The opportunity offered by the DFF matched my interests and the direction I wanted to take my activism. During the year-long training programme from 2022-2023, the things I learned turned out to be valuable for my advocacy work.

    In 2022, the Child Sexual Abuse Regulation was proposed in the EU. I focused on conducting advocacy to make it as clear as possible that losing encrypted communication would make life decidedly worse for everyone in the EU. We ran a campaign to raise awareness of the importance of end-to-end encryption for journalists, activists and people in general. Our communication unfolded under the banner of “you don’t realize how precious encryption is until you’ve lost it”. Apti.ro, the Romanian non-profit organisation that I work with, also participated in the EU-wide campaign, as part of the EDRi coalition. To add fuel to the fire, spyware scandals erupted across the EU. My home country, Romania, borders countries where spyware has been proven to have been used to invade the personal lives of journalists, political opponents of the government and human rights defenders.

    The meaning of being a Fellow

    The Security Lab provided us with theoretical and practical sessions on digital forensics, while the cohort was a safe, vibrant space to discuss challenges we were facing. We debugged together and discussed awful surveillance technology at length, contributing our own local perspective.

    The importance of building cross-border networks of cooperation and solidarity became clear to me during the DFF. I heard stories of struggles from people involved in large and small organizations alike. I am convinced our struggles are intertwined, and we should join forces whenever possible.

    Now when I’m working with other activists, I try not to talk of “forensics”. Instead, I talk about keeping ourselves safe, and our conversations private. Often, discussions we have as activists are about caring for a particular part of our lives – our safety when protesting, our confidentiality when organizing, our privacy when convening online. Our devices and data are part of this process, as is our physical body. At the end of the day, digital forensics are just another form of caring for ourselves.

    I try to shape discussions about people’s devices similarly to how doctors discuss the symptoms of an illness. The person whose device is at the centre of the discussion is the best judge of the symptoms, and it’s important to never minimize their apprehension. It’s also important to go through the steps of the forensics in a way that allows them to understand what is happening and what the purpose of the procedure is.

    I never use a one-size-fits-all approach because the situation of the person who owns a device informs the ways it might be targeted or infected.

    The human approach to technology

    My work is human-centred and technology-focused and requires care and concentration to achieve meaningful results. For activists interested in working on digital forensics, start by digging deep into the threats you see in your local context. If numerous phishing campaigns are unfolding, dig into network forensics and map out the owners of the domains and the infrastructure.
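As a toy illustration of that kind of network-forensics triage (not taken from any real toolkit; Amnesty's own Mobile Verification Toolkit is far more thorough), domains observed in a device's traffic can be checked against a list of known-bad indicators, including their subdomains:

```python
# Hypothetical indicator list -- real investigations use published
# IOC feeds from security researchers, not hard-coded examples.
IOCS = {"badcdn.example", "spy-infra.example"}

def matches_ioc(domain, iocs=IOCS):
    """True if the domain, or any parent domain of it, is a known
    indicator of compromise (so 'cdn.badcdn.example' matches the
    indicator 'badcdn.example')."""
    parts = domain.lower().rstrip(".").split(".")
    return any(".".join(parts[i:]) in iocs for i in range(len(parts)))

observed = ["cdn.badcdn.example", "news.example.org"]
print([d for d in observed if matches_ioc(d)])  # ['cdn.badcdn.example']
```

The suffix-matching matters in practice: attack infrastructure often hides behind rotating subdomains of a single registered domain.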

    Secondly, get to know the person you are working with. If they are interested in secure communications, help them gain a better understanding of mobile network-based attacks, as well as suggesting instant messaging apps that preserve the privacy and the security of their users. In time, they will be able to spot “empty words” used to market messaging apps that are not end-to-end encrypted.

    Finally, to stay true to the part of me that loves a well-told story, read not only reports of ongoing spyware campaigns, but narrative explorations from people involved. “Pegasus: The Story of the World’s Most Dangerous Spyware” by Laurent Richard and Sandrine Rigaud is a good example that documents both the human and the technical aspects. The Shoot the Messenger podcast, by PRX and Exile Content Studio, is also great as it focuses on Pegasus, starting from the brutal murder of Jamal Khashoggi to the recent infection of the device of journalist and founder of Meduza, Galina Timchenko.

    We must continue to do this research, however difficult it may be, and to tell the stories of those impacted by these invasive espionage tactics. Without this work we wouldn’t be making the political progress we’ve seen to stem the development and use of this atrocious technology.

    https://www.amnesty.org/en/search/Alex/

    This post was originally published on Hans Thoolen on Human Rights Defenders and their awards.

  • A new parliamentary inquiry is calling for policy ideas to support Western Australia’s innovation ecosystem following the state’s post-pandemic “brain gain”. The inquiry, which kicked off in November, is focused on the state of the current innovation ecosystem and the government’s role in supporting entrepreneurs, startups, and SMEs to grow and ultimately stay in Western…

    The post ‘Brain gain’ sparks innovation rethink in WA appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • The concept of micropayments is not new; it has existed for over five decades. However, it evolved significantly with the rapid development of the internet, as high demand for digital transactions increased the need for very small money transfers. The trend was first implemented in the virtual world, allowing users to purchase online content and pay for low-cost services.

    iGaming companies were among the early adopters of this concept, attracting growing attention from customers. The innovation made the industry accessible to more users, and online gaming sites significantly increased customer engagement as a result. Various low-deposit casinos, such as those outlined at Min Deposit Casino UK, are now on the rise among UK players and worldwide, and it's impossible to imagine the sector without the option to top up a balance with £1 and start playing games and slots. But it wasn't always like that.

    The emergence of micropayments

    The term "micropayments" was coined by Ted Nelson, and the model took off with the growth of the web in the 1990s. It denotes a small e-commerce money transfer, allowing users to receive a product or service at a considerably low price. The technology initially aimed to popularise digital content and services, making them affordable to more people. Micropayments therefore quickly gained popularity; the first industries to accept them included gaming, media, and charity. The smallest deposits are genuinely tiny: £1 or even less is enough for players to engage in gambling.

    The worldwide spread of minimal transactions encouraged more payment providers to adapt their services to customers' needs, though the infrastructure was nowhere near as well developed as it is now. Millicent became the first protocol to support very small transfers, and other pre-2000s options included PayWord and MicroMint. These mechanisms were later discontinued, making way for newer financial providers and their innovative solutions.
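PayWord is worth a closer look because its core mechanism is elegantly simple: the payer commits to the end of a hash chain, then spends one pre-image per unit of payment, and the vendor verifies each token with a single cheap hash. A simplified sketch of the hash-chain idea (omitting the signed commitments and broker of the real scheme):

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int):
    """Build a PayWord-style hash chain. The payer picks a secret
    seed, hashes it n times, and commits to the final value w_0;
    revealing each earlier pre-image spends one unit."""
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    chain.reverse()          # chain[0] is the public commitment w_0
    return chain

def verify_payment(prev_token: bytes, new_token: bytes) -> bool:
    # A token is valid iff hashing it yields the last accepted token.
    return h(new_token) == prev_token

chain = make_chain(b"payer-secret", 3)
print(verify_payment(chain[0], chain[1]))  # True
print(verify_payment(chain[0], chain[2]))  # False (skipped a step)
```

The appeal for micropayments is that verification costs one hash rather than one digital signature per penny, which mattered greatly on 1990s hardware.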

    The role of micropayments in iGaming

    The online gambling industry has developed fast since its inception, and microtransactions have played a huge role in that process. Casinos had long been entertainment for the wealthy; the concept of microtransactions changed the landscape, allowing any user to access their chosen online casino with a minimal top-up. Players can now enjoy a broad range of £1 deposit gaming platforms and the many advantages they offer.

    Low-deposit bonuses are among the most lucrative deals players can receive in virtual casinos. They can immediately boost the initial stake or provide free spins on featured slots. Such conditions motivate users to join gambling platforms: they can get amazing prizes for only £1.

    On the other hand, this model still has drawbacks. As the low minimum deposit bonuses listed on WithCasinoBonus suggest, the apparent risk is minimal, so users tend to invest seemingly insignificant sums. However, it's easy to lose control over one's gambling habits by assuming that one more small top-up won't lead to problems. This runs counter to responsible gambling policy, which implies strict control over spending in online casinos. Some operators implement limits and special mechanisms to regulate player activity, but the challenge still needs to be addressed globally.

    Online gaming

    It's impossible to deny that the possibility of low-value transactions changed the world, allowing digital content consumers to get what they need at a low price. The gaming industry was among the first to embrace this concept, and the solution eventually boosted the sector's popularity. Users no longer have to pay thousands of dollars for the content they want; a deposit of a couple of dollars is usually enough. What do game lovers usually purchase using microtransactions?

    • Paid app and software downloads
    • In-game purchases to boost progress
    • Loot boxes with random items
    • Personalised in-game customisation
    • Pay-to-win items (to gain an advantage over rivals)

    It's worth noting that the benefits unlocked by micropayments are available across the gaming world: users playing on computers, mobile devices, and consoles can all enjoy this option. The opportunity increases customer engagement, as users know they can obtain a desired game or additional perks at a low price. It might seem that gaming companies don't profit from microtransactions, but this is far from the case: the low cost attracts more users, so companies benefit from a much larger audience.

    Micropayments in digital media and content consumption

    Digital media is another thing users are willing to pay for, and the option of investing a minimal amount undeniably attracts more customers. Content creators often lack financing: advertisements were once the leading solution, but the rapid spread of ad blockers and similar software has significantly reduced the income they generate. Companies and individuals therefore had to find other ways to monetise their work. Hardly anyone would happily pay hundreds for an article, even an extremely useful one. But what if the cost is only a couple of bucks?

    Currently, creators use several models to profit from their content consumption. The first one is evident: a user can only see a part of the product and is required to pay to get the whole thing. On the other hand, the tipping concept is also gaining popularity. For instance, streamers provide free access to their content but ask listeners to support them financially. Users often feel grateful after listening to or watching something useful and see no problem in sending some tips.

    Future trends and potential of micropayments

    In 2024, the concept of micropayments is still evolving, and it is unlikely to lose popularity in the near future. Players still enjoy the chance to buy game improvements for a few pounds, while content consumers gladly purchase what they need. Naturally, the technology has spread to other industries. E-commerce is the leading sector for small transactions, as many people prefer to shop online. Online education is another draw, especially popular among younger users: why spend years in college when the necessary knowledge is available through comparatively cheap online courses?

    Helpfully, numerous payment providers make it possible to process microtransactions quickly and with no additional fees. Systems like PayPal and Apple Pay have been around for a while, making instant small transfers easy. The rising demand for cryptocurrencies is also generating interest, as this option allows minimal transactions with no intermediaries. Enhanced security and privacy are priorities for modern consumers, so it's not surprising that they choose anonymous, safe blockchain transfers. Despite high volatility and a lack of regulation, users are turning to Bitcoin and its alternatives more often than before, and the trend continues to grow.

    Final verdict

    Micropayments are hardly the latest trend: they have been in active use since the 1990s. Their appeal has been boosted by the broader evolution of the internet and the web's digital services. Gaming, e-commerce, virtual content, and education are only some of the sectors that accept microtransactions. While users get the products they want at a low cost, companies earn higher profits from larger audiences. Given the popularity of minimal money transfers, the trend is forecast to remain relevant and develop further in the future.

    Featured image supplied

    By The Canary

    This post was originally published on Canary.

  • In 2023, the landscape of online casinos in the UK evolved significantly, offering alternatives for individuals seeking gambling opportunities outside the scope of Gamstop. Non-Gamstop casinos have gained prominence in the UK, providing a distinct set of features, benefits, and considerations for a specific segment of players. This article evaluates the overarching characteristics of these casinos, highlighting their benefits and exploring important considerations for those engaging in online gambling. Ultimately, the emergence of non-Gamstop casinos underscores the dynamic nature of the online gambling industry and the diverse preferences of players seeking a range of gaming experiences.

    Understanding non-Gamstop casinos

    Understanding non-Gamstop casinos is essential for those seeking alternatives to the self-exclusion programme, which recorded a 21% rise in new registrations in February 2023, leaving enthusiasts who want to avoid Gamstop-governed casinos with fewer options. Operating independently of Gamstop, these platforms, according to this list of non-Gamstop casinos, offer a distinctive gaming experience. Driven by individual preferences, players choose these casinos for diverse reasons, such as a different gaming atmosphere or preferences not covered by Gamstop. These casinos cater to a wide range of player needs, reflecting the varied landscape of the online gambling community, and play a distinct role in providing options for individuals managing their gambling preferences beyond the confines of established self-exclusion programmes.

    Features

    • Diverse game selection: non-Gamstop casinos often feature a diverse array of games, ranging from classic slots and table games to live dealer options. The expansive selection caters to different preferences, proffering a wide variety of gaming experiences for players.
    • Flexible payment options: many non-Gamstop casinos offer flexibility in payment methods, accommodating various preferences for deposit and withdrawal. In an era of high UK inflation, this inclusivity enhances the accessibility of these platforms for players with different financial backgrounds and preferences.
    • Attractive bonuses and promotions: non-Gamstop casinos frequently provide competitive bonuses and promotions to attract players; these can include welcome bonuses, free spins and loyalty programs, enhancing the overall gaming experience and providing added value to players.
    • International accessibility: non-Gamstop casinos often welcome players from around the world, contributing to a more diverse gaming community. This international accessibility broadens the player base and enriches the gaming environment.

    Benefits of non-Gamstop casinos

    • Freedom of choice: one of the primary benefits of non-Gamstop casinos is the freedom of choice they offer to players; individuals seeking a break from self-exclusion or those who prefer to explore platforms not covered by Gamstop find these casinos appealing.
    • Enhanced gaming variety: the diverse range of games available in non-Gamstop casinos provides players with an enriched gaming experience; from traditional casino games to niche and specialised options, these platforms cater to a broad spectrum of interests.
    • Flexible gaming limits: non-Gamstop casinos often allow players to set their own gaming limits, providing a level of control over their gambling activities. Real-time player tracking, a hallmark of responsible gaming, has seen remarkable growth, with 90% of non-Gamstop casinos integrating this technology into their platforms. This feature aligns with responsible gambling practices, empowering players to manage their gaming habits.
    • International gaming communities: engaging with players from different parts of the world contributes to a global gaming community. Non-Gamstop casinos, with their international accessibility, facilitate connections among players with diverse backgrounds and preferences.

    Considerations for players engaging with non-Gamstop casinos

    • Responsible gambling practices: although non-Gamstop casinos may offer flexible gaming limits, players must maintain responsible gambling practices. Setting personal limits, adhering to time constraints and monitoring gaming expenditures are essential to fostering a healthy gambling experience.
    • Regulatory compliance: players engaging with non-Gamstop casinos should be mindful of the regulatory framework governing these platforms. It is advisable to choose casinos licensed and regulated by recognised authorities, ensuring fair play and transparent operations.
    • Payment security: ensuring the security of financial transactions is critical; players should opt for non-Gamstop casinos with secure and reputable payment methods, safeguarding personal and financial information for an anxiety-free gaming experience.
    • Customer support and reputation: assessing the customer support and reputation of a non-Gamstop casino is vital; reliable customer service, clear communication and positive reviews contribute to a trustworthy gaming environment, guaranteeing participants have a positive and secure gaming experience.

    The future landscape

    The future landscape of non-Gamstop casinos in the UK is poised for dynamic evolution, driven by transformative trends. Live dealer experiences promise an immersive blend of virtual and physical casinos, while cryptocurrency integration points to a decentralised, secure financial realm. Gamification elements add interactive layers that enhance entertainment, and improved mobile compatibility makes gaming a seamless part of a hectic daily lifestyle. Community building and social integration paint a picture of a global gaming village; as these trends converge, players can anticipate a future where non-Gamstop casinos redefine online gaming, offering diverse experiences and a glimpse of the industry's innovative forefront.

    In conclusion

    Non-Gamstop casinos in the UK offer a distinct alternative for players seeking diverse gaming experiences outside the scope of the self-exclusion program. The features and benefits, including a broad game selection, flexible payment options and attractive bonuses, contribute to the allure of these platforms; however, players should approach these casinos with careful consideration, emphasising responsible gambling practices, adherence to regulatory standards and an awareness of the importance of secure financial transactions. Ultimately, the choice to engage with non-Gamstop casinos is a personal one, and players should approach it with an informed and mindful perspective.

    Featured image via Pexels

    By The Canary

    This post was originally published on Canary.

  • In an increasingly digitised world, technology has left an indelible mark on every aspect of our lives, including romance and relationships. From the advent of dating apps to the social media revolution, the way we meet potential partners and develop relationships has undergone a sea change. However, it’s essential to acknowledge that along with these advancements, the digital realm has also seen the emergence of online platforms offering companionship services of escorts in India, which can influence our perceptions of intimacy and connection. In this article, we will explore how technology is changing the rules of the romance game, including the impact of such online services, and how these changes are affecting the way we relate and connect with others.

    The dating app revolution

    One of the most remarkable transformations in relationships and romance in the digital age has been the dating app revolution. These platforms have radically altered the way people search for and find potential partners, while changing the traditional dynamics of courtship and dating. Over the past few years, apps such as Tinder, Bumble, Hinge, OkCupid, and many others have gained immense popularity around the world, generating both excitement and debate about their effects on contemporary society.

    These dating apps operate on a simple but effective logic: users swipe across the screen of their mobile devices, choosing potential matches based primarily on profile pictures and a brief description. If both parties swipe right (a “match”), the possibility of a conversation opens up. This seemingly superficial process has altered the way people present themselves and evaluate others in the search for romantic connections.
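    The mutual-opt-in mechanic described above can be sketched in a few lines of code (a toy illustration only; the function names and data structures are invented for this sketch and are not any app's actual implementation):

    ```python
    # Toy model of dating-app "match" logic: a conversation only opens
    # when both users have independently swiped right on each other.
    right_swipes: set[tuple[str, str]] = set()  # (swiper, swipee) pairs

    def swipe_right(swiper: str, swipee: str) -> bool:
        """Record a right swipe; return True if it completes a mutual match."""
        right_swipes.add((swiper, swipee))
        # A match exists only if the other person already swiped right on us.
        return (swipee, swiper) in right_swipes

    assert swipe_right("alice", "bob") is False  # one-sided so far
    assert swipe_right("bob", "alice") is True   # reciprocated: it's a match
    ```

    Until both swipes exist, neither user learns anything about the other's decision, which is what makes the mechanic feel low-risk to users.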

    One of the most obvious advantages of these apps is their accessibility and convenience. People can browse a wide variety of profiles from the comfort of their homes or anywhere they have access to a mobile device and internet connection. This has greatly expanded the pool of potential matches, which was previously limited to connections in the real world or through mutual friends.

    However, this ease of use has also led to criticism. Some argue that dating apps promote a culture of superficiality, where people are judged primarily on their physical appearance rather than their personality or values. This can lead to discouraging and superficial experiences, where connections are based solely on physical attraction rather than a deeper understanding of the other.

    In addition, there is concern that these apps encourage a “consumer” approach to relationships, where people change partners as easily as they change channels on TV. The lack of long-term commitment and the constant search for “something better” can undermine the ability to build strong and lasting relationships.

    Ultimately, the dating app revolution has transformed the way we look for love and connections in the 21st century. While they offer notable advantages in terms of accessibility and choice, they also present significant challenges in terms of authenticity, superficiality and stability in romantic relationships. As a result, they continue to be the subject of debate and reflection in our evolving society.

    The influence of social networking on relationships

    Social media has played a significant role in transforming the way we experience relationships and romance in the digital age. Platforms such as Facebook, Instagram, Twitter and Snapchat have become an integral part of our lives, and their impact on relationship dynamics cannot be underestimated.

    Firstly, social media provides a window into the lives of others. Through photos, status updates and online posts, people share aspects of their lives, relationships and experiences. This can create a sense of virtual intimacy and allow people to learn about aspects of each other’s lives that might otherwise remain hidden. However, this constant exposure has also led to comparison and competition. People often feel pressured to maintain a perfect appearance online, which can lead to insecurity and the creation of an idealised image of themselves and their relationships.

    On the other hand, social networks have made it easier to communicate over long distances and connect with people all over the world. Couples who are geographically separated can maintain a relationship through text messages, video calls and online chats. This has expanded the possibilities of building relationships with people who would otherwise never have met, but it has also created challenges in terms of building intimacy and trust at a distance.

    An interesting topic is the impact of social networks on the process of meeting someone new. Before a date, it is common for people to research their potential partner online, which can provide useful information but also lead to premature judgements. Social networks can also provide a platform for interaction and flirting before a date, which can be beneficial for some, but can lead to the creation of unrealistic expectations.

    In today’s world, even people looking for escorts can be influenced by social media. Online platforms like Simpleescorts Ireland can be used to search for and contact these services, which poses its own challenges and risks in terms of security and confidentiality. The influence of social media on relationships is a complex and multifaceted issue. While these platforms can bring us closer to people and allow us to share significant moments in our lives, they can also generate insecurity, comparisons and the feeling that we live a life that is more public than private. How individuals handle social media in the context of their relationships depends on a number of factors, such as personality, communication and perceptions of privacy.

    In short, social media has altered the way we share our lives and romantic experiences with the world, providing both opportunities and challenges in the search for love and connections.

    Featured image via Prostock-studio – Envato Elements

    By The Canary

    This post was originally published on Canary.

  • The internet has become one of the best ways to communicate. With just a click, you can express yourself to the masses, share ideas, reach new audiences, and more.

    Unfortunately, there are growing concerns over how our rights to privacy, freedom of expression, and anonymity are not protected. Many websites, services, and apps are developing new ways to uncover people’s identities and discover who is behind the screen.

    So, is the internet really such a safe place to share our views? Can your activity be traced back to you? And if so, what can we do to protect ourselves?

    This article will explore how you can stay anonymous online. From implementing encryption to reviewing terms and conditions thoroughly, you’ll learn simple ways of protecting your identity while still getting the most from using the internet.

    The internet and your right to anonymity

    Online privacy is not just a luxury people would like – but a fundamental right. According to the ‘Charter of Human Rights and Principles for the Internet,’ everyone has the right to privacy and to online anonymity.

    In theory, this makes the internet a fantastic place to express yourself without compromising your safety. But unfortunately, that isn’t always the reality.

    Nowadays, our personal information is put at risk by companies keen to learn about their customers through methods including tracking cookies and IP-address identification. All of these can be used to pinpoint your location, identify you, and trace a comment, a photo, or even a website visit back to you.
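    As a rough illustration of the mechanism, a website can tag each browser with a persistent identifier cookie and log it alongside the visitor's IP address on every request. This is a minimal sketch with invented names; real trackers are far more sophisticated:

    ```python
    import uuid
    from http.cookies import SimpleCookie

    visit_log = []  # (visitor_id, client_ip) for every page view

    def handle_request(client_ip: str, cookie_header: str) -> str:
        """Return a Set-Cookie value, logging the visitor's ID and IP."""
        cookies = SimpleCookie(cookie_header)
        if "visitor_id" in cookies:
            visitor_id = cookies["visitor_id"].value  # returning visitor
        else:
            visitor_id = uuid.uuid4().hex             # first visit: assign an ID
        visit_log.append((visitor_id, client_ip))     # every visit is now linkable
        return f"visitor_id={visitor_id}"

    first = handle_request("203.0.113.7", "")      # no cookie yet
    second = handle_request("203.0.113.7", first)  # browser sends the cookie back
    ```

    After the second request, both page views share one `visitor_id` in the log, which is exactly why clearing cookies (or blocking third-party ones) disrupts this kind of profiling.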

    So what can be done to change this? How can consumers level the playing field? How can you reclaim your anonymity?

    Not everything is lost. There are still ways you can fortify your privacy online.

    Use more secure software, websites, and settings

    One of the easiest ways of controlling what information you give to companies is choosing better, more security-conscious options. This includes everything from the operating systems you use to the software installed and the kinds of websites you visit.

    For example, many operating systems like Windows require you to set up an account using your phone number or email address. Devices may also come with pre-installed ‘bloatware’, which can indirectly monitor your activity. You can avoid this by opting for open-source systems like Linux, often described as pro-privacy, and by uninstalling software you aren’t using.

    Additionally, you should consider using privacy-focused websites. Search engines like Google and Bing can track user search activity and feed this information into targeted ads. Alternatives like DuckDuckGo, for example, have dozens of privacy protections that ensure your search history and IP address will not be tracked.

    Finally, you should regularly clean out files like third-party cookies and your internet cache. Over time, these files can collect large amounts of data on your user activity. Deleting them will improve your browser performance and limit the amount of information websites receive about you.

    Use encryption to safeguard your online activity

    Encryption is one of the best ways to protect your online identity. It scrambles the data you send and receive, making it unreadable to anyone monitoring you.

    A virtual private network (VPN) is perhaps the most effective and easiest way to use encryption, as it encrypts the internet connection. Your online activity is sent through an encrypted tunnel, hiding it from websites you visit and your internet service provider.
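    To see why encrypted traffic is useless to an eavesdropper, consider a toy one-time-pad cipher: without the key, the bytes on the wire look like random noise. This is illustrative only; real VPNs use vetted ciphers such as AES, not this sketch:

    ```python
    import secrets

    def xor_cipher(data: bytes, key: bytes) -> bytes:
        # XOR each byte with the key; applying the same key twice
        # restores the original, so one function both encrypts and decrypts.
        return bytes(b ^ k for b, k in zip(data, key))

    message = b"visited example.com at 10:42"
    key = secrets.token_bytes(len(message))   # random key, used once

    ciphertext = xor_cipher(message, key)     # what an eavesdropper sees
    recovered = xor_cipher(ciphertext, key)   # only the key holder can do this

    assert recovered == message
    assert ciphertext != message
    ```

    The eavesdropper's problem is that every possible plaintext of the same length is equally consistent with the ciphertext; your ISP faces a (computational) version of the same problem with VPN traffic.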

    You might ask, though: won’t websites be able to identify me through my IP address? Won’t that give away my location? Not quite. A VPN can change your apparent IP address with just one click, making it seem like you’re browsing from somewhere else entirely.

    Changing your IP address has additional benefits: you’ll be able to access websites that are censored or blocked in your country, bypass geo-restricted content, and more.

    Review terms and conditions thoroughly

    Whether it’s a website or a new app, many people will accept terms and conditions without question, not realising what they agree to. This can be a dangerous habit because you might not be aware of the information you’re willingly giving away or how the data collected could be used.

    The best way to counter this is to review requests and permissions, see what data is collected, and decide whether it’s acceptable. For example, should an ordinary app be allowed to track your location? Or access your contact book? If not, is there a competitor that treats your privacy better?

    If you are worried about services you’re already using, websites like ‘Terms of Service; Didn’t Read’ catalogue the most popular websites and apps, summarise their terms of use and the documents in question, and rate their privacy.

    Reconsider home assistants and IoT devices

    We live in a connected world where most of our devices go online and monitor our activity.

    Research from Deloitte found that up to 75% of device owners feel they should be doing more to protect their information from companies and hackers but don’t feel empowered to act or know what to do.

    The fact of the matter is that many of these devices monitor your actions, including your calendar and what you buy, as well as record your voice and location to provide you with services.

    While we might enjoy these devices’ convenience, they raise serious privacy concerns. Consumers should review the terms and conditions of devices, examine their permissions, and decide whether they’re okay with the amount of data collected to safeguard their privacy.

    Featured image via AndersonPiza – Envato Elements

    By The Canary

    This post was originally published on Canary.

  • It was an unusual place for a tech company to announce a successful $33 million round of venture capital fundraising. But, on November 7, former NSO Group CEO Shalev Hulio and two colleagues stood in the Gaza Strip, stared into a laptop’s built-in webcam, and did exactly that.

    “We are here on the Gaza border,” said Hulio, the Israeli entrepreneur, on a little-noted YouTube video released by his new start-up, Dream Security. Hulio, a reservist who had been called up for duty, appeared in the video with a gun slung over his shoulder.

    “It’s very emotional,” he said. “After all of us being here, some of us reserves, some of us helping the government in many other ways, I think that doing it here is a great message to the high-tech community and the people of Israel.”

    Hulio, who stepped down from his role at NSO in August 2022, was sending a clear signal: He was back.

    After a rocky few years, marred by revelations about the role of NSO’s spyware in human rights abuses and the company’s blacklisting by the U.S. government, Hulio and his team were using the moment — timed exactly one month after Hamas’s attack — to announce lofty ambitions for their new cybersecurity firm, Dream Security.

    “Israeli high-tech is not only here to stay, but will grow better out of this,” said Michael Eisenberg, an Israeli American venture capitalist and Dream co-founder, in the promo video. “It’s going to deliver on time, wherever it’s needed, to whatever country or whatever company it’s needed at.”

    Their new project is another cybersecurity company. Instead of phone hacking, though, Dream — an acronym for “Detect, Respond, and Management” — offers cyber protection for so-called critical infrastructure, such as energy installations.

    Dream Security builds on the successful team NSO put together, with talent brought on board from the embattled spyware firm. At least a dozen of NSO’s top officials and staffers, along with an early investor in both NSO and Dream, have followed Hulio to Dream since its founding last year.

    Lawyers for Dream Security who responded to The Intercept’s request for comment said the companies were distinct entities. “The only connection between the two entities is Mr. Hulio and a small portion of talented employees who previously worked at NSO Group,” said Thomas Clare, a lawyer for Dream, in a letter. Liron Bruck, a spokesperson for NSO Group, told The Intercept, “The two companies are not involved in any way.”

    Now, with so many NSO people gathered under a new banner, critics are concerned that their old firm’s scandals will be forgotten.

    “It’s worrying,” said Natalia Krapiva, tech-legal counsel at Access Now, a digital rights advocacy group. “It seems like a new way to whitewash NSO’s image and past record.”

    At the same time, NSO Group is also using Israel’s war effort to try and revamp its own reputation. After Pegasus, NSO’s phone hacking software, was exposed for its role in human rights abuses and the firm was blacklisted in the U.S., the company suffered years of financial troubles. In the new year, it seemed to be bouncing back, with Israeli media reporting on its expansion and reorganization.

    Clare, Dream’s lawyer, stressed that Hulio was no longer affiliated with NSO. “Currently, Mr. Hulio holds no interest in NSO Group—not as an officer, employee, or stockholder,” Clare wrote to The Intercept. “Since Dream Security’s foundation in late 2022, he has exclusively led the company.”

    With Hulio at its helm, Dream boasts an eclectic and influential leadership team with connections to various far-right figures in Israel, Europe, and the U.S. — and an ambitious plan to leverage their ties to dominate the cybersecurity sector.

    A view of the entrance of the Israeli cyber company NSO Group branch in the Arava Desert on Nov. 11, 2021, in Sapir, Israel.
    Photo: Amir Levy/Getty Images

    New Mission, Same Executives

    Hulio has said that, with Dream, he moved from the “attack side to defense” — focusing on defending infrastructure, including gas and oil installations. A jargon-laden blurb for the company brags that it delivers surveillance to detect threats and an unspecified “power to respond fast.”

    “Dream Security’s product is a defensive cybersecurity solution to protect critical infrastructure and state-level assets,” Clare said. “Dream Security is not involved in the creation, marketing, or sale of any spyware or other malware product.”

    Clare said that Dream’s mission is “to enable decision-makers to act promptly and efficiently against any actual and potential cyber threats, such as malware attacks committed by states, terrorist organizations, and hacker groups, among others.”

    Kathryn Humphrey, another Dream lawyer and an associate at Clare’s firm, said in one of a series of emails, “Dream Security is not involved with offensive cyber, nor does it have an intention of becoming involved with offensive cyber. Dream Security is developing the world’s best AI-based defensive cyber security platform, and that is its only mission.”

    The mission may be new, but Dream is staffed in part by NSO veterans. A recent report from the Israeli business press said Dream has 70 employees, 60 of them in Israel. The Intercept found that 13 former NSO staffers now work at Dream Security — about a fifth of the new company.

    “Dream Security recruited the best talent to achieve its goal of becoming the globally leading AI-based cyber security company,” said Humphrey in a letter to The Intercept. “A small minority is top talent from NSO Group, including executives and other employees.”

    In addition to Hulio himself, former top NSO officials permeate the upper echelons of Dream. From the heads of sales to human resources to legal, at least seven former NSO executives now hold the same roles at Dream. Five additional Dream employees — from security researchers to software engineers and marketing designers — formerly worked at NSO.

    Dream’s lawyers told The Intercept that the “only overlap” between the companies were Hulio and former NSO employees, but other people tie NSO history and Dream’s present together. In one case, it was familial: Gil Dolev, one of Dream’s founders, is the brother of Shiri Dolev, who, according to NSO spokesperson Bruck, was NSO Group’s president until last year. (Shiri Dolev did not respond to a request for comment.)

    The two companies also share at least one investor. Eddy Shalev, the first investor in NSO, told The Intercept he had put money into Dream. “I was an early investor in NSO,” Shalev said. “I am no longer involved with NSO. I did invest in Dream Security.”

    Asked about Shalev’s investments in Dream and NSO, Humphrey said, “While Eddy Shalev is a valued investor, he is not a major investor—his investment is roughly 1% of the overall amount invested in Dream Security.”

    Former Austrian Chancellor Sebastian Kurz arrives at court on the first day of his trial in Vienna on Oct. 18, 2023. Kurz is charged with making false statements to a parliamentary inquiry into alleged corruption in his first government.
    Photo: Heinz-Peter Bader/AP

    Austria’s Mini-Trump

    From its inception, Dream Security’s strategy was based around an in-house connection to the international right. Former Austrian Chancellor Sebastian Kurz, dubbed “Austria’s mini-Trump,” is a Dream co-founder.

    The former chancellor was forced to step down from the Austrian government in October 2021 amid corruption allegations, and he remains on trial on related charges.

    Along the way, Kurz had made powerful friends. He reportedly has relationships with top officials around Europe and the U.S., including right-wing Hungarian Prime Minister Viktor Orbán, and Jared Kushner, former President Donald Trump’s son-in-law and former top adviser. Last year, Kurz joined Kushner on the honorary advisory council to the Abraham Accords Peace Institute, a group set up to foster normalization between Israel and Gulf monarchies like the United Arab Emirates — the very authoritarians that used NSO’s Pegasus software to crack down on dissidents.

    For all his connections to powerful politicians, experts said Kurz was never purely an ideologue. “Kurz is really a political professional,” said Laurenz Ennser-Jedenastik, a professor of Austrian politics at the University of Vienna. “He never struck anybody as extremely convicted of anything. I think his personal career and business were always the number one priority.”

    Once Kurz was out of government, he pivoted to the world of tech investment. He first met the cyber-spying titan Peter Thiel in 2017 and landed a job at one of the far-right billionaire’s firms, Thiel Capital, in 2021. Thiel, one of the largest donors to right-wing causes in the U.S., is deeply involved in the world of spy tech: His company Palantir, which allows for the sorting and exploitation of masses of data, helped empower and expand the U.S. government’s international spy machine.

    When Dream’s creation was announced, Kurz’s connections to Thiel — and therefore Palantir — raised alarms. In the European Parliament, lawmakers in the Committee of Inquiry to investigate the use of Pegasus and equivalent surveillance spyware took note.

    “The cooperation between Kurz and Hulio constitutes an indirect but alarming connection between the spyware industry and Peter Thiel and his firm Palantir,” said a committee report earlier this year. (Thiel is not involved with NSO or Dream, a person familiar with his business told The Intercept.)

    In November, nearly 80 percent of the European Parliament voted to condemn the European Commission for not doing enough to tackle spyware abuse, including NSO’s Pegasus software, across member states.

    Questions have cropped up about whether Dream will, like NSO before it, sell powerful cybersecurity tools to authoritarian governments who might use them for nefarious purposes.

    Asked by the Israeli business publication Globes about where Dream would sell its wares, Kurz said, “This is a company that was founded in Israel and is currently looking to the European market.”

    According to Globes, Kurz was brought on to open doors to European governments. Dream has said that its customers already include the cybersecurity authority of one major European country, though it has declined to say which.

    Over time, Europe has become a strong market for commercial cybersecurity firms. Sophie in ’t Veld, a European parliamentarian from the Netherlands who led the charge on the Pegasus committee resolution, told The Intercept, “Europe is paradise for this kind of business.”

    The Israeli Right

    Dream’s right-wing network is nowhere more concentrated than in Israel itself. Venture capitalist Dovi Frances, a major Republican donor who led Dream’s recent $33 million fundraising round, is close to Israeli Prime Minister Benjamin Netanyahu. And Lior Atar, head of cyber security at the Israeli Ministry of Energy for six years, was directly plucked from his government role to join Dream earlier this year.

    Dream officials’ entanglement with the Israeli right also extends to grassroots right-wing movements. Two investors and Hulio are involved in a ground-level organization considered to be Israel’s largest militia, HaShomer HaChadash, or “the new guardians.” A Zionist education nonprofit established in 2007, HaShomer HaChadash says it safeguards Israel’s agricultural lands, largely along the Gaza border. 

    Eisenberg, the Dream co-founder, chairs HaShomer HaChadash’s board. Hulio became a HaShomer HaChadash board member in May 2017 — a month before NSO Group was put up for sale for $1 billion — and has donated nearly $100,000 to the group. (Neither Dream nor HaShomer HaChadash responded to questions about whether Hulio remains on the board.) Another Dream investor, Noam Lanir, has also been vocal about his own contributions to the organization, according to Haaretz.

    HaShomer HaChadash had a budget of approximately $33 million in 2022, of which over $5 million came from the government, according to documents filed with the Israeli Corporations Authority. The group is staffed in part by volunteers as well as active-duty personnel detailed from the Israeli military.

    “They seem like a mainstream organization,” said Ran Cohen, chair of the Democratic Bloc, which monitors anti-democratic incitement in Israel. “But in reality, the origins of their agenda is rooted in the right wing. They have also been active in illegal outposts in the West Bank.”

    For Dream, HaShomer HaChadash is but one node of its prolific links to the right at home and abroad. With those connections and the business chops that brought the world NSO Group, Dream — as the name itself suggests — has large ambitions. “I look forward to building Dream, against all odds, to become the world’s largest cybersecurity company,” Frances, the VC, said from the U.S. in the YouTube video announcing the successful fundraising drive. “Mark my word: It fucking will be.”

    The post In Video From Gaza, Former CEO of Pegasus Spyware Firm Announces Millions for New Venture appeared first on The Intercept.

    This post was originally published on The Intercept.

  • What’s to stop the U.S. government from throwing the kill switch and shutting down phone and internet communications in a time of so-called crisis?

    After all, it’s happening all over the world.

    Communications kill switches have become tyrannical tools of domination and oppression to stifle political dissent, shut down resistance, forestall election losses, reinforce military coups, and keep the populace isolated, disconnected and in the dark, literally and figuratively.

    In an internet-connected age, killing the internet is tantamount to bringing everything—communications, commerce, travel, the power grid—to a standstill.

    In Myanmar, for example, the internet shutdown came on the day a newly elected government was to have been sworn in. That’s when the military staged a digital coup and seized power. Under cover of a communications blackout that cut off the populace from the outside world and each other, the junta “carried out nightly raids, smashing down doors to drag out high-profile politicians, activists and celebrities.”

    These government-imposed communications shutdowns serve to not only isolate, terrorize and control the populace, but also underscore the citizenry’s lack of freedom in the face of the government’s limitless power.

    Yet as University of California Irvine law professor David Kaye explains, these kill switches are no longer exclusive to despotic regimes. They have “migrated into a toolbox for governments that actually do have the rule of law.”

    This is what digital authoritarianism looks like in a technological age.

    Digital authoritarianism, as the Center for Strategic and International Studies cautions, involves the use of information technology to surveil, repress, and manipulate the populace, endangering human rights and civil liberties, and co-opting and corrupting the foundational principles of democratic and open societies, “including freedom of movement, the right to speak freely and express political dissent, and the right to personal privacy, online and off.”

    For those who insist that it can’t happen here, it can and it has.

    In 2005, cell service was disabled in four major New York tunnels, reportedly to avert potential bomb detonations via cell phone.

    In 2009, those attending President Obama’s inauguration had their cell signals blocked—again, same rationale.

    And in 2011, San Francisco commuters had their cell phone signals shut down, this time to thwart any possible protests over a police shooting of a homeless man.

    With shutdowns becoming harder to detect, who’s to say it’s not still happening?

    Although an internet kill switch is broadly understood to mean a complete internet shutdown, the term also covers a range of lesser restrictions: content blocking, throttling, filtering, and even physical cable cutting.

    As Global Risk Intel explains:

    Content blocking is a relatively moderate method that blocks access to a list of selected websites or applications. When users access these sites and apps, they receive notifications that the server could not be found or that access was denied by the network administrator. A more subtle method is throttling. Authorities decrease the bandwidth to slow down the speed at which specific websites can be accessed. A slow internet connection discourages users to connect to certain websites and does not arouse immediate suspicion. Users may assume that connection service is slow but may not conclude that this circumstance was authorized by the government. Filtering is another tool to censor targeted content and erases specific messages and terms that the government does not approve of.
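    The mechanisms Global Risk Intel describes can be illustrated with a rough triage sketch. The function below is purely hypothetical: its thresholds and labels are assumptions for illustration, not the methodology of any real censorship-measurement project (such as OONI), and filtering, which erases content rather than degrading access, cannot be inferred from connection metrics at all.

```python
from typing import Optional

def classify_connection(dns_resolves: bool,
                        http_status: Optional[int],
                        observed_kbps: float,
                        baseline_kbps: float) -> str:
    """Guess which restriction, if any, best explains what a user observes.

    Illustrative heuristic only; thresholds are assumptions, not measured.
    """
    if not dns_resolves or http_status is None:
        # "The server could not be found": consistent with outright blocking.
        return "blocked"
    if http_status == 403:
        # "Access was denied by the network administrator."
        return "blocked"
    if baseline_kbps > 0 and observed_kbps < 0.1 * baseline_kbps:
        # Reachable but far slower than the user's normal baseline:
        # consistent with throttling, which rarely arouses suspicion.
        return "throttled"
    return "apparently normal"

print(classify_connection(False, None, 0.0, 5000.0))   # blocked
print(classify_connection(True, 200, 120.0, 5000.0))   # throttled
```

    As the sketch makes plain, a throttled connection is indistinguishable from ordinary bad service unless the user has a baseline to compare against, which is exactly why throttling is the subtler tool.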

    How often do most people, experiencing server errors and slow internet speeds, chalk it up to poor service? Who would suspect the government of being behind server errors and slow internet speeds?

    Then again, this is the same government that has subjected us to all manner of encroachments on our freedoms (lockdowns, mandates, restrictions, contact tracing programs, heightened surveillance, censorship, over-criminalization, shadow banning, etc.) in order to fight the COVID-19 pandemic, preserve the integrity of elections, and combat disinformation.

    These tactics have become the tools of domination and oppression in an internet-dependent age.

    It really doesn’t matter what the justifications are for such lockdowns. No matter the rationale, the end result is the same: an expansion of government power in direct proportion to the government’s oppression of the citizenry.

    In this age of manufactured crises, emergency powers and technofascism, the government already has the know-how, the technology and the authority.

    Now all it needs is the “right” crisis to flip the kill switch.

    This particular kill switch can be traced back to the Communications Act of 1934. Signed into law by President Franklin D. Roosevelt, the Act empowers the president to suspend wireless radio and phone services “if he deems it necessary in the interest of national security or defense” during a time of “war or a threat of war, or a state of public peril or disaster or other national emergency, or in order to preserve the neutrality of the United States.”

    That national emergency can take any form, can be manipulated for any purpose and can be used to justify any end goal—all on the say-so of the president.

    Given the government’s penchant for weaponizing one national crisis after another in order to expand its powers and justify all manner of government tyranny in the so-called name of national security, it’s only a matter of time before this particular emergency power to shut down the internet is activated.

    Then again, an all-out communications blackout is just a more extreme version of the technocensorship that we’ve already been experiencing at the hands of the government and its corporate allies.

    In fact, these tactics are at the heart of several critical cases before the U.S. Supreme Court over who gets to control, regulate or remove what content is shared on the internet: the individual, corporate censors or the police state.

    Nothing good can come from technocensorship.

    As I make clear in my book Battlefield America: The War on the American People and in its fictional counterpart The Erik Blair Diaries, these censors are laying the groundwork to preempt any “dangerous” ideas that might challenge the power elite’s stranglehold over our lives.

    Whatever powers you allow the government and its corporate operatives to claim now, whatever the reason might be, will at some point in the future be abused and used against you by tyrants of your own making.

    By the time you add AI technologies, social credit systems, and wall-to-wall surveillance into the mix, you don’t even have to be a critic of the government to get snared in the web of digital censorship.

    Eventually, as George Orwell predicted, telling the truth will become a revolutionary act.

    The post Digital Kill Switches: How Tyrannical Governments Stifle Political Dissent first appeared on Dissident Voice.

  • As the world grapples with rising water use and climate-fueled drought, countries from the United States to Israel to Australia are building huge desalination plants to bolster their water supplies. These plants can create water for thousands of households by extracting the salt from ocean water, but they have also drawn harsh criticism from many environmental groups: Desalinating water requires a huge amount of energy, and it also produces a toxic brine that many plants discharge right back into the ocean, damaging marine life. Recent desalination plant proposals have drawn furious opposition in Los Angeles, California, and Corpus Christi, Texas.

    But a new startup called Capture6 claims it can solve desalination’s controversial brine problem with another controversial climate technology: carbon capture. The company announced new plans this week to build a carbon capture facility in South Korea that will work in tandem with a nearby desalination plant, sucking carbon dioxide out of the air and storing it in desalination brine, which it will import from the plant. But that’s not all. Capture6 also claims it can wring new fresh water out of the brine, bolstering the company’s sustainability claims — and its potential profit — even further.

    If it works, this facility will deliver a triple benefit. It will decrease the concentration of greenhouse gasses in the atmosphere, create a new source of freshwater, and limit the polluting effects of desalination. But that’s still a very big “if.”

    So-called “direct air capture” facilities use a chemical reaction to pull carbon dioxide out of the air and fuse it with another substance, preventing it from re-entering the atmosphere. The greenhouse gas can then be stored in solid compounds like limestone or in chemical solutions — or, previous studies have shown, in salty brine. Capture6’s innovation is to source that brine from wastewater treatment plants and desalination plants, which have every reason to want to dispose of it in a way that does not open them to charges of pollution.

    The newly announced venture in South Korea, known as Project Octopus, takes the process one step further. The facility will be located at the Daesan Industrial Complex, an oil and gas industrial park in a region of the country that has suffered from water shortages due to an ongoing drought. The Korean state water utility, K-water, is building a seawater desalination plant at the industrial park to provide water to the oil and gas plants, which use thousands of gallons of water to cool down their machinery as it operates.

    The Capture6 facility will use the brine created by K-water’s desalination plant to capture carbon dioxide, and it will also use the modified brine to extract even more fresh water that the oil and gas plants can then use in lieu of pumping from less sustainable sources. Capture6 also says that the solvent produced by its direct air capture operations can then be used for additional point-source carbon capture at the nearby oil and gas plants, providing a double emissions benefit before the company buries all the carbon deep underwater. In other words, Capture6 will use the byproduct of water production to create even more water, and it will use the byproduct of capturing carbon to capture more carbon.

    “It’s an interesting example of solvent-based direct air capture, but what is innovative here is the pairing of direct air capture with the brine from desalination,” said Daniel Pike, the head of the carbon capture team at the Rocky Mountain Institute, a nonpartisan climate think tank. “Essentially what’s going on is the company is saying, ‘hey, where do we get the chemicals for our solvents? We’ll get them from desalination plants.’”

    (Capture6 received funding from Third Derivative, a carbon capture accelerator launched by Rocky Mountain Institute, but Pike himself doesn’t have a financial relationship with the company.)

    The company, which has received early funding from several venture capital funds as well as the states of California and New York, announced its first facility last year in Southern California. That facility, known as Project Monarch, will store carbon dioxide in wastewater from a water treatment plant in the city of Palmdale, then sell fresh water back to the city’s water system. 

    “What we are trying to do is really to decarbonize the water sector,” said Leo Park, the vice president for strategic development at Capture6. “So we’re trying to integrate our facilities into the easiest thing, which is wastewater and desalination plants.”

    The company’s initial pilot facility in Korea will operate on a small scale. The test project will capture 1,000 tons of carbon per year. That’s equivalent to the annual emissions of around 220 passenger cars, and about as much as is being captured at a much-lauded direct air capture facility that began operations in Tracy, California, last year. The company’s carbon capture process will yield around 14 million gallons of fresh water, enough to supply around 80 homes. 

    But when K-water’s desalination plant is running at full capacity, Capture6 says it will be able to capture almost 500,000 tons of carbon dioxide each year by 2026. That’s many times more storage than other direct air capture facilities have achieved so far. The large-scale plant will also produce around 5 billion gallons of fresh water each year, half as much as the Daesan desalination plant itself and enough to supply around 30,000 homes.
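    The scale of these claims can be cross-checked with a few lines of arithmetic. All figures below come from this article; the only outside assumption is the EPA’s estimate of roughly 4.6 metric tons of CO2 emitted per passenger car per year.

```python
# Sanity-checking the article's numbers. TONS_PER_CAR is an assumed
# EPA average (about 4.6 metric tons of CO2 per passenger car per year).
TONS_PER_CAR = 4.6

pilot_tons = 1_000            # pilot: tons of CO2 captured per year
pilot_gallons = 14_000_000    # pilot: fresh water yielded per year
pilot_homes = 80              # homes that water supplies

full_tons = 500_000           # full scale: tons of CO2 per year by 2026
full_gallons = 5_000_000_000  # full scale: gallons of fresh water per year
full_homes = 30_000           # homes supplied at full scale

print(round(pilot_tons / TONS_PER_CAR))   # ~217 cars, matching "around 220"
print(pilot_gallons // pilot_homes)       # 175000 gallons per home per year
print(round(full_gallons / full_homes))   # 166667, close to the pilot ratio
print(full_tons // pilot_tons)            # a 500x scale-up in captured carbon
```

    The per-home water figures implied by the pilot and the full-scale plant agree to within about five percent, so the article’s numbers are at least internally consistent; the 500-fold jump in captured carbon is where the ambition lies.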

    Pike says that the company’s growth goal is extremely ambitious, and it’s unclear whether the facility will have a net negative impact on emissions, given that desalination and direct air capture both require a lot of energy. In the case of Project Octopus, that energy will initially come from Korea’s power grid, which relies heavily on fossil fuels.

    “Even assuming you have the solvent, you have an intense energy need just to power a direct air capture process, and a big challenge we have in direct air capture is how to improve energy efficiency,” he said. “Then what they’re doing is they’re also running a very energy-intensive process for deriving the solvent, moving a lot of water around. It’s a lot of energy, a lot of water. That big picture is the challenge here.”

    If Capture6 can prove that its facilities store more carbon than they emit, the company won’t have any trouble monetizing its technology. The oil and gas companies in Daesan will buy its produced water for their cooling needs, and K-water will rely on the company to minimize the environmental harms of desalination, which generated backlash when the plant was first announced.

    “There were a lot of local concerns about brine discharge, because [locals] were worried that it was going to impact the marine ecosystem and fishing activities,” said Park. “Our solution can help minimize brine discharge, so there’s an additional environmental benefit we can generate. This is one of the reasons K-water wanted to work with us.”

    Even so, the full-size Capture6 facility will only absorb around half of the brine that the K-water desal plant produces, meaning the utility will still have to release a lot of toxic liquid into the ocean. Park says he hopes the company can eventually scale up far enough to absorb all the plant’s brine, but they’re not there yet.

    Unlike many other direct air capture companies, Capture6 doesn’t need to sell carbon credits to make money. Park hopes to someday sell credits to private companies seeking to offset their emissions, but for the moment Capture6’s main revenue source is the same as any ordinary desalination plant: water. Park says the company could build future facilities at lithium mines, which also produce brine and need water to operate.

    But Ekta Patel, a researcher and doctoral student at Duke University who studies the politics of desalination, said the big question about this business model was how much energy it takes for Capture6 to make the new water. 

    “My mind jumps immediately to the issue of energy,” she said. “How much more energy does reclaiming the additional water take, is it from renewable or nonrenewable sources, and what does that do to the cost of the water?” She added that if “addressing challenges related to carbon emissions and water” required a jump in energy usage, the solution was just “shifting around resource problems.”

    This story was originally published by Grist with the headline Can carbon capture solve desalination’s waste problem? on Jan 17, 2024.

    In the world of sports, there’s a rich tapestry woven from tradition, excitement, and innovation. At the heart of this narrative is the world of horse racing, a sport steeped in history and now embracing the digital age. Picture the scene: the roar of the crowd, the thunder of hooves, the tense moments as the results roll in, and among it all, today’s horse racing card seamlessly transitions from the physical to the digital realm. It’s more than just a list of races; it’s an evolving story that now unfolds at the click of a button, connecting enthusiasts far and wide to the pulse of the tracks.

    The evolution of horse racing cards for today’s digital audience

    The traditional horse racing card was once a tangible symbol of the day’s excitement, something racegoers would clutch in eager anticipation. But now, it takes on a new life online. It’s become interactive, informative, and instantly accessible. The modern audience can revel in detailed analyses, real-time updates, and even watch races live through their gadgets. It’s a delightful blend of old and new, where the long-standing customs of horse racing meet the immediacies of the digital revolution, bringing a fresh, convenient way to enjoy the spectacle.

    Preserving the heritage of horse racing in the digital realm

    As online platforms become the go-to source for sports betting and race watching, there’s a careful balance to be struck. How do we honor and preserve the rich heritage of horse racing while making strides in the digital era? It’s about blending the narrative of history with tech – using digital means to tell the tales of legendary races, historic wins, and the timeless stories of the tracks. Through virtual tours of iconic venues or documentaries streamed alongside live racing, the legacy lives on, engaging a new generation without losing the essence that makes horse racing enduringly captivating.

    How real-time data enhances the betting experience for racing enthusiasts

    The excitement of placing a bet on a horse race has been magnified by the power of technology. Real-time data feeds, once the realm of sci-fi, are now commonplace, offering up-to-the-second stats on horses, jockeys, and track conditions. In a sport where every second counts, having this data at one’s fingertips can transform the betting experience from a game of chance to a calculated decision. This immediacy and depth of information create a more immersive and strategic involvement for seasoned bettors and newcomers alike, adding a thrilling new dimension to the time-honored thrill of horse racing.

    Ethical wagering: walking the tightrope in online betting platforms

    In the rush of the race, it’s vital to remember the importance of ethical wagering. Online betting platforms are tasked with offering the fun and potential rewards of betting, while also ensuring the well-being of their users. It’s a fine line – providing resources for responsible gambling, such as setting limits and offering self-exclusion options, is essential. These platforms have an opportunity to lead by example, showing that the excitement of betting can coexist with a commitment to responsible participation and support systems for those who may need it.

    The environmental considerations of digital gambling platforms

    Every aspect of our digital lives comes with an environmental cost, including the energy used by online betting platforms. The eco-footprint of the digital bettor is not always front and center, but as we become more environmentally conscious it’s an area that can’t be ignored. From optimizing server efficiency to using green energy sources, there’s a growing push for sustainability in the betting world. It’s about enjoying the sport and the bet, all while being mindful of the energy we consume, striving for a greener way to engage with our favorite pastimes.

    By The Canary

    This post was originally published on Canary.

  • OpenAI this week quietly deleted language expressly prohibiting the use of its technology for military purposes from its usage policy, which seeks to dictate how powerful and immensely popular tools like ChatGPT can be used.

    Up until January 10, OpenAI’s “usage policies” page included a ban on “activity that has high risk of physical harm, including,” specifically, “weapons development” and “military and warfare.” That plainly worded prohibition against military applications would seemingly rule out any official, and extremely lucrative, use by the Department of Defense or any other state military. The new policy retains an injunction not to “use our service to harm yourself or others” and gives “develop or use weapons” as an example, but the blanket ban on “military and warfare” use has vanished.

    The unannounced redaction is part of a major rewrite of the policy page, which the company said was intended to make the document “clearer” and “more readable,” and which includes many other substantial language and formatting changes.

    “We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs,” OpenAI spokesperson Niko Felix said in an email to The Intercept. “A principle like ‘Don’t harm others’ is broad yet easily grasped and relevant in numerous contexts. Additionally, we specifically cited weapons and injury to others as clear examples.”

    Felix declined to say whether the vaguer “harm” ban encompassed all military use, writing, “Any use of our technology, including by the military, to ‘[develop] or [use] weapons, [injure] others or [destroy] property, or [engage] in unauthorized activities that violate the security of any service or system,’ is disallowed.”

    “OpenAI is well aware of the risk and harms that may arise due to the use of their technology and services in military applications,” said Heidy Khlaaf, engineering director at the cybersecurity firm Trail of Bits and an expert on machine learning and autonomous systems safety, citing a 2022 paper she co-authored with OpenAI researchers that specifically flagged the risk of military use. Khlaaf added that the new policy seems to emphasize legality over safety. “There is a distinct difference between the two policies, as the former clearly outlines that weapons development, and military and warfare is disallowed, while the latter emphasizes flexibility and compliance with the law,” she said. “Developing weapons, and carrying out activities related to military and warfare is lawful to various extents. The potential implications for AI safety are significant. Given the well-known instances of bias and hallucination present within Large Language Models (LLMs), and their overall lack of accuracy, their use within military warfare can only lead to imprecise and biased operations that are likely to exacerbate harm and civilian casualties.”

    The real-world consequences of the policy are unclear. Last year, The Intercept reported that OpenAI was unwilling to say whether it would enforce its own clear “military and warfare” ban in the face of increasing interest from the Pentagon and U.S. intelligence community.

    “Given the use of AI systems in the targeting of civilians in Gaza, it’s a notable moment to make the decision to remove the words ‘military and warfare’ from OpenAI’s permissible use policy,” said Sarah Myers West, managing director of the AI Now Institute and a former AI policy analyst at the Federal Trade Commission. “The language that is in the policy remains vague and raises questions about how OpenAI intends to approach enforcement.”

    While nothing OpenAI offers today could plausibly be used to directly kill someone, militarily or otherwise — ChatGPT can’t maneuver a drone or fire a missile — any military is in the business of killing, or at least maintaining the capacity to kill. There are any number of killing-adjacent tasks that an LLM like ChatGPT could augment, like writing code or processing procurement orders. A review of custom ChatGPT-powered bots offered by OpenAI suggests U.S. military personnel are already using the technology to expedite paperwork. The National Geospatial-Intelligence Agency, which directly aids U.S. combat efforts, has openly speculated about using ChatGPT to aid its human analysts. Even if OpenAI tools were deployed by portions of a military force for purposes that aren’t directly violent, they would still be aiding an institution whose main purpose is lethality.

    Experts who reviewed the policy changes at The Intercept’s request said OpenAI appears to be silently weakening its stance against doing business with militaries. “I could imagine that the shift away from ‘military and warfare’ to ‘weapons’ leaves open a space for OpenAI to support operational infrastructures as long as the application doesn’t directly involve weapons development narrowly defined,” said Lucy Suchman, professor emerita of anthropology of science and technology at Lancaster University. “Of course, I think the idea that you can contribute to warfighting platforms while claiming not to be involved in the development or use of weapons would be disingenuous, removing the weapon from the sociotechnical system – including command and control infrastructures – of which it’s part.” Suchman, a scholar of artificial intelligence since the 1970s and member of the International Committee for Robot Arms Control, added, “It seems plausible that the new policy document evades the question of military contracting and warfighting operations by focusing specifically on weapons.”

    Suchman and Myers West both pointed to OpenAI’s close partnership with Microsoft, a major defense contractor, which has invested $13 billion in the LLM maker to date and resells the company’s software tools.

    The changes come as militaries around the world are eager to incorporate machine learning techniques to gain an advantage; the Pentagon is still tentatively exploring how it might use ChatGPT or other large-language models, a type of software tool that can rapidly and dextrously generate sophisticated text outputs. LLMs are trained on giant volumes of books, articles, and other web data in order to approximate human responses to user prompts. Though the outputs of an LLM like ChatGPT are often extremely convincing, they are optimized for coherence rather than a firm grasp on reality and often suffer from so-called hallucinations that make accuracy and factuality a problem. Still, the ability of LLMs to quickly ingest text and rapidly output analysis — or at least the simulacrum of analysis — makes them a natural fit for the data-laden Defense Department.

    While some within U.S. military leadership have expressed concern about the tendency of LLMs to insert glaring factual errors or other distortions, as well as security risks that might come with using ChatGPT to analyze classified or otherwise sensitive data, the Pentagon remains generally eager to adopt artificial intelligence tools. In a November address, Deputy Secretary of Defense Kathleen Hicks stated that AI is “a key part of the comprehensive, warfighter-centric approach to innovation that Secretary [Lloyd] Austin and I have been driving from Day 1,” though she cautioned that most current offerings “aren’t yet technically mature enough to comply with our ethical AI principles.”

    Last year, Kimberly Sablon, the Pentagon’s principal director for trusted AI and autonomy, told a conference in Hawaii that “[t]here’s a lot of good there in terms of how we can utilize large-language models like [ChatGPT] to disrupt critical functions across the department.”

    The post OpenAI Quietly Deletes Ban on Using ChatGPT for “Military and Warfare” appeared first on The Intercept.

  • A small group of volunteers from Israel’s tech sector is working tirelessly to remove content it says doesn’t belong on platforms like Facebook and TikTok, tapping personal connections at those and other Big Tech companies to have posts deleted outside official channels, the project’s founder told The Intercept.

    The project’s moniker, “Iron Truth,” echoes the Israeli military’s vaunted Iron Dome rocket interception system. The brainchild of Dani Kaganovitch, a Tel Aviv-based software engineer at Google, Iron Truth claims its tech industry back channels have led to the removal of roughly 1,000 posts tagged by its members as false, antisemitic, or “pro-terrorist” across platforms such as X, YouTube, and TikTok.

    In an interview, Kaganovitch said he launched the project after the October 7 Hamas attack, when he saw a Facebook video that cast doubt on alleged Hamas atrocities. “It had some elements of disinformation,” he told The Intercept. “The person who made the video said there were no beheaded babies, no women were raped, 200 bodies is a fake. As I saw this video, I was very pissed off. I copied the URL of the video and sent it to a team in [Facebook parent company] Meta, some Israelis that work for Meta, and I told them that this video needs to be removed and actually they removed it after a few days.”

    Billed as both a fight against falsehood and a “fight for public opinion,” according to a post announcing the project on Kaganovitch’s LinkedIn profile, Iron Truth vividly illustrates the perils and pitfalls of terms like “misinformation” and “disinformation” in wartime, as well as the mission creep they enable. The project’s public face is a Telegram bot that crowdsources reports of “inflammatory” posts, which Iron Truth’s organizers then forward to sympathetic insiders. “We have direct channels with Israelis who work in the big companies,” Kaganovitch said in an October 13 message to the Iron Truth Telegram group. “There are compassionate ones who take care of a quick removal.” The Intercept used Telegram’s built-in translation feature to review the Hebrew-language chat transcripts.

    Iron Truth vividly illustrates the perils and pitfalls of terms like “misinformation” and “disinformation” in wartime.

    So far, nearly 2,000 participants have flagged a wide variety of posts for removal, from content that’s clearly racist or false to posts that are merely critical of Israel or sympathetic to Palestinians, according to chat logs reviewed by The Intercept. “In the U.S. there is free speech,” Kaganovitch explained. “Anyone can say anything with disinformation. This is very dangerous, we can see now.”

    “The interests of a fact checking or counter-disinformation group working in the context of a war belongs to one belligerent or another. Their job is to look out for the interests of their side,” explained Emerson Brooking, a fellow with the Atlantic Council’s Digital Forensic Research Lab. “They’re not trying to ensure an open, secure, accessible online space for all, free from disinformation. They’re trying to target and remove information and disinformation that they see as harmful or dangerous to Israelis.”

    While Iron Truth appears to have frequently conflated criticism or even mere discussion of Israeli state violence with misinformation or antisemitism, Kaganovitch says his views on this are evolving. “In the beginning of the war, it was anger, most of the reporting was anger,” he told The Intercept. “Anti-Israel, anti-Zionist, anything related to this was received as fake, even if it was not.”

    The Intercept was unable to independently confirm that sympathetic workers at Big Tech firms are responding to the group’s complaints or verify that the group was behind the removal of the content it has taken credit for having deleted. Iron Truth’s founder declined to share the names of its “insiders,” stating that they did not want to discuss their respective back channels with the press. In general, “they are not from the policy team but they have connections to the policy team,” Kaganovitch told The Intercept, referring to the personnel at social media firms who set rules for permissible speech. “Most of them are product managers, software developers. … They work with the policy teams with an internal set of tools to forward links and explanations about why they need to be removed.” While companies like Meta routinely engage with various civil society groups and NGOs to discuss and remove content, these discussions are typically run through their official content policy teams, not rank-and-file employees.

    The Iron Truth Telegram account regularly credits these supposed insiders. “Thanks to the TikTok Israel team who fight for us and for the truth,” read an October 28 post on the group’s Telegram channel. “We work closely with Facebook, today we spoke with more senior managers,” according to another post on October 17. Soon after a Telegram chat member complained that something they’d posted to LinkedIn had attracted “inflammatory commenters,” the Iron Truth account replied, “Kudos to the social network LinkedIn who recruited a special team and have so far removed 60% of the content we reported on.”

    Kaganovitch said the project has allies outside Israel’s Silicon Valley annexes as well. Iron Truth’s organizers met with the director of a controversial Israeli government cyber unit, he said, and its core team of more than 50 volunteers and 10 programmers includes a former member of the Israeli Parliament.

    “Eventually our main goal is to get the tech companies to differentiate between freedom of speech and posts that their only goal is to harm Israel and to interfere with the relationship between Israel and Palestine to make the war even worse,” Inbar Bezek, the former Knesset member working with Iron Truth, told The Intercept in a WhatsApp message.

    “Across our products, we have policies in place to mitigate abuse, prevent harmful content and help keep users safe. We enforce them consistently and without bias,” Google spokesperson Christa Muldoon told The Intercept. “If a user or employee believes they’ve found content that violates these policies, we encourage them to report it through the dedicated online channels.” Muldoon added that Google “encourages employees to use their time and skills to volunteer for causes they care about.” In interviews with The Intercept, Kaganovitch emphasized that he works on Iron Truth only in his free time, and said the project is entirely distinct from his day job at Google.

    Meta spokesperson Ryan Daniels pushed back on the notion that Iron Truth was able to get content taken down outside the platform’s official processes, but declined to comment on Iron Truth’s underlying claim of a back channel to company employees. “Multiple pieces of content this group claims to have gotten removed from Facebook and Instagram are still live and visible today because they don’t violate our policies,” Daniels told The Intercept in an emailed statement. “The idea that we remove content based on someone’s personal beliefs, religion, or ethnicity is simply inaccurate.” Daniels added, “We receive feedback about potentially violating content from a variety of people, including employees, and we encourage anyone who sees this type of content to report it so we can investigate and take action according to our policies,” noting that Meta employees have access to internal content reporting tools, but that this system can only be used to remove posts that violate the company’s public Community Standards.

    Neither TikTok nor LinkedIn responded to questions about Iron Truth. X could not be reached for comment.

    A Palestinian woman cries in the garden of Al-Ahli Arab Hospital after it was hit in Gaza City, Gaza, on Oct. 18, 2023.
    Photo by Mustafa Hassona/Anadolu via Getty Images

    “Keep Bombing!”

    Though confusion and recrimination are natural byproducts of any armed conflict, Iron Truth has routinely used the fog of war as evidence of anti-Israeli disinformation.

    At the start of the project in the week after Hamas’s attack, for example, Iron Truth volunteers were encouraged to find and report posts expressing skepticism about claims of the mass decapitation of babies in an Israeli kibbutz. They quickly surfaced posts casting doubt on reports of “40 beheaded babies” during the Hamas attack, tagging them “fake news” and “disinformation” and sending them to platforms for removal. Among a list of LinkedIn content that Iron Truth told its Telegram followers it had passed along to the company was a post demanding evidence for the beheaded baby claim, categorized by the project as “Terror/Fake.”

    But the skepticism they were attacking proved warranted. While many of Hamas’s atrocities against Israelis on October 7 are indisputable, the Israeli government itself ultimately said it couldn’t verify the horrific claim about beheaded babies. Similarly, Iron Truth’s early efforts to take down “disinformation” about Israel bombing hospitals now contrast with weeks of well-documented airstrikes against multiple hospitals and the deaths of hundreds of doctors from Israeli bombs.

    On October 16, Iron Truth shared a list of Facebook and Instagram posts it claimed responsibility for removing, writing on Telegram, “Significant things reported today and deleted. Good job! Keep bombing! 💪”

    While most of the links no longer work, several are still active. One is a video of grievously wounded Palestinians in a hospital, including young children, with a caption accusing Israel of crimes against humanity. Another is a video from Mohamed El-Attar, a Canadian social media personality who posts under the name “That Muslim Guy.” In the post, shared the day after the Hamas attack, El-Attar argued the October 7 assault was not an act of terror, but of armed resistance to Israeli occupation. While this statement is no doubt inflammatory to many, particularly in Israel, Meta is supposed to allow for this sort of discussion, according to internal policy guidance previously reported by The Intercept. The internal language, which detailed the company’s Dangerous Individuals and Organizations policy, lists this sentence among examples of permitted speech: “The IRA were pushed towards violence by the brutal practices of the British government in Ireland.”

    While it’s possible for Meta posts to be deleted by moderators and later reinstated, Daniels, the spokesperson, disputed Iron Truth’s claim, saying links from the list that remain active had never been taken down in the first place. Daniels added that other links on the list had indeed been removed because they violated Meta policy but declined to comment on specific posts.

    Under their own rules, the major social platforms aren’t supposed to remove content simply because it is controversial. While content moderation trigger-happiness around mere mentions of designated terror organizations has led to undue censorship of Palestinian and other Middle Eastern users, Big Tech policies on misinformation are, on paper, much more conservative. Facebook, Instagram, TikTok, and YouTube, for example, only prohibit misinformation when it might cause physical harm, like snake oil cures for Covid-19, or posts meant to interfere with civic functions such as elections. None of the platforms targeted by Iron Truth prohibit merely “inflammatory” speech; indeed, such a policy would likely be the end of social media as we know it.

    Still, content moderation rules are known to be vaguely conceived and erratically enforced. Meta, for instance, says it categorically prohibits violent incitement, and touts various machine learning-based technologies to detect and remove such speech. Last month, however, The Intercept reported that the company had approved Facebook ads calling for the assassination of a prominent Palestinian rights advocate, along with explicit calls for the murder of civilians in Gaza. On Instagram, users leaving comments with Palestinian flag emojis have seen those comments inexplicably vanish. 7amleh, a Palestinian digital rights organization that formally partners with Meta on speech issues, has documented over 800 reports of undue social media censorship since the war’s start, according to its public database.

    Disinformation in the Eye of the Beholder

    “It’s really hard to identify disinformation,” Kaganovitch acknowledged in an interview, conceding that what’s considered a conspiracy today might be corroborated tomorrow, and pointing to a recent Haaretz report that an Israel Defense Forces helicopter may have inadvertently killed Israelis on October 7 in the course of firing at Hamas.

    Throughout October, Iron Truth provided a list of suggested keywords for volunteers in the project’s Telegram group to use when searching for content to report to the bot. Some of these terms, like “Kill Jewish” and “Kill Israelis,” pertained to content flagrantly against the rules of major social media platforms, which uniformly ban explicit violent incitement. Others reflected stances that might understandably offend Israeli social media users still reeling from the Hamas attack, like “Nazi flag israel.”

    But many other suggestions included terms commonly found in news coverage or general discussion of the war, particularly in reference to Israel’s brutal bombardment of Gaza. Some of those phrases — including “Israel bomb hospital”; “Israel bomb churches”; “Israel bomb humanitarian”; and “Israel committing genocide” — were suggested as disinformation keywords as the Israeli military was being credibly accused of doing those very things. While some allegations against both Hamas and the IDF were and continue to be bitterly disputed — notably who bombed the Al-Ahli Arab Hospital on October 17 — Iron Truth routinely treated contested claims as “fake news,” siding against the sort of analysis or discussion often necessary to reach the truth.

    “This post must be taken down, he is a really annoying liar and the amount of exposure he has is crazy.”

    Even the words “Israel lied” were suggested to Iron Truth volunteers on the grounds that they could be used in “false posts.” On October 16, two days after an Israeli airstrike killed 70 Palestinians evacuating from northern Gaza, one Telegram group member shared a TikTok containing imagery of one of the bombed convoys. “This post must be taken down, he is a really annoying liar and the amount of exposure he has is crazy,” the member added. A minute later, the Iron Truth administrator account encouraged this member to report the post to the Iron Truth bot.

    Although The Intercept is unable to see which links have been submitted to the bot, Telegram transcripts show the group’s administrator frequently encouraged users to flag posts accusing Israel of genocide or other war crimes. When a chat member shared a link to an Instagram post arguing “It has BEEN a genocide since the Nakba in 1948 when Palestinians were forcibly removed from their land by Israel with Britain’s support and it has continued for the past 75 years with US tax payer dollars,” the group administrator encouraged them to report the post to the bot three minutes later. Links to similar allegations of Israeli war crimes from figures such as popular Twitch streamer Hasan Piker; Colombian President Gustavo Petro; psychologist Gabor Maté; and a variety of obscure, ordinary social media users have received the same treatment.

    Iron Truth has acknowledged its alleged back channel has limits: “It’s not immediate unfortunately, things go through a chain of people on the way,” Kaganovitch explained to one Telegram group member who complained a post they’d reported was still online. “There are companies that implement faster and there are companies that work more slowly. There is internal pressure from the Israelis in the big companies to speed up the reports and removal of the content. We are in constant contact with them 24/7.”

    Since the war began, social media users in Gaza and beyond have complained that content has been censored without any clear violation of a given company’s policies, a well-documented phenomenon long before the current conflict. But Brooking, of the Atlantic Council, cautioned that it can be difficult to determine the process that led to the removal of a given social media post. “There are almost certainly people from tech companies who are receptive to and will work with a civil society organization like this,” he said. “But there’s a considerable gulf between claiming those tech company contacts and having a major influence on tech company decision making.”

    Iron Truth has found targets outside social media too. On November 27, one volunteer shared a link to NoThanks, an Android app that helps users boycott companies related to Israel. The Iron Truth administrator account quickly noted that the complaint had been forwarded to Google. Days later, Google pulled NoThanks from its app store, though it was later reinstated.

    The group has also gone after efforts to fundraise for Gaza. “These cuties are raising money,” said one volunteer, sharing a link to the Instagram account of Medical Aid for Palestinians. Again, the Iron Truth admin quickly followed up, saying the post had been “transferred” accordingly.

    But Kaganovitch says his thinking around the topic of Israeli genocide has shifted. “I changed my thoughts a bit during the war,” he explained. Though he doesn’t agree that Israel is committing a genocide in Gaza, where the death toll has exceeded 20,000, according to the Gaza Health Ministry, he understands how others might. “The genocide, I stopped reporting it in about the third week [of the war].”

    Several weeks after its launch, Iron Truth shared an infographic in its Telegram channel asking its followers not to pass along posts that were simply anti-Zionist. But OCT7, an Israeli group that “monitors the social web in real-time … and guides digital warriors,” lists Iron Truth as one of its partner organizations, alongside the Israeli Ministry for Diaspora Affairs, and cites “anti-Zionist bias” as part of the “challenge” it’s “battling against.”

    Despite Iron Truth’s occasional attempts to rein in its volunteers and focus them on finding posts that might actually violate platform rules, getting everyone on board has proven difficult. Chat transcripts show many Iron Truth volunteers conflating Palestinian advocacy with material support for Hamas or characterizing news coverage as “misinformation” or “disinformation,” perennially vague terms whose meaning is further diluted in times of war and crisis.

    “By the way, it would not be bad to go through the profiles of [United Nations] employees, the majority are local there and they are all supporters of terrorists,” recommended one follower in October. “Friends, report a profile of someone who is raising funds for Gaza!” said another Telegram group member, linking to the Instagram account of a New York-based beauty influencer. “Report this profile, it’s someone I met on a trip and it turns out she’s completely pro-Palestinian!” the same user added later that day. Social media accounts of Palestinian journalist Yara Eid; Palestinian photojournalist Motaz Azaiza; and many others involved in Palestinian human rights advocacy were similarly flagged by Iron Truth volunteers for allegedly spreading “false information.”

    Iron Truth has at times struggled with its own followers. When one proposed reporting a link about Israeli airstrikes at the Rafah border crossing between Gaza and Egypt, the administrator account pointed out that the IDF had indeed conducted the attacks, urging the group: “Let’s focus on disinformation, we are not fighting media organizations.” On another occasion, the administrator discouraged a user from reporting a page belonging to a news organization: “What’s the problem with that?” the administrator asked, noting that the outlet was “not pro-Israel, but is there fake news?”

    But Iron Truth’s standards often seem muddled or contradictory. When one volunteer suggested going after B’Tselem, an Israeli human rights organization that advocates against the country’s military occupation and broader repression of Palestinians, the administrator account replied: “With all due respect, B’Tselem does publish pro-Palestinian content and this was also reported to us and passed on to the appropriate person. But B’Tselem is not Hamas bots or terrorist supporters, we have tens of thousands of posts to deal with.”

    Israeli flags fly in front of the Knesset, the unicameral parliament of the state of Israel, on Sept. 11, 2022, in Jerusalem.
    Photo: Christophe Gateau/AP

    Friends in High Places

    Though Iron Truth is largely a byproduct of Israel’s thriving tech economy — the country is home to many regional offices of American tech giants — it also claims support from the Israeli government.

    The group’s founder says that Iron Truth leadership have met with Haim Wismonsky, director of the controversial Cyber Unit of the Israeli State Attorney’s Office. While the Cyber Unit purports to combat terrorism and miscellaneous cybercrime, critics say it’s used to censor unwanted criticism and Palestinian perspectives, relaying thousands upon thousands of content takedown demands. American Big Tech has proven largely willing to play ball with these demands: A 2018 report from the Israeli Ministry of Justice claimed a 90 percent compliance rate across social media platforms.

    Following an in-person presentation to the Cyber Unit, Iron Truth’s organizers have remained in contact, and sometimes forward the office links they need help removing, Kaganovitch said. “We showed them the presentation, they asked us also to monitor Reddit and Discord, but Reddit is not really popular here in Israel, so we focus on the big platforms right now.”

    Wismonsky did not respond to a request for comment.

    Kaganovitch noted that Bezek, the former Knesset member, “helps us with diplomatic and government relationships.” In an interview, Bezek confirmed her role and corroborated the group’s claims, saying that while Iron Truth had contacts with “many other employees” at social media firms, she is not involved in that aspect of the group’s work, adding, “I took on myself to be more like the legislation and legal connection.”

    “What we’re doing on a daily basis is that we have a few groups of people who have social media profiles in different medias — LinkedIn, X, Meta, etc. — and if one of us is finding content that is antisemitic or content that is hate claims against Israel or against Jews, we are informing the other people in the group, and few people at the same time are reporting to the tech companies,” Bezek explained.

    Bezek’s governmental outreach has so far included organizing meetings with Israel’s Ministry of Foreign Affairs and “European ambassadors in Israel.” Bezek declined to name the Israeli politicians or European diplomatic personnel involved because their communications are ongoing. These meetings have included allegations of foreign, state-sponsored “antisemitic campaigns and anti-Israeli campaigns,” which Bezek says Iron Truth is collecting evidence about in the hope of pressuring the United Nations to act.

    Iron Truth has also collaborated with Digital Dome, a similar volunteer effort spearheaded by the Israeli anti-disinformation organization FakeReporter, which helps coordinate the mass reporting of unwanted social media content. Israeli American investment fund J-Ventures, which has reportedly worked directly with the IDF to advance Israeli military interests, has promoted both Iron Truth and Digital Dome.

    FakeReporter did not respond to a request for comment.

    While most counter-misinformation efforts betray some geopolitical loyalty, Iron Truth is openly nationalistic. An October 28 write-up in the popular Israeli news website Ynet — “Want to Help With Public Diplomacy? This is How You Start”— cited the Telegram bot as an example of how ordinary Israelis could help their country, noting: “In the absence of a functioning Information Ministry, Israeli men and women hope to be able to influence even a little bit the sounding board on the net.” A mention in the Israeli financial news website BizPortal described Iron Truth as fighting “false and inciting content against Israel.”

    Iron Truth is “a powerful reminder that it’s still people who run these companies at the end of the day,” said Brooking. “I think it’s natural to try to create these coordinated reporting groups when you feel that your country is at war or in danger, and it’s natural to use every tool at your disposal, including the language of disinformation or fact checking, to try to remove as much content as possible if you think it’s harmful to you or people you love.”

    The real risk, Brooking said, lies not in the back channel, but in the extent to which companies that control the speech of billions around the world are receptive to insiders arbitrarily policing expression. “If it’s elevating content for review that gets around trust and safety teams, standing policy, policy [into] which these companies put a lot of work,” he said, “then that’s a problem.”

    The post Israeli Group Claims It’s Using Big Tech Back Channels to Censor “Inflammatory” Wartime Content appeared first on The Intercept.

    This post was originally published on The Intercept.

  • Apple Pay, a contactless payment service available in Saddleworth and other parts of the UK, has rolled out a new open banking feature. And it’s special in many ways. For instance, it allows users to access their banking information from their iPhones instantly and on the go.

    So, if you’ve been using Apple Pay to do virtually everything, from paying for groceries to funding online casino accounts accessible on mobiles, the new open banking feature makes your experience even easier.

    What is open banking?

    Open banking is a technology that uses application programming interfaces (APIs) to give systems like Apple Pay access to financial data from banks and other financial institutions. So, when you hear someone mention open banking together with Apple Pay, it means that the relevant financial institutions have opened up their data for the latter to access, use, and share as per regulations.
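    To make that concrete, here is a minimal Python sketch of what consuming such an API can look like on the client side. The payload shape loosely follows the account-balances responses defined in the UK Open Banking Read/Write API standard, but the sample data, account ID, and helper function here are hypothetical, offline illustrations, not a real bank integration (which would also require the bank’s authorization flow).

    ```python
    import json

    # Illustrative sample only: a balance payload shaped roughly like a
    # GET /accounts/{AccountId}/balances response under the UK Open
    # Banking standard. The values are invented, not from any real bank.
    sample_response = json.dumps({
        "Data": {
            "Balance": [
                {
                    "AccountId": "22289",
                    "Amount": {"Amount": "1230.00", "Currency": "GBP"},
                    "CreditDebitIndicator": "Credit",
                    "Type": "InterimAvailable",
                }
            ]
        }
    })

    def available_balance(raw: str) -> str:
        """Return a display string for the first balance in the payload."""
        data = json.loads(raw)
        bal = data["Data"]["Balance"][0]
        amount = bal["Amount"]
        return f'{amount["Currency"]} {amount["Amount"]} ({bal["Type"]})'

    print(available_balance(sample_response))  # GBP 1230.00 (InterimAvailable)
    ```

    A wallet app like Apple Pay would fetch a payload like this over an authorized, bank-consented API connection and render it in the interface, rather than parsing a hard-coded string as this sketch does.
    
    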

    Earlier this year, Apple announced the soft launch of an iPhone Wallet app integration that lets users in the UK check their bank transaction histories and view updated account balances. That was on 27 September 2023. Roughly two weeks later, the company took another step by introducing a fuller open banking feature that few could have imagined a few years back.

    What can Apple’s new open banking feature do?

    Apple’s new open banking feature lets you connect valid credit and debit cards to Apple Pay and do several things. First, it enables you to check your debit card balance. How cool is that? Say goodbye to trips to an ATM, the risk of getting mugged, and annoying fees, all just to check whether you’re running dry or have enough in your bank account.

    Moreover, with the new feature, you can easily check all your transactions, from deposits and withdrawals to payments. That matters because it’s the key to staying on top of your spending habits and cutting back if necessary. It also helps you spend only what you have and avoid overdrafts, which often attract significant fees.

    Equally significant, by allowing you to check transactions, this new Apple Pay feature puts you in a better position to identify suspicious activity. That is especially useful for parents whose kids use their credit cards without permission.

    And when left unchecked, such incidents can lead to hefty financial repercussions. But you don’t have to worry too much now that Apple Pay has rolled out this next-level open banking feature.

    What you need to enjoy the new Apple Pay feature

    Apple’s new open banking feature is only accessible to people who:

    • Own an iPhone running iOS 17.1 or later. Remember, Apple Pay only works on iPhones, so you can’t use it if you have an Android device. Also, the OS requirement is non-negotiable: to use this integration, you must have an iPhone XS or a newer model compatible with iOS 17.
    • Have eligible cards. Apple has partnered with the Royal Bank of Scotland, Barclays, Lloyds, NatWest, Monzo, Halifax, First Direct, Barclaycard, M&S Bank, and HSBC, so customers of these banks can enjoy the new feature. If your cards aren’t affiliated with any of these financial institutions, you may have to wait until Apple Pay starts working with your provider or switch to a supported bank.

    How to access Apple Pay’s new feature

    If you meet all stipulated requirements, follow these steps to access the Apple Pay feature.

    • Step 1: Go to Apple Wallet, where you store the debit and credit cards you use with Apple Pay, and select your card. If you don’t have the app, download it from the App Store.
    • Step 2: Once you’ve selected a valid card, the app will take you to your financial provider’s website or app, where you’ll need to authenticate your accounts. After you’ve done so, you’ll receive instructions to connect them.

    But before the full integration, you must consent to the UK Open Banking requirements. Be sure to read and understand everything before proceeding.

    Final thoughts

    Using Apple Pay in the UK has a plethora of benefits. First, it’s safer than a conventional debit or credit card, which you have to carry around and which can be stolen by malicious actors. Not to mention, Apple Pay boosts your security and safety by requiring a passcode, Face ID, or Touch ID to initiate every purchase.

    And now, using this payment service has one more perk. Thanks to the new open banking feature, rolled out in mid-November 2023, you can now conveniently check your bank balance, transaction history, and payments made to different entities, including your favourite Saddleworth pub, from your iPhone. What a day to be alive!

    Featured image via Apple Support – YouTube

    By The Canary

    This post was originally published on Canary.

  • It’s an election year in Great Britain, and you know what that means – shenanigans. In 2019, “dirty tricks” perpetrated by the Tory Party included a fake punch, several fake fact-checking websites, and pretty much everything Boris Johnson said (or did (or secretly thought)).

    The 2024 general election might not happen until later this year, and yet the dodgy tactics are already on display. The latest trick was highlighted with a community note on the website we’re choosing to refer to as Twitter:

    “Classic data-grabbing”

    IT law professor Paul Bernal described the Tory ploy as follows:

    Other people had similar things to say:

    Responses to Bernal’s tweet highlighted a sad truth – namely that the Labour Party is doing something similarly slimy:

    The tactic is one which seems to have been deployed for some time, and by both parties:

    Rock-bottom trust in an election year

    In December 2023, polling company Ipsos reported:

    Trust in politicians reaches its lowest score in 40 years

    Ouch. It elaborated further, noting:

    Just nine per cent of the British public say they trust politicians to tell the truth, down from twelve per cent in 2022. This makes them the least trusted profession in Britain*. Although trust in politicians is usually low, this year’s score is the lowest for politicians since the first wave of the survey in 1983; aside from 2022, the previous low was a score of 13%, which occurred in 2009 following the expenses scandal.

    Could hoodwinking working people be making the problem worse? It certainly can’t be helping.

    If you yourself number among the 9% who still trust British politicians, I’ve got a data scam I think you’d be interested in signing up to.

    * To give you the full picture, we should note that journalists also number in the top 5 least-trusted professions. Having observed the mainstream British press for several years, we’re fully on board with this assessment.

    Featured image via Twitter

    By The Canary

    This post was originally published on Canary.

  • A forthcoming drone made by Autel, a Chinese electronics manufacturer and drone-maker, is being marketed using images of the unmanned aerial vehicle carrying a payload of what appears to be explosive shells. The images were discovered just two months after the company condemned the military use of its flying robots.

    Two separate online retail preorder listings for the $52,000 Autel Titan drone, with a cargo capacity of 22 pounds and an hour of flight time, advertised a surprising feature: the ability to carry (and presumably fire) weapons.

    In response to concerns from China-hawk lawmakers in the U.S. over Autel’s alleged connections to the Chinese government and its “potentially supporting Russia’s ongoing invasion of Ukraine,” according to a congressional inquiry into the firm, Autel issued a public statement disowning battlefield use of its drones: “Autel Robotics strongly opposes the use of drone products for military purposes or any other activities that infringe upon human rights.” A month later, it issued a second, similarly worded denial: “Autel Robotics is solely dedicated to the development and production of civilian drones. Our products are explicitly designed for civilian use and are not intended for military purposes.”

    It was surprising, then, when Spanish engineer and drone enthusiast Konrad Iturbe discovered a listing for the Titan drone armed to the teeth on OBDPRICE.com, an authorized reseller of Autel products. While most of the product images are anodyne promotional photos showing the drone from various angles, including carrying a generic cargo container, three show a very different payload: what appears to be a cluster of four explosive shells tucked underneath, a configuration similar to those seen in bomb-dropping drones deployed in Ukraine and elsewhere. Samuel Bendett, an analyst with the Center for Naval Analyses, told The Intercept that the shells resembled mortar rounds. Arms analyst Patrick Senft said the ordnance shown might actually be toy replicas, as they “don’t resemble any munitions I’ve seen deployed by UAV.”

    Contacted by email, an OBDPRICE representative who identified themselves only as “Alex” told The Intercept: “The drone products we sell cannot be used for military purposes.” When asked why the site was then depicting the drone product in question carrying camouflage-patterned explosive shells, they wrote: “You may have misunderstood, those are some lighting devices that help our users illuminate themselves at night.” The site has not responded to further queries, but shortly after being contacted by The Intercept, the mortar-carrying images were deleted.

    A “heavy lift” drone made by Autel, a Chinese electronics manufacturer, listed for resale on eBay on Jan. 5, 2023, showing Autel’s renderings of the drone carrying a payload of camouflage-clad explosives.
    Screenshot: The Intercept

    Iturbe also identified a separate listing from an Autel storefront on eBay using the very same three images of an armed Titan drone. When asked about the images and whether the drone is compatible with other weapons systems, the account replied via eBay message: “The bombs shown in the listing for this drone is just for display. Pls note that Titan comes with a standard load of 4 kilograms and a maximum load up to 10 kilograms.”

    The images bear a striking resemblance to ordnance-carrying drones that have been widely used during the Russian invasion of Ukraine, where their low cost and sophisticated cameras make them ideal for both reconnaissance and improvised bombing runs. Autel’s drones in particular have proven popular on both sides of the conflict: A March 2023 New York Times report found that “nearly 70 Chinese exporters sold 26 distinct brands of Chinese drones to Russia since the invasion. The second-largest brand sold was Autel, a Chinese drone maker with subsidiaries in the United States, Germany and Italy.” A December 2022 report from the Washington Post, meanwhile, cited Autel’s EVO II model drone as particularly popular among volunteer efforts to source drones for the Ukrainian war effort.

    Last summer, researchers who’ve closely followed the use of drones in the Russia–Ukraine war documented an effort by two Russian nationals, self-chronicled via Telegram, to obtain Chinese drones for the country’s ongoing war in Ukraine. Their visit to Shenzhen resulted in a meeting at an Autel facility and the procurement, the individuals claimed, of military-purpose drones. 

    Autel’s New York-based American subsidiary did not respond to a request for comment.

    The post Drones From Company That “Strongly Opposes” Military Use Marketed With Bombs Attached appeared first on The Intercept.

    This post was originally published on The Intercept.

  • Last year, climate change came into sharp relief for much of the world: The planet experienced its hottest 12-month period in 125,000 years. Flooding events inundated communities from California to East Africa to India. A heat wave in South America caused temperatures to spike above 100 degrees Fahrenheit in the middle of winter, and a heat dome across much of the southern United States spurred a 31-day streak in Phoenix of 110 degree-plus temperatures. The formation of an El Niño, the natural phenomenon that raises temperatures globally, intensified extreme weather already strengthened by climate change. The U.S. alone counted 25 billion-dollar weather disasters in 2023 — more than any other year. 

    Yet this devastation was met by some of the largest gains in climate action to date. World leaders agreed for the first time to “transition away” from oil and gas at the annual United Nations climate summit, hosted last month by the United Arab Emirates. Funds and incentives from President Joe Biden’s signature climate law, the Inflation Reduction Act, started to roll out to companies and municipalities. Electric vehicle sales skyrocketed, thousands of young people signed up for the first-ever American Climate Corps, and companies agreed to pay billions of dollars to remove harmful chemicals called PFAS from drinking water supplies.

As we enter a new year, we asked Grist reporters what big stories they’re watching on their beats: 24 predictions for 2024. Their forecasts depict a world on the cusp of change in regard to climate — both good and bad, and often in tandem. Here’s what we’re keeping an eye on, from hard-won international financial commitments, to battles over mining in-demand minerals like lithium, to the expansion of renewable energy.


    Protesters hold placards during a climate march in New York City last September. Photo by Ryan Rahman/Pacific Press/LightRocket via Getty Images

    Politics & Policy

    A new climate corps will turn young people’s anxiety into action

The American Climate Corps will officially kick off in the summer of 2024, sending 20,000 18- to 26-year-olds across the country to install solar projects, mitigate wildfire risk, and make homes more energy-efficient. President Biden’s New Deal-inspired program is modeled after Franklin D. Roosevelt’s Civilian Conservation Corps and attracted 100,000 applicants. As it rolls out, the climate corps will continue to draw criticism from the left for low wages and ageism, and from the right for being a “made-up government work program … for young liberal activists.” Yet the program will remain popular with the public, bolstering towns’ resilience to weather disasters and training thousands of young people to help fill the country’s shortage of skilled workers needed for decarbonization.

    Kate Yoder Staff writer examining the intersections of climate, language, history, culture, and accountability

    Despite rising temperatures, climate change takes a backseat during the 2024 election

    Although more than a decade of surveys and polls show that a growing proportion of Americans are concerned about climate change, it has never been a defining issue in a general election — and will likely remain that way in 2024, at least on the main stage. Put simply, there are too many immediate concerns that will dominate the campaign trail as President Joe Biden faces off against the Republican nominee — most likely former President Donald Trump: Russia’s ongoing war in Ukraine, Israel’s war against Hamas, the overturning of Roe v. Wade and the fight for abortion rights, new charges against Biden’s son, Hunter, and, of course, the numerous criminal charges against Trump. Biden may herald his signature climate law, the Inflation Reduction Act, in his own messaging, but climate change is unlikely to cross party lines.

    Zoya Teirstein Staff writer covering politics and the intersection between climate change and health

    A climate reparations fund gets off the ground

    During COP28, the U.N. climate conference that took place in Dubai last year, countries agreed to set up a climate reparations fund on an interim basis at the World Bank. The fund was a longtime priority of developing countries and climate justice advocates who argued that nations that had contributed negligibly to a warming planet were facing the consequences. This year, the World Bank is expected to set up the fund and begin disbursing money to poor nations. Board members will be selected, an executive director will be appointed, decisions about how countries can access the money will be made, and money will begin flowing to those in need. During COP28, wealthy countries chipped in more than $650 million to the fund. More money will also fill the coffers this year.

    Naveena Sadasivam Senior staff writer covering environmental justice and accountability

    ‘Greenhushing’ spreads as companies seek to dodge lawsuits

    Just a few years ago, splashy corporate climate promises were everywhere. Even oil companies promised to cut their emissions. But there won’t be as many misleading advertisements touting companies’ climate progress in 2024. Amid new regulations against false environmental marketing and a pileup of greenwashing lawsuits, more corporations will join in hiding their climate commitments to avoid scrutiny. This trend of “greenhushing” ramped up in 2023, when 1 in 5 companies declined to publicly release their sustainability targets, a threefold increase from the prior year. While this makes it harder to see what companies are doing, California’s new “anti-greenwashing” law, which went into effect on January 1, will tackle the transparency problem by requiring companies to disclose their carbon emissions.

    Kate Yoder Staff writer examining the intersections of climate, language, history, culture, and accountability

    A global treaty to end plastic pollution faces delays

    Delegates from around the world have been working to finalize a U.N. treaty by the end of 2024 that will “end plastic pollution.” They’ve had three negotiating sessions so far, and two more are scheduled for later this year. Despite signs of progress, petrochemical industry interests have resisted the most ambitious proposals to limit plastic production — they’d prefer a treaty focused on cleaning up plastic litter and improving plastic recycling rates. After countries failed to make significant headway at the most recent round of talks, it’s now possible that an extended deadline will be needed to deliver the final treaty. To some involved in the talks, that’s OK if it’ll mean a stronger agreement. But the pressure is still on, as every year without a treaty means more unchecked plastic pollution.  

    Joseph Winters Staff writer covering plastics, pollution, and the circular economy

    Employees of NY State Solar, a residential and commercial photovoltaic-systems company, install solar panels on a roof in Massapequa, New York, in 2022. AP Photo/John Minchillo

    Energy

    Expect a deluge of new household electrification and efficiency rebates

When the Inflation Reduction Act passed in 2022, some decarbonization incentives were quickly accessible — such as tax credits for solar and heat pump installation — but others have taken longer to kick in. The wait, however, is almost over, and 2024 is set to see a slew of new, or expanded, opportunities come online. The Inflation Reduction Act earmarked $8.8 billion for residential electrification and energy-use reduction, especially in low-income households. Think things like induction cooktops and energy-efficient clothes dryers, which don’t currently have federally funded rebates. The Department of Energy is in the process of allocating funding to participating states, which will be in charge of getting the money into Americans’ pockets.

    Tik Root Senior staff writer focusing on the clean energy transition

    A push for public power takes root in communities nationwide

    Across the country, close to a dozen communities are exploring ways to replace their investor-owned electric utilities with publicly owned ones. Advocates say they want to lower electricity costs, improve reliability, and speed up a clean energy transition. While a referendum in Maine to create a statewide publicly owned utility failed this past November, supporters elsewhere are just getting started. Next year, a group in San Diego could succeed in getting a vote for a municipal utility on the ballot. Decorah, Iowa, is contemplating a similar vote, and ongoing efforts could gain traction in San Francisco, the South San Joaquin Irrigation District in California, New Mexico, and Rochester, New York.

    Akielly Hu News and politics reporting fellow

Puerto Rico becomes a U.S. leader in residential-solar energy adoption

    While the nationwide rate of residential-solar installations is expected to shrink by more than 10 percent next year, due to interest rates and changes in California’s net-metering rules, installations show no sign of slowing down in Puerto Rico. The archipelago of 1.2 million households already installs 3,400 residential rooftop solar and battery-storage systems per month. In spring 2024, the Energy Department will begin deploying $440 million in residential-solar funding, which they say will be enough for about 30,000 homes. Analysts predict that by 2030, one-quarter of Puerto Rico households will have photovoltaic systems, though that depends in part on whether Puerto Rico passes a pending bill that would protect net metering until then.

    Gabriela Aoun Angueira Climate solutions reporter who helms The Beacon, Grist’s solutions-oriented newsletter

    Workers walk the assembly line of Model Y electric vehicles at Tesla’s factory in Berlin in 2022. Patrick Pleul/picture alliance via Getty Images

    Business & Technology

    Changes to the federal tax credit will improve EV access for lower-income drivers

    As of January 1, consumers can redeem the Inflation Reduction Act’s clean-vehicle tax credit directly at car dealerships. Last year, the $7,500 incentive for new electric vehicles and $4,000 for previously owned ones were only available as a credit, meaning that car buyers had to wait until they filed their taxes to get any benefit. The point-of-sale rebate will make getting a clean vehicle more accessible to buyers who can’t afford a hefty down payment, or whose income is too low to owe taxes. But their model options will also shrink — the Treasury Department just proposed rules disqualifying cars with battery components or minerals that come from countries deemed hostile to the U.S.

    Gabriela Aoun Angueira Climate solutions reporter who helms The Beacon, Grist’s solutions-oriented newsletter

    Carbon-capture tech will continue to boom (and be controversial)

In some ways, it was a mixed year for carbon capture. While the world’s largest carbon-capture plant broke ground in Texas, the builders of a major carbon dioxide pipeline — which would be used to transport captured emissions to their final destination underground — canceled the project in the face of regulatory pushback. Climate activists have also long been skeptical of carbon capture as an industry ruse to keep burning fossil fuels. Overall, though, the carbon-capture market is surging on the tailwinds of largely favorable government policies in recent years. The use of the technology is also spreading beyond traditional sectors, such as natural gas facilities, into other industrial arenas, including cement, steel, and iron manufacturing. Next year will bring some continued hiccups but, overwhelmingly, continued growth.

    Tik Root Senior staff writer focusing on the clean energy transition

    Republicans ramp up their war on “woke” ESG investing

    An ongoing Republican crusade against ESG investing — shorthand for the environmental, social, and governance criteria investors use to evaluate companies — could end up costing retirees and insurers millions in lost returns next year. GOP lawmakers claim that considering climate risks while making investments imposes “woke” values and limits investment returns. Yet anti-ESG laws passed in Kansas, Oklahoma, and Texas last year were estimated to have cost taxpayers up to hundreds of millions of dollars. That’s partly because most Wall Street banks and businesses still employ ESG strategies. The backlash could continue through next year’s election — presidential candidates Ron DeSantis and Vivek Ramaswamy have both taken strong anti-ESG positions.

    Akielly Hu News and politics reporting fellow

    Unions expand their fight for electric vehicle worker protections

    United Auto Workers recently won provisions for electric vehicle employees after a sweeping strike at Detroit’s Big Three carmakers — Ford, Stellantis, and General Motors. Now, the union has launched organizing campaigns at 13 non-union shops, including at EV leaders like Tesla and at other companies just getting into the EV space, such as Volkswagen and Hyundai. Next year, these campaigns will begin to go public, with resulting walkouts, negotiations, and expected union-busting tactics. Such efforts have failed in the past, and some companies have announced wage increases to entice workers away from a potential union drive, but UAW has already announced thousands of new member sign-ups and filed labor grievances against several companies, signaling a hard-headed approach that may win new contracts to protect workers as the auto industry increasingly shifts toward EVs.

    Katie Myers Climate solutions reporting fellow

    A ConocoPhillips refinery abuts a residential area in the Wilmington neighborhood of Los Angeles in 2022. Luis Sinco / Los Angeles Times via Getty Images

    Environmental Justice

    The EPA will back away from using civil rights law to protect residents

In 2020, a federal judge ordered the Environmental Protection Agency to start investigating the complaints it receives under Title VI of the Civil Rights Act, which prohibits discrimination on the basis of race or national origin in any program that gets funding from the federal government. Since then, communities around the country have attempted to use the law to achieve environmental justice in their backyards. But after the agency dropped its highest-profile civil rights case in Louisiana’s “Cancer Alley” following a lawsuit from the state attorney general, advocates worry that the legal avenue won’t fulfill its promise. In 2024, it’s likely that the EPA will pursue Title VI complaints in states with cooperative environmental agencies, but shy away from pressuring industry-friendly states like Louisiana and Texas to make big changes based on the law.

    Lylla Younes Senior staff writer covering chemical pollution, regulation, and frontline communities

    Additional testing will reveal the true scope of “forever chemical” pollution

    Major chemical manufacturers like 3M, DuPont, and Chemours were forced to strike multibillion-dollar settlements last year with coalitions of states, cities, and townships over PFAS — the deadly “forever chemicals” these companies knowingly spewed into the environment for decades. 2024 will be a big year for determining just how pervasive this problem is in U.S. water supplies. New hotspots are likely to emerge as the EPA conducts additional testing across the country, particularly in areas where little data on the chemicals currently exists. New fights over forever chemicals will also unfold in places like Minnesota, where lawmakers have introduced a bill that would require 3M and other large chemical corporations to pay for medical testing for PFAS-exposed communities, and in North Carolina, where the United Nations just declared PFAS pollution a human rights violation.

    Zoya Teirstein Staff writer covering politics and the intersection between climate change and health

    A booming liquefied natural gas industry goes bust … maybe

    The liquefied natural gas industry is booming on the U.S. Gulf Coast as companies export huge amounts of fracked gas to Europe and Asia, but the buildout of liquefaction facilities in the South has stumbled in recent months. A federal court revoked one facility’s permit in Texas, and the federal Department of Energy denied another company seeking an extension to build a facility in Louisiana. The coming year will be a big test for the nascent business: If courts and regulators delay more of these expensive projects, the companies behind them may abandon them and instead try building smaller, cheaper terminals elsewhere in the United States or even offshore.

    Jake Bittle Staff writer focusing on climate impacts and adaptation

    Polluting countries could be legally liable to vulnerable ones

    At COP28, negotiators from small island states sought to hold larger countries financially accountable for their outsize role in fueling carbon emissions. In 2024, that issue could be decided in international courts: As soon as March, the International Court of Justice will weigh arguments regarding countries’ obligations under international law to protect current and future generations from the harmful effects of climate change. The case brought by Vanuatu raises the question of how much big polluters owe island nations, with Vanuatu and other Pacific island communities particularly affected by rising sea levels and worsening storms.

    Anita Hofschneider Senior staff writer focusing on Indigenous affairs

    An aerial view of Thacker Pass in northern Nevada. A proposed lithium mine on the site has drawn impassioned protest from the local Indigenous population, ranchers, and environmentalists. Carolyn Cole / Los Angeles Times via Getty Images

    Land Use

    Mining for rare earths takes off, as new discoveries and investments are made

Discoveries of major new deposits of rare earth minerals will continue to explode in the western and southeastern U.S. — places like the Salton Sea in California and a lithium belt in North Carolina — as well as in Alaska. These developments, alongside incentives from the Inflation Reduction Act, will bolster domestic mining and renewable energy industries in 2024. Many of these discoveries are being made in coalfields and oil fields by fossil fuel companies looking to diversify their portfolios. In response, expect a boom in efforts to reform laws governing the poorly regulated mining industry, as well as community-driven activism against projects like the Thacker Pass lithium mine in Nevada.

    Katie Myers Climate solutions reporting fellow

    Congress doles out funds for unproven “climate-smart” agriculture

    2024 could be the biggest year yet for “climate-smart” agriculture. Billions of dollars that Congress earmarked a year and a half ago in the Inflation Reduction Act are starting to flow to farmers planting trees and cover crops that sequester carbon. Lawmakers will have the chance to carve out even more funds in the farm bill, the sprawling legislative package that will be up for renewal next year. But climate advocates won’t be satisfied with all of the results: The fight over what counts as “climate smart” will heat up as subsidies go to tools like methane digesters, which some advocates blame for propping up big polluters.

    Max Graham Food and agriculture reporting fellow

    More renewable energy comes to public lands

The Bureau of Land Management controls a tenth of the land base in the U.S. — some 245 million acres. The Biden administration has been trying to utilize that public land for renewable energy projects and infrastructure, with the Department of the Interior recently announcing 15 such initiatives. The department is also aiming to reduce fees to promote solar and wind development. These efforts have run into roadblocks in the past, including from Indigenous nations. For example, the Tohono O’odham Nation and San Carlos Apache Tribe challenged a transmission line in southern Arizona because of its potential to harm cultural sites. But with the goal of permitting 25 gigawatts of renewable energy on BLM land by 2025, expect the federal government to continue pushing its buildout next year.

    Tik Root Senior staff writer focusing on the clean energy transition

    Residents in Houston look out at flooding from Hurricane Harvey in August 2017. Scott Olson/Getty Images

    Climate Impacts

    El Niño peaks, bringing a preview of life in the 2030s

Last year brought the onset of the latest cycle of El Niño, a natural phenomenon that spurs the formation of a band of warm water in the Pacific Ocean and fuels above-average temperatures globally. In fact, the cycle has already nudged the world over 1.5 degrees Celsius (2.7 degrees Fahrenheit) of warming for the first time.

Because these systems tend to peak from December to April, the worst impacts will likely hit in the first half of 2024. Scientists predict the world will experience its hottest summer on record, giving us a preview of what life will look like in the 2030s. El Niño has already spurred an onslaught of knock-on effects, including heat waves in South America, flooding in East Africa, and infectious disease outbreaks in the Americas and the Caribbean. This year, researchers expect El Niño will lead to an unusually strong hurricane season in the Pacific, impact agricultural production and food security, fuel more outbreaks of vector-borne diseases, and depress the global economy. In some places, this is already happening.

    Zoya Teirstein Staff writer covering politics and the intersection between climate change and health

    To migrate or not: Pacific islanders weigh their options

    Last year, a proposed treaty between Australia and Tuvalu made international headlines for a unique provision: migration rights for climate refugees from the Pacific island country, which is at particular risk of rising seas. Now, Tuvalu’s general election, set for later this month, may serve as a de facto referendum on the agreement. But the country’s voters aren’t the only ones weighing their options as their islands slowly sink. The coming year will bring more attention to the plight of Pacific Islanders who are confronting a future of forced migration and grappling with the question of where their communities will go, what rights they’ll have, and how their sovereignty will persist.

    Anita Hofschneider Senior staff writer focusing on Indigenous affairs

    Insurers flee more disaster-prone states

    California. Louisiana. Florida. Who’s next? The insurance markets in these hurricane- and fire-prone states have descended into turmoil over the past few years as private companies drop policyholders and flee local markets after expensive disasters. State regulators are stepping in to stop this downward spiral, but stable insurance markets will mean higher prices for homeowners, especially in places like low-lying Miami, where the average insurance premium is already around $300 a month. The next year will see the same kind of insurance crisis pop up in other states such as Hawaiʻi, Oregon, and South Carolina, as private carriers try to stem their climate-induced losses.

    Jake Bittle Staff writer focusing on climate impacts and adaptation

    Despite barriers, workplace heat standards make slow progress

    Earlier this year, Miami-Dade County in Florida — where the region’s humidity makes outdoor workers especially vulnerable to extreme heat — was poised to pass one of the most comprehensive and thoughtful workplace heat standards in the country. Instead, county commissioners bowed to pressure from industry groups, and the vote was deferred. On the national level, OSHA, the agency responsible for workplace safety, has been in the process of creating a federal heat standard for over two years. That work is far from over, and it seems unlikely that the agency will announce a finalized rule next year, despite record-breaking heat. That leaves states and municipalities to lead the way in 2024 for worker-heat protections, but as was the case in Miami-Dade, local officials will likely face obstacles from powerful industry groups as they do so.

    Siri Chilukuri Environmental justice reporting fellow

    “Heatflation” comes for desserts 

    Heatflation came for condiments like olive oil and sriracha in 2023. This year, it’ll strike desserts. Unusually dry weather and a poor sugar cane harvest in India and Thailand — two of the world’s biggest producers — have driven global sugar prices to their highest level in more than a decade. Heavy rainfall in West Africa has led to widespread rot on the region’s prolific cocoa farms, causing chocolate prices to soar and snack companies like Mondelēz, which makes Oreos, to warn of more expensive products in 2024. And an extra-hot year fueled by a strong El Niño could be a rough one for wheat growers and flour prices. So now’s the time to indulge in chocolate cake — before it’s too late.

    Max Graham Food and agriculture reporting fellow

    This story was originally published by Grist with the headline 24 Predictions for 2024 on Jan 3, 2024.

    This post was originally published on Grist.

Finance analysts free of moral scruple can point to Palantir with relish and note that 2023 was a fairly rewarding year for it.  The company, which bills itself as a “category-leading software” builder “that empowers organizations to create and govern artificial intelligence”, launched its initial public offering in 2020.  But the milky confidence curdled, as with much else with tech assets, and the company’s stock lost as much as 87% of its value.  But this is the sort of language that delights the economy boffins no end, a bloodless exercise that ignores what Palantir really does.

The surveillance company initially cut its teeth on agendas related to national security and law enforcement through Gotham.  A rather dry summation of its services is offered by Andrew Iliadis and Amelia Acker: “The company supplies information technology solutions for data integration and tracking to police and government agencies, humanitarian organizations, and corporations.”

Founded in 2003 and unimaginatively named after the magical stones in The Lord of the Rings known as “Seeing Stones” or palantíri, its ambition was to remake the national security landscape, a true fetishist project envisaging technology as deliverer and saviour.  While most of its work remains painfully clandestine, it does let the occasional salivating observer, such as Portugal’s former Secretary of State for European Affairs Bruno Maçães, into its citadel to receive the appropriate indoctrination.

It’s impossible to take any commentary arising from these proselytised sorts seriously, but what follows can be intriguing.  “The target coordination cycle: find, track, target, and prosecute,” Maçães writes for Time, reflecting on the technology on show at the company’s London headquarters.  “As we enter the algorithmic age, time is compressed.  From the moment the algorithms set to work detecting their targets until these targets are prosecuted – a term of art in the field – no more than two or three minutes elapse.”  Such commentary takes the edge off the cruelty, the lethality, the sheer destruction of life that such prosecution entails.

While its stable of government clients remains important, the company also sought to further expand its base with Foundry, the commercial version of the software. “Foundry helps businesses make better decisions and solve problems, and Forrester estimated Foundry delivers a 315% return on investment (ROI) for its users,” writes Will Healy, whose commentary is, given his association with Palantir, bound to be cherubically crawling while oddly flat.

    This tech beast is also claiming to march to a more moral tune, with Palantir Technologies UK Ltd announcing in April that it had formed a partnership with the Prosecutor General’s Office of Ukraine (OPG) to “enable investigators on the ground and across Europe to share, integrate, and process all key data relating to more than 78,000 registered war crimes.”

    The company’s co-founder and chief executive officer, Alexander C. Karp, nails his colours to the mast with a schoolboy’s binary simplicity.  “The invasion of Ukraine represents one of the most significant challenges to the global balance of power.  To that end, the crimes that are being committed in Ukraine must be prosecuted.”

    Having picked the Ukrainian cause as a beneficial one, Palantir revealed that it was “already helping Ukraine militarily, and supporting the resettlement of refugees in the UK, Poland and in Lithuania.”  For Karp, “Software is a product of the legal and moral order in which it is created, and plays a role in defending it.”

Such gnomic statements are best kept in the spittoon of history, mere meaningless splutter, but if they are taken seriously, Karp is in trouble.  He is one who has admitted with a sissy’s glee that the “core mission of our company always was to make the West, especially America, the strongest in the world, the strongest it’s ever been, for the sake of global peace and prosperity”.  Typically, such money-minded megalomaniacs tend to confuse personal wealth and a robber baron’s acquisitiveness with the more collective goals of peace and security.  Murdering thieves can be most moral, even as they carry out their sordid tasks with silver tongs.

    When Google dropped Project Maven, the US Department of Defense program that riled employees within the company, Palantir was happy to offer its services.  It did not matter one jot that the project, known in Palantir circles as “Tron”, was designed to train AI to analyse aerial drone footage to enable the identification of objects and human beings (again bloodless, chilling, instrumental).  “It’s commonly known that our software is used in an operational context at war,” Karp is reported as saying.  “Do you really think a war fighter is going to trust a software company that pulls the plug because something becomes controversial with their life?  Currently, when you’re a war fighter your life depends on your software.”

War is merely one context where Palantir dirties the terrain of policy.  In 2020, Amnesty International published a report outlining the various human rights risks arising from Palantir’s contracts with the US Department of Homeland Security.  Of particular concern were associated products and services stemming from its Homeland Security Investigations (HSI) division of Immigration and Customs Enforcement.  Human rights groups such as Mijente, along with a number of investors, have also noted that such contracts enable ICE to prosecute such activities as surveillance, detentions, raids, de facto family separations and deportations.

    In 2023, protests by hundreds of UK health workers managed to shut down the central London headquarters of the tech behemoth. The workers in question were protesting the award of a £330 million contract to Palantir by the National Health Service (NHS) England.  Many felt particularly riled at the company, given its role in furnishing the Israeli government with military and surveillance technology, including predictive policing services.  The latter are used to analyse social media posts by Palestinians that might reveal threats to public order or praise for “hostile” entities.

    As Gaza is being flattened and gradually exterminated by Israeli arms, Palantir remains loyal, even stubbornly so.  “We are one of the few companies in the world to stand and announce our support for Israel, which remains steadfast,” the company stated in a letter to shareholders.  With a record now well washed in blood, the company deserves a global protest movement that blocks its appeal and encourages a shareholder exodus.

    The post Amoral Compass: Palantir and its Quest to Remake the World first appeared on Dissident Voice.

  • ProPublica is a nonprofit newsroom that investigates abuses of power. This story was co-published with the Tow Center for Digital Journalism at Columbia University.

    “My sisters have died,” the young boy sobbed, chest heaving, as he wailed into the sky. “Oh, my sisters.” As Israel began airstrikes on Gaza following the Oct. 7 Hamas terrorist attack, posts by verified accounts on X, the social media platform formerly called Twitter, were being transmitted around the world. The heart-wrenching video of the grieving boy, viewed more than 600,000 times, was posted by an account named “#FreePalestine 🇵🇸.” The account had received X’s “verified” badge just hours before posting the tweet that went viral.

    Days later, a video posted by an account calling itself “ISRAEL MOSSAD,” another “verified” account, this time bearing the logo of Israel’s national intelligence agency, claimed to show Israel’s advanced air defense technology. The post, viewed nearly 6 million times, showed a volley of rockets exploding in the night sky with the caption: “The New Iron beam in full display.”

    And following an explosion on Oct. 14 outside the Al-Ahli Hospital in Gaza where civilians were killed, the verified account of the Hamas-affiliated news organization Quds News Network posted a screenshot from Facebook claiming to show the Israel Defense Forces declaring their intent to strike the hospital before the explosion. It was seen more than half a million times.

    None of these posts depicted real events from the conflict. The video of the grieving boy was from at least nine years ago and was taken in Syria, not Gaza. The clip of rockets exploding was from a military simulation video game. And the Facebook screenshot was from a now-deleted Facebook page not affiliated with Israel or the IDF.

    Just days before its viral tweet, the #FreePalestine 🇵🇸 account had a blue verification check under a different name: “Taliban Public Relations Department, Commentary.” It changed its name back after the tweet and was reverified within a week. Despite their blue check badges, neither Taliban Public Relations Department, Commentary nor ISRAEL MOSSAD (now “Mossad Commentary”) has any real-life connection to either organization. Their posts were eventually annotated by Community Notes, X’s crowdsourced fact-checking system, but these clarifications garnered about 900,000 views — less than 15% of what the two viral posts totaled. ISRAEL MOSSAD deleted its post in late November. The Facebook screenshot, posted by the account of the Quds News Network, still doesn’t have a clarifying note. Mossad Commentary and the Quds News Network did not respond to direct messages seeking comment; Taliban Public Relations Department, Commentary did not respond to public mentions asking for comment.

    An investigation by ProPublica and Columbia University’s Tow Center for Digital Journalism shows how false claims based on out-of-context, outdated or manipulated media have proliferated on X during the first month of the Israel-Hamas conflict. The organizations looked at over 200 distinct claims that independent fact-checks determined to be misleading, and searched for posts by verified accounts that perpetuated them, identifying 2,000 total tweets. The tweets, collectively viewed half a billion times, were analyzed alongside account and Community Notes data.

    ProPublica and Columbia University’s Tow Center for Digital Journalism identified more than 2,000 tweets by verified accounts that contained debunked claims based on out-of-context media. Quds News Network made five of those posts and continues to post about the conflict. Some of its English-language accounts on Facebook and Instagram have been suspended. (Screenshots of X taken and annotated by ProPublica and the Tow Center for Digital Journalism.)

    The ongoing conflict in Gaza is the biggest test for changes implemented by X owner Elon Musk since his acquisition of Twitter last year. After raising concerns about the power of platforms to determine what speech is appropriate, Musk instituted policies to promote “healthy” debate under the maxim “freedom of speech, not reach,” where certain types of posts that previously would have been removed for violating platform policy now have their visibility restricted.

    Within 10 days of taking ownership, Musk cut 15% of Twitter’s trust and safety team. He made further cuts in the following months, including firing the election integrity team, terminating many contracted content moderators and revoking existing misinformation policies on specific topics like COVID-19. In place of these safeguards, Musk expanded Community Notes. The feature, first launched in 2021 as Birdwatch, adds crowdsourced annotations to a tweet when users with diverse perspectives rate them “helpful.”

    “The Israel-Hamas war is a classic case of an information crisis on X, in terms of the speed and volume of the misinformation and the harmful consequences of that rhetoric,” said Michael Zimmer, the director of the Center for Data, Ethics, and Society at Marquette University in Wisconsin, who has studied how social media platforms combat misinformation.

    While no social media platform is free of misinformation, critics contend that Musk’s policies, along with his personal statements, have led to a proliferation of misinformation and hate speech on X. Advertisers have fled the platform — U.S. ad revenue is down roughly 60% compared to last year. Last week, Musk reinstated the account of Alex Jones, who was ordered to pay $1.1 billion in defamation damages for repeatedly lying about the 2012 Sandy Hook school shooting. Jones appealed the verdict. This week, the European Union opened a formal investigation against X for breaching multiple provisions of the Digital Services Act, including risk management and content moderation, as well as deceptive design in relation to its “so-called Blue checks.”

    ProPublica and the Tow Center found that verified blue check accounts that posted misleading media saw their audience grow on X in the first month of the conflict. This included dozens of accounts that posted debunked tweets three or more times and that now have over 100,000 followers each. The false posts appear to violate X’s synthetic and manipulated media policy, which bars all users from sharing media that may deceive or confuse people. Many accounts also appear to breach the eligibility criteria for verification, which state that verified accounts must not be “misleading or deceptive” or engage in “platform manipulation and spam.” Several of the fastest-growing accounts that have posted multiple false claims about the conflict now have more followers than some regional news organizations covering it.

    We also found that the Community Notes system, which has been touted by Musk as a way to improve information accuracy on the platform, hasn’t scaled sufficiently. About 80% of the 2,000 debunked posts we reviewed had no Community Note. Of the 200 debunked claims, more than 80 were never clarified with a note.

    When clarifying Community Notes did appear, they typically reached a fraction of the views that the original tweet did, though views on Community Notes are significantly undercounted. We also found that in some cases, debunked images or videos were flagged by a Community Note in one tweet but not in others, despite X announcing, partway through the period covered by our dataset, that it had improved its media-matching algorithms to address this. For tweets that did receive a Community Note, it typically didn’t become visible until hours after the post.

    This last finding expands on a recent report by Bloomberg, which analyzed 400 false posts tagged by Community Notes in the first two weeks after the Oct. 7 attack and found it typically took seven hours for a Community Note to appear.

    For the tweets analyzed by ProPublica and the Tow Center, the median time that elapsed before a Community Note became visible decreased to just over five hours in the first week of November after X improved its system. Outliers did exist: Sometimes it still took more than two days for a note to appear, while in other cases, a note appeared almost instantaneously because the tweet used media that the system had already encountered.

    Multiple emails sent to X’s press inbox seeking comment on our findings triggered automated replies to “check back later” with no further response. Keith Coleman, who leads the Community Notes team at X, was separately provided with summary findings relevant to Community Notes as well as the dataset containing the compiled claims and tweets.

    Via email, Coleman said that the tweets identified in this investigation were a small fraction of those covered by the 1,500 visible Community Notes on X about the conflict from this time period. He also said that many posts with high-visibility notes were deleted after receiving a Community Note, including ones that we did not identify. When asked about the number of claims that did not receive a single note, Coleman said that users might not have thought one was necessary, pointing to examples where images generated by artificial intelligence tools could be interpreted as artistic depictions. AI-generated images accounted for around 7% of the tweets that did not receive a note; none acknowledged that the media was AI-generated. Coleman said that the current system is an upgrade over X’s historic approaches to dealing with misinformation and that it continues to improve; “most importantly,” he said, the Community Notes program “is found helpful by people globally, across the political spectrum.”

    Community Notes were initially meant to complement X’s various trust and safety initiatives, not replace them. “It still makes sense for platforms to keep their trust and safety teams in a breaking-news, viral environment. It’s not going to work to simply fling open the gates,” said Mike Ananny, an associate professor of communication and journalism at the University of Southern California, who is skeptical about leaving moderation to the community, particularly after the changes Musk has made.

    “I’m not sure any community norm is going to work given all of the signals that have been given about who’s welcome here, what types of opinions are respected and what types of content is allowed,” he said.

    ProPublica and the Tow Center compiled a large sample of data from multiple sources to study the effectiveness of Community Notes in labeling debunked claims. We found over 1,300 verified accounts that posted misleading or out-of-context media at least once in the first month of the conflict; 130 accounts did so three or more times. (For more details on how the posts were gathered, see the methodology section at the end of this story.)

    Musk overhauled Twitter’s account verification program soon after acquiring the company. Previously, Twitter gave verified badges to politicians, celebrities, news organizations, government agencies and other vetted notable individuals or organizations. Though the legacy process was criticized as opaque and arbitrary, it provided a signal of authenticity for users. Today, accounts receive the once-coveted blue check in exchange for $8 a month and a cursory identity check. Despite well-documented impersonation and credibility issues, these “verified” accounts are prioritized in search, in replies and across X’s algorithmic feeds.

    If an account continuously shares harmful or misleading narratives, X’s synthetic and manipulated media policy states that its visibility may be reduced or the account may be locked or suspended. But the investigation found that prominent verified accounts appeared to face few consequences for broadcasting misleading media to their large follower networks. Of the 40 accounts with more than 100,000 followers that posted debunked tweets three times or more in the first month of the conflict, only seven appeared to have had any action taken against them, according to account history data shared with ProPublica and the Tow Center by Travis Brown. Brown is a software developer who researches extremism and misinformation on X.

    Those 40 accounts, a number of which have been identified as the most influential accounts engaging in Hamas-Israel discourse, grew their collective audience by nearly 5 million followers, to around 17 million, in the first month of the conflict alone.

    A few of the smaller verified accounts in the dataset received punitive action: About 50 accounts that posted at least one false tweet were suspended. On average, these accounts had 7,000 followers. It is unclear whether the accounts were suspended for manipulated media policy violations or for other reasons, such as bot-like behavior. Around 80 accounts no longer have a blue check badge. It is unclear whether the accounts lost their blue checks because they stopped paying, because they had recently changed their display name (which triggers a temporary removal of the verified status), or because Twitter revoked the status. X has said it removed 3,000 accounts by “violent entities,” including Hamas, in the region.

    On Oct. 29, X announced a new policy where verified accounts would no longer be eligible to share in revenue earned from ads that appeared alongside any of their posts that had been corrected by Community Notes. In a tweet, Musk said, “the idea is to maximize the incentive for accuracy over sensationalism.” Coleman said that this policy has been implemented, but did not provide further details.

    False claims that go viral are frequently repeated by multiple accounts and often take the form of decontextualized old footage. One of the most widespread false claims, that Qatar was threatening to stop supplying natural gas to the world unless Israel halted its airstrikes, was repeated by nearly 70 verified accounts. This claim, which bolstered its credibility with a false description of an unrelated 2017 speech by the Qatari emir, received over 15 million views collectively, with a single post by Dominick McGee (@dom_lucre) amassing more than 9 million views. McGee is popular in the QAnon community and is an election denier with nearly 800,000 followers who was suspended from X for sharing child exploitation imagery in July 2023. Shortly after, X reversed the suspension. McGee denied that he had shared the image when reached by direct message on X, claiming instead that it was “an article touching it.”

    Community Notes like this one appear alongside many false posts claiming Qatar is threatening to cut off its gas supply to the world. This note was seen more than 400,000 times across 159 posts that shared the same video clip, and it appeared on nine out of nearly 70 posts in our dataset that made this claim. (Screenshot of X taken and annotated by ProPublica and the Tow Center for Digital Journalism.)

    Another account, using the pseudonym Sprinter, shared the same false claim about Qatar in a post that was viewed over 80,000 times. These were not the only false posts made by either account. McGee shared six debunked claims about the conflict in our dataset; Sprinter shared 20.

    Sprinter posted an image of casualties from the Hamas attack on Oct. 7, most of whom were civilians, and purported that it showed Israeli military losses during the ground war later in the month. Another post mistranslated the words of an injured Israeli soldier. (Screenshots of X taken and annotated by ProPublica and the Tow Center for Digital Journalism.)

    Sprinter has tweeted AI-generated images, digitally altered videos and the unsubstantiated claim that Ukraine is providing weapons to Hamas. Each of these posts has received hundreds of thousands of views. The account’s follower count has increased by 60% to about 500,000, rivaling the following of Haaretz and the Times of Israel on X. Sprinter’s profile — which has also used the pseudonyms SprinterTeam, SprinterX and WizardSX, according to historical account data provided by Brown — was “temporarily restricted” by X in mid-November, but it retained its “verified” status. Sprinter’s original profile linked to a backup account. That account — whose name and verification status continue to change — still posts dozens of times a day and has grown to over 25,000 followers. Sprinter did not respond to a request for comment and blocked the reporter after being contacted. The original account appears to no longer exist.

    Verification badges were once a critical signal in sifting official accounts from inauthentic ones. But with X’s overhaul of the blue check program, that signal now essentially tells you whether the account pays $8 a month. ISRAEL MOSSAD, the account that posted video game footage falsely claiming it was an Israeli air defense system, had gone from fewer than 1,000 followers, when it first acquired a blue check in September 2023, to more than 230,000 today. In another debunked post, published the same day as the video game footage, the account claimed to show more of the Iron Beam system. That tweet still doesn’t have a Community Note, despite having nearly 400,000 views. The account briefly lost its blue check within a day of the two tweets being posted, but regained it days after changing its display name to Mossad Commentary. Even though it isn’t affiliated with Israel’s national intelligence agency, it continues to use Mossad’s logo in its profile picture.

    “The blue check is flipped now. Instead of a sign of authenticity, it’s a sign of suspicion, at least for those of us who study this enough,” said Zimmer, the Marquette University professor.

    Verified Accounts That Shared Misinformation Grew Quickly During the Israel-Hamas Conflict

    Several of the fastest-growing accounts that have posted multiple false claims about the conflict now have more followers than some regional news organizations actively covering it.

    (Lucas Waldron/ProPublica)

    Of the verified accounts we reviewed, the one that grew the fastest during the first month of the Israel-Hamas conflict was also one of the most prolific posters of misleading claims. Jackson Hinkle, a 24-year-old political commentator and self-described “MAGA communist,” has built a large following posting highly partisan tweets. He has been suspended from various platforms in the past, pushed pro-Russian narratives and claimed that YouTube permanently suspended his account for “Ukraine misinformation.” Three days later, he tweeted that YouTube had banned him because it didn’t want him telling the truth about the Israel-Hamas conflict. Currently, he has more than two million followers on X; over 1.5 million of those arrived after Oct. 7. ProPublica and the Tow Center found over 20 tweets by Hinkle using misleading or manipulated media in the first month of the conflict; more than half had been tagged with a Community Note. The tweets amassed 40 million views, while the Community Notes were collectively viewed just under 10 million times. Hinkle did not respond to a request for comment.

    All told, debunked tweets with a Community Note in the ProPublica-Tow Center dataset amassed 300 million views in aggregate, about five times the total number of views on the notes, even though Community Notes can appear on multiple tweets and collect views from all of them, including from tweets that were not reviewed by the news organizations.

    Hinkle misleadingly claimed that China was sending warships in the direction of Israel, even though the ships had been in routine operation in the region since May. Hinkle also posted footage claiming to show Hezbollah’s anti-ship missiles, but the video is from 2019 and not related to the current conflict. (Screenshots of X taken and annotated by ProPublica and the Tow Center for Digital Journalism.)

    X continues to improve the Community Notes system. It announced updates to the feature on Oct. 24, saying notes are appearing more often on viral and high-visibility content, and are appearing faster in general. But ProPublica and Tow Center’s review found that less than a third of debunked tweets created since the update received a Community Note, though the median time for a note to become visible dropped noticeably, from seven hours to just over five hours in the first week of November. The Community Notes team said over email that their data showed that a note typically took around five hours to become visible in the first few days of the conflict.

    Aviv Ovadya, an affiliate at Harvard’s Berkman Klein Center for Internet & Society who has worked on social media governance and algorithms similar to the one Community Notes uses, says that any fact-checking process, whether it relies on crowdsourced notes or a third-party fact-checker, is likely to always be playing catch-up to viral claims. “You need to know if the claim is worth even fact-checking,” Ovadya said. “Is it worth my time?” Once a false post is identified, a third-party fact-check may take longer than a Community Note.

    Coleman, who leads the Community Notes team, said over email that his team found Community Notes often appeared faster than posts by traditional fact-checkers, and that they are committed to making the notes visible faster.

    Our review found that many viral tweets with claims that had been debunked by third-party fact-checkers did not receive a Community Note in the long run. Of the hundreds of tweets in the dataset that gained over 100,000 impressions, only about half had a note. Coleman noted that of those widely viewed tweets, the ones with visible Community Notes attached had nearly twice as many views.

    To counter the instances where false claims spread quickly because many accounts post the same misleading media in a short time frame, the company announced in October that it would attach the same Community Note to all posts that share a debunked piece of media. ProPublica and the Tow Center found the system wasn’t always successful.

    For example, on and after Oct. 25, multiple accounts tweeted an AI-generated image of a man with five children amid piles of rubble. Community Notes for this image appeared thousands of times on X. However, of the 22 instances we identified in which a verified account tweeted the image, only seven of those were tagged with a Community Note. (One of those tweets was later deleted after garnering more than 200,000 views.)

    We found X’s media-matching system to be inconsistent for numerous other claims as well. Coleman pointed to the many automatic matches as a sign that it is working and said that its algorithm prioritizes “high precision” to avoid mistakenly finding matches between pieces of media that are meaningfully different. He also said the Community Notes team plans to further improve its media-matching system.

    According to annotations on Community Notes on X that we found, a note for this image was displayed on at least 7,200 posts. We found 22 tweets with this image, but only seven had a Community Note. The second image has been deleted, but not before it garnered more than 200,000 views. (Screenshots of X taken and annotated by ProPublica and the Tow Center for Digital Journalism.)

    The false claims ProPublica and the Tow Center identified in this analysis were also posted on other platforms, including Instagram and TikTok. On X, having a Community Note added to a post does not affect how it is displayed. Other platforms deprioritize fact-checked posts in their algorithmic feeds to limit their reach. While Ovadya believes that continued investment in Community Notes is important, he says changing X’s core algorithm could be even more impactful.

    “If X’s recommendation algorithms were built on the same principles as Community Notes and was actively rewarding content that bridges divides,” he said, “you would have less misinformation and sensationalist content going viral in the first place.”

    Methodology

    ProPublica and Columbia University’s Tow Center for Digital Journalism identified and analyzed more than 2,000 tweets by verified accounts that posted clearly debunked images or videos in the first month of the Israel-Hamas war. The posts, which encompassed more than 200 false claims, were published by more than 1,300 verified accounts and collectively received half a billion impressions. We then looked at Community Notes and account data associated with those tweets.

    Since the metrics on tweets, accounts and Community Notes were viewed at various points in time, they may not be current; for example, the status of accounts or Community Notes may have changed and the number of impressions on tweets and notes might be different after the time frame of our analysis.

    In this review, we focused on claims that could be unambiguously debunked, including those based on generative AI images that aren’t labeled as such, old pictures and videos presented as current, falsified social media posts and documents, footage from video games described as real events, doctored images and mistranslated videos. To compile our list of debunked claims, we reviewed fact checks from multiple news organizations, including BBC Verify, Logically Facts, two stories from The New York Times, The Associated Press, Agence France-Presse and Reuters. We also identified debunked claims by filtering Community Notes data by relevant keywords (Gaza, Palestine/Palestinian, Israel, IDF, Hamas/Hammas, Mossad, Iron Beam, Iron Dome), and verified the note using independent news organizations or reverse image searches to ensure that each was accurate. We did not include claims that could not be independently verified or that were contested under the fog of war.

    We compiled tweets using X’s text search functionality and Google’s reverse image search. Reverse image search was able to identify both images and videos (using a frame from the video). The claims and tweets we compiled are a convenience sample, not an exhaustive survey of all media-based misinformation on X during the first month of the Israel-Hamas war: The dataset relies heavily on images that Google has indexed as well as tweets that use identical or very similar language, which allows X’s search functionality to surface them. Additionally, the accounts mentioned in the story might have tweeted more false claims than those we identified. Tweets deleted prior to our searches are not captured in our dataset. (In its response, X provided us with 18 examples of Community Notes and tweets that were not in our dataset and could not be located because the tweets were not yet indexed by Google or could not be easily found by X’s search function.)

    We also analyzed the accounts that were posting these tweets, using account data collected by researcher Travis Brown from July through November 2023. We used this data to determine account status, follower count, handles and usernames.

    For Community Notes, we downloaded X’s open-source datasets and filtered by notes with the above-mentioned keywords. A single tweet can have multiple Community Notes and the same note can appear alongside multiple tweets. Our analysis ensured we took both relationships into account.
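    The keyword filtering and the note-to-tweet grouping described above can be sketched in Python. This is a minimal illustration, not the newsrooms' actual code: the field names (noteId, tweetId, summary) follow X's published Community Notes data schema but should be treated as assumptions here, and in practice the rows would come from csv.DictReader over the downloaded TSV files.

```python
# Hypothetical sketch of the methodology's filtering and grouping steps.
# Field names (noteId, tweetId, summary) are assumed from X's published
# Community Notes data schema; verify against the current download format.

KEYWORDS = {"gaza", "palestine", "palestinian", "israel", "idf",
            "hamas", "hammas", "mossad", "iron beam", "iron dome"}

def filter_notes(rows):
    """Yield note rows whose summary text mentions any keyword."""
    for row in rows:
        summary = row.get("summary", "").lower()
        if any(k in summary for k in KEYWORDS):
            yield row

def notes_by_tweet(rows):
    """Group note IDs by tweet ID. One tweet can carry several notes,
    and the same note can appear alongside several tweets, so both
    directions of the many-to-many relationship must be tracked."""
    grouped = {}
    for row in rows:
        grouped.setdefault(row["tweetId"], []).append(row["noteId"])
    return grouped

# Toy rows standing in for the downloaded dataset:
rows = [
    {"noteId": "n1", "tweetId": "t1", "summary": "Old footage from Syria, not Gaza"},
    {"noteId": "n2", "tweetId": "t1", "summary": "Unrelated claim"},
    {"noteId": "n3", "tweetId": "t2", "summary": "Video game clip, not the Iron Beam"},
]
matched = list(filter_notes(rows))
print(len(matched))             # 2
print(notes_by_tweet(matched))  # {'t1': ['n1'], 't2': ['n3']}
```

    As the methodology notes, each matched note would then be verified against independent fact checks or reverse image searches before the claim entered the dataset.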

    X’s Community Notes data contains the current status of a note as well as the time at which that status was set. It also includes when the Community Note was created and the note’s text. For some tweets that use repurposed media (i.e. media from a tweet that’s already been debunked by Community Notes) the note appears immediately due to improvements in X’s media-matching algorithm. This means that occasionally the time of creation or visibility of a note will be before the time the tweet was posted.

    Elizabeth Yaboni of the Tow Center for Digital Journalism contributed research.

    This post was originally published on Articles and Investigations – ProPublica.

  • Dozens of prominent investors and business leaders traveled to Israel this week to show solidarity with Israel amid its war on Hamas, according to documents from the junket obtained by The Intercept.

    The trip included top officials from private equity firms like Bain Capital; leaders from the tech industry, like a Patreon executive; and a managing director at the endowment investment firm of Harvard University, a school riven by political clashes around the Israeli war on Gaza.

    The documents, which include an itinerary and list of attendees, provide details about the weeklong meeting taking place in Tel Aviv and Jerusalem, called the Israel Tech Mission. Beginning on Sunday, the meeting includes panels like “Tech in the Trenches: Supporting an ecosystem during wartime.”

    Participants will hold meetings with top Israeli officials, like President Isaac Herzog, along with opposition leader and former military chief of staff Benny Gantz, who joined Israeli Prime Minister Benjamin Netanyahu’s war cabinet after the October 7 attack.

    Shoring up investor confidence would be welcome news in Israel. The Israeli stock exchange — whose chair Tech Mission participants are slated to meet with on Thursday — suffered billions in losses after the Hamas attack on October 7, though it has gradually recovered. The market losses came in the wake of the reported withdrawal from Israel of some foreign investors when the country was roiled by Netanyahu’s controversial attempt to roll back judicial independence.

    The Israel Tech Mission is explicit in its support for the Israeli war effort.

    “In every war there are multiple fronts,” Ron Miasnik, a co-organizer of the Israel Tech Mission who invests for Bain Capital, told the Israeli business website CTech. “The attendees of this mission are here to help counter the war’s economic disruption. We are focused on supporting and helping rebuild Israel’s world-class tech industry.”

    According to an online application for the trip, a screenshot of which was obtained by The Intercept, attendees on the trip will have to pay their own way. “Attendees will organize their own travel,” the application says. “Participants will cover their own trip cost.”

    Israel Defense Forces and Right-Wing Politicians

    On the trip, the delegation will spend time with Israel’s senior political leadership as well as military figures. The online trip application says attendees will “receive confidential military and political briefings from former Israeli Prime Minister Naftali Bennett, as well as current Members of Knesset and senior military leaders.”

    The group, according to the itinerary, is scheduled to meet with Israel Defense Forces, or IDF, soldiers on Tuesday before taking part in a “solidarity tech reception” drawing figures as diverse as the Israeli NBA player and venture capitalist Omri Casspi and the CEO of Goldman Sachs Israel. (In response to a request for comment, Goldman Sachs’s U.K. office said it had not heard back from its Israeli office.)

    The Israel Tech Mission appears to have been organized by Itrek, a nonprofit based in New York whose logo appears on the itinerary and list of attendees. Itrek sponsors weeklong “Israel Treks” to build “appreciation for Israel among present and future leaders” so they can understand its “complex reality,” according to the group’s website. (Itrek did not respond to a request for comment.)

    Israel boasts a robust tech sector. While pro-Israel figures have long touted the country’s reputation as a “start-up nation,” criticisms have emerged in recent years pointing to the role of Israel’s defense sector in creating talent and funding research that becomes the locus of tech projects — effectively profiting off Israel’s occupation of Palestine. The cyber specialists of the Israeli army’s Unit 8200, for instance, are known for creating successful start-ups, sometimes involved in security work and even alleged rights abuses.

    Close relationships between Israel’s security state, its tech sector, and the U.S. technology community are common. Tesla CEO Elon Musk met with Netanyahu and top IDF officials last month to discuss “the security aspects of artificial intelligence,” according to a readout of the conversation. The Israeli–Palestinian magazine +972 reported last month that advances in artificial intelligence have allowed the Israeli military to generate targets more rapidly than ever before.

    Israel Tech Mission attendees, for their part, are looking to support Israel’s tech sector.

    “After October 7th, we feel it is critical for venture capital and technology business leaders to stand with Israel,” David Siegel, CEO of Meetup and co-organizer of the mission, said in a press release. “Our trip was oversubscribed for attendees. The technology community recognizes the heightened need for support as many Israeli entrepreneurs and their workforces are on the front lines as reservists.”

    Harvard’s Massive Endowment

    The attendee list for the Israel Tech Mission includes a diverse roster of investors and business leaders. Among those listed are top officials at companies working in stock trading such as Vstock Transfer, a stock transfer firm, and TIFIN, a financial technology investment firm that employs artificial intelligence. Investors from private equity funds like Apollo Global Management and Entrepreneur Partners are also slated to participate.

    The attendee list also includes business officials like Ariel Boyman, a vice president at Mastercard; Steve Miller, chief financial officer at the glasses retailer Warby Parker; Michael Kohen, who leads the autonomy and automation platform at John Deere; and Jeffrey Swartz, the former CEO of Timberland. (Vstock, TIFIN, Apollo, Entrepreneur Partners, Mastercard, Warby Parker, John Deere, and Swartz did not immediately respond to a request for comment.)

    Also listed as an attendee is Adam Goldstein, managing director at Harvard Management Company, which helps oversee Harvard University’s over $50 billion endowment — the largest on Earth. The endowment investment fund has been accused in the past of investing nearly $200 million in companies that profit off Israel’s illegal settlements in the occupied Palestinian West Bank. (The Harvard Management Company did not immediately respond to a request for comment.)

    Elite Ivy League colleges have become a flash point in the U.S. debate about Israel’s war on Gaza. Harvard has faced a backlash from donors. Billionaire investor Bill Ackman, for instance, has become a strident critic of pro-Palestine students and what he says is the school’s lackluster response to them — a battle fueled by years of resentment. And Harvard President Claudine Gay has faced, and resisted, calls to resign because of her response to pro-Palestinian activism and alleged antisemitism on campus.

    In recent years, the movement for boycott, divestment, and sanctions against Israel has gained steam at the university. Last year, the student newspaper, the Harvard Crimson, faced a backlash for its endorsement of the BDS campaign — which, if successful, would see Goldstein’s Harvard Management Company divest from Israel.

    While Israel Tech Mission delegates are looking to boost the tech sector in Israel, the Israeli war on Gaza is also being used as a pitch for tech firms like NSO Group to improve their image back in the United States. The company was blacklisted by the U.S. when its phone-hacking software Pegasus was shown to be involved in rights abuses.

    Lobbyists in Washington working for the company, which has faced cash shortages, have been using the Israeli war on Gaza to refurbish the company’s reputation. In November, the NSO lobbyists wrote to the U.S. State Department to make the case for “the importance of cyber intelligence technology in the wake of the grave security threats posed by the recent Hamas terrorist attacks in Israel and their aftermath.”

    The post Harvard Endowment Investor and Other Business Leaders Take a Solidarity Trip to Israel appeared first on The Intercept.

    This post was originally published on The Intercept.

  • In a blog post for the Council on Foreign Relations (18 December 2023), Raquel Vazquez Llorente argues that “artificial intelligence is increasingly used to alter and generate content online. As development of AI continues, societies and policymakers need to ensure that it incorporates fundamental human rights.” Vazquez Llorente is the Head of Law and Policy, Technology Threats and Opportunities at WITNESS.

    The urgency of integrating human rights into the DNA of emerging technologies has never been more pressing. Through my role at WITNESS, I’ve observed first-hand the profound impact of generative AI across societies, and most importantly, on those defending democracy at the frontlines.

    The recent elections in Argentina were marked by the widespread use of AI in campaigning material. Generative AI has also been used to target candidates with embarrassing content (increasingly of a sexual nature), to generate political ads, and to support candidates’ campaigns and outreach activities in India, the United States, Poland, Zambia, and Bangladesh (to name a few). The overall result of the lack of strong frameworks for the use of synthetic media in political settings has been a climate of mistrust regarding what we see or hear.

    Not all digital alteration is harmful, though. Part of my work involves identifying how emerging technologies can foster positive change. For instance, with appropriate disclosure, synthetic media could be used to enhance voter education and engagement. Generative AI could help create informative content about candidates and their platforms, or of wider election processes, in different languages and formats, improving inclusivity or reducing barriers for underdog or outsider candidates. For voters with disabilities, synthetic media could provide accessible formats of election materials, such as sign language avatars or audio descriptions of written content. Satirical deepfakes could engage people who might otherwise be disinterested in politics, bringing attention to issues that might not be covered in mainstream media. We need to celebrate and protect these uses.

    As two billion people in fifty countries go to the polls next year, a crucial question arises: how can we build resilience into our democracy in an era of audiovisual manipulation? When AI can blur the lines between reality and fiction with increasing credibility and ease, discerning truth from falsehood becomes not just a technological battle, but a fight to uphold democracy.

    From conversations with journalists, activists, technologists and other communities impacted by generative AI and deepfakes, I have learnt that the effects of synthetic media on democracy are a mix of new, old, and borrowed challenges.

    Generative AI introduces a daunting new reality: inconvenient truths can be denied as deep faked, or at least facilitate claims of plausible deniability to evade accountability. The burden of proof, or perhaps more accurately, the “burden of truth” has shifted onto those circulating authentic content and holding the powerful to account. This is not just a crisis of identifying what is fake. It is also a crisis of protecting what is true. When anything and everything can be dismissed as AI-generated or manipulated, how do we elevate the real stories of those defending our democracy at the frontlines?

    But AI’s impact doesn’t stop at new challenges; it exacerbates old inequalities. Those who are already marginalized and disenfranchised—due to their gender, ethnicity, race or belonging to a particular group—face amplified risks. AI is like a magnifying glass for exclusion, and its harms are cumulative. AI deepens existing vulnerabilities, bringing a serious threat to principles of inclusivity and fairness that lie at the heart of democratic values. Similarly, sexual deepfakes can have an additional chilling effect, discouraging women, LGBTQ+ people and individuals from minoritized communities from participating in public life, thus eroding the diversity and representativeness that are essential for a healthy democracy.

    Lastly, much as with social media, where we failed to incorporate the voices of the global majority, we have borrowed previous mistakes. The shortcomings in moderating content, combating misinformation, and protecting user privacy have had profound implications on democracy and social discourse. Similarly, in the context of AI, we have yet to see meaningful policies and regulation that not only consult those affected by AI around the world but, more importantly, center the solutions prioritized by affected communities beyond the United States and Europe. This highlights a crucial gap: the urgent need for a global perspective in AI governance, one that learns from the failures of social media in addressing cultural and political nuances across different societies.

    As we navigate AI’s impact on democracy and human rights, our approach to these challenges should be multifaceted. We must draw on a blend of strategies—ones that address the immediate ‘new’ realities of AI, respond to the ‘old’ but persistent challenges of inequality, and incorporate ‘borrowed’ wisdom from our past experiences.

    First, we must ensure that new AI regulations and companies’ policies are steeped in human rights law and principles, such as those enshrined in the Universal Declaration of Human Rights. In the coming years, one of the most important areas in socio-technical expertise will be the ability to translate human rights protections into AI policies and legislation.

    While anchoring new policies in human rights is crucial, we should not lose sight of the historical context of these technological advancements. We must look back as we move forward. As with technological advancements of the past, we should remind ourselves that progress is not how far you go, but how many people you bring along. We should really ask, is it technological progress if it is not inclusive, if it reproduces a disadvantage? Technological advancement that leaves people behind is not true progress; it is an illusion of progress that perpetuates inequality and systems of oppression. This past weekend marked twenty-five years since the adoption of the UN Declaration on Human Rights Defenders, which recognizes the key role of human rights defenders in realizing the Universal Declaration of Human Rights and other legally binding treaties. In the current wave of excitement around generative AI, the voices of those protecting human rights at the frontlines have rarely been more vital.

    Our journey towards a future shaped by AI is also about learning from the routes we have already travelled, especially those from the social media era. Synthetic media has to be understood in the context of the broader information ecosystem. We are monetizing the spread of falsehoods while keeping local content moderators and third-party fact-checkers on precarious salaries, and putting the blame on platform users for not being educated enough to spot the fakery. The only way to align democratic values with technology goals is by both placing responsibility and establishing accountability across the whole information and AI ecosystem, from the researchers building foundation models to those commercializing AI tools to those creating and distributing content.

    In weaving together these new, old, and borrowed strands of thought, we create a powerful blueprint for steering the course of AI. This is not just about countering a wave of digital manipulation—it is about championing technology advancement that amplifies our democratic values, deepens our global engagement, and preserves the core of our common humanity in an increasingly AI-powered and image-driven world. By centering people’s rights in AI development, we not only protect our individual freedoms, but also fortify our shared democratic future.

    https://www.cfr.org/blog/protect-democracy-deepfake-era-we-need-bring-voices-those-defending-it-frontlines

    This post was originally published on Hans Thoolen on Human Rights Defenders and their awards.

  • In the past 15 years, policing has grown its reach, largely through an array of technologies that record and store our personal details and daily activities. Using algorithms and other formulae, authorities are able to repurpose data to meet the emerging demands of the criminal legal and immigration systems. From predictive policing to GPS-enabled ankle monitors to gunshot trackers to massive…

    This post was originally published on Latest – Truthout.

  • Unlike any other point in history, hackers, whistleblowers, and archivists now routinely make off with terabytes of data from governments, corporations, and extremist groups. These datasets often contain gold mines of revelations in the public interest and in many cases are freely available for anyone to download. 

    Revelations based on leaked datasets can change the course of history. In 1971, Daniel Ellsberg’s leak of military documents known as the Pentagon Papers led to the end of the Vietnam War. The same year, an underground activist group called the Citizens’ Commission to Investigate the FBI broke into a Federal Bureau of Investigation field office, stole secret documents, and leaked them to the media. This dataset mentioned COINTELPRO. NBC reporter Carl Stern used Freedom of Information Act requests to publicly reveal that COINTELPRO was a secret FBI operation devoted to surveilling, infiltrating, and discrediting left-wing political groups. This stolen FBI dataset also led to the creation of the Church Committee, a Senate committee that investigated these abuses and reined them in. 

    Huge data leaks like these used to be rare, but today they’re increasingly common. More recently, Chelsea Manning’s 2010 leaks of Iraq and Afghanistan documents helped spark the Arab Spring, documents and emails stolen by Russian military hackers helped elect Donald Trump as U.S. president in 2016, and the Panama Papers and Paradise Papers exposed how the rich and powerful use offshore shell companies for tax evasion.

    Yet these digital tomes can prove extremely difficult to analyze or interpret, and few people today have the skills to do so. I spent the last two years writing the book “Hacks, Leaks, and Revelations: The Art of Analyzing Hacked and Leaked Data” to teach journalists, researchers, and activists the technologies and coding skills required to do just this. While these topics are technical, my book doesn’t assume any prior knowledge: all you need is a computer, an internet connection, and the will to learn. Throughout the book, you’ll download and analyze real datasets — including those from police departments, fascist groups, militias, a Russian ransomware gang, and social networks — as practice. Throughout, you’ll engage head-on with the dumpster fire that is 21st-century current events: the rise of neofascism and the rejection of objective reality, the extreme partisan divide, and an internet overflowing with misinformation.

    My book officially comes out January 9, but it’s shipping today if you order it from the publisher here. Add the code INTERCEPT25 for a special 25 percent discount.

    The following is a lightly edited excerpt from the first chapter of “Hacks, Leaks, and Revelations” about a crucial and often underappreciated part of working with leaked data: how to verify that it’s authentic.

    Photo: Micah Lee

    You can’t believe everything you read on the internet, and juicy documents or datasets that anonymous people send you are no exception. Disinformation is prevalent.

    How you go about verifying that a dataset is authentic completely depends on what the data is. You have to approach the problem on a case-by-case basis. The best way to verify a dataset is to use open source intelligence (OSINT), or publicly available information that anyone with enough skill can find. 

    This might mean scouring social media accounts, consulting the Internet Archive’s Wayback Machine, inspecting metadata of public images or documents, paying services for historical domain name registration data, or viewing other types of public records. If your dataset includes a database taken from a website, for instance, you might be able to compare information in that database with publicly available information on the website itself to confirm that they match. (Michael Bazzell also has great resources on the tools and techniques of OSINT.)

    Below, I share two examples of authenticating data from my own experience: one about a dataset from the anti-vaccine group America’s Frontline Doctors, and another about leaked chat logs from a WikiLeaks Twitter group. 

    In my work at The Intercept, I encounter datasets so frequently I feel like I’m drowning in data, and I simply ignore most of them because it’s impossible for me to investigate them all. Unfortunately, this often means that no one will report on them, and their secrets will remain hidden forever. I hope “Hacks, Leaks, and Revelations” helps to change that. 

    The America’s Frontline Doctors Dataset

    In late 2021, in the midst of the Covid-19 pandemic, an anonymous hacker sent me hundreds of thousands of patient and prescription records from telehealth companies working with America’s Frontline Doctors (AFLDS). AFLDS is a far-right anti-vaccine group that misleads people about Covid-19 vaccine safety and tricks patients into paying millions of dollars for drugs like ivermectin and hydroxychloroquine, which are ineffective at preventing or treating the virus. The group was initially formed to help Donald Trump’s 2020 reelection campaign, and the group’s leader, Simone Gold, was arrested for storming the U.S. Capitol on January 6, 2021. In 2022, she served two months in prison for her role in the attack.

    My source told me that they got the data by writing a program that made thousands of web requests to a website run by one of the telehealth companies, Cadence Health. Each request returned data about a different patient. To see whether that was true, I made an account on the Cadence Health website myself. Everything looked legitimate to me. The information I had about each of the 255,000 patients was the exact information I was asked to provide when I created my account on the service, and various category names and IDs in the dataset matched what I could see on the website. But how could I be confident that the patient data itself was real, that these people weren’t just made up?

    I wrote a simple Python script to loop through the 72,000 patients (those who had paid for fake health care) and put each of their email addresses in a text file. I then cross-referenced these email addresses with a totally separate dataset containing personal identifying information from members of Gab, a social network popular among fascists, anti-democracy activists, and anti-vaxxers. In early 2021, a hacktivist who went by the name “JaXpArO and My Little Anonymous Revival Project” had hacked Gab and made off with 65GB of data, including about 38,000 Gab users’ email addresses. Thinking there might be overlap between AFLDS and Gab users, I wrote another simple Python program that compared the email addresses from each group and showed me all of the addresses that were in both lists. There were several.
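    The cross-referencing step described above amounts to a set intersection on normalized email addresses. Here is a minimal Python sketch of that idea, with toy data standing in for the real leaked files (the names and addresses are invented for illustration):

    ```python
    def normalize(emails):
        """Lowercase and strip a collection of email addresses into a set."""
        return {e.strip().lower() for e in emails if e.strip()}

    def overlap(emails_a, emails_b):
        """Return the addresses that appear in both collections."""
        return normalize(emails_a) & normalize(emails_b)

    # Toy data standing in for the two leaked datasets.
    patients = ["Alice@example.com", "bob@example.com ", "carol@example.com"]
    gab_users = ["BOB@example.com", "dave@example.com"]

    print(sorted(overlap(patients, gab_users)))  # prints ['bob@example.com']
    ```

    With real data, each list would be read from a text file of one address per line; normalizing case and whitespace first avoids missing matches that differ only in formatting.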

    Armed with this information, I started scouring the public Gab timelines of users whose email addresses had appeared in both datasets, looking for posts about AFLDS. Using this technique, I found multiple AFLDS patients who posted about their experience on Gab, leading me to believe that the data was authentic. For example, according to consultation notes from the hacked dataset, one patient created an account on the telehealth site and four days later had a telehealth consultation. About a month after that, they posted to Gab saying, “Front line doctors finally came through with HCQ/Zinc delivery” (HCQ is an abbreviation for hydroxychloroquine).

    Having a number of examples like this gave us confidence that the dataset of patient records was, in fact, legitimate. You can read our AFLDS reporting at The Intercept — which led to a congressional investigation into the group — here.

    The WikiLeaks Twitter Group Chat

    In late 2017, journalist Julia Ioffe published a revelation in The Atlantic: WikiLeaks had slid into Donald Trump Jr.’s Twitter DMs. Among other things, before the 2016 election, WikiLeaks suggested to Trump Jr. that even if his father lost the election, he shouldn’t concede. “Hi Don,” the verified @wikileaks Twitter account wrote, “if your father ‘loses’ we think it is much more interesting if he DOES NOT conceed [sic] and spends time CHALLENGING the media and other types of rigging that occurred—as he has implied that he might do.”

    A long-term WikiLeaks volunteer who went by the pseudonym Hazelpress started a private Twitter group with WikiLeaks and its biggest supporters in mid-2015. After watching the group become more right-wing, conspiratorial, and unethical, and specifically after learning about WikiLeaks’ secret DMs with Trump Jr., Hazelpress decided to blow the whistle on the whistleblowing group itself. She has since publicly come forward as Mary-Emma Holly, an artist who spent years as a volunteer legal researcher for WikiLeaks.

    To carry out the WikiLeaks leak, Holly logged in to her Twitter account, made it private, unfollowed everyone, and deleted all of her tweets. She also deleted all of her DMs except for the private WikiLeaks Twitter group and changed her Twitter username. Using the Firefox web browser, she then went to the DM conversation — which contained 11,000 messages and had been going on for two-and-a-half years — and saw the latest messages in the group. She scrolled up, waited for Twitter to load more messages, scrolled up again, and kept doing this for four hours until she reached the very first message in the group. She then used Firefox’s Save Page As function to save an HTML version of the webpage, as well as a folder full of resources like images that were posted in the group.

    Now that she had a local, offline copy of all the messages in the DM group, Holly leaked it to the media. In early 2018, she sent a Signal message to the phone number listed on The Intercept’s tips page. At that time, I happened to be the one checking Signal for incoming tips. Using OnionShare — software that I developed for this purpose — she sent me an encrypted and compressed file, along with the password to decrypt it. After extracting it, I found a 37MB HTML file — so big that it made my web browser unresponsive when I tried opening it and which I later split into separate files to make it easier to work with — and a folder with 82MB of resources.
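    Splitting an unwieldy file like that can be done naively by line count. This is a generic sketch, not the method actually used, and note that splitting HTML mid-document can break its markup, so real split points may need more care:

    ```python
    def chunk_lines(lines, size):
        """Yield successive lists of at most `size` lines from an iterable."""
        batch = []
        for line in lines:
            batch.append(line)
            if len(batch) == size:
                yield batch
                batch = []
        if batch:
            yield batch

    # Usage sketch: write a huge file out as numbered chunk files.
    # The path below is hypothetical.
    # with open("messages.html") as f:
    #     for i, batch in enumerate(chunk_lines(f, 50_000)):
    #         with open(f"messages_part{i:03d}.html", "w") as out:
    #             out.writelines(batch)
    ```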

    How could I verify the authenticity of such a huge HTML file? If I could somehow access the same data directly from Twitter’s servers, that would do it; only an insider at Twitter would be in a position to create fake DMs that show up on Twitter’s website, and even that would be extremely challenging. When I explained this to Holly (who, at the time, I still knew only as Hazelpress), she gave me her Twitter username and password. She had already deleted all the other information from that account. With her consent, I logged in to Twitter with her credentials, went to her DMs, and found the Twitter group in question. It immediately looked like it contained the same messages as the HTML file, and I confirmed that the verified account @wikileaks frequently posted to the group.

    Following these steps made me extremely confident in the authenticity of the dataset, but I decided to take verification one step further. Could I download a separate copy of the Twitter group myself in order to compare it with the version Holly had sent me? I searched around and found DMArchiver, a Python program that could do just that. Using this program, along with Holly’s username and password, I downloaded a text version of all of the DMs in the Twitter group. It took only a few minutes to run this tool, rather than four hours of scrolling up in a web browser.

    Note: After this investigation, the DMArchiver program stopped working due to changes on Twitter’s end, and today the project is abandoned. However, if you’re faced with a similar challenge in a future investigation, search for a tool that might work for you. 

    The output from DMArchiver, a 1.7MB text file, was much easier to work with compared to the enormous HTML file, and it also included exact time stamps. Here’s a snippet of the text version:

    [2015-11-19 13:46:39] <WikiLeaks> We believe it would be much better for GOP to win.

    [2015-11-19 13:47:28] <WikiLeaks> Dems+Media+liberals woudl then form a block to reign in their worst qualities.

    [2015-11-19 13:48:22] <WikiLeaks> With Hillary in charge, GOP will be pushing for her worst qualities., dems+media+neoliberals will be mute.

    [2015-11-19 13:50:18] <WikiLeaks> She’s a bright, well connected, sadistic sociopath.
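    Log lines in this bracketed format are straightforward to parse with a regular expression, which makes the text export easy to filter by sender or date. A short sketch (this is my own illustration, not DMArchiver's actual code):

    ```python
    import re
    from datetime import datetime

    # Matches lines like: [2015-11-19 13:46:39] <WikiLeaks> message text
    LINE_RE = re.compile(r"\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\] <([^>]+)> (.*)")

    def parse_line(line):
        """Return (timestamp, sender, message), or None if the line doesn't match."""
        m = LINE_RE.match(line.strip())
        if not m:
            return None
        ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S")
        return ts, m.group(2), m.group(3)

    sample = "[2015-11-19 13:46:39] <WikiLeaks> We believe it would be much better for GOP to win."
    print(parse_line(sample))
    ```

    Once each line is a (timestamp, sender, message) tuple, isolating the roughly 10 percent of messages from one account is a one-line filter.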

    I could view the HTML version in a web browser to see it exactly as it had originally looked on Twitter, which was also useful for taking screenshots to include in our final report.

    A screenshot of the leaked HTML file.

    Along with the talented reporter Cora Currier, I started the long process of reading all 11,000 chat messages, paying closest attention to the 10 percent of them from the @wikileaks account — which was presumably controlled by Julian Assange, WikiLeaks’s editor — and picking out everything in the public interest. We discovered the following details:

    • Assange expressed a desire for Republicans to win the 2016 presidential election.
    • Assange and his supporters were intensely focused on discrediting two Swedish women who had accused him of rape and molestation, as well as discrediting their lawyers. Assange and his defenders spent weeks discussing ways to sabotage articles about his rape case that feminist journalists were writing.
    • After Associated Press journalist Raphael Satter wrote a story about harm caused when WikiLeaks publishes personal identifiable information, Assange called him a “rat” and said that “he’s Jewish and engaged in the ((())) issue,” referring to an antisemitic neo-Nazi meme. He then told his supporters to “bog him down. Get him to show his bias.”

    You can read our reporting on this dataset at The Intercept. After The Intercept published this article, Assange and his supporters also targeted me personally with antisemitic abuse, and Russia Today, the state-run TV station, ran a segment about me. 

    The techniques you can use to authenticate datasets vary greatly depending on the situation. Sometimes you can rely on OSINT, sometimes you can rely on help from your source, and sometimes you’ll need to come up with an entirely different method.

    Regardless, it’s important to explain in your published report, at least briefly, what makes you confident in the data. If you can’t authenticate it but still want to publish your report in case it’s real — or in case others can authenticate it — make that clear. When in doubt, err on the side of transparency.

    My book, “Hacks, Leaks, and Revelations,” officially comes out on January 9, but it’s shipping today if you order it from the publisher here. Add the code INTERCEPT25 for a special 25 percent discount.

    The post How to Authenticate Large Datasets appeared first on The Intercept.

    This post was originally published on The Intercept.