Category: Technology

  • By Jane Patterson, RNZ political editor, and Rowan Quinn, health correspondent

    As New Zealand readies for more covid-19 cases, warnings about the ability of public hospitals to cope are escalating.

    There are 289 intensive care unit (ICU) or high dependency unit (HDU) beds at the moment, with Minister of Health Andrew Little insisting that could be ramped up to 550 if needed.

    But that has been roundly questioned by opposition MPs, clinicians and ICU experts, with a recent New Zealand Medical Journal article concluding that fully staffed extra capacity would be more like 67 beds.

    It describes New Zealand’s “comparatively low ICU capacity” as a “potential point of vulnerability” in the covid-19 response.

    Intensive care
    There is a reason it is called intensive care.

    Patients there are so sick, each one has a nurse with them around the clock.

    Those there because of covid-19 are usually struggling to breathe, their lungs unable to give their body all the oxygen it needs to function.

    There are doctors, physios and pharmacists who come and go to give vital care, but it is the nurses who are the constant.

    That’s why the shortage of ICU nurses is at the heart of the debate.

    New Zealand’s intensive care was already in a perilous position long before covid-19, with one of the lowest numbers of beds per capita in the developed world.

    Doctors and nurses have been asking for help for 10 years, failing to gain meaningful traction with successive governments.

    The small community pulled together and pooled resources when crises like the White Island eruption and the Christchurch mass shooting hit.

    But covid-19 is different. It is here for longer and will hit everywhere.

    Political football
    Little is “assured that we will manage and we will cope”.

    High vaccination rates will mean fewer people will actually end up in hospital and “the vast majority who then get infected will be able to be cared for in the home with appropriate sort of monitoring, the stuff we’re putting in place at the moment”, he says.

    He acknowledges any move to surge up would mean deferred operations for things like hip and knee replacements, and people needing a lower level of care getting it somewhere other than a hospital.

    “The impact will be on non-covid patients who can be safely referred to other places for their care and recovery at the hospital.”

    Minister of Health Andrew Little … “assured that we will manage and we will cope”. Image: Dom Thomas/RNZ

    National Party MP Shane Reti says there are simply not enough specialist ICU nurses.

    “Five point three nurses [needed per ICU] bed, it’s orphaned out and what we know from specialists … is that instead of the hundreds of beds that Andrew says we’ve got we’ve probably only got about 67 to surge to.”
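
    The figures Reti cites can be reproduced with simple arithmetic. Below is a minimal, purely illustrative Python sketch of how a nursing-constrained surge estimate like the 67-bed figure is derived; the 5.3 nurses-per-bed ratio comes from the quote above, while the spare-nurse headcount is a hypothetical placeholder chosen only to show the calculation, not a figure from the NZMJ article.

    ```python
    # Illustrative nursing-constrained ICU surge estimate.
    # The staffing ratio is quoted above; the spare-nurse headcount is a
    # hypothetical placeholder, not data from the NZMJ article.
    NURSES_PER_ICU_BED = 5.3     # quoted ratio for round-the-clock intensive care
    spare_icu_nurses = 355       # assumed specialist nurses available to surge

    surge_beds = spare_icu_nurses / NURSES_PER_ICU_BED
    print(f"Fully staffed surge capacity: about {surge_beds:.0f} extra beds")
    # With ~355 spare nurses the arithmetic lands near 67 beds, showing how the
    # staffing ratio, not the number of physical beds, caps usable capacity.
    ```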

    Not wanting to sound like a “political caricature”, Little, however, lays the blame at the feet of the previous National government.

    Health underfunded
    “Our ICU capacity – if we’re talking about just designated ICU wards, and ICU beds, yep, that’s been a long standing problem … the reality is health has been underfunded for a long time, particularly when it comes to health facilities and buildings,” he says.

    He is confident any outbreak can be managed, saying an expansion to 500 or so beds would only be required if covid-19 cases rose to about 200,000 across the country.

    However, Reti says that the May 5 public sector pay freeze has impacted on staffing, with some going to Australia, and that New Zealand’s now competing with the world for ICU nurses with an immigration system that’s not friendly to them.

    National Party MP Shane Reti … May 5 public sector pay freeze has impacted on staffing. Image: Dom Thomas/RNZ

    Nursing shortage
    Even with the known nursing vacancies, New Zealand’s needs could be met with the training of about 1400 more nurses to work in ICU under supervision, Little says.

    From May 2020 until mid-August this year, no new resourced ICU beds were added at Auckland DHB, but the ICU nurse headcount dropped from about 250 to just over 212.

    Reti says the nursing shortage is a major obstacle.

    “When Minister Little says, ‘I’ve trained up 1400 ICU nurses’ — no you haven’t, what you’ve done is you’ve given them half a day’s online training and half a day on a mannequin.

    “In no shape or form is that an ICU nurse — they’ll be valuable, don’t get me wrong — but valuable for turning patients in ICU?”

    Auckland has the biggest ICU in the country, and needed to find nurses from across New Zealand on September 1 when eight active cases arrived there, he says, showing just how thin the margins are.

    On the ground
    Vice-president of the Australasian College of Intensive Care Rob Bevan says right now intensive care is coping well.

    That is due, in large part, to high — and rising — vaccination rates and the fact that Auckland’s been in lockdown.

    Quieter lives mean fewer car accidents and workplace falls, while hospitals have delayed many of the planned operations which might involve ICU recovery.

    But Dr Bevan, a specialist at Auckland’s Middlemore Hospital, says more beds will be needed next year when covid-19 is in the community and life is comparatively back to normal.

    “There is going to be a burden of covid that people will need hospitalisation and intensive care for that we need to add onto what we were doing before,” he says.

    “And acknowledging that our intensive care bed capacity before was still not enough to care for everybody without resorting to the deferment of planned care on occasion.”

    Many who work in intensive care say the government and health bosses are wrong to count physical beds (and the equipment that comes with them) when there are not enough nurses to use them all.

    Shocked by ‘training’
    When officials said they were training other nurses to help in ICU, the Nurses Organisation kaiwhakahaere Kerri Nuku said she was shocked to learn what that meant.

    “Four hours online training — to go and support in ICU. Those decisions about what’s in the best interests of nursing have not been made for nurses,” she said.

    Indeed, specialist ICU nurses say they would have to spend time supervising the online trained back-ups, adding more work to an already very challenging job.

    And Bevan says surging up to more than 500 beds is not a realistic picture.

    “That is a crisis, short term, and largely unsustainable model that we would have had to have moved to had we been overwhelmed like they have been in other parts of the world,” he says.

    “But that would most likely achieve worse outcomes for all patients in ICU than they have in other parts of the world compared with our best model of care that we’ve been able to provide to date.”

    The message is starting to get through to those who make the decisions, he says.

    Intensive care meetings
    Intensive care bodies are meeting with the Ministry of Health twice a week and there is work underway to try to recruit more nurses from overseas, he says.

    But it has to go beyond talk and into action, first to sort the short-term problem and then to keep building on that over the next several years.

    “The next pandemic is inevitable … it might be in 10 years, it might be in 100 years, but it is coming,” Bevan says.

    Little says he has also asked for decisions on three DHBs’ proposals to expand ICU capacity to be “accelerated”, but even then, those “will be some months away — they won’t be instant”.

    This article is republished under a community partnership agreement with RNZ.

    This post was originally published on Asia Pacific Report.

  • Legislation introduced in both the House and Senate today will force cops to obtain a warrant before extracting information stored in the computers onboard modern cars, closing what the bill’s sponsors say is a glaring, outdated loophole through the Fourth Amendment.

    Recent automobile models rely heavily on computers for everything from navigation to engine diagnostics to entertainment, and entice drivers to connect their smartphones for added features and convenience. These systems log drivers’ movements while also downloading deeply sensitive personal information from their smartphones over Bluetooth or Wi-Fi — typically silently, without their knowledge or consent.

    The conversion of cars into four-wheeled unprotected databases, with troves of information about owners’ travels and associates, has presented low-hanging fruit for law enforcement agencies, which are able to legally pull data off a vehicle without the owner’s knowledge. They are aided by a small but lucrative industry of tech firms that perform “vehicle forensics,” extracting not only travel data but often text messages, photos, and other private data from synced devices. Critics say this exploits a dangerous gap in the law: If police want to search the contents of your smartphone, the Fourth Amendment demands that they obtain a warrant first; if they want to search the computer built into your car, they don’t need any such permission, even if they end up siphoning data that originated on the exact same smartphone.

    The new legislation, titled “Closing the Warrantless Digital Car Search Loophole Act,” would bar such warrantless searches; evidence from them would be inadmissible in court, to establish probable cause, or for use by regulatory agencies. The measure was introduced in the Senate by Oregon Democrat Ron Wyden and Wyoming Republican Cynthia Lummis, and in the House by Rep. Peter Meijer, the Republican representing West Michigan’s 3rd Congressional District, and Rep. Ro Khanna, the Democrat in the San Francisco Bay Area’s 17th.

    “The idea the government can peruse digital car data without a warrant should sit next to the Geo Metro on the scrap heap of history,” Wyden said in an advance announcement shared with The Intercept.

    In May, The Intercept reported that U.S. Customs and Border Protection had contracted with MSAB, a Swedish company specializing in digital device cracking, to purchase vehicle forensics kits manufactured by Berla, an American firm. MSAB marketing materials make clear how powerful these kits are, touting the ability to pull “[r]ecent destinations, favorite locations, call logs, contact lists, SMS messages, emails, pictures, videos, social media feeds, and the navigation history of everywhere the vehicle has been,” as well as data that can be used to determine a target’s “future plan,” and “[i]dentify known associates and establish communication patterns between them.”

    CBP’s use of such tools is among the warrantless uses of car data that would be blocked by the new bill, Wyden spokesperson Keith Chu confirmed.

    “New vehicles are computers on wheels and should have the same 4th Amendment protections.”

    The bill protects a diverse range of data collected by today’s cars, including “all onboard and telematics data” in the vehicle or in attached “storage and communication systems,” including “diagnostic data, entertainment system data, navigation data, images or data captured by onboard sensors, or cameras, including images or data used to support automated features or autonomous driving, internet access, and communication to and from vehicle occupants.”

    There are carveouts; the bill exempts vehicles that require a commercial license to drive as well as traffic safety research and situations subject to “emergency provisions in the wiretap act and the USA Freedom Act, enabling the government to get a warrant after the fact,” according to an overview shared by Wyden’s office.

    The bill has endorsements from an array of left-leaning groups, including due process advocates like the American Civil Liberties Union and Electronic Frontier Foundation. But the Republican backing underlines that digital privacy and surveillance concerns resonate across party lines. “New vehicles are computers on wheels, and my constituents in Wyoming should have the same 4th Amendment protections for their vehicles as they do for their phones and home computers,” Lummis said in the announcement.

    The post Bipartisan Bill Seeks to Stop Warrantless Car Spying by Police appeared first on The Intercept.

    This post was originally published on The Intercept.

  • The move may surprise some, given Meta’s recent announcement that it would be shutting down one of its Face Recognition features

    This post was originally published on The Asian Age | Home.

  • The revelation, a few years ago, that the US National Security Agency (NSA) has been conducting mass surveillance on millions of Americans has reignited the conversation on governments’ misconduct and their violation of human rights and privacy laws.

    Until recently, however, Israel has been spared due criticism, not only for its unlawful spying methods on the Palestinians but also for being the originator of many of the technologies which are now being heavily criticized by human rights groups worldwide.

    Even at the height of various controversies involving government surveillance in 2013, Israel remained on the margins, despite the fact that Tel Aviv, more than any other government in the world, uses racial profiling, mass surveillance and numerous spying techniques to sustain its military occupation of Palestine.

    In Gaza, two million Palestinians are living under an Israeli blockade. They are surrounded by walls, electric fences, underground barriers, navy ships and multitudes of snipers. From above, the tannaana, the Arabic slang word Palestinians have for unmanned drone, watches and records everything. At times, these armed drones are used to blow up anything deemed suspicious from an Israeli ‘security’ perspective. Moreover, every Palestinian wishing to leave or return to Gaza—with only a few who are allowed such privilege—is subjected to the most stringent ‘security’ measures, involving various government intelligences and endless military checks. This applies as much to a Palestinian toddler as it does to a terminally-ill woman.

    In the West Bank, Israel’s security ‘experiment’ takes on many other manifestations. While the Israeli objective is to entrap people in Gaza, its aim is to control the everyday life of Palestinians in the West Bank and East Jerusalem. Aside from the 1,660-kilometre-long Apartheid Wall, there are many other walls, fences, trenches and various types of barriers aimed at fragmenting Palestinian communities in the West Bank. These isolated communities are only connected through an elaborate system of Israeli military checkpoints, many of which are permanent, while many more are erected or dismantled depending on the ‘security’ objectives on any given day.

    Much of the surveillance occurs daily at these Israeli checkpoints. While Israel uses the convenient term ‘security’ to justify its practices against Palestinians, actual security has very little to do with what takes place at these checkpoints. Many Palestinians have died, many mothers have given birth or lost their newborns while waiting for Israeli security clearance. It is a daily torment, and Palestinians are subjected to it because they are the unwitting participants of a very profitable Israeli experiment.

    Luckily, the news of Israel’s undemocratic practices is becoming increasingly known. On November 8, The Washington Post revealed an Israeli mass surveillance operation, which uses ‘Blue Wolf’ technology to create a massive database of all Palestinians.

    This additional measure gives soldiers the chance to, using their own cameras, take pictures of as many Palestinians as possible and match “them to a database of images so extensive that one former soldier described it as the army’s secret ‘Facebook for Palestinians.’”

    We know very little about this new ‘Facebook for Palestinians’, aside from what has been revealed in the news. However, we know that Israeli soldiers compete to take as many photos of Palestinian faces as possible, as those with the highest number of photos could potentially receive certain rewards, the nature of which remains unclear.

    While the ‘Blue Wolf’ story is receiving some attention in international media, it offers nothing new for Palestinians. To be a Palestinian living under occupation is to carry multiple permits and magnetic cards, to pass various clearances, to have your photo taken regularly, to have your movement monitored, to be ready to answer any question about your friends, your family, co-workers and acquaintances. When that is impractical, because, say, you live under siege in Gaza, then the work is entrusted to unmanned drones scanning sky, earth and sea.

    The reason that ‘Blue Wolf’ is receiving some traction in the media is that Israel has been recently implicated in one of the world’s greatest espionage operations.

    Pegasus is a type of malware that spies on iPhones and Android devices, extracting photos, messages and emails, and recording calls. Tens of thousands of people around the world, many of whom are prominent activists, journalists, officials, business leaders and the like, have fallen victim to this operation. Unsurprisingly, Pegasus is produced by the Israeli technology firm, the NSO Group, whose products are heavily involved in the monitoring of and spying on Palestinians, as confirmed by the Dublin-based Front Line Defenders, and as reported in the New York Times on November 8.

    Sadly, Israel’s unlawful and undemocratic practices became a subject of international condemnation only when the victims were high-ranking personalities, the likes of French President Emmanuel Macron and others. When Palestinians were on the other end of Israeli spying, surveillance and racial profiling, the story seemed unworthy of reporting.

    Worse, for many years, Israel has promoted its sinister ‘security technology’ to the rest of the world as ‘field-proven’, meaning that they have been used against occupied Palestinians. Far from raising eyebrows, the tried and tested brand allowed Israel to become the world’s eighth-largest arms exporter. Israeli security exports are now utilized in many parts of the world. They can be found at North American and European airports, at the Mexico-US border, in the hands of various intelligence agencies, and in European Union territorial waters — largely to intercept war refugees and asylum seekers.

    Covering up Israel’s unlawful and inhuman practices against the Palestinians has proven a liability for the very people who justified Israeli actions in the name of security, including Washington. On November 3, the Joe Biden Administration decided to blacklist the Israeli NSO Group for acting “contrary to the national security or foreign policy interests of the United States.” This is a proper measure, of course, but it fails to address the ongoing Israeli violations against the Palestinian people.

    The truth is, for as long as Israel maintains its military occupation of Palestine, and as long as the Israeli military continues to see Palestinians as subjects in a mass ‘security experiment’, the Middle East—in fact, the entire world—will continue to pay the price.

    The post From Pegasus to Blue Wolf: How Israel’s “Security” Experiment in Palestine Became Global first appeared on Dissident Voice.

    This post was originally published on Dissident Voice.

    British government contractor Serco has been forced to abandon plans to bid for contracts at Britain’s Atomic Weapons Establishment (AWE) after investors threatened to sell their shares in the company, reports Linda Pearson.

    This post was originally published on Green Left.

  • QAnon is a far-right conspiracy movement, which revolves around false claims made by anonymous individuals, much of it online.

    In this edited excerpt of her new book ‘QAnon and On’, Van Badham details misogyny and gendered hate within this cohort.

    Anita Sarkeesian was a graduate student at York University in Canada. Her research critiqued the representation of women in genre tropes across popular culture, and she maintained a website with a YouTube channel, Feminist Frequency, that documented her work.

    In June 2012 she launched an online Kickstarter fundraiser to make a series of videos that extended her analysis to video games. ‘I love playing video games but I’m regularly disappointed in the limited and limiting ways women are represented,’ Sarkeesian wrote in her pitch asking for contributions.

    To make Tropes vs. Women in Video Games, she needed six thousand dollars to cover production costs.

    ‘[T]its or back to the kitchen, bitch,’ came one comment.

    ‘LESBIANS: THE GAME is all this bitch wants,’ came another.

    They continued: ‘You are a hypocrite fucking slut’; ‘I’ll donate $50 if you make me a sandwich’; ‘She needs a good dicking, good luck finding it though’; ‘I hope you get cancer :)’; ‘Back to the kitchen, cunt.’

    And more. Thousands and thousands more, across all platforms where Sarkeesian had a presence – two thousand within a week on her YouTube channel alone.

    A post on Feminist Frequency reported that a ‘coordinated attack’ had mobilised, trying to get Sarkeesian’s accounts banned, sending her torrents of abuse – including rape and death threats – even editing her Wikipedia page to describe her as a ‘cunt’ and transforming her profile picture to porn. On her Tumblr, Sarkeesian identified that ‘a dozen or more different people were working together to vandalize’ the Wikipedia entry alone.

    The abuse of Anita Sarkeesian was so intense, it became an international news story.

    French website Madmoizelle pegged the blame for the attack squarely on ‘a bunch of 4channers doing everything on the Internet to destroy her’.

    It seemed that someone had brought Sarkeesian’s Kickstarter to 4chan’s attention, as they had abuse victim Jessi Slaughter and so many other targets before.

    The joke at the time was that 4chan did more to resource a feminist critique of gaming than Sarkeesian could have ever managed on her own. In the wake of publicity about the abuse, donations made in solidarity with Sarkeesian flowed into the Tropes vs. Women in Video Games fundraiser. She had asked for $6000. She received $158,922.

    The videos got made but laughs for the woman herself were thin on the ground.

    Anita Sarkeesian’s life was transformed: the engaged and eager 28-year-old academic became a woman made famous for her public abuse, because that abuse didn’t stop when she surpassed her fundraising goals, or even when she made her videos. The attention from 4chan had made her into a new kind of internet celebrity: the online feminist folk villain.

    Over the next few years, the abuse and death threats continued, and there were ongoing attempts to hack her accounts, shut down her sites and dox her. The attention had made her videos hugely popular, but while she was provided with international platforms in the media and at conferences to discuss her work, she was subjected to bomb and mass shooting threats at public appearances. She was falsely reported to the FBI and IRS for investigation.

    Cover - QAnon and On

    In QAnon and On, Van Badham delves headfirst into the QAnon conspiracy theory, unpicking the why, how and who behind this century’s most dangerous and far-fetched internet cult. 

    She was sent images that depicted video-game characters raping her and had a video game made about her. It was called Beat Up Anita Sarkeesian, and in it the player punches an image of her face until it is misshapen, cut and bloody.

    She was also the subject of conspiratorial, crowd-funded amateur documentaries. Two men behind these films, Jordan Owen and Davis Aurini, worked sometimes together, sometimes apart, but shared a mission to prove Sarkeesian was a fraud and manipulator.

    Their most infamous project was a crowd-funded documentary called The Sarkeesian Effect.

    Aurini self-identified as an ‘intellectual’ and shared white nationalist opinions on a YouTube channel and on Reddit. He was rumoured to hang out on 4chan’s /pol/ board.

    Owen was also a YouTuber, as well as a gamer, and a composer of ‘modern orchestral dance music’. Both men were enraged that Anita Sarkeesian had been recognised for her feminist advocacy at events like the Game Developers Choice Awards and denounced her in The Sarkeesian Effect as ‘a bully like [the video game industry] had never encountered before, a bully that used guilt and political correctness to have her way’.

    Their films and public comments repeated the online myth that her stories of harassment were a lie – a pity-eliciting grift to propel her to fame and riches – even as their own projects actively harassed her.

    For all the exuberance of their attempts to have Anita ‘cornered’, the only major revelation of Aurini and Owen’s film was that she got her correspondence delivered to a post-office box rather than a street address. An insistence of Aurini’s that she had lied about reporting her harassment to the police turned out to be incorrect.

    When the police located her harassment reports, Aurini retorted online that this news merely ‘compounded’ his questions rather than answering them. He offered no explanation for this reasoning.

    Writer David Futrelle from the anti-misogynist website We Hunted the Mammoth followed the story of the Sarkeesian films and blogged about it. He saw a genuine desperation within these projects for ‘the terrible things people say about Sarkeesian to all be proved true’.

    For a start, there was the issue that they were sourcing money from people on a promise to validate the energy ‘half the internet’ had put into hounding her.

    There was also, Futrelle observed, a monstrous, sometimes admitted, envy of Sarkeesian among these people. She was able to raise more money from her projects than they could for theirs. She was invited to game industry parties when they were not. Her work was influencing a mainstream conversation.

    Futrelle described a pressing, psychological need he saw in Jordan Owen to delegitimise her. Any suggestion that Anita Sarkeesian may not be the creature Owen wanted her to be, wrote Futrelle, ‘actually seemed to plunge him into something close to an existential crisis’.

    Around these men, their projects, the online movement against Anita Sarkeesian and forums like 4chan, a new ecology was growing. The internet is the technology that offers the vastest storehouse of humanity’s learned truths in all our history, yet the accessibility of internet communities, their global reach and the rapidity of their communications were creating spaces where participants could affirm and reaffirm wilful myths to any audience that was eager to believe them. Years later, this phenomenon would be called ‘post-truth’.

    When it came to the likes of Anita Sarkeesian, the scheming ‘SJW’ villain her antagonists wanted her to be was a far more compelling story than the video-game-playing feminist academic she really was. The same keyboards and screens used to demonise her as an agent of a ‘politically correct’ conspiracy were ones on which a mere few clicks could establish she was not a demon at all. The very proximity to empirical evidence made the deliberate choice to ignore it more conspicuous – and disturbing.

    Feature image: “Anita Sarkeesian” by theglobalpanorama is licensed under CC BY-SA 2.0

    The post When misogynist cyberhate turns you into international news appeared first on BroadAgenda.

    This post was originally published on BroadAgenda.

  • Instagram has worked with third-party experts to develop the ‘Take a Break’ feature

    This post was originally published on The Asian Age | Home.

  • The US Department of Commerce revealed on November 3 that it would be adding Israeli-based spyware developers NSO Group and Candiru to its blacklist “based on evidence that these entities developed and supplied spyware to foreign governments that used these tools to maliciously target government officials, journalists, businesspeople, activists, academics and embassy workers”, reports Binoy Kampmark.

    This post was originally published on Green Left.

  • Google’s newest office buildings in Mountain View, California are covered in silver scales. Some 90,000 squares ripple across four rooftops, near the tech giant’s headquarters, each overlapping slat a solar panel. Once operating next year, they should be able to meet roughly 40 percent of the four buildings’ electricity needs.  

    These “dragonscale” rooftops are perhaps the most eye-catching example of Google’s larger climate goals, which involve using only carbon-free energy at its nearly two dozen data centers and 70 offices worldwide by 2030. Google says the unique installations might do more than limit emissions at its new Bay View and Charleston East offices. They could also pave the way for buildings across the country to adopt the reptilian design — if dragonscales can overcome the same barriers in the way of other novel solar technologies.

    The idea is “to kickstart this market in the U.S. by showing it can be done,” said Asim Tahir, who leads Google’s renewable energy strategy, in an interview with Grist. The four solar arrays will have a combined installed capacity of 7 megawatts, or enough to power roughly 1,800 average homes in California. 
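
    As a rough check on that homes comparison, here is a back-of-the-envelope sketch of how such an equivalence is typically computed. The capacity factor and household consumption figures below are assumptions for illustration only; they are not numbers given by Google or Grist.

    ```python
    # Back-of-the-envelope check of the "7 MW ~ 1,800 homes" comparison.
    # Capacity factor and household usage are assumed illustrative values,
    # not figures provided by Google or Grist.
    installed_capacity_kw = 7_000     # 7 MW of dragonscale shingles (from the article)
    capacity_factor = 0.18            # assumed average output as a share of nameplate
    hours_per_year = 8_760
    avg_home_kwh_per_year = 6_000     # assumed average California household consumption

    annual_output_kwh = installed_capacity_kw * capacity_factor * hours_per_year
    homes_powered = annual_output_kwh / avg_home_kwh_per_year
    print(f"Roughly {homes_powered:,.0f} homes' worth of annual electricity")
    # About 1,800 homes under these assumptions, in line with the article's figure.
    ```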

    Google’s initiative arrives as engineers and building developers worldwide are trying to transform homes, offices, and factories from energy hogs into energy-efficient properties. In 2019 — before the pandemic disrupted office life and everything else — buildings churned out a record level of energy-related carbon dioxide emissions, accounting for 28 percent of the global total, according to the Global Alliance for Buildings and Construction. Residential and commercial buildings continue drawing electricity from carbon-intensive grids and burning natural gas for heating and cooking.

    To curb emissions, many companies are installing LED lighting systems in their offices, adding thick insulation panels made of wood and styrofoam and, as Google is doing, building geothermal systems that store heat underground to warm buildings and water supplies from the bottom up. A much smaller number of properties use “building-integrated photovoltaics,” which embed solar cells into the actual roof tiles, windows, or façades. Google’s dragonscale shingles don’t sit atop the canopied roof; they’re part of the roof itself.

    “We need to cover every surface that can take solar panels with solar panels,” Tahir said.

    U.S. solar firms have been trying to do just that for more than a decade. Yet despite rising interest from homeowners and commercial building owners, building-integrated solar technologies have struggled to achieve anything close to the mainstream success of conventional solar panels mounted on rooftops or arrayed on the ground. Of the more than 3,000 megawatts of residential solar installed in 2020, roughly 1 percent were solar shingles, according to Paula Mints, who runs the global solar research firm SPV Market Research.

    A key reason is cost. Today, solar shingles are made in limited amounts by specialized manufacturers, and installing them requires a specialized labor force. Such constraints recently led Tesla to hike the price of its Solar Roof projects by tens of thousands of dollars, in some cases. By contrast, with a typical silicon solar panel, all the raw materials and components are produced in high volumes in giant factories, mainly in China. Installing panels is by now a relatively easy, inexpensive task that happens thousands of times a year.

    Another challenge for solar shingles is that they’re generally less efficient at turning sunlight into electricity. This is primarily a problem with heat. As the silicon solar cells inside panels or shingles heat up, they gradually produce less power. Panels can be elevated slightly off the rooftop, allowing air to circulate and cool them down, but shingles are often tightly knitted together with little air flow.

    “Because they’re integrated into the roof, they usually get very hot and typically produce 20 to 30 percent less electricity than an average solar panel would,” said Vikram Aggarwal, CEO of EnergySage, an online marketplace for solar arrays. This also means shingles generate less solar power during the sunniest, hottest time of the day. As a result, it costs significantly more to generate a kilowatt-hour of power with building-integrated solar versus a typical residential array, he said.
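
    To see how that output penalty feeds into cost, here is a small, purely illustrative calculation. The system cost and lifetime output used below are invented round numbers, not prices or production data from EnergySage or any vendor.

    ```python
    # Illustrative effect of a 20-30% output penalty on cost per kWh.
    # System cost and lifetime yield are invented round numbers, not data
    # from EnergySage or any manufacturer.
    system_cost_usd = 30_000        # assumed installed cost for either system
    panel_lifetime_kwh = 250_000    # assumed lifetime output of a conventional array
    derating = 0.25                 # midpoint of the 20-30% shortfall quoted above

    shingle_lifetime_kwh = panel_lifetime_kwh * (1 - derating)
    cost_per_kwh_panels = system_cost_usd / panel_lifetime_kwh
    cost_per_kwh_shingles = system_cost_usd / shingle_lifetime_kwh
    extra = cost_per_kwh_shingles / cost_per_kwh_panels - 1

    print(f"Panels:   ${cost_per_kwh_panels:.3f}/kWh")
    print(f"Shingles: ${cost_per_kwh_shingles:.3f}/kWh ({extra:.0%} more)")
    # A 25% output shortfall alone makes each kilowatt-hour about 33% more
    # expensive, before any difference in hardware or installation cost.
    ```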

    Both Aggarwal and Mints said it was too early to tell how Google’s dragonscales compare to other solar technology, or how they might compete in the U.S. solar market. The tech giant has disclosed few technical details, and the shingles won’t start fully producing power until the California installations are completed next year.

    But the dragonscales have at least one obvious advantage: They can carpet the sloping segments of Google’s roofs in ways that standard roof-mounted panels cannot. 

    “Dragon scale” solar panels are visible on the roof of Google’s Bay View campus. Chris McAnneny / Heatherwick Studio

    The tops of Google’s new offices in Mountain View — which, to make another reptilian comparison, resemble turtle shells — feature long glass cut-outs that allow as much natural light to enter as possible without turning the buildings into sweltering greenhouses, Tahir said. Google engineers designed solar arrays to cover the remaining parts of the roof, building miniature versions in their own lab and consulting with outside solar manufacturers, including the French company SunStyle.

    At the heart of a dragonscale shingle is a standard silicon solar cell. Everything else is unique. The cells are kept beneath a layer of highly textured “prismatic” glass, which Google claims can trap light particles within the solar shingle that would escape from a traditional flat panel. The shingles are covered with a specially developed anti-glare coating, keeping them from blinding pilots landing at a nearby federal airfield. The solar shingles are also arranged using SunStyle’s overlapping diamond configuration, to keep wind, rain, and ice from slipping through the cracks.

    SunStyle itself has installed similar-looking arrays on more than 500 roofs in Europe and two properties in the United States: First Equity Bank in Skokie, Illinois, and an architect’s house in New York’s Hudson Valley. The 14-year-old company says it’s now launching a more affordable, ready-made version of dragonscales for the U.S. mass market. 

    Jessie Schiavone, CEO of SunStyle North America, said the company will initially market their panels to homeowners who are planning to replace their roofs and also want to install solar. A solar-shingle roof should cost about the same as a new typical roof with panels mounted on top, except that the solar shingles will be able to cover more surfaces, like eaves and ridges.

    It won’t hurt that Google’s buildings are essentially billboards for the shimmering, power-producing roofs. Mints of SPV Market Research said that with other premium solar technologies, including Tesla’s Solar Roof, aesthetics and brand association are often just as important considerations as cost and solar power production — at least for customers who can afford it. “What you really have is a cool-looking roof market,” she said.

    This story was originally published by Grist with the headline Enter the dragonscale: Google looks to jumpstart a new market for rooftop solar. on Nov 9, 2021.

    This post was originally published on Grist.

  • As per sources, Netflix will tiptoe around Apple’s rules by making its games available via the App Store

    This post was originally published on The Asian Age | Home.

  • BroadAgenda Research Wrap is your regular window into academia. We scour the journals so you don’t have to.

    No topic too impenetrable, no research too eclectic; BroadAgenda Research Wrap brings you a glimpse of the latest gender research around the world – in plain English.

    Hands up, who was expecting 2021 to end with yet another dystopian reality?

    For the last couple of months, we’ve been treated to our usual smorgasbord of patriarchy. From submarine tussles to the mostly male show at COP26, the news has been as predictable as it has been exasperating.

    Enter Meta. The reincarnation of Facebook wants to blur the lines between our physical world and virtual realities, a move which, in the hands of a select few individuals and lacking proper regulation, should make anyone shudder. (Of course, the ‘innovation’ aspect could also be contested – since, as most parents of under-12s would know, Roblox and Fortnite are already ahead of the proverbial and literal game – but I digress).

    So, what are the gendered implications of technology – whether old or new?

    With rather serendipitous timing, Australian Feminist Studies has published a special issue entitled ‘Gender, Technology and Trust: Feminist Reflections on Mobile and Social Media Practices’. Edited by Jess Hardley, Caitlin McGrane and Ingrid Richardson, the issue provides an excellent snapshot of recent feminist thinking as it explores the ever-present question of ‘trust’ in the age of digital media.

    And trust, as Indigenous researcher Professor Bronwyn Carlson details, is complicated. For non-Indigenous people, the intersections of critical studies, technology, culture and society look vastly different than they do for Indigenous populations, who also factor in colonial history and technology, and the ways in which identities are inherently linked to place, genealogy, kinship and language.

    While, globally, Indigenous people have been early adopters of digital technologies, and social media technologies have become a central part of their everyday lives, the platforms also mirror the constraints of the real world, sometimes further amplifying them.

    Online, Indigenous women and Indigenous LGBTQI+ people are exposed to the extremes of colonial discourse, and often experience direct threats of violence including rape. The trauma incited by online racism is incalculable.

    Consequently, “trust according to the Indigenous episteme,” she writes, “is based on tens of thousands of years of knowledge that has promised and delivered our survival, despite and through the enduringness of colonialism”. Trusting the ‘system’ – or in this instance the online spaces – demands that the system incorporates the views and experiences of Indigenous actors online.

    As it stands, it is hard to imagine how a metaverse originating from Silicon Valley might change the current course of action, in particular if the inherent power imbalances at all levels remain the same. We can but hope. (And regulate!)

    (Read the full article, Indigenous Internet Users: Learning to Trust Ourselves, 2021.)

    How can individuals then use online spaces for sharing lived experiences and forming communities, while simultaneously mitigating the harms that may arise in free-for-all spaces?

    ‘Removing the Mask: Trust, Privacy and Self-protection in Closed, Female-focused Facebook Groups’ (2021) draws on three different closed Facebook groups created by and for women: Australian ‘mum bloggers’ and readers, Australian Defence Force partners, and migrant mothers in Australia.

    The authors investigate the women’s motivations for creating and participating in closed online spaces, their expectations of privacy and safety, and the consequences of potential privacy breaches. They write:

    In effect, women’s creation and curation of closed Facebook groups provides a kind of shielded environment in which they can – to a certain extent – let their mask slip, provide mutual support and information, and in some cases enact non-compliance with social norms.

    However, the tenuous nature of safety and trust in such environments, where privacy can easily be compromised by a simple screenshot, means that the individual masks may be lowered, but not completely removed.

    The authors also remain doubtful of the closed groups’ usefulness as a vehicle for broader social change. As they note, in providing the safe space the groups may simultaneously hide the complexities of women’s lives from broader public discourse; the group identity may also implicitly exclude marginalised and/or diverse individuals; and we can’t ignore the “paradox of creating ‘private’ spaces within the architecture of the commercial platform that monetises personal information”.

    Perhaps most ironically, however, the solution to the problem also constitutes a significant problem. As the authors aptly note, these curated safe spaces rely on a significant amount of intensive labour – both material and emotional – from group administrators and participants alike.

    Regardless of one’s identity, I think most of us would agree that women and mothers don’t need any more unpaid labour on their plate.

    Finally, let’s look at the ways in which trust plays out with specific subject matters online.

    In ‘The “Be All and End All”? Young People, Online Sexual Health Information, Science and Skepticism’ (2021), Adrian Farruga et al. provide a fascinating snapshot of young people in Australia and their trust in sexual health information online.

    Reassuringly, the researchers found that rather than being uncritical consumers, many young people “do not readily trust online sexual health resources”, and “many desire factual sexual health information produced by experts and backed by credible research”.

    But there’s always a ‘but’. The subject matter also influences the types of information people seek out. In the context of this research, young people sought medical advice from sources purporting to present the ‘facts’, whereas in matters of relationships and sexual practices, their emphasis was on life experiences.

    Regardless of content, the research participants approached the information sceptically and described strategies for assessing the trustworthiness of online information. As such, as the researchers note, trust in this context is “developed through everyday, iterative practices of critical engagement”.

    In the context of online disinformation, individuals’ ability to claim expertise without due diligence, and general distrust of news and institutions more broadly, the implications of their research insights cannot be overemphasised.

    Perhaps children will be our future after all?

     

    The post Women and trust in the digital age appeared first on BroadAgenda.

    This post was originally published on BroadAgenda.

  • Analysis: while identity of hackers is not known in this case, Palestinians have long been spied on by Israeli military

    The disclosure that Palestinian human rights defenders were reportedly hacked using NSO’s Pegasus spyware will come as little surprise to two groups of people: Palestinians themselves and the Israeli military and intelligence cyber operatives who have long spied on Palestinians.

    While it is not known who was responsible for the hacking in this instance, what is very well documented is the role of the Israeli military’s 8200 cyberwarfare unit – known in Hebrew as the Yehida Shmoneh-Matayim – in the widespread spying on Palestinian society.

    Continue reading…

    This post was originally published on Human rights | The Guardian.

  • Investigation finds rights activists working for groups accused by Israel of being terrorist were previously targeted by NSO spyware

    The mobile phones of six Palestinian human rights defenders who work for organisations that were recently – and controversially – accused by Israel of being terrorist groups were previously hacked by sophisticated spyware made by NSO Group, according to a report.

    An investigation by Front Line Defenders (FLD), a Dublin-based human rights group, found that the mobile phones of Salah Hammouri, a Palestinian rights defender and lawyer whose Jerusalem residency status has been revoked, and five others were hacked using Pegasus, NSO’s signature spyware. In one case, the hacking was found to have occurred as far back as July 2020.

    Continue reading…

    This post was originally published on Human rights | The Guardian.

  • Google has not revealed why it has decided to do away with this particular feature

    This post was originally published on The Asian Age | Home.

  • Web Desk:

    According to Hindustan Times, the next frontier in line to be conquered is the air, and humanity has already done it. But vehicles like the Xturismo intend to bring something a little more exciting to the skies above us.

    You may have seen flying cars in action in real life, or in videos. But this week, people in Japan witnessed a flying bike whooshing past them and hovering mid-air during a demonstration.

    Japanese company A.L.I. Technologies recently showed off its own take on a flying motorcycle on a racetrack, which isn’t as exciting as it sounds, but it certainly is loud.

    During the demonstration, the XTURISMO Limited Edition flying bike was seen floating in the air with a roaring sound in front of the grandstand at the Fuji racing track. It was also seen slowly swirling in the air while drawing a figure of eight.

    ALI Technologies has already started to accept bookings for the XTURISMO Limited Edition from October 26. The company will only manufacture 200 units of these flying bikes. The price of the XTURISMO Limited Edition is 77.7 million yen, including tax and insurance premiums.

    The XTURISMO flying bike weighs about 300 kg. It is 3.7 meters long, 2.4 meters wide and 1.5 meters high, and can only seat a pilot for now.

    According to the company, the flying bike has a cruising time ranging between 30 and 40 minutes. While the maximum speed of the hoverbike has not been disclosed by the company as of now, it was seen clocking around 100 kmph during the demonstration.


    President and CEO of ALI Technologies, Daisuke Katano said, “We started developing hoverbikes in 2017. It is expected that air mobility will expand in the future, but first of all, it is expected to be used in circuits, mountainous areas, at sea, and in times of disaster. I am happy to introduce it as the first step of the XTURISMO that is being done and as one of the new lifestyles.”

    The company said that the delivery of the first units of these limited edition flying bikes will start as early as the first half of next year.

     

    This post was originally published on VOSA.

  • The other bit of information is that the Galaxy A33 will most likely be 5G-only

    This post was originally published on The Asian Age | Home.

  • The Treasury Department has in recent months expanded its digital surveillance powers, contracts provided to The Intercept reveal, turning to the controversial firm Babel Street, whose critics say it helps federal investigators buy their way around the Fourth Amendment.

    Two contracts obtained via a Freedom of Information Act request and shared with The Intercept by Tech Inquiry, a research and advocacy group, show that over the past four months, the Treasury acquired two powerful new data feeds from Babel Street: one for its sanctions enforcement branch, and one for the Internal Revenue Service. Both feeds enable government use of sensitive data collected by private corporations not subject to due process restrictions. Critics were particularly alarmed that the Treasury acquired access to location and other data harvested from smartphone apps; users are often unaware of how widely apps share such information.

    The first contract, dated July 15 at a cost of $154,982, is with Treasury’s Office of Foreign Assets Control, a quasi-intelligence wing responsible for enforcing economic sanctions against foreign regimes like Iran, Cuba, and Russia. According to contract documents, OFAC investigators can now use a Babel Street tool called Locate X to track the movements of individuals without a search warrant. Locate X provides clients with geolocational data gleaned from mobile apps, which often relay your coordinates to untold third parties via advertisements or pre-packaged code embedded to give the app social networking features or study statistics about users. This commercial location data exists largely in a regulatory vacuum, acquired by countless apps and bought, sold, and swapped between an incredibly vast and ever-growing ecosystem of ad tech firms and data brokers around the world, eventually landing in the possession of Babel Street, who then sells search access to government clients like OFAC.

    Critics of the software say it essentially allows the state to buy its way past the Fourth Amendment, which protects Americans from unreasonable searches. The contract notes that OFAC’s Office of Global Targeting will use Locate X for “analysis of cellphone ad-tech data … to research malign activity and identify malign actors, conduct network exploitation, examine corporate structures, and determine beneficial ownership,” a rare public admission by the government of its use of personal location data acquired with cash rather than a warrant. The contract does not indicate any restrictions or intentions around whether Locate X will be used against U.S. persons or foreigners.

    The contract provided to Tech Inquiry is heavily redacted in important sections that appear to discuss how Locate X will actually be used. Prior reporting on how other federal entities use Locate X, including a 2020 report by Protocol, indicates it allows agents to instantly determine what people were at a particular location at a particular time — and even where they arrived from and where they’d traveled in the preceding months.

    “My office has pressed Babel Street for answers. They won’t even put an employee on the phone.”

    Babel Street has claimed its location data is “anonymized,” meaning the harvested coordinates are tied not to an individual’s name but to a random string of characters. But researchers have found time and time again that deanonymizing precise historical location data is trivial. Indeed, in 2020 a company source told Motherboard “we could absolutely deanonymize a person” and that employees would “play with it, to be honest.” Oddly, despite the fact that the contract states Locate X will aid OFAC in “implementing its sanctions programs” — which is to say, enforcing the law — Babel Street’s terms of service, included in the FOIA response, state: “Due to the varied nature of Third-Party Data and Babel Street’s inability to attest to the accuracy of Third-Party Data (including any results Customer may obtain), Third-Party Data may be unsuitable for use in legal or administrative proceedings.”

    In an emailed statement, Democratic Oregon Sen. Ron Wyden, a vocal critic of ad-based geolocation, told The Intercept, “As part of my investigation into the sale of Americans’ private data, my office has pressed Babel Street for answers about where their data comes from, who they sell it to, and whether they respect mobile device opt-outs. Not only has Babel Street refused to answer questions over email, they won’t even put an employee on the phone.”

    Neither the Department of Treasury nor Babel Street responded to a request for comment on either contract.

    Corporate Surveillance vs. Constitutional Rights

    The government has long been able to pinpoint your location by tracking your phone through your mobile carrier, but the Supreme Court’s 2018 decision in Carpenter v. United States made clear that it needs a warrant to do so. The sprawling and unscrupulous digital advertising industry, constantly vacuuming up details of your movements in order to better target you with ads, provides a convenient loophole. Locate X has drawn intense scrutiny and criticism by allowing government agents to sidestep constitutional hurdles like those provided by the Carpenter ruling.

    “It is clear that multiple federal agencies have turned to purchasing Americans’ data to buy their way around Americans’ Fourth Amendment rights,” Wyden added. Wyden is the co-sponsor of the Fourth Amendment Is Not For Sale Act, proposed legislation that would force law enforcement and intelligence agencies to obtain a court order for this sort of app data, rather than simply buying it from any willing broker.

    “OFAC’s use of ad-tech location tracking for economic sanctions is a troubling extension of the previously known usage by CBP, the FBI and Secret Service, IRS, and the DoD,” Jack Poulson, Tech Inquiry’s founder, told The Intercept (for which he once wrote an opinion piece). He added, “Rather than backing off their invasive surveillance of smartphone location data after a prominent vendor was caught spying on a popular Muslim prayer app, US intelligence and law enforcement agencies appear to be finding more use cases,” referring to the Motherboard investigation that found the government buying location data from a Locate X competitor harvested via a popular Quran app.

    Through a second contract, finalized on September 30 and totaling $150,000, Virginia-based Babel Street will provide another Treasury agency, the IRS, with software that “captures information from public facing digital media records” in order to detect individual and small business-owning tax dodgers through their online posts, a capability the office has sought previously. Though the contract language doesn’t mention specific social media platforms by name, the automated “monitoring” of sites like Twitter and Facebook is Babel Street’s bread and butter, a capability similar to that of rival Dataminr; a 2017 Motherboard report on Babel Street noted that it offers clients “access to over 25 social media sites, including Facebook, Instagram, and to Twitter’s firehose. … Babel Street’s filtering options are extremely precise, and allow for the user to screen for dates, times, data type, language, and—interestingly enough—sentiment.”

    That the IRS would want to track down those trying to avoid paying their share is of course unsurprising, but the contract provides scant information about how greatly expanding the surveillance of what Americans say online — Babel Street’s tool will be able to handle “at least 25,000 concurrent users” — will achieve this end. In its initial request for contract solicitations, presumably now provided by Babel Street, the IRS cited capabilities far beyond simply searching tweets and Instagram posts, requiring that the winner of the contract provide “available bio-metric data, such as photos, current address, or changes to marital status” for individuals targeted by the agency, “provide publicly available information of taxpayers’ past or present locations,” as well as “reports showing that a taxpayer participated in an online chat room, blog, or forum, and reports showing the chat room or blog conversation threads.”

    “Babel Street’s support for the IRS increasing its surveillance of small businesses and the self employed — after the IRS has already largely given up on auditing the ultrawealthy — is an example of the U.S. surveillance industry being used to help shift the tax burden to the working class,” Poulson said.

    The post The U.S. Treasury Is Buying Private App Data to Target and Investigate People appeared first on The Intercept.

    This post was originally published on The Intercept.

  • Twitter users investigated for sharing “disinformation and manipulative content”

    Thirty people are facing legal proceedings after Turkish police launched an investigation into the spread of rumours on social media that President Recep Tayyip Erdoğan had died.

    Twitter users who posted under the trending hashtag “olmuş” – roughly “is said to be dead” – were being investigated for sharing “disinformation and manipulative content”, a police statement issued on Wednesday said.

    Continue reading…

    This post was originally published on Human rights | The Guardian.

  • Web Desk:

    According to InsideEVs, at the ongoing COP26 climate summit, where delegates from around the world have gathered to discuss matters concerning climate change, a UK startup has introduced its own solution to help power electric vehicles and reduce vehicular carbon emissions.

    A UK startup called Zip Charge introduced its portable electric vehicle charger, the Go. It is compact, about the size and shape of an average carry-on suitcase, and weighs about 50 lbs. It has a 4 kWh (net capacity) battery and can charge an EV at 7.2 kW. However, larger versions will also be introduced, offering up to 8 kWh of net capacity.

    Photo Courtesy: insideevs.com

    In a little more than 30 minutes, the 4 kWh version can add approximately 12 to 20 miles of range, depending on how efficient the EV is. The larger 8 kWh unit needs about an hour to fully discharge and will add roughly 25 to 40 miles of driving range.
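
    Those figures are easy to sanity-check. The back-of-the-envelope sketch below reproduces them under one assumption that is not a Zip Charge specification: a typical EV efficiency of roughly 3 to 5 miles per kWh.

    ```python
    # Rough sanity check of the Go's quoted figures. The 3-5 miles-per-kWh
    # efficiency range is an assumed figure for typical EVs, not a Zip Charge spec.

    def range_added(battery_kwh, miles_per_kwh):
        """Driving range added by emptying the power bank into the car."""
        return battery_kwh * miles_per_kwh

    def discharge_time_minutes(battery_kwh, charge_rate_kw=7.2):
        """Time to empty the power bank at its 7.2 kW output."""
        return battery_kwh / charge_rate_kw * 60

    for capacity_kwh in (4, 8):
        low, high = range_added(capacity_kwh, 3), range_added(capacity_kwh, 5)
        minutes = discharge_time_minutes(capacity_kwh)
        print(f"{capacity_kwh} kWh unit: ~{low:.0f}-{high:.0f} miles in ~{minutes:.0f} minutes")

    # Prints:
    # 4 kWh unit: ~12-20 miles in ~33 minutes
    # 8 kWh unit: ~24-40 miles in ~67 minutes
    ```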

    The Go is designed to be simple to use at home, work, and other destinations, wherever a driver parks their car. The unit is charged from any regular household outlet and can be used outside in any weather condition.

    Photo Courtesy: insideevs.com

    Therefore, it can be used by those who don’t have the ability to plug into a home charging station – they can charge anywhere with the Go. It can also offer peace of mind to those concerned about running out of charge in an EV. It’s small and light enough to be carried in the trunk.

    Photo Courtesy: insideevs.com

    The charger is due to be available to customers from the end of next year.

    This post was originally published on VOSA.

  • Web Desk:

    The world’s largest streaming platform, Netflix, is now officially debuting in the world of mobile games. When updating the Netflix app for Android, users will find, as of today, a new tab or section with unusual but interesting content.

    Netflix said in a statement: “We love games, whether it’s physical games (Floor Is Lava), mind games (The Circle), or Squid Games. And we love entertaining our members. That’s why we’re excited to take our first step in launching Netflix games on mobile to the world.”

    https://about.netflix.com/en/news/let-the-games-begin-a-new-way-to-experience-entertainment-on-mobile

    As a result of the update, Netflix subscribers across the globe can now play five mobile games: Stranger Things: 1984 (BonusXP), Stranger Things 3: The Game (BonusXP), Shooting Hoops (Frosty Pop), Card Blast (Amuzo & Rogue Games), and Teeter Up (Frosty Pop).

    The new development is the result of years of experimentation by the video streaming company as it looks to go beyond offering movies and television series and to please both its investors and users with new experiences.

    It should be noted that none of these games have advertising, in-game purchases, or any other additional charges. All you need to play them is an active Netflix subscription and an updated Android smartphone. This could, however, be an introductory move, as Netflix may eventually use gaming as a source of additional revenue.

    Featured games will automatically adjust to the language set in each Netflix account, so all the user has to do is open a game and enjoy it. You can pick the games through the dedicated games row or games tab on mobile devices, or from the categories drop-down menu on tablets, and then download them via Google Play. Once downloaded, the games can be accessed directly through the Netflix app.

    To attract the masses, games on Netflix are available in as many languages as the platform offers for its regular content. However, if you have not selected a particular language, games will default to English. Some of the Netflix games can even be played offline.

    iPhone (iOS) users will have to wait a little longer. Nonetheless, according to the platform itself, the games will not take long to reach the Apple ecosystem as well.

    This post was originally published on VOSA.

  • The mobile games are available in many of the languages Netflix offers on service

    This post was originally published on The Asian Age | Home.

  • More than a third of Facebook’s daily active users have opted in to have their faces recognized by the social network’s system

    This post was originally published on The Asian Age | Home.

  • Clippy started off life in Office 97, offering hints for using Microsoft’s Office software

    This post was originally published on The Asian Age | Home.

  • The Committee had asked Facebook India to appear before them on November 18

    This post was originally published on The Asian Age | Home.

  • Safety issues have dogged Uber since its early days as a black car-hailing service. Car accidents and physical altercations have persisted despite Uber’s attempts to monitor its cars and vet drivers; reports of sexual violence in its vehicles eventually led Uber to admit it was “not immune” to the problem.

    Amid public backlash and calls to address rider safety, Uber rolled out a flashy “Safety First” initiative in 2018, adding features like 911 assistance to its app, tightening screening of drivers, and for the first time, issuing a safety report that outlined traffic fatalities, fatal physical assaults, and sexual assaults on its platform.

    That was before the pandemic. Over the past year and a half, safety took on new significance; where it used to mean drivers had to make riders feel taken care of, it now means that in addition to protecting riders from the virus, drivers have to figure out how to keep themselves healthy.

    And Uber again found itself persuading riders not to abandon its platform over safety fears — requiring drivers to submit a selfie verifying they’re wearing a mask, offering limited amounts of cleaning supplies, and asking riders to complete a safety check before getting into a vehicle.

    While Uber’s changes might ease some riders’ concerns, they don’t offer the same level of automation and scale that an algorithmic solution could. It’s a potential path hinted at by a series of Uber patents, granted from 2019 to last summer, which outline algorithmic scoring and risk prediction systems to help decide who is safe enough to drive for Uber.

    Taken together, they point to a pattern of experimentation with algorithmic prediction and driver surveillance in the name of rider safety. Similar to widely criticized algorithms that help price insurance and make decisions on bail, sentencing, and parole, the systems described in the patents would make deeply consequential decisions using digital processes that are difficult or impossible to untangle. While Uber’s quest to make safety programmatic is tantalizing, experts expressed concern that the systems could run afoul of their stated purpose.

    An Uber spokesperson wrote in an emailed statement that although the company is “always exploring ways that our technology can help improve the Uber experience,” it does not currently have products tied to the safety scoring and risk assessment patents.

    As the battle over drivers’ legal classification continues, after Uber and Lyft used their deep pockets to fund an election victory that let them keep drivers as contractors in California, there are urgent concerns that systems like these could become another means to remove drivers without due process, especially as the pandemic has laid bare the vulnerability of gig workers who lack the safety net employees can lean on.


    Close-up of a dashboard camera installed on the interior window of an Uber vehicle in San Ramon, Calif., on Sept. 27, 2018.

    Photo: Smith Collection/Gado/Getty Images

    Watched at All Times

    One patent for scoring driver safety risk relies on machine learning and rider feedback and notably suggests a driver’s “heavy accent” corresponds to “low quality” service.

    Another aims to predict safety incidents using machine-learning models that determine the likelihood that a driver will be involved in dangerous driving or interpersonal conflict, utilizing factors like psychometric tests to determine their “trustworthiness,” monitoring their social media networks, and using “official sources” like police reports to overcome biases in rider feedback.

    “The mistake is that a driver loses their livelihood, not that someone gets shown the wrong ad.”

    Jeremy Gillula, former tech projects director at the Electronic Frontier Foundation who now works as a privacy engineer at Google, said using algorithms to predict a person’s behavior for the purpose of “deciding if they’re going to be a danger or a problem” is deeply concerning.

    “Some brilliant engineers realized we can do machine learning based on people’s text, without realizing what we really want to get, and what it actually represents in a real-life application,” he said. “The mistake is that a driver loses their livelihood, not that someone gets shown the wrong ad.”

    Surveilling drivers under the guise of safety is a common thread in Uber’s patents. Many evaluate drivers’ performance using information from their phones, including one that scores their driving ability and suggests tracking their eye and head movements with phone cameras, and another that detects their behavioral state (angry, intoxicated, or sleepy) and assigns them an “abnormality score.”

    Additional patents aim to monitor drivers’ behavior using in-vehicle cameras and approximate “distraction level” with an activity log that tracks what else they’re doing on their phones; making a call, looking at a map, or even moving the phone around could indicate distraction.
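
    As a purely illustrative sketch (not code from any Uber patent), a “distraction level” of that kind could be approximated by weighting logged phone events and normalizing by trip length. The event types below follow the description above; the weights and the scoring rule are invented placeholders.

    ```python
    # Hypothetical illustration only: approximate a "distraction level" from a
    # phone activity log. Event types follow the description above; the weights
    # and the per-minute normalization are invented placeholders.

    DISTRACTION_WEIGHTS = {
        "phone_call": 3.0,       # making a call while driving
        "map_lookup": 1.5,       # looking at a map
        "device_movement": 1.0,  # moving the phone around
    }

    def distraction_level(events, trip_minutes):
        """Weighted count of distracting events per minute of driving."""
        total = sum(DISTRACTION_WEIGHTS.get(event, 0.0) for event in events)
        return round(total / max(trip_minutes, 1.0), 3)

    print(distraction_level(["map_lookup", "phone_call", "device_movement"], 30))
    # 0.183  (higher values would suggest a more distracted driver)
    ```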

    Jamie Williams, a former staff attorney at EFF focused on civil liberties who now works as a product counselor, said drivers should be aware they’re “being watched at all times.”

    The patents also mirror technologies recently implemented by Amazon in its delivery vans. The company announced plans in February to install video cameras that use AI to track drivers’ hand movements, driving abilities, and facial expressions. Data collected by the cameras determines a “safety score” and could result in a driver being terminated. Drivers have told Reuters: “The cameras are just another way to control us.”

    The Promise of Safety

    The algorithm outlined in a 2019 safety risk scoring patent shows how dangerous these systems can be in real life, experts said, noting that it could mimic riders’ existing biases.

    The system described in the patent uses a combination of rider feedback and phone metadata to assign drivers a safety score based on how carefully they drive (“vehicle operation”) and how they interact with passengers (“interpersonal behavior”). A driver’s safety score would be calculated once a rider submits a safety report to Uber.

    After a report is submitted, according to the patent, it would be processed by algorithms along with any associated metadata, including the driver’s Uber profile, the trip duration, distance traveled, GPS location, and car speed. With that information, the report would be classified into topics like “physical altercation” or “aggressive driving.”

    A driver’s overall safety score would be calculated using weighted risk assessment scores from the interpersonal behavior and vehicle operation categories. This overall score would determine if a driver has a low, medium, or high safety risk, and consequently, if they should face disciplinary action. Drivers with a high safety risk might receive a warning in the app, a temporary account suspension, or an unspecified “intervention” in real time.

    Adding a further layer of automation, the patent also describes a system that automatically tweaks driver safety scores based on specific metadata. A driver who has completed a certain number of trips would be marked as safer, while one who has generated more safety incidents would be marked as less safe. According to the patent, a driver who works at night is considered less safe than a driver who works during the day.
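
    As a rough illustration of how such a system could fit together — this is not Uber’s code — the sketch below combines the patent’s two category scores and applies metadata adjustments of the kind described above. The category names and the low/medium/high bands follow the patent; every weight, adjustment, and threshold is an invented placeholder.

    ```python
    # Illustrative sketch only: the "interpersonal behavior" and "vehicle operation"
    # categories and the low/medium/high bands come from the patent as described
    # above, but all numeric weights, adjustments, and cutoffs are assumptions.

    def overall_safety_risk(interpersonal, vehicle, trips_completed,
                            prior_incidents, drives_at_night):
        """Combine per-category risk scores (0 = safest, 1 = riskiest), apply
        metadata-based adjustments, then map the result to a risk band."""
        score = 0.5 * interpersonal + 0.5 * vehicle        # assumed equal weighting

        # Metadata adjustments of the kind the patent describes (magnitudes assumed).
        score -= 0.05 * min(trips_completed // 1000, 3)    # more trips -> marked safer
        score += 0.10 * prior_incidents                    # past reports -> marked riskier
        if drives_at_night:
            score += 0.05                                  # night driving -> marked riskier

        score = round(max(0.0, min(1.0, score)), 3)
        if score < 0.33:
            band = "low risk"
        elif score < 0.66:
            band = "medium risk"
        else:
            band = "high risk: in-app warning, suspension, or real-time intervention"
        return score, band

    print(overall_safety_risk(0.2, 0.3, trips_completed=4000,
                              prior_incidents=0, drives_at_night=True))
    # (0.15, 'low risk')
    ```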

    While this design may seem straightforward — shouldn’t a more experienced driver who has better road visibility be considered safer? — experts say any automated decision-making requires that developers make meaningful choices to avoid inserting bias into the entire system.

    Gillula said Uber’s automated rules could make decisions based on flawed human assumptions. “Race may be correlated with what time of day you’re operating as an Uber driver. If it’s a second job because you have to work during the day, it seems ridiculous to penalize you for that,” he said. “This is exactly the sort of thing that worries me.”

    If Uber wants to make its algorithmic scoring fair, it would need to be transparent about how drivers are being evaluated and give them a proper feedback channel, Williams said. “Machine learning algorithms can be wrong; users can be wrong,” she said. “It’s very important to have clear processes, transparency, and awareness about what’s going into the score.”


    The driver rating screen in the Uber app is seen on Feb. 12, 2016, in Washington, D.C.

    Photo: Brendan Smialowski/AFP via Getty Images

    Questions of Accuracy and Fairness

    Risk assessment algorithms have long been used by insurance companies to set policyholder premiums based on indicators like age, occupation, geographical location, and hobbies. Algorithms are also utilized in the criminal justice system, where they’re applied at nearly every stage of the legal process to help judges and officials make decisions.

    Proprietary algorithms like COMPAS, used in states like Florida and Wisconsin, determine an individual’s risk of recidivism on a scale of 1 to 10, with certain numbers corresponding to low, medium, and high risk — the same rubric Uber’s patent follows.

    Though Uber aims to predict “safety” risk in its patents, it faces the same fundamental questions of fairness and accuracy leveled at criminal justice algorithms. (The bias inherent in those algorithms has been pointed out again and again.) If the design of an algorithm is flawed from the outset, its outcomes and predictions will be too. In the criminal justice context, rearrest is an imperfect proxy for recidivism because arrests are so closely tied to factors like where you live, whether you interact with the police, and what you look like, Gillula said.

    Uber’s current rating system, which allows riders and drivers to rate one another on a five-star scale, is similar to the interpersonal behavior category described in Uber’s safety risk scoring patent: Both rely on subjective judgments that are the basis for doling out punishments. Under its current system, Uber “deactivates” or fires drivers whose ratings drop below a certain threshold. The policy has long infuriated drivers who say they have no real way of contesting unfair ratings: They’re funneled through a support system that prioritizes passengers and rarely provides a satisfactory resolution.

    Bhairavi Desai, executive director of the New York Taxi Workers Alliance, said drivers are not protected from passengers’ racism, bias, or bigotry. “We’ve talked to drivers who feel like they’ve gotten a lower rating because they’re Muslim,” she said. “I know of African American drivers who stopped working for them because they felt that they would be rated lower.”

    Former driver Thomas Liu sued Uber last October, proposing a class-action suit on behalf of nonwhite drivers who were fired based on racially “biased” ratings. Williams said the safety score would be subject to the same concerns: “People could put a safety report in just because they don’t like a driver. It could be racially biased, and there could be a lot of misuse of it.”

    Varinder Kumar, a former New York City yellow cab driver, was permanently deactivated by Uber in 2019. He’d been driving for Uber every day for nearly five years, and the deactivation meant the sudden loss of $400 to $500 per week.

    “You ask them what happened, they always say it’s a safety issue.”

    “I went to the office five times, I emailed them, and they said it was because one customer complained,” Kumar said. “Whenever you go there, you ask them what happened, they always say it’s a safety issue. I’ve been driving in New York City since 1991 and had no accident, no ticket, so I don’t know what kind of safety they’re looking for.”

    The kind of safety outlined in Uber’s safety risk scoring patent isn’t clear to Kumar either. He said the interpersonal behavior reporting would cause the same problems as Uber’s rating system: “Customers file a complaint even if they are not 100 percent right.” Meanwhile, the vehicle operation category could unfairly penalize New York City drivers who need to drive more aggressively.

    Joshua Welter, an organizer with Teamsters 117 and the affiliated Drivers Union, said algorithmic discipline remains a top issue for drivers. “It’s no wonder Uber and Lyft drivers across the country are rising up and taking action for greater fairness and a voice on the job, like due process to appeal deactivations,” Welter said. “It’s about basic respect and being treated as a human being, not a data experiment.”


    A traveler uses a smartphone in front of a vehicle displaying Uber signage at the Oakland International Airport in California on Aug. 6, 2019.

    Photo: David Paul Morris/Bloomberg via Getty Images

    An Artificial Trust

    The basis for Uber’s safety experimentation is user data, and Daniel Kahn Gillmor, a senior staff technologist at the American Civil Liberties Union’s Speech, Privacy, and Technology Project, said Uber is “sitting on an ever-growing pile of information on people who have ever ridden on its platform.”

    “This is a company that does massive experimentation and has shown little regard for data privacy,” he added.

    In addition to a vast trove of data gathered from over 10 billion trips, Uber collects telematics data from drivers such as their car’s speed, braking, and acceleration using GPS data from their devices. In 2016, it launched a safety device called the Uber Beacon, a color-changing orb that mounts to a car’s windshield. It was announced as a device that assisted with rider pickups, without mention of the fact that it contained sensors for collecting telematics data. In a now-deleted blog post from 2018, Uber engineers touted the Beacon’s benefit as a device solely managed by Uber for testing algorithms and said it collected better data than drivers’ devices.

    Brian Green, director of technology ethics at Santa Clara University’s Markkula Center for Applied Ethics, questioned the motives behind Uber’s data collection. “If the purpose of [Uber’s] surveillance system is to promote trust — if a corporation wants to be trustworthy — they have to allow the public to look at them,” he said. “A lot of tech companies are not transparent. They don’t want the light shone on them.”

    Welter said that when companies like Uber experiment with worker discipline based on black box algorithms, “both workers and consumers alike should be deeply concerned whether the reach of big data into our daily lives has gone too far.”

    In addition to providing a view into Uber’s safety vision, the patents demonstrate the scope of its machine learning ambitions.

    “We’re dealing with so many people we don’t know that tech and surveillance steps in to build an artificial trust.”

    Uber considers AI essential to its business and has made significant investments in it over the past few years. Its internal machine learning platform helps engineering teams apply AI to optimize trip routes, match drivers and riders, mine insights about drivers, and build more safety features. Uber already uses algorithms to process over 90 percent of its rider feedback (the company said that it receives an immense amount of feedback by design, the majority of which is not related to safety).

    Algorithmic safety scoring and risk assessment also fit under Uber’s rider safety initiative and its efforts to ensure safe drop-offs for its growing delivery platform. Experts said the systems are not as far from reality as tech companies’ patents sometimes are. In a statement, Uber said that “patent applications are filed on many ideas, but not all of them actually become products or features.”

    But some of Uber’s safety-related patents have close parallels with widely utilized features: A patent filed in 2015 for “trip anomaly” detection bears similarities to Uber’s RideCheck feature, technology for anonymizing pickup and drop-off locations is similar to a patent application filed last year, and an application filed in 2015 for verifying drivers’ identity with selfies is similar to Uber’s security selfie feature.

    Green said Uber’s patents reflect a broader trend in which technology is used as a quick fix for deeper societal issues. “We’re dealing with so many people we don’t know that tech and surveillance steps in to build an artificial trust,” he said.

    That trust can only extend so far during the pandemic, which has underscored the economic uncertainty drivers face and the limits of technology’s promise of safety. Rolling out such systems now would mean there’s even more at stake.

    The post Uber Patents Reveal Experiments With Predictive Algorithms to Identify Risky Drivers appeared first on The Intercept.

    This post was originally published on The Intercept.

  • Data protection advocate Mariano delli Santi on whether we should worry about targeted advertising

    We all believe in at least one conspiracy theory. Well, a little bit. That’s according to a Norwegian professor who recently argued that conspiratorial thinking spans everything from 5G theories to believing the referee really is against your team. Mine? I think my phone is somehow listening in. How else can I explain the ads that appear for a product just as I’m talking about it? I asked Mariano delli Santi, legal and policy officer with data protection advocate Open Rights Group.

    As hills to die on go, I could do worse than “my phone is listening to me”, right?
    Well, there have been fringe cases where apps have been found to turn on your mic. But the point is, advertisers don’t need to listen to know everything about you.

    This post was originally published on Human rights | The Guardian.

  • The announcement comes amid an existential crisis for Facebook

    This post was originally published on The Asian Age | Home.

  • A California startup backed by the shipping giant Maersk wants to turn America’s farm waste into clean fuel for mammoth container ships. The company, WasteFuel, is working to build facilities across the country that produce “bio-methanol” — a low-carbon fuel made in only tiny volumes today — from corn husks, discarded wheat straw, and other agricultural scraps.

    Bio-methanol from these facilities would then travel by rail car to major ports, Trevor Neilson, chairman and CEO of WasteFuel, told Grist. Methanol-powered deep-sea vessels on, say, the U.S. West Coast would be able to refuel before sailing to China, Japan, or Vietnam with containers full of pet food, cotton, and scrap metal.

    WasteFuel’s plans are part of a larger effort to cut emissions in the global shipping industry, whose vessels run almost exclusively on fossil fuels. Cargo ships contribute nearly 3 percent of the world’s annual greenhouse gas emissions — a number expected to climb as global trade expands. Within shipping, about 80 percent of emissions come from deep sea vessels. These oceangoing ships are particularly tricky to clean up because they often sail for weeks and cover thousands of miles between refueling, and have limited space on board for fuel tanks or battery banks.

    “We have a huge opportunity to address the hardest aspects of decarbonization,” including long-distance and heavy-duty transportation, Neilson said from his home in Malibu, which overlooks the ship-clogged waters of Southern California. Record congestion around the ports of Long Beach and Los Angeles has driven a spike in air pollution from idling vessels and truck traffic.

    For shipowners, bio-methanol is a promising alternative to dirty bunker fuel, because it can be used by converting engines in existing vessels. The problem is, only a handful of relatively small facilities make the fuel worldwide. Bio-methanol is also expensive to produce, costing up to seven times more than conventional methanol from fossil fuels, according to the International Renewable Energy Agency.

    The first of WasteFuel’s planned bio-methanol facilities is expected to be completed within a few years, Neilson said, though the company hasn’t disclosed a location yet. Along with bio-methanol, WasteFuel plans to produce other low-carbon fuels for airplanes and freight trucks made from landfill garbage in the Philippines and Mexico, respectively. 

    For bio-methanol and other biofuels to play a meaningful role in reducing shipping emissions, clean fuel producers will need to drastically boost output and slash costs in the coming decades, said Eric Tan, a senior research engineer at the U.S. National Renewable Energy Laboratory, or NREL, in Colorado. “Scalability is very important,” he said, “and the fuel cost has to be competitive.”

    In a recent study, Tan and his colleagues found the United States has ample amounts of feedstock that could be turned into biofuel — enough to supply a significant share of the global shipping industry’s annual needs by 2040. With technology improvements, supportive government policy, and more efficient ways of sourcing raw materials, U.S. producers could achieve the “critical mass” needed to make a dent in shipping’s carbon footprint, researchers said.

    Companies will need to start using such fuels soon in order to stay on track for global climate goals, according to industry experts. The International Maritime Organization, the U.N. agency that regulates global shipping, aims to reduce the industry’s total emissions by half compared with 2008 levels by mid-century, and to fully decarbonize by 2100. 

    Last week, a group of giant retail companies including Amazon and IKEA said that, by 2040, they would only import goods on ships using zero-carbon fuels, including hydrogen and ammonia made with renewable electricity.

    Shipping analysts say it’s not yet clear which low- and zero-carbon technologies will prevail as companies work to meet such targets. That uncertainty has prompted some shipowners to postpone ordering new vessels for fear of betting on the wrong solution. 

    Maersk, by contrast, is betting at least $1.4 billion on methanol-powered ships and supplies.

    The toxic alcohol is primarily used to make chemicals for paints, plastics, and cosmetics, and nearly all of it is produced with natural gas or coal. When burned, methanol produces little air pollution. Only about two dozen ships use, or will soon use, methanol for fuel — mainly methanol-carrying tankers that draw supplies from their cargo holds. (In the 1990s, U.S. passenger cars like the Ford Taurus and Dodge Intrepid were built to use a methanol-gasoline blend before the fuel fell out of favor.)

    From a climate perspective, however, methanol from natural gas is worse than commonly used fuels, like marine gas oil. So Maersk and other shipping companies are focusing on developing both bio-methanol and “e-methanol,” made with renewable electricity. According to the International Council on Clean Transportation, bio-methanol from plant biomass can reduce “well-to-wake” lifecycle emissions by 70 to 80 percent compared with marine gas oil.

    In August, Maersk said it ordered eight mega container ships capable of running on methanol, with the first to join its fleet in early 2024. The Danish shipping company also plans to operate a methanol-burning vessel in Europe in 2023. The company has acknowledged one huge complication to those plans:  Only about 220,000 metric tons of green methanol are produced annually, compared to the 330 million metric tons of fuel the shipping industry consumes every year.
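
    The size of that gap is stark even on a rough, mass-only comparison of the two figures above. (Methanol also holds less energy per tonne than conventional marine fuel — a general fact, not from this story — so the shortfall in energy terms is even larger.)

    ```python
    # Back-of-the-envelope view of the supply gap using the figures above.
    # This compares tonnes directly; methanol's lower energy content per tonne
    # (a general fact, not from the article) would make the gap even wider.

    green_methanol_tonnes = 220_000        # annual global green methanol production
    shipping_fuel_tonnes = 330_000_000     # annual marine fuel consumption

    share = green_methanol_tonnes / shipping_fuel_tonnes
    print(f"Green methanol covers about {share:.2%} of shipping's fuel use by mass")
    # Green methanol covers about 0.07% of shipping's fuel use by mass
    ```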

    “It will be a significant challenge to source an adequate supply of carbon-neutral methanol within our timeline to pioneer this technology,” said Henriette Hallberg Thygesen, CEO of Maersk’s fleet and strategic brands division, in a statement earlier this year. (The company did not return requests for comment.)

    Maersk is backing WasteFuel and other fuel producers to fill the gap. In September, the company announced that it had invested in WasteFuel for an undisclosed amount, a move that made the shipping giant WasteFuel’s largest investor and put a Maersk executive on the startup’s board. Maersk is also investing in another clean energy startup, Prometheus Fuels, and has signed a contract to buy e-methanol from a facility planned in Denmark.

    WasteFuel is talking to around 30 agricultural waste owners in and outside the United States and plans to eventually license existing technology from other developers, Neilson said. Bio-methanol can be made by fermenting organic materials in an anaerobic digester, or subjecting materials to high temperatures to produce synthetic gas, then processing it in a reactor. The latter process is especially capital- and energy-intensive, and neither is widely used in large commercial operations. “The question of scale is one that we all have to address,” he acknowledged.

    According to Maersk, green fuels like bio-methanol will be two to three times more expensive than oil-based marine fuels. But because ships are so huge and carry so many containers on their decks, the final cost to consumers will be something like an extra 50 cents on a pair of running shoes, Neilson said.

    There are other barriers to scaling up bio-methanol and other biofuels, said Tan, the NREL engineer. Collecting, cleaning up, and storing feedstock adds significant expense for fuel producers. And as chemical companies and airlines turn to biofuels to reduce emissions, the competition for waste feedstocks will heat up.

    Neilson said that building bio-methanol facilities close to places where waste is collected, such as farms or landfills, can help reduce the cost of securing the raw materials. It also ensures that corn husks, food scraps, and garbage are fresh when they enter the facility, an important step because decaying organic materials emit methane. The sooner waste hits the system, the more methane can be turned into fuel — and the less winds up in the atmosphere as a potent greenhouse gas.

    Noting the challenges facing WasteFuel, Neilson said a key difference between past and present attempts to boost bio-methanol supplies is that major companies are increasingly responding to pressure from activists to tackle climate change.

    “That shifts the economics,” he said. “People in finance, people in consumer products are listening.”

    This story was originally published by Grist with the headline Can farm waste help clean up the world’s dirty cargo ships? on Oct 27, 2021.

    This post was originally published on Grist.

  • ProPublica is a nonprofit newsroom that investigates abuses of power. Sign up to receive our biggest stories as soon as they’re published.

    As ProPublica has reported, cybercriminals are flooding the internet with fake job ads and even bogus company hiring websites whose purpose is to steal your identity and use it to commit fraud. It’s a good reminder that you should vet potential employers as closely as they vet you.

    Here are ten tips on how to spot such scams:

    1. Beware of abnormally high salaries

    One of the ways criminals entice people is by advertising unusually generous pay. If the salary being offered in a job ad is way above what you see in other ads for similar positions, be wary. You can get an idea of average weekly earnings by industry using the Quarterly Census of Employment and Wages or check out salary calculators on websites such as Glassdoor.

    A fake job ad on LinkedIn that promises unusually high pay for shuttle-bus drivers

    2. Don’t accept jobs you didn’t apply for

    Sometimes cybercriminals obtain the contact information of people who have submitted their résumés to job-seeking websites and then email them to say they are preapproved for a job. These are bogus messages whose main purpose is to get people to share additional information, which the scammers will use to commit fraud. The emails may also include malware that can infect your computer. Ignore such messages and don’t open any attachments.

    3. Be wary of job ads touting the need to verify your identity at the outset

    Ads that demand you share your driver’s license or Social Security number as part of an initial application, or very soon after, are a significant red flag. Legitimate employers rarely request such information until much later in the hiring process.

    A phony website purporting to be the Spirit Airlines careers site asks for the applicant’s driver’s license as part of the initial application process

    4. Take the text of the job ad and put it in Google

    Cybercriminals sometimes reuse the same job ads over and over, posting them on LinkedIn, Facebook and other online platforms with only slight modifications. If you spot an ad that features virtually identical language to that used by various employers all over the country, it could be a scam.

    5. Research the identity of the person posting the ad

    Cybercriminals are creating fake profiles on LinkedIn and Facebook meant to resemble individuals at real companies who are posting job ads. One clue: a person claiming to work for a company in the U.S. while showing check-ins at locations in other countries. When in doubt, contact the companies directly to ask if they’re actually recruiting for the positions. If they’re not, report the suspect profiles to LinkedIn and Facebook.

    Screenshots from a fake Facebook profile that claimed to belong to a senior manager at Denver International Airport, but which showed a check-in in Owerri, Nigeria. (The ad was removed after an inquiry by ProPublica.)

    6. Check the spelling and domains of company names

    When you vet companies, be aware that cybercriminals sometimes steer potential applicants to fake websites they’ve created that mimic the sites of real companies — except that, say, an extra letter has been added to the company’s name. When job applicants can’t spell a company’s name right in a cover letter, recruiters are apt to toss those applications in the trash. Do the same with any companies that seemingly can’t spell their own names.
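
    For the technically inclined, one generic way to run that check is to compare the domain in a job posting against the company’s known official domain and flag near-misses. This is a simple illustration, not a ProPublica tool, and the domain strings below are made-up examples of the Spirit/Spirits case shown in the image.

    ```python
    # Simple illustration (not a ProPublica tool): flag domains that are very
    # similar to, but not exactly, a company's official domain. The domain
    # strings below are made-up examples standing in for the Spirit/Spirits case.
    from difflib import SequenceMatcher

    def looks_like_typosquat(suspect, official, threshold=0.85):
        """Return True for near-miss domains, such as one with an extra or swapped letter."""
        if suspect == official:
            return False
        return SequenceMatcher(None, suspect, official).ratio() >= threshold

    official = "careers.spiritairlines.example"
    print(looks_like_typosquat("careers.spiritsairlines.example", official))  # True
    print(looks_like_typosquat("totally-different-jobs.example", official))   # False
    ```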

    Top: A domain name for a fake careers website posing as Spirit Airlines that misspells “Spirit” as “Spirits.” Bottom: The real Spirit Airlines careers web address.

    7. Avoid text-only interviews

    The pandemic has made it necessary for many employers to conduct job interviews remotely via services like Zoom. But be cautious of hiring managers who insist on communicating only by email or text or using messaging platforms such as Telegram to conduct interviews. Sooner or later, a real employer will want to see and interact with a recruit, whether through a video call or in person. Cybercriminals typically don’t want you to hear their voices or see their faces, since it raises the chances you’ll realize they’re not who they say they are.

    8. Don’t give out your credit card or phone account login

    A real employer doesn’t need to know your credit card number, credit score or phone account login to process your job application. Cybercriminals sometimes ask for such information up front to commandeer your phone and finances, often under the pretense of needing to set you up with a company phone plan or purchase equipment you’ll need to do your job (see next item).

    9. Don’t buy things on behalf of a potential employer

    Beware of companies that, before you’re hired, offer to send you a check to purchase a computer or other equipment. It’s a variation on an old scam that involves criminals asking marks to send their own money to some third party with the promise that they will reimburse the marks. Inevitably, the reimbursement doesn’t come through, and the mark is left holding the bag.

    10. If something feels suspicious, investigate — or walk away

    If at any point in the job application or interview stage something feels wrong to you, don’t ignore the feeling. Ask yourself if you see any of the warning signs outlined above. Or pause and ask a trusted friend or relative for a reality check.

    Do You Have a Tip for ProPublica? Help Us Do Journalism.

    Have you had your identity stolen during the pandemic because of a data breach or a fake job scam? If so, tell us about it. Email cezary.podkul@propublica.org.

    This post was originally published on Articles and Investigations – ProPublica.

  • The program lets senior living facilities use Amazon Echo devices to send announcements or other messages to residents’ rooms

    This post was originally published on The Asian Age | Home.