Category: AI

  • Australia’s chief scientist – a quantum physicist by trade – got to the heart of the online harm problems facing Australia in a way other experts appearing at Monday’s Senate hearing on AI did not. While witnesses offered options for driving rapid uptake in a responsible way that builds up access, awareness and Australian made…

    The post Foley bells the cat on tech’s transparency problem appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • The rapid advancement of generative AI is challenging long-held notions of expertise and mastery. The traditional belief that expertise demands extensive practice, epitomized by Malcolm Gladwell’s “10,000-hour rule”, is being upended by AI’s capability to produce high-quality creative and technical outputs with minimal human input. This demands a reevaluation of the skills and knowledge that will remain valuable as AI begins to automate, and thereby devalue, some complex cognitive tasks.

    The post Malcolm Gladwell 2.0: AI-powered students, redefining mastery appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • AI leaders and groups representing the wider tech sector have reacted with dismay at the absence of new funding for artificial intelligence from the Albanese government at a time when other nations are ramping up investment. Tuesday’s Budget contained no new industry funding for AI, with a $40 million investment flowing exclusively towards the development…

    The post ‘We will lose out’: Budget gets a fail mark for AI funding appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • Federal funding for artificial intelligence was reshaped in the Albanese government’s third budget with an additional $40 million, but there is little direct industry investment as quantum and clean energy technologies were put at the heart of the Future Made in Australia. A new $40 million AI package revealed Tuesday includes measures that will move…

    The post National AI Centre stripped from CSIRO in $40m policy shift appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • Hospitals could save up to $5.4 billion a year by making better use of electronic medical records, including My Health Record, if the government fixes “important gaps”, according to the Productivity Commission. Improving the use of electronic medical records would generate savings by shortening hospital stays for patients. It could also generate a further…

    The post $5.4bn windfall from greater use of digital health records: PC appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • The Albanese government’s APS-wide trial of Microsoft’s artificial intelligence had an immediate chilling effect on local providers of competing technology, with a likely deal scrapped the following day, the Senate’s sovereign tech inquiry heard on Monday. Trellis Data chief Michael Gately told the hearing that his company – which provides Copilot- and ChatGPT-like products…

    The post APS Microsoft trial cost local AI firm its govt deal, inquiry hears appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • The $940 million investment in Silicon Valley-based startup PsiQuantum jointly made by the federal and Queensland governments this week represents the first WTF moment of the recently announced Future Made in Australia era. Apologies for resorting to the colloquial expletive, but it aptly summarises the reaction to a decision that is still shrouded in secrecy…

    The post Quantum is a safe bet when you fear AI appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • Policy experts are calling for a new Artificial Intelligence Commissioner to combat Australia’s “over reliance” on foreign tech firms responsible for almost all the innovation and production behind tools like ChatGPT. Under a new proposal to develop locally managed, public AI capability, the Commissioner would oversee local development and levy charges on foreign tech…

    The post AI tsar, taxes needed to capture tech’s public good appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • By Catherine Thorbecke

    See original post here.

    Michael Tubbs was born and raised in Stockton, California, roughly a one-hour drive from Silicon Valley, the birthplace of the AI revolution that’s now forecast to forever change the way Americans live and work.

    But despite coming of age in Big Tech’s backyard, the America that Tubbs grew up in was marked by “scarcity and poverty,” he told CNN. Tubbs, 33, was born to a teenage mother, whom he says he never saw when he was younger because “she was always working — and it was never enough.”

    His own experiences led him to think about different ways that the wealthiest country in the world could help ameliorate poverty. When Tubbs went on to become the first Black mayor of his hometown in 2016, he spearheaded a guaranteed income pilot program in 2019 that did something simple yet radical: Give out free money with no strings attached. 

    That idea of guaranteed income is receiving renewed interest as AI becomes an increasing threat to Americans’ livelihoods.

    Global policymakers and business leaders are now increasingly warning that the rise of artificial intelligence will likely have profound impacts on the labor market and could put millions of people out of work in the years ahead (while also creating new and different jobs in the process). The International Monetary Fund warned earlier this year that some 40% of jobs around the world could be affected by the rise of AI, and that this trend will likely deepen the already cavernous gulf between the haves and have-nots.

    As more Americans’ jobs come under threat from AI, Tubbs and other proponents of guaranteed income say it could be one solution to help provide a safety net and cushion the expected blow AI will deal to the labor market.

    “We don’t really do a good job at designing policies or doing things in times of crisis,” Tubbs told CNN, saying it is urgent to start planning for guaranteed income programs before we see 40% of global jobs taken by AI.

    For a period of two years starting in 2019, Stockton handed 125 randomly selected residents in low-income neighborhoods $500 a month, with no conditions on how they used the funds or whether they had employment. The initial results from the pilot program found that recipients had drastically improved their job prospects and financial stability and saw better physical and mental health outcomes.

    “Let’s get the guardrails in place now,” he said. “Then, when we have to deal with that job displacement, we’re better positioned to do so.”

    Silicon Valley’s infatuation with guaranteed income

    The idea of a guaranteed income is not new. Tubbs said he was inspired to pursue it after reading the works of Civil Rights leader Martin Luther King, Jr., who advocated for guaranteed income in his 1967 book, “Where Do We Go From Here: Chaos or Community?”

    “I’m now convinced that the simplest approach will prove to be the most effective — the solution to poverty is to abolish it directly by a now widely discussed measure: the guaranteed income,” King wrote at the time.

    Decades after King’s death, the idea of guaranteed income went on to see a resurgence of support emanating out of Silicon Valley. The concept emerged as a buzzword of sorts among many of Silicon Valley’s elite — including Elon Musk, Mark Zuckerberg and Sam Altman — even before the public launch of ChatGPT in late 2022 re-upped a global debate about automation disrupting jobs.

    “Universal income will be necessary over time if AI takes over most human jobs,” Tesla CEO Musk tweeted back in 2018. Late last year, in an interview with UK Prime Minister Rishi Sunak, Musk said he thought AI would eventually bring about “universal high income,” without sharing any details of what this could look like.

    Meta CEO Mark Zuckerberg, meanwhile, called for the exploration of “ideas like universal basic income to make sure that everyone has a cushion to try new ideas,” during a Harvard commencement speech in May 2017. In a Facebook post later that year, Zuckerberg celebrated Alaska’s Permanent Fund Dividend — or the annual grants given to Alaska residents from a portion of the state’s oil revenue — as a “novel approach to basic income” that “comes from conservative principles of smaller government, rather than progressive principles of a larger safety net.”

    Altman, CEO of one of the world’s most powerful AI companies, OpenAI, has also been outspoken about what he sees as the need for some form of guaranteed income as many jobs are increasingly lost to automation.

    Back in 2016, when Altman was president of tech startup accelerator YCombinator, he announced he was seeking participants to help launch a study on basic income (or, as he described it at the time, “giving people enough money to live on with no strings attached”).

    “I’m fairly confident that at some point in the future, as technology continues to eliminate traditional jobs and massive new wealth gets created, we’re going to see some version of this at a national scale,” Altman wrote in a 2016 blog post for YCombinator.

    He has since left his post at YCombinator to focus on OpenAI, but Altman still chairs the board of OpenResearch, the nonprofit lab that is in the process of conducting this ongoing study on basic income that he helped launch.

    Elizabeth Rhodes, research director at OpenResearch, told CNN earlier this year that it hopes to release initial findings this summer from a three-year study on unconditional income involving some 3,000 individuals in two states.

    “We really see this as sort of a foundational exploratory study to understand what happens when you give individuals unconditional cash,” she told CNN.

    While she stressed that she could not get into the specifics of her team’s research while the study is underway, she hopes their findings can eventually provide data answering some of the most common questions surrounding how cash payments affect people’s desire to work, and their broader potential advantages or disadvantages within communities.

    Other tech industry tycoons, including Twitter co-founder Jack Dorsey, have also thrown immense financial support behind guaranteed income programs. (In 2020, Dorsey donated some $18 million to Mayors for a Guaranteed Income, the organization that Tubbs founded).

    Dozens of cities across the United States have already begun experimenting with guaranteed income programs in recent years, with most of them funded by nonprofit organizations but organized by local officials.

    Tubbs said he ultimately thinks funding for these programs should come from the federal government but encouraged lawmakers to be creative about finding ways to raise revenue.

    “For example, you could legalize cannabis federally and use that tax revenue, you could do a data dividend or some sort of robot tax or AI tax,” he suggested.

    Opponents of guaranteed income programs, most of whom lean Republican, have argued that such efforts disincentivize work or that taxing successful tech companies can stifle innovation.

    And in Texas, opponents of guaranteed income are taking their battle to court. Earlier this week, Texas Attorney General Ken Paxton sued Harris County over its guaranteed income program that is funded using federal money from the pandemic-era American Rescue Plan. “This scheme is plainly unconstitutional,” Paxton said in a statement. “I am suing to stop officials in Harris County from abusing public funds for political gain.”

    In court documents, the attorney general goes on to slam the program as “illegal and illegitimate government overreach.”

    ‘It’s not just giving people money, it’s giving them opportunity’

    Tomas Vargas Jr., a recipient of guaranteed income in the Stockton pilot program, told CNN that he heard critics saying that receiving the extra payments would make people “lazy.” But he says it ultimately gave him the opportunity to find better work.

    “When I got the money, I was already in the mindset of hustling and getting money. So, it just made me want to get more money,” he said. “The thing that I want people to understand about the guaranteed income is it’s not just giving people money, it’s giving them opportunity.”

    For years, Vargas said, he woke up every day with the crippling anxiety that comes with never quite knowing how he would be able to provide for his family. He was juggling multiple jobs: working at UPS, repairing cars, mowing lawns, delivering groceries and picking up any other work he could find. He said he almost never saw his children, and that he briefly received food stamp assistance but was “instantly kicked off” whenever he picked up extra hours of work.

    “There’s one thing that I’ve always wanted as a father, and that’s not to make my kids go through the same things that I went through: having no power, no water, or no food on the plates,” he told CNN. “So I was always trying to grind.”

    Vargas said the extra cash payments he received helped him focus and apply for one full-time job, which he never had the time or energy to do before. He now says he thinks guaranteed income could be one way to provide a cushion for re-training or education programs for people whose jobs are exposed to AI, the same way it helped him pivot to better and more secure employment.

    Vargas, like Tubbs, was born and raised in Stockton. Vargas said his father was never around much growing up and he eventually moved in with his grandmother when he was 12. Before participating in the program, Vargas said he was “a really negative person” and that he didn’t look at himself as someone even worth investing in.

    But the extra financial security allowed him to spend more time with his children, and ultimately break the cycle of poverty he had seen in his community his whole life.

    “One of the biggest things that helped me realize my full potential that I had in myself, and I was worth investing in, was seeing the reaction from my kids,” Vargas said, “and seeing the generational trauma and healing in them.”

    This post was originally published on Basic Income Today.

  • Australia’s $940 million bet on PsiQuantum should be swiftly followed by more direct investments in local quantum businesses and research organisations, according to the country’s newest technology industry association. The Tech Council of Australia welcomed the investment by the federal and Queensland governments in the California-based company tackling the quantum challenge using a photonics-based approach. Prime…

    The post Govts urged to follow PsiQuantum bet with more local investment appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • A group of 20 representatives from industry and academia have been appointed to screen the issues to be contemplated by the federal government’s AI and copyright reference group. Details of the 20-strong steering committee and the governance framework for the Attorney-General’s Department-administered group broke cover on Monday. The Copyright and Artificial Intelligence Reference Group…

    The post AI copyright steering committee appointed appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • The Republic of Singapore Navy (RSN)’s fourth Type 218SG Invincible-class diesel-electric submarine, the future RSS Inimitable, was launched at Germany’s Thyssenkrupp Marine Systems (TKMS) facility in Kiel on 22 April. The event was officiated by Singapore Senior Minister and Coordinating Minister for National Security Teo Chee Hean, and witnessed by Federal Minister for Defence […]

    The post Singapore launches fourth Invincible-class submarine appeared first on Asian Military Review.

    This post was originally published on Asian Military Review.

    • Powerful governments cast humanity into an era devoid of effective international rule of law, with civilians in conflicts paying the highest price
    • Rapidly changing artificial intelligence is left to create fertile ground for racism, discrimination and division in landmark year for public elections
    • Standing against these abuses, people the world over mobilized in unprecedented numbers, demanding human rights protection and respect for our common humanity

    The world is reaping a harvest of terrifying consequences from escalating conflict and the near breakdown of international law, said Amnesty International as it launched The State of the World’s Human Rights, its annual report assessing human rights in 155 countries.

    Amnesty International also warned that the breakdown of the rule of law is likely to accelerate with rapid advancement in artificial intelligence (AI) which, coupled with the dominance of Big Tech, risks a “supercharging” of human rights violations if regulation continues to lag behind advances.

    “Amnesty International’s report paints a dismal picture of alarming human rights repression and prolific international rule-breaking, all in the midst of deepening global inequality, superpowers vying for supremacy and an escalating climate crisis,” said Amnesty International’s Secretary General, Agnès Callamard.

    “Israel’s flagrant disregard for international law is compounded by the failures of its allies to stop the indescribable civilian bloodshed meted out in Gaza. Many of those allies were the very architects of that post-World War Two system of law. Alongside Russia’s ongoing aggression against Ukraine, the growing number of armed conflicts, and massive human rights violations witnessed, for example, in Sudan, Ethiopia and Myanmar – the global rule-based order is at risk of decimation.”

    Lawlessness, discrimination and impunity in conflicts and elsewhere have been enabled by unchecked use of new and familiar technologies which are now routinely weaponized by military, political and corporate actors. Big Tech’s platforms have stoked conflict. Spyware and mass surveillance tools are used to encroach on fundamental rights and freedoms, while governments are deploying automated tools targeting the most marginalized groups in society.

    “In an increasingly precarious world, unregulated proliferation and deployment of technologies such as generative AI, facial recognition and spyware are poised to be a pernicious foe – scaling up and supercharging violations of international law and human rights to exceptional levels,” said Agnès Callamard.

    “During a landmark year of elections and in the face of the increasingly powerful anti-regulation lobby driven and financed by Big Tech actors, these rogue and unregulated technological advances pose an enormous threat to us all. They can be weaponized to discriminate, disinform and divide.”

    Read more about Amnesty researchers’ biggest human rights concerns for 2023/24.
    This post was originally published on Hans Thoolen on Human Rights Defenders and their awards.

  • Australians will be able to take a crash course in responsible AI from next week, with Standards Australia to offer a training module on its international standard for responsible development and use of the technology for the first time. The on-demand training module for ISO/IEC 42001, Information technology – Artificial Intelligence – Management System, will…

    The post Training for Australia’s world first AI standard arrives appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • More than 75 artificial intelligence projects were underway across federal government last year before consistent governance frameworks were in place, raising questions about how agencies could audit the decisions they were making. The use cases were mostly basic or experimental, with large language model technology not being deployed, according to an initial survey of AI…

    The post AI deployed without checks across government appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • The Western Australian government wants to raise the state’s digital readiness index score, one of several measurable goals outlined in a draft of its WA: Digitally Evolved – Digital Industries Acceleration Strategy. The draft, released on Monday, does not reveal any new government initiatives which are to be developed through a consultation process that was…

    The post WA unveils draft digital acceleration strategy appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • The country’s national science agency is moving fast to develop best practice guidance for businesses developing and deploying artificial intelligence, teaming with UTS’s Human Technology Institute and the wider Responsible AI Network for the work. Two months after the federal government announced a new risk-based regulatory approach to AI, at least a dozen roundtables have…

    The post CSIRO teams with UTS on AI safety standard appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • Canadian Prime Minister Justin Trudeau has unveiled C$2.4 billion (A$2.7 billion) in budget initiatives aimed at boosting national capability in artificial intelligence, including large investments in Canadian-owned and located AI infrastructure. While Canada’s investment in AI is dwarfed by the commercial AI research and infrastructure investments of the Big Tech companies, or the national investments…

    The post Canada drops $2.7bn on sovereign AI capability programs appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • Victoria-based Sentient Vision Systems, a developer of advanced sensors, has been acquired by US defence drone and autonomous systems producer Shield AI. Pending regulatory approval, Sentient Vision Systems will join the newly established Shield AI Australia to continue development of its video detection and ranging (ViDAR) system, Sentient Observer. Sentient’s software systems, combining computer vision…

    The post US firm acquires Australia’s Sentient Vision Systems appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • The Israeli publications +972 and Local Call have exposed how the Israeli military used an artificial intelligence program known as Lavender to develop a “kill list” in Gaza that includes as many as 37,000 Palestinians who were targeted for assassination with little human oversight. A second AI system known as “Where’s Daddy?” tracked Palestinians on the kill list and was purposely designed to…

    Source

    This post was originally published on Latest – Truthout.

  • ChatGPT-generated health advice becomes less reliable when it is infused with evidence from the internet, according to a world-first Australian study that highlights the risk of relying on large language models for answers. Research led by the CSIRO and the University of Queensland (UQ) found that while the chatbot handles simple, question-only prompts relatively well,…

    The post Not what the doctor ordered: ChatGPT health advice left wanting appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • The government’s interim response to the safe and responsible AI in Australia consultation earlier this year envisions a risk-based approach to regulating AI that would strike a constructive balance between encouraging its adoption to spur economic growth and ensuring its safety for everyone.   This is welcome news for Australia. As Minister for Industry and…

    The post How governments can foster trust in the age of AI appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • The New Indian Express of 22 March 2024 reports (based on Al Jazeera) that Prime Minister Narendra Modi’s government has approached a major Indian think tank to develop its own democracy ratings index that could help it counter recent downgrades in rankings issued by international groups, which New Delhi fears could affect the country’s credit rating. The Observer Research Foundation (ORF), which works closely with the Indian government on multiple initiatives, is preparing the ratings framework.

    In June 2023, The Guardian reported that the Indian government had been secretly working to keep its reputation as the “world’s largest democracy” alive after being called out by researchers for serious democratic backsliding under Narendra Modi’s nationalist rule, according to internal reports seen by the paper.

    Despite publicly dismissing several global rankings that suggest the country is on a dangerous downward trajectory, officials from government ministries have been quietly assigned to monitor India’s performance, minutes from meetings show, The Guardian said. Al Jazeera revealed that the Observer Research Foundation (ORF), which works closely with the Indian government on multiple initiatives, is preparing the ratings framework. The new rankings system could be released soon, an official was quoted as saying.

    Global human rights NGO Amnesty International has continued to highlight the erosion of civil rights and religious freedom under the Narendra Modi regime.

    Amnesty, in its India 2022 report, noted arbitrary arrests, prolonged detentions, unlawful attacks and killings, internet shutdowns and intimidation using digital technologies, including unlawful surveillance, as major concerns faced by minority groups, human rights defenders, dissenters and critics of the Union government. [see also: https://www.amnesty.org/en/latest/news/2024/03/india-crackdown-on-opposition-reaches-a-crisis-point-ahead-of-national-elections/]

    Similarly, Human Rights Watch has continued to highlight the crackdown on civil society and media under the Modi government, citing the persecution of activists, journalists, protesters and critics under fabricated counterterrorism and hate speech charges. The vilification of Muslims and other minorities by some BJP leaders, and police inaction against government supporters who commit violence, are also among HRW’s concerns in India.

    Notably, the ‘Democracy Index’, prepared by The Economist Group’s Economist Intelligence Unit, had downgraded India to a “flawed democracy” in its 2022 report due to the serious backsliding of democratic freedom under the Modi government.

    Similarly, the US-based non-profit organization Freedom House had lowered India’s standing from a free democracy to a “partly free” democracy in its global freedom and internet freedom ratings, while V-Dem Institute, a Sweden-based independent research institute, had classified India as an “electoral autocracy” in its 2022 Democracy report. For more on ranking, see: https://humanrightsdefenders.blog/tag/ranking/

    https://www.newindianexpress.com/nation/2024/Mar/22/centre-planning-its-own-democracy-index-amid-global-rankings-downgrade

  • A bipartisan Senate committee has sounded the alarm over the use of automated decision-making for immigration and biosecurity matters that should be decided on a case-by-case basis by federal ministers. The warning, the second from the committee in less than two months, comes as the government looks to introduce a consistent legal framework for automated…

    The post Senate sounds alarm on automated govt decision-making appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • Hossam Bahgat is demanding an apology and remedy after a travel ban and freeze on his assets was reversed on 20 March 2024 (AFP/Mada Masr/file photo)

    On 22 March 2024 MEE reported on a significant development in Egypt, where dozens of rights defenders have been affected by travel bans and asset freezes for over a decade in a ‘politically motivated’ case [see also: https://humanrightsdefenders.blog/tag/hossam-bahgat/].

    Egypt has announced the closure of a 13-year landmark case in which human rights defenders were accused of receiving illicit foreign funding – but those affected by the allegations are demanding justice. An investigative judge on Wednesday declared the closure of case 173/2011, known in the media as the “foreign funding case”, due to what he described as “insufficient evidence”.

    The case has been widely denounced as a politically-motivated attack on Egypt’s civil society.  Judge Ahmed Abdel Aziz Qatlan’s decision marks the end of a probe against 85 organisations. It also means an end to asset freezes and travel bans imposed on members of these organisations, he added.

    Before the decision on Wednesday, accusations against most of the organisations implicated had already been dropped and this week’s decision only affects five organisations. 

    These were the Egyptian Initiative for Personal Rights (EIPR); the Arabic Network for Human Rights Information (ANHRI); the Arab Penal Reform Organisation; the Cairo Institute for Human Rights Studies; and Al-Nadeem Center for Rehabilitation of Victims of Violence.

    Rights groups and human rights defenders have called for an apology and compensation for the defendants. Hussein Baoumi, foreign policy advocacy officer at Amnesty International, who had previously monitored the case as Amnesty’s Egypt researcher, said the closure of the case is a welcome step but is “long overdue”.

    “The government must issue a public apology and compensate the human rights defenders for years of smearing and punitive measures, merely because they defended the rights of millions of people,” he told Middle East Eye.

    Baoumi expressed cautious optimism about the government’s respect for the court decision. “It is too early to say if this marks a serious shift in the government’s crackdown on civil society,” he said. “Closing case 173 must be followed by lifting all travel bans and asset freezes against human rights defenders, all those arbitrarily detained must be released and the NGO law must be amended to bring it in line with Egypt’s obligations.”

    Hossam Bahgat, director of the EIPR, has been under a travel ban and barred from accessing his bank account for eight years. Following the closure of the case, he said he felt “vindicated but not relieved”.

    He demanded “an official and public apology and restitution for the psychological and material damage resulting from this bogus case”. Gamal Eid, the founder of the ANHRI, welcomed the decision to lift his travel ban but said he still hopes for “the return of all the innocent and oppressed people to their families and loved ones”, referring to the estimated 65,000 political prisoners still languishing in Egyptian jails.

    The Cairo Institute for Human Rights Studies (CIHRS) said on Friday: “The decision does not remedy the injustices suffered by the dozens of human rights defenders targeted by the case over the course of the previous decade. Egyptian authorities must issue a formal apology to the victims of this persecution and compensate them for the losses and hardship they have been forced to endure.”

    Bahey eldin Hassan, CIHRS director, has been sentenced to 18 years in jail in absentia and his sentence remains in effect, the group said. Hassan and dozens of other human rights defenders are currently living in exile because they fear arrest if they return to Egypt.

    CIHRS also called on Egypt to put an end to its ongoing crackdown on civil society and human rights defenders, including Ibrahim Metwally, Ezzat Ghoneim, and Hoda Abdelmoniem, who are still behind bars in connection with their work.

    CIHRS is calling for a review of Egypt’s counter-terrorism legislation and penal code to safeguard the freedom of human rights defenders to carry out their jobs without fear of reprisals.

    “Only through a comprehensive review of repressive Egyptian legislation, the releasing of the tens of thousands of peaceful political prisoners, and a genuine opening of public space, can Egyptian authorities demonstrate genuine political will to reform,” it said.

    https://www.middleeasteye.net/news/egypt-ngos-demand-apology-after-closure-13-year-case-over-lack-evidence

    This post was originally published on Hans Thoolen on Human Rights Defenders and their awards.

  • Researchers at Australia’s national science agency are “bullish” on the potential productivity gains to be had from developing foundation AI models in Australia, and say that building sovereign capability should be seriously considered. The comments were made by CSIRO principal scientist in strategy and foresight Dr Stefan Hajkowicz, co-author of a new report urging Australian…

    The post CSIRO argues the case for local AI foundation models appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • Parliament has launched a second inquiry into artificial intelligence, this time to probe general issues presented by the disruptive technology, following a government-supported push by the Greens. The inquiry, which arrives at the same time as another Coalition-led inquiry is rejected, will look into the opportunities and impacts that stem from the uptake of AI….

    The post Parliament launches second inquiry into AI appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • The Pentagon is pursuing a high-tech program that will “minimize cognitive burden” on soldiers, according to budget documents released last week. The $40 million-plus classified program, codenamed “CARCOSA,” shares the same name as “the temple” in the first season of the HBO TV series “True Detective,” a place where an elite pedophile ring performs ritual abuse on children.

    The program is overseen by the Defense Advanced Research Projects Agency, or DARPA, the Pentagon’s premier organization funding the development of futuristic weapons and military capabilities. 

    There is of course no evidence that the military’s CARCOSA is involved in anything like that; but it’s unclear why, at a time when the White House has prioritized fighting “dangerous conspiracy theories,” DARPA is providing the conspiracy crowd with such fodder. The Intercept reached out to DARPA to inquire whether the elite research agency was aware of the strange coincidence or whether there’s a “True Detective” fan at the agency. DARPA did not respond at the time of publication.

    The Pentagon’s CARCOSA is its own temple of information, an AI-driven aggregator that is intended to acquire, sort, and display the blizzard of information that reflects what is going on on a fast-moving future battlefield. “The Carcosa program is developing and demonstrating cyber technologies for use by warfighters during tactical operations,” DARPA’s new fiscal year 2025 budget request says. “Carcosa cyber technology aims to provide warfighters in the field with enhanced situational awareness of their immediate battlespace.”

    CARCOSA, DARPA says, will help to “minimize cognitive burden on tactical cyber operators.” In other words, the headaches caused by the same information overload we all have to deal with every day. Individual cyber warriors on high-intensity battlefields such as Ukraine and Israel are inundated with data, from their own communications and IT systems, from a virtual Niagara of intelligence inputs, and from electronic attacks via computers, machines, and drones. On top of it all, the modern battlefield is a venue for “information operations,” which seek to manipulate what the enemy sees and believes.

    CARCOSA will support an Army mission area called Cyberspace and Electromagnetic Activities, or CEMA, which provides battlefield commanders “with technical and tactical advice on all aspects of offensive and defensive cyberspace and electronic warfare operations.” The Army says CEMA operators are so inundated with information that they need augmented intelligence technology to help sort the signal from the noise.

    CARCOSA stands for Cyber-Augmented Reality and Cyber-Operations Suite for Augmented Intelligence. “Augmented reality” refers to immersive technology that produces computer-generated images overlaying a user’s view of the real world, like Apple’s Vision Pro headset. The program supports development of various technologies, at least according to vague budget documents, all of which seek to defeat a new reality of combat: Individual soldiers and commanders can’t process all of the information that they are bombarded with. 

    The full CARCOSA name, which has not been previously reported, appears in a November $26 million DARPA contract to Two Six Labs, a part of Two Six Technologies and owned by the Carlyle Group. Two Six Labs says it supplies “situational awareness interfaces for cyber operators to distributed sensor networks, from machine learning models that learn to reverse engineer malware to embedded devices that enable and protect our nation’s warfighters.” 

    “We want to do everything we can to help the US government and the intelligence community,” says Two Six Technologies CEO Joe Logue. “Starting from over here for information operations and influence up through cyber, command control and operations.” In its three years of operations, the Arlington, Virginia-based company has doubled its national security contracts to some $650 million.

    “DARPA’s Cyber-Augmented Operations, also known as CAOs, are a vast spectrum of military programs many of which seek to enhance, if not replace, humans with machines,” says Annie Jacobsen, author of “The Pentagon’s Brain: An Uncensored History of DARPA, America’s Top-Secret Military Research Agency.”

    CARCOSA is also mentioned in a DARPA broad agency announcement released in February 2023. In the announcement, DARPA’s Information Innovation Office solicits research proposals to create “novel cyber technologies” for warfighters. CARCOSA, it says, will be a 38-month-long program.

    At least one other CARCOSA-related contract, this one worth $13 million, has been awarded to Chameleon Consulting Group, which also focuses on information operations, per its website. Raytheon Cyber Solutions, Inc.; Southwest Research Institute; SRI International; and Battelle Memorial Institute have also received CARCOSA contracts.

    Though CARCOSA has appeared in the Pentagon’s budget since 2022, when DARPA sought initial funding for the program, this year’s $41.5 million request represents the largest yet for the program.

    “For decades now, DARPA has been leading the world in machine learning systems,” Jacobsen told The Intercept. “Today this gets called AI, but ‘machine learning’ is, I think, a more appropriate term of art — machines are not yet intelligent.”

    Time, it would seem, is a flat circle, to quote the iconic line from “True Detective,” and which has popularly come to denote something we’re doomed to repeat again and again and again.

    The post Secret Pentagon Program Echoes Pedophile Ring in “True Detective” Series appeared first on The Intercept.

    This post was originally published on The Intercept.

  • Google’s dominance of internet search and the potential of artificial intelligence to disrupt or entrench it will go under the microscope in the competition watchdog’s second inquiry of the market, announced on Monday. The inquiry, the latest in the Australia Competition and Consumer Commission’s (ACCC) five-year examination of digital platforms, will help the regulator collect…

    The post ACCC probes AI impact on uncompetitive search market appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • Early data security concerns with off-the-shelf generative artificial intelligence solutions prompted the Queensland government to pursue its own purpose-built internal chatbot, QChat. The internal tool, which is based on OpenAI’s large language model (LLM), is currently being rolled out across the public service after three months of testing with select state government agencies. Since it went…

    The post Qld rolls out purpose-built AI tool amid Copilot concerns appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.