Category: AI

  • By Nikolas Lanum 

    See original post here.

The co-creator of the social humanoid robot Sophia says artificial general intelligence (AGI) and super AGI are mere decades away, and he warns that the subsequent disruption from these artificial intelligence (AI) models will cause a significant amount of political and economic “mayhem” before massive benefits to humanity are seen.

    Speaking with Fox News Digital on the global aspects of the transition from the present day to AGI, Dr. Ben Goertzel highlighted the need to develop a beneficial, compassionate super general intelligence model to ensure humanity flourishes.

Often referred to as the “singularity,” the point at which AGI exceeds human intelligence and reasoning is the moment humankind will be at the whim of the AI model’s motivations and behaviors. AI researchers and futurologists have repeatedly said that this inflection point is still decades away.

    Given the current timeline of AI acceleration, Goertzel concurred with friend and computer scientist Ray Kurzweil, calling it a “fair approximation” that human-level AGI will be created around 2029.

    He added that while many things in current large language models still need to be developed to reach that point, superhuman AGI would likely follow AGI, with a creation date near 2045, barring unforeseen world events like plagues or wars.

    “Personally, I think once you get a human-level thinking machine, then your curve fitting is out the window. That machine is going to rewrite its own code, design its own hardware,” Goertzel added.

    In a worst-case scenario, following the events of the singularity, the super AGI could decide that human atoms and carbon could be better used as a fuel source for expanding AI intelligence. But Goertzel expressed hope that humans will successfully create an AGI to provide humans with the necessities for an abundant life. He imagined a future in which AI-guided drones could drop 3D printers in everyone’s backyards to manufacture food and entertainment, such as video game consoles.

    “Hypothetically, once you have a super AGI that likes people, then the sky’s not even the limit,” Goertzel said. “It doesn’t matter what country you live in that now you can get a 3D printer, a new robot body, check your brain into the Matrix, whatever. Surely there will be limitations at that point. There always are, but we can’t foresee exactly what those limitations are going to be.”

    But before that point, Goertzel surmised a transitional period with a more complicated socio-political and economic story.

    “Once we’ve got half or 70% or even 30% of human jobs eliminated, I don’t, in the end, see any choice in the developed world but free money for the people whose jobs are eliminated,” Goertzel said.

While the U.S. has historically been hesitant to enact programs that push the country closer to a welfare state, Goertzel said the rapid advancement of AI will make some form of universal basic income (UBI) “almost an inevitability.” At this time, small state and local programs, such as the Compton Pledge in California, provide some form of unconditional monthly payments, but these programs are relatively rare and narrow.

    “I still don’t think we’re going to leave half of the country like homeless out in the street. I mean, we like to put people in prisons, but we’re not going to put that many people in prison,” Goertzel said.

But when it comes to impoverished regions of the world, like sub-Saharan Africa or Afghanistan, there may not be enough money to adequately fund a UBI system.

    “We have in the developed world actually very limited appetite for foreign aid,” Goertzel said. “I mean, it’s actually a tiny percentage of our money that we use to help children who are starving. Like half of the kids in Ethiopia are brain stunted in malnutrition in childhood. Then the developed world doesn’t do f— all about it.”

To compound the issue even further, AI will also lead to a decrease in the outsourcing of jobs to the developing world. In Goertzel’s estimation, people will no longer be needed to assemble things in factories or to get cancer from digging coal out of mines. Instead, these jobs in developing countries will be completed by robots at home or abroad. With only subsistence farming (growing crops for food rather than for sale) remaining, developing countries will need more money to afford prescription medicine, electricity, internet and phones.

    “So, you’ve got a situation where, like, half the world’s population is subsistence farming with literally no money. The other half is living on universal basic income, sitting home with VR consoles, jacking into video games all day,” Goertzel said, noting that the future of AI will lead to an interesting potential for any number of predictable dystopian scenarios in “just a few years” from now.

    In addition, Goertzel said the status quo of a reactive Western political system would mean world leaders will not start dealing with the problem until it hits. Despite the potential ramifications, Goertzel said he is “a big optimist” about the endgame of AI.

    “I think we can teach AI to be compassionate to people,” he said.

    “I think if we raise them up, you know, doing education and health care, elder care, creative arts. Deploy them in a decentralized and open-source way. I think we can actually create machines smarter than people that like people and will help us. But I don’t yet see how to avoid a lot of very unpleasant mayhem on the route to the happy place.”

    The potential impact of AI has sent lawmakers and government agencies scrambling to create a cohesive plan to address future economic and geopolitical disruptions. In March, Elon Musk, Steve Wozniak and other tech leaders signed a letter that asked AI developers to “immediately pause for at least six months the training of AI systems more powerful than GPT-4.”

    Goertzel expressed skepticism about the pause, noting that the researchers who signed the letter do not seem to be thinking about solving the “real problems” with the rollout of the technology.

“I don’t see how stopping developing these models for six months is going to really help either,” he said. “I mean, first of all, China and Russia don’t stop developing the models anyway. I mean, not that I’m a big geopolitical war guy, but still, it’s a weird thing for one country to stop developing advanced technology when its rivals are not. I mean, for another thing, big companies would keep developing it in the background and just not expose it to the end user in such an obvious way.”

In some ways, Goertzel believes the popular worries about AI are smaller than the actual ones, and probably the wrong ones, but it is not clear how these issues can be addressed at the level of corporate ethics advisers or government pundits.

    “I mean, they can make some guidelines on, like, gender bias in AI models or something. But I mean … you’re not going to decide in six months how to supply a universal basic income to the Congo when AI has taken over all their jobs. I mean, these are big problems that need solving,” he said.

Regarding the idea of super AI killing everyone, Hollywood “Terminator”-style, Goertzel said the situation is possible but “quite unlikely.” Instead, he noted people should be more worried about “blatantly obvious” and likely issues that one could anticipate through linear extrapolation. Goertzel surmised that one reason people are not focused on the most likely issues is that many of them could be solved with obvious solutions.

    “I mean, the Terminator, you can’t – OK, what can you do? You could shut down AI development. Well, that’s not really going to happen. Everyone knows that. I mean, you can’t do much about it. It’s a sort of, you know, an out there possibility,” he said. “I mean, the Third World starving to death because we took all the jobs and there’s no universal basic income. Well, what we could do about it is fix global inequality by wealth transfer from rich countries to poor countries. And nobody, nobody wants to do that.”

    The post Super AGI and the Matrix: Sophia the Robot co-creator predicts economic ‘mayhem’ on road to AI utopia appeared first on Basic Income Today.

    This post was originally published on Basic Income Today.

  • Doomsaying is an old occupation. Artificial intelligence (AI) is a complex subject. It’s easy to fear what you don’t understand. These three truths go some way towards explaining the oversimplification and dramatisation plaguing discussions about AI. Yesterday outlets around the world were plastered with news of yet another open letter claiming AI poses an existential…

    The post No, AI probably won’t kill us all appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • The future of warfare is being shaped by computer algorithms that are assuming ever-greater control over battlefield technology. The war in Ukraine has become a testing ground for some of these weapons, and experts warn that we are on the brink of fully autonomous drones that decide for themselves whom to kill.     

    This week, we revisit a story from reporter Zachary Fryer-Biggs about U.S. efforts to harness gargantuan leaps in artificial intelligence to develop weapons systems for a new kind of warfare. The push to integrate AI into battlefield technology raises a big question: How far should we go in handing control of lethal weapons to machines? 

    In our first story, Fryer-Biggs and Reveal’s Michael Montgomery head to the U.S. Military Academy at West Point. Sophomore cadets are exploring the ethics of autonomous weapons through a lab simulation that uses miniature tanks programmed to destroy their targets.

    Next, Fryer-Biggs and Montgomery talk to a top general leading the Pentagon’s AI initiative. They also explore the legendary hackers conference known as DEF CON and hear from technologists campaigning for a global ban on autonomous weapons.

    We close with a conversation between host Al Letson and Fryer-Biggs about the implications of algorithmic warfare and how the U.S. and other leaders in machine learning are resistant to signing treaties that would put limits on machines capable of making battlefield decisions. 

    This episode originally aired in June 2021.

    This post was originally published on Reveal.

  • By: Jeremy Hsu

    See original post here.

    Could AI soon write your favourite Hollywood film or streaming show? That concern is one of the issues driving a US film and television writers’ strike that has halted many productions nationwide.

The Writers Guild of America (WGA), a labour union representing writers who primarily work in film and television, began the work strike in May 2023 after reaching an impasse in negotiations with the Alliance of Motion Picture and Television Producers that represents the US entertainment industry. Part of the disagreement revolves around a WGA proposal to ban the industry from using AIs such as ChatGPT to generate story ideas or scripts for films and shows – the union wants to ensure that such technologies do not undermine writers’ compensation and writing credits.

    The WGA has also proposed that any scripts covered by the union’s collective bargaining agreement cannot be used to train AIs. There has already been broader pushback against tech companies scraping online content for free to train large language models and other generative AIs on text and images originally created by humans.

    “The fear is that AI could be used to produce first drafts of shows, and then a small number of writers would work off of those scripts,” says Virginia Doellgast at Cornell University in New York.

    Hollywood studios on the other side of the negotiating table include companies such as Walt Disney Studios and Warner Bros. Studios, along with streaming services such as Netflix, Apple TV+ and Amazon Prime Video. The industry rejected the initial WGA proposal and countered with an offer to hold annual meetings to discuss new technological changes.

    “This is a pretty weak commitment – the writers would have little power in those discussions to influence how the technologies are used,” says Doellgast. “The studios don’t want to negotiate hard limits on how they will use AI.”

    The Alliance of Motion Picture and Television Producers issued a statement on 4 May 2023 saying that “AI raises hard, important creative and legal questions for everyone”. It also noted that the previous and now-expired agreement with the WGA already defined a writer as a “person” and would exclude AI from receiving a writing credit.

    The writers’ strike comes at a time when the separate SAG-AFTRA union representing actors and other performance artists is also looking to regulate possible uses of AI, including using AI to simulate actor performances or digitally edit filmed facial expressions and sync new audio to people’s lips.

    “The WGA’s demands are a bellwether that workers don’t intend to stand for companies using AI to justify devaluation of their craft and expertise,” says Sarah Myers West at the AI Now Institute.

    “The WGA is pointing out an important question – who is benefitting from the development and use of these systems, and who is harmed by them?”

    Workers in other industries – such as accounting and IT support, customer service, and software programming – are already seeing companies use AI to automate parts of their jobs and sometimes even to replace human workers. By comparison, writers “have a strong union backing them up and solidarity from other creative workers in their industry”, along with higher individual bargaining power and in-demand skills, says Doellgast.

    Beyond the AI issue, the writers’ strike is also focused on how the shift in viewing habits from TV broadcast networks to streaming services has impacted writers’ income and job security, says Doellgast. For example, writing jobs are now typically based on shorter seasons with fewer episodes for shows produced by platforms such as Netflix or Amazon Prime Video.

    Companies have often won out in historical labour conflicts over technological automation of human work.

    But there have been cases where workers stood their ground: for example, in the 1960s unions representing US dock workers negotiated for employment security against tech-driven downsizing and a pension fund for early worker retirement.

    It may be difficult to stop the eventual use of AI in creative fields like TV and filmmaking. But in the case of the writers’ strike, an amicable agreement granting the WGA more control over how AI is used in scriptwriting could ensure that writers reap some benefits, even if part of their work gets automated, says Simon Johnson at the Massachusetts Institute of Technology.

    “The short-term key is who will control AI – labour or capital,” says Johnson. “If capital controls it, labour will lose big time and fast.”

    The post Why use of AI is a major sticking point in the ongoing writers’ strike appeared first on Basic Income Today.

    This post was originally published on Basic Income Today.

  • By: Kit Heren

    See original post here.

    The cost-cutting came as the company reported a 12% drop in annual profits on Thursday.

    A “big chunk” of the jobs lost will be in the UK at the completion of the rollout of the fibre broadband and 5G network, chief executive Philip Jansen said.

    Listen and subscribe to Unprecedented: Inside Downing Street on Global Player

AI could be used to replace workers in customer service call handling and network diagnostics.

“Whenever you get new technologies you can get big changes,” Mr Jansen said. “For a company like BT there is a huge opportunity to use AI to be more efficient.

    “There is a sort of 10,000 reduction from that sort of automated digitisation – we will be a huge beneficiary of AI. I believe generative AI is a huge leap forward; yes, we have to be careful, but it is a massive change.

    “We will rely on a much smaller workforce and new networks are much more efficient. There will be fewer contractors, natural attrition and reskilling. Only 5,000 [job cuts] in this plan are what you would call ‘normal’ restructuring. This is not new news to any of our union partners.”

    Mr Jansen said that customers will not feel like they are talking to robots. He added: “We are multi-channel, we are online, we have 450 stores and that’s not changing at all.

    “There are plenty of opportunities for our customers to deal with people at BT, plenty of people to speak to.”

    A trade union said it was “deeply concerned” about the scale of the cuts.

    Prospect national secretary John Ferrett said: “Announcing such a huge reduction in this way will be very unsettling for workers who did so much to keep the country connected during the pandemic.

    “As a union we want to see the details behind this announcement in order to understand how it will impact upon members and have demanded an urgent meeting with the chief executive.”

    But the Communication Workers Union said the cuts were not a surprise to its members.

    A spokesman for the union said: “The introduction of new technologies across the company along with the completion of the fibre infrastructure build replacing the copper network was always going to result in less labour costs for the company in the coming years.

    “However, we have made it categorically clear to BT that we want to retain as many direct labour jobs as possible and that any reduction should come from subcontractors in the first instance and natural attrition.”

    BT’s profits before tax were £1.7bn for the full year to March 31 2023, down 12% from nearly £2bn in the previous financial year, the company announced on Thursday morning.

The company’s share price dropped nearly 8% after the results were announced, as of 12.30pm on Thursday.

    The post BT to replace 10,000 workers with AI as part of wider cull of up to 55,000 staff in bid to slash costs appeared first on Basic Income Today.

    This post was originally published on Basic Income Today.

  • The strike is back in Britain but the Conservative government is out to crush the unions. What lessons should labor learn from the 1980s?

This post was originally published on Dissent Magazine.

  • Towards Life 3.0: Ethics and Technology in the 21st Century is a talk series organized and facilitated by Dr. Mathias Risse, Director of the Carr Center for Human Rights Policy, and Berthold Beitz Professor in Human Rights, Global Affairs, and Philosophy. Drawing inspiration from the title of Max Tegmark’s book, Life 3.0: Being Human in the Age of Artificial Intelligence, the series draws upon a range of scholars, technology leaders, and public interest technologists to address the ethical aspects of the long-term impact of artificial intelligence on society and human life.

On 20 April you can join for 45 minutes with WITNESS’ new Executive Director Sam Gregory [see: https://humanrightsdefenders.blog/2023/04/05/sam-gregory-finally-in-the-lead-at-witness/] on how AI is changing the media and information landscape; the creative opportunities for activists and threats to truth created by synthetic image, video, and audio; and the people and places being impacted but left out of the current conversation.

    Sam says “Don’t let the hype-cycle around ChatGPT and Midjourney pull you into panic, WITNESS has been preparing for this moment for the past decade with foundational research and global advocacy on synthetic and manipulated media. Through structured work with human rights defenders, journalists, and technologists on four continents, we’ve identified the most pressing concerns posed by these emerging technologies and concrete recommendations on what we must do now.

“We have been listening to critical voices around the globe to anticipate and design thoughtful responses to the impact of deepfakes and generative AI on our ability to discern the truth. WITNESS has proactively worked on responsible practices for synthetic media as a part of the Partnership on AI and helped develop technical standards to understand media origins and edits with the C2PA. We have directly influenced standards for authenticity infrastructure and continue to forcefully advocate for centering equity and human rights concerns in the development of detection technologies. We are convening with the people in our communities who have most to gain and lose from these technologies to hear what they want and need, most recently in Kenya at the #GenAIAfrica convening”.

Register here: wit.to/AI-webinar

    This post was originally published on Hans Thoolen on Human Rights Defenders and their awards.

  • The Albanese government should use the upcoming federal Budget to lift its focus on artificial intelligence, including by “fast-tracking” millions in grants that has been locked up for two years, according to the Australian Information Industry Association. Amid the meteoric rise of generative AI, which has created a figurative arms race between tech giants, the…

    The post AIIA urges govt to lift game on AI as industry frustration grows appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • Australian software startup Fivecast on Tuesday announced it had closed its AU$30 million Series A. The round, led by US cyber investment firm Ten Eleven, which counts Malcolm Turnbull as an advisor, will fund a US and UK expansion and product build out. Founded in 2017, Fivecast offers its Onyx intelligence platform to businesses and…

    The post AI software firm Fivecast closes $30m Series A appeared first on InnovationAus.com.

  • The open letter calling for an immediate six-month pause in the AI development arms race and signed by more than 1600 tech luminaries, researchers and responsible technology advocates under the umbrella of the Future of Life Institute is stunning on its face. Self-reflection and caution have never been defining qualities of technology sector leaders. Outside…

    The post The pause AI movement is remarkable, but won’t work appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • After developing an artificial intelligence ethics framework long before the likes of ChatGPT and BARD, efforts are afoot at the data arm of Australia’s national science agency to help put the “difficult” principles into practice. Speaking at the Leading Innovation Summit in Sydney this week, Data61 director Jon Whittle said the “massive upsurge in AI…

    The post Data61 puts AI ethics into practice appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • Australian academics and researchers have joined tech billionaire Elon Musk, Apple co-founder Steve Wozniak and more than 1000 others in calling for a moratorium of at least six months on training AI systems that are “more powerful than GPT-4”. In an open letter issued by the US not-for-profit Future of Life Institute, the signatories have…

    The post Aussie academics join push to pause AI training beyond GPT-4 appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • The experts charged with providing the Prime Minister with advice on technology have been examining artificial intelligence and a potential government response for the last month, with findings expected “shortly” to inform new national policies. Australia’s National Science and Technology Council was last month directed by Industry and Science minister Ed Husic to investigate the…

    The post Govt’s tech advisory council finalising AI plan appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • Past efforts to address “less impressive” forms of artificial intelligence have helped the New South Wales government respond to the challenges and opportunities presented by ChatGPT, according to the state’s chief data scientist. But the arrival of sophisticated generative AI has already seen the government modify its advice to public servants through complementary information that…

    The post NSW govt flexes AI policies in wake of ChatGPT appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • The UK government has announced it will invest billions into digital technologies, startups and R&D. The Spring budget includes commitments of £2.5 billion (AUD$4.55 billion) for quantum technologies alone to accompany a new national quantum strategy. AI, supercomputing and innovation districts all got budget boosts as well, while a wider overhaul of technology regulations has…

    The post UK unveils $4.5bn quantum strategy in tech-friendly budget appeared first on InnovationAus.com.

  • Security and data quality concerns are the largest barriers to AI system adoption by businesses in Australia, according to a report by CSIRO’s National Artificial Intelligence Centre. Of the decision-makers responding to surveys undertaken by Forrester, 59 per cent highlighted the potential for new security threats and poor data quality as the top two barriers…

    The post AI uptake inhibited by security and data quality concerns: CSIRO appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • Australians need privacy and procedural fairness rights enshrined in federal law to protect against authorities’ increasing use of technology that invades privacy and automates decision making, the Australian Human Rights Commission is warning. In a new report setting out a model for a federal Human Rights Act, the Australian Human Rights Commission (AHRC) warned authorities…

    The post National rights law needed to reduce AI risk appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • A combat management system (CMS) is as essential in a modern warship as any of the platform’s sensors and weapons. Discussion of naval ship and submarine capabilities often focuses on the sensors and effectors the platforms carry. Equally important onboard any platform, however, is and always has been the combat management system (CMS). In the […]

    The post Intelligent Development: Managing Data Through the CMS appeared first on Asian Military Review.

  • Without urgent investment in building a sovereign capability in locally developed artificial intelligence tools, Australia risks ceding control of its strategic systems and technology to foreign interests, a group of eminent scientists has warned. An open letter from 14 of the nation’s leading experts on AI and robotics published on Wednesday called for the urgent…

    The post Australia risks ceding sovereign control to foreign interests on AI appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • OpenAI’s launch of ChatGPT in November 2022 has spurred a cascade of articles and commentary on artificial intelligence. The discussion, however, reveals how much artificial intelligence is already deployed. Artificial intelligence (AI) is one of those technologies with a long history of disappointment. Dating back to Alan Turing at the start of the theory of…

    The post Suddenly, AI is everywhere appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • The use of ChatGPT in the public service is not being encouraged by the Digital Transformation Agency, although it is supportive of experimentation using generative AI technologies to explore use-cases as the digital agency continues development of formal AI guidelines. The agency’s chief executive Chris Fechner noted that there was currently “no formal Commonwealth policy…

    The post DTA cautions public service ChatGPT use without policy appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • By: Javed Anwer

    In the late 1920s, times were hard and the Great Depression shadowed everything. There was a feeling of doom and gloom, and imagining new possibilities looked impossible. It was probably because people felt as if they were in a tunnel without any light at its end that John Maynard Keynes, the economist who would go on to become a household name in another 10 years, wrote an essay titled ‘Economic Possibilities for our Grandchildren’ in 1928. His basic premise: we might be working hard, but our kids and grandkids would never have to.

    In his short essay, Keynes imagined how the world would look after 100 years. And he painted a rosy picture of work and income. The core idea that underpinned the essay was that in a hundred years, for “the first time since his creation, man will” face the problem of “how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well.”

    In other words, Keynes imagined a world in which science and industrialisation would create so much wealth that people would no longer be forced to go through the grind and daily hustle.

    After posing the problem, Keynes offered a solution. He wrote that work of three hours daily would keep people happy and occupied enough “for three hours a day is quite enough to satisfy the old Adam in most of us.”

    In 2022, just six years short of the hundred that Keynes imagined, we are nowhere near the work culture he predicted. The strange bit, however, is that this is not because the calculations Keynes made on wealth generation were wrong. In his essay, Keynes believed that the quality of life – for simplicity, let’s assume he is talking of GDP – in “progressive countries one hundred years hence will be between four and eight times as high as it is today.” By this measure, the world has surpassed Keynesian expectations by a large margin. A quick Google search reveals that the GDP of the entire world was estimated to be around $8 trillion (adjusted for inflation and expressed in 2011 dollar prices) in 1940. In 2015, the number was $108 trillion. That is around 13.5X – well beyond the 8X that Keynes gave as his best-case scenario.
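
    The growth figures above are simple arithmetic, and worth checking. A quick sketch in Python, using the article’s own round numbers (both in inflation-adjusted 2011 dollars):

    ```python
    # Sanity-check of the GDP figures cited above (the article's own round
    # numbers, both expressed in inflation-adjusted 2011 dollars).
    gdp_1940 = 8e12      # world GDP in 1940: ~$8 trillion
    gdp_2015 = 108e12    # world GDP in 2015: ~$108 trillion

    multiple = gdp_2015 / gdp_1940
    print(f"World GDP grew roughly {multiple:.1f}x")   # roughly 13.5x

    # Keynes predicted growth of 4x to 8x over the century.
    keynes_best_case = 8
    print(multiple > keynes_best_case)   # True: growth beat even his best case
    ```

    Even on these rough numbers, growth comfortably exceeded Keynes’s upper bound – which is exactly the author’s point: the wealth arrived, the leisure did not.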

    But we aren’t working three hours a day.

    In fact, if you look around, chances are that you will see people who are overworked and tired, with dark bags under their eyes and stress painted on their faces.

    Enter the bullshit jobs

    What happened? Economists, sociologists and political scientists will have better and more nuanced answers. But, as anthropologist David Graeber argued in 2013, we all continue to work the way people worked a hundred years ago, when the world was much poorer, because a lot of us are now working in bullshit jobs. Instead of figuring out a better and more equitable way to spread the wealth created by the tech and science boom of the last 100 years, society increasingly relies on bullshit jobs to pass on (some of) the benefits to the masses.

    Before I talk of ChatGPT, the AI sensation that is amusing people with its erudition and its ability to perform some not-so-basic jobs, let me talk of bullshit jobs.

    The term was coined by Graeber. In his essay, he too started with the Keynesian prediction. He then argued that the reason it hadn’t come true was that the world had invented bullshit jobs. Graeber wrote that a lot of people were working because “it is as if someone were out there making up pointless jobs just for the sake of keeping us all working.”

    The phenomenon of bullshit jobs is, on the whole, detrimental, even if in the short run, or from an individual’s point of view, it may look great. Example: telemarketing, which Graeber called a bullshit job. Sure, it allows someone to earn some money and hence offers them a chance to better their life. But its benefits to society, particularly in the long run, are suspect. It doesn’t add anything substantial to our well-being.

    In fact, bullshit jobs tend to harm people and society, something we are probably beginning to see now. The Great Resignation of the last two years wasn’t so much a revolt against work as a revolt against shoddy, meaningless and soul-sapping work. If people nowadays groan every Monday morning and move through office cubicles like zombies, surely some of the blame lies with the work they are doing.

    Enter ChatGPT

    Bullshit jobs are joy killers! They sap the energy and vitality of the people employed to do them. They are also the jobs that ChatGPT is going to do wonderfully well, potentially removing millions of people from the job market.

    These are early days, and we already know that ChatGPT, which seems to have the bookish knowledge of someone who has memorised a million books, isn’t as smart as it pretends to be. But for many jobs, it doesn’t have to be an Einstein. It only needs to be smart enough, and that it is. For a lot of tasks, ChatGPT, or an AI similarly tuned, is already an ideal employee. And it will get better. It can quickly find basic syntax errors in code and suggest the right fixes. It can analyse large data sets. It can write a lot of content that is SEO-friendly and easy to read. It can create basic templates that can be taught to children or used as teaching aids in schools. It can create PowerPoint slides better than an IIM-A graduate. It can write polite emails, the kind that someone in a consumer-facing job might currently be required to write. And so on and so forth.

    The result is that a large chunk of bullshit jobs can be done by ChatGPT. And if companies and organisations move to use ChatGPT in places where they currently use humans, it is safe to say that millions of people may end up losing their jobs.

    This may sound scary, but I believe it could be an opportunity – or a trigger, if you want to call it that – to push the world of work in the right direction. In fact, if we do decide to use more and more AI systems like ChatGPT and smart robots, we may have no option but to move towards a work culture where employment as we know it is optional, not mandatory. Bullshit jobs are bullshit work, but they do serve a function in society: redistributing wealth, although the redistribution is rarely fair or generous.

    If AI systems like ChatGPT take over the bullshit jobs, this redistribution will stop. In such a scenario the only way to stop society from slipping into chaos will be universal basic income, which will have to be offered to all with no strings attached.

    Essentially, if the world wants to move to a future where AI systems like ChatGPT are common, driverless cars roam urban jungles and smart robots have taken over menial jobs, it must find a way to offer basic income to everyone universally. Otherwise, once ChatGPT and its ilk start replacing humans in jobs, there will be blood on the street and on Wall Street.

    The good part is that, conceptually, it is possible. The tech boom of the last thirty years has created – and will continue to create – extreme amounts of wealth. The key is to find a way to redistribute it. Years ago, instead of figuring out how humans could work less, the world invented bullshit jobs. Now, with AI systems like ChatGPT on the horizon, the world has an opportunity to fix the sins of its past. It can, almost painlessly, move to a work culture where bullshit jobs are done by AI while humans work to create real value and chase their passions, even if that passion happens to be a life of idling and merriment. It will make the world and its workplaces better.

    This post was originally published on Basic Income Today.

  • The urgent human rights issues in the Eastern Europe and Central Asia region are hugely varied and demand creative campaigns that are well-researched, well-planned and well-managed despite the time pressures that surround them.

    JOB PURPOSE: To lead the identification, development, implementation and evaluation of Amnesty International’s campaigning and advocacy strategies on human rights violations in the Eastern Europe and Central Asia region, to deliver impact in relation to agreed priorities, utilizing political judgment and analytical, communication and representational skills.

    ABOUT YOU

    • Lead the development and implementation of campaign strategies, ensuring campaigns result in measurable change.
    • Advise on, coordinate and review the contribution to relevant campaigns by regional colleagues and other programmes.
    • Coordinate action planning and ensure consistency with campaigning standards and optimal use of resources.
    • Assess opportunities for action, identifying creative and effective campaigning tactics.
    • Provide advice to sections and structures and external partners on the development and implementation of campaign strategies.
    • Responsible for ensuring there is effective communication between relevant IS teams, sections and structures and partners about projects.
    • Draft, review and advise on campaign materials for internal and external use, ensuring products are coherent within the campaign strategy.
    • Communicate AI’s concerns, positions and messages to external and internal audiences.
    • Contribute to planning, execution and evaluation stages of campaign projects; develop and share campaigning best practice.

    SKILLS AND EXPERIENCE

    • The ability to adapt to fast-changing political situations in, and related to, Eastern Europe and Central Asia.
    • Experience of leading and implementing campaigns at the national & international level and the ability to lead innovation and creative approaches to campaigning.
    • Knowledge of working on, and in, the region and a specialist knowledge in relation to specific countries or other geographical areas in the region.
    • Digitally competent, with experience of digital campaigning and keeping up to date with digital trends and campaigning methodologies.
    • Experience of working with colleagues and partners based around the world.
    • Excellent communication skills in English and Russian – fluent, clear and concise. Knowledge of another regional language is desirable.
    • Experience of leading project teams and the ability to engage and inspire team members.
    • Experience of managing conflicting demands, meeting deadlines and adjusting priorities.
    • Ability to undertake research to gather information relevant to the development of campaign strategies.
    • Ability to evaluate campaigns and projects and to report progress against stated objectives; experience of managing budgets and reporting against expenditure.

    Amnesty International is committed to creating and sustaining a working environment in which everyone has an equal opportunity to fulfill their potential and we welcome applications from suitably qualified people from all sections of the community. For further information on our benefits, please visit https://www.amnesty.org/en/careers/benefits/

    APPLY HERE

    https://careers.amnesty.org/vacancy/senior-regional-campaigner-london-3481/3509/description/

    This post was originally published on Hans Thoolen on Human Rights Defenders and their awards.

  • Artificial intelligence is being used in nearly every research field around the world, with adoption at its fastest pace since the technology emerged more than half a century ago. Its help as a productivity tool remains difficult to prove but still holds promise as the “technology of our time”. That’s according to a report by…

    The post ‘No signs of a slowdown’: AI adoption at record level appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • The event that inspired the development of EpiWatch, an AI-driven early warning detection system for epidemics, was the West African ebola outbreak of 2014, according to UNSW professor of global biosecurity Raina MacIntyre. Given this outbreak and subsequent epidemics, Professor MacIntyre said the world was in urgent need of real change in the way outbreaks…

    The post EpiWatch, the AI-driven early warning system for epidemics appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • The national science agency will upgrade from its current GPU cluster next year to a new high-performance accelerator computing system more capable of machine learning and artificial intelligence workloads across its various research areas. The CSIRO on Friday went to market for the solution, offering up to $14.5 million over five years, and wants the…

    The post AI upgrade for CSIRO’s computing cluster appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • Western Australia’s space sector has hit another milestone with the opening of the state government-funded Space Automation, AI and Robotics Control Complex. Established and operated by Dutch geological data specialist Fugro, the multi-user facility opened on Thursday to provide remote operations activities in space as well as terrestrial and subsea remote operations. The facility also…

    The post WA remote space operations complex opens appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • Funding for Australia’s Artificial Intelligence Action Plan was spared from a Labor spending audit that netted the new government $22 billion in savings in Tuesday’s budget. The $113 million allocated to AI remains unchanged, but details of the plan may yet change after rollout delays under the previous government. The new government and Industry and…

    The post AI action plan locked in but changes still possible appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • Ahead of the Australia-Israel Innovation Summit, MarineTech venture capital fund theDOCK’s co-founder Hannan Carmeli shared his thoughts on automation opportunities in the maritime sector and his aim to build relationships with Australian firms. A graduate of the Israeli Naval Academy and Israeli Navy, Mr Carmeli has spent 30 years working in the tech sector, including…

    The post Israeli VC looks to partner with Aussie corporate investors appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.