Category: AI

  • What’s notable about the hype cycle around artificial intelligence at the moment is that it’s not just VCs, tech CEOs and advocates that are making the headlines; governments are as well. Not wanting to repeat the deregulated disasters of Web 2.0 companies that resulted in widespread harms, disinformation, privacy breaches and monopolistic behaviours, governments are…

    The post The AI wishes of govt will be lip service without accountability appeared first on InnovationAus.com.

  • By Dylan Matthews

    See original post here.

    There’s a lot of money in AI. That’s not just something that startup founders rushing to cash in on the latest fad believe; some very reputable economists are predicting a massive boom in productivity as AI use takes off, buoyed by empirical research showing tools like ChatGPT boost worker output.

    But while previous tech founders such as Larry Page or Mark Zuckerberg schemed furiously to secure as much control over the companies they created as possible — and with it, the financial upside — AI founders are taking a different tack, and experimenting with novel corporate governance structures meant to force themselves to take nonmonetary considerations into account.

    Demis Hassabis, the founder of DeepMind, sold his company to Google in 2014 only after the latter agreed to an independent ethics board that would govern how Google uses DeepMind’s research. (How much bite the board has had in practice is debatable.)

    ChatGPT maker OpenAI is structured as a nonprofit that owns a for-profit arm with “capped” profits: First-round investors would stop earning after their shares multiply in value a hundredfold, with profits beyond that going into OpenAI’s nonprofit. A 100x return may seem ridiculous, but consider that venture capitalist Peter Thiel invested $500,000 in Facebook and earned over $1 billion when the company went public, an over 2,000x return. If OpenAI is even a 10th that successful, the excess profits returning to the nonprofit would be huge.

    Meanwhile, Anthropic, which makes the chatbot Claude, is divesting control over a majority of its board to a trust composed not of shareholders, but of independent trustees meant to enforce a focus on safety ahead of profits.

    Those three companies, plus Microsoft, got together on Wednesday to start a new organization meant to self-regulate the AI industry.

    I don’t know which of these models, if any, will work — meaning produce advanced AI that is safe and reliable. But I have hope that the hunger for new governance models from AI founders could maybe, possibly, if we’re very lucky, result in many of the potentially enormous and needed economic gains from the technology being broadly distributed.

    Where does the AI windfall go?

    There are three broad ways the profits reaped by AI companies could make their way to a more general public. The first, and most important over the long term, is taxes: There are a whole lot of ways to tax capital income, like AI company profits, and then redistribute the proceeds through social programs. The second, considerably less important, is charity. Anthropic in particular is big on encouraging this, offering a 3:1 match on donations of shares in the company, up to 50 percent of an employee’s shares. That means that if an employee who earns 10,000 shares a year donates half of them, the company will donate another 15,000 shares on top of that.
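    The match arithmetic above can be sketched in a few lines (a minimal illustration of the figures reported in the article; the function name, and the assumption that donations beyond the 50 percent cap simply go unmatched, are mine):

```python
def anthropic_match(shares_earned: float, shares_donated: float) -> float:
    """Shares the company contributes under the 3:1 match described above.

    Assumption: only donations up to 50% of an employee's shares are
    eligible for the match; anything donated beyond that cap is unmatched.
    """
    matchable = min(shares_donated, 0.5 * shares_earned)
    return 3 * matchable
```

    For the article’s example, `anthropic_match(10_000, 5_000)` yields the 15,000 matched shares mentioned above.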

    The third is if the companies themselves decide to donate a large share of their profits. This was the key proposal of a landmark 2020 paper called “The Windfall Clause,” released by the Centre for the Governance of AI in Oxford. The six authors notably include a number of figures who are now senior governance officials at leading labs: Cullen O’Keefe and Jade Leung are at OpenAI, and Allan Dafoe is at Google DeepMind (the other three are Peter Cihon, Ben Garfinkel, and Carrick Flynn).

    The idea is simple: The clause is a voluntary but binding commitment that AI firms could make to donate a set percentage of their profits in excess of a certain threshold to a charitable entity. They suggest the thresholds be based on profits as a share of the gross world product (the entire world’s economic output).

    If AI is a truly transformative technology, then profits of this scale are not inconceivable. The tech industry has already been able to generate massive profits with a fraction of the workforce of past industrial giants like General Motors; AI promises to repeat that success but also completely substitute for some forms of labor, turning what would have been wages in those jobs into revenue for AI companies. If that revenue is not shared somehow, the result could be a surge in inequality.

    In an illustrative example, not meant as a firm proposal, the authors of “The Windfall Clause” suggest that a company donate 1 percent of profits between 0.1 percent and 1 percent of the world’s economy; 20 percent of profits between 1 and 10 percent; and 50 percent of profits above that. Out of all the companies in the world today — up to and including firms with trillion-dollar values like Apple — none have profits high enough to reach 0.1 percent of gross world product. Of course, the specifics require much more thought, but the point is not to replace taxes for normal-scale companies, but to set up obligations for companies that are uniquely and spectacularly successful.
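    Read as a schedule, the illustrative tiers work like a progressive tax on profits measured against gross world product. A minimal sketch, using only the tier boundaries and rates from the example above (the function itself and the round $100 trillion GWP figure in the usage note are my illustrative assumptions):

```python
def windfall_donation(profits: float, gross_world_product: float) -> float:
    """Donation owed under the paper's illustrative tiered schedule:
    1% of profits between 0.1% and 1% of GWP, 20% of profits between
    1% and 10% of GWP, and 50% of profits above 10% of GWP."""
    tiers = [
        (0.001, 0.01, 0.01),          # 0.1%-1% of GWP: donate 1%
        (0.01, 0.10, 0.20),           # 1%-10% of GWP: donate 20%
        (0.10, float("inf"), 0.50),   # above 10% of GWP: donate 50%
    ]
    share = profits / gross_world_product
    owed = 0.0
    for low, high, rate in tiers:
        if share > low:
            owed += (min(share, high) - low) * gross_world_product * rate
    return owed
```

    With gross world product around $100 trillion, a firm earning $2 trillion in profits (2 percent of GWP) would owe $9 billion on the first tier plus $200 billion on the second, about $209 billion in total, while any firm below the 0.1 percent threshold owes nothing.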

    The proposal also doesn’t specify where the money would actually go. Choosing the wrong way to distribute would be very bad, the authors note, and the questions of how to distribute are innumerable: “For example, in a global scheme, do all states get equal shares of windfall? Should windfall be allocated per capita? Should poorer states get more or quicker aid?”

    A global UBI

    I won’t pretend to have given the setup of windfall clauses nearly as much thought as these authors, and when the paper was published in early 2020, OpenAI’s GPT-3 hadn’t even been released. But I think their idea has a lot of promise, and the time to act on it is soon.

    If AI really is a transformative technology, and there are companies with profits on the order of 1 percent or more of the world economy, then the cat will be far out of the bag already. That company would presumably fight like hell against any proposals to distribute its windfall equitably across the world, and would have the resources and influence to win. But right now, when such benefits are purely speculative, they’d be giving up little. And if AI isn’t that big a deal, then at worst those of us advocating these measures will look foolish. That seems like a small price to pay.

    My suggestion for distribution would be not to attempt to find hyper-specific high-impact opportunities, like donating malaria bednets or giving money to anti-factory farming measures. We don’t know enough about the world in which transformative AI develops for these to reliably make sense; maybe we’ll have cured malaria already (I certainly hope so). Nor would I suggest outsourcing the task to a handful of foundation managers appointed by the AI firm. That’s too much power in the hands of an unaccountable group, too tied to the source of the profits.

    Instead, let’s keep it simple. The windfall should be distributed to as many individuals on earth as possible as a universal basic income every month. The company should be committed to working with host country governments to supply funds for that express purpose, and commit to audits to ensure the money is actually used that way. If there’s need to triage and only fund measures in certain places, start with the poorest countries possible that still have decent financial infrastructure. (M-Pesa, the mobile payments software used in East Africa, is more than good enough.)

    Direct cash distributions to individuals reduce the risk of fraud and abuse by local governments, and avoid intractable disputes about values at the level of the AI company making the donations. They also have an attractive quality relative to taxes by rich countries. If Congress were to pass a law imposing a corporate profits surtax along the lines laid out above, the share of the proceeds going to people in poverty abroad would be vanishingly small, at most 1 percent of the money. A global UBI program would be a huge win for people in developing countries relative to that option.

    Of course, it’s easy for me to sit here and say “set up a global UBI program” from my perch as a writer. It will take a lot of work to get going. But it’s work worth doing, and a remarkably non-dystopian vision of a world with transformative AI.

    The post How “windfall profits” from AI companies could fund a universal basic income appeared first on Basic Income Today.

    This post was originally published on Basic Income Today.

  • By Leigh Thomas

    See original post here.

    More than a quarter of jobs in the OECD rely on skills that could be easily automated in the coming artificial intelligence revolution, and workers fear they could lose their jobs to AI, the OECD said on Tuesday.

    The Organisation for Economic Co-operation and Development (OECD) is a 38-member bloc, spanning mostly wealthy nations but also some emerging economies like Mexico and Estonia.

    There is little evidence the emergence of AI is having a significant impact on jobs so far, but that may be because the revolution is in its early stages, the OECD said.

    Jobs with the highest risk of being automated make up 27% of the labour force on average in OECD countries, with eastern European countries most exposed, the Paris-based organisation said in its 2023 Employment Outlook.

    Jobs at highest risk were defined as those using more than 25 of the 100 skills and abilities that AI experts consider can be easily automated.

    Three out of five workers meanwhile fear that they could lose their job to AI over the next 10 years, the OECD found in a survey last year. The survey covered 5,300 workers in 2,000 firms spanning manufacturing and finance across seven OECD countries.

    The survey was carried out before the explosive emergence of generative AI like ChatGPT.

    Despite the anxiety over the advent of AI, two-thirds of workers already working with it said that automation had made their jobs less dangerous or tedious.

    “How AI will ultimately impact workers in the workplace and whether the benefits will outweigh the risks, will depend on the policy actions we take,” OECD Secretary General Mathias Cormann told a news conference.

    “Governments must help workers to prepare for the changes and benefit from the opportunities AI will bring about,” he continued.

    Minimum wages and collective bargaining could help ease the pressure that AI could put on wages, while governments and regulators need to ensure workers’ rights are not compromised, the OECD said.


    The post 27% of jobs at high risk from AI revolution, says OECD appeared first on Basic Income Today.

    This post was originally published on Basic Income Today.

  • Multinational consulting firm Bain and Company has acquired the consulting and managed services divisions of Brisbane-based artificial intelligence consultancy Max Kelsen for an undisclosed amount. Max Kelsen’s team of full stack machine learning engineers will join Bain’s Advanced Analytics Group to “help enterprises develop and operationalise high-impact AI and ML enabled use cases”, the company…

    The post Australian AI consultancy acquired by Bain & Co appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • A political action committee aligned with GOP presidential candidate Gov. Ron DeSantis (R-Florida) has used Artificial Intelligence in an attack ad against former President Donald Trump. A voice that sounds a lot like Trump’s (who is also running for president next year) is featured in the ad, which was produced by Never Back Down, a super PAC that supports DeSantis for president.

    Source

    This post was originally published on Latest – Truthout.

  • “It’s really important for people to understand what this bundle of ideologies is, because it’s become so hugely influential, and is shaping our world right now, and will continue to shape it for the foreseeable future,” says philosopher and historian Émile P. Torres. In this episode of “Movement Memos,” host Kelly Hayes and Torres discuss what activists should know about longtermism and TESCREAL.

    Source

    This post was originally published on Latest – Truthout.

  • Generative AI could add $115 billion to the Australian economy annually by 2030 based on a modelling scenario of ‘fast adoption’, according to a new report prepared by the Technology Council of Australia and Microsoft. Even under a ‘slow’ adoption scenario, generative AI could add $45 billion to the economy annually, while medium adoption rates…

    The post Generative AI could be $115bn to economy by 2030: Tech Council appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • By Barath Raghavan and Bruce Schneier

    See original post here.

    For four decades, Alaskans have opened their mailboxes to find checks waiting for them, their cut of the black gold beneath their feet. This is Alaska’s Permanent Fund, funded by the state’s oil revenues and paid to every Alaskan each year. We’re now in a different sort of resource rush, with companies peddling bits instead of oil: generative AI.

    Everyone is talking about these new AI technologies — like ChatGPT — and AI companies are touting their awesome power. But they aren’t talking about how that power comes from all of us. Without all of our writings and photos that AI companies are using to train their models, they would have nothing to sell. Big Tech companies are currently taking the work of the American people, without our knowledge and consent, without licensing it, and are pocketing the proceeds.

    You are owed profits for your data that powers today’s AI, and we have a way to make that happen. We call it the AI Dividend.

    Our proposal is simple, and harkens back to the Alaskan plan. When Big Tech companies produce output from generative AI that was trained on public data, they would pay a tiny licensing fee, by the word or pixel or relevant unit of data. Those fees would go into the AI Dividend fund. Every few months, the Commerce Department would send out the entirety of the fund, split equally, to every resident nationwide. That’s it.

    There’s no reason to complicate it further. Generative AI needs a wide variety of data, which means all of us are valuable — not just those of us who write professionally, or prolifically, or well. Figuring out who contributed to which words the AIs output would be both challenging and invasive, given that even the companies themselves don’t quite know how their models work. Paying the dividend to people in proportion to the words or images they create would just incentivize them to create endless drivel, or worse, use AI to create that drivel. The bottom line for Big Tech is that if their AI model was created using public data, they have to pay into the fund. If you’re an American, you get paid from the fund.

    Under this plan, hobbyists and American small businesses would be exempt from fees. Only Big Tech companies — those with substantial revenue — would be required to pay into the fund. And they would pay at the point of generative AI output, such as from ChatGPT, Bing, Bard, or their embedded use in third-party services via Application Programming Interfaces.

    Our proposal also includes a compulsory licensing plan. By agreeing to pay into this fund, AI companies will receive a license that allows them to use public data when training their AI. This won’t supersede normal copyright law, of course. If a model starts producing copyright material beyond fair use, that’s a separate issue.

    Using today’s numbers, here’s what it would look like. The licensing fee could be small, starting at $0.001 per word generated by AI. A similar type of fee would be applied to other categories of generative AI outputs, such as images. That’s not a lot, but it adds up. Since most of Big Tech has started integrating generative AI into products, these fees would mean an annual dividend payment of a couple hundred dollars per person.
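    The back-of-envelope arithmetic behind that “couple hundred dollars” figure can be sketched as follows (the per-word fee comes from the proposal above; the annual word count and the population figure are rough illustrative assumptions of mine, not numbers from the article):

```python
FEE_PER_WORD = 0.001          # dollars per AI-generated word, per the proposal
US_POPULATION = 333_000_000   # rough 2023 estimate (assumption)

def annual_dividend_per_person(words_generated_per_year: float) -> float:
    """Equal per-resident split of the licensing-fee fund for one year."""
    fund = words_generated_per_year * FEE_PER_WORD
    return fund / US_POPULATION
```

    If Big Tech’s generative AI products emitted on the order of 100 trillion words a year, the fund would collect roughly $100 billion, or about $300 per person — the right order of magnitude for the dividend described above.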

    The idea of paying you for your data isn’t new, and some companies have tried to do it themselves for users who opted in. And the idea of the public being repaid for use of their resources goes back to well before Alaska’s oil fund. But generative AI is different: It uses data from all of us whether we like it or not, it’s ubiquitous, and it’s potentially immensely valuable. It would cost Big Tech companies a fortune to create a synthetic equivalent to our data from scratch, and synthetic data would almost certainly result in worse output. They can’t create good AI without us.

    Our plan would apply to generative AI used in the U.S. It also only issues a dividend to Americans. Other countries can create their own versions, applying a similar fee to AI used within their borders. Just like an American company collects VAT for services sold in Europe, but not here, each country can independently manage their AI policy.

    Don’t get us wrong; this isn’t an attempt to strangle this nascent technology. Generative AI has interesting, valuable and possibly transformative uses, and this policy is aligned with that future. Even with the fees of the AI Dividend, generative AI will be cheap and will only get cheaper as technology improves. There are also risks — both every day and esoteric — posed by AI, and the government may need to develop policies to remedy any harms that arise.

    Our plan can’t make sure there are no downsides to the development of AI, but it would ensure that all Americans will share in the upsides — particularly since this new technology isn’t possible without our contribution.

    The post Artificial Intelligence Can’t Work Without Our Data. We Should All Be Paid For It. appeared first on Basic Income Today.

    This post was originally published on Basic Income Today.

  • Television and film actors are going on strike after a breakdown in negotiations between the SAG-AFTRA union and Hollywood studios. More than 160,000 members of the union are taking part in the first major actors’ strike since 1980. This also marks the first time since 1960 that actors and screenwriters have been on strike at the same time, with members of the Writers Guild of America on the…

    Source

    This post was originally published on Latest – Truthout.

  • Australia must significantly lift its investment in robotics-related artificial intelligence research to build out a sovereign robotics industry, according to 14 of the nation’s leading experts in the field. In a submission to the federal government’s robotics strategy consultation, the so-called Kingston AI Group has called for greater incentives and support for both the AI…

    The post Sovereign AI key to Australia’s robotics success: Researchers appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • On 11 July 2023 EFE reported that Vietnam had released Vietnamese-Australian activist Chau Van Kham, sentenced in 2019 to 12 years in prison for extremism over his ties to the Viet Tan pro-democratic party.

    Australian Prime Minister Anthony Albanese said he “very much welcomes the release of Chau,” in remarks Monday from Berlin, through Australian public broadcaster ABC.

    Chau’s lawyer Dan Nguyen said in a statement through Amnesty International Australia that the activist, who returned Monday night to Australia, is with his wife and two sons. He also thanked the efforts of the government, organizations and individuals who fought for his release.

    Chau was arrested in Ho Chi Minh City in January 2019 after being accused of entering the country with a false document and sentenced in an express trial to 12 years in prison for extremism charges 10 months later. See: https://humanrightsdefenders.blog/2020/06/08/chau-van-kham-australian-human-rights-defender-disappeared-inside-vietnams-prison-system/

    This was due to Chau, 73, being linked to pro-democratic group Viet Tan, considered an extremist entity in the country but a human rights organization in Australia.

    Deputy Australian Prime Minister Richard Marles said Chau was released on “humanitarian” reasons and “in the spirit of friendship which exists between Australia and Vietnam,” according to ABC.

    Chau is one of “more than 150 political activists in Vietnam who have been detained for peaceful acts in favor of freedom of expression,” Human Rights Watch Asia Human Rights Director Elaine Pearson said in a statement.

    Pearson named journalist Mai Phan Loi and activists Dang Dinh Bach and Hoang Thi Minh Hong among them and urged Australia to continue advocating for their release.

    The exact number of political prisoners in Vietnam is unknown, as numbers provided by different human rights organizations have discrepancies.

    While Human Rights Watch says the total exceeds 150, Amnesty International said there were 128 political prisoners in the country last year. Dissident organization Defend the Defenders raised the number to more than 250.

    This post was originally published on Hans Thoolen on Human Rights Defenders and their awards.

  • By Min Jeong Lee and Yuki Furukawa

    See original post here.

    Japan is laying the groundwork to become home to some of the world’s top companies in artificial intelligence, the country’s Economy Minister Yasutoshi Nishimura told a group of University of Tokyo students Tuesday.

    That includes supporting promising startups and big enterprises, as well as pushing forward discussions on universal basic income as AI makes more jobs obsolete, he said at a symposium.

    “People will have more time” when robots, drones, self-driving vehicles and other devices do more as AI evolves, he said.

    Japan would also need the capacity to drive development in AI-training processors, Nishimura said. That category is led by graphics chipmaker Nvidia Corp., which became the world’s most valuable chip company thanks to its early bet on AI. “I hope to create a company in Japan that surpasses Nvidia,” he said. 

    Prime Minister Fumio Kishida has stepped up support of the domestic semiconductor sector, betting that shifting geopolitical priorities will help Japan regain some of its long-lost leadership in chips. The country is preparing billions of dollars in subsidies as part of a push to triple domestic production of chips by 2030, while a government-backed fund is also working to shore up the country’s chip materials supply chain.

    The birthplace of Astro Boy has held public discussions on AI’s impact on society for longer than most. And while Japan is drafting guidelines for the use of generative AI this year, those regulations don’t have to slow down AI’s progress, Kishida said. “It’s not an all-or-nothing choice,” he said during the symposium.

    SoftBank Group Corp.’s billionaire founder Masayoshi Son, who also attended, was especially enthusiastic. Son last month said the Vision Fund, the world’s largest pool of tech capital, is hunting new investments after racking up billions of dollars of losses on its bets on AI.

    “We need to discuss what it means to be human, when we no longer are the most intelligent animated being on the planet,” he said. “This is the time for Japan to pour all its efforts into AI.”

    The post Japan’s Economy Minister Sees Universal Basic Income in Japan’s AI Future appeared first on Basic Income Today.

    This post was originally published on Basic Income Today.

  • Military Airlift 2023 is returning to London, UK on the 5th and 6th September. The largest forum for the Airlift community will cover everything Airlift related. Focusing on improving equipment, capability, strategy and tactics, top level speakers and decision-makers will present their solutions and experiences over a structured two-day conference. Interested parties can register at […]

    The post Exclusive report | Airlift in the 21st Century | Military Airlift 2023 appeared first on Asian Military Review.

    This post was originally published on Asian Military Review.

  • South Australian lawmakers will use a new parliamentary inquiry to examine whether the state is well placed to seize on the opportunities presented by artificial intelligence while mitigating any risks. The state Parliament has agreed to establish the inquiry in a bid to understand the current state of AI in South Australia, particularly the “economic,…

    The post SA lawmakers to examine AI framework appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • The Australian Research Council has banned grant assessors from using generative AI tools following claims that ChatGPT had been used to produce assessor reports. All peer reviewers are now prohibited from using the technology “as part of their assessment activities” because of breach of confidentiality concerns and the potential to “compromise the integrity of the…

    The post Generative AI banned from research grant assessments appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • Deloitte has been contracted for $1.86 million to advance the next stage of Transport NSW’s pothole detecting platform Asset AI, which is initially being rolled out to two local government areas. The Asset AI software platform is designed to collect and highlight data on the condition of road infrastructure. It is fed data captured by…

    The post Pothole detecting AI trials begin in NSW appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

    On 6 July 2023 Janika Spannagel in Open Global Rights published a study of great importance to the work for human rights defenders. The researcher states that “focusing only on defenders’ physical integrity risks undermining the very idea of supporting agents of human rights change” and that there is a need to rethink campaigns on human rights defenders.

    Spannagel’s work has featured in this blog before [see: https://humanrightsdefenders.blog/tag/janika-spannagel/], but this work questions more directly the core of HRD protection.

    Instead of summarising, I will provide extended quotes:

    The theory of change put forward by actors, including Front Line Defenders, International Service for Human Rights, and many others, claims that by protecting local human rights activists, international campaigns can support them in their work to advance human rights protection on the ground. This assumption appears plausible and aligns with prominent accounts in academic human rights literature, where domestic activists’ protection from repression is seen as a way to open spaces for them to challenge the regime and enact change.

    That said, empirical evidence from UN casework and the experience of Tunisian defenders shows that this promise has not been fulfilled when it comes to human rights defenders in authoritarian regimes, as I show in my recent book. There, I argue that, while international attention can have important protective benefits, it does little to support individual human rights defenders as agents of change in repressive contexts. [Emphasis added]

    The reason for this is that international casework on defenders, including urgent action–like campaigns or UN communications, maintain the traditional focus on physical integrity rights that has guided the long-standing casework on political imprisonment, torture, or enforced disappearances. In doing so, it overlooks the many administrative, discursive, and covert forms of repression that typically bypass international scrutiny more broadly but that often very effectively disrupt and thwart defenders’ work toward change.

    The analysis of over 12,000 individual cases of human rights activists taken up by the UN special rapporteur on human rights defenders between 2000 and 2016 reveals that, in almost three-quarters of them, at least one of the violations described fell within the category of physical integrity violations. Detention cases alone made up 56% of all cases raised during that period. In contrast, only 4% of the cases dealt exclusively with softer types of repression, such as travel bans, bureaucratic issues, job dismissals, surveillance, or defamation.

    This distribution far from represents the everyday experience of human rights defenders in authoritarian states—instead, it is reflective of a humanitarian instinct in human rights casework to privilege cases that are considered most severe. One could argue that UN communications, and perhaps attention-based campaigning more broadly, are inherently humanitarian, not transformative instruments. But one should ask: What, then, is the purpose of focusing on human rights defenders, as opposed to any victim of repression? [Emphasis added]

    The priority given to physical integrity violations has two important adverse consequences. First, we can see that the data profoundly shape our understanding of what human rights defenders are struggling with. For example, on the basis of such data a CIVICUS report claims that in order to repress civic space, states resort “most often” to detention of activists, attacks against journalists, and excessive use of force against protesters. The human rights community’s own focus on violent repression thus paradoxically misleads us to believe that this is where most attention is needed.

    Secondly, this focus reinforces a protection gap for violations that fall outside of the conventional notion of state repression as physically harmful and as undeniably politically motivated. Research on repression highlights that authoritarian states engage in repressive substitution, where they replace highly scrutinized coercive tactics—typically harder and overt types of repression—with softer and more covert measures. The case of Tunisia under Ben Ali aptly illustrates the strong impact of such tactics on defenders’ ability to carry out meaningful work.

    When analyzing the further development of cases taken up by the UN, I also found that, while some positive effects of the UN’s attention could be identified for most of them, many did not see an actual improvement relative to the reported violations over the course of the next year; where they did, it was mostly an easing of harder repression. Ultimately, there is a real risk that governments continue to use hard repression to increase their bargaining power and then pass off a release from prison as a costly concession, while in reality imposing softer but equally effective measures against the activist in question.

    With this problem in mind, what could be done differently? Casework that follows a transformative logic should not seek to maximize the reduction of physical harm—the humanitarian logic—but should define protection needs in terms of safeguarding a defender’s ability to do effective human rights work. 

    Those engaging in casework and campaigns on human rights defenders should actively revisit their priorities in terms of the violations they tend to address. Far too often, softer repression remains unreported, unnoticed, and not acted upon, which effectively creates a twilight zone in which authoritarian states can comfortably stifle opposition voices without risking much pushback. We owe it to the countless number of human rights activists around the world to ensure that the label of “human rights defender” does not merely serve to laud their heroism and excite donors and the media, but that it is dedicated to fulfilling its promise of human rights change.

    https://www.openglobalrights.org/rethinking-campaigns-human-rights-defenders/index.cfm

    For the more traditional approach, see e.g. https://www.ipsnews.net/2023/07/recognising-human-rights-defenders-remarkable-agents-positive-change/

  • By The Editorial Board

    See original post here.

India’s single most vexatious economic problem is the lack of adequate employment and earning opportunities for its large and growing labour force. The nation is one of the youngest countries of the world in terms of its demographic profile. This ‘demographic dividend’ is considered to be a blessing. It could, it is argued, help India become the world’s labour force. However, for the dividend to yield returns, it would require, firstly, a well-educated and well-trained workforce. The demographic dividend’s fruition is also predicated upon the availability of enough job opportunities across the world. Neither possibility seems obvious today. Although India has a large constituency of graduates, an overwhelming number of them are considered unemployable. There is also a very large number that is unskilled and functionally literate. Employment opportunities abroad are limited to a few low-skilled jobs, openings in the technology sector, and some in academia. Moreover, low-skilled labour migrating to other parts of the world is subject to scrutiny since few countries look at migration favourably. Further, the scale and the speed of emerging Artificial Intelligence-based technologies do not augur well for the traditional job market, save for workers who are highly educated and skilled in cutting-edge technologies. On joining the dots, the future of jobs across all levels of skill is not bright.

Given this context, the remark made by the chief economic advisor, that India does not need a universal basic income because economic growth would suffice to meet aspirations, must be taken with the proverbial pinch of salt. India’s economic structure hides unemployment behind part-time, informal, job-sharing livelihoods. These ‘workers’ are surplus in the sense that total production would not decline even if they were to be removed from their activity. Depending solely on economic growth to guarantee basic amenities for all is therefore unwise. Some alternative strategy has to be thought of to ensure a decent living for all. The concept of a universal basic income, or some variant of it, cannot thus be ruled out. The idea is to be able to provide for collective survival and sustenance. A template of UBI has two principal challenges: the identification of beneficiaries and the avoidance of fiscal stress. The digital enumeration of personal information can solve the first problem. Additional taxes on the super-rich can resolve the latter. Reality demands that the possibility of UBI be debated and kept alive notwithstanding its political unpalatability.

    The post Opinion: Universal basic income is a good idea for India’s workforce appeared first on Basic Income Today.

    This post was originally published on Basic Income Today.

  • South Australia has become the first state in the country to trial a generative artificial intelligence app with high school students to teach them about the safe use of the world-changing technology. Students at eight public high schools will use the chatbot, which has been designed in partnership with Microsoft, over an eight-week period after…

    The post SA public schools trial ‘safe’ AI-powered chatbot in nation first appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • Do scholars and activists who support Palestinian rights sometimes unintentionally promote the Israeli arms industry? The Israeli military hype machine famously uses the occupation as a “laboratory” or as a “showcase” for its newly developed weapons, but this creates a dilemma for activists who oppose Israeli arms exports. Scholars and activists are morally obligated to highlight the crimes…


  • The Human Rights Council should urgently address the deterioration of the human rights situation in Tunisia, four human rights organizations said on 27 June 2023 as the 53rd Council’s session is underway.

In a letter sent to UN Member States’ Representatives on 5 June 2023, the four undersigned organizations warned of the rapidly worsening situation in Tunisia, and urged States to seize the opportunity of the ongoing Human Rights Council’s session to address it. The organizations called on the Council and Member States to press the Tunisian authorities to comply with their obligations under international human rights law, particularly those guaranteeing the rights to fair trial, freedom of expression, freedom of peaceful assembly and association, and non-discrimination.

The Human Rights Council should urge Tunisia to end the ongoing crackdown on peaceful dissent and freedom of expression, and drop charges against, and release, all individuals being detained and prosecuted solely on the basis of their peaceful political activities and the exercise of their human rights. The Council should also call on Tunisia to conduct a prompt, thorough, independent, impartial and transparent investigation into a wave of anti-Black violence – including assaults and summary evictions – against Black African foreign nationals, including migrants, asylum seekers and refugees, and bring to justice anyone reasonably suspected to be responsible, and provide victims with access to justice and effective remedies.

    Over the past two years, Tunisia has witnessed a significant rollback on human rights. Judicial independence guarantees have been dismantled and individual judges and prosecutors have been subjected to arbitrary dismissal, politicized criminal prosecutions and increased interference by the executive. Lawyers are being prosecuted for the discharge of their professional duties and exercise of their right to freedom of expression.

‘The Tunisian authorities’ interference in the judiciary and attacks on lawyers have greatly undermined the right to fair trial and public trust in the integrity of the justice system. The authorities must ensure that the courts are not weaponized to crush dissent and free expression,’ said International Commission of Jurists’ MENA director Said Benarbia.

Under the guise of ‘fighting offences related to information and communication systems’, punishable by up to 10 years’ imprisonment and a hefty fine according to Decree Law 54, at least 13 individuals, including journalists, political opponents, lawyers, human rights defenders and activists, have been subject to police or judicial investigations and are facing possible prosecutions.

    ‘With Tunisia facing political uncertainty and economic crisis, it’s more important than ever that Tunisians be free to debate their country’s future without fear of reprisal. The authorities should strive to allow the effective enjoyment of the right to freedom of expression of everyone; instead, they are attacking it,’ said Rawya Rageh, Amnesty International’s acting deputy director for the Middle East and North Africa.

    Last week, the UN High Commissioner for Human Rights called on the Tunisian authorities to stop restricting media freedoms and criminalizing independent journalism. In a statement published on 23 June, Volker Türk expressed deep concern at the increasing restrictions on the right to freedom of expression and press freedom in Tunisia, noting that vague legislation is being used to criminalize independent journalism and stifle criticism of the authorities. ‘It is troubling to see Tunisia, a country that once held so much hope, regressing and losing the human rights gains of the last decade,’ said the High Commissioner.

Since February 2023, a wave of arrests targeted political opponents and perceived critics of Tunisia’s President, Kais Saied. In the absence of credible evidence of any offences, judges are investigating at least 48 people, including dissidents, opposition figures, and lawyers, for allegedly conspiring against the State or threatening State security, among other charges. At least 17 of them are being investigated under Tunisia’s 2015 counter-terrorism law.

    ‘By jailing political leaders and banning opposition meetings, the authorities are dangerously trampling on the fundamental rights that underpin a vibrant democracy. The democratic backsliding and the human rights violations, which are unprecedented since the 2011 revolution, require urgent attention from the Human Rights Council and Member States,’ said Salsabil Chellali, Tunisia director at Human Rights Watch.

    Signatories:

    1. International Commission of Jurists (ICJ)
    2. International Service for Human Rights (ISHR)
    3. Amnesty International
    4. Human Rights Watch (HRW)


    This post was originally published on Hans Thoolen on Human Rights Defenders and their awards.

  • A New South Wales parliamentary inquiry has been created to examine the opportunities and risks of artificial intelligence, with senior members of the state government and opposition increasingly concerned by the technology. The wide-ranging inquiry, which will be conducted by the Premier and Finance Committee, was approved by NSW Parliament on Wednesday following broad support…

    The post NSW Parliament launches inquiry into AI appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • By Ian Rose

    See original post here.

    Until recently Dean Meadowcroft was a copywriter in a small marketing department.

    His duties included writing press releases, social media posts and other content for his company.

    But then, late last year, his firm introduced an Artificial Intelligence (AI) system.

    “At the time the idea was that it would be working alongside human lead copywriters to help speed up the process, essentially streamline things a little bit more,” he says.

    Mr Meadowcroft was not particularly impressed with the AI’s work.

    “It just kind of made everybody sound middle of the road, on the fence, and exactly the same, and therefore nobody really stands out.”

    The content also had to be checked by human staff to make sure it had not been lifted from anywhere else.

    But the AI was fast. What might take a human copywriter between 60 and 90 minutes to write, the AI could do in 10 minutes or less.

    Around four months after the AI was introduced, Mr Meadowcroft’s four-strong team was laid off.

    Mr Meadowcroft can’t be certain, but he’s pretty sure the AI replaced them.

“I did laugh off the idea of AI replacing writers, or affecting my job, until it did,” he said.

    The latest wave of AI hit late last year when OpenAI launched ChatGPT.

    Backed by Microsoft, ChatGPT can give human-like responses to questions and can, in minutes, generate essays, speeches, even recipes.

    Other tech giants are scrambling to launch their own systems – Google launched Bard in March.

    While not perfect, such systems are trained on the ocean of data available on the internet – an amount of information impossible for even a team of humans to digest.

    So that’s left many wondering which jobs might be at risk.

    Earlier this year, a report from Goldman Sachs said that AI could potentially replace the equivalent of 300 million full-time jobs.

Any job losses would not fall equally across the economy. According to the report, 46% of tasks in administrative professions and 44% in legal professions could be automated, but only 6% in construction and 4% in maintenance.

    The report also points out that the introduction of AI could boost productivity and growth and might create new jobs.

    There is some evidence of that already.

    This month IKEA said that, since 2021, it has retrained 8,500 staff who worked in its call centres as design advisers.

    The furniture giant says that 47% of customer calls are now handled by an AI called Billie.

    While IKEA does not see any job losses resulting from its use of AI, such developments are making many people worried.

    A recent survey by Boston Consulting Group (BCG), which polled 12,000 workers from around the world, found that a third were worried that they would be replaced at work by AI, with frontline staff more concerned than managers.

    Jessica Apotheker from BCG says that’s partly due to fear of the unknown.

    “When you look at leaders and managers, we have more than 80% of them that use AI at least on a weekly basis. When you look at frontline staff, that number drops to 20% so with the lack of familiarity with the tech comes much more anxiety and concern on the outcomes for them.”

    But perhaps there is good reason to be anxious.

    For three months last year, Alejandro Graue had been doing voiceover work for a popular YouTube channel.

It seemed to be a promising line of work: a whole YouTube channel in English had to be re-voiced in Spanish.

    Mr Graue went on holiday late last year confident that there would be work when he returned.

    “I was expecting to have that money to live with – I have two daughters, so I need the money,” he says.

    But to his surprise, before he returned to work, the YouTube channel uploaded a new video in Spanish – one he had not worked on.

    “When I clicked on it, what I heard was not my voice, but an AI generated voice – a very badly synced voiceover. It was terrible. And I was like, What is this? Is this like going to be my new partner in crime like the channel? Or is this going to replace me?” he says.

    A phone call to the studio he worked for confirmed the worst. The client wanted to experiment with AI because it was cheaper and faster.

    That experiment turned out to be a failure. Viewers complained about the quality of the voiceover and eventually the channel took down the videos that featured the AI-generated voice.

    But Mr Graue did not find that very comforting. He thinks the technology will only improve and wonders where that will leave voiceover artists like him.

    “If this starts to happen in every job that I have, what should I do? Should I buy a farm? I don’t know. What other job could I look for that is not going to be replaced as well in the future? It’s very complicated,” he says.

If AI is not coming for your job, then it is likely you will have to start working with it in some way.

    After a few months of freelance work, former copywriter Dean Meadowcroft took a new direction.

    He now works for an employee assistance provider, which gives wellbeing and mental health advice to staff. Working with AI is now part of his job.

    “I think that is where the future is for AI, giving quick access to human-led content, as opposed to completely eliminating that human aspect,” he says.

    The post The workers already replaced by artificial intelligence appeared first on Basic Income Today.

    This post was originally published on Basic Income Today.

  • We speak with Dr. Joy Buolamwini, founder of the Algorithmic Justice League, who met this week with President Biden in a closed-door discussion with other artificial intelligence experts and critics about the need to explore the promise and risk of AI. The computer scientist and coding expert has long raised alarm about how AI and algorithms are enabling racist and sexist bias. We discuss examples…


    This post was originally published on Latest – Truthout.

  • Our nation is experiencing its lowest productivity growth in 60 years, according to the Committee for the Economic Development of Australia. And this downturn is reflected across most advanced economies worldwide. So it’s not surprising some see the rise of artificial intelligence (AI) as productivity’s saviour. Media articles herald a new era of high productivity…

    The post Yes, AI could help us fix productivity – but not everything appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • Australian organisations are being urged to tighten their use of artificial intelligence with the application of ethics principles ahead of potentially sweeping changes to regulations and standards. The CSIRO’s National Artificial Intelligence Centre on Thursday launched a how-to guide for bridging ethics principles and practice after worrying signs businesses were still coming to grips with…

    The post Businesses offered crash course on AI ethics appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • The New South Wales government guide for using artificial intelligence is being refreshed as more agencies implement the technology and user-friendly generative tools obscure the risks from public servants. It comes as the state’s AI Review Committee passes a dozen assessments of high-value or high-risk AI projects in its first year, all of which needed…

    The post NSW govt to refresh AI framework appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • By Pat Kane

    See original post here.

    WHEN speaking on BBC Scotland’s excellent Debate Night show on Wednesday, I was lobbed a perfectly formed audience question. It was so appropriate I felt like Robert Redford in The Natural, ready to smash the ball into the floodlights.

    I kinda missed, expectedly. But here was the question: Can Scotland influence the opportunities and threats posed by artificial intelligence (AI) – or are we essentially just bystanders? And was I pessimistic or optimistic about AI?

    Here, a bit tidied up, is my reply: “I’ve been dreaming of this moment most of my adult life. Musicians (like me) love technology – we make it do beautiful things, we’re not alienated by it. And the artistic life is upon us all, ladies and gentlemen – if we use these machines properly.

    “What’s interesting about AI is that it takes what the anarchist thinker David Graeber called ‘bullshit’ jobs – routinised jobs that seem to have no purpose or meaning – and puts them most in danger.

    “And humans at either end of the spectrum – whether you want to be purely expressive on one extreme, or you want to be hands-on with care and craft at the other – these are the people who will survive this moment. It’s potentially a time of great liberation.

    “What can Scotland do in relation to AI? Well, with the powers of independence – which means jurisdiction over your institutions, your labour and welfare laws – we can look at these technologies and say: How long should a working week be?

    “Rather than the profits going to the corporations that are bringing these technologies into our lives, we can say: Why couldn’t some of these revenues fund universal basic services, universal basic income? This is a time of great liberation – if the citizens can rise up and seize it.

    “And if Scotland gets the nation-state capacity to build the institutions that ride this future – and choose to be part of, say, the EU’s fantastically detailed regulation of AI – we’ll be in the driving seat.”

    There endeth the sermon. I reproduce my response here because, out of the usual panic before the television camera, it actually comes from a very authentic place. Which doesn’t mean to say I can’t see the flaws and gaps in my argumentation.

    It’s true to say I have been waiting for superintelligent AI to come along all my life. However, I’m not deaf to the charge that this might be a God-replacement fantasy.

    My rejection of the Catholic religion during my Higher exams (it’s my efforts that will get me there, not His!) came at the same time as I was lost in the sci-fi classics – Isaac Asimov, Robert Heinlein, Ray Bradbury, Philip K Dick. As well as the rougher narratives in British sci-fi comics such as 2000 AD, Warrior, and Crisis.

    Like the aliens in these stories, I was actively looking for robots and AI to present a profound challenge to human nature. A nature which, as manifested in the reactive, cut-down-the-tall-poppy environment of school (and Coatbridge) that I was growing up in, I wasn’t that impressed with.

    I am currently (and happily) using the AI chatbot GPT-4 nearly every other day, and I think this early conditioning of mine explains why.

    Here I sit, with a calm, polite, not shouty, not abusive conversation partner. Unfazed by almost any cultural or intellectual reference I bring to it, GPT-4 unfailingly reacts in a steady and constructive manner. What a relief, just as I always dreamed someone/something would.

    The musicians with technology point, I’d reinforce. Drummers have been faced with drum machines; physical studios with virtual versions of the great recording rooms; singers with their vocals digitally nudged into perfection.

    Yet “playing around” with tech, subjecting it to human whims and experiences, means that musicians come up with hybrid forms of smart tech.

    For example, they’ll use many shades of “generative” tech – from complete simulation to none at all – in order to generate an unprecedented thrill in themselves and their listeners.

    UNTIL we get lovelorn or rebellious AI, with insatiable appetites and sensual drives, I doubt the humans will be kicked out of music anytime soon. Indeed, we should look to artists as an example of how we can best draw the line – and it’s a wavy, fuzzy one – between the human and the machine.

    And “bullshit jobs”? Chatbots passing high-end legal and medical exams with honours is spectacular. But think for a minute what that means for you if your job is based on having ingested lots of bureaucratic and organisational precedent, which you then assess and regurgitate on request.

    At the very least, the possibility of nine out of 10 of those jobs disappearing – with the remainder in a supervisory role – is real, according to several assessments.

    The AI godfather Stuart Russell, who I engaged with at his Reith Lecture in Edinburgh a few years ago, is behind my point about the consequences of routine organisational labour being automated. Freely creative minds, and hands-on care and craft, will both be given new impetus, Russell predicted.

    On the question of the political economy of AI – the institutions and regulations that will transition humans into a future not defined by the work ethic – I do think we have the best chance of building those under conditions of independence.

    Professor Shannon Vallor, director of the Centre for Technomoral Futures at the University of Edinburgh, gave me a quote on Scotland’s leadership in this area.

    “The Scottish Government is in a particularly strong position to model this pragmatic, grounded approach; it pioneered an AI strategy in 2021 focused on trustworthy, ethical and inclusive AI innovation that serves the wider public interest, rather than AI for AI’s sake”, she writes.

    “We need to bring AI and political leaders back to earth to craft evidence-based and practical AI governance strategies that keep our interests at the centre.”

    “So I think as the debate unfolds about how the UK, EU, USA and other governance bodies globally should respond to AI, Scotland should not only move forward with implementing its own strategy, which is very well suited for this moment, but seek to shape the debate internationally about AI governance priorities.”

    I’d also recommend watching the Trustworthy, Ethical and Inclusive Artificial Intelligence MSP debate on the Scottish Parliament website.

    If you want the opposite of Sam Altman and Elon Musk touring the conference halls – whipping up fears in order to present themselves as the lead assuagers of those fears – this debate was it.

    But the calm intelligence on display in the Holyrood chamber models, for me, how I want these AIs to be in my own life. I want them as amplifiers of my strengths and visions.

    As the philosopher Roberto Mangabeira Unger puts it: “These machines do what’s repeatable, so that I can do what humans do – the unrepeatable.”

    The problem is how to instil a sense of agency in people – a belief that such an advanced human civilisation is really possible, by means of the ethical use and arrangement of this tech.

    Thus my gesticulations, and wild metaphors, on a Wednesday night telly debate.

    Sorry/not sorry.

    The post AI presents us with an opportunity for liberation if we can take it appeared first on Basic Income Today.

    This post was originally published on Basic Income Today.

  • As the European Union moves to legislate the world’s first Artificial Intelligence Act, Industry and Science minister Ed Husic has stressed the need for Australia’s regulatory regime to be “fit for Australian purpose”. Mr Husic spoke to InnovationAus.com following a meeting with OpenAI founder and chief executive Sam Altman, who is in Australia for talks…

    The post Husic wants AI regulation ‘fit for Australian purpose’ appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.

  • World-first artificial intelligence laws have cleared a major hurdle in Europe, with lawmakers agreeing to draft rules that could serve as a model for other countries grappling with the rapid rise of generative AI. As consultations on similar regulations get underway in Australia, the European Parliament on Wednesday night agreed to the text of the…

    The post Landmark AI rules take major step forward in EU appeared first on InnovationAus.com.

    This post was originally published on InnovationAus.com.