The procurement process is a key point of weakness in the government’s development and use of emerging technologies, and human rights need to be better considered in this process, according to Human Rights Commissioner Ed Santow.
After three years of research and work, the Australian Human Rights Commission released its final report on human rights and technology last week, with nearly 40 recommendations for government to ensure emerging technologies are fair and protect human rights.
Several of the recommendations concerned government procurement, which Mr Santow said is a significant vulnerability at the moment.
“The government needs to be able to ask the right questions in the procurement process. That is a point of real vulnerability. If you don’t get that right you can end up with an AI system that causes real harm,” Mr Santow told InnovationAus.
“If you do get that right, you’re able to make sure that the process of procurement asks the right questions and has the right protections, and then you’ve got a really robust system that the government can be confident in.”
The Commission called on the government to instruct the Department of Finance and the Digital Transformation Agency to amend the current procurement laws, policies and guidelines to ensure human rights are protected in the design and development of new technology.
“It is increasingly common to use government procurement processes as a lever to influence behaviour to achieve other policy outcomes. It is vital that the government procures AI-informed decision-making systems that are safe and protect human rights,” the report said.
“The Australian government generally works with, and relies on, the private sector to develop AI-informed decision-making systems. It is well recognised, therefore, that government procurement should focus on ensuring these systems are safe and comply with human rights.”
This should start with a review of the current procurement rules and policies to ensure they reflect a human rights-based approach to new technologies. Protections should then be included for when the government is looking to procure an AI-informed decision-making system, with a focus on the tool being transparent, explainable and accountable.
The Human Rights Commission pointed to the UK government’s Office for Artificial Intelligence and its guidelines for AI procurement, released last year, as an example of a way forward for Australia.
These guidelines aim to ensure that the risks associated with the technology are identified and managed in the early stages of procurement, and that the explainability and interpretability of the algorithms are included as design criteria.
There is also an important need to improve knowledge within the public sector on these new technologies and the risks associated with them, Mr Santow said.
“You need to understand at a high level what the strategic risks and opportunities are so you can make a good decision about where it is safe and effective to use AI in government agencies or in a company,” he said.
“The government needs to understand in more detail in the procurement process where the weak points are so they can conduct the process well and address those risks.”
This doesn’t require every public servant to know about AI in-depth, but a general upskilling around AI is needed, he said.
“You don’t need to be able to pull apart a car and build it from scratch, but you do need to know that if you push this lever then it goes forward so you can operate it safely. That level of upskilling is important,” Mr Santow said.
“It’s not about making everybody a data scientist or AI expert, it’s to enable people to operate effectively in a world that is increasingly operating using AI.”
The Human Rights Commission also recommended that the Digital Transformation Agency’s Digital Sourcing Framework for ICT procurement be amended to include specific references to human rights protections, along with a requirement for any vendor of an AI-informed decision-making system to complete a human rights impact assessment.
Further guidance should also be provided to help government decision-makers determine whether an AI-informed decision-making system will support compliance with the legal measures guaranteeing a right to a remedy, and to ensure compliance with the relevant Australian and international standards, the report said.
This post was originally published on InnovationAus.