Today the Australian Government has released its interim response to the Safe and Responsible AI in Australia consultation.
While AI has been an industry segment for a long time, the landscape changed in 2023 with the release of large language models as user-facing services like ChatGPT from OpenAI, Google Bard, Microsoft Copilot and many others. With them came the arrival of generative AI, which allows users to create and edit media from written prompts, often accelerating workflows.
After industry consultation between June and August last year, the Government is now considering mandatory guardrails for AI development and deployment in high-risk settings, through changes to existing laws or the creation of new AI-specific laws.
The Government has published 510 submissions from entities including the AFP, Adobe, Canva, Commonwealth Bank, NAB, Telstra, Optus, OpenAI, Meta, Red Cross, Atlassian, Microsoft, Google, Cisco, Woolworths, Australia Post and more.
As part of the Government’s response, it outlines that work is being done across government to address issues raised during consultation on regulatory and policy frameworks. The department is working with the agencies leading this work to ensure that views in submissions can inform these processes.
Developing new laws that will provide the Australian Communications and Media Authority with powers to combat online misinformation and disinformation
Given the vast majority of AI-powered services are created in countries outside Australia, it seems unlikely that legislation here would do much to resolve this issue and may lead to some services not being offered to Australian customers if new laws are poorly crafted.
The reason our social media platforms have problems addressing misinformation is that detecting it is not easy and often subjective. It’s ambitious to imagine the Government could do a better job of this than multi-billion dollar private companies.
An independent statutory review of the Online Safety Act 2021 to ensure that the legislative framework remains responsive to online harms
Reviewing the Online Safety Act is fine in principle; improve it where we can.
Working with the state and territory governments, industry, and the research community to develop a regulatory framework for automated vehicles in Australia, including interactions with work health and safety laws
Autonomous vehicles are already delivering paid services internationally. Unfortunately, there are no players in the Australian market. As with crash testing, we should define tests that autonomous systems must pass, and/or require data showing the technology is capable of reducing accidents and our road toll.
There are many different technology approaches to solving this problem and it’ll be important not to artificially disadvantage companies that leverage AI. Fears around the use of AI are often fueled by a lack of understanding of the technology, rather than evidence that AI systems used for robotics, drones, and autonomous vehicles are inherently unsafe.
Given Australia’s road toll is not falling despite all existing techniques (fines, advertising etc.), we should act fast to enable this technology to save lives. If you are in any doubt about the standard these systems have to beat, take a look at videos of poor human behaviour on our roads (Dashcam Owners Australia).
What is not clear is why work health and safety laws are being discussed in the context of autonomous vehicles; this sounds a lot like the taxi industry getting ready to protest a future where taxi drivers become irrelevant and the cost of transport is slashed dramatically.
Ongoing research and consultation by the Attorney-General’s Department and IP Australia, including through the AI Working Group of the IP Policy Group, on the implications of AI on copyright and broader IP law
Intellectual property laws will certainly be challenged by large language models and generative AI. These models are trained on vast swathes of the public web (trillions of data points), building a knowledge base that far exceeds any human’s, and can then answer questions and generate new content based on what they learned online.
One option may be to require AI training to adhere to an AI.txt file, similar to the robots.txt placed on websites by developers/creators. This would enable site owners to declare whether they want to allow their content to be leveraged for training. Again, this is problematic without international laws, given these models and the compute behind them are so large and so expensive that they are created by the biggest tech companies in the world, none of which are owned and operated out of Australia.
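To make the idea concrete, here is a minimal sketch of how an AI crawler might honour such a file. Note that no AI.txt standard exists today; the file format and the directives below are assumptions for illustration, borrowed directly from the robots.txt convention (User-agent / Disallow prefix rules).

```python
# Hypothetical sketch of an "AI.txt" opt-out check, mirroring robots.txt.
# The format is an assumption: no such standard exists at time of writing.

def parse_ai_txt(text: str) -> dict:
    """Parse a robots.txt-style file into {agent: [disallowed path prefixes]}."""
    rules, agent = {}, None
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line or ":" not in line:
            continue
        key, value = (part.strip() for part in line.split(":", 1))
        if key.lower() == "user-agent":
            agent = value
            rules.setdefault(agent, [])
        elif key.lower() == "disallow" and agent is not None:
            rules[agent].append(value)
    return rules

def training_allowed(rules: dict, agent: str, path: str) -> bool:
    """A path is off-limits if a rule for this agent (or '*') disallows it."""
    for scope in (agent, "*"):
        for prefix in rules.get(scope, []):
            if prefix and path.startswith(prefix):
                return False
    return True

example = """
# Keep all AI trainers out of the archive; block one crawler entirely.
User-agent: *
Disallow: /archive/

User-agent: ExampleAICrawler
Disallow: /
"""
rules = parse_ai_txt(example)
print(training_allowed(rules, "SomeBot", "/blog/post"))           # True
print(training_allowed(rules, "SomeBot", "/archive/2023"))        # False
print(training_allowed(rules, "ExampleAICrawler", "/blog/post"))  # False
```

The hard part is not the parsing, of course; it is that compliance would be voluntary unless backed by internationally coordinated law, which is exactly the problem described above.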
Implementing the privacy law reforms
It is not clear how privacy law reforms would change the AI industry for the better. Commercial services already restrict what you can achieve with them: if you ask ChatGPT to write an exploit to extract data from a service, it will refuse to answer.
There are unscrupulous models that don’t have these safety mechanisms built in, but these are unlikely to pay attention to Australian law.
Strengthening Australia’s competition and consumer laws to address issues posed by digital platforms
The details will be important here. There will be plenty of advocates for simply swinging a massive ban hammer and attempting to technically prevent AI services, rather than looking at the opportunity they afford. There will be winners and losers in this AI race, and I’d hate to see the Government effectively promise it can prevent these issues.
Agreeing an Australian Framework for Generative AI in Schools by education ministers to guide the responsible and ethical use of generative AI tools in ways that benefit students, schools and society while protecting privacy, security and safety
Love it or hate it, AI is here, and while the industry exploded in 2023, this is just the start. Teachers and schools scared by the technology will attempt to ban the use of AI, but that’s just lazy. Schools are meant to prepare our kids for the world, and by the time they graduate, students with expertise in AI will win in their careers (assuming their job of choice hasn’t been replaced).
Historically, assignments were checked by plagiarism algorithms to detect students taking shortcuts. Now, finding the most efficient route to success (great prompt engineering) should be encouraged. Yes, this makes a teacher’s job harder, but Pandora’s box is open; like it or not, we had best get used to it.
Ensuring the security of AI tools, such as using principles like security by design, through the government’s work on the Cyber Security Strategy.
This item is actually one of the most important initiatives. Security can always be improved, and leveraging AI to defend against AI-powered attacks will likely be a requirement, not an option, in the very near future.
The Government’s response to the Safe and Responsible AI in Australia discussion paper is available on the website of the Department of Industry, Science and Resources at: https://consult.industry.gov.au/supporting-responsible-ai