This week, the Australian Government provided interim guidance to Government agencies on the use of generative AI platforms like ChatGPT.
Published on the Digital Transformation Agency’s website, the interim advice was developed jointly by the Digital Transformation Agency (DTA) and the Department of Industry, Science and Resources (DISR).
In November 2022, OpenAI released ChatGPT, which set the world on fire with a consumer-facing chat service that gave genuinely useful answers to common questions. This was made possible by a large language model (LLM) with billions of parameters, trained on vast amounts of text drawn from across the public web.
Since then, Google has released its competitor Bard (March 2023), Midjourney and Adobe have both launched text-to-image generation services, and countless startups have emerged, all seeking funding to be the next big thing in AI.
These services are highly capable, and they aren’t confined to our personal lives: many businesses and Government agencies are exploring ways to leverage them, often in pursuit of greater productivity.
As AI develops (and it is developing fast), it’s important to understand the guardrails within which we can use it safely and avoid negative outcomes.
The Australian Government is keen to be seen as supportive of the new technology and says it is committed to fostering a transparent and innovative culture in the public service, while effectively managing risk through the assessment and governance of all emerging technologies.
Whole-of-government and agency-specific policies provide guidance on the appropriate, safe and effective use of technology tools, including AI. However, given how easy it is to access and use publicly available generative AI tools like ChatGPT, the DTA, in collaboration with DISR, has developed advice that it recommends government agencies use as the basis for guidance to their staff.
In addition to the interim guidance for staff, we recommend agencies:
- implement an enrolment mechanism to register and approve staff user accounts for access to generative AI platforms. This should include appropriate approval processes through the Chief Information Security Officer (CISO) and/or Chief Information Officer (CIO).
- establish an avenue for staff to report any exceptions made to adhering to the guidance through your CISO/CIO. This should be reported periodically to the DTA by emailing email@example.com.
- seek to move to commercial arrangements for generative AI solutions as soon as it is possible to do so.
The Department of Industry, Science and Resources has also released a discussion paper as part of a public consultation on the safe and responsible use of AI. The discussion paper focuses on governance mechanisms to ensure AI is developed and used safely and responsibly in Australia.
Once the consultation closes, feedback will be used to inform appropriate regulatory and policy responses. These will build on the government’s multimillion-dollar investment in responsible AI through the 2023–24 Budget.
This interim guidance on generative AI is intended purely to guide staff within APS agencies. At this stage, it does not replace the generative AI policies developed by individual agencies; rather, it is intended to supplement those policies while further work is done to develop a whole-of-government position.
The interim guidance is available on the Australian Government Architecture website.