The Department of Home Affairs has banned conversational AI systems such as ChatGPT and Bard, stopped staff from using TikTok on government devices, and put a block on the use of drones and security camera equipment from companies based in China. Its secretary reckons it’s a decision the rest of government should adopt.

Use of OpenAI’s ChatGPT and Google’s Bard is currently blocked for Home Affairs staff. Parts of the department can submit a business case to access the capability, but it has to be a bloody good one.

“There is certainly some value in exploring the capabilities as a tool for experimentation and learning and in looking at the utility for purposes of innovation and the like,” Home Affairs chief operating officer Justine Saunders said.

“But it’s not to be used for the purposes of making decisions, and it’s very critical that you don’t incorporate, in the questions that you are asking ChatGPT, any information relating to the department.”

Not to downplay the potentially disastrous consequences of letting an algorithm make government decisions, but the biggest risk here is to national security – Home Affairs holds a lot of sensitive data within its walls, and we’ve all seen reports of companies having their secret sauce leaked because a staff member thought it appropriate to ask an AI to do their job for them.

“Departments and agencies have been using artificial intelligence for some years now in risk algorithms. But managing something corporately, where you might have a proprietary engagement … and where you know where the data is stored and what limitations are placed on that data is one thing,” Home Affairs secretary Mike Pezzullo said.

And while he didn’t exactly finish that train of thought, he continued with what we can only imagine is the ‘other thing’ – that he’s not willing to use a tool when he doesn’t know where the information will end up.

“I’ve voiced my own views, corporately, across the public service. I don’t think individual officers, who won’t necessarily be attuned to those risks or have the capacity to engage with those applications, should be in a position where they say: ‘Gosh! This’ll help me do my work quicker. I’m going to download ChatGPT’—or, indeed, any other application—’so I can do my work more efficiently, or I can get out earlier’,” he added.

In fact, it’s a position Pezzullo reckons the whole of government should adopt. But there is currently no Commonwealth policy on the use of generative AI technologies such as ChatGPT – mostly because the department, and others within the Australian government, are already using AI and algorithms in other forms.

“If the department arrives at a position, preferably as part of a whole-of-government position, that we can put safeguards in place, either through a proprietary agreement or otherwise, at the corporate level then, it’s no different from the risk algorithms and risk engines that we currently employ for our officers to do their jobs,” Pezzullo explained.

“What I’m saying is that I don’t want a permissive situation where an officer can individually decide, without any safeguards, ‘I’m going to use this technology because it will make my day go faster.’

“When we purchase AI on our corporate systems through proprietary arrangements, we can control that. I’m concerned about open-source technologies of this nature where you can’t control that.”

This conversation all went down during Senate Estimates earlier this week, and it actually started with discussion of security cameras and drones made by China-based companies. The Pentagon has blacklisted drone-maker DJI from supplying its drones to the US military, and Saunders confirmed that Australia has “suspended the use of that capability”, including within the Australian Border Force. Security cameras from such companies have also been removed from government property.