Introduction
Rapid advances in artificial intelligence (AI), along with public releases of AI products, have prompted governments, businesses and criminals to accelerate efforts to incorporate this new technology into their operations. These actors are looking to capitalise on advances in a particular type of AI known as deep learning neural networks, such as the models behind ChatGPT. Uptake of AI by businesses, governments and malicious actors will enhance existing cyber threats and enable new threat vectors. While AI will also be used to enhance cybersecurity, AI systems will themselves become targets of cyber threats.
This advice provides definitions for some of the most commonly encountered AI terms in cybersecurity and a brief typology of cyber threats that will arise from AI.
Hierarchy of artificial intelligence
- Artificial intelligence
Digital systems capable of performing tasks commonly thought of as requiring intelligence, such as writing meaningful sentences, solving equations, creating art, navigating obstacles and playing board games.
- Machine learning (ML)
An approach to AI in which a digital system improves its performance on a task over time through experience. This learning is achieved by using a training dataset to gradually optimise the values that produce the output.
- Neural networks
A common approach to ML consisting of layers of nodes, with weighted connections between them, through which data is passed to turn an input into an output. Neural networks are not the only approach to machine learning; other approaches include support vector machines, Bayesian networks and linear regression. A minimal illustration of this structure is sketched after this list.
- Deep learning
A common implementation of neural networks with a large number of 'hidden' layers between the input and output layers.
- Various architectures
Specific ways deep learning neural networks can be structured, such as convolutional neural networks and transformer networks.
- Specific models
An individual AI system that has been trained on a dataset to perform a specific task. For example, ChatGPT is an application based on the GPT-3/3.5/4 models, which are deep learning neural networks with a transformer architecture.
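To illustrate the idea of layers of nodes joined by weighted connections, the following minimal Python sketch (not part of this guidance) passes an input through a small network. The layer sizes, random weights and activation function are arbitrary assumptions chosen only for illustration.

```python
import numpy as np

# Illustrative only: a tiny neural network "forward pass". Data is passed through
# layers of nodes joined by weighted connections, turning an input into an output.
# Layer sizes, weights and the activation function are arbitrary choices.

rng = np.random.default_rng(0)

layer_sizes = [4, 8, 8, 1]  # input layer, two 'hidden' layers, output layer
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def relu(x):
    """A common activation function: pass positive values through, zero out the rest."""
    return np.maximum(0.0, x)

def forward(x):
    """Pass an input vector through every layer to produce the output."""
    for w, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ w + b)               # weighted connections plus activation
    return x @ weights[-1] + biases[-1]   # final layer produces the output

print(forward(np.array([0.2, -1.0, 0.5, 0.3])))
```

In a real model the weights would be set by training on a dataset rather than drawn at random; that process is what the 'Training' term below describes.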
Artificial intelligence terminology
Model
An AI system that has been trained to do a particular task. 'Foundation' models are intended to be used with further training to refine their performance. For example, ChatGPT is a text-generating application based on the GPT-3.5/4 foundation models.
Algorithm
The mathematical process that transforms input data into the output. For ML neural networks, the algorithm is developed by adjusting the weighting of the connections between nodes during training.
Training
The process by which the algorithm is adjusted in response to feedback as the AI uses a dataset to learn how to perform its task. Fine-tuning is a form of training that makes finer adjustments to models that have already been trained.
Parameter
Variables in the algorithm whose values are adjusted during training and determine how input is transformed into output. 'Hyperparameters' are those set by a human before training begins.
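To make these terms concrete, the sketch below (not part of this guidance) trains a very simple linear model with gradient descent: the weight and bias are the parameters adjusted during training, the update rule is the algorithm, and the learning rate and number of passes over the training dataset are hyperparameters set beforehand. The data and values are invented for illustration.

```python
import numpy as np

# Illustrative only: training a very simple model (y = weight * x + bias) on an
# invented dataset. The update rule is the algorithm; weight and bias are the
# parameters it adjusts; learning_rate and epochs are hyperparameters set by a human.

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
y = 3.0 * x + 1.0 + rng.normal(scale=0.1, size=100)  # training dataset

learning_rate = 0.1   # hyperparameter
epochs = 200          # hyperparameter

weight, bias = 0.0, 0.0  # parameters, adjusted during training

for _ in range(epochs):
    prediction = weight * x + bias   # the algorithm turns input into output
    error = prediction - y           # feedback: how wrong the output was
    weight -= learning_rate * np.mean(2.0 * error * x)
    bias -= learning_rate * np.mean(2.0 * error)

print(f"learned weight = {weight:.2f}, bias = {bias:.2f}")  # close to 3 and 1
```

A deep learning model works the same way in principle, but with millions or billions of parameters adjusted across many layers.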
Cyber threats related to artificial intelligence
Threats from AI
AI can be used to enhance existing cyber threats and to enable new threat vectors. For example, a malicious actor could use generative AI to enhance spearphishing material.
Threats to AI
As digital systems, AI models can themselves be the targets of cyberattacks. For example, an AI model used for malware classification could be disrupted to enable access by a malicious actor.
Accidental threats
AI can threaten cybersecurity inadvertently. For example, a bug in an AI model could reveal data entered by one user to another.
Threats via AI
AI models and associated datasets and files can be used as a vector for cyberattacks. For example, malicious code could be hidden in an open source AI model that users then download.
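One possible mitigation, sketched below (not a recommendation from this guidance), is to verify a downloaded model file against a checksum published by a trusted source before loading it; the file name and expected hash here are placeholders, not real values.

```python
import hashlib
from pathlib import Path

# Placeholder values: use the real file path and the checksum published by the
# model's distributor (for example, on the project's release page).
MODEL_PATH = Path("downloaded_model.bin")
EXPECTED_SHA256 = "replace-with-published-sha256-checksum"

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of(MODEL_PATH) != EXPECTED_SHA256:
    raise SystemExit("Checksum mismatch: do not load or deserialise this model file.")
print("Checksum verified; the file matches what the distributor published.")
```

Checksum verification does not remove the need to obtain models from trusted publishers, since an attacker who controls the download page can also publish a matching checksum for a tampered file.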
Contact details
If you have any questions regarding this guidance, you can write to us or call us on 1300 CYBER1 (1300 292 371).