NIST Fortifies Chatbots and Self-Driving Cars Against Digital Threats

Jan. 5, 2024

The US National Institute of Standards and Technology (NIST) has taken a new step in developing strategies to defend AI-powered chatbots and self-driving cars against cyber-threats.

The Institute released a new paper on January 4, 2024, in which it established a standardized approach to characterizing and defending against cyber-attacks on AI.

The paper, called Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, was written in collaboration with academia and industry. It documents the different types of adversarial machine learning (AML) attacks and some mitigation techniques.

In its taxonomy, NIST broke down AML attacks into two categories:

  • Attacks targeting ‘predictive AI’ systems
  • Attacks targeting ‘generative AI’ systems

What NIST calls ‘predictive AI’ refers broadly to AI and machine learning systems that predict behaviors and phenomena. Examples of such systems include computer vision devices and self-driving cars.

‘Generative AI,’ in the NIST taxonomy, is a sub-category within ‘predictive AI’ that includes generative adversarial networks, generative pre-trained transformers and diffusion models.

“While many attack types in the PredAI taxonomy apply to GenAI […], a substantial body of recent work on the security of GenAI merits particular focus on novel security violations,” reads the paper.

Evasion, Poisoning and Privacy Attacks

For ‘predictive AI’ systems, the report considers three types of attacks:

  • Evasion attacks, in which the adversary’s goal is to generate adversarial examples: testing samples whose classification can be changed at deployment time to an arbitrary class of the attacker’s choice with only minimal perturbation (see the sketch after this list)
  • Poisoning attacks, referring to adversarial attacks conducted during the training stage of the AI algorithm
  • Privacy attacks, which are attempts to learn sensitive information about the AI model or the data it was trained on in order to misuse it
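
To make the evasion category concrete, the sketch below shows a single-step, gradient-based perturbation in the spirit of the fast gradient sign method. It is an illustration only, not a technique taken from the NIST paper; the `model`, the input tensor `x` and the attacker-chosen `target_class` are assumed to be a trained PyTorch classifier and a correctly classified sample.

```python
# Illustrative sketch of a targeted evasion attack (FGSM-style).
# Assumes `model` is a trained PyTorch classifier and `x` is a normalized
# input tensor the model currently classifies correctly.
import torch
import torch.nn.functional as F

def targeted_fgsm(model, x, target_class, epsilon=0.03):
    """Craft an adversarial example by adding a minimal perturbation that
    pushes the model's prediction toward the attacker's chosen class."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), target_class)
    loss.backward()
    # Step *against* the gradient to lower the loss for the target class,
    # keeping the perturbation small (bounded element-wise by epsilon).
    return (x_adv - epsilon * x_adv.grad.sign()).detach().clamp(0, 1)
```

A perturbation budget like `epsilon=0.03` keeps the change small enough that the modified sample still looks unchanged to a human observer, which is what makes evasion attacks hard to spot at deployment time.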

Alina Oprea, a professor at Northeastern University and one of the paper’s co-authors, commented in a public statement: “Most of these attacks are fairly easy to mount and require minimum knowledge of the AI system and limited adversarial capabilities. Poisoning attacks, for example, can be mounted by controlling a few dozen training samples, which would be a very small percentage of the entire training set.”
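
As a rough illustration of how little control such a poisoning attack can require, the sketch below flips the labels of a small, randomly chosen fraction of a training set. The array names and the one-percent fraction are assumptions made for the example, not figures from the paper.

```python
# Illustrative sketch of a simple poisoning attack (label flipping).
# Assumes the training labels are held in a NumPy array `y_train`.
import numpy as np

def flip_labels(y_train, fraction=0.01, target_label=0, seed=0):
    """Corrupt a small percentage of training labels, mimicking an attacker
    who controls only a handful of samples in a large training set."""
    rng = np.random.default_rng(seed)
    y_poisoned = y_train.copy()
    n_poison = max(1, int(fraction * len(y_train)))
    idx = rng.choice(len(y_train), size=n_poison, replace=False)
    y_poisoned[idx] = target_label  # attacker-chosen label for poisoned samples
    return y_poisoned, idx
```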

