Vocabulary of natural language processing (POC)

Concept information

Preferred term

adversarial attack  

Definition

  • A deliberate attempt to mislead a machine learning or deep neural network model by introducing subtle, imperceptible perturbations into an input sample, which can cause the model to draw an incorrect conclusion with high confidence. (Based on Wang et al., Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey, 2023)
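
As a concrete illustration of this definition, the sketch below implements the fast gradient sign method (FGSM; Goodfellow et al., 2015), one widely cited adversarial attack. It is a minimal Python/PyTorch example, not drawn from the cited survey; the linear model, the perturbation budget epsilon, and the toy data are all hypothetical stand-ins.

import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # Compute the loss gradient with respect to the input sample
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Take one small, sign-based step: imperceptible to a human observer,
    # but it can push the model toward an incorrect, confident output
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy demonstration; a real attack would target a trained network
torch.manual_seed(0)
model = nn.Linear(10, 2)      # hypothetical stand-in for a trained classifier
x = torch.rand(1, 10)         # one input sample with features in [0, 1]
y = torch.tensor([0])         # its true label
x_adv = fgsm_attack(model, x, y)
print("clean prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())

The sign of the gradient, rather than the gradient itself, keeps every feature's change bounded by epsilon, which is what makes the perturbation hard to perceive while still moving the input in the direction that most increases the loss.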

Entry terms

  • adversarial training

URI

http://data.loterre.fr/ark:/67375/8LP-B29QHNSZ-4

Last modified 6/13/24