
Vocabulary of natural language processing (POC)


Concept information

Preferred term

adversarial attack  

Definition

  • A deliberate attempt to mislead a machine learning or deep neural network model by introducing subtle, imperceptible perturbations into an input sample, which can cause the model to reach an incorrect conclusion with high confidence. (Based on Wang et al., Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey, 2023)
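
An illustrative sketch, not part of the Loterre entry: the Fast Gradient Sign Method (FGSM; Goodfellow et al., 2015) is one concrete instance of the perturbation-based attack described above. The Python/PyTorch snippet below assumes a trained classifier (model), an input tensor x with values in [0, 1], its true label y, and a small perturbation budget epsilon; all of these names are placeholders.

    import torch
    import torch.nn as nn

    def fgsm_attack(model, x, y, epsilon=0.03):
        # Craft an adversarial example with the Fast Gradient Sign Method:
        # step the input in the direction that most increases the loss,
        # bounded by epsilon per element so the change stays imperceptible.
        x = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()    # sign of the input gradient
        return x_adv.clamp(0.0, 1.0).detach()  # keep a valid image range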

Broader concept

Alternative labels

  • adversarial training

In other languages

URI

http://data.loterre.fr/ark:/67375/8LP-B29QHNSZ-4

Last modified: 13/6/24