Concept information
Preferred term
adversarial attack
Definition
- A deliberate attempt to mislead a machine learning or deep neural network model by introducing subtle, imperceptible perturbations into an input sample, which can cause the model to reach an incorrect conclusion with high confidence. (Based on Wang et al., Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey, 2023)
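A minimal sketch of one well-known attack of this kind, the Fast Gradient Sign Method (FGSM, Goodfellow et al., 2015), may help illustrate the definition. It is not the method of the cited survey; the PyTorch `model`, input `x`, label `y`, and the `epsilon` budget are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return x perturbed by epsilon * sign(gradient of the loss w.r.t. x)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)  # loss the attacker wants to increase
    loss.backward()                      # gradient of the loss w.r.t. the input
    # A small step in the gradient-sign direction: imperceptible for small
    # epsilon, yet often enough to flip the model's prediction.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixel values in a valid range
```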
Broader concept
Alternative labels
- adversarial training
In other languages
French
- apprentissage adverse
- apprentissage antagoniste
- attaque adverse
URI
http://data.loterre.fr/ark:/67375/8LP-B29QHNSZ-4