Concept information
Preferred term
transformer
Definition
- Sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention. (Vaswani et al., 2017, p. 10)
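Illustrative sketch (not part of the source entry): a minimal NumPy implementation of the multi-headed self-attention mechanism named in the definition, with illustrative shapes, head count, and variable names chosen here as assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, w_q, w_k, w_v, w_o, num_heads):
    """Scaled dot-product self-attention over a sequence x of shape (seq_len, d_model)."""
    seq_len, d_model = x.shape
    d_head = d_model // num_heads

    # Self-attention: queries, keys, and values are all projections of the same input.
    q, k, v = x @ w_q, x @ w_k, x @ w_v

    # Split the model dimension into independent heads: (heads, seq, d_head).
    def split(t):
        return t.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    q, k, v = split(q), split(k), split(v)

    # Scaled dot-product attention computed per head.
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)   # (heads, seq, seq)
    weights = softmax(scores, axis=-1)
    heads = weights @ v                                    # (heads, seq, d_head)

    # Concatenate the heads and apply the output projection.
    concat = heads.transpose(1, 0, 2).reshape(seq_len, d_model)
    return concat @ w_o

# Toy usage with random weights (hypothetical dimensions).
rng = np.random.default_rng(0)
d_model, seq_len, num_heads = 16, 5, 4
x = rng.normal(size=(seq_len, d_model))
w_q, w_k, w_v, w_o = (rng.normal(size=(d_model, d_model)) for _ in range(4))
out = multi_head_self_attention(x, w_q, w_k, w_v, w_o, num_heads)
print(out.shape)  # (5, 16)
```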
Generic concept
Synonym(s)
- self-attention model
Translations
French
- modèle auto-attentif
- modèle d'auto-attention
URI
http://data.loterre.fr/ark:/67375/8LP-Q1F2DNLD-R