Concept information
Preferred term
contrastive loss
Definition
- A loss function used in training neural network models that draws the representations of similar inputs closer together while pushing the representations of dissimilar inputs apart; it is used for tasks such as sentence similarity, semantic textual similarity, and sentence embedding learning. (Based on Moayeri et al., A Comprehensive Study of Image Classification Model Sensitivity to Foregrounds, Backgrounds, and Visual Attributes, 2022) A minimal code sketch of this idea is given below.
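The entry itself contains no formula or code; the snippet below is a minimal sketch of one common formulation of this loss, the margin-based pairwise contrastive loss, assuming PyTorch. The function and variable names are illustrative and are not taken from the cited source.

```python
# Minimal sketch of a margin-based pairwise contrastive loss, assuming PyTorch.
# Names below are illustrative, not part of the cited source.
import torch
import torch.nn.functional as F

def contrastive_loss(emb_a: torch.Tensor,
                     emb_b: torch.Tensor,
                     is_similar: torch.Tensor,
                     margin: float = 1.0) -> torch.Tensor:
    """Pull similar pairs together; push dissimilar pairs at least `margin` apart.

    emb_a, emb_b : (batch, dim) embeddings of the two inputs in each pair.
    is_similar   : (batch,) tensor, 1.0 for similar pairs, 0.0 for dissimilar pairs.
    """
    dist = F.pairwise_distance(emb_a, emb_b)                 # Euclidean distance per pair
    pos = is_similar * dist.pow(2)                           # similar pairs: shrink the distance
    neg = (1.0 - is_similar) * F.relu(margin - dist).pow(2)  # dissimilar pairs: enforce the margin
    return 0.5 * (pos + neg).mean()

# Toy usage: three embedding pairs, the first labelled as similar.
a = torch.randn(3, 8)
b = torch.randn(3, 8)
labels = torch.tensor([1.0, 0.0, 0.0])
loss = contrastive_loss(a, b, labels)
```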
Broader concept
Translations
-
French
URI
http://data.loterre.fr/ark:/67375/8LP-ZBB710P2-5