Revision as of 02:53, 15 January 2026
Generative pre-trained transformer
A family of machine learning models based on the transformer architecture, pre-trained through self-supervised learning on large datasets of unlabelled text. This is currently the predominant architecture for large language models.
Source: NIST AI 100-2e2025
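The "self-supervised" part of the definition can be made concrete with a minimal sketch: the unlabelled text itself supplies the training targets, since each token's label is simply the token that follows it. The toy bigram model below is a hypothetical stand-in for the transformer, used only to illustrate the next-token prediction objective; real GPT pre-training uses a neural network and a learned tokenizer.

```python
import math

def next_token_pairs(tokens):
    # Self-supervised labelling: each token is paired with the
    # token that follows it, so no human annotation is needed.
    return list(zip(tokens[:-1], tokens[1:]))

class BigramModel:
    # Toy stand-in for the transformer: predicts the next token
    # from bigram counts, just to make the objective concrete.
    def __init__(self, corpus):
        self.counts, self.totals = {}, {}
        for cur, nxt in next_token_pairs(corpus):
            self.counts[(cur, nxt)] = self.counts.get((cur, nxt), 0) + 1
            self.totals[cur] = self.totals.get(cur, 0) + 1

    def prob(self, cur, nxt):
        return self.counts.get((cur, nxt), 0) / self.totals.get(cur, 1)

def nll(model, tokens):
    # The pre-training objective: average negative log-likelihood
    # of each observed next token under the model.
    pairs = next_token_pairs(tokens)
    return -sum(math.log(model.prob(c, n)) for c, n in pairs) / len(pairs)

corpus = "the cat sat on the mat".split()
model = BigramModel(corpus)
loss = nll(model, corpus)  # low loss on the training text itself
```

In a real GPT, minimizing this same loss over billions of tokens (with a transformer in place of the bigram table) is what "pre-training" refers to.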