Language model
A language model is a model of natural language.[1] Language models are useful for a variety of tasks, including speech recognition,[2] machine translation,[3] natural language generation (generating more human-like text), optical character recognition, route optimization,[4] handwriting recognition,[5] grammar induction,[6] and information retrieval.[7][8]
Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using words scraped from the public internet). They have superseded recurrent neural network–based models, which had previously superseded the purely statistical models, such as the word n-gram language model.
History

Noam Chomsky did pioneering work on language models in the 1950s by developing a theory of formal grammars, which became fundamental to the field of programming languages.[9]
In 1980, the first significant statistical language model was proposed, and during the decade IBM performed "Shannon-style" experiments, in which potential sources for language modeling improvement were identified by observing and analyzing the performance of human subjects in predicting or correcting text.[11] Statistical approaches proved more useful for many purposes than rule-based formal grammars, and discrete representations such as word n-gram language models, which assign probabilities to discrete combinations of words, made significant advances.

In the 2000s, continuous representations for words, such as word embeddings, began to replace discrete representations.[10] Typically, the representation is a real-valued vector that encodes the meaning of the word in such a way that words closer together in the vector space are expected to be similar in meaning, and common relationships between pairs of words, such as plurality or gender, correspond to consistent offsets between their vectors.
Models based on word n-grams
A word n-gram language model is a purely statistical model of language. It has been superseded by recurrent neural network–based models, which have in turn been superseded by large language models.[12] It is based on the assumption that the probability of the next word in a sequence depends only on a fixed-size window of previous words. If only one previous word is considered, it is called a bigram model; if two words, a trigram model; if n − 1 words, an n-gram model.[13] Special tokens are introduced to denote the start and end of a sentence, commonly written ⟨s⟩ and ⟨/s⟩.
To prevent unseen words or n-grams from being assigned a probability of zero, each word's estimated probability is made slightly lower than its relative frequency in the corpus, and the freed probability mass is redistributed to unseen events. Various smoothing methods are used for this, from simple "add-one" smoothing (assigning a count of 1 to unseen n-grams, as an uninformative prior) to more sophisticated techniques such as Good–Turing discounting or back-off models.
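As a minimal sketch of the idea (a toy bigram model with add-one smoothing, not drawn from the cited sources; the corpus, tokenization, and sentence padding below are illustrative assumptions):

```python
from collections import Counter

def train_bigram_model(sentences):
    """Count unigrams and bigrams, padding each sentence with
    start/end tokens <s> and </s>."""
    unigrams, bigrams, vocab = Counter(), Counter(), set()
    for sentence in sentences:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        vocab.update(tokens)
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams, vocab

def bigram_prob(w_prev, w, unigrams, bigrams, vocab):
    """P(w | w_prev) with add-one (Laplace) smoothing, so unseen
    bigrams still receive a small non-zero probability."""
    return (bigrams[(w_prev, w)] + 1) / (unigrams[w_prev] + len(vocab))

corpus = ["the rain in Spain falls mainly on the plain",
          "the rain falls on the plain"]
unigrams, bigrams, vocab = train_bigram_model(corpus)
print(bigram_prob("the", "rain", unigrams, bigrams, vocab))   # seen bigram
print(bigram_prob("the", "Spain", unigrams, bigrams, vocab))  # unseen bigram, still > 0
```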
Exponential
Maximum entropy language models encode the relationship between a word and the n-gram history using feature functions. The equation is

$$P(w_m \mid w_1, \ldots, w_{m-1}) = \frac{1}{Z(w_1, \ldots, w_{m-1})} \exp\!\big(a^{\mathsf{T}} f(w_1, \ldots, w_m)\big)$$

where $Z(w_1, \ldots, w_{m-1})$ is the partition function, $a$ is the parameter vector, and $f(w_1, \ldots, w_m)$ is the feature function. In the simplest case, the feature function is just an indicator of the presence of a certain n-gram. It is helpful to use a prior on $a$ or some form of regularization.
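A minimal sketch of this scoring rule, assuming a handful of hand-picked binary bigram indicator features and an illustrative parameter vector (none of which come from the cited sources):

```python
import math

def features(history, word):
    """Hypothetical indicator features: each fires when a particular
    bigram (last history word + candidate word) is present."""
    context = tuple(history[-1:]) + (word,)
    feature_set = [("the", "rain"), ("rain", "in"), ("in", "Spain")]
    return [1.0 if context == f else 0.0 for f in feature_set]

def maxent_prob(history, word, vocab, a):
    """P(word | history) = exp(a . f(history, word)) / Z(history)."""
    def score(w):
        return math.exp(sum(ai * fi for ai, fi in zip(a, features(history, w))))
    z = sum(score(w) for w in vocab)   # partition function Z(history)
    return score(word) / z

vocab = ["the", "rain", "in", "Spain", "falls"]
a = [1.2, 0.7, 0.3]                    # illustrative parameter vector
print(maxent_prob(["the"], "rain", vocab, a))
```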
The log-bilinear model is another example of an exponential language model.
Skip-gram model
The skip-gram language model is an attempt to overcome the data sparsity problem that the preceding model (the word n-gram language model) faced. Words represented in an embedding vector are no longer required to be consecutive, but may leave gaps that are skipped over.[14]
Formally, a k-skip-n-gram is a length-n subsequence where the components occur at distance at most k from each other.
For example, in the input text:
- the rain in Spain falls mainly on the plain
the set of 1-skip-2-grams includes all the bigrams (2-grams), and in addition the subsequences
- the in, rain Spain, in falls, Spain mainly, falls on, mainly the, and on plain.
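The following is a minimal sketch (not taken from the cited sources) that enumerates the k-skip-n-grams of a token sequence and reproduces the 1-skip-2-grams listed above:

```python
from itertools import combinations

def skipgrams(tokens, n, k):
    """Return all k-skip-n-grams: length-n subsequences whose elements
    span at most n + k consecutive positions of the input."""
    grams = set()
    for start in range(len(tokens) - n + 1):
        window = tokens[start:start + n + k]
        for positions in combinations(range(len(window)), n):
            if positions[0] == 0:   # anchor at the window start to avoid duplicates
                grams.add(tuple(window[p] for p in positions))
    return grams

text = "the rain in Spain falls mainly on the plain".split()
print(sorted(skipgrams(text, n=2, k=1)))
# contains every ordinary bigram plus gapped pairs such as ('the', 'in') and ('rain', 'Spain')
```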
In the skip-gram model, semantic relations between words are represented by linear combinations, capturing a form of compositionality. For example, in some such models, if v is the function that maps a word w to its n-dimensional vector representation, then

$$v(\text{king}) - v(\text{male}) + v(\text{female}) \approx v(\text{queen})$$

where ≈ is made precise by stipulating that its right-hand side must be the nearest neighbor of the value of the left-hand side.[15][16]
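A minimal sketch of that nearest-neighbor reading of ≈, using hand-picked toy 2-dimensional vectors rather than learned embeddings:

```python
import numpy as np

# Toy 2-d "embeddings", chosen by hand so that the gender offset is
# consistent; real models learn such vectors from data.
vectors = {
    "king":   np.array([0.8, 0.9]),
    "queen":  np.array([0.8, 0.1]),
    "male":   np.array([0.1, 0.9]),
    "female": np.array([0.1, 0.1]),
    "apple":  np.array([0.9, -0.8]),
}

def nearest(target, exclude):
    """Nearest neighbor by cosine similarity, excluding the query words."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max((w for w in vectors if w not in exclude),
               key=lambda w: cos(vectors[w], target))

analogy = vectors["king"] - vectors["male"] + vectors["female"]
print(nearest(analogy, exclude={"king", "male", "female"}))   # -> 'queen'
```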
Recurrent neural network
Continuous representations or embeddings of words are produced in recurrent neural network–based language models (also known as continuous space language models).[17] Such continuous space embeddings help to alleviate the curse of dimensionality: the number of possible word sequences grows exponentially with the size of the vocabulary, causing a data sparsity problem. Neural networks avoid this problem by representing words as non-linear combinations of weights in a neural net.[18]
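As a rough sketch of the architecture (arbitrary layer sizes, not a description of any specific published model), a recurrent language model embeds each token into a continuous vector, updates a hidden state over the sequence, and projects each hidden state to a distribution over the vocabulary:

```python
import torch
import torch.nn as nn

class RNNLanguageModel(nn.Module):
    """Minimal RNN language model: embed tokens, run an RNN over the
    sequence, and project each hidden state to vocabulary logits."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.RNN(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids):
        x = self.embed(token_ids)   # (batch, seq_len, embed_dim)
        h, _ = self.rnn(x)          # (batch, seq_len, hidden_dim)
        return self.out(h)          # logits for the next word at each position

model = RNNLanguageModel(vocab_size=1000)
tokens = torch.randint(0, 1000, (2, 10))   # dummy batch: 2 sequences of 10 token ids
logits = model(tokens)
next_word_probs = torch.softmax(logits[:, -1, :], dim=-1)  # distribution over the next word
print(next_word_probs.shape)                                # torch.Size([2, 1000])
```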
Large language models
Although large language models sometimes match human performance, it is not clear whether they are plausible cognitive models. At least in the case of recurrent neural networks, it has been shown that they sometimes learn patterns that humans do not, but fail to learn patterns that humans typically do.[22]
Evaluation and benchmarks
Evaluation of the quality of language models is mostly done by comparison to human-created sample benchmarks derived from typical language-oriented tasks. Other, less established, quality tests examine the intrinsic character of a language model or compare two such models. Since language models are typically intended to be dynamic and to learn from the data they see, some proposed models investigate the rate of learning, e.g., through inspection of learning curves.[23]
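One widely used intrinsic measure is perplexity, the exponentiated average negative log-probability that a model assigns to held-out text. A minimal sketch, assuming the model's per-word probabilities are already available:

```python
import math

def perplexity(word_probabilities):
    """Perplexity = exp(mean negative log-probability) over held-out text.
    Lower is better; a uniform model over a V-word vocabulary has perplexity V."""
    n = len(word_probabilities)
    return math.exp(-sum(math.log(p) for p in word_probabilities) / n)

# Probabilities a (hypothetical) model assigned to each successive word of a test sentence.
probs = [0.2, 0.1, 0.05, 0.3, 0.25]
print(perplexity(probs))   # ~6.7: the model is as "surprised" as if choosing among ~7 words
```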
Various data sets have been developed for use in evaluating language processing systems.[24] These include:
- Massive Multitask Language Understanding (MMLU)[25]
- Corpus of Linguistic Acceptability[26]
- GLUE benchmark[27]
- Microsoft Research Paraphrase Corpus[28]
- Multi-Genre Natural Language Inference
- Question Natural Language Inference
- Quora Question Pairs[29]
- Recognizing Textual Entailment[30]
- Semantic Textual Similarity Benchmark
- Stanford Question Answering Dataset (SQuAD)[31]
- Stanford Sentiment Treebank[32]
- Winograd NLI
- BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs[33]
See also

- Artificial intelligence and elections – Use and impact of AI on political elections
- Cache language model
- Deep linguistic processing
- Ethics of artificial intelligence
- Factored language model
- Generative pre-trained transformer
- Katz's back-off model
- Language technology
- Semantic similarity network
- Statistical model