Abstract
We show that, under certain conditions, a language model can be trained on the basis of a second language model. The main instance of the technique trains a finite automaton on the basis of a probabilistic context-free grammar, such that the Kullback-Leibler distance between the grammar and the trained automaton is provably minimal. This is a substantial generalization of an existing algorithm to train an n-gram model on the basis of a probabilistic context-free grammar.
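For context, the pre-existing n-gram instance that the abstract says is generalized amounts to setting each n-gram probability to its conditional expected frequency under the grammar, which is exactly the choice that minimizes the Kullback-Leibler distance from the grammar to the n-gram model. The sketch below approximates those expected frequencies by sampling strings from a toy PCFG; the grammar, symbol names, and sample size are illustrative assumptions, not taken from the paper.

```python
import random
from collections import defaultdict

# Toy PCFG for illustration only: nonterminal -> [(rhs, probability), ...].
# The grammar and its symbols are hypothetical, not from the paper.
PCFG = {
    "S":  [(("NP", "VP"), 1.0)],
    "NP": [(("det", "noun"), 0.7), (("noun",), 0.3)],
    "VP": [(("verb", "NP"), 0.6), (("verb",), 0.4)],
}

def sample(symbol):
    """Sample a terminal string from the PCFG, top-down."""
    if symbol not in PCFG:                     # terminal symbol
        return [symbol]
    rules, weights = zip(*PCFG[symbol])
    rhs = random.choices(rules, weights=weights)[0]
    words = []
    for s in rhs:
        words.extend(sample(s))
    return words

def train_bigram_model(n_samples=100_000):
    """Estimate bigram probabilities as (approximate) conditional
    expected frequencies under the grammar."""
    counts = defaultdict(lambda: defaultdict(int))
    for _ in range(n_samples):
        string = ["<s>"] + sample("S") + ["</s>"]
        for a, b in zip(string, string[1:]):
            counts[a][b] += 1
    model = {}
    for a, successors in counts.items():
        total = sum(successors.values())
        model[a] = {b: c / total for b, c in successors.items()}
    return model

if __name__ == "__main__":
    bigrams = train_bigram_model()
    print(bigrams["<s>"])   # distribution over sentence-initial words
```

In the exact algorithm the expectations would be computed from the grammar itself rather than estimated by sampling; the Monte Carlo version is only a compact way to exhibit the KL-minimizing objective.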
| Original language | English |
|---|---|
| Pages (from-to) | 173-185 |
| Number of pages | 13 |
| Journal | Computational Linguistics |
| Volume | 31 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - Jun 2005 |
Keywords
- GRAMMARS