The rise of deep learning techniques, and especially the advent of large language models (LLMs), has intensified discussion of what artificial intelligence with greater generalization capability might entail. Opinions on the capabilities of LLMs span an extremely broad range: from equating language models with stochastic parrots to claiming that they are already conscious. This paper reviews the LLM landscape in the context of generalization capacity as an information-theoretic property of these complex systems. We discuss proposed theoretical explanations for generalization in LLMs and highlight the mechanisms that may be responsible for it. Through an examination of the existing literature and theoretical frameworks, we aim to provide insight into the mechanisms driving the generalization capacity of LLMs, thereby contributing to a deeper understanding of their capabilities and limitations in natural language processing tasks.
This work explores transfer learning from several synthetic languages to English. We investigate the structure of the embeddings in the fine-tuned models, the information they encode, and the capabilities of the fine-tuned models on simple linguistic tasks. We also introduce a new synthetic language that leads to better transfer to English than the languages used in previous research. Finally, we introduce the Tiny-Cloze Benchmark, a new synthetic benchmark for natural language understanding that is more informative for less powerful models. We use the Tiny-Cloze Benchmark to evaluate the fine-tuned models in several domains, demonstrating that fine-tuning on the new synthetic language yields better performance on a variety of tasks.
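
The abstract does not describe the format of Tiny-Cloze items, but cloze-style evaluation of small language models typically scores candidate completions by their log-likelihood under the model and picks the most probable one. The sketch below illustrates that general pattern only; the checkpoint name, the helper functions, and the example item are hypothetical placeholders, not the paper's actual benchmark code.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Hypothetical stand-in for one of the small fine-tuned models; the
    # paper's actual checkpoints are not named in the abstract.
    model_name = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    def sequence_log_prob(text):
        # With labels equal to the inputs, the returned loss is the mean
        # negative log-likelihood per predicted token; rescale to a total.
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss
        return -loss.item() * (ids.shape[1] - 1)

    def cloze_choice(context, candidates):
        # Score each candidate completion; return the most probable one.
        return max(candidates, key=lambda c: sequence_log_prob(context + c))

    # Hypothetical cloze item, not taken from the benchmark itself.
    print(cloze_choice("The cat chased the ", ["mouse.", "piano."]))

Comparing raw total log-probabilities is the simplest scoring rule; length normalization or per-token averaging is a common variant when candidate completions differ in length.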