BPE Gets Picky: Efficient Vocabulary Refinement During Tokenizer Training

  • Language models can benefit greatly from efficient tokenization, yet they still mostly rely on the classical Byte-Pair Encoding (BPE) algorithm, a simple and reliable method. BPE has been shown to cause issues such as under-trained tokens and sub-optimal compression that may affect downstream performance. We introduce PickyBPE, a modified BPE algorithm that carries out vocabulary refinement during tokenizer training by removing merges that leave intermediate “junk” tokens. Our method improves vocabulary efficiency, eliminates under-trained tokens, and does not compromise text compression. Our experiments show that it either improves downstream performance or does not harm it.
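
To make the idea concrete, below is a minimal Python sketch of BPE training with picky vocabulary refinement. It is an illustration under assumptions, not the authors' released implementation: the function name train_picky_bpe and the frequency-ratio criterion for flagging a child token as "junk" (it occurs almost exclusively inside the token just merged) are hypothetical stand-ins for the paper's exact rule.

```python
# Illustrative sketch of BPE training with on-the-fly vocabulary refinement,
# in the spirit of the abstract above. The removal criterion used here (drop
# a child token once nearly all of its occurrences are absorbed by the new
# merge) is an assumption for illustration, not the paper's exact rule.
from collections import Counter

def train_picky_bpe(word_freqs, num_merges, removal_threshold=0.9):
    # word_freqs: dict mapping a word (string) to its corpus frequency.
    # Each word is represented as a tuple of current tokens (initially chars).
    words = {tuple(w): f for w, f in word_freqs.items()}
    vocab = {c for w in words for c in w}
    merges = []

    for _ in range(num_merges):
        # Count adjacent token pairs and standalone token occurrences.
        pair_counts, tok_counts = Counter(), Counter()
        for w, f in words.items():
            for t in w:
                tok_counts[t] += f
            for a, b in zip(w, w[1:]):
                pair_counts[(a, b)] += f
        if not pair_counts:
            break
        (a, b), pair_freq = pair_counts.most_common(1)[0]
        new_tok = a + b
        vocab.add(new_tok)
        merges.append((a, b))

        # Apply the merge to every word.
        new_words = {}
        for w, f in words.items():
            out, i = [], 0
            while i < len(w):
                if i + 1 < len(w) and w[i] == a and w[i + 1] == b:
                    out.append(new_tok)
                    i += 2
                else:
                    out.append(w[i])
                    i += 1
            key = tuple(out)
            new_words[key] = new_words.get(key, 0) + f
        words = new_words

        # "Picky" refinement: if a child token occurred almost exclusively
        # inside the pair just merged, treat it as an intermediate "junk"
        # token and remove it from the vocabulary (single chars are kept).
        for child in (a, b):
            if len(child) > 1 and pair_freq / tok_counts[child] >= removal_threshold:
                vocab.discard(child)

    return vocab, merges
```

For example, with train_picky_bpe({'lower': 5, 'lowest': 3}, num_merges=4), the intermediate token 'lo' is pruned as soon as 'low' is merged, since every occurrence of 'lo' is absorbed by it.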

Metadata
Author: Pavel Chizhov, Catherine Arnett, Elizaveta Korotkova, Ivan P. Yamshchikov
URN: urn:nbn:de:bvb:863-opus-62831
DOI: https://doi.org/10.18653/v1/2024.emnlp-main.925
Parent Title (English): Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Publisher: Association for Computational Linguistics
Place of publication: Stroudsburg, PA, USA
Document Type: Conference Proceeding
Language: English
Year of publication: 2024
Release Date: 2025/09/23
Tag: LLM; tokenization
Pages/Size: 18
First Page: 16587
Last Page: 16604
Institutes and faculty: Institute / Center for Artificial Intelligence (CAIRO)