From Text to Tokens: Your Step-by-Step Guide to BERT Tokenization

How BERT’s tokenizer, WordPiece, works

Dimitris Poulopoulos
Towards Data Science
Photo by Glen Carrie on Unsplash

Did you know that the way you tokenize text can make or break your language model? Have you ever wanted to tokenize documents in a rare language or a specialized domain? Splitting text into tokens isn’t a chore; it’s a gateway to transforming language into actionable intelligence. This story will teach you everything you need to know about tokenization, not only for BERT but for any LLM out there.
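To see what that looks like in practice, here’s a minimal sketch, assuming you have the Hugging Face transformers package installed; the example sentence is mine, and the exact subword split depends on the pretrained vocabulary:

```python
# A minimal sketch, assuming the Hugging Face `transformers` package is installed.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# A common word usually stays whole, while a rare or domain-specific word is
# split into "##"-prefixed subword pieces; the exact split depends on the
# pretrained WordPiece vocabulary.
print(tokenizer.tokenize("The patient was prescribed acetaminophen."))
```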

In my last story, we talked about BERT, explored its theoretical foundations and mechanisms, and discussed how to fine-tune it and create a question-answering system. Now, as we go further into the intricacies of this groundbreaking model, it’s time to spotlight one of its unsung heroes: tokenization.

I get it; tokenization might seem like the last boring obstacle between you and the thrilling process of training your model. Believe me, I used to think the same. But I’m here to tell you that tokenization is not just a “necessary evil”— it’s an art form in its own right.

In this story, we’ll examine every part of the tokenization pipeline. Some steps are trivial (like normalization and pre-tokenization), while others, like the modeling part, are what make each tokenizer unique.

Tokenization pipeline — Image by Author
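To make those stages concrete, here’s a minimal sketch, assuming the Hugging Face transformers and tokenizers packages are installed, that runs bert-base-uncased’s normalization, pre-tokenization, and WordPiece model steps one at a time; the outputs shown in the comments are indicative only, since they depend on the pretrained normalizer settings and vocabulary:

```python
# A minimal sketch, assuming the Hugging Face `transformers` and `tokenizers`
# packages are installed; outputs in the comments are indicative only.
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
backend = tokenizer.backend_tokenizer  # the underlying `tokenizers.Tokenizer`

text = "Héllo, Tokenization!"

# 1. Normalization: Unicode cleanup, lowercasing, accent stripping.
normalized = backend.normalizer.normalize_str(text)
print(normalized)  # e.g. "hello, tokenization!"

# 2. Pre-tokenization: split on whitespace and punctuation, keeping offsets.
words = backend.pre_tokenizer.pre_tokenize_str(normalized)
print(words)  # e.g. [("hello", (0, 5)), (",", (5, 6)), ...]

# 3. Model: the WordPiece step that maps each word to subword pieces.
print(tokenizer.tokenize(text))  # full pipeline: normalize, pre-tokenize, then WordPiece
```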

By the time you finish reading this article, you’ll not only understand the ins and outs of the BERT tokenizer, but you’ll also be equipped to train it on your own dataset. And if you’re feeling adventurous, you’ll even have the know-how to customize this crucial step when training your very own BERT model from scratch.
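If you want to go down that road, here’s a minimal sketch of training a BERT-style WordPiece tokenizer on your own corpus, assuming the Hugging Face tokenizers package and a hypothetical corpus file called my_corpus.txt:

```python
# A minimal sketch, assuming the Hugging Face `tokenizers` package and a
# hypothetical corpus file called `my_corpus.txt`.
from tokenizers import BertWordPieceTokenizer

tokenizer = BertWordPieceTokenizer(lowercase=True)

# Learn a fresh WordPiece vocabulary from your own text files.
tokenizer.train(
    files=["my_corpus.txt"],  # hypothetical path: replace with your data
    vocab_size=30_522,        # the size of the original BERT vocabulary
    min_frequency=2,
    special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"],
)

# Persist vocab.txt so it can be reused when pre-training your own BERT.
tokenizer.save_model(".")
```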

