Document Classification
This tutorial shows how to perform document classification in Tribuo, using a variety of different methods to extract features from the text. We'll use the venerable 20 newsgroups dataset, where the task is to predict which newsgroup a particular post comes from, though this tutorial is equally applicable to any document classification task (including tasks like sentiment analysis). We're going to train a simple logistic regression with fixed hyperparameters using a variety of feature extraction methods. The aim is to show how to extract features from text rather than to maximise performance; using a more powerful model like XGBoost, or performing hyperparameter optimization on the logistic regression, will likely improve the performance of all the feature extraction techniques.
Setup
You'll need a copy of the 20 newsgroups dataset, so first download and unpack it:
wget http://qwone.com/~jason/20Newsgroups/20news-bydate.tar.gz
mkdir 20news
cd 20news
tar -zxf ../20news-bydate.tar.gz
This leaves you with two directories, 20news-bydate-train and 20news-bydate-test, which contain the standard train and test split for this data.
20 newsgroups comes in a fairly standard format: the dataset is represented by a set of directories where the directory name is the class label, and each directory contains a collection of documents, one document per file. Each file is a single Usenet post. For the purposes of this tutorial, we'll use the subject and body of the post as the input text for classification.
Here's an example:
$ ls 20news-bydate-train/
alt.atheism/ comp.sys.mac.hardware/ rec.motorcycles/ sci.electronics/ talk.politics.guns/
comp.graphics/ comp.windows.x/ rec.sport.baseball/ sci.med/ talk.politics.mideast/
comp.os.ms-windows.misc/ misc.forsale/ rec.sport.hockey/ sci.space/ talk.politics.misc/
comp.sys.ibm.pc.hardware/ rec.autos/ sci.crypt/ soc.religion.christian/ talk.religion.misc/
$ ls 20news-bydate-train/comp.graphics/
37261 37949 38233 38270 38305 38344 38381 38417 38454 38489 38525 38562 38598 38633 38668 38703 38739
37913 37950 38234 38271 38306 38346 38382 38418 38455 38490 38526 38563 38599 38634 38669 38704 38740
37914 37951 38235 38272 38307 38347 38383 38420 38456 38491 38527 38564 38600 38635 38670 38705 38741
37915 37952 38236 38273 38308 38348 38384 38421 38457 38492 38528 38565 38601 38636 38671 38706 38742
...
As this is a pretty common format, Tribuo has a specific DataSource which can be used to read in this sort of data, org.tribuo.data.text.DirectoryFileSource.
We're going to use the classification experiments jar, along with the ONNX jar which provides support for loading in contextual word embedding models like BERT.
%jars ./tribuo-classification-experiments-4.1.0-jar-with-dependencies.jar
%jars ./tribuo-onnx-4.1.0-jar-with-dependencies.jar
We'll also need a selection of imports from the org.tribuo.data.text package, along with the usual imports from org.tribuo and org.tribuo.classification that we use when working with classification tasks. We'll load in the BERT support from the org.tribuo.interop.onnx.extractors package. Tribuo's BERT support loads models and tokenizers from HuggingFace's Transformers package, and can be easily extended to support non-BERT models.
import java.util.Collections;
import java.nio.file.Paths;
import com.oracle.labs.mlrg.olcut.provenance.ProvenanceUtil;
import com.oracle.labs.mlrg.olcut.util.Pair;
import org.tribuo.*;
import org.tribuo.data.text.*;
import org.tribuo.data.text.impl.*;
import org.tribuo.dataset.MinimumCardinalityDataset;
import org.tribuo.classification.*;
import org.tribuo.classification.evaluation.*;
import org.tribuo.classification.sgd.linear.LinearSGDTrainer;
import org.tribuo.classification.sgd.objectives.LogMulticlass;
import org.tribuo.interop.onnx.extractors.BERTFeatureExtractor;
import org.tribuo.math.optimisers.AdaGrad;
import org.tribuo.transform.*;
import org.tribuo.transform.transformations.IDFTransformation;
import org.tribuo.util.tokens.universal.UniversalTokenizer;
import org.tribuo.util.Util;
We'll instantiate a few objects that we'll use throughout this tutorial: the label factory, the evaluator, and the paths to the train and test data.
var labelFactory = new LabelFactory();
var labelEvaluator = new LabelEvaluator();
var trainPath = Paths.get(".","20news","20news-bydate-train");
var testPath = Paths.get(".","20news","20news-bydate-test");
Extracting features from text
Much of the work of machine learning is in presenting an appropriate representation of the data to the model. This is especially true when working with text data, as there is a plethora of approaches for converting text into the numbers that ML algorithms operate on. The DirectoryFileSource allows the user to choose the feature extraction, as it requires a TextFeatureExtractor which converts the String representing the input text into a Tribuo Example. We'll cover several different implementations of the TextFeatureExtractor interface in this tutorial, and we expect that users will implement it in their own classes to cope with specific feature extraction requirements.
We'll start with the simplest approach, a "bag of words", where each document is represented by the counts of the words in that document. This means the feature space is equal to the number of words, and most documents only have a positive value for a small number of words (as most words don't appear in any given document). This is particularly well suited to Tribuo's sparse vector representation of examples, and this suitability for NLP tasks is the reason that Tribuo is designed this way. Of course, first we'll need to tell the extractor what a word is, and for this we use a Tokenizer. Tokenizers split up a String into a stream of tokens. Tribuo provides several basic tokenizers, and an interface for tokenization. We're going to use Tribuo's UniversalTokenizer, which is descended from tokenizers developed at Sun Labs in the 90s and used in a variety of Sun products since that time. First we'll use a strict bag of words where each feature takes the value 1 if that word is present in the document, and 0 otherwise. We'll use Tribuo's BasicPipeline, which can convert Strings into features, and pass it to the basic TextFeatureExtractor implementation, helpfully called TextFeatureExtractorImpl.
var tokenizer = new UniversalTokenizer();
var bowPipeline = new BasicPipeline(tokenizer,1); // unigrams (n-gram size 1), binary presence features
var bowExtractor = new TextFeatureExtractorImpl<Label>(bowPipeline);
We're now almost ready to make our train and test data sources and load in the data. The DirectoryFileSource also accepts an array of DocumentPreprocessors which can be used to transform the text before feature extraction takes place. We're going to use a specific preprocessor (NewsPreprocessor) which standardises the 20 newsgroups data by stripping out the mail headers and returning only the subject and the body of the email. We'll also lowercase all the text using the CasingPreprocessor to slightly reduce the feature space we're working in. In general the preprocessors are dataset and task specific, which is why Tribuo doesn't ship with many implementations; in most cases users will need to write one from scratch for their specific task.
var newsProc = new NewsPreprocessor();
var lowercase = new CasingPreprocessor(CasingPreprocessor.CasingOperation.LOWERCASE);
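Since you'll usually need to write a preprocessor for your own task, here's a minimal sketch of what one could look like. This is an illustrative class rather than anything shipped with Tribuo; it assumes the DocumentPreprocessor interface exposes a single String-to-String processDoc method (check the Javadoc for the exact signature), and it simply drops a hypothetical "X-Mailer:" header line.
import com.oracle.labs.mlrg.olcut.provenance.ConfiguredObjectProvenance;
import com.oracle.labs.mlrg.olcut.provenance.impl.ConfiguredObjectProvenanceImpl;

// Hypothetical custom preprocessor: strips "X-Mailer:" header lines before feature extraction.
public class StripMailerPreprocessor implements DocumentPreprocessor {
    public String processDoc(String doc) {
        return doc.lines()
                  .filter(line -> !line.startsWith("X-Mailer:"))
                  .collect(java.util.stream.Collectors.joining("\n"));
    }

    // Provenance hook used by Tribuo's configuration & provenance system.
    public ConfiguredObjectProvenance getProvenance() {
        return new ConfiguredObjectProvenanceImpl(this,"DocumentPreprocessor");
    }
}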
We'll make a helper function to load the data sources and create the datasets. We're also going to restrict the test dataset so it only contains valid examples, as 20 newsgroups has some test examples that share no words with the train examples (and so have no features we could use to make predictions with).
Let's check our datasets and see if everything has loaded in correctly.
public Pair<Dataset<Label>,Dataset<Label>> mkDatasets(String name, TextFeatureExtractor<Label> extractor) {
var trainSource = new DirectoryFileSource<>(trainPath,labelFactory,extractor,newsProc,lowercase);
var testSource = new DirectoryFileSource<>(testPath,labelFactory,extractor,newsProc,lowercase);
var trainDS = new MutableDataset<>(trainSource);
var testDS = new ImmutableDataset<>(testSource,trainDS.getFeatureIDMap(),trainDS.getOutputIDInfo(),true);
System.out.println(String.format(name + " training data size = %d, number of features = %d, number of classes = %d",trainDS.size(),trainDS.getFeatureMap().size(),trainDS.getOutputInfo().size()));
System.out.println(String.format(name + " testing data size = %d, number of features = %d, number of classes = %d",testDS.size(),testDS.getFeatureMap().size(),testDS.getOutputInfo().size()));
return new Pair<>(trainDS,testDS);
}
var bowPair = mkDatasets("bow",bowExtractor);
We've loaded in 11,314 training documents containing 122,024 unique words and 7,532 test documents, each with the expected 20 classes.
Now we're ready to train a model. Let's start with a simple logistic regression.
var lrTrainer = new LinearSGDTrainer(new LogMulticlass(),new AdaGrad(0.1,0.001),5,42); // log loss, AdaGrad optimiser, 5 epochs, RNG seed 42
var bowStartTime = System.currentTimeMillis();
var bowModel = lrTrainer.train(bowPair.getA());
var bowEndTime = System.currentTimeMillis();
System.out.println("Training the model on BoW features took " + Util.formatDuration(bowStartTime,bowEndTime));
System.out.println();
var bowEval = labelEvaluator.evaluate(bowModel,bowPair.getB());
System.out.println(bowEval);
We got a macro F1 score of 79.1%, which is a fairly good starting point and it's roughly what other linear models get on this task (e.g., scikit-learn's text classification tutorial gets 76.9% macro F1 when using a similar multinomial Naive Bayes model).
Term counting
This simple Bag of Words approach discards a lot of information about the documents, as we're ignoring how many times the word or n-gram appears in the document (also known in information retrieval circles as the Term Frequency or TF). Let's swap the BasicPipeline for a TokenPipeline which supports term counting via a constructor flag.
var unigramPipeline = new TokenPipeline(tokenizer, 1, true); // n-gram size 1 (unigrams), with term counting
var unigramExtractor = new TextFeatureExtractorImpl<Label>(unigramPipeline);
var unigramPair = mkDatasets("unigram",unigramExtractor);
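We can also peek at the feature values in a single training example to see the effect of term counting (which features get printed depends on the document; this is just a quick check using standard Dataset and Example accessors):
// Print the first ten features of the first training example; with term counting the
// values are word counts rather than the binary 1.0s produced by the BoW extractor.
var firstExample = unigramPair.getA().getExample(0);
int shown = 0;
for (Feature f : firstExample) {
    System.out.println(f);
    if (++shown == 10) {
        break;
    }
}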
We can see that the number of documents and the number of features are still the same; all that's different is the feature values within each document. Let's build another logistic regression.
var unigramStartTime = System.currentTimeMillis();
var unigramModel = lrTrainer.train(unigramPair.getA());
var unigramEndTime = System.currentTimeMillis();
System.out.println("Training the model on Unigram features took " + Util.formatDuration(unigramStartTime,unigramEndTime));
System.out.println();
var unigramEval = labelEvaluator.evaluate(unigramModel,unigramPair.getB());
System.out.println(unigramEval);
We see that the logistic regression trained on unigrams gets about 80% accuracy, pretty much the same as the BoW baseline, and takes about the same amount of time to run. Both of these make sense, as the term count isn't necessarily that useful in this particular dataset, and we didn't change the number of features overall or inside each example by using term counting.
N-grams as features
Let's try a slightly more complicated feature extractor. The natural step up from unigrams is to include word pairs (or bigrams) and count the occurrence of those. This allows us to capture simple negations (e.g., "not bad" rather than "not" and "bad") along with multi-word names like "New York" rather than "new" and "york". In Tribuo this is as straightforward as telling the token pipeline we'd like bigrams.
var bigramPipeline = new TokenPipeline(tokenizer, 2, true); // n-gram size 2, with term counting
var bigramExtractor = new TextFeatureExtractorImpl<Label>(bigramPipeline);
var bigramPair = mkDatasets("bigram",bigramExtractor);
We can see the feature space has massively increased due to the presence of bigram features: we've now got 1.1 million features from the same 11,314 documents.
Now to train another logistic regression.
var bigramStartTime = System.currentTimeMillis();
var bigramModel = lrTrainer.train(bigramPair.getA());
var bigramEndTime = System.currentTimeMillis();
System.out.println("Training the model on Bigram features took " + Util.formatDuration(bigramStartTime,bigramEndTime));
System.out.println();
var bigramEval = labelEvaluator.evaluate(bigramModel,bigramPair.getB());
System.out.println(bigramEval);
Our performance decreased a little to 78% when using bigrams, and the runtime increased from 10s to 48s. Despite there being more information in the features, there are also many, many more of them, which makes it easier to confuse this simple linear model, and each example takes longer to process due to the greatly increased number of features. We could look at using a more complex model like boosted trees to exploit this additional information, which may increase the performance back above our baseline. We could further increase the n-gram size, but we'll start to see diminishing returns even with more powerful models as the dimensionality of the feature space increases without a commensurate increase in training data.
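As a hedged sketch of that boosted trees direction, swapping in Tribuo's XGBoost trainer is a small change, assuming the XGBoost classification module is available on the classpath (it should be included in the classification experiments jar loaded above); the 50 boosting rounds used here is an illustrative setting rather than a tuned one.
import org.tribuo.classification.xgboost.XGBoostClassificationTrainer;

// Gradient boosted trees on the same bigram features.
var xgbTrainer = new XGBoostClassificationTrainer(50); // 50 boosting rounds
var xgbModel = xgbTrainer.train(bigramPair.getA());
System.out.println(labelEvaluator.evaluate(xgbModel,bigramPair.getB()));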
TF-IDF vectors
One other factor is that the raw count of some words isn't usually that helpful, as most documents include "a", "the", and "and" many times, which just isn't a useful signal. A popular way to deal with this is to scale the term frequencies (i.e., the n-gram counts) by the Inverse Document Frequency (or IDF), producing TF-IDF vectors. In Tribuo the IDF is a transformation which is applied separately to the dataset after it's constructed, as it uses aggregate information from the whole dataset which isn't available until all the examples have been loaded in. Let's see how that affects performance.
// Create a transformation map that contains a single IDFTransformation to apply to every feature
var trMap = new TransformationMap(Collections.singletonList(new IDFTransformation()));
// Copy out the datasets.
var tfidfTrain = MutableDataset.createDeepCopy(bigramPair.getA());
var tfidfTest = MutableDataset.createDeepCopy(bigramPair.getB());
// Fit the IDF transformation and apply it to the data
// We add the implicit zero features (i.e. the words not present in each document)
// to get the correct estimate of the IDF.
var transformers = tfidfTrain.createTransformers(trMap,true);
tfidfTrain.transform(transformers);
tfidfTest.transform(transformers);
// Print the dataset statistics
System.out.println(String.format("tf-idf training data size = %d, number of features = %d, number of classes = %d",tfidfTrain.size(),tfidfTrain.getFeatureMap().size(),tfidfTrain.getOutputInfo().size()));
System.out.println(String.format("tf-idf testing data size = %d, number of features = %d, number of classes = %d",tfidfTest.size(),tfidfTest.getFeatureMap().size(),tfidfTest.getOutputInfo().size()));
Creating TF-IDF vectors didn't change the number of features (we still have 1.1 million features in the training set), but it has made the feature values more useful. The irrelevant "the" features will have a small value: while they may have a high term frequency, they are also present in every document and so have a high document frequency, and when we divide the two values the result ends up small.
var tfidfStartTime = System.currentTimeMillis();
var tfidfModel = lrTrainer.train(tfidfTrain);
var tfidfEndTime = System.currentTimeMillis();
System.out.println("Training the model on TF-IDF features took " + Util.formatDuration(tfidfStartTime,tfidfEndTime));
System.out.println();
var tfidfEval = labelEvaluator.evaluate(tfidfModel,tfidfTest);
System.out.println(tfidfEval);
Using TF-IDF features gives roughly the same accuracy as the bigram counts, so it may be that these features aren't something the linear model can easily exploit on this dataset, but in general the TF-IDF transformation is a useful one when working with text documents.
Feature hashing
A popular technique for dealing with large feature spaces is feature hashing, where the features are mapped down to a smaller space using a hash function. This induces collisions between the features, so the model might treat "New York" and "San Francisco" as the same feature, but because the collisions are generated essentially at random by the hash function they act as a strong regulariser, which can improve performance while making things run faster and use less memory.
To use feature hashing in Tribuo, simply pass a hash dimension to the TokenPipeline on construction. We'll map everything down to 50,000 features, which is around 5% of the original number, and see how that affects the model.
var hashPipeline = new TokenPipeline(tokenizer, 2, true, 50000); // n-gram size 2, term counting, hashed into 50,000 dimensions
var hashExtractor = new TextFeatureExtractorImpl<Label>(hashPipeline);
var hashPair = mkDatasets("hash-50k",hashExtractor);
As expected we still have the same number of training & test examples, but now there are only 50,000 features. Let's build another logistic regression.
var hashStartTime = System.currentTimeMillis();
var hashModel = lrTrainer.train(hashPair.getA());
var hashEndTime = System.currentTimeMillis();
System.out.println("Training the model on hashed features took " + Util.formatDuration(hashStartTime,hashEndTime));
System.out.println();
var hashEval = labelEvaluator.evaluate(hashModel,hashPair.getB());
System.out.println(hashEval);
The performance dropped a little here, but the model has less than a tenth of the parameters compared to the bigram model, making it faster and much smaller at inference time, and it took around 66% of the time to train. In many cases dropping a couple of points of accuracy for a model that is 20x smaller and substantially faster is a worthwhile tradeoff, but as with most machine learning tasks this depends on the problem you're solving and where you're deploying the model. Tuning the hashing dimension and the trainer parameters will likely produce a model with similar accuracy at greatly reduced computational cost.
Trimming out infrequent features
We can also directly trim out infrequently occurring features. If a feature doesn't occur very frequently then we're not likely to estimate its weights properly, as we've not seen it very often. Then if it occurs frequently in the test dataset it can confuse the model (this is a form of overfitting to the training data). Let's take the TF-IDF dataset and remove all the features that occur fewer than 5 times.
var minCardTrain = new MinimumCardinalityDataset<>(tfidfTrain,5);
// This call creates a copy of tfidfTest, removing all the
// features not found in minCardTrain's feature and output maps
var minCardTest = ImmutableDataset.copyDataset(tfidfTest,minCardTrain.getFeatureIDMap(),minCardTrain.getOutputIDInfo());
// Print the dataset statistics
System.out.println(String.format("Minimum cardinality training data size = %d, number of features = %d, number of classes = %d",minCardTrain.size(),minCardTrain.getFeatureMap().size(),minCardTrain.getOutputInfo().size()));
System.out.println(String.format("Minimum cardinality testing data size = %d, number of features = %d, number of classes = %d",minCardTest.size(),minCardTest.getFeatureMap().size(),minCardTest.getOutputInfo().size()));
We can see that's removed about 90% of the features, so let's try our simple model on it again.
var minCardStartTime = System.currentTimeMillis();
var minCardModel = lrTrainer.train(minCardTrain);
var minCardEndTime = System.currentTimeMillis();
System.out.println("Training the model on trimmed TF-IDF features took " + Util.formatDuration(minCardStartTime,minCardEndTime));
System.out.println();
var minCardEval = labelEvaluator.evaluate(minCardModel,minCardTest);
System.out.println(minCardEval);
As with the feature hashing above, this model trains more quickly because there is less data to process, but the speed improvement is more substantial as the number of features in each example is lower (because hashing produces a denser example than trimming out infrequent features). Performance dropped slightly compared to the TF-IDF model, but again the model has around 10% of the parameters, with a corresponding reduction in memory and runtime at both inference and training time. Performance is improved over the hashing because we're not colliding features at random; we're simply removing ones which are infrequent. If a feature is infrequent we probably can't estimate its weight very well, so removing it cuts out some of the noise.
Choosing between feature hashing and trimming out infrequent features is problem dependent. Feature hashing can work in denser feature spaces than trimming infrequent features, but both still require some amount of sparsity in the problem to have any useful effect. With text datasets, trimming the infrequent words/features is usually helpful.
Word embeddings
All the approaches described above have no notion of word similarity; they rely upon exactly the same words with the same spelling appearing in the training and test documents, when in practice word similarity is likely to be very useful information for the classifier because no two documents use exactly the same phrasing. For example, the unigrams "excellent" and "fantastic" are as dissimilar as any other pair of words to an n-gram model, when in fact those words are quite similar in meaning. Adding notions of word similarity to ML models usually means embedding each word into some vector space, where words with similar meanings are close together and words with dissimilar or opposite meanings are far apart. There are many popular word embedding algorithms, like Word2Vec, GloVe or FastText, which build embeddings on a corpus of text that can then be used in downstream tasks. Tribuo doesn't have a class which can directly load those word vectors, as they all come in different file formats, but it's pretty straightforward to build a TextFeatureExtractor that tokenizes the input text, looks up each word or n-gram in the vector space, and then averages the vectors across the input (it took us about an afternoon to build one for our internal word2vec style word vector research file format). If there is interest from the community in supporting a specific word vector file format, we're happy to accept PRs that add the support.
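Here's a hedged sketch of what such an extractor could look like. This class isn't part of Tribuo; the extract method signature is assumed from the TextFeatureExtractor interface (check the Javadoc for the exact form), and vectors stands in for whatever word vector file you've loaded into a Map elsewhere.
import java.util.Map;
import org.tribuo.impl.ArrayExample;
import com.oracle.labs.mlrg.olcut.provenance.ConfiguredObjectProvenance;
import com.oracle.labs.mlrg.olcut.provenance.impl.ConfiguredObjectProvenanceImpl;

// Hypothetical averaged word-vector extractor: tokenize, look up each token, average.
public class AveragedWordVectorExtractor implements TextFeatureExtractor<Label> {
    private final Map<String,double[]> vectors; // hypothetical pre-loaded word vectors
    private final int dimension;
    private final UniversalTokenizer tokenizer = new UniversalTokenizer();

    public AveragedWordVectorExtractor(Map<String,double[]> vectors, int dimension) {
        this.vectors = vectors;
        this.dimension = dimension;
    }

    public Example<Label> extract(Label label, String text) {
        double[] average = new double[dimension];
        int count = 0;
        // Sum the vectors of the tokens we have embeddings for.
        tokenizer.reset(text);
        while (tokenizer.advance()) {
            double[] vector = vectors.get(tokenizer.getText().toLowerCase());
            if (vector != null) {
                for (int i = 0; i < dimension; i++) {
                    average[i] += vector[i];
                }
                count++;
            }
        }
        // Average the sum and emit one dense feature per embedding dimension.
        ArrayExample<Label> example = new ArrayExample<>(label);
        for (int i = 0; i < dimension; i++) {
            example.add(new Feature("embedding-"+i, count > 0 ? average[i] / count : 0.0));
        }
        return example;
    }

    // Provenance hook used by Tribuo's configuration & provenance system.
    public ConfiguredObjectProvenance getProvenance() {
        return new ConfiguredObjectProvenanceImpl(this,"TextFeatureExtractor");
    }
}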
While these more traditional forms of word vector are very powerful, as they are precomputed they treat each word the same no matter the context it appears in. For example "bank" could mean a river bank, or a financial institution, but a word2vec vector has to contain both meanings because it doesn't know the context the word is present in, i.e., the rest of the sentence. This led to the rise of contextual word embeddings, which produce a vector for each word based on the whole input sequence. The most popular of these embeddings are based on the Transformer architecture, usually a variant of Google's BERT model.
Using BERT embeddings
BERT is a multi-layer transformer network which reads in a sentence and produces an embedding of the sentence along with embeddings for each wordpiece. A "wordpiece" is the token that BERT operates on, which is either a whole word or a chunk of a word, emitted by the wordpiece tokenizer. The word chunking algorithm is trained on a large corpus and allows common prefixes & suffixes (e.g., "un", "ing") to be split off the words and to share state. We can use BERT to produce a single vector which represents the sentence or document, and then use that vector as features in a downstream Tribuo classifier.
Tribuo works with BERT models that are stored in ONNX format, and can load in tokenizers produced by HuggingFace Transformers. That package also helpfully provides a Python script to convert BERT models from HuggingFace format into ONNX format for deployment. We provide a TextFeatureExtractor implementation called BERTFeatureExtractor which produces sentence embeddings by passing the text through a BERT model. Tribuo uses Microsoft's ONNX Runtime to load the model, and has its own implementation of the Wordpiece tokenization algorithm, along with the necessary glue to produce tokens in the format that BERT expects. One downside of BERT models is that they have a maximum document length that they can process, usually 512 wordpieces. This is configurable in Tribuo's extractor, but if you set the maximum length to be longer than the sequences the model was trained on then the performance is likely to suffer (or the computation may fail, depending on how that specific BERT model is implemented).
To follow along with this part of the tutorial you'll need to produce a BERT model in ONNX format. To do that you'll need access to a Python 3 environment with HuggingFace Transformers and PyTorch or TensorFlow installed to export the model (the snippet below assumes PyTorch; change the pt to tf if you're using TensorFlow). Running the following snippet will produce a bert-base-uncased.onnx file that we can use for the rest of the tutorial. You'll need to run it in an empty directory due to the way HuggingFace's conversion script works.
python -m transformers.convert_graph_to_onnx --framework pt --model bert-base-uncased bert-base-uncased.onnx
You'll also need to download the tokenizer.json that goes with the BERT variant you are using; for bert-base-uncased that file is available from the model repository on HuggingFace (https://huggingface.co/bert-base-uncased). Assuming both of those files are now in the same directory as this tutorial, we can create the BERTFeatureExtractor. We're going to take the average token embedding across the whole input, as the [CLS] token which provides the sentence embedding tends to perform poorly unless it is fine-tuned on your task.
Warning: this feature extraction step took more than a minute per newsgroup on a 2019 16" 6-core MacBook Pro (using the default settings of ONNX Runtime, i.e., a single thread on the CPU provider), so around 55 minutes to extract the full train and test datasets. Your mileage may vary, and your laptop may get quite warm; we recommend not running it while your laptop is actually on your lap. At the moment Tribuo's TextFeatureExtractor interface doesn't batch up the inputs, which limits the performance of contextual feature extractors. We'll look at expanding that interface to support batching in a future release. The session options can be controlled via the BERTFeatureExtractor.reconfigureOrtSession(SessionOptions options) method, which allows the use of whatever configuration is supported by your onnxruntime jar.
var bertPath = Paths.get("./bert-base-uncased.onnx");
var tokenizerPath = Paths.get("./tokenizer.json");
var bert = new BERTFeatureExtractor<>(labelFactory,
bertPath,
tokenizerPath,
BERTFeatureExtractor.OutputPooling.MEAN,
256, // Maximum number of wordpiece tokens
false // Use Nvidia GPUs for inference (if onnxruntime_gpu is on the classpath)
);
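If the default single-threaded session is too slow on your machine, the extractor's ONNX Runtime session can be reconfigured before running the extraction. A minimal sketch, assuming the standard ONNX Runtime SessionOptions API (what's available depends on the onnxruntime jar on your classpath):
import ai.onnxruntime.OrtSession;

// Optional: give ONNX Runtime more threads inside each operator before extracting features.
var sessionOpts = new OrtSession.SessionOptions();
sessionOpts.setIntraOpNumThreads(4); // illustrative value, tune for your machine
bert.reconfigureOrtSession(sessionOpts);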
var bertStartTime = System.currentTimeMillis();
var bertPair = mkDatasets("bert",bert);
var bertEndTime = System.currentTimeMillis();
System.out.println("Extracting features with BERT took " + Util.formatDuration(bertStartTime,bertEndTime));
Note that Tribuo's BERTFeatureExtractor can run the BERT embeddings on a GPU, but only if the onnxruntime_gpu jar is on the classpath. By default Tribuo pulls in the CPU-only jar for maximum compatibility. As you can see from the time taken to extract the features, it's best to deploy BERT when you've got plenty of CPUs or fast GPUs.
Now we build a logistic regression on the dense feature space produced by BERT. These embeddings are dense 768-dimensional vectors; each document contains a value for each of those dimensions. In Tribuo 4.1 we added optimisations to several of the models and trainers to improve their performance on the dense feature spaces produced by techniques like BERT.
var bertStartTime = System.currentTimeMillis();
var bertModel = lrTrainer.train(bertPair.getA());
var bertEndTime = System.currentTimeMillis();
System.out.println("Training a LR on BERT features took " + Util.formatDuration(bertStartTime,bertEndTime));
var bertEval = labelEvaluator.evaluate(bertModel,bertPair.getB());
System.out.println(bertEval);
We get around 71% accuracy using this standard BERT model, which might be due to its training data (Wikipedia and books) not overlapping well with the comparatively old newsgroup language. Fine-tuning the BERT model on a large corpus of newsgroups could probably improve this, but the standard model is likely to work well for better-formed text like news articles or more formal documents. Alternatively it may be that the logistic regression we're training isn't sufficiently flexible to use the information in the BERT features, so it may be beneficial to use a more complex classifier like gradient boosted trees or a Multi-Layer Perceptron through Tribuo's TensorFlow interface.
Using different BERT versions can change the accuracy, as there are variants fine-tuned for a wide variety of different tasks & domains, and there are smaller versions like DistilBERT and TinyBERT which are useful for deploying models in constrained environments. However, BERT-based feature extractors will always be slower than the simpler BoW approaches described above, because they have to perform lots of floating point computations to compute the embedded feature values.
Deploying the feature extractors
Similarly to when working with columnar data, the feature extractor used is recorded in the model provenance. We can see that for the BERT model here.
var sourceProvenance = bertModel.getProvenance().getDatasetProvenance().getSourceProvenance();
System.out.println(ProvenanceUtil.formattedProvenanceString(sourceProvenance));
This means that the model has recorded how the features were extracted, but the extraction process itself isn't part of the serialized model (which we wouldn't really want anyway, as BERT models are hundreds of megabytes). So to use one of these models at inference time the feature extraction pipeline needs to be rebuilt from the configuration, in the same way we rebuilt the RowProcessor in the columnar tutorial.
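A hedged sketch of that rebuilding step is below; the generated component name passed to lookup is an assumption, so inspect the extracted configuration (or the provenance printed above) to find the actual name in your run.
import com.oracle.labs.mlrg.olcut.config.ConfigurationManager;

// Extract the configuration embedded in the provenance and load it into a ConfigurationManager.
var extractedConfig = ProvenanceUtil.extractConfiguration(sourceProvenance);
var cm = new ConfigurationManager();
cm.addConfiguration(extractedConfig);
// Look up the extractor by its generated name ("bertfeatureextractor-0" is a guess).
var rebuiltExtractor = (BERTFeatureExtractor<Label>) cm.lookup("bertfeatureextractor-0");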
Each of the different models trained in this tutorial has recorded the source provenance and its associated TextFeatureExtractor configuration, meaning the models come with all the necessary information to infer the classes of new documents.
Conclusion
We looked at a document classification task in Tribuo. As most of the work in NLP tends to be in featurising the data, we discussed several different ways of converting text into features for use in machine learning. We looked at Bag of Words models, n-grams, term frequencies, TF-IDF vectors, and feature hashing, and also looked at trimming large feature spaces based on the number of times we'd seen a feature. We also discussed word vector approaches, and showed how to use the popular contextual word embedding model, BERT, to extract features for document classification. It's worth noting that all the models trained were simple logistic regressions, with no parameter tuning. Using a more powerful classifier like XGBoost, or performing hyperparameter tuning on the logistic regression, will likely improve performance over the simple baselines presented here.
Tribuo's text processing framework is very flexible, and it's possible to insert your own code at each of the different stages by implementing TextFeatureExtractor, TextPipeline or even the Tokenizer yourself, while the provenance system means you can always recover how your data was processed and ensure it matches at inference time.