
Document Classification

This tutorial will show how to perform document classification in Tribuo, using a variety of different methods to extract features from the text. We'll use the venerable 20 newsgroups dataset, where the task is to predict which newsgroup a particular post is from, though this tutorial is equally applicable to any document classification task (including tasks like sentiment analysis). We're going to train a simple logistic regression with fixed hyperparameters using a variety of feature extraction methods. The aim is to show how to extract features from text rather than to focus on performance, as using a more powerful model like XGBoost, or performing hyperparameter optimization on the logistic regression, will likely improve the performance of all the feature extraction techniques.

Setup

You'll need a copy of the 20 newsgroups dataset, so first download and unpack it:

wget http://qwone.com/~jason/20Newsgroups/20news-bydate.tar.gz
mkdir 20news
cd 20news
tar -zxf ../20news-bydate.tar.gz

This leaves you with two directories 20news-bydate-train and 20news-bydate-test, which contain the standard train and test split for this data.

20 newsgroups comes in a fairly standard format: the dataset is represented by a set of directories, where each directory name is the class label and each directory contains a collection of documents, one document per file. Each file is a single Usenet post. For the purposes of this tutorial, we'll use the subject and body of the post as the input text for classification.

Here's an example:

$ ls 20news-bydate-train/
alt.atheism/               comp.sys.mac.hardware/  rec.motorcycles/     sci.electronics/         talk.politics.guns/
comp.graphics/             comp.windows.x/         rec.sport.baseball/  sci.med/                 talk.politics.mideast/
comp.os.ms-windows.misc/   misc.forsale/           rec.sport.hockey/    sci.space/               talk.politics.misc/
comp.sys.ibm.pc.hardware/  rec.autos/              sci.crypt/           soc.religion.christian/  talk.religion.misc/
$ ls 20news-bydate-train/comp.graphics/
37261  37949  38233  38270  38305  38344  38381  38417  38454  38489  38525  38562  38598  38633  38668  38703  38739
37913  37950  38234  38271  38306  38346  38382  38418  38455  38490  38526  38563  38599  38634  38669  38704  38740
37914  37951  38235  38272  38307  38347  38383  38420  38456  38491  38527  38564  38600  38635  38670  38705  38741
37915  37952  38236  38273  38308  38348  38384  38421  38457  38492  38528  38565  38601  38636  38671  38706  38742
...

As this is a pretty common format, Tribuo has a specific DataSource which can be used to read in this sort of data, org.tribuo.data.text.DirectoryFileSource.

We're going to use the classification experiments jar, along with the ONNX jar which provides support for loading in contextual word embedding models like BERT.

In [1]:
%jars ./tribuo-classification-experiments-4.1.0-jar-with-dependencies.jar
%jars ./tribuo-onnx-4.1.0-jar-with-dependencies.jar

We'll also need a selection of imports from the org.tribuo.data.text package, along with the usual imports from org.tribuo and org.tribuo.classification that we use when working with classification tasks. We'll load in the BERT support from the org.tribuo.interop.onnx.extractors package. Tribuo's BERT support loads models and tokenizers produced by HuggingFace's Transformers package, and can be easily extended to support non-BERT models.

In [2]:
import java.util.Collections;
import java.nio.file.Paths;
import com.oracle.labs.mlrg.olcut.provenance.ProvenanceUtil;
import com.oracle.labs.mlrg.olcut.util.Pair;
import org.tribuo.*;
import org.tribuo.data.text.*;
import org.tribuo.data.text.impl.*;
import org.tribuo.dataset.MinimumCardinalityDataset;
import org.tribuo.classification.*;
import org.tribuo.classification.evaluation.*;
import org.tribuo.classification.sgd.linear.LinearSGDTrainer;
import org.tribuo.classification.sgd.objectives.LogMulticlass;
import org.tribuo.interop.onnx.extractors.BERTFeatureExtractor;
import org.tribuo.math.optimisers.AdaGrad;
import org.tribuo.transform.*;
import org.tribuo.transform.transformations.IDFTransformation;
import org.tribuo.util.tokens.universal.UniversalTokenizer;
import org.tribuo.util.Util;

We'll instantiate a few objects that we'll use throughout this tutorial: the label factory, the evaluator, and the paths to the train and test data.

In [3]:
var labelFactory = new LabelFactory();
var labelEvaluator = new LabelEvaluator();
var trainPath = Paths.get(".","20news","20news-bydate-train");
var testPath = Paths.get(".","20news","20news-bydate-test");

Extracting features from text

Much of the work of machine learning is in presenting an appropriate representation of the data to the model. This is especially true when working with text data, as there is a plethora of approaches for converting text into the numbers that ML algorithms operate on. The DirectoryFileSource allows the user to choose the feature extraction, as it requires a TextFeatureExtractor which converts the String representing the input text into a Tribuo Example. We'll cover several different implementations of the TextFeatureExtractor interface in this tutorial, and we expect that users will implement it in their own classes to cope with specific feature extraction requirements.

We'll start with the simplest approach, a "bag of words", where each document is represented by the counts of the words in that document. This means the feature space is equal in size to the number of distinct words, and most documents only have a positive value for a small number of those words (as most words don't appear in any given document). This is particularly well suited to Tribuo's sparse vector representation of examples; this suitability for NLP tasks is one of the reasons Tribuo is designed this way. Of course, first we'll need to tell the extractor what a word is, and for this we use a Tokenizer. Tokenizers split up a String into a stream of tokens. Tribuo provides several basic tokenizers, along with an interface for tokenization. We're going to use Tribuo's UniversalTokenizer, which is descended from tokenizers developed at Sun Labs in the 90s and used in a variety of Sun products since that time. First we'll use a binary bag of words, where each feature takes the value 1 if that word is present in the document and 0 otherwise. We'll use Tribuo's BasicPipeline, which can convert Strings into features, and pass it to the basic TextFeatureExtractor implementation, helpfully called TextFeatureExtractorImpl.

In [4]:
var tokenizer = new UniversalTokenizer();
var bowPipeline = new BasicPipeline(tokenizer,1);
var bowExtractor = new TextFeatureExtractorImpl<Label>(bowPipeline);
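
To get a feel for what this produces, we can run the extractor on a toy document and print out the generated features. This snippet isn't part of the original pipeline; it assumes the single extract(label, text) method on TextFeatureExtractor, and the exact feature names depend on the pipeline's internal naming scheme.

// The label is just a placeholder here, as we're only inspecting the features
var toyExample = bowExtractor.extract(new Label("unknown"), "the cat sat on the mat");
for (Feature f : toyExample) {
    System.out.println(f.getName() + " = " + f.getValue());
}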

We're now almost ready to make our train and test data sources, and load in the data. The DirectoryFileSource also accepts an array of DocumentPreprocessors which can be used to transform the text before feature extraction takes place. We're going to use a specific preprocessor (NewsPreprocessor) which standardises the 20 newsgroups data by stripping out the mail headers and returning only the subject and the body of the post. We'll also lowercase all the text using the CasingPreprocessor to slightly reduce the feature space we're working in. In general the preprocessors are dataset and task specific, which is why Tribuo doesn't ship with many implementations; in most cases users will need to write one from scratch for their specific task.

In [5]:
var newsProc = new NewsPreprocessor();
var lowercase = new CasingPreprocessor(CasingPreprocessor.CasingOperation.LOWERCASE);

We'll make a helper function to load the data sources and create the datasets. We're also going to restrict the test dataset so it only contains valid examples, as 20 newsgroups has some test examples that share no words with the train examples (and so have no features we could use to make predictions with).

Let's check our datasets and see if everything has loaded in correctly.

In [6]:
public Pair<Dataset<Label>,Dataset<Label>> mkDatasets(String name, TextFeatureExtractor<Label> extractor) {
    var trainSource = new DirectoryFileSource<>(trainPath,labelFactory,extractor,newsProc,lowercase);
    var testSource = new DirectoryFileSource<>(testPath,labelFactory,extractor,newsProc,lowercase);
    var trainDS = new MutableDataset<>(trainSource);
    var testDS = new ImmutableDataset<>(testSource,trainDS.getFeatureIDMap(),trainDS.getOutputIDInfo(),true);
    System.out.println(String.format(name + " training data size = %d, number of features = %d, number of classes = %d",trainDS.size(),trainDS.getFeatureMap().size(),trainDS.getOutputInfo().size()));
    System.out.println(String.format(name + " testing data size = %d, number of features = %d, number of classes = %d",testDS.size(),testDS.getFeatureMap().size(),testDS.getOutputInfo().size()));
    return new Pair<>(trainDS,testDS);
}

var bowPair = mkDatasets("bow",bowExtractor);
bow training data size = 11314, number of features = 122024, number of classes = 20
bow testing data size = 7532, number of features = 122024, number of classes = 20

We've loaded in 11,314 training documents containing 122,024 unique words, along with 7,532 test documents, and both datasets have the expected 20 classes.

Now we're ready to train a model. Let's start with a simple logistic regression.

In [7]:
var lrTrainer = new LinearSGDTrainer(new LogMulticlass(),new AdaGrad(0.1,0.001),5,42);
var bowStartTime = System.currentTimeMillis();
var bowModel = lrTrainer.train(bowPair.getA());
var bowEndTime = System.currentTimeMillis();
System.out.println("Training the model on BoW features took " + Util.formatDuration(bowStartTime,bowEndTime));
System.out.println();
var bowEval = labelEvaluator.evaluate(bowModel,bowPair.getB());
System.out.println(bowEval);
Training the model on BoW features took (00:00:09:659)

Class                                n          tp          fn          fp      recall        prec          f1
soc.religion.christian             398         352          46         110       0.884       0.762       0.819
rec.autos                          396         344          52          63       0.869       0.845       0.857
talk.religion.misc                 251         166          85         120       0.661       0.580       0.618
comp.windows.x                     395         283         112          55       0.716       0.837       0.772
rec.sport.baseball                 397         370          27          45       0.932       0.892       0.911
comp.graphics                      389         293          96         143       0.753       0.672       0.710
talk.politics.mideast              376         283          93          11       0.753       0.963       0.845
comp.sys.ibm.pc.hardware           392         277         115         160       0.707       0.634       0.668
sci.med                            396         323          73          43       0.816       0.883       0.848
comp.os.ms-windows.misc            394         272         122          87       0.690       0.758       0.722
sci.crypt                          396         349          47          23       0.881       0.938       0.909
comp.sys.mac.hardware              385         283         102          96       0.735       0.747       0.741
misc.forsale                       390         341          49          63       0.874       0.844       0.859
rec.motorcycles                    398         364          34          23       0.915       0.941       0.927
talk.politics.misc                 310         182         128          94       0.587       0.659       0.621
sci.electronics                    393         272         121         135       0.692       0.668       0.680
rec.sport.hockey                   399         367          32          24       0.920       0.939       0.929
sci.space                          394         325          69          56       0.825       0.853       0.839
alt.atheism                        319         243          76          75       0.762       0.764       0.763
talk.politics.guns                 364         303          61         114       0.832       0.727       0.776
Total                            7,532       5,992       1,540       1,540
Accuracy                                                                         0.796
Micro Average                                                                    0.796       0.796       0.796
Macro Average                                                                    0.790       0.795       0.791
Balanced Error Rate                                                              0.210

We got a macro F1 score of 79.1%, which is a fairly good starting point and is roughly what other linear models get on this task (e.g., scikit-learn's text classification tutorial gets 76.9% macro F1 using a similar multinomial Naive Bayes model).
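
Before we move on, it can be instructive to peek at what the model has learned. This snippet isn't part of the original tutorial, but Model.getTopFeatures returns the most heavily weighted features per class, which for a bag of words model are just words.

// Print the five most heavily weighted features for a couple of the classes
var topFeatures = bowModel.getTopFeatures(5);
System.out.println("rec.sport.hockey - " + topFeatures.get("rec.sport.hockey"));
System.out.println("sci.space - " + topFeatures.get("sci.space"));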

Term counting

This simple bag of words approach discards a lot of information about the documents, as we're ignoring how many times each word or n-gram appears in a document (known in information retrieval circles as the Term Frequency or TF). Let's swap the BasicPipeline for a TokenPipeline which supports term counting via a constructor flag.

In [8]:
var unigramPipeline = new TokenPipeline(tokenizer, 1, true);
var unigramExtractor = new TextFeatureExtractorImpl<Label>(unigramPipeline);
var unigramPair = mkDatasets("unigram",unigramExtractor);
unigram training data size = 11314, number of features = 122024, number of classes = 20
unigram testing data size = 7532, number of features = 122024, number of classes = 20

We can see the number of documents and number of features are still the same; all that's different is the feature values within each document. Let's build another logistic regression.

In [9]:
var unigramStartTime = System.currentTimeMillis();
var unigramModel = lrTrainer.train(unigramPair.getA());
var unigramEndTime = System.currentTimeMillis();
System.out.println("Training the model on Unigram features took " + Util.formatDuration(unigramStartTime,unigramEndTime));
System.out.println();
var unigramEval = labelEvaluator.evaluate(unigramModel,unigramPair.getB());
System.out.println(unigramEval);
Training the model on Unigram features took (00:00:10:529)

Class                                n          tp          fn          fp      recall        prec          f1
soc.religion.christian             398         362          36          88       0.910       0.804       0.854
rec.autos                          396         353          43          58       0.891       0.859       0.875
talk.religion.misc                 251         148         103          97       0.590       0.604       0.597
comp.windows.x                     395         295         100          54       0.747       0.845       0.793
rec.sport.baseball                 397         356          41          49       0.897       0.879       0.888
comp.graphics                      389         280         109         120       0.720       0.700       0.710
talk.politics.mideast              376         310          66          29       0.824       0.914       0.867
comp.sys.ibm.pc.hardware           392         266         126         133       0.679       0.667       0.673
sci.med                            396         310          86          42       0.783       0.881       0.829
comp.os.ms-windows.misc            394         241         153          82       0.612       0.746       0.672
sci.crypt                          396         354          42          55       0.894       0.866       0.880
comp.sys.mac.hardware              385         312          73         103       0.810       0.752       0.780
misc.forsale                       390         343          47          69       0.879       0.833       0.855
rec.motorcycles                    398         362          36          27       0.910       0.931       0.920
talk.politics.misc                 310         171         139          90       0.552       0.655       0.599
sci.electronics                    393         289         104         110       0.735       0.724       0.730
rec.sport.hockey                   399         374          25          23       0.937       0.942       0.940
sci.space                          394         342          52          57       0.868       0.857       0.863
alt.atheism                        319         240          79          84       0.752       0.741       0.747
talk.politics.guns                 364         314          50         140       0.863       0.692       0.768
Total                            7,532       6,022       1,510       1,510
Accuracy                                                                         0.800
Micro Average                                                                    0.800       0.800       0.800
Macro Average                                                                    0.793       0.795       0.792
Balanced Error Rate                                                              0.207

We see that the logistic regression trained on unigrams gets about 80% accuracy, pretty much the same as the BoW baseline, and takes about the same amount of time to run. Both of these make sense, as the term count isn't necessarily that useful in this particular dataset, and we didn't change the number of features overall or inside each example by using term counting.

N-grams as features

Let's try a slightly more complicated feature extractor. The natural step up from unigrams is to include word pairs (or bigrams) and count the occurrences of those. This allows us to capture simple negations (e.g., "not bad" rather than "not" and "bad") along with multi-word names like "New York" rather than "new" and "york". In Tribuo this is as straightforward as telling the token pipeline we'd like bigrams.

In [10]:
var bigramPipeline = new TokenPipeline(tokenizer, 2, true);
var bigramExtractor = new TextFeatureExtractorImpl<Label>(bigramPipeline);
var bigramPair = mkDatasets("bigram",bigramExtractor);
bigram training data size = 11314, number of features = 1143035, number of classes = 20
bigram testing data size = 7532, number of features = 1143035, number of classes = 20

We can see the feature space has massively increased due to the presence of bigram features; we've now got 1.1 million features from the same 11,314 documents.

Now to train another logistic regression.

In [11]:
var bigramStartTime = System.currentTimeMillis();
var bigramModel = lrTrainer.train(bigramPair.getA());
var bigramEndTime = System.currentTimeMillis();
System.out.println("Training the model on Bigram features took " + Util.formatDuration(bigramStartTime,bigramEndTime));
System.out.println();
var bigramEval = labelEvaluator.evaluate(bigramModel,bigramPair.getB());
System.out.println(bigramEval);
Training the model on Bigram features took (00:00:41:981)

Class                                n          tp          fn          fp      recall        prec          f1
soc.religion.christian             398         331          67          57       0.832       0.853       0.842
rec.autos                          396         326          70          55       0.823       0.856       0.839
talk.religion.misc                 251         167          84         106       0.665       0.612       0.637
comp.windows.x                     395         297          98          57       0.752       0.839       0.793
rec.sport.baseball                 397         357          40          52       0.899       0.873       0.886
comp.graphics                      389         304          85         196       0.781       0.608       0.684
talk.politics.mideast              376         300          76          48       0.798       0.862       0.829
comp.sys.ibm.pc.hardware           392         244         148         104       0.622       0.701       0.659
sci.med                            396         298          98          66       0.753       0.819       0.784
comp.os.ms-windows.misc            394         260         134          99       0.660       0.724       0.691
sci.crypt                          396         327          69          37       0.826       0.898       0.861
comp.sys.mac.hardware              385         320          65         162       0.831       0.664       0.738
misc.forsale                       390         352          38         102       0.903       0.775       0.834
rec.motorcycles                    398         359          39          39       0.902       0.902       0.902
talk.politics.misc                 310         185         125          93       0.597       0.665       0.629
sci.electronics                    393         253         140          90       0.644       0.738       0.688
rec.sport.hockey                   399         370          29          30       0.927       0.925       0.926
sci.space                          394         336          58          40       0.853       0.894       0.873
alt.atheism                        319         225          94          65       0.705       0.776       0.739
talk.politics.guns                 364         309          55         114       0.849       0.730       0.785
Total                            7,532       5,920       1,612       1,612
Accuracy                                                                         0.786
Micro Average                                                                    0.786       0.786       0.786
Macro Average                                                                    0.781       0.786       0.781
Balanced Error Rate                                                              0.219

Our performance decreased a little to around 78.6% when using bigrams, and the runtime increased from around 10s to around 42s. This is because, despite there being more information in the features, there are also many, many more features, making it easier to confuse this simple linear model, and each example takes longer to process due to the greatly increased number of features. We could look at using a more complex model like boosted trees to exploit this additional information, which may increase the performance back above our baseline. We could further increase the number of n-gram features, but we'll start to see diminishing returns even with more powerful models as the dimensionality of the feature space increases without a commensurate increase in training data.

TFIDF vectors

One other factor is that the count of some words isn't usually that helpful, as most documents include "a", "the", "and" many times which just isn't a useful signal. A popular way to deal with this is to scale the term frequencies (i.e., the n-gram counts) by the Inverse Document Frequency (or IDF), producing TF-IDF vectors. In Tribuo the IDF is a transformation which is applied separately to the dataset after it's constructed, as it uses aggregate information from the whole dataset which isn't available until all the examples have been loaded in. Let's see how that affects performance.
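
For reference, a common formulation of this weighting (Tribuo's IDFTransformation may differ slightly in its exact scaling or smoothing) is:

tfidf(t, d) = tf(t, d) * log(N / df(t))

where tf(t, d) is the count of term t in document d, df(t) is the number of training documents containing t, and N is the total number of training documents. Terms which appear in almost every document have df(t) close to N, so their weight is pushed towards zero.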

In [12]:
// Create a transformation map that contains a single IDFTransformation to apply to every feature
var trMap = new TransformationMap(Collections.singletonList(new IDFTransformation()));
// Copy out the datasets.
var tfidfTrain = MutableDataset.createDeepCopy(bigramPair.getA());
var tfidfTest = MutableDataset.createDeepCopy(bigramPair.getB());
// Fit the IDF transformation and apply it to the data
// We add the implicit zero features (i.e. the words not present in each document)
// to get the correct estimate of the IDF.
var transformers = tfidfTrain.createTransformers(trMap,true);
tfidfTrain.transform(transformers);
tfidfTest.transform(transformers);
// Print the dataset statistics    
System.out.println(String.format("tf-idf training data size = %d, number of features = %d, number of classes = %d",tfidfTrain.size(),tfidfTrain.getFeatureMap().size(),tfidfTrain.getOutputInfo().size()));
System.out.println(String.format("tf-idf testing data size = %d, number of features = %d, number of classes = %d",tfidfTest.size(),tfidfTest.getFeatureMap().size(),tfidfTest.getOutputInfo().size()));
tf-idf training data size = 11314, number of features = 1143035, number of classes = 20
tf-idf testing data size = 7532, number of features = 316757, number of classes = 20

Creating TF-IDF vectors didn't change the number of features (we still have 1.1 million features in the training set), but it has made the feature values more useful. The uninformative "the" feature will end up with a small value: while it may have a high term frequency, it's also present in almost every document, so it has a high document frequency, and scaling the term frequency by the inverse document frequency produces a small value.

In [13]:
var tfidfStartTime = System.currentTimeMillis();
var tfidfModel = lrTrainer.train(tfidfTrain);
var tfidfEndTime = System.currentTimeMillis();
System.out.println("Training the model on TF-IDF features took " + Util.formatDuration(tfidfStartTime,tfidfEndTime));
System.out.println();
var tfidfEval = labelEvaluator.evaluate(tfidfModel,tfidfTest);
System.out.println(tfidfEval);
Training the model on TF-IDF features took (00:00:42:471)

Class                                n          tp          fn          fp      recall        prec          f1
soc.religion.christian             398         350          48         183       0.879       0.657       0.752
rec.autos                          396         332          64          68       0.838       0.830       0.834
talk.religion.misc                 251         155          96         111       0.618       0.583       0.600
comp.windows.x                     395         290         105          58       0.734       0.833       0.781
rec.sport.baseball                 397         345          52          26       0.869       0.930       0.898
comp.graphics                      389         264         125         111       0.679       0.704       0.691
talk.politics.mideast              376         306          70          32       0.814       0.905       0.857
comp.sys.ibm.pc.hardware           392         285         107         170       0.727       0.626       0.673
sci.med                            396         305          91          63       0.770       0.829       0.798
comp.os.ms-windows.misc            394         248         146          71       0.629       0.777       0.696
sci.crypt                          396         340          56          47       0.859       0.879       0.868
comp.sys.mac.hardware              385         283         102          69       0.735       0.804       0.768
misc.forsale                       390         340          50          79       0.872       0.811       0.841
rec.motorcycles                    398         359          39          36       0.902       0.909       0.905
talk.politics.misc                 310         191         119         130       0.616       0.595       0.605
sci.electronics                    393         292         101         112       0.743       0.723       0.733
rec.sport.hockey                   399         376          23          32       0.942       0.922       0.932
sci.space                          394         339          55          52       0.860       0.867       0.864
alt.atheism                        319         226          93          57       0.708       0.799       0.751
talk.politics.guns                 364         303          61          96       0.832       0.759       0.794
Total                            7,532       5,929       1,603       1,603
Accuracy                                                                         0.787
Micro Average                                                                    0.787       0.787       0.787
Macro Average                                                                    0.781       0.787       0.782
Balanced Error Rate                                                              0.219

Using TF-IDF features gives roughly the same accuracy as bigrams, so it may be that these features aren't something the linear model can easily exploit on this dataset, but in general the TF-IDF transformation is a useful one when working with text documents.

Feature hashing

A popular technique for dealing with large feature spaces is feature hashing. This is where the features are mapped back down to a smaller space using a hash function. It induces collisions between the features, so the model might treat "New York" and "San Francisco" as the same feature, but because the collisions are generated essentially at random by the hash function this acts as a strong regulariser, which can improve performance while making things run faster and use less memory.
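
As a rough illustration of the idea (the feature names below are hypothetical, and Tribuo's internal hashing scheme differs in detail from this sketch), mapping a feature name to one of 50,000 buckets looks something like this:

int hashDim = 50000;
// With far more distinct n-grams than buckets, some names inevitably share a bucket
String[] names = {"new/york", "san/francisco", "not/bad"};
for (String name : names) {
    int bucket = Math.floorMod(name.hashCode(), hashDim);
    System.out.println(name + " -> bucket " + bucket);
}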

To use feature hashing in Tribuo, simply pass a hash dimension to the TokenPipeline on construction. We'll map everything down to 50,000 features, which is around 5% of the original number, and see how that affects the model.

In [14]:
var hashPipeline = new TokenPipeline(tokenizer, 2, true, 50000);
var hashExtractor = new TextFeatureExtractorImpl<Label>(hashPipeline);
var hashPair = mkDatasets("hash-50k",hashExtractor);
hash-50k training data size = 11314, number of features = 50000, number of classes = 20
hash-50k testing data size = 7532, number of features = 50000, number of classes = 20

As expected we still have the same number of training & test examples, but now there are only 50,000 features. Let's build another logistic regression.

In [15]:
var hashStartTime = System.currentTimeMillis();
var hashModel = lrTrainer.train(hashPair.getA());
var hashEndTime = System.currentTimeMillis();
System.out.println("Training the model on hashed features took " + Util.formatDuration(hashStartTime,hashEndTime));
System.out.println();
var hashEval = labelEvaluator.evaluate(hashModel,hashPair.getB());
System.out.println(hashEval);
Training the model on hashed features took (00:00:24:289)

Class                                n          tp          fn          fp      recall        prec          f1
soc.religion.christian             398         306          92         125       0.769       0.710       0.738
rec.autos                          396         324          72          77       0.818       0.808       0.813
talk.religion.misc                 251         139         112         132       0.554       0.513       0.533
comp.windows.x                     395         273         122          78       0.691       0.778       0.732
rec.sport.baseball                 397         335          62          64       0.844       0.840       0.842
comp.graphics                      389         238         151         135       0.612       0.638       0.625
talk.politics.mideast              376         265         111          35       0.705       0.883       0.784
comp.sys.ibm.pc.hardware           392         276         116         178       0.704       0.608       0.652
sci.med                            396         251         145         125       0.634       0.668       0.650
comp.os.ms-windows.misc            394         254         140         109       0.645       0.700       0.671
sci.crypt                          396         305          91          36       0.770       0.894       0.828
comp.sys.mac.hardware              385         259         126          97       0.673       0.728       0.699
misc.forsale                       390         325          65          87       0.833       0.789       0.810
rec.motorcycles                    398         341          57          75       0.857       0.820       0.838
talk.politics.misc                 310         171         139         195       0.552       0.467       0.506
sci.electronics                    393         243         150         159       0.618       0.604       0.611
rec.sport.hockey                   399         353          46          59       0.885       0.857       0.871
sci.space                          394         305          89          49       0.774       0.862       0.816
alt.atheism                        319         215         104         100       0.674       0.683       0.678
talk.politics.guns                 364         292          72         147       0.802       0.665       0.727
Total                            7,532       5,470       2,062       2,062
Accuracy                                                                         0.726
Micro Average                                                                    0.726       0.726       0.726
Macro Average                                                                    0.721       0.726       0.721
Balanced Error Rate                                                              0.279

The performance dropped a little here, but the model has around 5% of the parameters of the bigram model, making it faster and much smaller at inference time, and it took a little over half the time to train. In many cases dropping a few points of accuracy for a model that is 20x smaller and substantially faster is a worthwhile tradeoff, but as with most machine learning tasks this depends on the problem you're solving and where you're deploying the model. Tuning the hashing dimension and the trainer parameters will likely produce a model with similar accuracy at greatly reduced computational cost.

Trimming out infrequent features

We can also directly trim out infrequently occurring features. If a feature doesn't occur very frequently then we're not likely to estimate its weights properly, as we haven't seen it very often. Then if it occurs frequently in the test dataset it can confuse the model (this is a form of overfitting to the training data). Let's take the TF-IDF dataset and remove all the features (unigrams and bigrams) that occur fewer than 5 times.

In [16]:
var minCardTrain = new MinimumCardinalityDataset<>(tfidfTrain,5);
// This call creates a copy of tfidfTest, removing all the 
// features not found in minCardTrain's feature and output maps
var minCardTest = ImmutableDataset.copyDataset(tfidfTest,minCardTrain.getFeatureIDMap(),minCardTrain.getOutputIDInfo());
// Print the dataset statistics    
System.out.println(String.format("Minimum cardinality training data size = %d, number of features = %d, number of classes = %d",minCardTrain.size(),minCardTrain.getFeatureMap().size(),minCardTrain.getOutputInfo().size()));
System.out.println(String.format("Minimum cardinality testing data size = %d, number of features = %d, number of classes = %d",minCardTest.size(),minCardTest.getFeatureMap().size(),minCardTest.getOutputInfo().size()));
Minimum cardinality training data size = 11314, number of features = 109743, number of classes = 20
Minimum cardinality testing data size = 7532, number of features = 109743, number of classes = 20

We can see that's removed about 90% of the features, so let's try our simple model on it again.

In [17]:
var minCardStartTime = System.currentTimeMillis();
var minCardModel = lrTrainer.train(minCardTrain);
var minCardEndTime = System.currentTimeMillis();
System.out.println("Training the model on trimmed TF-IDF features took " + Util.formatDuration(minCardStartTime,minCardEndTime));
System.out.println();
var minCardEval = labelEvaluator.evaluate(minCardModel,minCardTest);
System.out.println(minCardEval);
Training the model on trimmed TF-IDF features took (00:00:19:049)

Class                                n          tp          fn          fp      recall        prec          f1
soc.religion.christian             398         337          61          93       0.847       0.784       0.814
rec.autos                          396         312          84          60       0.788       0.839       0.813
talk.religion.misc                 251         172          79         143       0.685       0.546       0.608
comp.windows.x                     395         290         105          56       0.734       0.838       0.783
rec.sport.baseball                 397         344          53          37       0.866       0.903       0.884
comp.graphics                      389         284         105         112       0.730       0.717       0.724
talk.politics.mideast              376         301          75          19       0.801       0.941       0.865
comp.sys.ibm.pc.hardware           392         286         106         217       0.730       0.569       0.639
sci.med                            396         295         101          74       0.745       0.799       0.771
comp.os.ms-windows.misc            394         219         175          52       0.556       0.808       0.659
sci.crypt                          396         322          74          49       0.813       0.868       0.840
comp.sys.mac.hardware              385         287          98         125       0.745       0.697       0.720
misc.forsale                       390         320          70          62       0.821       0.838       0.829
rec.motorcycles                    398         353          45          45       0.887       0.887       0.887
talk.politics.misc                 310         191         119         131       0.616       0.593       0.604
sci.electronics                    393         298          95         148       0.758       0.668       0.710
rec.sport.hockey                   399         370          29          41       0.927       0.900       0.914
sci.space                          394         336          58          64       0.853       0.840       0.846
alt.atheism                        319         218         101          59       0.683       0.787       0.732
talk.politics.guns                 364         302          62         108       0.830       0.737       0.780
Total                            7,532       5,837       1,695       1,695
Accuracy                                                                         0.775
Micro Average                                                                    0.775       0.775       0.775
Macro Average                                                                    0.771       0.778       0.771
Balanced Error Rate                                                              0.229

As with the feature hashing above, this model trains more quickly because there is less data to process, but the speed improvement is more substantial as the number of features in each example is lower (the hashing produces a denser example than trimming out infrequent features does). Performance dropped slightly compared to the TF-IDF model, but again it has around 10% of the parameters, with a corresponding reduction in memory and runtime for both inference and training. Performance is improved over the hashing because we're not colliding features at random; we're simply removing ones which are infrequent. If a feature is infrequent we probably can't estimate its weight very well anyway, so removing it helps strip out some of the noise.

Choosing between feature hashing and trimming out infrequent features is problem dependent. Feature hashing can work in denser feature spaces than trimming infrequent features can, but both still require some amount of sparsity in the problem to have any useful effect. With text datasets, trimming the infrequent words/features is usually helpful.

Word embeddings

All the approaches described above have no notion of word similarity; they rely upon exactly the same words with the same spelling appearing in the training and test documents, when in practice word similarity is likely to be very useful information for the classifier because no two documents use exactly the same phrasing. For example, the unigrams "excellent" and "fantastic" are as dissimilar as any other pair of words to an n-gram model, when in fact they are quite similar in meaning. Adding notions of word similarity to ML models usually means embedding each word into some vector space, so that words with similar meanings are close together in that space and words with dissimilar or opposite meanings are far apart. There are many popular word embedding algorithms, like Word2Vec, GloVe or FastText, which build embeddings on a corpus of text that can then be used in downstream tasks. Tribuo doesn't have a class which can directly load those word vectors, as they all come in different file formats, but it's pretty straightforward to build a TextFeatureExtractor that tokenizes the input text, looks up each word or n-gram in the vector space, and then averages the vectors across the input (it took us about an afternoon to build one for our internal word2vec style word vector research file format). If there is interest from the community in supporting a specific word vector file format, we're happy to accept PRs that add the support.
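
As a sketch of what such an extractor does internally (this isn't a Tribuo class: the vector map, the whitespace tokenization and the feature names are all placeholder assumptions, and a full TextFeatureExtractor implementation would also need the configuration and provenance plumbing), averaging pretrained vectors looks roughly like this:

import java.util.Map;
import org.tribuo.impl.ArrayExample;

// Average the pretrained vectors for the words in the text into a single dense Example.
// Words without a vector are skipped.
public Example<Label> averageEmbedding(Map<String,double[]> vectors, int dim, Label label, String text) {
    double[] sum = new double[dim];
    int count = 0;
    for (String word : text.toLowerCase().split("\\s+")) {
        double[] vec = vectors.get(word);
        if (vec != null) {
            for (int i = 0; i < dim; i++) {
                sum[i] += vec[i];
            }
            count++;
        }
    }
    var example = new ArrayExample<>(label);
    for (int i = 0; i < dim; i++) {
        // One hypothetical feature per embedding dimension
        example.add(new Feature("embedding-" + i, count > 0 ? sum[i] / count : 0.0));
    }
    return example;
}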

While these more traditional forms of word vector are very powerful, as they are precomputed they treat each word the same no matter the context it appears in. For example "bank" could mean a river bank, or a financial institution, but a word2vec vector has to contain both meanings because it doesn't know the context the word is present in, i.e., the rest of the sentence. This led to the rise of contextual word embeddings, which produce a vector for each word based on the whole input sequence. The most popular of these embeddings are based on the Transformer architecture, usually a variant of Google's BERT model.

Using BERT embeddings

BERT is a multi-layer transformer network which reads in a sentence and produces both an embedding of the sentence and embeddings for each wordpiece. A "wordpiece" is the token that BERT operates on, which is either a whole word or a chunk of a word, as emitted by the wordpiece tokenizer. The word chunking algorithm is trained on a large corpus and allows common prefixes & suffixes (e.g., "un", "ing") to be split off words and share state. We can use BERT to produce a single vector which represents the sentence or document, and then use that vector as features in a downstream Tribuo classifier.

Tribuo works with BERT models that are stored in ONNX format, and can load tokenizers produced by HuggingFace Transformers. That package also helpfully provides a Python script to convert BERT models from HuggingFace format into ONNX format for deployment. We provide a TextFeatureExtractor implementation called BERTFeatureExtractor which produces sentence embeddings by passing the text through a BERT model. Tribuo uses Microsoft's ONNX Runtime to load the model, and has its own implementation of the Wordpiece tokenization algorithm, along with the necessary glue to produce tokens in the format that BERT expects. One downside of BERT models is that they have a maximum document length they can process, usually 512 wordpieces. This is configurable in Tribuo's extractor, but if you set the maximum length to be longer than the sequences the model was trained on then performance is likely to suffer (or the computation may fail, depending on how that specific BERT model is implemented).

To follow along with this part of the tutorial you'll need to produce a BERT model in ONNX format. To do that you'll need access to a Python 3 environment with HuggingFace Transformers and PyTorch or TensorFlow installed to export the model (the snippet below assumes PyTorch; change pt to tf if you're using TensorFlow). Running the following snippet will produce a bert-base-uncased.onnx file that we can use for the rest of the tutorial. You'll need to run it in an empty directory due to the way HuggingFace's conversion script works.

python -m transformers.convert_graph_to_onnx --framework pt --model bert-base-uncased bert-base-uncased.onnx

You'll also need to download the tokenizer.json that goes with the BERT variant you are using; for bert-base-uncased that file is available from the bert-base-uncased repository on HuggingFace. Assuming both of those files are now in the same directory as this tutorial, we can create the BERTFeatureExtractor. We're going to take the average token embedding across the whole input, as the [CLS] token which provides the sentence embedding tends to perform poorly unless it is fine-tuned on your task.

Warning: this feature extraction step took more than a minute per newsgroup on a 2019 16" 6-core MacBook Pro (using the default settings of ONNX Runtime, i.e., a single thread on the CPU provider), so around 55 minutes to extract the full train and test datasets. Your mileage may vary, and your laptop may get quite warm. We recommend not running it while your laptop is actually on your lap. At the moment Tribuo's TextFeatureExtractor interface doesn't batch up the inputs, which limits the performance of contextual feature extractors. We'll look at expanding that interface to support batching in a future release. The session options used can be controlled by the BERTFeatureExtractor.reconfigureOrtSession(SessionOptions options) method, which allows the use of whatever configuration is supported by your onnxruntime jar.

In [18]:
var bertPath = Paths.get("./bert-base-uncased.onnx");
var tokenizerPath = Paths.get("./tokenizer.json");
var bert = new BERTFeatureExtractor<>(labelFactory,
                                      bertPath,
                                      tokenizerPath,
                                      BERTFeatureExtractor.OutputPooling.MEAN,
                                      256,  // Maximum number of wordpiece tokens
                                      false // Use Nvidia GPUs for inference (if onnxruntime_gpu is on the classpath)
                                      );
                                      
var bertStartTime = System.currentTimeMillis();
var bertPair = mkDatasets("bert",bert);
var bertEndTime = System.currentTimeMillis();
System.out.println("Extracting features with BERT took " + Util.formatDuration(bertStartTime,bertEndTime));
bert training data size = 11314, number of features = 768, number of classes = 20
bert testing data size = 7532, number of features = 768, number of classes = 20
Extracting features with BERT took (01:06:52:756)

Note Tribuo's BERTFeatureExtractor can run the BERT embeddings on a GPU, but only if the onnxruntime_gpu jar is on the classpath. By default Tribuo pulls in the CPU only jar for maximum compatibility. As you can see from the time taken to extract the features, it's best to deploy BERT when you've got plenty of CPUs or fast GPUs.
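
If the CPU extraction time is a problem, one option is to give ONNX Runtime more intra-op threads before extracting the datasets. This is an untested sketch which assumes the onnxruntime SessionOptions API; the right thread count depends on your machine.

import ai.onnxruntime.OrtSession;

// Ask ONNX Runtime for four intra-op threads instead of the default single thread,
// then re-run the feature extraction (i.e., the mkDatasets call above)
var sessionOpts = new OrtSession.SessionOptions();
sessionOpts.setIntraOpNumThreads(4);
bert.reconfigureOrtSession(sessionOpts);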

Now we build a logistic regression on the dense feature space produced by BERT. These embeddings are dense 768-dimensional vectors; each document contains a value for each of those dimensions. In Tribuo 4.1 we added optimisations to several of the models and trainers to improve their performance on the dense feature spaces produced by techniques like BERT.

In [19]:
var bertStartTime = System.currentTimeMillis();
var bertModel = lrTrainer.train(bertPair.getA());
var bertEndTime = System.currentTimeMillis();
System.out.println("Training a LR on BERT features took " + Util.formatDuration(bertStartTime,bertEndTime));
var bertEval = labelEvaluator.evaluate(bertModel,bertPair.getB());
System.out.println(bertEval);
Training a LR on BERT features took (00:00:08:960)
Class                                n          tp          fn          fp      recall        prec          f1
soc.religion.christian             398         353          45         111       0.887       0.761       0.819
rec.autos                          396         332          64          99       0.838       0.770       0.803
talk.religion.misc                 251         102         149         131       0.406       0.438       0.421
comp.windows.x                     395         288         107         121       0.729       0.704       0.716
rec.sport.baseball                 397         365          32          32       0.919       0.919       0.919
comp.graphics                      389         257         132         183       0.661       0.584       0.620
talk.politics.mideast              376         289          87          26       0.769       0.917       0.836
comp.sys.ibm.pc.hardware           392         220         172         166       0.561       0.570       0.566
sci.med                            396         320          76          34       0.808       0.904       0.853
comp.os.ms-windows.misc            394         247         147         187       0.627       0.569       0.597
sci.crypt                          396         314          82          95       0.793       0.768       0.780
comp.sys.mac.hardware              385         134         251          32       0.348       0.807       0.486
misc.forsale                       390         342          48         103       0.877       0.769       0.819
rec.motorcycles                    398         308          90          75       0.774       0.804       0.789
talk.politics.misc                 310         186         124         226       0.600       0.451       0.515
sci.electronics                    393         252         141         197       0.641       0.561       0.599
rec.sport.hockey                   399         381          18          21       0.955       0.948       0.951
sci.space                          394         332          62          78       0.843       0.810       0.826
alt.atheism                        319         163         156         121       0.511       0.574       0.541
talk.politics.guns                 364         210         154          99       0.577       0.680       0.624
Total                            7,532       5,395       2,137       2,137
Accuracy                                                                         0.716
Micro Average                                                                    0.716       0.716       0.716
Macro Average                                                                    0.706       0.715       0.704
Balanced Error Rate                                                              0.294

We get around 71% accuracy using this standard BERT model, which might be due to its training data of Wikipedia and books not overlapping well with the comparatively old newsgroup language. Fine-tuning the BERT model on a large corpus of newsgroups could probably improve this, but the standard model is likely to work well for more well-formed text like news articles or more formal documents. Alternatively it may be that the logistic regression we're training isn't sufficiently flexible to use the information in the BERT features, so it may be beneficial to use a more complex classifier like gradient boosted trees or a Multi-Layer Perceptron through Tribuo's TensorFlow interface.

Using different BERT versions can change the accuracy, as there are variants fine-tuned for a wide variety of different tasks & domains, and there are smaller versions like DistilBERT and TinyBERT which are useful for deploying models in constrained environments. However, BERT-based feature extractors will always be slower than the simpler BoW approaches described above, because they have to perform a lot of floating point computation to compute the embedded feature values.

Deploying the feature extractors

Similarly to when working with columnar data, the feature extractor used is recorded in the model provenance. We can see that for the BERT model here.

In [20]:
var sourceProvenance = bertModel.getProvenance().getDatasetProvenance().getSourceProvenance();
System.out.println(ProvenanceUtil.formattedProvenanceString(sourceProvenance));
DirectoryFileSource(
	class-name = org.tribuo.data.text.DirectoryFileSource
	dataDir = /Users/apocock/Development/Tribuo/tutorials/20news/20news-bydate-train
	preprocessors = List[
		NewsPreprocessor(
					class-name = org.tribuo.data.text.impl.NewsPreprocessor
					host-short-name = DocumentPreprocessor
				)
		CasingPreprocessor(
					class-name = org.tribuo.data.text.impl.CasingPreprocessor
					op = LOWERCASE
					host-short-name = DocumentPreprocessor
				)
	]
	extractor = BERTFeatureExtractor(
			class-name = org.tribuo.interop.onnx.extractors.BERTFeatureExtractor
			useCUDA = false
			pooling = MEAN
			modelPath = /Users/apocock/Development/Tribuo/tutorials/bert-base-uncased.onnx
			tokenizerPath = /Users/apocock/Development/Tribuo/tutorials/tokenizer.json
			outputFactory = LabelFactory(
					class-name = org.tribuo.classification.LabelFactory
				)
			maxLength = 256
			host-short-name = FeatureExtractor
		)
	outputFactory = LabelFactory(
			class-name = org.tribuo.classification.LabelFactory
		)
	file-modified-time = 2003-03-18T07:24:55-05:00
	datasource-creation-time = 2021-05-24T12:46:58.801385-04:00
)

This means that the model has recorded how the features were extracted, but the extraction process itself isn't part of the serialized model (which we wouldn't really want anyway as BERT models are hundreds of megabytes). So to use one of these models at inference time the feature extraction pipeline needs to be rebuilt from the configuration, in the same way we rebuilt the RowProcessor in the columnar tutorial.

Each of the different models trained in this tutorial has recorded the source provenance and its associated TextFeatureExtractor configuration, meaning the models come with all the information needed to infer the classes of new documents.
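
As a quick sketch of that inference flow (this isn't part of the original tutorial: it reuses the unigramExtractor and unigramModel already in memory, mimics the lowercasing preprocessor with toLowerCase(), and the example document is made up):

// The label passed to extract is a placeholder; prediction ignores it
var newDoc = "The goalie was pulled in the final minute but they still lost the game.";
var newExample = unigramExtractor.extract(new Label("unknown"), newDoc.toLowerCase());
var prediction = unigramModel.predict(newExample);
System.out.println("Predicted newsgroup: " + prediction.getOutput().getLabel());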

Conclusion

We looked at a document classification task in Tribuo. As most of the work in NLP tends to be in featurising the data, we discussed several different ways of converting text into features for use in machine learning. We looked at bag of words models using n-grams, term frequencies, TF-IDF vectors and feature hashing, and also looked at trimming large feature spaces based on the number of times we'd seen a feature. We also discussed word vector approaches, and showed how to use the popular contextual word embedding model, BERT, to extract features for document classification. It's worth noting that all the models trained were simple logistic regressions with no parameter tuning. Using a more powerful classifier like XGBoost, or performing hyperparameter tuning on the logistic regression, will likely improve performance over the simple baselines presented here.

Tribuo's text processing framework is very flexible, and it's possible to insert your own code into each of the different classes by implementing TextFeatureExtractor, TextPipeline or even the Tokenizer yourself, while the provenance system ensures that you can always recover how your data was processed to ensure it matches at inference time.