Reproducibility Tutorial¶
Reproducibility of ML models and evaluations is a common problem across many ML systems. It's really two problems: the first is describing the computation that was executed, and the second is replaying that computation. In Tribuo we built our provenance system to make our models self-describing, by which we mean they capture a complete description of the computation that produced them, solving the first issue. In v4.2 we added an automated reproducibility system which consumes the provenance data and retrains the model. Alongside the reproducibility system we also added a mechanism for diffing provenance objects, allowing easy comparison between the reproduced and original models. This matters because the models are only guaranteed to be identical if the data is the same, and any differences in the data will show up in the data provenance object.
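To make the idea of a provenance diff concrete, here's a minimal, self-contained sketch. This is not Tribuo's actual diff implementation, just an illustration of the principle: fields with identical values are omitted, and only fields whose values differ (like timestamps) are reported, with both values shown.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ProvenanceDiffSketch {
    // Report only the fields whose values differ between the two maps,
    // showing both the original and the reproduced value.
    static Map<String, String> diff(Map<String, String> original, Map<String, String> reproduced) {
        Map<String, String> result = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : original.entrySet()) {
            String other = reproduced.get(e.getKey());
            if (!e.getValue().equals(other)) {
                result.put(e.getKey(), "original=" + e.getValue() + ", reproduced=" + other);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> original = new LinkedHashMap<>();
        original.put("trainer", "LogisticRegressionTrainer");
        original.put("seed", "12345");
        original.put("trained-at", "2021-12-01T10:00:00");
        Map<String, String> reproduced = new LinkedHashMap<>(original);
        reproduced.put("trained-at", "2021-12-18T09:30:00");
        // Only the timestamp differs, so only that field appears in the diff.
        System.out.println(diff(original, reproduced));
    }
}
```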
Setup¶
Before running this tutorial, please run the irises classification and ONNX export tutorial to build the two models that we're going to reproduce.
We're going to load in the classification jar, onnx jar, and the reproducibility jar. Note the reproducibility jar is written in Java 16, and so this tutorial requires Java 16 or later. Then we'll import the necessary classes.
%jars ./tribuo-classification-experiments-4.2.0-jar-with-dependencies.jar
%jars ./tribuo-onnx-4.2.0-jar-with-dependencies.jar
%jars ./tribuo-json-4.2.0-jar-with-dependencies.jar
%jars ./tribuo-reproducibility-4.2.0-jar-with-dependencies.jar
import org.tribuo.*;
import org.tribuo.classification.*;
import org.tribuo.classification.evaluation.*;
import org.tribuo.classification.sgd.fm.*;
import org.tribuo.classification.sgd.linear.*;
import org.tribuo.datasource.*;
import org.tribuo.interop.onnx.*;
import org.tribuo.reproducibility.*;
import com.oracle.labs.mlrg.olcut.provenance.*;
import com.oracle.labs.mlrg.olcut.util.*;
import ai.onnxruntime.*;
import java.io.*;
import java.nio.file.*;
import java.util.*;
Reproducing a Tribuo Model¶
The reproducibility system works on Tribuo Model or ModelProvenance objects. When using the ModelProvenance, the system loads in the original training data, processes and transforms it according to the columnar processing and transformations applied, then rebuilds the original trainer including its RNG state, before passing the data into the train method and returning the reproduced model. When using the Model object, it performs the same steps as for a ModelProvenance and then compares the feature and output domains to provide more information about any differences between the domains used by the two models. Over time we plan to expand the validation applied to the reproduced model to show if the features have different ranges or histograms.
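The key property the system relies on is that training is deterministic once the trainer configuration and RNG state are restored. This toy sketch (not Tribuo code, just an illustration of the principle) shows a seeded "trainer" producing identical weights when rerun with the same seed, and different weights with a different seed:

```java
import java.util.Arrays;
import java.util.Random;

public class SeededTrainingSketch {
    // A toy "trainer": initialise weights from a seeded RNG, then take a
    // few noisy gradient-style steps. Everything downstream of the seed is
    // deterministic, so retraining reproduces the weights exactly.
    static double[] train(long seed) {
        Random rng = new Random(seed);
        double[] weights = new double[3];
        for (int i = 0; i < weights.length; i++) {
            weights[i] = rng.nextGaussian();
        }
        for (int step = 0; step < 100; step++) {
            int idx = rng.nextInt(weights.length);
            weights[idx] -= 0.01 * rng.nextGaussian();
        }
        return weights;
    }

    public static void main(String[] args) {
        double[] original = train(12345L);
        double[] reproduced = train(12345L);   // same seed, same RNG state
        double[] different = train(54321L);    // different seed
        System.out.println("Same seed reproduces weights = " + Arrays.equals(original, reproduced));
        System.out.println("Different seed reproduces weights = " + Arrays.equals(original, different));
    }
}
```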
We're going to load in the Irises logistic regression model trained in the first tutorial.
File irisModelFile = new File("iris-lr-model.ser");
String filterPattern = Files.readAllLines(Paths.get("../docs/jep-290-filter.txt")).get(0);
ObjectInputFilter filter = ObjectInputFilter.Config.createFilter(filterPattern);
LinearSGDModel loadedModel;
try (ObjectInputStream ois = new ObjectInputStream(new BufferedInputStream(new FileInputStream(irisModelFile)))) {
ois.setObjectInputFilter(filter);
loadedModel = (LinearSGDModel) ois.readObject();
}
System.out.println(loadedModel.toString());
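The deserialization above uses a JEP 290 ObjectInputFilter to restrict which classes may be deserialized. Here's a self-contained sketch of how such filters behave, using a simple hand-written allow-list pattern rather than Tribuo's actual filter file: an allowed read succeeds, and a rejected read fails fast with an InvalidClassException.

```java
import java.io.*;
import java.util.ArrayList;
import java.util.List;

public class FilterSketch {
    public static void main(String[] args) throws Exception {
        // Serialize a list of strings to a byte array.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new ArrayList<>(List.of("a", "b")));
        }
        byte[] bytes = bos.toByteArray();

        // An allow-list filter: java.util and java.lang classes are accepted,
        // everything else is rejected by the trailing "!*".
        ObjectInputFilter allow = ObjectInputFilter.Config.createFilter("java.util.*;java.lang.*;!*");
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            ois.setObjectInputFilter(allow);
            System.out.println("Allowed read = " + ois.readObject());
        }

        // A filter that rejects everything: the read throws InvalidClassException.
        ObjectInputFilter deny = ObjectInputFilter.Config.createFilter("!*");
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            ois.setObjectInputFilter(deny);
            ois.readObject();
        } catch (InvalidClassException e) {
            System.out.println("Rejected read = " + e.getMessage());
        }
    }
}
```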
The reproducibility system lives in the ReproUtil class. This class is constructed with a Model or a ModelProvenance and a Class<T extends Output<T>> for the output class.
var repro = new ReproUtil<>(loadedModel);
Now we can separately rebuild the dataset and the trainer, though note that if you mutate the objects returned by these methods then you won't get exactly the same model back from the reproduction. We're still working on the API for the reproducibility system and expect to make it more robust over time.
var dataset = repro.recoverDataset();
System.out.println(ProvenanceUtil.formattedProvenanceString(dataset.getProvenance()));
Our irises dataset was loaded in using the CSVLoader and split with a 70/30 train/test split, and we can see that the reproduced training dataset has been split just as we expect.
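The reproducibility of the split itself rests on the same determinism: a seeded shuffle followed by a fixed cut point. This small sketch (not Tribuo's splitter implementation, just the idea) shows that rerunning with the same seed yields an identical 70/30 partition:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class SplitSketch {
    // Shuffle with a fixed seed, then cut at the train fraction. Rerunning
    // with the same seed yields the same train/test partition.
    static List<List<Integer>> split(List<Integer> rows, double trainFraction, long seed) {
        List<Integer> shuffled = new ArrayList<>(rows);
        Collections.shuffle(shuffled, new Random(seed));
        int cut = (int) Math.round(shuffled.size() * trainFraction);
        return List.of(shuffled.subList(0, cut), shuffled.subList(cut, shuffled.size()));
    }

    public static void main(String[] args) {
        List<Integer> rows = new ArrayList<>();
        for (int i = 0; i < 150; i++) { rows.add(i); }  // 150 rows, like irises

        List<List<Integer>> first = split(rows, 0.7, 1L);
        List<List<Integer>> second = split(rows, 0.7, 1L);
        System.out.println("Train size = " + first.get(0).size() + ", test size = " + first.get(1).size());
        System.out.println("Splits identical = " + first.equals(second));
    }
}
```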
var trainer = repro.recoverTrainer();
System.out.println(ProvenanceUtil.formattedProvenanceString(trainer.getProvenance()));
The irises model is a logistic regression using seed 12345, and it's the first model trained by that trainer (as train-invocation-count is zero).
var reproduction = repro.reproduceFromModel();
var reproducedModel = (LinearSGDModel) reproduction.model();
We can compare this provenance to the one in the original model using our diff tool; however, as Tribuo records construction timestamps, the two provenances will not be identical.
System.out.println(ReproUtil.diffProvenance(loadedModel.getProvenance(),reproducedModel.getProvenance()));
We can see that the timestamps are a little different, though the precise difference will depend on when you ran the irises tutorial. You may also see differences in the JVM or other machine provenance if you ran that tutorial on a different machine. If the irises dataset grows a new feature or additional rows in the same file, then the diff will show that the datasets have different numbers of features or samples, and that the file has a different hash.
For some models we can easily compare the model contents, e.g., for the logistic regression we can directly compare the model weights.
var originalWeights = loadedModel.getWeightsCopy();
var reproducedWeights = reproducedModel.getWeightsCopy();
System.out.println("Weights are equal = " + originalWeights.equals(reproducedWeights));
Reproducing an ONNX exported Tribuo Model¶
Tribuo models can be exported into the ONNX format. When Tribuo models are exported the model provenance is stored as a metadata field in the ONNX file. This doesn't affect anything which serves the ONNX model, but allows Tribuo to load the provenance back in if the model is loaded in as an ONNXExternalModel
which is Tribuo's class for loading in ONNX models.
To load a model in as an ONNXExternalModel we need to define the feature and label mappings, which should be written out separately when the ONNX model is exported. We're going to cheat slightly and get them from the MNIST training set itself.
var labelFactory = new LabelFactory();
var mnistTrainSource = new IDXDataSource<>(Paths.get("train-images-idx3-ubyte.gz"),Paths.get("train-labels-idx1-ubyte.gz"),labelFactory);
var mnistTestSource = new IDXDataSource<>(Paths.get("t10k-images-idx3-ubyte.gz"),Paths.get("t10k-labels-idx1-ubyte.gz"),labelFactory);
var mnistTrain = new MutableDataset<>(mnistTrainSource);
var mnistTest = new MutableDataset<>(mnistTestSource);
Map<String, Integer> mnistFeatureMap = new HashMap<>();
for (VariableInfo f : mnistTrain.getFeatureIDMap()){
VariableIDInfo id = (VariableIDInfo) f;
mnistFeatureMap.put(id.getName(),id.getID());
}
Map<Label, Integer> mnistOutputMap = new HashMap<>();
for (Pair<Integer,Label> l : mnistTrain.getOutputIDInfo()) {
mnistOutputMap.put(l.getB(), l.getA());
}
Now let's load in the ONNX file:
var ortEnv = OrtEnvironment.getEnvironment();
var sessionOpts = new OrtSession.SessionOptions();
var denseTransformer = new DenseTransformer();
var labelTransformer = new LabelTransformer();
var mnistModelPath = Paths.get(".","fm-mnist.onnx");
ONNXExternalModel<Label> onnx = ONNXExternalModel.createOnnxModel(labelFactory, mnistFeatureMap, mnistOutputMap,
denseTransformer, labelTransformer, sessionOpts, mnistModelPath, "input");
This model has two provenance objects: one from the creation of the ONNXExternalModel, and one from the original training run in Tribuo which is persisted inside the ONNX file.
System.out.println(ProvenanceUtil.formattedProvenanceString(onnx.getProvenance()));
The ONNXExternalModel provenance has a lot of placeholders in it, as you might expect given the information is not always present in ONNX files. We can load the Tribuo model provenance using getTribuoProvenance():
var tribuoProvenance = onnx.getTribuoProvenance().get();
System.out.println(ProvenanceUtil.formattedProvenanceString(tribuoProvenance));
From this provenance we can see that the model is a factorization machine running on MNIST (as expected). So now we can build a ReproUtil and rebuild the model.
var mnistRepro = new ReproUtil<>(tribuoProvenance,Label.class);
var reproducedMNISTModel = mnistRepro.reproduceFromProvenance();
We can diff the two provenances:
System.out.println(ReproUtil.diffProvenance(tribuoProvenance, reproducedMNISTModel.getProvenance()));
As before, it's not very interesting: we're using the same files, so only the creation timestamps differ. Checking the model weights is tricky with an ONNX model, so we can instead check that the predictions are the same (though Tribuo computes in doubles and ONNX Runtime uses floats, so the answers are slightly different). We'll borrow the checkPredictions function from the ONNX export tutorial.
public boolean checkPredictions(List<Prediction<Label>> nativePredictions, List<Prediction<Label>> onnxPredictions, double delta) {
    for (int i = 0; i < nativePredictions.size(); i++) {
        Prediction<Label> tribuo = nativePredictions.get(i);
        Prediction<Label> external = onnxPredictions.get(i);
        // Check the predicted label
        if (!tribuo.getOutput().getLabel().equals(external.getOutput().getLabel())) {
            System.out.println("At index " + i + " predictions are not equal - "
                    + tribuo.getOutput().getLabel() + " and "
                    + external.getOutput().getLabel());
            return false;
        }
        // Check the maximum score
        if (Math.abs(tribuo.getOutput().getScore() - external.getOutput().getScore()) > delta) {
            System.out.println("At index " + i + " predictions are not equal - "
                    + tribuo.getOutput() + " and "
                    + external.getOutput());
            return false;
        }
        // Check the score distribution
        for (Map.Entry<String, Label> l : tribuo.getOutputScores().entrySet()) {
            Label other = external.getOutputScores().get(l.getKey());
            if (other == null) {
                System.out.println("At index " + i + " failed to find label " + l.getKey() + " in ORT prediction.");
                return false;
            } else if (Math.abs(l.getValue().getScore() - other.getScore()) > delta) {
                System.out.println("At index " + i + " predictions are not equal - "
                        + tribuo.getOutputScores() + " and "
                        + external.getOutputScores());
                return false;
            }
        }
    }
    return true;
}
Now we can make predictions from both models and compare the outputs:
var onnxPredictions = onnx.predict(mnistTest);
var reproducedPredictions = reproducedMNISTModel.predict(mnistTest);
System.out.println("Predictions are equal = " + checkPredictions(reproducedPredictions,onnxPredictions,1e-5));
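The delta tolerance is needed because float and double arithmetic accumulate rounding error differently, so even identical sequences of operations drift apart between the two precisions. A small self-contained demonstration of the gap that opens up over many operations:

```java
public class PrecisionSketch {
    public static void main(String[] args) {
        // Sum the same series in double and in float precision.
        double doubleSum = 0.0;
        float floatSum = 0.0f;
        for (int i = 1; i <= 100_000; i++) {
            doubleSum += 1.0 / i;
            floatSum += 1.0f / i;
        }
        // The gap is tiny but nonzero, which is why prediction comparisons
        // against an ONNX Runtime model need a tolerance rather than equality.
        double gap = Math.abs(doubleSum - floatSum);
        System.out.println("double sum = " + doubleSum);
        System.out.println("float sum  = " + floatSum);
        System.out.println("gap is small but nonzero = " + (gap > 1e-9 && gap < 1e-1));
    }
}
```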
Working with provenance diffs¶
We can use the provenance diff methods to compute diffs for unrelated models too. We're going to train a logistic regression on MNIST and compare the model provenance against the ONNX factorization machine we just used.
var lrTrainer = new LogisticRegressionTrainer();
var lrModel = lrTrainer.train(mnistTrain);
System.out.println(ReproUtil.diffProvenance(tribuoProvenance, lrModel.getProvenance()));
This diff is longer than the others we've seen, as expected for two models with different trainers. The dataset section is mostly empty, as both models are trained on an unmodified MNIST training set. The FMClassificationTrainer and LogisticRegressionTrainer show more differences, but as both are SGD based models there are many common fields. They share fields like the loss function (both used LogMulticlass), the gradient optimiser (both used AdaGrad), the number of training epochs, and the minibatch size. They used different learning rates (which do appear in the diff under optimiser), and the factorization machine also has a few extra parameters not found in the logistic regression, factorizedDimSize and variance, which are reported as having only an original value, meaning they are found in the first provenance but not the second.
The current diff format is JSON and is designed to be easily human readable. We've left the design of a navigable diff object that can be inspected from code to future work, once we have a better understanding of how people want to use the generated diffs.
Conclusion¶
We showed how to load in Tribuo models and reproduce them using our automated reproducibility system. The system executes the same computations as the original training, which in most cases results in an identical model. We have noted some differences in gradient descent based models trained on ARM and x86 architectures, due to underlying differences in the JVM, but otherwise the reproductions are exact. Over time we plan to expand this reproducibility system into a full experimental framework, allowing models to be rebuilt using different datasets, data transformations or training hyperparameters while holding all other parameters constant.