public final class XGBoostClassificationTrainer extends XGBoostTrainer<Label>
A Trainer which wraps the XGBoost training procedure.
This only exposes a few of XGBoost's training parameters.
It uses pthreads outside of the JVM to parallelise the computation.
See:
Chen T, Guestrin C. "XGBoost: A Scalable Tree Boosting System" Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016.
And for the original algorithm:
Friedman JH. "Greedy Function Approximation: a Gradient Boosting Machine" Annals of statistics, 2001.
N.B.: XGBoost4J wraps the native C implementation of xgboost, which links to various C libraries, including libgomp and glibc (on Linux). If you're running on Alpine, which does not natively use glibc, you'll need to install glibc into the container. The macOS binary on Maven Central is compiled without OpenMP support, so XGBoost is single threaded on macOS. You can recompile the macOS binary with OpenMP support after installing libomp from Homebrew if necessary.
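For orientation, a minimal usage sketch; trainData is an assumed, previously constructed Dataset&lt;Label&gt;, not part of this API:

```java
import org.tribuo.Dataset;
import org.tribuo.Model;
import org.tribuo.classification.Label;
import org.tribuo.classification.xgboost.XGBoostClassificationTrainer;

// Boost 50 trees, leaving all other hyperparameters at their defaults.
XGBoostClassificationTrainer trainer = new XGBoostClassificationTrainer(50);
// train(Dataset) delegates to train(Dataset, Map) with an empty provenance map.
Model<Label> model = trainer.train(trainData);
```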
Nested classes inherited from class org.tribuo.common.xgboost.XGBoostTrainer: XGBoostTrainer.BoosterType, XGBoostTrainer.DMatrixTuple&lt;T extends Output&lt;T&gt;&gt;, XGBoostTrainer.LoggingVerbosity, XGBoostTrainer.TreeMethod, XGBoostTrainer.XGBoostTrainerProvenance
Fields inherited from class org.tribuo.common.xgboost.XGBoostTrainer: numTrees, parameters, trainInvocationCounter
Fields inherited from interface org.tribuo.Trainer: DEFAULT_SEED
Modifier | Constructor and Description |
---|---|
protected | XGBoostClassificationTrainer() For olcut. |
 | XGBoostClassificationTrainer(int numTrees) |
 | XGBoostClassificationTrainer(int numTrees, double eta, double gamma, int maxDepth, double minChildWeight, double subsample, double featureSubsample, double lambda, double alpha, int nThread, boolean silent, long seed) Create an XGBoost trainer. |
 | XGBoostClassificationTrainer(int numTrees, int numThreads, boolean silent) |
 | XGBoostClassificationTrainer(int numTrees, Map&lt;String,Object&gt; parameters) This gives direct access to the XGBoost parameter map. |
 | XGBoostClassificationTrainer(XGBoostTrainer.BoosterType boosterType, XGBoostTrainer.TreeMethod treeMethod, int numTrees, double eta, double gamma, int maxDepth, double minChildWeight, double subsample, double featureSubsample, double lambda, double alpha, int nThread, XGBoostTrainer.LoggingVerbosity verbosity, long seed) Create an XGBoost trainer. |
Modifier and Type | Method and Description |
---|---|
TrainerProvenance | getProvenance() |
void | postConfig() Used by the OLCUT configuration system, and should not be called by external code. |
XGBoostModel&lt;Label&gt; | train(Dataset&lt;Label&gt; examples, Map&lt;String,com.oracle.labs.mlrg.olcut.provenance.Provenance&gt; runProvenance) Trains a predictive model using the examples in the given data set. |
Methods inherited from class org.tribuo.common.xgboost.XGBoostTrainer: convertDataset, convertDataset, convertExample, convertExample, convertExamples, convertExamples, convertSingleExample, convertSparseVector, convertSparseVectors, createModel, getInvocationCount, toString
public XGBoostClassificationTrainer(int numTrees)
public XGBoostClassificationTrainer(int numTrees, int numThreads, boolean silent)
public XGBoostClassificationTrainer(int numTrees, double eta, double gamma, int maxDepth, double minChildWeight, double subsample, double featureSubsample, double lambda, double alpha, int nThread, boolean silent, long seed)
Parameters:
numTrees - Number of trees to boost.
eta - Step size shrinkage parameter (default 0.3, range [0,1]).
gamma - Minimum loss reduction to make a split (default 0, range [0,inf]).
maxDepth - Maximum tree depth (default 6, range [1,inf]).
minChildWeight - Minimum sum of instance weights needed in a leaf (default 1, range [0,inf]).
subsample - Subsample size for each tree (default 1, range (0,1]).
featureSubsample - Subsample features for each tree (default 1, range (0,1]).
lambda - L2 regularization term on weights (default 1).
alpha - L1 regularization term on weights (default 0).
nThread - Number of threads to use (default 4).
silent - Silence the training output text.
seed - RNG seed.
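As a sketch, this constructor called with the documented default values for each hyperparameter; 42 is an arbitrary illustrative seed:

```java
import org.tribuo.classification.xgboost.XGBoostClassificationTrainer;

XGBoostClassificationTrainer trainer = new XGBoostClassificationTrainer(
        100,   // numTrees: number of trees to boost
        0.3,   // eta: step size shrinkage
        0.0,   // gamma: minimum loss reduction to split
        6,     // maxDepth: maximum tree depth
        1.0,   // minChildWeight: minimum instance weight sum in a leaf
        1.0,   // subsample: row subsampling per tree
        1.0,   // featureSubsample: column subsampling per tree
        1.0,   // lambda: L2 regularization
        0.0,   // alpha: L1 regularization
        4,     // nThread: worker threads
        true,  // silent: suppress training output
        42L);  // seed: RNG seed
```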
public XGBoostClassificationTrainer(XGBoostTrainer.BoosterType boosterType, XGBoostTrainer.TreeMethod treeMethod, int numTrees, double eta, double gamma, int maxDepth, double minChildWeight, double subsample, double featureSubsample, double lambda, double alpha, int nThread, XGBoostTrainer.LoggingVerbosity verbosity, long seed)
Parameters:
boosterType - The base learning algorithm.
treeMethod - The tree building algorithm if using a tree booster.
numTrees - Number of trees to boost.
eta - Step size shrinkage parameter (default 0.3, range [0,1]).
gamma - Minimum loss reduction to make a split (default 0, range [0,inf]).
maxDepth - Maximum tree depth (default 6, range [1,inf]).
minChildWeight - Minimum sum of instance weights needed in a leaf (default 1, range [0,inf]).
subsample - Subsample size for each tree (default 1, range (0,1]).
featureSubsample - Subsample features for each tree (default 1, range (0,1]).
lambda - L2 regularization term on weights (default 1).
alpha - L1 regularization term on weights (default 0).
nThread - Number of threads to use (default 4).
verbosity - Set the logging verbosity of the native library.
seed - RNG seed.
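A sketch of the enum-based constructor; the constants GBTREE, HIST, and SILENT are assumed members of the nested enums listed above, so check the enum definitions before relying on them:

```java
import org.tribuo.classification.xgboost.XGBoostClassificationTrainer;
import org.tribuo.common.xgboost.XGBoostTrainer;

XGBoostClassificationTrainer trainer = new XGBoostClassificationTrainer(
        XGBoostTrainer.BoosterType.GBTREE,       // base learner (assumed enum constant)
        XGBoostTrainer.TreeMethod.HIST,          // histogram tree construction (assumed enum constant)
        100, 0.3, 0.0, 6, 1.0, 1.0, 1.0, 1.0, 0.0, 4,
        XGBoostTrainer.LoggingVerbosity.SILENT,  // no native logging (assumed enum constant)
        42L);
```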
public XGBoostClassificationTrainer(int numTrees, Map&lt;String,Object&gt; parameters)
This gives direct access to the XGBoost parameter map. It lets you pick things that we haven't exposed, like dropout trees, binary classification, etc. This sidesteps the validation that Tribuo provides for the hyperparameters, and so can produce unexpected results.
Parameters:
numTrees - Number of trees to boost.
parameters - A map from string to object, where object can be Number or String.
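A sketch of the map-based constructor; the keys shown (booster, max_depth, eta) are standard native XGBoost parameter names rather than part of this class, and per the note above Tribuo does not validate them:

```java
import java.util.HashMap;
import java.util.Map;
import org.tribuo.classification.xgboost.XGBoostClassificationTrainer;

Map<String,Object> params = new HashMap<>();
params.put("booster", "dart");  // dropout trees, not reachable via the typed constructors
params.put("max_depth", 8);     // Number values are allowed
params.put("eta", 0.1);         // as are Strings
XGBoostClassificationTrainer trainer = new XGBoostClassificationTrainer(100, params);
```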
protected XGBoostClassificationTrainer()
For olcut.
public void postConfig()
Used by the OLCUT configuration system, and should not be called by external code.
Specified by: postConfig in interface com.oracle.labs.mlrg.olcut.config.Configurable
Overrides: postConfig in class XGBoostTrainer&lt;Label&gt;
public XGBoostModel&lt;Label&gt; train(Dataset&lt;Label&gt; examples, Map&lt;String,com.oracle.labs.mlrg.olcut.provenance.Provenance&gt; runProvenance)
Description copied from interface: Trainer
Trains a predictive model using the examples in the given data set.
Parameters:
examples - the data set containing the examples.
runProvenance - Training run specific provenance (e.g., fold number).
public TrainerProvenance getProvenance()
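A sketch tying the two methods together: training with run-specific provenance and then reading the trainer's provenance. trainData is an assumed Dataset&lt;Label&gt;, and IntProvenance is assumed to be OLCUT's integer provenance primitive with a (key, value) constructor:

```java
import java.util.Collections;
import java.util.Map;
import com.oracle.labs.mlrg.olcut.provenance.Provenance;
import com.oracle.labs.mlrg.olcut.provenance.primitives.IntProvenance;
import org.tribuo.classification.Label;
import org.tribuo.classification.xgboost.XGBoostClassificationTrainer;
import org.tribuo.common.xgboost.XGBoostModel;
import org.tribuo.provenance.TrainerProvenance;

XGBoostClassificationTrainer trainer = new XGBoostClassificationTrainer(50);
// Record which cross-validation fold produced this model.
Map<String,Provenance> runProvenance =
        Collections.singletonMap("fold", new IntProvenance("fold", 2));
XGBoostModel<Label> model = trainer.train(trainData, runProvenance);
// The trainer's provenance captures its hyperparameters and invocation count.
TrainerProvenance provenance = trainer.getProvenance();
```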
Copyright © 2015–2021 Oracle and/or its affiliates. All rights reserved.