A decision tree node used at training time. Contains the list of example indices currently present in this node, the node's impurity, and other statistics.
Method Summary
- List<AbstractTrainingNode<Regressor>> buildTree(int[] featureIDs, SplittableRandom rng, boolean useRandomSplitPoints) - Builds a tree according to CART (as it does not do multi-way splits on categorical values like C4.5).
- double getImpurity() - The impurity score of this node.
- float getWeightSum() - The sum of the weights associated with this node's examples.
Methods inherited from class org.tribuo.common.tree.AbstractTrainingNode
copy, createSplitNode, getDepth, getNextNode, getNumExamples, isLeaf, shouldMakeLeaf
RegressorTrainingNode(RegressorImpurity impurity, Dataset<Regressor> examples, boolean normalize, AbstractTrainingNode.LeafDeterminer leafDeterminer)
Constructor which creates the inverted file.
- Parameters:
impurity - The impurity function to use.
examples - The training data.
normalize - Whether to normalize the leaves so each leaf's distribution sums to 1.0.
leafDeterminer - Contains parameters needed to determine whether a node is a leaf.
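The "inverted file" the constructor builds can be pictured as a per-feature view of the data: for each feature, the (example index, value) pairs are stored sorted by value so split-point search can scan them in order. The sketch below illustrates that idea only; the class and record names are hypothetical and this is not the Tribuo implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (not Tribuo's code): an "inverted file" stores, per
// feature, the (example index, value) pairs sorted by feature value, so a
// split search can sweep candidate thresholds in a single ordered pass.
public class InvertedFileSketch {
    record FeatureValue(int exampleIdx, double value) {}

    public static List<List<FeatureValue>> build(double[][] examples, int numFeatures) {
        List<List<FeatureValue>> inverted = new ArrayList<>();
        for (int f = 0; f < numFeatures; f++) {
            List<FeatureValue> column = new ArrayList<>();
            for (int i = 0; i < examples.length; i++) {
                column.add(new FeatureValue(i, examples[i][f]));
            }
            // Sort this feature's column by value, keeping the example indices.
            column.sort((a, b) -> Double.compare(a.value(), b.value()));
            inverted.add(column);
        }
        return inverted;
    }

    public static void main(String[] args) {
        double[][] data = {{2.0, 5.0}, {1.0, 7.0}, {3.0, 6.0}};
        List<List<FeatureValue>> inv = build(data, 2);
        // Example 1 has the smallest value (1.0) for feature 0.
        System.out.println(inv.get(0).get(0).exampleIdx()); // prints 1
    }
}
```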
getImpurity
public double getImpurity()
Description copied from interface: Node
The impurity score of this node.
- Returns: The node impurity.
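For regression trees, the impurity is typically a variance-style measure over the node's weighted targets (Tribuo provides implementations such as mean-squared-error impurity). The sketch below shows one such weighted-variance computation; the class and method names are hypothetical and this is not Tribuo's implementation.

```java
// Illustrative sketch (not Tribuo's code): a weighted-variance impurity over
// a node's regression targets, in the spirit of a mean-squared-error impurity.
public class ImpuritySketch {
    public static double weightedVariance(double[] targets, float[] weights) {
        double weightSum = 0.0;
        double mean = 0.0;
        for (int i = 0; i < targets.length; i++) {
            weightSum += weights[i];
            mean += weights[i] * targets[i];
        }
        mean /= weightSum;
        double impurity = 0.0;
        for (int i = 0; i < targets.length; i++) {
            double diff = targets[i] - mean;
            impurity += weights[i] * diff * diff;
        }
        return impurity / weightSum;
    }

    public static void main(String[] args) {
        // A pure node (all targets equal) has zero impurity.
        System.out.println(weightedVariance(new double[]{1, 1, 1}, new float[]{1, 1, 1})); // prints 0.0
    }
}
```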
getWeightSum
public float getWeightSum()
Description copied from class: AbstractTrainingNode
The sum of the weights associated with this node's examples.
buildTree
public List<AbstractTrainingNode<Regressor>> buildTree(int[] featureIDs, SplittableRandom rng, boolean useRandomSplitPoints)
Builds a tree according to CART (as it does not do multi-way splits on categorical values like C4.5).
- Specified by: buildTree in class AbstractTrainingNode
- Parameters:
featureIDs - Indices of the features available in this split.
rng - Splittable random number generator.
useRandomSplitPoints - Whether to choose split points for features at random.
- Returns: A possibly empty list of TrainingNodes.
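The core of a CART build step is a binary split search: for a candidate feature, sort the examples by value and pick the threshold minimizing the summed impurity of the two children. The sketch below shows that search for a single feature using unweighted variance; the class and method names are hypothetical and this is not the Tribuo implementation.

```java
import java.util.Arrays;

// Illustrative sketch (not Tribuo's code): a CART-style binary split search
// on one feature. Examples are sorted by value, and each midpoint between
// adjacent values is scored by the size-weighted sum of child variances.
public class SplitSketch {
    public static double bestThreshold(double[] values, double[] targets) {
        Integer[] order = new Integer[values.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        Arrays.sort(order, (a, b) -> Double.compare(values[a], values[b]));

        double bestScore = Double.POSITIVE_INFINITY;
        double bestThresh = Double.NaN;
        for (int split = 1; split < order.length; split++) {
            double thresh = (values[order[split - 1]] + values[order[split]]) / 2.0;
            double score = variance(targets, order, 0, split) * split
                         + variance(targets, order, split, order.length) * (order.length - split);
            if (score < bestScore) {
                bestScore = score;
                bestThresh = thresh;
            }
        }
        return bestThresh;
    }

    private static double variance(double[] t, Integer[] order, int lo, int hi) {
        double mean = 0.0;
        for (int i = lo; i < hi; i++) mean += t[order[i]];
        mean /= (hi - lo);
        double var = 0.0;
        for (int i = lo; i < hi; i++) {
            double d = t[order[i]] - mean;
            var += d * d;
        }
        return var / (hi - lo);
    }

    public static void main(String[] args) {
        // Targets separate cleanly between values 2 and 10, so the midpoint wins.
        System.out.println(bestThreshold(new double[]{1, 2, 10, 11},
                                         new double[]{0, 0, 5, 5})); // prints 6.0
    }
}
```

A real implementation would reuse the inverted file's pre-sorted columns rather than re-sorting per node, and `useRandomSplitPoints` trades this exhaustive scan for a randomly chosen threshold.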