I have come up with a new splitting criterion that is much faster and more stable than the usual one. I have tested it in my own code, where it runs much faster and generalises better. However, in order to document/benchmark it, I would like to implement it in one of the standard xgboost packages (in Julia or R).

In the past, I had managed to define my own objective function and run it in both Julia and R. Would it be difficult to (analogously) supply my own splitting criterion?
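For context, a custom objective of the kind described above amounts to supplying the gradient and Hessian of the loss with respect to the raw margin. A minimal sketch in Python (the function name and toy data are mine; in xgboost's Python API the analogous function takes `(preds, dtrain)` and is passed to `xgb.train` via the `obj` argument — here a plain label array is used so the sketch runs without xgboost installed):

```python
import numpy as np

def logistic_objective(preds, labels):
    """Gradient and Hessian of binary log loss w.r.t. the raw margin.

    Illustrative stand-in for an xgboost custom objective; xgboost would
    hand the function a DMatrix rather than a raw label array.
    """
    p = 1.0 / (1.0 + np.exp(-preds))   # sigmoid of the raw margin
    grad = p - labels                  # first derivative of log loss
    hess = p * (1.0 - p)               # second derivative
    return grad, hess

# Tiny demonstration on hand-made margins and labels.
preds = np.array([0.0, 2.0, -1.0])
labels = np.array([1.0, 1.0, 0.0])
grad, hess = logistic_objective(preds, labels)
```

Splitting is a different matter: the objective is a per-example callback, whereas the split evaluator sits inside the tree-building loop, which is why it needs the dedicated hook discussed below.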

Thanks in advance for any help.

# New (faster, more stable) splitting criterion

**hcho3**#2

There is a proposal to allow custom split evaluator: https://github.com/dmlc/xgboost/issues/4230. It is not yet implemented.

In the meantime, would you be able to share your new splitting criterion?

I would like to test/publish it first (or otherwise establish priority), but I hope to do that soon, because it has really worked for me. Let me give you a 'hint', though: for a few examples I plotted the 'purity improvement' as a function of the candidate split location and got results like the attached graph, which got me thinking that choosing the maximum of this function as the split point is inherently unstable.
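For readers following along, the curve the poster describes can be sketched as follows. This is a plain Gini-impurity version with a single numeric feature (my own toy setup, not the poster's new criterion); it implements the standard rule under discussion — evaluate the improvement at every candidate location and split at the argmax:

```python
import numpy as np

def gini(y):
    """Gini impurity of a binary 0/1 label array."""
    if len(y) == 0:
        return 0.0
    p = y.mean()
    return 2.0 * p * (1.0 - p)

def purity_improvement_curve(x, y):
    """Impurity decrease at every candidate split of a numeric feature."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    n = len(y)
    parent = gini(y)
    improvements = []
    for i in range(1, n):  # split between sorted positions i-1 and i
        left, right = y[:i], y[i:]
        child = (i / n) * gini(left) + ((n - i) / n) * gini(right)
        improvements.append(parent - child)
    return np.array(improvements)

rng = np.random.default_rng(0)
x = rng.uniform(size=200)
y = (x > 0.5).astype(float)     # perfectly separable at x = 0.5
curve = purity_improvement_curve(x, y)
best = int(np.argmax(curve))    # the standard (argmax) split choice
```

When the curve is flat or noisy near its maximum — as in the attached graph — small perturbations of the data can move `best` a long way, which is presumably the instability the poster is pointing at.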

**hcho3**#4

Is this generalizable to gradient boosting? XGBoost uses twice differentiable loss functions to determine fitness of split candidates.

Yes, it applies to gradient boosting as well.

I have to admit, though, that while I know the theory, I am not familiar with the specifics of the algorithms used by default in GB/XGBoost, if they deviate from maximising the purity improvement. Also, my enhancement currently applies only to the binary classification case.
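For concreteness, the split fitness hcho3 alludes to is XGBoost's second-order gain from the XGBoost paper: each side of a candidate split is scored by its summed gradients and Hessians, so the rule generalises the "maximum improvement in purity" idea to any twice-differentiable loss. A sketch with toy gradient/Hessian sums (the numbers are mine; this is the stock formula, not the poster's new criterion):

```python
def split_gain(g_left, h_left, g_right, h_right, lam=1.0, gamma=0.0):
    """Second-order split gain as in the XGBoost paper.

    g_* and h_* are the sums of per-example gradients and Hessians on
    each side of the candidate split; lam is the L2 regularisation
    weight, gamma the per-leaf complexity penalty.
    """
    def score(g, h):
        # Structure score of a leaf holding gradient sum g, Hessian sum h.
        return g * g / (h + lam)

    return 0.5 * (score(g_left, h_left)
                  + score(g_right, h_right)
                  - score(g_left + g_right, h_left + h_right)) - gamma

# Toy sums: left child pulls predictions up, right child pulls them down.
gain = split_gain(g_left=-4.0, h_left=2.0, g_right=5.0, h_right=3.0)
```

A candidate split is accepted only if this gain is positive, so any alternative criterion would slot in exactly where `split_gain` is evaluated — which is what the custom-split-evaluator proposal in issue #4230 would expose.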