I want to use a classifier already fitted with XGBoost to compute AUC on a new test set, using XGBoost's own implementation of AUC rather than another library's. Is this possible?
You can add the test set to the evaluation set.
Right, but I don’t want to re-train the model. Given the classifier, what method do I use, if I don’t want to re-train it (which I imagine .fit would do)?
If this is not possible, can someone explain the computation to me? This is the relevant source: https://github.com/dmlc/xgboost/blob/master/src/metric/rank_metric.cc#L143-L212, but I could not quite follow the implementation. I’m looking for a simplified explanation of the simplest case: data with two class labels, where all predicted scores are unique.
Or even simpler, can someone explain the logic in this one line: https://github.com/dmlc/xgboost/blob/master/src/metric/rank_metric.cc#L181
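For the simplest case (binary labels, unique scores), AUC equals the fraction of (positive, negative) pairs in which the positive example receives the higher score. The linked C++ code accumulates a `sum_pospair` of correctly ordered pairs while scanning examples in score order; the file has changed since, so treat the following as a sketch of that pairwise logic rather than a line-by-line reading of L181:

```python
def pairwise_auc(labels, scores):
    """AUC by definition: (# pairs (p, n) with score[p] > score[n]) / (#pos * #neg)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    correct = sum(1 for p in pos for n in neg if p > n)
    return correct / (len(pos) * len(neg))


def pairwise_auc_scan(labels, scores):
    """Equivalent single pass, closer in spirit to the C++ accumulation.

    Walk examples in ascending score order; each time a positive is
    reached, every negative already seen has a lower score, so each one
    contributes a correctly ordered pair. Assumes unique scores (the
    real implementation also handles ties and instance weights).
    """
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    npos = nneg = 0
    sum_pospair = 0.0
    for i in order:
        if labels[i] == 1:
            sum_pospair += nneg  # negatives below this positive
            npos += 1
        else:
            nneg += 1
    return sum_pospair / (npos * nneg)


# Example: positives scored 0.35 and 0.8, negatives 0.1 and 0.4;
# 3 of the 4 positive/negative pairs are ordered correctly -> 0.75
print(pairwise_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))       # 0.75
print(pairwise_auc_scan([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

Both functions give the same answer on unique scores; the scan version is O(n log n) instead of O(n²), which is presumably why the library takes that shape.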