We’re using `auc`/`aucpr` as the eval metric for a multiclass classification problem and need to understand how the metric is computed.

For comparison, scikit-learn's `roc_auc_score` (https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html#sklearn-metrics-roc-auc-score) has an `average` parameter that controls how the per-label scores are combined, and a `multi_class` parameter that switches between one-vs-rest (`"ovr"`) and one-vs-one (`"ovo"`) schemes.
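To make the comparison concrete, here is a small sketch of the scikit-learn options in question, using a toy 3-class problem with made-up probabilities (the data is illustrative only):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy 3-class problem: true labels and predicted class probabilities.
y_true = np.array([0, 1, 2, 2, 1, 0])
y_score = np.array([
    [0.7, 0.2, 0.1],
    [0.2, 0.6, 0.2],
    [0.1, 0.2, 0.7],
    [0.2, 0.3, 0.5],
    [0.3, 0.5, 0.2],
    [0.6, 0.3, 0.1],
])

# One-vs-rest: one binary AUC per class, macro-averaged over classes.
auc_ovr = roc_auc_score(y_true, y_score, multi_class="ovr", average="macro")

# One-vs-one: one binary AUC per class pair, averaged over all pairs.
auc_ovo = roc_auc_score(y_true, y_score, multi_class="ovo", average="macro")

print(auc_ovr, auc_ovo)
```

`average` can also be set to `"weighted"` to weight each class (or class pair) by its prevalence instead of averaging uniformly.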
Are there similar options in XGBoost? If not, how is the metric currently calculated?