Since my data is imbalanced, I want to use "auc" to measure model performance. With XGBClassifier, I have the following code:
eval_set = [(X_train, y_train), (X_test, y_test)]
model.fit(X_train, y_train, eval_metric=["auc"], eval_set=eval_set)
With one set of data, I got an AUC score of 0.93 for (X_test, y_test). Then I wanted to compare it to scikit-learn's roc_auc_score() function, so I did the following:
auc = roc_auc_score(y_test, predictions)
For the same dataset, I got an AUC score of 0.86. I tried a few more datasets and found that the scores from roc_auc_score() are always lower than those from XGBoost's eval_metric.
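Here is a minimal sketch of my setup, using a synthetic imbalanced dataset from make_classification in place of my real data, and computing predictions with model.predict(X_test) (hard class labels). Note that recent xgboost versions take eval_metric on the constructor rather than in fit():

from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic imbalanced data standing in for my real dataset
X, y = make_classification(n_samples=5000, weights=[0.9], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Recent xgboost takes eval_metric on the constructor; older versions
# accepted it as a fit() argument, as in the code above
model = XGBClassifier(eval_metric="auc")
eval_set = [(X_train, y_train), (X_test, y_test)]
model.fit(X_train, y_train, eval_set=eval_set, verbose=False)

# AUC reported by XGBoost for (X_test, y_test), i.e. the second eval set
print(model.evals_result()["validation_1"]["auc"][-1])

# AUC from scikit-learn, computed on hard 0/1 predictions
predictions = model.predict(X_test)
print(roc_auc_score(y_test, predictions))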
Shouldn’t they be the same? Thanks.