What is the feature importance method for the Sklearn XGBoost?


The core XGBoost library offers three methods for measuring feature importance - weight, gain, and cover - but the Sklearn API exposes only one attribute, feature_importances_. The code below retrieves feature importances via the Sklearn API. Which method does it use to determine the importances?

xgb.XGBClassifier(**xgb_params).fit(X_train, y_train).feature_importances_


The feature_importances_ attribute is based on weight. (Note: this is the historical default; recent XGBoost versions default to gain for tree boosters, and you can choose explicitly via the importance_type constructor parameter.)

XGBRegressor.get_booster().get_score(importance_type='weight') returns the number of times each feature occurs in a split: integers greater than 0 (features that never participate in a split are omitted). See the docs.

XGBRegressor.feature_importances_ returns the same counts divided by their total sum, so the values add up to one.