How to correct an XGBoost model when predictions contradict domain knowledge

Dear all,
I’m facing a problem that may be common in the ML field. I trained an XGBoost model on my data, and its overall performance was good. However, when I picked out some samples and analyzed their features, some predictions went against domain knowledge and common sense. For example, one sample with only a single feature present received the highest score, yet that feature was neither the most important nor the most frequently used one. Are there any hints on how to correct the XGBoost model in this situation? Thanks.
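
(For reference, if "most important" and "most frequent" here refer to XGBoost's built-in importance measures, they roughly correspond to the `gain` and `weight` importance types. A minimal sketch of checking both, with synthetic data standing in for the real training set:)

```python
import xgboost as xgb
from sklearn.datasets import make_classification

# Toy data for illustration only; substitute your own X and y.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = xgb.XGBClassifier(n_estimators=100)
model.fit(X, y)

booster = model.get_booster()
print(booster.get_score(importance_type="gain"))    # "most important": mean loss reduction per split
print(booster.get_score(importance_type="weight"))  # "most frequent": how often a feature is split on
```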


You may consider fitting a model with the problematic feature withheld.
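
Something along these lines, as a minimal sketch; the synthetic data and the column name `problem_feature` are placeholders for your own training set and the feature driving the counterintuitive scores:

```python
import pandas as pd
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic data standing in for the real training set.
X_arr, y = make_classification(n_samples=500, n_features=5, random_state=0)
X = pd.DataFrame(X_arr, columns=["f0", "f1", "f2", "f3", "problem_feature"])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Refit with the problematic feature withheld, then compare
# held-out performance against the full model to see what it costs.
X_train_red = X_train.drop(columns=["problem_feature"])
X_test_red = X_test.drop(columns=["problem_feature"])

model = xgb.XGBClassifier(n_estimators=200, max_depth=4)
model.fit(X_train_red, y_train)
print("accuracy without problem_feature:", model.score(X_test_red, y_test))
```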

Thank you, Philip, this is a good idea.