Why is monotonic XGBoost accuracy lower than regular XGBoost?

Hello,

For a binary classification task, I get an accuracy of 0.48 when I apply an increasing monotonicity constraint to all the features, but an accuracy of 1.0 when I do NOT apply a monotonicity constraint to any feature.

I need to build a monotonic XGBoost binary classifier, but I cannot accept a performance drop relative to regular XGBoost (without monotonicity constraints) on the same dataset. How may I achieve this? I am not sure whether I have set the parameters correctly, so I am sharing my code below.

X_train = np.array(df_tr)
X_validation = np.array(df_val)
X_test = np.array(df_ts)

dtrain = xgb.DMatrix(X_train, label = y_train)
dvalidation = xgb.DMatrix(X_validation, label = y_validation)
dtest = xgb.DMatrix(X_test, label = y_test)

#without monotonic constraints
feature_monotones = [0 for i in range(len(used_features))]
#increasing monotonic constraints to all features
#feature_monotones = [1 for i in range(len(used_features))]

params = {'max_depth': 2,
          'eta': 0.1,
          'silent': 1,
          'nthread': 2,
          'seed': 0,
          'objective': 'binary:logistic',
          'tree_method': 'hist',
          'eval_metric': ['auc', 'error'],
          'monotone_constraints': '(' + ','.join([str(m) for m in feature_monotones]) + ')'
         }

bst_cv = xgb.cv(params, dtrain, 500, nfold = 5, early_stopping_rounds=10)
evallist = [(dtrain, 'train'), (dtest, 'eval')]
evals_result = {}

model = xgb.train(params, dtrain, num_boost_round = bst_cv.shape[0], evals_result = evals_result, evals = evallist, verbose_eval = False)

Could you please advise me?
Thank you so much for your valuable support!

I don’t think this expectation is reasonable. A monotonic constraint restricts the choice of tree splits, so it can lead to lower accuracy.

You should look into adjusting other hyperparameters, e.g. increasing max_depth, reducing min_child_weight, reducing reg_lambda, and so forth. This way, you can compensate for some of the effect of the monotonic constraint.


Thank you so much for your great suggestions on adjusting the hyperparameters. I will play around with them.