I'm using XGBoost's native Python API in Python (not the sklearn wrapper). These are my parameters:
params = {
    "objective": "binary:logistic",
    "learning_rate": 0.12,
    "max_depth": 5,
    "subsample": 0.75,
    "nthread": 64,
    "eval_metric": "auc",
    "min_child_weight": 1,
}
When I pass params to xgb.train, htop shows all cores in use.
However, when I run xgb.cv with the same params, it is clearly not using all cores.
How can I use all cores when running CV?
Thanks.