I’m hoping to use XGBoost for feature selection for a complex non-linear model. The feature space is entirely one-hot encoded, and the objective function value is computed by an external program that is expensive to run (up to an hour per evaluation). I’ve had some limited success with scikit-optimize, using both forest_minimize and gbrt_minimize. Given the cost of evaluating the objective, I’d like to run evaluations in parallel. XGBoost itself seems well suited to parallel execution. Is anyone aware of an example of this sort of problem?
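To make the question concrete, here is a minimal sketch of the batch ask/evaluate/tell loop I have in mind. The objective, the feature count, and the "good" feature set are all toy stand-ins for the external program; the `ask` step is plain random sampling, where with scikit-optimize you would instead call `Optimizer.ask(n_points=batch_size)` on a GBRT or forest surrogate and feed results back with `Optimizer.tell`. Threads are enough for the parallel part because each real evaluation would spend its time in a separate external process.

```python
import random
from concurrent.futures import ThreadPoolExecutor

N_FEATURES = 12   # hypothetical width of the one-hot feature space
BATCH_SIZE = 4    # candidates evaluated in parallel per round
N_ROUNDS = 5

GOOD = {1, 3, 5, 7}  # toy ground truth: features that actually help


def expensive_objective(mask):
    """Stand-in for the external program (minutes to an hour per call)."""
    selected = {i for i, bit in enumerate(mask) if bit}
    # Reward selecting the useful features, lightly penalize extras.
    return -len(selected & GOOD) + 0.25 * len(selected - GOOD)


def ask(rng, batch_size):
    """Propose a batch of candidate masks.

    Random sampling here; with scikit-optimize this would be
    Optimizer.ask(n_points=batch_size) on a fitted surrogate.
    """
    return [[rng.randint(0, 1) for _ in range(N_FEATURES)]
            for _ in range(batch_size)]


def run():
    rng = random.Random(0)
    history = []
    # Threads suffice: the hour per evaluation is spent in another program,
    # so the GIL is not a bottleneck for this workload.
    with ThreadPoolExecutor(max_workers=BATCH_SIZE) as pool:
        for _ in range(N_ROUNDS):
            batch = ask(rng, BATCH_SIZE)
            scores = list(pool.map(expensive_objective, batch))
            history.extend(zip(batch, scores))  # Optimizer.tell(batch, scores)
    return min(history, key=lambda t: t[1])  # (best_mask, best_score)


if __name__ == "__main__":
    print(run())
```

This evaluates `BATCH_SIZE` candidates concurrently per round, which is the pattern I’m hoping an XGBoost-based surrogate can support.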