I want to train an XGBoost regressor where the loss function depends on the order of the training points, e.g. each training example has a sample weight that is encoded into a custom training objective:
```python
def get_xgb_training_objective(sample_weights):
    # sample_weights is a NumPy array with one weight per training row,
    # in the same order as the training data.
    def training_objective(y_true, y_pred):
        # Gradient and hessian of the weighted squared error:
        # loss = (1/n) * sum_i w_i * (y_pred_i - y_true_i)**2
        n = len(y_true)
        gradient = 2 * (y_pred - y_true) * sample_weights / n
        hessian = 2 * sample_weights / n
        return gradient, hessian
    return training_objective
```
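For reference, here is a minimal sketch of how I pass this objective through the scikit-learn wrapper, which accepts a callable `objective` with the `(y_true, y_pred) -> (grad, hess)` signature; the data, weights, and hyperparameters below are placeholders:

```python
import numpy as np
from xgboost import XGBRegressor

# Placeholder data: one weight per training row, aligned with X and y.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.normal(size=200)
sample_weights = rng.uniform(0.5, 2.0, size=200)

# get_xgb_training_objective is the factory defined above; the closure
# captures sample_weights, so the weights never pass through xgboost itself.
model = XGBRegressor(
    objective=get_xgb_training_objective(sample_weights),
    n_estimators=50,
)
model.fit(X, y)
```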
Do I need to be concerned about training points being shuffled during training and thereby attributed to the wrong weight?