I’ve been using a custom objective and eval metric (which happens to be a raw sum of errors, rather than an average per record) with xgb.cv, and after digging into the results it seems that the value reported for the eval metric in a CV is the simple average of the per-fold values.
I appreciate that with CV the aim is to have the same number of records in each fold, but when the fold sizes differ even slightly, this simple averaging could introduce a (possibly minor) bias.
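To make the concern concrete, here is a minimal sketch (plain Python, with made-up per-record errors) showing that a simple average of per-fold sums is not the same as the total across folds, and that it shifts when fold sizes are uneven:

```python
# Illustrative per-record errors for two folds of unequal size.
# The values are invented; only the arithmetic matters.
folds = [
    [1.0, 2.0, 3.0],  # fold with 3 records, error sum = 6.0
    [4.0, 5.0],       # fold with 2 records, error sum = 9.0
]

per_fold_sums = [sum(f) for f in folds]

# What a CV that averages per-fold metric values would report:
cv_reported = sum(per_fold_sums) / len(folds)   # (6.0 + 9.0) / 2 = 7.5

# What a sum-of-errors metric "should" report across all folds:
true_total = sum(per_fold_sums)                 # 6.0 + 9.0 = 15.0

print(cv_reported)  # 7.5
print(true_total)   # 15.0
```

Note that even rescaling the reported average by the number of folds only recovers the true total exactly; it does not fix the weighting issue when the per-record error *rates* differ between folds of unequal size.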
Is there any way to have the CV error metric reported as the sum of the errors across the folds, rather than the average?