I'm trying my luck with the XGBoostRegressor / XGBoostClassifier objects in Spark, which take a "weight_col" parameter into account.
If I got it right, this parameter (which is not explained in the official parameter docs) gives more weight to errors on rows with a larger value in this column than to errors on rows with a smaller value, i.e. the per-row error is scaled (weighted) by this column.
Did I get it right? If so, do you mind if I add a paragraph (in a new PR) to the documentation, so we'll have it documented in XGBoost4J as well?
If I didn't get it right, can someone explain to me how exactly this works, i.e. the exact functionality? (Here - https://github.com/dmlc/xgboost/issues/3258 - it is described how to implement it; now I'm asking about the behavior behind the scenes.)
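My current mental model of "behind the scenes" is something like the sketch below. This is not XGBoost's actual code, just an illustration of how per-instance weights typically enter a squared-error boosting objective: each row's gradient and hessian are multiplied by its weight, so heavier rows pull the fitted leaf values harder. The function names are mine, and the leaf-value formula is the standard XGBoost-style -sum(g) / (sum(h) + lambda).

```python
def weighted_squared_error_grad_hess(preds, labels, weights):
    """Per-row gradient and hessian of 0.5 * w * (pred - label)**2."""
    grads = [w * (p - y) for p, y, w in zip(preds, labels, weights)]
    hess = list(weights)  # hessian of squared error is 1, scaled by w
    return grads, hess

def best_leaf_value(grads, hess, lam=1.0):
    """XGBoost-style optimal leaf weight: -sum(g) / (sum(h) + lambda)."""
    return -sum(grads) / (sum(hess) + lam)

preds = [0.0, 0.0, 0.0]
labels = [1.0, 1.0, 10.0]

# With equal weights, all rows pull the leaf value equally;
# upweighting the third row drags it toward that row's label (10).
g1, h1 = weighted_squared_error_grad_hess(preds, labels, [1.0, 1.0, 1.0])
g2, h2 = weighted_squared_error_grad_hess(preds, labels, [1.0, 1.0, 10.0])
print(best_leaf_value(g1, h1))  # 3.0
print(best_leaf_value(g2, h2))  # ~7.85, pulled toward the heavy row
```

Is this roughly what the weight column does, or is the mechanism different?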
Is there a way to verify that this really works (other than just calling xgboostRegressor.getWeightCol)? I did two runs - one with this flag, one without - and didn't see any significant effect on the error. How should I debug this myself?
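One sanity check I'm considering (the helper below is my own, not part of any API): fit the simplest possible "model" - a single constant minimizing weighted squared error, i.e. the weighted mean - under different weightings and confirm the optimum moves. It also suggests why my two runs may have looked identical: if the weight column is nearly uniform, the weighted and unweighted fits barely differ.

```python
def weighted_mean(values, weights):
    """The constant that minimizes the weighted squared error."""
    return sum(w * v for v, w in zip(values, weights)) / sum(weights)

labels = [1.0, 2.0, 3.0, 100.0]

uniform = weighted_mean(labels, [1.0, 1.0, 1.0, 1.0])       # 26.5
skewed = weighted_mean(labels, [1.0, 1.0, 1.0, 50.0])       # ~94.5
near_uniform = weighted_mean(labels, [1.0, 1.1, 0.9, 1.0])  # ~26.5

# A strongly skewed weight column visibly moves the fit toward the
# upweighted row; a near-uniform one changes almost nothing, so two
# training runs would show almost the same error.
print(uniform, skewed, near_uniform)
```

Would the analogous test on the Spark model (set one row's weight very high and check that predictions shift toward it) be a reasonable way to confirm the weight column is actually being used?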
Thank you in advance!