I think this recommendation can work well. I use this transformation routinely for skewed data (with outliers), and it seems to improve my results.

I am posting now because I became interested in RMSLE, only to discover that it is not supported in the latest release. But am I effectively using RMSLE when I routinely transform my data by taking log(1 + L), where L is my untransformed label? The model then predicts log(1 + P), where P is the untransformed prediction.

If I use the RMSE metric, AND XGBoost has no problem training on this transformed label to predict log(1 + P), then I am effectively using the RMSLE metric, aren't I?
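Here is a quick numerical check of the equivalence I have in mind. The labels and predictions below are just made-up illustrative values, not from a real model:

```python
import numpy as np

def rmse(y_true, y_pred):
    # Ordinary root mean squared error
    return np.sqrt(np.mean((y_pred - y_true) ** 2))

def rmsle(y_true, y_pred):
    # Root mean squared logarithmic error on the original scale,
    # using log(1 + x) via np.log1p
    return np.sqrt(np.mean((np.log1p(y_pred) - np.log1p(y_true)) ** 2))

# Hypothetical untransformed labels L and predictions P
L = np.array([10.0, 100.0, 1000.0, 5.0])
P = np.array([12.0, 90.0, 1100.0, 4.0])

# RMSE computed on the log1p-transformed values should match
# RMSLE computed on the raw values
print(np.isclose(rmse(np.log1p(L), np.log1p(P)), rmsle(L, P)))  # True
```

So as far as I can tell, evaluating RMSE on log1p-transformed labels and predictions is numerically identical to evaluating RMSLE on the originals.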

Any corrections of errors in my thinking would be greatly appreciated, as would any deeper understanding of the benefits of RMSLE. For example, I would be interested in any experience with, or theoretical understanding of, how effective this metric is for dealing with outliers, and perhaps a comparison with MAE in that regard.

Anyway, the advice seems to work well for me.

-Jim