Hi Team!

I am searching for hyperparameters with the usual hyperopt/Bayesian search, and I wonder whether there is any heuristic- or theory-based research on how to aggregate the tree hyperparameters into a single number that would represent the overall strength of regularization.

What I'm looking for is something like this curve https://www.coursera.org/learn/machine-learning/lecture/yCAup/diagnosing-bias-vs-variance, but with some aggregate measure of regularization of the xgboost trees on the x-axis instead of polynomial degree.

Intuitively we all understand that max_depth=1 with min_child_weight=30 is much stronger regularization than max_depth=6 with min_child_weight=1, and also that the other parameters have less influence on tuning once max_depth and min_child_weight are decided.
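To make the idea concrete, here is a rough sketch (my own ad hoc mapping, not an established method) of collapsing the main tree parameters into one scalar "strength" axis that the search or the bias-variance curve could run over. All ranges below are illustrative assumptions, not tuned values:

```python
def reg_to_params(strength):
    """Map strength in [0, 1] to an xgboost parameter dict.

    strength = 0 -> weakest regularization, strength = 1 -> strongest.
    The endpoints mirror the intuition above: deep trees with tiny
    leaves at 0, stumps with min_child_weight=30 at 1.
    """
    assert 0.0 <= strength <= 1.0
    return {
        # deep trees when strength is low, stumps when it is high
        "max_depth": round(6 - 5 * strength),          # 6 .. 1
        # small leaves allowed when strength is low
        "min_child_weight": round(1 + 29 * strength),  # 1 .. 30
        # secondary knobs, scaled along the same axis (assumed ranges)
        "reg_lambda": 1.0 + 9.0 * strength,            # 1 .. 10
        "subsample": 1.0 - 0.3 * strength,             # 1.0 .. 0.7
    }
```

The exact interpolation is arbitrary; the point is just that one scalar drives all the knobs in the "more regularization" direction at once.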

One might argue that xgb.cv with early stopping would do the job, but the problem is that we have a very specific time-structured cross-validation which we implement ourselves, so what we want is a metric of how far train and test diverge from each other as a function of some aggregate regularization measure.
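The diagnostic itself could look something like the sketch below, assuming such a scalar axis exists. `params_for` and `fit_and_eval` are hypothetical callbacks standing in for the strength-to-parameters mapping and for one split of the custom time-structured CV (returning a train score and a test score):

```python
def divergence_curve(strengths, n_folds, params_for, fit_and_eval):
    """For each aggregate strength, average train/test scores over the
    custom CV folds and record the train-test gap."""
    curve = []
    for s in strengths:
        params = params_for(s)
        # one (train_score, test_score) pair per time-structured fold
        scores = [fit_and_eval(params, fold) for fold in range(n_folds)]
        train = sum(t for t, _ in scores) / n_folds
        test = sum(v for _, v in scores) / n_folds
        curve.append({"strength": s, "train": train,
                      "test": test, "gap": train - test})
    return curve
```

Plotting `gap` against `strength` would then give the bias-variance-style picture from the link above, but with the aggregate regularization measure on the x-axis.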

Thanks!!