Aggregating tree hyperparameters into a single one

Hi Team!

I am searching for hyperparameters using the usual hyperopt/Bayesian search, and I wonder whether there is any heuristic or theory-based research on how to aggregate the tree hyperparameters into a single one that would express the overall strength of regularization.
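
For context, the search itself looks roughly like this, a minimal sketch assuming hyperopt's TPE; the objective body, data, and parameter ranges are placeholders, not my real setup:

```python
# Minimal sketch, assuming hyperopt's TPE; ranges are illustrative only.
from hyperopt import STATUS_OK, fmin, hp, tpe

space = {
    "max_depth": hp.quniform("max_depth", 1, 10, 1),
    "min_child_weight": hp.quniform("min_child_weight", 1, 30, 1),
    "reg_lambda": hp.loguniform("reg_lambda", -3, 3),
    "gamma": hp.loguniform("gamma", -3, 3),
}

def objective(params):
    params["max_depth"] = int(params["max_depth"])  # quniform returns floats
    # ... train an XGBoost model with `params`, compute validation loss ...
    loss = 0.0  # placeholder
    return {"loss": loss, "status": STATUS_OK}

best = fmin(objective, space, algo=tpe.suggest, max_evals=100)
```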

What I'm looking for is something like this curve https://www.coursera.org/learn/machine-learning/lecture/yCAup/diagnosing-bias-vs-variance, but with some aggregate measure of regularization of the XGBoost trees on the x-axis instead of polynomial degree.

Intuitively we all understand that max_depth=1 with min_child_weight=30 is much stronger regularization than max_depth=6 with min_child_weight=1, and also that the other parameters have less influence on tuning once max_depth and min_child_weight are decided.
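
The crudest aggregate I can imagine, purely my own heuristic with no theoretical backing that I know of, would bound the number of leaves a tree can have:

```python
import math

def capacity_score(max_depth, min_child_weight, n_train):
    """Hypothetical 'capacity' aggregate: a tree has at most 2**max_depth
    leaves, and at most n_train / min_child_weight leaves if every leaf
    must cover min_child_weight rows (for squared error the hessian per
    row is 1, so min_child_weight counts rows). Lower = stronger reg."""
    max_leaves = min(2 ** max_depth, n_train / min_child_weight)
    return math.log2(max_leaves)

# With 10k training rows:
print(capacity_score(1, 30, 10_000))  # 1.0 -> heavily regularized
print(capacity_score(6, 1, 10_000))   # 6.0 -> much more capacity
```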

One might argue that xgb.cv and early stopping would do the job, but the problem is that we have a very specific time-structured cross-validation that we implement ourselves, so we want a metric of how far train and test diverge from each other, plotted against some aggregate regularization measure.
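
Concretely, the quantity I want on the y-axis is something like the gap below, computed once per hyperparameter configuration (a sketch only; sklearn's TimeSeriesSplit stands in for our own splitter):

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import TimeSeriesSplit

def train_test_gap(params, X, y, n_rounds=200):
    """Mean train/test metric gap over time-ordered folds for one
    hyperparameter configuration. Assumes numpy arrays and a single
    eval_metric in `params` (defaulting to rmse)."""
    metric = params.get("eval_metric", "rmse")
    gaps = []
    for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
        dtrain = xgb.DMatrix(X[train_idx], label=y[train_idx])
        dtest = xgb.DMatrix(X[test_idx], label=y[test_idx])
        history = {}
        xgb.train(params, dtrain, num_boost_round=n_rounds,
                  evals=[(dtrain, "train"), (dtest, "test")],
                  evals_result=history, verbose_eval=False)
        gaps.append(history["test"][metric][-1]
                    - history["train"][metric][-1])
    return float(np.mean(gaps))
```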

Thanks!!

The regularization parameters are orthogonal to one another, so it doesn’t seem to make sense to aggregate them. Is it possible to make multiple plots where each plot shows [single reg parameter] vs. [train/test accuracy]? To get this, you’d vary only one regularization parameter at a time, keeping all other regularization parameters fixed.
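
For the mechanics, a sketch of what I mean is below; note that xgb.cv accepts your custom fold indices through its folds argument, so the time-structured split can plug straight in. The depth grid is just illustrative, and you would repeat the same loop for each regularization parameter:

```python
import xgboost as xgb

def sweep_max_depth(X, y, folds, depths=(1, 2, 3, 4, 6, 8, 10)):
    """Vary max_depth only, everything else held fixed. `folds` is a list
    of (train_idx, test_idx) pairs, passed straight through to xgb.cv."""
    base = {"objective": "reg:squarederror", "eval_metric": "rmse",
            "min_child_weight": 1}
    dtrain = xgb.DMatrix(X, label=y)
    for depth in depths:
        res = xgb.cv(dict(base, max_depth=depth), dtrain,
                     num_boost_round=200, folds=folds)
        print(depth,
              res["train-rmse-mean"].iloc[-1],
              res["test-rmse-mean"].iloc[-1])
```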

Since the regularization parameters are orthogonal to one another, I agree that aggregating them does not seem to make sense. I am therefore also focusing on creating multiple plots, each varying a single parameter, and am looking for suggestions.