Rewriting _train_internal breaks the native xgboost usage. How can I resolve that?

Hi XGBoost gurus,

I am currently experimenting with a customized training solution built on xgboost. The main goal is to add some tailored logic inside the _train_internal function, whose native signature is defined as follows:


def _train_internal(params, dtrain,
                    num_boost_round=10, evals=(),
                    obj=None, feval=None,
                    xgb_model=None, callbacks=None,
                    evals_result=None, maximize=None,
                    verbose_eval=None, early_stopping_rounds=None)

My customized API adds some new parameters and looks like this:

def _train_internal_customized(params, dtrain,
                    num_boost_round=10, evals=(),
                    obj=None, feval=None,
                    xgb_model=None, callbacks=None,
                    evals_result=None, maximize=None,
                    verbose_eval=None, early_stopping_rounds=None,
                    data_distribution_estimation: dict = None)

I have run into a problem, however. Because I modify _train_internal in place inside the installed xgboost package, I can no longer use the original xgboost training method. Whenever I want to run the original version, for example to compare performance, I have to reinstall xgboost to overwrite my modified _train_internal.

Is there a way to keep a function such as _train_internal_customized() alongside the original, so that I can choose at call time between the native _train_internal and my own _train_internal_customized()? That would eliminate the need for repeated reinstallation.
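
To make the question more concrete, this is roughly what I imagine. It assumes I keep my version in my own module (my_training below is a hypothetical module name) and that the untouched _train_internal can be imported from xgboost.training, which is where it lives in the xgboost version I am using:

from xgboost.training import _train_internal  # the untouched native implementation

# Hypothetical: my own module, living outside the installed xgboost package
from my_training import _train_internal_customized


def train_dispatch(params, dtrain, use_customized=False,
                   data_distribution_estimation=None, **kwargs):
    """Pick the native or the customized internal trainer at call time."""
    if use_customized:
        return _train_internal_customized(
            params, dtrain,
            data_distribution_estimation=data_distribution_estimation,
            **kwargs)
    return _train_internal(params, dtrain, **kwargs)

# Native behaviour, no reinstall needed:
#   bst = train_dispatch(params, dtrain, num_boost_round=100)
# Customized behaviour:
#   bst = train_dispatch(params, dtrain, use_customized=True,
#                        data_distribution_estimation={...},
#                        num_boost_round=100)

Is something along these lines a reasonable architecture, or is there a cleaner way to hook into xgboost without touching the installed package?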

Any architecture advice is appreciated!

Thank you so much for the help!