Sorry for raising this here; I couldn't open a GitHub issue.
TL;DR: in the `train` function, support receiving all arguments via the `params` dict, e.g.:
num_boost_round = params.get("num_boost_round", num_boost_round)
The current implementation is rather bug-prone, especially when performing hyper-parameter tuning, for example. After optimizing over the params, one has to remember later, when initializing the booster again, that some of the keys need to be extracted from the params and passed as separate arguments. This is easy to overlook.
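Until something like this lands in the library, a small wrapper can paper over the problem. A minimal sketch, assuming a generic `train_fn(params, ...)` signature; the key list and `train_from_params` name are illustrative, not part of the library's API:

```python
# Keys that the train function expects as keyword arguments rather than
# inside `params` (assumed list, adjust to the library's actual signature).
TRAIN_ARG_KEYS = ("num_boost_round", "early_stopping_rounds")

def train_from_params(train_fn, params, *args, **kwargs):
    """Call `train_fn`, pulling known keyword arguments out of `params`.

    Lets a single dict (e.g. the output of hyper-parameter tuning) drive
    the whole call, so nothing has to be extracted by hand later.
    """
    params = dict(params)  # copy: don't mutate the caller's dict
    for key in TRAIN_ARG_KEYS:
        if key in params and key not in kwargs:
            kwargs[key] = params.pop(key)
    return train_fn(params, *args, **kwargs)
```

With this, the tuned dict can be passed straight back in, and `num_boost_round` is routed to the right place automatically instead of being silently treated as an (ignored) booster parameter.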