From the xgboost docs re: xgboost.dask:
model (Union[Dict[str, Any], xgboost.core.Booster, distributed.Future]) – The trained model. It can be a distributed.Future so user can pre-scatter it onto all workers.
How can I get a distributed.Future for a model?
Both calling .get_booster() on a fitted DaskXGBRegressor and reading the 'booster' entry of the dict returned by xgb.dask.train() give me a plain xgboost.core.Booster.
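For concreteness, this is roughly what I'm doing (the LocalCluster and the toy arrays are just for illustration):

```python
import dask.array as da
import xgboost as xgb
from dask.distributed import Client, LocalCluster

client = Client(LocalCluster(n_workers=2))

# Toy data purely for illustration
X = da.random.random((1000, 10), chunks=(100, 10))
y = da.random.random(1000, chunks=(100,))

# Scikit-learn-style API: get_booster() returns a plain Booster
reg = xgb.dask.DaskXGBRegressor(n_estimators=10)
reg.fit(X, y)
print(type(reg.get_booster()))  # <class 'xgboost.core.Booster'>

# Functional API: the 'booster' entry is also a plain Booster
dtrain = xgb.dask.DaskDMatrix(client, X, y)
output = xgb.dask.train(
    client, {"objective": "reg:squarederror"}, dtrain, num_boost_round=10
)
print(type(output["booster"]))  # <class 'xgboost.core.Booster'>
```

Neither path hands me a distributed.Future anywhere.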
Also, what do I lose if I supply the Booster rather than a Future when using a Dask cluster?
I'm having trouble understanding the performance implications of not pre-scattering the model onto the workers.
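My best guess is that I'm supposed to scatter the Booster myself and pass the resulting Future as the model argument, something like the sketch below; whether this is the intended pattern is exactly what I'm unsure about:

```python
# My guess (untested): broadcast the trained Booster to every worker myself,
# then pass the resulting distributed.Future as the `model` argument.
booster = output["booster"]
booster_future = client.scatter(booster, broadcast=True)  # distributed.Future

# Prediction call looks the same as with a plain Booster
preds = xgb.dask.predict(client, booster_future, X)
```

Is that all there is to it, and if so, how much does skipping the scatter actually cost during prediction?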