Distributed GPU Learning

I'm using the XGBoost.NET wrapper with SharpLearning.NET on an NVIDIA 980 Ti with 6 GB of dedicated GDDR memory. It only lets me train on datasets under about 600 MB; beyond that, GPU training automatically falls back to the CPU.
Now I have around 2 GB of new training data. Can XGBoost regression handle big data with 600 features across multiple GPUs? I don't have multiple GPUs yet to test with.
For example, if I have 10 GB of CSV training data and 10 GPUs, will the ten 6 GB graphics memories act as one shared pool?
Will retraining a model with the gpu_exact tree method work?

For an easy introduction to distributed training with GPUs, take a look at https://rapidsai.github.io/projects/cudf/en/0.7.0/dask-xgb-10min.html.
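Roughly, the workflow in that guide looks like the sketch below, assuming the RAPIDS stack (dask-cuda, dask_cudf, dask-xgboost) is installed. The file name `train.csv`, the `target` column, and the parameter values are placeholders, not from this thread:

```python
from dask.distributed import Client
from dask_cuda import LocalCUDACluster
import dask_cudf
import dask_xgboost

# One Dask worker per GPU; each worker holds only its shard of the data
# in its own device memory, so memory is partitioned rather than pooled.
cluster = LocalCUDACluster()
client = Client(cluster)

# Read the CSV as a GPU dataframe partitioned across the workers.
df = dask_cudf.read_csv("train.csv")  # placeholder path
y = df["target"]                      # placeholder label column
X = df.drop(columns=["target"])

params = {
    "objective": "reg:squarederror",
    # gpu_hist is the tree method that supports multi-GPU training;
    # gpu_exact does not run distributed.
    "tree_method": "gpu_hist",
    "max_depth": 6,
}

# Each GPU builds histograms on its local partitions and the workers
# combine them, so no single GPU has to fit the full dataset.
booster = dask_xgboost.train(client, params, X, y, num_boost_round=100)
```

Note that in this data-parallel setup the ten 6 GB cards don't merge into one shared memory space; the dataset is split across them, so total capacity scales with the number of GPUs.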
