I am amazed that such a powerful and useful tool as XGBoost is freely available to the community! Many thanks to the developers and contributors for their awesome work.
I have one question, though. I am trying to set up an environment for XGBoost in order to speed up the execution of an ML algorithm written in R.
The machine I am working on has 64 physical cores and two 24 GB Titan RTX cards running under SLI on Ubuntu 20.04. I have both CUDA and NCCL compiled and working.
Testing the sample Python script at https://github.com/dmlc/xgboost/blob/master/demo/dask/gpu_training.py revealed GPU usage of 100% on both GPUs (as monitored by nvidia-smi and nvtop).
However, running the sample R script provided at https://github.com/dmlc/xgboost/blob/master/R-package/demo/gpu_accelerated.R shows heavy usage on only one GPU.
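
For context, the core of that R demo boils down to something like the following (the data dimensions below are my own placeholders, not the demo's exact values):

library(xgboost)

# Synthetic binary-classification data, in the spirit of the demo
n <- 1e6
p <- 50
X <- matrix(rnorm(n * p), n, p)
y <- rbinom(n, 1, 0.5)
dtrain <- xgb.DMatrix(data = X, label = y)

# GPU-accelerated histogram tree method
param <- list(
  objective   = "binary:logistic",
  eval_metric = "auc",
  tree_method = "gpu_hist"
)
bst <- xgb.train(params = param, data = dtrain, nrounds = 50)

While this trains, nvidia-smi shows only one of the two cards doing any work.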
The R package has been successfully built with:
cmake .. -DUSE_CUDA=ON -DUSE_NCCL=ON -DNCCL_ROOT=/usr/lib/x86_64-linux-gnu -DR_LIB=ON
The n_gpus parameter seems to have been deprecated, and setting gpu_id to -1 does not appear to make any difference.
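
Concretely, this is the sort of parameter list I have been experimenting with (a sketch; as far as I can tell, gpu_id is the parameter that replaced n_gpus):

# Same dtrain as above; gpu_id is my understanding of the current
# device-selection parameter (n_gpus appears to be gone)
param <- list(
  objective   = "binary:logistic",
  eval_metric = "auc",
  tree_method = "gpu_hist",
  gpu_id      = -1  # no observable change: one GPU stays busy, the other idle
)
bst <- xgb.train(params = param, data = dtrain, nrounds = 50)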
Have I missed anything or done anything wrong? Do I need a special xgboost call in R to take advantage of both GPUs? Actually, is multi-GPU training even supported by the R package?
Many thanks in advance,