Does the R package support using multiple GPUs?

Hi all,

I am amazed that such a powerful and useful tool as XGBoost is freely available to the community! Many thanks to the developers and contributors for their awesome work.

I have one question, though. I am trying to set up an environment with XGBoost to speed up the execution of an ML algorithm written in R.

The machine I am working on has 64 physical cores and two 24 GB Titan RTX cards running under SLI on Ubuntu 20.04. I have both CUDA and NCCL compiled and working.

Testing the sample Python script revealed GPU usage of 100% on both GPUs (as monitored by nvidia-smi and nvtop).

However, running the provided sample R script shows heavy usage on only one GPU.

The R package has been successfully built with: cmake .. -DUSE_CUDA=ON -DUSE_NCCL=ON -DNCCL_ROOT=/usr/lib/x86_64-linux-gnu -DR_LIB=ON.
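For context, my full build sequence looked roughly like the sketch below (a hedged reconstruction; the clone path and -j count are assumptions, and the NCCL_ROOT path matches the Ubuntu apt layout on my machine):

```shell
# Sketch of building the XGBoost R package from source with CUDA + NCCL.
# Assumes CUDA, NCCL, R, and cmake are already installed.
git clone --recursive https://github.com/dmlc/xgboost
cd xgboost
mkdir build && cd build
cmake .. -DUSE_CUDA=ON -DUSE_NCCL=ON \
         -DNCCL_ROOT=/usr/lib/x86_64-linux-gnu \
         -DR_LIB=ON
make -j$(nproc)
make install   # installs the built R package into the default R library
```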

The n_gpus argument seems to have been deprecated, and setting gpu_id to 0, 1, or -1 does not appear to make any difference.

Have I missed anything or done anything wrong? Do I need any special xgboost call in R to take advantage of both GPUs? Actually, is multi-GPU training even supported by the R package?

Many thanks in advance,


No, the R package does not support the use of multiple GPUs at this moment.


Thanks for the prompt response, Philip. I guess the next obvious question is: is there any short-term plan to start supporting it?

No, we do not have a concrete plan to support it.

Well, that’s a bummer. I guess we’ll have to migrate our workflow from R to Python…

One suggestion would be to make it clear in the documentation that NCCL is not supported in R, and perhaps even to ignore the -DUSE_NCCL flag when -DR_LIB=ON is set (since it has no effect in the case of the R package).

Thanks for your reply, Philip!

We use distributed frameworks like Dask and Spark to enable multi-GPU support. I’m not entirely sure how distributed computing works in the R ecosystem.
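For anyone who does migrate the workflow to Python, a minimal multi-GPU sketch with Dask might look like the following. This is not runnable without the hardware: it assumes dask_cuda is installed and at least two GPUs are visible, and it uses toy random data purely for illustration.

```python
# Hedged sketch: multi-GPU XGBoost training via Dask.
# Assumes dask_cuda is installed and multiple GPUs are visible.
from dask.distributed import Client
from dask_cuda import LocalCUDACluster  # starts one worker per visible GPU
import dask.array as da
import xgboost as xgb

if __name__ == "__main__":
    cluster = LocalCUDACluster()  # discovers all visible GPUs
    client = Client(cluster)

    # Toy data, partitioned into chunks that Dask spreads across the
    # GPU workers (and therefore across the GPUs).
    X = da.random.random((100_000, 20), chunks=(10_000, 20))
    y = da.random.randint(0, 2, size=100_000, chunks=10_000)

    dtrain = xgb.dask.DaskDMatrix(client, X, y)
    output = xgb.dask.train(
        client,
        {"tree_method": "gpu_hist", "objective": "binary:logistic"},
        dtrain,
        num_boost_round=100,
    )
    booster = output["booster"]  # trained model; output["history"] holds eval logs
```

Each Dask worker pins itself to one GPU, so training scales across both cards without any per-GPU bookkeeping in user code.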

Hi, @thiagoveloso. It is pretty clear in the documentation. Check the bold text here:

Still waiting. I will never give up waiting for this feature.