Memory allocation error on GPU

My training dataset is huge, which is why I am getting a memory allocation error.

For XGBClassifier:
I found that I might try a few things:
a) Use the xgb_model parameter (the file name of a stored XGBoost model or a Booster instance to be loaded before training; allows training continuation); see the sketch after this list.
(But I am NOT sure whether fitting multiple times will give the same results compared to fitting a single time.)
b) DeviceQuantileDMatrix on a single GPU
c) Distributed training with Ray or Dask
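For illustration, here is a minimal Python sketch of option (a), training continuation via xgb_model. The chunk file names, the load_chunk helper, and the number of chunks are hypothetical. Note that each fit call appends new trees trained only on that chunk, so the result is not expected to be identical to a single fit on the full dataset:

```python
import numpy as np
import xgboost as xgb

N_CHUNKS = 4  # hypothetical: the data has been pre-split into 4 files

def load_chunk(i):
    # Hypothetical helper: returns the features and labels of the i-th chunk.
    data = np.load(f"train_chunk_{i}.npz")
    return data["X"], data["y"]

clf = xgb.XGBClassifier(tree_method="gpu_hist", n_estimators=100)
booster = None
for i in range(N_CHUNKS):
    X, y = load_chunk(i)
    # Continue training from the previous booster instead of starting over;
    # each call adds n_estimators new trees fitted on this chunk only.
    clf.fit(X, y, xgb_model=booster)
    booster = clf.get_booster()
```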

My questions:

  1. Could anyone share their experience: is it possible to get the same results with a, b, and c as with a model trained on the whole dataset at once?

  2. Among these three options (a, b, c), which one is the best?

Please help.

You can also try external memory: https://xgboost.readthedocs.io/en/stable/tutorials/external_memory.html#gpu-version-gpu-hist-tree-method
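In Python, the Data Iterator approach from that tutorial looks roughly like the sketch below (a minimal adaptation, assuming the training data has been pre-split into hypothetical train_chunk_*.npz files). XGBoost builds the DMatrix by pulling one chunk at a time through the iterator and caching it on disk, and gpu_hist then trains from that cache rather than holding everything in GPU memory at once:

```python
import os
import numpy as np
import xgboost as xgb

class ChunkIterator(xgb.DataIter):
    """Feeds the training data to XGBoost one chunk at a time."""

    def __init__(self, n_chunks):
        self._n_chunks = n_chunks  # hypothetical: data pre-split into n_chunks files
        self._it = 0
        # XGBoost writes its external-memory cache files using this prefix.
        super().__init__(cache_prefix=os.path.join(".", "cache"))

    def next(self, input_data):
        if self._it == self._n_chunks:
            return 0  # 0 tells XGBoost the iteration is finished
        chunk = np.load(f"train_chunk_{self._it}.npz")  # assumed file layout
        input_data(data=chunk["X"], label=chunk["y"])
        self._it += 1
        return 1  # 1 tells XGBoost there are more chunks to come

    def reset(self):
        # Rewind to the first chunk; XGBoost may pass over the data several times.
        self._it = 0

Xy = xgb.DMatrix(ChunkIterator(n_chunks=4))
booster = xgb.train({"tree_method": "gpu_hist"}, Xy, num_boost_round=100)
```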

I am using XGBoost in R on the GPU. It would be great if someone could post an R code example that uses external memory. The link above currently has an example snippet in Python (in the Data Iterator section), but I have not managed to convert it to R code. Thanks a lot!