Training many models with gpu_hist in Optuna yields 'parallel_for failed: out of memory'

Hi, I am having an issue with XGBClassifier on GPU and tried to work around it by saving the model to disk, deleting it, and loading it back in:

```python
import os
import pickle

# Save the fitted model to disk, delete it, then load it back in
pickle.dump(self.model, open(f'tmp/model_{uid}.pkl', 'wb'))
del self.model
self.model = pickle.load(open(f'tmp/model_{uid}.pkl', 'rb'))
os.remove(f'tmp/model_{uid}.pkl')
```
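
A slightly more careful version of the same idea, with the file handles closed properly and an explicit gc.collect() thrown in (the helper name and the gc.collect() call are additions for illustration, not the exact code I currently run):

```python
import gc
import os
import pickle

def reload_model_from_disk(model, uid):
    """Pickle the fitted model, drop the original object so its GPU-side
    Booster can be freed, then load the copy back from disk."""
    path = f'tmp/model_{uid}.pkl'
    with open(path, 'wb') as f:
        pickle.dump(model, f)
    del model
    gc.collect()  # encourage Python to release the old Booster right away
    with open(path, 'rb') as f:
        model = pickle.load(f)
    os.remove(path)
    return model
```

so the call site is just `self.model = reload_model_from_disk(self.model, uid)`.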

I am on xgboost 1.3.0 and the models are very small. I am running a hyperparameter optimization (HO) with Optuna, with a 1000x bootstrapping CV in each iteration. After 50-120 Optuna iterations, it throws the error:

```
xgboost.core.XGBoostError: [16:11:48] ../src/tree/updater_gpu_hist.cu:731: Exception in gpu_hist: NCCL failure :unhandled cuda error ../src/common/device_helpers.cu(71)
```

and

```
terminate called after throwing an instance of 'thrust::system::system_error'
  what():  parallel_for failed: out of memory
```

Looking at nvidia-smi, the process only ever uses a constant ~210 MB of GPU memory (RTX TITAN).

I thought this was related to https://github.com/dmlc/xgboost/issues/4668, but I am not sure about that anymore.

BTW, everything works fine when running the same code on CPU.
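
For reference, each Optuna trial looks roughly like the sketch below. Treat it as an approximation: the data is a toy dataset, the parameter space is cut down, cross_val_score stands in for the 1000x bootstrapping CV, and the del/gc.collect() cleanup is a simplification of the pickle workaround above; only the tree_method choice reflects my real setup.

```python
import gc

import optuna
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

# Toy data; in the real run each trial does a 1000x bootstrapping CV instead.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

def objective(trial):
    params = {
        'tree_method': 'gpu_hist',  # switching this to 'hist' (CPU) works fine
        'max_depth': trial.suggest_int('max_depth', 2, 8),
        'learning_rate': trial.suggest_float('learning_rate', 1e-3, 0.3, log=True),
        'n_estimators': 200,
    }
    model = xgb.XGBClassifier(**params)
    score = cross_val_score(model, X, y, cv=5).mean()

    # Explicitly drop the model and collect garbage so GPU memory from the
    # previous trial is (hopefully) released before the next one starts.
    del model
    gc.collect()
    return score

study = optuna.create_study(direction='maximize')
study.optimize(objective, n_trials=150)
```

In my actual runs, the gpu_hist version eventually dies with the errors above, while the CPU 'hist' version runs to completion.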

Can you open a new GitHub issue?

Alright, thanks. I did not know where to ask first :+1: