XGBoost GPU implementation

I am trying to perform repeated cross-validation on a large data set (25k × 43k) using XGBoost to find the best set of hyperparameters. To minimize training time, I want to run XGBoost on the GPU. I used the following code:
model = XGBClassifier(…(parameters)…, booster='gbtree', tree_method='gpu_hist', gpu_id=0, objective='binary:logistic')

grid_search = GridSearchCV(model, params, …)

grid_search.fit(X_train, y_train)

When I run this, I get an 'out of memory' error.

How can I run XGBoost on the GPU together with GridSearchCV?
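For completeness, here is a stripped-down, runnable version of what I am attempting, assuming a CUDA-enabled XGBoost build; the data and the parameter grid below are placeholders standing in for my real ones:

import numpy as np
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

# placeholder data standing in for my real X_train, y_train
X_train = np.random.rand(1000, 50)
y_train = np.random.randint(0, 2, size=1000)

# gpu_hist builds the histograms on the GPU (requires a CUDA build of XGBoost)
model = XGBClassifier(booster='gbtree', tree_method='gpu_hist', gpu_id=0,
                      objective='binary:logistic')

# placeholder grid, for illustration only
params = {'max_depth': [3, 6], 'learning_rate': [0.1, 0.3]}

grid_search = GridSearchCV(model, params, cv=3)
grid_search.fit(X_train, y_train)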

Also, for using external memory in XGBoost: my file is named 'abc' and its path is /home/user/abc.data. How do I fit it into this format?

filename.csv?format=csv&label_column=0#cacheprefix

I need to load 'abc' into some variable in Python (I would normally use pandas), but I don't see how to adapt my file name to the format above. I tried simply substituting my file name, but it didn't work.
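For reference, a minimal sketch of how that URI maps onto a concrete path, assuming abc.data is plain CSV text with the label in column 0; the part after # is just an arbitrary prefix for the on-disk cache files XGBoost creates:

import xgboost as xgb

# <path>?format=csv&label_column=<index>#<cache file prefix>
# assumes /home/user/abc.data is CSV text with the label in column 0
uri = '/home/user/abc.data?format=csv&label_column=0#dtrain.cache'

# XGBoost streams the file from disk itself and builds an on-disk cache
# (files named dtrain.cache*), so the data is not loaded through pandas
dtrain = xgb.DMatrix(uri)

Note that in this external-memory mode the file is read by XGBoost directly; pandas is not involved.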

It means your system doesn't have enough memory. With tree_method='gpu_hist', the memory that runs out is usually GPU memory rather than system RAM, because the training data is copied onto the device.
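If the memory that runs out is on the GPU, one common cause, though this is an assumption worth checking on your setup, is GridSearchCV fitting several candidates in parallel (n_jobs > 1), each holding its own copy of the data on the same device. A sketch of a lower-memory configuration; the max_bin value and the grid are illustrative:

from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

# fewer histogram bins shrink gpu_hist's memory footprint (the default is 256)
model = XGBClassifier(booster='gbtree', tree_method='gpu_hist', gpu_id=0,
                      objective='binary:logistic', max_bin=64)

# illustrative grid; substitute your real one
params = {'max_depth': [3, 6], 'learning_rate': [0.1, 0.3]}

# n_jobs=1 fits one candidate at a time, so only one copy of the data
# is resident on the GPU instead of one per parallel worker
grid_search = GridSearchCV(model, params, cv=3, n_jobs=1)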