GPU XGBoost training error: xgboost.core.XGBoostError: Memory should be contiguous

I encountered this problem while training XGBoost on GPU:
xgboost.core.XGBoostError: [03:41:37] …/src/c_api/…/data/array_interface.h:179: Check failed: get(strides.at(0)) == type_length (4 vs. 1) : Memory should be contiguous.
I can't find the same question on Stack Overflow.
Training works at the k-fold stage, but this error occurs when I train on the full data.
Thank you.

Working training code (train_index is generated by k-fold):

train_x = df.iloc[train_index]
train_users = train_x.index.values
train_y = targets.loc[targets.q == t].set_index('session').loc[train_users]
clf = XGBClassifier(**xgb_params)
clf.fit(train_x[FEATURES].astype('float32'), train_y['correct'],
        eval_set=[(valid_x[FEATURES].astype('float32'), valid_y['correct'])],
        verbose=0)

Failing training code:

clf = XGBClassifier(**xgb_params)
clf.fit(df[FEATURES].astype('float32'), train_y['correct'], verbose=0)
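The check in the error message (strides.at(0) == type_length) fires when the buffer handed to XGBoost is not C-contiguous, i.e. its row stride does not match the element size. A minimal NumPy illustration of the same condition (not the poster's data; the shapes and dtype here are just an example):

```python
import numpy as np

# A column slice of a 2-D array is a view whose stride along axis 0 is
# the full row width of the parent array, not the width of the slice,
# so it fails a contiguity check like the one in the error message.
arr = np.arange(12, dtype=np.float32).reshape(3, 4)
col = arr[:, :2]                     # view: row stride 16 bytes, not 8
print(col.flags["C_CONTIGUOUS"])     # False

# np.ascontiguousarray copies the view into fresh contiguous memory.
fixed = np.ascontiguousarray(col)
print(fixed.flags["C_CONTIGUOUS"])   # True
```

Whether a given cuDF operation produces such a non-contiguous buffer under the hood depends on the library version, which is presumably why the version question comes up below.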

What’s the XGBoost version you’re using?

The problem is solved. I ran fillna.

The XGBoost version is 1.7.1.

Are you using pandas as input?

cuDF.
XGBoost can train on a pandas DataFrame containing np.nan, but it can't do the same with a cuDF DataFrame.

Could you please try upgrading XGBoost to 1.7.4?

The problem is solved. I ran fillna.
cuDF can't train with NaN values.
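The fillna workaround can be sketched as follows, shown here with pandas (cuDF exposes the same DataFrame.fillna method). The column names and the fill value of -1 are illustrative assumptions, not taken from the original post:

```python
import numpy as np
import pandas as pd

# Hypothetical feature frame with missing values, standing in for the
# poster's df[FEATURES]; with cuDF, the same fillna call applies.
FEATURES = ["f0", "f1"]
df = pd.DataFrame({"f0": [1.0, np.nan, 3.0],
                   "f1": [np.nan, 5.0, 6.0]})

# Fill NaNs before handing the data to XGBClassifier.fit; -1 is an
# arbitrary sentinel here, pick one outside your feature range.
train_x = df[FEATURES].fillna(-1).astype("float32")
print(train_x.isna().sum().sum())    # 0
```

Note that -1 (or any constant) becomes an ordinary feature value to the model, whereas np.nan in a pandas input is treated as "missing" by XGBoost's default handling, so the two are not strictly equivalent.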