That helped, thanks!
I tried this when I read your message.
The good news: it did connect to GPU 1 and ran for a while.
However, the kernel kept dying after four or five xgb.train calls.
I looked at the Jupyter log but I didn’t see anything to indicate why the kernel might have died.
I tried restarting Jupyter and then rebooting the machine.
Unfortunately, now it won’t run at all. I get: XGBoostError: [08:33:51] c:\users\administrator\workspace\xgboost-win64_release_1.0.0\src\gbm\gbtree.h:308: Check failed: gpu_predictor_:
I’ve set the variable at the system and user levels.
Anyway, it did run for a while. I hope this info is helpful.
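In case it helps narrow things down, here is the loop pattern I'm planning to try between trials. The idea (my guess, not confirmed) is that each Booster handle holds GPU memory until it is garbage-collected, so dropping the handle explicitly after each xgb.train call might keep the kernel from dying. The names run_trials and train_fn are just illustrative:

```python
import gc

def run_trials(train_fn, param_list):
    """Run several training calls in sequence, explicitly dropping each
    booster so any GPU memory held by its handle can be reclaimed.
    (That leaked handles are the cause of the kernel deaths is my guess,
    not something I've confirmed.)"""
    all_results = []
    for params in param_list:
        bst, results = train_fn(params)  # e.g. a wrapper around xgb.train
        all_results.append(results)
        del bst        # drop the Booster handle
        gc.collect()   # force collection so __del__ frees the C-side handle
    return all_results
```

If the kernel still dies with this in place, that would point away from per-trial memory leaks and toward a driver or library issue.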
Thanks!
P.S. Here’s a parameter sample and the full traceback.
{'rate_drop': 0.34390186718905363, 'eta': 0.7436749928679078, 'min_child_weight': 39, 'alpha': 3.6571649662944705, 'max_depth': 25, 'min_subsample': 0.07562723872365831, 'lambda': 2.6508283625950915, 'objective': 'reg:squaredlogerror', 'eval_metric': 'rmsle', 'tree_method': 'gpu_hist'}
XGBoostError                              Traceback (most recent call last)
in
      9 print(parameters)
     10
---> 11 bst, results = train_model(parameters, training_dmatrix, TESTS_PER_CYCLE, evallist, local_objective, local_metric)
     12
     13 print(bst)

in train_model(params, dtrain, num_boost_round, evals, local_objective, local_metric)
     12     bst = xgb.train(params=params, dtrain=dtrain, num_boost_round=num_boost_round, evals=evals,
     13                     evals_result=results, verbose_eval=VERBOSE_EVAL_INTERVAL, obj=local_objective,
---> 14                     feval=local_metric)
     15
     16

~\anaconda3\lib\site-packages\xgboost\training.py in train(params, dtrain, num_boost_round, evals, obj, feval, maximize, early_stopping_rounds, evals_result, verbose_eval, xgb_model, callbacks)
    207                            evals=evals,
    208                            obj=obj, feval=feval,
---> 209                           xgb_model=xgb_model, callbacks=callbacks)
    210
    211

~\anaconda3\lib\site-packages\xgboost\training.py in _train_internal(params, dtrain, num_boost_round, evals, obj, feval, xgb_model, callbacks)
     72         # Skip the first update if it is a recovery step.
     73         if version % 2 == 0:
---> 74             bst.update(dtrain, i, obj)
     75             bst.save_rabit_checkpoint()
     76             version += 1

~\anaconda3\lib\site-packages\xgboost\core.py in update(self, dtrain, iteration, fobj)
   1249                                                     dtrain.handle))
   1250         else:
---> 1251             pred = self.predict(dtrain, training=True)
   1252             grad, hess = fobj(pred, dtrain)
   1253             self.boost(dtrain, grad, hess)

~\anaconda3\lib\site-packages\xgboost\core.py in predict(self, data, output_margin, ntree_limit, pred_leaf, pred_contribs, approx_contribs, pred_interactions, validate_features, training)
   1450                                           ctypes.c_int(training),
   1451                                           ctypes.byref(length),
---> 1452                                           ctypes.byref(preds)))
   1453         preds = ctypes2numpy(preds, length.value, np.float32)
   1454         if pred_leaf:

~\anaconda3\lib\site-packages\xgboost\core.py in _check_call(ret)
    187     """
    188     if ret != 0:
---> 189         raise XGBoostError(py_str(_LIB.XGBGetLastError()))
    190
    191

XGBoostError: [08:33:51] c:\users\administrator\workspace\xgboost-win64_release_1.0.0\src\gbm\gbtree.h:308: Check failed: gpu_predictor_: