GPU version fit() crashed

CUDA 9.0
Visual Studio 15
Win 10
GTX760

I compiled the GPU version successfully and copied the DLL to python-package/xgboost. But the program crashes when model.fit() is called, with or without the argument ‘tree_method’: ‘gpu_hist’.
The program itself is fully functional, since it runs fine with the official ‘pip install xgboost’ build.
No error message is even printed; the Jupyter notebook kernel just dies. If I run the program in a terminal, it simply starts a new line when it crashes, with no error printed either.
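For context, a minimal sketch of the kind of call that triggers the crash (synthetic data; the shapes and extra parameters are arbitrary placeholders, not from the original program, and the fit call is guarded so the sketch runs even without a working GPU build):

```python
import numpy as np

# Synthetic stand-in data, just to make the repro self-contained
X = np.random.rand(200, 5)
y = np.random.randint(2, size=200)

# The only non-default setting involved in the crash
params = {'tree_method': 'gpu_hist', 'max_depth': 3, 'n_estimators': 10}

try:
    import xgboost as xgb
    model = xgb.XGBClassifier(**params)
    model.fit(X, y)  # with the self-built DLL, the process dies silently right here
    print('fit completed')
except Exception as exc:  # guarded so the sketch can run without xgboost or a GPU
    print(f'fit not run: {exc!r}')
```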

Any help would be appreciated.

So did you compile from source, or did you install the binary wheel? When you run ‘pip install’, it gives you a precompiled XGBoost.

I compiled it from source, and it doesn’t work. Since no error message was given, I uninstalled it and installed the precompiled CPU version with ‘pip install xgboost’ to see whether that works with my code. It does.
So I guess the problem is in the DLL I built.

You should be able to run GPU algorithms with the precompiled wheel. Did you try using gpu_hist?

I just tried. It works without ‘gpu_hist’ but doesn’t work with it :frowning:
BTW, my GPU only has compute capability 3.0, which is the minimum that CUDA 9.0 requires.

But according to your GPU support doc, the requirements are CUDA 8.0 and compute capability 3.5. I’m not sure whether the doc just hasn’t been updated for newer CUDA versions?

Unfortunately, XGBoost requires compute capability 3.5 or higher.
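For anyone hitting the same wall, the version check is just a tuple comparison; a quick sketch (the function name is made up for illustration, and the capability values come from this thread rather than being queried from hardware):

```python
# XGBoost's documented minimum compute capability for GPU algorithms
MIN_COMPUTE_CAPABILITY = (3, 5)

def meets_xgboost_gpu_minimum(major, minor):
    """Return True if a GPU's compute capability is at least 3.5."""
    return (major, minor) >= MIN_COMPUTE_CAPABILITY

# GTX 760 is compute capability 3.0 -> below the minimum
print(meets_xgboost_gpu_minimum(3, 0))   # False
print(meets_xgboost_gpu_minimum(3, 5))   # True
```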

All right, thanks a lot. Finally figured out the reason.