[Migrated from https://github.com/dmlc/xgboost/issues/3447]
System: Windows 10 x64 Professional
GPU build compiled from source from the 0.72 release branch
R session info:
```
> sessionInfo()
R version 3.5.0 (2018-04-23)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows >= 8 x64 (build 9200)

Matrix products: default

locale:
[1] LC_COLLATE=English_United States.1252  LC_CTYPE=English_United States.1252    LC_MONETARY=English_United States.1252
[4] LC_NUMERIC=C                           LC_TIME=English_United States.1252

attached base packages:
[1] parallel  stats     graphics  grDevices utils     datasets  methods   base

other attached packages:
[1] stringi_1.1.7     xgboost_0.71.2    doParallel_1.0.11 iterators_1.0.9   foreach_1.4.4     raster_2.6-7      rgdal_1.3-3
[8] sp_1.3-1

loaded via a namespace (and not attached):
[1] Rcpp_0.12.17      lattice_0.20-35   codetools_0.2-15  grid_3.5.0        magrittr_1.5      data.table_1.11.4 Matrix_1.2-14
[8] tools_3.5.0       compiler_3.5.0
```

Running this script:

```r
require(xgboost)

data(agaricus.train, package = 'xgboost')
data(agaricus.test, package = 'xgboost')
dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
dtest  <- xgb.DMatrix(data = agaricus.test$data, label = agaricus.test$label)

param <- list(max_depth = 2, eta = 0.1, nthread = 8,
              tree_method = "gpu_exact", predictor = "gpu_predictor",
              objective = "gpu:binary:logistic")

xgbmodel <- xgb.train(param, dtrain, nrounds = 5, verbose = 2)
xgb.save(xgbmodel, fname = "xgboost_model")
xgbmodel_from_disk <- xgb.load(modelfile = "xgboost_model")

# Run model from memory
set.seed(123)
head(predict(xgbmodel, dtest, predictor = "gpu_predictor"))
head(predict(xgbmodel, dtest))

# Run model from disk
set.seed(123)
head(predict(xgbmodel_from_disk, dtest, predictor = "gpu_predictor"))
head(predict(xgbmodel_from_disk, dtest))
```
This gives the following output:
```
> require(xgboost)
> data(agaricus.train, package='xgboost')
> data(agaricus.test, package='xgboost')
> dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)
> dtest <- xgb.DMatrix(data = agaricus.test$data, label = agaricus.test$label)
> param <- list(max_depth=2, eta=0.1, nthread = 8, tree_method="gpu_exact", predictor="gpu_predictor", objective = "gpu:binary:logistic")
> xgbmodel <- xgb.train(param, dtrain, nrounds = 5, verbose = 2)
[08:45:43] Allocated 7MB on [0] GeForce GTX 1070, 6769MB remaining.
[08:45:43] G:\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 0 pruned nodes, max_depth=2
[08:45:43] Allocated 1MB on [0] GeForce GTX 1070, 6767MB remaining.
[08:45:43] G:\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 0 pruned nodes, max_depth=2
[08:45:43] G:\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 0 pruned nodes, max_depth=2
[08:45:43] G:\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 0 pruned nodes, max_depth=2
[08:45:43] G:\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 0 pruned nodes, max_depth=2
> xgb.save(xgbmodel, fname = "xgboost_model")
[1] TRUE
> xgbmodel_from_disk <- xgb.load(modelfile = "xgboost_model")
> #Run model from memory
> set.seed(123)
> head(predict(xgbmodel,dtest,predictor="gpu_predictor"))
[08:45:44] Allocated 0MB on [0] GeForce GTX 1070, 6767MB remaining.
[1] 0.4284496 0.5864953 0.4284496 0.4284496 0.3037330 0.5104644
> head(predict(xgbmodel,dtest))
[08:45:44] Allocated 0MB on [0] GeForce GTX 1070, 6767MB remaining.
[1] 0.4284496 0.5864953 0.4284496 0.4284496 0.3037330 0.5104644
> #Run model from disk
> set.seed(123)
> head(predict(xgbmodel_from_disk,dtest,predictor="gpu_predictor"))
[1] 0.4284496 0.5864953 0.4284496 0.4284496 0.3037330 0.5104644
> head(predict(xgbmodel_from_disk,dtest))
[1] 0.4284496 0.5864953 0.4284496 0.4284496 0.3037330 0.5104644
```
As the `Allocated ... on [0] GeForce GTX 1070` log lines show, only the model held in memory runs its predictions on the GPU; the model loaded from disk silently falls back to the CPU, even when `predictor="gpu_predictor"` is requested.
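
A possible workaround, shown as a minimal sketch below, is to re-apply the predictor setting to the booster after loading it, on the assumption that `xgb.load` does not restore training-time parameters from the binary model file. `xgb.parameters<-` is the R package's setter for booster parameters; whether it actually makes the disk-loaded model use the GPU on this build is untested.

```r
require(xgboost)

# Load the saved model; parameters such as 'predictor' that were set at
# training time are apparently not carried along in the saved file.
xgbmodel_from_disk <- xgb.load(modelfile = "xgboost_model")

# Sketch of a workaround: push the GPU predictor back onto the loaded
# booster before predicting (assumes dtest from the script above).
xgb.parameters(xgbmodel_from_disk) <- list(predictor = "gpu_predictor")

head(predict(xgbmodel_from_disk, dtest))
```

If the setter takes effect, the `Allocated ... GeForce GTX 1070` allocation line should reappear in the log for the disk-loaded model, matching the in-memory case above.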