In the first stage, the model is trained with a large learning rate (e.g. eta = 0.1) and the best model is saved. The model is then reloaded with eta set to 0.01, and training continues on the same data.
code:
import xgboost as xgb

# First stage: train with a large learning rate and save the model.
param['eta'] = 0.1
bst = xgb.train(param, DTrain_X, num_boost_round=num_round,
                evals=evallist, early_stopping_rounds=1)
bst.save_model(model_path)

# Second stage: lower eta and continue training on the same data.
param['eta'] = 0.01
bst1 = xgb.train(param, DTrain_X, num_boost_round=num_round,
                 evals=evallist, early_stopping_rounds=1,
                 xgb_model=model_path)
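As an aside, in the Python package the xgb_model argument also accepts a live Booster object, so the intermediate file is not strictly required. A minimal sketch, reusing the same variables as above:

# Sketch: continue training directly from the in-memory Booster
# rather than reloading it from disk (same variables as above).
param['eta'] = 0.01
bst1 = xgb.train(param, DTrain_X, num_boost_round=num_round,
                 evals=evallist, early_stopping_rounds=1,
                 xgb_model=bst)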
Judging by the watchlist output, the reloaded model behaves like a freshly initialized one: after reloading, the losses jump back up instead of continuing from where the first stage left off.
output:
[0] train-mlogloss:3.26105 Test-mlogloss:3.34312
[1] train-mlogloss:3.08983 Test-mlogloss:3.25997
[2] train-mlogloss:2.93871 Test-mlogloss:3.18985
[3] train-mlogloss:2.80696 Test-mlogloss:3.1259
[4] train-mlogloss:2.67956 Test-mlogloss:3.08794
[5] train-mlogloss:2.56708 Test-mlogloss:3.04834
[6] train-mlogloss:2.46203 Test-mlogloss:3.00596
[7] train-mlogloss:2.36344 Test-mlogloss:2.97334
[8] train-mlogloss:2.27283 Test-mlogloss:2.94528
[9] train-mlogloss:2.18703 Test-mlogloss:2.91822
[03:03:14] C:\Users\Administrator\Desktop\xgboost\src\learner.cc:362: Parameter 'predictor' has been recovered from the saved model. It will be set to 'gpu_predictor' for prediction. To override the predictor behavior, explicitly set 'predictor' parameter as follows:
* Python package: bst.set_param('predictor', [new value])
* R package: xgb.parameters(bst) <- list(predictor = [new value])
* JVM packages: bst.setParam("predictor", [new value])
[0] train-mlogloss:3.45505 Test-mlogloss:3.46253
[1] train-mlogloss:3.43493 Test-mlogloss:3.45087
[2] train-mlogloss:3.41409 Test-mlogloss:3.43882
[3] train-mlogloss:3.39473 Test-mlogloss:3.42721
[4] train-mlogloss:3.37509 Test-mlogloss:3.41436
[5] train-mlogloss:3.35599 Test-mlogloss:3.40238
[6] train-mlogloss:3.33654 Test-mlogloss:3.39171
[7] train-mlogloss:3.31752 Test-mlogloss:3.38018
[8] train-mlogloss:3.29899 Test-mlogloss:3.36862
[9] train-mlogloss:3.28034 Test-mlogloss:3.35855
End of test.
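One way to check whether the continued booster actually inherited the first stage's trees is to compare tree counts via get_dump(). This is only a sketch, assuming the bst and bst1 objects from the snippet above are still in scope; note that for a multi-class model each boosting round adds num_class trees.

# Sanity check (sketch): count the trees in each booster.
n_stage1 = len(bst.get_dump())   # trees after the first run
n_stage2 = len(bst1.get_dump())  # trees after the continued run
print(n_stage1, n_stage2)
# If training genuinely continued for the same number of rounds,
# n_stage2 should be roughly 2 * n_stage1; equal counts would mean
# the second run started from scratch.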