RFC

Topic | Replies | Activity
How is XGBRFRegressor intended to work with early stopping? | 2 | July 29, 2021
Limit or constraints of data | 2 | July 24, 2021
Is training multiple models in parallel threads supported (Python)? | 1 | June 25, 2021
Implement XGBoost in Scala Spark, Dataproc Zeppelin notebook | 1 | May 12, 2021
Probabilities returned by multi:softprob | 4 | April 20, 2021
Parameters: { "feature_types" } might not be used | 4 | April 14, 2021
Parallel training of XGBoost classifiers | 3 | April 8, 2021
Does `Booster.predict` use multiple threads by default? | 4 | March 22, 2021
Hyperparameter tuning | 2 | March 18, 2021
Understanding feature importance vs. feature weights | 1 | March 16, 2021
XGBoost Hyperparameters | 3 | March 15, 2021
[Feature] Expose TreeShap feature contribution | 5 | March 10, 2021
How to change the best split candidate | 3 | February 25, 2021
Unknown Objective Function Error - SOLVED | 1 | February 10, 2021
Confidence measures and feature importance for RF | 1 | February 2, 2021
Memory usage on predict() | 1 | January 29, 2021
How to release GPU memory in R after fit? | 6 | January 22, 2021
Does the "refresh" option update split thresholds of tree nodes? | 4 | December 25, 2020
Behavior of eta and max_depth not as expected | 8 | December 21, 2020
Scale_pos_weight's counterpart for xgboost.train | 2 | December 20, 2020
Xgb.save setting a file path | 13 | December 19, 2020
Fault tolerance of distributed XGBoost | 10 | December 17, 2020
Can someone give a brief overview of how distributed training with XGBoost4J-Spark works? | 3 | December 16, 2020
Scikit-learn API GPU-accelerated SHAP values | 3 | December 13, 2020
Where to find the intercept of models built by xgboost with gblinear? | 3 | December 2, 2020
Documentation for 'score' method missing for Python package | 2 | November 26, 2020
When using DART booster, is GPU support possible? | 3 | November 20, 2020
Should we pre-split the dataset into smaller datasets when using distributed XGBoost? | 5 | November 16, 2020
Very first tree in XGBRegressor not centered | 2 | November 10, 2020
XGBoost version not compiled with GPU support | 8 | November 10, 2020