First of all, thank you for all your effort in developing such a great library.
Recently, I’ve been using the XGBoost classifier via the Python API to build an ML model. I want to deploy this model on both Windows and Linux (and also macOS), so I have tested it on both operating systems, but the model’s prediction performance differs significantly between them. For example, the AUC score of the model trained on Windows was about 0.94, but on Linux it was about 0.74.
I’m using pipenv to pin the same Python library/package versions on each machine. Of course, all arguments and parameters passed to XGBoost are identical (including the random seed), the datasets are identical, and all source code, scripts, and config files are the same, managed with git. One thing that makes me wonder whether I should ask this on a different forum is that my task is multi-output label prediction, so I’m using scikit-learn’s ClassifierChain for the multi-output labels with an XGBoost classifier as the base estimator. But I’m wondering: is it possible for prediction performance to differ between operating systems when using XGBoost?
Sorry that I cannot share the detailed source code and datasets, because this work is classified, but I would be very grateful if you could share any experience or ideas about possible causes of this issue (or things I should check again).