I wanted to know why the XGBoost code uses single-precision floats throughout. Why not use doubles to enable higher-precision data reading, boosting, and scoring?
I tried replacing all floats with doubles and building a 100-tree model with GPU hist on the Higgs dataset. With floats, the accuracy is around 73% (close to the CPU hist accuracy), but with doubles it drops to 69%. Why does switching to doubles across the code hurt accuracy so much? Is it something related to the way the algorithm works? (I expected that higher precision would lead to better accuracy.)
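To show why I expected doubles to help, here is a toy accumulation sketch (not XGBoost code, and nothing to do with the observed accuracy drop): summing many small gradient-like values in single precision drifts noticeably once the running sum is large, while a double-precision accumulator stays close to the true value.

```python
import numpy as np

# Toy illustration: accumulate a million small gradient-like values
# of 0.1 in single vs double precision and compare the drift.
x32 = np.float32(0.1)      # 0.1 rounded to the nearest float32
acc32 = np.float32(0.0)    # single-precision accumulator
acc64 = 0.0                # Python float is a C double
for _ in range(1_000_000):
    acc32 = acc32 + x32    # rounding error grows as the sum grows
    acc64 += float(x32)    # double keeps ~15-16 significant digits

true_sum = 1_000_000 * float(x32)
err32 = abs(float(acc32) - true_sum)
err64 = abs(acc64 - true_sum)
print(f"float32 accumulator error: {err32:.4f}")  # large drift
print(f"float64 accumulator error: {err64:.2e}")  # tiny
```

Given sketches like this, the 73% vs 69% result surprised me, since the double build should be at least as numerically accurate per operation.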