When reading the original XGBoost paper, feature subsampling (at whatever level is considered: tree, level, or node) appears to be one of the method's key features, both to prevent overfitting and to speed up training. However, no feature subsampling is performed by default, at least in the current Python implementation. Could someone give insights on this? Does it relate to additional findings made after the paper was published?
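For context, here is a small sketch of the three column-subsampling parameters exposed by the XGBoost Python API; to my understanding they all default to 1.0, which means every feature is available at every split (the exact values are an assumption based on the current documentation and may differ across versions):

```python
# Column-subsampling parameters in the XGBoost Python API and their
# (assumed) default values. A value of 1.0 means "use all features".
default_colsample = {
    "colsample_bytree": 1.0,   # fraction of features sampled once per tree
    "colsample_bylevel": 1.0,  # fraction sampled per tree depth level
    "colsample_bynode": 1.0,   # fraction sampled per split (node)
}

# With all three at 1.0, no feature subsampling happens by default.
no_subsampling = all(v == 1.0 for v in default_colsample.values())

# To actually enable it, you would pass values below 1.0, e.g.:
#   params = {"colsample_bytree": 0.8, "colsample_bynode": 0.5}
#   booster = xgb.train(params, dtrain)   # hypothetical usage
```

The three parameters are cumulative: the per-node fraction is drawn from the per-level sample, which is itself drawn from the per-tree sample.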