Does data width affect GPU memory usage and run time?

Hello, I am wondering what effect data width, i.e. the number of feature columns (xcols), has on GPU memory usage and training time. My hypothesis is that it should have a large impact, given that XGBoost is a decision tree algorithm. However, in my experiments I observed little to no effect, so I am wondering if I am doing something wrong. I am using the Python API. Here is my setup:

num_boost_round=1000
max_depth=6
input data: DMatrix
no early stopping
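
In case it helps, below is a minimal sketch of the kind of timing experiment I am running. The synthetic data, the row count, and the use of `tree_method="gpu_hist"` are just for illustration, not my exact code:

```python
import time
import numpy as np
import xgboost as xgb

# Fix the number of rows and vary the number of feature columns,
# timing a full GPU training run for each width.
n_rows = 100_000
params = {
    "max_depth": 6,
    "tree_method": "gpu_hist",        # GPU training (newer releases use
                                      # tree_method="hist" with device="cuda")
    "objective": "reg:squarederror",
}

rng = np.random.default_rng(0)
for n_cols in (10, 100, 1000):
    # Random float32 features and labels, purely illustrative
    X = rng.standard_normal((n_rows, n_cols), dtype=np.float32)
    y = rng.standard_normal(n_rows, dtype=np.float32)
    dtrain = xgb.DMatrix(X, label=y)

    start = time.perf_counter()
    xgb.train(params, dtrain, num_boost_round=1000)
    elapsed = time.perf_counter() - start
    print(f"{n_cols} columns: {elapsed:.1f} s")
```

With this setup I would have expected the 1000-column run to be noticeably slower and use noticeably more GPU memory than the 10-column run, but the differences I see are small.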