DaskXGBRegressor with YarnCluster client seems to only launch jobs on one worker

I tried to train a DaskXGBRegressor on a YARN cluster with 3 nodes:

    from dask_yarn import YarnCluster
    from dask.distributed import Client

    with YarnCluster(environment='environment.tar.gz', worker_vcores=10, worker_memory="50GiB") as cluster:
        cluster.scale(3)
        with Client(cluster) as client:
            main(client)

    import xgboost as xgb

    xgb.dask.train(client,
                   {'verbosity': 1,
                    'tree_method': 'hist',
                    'n_estimators': 500,
                    'max_depth': 50,
                    'n_jobs': -1,
                    'random_state': 2,
                    'learning_rate': 0.1,
                    'min_child_weight': 1,
                    'seed': 0,
                    'subsample': 0.8,
                    'colsample_bytree': 0.8,
                    'gamma': 0,
                    'reg_alpha': 0,
                    'reg_lambda': 1},
                   dtrain,
                   num_boost_round=500, evals=[(dtrain, 'train')])
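
My current guess is that the work distribution follows the partitions of the data behind the DaskDMatrix, so if dtrain were backed by a single-partition collection everything would land on one worker. For illustration only (the path and column names below are placeholders, not my real loading code), the construction I have in mind looks roughly like:

    import dask.dataframe as dd
    import xgboost as xgb

    # Hypothetical input -- path and column names are made up.
    df = dd.read_parquet('hdfs:///path/to/train.parquet')
    X = df.drop(columns=['target'])
    y = df['target']

    # xgb.dask.train runs on the workers that hold these partitions,
    # so a single-partition DataFrame would keep all the work on one worker.
    dtrain = xgb.dask.DaskDMatrix(client, X, y)
    print(X.npartitions)  # number of partitions available to spread across workers

Is that the right mental model?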

I ran "top" on each node. On one node, CPU usage for the python process is around 100%, while on the other two nodes it is below 10%, and there are no errors in the logs. Does this mean the training only happened on one node, and if so, is that expected behavior?
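
Also, is checking "top" the right way to verify this, or should I be looking at the Dask scheduler instead? Something along these lines is what I had in mind (just a sketch):

    # How many workers does the scheduler actually see?
    print(len(client.scheduler_info()['workers']), 'workers registered')

    # Which worker holds which keys (e.g. the DaskDMatrix partitions)?
    for worker, keys in client.has_what().items():
        print(worker, len(keys), 'keys')

    # Threads available on each worker.
    print(client.nthreads())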