n_jobs=-1 no longer uses all cores?

Edit: sorry, it seems like the issue is actually with n_jobs=-1.

Hello,

I recently upgraded xgboost to 1.1.1. After doing so, I noticed that training was not using more than one core at a time for the XGBRanker, XGBClassifier, xgboost.train({'objective':'binary:logistic'}), and xgboost.train({'objective':'rank:ndcg'}) training methods, even when I set n_jobs=-1. However, multiple cores are used when I set n_jobs=3, n_jobs=5, etc.

I was under the impression that n_jobs=-1 meant using all available cores? At least that was the case when I used 0.78 and 0.90. I just tried upgrading to 1.2.1 with the same results: n_jobs=-1 only uses 1 core. I've tried it inside a Docker container as well as on the native machine, both running Ubuntu 20.04.

Commands:
model = XGBRanker(objective='rank:ndcg', verbosity=2, n_jobs=-1, n_estimators=10) uses 1 core only
model = XGBClassifier(verbosity=2, n_jobs=-1, n_estimators=10) uses 1 core only
model_ = xgb.train({'objective':'rank:ndcg', 'n_jobs':-1, 'verbosity':2, 'num_boost_round':10}, dtrain=dmat) uses 1 core only
model = XGBRFClassifier(verbosity=2, n_jobs=3, n_estimators=10) uses 3 cores as expected
model = XGBRFClassifier(verbosity=2, n_jobs=-1, n_estimators=10) uses 1 core only
model = XGBRFClassifier(verbosity=2, n_jobs=5, n_estimators=10) uses 5 cores
model = XGBClassifier(verbosity=2, n_jobs=5, n_estimators=10) uses 5 cores
model = XGBRanker(objective='rank:ndcg', verbosity=2, n_jobs=8, n_estimators=10) uses 8 cores
Specifying no argument, model = XGBRFClassifier(verbosity=2, n_estimators=10) uses 8 cores, which is all I have.
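
For anyone who wants to reproduce this, here is a minimal sketch of the kind of script I was running while watching core usage in htop. The dataset is synthetic (sklearn's make_classification) and the sizes are illustrative, not my actual data:

from sklearn.datasets import make_classification
from xgboost import XGBClassifier

# Synthetic data, large enough that fit() keeps cores busy long enough to observe.
X, y = make_classification(n_samples=200_000, n_features=50, random_state=0)

# On 1.1.1/1.2.1 this keeps only one core busy; an explicit positive
# n_jobs (3, 5, 8, ...) uses that many cores as expected.
model = XGBClassifier(verbosity=2, n_jobs=-1, n_estimators=10)
model.fit(X, y)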

Did you try version 1.2.0? Also, we recently merged a fix for thread configuration (https://github.com/dmlc/xgboost/pull/6186), so you may also want to try the 1.3.0 release candidate.

Install the release candidate with pip install xgboost==1.3.0rc1.
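
After installing, it's worth confirming which version is actually active in your environment, e.g.:

import xgboost
print(xgboost.__version__)  # should report 1.3.0rc1 if the install took effect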

Hi,

On 1.2.0:
model = XGBClassifier(verbosity=2, n_jobs=5, n_estimators=10) uses 5 cores
model = XGBClassifier(verbosity=2, n_jobs=-1, n_estimators=10) uses 1 core
model = XGBClassifier(verbosity=2, n_jobs=None, n_estimators=10) seems to use whatever the previous core configuration was: when I ran it after n_jobs=-1 it used 1 core, when I ran it after n_jobs=5 it used 5 cores, and when I ran it after n_jobs=-1 again it went back to 1 core (see the sketch after this list). I didn't check whether this is also the case for 1.2.1, but it might be.
model = XGBRFClassifier(n_jobs=-1, verbosity=2, n_estimators=10) uses 1 core
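
Here is the sequence I used to observe the inheritance behavior, in a single Python session, watching htop during each fit (same illustrative synthetic data as in my first post):

from sklearn.datasets import make_classification
from xgboost import XGBClassifier

X, y = make_classification(n_samples=200_000, n_features=50, random_state=0)

# n_jobs=None appears to inherit whatever the previous fit configured:
XGBClassifier(verbosity=2, n_jobs=5, n_estimators=10).fit(X, y)     # 5 cores busy
XGBClassifier(verbosity=2, n_jobs=None, n_estimators=10).fit(X, y)  # still 5 cores
XGBClassifier(verbosity=2, n_jobs=-1, n_estimators=10).fit(X, y)    # drops to 1 core
XGBClassifier(verbosity=2, n_jobs=None, n_estimators=10).fit(X, y)  # back to 1 core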

I'll report back with 1.3.0rc1 tomorrow.


Following up with 1.3.0rc1:

model = XGBRFClassifier(n_jobs=-1, verbosity=2, n_estimators=10) uses all cores.
model = XGBClassifier(verbosity=2, n_jobs=-1, n_estimators=10) uses all cores. Looks like it's not a problem on this version!
model = XGBClassifier(verbosity=2, n_jobs=1, n_estimators=10) uses 1 core as expected.
model = XGBClassifier(verbosity=2, n_jobs=None, n_estimators=10) uses all cores even when run immediately after an n_jobs=1 fit, so it looks like the stale-configuration issue from 1.2.x was fixed as well.

So it looks like this problem was fixed? It might be worth putting a note about the n_jobs behavior in the release notes for the affected versions.

Great! Yes, I plan to include it in the release notes for version 1.3.0.

Thank you, Philip! I really appreciate the work that goes into this software.