XGBoost: measuring n_jobs speedup (Python / scikit-learn interface) on an M1 Mac

Never mind: I was benchmarking n_jobs speedup, but the benchmark had a bug. Everything works as expected.
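For context, a minimal sketch of the kind of benchmark in question (the estimator, dataset size, and parameters here are assumptions, not the original script):

```python
import time

import numpy as np
from xgboost import XGBRegressor

# Synthetic regression data; size chosen so fit times are measurable.
rng = np.random.default_rng(0)
X = rng.random((100_000, 50))
y = rng.random(100_000)

# Time model fitting at several thread counts to measure speedup.
for n_jobs in (1, 2, 4, 8):
    model = XGBRegressor(n_estimators=100, tree_method="hist", n_jobs=n_jobs)
    start = time.perf_counter()
    model.fit(X, y)
    elapsed = time.perf_counter() - start
    print(f"n_jobs={n_jobs}: {elapsed:.2f}s")
```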

This matches what I have seen too, particularly with the linear booster. It seems to have a mind of its own, using as much or as little of the available CPU as it likes even when I set nthread=1. If I increase nthread, it instead errors out with a KMP error. That said, it is so fast with nthread=1 that this hasn't been an issue for me. The tree and dart boosters don't show the same idiosyncratic behavior: their CPU utilization scales with nthread. Note that I'm using R, not Python.

Are you setting n_jobs=-1 here? Can you try setting n_jobs=n_jobs instead?
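In other words, a hypothetical reconstruction of the oversight being pointed out (the original script is not shown, so the details here are assumptions):

```python
from xgboost import XGBRegressor

for n_jobs in (1, 2, 4, 8):
    # Bug: -1 is hard-coded, so every iteration uses all cores
    # and the loop variable is ignored, hiding any speedup.
    model = XGBRegressor(n_estimators=100, n_jobs=-1)

    # Fix: pass the loop variable so each run uses the intended thread count.
    model = XGBRegressor(n_estimators=100, n_jobs=n_jobs)
```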

Just realized that was the issue, dang! Sometimes you just need a second pair of eyes. Deleting the issue since it was such a rudimentary oversight on my part.