Is training multiple models in parallel threads supported (Python)?

I’d like to train multiple boosters in parallel (ideally using threads), so I’d like to know whether xgb.train() releases the GIL and, more importantly, whether xgboost supports this usage pattern (i.e., it is thread safe and won’t break anything).
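
For concreteness, here is a minimal sketch of the pattern I have in mind (the synthetic data, parameter choices, and helper names are just placeholders for illustration, not a claim about how it should be done):

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np
import xgboost as xgb


def make_dmatrix(seed):
    # Build a small synthetic dataset so each model has its own data.
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(1000, 10))
    y = (X[:, 0] + rng.normal(scale=0.1, size=1000) > 0).astype(int)
    return xgb.DMatrix(X, label=y)


def train_one(seed):
    # Each call is intended to train an independent booster.
    dtrain = make_dmatrix(seed)
    params = {"objective": "binary:logistic", "nthread": 1}
    return xgb.train(params, dtrain, num_boost_round=50)


# Question: is running several xgb.train() calls concurrently like this safe?
with ThreadPoolExecutor(max_workers=4) as pool:
    boosters = list(pool.map(train_one, range(4)))
```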

Thanks for your time.