OVO option for AUC in eval_metric

I’m performing cross-validation on a multiclass classification model using AUC as the evaluation metric. I’ve seen in the docs that this defaults to a weighted one-vs-rest approach. I was wondering how/if I can change this to use one-vs-one macro-averaged AUC as the evaluation metric, like the one available in sklearn?
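For reference, this is the sklearn metric I mean — a minimal sketch of the one-vs-one macro-averaged AUC via `roc_auc_score(multi_class="ovo", average="macro")`, using a made-up toy dataset:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy 3-class problem: true labels and predicted class probabilities
# (rows sum to 1, one column per class).
y_true = np.array([0, 1, 2, 0, 1, 2])
y_prob = np.array([
    [0.8, 0.1, 0.1],
    [0.2, 0.6, 0.2],
    [0.1, 0.2, 0.7],
    [0.6, 0.3, 0.1],
    [0.3, 0.5, 0.2],
    [0.2, 0.2, 0.6],
])

# One-vs-one: average the AUC over every pair of classes.
# "macro" weights each pair equally, so it is insensitive to class imbalance,
# unlike the weighted one-vs-rest default described above.
auc_ovo = roc_auc_score(y_true, y_prob, multi_class="ovo", average="macro")
print(auc_ovo)  # every sample is ranked correctly here, so this prints 1.0
```

Ideally I’d like the equivalent of this computed at each boosting/CV round via `eval_metric`, rather than only once after the fact.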