Optimizing XGBClassifier for discrimination power

Hi folks,

I am developing a credit risk decisioning model, i.e. a model that assesses the default risk of an incoming transaction and decides whether to accept it or not. Of course my dataset is imbalanced: the minority class (i.e. defaulted transactions) represents ~5% of my data.
What I care about is discrimination power rather than good probabilities, because I will use the model to make decisions given an acceptance-rate target (e.g. I want to accept 90% of incoming transactions), not to make financial predictions (in which case well-calibrated probabilities would be important).
Because of that, I evaluate my model with ROC AUC (or PR AUC; I am still unsure which would be best). However, I read that even though I evaluate my model with AUC, I should still keep binary:logistic as the objective function of my XGBClassifier. The reasons are unclear to me, but one difficulty I can foresee with an “AUC objective function” is that AUC is an aggregate metric over a whole set of samples rather than an individual one, so it is not possible to define a loss function (let alone a differentiable one) that gives the “AUC loss” of one given sample, and I understand that XGBoost needs an individual loss to compute the gradients at the leaf level.

If that is the case, it means the only way to optimize my model for discrimination power is to use AUC as the evaluation metric during hyper-parameter optimization. I find that quite disappointing because, in my experience, hyper-parameter optimization is not a real game changer and usually only earns a few basis points of AUC.

Therefore my questions are:

  1. Is it possible to redefine the objective function to optimize XGBClassifier for discrimination power rather than probabilities?
  2. If not, what are other ways to “boost” the discrimination power of my model besides hyper-parameter optimization?
  3. Conceptually, the ability to discriminate between two samples is close to the task of “learning to rank”. Therefore, is there a way to use XGBRanker for standard classification? Have you tried it?

PS: I don’t think it is important here, but mentioning it just in case: I actually only care about partial AUC (https://en.wikipedia.org/wiki/Partial_Area_Under_the_ROC_Curve), because regions where the False Positive Rate is too high (say above 20%) are not applicable in my case.
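For reference, scikit-learn’s roc_auc_score already supports this via its max_fpr argument, which returns the standardized (McClish) partial AUC over FPR in [0, max_fpr]; a toy illustration with synthetic scores:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
y = np.repeat([0, 1], [950, 50])          # ~5% positives, like my data
scores = y * 1.5 + rng.normal(size=1000)  # synthetic, partially separated scores

full_auc = roc_auc_score(y, scores)
# Standardized partial AUC restricted to FPR <= 0.2, matching the
# constraint that operating points above 20% FPR are irrelevant to me.
partial_auc = roc_auc_score(y, scores, max_fpr=0.2)
```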