Hello,

I’m using the scikit-learn API of XGBoost’s XGBRanker, and my target labels are [0, 1]. For a given group (query), XGBRanker yields predictions that may include negative values, such as:

0.31
0.72
-0.27
0.56
-1.56
-2.51

I’d like to normalise these values so that the -2.51 document gets a very low, but non-zero, probability of having target label 1. The most commonly suggested solution is min-max normalisation (mapping -2.51 to 0 and 0.72 to 1), followed by dividing by the sum to obtain percentages/probabilities. But that doesn’t solve my problem: the -2.51 document still ends up with a probability of exactly 0.
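To make the issue concrete, here is a minimal sketch of the min-max-then-sum approach applied to the example scores above (plain Python, no XGBoost needed), showing that the minimum score always comes out as exactly zero:

```python
scores = [0.31, 0.72, -0.27, 0.56, -1.56, -2.51]

# Min-max normalisation: maps min(scores) -> 0.0 and max(scores) -> 1.0
lo, hi = min(scores), max(scores)
scaled = [(s - lo) / (hi - lo) for s in scores]

# Divide by the sum so the values form a probability distribution
total = sum(scaled)
probs = [s / total for s in scaled]

print(probs)
print(probs[-1])  # 0.0 -- the -2.51 document is stuck at zero probability
```

The last document is guaranteed probability 0 regardless of the actual score values, since min-max always sends the minimum to 0.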

I realise this question might be better suited to a general math/statistics audience, but I hoped someone here had encountered a similar problem with negative XGBoost outputs and found a solution.

Thanks