XGBoost LambdaRank rank_obj.cc: why is the second-order gradient multiplied by 2.0?

Can someone explain the XGBoost LambdaRank objective?
Why does the second-order gradient need to be multiplied by 2.0f?
And what does XGBoost actually optimize for LambdaMART?

      bst_float p = common::Sigmoid(pos.pred - neg.pred);
      bst_float g = p - 1.0f;
      bst_float h = std::max(p * (1.0f - p), eps);
      // accumulate gradient and hessian in both pid, and nid
      gpair[pos.rindex] += GradientPair(g * w, 2.0f * w * h);
      gpair[neg.rindex] += GradientPair(-g * w, 2.0f * w * h);

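For context, here is my sketch of where g and h come from, assuming the standard pairwise logistic (cross-entropy) loss that LambdaRank is built on; note the factor 2.0f in the snippet is exactly the part I am asking about, since it does not fall out of this derivation:

      % pairwise logistic loss for a (pos, neg) pair with scores s^+ and s^-
      \ell = -\log \sigma(s^{+} - s^{-}), \qquad p = \sigma(s^{+} - s^{-})

      % first-order gradient w.r.t. s^+ (matches g = p - 1.0f above)
      \frac{\partial \ell}{\partial s^{+}} = p - 1

      % second-order gradient (matches h = p * (1.0f - p) above)
      \frac{\partial^{2} \ell}{\partial (s^{+})^{2}} = p\,(1 - p)
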
My understanding of LambdaMART: first compute the first-order gradient, g = (p - 1.0f) * w;
then fit a regression tree to that gradient, minimizing (g - f(t))^2; the leaf value is then g/h, where h = std::max(p * (1.0f - p), eps) * w.
I don't know whether this understanding of LambdaMART is right.
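As a comparison point, here is what I understand XGBoost to optimize per boosting round (the second-order Taylor objective from the XGBoost paper, not something I have verified in rank_obj.cc), which is not a plain squared-error fit to the gradient:

      % second-order approximation of the objective for the round-t tree f_t
      \mathrm{obj}^{(t)} \approx \sum_{i} \left[ g_i f_t(x_i) + \tfrac{1}{2} h_i f_t(x_i)^{2} \right] + \Omega(f_t)

      % optimal value of leaf j over its instance set I_j
      w_j^{*} = -\frac{\sum_{i \in I_j} g_i}{\sum_{i \in I_j} h_i + \lambda}

If that is the right picture, the leaf value would be the regularized -g/h rather than the g/h I wrote above.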

Thanks in advance to anyone who replies!