Custom version of squared error loss behaving unexpectedly in XGBoost

I am trying to write a custom loss for XGBoost.

As a proof of concept, I’m trying to reimplement reg:squarederror as a custom objective. This would seem to be completely straightforward, but the results only match for learning_rate=1.0, not for any other value (including the default 0.3). I’m at my wits’ end. If anyone knows the right place to post this, please let me know!

Here is a repro:

import xgboost as xgb
import numpy as np

# custom objective: gradient and hessian of the squared error loss
def custom_squared_error(label, predt):
    grad = predt - label        # first derivative w.r.t. predt
    hess = np.ones(len(label))  # second derivative is constant 1
    return grad, hess
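
# Derivation (mine, not from the original post): for the loss
# L(y, p) = 0.5 * (p - y)^2, the first derivative w.r.t. p is p - y
# and the second derivative is 1, which is exactly what is returned.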

# make data
n = 100
p = 7

X_train = np.random.randn(n, p)
y_train = np.random.randn(n)

# the two models only agree when lr is 1.0; any other value diverges
lr = 1.0
max_depth = 3
reg_alpha = 0
reg_lambda = 0
n_estimators = 10

model1 = xgb.XGBRegressor(objective=custom_squared_error,
                          max_depth=max_depth,
                          learning_rate=lr,
                          n_estimators=n_estimators,
                          reg_alpha=reg_alpha,
                          reg_lambda=reg_lambda)

model2 = xgb.XGBRegressor(objective='reg:squarederror',
                          max_depth=max_depth,
                          learning_rate=lr,
                          n_estimators=n_estimators,
                          reg_alpha=reg_alpha,
                          reg_lambda=reg_lambda)

# the per-round rmse logs below are identical for lr=1.0, but differ otherwise
model1.fit(X_train, y_train, eval_set=[(X_train, y_train)])
model2.fit(X_train, y_train, eval_set=[(X_train, y_train)])
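
# Sanity check (my addition, not in the original repro): compare the fitted
# predictions directly instead of eyeballing the eval logs.
print(np.allclose(model1.predict(X_train), model2.predict(X_train)))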

The fix: initialize base_score=0.0 for both models. The discrepancy comes from the intercept the boosters start from: with a built-in objective, recent XGBoost releases estimate base_score from the training labels, whereas with a custom objective it falls back to a fixed default. With learning_rate=1.0 the first tree can absorb that constant offset exactly (with reg_lambda=0, each leaf value is just the mean residual), so the models coincide anyway; with any smaller learning rate the offset is only partially corrected each round and the outputs never line up.
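
A minimal sketch of the fix, using the standard base_score parameter of XGBRegressor to pin both models to the same intercept:

model1 = xgb.XGBRegressor(objective=custom_squared_error,
                          base_score=0.0,
                          max_depth=max_depth,
                          learning_rate=lr,
                          n_estimators=n_estimators,
                          reg_alpha=reg_alpha,
                          reg_lambda=reg_lambda)

model2 = xgb.XGBRegressor(objective='reg:squarederror',
                          base_score=0.0,
                          max_depth=max_depth,
                          learning_rate=lr,
                          n_estimators=n_estimators,
                          reg_alpha=reg_alpha,
                          reg_lambda=reg_lambda)

model1.fit(X_train, y_train, eval_set=[(X_train, y_train)])
model2.fit(X_train, y_train, eval_set=[(X_train, y_train)])

With the intercept pinned, the gradients and hessians seen by the two boosters are identical in every round, so the rmse logs should match for any learning_rate.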