How is XGBRFRegressor intended to work with early stopping?

My understanding is that for an XGBRegressor, the evaluation metric is calculated on the eval_set after each round of boosting, and early stopping is triggered if there is no improvement after a certain number of rounds. But what is the intended behaviour for XGBRFRegressor with early stopping? I’ve tried various combinations of num_parallel_tree and num_boost_round, and early stopping never seems to have any effect: the model always fits num_parallel_tree * num_boost_round trees, and the evaluation metric is only ever printed once.
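
For contrast, here’s a minimal sketch of the behaviour I expect from a plain XGBRegressor (on the xgboost version I’m using, early_stopping_rounds is still passed to fit; the parameter values are just illustrative):

from xgboost import XGBRegressor
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

X, y = make_regression(random_state=7)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=7)

# Plain gradient boosting: one tree per round, up to 1000 rounds
booster = XGBRegressor(n_estimators=1000)
booster.fit(
    X_train,
    y_train,
    eval_set=[(X_test, y_test)],  # metric evaluated here after every round
    early_stopping_rounds=10,     # stop if no improvement for 10 rounds
    verbose=True,
)
print(booster.best_iteration)     # typically far fewer than 1000 rounds

Here the metric is printed once per round and training stops early, which is exactly what I don’t see with XGBRFRegressor.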

Am I doing something wrong? Here’s an example of what I’ve tried:

from xgboost import XGBRegressor, XGBRFRegressor
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

X, y = make_regression(random_state=7)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=7)

# 10 trees per boosting round, and (in theory) up to 1000 rounds
forest = XGBRFRegressor(num_parallel_tree=10, num_boost_round=1000, verbosity=3)

forest.fit(
    X_train,
    y_train,
    eval_set=[(X_test, y_test)],
    early_stopping_rounds=10,
    verbose=True,
)
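
One quick way to check what actually gets fitted (assuming get_dump returns one entry per individual tree, which is how I’m reading it):

# Count the individual trees in the fitted model; in my runs this always
# matches the full num_parallel_tree * num_boost_round, with no early exit.
n_trees = len(forest.get_booster().get_dump())
print(n_trees)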

I asked about this on StackOverflow and received a useful answer:

In short, num_boost_round is always overridden to 1 when using the sklearn API for XGBoost random forests, so it isn’t possible to use early stopping.
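
Following on from that answer, the workaround seems to be to skip the sklearn wrapper and use the native xgboost.train interface, where num_boost_round isn’t overridden. A sketch (the forest-style parameter values follow the XGBoost random-forest tutorial; treat the exact numbers as assumptions):

import xgboost as xgb
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

X, y = make_regression(random_state=7)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=7)

dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)

params = {
    "num_parallel_tree": 10,   # 10 trees per boosting round (a forest per round)
    "learning_rate": 1.0,      # forest-style settings, per the XGBoost RF tutorial
    "subsample": 0.8,
    "colsample_bynode": 0.8,
}

# Here num_boost_round is honoured, so early stopping can actually trigger.
bst = xgb.train(
    params,
    dtrain,
    num_boost_round=1000,
    evals=[(dtest, "test")],
    early_stopping_rounds=10,
)

With this, the eval metric is printed once per round and training stops as soon as it plateaus, which is the behaviour I was originally after.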