Multiple quantile regression that preserves monotonicity (non-crossing condition)

Hello, while using XGBoost to estimate multiple quantiles, I have repeatedly run into the issue that monotonicity between the estimated quantiles is not guaranteed (quantile crossing). To address this, I implemented a custom loss together with monotone constraints so that the quantiles satisfy a non-crossing condition, inspired by the references below. I am posting this topic to share my solution.

Basically, I only used features that XGBoost already provides. Conceptually, the training data (explanatory and response variables) is duplicated once for each input alpha (the target quantile levels), and each alpha is appended as an additional feature column, following Cannon's research. Next, a custom objective computes the gradient of the composite quantile loss on this stacked data, as in the references, with the Hessian set to 1 (a first-order approximation). Finally, by placing an increasing monotone constraint on the alpha column, the predicted quantiles are non-decreasing in alpha for any new data, even for alpha values that differ from the training alphas. A sketch of the idea is shown below.
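
To make the steps concrete, here is a minimal sketch of the idea, not the code from the repository: the data-stacking step, a custom pinball-loss gradient with the Hessian fixed to 1, and an increasing monotone constraint on the appended alpha column. The names `stack_with_alpha` and `make_objective` are illustrative, and the toy data is made up.

```python
import numpy as np
import xgboost as xgb

def stack_with_alpha(X, y, alphas):
    # Duplicate (X, y) once per alpha and append alpha as the last feature column.
    n = len(y)
    X_rep = np.vstack([X] * len(alphas))
    alpha_col = np.repeat(alphas, n).reshape(-1, 1)
    y_rep = np.tile(y, len(alphas))
    return np.hstack([X_rep, alpha_col]), y_rep

def make_objective(alpha_vec):
    # Composite quantile (pinball) loss: each stacked row uses its own alpha.
    # grad = d/du [(y - u) * (alpha - 1{y - u < 0})]; Hessian fixed to 1.
    def obj(preds, dtrain):
        y = dtrain.get_label()
        grad = np.where(y > preds, -alpha_vec, 1.0 - alpha_vec)
        hess = np.ones_like(grad)
        return grad, hess
    return obj

# Toy data
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(500, 1))
y = np.sin(3.0 * X[:, 0]) + rng.normal(0.0, 0.2, size=500)
alphas = np.array([0.1, 0.3, 0.5, 0.7, 0.9])

X_stacked, y_stacked = stack_with_alpha(X, y, alphas)
dtrain = xgb.DMatrix(X_stacked, label=y_stacked)

params = {
    # 0 = original features unconstrained, +1 = increasing in the alpha column
    "monotone_constraints": "(" + ",".join(["0"] * X.shape[1] + ["1"]) + ")",
    "learning_rate": 0.1,
    "max_depth": 4,
}
booster = xgb.train(params, dtrain, num_boost_round=200,
                    obj=make_objective(X_stacked[:, -1]))
```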

I created a simple example to test the non-crossing condition; when visualized, the result looks like the figure below.
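
For reference, assuming the sketch above, a quick non-crossing check could look like this (again illustrative, not the repository's actual test): predict at alpha levels that were not among the training alphas and verify the quantiles stay ordered.

```python
# Predict at alpha levels that were NOT in the training alphas and verify
# that the predicted quantiles are non-decreasing in alpha, row by row.
X_test = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
test_alphas = [0.2, 0.4, 0.6, 0.8]

preds = np.column_stack([
    booster.predict(xgb.DMatrix(np.hstack([X_test, np.full((len(X_test), 1), a)])))
    for a in test_alphas
])

# Non-crossing holds if each row of predictions is sorted along the alpha axis.
assert np.all(np.diff(preds, axis=1) >= 0), "quantile crossing detected"
```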

Here is the link: https://github.com/RektPunk/xgboost-monotone-quantile. I am very curious how others have approached this problem and would love to hear any new ideas, insights, or feedback.

Update: the repository has moved to https://github.com/RektPunk/monotone-quantile-tree, and it now also supports LightGBM.