# Where to find code used for Monotonic constraints tutorial (perform regression on linear trend + sine wave + noise)

#1

I tried to reproduce the regression illustrated in the Monotonic constraints tutorial:

I generate points with a linear trend + sine wave + random noise, shuffle them, and call XGBoost.train() on a subsample. I then use the trained booster to predict the points in the remaining test sample.

The issue is that I obtain a constant predicted value for all points in the test subsample, not the nice trend + sine wave shown in the tutorial.
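For reference, the data generation and train/test split I describe can be sketched as follows (plain Java, no XGBoost dependency; the slope 0.05, noise scale 0.5, and 80/20 split are arbitrary choices of mine, not the tutorial's actual parameters):

```java
import java.util.Random;

// Generate points on a linear trend + sine wave + Gaussian noise,
// shuffle the indices, and split into train/test subsamples.
public class SineTrendData {

    // Returns {xTrain, yTrain, xTest, yTest}
    public static float[][] generate(int n, double trainFraction, long seed) {
        Random rnd = new Random(seed);
        float[] x = new float[n];
        float[] y = new float[n];
        for (int i = 0; i < n; i++) {
            float xi = i * 0.1f;
            x[i] = xi;
            // linear trend + sine wave + noise (coefficients are arbitrary)
            y[i] = (float) (0.05 * xi + Math.sin(xi) + 0.5 * rnd.nextGaussian());
        }
        // Fisher-Yates shuffle of indices so train/test both cover the x range
        int[] idx = new int[n];
        for (int i = 0; i < n; i++) idx[i] = i;
        for (int i = n - 1; i > 0; i--) {
            int j = rnd.nextInt(i + 1);
            int tmp = idx[i]; idx[i] = idx[j]; idx[j] = tmp;
        }
        int nTrain = (int) (n * trainFraction);
        float[] xTrain = new float[nTrain], yTrain = new float[nTrain];
        float[] xTest = new float[n - nTrain], yTest = new float[n - nTrain];
        for (int i = 0; i < n; i++) {
            if (i < nTrain) { xTrain[i] = x[idx[i]]; yTrain[i] = y[idx[i]]; }
            else { xTest[i - nTrain] = x[idx[i]]; yTest[i - nTrain] = y[idx[i]]; }
        }
        return new float[][] { xTrain, yTrain, xTest, yTest };
    }

    public static void main(String[] args) {
        float[][] split = generate(500, 0.8, 42L);
        // prints "train size: 400, test size: 100"
        System.out.println("train size: " + split[0].length
                + ", test size: " + split[2].length);
    }
}
```

The flat `xTrain`/`yTrain` arrays can then be fed to the single-column `DMatrix` constructor for training.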

The params I use are:

```java
params.put("eta", 0.1);
params.put("max_depth", 3);
params.put("silent", 0);
params.put("objective", "reg:linear");
```

Would it be possible to obtain the code used to generate the charts shown in the tutorial (parameters used, training set, and test set)?

Thanks!

#2

It's not exactly what you are asking, but this code works for me (Java); the regression estimates f(x) to fit y = sin(x) + N(0, 1):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

import ml.dmlc.xgboost4j.java.Booster;
import ml.dmlc.xgboost4j.java.DMatrix;
import ml.dmlc.xgboost4j.java.XGBoost;

Map<String, Object> params = new HashMap<String, Object>() {
    {
        put("eta", 0.3);
        put("max_depth", 5);
        put("silent", 0);
        put("objective", "reg:linear");
        put("booster", "gbtree");
    }
};

// Generate training data: y = sin(x) + standard Gaussian noise
int N = 500;
float[] x = new float[N];
float[] y = new float[N];
Random rnd = new Random();
for (int i = 0; i < N; i++) {
    float x_i = i * 0.1f;
    x[i] = x_i;
    y[i] = (float) Math.sin(x_i) + (float) rnd.nextGaussian();
}

// Single-feature DMatrix: N rows, 1 column
DMatrix trainMat = new DMatrix(x, N, 1);
trainMat.setLabel(y);
Map<String, DMatrix> watches = new HashMap<>();
watches.put("train", trainMat);
Booster booster = XGBoost.train(trainMat, params, 20, watches, null, null);
float[][] y_pred = booster.predict(trainMat);  // one prediction per row
```
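To sanity-check that the fit is not collapsing to a constant, one could compare predictions against labels with a small RMSE helper (plain Java, no XGBoost dependency; `rmse` is my own hypothetical name). Note that `booster.predict` returns `float[][]` with one row per sample, so the prediction for row `i` is `y_pred[i][0]` and would need to be flattened first:

```java
// Root-mean-squared error between predictions and labels. On sin-wave data,
// a constant prediction would leave an RMSE near the wave's amplitude,
// while a real fit should do noticeably better.
public class Rmse {
    public static double rmse(float[] pred, float[] label) {
        double sum = 0.0;
        for (int i = 0; i < pred.length; i++) {
            double d = pred[i] - label[i];
            sum += d * d;
        }
        return Math.sqrt(sum / pred.length);
    }

    public static void main(String[] args) {
        float[] pred = {1f, 2f, 3f};
        float[] label = {1f, 2f, 5f};
        // sqrt(((1-1)^2 + (2-2)^2 + (3-5)^2) / 3) = sqrt(4/3) ≈ 1.1547
        System.out.println(rmse(pred, label));
    }
}
```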