I have tried GridSearchCV and BayesSearchCV for tuning my LightGBM model (binary classification).

I used 10 iterations and set **scoring="roc_auc"**.
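For context, my tuning setup looks roughly like this. It is a minimal sketch: the search-space ranges, fold count, and the synthetic data are placeholders standing in for my real values.

```python
from lightgbm import LGBMClassifier
from skopt import BayesSearchCV
from sklearn.datasets import make_classification

# Synthetic stand-in for my real training data.
X_train, y_train = make_classification(n_samples=1000, random_state=42)

# Hypothetical search space; my real ranges differ.
search_space = {
    "max_depth": (3, 12),
    "learning_rate": (0.01, 0.3, "log-uniform"),
    "num_leaves": (20, 200),
    "n_estimators": (50, 300),
}

opt = BayesSearchCV(
    LGBMClassifier(objective="binary"),
    search_space,
    n_iter=10,          # the 10 iterations mentioned above
    scoring="roc_auc",  # the score being optimized
    cv=5,               # placeholder fold count
    random_state=42,
)
opt.fit(X_train, y_train)
print(opt.best_score_, opt.best_params_)
```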

In the **first** iteration, I got:

```
best score (e.g. 0.71...)
and best params (e.g. max_depth: 10, learning_rate: 0.17..., num_leaves: 175, n_estimators: 176, ...)
```

In the **10th** iteration, I got:

```
best score (e.g. 0.72...)
and best params (e.g. max_depth: 9, learning_rate: 0.19..., num_leaves: 168, n_estimators: 172, ...)
```

Then I trained my LightGBM classifier with the **10th** iteration's params (which are supposed to give the best score!). I got:

```
AUC : (0.7541... , 0.6467...)
Accuracy: 0.7338..
RMSE: 0.5216..
```
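The numbers above come from an evaluation step that looks roughly like the sketch below. I am reading the two AUC values as (train AUC, test AUC); the split, the data, and the exact param values are placeholders.

```python
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score, accuracy_score, mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for my real data, split into train and test.
X, y = make_classification(n_samples=1000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Params taken from the 10th iteration's report (placeholder values).
clf = LGBMClassifier(max_depth=9, learning_rate=0.19,
                     num_leaves=168, n_estimators=172)
clf.fit(X_train, y_train)

train_proba = clf.predict_proba(X_train)[:, 1]
test_proba = clf.predict_proba(X_test)[:, 1]

print("AUC :", (roc_auc_score(y_train, train_proba),
                roc_auc_score(y_test, test_proba)))
print("Accuracy:", accuracy_score(y_test, clf.predict(X_test)))
print("RMSE:", np.sqrt(mean_squared_error(y_test, test_proba)))
```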

Then, out of curiosity, I trained my classifier with the best params of the **first** iteration (which was considered the worse score). I was surprised by the result:

```
AUC : (0.7545... , 0.6592....)
Accuracy: 0.7332..
RMSE: 0.5152..
```

Since I set scoring to roc_auc, the AUC with the **10th** iteration's params should be better than with the **first** iteration's, but I got the opposite.

I supposed that the **10th** iteration's train AUC of **0.7541** was considered better than the **1st**'s **0.7545** because of overfitting. But when I checked the **3rd** and **5th** iterations, I got **0.7532** for the **3rd** and **0.7548** for the **5th**.
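For reference, this is how the per-iteration cross-validated scores can be inspected (a sketch, assuming `opt` is the fitted `BayesSearchCV` object from the first snippet):

```python
import pandas as pd

# One row per search iteration; mean_test_score is the
# cross-validated roc_auc that the search optimizes, which is
# not the same number as the train/test AUC computed above.
results = pd.DataFrame(opt.cv_results_)
print(results[["params", "mean_test_score", "std_test_score"]])
```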

So I don't understand what the best score in this algorithm means, or why it takes values like the ones in the situation described above. (I have since tried many times with other tuning parameters, but I get the same behavior. I don't know where the problem is exactly.)