Hello, I'm building a custom objective function and a corresponding evaluation metric in R for a binary classification problem in which instance weights are important (XGBoost version 1.3.2.1). Like a lot of people, I'm first trying to replicate the built-in behaviour, starting from the log-likelihood loss example:

# this is loglikelihood loss
logregobj <- function(preds, dtrain) {
  labels <- getinfo(dtrain, "label")
  preds <- 1 / (1 + exp(-preds))
  grad <- preds - labels
  hess <- preds * (1 - preds)
  return(list(grad = grad, hess = hess))
}

To account for the instance weights, I'm using:

logregobjWEIGHT <- function(preds, dtrain, WEIGHT) {
  labels <- getinfo(dtrain, "label")
  preds <- 1 / (1 + exp(-preds))
  grad <- (preds - labels) * WEIGHT
  hess <- preds * (1 - preds) * WEIGHT
  return(list(grad = grad, hess = hess))
}

Is this accurate?
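Alternatively, since xgboost only ever calls the objective as obj(preds, dtrain), I wondered whether I should read the weights back out of the DMatrix instead of passing them as a third argument. A sketch of what I mean, assuming the weights were attached to the DMatrix (via the weight argument of xgb.DMatrix or via setinfo):

# sketch: pull instance weights from the DMatrix rather than an extra argument
logregobjWeighted <- function(preds, dtrain) {
  labels <- getinfo(dtrain, "label")
  weights <- getinfo(dtrain, "weight")   # weights stored in the DMatrix
  preds <- 1 / (1 + exp(-preds))
  grad <- (preds - labels) * weights     # weighted gradient
  hess <- preds * (1 - preds) * weights  # weighted hessian
  return(list(grad = grad, hess = hess))
}

Is that the more idiomatic way to do it?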

Now for the eval_metric part, the code for "error" is:

# user defined evaluation function, returns a pair metric_name, result
# NOTE: with a customized objective function, the prediction values are margins
evalerror <- function(preds, dtrain) {
  labels <- getinfo(dtrain, "label")
  err <- as.numeric(sum(labels != (preds > 0))) / length(labels)
  return(list(metric = "error", value = err))
}

How should I modify the eval_metric code above to account for the fact that:

- I will be using a custom objective function,
- I need to take instance weights into account?
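For reference, here is my tentative weighted version, again assuming the weights can be read back with getinfo (the metric name "werror" is just my own label). Since preds are margins under a custom objective, thresholding at 0 should match thresholding the sigmoid probability at 0.5:

evalerrorWeighted <- function(preds, dtrain) {
  labels <- getinfo(dtrain, "label")
  weights <- getinfo(dtrain, "weight")
  # preds are margins with a custom objective: margin > 0 <=> probability > 0.5
  err <- sum(weights * (labels != (preds > 0))) / sum(weights)
  return(list(metric = "werror", value = err))
}

Is a weight-normalized misclassification rate like this the right way to mirror the built-in "error" metric?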

At this stage my goal is only to replicate XGBoost's built-in log-likelihood loss as an objective function and "error" as an eval metric.

Sorry if this is very naive (it's my second day using XGBoost); any help is welcome. Many thanks!