XGBoostError: Check failed: jenv->ExceptionOccurred when using sparse-vector input features instead of dense vectors in a ranking job

Environment:
CentOS release 6.9 (Final)
java 1.8.0_172
spark cluster: 2.4.3
scala:2.11.12
xgboost4j-spark:0.90

Hello. I have run a ranking job with the objective "rank:pairwise" on the "mq2008" dataset.
When I provide the features in dense vector format, it succeeds.
But when I provide the features in sparse vector format, training succeeds and prediction fails at "prediction.show".
It prints the following info.

WARN TaskSetManager: Lost task 0.0 in stage 13.0 (TID 2647, kg-dn-243, executor 3): ml.dmlc.xgboost4j.java.XGBoostError: [12:22:55] /xgboost/jvm-packages/xgboost4j/src/native/xgboost4j.cpp:146: [12:22:55] /xgboost/jvm-packages/xgboost4j/src/native/xgboost4j.cpp:68: Check failed: jenv->ExceptionOccurred(): 
Stack trace:
  [bt] (0) /data12/hadoop/hd_space/tmp/nm-local-dir/usercache/JimmyTest/appcache/application_1575501641580_606829/container_e01_1575501641580_606829_01_000009/tmp/libxgboost4j18311166920109842.so(_ZN4dmlc15LogMessageFatalD2Ev+0x22) [0x7f7ff2514f82]
  [bt] (1) /data12/hadoop/hd_space/tmp/nm-local-dir/usercache/JimmyTest/appcache/application_1575501641580_606829/container_e01_1575501641580_606829_01_000009/tmp/libxgboost4j18311166920109842.so(XGBoost4jCallbackDataIterNext+0xf89) [0x7f7ff2512cd9]
  [bt] (2) /data12/hadoop/hd_space/tmp/nm-local-dir/usercache/JimmyTest/appcache/application_1575501641580_606829/container_e01_1575501641580_606829_01_000009/tmp/libxgboost4j18311166920109842.so(_ZN7xgboost14NativeDataIter4NextEv+0x15) [0x7f7ff2521f15]
  [bt] (3) /data12/hadoop/hd_space/tmp/nm-local-dir/usercache/JimmyTest/appcache/application_1575501641580_606829/container_e01_1575501641580_606829_01_000009/tmp/libxgboost4j18311166920109842.so(_ZN7xgboost4data15SimpleCSRSource8CopyFromEPN4dmlc6ParserIjfEE+0x64) [0x7f7ff255c5e4]
  [bt] (4) /data12/hadoop/hd_space/tmp/nm-local-dir/usercache/JimmyTest/appcache/application_1575501641580_606829/container_e01_1575501641580_606829_01_000009/tmp/libxgboost4j18311166920109842.so(_ZN7xgboost7DMatrix6CreateEPN4dmlc6ParserIjfEERKSsm+0x363) [0x7f7ff25510b3]
  [bt] (5) /data12/hadoop/hd_space/tmp/nm-local-dir/usercache/JimmyTest/appcache/application_1575501641580_606829/container_e01_1575501641580_606829_01_000009/tmp/libxgboost4j18311166920109842.so(XGDMatrixCreateFromDataIter+0x134) [0x7f7ff25176e4]
  [bt] (6) /data12/hadoop/hd_space/tmp/nm-local-dir/usercache/JimmyTest/appcache/application_1575501641580_606829/container_e01_1575501641580_606829_01_000009/tmp/libxgboost4j18311166920109842.so(Java_ml_dmlc_xgboost4j_java_XGBoostJNI_XGDMatrixCreateFromDataIter+0x94) [0x7f7ff2510c84]
  [bt] (7) [0x7f80250186c7]


Stack trace:
  [bt] (0) /data12/hadoop/hd_space/tmp/nm-local-dir/usercache/JimmyTest/appcache/application_1575501641580_606829/container_e01_1575501641580_606829_01_000009/tmp/libxgboost4j18311166920109842.so(_ZN4dmlc15LogMessageFatalD2Ev+0x22) [0x7f7ff2514f82]
  [bt] (1) /data12/hadoop/hd_space/tmp/nm-local-dir/usercache/JimmyTest/appcache/application_1575501641580_606829/container_e01_1575501641580_606829_01_000009/tmp/libxgboost4j18311166920109842.so(XGBoost4jCallbackDataIterNext+0x11c1) [0x7f7ff2512f11]
  [bt] (2) /data12/hadoop/hd_space/tmp/nm-local-dir/usercache/JimmyTest/appcache/application_1575501641580_606829/container_e01_1575501641580_606829_01_000009/tmp/libxgboost4j18311166920109842.so(_ZN7xgboost14NativeDataIter4NextEv+0x15) [0x7f7ff2521f15]
  [bt] (3) /data12/hadoop/hd_space/tmp/nm-local-dir/usercache/JimmyTest/appcache/application_1575501641580_606829/container_e01_1575501641580_606829_01_000009/tmp/libxgboost4j18311166920109842.so(_ZN7xgboost4data15SimpleCSRSource8CopyFromEPN4dmlc6ParserIjfEE+0x64) [0x7f7ff255c5e4]
  [bt] (4) /data12/hadoop/hd_space/tmp/nm-local-dir/usercache/JimmyTest/appcache/application_1575501641580_606829/container_e01_1575501641580_606829_01_000009/tmp/libxgboost4j18311166920109842.so(_ZN7xgboost7DMatrix6CreateEPN4dmlc6ParserIjfEERKSsm+0x363) [0x7f7ff25510b3]
  [bt] (5) /data12/hadoop/hd_space/tmp/nm-local-dir/usercache/JimmyTest/appcache/application_1575501641580_606829/container_e01_1575501641580_606829_01_000009/tmp/libxgboost4j18311166920109842.so(XGDMatrixCreateFromDataIter+0x134) [0x7f7ff25176e4]
  [bt] (6) /data12/hadoop/hd_space/tmp/nm-local-dir/usercache/JimmyTest/appcache/application_1575501641580_606829/container_e01_1575501641580_606829_01_000009/tmp/libxgboost4j18311166920109842.so(Java_ml_dmlc_xgboost4j_java_XGBoostJNI_XGDMatrixCreateFromDataIter+0x94) [0x7f7ff2510c84]
  [bt] (7) [0x7f80250186c7]


        at ml.dmlc.xgboost4j.java.XGBoostJNI.checkCall(XGBoostJNI.java:48)
        at ml.dmlc.xgboost4j.java.DMatrix.<init>(DMatrix.java:53)
        at ml.dmlc.xgboost4j.scala.DMatrix.<init>(DMatrix.scala:42)
        at ml.dmlc.xgboost4j.scala.spark.XGBoostRegressionModel$$anonfun$2$$anon$1$$anonfun$3.apply(XGBoostRegressor.scala:288)
        at ml.dmlc.xgboost4j.scala.spark.XGBoostRegressionModel$$anonfun$2$$anon$1$$anonfun$3.apply(XGBoostRegressor.scala:270)
        at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
        at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
        at ml.dmlc.xgboost4j.scala.spark.XGBoostRegressionModel$$anonfun$2$$anon$1.hasNext(XGBoostRegressor.scala:301)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:255)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:836)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:836)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:121)
        at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

My command is:

spark-shell --jars ./xgboost4j-0.90.jar,./xgboost4j-spark-0.90.jar,./akka-actor_2.11-2.3.11.jar,./config-1.2.1.jar
My code is as below:

import ml.dmlc.xgboost4j.scala.spark.{TrackerConf, XGBoostRegressor}
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.functions._ 

val dataPath = "Jquery"
val outPath = "Jquery"
val trainFile = dataPath + "/mq2008.train"
val testFile = dataPath + "/mq2008.test"
val trainGroup = dataPath + "/mq2008.train.group"

val trainDF = spark.read.format("libsvm").load(trainFile)
val testDF = spark.read.format("libsvm").load(testFile)
// Dense-vector variant (this path works):
// val asDense = udf((v: Vector) => v.toDense)
// val trainDFdense = trainDF.withColumn("dense", asDense($"features")).select(col("label"), col("dense").as("features")).withColumn("id", monotonicallyIncreasingId)
// val testDFdense = testDF.withColumn("dense", asDense($"features")).select(col("label"), col("dense").as("features"))

// Sparse-vector variant (this path fails at prediction):
val trainDFdense = trainDF.withColumn("id", monotonicallyIncreasingId)
val testDFdense = testDF

// Build the group column: repeat each query's (1-based) index once per document in that query,
// then attach a row id so it can be joined back onto the training data.
val groupDF = spark.read.format("csv").option("inferSchema", true).load(trainGroup)
  .collect()
  .map(row => row.getInt(0))
  .zipWithIndex
  .flatMap { case (num, index) => List.fill(num)(index + 1) }
  .toSeq
  .toDF("group")
  .withColumn("id", monotonicallyIncreasingId)

val trainFinal = trainDFdense.join(groupDF, "id")

val Array(train, eval1) = trainFinal.randomSplit(Array(0.9, 0.1), 0)

val paramMap = Map("eta" -> "1", "max_depth" -> "6", "silent" -> "1",
  "objective" -> "rank:pairwise", "num_workers" -> 1, "num_round" -> 5,
  "group_col" -> "group", "tracker_conf" -> TrackerConf(0L, "scala"),
  "eval_metric" -> "ndcg", "eval_sets" -> Map("eval1" -> eval1))

val model = new XGBoostRegressor(paramMap).fit(trainFinal)

val prediction = model.transform(testDFdense) // fine up to this point
prediction.show // but this command triggers the failure
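
Since the dense-vector path works for me, a possible workaround is to densify the features of the test set right before scoring. This is only a sketch of what I tried (it reuses the asDense UDF from the commented-out lines above, and the names testDFworkaround / predictionDense are just illustrative), not a fix for the sparse-vector path itself:

import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.functions._

// Workaround sketch: convert the sparse test features to dense vectors
// before calling transform, since the dense path does not crash.
val asDense = udf((v: Vector) => v.toDense)
val testDFworkaround = testDF
  .withColumn("dense", asDense($"features"))
  .select(col("label"), col("dense").as("features"))

val predictionDense = model.transform(testDFworkaround)
predictionDense.show // succeeds when the features are dense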

Is this a bug, or a problem with my own environment?

Could you help me with this? Any advice would be appreciated.

Were you able to resolve this?

I’m facing the same issue with a similar environment (the only differences are Spark 2.4.4 and Java 1.8.0_232).

Can you guys help with this please @hcho3 @CodingCat?