About the xgboost4j-spark version: how to install xgboost4j-spark on Spark 2.2

Hello, my Spark version is 2.2, but the latest xgboost4j-spark releases are 0.72 and 0.80. From their pom.xml files, both versions seem to support only Spark 2.3, and earlier versions are not available on mvnrepository. How can I find a matching version and install it on my Spark 2.2?

You should build 0.60 or 0.7 from source. Is upgrading Spark not an option for you?

What is the current way to compile xgboost4j-spark with Spark 2.2? I cannot upgrade Spark because I don’t have the authority to update it on the company cluster.

After I checked out tags/v0.7 and ran make -j4, then cd jvm-packages and mvn install, I got the following error:

[ 73%] Linking CXX executable ../xgboost
/opt/rh/devtoolset-7/root/usr/libexec/gcc/x86_64-redhat-linux/7/ld: cannot find -ldmlccore
collect2: error: ld returned 1 exit status
gmake[2]: *** [../xgboost] Error 1
gmake[1]: *** [CMakeFiles/runxgboost.dir/all] Error 2
gmake: *** [all] Error 2
building Java wrapper
cd ..
mkdir -p build
cd build
cmake .. -DUSE_OPENMP:BOOL=ON -DUSE_HDFS:BOOL=OFF -DUSE_AZURE:BOOL=OFF -DUSE_S3:BOOL=OFF -DPLUGIN_UPDATER_GPU:BOOL=OFF -DJVM_BINDINGS:BOOL=ON
cmake --build . --config Release
Traceback (most recent call last):
  File "create_jni.py", line 89, in <module>
    run("cmake --build . --config Release")
  File "create_jni.py", line 51, in run
    subprocess.check_call(command, shell=True, **kwargs)
  File "/mnt/j0l04cl/anaconda3/lib/python3.6/subprocess.py", line 291, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'cmake --build . --config Release' returned non-zero exit status 2.
[ERROR] Command execution failed.
org.apache.commons.exec.ExecuteException: Process exited with an error: 1 (Exit value: 1)
at org.apache.commons.exec.DefaultExecutor.executeInternal (DefaultExecutor.java:404)
at org.apache.commons.exec.DefaultExecutor.execute (DefaultExecutor.java:166)
at org.codehaus.mojo.exec.ExecMojo.executeCommandLine (ExecMojo.java:804)
at org.codehaus.mojo.exec.ExecMojo.executeCommandLine (ExecMojo.java:751)
at org.codehaus.mojo.exec.ExecMojo.execute (ExecMojo.java:313)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:208)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:154)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:146)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:956)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:290)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:194)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:497)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:356)
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] xgboost-jvm 0.7 .................................. SUCCESS [ 3.736 s]
[INFO] xgboost4j ........................................ FAILURE [01:55 min]
[INFO] xgboost4j-spark .................................. SKIPPED
[INFO] xgboost4j-flink .................................. SKIPPED
[INFO] xgboost4j-example 0.7 ............................ SKIPPED

For the master branch, I have no problem compiling xgboost4j, but it fails at the xgboost4j-spark test cases.

After checking out the tag, make sure to run git submodule update --init --recursive.
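The full sequence, roughly (a sketch; -DskipTests is optional and just avoids running the Spark test cases during install):

git clone https://github.com/dmlc/xgboost.git
cd xgboost
git checkout tags/v0.7
git submodule update --init --recursive   # pulls dmlc-core and rabit; missing submodules cause the -ldmlccore link error
make -j4                                  # build the native library
cd jvm-packages
mvn -DskipTests install                   # install xgboost4j / xgboost4j-spark into the local repository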

Thanks for your advice! I successfully compiled xgboost4j-spark. So, this version of xgboost4j doesn’t have feature importance?

Here it is:
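In short, feature importance lives on the underlying Booster. A minimal Scala sketch, assuming model is the trained XGBoostModel (accessor names may differ slightly between builds):

import ml.dmlc.xgboost4j.scala.Booster

// The XGBoostModel wraps a Booster; getFeatureScore returns a
// feature -> split-count map that serves as feature importance
val booster: Booster = model.booster
val importance = booster.getFeatureScore()
importance.toSeq.sortBy(-_._2.intValue).foreach { case (f, score) =>
  println(s"$f: $score")
}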

Thanks! Another question: this might be related to the Spark configuration. When I run the application, I see the following error quite often:

WARN ServletHandler: /api/v1/applications/application_1541604287228_24656/executors
java.lang.NullPointerException
	at java.lang.String.concat(String.java:2027)
	at org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.findRedirectUrl(AmIpFilter.java:184)
	at org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:144)
	at org.spark_project.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
	at org.spark_project.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
	at org.spark_project.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
	at org.spark_project.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
	at org.spark_project.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
	at org.spark_project.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
	at org.spark_project.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:461)
	at org.spark_project.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
	at org.spark_project.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
	at org.spark_project.jetty.server.Server.handle(Server.java:524)
	at org.spark_project.jetty.server.HttpChannel.handle(HttpChannel.java:319)
	at org.spark_project.jetty.server.HttpConnection.onFillable(HttpConnection.java:253)
	at org.spark_project.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
	at org.spark_project.jetty.io.FillInterest.fillable(FillInterest.java:95)
	at org.spark_project.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
	at org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
	at org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
	at org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
	at org.spark_project.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
	at org.spark_project.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
	at java.lang.Thread.run(Thread.java:748)
18/11/07 17:21:33 WARN XGBoostSpark: Unable to read total number of alive cores from REST API.Health Check will be ignored.
java.io.IOException: Server returned HTTP response code: 500 for URL: http://10.22.12.8:4041/api/v1/applications/application_1541604287228_24656/executors
	at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1894)
	at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
	at java.net.URL.openStream(URL.java:1045)
	at org.codehaus.jackson.JsonFactory._optimizedStreamFromURL(JsonFactory.java:935)
	at org.codehaus.jackson.JsonFactory.createJsonParser(JsonFactory.java:530)
	at org.codehaus.jackson.map.ObjectMapper.readTree(ObjectMapper.java:1590)
	at org.apache.spark.SparkParallelismTracker.org$apache$spark$SparkParallelismTracker$$numAliveCores(SparkParallelismTracker.scala:53)
	at org.apache.spark.SparkParallelismTracker$$anonfun$execute$1.apply$mcZ$sp(SparkParallelismTracker.scala:101)
	at org.apache.spark.SparkParallelismTracker$$anonfun$1.apply$mcV$sp(SparkParallelismTracker.scala:71)
	at org.apache.spark.SparkParallelismTracker$$anonfun$1.apply(SparkParallelismTracker.scala:71)
	at org.apache.spark.SparkParallelismTracker$$anonfun$1.apply(SparkParallelismTracker.scala:71)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
	at scala.concurrent.impl.ExecutionContextImpl$AdaptedForkJoinTask.exec(ExecutionContextImpl.scala:121)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

I think it is a Spark configuration issue: somehow the Spark executor list page is giving you error code 500 (see https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500).
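One way to confirm (a suggestion, not something verified here): fetch the URL from the warning yourself, e.g.

curl -v http://10.22.12.8:4041/api/v1/applications/application_1541604287228_24656/executors

If that also returns 500, the problem sits in the Spark UI / YARN proxy layer (consistent with the NullPointerException in AmIpFilter.findRedirectUrl above), and XGBoostSpark just skips its executor health check when the REST call fails.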

Thanks!

Could I ask another question about an error when training XGBoost on Spark? I got the following error during training; I'm not quite sure of the reason.

Container exited with a non-zero exit code 255

18/11/21 00:56:26 ERROR YarnScheduler: Lost executor 83 on cdc-hpcblx036-10.bfd.walmart.com: Container marked as failed: container_e211_1542756602652_22808_01_000177 on host: cdc-hpcblx036-10.bfd.walmart.com. Exit status: 255. Diagnostics: Exception from container-launch.
Container id: container_e211_1542756602652_22808_01_000177
Exit code: 255
Stack trace: ExitCodeException exitCode=255: 
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
	at org.apache.hadoop.util.Shell.run(Shell.java:456)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
	at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.__launchContainer__(LinuxContainerExecutor.java:304)
	at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:354)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:87)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Shell output: main : command provided 1
main : user is jing
main : requested yarn user is jing


Container exited with a non-zero exit code 255

18/11/21 00:56:26 WARN TaskSetManager: Lost task 7.0 in stage 5.0 (TID 252, cdc-hpcblx036-10.bfd.walmart.com, executor 83): ExecutorLostFailure (executor 83 exited caused by one of the running tasks) Reason: Container marked as failed: container_e211_1542756602652_22808_01_000177 on host: cdc-hpcblx036-10.bfd.walmart.com. Exit status: 255. Diagnostics: Exception from container-launch.
Container id: container_e211_1542756602652_22808_01_000177
Exit code: 255
Stack trace: ExitCodeException exitCode=255: 
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
	at org.apache.hadoop.util.Shell.run(Shell.java:456)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
	at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.__launchContainer__(LinuxContainerExecutor.java:304)
	at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:354)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:87)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Shell output: main : command provided 1
main : user is jing
main : requested yarn user is jing


Container exited with a non-zero exit code 255

18/11/21 00:56:26 ERROR RabitTracker: Uncaught exception thrown by worker:
org.apache.spark.SparkException: Job 2 cancelled because SparkContext was shut down
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:820)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:818)
	at scala.collection.mutable.HashSet.foreach(HashSet.scala:78)
	at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:818)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onStop(DAGScheduler.scala:1732)
	at org.apache.spark.util.EventLoop.stop(EventLoop.scala:83)
	at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1651)
	at org.apache.spark.SparkContext$$anonfun$stop$8.apply$mcV$sp(SparkContext.scala:1921)
	at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1317)
	at org.apache.spark.SparkContext.stop(SparkContext.scala:1920)
	at org.apache.spark.SparkContext$$anon$3.run(SparkContext.scala:1865)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:630)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2022)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2043)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2062)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2087)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:926)
	at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:924)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
	at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:924)
	at ml.dmlc.xgboost4j.scala.spark.XGBoost$$anonfun$trainDistributed$4$$anon$1.run(XGBoost.scala:348)
18/11/21 00:56:26 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerStageCompleted(org.apache.spark.scheduler.StageInfo@74bb7c79)
18/11/21 00:56:26 ERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerJobEnd(2,1542790586966,JobFailed(org.apache.spark.SparkException: Job 2 cancelled because SparkContext was shut down))
Exception in thread "main" ml.dmlc.xgboost4j.java.XGBoostError: XGBoostModel training failed
	at ml.dmlc.xgboost4j.scala.spark.XGBoost$.ml$dmlc$xgboost4j$scala$spark$XGBoost$$postTrackerReturnProcessing(XGBoost.scala:406)
	at ml.dmlc.xgboost4j.scala.spark.XGBoost$$anonfun$trainDistributed$4.apply(XGBoost.scala:356)
	at ml.dmlc.xgboost4j.scala.spark.XGBoost$$anonfun$trainDistributed$4.apply(XGBoost.scala:338)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.immutable.List.foreach(List.scala:381)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
	at scala.collection.immutable.List.map(List.scala:285)
	at ml.dmlc.xgboost4j.scala.spark.XGBoost$.trainDistributed(XGBoost.scala:337)
	at ml.dmlc.xgboost4j.scala.spark.XGBoostEstimator.train(XGBoostEstimator.scala:139)
	at ml.dmlc.xgboost4j.scala.spark.XGBoostEstimator.train(XGBoostEstimator.scala:36)
	at org.apache.spark.ml.Predictor.fit(Predictor.scala:118)
	at ml.dmlc.xgboost4j.scala.spark.XGBoost$.trainWithDataFrame(XGBoost.scala:194)

I have no problem running on another data set. This data set has ~250k rows and ~200 features. Here are the hyperparameters:

val xgbParam = Map("eta" -> 0.1f,
  "max_depth" -> 6,
  "objective" -> "reg:logistic",
  "subsample" -> 0.8,
  "max_leaves" -> 50,
  "colsample_bylevel" -> 0.8,
  "numEarlyStoppingRounds" -> 10,
  "weightCol" -> "weight")
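For context, this map is passed into training roughly as below (a sketch of the 0.7x call; the round and nWorkers values are placeholders, not what I actually used):

import ml.dmlc.xgboost4j.scala.spark.XGBoost

// trainDF: DataFrame with "features" (vector), "label", and "weight" columns
val model = XGBoost.trainWithDataFrame(trainDF, xgbParam,
  round = 100, nWorkers = 10)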

======= update ========
It's because some of my labels are larger than 1, but the objective is reg:logistic, which expects labels in [0, 1].
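A sketch of the two obvious fixes, assuming a DataFrame df with a numeric "label" column (reg:linear is the 0.7-era name for plain squared-error regression):

import org.apache.spark.sql.functions._

// Option 1: rescale labels into [0, 1] so reg:logistic is applicable
// (assumes the label column is of type Double)
val maxLabel = df.agg(max(col("label"))).head.getDouble(0)
val rescaled = df.withColumn("label", col("label") / maxLabel)

// Option 2: keep the raw labels and switch to a regression objective
val xgbParamFixed = xgbParam.updated("objective", "reg:linear")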