Container exited with a non-zero exit code 134

I’m new to ML and chose xgboost-spark to build my model. I ran into the following error without any clue what is causing it. Could anyone help? Thanks in advance.

Xgboost Related Code:

    // Hyperparameters for a binary classification task
    val numRound = 50
    val paramMap = List(
      "eta" -> 0.1f,
      "max_depth" -> 4,
      "objective" -> "binary:logistic",
      "num_round" -> numRound,
      "eval_metric" -> "auc").toMap

    // Train with 16 workers and external memory enabled
    val model = XGBoost.trainWithRDD(trainRDDData, paramMap, numRound,
      16, null, null, useExternalMemory = true, -1)

trainRDDData contains about 2 million records and is about 13 GB.
PS: if I use only 10% of the data, the code above runs fine.
PS: trainRDDData contains LabeledPoints constructed from sparse vectors. The feature dimension is around 2 million, but most features are unset.
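To see why the sparse representation matters at this scale, here is a back-of-envelope sketch (the 100 non-zeros per record is a hypothetical figure, not from the post): a dense row of 2 million Doubles would be ~16 MB, so 2 million dense rows could never fit, while a sparse row stays tiny.

```scala
// Rough per-record memory estimate for dense vs. sparse feature vectors.
// Back-of-envelope numbers: 4 bytes per Int index, 8 bytes per Double value.
object MemEstimate {
  // Dense: one Double per feature slot, set or not.
  def denseBytes(numFeatures: Long): Long = 8L * numFeatures

  // Sparse: one index plus one value per non-zero entry only.
  def sparseBytes(numNonZero: Long): Long = (4L + 8L) * numNonZero

  def main(args: Array[String]): Unit = {
    val numFeatures = 2000000L // ~2 million features, as in the post
    val nnzPerRow   = 100L     // hypothetical non-zeros per record
    println(s"dense row:  ${denseBytes(numFeatures)} bytes")  // 16,000,000 bytes
    println(s"sparse row: ${sparseBytes(nnzPerRow)} bytes")   // 1,200 bytes
  }
}
```

The gap (roughly four orders of magnitude here) is why the sparse vectors are viable but any step that densifies the data, or buffers too much of it in one JVM, can blow past the container limit.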

Env:

--executor-cores 1
--executor-memory 64g
--driver-cores 1
--driver-memory 32g
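For reference, a hypothetical spark-submit invocation assembling the flags above; the master, class, and jar names are placeholders, not from the original post:

```shell
# Sketch only: class/jar names are made up for illustration.
spark-submit \
  --master yarn \
  --executor-cores 1 \
  --executor-memory 64g \
  --driver-cores 1 \
  --driver-memory 32g \
  --class com.example.TrainXGBoost \
  target/scala-2.11/my-job.jar
```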

Log:

Container exited with a non-zero exit code 134

18/08/08 16:06:53 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Container marked as failed: container_1531366354245_1036257_01_000355 on host: data277.tc.rack.zhihu.com. Exit status: 134. Diagnostics: Exception from container-launch.
Container id: container_1531366354245_1036257_01_000355
Exit code: 134
Exception message: /bin/bash: line 1: 31678 Aborted                 /usr/java/jdk1.8.0_172-amd64/bin/java -server -Xmx65536m '-XX:+PrintGCDetails' '-Dlog4j.configuration=file:/etc/spark/conf/log4j.properties' -Djava.io.tmpdir=/data7/data/hadoop/yarn/local/usercache/wangtuo/appcache/application_1531366354245_1036257/container_1531366354245_1036257_01_000355/tmp '-Dspark.network.timeout=300s' '-Dspark.driver.port=43829' '-Dspark.rpc.lookupTimeout=300s' '-Dspark.rpc.askTimeout=300s' -Dspark.yarn.app.container.log.dir=/data2/data/hadoop/yarn/logs/application_1531366354245_1036257/container_1531366354245_1036257_01_000355 -XX:OnOutOfMemoryError='kill %p' org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@10.3.9.33:43829 --executor-id 108 --hostname data277.tc.rack.zhihu.com --cores 1 --app-id application_1531366354245_1036257 --user-class-path file:/data7/data/hadoop/yarn/local/usercache/wangtuo/appcache/application_1531366354245_1036257/container_1531366354245_1036257_01_000355/__app__.jar --user-class-path file:/data7/data/hadoop/yarn/local/usercache/wangtuo/appcache/application_1531366354245_1036257/container_1531366354245_1036257_01_000355/znlp_2.11-1.0.49.jar > /data2/data/hadoop/yarn/logs/application_1531366354245_1036257/container_1531366354245_1036257_01_000355/stdout 2> /data2/data/hadoop/yarn/logs/application_1531366354245_1036257/container_1531366354245_1036257_01_000355/stderr
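For what it's worth, YARN exit statuses above 128 encode 128 plus a signal number, so 134 means the JVM was killed by signal 6 (SIGABRT on Linux), which often points at a native-code abort (e.g. the XGBoost native library running out of memory) rather than a plain Java OutOfMemoryError. A tiny sketch of the decoding:

```scala
// Decode a container exit status: codes above 128 encode 128 + signal number.
object ExitCode {
  def signalOf(exitStatus: Int): Option[Int] =
    if (exitStatus > 128) Some(exitStatus - 128) else None

  def main(args: Array[String]): Unit = {
    // 134 - 128 = 6, i.e. SIGABRT on Linux
    println(ExitCode.signalOf(134)) // Some(6)
  }
}
```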

@hcho3 Thanks in advance.

After I increased the driver-memory to 96g, the problem was solved.