XGBoost-Spark report: Too many nodes went down and we cannot recover

The executor stderr is as follows:

[2022-12-30 16:45:52.684]Container exited with a non-zero exit code 134. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
/bin/bash: line 1: 545764 Aborted (core dumped) /usr/local/jdk8/bin/java -server -Xmx15360m -Djava.io.tmpdir=/data/hdfs/data7/yarn/local/usercache/root/appcache/application_1658828757310_6328150/container_e05_1658828757310_6328150_02_000003/tmp '-Dspark.driver.port=33425' '-Dspark.ui.port=0' -Dspark.yarn.app.container.log.dir=/data/hdfs/yarn/logs/application_1658828757310_6328150/container_e05_1658828757310_6328150_02_000003 -XX:OnOutOfMemoryError='kill %p' org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@sh-bs-b1-303-i3-hadoop-128-245:33425 --executor-id 1 --hostname sh-bs-b1-303-i4-hadoop-129-4 --cores 10 --app-id application_1658828757310_6328150 --user-class-path file:/data/hdfs/data7/yarn/local/usercache/root/appcache/application_1658828757310_6328150/container_e05_1658828757310_6328150_02_000003/app.jar > /data/hdfs/yarn/logs/application_1658828757310_6328150/container_e05_1658828757310_6328150_02_000003/stdout 2> /data/hdfs/yarn/logs/application_1658828757310_6328150/container_e05_1658828757310_6328150_02_000003/stderr
Last 4096 bytes of stderr :
container_e05_1658828757310_6328150_02_000003/tmp/10-cache-33693313126262929727/train.sorted.col.page
[16:45:37] SparsePage::Writer Finished writing to /data/hdfs/data7/yarn/local/usercache/root/appcache/application_1658828757310_6328150/container_e05_1658828757310_6328150_02_000003/tmp/10-cache-07149317088523727121/train.sorted.col.page
[16:45:37] SparsePageSource: Finished writing to /data/hdfs/data7/yarn/local/usercache/root/appcache/application_1658828757310_6328150/container_e05_1658828757310_6328150_02_000003/tmp/10-cache-33693313126262929727/train
[16:45:37] SparsePageSource: Finished writing to /data/hdfs/data7/yarn/local/usercache/root/appcache/application_1658828757310_6328150/container_e05_1658828757310_6328150_02_000003/tmp/10-cache-07149317088523727121/train
[16:45:37] SparsePage::Writer Finished writing to /data/hdfs/data7/yarn/local/usercache/root/appcache/application_1658828757310_6328150/container_e05_1658828757310_6328150_02_000003/tmp/10-cache-4924047746409874369/train.sorted.col.page
[16:45:37] SparsePageSource: Finished writing to /data/hdfs/data7/yarn/local/usercache/root/appcache/application_1658828757310_6328150/container_e05_1658828757310_6328150_02_000003/tmp/10-cache-4924047746409874369/train
[16:45:37] SparsePage::Writer Finished writing to /data/hdfs/data7/yarn/local/usercache/root/appcache/application_1658828757310_6328150/container_e05_1658828757310_6328150_02_000003/tmp/10-cache-93775136737660731039/train.sorted.col.page
[16:45:37] SparsePageSource: Finished writing to /data/hdfs/data7/yarn/local/usercache/root/appcache/application_1658828757310_6328150/container_e05_1658828757310_6328150_02_000003/tmp/10-cache-93775136737660731039/train
[16:45:37] SparsePage::Writer Finished writing to /data/hdfs/data7/yarn/local/usercache/root/appcache/application_1658828757310_6328150/container_e05_1658828757310_6328150_02_000003/tmp/10-cache-85776801237113420567/train.sorted.col.page
[16:45:37] SparsePageSource: Finished writing to /data/hdfs/data7/yarn/local/usercache/root/appcache/application_1658828757310_6328150/container_e05_1658828757310_6328150_02_000003/tmp/10-cache-85776801237113420567/train
[16:45:37] SparsePage::Writer Finished writing to /data/hdfs/data7/yarn/local/usercache/root/appcache/application_1658828757310_6328150/container_e05_1658828757310_6328150_02_000003/tmp/10-cache-54855043875011110821/train.sorted.col.page
[16:45:37] SparsePageSource: Finished writing to /data/hdfs/data7/yarn/local/usercache/root/appcache/application_1658828757310_6328150/container_e05_1658828757310_6328150_02_000003/tmp/10-cache-54855043875011110821/train
[16:45:37] SparsePage::Writer Finished writing to /data/hdfs/data7/yarn/local/usercache/root/appcache/application_1658828757310_6328150/container_e05_1658828757310_6328150_02_000003/tmp/10-cache-22729313882414221673/train.sorted.col.page
[16:45:37] SparsePageSource: Finished writing to /data/hdfs/data7/yarn/local/usercache/root/appcache/application_1658828757310_6328150/container_e05_1658828757310_6328150_02_000003/tmp/10-cache-22729313882414221673/train
[16:45:37] SparsePage::Writer Finished writing to /data/hdfs/data7/yarn/local/usercache/root/appcache/application_1658828757310_6328150/container_e05_1658828757310_6328150_02_000003/tmp/10-cache-76094794478847063278/train.sorted.col.page
[16:45:37] SparsePageSource: Finished writing to /data/hdfs/data7/yarn/local/usercache/root/appcache/application_1658828757310_6328150/container_e05_1658828757310_6328150_02_000003/tmp/10-cache-76094794478847063278/train
Too many nodes went down and we cannot recover…
Too many nodes went down and we cannot recover…
Too many nodes went down and we cannot recover…
Too many nodes went down and we cannot recover…
Too many nodes went down and we cannot recover…
Too many nodes went down and we cannot recover…
Too many nodes went down and we cannot recover…
Too many nodes went down and we cannot recover…
Too many nodes went down and we cannot recover…
Too many nodes went down and we cannot recover…
pure virtual method called
terminate called without an active exception

22/12/30 16:45:54 INFO scheduler.DAGScheduler: Job 7 failed: foreachPartition at XGBoost.scala:452, took 21.636377 s
22/12/30 16:45:54 ERROR java.RabitTracker: Uncaught exception thrown by worker:
org.apache.spark.SparkException: Job 7 cancelled because SparkContext was shut down
at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:932)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:930)
at scala.collection.mutable.HashSet.foreach(HashSet.scala:78)
at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:930)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onStop(DAGScheduler.scala:2128)
at org.apache.spark.util.EventLoop.stop(EventLoop.scala:84)
at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:2041)
at org.apache.spark.SparkContext$$anonfun$stop$6.apply$mcV$sp(SparkContext.scala:1949)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1340)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1948)
at org.apache.spark.TaskFailedListener$$anon$1$$anonfun$run$1.apply$mcV$sp(SparkParallelismTracker.scala:131)
at org.apache.spark.TaskFailedListener$$anon$1$$anonfun$run$1.apply(SparkParallelismTracker.scala:131)
at org.apache.spark.TaskFailedListener$$anon$1$$anonfun$run$1.apply(SparkParallelismTracker.scala:131)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at org.apache.spark.TaskFailedListener$$anon$1.run(SparkParallelismTracker.scala:130)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:935)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:933)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:933)
at ml.dmlc.xgboost4j.scala.spark.XGBoost$$anonfun$trainDistributed$2$$anon$1.run(XGBoost.scala:452)
22/12/30 16:45:54 INFO scheduler.DAGScheduler: ResultStage 10 (foreachPartition at XGBoost.scala:452) failed in 21.630 s due to Stage cancelled because SparkContext was shut down
22/12/30 16:45:54 WARN scheduler.TaskSetManager: Lost task 9.0 in stage 10.0 (TID 45, sh-bs-b1-303-i4-hadoop-129-4, executor 1): ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Container from a bad node: container_e05_1658828757310_6328150_02_000003 on host: sh-bs-b1-303-i4-hadoop-129-4. Exit status: 134. Diagnostics: 25 --executor-id 1 --hostname sh-bs-b1-303-i4-hadoop-129-4 --cores 10 --app-id application_1658828757310_6328150 --user-class-path file:/data/hdfs/data7/yarn/local/usercache/root/appcache/application_1658828757310_6328150/container_e05_1658828757310_6328150_02_000003/app.jar > /data/hdfs/yarn/logs/application_1658828757310_6328150/container_e05_1658828757310_6328150_02_000003/stdout 2> /data/hdfs/yarn/logs/application_1658828757310_6328150/container_e05_1658828757310_6328150_02_000003/stderr
Last 4096 bytes of stderr : [identical to the stderr excerpt at the top of this report; omitted here]

The training parameters are as follows:

22/12/30 16:45:32 INFO XGBoostSpark: Running XGBoost 0.90 with parameters:
alpha -> 0.0
min_child_weight -> 1.0
sample_type -> uniform
base_score -> 0.5
colsample_bylevel -> 1.0
grow_policy -> depthwise
skip_drop -> 0.0
lambda_bias -> 0.0
silent -> 1
scale_pos_weight -> 1.0
seed -> 0
cache_training_set -> false
features_col -> features_vec
num_early_stopping_rounds -> 0
label_col -> label
num_workers -> 10
subsample -> 0.85
lambda -> 1.5
max_depth -> 5
probability_col -> probability
raw_prediction_col -> rawPrediction
tree_limit -> 0
custom_eval -> null
rate_drop -> 0.0
max_bin -> 16
train_test_ratio -> 0.9
use_external_memory -> true
objective -> binary:logistic
eval_metric -> auc
num_round -> 120
timeout_request_workers -> 60000
missing -> 0.0
checkpoint_path ->
tracker_conf -> TrackerConf(0,python)
tree_method -> auto
max_delta_step -> 0.0
eta -> 0.3
verbosity -> 1
colsample_bytree -> 0.9
normalize_type -> tree
custom_obj -> null
gamma -> 0.01
sketch_eps -> 0.03
nthread -> 1
prediction_col -> prediction
checkpoint_interval -> -1
22/12/30 16:45:32 WARN XGBoostSpark: train_test_ratio is deprecated since XGBoost 0.82, we recommend to explicitly pass a training and multiple evaluation datasets by passing 'eval_sets' and 'eval_set_names'
22/12/30 16:45:32 INFO java.RabitTracker$TrackerProcessLogger: 2022-12-30 16:45:32,934 INFO start listen on 192.168.128.245:9091
22/12/30 16:45:32 WARN XGBoostSpark: train_test_ratio is deprecated since XGBoost 0.82, we recommend to explicitly pass a training and multiple evaluation datasets by passing 'eval_sets' and 'eval_set_names'
22/12/30 16:45:32 INFO XGBoostSpark: starting training with timeout set as 60000 ms for waiting for resources
22/12/30 16:45:32 INFO spark.SparkContext: Starting job: foreachPartition at XGBoost.scala:452
22/12/30 16:45:32 INFO scheduler.DAGScheduler: Got job 7 (foreachPartition at XGBoost.scala:452) with 10 output partitions
22/12/30 16:45:32 INFO scheduler.DAGScheduler: Final stage: ResultStage 10 (foreachPartition at XGBoost.scala:452)
22/12/30 16:45:32 INFO scheduler.DAGScheduler: Parents of final stage: List()
22/12/30 16:45:32 INFO scheduler.DAGScheduler: Missing parents: List()
22/12/30 16:45:32 INFO scheduler.DAGScheduler: Submitting ResultStage 10 (MapPartitionsRDD[36] at mapPartitions at XGBoost.scala:343), which has no missing parents
22/12/30 16:45:33 INFO memory.MemoryStore: Block broadcast_16 stored as values in memory (estimated size 23.3 KB, free 6.2 GB)
22/12/30 16:45:33 INFO memory.MemoryStore: Block broadcast_16_piece0 stored as bytes in memory (estimated size 11.0 KB, free 6.2 GB)
22/12/30 16:45:33 INFO storage.BlockManagerInfo: Added broadcast_16_piece0 in memory on sh-bs-b1-303-i3-hadoop-128-245:30009 (size: 11.0 KB, free: 6.2 GB)
22/12/30 16:45:33 INFO spark.SparkContext: Created broadcast 16 from broadcast at DAGScheduler.scala:1161
22/12/30 16:45:33 INFO scheduler.DAGScheduler: Submitting 10 missing tasks from ResultStage 10 (MapPartitionsRDD[36] at mapPartitions at XGBoost.scala:343) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9))
22/12/30 16:45:33 INFO cluster.YarnClusterScheduler: Adding task set 10.0 with 10 tasks
22/12/30 16:45:33 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 10.0 (TID 36, sh-bs-b1-303-i4-hadoop-129-4, executor 1, partition 0, ANY, 9036 bytes)
22/12/30 16:45:33 INFO scheduler.TaskSetManager: Starting task 1.0 in stage 10.0 (TID 37, sh-bs-b1-303-i4-hadoop-129-4, executor 1, partition 1, ANY, 9036 bytes)
22/12/30 16:45:33 INFO scheduler.TaskSetManager: Starting task 2.0 in stage 10.0 (TID 38, sh-bs-b1-303-i4-hadoop-129-4, executor 1, partition 2, ANY, 9036 bytes)
22/12/30 16:45:33 INFO scheduler.TaskSetManager: Starting task 3.0 in stage 10.0 (TID 39, sh-bs-b1-303-i4-hadoop-129-4, executor 1, partition 3, ANY, 9036 bytes)
22/12/30 16:45:33 INFO scheduler.TaskSetManager: Starting task 4.0 in stage 10.0 (TID 40, sh-bs-b1-303-i4-hadoop-129-4, executor 1, partition 4, ANY, 9036 bytes)
22/12/30 16:45:33 INFO scheduler.TaskSetManager: Starting task 5.0 in stage 10.0 (TID 41, sh-bs-b1-303-i4-hadoop-129-4, executor 1, partition 5, ANY, 9036 bytes)
22/12/30 16:45:33 INFO scheduler.TaskSetManager: Starting task 6.0 in stage 10.0 (TID 42, sh-bs-b1-303-i4-hadoop-129-4, executor 1, partition 6, ANY, 9036 bytes)
22/12/30 16:45:33 INFO scheduler.TaskSetManager: Starting task 7.0 in stage 10.0 (TID 43, sh-bs-b1-303-i4-hadoop-129-4, executor 1, partition 7, ANY, 9036 bytes)
22/12/30 16:45:33 INFO scheduler.TaskSetManager: Starting task 8.0 in stage 10.0 (TID 44, sh-bs-b1-303-i4-hadoop-129-4, executor 1, partition 8, ANY, 9036 bytes)
22/12/30 16:45:33 INFO scheduler.TaskSetManager: Starting task 9.0 in stage 10.0 (TID 45, sh-bs-b1-303-i4-hadoop-129-4, executor 1, partition 9, ANY, 9036 bytes)
22/12/30 16:45:33 INFO storage.BlockManagerInfo: Added broadcast_16_piece0 in memory on sh-bs-b1-303-i4-hadoop-129-4:21323 (size: 11.0 KB, free: 7.8 GB)
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 287
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 254
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 285
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 266
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 288
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 270
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 278
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 297
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 312
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 276
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 259
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 296
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 293
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 298
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 268
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 300
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 301
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 275
22/12/30 16:45:33 INFO storage.BlockManagerInfo: Removed broadcast_14_piece0 on sh-bs-b1-303-i3-hadoop-128-245:30009 in memory (size: 4.1 KB, free: 6.2 GB)
22/12/30 16:45:33 INFO storage.BlockManagerInfo: Removed broadcast_14_piece0 on sh-bs-b1-303-i4-hadoop-129-4:21323 in memory (size: 4.1 KB, free: 7.8 GB)
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 253
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 291
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 289
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 258
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 264
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 265
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 286
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 308
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 269
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 306
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 284
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 281
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 252
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 299
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 303
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 295
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 304
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 311
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 272
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 249
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 279
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 292
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 248
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 255
22/12/30 16:45:33 INFO storage.BlockManagerInfo: Removed broadcast_13_piece0 on sh-bs-b1-303-i3-hadoop-128-245:30009 in memory (size: 6.1 KB, free: 6.2 GB)
22/12/30 16:45:33 INFO storage.BlockManagerInfo: Removed broadcast_13_piece0 on sh-bs-b1-303-i4-hadoop-129-4:21323 in memory (size: 6.1 KB, free: 7.8 GB)
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 257
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned shuffle 2
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 310
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 309
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 302
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 277
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 307
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 261
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 273
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 256
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 290
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 283
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 251
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 247
22/12/30 16:45:33 INFO storage.BlockManagerInfo: Removed broadcast_12_piece0 on sh-bs-b1-303-i3-hadoop-128-245:30009 in memory (size: 162.4 KB, free: 6.2 GB)
22/12/30 16:45:33 INFO storage.BlockManagerInfo: Removed broadcast_12_piece0 on sh-bs-b1-303-i4-hadoop-129-4:21323 in memory (size: 162.4 KB, free: 7.8 GB)
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 294
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 267
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 262
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 250
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 260
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 271
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 280
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 274
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 263
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 282
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 313
22/12/30 16:45:33 INFO spark.ContextCleaner: Cleaned accumulator 305
22/12/30 16:45:36 INFO storage.BlockManagerInfo: Added broadcast_15_piece0 in memory on sh-bs-b1-303-i4-hadoop-129-4:21323 (size: 162.8 KB, free: 7.8 GB)
22/12/30 16:45:37 INFO java.RabitTracker$TrackerProcessLogger: 2022-12-30 16:45:37,701 DEBUG Recieve start signal from 192.168.129.4; assign rank 0
22/12/30 16:45:37 INFO java.RabitTracker$TrackerProcessLogger: 2022-12-30 16:45:37,701 DEBUG Recieve start signal from 192.168.129.4; assign rank 1
22/12/30 16:45:37 INFO java.RabitTracker$TrackerProcessLogger: 2022-12-30 16:45:37,702 DEBUG Recieve start signal from 192.168.129.4; assign rank 2
22/12/30 16:45:37 INFO java.RabitTracker$TrackerProcessLogger: 2022-12-30 16:45:37,703 DEBUG Recieve start signal from 192.168.129.4; assign rank 3
22/12/30 16:45:37 INFO java.RabitTracker$TrackerProcessLogger: 2022-12-30 16:45:37,703 DEBUG Recieve start signal from 192.168.129.4; assign rank 4
22/12/30 16:45:37 INFO java.RabitTracker$TrackerProcessLogger: 2022-12-30 16:45:37,704 DEBUG Recieve start signal from 192.168.129.4; assign rank 5
22/12/30 16:45:37 INFO java.RabitTracker$TrackerProcessLogger: 2022-12-30 16:45:37,705 DEBUG Recieve start signal from 192.168.129.4; assign rank 6
22/12/30 16:45:37 INFO java.RabitTracker$TrackerProcessLogger: 2022-12-30 16:45:37,706 DEBUG Recieve start signal from 192.168.129.4; assign rank 7
22/12/30 16:45:37 INFO java.RabitTracker$TrackerProcessLogger: 2022-12-30 16:45:37,706 DEBUG Recieve start signal from 192.168.129.4; assign rank 8
22/12/30 16:45:37 INFO java.RabitTracker$TrackerProcessLogger: 2022-12-30 16:45:37,707 DEBUG Recieve start signal from 192.168.129.4; assign rank 9
22/12/30 16:45:37 INFO java.RabitTracker$TrackerProcessLogger: 2022-12-30 16:45:37,707 INFO @tracker All of 10 nodes getting started
22/12/30 16:45:39 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Disabling executor 1.
22/12/30 16:45:39 INFO scheduler.DAGScheduler: Executor lost: 1 (epoch 3)
22/12/30 16:45:39 INFO storage.BlockManagerMasterEndpoint: Trying to remove executor 1 from BlockManagerMaster.
22/12/30 16:45:39 INFO storage.BlockManagerMasterEndpoint: Removing block manager BlockManagerId(1, sh-bs-b1-303-i4-hadoop-129-4, 21323, None)
22/12/30 16:45:39 INFO storage.BlockManagerMaster: Removed 1 successfully in removeExecutor
22/12/30 16:45:54 INFO yarn.YarnAllocator: Completed container container_e05_1658828757310_6328150_02_000003 on host: sh-bs-b1-303-i4-hadoop-129-4 (state: COMPLETE, exit status: 134)
22/12/30 16:45:54 WARN yarn.YarnAllocator: Container from a bad node: container_e05_1658828757310_6328150_02_000003 on host: sh-bs-b1-303-i4-hadoop-129-4. Exit status: 134. Diagnostics: 25 --executor-id 1 --hostname sh-bs-b1-303-i4-hadoop-129-4 --cores 10 --app-id application_1658828757310_6328150 --user-class-path file:/data/hdfs/data7/yarn/local/usercache/root/appcache/application_1658828757310_6328150/container_e05_1658828757310_6328150_02_000003/app.jar > /data/hdfs/yarn/logs/application_1658828757310_6328150/container_e05_1658828757310_6328150_02_000003/stdout 2> /data/hdfs/yarn/logs/application_1658828757310_6328150/container_e05_1658828757310_6328150_02_000003/stderr
Last 4096 bytes of stderr : [identical to the stderr excerpt at the top of this report; omitted here]
[2022-12-30 16:45:52.684]Container exited with a non-zero exit code 134. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err : [identical to the excerpt at the top of this report; omitted here]
Last 4096 bytes of stderr : [identical to the stderr excerpt at the top of this report; omitted here]
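
For reference, a minimal sketch of how such a job is presumably set up with xgboost4j-spark 0.90 is given below. The parameter names and values mirror the "Running XGBoost 0.90 with parameters" dump above; the DataFrame name (trainingDF) and the surrounding feature-assembly pipeline are assumptions, not taken from this report.

import ml.dmlc.xgboost4j.scala.spark.XGBoostClassifier

// Hypothetical reproduction sketch only: parameters copied from the logged dump above.
val xgbParams: Map[String, Any] = Map(
  "objective" -> "binary:logistic",
  "eval_metric" -> "auc",
  "num_round" -> 120,
  "num_workers" -> 10,
  "nthread" -> 1,
  "max_depth" -> 5,
  "eta" -> 0.3,
  "subsample" -> 0.85,
  "colsample_bytree" -> 0.9,
  "max_bin" -> 16,
  "tree_method" -> "auto",
  "missing" -> 0.0f,
  "use_external_memory" -> true  // external-memory mode is enabled in the failing run
)

val classifier = new XGBoostClassifier(xgbParams)
  .setFeaturesCol("features_vec")
  .setLabelCol("label")

// trainingDF is an assumed DataFrame with a "features_vec" vector column and a "label" column.
// val model = classifier.fit(trainingDF)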