This article is reproduced from serverfault.com.

How to connect master and slaves in Apache-Spark? (Standalone Mode)

Published on 2015-02-08 20:23:28

I am installing Spark in standalone mode, following the Spark Standalone Mode tutorial page.

1 - I started the master with:

./sbin/start-master.sh

2 - I started a worker with:

./bin/spark-class org.apache.spark.deploy.worker.Worker spark://ubuntu:7077

Note: spark://ubuntu:7077 is my master name, and I can see it in the Master-WebUI.

Problem: with the second command, the worker starts successfully. But it cannot connect to the master. It tries repeatedly and then gives this message:

15/02/08 11:30:04 WARN Remoting: Tried to associate with unreachable    remote address [akka.tcp://sparkMaster@ubuntu:7077]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: Connection refused: ubuntu/127.0.1.1:7077
15/02/08 11:30:04 INFO RemoteActorRefProvider$RemoteDeadLetterActorRef: Message [org.apache.spark.deploy.DeployMessages$RegisterWorker] from Actor[akka://sparkWorker/user/Worker#-1296628173] to Actor[akka://sparkWorker/deadLetters] was not delivered. [20] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
15/02/08 11:31:15 ERROR Worker: All masters are unresponsive! Giving up.

What is the problem?

Thanks

Questioner: Omid Ebrahimi
Viewed: 11
gasparms 2015-02-09 05:06:18

I usually start from the spark-env.sh template and then set the properties I need. For a simple cluster, you need:

  • SPARK_MASTER_IP (the address to bind the master to; a minimal example follows)
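
A minimal conf/spark-env.sh sketch, assuming a hypothetical master address of 192.168.1.10 (start by copying conf/spark-env.sh.template):

# conf/spark-env.sh -- 192.168.1.10 is a placeholder; use an IP your workers can reach
export SPARK_MASTER_IP=192.168.1.10

Binding the master to a concrete, reachable address like this avoids failures like the one above, where the master name ubuntu resolves to the loopback address 127.0.1.1 and the worker gets "Connection refused".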

Then, in the same directory as spark-env.sh, create a file named "slaves" containing your slave IPs, one per line. Make sure all the slaves are reachable via ssh.
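
For example, a hypothetical conf/slaves file for a two-worker cluster would look like this (the addresses are placeholders; list the machines that should run workers):

192.168.1.11
192.168.1.12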

Finally, copy this configuration to every machine in the cluster. Then start the whole cluster by running the start-all.sh script, and try spark-shell to check your configuration.

> sbin/start-all.sh
> bin/spark-shell
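
To verify that the shell actually attached to the cluster rather than running locally, you can pass the master URL explicitly (again using the hypothetical address from above) and inspect sc.master at the Scala prompt:

> bin/spark-shell --master spark://192.168.1.10:7077

scala> sc.master

This should print the spark:// URL of your master.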