Spark run modes
1. local (single-machine) mode; the result is printed to the Xshell console:
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master local[1] ./lib/spark-examples-1.6.0-hadoop2.4.0.jar 100
2. standalone cluster mode, client deploy mode:
Add to conf/spark-env.sh:
export JAVA_HOME=/root/install/jdk1.7.0_21
export SPARK_MASTER_IP=spark1
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_CORES=1
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_MEMORY=1g
vi slaves
Add the worker hosts:
node2
node3
rm -rf slaves.template
rm -rf spark-env.sh.template
The result is printed to the Xshell console:
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master spark://node1:7077 --executor-memory 1G --total-executor-cores 2 ./lib/spark-examples-1.6.0-hadoop2.4.0.jar 100
3. standalone cluster mode, cluster deploy mode:
The result is visible in the Master web UI at spark001:8080 (in cluster mode the driver runs on a worker node, so its stdout is read through the UI rather than the submitting console):
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master spark://node1:7077 --deploy-mode cluster --supervise --executor-memory 1G --total-executor-cores 1 ./lib/spark-examples-1.6.0-hadoop2.4.0.jar 100
4. YARN cluster mode; the result is visible in the ResourceManager web UI at spark001:8088:
Add to conf/spark-env.sh: export HADOOP_CONF_DIR=/home/install/hadoop-2.5/etc/hadoop
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-cluster --executor-memory 1G --num-executors 1 ./lib/spark-examples-1.6.0-hadoop2.4.0.jar 100
Submitting a custom job class, com.spark.study.MySparkPi, in yarn-client mode:
./bin/spark-submit --class com.spark.study.MySparkPi --master yarn-client --executor-memory 1G --num-executors 1 ./data/spark_pagerank_pi.jar 100
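All of the submissions above run a Monte Carlo π estimator: SparkPi samples random points in the unit square and counts how many land inside the unit circle. A minimal plain-Python sketch of that same estimate, without Spark (the sample count and seed are arbitrary choices for illustration, not taken from the jar):

```python
import random

def estimate_pi(num_samples: int, seed: int = 42) -> float:
    """Monte Carlo estimate of pi: the fraction of uniform random points
    in the unit square that fall inside the quarter unit circle, times 4."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(num_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / num_samples

if __name__ == "__main__":
    # More samples -> tighter estimate; Spark parallelizes this loop
    # across executors and sums the per-partition counts.
    print(estimate_pi(100_000))
```

The `100` argument passed to SparkPi in the commands above plays the same role as the sample-count knob here: it sets how many partitions of samples are drawn, so larger values give a closer estimate of π.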