Spark on YARN Installation and Configuration Manual
I. Passwordless SSH login

1. Install ssh:
   yum install openssh-server
2. Generate a key:
   ssh-keygen -t rsa -P ''
   Enter file in which to save the key (/root/.ssh/id_rsa): (press Enter)
3. Install the key:
   cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys

II. Install and configure the JDK

1. Extract the archive:
   tar -zxvf jdk-7u71-linux-x64.tar.gz
2. Open the global environment file:
   vim /etc/profile
3. Append the JDK environment variables at the end of the file.
4. Apply the configuration:
   source /etc/profile
5. Verify that the JDK installed successfully (for example with java -version).

III. Install and configure Hadoop

1. Extract the archive:
   tar -zxvf hadoop-2.2.0.tar.gz
2. Configure hadoop-env.sh:
   cd /opt/hadoop-2.2.0/etc/hadoop
   vim hadoop-env.sh
   Set JAVA_HOME here.
3. Add the Hadoop environment variables to /etc/profile. The last two lines (the native-library settings) are essential; omitting them causes startup errors.
4. Configure core-site.xml:
   cd /opt/hadoop-2.2.0/etc/hadoop
   vim core-site.xml
   Add the basic HDFS settings, and also the following property, without which the native library cannot be found:
   <property>
     <name>hadoop.native.lib</name>
     <value>true</value>
   </property>
5. Configure hdfs-site.xml:
   cd /opt/hadoop-2.2.0/etc/hadoop
   vim hdfs-site.xml
   Add the HDFS settings.
6. Configure mapred-site.xml:
   cd /opt/hadoop-2.2.0/etc/hadoop
   cp mapred-site.xml.template mapred-site.xml
   vim mapred-site.xml
   Add the MapReduce settings.
7. Apply the configuration:
   source hadoop-env.sh
8. Starting Hadoop always reports the following warning:
   WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
   The cause is that the native library inside the binary package from the Apache site is 32-bit, while the server is 64-bit. The fix is to rebuild the native library from source, as the following steps describe.
9. Download the Hadoop 2.2.0 source package and extract it.
10. Install the build dependencies:
    yum install lzo-devel zlib-devel gcc autoconf automake libtool ncurses-devel openssl-devel
11. Install Maven:
    tar zxf apache-maven-3.0.5-bin.tar.gz -C /opt
    vim /etc/profile
    export MAVEN_HOME=/opt/apache-maven-3.0.5
    export PATH=$PATH:$MAVEN_HOME/bin
    source /etc/profile
12. Install Ant:
    tar zxf apache-ant-1.9.3-bin.tar.gz -C /opt
    vim /etc/profile
    export ANT_HOME=/opt/apache-ant-1.9.3
    export PATH=$PATH:$ANT_HOME/bin
    source /etc/profile
13. Install Findbugs:
    tar zxf findbugs-2.0.3.tar.gz -C /opt
    vim /etc/profile
    export FINDBUGS_HOME=/opt/findbugs-2.0.3
    export PATH=$PATH:$FINDBUGS_HOME/bin
    source /etc/profile
14. Install protobuf:
    tar zxf protobuf-2.5.0.tar.gz
    cd protobuf-2.5.0
    ./configure
    make
    sudo make install
15. Patch Hadoop. The source code in the latest Hadoop 2.2.0 source package has a bug that must be patched before it will compile; otherwise building hadoop-auth fails with an error. After downloading the patch, copy it into the Hadoop source directory and apply it:
    patch -p0 < HADOOP-10110.patch
16. Compile Hadoop:
    mvn package -DskipTests -Pdist,native -Dtar
17. Replace the original native library with the newly built one:
    rm -rf /opt/hadoop-2.2.0/lib/native
    cp -r ./hadoop-dist/target/hadoop-2.2.0/lib/native /opt/hadoop-2.2.0/lib/
18. Format HDFS:
    hadoop namenode -format
19. Start Hadoop:
    start-all.sh
20. Check the datanodes:
    hadoop dfsadmin -report
21. If a datanode will not start, the likely cause is that the namenode was formatted more than once, leaving inconsistent namespaceIDs. Either edit the ID by hand, or delete all data and reformat the namenode.

IV. Install and configure Spark

1. Download and install Scala.
2. Download a prebuilt Spark binary package, choosing the build that matches your needs.
3. Extract the archive:
   tar -zxvf spark-1.3.0-bin-hadoop2.4.tgz
4. Add the Spark environment variables to /etc/profile.
5. Apply the configuration:
   source /etc/profile
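The manual refers to core-site.xml, hdfs-site.xml, and mapred-site.xml settings without reproducing most of them. A minimal single-node sketch, in which the host, port, and replication values are assumptions (only the hadoop.native.lib property comes from the manual itself):

```xml
<!-- core-site.xml: single-node sketch; host and port are assumptions -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.native.lib</name>
    <value>true</value>
  </property>
</configuration>

<!-- hdfs-site.xml: one replica is typical for a single-node setup -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

<!-- mapred-site.xml: run MapReduce on YARN -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```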
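Once Spark is extracted and the environment variables are in place, one way to confirm the YARN integration works is to submit the bundled SparkPi example. This is a sketch, not part of the manual: HADOOP_CONF_DIR must point at the Hadoop configuration directory, and the example jar name is an assumption that can differ between builds.

```shell
export HADOOP_CONF_DIR=/opt/hadoop-2.2.0/etc/hadoop   # where YARN's config lives
cd /opt/spark-1.3.0-bin-hadoop2.4
# yarn-cluster runs the driver inside the cluster; the jar name may differ per build
./bin/spark-submit --master yarn-cluster \
  --class org.apache.spark.examples.SparkPi \
  lib/spark-examples-1.3.0-hadoop2.4.0.jar 10
```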
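Both the JDK and Hadoop steps append environment variables to /etc/profile without reproducing them. A minimal sketch, assuming the archives were extracted under /opt (the install prefixes are assumptions, not taken from the manual):

```shell
# Sketch of /etc/profile additions; the install prefixes are assumptions.
export JAVA_HOME=/opt/jdk1.7.0_71
export HADOOP_HOME=/opt/hadoop-2.2.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
# The two lines below point Hadoop at its native libraries; leaving
# them out is a common cause of the startup errors the manual warns about.
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
```

Run source /etc/profile afterwards so the current shell picks up the new variables.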