Big Data Platform Setup

I. Linux Environment Preparation

1) Format the disk
mkfs.ext4 /dev/dm-0
mkdir /data
mount /dev/dm-0 /data
The mount does not survive a reboot, so add the mount /dev/dm-0 /data command to /etc/rc.d/rc.local to have it remounted automatically at boot. Use df -h to check the mount points.

2) NIC bonding (needed when the server has multiple network cards that should be bonded)
• Configure the bonded interface
[root@hadoop001 network-scripts]# vi /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=19.106.64.7
NETMASK=255.255.255.0
GATEWAY=19.106.64.254
DNS1=19.104.4.3
DNS2=19.104.8.3
USERCTL=no
• Configure each physical NIC
[root@hadoop001 network-scripts]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
• Add the bonding module configuration
vi /etc/modprobe.d/dist.conf and append at the end:
alias bond0 bonding
options bond0 miimon=100 mode=0    # miimon=100 checks the link every 100 ms; mode=0 is the bonding mode used here for NIC redundancy
• Set up the services
service NetworkManager stop
chkconfig NetworkManager off
service network restart
chkconfig network on
• Test the bonded NIC
modprobe bond0    # an error here means the configuration is wrong
Then check the bond0 interface information (e.g. cat /proc/net/bonding/bond0).

3) Add the hadoop user
useradd hadoop
passwd hadoop    # set the password to hadoop

4) Change the hostname
vi /etc/sysconfig/network    # edit this file
HOSTNAME=hadoop001

5) Edit the local hosts file (/etc/hosts)
19.106.64.7 hadoop001
19.106.64.8 hadoop002
19.106.64.9 hadoop003
19.106.64.10 hadoop004
19.106.64.11 hadoop005
19.106.64.12 hadoop006

6) Disable the firewall (as root)
service iptables status    # check the firewall status
service iptables stop      # stop the firewall for the current session
chkconfig iptables off     # disable the firewall permanently

7) Disable SELinux
vi /etc/selinux/config
SELINUX=disabled

8) SSH mutual trust (as the hadoop user)
Generate the key pair:
ssh-keygen    # just press Enter at every prompt
Distribute the public key:
ssh-copy-id hadoop001
ssh-copy-id hadoop002
ssh-copy-id hadoop003
Test the connections once so the host keys are recorded in known_hosts, then:
cd /home/hadoop/.ssh
Distribute the private key and the known_hosts file to every node so that all nodes fully trust each other:
scp id_rsa known_hosts hadoop@hadoop002:/home/hadoop/.ssh
scp id_rsa known_hosts hadoop@hadoop003:/home/hadoop/.ssh
……

9) Configure time synchronization
Have the NTP service synchronize the hardware clock automatically:
vi /etc/sysconfig/ntpd
SYNC_HWCLOCK=yes
Server side (/etc/ntp.conf):
restrict 19.106.64.0 mask 255.255.255.0 nomodify notrap
server 127.127.1.0
fudge 127.127.1.0 stratum 8
Client side:
restrict 19.106.64.0 mask 255.255.255.0 nomodify notrap
server 19.106.64.7
service ntpd start
chkconfig ntpd on

10) Remove the JDK that ships with the system
[root@hadoop001 ~]# rpm -qa | grep java
java-1.7.0-openjdk-1.7.0.45-2.4.3.3.el6.x86_64
java-1.6.0-openjdk-1.6.0.0-1.66.1.13.0.el6.x86_64
tzdata-java-2013g-1.el6.noarch
[root@hadoop001 ~]# rpm -e --nodeps tzdata-java-2013g-1.el6.noarch
[root@hadoop001 ~]# rpm -qa | grep java
java-1.7.0-openjdk-1.7.0.45-2.4.3.3.el6.x86_64
java-1.6.0-openjdk-1.6.0.0-1.66.1.13.0.el6.x86_64
[root@hadoop001 ~]# rpm -e --nodeps java-1.7.0-openjdk-1.7.0.45-2.4.3.3.el6.x86_64
[root@hadoop001 ~]# rpm -e --nodeps java-1.6.0-openjdk-1.6.0.0-1.66.1.13.0.el6.x86_64
[root@hadoop001 ~]# rpm -qa | grep java
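The preparation steps in this section have to be repeated on every node listed in the hosts file (hadoop001 through hadoop006). As an illustration only, a minimal loop such as the one below, run as root on hadoop001, can push the shared hosts file and apply the firewall and SELinux changes to the remaining nodes; the root SSH access it assumes and the exact node list are assumptions, not part of the original procedure, so adapt it before use.

for node in hadoop002 hadoop003 hadoop004 hadoop005 hadoop006; do
    # Keep /etc/hosts identical on every node (entries from step 5).
    scp /etc/hosts root@${node}:/etc/hosts
    # Stop the firewall now and keep it off after reboot (step 6).
    ssh root@${node} "service iptables stop; chkconfig iptables off"
    # Disable SELinux; takes effect after the next reboot (step 7).
    ssh root@${node} "sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config"
done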
II. MySQL Master-Slave Replication Installation and Configuration

Install MySQL on hadoop001 and hadoop002 (the slave node replicates the master node's MySQL databases in real time).
Master: hadoop001, IP 19.106.64.7
Slave: hadoop002, IP 19.106.64.8
MySQL download address:

1) Check whether the system already ships with MySQL, to avoid installation conflicts
[root@Mysql1 mysql]# rpm -qa | grep -i mysql
mysql-libs-5.1.71-1.el6.x86_64
[root@Mysql1 mysql]# rpm -e mysql-libs-5.1.71-1.el6.x86_64 --nodeps    # remove it, ignoring dependencies

2) Install MySQL (the same procedure applies to both master and slave)
Upload MySQL-5.6.30-1.linux_glibc2.5.x86_64.rpm-bundle.tar to the /home directory and extract it; only the server and client packages are needed.
Install:
rpm -ivh MySQL-server-5.6.30-1.linux_glibc2.5.x86_64.rpm
rpm -ivh MySQL-client-5.6.30-1.linux_glibc2.5.x86_64.rpm
Start the service: service mysql start
Look up the root password generated during installation and log in with it:
mysql -uroot -p3HYHXyQWoQo68IYW
Change the root password to root:
set password for 'root'@'localhost' = password('root');
flush privileges;
Grant the hadoop user the required privileges:
grant all privileges on *.* to 'hadoop'@'%' identified by 'hadoop';
flush privileges;
This gives the hadoop user, connecting from any IP with the password hadoop, privileges on every database and table in MySQL.

3) Master-slave configuration
vim /etc/my.cnf    # edit the Master configuration file
[mysqld]
log_bin=master-bin.log
server_id=7
innodb_flush_log_at_trx_commit=1
sync_binlog=1
binlog_format=mixed
max_connections=1000
relay-log=master-relay-bin
relay-log-recovery=1
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
explicit_defaults_for_timestamp=true

Stop MySQL on the slave, then edit its configuration file:
[root@hadoop002 my.cnf.d]# service mysql stop
Shutting down MySQL.. [OK]
vim /etc/my.cnf    # edit the Slave configuration file
[mysqld]
log_bin=slave-bin.log
server_id=8
innodb_flush_log_at_trx_commit=1
log-slave-updates
sync_binlog=1
binlog_format=mixed
max_connections=1000
relay-log=slave-relay-bin
master-info-repository=table
relay-log-info-repository=table
relay-log-recovery=1
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
explicit_defaults_for_timestamp=true

Create the replication user on the master node:
[root@hadoop001 ~]# service mysql restart
[root@hadoop001 ~]# mysql -uroot -proot
mysql> grant replication slave, reload, super on *.* to 'repl_user'@'19.106.64.8' identified by 'repl_password';
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

Set up the replication connection on the slave node, then start the slave threads:
mysql> change master to master_host='19.106.64.7', master_user='repl_user', master_password='repl_password', master_log_file='master-bin.000001', master_log_pos=427;
Query OK, 0 rows affected, 2 warnings (0.01 sec)
mysql> start slave;

Check the replication status on the slave:
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State:
Master_Host: 19.106.64.7
Master_User: repl_user
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: master-bin.000001
Read_Master_Log_Pos: 1011
Relay_Log_File: slave-relay-bin.000001
Relay_Log_Pos: 4
Relay_Master_Log_File: master-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes    # both of these threads must be running
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 154
Relay_Log_Space: 1204
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_…
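Once Slave_IO_Running and Slave_SQL_Running both report Yes, replication can be verified end to end by writing on the master and reading the same row back on the slave. The sketch below is illustrative only: the database repl_test and table t1 are throwaway names invented for the check (not part of this deployment), and it assumes the hadoop/hadoop account granted in step 2 is reachable from the machine where the commands are run.

# Write a row on the master (hadoop001).
mysql -h 19.106.64.7 -uhadoop -phadoop -e "create database if not exists repl_test"
mysql -h 19.106.64.7 -uhadoop -phadoop -e "create table if not exists repl_test.t1 (id int primary key, note varchar(32))"
mysql -h 19.106.64.7 -uhadoop -phadoop -e "replace into repl_test.t1 values (1, 'replication check')"

# Read it back from the slave (hadoop002); the row should appear almost immediately.
mysql -h 19.106.64.8 -uhadoop -phadoop -e "select * from repl_test.t1"

# Check the replication threads and lag without an interactive session.
mysql -h 19.106.64.8 -uhadoop -phadoop -e "show slave status\G" | grep -E "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master"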