Hadoop Cluster Installation and Deployment (a three-server cluster)


Table of Contents

1. Master node hostname and static IP configuration
   a) Master node hostname
   b) Static IP
2. Edit the master node's /etc/hosts file to add IP-to-hostname mappings
3. Configure JDK 1.8
4. Disable the firewall (disabling its autostart is sufficient)
5. Disable SELinux
6. Hadoop installation and deployment
7. Clone the first machine as the second and third nodes
8. Start the cluster

1. Master node hostname and static IP configuration

a) Master node hostname

Edit the network file and set the value of HOSTNAME to node01:

    vi /etc/sysconfig/network
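After the edit, the file should look something like the sketch below. NETWORKING=yes is the usual CentOS 6 default and is shown as an assumption; only the HOSTNAME value needs to change:

    # /etc/sysconfig/network
    NETWORKING=yes
    HOSTNAME=node01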

b) Static IP

See: https://blog.csdn.net/zh2475855601/article/details/108837519
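For reference, a minimal static-IP sketch on CentOS 6 edits the interface file under /etc/sysconfig/network-scripts/. The device name eth0 and every address below are placeholder assumptions; adapt them to your network, then restart networking with service network restart:

    # /etc/sysconfig/network-scripts/ifcfg-eth0  (device name is an assumption)
    DEVICE=eth0
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=192.168.52.100    # example address
    NETMASK=255.255.255.0
    GATEWAY=192.168.52.1     # example gateway
    DNS1=8.8.8.8             # example DNS server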

2. Edit the master node's /etc/hosts file to add IP-to-hostname mappings

Edit the hosts file and add one line per node, mapping the master node's IP to its hostname and likewise for the other two nodes:

    vi /etc/hosts
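For example, with node01 as the master node (the addresses below are placeholders; substitute the real IPs of your three machines):

    192.168.52.100 node01
    192.168.52.110 node02
    192.168.52.120 node03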

3. Configure JDK 1.8

See: https://blog.csdn.net/zh2475855601/article/details/108929013
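Whichever install route the linked post takes, the JDK has to end up in JAVA_HOME and on the PATH. A minimal sketch, assuming the JDK was unpacked to /export/servers/jdk1.8.0 (the path is an assumption):

    # e.g. /etc/profile.d/java.sh
    export JAVA_HOME=/export/servers/jdk1.8.0    # assumed install path
    export PATH=$PATH:$JAVA_HOME/bin

Verify with java -version.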

4. Disable the firewall (disabling its autostart at boot is sufficient)

    chkconfig iptables off
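This only keeps iptables from starting at the next boot. To also stop the firewall that is currently running (CentOS 6 syntax; CentOS 7 and later use firewalld, managed with systemctl, instead):

    service iptables stop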

5. Disable SELinux

Edit the SELinux config and set SELINUX=disabled:

    vi /etc/selinux/config
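The relevant line in the file should read as below. The setting only takes effect after a reboot; setenforce 0 switches SELinux to permissive mode immediately for the current session:

    # /etc/selinux/config
    SELINUX=disabled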

6. Hadoop installation and deployment

Upload the software package and extract it:

    tar -zxvf hadoop-2.6.0-cdh5.14.0-with-centos6.9.tar.gz -C ../servers/

Next, check which compression codecs and native libraries this Hadoop build supports (both commands are run from the Hadoop bin directory):

a) Check the native libraries:

    ./hadoop checknative

b) If openssl is reported missing, install it and check again:

    yum -y install openssl-devel
    ./hadoop checknative

Modify the configuration files

a) Modify core-site.xml

Run the following on the first machine:

    cd /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop
    vim core-site.xml

Go to the end of the file and add the configuration inside the <configuration></configuration> tags, as sketched below.
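The properties below are a minimal sketch of a typical core-site.xml for this layout, not a verbatim config: fs.defaultFS points clients at the NameNode on node01 (8020 is the customary HDFS RPC port in Hadoop 2.x and is an assumption here), and hadoop.tmp.dir points at the tempDatas directory created in step e) below.

    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node01:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/tempDatas</value>
    </property>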

b) Modify hdfs-site.xml

Run the following on the first machine:

    cd /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop
    vim hdfs-site.xml

Go to the end of the file and add the following inside the <configuration></configuration> tags:

    <!-- Path where the NameNode stores its metadata. In practice, determine the
         disk mount points first; separate multiple directories with commas. -->
    <!-- Dynamic commissioning/decommissioning of cluster nodes:
    <property>
        <name>dfs.hosts</name>
        <value>/export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop/accept_host</value>
    </property>
    <property>
        <name>dfs.hosts.exclude</name>
        <value>/export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop/deny_host</value>
    </property>
    -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node01:50090</value>
    </property>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>node01:50070</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/namenodeDatas</value>
    </property>
    <!-- Where the DataNode stores its blocks. In practice, determine the disk
         mount points first; separate multiple directories with commas. -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/datanodeDatas</value>
    </property>
    <property>
        <name>dfs.namenode.edits.dir</name>
        <value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/nn/edits</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/snn/name</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.edits.dir</name>
        <value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/nn/snn/edits</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>134217728</value>
    </property>

c) Modify mapred-site.xml

Run the following on the first machine:

    cd /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop
    vim mapred-site.xml

Go to the end of the file and add the following inside the <configuration></configuration> tags:

    <!-- Execution framework -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- JVM reuse: run small jobs as a single "uber" task -->
    <property>
        <name>mapreduce.job.ubertask.enable</name>
        <value>true</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>node01:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>node01:19888</value>
    </property>
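Note that the two jobhistory addresses configure the MapReduce JobHistory server, which start-all.sh does not launch. If you want finished-job history in the web UI, start it separately with the standard Hadoop 2.x daemon script:

    /export/servers/hadoop-2.6.0-cdh5.14.0/sbin/mr-jobhistory-daemon.sh start historyserver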

d) Modify yarn-site.xml

Run the following on the first machine:

    cd /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop
    vim yarn-site.xml

Go to the end of the file and add the following inside the <configuration></configuration> tags:

    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>node01</value>
    </property>
    <!-- Auxiliary service on the NodeManager; MapReduce jobs can only run
         when this is set to mapreduce_shuffle. -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

e) Modify the slaves file

Run the following on the first machine:

    cd /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop
    vim slaves

Add:

    node01
    node02
    node03

Create the data directories

On the first machine (node01), create the following directories:

    mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/tempDatas
    mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/namenodeDatas
    mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/datanodeDatas
    mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/nn/edits
    mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/snn/name
    mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/nn/snn/edits

Configure the Hadoop environment variables

On the first machine, run:

    vim /etc/profile.d/hadoop.sh

Add:

    export HADOOP_HOME=/export/servers/hadoop-2.6.0-cdh5.14.0
    export PATH=$PATH:$HADOOP_HOME/bin

Then make the change take effect:

    source /etc/profile
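As a quick check that the environment variables took effect, hadoop should now resolve from any directory:

    hadoop version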

7. Clone the first machine as the second and third nodes

For the network configuration of the cloned virtual machines, see: https://editor.csdn.net/md/?articleId=108882827

Modify the hostnames

On each cloned machine, edit the network file:

    vi /etc/sysconfig/network

On the second node set HOSTNAME to node02; on the third, node03.

Configure passwordless SSH access

Run the following on the first machine:

    ssh-keygen    (press Enter three times to accept the defaults)
    ssh-copy-id node01
    ssh-copy-id node02
    ssh-copy-id node03
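A quick check that key-based login works; this should open a shell on node02 without asking for a password:

    ssh node02
    exit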

8. Start the cluster
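Before the very first start, format the NameNode (a standard one-time Hadoop step; run it only once, on node01, since reformatting destroys existing HDFS metadata):

    /export/servers/hadoop-2.6.0-cdh5.14.0/bin/hdfs namenode -format

Then start all of the daemons from node01: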

    /export/servers/hadoop-2.6.0-cdh5.14.0/sbin/start-all.sh
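Once start-all.sh returns, jps on each node should show the expected daemons: NameNode, SecondaryNameNode, and ResourceManager on node01 (plus DataNode and NodeManager, since node01 is also listed in slaves), and DataNode and NodeManager on node02 and node03. The web UIs should respond at http://node01:50070 (HDFS, as configured above) and http://node01:8088 (YARN's default ResourceManager port).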