2020-10-08

1. Hadoop cluster set up (pseudo-distributed): OK
2. ZooKeeper: OK (clock/time-zone sync: OK)
3. core-site.xml:
   a) The cluster name (the HDFS nameservice): the old ip:9000 address is replaced by the nameservice name, so the 9000 port disappears as well:
      fs.defaultFS = hdfs://ns
   b) The ZooKeeper 2181 address:
      ha.zookeeper.quorum = hadoop001:2181,hadoop002:2181,hadoop003:2181
   A sketch of the resulting file follows below.
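A minimal sketch of what core-site.xml could look like after step 3. Only the two properties mentioned above are shown; anything else already in the file stays as it was.

    <?xml version="1.0"?>
    <configuration>
      <!-- HDFS entry point is the HA nameservice, not a single ip:9000 address -->
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns</value>
      </property>
      <!-- ZooKeeper quorum used for automatic NameNode failover -->
      <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop001:2181,hadoop002:2181,hadoop003:2181</value>
      </property>
    </configuration>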

4. hdfs-site.xml: map each NameNode under the nameservice (rpc-address on port 9000, http-address on port 50070), point the shared edits directory at the JournalNodes (which run on the datanode machines) with a qjournal://host:8485 list followed by /nameservice, and configure error handling: the failover method and the fencing mechanism.
   dfs.replication = 3
   dfs.nameservices = ns
   dfs.ha.namenodes.ns = hadoop001,hadoop002
   dfs.namenode.rpc-address.ns.hadoop001 = hadoop001:9000
   dfs.namenode.rpc-address.ns.hadoop002 = hadoop002:9000
   dfs.namenode.http-address.ns.hadoop001 = hadoop001:50070
   dfs.namenode.http-address.ns.hadoop002 = hadoop002:50070
   dfs.namenode.shared.edits.dir = qjournal://hadoop001:8485;hadoop002:8485;hadoop003:8485/ns
   dfs.ha.automatic-failover.enabled = true
   dfs.client.failover.proxy.provider.ns = org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
   dfs.ha.fencing.methods = sshfence, with shell(/bin/true) as the always-succeeding fallback
   dfs.ha.fencing.ssh.connect-timeout = 30000
   dfs.webhdfs.enabled = true

5. yarn-site.xml: enable ResourceManager HA; configure the RM cluster (rm-ids rm1,rm2, rm1 on the first node and rm2 on the second, web UI port 8088 for each) and the ZooKeeper address; also the shuffle service and log aggregation.
   yarn.resourcemanager.ha.enabled = true
   yarn.resourcemanager.cluster-id = cluster_id
   yarn.resourcemanager.ha.rm-ids = rm1,rm2
   yarn.resourcemanager.hostname.rm1 = hadoop01
   yarn.resourcemanager.hostname.rm2 = hadoop02
   yarn.resourcemanager.webapp.address.rm1 = hadoop01:8088
   yarn.resourcemanager.webapp.address.rm2 = hadoop02:8088
   yarn.resourcemanager.zk-address = hadoop01:2181,hadoop02:2181,hadoop03:2181
   yarn.nodemanager.aux-services = mapreduce_shuffle
   yarn.nodemanager.aux-services.mapreduce_shuffle.class = org.apache.hadoop.mapred.ShuffleHandler
   yarn.log-aggregation-enable = true
   yarn.log-aggregation.retain-seconds = 604800
   Sketches of both files follow below.
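For step 4, a sketch of the hdfs-site.xml HA block assembled only from the properties listed above. The NameNode IDs hadoop001/hadoop002 double as host names, the per-NameNode keys take the form dfs.namenode.rpc-address.<nameservice>.<namenode-id>, and the failover proxy provider key ends with the nameservice name ns. Single-line <property> elements are used just to keep the sketch compact.

    <?xml version="1.0"?>
    <configuration>
      <property><name>dfs.replication</name><value>3</value></property>
      <property><name>dfs.nameservices</name><value>ns</value></property>
      <property><name>dfs.ha.namenodes.ns</name><value>hadoop001,hadoop002</value></property>
      <property><name>dfs.namenode.rpc-address.ns.hadoop001</name><value>hadoop001:9000</value></property>
      <property><name>dfs.namenode.rpc-address.ns.hadoop002</name><value>hadoop002:9000</value></property>
      <property><name>dfs.namenode.http-address.ns.hadoop001</name><value>hadoop001:50070</value></property>
      <property><name>dfs.namenode.http-address.ns.hadoop002</name><value>hadoop002:50070</value></property>
      <property><name>dfs.namenode.shared.edits.dir</name><value>qjournal://hadoop001:8485;hadoop002:8485;hadoop003:8485/ns</value></property>
      <property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
      <property><name>dfs.client.failover.proxy.provider.ns</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
      <!-- try sshfence first; shell(/bin/true) always succeeds and acts as the fallback -->
      <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
          sshfence
          shell(/bin/true)
        </value>
      </property>
      <property><name>dfs.ha.fencing.ssh.connect-timeout</name><value>30000</value></property>
      <property><name>dfs.webhdfs.enabled</name><value>true</value></property>
    </configuration>

For step 5, a matching sketch of yarn-site.xml. The notes switch between hadoop001-style and hadoop01-style host names; the values are kept exactly as written above.

    <?xml version="1.0"?>
    <configuration>
      <!-- ResourceManager HA: two RMs, identified as rm1 and rm2 -->
      <property><name>yarn.resourcemanager.ha.enabled</name><value>true</value></property>
      <property><name>yarn.resourcemanager.cluster-id</name><value>cluster_id</value></property>
      <property><name>yarn.resourcemanager.ha.rm-ids</name><value>rm1,rm2</value></property>
      <property><name>yarn.resourcemanager.hostname.rm1</name><value>hadoop01</value></property>
      <property><name>yarn.resourcemanager.hostname.rm2</name><value>hadoop02</value></property>
      <property><name>yarn.resourcemanager.webapp.address.rm1</name><value>hadoop01:8088</value></property>
      <property><name>yarn.resourcemanager.webapp.address.rm2</name><value>hadoop02:8088</value></property>
      <property><name>yarn.resourcemanager.zk-address</name><value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value></property>
      <!-- MapReduce shuffle service on the NodeManagers -->
      <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
      <property><name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name><value>org.apache.hadoop.mapred.ShuffleHandler</value></property>
      <!-- keep aggregated logs for 7 days (604800 seconds) -->
      <property><name>yarn.log-aggregation-enable</name><value>true</value></property>
      <property><name>yarn.log-aggregation.retain-seconds</name><value>604800</value></property>
    </configuration>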

6. mapred-site.xml: nothing was changed here; the file keeps its existing contents:
   mapreduce.framework.name = yarn
   mapreduce.jobhistory.address = hadoop01:10020
   mapreduce.jobhistory.webapp.address = hadoop01:19888
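A sketch of that mapred-site.xml, again limited to the properties named above.

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <configuration>
      <!-- run MapReduce on YARN -->
      <property><name>mapreduce.framework.name</name><value>yarn</value></property>
      <!-- JobHistory server RPC address and web UI -->
      <property><name>mapreduce.jobhistory.address</name><value>hadoop01:10020</value></property>
      <property><name>mapreduce.jobhistory.webapp.address</name><value>hadoop01:19888</value></property>
    </configuration>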

7. slaves:
   hadoop01
   hadoop02
   hadoop03
8. Keep the three machines consistent: scp the modified files to the other nodes, then change the address and repeat for the remaining machine:
   scp /opt/hadoop/etc/hadoop/*.xml root@192.168.5.32:$PWD
   scp /opt/hadoop/etc/hadoop/slaves root@192.168.5.32:$PWD
9. Make sure the cluster is shut down.
10. Delete the tmp folder and re-format from the chosen primary node (the JournalNodes may need to be started beforehand).
    Start a JournalNode on every machine: hadoop-daemon.sh start journalnode
    Format the cluster on the primary node only: hdfs namenode -format (or hadoop namenode -format)
11. Copy the freshly generated tmp folder from the primary node to the standby node: scp -r tmp hadoop002:$PWD
12. Start ZooKeeper: zkServer.sh start; check its status: zkServer.sh status
13. Start the whole cluster from the primary node: start-all.sh
14. Start the standby ResourceManager on the standby node: yarn-daemon.sh start resourcemanager
15. Format ZKFC from the primary node: hdfs zkfc -formatZK
16. Start ZKFC on both the primary and the standby node: hadoop-daemon.sh start zkfc
17. Start the history service on the datanode node: mr-jobhistory-daemon.sh start historyserver
The start-up order from steps 9-17 is collected in the sketch below.
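A consolidated sketch of the start-up order from steps 9-17. The commands are the ones given in the notes above (with yarn-daemon.sh used for the standby ResourceManager); the host each command runs on is noted in the comments, and host names and paths follow the notes, so adjust them to your own layout.

    # steps 9-10: cluster stopped, old tmp directory removed on the NameNode hosts
    # run on every machine: bring up the JournalNodes first
    hadoop-daemon.sh start journalnode

    # step 10: format HDFS on the primary NameNode only
    hdfs namenode -format

    # step 11: copy the freshly formatted tmp directory to the standby NameNode
    scp -r tmp hadoop002:$PWD

    # step 12: start ZooKeeper on each ZooKeeper host and check its status
    zkServer.sh start
    zkServer.sh status

    # step 13: start HDFS and YARN from the primary node
    start-all.sh

    # step 14: start the second ResourceManager on the standby node
    yarn-daemon.sh start resourcemanager

    # steps 15-16: initialise the failover state in ZooKeeper, then start zkfc on both NameNode hosts
    hdfs zkfc -formatZK
    hadoop-daemon.sh start zkfc

    # step 17: start the MapReduce job history server
    mr-jobhistory-daemon.sh start historyserver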