
Complete Steps to Install a Hadoop Cluster on CentOS 7


Preparation:

Before building the cluster, every machine's network configuration must be changed to a static IP address!!!
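A static address is configured in the interface file under /etc/sysconfig/network-scripts/. The sketch below is for hadoop01; the interface name ens33 and the NETMASK/GATEWAY/DNS values are assumptions for a typical VMware NAT network — adjust them to your environment (the IPADDR matches the hadoop01 address used later in the zoo.cfg step):

```
# /etc/sysconfig/network-scripts/ifcfg-ens33 (sketch; values are assumptions)
TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.239.137
NETMASK=255.255.255.0
GATEWAY=192.168.239.2
DNS1=114.114.114.114
```

After editing, apply the change with `systemctl restart network`.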

Related link:

https://blog.csdn.net/weixin_55076626/article/details/126904432?csdn_share_tail=%7B%22type%22%3A%22blog%22%2C%22rType%22%3A%22article%22%2C%22rId%22%3A%22126904432%22%2C%22source%22%3A%22weixin_55076626%22%7D

1. Install three CentOS 7 servers

1.1. Set the hostnames to hadoop01, hadoop02 and hadoop03 (run each command on the corresponding machine)

hostnamectl set-hostname hadoop01
hostnamectl set-hostname hadoop02
hostnamectl set-hostname hadoop03

1.2. Edit the hosts file

vi /etc/hosts

Append the following at the end of the file:

<IP address of hadoop01> hadoop01
<IP address of hadoop02> hadoop02
<IP address of hadoop03> hadoop03
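For example, using the three addresses that appear later in this guide's zoo.cfg step, the appended lines would look like this (substitute your own IPs):

```
192.168.239.137 hadoop01
192.168.239.141 hadoop02
192.168.239.142 hadoop03
```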

1.3. Disable the firewall

systemctl stop firewalld
systemctl disable firewalld

2. In Xshell, open the Tools menu and choose "Send key input to all sessions" (in this guide, a window whose state is OFF receives the broadcast input, while ON excludes it)

2.1. Set the state of every window to ON

3. Run the following commands in the hadoop01 window

3.1. Generate a passwordless SSH key pair; just press Enter at any prompt

ssh-keygen -t rsa -P ''

3.2. Copy the public key to hadoop01, hadoop02 and hadoop03; type yes, then enter the password when prompted

ssh-copy-id hadoop01
ssh-copy-id hadoop02
ssh-copy-id hadoop03

4. Verify that the steps above succeeded

4.1. On hadoop02 and hadoop03, run the following and check that the authorized_keys file is present

cd .ssh/
ls

4.2. On hadoop01, run the following; each login should succeed without a password (exit after each login to return)

ssh hadoop02
exit
ssh hadoop03
exit
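The same check can be scripted. The helper below is a sketch (not part of the original steps): BatchMode makes ssh fail instead of prompting, so any host still asking for a password is reported as FAILED.

```shell
# check_ssh prints one status line per host; hostnames are those set above
check_ssh() {
  for h in "$@"; do
    if ssh -o BatchMode=yes -o ConnectTimeout=3 "$h" true 2>/dev/null; then
      echo "OK: $h"
    else
      echo "FAILED: $h"
    fi
  done
}
check_ssh hadoop01 hadoop02 hadoop03
```

A FAILED line means ssh-copy-id did not complete for that host.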

5. With the broadcast from step 2 still set up, change the hadoop02 and hadoop03 window states to OFF so that the following commands run on both of them

5.1. Run the following commands, the same as in step 3

ssh-keygen -t rsa -P ''
ssh-copy-id hadoop01
ssh-copy-id hadoop02
ssh-copy-id hadoop03

5.2. Once all of the above is done, set the hadoop01, hadoop02 and hadoop03 window states all to OFF, then press Ctrl+L in any one window to clear every screen

6. Install chrony

yum -y install chrony

7. Install gcc, vim and wget

yum install -y gcc vim wget

8. Configure chrony

vim /etc/chrony.conf

8.1. Comment out the line server 0.centos.pool.ntp.org iburst (and the other default server lines), then add the following:

server ntp1.aliyun.com
server ntp2.aliyun.com
server ntp3.aliyun.com

9. Start chronyd

systemctl start chronyd

10. Install psmisc (it provides fuser, which the sshfence mechanism needs)

yum install -y psmisc

11. Back up the original yum repository file

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

12. Download the Aliyun repository file

wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo

13. Clear the yum cache and rebuild it

yum clean all
yum makecache

14. In Xftp, drag the JDK archive into the /opt directory on all three machines, then run the following commands to install the JDK

cd /opt
tar -zxf jdk-8u111-linux-x64.tar.gz
mkdir soft
mv jdk1.8.0_111/ soft/jdk180

14.1. Configure the environment variables

vim /etc/profile
#java env
export JAVA_HOME=/opt/soft/jdk180
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
source /etc/profile
java -version

15. In Xftp, drag the ZooKeeper archive into the /opt directory on all three machines, then run the following commands to install ZooKeeper

tar -zxf zookeeper-3.4.5-cdh5.14.2.tar.gz
mv zookeeper-3.4.5-cdh5.14.2 soft/zk345

15.1. Edit the zoo.cfg file

cd soft/zk345/conf/
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg

Change the dataDir line to:

dataDir=/opt/soft/zk345/datas

Append the following at the end of the file:

server.1=192.168.239.137:2888:3888
server.2=192.168.239.141:2888:3888
server.3=192.168.239.142:2888:3888
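Putting it together, the resulting zoo.cfg would look roughly like this (the timing values are the defaults inherited from zoo_sample.cfg):

```
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/soft/zk345/datas
clientPort=2181
server.1=192.168.239.137:2888:3888
server.2=192.168.239.141:2888:3888
server.3=192.168.239.142:2888:3888
```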

16. Create the datas directory

cd /opt/soft/zk345/
mkdir datas

17. Set the hadoop01, hadoop02 and hadoop03 window states all to ON (each machine needs a different myid value, so the next commands are typed in each window individually)

17.1. In the hadoop01 window, run:

cd datas
echo "1"> myid
cat myid

17.2. In the hadoop02 window, run:

cd datas
echo "2"> myid
cat myid

17.3. In the hadoop03 window, run:

cd datas
echo "3"> myid
cat myid
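The three echo commands differ only in the number, which is just the hostname suffix. A small helper (hypothetical, not part of the original steps) makes the mapping explicit:

```shell
# myid_for derives the ZooKeeper myid from a hostname like hadoop02
myid_for() {
  local n=${1#hadoop}   # strip the common "hadoop" prefix -> "02"
  echo $((10#$n))       # force base 10 so "02" becomes 2, not octal
}
myid_for hadoop01   # -> 1
myid_for hadoop02   # -> 2
myid_for hadoop03   # -> 3
```

On each machine, `myid_for "$(hostname)" > /opt/soft/zk345/datas/myid` would then write the correct value.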

18. Set the hadoop01, hadoop02 and hadoop03 window states all back to OFF

18.1. Configure the ZooKeeper environment variables

vim /etc/profile
#Zookeeper env
export ZOOKEEPER_HOME=/opt/soft/zk345
export PATH=$PATH:$ZOOKEEPER_HOME/bin
source /etc/profile

19. Start the ZooKeeper cluster

zkServer.sh start

20. Check with jps; there must be a QuorumPeerMain process

jps

21. In Xftp, drag the Hadoop archive into the /opt directory on all three machines, then run the following commands to install the Hadoop cluster

cd /opt
tar -zxf hadoop-2.6.0-cdh5.14.2.tar.gz
mv hadoop-2.6.0-cdh5.14.2 soft/hadoop260
cd soft/hadoop260/etc/hadoop

21.1. Create the required data directories

mkdir -p /opt/soft/hadoop260/tmp 
mkdir -p /opt/soft/hadoop260/dfs/journalnode_data 
mkdir -p /opt/soft/hadoop260/dfs/edits 
mkdir -p /opt/soft/hadoop260/dfs/datanode_data
mkdir -p /opt/soft/hadoop260/dfs/namenode_data

21.2. Configure hadoop-env.sh

vim hadoop-env.sh

Set JAVA_HOME and HADOOP_CONF_DIR to the following values:

export JAVA_HOME=/opt/soft/jdk180
export HADOOP_CONF_DIR=/opt/soft/hadoop260/etc/hadoop
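The two edits can also be applied non-interactively with sed. The sketch below runs against a stand-in file so the effect is visible end-to-end; on a real node you would point the same sed commands at etc/hadoop/hadoop-env.sh.

```shell
# create a stand-in hadoop-env.sh containing the stock default lines
f=$(mktemp)
cat > "$f" <<'EOF'
export JAVA_HOME=${JAVA_HOME}
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}
EOF

# rewrite both variables to the paths used in this guide
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/opt/soft/jdk180|' "$f"
sed -i 's|^export HADOOP_CONF_DIR=.*|export HADOOP_CONF_DIR=/opt/soft/hadoop260/etc/hadoop|' "$f"
cat "$f"
```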

21.3. Configure core-site.xml; press Shift+G to jump to the end of the file and add the following (be sure to change the host names to match your machines!!!)

vim core-site.xml

<configuration>
  <!-- default filesystem: the logical nameservice of the HA cluster -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hacluster</value>
  </property>
  <!-- directory for hadoop's temporary files -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:///opt/soft/hadoop260/tmp</value>
  </property>
  <!-- I/O buffer size (default 4KB) -->
  <property>
    <name>io.file.buffer.size</name>
    <value>4096</value>
  </property>
  <!-- zookeeper quorum addresses -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
  </property>
  <!-- hosts from which the root user may act as a proxy -->
  <property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
  </property>
  <!-- groups the root proxy user may impersonate -->
  <property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
  </property>
</configuration>

21.4. Configure hdfs-site.xml, adding the following at the end of the file (be sure to change the host names!!!)

vim hdfs-site.xml

<configuration>
  <property>
    <!-- block size (default 128M) -->
    <name>dfs.block.size</name>
    <value>134217728</value>
  </property>
  <property>
    <!-- replication factor (default 3) -->
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <!-- where the namenode stores its metadata -->
    <name>dfs.name.dir</name>
    <value>file:///opt/soft/hadoop260/dfs/namenode_data</value>
  </property>
  <property>
    <!-- where the datanode stores its block data -->
    <name>dfs.data.dir</name>
    <value>file:///opt/soft/hadoop260/dfs/datanode_data</value>
  </property>
  <property>
    <!-- enable the webhdfs interface -->
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
  <property>
    <!-- number of datanode threads for file transfers -->
    <name>dfs.datanode.max.transfer.threads</name>
    <value>4096</value>
  </property>
  <property>
    <!-- logical nameservice name of the cluster -->
    <name>dfs.nameservices</name>
    <value>hacluster</value>
  </property>
  <property>
    <!-- the hacluster nameservice has two namenodes, nn1 and nn2 -->
    <name>dfs.ha.namenodes.hacluster</name>
    <value>nn1,nn2</value>
  </property>
  <!-- rpc, service-rpc and http addresses of nn1 -->
  <property>
    <name>dfs.namenode.rpc-address.hacluster.nn1</name>
    <value>hadoop01:9000</value>
  </property>
  <property>
    <name>dfs.namenode.servicerpc-address.hacluster.nn1</name>
    <value>hadoop01:53310</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.hacluster.nn1</name>
    <value>hadoop01:50070</value>
  </property>
  <!-- rpc, service-rpc and http addresses of nn2 -->
  <property>
    <name>dfs.namenode.rpc-address.hacluster.nn2</name>
    <value>hadoop02:9000</value>
  </property>
  <property>
    <name>dfs.namenode.servicerpc-address.hacluster.nn2</name>
    <value>hadoop02:53310</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.hacluster.nn2</name>
    <value>hadoop02:50070</value>
  </property>
  <property>
    <!-- where the namenode's shared edits are stored on the JournalNodes -->
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://hadoop01:8485;hadoop02:8485;hadoop03:8485/hacluster</value>
  </property>
  <property>
    <!-- local storage directory for the JournalNode -->
    <name>dfs.journalnode.edits.dir</name>
    <value>/opt/soft/hadoop260/dfs/journalnode_data</value>
  </property>
  <property>
    <!-- where the namenode stores its edit log -->
    <name>dfs.namenode.edits.dir</name>
    <value>/opt/soft/hadoop260/dfs/edits</value>
  </property>
  <property>
    <!-- enable automatic namenode failover -->
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <!-- class that implements client-side failover -->
    <name>dfs.client.failover.proxy.provider.hacluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <!-- fencing method -->
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <!-- sshfence needs passwordless SSH; path to the private key -->
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <property>
    <!-- hdfs permission checking; false disables it -->
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>

21.5. Configure mapred-site.xml, adding the following at the end of the file (be sure to change the host names!!!)

cp mapred-site.xml.template mapred-site.xml
vim mapred-site.xml

<configuration>
  <property>
    <!-- run mapreduce on yarn -->
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <!-- job history server address -->
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop01:10020</value>
  </property>
  <property>
    <!-- job history server web UI address -->
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop01:19888</value>
  </property>
  <property>
    <!-- enable uber mode for small jobs -->
    <name>mapreduce.job.ubertask.enable</name>
    <value>true</value>
  </property>
</configuration>

21.6. Configure yarn-site.xml, adding the following at the end of the file (be sure to change the host names!!!)

vim yarn-site.xml

<configuration>
  <property>
    <!-- enable yarn high availability -->
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <!-- name under which the yarn cluster registers in zookeeper -->
    <name>yarn.resourcemanager.cluster-id</name>
    <value>hayarn</value>
  </property>
  <property>
    <!-- ids of the two resourcemanagers -->
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <!-- host of rm1 -->
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>hadoop02</value>
  </property>
  <property>
    <!-- host of rm2 -->
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>hadoop03</value>
  </property>
  <property>
    <!-- zookeeper addresses -->
    <name>yarn.resourcemanager.zk-address</name>
    <value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
  </property>
  <property>
    <!-- enable resourcemanager recovery -->
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <property>
    <!-- state-store class used for recovery -->
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
  </property>
  <property>
    <!-- address of the primary resourcemanager -->
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop03</value>
  </property>
  <property>
    <!-- how nodemanagers pass map output to reducers -->
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <!-- enable log aggregation -->
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <!-- keep aggregated logs for 7 days -->
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
  </property>
</configuration>

22. Configure the slaves file

vim slaves

22.1. Delete the localhost line (dd in vim), then add the following

hadoop01
hadoop02
hadoop03
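Since the slaves file is just the three hostnames, it can also be generated in one line instead of edited by hand:

```shell
# print the three worker hostnames, one per line
# (prints hadoop01, hadoop02, hadoop03 on separate lines)
printf 'hadoop%02d\n' 1 2 3
# on a cluster node you would redirect this into the slaves file:
# printf 'hadoop%02d\n' 1 2 3 > /opt/soft/hadoop260/etc/hadoop/slaves
```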

23. Configure the Hadoop environment variables

vim /etc/profile
#hadoop env
export HADOOP_HOME=/opt/soft/hadoop260
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_INSTALL=$HADOOP_HOME
source /etc/profile

24. Start the Hadoop cluster

24.1. Run the following command (it must run on all three machines)

hadoop-daemon.sh start journalnode

24.2. Run jps; an extra JournalNode process should now appear

jps

24.3. Format the NameNode (on hadoop01 only; first set the hadoop02 and hadoop03 window states to ON)

hdfs namenode -format

24.4. Copy the NameNode metadata on hadoop01 to the same location on hadoop02

scp -r /opt/soft/hadoop260/dfs/namenode_data/current/ root@hadoop02:/opt/soft/hadoop260/dfs/namenode_data

24.5. On hadoop01, format the ZKFC failover controller

hdfs zkfc -formatZK

24.6. On hadoop01, start the HDFS services, then run jps to check the processes

start-dfs.sh
jps

24.7. On hadoop03, start the YARN services, then run jps to check the processes

start-yarn.sh
jps

24.8. On hadoop02, run jps and check the processes

24.9. On hadoop01, start the history server; jps will then show an extra JobHistoryServer process

mr-jobhistory-daemon.sh start historyserver
jps

24.10. On hadoop02, start the ResourceManager service; jps will then show an extra ResourceManager process

yarn-daemon.sh start resourcemanager
jps

25. Check the cluster state

25.1. On hadoop01, check the NameNode service states: nn1 should report active and nn2 standby

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

25.2. On hadoop03, check the ResourceManager states: rm1 should report standby and rm2 active

yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2

26. In a browser, open <node IP address>:50070 and check the pages

26.1. At hadoop01's IP address, verify the state shown is "active"

26.2. At hadoop02's IP address, verify the state shown is "standby"

26.3. Finally, open the Datanodes tab and check that all three nodes are listed. If they are, the highly available Hadoop cluster is up!!!
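As a last check, a small smoke test confirms HDFS accepts writes and reads. The paths below are arbitrary test locations, and the script is guarded so it simply reports when run on a machine without Hadoop on the PATH:

```shell
# write a local file, push it into HDFS, read it back, then clean up
echo "hello hadoop" > /tmp/smoke.txt
if command -v hdfs >/dev/null 2>&1; then
  hdfs dfs -mkdir -p /tmp/smoke
  hdfs dfs -put -f /tmp/smoke.txt /tmp/smoke/
  hdfs dfs -cat /tmp/smoke/smoke.txt
  hdfs dfs -rm -r -skipTrash /tmp/smoke
else
  echo "hdfs not on PATH - run this on a cluster node"
fi
```

Seeing "hello hadoop" echoed back from `hdfs dfs -cat` confirms the active NameNode and the DataNodes are serving requests.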
