
Atguigu Big Data Hadoop Tutorial - Notes 02 [Hadoop Basics]

Video: Atguigu Big Data Hadoop Tutorial (Hadoop 3.x, from installation and setup to cluster tuning)

  1. Atguigu Big Data Hadoop Tutorial - Notes 01 [Big Data Overview]
  2. Atguigu Big Data Hadoop Tutorial - Notes 02 [Hadoop Basics]
  3. Atguigu Big Data Hadoop Tutorial - Notes 03 [Hadoop-HDFS]
  4. Atguigu Big Data Hadoop Tutorial - Notes 04 [Hadoop-MapReduce]
  5. Atguigu Big Data Hadoop Tutorial - Notes 05 [Hadoop-Yarn]
  6. Atguigu Big Data Hadoop Tutorial - Notes 06 [Hadoop Production Tuning Guide]
  7. Atguigu Big Data Hadoop Tutorial - Notes 07 [Hadoop Source Code Analysis]

Contents

02_尚硅谷大数据技术之Hadoop(入门)V3.3

P007【007_尚硅谷_Hadoop_入门_课程介绍】07:29

P008【008_尚硅谷_Hadoop_入门_Hadoop是什么】03:00

P009【009_尚硅谷_Hadoop_入门_Hadoop发展历史】05:52

P010【010_尚硅谷_Hadoop_入门_Hadoop三大发行版本】05:59

P011【011_尚硅谷_Hadoop_入门_Hadoop优势】03:52

P012【012_尚硅谷_Hadoop_入门_Hadoop1.x2.x3.x区别】03:00

P013【013_尚硅谷_Hadoop_入门_HDFS概述】06:26

P014【014_尚硅谷_Hadoop_入门_YARN概述】06:35

P015【015_尚硅谷_Hadoop_入门_MapReduce概述】01:55

P016【016_尚硅谷_Hadoop_入门_HDFS&YARN&MR关系】03:22

P017【017_尚硅谷_Hadoop_入门_大数据技术生态体系】09:17

P018【018_尚硅谷_Hadoop_入门_VMware安装】04:41

P019【019_尚硅谷_Hadoop_入门_Centos7.5软硬件安装】15:56

P020【020_尚硅谷_Hadoop_入门_IP和主机名称配置】10:50

P021【021_尚硅谷_Hadoop_入门_Xshell远程访问工具】09:05

P022【022_尚硅谷_Hadoop_入门_模板虚拟机准备完成】12:25

P023【023_尚硅谷_Hadoop_入门_克隆三台虚拟机】15:01

P024【024_尚硅谷_Hadoop_入门_JDK安装】07:02

P025【025_尚硅谷_Hadoop_入门_Hadoop安装】07:20

P026【026_尚硅谷_Hadoop_入门_本地运行模式】11:56

P027【027_尚硅谷_Hadoop_入门_scp&rsync命令讲解】15:01

P028【028_尚硅谷_Hadoop_入门_xsync分发脚本】18:14

P029【029_尚硅谷_Hadoop_入门_ssh免密登录】11:25

P030【030_尚硅谷_Hadoop_入门_集群配置】13:24

P031【031_尚硅谷_Hadoop_入门_群起集群并测试】16:52

P032【032_尚硅谷_Hadoop_入门_集群崩溃处理办法】08:10

P033【033_尚硅谷_Hadoop_入门_历史服务器配置】05:26

P034【034_尚硅谷_Hadoop_入门_日志聚集功能配置】05:42

P035【035_尚硅谷_Hadoop_入门_两个常用脚本】09:18

P036【036_尚硅谷_Hadoop_入门_两道面试题】04:15

P037【037_尚硅谷_Hadoop_入门_集群时间同步】11:27

P038【038_尚硅谷_Hadoop_入门_常见问题总结】10:57


02_尚硅谷大数据技术之Hadoop(入门)V3.3

P007【007_尚硅谷_Hadoop_入门_课程介绍】07:29

P008【008_尚硅谷_Hadoop_入门_Hadoop是什么】03:00

P009【009_尚硅谷_Hadoop_入门_Hadoop发展历史】05:52

P010【010_尚硅谷_Hadoop_入门_Hadoop三大发行版本】05:59

Hadoop has three major distributions: Apache, Cloudera, and Hortonworks.

1. Apache Hadoop

Official site: http://hadoop.apache.org

Download: https://hadoop.apache.org/releases.html

2. Cloudera Hadoop

Official site: https://www.cloudera.com/downloads/cdh

Download: https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_cdh_6_download.html

(1) Founded in 2008, Cloudera was the first company to commercialize Hadoop, providing commercial Hadoop solutions to partners, mainly support, consulting, and training.

(2) In 2009, Hadoop creator Doug Cutting joined Cloudera. Cloudera's main products are CDH, Cloudera Manager, and Cloudera Support.

(3) CDH is Cloudera's Hadoop distribution. It is fully open source and improves on Apache Hadoop in compatibility, security, and stability. Cloudera's list price is US$10,000 per node per year.

(4) Cloudera Manager is a software-distribution, management, and monitoring platform for the cluster. It can deploy a Hadoop cluster within a few hours and monitors the cluster's nodes and services in real time.

3. Hortonworks Hadoop

Official site: https://hortonworks.com/products/data-center/hdp/

Download: https://hortonworks.com/downloads/#data-platform

(1) Founded in 2011, Hortonworks was a joint venture between Yahoo and the Silicon Valley venture firm Benchmark Capital.

(2) At its founding the company absorbed roughly 25 to 30 Yahoo engineers dedicated to Hadoop; these engineers had been helping Yahoo develop Hadoop since 2005 and had contributed about 80% of Hadoop's code.

(3) Hortonworks' flagship product is the Hortonworks Data Platform (HDP), also 100% open source. Besides the usual projects, HDP includes Ambari, an open-source installation and management system.

(4) In 2018, Hortonworks was acquired by Cloudera.

P011【011_尚硅谷_Hadoop_入门_Hadoop优势】03:52

Hadoop's advantages (the "four highs"):

  1. High reliability
  2. High scalability
  3. High efficiency
  4. High fault tolerance

P012【012_尚硅谷_Hadoop_入门_Hadoop1.x2.x3.x区别】03:00

P013【013_尚硅谷_Hadoop_入门_HDFS概述】06:26

The Hadoop Distributed File System (HDFS) is a distributed file system.

  • 1) NameNode (nn): stores file metadata, such as the file name, directory structure, and file attributes (creation time, replica count, permissions), plus each file's block list and the DataNodes on which each block resides.
  • 2) DataNode (dn): stores file block data on the local file system, along with checksums for that block data.
  • 3) Secondary NameNode (2nn): periodically takes a backup of the NameNode's metadata.

P014【014_尚硅谷_Hadoop_入门_YARN概述】06:35

Yet Another Resource Negotiator (YARN) is Hadoop's resource manager.

P015【015_尚硅谷_Hadoop_入门_MapReduce概述】01:55

MapReduce splits a computation into two phases: Map and Reduce.

  • 1) The Map phase processes the input data in parallel.
  • 2) The Reduce phase aggregates the Map results.
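The two phases can be mimicked with a classic shell pipeline (a conceptual sketch only, not how Hadoop actually executes jobs): splitting lines into words plays the role of Map, `sort` plays the shuffle that groups identical keys, and `uniq -c` plays Reduce.

```shell
#!/bin/sh
# "Map":    emit one word per line from the input
# "Shuffle": sort brings identical words next to each other
# "Reduce":  uniq -c counts each group of identical words
printf 'hello world\nhello hadoop\n' | tr ' ' '\n' | sort | uniq -c
# counts: hadoop 1, hello 2, world 1
```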

P016【016_尚硅谷_Hadoop_入门_HDFS&YARN&MR关系】03:22

  1. HDFS
    1. NameNode: manages the file system metadata (which blocks make up each file, and where they live).
    2. DataNode: stores the actual block data on its node.
    3. SecondaryNameNode: the NameNode's "secretary"; backs up NameNode metadata and can take over part of the NameNode's work during recovery.
  2. YARN: resource management for the whole cluster.
    1. ResourceManager: cluster-wide resource management and scheduling.
    2. NodeManager: manages the resources of a single node.
  3. MapReduce: the computation framework that runs on top of YARN.

P017【017_尚硅谷_Hadoop_入门_大数据技术生态体系】09:17

Big data technology ecosystem

Recommendation system project architecture

P018【018_尚硅谷_Hadoop_入门_VMware安装】04:41

 

P019【019_尚硅谷_Hadoop_入门_Centos7.5软硬件安装】15:56

P020【020_尚硅谷_Hadoop_入门_IP和主机名称配置】10:50

[root@hadoop100 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
[root@hadoop100 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.88.133  netmask 255.255.255.0  broadcast 192.168.88.255
        inet6 fe80::363b:8659:c323:345d  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:0f:0a:6d  txqueuelen 1000  (Ethernet)
        RX packets 684561  bytes 1003221355 (956.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 53538  bytes 3445292 (3.2 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 84  bytes 9492 (9.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 84  bytes 9492 (9.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:1c:3c:a9  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

[root@hadoop100 ~]# systemctl restart network
[root@hadoop100 ~]# cat /etc/host
cat: /etc/host: No such file or directory
[root@hadoop100 ~]# cat /etc/hostname
hadoop100
[root@hadoop100 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
[root@hadoop100 ~]# vim /etc/hosts
[root@hadoop100 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.88.100  netmask 255.255.255.0  broadcast 192.168.88.255
        inet6 fe80::363b:8659:c323:345d  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:0f:0a:6d  txqueuelen 1000  (Ethernet)
        RX packets 684830  bytes 1003244575 (956.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 53597  bytes 3452600 (3.2 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 132  bytes 14436 (14.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 132  bytes 14436 (14.0 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:1c:3c:a9  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

[root@hadoop100 ~]# ll
total 40
-rw-------. 1 root root 1973 Mar 14 10:19 anaconda-ks.cfg
-rw-r--r--. 1 root root 2021 Mar 14 10:26 initial-setup-ks.cfg
drwxr-xr-x. 2 root root 4096 Mar 14 10:27 公共
drwxr-xr-x. 2 root root 4096 Mar 14 10:27 模板
drwxr-xr-x. 2 root root 4096 Mar 14 10:27 视频
drwxr-xr-x. 2 root root 4096 Mar 14 10:27 图片
drwxr-xr-x. 2 root root 4096 Mar 14 10:27 文档
drwxr-xr-x. 2 root root 4096 Mar 14 10:27 下载
drwxr-xr-x. 2 root root 4096 Mar 14 10:27 音乐
drwxr-xr-x. 2 root root 4096 Mar 14 10:27 桌面
[root@hadoop100 ~]#

vim /etc/sysconfig/network-scripts/ifcfg-ens33

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="3241b48d-3234-4c23-8a03-b9b393a99a65"
DEVICE="ens33"
ONBOOT="yes"

IPADDR=192.168.88.100
GATEWAY=192.168.88.2
DNS1=192.168.88.2

vim /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.88.100 hadoop100
192.168.88.101 hadoop101
192.168.88.102 hadoop102
192.168.88.103 hadoop103
192.168.88.104 hadoop104
192.168.88.105 hadoop105
192.168.88.106 hadoop106
192.168.88.107 hadoop107
192.168.88.108 hadoop108

192.168.88.151 node1 node1.itcast.cn
192.168.88.152 node2 node2.itcast.cn
192.168.88.153 node3 node3.itcast.cn
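Once the mapping is in place, it is worth sanity-checking that every expected host name really appears in the file. A minimal sketch, run here against a temporary copy so nothing real is touched; on an actual node you would point `HOSTS_FILE` at `/etc/hosts`:

```shell
#!/bin/sh
# Verify that each planned cluster host name has an entry in the hosts file.
HOSTS_FILE=$(mktemp)
cat > "$HOSTS_FILE" <<'EOF'
192.168.88.100 hadoop100
192.168.88.101 hadoop101
192.168.88.102 hadoop102
EOF
for h in hadoop100 hadoop101 hadoop102; do
    # -w matches the whole word, so "hadoop10" will not match "hadoop100"
    grep -qw "$h" "$HOSTS_FILE" && echo "$h ok" || echo "$h MISSING"
done
```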

P021【021_尚硅谷_Hadoop_入门_Xshell远程访问工具】09:05

P022【022_尚硅谷_Hadoop_入门_模板虚拟机准备完成】12:25

yum install -y epel-release

systemctl stop firewalld

systemctl disable firewalld.service

P023【023_尚硅谷_Hadoop_入门_克隆三台虚拟机】15:01

vim /etc/sysconfig/network-scripts/ifcfg-ens33

vim /etc/hostname

reboot
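Each clone needs a unique IP address and host name before rebooting. The edit itself can be scripted; the sketch below runs `sed` on a temporary copy of a two-line config (the temp file is purely illustrative), while on a real clone you would target `/etc/sysconfig/network-scripts/ifcfg-ens33` and `/etc/hostname`:

```shell
#!/bin/sh
# Demonstrate the per-clone IPADDR edit on a throwaway copy of the config.
cfg=$(mktemp)
printf 'BOOTPROTO="static"\nIPADDR=192.168.88.100\n' > "$cfg"
# Change only the IPADDR line, e.g. for the hadoop102 clone:
sed -i 's/^IPADDR=.*/IPADDR=192.168.88.102/' "$cfg"
grep '^IPADDR' "$cfg"
```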

P024【024_尚硅谷_Hadoop_入门_JDK安装】07:02

Install the JDK on hadoop102, then copy it to hadoop103 and hadoop104.

P025【025_尚硅谷_Hadoop_入门_Hadoop安装】07:20

Same diagram as in P024.

P026【026_尚硅谷_Hadoop_入门_本地运行模式】11:56

Apache Hadoop

http://node1:9870/explorer.html#/

[root@node1 ~]# cd /export/server/hadoop-3.3.0/share/hadoop/mapreduce/
[root@node1 mapreduce]# hadoop jar hadoop-mapreduce-examples-3.3.0.jar wordcount /wordcount/input /wordcount/output
2023-03-20 14:43:07,516 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at node1/192.168.88.151:8032
2023-03-20 14:43:09,291 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/root/.staging/job_1679293699463_0001
2023-03-20 14:43:11,916 INFO input.FileInputFormat: Total input files to process : 1
2023-03-20 14:43:12,313 INFO mapreduce.JobSubmitter: number of splits:1
2023-03-20 14:43:13,173 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1679293699463_0001
2023-03-20 14:43:13,173 INFO mapreduce.JobSubmitter: Executing with tokens: []
2023-03-20 14:43:14,684 INFO conf.Configuration: resource-types.xml not found
2023-03-20 14:43:14,684 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2023-03-20 14:43:17,054 INFO impl.YarnClientImpl: Submitted application application_1679293699463_0001
2023-03-20 14:43:17,123 INFO mapreduce.Job: The url to track the job: http://node1:8088/proxy/application_1679293699463_0001/
2023-03-20 14:43:17,124 INFO mapreduce.Job: Running job: job_1679293699463_0001
2023-03-20 14:43:52,340 INFO mapreduce.Job: Job job_1679293699463_0001 running in uber mode : false
2023-03-20 14:43:52,360 INFO mapreduce.Job:  map 0% reduce 0%
2023-03-20 14:44:08,011 INFO mapreduce.Job:  map 100% reduce 0%
2023-03-20 14:44:16,986 INFO mapreduce.Job:  map 100% reduce 100%
2023-03-20 14:44:18,020 INFO mapreduce.Job: Job job_1679293699463_0001 completed successfully
2023-03-20 14:44:18,579 INFO mapreduce.Job: Counters: 54
        File System Counters
                FILE: Number of bytes read=31
                FILE: Number of bytes written=529345
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=142
                HDFS: Number of bytes written=17
                HDFS: Number of read operations=8
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
                HDFS: Number of bytes read erasure-coded=0
        Job Counters
                Launched map tasks=1
                Launched reduce tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=11303
                Total time spent by all reduces in occupied slots (ms)=6220
                Total time spent by all map tasks (ms)=11303
                Total time spent by all reduce tasks (ms)=6220
                Total vcore-milliseconds taken by all map tasks=11303
                Total vcore-milliseconds taken by all reduce tasks=6220
                Total megabyte-milliseconds taken by all map tasks=11574272
                Total megabyte-milliseconds taken by all reduce tasks=6369280
        Map-Reduce Framework
                Map input records=2
                Map output records=5
                Map output bytes=53
                Map output materialized bytes=31
                Input split bytes=108
                Combine input records=5
                Combine output records=2
                Reduce input groups=2
                Reduce shuffle bytes=31
                Reduce input records=2
                Reduce output records=2
                Spilled Records=4
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=546
                CPU time spent (ms)=3680
                Physical memory (bytes) snapshot=499236864
                Virtual memory (bytes) snapshot=5568684032
                Total committed heap usage (bytes)=365953024
                Peak Map Physical memory (bytes)=301096960
                Peak Map Virtual memory (bytes)=2779201536
                Peak Reduce Physical memory (bytes)=198139904
                Peak Reduce Virtual memory (bytes)=2789482496
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=34
        File Output Format Counters
                Bytes Written=17
[root@node1 mapreduce]#
[root@node1 mapreduce]# hadoop jar hadoop-mapreduce-examples-3.3.0.jar wordcount /wc_input /wc_output
2023-03-20 15:01:48,007 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at node1/192.168.88.151:8032
2023-03-20 15:01:49,475 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/root/.staging/job_1679293699463_0002
2023-03-20 15:01:50,522 INFO input.FileInputFormat: Total input files to process : 1
2023-03-20 15:01:51,010 INFO mapreduce.JobSubmitter: number of splits:1
2023-03-20 15:01:51,894 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1679293699463_0002
2023-03-20 15:01:51,894 INFO mapreduce.JobSubmitter: Executing with tokens: []
2023-03-20 15:01:52,684 INFO conf.Configuration: resource-types.xml not found
2023-03-20 15:01:52,687 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2023-03-20 15:01:53,237 INFO impl.YarnClientImpl: Submitted application application_1679293699463_0002
2023-03-20 15:01:53,487 INFO mapreduce.Job: The url to track the job: http://node1:8088/proxy/application_1679293699463_0002/
2023-03-20 15:01:53,492 INFO mapreduce.Job: Running job: job_1679293699463_0002
2023-03-20 15:02:15,329 INFO mapreduce.Job: Job job_1679293699463_0002 running in uber mode : false
2023-03-20 15:02:15,342 INFO mapreduce.Job:  map 0% reduce 0%
2023-03-20 15:02:26,652 INFO mapreduce.Job:  map 100% reduce 0%
2023-03-20 15:02:40,297 INFO mapreduce.Job:  map 100% reduce 100%
2023-03-20 15:02:41,350 INFO mapreduce.Job: Job job_1679293699463_0002 completed successfully
2023-03-20 15:02:41,557 INFO mapreduce.Job: Counters: 54
        File System Counters
                FILE: Number of bytes read=60
                FILE: Number of bytes written=529375
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=149
                HDFS: Number of bytes written=38
                HDFS: Number of read operations=8
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
                HDFS: Number of bytes read erasure-coded=0
        Job Counters
                Launched map tasks=1
                Launched reduce tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=8398
                Total time spent by all reduces in occupied slots (ms)=9720
                Total time spent by all map tasks (ms)=8398
                Total time spent by all reduce tasks (ms)=9720
                Total vcore-milliseconds taken by all map tasks=8398
                Total vcore-milliseconds taken by all reduce tasks=9720
                Total megabyte-milliseconds taken by all map tasks=8599552
                Total megabyte-milliseconds taken by all reduce tasks=9953280
        Map-Reduce Framework
                Map input records=4
                Map output records=6
                Map output bytes=69
                Map output materialized bytes=60
                Input split bytes=100
                Combine input records=6
                Combine output records=4
                Reduce input groups=4
                Reduce shuffle bytes=60
                Reduce input records=4
                Reduce output records=4
                Spilled Records=8
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=1000
                CPU time spent (ms)=3880
                Physical memory (bytes) snapshot=503771136
                Virtual memory (bytes) snapshot=5568987136
                Total committed heap usage (bytes)=428343296
                Peak Map Physical memory (bytes)=303013888
                Peak Map Virtual memory (bytes)=2782048256
                Peak Reduce Physical memory (bytes)=200757248
                Peak Reduce Virtual memory (bytes)=2786938880
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=49
        File Output Format Counters
                Bytes Written=38
[root@node1 mapreduce]# pwd
/export/server/hadoop-3.3.0/share/hadoop/mapreduce
[root@node1 mapreduce]#

P027【027_尚硅谷_Hadoop_入门_scp&rsync命令讲解】15:01

Use scp for the first full copy; use rsync for subsequent syncs.

rsync is mainly used for backup and mirroring. It is fast, avoids copying identical content, and supports symbolic links.

Difference between rsync and scp: rsync is faster than scp because it only transfers files that differ, while scp copies everything over again.

P028【028_尚硅谷_Hadoop_入门_xsync分发脚本】18:14

Copy and sync commands:

  1. scp (secure copy): secure copy
  2. rsync: remote synchronization tool
  3. xsync: cluster distribution script

The dirname command strips the non-directory suffix from a file name, printing only the directory portion of the path.

[root@node1 ~]# dirname /home/atguigu/a.txt
/home/atguigu
[root@node1 ~]#

The basename command prints the file-name portion of a path.

[root@node1 atguigu]# basename /home/atguigu/a.txt
a.txt
[root@node1 atguigu]#

#!/bin/bash

# 1. Check the argument count
if [ $# -lt 1 ]
then
    echo "Not Enough Arguments!"
    exit
fi

# 2. Loop over every machine in the cluster
for host in hadoop102 hadoop103 hadoop104
do
    echo ==================== $host ====================
    # 3. Loop over every file/directory given on the command line
    for file in "$@"
    do
        # 4. Check that the file exists
        if [ -e "$file" ]
        then
            # 5. Get the absolute parent directory (resolving symlinks)
            pdir=$(cd -P "$(dirname "$file")"; pwd)
            # 6. Get the bare file name
            fname=$(basename "$file")
            ssh $host "mkdir -p $pdir"
            rsync -av "$pdir/$fname" "$host:$pdir"
        else
            echo "$file does not exist!"
        fi
    done
done
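The heart of the script is the `pdir`/`fname` pair: it turns any argument, however messy, into an absolute parent directory plus a bare name, so the same path can be recreated with `mkdir -p` on every target host. A local sketch of just those two lines (the `/tmp/xsync_demo` path is made up for the demo):

```shell
#!/bin/sh
# Resolve a deliberately messy path the same way xsync does.
mkdir -p /tmp/xsync_demo/conf
touch /tmp/xsync_demo/conf/core-site.xml
file=/tmp/xsync_demo/conf/../conf/core-site.xml   # messy on purpose
pdir=$(cd -P "$(dirname "$file")"; pwd)           # absolute parent dir
fname=$(basename "$file")                         # bare file name
echo "$pdir/$fname"
```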
[root@node1 bin]# chmod 777 xsync
[root@node1 bin]# ll
total 4
-rwxrwxrwx 1 atguigu atguigu 727 Mar 20 16:00 xsync
[root@node1 bin]# cd ..
[root@node1 atguigu]# xsync bin/
==================== node1 ====================
sending incremental file list

sent 94 bytes  received 17 bytes  222.00 bytes/sec
total size is 727  speedup is 6.55
==================== node2 ====================
sending incremental file list
bin/
bin/xsync

sent 871 bytes  received 39 bytes  606.67 bytes/sec
total size is 727  speedup is 0.80
==================== node3 ====================
sending incremental file list
bin/
bin/xsync

sent 871 bytes  received 39 bytes  1,820.00 bytes/sec
total size is 727  speedup is 0.80
[root@node1 atguigu]# pwd
/home/atguigu
[root@node1 atguigu]# ls -al
total 20
drwx------  6 atguigu atguigu  168 Mar 20 15:56 .
drwxr-xr-x. 6 root    root      56 Mar 20 10:08 ..
-rw-r--r--  1 root    root       0 Mar 20 15:44 a.txt
-rw-------  1 atguigu atguigu   21 Mar 20 11:48 .bash_history
-rw-r--r--  1 atguigu atguigu   18 Aug  8  2019 .bash_logout
-rw-r--r--  1 atguigu atguigu  193 Aug  8  2019 .bash_profile
-rw-r--r--  1 atguigu atguigu  231 Aug  8  2019 .bashrc
drwxrwxr-x  2 atguigu atguigu   19 Mar 20 15:56 bin
drwxrwxr-x  3 atguigu atguigu   18 Mar 20 10:17 .cache
drwxrwxr-x  3 atguigu atguigu   18 Mar 20 10:17 .config
drwxr-xr-x  4 atguigu atguigu   39 Mar 10 20:04 .mozilla
-rw-------  1 atguigu atguigu 1261 Mar 20 15:56 .viminfo
[root@node1 atguigu]#
Connected.
Last login: Mon Mar 20 16:01:40 2023
[root@node1 ~]# su atguigu
[atguigu@node1 root]$ cd /home/atguigu/
[atguigu@node1 ~]$ pwd
/home/atguigu
[atguigu@node1 ~]$ xsync bin/
==================== node1 ====================
The authenticity of host 'node1 (192.168.88.151)' can't be established.
ECDSA key fingerprint is SHA256:+eLT3FrOEuEsxBxjOd89raPi/ChJz26WGAfqBpz/KEk.
ECDSA key fingerprint is MD5:18:42:ad:0f:2b:97:d8:b5:68:14:6a:98:e9:72:db:bb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1,192.168.88.151' (ECDSA) to the list of known hosts.
atguigu@node1's password:
atguigu@node1's password:
sending incremental file list

sent 98 bytes  received 17 bytes  17.69 bytes/sec
total size is 727  speedup is 6.32
==================== node2 ====================
The authenticity of host 'node2 (192.168.88.152)' can't be established.
ECDSA key fingerprint is SHA256:+eLT3FrOEuEsxBxjOd89raPi/ChJz26WGAfqBpz/KEk.
ECDSA key fingerprint is MD5:18:42:ad:0f:2b:97:d8:b5:68:14:6a:98:e9:72:db:bb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,192.168.88.152' (ECDSA) to the list of known hosts.
atguigu@node2's password:
atguigu@node2's password:
sending incremental file list

sent 94 bytes  received 17 bytes  44.40 bytes/sec
total size is 727  speedup is 6.55
==================== node3 ====================
The authenticity of host 'node3 (192.168.88.153)' can't be established.
ECDSA key fingerprint is SHA256:+eLT3FrOEuEsxBxjOd89raPi/ChJz26WGAfqBpz/KEk.
ECDSA key fingerprint is MD5:18:42:ad:0f:2b:97:d8:b5:68:14:6a:98:e9:72:db:bb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node3,192.168.88.153' (ECDSA) to the list of known hosts.
atguigu@node3's password:
atguigu@node3's password:
sending incremental file list

sent 94 bytes  received 17 bytes  44.40 bytes/sec
total size is 727  speedup is 6.55
[atguigu@node1 ~]$
----------------------------------------------------------------------------------------
Connected.
Last login: Mon Mar 20 17:22:20 2023 from 192.168.88.151
[root@node2 ~]# su atguigu
[atguigu@node2 root]$ vim /etc/sudoers
You have new mail in /var/spool/mail/root
[atguigu@node2 root]$ su root
Password:
[root@node2 ~]# vim /etc/sudoers
[root@node2 ~]# cd /opt/
[root@node2 opt]# ll
total 0
drwxr-xr-x  4 atguigu atguigu 46 Mar 20 11:32 module
drwxr-xr-x. 2 root    root     6 Oct 31  2018 rh
drwxr-xr-x  2 atguigu atguigu 67 Mar 20 10:47 software
[root@node2 opt]# su atguigu
[atguigu@node2 opt]$ cd /home/atguigu/
[atguigu@node2 ~]$ llk
bash: llk: command not found
[atguigu@node2 ~]$ ll
total 0
drwxrwxr-x 2 atguigu atguigu 19 Mar 20 15:56 bin
[atguigu@node2 ~]$ cd ~
You have new mail in /var/spool/mail/root
[atguigu@node2 ~]$ ll
total 0
drwxrwxr-x 2 atguigu atguigu 19 Mar 20 15:56 bin
[atguigu@node2 ~]$ ll
total 0
drwxrwxr-x 2 atguigu atguigu 19 Mar 20 15:56 bin
You have new mail in /var/spool/mail/root
[atguigu@node2 ~]$ cd bin
[atguigu@node2 bin]$ ll
total 4
-rwxrwxrwx 1 atguigu atguigu 727 Mar 20 16:00 xsync
[atguigu@node2 bin]$
----------------------------------------------------------------------------------------
Connected.
Last login: Mon Mar 20 17:22:26 2023 from 192.168.88.152
[root@node3 ~]# vim /etc/sudoers
You have new mail in /var/spool/mail/root
[root@node3 ~]# cd /opt/
[root@node3 opt]# ll
total 0
drwxr-xr-x  4 atguigu atguigu 46 Mar 20 11:32 module
drwxr-xr-x. 2 root    root     6 Oct 31  2018 rh
drwxr-xr-x  2 atguigu atguigu 67 Mar 20 10:47 software
[root@node3 opt]# cd ~
You have new mail in /var/spool/mail/root
[root@node3 ~]# ll
total 4
-rw-------. 1 root root 1340 Sep 11  2020 anaconda-ks.cfg
-rw------- 1 root root 0 Feb 23 16:20 nohup.out
[root@node3 ~]# ll
total 4
-rw-------. 1 root root 1340 Sep 11  2020 anaconda-ks.cfg
-rw------- 1 root root 0 Feb 23 16:20 nohup.out
You have new mail in /var/spool/mail/root
[root@node3 ~]# cd ~
[root@node3 ~]# ll
total 4
-rw-------. 1 root root 1340 Sep 11  2020 anaconda-ks.cfg
-rw------- 1 root root 0 Feb 23 16:20 nohup.out
[root@node3 ~]# su atguigu
[atguigu@node3 root]$ cd ~
[atguigu@node3 ~]$ ls
bin
[atguigu@node3 ~]$ ll
total 0
drwxrwxr-x 2 atguigu atguigu 19 Mar 20 15:56 bin
[atguigu@node3 ~]$ cd bin
[atguigu@node3 bin]$ ll
total 4
-rwxrwxrwx 1 atguigu atguigu 727 Mar 20 16:00 xsync
[atguigu@node3 bin]$
----------------------------------------------------------------------------------------
[atguigu@node1 ~]$ xsync /etc/profile.d/my_env.sh
==================== node1 ====================
atguigu@node1's password:
atguigu@node1's password:
sending incremental file list

sent 48 bytes  received 12 bytes  13.33 bytes/sec
total size is 223  speedup is 3.72
==================== node2 ====================
atguigu@node2's password:
atguigu@node2's password:
sending incremental file list
my_env.sh
rsync: mkstemp "/etc/profile.d/.my_env.sh.guTzvB" failed: Permission denied (13)

sent 95 bytes  received 126 bytes  88.40 bytes/sec
total size is 223  speedup is 1.01
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1178) [sender=3.1.2]
==================== node3 ====================
atguigu@node3's password:
atguigu@node3's password:
sending incremental file list
my_env.sh
rsync: mkstemp "/etc/profile.d/.my_env.sh.evDUZa" failed: Permission denied (13)

sent 95 bytes  received 126 bytes  88.40 bytes/sec
total size is 223  speedup is 1.01
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1178) [sender=3.1.2]
[atguigu@node1 ~]$ sudo ./bin/xsync /etc/profile.d/my_env.sh
==================== node1 ====================
sending incremental file list

sent 48 bytes  received 12 bytes  120.00 bytes/sec
total size is 223  speedup is 3.72
==================== node2 ====================
sending incremental file list
my_env.sh

sent 95 bytes  received 41 bytes  272.00 bytes/sec
total size is 223  speedup is 1.64
==================== node3 ====================
sending incremental file list
my_env.sh

sent 95 bytes  received 41 bytes  272.00 bytes/sec
total size is 223  speedup is 1.64
[atguigu@node1 ~]$

P029【029_尚硅谷_Hadoop_入门_ssh免密登录】11:25

 

Connected.
Last login: Mon Mar 20 19:14:44 2023 from 192.168.88.1
[root@node1 ~]# su atguigu
[atguigu@node1 root]$ pwd
/root
[atguigu@node1 root]$ cd ~
[atguigu@node1 ~]$ pwd
/home/atguigu
[atguigu@node1 ~]$ ls -al
total 20
drwx------  7 atguigu atguigu  180 Mar 20 19:22 .
drwxr-xr-x. 6 root    root      56 Mar 20 10:08 ..
-rw-r--r--  1 root    root       0 Mar 20 15:44 a.txt
-rw-------  1 atguigu atguigu  391 Mar 20 19:36 .bash_history
-rw-r--r--  1 atguigu atguigu   18 Aug  8  2019 .bash_logout
-rw-r--r--  1 atguigu atguigu  193 Aug  8  2019 .bash_profile
-rw-r--r--  1 atguigu atguigu  231 Aug  8  2019 .bashrc
drwxrwxr-x  2 atguigu atguigu   19 Mar 20 15:56 bin
drwxrwxr-x  3 atguigu atguigu   18 Mar 20 10:17 .cache
drwxrwxr-x  3 atguigu atguigu   18 Mar 20 10:17 .config
drwxr-xr-x  4 atguigu atguigu   39 Mar 10 20:04 .mozilla
drwx------  2 atguigu atguigu   25 Mar 20 19:22 .ssh
-rw-------  1 atguigu atguigu 1261 Mar 20 15:56 .viminfo
[atguigu@node1 ~]$ cd .ssh
[atguigu@node1 .ssh]$ ll
total 4
-rw-r--r-- 1 atguigu atguigu 546 Mar 20 19:23 known_hosts
[atguigu@node1 .ssh]$ cat known_hosts
node1,192.168.88.151 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBH5t/7/J0WwO0GTeNpg3EjfM5PjoppHMfq+wCWp46lhQ/B6O6kTOdx+2mEZu9QkAJk9oM4RGqiZKA5vmifHkQQ=
node2,192.168.88.152 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBH5t/7/J0WwO0GTeNpg3EjfM5PjoppHMfq+wCWp46lhQ/B6O6kTOdx+2mEZu9QkAJk9oM4RGqiZKA5vmifHkQQ=
node3,192.168.88.153 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBH5t/7/J0WwO0GTeNpg3EjfM5PjoppHMfq+wCWp46lhQ/B6O6kTOdx+2mEZu9QkAJk9oM4RGqiZKA5vmifHkQQ=
[atguigu@node1 .ssh]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/atguigu/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/atguigu/.ssh/id_rsa.
Your public key has been saved in /home/atguigu/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:CBFD39JBRh/GgmTTIsC+gJVjWDQWFyE5riX287PXfzc atguigu@node1.itcast.cn
The key's randomart image is:
+---[RSA 2048]----+
| =O*O=+==.o |
|..O+.=o=o+.. |
|.= o..o.o.. |
|+.+ . o |
|.=.. . S |
|. .o |
| o . |
| o . . . E |
| .+ ... . .|
+----[SHA256]-----+
[atguigu@node1 .ssh]$
[atguigu@node1 .ssh]$ ll
total 12
-rw------- 1 atguigu atguigu 1679 Mar 20 19:40 id_rsa
-rw-r--r-- 1 atguigu atguigu  405 Mar 20 19:40 id_rsa.pub
-rw-r--r-- 1 atguigu atguigu  546 Mar 20 19:23 known_hosts
[atguigu@node1 .ssh]$ cat id_rsa
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEA0a3S5+QcIvjewfOYIHTlnlzbQgV7Y92JC6ZMt1XGklva53rw
CEhf9yvfcM1VTBDNYHyvl6+rTKKU8EHTfpEoYa5blB299ZfKkM4OxPkcE9nz7uTN
TyYF84wAR1IIEtT1bLVJPyh/Hvh8ye6UMj1PhZIGflNjbGwYkJoDK3wXxwaD4xey
Y0zVCgL7QDqND0Iw8XQrCSQ8cQVgbBxprUYu97+n/7GOs0WASC6gtrW9IksxHtSB
pI6ieKVzv9fuWcbhb8C5w7BqdVU6jrKqo7FnQaDKBNdC4weOku4TA+beHpc4W3p8
f8b+b3U+A0qOj+uRVX7uDoxuunH4xAjqn8TmPQIDAQABAoIBAQCFl0UHn6tZkMyE
MApdq3zcj/bWMp3x+6SkOnkoWcshVsq6rvYdoNcbqOU8fmZ5Bz+C2Q4bC76NHgzc
omP4gM2Eps0MKoLr5aEW72Izly+Pak7jhv1UDzq9eBZ5WkdwkCQp9brMNaYAensv
QQVEmRGAXZArjj+LRbfE8YtReke/8jxyJlRxmVrq+A0a6VAAdOSL/71EJZ9+zJy/
SpN3UlZj27LndYIaOIsQ/vnhTrtb75l4VH24UNhHzJQv1PcBSUrSVOEWrIq/sOzU
b4RW3Fuo51ZLB9ysvxZd5KnwC+yX63XKf8IJqfpWt1KrJ3IV6acvs1UEU+DELfUY
b7v0GkhhAoGBAOuswY5qI0zUiBSEGxysDml5ZG9n4i2JnzmKmnVMAGwQq7ZzUv0o
VwObDmFgp+A8NDAstxR6My5kKky2MOSv/ckJdAEzY9iVI3vXtkT54HYhHstIzNYg
ube1MylcLUttaR/OpbJpyN8BavTQEtydJP7Xchorw6DaZOGLhWjX8EjpAoGBAOPD
IVSfi+51s9h5cIDvvm6IiKDn05kf8D/VrD3awm/hrQrRwF3ouD6JBr/T9OfWqh1W
v9xjn5uurTflO8CZOU91VB/nihXxN0pT6CREi8/I9QSAZbrCkCIWZ6ku7seyEZg6
fp756zCyVeKNSZPpDbKH5LCSyafkroZBxcZKFp41AoGAXff0+SbiyliXpa6C7OzB
llabsDv4l/Wesh/MtGZIaM5A2S+kcGJsR3jExBj49tSqbmb13MlYrO+tWgbu+dAe
XdFSGsR11D6q9k8tUtVbJV7RW3a8jchgpJowOxaQzNlkKBWKRdgeCqUTE2f/jU1v
Gdmnmj3G89UAklnCKOqo2TkCgYEAuGBVEgkaIQ7daQdd4LKzaQ1T9VXWAGZPeY2C
oov9zM5W46RK4nqq88y/TvjJkAhBrAB2znVDVqcACHikd1RShZVIZY9tRDgB90SX
bwyiVbGrT1qVf6tTPJUAk3+vwq7O+XmY2R8dmk0zo3OWtYr7EKRbp+kcH7LK6VpD
PTLqvmUCgYEAt8rZWnAjGiipc/lLHMkoeKMK+JvA42HETVxQkdG17hTRzrotMMaF
CajslMcQ9m+ALHko2uyvsHVOdm66tQO65IKr5iavpcq8ZHKh51jJPdJpQwAJE9vr
d4ASXHEESfNK5/YPzMAIy019lgJal4bsy8tE8i6LIv6/PHVhNDs3Rsg=
-----END RSA PRIVATE KEY-----
[atguigu@node1 .ssh]$ cat id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDRrdLn5Bwi+N7B85ggdOWeXNtCBXtj3YkLpky3VcaSW9rnevAISF/3K99wzVVMEM1gfK+Xr6tMopTwQdN+kShhrluUHb31l8qQzg7E+RwT2fPu5M1PJgXzjABHUggS1PVstUk/KH8e+HzJ7pQyPU+FkgZ+U2NsbBiQmgMrfBfHBoPjF7JjTNUKAvtAOo0PQjDxdCsJJDxxBWBsHGmtRi73v6f/sY6zRYBILqC2tb0iSzEe1IGkjqJ4pXO/1+5ZxuFvwLnDsGp1VTqOsqqjsWdBoMoE10LjB46S7hMD5t4elzhbenx/xv5vdT4DSo6P65FVfu4OjG66cfjECOqfxOY9 atguigu@node1.itcast.cn
  88. ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDRrdLn5Bwi+N7B85ggdOWeXNtCBXtj3YkLpky3VcaSW9rnevAISF/3K99wzVVMEM1gfK+Xr6tMopTwQdN+kShhrluUHb31l8qQzg7E+RwT2fPu5M1PJgXzjABHUggS1PVstUk/KH8e+HzJ7pQyPU+FkgZ+U2NsbBiQmgMrfBfHBoPjF7JjTNUKAvtAOo0PQjDxdCsJJDxxBWBsHGmtRi73v6f/sY6zRYBILqC2tb0iSzEe1IGkjqJ4pXO/1+5ZxuFvwLnDsGp1VTqOsqqjsWdBoMoE10LjB46S7hMD5t4elzhbenx/xv5vdT4DSo6P65FVfu4OjG66cfjECOqfxOY9 atguigu@node1.itcast.cn
  89. [atguigu@node1 .ssh]$ ssh-copy-id node2
  90. /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/atguigu/.ssh/id_rsa.pub"
  91. /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
  92. /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
  93. atguigu@node2's password:
  94. Permission denied, please try again.
  95. atguigu@node2's password:
  96. Number of key(s) added: 1
  97. Now try logging into the machine, with: "ssh 'node2'"
  98. and check to make sure that only the key(s) you wanted were added.
  99. [atguigu@node1 .ssh]$ ssh node2
  100. Last login: Mon Mar 20 19:37:14 2023
  101. [atguigu@node2 ~]$ hostname
  102. node2.itcast.cn
  103. [atguigu@node2 ~]$ exit
  104. 登出
  105. Connection to node2 closed.
  106. [atguigu@node1 .ssh]$ ssh-copy-id node3
  107. /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/atguigu/.ssh/id_rsa.pub"
  108. /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
  109. /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
  110. atguigu@node3's password:
  111. Number of key(s) added: 1
  112. Now try logging into the machine, with: "ssh 'node3'"
  113. and check to make sure that only the key(s) you wanted were added.
  114. [atguigu@node1 .ssh]$ ssh node3
  115. Last login: Mon Mar 20 19:37:33 2023
  116. [atguigu@node3 ~]$ hostname
  117. node3.itcast.cn
  118. [atguigu@node3 ~]$ exit
  119. 登出
  120. Connection to node3 closed.
  121. [atguigu@node1 .ssh]$ ssh node1
  122. atguigu@node1's password:
  123. Last login: Mon Mar 20 19:36:46 2023
  124. [atguigu@node1 ~]$ exit
  125. 登出
  126. Connection to node1 closed.
  127. [atguigu@node1 .ssh]$ ssh-copy-id node1
  128. /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/atguigu/.ssh/id_rsa.pub"
  129. /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
  130. /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
  131. atguigu@node1's password:
  132. Number of key(s) added: 1
  133. Now try logging into the machine, with: "ssh 'node1'"
  134. and check to make sure that only the key(s) you wanted were added.
  135. [atguigu@node1 .ssh]$ ll
  136. 总用量 16
  137. -rw------- 1 atguigu atguigu 405 3月 20 19:45 authorized_keys
  138. -rw------- 1 atguigu atguigu 1679 3月 20 19:40 id_rsa
  139. -rw-r--r-- 1 atguigu atguigu 405 3月 20 19:40 id_rsa.pub
  140. -rw-r--r-- 1 atguigu atguigu 546 3月 20 19:23 known_hosts
  141. [atguigu@node1 .ssh]$ cat authorized_keys
  142. ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDRrdLn5Bwi+N7B85ggdOWeXNtCBXtj3YkLpky3VcaSW9rnevAISF/3K99wzVVMEM1gfK+Xr6tMopTwQdN+kShhrluUHb31l8qQzg7E+RwT2fPu5M1PJgXzjABHUggS1PVstUk/KH8e+HzJ7pQyPU+FkgZ+U2NsbBiQmgMrfBfHBoPjF7JjTNUKAvtAOo0PQjDxdCsJJDxxBWBsHGmtRi73v6f/sY6zRYBILqC2tb0iSzEe1IGkjqJ4pXO/1+5ZxuFvwLnDsGp1VTqOsqqjsWdBoMoE10LjB46S7hMD5t4elzhbenx/xv5vdT4DSo6P65FVfu4OjG66cfjECOqfxOY9 atguigu@node1.itcast.cn
  143. [atguigu@node1 .ssh]$ pwd
  144. /home/atguigu/.ssh
  145. [atguigu@node1 .ssh]$ su root
  146. 密码:
  147. [root@node1 .ssh]# ll
  148. 总用量 16
  149. -rw------- 1 atguigu atguigu 810 3月 20 19:51 authorized_keys
  150. -rw------- 1 atguigu atguigu 1679 3月 20 19:40 id_rsa
  151. -rw-r--r-- 1 atguigu atguigu 405 3月 20 19:40 id_rsa.pub
  152. -rw-r--r-- 1 atguigu atguigu 546 3月 20 19:23 known_hosts
  153. [root@node1 .ssh]# ssh-keygen -t rsa
  154. Generating public/private rsa key pair.
  155. Enter file in which to save the key (/root/.ssh/id_rsa):
  156. /root/.ssh/id_rsa already exists.
  157. Overwrite (y/n)?
  158. [root@node1 .ssh]# ssh-copy-id node1
  159. /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
  160. /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
  161. /usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
  162. (if you think this is a mistake, you may want to use -f option)
  163. [root@node1 .ssh]# ssh-copy-id node2
  164. /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
  165. /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
  166. /usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
  167. (if you think this is a mistake, you may want to use -f option)
  168. [root@node1 .ssh]# ssh-copy-id node3
  169. /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
  170. /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
  171. /usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
  172. (if you think this is a mistake, you may want to use -f option)
  173. [root@node1 .ssh]# su atguigu
  174. [atguigu@node1 .ssh]$ cd ~
  175. [atguigu@node1 ~]$ xsync hello.txt
  176. ==================== node1 ====================
  177. hello.txt does not exists!
  178. ==================== node2 ====================
  179. hello.txt does not exists!
  180. ==================== node3 ====================
  181. hello.txt does not exists!
  182. [atguigu@node1 ~]$ pwd
  183. /home/atguigu
  184. [atguigu@node1 ~]$ cd /home/atguigu/
  185. [atguigu@node1 ~]$ xsync hello.txt
  186. ==================== node1 ====================
  187. hello.txt does not exists!
  188. ==================== node2 ====================
  189. hello.txt does not exists!
  190. ==================== node3 ====================
  191. hello.txt does not exists!
  192. [atguigu@node1 ~]$ xsync a.txt
  193. ==================== node1 ====================
  194. sending incremental file list
  195. sent 43 bytes received 12 bytes 110.00 bytes/sec
  196. total size is 3 speedup is 0.05
  197. ==================== node2 ====================
  198. sending incremental file list
  199. a.txt
  200. sent 93 bytes received 35 bytes 256.00 bytes/sec
  201. total size is 3 speedup is 0.02
  202. ==================== node3 ====================
  203. sending incremental file list
  204. a.txt
  205. sent 93 bytes received 35 bytes 256.00 bytes/sec
  206. total size is 3 speedup is 0.02
  207. [atguigu@node1 ~]$
  208. ----------------------------------------------------------------------------------------
  209. 连接成功
  210. Last login: Mon Mar 20 19:17:38 2023
  211. [root@node2 ~]# su atguigu
  212. [atguigu@node2 root]$ cd ~
  213. [atguigu@node2 ~]$ pwd
  214. /home/atguigu
  215. [atguigu@node2 ~]$ ls -al
  216. 总用量 20
  217. drwx------ 5 atguigu atguigu 139 3月 20 19:17 .
  218. drwxr-xr-x. 3 root root 21 3月 20 10:08 ..
  219. -rw------- 1 atguigu atguigu 108 3月 20 19:36 .bash_history
  220. -rw-r--r-- 1 atguigu atguigu 18 8月 8 2019 .bash_logout
  221. -rw-r--r-- 1 atguigu atguigu 193 8月 8 2019 .bash_profile
  222. -rw-r--r-- 1 atguigu atguigu 231 8月 8 2019 .bashrc
  223. drwxrwxr-x 2 atguigu atguigu 19 3月 20 15:56 bin
  224. drwxrwxr-x 3 atguigu atguigu 18 3月 20 10:17 .cache
  225. drwxrwxr-x 3 atguigu atguigu 18 3月 20 10:17 .config
  226. -rw------- 1 atguigu atguigu 557 3月 20 19:17 .viminfo
  227. [atguigu@node2 ~]$
  228. 连接断开
  229. 连接成功
  230. Last login: Mon Mar 20 19:36:35 2023 from 192.168.88.1
  231. [root@node2 ~]# cd /home/atguigu/.ssh/
  232. 您在 /var/spool/mail/root 中有新邮件
  233. [root@node2 .ssh]# ll
  234. 总用量 4
  235. -rw------- 1 atguigu atguigu 405 3月 20 19:43 authorized_keys
  236. [root@node2 .ssh]# cat authorized_keys
  237. ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDRrdLn5Bwi+N7B85ggdOWeXNtCBXtj3YkLpky3VcaSW9rnevAISF/3K99wzVVMEM1gfK+Xr6tMopTwQdN+kShhrluUHb31l8qQzg7E+RwT2fPu5M1PJgXzjABHUggS1PVstUk/KH8e+HzJ7pQyPU+FkgZ+U2NsbBiQmgMrfBfHBoPjF7JjTNUKAvtAOo0PQjDxdCsJJDxxBWBsHGmtRi73v6f/sY6zRYBILqC2tb0iSzEe1IGkjqJ4pXO/1+5ZxuFvwLnDsGp1VTqOsqqjsWdBoMoE10LjB46S7hMD5t4elzhbenx/xv5vdT4DSo6P65FVfu4OjG66cfjECOqfxOY9 atguigu@node1.itcast.cn
  238. [root@node2 .ssh]# ssh-keygen -t rsa
  239. Generating public/private rsa key pair.
  240. Enter file in which to save the key (/root/.ssh/id_rsa):
  241. /root/.ssh/id_rsa already exists.
  242. Overwrite (y/n)? y
  243. Enter passphrase (empty for no passphrase):
  244. Enter same passphrase again:
  245. Your identification has been saved in /root/.ssh/id_rsa.
  246. Your public key has been saved in /root/.ssh/id_rsa.pub.
  247. The key fingerprint is:
  248. SHA256:rKXFOBLTEYhuY0iBovwDyguTlvqAZozIMIiAHWhaWyI root@node2.itcast.cn
  249. The key's randomart image is:
  250. +---[RSA 2048]----+
  251. |.oo. .o. |
  252. |E++.o. . |
  253. |X=.+o . |
  254. |Bo* o + |
  255. |B++.. o S |
  256. |%= o . * |
  257. |O*. . o |
  258. |+o |
  259. | .. |
  260. +----[SHA256]-----+
  261. 您在 /var/spool/mail/root 中有新邮件
  262. [root@node2 .ssh]# ssh-copy-id node1
  263. /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
  264. /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
  265. /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
  266. root@node1's password:
  267. Number of key(s) added: 1
  268. Now try logging into the machine, with: "ssh 'node1'"
  269. and check to make sure that only the key(s) you wanted were added.
  270. [root@node2 .ssh]# ll
  271. 总用量 4
  272. -rw------- 1 atguigu atguigu 405 3月 20 19:43 authorized_keys
  273. [root@node2 .ssh]# ssh-copy-id node3
  274. /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
  275. /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
  276. /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
  277. root@node3's password:
  278. Number of key(s) added: 1
  279. Now try logging into the machine, with: "ssh 'node3'"
  280. and check to make sure that only the key(s) you wanted were added.
  281. [root@node2 .ssh]# ll
  282. 总用量 4
  283. -rw------- 1 atguigu atguigu 810 3月 20 19:51 authorized_keys
  284. 您在 /var/spool/mail/root 中有新邮件
  285. [root@node2 .ssh]# ssh node2
  286. root@node2's password:
  287. Last login: Mon Mar 20 19:38:58 2023 from 192.168.88.1
  288. [root@node2 ~]# ls -al
  289. 总用量 56
  290. dr-xr-x---. 7 root root 4096 3月 20 19:18 .
  291. dr-xr-xr-x. 18 root root 258 10月 26 2021 ..
  292. -rw-r--r-- 1 root root 4 2月 22 11:10 111.txt
  293. -rw-r--r-- 1 root root 2 2月 22 11:08 1.txt
  294. -rw-r--r-- 1 root root 2 2月 22 11:09 2.txt
  295. -rw-r--r-- 1 root root 2 2月 22 11:09 3.txt
  296. -rw-------. 1 root root 1340 9月 11 2020 anaconda-ks.cfg
  297. -rw-------. 1 root root 3555 3月 20 19:38 .bash_history
  298. -rw-r--r--. 1 root root 18 12月 29 2013 .bash_logout
  299. -rw-r--r--. 1 root root 176 12月 29 2013 .bash_profile
  300. -rw-r--r--. 1 root root 176 12月 29 2013 .bashrc
  301. drwxr-xr-x. 3 root root 18 9月 11 2020 .cache
  302. drwxr-xr-x. 3 root root 18 9月 11 2020 .config
  303. -rw-r--r--. 1 root root 100 12月 29 2013 .cshrc
  304. drwxr-xr-x. 2 root root 40 9月 11 2020 .oracle_jre_usage
  305. drwxr----- 3 root root 19 3月 20 10:05 .pki
  306. drwx------. 2 root root 80 3月 20 19:49 .ssh
  307. -rw-r--r--. 1 root root 129 12月 29 2013 .tcshrc
  308. -rw-r--r-- 1 root root 0 3月 13 19:40 test.txt
  309. -rw------- 1 root root 4620 3月 20 19:18 .viminfo
  310. [root@node2 ~]# pwd
  311. /root
  312. [root@node2 ~]# cd .ssh
  313. 您在 /var/spool/mail/root 中有新邮件
  314. [root@node2 .ssh]# ll
  315. 总用量 16
  316. -rw-------. 1 root root 402 9月 11 2020 authorized_keys
  317. -rw-------. 1 root root 1679 3月 20 19:48 id_rsa
  318. -rw-r--r--. 1 root root 402 3月 20 19:48 id_rsa.pub
  319. -rw-r--r--. 1 root root 1254 3月 20 09:25 known_hosts
  320. [root@node2 .ssh]# su atguigu
  321. [atguigu@node2 .ssh]$ cd ~
  322. [atguigu@node2 ~]$ ll
  323. 总用量 0
  324. drwxrwxr-x 2 atguigu atguigu 19 3月 20 15:56 bin
  325. [atguigu@node2 ~]$ ll
  326. 总用量 4
  327. -rw-r--r-- 1 atguigu atguigu 3 3月 20 19:59 a.txt
  328. drwxrwxr-x 2 atguigu atguigu 19 3月 20 15:56 bin
  329. 您在 /var/spool/mail/root 中有新邮件
  330. [atguigu@node2 ~]$
  331. ----------------------------------------------------------------------------------------
  332. 连接成功
  333. Last login: Mon Mar 20 19:14:48 2023 from 192.168.88.1
  334. [root@node3 ~]# su atguigu
  335. [atguigu@node3 root]$ cd ~
  336. [atguigu@node3 ~]$ pwd
  337. /home/atguigu
  338. [atguigu@node3 ~]$ ls -al
  339. 总用量 16
  340. drwx------ 5 atguigu atguigu 123 3月 20 17:25 .
  341. drwxr-xr-x. 3 root root 21 3月 20 10:08 ..
  342. -rw------- 1 atguigu atguigu 163 3月 20 19:36 .bash_history
  343. -rw-r--r-- 1 atguigu atguigu 18 8月 8 2019 .bash_logout
  344. -rw-r--r-- 1 atguigu atguigu 193 8月 8 2019 .bash_profile
  345. -rw-r--r-- 1 atguigu atguigu 231 8月 8 2019 .bashrc
  346. drwxrwxr-x 2 atguigu atguigu 19 3月 20 15:56 bin
  347. drwxrwxr-x 3 atguigu atguigu 18 3月 20 10:18 .cache
  348. drwxrwxr-x 3 atguigu atguigu 18 3月 20 10:18 .config
  349. [atguigu@node3 ~]$ cd /home/atguigu/.ssh/
  350. 您在 /var/spool/mail/root 中有新邮件
  351. [atguigu@node3 .ssh]$ ll
  352. 总用量 4
  353. -rw------- 1 atguigu atguigu 405 3月 20 19:44 authorized_keys
  354. [atguigu@node3 .ssh]$ cat authorized_keys
  355. ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDRrdLn5Bwi+N7B85ggdOWeXNtCBXtj3YkLpky3VcaSW9rnevAISF/3K99wzVVMEM1gfK+Xr6tMopTwQdN+kShhrluUHb31l8qQzg7E+RwT2fPu5M1PJgXzjABHUggS1PVstUk/KH8e+HzJ7pQyPU+FkgZ+U2NsbBiQmgMrfBfHBoPjF7JjTNUKAvtAOo0PQjDxdCsJJDxxBWBsHGmtRi73v6f/sY6zRYBILqC2tb0iSzEe1IGkjqJ4pXO/1+5ZxuFvwLnDsGp1VTqOsqqjsWdBoMoE10LjB46S7hMD5t4elzhbenx/xv5vdT4DSo6P65FVfu4OjG66cfjECOqfxOY9 atguigu@node1.itcast.cn
  356. [atguigu@node3 .ssh]$ ssh-keygen -t rsa
  357. Generating public/private rsa key pair.
  358. Enter file in which to save the key (/home/atguigu/.ssh/id_rsa):
  359. Enter passphrase (empty for no passphrase):
  360. Enter same passphrase again:
  361. Your identification has been saved in /home/atguigu/.ssh/id_rsa.
  362. Your public key has been saved in /home/atguigu/.ssh/id_rsa.pub.
  363. The key fingerprint is:
  364. SHA256:UXniCTC0jqCGYUsYfBRoUBrlaei8V6dWx7lAvRypEko atguigu@node3.itcast.cn
  365. The key's randomart image is:
  366. +---[RSA 2048]----+
  367. |*o=o..+. .. |
  368. |.X o oo.+ . |
  369. |*o*E ....* + |
  370. |*+o..oo +.* |
  371. |o= ..o.=S* |
  372. |. . . = o . |
  373. | . . o . |
  374. | . . |
  375. | |
  376. +----[SHA256]-----+
  377. 您在 /var/spool/mail/root 中有新邮件
  378. [atguigu@node3 .ssh]$ ssh-copy-id node1
  379. /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/atguigu/.ssh/id_rsa.pub"
  380. The authenticity of host 'node1 (192.168.88.151)' can't be established.
  381. ECDSA key fingerprint is SHA256:+eLT3FrOEuEsxBxjOd89raPi/ChJz26WGAfqBpz/KEk.
  382. ECDSA key fingerprint is MD5:18:42:ad:0f:2b:97:d8:b5:68:14:6a:98:e9:72:db:bb.
  383. Are you sure you want to continue connecting (yes/no)?
  384. /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
  385. The authenticity of host 'node1 (192.168.88.151)' can't be established.
  386. ECDSA key fingerprint is SHA256:+eLT3FrOEuEsxBxjOd89raPi/ChJz26WGAfqBpz/KEk.
  387. ECDSA key fingerprint is MD5:18:42:ad:0f:2b:97:d8:b5:68:14:6a:98:e9:72:db:bb.
  388. Are you sure you want to continue connecting (yes/no)? yes
  389. /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
  390. atguigu@node1's password:
  391. Number of key(s) added: 1
  392. Now try logging into the machine, with: "ssh 'node1'"
  393. and check to make sure that only the key(s) you wanted were added.
  394. [atguigu@node3 .ssh]$ ssh-copy-id node2
  395. /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/atguigu/.ssh/id_rsa.pub"
  396. The authenticity of host 'node2 (192.168.88.152)' can't be established.
  397. ECDSA key fingerprint is SHA256:+eLT3FrOEuEsxBxjOd89raPi/ChJz26WGAfqBpz/KEk.
  398. ECDSA key fingerprint is MD5:18:42:ad:0f:2b:97:d8:b5:68:14:6a:98:e9:72:db:bb.
  399. Are you sure you want to continue connecting (yes/no)? yes
  400. /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
  401. /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
  402. atguigu@node2's password:
  403. Number of key(s) added: 1
  404. Now try logging into the machine, with: "ssh 'node2'"
  405. and check to make sure that only the key(s) you wanted were added.
  406. [atguigu@node3 .ssh]$ ll
  407. 总用量 16
  408. -rw------- 1 atguigu atguigu 405 3月 20 19:44 authorized_keys
  409. -rw------- 1 atguigu atguigu 1675 3月 20 19:50 id_rsa
  410. -rw-r--r-- 1 atguigu atguigu 405 3月 20 19:50 id_rsa.pub
  411. -rw-r--r-- 1 atguigu atguigu 364 3月 20 19:51 known_hosts
  412. 您在 /var/spool/mail/root 中有新邮件
  413. [atguigu@node3 .ssh]$ cd ~
  414. 您在 /var/spool/mail/root 中有新邮件
  415. [atguigu@node3 ~]$ ll
  416. 总用量 0
  417. drwxrwxr-x 2 atguigu atguigu 19 3月 20 15:56 bin
  418. [atguigu@node3 ~]$ ll
  419. 总用量 4
  420. -rw-r--r-- 1 atguigu atguigu 3 3月 20 19:59 a.txt
  421. drwxrwxr-x 2 atguigu atguigu 19 3月 20 15:56 bin
  422. 您在 /var/spool/mail/root 中有新邮件
  423. [atguigu@node3 ~]$ cd /home/atguigu
  424. 您在 /var/spool/mail/root 中有新邮件
  425. [atguigu@node3 ~]$ ll
  426. 总用量 4
  427. -rw-r--r-- 1 atguigu atguigu 3 3月 20 19:59 a.txt
  428. drwxrwxr-x 2 atguigu atguigu 19 3月 20 15:56 bin
  429. [atguigu@node3 ~]$
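The interactive steps in the transcript above (ssh-keygen on each node, then ssh-copy-id to every host, including the local one) can be sketched as a small script. This is a minimal sketch, assuming the three hosts used in this section; it only defines a helper, it does not run anything by itself:

```shell
#!/bin/bash
# Passwordless-SSH setup sketch for the transcript above.
# Assumption: the three hosts used in this section.
HOSTS="node1 node2 node3"

setup_ssh_keys() {
  # Generate an RSA key pair only if none exists yet (the log shows
  # the "already exists. Overwrite?" prompt when one does).
  [ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -N "" -f "$HOME/.ssh/id_rsa"
  for host in $HOSTS; do
    # Append id_rsa.pub to the remote authorized_keys. Note the local
    # host is included: "ssh node1" from node1 also needs the key.
    ssh-copy-id "$host"
  done
}
```

As the transcript shows, this has to be repeated per user (atguigu and root) and per node, since each user on each machine has its own key pair.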

P030【030_尚硅谷_Hadoop_入门_集群配置】13:24

Note:

  • Do not install the NameNode and the SecondaryNameNode on the same server.
  • The ResourceManager is also memory-hungry; do not place it on the same machine as the NameNode or the SecondaryNameNode.

          hadoop102                hadoop103                        hadoop104

HDFS      NameNode, DataNode       DataNode                         SecondaryNameNode, DataNode

YARN      NodeManager              ResourceManager, NodeManager     NodeManager

Default file                Location inside the Hadoop jars

[core-default.xml]          hadoop-common-3.1.3.jar/core-default.xml

[hdfs-default.xml]          hadoop-hdfs-3.1.3.jar/hdfs-default.xml

[yarn-default.xml]          hadoop-yarn-common-3.1.3.jar/yarn-default.xml

[mapred-default.xml]        hadoop-mapreduce-client-core-3.1.3.jar/mapred-default.xml
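Site-specific settings override these defaults in the corresponding `*-site.xml` files. A minimal core-site.xml sketch, assuming the hadoop102 host name and install path used in this tutorial's layout:

```xml
<configuration>
    <!-- Address of the NameNode (assumed host/port from this setup) -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop102:8020</value>
    </property>
    <!-- Base directory for Hadoop's data files -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/module/hadoop-3.1.3/data</value>
    </property>
</configuration>
```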

P031【031_尚硅谷_Hadoop_入门_群起集群并测试】16:52

Troubleshooting: the DataNode fails to start after the cluster is started.

[atguigu@hadoop102 hadoop-3.1.3]$ sbin/start-dfs.sh

[atguigu@hadoop103 hadoop-3.1.3]$ sbin/start-yarn.sh

YARN: resource scheduling.
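The two start commands above can be wrapped in one helper. A sketch, assuming the hadoop102/hadoop103 layout from the table in P030 (start-dfs.sh runs on the NameNode host, start-yarn.sh on the ResourceManager host):

```shell
#!/bin/bash
# Sketch: start HDFS and YARN in the order used in this section.
HADOOP_HOME=/opt/module/hadoop-3.1.3

start_cluster() {
  "$HADOOP_HOME/sbin/start-dfs.sh"                   # on hadoop102 (NameNode host)
  ssh hadoop103 "$HADOOP_HOME/sbin/start-yarn.sh"    # on hadoop103 (ResourceManager host)
  jps                                                # verify the local daemons came up
}
```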

  1. 连接成功
  2. Last login: Wed Mar 22 09:16:44 2023
  3. [atguigu@node1 ~]$ cd /opt/module/hadoop-3.1.3
  4. [atguigu@node1 hadoop-3.1.3]$ sbin/start-dfs.sh
  5. Starting namenodes on [node1]
  6. Starting datanodes
  7. Starting secondary namenodes [node3]
  8. [atguigu@node1 hadoop-3.1.3]$ jps
  9. 5619 DataNode
  10. 5398 NameNode
  11. 6647 Jps
  12. 6457 NodeManager
  13. [atguigu@node1 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /wcinput /wcoutput
  14. 2023-03-22 09:22:26,672 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
  15. 2023-03-22 09:22:26,954 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
  16. 2023-03-22 09:22:26,954 INFO impl.MetricsSystemImpl: JobTracker metrics system started
  17. 2023-03-22 09:22:28,713 INFO input.FileInputFormat: Total input files to process : 1
  18. 2023-03-22 09:22:28,764 INFO mapreduce.JobSubmitter: number of splits:1
  19. 2023-03-22 09:22:29,208 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local321834777_0001
  20. 2023-03-22 09:22:29,218 INFO mapreduce.JobSubmitter: Executing with tokens: []
  21. 2023-03-22 09:22:29,515 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
  22. 2023-03-22 09:22:29,520 INFO mapreduce.Job: Running job: job_local321834777_0001
  23. 2023-03-22 09:22:29,525 INFO mapred.LocalJobRunner: OutputCommitter set in config null
  24. 2023-03-22 09:22:29,551 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
  25. 2023-03-22 09:22:29,551 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
  26. 2023-03-22 09:22:29,553 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
  27. 2023-03-22 09:22:29,785 INFO mapred.LocalJobRunner: Waiting for map tasks
  28. 2023-03-22 09:22:29,791 INFO mapred.LocalJobRunner: Starting task: attempt_local321834777_0001_m_000000_0
  29. 2023-03-22 09:22:29,908 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
  30. 2023-03-22 09:22:29,910 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
  31. 2023-03-22 09:22:30,037 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
  32. 2023-03-22 09:22:30,056 INFO mapred.MapTask: Processing split: hdfs://node1:8020/wcinput/word.txt:0+45
  33. 2023-03-22 09:22:30,532 INFO mapreduce.Job: Job job_local321834777_0001 running in uber mode : false
  34. 2023-03-22 09:22:30,547 INFO mapreduce.Job: map 0% reduce 0%
  35. 2023-03-22 09:22:31,234 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
  36. 2023-03-22 09:22:31,235 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
  37. 2023-03-22 09:22:31,235 INFO mapred.MapTask: soft limit at 83886080
  38. 2023-03-22 09:22:31,235 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
  39. 2023-03-22 09:22:31,235 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
  40. 2023-03-22 09:22:31,277 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
  41. 2023-03-22 09:22:31,542 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
  42. 2023-03-22 09:22:36,432 INFO mapred.LocalJobRunner:
  43. 2023-03-22 09:22:36,463 INFO mapred.MapTask: Starting flush of map output
  44. 2023-03-22 09:22:36,463 INFO mapred.MapTask: Spilling map output
  45. 2023-03-22 09:22:36,463 INFO mapred.MapTask: bufstart = 0; bufend = 69; bufvoid = 104857600
  46. 2023-03-22 09:22:36,463 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214376(104857504); length = 21/6553600
  47. 2023-03-22 09:22:36,615 INFO mapred.MapTask: Finished spill 0
  48. 2023-03-22 09:22:36,655 INFO mapred.Task: Task:attempt_local321834777_0001_m_000000_0 is done. And is in the process of committing
  49. 2023-03-22 09:22:36,701 INFO mapred.LocalJobRunner: map
  50. 2023-03-22 09:22:36,701 INFO mapred.Task: Task 'attempt_local321834777_0001_m_000000_0' done.
  51. 2023-03-22 09:22:36,738 INFO mapred.Task: Final Counters for attempt_local321834777_0001_m_000000_0: Counters: 23
  52. File System Counters
  53. FILE: Number of bytes read=316543
  54. FILE: Number of bytes written=822653
  55. FILE: Number of read operations=0
  56. FILE: Number of large read operations=0
  57. FILE: Number of write operations=0
  58. HDFS: Number of bytes read=45
  59. HDFS: Number of bytes written=0
  60. HDFS: Number of read operations=5
  61. HDFS: Number of large read operations=0
  62. HDFS: Number of write operations=1
  63. Map-Reduce Framework
  64. Map input records=4
  65. Map output records=6
  66. Map output bytes=69
  67. Map output materialized bytes=60
  68. Input split bytes=99
  69. Combine input records=6
  70. Combine output records=4
  71. Spilled Records=4
  72. Failed Shuffles=0
  73. Merged Map outputs=0
  74. GC time elapsed (ms)=0
  75. Total committed heap usage (bytes)=271056896
  76. File Input Format Counters
  77. Bytes Read=45
  78. 2023-03-22 09:22:36,739 INFO mapred.LocalJobRunner: Finishing task: attempt_local321834777_0001_m_000000_0
  79. 2023-03-22 09:22:36,810 INFO mapred.LocalJobRunner: map task executor complete.
  80. 2023-03-22 09:22:36,849 INFO mapred.LocalJobRunner: Waiting for reduce tasks
  81. 2023-03-22 09:22:36,876 INFO mapred.LocalJobRunner: Starting task: attempt_local321834777_0001_r_000000_0
  82. 2023-03-22 09:22:37,033 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
  83. 2023-03-22 09:22:37,033 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
  84. 2023-03-22 09:22:37,035 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
  85. 2023-03-22 09:22:37,043 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@4db08ce7
  86. 2023-03-22 09:22:37,046 WARN impl.MetricsSystemImpl: JobTracker metrics system already initialized!
  87. 2023-03-22 09:22:37,178 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=642252800, maxSingleShuffleLimit=160563200, mergeThreshold=423886880, ioSortFactor=10, memToMemMergeOutputsThreshold=10
  88. 2023-03-22 09:22:37,216 INFO reduce.EventFetcher: attempt_local321834777_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
  89. 2023-03-22 09:22:37,376 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local321834777_0001_m_000000_0 decomp: 56 len: 60 to MEMORY
  90. 2023-03-22 09:22:37,409 INFO reduce.InMemoryMapOutput: Read 56 bytes from map-output for attempt_local321834777_0001_m_000000_0
  91. 2023-03-22 09:22:37,421 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 56, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->56
  92. 2023-03-22 09:22:37,457 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
  93. 2023-03-22 09:22:37,460 INFO mapred.LocalJobRunner: 1 / 1 copied.
  94. 2023-03-22 09:22:37,460 INFO reduce.MergeManagerImpl: finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
  95. 2023-03-22 09:22:37,504 INFO mapreduce.Job: map 100% reduce 0%
  96. 2023-03-22 09:22:37,534 INFO mapred.Merger: Merging 1 sorted segments
  97. 2023-03-22 09:22:37,534 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 46 bytes
  98. 2023-03-22 09:22:37,536 INFO reduce.MergeManagerImpl: Merged 1 segments, 56 bytes to disk to satisfy reduce memory limit
  99. 2023-03-22 09:22:37,537 INFO reduce.MergeManagerImpl: Merging 1 files, 60 bytes from disk
  100. 2023-03-22 09:22:37,541 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
  101. 2023-03-22 09:22:37,542 INFO mapred.Merger: Merging 1 sorted segments
  102. 2023-03-22 09:22:37,547 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 46 bytes
  103. 2023-03-22 09:22:37,708 INFO mapred.LocalJobRunner: 1 / 1 copied.
  104. 2023-03-22 09:22:37,831 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
  105. 2023-03-22 09:22:38,001 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
  106. 2023-03-22 09:22:39,978 INFO mapred.Task: Task:attempt_local321834777_0001_r_000000_0 is done. And is in the process of committing
  107. 2023-03-22 09:22:39,989 INFO mapred.LocalJobRunner: 1 / 1 copied.
  108. 2023-03-22 09:22:39,990 INFO mapred.Task: Task attempt_local321834777_0001_r_000000_0 is allowed to commit now
2023-03-22 09:22:40,106 INFO output.FileOutputCommitter: Saved output of task 'attempt_local321834777_0001_r_000000_0' to hdfs://node1:8020/wcoutput
2023-03-22 09:22:40,111 INFO mapred.LocalJobRunner: reduce > reduce
2023-03-22 09:22:40,111 INFO mapred.Task: Task 'attempt_local321834777_0001_r_000000_0' done.
2023-03-22 09:22:40,112 INFO mapred.Task: Final Counters for attempt_local321834777_0001_r_000000_0: Counters: 29
  File System Counters
    FILE: Number of bytes read=316695
    FILE: Number of bytes written=822713
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
    HDFS: Number of bytes read=45
    HDFS: Number of bytes written=38
    HDFS: Number of read operations=10
    HDFS: Number of large read operations=0
    HDFS: Number of write operations=3
  Map-Reduce Framework
    Combine input records=0
    Combine output records=0
    Reduce input groups=4
    Reduce shuffle bytes=60
    Reduce input records=4
    Reduce output records=4
    Spilled Records=4
    Shuffled Maps =1
    Failed Shuffles=0
    Merged Map outputs=1
    GC time elapsed (ms)=159
    Total committed heap usage (bytes)=272105472
  Shuffle Errors
    BAD_ID=0
    CONNECTION=0
    IO_ERROR=0
    WRONG_LENGTH=0
    WRONG_MAP=0
    WRONG_REDUCE=0
  File Output Format Counters
    Bytes Written=38
2023-03-22 09:22:40,112 INFO mapred.LocalJobRunner: Finishing task: attempt_local321834777_0001_r_000000_0
2023-03-22 09:22:40,115 INFO mapred.LocalJobRunner: reduce task executor complete.
2023-03-22 09:22:40,507 INFO mapreduce.Job: map 100% reduce 100%
2023-03-22 09:22:40,507 INFO mapreduce.Job: Job job_local321834777_0001 completed successfully
2023-03-22 09:22:40,529 INFO mapreduce.Job: Counters: 35
  File System Counters
    FILE: Number of bytes read=633238
    FILE: Number of bytes written=1645366
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
    HDFS: Number of bytes read=90
    HDFS: Number of bytes written=38
    HDFS: Number of read operations=15
    HDFS: Number of large read operations=0
    HDFS: Number of write operations=4
  Map-Reduce Framework
    Map input records=4
    Map output records=6
    Map output bytes=69
    Map output materialized bytes=60
    Input split bytes=99
    Combine input records=6
    Combine output records=4
    Reduce input groups=4
    Reduce shuffle bytes=60
    Reduce input records=4
    Reduce output records=4
    Spilled Records=8
    Shuffled Maps =1
    Failed Shuffles=0
    Merged Map outputs=1
    GC time elapsed (ms)=159
    Total committed heap usage (bytes)=543162368
  Shuffle Errors
    BAD_ID=0
    CONNECTION=0
    IO_ERROR=0
    WRONG_LENGTH=0
    WRONG_MAP=0
    WRONG_REDUCE=0
  File Input Format Counters
    Bytes Read=45
  File Output Format Counters
    Bytes Written=38
[atguigu@node1 hadoop-3.1.3]$

P032【032_尚硅谷_Hadoop_入门_集群崩溃处理办法】08:10

To recover a crashed cluster: first stop DFS and YARN (sbin/stop-dfs.sh, sbin/stop-yarn.sh), then delete the data/ (and logs/) directories on every node, and finally reformat the NameNode with hdfs namenode -format before restarting.

[atguigu@node1 hadoop-3.1.3]$ jps
5619 DataNode
5398 NameNode
18967 Jps
6457 NodeManager
[atguigu@node1 hadoop-3.1.3]$ kill -9 5619
[atguigu@node1 hadoop-3.1.3]$ jps
20036 Jps
5398 NameNode
6457 NodeManager
[atguigu@node1 hadoop-3.1.3]$ sbin/stop-dfs.sh
Stopping namenodes on [node1]
Stopping datanodes
Stopping secondary namenodes [node3]
[atguigu@node1 hadoop-3.1.3]$ jps
32126 Jps
[atguigu@node1 hadoop-3.1.3]$
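The recovery procedure above can be sketched as a script. This is only a sketch, assuming a three-node cluster (node1/node2/node3) with passwordless ssh and HADOOP_HOME set on each node; the script name and confirmation flag are hypothetical. Deleting data/ matters because it removes the stale clusterID in the VERSION file that would otherwise stop DataNodes from joining the freshly formatted NameNode.

```shell
#!/usr/bin/env bash
# reset_cluster.sh -- wipe and re-initialize a broken HDFS cluster.
# WARNING: this DESTROYS all data stored in HDFS.

reset_cluster() {
    "$HADOOP_HOME/sbin/stop-yarn.sh"        # stop YARN first
    "$HADOOP_HOME/sbin/stop-dfs.sh"         # then stop HDFS
    for host in node1 node2 node3; do
        # remove data and logs on every node (stale clusterID lives in data/)
        ssh "$host" 'rm -rf "$HADOOP_HOME/data" "$HADOOP_HOME/logs"'
    done
    hdfs namenode -format                   # create a fresh namespace
    "$HADOOP_HOME/sbin/start-dfs.sh"
    "$HADOOP_HOME/sbin/start-yarn.sh"
}

# Refuse to run unless explicitly confirmed, since this erases HDFS.
if [ "$1" = "--yes-i-know" ]; then
    reset_cluster
else
    echo "Usage: reset_cluster.sh --yes-i-know"
fi
```

Run it only as the atguigu user, and only when a clean rebuild is acceptable.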

P033【033_尚硅谷_Hadoop_入门_历史服务器配置】05:26

On node1, start the history server: mapred --daemon start historyserver
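The start command above assumes the history server was configured in mapred-site.xml and the file distributed to all nodes (xsync) beforehand. A minimal sketch of that configuration, using node1 as in this cluster (adjust the hostname to your own):

```xml
<!-- mapred-site.xml: JobHistory server (sketch) -->
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>node1:10020</value>   <!-- internal RPC address -->
</property>
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>node1:19888</value>   <!-- web UI: http://node1:19888/jobhistory -->
</property>
```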

[atguigu@node1 hadoop-3.1.3]$ mapred --daemon start historyserver
[atguigu@node1 hadoop-3.1.3]$ jps
27061 DataNode
37557 NodeManager
42666 JobHistoryServer
26879 NameNode
42815 Jps
[atguigu@node1 hadoop-3.1.3]$ hadoop fs -put wcinput/word.txt /input
2023-03-22 09:58:16,749 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
[atguigu@node1 hadoop-3.1.3]$ hadoop jar share/hadoop/
client/ common/ hdfs/ mapreduce/ tools/ yarn/
[atguigu@node1 hadoop-3.1.3]$ hadoop jar share/hadoop/
client/ common/ hdfs/ mapreduce/ tools/ yarn/
[atguigu@node1 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/
hadoop-mapreduce-client-app-3.1.3.jar hadoop-mapreduce-client-jobclient-3.1.3.jar hadoop-mapreduce-examples-3.1.3.jar
hadoop-mapreduce-client-common-3.1.3.jar hadoop-mapreduce-client-jobclient-3.1.3-tests.jar jdiff/
hadoop-mapreduce-client-core-3.1.3.jar hadoop-mapreduce-client-nativetask-3.1.3.jar lib/
hadoop-mapreduce-client-hs-3.1.3.jar hadoop-mapreduce-client-shuffle-3.1.3.jar lib-examples/
hadoop-mapreduce-client-hs-plugins-3.1.3.jar hadoop-mapreduce-client-uploader-3.1.3.jar sources/
[atguigu@node1 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /input /output
2023-03-22 09:59:43,045 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2023-03-22 09:59:43,486 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2023-03-22 09:59:43,486 INFO impl.MetricsSystemImpl: JobTracker metrics system started
2023-03-22 09:59:45,880 INFO input.FileInputFormat: Total input files to process : 1
2023-03-22 09:59:45,985 INFO mapreduce.JobSubmitter: number of splits:1
2023-03-22 09:59:46,637 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local146698941_0001
2023-03-22 09:59:46,642 INFO mapreduce.JobSubmitter: Executing with tokens: []
2023-03-22 09:59:46,972 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
2023-03-22 09:59:46,974 INFO mapreduce.Job: Running job: job_local146698941_0001
2023-03-22 09:59:47,033 INFO mapred.LocalJobRunner: OutputCommitter set in config null
2023-03-22 09:59:47,054 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2023-03-22 09:59:47,055 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2023-03-22 09:59:47,058 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2023-03-22 09:59:47,181 INFO mapred.LocalJobRunner: Waiting for map tasks
2023-03-22 09:59:47,182 INFO mapred.LocalJobRunner: Starting task: attempt_local146698941_0001_m_000000_0
2023-03-22 09:59:47,251 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2023-03-22 09:59:47,255 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2023-03-22 09:59:47,376 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
2023-03-22 09:59:47,390 INFO mapred.MapTask: Processing split: hdfs://node1:8020/input/word.txt:0+45
2023-03-22 09:59:48,125 INFO mapreduce.Job: Job job_local146698941_0001 running in uber mode : false
2023-03-22 09:59:48,150 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
2023-03-22 09:59:48,150 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
2023-03-22 09:59:48,150 INFO mapred.MapTask: soft limit at 83886080
2023-03-22 09:59:48,150 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
2023-03-22 09:59:48,150 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
2023-03-22 09:59:48,186 INFO mapreduce.Job: map 0% reduce 0%
2023-03-22 09:59:48,202 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2023-03-22 09:59:49,223 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2023-03-22 09:59:50,371 INFO mapred.LocalJobRunner:
2023-03-22 09:59:50,416 INFO mapred.MapTask: Starting flush of map output
2023-03-22 09:59:50,416 INFO mapred.MapTask: Spilling map output
2023-03-22 09:59:50,416 INFO mapred.MapTask: bufstart = 0; bufend = 69; bufvoid = 104857600
2023-03-22 09:59:50,416 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214376(104857504); length = 21/6553600
2023-03-22 09:59:50,543 INFO mapred.MapTask: Finished spill 0
2023-03-22 09:59:50,733 INFO mapred.Task: Task:attempt_local146698941_0001_m_000000_0 is done. And is in the process of committing
2023-03-22 09:59:50,764 INFO mapred.LocalJobRunner: map
2023-03-22 09:59:50,764 INFO mapred.Task: Task 'attempt_local146698941_0001_m_000000_0' done.
2023-03-22 09:59:50,847 INFO mapred.Task: Final Counters for attempt_local146698941_0001_m_000000_0: Counters: 23
  File System Counters
    FILE: Number of bytes read=316541
    FILE: Number of bytes written=822643
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
    HDFS: Number of bytes read=45
    HDFS: Number of bytes written=0
    HDFS: Number of read operations=5
    HDFS: Number of large read operations=0
    HDFS: Number of write operations=1
  Map-Reduce Framework
    Map input records=4
    Map output records=6
    Map output bytes=69
    Map output materialized bytes=60
    Input split bytes=97
    Combine input records=6
    Combine output records=4
    Spilled Records=4
    Failed Shuffles=0
    Merged Map outputs=0
    GC time elapsed (ms)=0
    Total committed heap usage (bytes)=267386880
  File Input Format Counters
    Bytes Read=45
2023-03-22 09:59:50,848 INFO mapred.LocalJobRunner: Finishing task: attempt_local146698941_0001_m_000000_0
2023-03-22 09:59:50,946 INFO mapred.LocalJobRunner: map task executor complete.
2023-03-22 09:59:51,007 INFO mapred.LocalJobRunner: Waiting for reduce tasks
2023-03-22 09:59:51,025 INFO mapred.LocalJobRunner: Starting task: attempt_local146698941_0001_r_000000_0
2023-03-22 09:59:51,156 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2023-03-22 09:59:51,157 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2023-03-22 09:59:51,158 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
2023-03-22 09:59:51,213 INFO mapreduce.Job: map 100% reduce 0%
2023-03-22 09:59:51,226 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@1f0445e7
2023-03-22 09:59:51,238 WARN impl.MetricsSystemImpl: JobTracker metrics system already initialized!
2023-03-22 09:59:51,338 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=642252800, maxSingleShuffleLimit=160563200, mergeThreshold=423886880, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2023-03-22 09:59:51,355 INFO reduce.EventFetcher: attempt_local146698941_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2023-03-22 09:59:51,632 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local146698941_0001_m_000000_0 decomp: 56 len: 60 to MEMORY
2023-03-22 09:59:51,665 INFO reduce.InMemoryMapOutput: Read 56 bytes from map-output for attempt_local146698941_0001_m_000000_0
2023-03-22 09:59:51,675 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 56, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->56
2023-03-22 09:59:51,683 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
2023-03-22 09:59:51,689 INFO mapred.LocalJobRunner: 1 / 1 copied.
2023-03-22 09:59:51,693 INFO reduce.MergeManagerImpl: finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
2023-03-22 09:59:51,715 INFO mapred.Merger: Merging 1 sorted segments
2023-03-22 09:59:51,716 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 46 bytes
2023-03-22 09:59:51,719 INFO reduce.MergeManagerImpl: Merged 1 segments, 56 bytes to disk to satisfy reduce memory limit
2023-03-22 09:59:51,720 INFO reduce.MergeManagerImpl: Merging 1 files, 60 bytes from disk
2023-03-22 09:59:51,725 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
2023-03-22 09:59:51,725 INFO mapred.Merger: Merging 1 sorted segments
2023-03-22 09:59:51,728 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 46 bytes
2023-03-22 09:59:51,729 INFO mapred.LocalJobRunner: 1 / 1 copied.
2023-03-22 09:59:51,867 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
2023-03-22 09:59:52,038 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2023-03-22 09:59:52,284 INFO mapred.Task: Task:attempt_local146698941_0001_r_000000_0 is done. And is in the process of committing
2023-03-22 09:59:52,302 INFO mapred.LocalJobRunner: 1 / 1 copied.
2023-03-22 09:59:52,302 INFO mapred.Task: Task attempt_local146698941_0001_r_000000_0 is allowed to commit now
2023-03-22 09:59:52,339 INFO output.FileOutputCommitter: Saved output of task 'attempt_local146698941_0001_r_000000_0' to hdfs://node1:8020/output
2023-03-22 09:59:52,343 INFO mapred.LocalJobRunner: reduce > reduce
2023-03-22 09:59:52,343 INFO mapred.Task: Task 'attempt_local146698941_0001_r_000000_0' done.
2023-03-22 09:59:52,344 INFO mapred.Task: Final Counters for attempt_local146698941_0001_r_000000_0: Counters: 29
  File System Counters
    FILE: Number of bytes read=316693
    FILE: Number of bytes written=822703
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
    HDFS: Number of bytes read=45
    HDFS: Number of bytes written=38
    HDFS: Number of read operations=10
    HDFS: Number of large read operations=0
    HDFS: Number of write operations=3
  Map-Reduce Framework
    Combine input records=0
    Combine output records=0
    Reduce input groups=4
    Reduce shuffle bytes=60
    Reduce input records=4
    Reduce output records=4
    Spilled Records=4
    Shuffled Maps =1
    Failed Shuffles=0
    Merged Map outputs=1
    GC time elapsed (ms)=101
    Total committed heap usage (bytes)=267386880
  Shuffle Errors
    BAD_ID=0
    CONNECTION=0
    IO_ERROR=0
    WRONG_LENGTH=0
    WRONG_MAP=0
    WRONG_REDUCE=0
  File Output Format Counters
    Bytes Written=38
2023-03-22 09:59:52,344 INFO mapred.LocalJobRunner: Finishing task: attempt_local146698941_0001_r_000000_0
2023-03-22 09:59:52,344 INFO mapred.LocalJobRunner: reduce task executor complete.
2023-03-22 09:59:53,216 INFO mapreduce.Job: map 100% reduce 100%
2023-03-22 09:59:53,219 INFO mapreduce.Job: Job job_local146698941_0001 completed successfully
2023-03-22 09:59:53,267 INFO mapreduce.Job: Counters: 35
  File System Counters
    FILE: Number of bytes read=633234
    FILE: Number of bytes written=1645346
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
    HDFS: Number of bytes read=90
    HDFS: Number of bytes written=38
    HDFS: Number of read operations=15
    HDFS: Number of large read operations=0
    HDFS: Number of write operations=4
  Map-Reduce Framework
    Map input records=4
    Map output records=6
    Map output bytes=69
    Map output materialized bytes=60
    Input split bytes=97
    Combine input records=6
    Combine output records=4
    Reduce input groups=4
    Reduce shuffle bytes=60
    Reduce input records=4
    Reduce output records=4
    Spilled Records=8
    Shuffled Maps =1
    Failed Shuffles=0
    Merged Map outputs=1
    GC time elapsed (ms)=101
    Total committed heap usage (bytes)=534773760
  Shuffle Errors
    BAD_ID=0
    CONNECTION=0
    IO_ERROR=0
    WRONG_LENGTH=0
    WRONG_MAP=0
    WRONG_REDUCE=0
  File Input Format Counters
    Bytes Read=45
  File Output Format Counters
    Bytes Written=38
[atguigu@node1 hadoop-3.1.3]$

P034【034_尚硅谷_Hadoop_入门_日志聚集功能配置】05:42

145 jps
146 mapred --daemon start historyserver
147 jps
148 xsync $HADOOP_HOME/etc/hadoop/yarn-site.xml
149 jps
150 mapred --daemon stop historyserver
151 mapred --daemon start historyserver
152 jps
153 hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /input /output2
154 history
58 jps
61 sbin/start-yarn.sh
62 sbin/stop-yarn.sh
63 sbin/start-yarn.sh
64 history
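The shell history above (xsync of yarn-site.xml, then restarting the history server and YARN) corresponds to enabling log aggregation. A minimal yarn-site.xml sketch; the log-server URL assumes the history server on node1 from the previous section:

```xml
<!-- yarn-site.xml: log aggregation (sketch) -->
<property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>                 <!-- collect container logs into HDFS -->
</property>
<property>
    <name>yarn.log.server.url</name>
    <value>http://node1:19888/jobhistory/logs</value>
</property>
<property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>               <!-- keep aggregated logs for 7 days -->
</property>
```

After distributing the change, restart the NodeManagers, the ResourceManager, and the history server for it to take effect, which is exactly what the stop/start commands in the history do.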

P035【035_尚硅谷_Hadoop_入门_两个常用脚本】09:18

155 cd /home/atguigu/bin
156 ll
157 vim myhadoop.sh
158 chmod +x myhadoop.sh
159 chmod 777 myhadoop.sh
160 ll
161 jps
162 myhadoop.sh stop
163 ./myhadoop.sh stop
164 jps
165 ./myhadoop.sh start
166 jps
167 vim jpsall
168 chmod +x jpsall
169 ll
170 chmod 777 jpsall
171 ll
172 ./jpsall
173 cd ~
174 ll
175 xsync /home/atguigu/bin/
176 history
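The two scripts created above (myhadoop.sh to start/stop the whole cluster, jpsall to list Java processes on every node) can be sketched as below. This is a sketch assuming node1 hosts the NameNode and history server, node2 hosts the ResourceManager, and Hadoop is installed at /opt/module/hadoop-3.1.3; adjust hostnames and paths to your own cluster.

```shell
#!/usr/bin/env bash
# myhadoop.sh -- start/stop HDFS, YARN and the history server across the cluster.
HADOOP=/opt/module/hadoop-3.1.3

start_cluster() {
    echo "=== starting hadoop cluster ==="
    ssh node1 "$HADOOP/sbin/start-dfs.sh"                          # HDFS on node1
    ssh node2 "$HADOOP/sbin/start-yarn.sh"                         # YARN on node2
    ssh node1 "$HADOOP/bin/mapred --daemon start historyserver"    # history server
}

stop_cluster() {
    echo "=== stopping hadoop cluster ==="
    ssh node1 "$HADOOP/bin/mapred --daemon stop historyserver"
    ssh node2 "$HADOOP/sbin/stop-yarn.sh"
    ssh node1 "$HADOOP/sbin/stop-dfs.sh"
}

jpsall() {
    # show the Java processes running on every node
    for host in node1 node2 node3; do
        echo "=== $host ==="
        ssh "$host" jps
    done
}

case "$1" in
    start) start_cluster ;;
    stop)  stop_cluster ;;
    jps)   jpsall ;;
    *)     echo "Usage: myhadoop.sh {start|stop|jps}" ;;
esac
```

Put the script in /home/atguigu/bin (already on the PATH), make it executable with chmod +x, and xsync it to the other nodes, as the history above shows.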

P036【036_尚硅谷_Hadoop_入门_两道面试题】04:15

Common port numbers:

| Port name | Hadoop 2.x | Hadoop 3.x |
| --- | --- | --- |
| NameNode internal communication (RPC) | 8020 / 9000 | 8020 / 9000 / 9820 |
| NameNode HTTP UI | 50070 | 9870 |
| YARN web UI for viewing MapReduce jobs | 8088 | 8088 |
| History server web UI | 19888 | 19888 |

P037【037_尚硅谷_Hadoop_入门_集群时间同步】11:27

Many companies develop on isolated networks with no internet access, which is why cluster time synchronization matters.

Do NOT follow along with the commands in this video — just watch!!!

P038【038_尚硅谷_Hadoop_入门_常见问题总结】10:57

Do not mix the root and atguigu users when starting the cluster; start everything as atguigu only.

Files in HDFS are stored as raw binary blocks, so they can simply be split at byte boundaries.
