Create a new test.txt file under /home/hzb:
vim test.txt
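The file contents are not shown in the original. Here is one hypothetical test.txt that is consistent with the word counts printed in step 6 (spark 3, itcast 3, hadoop 2, scala 1, heima 1):

```
hadoop spark itcast
spark itcast heima
hadoop itcast spark scala
```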
1. Create an RDD by loading data from the local Linux file system
- scala> val lines=sc.textFile("file:///home/hzb/test.txt")
- lines: org.apache.spark.rdd.RDD[String] = file:///home/hzb/test.txt MapPartitionsRDD[1] at textFile at <console>:24
2. Use the flatMap transformation to split the file contents into individual words
- scala> val words=lines.flatMap(line=>line.split(" "))
- words: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[2] at flatMap at <console>:25
3. Use the map transformation to pair each word with an initial count of 1, returning a new dataset
- scala> val wordAndOne=words.map(word=>(word,1))
- wordAndOne: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[3] at map at <console>:25
4. Use the reduceByKey transformation to group the words by key and sum their counts
- scala> val wordCount=wordAndOne.reduceByKey((a,b)=>a+b)
- wordCount: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[4] at reduceByKey at <console>:25
5. Print wordCount
- scala> wordCount.foreach(println)
6. Word-frequency results
- (scala,1)
- (spark,3)
- (itcast,3)
- (hadoop,2)
- (heima,1)
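The flatMap → map → reduceByKey pipeline above can be sketched without a Spark cluster using plain Scala collections, where `groupBy` plus a per-group sum plays the role of reduceByKey's shuffle. This is a minimal illustrative sketch, not the Spark API itself; the sample lines are a hypothetical input chosen to match the counts in step 6:

```scala
// Spark-free sketch of the word-count pipeline, using plain Scala
// collections. The RDD steps map one-to-one onto collection methods:
// flatMap -> flatMap, map -> map, reduceByKey -> groupBy + sum.
object WordCountSketch {
  def wordCount(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(line => line.split(" "))   // split each line into words
      .map(word => (word, 1))             // pair each word with a 1
      .groupBy(_._1)                      // group pairs by word
      .map { case (word, pairs) => (word, pairs.map(_._2).sum) } // sum the 1s

  def main(args: Array[String]): Unit = {
    // Hypothetical input, consistent with the results in step 6
    val lines = Seq(
      "hadoop spark itcast",
      "spark itcast heima",
      "hadoop itcast spark scala"
    )
    wordCount(lines).foreach(println)
  }
}
```

Unlike groupByKey followed by a sum, reduceByKey in real Spark combines values on each partition before the shuffle, which is why the tutorial's code is the preferred form for aggregations.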