Everyone has stepped into countless Hadoop pitfalls, and all I wanted was to debug a program on Windows. Why is that so hard?
Installing Hadoop correctly is the key to being able to debug locally.
Download address: http://archive.apache.org/dist/hadoop/core/
I chose version 2.7.1.
Configure the environment variables
Set HADOOP_HOME to the unpacked Hadoop directory and add %HADOOP_HOME%\bin to Path (the usual setup). Then run hadoop version on the command line to verify; it may report an error.
If you see the following error: The system cannot find the batch label specified - make_command_arguments (the message may also appear in Chinese).
Fix: a common cause is that the .cmd scripts under %HADOOP_HOME%\bin were saved with Unix (LF) line endings, which prevents cmd.exe from resolving batch labels. Re-saving hadoop.cmd with Windows (CRLF) line endings usually resolves it.
Next, create a Maven project and add the Hadoop dependencies to pom.xml:

```xml
<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.7.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.7.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>2.7.1</version>
    </dependency>
</dependencies>
```
map
```java
import org.apache.commons.lang.StringUtils;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class WcMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        System.out.println("--->Map-->" + Thread.currentThread().getName());
        String[] words = StringUtils.split(value.toString(), ' ');
        for (String w : words) {
            context.write(new Text(w), new IntWritable(1));
        }
    }
}
```
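To see what the map step produces before involving any Hadoop classes, here is a minimal plain-Java sketch (the class name WcMapperSketch is hypothetical, and java.lang String.split stands in for StringUtils.split): each line is split on spaces and one (word, 1) pair is emitted per token, mirroring the context.write call above.

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class WcMapperSketch {
    // Split one input line on spaces and emit a (word, 1) pair per token
    public static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String w : line.split(" ")) {
            if (!w.isEmpty()) {  // skip empty tokens from repeated spaces
                pairs.add(new AbstractMap.SimpleEntry<>(w, 1));
            }
        }
        return pairs;
    }

    public static void main(String[] args) {
        // Each occurrence of "hello" produces its own (hello, 1) pair
        System.out.println(map("hello world hello"));
    }
}
```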
reduce
```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

public class WcReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        System.out.println("--->Reducer-->" + Thread.currentThread().getName());
        int sum = 0;
        for (IntWritable i : values) {
            sum += i.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
```
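The reduce step is just a sum: after the shuffle phase, the reducer receives, for one key, every 1 the mappers emitted for that word. A plain-Java sketch (hypothetical WcReducerSketch class, not part of the job):

```java
import java.util.Arrays;

public class WcReducerSketch {
    // Sum all the counts the shuffle phase delivered for a single word
    public static int reduce(Iterable<Integer> values) {
        int sum = 0;
        for (int v : values) {
            sum += v;
        }
        return sum;
    }

    public static void main(String[] args) {
        // "hello" emitted three times by the mappers -> total count 3
        System.out.println(reduce(Arrays.asList(1, 1, 1)));
    }
}
```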
Driver program
```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class RunWcJob {
    public static void main(String[] args) throws Exception {
        // Create the job instance for this MapReduce program
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        // Main class of this job
        job.setJarByClass(RunWcJob.class);

        // Concrete mapper and reducer implementations for this job
        job.setMapperClass(WcMapper.class);
        job.setReducerClass(WcReducer.class);

        // Output data types of the map phase
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        // Output data types of the reduce phase, i.e. the final output of the whole job
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // Directory of the input data and directory where the results are written
        FileInputFormat.setInputPaths(job, "D:\\hadoop\\input");
        FileOutputFormat.setOutputPath(job, new Path("D:\\hadoop\\output"));

        // Submit the job and wait for it to finish
        boolean b = job.waitForCompletion(true);
        System.exit(b ? 0 : 1);
    }
}
```
Then create words.txt under the local folder D:\hadoop\input, using the input content given above. The results likewise go to the output folder, which must not exist beforehand (Hadoop refuses to overwrite an existing output directory). Now run the program directly:
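Because the output folder must not exist, re-running the job fails until you delete D:\hadoop\output by hand. A small helper can wipe it before each run; this sketch uses plain java.nio (the class name CleanOutputDir is hypothetical, not a Hadoop API):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Comparator;
import java.util.stream.Stream;

public class CleanOutputDir {
    // Recursively delete a directory so the next job run starts clean;
    // a no-op if the directory does not exist
    public static void deleteRecursively(Path dir) throws IOException {
        if (!Files.exists(dir)) {
            return;
        }
        try (Stream<Path> paths = Files.walk(dir)) {
            // Reverse order deletes files before their parent directories
            paths.sorted(Comparator.reverseOrder())
                 .forEach(p -> p.toFile().delete());
        }
    }

    public static void main(String[] args) throws IOException {
        deleteRecursively(Paths.get("D:\\hadoop\\output"));
    }
}
```

Call deleteRecursively at the top of the driver's main method, before FileOutputFormat.setOutputPath, to avoid the "output directory already exists" failure.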
A likely error at this point is the job failing because winutils.exe cannot be found (typically reported as: Could not locate executable null\bin\winutils.exe in the Hadoop binaries). Hadoop on Windows needs winutils.exe, which is not shipped in the Apache tarball.
Download address: https://github.com/steveloughran/winutils
Make sure to pick the build matching your Hadoop version, and place winutils.exe under %HADOOP_HOME%\bin.