Writing Your First Hadoop Program with IntelliJ IDEA + Maven

1. Create a new Maven project in IntelliJ

Click File -> New -> Project, select Maven in the dialog, choose the JDK version you have installed, and click Next.

 

2. Fill in the Maven GroupId and ArtifactId

You can fill these in however suits your project; click Next.

This gives you an empty project.

Here the program is named WordCount; it is the standard example found all over the web, which counts how many times each word appears in a file.

 

3. Set the compiler target version

Open IntelliJ's Preferences and go to Build, Execution, Deployment -> Compiler -> Java Compiler,

then change the Target bytecode version for WordCount to your JDK version (mine is 1.8).

 

4. Configure dependencies

Edit pom.xml to configure them.

1) Add the Apache repository

Add the following near the end of the project element:

```xml
<repositories>
    <repository>
        <id>apache</id>
        <url>http://maven.apache.org</url>
    </repository>
</repositories>
```

 

2) Add the Hadoop dependencies

Only the basic dependencies hadoop-core and hadoop-common are needed here. If you need to read and write HDFS, also add hadoop-hdfs and hadoop-client; if you need to read and write HBase, also add hbase-client.

Add the following near the end of the project element:

```xml
<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-core</artifactId>
        <version>1.2.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.7.2</version>
    </dependency>
</dependencies>
```
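If you do need the HDFS dependencies mentioned above, they follow the same pattern. A sketch, assuming the 2.7.2 version is kept in line with hadoop-common:

```xml
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.7.2</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.7.2</version>
</dependency>
```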

After you save pom.xml, IntelliJ shows a "Maven projects need to be imported" prompt in the top-right corner; click Import Changes to update the dependencies, or click Enable Auto-Import.

 

Finally, my complete pom.xml looks like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.fun</groupId>
    <artifactId>hadoop</artifactId>
    <version>1.0-SNAPSHOT</version>

    <repositories>
        <repository>
            <id>apache</id>
            <url>http://maven.apache.org</url>
        </repository>
    </repositories>

    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-core</artifactId>
            <version>1.2.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.7.2</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-dependency-plugin</artifactId>
                <configuration>
                    <excludeTransitive>false</excludeTransitive>
                    <stripVersion>true</stripVersion>
                    <outputDirectory>./lib</outputDirectory>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
```

 

5. Write the main program

WordCount.java:

```java
/**
 * Created by jinshilin on 16/12/7.
 */

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```
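The map/reduce pipeline above boils down to tokenize-then-sum: the mapper emits (word, 1) pairs and the reducer adds up the 1s per word. As a sanity check, that core logic can be sketched in plain Java with no Hadoop at all (WordCountSketch is just an illustration, not part of the job):

```java
import java.util.Map;
import java.util.StringTokenizer;
import java.util.TreeMap;

public class WordCountSketch {
    // In-memory equivalent of the map phase (emit (word, 1)) followed by
    // the reduce phase (sum the 1s for each word).
    static Map<String, Integer> count(String text) {
        Map<String, Integer> counts = new TreeMap<>(); // sorted keys, like job output
        StringTokenizer itr = new StringTokenizer(text);
        while (itr.hasMoreTokens()) {
            counts.merge(itr.nextToken(), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // prints {aadads=1, dfdfadgdgag=2}
        System.out.println(count("dfdfadgdgag aadads dfdfadgdgag"));
    }
}
```

Hadoop distributes exactly this computation: mappers run on splits of the input, and the framework groups the pairs by key before the reducers sum them.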

 

6. Set up the input and output folders

1) Add an input folder at the same level as the src directory

Place one or more input source files in the input folder.

My input file looks like this:

test.segmented:

```
dfdfadgdgag
aadads
fudflcl
cckcer
fadf
dfdfadgdgag
fudflcl
fuck
fuck
fuckfuck
haha
aaa
```

 

2) Configure the run parameters
In the IntelliJ menu choose Run -> Edit Configurations, click + in the dialog, and create a new Application configuration. Set Main class to WordCount (you can click ... on the right to pick it),

and set Program arguments to input/ output/, i.e. the input path is the input folder created above and the results go to output.

Hadoop refuses to write into an existing output directory, so you must delete the output folder before every re-run!
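When running locally, the cleanup can be automated with a small JDK-only helper. CleanOutput below is a hypothetical helper, not part of the project above; the path "output" matches the program argument, so adjust it if yours differs:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Comparator;

public class CleanOutput {
    // Recursively delete a directory tree, if it exists.
    static void deleteRecursively(Path dir) throws IOException {
        if (!Files.exists(dir)) {
            return;
        }
        // Walk depth-first (reverse order) so children are deleted before parents.
        Files.walk(dir)
             .sorted(Comparator.reverseOrder())
             .forEach(p -> p.toFile().delete());
    }

    public static void main(String[] args) throws IOException {
        deleteRecursively(Paths.get("output"));
    }
}
```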

Now run the program; the result is:

```
aaa 1
aadads 1
cckcer 1
dfdfadgdgag 2
fadf 1
fuck 2
fuckfuck 1
fudflcl 2
haha 1
```

 

And that's it: a simple Hadoop program, done!

