
Developing a Hadoop 2.x Map/Reduce Project in Eclipse



This article demonstrates how to develop a Map/Reduce project in Eclipse.

1. Environment
  • Hadoop 2.2.0
  • Eclipse Juno SR2
  • Hadoop2.x-eclipse-plugin; for building, installing, and configuring the plugin, see: http://www.micmiu.com/bigdata/hadoop/hadoop2-x-eclipse-plugin-build-install/

2. Create a new MR project
Click File → New → Other..., select "Map/Reduce Project", enter the project name micmiu_MRDemo, and create the project.
[Figures: eclipse-mr-01, eclipse-mr-02]

3. Create the Mapper and Reducer
Click File → New → Other... and select Mapper; the generated class extends Mapper automatically.
[Figures: eclipse-mr-03, eclipse-mr-04]
A Reducer is created the same way; fill in your own business logic in the generated classes. This article tests with the WordCount example that ships with Hadoop:
package com.micmiu.mr;
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
public class WordCount {
  public static class TokenizerMapper 
       extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();
    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }
  public static class IntSumReducer 
       extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();
    public void reduce(Text key, Iterable<IntWritable> values, 
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
      System.err.println("Usage: wordcount <in> <out>");
      System.exit(2);
    }
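    // Optional: explicitly point the MR client at a remote HDFS NameNode: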
    //conf.set("fs.defaultFS", "hdfs://192.168.6.77:9000");
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
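Note: the "new Job(conf, ...)" constructor used above is deprecated in Hadoop 2.x; Job.getInstance is the preferred replacement. A minimal sketch of the same setup using it (behavior unchanged):

Configuration conf = new Configuration();
// Hadoop 2.x style: obtain the Job via the static factory method
// rather than the deprecated constructor.
Job job = Job.getInstance(conf, "word count");
job.setJarByClass(WordCount.class);
// ... the remaining setMapperClass/setCombinerClass/setReducerClass
// and input/output path calls are identical to the code above.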
4. Prepare test data

micmiu-01.txt:
Hi Michael welcome to Hadoop 
more see micmiu.com
micmiu-02.txt:
Hi Michael welcome to BigData
more see micmiu.com
micmiu-03.txt:
Hi Michael welcome to Spark 
more see micmiu.com
Upload the three micmiu-* files to HDFS:
micmiu-mbp:Downloads micmiu$ hdfs dfs -copyFromLocal micmiu-*.txt /user/micmiu/test/input
micmiu-mbp:Downloads micmiu$ hdfs dfs -ls /user/micmiu/test/input
Found 3 items
-rw-r--r--   1 micmiu supergroup         50 2014-04-15 14:53 /user/micmiu/test/input/micmiu-01.txt
-rw-r--r--   1 micmiu supergroup         50 2014-04-15 14:53 /user/micmiu/test/input/micmiu-02.txt
-rw-r--r--   1 micmiu supergroup         49 2014-04-15 14:53 /user/micmiu/test/input/micmiu-03.txt
micmiu-mbp:Downloads micmiu$
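For reference, given WordCount's default whitespace tokenization, these three files should produce the following counts (keys appear in Text's raw byte order, so capitalized words sort first):

BigData	1
Hadoop	1
Hi	3
Michael	3
Spark	1
micmiu.com	3
more	3
see	3
to	3
welcome	3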
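WordCount expects exactly two program arguments: the HDFS input directory and an output directory that must not yet exist (FileOutputFormat fails the job if it does). For this demo the pair could look like the line below; the output path is a hypothetical choice here, and depending on how fs.defaultFS is resolved inside Eclipse you may need full hdfs:// URIs instead:

/user/micmiu/test/input /user/micmiu/test/output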
5. Configure run arguments
Run As → Run Configurations…, then enter the program arguments on the Arguments tab, i.e. the input and output paths described above:
[Figure: eclipse-mr-05]

6. Run
Run As → Run on Hadoop. After the job completes you should see output like the following:
[Figure: eclipse-mr-06]

With that, the demo of running an MR job from Eclipse against Hadoop 2.x in local pseudo-distributed mode has succeeded.

PS: Submitting the MR job to a cluster environment kept failing; I have not found the cause yet.

—————– EOF @Michael Sun —————–
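Update: for the cluster case mentioned in the PS, a common fallback is to export the project as a jar and submit it with the hadoop CLI instead of launching it from Eclipse, for example (micmiu_MRDemo.jar is a hypothetical jar name):

hadoop jar micmiu_MRDemo.jar com.micmiu.mr.WordCount /user/micmiu/test/input /user/micmiu/test/output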