Custom Hadoop Writable Data Types
There may be situations where none of the built-in data types meets your business needs, or where an optimized custom data type can perform better than Hadoop's built-in types. In such cases, we can easily write a custom Writable data type by implementing the org.apache.hadoop.io.Writable interface.
Case Description
Below we implement a Hadoop Writable data type for HTTP server log entries. Here we assume that a log entry consists of five parts: request host, timestamp, request URL, response size, and HTTP status code. For example:
199.72.81.55 - - [01/Jul/1995:00:00:01 -0400] "GET /history/apollo/ HTTP/1.0" 200 6245
Where:
- 199.72.81.55: the client's IP address (request host)
- 01/Jul/1995:00:00:01 -0400: the time of access
- GET: the HTTP method, GET or POST
- /history/apollo/: the URL requested by the client
- 200: the HTTP response status code
- 6245: the size of the response content
Requirement: implement a custom Hadoop Writable data type for HTTP server log entries.
Approach
If a data type is to be used as the value type of a MapReduce computation, it must implement the org.apache.hadoop.io.Writable interface. The Writable interface defines how Hadoop serializes and deserializes the value when it is transmitted and stored.
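For reference, the Writable interface itself declares only two methods. The following is a simplified sketch of the shape of the Hadoop API (the real interface lives in the hadoop-common module):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

public interface Writable {
    // serialize the fields of this object to the output stream
    void write(DataOutput out) throws IOException;
    // deserialize the fields of this object from the input stream
    void readFields(DataInput in) throws IOException;
}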
I. Create a Java Maven project
Maven dependencies:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.xueai8</groupId>
    <artifactId>HadoopDemo</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <!-- Hadoop common dependency -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>3.3.1</version>
        </dependency>
        <!-- HDFS file system dependency -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>3.3.1</version>
        </dependency>
        <!-- MapReduce dependencies -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-core</artifactId>
            <version>3.3.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
            <version>3.3.1</version>
        </dependency>
        <!-- JUnit dependency -->
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <!-- Compiler plugin: specifies the JDK version and encoding used for compilation.
                 If not specified, Maven 3 defaults to JDK 1.5 and Maven 2 to JDK 1.3. -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>1.8</source>       <!-- JDK version of the source code -->
                    <target>1.8</target>       <!-- target class file version -->
                    <encoding>UTF-8</encoding> <!-- character encoding -->
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
First, write a LogWritable class that implements the org.apache.hadoop.io.Writable interface.
LogWritable.java:
package com.xueai8.log;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;

public class LogWritable implements Writable {

    private Text userIP;               // client IP address
    private Text timestamp;            // access time
    private Text url;                  // requested URL
    private IntWritable status;        // HTTP status code
    private IntWritable responseSize;  // size of the server response

    public LogWritable() {
        this.userIP = new Text();
        this.timestamp = new Text();
        this.url = new Text();
        this.status = new IntWritable();
        this.responseSize = new IntWritable();
    }

    public void set(String userIP, String timestamp, String url, int status, int responseSize) {
        this.userIP.set(userIP);
        this.timestamp.set(timestamp);
        this.url.set(url);
        this.status.set(status);
        this.responseSize.set(responseSize);
    }

    public Text getUserIP() { return userIP; }

    public void setUserIP(Text userIP) { this.userIP = userIP; }

    public Text getTimestamp() { return timestamp; }

    public void setTimestamp(Text timestamp) { this.timestamp = timestamp; }

    public Text getUrl() { return url; }

    public void setUrl(Text url) { this.url = url; }

    public IntWritable getStatus() { return status; }

    public void setStatus(IntWritable status) { this.status = status; }

    public IntWritable getResponseSize() { return responseSize; }

    public void setResponseSize(IntWritable responseSize) { this.responseSize = responseSize; }

    // serialization: write each field to the output stream
    @Override
    public void write(DataOutput out) throws IOException {
        userIP.write(out);
        timestamp.write(out);
        url.write(out);
        status.write(out);
        responseSize.write(out);
    }

    // deserialization: read each field back in the same order
    @Override
    public void readFields(DataInput in) throws IOException {
        userIP.readFields(in);
        timestamp.readFields(in);
        url.readFields(in);
        status.readFields(in);
        responseSize.readFields(in);
    }
}
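If you want to verify the write()/readFields() pair before wiring the class into a job, a small JUnit test along the following lines can be used. LogWritableTest is a hypothetical test class added here only for illustration (it relies on the junit dependency already declared in the pom and would live under src/test/java):

package com.xueai8.log;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;

import org.junit.Assert;
import org.junit.Test;

public class LogWritableTest {

    @Test
    public void testSerializationRoundTrip() throws Exception {
        LogWritable original = new LogWritable();
        original.set("199.72.81.55", "01/Jul/1995:00:00:01 -0400",
                "GET /history/apollo/ HTTP/1.0", 200, 6245);

        // serialize into an in-memory byte stream
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        original.write(new DataOutputStream(bytes));

        // deserialize into a fresh instance
        LogWritable copy = new LogWritable();
        copy.readFields(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));

        // all fields should survive the round trip
        Assert.assertEquals(original.getUserIP(), copy.getUserIP());
        Assert.assertEquals(original.getTimestamp(), copy.getTimestamp());
        Assert.assertEquals(original.getUrl(), copy.getUrl());
        Assert.assertEquals(original.getStatus().get(), copy.getStatus().get());
        Assert.assertEquals(original.getResponseSize().get(), copy.getResponseSize().get());
    }
}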
When implementing a custom Writable data type, keep the following points in mind:
- If you add a custom constructor, make sure to keep the default no-argument constructor.
- TextOutputFormat uses the toString() method to serialize key and value types. If you use TextOutputFormat to serialize a custom Writable type, make sure it has a meaningful toString() implementation (a sketch follows this list).
- When reading input data, Hadoop may reuse a single instance of the Writable class over and over. When populating the object inside readFields(), do not rely on any state the object may already hold.
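As a sketch of the toString() point above, a possible implementation for LogWritable could look like the following. It is only needed if LogWritable itself is written out by TextOutputFormat; the job in this tutorial emits Text and IntWritable from the reducer, so it is optional here:

// inside LogWritable: a tab-separated, human-readable form of the record
@Override
public String toString() {
    return userIP + "\t" + timestamp + "\t" + url + "\t" + status + "\t" + responseSize;
}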
Now use the new LogWritable type as the value type of the MapReduce computation. In the example below, we use LogWritable as the Mapper's output value type.
LogMapper.java:
package com.xueai8.log;

import java.io.IOException;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

/**
 * Sample log line:
 *   199.72.81.55 - - [01/Jul/1995:00:00:01 -0400] "GET /history/apollo/ HTTP/1.0" 200 6245
 * Regular expression:
 *   "^(\\S+) (\\S+) (\\S+) \\[([\\w:/]+\\s[+\\-]\\d{4})\\] \"(.+?)\" (\\d{3}) (\\d+)"
 *   group(1) - ip
 *   group(4) - timestamp
 *   group(5) - request (url)
 *   group(6) - status
 *   group(7) - responseSize
 */
public class LogMapper extends Mapper<LongWritable, Text, Text, LogWritable> {

    private final Text outKey = new Text();
    private final LogWritable outValue = new LogWritable();   // custom Writable type

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // regular expression used to extract the fields
        String regexp = "^(\\S+) (\\S+) (\\S+) \\[([\\w:/]+\\s[+\\-]\\d{4})\\] \"(.+?)\" (\\d{3}) (\\d+)";
        Pattern pattern = Pattern.compile(regexp);
        Matcher matcher = pattern.matcher(value.toString());

        if (!matcher.matches()) {
            System.out.println("Not a valid log record");
            return;
        }

        // extract the fields
        String ip = matcher.group(1);
        String timestamp = matcher.group(4);
        String url = matcher.group(5);
        int status = Integer.parseInt(matcher.group(6));
        int responseSize = Integer.parseInt(matcher.group(7));

        outValue.set(ip, timestamp, url, status, responseSize);   // LogWritable as the value
        outKey.set(ip);                                           // ip as the key
        context.write(outKey, outValue);                          // emit
    }
}
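If you want to check the regular expression against the sample log line without running a MapReduce job, a small standalone class such as the following can be used. LogRegexCheck is a hypothetical helper added only for illustration:

package com.xueai8.log;

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LogRegexCheck {

    public static void main(String[] args) {
        // same regular expression as in LogMapper
        String regexp = "^(\\S+) (\\S+) (\\S+) \\[([\\w:/]+\\s[+\\-]\\d{4})\\] \"(.+?)\" (\\d{3}) (\\d+)";
        String line = "199.72.81.55 - - [01/Jul/1995:00:00:01 -0400] \"GET /history/apollo/ HTTP/1.0\" 200 6245";

        Matcher matcher = Pattern.compile(regexp).matcher(line);
        if (matcher.matches()) {
            System.out.println("ip           = " + matcher.group(1));
            System.out.println("timestamp    = " + matcher.group(4));
            System.out.println("request      = " + matcher.group(5));
            System.out.println("status       = " + matcher.group(6));
            System.out.println("responseSize = " + matcher.group(7));
        } else {
            System.out.println("The line does not match the expected log format");
        }
    }
}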
LogReducer.java:
Here we compute the total download volume for each IP.
package com.xueai8.log;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

/**
 * Computes the total download volume for each IP.
 */
public class LogReducer extends Reducer<Text, LogWritable, Text, IntWritable> {

    private final IntWritable outValue = new IntWritable(0);

    @Override
    protected void reduce(Text key, Iterable<LogWritable> values, Context context)
            throws IOException, InterruptedException {
        int total = 0;
        for (LogWritable log : values) {
            total += log.getResponseSize().get();
        }
        outValue.set(total);
        context.write(key, outValue);
    }
}
LogDriver.java:
As input, this application accepts any text file. You can run the LogDriver class directly from the IDE, passing the input and output paths as arguments.
package com.xueai8.log;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class LogDriver {

    public static void main(String[] args)
            throws IllegalStateException, IllegalArgumentException,
                   ClassNotFoundException, IOException, InterruptedException {
        if (args.length < 2) {
            System.out.println("Usage: LogDriver <input> <output>");
            System.exit(1);
        }

        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "log analysis");
        job.setJarByClass(LogDriver.class);

        // set mapper
        job.setMapperClass(LogMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LogWritable.class);   // *** note the custom value type here

        // set reducer
        job.setReducerClass(LogReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // set input and output paths
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // submit the job
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
II. Configure log4j
Add a log4j configuration file named log4j.properties under the src/main/resources directory, with the following content:
log4j.rootLogger = info,stdout

### send log output to the console ###
log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target = System.out
log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern = [%-5p] %d{yyyy-MM-dd HH:mm:ss,SSS} method:%l%n%m%n
III. Package the project
Open the terminal window at the bottom of IDEA and run the packaging command "mvn clean package", as shown in the figure below:
If everything goes well, you will see a message saying the jar was built successfully, as shown in the figure below:
Looking at the project structure now, you will see a new target directory; the built jar is located in this directory, as shown in the figure below:
IV. Deploy the project
Follow the steps below.
1. Start the HDFS and YARN clusters. In a Linux terminal window, run the following scripts:
$ start-dfs.sh
$ start-yarn.sh
Check that the processes have started and the cluster is running normally. In the Linux terminal window, run the following command:
$ jps
You should see the following five processes running, which means the cluster is working normally:
5542 NodeManager
5191 SecondaryNameNode
4857 NameNode
5418 ResourceManager
4975 DataNode
2. Upload the log data file log_sample.txt to the /data/mr/ directory on HDFS.
$ hdfs dfs -mkdir -p /data/mr
$ hdfs dfs -put log_sample.txt /data/mr/
$ hdfs dfs -ls /data/mr/
3. Submit the job to the Hadoop cluster. (If the jar is on Windows, copy it to Linux first.)
In the terminal window, run the following job submission command:
$ hadoop jar HadoopDemo-1.0-SNAPSHOT.jar com.xueai8.log.LogDriver /data/mr /data/mr-output
4. View the output results.
In the terminal window, run the following HDFS commands to view the output:
$ hdfs dfs -ls /data/mr-output
$ hdfs dfs -cat /data/mr-output/part-r-00000
The final statistics are as follows:
129.94.144.152	7074
199.120.110.21	9977
199.72.81.55	21833
205.189.154.54	55253
205.212.115.106	11619
alyssa.prodigy.com	12054
burger.letters.com	0
d104.aa.net	46285
dave.dev1.ihub.com	46285
dd14-012.compuserve.com	42732
dial22.lloyd.com	61716
gater3.sematech.org	41514
gater4.sematech.org	4771
gayle-gaston.tenet.edu	12040
ix-or10-06.ix.netcom.com	10149
ix-orl2-01.ix.netcom.com	45499
link097.txdirect.net	51128
net-1-141.eden.com	34029
netport-27.iu.net	7074
onyx.southwind.net	44295
piweba3y.prodigy.com	67720
pm13.j51.com	305722
port26.annex2.nwlink.com	56782
ppp-mia-30.shadow.net	14992
ppp-nyc-3-1.ios.com	129654
ppptky391.asahi-net.or.jp	15450
remote27.compusmart.ab.ca	23783
scheyer.clark.net	49152
slip1.yab.com	23159
smyth-pc.moorecap.com	121677
unicomp6.unicomp.net	49499
waters-gw.starway.net.au	6723
www-a1.proxy.aol.com	3985
www-b4.proxy.aol.com	70712