This article is part of the 《Spark大型电商项目实战》 (Spark large-scale e-commerce project in practice) series. It implements the first step of the top 10 hot categories module: getting all categories visited by the sessions that match the filter conditions.
The top hot categories feature is implemented in UserVisitSessionAnalyzeSpark.java. Add the call to the top 10 hot categories method just before sc.close(); the code and comments are as follows:
```java
// get the top 10 hot categories
getTop10Category(filteredSessionid2AggrInfoRDD, sessionid2actionRDD);
```

Then implement the "get top 10 hot categories" method after the "compute per-range session ratios and write to MySQL" method:
```java
/**
 * Get the top 10 hot categories
 * @param filteredSessionid2AggrInfoRDD aggregated info for the sessions that passed the filters
 * @param sessionid2actionRDD full action details keyed by session id
 */
private static void getTop10Category(
        JavaPairRDD<String, String> filteredSessionid2AggrInfoRDD,
        JavaPairRDD<String, Row> sessionid2actionRDD) {
    /**
     * Step 1: get all categories visited by the qualifying sessions
     */
    // get the visit details of the sessions that passed the filters
    JavaPairRDD<String, Row> sessionid2detailRDD = filteredSessionid2AggrInfoRDD
            .join(sessionid2actionRDD)
            .mapToPair(new PairFunction<Tuple2<String, Tuple2<String, Row>>, String, Row>() {

                private static final long serialVersionUID = 1L;

                public Tuple2<String, Row> call(
                        Tuple2<String, Tuple2<String, Row>> tuple) throws Exception {
                    return new Tuple2<String, Row>(tuple._1, tuple._2._2);
                }

            });

    // get the ids of all categories the sessions visited
    // "visited" means clicked, ordered, or paid for
    JavaPairRDD<Long, Long> categoryidRDD = sessionid2detailRDD.flatMapToPair(
            new PairFlatMapFunction<Tuple2<String, Row>, Long, Long>() {

                private static final long serialVersionUID = 1L;

                public Iterable<Tuple2<Long, Long>> call(
                        Tuple2<String, Row> tuple) throws Exception {
                    Row row = tuple._2;
                    List<Tuple2<Long, Long>> list = new ArrayList<Tuple2<Long, Long>>();

                    // clicked category: this column may be null, so check with
                    // isNullAt before calling the primitive getLong (getLong on
                    // a null column would throw a NullPointerException)
                    if (!row.isNullAt(6)) {
                        long clickCategoryId = row.getLong(6);
                        list.add(new Tuple2<Long, Long>(clickCategoryId, clickCategoryId));
                    }

                    // ordered categories: a comma-separated id string
                    String orderCategoryIds = row.getString(8);
                    if (orderCategoryIds != null) {
                        String[] orderCategoryIdsSplited = orderCategoryIds.split(",");
                        for (String orderCategoryId : orderCategoryIdsSplited) {
                            list.add(new Tuple2<Long, Long>(
                                    Long.valueOf(orderCategoryId),
                                    Long.valueOf(orderCategoryId)));
                        }
                    }

                    // paid categories: a comma-separated id string
                    String payCategoryIds = row.getString(10);
                    if (payCategoryIds != null) {
                        String[] payCategoryIdsSplited = payCategoryIds.split(",");
                        for (String payCategoryId : payCategoryIdsSplited) {
                            list.add(new Tuple2<Long, Long>(
                                    Long.valueOf(payCategoryId),
                                    Long.valueOf(payCategoryId)));
                        }
                    }

                    return list;
                }

            });

    // deduplication is required: without it the same categoryid would appear
    // multiple times, the later sort would rank duplicate (categoryid, countInfo)
    // pairs, and the final result could contain repeated categories
    categoryidRDD = categoryidRDD.distinct();
}
```
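The per-row extraction and deduplication logic can be illustrated without a Spark cluster. The sketch below is plain Java, not the project's code: the class and method names are hypothetical, and the three parameters stand in for the Row columns (nullable clicked category id, nullable comma-separated order and pay id strings). A Set plays the role of distinct().

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Plain-Java sketch of the per-row category-id extraction above (no Spark).
// The inputs mirror the three Row columns read in the flatMapToPair function.
public class CategoryIdExtraction {

    static List<Long> extractCategoryIds(Long clickCategoryId,
                                         String orderCategoryIds,
                                         String payCategoryIds) {
        List<Long> ids = new ArrayList<Long>();
        // clicked category: a single nullable id
        if (clickCategoryId != null) {
            ids.add(clickCategoryId);
        }
        // ordered and paid categories: nullable comma-separated id strings
        for (String s : new String[] { orderCategoryIds, payCategoryIds }) {
            if (s != null) {
                for (String id : s.split(",")) {
                    ids.add(Long.valueOf(id));
                }
            }
        }
        return ids;
    }

    public static void main(String[] args) {
        // one "row": clicked category 6, ordered 6 and 8, paid 8
        List<Long> ids = extractCategoryIds(6L, "6,8", "8");
        System.out.println(ids);    // [6, 6, 8, 8] -- duplicates remain

        // distinct() corresponds to deduplication, e.g. via a Set
        Set<Long> unique = new LinkedHashSet<Long>(ids);
        System.out.println(unique); // [6, 8]
    }
}
```

Running main shows why the distinct() step matters: a session that clicks, orders, and pays within the same category produces the same id several times, and only deduplication keeps each category once.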
Source code for the 《Spark大型电商项目实战》 series: https://github.com/Erik-ly/SprakProject
This article is part of the 《Spark大型电商项目实战》 series. More articles: http://blog.csdn.net/u012318074/article/category/6744423