meituan_waimai_meishi.csv contains a sample of SPU (Standard Product Unit) data from the Meituan Waimai food-delivery platform, covering delivery listings for one region over a certain period. The fields are described below:
| Field name | Meaning | Data type |
|---|---|---|
| spu_id | product SPU ID | String |
| shop_id | shop ID | String |
| shop_name | shop name | String |
| category_name | category name | String |
| spu_name | SPU name | String |
| spu_price | SPU sale price | Double |
| spu_originprice | SPU original price | Double |
| month_sales | monthly sales volume | Int |
| praise_num | number of likes | Int |
| spu_unit | SPU unit | String |
| spu_desc | SPU description | String |
| spu_image | product image | String |

Link: meituan_waimai_meishi.csv  Extraction code: 47k1
Create the directory /app/data/exam in HDFS, upload meituan_waimai_meishi.csv to that directory, and use HDFS commands to find out how many lines of data the file contains.
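A minimal sketch of those HDFS commands, assuming the CSV sits in the current local directory:

```shell
hdfs dfs -mkdir -p /app/data/exam
hdfs dfs -put meituan_waimai_meishi.csv /app/data/exam
# count the lines (header row included)
hdfs dfs -cat /app/data/exam/meituan_waimai_meishi.csv | wc -l
```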
① Count how many products (SPUs) each shop has
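The RDD one-liners below assume a `spuRDD` built roughly as follows (a sketch reconstructed from the `fileRDD` code shown later in the DataFrame section: the header row is dropped and malformed rows are filtered out), followed by one possible answer for ①:

```scala
// Build the RDD in the Spark shell: drop the header, split on commas,
// and keep only rows that have all 12 fields
val fileRDD = sc.textFile("/app/data/exam/meituan_waimai_meishi.csv")
val spuRDD = fileRDD.filter(!_.startsWith("spu_id")).map(_.split(",", -1)).filter(_.length == 12)

// ① SPU count per shop: key by shop_name (column index 2), then count
spuRDD.map(x => (x(2), 1)).reduceByKey(_ + _).foreach(println)
```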
② Compute each shop's total sales revenue
③ For each shop, find the three products with the highest sales revenue. The output should include the shop name, product name, and sales revenue. Products with zero sales revenue are excluded from the calculation, i.e. if a product's sales are 0 it is not counted.
The simple version:
```scala
spuRDD.map(x => (x(2), x(5).toDouble * x(7).toInt)).reduceByKey(_ + _).foreach(println)
```
The safe version, which guards against malformed values that cannot be cast to Double or Int and would otherwise throw an error:
First import the Scala util package:
```scala
import scala.util._
```
Wrap each conversion in Try(), turn it into an Option with toOption, then fall back with getOrElse():
```scala
spuRDD.map(x => (x(2), Try(x(5).toDouble).toOption.getOrElse(0.0) * Try(x(7).toInt).toOption.getOrElse(0))).reduceByKey(_ + _).foreach(println)
```
③ For each shop, find the three products with the highest sales revenue, outputting the shop name, product name, and sales revenue, and excluding products whose sales revenue is 0.
Approach: build a tuple of three fields: "shop_name x(2)", "spu_name x(4)", and "revenue spu_price * month_sales, i.e. x(5) * x(7)" -> filter out zero-sales items -> groupBy(shop_name) -> customize the map output as map((shop_name, top-three revenues)).
Step 1: compute the revenue of every product in every shop
```scala
spuRDD.map(x => (x(2), x(4), Try(x(5).toDouble).toOption.getOrElse(0.0) * Try(x(7).toInt).toOption.getOrElse(0))).foreach(println)
```
(sample output screenshot omitted)
Step 2: filter out products with zero sales and group by shop
```scala
spuRDD.map(x => (x(2), x(4), Try(x(5).toDouble).toOption.getOrElse(0.0) * Try(x(7).toInt).toOption.getOrElse(0))).filter(_._3 > 0).groupBy(_._1).foreach(println)
```
(sample output screenshot omitted)
Note: after groupBy, each element has the type (String, Iterable[(String, String, Double)]): _1 is the shop name, and _2 is an iterable whose inner tuples (String, String, Double) are the fields selected by the earlier map, i.e. (shop name, product name, sales revenue).
As the query output below shows, each grouped element is a pair whose second component nests many triples, in the format: (shop_name, ((shop_name, spu_name, spu_price*month_sales), (shop_name, spu_name, spu_price*month_sales), ...))
Step 3: the full answer: filter out zero-sales items and take the top three by revenue, sorting in descending order via sortBy(-1 * _._3)
```scala
// answer 1:
spuRDD.map(x => (x(2), x(4), Try(x(5).toDouble).toOption.getOrElse(0.0) * Try(x(7).toInt).toOption.getOrElse(0))).filter(_._3 > 0).groupBy(_._1).map(x => {
  val shop_name = x._1
  val spu_name_totalPrice = x._2.toList.sortBy(-1 * _._3).take(3).map(item => item._2 + "--" + item._3)
  (shop_name, spu_name_totalPrice)
}).foreach(println)
```
Result tuple format: (String, List[String]), where each list element is "spu_name--revenue".
An example query result:
(三顾冒菜(曼城国际店), List(单人荤素+1个米饭--588.0, 双人荤素+2个米饭--350.0, 金针菇--292.59999999999997))
Note: to walk through the logic of the statement above:
The requirement is each shop's three highest-revenue products, so we group by shop name after first computing the revenue of every product in every shop. Expressed as a tuple the result is (shop name, top-three products and revenues). The definition "val spu_name_totalPrice..." computes exactly that: it takes the three products with the highest revenue (the group must be converted with toList before it can be sorted and the top three taken), and then a second map pass customizes the output format as {product name + "--" + revenue}. This nested map iterates over the (shop name, product name, revenue) tuples produced by the outermost map.
If you do not want the string-concatenation form, leave out the map(item => {item._2 + "--" + item._3}) step:
```scala
// answer 2:
spuRDD.map(x => (x(2), x(4), Try(x(5).toDouble).toOption.getOrElse(0.0) * Try(x(7).toInt).toOption.getOrElse(0))).filter(_._3 > 0).groupBy(_._1).map(x => {
  val shop_name = x._1
  val spu_name_totalPrice = x._2.toList.sortBy(-1 * _._3).take(3)
  (shop_name, spu_name_totalPrice)
}).foreach(println)
```
Result tuple format: (String, List[(String, String, Double)]). An example query result:
(三顾冒菜(曼城国际店), List((三顾冒菜(曼城国际店),单人荤素+1个米饭,588.0), (三顾冒菜(曼城国际店),双人荤素+2个米饭,350.0), (三顾冒菜(曼城国际店),金针菇,292.59999999999997)))
It can also be written by omitting val shop_name = x._1 and returning the top-three result directly: after groupBy the element is already a pair nesting triples, (shop_name, ((shop_name, spu_name, spu_price*month_sales), ...)), so once the map has found the top three revenues it can simply return them as a list of triples.
```scala
// answer 3:
spuRDD.map(x => (x(2), x(4), Try(x(5).toDouble).toOption.getOrElse(0.0) * Try(x(7).toInt).toOption.getOrElse(0))).filter(_._3 > 0).groupBy(_._1).map(x => {
  val spu_total_sales = x._2.toList.sortBy(-1 * _._3).take(3)
  spu_total_sales
}).foreach(println)
```
Result list format: List[(String, String, Double)]. An example query result:
List((三顾冒菜(曼城国际店),单人荤素+1个米饭,588.0), (三顾冒菜(曼城国际店),双人荤素+2个米饭,350.0), (三顾冒菜(曼城国际店),金针菇,292.59999999999997))
Going one step further: since the grouped element is a pair nesting triples, (shop_name, ((shop_name, spu_name, spu_price*month_sales), (shop_name, spu_name, spu_price*month_sales), ...)), you can take the pair's second element _._2 and expand it with flatMap, computing the top three revenues directly:
```scala
spuRDD.map(x => (x(2), x(4), Try(x(5).toDouble).toOption.getOrElse(0.0) * Try(x(7).toInt).toOption.getOrElse(0))).filter(_._3 > 0).groupBy(_._1).flatMap(x => x._2.toList.sortBy(-1 * _._3).take(3)).foreach(println)
```
Result tuple format: (String, String, Double). An example query result:
(三顾冒菜(曼城国际店),单人荤素+1个米饭,588.0)
(三顾冒菜(曼城国际店),双人荤素+2个米饭,350.0)
(三顾冒菜(曼城国际店),金针菇,292.59999999999997)
(强社包子铺,香葱大肉包,329.6)
(强社包子铺,香葱大肉包,329.6)
(强社包子铺,芹菜酸菜包,253.00000000000003)
Note: since no output format was specified for the query result, four variants are given above for reference.
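The SQL-based solutions below work through a DataFrame. A sketch of how spuDF is presumably created (the read call itself is not reproduced in this excerpt):

```scala
// Read the CSV with a header row and automatic schema inference
val spuDF = spark.read.format("csv")
  .option("header", "true")
  .option("inferSchema", "true")
  .load("/app/data/exam/meituan_waimai_meishi.csv")
```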
Here option("inferSchema", "true") tells Spark to infer the field types automatically.
Print the schema:
```scala
spuDF.printSchema
```
Register spuDF as a temporary view named spu:
```scala
spuDF.createOrReplaceTempView("spu")
```
Inspect the table:
```scala
spark.sql("select * from spu").show()
```
① Count how many products (SPUs) each shop has:
```scala
spark.sql("select shop_name, count(1) as spu_total from spu group by shop_name").show()
```
② Compute each shop's total sales revenue:
```scala
spark.sql("select shop_name, sum(spu_price * month_sales) as total_sales from spu group by shop_name").show()
```
③ For each shop, the three highest-revenue products (shop name, product name, revenue; zero-sales products excluded):
```scala
spark.sql("select t.shop_name,t.spu_name,t.total_sales,t.rn from(select shop_name,spu_name,spu_price * month_sales as total_sales,row_number() over(partition by shop_name order by spu_price * month_sales desc) rn from spu where month_sales > 0) t where t.rn<=3").show()
```
Note: formatted, the SQL statement above reads:
```sql
select t.shop_name, t.spu_name, t.total_sales, t.rn
from (select shop_name, spu_name, spu_price * month_sales as total_sales,
             row_number() over(partition by shop_name order by spu_price * month_sales desc) rn
      from spu
      where month_sales > 0) t
where t.rn <= 3
```
spuDF above was created with automatic schema inference, option("inferSchema", "true"). In some scenarios you want explicit types instead: for example with a very wide table from which only a few columns are needed, automatic inference wastes memory. Below, a DataFrame is created with an explicit schema instead of automatic type conversion.
Create an RDD of Row objects, selecting only the fields of meituan_waimai_meishi.csv that are actually needed:
```scala
import org.apache.spark.sql.Row
import org.apache.spark.sql.types._

val spuRowRDD = fileRDD.filter(x => x.startsWith("spu_id") == false).map(x => x.split(",", -1)).filter(_.length == 12).map(p => Row(p(1), p(2), p(4), Try(p(5).toDouble).toOption.getOrElse(0.0), Try(p(7).toInt).toOption.getOrElse(0)))
```
Create the schema:
```scala
val spuSchema = StructType(Array(
  StructField("shop_id", StringType, true),
  StructField("shop_name", StringType, true),
  StructField("spu_name", StringType, true),
  StructField("spu_price", DoubleType, true),
  StructField("month_sales", IntegerType, true)
))
```
Create the DataFrame:
```scala
val spuDF2 = spark.createDataFrame(spuRowRDD, spuSchema)
```
In HBase, create a namespace named exam, and under that namespace create a table spu with a single column family result.
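A minimal sketch of those HBase shell commands:

```shell
create_namespace 'exam'
create 'exam:spu', 'result'
```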
The ex_spu table structure is as follows:
| Field name | Meaning | Data type |
|---|---|---|
| spu_id | product SPU ID | String |
| shop_id | shop ID | String |
| shop_name | shop name | String |
| category_name | category name | String |
| spu_name | SPU name | String |
| spu_price | SPU sale price | Double |
| spu_originprice | SPU original price | Double |
| month_sales | monthly sales volume | Int |
| praise_num | number of likes | Int |
| spu_unit | SPU unit | String |
| spu_desc | SPU description | String |
| spu_image | product image | String |

The ex_spu_hbase table structure is as follows:
| Field name | Type | Meaning |
|---|---|---|
| key | string | rowkey |
| sales | double | sales revenue |
| praise | int | number of likes |

Create the database spu_db:
```sql
create database spu_db;
```
Create the external table ex_spu:
```sql
create external table if not exists ex_spu(
  spu_id string,
  shop_id string,
  shop_name string,
  category_name string,
  spu_name string,
  spu_price double,
  spu_originprice double,
  month_sales int,
  praise_num int,
  spu_unit string,
  spu_desc string,
  spu_image string)
row format delimited fields terminated by ','
stored as textfile
location '/app/data/exam'
tblproperties("skip.header.line.count"="1");
```
Note: tblproperties("skip.header.line.count"="1") means the first line of each file is skipped when the data is loaded. To skip the first line and the last two lines, write:
tblproperties ("skip.header.line.count"="1", "skip.footer.line.count"="2"); 创建外部表 ex_spu_hbase create external table if not exists ex_spu_hbase( key string, sales double, prise int ) stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' with serdeproperties ("hbase.columns.mapping"=":key,result:sales,result:praise") tblproperties("hbase.table.name"="exam:spu");①统计每个店铺的总销售额sales,店铺的商品总点赞数praise,并将shop_id和 shop_name的组合作为RowKey,并将结果映射到HBase
② After the statistics are complete, query the result data in both Hive and HBase.
Note: LIMIT must be written in uppercase, otherwise it errors.
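For example, a sketch of checking the results in the HBase shell, where LIMIT must be uppercase (the row count is illustrative):

```shell
scan 'exam:spu', {LIMIT => 10}
```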
This is a grading log from an online exam system. Each log entry records the log timestamp, the question's difficulty coefficient, the ID of the knowledge point the question belongs to, the student ID, the question ID, and the grading result. The log structure is as follows: (sample log lines omitted)
Link: answer_question.log  Extraction code: 47k1
① Extract the values of four fields from the log: knowledge point ID, student ID, question ID, and grading result.
The result is as follows: (screenshot omitted)
Note: the trailing "r" on the question-ID string must be removed. Besides trimming it off with the substring method, you can also use replace("r", "") to replace the "r" with an empty string.
② Save the extracted knowledge point ID, student ID, question ID, and grading result to HDFS under /app/data/result as files, one record per line, with fields separated by "\t", in the format shown below. (Hint: a tuple can be joined into a string with tuple.productIterator.mkString("\t"), as sketched next.)
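A sketch of the save step, assuming a hypothetical `resultRDD` that already holds the extracted (topic_id, student_id, question_id, score) tuples; the extraction itself depends on the exact log layout, which is not reproduced in this excerpt:

```scala
// Join each tuple's fields with tabs, then write the lines to HDFS
resultRDD.map(_.productIterator.mkString("\t")).saveAsTextFile("/app/data/result")
```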
(sample output omitted: one record per line, fields tab-separated)
In HBase, create a namespace exam, and under that namespace create a table analysis using the student ID as the RowKey, with two column families: accuracy and question.
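A minimal sketch of the HBase shell commands (the exam namespace may already exist from the earlier step):

```shell
create_namespace 'exam'
create 'exam:analysis', 'accuracy', 'question'
```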
In Hive, create a database exam. In that database, create an external table ex_exam_record pointing at the Spark-processed log data under /app/data/result; an external table ex_exam_anlysis mapped to the accuracy column family of the analysis table in HBase; and an external table ex_exam_question mapped to the question column family of the analysis table in HBase.
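A hedged sketch of that DDL, modeled on the ex_spu / ex_spu_hbase statements above; column names follow the table structures listed below, the HBase column mappings are assumptions, and `right` is backquoted because it is a reserved word in some Hive versions:

```sql
create database exam;
use exam;

-- textfile table over the Spark output, tab-separated per task ②
create external table if not exists ex_exam_record(
  topic_id string,
  student_id string,
  question_id string,
  score float)
row format delimited fields terminated by '\t'
stored as textfile
location '/app/data/result';

-- mapped to the accuracy column family of exam:analysis (qualifier names assumed)
create external table if not exists ex_exam_anlysis(
  student_id string,
  total_score float,
  question_count int,
  accuracy float)
stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
with serdeproperties ("hbase.columns.mapping"=":key,accuracy:total_score,accuracy:question_count,accuracy:accuracy")
tblproperties("hbase.table.name"="exam:analysis");

-- mapped to the question column family of exam:analysis (qualifier names assumed)
create external table if not exists ex_exam_question(
  student_id string,
  `right` string,
  half string,
  error string)
stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
with serdeproperties ("hbase.columns.mapping"=":key,question:right,question:half,question:error")
tblproperties("hbase.table.name"="exam:analysis");
```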
The ex_exam_record table structure is as follows:
| Field name | Type | Meaning |
|---|---|---|
| topic_id | string | knowledge point ID |
| student_id | string | student ID |
| question_id | string | question ID |
| score | float | grading result |

The ex_exam_anlysis table structure is as follows:
| Field name | Type | Meaning |
|---|---|---|
| student_id | string | student ID |
| total_score | float | total score |
| question_count | int | number of questions answered |
| accuracy | float | accuracy rate |

The ex_exam_question table structure is as follows:
| Field name | Type | Meaning |
|---|---|---|
| student_id | string | student ID |
| right | string | IDs of all questions answered correctly |
| half | string | IDs of all questions answered half-correctly |
| error | string | IDs of all questions answered incorrectly |

① Join the question IDs with commas and save them into the ex_exam_question table.
Steps:
1. Query all questions a student answered correctly:
```sql
select student_id, score, concat_ws(",", collect_set(question_id)) right
from ex_exam_record where score = 1 group by student_id, score;
```
(sample output screenshot omitted)
2. Query all questions a student answered half-correctly:
```sql
select student_id, score, concat_ws(",", collect_set(question_id)) half
from ex_exam_record where score = 0.5 group by student_id, score;
```
3. Query all questions a student answered incorrectly:
```sql
select student_id, score, concat_ws(",", collect_set(question_id)) error
from ex_exam_record where score = 0 group by student_id, score;
```
4. Join the three queries:
```sql
select t1.student_id, t1.right, t2.half, t3.error
from (select student_id, score, concat_ws(",", collect_set(question_id)) right
      from ex_exam_record where score = 1 group by student_id, score) t1
join (select student_id, score, concat_ws(",", collect_set(question_id)) half
      from ex_exam_record where score = 0.5 group by student_id, score) t2
  on t1.student_id = t2.student_id
join (select student_id, score, concat_ws(",", collect_set(question_id)) error
      from ex_exam_record where score = 0 group by student_id, score) t3
  on t1.student_id = t3.student_id;
```
(sample output screenshot omitted)
5. Insert the query result into the ex_exam_question table:
```sql
insert into ex_exam_question
select t1.student_id, t1.right, t2.half, t3.error
from (select student_id, score, concat_ws(",", collect_set(question_id)) right
      from ex_exam_record where score = 1 group by student_id, score) t1
join (select student_id, score, concat_ws(",", collect_set(question_id)) half
      from ex_exam_record where score = 0.5 group by student_id, score) t2
  on t1.student_id = t2.student_id
join (select student_id, score, concat_ws(",", collect_set(question_id)) error
      from ex_exam_record where score = 0 group by student_id, score) t3
  on t1.student_id = t3.student_id;
```
Note: the statement above can also be written more compactly:
```sql
with t as (select student_id, score, concat_ws(",", collect_set(question_id)) as question_id
           from ex_exam_record group by student_id, score),
     a1 as (select student_id, question_id from t where score = 1),
     a2 as (select student_id, question_id from t where score = 0.5),
     a3 as (select student_id, question_id from t where score = 0)
insert into ex_exam_question
select a1.student_id, a1.question_id right, a2.question_id half, a3.question_id error
from a1
join a2 on a1.student_id = a2.student_id
join a3 on a1.student_id = a3.student_id;
```
Or, taking a different approach, it can be shortened with case when:
```sql
insert into ex_exam_question
select student_id,
       concat_ws('', collect_list(t.right)) right,
       concat_ws('', collect_list(t.half)) half,
       concat_ws('', collect_list(t.error)) error
from (select student_id,
             case when score = 1.0 then concat_ws(",", collect_list(question_id)) else null end right,
             case when score = 0.5 then concat_ws(",", collect_list(question_id)) else null end half,
             case when score = 0.0 then concat_ws(",", collect_list(question_id)) else null end error
      from ex_exam_record
      group by student_id, score) t
group by student_id;
```
② After the statistics are complete, scan the exam:analysis table in the HBase shell, showing only the data in the question column family, as shown below: (screenshot omitted)
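A minimal sketch of that scan, restricted to the question column family:

```shell
scan 'exam:analysis', {COLUMNS => 'question'}
```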
