Big Data: Spark

    Integrating Spark with Hive

    1. Install MySQL, create a regular user, and grant it privileges

    -- With these two settings a simple password is accepted without error
    set global validate_password_policy=0;
    set global validate_password_length=1;

    CREATE USER 'hive'@'%' IDENTIFIED BY '123456';
    GRANT ALL PRIVILEGES ON hivedb.* TO 'hive'@'%' IDENTIFIED BY '123456' WITH GRANT OPTION;
    FLUSH PRIVILEGES;
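
    To sanity-check the grant, you can list the new account's privileges from the MySQL shell:

    -- Should show ALL PRIVILEGES on hivedb.* for the new account
    SHOW GRANTS FOR 'hive'@'%';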

    2. Add a hive-site.xml

    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <configuration>
      <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://linux01:3306/hivedb?createDatabaseIfNotExist=true</value>
        <description>JDBC connect string for a JDBC metastore</description>
      </property>
      <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
        <description>Driver class name for a JDBC metastore</description>
      </property>
      <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hive</value>
        <description>username to use against metastore database</description>
      </property>
      <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>123456</value>
        <description>password to use against metastore database</description>
      </property>
      <property>
        <name>hive.metastore.schema.verification</name>
        <value>false</value>
      </property>
      <property>
        <name>datanucleus.schema.autoCreateAll</name>
        <value>true</value>
      </property>
    </configuration>

    3. Upload a MySQL JDBC driver: either place the driver jar in the jars directory of the Spark installation, or point to it with --driver-class-path

    bin/spark-sql --master spark://linux01:7077 --driver-class-path /root/mysql-connector-java-5.1.49.jar

    4. Spark SQL will create a database entry in MySQL; you then need to manually change the DB_LOCATION_URI column in the DBS table to an HDFS address (required for managed, i.e. internal, tables), as sketched below.
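
    A minimal sketch of that manual fix, run in MySQL against the metastore database; the HDFS URI here is an assumption and must match your NameNode address:

    USE hivedb;
    -- Assumed warehouse URI: replace hdfs://linux01:9000/user/hive/warehouse
    -- with your cluster's actual path
    UPDATE DBS SET DB_LOCATION_URI = 'hdfs://linux01:9000/user/hive/warehouse'
    WHERE NAME = 'default';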

    5. Restart the Spark SQL command line:

    bin/spark-sql --master spark://linux01:7077 --driver-class-path /root/mysql-connector-java-5.1.49.jar
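
    With the command line back up, a quick smoke test confirms that managed tables now land in HDFS (test_tb is a throwaway example name):

    -- Create a managed table, write a row, and read it back; the files
    -- should appear under the HDFS warehouse path set in step 4
    CREATE TABLE IF NOT EXISTS test_tb (id INT, name STRING);
    INSERT INTO test_tb VALUES (1, 'spark');
    SELECT * FROM test_tb;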


    spark-sql: starting a HiveServer (Thrift server)

    sbin/start-thriftserver.sh --master spark://linux01:7077 --executor-memory 1g --total-executor-cores 8 --driver-class-path /root/mysql-connector-java-5.1.49.jar

    Connect to the HiveServer with beeline

    bin/beeline -u jdbc:hive2://linux01:10000
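
    Once connected, the same Hive metastore is visible over JDBC; for example, reusing the hypothetical test_tb table from step 5:

    SHOW DATABASES;
    SELECT * FROM test_tb;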


    spark-sql on YARN

    bin/spark-sql --master yarn --deploy-mode client --driver-memory 1g --executor-memory 512m --num-executors 3 --executor-cores 1 --driver-class-path /root/mysql-connector-java-5.1.49.jar
