Deploying Hive 2.0 on Hadoop 2.7 (exception handling added 2017.03) (illustrated walkthrough)


1. Download and extract

2. Install MySQL

MySQL installation is omitted here; see the previous post, "MySQL Deployment".

After MySQL is installed and configured, copy the MySQL JDBC driver, mysql-connector-java-5.1.41.jar, into the lib folder under the Hive home directory; Hive cannot connect to MySQL without it.
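Assuming Hive will live under /home/hive2.0, the path used throughout this post, and the jar was downloaded to the current directory, the copy is:

    cp mysql-connector-java-5.1.41.jar /home/hive2.0/lib/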

3. Create the hive user and database

    service mysql start
    mysql -u root -p
    CREATE USER 'hive' IDENTIFIED BY 'hive';
    GRANT ALL PRIVILEGES ON *.* TO 'hive'@'172.16.11.222' IDENTIFIED BY 'hive';
    FLUSH PRIVILEGES;
    create database hive;
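A quick sanity check of the new account, run from the Hive machine against the MySQL host (mach40 in the JDBC URL of step 5):

    mysql -u hive -p -h mach40
    show databases;    -- 'hive' should be listed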

4. Install Hive 2.0 (on the Hadoop NameNode):

    tar -zxvf apache-hive-2.0.0-bin.tar.gz
    vim /etc/profile
    cd /home/hive2.0/conf
    cp hive-default.xml.template hive-site.xml
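A minimal sketch of the /etc/profile addition, assuming the extracted directory is moved to /home/hive2.0 as the later commands imply:

    export HIVE_HOME=/home/hive2.0
    export PATH=$PATH:$HIVE_HOME/bin

Run source /etc/profile afterwards so the current shell picks up the change.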

5. Edit the configuration file hive-site.xml (it is copied from the template, so each of the default values below must be changed to match your setup):

    <property>
      <name>javax.jdo.option.ConnectionURL</name>
      <value>jdbc:mysql://mach40:3306/hive?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8&amp;useSSL=false</value>
      <description>JDBC connect string for a JDBC metastore</description>
    </property>

Note that the & separators in the JDBC URL must be escaped as &amp; inside the XML file.

    <property>
      <name>javax.jdo.option.ConnectionDriverName</name>
      <value>com.mysql.jdbc.Driver</value>
      <description>Driver class name for a JDBC metastore</description>
    </property>

    <property>
      <name>javax.jdo.option.ConnectionUserName</name>
      <value>hive</value>
      <description>Username to use against metastore database</description>
    </property>

    <property>
      <name>javax.jdo.option.ConnectionPassword</name>
      <value>hive</value>
      <description>Password to use against metastore database</description>
    </property>

    <property>
      <name>hive.querylog.location</name>
      <value>/home/hive2.0/iotmp</value>
    </property>

Write an absolute path here rather than $HIVE_HOME: hive-site.xml does not expand shell variables (the original note reports that the variable form appeared to work on one Linux setup but failed on CentOS).

    <property>
      <name>hive.exec.scratchdir</name>
      <value>/tmp/hive</value>
      <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
    </property>

    <property>
      <name>hive.exec.local.scratchdir</name>
      <value>/home/hive2.0/iotmp</value>
      <description>Local scratch space for Hive jobs</description>
    </property>

    <property>
      <name>hive.downloaded.resources.dir</name>
      <value>/home/hive2.0/iotmp</value>
      <description>Temporary local directory for added resources in the remote file system.</description>
    </property>

The following properties also need to be added for Spark SQL (remote metastore and HiveServer2):

    <property>
      <name>hive.metastore.uris</name>
      <value>thrift://mach40:9083</value>
      <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
    </property>

    <property>
      <name>hive.server2.thrift.min.worker.threads</name>
      <value>5</value>
      <description>Minimum number of Thrift worker threads</description>
    </property>

    <property>
      <name>hive.server2.thrift.max.worker.threads</name>
      <value>500</value>
      <description>Maximum number of Thrift worker threads</description>
    </property>

    <property>
      <name>hive.server2.thrift.port</name>
      <value>10000</value>
      <description>Port number of HiveServer2 Thrift interface. Can be overridden by setting $HIVE_SERVER2_THRIFT_PORT</description>
    </property>

    <property>
      <name>hive.server2.thrift.bind.host</name>
      <value>mach42</value>
      <description>Bind host on which to run the HiveServer2 Thrift interface. Can be overridden by setting $HIVE_SERVER2_THRIFT_BIND_HOST</description>
    </property>
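Once the metastore and HiveServer2 are running (see the startup commands after schema initialization below), the Thrift settings above can be exercised with beeline, which ships with Hive 2.0:

    beeline -u jdbc:hive2://mach42:10000 -n hive

The host and port are the hive.server2.thrift.bind.host and hive.server2.thrift.port values configured above; -n passes the connecting username.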

Addition: integration with HBase:

    <property>
      <name>hive.aux.jars.path</name>
      <value>file:///home/hive1.22/lib/hive-hbase-handler-1.2.2.jar,file:///home/hive1.22/lib/protobuf-java-2.5.0.jar,file:///home/hive1.22/lib/hbase-client-1.2.5.jar,file:///home/hive1.22/lib/hbase-common-1.2.5.jar,file:///home/hive1.22/lib/zookeeper-3.4.5.jar,file:///home/hive1.22/lib/guava-14.0.1.jar</value>
    </property>
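With those jars on the aux path, Hive tables can be mapped onto HBase through the standard HBaseStorageHandler; a minimal sketch (the table and column family names here are hypothetical examples, not from the original setup):

    CREATE TABLE hbase_demo (key int, value string)
    STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
    TBLPROPERTIES ("hbase.table.name" = "hbase_demo");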

Create the iotmp directory referenced by the properties above:

    cd /home/hive2.0

    mkdir iotmp

     

6. Format the metastore, i.e. initialize the Hive schema in MySQL:

    /home/hive2.0/bin/schematool -initSchema -dbType mysql
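To verify the result, schematool can also report the schema version it now finds in MySQL:

    /home/hive2.0/bin/schematool -info -dbType mysql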

     

On success, schematool reports that the initialization script completed.

Start the Hive services:
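A minimal way to bring both services up in the background, matching the remote-metastore (thrift://mach40:9083) and HiveServer2 settings configured above:

    nohup hive --service metastore &
    nohup hive --service hiveserver2 &

The metastore must be listening before HiveServer2 or other clients that use hive.metastore.uris can connect.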


Exception 1 (the fix is already included in the configuration above):


Fix (the configuration shown above):

     

Modify the following properties in hive-site.xml:

     

    hive.querylog.location

    hive.exec.local.scratchdir

    hive.downloaded.resources.dir

     

See the values given for these parameters above.

Exception 2:

    Error: Duplicate key name 'PCS_STATS_IDX'

Fix: this happens when the schema was formatted once before, or when the hive database in MySQL still contains leftover tables or data. Drop the hive database in MySQL and recreate it (or drop the leftover tables in it), then run the format command again.
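A sketch of the drop-and-recreate route, run as root in the mysql client before re-running the format command above:

    DROP DATABASE hive;
    CREATE DATABASE hive;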

Exception 3:

Fix: this one is common; the cause is simply that the MySQL JDBC driver jar was never copied into Hive's lib directory. Copy mysql-connector-java-5.1.41.jar there (see step 2) and retry.

     

Operational test (checked against the Hadoop web UI):


In terminal 1:

    hive
    create table test0 (id int, name int);

This succeeds. Open terminal 2:

    hive
    show tables;

test0 is listed.
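Beyond the original check, a row can be written and read back to confirm the table is usable (Hive supports INSERT ... VALUES since 0.14):

    insert into test0 values (1, 2);
    select * from test0;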

Check the Hadoop web UI: the test0 table's directory is visible under the Hive warehouse path.

Open terminal 3 and check the metastore side in MySQL:

    mysql -u root -p
    show databases;
    use hive;
    show tables;
    select * from TBLS;

The test0 table appears in the metastore's TBLS table.

Because a MySQL-backed metastore, unlike the embedded Derby database, allows multiple concurrent logins, several clients, e.g. the hive shell and HWI, can now use Hive at the same time.

