

Installing and Configuring Hive


Abstract: This article walks through installing Hive and connecting it to each of three metastore databases: Derby, PostgreSQL, and MySQL.


title: Installing and Configuring Hive
summary: Keywords: Hive, Ubuntu, installation and configuration, Derby, MySQL, PostgreSQL, database connection
date: 2019-5-19 13:25
urlname: 2019051903
author: foochane
img: /medias/featureimages/19.jpg
categories: Big Data
tags:

hive

Big Data

Author: foochane
Original link: https://foochane.cn/article/2019051903.html
1 Installation Notes

Before installing Hive you need a working Hadoop cluster. If you don't have one yet, see: Setting up a Hadoop distributed cluster.

1.1 Software used
Software Version Download
linux Ubuntu Server 18.04.2 LTS https://www.ubuntu.com/downlo...
hadoop hadoop-2.7.1 http://archive.apache.org/dis...
java jdk-8u211-linux-x64 https://www.oracle.com/techne...
hive hive-2.3.5 http://mirror.bit.edu.cn/apac...
mysql-connector-java mysql-connector-java-5.1.45.jar installed from the command line
postgresql-jdbc4 postgresql-jdbc4.jar installed from the command line
1.2 Node layout
Role IP hostname
Master node 192.168.233.200 Master
Worker node 1 192.168.233.201 Slave01
Worker node 2 192.168.233.202 Slave02
1.3 Notes

Note: in this article Hive, MySQL, and PostgreSQL are all installed only on the Master node; in a real production environment, adjust the layout as needed.

By default, Hive keeps its metadata in an embedded Derby database. This is the simplest storage option: with Derby, running hive creates a derby.log file and a metastore_db directory in the current working directory. Derby allows only one session at a time, so it is fine for simple testing but unsuitable for production. To support multiple concurrent sessions you need a standalone metastore database, using MySQL or PostgreSQL, both of which Hive supports well.

This article walks through connecting Hive to each of these three databases: Derby, PostgreSQL, and MySQL.

2 Connecting Hive to Derby

2.1 Extracting the archive
$ tar -zxvf apache-hive-2.3.5-bin.tar.gz -C /usr/local/bigdata && cd /usr/local/bigdata
$ mv apache-hive-2.3.5-bin hive-2.3.5
$ sudo chown -R hadoop:hadoop hive-2.3.5   # the bigdata directory's ownership was already changed earlier
2.2 Editing the configuration files

The files live in /usr/local/bigdata/hive-2.3.5/conf; three of them need editing: hive-site.xml, hive-env.sh, and hive-log4j2.properties.

First copy each .template file, then edit the copy:

$ cd /usr/local/bigdata/hive-2.3.5/conf
$ cp hive-default.xml.template hive-site.xml
$ cp hive-env.sh.template hive-env.sh
$ cp hive-log4j2.properties.template hive-log4j2.properties
2.2.1 hive-site.xml(Derby)

To configure Derby, only javax.jdo.option.ConnectionURL needs changing, to point at where the metastore_db directory should be stored.
The change looks like this:


<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:derby:;databaseName=/usr/local/bigdata/hive-2.3.5/metastore/metastore_db;create=true</value>
    <description>
      JDBC connect string for a JDBC metastore.
      To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
      For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
    </description>
</property>
2.2.2 hive-env.sh

Add:

export HADOOP_HOME=/usr/local/bigdata/hadoop-2.7.1
export HIVE_CONF_DIR=/usr/local/bigdata/hive-2.3.5/conf
2.2.3 hive-log4j2.properties

The logging configuration can stay at its defaults for now; nothing needs changing.

2.3 Setting environment variables

Add the following to ~/.bashrc, then run source ~/.bashrc to make it take effect:

export HIVE_HOME=/usr/local/bigdata/hive-2.3.5
export PATH=$PATH:/usr/local/bigdata/hive-2.3.5/bin
2.4 Creating the warehouse directories for Hive

Note: start the Hadoop cluster first.

$ hadoop fs -mkdir -p /user/hive/warehouse
$ hadoop fs -mkdir -p /tmp
$ hadoop fs -chmod g+w /user/hive/warehouse
$ hadoop fs -chmod g+w /tmp
2.5 Starting Hive

Initialize the metastore schema:

$ schematool -initSchema -dbType derby

A successful initialization should produce output like this:

$ schematool -initSchema -dbType derby
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/bigdata/hive-2.3.5/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/bigdata/hadoop-2.7.1/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:        jdbc:derby:;databaseName=/usr/local/bigdata/hive-2.3.5/metastore/metastore_db;create=true
Metastore Connection Driver :    org.apache.derby.jdbc.EmbeddedDriver
Metastore connection User:       APP
Starting metastore schema initialization to 2.3.0
Initialization script hive-schema-2.3.0.derby.sql
Initialization script completed
schemaTool completed

Start Hive:

$ hive

If it runs successfully, you will see something like:

$ hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/bigdata/hive-2.3.5/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/bigdata/hadoop-2.7.1/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in file:/usr/local/bigdata/hive-2.3.5/conf/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive>
    >

Create a table:

create table t1(
     id      int
    ,name    string
    ,hobby   array<string>
    ,add     map<string,string>
    )
    row format delimited
    fields terminated by ","
    collection items terminated by "-"
    map keys terminated by ":"
    ;
hive>
    >
    >
    > show databases;
OK
default
Time taken: 22.279 seconds, Fetched: 1 row(s)
hive> create table t1(
    >     id      int
    >    ,name    string
    >    ,hobby   array<string>
    >    ,add     map<string,string>
    > )
    > row format delimited
    > fields terminated by ","
    > collection items terminated by "-"
    > map keys terminated by ":"
    > ;
OK
Time taken: 1.791 seconds
hive>
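To see how the delimiters declared above fit together, here is a hypothetical data file for t1 (the file name and all values are made up for illustration): fields are separated by ",", items within the hobby array by "-", and the key and value of each add map entry by ":".

```shell
# Write two made-up rows in t1's delimited format:
# field sep ",", array item sep "-", map key:value sep ":"
cat > /tmp/t1_sample.txt <<'EOF'
1,xiaoming,book-TV-code,city:beijing
2,lilei,swim-code,city:nanjing
EOF

# Inside the hive CLI the file could then be loaded and queried:
#   hive> load data local inpath '/tmp/t1_sample.txt' into table t1;
#   hive> select name, hobby[0], add['city'] from t1;
```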

With that, Hive is configured with Derby as its metastore.

The following sections show how to connect Hive to PostgreSQL and MySQL instead.

3 Installing PostgreSQL

3.1 Installation

Run:

$ sudo apt install postgresql postgresql-contrib

After installation there is a default administrative user named postgres, with no password.

3.2 Starting PostgreSQL
$ sudo systemctl enable postgresql
$ sudo systemctl start postgresql
3.3 Logging in
hadoop@Master:~$ sudo -i -u postgres
postgres@Master:~$ psql
psql (10.8 (Ubuntu 10.8-0ubuntu0.18.04.1))
Type "help" for help.

postgres=# help
You are using psql, the command-line interface to PostgreSQL.
Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help with psql commands
       \g or terminate with semicolon to execute query
       \q to quit
postgres=#
4 Connecting Hive to PostgreSQL

4.1 Installing the PostgreSQL JDBC driver
$ sudo apt-get install libpostgresql-jdbc-java
$ ln -s /usr/share/java/postgresql-jdbc4.jar /usr/local/bigdata/hive-2.3.5/lib
4.2 Editing pg_hba.conf

Edit /etc/postgresql/10/main/pg_hba.conf, changing the authentication METHOD entries to trust:

# Database administrative login by Unix domain socket
#local   all             postgres                                peer
local   all             postgres                                trust

# TYPE  DATABASE        USER            ADDRESS                 METHOD

# "local" is for Unix domain socket connections only
#local   all             all                                     peer
local   all             all                                     trust
# IPv4 local connections:
#host    all             all             127.0.0.1/32            md5
host    all             all             127.0.0.1/32            trust
# IPv6 local connections:
#host    all             all             ::1/128                 md5
host    all             all             ::1/128                 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local   replication     all                                     peer
local   replication     all                                     trust
host    replication     all             127.0.0.1/32            trust
host    replication     all             ::1/128                 trust
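A step the original skips over: pg_hba.conf is only read at server start or reload, so after editing it PostgreSQL has to be told to re-read it (this assumes the systemd-managed install from section 3.2):

```shell
# Re-read pg_hba.conf without restarting the server
sudo systemctl reload postgresql
```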
4.3 Creating the database and user in PostgreSQL

First create a user named hiveuser with the password 123456,

then create a database named metastore:

$ sudo -u postgres psql 

postgres=# CREATE USER hiveuser WITH PASSWORD '123456';
postgres=# CREATE DATABASE metastore;
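The database created above is owned by postgres, so depending on the default privileges, schematool connecting as hiveuser may not be allowed to create tables in it. A hedged extra step (not in the original) is to transfer ownership to hiveuser:

```shell
# Hypothetical extra step: make hiveuser the owner of metastore
sudo -u postgres psql -c "ALTER DATABASE metastore OWNER TO hiveuser;"
```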

Test that the user can log in to the database:

$ psql -h localhost -U hiveuser -d metastore

A successful login means the setup is complete:

hadoop@Master:~$  psql -h localhost -U hiveuser -d metastore
Password for user hiveuser:
psql (10.8 (Ubuntu 10.8-0ubuntu0.18.04.1))
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.

metastore=>
4.5 Editing hive-site.xml (PostgreSQL)

Previously hive-site.xml was configured for Derby as the metastore; now the same file needs editing again.
First add the following at the top:

  <property>
    <name>system:java.io.tmpdir</name>
    <value>/tmp/hive/java</value>
  </property>
  <property>
    <name>system:user.name</name>
    <value>${user.name}</value>
  </property>

Then change the following properties:

name                                    value                                   description
javax.jdo.option.ConnectionURL          jdbc:postgresql://localhost/metastore   database to connect to (created above)
javax.jdo.option.ConnectionDriverName   org.postgresql.Driver                   JDBC driver class
javax.jdo.option.ConnectionUserName     hiveuser                                user name (created above)
javax.jdo.option.ConnectionPassword     123456                                  the user's password

In full:

  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:postgresql://localhost/metastore</value>
    <description>
      JDBC connect string for a JDBC metastore.
      To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
      For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
    </description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>org.postgresql.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hiveuser</value>
    <description>Username to use against metastore database</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
    <description>password to use against metastore database</description>
  </property>
4.6 Starting Hive

First run schematool to initialize the schema:

schematool -dbType postgres -initSchema

Then run $ hive to start Hive.

Create a table to test:

hadoop@Master:~$ hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/bigdata/hive-2.3.5/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/bigdata/hadoop-2.7.7/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Logging initialized using configuration in file:/usr/local/bigdata/hive-2.3.5/conf/hive-log4j2.properties Async: true
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/bigdata/hadoop-2.7.7/lib/native/libhadoop.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive>
    > show databases;
OK
default
Time taken: 12.294 seconds, Fetched: 1 row(s)
hive> create table t1(
    >     id      int
    >    ,name    string
    >    ,hobby   array<string>
    >    ,add     map<string,string>
    > )
    > row format delimited
    > fields terminated by ","
    > collection items terminated by "-"
    > map keys terminated by ":"
    > ;
OK
Time taken: 1.239 seconds
hive>

Check in PostgreSQL that the table was created:

$ psql -h localhost -U hiveuser -d metastore
psql (10.8 (Ubuntu 10.8-0ubuntu0.18.04.1))
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.

metastore=> SELECT * from "TBLS";
 TBL_ID | CREATE_TIME | DB_ID | LAST_ACCESS_TIME | OWNER  | RETENTION | SD_ID | TBL_NAME |   TBL_TYPE    | VIEW_EXPANDED_TEXT | VIEW_ORIGINAL_TEXT | IS_REWRITE_ENABLED
--------+-------------+-------+------------------+--------+-----------+-------+----------+---------------+--------------------+--------------------+--------------------
      1 |  1560074934 |     1 |                0 | hadoop |         0 |     1 | t1       | MANAGED_TABLE |                    |                    | f
(1 row)
5 Installing MySQL

5.1 Installation
$ sudo apt install mysql-server
5.2 Setting the MySQL root password

If no password has been set yet, set one now.

Here the password is set to hadoop:

$ mysql -u root -p
6 Connecting Hive to MySQL

6.1 Creating a database for Hive in MySQL

This database will hold Hive's metadata.

It corresponds to mysql://localhost:3306/metastore in the Hive configuration file hive-site.xml.

# create the database and user
mysql> create database if not exists metastore;
mysql> CREATE USER 'hiveuser'@'localhost' IDENTIFIED BY '123456';

# set access privileges
mysql> REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'hiveuser'@'localhost';
mysql> GRANT ALL PRIVILEGES ON metastore.* TO 'hiveuser'@'localhost';

# reload the privilege tables
mysql> FLUSH PRIVILEGES;
mysql> quit;
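To confirm the grants took effect, log back in as hiveuser (using the password set above) and list its privileges; the exact output varies across MySQL versions:

```shell
# Verify the new account can log in and inspect its own grants
mysql -u hiveuser -p123456 -e "SHOW GRANTS FOR CURRENT_USER();"
```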
6.2 Installing the MySQL JDBC driver
$ sudo apt-get install libmysql-java
$ ln -s /usr/share/java/mysql-connector-java-5.1.45.jar /usr/local/bigdata/hive-2.3.5/lib
6.3 Editing hive-site.xml (MySQL)

First add the following at the top:

  <property>
    <name>system:java.io.tmpdir</name>
    <value>/tmp/hive/java</value>
  </property>
  <property>
    <name>system:user.name</name>
    <value>${user.name}</value>
  </property>

Then change the following properties:

name                                    value                                               description
javax.jdo.option.ConnectionURL          jdbc:mysql://localhost:3306/metastore?useSSL=true   database to connect to (created above)
javax.jdo.option.ConnectionDriverName   com.mysql.jdbc.Driver                               JDBC driver class
javax.jdo.option.ConnectionUserName     hiveuser                                            user name (created above)
javax.jdo.option.ConnectionPassword     123456                                              the user's password

In full:

  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/metastore?useSSL=true</value>
    <description>
      JDBC connect string for a JDBC metastore.
      To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
      For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
    </description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hiveuser</value>
    <description>Username to use against metastore database</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
    <description>password to use against metastore database</description>
  </property>
6.4 Starting Hive

Initialize first:

schematool -dbType mysql -initSchema

Then, as before, run:

$ hive
7 Troubleshooting

Problem 1

Initializing the Derby metastore fails with the following error, complaining that hive-exec-*.jar is missing:

hadoop@Master:~$ schematool -initSchema -dbType derby
Missing Hive Execution Jar: /usr/local/biddata/hive-2.3.5/lib/hive-exec-*.jar
Solution:

Check whether hive-exec-2.3.5.jar really is missing from that directory; if it is, download one and place it there.
Download: https://mvnrepository.com/art...
If the jar is present, the environment variables must be misconfigured (note the misspelled biddata in the path above); check that HIVE_HOME and $HIVE_HOME/bin are set correctly.

問題2

報錯:

Exception in thread "main" java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.na
        at org.apache.hadoop.fs.Path.initialize(Path.java:205)
        at org.apache.hadoop.fs.Path.<init>(Path.java:171)
        at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:659)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:582)
        at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:549)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:750)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D
        at java.net.URI.checkPath(URI.java:1823)
        at java.net.URI.<init>(URI.java:745)
        at org.apache.hadoop.fs.Path.initialize(Path.java:202)
        ... 12 more
Solution:

Add the following at the top of hive-site.xml:

<property>
    <name>system:java.io.tmpdir</name>
    <value>/tmp/hive/java</value>
</property>
<property>
    <name>system:user.name</name>
    <value>${user.name}</value>
</property>
Problem 3

Running $ schematool -dbType postgres -initSchema fails:

hadoop@Master:~$ schematool -dbType postgres -initSchema
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/bigdata/hive-2.3.5/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/bigdata/hadoop-2.7.7/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:        jdbc:postgresql://localhost/pymetastore
Metastore Connection Driver :    org.postgresql.Driver
Metastore connection User:       hive
Starting metastore schema initialization to 2.3.0
Initialization script hive-schema-2.3.0.postgres.sql
Error: ERROR: relation "BUCKETING_COLS" already exists (state=42P07,code=0)
org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
Underlying cause: java.io.IOException : Schema script failed, errorcode 2
Use --verbose for detailed stacktrace.
*** schemaTool failed ***

It can also fail with:

Error: ERROR: relation "txns" already exists (state=42P07,code=0)
org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
Underlying cause: java.io.IOException : Schema script failed, errorcode 2
Use --verbose for detailed stacktrace.
*** schemaTool failed ***

I tried for a long time to find the root cause of this without success. Some posts online blame the Hive version, but switching to lower versions such as hive-1.2.1 and hive-1.2.2 made no difference.
What finally worked was recreating the user and database, so the existing database contents were presumably conflicting, most likely remnants of an earlier, partly completed initialization.

Problem 4
Error: Duplicate key name "PCS_STATS_IDX" (state=42000,code=1061)
org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
Underlying cause: java.io.IOException : Schema script failed, errorcode 2
Use --verbose for detailed stacktrace.
*** schemaTool failed ***
Solution:

Note that when MySQL stores the metadata, the root user may lack the necessary privileges and initialization will fail; use the dedicated hiveuser account instead. Also, $ schematool -initSchema only needs to be run once per metastore.
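Problems 3 and 4 both come from running schematool against a metastore database that already contains tables, typically left behind by an earlier failed initialization. Assuming nothing valuable is stored in it yet, one way to recover is to drop and recreate the database before retrying (the MySQL variant is shown; the PostgreSQL commands are analogous):

```shell
# WARNING: destroys everything in the metastore database
mysql -u root -p -e "DROP DATABASE IF EXISTS metastore; CREATE DATABASE metastore;"
schematool -dbType mysql -initSchema
```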


