
Setting up Hive on Hadoop

Environment

Hadoop 2.7.1 + Ubuntu 14.04
Hive 2.1.0

Cluster layout

NameNode: master (Hive server)
DataNodes: slave1, slave2 (Hive clients)

Hive is built on top of Hadoop's HDFS, so set up Hadoop before setting up Hive.

Remote metastore mode:
101.201.81.34 (host of the MySQL server backing the metastore)

I. On 101.201.81.34

Install MySQL on this host and create a database named hive
(remote access must be enabled).
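The database and remote grant can be sketched as a generated SQL script. Assumptions not in the article: the MySQL admin account is root, the metastore credentials root/admin match those in hive-site.xml, and remote access also requires that bind-address in my.cnf not be pinned to 127.0.0.1. The '%' host wildcard is permissive; narrow it in production.

```shell
# Generate the provisioning SQL for the metastore database (a sketch,
# not the article's exact commands).
cat > provision_metastore.sql <<'EOF'
CREATE DATABASE IF NOT EXISTS hive;
GRANT ALL PRIVILEGES ON hive.* TO 'root'@'%' IDENTIFIED BY 'admin';
FLUSH PRIVILEGES;
EOF
# Apply it on the metastore host (prompts for the MySQL root password):
#   mysql -h 101.201.81.34 -u root -p < provision_metastore.sql
```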

II. Install Hive on master

1. Install Hive

1) Download the hive-2.1.0 package from the Apache website
2) sudo tar -zxvf apache-hive-2.1.0-bin.tar.gz
3) sudo cp -R apache-hive-2.1.0-bin /home/cms/hive
4) chmod -R 775 /home/cms/hive
5) sudo chown -R cms /home/cms/hive
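The extract-and-relocate steps above can be rehearsed end to end with a throwaway tarball (the real archive is apache-hive-2.1.0-bin.tar.gz from an Apache mirror; a temp directory stands in for /home/cms):

```shell
# Build a dummy archive shaped like the real Hive release tarball.
WORK=$(mktemp -d)
cd "$WORK"
mkdir -p apache-hive-2.1.0-bin/bin
echo '#!/bin/sh' > apache-hive-2.1.0-bin/bin/hive
tar -czf apache-hive-2.1.0-bin.tar.gz apache-hive-2.1.0-bin
rm -r apache-hive-2.1.0-bin

# Steps 2)-4) from the article, relocated under $WORK instead of /home/cms:
tar -zxf apache-hive-2.1.0-bin.tar.gz
cp -R apache-hive-2.1.0-bin hive
chmod -R 775 hive
ls hive/bin    # should list the hive launcher script
```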

2. Add HIVE_HOME to /etc/profile

Update HIVE_HOME, PATH, and CLASSPATH. Pasting my full set of exports:

export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export HADOOP_HOME=$HOME/hadoop-2.7.1
export HIVE_HOME=$HOME/hive
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=$JAVA_HOME/lib:$JRE_HOME/lib:$HIVE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin:$PATH
export HADOOP_MAPARED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

source /etc/profile
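A quick sanity check that the exports compose PATH as intended (a sketch; the paths are the article's and need not exist for PATH to be set):

```shell
# Replay the relevant exports and confirm Hive's bin dir lands on PATH.
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export HADOOP_HOME=$HOME/hadoop-2.7.1
export HIVE_HOME=$HOME/hive
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin:$PATH
echo "$PATH" | tr ':' '\n' | grep -x "$HIVE_HOME/bin"
```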

3. Copy and rename the template files under hive/conf

cp hive-env.sh.template hive-env.sh

cp hive-default.xml.template hive-site.xml

In hive-env.sh, point HADOOP_HOME at the Hadoop install directory: HADOOP_HOME=$HOME/hadoop-2.7.1

4. Edit hive-site.xml to set the MySQL JDBC driver, database URL, username, and password, as follows:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://101.201.81.34:3306/hive?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
  <description>username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>admin</value>
  <description>password to use against metastore database</description>
</property>


Here, javax.jdo.option.ConnectionURL is the JDBC connection string Hive uses to reach the database;

javax.jdo.option.ConnectionDriverName is the fully qualified class name of the JDBC driver;

javax.jdo.option.ConnectionUserName and javax.jdo.option.ConnectionPassword are the database username and password.


5. Scratch directories: Hive will fail to start if these are not configured

<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/opt/hivetmp</value>
  <description>Local scratch space for Hive jobs</description>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/opt/hivetmp</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>

The directory must also be created with the right permissions:
mkdir -p /opt/hivetmp
chmod -R 775 /opt/hivetmp
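The directory step can be sketched against a temp location instead (the article uses /opt/hivetmp, which requires sudo; any path writable by the Hive user works as long as hive-site.xml points at it):

```shell
# Create the scratch directory and give it the 775 mode set above.
SCRATCH="$(mktemp -d)/hivetmp"
mkdir -p "$SCRATCH"
chmod -R 775 "$SCRATCH"
stat -c '%a %n' "$SCRATCH"    # prints: 775 <path>
```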

6. Download mysql-connector-java-5.1.30-bin.jar and place it in $HIVE_HOME/lib

It can be downloaded from the official MySQL website, but remember to extract it: the download is a tar.gz archive, and the jar file is inside.
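The unpack-and-copy step can be rehearsed with a stand-in archive (the real download contains mysql-connector-java-5.1.30-bin.jar; fake-hive/lib below stands in for $HIVE_HOME/lib):

```shell
# Build a dummy tar.gz shaped like the Connector/J download.
WORK=$(mktemp -d)
cd "$WORK"
mkdir -p mysql-connector-java-5.1.30 fake-hive/lib
touch mysql-connector-java-5.1.30/mysql-connector-java-5.1.30-bin.jar
tar -czf mysql-connector-java-5.1.30.tar.gz mysql-connector-java-5.1.30
rm -r mysql-connector-java-5.1.30

# The actual steps: extract the archive, then copy the jar into Hive's lib.
tar -xzf mysql-connector-java-5.1.30.tar.gz
cp mysql-connector-java-5.1.30/mysql-connector-java-5.1.30-bin.jar fake-hive/lib/
ls fake-hive/lib
```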

7. Hive tables are stored in HDFS under /user/hive/warehouse


III. Set up the Hive client on slave2

1. Copy the whole hive directory from master to slave2:
scp -r hive slave2:/home/cms
Remember to disable the firewall first:
sudo ufw disable

2. Edit the hive-site.xml on slave2 as follows:

<configuration>
  <!-- thrift://<host_name>:<port>; the default port is 9083 -->
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://master:9083</value>
    <description>Thrift uri for the remote metastore. Used by metastore client to connect to remote metastore.</description>
  </property>
  <!-- default storage path for Hive tables -->
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
  </property>
</configuration>

Edit /etc/profile to match the configuration on master.

IV. Startup

1. Initialize the metastore database before first use (on master):

schematool -initSchema -dbType mysql

Expected output:

cms@master:~$ schematool -initSchema -dbType mysql
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
Metastore connection URL: jdbc:mysql://101.201.81.34:3306/hive?createDatabaseIfNotExist=true
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: root
Starting metastore schema initialization to 2.1.0
Initialization script hive-schema-2.1.0.mysql.sql
Initialization script completed
schemaTool completed

2. Start Hive

Start the metastore service on master:

hive --service metastore &

cms@master:~$ jps
10288 RunJar    # one new process: the metastore
9365 NameNode
9670 SecondaryNameNode
11096 Jps
9944 NodeManager
9838 ResourceManager
9471 DataNode

3. Test the Hive shell (works on both server and clients)

hive
show databases;
show tables;

To inspect Hive tables in HDFS: dfs -ls /user/hive/warehouse


