
Hive installation tutorial: installing Hive on Linux


Hive installation: version 3.1.2

Hive download: https://downloads.apache.org/hive/hive-3.1.2/apache-hive-3.1.2-bin.tar.gz

Or from a mirror: https://dlcdn.apache.org/hive/hive-3.1.2/

Hive home page: https://cwiki.apache.org/confluence/display/Hive/Home

Hive installation guide: https://cwiki.apache.org/confluence/display/Hive/GettingStarted

Hive tutorial: https://cwiki.apache.org/confluence/display/Hive/Tutorial

Installation:

Prerequisites: Hadoop and the JDK installed with environment variables set, plus MySQL 8.0.23.
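Quick sanity checks for the prerequisites (assuming hadoop and java are already on the PATH):

java -version
hadoop version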

Hive's SQL dialect (HiveQL) is similar to MySQL's SQL syntax.
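For instance, this HiveQL reads almost exactly like MySQL (the table name and columns here are made up for illustration):

hive -e "CREATE TABLE employees (id INT, name STRING);
         SELECT name, COUNT(*) FROM employees GROUP BY name;"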

1. Extract the archive and set the environment variables

Extract the archive:

tar -zxvf apache-hive-3.1.2-bin.tar.gz

Rename the directory:

mv apache-hive-3.1.2-bin apache-hive-3.1.2

Add the environment variables to /etc/profile:

export HIVE_HOME=/apps/bigdata/apache-hive-3.1.2
export PATH=$HIVE_HOME/bin:$PATH

Reload the environment:

source /etc/profile
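To confirm the variables took effect (an optional check):

echo $HIVE_HOME
which hive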

2. The metastore (Derby by default)

The built-in Derby database (suitable for testing only).

A Hive started with bin/hive and one started with ./hive do not share the same metastore.

Drawback: every Hive instance started from a different directory gets its own metadata store, so metadata cannot be shared.
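This is easy to observe: with the embedded Derby backend, Hive creates a metastore_db directory in whatever directory it is launched from (the two home directories below are hypothetical):

cd /home/userA && hive -e "show tables;"   # creates /home/userA/metastore_db
cd /home/userB && hive -e "show tables;"   # creates a second, unrelated /home/userB/metastore_db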

The MySQL-backed metastore

Edit the configuration file conf/hive-site.xml.

Copy and rename the template file:

cp hive-default.xml.template hive-site.xml

<property>
  <name>hive.server2.active.passive.ha.enable</name>
  <value>true</value>
  <description>Whether HiveServer2 Active/Passive High Availability be enabled when Hive Interactive sessions are enabled</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://hadoop1:3306/hive</value>
  <description>JDBC connect string for a JDBC metastore. To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL, e.g. ssl=true for postgres database.</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.cj.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
  <description>Username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>rootpassword</value>
  <description>Password to use against metastore database</description>
</property>

<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/hive/warehouse</value>
</property>
<property>
  <name>hive.exec.scratchdir</name>
  <value>/hive/tmp</value>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>/hive/log</value>
</property>
<property>
  <name>hive.server2.authentication</name>
  <value>NONE</value>
</property>
<property>
  <name>hive.server2.thrift.bind.host</name>
  <value>hadoop1</value>
</property>
<property>
  <name>hive.server2.thrift.port</name>
  <value>10000</value>
  <description>TCP port number to listen on, default 10000</description>
</property>
<property>
  <name>hive.server2.thrift.http.port</name>
  <value>10001</value>
</property>
<property>
  <name>hive.server2.thrift.client.user</name>
  <value>root</value>
  <description>Username to use against thrift client</description>
</property>
<property>
  <name>hive.server2.thrift.client.password</name>
  <value>liuchao.</value>
  <description>Password to use against thrift client</description>
</property>

Notes:
hive.server2.authentication: Hive user authentication; NONE skips authentication.
hive.server2.thrift.bind.host: the host the Thrift service binds to. The Hive service must be started on hadoop1 so that Thrift clients can connect to hadoop1; Thrift is what implements the HiveServer2 thin client.
hive.server2.thrift.port: the port the Thrift service binds to, used to establish connections to the Thrift service.
hive.server2.thrift.http.port: the HTTP port the Thrift service binds to; Hive operations can also be issued over HTTP.
hive.server2.thrift.client.user: the account used to authenticate against the Thrift service.
hive.server2.thrift.client.password: the password used to authenticate against the Thrift service.
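After editing hive-site.xml, a quick way to confirm a value is being picked up is the Hive CLI's set command, which prints a property's effective value (an optional check, assuming hive is on the PATH):

hive -e "set hive.metastore.warehouse.dir;"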

Edit the configuration file core-site.xml (on the Hadoop side):
hadoop.proxyuser.root.hosts configures Hadoop's proxy-user hosts, so that HiveServer2 clients have permission to access and operate on Hadoop files.
hadoop.proxyuser.root.groups configures Hadoop's proxy-user groups, for the same purpose.

<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
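After restarting Hadoop so the new core-site.xml is loaded, the value can be read back with the standard getconf utility (an optional check):

hdfs getconf -confKey hadoop.proxyuser.root.hosts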

Edit the configuration file conf/hive-env.sh.
Copy and rename the template file:
cp hive-env.sh.template hive-env.sh

# Point Hive at the Hadoop installation
export HADOOP_HOME=/apps/bigdata/hadoop-3.2.2
# Point Hive at its conf directory
export HIVE_CONF_DIR=/apps/bigdata/apache-hive-3.1.2/conf

3. Install MySQL 8.0.23 (on hadoop1)

echo "---------1----------"echo "安装mysql的rpm包"rpm -ivh mysql-community-common-8.0.23-1.el7.x86_64.rpmrpm -ivh mysql-community-client-plugins-8.0.23-1.el7.x86_64.rpmrpm -ivh mysql-community-libs-8.0.23-1.el7.x86_64.rpm --force --nodepsrpm -ivh mysql-community-client-8.0.23-1.el7.x86_64.rpmrpm -ivh mysql-community-server-8.0.23-1.el7.x86_64.rpm --force --nodepsecho "---------2----------"echo "修改/etc/my.cnf文件"echo "#设置默认字符集UTF-8" >> /etc/my.cnfecho "character_set_server=utf8" >> /etc/my.cnfecho "#设置默认字符集UTF-8" >> /etc/my.cnfecho "init_connect='SET NAMES utf8'" >> /etc/my.cnfecho "#解决大小写敏感问题1=不敏感 默认0" >> /etc/my.cnfecho "lower_case_table_names = 1" >> /etc/my.cnfecho "skip-grant-tables" >> /etc/my.cnfsleep 5secho "---------3----------"systemctl start mysqldecho "---------4----------"mysql update user set host='%' where user='root';SHOW VARIABLES LIKE 'validate_password%';set global validate_password.policy=0;set global validate_password.length=1;flush privileges;alter user root identified with mysql_native_password by 'rootpassword';flush privileges;#退出exit#查看状态systemctl status mysqldsystemctl stop mysqldecho '删除skip-grant-tables'sed -i '$d' /etc/my.cnfsystemctl start mysqld#登陆mysql -uroot -prootpassword

4. Configuration

hive.metastore.warehouse.dir defaults to /user/hive/warehouse.
Create the /tmp and /user/hive/warehouse directories on HDFS and make them group-writable.
Because the environment variables are configured, the commands below can be run directly; there is no need to cd into the hadoop-3.2.2 directory and run bin/hadoop fs -mkdir /tmp.

hadoop fs -mkdir -p /hive/tmp
hadoop fs -mkdir -p /hive/warehouse
hadoop fs -mkdir -p /hive/log
hadoop fs -chmod 777 /hive/tmp
hadoop fs -chmod 777 /hive/warehouse
hadoop fs -chmod 777 /hive/log
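To confirm the three directories and their permissions:

hadoop fs -ls /hive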

5. Initialize the metastore schema, from /apps/bigdata/apache-hive-3.1.2/bin

./schematool -initSchema -dbType mysql
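If initialization succeeds, the same tool can report the schema version now stored in MySQL (an optional check):

./schematool -info -dbType mysql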

6. Start Hive

Enter the bin directory of apache-hive-3.1.2:

cd /apps/bigdata/apache-hive-3.1.2/bin

Starting Hive fails with:

Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
        at org.apache.hadoop.conf.Configuration.set(Configuration.java:1380)
        ...
        at org.apache.hadoop.util.RunJar.main(RunJar.java:236)

Cause: Hadoop and Hive ship different versions of guava.jar, located in these two directories:

/apps/bigdata/apache-hive-3.1.2/lib/
/apps/bigdata/hadoop-3.2.2/share/hadoop/common/lib/

Fix: delete the lower version and copy the higher version into its place. Here, guava-27.0-jre.jar replaces guava-19.0.jar under /apps/bigdata/apache-hive-3.1.2/lib/.
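As shell commands, the replacement described above looks like this (paths taken from this article):

rm /apps/bigdata/apache-hive-3.1.2/lib/guava-19.0.jar
cp /apps/bigdata/hadoop-3.2.2/share/hadoop/common/lib/guava-27.0-jre.jar /apps/bigdata/apache-hive-3.1.2/lib/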

7. Sync the files to hadoop2 and hadoop3

xsync /etc/profile
xsync /apps/bigdata/apache-hive-3.1.2

8. Run hive

The hive command can now be used on hadoop2 and hadoop3 as well.

9. Check the Hive version
hive --version

10. Start HiveServer2
From the bin directory:

nohup hive --service hiveserver2 >> hiveserver2.log 2>&1 &

Check which ports are listening:
netstat -tunlp
HiveServer2 web UI: http://192.168.189.10:10002/
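To filter the output down to the ports used by this setup (10000 for Thrift, 10001 for Thrift-over-HTTP, 10002 for the web UI):

netstat -tunlp | grep -E '10000|10001|10002'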

An alternative way to start it:

bin/hiveserver2

Once it is up, you can connect from other nodes with Beeline:

bin/beeline -u jdbc:hive2://hadoop1:10000 -n root

Or:
bin/beeline
!connect jdbc:hive2://hadoop1:10000
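Once connected, Beeline accepts ordinary HiveQL; a query can also be passed directly on the command line (the query here is just an example):

bin/beeline -u jdbc:hive2://hadoop1:10000 -n root -e "show databases;"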
