Note up front:
Hadoop, ZooKeeper, Spark, and Kafka are already up and running.
We now begin installing and deploying Hive.
Basic dependencies:
1, JDK 1.6+
2, Hadoop 2.x
3, Hive 0.13-0.19
4, MySQL (mysql-connector JAR)
The installation details are as follows:
#java
export JAVA_HOME=/soft/jdk1.7.0_79/
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
#hadoop
export HADOOP_HOME=/usr/local/hadoop/hadoop
#scala
export SCALA_HOME=/usr/local/hadoop/scala
#spark
export SPARK_HOME=/usr/local/hadoop/spark
#hive
export HIVE_HOME=/usr/local/hadoop/hive
#bin
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$SCALA_HOME/bin:$SPARK_HOME/bin:$HIVE_HOME/bin
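For these variables to take effect in the current shell, source the profile file they live in and sanity-check them. A minimal sketch, assuming the exports above were appended to /etc/profile:

# assumption: the export lines above were added to /etc/profile
source /etc/profile
# quick check that the key variables resolve to real directories
echo $JAVA_HOME $HADOOP_HOME $HIVE_HOME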
一、Start the installation:
1, Download:
https://hive.apache.org/downloads.html
Extract:
tar xvf apache-hive-2.1.0-bin.tar.gz -C /usr/local/hadoop/
cd /usr/local/hadoop/
mv apache-hive-2.1.0-bin hive
2, Modify the configuration
Modify the startup environment:
cd /usr/local/hadoop/hive
vim bin/hive-config.sh

#java
export JAVA_HOME=/soft/jdk1.7.0_79/
#hadoop
export HADOOP_HOME=/usr/local/hadoop/hadoop
#hive
export HIVE_HOME=/usr/local/hadoop/hive
Modify the default configuration file:
cd /usr/local/hadoop/hive
vim conf/hive-site.xml

<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://master:3306/hive?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
    <description>Username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>xujun</value>
    <description>password to use against metastore database</description>
  </property>
</configuration>
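Because the metastore is configured with com.mysql.jdbc.Driver, the MySQL connector JAR from the dependency list also has to be on Hive's classpath, or schematool will fail to load the driver. A minimal sketch; the exact JAR file name and its location are assumptions, use whatever version you actually downloaded:

# copy the MySQL JDBC driver (file name is an assumption) into Hive's lib directory
cp mysql-connector-java-5.1.40-bin.jar /usr/local/hadoop/hive/lib/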
3, Modify the tmp dir
Change the value of every configuration property whose value contains "system:java.io.tmpdir" to the path below:
/tmp/hive
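One way to apply this in bulk is a sed replacement over hive-site.xml; a minimal sketch, assuming your hive-site.xml still contains the template's ${system:java.io.tmpdir} entries and that /tmp/hive is an acceptable local scratch path:

cd /usr/local/hadoop/hive
# replace every ${system:java.io.tmpdir} occurrence with the fixed path /tmp/hive
sed -i 's#${system:java.io.tmpdir}#/tmp/hive#g' conf/hive-site.xml
# make sure the scratch directory exists and is writable
mkdir -p /tmp/hive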
二、Install MySQL and start it
1. Create the database
create database hive;
grant all on *.* to hive@'%' identified by 'xujun';
flush privileges;
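Before moving on, it is worth confirming that the account from hive-site.xml can reach MySQL from the Hive node. A minimal check, assuming the MySQL server runs on host master and using the password from hive-site.xml above:

# log in as the metastore user and list databases; the hive database should appear
mysql -h master -u hive -pxujun -e "show databases;"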
三, Initialize Hive
cd /usr/local/hadoop/hive
bin/schematool -initSchema -dbType mysql

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:        jdbc:mysql://hadoop3:3306/hive?createDatabaseInfoNotExist=true
Metastore Connection Driver :    com.mysql.jdbc.Driver
Metastore connection User:       hive
Starting metastore schema initialization to 2.1.0
Initialization script hive-schema-2.1.0.mysql.sql
Initialization script completed
schemaTool completed
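Optionally, you can confirm that the metastore tables were actually created in MySQL (same connection assumptions as above):

# the 2.1.0 schema creates metastore tables such as DBS, TBLS and VERSION
mysql -h master -u hive -pxujun -e "use hive; show tables;"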
四、Start Hive
[hadoop@hadoop1 hadoop]$ hive/bin/hive
which: no hbase in (/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin://soft/jdk1.7.0_79//bin:/bin:/bin:/bin:/usr/local/hadoop/hive/bin:/home/hadoop/bin)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/usr/local/hadoop/hive/lib/hive-common-2.1.0.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. tez, spark) or using Hive 1.X releases.
hive> show databases;
OK
default
Time taken: 1.184 seconds, Fetched: 1 row(s)
hive>
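As a final smoke test, a statement can be run non-interactively to confirm the metastore is writable; a minimal sketch with a hypothetical throwaway table name:

# create, list and drop a throwaway table (smoke_test is just an example name)
/usr/local/hadoop/hive/bin/hive -e "create table smoke_test(id int); show tables; drop table smoke_test;"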