
Integrating Spark with Hive and HBase for real-time data insertion and query analysis

Notes

        This setup uses Spark 2.0.1, Hive 1.2.1, HBase 1.2.4, Hadoop 2.6.0 and ZooKeeper 3.4.9.

        Installing each of these dependencies is not covered again here; see earlier posts or search online if needed. This post focuses on how to configure them to work together.


    hbase

        HBase needs no special configuration; just start it normally.
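
        For reference, a minimal sketch of that start-up step (assuming HBASE_HOME points at the installation above and ZooKeeper is already running):

# Start HMaster and the RegionServers with the stock script
$HBASE_HOME/bin/start-hbase.sh
# Quick sanity check: the shell should connect without errors
$HBASE_HOME/bin/hbase shell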


    hadoop

        Hadoop needs no special configuration either; just start it normally.
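
        Likewise, a minimal sketch of starting Hadoop with the stock scripts (standard Hadoop 2.6.0 layout assumed):

# Start HDFS and YARN
$HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/sbin/start-yarn.sh
# Verify the daemons (NameNode, DataNode, ResourceManager, NodeManager) are up
jps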


    hive

        Edit hive-env.sh and add the HBASE_HOME variable:
# Set HADOOP_HOME to point to a specific hadoop install directory
export HADOOP_HOME=${HADOOP_HOME}
export HBASE_HOME=/opt/hbase/hbase-1.2.4
# export HIVE_CLASSPATH=$HIVE_CLASSPATH:/opt/hive/apache-hive-1.2.1-bin/lib/*

# Hive Configuration Directory can be controlled by:
export HIVE_CONF_DIR=${HIVE_HOME}/conf

        Edit hive-site.xml and add the HBase-related properties:


<property>
  <name>hbase.zookeeper.quorum</name>
  <value>hadoop-n,hadoop-d1,hadoop-d2</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
  <description>Property from ZooKeeper's config zoo.cfg. The port at which the clients will connect.</description>
</property>
<property>
  <name>hbase.master</name>
  <value>hadoop-n:60000</value>
</property>
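
        An optional quick check that the quorum hosts in the snippet above are reachable from the Hive machine (hostnames are from this cluster; nc is assumed to be installed):

# Each ZooKeeper node should answer the four-letter "ruok" command with "imok"
echo ruok | nc hadoop-n 2181
echo ruok | nc hadoop-d1 2181
echo ruok | nc hadoop-d2 2181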

    spark

        Copy the following jars from the HBase installation directory. Note: do not take the shortcut of adding HBase's classpath in spark-env.sh instead; doing so will prevent Spark from starting.


hbase-protocol hbase-common hbase-client hbase-server hive-hbase-handler-2.1.0 htrace-core metrics-core
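
        As a minimal sketch of the copy step (the jar version numbers and the target directory are assumptions; adjust them to your installation, and note that hive-hbase-handler normally ships under $HIVE_HOME/lib rather than with HBase):

# Copy the HBase client-side jars into Spark's own jar directory ($SPARK_HOME/jars in Spark 2.x)
cd $HBASE_HOME/lib
cp hbase-protocol-1.2.4.jar hbase-common-1.2.4.jar hbase-client-1.2.4.jar \
   hbase-server-1.2.4.jar htrace-core-3.1.0-incubating.jar metrics-core-2.2.0.jar \
   $SPARK_HOME/jars/
# The Hive-HBase storage handler comes from the Hive distribution
cp $HIVE_HOME/lib/hive-hbase-handler-*.jar $SPARK_HOME/jars/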

    Test
        1. Create a table in HBase and insert three rows

create 'hbase_test',{NAME=>'cf1'}
put 'hbase_test','a','cf1:v1','1'
put 'hbase_test','b','cf1:v1','2'
put 'hbase_test','c','cf1:v1','3'
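
        To confirm the rows landed, a quick non-interactive scan should list all three row keys:

# Pipe a scan into the HBase shell; expect rows a, b and c under column cf1:v1
echo "scan 'hbase_test'" | $HBASE_HOME/bin/hbase shell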

        


        2. Create the mapping table in Hive

create external table hbase_test(key string, value string)
stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:v1")
TBLPROPERTIES("hbase.table.name" = "hbase_test");
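
        Before involving Spark, it is worth checking that Hive itself can read through this mapping, for example with a one-off query from the OS shell:

# Should return the three rows written to HBase above
hive -e "select * from hbase_test;"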

    


        3. Start spark-sql

cd $SPARK_HOME/bin
./spark-sql

spark-sql> select * from hbase_test;
16/11/18 11:20:48 INFO execution.SparkSqlParser: Parsing command: select * from hbase_test
16/11/18 11:20:49 INFO parser.CatalystSqlParser: Parsing command: string
16/11/18 11:20:49 INFO parser.CatalystSqlParser: Parsing command: string
16/11/18 11:20:49 INFO parser.CatalystSqlParser: Parsing command: string
16/11/18 11:20:49 INFO parser.CatalystSqlParser: Parsing command: string
16/11/18 11:20:49 INFO memory.MemoryStore: Block broadcast_7 stored as values in memory (estimated size 222.0 KB, free 365.5 MB)
16/11/18 11:20:49 INFO memory.MemoryStore: Block broadcast_7_piece0 stored as bytes in memory (estimated size 21.4 KB, free 365.5 MB)
16/11/18 11:20:49 INFO storage.BlockManagerInfo: Added broadcast_7_piece0 in memory on 10.5.3.100:39358 (size: 21.4 KB, free: 366.2 MB)
16/11/18 11:20:49 INFO spark.SparkContext: Created broadcast 7 from processCmd at CliDriver.java:376
16/11/18 11:20:50 INFO hbase.HBaseStorageHandler: Configuring input job properties
16/11/18 11:20:50 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x165634aa connecting to ZooKeeper ensemble=localhost:2181
16/11/18 11:20:50 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x165634aa0x0, quorum=localhost:2181, baseZNode=/hbase
16/11/18 11:20:50 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
16/11/18 11:20:50 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
16/11/18 11:20:50 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x158751d4c19000d, negotiated timeout = 40000
16/11/18 11:20:50 INFO util.RegionSizeCalculator: Calculating region sizes for table "hbase_test".
16/11/18 11:20:50 INFO client.ConnectionManager$HConnectionImplementation: Closing master protocol: MasterService
16/11/18 11:20:50 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x158751d4c19000d
16/11/18 11:20:50 INFO zookeeper.ZooKeeper: Session: 0x158751d4c19000d closed
16/11/18 11:20:50 INFO zookeeper.ClientCnxn: EventThread shut down
16/11/18 11:20:50 INFO spark.SparkContext: Starting job: processCmd at CliDriver.java:376
16/11/18 11:20:50 INFO scheduler.DAGScheduler: Got job 3 (processCmd at CliDriver.java:376) with 1 output partitions
16/11/18 11:20:50 INFO scheduler.DAGScheduler: Final stage: ResultStage 4 (processCmd at CliDriver.java:376)
16/11/18 11:20:50 INFO scheduler.DAGScheduler: Parents of final stage: List()
16/11/18 11:20:50 INFO scheduler.DAGScheduler: Missing parents: List()
16/11/18 11:20:50 INFO scheduler.DAGScheduler: Submitting ResultStage 4 (MapPartitionsRDD[23] at processCmd at CliDriver.java:376), which has no missing parents
16/11/18 11:20:50 INFO memory.MemoryStore: Block broadcast_8 stored as values in memory (estimated size 15.2 KB, free 365.5 MB)
16/11/18 11:20:50 INFO memory.MemoryStore: Block broadcast_8_piece0 stored as bytes in memory (estimated size 8.3 KB, free 365.5 MB)
16/11/18 11:20:50 INFO storage.BlockManagerInfo: Added broadcast_8_piece0 in memory on 10.5.3.100:39358 (size: 8.3 KB, free: 366.2 MB)
16/11/18 11:20:50 INFO spark.SparkContext: Created broadcast 8 from broadcast at DAGScheduler.scala:1012
16/11/18 11:20:50 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 4 (MapPartitionsRDD[23] at processCmd at CliDriver.java:376)
16/11/18 11:20:50 INFO scheduler.TaskSchedulerImpl: Adding task set 4.0 with 1 tasks
16/11/18 11:20:50 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 4.0 (TID 4, 10.5.3.101, partition 0, ANY, 5544 bytes)
16/11/18 11:20:50 INFO cluster.CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task 4 on executor id: 1 hostname: 10.5.3.101.
16/11/18 11:20:50 INFO storage.BlockManagerInfo: Added broadcast_8_piece0 in memory on 10.5.3.101:57818 (size: 8.3 KB, free: 366.3 MB)
16/11/18 11:20:50 INFO storage.BlockManagerInfo: Added broadcast_7_piece0 in memory on 10.5.3.101:57818 (size: 21.4 KB, free: 366.3 MB)
16/11/18 11:20:51 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 4.0 (TID 4) in 509 ms on 10.5.3.101 (1/1)
16/11/18 11:20:51 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 4.0, whose tasks have all completed, from pool
16/11/18 11:20:51 INFO scheduler.DAGScheduler: ResultStage 4 (processCmd at CliDriver.java:376) finished in 0.511 s
16/11/18 11:20:51 INFO scheduler.DAGScheduler: Job 3 finished: processCmd at CliDriver.java:376, took 0.611485 s
a    1
b    2
c    3
Time taken: 2.33 seconds, Fetched 3 row(s)
16/11/18 11:20:51 INFO CliDriver: Time taken: 2.33 seconds, Fetched 3 row(s)
spark-sql>
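
        Since the point of this setup is that data written to HBase becomes visible to Spark SQL immediately, a simple way to check the real-time part (row key 'd' below is just an example) is to put one more row via the HBase shell and re-run the query without touching the Hive or Spark metadata:

# Write a new row straight into HBase ...
echo "put 'hbase_test','d','cf1:v1','4'" | $HBASE_HOME/bin/hbase shell
# ... then read it back through the Hive mapping from Spark SQL;
# the new row shows up with no reload or refresh step
$SPARK_HOME/bin/spark-sql -e "select * from hbase_test"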

    Note
