https://www.jianshu.com/p/045026a13bf8
Have you ever run into a situation like this: a table needs to be copied from one cluster to another, or the HBase table structure has changed but the data must be kept (for example, pre-splitting was never done and one RegionServer is now under heavy pressure)? HBase's Export and Import tools can finish these jobs quickly.
In the current environment there is a table logTable with a single column family ext, but no pre-splitting was done. The table can be force-split, but the start/end range of each split cannot be precisely controlled that way.
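For reference, a manual split in the HBase shell looks like this (a sketch; the split key 'a0' is only illustrative):
- # let HBase split the table at points it chooses
- split 'logTable'
- # or split at an explicit rowkey (the value 'a0' is only illustrative)
- split 'logTable', 'a0'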
Create a working directory for the export on HDFS:
- hadoop fs -mkdir /tmp/hbase-export
- hadoop fs -ls /tmp/hbase-export
Export the data with HBase's built-in MapReduce command, which writes to HDFS by default (keep the backslash line continuations, or put the whole command on one line):
- hbase org.apache.hadoop.hbase.mapreduce.Export \
-   -D hbase.mapreduce.scan.column.family=ext \
-   logTable hdfs:///tmp/hbase-export/logTable
Once the export has finished, drop the old table:
- disable 'logTable'
- drop 'logTable'
Pre-split according to the number of RegionServers; assuming there are 8, use the approach below. The first two hex characters of MD5("uid") are used as a salt, so the key space 00~ff is divided into 256 shards. The split boundaries can be generated like this (dropping the leading '00' leaves the 7 split points used in the create statement below):
- # generate the split points in the Scala shell
- (0 until 256 by 256/8).map(Integer.toHexString).map(i => s"0$i".takeRight(2))
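For illustration, the two-character salt for a given uid can be previewed on the command line (a sketch; md5sum is assumed to be available and the sample uid is made up):
- echo -n "uid_12345" | md5sum | cut -c1-2
Rows written with that prefix fall into whichever region's split range covers the salt, so writes spread across the pre-split regions.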
Create the table in HBase with the pre-splits:
- create 'logTable', { \
-   NAME => 'ext', TTL => '3 DAYS', \
-   CONFIGURATION => { \
-     'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.KeyPrefixRegionSplitPolicy', \
-     'KeyPrefixRegionSplitPolicy.prefix_length' => '2' \
-   }, \
-   COMPRESSION => 'SNAPPY' \
- }, \
- SPLITS => ['20', '40', '60', '80', 'a0', 'c0', 'e0']
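To confirm the pre-split took effect, the regions can be listed (a sketch; list_regions exists in newer HBase shell versions, otherwise check the Master web UI):
- list_regions 'logTable'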
Import the previously exported data into the new, pre-split table:
- hbase org.apache.hadoop.hbase.mapreduce.Import logTable hdfs:///tmp/hbase-export/logTable
To export more than one column family at a time, pass a comma-separated list to the export job, e.g. -D hbase.mapreduce.scan.column.family=ext,info.
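After the import job finishes, a quick spot check in the HBase shell (a sketch):
- count 'logTable'
- scan 'logTable', {LIMIT => 5}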
A fuller export/import walkthrough follows, using the test tables TESTA (source) and TESTX (target).
1> Run the export command
Parameters can be customized with -D; here we specify the table name, the column family, the start/stop RowKeys, and the HDFS output directory.
- hbase org.apache.hadoop.hbase.mapreduce.Export \
- -D hbase.mapreduce.scan.column.family=0 \
- -D hbase.mapreduce.scan.row.start=aaaaaaaaaaaaaaaaaaa00010078 \
- -D hbase.mapreduce.scan.row.stop=jjjjjjjjjjjjjjjjjjj00010078 TESTA /tmp/hbase_export
Optional -D configuration properties:
- Usage: Export [-D <property=value>]* <tablename> <outputdir> [<versions> [<starttime> [<endtime>]] [^[regex pattern] or [Prefix] to filter]]
-
- Note: -D properties will be applied to the conf used.
- For example:
- -D mapred.output.compress=true
- -D mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec
- -D mapred.output.compression.type=BLOCK
- Additionally, the following SCAN properties can be specified
- to control/limit what is exported..
- -D hbase.mapreduce.scan.column.family=<familyName>
- -D hbase.mapreduce.include.deleted.rows=true
- For performance consider the following properties:
- -Dhbase.client.scanner.caching=100
- -Dmapred.map.tasks.speculative.execution=false
- -Dmapred.reduce.tasks.speculative.execution=false
- For tables with very wide rows consider setting the batch size as below:
- -Dhbase.export.scanner.batch=10
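For example, combining the compression and performance properties listed above with the table and output path from the command earlier (a sketch):
- hbase org.apache.hadoop.hbase.mapreduce.Export \
-   -D mapred.output.compress=true \
-   -D mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec \
-   -D hbase.client.scanner.caching=100 \
-   -D mapred.map.tasks.speculative.execution=false \
-   TESTA /tmp/hbase_export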

2> The MapReduce export job runs
3> Check the exported files on HDFS
For the import side of the walkthrough:
1> Pre-create the table
Pre-create a table named TESTX in HBase with a column family named 0; if the table does not already exist, the import will fail.
create 'TESTX','0'
2> Run the import command
Parameters can be customized with -D; here no extra restrictions are applied.
hbase org.apache.hadoop.hbase.mapreduce.Import TESTX hdfs://cdh01/tmp/hbase_export/
Optional -D configuration properties:
- Usage: Import [options] <tablename> <inputdir>
-
- By default Import will load data directly into HBase. To instead generate
- HFiles of data to prepare for a bulk data load, pass the option:
- -Dimport.bulk.output=/path/for/output
- To apply a generic org.apache.hadoop.hbase.filter.Filter to the input, use
- -Dimport.filter.class=<name of filter class>
- -Dimport.filter.args=<comma separated list of args for filter
- NOTE: The filter will be applied BEFORE doing key renames via the HBASE_IMPORTER_RENAME_CFS property. Futher, filters will only use the Filter#filterRowKey(byte[] buffer, int offset, int length) method to identify whether the current row needs to be ignored completely for processing and Filter#filterKeyValue(KeyValue) method to determine if the KeyValue should be added; Filter.ReturnCode#INCLUDE and #INCLUDE_AND_NEXT_COL will be considered as including the KeyValue.
- For performance consider the following options:
- -Dmapred.map.tasks.speculative.execution=false
- -Dmapred.reduce.tasks.speculative.execution=false
- -Dimport.wal.durability=<Used while writing data to hbase. Allowed values are the supported durability values like SKIP_WAL/ASYNC_WAL/SYNC_WAL/...>
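For example, to generate HFiles for a later bulk load instead of writing directly into HBase (a sketch; the output path is illustrative, and the resulting HFiles would then be loaded with the completebulkload tool):
- hbase org.apache.hadoop.hbase.mapreduce.Import \
-   -Dimport.bulk.output=/tmp/hbase_import_hfiles \
-   TESTX hdfs://cdh01/tmp/hbase_export/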
3> The MapReduce import job runs
4> Verify the imported data in HBase
A third option is to move the data through Hive.
1. Create a Hive external table (hive1) over the source HBase table (hbase1):
- hive>
- CREATE EXTERNAL TABLE china_mainland_acturl (
-   rowkey STRING,
-   act_url STRING
- ) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
- WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,act:url")
- TBLPROPERTIES ("hbase.table.name" = "users:china_mainland");
2. Dump the rows to a local file: hive -e "select * from hive1 where ***" >> data.csv
3. Create a temporary Hive table table_temp on the target side (a sketch of its DDL follows after these steps) and upload data.csv to HDFS: hadoop fs -put localpath table_temp_hdfspath
4. Create a Hive external table (hive2) over the target HBase table (hbase2), in the same way as step 1.
5. Insert the data from table_temp into hive2, which writes it through to the target HBase table:
insert into table hive2 select * from table_temp;
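For step 3, a minimal sketch of the temporary table, assuming the hive -e output is tab-delimited text with the same two columns as the external table (delimiter and column list are assumptions):
- hive> CREATE TABLE table_temp (rowkey STRING, act_url STRING)
-       ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
-       STORED AS TEXTFILE;
The hadoop fs -put in step 3 then places data.csv into this table's warehouse directory.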