
HBase Pseudo-Distributed Installation: Fixing a Startup With No HQuorumPeer Process

Environment: CentOS 6.0 + JDK 1.6.0_29 + Hadoop 1.0.0 + HBase 0.90.4
Prerequisites: CentOS 6.0, JDK 1.6.0_29, and Hadoop 1.0.0 are already installed.

1. Download hbase-0.90.4.tar.gz from the official site and extract it to a suitable directory (e.g. /opt):

  cd /opt
  tar zxvf hbase-0.90.4.tar.gz
  chown -R hadoop:hadoop /opt/hbase-0.90.4

2. Set environment variables:

  vim ~/.bashrc
  export HBASE_HOME=/opt/hbase-0.90.4  # adjust to your HBase install directory
  PATH=$PATH:$HBASE_HOME/bin
3. Configure HBase:

In $HBASE_HOME/conf, set JAVA_HOME in hbase-env.sh according to your JDK installation, for example:

  # The java implementation to use. Java 1.6 required.
  export JAVA_HOME=/usr/local/jdk/jdk1.6.0_29

In $HBASE_HOME/conf, make sure the host and port of hbase.rootdir in hbase-site.xml match the host and port of fs.default.name in $HADOOP_HOME/conf/core-site.xml, then add the following:

  <configuration>
    <property>
      <name>hbase.rootdir</name>
      <value>hdfs://localhost:9000/hbase</value>
    </property>
    <property>
      <name>hbase.cluster.distributed</name>
      <value>true</value>
    </property>
    <property>
      <name>hbase.master</name>
      <value>localhost:60000</value>
    </property>
    <property>
      <name>hbase.zookeeper.quorum</name>
      <value>localhost</value>
    </property>
  </configuration>
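A mismatch between hbase.rootdir and fs.default.name is a common source of startup failures, so the consistency requirement above is worth checking mechanically. A minimal sketch in Python (the two XML snippets are inlined for illustration; in practice you would read the real hbase-site.xml and core-site.xml files):

```python
# Sketch: verify that hbase.rootdir points at the same HDFS authority
# (scheme + host:port) as fs.default.name. The XML is inlined here for
# illustration; point the parser at the real config files in practice.
import xml.etree.ElementTree as ET
from urllib.parse import urlparse

CORE_SITE = """
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
"""

HBASE_SITE = """
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
</configuration>
"""

def get_property(xml_text, name):
    """Return the <value> of the named Hadoop-style <property>, or None."""
    for prop in ET.fromstring(xml_text).iter("property"):
        if prop.findtext("name") == name:
            return prop.findtext("value")
    return None

fs_uri = urlparse(get_property(CORE_SITE, "fs.default.name"))
hbase_uri = urlparse(get_property(HBASE_SITE, "hbase.rootdir"))

# Both URIs must share the same scheme and host:port authority.
assert (fs_uri.scheme, fs_uri.netloc) == (hbase_uri.scheme, hbase_uri.netloc), \
    "hbase.rootdir does not match fs.default.name"
print("hbase.rootdir authority matches fs.default.name:", hbase_uri.netloc)
```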


4. Start Hadoop first, then start HBase:

  $ start-all.sh    # start Hadoop
  $ jps             # confirm NameNode, DataNode, SecondaryNameNode, JobTracker, and TaskTracker are all running
  31557 DataNode
  31432 NameNode
  31902 TaskTracker
  31777 JobTracker
  689 Jps
  31683 SecondaryNameNode
  $ start-hbase.sh  # start HBase only after Hadoop is fully up
  $ jps             # confirm HQuorumPeer, HMaster, and HRegionServer are all running
  31557 DataNode
  806 HQuorumPeer
  31432 NameNode
  853 HMaster
  31902 TaskTracker
  950 HRegionServer
  1110 Jps
  31777 JobTracker
  31683 SecondaryNameNode
  $ hbase           # list the available hbase commands
  Usage: hbase <command>
  where <command> is one of:
    shell            run the HBase shell
    zkcli            run the ZooKeeper shell
    master           run an HBase HMaster node
    regionserver     run an HBase HRegionServer node
    zookeeper        run a Zookeeper server
    rest             run an HBase REST server
    thrift           run an HBase Thrift server
    avro             run an HBase Avro server
    migrate          upgrade an hbase.rootdir
    hbck             run the hbase 'fsck' tool
    classpath        dump hbase CLASSPATH
  or
    CLASSNAME        run the class named CLASSNAME
  Most commands print help when invoked w/o parameters.
  $ hbase shell     # start the HBase shell
  HBase Shell; enter 'help<RETURN>' for list of supported commands.
  Type "exit<RETURN>" to leave the HBase Shell
  Version 0.90.4, r1150278, Sun Jul 24 15:53:29 PDT 2011
  hbase(main):001:0>


HBase startup may fail because of a Hadoop version mismatch (I hit this running HBase 0.90.4 on Hadoop 0.20.203.0; I did not test on Hadoop 1.0.0 without the fix and applied the following steps directly). Copy hadoop-core-1.0.0.jar from $HADOOP_HOME and commons-configuration-1.6.jar from $HADOOP_HOME/lib into $HBASE_HOME/lib, and delete the bundled $HBASE_HOME/lib/hadoop-core-0.20-append-r1056497.jar to avoid version conflicts and incompatibilities.
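The jar swap above amounts to two copies and one delete. A minimal sketch in Python, demonstrated on throwaway directories with empty placeholder files since the real $HADOOP_HOME and $HBASE_HOME paths depend on your installation:

```python
# Sketch of the jar swap, using throwaway directories and empty placeholder
# files in place of the real $HADOOP_HOME and $HBASE_HOME trees.
import shutil
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
hadoop_home = root / "hadoop"          # stands in for $HADOOP_HOME
hbase_lib = root / "hbase" / "lib"     # stands in for $HBASE_HOME/lib
(hadoop_home / "lib").mkdir(parents=True)
hbase_lib.mkdir(parents=True)

# Placeholder jars standing in for the real files.
(hadoop_home / "hadoop-core-1.0.0.jar").touch()
(hadoop_home / "lib" / "commons-configuration-1.6.jar").touch()
(hbase_lib / "hadoop-core-0.20-append-r1056497.jar").touch()

# Copy the Hadoop jars that HBase must link against...
shutil.copy2(hadoop_home / "hadoop-core-1.0.0.jar", hbase_lib)
shutil.copy2(hadoop_home / "lib" / "commons-configuration-1.6.jar", hbase_lib)
# ...and remove the bundled, incompatible hadoop-core jar.
(hbase_lib / "hadoop-core-0.20-append-r1056497.jar").unlink()

print(sorted(p.name for p in hbase_lib.iterdir()))
```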

5. Exercise the HBase shell:

  hbase(main):001:0> create 'test','data'   # create a table named 'test' with one column family, 'data'
  0 row(s) in 2.0960 seconds
  hbase(main):002:0> list                   # list all user-space tables to verify the table was created
  TABLE
  test
  1 row(s) in 0.0220 seconds
  # insert three cells into different rows and columns of the 'data' column family
  hbase(main):003:0> put 'test','row1','data:1','value1'
  0 row(s) in 0.2970 seconds
  hbase(main):004:0> put 'test','row2','data:2','value2'
  0 row(s) in 0.0120 seconds
  hbase(main):005:0> put 'test','row3','data:3','value3'
  0 row(s) in 0.0180 seconds
  hbase(main):006:0> scan 'test'            # verify the inserted data
  ROW                 COLUMN+CELL
  row1                column=data:1, timestamp=1330923873719, value=value1
  row2                column=data:2, timestamp=1330923891483, value=value2
  row3                column=data:3, timestamp=1330923902702, value=value3
  3 row(s) in 0.0590 seconds
  hbase(main):007:0> disable 'test'         # disable table 'test'
  0 row(s) in 2.0610 seconds
  hbase(main):008:0> drop 'test'            # drop table 'test'
  0 row(s) in 1.2120 seconds
  hbase(main):009:0> list                   # confirm table 'test' was dropped
  TABLE
  0 row(s) in 0.0180 seconds
  hbase(main):010:0> quit                   # exit the HBase shell
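Conceptually, the put/scan exercise above works against HBase's data model: a sorted map from row key to {column qualifier: value}, returned in lexicographic row-key order on scan. A minimal in-memory sketch of those semantics in plain Python (not an HBase client, just an illustration):

```python
# In-memory illustration of the HBase data model exercised above:
# a sorted map of row key -> {column qualifier -> value}. Not an HBase
# client; only a sketch of the put/scan semantics.
table = {}  # row key -> {column -> value}

def put(row, column, value):
    """Insert one cell, as 'put <table>,<row>,<column>,<value>' does."""
    table.setdefault(row, {})[column] = value

def scan():
    """Yield all cells; HBase scans in lexicographic row-key order."""
    for row in sorted(table):
        for column in sorted(table[row]):
            yield row, column, table[row][column]

put("row1", "data:1", "value1")
put("row2", "data:2", "value2")
put("row3", "data:3", "value3")

for row, column, value in scan():
    print(f"{row}  column={column}, value={value}")
```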
6. Stop the HBase instance:

  $ stop-hbase.sh
  stopping hbase......
  localhost: stopping zookeeper.
7. Inspect HDFS; a new /hbase directory now exists under the root:

  $ hadoop fs -ls /
  Found 4 items
  drwxr-xr-x - hadoop supergroup 0 2012-03-05 13:05 /hbase   # created by HBase
  drwxr-xr-x - hadoop supergroup 0 2012-02-24 17:55 /home
  drwxr-xr-x - hadoop supergroup 0 2012-03-04 20:44 /tmp
  drwxr-xr-x - hadoop supergroup 0 2012-03-04 20:47 /user



