You may need:
- JDK
- byacc.x86_64
- ntp

yum install byacc byaccj
chkconfig ntpd on && service ntpd start
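Kerberos is sensitive to clock skew between hosts (MIT krb5 rejects requests that are more than five minutes off by default), which is why ntpd is in the list. A quick sanity check, as a sketch:

```shell
# Print this host's clock as epoch seconds; run the same on the other
# nodes (e.g. over ssh) and compare -- the difference should be
# seconds, not minutes, before you continue.
now=$(date -u +%s)
echo "$now"
```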
This machine was previously used to compile the Hadoop source, so parts of the environment (the C++ toolchain and the like) were already configured.
That's fine: any missing dependencies should be easy to identify from the errors during your own build.
Download link
tar -zxvf krb5-1.14.tar.gz
cd krb5-1.14
./configure
make
make install
Do this on all machines.
Alternatively, change the configuration file location by overriding the KRB5_KDC_PROFILE environment variable.
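For example, a minimal sketch of the override (the path shown is illustrative, not the build's default):

```shell
# KRB5_KDC_PROFILE points the KDC-side tools (krb5kdc, kadmind,
# kdb5_util) at an alternate kdc.conf location.
export KRB5_KDC_PROFILE=/etc/krb5kdc/kdc.conf
echo "$KRB5_KDC_PROFILE"
```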
Original file (/etc/krb5.conf):
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = EXAMPLE.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 EXAMPLE.COM = {
  kdc = kerberos.example.com
  admin_server = kerberos.example.com
 }

[domain_realm]
 .example.com = EXAMPLE.COM
 example.com = EXAMPLE.COM
After modification:
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = HADOOP.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 HADOOP.COM = {
  kdc = hadoop1.example.com
  kdc = hadoop2.example.com
  kdc = hadoop3.example.com
  admin_server = hadoop1.example.com
 }

[domain_realm]
 .example.com = HADOOP.COM
 example.com = HADOOP.COM
This file may not exist yet:
vim /usr/local/var/krb5kdc/kadm5.acl
*/admin@HADOOP.COM *
Note: the 256-bit encryption type was removed from supported_enctypes here.
The Kerberos build must be performed on every machine; once that is done, continue.
pscp /etc/krb5.conf /etc
pscp /usr/local/var/krb5kdc/kadm5.acl /usr/local/var/krb5kdc/
The pscp command was set up in the earlier Hadoop installation document; it is essentially a shortcut for scp-ing a file to multiple hosts.
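As a rough sketch of what pscp amounts to (the host list matches this cluster; the function below is a dry run that only prints the scp commands it would execute):

```shell
HOSTS="hadoop1.example.com hadoop2.example.com hadoop3.example.com"

# Dry run: print the scp command for each host instead of executing it.
pscp_dryrun() {
  src=$1
  dst=$2
  for h in $HOSTS; do
    echo "scp $src root@$h:$dst"
  done
}

pscp_dryrun /etc/krb5.conf /etc
```

Replacing echo with an actual scp call (plus passwordless SSH keys) gives the real behavior.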
On the master, run the command kdb5_util create -r HADOOP.COM -s
On the master, run:

/usr/local/sbin/krb5kdc
/usr/local/sbin/kadmind
or:

service krb5kdc start
service kadmin start
There is no need to run these on the other machines.
On the other machines:

kadmin -p root/admin

and enter the password.
If the prompt comes back as

kadmin:

and lets you start typing, it worked.
First, enter the Kerberos admin console. To verify the setup at the same time, it's recommended to run kadmin on a machine other than the master.
Here is what the result looks like when finished:
[root@hadoop3 tmp]# kadmin
Authenticating as principal root/admin@HADOOP.COM with password.
Password for root/admin@HADOOP.COM:
kadmin:  listprincs
HTTP/hadoop1.example.com@HADOOP.COM
HTTP/hadoop2.example.com@HADOOP.COM
HTTP/hadoop3.example.com@HADOOP.COM
K/M@HADOOP.COM
kadmin/admin@HADOOP.COM
kadmin/changepw@HADOOP.COM
kadmin/hadoop1@HADOOP.COM
krbtgt/HADOOP.COM@HADOOP.COM
root/admin@HADOOP.COM
root/hadoop1.example.com@HADOOP.COM
root/hadoop2.example.com@HADOOP.COM
root/hadoop3.example.com@HADOOP.COM
We need to create root and HTTP principals for every host.

The format is:

addprinc -randkey username/hostname

For example, to add an HTTP principal for the host hadoop1.example.com:
addprinc -randkey HTTP/hadoop1.example.com
A principal created on one machine is visible from any host.
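Since every host needs both principals, the addprinc calls can be scripted. A sketch (dry run: it only prints the kadmin commands, and assumes it would run on the KDC, where kadmin.local needs no password):

```shell
HOSTS="hadoop1.example.com hadoop2.example.com hadoop3.example.com"

# Print one addprinc invocation per (user, host) pair; replace echo
# with the real call once the output looks right.
for h in $HOSTS; do
  for u in root HTTP; do
    echo kadmin.local -q "addprinc -randkey $u/$h"
  done
done
```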
We can export a set of keys with:

ktadd -k /opt/hadoop-2.6.3/keytab/krb5.keytab root/hadoop1.example.com HTTP/hadoop1.example.com
Where:
- ktadd is equivalent to xst
- the -k parameter gives the keytab file path; it is entirely up to you, as long as Hadoop can read it
- username/hostname: several principals may be listed, separated by spaces

core-site.xml:
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
hdfs-site.xml:
<property>
  <name>dfs.journalnode.keytab.file</name>
  <value>/opt/hadoop-2.6.3/keytab/krb5.keytab</value>
</property>
<property>
  <name>dfs.journalnode.kerberos.principal</name>
  <value>root/_HOST@HADOOP.COM</value>
</property>
<property>
  <name>dfs.journalnode.kerberos.internal.spnego.principal</name>
  <value>HTTP/_HOST@HADOOP.COM</value>
</property>
<property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.namenode.keytab.file</name>
  <value>/opt/hadoop-2.6.3/keytab/krb5.keytab</value>
</property>
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>root/_HOST@HADOOP.COM</value>
</property>
<property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/opt/hadoop-2.6.3/keytab/krb5.keytab</value>
</property>
<property>
  <name>dfs.web.authentication.kerberos.principal</name>
  <value>HTTP/_HOST@HADOOP.COM</value>
</property>
<property>
  <name>ignore.secure.ports.for.testing</name>
  <value>true</value>
</property>
<property>
  <name>dfs.datanode.keytab.file</name>
  <value>/opt/hadoop-2.6.3/keytab/krb5.keytab</value>
</property>
<property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>root/_HOST@HADOOP.COM</value>
</property>
<property>
  <name>hadoop.http.staticuser.user</name>
  <value>root</value>
</property>
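The _HOST placeholder in these principals is expanded by Hadoop at startup into each node's own fully-qualified hostname, which is what lets a single config file serve the whole cluster. Roughly:

```shell
# Illustrative: the effective principal on hadoop1 after substitution.
pattern="root/_HOST@HADOOP.COM"
effective=$(echo "$pattern" | sed "s/_HOST/hadoop1.example.com/")
echo "$effective"   # root/hadoop1.example.com@HADOOP.COM
```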
mapred-site.xml:
<property>
  <name>mapreduce.jobhistory.keytab</name>
  <value>/opt/hadoop-2.6.3/keytab/krb5.keytab</value>
</property>
<property>
  <name>mapreduce.jobhistory.principal</name>
  <value>root/_HOST@HADOOP.COM</value>
</property>
yarn-site.xml:
<property>
  <name>yarn.resourcemanager.keytab</name>
  <value>/opt/hadoop-2.6.3/keytab/krb5.keytab</value>
</property>
<property>
  <name>yarn.resourcemanager.principal</name>
  <value>root/_HOST@HADOOP.COM</value>
</property>
<property>
  <name>yarn.nodemanager.keytab</name>
  <value>/opt/hadoop-2.6.3/keytab/krb5.keytab</value>
</property>
<property>
  <name>yarn.nodemanager.principal</name>
  <value>root/_HOST@HADOOP.COM</value>
</property>
[root@hadoop3 keytab]# kinit -k -t /opt/hadoop-2.6.3/keytab/krb5.keytab root/hadoop3.example.com
[root@hadoop3 keytab]# hdfs dfs -ls /
[root@hadoop3 keytab]#
All the processes are up; success.
Create a zookeeper user, or use the one created when Zookeeper was installed.
Here we create a new one:
pssh "useradd zk"
su - zk
After switching to that user, create the keytab file with kadmin.
Note: use kadmin -p root/admin, otherwise you will not be able to log in.
[zk@hadoop1 ~]$ kadmin -p root/admin
Couldn't open log file /var/log/kadmind.log: Permission denied
Authenticating as principal root/admin with password.
Password for root/admin@HADOOP.COM:
kadmin:
Create zk/hadoop1.example.com and export it to /home/zk/zk.keytab:
[zk@hadoop1 ~]$ kadmin -p root/admin
Couldn't open log file /var/log/kadmind.log: Permission denied
Authenticating as principal root/admin with password.
Password for root/admin@HADOOP.COM:
kadmin:  addprinc -randkey zk/hadoop1.example.com
WARNING: no policy specified for zk/hadoop1.example.com@HADOOP.COM; defaulting to no policy
Principal "zk/hadoop1.example.com@HADOOP.COM" created.
kadmin:  ktadd -k /home/zk/zk.keytab zk/hadoop1.example.com
Entry for principal zk/hadoop1.example.com with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/home/zk/zk.keytab.
Entry for principal zk/hadoop1.example.com with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:/home/zk/zk.keytab.
Entry for principal zk/hadoop1.example.com with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:/home/zk/zk.keytab.
Entry for principal zk/hadoop1.example.com with kvno 2, encryption type des-hmac-sha1 added to keytab WRFILE:/home/zk/zk.keytab.
Entry for principal zk/hadoop1.example.com with kvno 2, encryption type des-cbc-md5 added to keytab WRFILE:/home/zk/zk.keytab.
Do the same on the other machines; you can use a script, or any other method.
If you are experimenting and want to be safe, do it by hand on each host, but ultimately this should be automated.
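One possible automation sketch (a dry run that only prints the per-host kadmin queries; note that the ktadd line would have to run on the host where the keytab should live, and kadmin will still prompt for the admin password unless you supply it another way):

```shell
HOSTS="hadoop1.example.com hadoop2.example.com hadoop3.example.com"

# Dry run: print, for each host, the two kadmin queries to run there.
for h in $HOSTS; do
  echo "kadmin -p root/admin -q 'addprinc -randkey zk/$h'"
  echo "kadmin -p root/admin -q 'ktadd -k /home/zk/zk.keytab zk/$h'"
done
```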
Modify zoo.cfg to add:
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
jaasLoginRenew=3600000
在配置目录中添加对应账户的keytab文件且创建jaas.conf配置文件,内容如下:
Server {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/home/zk/zk.keytab"
  storeKey=true
  useTicketCache=true
  principal="zk/hadoop1.example.com@HADOOP.COM";
};
Here keyTab should be the real absolute path of the keytab, and principal the corresponding user and machine name.
Add a java.env file to the configuration directory with the following content:
export JVMFLAGS="-Djava.security.auth.login.config=/home/zk/jaas.conf"
Apply the above changes on every Zookeeper machine.
Startup is the same as usual; if it starts successfully in secure mode, you will see a log line like:
2013-11-18 10:23:30,067 ... - successfully logged in.
I suddenly noticed that the timestamps in my Zookeeper logs are an hour earlier than the displayed time... I don't know what causes this.
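An hour's offset usually points at a timezone or DST mismatch between the system clock and the JVM. A place to start checking (illustrative; the JVMFLAGS workaround is an assumption, not something verified here):

```shell
# Compare the system's idea of local time and its UTC offset; if the JVM
# disagrees with this, the log timestamps will drift by whole hours.
date
date +%z    # numeric UTC offset, e.g. +0800

# Pinning the JVM's timezone in java.env is one possible workaround
# (the zone name below is illustrative):
# export JVMFLAGS="$JVMFLAGS -Duser.timezone=Asia/Shanghai"
```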
For downloading and building HBase, see http://90hadoop.com/2016/03/03/hbase-yuan-ma-bian-yi/
For installing HBase in high-availability mode, see http://www.cnblogs.com/smartloli/p/4513767.html
In the HA setup, HBase will fail to start because ZooKeeper already has Kerberos enabled, with an error like:
2016-03-02 17:03:12,805 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
    at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2342)
    at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:233)
    at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
    at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2355)
Caused by: java.io.IOException: Running in secure mode, but config doesn't have a keytab
This is expected.
vim /opt/hbase-1.1.3/conf/hbase-site.xml
<property>
  <name>hbase.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hbase.security.authorization</name>
  <value>true</value>
</property>
<property>
  <name>hbase.rpc.engine</name>
  <value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
</property>
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.security.token.TokenProvider</value>
</property>
<property>
  <name>hbase.master.keytab.file</name>
  <value>/opt/hadoop-2.6.3/keytab/krb5.keytab</value>
</property>
<property>
  <name>hbase.master.kerberos.principal</name>
  <value>root/_HOST@HADOOP.COM</value>
</property>
<property>
  <name>hbase.regionserver.keytab.file</name>
  <value>/opt/hadoop-2.6.3/keytab/krb5.keytab</value>
</property>
<property>
  <name>hbase.regionserver.kerberos.principal</name>
  <value>root/_HOST@HADOOP.COM</value>
</property>
hbase-1.1.3/bin/start-hbase.sh