Installing Kerberos and Securing a Hadoop Cluster


Installing Kerberos

Prerequisites

You may need:

JDK
byacc.x86_64

ntp (Kerberos is sensitive to clock skew, so keep all the cluster's clocks synchronized)

yum install byacc byaccj
chkconfig ntpd on && service ntpd start

The machine I am using here is the one on which I originally compiled the Hadoop source, so parts of the build environment (the C++ toolchain and the like) were already in place.

That is not a problem: if your build fails, the error messages should tell you exactly which packages are missing.

Download the Source

Download link

tar -zxvf krb5-1.14.tar.gz
cd krb5-1.14
./configure
make
make install

Or Install via YUM

yum install krb5-server.x86_64 krb5-devel.x86_64 -y

Do this on every machine.

Edit the Configuration Files

If you compiled from source:

The configuration file is /etc/krb5.conf, the database and ACL files live under /usr/local/var/krb5kdc, and the daemon binaries are under /usr/local/sbin.

Alternatively, override the KRB5_KDC_PROFILE environment variable to change where the KDC looks for its configuration.

If you installed via YUM:

The configuration file is /etc/krb5.conf, the database and ACL files live under /var/kerberos/krb5kdc/, and the init scripts are under /etc/init.d/.

Start by editing /etc/krb5.conf.


Original file:

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = EXAMPLE.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 EXAMPLE.COM = {
  kdc = kerberos.example.com
  admin_server = kerberos.example.com
 }

[domain_realm]
 .example.com = EXAMPLE.COM
 example.com = EXAMPLE.COM

After the change:

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = HADOOP.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 HADOOP.COM = {
  kdc = hadoop1.example.com
  kdc = hadoop2.example.com
  kdc = hadoop3.example.com
  admin_server = hadoop1.example.com
 }

[domain_realm]
 .example.com = HADOOP.COM
 example.com = HADOOP.COM


Add the ACL File

This file may not exist yet.

    vim /usr/local/var/krb5kdc/kadm5.acl

*/admin@HADOOP.COM *

This line grants full administrative privileges to every principal with an /admin instance, such as root/admin@HADOOP.COM.

Next, edit the KDC configuration (kdc.conf):

[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 HADOOP.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }

Note that the 256-bit AES encryption type has been removed from supported_enctypes; without the JCE unlimited-strength policy files installed in the JDK, Java processes cannot handle AES-256.

Distribute the Files

The Kerberos build must be completed on every machine; once that is done, continue with the steps below.

pscp /etc/krb5.conf /etc
pscp /usr/local/var/krb5kdc/kadm5.acl /usr/local/var/krb5kdc/

The pscp command was set up in the earlier Hadoop installation document; it is essentially a shortcut for scp-ing a file to several hosts at once. A minimal sketch of such a helper follows.
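If you skipped that document, here is a minimal sketch of the helper, assuming a one-hostname-per-line list in /root/hosts.txt (the list's name and location are my assumption, not from the original setup):

#!/bin/bash
# pscp: copy a local file to the same destination directory on every host in the list.
# Usage: pscp <src_file> <dest_dir>
HOSTS_FILE=/root/hosts.txt   # assumed host list, one hostname per line
src=$1
dest=$2
for host in $(cat "$HOSTS_FILE"); do
  scp "$src" "root@${host}:${dest}"
done

With SSH keys distributed beforehand (as in the earlier document), this runs without password prompts.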

Create the Database on the Master

On the master, run kdb5_util create -r HADOOP.COM -s. The -s flag stashes the master key on disk so the KDC can start without prompting for the master password.

Add an Administrator

kadmin.local
addprinc root/admin
(enter the password)
(confirm the password)
quit

Start the Services

On the master, run:

/usr/local/sbin/krb5kdc
/usr/local/sbin/kadmind

Or, for a YUM install:

service krb5kdc start
service kadmin start

There is no need to run these on the other machines.
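For the YUM install you may also want the services to come back up after a reboot; this is my suggestion, not part of the original walkthrough:

# enable the KDC and admin server at boot (CentOS/RHEL init scripts)
chkconfig krb5kdc on
chkconfig kadmin on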

Test

On one of the other machines:

kadmin -p root/admin
(enter the password)

If the prompt comes back as

kadmin:

and lets you start typing commands, the setup works.

Configuring Hadoop for Kerberos

Preparing the Principals

First, open the Kerberos admin tool; to double-check the setup at the same time, I recommend running kadmin from a machine other than the master.

For reference, this is what the finished result looks like:

[root@hadoop3 tmp]# kadmin
Authenticating as principal root/admin@HADOOP.COM with password.
Password for root/admin@HADOOP.COM:
kadmin:  listprincs
HTTP/hadoop1.example.com@HADOOP.COM
HTTP/hadoop2.example.com@HADOOP.COM
HTTP/hadoop3.example.com@HADOOP.COM
K/M@HADOOP.COM
kadmin/admin@HADOOP.COM
kadmin/changepw@HADOOP.COM
kadmin/hadoop1@HADOOP.COM
krbtgt/HADOOP.COM@HADOOP.COM
root/admin@HADOOP.COM
root/hadoop1.example.com@HADOOP.COM
root/hadoop2.example.com@HADOOP.COM
root/hadoop3.example.com@HADOOP.COM

We need to create root and HTTP principals for every host.

The format is:

addprinc -randkey <user>/<hostname>

For example, to add the HTTP principal for the host hadoop1.example.com:

addprinc -randkey HTTP/hadoop1.example.com

A principal created on one machine is visible from every host, so a small loop, sketched below, can create the whole set.
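Creating two principals per host gets repetitive; here is one way to batch it. kadmin's -p and -q flags are standard, but the host and user lists are just this cluster's values:

# Run once, from any machine that can reach the admin server.
for host in hadoop1.example.com hadoop2.example.com hadoop3.example.com; do
  for user in root HTTP; do
    kadmin -p root/admin -q "addprinc -randkey ${user}/${host}"
  done
done

Each kadmin call prompts for the root/admin password; adding -w <password> avoids the prompts, at the cost of leaving the password in your shell history.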

Exporting Keytabs

We can use

ktadd -k /opt/hadoop-2.6.3/keytab/krb5.keytab root/hadoop1.example.com HTTP/hadoop1.example.com

to export a set of keys.

Where:

ktadd is equivalent to xst
-k takes the path of the keytab file to write; the location is entirely up to you, as long as the Hadoop processes can read it
<user>/<hostname> principals can be listed several at a time, separated by spaces

You can confirm what landed in the keytab with klist, as shown below.
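klist can read a keytab directly, which makes it easy to check which principals and encryption types were actually exported:

# -k reads a keytab instead of a ticket cache; -t also prints entry timestamps
klist -kt /opt/hadoop-2.6.3/keytab/krb5.keytab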

Hadoop Configuration

Edit the configuration files.

    core-site.xml:

<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>

<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>

    hdfs-site.xml:

<property>
  <name>dfs.journalnode.keytab.file</name>
  <value>/opt/hadoop-2.6.3/keytab/krb5.keytab</value>
</property>

<property>
  <name>dfs.journalnode.kerberos.principal</name>
  <value>root/_HOST@HADOOP.COM</value>
</property>

<property>
  <name>dfs.journalnode.kerberos.internal.spnego.principal</name>
  <value>HTTP/_HOST@HADOOP.COM</value>
</property>

<property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
</property>

<property>
  <name>dfs.namenode.keytab.file</name>
  <value>/opt/hadoop-2.6.3/keytab/krb5.keytab</value>
</property>

<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>root/_HOST@HADOOP.COM</value>
</property>

<property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/opt/hadoop-2.6.3/keytab/krb5.keytab</value>
</property>

<property>
  <name>dfs.web.authentication.kerberos.principal</name>
  <value>HTTP/_HOST@HADOOP.COM</value>
</property>

<property>
  <name>ignore.secure.ports.for.testing</name>
  <value>true</value>
</property>

<property>
  <name>dfs.datanode.keytab.file</name>
  <value>/opt/hadoop-2.6.3/keytab/krb5.keytab</value>
</property>

<property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>root/_HOST@HADOOP.COM</value>
</property>

<property>
  <name>hadoop.http.staticuser.user</name>
  <value>root</value>
</property>

    mapred-site.xml:

<property>
  <name>mapreduce.jobhistory.keytab</name>
  <value>/opt/hadoop-2.6.3/keytab/krb5.keytab</value>
</property>

<property>
  <name>mapreduce.jobhistory.principal</name>
  <value>root/_HOST@HADOOP.COM</value>
</property>

    yarn-site.xml:

<property>
  <name>yarn.resourcemanager.keytab</name>
  <value>/opt/hadoop-2.6.3/keytab/krb5.keytab</value>
</property>

<property>
  <name>yarn.resourcemanager.principal</name>
  <value>root/_HOST@HADOOP.COM</value>
</property>

<property>
  <name>yarn.nodemanager.keytab</name>
  <value>/opt/hadoop-2.6.3/keytab/krb5.keytab</value>
</property>

<property>
  <name>yarn.nodemanager.principal</name>
  <value>root/_HOST@HADOOP.COM</value>
</property>

Distribute the Configuration Files

pscp core-site.xml $PWD
pscp hdfs-site.xml $PWD
pscp mapred-site.xml $PWD
pscp yarn-site.xml $PWD

Verification

[root@hadoop3 keytab]# kinit -k -t /opt/hadoop-2.6.3/keytab/krb5.keytab root/hadoop3.example.com
[root@hadoop3 keytab]# hdfs dfs -ls /
[root@hadoop3 keytab]#
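As a counter-check (my addition, not in the original), destroy the ticket and confirm that HDFS now rejects the same command:

kdestroy
hdfs dfs -ls /    # should now fail with a Kerberos/GSS authentication error
# re-authenticate before continuing
kinit -k -t /opt/hadoop-2.6.3/keytab/krb5.keytab root/hadoop3.example.com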

Start Hadoop

./start-all.sh

[root@hadoop1 krb5kdc]# jps
15096 QuorumPeerMain
11226 ResourceManager
15538 DFSZKFailoverController
14568 JournalNode
10673 NameNode
14858 NodeManager
14349 DataNode
25030 Jps

All processes are present: success.

ZooKeeper Configuration

Create a ZooKeeper user, or reuse the one created when ZooKeeper was installed.

Here we create a new one:

pssh "useradd zk"
su - zk

After switching to that user, create the keytab file with kadmin.

Note: run kadmin -p root/admin; otherwise you will not be able to log in.

[zk@hadoop1 ~]$ kadmin -p root/admin
Couldn't open log file /var/log/kadmind.log: Permission denied
Authenticating as principal root/admin with password.
Password for root/admin@HADOOP.COM:
kadmin:

Create zk/hadoop1.example.com and export it to /home/zk/zk.keytab:

[zk@hadoop1 ~]$ kadmin -p root/admin
Couldn't open log file /var/log/kadmind.log: Permission denied
Authenticating as principal root/admin with password.
Password for root/admin@HADOOP.COM:
kadmin:  addprinc -randkey zk/hadoop1.example.com
WARNING: no policy specified for zk/hadoop1.example.com@HADOOP.COM; defaulting to no policy
Principal "zk/hadoop1.example.com@HADOOP.COM" created.
kadmin:  ktadd -k /home/zk/zk.keytab zk/hadoop1.example.com
Entry for principal zk/hadoop1.example.com with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/home/zk/zk.keytab.
Entry for principal zk/hadoop1.example.com with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:/home/zk/zk.keytab.
Entry for principal zk/hadoop1.example.com with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:/home/zk/zk.keytab.
Entry for principal zk/hadoop1.example.com with kvno 2, encryption type des-hmac-sha1 added to keytab WRFILE:/home/zk/zk.keytab.
Entry for principal zk/hadoop1.example.com with kvno 2, encryption type des-cbc-md5 added to keytab WRFILE:/home/zk/zk.keytab.

Do the same on every other machine; a script works, or any other method you prefer.

If you are experimenting and want to be careful, do each host by hand; ultimately, though, this should be automated, as in the sketch below.
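Here is one way such automation might look. This is a sketch under my own assumptions: the root/admin password is passed with kadmin's -w flag for non-interactive use, and each keytab is generated centrally and then pushed out with scp:

#!/bin/bash
# Create a zk principal for each host, export its keytab, and ship it over.
ADMIN_PW='your-root-admin-password'   # assumption: supplied inline for the sketch only
for host in hadoop1.example.com hadoop2.example.com hadoop3.example.com; do
  kadmin -p root/admin -w "$ADMIN_PW" -q "addprinc -randkey zk/${host}"
  kadmin -p root/admin -w "$ADMIN_PW" -q "ktadd -k /tmp/zk-${host}.keytab zk/${host}"
  scp "/tmp/zk-${host}.keytab" "zk@${host}:/home/zk/zk.keytab"
  rm -f "/tmp/zk-${host}.keytab"
done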

Edit zoo.cfg and add:

authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
jaasLoginRenew=3600000

Place the corresponding account's keytab file in the configuration directory and create a jaas.conf file with the following content:

Server {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/home/zk/zk.keytab"
    storeKey=true
    useTicketCache=true
    principal="zk/hadoop1.example.com@HADOOP.COM";
};

keyTab must be the real absolute path of the keytab, and principal must be the corresponding principal (user and machine name) that it authenticates.

Add a java.env file to the configuration directory with the following content:

export JVMFLAGS="-Djava.security.auth.login.config=/home/zk/jaas.conf"

Make all of the above changes on every ZooKeeper machine.
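zoo.cfg and java.env are identical on every host, so the pscp helper from earlier can distribute them; jaas.conf is per-host, since each one names its own principal. The install path below is my assumption; substitute your actual ZooKeeper conf directory:

pscp /opt/zookeeper/conf/zoo.cfg /opt/zookeeper/conf/    # assumed install path
pscp /opt/zookeeper/conf/java.env /opt/zookeeper/conf/
# then fix the principal line in each host's jaas.conf by hand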

Start ZooKeeper as usual; if it comes up in secure mode, the log will contain a line like:

2013-11-18 10:23:30,067 ... - successfully logged in.

I suddenly noticed that the timestamps in my ZooKeeper log are an hour earlier than the wall-clock time... I don't know what causes that.

HBase Configuration

Prerequisites

For downloading and compiling HBase, see http://90hadoop.com/2016/03/03/hbase-yuan-ma-bian-yi/

For a highly available HBase installation, see http://www.cnblogs.com/smartloli/p/4513767.html

In the HA setup, HBase will fail to start because ZooKeeper now has Kerberos enabled; the error looks like this:

2016-03-02 17:03:12,805 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
        at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2342)
        at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:233)
        at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
        at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2355)
Caused by: java.io.IOException: Running in secure mode, but config doesn't have a keytab

This is expected at this point.

Configure HBase

    vim /opt/hbase-1.1.3/conf/hbase-site.xml

<property>
  <name>hbase.security.authentication</name>
  <value>kerberos</value>
</property>

<property>
  <name>hbase.security.authorization</name>
  <value>true</value>
</property>

<property>
  <name>hbase.rpc.engine</name>
  <value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
</property>

<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.security.token.TokenProvider</value>
</property>

<property>
  <name>hbase.master.keytab.file</name>
  <value>/opt/hadoop-2.6.3/keytab/krb5.keytab</value>
</property>

<property>
  <name>hbase.master.kerberos.principal</name>
  <value>root/_HOST@HADOOP.COM</value>
</property>

<property>
  <name>hbase.regionserver.keytab.file</name>
  <value>/opt/hadoop-2.6.3/keytab/krb5.keytab</value>
</property>

<property>
  <name>hbase.regionserver.kerberos.principal</name>
  <value>root/_HOST@HADOOP.COM</value>
</property>

Synchronize the Configuration File

pscp /opt/hbase-1.1.3/conf/hbase-site.xml /opt/hbase-1.1.3/conf/

Start

hbase-1.1.3/bin/start-hbase.sh

Verification

[root@hadoop3 bin]# kinit -k -t /opt/hadoop-2.6.3/keytab/krb5.keytab root/hadoop3.example.com
[root@hadoop3 bin]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: root/hadoop3.example.com@HADOOP.COM

Valid starting     Expires            Service principal
03/01/16 15:35:23  03/02/16 15:35:23  krbtgt/HADOOP.COM@HADOOP.COM
        renew until 03/01/16 15:35:23

[root@hadoop3 bin]# ./hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hbase-1.1.3/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-2.6.3/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.1.3, rUnknown, Wed Mar  2 16:46:56 CST 2016

hbase(main):001:0> quit
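To go one step beyond just opening the shell, a quick smoke test (my suggestion; the table and column family names are arbitrary) exercises an authenticated write and read by piping commands into the shell:

echo "create 't_smoke','f1'
put 't_smoke','row1','f1:c1','hello'
scan 't_smoke'
disable 't_smoke'
drop 't_smoke'" | /opt/hbase-1.1.3/bin/hbase shell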
