We often build customer business scenarios on top of big data components (Hadoop, Flink, Kafka, etc.), and security vendors' scans frequently flag these components for unauthenticated-access vulnerabilities. The industry-recommended fix for this class of problem is Kerberos authentication.
Kerberos is a network authentication protocol. In a network where hosts do not trust one another, Kerberos provides a reliable, centralized authentication service so that the machines in the network can securely access each other.
Role | IP address | Hostname | Packages to install
Server | 192.168.199.102 | bigdata-03 | krb5-server krb5-workstation krb5-libs krb5-devel
Client | 192.168.199.104 | bigdata-05 | krb5-workstation krb5-devel
The server and client hosts must be reachable from each other over the network, and each must have a hostname mapping for the other (e.g. in /etc/hosts).
Run rpm -qa | grep krb to check which Kerberos packages are already installed on each server, then install the packages listed in the table above for each role:
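As a sketch of the check-and-install step (assuming a yum-based RHEL/CentOS host; the package lists come from the table above):

```shell
# Check which Kerberos packages are already installed
rpm -qa | grep krb

# On the server (bigdata-03): KDC, admin server, and client tooling
yum install -y krb5-server krb5-workstation krb5-libs krb5-devel

# On the client (bigdata-05): client tooling only
yum install -y krb5-workstation krb5-devel
```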
Edit /var/kerberos/krb5kdc/kdc.conf on the KDC server (bigdata-03):

[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88
[realms]
HADOOP.COM = {
#master_key_type = aes256-cts
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
}
Edit /etc/krb5.conf (the same file is needed on both the server and the client):

# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
dns_lookup_realm = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
rdns = false
pkinit_anchors = FILE:/etc/pki/tls/certs/ca-bundle.crt
default_realm = HADOOP.COM # default realm; must match the realm defined in kdc.conf
#default_ccache_name = KEYRING:persistent:%{uid}
[realms]
HADOOP.COM = {
kdc = bigdata-03 # hostname of the KDC (master) node
admin_server = bigdata-03 # hostname of the admin server (master) node
}
[domain_realm]
.hadoop.com = HADOOP.COM # DNS domain; must match the realm defined in kdc.conf
hadoop.com = HADOOP.COM # DNS domain; must match the realm defined in kdc.conf
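krb5.conf must be identical on the server and the client. A minimal sketch of distributing it, using the hostnames from the table above:

```shell
# Copy the realm configuration from the KDC to the client host
scp /etc/krb5.conf root@bigdata-05:/etc/krb5.conf
```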
Edit the ACL file /var/kerberos/krb5kdc/kadm5.acl so that any principal whose instance is /admin has full administrative privileges:

*/admin@HADOOP.COM *
Create the Kerberos realm database on the KDC (the -s flag stashes the master key so the KDC can start without prompting):

[root@bigdata-03 ~]# kdb5_util create -s -r HADOOP.COM
Loading random data
Initializing database /var/kerberos/krb5kdc/principal for realm HADOOP.COM,
master key name K/M@HADOOP.COM
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
kdb5_util: Cannot open DB2 database /var/kerberos/krb5kdc/principal: File exists while creating database /var/kerberos/krb5kdc/principal

If the create fails with "File exists", a database has already been created. Remove the stale database files, verify the directory, and run the create command again:

[root@bigdata-03 ~]# rm -f /var/kerberos/krb5kdc/principal*
[root@bigdata-03 ~]# ls -a /var/kerberos/krb5kdc/
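After clearing the stale files, the database can be re-created and the KDC daemons started. A sketch assuming systemd, with the standard RHEL unit names krb5kdc and kadmin:

```shell
# Re-create the realm database (prompts for the master password)
kdb5_util create -s -r HADOOP.COM

# Start the KDC and the admin server, and enable them at boot
systemctl enable --now krb5kdc
systemctl enable --now kadmin
```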
Enter the local Kerberos admin shell on the KDC, then list the existing principals:

kadmin.local
listprincs
Create an administrator principal, either non-interactively:

kadmin.local -q "addprinc admin/admin@HADOOP.COM"

or from inside the kadmin.local shell:

addprinc admin/admin@HADOOP.COM
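Service principals are usually created with a random key and exported to a keytab file, so daemons can authenticate without an interactive password. A hedged sketch; the host principal and keytab path below are illustrative, not from the original article:

```shell
# Create a service principal with a random key (no interactive password)
kadmin.local -q "addprinc -randkey host/bigdata-05@HADOOP.COM"

# Export its key to a keytab file (path is an example)
kadmin.local -q "xst -k /etc/security/bigdata-05.keytab host/bigdata-05@HADOOP.COM"
```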
Principal naming convention: account/instance@realm
Example: admin/admin@HADOOP.COM
Here realm is the Kerberos realm name, e.g. HADOOP.COM.
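The account/instance@realm structure can be pulled apart with plain shell parameter expansion; a small illustration using the example principal above:

```shell
# Split a principal of the form account/instance@realm into its parts
principal="admin/admin@HADOOP.COM"

realm="${principal#*@}"              # everything after '@'  -> HADOOP.COM
account_instance="${principal%@*}"   # everything before '@' -> admin/admin
account="${account_instance%%/*}"    # part before the '/'   -> admin
instance="${account_instance#*/}"    # part after the '/'    -> admin

echo "account=$account instance=$instance realm=$realm"
# prints: account=admin instance=admin realm=HADOOP.COM
```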
Common errors when clients connect:

org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
This usually means the client has no valid Kerberos ticket (run kinit first) or its principal/keytab configuration is wrong.

libgssapi_krb5.so.2: cannot open shared object file: No such file or directory
This means the Kerberos client runtime libraries are missing on that host.
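For the missing libgssapi_krb5.so.2 error, the usual fix is to install the krb5 runtime libraries on the affected host (yum assumed):

```shell
# Confirm whether the library is registered with the dynamic linker
ldconfig -p | grep libgssapi_krb5

# Install the krb5 runtime libraries that provide it
yum install -y krb5-libs
```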