
Oracle: Handling a Cluster Eviction Caused by Private Network Problems


A customer's database runs in a RAC architecture with no workload separation between the instances. Excessive private-interconnect traffic caused one of the RAC nodes to go down, and the clusterware on the failed node then refused to start.

After taking over the case and analyzing it, we found that the clusterware could not start because HAIP failed to start. traceroute across the interconnect showed roughly 50% packet loss while ping was normal, so hardware problems were tentatively ruled out. After researching the issue and tuning the relevant network kernel parameters, the clusterware was brought up successfully.

Environment:

  • OS: Red Hat 7
  • DB: Oracle 12.2 RAC, non-CDB; OSWatcher not deployed
1. Node xxxxx1 went down because of abnormal private-network communication
2021-12-13T16:12:32.211473+08:00
LMON (ospid: 170442) drops the IMR request from LMSK (ospid: 170520) because IMR is in progress and inst 2 is marked bad.
2021-12-13T16:12:32.211526+08:00
Please check USER trace file for more detail.
2021-12-13T16:12:32.211809+08:00
LMON (ospid: 170442) drops the IMR request from LMS6 (ospid: 170465) because IMR is in progress and inst 2 is marked bad.
2021-12-13T16:12:32.212013+08:00
USER (ospid: 170500) issues an IMR to resolve the situation
Please check USER trace file for more detail.
2021-12-13T16:12:32.212419+08:00
LMON (ospid: 170442) drops the IMR request from LMSF (ospid: 170500) because IMR is in progress and inst 2 is marked bad.
2021-12-13T16:12:32.214587+08:00
USER (ospid: 170539) issues an IMR to resolve the situation
Please check USER trace file for more detail.
2021-12-13T16:12:32.214929+08:00
LMON (ospid: 170442) drops the IMR request from LMSP (ospid: 170539) because IMR is in progress and inst 2 is marked bad.
2021-12-13T16:12:32.215318+08:00
USER (ospid: 170456) issues an IMR to resolve the situation
Please check USER trace file for more detail.
2021-12-13T16:12:32.215603+08:00
LMON (ospid: 170442) drops the IMR request from LMS4 (ospid: 170456) because IMR is in progress and inst 2 is marked bad.
Detected an inconsistent instance membership by instance 2
Errors in file /u01/app/oracle/diag/rdbms/xxxxx/xxxxx1/trace/xxxxx1_lmon_170442.trc (incident=819377):
ORA-29740: evicted by instance number 2, group incarnation 6
Incident details in: /u01/app/oracle/diag/rdbms/xxxxx/xxxxx1/incident/incdir_819377/xxxxx1_lmon_170442_i819377.trc
2021-12-13T16:12:33.213098+08:00
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
2021-12-13T16:12:33.213205+08:00
Errors in file /u01/app/oracle/diag/rdbms/xxxxx/xxxxx1/trace/xxxxx1_lmon_170442.trc:
ORA-29740: evicted by instance number 2, group incarnation 6
Errors in file /u01/app/oracle/diag/rdbms/xxxxx/xxxxx1/trace/xxxxx1_lmon_170442.trc (incident=819378):
ORA-29740 [] [] [] [] [] [] [] [] [] [] [] []
Incident details in: /u01/app/oracle/diag/rdbms/xxxxx/xxxxx1/incident/incdir_819378/xxxxx1_lmon_170442_i819378.trc
2021-12-13T16:12:33.423825+08:00
USER (ospid: 330352): terminating the instance due to error 481
2021-12-13T16:12:44.602060+08:00
Instance terminated by USER, pid = 330352
2021-12-14T00:02:47.101462+08:00
Starting ORACLE instance (normal) (OS id: 417848)
2021-12-14T00:02:47.109132+08:00
CLI notifier numLatches:131 maxDescs:21296
2. The clusterware status then also became abnormal
2021-12-13 16:12:33.945 [ORAAGENT(170290)]CRS-5011: Check of resource "xxxxx" failed: details at "(:CLSN00007:)" in "/u01/app/grid/diag/crs/xxxxx01/crs/trace/crsd_oraagent_oracle.trc"

2021-12-13 16:16:43.717 [ORAROOTAGENT(5870)]CRS-5818: Aborted command check for resource ora.crsd. Details at (:CRSAGF00113:) {0:5:3} in /u01/app/grid/diag/crs/xxxxx01/crs/trace/ohasd_orarootagent_root.trc.
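At this point, a quick way to see how far the stack has degraded is to query the clusterware directly. The commands below are standard clusterware tools shown as an illustration; they are not part of the original troubleshooting record:

# Overall health of the local clusterware stack
crsctl check crs
# Per-resource status across the cluster
crsctl stat res -t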
3. The clusterware was restarted, but startup failed because HAIP could not start
alert.log:
2021-12-13 20:18:59.139 [OHASD(188988)]CRS-8500: Oracle Clusterware OHASD process is starting with operating system process ID 188988
2021-12-13 20:18:59.141 [OHASD(188988)]CRS-0714: Oracle Clusterware Release 12.2.0.1.0.
2021-12-13 20:18:59.154 [OHASD(188988)]CRS-2112: The OLR service started on node xxxxx01.
2021-12-13 20:18:59.162 [OHASD(188988)]CRS-8017: location: /etc/oracle/lastgasp has 2 reboot advisory log files, 0 were announced and 0 errors occurred
2021-12-13 20:18:59.162 [OHASD(188988)]CRS-1301: Oracle High Availability Service started on node xxxxx01.
2021-12-13 20:18:59.288 [ORAAGENT(189092)]CRS-8500: Oracle Clusterware ORAAGENT process is starting with operating system process ID 189092
2021-12-13 20:18:59.310 [CSSDAGENT(189114)]CRS-8500: Oracle Clusterware CSSDAGENT process is starting with operating system process ID 189114
2021-12-13 20:18:59.317 [CSSDMONITOR(189121)]CRS-8500: Oracle Clusterware CSSDMONITOR process is starting with operating system process ID 189121
2021-12-13 20:18:59.322 [ORAROOTAGENT(189103)]CRS-8500: Oracle Clusterware ORAROOTAGENT process is starting with operating system process ID 189103
2021-12-13 20:18:59.556 [ORAAGENT(189163)]CRS-8500: Oracle Clusterware ORAAGENT process is starting with operating system process ID 189163
2021-12-13 20:18:59.602 [MDNSD(189183)]CRS-8500: Oracle Clusterware MDNSD process is starting with operating system process ID 189183
2021-12-13 20:18:59.605 [EVMD(189184)]CRS-8500: Oracle Clusterware EVMD process is starting with operating system process ID 189184
2021-12-13 20:19:00.641 [GPNPD(189222)]CRS-8500: Oracle Clusterware GPNPD process is starting with operating system process ID 189222
2021-12-13 20:19:01.638 [GPNPD(189222)]CRS-2328: GPNPD started on node xxxxx01.
2021-12-13 20:19:01.654 [GIPCD(189284)]CRS-8500: Oracle Clusterware GIPCD process is starting with operating system process ID 189284
2021-12-13 20:19:15.462 [CSSDMONITOR(189500)]CRS-8500: Oracle Clusterware CSSDMONITOR process is starting with operating system process ID 189500
2021-12-13 20:19:15.633 [CSSDAGENT(189591)]CRS-8500: Oracle Clusterware CSSDAGENT process is starting with operating system process ID 189591
2021-12-13 20:19:16.805 [OCSSD(189606)]CRS-8500: Oracle Clusterware OCSSD process is starting with operating system process ID 189606
2021-12-13 20:19:17.834 [OCSSD(189606)]CRS-1713: CSSD daemon is started in hub mode
2021-12-13 20:19:18.936 [OCSSD(189606)]CRS-1707: Lease acquisition for node xxxxx01 number 1 completed
2021-12-13 20:19:20.025 [OCSSD(189606)]CRS-1605: CSSD voting file is online: /dev/emcpowerp; details in /u01/app/grid/diag/crs/xxxxx01/crs/trace/ocssd.trc.
2021-12-13 20:19:20.029 [OCSSD(189606)]CRS-1605: CSSD voting file is online: /dev/emcpowerq; details in /u01/app/grid/diag/crs/xxxxx01/crs/trace/ocssd.trc.
2021-12-13 20:19:20.033 [OCSSD(189606)]CRS-1605: CSSD voting file is online: /dev/emcpowerr; details in /u01/app/grid/diag/crs/xxxxx01/crs/trace/ocssd.trc.
2021-12-13 20:23:59.366 [ORAROOTAGENT(189103)]CRS-5818: Aborted command check for resource ora.storage. Details at (:CRSAGF00113:) {0:0:2} in /u01/app/grid/diag/crs/xxxxx01/crs/trace/ohasd_orarootagent_root.trc.
2021-12-13 20:25:12.427 [ORAROOTAGENT(195387)]CRS-8500: Oracle Clusterware ORAROOTAGENT process is starting with operating system process ID 195387
2021-12-13 20:29:12.450 [ORAROOTAGENT(195387)]CRS-5818: Aborted command check for resource ora.storage. Details at (:CRSAGF00113:) {0:8:2} in /u01/app/grid/diag/crs/xxxxx01/crs/trace/ohasd_orarootagent_root.trc.
2021-12-13 20:29:15.772 [CSSDAGENT(189591)]CRS-5818: Aborted command start for resource ora.cssd. Details at (:CRSAGF00113:) {0:5:3} in /u01/app/grid/diag/crs/xxxxx01/crs/trace/ohasd_cssdagent_root.trc.
2021-12-13 20:29:16.065 [OHASD(188988)]CRS-2757: Command Start timed out waiting for response from the resource ora.cssd. Details at (:CRSPE00221:) {0:5:3} in /u01/app/grid/diag/crs/xxxxx01/crs/trace/ohasd.trc.
2021-12-13 20:29:16.772 [OCSSD(189606)]CRS-1656: The CSS daemon is terminating due to a fatal error; Details at (:CSSSC00012:) in /u01/app/grid/diag/crs/xxxxx01/crs/trace/ocssd.trc
2021-12-13 20:29:16.773 [OCSSD(189606)]CRS-1603: CSSD on node xxxxx01 has been shut down.
2021-12-13 20:29:21.773 [OCSSD(189606)]CRS-8503: Oracle Clusterware process OCSSD with operating system process ID 189606 experienced fatal signal or exception code 6.
2021-12-13T20:29:21.777920+08:00
Errors in file /u01/app/grid/diag/crs/xxxxx01/crs/trace/ocssd.trc (incident=1):
CRS-8503 [] [] [] [] [] [] [] [] [] [] [] []
Incident details in: /u01/app/grid/diag/crs/xxxxx01/crs/incident/incdir_1/ocssd_i1.trc
###################################################
ocssd.log:
2021-12-13 20:19:51.063 : CSSD:1538770688: clssnmvDHBValidateNCopy: node 2, xxxxx02, has a disk HB, but no network HB, DHB has rcfg 460477135, wrtcnt, 128536816, LATS 3884953830, lastSeqNo 128536813, uniqueness 1565321051, timestamp 1607861990/3882768200
2021-12-13 20:19:51.063 : CSSD:1530885888: clssscSelect: gipcwait returned with status gipcretPosted (17)
2021-12-13 20:19:51.064 :GIPCHDEM:3374835456: gipchaDaemonProcessClientReq: processing req 0x7f4c28038cf0 type gipchaClientReqTypePublish (1)
2021-12-13 20:19:51.064 : CSSD:3396663040: clssscWaitOnEventValue: after CmInfo State val 3, eval 1 waited 1000 with cvtimewait status 4294967186
2021-12-13 20:19:51.064 :GIPCGMOD:3376412416: gipcmodGipcCallbackEndpClosed: [gipc] Endpoint close for endp 0x7f4c280337d0 [00000000000004b8] { gipcEndpoint : localAddr (dying), remoteAddr (dying), numPend 0, numReady 1, numDone 0, numDead 0, numTransfer 0, objFlags 0x2, pidPeer 0, readyRef 0x1cdefd0, ready 1, wobj 0x7f4c28035d60, sendp (nil) status 13flags 0x2e0b860a, flags-2 0x0, usrFlags 0x0 }
2021-12-13 20:19:51.064 :GIPCHDEM:3374835456: gipchaDaemonProcessClientReq: processing req 0x7f4c70097550 type gipchaClientReqTypeDeleteName (12)
2021-12-13 20:19:51.064 : CSSD:1530885888: clssscConnect: endp 0x83e - cookie 0x1d013e0 - addr gipcha://xxxxx02:nm2_xxxxx-cluster
2021-12-13 20:19:51.064 : CSSD:1530885888: clssnmRetryConnections: Probing node xxxxx02 (2), probendp(0x83e)
2021-12-13 20:19:51.064 :GIPCHTHR:3376412416: gipchaWorkerProcessClientConnect: starting resolve from connect for host:xxxxx02, port:nm2_xxxxx-cluster, cookie:0x7f4c28038ed0
2021-12-13 20:19:51.064 :GIPCHDEM:3374835456: gipchaDaemonProcessClientReq: processing req 0x7f4c7009a2e0 type gipchaClientReqTypeResolve (4)
2021-12-13 20:19:51.064 : CSSD:3359094528: clssnmvDHBValidateNCopy: node 2, xxxxx02, has a disk HB, but no network HB, DHB has rcfg 460477135, wrtcnt, 128536817, LATS 3884953830, lastSeqNo 128536814, uniqueness 1565321051, timestamp 1607861990/3882768350
2021-12-13 20:19:51.899 : CSSD:3410851584: clsssc_CLSFAInit_CB: System not ready for CLSFA initialization
2021-12-13 20:19:52.064 : CSSD:3396663040: clssscWaitOnEventValue: after CmInfo State val 3, eval 1 waited 1000 with cvtimewait status 4294967186
2021-12-13 20:19:52.064 : CSSD:1538770688: clssnmvDHBValidateNCopy: node 2, xxxxx02, has a disk HB, but no network HB, DHB has rcfg 460477135, wrtcnt, 128536819, LATS 3884954830, lastSeqNo 128536816, uniqueness 1565321051, timestamp 1607861991/3882769200
2021-12-13 20:19:52.065 : CSSD:3359094528: clssnmvDHBValidateNCopy: node 2, xxxxx02, has a disk HB, but no network HB, DHB has rcfg 460477135, wrtcnt, 128536820, LATS 3884954830, lastSeqNo 128536817, uniqueness 1565321051, timestamp 1607861991/3882769360
2021-12-13 20:19:52.900 : CSSD:3410851584: clsssc_CLSFAInit_CB: System not ready for CLSFA initialization
2021-12-13 20:19:53.064 : CSSD:3396663040: clssscWaitOnEventValue: after CmInfo State val 3, eval 1 waited 1000 with cvtimewait status 4294967186
2021-12-13 20:19:53.066 : CSSD:1538770688: clssnmvDHBValidateNCopy: node 2, xxxxx02, has a disk HB, but no network HB, DHB has rcfg 460477135, wrtcnt, 128536822, LATS 3884955830, lastSeqNo 128536819, uniqueness 1565321051, timestamp 1607861992/3882770200
2021-12-13 20:19:53.068 : CSSD:3359094528: clssnmvDHBValidateNCopy: node 2, xxxxx02, has a disk HB, but no network HB, DHB has rcfg 460477135, wrtcnt, 128536823, LATS 3884955830, lastSeqNo 128536820, uniqueness 1565321051, timestamp 1607861992/3882770360
2021-12-13 20:19:53.902 : CSSD:3410851584: clsssc_CLSFAInit_CB: System not ready for CLSFA initialization
2021-12-13 20:19:54.064 : CSSD:3396663040: clssscWaitOnEventValue: after CmInfo State val 3, eval 1 waited 1000 with cvtimewait status 4294967186
2021-12-13 20:19:54.067 : CSSD:1538770688: clssnmvDHBValidateNCopy: node 2, xxxxx02, has a disk HB, but no network HB, DHB has rcfg 460477135, wrtcnt, 128536825, LATS 3884956830, lastSeqNo 128536822, uniqueness 1565321051, timestamp 1607861993/3882771200
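The ocssd.trc entries above show that node 2 has a disk heartbeat but no network heartbeat, and the lower stack hangs before HAIP comes online. A hedged way to confirm which init-level resources are stuck (standard clusterware commands, shown here only as an illustration):

# Status of the OHASD-managed lower-stack resources, including ora.cluster_interconnect.haip
crsctl stat res -t -init
# HAIP startup details are logged by the root agent:
#   /u01/app/grid/diag/crs/xxxxx01/crs/trace/ohasd_orarootagent_root.trc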
4. traceroute over the private interconnect shows packet loss, but ping is normal
[root@xxxxx01 ~]# traceroute -r xxx.xx.11.37
traceroute to xxx.xx.11.37 (xxx.xx.11.37), 30 hops max, 60 byte packets
1 xxxxx02-priv (xxx.xx.11.37) 0.112 ms  0.212 ms  0.206 ms
[root@xxxxx01 ~]# traceroute -r xxx.xx.11.37
traceroute to xxx.xx.11.37 (xxx.xx.11.37), 30 hops max, 60 byte packets
1 xxxxx02-priv (xxx.xx.11.37) 0.113 ms  0.216 ms *
[root@xxxxx01 ~]# traceroute -r xxx.xx.11.37
traceroute to xxx.xx.11.37 (xxx.xx.11.37), 30 hops max, 60 byte packets
1 xxxxx02-priv (xxx.xx.11.37) 0.121 ms  0.087 ms  0.197 ms
[root@xxxxx01 ~]# traceroute -r xxx.xx.11.37
traceroute to xxx.xx.11.37 (xxx.xx.11.37), 30 hops max, 60 byte packets
1 * xxxxx02-priv (xxx.xx.11.37) 0.058 ms *
[root@xxxxx01 ~]# traceroute -r xxx.xx.11.37
traceroute to xxx.xx.11.37 (xxx.xx.11.37), 30 hops max, 60 byte packets
1 xxxxx02-priv (xxx.xx.11.37) 0.217 ms  0.188 ms  0.187 ms
[root@xxxxx01 ~]# traceroute -r xxx.xx.11.37
traceroute to xxx.xx.11.37 (xxx.xx.11.37), 30 hops max, 60 byte packets
1 * * *
2 xxxxx02-priv (xxx.xx.11.37) 0.068 ms * *
[root@xxxxx01 ~]#
The traceroute failure rate was around 50%, so we initially suspected a private-network problem, and that the database instance crash had also been caused by abnormal interconnect communication.
However, a long-running ping of node 2's private IP showed no packet loss at all; the network engineer could only reproduce loss by pinging with very large (about 50M) packets. Hardware problems were therefore tentatively ruled out.
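For reference, that kind of test looks roughly like the following; the packet counts and sizes are illustrative assumptions, not the exact values the network engineer used:

# Small-packet ping across the interconnect: no loss observed
ping -c 100 xxx.xx.11.37
# Large payloads must be fragmented at the IP layer; loss only shows up here
ping -c 100 -s 50000 xxx.xx.11.37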
5. We checked the network-related kernel parameters and found them all at the recommended system defaults. Continuing to search MOS, one note provided the key insight: IPC Send timeout/node eviction etc with high packet reassembles failure (Doc ID 2008933.1). We therefore suspected that the packet reassembly failure rate on our hosts was too high.
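Whether reassembly is actually failing can be checked from the kernel's IP statistics. The commands below are an illustrative check, not taken from the original troubleshooting record:

# IP fragmentation/reassembly counters ("packet reassembles failed" is the key line)
netstat -s | grep -iE 'reassembl|fragment'
# The same counters in raw form (see the ReasmFails column)
grep '^Ip:' /proc/net/snmp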
We then started tuning the host's network kernel parameters.
The network parameters in /etc/sysctl.conf were adjusted as follows:
net.ipv4.ipfrag_high_thresh = 16194304
net.ipv4.ipfrag_low_thresh = 15145728
net.core.rmem_max = 16777216
net.core.rmem_default = 4777216
net.core.wmem_max = 16777216
net.core.wmem_default = 4777216
Parameter explanation:
  • net.ipv4.ipfrag_low_thresh, net.ipv4.ipfrag_high_thresh

    These control the memory used for IP fragment reassembly. When fragmented packets arrive, the kernel holds the fragments in memory until they can be reassembled; valid fragments are kept and invalid ones are discarded. ipfrag_high_thresh is the maximum amount of memory used for this, and once it is exceeded fragments are dropped until usage falls back below ipfrag_low_thresh.

  • net.core.rmem_* / net.core.wmem_*
    net.core.rmem_default: default socket receive buffer size.
    net.core.rmem_max: maximum socket receive buffer size.
    net.core.wmem_default: default socket send buffer size.
    net.core.wmem_max: maximum socket send buffer size.
With the two ipfrag thresholds raised, the four socket buffer parameters were increased accordingly as well.
We ran sysctl -p to apply the changes, restarted the clusterware, and it started successfully.
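For completeness, a minimal sketch of applying and verifying the change (the verification step is illustrative):

# Reload /etc/sysctl.conf so the new values take effect immediately
sysctl -p
# Verify the running values
sysctl net.ipv4.ipfrag_high_thresh net.ipv4.ipfrag_low_thresh
sysctl net.core.rmem_default net.core.rmem_max net.core.wmem_default net.core.wmem_max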
6. Questions arising from this fault
Later, on other Linux machines running RAC, I ran traceroute against the private interconnect and saw a loss rate of around 50% on every one of them, while on AIX there was no loss at all. Looking into it, I found many people asking the same question; the most convincing explanation I found is that Linux applies ICMP rate limiting by default, and removing that limit makes the apparent loss disappear.
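A hedged illustration of the ICMP rate limiting in question (these are the standard Linux sysctls; the default values mentioned are assumptions and should be checked against your kernel documentation):

# traceroute relies on ICMP "time exceeded"/"port unreachable" replies from the target;
# Linux rate-limits those replies by default, so traceroute probes look lost even when the link is fine.
sysctl net.ipv4.icmp_ratelimit    # typically defaults to 1000
sysctl net.ipv4.icmp_ratemask     # bitmask of ICMP types the limit applies to
# Setting the limit to 0 removes it (for testing only):
sysctl -w net.ipv4.icmp_ratelimit=0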
Many databases running with default network parameters have never hit this problem, possibly because the MTU was already increased several-fold when the network was built (it is on many of the databases I maintain), or simply because their packet reassembly failure rate is not high.
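As a side note, checking the interconnect MTU is straightforward; the interface name below is a placeholder for the actual private-interconnect NIC:

# Show the current MTU of the private-interconnect interface (interface name is an example)
ip link show eth1 | grep -o 'mtu [0-9]*'
# A larger MTU (jumbo frames, e.g. 9000) must be configured consistently on every node and switch port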

Author: Tang Jie (Shanghai Xinju, Wang Jian team)

Source: the "IT那活兒" WeChat public account
