
Top 5 Grid Infrastructure Startup Issues [ID 1368382.1]


A small issue came up during recent work, so I am recording the cause and the fix here.


Applies to:

Oracle Database - Enterprise Edition - Version 11.2.0.1 and later

Information in this document applies to any platform.

Purpose

The purpose of this note is to provide a summary of the top 5 issues that may prevent the successful startup of the Grid Infrastructure (GI) stack.

Scope

This note applies to 11gR2 Grid Infrastructure only.

To determine the status of GI, please run the following commands:


1. $GRID_HOME/bin/crsctl check crs
2. $GRID_HOME/bin/crsctl stat res -t -init
3. $GRID_HOME/bin/crsctl stat res -t
4. ps -ef | egrep 'init|d.bin'
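
For reference, on a node where the stack is fully up, the first command normally returns output similar to the following:

   CRS-4638: Oracle High Availability Services is online
   CRS-4537: Cluster Ready Services is online
   CRS-4529: Cluster Synchronization Services is online
   CRS-4533: Event Manager is online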


Details


Issue #1: CRS-4639: Could not contact Oracle High Availability Services, ohasd.bin not running or ohasd.bin is running but no init.ohasd or other processes

Symptoms:

1. Command '$GRID_HOME/bin/crsctl check crs' returns error:
   CRS-4639: Could not contact Oracle High Availability Services
2. Command 'ps -ef | grep init' does not show a line similar to:
   root 4878 1 0 Sep12 ? 00:00:02 /bin/sh /etc/init.d/init.ohasd run
3. Command 'ps -ef | grep d.bin' does not show a line similar to:
   root 21350 1 6 22:24 ? 00:00:01 /u01/app/11.2.0/grid/bin/ohasd.bin reboot
   Or it may only show the "ohasd.bin reboot" process without any other processes

Possible Causes:

1. The file '/etc/inittab' does not contain the line
   h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
2. Runlevel 3 has not been reached; some rc3 script is hanging
3. The init process (pid 1) did not spawn the process defined in /etc/inittab (h1), or a bad entry before init.ohasd such as xx:wait:<process> blocked the start of init.ohasd
4. CRS autostart is disabled
5. The Oracle Local Registry ($GRID_HOME/cdata/<node>.olr) is missing or corrupted
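
The following quick checks map to these causes (generic Linux/CRS commands, not from the original note; adjust paths for your environment):

   # grep init.ohasd /etc/inittab         (cause 1: the respawn entry should be present)
   # who -r                               (cause 2: confirm the expected runlevel has been reached)
   # crsctl config crs                    (cause 4: reports whether CRS autostart is enabled)
   # ls -l $GRID_HOME/cdata/<node>.olr    (cause 5: the OLR should exist and be non-zero in size)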

Solutions:

1. Add the following line to /etc/inittab:
   h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
   and then run "init q" as the root user (see the verification sketch after this list).
2. Run command 'ps -ef | grep rc' and kill any remaining rc3 scripts that appear to be stuck.
3. Remove the bad entry before init.ohasd. Consult with the OS vendor if "init q" does not spawn the "init.ohasd run" process.
4. Enable CRS autostart:
   # crsctl enable crs
   # crsctl start crs
5. Restore the OLR from backup, as the root user:
   # touch $GRID_HOME/cdata/<node>.olr
   # chown root:oinstall $GRID_HOME/cdata/<node>.olr
   # ocrconfig -local -restore $GRID_HOME/cdata/<node>/backup_<date>_<num>.olr
   # crsctl start crs

   If no OLR backup exists for any reason, a deconfig followed by rerunning root.sh is required to recreate the OLR, as the root user:
   # $GRID_HOME/crs/install/rootcrs.pl -deconfig -force
   # $GRID_HOME/root.sh
6. If the above does not help, check the OS messages for the ohasd.bin logger message and manually execute the crswrapexece.pl command mentioned in the OS message with LD_LIBRARY_PATH set to <GRID_HOME>/lib to continue debugging.
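
For solutions 1 and 3, a minimal verification sketch that init has respawned init.ohasd after "init q" (the ps output is the example from the symptoms above; PIDs and timestamps will differ):

   # init q
   # ps -ef | grep init.ohasd | grep -v grep
   root 4878 1 0 Sep12 ? 00:00:02 /bin/sh /etc/init.d/init.ohasd run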


Issue #2: CRS-4530: Communications failure contacting Cluster Synchronization Services daemon, ocssd.bin is not running

Symptoms:

1. Command '$GRID_HOME/bin/crsctl check crs' returns errors:
   CRS-4638: Oracle High Availability Services is online
   CRS-4535: Cannot communicate with Cluster Ready Services
   CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
   CRS-4534: Cannot communicate with Event Manager
2. Command 'ps -ef | grep d.bin' does not show a line similar to:
   oragrid 21543 1 1 22:24 ? 00:00:01 /u01/app/11.2.0/grid/bin/ocssd.bin
3. ocssd.bin is running but aborts with message "CLSGPNP_CALL_AGAIN" in ocssd.log
4. ocssd.log shows:

   2012-01-27 13:42:58.796: [    CSSD][19]clssnmvDHBValidateNCopy: node 1, racnode1, has a disk HB, but no network HB, DHB has rcfg 223132864, wrtcnt, 1112, LATS 783238209,
   lastSeqNo 1111, uniqueness 1327692232, timestamp 1327693378/787089065

5. For clusters with 3 or more nodes, 2 nodes form the cluster fine, but the 3rd node fails after joining; ocssd.log shows:

   2012-02-09 11:33:53.048: [    CSSD][1120926016](:CSSNM00008:)clssnmCheckDskInfo: Aborting local node to avoid splitbrain. Cohort of 2 nodes with leader 2, racnode2, is smaller than
   cohort of 2 nodes led by node 1, racnode1, based on map type 2
   2012-02-09 11:33:53.048: [    CSSD][1120926016]###################################
   2012-02-09 11:33:53.048: [    CSSD][1120926016]clssscExit: CSSD aborting from thread clssnmRcfgMgrThread

6. ocssd.bin startup times out after 10 minutes:

   2012-04-08 12:04:33.153: [    CSSD][1]clssscmain: Starting CSS daemon, version 11.2.0.3.0, in (clustered) mode with uniqueness value 1333911873
   ......
   2012-04-08 12:14:31.994: [    CSSD][5]clssgmShutDown: Received abortive shutdown request from client.
   2012-04-08 12:14:31.994: [    CSSD][5]###################################
   2012-04-08 12:14:31.994: [    CSSD][5]clssscExit: CSSD aborting from thread GMClientListener
   2012-04-08 12:14:31.994: [    CSSD][5]###################################
   2012-04-08 12:14:31.994: [    CSSD][5](:CSSSC00012:)clssscExit: A fatal error occurred and the CSS daemon is terminating abnormally

Possible Causes:

1. The voting disk is missing or inaccessible
2. Multicast is not working (for 11.2.0.2+)
3. The private network is not working; ping or traceroute <private host> shows the destination is unreachable. Or a firewall is enabled for the private network while ping/traceroute work fine
4. The private network is pingable with a normal ping command but not with a jumbo frame size (e.g. ping -s 8900 <private ip>) when jumbo frames are enabled (MTU: 9000+). Or some cluster nodes have jumbo frames set (MTU: 9000) while the problem node does not (MTU: 1500)
5. gpnpd does not come up, stuck in its dispatch thread, Bug 10105195
6. Too many disks are discovered via asm_diskstring, or disk scanning is slow due to Bug 13454354 (on Solaris 11.2.0.3 only)

Solutions:

1. Restore voting disk access by checking storage access, disk permissions, etc.
   If the voting disk is missing from the OCR ASM diskgroup, start CRS in exclusive mode and recreate the voting disk:
   # crsctl start crs -excl
   # crsctl replace votedisk <+OCRVOTE diskgroup>
2. Refer to Document 1212703.1 for the multicast test and fix
3. Consult with the network administrator to restore private network access or disable the firewall for the private network (for Linux, check 'service iptables status' and 'service ip6tables status'); see the sample checks after this list
4. Engage the network admin to enable jumbo frames at the switch layer if they are enabled at the network card
5. Kill the gpnpd.bin process on the surviving node, refer to Document 10105195.8
   Once the above issues are resolved, restart the Grid Infrastructure stack.
   If ping/traceroute all work for the private network and a failed 11.2.0.1 to 11.2.0.2 upgrade has occurred, please check out Bug 13416559 for the workaround
6. Limit the number of ASM disks scanned by supplying a more specific asm_diskstring, refer to Bug 13583387
   For Solaris 11.2.0.3 only, please apply patch 13250497, see Document 1451367.1.
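
Sample private-interconnect checks for causes 3 and 4 and solutions 3 and 4 above (hostnames, IPs and the interface name are placeholders; run from each node against every other node):

   # ping <private hostname of remote node>
   # traceroute <private hostname of remote node>
   # ping -s 8900 <private IP of remote node>           (only meaningful when jumbo frames, MTU 9000+, are configured)
   # /sbin/ifconfig <private interface> | grep -i mtu   (the MTU must match on all nodes)
   # service iptables status                            (Linux: the firewall must not block the interconnect)
   # service ip6tables status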


Issue #3: CRS-4535: Cannot communicate with Cluster Ready Services, crsd.bin is not running

Symptoms:

1. Command '$GRID_HOME/bin/crsctl check crs' returns errors:
   CRS-4638: Oracle High Availability Services is online
   CRS-4535: Cannot communicate with Cluster Ready Services
   CRS-4529: Cluster Synchronization Services is online
   CRS-4534: Cannot communicate with Event Manager
2. Command 'ps -ef | grep d.bin' does not show a line similar to:
   root 23017 1 1 22:34 ? 00:00:00 /u01/app/11.2.0/grid/bin/crsd.bin reboot
3. Even if the crsd.bin process exists, command 'crsctl stat res -t -init' shows:
   ora.crsd
        1    ONLINE    INTERMEDIATE

Possible Causes:

1. ocssd.bin is not running or resource ora.cssd is not ONLINE
2. The +ASM<n> instance cannot start up
3. The OCR is inaccessible
4. The network configuration has been changed, causing a gpnp profile.xml mismatch
5. The $GRID_HOME/crs/init/<host>.pid file for crsd has been removed or renamed manually; crsd.log shows: 'Error3 -2 writing PID to the file'
6. The ocr.loc content mismatches that of other cluster nodes; crsd.log shows: 'Shutdown CacheLocal. my hash ids don't match'
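
Quick checks that map to causes 3, 4 and 6 above (generic commands, not from the original note; Linux paths shown, other platforms differ):

   # ocrcheck                              (cause 3: verifies OCR accessibility and integrity)
   $ $GRID_HOME/bin/gpnptool get           (cause 4: dumps the current gpnp profile for comparison with the OS network configuration)
   # cat /etc/oracle/ocr.loc               (cause 6: the content should be identical on all cluster nodes)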

Solutions:

1. Check the solutions for Issue #2; ensure ocssd.bin is running and ora.cssd is ONLINE
2. For 11.2.0.2+, ensure that the resource ora.cluster_interconnect.haip is ONLINE, refer to Document 1383737.1 for ASM startup
   issues related to HAIP.
3. Ensure the OCR disk is available and accessible. If the OCR is lost for any reason, refer to Document 1062983.1 on how to restore
   the OCR.
4. Restore the network configuration to match the interface defined in $GRID_HOME/gpnp/<node>/profiles/peer/profile.xml, refer to
   Document 283684.1 for private network modification.
5. Touch the <host>.pid file under $GRID_HOME/crs/init (a sketch follows after this list).
   For 11.2.0.1, the file is owned by the <grid> user.
   For 11.2.0.2, the file is owned by the root user.
6. Use the ocrconfig -repair command to fix the ocr.loc content:
   For example, as the root user:
   # ocrconfig -repair -add +OCR2 (to add an entry)
   # ocrconfig -repair -delete +OCR2 (to remove an entry)
   ohasd.bin needs to be up and running in order for the above commands to run.

Once the above issues are resolved, either restart the GI stack or start crsd.bin via:
   # crsctl start res ora.crsd -init
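
For solution 5, a minimal sketch of recreating the crsd pid file (<host> and <grid> are placeholders; apply the ownership matching your version, as noted above):

   # touch $GRID_HOME/crs/init/<host>.pid
   # chown <grid>:oinstall $GRID_HOME/crs/init/<host>.pid     (on 11.2.0.1; on 11.2.0.2+ use chown root:root instead)
   # chmod 644 $GRID_HOME/crs/init/<host>.pid
   # crsctl start res ora.crsd -init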


Issue #4: Agent or mdnsd.bin, gpnpd.bin, gipcd.bin not running

Symptoms:

1. orarootagent not running. ohasd.log shows:
   2012-12-21 02:14:05.071: [    AGFW][24] {0:0:2} Created alert : (:CRSAGF00123:) : Failed to start the agent process: /grid/11.2.0/grid_2/bin/orarootagent Category: -1 Operation: fail Loc: canexec2 OS error: 0 Other : no exe permission, file [/grid/11.2.0/grid_2/bin/orarootagent]
2. mdnsd.bin, gpnpd.bin or gipcd.bin not running; here is a sample from the mdnsd log file:
   2012-12-31 21:37:27.601: [  clsdmt][1088776512]Creating PID [4526] file for home /u01/app/11.2.0/grid host lc1n1 bin mdns to /u01/app/11.2.0/grid/mdns/init/
   2012-12-31 21:37:27.602: [  clsdmt][1088776512]Error3 -2 writing PID [4526] to the file []
   2012-12-31 21:37:27.602: [  clsdmt][1088776512]Failed to record pid for MDNSD
   or
   2012-12-31 21:39:52.656: [  clsdmt][1099217216]Creating PID [4645] file for home /u01/app/11.2.0/grid host lc1n1 bin mdns to /u01/app/11.2.0/grid/mdns/init/
   2012-12-31 21:39:52.656: [  clsdmt][1099217216]Writing PID [4645] to the file [/u01/app/11.2.0/grid/mdns/init/lc1n1.pid]
   2012-12-31 21:39:52.656: [  clsdmt][1099217216]Failed to record pid for MDNSD
3. oraagent or appagent not running, crsd.log shows:
   2012-12-01 00:06:24.462: [    AGFW][1164069184] {0:2:27} Created alert : (:CRSAGF00130:) : Failed to start the agent /u01/app/grid/11.2.0/bin/appagent_oracle

Possible Causes:

1. orarootagent is missing execute permission
2. The process's associated <node>.pid file is missing, or the file has the wrong ownership or permission
3. Wrong permission/ownership within the GRID_HOME

Solutions:

1. Either compare permission/ownership with a good node's GRID_HOME and make corrections accordingly, or as the root user (see the permission-check sketch after this list):
   # cd <GRID_HOME>/crs/install
   # ./rootcrs.pl -unlock
   # ./rootcrs.pl -patch
   This will stop the clusterware stack, set permission/ownership to root for the required files, and restart the clusterware stack.
2. If the corresponding <node>.pid does not exist, touch the file with the correct ownership and permission; otherwise correct the <node>.pid ownership/permission as required, then restart the clusterware stack.
   Here is the list of <node>.pid files under <GRID_HOME>, owned by root:root, permission 644:
   ./ologgerd/init/<node>.pid
   ./osysmond/init/<node>.pid
   ./ctss/init/<node>.pid
   ./ohasd/init/<node>.pid
   ./crs/init/<node>.pid
   Owned by <grid>:oinstall, permission 644:
   ./mdns/init/<node>.pid
   ./evm/init/<node>.pid
   ./gipc/init/<node>.pid
   ./gpnp/init/<node>.pid

3. For cause 3, please refer to solution 1.
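
For cause 1, a sketch of verifying the agent binary permission against a healthy node and restoring it (the path is taken from the symptom above; apply whatever mode and ownership the healthy node actually shows):

   # ls -l /grid/11.2.0/grid_2/bin/orarootagent*     (run on both the problem node and a healthy node and compare)
   # chmod 755 /grid/11.2.0/grid_2/bin/orarootagent  (example only: restore execute permission to match the healthy node)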


Issue #5: ASM instance does not start, ora.asm is OFFLINE

Symptoms:

1. Command 'ps -ef | grep asm' shows no ASM processes
2. Command 'crsctl stat res -t -init' shows:
   ora.asm
        1    ONLINE    OFFLINE

Possible Causes:

1. The ASM spfile is corrupted
2. The ASM discovery string is incorrect and therefore the voting disk/OCR cannot be discovered
3. ASMLib configuration problem
4. The ASM instances are using different cluster_interconnects; HAIP being OFFLINE on one node prevents the second ASM instance from starting

Solutions:

1. Create a temporary pfile to start the ASM instance, then recreate the spfile, see Document 1095214.1 for more details (a minimal sketch follows after this list).
2. Refer to Document 1077094.1 to correct the ASM discovery string.
3. Refer to Document 1050164.1 to fix the ASMLib configuration.
4. Refer to Document 1383737.1 for the solution. For more information about HAIP, please refer to Document 1210883.1.
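
For solution 1, a minimal sketch of starting ASM from a temporary pfile and recreating the spfile (parameter values, paths and the diskgroup name are illustrative only; follow Document 1095214.1 for the supported procedure). As the grid user, create /tmp/asm_pfile.ora containing, for example:

   *.instance_type='asm'
   *.asm_diskstring='/dev/oracleasm/disks/*'
   *.asm_power_limit=1

then:

   $ export ORACLE_HOME=<GRID_HOME>
   $ export ORACLE_SID=+ASM1
   $ $ORACLE_HOME/bin/sqlplus / as sysasm
   SQL> startup pfile='/tmp/asm_pfile.ora';
   SQL> create spfile='+DATA' from pfile='/tmp/asm_pfile.ora';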


For further debugging of GI startup issues, please refer to Document 1050908.1 Troubleshoot Grid Infrastructure Startup Issues.


Top 5 Grid Infrastructure Startup Issues [ID 1368382.1]

