Clock desynchronization prevents ora.crsd from starting


Category: Oracle

2022-08-09 18:34:25

I think I've written this up before, but here it is again in more detail.

Took a look at the test database: it wasn't started. First check the process state:

[oracle@db01 ~]$ ps -ef|grep d.bin
root 18365 1 1 jun27 ? 10:24:18 /u01/app/19.0.0/grid/bin/ohasd.bin reboot
grid 23070 1 0 jun27 ? 03:34:40 /u01/app/19.0.0/grid/bin/oraagent.bin
grid 23353 1 0 jun27 ? 02:56:17 /u01/app/19.0.0/grid/bin/mdnsd.bin
grid 23354 1 0 jun27 ? 06:44:10 /u01/app/19.0.0/grid/bin/evmd.bin
grid 24010 1 0 jun27 ? 04:23:14 /u01/app/19.0.0/grid/bin/gpnpd.bin
grid 24201 23354 0 jun27 ? 02:56:51 /u01/app/19.0.0/grid/bin/evmlogger.bin -o /u01/app/19.0.0/grid/log/[hostname]/evmd/evmlogger.info -l /u01/app/19.0.0/grid/log/[hostname]/evmd/evmlogger.log
grid 24665 1 0 jun27 ? 07:36:27 /u01/app/19.0.0/grid/bin/gipcd.bin
root 28274 1 0 jun27 ? 03:42:05 /u01/app/19.0.0/grid/bin/cssdmonitor
root 28417 1 0 jun27 ? 03:44:19 /u01/app/19.0.0/grid/bin/cssdagent
grid 28659 1 1 jun27 ? 11:16:26 /u01/app/19.0.0/grid/bin/ocssd.bin
oracle 236493 235970 0 14:51 pts/0 00:00:00 grep --color=auto d.bin
root 269856 1 0 aug08 ? 00:06:27 /u01/app/19.0.0/grid/bin/orarootagent.bin
[oracle@db01 ~]$ exit
logout
you have new mail in /var/spool/mail/root
[root@db01 ~]# su - grid
last login: tue aug 9 14:46:19 cst 2022
[grid@db01 ~]$ crsctl stat res -t
crs-4535: cannot communicate with cluster ready services
crs-4000: command status failed, or completed with errors.
[grid@db01 ~]$ crsctl stat res -t -init
--------------------------------------------------------------------------------
name target state server state details
--------------------------------------------------------------------------------
cluster resources
--------------------------------------------------------------------------------
ora.asm
      1 online offline stable
ora.cluster_interconnect.haip
      1 online online db01 stable
ora.crf
      1 offline offline stable
ora.crsd
      1 online offline stable
ora.cssd
      1 online online db01 stable
ora.cssdmonitor
      1 online online db01 stable
ora.ctssd
      1 online offline stable
ora.diskmon
      1 offline offline stable
ora.drivers.acfs
      1 online online db01 stable
ora.evmd
      1 online online db01 stable
ora.gipcd
      1 online online db01 stable
ora.gpnpd
      1 online online db01 stable
ora.mdnsd
      1 online online db01 stable
ora.storage
      1 online online db01 stable
--------------------------------------------------------------------------------
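Reading the `-init` table by eye is error-prone; the interesting rows are the ones whose target is `online` but whose state is `offline`. A small sketch that pulls those out of a trimmed copy of the listing above (the awk field positions assume the flattened `name 1 target state details` layout shown here, not the column-aligned original):

```shell
# Filter a (trimmed, flattened) copy of `crsctl stat res -t -init` rows for
# resources that should be online (TARGET=online) but are not (STATE=offline).
listing='ora.asm 1 online offline stable
ora.crf 1 offline offline stable
ora.crsd 1 online offline stable
ora.cssd 1 online online stable
ora.ctssd 1 online offline stable
ora.diskmon 1 offline offline stable'
echo "$listing" | awk '$3 == "online" && $4 == "offline" {print $1}'
```

This immediately shows that it is not just `ora.crsd`: `ora.ctssd` and `ora.asm` are also down, which is a hint that the problem sits below crsd in the dependency chain.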

cssd is fine, but crsd is down. Shouldn't be a big deal; let's see what the logs complain about.

[grid@db01 ~]$ cd $ORACLE_BASE
[grid@db01 grid]$ ls
admin audit cfgtoollogs checkpoints crsdata diag oracle.ahf
[grid@db01 grid]$ cd diag
[grid@db01 diag]$ ls
afdboot asm asmtool clients diagtool em ios lsnrctl ofm plsqlapp tnslsnr
apx asmcmd bdsql crs dps gsm kfod netcman plsql rdbms
[grid@db01 diag]$ cd crs
[grid@db01 crs]$ ls
db01
[grid@db01 crs]$ cd *
[grid@db01 db01]$ ls
crs
[grid@db01 db01]$ cd crs
[grid@db01 crs]$ ls
alert cdump incident incpkg lck log metadata metadata_dgif metadata_pv stage sweep trace
[grid@db01 crs]$ cd trace
[grid@db01 trace]$ ls -lt|head
total 5549444
-rw-rw---- 1 root oinstall 897 dec 17 2022 crsctl_95477.trm
-rw-rw---- 1 root oinstall 1372 dec 17 2022 crsctl_95477.trc
-rw-rw---- 1 root oinstall 1372 dec 17 2022 crsctl_94868.trc
-rw-rw---- 1 root oinstall 897 dec 17 2022 crsctl_94868.trm
-rw-rw---- 1 root oinstall 1372 dec 17 2022 crsctl_94255.trc
-rw-rw---- 1 root oinstall 897 dec 17 2022 crsctl_94255.trm
-rw-rw---- 1 root oinstall 1372 dec 17 2022 crsctl_93924.trc
-rw-rw---- 1 root oinstall 897 dec 17 2022 crsctl_93924.trm
-rw-rw---- 1 root oinstall 1372 dec 17 2022 crsctl_93634.trc
[grid@db01 trace]$ ls -l |grep ocss
-rw-rw---- 1 grid oinstall 6455323 jul 27 01:51 ocssd_250.trm
-rw-rw---- 1 grid oinstall 52429032 jul 30 03:56 ocssd_251.trc
-rw-rw---- 1 grid oinstall 6465250 jul 30 03:56 ocssd_251.trm
-rw-rw---- 1 grid oinstall 52429199 aug 2 06:06 ocssd_252.trc
-rw-rw---- 1 grid oinstall 6457074 aug 2 06:06 ocssd_252.trm
-rw-rw---- 1 grid oinstall 52429272 aug 5 08:02 ocssd_253.trc
-rw-rw---- 1 grid oinstall 6470686 aug 5 08:02 ocssd_253.trm
-rw-rw---- 1 grid oinstall 52429061 aug 8 09:47 ocssd_254.trc
-rw-rw---- 1 grid oinstall 6479333 aug 8 09:47 ocssd_254.trm
-rw-rw---- 1 grid oinstall 20792521 aug 9 14:51 ocssd.trc
-rw-rw---- 1 grid oinstall 2565627 aug 9 14:51 ocssd.trm

Look for the crsd trace file:
[grid@db01 trace]$ ls -lt ocrsd*.trc
ls: cannot access ocrsd*.trc: no such file or directory
[grid@db01 trace]$ ls -lt|grep crsd
-rw-rw---- 1 root oinstall 11068718 jun 27 10:49 crsd.trc
-rw-rw---- 1 root oinstall 1692558 jun 27 10:49 crsd.trm
-rw-rw---- 1 grid oinstall 10886426 jun 27 10:49 crsd_scriptagent_grid.trc
-rw-rw---- 1 grid oinstall 2171435 jun 27 10:49 crsd_scriptagent_grid.trm
-rw-rw---- 1 grid oinstall 24928461 jun 27 10:49 crsd_jagent_grid.trc
-rw-rw---- 1 grid oinstall 4835526 jun 27 10:49 crsd_jagent_grid.trm
[grid@db01 trace]$ tail -100 crsd.trc
oracle database 19c clusterware release 19.0.0.0.0 - production
version 19.14.0.0.0 copyright 1996, 2021 oracle. all rights reserved.
kgfcheck kgfnstmtexecute01c: (ret == oci_success): failed at kgfn.c:3697
2022-06-27 10:49:18.149 : ocrraw:4160478976: kgfnrecorderr 15056 oci error:
ora-15056: additional error message
ora-06512: at line 4
ora-17503: ksfdopn:2 failed to open file ocr.255.4294967295
ora-15001: diskgroup "ocr" does not exist or is not mounted
ora-06512: at "sys.x$dbms_diskgroup", line 405
ora-06512: at line 2

2022-06-27 10:49:18.149*:kgfn.c@1804: kgfnrecorderrpriv: 15056 error=ora-15056: additional error message
ora-06512: at line 4
ora-17503: ksfdopn:2 failed to open file ocr.255.4294967295
ora-15001: diskgroup "ocr" does not exist or is not mounted
ora-06512: at "sys.x$dbms_diskgroup", line 405
ora-06512: at line 2

2022-06-27 10:49:18.149*:kgfn.c@3692: kgfnstmtexecute: ocistmtexecute failed, ret=-1
2022-06-27 10:49:18.149*:kgfo.c@1016: kgfo_kge2slos error stack at kgfoopen01: ora-15056: additional error message
ora-06512: at line 4
ora-17503: ksfdopn:2 failed to open file ocr.255.4294967295
ora-15001: diskgroup "ocr" does not exist or is not mounted
ora-06512: at "sys.x$dbms_diskgroup", line 405
ora-06512: at line 2

2022-06-27 10:49:18.149 : ocrraw:4160478976: -- trace dump on error exit --

2022-06-27 10:49:18.149 : ocrraw:4160478976: error [kgfoopen01] in [kgfokge] at kgfo.c:2380

2022-06-27 10:49:18.150 : ocrraw:4160478976: ora-06512: at line 4
ora-17503: ksfdopn:2 failed to open file ocr.255.4294967295
ora-15001: diskgroup "ocr" does not exist or is not mounted
ora-06512: at "sys

2022-06-27 10:49:18.150 : ocrraw:4160478976: category: 8

2022-06-27 10:49:18.150 : ocrraw:4160478976: depinfo: 15056

2022-06-27 10:49:18.150 : ocrraw:4160478976: -- trace dump end --

2022-06-27 10:49:18.151 : ocrraw:4160478976: -- trace dump on error exit --

2022-06-27 10:49:18.151 : ocrraw:4160478976: error [kgfoopen01] in [kgfokge] at kgfo.c:2178

2022-06-27 10:49:18.151 : ocrraw:4160478976: ora-06512: at line 4
ora-17503: ksfdopn:2 failed to open file ocr.255.4294967295
ora-15001: diskgroup "ocr" does not exist or is not mounted
ora-06512: at "sys

2022-06-27 10:49:18.151 : ocrraw:4160478976: category: 8

2022-06-27 10:49:18.151 : ocrraw:4160478976: depinfo: 15056

2022-06-27 10:49:18.151 : ocrraw:4160478976: -- trace dump end --

2022-06-27 10:49:18.151 : ocrasm:4160478976: proprasmo: failed to open the file in dg [ocr]
2022-06-27 10:49:18.151 : ocrasm:4160478976: proprasmo: error in open/create file in dg [ocr]
  ocrasm:4160478976: slos : slos: cat=8, opn=kgfoopen01, dep=15056, loc=kgfokge

2022-06-27 10:49:18.151 : ocrasm:4160478976: asm error stack :
 default:4160478976: u_set_gbl_comp_error: comptype '108' : error '8'
2022-06-27 10:49:18.156 : ocrraw:4160478976: kgfnconnect2int: cstr=(description=(address=(protocol=beq)(program=/u01/app/19.0.0/grid/bin/oracle)(argv0=oracleasm1_ocr)(envs='oracle_home=/u01/app/19.0.0/grid,oracle_sid= asm1,ora_server_broker_mode=none')(args='(description=(local=yes)(address=(protocol=beq)))')(privs=(user=grid)(group=oinstall)))(connect_data=(oracle_home=/u01/app/19.0.0/grid)(sid=asm1))(security=(authentication_service=beq))(enable=setuser))

2022-06-27 10:49:18.156*:kgfn.c@7000: kgfnconnect2int: cstr=(description=(address=(protocol=beq)(program=/u01/app/19.0.0/grid/bin/oracle)(argv0=oracleasm1_ocr)(envs='oracle_home=/u01/app/19.0.0/grid,oracle_sid= asm1,ora_server_broker_mode=none')(args='(description=(local=yes)(address=(protocol=beq)))')(privs=(user=grid)(group=oinstall)))(connect_data=(oracle_home=/u01/app/19.0.0/grid)(sid=asm1))(security=(authentication_service=beq))(enable=setuser))
2022-06-27 10:49:18.156*:kgfn.c@3966: kgfnstmtsingle res=0 []
2022-06-27 10:49:18.199 : ocrraw:4160478976: -- trace dump on error exit --

2022-06-27 10:49:18.199 : ocrraw:4160478976: error [kgfo] in [kgfockmt03] at kgfo.c:3182

2022-06-27 10:49:18.199 : ocrraw:4160478976: diskgroup ocr not mounted ()

2022-06-27 10:49:18.199 : ocrraw:4160478976: category: 6

2022-06-27 10:49:18.199 : ocrraw:4160478976: depinfo: 0

2022-06-27 10:49:18.200 : ocrraw:4160478976: -- trace dump end --

  ocrasm:4160478976: slos : slos: cat=6, opn=kgfo, dep=0, loc=kgfockmt03

2022-06-27 10:49:18.200 : ocrasm:4160478976: asm error stack :
2022-06-27 10:49:18.200 : ocrasm:4160478976: proprasmo: kgfocheckmount returned [6]
2022-06-27 10:49:18.200 : ocrasm:4160478976: proprasmo: the asm disk group ocr is not found or not mounted
2022-06-27 10:49:18.201 : ocrraw:4160478976: proprioo: failed to open [ocr/cs-shdb-cluster/ocrfile/registry.255.1038930357]. returned proprasmo() with [26]. marking location as unavailable.
2022-06-27 10:49:18.201 : ocrraw:4160478976: proprioo: no ocr/olr devices are usable
  ocrutl:4160478976: u_fill_errorbuf: error info : [insufficient quorum to open ocr devices]
 default:4160478976: u_set_gbl_comp_error: comptype '107' : error '0'
2022-06-27 10:49:18.201 : default:4160478976: clsvactversion:4: retrieving active version from local storage.
2022-06-27 10:49:18.206 : cssclnt:4160478976: clssgsgrppubdata: group (ocr_cs-shdb-cluster) not found
2022-06-27 10:49:18.206 : ocrraw:4160478976: proprio_repairconf: failed to retrieve the group public data. css ret code [20]
2022-06-27 10:49:18.206 : ocrraw:4160478976: proprioo: failed to auto repair the ocr configuration.
2022-06-27 10:49:18.206 : ocrraw:4160478976: proprinit: could not open raw device
2022-06-27 10:49:18.215 : ocrapi:4160478976: a_init: backend init unsuccessful : [26]
2022-06-27 10:49:18.215 : ocrapi:4160478976: estack 'proc-00026: error while accessing the physical storage'
2022-06-27 10:49:18.216 : crsocr:4160478976: [ error] ocr context init failure. error: proc-26: error while accessing the physical storage storage layer error [insufficient quorum to open ocr devices] [0]
2022-06-27 10:49:18.218 : crsd:4160478976: [ none] created alert : (:crsd00111:) : could not init ocr, error: proc-26: error while accessing the physical storage storage layer error [insufficient quorum to open ocr devices] [0]
2022-06-27 10:49:18.218 : crsd:4160478976: [ error] [panic] crsd exiting: could not init ocr, code: 26
2022-06-27 10:49:18.218 : crsd:4160478976: [ info] done.
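The trace tail repeats the same error stack many times; pulling out the distinct ORA- codes makes the chain easier to read at a glance. A small sketch over an excerpt of the lines above:

```shell
# Extract the distinct ORA- error codes from an excerpt of the crsd.trc
# stack above, so the dependency chain is visible in one screenful.
excerpt='ora-15056: additional error message
ora-06512: at line 4
ora-17503: ksfdopn:2 failed to open file ocr.255.4294967295
ora-15001: diskgroup "ocr" does not exist or is not mounted
ora-06512: at "sys.x$dbms_diskgroup", line 405'
echo "$excerpt" | grep -o 'ora-[0-9]*' | sort -u
```

The four codes boil down to one root symptom: ORA-15001, the diskgroup holding the OCR is not mounted.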

It seems the OCR cannot be read. Check it:

[root@db01 ~]# ocrcheck
status of oracle cluster registry is as follows :
         version : 4
         total space (kbytes) : 491684
         used space (kbytes) : 84868
         available space (kbytes) : 406816
         id : 1078669969
         device/file name : ocr
                                    device/file integrity check succeeded

                                    device/file not configured

                                    device/file not configured

                                    device/file not configured

                                    device/file not configured

         cluster registry integrity check succeeded

         logical corruption check succeeded

[root@db01 ~]# crsctl check crs
crs-4638: oracle high availability services is online
crs-4535: cannot communicate with cluster ready services
crs-4529: cluster synchronization services is online
crs-4533: event manager is online
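With four CRS status lines the odd one out is easy to miss in a busy terminal; a sketch that filters a saved `crsctl check crs` output (the text above, pasted into a variable here for illustration) down to the stacks that are not online:

```shell
# Keep only the daemons that did NOT report "is online" in the saved
# `crsctl check crs` output captured above.
check_output='crs-4638: oracle high availability services is online
crs-4535: cannot communicate with cluster ready services
crs-4529: cluster synchronization services is online
crs-4533: event manager is online'
echo "$check_output" | grep -v 'is online'
```

Only the Cluster Ready Services line survives the filter, confirming that crsd is the sole unhealthy stack.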

The root cause still isn't obvious, so try starting it by hand:
[root@db01 ~]# crsctl start res ora.crsd -init
crs-2672: attempting to start 'ora.ctssd' on 'db01'
the clock on host db01 differs from mean cluster time by 1449496153 microseconds. the cluster time synchronization service will not perform time synchronization because the time difference is beyond the permissible offset of 600 seconds. details in /u01/app/grid/diag/crs/db01/crs/trace/octssd.trc.
crs-2674: start of 'ora.ctssd' on 'db01' failed
crs-4000: command start failed, or completed with errors.
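The message reports the offset in microseconds; converting it to seconds shows how far past the CTSS limit the clock is (both numbers are taken from the message above):

```shell
# CTSS reported the offset in microseconds and the permissible limit in
# seconds; convert and compare.
offset_us=1449496153
limit_s=600
offset_s=$(( offset_us / 1000000 ))
echo "offset ${offset_s}s vs limit ${limit_s}s"   # -> offset 1449s vs limit 600s
```

At roughly 24 minutes of drift, CTSS refuses to step the clock itself, which is why `ora.ctssd` (and everything above it) fails to start.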

Aha, now it's clear: the host clocks of the two RAC nodes are out of sync again. That's a test environment for you, with not even NTP configured. Adjust the clock by hand:
[root@db01 ~]# date
tue aug 9 14:54:41 cst 2022
[root@db01 ~]# date -s "2022-08-09 14:39:00"
tue aug 9 14:39:00 cst 2022

Start it again:
[root@db01 ~]# crsctl start res ora.crsd -init
crs-2672: attempting to start 'ora.ctssd' on 'db01'
crs-2676: start of 'ora.ctssd' on 'db01' succeeded
crs-2672: attempting to start 'ora.asm' on 'db01'
crs-2672: attempting to start 'ora.crsd' on 'db01'
crs-2676: start of 'ora.crsd' on 'db01' succeeded
crs-2676: start of 'ora.asm' on 'db01' succeeded
[root@db01 ~]#
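For reference, the two timestamps in the `date` output above imply the clock was set back by a bit over fifteen minutes; a small sketch computing the skew (GNU `date -d` assumed):

```shell
# The clock read 14:54:41 and was set back to 14:39:00 (same day, same TZ),
# so the manual correction was:
t_shown=$(date -d '2022-08-09 14:54:41' +%s)
t_set=$(date -d '2022-08-09 14:39:00' +%s)
echo "correction: $(( t_shown - t_set ))s"   # -> correction: 941s
```

Note the 941 s correction is smaller than the 1449 s offset CTSS reported; presumably it was enough to bring the residual offset under the 600 s threshold, after which CTSS could start and slew away the remainder itself.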

[root@db01 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
name target state server state details
--------------------------------------------------------------------------------
local resources
--------------------------------------------------------------------------------
ora.listener.lsnr
               online online db01 stable
               online online db02 stable
ora.listener_adg.lsnr
               online online db01 stable
               online online db02 stable
ora.chad
               online online db01 stable
               online online db02 stable
ora.helper
               offline offline db01 idle,stable
               offline offline db02 idle,stable
ora.net1.network
               online online db01 stable
               online online db02 stable
ora.ons
               online online db01 stable
               online online db02 stable
ora.proxy_advm
               offline offline db01 stable
               offline offline db02 stable
--------------------------------------------------------------------------------
cluster resources
--------------------------------------------------------------------------------
ora.archdg.dg(ora.asmgroup)
      1 online online db01 stable
      2 online online db02 stable
ora.asmnet1lsnr_asm.lsnr(ora.asmgroup)
      1 online online db01 stable
      2 online online db02 stable
ora.datadg.dg(ora.asmgroup)
      1 online online db01 stable
      2 online online db02 stable
ora.listener_scan1.lsnr
      1 online online db02 stable
ora.mgmtlsnr
      1 offline offline 169.254.20.76 77.77.
                                                             77.2,stable
ora.ocr.dg(ora.asmgroup)
      1 online online db01 stable
      2 online online db02 stable
ora.asm(ora.asmgroup)
      1 online online db01 started,stable
      2 online online db02 started,stable
ora.asmnet1.asmnetwork(ora.asmgroup)
      1 online online db01 stable
      2 online online db02 stable
ora.db01.vip
      1 online online db01 stable
ora.db02.vip
      1 online online db02 stable
ora.cvu
      1 online online db02 stable
ora.orcl.db
      1 online offline mounted (closed),ope
                                                             n initiated,home=/u0
                                                             1/app/oracle/product
                                                             /19.0.0/db_1,stable
      2 online offline mounted (closed),ope
                                                             n initiated,home=/u0
                                                             1/app/oracle/product
                                                             /19.0.0/db_1,stable
ora.orcl.orclprim.svc
      1 online offline stable
      2 online offline stable
ora.orcl.orclprim.svc
      1 online offline stable
      2 online offline stable
ora.qosmserver
      1 online online db02 stable
ora.rhpserver
      1 offline offline stable
ora.scan1.vip
      1 online online db02 stable
--------------------------------------------------------------------------------
[root@db01 ~]#
All good.