Oracle 11gR2 RAC Node Damage Recovery
Founder International, Public Security Division
Technical Document
1. Document Properties

Property                   Content
Document name              Oracle 11gR2 RAC Node Damage Recovery
Document version           A1
Document status            Final
Completion date            September 14, 2016
Author
2. Document Change History

File version    Revision date    Revised by    Remarks
(no entries)
3. Summary of Changes in This Revision

No.    Brief description of change
1-5    (no entries)
Chapter 1  Overview

In the Liaoning Provincial Public Security Integrated Policing Platform project, at the Jinzhou database site, the user expanded the disk array and replaced disks without first stopping the database. As a result, node 1 could no longer mount its local disk array and only one node could still serve the database, so the Oracle cluster information had to be reconfigured.
Chapter 2  System Environment

Item                 RAC node 1      RAC node 2
Operating system     RedHat 6        RedHat 6
Cluster software     Oracle GRID     Oracle GRID
Hostname             his1            his2
Example IP address

System users and groups (both nodes):
  root
  grid      dba, asmdba, asmadmin, oinstall, asmoper
  oracle    dba, asmdba, asmadmin, oinstall, asmoper
Chapter 3  Database Environment

Item                              RAC node 1    RAC node 2
Public IP address
Virtual IP address
Heartbeat (private) IP address
SCAN IP address
Oracle RAC SID                    orcl1         orcl2
Database name                     orcl
Datafile location                 +DATA
Archive log location              +ARCH
Database version                  Oracle Database 11g Enterprise Edition Release, 64-bit
GRID_BASE directory               /u01/app/grid
GRID_HOME directory               /u01/app/grid/
ORACLE_BASE directory             /u01/app/oracle
ORACLE_HOME directory             /u01/app/oracle/
Database listener port            1521
Database character set            ZHS16GBK
Database OS user                  oracle
sys / system passwords
Disk management                   ASM
Chapter 4  Implementation Steps

4.1 Operating System Preparation

The preparation steps are essentially the same as in the RAC installation document. If the operating system needs to be reinstalled, follow that document to configure it. Only the following items are covered here:
4.1.1 Configure user equivalence for the grid and oracle users

The grid user: first delete the old .ssh directory on his2.
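A minimal sketch of that cleanup (assuming the grid user's default home directory):

# on his2, logged in as (or su'd to) the grid user
$ rm -rf ~/.ssh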
On his1:
su - grid
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ ssh-keygen -t rsa
$ ssh-keygen -t dsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
On his2:
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ ssh-keygen -t rsa
$ ssh-keygen -t dsa
On his1:
$ ssh his2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ ssh his2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ scp ~/.ssh/authorized_keys his2:~/.ssh/authorized_keys
Verify:
$ ssh his1 date
$ ssh his1-priv date
$ ssh his2 date
$ ssh his2-priv date
The oracle user
On his1:
$ su - oracle
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ ssh-keygen -t rsa
$ ssh-keygen -t dsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
On his2:
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ ssh-keygen -t rsa
$ ssh-keygen -t dsa
On his1:
$ ssh his2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ ssh his2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ scp ~/.ssh/authorized_keys his2:~/.ssh/authorized_keys
Verify:
$ ssh his1 date
$ ssh his1-priv date
$ ssh his2 date
$ ssh his2-priv date
4.1.2 Set up the raw devices and disk array

Contact the hardware vendor for the disk-array expansion and related work. The disk layout must be the same on both machines; in particular, the device numbers of the attached disks must match.
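Before continuing, it is worth confirming that both nodes see the shared storage identically. A minimal sketch (device paths are assumptions; adjust to the actual raw-device or multipath layout):

# run on his1 and his2 and compare the results
$ fdisk -l                 # the shared LUNs should appear with the same sizes on both nodes
$ ls -l /dev/raw/          # raw-device bindings and their grid-owned permissions should match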
4.1.3 Reboot the his1 server
4.2 Stop the listener and VIP on the original node 1

$ srvctl disable listener -l listener_name -n his1
$ srvctl stop listener -l listener_name -n his1
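The commands above handle the listener; the node 1 VIP itself (removed later in section 4.5) can be stopped and disabled the same way. A minimal sketch, assuming the VIP resource name his1-vip used elsewhere in this document:

$ srvctl stop vip -n his1 -f
$ srvctl disable vip -i his1-vip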
4.3 Remove the original node 1 database instance information

As the oracle user, run:
$ dbca -silent -deleteInstance -nodeList his1 -gdbName orcl -instanceName orcl1 -sysDBAUserName sys -sysDBAPassword oracle
Now check the database configuration:
$ srvctl config database -d orcl
Database unique name: orcl
Database name:
Oracle home: /oracle/product/
Oracle user: oracle
Spfile:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcl
Database instances: his2      ----- previously his1, his2
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Database is administrator managed
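As an extra sanity check (not part of the original steps), the remaining instance can be confirmed with:

$ srvctl status database -d orcl
# should report only instance orcl2 running on node his2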
4.4 Clean up the RAC node information

Clean up the grid inventory:
/grid/gridhome/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/grid/gridhome "CLUSTER_NODES={his2}" CRS=TRUE -silent
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 20480 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /grid/gridbase/oraInventory
'UpdateNodeList' was successful.
Clean up the oracle inventory (run from the database home's oui/bin directory):
$ ./runInstaller -updateNodeList ORACLE_HOME=/oracle/product/ "CLUSTER_NODES={his2}"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 20480 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/oraInventory
'UpdateNodeList' was successful.
Check the /grid/gridhome/opmn/conf/ons.config file:
usesharedinstall=true
allowgroup=true
localport=6100   # line added by Agent
remoteport=6200  # line added by Agent
nodes=his2:6200  # line added by Agent
If his1 still appears in this file, remove it manually.
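A quick way to confirm that the surviving node no longer references his1 in its OPMN/ONS configuration (a sketch, not from the original document):

$ grep -ri his1 /grid/gridhome/opmn/conf/
# no output means the removed node is no longer referenced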
4.5 Remove the node 1 VIP

$ srvctl remove vip -i his1-vip
Please confirm that you intend to remove the VIPs his1-vip (y/[n]) y
4.6 Check the current cluster information

$ olsnodes -s -t
$ crs_stat -t
$ cluvfy stage -post nodedel -n his1 -verbose
Performing post-checks for node removal
Clusterware version consistency passed
The Oracle Clusterware is healthy on node "his2"
CRS integrity check passed
Node removal check passed
Post-check for node removal was successful.
4.7 Re-add node 1 to the cluster

4.7.1 Add the grid software

Environment checks:
$ cluvfy comp peer -refnode his2 -n his1 -verbose
$ cluvfy stage -pre nodeadd -n his1 -verbose

In /grid/gridhome/oui/bin:
$ export IGNORE_PREADDNODE_CHECKS=Y
$ ./addNode.sh -silent "CLUSTER_NEW_NODES={his1}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={his1-vip}"
Performing pre-checks for node addition
Node reachability check passed from node "his1"
User equivalence check passed for user "grid"
Verification of the hosts config file successful
... (output omitted) ...
WARNING: A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/grid/gridbase/oraInventory/orainstRoot.sh' with root privileges on nodes 'his1'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be
executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/grid/gridbase/oraInventory/orainstRoot.sh   On nodes his1
/grid/gridhome/root.sh                       On nodes his1
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The Cluster Node Addition of /grid/gridhome was successful.
Please check '/tmp/' for more details.
On his1, run the orainstRoot.sh and root.sh scripts listed above as the root user.
(Script output omitted.)
At this point the grid software has been added on his1, and ASM, the listener, and the VIP have all been started.
Check the cluster status:
[grid@his2 ~]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
(resource listing: the cluster resources report ONLINE on both his1 and his2; the remaining entries are OFFLINE)
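As an additional quick check (a sketch, not part of the original document), the clusterware stack and the node applications on the re-added node can be verified with:

$ crsctl check cluster -all
$ srvctl status nodeapps -n his1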
4.7.2 Add the oracle software

On server his2, run the following from the database home's oui/bin directory under /oracle/product/:
$ export IGNORE_PREADDNODE_CHECKS=Y
$ ./addNode.sh "CLUSTER_NEW_NODES={his1}"
Saving inventory on nodes
.
100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be
executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/oracle/product/ On nodes his1
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The Cluster Node Addition of /oracle/product/ was successful.
Please check '/tmp/' for more details.
On node his1, run the root.sh script under /oracle/product/ as the root user:
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /oracle/product/
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No
need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

Check the /grid/gridhome/opmn/conf/ons.config file on server his1:
$ more ons.config
usesharedinstall=true
allowgroup=true
localport=6100   # line added by Agent
remoteport=6200  # line added by Agent
nodes=his1:6200,his2:6200   # line added by Agent
If his1:6200 has not been added to /grid/gridhome/opmn/conf/ons.config on his2, add it manually.
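To confirm both nodes are registered, the nodes entry can be checked on his2 as well (a sketch; the file name under opmn/conf is assumed from the entries shown above):

$ grep "^nodes=" /grid/gridhome/opmn/conf/ons.config
# should list both his1:6200 and his2:6200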
4.7.3 Add the database instance

Run dbca as the oracle user on either node. Note: at this point the environment has only one active instance, orcl2 on his2. Add the orcl1 instance back, as sketched below.
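The document adds the instance through the interactive dbca wizard; an equivalent silent-mode sketch, mirroring the deleteInstance command in section 4.3 (the sys password shown there is reused here as an example):

$ dbca -silent -addInstance -nodeList his1 -gdbName orcl -instanceName orcl1 -sysDBAUserName sys -sysDBAPassword oracle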
At this point the node recovery is complete.
Chapter 5  Problems Encountered

When adding the oracle software, the earlier cleanup had only removed the grid node information, not the oracle node information, so the cluster believed the oracle software already existed on his1 and the files could not be copied over. The fix was to run the inventory update again (from the database home's oui/bin directory):
$ ./runInstaller -updateNodeList ORACLE_HOME=/oracle/product/ "CLUSTER_NODES={his2}"
Re-running the node addition then failed with:
SEVERE: Error ocurred while retrieving node numbers of the existing nodes. Please check if clusterware home is properly configured.
The workaround was to copy the olsnodes files from CRS_HOME/bin to ORACLE_HOME/bin:
cp /grid/gridhome/bin/olsnodes /oracle/product/
chown oracle:oinstall /oracle/product/
Then run:
./addNode.sh -silent "CLUSTER_NEW_NODES={his1}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={his1-vip}"
After the files have been copied over, the orainstRoot.sh and root.sh scripts need to be run again.