Recovering a Damaged Oracle 11gR2 RAC Database Node

Published: June 26, 2023

Founder International, Public Safety Division

Technical Document

Address: 52 Beisihuan West Road, Haidian District, Beijing

3. Revision History

No.   Change Summary
1.
2.
3.
4.
5.

Chapter 1  Overview

In the Liaoning provincial public-security integrated policing platform project, a user at the Jinzhou database site expanded the disk array and replaced disks without first stopping the database. As a result, node 1 could no longer mount its local disk array and only one node could serve the database, so the Oracle cluster configuration had to be rebuilt.


Chapter 2  System Environment

Item                  RAC Node 1      RAC Node 2
Operating system      RedHat 6        RedHat 6
Cluster software      Oracle GRID     Oracle GRID
Server hostname       his1            his2
IP address (example)  172.31.1.50     172.31.1.51
System users          root, grid, oracle (same on both nodes)
System groups         dba, asmdba, asmadmin, oinstall, asmoper (same on both nodes)

Chapter 3  Database Environment

Item                      RAC Node 1     RAC Node 2
Public IP address         172.31.1.50    172.31.1.51
Virtual IP address        172.31.1.52    172.31.1.53
Heartbeat (private) IP    192.168.0.1    192.168.0.2
SCAN IP address           172.31.1.54 (shared)
Oracle RAC SID            orcl1          orcl2

Database name             orcl
Datafile path             +DATA
Archived logs             +ARCH
Database version          Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 (64-bit)
GRID_BASE directory       /u01/app/grid
GRID_HOME directory       /u01/app/grid/11.2.0
ORACLE_BASE directory     /u01/app/oracle
ORACLE_HOME directory     /u01/app/oracle/11.2.0
Database listener port    1521
Database character set    ZHS16GBK
sys/system password       oracle
Storage management        ASM
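Before any cluster work, the addresses above must resolve identically on both nodes. A minimal sketch of the /etc/hosts entries they imply; the -vip/-priv suffixes and the SCAN name "his-scan" are assumptions, not taken from the original, so match them to what is actually registered on site:

```shell
# Sketch of /etc/hosts entries implied by the address tables above.
# Hostname suffixes (-vip, -priv) and the SCAN name "his-scan" are assumed.
cat > hosts.example <<'EOF'
172.31.1.50   his1
172.31.1.51   his2
172.31.1.52   his1-vip
172.31.1.53   his2-vip
192.168.0.1   his1-priv
192.168.0.2   his2-priv
172.31.1.54   his-scan
EOF
cat hosts.example
```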

Chapter 4  Implementation Steps

4.1  Operating-system preparation

The preparation largely matches the RAC installation guide. If the operating system has to be reinstalled, follow that guide for the full OS configuration; only the following items are listed here.

4.1.1  Configure user equivalence for the grid and oracle users

Grid user (on his2, delete the old ~/.ssh directory first)

On his1:

su - grid
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ ssh-keygen -t rsa
$ ssh-keygen -t dsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

On his2:

$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ ssh-keygen -t rsa
$ ssh-keygen -t dsa

Back on his1 (append his2's public keys to the shared file, then push it back to his2):

$ ssh his2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ ssh his2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ scp ~/.ssh/authorized_keys his2:~/.ssh/authorized_keys

Verify:

$ ssh his1 date

$ ssh his1-priv date

$ ssh his2 date

$ ssh his2-priv date

Oracle user

On his1:

su - oracle
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ ssh-keygen -t rsa
$ ssh-keygen -t dsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

On his2:

$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ ssh-keygen -t rsa
$ ssh-keygen -t dsa

Back on his1:

$ ssh his2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ ssh his2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ scp ~/.ssh/authorized_keys his2:~/.ssh/authorized_keys

Verify:

$ ssh his1 date

$ ssh his1-priv date

$ ssh his2 date

$ ssh his2-priv date
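The four verification commands above can be wrapped in a loop; `-o BatchMode=yes` makes ssh fail immediately instead of prompting when equivalence is broken. This sketch only composes and prints the commands (hostnames taken from this document) rather than executing them:

```shell
# Compose an equivalence check for every public and private hostname.
# BatchMode=yes: ssh fails instead of prompting for a password.
CHECKS=""
for h in his1 his1-priv his2 his2-priv; do
  CHECKS="${CHECKS}ssh -o BatchMode=yes -o ConnectTimeout=5 $h date
"
done
printf '%s' "$CHECKS"
```

Paste the printed lines into a grid (or oracle) shell on his1; any prompt or error means equivalence is not yet in place.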

4.1.2  Set up raw devices and the disk array

Work with the hardware vendor on the disk-array expansion and related tasks. The disk layout must be identical on both machines; in particular, the device numbers of the mounted disks must match.

4.1.3  Reboot the his1 server

4.2  Stop the listener/VIP on the old node 1

$ srvctl disable listener -l listener_name -n his1

$ srvctl stop listener -l listener_name -n his1


4.3  Delete the old node 1 instance from the database configuration

As the oracle user, run:

$ dbca -silent -deleteInstance -nodeList his1 -gdbName orcl -instanceName orcl1 -sysDBAUserName sys -sysDBAPassword oracle

Now check the database configuration:

$ srvctl config database -d orcl

Database unique name: orcl

Database name:

Oracle home: /oracle/product/11.2.0/db_1

Oracle user: oracle

Spfile:

Domain:

Start options: open

Stop options: immediate

Database role: PRIMARY

Management policy: AUTOMATIC

Server pools: orcl

Database instances: his2 ----- previously his1, his2

Disk Groups: DATA

Mount point paths:

Services:

Type: RAC

Database is administrator managed

4.4  Clean up the RAC node information

Clean up the grid inventory:

/grid/gridhome/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/grid/gridhome "CLUSTER_NODES={his2}" CRS=TRUE -silent

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 20480 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /grid/gridbase/oraInventory
'UpdateNodeList' was successful.


Clean up the oracle inventory:

/oracle/product/11.2.0/db_1/oui/bin>runInstaller -updateNodeList ORACLE_HOME=/oracle/product/11.2.0/db_1 "CLUSTER_NODES={his2}"

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 20480 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/oraInventory
'UpdateNodeList' was successful.
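updateNodeList rewrites the node list held in the central inventory, so it is worth eyeballing ContentsXML/inventory.xml afterwards. The fragment below is a toy copy, not the real file (which lives under the oraInventory directory reported above), showing what a cleaned-up node list looks like:

```shell
# Toy inventory.xml fragment: after the cleanup, only his2 should remain
# in the <NODE_LIST> of each HOME entry. Illustrative copy only.
cat > inventory.xml <<'EOF'
<HOME NAME="OraDb11g_home1" LOC="/oracle/product/11.2.0/db_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="his2"/>
   </NODE_LIST>
</HOME>
EOF
grep -c '<NODE NAME=' inventory.xml
```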

Check the /grid/gridhome/opmn/conf/ons.config file:

usesharedinstall=true
allowgroup=true
localport=6100 # line added by Agent
remoteport=6200 # line added by Agent
nodes=his2:6200 # line added by Agent

If any his1 entries remain, remove them by hand.
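The manual edit can also be scripted. A sketch, operating on a throwaway local copy of the file (for real use, point CONF at $GRID_HOME/opmn/conf/ons.config; the line format follows the listing above):

```shell
# Strip a stale his1 entry from the nodes= line of ons.config.
# CONF is a throwaway copy here; aim it at the real file on site.
CONF=ons.config
printf 'nodes=his1:6200,his2:6200 # line added by Agent\n' > "$CONF"
sed -i 's/his1:6200,//' "$CONF"   # GNU sed in-place edit
cat "$CONF"
```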

4.5  Remove the node 1 VIP

$ srvctl remove vip -i his1-vip
Please confirm that you intend to remove the VIPs his1-vip (y/[n]) y

4.6  Check the current cluster state

$ olsnodes -s -t
$ crs_stat -t
$ cluvfy stage -post nodedel -n his1 -verbose

Performing post-checks for node removal

Checking CRS integrity...
Clusterware version consistency passed
The Oracle Clusterware is healthy on node "his2"
CRS integrity check passed

Result: Node removal check passed


Post-check for node removal was successful.

4.7  Re-add node 1 to the cluster

4.7.1  Add the grid software

Environment checks:

$ cluvfy comp peer -refnode his2 -n his1 -verbose
$ cluvfy stage -pre nodeadd -n his1 -verbose

Then, in /grid/gridhome/oui/bin:

$ export IGNORE_PREADDNODE_CHECKS=Y
$ ./addNode.sh -silent "CLUSTER_NEW_NODES={his1}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={his1-vip}"

Performing pre-checks for node addition

Node reachability check passed from node "his1"
User equivalence check passed for user "grid"
Checking hosts config file...
Verification of the hosts config file successful
...

WARNING: A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/grid/gridbase/oraInventory/orainstRoot.sh' with root privileges on nodes 'his1'.
If you do not register the inventory, you may not be able to update or patch the products you installed.

The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.

/grid/gridbase/oraInventory/orainstRoot.sh #On nodes his1


/grid/gridhome/root.sh #On nodes his1

To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node

The Cluster Node Addition of /grid/gridhome was successful.
Please check the log under '/tmp/' for more details.

On his1, run the orainstRoot.sh and root.sh scripts as root (output omitted here).

At this point the grid software has been added on his1, and ASM, the listener, and the VIP are all up.

Check the cluster state:

[grid@his2 ~]$ crs_stat -t

[Output abridged: the resource-name column was lost from the original listing. It showed the cluster resources ONLINE on both his1 and his2, with a few application resources OFFLINE.]

4.7.2  Add the oracle software

On his2, run the following in /oracle/product/11.2.0/db_1/oui/bin/:

$ export IGNORE_PREADDNODE_CHECKS=Y
$ ./addNode.sh "CLUSTER_NEW_NODES={his1}"

Saving inventory on nodes

. 100% Done.

Save inventory complete

WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.

/oracle/product/11.2.0/db_1/root.sh #On nodes his1

To execute the configuration scripts:

1. Open a terminal window

2. Log in as "root"

3. Run the scripts in each cluster node

The Cluster Node Addition of /oracle/product/11.2.0/db_1 was successful.

Please check the log under '/tmp/' for more details.

On node his1, run the /oracle/product/11.2.0/db_1/root.sh script as root:

Performing root user operation for Oracle 11g

The following environment variables are set as:

ORACLE_OWNER= oracle

ORACLE_HOME= /oracle/product/11.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.


The contents of "oraenv" have not changed. No need to overwrite.

The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Finished product-specific root actions.

Check the /grid/gridhome/opmn/conf/ons.config file on his1:

more ons.config
usesharedinstall=true
allowgroup=true
localport=6100 # line added by Agent
remoteport=6200 # line added by Agent
nodes=his1:6200,his2:6200 # line added by Agent

If /grid/gridhome/opmn/conf/ons.config on his2 has not picked up the his1:6200 entry, add it by hand.
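Adding the missing entry can likewise be scripted idempotently; the sed only fires when his1:6200 is absent (again shown on a throwaway copy of the file):

```shell
# Prepend his1:6200 to the nodes= line only if it is missing.
# CONF is a throwaway copy; point it at the real ons.config on site.
CONF=ons.config
printf 'nodes=his2:6200 # line added by Agent\n' > "$CONF"
grep -q 'his1:6200' "$CONF" || sed -i 's/^nodes=/nodes=his1:6200,/' "$CONF"
cat "$CONF"
```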

4.7.3  Add the database instance

Run dbca as the oracle user on either node (the GUI screenshots are omitted here).

Note: at this point the environment has only one instance, orcl2, active on his2.

Add the orcl1 instance.
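dbca can also do this without the GUI. A sketch of the silent-mode equivalent, mirroring the -deleteInstance call in section 4.3; the command is only composed and printed here, and the flags should be verified against dbca -help on your installation:

```shell
# Compose the silent-mode counterpart of the GUI steps above
# (mirrors the -deleteInstance invocation from section 4.3).
ADD_CMD='dbca -silent -addInstance -nodeList his1 -gdbName orcl'
ADD_CMD="$ADD_CMD -instanceName orcl1 -sysDBAUserName sys -sysDBAPassword oracle"
echo "$ADD_CMD"
```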

With that, the node recovery is complete.


Chapter 5  Problems Encountered

While adding the oracle software, file transfer to his1 failed: the earlier cleanup had removed only the grid information, not the oracle information, so the cluster believed the oracle software already existed on his1. The fix was to run the cleanup step again:

/oracle/product/11.2.0/db_1/oui/bin>runInstaller -updateNodeList ORACLE_HOME=/oracle/product/11.2.0/db_1 "CLUSTER_NODES={his2}"

The next attempt to add the node then failed with:

SEVERE: Error ocurred while retrieving node numbers of the existing nodes. Please check if clusterware home is properly configured.

The workaround was to copy two files (olsnodes and olsnodes.bin) from CRS_HOME/bin to ORACLE_HOME/bin:

# cp /grid/gridhome/bin/olsnodes* /oracle/product/11.2.0/db_1/bin/
# chown oracle:oinstall /oracle/product/11.2.0/db_1/bin/olsn*

Then run:

./addNode.sh -silent "CLUSTER_NEW_NODES={his1}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={his1-vip}"

After the files have been copied over, re-run the orainstRoot.sh and root.sh scripts.


Publisher: admin. Please credit the source when republishing: http://www.yc00.com/news/1687777796a43476.html
