Cloning Oracle Home in RAC
April 10, 2013
Cloning Oracle software is an easy and fast way to achieve standardization across an organization: all the effort goes into a single environment, and once tested, that environment becomes the source of binaries. A simple tarball can be shipped to every other server and untarred on the destination, either as a new home or as a replacement for the existing Oracle Home, depending on the space available there.
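The tar round trip described above can be sketched end to end with throwaway paths (the /tmp/clone_demo directories and the dummy oracle file below are purely illustrative; a real clone uses the actual Oracle Home, e.g. /u01/app/oracle/product/11.2.0.3/dbhome_1, and ships the tarball between servers):

```shell
# Illustrative stand-in for an Oracle Home; all paths here are hypothetical.
mkdir -p /tmp/clone_demo/source/SOURCE_HOME/bin
echo "dummy binary" > /tmp/clone_demo/source/SOURCE_HOME/bin/oracle

# Source server: create the tarball from the parent directory of the home.
cd /tmp/clone_demo/source
tar -zcf SOURCE_HOME.tar.gz SOURCE_HOME

# Target server: extract under the desired parent directory
# (simulated here by a second directory on the same machine).
mkdir -p /tmp/clone_demo/target
tar -zxf SOURCE_HOME.tar.gz -C /tmp/clone_demo/target

ls /tmp/clone_demo/target/SOURCE_HOME/bin   # -> oracle
```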
This method can also be adapted, with small twists, to achieve other results. Possible scenarios include:
- A lost filesystem hosting the Oracle database software, where the options are a fresh installation or cloning the software from a surviving node (in RAC) or from another environment (standalone).
- Enterprise-wide periodic patching, where cloning saves a lot of effort: build one patched image and clone it across the enterprise.
- Building new environments while migrating databases across datacenters.
Node addition in RAC also uses the cloning technique.
Cloning involves the following simple steps:
-
Take a backup of the central inventory on the target servers (optional, though recommended).
cd /u01/app
tar -zcf oraInventory.tar.gz oraInventory
-
Take a backup of the necessary configuration files from the $ORACLE_HOME/dbs and $ORACLE_HOME/network/admin folders on the target server.
mkdir -p /tmp/backup_config_files/network
mkdir -p /tmp/backup_config_files/dbs
cd /u01/app/oracle/product/11.2.0.3/dbhome_1
cp network/admin/tnsnames.ora /tmp/backup_config_files/network/.
cp dbs/* /tmp/backup_config_files/dbs/.
-
Create a tarball on the source server, or use an existing approved tarball from the source environment. Change directory (cd) to the parent directory of the source Oracle Home and use tar -zcf <backup_file>.tar.gz <source home>
Example:
cd /u01/app/oracle/product/11.2.0.3/
tar -zcf SOURCE_HOME.tar.gz SOURCE_HOME
Then ship SOURCE_HOME.tar.gz to the target server where the cloning will take place.
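The post does not cover it, but it is worth confirming the tarball arrived intact before extracting it. One common approach is a checksum generated on the source and verified on the target; the /tmp/ship_demo path and the tiny stand-in file below are hypothetical (a real run would checksum the actual multi-gigabyte tarball):

```shell
# Source server: record a checksum next to the tarball before shipping.
mkdir -p /tmp/ship_demo && cd /tmp/ship_demo
echo "stand-in home contents" > SOURCE_HOME.tar.gz   # stand-in for the real tarball
sha256sum SOURCE_HOME.tar.gz > SOURCE_HOME.tar.gz.sha256

# Target server: verify after the copy, before extracting.
sha256sum -c SOURCE_HOME.tar.gz.sha256   # -> SOURCE_HOME.tar.gz: OK
```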
-
If you want to reuse the same Oracle Home name and location, you must first detach the existing Oracle Home on the destination server, or the cloning will fail. Stop all instances using the Oracle Home if you are going to restore the new home at the same location/name.
Detaching Oracle Home from Inventory
./runInstaller -silent -detachHome ORACLE_HOME="/u01/app/oracle/product/11.2.0.3/dbhome_1" ORACLE_HOME_NAME="DB_HOME" -local
If we run it without the "-local" option, it detaches the Oracle Home on all nodes in the cluster:
./runInstaller -silent -detachHome ORACLE_HOME="/u01/app/oracle/product/11.2.0.3/dbhome_1" ORACLE_HOME_NAME="DB_HOME"
where runInstaller is located in $ORACLE_HOME/oui/bin.
Entry in the Oracle inventory before detaching the Oracle Home:
mask1% cd /u01/app/oraInventory/ContentsXML
mask1% grep -i DB_HOME inventory.xml
<HOME NAME="DB_HOME" LOC="/u01/app/oracle/product/11.2.0.3/dbhome_1" TYPE="O" IDX="4">
Detach the Oracle Home from node 1:
mask1% ./runInstaller -silent -detachHome ORACLE_HOME="/u01/app/oracle/product/11.2.0.3/dbhome_1" ORACLE_HOME_NAME="DB_HOME" -local
Starting Oracle Universal Installer…
Checking swap space: must be greater than 500 MB. Actual 24575 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'DetachHome' was successful.
Entry in the Oracle inventory after detaching the Oracle Home:
mask1% cd /u01/app/oraInventory/ContentsXML
mask1% grep -i DB_HOME inventory.xml
<HOME NAME="DB_HOME" LOC="/u01/app/oracle/product/11.2.0.3/dbhome_1" TYPE="O" IDX="4" REMOVED="T"/>
At this stage, if we try to query the Oracle inventory from the detached Oracle Home, we get an error:
mask1% cd /u01/app/oracle/product/11.2.0.3/dbhome_1/OPatch
mask1% ./opatch lsinventory -all
Inventory load failed… OPatch cannot load inventory for the given Oracle Home.
Possible causes are:
Oracle Home dir. path does not exist in Central Inventory
Oracle Home is a symbolic link
Oracle Home inventory is corrupted
LsInventorySession failed: OracleHomeInventory gets null oracleHomeInfo
OPatch failed with error code 73
This means our current Oracle Home is no longer part of the inventory, so we are free to remove the binaries and extract our tarball to the same or a new location.
Similarly, we detached the home from node 2 as well.
mask2% ./runInstaller -silent -detachHome ORACLE_HOME="/u01/app/oracle/product/11.2.0.3/dbhome_1" ORACLE_HOME_NAME="DB_HOME" -local
Starting Oracle Universal Installer…
Checking swap space: must be greater than 500 MB. Actual 24575 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'DetachHome' was successful.
In this case we are cloning to a new location to reduce downtime, since we want to change the path as well. If we had the option of reusing the existing location, we could perform the cloning in rolling fashion to reduce downtime to practically zero.
-
Extract the tarball to the designated location for the Oracle Home. Change directory (cd) to the parent directory of the destination Oracle Home, use tar -zxf <backup_file>.tar.gz, and then rename the extracted directory to the desired name.
Example:
cd /u01/app/oracle/product/11.2.0.3/
tar -zxf SOURCE_HOME.tar.gz
mv SOURCE_HOME mask
-
Execute the clone.pl command to complete the cloning process. In RAC you need to perform this operation on every node when using the -local option, or in one shot for the full cluster.
Executing the clone.pl command on node mask1 as the oracle user:
mask1% perl /u01/app/oracle/product/11.2.0.3/mask/clone/bin/clone.pl ORACLE_HOME="/u01/app/oracle/product/11.2.0.3/mask" ORACLE_HOME_NAME="mask_HOME" ORACLE_BASE="/u01/app/oracle" '-O"CLUSTER_NODES={mask1,mask2}"' '-O"LOCAL_NODE=mask1"'
./runInstaller -clone -waitForCompletion "ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/mask" "ORACLE_HOME_NAME=mask_HOME" "ORACLE_BASE=/u01/app/oracle" "CLUSTER_NODES={mask1,mask2}" "LOCAL_NODE=mask1" -silent -noConfig -nowait
Starting Oracle Universal Installer…
Checking swap space: must be greater than 500 MB. Actual 24575 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2013-03-22_11-58-10PM. Please wait …Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.
You can find the log of this install session at:
/u01/app/oraInventory/logs/cloneActions2013-03-22_11-58-10PM.log
.
Performing tests to see whether nodes mask1 are available
……………………………………………………… 100% Done.
Installation in progress (Friday, March 22, 2013 11:58:23 PM IST)
……………………………………………………………………. 79% Done.
Install successful
Linking in progress (Friday, March 22, 2013 11:58:30 PM IST)
Link successful
Setup in progress (Friday, March 22, 2013 11:59:02 PM IST)
Setup successful
End of install phases.(Friday, March 22, 2013 11:59:25 PM IST)
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/oracle/product/11.2.0.3/mask/root.sh #On nodes mask1
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The cloning of mask_HOME was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2013-03-22_11-58-10PM.log' for more details.
Executing root.sh from a root session on mask1:
[root@mask1 ~]# /u01/app/oracle/product/11.2.0.3/mask/root.sh
Check /u01/app/oracle/product/11.2.0.3/mask/install/root_mask2.lgk.nmk_2013-03-23_12-02-00.log for the output of root script
[root@mask1 ~]# cat /u01/app/oracle/product/11.2.0.3/mask/install/root_mask2.lgk.nmk_2013-03-23_12-02-00.log
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/11.2.0.3/mask
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
Executing the clone.pl command on node mask2 as the oracle user:
mask2% perl /u01/app/oracle/product/11.2.0.3/mask/clone/bin/clone.pl ORACLE_HOME="/u01/app/oracle/product/11.2.0.3/mask" ORACLE_HOME_NAME="mask_HOME" ORACLE_BASE="/u01/app/oracle" '-O"CLUSTER_NODES={mask1,mask2}"' '-O"LOCAL_NODE=mask2"'
./runInstaller -clone -waitForCompletion "ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/mask" "ORACLE_HOME_NAME=mask_HOME" "ORACLE_BASE=/u01/app/oracle" "CLUSTER_NODES={mask1,mask2}" "LOCAL_NODE=mask2" -silent -noConfig -nowait
Starting Oracle Universal Installer…
Checking swap space: must be greater than 500 MB. Actual 24575 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2013-03-23_12-05-01AM. Please wait …Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.
You can find the log of this install session at:
/u01/app/oraInventory/logs/cloneActions2013-03-23_12-05-01AM.log
.
Performing tests to see whether nodes mask1 are available
……………………………………………………… 100% Done.
Installation in progress (Saturday, March 23, 2013 12:05:23 AM IST)
……………………………………………………………………. 79% Done.
Install successful
Linking in progress (Saturday, March 23, 2013 12:05:30 AM IST)
Link successful
Setup in progress (Saturday, March 23, 2013 12:06:02 AM IST)
Setup successful
End of install phases.(Saturday, March 23, 2013 12:06:25 AM IST)
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/oracle/product/11.2.0.3/mask/root.sh #On nodes mask2
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The cloning of mask_HOME was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2013-03-23_12-05-01AM.log' for more details.
Executing root.sh from a root session on mask2:
[root@mask2 ~]# /u01/app/oracle/product/11.2.0.3/mask/root.sh
Check /u01/app/oracle/product/11.2.0.3/mask/install/root_mask2.lgk.nmk_2013-03-23_01-02-00.log for the output of root script
[root@mask2 ~]# cat /u01/app/oracle/product/11.2.0.3/mask/install/root_mask2.lgk.nmk_2013-03-23_01-02-00.log
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/11.2.0.3/mask
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
-
Verify the integrity of the inventory by querying opatch lsinventory:
mask1% opatch lsinventory -all_nodes | egrep -i 'applied|Patch description|node name'
Node Name : mask1
Patch 13632653 : applied on Fri Feb 01 23:54:58 CST 2013
Patch 12552578 : applied on Fri Feb 01 23:54:52 CST 2013
Patch 13743357 : applied on Fri Feb 01 23:54:48 CST 2013
Patch 14164849 : applied on Fri Feb 01 23:54:43 CST 2013
Patch 12317925 : applied on Fri Feb 01 23:54:40 CST 2013
Patch 14741727 : applied on Fri Feb 01 23:54:37 CST 2013
Patch 15843238 : applied on Fri Feb 01 23:54:31 CST 2013
Patch 14837414 : applied on Fri Feb 01 23:54:25 CST 2013
Patch 13615767 : applied on Fri Feb 01 23:54:21 CST 2013
Patch 13902963 : applied on Fri Feb 01 23:54:16 CST 2013
Patch 14793338 : applied on Fri Feb 01 23:54:11 CST 2013
Patch 14653598 : applied on Fri Feb 01 23:54:07 CST 2013
Patch 14757709 : applied on Fri Feb 01 23:54:02 CST 2013
Patch 13714926 : applied on Fri Feb 01 23:53:56 CST 2013
Patch 14499293 : applied on Fri Feb 01 23:53:52 CST 2013
Patch 14226599 : applied on Fri Feb 01 23:53:46 CST 2013
Patch 14307915 : applied on Fri Feb 01 23:52:58 CST 2013
Patch description: "QUARTERLY DISKMON PATCH FOR EXADATA (OCT 2012 - 11.2.0.3.11) : (14307915)"
Patch 14275572 : applied on Fri Feb 01 23:52:31 CST 2013
Patch description: "QUARTERLY CRS PATCH FOR EXADATA (OCT 2012 - 11.2.0.3.11) : (14275572)"
Patch 14474780 : applied on Fri Feb 01 23:50:57 CST 2013
Patch description: "QUARTERLY DATABASE PATCH FOR EXADATA (OCT 2012 - 11.2.0.3.11) : (14474780)"
Patch 14679292 : applied on Fri Feb 01 23:13:09 CST 2013
Patch 12646746 : applied on Wed Jun 27 18:28:26 CDT 2012
Patch 12985184 : applied on Wed Jun 27 18:27:32 CDT 2012
Patch 14029429 : applied on Wed Jun 27 18:27:07 CDT 2012
Patch 13365700 : applied on Wed Jun 27 18:26:41 CDT 2012
Patch 13508115 : applied on Wed Jun 27 18:26:21 CDT 2012
Patch 12977501 : applied on Wed Jun 27 18:25:06 CDT 2012
Patch 13404129 : applied on Wed Jun 27 18:24:49 CDT 2012
Patch 13014128 : applied on Wed Jun 27 18:24:24 CDT 2012
Patch 14058884 : applied on Wed Jun 27 18:19:31 CDT 2012
Node Name : mask2
Patch 13632653 : applied on Sat Feb 02 00:18:10 CST 2013
Patch 12552578 : applied on Sat Feb 02 00:18:05 CST 2013
Patch 13743357 : applied on Sat Feb 02 00:18:01 CST 2013
Patch 14164849 : applied on Sat Feb 02 00:17:57 CST 2013
Patch 12317925 : applied on Sat Feb 02 00:17:54 CST 2013
Patch 14741727 : applied on Sat Feb 02 00:17:50 CST 2013
Patch 15843238 : applied on Sat Feb 02 00:17:45 CST 2013
Patch 14837414 : applied on Sat Feb 02 00:17:40 CST 2013
Patch 13615767 : applied on Sat Feb 02 00:17:35 CST 2013
Patch 13902963 : applied on Sat Feb 02 00:17:30 CST 2013
Patch 14793338 : applied on Sat Feb 02 00:17:25 CST 2013
Patch 14653598 : applied on Sat Feb 02 00:17:20 CST 2013
Patch 14757709 : applied on Sat Feb 02 00:17:15 CST 2013
Patch 13714926 : applied on Sat Feb 02 00:17:10 CST 2013
Patch 14499293 : applied on Sat Feb 02 00:17:06 CST 2013
Patch 14226599 : applied on Sat Feb 02 00:17:01 CST 2013
Patch 14307915 : applied on Sat Feb 02 00:16:17 CST 2013
Patch description: "QUARTERLY DISKMON PATCH FOR EXADATA (OCT 2012 - 11.2.0.3.11) : (14307915)"
Patch 14275572 : applied on Sat Feb 02 00:15:50 CST 2013
Patch description: "QUARTERLY CRS PATCH FOR EXADATA (OCT 2012 - 11.2.0.3.11) : (14275572)"
Patch 14474780 : applied on Sat Feb 02 00:14:19 CST 2013
Patch description: "QUARTERLY DATABASE PATCH FOR EXADATA (OCT 2012 - 11.2.0.3.11) : (14474780)"
Patch 14679292 : applied on Fri Feb 01 23:24:27 CST 2013
Patch 12646746 : applied on Wed Jun 27 18:28:26 CDT 2012
Patch 12985184 : applied on Wed Jun 27 18:27:32 CDT 2012
Patch 14029429 : applied on Wed Jun 27 18:27:07 CDT 2012
Patch 13365700 : applied on Wed Jun 27 18:26:41 CDT 2012
Patch 13508115 : applied on Wed Jun 27 18:26:21 CDT 2012
Patch 12977501 : applied on Wed Jun 27 18:25:06 CDT 2012
Patch 13404129 : applied on Wed Jun 27 18:24:49 CDT 2012
Patch 13014128 : applied on Wed Jun 27 18:24:24 CDT 2012
Patch 14058884 : applied on Wed Jun 27 18:19:31 CDT 2012
-
Copy (or restore, in case the cloning was done on the same path) the necessary configuration files (i.e. tnsnames.ora, sqlnet.ora, the pfile pointing to the spfile, etc.) backed up in step 2.
cd /u01/app/oracle/product/11.2.0.3/dbhome_1
cp network/admin/tnsnames.ora ../mask/network/admin/.
cp dbs/* ../mask/dbs/.
-
Stop all database instances, if not already stopped in step 4.
mask1% srvctl stop database -d MASK
mask1% srvctl status database -d MASK
Instance MASK1 is not running on node mask1
Instance MASK2 is not running on node mask2
Back up the alert log file and trim the existing one, or move it aside with a _bkp suffix, so it is easy to notice alerts and messages that appear when the database starts from the new Oracle Home.
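A minimal sketch of that rotation, using a hypothetical /tmp/trace_demo directory in place of the real trace directory (alert logs actually live under the diagnostic destination, e.g. .../diag/rdbms/<db>/<instance>/trace/):

```shell
# Hypothetical stand-in for the instance trace directory.
mkdir -p /tmp/trace_demo && cd /tmp/trace_demo
echo "old alert entries" > alert_MASK1.log

# Keep the history under a _bkp suffix and start a fresh, empty log,
# so messages from the first startup out of the new home stand out.
mv alert_MASK1.log alert_MASK1.log_bkp
: > alert_MASK1.log
```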
-
Since we cloned the Oracle Home to a new path, we need to modify /etc/oratab and other local environment/profile files, so that from now on no reference for the databases switching to the new home points to the old Oracle Home. This step would not be required if we had cloned the Oracle Home to the same path.
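As a sketch of the /etc/oratab change (done here on a throwaway copy under a hypothetical /tmp/oratab_demo directory so nothing real is touched; the entry follows the paths used in this post, and a backup copy is kept for rollback):

```shell
# Work on a throwaway copy; a real change edits /etc/oratab as root.
mkdir -p /tmp/oratab_demo && cd /tmp/oratab_demo
cat > oratab <<'EOF'
MASK:/u01/app/oracle/product/11.2.0.3/dbhome_1:N
EOF
cp oratab oratab.bkp   # keep the original entry for rollback

# Point the MASK entry at the cloned home.
sed -i 's|/u01/app/oracle/product/11.2.0.3/dbhome_1|/u01/app/oracle/product/11.2.0.3/mask|' oratab

cat oratab   # -> MASK:/u01/app/oracle/product/11.2.0.3/mask:N
```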
-
For RAC or Oracle Restart, modify the database configuration with the srvctl command so the services are handled by CRS/Oracle Restart.
mask1% srvctl config database -d MASK | grep -i 'Oracle Home'
Oracle home: /u01/app/oracle/product/11.2.0.3/dbhome_1
mask1% srvctl modify database -d MASK -o /u01/app/oracle/product/11.2.0.3/mask
mask1% srvctl config database -d MASK | grep -i 'Oracle Home'
Oracle home: /u01/app/oracle/product/11.2.0.3/mask
-
Start the database from the new home.
mask1% srvctl start database -d MASK
mask1% srvctl status database -d MASK
Instance MASK1 is running on node mask1
Instance MASK2 is running on node mask2
If the new home has a higher patch level, execute the respective post-patching scripts.
-
Verify the alert log; the new Oracle Home should be listed as soon as the instances are started.
mask1% tail -f alert_MASK1.log | grep -i ORACLE_HOME
ORACLE_HOME = /u01/app/oracle/product/11.2.0.3/mask
mask2% tail -f alert_MASK2.log | grep -i ORACLE_HOME
ORACLE_HOME = /u01/app/oracle/product/11.2.0.3/mask
-
Check the alert logs for errors and exceptions. It is best practice to do this after every change, even when you are sure the change succeeded, to catch anything unusual cooking behind the scenes.
Now your database has successfully switched to the new Oracle Home.