Exadata Image Upgrade (Non-Rolling)
Successfully Upgraded Exadata Half Rack Image to 12.1.2.3.4
Precheck: Exadata Image Upgrade
1. Oracle recommends clearing all stateful alerts from the cell nodes (an example of clearing them follows this list)
[root@abcxyzadm01 ~]# dcli -g cell_group -l root "cellcli -e list alerthistory attributes name,beginTime,alertShortName,alertDescription,severity where alerttype=stateful and severity=critical"
2. Based on the Exachk report, check whether any hardware failures are reported; these must be fixed before you proceed with patching.
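For example, once the critical alerts have been reviewed, the alert history can be cleared with CellCLI. This is a suggested command only; narrow the scope if you want to keep older entries.
dcli -g cell_group -l root "cellcli -e drop alerthistory all"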
Cell Node/ Cell Server Patch Plan (Non-Rolling)
=====================================================
Pre-checks: make sure prechecks are completed
Start a screen session from the jump server before connecting to the DB node
screen -RR cell-patch
Update the cells using the patchmgr utility
Unzip the update. It will extract into the patch_12.1.2.3.4.170111 directory.
Change to the extracted patch directory
cd /u01/exa_img_upg/CELL/patch_12.1.2.3.4.170111
Stop agents if running (Non-Rolling)
Check agents and stop them if they are running
ps -ef | grep agent | grep java | sed 's/\s\+/ /g' | cut -d " " -f 1,8 | sed 's/\/jdk.*//'
dcli -g dbs_group -l root 'su -l oracle -c "/u01/app/em12c/core/12.1.0.4.0/bin/emctl stop agent"'
dcli -g dbs_group -l root 'su -l oracle -c "/u01/app/em12c/core/12.1.0.4.0/bin/emctl status agent"' | grep 'Agent is'
Check databases running before stopping CRS
/u01/app/12.1.0.2/grid/bin/crsctl status resource -t -w "TYPE = ora.database.type"
dcli -l root -g dbs_group "ps -ef | sort -k1.50 | grep pmon_ | grep -v grep && echo ''"
Stop and disable CRS
dcli -l root -g dbs_group /u01/app/12.1.0.2/grid/bin/crsctl disable crs
/u01/app/12.1.0.2/grid/bin/crsctl stop cluster -all
dcli -l root -g dbs_group /u01/app/12.1.0.2/grid/bin/crsctl stop crs
dcli -l root -g dbs_group '/u01/app/12.1.0.2/grid/bin/crsctl check crs | grep online | wc -l | while read retval; do if [[ $retval -eq 0 ]]; then echo CRS Stopped; elif [[ $retval -eq 4 ]]; then echo CRS Running; else echo CRS Not Ready; fi; done;'
Restart ILOM on all cell nodes (optional)
dcli -l root -g cell_group "ipmitool bmc reset cold"
Check uptime and reboot if needed
dcli -l root -g cell_group "uptime"
If the cells have been up for more than 7 days, reboot them.
dcli -l root -g cell_group "cellcli -e alter cell shutdown services all"
Verify that all cell services stopped successfully. If not, repeat until all cells are stopped.
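One way to confirm the services are down on every cell is to list the service status attributes (attribute names assumed from standard CellCLI output; all three should report stopped):
dcli -g cell_group -l root "cellcli -e list cell attributes name,cellsrvStatus,msStatus,rsStatus"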
dcli -l root -g cell_group sync
dcli -l root -g cell_group sync
dcli -l root -g cell_group reboot
Stop services on all cell nodes
cd /u01/exa_img_upg/CELL/patch_12.1.2.3.4.170111
dcli -l root -g cell_group "cellcli -e alter cell shutdown services all"
Clean up space from any previous runs
The -reset_force option is only needed the first time the cells are patched to this release.
It is not necessary for subsequent cell patching, even after rolling back the patch.
./patchmgr -cells cell_group -reset_force
Always use the -cleanup option before retrying a failed or halted run of the patchmgr utility.
./patchmgr -cells cell_group -cleanup
Download and install the latest plugins
cd /u01/exa_img_upg/CELL
chmod +x /u01/exa_img_upg/CELL/patch_12.1.2.3.4.170111/plugins/*
Run prerequisites check
cd /u01/exa_img_upg/CELL/patch_12.1.2.3.4.170111
./patchmgr -cells cell_group -patch_check_prereq
Patch the cell nodes
nohup ./patchmgr -cells cell_group -patch &
Monitor the patch progress
Monitor the ILOM console for each cell being patched. You may want to download the ilom-login.sh script from note 1616791.1 to assist with logging in to the ILOMs (a manual console example follows the commands below).
cd /u01/exa_img_upg/CELL/patch_12.1.2.3.4.170111
tail -f nohup.out
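If you are not using the ilom-login.sh script, a serial console can usually be opened manually from the jump server. The hostname below is a placeholder for the cell's ILOM:
ssh root@<cell-ilom-hostname>
start /SP/console
Exit the console with ESC followed by ( when finished.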
Clean up space
./patchmgr -cells cell_group -cleanup
Post Checks
dcli -l root -g cell_group imageinfo -version
dcli -l root -g cell_group imageinfo -status
dcli -l root -g cell_group "uname -r"
dcli -l root -g cell_group cellcli -e list cell
dcli -l root -g cell_group /opt/oracle.cellos/CheckHWnFWProfile
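To quickly spot a cell that did not come back cleanly, filter the image status (this assumes a healthy cell reports success):
dcli -l root -g cell_group imageinfo -status | grep -v success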
Additional checks (if there were problems)
cd /u01/exa_img_upg/CELL/patch_12.1.2.3.4.170111
cat patchmgr.stdout
cat the _wip_stdout file
ssh <cell-node>
cd /var/log/cellos
grep -i 'fail' validations.log
grep -i 'fail' vldrun*.log
cat validations.log
cat vldrun.upgrade_reimage_boot.log
cat vldrun.first_upgrade_boot.log
cat CheckHWnFWProfile.log
cat cell.bin.install.log
cat cellFirstboot.log
cat exachkcfg.log
cat patch.out.place.sh.log
cat install.sh.log
Skip the following if performing YUM or BP update next
Start agents and verify that they are running
Check agents and restart if not running
ps -ef | grep agent | grep java | sed 's/\s\+/ /g' | cut -d " " -f 1,8 | sed 's/\/jdk.*//'
dcli -g dbs_group -l root 'su -l oracle -c "/u01/app/em12c/core/12.1.0.4.0/bin/emctl start agent"'
dcli -g dbs_group -l root 'su -l oracle -c "/u01/app/em12c/core/12.1.0.4.0/bin/emctl status agent"' | grep 'Agent is'
Enable and start CRS
dcli -l root -g dbs_group /u01/app/12.1.0.2/grid/bin/crsctl enable crs
dcli -l root -g dbs_group /u01/app/12.1.0.2/grid/bin/crsctl start crs
dcli -l root -g dbs_group '/u01/app/12.1.0.2/grid/bin/crsctl check crs | grep online | wc -l | while read retval; do if [[ $retval -eq 0 ]]; then echo CRS Stopped; elif [[ $retval -eq 4 ]]; then echo CRS Running; else echo CRS Not Ready; fi; done;'
Verify databases started
/u01/app/12.1.0.2/grid/bin/crsctl status resource -t -w "TYPE = ora.database.type"
dcli -l root -g dbs_group "ps -ef | sort -k1.50 | grep pmon_ | grep -v grep && echo ''"
Rolling Back Successfully Patched Exadata Cells
(This section describes how to roll back successfully-patched Exadata Cells. Cells with incomplete or failed patching cannot be rolled back.)
Do not run more than one instance of the patchmgr utility at a time in the deployment.
Check the prerequisites using the following command:
./patchmgr -cells cell_group -rollback_check_prereq [-rolling]
Perform the rollback using the following command:
./patchmgr -cells cell_group -rollback [-rolling]
Compute Node / DB Node / YUM Patch Plan (Non-Rolling)
=====================================================
Pre-checks: make sure prechecks are completed
Compute Nodes
abcxyzadm01
abcxyzadm02
abcxyzadm03
abcxyzadm04
Check image version
dcli -l root -g dbs_group imageinfo -version
dcli -l root -g dbs_group imageinfo -status
dcli -l root -g dbs_group uname -r
Verify dbnodeupdate script version
Download the latest version of the dbnodeupdate script from patch 21634633.
Download dbserver.patch.zip (p21634633_122110_Linux-x86-64.zip), which contains dbnodeupdate.zip and patchmgr for dbnodeupdate orchestration.
cd /u01/exa_img_upg/YUM
unzip -o p21634633_122110_Linux-x86-64.zip
Should be at least version 5.151022
./dbnodeupdate.sh -V
ver=$(./dbnodeupdate.sh -V | awk '{print $3}'); if (( $(echo "$ver < 5.151022" | bc -l) )); then echo -e "\nFAIL: dbnodeupdate version too low. Update before proceeding.\n"; elif (( $(echo "$ver > 5.151022" | bc -l) )); then echo -e "\nPASS: dbnodeupdate version OK\n"; else echo -e "\nWARN: dbnodeupdate minimum version ($ver) detected. Check if there is a newer version before proceeding.\n"; fi
The dbnodeupdate script is updated frequently (sometimes daily). If it is not current, download the latest version.
Check databases running before stopping CRS
/u01/app/12.1.0.2/grid/bin/crsctl status resource -t -w "TYPE = ora.database.type"
ps -ef | grep pmon_ | grep -v grep
Stop the CRS (Non-Rolling)
Execute on one node
dcli -l root -g dbs_group /u01/app/12.1.0.2/grid/bin/crsctl disable crs
/u01/app/12.1.0.2/grid/bin/crsctl stop cluster -all
dcli -l root -g dbs_group /u01/app/12.1.0.2/grid/bin/crsctl stop crs
dcli -l root -g dbs_group '/u01/app/12.1.0.2/grid/bin/crsctl check crs | grep online | wc -l | while read retval; do if [[ $retval -eq 0 ]]; then echo CRS Stopped; elif [[ $retval -eq 4 ]]; then echo CRS Running; else echo CRS Not Ready; fi; done;'
Reboot servers and reset ILOM
dcli -l root -g dbs_group uptime
If uptime is more than 7 days, reboot the servers.
dcli -l root -g dbs_group reboot
Reset the ILOMs
dcli -l root -g dbs_group 'ipmitool bmc reset cold'
Unmount NFS partitions
dcli -l root -g dbs_group 'umount -a -t nfs -f -l'
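To confirm nothing is still mounted over NFS before the update, list the remaining NFS mounts; empty output means all were unmounted (add -t nfs4 if your shares use NFSv4):
dcli -l root -g dbs_group 'mount -t nfs'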
Run precheck
cd /u01/exa_img_upg/YUM
./dbnodeupdate.sh -u -l /u01/exa_img_upg/YUM/p25093501_121234_Linux-x86-64.zip -t 12.1.2.3.4.170111 -g -v
Perform backup and upgrade
Make sure to check the known issues section above before executing dbnodeupdate.sh
./dbnodeupdate.sh -u -l /u01/exa_img_upg/YUM/p25093501_121234_Linux-x86-64.zip -t 12.1.2.3.4.170111 -q
Monitor the reboot
Monitor the reboot of each node by logging in to the ILOM console.
After reboot completes
Before running the completion step, run the CheckHWnFWProfile script to make sure it passes. If it does not, shut the system down and power cycle it from the ILOM (stop /SYS, wait 5 minutes, start /SYS).
/opt/oracle.cellos/CheckHWnFWProfile
cd /u01/exa_img_upg/YUM
umount -a -t nfs -f -l
./dbnodeupdate.sh -t 12.1.2.3.4.170111 -c -g
mount -a
Verify fuse RPMs are installed
yum list installed | grep fuse
There should be three fuse RPMs. If not, check the note "Fuse packages removed as part of dbnodeupdate prereq check (Doc ID 2066488.1)".
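To check all compute nodes in one pass (expecting the same three packages on each node):
dcli -l root -g dbs_group 'rpm -qa | grep -i fuse'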
Check version and status
dcli -l root -g dbs_group imageinfo -version
dcli -l root -g dbs_group imageinfo -status
dcli -l root -g dbs_group uname -r
Enable CRS
/u01/app/12.1.0.2/grid/bin/crsctl enable crs
/u01/app/12.1.0.2/grid/bin/crsctl check crs
dcli -l root -g dbs_group '/u01/app/12.1.0.2/grid/bin/crsctl check crs | grep online | wc -l | while read retval; do if [[ $retval -eq 0 ]]; then echo CRS Stopped; elif [[ $retval -eq 4 ]]; then echo CRS Running; else echo CRS Not Ready; fi; done;'
Post checks
/u01/app/12.1.0.2/grid/bin/crsctl status resource -t -w "TYPE = ora.database.type"
The following checks if APM is disabled across all nodes
dcli -l root -g dbs_group 'cat /sys/module/ib_sdp/parameters/sdp_apm_enable'
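A value of 0 typically indicates APM is disabled. To flag only the nodes where it is not 0 (this filter assumes the usual "host: value" dcli output format):
dcli -l root -g dbs_group 'cat /sys/module/ib_sdp/parameters/sdp_apm_enable' | grep -v ': 0'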
Additional checks (if there were problems)
ssh <database-node>
cd /var/log/cellos/
cat dbnodeupdate.log
cat dbserver_backup.sh.log
cat CheckHWnFWProfile.log
cat exadata.computenode.post.log
cat cellFirstboot.log
cat exachkcfg.log
cat vldrun.each_boot.log
cat validations.log
Skip starting resources if applying Cell Patch next
Check agents and restart if not running
ps -ef | grep agent | grep java | sed 's/\s\+/ /g' | cut -d " " -f 1,8 | sed 's/\/jdk.*//'
dcli -g dbs_group -l root 'su -l oracle -c "/u01/app/em12c/core/12.1.0.4.0/bin/emctl start agent"'
dcli -g dbs_group -l root 'su -l oracle -c "/u01/app/em12c/core/12.1.0.4.0/bin/emctl status agent"' | grep 'Agent is'
Rollback Steps
1. Roll back the update with the dbnodeupdate.sh utility:
./dbnodeupdate.sh -r
2. Reboot the server using the reboot command.
# reboot
3. Run the dbnodeupdate.sh utility in 'completion mode' to finish the post-patching steps.
As with regular or one-time updates, when switching OS binaries while keeping the same Oracle Home, the database binaries must be relinked, so the 'post completion' step needs to be performed.
./dbnodeupdate.sh -c