
ORA-39181: Only partial table data may be exported due to fine grain access control

The Data Pump expdp log shows the following error for some tables during the export:

EXP ORA-39181: Only partial table data may be exported due to fine grain access control on "ADMIN"."XCF"
EXP . . exported "ADMIN"."XCF" 0 KB 0 rows
EXP ORA-39181: Only partial table data may be exported due to fine grain access control on "ADMIN"."XDF"
EXP . . exported "ADMIN"."XDF" 0 KB 0 rows

Note that VPD was not even enabled here, yet the error still appeared.

Solution:

Grant the following privilege to the schema you are exporting:

SQL> GRANT EXEMPT ACCESS POLICY to <SCHEMA_NAME>;
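
If you want to see which fine-grained access control policies are raising the warning, DBA_POLICIES lists them (an illustrative query, using the schema owner from the log above):

SQL> SELECT object_owner, object_name, policy_name FROM dba_policies WHERE object_owner='ADMIN';

Note that EXEMPT ACCESS POLICY lets the schema bypass all row-level security policies, so you may want to revoke it once the export completes:

SQL> REVOKE EXEMPT ACCESS POLICY FROM <SCHEMA_NAME>;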

Setting udev rules for Oracle ASM on Linux

1) Use the lsblk command to list the disks currently allocated.

2) Use the command below to find each disk's unique SCSI ID.

/sbin/scsi_id -g -u -d /dev/sdr
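
If several disks are involved, a small loop collects all the IDs at once (a sketch; adjust the device glob for your environment):

for d in /dev/sd?; do
  echo "$d: $(/sbin/scsi_id -g -u -d $d)"
done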

3) Once you have the IDs of all disks to be used for ASM storage, add lines like the following to /etc/udev/rules.d/99-oracle-asmdevices.rules:

KERNEL=="sd?", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$name", RESULT=="36000c2966eff055daa2454a0d522523e", NAME="arcRedo", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$name", RESULT=="36000c29e693d7a0dfbd895da40a985f1", NAME="at01", OWNER="grid", GROUP="asmadmin", MODE="0660"

4) Once the rules are in place, reload udev with the commands below. After the rules take effect, we can test the desired disk.

/sbin/udevadm control --reload-rules

/sbin/start_udev

udevadm test /dev/arcRedo
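
Note: /sbin/start_udev exists on RHEL/OL 5 and 6 only. On RHEL/OL 7 and later, triggering the rules with udevadm achieves the same result (a sketch):

/sbin/udevadm control --reload-rules
/sbin/udevadm trigger --type=devices --action=change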

5) Verify with the command below; the device should be a block device owned by grid:asmadmin.

[root@TESTING ~]# ls -l /dev/arcRedo
brw-rw---- 1 grid asmadmin 8, 32 Aug 19 05:43 /dev/arcRedo

Applying Oracle Critical Patch Update of April 2017 on Oracle 12.1.0.2.0

 

1) Find the current OPatch version.

[grid@TESTING ora00]$ /ora00/app/grid/product/12.1.0/OPatch/opatch version
OPatch Version: 12.1.0.1.3

OPatch succeeded.

2) Download the updated OPatch from Oracle Metalink.

TESTING:oracle $ cd patchopatch/
TESTING:oracle $ ls
p21142429_121010_Linux-x86-64.zip

3) Apply the OPatch update to the current Oracle Home and Grid Home.

Oracle Home:

TESTING:oracle $ unzip -o -qq p21142429_121010_Linux-x86-64.zip -d /ora00/app/oracle/product/12.1.0
TESTING:oracle $ /ora00/app/oracle/product/12.1.0/OPatch/opatch version
OPatch Version: 12.1.0.1.10

OPatch succeeded.

Grid Home:

[grid@TESTING patchopatch]$ unzip -o -qq p21142429_121010_Linux-x86-64.zip -d /ora00/app/grid/product/12.1.0
[grid@TESTING patchopatch]$ /ora00/app/grid/product/12.1.0/OPatch/opatch version
OPatch Version: 12.1.0.1.10

OPatch succeeded.

4) Stop Grid and Database Services.
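
For an Oracle Restart (SIHA) setup like this one, that can look like the following (a sketch; the database name is illustrative):

[oracle@TESTING ~]$ srvctl stop database -d <DB_NAME>
[root@TESTING ~]# /ora00/app/grid/product/12.1.0/bin/crsctl stop has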

5) Analyze the patch with the opatchauto utility.

[root@TESTING ora00]# /ora00/app/grid/product/12.1.0/OPatch/opatchauto apply /ora00/25434003 -analyze -oh /ora00/app/grid/product/12.1.0
OPatch Automation Tool
Copyright (c)2014, Oracle Corporation. All rights reserved.

OPatchauto Version : 12.1.0.1.10
OUI Version : 12.1.0.2.0
Running from : /ora00/app/grid/product/12.1.0

opatchauto log file: /ora00/app/grid/product/12.1.0/cfgtoollogs/opatchauto/25434003/opatch_gi_2017-06-22_05-45-15_analyze.log

NOTE: opatchauto is running in ANALYZE mode. There will be no change to your system.

Parameter Validation: Successful

Configuration Validation: Successful

Patch Location: /ora00/25434003
Grid Infrastructure Patch(es): 21436941 25171037 25363740 25363750
DB Patch(es): 25171037 25363740

Patch Validation: Successful
User specified following Grid Infrastructure home:
/ora00/app/grid/product/12.1.0




Analyzing patch(es) on "/ora00/app/grid/product/12.1.0" ...
Patch "/ora00/25434003/21436941" successfully analyzed on "/ora00/app/grid/product/12.1.0" for apply.
Patch "/ora00/25434003/25171037" successfully analyzed on "/ora00/app/grid/product/12.1.0" for apply.
Patch "/ora00/25434003/25363740" successfully analyzed on "/ora00/app/grid/product/12.1.0" for apply.
Patch "/ora00/25434003/25363750" successfully analyzed on "/ora00/app/grid/product/12.1.0" for apply.

Apply Summary:
Following patch(es) are successfully analyzed:
GI Home: /ora00/app/grid/product/12.1.0: 21436941,25171037,25363740,25363750

opatchauto succeeded.

6) Once the steps above complete successfully, generate the ocm.rsp file. (With later OPatch versions this response file is no longer needed.)

export ORACLE_HOME=/ora00/app/grid/product/12.1.0
$ORACLE_HOME/OPatch/ocm/bin/emocmrsp -no_banner -output /ora00/ocm.rsp

Applying the patch to the Grid Home:

[root@TESTING bin]# /ora00/app/grid/product/12.1.0/OPatch/opatchauto apply /ora00/25434003 -oh /ora00/app/grid/product/12.1.0 -ocmrf /ora00/app/oracle/ocm.rsp
OPatch Automation Tool
Copyright (c)2014, Oracle Corporation. All rights reserved.

OPatchauto Version : 12.1.0.1.10
OUI Version : 12.1.0.2.0
Running from : /ora00/app/grid/product/12.1.0

opatchauto log file: /ora00/app/grid/product/12.1.0/cfgtoollogs/opatchauto/25434003/opatch_gi_2017-06-22_06-07-58_deploy.log

Parameter Validation: Successful

Configuration Validation: Successful

Patch Location: /ora00/25434003
Grid Infrastructure Patch(es): 21436941 25171037 25363740 25363750
DB Patch(es): 25171037 25363740

Patch Validation: Successful
User specified following Grid Infrastructure home:
/ora00/app/grid/product/12.1.0




Performing prepatch operations on SIHA Home... Successful

Applying patch(es) to "/ora00/app/grid/product/12.1.0" ...
Patch "/ora00/25434003/21436941" successfully applied to "/ora00/app/grid/product/12.1.0".
Patch "/ora00/25434003/25171037" successfully applied to "/ora00/app/grid/product/12.1.0".
Patch "/ora00/25434003/25363740" successfully applied to "/ora00/app/grid/product/12.1.0".
Patch "/ora00/25434003/25363750" successfully applied to "/ora00/app/grid/product/12.1.0".

Performing postpatch operations on SIHA Home... Successful

Apply Summary:
Following patch(es) are successfully installed:
GI Home: /ora00/app/grid/product/12.1.0: 21436941,25171037,25363740,25363750

opatchauto succeeded.

7) Repeat the same steps for Oracle Home:

[root@TESTING ~]# /ora00/app/oracle/product/12.1.0/OPatch/opatchauto apply /ora00/25434003 -oh /ora00/app/oracle/product/12.1.0/ -ocmrf /ora00/app/oracle/ocm.rsp
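
Once both homes are patched, opatch lsinventory can confirm the installed patches (a sketch):

[oracle@TESTING ~]$ /ora00/app/oracle/product/12.1.0/OPatch/opatch lsinventory | grep -i applied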

 


Applying Critical Patch Update of April 2017 on Solaris 11.3.

Applying Oracle Solaris 11.3.20.5.0 on Solaris 11.3.

To download the Critical Patch Update, use the link below:

Critical Patch Update

Once the necessary zip files are downloaded, copy them to the primary server and install the repository.

On the primary server:

root@SEP02WTR-3612:/softrepo/repo113full# ./install-repo.ksh -d /softrepo/repo113full -c -v -I
Using p25977008_1100_SOLARIS64 files for sol-11_3_20_5_0-incr-repo download.

Comparing digests of downloaded files...done. Digests match.

Uncompressing p25977008_1100_SOLARIS64_1of4.zip...done.
Uncompressing p25977008_1100_SOLARIS64_2of4.zip...done.
Uncompressing p25977008_1100_SOLARIS64_3of4.zip...done.
Uncompressing p25977008_1100_SOLARIS64_4of4.zip...done.
Repository can be found in /softrepo/repo113full.
Initiating repository verification.
Building ISO image...done.
ISO image can be found at:
/softrepo/repo113full/sol-11_3_20_5_0-incr-repo.iso
Instructions for using the ISO image can be found at:
/softrepo/repo113full/README-repo-iso.txt

 

Share the repository directory over NFS:

share -F nfs /softrepo/repo113full/

 

On the target LDOM, apply the desired Critical Patch Update:
Solaris-2:root@TESTING:~# pkg unset-publisher solaris
Updating package cache 1/1
solaris-2:root@TESTING:~# mount -F nfs 153.71.78.20:/softrepo/repo113full/ /mnt_11_3
solaris-2:root@TESTING:~# mount -F hsfs /mnt_11_3/sol-11_3_20_5_0-incr-repo.iso /mnt
solaris-2:root@TESTING:~# pkg set-publisher -g file:///mnt/repo solaris
solaris-2:root@TESTING:~# pkg update -nv
 Packages to update: 2
 Estimated space available: 9.03 GB
Estimated space to be consumed: 63.22 MB
 Create boot environment: No
Create backup boot environment: Yes
 Rebuild boot archive: No

Changed packages:
solaris
 consolidation/ddt/ddt-incorporation
 8.9.15.9.11,5.11:20150916T171410Z -> 8.15.17.3.10,0.5.11-0.175.3.19.0.2.0:20170328T025129Z
 support/explorer
 8.9.15.9.11,5.11:20150916T171411Z -> 8.15.17.3.10,0.5.11-0.175.3.19.0.2.0:20170328T025130Z

solaris-2:root@TESTING:~# pkg update
 Packages to update: 2
 Create boot environment: No
Create backup boot environment: Yes

DOWNLOAD PKGS FILES XFER (MB) SPEED
Completed 2/2 1365/1365 8.3/8.3 0B/s




PHASE ITEMS
Removing old actions 11/11
Installing new actions 26/26
Updating modified actions 1346/1346
Updating package state database Done
Updating package cache 2/2
Updating image state Done
Creating fast lookup database Done
Updating package cache 1/1
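
To confirm the new versions after the update, pkg list can be run against the packages named above (a sketch):

solaris-2:root@TESTING:~# pkg list support/explorer consolidation/ddt/ddt-incorporation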



addnode.sh hangs while copying files to the other node.

 

Recently I was trying to add a node to a cluster, and addnode.sh kept hanging at the step shown below:

 

$ ./addnode.sh -silent "CLUSTER_NEW_NODES={SEP02PVVM335}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={SEP02PVVM335-VIP}"
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB. Actual 40619 MB Passed
Checking swap space: must be greater than 150 MB. Actual 49143 MB Passed
[WARNING] [INS-13014] Target environment does not meet some optional requirements.
 CAUSE: Some of the optional prerequisites are not met. See logs for details. /ora00/app/oraInventory/logs/addNodeActions2017-06-02_06-50-36AM.log
 ACTION: Identify the list of failed prerequisite checks from the log: /ora00/app/oraInventory/logs/addNodeActions2017-06-02_06-50-36AM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.

Prepare Configuration in progress.

Prepare Configuration successful.
.................................................. 8% Done.
You can find the log of this install session at:
 /ora00/app/oraInventory/logs/addNodeActions2017-06-02_06-50-36AM.log

Instantiate files in progress.

Instantiate files successful.
.................................................. 14% Done.

Copying files to node in progress.

 

First of all, GRID_HOME should be owned by the grid user; by default it may end up owned by root.

After searching Metalink, I found that this issue can be caused by the bug below:

Bug 12318325 – Addnode.sh takes longer due to audit files in GRID_HOME/rdbms/audit. (Doc ID 12318325.8)

 

Once I cleaned up those audit files (a sketch of the cleanup follows), I was able to run addnode.sh successfully.
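
The cleanup itself is straightforward (a sketch, run as the Grid software owner; the retention window is illustrative):

$ cd /ora00/grid/product/12.1.0/rdbms/audit
$ find . -name '*.aud' -mtime +7 -exec rm -f {} +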

 

$ ./addnode.sh -silent "CLUSTER_NEW_NODES={SEP02PVVM335}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={SEP02PVVM335-VIP}"
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB. Actual 40543 MB Passed
Checking swap space: must be greater than 150 MB. Actual 49143 MB Passed

Prepare Configuration in progress.

Prepare Configuration successful.
.................................................. 8% Done.
You can find the log of this install session at:
/ora00/app/oraInventory/logs/addNodeActions2017-06-02_09-56-37AM.log

Instantiate files in progress.

Instantiate files successful.
.................................................. 14% Done.

Copying files to node in progress.

Copying files to node successful.
.................................................. 73% Done.

Saving cluster inventory in progress.
.................................................. 80% Done.

Saving cluster inventory successful.
The Cluster Node Addition of /ora00/grid/product/12.1.0 was successful.
Please check '/tmp/silentInstall.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
.................................................. 88% Done.

As a root user, execute the following script(s):
 1. /ora00/app/oraInventory/orainstRoot.sh
 2. /ora00/grid/product/12.1.0/root.sh

Execute /ora00/app/oraInventory/orainstRoot.sh on the following nodes:
[SEP02PVVM335]
Execute /ora00/grid/product/12.1.0/root.sh on the following nodes:
[SEP02PVVM335]

The scripts can be executed in parallel on all the nodes.

..........
Update Inventory in progress.
.................................................. 100% Done.

Update Inventory successful.
Successfully Setup Software.
$

DBMS_REPCAT_ADMIN package not present on Oracle Release 12.2.0.1.0 after Upgrade

Recently we upgraded a database server from Oracle 12.1.0.2.0 to 12.2.0.1.0.

After the upgrade, we were unable to find the DBMS_REPCAT_ADMIN package:

SQL> select owner,object_type,object_name from dba_objects where object_name='DBMS_REPCAT_ADMIN'; 

no rows selected

Because of this, our scripts were failing with the following error:

BEGIN DBMS_REPCAT_ADMIN.GRANT_ADMIN_ANY_REPGROUP(userid => 'admin' ); END; 

* 
ERROR at line 1: 
ORA-06550: line 1, column 7: 
PLS-00201: identifier 'DBMS_REPCAT_ADMIN.GRANT_ADMIN_ANY_REPGROUP' must be 
declared 
ORA-06550: line 1, column 7: 
PL/SQL: Statement ignored

Whereas on the existing Oracle 12.1.0.2.0 version the package is easy to find:

SQL> select owner,object_type,object_name from dba_objects where object_name='DBMS_REPCAT_ADMIN';

OWNER 
-------------------------------------------------------------------------------- 
OBJECT_TYPE 
----------------------- 
OBJECT_NAME 
-------------------------------------------------------------------------------- 
SYS 
PACKAGE BODY 
DBMS_REPCAT_ADMIN

 

On further reading of the Oracle release notes, I learned that these packages belong to the Advanced Replication feature, which was deprecated in 12.1 but could still be used. In 12.2 these packages are completely removed and can no longer be used.
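
Before upgrading, it is worth checking whether any of your own code still references these packages. A query like the following (illustrative) lists the dependents:

SQL> SELECT owner, name, type FROM dba_dependencies WHERE referenced_name LIKE 'DBMS_REPCAT%';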

The Oracle documentation link below covers this:

Desupport of Oracle Advanced Replication


Upgrade to Oracle Database 12c Release 2 (12.2.0.1.0) — Part 2

In this part we will upgrade the Grid Home and the Database Home to Oracle 12.2.0.1.0.

The required software can be downloaded from the link below:

Oracle Software

Once the software is downloaded, we will upgrade the Grid Home first and then the Oracle Database.

 

1) Upgrading Grid Infrastructure to Oracle 12.2.0.1.0.

 

[Screenshots Grid 1-Grid 9: Grid Infrastructure 12.2.0.1.0 installer steps]

Note: rootupgrade.sh may fail during the Grid installation with the error messages below.

a) ORA-01078 and ORA-29701

–ERROR MESSAGE–

[root@SEP02PVVM392 rpm]# /ora00/app/grid/product/12.2.0/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
 ORACLE_OWNER= grid
 ORACLE_HOME= /ora00/app/grid/product/12.2.0

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /ora00/app/grid/product/12.2.0/crs/install/crsconfig_params
The log of current session can be found at:
 /ora00/app/grid/crsdata/sep02pvvm392/crsconfig/roothas_2017-03-23_04-46-20AM.log

Upgrade cannot proceed because the ASM instance failed to start. Fix the issue or startup the ASM instance, verify and try again.
ORA-01078: failure in processing system parameters
ORA-29701: unable to connect to Cluster Synchronization Service


2017/03/23 04:46:26 CLSRSC-164: ASM upgrade failed
2017/03/23 04:46:26 CLSRSC-304: Failed to upgrade ASM for Oracle Restart configuration
Died at /ora00/app/grid/product/12.2.0/crs/install/crsupgrade.pm line 3083.
The command '/ora00/app/grid/product/12.2.0/perl/bin/perl -I/ora00/app/grid/product/12.2.0/perl/lib -I/ora00/app/grid/product/12.2.0/crs/install /ora00/app/grid/product/12.2.0/crs/install/roothas.pl -upgrade' execution failed

 

Solution: start ASM from the new Grid Home (a sketch follows) and run rootupgrade.sh again.
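
One way to bring ASM up from the new home before retrying (a sketch, run as the grid user):

$ export ORACLE_HOME=/ora00/app/grid/product/12.2.0
$ export ORACLE_SID=+ASM
$ $ORACLE_HOME/bin/sqlplus / as sysasm
SQL> startup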

[root@TESTING rpm]# /ora00/app/grid/product/12.2.0/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
 ORACLE_OWNER= grid
 ORACLE_HOME= /ora00/app/grid/product/12.2.0

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /ora00/app/grid/product/12.2.0/crs/install/crsconfig_params
The log of current session can be found at:
 /ora00/app/grid/crsdata/testing/crsconfig/roothas_2017-03-23_04-50-44AM.log

ASM has been upgraded and started successfully.

Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node testing successfully pinned.
2017/03/23 04:51:10 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
CRS-4123: Oracle High Availability Services has been started.

2017/03/23 04:55:03 CLSRSC-482: Running command: 'srvctl upgrade model -s 12.1.0.2.0 -d 12.2.0.1.0 -p first'
2017/03/23 04:55:05 CLSRSC-482: Running command: 'srvctl upgrade model -s 12.1.0.2.0 -d 12.2.0.1.0 -p last'


testing 2017/03/23 04:55:06 /ora00/app/grid/product/12.2.0/cdata/testing/backup_20170323_045506.olr 0

testing 2016/07/20 10:18:23 /ora00/app/grid/product/12.1.0/cdata/testing/backup_20160720_101823.olr 0

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'testing'
CRS-2673: Attempting to stop 'ora.evmd' on 'testing'
CRS-2677: Stop of 'ora.evmd' on 'testing' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'testing' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/03/23 04:55:57 CLSRSC-327: Successfully configured Oracle Restart for a standalone server

 

b) DIA-49802 and DIA-49801

I hit the errors above on a different VM that was cloned from this source VM.

# /ora00/app/grid/product/12.2.0/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
 ORACLE_OWNER= grid
 ORACLE_HOME= /ora00/app/grid/product/12.2.0

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /ora00/app/grid/product/12.2.0/crs/install/crsconfig_params
The log of current session can be found at:
 /ora00/app/grid/crsdata/sep02pvvm-405/crsconfig/roothas_2017-04-17_06-07-09AM.log


ASM has been upgraded and started successfully.

Oracle Clusterware infrastructure error in OCRCONFIG (OS PID 20893): CLSD/ADR initialization failed with return value -1
1: clskec:has:CLSU:910 4 args[clsdAdrInit_CLSK_err][mod=clsdadr.c][loc=(:CLSD00281:)][msg=clsdAdrInit: Additional diagnostic data returned by the ADR component for dbgc_init_all failure:
 DIA-49802: missing read, write, or execute permission on specified ADR home directory [/ora00/app/grid/diag/crs/sep02pvvm-405/crs/log]
DIA-49801: actual permissions [rwxrwx---], expected minimum permissions [rwxrwxrwx] for effective user [grid]
DIA-48188: user missing read, write, or exec permission on specified directory
Linux-x86_64 Error: 13: Permission denied
Additional information: 2
Additional information: 511
Additional information: 16888
([all diagnostic data retrieved from ADR])]
2: clskec:has:CLSU:910 4 args[clsdAdrInit_CLSK_err][mod=clsdadr.c][loc=(:CLSD00050:)][msg=clsdAdrInit: call to dbgc_init_all failed. facility:[CRS] product:[CRS] line number:[1422] return code: [ORA-49802] Oracle Base: [/ora00/app/grid] Product Type: [CRS] Host Name: [sep02pvvm-405] Instance ID: [crs] User Name: [grid]]


Oracle Clusterware infrastructure error in CLSCFG (OS PID 20903): CLSD/ADR initialization failed with return value -1
1: clskec:has:CLSU:910 4 args[clsdAdrInit_CLSK_err][mod=clsdadr.c][loc=(:CLSD00281:)][msg=clsdAdrInit: Additional diagnostic data returned by the ADR component for dbgc_init_all failure:
 DIA-49802: missing read, write, or execute permission on specified ADR home directory [/ora00/app/grid/diag/crs/sep02pvvm-405/crs/log]
DIA-49801: actual permissions [rwxrwx---], expected minimum permissions [rwxrwxrwx] for effective user [grid]
DIA-48188: user missing read, write, or exec permission on specified directory
Linux-x86_64 Error: 13: Permission denied
Additional information: 2
Additional information: 511
Additional information: 16888
([all diagnostic data retrieved from ADR])]
2: clskec:has:CLSU:910 4 args[clsdAdrInit_CLSK_err][mod=clsdadr.c][loc=(:CLSD00050:)][msg=clsdAdrInit: call to dbgc_init_all failed. facility:[CRS] product:[CRS] line number:[1422] return code: [ORA-49802] Oracle Base: [/ora00/app/grid] Product Type: [CRS] Host Name: [sep02pvvm-405] Instance ID: [crs] User Name: [grid]]

Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node sep02pvvm-405 successfully pinned.
2017/04/17 06:08:41 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2017/04/17 06:10:46 CLSRSC-214: Failed to start the resource 'ohasd'
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2017-04-17 06:09:23.605
[client(22558)]CRS-8500:Oracle Clusterware OHASD process is starting with operating system process ID 22558
2017-04-17 06:09:23.611
[client(22558)]CRS-2112:The OLR service started on node sep02pvvm-405.
2017-04-17 06:09:25.240
[client(22651)]CRS-8500:Oracle Clusterware OHASD process is starting with operating system process ID 22651
2017-04-17 06:09:25.247
[client(22651)]CRS-2112:The OLR service started on node sep02pvvm-405.
2017-04-17 06:09:26.901
[client(22743)]CRS-8500:Oracle Clusterware OHASD process is starting with operating system process ID 22743
2017-04-17 06:09:26.908
[client(22743)]CRS-2112:The OLR service started on node sep02pvvm-405.
2017-04-17 06:09:28.415
[client(22831)]CRS-8500:Oracle Clusterware OHASD process is starting with operating system process ID 22831
2017-04-17 06:09:28.422
[client(22831)]CRS-2112:The OLR service started on node sep02pvvm-405.
2017-04-17 06:09:30.014
[client(22914)]CRS-8500:Oracle Clusterware OHASD process is starting with operating system process ID 22914
2017-04-17 06:09:30.020
[client(22914)]CRS-2112:The OLR service started on node sep02pvvm-405.

2017/04/17 06:10:46 CLSRSC-318: Failed to start Oracle OHASD service
Died at /ora00/app/grid/product/12.2.0/crs/install/crsinstall.pm line 2775.
The command '/ora00/app/grid/product/12.2.0/perl/bin/perl -I/ora00/app/grid/product/12.2.0/perl/lib -I/ora00/app/grid/product/12.2.0/crs/install /ora00/app/grid/product/12.2.0/crs/install/roothas.pl -upgrade' execution failed

For the errors above, I had to change the ownership of the $GRID_HOME/diag/crs/<hostname>/crs/* directories from root to the grid user:

[root@SEP02PVVM-405 crs]# chown -R grid:oinstall alert
[root@SEP02PVVM-405 crs]# chown -R grid:oinstall cdump/
[root@SEP02PVVM-405 crs]# chown -R grid:oinstall incident/
[root@SEP02PVVM-405 crs]# chown -R grid:oinstall incpkg
[root@SEP02PVVM-405 crs]# chown -R grid:oinstall lck
[root@SEP02PVVM-405 crs]# chown -R grid:oinstall log
[root@SEP02PVVM-405 crs]# chown -R grid:oinstall metadata
[root@SEP02PVVM-405 crs]# chown -R grid:oinstall metadata_dgif
[root@SEP02PVVM-405 crs]# chown -R grid:oinstall metadata_pv
[root@SEP02PVVM-405 crs]# chown -R grid:oinstall stage
[root@SEP02PVVM-405 crs]# chown -R grid:oinstall sweep
[root@SEP02PVVM-405 crs]# chown -R grid:oinstall trace
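
Equivalently, a single recursive chown on the parent directory covers all of the subdirectories above (a sketch):

[root@SEP02PVVM-405 ~]# chown -R grid:oinstall /ora00/app/grid/diag/crs/sep02pvvm-405/crs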

Once this is done, you will be able to start ASM from the new Grid Home. I don't know why the ownership was wrong (perhaps something happened during the cloning process), but I thought it was worth sharing.

2) Once rootupgrade.sh has run, check the upgraded release version:

[grid@TESTING bin]$ ./crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [12.2.0.1.0]


[grid@TESTING bin]$ ./crsctl query has softwareversion
Oracle High Availability Services version on the local node is [12.2.0.1.0]

 

3) Once the Grid is upgraded, install the Oracle Database 12.2.0.1.0 software.

[Screenshots db1-db10: Oracle Database 12.2.0.1.0 software installation steps]

4) Once the Oracle 12.2.0.1.0 software is installed, you can upgrade the desired database using DBUA.

The Metalink document below contains all the necessary information:

Complete Checklist for Upgrading to Oracle Database 12c Release 2 (12.2) using DBUA (Doc ID 2189854.1)