
datapatch fails with ORA-04063: package body "SYS.DBMS_SQLPATCH" has errors

Recently, while applying the Critical Patch Update for October 2017 on one of my database servers, I ran into the following issue.

The patch apply process with the OPatch utility completed successfully, but the datapatch utility failed:

TESTING:oracle $ ./datapatch -verbose
SQL Patching tool version 12.2.0.1.0 Production on Thu Nov 16 04:33:57 2017
Copyright (c) 2012, 2017, Oracle. All rights reserved.

Log file for this invocation: /ora00/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_27492_2017_11_16_04_33_57/sqlpatch_invocation.log

Connecting to database...OK
Bootstrapping registry and package to current versions...done

DBD::Oracle::db selectrow_array failed: ORA-04063: package body "SYS.DBMS_SQLPATCH" has errors (DBD ERROR: OCIStmtExecute) [for Statement "SELECT dbms_sqlpatch.verify_queryable_inventory FROM dual"] at /ora00/app/oracle/product/12.2.0/sqlpatch/sqlpatch.pm line 4524, <LOGFILE> line 21.




Please refer to MOS Note 1609718.1 and/or the invocation log
/ora00/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_27492_2017_11_16_04_33_57/sqlpatch_invocation.log
for information on how to resolve the above errors.

SQL Patching tool complete on Thu Nov 16 04:33:57 2017

 

MOS Note 1609718.1 lists many known issues, but the package error shown above was not among them.

I then used the following steps to recreate the package body:

1) Find the current status of the package:

TESTING:oracle $ sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Thu Nov 16 05:04:24 2017

Copyright (c) 1982, 2016, Oracle. All rights reserved.




Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> SELECT dbms_sqlpatch.verify_queryable_inventory FROM dual;
SELECT dbms_sqlpatch.verify_queryable_inventory FROM dual
 *
ERROR at line 1:
ORA-04063: package body "SYS.DBMS_SQLPATCH" has errors
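
Optionally, you can confirm the invalid state and pull the underlying compilation errors from the data dictionary first (a quick check as SYSDBA):

SQL> SELECT object_name, object_type, status FROM dba_objects WHERE object_name = 'DBMS_SQLPATCH';

SQL> SELECT line, position, text FROM dba_errors WHERE owner = 'SYS' AND name = 'DBMS_SQLPATCH' AND type = 'PACKAGE BODY';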

 

2) The package body is invalid, so the next step is to find the script that creates it. I searched inside the $ORACLE_HOME/rdbms/admin directory with the following command:

TESTING:oracle $ find . | xargs grep "dbms_sqlpatch"
grep: .: Is a directory
./e1201000.sql:Rem surman 08/05/13 - 17005047: Add dbms_sqlpatch
./catpdbms.sql:Rem surman 08/02/13 - 17005047: Add dbms_sqlpatch
./catdwgrd.sql:Rem dbms_sqlpatch package, as the package may not be valid
./dbmssqlpatch.sql:Rem surman 08/18/14 - Always reload dbms_sqlpatch
./dbmssqlpatch.sql:CREATE OR REPLACE PACKAGE dbms_sqlpatch AS
./dbmssqlpatch.sql:END dbms_sqlpatch;
./dbmssqlpatch.sql:CREATE OR REPLACE PUBLIC SYNONYM dbms_sqlpatch FOR sys.dbms_sqlpatch;
./dbmssqlpatch.sql:GRANT EXECUTE ON dbms_sqlpatch TO execute_catalog_role;
grep: ./cdb_cloud: Is a directory
grep: ./cdb_cloud/sql: Is a directory
grep: ./cdb_cloud/dbt: Is a directory
grep: ./cdb_cloud/dbt/test: Is a directory
grep: ./cdb_cloud/rsp: Is a directory
grep: ./cdb_cloud/apex_install: Is a directory
grep: ./cdb_cloud/apex_install/ords: Is a directory
./prvtsqlpatch.plb: EXECUTE IMMEDIATE 'DROP TABLE dbms_sqlpatch_state';
./prvtsqlpatch.plb:CREATE TABLE dbms_sqlpatch_state (
./prvtsqlpatch.plb:CREATE OR REPLACE PACKAGE BODY dbms_sqlpatch wrapped
./catpprvt.sql:Rem surman 08/03/13 - 17005047: Add dbms_sqlpatch
./catpprvt.sql:-- 20772435: Queryable dbms_sqlpatch package body is now created in catxrd
./catxrd.sql:Rem dbms_sqlpatch package
./xrde121.sql:DROP PACKAGE dbms_sqlpatch;
./xrde121.sql:DROP PUBLIC SYNONYM dbms_sqlpatch;

From the above output we can see that the prvtsqlpatch.plb script creates the package body, so I ran it to recreate the dbms_sqlpatch package.
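
Before rerunning the creation script, a plain recompile is worth a quick try; if the body still fails to compile cleanly, recreating it as shown below is the next step:

SQL> ALTER PACKAGE sys.dbms_sqlpatch COMPILE BODY;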

3) Run prvtsqlpatch.plb to recreate the dbms_sqlpatch package body, then check the status of the package:

TESTING:oracle $ export ORACLE_SID=confdb

TESTING:oracle $ sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Thu Nov 16 05:04:24 2017

Copyright (c) 1982, 2016, Oracle. All rights reserved.




Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> @prvtsqlpatch.plb

Session altered.




PL/SQL procedure successfully completed.




Table created.




Package body created.

No errors.

Session altered.

SQL> SELECT dbms_sqlpatch.verify_queryable_inventory FROM dual;

VERIFY_QUERYABLE_INVENTORY
--------------------------------------------------------------------------------

OK

 

4) Once the package is recreated, we can apply the patch to the database with the datapatch utility:

TESTING:oracle $ ./datapatch -verbose
SQL Patching tool version 12.2.0.1.0 Production on Thu Nov 16 05:05:10 2017
Copyright (c) 2012, 2017, Oracle. All rights reserved.

Log file for this invocation: /ora00/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_5382_2017_11_16_05_05_10/sqlpatch_invocation.log

Connecting to database...OK
Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of SQL patches:
Bundle series DBRU:
 ID 171017 in the binary registry and not installed in the SQL registry

Adding patches to installation queue and performing prereq checks...
Installation queue:
 Nothing to roll back
 The following patches will be applied:
 26710464 (DATABASE RELEASE UPDATE 12.2.0.1.171017)

Installing patches...
Patch installation complete. Total patches installed: 1

Validating logfiles...
Patch 26710464 apply: SUCCESS
 logfile: /ora00/app/oracle/cfgtoollogs/sqlpatch/26710464/21632407/26710464_apply_CONFDB_2017Nov16_05_05_18.log (no errors)
SQL Patching tool complete on Thu Nov 16 05:06:11 2017
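
To double-check the result, the SQL patch registry can also be queried directly; dba_registry_sqlpatch records each apply and rollback action:

SQL> SELECT patch_id, action, status, description FROM dba_registry_sqlpatch;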

 

 

 

 

 


OPatch failed with error code 2

Before applying the Critical Patch Update for October 2017, I ran the prerequisite conflict check against the Oracle home, but it failed with the error below.

TESTING:oracle $ /ora00/app/oracle/product/12.2.0/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./
Oracle Interim Patch Installer version 12.2.0.1.6
Copyright (c) 2017, Oracle Corporation. All rights reserved.

PREREQ session

Oracle Home : /ora00/app/oracle/product/12.2.0
Central Inventory : /ora00/app/oraInventory
 from : /ora00/app/oracle/product/12.2.0/oraInst.loc
OPatch version : 12.2.0.1.6
OUI version : 12.2.0.1.4
Log file location : /ora00/app/oracle/product/12.2.0/cfgtoollogs/opatch/opatch2017-11-16_03-29-39AM_1.log

Invoking prereq "checkconflictagainstohwithdetail"
List of Homes on this system:

Prereq "checkConflictAgainstOHWithDetail" is not executed.

The details are:
Exception occured : RawInventory gets null OracleHomeInfo
Summary of Conflict Analysis:

There are no patches that can be applied now.

OPatch failed with error code 2

 

The above error occurs when OPatch cannot find the database home in the inventory.xml file inside the Oracle inventory directory.
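
You can confirm whether the home is registered by listing the homes recorded in the central inventory; ContentsXML/inventory.xml under the inventory directory shown above is the standard location:

TESTING:oracle $ grep "HOME NAME" /ora00/app/oraInventory/ContentsXML/inventory.xml

If the Oracle home does not appear in the output, it needs to be re-attached.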

The solution is to attach the Oracle home to the central inventory of the server.

We can re-attach the Oracle home using the attachHome.sh script located in $ORACLE_HOME/oui/bin. Do crosscheck the ORACLE_HOME value set inside attachHome.sh before running it.
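
attachHome.sh is essentially a wrapper around the OUI attach operation; the equivalent runInstaller invocation looks roughly like this (the home name is illustrative):

TESTING:oracle $ cd /ora00/app/oracle/product/12.2.0/oui/bin
TESTING:oracle $ ./runInstaller -silent -attachHome ORACLE_HOME="/ora00/app/oracle/product/12.2.0" ORACLE_HOME_NAME="OraDB12Home1"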

TESTING:oracle $ ./attachHome.sh
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 1921 MB Passed
The inventory pointer is located at /etc/oraInst.loc
Please execute the '/ora00/app/oraInventory/orainstRoot.sh' script at the end of the session.
'AttachHome' was successful.

 

Once that is done, the earlier command runs without any errors:

TESTING:oracle $ /ora00/app/oracle/product/12.2.0/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./
Oracle Interim Patch Installer version 12.2.0.1.6
Copyright (c) 2017, Oracle Corporation. All rights reserved.

PREREQ session

Oracle Home : /ora00/app/oracle/product/12.2.0
Central Inventory : /ora00/app/oraInventory
 from : /ora00/app/oracle/product/12.2.0/oraInst.loc
OPatch version : 12.2.0.1.6
OUI version : 12.2.0.1.4
Log file location : /ora00/app/oracle/product/12.2.0/cfgtoollogs/opatch/opatch2017-11-16_03-38-25AM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.

 


ORA-39181: Only partial table data may be exported due to fine grain access control

The Data Pump expdp log shows the following error for some tables during the export:

EXP ORA-39181: Only partial table data may be exported due to fine grain access control on "ADMIN"."XCF"
EXP . . exported "ADMIN"."XCF" 0 KB 0 rows
EXP ORA-39181: Only partial table data may be exported due to fine grain access control on "ADMIN"."XDF"
EXP . . exported "ADMIN"."XDF" 0 KB 0 rows

Also, VPD was not enabled on this database, which made the error unexpected.
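
Before granting anything, it can help to check which row-level security or fine-grained auditing policies are attached to the affected tables (ADMIN is the schema from the log above):

SQL> SELECT object_owner, object_name, policy_name FROM dba_policies WHERE object_owner = 'ADMIN';

SQL> SELECT object_schema, object_name, policy_name FROM dba_audit_policies WHERE object_schema = 'ADMIN';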

Solution:

Grant the following privilege to the schema you are exporting:

SQL> GRANT EXEMPT ACCESS POLICY to <SCHEMA_NAME>;

Applying Oracle Critical Patch Update of April 2017 on Oracle 12.1.0.2.0

 

1) Find the current OPatch version.

[grid@TESTING ora00]$ /ora00/app/grid/product/12.1.0/OPatch/opatch version
OPatch Version: 12.1.0.1.3

OPatch succeeded.

2) Download the latest OPatch from My Oracle Support.

TESTING:oracle $ cd patchopatch/
TESTING:oracle $ ls
p21142429_121010_Linux-x86-64.zip

3) Install the new OPatch in both the Oracle home and the Grid home.

Oracle Home:

TESTING:oracle $ unzip -o -qq p21142429_121010_Linux-x86-64.zip -d /ora00/app/oracle/product/12.1.0
TESTING:oracle $ /ora00/app/oracle/product/12.1.0/OPatch/opatch version
OPatch Version: 12.1.0.1.10

OPatch succeeded.

Grid Home:

[grid@TESTING patchopatch]$ unzip -o -qq p21142429_121010_Linux-x86-64.zip -d /ora00/app/grid/product/12.1.0
[grid@TESTING patchopatch]$ /ora00/app/grid/product/12.1.0/OPatch/opatch version
OPatch Version: 12.1.0.1.10

OPatch succeeded.

4) Stop the Grid and database services.
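
The exact commands depend on your setup; on an Oracle Restart (SIHA) configuration like this one, something along these lines should work (the database name confdb is illustrative):

TESTING:oracle $ srvctl stop database -d confdb
[root@TESTING ~]# /ora00/app/grid/product/12.1.0/bin/crsctl stop has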

5) Analyze the patch with the opatchauto utility.

[root@TESTING ora00]# /ora00/app/grid/product/12.1.0/OPatch/opatchauto apply /ora00/25434003 -analyze -oh /ora00/app/grid/product/12.1.0
OPatch Automation Tool
Copyright (c)2014, Oracle Corporation. All rights reserved.

OPatchauto Version : 12.1.0.1.10
OUI Version : 12.1.0.2.0
Running from : /ora00/app/grid/product/12.1.0

opatchauto log file: /ora00/app/grid/product/12.1.0/cfgtoollogs/opatchauto/25434003/opatch_gi_2017-06-22_05-45-15_analyze.log

NOTE: opatchauto is running in ANALYZE mode. There will be no change to your system.

Parameter Validation: Successful

Configuration Validation: Successful

Patch Location: /ora00/25434003
Grid Infrastructure Patch(es): 21436941 25171037 25363740 25363750
DB Patch(es): 25171037 25363740

Patch Validation: Successful
User specified following Grid Infrastructure home:
/ora00/app/grid/product/12.1.0




Analyzing patch(es) on "/ora00/app/grid/product/12.1.0" ...
Patch "/ora00/25434003/21436941" successfully analyzed on "/ora00/app/grid/product/12.1.0" for apply.
Patch "/ora00/25434003/25171037" successfully analyzed on "/ora00/app/grid/product/12.1.0" for apply.
Patch "/ora00/25434003/25363740" successfully analyzed on "/ora00/app/grid/product/12.1.0" for apply.
Patch "/ora00/25434003/25363750" successfully analyzed on "/ora00/app/grid/product/12.1.0" for apply.

Apply Summary:
Following patch(es) are successfully analyzed:
GI Home: /ora00/app/grid/product/12.1.0: 21436941,25171037,25363740,25363750

opatchauto succeeded.

6) Once the above step completes successfully, generate the ocm.rsp file. With later OPatch versions this response file is no longer needed.

export ORACLE_HOME=/ora00/app/grid/product/12.1.0
$ORACLE_HOME/OPatch/ocm/bin/emocmrsp -no_banner -output /ora00/ocm.rsp

Apply the patch to the Grid home:

[root@TESTING bin]# /ora00/app/grid/product/12.1.0/OPatch/opatchauto apply /ora00/25434003 -oh /ora00/app/grid/product/12.1.0 -ocmrf /ora00/app/oracle/ocm.rsp
OPatch Automation Tool
Copyright (c)2014, Oracle Corporation. All rights reserved.

OPatchauto Version : 12.1.0.1.10
OUI Version : 12.1.0.2.0
Running from : /ora00/app/grid/product/12.1.0

opatchauto log file: /ora00/app/grid/product/12.1.0/cfgtoollogs/opatchauto/25434003/opatch_gi_2017-06-22_06-07-58_deploy.log

Parameter Validation: Successful

Configuration Validation: Successful

Patch Location: /ora00/25434003
Grid Infrastructure Patch(es): 21436941 25171037 25363740 25363750
DB Patch(es): 25171037 25363740

Patch Validation: Successful
User specified following Grid Infrastructure home:
/ora00/app/grid/product/12.1.0




Performing prepatch operations on SIHA Home... Successful

Applying patch(es) to "/ora00/app/grid/product/12.1.0" ...
Patch "/ora00/25434003/21436941" successfully applied to "/ora00/app/grid/product/12.1.0".
Patch "/ora00/25434003/25171037" successfully applied to "/ora00/app/grid/product/12.1.0".
Patch "/ora00/25434003/25363740" successfully applied to "/ora00/app/grid/product/12.1.0".
Patch "/ora00/25434003/25363750" successfully applied to "/ora00/app/grid/product/12.1.0".

Performing postpatch operations on SIHA Home... Successful

Apply Summary:
Following patch(es) are successfully installed:
GI Home: /ora00/app/grid/product/12.1.0: 21436941,25171037,25363740,25363750

opatchauto succeeded.

7) Repeat the same steps for Oracle Home:

[root@TESTING ~]# /ora00/app/oracle/product/12.1.0/OPatch/opatchauto apply /ora00/25434003 -oh /ora00/app/oracle/product/12.1.0/ -ocmrf /ora00/app/oracle/ocm.rsp
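
After both homes are patched, start the stack back up and run datapatch so the SQL portion of the patch is loaded into the database (a minimal sketch, reusing the paths above; the database name is illustrative):

[root@TESTING ~]# /ora00/app/grid/product/12.1.0/bin/crsctl start has
TESTING:oracle $ srvctl start database -d confdb
TESTING:oracle $ cd /ora00/app/oracle/product/12.1.0/OPatch
TESTING:oracle $ ./datapatch -verbose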

 


addnode.sh hangs while copying files to the other node

 

Recently I was trying to add a node to a cluster, and addnode.sh kept hanging at the point shown below:

 

$ ./addnode.sh -silent "CLUSTER_NEW_NODES={SEP02PVVM335}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={SEP02PVVM335-VIP}"
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB. Actual 40619 MB Passed
Checking swap space: must be greater than 150 MB. Actual 49143 MB Passed
[WARNING] [INS-13014] Target environment does not meet some optional requirements.
 CAUSE: Some of the optional prerequisites are not met. See logs for details. /ora00/app/oraInventory/logs/addNodeActions2017-06-02_06-50-36AM.log
 ACTION: Identify the list of failed prerequisite checks from the log: /ora00/app/oraInventory/logs/addNodeActions2017-06-02_06-50-36AM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.

Prepare Configuration in progress.

Prepare Configuration successful.
.................................................. 8% Done.
You can find the log of this install session at:
 /ora00/app/oraInventory/logs/addNodeActions2017-06-02_06-50-36AM.log

Instantiate files in progress.

Instantiate files successful.
.................................................. 14% Done.

Copying files to node in progress.

 

First of all, GRID_HOME should be owned by the grid user; by default it may be owned by root.

After searching on My Oracle Support, I found that this issue can be caused by the following bug:

Bug 12318325 – Addnode.sh takes longer due to audit files in GRID_HOME/rdbms/audit. (Doc ID 12318325.8)

 

Once I cleaned up those audit files, addnode.sh ran successfully.
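
The cleanup itself can be as simple as removing (or archiving) the accumulated audit files; a sketch, assuming your audit retention policy allows deleting files older than 30 days:

$ cd /ora00/grid/product/12.1.0/rdbms/audit
$ find . -name "*.aud" -mtime +30 -print -delete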

 

$ ./addnode.sh -silent "CLUSTER_NEW_NODES={SEP02PVVM335}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={SEP02PVVM335-VIP}"
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB. Actual 40543 MB Passed
Checking swap space: must be greater than 150 MB. Actual 49143 MB Passed

Prepare Configuration in progress.

Prepare Configuration successful.
.................................................. 8% Done.
You can find the log of this install session at:
/ora00/app/oraInventory/logs/addNodeActions2017-06-02_09-56-37AM.log

Instantiate files in progress.

Instantiate files successful.
.................................................. 14% Done.

Copying files to node in progress.

Copying files to node successful.
.................................................. 73% Done.

Saving cluster inventory in progress.
.................................................. 80% Done.

Saving cluster inventory successful.
The Cluster Node Addition of /ora00/grid/product/12.1.0 was successful.
Please check '/tmp/silentInstall.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
.................................................. 88% Done.

As a root user, execute the following script(s):
 1. /ora00/app/oraInventory/orainstRoot.sh
 2. /ora00/grid/product/12.1.0/root.sh

Execute /ora00/app/oraInventory/orainstRoot.sh on the following nodes:
[SEP02PVVM335]
Execute /ora00/grid/product/12.1.0/root.sh on the following nodes:
[SEP02PVVM335]

The scripts can be executed in parallel on all the nodes.

..........
Update Inventory in progress.
.................................................. 100% Done.

Update Inventory successful.
Successfully Setup Software.
$

DBMS_REPCAT_ADMIN package not present on Oracle Release 12.2.0.1.0 after Upgrade

We recently upgraded a database server from Oracle 12.1.0.2.0 to 12.2.0.1.0.

After the upgrade, we were unable to find the DBMS_REPCAT_ADMIN package:

SQL> select owner,object_type,object_name from dba_objects where object_name='DBMS_REPCAT_ADMIN'; 

no rows selected

Because of this, our scripts fail with the following error:

BEGIN DBMS_REPCAT_ADMIN.GRANT_ADMIN_ANY_REPGROUP(userid => 'admin' ); END; 

* 
ERROR at line 1: 
ORA-06550: line 1, column 7: 
PLS-00201: identifier 'DBMS_REPCAT_ADMIN.GRANT_ADMIN_ANY_REPGROUP' must be 
declared 
ORA-06550: line 1, column 7: 
PL/SQL: Statement ignored

On the existing Oracle 12.1.0.2.0 installation, however, the package is present:

SQL> select owner,object_type,object_name from dba_objects where object_name='DBMS_REPCAT_ADMIN'; 

OWNER 
-------------------------------------------------------------------------------- 
OBJECT_TYPE 
----------------------- 
OBJECT_NAME 
-------------------------------------------------------------------------------- 
SYS 
PACKAGE BODY 
DBMS_REPCAT_ADMIN

 

On further reading of the Oracle release notes, I learned that these packages belong to the Advanced Replication feature, which was deprecated in 12.1 (but could still be used) and has been completely removed in 12.2.

The Oracle documentation describes this change here:

Desupport of Oracle Advanced Replication


Upgrade to Oracle Database 12c Release 2 (12.2.0.1.0) — Part 2

In this part, we will upgrade the Grid home and the Database home to Oracle 12.2.0.1.0.

The required software can be downloaded from the link below:

Oracle Software

Once the software is downloaded, we will upgrade the Grid home first, followed by the Oracle Database home.

 

1) Upgrading Grid Infrastructure to Oracle 12.2.0.1.0.

 

[Screenshots: Grid Infrastructure installer, steps 1-9]

 

Note: rootupgrade.sh may fail during the Grid installation with one of the error messages below.

a) ORA-01078 and ORA-29701

--ERROR MESSAGE--

[root@SEP02PVVM392 rpm]# /ora00/app/grid/product/12.2.0/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
 ORACLE_OWNER= grid
 ORACLE_HOME= /ora00/app/grid/product/12.2.0

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /ora00/app/grid/product/12.2.0/crs/install/crsconfig_params
The log of current session can be found at:
 /ora00/app/grid/crsdata/sep02pvvm392/crsconfig/roothas_2017-03-23_04-46-20AM.log

Upgrade cannot proceed because the ASM instance failed to start. Fix the issue or startup the ASM instance, verify and try again.
ORA-01078: failure in processing system parameters
ORA-29701: unable to connect to Cluster Synchronization Service


2017/03/23 04:46:26 CLSRSC-164: ASM upgrade failed
2017/03/23 04:46:26 CLSRSC-304: Failed to upgrade ASM for Oracle Restart configuration
Died at /ora00/app/grid/product/12.2.0/crs/install/crsupgrade.pm line 3083.
The command '/ora00/app/grid/product/12.2.0/perl/bin/perl -I/ora00/app/grid/product/12.2.0/perl/lib -I/ora00/app/grid/product/12.2.0/crs/install /ora00/app/grid/product/12.2.0/crs/install/roothas.pl -upgrade' execution failed

 

Solution: start ASM from the new Grid home and rerun rootupgrade.sh.
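
The exact commands are not shown above; one plausible sequence is to bring the HA stack and ASM up using the new home's binaries before rerunning the script (a sketch, not a verified recipe):

[root@TESTING ~]# /ora00/app/grid/product/12.2.0/bin/crsctl start has
[grid@TESTING ~]$ /ora00/app/grid/product/12.2.0/bin/srvctl start asm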

[root@TESTING rpm]# /ora00/app/grid/product/12.2.0/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
 ORACLE_OWNER= grid
 ORACLE_HOME= /ora00/app/grid/product/12.2.0

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /ora00/app/grid/product/12.2.0/crs/install/crsconfig_params
The log of current session can be found at:
 /ora00/app/grid/crsdata/testing/crsconfig/roothas_2017-03-23_04-50-44AM.log

ASM has been upgraded and started successfully.

Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node testing successfully pinned.
2017/03/23 04:51:10 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
CRS-4123: Oracle High Availability Services has been started.

2017/03/23 04:55:03 CLSRSC-482: Running command: 'srvctl upgrade model -s 12.1.0.2.0 -d 12.2.0.1.0 -p first'
2017/03/23 04:55:05 CLSRSC-482: Running command: 'srvctl upgrade model -s 12.1.0.2.0 -d 12.2.0.1.0 -p last'


testing 2017/03/23 04:55:06 /ora00/app/grid/product/12.2.0/cdata/testing/backup_20170323_045506.olr 0

testing 2016/07/20 10:18:23 /ora00/app/grid/product/12.1.0/cdata/testing/backup_20160720_101823.olr 0

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'testing'
CRS-2673: Attempting to stop 'ora.evmd' on 'testing'
CRS-2677: Stop of 'ora.evmd' on 'testing' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'testing' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/03/23 04:55:57 CLSRSC-327: Successfully configured Oracle Restart for a standalone server

 

b) DIA-49802 and DIA-49801

I hit the error below on a different VM that was cloned from the same source VM.

# /ora00/app/grid/product/12.2.0/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
 ORACLE_OWNER= grid
 ORACLE_HOME= /ora00/app/grid/product/12.2.0

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /ora00/app/grid/product/12.2.0/crs/install/crsconfig_params
The log of current session can be found at:
 /ora00/app/grid/crsdata/sep02pvvm-405/crsconfig/roothas_2017-04-17_06-07-09AM.log


ASM has been upgraded and started successfully.

Oracle Clusterware infrastructure error in OCRCONFIG (OS PID 20893): CLSD/ADR initialization failed with return value -1
1: clskec:has:CLSU:910 4 args[clsdAdrInit_CLSK_err][mod=clsdadr.c][loc=(:CLSD00281:)][msg=clsdAdrInit: Additional diagnostic data returned by the ADR component for dbgc_init_all failure:
 DIA-49802: missing read, write, or execute permission on specified ADR home directory [/ora00/app/grid/diag/crs/sep02pvvm-405/crs/log]
DIA-49801: actual permissions [rwxrwx---], expected minimum permissions [rwxrwxrwx] for effective user [grid]
DIA-48188: user missing read, write, or exec permission on specified directory
Linux-x86_64 Error: 13: Permission denied
Additional information: 2
Additional information: 511
Additional information: 16888
([all diagnostic data retrieved from ADR])]
2: clskec:has:CLSU:910 4 args[clsdAdrInit_CLSK_err][mod=clsdadr.c][loc=(:CLSD00050:)][msg=clsdAdrInit: call to dbgc_init_all failed. facility:[CRS] product:[CRS] line number:[1422] return code: [ORA-49802] Oracle Base: [/ora00/app/grid] Product Type: [CRS] Host Name: [sep02pvvm-405] Instance ID: [crs] User Name: [grid]]


Oracle Clusterware infrastructure error in CLSCFG (OS PID 20903): CLSD/ADR initialization failed with return value -1
1: clskec:has:CLSU:910 4 args[clsdAdrInit_CLSK_err][mod=clsdadr.c][loc=(:CLSD00281:)][msg=clsdAdrInit: Additional diagnostic data returned by the ADR component for dbgc_init_all failure:
 DIA-49802: missing read, write, or execute permission on specified ADR home directory [/ora00/app/grid/diag/crs/sep02pvvm-405/crs/log]
DIA-49801: actual permissions [rwxrwx---], expected minimum permissions [rwxrwxrwx] for effective user [grid]
DIA-48188: user missing read, write, or exec permission on specified directory
Linux-x86_64 Error: 13: Permission denied
Additional information: 2
Additional information: 511
Additional information: 16888
([all diagnostic data retrieved from ADR])]
2: clskec:has:CLSU:910 4 args[clsdAdrInit_CLSK_err][mod=clsdadr.c][loc=(:CLSD00050:)][msg=clsdAdrInit: call to dbgc_init_all failed. facility:[CRS] product:[CRS] line number:[1422] return code: [ORA-49802] Oracle Base: [/ora00/app/grid] Product Type: [CRS] Host Name: [sep02pvvm-405] Instance ID: [crs] User Name: [grid]]

Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node sep02pvvm-405 successfully pinned.
2017/04/17 06:08:41 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2017/04/17 06:10:46 CLSRSC-214: Failed to start the resource 'ohasd'
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2017-04-17 06:09:23.605
[client(22558)]CRS-8500:Oracle Clusterware OHASD process is starting with operating system process ID 22558
2017-04-17 06:09:23.611
[client(22558)]CRS-2112:The OLR service started on node sep02pvvm-405.
2017-04-17 06:09:25.240
[client(22651)]CRS-8500:Oracle Clusterware OHASD process is starting with operating system process ID 22651
2017-04-17 06:09:25.247
[client(22651)]CRS-2112:The OLR service started on node sep02pvvm-405.
2017-04-17 06:09:26.901
[client(22743)]CRS-8500:Oracle Clusterware OHASD process is starting with operating system process ID 22743
2017-04-17 06:09:26.908
[client(22743)]CRS-2112:The OLR service started on node sep02pvvm-405.
2017-04-17 06:09:28.415
[client(22831)]CRS-8500:Oracle Clusterware OHASD process is starting with operating system process ID 22831
2017-04-17 06:09:28.422
[client(22831)]CRS-2112:The OLR service started on node sep02pvvm-405.
2017-04-17 06:09:30.014
[client(22914)]CRS-8500:Oracle Clusterware OHASD process is starting with operating system process ID 22914
2017-04-17 06:09:30.020
[client(22914)]CRS-2112:The OLR service started on node sep02pvvm-405.

2017/04/17 06:10:46 CLSRSC-318: Failed to start Oracle OHASD service
Died at /ora00/app/grid/product/12.2.0/crs/install/crsinstall.pm line 2775.
The command '/ora00/app/grid/product/12.2.0/perl/bin/perl -I/ora00/app/grid/product/12.2.0/perl/lib -I/ora00/app/grid/product/12.2.0/crs/install /ora00/app/grid/product/12.2.0/crs/install/roothas.pl -upgrade' execution failed

To fix the above error, I had to change the ownership of the directories under the ADR home (/ora00/app/grid/diag/crs/sep02pvvm-405/crs/, as shown in the error messages) from root to the grid user.

[root@SEP02PVVM-405 crs]# chown -R grid:oinstall alert
[root@SEP02PVVM-405 crs]# chown -R grid:oinstall cdump/
[root@SEP02PVVM-405 crs]# chown -R grid:oinstall incident/
[root@SEP02PVVM-405 crs]# chown -R grid:oinstall incpkg
[root@SEP02PVVM-405 crs]# chown -R grid:oinstall lck
[root@SEP02PVVM-405 crs]# chown -R grid:oinstall log
[root@SEP02PVVM-405 crs]# chown -R grid:oinstall metadata
[root@SEP02PVVM-405 crs]# chown -R grid:oinstall metadata_dgif
[root@SEP02PVVM-405 crs]# chown -R grid:oinstall metadata_pv
[root@SEP02PVVM-405 crs]# chown -R grid:oinstall stage
[root@SEP02PVVM-405 crs]# chown -R grid:oinstall sweep
[root@SEP02PVVM-405 crs]# chown -R grid:oinstall trace
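
Equivalently, a single recursive chown on the parent directory covers all of these at once (assuming everything under the ADR home should belong to grid):

[root@SEP02PVVM-405 crs]# chown -R grid:oinstall /ora00/app/grid/diag/crs/sep02pvvm-405/crs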

Once this is done, you should be able to start the ASM services from the new Grid home. I am not sure why the ownership was wrong; it may have happened during the cloning process, but I thought it was worth sharing.

2) Once rootupgrade.sh has run, check the upgraded release and software versions:

[grid@TESTING bin]$ ./crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [12.2.0.1.0]


[grid@TESTING bin]$ ./crsctl query has softwareversion
Oracle High Availability Services version on the local node is [12.2.0.1.0]

 

3) Once Grid is upgraded, install the Oracle Database 12.2.0.1.0 software.

[Screenshots: Oracle Database installer, steps 1-10]

4) Once the Oracle 12.2.0.1.0 software is installed, you can upgrade the desired database using DBUA.

The following My Oracle Support document contains all the necessary information:

Complete Checklist for Upgrading to Oracle Database 12c Release 2 (12.2) using DBUA (Doc ID 2189854.1)
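
For unattended upgrades, DBUA can also be driven from the command line (a minimal sketch; the SID is illustrative):

TESTING:oracle $ /ora00/app/oracle/product/12.2.0/bin/dbua -silent -sid confdb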