
RMAN Backup and Restore Scenario for an Oracle 12c Multitenant System – Part 1

1) Performing a Level 0 Backup of a Pluggable Database:

The command below must be run while connected to the container database, which in our case is IMADB.

First, connect to the RMAN prompt:

rman target sys/imadb@imadb catalog rcat01/rcat01@masterdb

NOTE: In the example above, IMADB is the container database and MASTERDB holds the recovery catalog for our pluggable database confdb.
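
If the recovery catalog schema does not exist yet, it is typically created once and the target registered before taking catalog-based backups. A minimal sketch, assuming the rcat01 user on MASTERDB already has the RECOVERY_CATALOG_OWNER role (adjust names to your environment):

rman catalog rcat01/rcat01@masterdb

RMAN> CREATE CATALOG;

RMAN> CONNECT TARGET sys/imadb@imadb

RMAN> REGISTER DATABASE;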

The following command takes an incremental level 0 backup of the pluggable database confdb along with the archived logs. MAXSETSIZE 2G caps each backup set at 2 GB, and FILESPERSET 2 limits each backup set to two input files.

RMAN> BACKUP INCREMENTAL LEVEL 0 MAXSETSIZE 2G FILESPERSET 2 TAG 'Level0_3222' PLUGGABLE DATABASE confdb SKIP READONLY plus archivelog;

Starting backup at 03-APR-18

current log archived

using target database control file instead of recovery catalog

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=138 device type=DISK

allocated channel: ORA_DISK_2

channel ORA_DISK_2: SID=263 device type=DISK

channel ORA_DISK_1: starting archived log backup set

channel ORA_DISK_1: specifying archived log(s) in backup set

input archived log thread=1 sequence=218 RECID=218 STAMP=972436655

input archived log thread=1 sequence=219 RECID=219 STAMP=972436681

channel ORA_DISK_1: starting piece 1 at 03-APR-18

channel ORA_DISK_2: starting archived log backup set

channel ORA_DISK_2: specifying archived log(s) in backup set

input archived log thread=1 sequence=216 RECID=217 STAMP=972436651

input archived log thread=1 sequence=217 RECID=216 STAMP=972436651

channel ORA_DISK_2: starting piece 1 at 03-APR-18

channel ORA_DISK_1: finished piece 1 at 03-APR-18

piece handle=+REDO2/IMADB/BACKUPSET/2018_04_03/annnf0_level0_3222_0.394.972440729 tag=LEVEL0_3222 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03

channel ORA_DISK_1: starting archived log backup set

channel ORA_DISK_1: specifying archived log(s) in backup set

input archived log thread=1 sequence=220 RECID=220 STAMP=972440727

channel ORA_DISK_1: starting piece 1 at 03-APR-18

channel ORA_DISK_2: finished piece 1 at 03-APR-18

piece handle=+REDO2/IMADB/BACKUPSET/2018_04_03/annnf0_level0_3222_0.393.972440729 tag=LEVEL0_3222 comment=NONE

channel ORA_DISK_2: backup set complete, elapsed time: 00:00:03

channel ORA_DISK_1: finished piece 1 at 03-APR-18

piece handle=+REDO2/IMADB/BACKUPSET/2018_04_03/annnf0_level0_3222_0.392.972440733 tag=LEVEL0_3222 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01

Finished backup at 03-APR-18

Starting backup at 03-APR-18

using channel ORA_DISK_1

using channel ORA_DISK_2

channel ORA_DISK_1: starting incremental level 0 datafile backup set

channel ORA_DISK_1: specifying datafile(s) in backup set

input datafile file number=00009 name=+ORACONF/oradata/confdb/system01confdb.dbf

input datafile file number=00037 name=+ORACONF/oradata/confdb/ncrcattbsp01.dbf

channel ORA_DISK_1: starting piece 1 at 03-APR-18

channel ORA_DISK_2: starting incremental level 0 datafile backup set

channel ORA_DISK_2: specifying datafile(s) in backup set

input datafile file number=00010 name=+ORACONF/oradata/confdb/sysaux01confdb.dbf

channel ORA_DISK_2: starting piece 1 at 03-APR-18

channel ORA_DISK_1: finished piece 1 at 03-APR-18

piece handle=+REDO2/IMADB/382313F87C9C26CAE053334F47997B44/BACKUPSET/2018_04_03/nnndn0_level0_3222_0.391.972440733 tag=LEVEL0_3222 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15

channel ORA_DISK_1: starting incremental level 0 datafile backup set

channel ORA_DISK_1: specifying datafile(s) in backup set

input datafile file number=00011 name=+ORACONF/oradata/confdb/undo01confdb.dbf

channel ORA_DISK_1: starting piece 1 at 03-APR-18

channel ORA_DISK_2: finished piece 1 at 03-APR-18

piece handle=+REDO2/IMADB/382313F87C9C26CAE053334F47997B44/BACKUPSET/2018_04_03/nnndn0_level0_3222_0.390.972440733 tag=LEVEL0_3222 comment=NONE

channel ORA_DISK_2: backup set complete, elapsed time: 00:00:15

channel ORA_DISK_2: starting incremental level 0 datafile backup set

channel ORA_DISK_2: specifying datafile(s) in backup set

input datafile file number=00012 name=+ORACONF/oradata/confdb/configdata01.dbf

channel ORA_DISK_2: starting piece 1 at 03-APR-18

channel ORA_DISK_2: finished piece 1 at 03-APR-18

piece handle=+REDO2/IMADB/382313F87C9C26CAE053334F47997B44/BACKUPSET/2018_04_03/nnndn0_level0_3222_0.388.972440749 tag=LEVEL0_3222 comment=NONE

channel ORA_DISK_2: backup set complete, elapsed time: 00:00:01

channel ORA_DISK_1: finished piece 1 at 03-APR-18

piece handle=+REDO2/IMADB/382313F87C9C26CAE053334F47997B44/BACKUPSET/2018_04_03/nnndn0_level0_3222_0.389.972440749 tag=LEVEL0_3222 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03

Finished backup at 03-APR-18

Starting backup at 03-APR-18

current log archived

using channel ORA_DISK_1

using channel ORA_DISK_2

channel ORA_DISK_1: starting archived log backup set

channel ORA_DISK_1: specifying archived log(s) in backup set

input archived log thread=1 sequence=221 RECID=221 STAMP=972440752

channel ORA_DISK_1: starting piece 1 at 03-APR-18

channel ORA_DISK_1: finished piece 1 at 03-APR-18

piece handle=+REDO2/IMADB/BACKUPSET/2018_04_03/annnf0_level0_3222_0.386.972440753 tag=LEVEL0_3222 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01

Finished backup at 03-APR-18

Starting Control File Autobackup at 03-APR-18

piece handle=/ora00/app/oracle/bkp/sep02pvvm392/IMADB/LEVEL0/imadbcfc-194714882-20180403-03.RW99 comment=NONE

Finished Control File Autobackup at 03-APR-18
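
To confirm what was just written, you can list the backups recorded for the PDB before moving on; a quick check from the same RMAN session:

RMAN> LIST BACKUP OF PLUGGABLE DATABASE confdb SUMMARY;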

 

2) Performing a Level 1 Backup of the Pluggable Database:

Again, connect to the RMAN prompt:

rman target sys/imadb@imadb catalog rcat01/rcat01@masterdb

Here we take an incremental level 1 backup of the pluggable database confdb. The following command is run while connected to the container database (note that it reuses the tag from the level 0 run):

RMAN> BACKUP INCREMENTAL LEVEL 1 MAXSETSIZE 2G FILESPERSET 2 TAG 'Level0_3222' PLUGGABLE DATABASE CONFDB SKIP READONLY plus archivelog;

Starting backup at 03-APR-18

current log archived

using channel ORA_DISK_1

using channel ORA_DISK_2

channel ORA_DISK_1: starting archived log backup set

channel ORA_DISK_1: specifying archived log(s) in backup set

input archived log thread=1 sequence=218 RECID=218 STAMP=972436655

input archived log thread=1 sequence=219 RECID=219 STAMP=972436681

channel ORA_DISK_1: starting piece 1 at 03-APR-18

channel ORA_DISK_2: starting archived log backup set

channel ORA_DISK_2: specifying archived log(s) in backup set

input archived log thread=1 sequence=216 RECID=217 STAMP=972436651

input archived log thread=1 sequence=217 RECID=216 STAMP=972436651

channel ORA_DISK_2: starting piece 1 at 03-APR-18

channel ORA_DISK_1: finished piece 1 at 03-APR-18

piece handle=+REDO2/IMADB/BACKUPSET/2018_04_03/annnf0_level0_3222_0.384.972440905 tag=LEVEL0_3222 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03

channel ORA_DISK_1: starting archived log backup set

channel ORA_DISK_1: specifying archived log(s) in backup set

input archived log thread=1 sequence=220 RECID=220 STAMP=972440727

input archived log thread=1 sequence=221 RECID=221 STAMP=972440752

channel ORA_DISK_1: starting piece 1 at 03-APR-18

channel ORA_DISK_2: finished piece 1 at 03-APR-18

piece handle=+REDO2/IMADB/BACKUPSET/2018_04_03/annnf0_level0_3222_0.383.972440905 tag=LEVEL0_3222 comment=NONE

channel ORA_DISK_2: backup set complete, elapsed time: 00:00:03

channel ORA_DISK_2: starting archived log backup set

channel ORA_DISK_2: specifying archived log(s) in backup set

input archived log thread=1 sequence=222 RECID=222 STAMP=972440904

channel ORA_DISK_2: starting piece 1 at 03-APR-18

channel ORA_DISK_1: finished piece 1 at 03-APR-18

piece handle=+REDO2/IMADB/BACKUPSET/2018_04_03/annnf0_level0_3222_0.382.972440907 tag=LEVEL0_3222 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01

channel ORA_DISK_2: finished piece 1 at 03-APR-18

piece handle=+REDO2/IMADB/BACKUPSET/2018_04_03/annnf0_level0_3222_0.381.972440907 tag=LEVEL0_3222 comment=NONE

channel ORA_DISK_2: backup set complete, elapsed time: 00:00:01

Finished backup at 03-APR-18

Starting backup at 03-APR-18

using channel ORA_DISK_1

using channel ORA_DISK_2

channel ORA_DISK_1: starting incremental level 1 datafile backup set

channel ORA_DISK_1: specifying datafile(s) in backup set

input datafile file number=00009 name=+ORACONF/oradata/confdb/system01confdb.dbf

input datafile file number=00037 name=+ORACONF/oradata/confdb/ncrcattbsp01.dbf

channel ORA_DISK_1: starting piece 1 at 03-APR-18

channel ORA_DISK_2: starting incremental level 1 datafile backup set

channel ORA_DISK_2: specifying datafile(s) in backup set

input datafile file number=00010 name=+ORACONF/oradata/confdb/sysaux01confdb.dbf

channel ORA_DISK_2: starting piece 1 at 03-APR-18

channel ORA_DISK_1: finished piece 1 at 03-APR-18

piece handle=+REDO2/IMADB/382313F87C9C26CAE053334F47997B44/BACKUPSET/2018_04_03/nnndn1_level0_3222_0.380.972440909 tag=LEVEL0_3222 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07

channel ORA_DISK_1: starting incremental level 1 datafile backup set

channel ORA_DISK_1: specifying datafile(s) in backup set

input datafile file number=00011 name=+ORACONF/oradata/confdb/undo01confdb.dbf

channel ORA_DISK_1: starting piece 1 at 03-APR-18

channel ORA_DISK_2: finished piece 1 at 03-APR-18

piece handle=+REDO2/IMADB/382313F87C9C26CAE053334F47997B44/BACKUPSET/2018_04_03/nnndn1_level0_3222_0.379.972440909 tag=LEVEL0_3222 comment=NONE

channel ORA_DISK_2: backup set complete, elapsed time: 00:00:07

channel ORA_DISK_2: starting incremental level 1 datafile backup set

channel ORA_DISK_2: specifying datafile(s) in backup set

input datafile file number=00012 name=+ORACONF/oradata/confdb/configdata01.dbf

channel ORA_DISK_2: starting piece 1 at 03-APR-18

channel ORA_DISK_2: finished piece 1 at 03-APR-18

piece handle=+REDO2/IMADB/382313F87C9C26CAE053334F47997B44/BACKUPSET/2018_04_03/nnndn1_level0_3222_0.377.972440917 tag=LEVEL0_3222 comment=NONE

channel ORA_DISK_2: backup set complete, elapsed time: 00:00:01

channel ORA_DISK_1: finished piece 1 at 03-APR-18

piece handle=+REDO2/IMADB/382313F87C9C26CAE053334F47997B44/BACKUPSET/2018_04_03/nnndn1_level0_3222_0.378.972440917 tag=LEVEL0_3222 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03

Finished backup at 03-APR-18

Starting backup at 03-APR-18

current log archived

using channel ORA_DISK_1

using channel ORA_DISK_2

channel ORA_DISK_1: starting archived log backup set

channel ORA_DISK_1: specifying archived log(s) in backup set

input archived log thread=1 sequence=223 RECID=223 STAMP=972440919

channel ORA_DISK_1: starting piece 1 at 03-APR-18

channel ORA_DISK_1: finished piece 1 at 03-APR-18

piece handle=+REDO2/IMADB/BACKUPSET/2018_04_03/annnf0_level0_3222_0.375.972440919 tag=LEVEL0_3222 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01

Finished backup at 03-APR-18

Starting Control File Autobackup at 03-APR-18

piece handle=/ora00/app/oracle/bkp/sep02pvvm392/IMADB/LEVEL0/imadbcfc-194714882-20180403-04.RW99 comment=NONE

Finished Control File Autobackup at 03-APR-18
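
By default, an INCREMENTAL LEVEL 1 backup is differential: it captures changes since the most recent level 1 or level 0. If you want each level 1 run to capture all changes since the last level 0 instead, add the CUMULATIVE keyword. A sketch with the same options as above (the tag Level1_cum is illustrative):

RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE MAXSETSIZE 2G FILESPERSET 2 TAG 'Level1_cum' PLUGGABLE DATABASE confdb SKIP READONLY plus archivelog;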

 

3) Restoring the Pluggable Database:

Now we will restore the pluggable database from the incremental backups taken above.

Connect to the RMAN prompt:

rman target sys/imadb@imadb catalog rcat01/rcat01@masterdb

The existing pluggable database must be closed before we can test the restore:

RMAN> alter pluggable database confdb close;

Statement processed

The following command restores the pluggable database confdb from the backup taken above:

RMAN> restore pluggable database confdb;

Starting restore at 04-APR-18

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=27 device type=DISK

allocated channel: ORA_DISK_2

channel ORA_DISK_2: SID=391 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore

channel ORA_DISK_1: specifying datafile(s) to restore from backup set

channel ORA_DISK_1: restoring datafile 00010 to +ORACONF/oradata/confdb/sysaux01confdb.dbf

channel ORA_DISK_1: reading from backup piece +REDO2/IMADB/382313F87C9C26CAE053334F47997B44/BACKUPSET/2018_04_03/nnndn0_level0_3222_0.390.972440733

channel ORA_DISK_2: starting datafile backup set restore

channel ORA_DISK_2: specifying datafile(s) to restore from backup set

channel ORA_DISK_2: restoring datafile 00009 to +ORACONF/oradata/confdb/system01confdb.dbf

channel ORA_DISK_2: restoring datafile 00037 to +ORACONF/oradata/confdb/ncrcattbsp01.dbf

channel ORA_DISK_2: reading from backup piece +REDO2/IMADB/382313F87C9C26CAE053334F47997B44/BACKUPSET/2018_04_03/nnndn0_level0_3222_0.391.972440733

channel ORA_DISK_2: piece handle=+REDO2/IMADB/382313F87C9C26CAE053334F47997B44/BACKUPSET/2018_04_03/nnndn0_level0_3222_0.391.972440733 tag=LEVEL0_3222

channel ORA_DISK_2: restored backup piece 1

channel ORA_DISK_2: restore complete, elapsed time: 00:00:08

channel ORA_DISK_2: starting datafile backup set restore

channel ORA_DISK_2: specifying datafile(s) to restore from backup set

channel ORA_DISK_2: restoring datafile 00012 to +ORACONF/oradata/confdb/configdata01.dbf

channel ORA_DISK_2: reading from backup piece +REDO2/IMADB/382313F87C9C26CAE053334F47997B44/BACKUPSET/2018_04_03/nnndn0_level0_3222_0.388.972440749

channel ORA_DISK_2: piece handle=+REDO2/IMADB/382313F87C9C26CAE053334F47997B44/BACKUPSET/2018_04_03/nnndn0_level0_3222_0.388.972440749 tag=LEVEL0_3222

channel ORA_DISK_2: restored backup piece 1

channel ORA_DISK_2: restore complete, elapsed time: 00:00:01

channel ORA_DISK_2: starting datafile backup set restore

channel ORA_DISK_2: specifying datafile(s) to restore from backup set

channel ORA_DISK_2: restoring datafile 00011 to +ORACONF/oradata/confdb/undo01confdb.dbf

channel ORA_DISK_2: reading from backup piece +REDO2/IMADB/382313F87C9C26CAE053334F47997B44/BACKUPSET/2018_04_03/nnndn0_level0_3222_0.389.972440749

channel ORA_DISK_1: piece handle=+REDO2/IMADB/382313F87C9C26CAE053334F47997B44/BACKUPSET/2018_04_03/nnndn0_level0_3222_0.390.972440733 tag=LEVEL0_3222

channel ORA_DISK_1: restored backup piece 1

channel ORA_DISK_1: restore complete, elapsed time: 00:00:12

channel ORA_DISK_2: piece handle=+REDO2/IMADB/382313F87C9C26CAE053334F47997B44/BACKUPSET/2018_04_03/nnndn0_level0_3222_0.389.972440749 tag=LEVEL0_3222

channel ORA_DISK_2: restored backup piece 1

channel ORA_DISK_2: restore complete, elapsed time: 00:00:03

 

Once the restore has completed, we perform recovery of the database:

RMAN> recover pluggable database confdb;

Starting recover at 04-APR-18

using channel ORA_DISK_1

using channel ORA_DISK_2

channel ORA_DISK_1: starting incremental datafile backup set restore

channel ORA_DISK_1: specifying datafile(s) to restore from backup set

destination for restore of datafile 00010: +ORACONF/oradata/confdb/sysaux01confdb.dbf

channel ORA_DISK_1: reading from backup piece +REDO2/IMADB/382313F87C9C26CAE053334F47997B44/BACKUPSET/2018_04_03/nnndn1_level0_3222_0.379.972440909

channel ORA_DISK_2: starting incremental datafile backup set restore

channel ORA_DISK_2: specifying datafile(s) to restore from backup set

destination for restore of datafile 00009: +ORACONF/oradata/confdb/system01confdb.dbf

destination for restore of datafile 00037: +ORACONF/oradata/confdb/ncrcattbsp01.dbf

channel ORA_DISK_2: reading from backup piece +REDO2/IMADB/382313F87C9C26CAE053334F47997B44/BACKUPSET/2018_04_03/nnndn1_level0_3222_0.380.972440909

channel ORA_DISK_1: piece handle=+REDO2/IMADB/382313F87C9C26CAE053334F47997B44/BACKUPSET/2018_04_03/nnndn1_level0_3222_0.379.972440909 tag=LEVEL0_3222

channel ORA_DISK_1: restored backup piece 1

channel ORA_DISK_1: restore complete, elapsed time: 00:00:01

channel ORA_DISK_1: starting incremental datafile backup set restore

channel ORA_DISK_1: specifying datafile(s) to restore from backup set

destination for restore of datafile 00012: +ORACONF/oradata/confdb/configdata01.dbf

channel ORA_DISK_1: reading from backup piece +REDO2/IMADB/382313F87C9C26CAE053334F47997B44/BACKUPSET/2018_04_03/nnndn1_level0_3222_0.377.972440917

channel ORA_DISK_2: piece handle=+REDO2/IMADB/382313F87C9C26CAE053334F47997B44/BACKUPSET/2018_04_03/nnndn1_level0_3222_0.380.972440909 tag=LEVEL0_3222

channel ORA_DISK_2: restored backup piece 1

channel ORA_DISK_2: restore complete, elapsed time: 00:00:01

channel ORA_DISK_2: starting incremental datafile backup set restore

channel ORA_DISK_2: specifying datafile(s) to restore from backup set

destination for restore of datafile 00011: +ORACONF/oradata/confdb/undo01confdb.dbf

channel ORA_DISK_2: reading from backup piece +REDO2/IMADB/382313F87C9C26CAE053334F47997B44/BACKUPSET/2018_04_03/nnndn1_level0_3222_0.378.972440917

channel ORA_DISK_1: piece handle=+REDO2/IMADB/382313F87C9C26CAE053334F47997B44/BACKUPSET/2018_04_03/nnndn1_level0_3222_0.377.972440917 tag=LEVEL0_3222

channel ORA_DISK_1: restored backup piece 1

channel ORA_DISK_1: restore complete, elapsed time: 00:00:00

channel ORA_DISK_2: piece handle=+REDO2/IMADB/382313F87C9C26CAE053334F47997B44/BACKUPSET/2018_04_03/nnndn1_level0_3222_0.378.972440917 tag=LEVEL0_3222

channel ORA_DISK_2: restored backup piece 1

channel ORA_DISK_2: restore complete, elapsed time: 00:00:01

starting media recovery
archived log for thread 1 with sequence 223 is already on disk as file +REDO2/IMADB/ARCHIVELOG/2018_04_03/thread_1_seq_223.376.972440919

archived log for thread 1 with sequence 224 is already on disk as file +REDO2/IMADB/ARCHIVELOG/2018_04_03/thread_1_seq_224.374.972479867

archived log for thread 1 with sequence 225 is already on disk as file +REDO2/IMADB/ARCHIVELOG/2018_04_03/thread_1_seq_225.373.972501467

archived log for thread 1 with sequence 226 is already on disk as file +REDO2/IMADB/ARCHIVELOG/2018_04_03/thread_1_seq_226.372.972511235

archived log for thread 1 with sequence 227 is already on disk as file +REDO2/IMADB/ARCHIVELOG/2018_04_03/thread_1_seq_227.371.972516715

archived log for thread 1 with sequence 228 is already on disk as file +REDO2/IMADB/ARCHIVELOG/2018_04_04/thread_1_seq_228.370.972523167

archived log for thread 1 with sequence 229 is already on disk as file +REDO2/IMADB/ARCHIVELOG/2018_04_04/thread_1_seq_229.369.972544671

archived log file name=+REDO2/IMADB/ARCHIVELOG/2018_04_03/thread_1_seq_223.376.972440919 thread=1 sequence=223

archived log file name=+REDO2/IMADB/ARCHIVELOG/2018_04_03/thread_1_seq_224.374.972479867 thread=1 sequence=224

archived log file name=+REDO2/IMADB/ARCHIVELOG/2018_04_03/thread_1_seq_225.373.972501467 thread=1 sequence=225

archived log file name=+REDO2/IMADB/ARCHIVELOG/2018_04_03/thread_1_seq_226.372.972511235 thread=1 sequence=226

archived log file name=+REDO2/IMADB/ARCHIVELOG/2018_04_03/thread_1_seq_227.371.972516715 thread=1 sequence=227

media recovery complete, elapsed time: 00:00:15

Finished recover at 04-APR-18

 

4) Open the database for user operations once it has been recovered:

Once the database has been restored and recovered, we open the pluggable database in READ WRITE mode:

RMAN> alter pluggable database confdb open;

Statement processed
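
As a final sanity check (not part of the original run), you can confirm the open mode from a SQL*Plus session on the container database:

SQL> SELECT name, open_mode FROM v$pdbs WHERE name = 'CONFDB';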

HADOOP 2.6.5 Single-Node Cluster Installation on RHEL 7

Hadoop is an open-source software framework from the Apache Foundation. It is used for processing large amounts of heterogeneous data sets in distributed form.

Each component in Hadoop is configured using an XML file. Hadoop can be run in one of three modes:

  • Standalone Mode — the default mode of Hadoop; no Hadoop daemon processes are running.
  • Pseudo-Distributed Mode — Hadoop runs on a single machine with each of its daemons running as a separate process. This simulates a multi-server installation.
  • Distributed Mode — code runs on different servers; all master and worker nodes need to be configured.

Prerequisites

  • Virtual Machine Installed
  • RHEL , UBUNTU or Cent OS

Preinstallation Steps:

  • Install SSH
  • Set up passwordless SSH login
  • A suitable version of Java

Step by Step Process to Install Pseudo-Distributed Mode:

This document describes how to set up and configure a single-node Hadoop installation so that you can quickly perform simple operations using Hadoop MapReduce and the Hadoop Distributed File System (HDFS).

1) Create a dedicated Hadoop user.

useradd hduser

passwd hduser
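
You can confirm the account exists before proceeding (an optional sanity check):

id hduser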

2) Installing SSH.

ssh is the command we use to connect to remote machines (the client). It comes pre-installed on Linux:

[hduser@TESTING ~]$ which ssh
/usr/bin/ssh

 

3) Create and Setup SSH Certificates for passwordless login.

Hadoop requires SSH access to manage its nodes, i.e. remote machines plus our local machine. For our single-node setup of Hadoop, we therefore need to configure SSH access to localhost.

So we need to have SSH up and running on our machine and configure it to allow SSH public-key authentication.

Hadoop uses SSH, which would normally require the user to enter a password. Generating SSH keys enables passwordless authentication, using the following commands.

[hduser@TESTING ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hduser/.ssh/id_rsa):
Created directory '/home/hduser/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
The key fingerprint is:
71:4d:67:e4:7d:ed:38:96:24:07:4f:59:21:fd:c6:40 hduser@TESTING
The key's randomart image is:
+--[ RSA 2048]----+
| o+E+o|
| o B+o.|
| . . o =o=|
| o + +=|
| S =..|
| . . |
| |
| |
| |
+-----------------+

 

This command adds the newly created key to the list of authorized keys so that Hadoop can use ssh without prompting for a password.

[hduser@TESTING ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[hduser@TESTING ~]$ chmod 0600 ~/.ssh/authorized_keys

We can now check whether passwordless authentication works with the following command:

[hduser@TESTING ~]$ ssh hduser@localhost
Last login: Tue Nov 14 00:21:16 2017 from TESTING.sweng.ncr.com

 

4) Install Hadoop

Download the software from the link below:

Apache Hadoop Software

I am using Hadoop version 2.6.5.

Extract the tar file with the following command:

[hduser@TESTING ~]$ tar xzf hadoop-2.6.5.tar.gz

As this is for testing purposes, I am installing it in the home directory of hduser; for a production scenario, move the extracted directory to the /usr/local/ location instead.
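
A minimal sketch of the production-style placement (run by a user with sudo rights; the target path is illustrative):

sudo mv hadoop-2.6.5 /usr/local/hadoop
sudo chown -R hduser:hduser /usr/local/hadoop

If you relocate it, remember to point HADOOP_HOME in .bashrc (step 6) at the new path.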

5) Setting up Java

I assume you don't have Java installed and are doing it from scratch.
If it is already installed, you can check the version by typing:

[hduser@TESTING ~]$ java -version
openjdk version "1.8.0_102"
OpenJDK Runtime Environment (build 1.8.0_102-b14)
OpenJDK 64-Bit Server VM (build 25.102-b14, mixed mode)

 

6) Setup Configuration Files

The following files need to be modified to complete the Hadoop setup:

  • Edit the ~/.bashrc file: Before editing the .bashrc file in hduser's home directory, we need to find the path where Java has been installed, in order to set the JAVA_HOME environment variable, using the following command:
[hduser@TESTING ~]$ readlink -f $(which java)
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.102-4.b14.el7.x86_64/jre/bin/java

Set this path (minus the trailing /bin/java) as JAVA_HOME in the .bashrc profile:

[hduser@TESTING ~]$ echo $JAVA_HOME
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.102-4.b14.el7.x86_64/jre

 

This is how my .bashrc file looks. Edit it to suit your environment, based on the components you have installed:

# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
 . /etc/bashrc
fi
# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=
# User specific aliases and functions
# Set Hadoop-related environment variables
export HADOOP_HOME=/home/hduser/hadoop-2.6.5
export HADOOP_INSTALL=/home/hduser/hadoop-2.6.5
#Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.102-4.b14.el7.x86_64/jre
export PATH=$PATH:$JAVA_HOME/bin
PATH=$PATH:$HOME/bin
export PATH
# Some convenient aliases and functions for running Hadoop-related commands
#unalias fs &> /dev/null
#alias fs="hadoop fs"
#unalias hls &> /dev/null
#alias hls="fs -ls"
# If you have LZO compression enabled in your Hadoop cluster and
# compress job outputs with LZOP (not covered in this tutorial):
# Conveniently inspect an LZOP compressed file from the command
# line; run via:
#
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo
#
# Requires installed 'lzop' command.
#
lzohead () {
hadoop fs -cat $1 | lzop -dc | head -1000 | less
}
# Add Hadoop bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin
# Add Pig bin / directory to PATH
#export PIG_HOME=/home/hduser/pig-0.15.0
#export PATH=$PATH:$PIG_HOME/bin
# User specific aliases and functions
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HBASE_HOME=/home/hduser/hbase
export PATH=$PATH:$HBASE_HOME/bin
export SQOOP_HOME=/home/hduser/sqoop
export PATH=$PATH:$SQOOP_HOME/bin
export HIVE_HOME=/home/hduser/hive
export PATH=$PATH:$HIVE_HOME/bin
export HADOOP_USER_CLASSPATH_FIRST=true
export SQOOP_CONF_DIR="$SQOOP_HOME/conf"
export SQOOP_CLASSPATH="$SQOOP_CONF_DIR"
#export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
#export SCALA_HOME=/home/hduser/scala/
#export PATH=$PATH:$SCALA_HOME:/bin/
#export SQOOP_HOME=/home/hduser/Softwares/sqoop
#export PATH=$PATH:$SQOOP_HOME:/bin/

 

  • Edit core-site.xml file: The /home/hduser/hadoop-2.6.5/etc/hadoop/core-site.xml file contains configuration properties that Hadoop uses when starting up.
    This file can be used to override the default settings that Hadoop starts with.

Open the file and enter the following in between the <configuration></configuration> tag:

<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://TESTING:9000</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
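
Note: fs.default.name still works but is deprecated in Hadoop 2.x; the preferred key is fs.defaultFS. An equivalent modern form (same value, just the newer property name):

<property>
<name>fs.defaultFS</name>
<value>hdfs://TESTING:9000</value>
</property>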

 

  • Edit mapred-site.xml: By default, the /home/hduser/hadoop-2.6.5/etc/hadoop/ folder contains a
    /home/hduser/hadoop-2.6.5/etc/hadoop/mapred-site.xml.template
    file, which has to be copied/renamed to mapred-site.xml:
[hduser@TESTING hadoop]$ cp /home/hduser/hadoop-2.6.5/etc/hadoop/mapred-site.xml.template /home/hduser/hadoop-2.6.5/etc/hadoop/mapred-site.xml

The /home/hduser/hadoop-2.6.5/etc/hadoop/mapred-site.xml file is used to specify which framework is being used for MapReduce.
We need to enter the following content between the <configuration></configuration> tags:

<property>
<name>mapred.job.tracker</name>
<value>TESTING:9001</value>
<description>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>
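
Note that mapred.job.tracker is the classic MRv1 property. On Hadoop 2.x, where MapReduce jobs run on YARN, the property commonly set in this file instead is mapreduce.framework.name; a minimal alternative:

<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>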

 

  • Edit yarn-site.xml: YARN configuration options are stored in the /home/hduser/hadoop-2.6.5/etc/hadoop/yarn-site.xml file. This file contains configuration information that overrides the default values for YARN parameters.

Add the following to /home/hduser/hadoop-2.6.5/etc/hadoop/yarn-site.xml between the <configuration></configuration> tags:

<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>

 

  • Edit hdfs-site.xml: The /home/hduser/hadoop-2.6.5/etc/hadoop/hdfs-site.xml file needs to be configured for each host in the cluster that is being used. It specifies the directories which will be used as the namenode and the datanode storage on that host.

Before editing this file, we need to create two directories which will contain the namenode and the datanode data for this Hadoop installation. This can be done using the following commands:

[hduser@TESTING hadoop]$ mkdir -p /home/hduser/hadoop_store/hdfs/namenode
[hduser@TESTING hadoop]$ mkdir -p /home/hduser/hadoop_store/hdfs/datanode
[hduser@TESTING hadoop]$ chown -R hduser:hduser /home/hduser/hadoop_store

 

Add the following to hdfs-site.xml between the <configuration></configuration> tags:

<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified at create time.
</description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/hduser/hadoop_store/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/hduser/hadoop_store/hdfs/datanode</value>
</property>
</configuration>

 

7) Format the New Hadoop Filesystem

Now the Hadoop filesystem needs to be formatted so that we can start using it.
The format command should be issued by a user with write permission, since it creates a current directory under the /home/hduser/hadoop_store/hdfs/namenode folder:

[hduser@TESTING ~]$ su - hduser
Password:
Last login: Tue Nov 14 06:30:38 EST 2017 from winsk185391-eaa.corp.ncr.com on pts/0
[hduser@TESTING ~]$ hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

17/11/14 06:45:47 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = TESTING.sweng.ncr.com/153.71.79.39
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.6.5
STARTUP_MSG: classpath = /home/hduser/hadoop-2.6.5/etc/hadoop:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/activation-1.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/guava-11.0.2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-net-3.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/servlet-api-2.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/httpclient-4.2.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/xz-1.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-cli-1.2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jersey-server-1.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/stax-api-1.0-2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jersey-json-1.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/htrace-core-3.0.4.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/asm-3.2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/hadoop-annotations-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-collections-3.2.2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/curator-framework-2.6.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-math3-3.1.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-digester-1.8.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/httpcore-4.2.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/curator-recipes-2.6.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jsr305-1.3.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/junit-4.11.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jettison-1.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jets3t-0.9.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jsp-api-2.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/paranamer-2.3.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-io-2.4.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/zookeeper-3.4.6.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-el-1.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jersey-core-1.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-lang-2.6.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/hadoop-auth-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/avro-1.7.4.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jsch-0.1.42.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/ne
tty-3.6.2.Final.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/gson-2.2.4.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-codec-1.4.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jetty-6.1.26.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-httpclient-3.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-logging-1.1.3.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/curator-client-2.6.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/xmlenc-0.52.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/hamcrest-core-1.3.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/log4j-1.2.17.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/hadoop-common-2.6.5-tests.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/hadoop-nfs-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/hadoop-common-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/htrace-core-3.0.4.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/asm-3.2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/commons-io-2.4.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/commons-el-1.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/hadoop-hdfs-2.6.5-tests.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/hadoop-hdfs-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/hadoop-hdfs-nfs-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/had
oop/yarn/lib/activation-1.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/guava-11.0.2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/servlet-api-2.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jline-0.9.94.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/javax.inject-1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/xz-1.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/commons-cli-1.2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jersey-json-1.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/asm-3.2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jettison-1.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/commons-io-2.4.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jersey-client-1.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/commons-lang-2.6.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/guice-3.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/commons-codec-1.4.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jetty-6.1.26.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/hadoop-yarn-common-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/hadoop-yarn-api-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/hadoop-yarn-server-tests-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/hadoop-yarn-registry-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/hadoop-yarn-client-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/hadoop-yarn-server-common-2.6.5.jar:/hom
e/hduser/hadoop-2.6.5/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/hadoop-annotations-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/junit-4.11.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.5-tests.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.6.5.jar:/home/hduser/hadoop-2.6.5/contrib/capacity-scheduler/*.jar:/home/hduser/hadoop-2.6.5/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = https://github.com/apache/hadoop.git -r e8c9fe0b4c252caf2ebf1464220599650f119997; compiled by 'sjlee' on 2016-10-02T23:43Z
STARTUP_MSG: java = 1.8.0_102
************************************************************/
17/11/14 06:45:47 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/11/14 06:45:47 INFO namenode.NameNode: createNameNode [-format]
17/11/14 06:45:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/11/14 06:45:48 WARN common.Util: Path /home/hduser/hadoop_store/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
17/11/14 06:45:48 WARN common.Util: Path /home/hduser/hadoop_store/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-7cee5a7e-d124-48cd-b755-2fb74d9c45df
17/11/14 06:45:48 INFO namenode.FSNamesystem: No KeyProvider found.
17/11/14 06:45:48 INFO namenode.FSNamesystem: fsLock is fair:true
17/11/14 06:45:48 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/11/14 06:45:48 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/11/14 06:45:48 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/11/14 06:45:48 INFO blockmanagement.BlockManager: The block deletion will start around 2017 Nov 14 06:45:48
17/11/14 06:45:48 INFO util.GSet: Computing capacity for map BlocksMap
17/11/14 06:45:48 INFO util.GSet: VM type = 64-bit
17/11/14 06:45:48 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
17/11/14 06:45:48 INFO util.GSet: capacity = 2^21 = 2097152 entries
17/11/14 06:45:48 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/11/14 06:45:48 INFO blockmanagement.BlockManager: defaultReplication = 1
17/11/14 06:45:48 INFO blockmanagement.BlockManager: maxReplication = 512
17/11/14 06:45:48 INFO blockmanagement.BlockManager: minReplication = 1
17/11/14 06:45:48 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
17/11/14 06:45:48 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/11/14 06:45:48 INFO blockmanagement.BlockManager: encryptDataTransfer = false
17/11/14 06:45:48 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
17/11/14 06:45:48 INFO namenode.FSNamesystem: fsOwner = hduser (auth:SIMPLE)
17/11/14 06:45:48 INFO namenode.FSNamesystem: supergroup = supergroup
17/11/14 06:45:48 INFO namenode.FSNamesystem: isPermissionEnabled = true
17/11/14 06:45:48 INFO namenode.FSNamesystem: HA Enabled: false
17/11/14 06:45:48 INFO namenode.FSNamesystem: Append Enabled: true
17/11/14 06:45:48 INFO util.GSet: Computing capacity for map INodeMap
17/11/14 06:45:48 INFO util.GSet: VM type = 64-bit
17/11/14 06:45:48 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
17/11/14 06:45:48 INFO util.GSet: capacity = 2^20 = 1048576 entries
17/11/14 06:45:48 INFO namenode.NameNode: Caching file names occuring more than 10 times
17/11/14 06:45:48 INFO util.GSet: Computing capacity for map cachedBlocks
17/11/14 06:45:48 INFO util.GSet: VM type = 64-bit
17/11/14 06:45:48 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
17/11/14 06:45:48 INFO util.GSet: capacity = 2^18 = 262144 entries
17/11/14 06:45:48 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/11/14 06:45:48 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/11/14 06:45:48 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
17/11/14 06:45:48 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/11/14 06:45:48 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/11/14 06:45:48 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/11/14 06:45:48 INFO util.GSet: VM type = 64-bit
17/11/14 06:45:48 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
17/11/14 06:45:48 INFO util.GSet: capacity = 2^15 = 32768 entries
17/11/14 06:45:48 INFO namenode.NNConf: ACLs enabled? false
17/11/14 06:45:48 INFO namenode.NNConf: XAttrs enabled? true
17/11/14 06:45:48 INFO namenode.NNConf: Maximum size of an xattr: 16384
17/11/14 06:45:48 INFO namenode.FSImage: Allocated new BlockPoolId: BP-217137775-153.71.79.39-1510659948572
17/11/14 06:45:48 INFO common.Storage: Storage directory /home/hduser/hadoop_store/hdfs/namenode has been successfully formatted.
17/11/14 06:45:48 INFO namenode.FSImageFormatProtobuf: Saving image file /home/hduser/hadoop_store/hdfs/namenode/current/fsimage.ckpt_0000000000000000000 using no compression
17/11/14 06:45:48 INFO namenode.FSImageFormatProtobuf: Image file /home/hduser/hadoop_store/hdfs/namenode/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds.
17/11/14 06:45:48 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/11/14 06:45:48 INFO util.ExitUtil: Exiting with status 0
17/11/14 06:45:48 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at TESTING.sweng.ncr.com/153.71.79.39

 

If it is your first-time installation, you can attempt to format the data node as well. Note that the DataNode does not support the -format option, so the command exits with a usage message, as the output below shows:

[hduser@TESTING ~]$ hadoop datanode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

17/11/14 06:48:38 INFO datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = TESTING.sweng.ncr.com/153.71.79.39
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.6.5
STARTUP_MSG: classpath = /home/hduser/hadoop-2.6.5/etc/hadoop:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/activation-1.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/guava-11.0.2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-net-3.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/servlet-api-2.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/httpclient-4.2.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/xz-1.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-cli-1.2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jersey-server-1.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/stax-api-1.0-2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jersey-json-1.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/htrace-core-3.0.4.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/asm-3.2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/hadoop-annotations-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-collections-3.2.2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/curator-framework-2.6.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-math3-3.1.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-digester-1.8.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/httpcore-4.2.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/curator-recipes-2.6.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jsr305-1.3.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/junit-4.11.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jettison-1.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jets3t-0.9.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jsp-api-2.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/paranamer-2.3.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-io-2.4.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/zookeeper-3.4.6.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-el-1.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jersey-core-1.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-lang-2.6.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/hadoop-auth-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/avro-1.7.4.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jsch-0.1.42.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/ne
tty-3.6.2.Final.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/gson-2.2.4.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-codec-1.4.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jetty-6.1.26.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-httpclient-3.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-logging-1.1.3.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/curator-client-2.6.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/xmlenc-0.52.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/hamcrest-core-1.3.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/log4j-1.2.17.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/hadoop-common-2.6.5-tests.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/hadoop-nfs-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/common/hadoop-common-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/htrace-core-3.0.4.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/asm-3.2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/commons-io-2.4.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/commons-el-1.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/hadoop-hdfs-2.6.5-tests.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/hadoop-hdfs-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/hdfs/hadoop-hdfs-nfs-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/had
oop/yarn/lib/activation-1.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/guava-11.0.2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/servlet-api-2.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jline-0.9.94.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/javax.inject-1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/xz-1.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/commons-cli-1.2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jersey-json-1.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/asm-3.2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jettison-1.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/commons-io-2.4.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jersey-client-1.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/commons-lang-2.6.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/guice-3.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/commons-codec-1.4.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jetty-6.1.26.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/hadoop-yarn-common-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/hadoop-yarn-api-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/hadoop-yarn-server-tests-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/hadoop-yarn-registry-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/hadoop-yarn-client-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/yarn/hadoop-yarn-server-common-2.6.5.jar:/hom
e/hduser/hadoop-2.6.5/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/hadoop-annotations-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/junit-4.11.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.5-tests.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.6.5.jar:/home/hduser/hadoop-2.6.5/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.6.5.jar:/home/hduser/hadoop-2.6.5/contrib/capacity-scheduler/*.jar:/home/hduser/hadoop-2.6.5/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = https://github.com/apache/hadoop.git -r e8c9fe0b4c252caf2ebf1464220599650f119997; compiled by 'sjlee' on 2016-10-02T23:43Z
STARTUP_MSG: java = 1.8.0_102
************************************************************/
17/11/14 06:48:38 INFO datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
Usage: java DataNode [-regular | -rollback]
 -regular : Normal DataNode startup (default).
 -rollback : Rollback a standard or rolling upgrade.
 Refer to HDFS documentation for the difference between standard
 and rolling upgrades.

17/11/14 06:48:39 WARN datanode.DataNode: Exiting Datanode
17/11/14 06:48:39 INFO util.ExitUtil: Exiting with status 1
17/11/14 06:48:39 INFO datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at TESTING.sweng.ncr.com/153.71.79.39
************************************************************/

 

8) Starting Hadoop

Now it’s time to start the newly installed single-node cluster.
We can use start-all.sh, or start-dfs.sh and start-yarn.sh separately.

[hduser@TESTING ~]$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
17/11/14 06:52:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [storage.castrading.com]
storage.castrading.com: ssh: connect to host storage.castrading.com port 22: Connection timed out
localhost: starting datanode, logging to /home/hduser/hadoop-2.6.5/logs/hadoop-hduser-datanode-TESTING.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is e2:64:54:cd:5b:bf:c9:1d:4b:c2:2d:b5:e7:96:16:0e.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /home/hduser/hadoop-2.6.5/logs/hadoop-hduser-secondarynamenode-TESTING.out
17/11/14 06:57:26 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /home/hduser/hadoop-2.6.5/logs/yarn-hduser-resourcemanager-TESTING.out
localhost: starting nodemanager, logging to /home/hduser/hadoop-2.6.5/logs/yarn-hduser-nodemanager-TESTING.out


We can check whether all the processes are really up and running with the command below:

[hduser@TESTING hadoop]$ jps
14512 NameNode
29345 DFSAdmin
14837 SecondaryNameNode
15302 NodeManager
15018 ResourceManager
14653 DataNode
15470 Jps
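
Beyond jps, HDFS itself can report cluster health; a minimal check, assuming the Hadoop bin directory is on hduser's PATH:

[hduser@TESTING ~]$ hdfs dfsadmin -report

This prints the configured capacity and the list of live datanodes; the NameNode web UI (port 50070 by default in Hadoop 2.x) shows the same information.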
Posted in Database, ORACLE

Converting a non-CDB database to a PDB in Oracle 12c.

Version of non-CDB database = 12.2.0.1.0

Version of CDB database = 12.2.0.1.0

The following steps plug a non-CDB database into a CDB as a PDB.

My non-CDB database is confdb.

My CDB database is imadb.

1) Open the non-CDB database in read-only mode to create the XML manifest for the PDB.

  • Shutting down the non-CDB database (confdb)
TESTING:oracle $ export ORACLE_SID=confdb
TESTING:oracle $ sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Thu Feb 15 06:34:47 2018

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
  • Mounting the database and opening it read-only
SQL> startup mount
ORACLE instance started.

Total System Global Area 1006632960 bytes
Fixed Size 8628160 bytes
Variable Size 322963520 bytes
Database Buffers 666894336 bytes
Redo Buffers 8146944 bytes
Database mounted.
SQL> alter database open read only;

Database altered.
  • Creating the XML manifest for the PDB
SQL> exec DBMS_PDB.DESCRIBE('/home/oracle/depconfdb.xml');

PL/SQL procedure successfully completed.

 

2) Shut down the non-CDB database.

TESTING:oracle $ export ORACLE_SID=confdb

TESTING:oracle $ sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Thu Feb 15 06:34:47 2018

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
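
With the non-CDB down and the manifest in hand, the manifest can optionally be validated against the target CDB before plugging in. A minimal sketch, run as SYSDBA on the CDB root (imadb); the function is DBMS_PDB.CHECK_PLUG_COMPATIBILITY, the output strings are just illustrative:

SET SERVEROUTPUT ON
DECLARE
  compatible BOOLEAN;
BEGIN
  -- TRUE means the described PDB can be plugged into this CDB
  compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
                  pdb_descr_file => '/home/oracle/depconfdb.xml',
                  pdb_name       => 'CONFDB');
  DBMS_OUTPUT.PUT_LINE(CASE WHEN compatible
                            THEN 'Compatible'
                            ELSE 'Not compatible - check PDB_PLUG_IN_VIOLATIONS' END);
END;
/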

 

3) Plug the non-CDB database into the CDB as a PDB.

  • Set ORACLE_SID to imadb, connect as SYSDBA, and run the following command on the CDB database – imadb
SQL>CREATE PLUGGABLE DATABASE confdb USING '/home/oracle/depconfdb.xml' NOCOPY TEMPFILE REUSE;
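
NOCOPY plugs the database in using its existing datafiles in place, which suits this in-place conversion. If an independent copy of the files is preferred, the COPY clause with FILE_NAME_CONVERT does that instead; a sketch with hypothetical target paths:

SQL> CREATE PLUGGABLE DATABASE confdb USING '/home/oracle/depconfdb.xml' COPY FILE_NAME_CONVERT = ('+ORACONF/oradata/confdb/', '+ORACONF/oradata/confdb_copy/');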

 

4) Convert the data dictionary of the new pluggable database to PDB type.

  • Change the container to the new pluggable database and run the conversion script
SQL>ALTER SESSION set container=confdb;

Session altered.

SQL>@$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql

 

5) Open the new pluggable database.

SQL> alter pluggable database confdb open;

Pluggable database altered.
  • If the steps above report errors, this view gives the details:
SQL>select name,cause,type,message,status from PDB_PLUG_IN_VIOLATIONS where name = 'CONFDB' and status != 'RESOLVED';
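
To have the PDB open automatically whenever the CDB restarts, its open state can also be saved (available from 12.1.0.2 onward):

SQL> alter pluggable database confdb save state;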

 

Posted in Database, ORACLE

datapatch fails with ORA-04063: package body "SYS.DBMS_SQLPATCH" has errors

Recently, while applying the Critical Patch Update for October 2017 on my database server, the patch apply with the OPatch utility completed successfully, but the datapatch utility ran into the following issue:

TESTING:oracle $ ./datapatch -verbose
SQL Patching tool version 12.2.0.1.0 Production on Thu Nov 16 04:33:57 2017
Copyright (c) 2012, 2017, Oracle. All rights reserved.

Log file for this invocation: /ora00/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_27492_2017_11_16_04_33_57/sqlpatch_invocation.log

Connecting to database...OK
Bootstrapping registry and package to current versions...done

DBD::Oracle::db selectrow_array failed: ORA-04063: package body "SYS.DBMS_SQLPATCH" has errors (DBD ERROR: OCIStmtExecute) [for Statement "SELECT dbms_sqlpatch.verify_queryable_inventory FROM dual"] at /ora00/app/oracle/product/12.2.0/sqlpatch/sqlpatch.pm line 4524, <LOGFILE> line 21.

Please refer to MOS Note 1609718.1 and/or the invocation log
/ora00/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_27492_2017_11_16_04_33_57/sqlpatch_invocation.log
for information on how to resolve the above errors.

SQL Patching tool complete on Thu Nov 16 04:33:57 2017

 

MOS Note 1609718.1 lists many issues, but the package error above was not among them.

Later I tried the steps below to recreate the package body:

1) Finding the current status of the package:

TESTING:oracle $ sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Thu Nov 16 05:04:24 2017

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> SELECT dbms_sqlpatch.verify_queryable_inventory FROM dual;
SELECT dbms_sqlpatch.verify_queryable_inventory FROM dual
 *
ERROR at line 1:
ORA-04063: package body "SYS.DBMS_SQLPATCH" has errors
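
The invalid package body and its compilation errors can also be inspected directly from the dictionary; a quick check, run as SYSDBA:

SQL> SELECT object_type, status FROM dba_objects WHERE owner = 'SYS' AND object_name = 'DBMS_SQLPATCH';

SQL> SELECT line, position, text FROM dba_errors WHERE owner = 'SYS' AND name = 'DBMS_SQLPATCH';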

 

2) The package body was invalid, so I looked for the script that creates it by running the command below inside the $ORACLE_HOME/rdbms/admin directory.

TESTING:oracle $ find . | xargs grep "dbms_sqlpatch"
grep: .: Is a directory
./e1201000.sql:Rem surman 08/05/13 - 17005047: Add dbms_sqlpatch
./catpdbms.sql:Rem surman 08/02/13 - 17005047: Add dbms_sqlpatch
./catdwgrd.sql:Rem dbms_sqlpatch package, as the package may not be valid
./dbmssqlpatch.sql:Rem surman 08/18/14 - Always reload dbms_sqlpatch
./dbmssqlpatch.sql:CREATE OR REPLACE PACKAGE dbms_sqlpatch AS
./dbmssqlpatch.sql:END dbms_sqlpatch;
./dbmssqlpatch.sql:CREATE OR REPLACE PUBLIC SYNONYM dbms_sqlpatch FOR sys.dbms_sqlpatch;
./dbmssqlpatch.sql:GRANT EXECUTE ON dbms_sqlpatch TO execute_catalog_role;
grep: ./cdb_cloud: Is a directory
grep: ./cdb_cloud/sql: Is a directory
grep: ./cdb_cloud/dbt: Is a directory
grep: ./cdb_cloud/dbt/test: Is a directory
grep: ./cdb_cloud/rsp: Is a directory
grep: ./cdb_cloud/apex_install: Is a directory
grep: ./cdb_cloud/apex_install/ords: Is a directory
./prvtsqlpatch.plb: EXECUTE IMMEDIATE 'DROP TABLE dbms_sqlpatch_state';
./prvtsqlpatch.plb:CREATE TABLE dbms_sqlpatch_state (
./prvtsqlpatch.plb:CREATE OR REPLACE PACKAGE BODY dbms_sqlpatch wrapped
./catpprvt.sql:Rem surman 08/03/13 - 17005047: Add dbms_sqlpatch
./catpprvt.sql:-- 20772435: Queryable dbms_sqlpatch package body is now created in catxrd
./catxrd.sql:Rem dbms_sqlpatch package
./xrde121.sql:DROP PACKAGE dbms_sqlpatch;
./xrde121.sql:DROP PUBLIC SYNONYM dbms_sqlpatch;

The output above shows that the prvtsqlpatch.plb script creates this package body, so I ran that script to recreate the dbms_sqlpatch package.

3) Running prvtsqlpatch.plb to recreate the dbms_sqlpatch package and rechecking its status:

TESTING:oracle $ export ORACLE_SID=confdb

TESTING:oracle $ sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Thu Nov 16 05:04:24 2017

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> @prvtsqlpatch.plb

Session altered.

PL/SQL procedure successfully completed.

Table created.

Package body created.

No errors.

Session altered.

SQL> SELECT dbms_sqlpatch.verify_queryable_inventory FROM dual;

VERIFY_QUERYABLE_INVENTORY
--------------------------------------------------------------------------------

OK

 

4) Once the package is recreated, we can apply the patch to the database with the datapatch utility:

TESTING:oracle $ ./datapatch -verbose
SQL Patching tool version 12.2.0.1.0 Production on Thu Nov 16 05:05:10 2017
Copyright (c) 2012, 2017, Oracle. All rights reserved.

Log file for this invocation: /ora00/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_5382_2017_11_16_05_05_10/sqlpatch_invocation.log

Connecting to database...OK
Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of SQL patches:
Bundle series DBRU:
 ID 171017 in the binary registry and not installed in the SQL registry

Adding patches to installation queue and performing prereq checks...
Installation queue:
 Nothing to roll back
 The following patches will be applied:
 26710464 (DATABASE RELEASE UPDATE 12.2.0.1.171017)

Installing patches...
Patch installation complete. Total patches installed: 1

Validating logfiles...
Patch 26710464 apply: SUCCESS
 logfile: /ora00/app/oracle/cfgtoollogs/sqlpatch/26710464/21632407/26710464_apply_CONFDB_2017Nov16_05_05_18.log (no errors)
SQL Patching tool complete on Thu Nov 16 05:06:11 2017
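
The apply can also be confirmed in the SQL patch registry; the STATUS column should show SUCCESS for the new entry:

SQL> SELECT patch_id, action, status, description FROM dba_registry_sqlpatch;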


Posted in Database, ORACLE

OPatch failed with error code 2

Before applying the Critical Patch Update for October 2017, I wanted to confirm which patches were currently applied, but I ran into the issue below.

TESTING:oracle $ /ora00/app/oracle/product/12.2.0/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./
Oracle Interim Patch Installer version 12.2.0.1.6
Copyright (c) 2017, Oracle Corporation. All rights reserved.

PREREQ session

Oracle Home : /ora00/app/oracle/product/12.2.0
Central Inventory : /ora00/app/oraInventory
 from : /ora00/app/oracle/product/12.2.0/oraInst.loc
OPatch version : 12.2.0.1.6
OUI version : 12.2.0.1.4
Log file location : /ora00/app/oracle/product/12.2.0/cfgtoollogs/opatch/opatch2017-11-16_03-29-39AM_1.log

Invoking prereq "checkconflictagainstohwithdetail"
List of Homes on this system:

Prereq "checkConflictAgainstOHWithDetail" is not executed.

The details are:
Exception occured : RawInventory gets null OracleHomeInfo
Summary of Conflict Analysis:

There are no patches that can be applied now.

OPatch failed with error code 2

 

The above error occurs when OPatch cannot find the database home in the inventory.xml file inside the Oracle inventory directory.

The solution is to attach the Oracle home to the server's central inventory.

We can re-attach the Oracle home with the attachHome.sh script located in $ORACLE_HOME/oui/bin. Do cross-check the ORACLE_HOME value set inside the attachHome.sh script.

TESTING:oracle $ ./attachHome.sh
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 1921 MB Passed
The inventory pointer is located at /etc/oraInst.loc
Please execute the '/ora00/app/oraInventory/orainstRoot.sh' script at the end of the session.
'AttachHome' was successful.
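
For reference, attachHome.sh is a thin wrapper around the OUI attach operation; the equivalent direct invocation would look like this (the home name below is an assumption — use whatever name the inventory should record):

TESTING:oracle $ $ORACLE_HOME/oui/bin/runInstaller -silent -attachHome ORACLE_HOME="/ora00/app/oracle/product/12.2.0" ORACLE_HOME_NAME="OraDB12c_Home1"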

 

Once that is done, we can run the earlier command without any errors:

TESTING:oracle $ /ora00/app/oracle/product/12.2.0/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./
Oracle Interim Patch Installer version 12.2.0.1.6
Copyright (c) 2017, Oracle Corporation. All rights reserved.

PREREQ session

Oracle Home : /ora00/app/oracle/product/12.2.0
Central Inventory : /ora00/app/oraInventory
 from : /ora00/app/oracle/product/12.2.0/oraInst.loc
OPatch version : 12.2.0.1.6
OUI version : 12.2.0.1.4
Log file location : /ora00/app/oracle/product/12.2.0/cfgtoollogs/opatch/opatch2017-11-16_03-38-25AM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.

 

Posted in Database, ORACLE

ORA-39181: Only partial table data may be exported due to fine grain access control

The Data Pump expdp log shows the following error for some tables during the export:

EXP ORA-39181: Only partial table data may be exported due to fine grain access control on "ADMIN"."XCF"
EXP . . exported "ADMIN"."XCF" 0 KB 0 rows
EXP ORA-39181: Only partial table data may be exported due to fine grain access control on "ADMIN"."XDF"
EXP . . exported "ADMIN"."XDF" 0 KB 0 rows

In this case VPD was not even enabled on these tables, yet the error was still raised.

Solution:

Grant the privilege below to the schema you are trying to export:

SQL> GRANT EXEMPT ACCESS POLICY to <SCHEMA_NAME>;
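
To see which fine-grained access control policies are causing the warning, and to tidy up after the export finishes, something along these lines can be used (the owner filter is just an example):

SQL> SELECT object_owner, object_name, policy_name FROM dba_policies WHERE object_owner = 'ADMIN';

SQL> REVOKE EXEMPT ACCESS POLICY FROM <SCHEMA_NAME>;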
Posted in LINUX, OS

Setting udev rules for Oracle ASM on Linux

1) Use the lsblk command to list the disks currently allocated.

2) Use the command below to obtain a disk's ID.

/sbin/scsi_id -g -u -d /dev/sdr
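
To collect the IDs of every candidate disk in one pass, a small shell loop helps (run as root; the /dev/sd? glob and the RHEL 6-style scsi_id path match the command above and may differ on other releases):

for d in /dev/sd?; do echo "$d $(/sbin/scsi_id -g -u -d $d)"; done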

3) Once you have the IDs of all the disks to be used for ASM storage, put lines like the following in /etc/udev/rules.d/99-oracle-asmdevices.rules:

KERNEL=="sd?", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$name", RESULT=="36000c2966eff055daa2454a0d522523e", NAME="arcRedo", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$name", RESULT=="36000c29e693d7a0dfbd895da40a985f1", NAME="at01", OWNER="grid", GROUP="asmadmin", MODE="0660"

4) Once the rules are in place, reload them with the commands below; after they take effect, we can test the desired disk (a udevadm-only variant for newer releases is shown after these commands).

/sbin/udevadm control --reload-rules

/sbin/start_udev

udevadm test /dev/arcRedo
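
On releases without start_udev (RHEL/OL 7 and later), reloading and re-triggering with udevadm alone achieves the same effect; an assumed equivalent:

/sbin/udevadm control --reload-rules
/sbin/udevadm trigger --type=devices --action=change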

5) Verify with the command below.

[root@TESTING ~]# ls -l /dev/arcRedo
brw-rw---- 1 grid asmadmin 8, 32 Aug 19 05:43 /dev/arcRedo