Performing Planned Switchover/Switchback Operations with Oracle Data Guard

What is a Switchover:

A switchover is a role reversal between the primary database and one of its standby databases. A switchover operation guarantees no data loss and is typically done for planned maintenance of the primary system. During a switchover, the primary database transitions to a standby role and the standby database transitions to the primary role. The transition occurs without having to recreate either database.

Prerequisites:

1. Verify that there is network connectivity between the primary and standby locations.

2. Each location in the Data Guard configuration should have connectivity through Oracle Net to the primary database and to all associated standby databases.

3. Verify that there are no active users connected to the databases.

4. For switchover operations involving a physical standby database, verify that the primary database instance is open and the standby database instance is mounted.

5. The standby database that you plan to transition to the primary role must be mounted before you begin the switchover operation. Ideally, the physical standby database will also be actively recovering archived redo logs when the database roles are switched.

6. Remove any redo data application delay in effect on the standby database.

7. Ensure that the primary and standby databases are in sync: issue a few log switches on the primary instance (ideally as many as the number of redo log groups in your primary instance) and confirm that the resulting archived logs have been applied on the standby, as sketched below.
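
One way to verify this, as a rough sketch: force a log switch on the primary, then compare the highest archived log sequence on the primary with the highest applied sequence reported on the standby.

On the primary:

SQL> alter system switch logfile;
SQL> select max(sequence#) from v$archived_log;

On the standby:

SQL> select max(sequence#) from v$archived_log where applied = 'YES';

If the two numbers match, the standby has applied everything the primary has archived.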

Switchover Operation:

1. Verify that it is possible to perform a switchover operation. On the primary, query the SWITCHOVER_STATUS column of V$DATABASE to verify that switchover to standby is possible.

SQL> select switchover_status from v$database;

SWITCHOVER_STATUS

---------------------------------

TO STANDBY

In order to perform a switchover, all sessions to the database need to be disconnected. In version 9.0.1 this was a manual process. In version 9.2.0 this process has been automated with the “with session shutdown” clause that has been added to the ALTER DATABASE COMMIT TO SWITCHOVER command.

If SWITCHOVER_STATUS returns SESSIONS ACTIVE, then you should either disconnect all sessions manually or, when performing step 2, append the “with session shutdown” clause. For example:

SQL> alter database commit to switchover to standby with session shutdown;

2. Convert the primary database to the new standby:

SQL> alter database commit to switchover to standby with session shutdown;
Database altered.

3. Shut down the former primary and mount it as a standby database:

SQL> shutdown immediate
ORA-01507: database not mounted

ORACLE instance shut down.

SQL> startup nomount
ORACLE instance started.

Total System Global Area 85020944 bytes

Fixed Size 454928 bytes

Variable Size 71303168 bytes

Database Buffers 12582912 bytes

Redo Buffers 679936 bytes

SQL> alter database mount standby database;
Database altered.

4. Defer the remote archive destination on the old primary:

SQL> alter system set log_archive_dest_state_2=defer scope=both;

5. Verify that the physical standby can be converted to the new primary:

SQL> select switchover_status from v$database;

SWITCHOVER_STATUS

----------------------------------

SWITCHOVER PENDING

Note that if the status returns SESSIONS ACTIVE, then you should append the “with session shutdown” clause to the command in step 6.

6. Convert the physical standby to the new primary:

SQL> alter database commit to switchover to primary with session shutdown;
Database altered.

If you are on version 9.0.1, then you should first cancel managed recovery prior to issuing the above command. If you are on 9.2.0 and have started managed recovery with the “through all switchover” clause, then you should also cancel managed recovery before issuing the above command.
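
In those cases, managed recovery on the standby can be cancelled with:

SQL> alter database recover managed standby database cancel;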

7. Shut down and start up the new primary:

SQL> shutdown immediate
ORA-01507: database not mounted

ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area 85020944 bytes

Fixed Size 454928 bytes

Variable Size 71303168 bytes

Database Buffers 12582912 bytes

Redo Buffers 679936 bytes

Database mounted.

Database opened.

8. Enable remote archiving on the new primary to the new standby:

SQL> alter system set log_archive_dest_state_2=enable scope=both;
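
Optionally, force a log switch on the new primary and confirm that the destination is shipping redo without errors (dest_id 2 here matches the log_archive_dest_2 destination used above):

SQL> alter system switch logfile;
SQL> select dest_id, status, error from v$archive_dest where dest_id = 2;

A STATUS of VALID and an empty ERROR column indicate that redo is reaching the new standby.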

9. Start managed recovery on the new standby database:

SQL> alter database recover managed standby database disconnect from session;
Database altered
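
To confirm that redo is being received and applied, you can query V$MANAGED_STANDBY on the new standby:

SQL> select process, status, sequence# from v$managed_standby;

A planned switchback is simply the same procedure run in the other direction: once the maintenance on the original primary is complete, repeat steps 1 through 9 with the roles of the two databases reversed.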

PostgreSQL Scripts Part-1

  • Query to check locks in PostgreSQL:
SELECT t.relname, l.locktype, page, virtualtransaction, pid, mode, granted
FROM pg_locks l, pg_stat_all_tables t
WHERE l.relation = t.relid
ORDER BY relation asc;
  • Query to release a lock in PostgreSQL by terminating the backend that holds it (pass the pid of the blocking session, 11064 in this example; see the blocking-session query at the end of this list):
SELECT pg_terminate_backend(11064);
  • Find which tables a column name belongs to in PostgreSQL; in this example the column name pattern is 'activity':
select t.table_schema,
t.table_name
from information_schema.tables t
inner join information_schema.columns c on c.table_name = t.table_name
and c.table_schema = t.table_schema
where c.column_name like '%activity%'
and t.table_schema not in ('information_schema', 'pg_catalog')
and t.table_type = 'BASE TABLE'
order by t.table_schema;
  • Find tables whose name contains a given keyword; in this example the keyword is 'titanic':
select table_schema,
table_name
from information_schema.tables
where table_name like '%titanic%'
and table_schema not in ('information_schema', 'pg_catalog')
and table_type = 'BASE TABLE'
order by table_name,
table_schema;
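  • Before terminating a backend, it helps to confirm which session is actually doing the blocking. On PostgreSQL 9.6 and later, a minimal sketch using pg_blocking_pids is:
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       state,
       query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
Each row is a waiting session together with the pid(s) blocking it; those blocking pids are the candidates for pg_terminate_backend.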

ClickHouse Installation on Ubuntu/Linux Box

Prerequisites

  • One Ubuntu server with a sudo-enabled non-root user and a firewall set up. The server should have at least 2GB of RAM.
  • (Optional) A secondary Ubuntu 20.04 server with a sudo-enabled non-root user and a firewall set up.

Step 1 — Installing ClickHouse

In this section, you will install the ClickHouse server and client programs using apt.

  1. Yandex maintains an APT repository that has the latest version of ClickHouse. Add the repository’s GPG key so that you’ll be able to securely download validated ClickHouse packages:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv E0C56BD4

You will see output similar to the following:

Output
Executing: /tmp/apt-key-gpghome.zJVSvL3NNE/gpg.1.sh --keyserver keyserver.ubuntu.com --recv E0C56BD4
gpg: key C8F1E19FE0C56BD4: public key "ClickHouse Repository Key <milovidov@yandex-team.ru>" imported
gpg: Total number processed: 1
gpg:               imported: 1

The output confirms it has successfully verified and added the key.

2) Add the repository to your APT repositories list by executing:

echo "deb <http://repo.yandex.ru/clickhouse/deb/stable/> main/" | sudo tee /etc/apt/sources.list.d/clickhouse.list

Here you’ve piped the output of echo to sudo tee so that this output can print to a root-owned file.

3) Now, update your packages:

sudo apt update

4) The clickhouse-server and clickhouse-client packages will now be available for installation. Install them with:

sudo apt install clickhouse-server clickhouse-client

During the installation, you will be asked to set a password for the default ClickHouse user.

You’ve installed the ClickHouse server and client successfully. You’re now ready to start the database service and ensure that it’s running correctly.

Step 2 — Starting the Service

The clickhouse-server package that you installed in the previous section creates a systemd service, which performs actions such as starting, stopping, and restarting the database server. systemd is an init system for Linux to initialise and manage services. In this section you’ll start the service and verify that it is running successfully.

1) Start the clickhouse-server service by running:

sudo service clickhouse-server start

2) The previous command will not display any output. To verify that the service is running successfully, execute:

sudo service clickhouse-server status

You’ll see output similar to the following:

Output
● clickhouse-server.service - ClickHouse Server (analytic DBMS for big data)
     Loaded: loaded (/etc/systemd/system/clickhouse-server.service; enabled; vendor preset: enabled)
     Active: active (running) since Tue 2022-06-14 09:44:08 IST; 4h 42min ago
   Main PID: 1605 (clckhouse-watch)
      Tasks: 213 (limit: 18810)
     Memory: 2.1G
        CPU: 9min 32.908s
     CGroup: /system.slice/clickhouse-server.service
             ├─1605 clickhouse-watchdog "" "" "" "" "" "" "" --config=/etc/clickhouse-server/config.xml --pid-file=/run/clickhouse-server/clickhouse-server.pid
             └─1617 /usr/bin/clickhouse-server --config=/etc/clickhouse-server/config.xml --pid-file=/run/clickhouse-server/clickhouse-server.pid

The output notes that the server is running.
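
With the service running, you can also confirm that the server accepts connections: start clickhouse-client (the --password flag makes it prompt for the password you set during installation) and run a simple query, for example:

SELECT version();

SHOW DATABASES;

Both should return results immediately if the server is healthy.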