Saturday, 31 December 2016

"ORA-16047 - DGID mismatch between destination setting and target database" while configuring dataguard.

While setting up a Data Guard configuration you may encounter "ORA-16047: DGID mismatch between destination setting and target database".

From alert.log on primary database:

LGWR: Setting 'active' archival for destination LOG_ARCHIVE_DEST_2
Sat Dec 31 08:29:30 2016
FAL[server, ARC3]: Error 16047 creating remote archivelog file 'sauxmes'
FAL[server, ARC3]: FAL archive failed, see trace file.
ARCH: FAL archive failed. Archiver continuing
ORACLE Instance auxmes - Archival Error. Archiver continuing.
Error 16047 for archive log file 2 to 'sauxmes'
Errors in file /u01/app/xxxxx/diag/rdbms/auxmes/auxmes/trace/auxmes_nsa2_9525.trc:
ORA-16047: DGID mismatch between destination setting and target database
Sat Dec 31 08:30:12 2016
PING[ARC1]: Heartbeat failed to connect to standby 'sauxmes'. Error is 16047.
Sat Dec 31 08:33:39 2016
PING[ARC1]: Heartbeat failed to connect to standby 'sauxmes'. Error is 16047.
Sat Dec 31 08:37:12 2016
PING[ARC1]: Heartbeat failed to connect to standby 'sauxmes'. Error is 16047.

Check the parameter log_archive_config on the STANDBY side:

SQL> show parameter log_archive_config  

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------

log_archive_config                   string

Most likely it will not be set. Set it to an appropriate value, for example:

alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=(auxmes,sauxmes)';

This will resolve the problem and the redo transport will start working.

Another possible cause of this error is not setting the parameter DB_UNIQUE_NAME on the standby database (it must be different from the name on the primary site):

alter system set DB_UNIQUE_NAME="sauxmes" scope=spfile;  

You then need to restart the standby database, as this parameter cannot be set dynamically.

Thursday, 5 February 2015

Maintaining Tempfile in TEMP Tablespace of PDB$SEED in Oracle 12c

During testing of recovery procedures for one of the ongoing projects I wanted to test the "complete disaster" recovery scenario. In this scenario I also had to recreate all ASM disks and restore everything from backup.
A full backup with RMAN and a subsequent restore of a pluggable 12c single-tenant database was the solution. I will not talk about that, as the main point of this post is quite different.

So the recovery was successful, but after restoring CDB$ROOT and the PDB I found the following messages in the alert log:

Errors in file /u01/app/oracle/diag/rdbms/mydb/mydb/trace/mydb_dbw0_28973.trc:
ORA-01157: cannot identify/lock data file 202 - see DBWR trace file
ORA-01110: data file 202: '+DATA/MYDB/FD9AC20F64D244D7E043B6A9E80A2F2F/DATAFILE/pdbseed_temp012015-01-03_12-22-30-pm.dbf'
ORA-17503: ksfdopn:2 Failed to open file +DATA/MYDB/FD9AC20F64D244D7E043B6A9E80A2F2F/DATAFILE/pdbseed_temp012015-01-03_12-22-30-pm.dbf
ORA-15173: entry 'FD9AC20F64D244D7E043B6A9E80A2F2F' does not exist in directory 'MYDB'
Thu Feb 05 09:12:52 2015
Errors in file /u01/app/oracle/diag/rdbms/mydb/mydb/trace/mydb_dbw0_28973.trc:
ORA-01186: file 202 failed verification tests
ORA-01157: cannot identify/lock data file 202 - see DBWR trace file
ORA-01110: data file 202: '+DATA/MYDB/FD9AC20F64D244D7E043B6A9E80A2F2F/DATAFILE/pdbseed_temp012015-01-03_12-22-30-pm.dbf'

The problem was the tempfile of the temporary tablespace TEMP in PDB$SEED, which is used for the creation of pluggable databases. Tempfiles are not part of an RMAN backup, as they can always be recreated. Although I could live with these error messages, as this is a single-tenant database with only one pluggable database, I wanted to deliver a "clean solution" to the client. Besides, I wanted to know how to solve such a problem.

So here is the solution:

$ sqlplus / as sysdba

SQL*Plus: Release Production on Thu Feb 5 09:35:11 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> alter session set container=pdb$seed;

Session altered.

SQL> select open_mode from v$database;


SQL> select name from v$tempfile;


SQL> alter pluggable database pdb$seed close;
alter pluggable database pdb$seed close
ERROR at line 1:
ORA-65017: seed pluggable database may not be dropped or altered

SQL> alter session set "_oracle_script"=TRUE;

Session altered.

SQL> alter pluggable database pdb$seed close;

Pluggable database altered.

SQL> alter pluggable database pdb$seed open read write;

Pluggable database altered.

SQL> alter tablespace temp add tempfile '+DATA';

Tablespace altered.
SQL> column name format a100
SQL> select name from v$tempfile;


SQL>  alter tablespace temp drop tempfile '+DATA/MYDB/FD9AC20F64D244D7E043B6A9E80A2F2F/DATAFILE/pdbseed_temp012015-01-03_12-22-30-pm.dbf';

Tablespace altered.

SQL>  select name from v$tempfile;


After this change there were no more error messages during database startup.

The secret sauce for changing the state of PDB$SEED was the undocumented setting alter session set "_oracle_script"=TRUE;. The idea came from this post about changing the open mode of the PDB$SEED database.
Hope this short post helps somebody resolve a problem with dropping and creating a new tempfile in PDB$SEED faster.

Wednesday, 28 January 2015

Finding Oracle VM Manager Core Management Port After OVM Manager Installation

Recently I was working on the integration of OVM Manager with Oracle EM Cloud Control 12cR4 and I had to enter the "Oracle VM Manager URL", a.k.a. the "Core management port", which is reported at the end of the OVM Manager installation. The default value for this port is 54321. As I was not in a position to get this post-installation report, because the installation was performed by another company, I had to find out the right port. The definition should be entered in the form: tcp://<hostname>:<port>.

EM Cloud Control and OVM Manager Integration Screen

OVM Manager Installation report sample

One of the first ideas was to check the local firewall configuration and see if there is anything for port 54321.

# iptables -L | grep 5432
ACCEPT     tcp  --  anywhere  anywhere   state NEW tcp dpt:54322

The search showed an open port 54322, which could also be a possibility.
However, searching with netstat revealed no activity on port 54322:

# netstat -vatn | grep 54322

However, many rows were returned when searching for port 54321:

# netstat -vatn | grep 54321
tcp        0      0 ::ffff:      :::*                        LISTEN
tcp        0      0 ::ffff:      ::ffff:      ESTABLISHED
tcp        0      0 ::ffff:      ::ffff:      ESTABLISHED
tcp        0      0 ::ffff:      ::ffff:      ESTABLISHED
tcp        0      0 ::ffff:      ::ffff:      ESTABLISHED
tcp        0      0 ::ffff:      ::ffff:      ESTABLISHED

But I was still not 100% sure that iptables showed the right port, so I dug further through the documentation, which revealed it. The definition of the "Core management port" is hidden in the .config file, under the name COREPORT, in the OVM Manager home directory: /u01/app/oracle/ovm-manager-3/.config

The contents of this file in my case was:

# cat  /u01/app/oracle/ovm-manager-3/.config


The explanation of the configuration items can be found in chapter "5.1 Backing up and Restoring Oracle VM Manager", and in my case the port used was indeed the default one, 54321.
The .config file should be part of the backup.
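If you need only the port value, the lookup can be scripted. Here is a minimal Python sketch; the simple KEY=value layout of the .config file and the helper name read_coreport are my assumptions for illustration, not part of the OVM Manager distribution:

```python
# Minimal sketch: read the "Core management port" (COREPORT) from the
# OVM Manager .config file. Assumes the file uses simple KEY=value lines;
# the default path is the OVM Manager home mentioned above.
def read_coreport(path="/u01/app/oracle/ovm-manager-3/.config"):
    with open(path) as f:
        for line in f:
            key, sep, value = line.strip().partition("=")
            if sep and key == "COREPORT":
                return int(value)
    return None  # no COREPORT entry found
```

An equivalent shell one-liner would be grep '^COREPORT' /u01/app/oracle/ovm-manager-3/.config.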

Hope this post will shorten search time for this kind of information to my readers.

Tuesday, 27 January 2015

Using Hyphen In OS Username Or ORACLE_HOME Path in Oracle 12c Database Installation Will Make You Cry

Recently I was involved in an upgrade project to Oracle 12c on Linux platforms, and the customer wanted to name the owners of the Oracle software distribution according to their company standard. So instead of using the "grid" and "oracle" OS account names, they wanted names starting with "svc-" followed by a unique, non-Oracle-standard name for either the grid infrastructure or the database software. Besides using these names as the owners of the grid and database software, we also used them in the names of the GRID_HOME and ORACLE_HOME directories, following the Oracle OFA suggestion.

There were no problems with those names during installation and database creation. However, when we wanted to patch the grid infrastructure and Oracle home with the latest patches, the OPatch utility returned an error that it could not parse the path to the grid infrastructure home. I immediately recognized the problem, and the only remedy was to uninstall both the grid and the database software. And then, bang, another surprise: the Oracle 12c deinstallation procedures crashed as well. This time the reason was the check of the ownership of the distribution (grid or oracle).

The regular expression used in the script to find the owner of the distribution is /.*?\((\w+)\).*/
The string returned by the "id" command is:

uid=211(svc-xxxyyy) gid=201(oinstall) groups=201(oinstall), 200(dba), 205(asmadmin), 206(asmdba), 207(asmoper), 110000513(domain users) 

The returned $1 variable gets the value "oinstall" instead of "svc-xxxyyy", and this causes the deinstallation script to fail. The right expression would be /.*?\((\w+\-\w+)\).*/
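The failure is easy to reproduce outside the installer. Here is a minimal Python sketch; the sample id output string is shortened from the one above, and the variable names are mine:

```python
import re

# Shortened sample output of the "id" command for a hyphenated OS username
id_output = "uid=211(svc-xxxyyy) gid=201(oinstall) groups=201(oinstall), 200(dba)"

# Original pattern from the deinstallation script: \w+ cannot match a
# hyphen, so the lazy prefix skips past "(svc-xxxyyy)" and captures the
# first hyphen-free parenthesized token, the group name "oinstall".
broken = re.search(r'.*?\((\w+)\).*', id_output)
print(broken.group(1))   # -> oinstall

# Corrected pattern: allow one hyphen inside the captured name
fixed = re.search(r'.*?\((\w+\-\w+)\).*', id_output)
print(fixed.group(1))    # -> svc-xxxyyy
```

A more general fix would be a character class such as [\w-]+, which also covers names with more than one hyphen.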

So how can one manually deinstall the database home and the grid infrastructure home? For the ORACLE_HOME deinstallation one can follow Doc ID 1624580.1 and run:

cd $ORACLE_HOME/oui/bin
./runInstaller -silent -detachHome ORACLE_HOME="/u01/app/svc-xxxyyy/product/12.1.0/dbhome_1" ORACLE_HOME_NAME="OraDB12Home1"

This operation succeeded without any problems. Deinstalling the GRID_HOME was slightly more complicated; at the moment of writing this post there is no document on MOS relevant for manual deinstallation of a grid infrastructure home with configured Oracle Restart. The suggestion from Oracle Support was to follow "How to Deinstall Oracle Clusterware Home Manually (Doc ID 1364419.1)", which was relevant for 11gR2 but not for 12c. I actually performed the deinstallation in two steps:

# -deconfig -force
$ runInstaller -detachHome -silent -local ORACLE_HOME=$ORACLE_HOME

The $ORACLE_HOME in this case actually pointed to the GRID_HOME.

Why did I say in the title that this will make you cry? Because you have to start all over again, and the Oracle Installer doesn't warn you about the problematic hyphen during the installation process when all the prerequisite checks are run.

So, after working so many years with Oracle products, I was really angry with myself for being caught by such a stupid mistake.

Friday, 9 November 2012

Internet Explorer Cannot Access Database Console Page - Certificate Error

Microsoft recently released a security patch for Internet Explorer which requires a key at least 1024 bits long. Originally, when EM Database Control (DB Console) is installed, the self-generated key length is 512 bits.
IE therefore reports a problem with the web site (EM) certificate.
Oracle has recently published a note on My Oracle Support (MOS) under ID 1498203.1
"Log In To Grid Control Console 10g or DB Control 10g and 11g within Internet Explorer 7, 8 or 9 fails with "Internet Explorer cannot display this page" or with Security Certificate Error".

Actually, one has to patch EM in order to be able to increase the key length to 1024 bits. The command which should be used is:

emctl secure dbconsole -key_strength 1024 -reset

However, without first applying Patch 14503114 one will get an error that the parameter -key_strength is not supported. So in order to resolve the problem, the patch has to be applied first.

The other possible workaround is to use another browser which accepts a 512-bit key. Google Chrome and Firefox work just fine.

Sunday, 4 November 2012

SANGAM12 - All India Oracle Users Group Conference in Bangalore, India

I am writing this blog post at the Charles de Gaulle (CDG) airport in Paris, just after the return flight from Bangalore, India. I had the privilege of being a speaker at the SANGAM12 conference, which is the All India Oracle Users Group conference. The conference took place at the Crowne Plaza hotel in Bangalore on November 2nd and 3rd.
Final wrap-up session.
My travel to India ran into problems already at the CDG airport, where I had only about one hour between the flight from Ljubljana to Paris and the flight from Paris to Bangalore. There were huge queues in front of passport control and only about half of the counters were working, so I was immediately convinced that I would miss my flight. After arriving at the gate about 5 minutes before the scheduled departure I saw the airplane still at the gate, but I was not allowed to board because the airplane door was already closed. So I had to reschedule my flight, and the new connection was a flight via New Delhi with Air India. Unfortunately I had to wait about 10 hours. I arrived in New Delhi the next morning and the connecting flight was in the early afternoon, so the final delay was approximately 16 hours. I had ordered an airport pickup, but unfortunately it was rush hour, so it took us more than two hours to finally arrive at the conference venue, where I was also staying. The conference day was already over, so I only met Tom Kyte, Graham Wood and Andrew Holdsworth from Oracle Corporation, having a final beer in the lobby.
The conference agenda was very interesting. My first presentation on Saturday, November 3rd was at noon and the biggest room was almost full. I gave the presentation "Getting best from the Cost Based Optimizer". I started with some jokes, so contact with the audience was quickly established.
My second presentation, "Execution Plan Interpretation", was in the afternoon in a slightly smaller room. I was just about to start when the organizer decided to move my presentation to the biggest room, as so many people were left standing outside. Surprisingly, it took us just about 3 minutes to swap the rooms and I was able to start the presentation.
According to the feedback from many people, both my presentations were very well received. Finally I also met my friend from Facebook, Aman Sharma, in person. He was also presenting about the
The way back so far was uneventful and I had good rest during the flight back. 
Together with Aman Sharma and guys from Oracle India.
I congratulated Muralli Valath, the president of the group, for the excellent organization of the event, and I hope I'll be able to present in India again next year. This was their 4th conference, so they are still growing, and I am sure they have great potential to become one of the biggest Oracle user groups worldwide.

Wednesday, 24 October 2012

CBO Days in Zurich December 11-12 , Switzerland

My dear colleague from the OakTable, Christian Antognini, is organizing an excellent event called "CBO Days 2012", which will take place in Zurich, Switzerland, on December 11-12.
I am really honored to be speaking together with great names of the Cost Based Optimizer (CBO) like Jonathan Lewis, Randolf Geist, Mohamed Zait, Maria Colgan and Christian Antognini.

I remember being an attendee at the first CBO Days back in 2006. Mohamed Zait was at that time already involved in developing the CBO and gave several excellent presentations. It was a great opportunity to discuss with him the actual problems we had with the CBO at that time, like bind variable peeking happening only at parse time, and runaway queries. At the OakTable dinner we suggested several possible solutions to him, which in subsequent years were built into the database.

Such events are a great opportunity to gain knowledge which one can't really get in any one place otherwise. The Internet is a great source of knowledge, but attending a detailed technical presentation is something quite different.

I expect that this event will simply be the best event totally focused on the problems of cost-based optimization.