ORA-01078 / LRM-00123: Troubleshooting Junk Characters in an Oracle Initialization File

Many times we get stuck in a situation where a database instance (an auxiliary instance or some test instance, for example) refuses to start from a simple Oracle text initialization file. This commonly happens when the sample or source parameter file is copied from an HTML page or a formatted document editor that allows Unicode text. The frustrating part is that even after checking the file a couple of times in vi or Notepad, the mess in the parameter file may still not be visible. In this situation there are multiple ways to fix the issue.

  • Copy the content into Notepad and then copy it back to the Unix console to see if that helps. Focus on special characters such as smart quotes and dashes: the autocorrect options of word processors usually convert plain ASCII characters into these formatted ones. Once the full parameter file content is in Notepad or another plain text editor, it is not difficult to correct it manually, unless the configuration file is large.

 

  • The other way is to use od (octal dump), a very old Unix utility that shows every byte in a file.

 

Today we will demonstrate troubleshooting of junk characters using od as well as the conventional method.

 

Here we have created two text parameter files, one of which contains a Unicode (smart) quote character instead of a plain ASCII single quote:

[oracle@maskmt11g.lgk.nmk] ls -lrt initMASK*

-rw-r--r-- 1 oracle oinstall 912 May 6 16:46 initMASK11G.ora_correct

-rw-r--r-- 1 oracle oinstall 916 May 6 16:54 initMASK11G.ora

[oracle@maskmt11g.lgk.nmk] diff initMASK11G.ora_correct initMASK11G.ora

17c17

< *.db_name=mask11g

---

> *.db_name=mask11g

 

Let’s try to start the instance using this corrupted parameter file.

SQL> startup nomount pfile='/home/oracle/initMASK11G.ora'

LRM-00123: invalid character 128 found in the input file

ORA-01078: failure in processing system parameters
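One way to locate the offending byte is to dump the file with od and look for anything outside the printable ASCII range. A minimal sketch, assuming a hypothetical file under /tmp (the byte sequence below is an illustrative Unicode right quote, not the exact character from the post):

```shell
# Create a sample pfile line containing a Unicode right quote (bytes \342\200\231)
printf '*.db_name=mask11g\342\200\231\n' > /tmp/init_test.ora

# od -c prints printable characters as-is and everything else as octal escapes,
# so non-ASCII junk stands out immediately
od -c /tmp/init_test.ora
```

In the od output, byte values such as 342 200 231 mark the smart quote; open the file in vi, delete that character, and retype it as a plain ASCII quote.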


Syncing Standby Database Using SCN Based Backup

In an Oracle Data Guard environment we sometimes face a slow network, pushing us into a situation where changes pile up and the normal redo shipping speed is not sufficient to cover the gap within the required business timelines. Or the archives/redo may never have been shipped because the standby database was unavailable, and there is not enough space on the primary site to hold all the archives required for recovery. In those situations you might decide to clear space on the primary site after moving the archives to tape. Later, when the standby becomes available, the FAL service will not be able to fetch the gap. One option is to restore those archives to the standby site and register them; another option, of intermediate complexity, works even when we don’t have an identical physical structure on both sites.

Today we will demonstrate recovery of a standby database with an SCN-based incremental backup, in an environment where the primary site uses a shared filesystem while the standby database hosts its files in ASM.

  1. Check the minimum SCN on the standby database from which changes need to be applied.

    Standby Site

    SQL> select min (checkpoint_change#) from v$datafile_header order by 1;

    MIN(CHECKPOINT_CHANGE#)
    ———————–
    109182027
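    The next step on the primary is to take an incremental backup starting from that SCN, together with a standby controlfile. A hedged sketch of the RMAN commands (the backup destination and tag are illustrative, not from the post):

    ```sql
    -- On the primary, in RMAN (rman target /):
    BACKUP INCREMENTAL FROM SCN 109182027 DATABASE
      FORMAT '/backup/stby_sync_%U' TAG 'STBY_SYNC';
    BACKUP CURRENT CONTROLFILE FOR STANDBY
      FORMAT '/backup/stby_ctl_%U';
    ```

    After transferring the pieces to the standby site, the backup is cataloged there (CATALOG START WITH), the standby controlfile is restored, and RECOVER DATABASE NOREDO applies the incremental.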


How Many SCAN Listeners?

Today we will explore a little more about SCAN listeners. Common questions come to mind, like: How many SCAN listeners can be configured in a multi-node RAC environment? How many SCAN listeners are actually required, and how many are enough?

Let’s find out the answers to these common queries.

I am assuming that by now most DBAs are pretty comfortable with SCAN listeners; in case a detailed explanation is needed, the link here is a good place to visit. As usual, there is excellent documentation from Oracle for SCAN in the 11g as well as 12c releases.

Basically, SCAN listeners were introduced to create another connection handler on top of the existing node listeners, to overcome the failover and load balancing limitations of the RAC architecture up to 11gR1. Until 11gR1, node listeners were defined on VIPs to facilitate NACK (negative acknowledgement) and so resolve the issue of TCP timeouts.

In pre-11gR2 database environments, clients could take up to 2 minutes to decide (at the TCP level) that a node was down. This is purely because of TCP timeouts, which can differ from platform to platform. Those 2 minutes were unacceptable, and it was a good thing that Oracle understood and addressed this issue. Oracle designed a virtual IP address to be assigned to the public interface. Under normal circumstances, the VIP is located on its designated NIC, and the listener is bound to this VIP. Whenever the VIP fails over to another node in the cluster, clients connecting to it (because their TNS alias addresses point to this VIP) get an immediate response, since the VIP has failed over and the TCP stack is running against it; the client receives a negative acknowledgement (NACK) confirming that no listener is active at the designated port. Within a few seconds the client knows to fail over to the alternative address in its TNS alias. This makes failover a lot faster.

From 11gR2 onwards, Oracle enhanced administration, client-side availability, and load balancing by introducing SCAN (Single Client Access Name). With SCAN, clients can use a single SCAN name (resolved to 3 VIPs in the default configuration) rather than a list of all RAC nodes in the connect string. SCAN listeners (defined on the SCAN VIPs) act as the first point of contact and coordinate with the node listeners (defined on the node VIPs); if a node fails, any SCAN listener running on that node is relocated to a surviving node while the remaining SCAN listeners keep serving as normal.

In practice, complex environments are designed with a multi-tier architecture in which connections are handled via a connection pool mechanism, which reduces connect time for application or web-based end users. Normally 3 SCAN listeners are capable of handling hundreds of new connections within a span of a few seconds. Still, if you feel that the default 3 SCAN listeners are not enough for your environment, you have the option to add a few more. A GNS-based dynamic IP scheme still has no way to change the number of SCAN listeners, so today we will elaborate the process of adding one extra SCAN listener in our DNS-based static IP configuration.
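Assuming the extra IP address has already been added to the SCAN name in DNS, the cluster side of the change can be sketched with srvctl roughly as follows. This is a sketch only: the SCAN name mask-scan.lgk.nmk is hypothetical, exact options vary by version, and the modify step needs root privileges.

```shell
# Check the current SCAN and SCAN listener configuration
srvctl config scan
srvctl config scan_listener

# After the 4th IP is added to the SCAN name in DNS, refresh the SCAN VIPs (as root)
srvctl modify scan -n mask-scan.lgk.nmk

# Adjust the SCAN listeners to match the new number of SCAN VIPs, then start them
srvctl modify scan_listener -u
srvctl start scan_listener
```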

Step by Step Installation of OEM 12c

A few important things should be considered before going for OEM 12c in your environment. First, EM 12c is more resource demanding than the 10g and 11g versions. Second, it has been re-engineered for better monitoring and administration capabilities and is not compatible with older versions of agents, so the day you roll 12c into your environment, you have to install the 12c agent on all targets.

Package Requirements for Oracle Management Service

In our case we are setting up EM 12c on Oracle Linux 5.x 64-bit.

binutils-2.17.50.0.6

gcc-4.1.1

glibc-common-2.3.4

glibc-devel-2.5-49 (32-bit version as well as the 64-bit version)

libaio-0.3.106

libstdc++-4.1.1

libXtst-1.0.1-3.1 (x86_64)

make-3.81

rng-utils-2.0

setarch-1.6

sysstat-5.0.5

xorg-x11-utils     (This package is required only for GUI-based interactive installation, and not for silent installation)
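A quick way to verify the list above on the OMS host is to query rpm for each package name; anything missing is reported as "not installed". A sketch to be run on the Oracle Linux 5.x host:

```shell
rpm -q binutils gcc glibc-common glibc-devel libaio libstdc++ \
    libXtst make rng-utils setarch sysstat xorg-x11-utils
```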


Cloning Oracle Home in RAC

Cloning Oracle software is an easy and fast way to achieve standardization across an organization: all effort is put into one environment, and after testing, that environment can be used as the source of binaries. A simple tar ball can be shipped to all other servers and then untarred on the destination environment, either as a new home or to replace the existing Oracle home, depending on the space available there.

There are different situations where we can use this method, sometimes in a slightly twisted way, to achieve the desired result. Possible scenarios are:

  • A lost filesystem hosting the Oracle database software, requiring either a new installation or cloning of the software from a surviving node (in RAC) or from some other environment (standalone).
  • Enterprise-wide periodic patching, where cloning can save a lot of effort: build an image once and clone it across the enterprise.
  • Building new environments while migrating databases across datacenters.
  • Node addition in RAC, which also uses the cloning technique.
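The core of the procedure can be sketched as follows. The paths and home name below are illustrative, not from the post; clone.pl ships inside the Oracle home itself:

```shell
# On the source server: archive the existing home (with the software idle)
tar -czf /tmp/db_home.tar.gz -C /u01/app/oracle/product/11.2.0 dbhome_1

# On the destination: unpack into the new location
mkdir -p /u01/app/oracle/product/11.2.0
tar -xzf /tmp/db_home.tar.gz -C /u01/app/oracle/product/11.2.0

# Run the clone script to register the home in the inventory and relink binaries
perl /u01/app/oracle/product/11.2.0/dbhome_1/clone/bin/clone.pl \
  ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 \
  ORACLE_HOME_NAME=OraDb11g_clone ORACLE_BASE=/u01/app/oracle
```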


Step By Step Installation of 11.2.0.3.0 Standalone RDBMS Software

From 11gR2 onwards, Oracle recommends out-of-place upgrades and therefore provides a full installer for major upgrades (i.e. 11.2.0.1.0, 11.2.0.2.0, 11.2.0.3.0) instead of patchsets. We can then apply a bundle patch or custom patches based on our requirements.

For 11.2.0.3.0, the software is available on MOS under patch number 10404530 and can easily be downloaded. The patch contains 7 files in total, but we need only p10404530_112030_platform_1of7.zip and p10404530_112030_platform_2of7.zip (2.32 GB) to install the DB software. This patch is actually a full release and can be used for upgrades as well as new installations.

Prerequisite RPMs

binutils-2.17.50.0.6

compat-libstdc++-33-3.2.3

elfutils-libelf-0.125

elfutils-libelf-devel-0.125

elfutils-libelf-devel-static-0.125

gcc-4.1.2

gcc-c++-4.1.2

glibc-2.5-24

glibc-common-2.5

glibc-devel-2.5

glibc-headers-2.5

kernel-headers-2.6.18

ksh-20060214

libaio-0.3.106

libaio-devel-0.3.106

libgcc-4.1.2

libgomp-4.1.2

libstdc++-4.1.2

libstdc++-devel-4.1.2

make-3.81

numactl-devel-0.9.8.i386

sysstat-7.0.2

unixODBC-devel-2.2.11-7.1.i386.rpm

unixODBC-2.2.11-7.1.i386.rpm

Note: a 64-bit installation also searches for the i386 unixODBC rpms.
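Since the installer checks for the 32-bit unixODBC packages even on x86_64, they can be installed alongside the 64-bit ones; a sketch, assuming the rpm files from the list above are in the current directory:

```shell
rpm -ivh unixODBC-2.2.11-7.1.i386.rpm unixODBC-devel-2.2.11-7.1.i386.rpm
```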


GI (Oracle Restart) Upgrade from 11.2.0.1 to 11.2.0.3

Recently I upgraded a single-node GI (Oracle Restart) installation from 11.2.0.1.0 to 11.2.0.3.0 and thought I would share it with you. Though it’s not something very interesting to share, it’s not that bad at all. :)

In this blog I’ll use GI for Grid Infrastructure, which is also Oracle Restart in our case. Here are the details of the GI (Oracle Restart) setup:

Grid Infra Version: 11.2.0.1.0

Grid Home: /u01/app/oracle/product/11.2.0/grid

Host Name: mask11g

Storage: ASM

DB Version: 11.2.0.1.0

Purpose: Upgrade GI from 11.2.0.1.0 to 11.2.0.3.0

Path for upgrade: Out of place (New location on same server) followed by removal of old GI home.

From 11gR2 onwards, Oracle recommends the out-of-place upgrade, though the in-place upgrade is still available. I tried both and they are almost similar except for a few minor things. An in-place upgrade lets you upgrade an existing GI installation in the same directory by replacing the existing installation files; it requires less disk space, but the patch set application requires more downtime and is not recommended.

For an out-of-place upgrade we need almost 5 GB of extra space during the upgrade; we can release that space afterwards by removing the old grid software.

To make it easy to follow as a step-by-step approach, I am attaching snapshots of the upgrade process.

The software is available on MOS under patch number 10404530 and can easily be downloaded. The patch contains 7 files in total, but we need only p10404530_112030_platform_3of7.zip (933 MB) to complete the GI upgrade. This patch is actually a full release, so it can be used for a new installation as well as an upgrade from older versions.

I would suggest reading the instructions from all aspects before you test the upgrade in your environment. I can’t share more details here, or this blog would look like an upgrade companion doc. :)

  1. Download and unzip p10404530_112030_platform_3of7.zip into a staging location on your server.
  2. Take a backup of the existing environment.
  3. Upgrade the GI software.
  4. Once the upgrade is done, you can remove the old GI installation, as it is of no further use, or leave it as-is if you have plenty of space on your server.
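Step 2 can be as simple as a cold tar of the existing grid home plus the central inventory. A sketch using the grid home path from the post; the backup destination and inventory location are assumptions for illustration:

```shell
# With Oracle Restart stopped, archive the old grid home
tar -czf /u01/stage/grid_11201_backup.tar.gz \
    -C /u01/app/oracle/product/11.2.0 grid

# Also archive the central inventory (location per oraInst.loc; assumed here)
tar -czf /u01/stage/oraInventory_backup.tar.gz -C /u01/app oraInventory
```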


DB migration to ASM Storage

Why ASM? It’s a debatable point between DBAs and storage experts, but industry-wide Oracle users can agree on a few points:

  1. Reduced administration overhead.
  2. Better I/O compared to placing datafiles on a filesystem.
  3. Online expansion and migration to faster disks.
  4. Storage management similar to RAC databases.

ASM places data in chunks (called allocation units) and has the added beauty of automatically rebalancing them across the available disks. Today I am here to share a demo of DB migration from a filesystem to ASM. There are a few possible ways to achieve the migration; it is hard to say which is best without actually understanding the business requirement.

  • Migration of the physical database
    • Take an image copy of the database in ASM storage and switch the database to the copy.
    • Take an RMAN backup and restore it into the ASM disks, followed by a switch to the copy.
    • Duplicate the database into an ASM diskgroup.
  • Migration with physical replication
    • Data Guard or transportable tablespaces
  • Migration with a logical replication setup
    • Any replication method (GoldenGate, Streams, SharePlex, etc.) to set up a parallel environment and perform a switchover.

Migration using a physical standby involves less downtime and is easy to perform, but not everyone can afford this approach, as it requires extra hardware resources.

Today we will demonstrate migration from a filesystem to ASM using RMAN. Apart from the straight migration, we will also restore the spfile and controlfile from autobackup; this covers the scenario where we have lost the spfile or controlfile. Once the controlfile is available, so is the information about the backups, although backups can also be registered manually if required.
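The image-copy variant listed earlier can be sketched in RMAN as follows; the +DATA diskgroup name is illustrative, and redo logs and tempfiles must be moved in separate steps:

```sql
-- In RMAN (rman target /), with the database mounted:
BACKUP AS COPY DATABASE FORMAT '+DATA';
SWITCH DATABASE TO COPY;
RECOVER DATABASE;
ALTER DATABASE OPEN;
```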

What happens when a tablespace/database is placed into backup mode?

User-managed (physical) backup without RMAN is still a popular backup method and is still part of most disk-to-disk backup schemes (BCV or SRDF copies). Today I am going to discuss what happens when we place a tablespace/database into backup mode.

Fractured Block

The unit of I/O for any system is its block size: for an Oracle database it is 2K/4K/8K/16K/32K, while for the operating system it can be lower than that. One risk in making online backups is the possibility of inconsistent data within a block. Assume the OS I/O block size is 512 bytes while the DB block size is 8K.

This means the OS copy utility takes 16 I/Os to back up one DB block, while during the backup DBWR can write the whole 8K in one I/O.

Now assume a scenario where the copy utility is busy backing up the 8th portion of a particular block when DBWR comes along and overwrites the same DB block (i.e. all 16 OS block pieces within the one DB block). Because the copy utility had already copied the earlier pieces, it picks up the new image from the 9th OS piece onwards. From the database perspective, that backup copy of the DB block (a collection of 16 OS blocks) has been fractured: half of the OS blocks are old and the remaining half are new. The result is called a fractured block, meaning that the data contained in this block is not consistent at a given SCN.

Oracle’s internal protection mechanism against fractured block

To overcome this danger, Oracle provides a protection mechanism that is activated when you issue the ‘alter tablespace/database begin backup’ command. The following internal operations happen with immediate effect after triggering begin backup for a tablespace.
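The commands involved, and the view that shows which datafiles are in backup mode, can be sketched as follows (the tablespace name is illustrative):

```sql
-- Place one tablespace (or the whole database) into backup mode
ALTER TABLESPACE users BEGIN BACKUP;

-- V$BACKUP shows STATUS = ACTIVE for datafiles currently in backup mode
SELECT file#, status, change#, time FROM v$backup;

-- Copy the datafiles at OS level, then take the tablespace out of backup mode
ALTER TABLESPACE users END BACKUP;
```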

Redo Block Size

Redo logs, or online redo log files, are crucial for any Oracle database to operate with durability of transactions. All change vectors must be written to the current redo log (except with nologging and a few exceptional settings where the database is run purely for performance benchmarking purposes).

Redo log files always default to a block size that is equal to the physical sector size of the disk. Historically, this has typically been 512 bytes (512B). Beginning with Oracle Database 11g Release 2, you can specify the block size of online redo log files with the BLOCKSIZE keyword in the CREATE DATABASE, ALTER DATABASE, and CREATE CONTROLFILE statements. The permissible block sizes are 512, 1024, and 4096.

So we are going to check whether we can try out this cool option. Let’s check the current block size of the existing redo log files from the BLOCKSIZE column of the V$LOG view.
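The check, and the 11.2 syntax for adding a log group with a different block size, can be sketched as follows; the group number and size are illustrative, and the underscore parameter mentioned is an unsupported testing aid:

```sql
-- Current redo block size per group (BLOCKSIZE column exists from 11.2 onwards)
SELECT group#, bytes/1024/1024 AS mb, blocksize, status FROM v$log;

-- Add a new group with a 4K block size (normally requires 4K-sector disks;
-- _disk_sector_size_override=TRUE can relax the check for testing only)
ALTER DATABASE ADD LOGFILE GROUP 4 SIZE 100M BLOCKSIZE 4096;
```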

