
Backup Controlfile to Trace

How to backup the Oracle Control File?
There are two approaches: you either generate a binary image of the Control File, or you generate a text file script which will re-generate a Control File when run as a SQL script.
To create the binary image, issue the command ALTER DATABASE BACKUP CONTROLFILE TO 'C:\SOMEWHERE\CONTROL01.BKP'; (obviously pick a destination and file name that are suitable for your needs).
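For example, from a session connected as a Privileged User, something like this (the path is just a placeholder, and the optional REUSE keyword lets the command overwrite an earlier backup of the same name):

    ALTER DATABASE BACKUP CONTROLFILE TO '/u01/backup/control01.bkp' REUSE;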
To create the text script version, issue the command ALTER DATABASE BACKUP CONTROLFILE TO TRACE. That causes a file to be written to wherever your init.ora parameter USER_DUMP_DEST is pointing. The file name will be something like 'ora_<some numbers>.trc'. You'll have to track it down using the O/S date and timestamp (or you can take advantage of the fact that the "some numbers" bit is your Server Process ID Number, which you can determine from v$session and v$process).
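If you'd rather find the file by process ID than by timestamp, a query along these lines (the standard join between v$process and v$session) returns the Server Process ID for your own session:

    SELECT p.spid
      FROM v$process p, v$session s
     WHERE p.addr = s.paddr
       AND s.audsid = USERENV('SESSIONID');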
The binary image is ready to work from the word go (with one hugely important proviso, of which more below).
The trace file is, however, a bit of a mess, and needs to be knocked into shape if it is to be of any use in a recovery situation. Firstly, it contains a whole heap of junk at the top (referencing your O/S, your Oracle version, dates and times and process or thread numbers). That has got to go, which means deleting all of it, until the first line reads STARTUP NOMOUNT. Immediately before that line, you need a connect string, since you can't ever issue the STARTUP command until you've connected as a Privileged User. Insert something appropriate, therefore (such as CONNECT / AS SYSDBA if using O/S authentication, or CONNECT SYS/ORACLE AS SYSDBA if using Password File authentication). You may possibly also need to qualify the STARTUP NOMOUNT line itself so that it references an appropriate init.ora (for example, STARTUP NOMOUNT PFILE=/SOMEWHERE/NON-DEFAULT/INIT.ORA). Other than that, the file is fine, and usable.
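Once tidied up, the top of the script will look something like this (the database name, file names and sizes here are invented for illustration; your own trace file will contain the real ones):

    CONNECT / AS SYSDBA
    STARTUP NOMOUNT PFILE=/somewhere/non-default/init.ora
    CREATE CONTROLFILE REUSE DATABASE "PROD" NORESETLOGS ARCHIVELOG
        MAXLOGFILES 16
        MAXDATAFILES 100
    LOGFILE
      GROUP 1 '/u01/oradata/prod/redo01.log' SIZE 10M,
      GROUP 2 '/u01/oradata/prod/redo02.log' SIZE 10M
    DATAFILE
      '/u01/oradata/prod/system01.dbf',
      '/u01/oradata/prod/users01.dbf'
    ;
    RECOVER DATABASE
    ALTER DATABASE OPEN;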
In ordinary circumstances you should never actually need a backup of your Control File, of either type. You'd only use one if ALL copies of the Control File had become corrupt or had been destroyed (and the whole purpose of multiplexing your Control Files is to minimise those possibilities). But if that fateful day ever arrives, you simply need to fire up Server Manager (or SQL*Plus if using 8i or above) and issue the command to run the trace file script (i.e., type @name_of_script.ext). There's no need to connect first; remember, that was the first line we added to the script earlier. If for some reason the Instance is already running, the script will fail: it needs a completely shut-down system before it can do its stuff.
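In practice, then, the restore boils down to something like this (the script name is invented; use whatever you saved the edited trace file as):

    $ sqlplus /nolog
    SQL> @create_controlfile.sql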
The trouble starts when you attempt to restore the binary version of the Control File backup. Because it was an exact, binary copy of a Control File, its SCN will not agree with the SCNs in the headers of all the data files: basically, the Master Clock is out of whack. You therefore have to issue the command RECOVER DATABASE USING BACKUP CONTROLFILE; to tell the system not to pay too much attention to the SCN of the Control File. Unfortunately, after you issue that command (and after any recovery it causes to take place), you must open the database with the command ALTER DATABASE OPEN RESETLOGS;.
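Assuming the binary backup has first been copied over every multiplexed Control File location named in the init.ora, the sequence looks something like this:

    STARTUP MOUNT
    RECOVER DATABASE USING BACKUP CONTROLFILE;
    -- apply archived redo logs as prompted, then:
    ALTER DATABASE OPEN RESETLOGS;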
That's unfortunate, because the RESETLOGS command basically forces the entire database to re-synchronise at time zero, which is fine for getting the database open, but is not exactly useful if you ever need to restore from your prior backups (taken when the database was at a time of, say, 10894329), or if you ever expect to be able to apply redo from prior archive logs (which were also taken when the database was at time 10894329 and earlier). Basically, a RESETLOGS renders your database entirely vulnerable: there are no effective backups, and no effective archives. You are therefore supposed to immediately shut the newly-recovered database down, and perform a whole, closed backup of the entire database (which is not exactly a cheap option).
You might wonder why the use of the trace file does not cause this HUGE problem. The answer is simple: it cheats. Contained within the trace file are the locations of every data file in the system. When the script is run, it uses those locations to read the headers of all the data files, whereupon it picks the highest SCN it comes across as the one to write into the header of the Control File it is about to create. That means the re-constructed Control File is already in synchronisation with the data files, and no forced synchronisation to time zero is therefore required.
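You can watch this Master Clock for yourself: the Control File's checkpoint SCN and the data file header SCNs are exposed through two standard dictionary views:

    SELECT checkpoint_change# FROM v$database;
    SELECT file#, checkpoint_change# FROM v$datafile_header;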
So, what's the best way of backing up the Control File? Answer: multiplex your Control Files so that a recovery is never needed in the first place. But if, despite that, you need some insurance, the trace file is definitely the better way to go. It doesn't leave your entire database exposed to failure, it doesn't effectively trash all prior backups and archives, and it works quickly and well.
There’s only one proviso to this whole discussion: whatever backup method you choose, you need to keep it up-to-date.  Since the Control File contains pointers to the physical location of all data files and redo logs, any backup of that file needs to make sure that those pointers are accurate. Making a physical change to your database (for example, adding a new data file or tablespace, dropping a tablespace, moving or renaming a data file) will instantly render all prior Control File backups out-of-date.  Slightly unnervingly, changing a tablespace from read-write to read-only (or vice versa) also counts as a physical change to your database (because the Control File must always accurately identify any read-only data files).  After any of those operations, therefore, you need to take a fresh backup of the Control File.
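A good habit, therefore, is to tack a fresh 'backup to trace' onto the end of any such operation, along these lines (the tablespace name is just an example):

    ALTER TABLESPACE users READ ONLY;
    ALTER DATABASE BACKUP CONTROLFILE TO TRACE;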
It is always conceivable that you could edit a trace file backup before using it, to take account of any physical changes, but the syntax is not easy, and I don't rate your chances of pulling it off. As for editing the binary copy: forget it! Net result: keep taking the backups on a regular basis. I usually recommend a cron job (for our NT friends, that's an AT job!) which issues the 'backup to trace' command every night. It means you need a bit of house-keeping to avoid complete mayhem (and a million trace files) in the user_dump_dest, but it will guarantee a file which, at worst, can be used with the mere addition of a line or two to reference any data files created between the time the trace file was created and the time all Control Files went AWOL.
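As a minimal sketch of such a nightly job, assuming a Unix host, O/S authentication, and invented path names:

    # crontab entry: take a trace backup at 02:00 every night
    0 2 * * * /u01/scripts/backup_cf.sh

    # /u01/scripts/backup_cf.sh
    #!/bin/sh
    sqlplus -s /nolog <<EOF
    CONNECT / AS SYSDBA
    ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
    EXIT;
    EOF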
