ORA-27054: NFS file system where the file is created or resides is not mounted with correct options

bash-3.2$ impdp system/***** directory=import_dir nologfile=y dumpfile=exp_swx_20092012.dmp full=y

ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31640: unable to open dump file "/USwitch/exportdump/exp_swx_20092012.dmp" for read
ORA-27054: NFS file system where the file is created or resides is not mounted with correct options
Additional information: 3
Additional information: 18

The error above appears while trying to import a dump file from an NFS mount point via impdp in Oracle 11g (11.2.0.3).

Solution: I solved this issue with a two-part approach. The second step came from searching the internet; the directory permission fix I discovered after a lot of trial and error.

1. Give full permissions to the import directory at the OS level. Suppose the directory you created in the database is called import_dir. Find its location in the view dba_directories and grant full permissions on it.

SQL> select * from dba_directories;

$ chmod 777 /USwitch/exportdump

Unmount and mount the NFS file system again.
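Before remounting, you can check how the file system is currently mounted; the exact NFS options Oracle expects vary by platform, so treat the commands below as a sketch for verifying the mount rather than a definitive option list:

$ mount -v | grep /USwitch
$ nfsstat -m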

2. Set the parameter below in the init.ora file and then bounce the database:

event='10298 trace name context forever, level 32'
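If your instance uses an spfile rather than a pfile, the same event can be set with ALTER SYSTEM before restarting (a sketch of the equivalent commands):

SQL> alter system set event='10298 trace name context forever, level 32' scope=spfile;
SQL> shutdown immediate
SQL> startup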

Now try the import again. If you are running RAC, use the nologfile=y parameter.

bash-3.2$ impdp system/sys123 directory=import_dir nologfile=y dumpfile=exp_swx_20092012.dmp full=y


Export Oracle 11g (x86 Windows) Schema into Oracle 10g (SPARC Solaris) using EXPDP (Cross-Platform, Cross-Version Migration)

The scenario: we would like to export a schema called SWX residing in an Oracle 11.2.0.2 database running on 32-bit Windows into an Oracle 10.2.0.4 database running on 64-bit SPARC Solaris. The source database runs Transparent Data Encryption (TDE); the tablespace where the schema resides is fully encrypted. The target database is not running TDE.

For this complex export/import scenario we have to use expdp and impdp, since exp and imp have limitations and are not as efficient as Data Pump. exp and imp should be used by database administrators only if they are running a 9i database and want to export it into a higher-version database.

First, on the source database where the "SWX" schema resides, we will do the following steps:

1. Create a directory for the export backups and grant read and write permissions on it to public (all users):

SQL> create or replace directory export_dir as 'E:\ORA_BACKUPS';

SQL> grant read, write on directory export_dir to public;

2. expdp system/sys directory=export_dir dumpfile=exp_win_11th_sep_11.dmp logfile=export.txt schemas=swx encryption_password=ora11g version=10.2

Alternatively, you can create a parameter file called export.par with the parameters below and run expdp:

directory=export_dir
dumpfile=exp_win_11th_sep_11.dmp
logfile=export.txt
schemas=swx
encryption_password=ora11g
version=10.2

expdp system/sys parfile=export.par

3. FTP the file to the target server in binary mode.
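As a sketch, a typical session looks like this (target_host is a placeholder for your target server):

$ ftp target_host
ftp> binary
ftp> put exp_win_11th_sep_11.dmp
ftp> bye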

4. If the user already exists on the target server, you can drop it first. If it does not exist, proceed as below.
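If you do decide to drop the existing user first (be careful with this on a production system), the statement is:

SQL> drop user swx cascade;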

5. Create an import directory on the target, just as we created the export directory on the source server. The location where you FTP'ed the backup file should be defined as import_dir:

SQL> create or replace directory import_dir as '/u2';

SQL> grant read, write on directory import_dir to public;

6. Create a parameter file called import.par with the parameters below:

directory=import_dir
dumpfile=EXP_WIN_11TH_SEP_11.DMP
remap_tablespace=swx_enc:users
logfile=import.txt

# Do not use encryption_password, since this database does not have a wallet defined. If there were already a wallet for this database, you would first open the wallet and then use the encryption_password parameter.

# The original tablespace in the source database where the schema SWX originated was called "swx_enc"; the tablespace where it will be imported on the target database is called "users". The general syntax is remap_tablespace=sourcetbs:targettbs.

# If you want to import into a different schema, you can use the option remap_schema=sourcename:targetname

7. impdp system/system parfile=import.par

Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
ORA-39082: Object type ALTER_PROCEDURE:"SWX"."SPUPDATECYCLE" created with compilation warnings
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
ORA-39014: One or more workers have prematurely exited.
ORA-39029: worker 1 with process name "DW01" prematurely terminated
ORA-31672: Worker process DW01 died unexpectedly.

Job "SYSTEM"."SYS_IMPORT_FULL_02" stopped due to fatal error at 12:18:44

You can ignore this error; the schema has been imported successfully into the 10g database.
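The ORA-39082 warnings usually just mean that some PL/SQL objects compiled with errors or remained invalid; as a sketch, you can recompile them on the target afterwards and check what is still invalid:

SQL> exec utl_recomp.recomp_serial('SWX');

SQL> select object_name, object_type from dba_objects where owner = 'SWX' and status = 'INVALID';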

So you can see how efficient Data Pump is as a tool for cross-platform, cross-version export and import of data. If you want to do full database restores, RMAN is the best way to do it, but in a cross-platform scenario transportable tablespaces become overly complicated and also carry the little-endian/big-endian limitation. Data Pump has no such restrictions: you can practically export and import from any version to any version and from any platform to any platform.

How to Add Date and Time to a Filename with Windows Command Line.

Many times it becomes necessary to append a date stamp to your export dumps. On Unix systems, shell scripting lets you define such variables yourself, e.g.:

expdate=`date '+%d%m%Y'`
dat=`date '+%m%d%y %H:%M:%S'`

Then use them in your script, for example:

./expdp system/system directory=export_dir dumpfile=exp_swx_$expdate.dmp logfile=exp_swx_$expdate.log schemas=swx

On Windows it can be done even more easily, using the built-in %date% and %time% variables. Just insert the strings below into the file name wherever you want the current system date and/or time included. (Note that the substring offsets depend on your Windows regional date format, so adjust them if your %date% layout differs.)

Date:  %date:~4,2%-%date:~7,2%-%date:~12,2%
Time:  %time:~0,2%-%time:~3,2%-%time:~6,2%
Example: copy c:\test.txt c:\test-%date:~4,2%-%date:~7,2%-%date:~12,2%.txt will output c:\test-mm-dd-yy.txt

exp edate/edate file=edate%date:~4,2%-%date:~7,2%-%date:~12,2%.dmp log=edate%date:~4,2%-%date:~7,2%-%date:~12,2%.txt owner=edate statistics=none

Export tablespace Oracle 10g: Transportable Tablespace

Transportable Tablespace across Different Platforms
___________________________________________________

In our example we will transport a tablespace called TEST. The tablespace should be self-contained, with no objects referring to other tablespaces, so first check whether it is self-contained using the query below:

EXECUTE dbms_tts.transport_set_check('TEST', TRUE, TRUE);

SELECT * FROM transport_set_violations;

If you have cross-referenced objects, drop those objects from the TEST tablespace. (Please note this is being done in a test environment, so I can drop the cross-referenced objects easily. Do not drop any objects without consulting your senior DBA in a production environment; you never know which object has which dependency.) Suppose you have indexes in the TEST tablespace pointing to tables in another tablespace. You can safely move those indexes using the PL/SQL block below:

begin
for c1 in (select index_name a1 from user_indexes where tablespace_name = 'TEST')
loop
execute immediate 'alter index '||c1.a1||' rebuild online tablespace <tablespace name>';
end loop;
end;
/

Now proceed and check violations again. If you get an error like:

SELECT * FROM transport_set_violations;

VIOLATIONS
——————————————————————————–
Violation 46 : contact Oracle support
Violation 46 : contact Oracle support
Violation 46 : contact Oracle support

Run this query to see why the violation is being raised:

SELECT TRIM(OBJ1_OWNER) || ', ' || TRIM(OBJ1_NAME) || ', ' ||
TRIM(OBJ1_SUBNAME) || ', ' || TRIM(OBJ1_TYPE) || ', ' ||
TRIM(TS1_NAME) || ', ' || TRIM(OBJ2_OWNER) || ', ' ||
TRIM(OBJ2_NAME) || ', ' || TRIM(OBJ2_SUBNAME) || ', ' ||
TRIM(OBJ2_TYPE) || ', ' || TRIM(TS2_NAME) || ', ' ||
TRIM(CONSTRAINT_NAME) || ', ' || TRIM(REASON)
FROM PLUGGABLE_SET_CHECK
WHERE MESG_ID=46;

This will show exactly what the violation against the transportable tablespace is. Fix those violations and then start the actual procedure for transporting the tablespace. In my example I had used Transparent Data Encryption (TDE) for a few columns in the database, and the query reported that you cannot use transportable tablespaces when you have encrypted columns, so I had to decrypt the columns I had encrypted, using queries like:

SELECT * FROM dba_encrypted_columns;
ALTER TABLE account MODIFY (cardholder DECRYPT);

After decrypting the columns, we proceed further.

1. Check the ENDIAN format of both source and target using the scripts below:

-----
-- What are the available Transportable Tablespace Platform possibilities?
-----
SELECT
platform_name
,endian_format
FROM v$transportable_platform
ORDER BY platform_name
;

-----
-- What's the current ENDIAN-ness of my database and platform?
-----

SELECT
D.name
,TP.endian_format
FROM
v$transportable_platform TP
,v$database D
WHERE TP.platform_name = D.platform_name
;

It will report whether the ENDIAN format is "Little" or "Big". If the endian format is the same on both target and source, there is no need for a cross-platform conversion; you can simply export the tablespace and import it on the target. If not, proceed as below.

2. Alter the tablespace and put it in read-only mode on source. In our example we will transport a tablespace called TEST to the target server.

ALTER TABLESPACE test READ ONLY;

3. Now export the tablespace metadata.

##Windows###
exp USERID='system/oracle@Source AS SYSDBA' TRANSPORT_TABLESPACE=y TABLESPACES=test FILE=test_repository.dmp log=test_repository.txt

###Unix###
exp \'sys/oracle AS SYSDBA\' TRANSPORT_TABLESPACE=y TABLESPACES=test FILE=test_repository.dmp log=test_repository.txt

Copy the dump file to any location on the target, along with the datafiles of that tablespace, then import:

imp \'sys/oracle AS SYSDBA\' TABLESPACES=test TRANSPORT_TABLESPACE=y FILE=test_repository.dmp datafiles='users_01.dbf'

4. The above applies when you are moving a tablespace to a machine with the same endian format. If the endian formats differ, proceed as below. We will convert from 64-bit SPARC Solaris (Big) to 32-bit Windows (Little).

5. Convert the datafiles using RMAN and db_file_name_convert

#do not use this method: On Source machine: With original filename
RMAN> convert tablespace 'TEST'
to platform 'Microsoft Windows IA (32-bit)'
format='/datafiles/rmanbkp/%N_%f'
parallelism = 4;

## use this method: On Source machine: With converted file names

RMAN> convert tablespace 'TEST'
2> to platform 'Microsoft Windows IA (32-bit)'
3> db_file_name_convert '/datafiles/oradata/SWX','/datafiles/rmanbkp'
4> parallelism = 4;

After the files are converted they can be found in '/datafiles/rmanbkp'. Copy these converted files, along with the metadata dump of the transportable tablespace, to the target server and run the import:

imp USERID='system/password@target AS SYSDBA'
TRANSPORT_TABLESPACE=y
DATAFILES='C:\datafiles\test01.DBF','C:\datafiles\test02.DBF'
TABLESPACES=test
FILE=test_repository.dmp
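One step the listing above does not show: once the import completes, put the tablespace back into read/write mode on the target (and on the source, if it should accept writes again):

SQL> alter tablespace test read write;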

In the above cases, we converted the files on the source platform. However, you can do that on the target platform as well. For example, you can copy the datafile along with the metadata dump to a Windows Server 2003 32-bit target machine and convert it there:

RMAN> convert
2> datafile 'c:/users01.dbf'  ## directory where you'll put your datafile from the source server
3> format 'c:/datafiles/rmanbkp/%N_%f' ## directory where you'll get the converted file
4> ;

This approach creates the converted file, named according to the format specification, in the directory given.

But why would you want to convert the datafiles on the target platform? One reason could be shorter downtime, since the tablespace only needs to stay in READ ONLY state for the duration of the copy to the target host. You could triple-mirror the datafile, make the tablespace read only, break the third mirror, and immediately make the tablespace read/write again. That third mirror could then be mounted on the target system and converted at leisure. This arrangement minimizes the time the tablespace must remain read only.

Another reason could be performance. The OLTP database may be under a constant load and using the RMAN convert operation may strain the system more than desired. Instead, the conversion can be offloaded to the data warehouse server, where more CPUs are usually available for parallel operations.

How to import a schema to a new server in a different tablespace : Oracle 10g

1.  Copy the backup of the dumpfile to the target server.

2. Create a tablespace, in our example called TEST. Make sure this tablespace is at least 20% larger than your dump file (not a hard and fast rule, but better in my opinion).

3. Create a user TEST and grant it unlimited quota on the tablespace TEST. Make sure you grant it the IMP_FULL_DATABASE and EXP_FULL_DATABASE roles, and also grant it the CREATE SESSION privilege.
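As a sketch (the user name, password, and tablespace here are just the ones from this example):

SQL> create user test identified by test123 default tablespace test quota unlimited on test;

SQL> grant create session, imp_full_database, exp_full_database to test;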

4. Now create the directory from SQL*Plus as below:

create or replace directory import_dir as '/datafiles';

grant read,write on directory import_dir to public;

5. Now import the dump using impdp. Remember: if you exported with expdp, you must import with impdp.

impdp test/test123@swx directory=import_dir dumpfile=exp_swx_26052010.dmp logfile=import_test.txt remap_schema=swx:test remap_tablespace=users:test parallel=2

A few points to note:

1. The dump file was created for a schema called SWX:
remap_schema=source:target

2. The original tablespace was called users, the new tablespace is called test
remap_tablespace=source:target

3. 'parallel' specifies the number of parallel execution processes.
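If you want to watch the job and its parallel workers while the import runs, one option (a sketch; substitute the job name that dba_datapump_jobs reports for your session) is:

SQL> select owner_name, job_name, state, degree from dba_datapump_jobs;

$ impdp test/test123@swx attach=<job_name>
Import> status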

Use datapump(impdp) to import a table : Oracle 10g

----- Procedure to import table from production system to test system -----

1. Copy the backup (dump) of the schema onto /backup_restor_only and uncompress it:

$ gzip -d <name of the file>

2. After the gunzip finishes, log on to SQL*Plus and run the commands below:

SQL> create or replace directory import_dir as '/backup_restor_only';

SQL> grant read,write on DIRECTORY import_dir to public;

3. Before proceeding, take a backup of the table, create a script for the table, and keep it.
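One way to capture the table definition before dropping it (a sketch using DBMS_METADATA; adjust the owner and table name to your environment):

SQL> set long 100000 pagesize 0
SQL> select dbms_metadata.get_ddl('TABLE','STANDBY') from dual;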

4. Now drop the table

SQL> DROP TABLE STANDBY CASCADE CONSTRAINTS;

5. Now go back to command prompt and type below command

impdp abc/abc123 directory=import_dir dumpfile=ex_28042010.dmp logfile=imp_standby.log tables=STANDBY

Successfully imported.

6. Indexes and other constraints are created by default by IMPDP. If they were not created, create them from the script you kept.
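To verify, you can check the status of the table's indexes and constraints afterwards (a sketch, run as the schema that owns the table):

SQL> select index_name, status from user_indexes where table_name = 'STANDBY';

SQL> select constraint_name, constraint_type, status from user_constraints where table_name = 'STANDBY';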