
Archive for July, 2012

How to export a backup of a remote database

Today we will learn about one of the important Data Pump utilities introduced in Oracle 10g:

how to run impdp/expdp from, or to, a remote database.

First, update the tnsnames.ora file on your system:

vi $ORACLE_HOME/network/admin/tnsnames.ora

PRODDB =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 194.9.*.*)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = PRODDB)
)
)


Save and exit.

Now open SQL*Plus in the console:


SQL> create directory dump_dir as '/tmp';
SQL> create public database link source_db using 'PRODDB';
SQL> exit
impdp system/system directory=dump_dir network_link=source_db schemas=scott remap_schema=scott:dest_user

The command above imports the scott schema from source_db into the current database and renames it to dest_user. If dest_user does not exist, it will be created.

expdp system/password directory=dump_dir network_link=source_db schemas=scott
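In network mode, expdp pulls the data over the database link and writes the dump file on the local server. A minimal sketch of a dated file-name convention for that export; the `scott_DD_MM_YYYY` pattern is my own assumption, not from the original post:

```shell
# Build dated dump/log file names for the network-mode export
# (dump_dir and the scott schema come from the example above;
# the file-name pattern itself is an assumption)
edate=$(date +%d_%m_%Y)
dumpfile="scott_${edate}.dmp"
logfile="scott_${edate}.log"
# Print the full command rather than running it, since it needs a live database
echo "expdp system/password directory=dump_dir network_link=source_db schemas=scott dumpfile=$dumpfile logfile=$logfile"
```

Dating the file names keeps successive exports from overwriting each other.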


How to schedule a cron job for an Oracle database in Linux

First, create the script at this location:
vi /home/oracledb/Desktop/Nimai/export.sh

Now, in insert mode, write the lines of your export backup script.
(In my environment, the export backup script is:)

#!/bin/sh
edate=`date +%d_%m_%Y` # today's date, used in the file names
export ORACLE_SID=PRODDB # my system identifier for Oracle
export ORACLE_HOME=/apps/oracledb/product/10.2.0/orahome # export the Oracle home
export PATH=$PATH:/apps/oracledb/product/10.2.0/orahome/bin # add the Oracle bin directory to PATH
expdp Newton/password directory=DUMP_DIR dumpfile=schedule_nimai_$edate.dmp logfile=schedule_$edate.log schemas=newton

Now save it with Esc, then :wq, and exit.
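Before handing the script to cron, it is worth making it executable and running it once by hand. A sketch using a stand-in path and a stubbed-out expdp line (substitute your real export.sh):

```shell
# Create a stand-in copy of the export script for a dry run
# (the path and the echo stub are illustrative only)
cat > /tmp/export.sh <<'EOF'
#!/bin/sh
edate=`date +%d_%m_%Y`
echo "would run expdp with dumpfile=schedule_nimai_${edate}.dmp"
EOF
chmod +x /tmp/export.sh   # make it executable
/tmp/export.sh            # run it once manually before scheduling
```

If the manual run works, cron should have no trouble with it either.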

Now we have to schedule this script to run at whatever time you need.

In my environment, I have set the schedule for 2 PM every day, Monday through Saturday:

00 14 * * 1-6 sh /home/oracledb/Desktop/Nimai/export.sh >> /dev/null

To edit the crontab: crontab -e
To list the crontab: crontab -l

Note:
00 defines the minute
14 defines the hour (2 PM)
* means every day of the month
* means every month
1-6 defines Monday to Saturday (Sunday = 0)

Scheduling explained
As you can see, there are five fields (shown above as stars). The fields represent different date parts in the following order:
minute (from 0 to 59) 
hour (from 0 to 23)
day of month (from 1 to 31)
month (from 1 to 12)
day of week (from 0 to 6) (0=Sunday)

Execute every minute
An asterisk in a field means "every". Maybe that's a bit unclear, so let's look at an example:

* * * * * sh /bin/execute/this/script.sh

They are all asterisks! So this means execute /bin/execute/this/script.sh:

every minute of every hour

of every day of the month

of every month

and every day in the week.

In short: This script is being executed every minute. Without exception.

Execute every Friday 1AM

So if we want to schedule the script to run at 1AM every Friday, we would need the following cronjob:

0 1 * * 5 sh /bin/execute/this/script.sh

Get it? The script is now being executed when the system clock hits:

minute: 0

of hour: 1

of day of month: * (every day of month)

of month: * (every month)

and weekday: 5 (=Friday)

Execute on workdays 1AM

So if we want to schedule the script to Monday till Friday at 1 AM, we would need the following cronjob:

0 1 * * 1-5 sh /bin/execute/this/script.sh

Get it? The script is now being executed when the system clock hits:

minute: 0

of hour: 1

of day of month: * (every day of month)

of month: * (every month)

and weekday: 1-5 (=Monday til Friday)

Execute at 10 minutes past every hour on the 1st of every month

Here's another one, just for practice:

10 * 1 * * sh /bin/execute/this/script.sh

Fair enough, it takes some getting used to, but it offers great flexibility.

Neat scheduling tricks

What if you want to run something every 10 minutes? You could do this:

0,10,20,30,40,50 * * * * sh /bin/execute/this/script.sh

But crontab allows you to do this as well:

*/10 * * * * sh /bin/execute/this/script.sh
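The two lines above really are equivalent; the */10 step form is just shorthand for the explicit comma list. A quick way to see that, using seq to generate the list:

```shell
# Expand the */10 minute step into the explicit list it abbreviates
explicit="0,10,20,30,40,50"
stepped=$(seq -s, 0 10 59)   # 0, then every 10th value up to 59
echo "$stepped"              # prints 0,10,20,30,40,50
[ "$stepped" = "$explicit" ] && echo "equivalent"
```

The same shorthand works in any of the five fields, e.g. `0 */4 * * *` for every four hours.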

Special words

Instead of the five fields, you can put one of these keywords at the start of the line:

@reboot     Run once, at startup

@yearly     Run once  a year     "0 0 1 1 *"

@annually   (same as  @yearly)

@monthly    Run once  a month    "0 0 1 * *"

@weekly     Run once  a week     "0 0 * * 0"

@daily      Run once  a day      "0 0 * * *"

@midnight   (same as  @daily)

@hourly     Run once  an hour    "0 * * * *"

Leave the rest of the fields empty, so this would be a valid crontab line:

@daily sh /bin/execute/this/script.sh

Mailing the output
To mail the output of a job to yourself, pipe it to mail:

*/10 * * * * sh /bin/execute/this/script.sh | mail -s "Cronjob output" yourname@yourdomain.com

Trashing the output

* * * * * sh /apps/oracle/scripts/cloning.sh >> /dev/null

This will forward the output to this logfile instead:

* * * * * sh /apps/oracle/scripts/cloning.sh >> /apps/oracle/scripts/file_name.log
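Note that `>> /dev/null` and the logfile variant capture only standard output; error messages still go to stderr and end up in cron's mail. Appending `2>&1` keeps them in the same place. A small demo with stand-in paths (the real cloning.sh is not touched):

```shell
# Stand-in script that writes to both stdout and stderr
cat > /tmp/cloning_demo.sh <<'EOF'
echo "backup ok"
echo "backup warning" >&2
EOF
# 2>&1 redirects stderr into the same logfile as stdout
sh /tmp/cloning_demo.sh >> /tmp/cloning_demo.log 2>&1
```

In a crontab line this looks like: `* * * * * sh /apps/oracle/scripts/cloning.sh >> /apps/oracle/scripts/file_name.log 2>&1`.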


Manual upgrade from 9i to 10g Release 2

This procedure describes the steps necessary to manually upgrade a database from 9.2.x to 10.2.x. It assumes that you have already installed 10g onto the server. Of course, it is essential that you take a backup of your database before attempting any upgrade.

1. Compile any invalid objects

@?/rdbms/admin/utlrp.sql

2. Create a sysaux tablespace

Having a sysaux tablespace is a requirement in 10g. So, if you haven’t already got one, create one now.

create tablespace sysaux

datafile '/apps/oracledb/oradata/PRODDB/datafile/sysaux.dbf' size 512M

autoextend on next 20M

extent management local

segment space management auto

/

3. Run utlu102i.sql

This utility script checks that the database is ready to be upgraded to 10g. It also identifies any actions that need to be taken. The script is located in the 10g oracle home, so you will need to specify the full path to it.

@/u01/app/oracle/product/10.2.0/db_1/rdbms/admin/utlu102i.sql

Review the output and make any necessary alterations. Make a note of how many invalid objects there are.

4. Shut the database down with either normal or immediate

shutdown immediate

5. Copy the spfile (or pfile) and the password file from the existing home to the 10g one.

cp $ORACLE_HOME/dbs/*$ORACLE_SID* $NEW_ORACLE_HOME/dbs/

6. Edit oratab

Alter /etc/oratab (or /var/opt/oracle/oratab) to point to the 10g home. Once done, rerun oraenv to bring the alteration into effect.
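oraenv resolves the Oracle home from the SID:HOME:FLAG lines in oratab. A sketch of what that lookup does, with an example entry whose path is illustrative only:

```shell
# Stand-in oratab with one example entry (format: SID:ORACLE_HOME:startup-flag)
cat > /tmp/oratab <<'EOF'
PRODDB:/u01/app/oracle/product/10.2.0/db_1:Y
EOF
# Extract the home for a given SID, the way oraenv does internally
home=$(awk -F: -v sid=PRODDB '$1 == sid {print $2}' /tmp/oratab)
echo "$home"
```

After editing the real oratab, the extracted home should be the new 10g home, not the 9i one.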

7. Upgrade the database

sqlplus "/ as sysdba"

startup upgrade

This next bit is the upgrade itself. It takes roughly half an hour to complete. Spool the output to a file so that you can review it afterward.

@?/rdbms/admin/catupgrd.sql

8. Recompile any invalid objects

@?/rdbms/admin/utlrp.sql

Compare the number of invalid objects with the number noted in step 3. It should hopefully be the same or less.

9. Then check the status of the upgrade

@?/rdbms/admin/utlu102s.sql

10. Alter or remove initialisation parameters

Temporarily creating a pfile is the easiest way.

create pfile from spfile;

shutdown immediate

vi $ORACLE_HOME/dbs/init$ORACLE_SID.ora

Alter/remove parameters identified in step 9. Set compatible to 10.2.0.0.0

startup

create spfile from pfile;

shutdown immediate

startup
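The pfile edit can also be scripted. A sketch that bumps compatible in a copy of a stand-in pfile (the file contents below are illustrative; edit your real init$ORACLE_SID.ora the same way):

```shell
# Stand-in pfile with a pre-upgrade compatible setting
cat > /tmp/initPRODDB.ora <<'EOF'
*.compatible='9.2.0'
*.sga_target=600M
EOF
# Rewrite the compatible line to the 10g value into a new file
sed "s/^\*\.compatible=.*/*.compatible='10.2.0.0.0'/" /tmp/initPRODDB.ora > /tmp/initPRODDB.ora.new
grep compatible /tmp/initPRODDB.ora.new
```

Writing to a new file and diffing it against the original is safer than editing in place.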

It's done.



Restore a lost datafile with RMAN

Assume that one of our datafiles has been deleted, e.g. system.dbf.

Log in as sysdba:
C:\>sqlplus "/ as sysdba"
SQL*Plus: Release 10.2.0.4.0 - Production on Sat Mar 3 13:11:08 2012
Copyright (c) 1982, 2007, Oracle. All Rights Reserved.
Connected to an idle instance.
SQL> select open_mode from v$database;
select open_mode from v$database
*
ERROR at line 1:
ORA-01034: ORACLE not available
SQL> shut abort;
ORACLE instance shut down.
SQL> startup nomount;
ORACLE instance started.
Total System Global Area 608174080 bytes
Fixed Size 2067624 bytes
Variable Size 197133144 bytes
Database Buffers 402653184 bytes
Redo Buffers 6320128 bytes
At this point, start RMAN in a new command prompt.

In RMAN, do the following:
C:\>rman target /
connected to target database: PRIM (not mounted)
Set the DBID, as it is necessary for recovery. We have to set the database ID recorded for the old database:
RMAN> set dbid 19548658;
executing command: SET DBID

Now, to restore the datafile, we must put our database in mount mode so that it can recognize the controlfile and restore our system.dbf from backup:
SQL> alter database mount;
Database altered.


Now exit RMAN and connect again while the database is in mount mode (you can also mount it from within RMAN).
Note: the restore may fail because the DBID we set earlier was stale, so it is better to exit the RMAN executable, mount the database from the SQL prompt, and then reconnect.

C:\Users\Administrator>rman target /
Recovery Manager: Release 10.2.0.4.0 - Production on Sat Mar 3 13:15:40 2012
Copyright (c) 1982, 2007, Oracle. All rights reserved.
connected to target database: PRIM (DBID=4064098091, not open)

Now we can restore our datafile. In this database the datafiles are:
datafile 1 - system
datafile 2 - undo
datafile 3 - sysaux
datafile 4 - users
datafile 5 - bcp
datafile 6 - rman


This restore is from backup; by itself it is an incomplete restore until recovery is applied.


RMAN> restore datafile 1;
Starting restore at 03-MAR-12
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to F:\ORACLE\PRODUCT\10.2.0\ORADATA\PRIM\DATAFILE\O1_MF_SYSTEM_7JCDLWHM_.DBF
channel ORA_DISK_1: reading from backup piece F:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\PRIM\BACKUPSET\2012_03_01\O1_MF_NNNDF_TAG20120301T195318_7NZ1LPQ4_.BKP
channel ORA_DISK_1: restored backup piece 1
piece handle=F:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\PRIM\BACKUPSET\2012_03_01\O1_MF_NNNDF_TAG20120301T195318_7NZ1LPQ4_.BKP tag=TAG20120301T195318
channel ORA_DISK_1: restore complete, elapsed time: 00:00:25

Finished restore at 03-MAR-12


After this, still in RMAN, we have to recover the datafile from the log files, since the changes made after the backup are recorded there.

This recovery is from our archive logs (this is where we need archive logs to recover our database).


RMAN> recover datafile 1;
Starting recover at 03-MAR-12
using channel ORA_DISK_1
starting media recovery
archive log thread 1 sequence 84 is already on disk as file D:\ARCHIVE\ARCH_1_84_772658750.ARC
archive log thread 1 sequence 85 is already on disk as file D:\ARCHIVE\ARCH_1_85_772658750.ARC
archive log thread 1 sequence 1 is already on disk as file D:\ARCHIVE\ARCH_1_1_776890469.ARC
archive log thread 1 sequence 2 is already on disk as file D:\ARCHIVE\ARCH_1_2_776890469.ARC
archive log filename=D:\ARCHIVE\ARCH_1_84_772658750.ARC thread=1 sequence=84
archive log filename=D:\ARCHIVE\ARCH_1_85_772658750.ARC thread=1 sequence=85
media recovery complete, elapsed time: 00:00:05
Finished recover at 03-MAR-12


Note: if we try to open the database without complete recovery, it raises an error.


SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-01190: control file or data file 1 is from before the last RESETLOGS
ORA-01110: data file 2:
'F:\ORACLE\PRODUCT\10.2.0\ORADATA\PRIM\DATAFILE\O1_MF_SYSTEM_7JCDLWM6_.DBF'


But after recovery:

SQL> alter database open;
Database altered.
The database is now open, and we have our system datafile back in our datafile folder.
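For repeatability, the restore/recover steps above can be collected into an RMAN command file and replayed with `rman target / @/tmp/restore_df1.rcv` (the file name and path are my own choice, not from the original post):

```shell
# Write the restore/recover sequence from this post into an RMAN command file
cat > /tmp/restore_df1.rcv <<'EOF'
restore datafile 1;
recover datafile 1;
sql 'alter database open';
EOF
wc -l < /tmp/restore_df1.rcv   # three commands, one per line
```

Keeping the commands in a file also gives you a record of exactly what was run during the recovery.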

Thanks & Regards

Nimai Karmakar


Today we will make a physical standby of a primary database.

Environment: Windows Server 2008

Oracle 10g

Although this installation is on Windows, it is much the same on the Linux version; there are only small differences, which I will discuss in my next blog.

1. Install the same version of Oracle on both servers.
2. Run dbca (create a database on both machines).
3. Run netca (a listener on each machine); the listener should be running on port 1521.

On the primary server:

Now I am creating a tablespace for my data. My tablespace here is BCP, with datafile BCP.dbf; the user of the tablespace is BCP with password BCP.

a. Create tablespace BCP datafile 'O:\oracle\product\10.2.0\oradata\PRODUDB\DATAFILE\BCP.dbf' size 50M autoextend on next 10M online permanent;
b. Create user bcp identified by bcp default tablespace BCP;
c. Grant dba to bcp; (this grants the DBA role to bcp; grant roles as per your requirement)
4. Then shut down Oracle and import the data (not necessary everywhere).
a. C:\>imp bcp/bcp file=s:\bcp.dmp full=yes
b. This will import the full dump into bcp.
5. Now connect to the database as bcp, e.g. conn bcp/password
6. Check the database name; it should show PRODUDB:

SQL> select name from v$database;

7. If it shows only the tables related to bcp, then OK.
(This is done because we are saving all our data in the bcp schema with user=bcp and password=bcp.)

Now for the pfile configuration. This is done to make both machines understand which is primary and which is standby, and what should happen in a failover or switchover.
In the primary database (BCP):

Enable force logging, which forces redo logging so that changes can be shipped to the standby.
Sql> alter database enable force logging;
In mount mode:
Sql> create pfile='f:\pfile.ora' from spfile;
File created.
Now we will configure the pfile of the primary server's database BCP, editing these parameters:

*.core_dump_dest='F:\oracle\product\10.2.0\admin\BCP\cdump'
*.DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\BCPSTAN\DATAFILE','F:\oracle\product\10.2.0\oradata\BCP\DATAFILE'

*.db_unique_name='BCP'

*.dispatchers='(PROTOCOL=TCP) (SERVICE=BCPXDB)'
*.fal_client='BCP'
*.fal_server='BCPSTAN'

*.job_queue_processes=10
*.local_listener='LISTENER_PRIM'
*.log_archive_config='DG_CONFIG=(BCP,BCPSTAN)'
*.log_archive_dest_1='LOCATION=D:\Archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=BCP'
*.log_archive_dest_2='SERVICE=BCPSTAN LGWR ASYNC VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=BCPSTAN'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_format='arch_%t_%s_%r.arc'
*.log_archive_max_processes=2
*.log_archive_min_succeed_dest=1
BCP.log_archive_trace=0
*.log_archive_trace=15
*.LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\BCPSTAN\ONLINELOG','F:\oracle\product\10.2.0\oradata\BCP\ONLINELOG','E:\oracle\product\10.2.0\flash_recovery_area\BCPSTAN\ONLINELOG','F:\oracle\product\10.2.0\flash_recovery_area\BCP\ONLINELOG'
BCP.standby_archive_dest='D:\Archive'
*.Standby_file_management='AUTO'

Save the file as it is.
Now create a pfile for the to-be standby server and standby database BCPSTAN:
Sql> create pfile='f:\pfile.ora' from spfile;

Similarly, edit these parameters in the standby pfile:

*.control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\BCPSTAN\CONTROLFILE\STANDBY01.CTL','E:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\BCPSTAN\CONTROLFILE\STANDBY02.CTL'

*.db_file_name_convert='F:\oracle\product\10.2.0\oradata\BCP\DATAFILE','E:\oracle\product\10.2.0\oradata\BCPSTAN\DATAFILE'
*.db_name='BCP'

*.db_unique_name='BCPSTAN'

*.dispatchers='(PROTOCOL=TCP) (SERVICE=BCPSTANXDB)'
*.fal_client='BCPSTAN'
*.fal_server='BCP'
*.log_archive_config='DG_CONFIG=(BCPSTAN,BCP)'
*.log_archive_dest_1='LOCATION=F:\Archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=BCPSTAN'
*.log_archive_dest_2='SERVICE=BCP LGWR ASYNC VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=BCP'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_format='arch_%t_%s_%r.arc'
*.log_archive_max_processes=2
*.log_archive_min_succeed_dest=1
*.log_archive_trace=0
*.log_file_name_convert='F:\oracle\product\10.2.0\oradata\BCP\ONLINELOG','E:\oracle\product\10.2.0\oradata\BCPSTAN\ONLINELOG'
*.Standby_file_management='AUTO'

*.standby_archive_dest='F:\Archive'

Save it as it is.
Come back to the primary database BCP.
Enable archiving on the primary:
Sql>shut immediate;
Sql>startup mount;
Sql>alter database archivelog;
Sql>alter database open;
Now check it.
Sql>archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination db_recovery_file_dest
Oldest online log sequence 1
Next log sequence to archive 2
Current log sequence 2
Yes, our database is in archive log mode.

Now we will create the standby controlfile, so that the standby can recognize the locations of the primary database's datafiles and redo logs, since we have set db_name='BCP' in the standby pfile.
Sql> shut immediate;
Sql> startup mount;
Sql> alter database create standby controlfile as 'f:\standby01.ctl';
Sql> alter database open;
Now we will create standby redo logs for the standby database, so that while LGWR is writing to the primary's online logs it also writes to the standby logs; at the time of failover or switchover, the current data can then be recovered from the standby logs.
Mind that the standby log size must be the same as the primary log size.

To check the size of the primary database redo logs:
Sql> SELECT max(group#) from v$logfile;

Sql> SELECT bytes from v$log;
Here the size is 50M, so we will create the standby logs with the same size.
Now we will create the standby log files.
Sql> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
Sql> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
Sql> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;
Done, we are halfway there.
Now we have to update the tnsnames.ora file for both listeners, since this is the network link between the primary DB (BCP) and the standby DB (BCPSTAN). So it is mandatory that each listener can resolve the other.
Copy this content into the tnsnames.ora file on both servers:

BCPSTAN =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.*.*)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = BCPSTAN)
)
)
BCP =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.*.*)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = BCP)
)
)
LISTENER_STAN =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.*.*)(PORT = 1521))
LISTENER_PRIM =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.*.*)(PORT = 1521))

As you can see, my standby database BCPSTAN and my primary database BCP each have a 192.168.*.* host, and both listeners are running on port 1521.
Both the bcp and bcpstan aliases should ping successfully on both servers:
C:\>tnsping bcp
C:\>tnsping bcpstan
The main thing when creating the DB and DR pair is to make sure the TNS services of both systems are running; if they are not, the logs will not ship and errors will keep appearing. If you see any error, check it, but first make sure the firewall on both machines is disabled.
C:\Users\Administrator>tnsping BCPSTAN
TNS Ping Utility for 64-bit Windows: Version 10.2.0.4.0 - Production on 20-MAR-2012 02:56:06
Copyright (c) 1997, 2007, Oracle. All rights reserved.
Used parameter files:
F:\oracle\product\10.2.0\db_1\network\admin\sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.*.*)(PORT = 1521))) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = BCPSTAN)))
OK (0 msec)

C:\Users\Administrator>tnsping BCP
TNS Ping Utility for 64-bit Windows: Version 10.2.0.4.0 - Production on 20-MAR-2012 02:58:07
Copyright (c) 1997, 2007, Oracle. All rights reserved.

Used parameter files:
F:\oracle\product\10.2.0\db_1\network\admin\sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.*.*)(PORT = 1521))) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = BCP)))
OK (0 msec)
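Before reaching for tnsping, a quick sanity check that both aliases are actually defined in tnsnames.ora can save time. A sketch with a stand-in file path and trimmed-down entries (the real entries are the full ones above):

```shell
# Stand-in tnsnames.ora with abbreviated entries for the two aliases
cat > /tmp/tnsnames.ora <<'EOF'
BCPSTAN =
  (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.2)(PORT = 1521)))
BCP =
  (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.1)(PORT = 1521)))
EOF
# Count alias definitions; both BCP and BCPSTAN should be present
grep -Ec '^(BCP|BCPSTAN) =' /tmp/tnsnames.ora
```

If the count is not 2 on a given server, fix that server's tnsnames.ora before debugging anything else.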

Run both commands on both servers.
At this point, back up the whole database (RMAN backup, cold backup, or dump export).
Copy the standby controlfile from the primary to the standby locations:
At - E:\oracle\product\10.2.0\oradata\BCPSTAN\CONTROLFILE\standby01.ctl
And - E:\oracle\product\10.2.0\flash_recovery_area\BCPSTAN\CONTROLFILE\standby02.ctl
Now we have to copy all the following files from the primary to the standby server:
1. Datafiles (system, sysaux, temp file, undo, bcp, users), to the same locations on the standby as on the primary.
2. All the redo logs (primary and standby), to the same locations on the standby as on the primary.
3. The password file, to the same location on the standby as on the primary; rename it to PWDBCPSTAN and delete/rename the old one.
Now we are done with the file copies.
On the primary database, shut the database down and start it in nomount mode with the new pfile:
SQL> shu immediate;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
SQL> startup nomount pfile=’f:\pfile.ora’
ORACLE instance started.
Total System Global Area 608174080 bytes
Fixed Size 2067624 bytes
Variable Size 163578712 bytes
Database Buffers 436207616 bytes
Redo Buffers 6320128 bytes
SQL> alter database mount;

Database altered.
SQL>
Do the same in the standby database BCPSTAN:
SQL> shu immediate;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.

SQL> startup nomount pfile=’f:\pfilestan.ora’
ORACLE instance started.
Total System Global Area 608174080 bytes
Fixed Size 2067624 bytes
Variable Size 163578712 bytes
Database Buffers 436207616 bytes
Redo Buffers 6320128 bytes
SQL> alter database mount;
Database altered.
SQL>
Both databases are now in mount mode. We can open the primary database, but we cannot open the standby, as it is now a standby database.
Now, to make BCPSTAN a standby with real-time log apply, run the following commands on the standby database:
SQL> STARTUP MOUNT;   (the standby database should be in mount mode)
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

To test whether the logs are shipping to the standby, in the primary database:
Sql>alter system switch logfile;

Sql>/
Sql>/
Sql>/
Sql>select * from v$log;

This sequence# should be the same on the standby database.
You can also check the standby database's role and open mode:
Sql> select open_mode, database_role from v$database;
If it shows MOUNTED as open_mode and PHYSICAL STANDBY as database_role, then it's OK.
We have done our job successfully.
Just check the protection_mode from v$database:
Sql> select protection_mode from v$database;
Hopefully it will show maximum availability.
The database with disaster recovery is ready.


Schema refresh is a regular job for any DBA, especially during migration projects, so today I decided to post about a schema refresh.
Assume here that a schema (SCOTT) is refreshed from a source (PRODDB) to a target (TEST) on an Oracle 10g server.

SQL> select * from v$version;

On the source side (PRODDB):
Create a directory or use an existing one (e.g. data_pump_dir), and grant read and write permissions on this directory to user SYSTEM.

SQL> grant read, write on directory data_pump_dir to system;
Grant succeeded.

Step 1: Exporting the data from PRODDB (source)

$ vi expdp_refresh_schema.sh

expdp system/****@sourcehostname dumpfile=expdpschema.dmp directory=data_pump_dir logfile=export.log schemas=scott

$ nohup sh expdp_refresh_schema.sh > refresh_schema.out &

Step 2: Copying the dump file from source to target

For copying dump files from one server to another, we can use either WinSCP (a graphical tool for copying files between Windows and Linux and vice versa), FTP, or scp.

$ scp expdpschema.dmp system@TargetHostname:/home/oracle/datapump

Here I am copying the dump file from the source to the target location /home/oracle/datapump.
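After the copy, it is worth confirming that the dump file arrived intact by comparing checksums on both ends. A local sketch with a stand-in file (on the real servers, run md5sum on each side and compare the values):

```shell
# Stand-in dump file and copy, so the comparison can run locally
echo "pretend dump contents" > /tmp/expdpschema.dmp
cp /tmp/expdpschema.dmp /tmp/expdpschema_copy.dmp
# Checksum both files; matching sums mean the copy is byte-identical
src=$(md5sum /tmp/expdpschema.dmp | awk '{print $1}')
dst=$(md5sum /tmp/expdpschema_copy.dmp | awk '{print $1}')
[ "$src" = "$dst" ] && echo "checksums match"
```

A truncated transfer is a common cause of an import failing halfway through, and this check catches it early.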

Step 3: Importing data from the dump file into the target database

Before importing the dump file into the target (TEST), make sure you delete or back up all the objects in that schema; to clear all objects from a particular schema, run a cleanup script first.

$ impdp system/****@targethostname dumpfile=expdpschema.dmp directory=data_pump_dir logfile=import.log remap_schema=scott:newscott

Step 4: Verify the target database objects against the source

SQL> select count(*) from dba_objects where owner='NEWSCOTT';
SQL> select count(*) from dba_tables where owner='NEWSCOTT';

The above results should be the same as those of the source SCOTT schema.

