Archive for the ‘Oracle 11G Administration’ Category

Hi friends,


Over the last year I have worked a lot on export and import, for databases ranging from small to huge, and for everything from specific objects to whole schemas to query-based backups.


Here I have listed some of my syntax, which you can note down for your reference.

1. Export objects from a specific schema which match specific criteria.

expdp system/oracle directory=DUMPDIR1 dumpfile=BG_objects_%u.dmp logfile=BG_objects.log schemas=<schema_name> include=TABLE:\"LIKE \'<pattern>%\'\"

2. Import objects into a schema which were owned by another schema, excluding some specific objects.

impdp system/oracle directory=DUMPDIR1 dumpfile=BG_objects_01.dmp logfile=BG_objects_imp.log EXCLUDE=TABLE:\"LIKE \'%POSTED%\'\" table_exists_action=REPLACE

3. Import objects from one schema to another schema where the tablespaces for the two schemas are different, and across versions (e.g. from 10g to 11g).

impdp system/oracle directory=DUMP_BACKUP dumpfile=dumpfile_17032015.dmp logfile=logfile_imp.log remap_schema=<from_schema>:<to_schema>
remap_tablespace=<from_tablespace>:<to_tablespace> table_exists_action=replace EXCLUDE=STATISTICS;

4. Export specific objects from the schema.

expdp username directory=BCKUP_DUMP dumpfile=schema_pk_objects_%U.dmp logfile=schema_pk_objects.log include=TABLE:\"LIKE \'%PK_%\'\";

5. Export only one table.

expdp oracle directory=BKUP_DUMP dumpfile=schema_ac_objects_%U.dmp logfile=schema_ac_objects.log tables=<Table_name>;

6. Export a big database, splitting the dump file into pieces of a fixed size to fit the space available on the disks.

expdp SYSTEM/oracle dumpfile=DUMP1:SCHEMA_%u.dmp dumpfile=DUMP2:SCHEMA_%u.dmp filesize=96G logfile=DUMP1:schema.log full=y exclude=statistics;

7. Import a full dump which contains multiple users in different tablespaces into a single common user in a single common tablespace, on the same database or on a different one.

impdp system/oracle directory=dump dumpfile=EPIXDB_01.dmp,EPIXDB_02.dmp full=y logfile=epixdb_imp.log remap_schema=<Schema1>:<schema_common>
remap_schema=<Schema2>:<schema_common> remap_schema=<Schema3>:<schema_common> remap_schema=<Schema4>:<schema_common> remap_schema=<Schema5>:<schema_common>
remap_schema=<Schema6>:<schema_common> remap_schema=<Schema7>:<schema_common> remap_schema=<Schema8>:<schema_common> remap_schema=<Schema9>:<schema_common> remap_schema=<Schema10>:<schema_common>
remap_schema=<Schema11>:<schema_common> remap_schema=<Schema12>:<schema_common> remap_tablespace=<tablespace1>:<common_tablespace> remap_tablespace=<tablespace2>:<common_tablespace>
remap_tablespace=<tablespace3>:<common_tablespace> table_exists_action=replace;

8. Export an object as of a specific point in time using FLASHBACK_TIME, which gives a consistent point-in-time export (note: enough undo/flashback data must be retained to cover that time).

expdp username directory=BKUP_DUMP dumpfile=schema_ac_objects_%U.dmp logfile=schema_ac_objects.log tables=TABLE_NAME
FLASHBACK_TIME=\"to_timestamp\(to_char\(sysdate,\'yyyy-mm-dd hh24:mi:ss\'\),\'yyyy-mm-dd hh24:mi:ss\'\)\";

9. Traditional export (the exp utility) to take a backup of specific data.

exp user/pass file=exp.dmp log=exp.log TABLES=test query="""where rownum < 101""";

exp uwclass/uwclass tables=scott.emp query=\"WHERE job=\'MANAGER\' AND sal \> 50000\" STATISTICS=NONE;

10. Export of multiple tables from multiple schemas.

expdp system/password directory=BACKUP dumpfile=schema_old_table%u.dmp logfile=schema_old_tables.log schemas='SCHEMA1','SCHEMA2','SCHEMA3','SCHEMA4','SCHEMA5','SCHEMA6'

11. Export from multiple schemas where the table names match a given pattern.

expdp system/U6dba#15@projdb directory=backup dumpfile=projdbdmp_%u.dmp logfile=projdbdmp_full.log schemas='SCHEMA1','SCHEMA2','SCHEMA3','SCHEMA4','SCHEMA5','SCHEMA6' INCLUDE=TABLE:\"IN \(SELECT table_name FROM dba_tables WHERE table_name LIKE \'CC_%\'\)\";
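All the backslash escaping above (\" and \') exists only to get quotes past the shell. A parameter file side-steps that entirely; here is a minimal sketch echoing example 11 (the file name and parameter values are illustrative, not from a real system):

```shell
# Write the Data Pump parameters to a file instead of the command line,
# so no shell escaping is needed (names and values are illustrative):
cat > exp_cc_tables.par <<'EOF'
directory=backup
dumpfile=projdbdmp_%u.dmp
logfile=projdbdmp_full.log
schemas=SCHEMA1,SCHEMA2
include=TABLE:"IN (SELECT table_name FROM dba_tables WHERE table_name LIKE 'CC_%')"
EOF

# Against a live database you would then run (reference only):
# expdp system parfile=exp_cc_tables.par

grep -c '=' exp_cc_tables.par   # prints 5: one per parameter line
```

Because the parfile is read by expdp itself rather than the shell, the quotes in the INCLUDE clause can be written plainly.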


That's it.


The above is a snapshot 🙂 of my huge collection of syntax for the most common worst-case scenarios. Still, if you are facing any other critical situation, please let me know in the comments. It would be an honor to help.


Hope it helps.


With Regards

Nimai Karmakar




Today Hemant Sir has posted a good link that presents some basics of Oracle high availability, with demonstrations.


Please click the link…


oracle High availability demos…


Hope this helps…


Hello friends,

Over the last few days I was doing some research on RAC for my new project on Extended RAC, and after doing the R&D I found some good sites from which I gained good knowledge of RAC. So today I am going to discuss some of the major topics that are needed for Oracle RAC, like:

What is RAC?

What are voting/quorum disks?

What is the RAC architecture?

What is a cluster?

What commands should I follow for basic RAC? And so on. These were the questions that came to my mind before starting. If you have any questions or suggestions in mind, then I urge you to please post them, for me and for yourself.

Few of the below topics are also helpful for Oracle 11gR2 Extended RAC.

First and foremost, what is RAC, and why use RAC?

When more than one instance is used to access the same data (database), we need to put those instances in a cluster. The clusterware ensures that the management of data is done correctly (for instance, that the same data is not modified at the same time by two users, even if the users access the database through two or more instances). The clusterware can be bought from a vendor other than the database vendor; Oracle offers a solution for the clusterware as well, in which case we speak about Oracle Clusterware. When the Oracle database is installed on a clusterware (Oracle or not) we speak about Oracle RAC, or Oracle Real Application Clusters. Oracle RAC and Oracle Clusterware are not necessarily the same thing; the Oracle RAC installation includes the Oracle Clusterware installation.

What are the major differences between Oracle 11g R1 and Oracle 11g R2?

Oracle 11g R1 RAC :

  • ADDM for RAC
  • ADR command-line tool – the Oracle Automatic Diagnostic Repository (ADR) has a new command-line interface named ADRCI, the ADR Command Interface
  • ADRCI can be used to access the 11g monitoring logs
  • Optimized RAC cache fusion protocols
  • Oracle 11g RAC grid provisioning

Oracle 11g R2 RAC :

In addition to the 11g R1 features above, 11g R2 adds the following:

  • OCR and voting disks can be stored on ASM
  • ASMCA (ASM Configuration Assistant)
  • SCAN (Single Client Access Name)
  • Global AWR
  • Server pooling
  • By default, LOAD_BALANCE is ON
  • GSD, gsdctl utility introduced
  • RAC One Node can be done
  • HAS (High Availability Services), Oracle Restart
  • 11gR2 Grid Infrastructure redundant interconnect and ora.cluster_interconnect.haip
  • Grid Plug and Play
  • ASM: fast start mirror resync
  • ASM: disk check: "alter diskgroup diskgroup_name check"
  • ASM: diskgroup can be mounted as restricted
  • ASM: can use the force option to mount/drop a disk group (mount by taking an unavailable disk offline if quorum exists)
  • SYSASM role introduced
  • 11g ASM: can keep OCR and voting disks in ASM, SYSASM role, variable extent size, md_backup/md_restore, raw device concept is obsolete, can rename a diskgroup
  • Hot patching
  • Oracle 11g RAC parallel upgrades; Oracle 11g has rolling upgrade features
  • Can have the same home for cluster resources and ASM
  • ASM metadata backup can be done

What is Voting/quorum disk ?

Voting Disk: – Manages cluster membership by way of a health check and arbitrates cluster ownership among the instances in case of network failures. RAC uses the voting disk to determine which instances are members of a cluster. The voting disk must reside on shared disk. For high availability, Oracle recommends that you have multiple voting disks. The Oracle Clusterware enables multiple voting disks.

  1. All the nodes ping the voting disk;
  2. If the cluster sees that a node cannot ping the voting disk, it considers that the node is no longer valid and evicts it from the cluster;
  3. This file is very important for the cluster and must be mirrored.

!!!  One file system must have 560 MB of available space for the primary OCR and a voting disk.   !!!

Why are voting disks always recommended in odd numbers?

Oracle recommends configuring an odd number of voting disks. Each node in the cluster can survive only if it can access more than half of the voting disks; a node must be able to access strictly more than half of the voting disks at any time. So, in order to tolerate a failure of n voting disks, there must be at least 2n+1 configured (n=1 means 3 voting disks). You can configure up to 31 voting disks, providing protection against 15 simultaneous disk failures.
If you lose half or more of your voting disks, then nodes get evicted from the cluster, or nodes kick themselves out of the cluster. This does not threaten database corruption. Alternatively, you can use external redundancy, which means you provide redundancy at the storage level using RAID.
For this reason, when using Oracle for the redundancy of your voting disks, Oracle recommends that customers use 3 or more voting disks.
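The 2n+1 rule above can be sketched as simple arithmetic (a toy illustration, not an Oracle tool):

```shell
# To tolerate n voting-disk failures you need 2n+1 disks, because a node
# must always see strictly more than half of them (a majority).
n=1
required=$((2 * n + 1))
echo "disks required to tolerate $n failure(s): $required"   # prints 3

# After losing n disks, a majority must survive: 2*(required-n) > required
surviving=$((required - n))
if [ $((2 * surviving)) -gt "$required" ]; then
  echo "quorum held"   # 2 of 3 disks is still a majority
fi
```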

What is the OCR / Oracle Cluster Registry file, and what is it used for?

Oracle Cluster Registry (OCR) :- Maintains cluster configuration information as well as configuration information about any cluster database within the cluster. The OCR also manages information about processes that Oracle Clusterware controls. The OCR stores configuration information in a series of key-value pairs within a directory tree structure. The OCR must reside on shared disk that is accessible by all of the nodes in your cluster. The Oracle Clusterware can multiplex the OCR and Oracle recommends that you use this feature to ensure cluster high availability.

This is the central repository for the Oracle cluster: there we find all the information about the cluster in real-time;

OCR relies on a distributed shared-cache architecture for optimizing queries against the cluster repository. Each node in the cluster maintains an in-memory copy of OCR, along with an OCR process that accesses its OCR cache.

When OCR client applications need to update the OCR, they communicate through their local OCR process with the OCR process that is performing input/output (I/O) for writing to the repository on disk.

The OCR client applications are Oracle Universal Installer (OUI), SRVCTL, Enterprise Manager (EM), Database Configuration Assistant (DBCA), Database Upgrade Assistant (DBUA), NetCA and Virtual Internet Protocol Configuration Assistant (VIPCA). OCR also maintains dependency and status information for application resources defined within CRS, specifically databases, instances, services and node applications.

Note:- The name of the configuration file is ocr.loc and the configuration file variable is ocrconfig.loc

  • CRS updates the OCR with information about node failure or reconfiguration;
  • CSS updates the OCR when a node is added or deleted;
  • NetCA, DBCA and SRVCTL update the OCR with services information;
  • This is a binary file and cannot be edited;
  • The OCR information is cached on each node;
  • Only one node (the master node) can update the OCR file => the master node has the OCR cache up-to-date in real time;
  • The OCR file is automatically backed up in the OCR location every 4 hours:

cd $GRID_HOME/cdata/<cluster name>

backup00.ocr backup01.ocr backup02.ocr day.ocr day_.ocr week.ocr

  • The OCR files are backed up for a week and overwritten in a circular manner;
  • Because the OCR is a key component of the Oracle cluster, the OCR file must be mirrored;
  • The OCR file can be exported, imported with ocrconfig command;

Note:- You can replace a failed OCR online, and you can update the OCR through supported APIs such as Enterprise Manager, the Server Control Utility (SRVCTL) or the Database Configuration Assistant (DBCA).
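The export/import and backup operations mentioned above are driven by the ocrcheck and ocrconfig utilities. The sketch below is reference only: it assumes root on a live 11g cluster node, and the /tmp path is just an example.

```shell
# Reference only -- these require root on a cluster node and will not
# run outside a RAC environment; the export path is illustrative.
ocrcheck                          # verify OCR integrity and show its location
ocrconfig -showbackup             # list the automatic backups described above
ocrconfig -manualbackup           # take an on-demand physical backup
ocrconfig -export /tmp/ocr.exp    # logical export of the OCR contents
# ocrconfig -import /tmp/ocr.exp  # restore from a logical export
```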

Below are some basic points which you must be aware of.

Where are the clusterware files stored on a RAC environment?

The clusterware is installed on each node (in an Oracle home) and on the shared disks (the voting disks and the OCR file)

Where are the database software files stored on a RAC environment?

The base software is installed on each node of the cluster and the database files reside on the shared disks.

What kind of storage can we use for the shared clusterware files?

  1. OCFS (Oracle cluster file system)
  2. raw devices
  3. ASM  

What is a CFS?

A Cluster File System (CFS) is a file system that may be accessed (read and write) by all members of a cluster at the same time. This implies that all members of a cluster have the same view.

Which files can be placed on shared disk?

  1. Oracle files (controlfiles, datafiles, redologs, files described by the bfile datatype)
  2. Shared configuration files (spfile)
  3. OCR and voting disk
  4. Files created by Oracle during runtime

What is a raw device?

A raw device is a disk drive that does not yet have a file system set up. Raw devices are used for Real Application Clusters since they enable the sharing of disks.

What is a raw partition?

A raw partition is a portion of a physical disk that is accessed at the lowest possible level. A raw partition is created when an extended partition is created and logical partitions are assigned to it without any formatting. Once formatted, it is called a cooked partition.

What is CRS?

Oracle RAC 10g Release 1 introduced Oracle Cluster Ready Services (CRS), a platform-independent set of system services for cluster environments. In Release 2, Oracle has renamed this product to Oracle Clusterware.

What is the VIP used for?

It returns a dead connection immediately when its primary node fails. Without the VIP, clients have to wait around 10 minutes to receive ORA-3113: "end of file on communications channel". However, using Transparent Application Failover (TAF) can also avoid ORA-3113.

Is the SSH, RSH needed for normal RAC operations?

No. SSH or RSH is needed only for RAC installation, patch set installation and clustered database creation.

Do we have to have Oracle RDBMS on all nodes?

Each node of a cluster that is being used for a clustered database will typically have the RDBMS and RAC software loaded on it, but not actual data files (these need to be available via shared disk).

Does Real Application Clusters support heterogeneous platforms?

No. Real Application Clusters does not support heterogeneous platforms in the same cluster.

What is the Cluster Verification Utility (cluvfy)?

The Cluster Verification Utility (CVU) is a validation tool that you can use to check all the important components that need to be verified at different stages of deployment in a RAC environment.
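A few typical invocations, for illustration (node names are placeholders and a configured cluster is required, so this is reference only):

```shell
# Reference only -- cluvfy needs the Grid Infrastructure software staged
# or installed; node names below are placeholders.
cluvfy stage -pre crsinst -n node1,node2 -verbose   # before installing Clusterware
cluvfy stage -post crsinst -n all                   # after the installation
cluvfy comp nodecon -n all                          # node connectivity component check
```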

What versions of the database can I use the cluster verification utility (cluvfy) with?

The Cluster Verification Utility was released with Oracle Database 10g Release 2 but can also be used with Oracle Database 10g Release 1.

What is hangcheck timer used for ?

The hangcheck timer regularly checks the health of the system. If the system hangs or stops, the node will be restarted automatically.
There are 2 key parameters for this module:

  • hangcheck-tick: this parameter defines the period of time between checks of system health. The default value is 60 seconds; Oracle recommends setting it to 30 seconds.
  • hangcheck-margin: this defines the maximum hang delay that should be tolerated before hangcheck-timer resets the RAC node.
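On Linux these two parameters are passed to the hangcheck-timer kernel module, typically via a modprobe configuration entry. A sketch with commonly documented values follows; treat the file path and values as illustrative rather than prescriptive:

```
# /etc/modprobe.d/hangcheck-timer.conf  (path illustrative)
options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
```

After adding the entry, the module can be loaded manually with /sbin/modprobe hangcheck-timer.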

SRVCTL command (in Oracle 11gR2)

SRVCTL is used to manage the following resources (components):

Component     Abbreviation    Description
asm           asm             Oracle ASM instance
database      db              Database instance
diskgroup     dg              Oracle ASM disk group
filesystem    filesystem      Oracle ASM file system
home          home            Oracle home or Oracle Clusterware home
listener      lsnr            Oracle Net listener
service       serv            Database service
ons, eons     ons, eons       Oracle Notification Services (ONS)

Oracle entities (such as resources, resource types, and server pools) that have names beginning with ora are managed only by SRVCTL (and not by CRSCTL) unless you are directed otherwise by Oracle Support. Cluster-specific commands are generally managed by CRSCTL.


srvctl  command object options

The available commands used with SRVCTL are:

Command       Description
add                 Adds a component to the Oracle Restart configuration.
config            Displays the Oracle Restart configuration for a component.
disable           Disables management by Oracle Restart for a component.
enable            Reenables management by Oracle Restart for a component.
getenv            Displays environment variables in the Oracle Restart configuration for a database, Oracle ASM instance, or listener.
modify           Modifies the Oracle Restart configuration for a component.
remove           Removes a component from the Oracle Restart configuration.
setenv             Sets environment variables in the Oracle Restart configuration for a database, Oracle ASM instance, or listener.
start                Starts the specified component.
status              Displays the running status of the specified component.
stop                 Stops the specified component.
unsetenv        Unsets environment variables in the Oracle Restart configuration for a database, Oracle ASM instance, or listener.


Here is a matrix of command/object combinations:

srvctl add
srvctl modify
srvctl remove
    The OCR is modified.

srvctl relocate service
    You can relocate a service from one named instance to another named instance.

srvctl start
srvctl stop
srvctl status

srvctl disable
srvctl enable
    enable = when the server restarts, the resource must be restarted
    disable = when the server restarts, the resource must NOT be restarted (perhaps we are working on some maintenance tasks)

srvctl config database
    Lists configuration information from the OCR (Oracle Cluster Registry).

srvctl getenv
srvctl setenv
srvctl unsetenv
    srvctl getenv = displays the environment variables stored in the OCR for a target
    srvctl setenv = allows these variables to be set
    srvctl unsetenv = allows these variables to be unset


The most common SRVCTL commands are:

srvctl start database -d DBname
srvctl stop database -d DBname

If you don't know the DBname you can run: select name from v$database;

srvctl start instance -d DBname -i INSTANCEname
srvctl stop instance -d DBname -i INSTANCEname


srvctl status database -d DBname

srvctl status instance -d DBname -i INSTANCEname

srvctl status nodeapps -n NODEname

srvctl enable database -d DBname

srvctl disable database -d DBname

srvctl enable instance -d DBname -i INSTANCEname

srvctl disable instance -d DBname -i INSTANCEname

srvctl config database -d DBname      -> to get some information about the database from OCR.

srvctl getenv nodeapps

The following commands are deprecated in Oracle Clusterware 11g release 2 (11.2):

crsctl check crsd
crsctl check cssd
crsctl check evmd
crsctl debug log
crsctl set css votedisk
crsctl start resources
crsctl stop resources

11gR2 Oracle clusterware log files


With Oracle grid infrastructure 11g release 2 (11.2), Oracle Automatic Storage Management (Oracle ASM) and Oracle Clusterware are installed into a single home directory, which is referred to as the Grid Infrastructure home. Configuration assistants start after the installer interview process that configures Oracle ASM and Oracle Clusterware.

The installation of the combined products is called Oracle grid infrastructure. However, Oracle Clusterware and Oracle Automatic Storage Management remain separate products.


$GRID_HOME is used in 11.2 Oracle Clusterware to specify the grid location (clusterware + ASM). In previous releases we used $CRS_HOME or $ORA_CRS_HOME as the environment variable for the clusterware software location (the Oracle cluster home). For this reason we can set all 3 environment variables to the same value (in .profile), but this is not mandatory. In this case we can use, for example, GRID_BASE=/oracle/grid if we want.

In 11.2 grid infrastructure, the Oracle Clusterware component log files are all located under $GRID_HOME/log/<hostname>, for example:
/oracle/grid/11.2/log/test01                ($GRID_HOME = /oracle/grid/11.2)
ls -altr
total 64
drwxrwxr-t  5  oracle  dba      256 Jul 28 20:18 racg
drwxr-x---  2  root    dba      256 Jul 28 20:18 gnsd
drwxrwxr-t  4  root    dba      256 Jul 28 20:18 agent
drwxr-x---  2  oracle  dba      256 Jul 28 20:18 admin
drwxrwxr-x  5  oracle  dba      256 Jul 28 20:18 ..
drwxr-xr-t 17  root    dba     4096 Jul 28 20:18 .
drwxr-x---  2  oracle  dba      256 Jul 28 20:24 gipcd
drwxr-x---  2  oracle  dba      256 Jul 28 20:25 mdnsd
drwxr-x---  2  root    dba      256 Jul 28 20:25 ohasd
drwxr-x---  2  oracle  dba      256 Jul 28 20:27 evmd
drwxr-x---  2  root    dba      256 Jul 31 01:08 ctssd
drwxr-x---  2  root    dba      256 Aug  1 12:44 crsd
drwxr-x---  2  oracle  dba      256 Aug  1 21:15 cssd
drwxr-x---  2  oracle  dba      256 Aug  2 14:06 diskmon
drwxr-x---  2  oracle  dba      256 Aug  2 14:46 gpnpd
-rw-rw-r--  1  root    system 16714 Aug  2 14:46 alert_test01.log
drwxr-x---  2  oracle  dba     4096 Aug  2 14:51 srvm
drwxr-x---  2  oracle  dba     4096 Aug  3 02:59 client


Oracle Clusterware Components/ Daemons/ Processes

Oracle Clusterware Component Log Files

Cluster Ready Services Daemon (CRSD) Log Files


Oracle High Availability Services Daemon (OHASD)


Cluster Synchronization Services (CSS)


Cluster Time Synchronization Service (CTSS)


Grid Plug and Play


Multicast Domain Name Service Daemon (MDNSD)


Oracle Cluster Registry records

client : the Oracle Cluster Registry tools (OCRDUMP, OCRCHECK, OCRCONFIG) record log information here
crsd : the Oracle Cluster Registry server records log information here

Oracle Grid Naming Service (GNS)


Event Manager (EVM) information generated by evmd



racg : Core files are in subdirectories of the log directory. Each RACG executable has a subdirectory assigned exclusively for that executable; the name of the RACG executable subdirectory is the same as the name of the executable. Additionally, you can find logging information for the VIP and database in these two locations.

Server Manager (SRVM)


Disk Monitor Daemon (diskmon)


Grid Interprocess Communication Daemon (GIPCD)




Where can we find the log files related to the listeners?

A) For listener.log => $ORACLE_BASE/diag/tnslsnr/test01/listener/trace/listener.log

As of Oracle Database 11g Release 1, the diagnostics for each database instance are located in a dedicated directory, which can be specified through the DIAGNOSTIC_DEST initialization parameter. The structure of the directory specified by DIAGNOSTIC_DEST is as follows:

<diagnostic_dest>/diag/rdbms/<dbname>/<instname>

This location is known as the Automatic Diagnostic Repository (ADR) home. The following files are located under the ADR home directory:

  • Trace files – located in subdirectory <adr-home>/trace
  • Alert logs – located in subdirectory <adr-home>/alert. In addition, the alert.log file is now in XML format, which conforms to the Oracle ARB logging standard.
  • Core files – located in subdirectory <adr-home>/cdump
  • Incident files – the occurrence of each serious error (for example, ORA-600, ORA-1578, ORA-7445) causes an incident to be created. Each incident is assigned an ID, and dumping for each incident (error stack, call stack, block dumps, and so on) is stored in its own file, separated from process trace files. Incident dump files are located in <adr-home>/incident/<incdir#>. You can find the incident dump file location inside the process trace file.

This parameter can be set on each instance. Oracle recommends that each instance in a cluster specify a DIAGNOSTIC_DEST directory location that is located on shared disk and that the same value for DIAGNOSTIC_DEST be specified for each instance.
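To make the layout concrete, the ADR paths can be assembled like this (the DIAGNOSTIC_DEST, dbname and instname values are made-up examples, not from a real system):

```shell
# Build the ADR home path from its components, as described above.
DIAGNOSTIC_DEST=/u01/app/oracle   # illustrative value
DBNAME=orcl                       # illustrative database name
INSTNAME=orcl1                    # illustrative instance name
ADR_HOME="$DIAGNOSTIC_DEST/diag/rdbms/$DBNAME/$INSTNAME"

echo "$ADR_HOME/trace"      # trace files
echo "$ADR_HOME/alert"      # XML alert log
echo "$ADR_HOME/cdump"      # core files
echo "$ADR_HOME/incident"   # one subdirectory per incident
```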

If you want to see how the ADR homes are configured in the database you can run:
column INST_ID format 999
column NAME format a20
column VALUE format a45
B) for SCAN listeners 
   =>  $GRID_HOME/log/diag/tnslsnr/<NodeName>/listener_scan1/trace/listener_scan1.log
If you want to see the SCAN listeners status you can run :
[oracle@ctxdb1 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node ctxdb2
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node ctxdb1
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node ctxdb1

Shut down/ stop Oracle clusterware processes

When you run srvctl you perform an operation either for the database or for the cluster. If the operation is related to the database you must use the srvctl command from the ORACLE_HOME, and if the operation is related to the cluster, you must use the srvctl from the GRID_HOME.
1. Ensure that you are logged in as the oracle Linux/ UNIX user.
    If you are not connected as oracle OS user, you must switch to the oracle OS user
    su – oracle
2. Stop (shut down) all applications using the Oracle database.
    This step includes stopping the Oracle Enterprise Manager Database Control:
$ emctl stop dbconsole
    If you want to check whether the Enterprise Manager Database Console is running or not:
emctl status dbconsole 
emctl status agent
Note: In previous releases of Oracle Database, you were required to set environment variables for ORACLE_HOME and ORACLE_SID to start, stop, and check the status
of Enterprise Manager. With Oracle Database 11g release 2 (11.2) and later, you need to set the environment variables ORACLE_HOME and ORACLE_UNQNAME to use or manage the Enterprise Manager.
export ORACLE_UNQNAME=GlobalUniqueName    (the database unique name, not the instance SID)
3. Shut down (stop) all Oracle RAC instances on all nodes.
To shut down all Oracle RAC instances for a database, enter the following command, where db_name is the name of the database:
$ srvctl stop database -d db_name     (this command stops all the instances)
4. Shut down (stop) all Oracle ASM instances on all nodes. (If you are not using the ASM you must skip this step.)
To shut down an Oracle ASM instance, enter the following command, where node_name is the name of the node where the Oracle ASM instance is running:
$ srvctl stop asm -n node_name
5. Stop (shut down) the Oracle cluster stack
su – root
cd $CRS_HOME/bin
# ./crsctl stop crs              (must be run on each node)
./srvctl stop nodeapps -n node_name   --> in 11.2 this stops only ONS and eONS, because of some dependencies.
If you want to check if the database is running you can run:
ps -ef | grep smon
oracle 246196 250208 0 14:29:11 pts/0 0:00 grep smon
If you want to check if the database listeners are running you can run:
ps -ef | grep lsnr
root 204886 229874 0 14:30:07 pts/0 0:00 grep lsnr
Here the listeners are running:
ps -ef | grep lsnr
oracle 282660 1 0 14:07:34 - 0:00 /oracle/grid/crs/11.2/bin/tnslsnr LISTENER_SCAN2 -inherit
oracle 299116 250208 0 14:30:00 pts/0 0:00 grep lsnr
oracle 303200 1 0 14:23:44 - 0:00 /oracle/grid/crs/11.2/bin/tnslsnr LISTENER_SCAN1 -inherit
oracle 315432 1 0 14:07:35 - 0:00 /oracle/grid/crs/11.2/bin/tnslsnr LISTENER_SCAN3 -inherit
oracle 323626 1 0 14:07:34 - 0:00 /oracle/grid/crs/11.2/bin/tnslsnr LISTENER -inherit
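One caveat with the ps | grep checks above: the grep command itself matches its own pattern and shows up in the output (that is the "grep smon" and "grep lsnr" lines in the listings). A character-class trick filters it out:

```shell
# "[s]mon" still matches the literal string "smon" in ps output, but the
# grep process's own command line shows "[s]mon", which the pattern does
# not match -- so grep no longer counts itself.
# (|| true: grep exits non-zero when nothing matches)
smon_count=$(ps -ef | grep -c '[s]mon' || true)
echo "real smon processes: $smon_count"      # 0 on a host without Oracle

lsnr_count=$(ps -ef | grep -c '[l]snr' || true)
echo "real listener processes: $lsnr_count"

# A process that is not running yields exactly 0, never a stray 1:
ghost=$(ps -ef | grep -c '[n]o_such_daemon_here' || true)
echo "ghost count: $ghost"                   # prints 0
```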
If you want to check if any clusterware component is running you can run:
[oracle@ctxdb2 ~]$ ps -ef|grep grid
oracle    1707   956  0 13:03 pts/0    00:00:00 grep grid
root      9910     1  0  2013 ?        00:42:59 /oracle/app/cluster/product/11203/grid/bin/ohasd.bin reboot
oracle   10538     1  0  2013 ?        01:04:28 /oracle/app/cluster/product/11203/grid/bin/oraagent.bin
oracle   10552     1  0  2013 ?        00:00:24 /oracle/app/cluster/product/11203/grid/bin/mdnsd.bin
oracle   10564     1  0  2013 ?        00:13:10 /oracle/app/cluster/product/11203/grid/bin/gpnpd.bin
root     10577     1  0  2013 ?        02:43:59 /oracle/app/cluster/product/11203/grid/bin/orarootagent.bin
oracle   10579     1  0  2013 ?        00:41:11 /oracle/app/cluster/product/11203/grid/bin/gipcd.bin
root     10594     1  0  2013 ?        03:09:38 /oracle/app/cluster/product/11203/grid/bin/osysmond.bin
root     10612     1  0  2013 ?        00:01:35 /oracle/app/cluster/product/11203/grid/bin/cssdmonitor
root     10632     1  0  2013 ?        00:01:36 /oracle/app/cluster/product/11203/grid/bin/cssdagent
oracle   10646     1  0  2013 ?        03:42:27 /oracle/app/cluster/product/11203/grid/bin/ocssd.bin
root     10740     1  0  2013 ?        00:05:09 /oracle/app/cluster/product/11203/grid/bin/octssd.bin reboot
oracle   10767     1  0  2013 ?        00:01:22 /oracle/app/cluster/product/11203/grid/bin/evmd.bin
root     11208     1  0  2013 ?        00:27:00 /oracle/app/cluster/product/11203/grid/bin/crsd.bin reboot
oracle   11340 10767  0  2013 ?        00:00:00 /oracle/app/cluster/product/11203/grid/bin/evmlogger.bin -o /oracle/app/cluster/product/11203/grid/evm/log/evmlogger.info -l /oracle/app/cluster/product/11203/grid/evm/log/evmlogger.log
oracle   11378     1  0  2013 ?        02:16:30 /oracle/app/cluster/product/11203/grid/bin/oraagent.bin
root     11382     1  0  2013 ?        03:56:51 /oracle/app/cluster/product/11203/grid/bin/orarootagent.bin
oracle   11474     1  0  2013 ?        00:00:00 /oracle/app/cluster/product/11203/grid/opmn/bin/ons -d
oracle   11475 11474  0  2013 ?        00:00:13 /oracle/app/cluster/product/11203/grid/opmn/bin/ons -d
oracle   11514     1  0  2013 ?        00:20:31 /oracle/app/cluster/product/11203/grid/bin/tnslsnr LISTENER_SCAN1 -inherit
oracle   11531     1  0  2013 ?        00:46:23 /oracle/app/cluster/product/11203/grid/bin/tnslsnr LISTENER -inherit
root     11608     1  0  2013 ?        00:23:33 /oracle/app/cluster/product/11203/grid/bin/ologgerd -m ctxdb1 -r -d /oracle/app/cluster/product/11203/grid/crf/db/ctxdb2
[oracle@ctxdb2 ~]$ 

Hope this will help. If you have any queries in your mind, please post them. Experts are always welcome with their views and ideas. As this is not the end, I will soon be back with more on RAC and RAC-related topics.
Thanks & Regards
Nimai Karmakar

