Oracle® Real Application Clusters Administration and Deployment Guide
11g Release 1 (11.1)

Part Number B28254-01

2 Administering Storage

This chapter describes storage topics, such as Automatic Storage Management (ASM), in an Oracle Real Application Clusters (Oracle RAC) environment.

This chapter includes the following topics:

  • Overview of Storage in Oracle Real Application Clusters

  • Automatic Storage Management in Oracle Real Application Clusters

See Also:

Oracle Clusterware Administration and Deployment Guide, your platform-specific Oracle Clusterware installation guide, and your Oracle Real Application Clusters installation guide

Overview of Storage in Oracle Real Application Clusters

All datafiles (including an undo tablespace for each instance) and redo log files (at least two for each instance) must reside in an ASM disk group, on a cluster file system, or on shared raw devices. In addition, Oracle recommends that you use one shared server parameter file (SPFILE) with instance-specific entries. Alternatively, you can use a local file system to store instance-specific parameter files (PFILEs).
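As a sketch of the recommended shared SPFILE approach, the entries below show how instance-specific parameters can live alongside database-wide ones in a single parameter file. The SIDs orcl1 and orcl2, and the tablespace names, are hypothetical:

```
# Shared SPFILE: database-wide entries use the "*." prefix,
# instance-specific entries use the instance SID as the prefix
*.db_name='orcl'
orcl1.instance_number=1
orcl2.instance_number=2
orcl1.undo_tablespace='UNDOTBS1'
orcl2.undo_tablespace='UNDOTBS2'
```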

Unless otherwise noted, Oracle storage features such as ASM, Oracle Managed Files (OMF), automatic segment-space management, and so on, function the same in Oracle RAC environments as they do in single-instance Oracle database environments. See Oracle Database 2 Day DBA, Oracle Database Storage Administrator's Guide, and the Oracle Database Administrator's Guide for additional information about these storage features.

If you do not use ASM, if your platform does not support a cluster file system, or if you do not want to use a cluster file system for database file storage, then create additional raw devices as described in your platform-specific Oracle Real Application Clusters installation and configuration guide. However, Oracle recommends that you use ASM for database file storage, as described later in this chapter in the section titled "Automatic Storage Management in Oracle Real Application Clusters".

The remainder of this section describes the following topics:

  • Optimal Flexible Architecture

  • Datafile Access in Oracle Real Application Clusters

  • Redo Log File Storage in Oracle Real Application Clusters

  • Automatic Undo Management in Oracle Real Application Clusters

Note:

To create an Oracle RAC database using the Oracle Database Standard Edition, you must use ASM for your database storage.

Optimal Flexible Architecture

Optimal Flexible Architecture (OFA) ensures reliable installations and improves software manageability. OFA streamlines the way Oracle software installations are organized, simplifying their ongoing management and making default Oracle Database installations more compliant with the OFA specification.

During installation, you are prompted to specify an Oracle base (ORACLE_BASE) location, which is owned by the user performing the installation. You can choose an existing ORACLE_BASE, or choose another directory location that does not have the structure for an ORACLE_BASE directory.

Using the Oracle base directory path helps to organize Oracle installations and ensures that installations of multiple databases maintain an OFA configuration. During the installation, ORACLE_BASE is the only required input; the default ORACLE_HOME is derived from the value you choose for ORACLE_BASE. Oracle also recommends that you set the ORACLE_BASE environment variable, in addition to ORACLE_HOME, when starting databases. Note that ORACLE_BASE may become a required environment variable for database startup in a future release.
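The relationship between the two variables can be sketched as follows. The paths are hypothetical OFA-style locations; substitute the values chosen during your installation:

```shell
# Hypothetical OFA-compliant locations; adjust for your installation.
# ORACLE_HOME defaults to a directory under ORACLE_BASE.
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.1.0/db_1
echo "$ORACLE_HOME"
```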

See Also:

Your platform-specific Oracle Real Application Clusters installation guide for more information about specifying an ORACLE_BASE directory

Datafile Access in Oracle Real Application Clusters

All Oracle RAC instances must be able to access all datafiles. If a datafile needs to be recovered when the database is opened, then the first Oracle RAC instance to start is the instance that performs the recovery and verifies access to the file. As other instances start, they also verify their access to the datafiles. Similarly, when you add a tablespace or datafile or bring a tablespace or datafile online, all instances verify access to the file or files.

If you add a datafile to a disk that other instances cannot access, then verification fails. Verification also fails if instances access different copies of the same datafile. If verification fails for any instance, then diagnose and fix the problem. Then run the ALTER SYSTEM CHECK DATAFILES statement on each instance to verify datafile access.
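The verification statement itself is short; run it from SQL*Plus on each instance after resolving the access problem:

```sql
-- Re-verify datafile access on this instance
ALTER SYSTEM CHECK DATAFILES;
```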

Redo Log File Storage in Oracle Real Application Clusters

Each instance has its own online redo log groups. Create these redo log groups and establish group members, as described in the Oracle Database Administrator's Guide. To add a redo log group to a specific instance, specify the INSTANCE clause on the ALTER DATABASE ADD LOGFILE statement, as described in the Oracle Database SQL Language Reference. If you do not specify the instance when adding the redo log group, the redo log group is added to the instance to which you are currently connected.
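As a hedged illustration of the INSTANCE clause (the instance name, group number, disk group, and size are hypothetical):

```sql
-- Add redo log group 5 to the instance named orcl2,
-- placing the member in the +DATA disk group
ALTER DATABASE ADD LOGFILE INSTANCE 'orcl2'
  GROUP 5 ('+DATA') SIZE 50M;
```

If you omit the INSTANCE clause, the group is added to the instance to which you are currently connected.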

Each instance must have at least two groups of redo log files. You must allocate the redo log groups before enabling a new instance with the ALTER DATABASE ENABLE INSTANCE instance_name command. When the current group fills, an instance begins writing to the next log file group. If your database is in ARCHIVELOG mode, then each instance must save filled online log groups as archived redo log files that are tracked in the control file.

During database recovery, all enabled instances are checked to see if recovery is needed. If you remove an instance from your Oracle RAC database, you should disable the instance so it does not have to be checked during database recovery.
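For example (the instance name orcl3 is hypothetical), enabling a new instance and later disabling a removed one look like this:

```sql
-- Enable a new instance after allocating its redo log groups
ALTER DATABASE ENABLE INSTANCE 'orcl3';

-- Disable an instance you have removed so recovery does not check it
ALTER DATABASE DISABLE INSTANCE 'orcl3';
```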

Automatic Undo Management in Oracle Real Application Clusters

Oracle automatically manages undo segments within a specific undo tablespace that is assigned to an instance. Only the instance assigned to the undo tablespace can modify the contents of that tablespace. However, all instances can always read all undo blocks throughout the cluster environment for consistent read purposes. Also, any instance can update any undo tablespace during transaction recovery, as long as that undo tablespace is not currently used by another instance for undo generation or transaction recovery. You assign undo tablespaces in your Oracle RAC database by specifying a different value for the UNDO_TABLESPACE parameter for each instance in your SPFILE or individual PFILEs. You cannot simultaneously use automatic undo management and manual undo management in an Oracle RAC database. In other words, all instances of an Oracle RAC database must operate in the same undo mode.
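A minimal sketch of per-instance undo assignment, assuming a shared SPFILE and the hypothetical SIDs orcl1 and orcl2:

```sql
-- Each instance gets its own undo tablespace via the SID clause
ALTER SYSTEM SET undo_tablespace='UNDOTBS1' SID='orcl1';
ALTER SYSTEM SET undo_tablespace='UNDOTBS2' SID='orcl2';
```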

See Also:

Oracle Database Administrator's Guide for detailed information about creating and managing undo tablespaces

Automatic Storage Management in Oracle Real Application Clusters

ASM automatically maximizes performance by managing the storage configuration across the disks that ASM manages. ASM does this by evenly distributing the database files across all of the available storage within your cluster database environment. ASM partitions your total disk space requirements into uniformly sized units across all disks in a disk group. ASM can also automatically mirror data to prevent data loss. Because of these features, ASM also significantly reduces your administrative overhead.

To use ASM in Oracle RAC, select ASM as your storage option when you create your database with the Database Configuration Assistant (DBCA). As in single-instance Oracle databases, using ASM in Oracle RAC does not require I/O tuning.


Note:

When installing ASM, you should keep the ASM home separate from the database home directory (Oracle home). By using separate home directories, you can upgrade and patch ASM and the Oracle Database software independently, and you can deinstall Oracle Database software without affecting the ASM instance. See the Oracle Database 2 Day + Real Application Clusters Guide for complete information.

The following topics describe ASM and ASM administration:

ASM Storage Management in Oracle Real Application Clusters

When you create your database, Oracle creates one ASM instance on each node in your Oracle RAC environment if one does not already exist. Each ASM instance has either an SPFILE or PFILE type parameter file. Back up the parameter files and the TNS entries for nondefault Oracle Net listeners.

You can create ASM disk groups and configure mirroring for ASM disk groups using DBCA. After your Oracle RAC database is operational, you can administer ASM disk groups with Enterprise Manager.

You configure ASM in a separate standalone ASM home. This enables instances for single-instance databases and Oracle RAC databases to share a single ASM instance on a node. You also have the option to upgrade ASM independently of your database upgrades.

The Oracle tools that you use to manage ASM, including DBCA, Database Upgrade Assistant (DBUA), Enterprise Manager, and the silent mode install and upgrade commands, include options to manage ASM instances and disk groups. For example, you can run DBCA to create a new ASM instance or ASM disk group independently of creating a database.

When you choose ASM options when performing installation, upgrades, or other operations, the tool you are using may automatically extend ASM to other nodes in your cluster. This can include installing ASM software into the same home as on the current node and starting the ASM instance. For example, if you use DBCA to create a database using a new Oracle home, then DBCA attempts to extend ASM to the new Oracle home on all of the nodes you select.

Modifying Disk Group Configurations for ASM in Oracle Real Application Clusters

When you create a disk group for a cluster or add new disks to an existing clustered disk group, prepare the underlying physical storage on shared disks and give the Oracle user permission to read and write to the disk. The shared disk requirement is the only substantial difference between using ASM in an Oracle RAC database compared to using it in a single-instance Oracle database. ASM automatically rebalances the storage load after you add or delete a disk or disk group.
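As a hedged illustration (the disk group name, device path, and disk name are all hypothetical), adding a prepared shared disk triggers an automatic rebalance:

```sql
-- The device must already be readable and writable by the
-- Oracle user on every node in the cluster
ALTER DISKGROUP data ADD DISK '/dev/rdsk/c3t19d5s4' NAME data_disk5;

-- Optionally raise the rebalance speed for this operation
ALTER DISKGROUP data REBALANCE POWER 5;
```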

In a cluster, each ASM instance manages its node's metadata updates to the disk groups. In addition, each ASM instance coordinates disk group metadata with other nodes in the cluster. As in single-instance Oracle databases, you can use Enterprise Manager, DBCA, SQL*Plus, and the Server Control Utility (SRVCTL) to administer disk groups for ASM in Oracle RAC. The Oracle Database Storage Administrator's Guide explains how to use SQL*Plus to administer ASM instances. The following sections describe how to use the other tools.

Standalone ASM Disk Group Management

When you create a database using DBCA and you select the ASM storage option, DBCA creates the ASM instances for you if they do not already exist. However, you can also use the standalone ASM disk group management feature to create and manage an ASM instance and its associated disk groups independently of creating a new database. You can use Enterprise Manager or DBCA to add disks to a disk group, to mount a disk group or to mount all of the disk groups, or to create ASM instances. Additionally, you can use Enterprise Manager to dismount and drop disk groups or to delete ASM instances.

To create an ASM instance without creating a database with DBCA, select the Configure Automatic Storage Management option on the DBCA Database Options page. You can also use this option to add or mount one or more ASM disk groups. DBCA then displays the Node Selection page, on which you identify the nodes where you want to create the ASM instance or manage disk groups. If necessary, you then complete the DBCA Create Instance page, where you supply the ASM instance parameter file, the SYS password, and, for Windows systems, the owner of the ASM-related service.

You can also use the ASM Disk Groups page in DBCA for standalone ASM management. That is, you can configure ASM storage separately from database creation. For example, from the ASM Disk Groups page, you can create new disk groups, add disks to existing disk groups, or mount disk groups that are not currently mounted.

Performing Automatic Storage Management Rolling Upgrades

ASM rolling upgrade enables you to upgrade or patch clustered ASM nodes one at a time, without affecting database availability. During a rolling upgrade, you can maintain a functional cluster while one or more of the nodes in the cluster are running different software versions.


Note:

An ASM rolling upgrade applies only to clustered ASM instances. You can perform rolling upgrades only in environments with Oracle Database 11g release 1 (11.1) and later releases. In other words, you cannot use the rolling upgrade feature to upgrade from Oracle Database 10g to Oracle Database 11g.
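Before upgrading the first node, the cluster is placed in rolling migration mode from one ASM instance; the target version string below is hypothetical:

```sql
-- Put the clustered ASM instances into rolling migration mode
ALTER SYSTEM START ROLLING MIGRATION TO '11.1.0.7.0';

-- After all nodes have been upgraded, end the migration
ALTER SYSTEM STOP ROLLING MIGRATION;
```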

See Also:

Oracle Database Storage Administrator's Guide for conceptual information about performing ASM rolling upgrades and patching ASM instances, and the Oracle Database Upgrade Guide for step-by-step instructions to upgrade an ASM instance with DBUA and to upgrade an ASM instance manually

Configuring Preferred Mirror Read Disks in Extended Distance Clusters

When you configure ASM failure groups, it may be more efficient for a node to read from the extent that is closest to it, even if that extent is a secondary extent. You can configure ASM to read from a secondary extent when that extent is closer to the node, instead of reading from the primary copy, which might be farther away. Using preferred read failure groups is most beneficial in an extended distance cluster.

To configure this feature, set the ASM_PREFERRED_READ_FAILURE_GROUPS initialization parameter to specify a list of failure group names as preferred read disks. Oracle recommends that you configure at least one mirrored extent copy from a disk that is local to a node in an extended cluster. However, a failure group that is preferred for one instance might be remote to another instance in the same Oracle RAC database. The parameter setting for preferred read failure groups is instance specific.
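For example (the disk group, failure group, and ASM SID below are hypothetical), the instance at one site can be told to prefer its local failure group; the value takes the form diskgroup_name.failure_group_name:

```sql
-- On the ASM instance local to site A, prefer the SITEA failure
-- group of the DATA disk group; the setting is instance specific
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.SITEA'
  SID='+ASM1';
```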


Converting Single-Instance ASM to Clustered ASM

You can use either the rconfig command or Oracle Enterprise Manager Grid Control to convert an existing ASM instance from a single-instance storage manager to a cluster storage manager. You can convert ASM instances that are running Oracle Database 10g release 10.2 (or later) directly to Oracle Database 11g.


Extending ASM to Nodes Running Single-Instance or Oracle RAC Databases

This section describes how to add a new ASM instance to a node that is running either a single-instance database or an Oracle RAC database instance.

Perform the following steps to extend ASM from an existing node to a new node:

  1. Start DBCA from the node where you already have configured the ASM instance. (In this case, the node is called stbdq18).

    You should run DBCA from the ASM home in the existing cluster, not from the node you just added. Running DBCA out of the existing ASM home ensures that ASM runs from the correct home.

  2. On the Welcome screen of the Database Configuration Assistant (shown in Figure 2-1), select Oracle Real Application Clusters database.

    Figure 2-1 Extending ASM to a New Node: DBCA Welcome Page


  3. On the DBCA Operations screen (shown in Figure 2-2), select the Configure Automatic Storage Management option.

    Figure 2-2 Extending ASM to a New Node: DBCA Operations Page


  4. On the Node Selection screen (shown in Figure 2-3), you should see both the source node (stbdq18) and the target node (stbdq19). Select both nodes and click Next.

    Figure 2-3 Extending ASM to a New Node: DBCA Node Selection Screen


  5. DBCA checks for ASM availability on the newly provisioned node (stbdq19) and displays the confirmation window shown in Figure 2-4.

    Figure 2-4 Extending ASM to a New Node: Confirmation Window


  6. Select Yes to create ASM on the new node. DBCA then creates the ASM instance on the new node.

  7. You can confirm that ASM has been extended to the new node by issuing the following command:

    crs_stat | grep asm

Administering ASM Instances and Disk Groups with Enterprise Manager in Oracle RAC

You can administer ASM with Oracle Enterprise Manager Database Control (Database Control). Database Control enables you to more easily manage ASM instances, disks, disk groups, and failure groups in Oracle RAC environments.

To begin administering ASM, go to the ASM Home page in Database Control. To access the ASM Home page, you must:

  1. Log in to Oracle Enterprise Manager on any node that is running the Oracle Management Service (OMS).

    OMS is automatically started on the node where Database Configuration Assistant (DBCA) was run to create the cluster database. Depending on your configuration, OMS may also be running on other nodes.

  2. On the Cluster Database Home page, under the Instances heading, click the link for the desired ASM instance.

You can perform administrative operations on ASM disk groups such as adding and deleting them. You can also monitor ASM disk group performance as well as control disk group availability at the instance level. For example, some of the tasks specific to Oracle RAC and ASM you can perform with Database Control include:

  • Adding ASM disk groups—When you add a disk group, the disk group definition includes a check box to indicate whether the disk group is automatically mounted on all of the cluster database instances.

  • Monitoring ASM disk group performance—The default Disk Group Performance page displays instance-level performance details when you click a performance characteristic such as Write Response Time or I/O Throughput.

  • Mounting and dismounting ASM disk groups—You can use a check box to indicate which instances should mount or dismount a particular ASM disk group.

  • Managing disk resynchronization, preferred read settings, and ASM rolling upgrades.

See Also:

Oracle Database Storage Administrator's Guide for complete information about using Database Control to manage ASM in Oracle RAC environments

Administering ASM Instances with SRVCTL in Oracle Real Application Clusters

You can use the Server Control Utility (SRVCTL) to add, remove, enable, and disable an ASM instance. To issue SRVCTL commands to manage ASM, log in as the operating system user that owns the ASM home and issue the SRVCTL commands from the bin directory of the ASM home.

Use the following syntax to add configuration information about an existing ASM instance:

srvctl add asm -n node_name -i +asm_instance_name -o oracle_home


Note:

For all of the SRVCTL commands in this section for which the -i option is not required, if you do not specify an instance name, then the command applies to all of the ASM instances on the node.

Use the following syntax to remove an ASM instance:

srvctl remove asm -n node_name [-i +asm_instance_name]

Use the following syntax to enable an ASM instance:

srvctl enable asm -n node_name [-i +asm_instance_name]

Use the following syntax to disable an ASM instance:

srvctl disable asm -n node_name [-i +asm_instance_name]

You can also use SRVCTL to start, stop, and obtain the status of an ASM instance as in the following examples.

Use the following syntax to start an ASM instance:

srvctl start asm -n node_name [-i +asm_instance_name] [-o start_options]

Use the following syntax to stop an ASM instance:

srvctl stop asm -n node_name [-i +asm_instance_name] [-o stop_options]

Use the following syntax to show the configuration of an ASM instance:

srvctl config asm -n node_name 

Use the following syntax to obtain the status of an ASM instance:

srvctl status asm -n node_name
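Putting the syntax together, a typical session might look like the following sketch. The node name docrac1, instance name +ASM1, and ASM home path are hypothetical:

```shell
# Register an existing ASM instance with Oracle Clusterware,
# then start it, check its status, and stop it
srvctl add asm -n docrac1 -i +ASM1 -o /u01/app/oracle/product/11.1.0/asm_1
srvctl start asm -n docrac1
srvctl status asm -n docrac1
srvctl stop asm -n docrac1
```

Remember to run these commands as the operating system user that owns the ASM home, from the bin directory of that home.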