Oracle® Database Storage Administrator's Guide
11g Release 1 (11.1)

Part Number B31107-01

4 Administering ASM Disk Groups

This chapter describes how to administer Automatic Storage Management (ASM) disk groups. The examples in this chapter use SQL statements and assume that an ASM instance is running. This chapter contains the following topics:

For information about starting up an ASM instance, refer to "Starting Up an ASM Instance". For information about administering ASM disk groups with Enterprise Manager, refer to Chapter 6, "Administering ASM with Oracle Enterprise Manager". For information about administering ASM disk groups with ASMCMD, refer to Chapter 7, "ASM Command-Line Utility—ASMCMD". For information about administering ASM disk groups with DBCA, refer to your platform-specific Oracle Database installation guide.

See Also:

The ASM home page for more information about ASM best practices at: http://www.oracle.com/technology/products/database/asm/index.html

Managing Automatic Storage Management (ASM) Disk Groups

This section explains how to create, alter, drop, mount, and dismount Automatic Storage Management (ASM) disk groups. Note that database instances that use ASM can continue operating while you administer disk groups. This section contains the following topics:

Creating Disk Groups

The CREATE DISKGROUP SQL statement is used to create disk groups. When creating a disk group, you need to:

  • Assign a unique name to the disk group.

  • Specify the redundancy level of the disk group.

    If you want ASM to mirror files, you specify the redundancy level as NORMAL REDUNDANCY (2-way mirroring by default for most file types) or HIGH REDUNDANCY (3-way mirroring for all files). You specify EXTERNAL REDUNDANCY if you do not want mirroring by ASM. For example, you might choose EXTERNAL REDUNDANCY if you want to use storage array protection features.

    For more information about redundancy levels, refer to "ASM Mirroring and Failure Groups".

  • Specify the disks as belonging to specific failure groups.

    For information about failure groups, refer to "Understanding ASM Concepts" and "ASM Mirroring and Failure Groups".

  • Specify the disks that are to be formatted as ASM disks belonging to the disk group.

    The disks can be specified using operating system dependent wildcard characters in search strings that ASM then uses to find the disks. You can specify names for the disks with the NAME clause or use the system-generated names.

  • Optionally specify disk group attributes, such as software compatibility or allocation unit size.

ASM programmatically determines the size of each disk. If for some reason this is not possible, or if you want to restrict the amount of space used on a disk, you can specify a SIZE clause for each disk. ASM creates operating system–independent names for the disks in a disk group that you can use to reference the disks in other SQL statements. Optionally, you can provide your own name for a disk using the NAME clause. Disk names are available in the V$ASM_DISK view.
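For example, a statement similar to the following (the disk group name and device path are hypothetical) restricts ASM to 100 GB on the disk and assigns a user-specified disk name:

CREATE DISKGROUP dgroupa EXTERNAL REDUNDANCY DISK
     '/devices/diske1' NAME diske1 SIZE 100G;

You can then verify the disk name and size with a query such as:

SELECT name, path, total_mb FROM V$ASM_DISK;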

Note:

A disk cannot belong to multiple disk groups.

The ASM instance ensures that any disk in a newly created disk group is addressable and is not currently a member of another disk group. You must use FORCE only when adding a disk that was dropped with FORCE. If a disk was dropped with NOFORCE, then you can add it with NOFORCE. For example, a disk might have failed and been dropped from its disk group. After the disk is repaired, it is no longer part of any disk group, but ASM still recognizes that the disk had been a member of a disk group. You must use the FORCE flag to include the disk in a new disk group. In addition, the disk must be addressable, and the original disk group must not be mounted. Otherwise, the operation fails.

Note:

Use caution when using the FORCE option to add a previously used disk to a disk group; you might cause another disk group to become unusable.

The CREATE DISKGROUP statement mounts the disk group for the first time, and adds the disk group name to the ASM_DISKGROUPS initialization parameter if a server parameter file is being used. If a text initialization parameter file is being used and you want the disk group to be automatically mounted at instance startup, then you must remember to add the disk group name to the ASM_DISKGROUPS initialization parameter before the next time that you shut down and restart the ASM instance. You can also create disk groups with Oracle Enterprise Manager. Refer to "Creating Disk Groups".

Example: Creating a Disk Group

The following examples assume that the ASM_DISKSTRING initialization parameter is set to '/devices/*'. Assume the following:

ASM disk discovery identifies the following disks in the /devices directory.

Controller 1:


/devices/diska1
/devices/diska2
/devices/diska3
/devices/diska4

Controller 2:
/devices/diskb1
/devices/diskb2
/devices/diskb3
/devices/diskb4

The SQL statement in Example 4-1 creates a disk group named dgroup1 with normal redundancy consisting of two failure groups, controller1 and controller2, with four disks in each failure group.

Example 4-1 Creating a Disk Group

% SQLPLUS /NOLOG
SQL> CONNECT / AS SYSASM
Connected to an idle instance.
SQL> STARTUP NOMOUNT

CREATE DISKGROUP dgroup1 NORMAL REDUNDANCY
FAILGROUP controller1 DISK
'/devices/diska1',
'/devices/diska2',
'/devices/diska3',
'/devices/diska4'
FAILGROUP controller2 DISK
'/devices/diskb1',
'/devices/diskb2',
'/devices/diskb3',
'/devices/diskb4'
ATTRIBUTE 'AU_SIZE'='4M';

See Also:

For information about using ASMLIB when creating disk groups, refer to the Oracle ASMLib page on the Oracle Technology Network Web site at http://www.oracle.com/technology/tech/linux/asmlib/index.html

Altering Disk Groups

You can use the ALTER DISKGROUP statement to alter a disk group configuration. You can add, resize, or drop disks while the database remains online. Whenever possible, multiple operations in a single ALTER DISKGROUP statement are recommended.

ASM automatically rebalances when the configuration of a disk group changes. By default, the ALTER DISKGROUP statement does not wait until the operation is complete before returning. Query the V$ASM_OPERATION view to monitor the status of this operation.
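For example, a query similar to the following displays the state, power, and estimated completion time (in minutes) of a running rebalance:

SELECT group_number, operation, state, power, est_minutes
  FROM V$ASM_OPERATION;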

You can use the REBALANCE WAIT clause if you want the ALTER DISKGROUP statement processing to wait until the rebalance operation is complete before returning. This is especially useful in scripts. The statement also accepts a REBALANCE NOWAIT clause that invokes the default behavior of conducting the rebalance operation asynchronously in the background.

You can interrupt a rebalance running in wait mode by typing CTRL-C on most platforms. This causes the statement to return immediately with the message ORA-01013: user requested cancel of current operation, and to continue the operation asynchronously. Typing CTRL-C does not cancel the rebalance operation or any disk add, drop, or resize operations.

To control the speed and resource consumption of the rebalance operation, you can include the REBALANCE POWER clause in statements that add, drop, or resize disks. Refer to "Manually Rebalancing Disk Groups" for more information about this clause.

Adding Disks to a Disk Group

You can use the ADD clause of the ALTER DISKGROUP statement to add a disk or a failure group to a disk group. The same syntax that you use to add a disk or failure group with the CREATE DISKGROUP statement can be used with the ALTER DISKGROUP statement. For an example of the CREATE DISKGROUP SQL statement, refer to Example 4-1. After you add new disks, the new disks gradually begin to accommodate their share of the workload as rebalancing progresses.

ASM behavior when adding disks to a disk group is best illustrated through "Example: Adding Disks to a Disk Group". You can also add disks to a disk group with Oracle Enterprise Manager, described in "Adding Disks to Disk Groups".

Example: Adding Disks to a Disk Group

The statements presented in this example demonstrate the interactions of disk discovery with the ADD DISK operation.

Assume that disk discovery identifies the following disks in directory /devices:


/devices/diska1 -- member of dgroup1
/devices/diska2 -- member of dgroup1
/devices/diska3 -- member of dgroup1
/devices/diska4 -- member of dgroup1
/devices/diska5 -- candidate disk
/devices/diska6 -- candidate disk
/devices/diska7 -- candidate disk
/devices/diska8 -- candidate disk

/devices/diskb1 -- member of dgroup1
/devices/diskb2 -- member of dgroup1
/devices/diskb3 -- member of dgroup1
/devices/diskb4 -- member of dgroup2

/devices/diskc1 -- member of dgroup2
/devices/diskc2 -- member of dgroup2
/devices/diskc3 -- member of dgroup3
/devices/diskc4 -- candidate disk

/devices/diskd1 -- candidate disk
/devices/diskd2 -- candidate disk
/devices/diskd3 -- candidate disk
/devices/diskd4 -- candidate disk
/devices/diskd5 -- candidate disk
/devices/diskd6 -- candidate disk
/devices/diskd7 -- candidate disk
/devices/diskd8 -- candidate disk

You can query the V$ASM_DISK view to display the status of ASM disks. See "Using Views to Obtain ASM Information".

The following statement would fail because /devices/diska1 - /devices/diska4 already belong to dgroup1.

ALTER DISKGROUP dgroup1 ADD DISK
     '/devices/diska*';

The following statement would successfully add disks /devices/diska5 through /devices/diska8 to dgroup1. Because no FAILGROUP clauses are included in the ALTER DISKGROUP statement, each disk is assigned to its own failure group. The NAME clauses assign names to the disks; otherwise, the disks would be assigned system-generated names.

ALTER DISKGROUP dgroup1 ADD DISK
     '/devices/diska5' NAME diska5,
     '/devices/diska6' NAME diska6,
     '/devices/diska7' NAME diska7,
     '/devices/diska8' NAME diska8;

The following statement would fail because the search string matches disks that are contained in other disk groups. Specifically, /devices/diska4 belongs to disk group dgroup1 and /devices/diskb4 belongs to disk group dgroup2.

ALTER DISKGROUP dgroup1 ADD DISK
     '/devices/disk*4';

The following statement would successfully add /devices/diskd1 through /devices/diskd8 to disk group dgroup1. This statement runs with a rebalance power of 5, and does not return until the rebalance operation is complete.

ALTER DISKGROUP dgroup1 ADD DISK
      '/devices/diskd*'
       REBALANCE POWER 5 WAIT;

If /devices/diskc3 was previously a member of a disk group that no longer exists, then you could use the FORCE option to add it as a member of another disk group. For example, the following use of the FORCE clause enables /devices/diskc3 to be added to dgroup2, even though it is a current member of dgroup3. For this statement to succeed, dgroup3 cannot be mounted.

ALTER DISKGROUP dgroup2 ADD DISK
     '/devices/diskc3' FORCE;

Dropping Disks from Disk Groups

To drop disks from a disk group, use the DROP DISK clause of the ALTER DISKGROUP statement. You can also drop all of the disks in specified failure groups using the DROP DISKS IN FAILGROUP clause.
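For example, a statement similar to the following (the failure group name is hypothetical) drops all of the disks in one failure group:

ALTER DISKGROUP dgroup1 DROP DISKS IN FAILGROUP failgrp1;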

When a disk is dropped, the disk group is rebalanced by moving all of the file extents from the dropped disk to other disks in the disk group. A drop disk operation might fail if not enough space is available on the other disks. The best approach is to perform both the add and drop operation with the same ALTER DISKGROUP statement. This has the benefit of rebalancing data extents once and ensuring that there is enough space for the rebalance operation to succeed.

Caution:

The ALTER DISKGROUP...DROP DISK statement returns before the drop and rebalance operations are complete. Do not reuse, remove, or disconnect the dropped disk until the HEADER_STATUS column for this disk in the V$ASM_DISK view changes to FORMER. You can query the V$ASM_OPERATION view to determine the amount of time remaining for the drop/rebalance operation to complete. For more information, refer to the Oracle Database SQL Language Reference and the Oracle Database Reference.
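For example, a query similar to the following (filtering on a hypothetical device path) shows when a dropped disk reaches the FORMER state:

SELECT path, header_status FROM V$ASM_DISK
  WHERE path = '/devices/diska5';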

If you specify the FORCE clause for the drop operation, the disk is dropped even if ASM cannot read or write to the disk. You cannot use the FORCE flag when dropping a disk from an external redundancy disk group.

Caution:

A DROP FORCE operation leaves data at reduced redundancy for as long as it takes for the subsequent rebalance operation to complete. This increases your exposure to data loss if there is a subsequent disk failure during rebalancing. DROP FORCE should be used only with great care.

You can also drop disks from a disk group with Oracle Enterprise Manager. See "Dropping Disks from Disk Groups".

Example: Dropping Disks from Disk Groups

The statements in this example demonstrate how to drop disks from the disk group dgroup1 described in "Example: Adding Disks to a Disk Group".

The following example drops diska5 from disk group dgroup1.

ALTER DISKGROUP dgroup1 DROP DISK diska5;

The following example drops diska5 from disk group dgroup1, and also illustrates how multiple actions are possible with one ALTER DISKGROUP statement.

ALTER DISKGROUP dgroup1 DROP DISK diska5
     ADD FAILGROUP failgrp1 DISK '/devices/diska9' NAME diska9;

Resizing Disks in Disk Groups

The RESIZE clause of ALTER DISKGROUP enables you to perform the following operations:

  • Resize all disks in the disk group

  • Resize specific disks

  • Resize all of the disks in a specified failure group

If you do not specify a new size in the SIZE clause, then ASM uses the size of the disk as returned by the operating system. The new size is written to the ASM disk header, and if the size of the disk is increasing, then the new space is immediately available for allocation. If the size is decreasing, rebalancing must relocate file extents beyond the new size limit to available space below the limit. If the rebalance operation can successfully relocate all extents, then the new size is made permanent; otherwise, the rebalance fails.

Example: Resizing Disks in Disk Groups

The following example resizes all of the disks in failure group failgrp1 of disk group dgroup1. If the new size is greater than disk capacity, the statement will fail.

ALTER DISKGROUP dgroup1 
     RESIZE DISKS IN FAILGROUP failgrp1 SIZE 100G;

Undropping Disks in Disk Groups

The UNDROP DISKS clause of the ALTER DISKGROUP statement enables you to cancel all pending drops of disks within disk groups. If a drop disk operation has already completed, then this statement cannot be used to restore it. This statement cannot be used to restore disks that are being dropped as the result of a DROP DISKGROUP statement, or for disks that are being dropped using the FORCE clause.

Example: Undropping Disks in Disk Groups

The following example cancels the dropping of disks from disk group dgroup1:

ALTER DISKGROUP dgroup1 UNDROP DISKS;

Manually Rebalancing Disk Groups

You can manually rebalance the files in a disk group using the REBALANCE clause of the ALTER DISKGROUP statement. This would normally not be required, because ASM automatically rebalances disk groups when their configuration changes. You might want to do a manual rebalance operation if you want to control the speed of what would otherwise be an automatic rebalance operation.

The POWER clause of the ALTER DISKGROUP...REBALANCE statement specifies the degree of parallelism, and thus the speed of the rebalance operation. It can be set to a value from 0 to 11. A value of 0 halts a rebalancing operation until the statement is either implicitly or explicitly re-run. The default rebalance power is set by the ASM_POWER_LIMIT initialization parameter. See "Tuning Rebalance Operations" for more information.

The power level of an ongoing rebalance operation can be changed by entering the rebalance statement with a new level.
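For example, the following statement changes the power of an ongoing rebalance of disk group dgroup2 to 10:

ALTER DISKGROUP dgroup2 REBALANCE POWER 10;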

The ALTER DISKGROUP...REBALANCE command by default returns immediately so that you can issue other commands while the rebalance operation takes place asynchronously in the background. You can query the V$ASM_OPERATION view for the status of the rebalance operation.

If you want the ALTER DISKGROUP...REBALANCE command to wait until the rebalance operation is complete before returning, you can add the WAIT keyword to the REBALANCE clause. This is especially useful in scripts. The command also accepts a NOWAIT keyword, which invokes the default behavior of conducting the rebalance operation asynchronously. You can interrupt a rebalance running in wait mode by typing CTRL-C on most platforms. This causes the command to return immediately with the message ORA-01013: user requested cancel of current operation, and to continue the rebalance operation asynchronously.

Additional rules for the rebalance operation include the following:

  • An ongoing rebalance operation restarts if the storage configuration changes, either because you alter the configuration or because the configuration changes due to a failure or an outage. Furthermore, if the new rebalance fails because of a user error, then a manual rebalance may be required.

  • The ALTER DISKGROUP...REBALANCE statement runs on a single node even if you are using Oracle Real Application Clusters (Oracle RAC).

  • ASM can perform one disk group rebalance at a time on a given instance. Therefore, if you have initiated multiple rebalances on different disk groups, then Oracle processes this operation serially. However, you can initiate rebalances on different disk groups on different nodes in parallel.

  • Rebalancing continues across a failure of the ASM instance performing the rebalance.

  • The REBALANCE clause (with its associated POWER and WAIT/NOWAIT keywords) can also be used in ALTER DISKGROUP commands that add, drop, or resize disks.

    Note:

    Oracle will restart the processing of an ongoing rebalance operation if the storage configuration changes. Furthermore, if the next rebalance operation fails because of a user error, then you may need to perform a manual rebalance.

Example: Manually Rebalancing a Disk Group

The following example manually rebalances the disk group dgroup2. The command does not return until the rebalance operation is complete.

ALTER DISKGROUP dgroup2 REBALANCE POWER 5 WAIT;

For more information about rebalancing operations, refer to "Tuning Rebalance Operations".

Tuning Rebalance Operations

If the POWER clause is not specified in an ALTER DISKGROUP statement, or when rebalance is implicitly run by adding or dropping a disk, then the rebalance power defaults to the value of the ASM_POWER_LIMIT initialization parameter. You can adjust the value of this parameter dynamically.
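For example, a statement similar to the following sets the default rebalance power for the ASM instance to 5:

ALTER SYSTEM SET ASM_POWER_LIMIT = 5;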

The higher the power limit, the more quickly a rebalance operation can complete. Rebalancing takes longer with lower power values, but consumes fewer processing and I/O resources which are shared by other applications, such as the database.

The default value of 1 minimizes disruption to other applications. The appropriate value is dependent on your hardware configuration, performance requirements, and availability requirements.

If a rebalance is in progress because a disk is manually or automatically dropped, then increasing the power of the rebalance shortens the time frame during which redundant copies of that data on the dropped disk are reconstructed on other disks.

The V$ASM_OPERATION view provides information for adjusting ASM_POWER_LIMIT and the resulting power of rebalance operations. The V$ASM_OPERATION view also gives an estimate in the EST_MINUTES column of the amount of time remaining for the rebalance operation to complete. You can see the effect of changing the rebalance power by observing the change in the time estimate.

See Also:

"Manually Rebalancing Disk Groups" for more information.

ASM Disk Discovery

Disk discovery is the mechanism used to find the operating system names for disks ASM can access. It is used to find all the disks that comprise a disk group to be mounted, the disks an administrator wants to add to a disk group, or the disks the administrator might consider adding to a disk group. This section contains the following topics:

For additional information about disk discovery and the ASM_DISKSTRING initialization parameter, refer to "ASM_DISKSTRING".

How A Disk is Discovered

While an ASM instance is initialized, ASM discovers and examines the contents of all of the disks that are in the paths that you designated with values in the ASM_DISKSTRING initialization parameter. Disk discovery also occurs when you:

  • Run the ALTER DISKGROUP...ADD DISK and ALTER DISKGROUP...RESIZE DISK commands

  • Query the V$ASM_DISKGROUP and V$ASM_DISK views

After ASM successfully discovers a disk, the disk appears in the V$ASM_DISK view. Disks that belong to a disk group, that is, disks that have a disk group name in the disk header, show a status of MEMBER. Disks that were discovered, but that have not yet been assigned to a disk group, have a status of either CANDIDATE or PROVISIONED.

The PROVISIONED status implies that an additional platform-specific action has been taken by an administrator to make the disk available for ASM. For example, on Windows computers, the administrator might have used asmtool or asmtoolg to stamp the disk with a header. On Linux computers, the administrator might have used ASMLIB to prepare the disk for ASM.

The following SQL query shows one candidate and six member disks:

SELECT name, header_status, path FROM V$ASM_DISK;

NAME      HEADER_STATUS PATH
--------- ------------- ---------------------
          CANDIDATE     /dev/rdsk/disk07
DISK06    MEMBER        /dev/rdsk/disk06
DISK05    MEMBER        /dev/rdsk/disk05
DISK04    MEMBER        /dev/rdsk/disk04
DISK03    MEMBER        /dev/rdsk/disk03
DISK02    MEMBER        /dev/rdsk/disk02
DISK01    MEMBER        /dev/rdsk/disk01

7 rows selected.

Disk Discovery Rules

The rules for discovering ASM disks are as follows:

  • ASM can discover up to 10,000 disks. That is, if more than 10,000 disks match the ASM_DISKSTRING initialization parameter, then ASM discovers only the first 10,000.

  • ASM only discovers disk partitions. ASM does not discover partitions that include the partition table.

    Note:

    ASM does not discover a disk that contains an operating system partition table, even if the disk is in an ASM disk string search path and ASM has read and write permission for the disk.
  • When adding a disk, the FORCE option must be used if ASM recognizes that the disk was managed by Oracle. Such a disk appears in the V$ASM_DISK view with a status of FOREIGN. In this case, you can only add the disk to a disk group by using the FORCE keyword.

In addition, ASM identifies the following configuration errors during discovery:

  • Multiple paths to the same disk

    In this case, if the disk is part of a disk group, then disk group mount fails. If the disk is being added to a disk group with the ADD DISK or CREATE DISKGROUP command, then the command fails. To correct the error, adjust the ASM_DISKSTRING value so that ASM will not discover multiple paths to the same disk. Or if you are using multipathing software, then ensure that you include only the pseudo-device name in the ASM_DISKSTRING value. See "ASM and Multipathing".

  • Multiple ASM disks with the same disk header

    This can be caused by having copied one disk onto another. In this case, the disk group mount operation fails.

Improving Disk Discovery Time

The value for the ASM_DISKSTRING initialization parameter is an operating system–dependent value that ASM uses to limit the set of paths that the discovery process uses to search for disks. When a new disk is added to a disk group, each ASM instance that has the disk group mounted must be able to discover the new disk using its ASM_DISKSTRING.

In many cases, the default value (NULL) is sufficient. Using a more restrictive value might reduce the time required for ASM to perform discovery, and thus improve disk group mount time or the time for adding a disk to a disk group. You might need to dynamically change the ASM_DISKSTRING value before adding a disk so that the new disk is discovered through this parameter.
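For example, a statement similar to the following (the search paths are hypothetical) restricts discovery to two sets of devices:

ALTER SYSTEM SET ASM_DISKSTRING = '/devices/diska*', '/devices/diskb*';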

Note that the default value of ASM_DISKSTRING might not find all disks in all situations. If your site is using a third-party vendor ASMLIB, then the vendor might have discovery string conventions that you must use for ASM_DISKSTRING. In addition, if your installation uses multipathing software, then the software might place pseudo-devices in a path that is different from the operating system default. See "ASM and Multipathing" and consult the multipathing vendor documentation for details.

Managing Capacity in Disk Groups

When ASM provides redundancy, such as when you create a disk group with NORMAL or HIGH redundancy, you must have sufficient capacity in each disk group to manage a re-creation of data that is lost after a failure of one or two failure groups. After one or more disks fail, the process of restoring redundancy for all data requires space from the surviving disks in the disk group. If not enough space remains, then some files might end up with reduced redundancy.

Reduced redundancy means that one or more extents in the file are not mirrored at the expected level. For example, a reduced redundancy file in a high redundancy disk group has at least one file extent with two or fewer total copies of the extent instead of three. In the case of unprotected files, data extents could be missing altogether. Other causes of reduced redundancy files are disks running out of space or an insufficient number of failure groups. The REDUNDANCY_LOWERED column in the V$ASM_FILE view provides information about files with reduced redundancy.
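For example, a query similar to the following lists each file and whether its redundancy has been lowered:

SELECT group_number, file_number, type, redundancy_lowered
  FROM V$ASM_FILE;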

The following guidelines help ensure that you have sufficient space to restore full redundancy for all disk group data after the failure of one or more disks.

The V$ASM_DISKGROUP view contains the following columns that contain information to help you manage capacity:

  • REQUIRED_MIRROR_FREE_MB indicates the amount of space that must be available in the disk group to restore full redundancy after the worst failure that can be tolerated by the disk group.

  • USABLE_FILE_MB indicates the amount of free space, adjusted for mirroring, that is available for new files.

The results from the following query show capacity metrics for a normal redundancy disk group that consists of six 1 GB (1024 MB) disks, each in its own failure group:

SELECT name, type, total_mb, free_mb, required_mirror_free_mb, 
usable_file_mb FROM V$ASM_DISKGROUP;

NAME         TYPE     TOTAL_MB    FREE_MB REQUIRED_MIRROR_FREE_MB USABLE_FILE_MB
------------ ------ ---------- ---------- ----------------------- --------------
DISKGROUP1   NORMAL       6144       3768                    1024           1372

The REQUIRED_MIRROR_FREE_MB column shows that 1 GB of extra capacity must be available to restore full redundancy after one or more disks fail. Note that the first three numeric columns in the query results are raw numbers. That is, they do not take redundancy into account. Only the last column is adjusted for normal redundancy. Note in the example that:

FREE_MB - REQUIRED_MIRROR_FREE_MB = 2 * USABLE_FILE_MB

3768 - 1024 = 2 * 1372 = 2744

Negative Values of USABLE_FILE_MB

Due to the relationship between FREE_MB, REQUIRED_MIRROR_FREE_MB, and USABLE_FILE_MB, USABLE_FILE_MB can become negative. Although this is not necessarily a critical situation, it does mean that:

  • Depending on the value of FREE_MB, you may not be able to create new files.

  • The next failure might result in files with reduced redundancy.

If USABLE_FILE_MB becomes negative, it is strongly recommended that you add more space to the disk group as soon as possible.

ASM Disk Group Redundancy

This section contains the following topics:

ASM Mirroring and Failure Groups

If you specify mirroring for a file, then ASM automatically stores redundant copies of the file extents in separate failure groups. Failure groups apply only to normal and high redundancy disk groups. You can define the failure groups for each disk group when you create or alter the disk group.

There are three types of disk groups based on the ASM redundancy level. Table 4-1 lists the types with their supported and default mirroring levels. The default mirroring levels indicate the mirroring level with which each file is created unless a different mirroring level is designated.

Table 4-1 Mirroring Options for ASM Disk Group Types

Disk Group Type        Supported Mirroring Levels                Default Mirroring Level
---------------------  ----------------------------------------  -----------------------
External redundancy    Unprotected (none)                        Unprotected
Normal redundancy      Two-way, three-way, unprotected (none)    Two-way
High redundancy        Three-way                                 Three-way


The redundancy level controls how many disk failures are tolerated without dismounting the disk group or losing data. Each file is allocated based on its own redundancy, but the default comes from the disk group. The redundancy levels are:

  • External redundancy

    ASM does not provide mirroring redundancy and relies on the storage system to provide RAID functionality. Any write error causes a forced dismount of the disk group. All disks must be located to successfully mount the disk group.

  • Normal redundancy

    ASM provides two-way mirroring. By default all files are mirrored so that there are two copies of every data extent. A loss of one ASM disk is tolerated.

  • High redundancy

    ASM provides triple mirroring by default. A loss of two ASM disks in different failure groups is tolerated.

Failure groups enable the mirroring of metadata and user data. System reliability can diminish if your environment has an insufficient number of failure groups.

This section contains these topics:

ASM Failure Groups

Failure groups are used to store mirror copies of data. When ASM allocates an extent for a normal redundancy file, ASM allocates a primary copy and a secondary copy. ASM chooses the disk on which to store the secondary copy so that it is in a different failure group than the primary copy. Each copy is on a disk in a different failure group so that the simultaneous failure of all disks in a failure group does not result in data loss.

A failure group is a subset of the disks in a disk group, which could fail at the same time because they share hardware. The failure of common hardware must be tolerated. For example, four drives that are in a single removable tray of a large JBOD array should be in the same failure group because the tray could be removed, making all four drives fail at the same time. Drives in the same cabinet could be in multiple failure groups if the cabinet has redundant power and cooling so that it is not necessary to protect against failure of the entire cabinet. However, ASM mirroring is not intended to protect against a fire in the computer room that destroys the entire cabinet.

There are always failure groups even if they are not explicitly created. If you do not specify a failure group for a disk, then Oracle automatically creates a new failure group containing just that disk. A normal redundancy disk group must contain at least two failure groups. A high redundancy disk group must contain at least three failure groups. However, Oracle recommends using several failure groups. A small number of failure groups, or failure groups of uneven capacity, can create allocation problems that prevent full use of all of the available storage.

How ASM Manages Disk Failures

Depending on the redundancy level of a disk group and how you define failure groups, the failure of one or more disks could result in either of the following:

  • The disks are first taken offline and then automatically dropped. In this case, the disk group remains mounted and serviceable. In addition, because of mirroring, all of the disk group data remains accessible. After the disk drop operation, ASM performs a rebalance to restore full redundancy for the data on the failed disks.

  • The entire disk group is automatically dismounted, which means loss of data accessibility.

Guidelines for Using Failure Groups

The following are guidelines for using failure groups:

  • Each disk in a disk group can belong to only one failure group.

  • Failure groups should all be of the same size. Failure groups of different sizes may lead to reduced availability.

  • ASM requires at least two failure groups to create a normal redundancy disk group and at least three failure groups to create a high redundancy disk group.

Failure Group Frequently Asked Questions

This section discusses frequently asked questions about failure group under the following topics:

How Many Failure Groups Should I Create?

Choosing the number of failure groups to create depends on the types of failures that need to be tolerated without data loss. For small numbers of disks, such as fewer than 20, it is usually best to use the default failure group creation that puts every disk in its own failure group.

Using the default failure group creation for small numbers of disks is also applicable for large numbers of disks where your main concern is disk failure. For example, a disk group might be configured from several small modular disk arrays. If the system needs to continue operating when an entire modular array fails, then a failure group should consist of all of the disks in one module. If one module fails, then all of the data on that module is relocated to other modules to restore redundancy. Disks should be placed in the same failure group if they depend on a common piece of hardware whose failure needs to be tolerated with no loss of availability.

How are Multiple Failure Groups Recovered after Simultaneous Failures?

A simultaneous failure can occur if there is a failure of a piece of hardware used by multiple failure groups. This type of failure usually forces a dismount of the disk group if all disks are unavailable.

When Should External, Normal, or High Redundancy Be Used?

ASM mirroring runs on the database server, so Oracle recommends that you off-load this processing to the storage hardware RAID controller by using external redundancy. You can use normal redundancy in the following scenarios:

  • The storage system does not have a RAID controller

  • Mirroring across storage arrays

  • Extended cluster configurations

In general, ASM mirroring is the Oracle alternative to third party logical volume managers. ASM mirroring eliminates the need to deploy additional layers of software complexity in your Oracle database environment.

ASM Fast Mirror Resync

Restoring the redundancy of an ASM disk group after a transient disk path failure can be time consuming. This is especially true if the recovery process requires rebuilding an entire ASM failure group. ASM fast mirror resync significantly reduces the time to resynchronize a failed disk in such situations. When you replace the failed disk, ASM can quickly resynchronize the ASM disk extents.

Note:

To use this feature, the disk group compatibility attributes must be set to 11.1 or higher. For more information, refer to "Disk Group Compatibility".

Any problems that make a failure group temporarily unavailable are considered transient failures that can be recovered by the ASM fast mirror resync feature. Disk path malfunctions, such as cable failures, host bus adapter failures, controller failures, or disk power supply interruptions, can cause transient failures.

ASM fast resync keeps track of pending changes to extents on an OFFLINE disk during an outage. The extents are resynced when the disk is brought back online.

By default, ASM drops a disk 3.6 hours after it is taken offline. You can set the DISK_REPAIR_TIME attribute to prevent this operation by specifying a time interval to repair the disk and bring it back online. The time can be specified in units of minutes (m or M) or hours (h or H). If you omit the unit, then the default unit is hours. If the attribute is not set explicitly, then the default value is 3.6h. Note that this default applies to disks that have been set to OFFLINE mode without an explicit DROP AFTER clause. The default DISK_REPAIR_TIME attribute value of 3.6h should be adequate for most environments.

The elapsed time (since the disk was set to OFFLINE mode) is incremented only when the disk group containing the offline disks is mounted. The REPAIR_TIMER column of V$ASM_DISK shows the amount of time left (in seconds) before an offline disk is dropped. After the specified time has elapsed, ASM drops the disk. You can override this attribute with an ALTER DISKGROUP DISK OFFLINE statement and the DROP AFTER clause.

If an offline disk is taken offline for a second time, then the elapsed time is reset and restarted. If another time is specified with the DROP AFTER clause for this disk, the first value is overridden and the new value applies. A disk that is in OFFLINE mode cannot be dropped with an ALTER DISKGROUP DROP DISK statement; an error is returned if attempted. If for some reason the disk needs to be dropped before the repair time has expired (such as when the disk cannot be repaired), the disk can be dropped immediately by issuing a second OFFLINE statement with a DROP AFTER clause specifying 0h or 0m.
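For example, a query similar to the following shows the mode status and the remaining repair time (in seconds) for each disk:

SELECT name, mode_status, repair_timer FROM V$ASM_DISK;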

You can use ALTER DISKGROUP to set the DISK_REPAIR_TIME attribute to a specified hour or minute value, such as 4.5 hours or 270 minutes. For example:

ALTER DISKGROUP dg01 SET ATTRIBUTE 'disk_repair_time' = '4.5h';
ALTER DISKGROUP dg01 SET ATTRIBUTE 'disk_repair_time' = '270m';

After you repair the disk, run the SQL statement ALTER DISKGROUP DISK ONLINE. This statement brings a repaired disk back online to enable writes so that no new writes are missed. This statement also starts a procedure to copy all of the extents that are marked as stale from their redundant copies.

If a disk goes offline when the ASM instance is in rolling upgrade mode, the disk remains offline until the rolling upgrade has ended and the timer for dropping the disk is stopped until the ASM cluster is out of rolling upgrade mode. See "Using ASM Rolling Upgrades". Examples of taking disks offline and bringing them online follow.

The following example takes disk D3_0001 offline and drops it after five minutes.

ALTER DISKGROUP D3 OFFLINE DISK D3_0001 DROP AFTER 5m;

The next example takes the disk D3_0001 offline and drops it after the time period designated by DISK_REPAIR_TIME elapses:

ALTER DISKGROUP D3 OFFLINE DISK D3_0001;

This example takes all of the disks in failure group F2 offline and drops them after the time period designated by DISK_REPAIR_TIME elapses. If you had used a DROP AFTER clause, then the disks would be dropped after the specified time:

ALTER DISKGROUP D3 OFFLINE DISKS IN FAILGROUP F2;

The next example brings all of the disks in failure group F2 online:

ALTER DISKGROUP D3 ONLINE DISKS IN FAILGROUP F2;

This example brings only disk D3_0001 online:

ALTER DISKGROUP D3 ONLINE DISK D3_0001;

This example brings all of the disks in disk group D3 online:

ALTER DISKGROUP D3 ONLINE ALL;

Querying the V$ASM_OPERATION view after you run any of these types of online and offline statements displays the current operation that you are performing. For example, the query:

SELECT GROUP_NUMBER, OPERATION, STATE FROM V$ASM_OPERATION;

displays output similar to the following:

GROUP_NUMBER OPERA STAT 
------------ ----- ---- 
           1 ONLIN RUN  
           1 ONLIN RUN 

See Also:

Oracle Database SQL Language Reference for information about ALTER DISKGROUP, CREATE DISKGROUP, and ASM disk group attributes

Preferred Read Failure Groups

When you configure ASM failure groups, it might be more efficient for a node to read from an extent that is closest to the node, even if that extent is a secondary extent. In other words, you can configure ASM to read from a secondary extent if that extent is closer to the node instead of ASM reading from the primary copy which might be farther from the node. Using preferred read failure groups is most useful in extended clusters.

To use this feature, Oracle recommends that you configure at least one mirrored extent copy from a disk that is local to a node in an extended cluster. However, a failure group that is preferred for one instance might be remote to another instance in the same Oracle RAC database. The parameter setting for preferred read failure groups is instance specific.

Note:

By default, when you create a disk group, every disk in the disk group belongs to one failure group. Oracle does not recommend that you configure more than one preferred read failure group for each instance in a disk group. If you configure more than one preferred read failure group for each instance, then Oracle writes messages to an alert log.

See Also:

Oracle Real Application Clusters Administration and Deployment Guide for information about configuring preferred read disks in extended clusters

Configuring and Administering Preferred Read Failure Groups

To configure this feature, set the ASM_PREFERRED_READ_FAILURE_GROUPS initialization parameter to specify a list of failure group names as preferred read disks. For more information about this initialization parameter, refer to "ASM_PREFERRED_READ_FAILURE_GROUPS".

Set the parameter where diskgroup_name is the name of the disk group and failure_group_name is the name of the failure group, separating these variables with a period. ASM ignores the name of a failure group that you use in this parameter setting if the failure group does not exist in the named disk group. You can append multiple values using commas as a separator as follows:

ASM_PREFERRED_READ_FAILURE_GROUPS = diskgroup_name.failure_group_name,...

In an extended cluster, the failure groups that you specify with settings for the ASM_PREFERRED_READ_FAILURE_GROUPS parameter should only contain disks that are local to the instance. For normal redundancy disk groups, there should be only one failure group on each site of the extended cluster.
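For example, in a two-site extended cluster with a disk group named data (the disk group, failure group, and instance names here are hypothetical), each ASM instance could prefer its local failure group:

ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = 'data.sitea' SID = '+ASM1';
ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = 'data.siteb' SID = '+ASM2';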

If there is more than one mirrored copy and you have set a value for the ASM_PREFERRED_READ_FAILURE_GROUPS parameter, then ASM first reads the copy that resides on a preferred read disk. If that read fails, then ASM attempts to read from the next mirrored copy that might not be on a preferred read disk.

Having more than one failure group on one site can cause the loss of access to the disk group by the other sites if the site containing more than one failure group fails. In addition, by having more than one failure group on a site, an extent might not be mirrored to another site. This can diminish the read performance of the failure group on the other site.

For example, for a normal redundancy disk group, if a site contains two failure groups of a disk group, then ASM might put both mirror copies of an extent on the same site. In this configuration, ASM cannot protect against data loss from a site failure.

You should configure at most two failure groups on a site for a high redundancy disk group. If there are three sites in an extended cluster, for the same reason previously mentioned, then you should only create one failure group.

For a two-site extended cluster, a normal redundancy disk group only has two failure groups. In this case, you can only specify one failure group as a preferred read failure group for each instance.

You can use views to identify preferred read failure groups, such as the V$ASM_DISK view that shows whether a disk is a preferred read disk by the value in the PREFERRED_READ column. You can also use V$ASM_DISK to verify whether local disks in an extended cluster are preferred read disks. Use the ASM disk I/O statistics to verify that read operations are using the preferred read disks that you configured.
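For example, a query similar to the following shows which disks are preferred read disks:

SELECT name, failgroup, preferred_read FROM V$ASM_DISK;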

If a disk group is not optimally configured for an extended cluster, then ASM records warning messages in the alert logs. To identify specific performance issues with ASM preferred read failure groups, use the V$ASM_DISK_IOSTAT view. This view displays disk I/O statistics for each ASM client. You can also query the V$ASM_DISK_IOSTAT view on a database instance. However, this query only shows the I/O statistics for the database instance. In general, optimal preferred read extended cluster configurations balance performance with disk group availability.

See Also:

Oracle Database Reference for details about the V$ASM* dynamic performance views

Performance and Scalability Considerations for Disk Groups

This section discusses the following considerations for evaluating disk group performance:

Determining the Number of Disk Groups

Use the following criteria to determine the number of disk groups to create:

  • Disks in a given disk group should have similar size and performance characteristics. If you have several different types of disks in terms of size and performance, then create several disk groups that contain similar characteristics.

  • Create separate disk groups for your database files and flash recovery area for backup files. This configuration allows fast recovery in case of a disk group failure.

Performance Characteristics When Grouping Disks

ASM load balances the file activity by uniformly distributing file extents across all of the disks in a disk group. For this technique to be effective it is important that disks in a disk group be of similar performance characteristics. For example, the newest and fastest disks might reside in a disk group reserved for the database work area, and slower drives could reside in a disk group reserved for the flash recovery area.

There might be situations where it is acceptable to temporarily have disks of different sizes and performance co-existing in a disk group. This would be the case when migrating from an old set of disks to a new set of disks. The new disks would be added and the old disks dropped. As the old disks are dropped, their storage is migrated to the new disks while the disk group is online.

ASM Storage Limits

ASM has the following limits:

  • 63 disk groups in a storage system

  • 10,000 ASM disks in a storage system

  • 4 PB maximum storage for each ASM disk

  • 40 exabyte maximum storage for each storage system

  • 1 million files for each disk group

Oracle Database supports datafile sizes up to 128 TB. ASM supports file sizes greater than 128 TB in any redundancy mode. This provides near unlimited capacity for future growth. The ASM file size limits are as follows:

  • External redundancy - 140 PB

  • Normal redundancy - 42 PB

  • High redundancy - 15 PB

Disk Group Compatibility

This section describes disk group compatibility under the following topics:

Overview of Disk Group Compatibility

The disk group compatibility feature enables environments to interoperate when they use disk groups from both Oracle Database 10g and Oracle Database 11g. Compatibility settings that are set to previous releases enable database clients to access disk groups of higher releases. For example, Oracle Database 10g clients could access an Oracle Database 11g ASM disk group.

The ASM and Oracle Database disk group attribute settings for compatibility determine the minimum ASM and Oracle Database software version numbers that a system can use. For instance, if ASM compatibility is 11.1, and Oracle Database compatibility is 10.1, then the ASM software version must be at least 11.1, and the Oracle Database client software version must be at least 10.1. The two attribute settings are COMPATIBLE.ASM and COMPATIBLE.RDBMS.

The software version of ASM determines the default compatibility of newly created disk groups. You can override the disk group compatibility default setting when you create disk groups with the CREATE DISKGROUP SQL statement. The ALTER DISKGROUP SQL statement can update the compatibility settings for existing disk groups. The compatibility settings for a disk group can only be advanced; you cannot revert to a lower compatibility setting.

Advancing the disk group Oracle Database and ASM compatibility settings enables you to use the new ASM features that are available in latest release. For example, a disk group with the RDBMS and ASM compatibility set to 11.1 can take advantage of new Oracle Database 11g features, such as variable extent sizes and fast mirror resync.

Note:

The disk group compatibility settings determine whether your environment can use the latest ASM features.

Disk Group Compatibility Attributes

The COMPATIBLE.ASM and COMPATIBLE.RDBMS disk group attributes specify the compatibility settings for the ASM instance and Oracle Database, respectively. To enable disk group compatibility, you must first set the COMPATIBLE.ASM attribute and then set the COMPATIBLE.RDBMS attribute. These attributes are described under the following topics:

COMPATIBLE.ASM

The value for the COMPATIBLE.ASM attribute determines the minimum software version for any ASM instance that uses a disk group. This setting also determines the format of the data structures for the ASM metadata on the disk. The format of the file contents is determined by the database instance. For ASM in Oracle Database 11g, 10.1 is the default setting for the COMPATIBLE.ASM attribute. To advance disk group compatibility, first set COMPATIBLE.ASM before setting COMPATIBLE.RDBMS.

See Also:

Oracle Database Reference for more information about the COMPATIBLE initialization parameter

Table 4-2 shows the valid combinations of the COMPATIBLE.ASM and the COMPATIBLE.RDBMS attributes, the valid ASM and database versions for each combination, and the features that each combination can enable.

Table 4-2 Disk Group Compatibility Attribute Settings Matrix

COMPATIBLE.ASM  COMPATIBLE.RDBMS  ASM Instance Version  DB Instance Version  Features Enabled
--------------  ----------------  --------------------  -------------------  --------------------------------------
10.1            10.1              >=10.1                >=10.1               ASM Disk Group 10g R1 enabled
10.1            10.2              Not Valid             Not Valid            Not Valid
10.1            11.1              Not Valid             Not Valid            Not Valid
10.2            10.1              >=10.2                >=10.1               ASM Disk Group 10g R1 enabled
10.2            10.2              >=10.2                >=10.2               ASM Disk Group 10g R2 enabled
10.2            11.1              Not Valid             Not Valid            Not Valid
11.1            10.1              >=11.1                >=10.1               ASM Disk Group 10g R1 enabled
11.1            10.2              >=11.1                >=10.2               ASM Disk Group 10g R2 enabled
11.1            11.1              >=11.1                >=11.1               ASM Disk Group 11g R1 enabled: fast
                                                                             mirror resync, variable size extents,
                                                                             preferred mirror read, different AU
                                                                             sizes, and ASM/RDBMS compatibility
                                                                             attributes


When setting the values for the COMPATIBLE.RDBMS and COMPATIBLE.ASM attributes, specify at least the first two digits of a valid Oracle Database release number. For example, you can specify compatibility as '10.2' or '11.1'; Oracle assumes that any missing version number digits are zeros. See "Setting Disk Group Compatibility".

Note:

Advancing the values for on-disk compatibility attributes is an irreversible operation. To revert to the previous value, you must create a new disk group with the old compatibility attributes and then restore the database files that were in the disk group.

In addition to appearing in the V$ASM_ATTRIBUTE view, the compatibility attribute values also appear in the columns labelled DATABASE_COMPATIBILITY and COMPATIBILITY in the V$ASM_DISKGROUP view.
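For example, a query similar to the following displays the compatibility settings for each mounted disk group:

SELECT name, compatibility, database_compatibility FROM V$ASM_DISKGROUP;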

COMPATIBLE.RDBMS

The value for the COMPATIBLE.RDBMS attribute for all of the disk groups that are used by a database must be lower than or equal to the setting of the COMPATIBLE database initialization parameter. For ASM in Oracle Database 11g, 10.1 is the default setting for the COMPATIBLE.RDBMS attribute. For example, if the database COMPATIBLE initialization parameter is set to 11.1.0, then COMPATIBLE.RDBMS can be set to any value between 10.1 and 11.1 inclusively.

Caution:

If you advance the COMPATIBLE.RDBMS attribute, then you cannot revert to the previous setting. Therefore, before advancing the COMPATIBLE.RDBMS attribute, ensure that the values for the COMPATIBLE initialization parameter for all of the databases that use the disk group are set to at least the new setting for COMPATIBLE.RDBMS before you advance the attribute value.

Note:

The database initialization parameter COMPATIBLE enables you to use a new release of Oracle, while at the same time guaranteeing backward compatibility with an earlier release.

Setting Disk Group Compatibility

You can set disk group compatibility as shown in Table 4-2 with the CREATE DISKGROUP or ALTER DISKGROUP SQL statement.

Using CREATE DISKGROUP with Compatibility Attributes

You can specify the compatibility settings for a disk group with the CREATE DISKGROUP statement when creating the disk group.

The following example creates a normal redundancy disk group asmdskgrp1 with ASM compatibility set to 10.2 and database compatibility set to the default (assuming that the COMPATIBLE.RDBMS default is less than or equal to 10.2):

CREATE DISKGROUP asmdskgrp1 DISK '/dev/raw/*' 
       ATTRIBUTE 'compatible.asm' = '10.2';

The following example creates a normal redundancy disk group asmdskgrp2 with the ASM compatibility set to 11.1 and the database compatibility set to the default (assuming that the COMPATIBLE.RDBMS default is less than or equal to 11.1):

CREATE DISKGROUP asmdskgrp2 DISK '/dev/raw/*' 
       ATTRIBUTE 'compatible.asm' = '11.1';

The following example creates a normal redundancy disk group asmdskgrp3 with both the ASM and the database compatibility set to 11.1:

CREATE DISKGROUP asmdskgrp3 DISK '/dev/raw/*' 
       ATTRIBUTE 'compatible.rdbms' = '11.1', 'compatible.asm' = '11.1';

Using ALTER DISKGROUP with Compatibility Attributes

After a disk group has been created, you can use the ALTER DISKGROUP SQL statement to change the compatibility attributes. Using the ALTER DISKGROUP SQL statement ensures that Oracle can advance the compatibility of the specified disk group before committing the change. All of the affected databases and file systems should be online when running ALTER DISKGROUP to ensure that advancing compatibility does not reduce the database and file system functionality.

The following example advances the database compatibility of the disk group asmdskgrp4 to 11.1. This example assumes that the ASM compatibility is already advanced to 11.1.

ALTER DISKGROUP asmdskgrp4 SET ATTRIBUTE 'compatible.rdbms' = '11.1';

The following example advances the ASM compatibility for disk group asmdskgrp5 to 11.1. An ASM instance must be at release 11.1 or higher to access the asmdskgrp5 disk group.

ALTER DISKGROUP asmdskgrp5 SET ATTRIBUTE 'compatible.asm' = '11.1';

See Also:

Oracle Database SQL Language Reference for more information about the disk group compatibility SQL statements

Considerations When Setting Disk Group Compatibility

When changing the disk group compatibility settings, there are some considerations that you should be aware of.

  • If a backup of a disk group was made with the ASMCMD md_backup command before the compatibility settings were changed, then that full backup file does not reflect the updated disk group. Restoring the disk group from that full backup would set the disk group to the previous compatibility settings.

    You can still use the previous backup to restore some metadata. For example, you could create a new disk group and use the backup file to restore templates and alias directories metadata. For information about using md_backup and md_restore, refer to "md_backup Command" and "md_restore Command".

  • The disk group compatibility settings should be the same for all replication environments.

  • The compatibility settings can only be advanced and the settings are irreversible.

  • Not all combinations of the disk group compatibility settings are valid and some features are not enabled in various combinations. Table 4-2 lists the possible COMPATIBLE.ASM and COMPATIBLE.RDBMS disk group settings and the features enabled.

Considerations When Setting Disk Group Compatibility in Replicated Environments

If you advance disk group compatibility, then you could enable the creation of files that are too large to be managed by a previous Oracle database release. You need to be aware of the file size limits because replicated sites cannot continue using the software from a previous release to manage these large files.

Table 4-3 shows the maximum ASM file sizes. Note that memory consumption is identical in all cases: 280 MB. For Oracle Database 11g, the number of extent pointers for each size is 16800, with the largest size using the remainder.

Table 4-3 Maximum ASM File Size

Redundancy  10.1.0.4  11g 1/4/16/64  11g 1/8/64  11g 64
----------  --------  -------------  ----------  ------
External    35 TB     140 PB         140 PB      140 PB
Normal      5.8 TB    23 PB          23 PB       23 PB
High        3.9 TB    15 PB          15 PB       15 PB


Table 4-3 shows that Oracle Database 10g can only support a file size of up to 35 TB for external redundancy. If you advance the compatibility to 11.1, then a file can grow beyond 35 TB, making the file unusable in replicated and disaster recovery sites.

Mounting and Dismounting Disk Groups

Disk groups that are specified in the ASM_DISKGROUPS initialization parameter are mounted automatically at ASM instance startup. This makes them available to all database instances running on the same node as ASM. The disk groups are dismounted at ASM instance shutdown. ASM also automatically mounts a disk group when you initially create it, and dismounts a disk group if you drop it.

There might be times when you want to mount or dismount disk groups manually. For these actions, use the ALTER DISKGROUP...MOUNT or ALTER DISKGROUP...DISMOUNT statement. You can mount or dismount disk groups by name, or specify ALL.

If you try to dismount a disk group that contains open files, the statement fails unless you also specify the FORCE clause.
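
For example, assuming a disk group named dgroup1 that still has open files, the following statement forces the dismount:

ALTER DISKGROUP dgroup1 DISMOUNT FORCE;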

In a clustered ASM environment, a disk group mounted in RESTRICTED mode is mounted in single-instance exclusive mode: no other ASM instance in that cluster can mount that disk group, and the disk group is not usable by any ASM client. Use this mode to perform a fast rebalance.
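
The following sketch shows a restricted mount used for a fast rebalance; the disk group name data and the disk path are hypothetical:

ALTER DISKGROUP data MOUNT RESTRICTED;
ALTER DISKGROUP data ADD DISK '/devices/diska4';
ALTER DISKGROUP data DISMOUNT;
ALTER DISKGROUP data MOUNT;

Because no other instance or client can access the disk group while it is mounted in RESTRICTED mode, the rebalance triggered by the ADD DISK operation avoids the extent-locking messaging that is otherwise required between instances.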

Example: Dismounting Disk Groups

The following statement dismounts all disk groups that are currently mounted to the ASM instance:

ALTER DISKGROUP ALL DISMOUNT;

Example: Mounting Disk Groups

The following statement mounts disk group dgroup1:

ALTER DISKGROUP dgroup1 MOUNT;

Mounting Disk Groups Using the FORCE Option

For normal and high redundancy disk groups, you can use the FORCE option of the ALTER DISKGROUP statement's MOUNT clause to mount disk groups if there are sufficient ASM disks available. The disk group mount succeeds if ASM finds at least one complete set of extents in the disk group. If ASM determines that one or more disks are not available, then ASM takes those disks offline and drops the disks after the time specified by the DISK_REPAIR_TIME attribute expires.
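
You can control this grace period by setting the DISK_REPAIR_TIME disk group attribute. The following statement is a sketch; the disk group name data and the 4.5-hour value are hypothetical:

ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '4.5h';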

In clustered ASM environments, if an ASM instance is not the first instance to mount the disk group, then using the MOUNT FORCE statement fails. This is because the disks have been accessed by another instance and the disks are not locally accessible.

Use the FORCE option as in the following example where disk_group_name is the name of the disk group that you want to force mount:

ALTER DISKGROUP disk_group_name MOUNT FORCE;

See Also:

The Oracle Database SQL Language Reference for additional information about the ALTER DISKGROUP statement and the FORCE option

Checking the Internal Consistency of Disk Group Metadata

You can check the internal consistency of disk group metadata using the ALTER DISKGROUP statement with the CHECK keyword. You can use this statement to check specific files in a disk group, specific disks or all disks in a disk group, or specific failure groups within a disk group. The disk group must be mounted to perform these checks.

By default, the CHECK DISK GROUP clause verifies all of the metadata directories. ASM displays summary errors and writes the details about the errors in an alert log. The CHECK keyword performs the following operations:

  • Checks the consistency of the disk.

  • Cross checks all of the file extent maps and allocation tables for consistency.

  • Checks that the alias metadata directory and file directory are linked correctly.

  • Checks that the alias directory tree is linked correctly.

  • Checks that ASM metadata directories do not have unreachable allocated blocks.

The REPAIR | NOREPAIR clause specifies whether ASM attempts to repair errors that are found during the check. The default is REPAIR. Use the NOREPAIR clause to receive alerts about inconsistencies while preventing ASM from resolving the errors automatically. The following example statement checks for consistency in the metadata for all disks in the dgroup1 disk group:

ALTER DISKGROUP dgroup1 CHECK ALL;
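
To report inconsistencies without repairing them, append the NOREPAIR clause, as in the following sketch for the same disk group:

ALTER DISKGROUP dgroup1 CHECK ALL NOREPAIR;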

See Also:

The Oracle Database SQL Language Reference for additional information about the CHECK clause syntax

Dropping Disk Groups

The DROP DISKGROUP statement enables you to delete an ASM disk group and, optionally, all of its files. You can specify the INCLUDING CONTENTS clause if you also want to delete any files that might be contained in the disk group. The default is EXCLUDING CONTENTS, which provides syntactic consistency and prevents you from dropping the disk group if it has any contents.

The ASM instance must be started and the disk group must be mounted, with none of the disk group files open, for the DROP DISKGROUP statement to succeed. The statement does not return until the disk group has been dropped.

When you drop a disk group, ASM dismounts the disk group and removes the disk group name from the ASM_DISKGROUPS initialization parameter if a server parameter file is being used. If a text initialization parameter file is being used, and the disk group is mentioned in the ASM_DISKGROUPS initialization parameter, then you must remove the disk group name from the ASM_DISKGROUPS initialization parameter before the next time that you shut down and restart the ASM instance.
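
For example, if a text initialization parameter file is used, you would edit the ASM_DISKGROUPS entry yourself; the disk group names in this sketch are hypothetical:

# Before dropping dgroup1:
ASM_DISKGROUPS = DGROUP1, DGROUP2, DGROUP3

# After dropping dgroup1, edit the entry before the next shutdown and restart:
ASM_DISKGROUPS = DGROUP2, DGROUP3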

The following statement deletes dgroup1:

DROP DISKGROUP dgroup1;

After ensuring that none of the files contained in dgroup1 are open, ASM rewrites the header of each disk in the disk group to remove ASM formatting information. The statement does not specify INCLUDING CONTENTS, so the drop operation will fail if the disk group contains any files.

If you cannot mount a disk group but need to drop it, you can use the FORCE option of the DROP DISKGROUP statement. This command enables you to remove the headers on disks that belong to a disk group that cannot be mounted by any ASM instance, as in the following example where disk_group_name is the name of the disk group:

DROP DISKGROUP disk_group_name FORCE;

The disk group on which you perform this operation should not be mounted anywhere in the cluster. When you use the FORCE option, the ASM instance does not attempt to verify whether the disk group is being used by another ASM instance in the same storage subsystem.

Note:

Use the FORCE option only with extreme caution.

You can also drop a disk group with Oracle Enterprise Manager. See "Dropping Disk Groups".

Using Views to Obtain ASM Information

You can use the views in Table 4-4 to obtain information about ASM:

Table 4-4 ASM Dynamic Views

View Description

V$ASM_ALIAS

In an ASM instance, contains one row for every alias present in every disk group mounted by the ASM instance.

In a DB instance, contains no rows.

V$ASM_ATTRIBUTE

Displays one row for each attribute defined. In addition to attributes specified by CREATE DISKGROUP and ALTER DISKGROUP statements, the view may show other attributes that are created automatically.

V$ASM_CLIENT

In an ASM instance, identifies databases using disk groups managed by the ASM instance.

In a DB instance, contains one row for the ASM instance if the database has any open ASM files.

V$ASM_DISK

In an ASM instance, contains one row for every disk discovered by the ASM instance, including disks that are not part of any disk group.

In a DB instance, contains rows only for disks in the disk groups in use by that DB instance.

This view performs disk discovery every time it is queried.

V$ASM_DISK_IOSTAT

Displays information about disk I/O statistics for each ASM client.

In a DB instance, only the rows for that instance are shown.

V$ASM_DISK_STAT

In an ASM instance, contains the same columns as V$ASM_DISK, but to reduce overhead, does not perform a discovery when it is queried. It does not return information about any disks that are new to the storage system. For the most accurate data, use V$ASM_DISK instead.

V$ASM_DISKGROUP

In an ASM instance, describes a disk group (number, name, size-related information, state, and redundancy type).

In a DB instance, contains one row for every ASM disk group mounted by the local ASM instance.

This view performs disk discovery every time it is queried.

V$ASM_DISKGROUP_STAT

In an ASM instance, contains the same columns as V$ASM_DISKGROUP, but to reduce overhead, does not perform a discovery when it is queried. It does not return information about any disks that are new to the storage system. For the most accurate data, use V$ASM_DISKGROUP instead.

V$ASM_FILE

In an ASM instance, contains one row for every ASM file in every disk group mounted by the ASM instance.

In a DB instance, contains no rows.

V$ASM_OPERATION

In an ASM instance, contains one row for every active ASM long running operation executing in the ASM instance.

In a DB instance, contains no rows.

V$ASM_TEMPLATE

In an ASM or DB instance, contains one row for every template present in every disk group mounted by the ASM instance.
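
For example, the following minimal query reports the progress of active long-running operations, such as a rebalance, by selecting from V$ASM_OPERATION:

SELECT group_number, operation, state, power, sofar, est_work, est_minutes
FROM V$ASM_OPERATION;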


See Also:

Oracle Database Reference for details on all of these dynamic performance views