Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide
10g Release 2 (10.2) for AIX

Part Number B14201-01

3 Configuring Oracle Clusterware and Oracle Database Storage

This chapter describes the storage configuration tasks that you must complete before you start Oracle Universal Installer.

3.1 Reviewing Storage Options for Oracle Clusterware, Database, and Recovery Files

This section describes supported options for storing Oracle Clusterware files, Oracle Database files, and recovery files.

3.1.1 Overview of Storage Options

Use the following overview to help you select your storage option:

3.1.1.1 Overview of Oracle Clusterware Storage Options

There are two ways of storing Oracle Clusterware files:

  • A supported shared file system: The supported file system is:

    • General Parallel File System (GPFS): A cluster file system for AIX that provides concurrent file access

  • Raw partitions: Raw partitions are raw disks that are accessed either through a logical volume manager (LVM), or through non-LVM file systems.

3.1.1.2 Overview of Oracle Database and Recovery File Options

There are three ways of storing Oracle Database and recovery files:

  • Automatic Storage Management: Automatic Storage Management (ASM) is an integrated, high-performance database file system and disk manager for Oracle files.

  • A supported shared file system: The supported file system is:

    • General Parallel File System (GPFS): Note that if you intend to use GPFS for your data files, then you should create partitions large enough for the database files when you create partitions for Oracle Clusterware. If you intend to store Oracle Clusterware files on GPFS, then you must ensure that GPFS volume sizes are at least 500 MB each.

  • Raw partitions (database files only): A raw partition is required for each database file.


See Also:

For information about certified compatible storage options, refer to the Oracle Storage Compatibility Program (OSCP) Web site, which is at the following URL:

http://www.oracle.com/technology/deploy/availability/htdocs/oscp.html


3.1.1.3 General Storage Considerations

For all installations, you must choose the storage option that you want to use for Oracle Clusterware files and Oracle Database files. If you want to enable automated backups during the installation, then you must also choose the storage option that you want to use for recovery files (the flash recovery area). You do not have to use the same storage option for each file type.

For single-instance Oracle Database installations using Oracle Clusterware for failover, you must use GPFS, ASM, or shared raw disks if you do not want the failover processing to include dismounting and remounting disks.

The following table shows the storage options supported for storing Oracle Clusterware files, Oracle Database files, and Oracle Database recovery files. Oracle Database files include data files, control files, redo log files, the server parameter file, and the password file. Oracle Clusterware files include the Oracle Cluster Registry (OCR), a mirrored OCR file (optional), the Oracle Clusterware voting disk, and additional voting disk files (optional).


Note:

For the most up-to-date information about supported storage options for RAC installations, refer to the Certify pages on the OracleMetaLink Web site:
http://metalink.oracle.com

Storage Option                         File Types Supported
                                       OCR and Voting Disk   Oracle Software   Database   Recovery
Automatic Storage Management           No                    No                Yes        Yes
General Parallel File System (GPFS)    Yes                   Yes               Yes        Yes
Local storage                          No                    Yes               No         No
Raw Logical Volumes Managed by HACMP   Yes                   No                Yes        Yes

Use the following guidelines when choosing the storage options that you want to use for each file type:

  • You can choose any combination of the supported storage options for each file type as long as you satisfy any requirements listed for the chosen storage options.

  • Oracle recommends that you choose Automatic Storage Management (ASM) as the storage option for database and recovery files.

  • For Standard Edition RAC installations, ASM is the only supported storage option for database or recovery files.

  • You cannot use ASM to store Oracle Clusterware files, because these files must be accessible before any Oracle instance starts.

  • If you intend to use ASM with RAC, and you are configuring a new ASM instance, then you must ensure that your system meets the following conditions:

    • All nodes on the cluster have the release 2 (10.2) version of Oracle Clusterware installed

    • Any existing ASM instance on any node in the cluster is shut down

  • If you intend to upgrade an existing RAC database, or a RAC database with ASM instances, then you must ensure that your system meets the following conditions:

    • The RAC database or RAC database with ASM instance is running on the node from which the Oracle Universal Installer (OUI) and Database Configuration Assistant (DBCA) are run

    • The RAC database or RAC database with an ASM instance is running on the same nodes that you intend to make members of the new cluster installation. For example, if you have an existing RAC database running on a three-node cluster, then you must install the upgrade on all three nodes. You cannot upgrade only two nodes of the cluster, removing the third instance in the upgrade.


    See Also:

    Oracle Database Upgrade Guide for information about how to prepare for upgrading an existing database

  • If you do not have a storage option that provides external file redundancy, then you must configure at least three voting disk areas to provide voting disk redundancy.

3.1.1.4 After You Have Selected Disk Storage Options

When you have determined your disk storage options, you must perform the following tasks in the following order:

  1. Check for available shared storage with CVU.

     Refer to "Checking for Available Shared Storage with CVU".

  2. Configure shared storage for Oracle Clusterware files.

  3. Configure storage for Oracle Database files and recovery files.

3.1.2 Checking for Available Shared Storage with CVU

To check for all shared file systems available across all nodes on the cluster with GPFS, use the following command:

/mountpoint/clusterware/cluvfy/runcluvfy.sh comp ssa -n node_list

If you want to check the shared accessibility of a specific shared storage type to specific nodes in your cluster, then use the following command syntax:

/mountpoint/clusterware/cluvfy/runcluvfy.sh comp ssa -n node_list -s storageID_list

In the preceding syntax examples, the variable mountpoint is the mountpoint path of the installation media, the variable node_list is the list of nodes you want to check, separated by commas, and the variable storageID_list is the list of storage device IDs for the storage devices managed by the file system type that you want to check.

For example, if you want to check the shared accessibility from node1 and node2 of storage devices /dev/sdb and /dev/sdc, and your mountpoint is /dev/dvdrom/, then enter the following command:

/dev/dvdrom/clusterware/cluvfy/runcluvfy.sh comp ssa -n node1,node2 -s /dev/sdb,/dev/sdc

If you do not specify specific storage device IDs in the command, then the command searches for all available storage devices connected to the nodes on the list.
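As a quick sketch of assembling the arguments, the comma-separated node_list can be built from a plain list of host names. The host names below are assumptions, and the cluvfy invocation itself is shown commented out because it requires the mounted installation media:

```shell
# Build the comma-separated node list that runcluvfy.sh expects.
nodes="node1 node2"                      # hypothetical host names
node_list=$(echo "$nodes" | tr ' ' ',')
echo "$node_list"                        # node1,node2

# Requires the mounted installation media, so shown but not run:
# /dev/dvdrom/clusterware/cluvfy/runcluvfy.sh comp ssa -n "$node_list"
```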

3.2 Configuring Storage for Oracle Clusterware Files on a Supported Shared File System

Oracle Universal Installer (OUI) does not suggest a default location for the Oracle Cluster Registry (OCR) or the Oracle Clusterware voting disk. If you choose to create these files on a file system, then review the following sections to ensure that you meet the storage requirements for Oracle Clusterware files:

3.2.1 Requirements for Using a File System for Oracle Clusterware Files

To use a file system for Oracle Clusterware files, the file system must comply with the following requirements:

  • To use a cluster file system on AIX, you must use GPFS.

  • If you choose to place your Oracle Cluster Registry (OCR) files on a shared file system, then ensure that one of the following is true:

    • The disks used for the file system are on a highly available storage device (for example, a RAID device that implements file redundancy).

    • At least two file systems are mounted, and you use the features of Oracle Database 10g Release 2 (10.2) to provide redundancy for the OCR.

  • If you intend to use a shared file system to store database files, then you should ensure that you use at least two independent file systems, with the database files on one file system, and the recovery files on a different file system.

  • The oracle user must have write permissions to create the files in the path that you specify.


Note:

If you are upgrading from Oracle9i release 2, then you can continue to use the raw device or shared file that you used for the SRVM configuration repository instead of creating a new file for the OCR.

Use Table 3-1 to determine the minimum volume size for shared file systems:

Table 3-1 Shared File System Volume Size Requirements

File Types Stored                                        Number of Volumes   Volume Size
Oracle Clusterware files (OCR and voting disks)          1                   At least 120 MB for each volume
with external redundancy

Oracle Clusterware files (OCR and voting disks)          1                   At least 120 MB for each volume
with redundancy provided by Oracle

Redundant Oracle Clusterware files with redundancy       1                   At least 140 MB (100 MB for the mirrored OCR,
provided by Oracle (mirrored OCR and two additional                          and 20 MB each for the additional voting disks)
voting disks)

Oracle Database files                                    1                   At least 1.2 GB for each volume

Recovery files                                           1                   At least 2 GB for each volume
(Note: Recovery files must be on a different volume
than database files)

In Table 3-1, the total required volume size is cumulative. For example, to store all files on the shared file system, you should have at least 3.4 GB of storage available over a minimum of two volumes.
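As an arithmetic check of that cumulative figure, the per-file-type minimums from Table 3-1 can be summed (values in MB, using the redundant Oracle Clusterware row):

```shell
# Minimum volume sizes from Table 3-1, in MB.
clusterware=140   # mirrored OCR plus two additional voting disks
database=1200     # at least 1.2 GB for database files
recovery=2048     # at least 2 GB for recovery files
total=$((clusterware + database + recovery))
echo "${total} MB"   # 3388 MB, roughly the 3.4 GB quoted above
```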

3.2.2 Creating Required Directories for Oracle Clusterware Files on Shared File Systems

Use the following instructions to create directories for Oracle Clusterware files. If you intend to use a file system to store Oracle Clusterware files, then you can also configure file systems for the Oracle Database and recovery files.


Note:

For GPFS storage, you must complete this procedure only if you want to place the Oracle Clusterware files on a file system separate from the Oracle base directory.

To create directories for the Oracle Clusterware files on file systems separate from the Oracle base directory, follow these steps:

  1. If necessary, configure the shared file systems that you want to use and mount them on each node.


    Note:

    The mount point that you use for the file system must be identical on each node. Make sure that the file systems are configured to mount automatically when a node restarts.

  2. Use the df -k command to determine the free disk space on each mounted file system.

  3. From the display, identify the file systems that you want to use:

    File Type                   File System Requirements
    Oracle Clusterware files    Choose a file system with at least 120 MB of free disk space.
    Database files              Choose either a single file system with at least 1.2 GB of free
                                disk space, or two or more file systems with at least 1.2 GB of
                                free disk space in total.
    Recovery files              Choose a file system with at least 2 GB of free disk space.

    If you are using the same file system for more than one type of file, then add the disk space requirements for each type to determine the total disk space requirement.

  4. Note the names of the mount point directories for the file systems that you identified.

  5. If the user performing installation (typically, oracle) has permissions to create directories on the disks where you plan to install Oracle Clusterware and Oracle Database, then OUI creates the Oracle Clusterware file directory, and DBCA creates the Oracle Database file directory and the recovery file directory.

    If the user performing the installation does not have write access, then you must create these directories manually using commands similar to the following to create the recommended subdirectories in each of the mount point directories and set the appropriate owner, group, and permissions on them:

    • Oracle Clusterware file directory:

      # mkdir /mount_point/oracrs
      # chown oracle:oinstall /mount_point/oracrs
      # chmod 775 /mount_point/oracrs
      
      
    • Database file directory:

      # mkdir /mount_point/oradata
      # chown oracle:oinstall /mount_point/oradata
      # chmod 775 /mount_point/oradata
      
      
    • Recovery file directory (flash recovery area):

      # mkdir /mount_point/flash_recovery_area
      # chown oracle:oinstall /mount_point/flash_recovery_area
      # chmod 775 /mount_point/flash_recovery_area
      
      

Making the oracle user the owner of these directories permits them to be read by multiple Oracle homes, including those with different OSDBA groups.

When you have completed creating subdirectories in each of the mount point directories and set the appropriate owner, group, and permissions, you have completed the GPFS configuration.
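The three command sequences above can also be collapsed into one loop. This is a sketch: `mktemp -d` stands in for your real mount point so it can run anywhere, and the chown is allowed to fail on systems that lack the oracle user, because it normally requires root:

```shell
# Create the recommended subdirectories under one mount point.
# MOUNT_POINT is a scratch stand-in; substitute e.g. /gpfs1.
MOUNT_POINT=$(mktemp -d)
for dir in oracrs oradata flash_recovery_area; do
    mkdir -p "$MOUNT_POINT/$dir"
    # Needs root and the oracle user; ignored otherwise (sketch only).
    chown oracle:oinstall "$MOUNT_POINT/$dir" 2>/dev/null || true
    chmod 775 "$MOUNT_POINT/$dir"
done
```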

3.3 Configuring Storage for Oracle Clusterware Files on Raw Devices

The following subsections describe how to configure Oracle Clusterware files on raw partitions.

3.3.1 Identifying Required Raw Partitions for Clusterware Files

Table 3-2 lists the number and size of the raw partitions that you must configure for Oracle Clusterware files.


Note:

Because each file requires exclusive use of a complete disk device, Oracle recommends that, if possible, you use disk devices with sizes that closely match the size requirements of the files that they will store. You cannot use the disks that you choose for these files for any other purpose.

Table 3-2 Raw Partitions Required for Oracle Clusterware Files on AIX

  • Oracle Cluster Registry: 2 partitions, 100 MB each (or 1 partition, if you have external redundancy support for this file)

    Note: You need to create these raw partitions only once on the cluster. If you create more than one database on the cluster, then they all share the same Oracle Cluster Registry (OCR).

    You should create two partitions: one for the OCR, and one for a mirrored OCR.

    If you are upgrading from Oracle9i release 2, then you can continue to use the raw device that you used for the SRVM configuration repository instead of creating this new raw device.

  • Oracle Clusterware voting disks: 3 partitions, 20 MB each (or 1 partition, if you have external redundancy support for this file)

    Note: You need to create these raw partitions only once on the cluster. If you create more than one database on the cluster, then they all share the same Oracle Clusterware voting disk.

    You should create three partitions: one for the voting disk, and two for additional voting disks.


3.3.2 Configuring Raw Disk Devices for Oracle Clusterware Without HACMP or GPFS

If you are installing RAC on an AIX cluster without HACMP or GPFS, then you must use shared raw disk devices for the Oracle Clusterware files. You can also use shared raw disk devices for database file storage; however, Oracle recommends that you use Automatic Storage Management to store database files in this situation. This section describes how to configure the shared raw disk devices for Oracle Clusterware files (Oracle Cluster Registry and Oracle Clusterware voting disk) and database files.

To configure shared raw disk devices for Oracle Clusterware files:

  1. Identify or configure the required disk devices.

    The disk devices must be shared on all of the cluster nodes.

  2. As the root user, enter the following command on any node to identify the device names for the disk devices that you want to use:

    # /usr/sbin/lspv | grep -i none 
    
    

    This command displays information similar to the following for each disk device that is not configured in a volume group:

    hdisk17         0009005fb9c23648                    None  
    
    

    In this example, hdisk17 is the device name of the disk and 0009005fb9c23648 is the physical volume ID (PVID).

  3. If a disk device that you want to use does not have a PVID, then enter a command similar to the following to assign one to it:

    # /usr/sbin/chdev -l hdiskn -a pv=yes
    
    
  4. On each of the other nodes, enter a command similar to the following to identify the device name associated with each PVID on that node:

    # /usr/sbin/lspv | grep -i "0009005fb9c23648"
    
    

    The output from this command should be similar to the following:

    hdisk18         0009005fb9c23648                    None  
    
    

    In this example, the device name associated with the disk device (hdisk18) is different on this node.

  5. If the device names are the same on all nodes, then enter commands similar to the following on all nodes to change the owner, group, and permissions on the character raw device files for the disk devices:

    • OCR device:

      # chown root:oinstall /dev/rhdiskn
      # chmod 640 /dev/rhdiskn
      
      
    • Other devices:

      # chown oracle:dba /dev/rhdiskn
      # chmod 660 /dev/rhdiskn
      
      
  6. If the device name associated with the PVID for a disk that you want to use is different on any node, then you must create a new device file for the disk on each of the nodes using a common unused name.

    For the new device files, choose an alternative device file name that identifies the purpose of the disk device, for example /dev/ora_ocr_raw_100m for the OCR or /dev/ora_vote_raw_20m for a voting disk.


    Note:

    Alternatively, you could choose a name that contains a number that will never be used on any of the nodes, for example hdisk99.

    To create a new common device file for a disk device on all nodes, perform these steps on each node:

    1. Enter the following command to determine the device major and minor numbers that identify the disk device, where n is the disk number for the disk device on this node:

      # ls -alF /dev/*hdiskn
      
      

      The output from this command is similar to the following:

      brw------- 1 root system    24,8192 Dec 05 2001  /dev/hdiskn
      crw------- 1 root system    24,8192 Dec 05 2001  /dev/rhdiskn
      
      

      In this example, the device file /dev/rhdiskn represents the character raw device, 24 is the device major number, and 8192 is the device minor number.

    2. Enter a command similar to the following to create the new device file, specifying the new device file name and the device major and minor numbers that you identified in the previous step:


      Note:

      In the following example, you must specify the character c to create a character raw device file.

      # mknod /dev/ora_ocr_raw_100m c 24 8192
      
      
    3. Enter commands similar to the following to change the owner, group, and permissions on the character raw device file for the disk:

      • OCR:

        # chown root:oinstall /dev/ora_ocr_raw_100m
        # chmod 640 /dev/ora_ocr_raw_100m
        
        
      • Oracle Clusterware voting disk:

        # chown oracle:dba /dev/ora_vote_raw_20m
        # chmod 660 /dev/ora_vote_raw_20m
        
        
    4. Enter a command similar to the following to verify that you have created the new device file successfully:

      # ls -alF /dev | grep "24,8192"
      
      

      The output should be similar to the following:

      brw------- 1 root   system   24,8192 Dec 05 2001  /dev/hdiskn
      crw-r----- 1 root   oinstall 24,8192 Dec 05 2001  /dev/ora_ocr_raw_100m
      crw------- 1 root   system   24,8192 Dec 05 2001  /dev/rhdiskn
      
      
  7. To enable simultaneous access to a disk device from multiple nodes, you must set the appropriate Object Data Manager (ODM) attribute listed in the following table to the value shown, depending on the disk type:

    Disk Type                                         Attribute        Value
    SSA, FAStT, or non-MPIO-capable disks             reserve_lock     no
    ESS, EMC, HDS, CLARiiON, or MPIO-capable disks    reserve_policy   no_reserve

    To determine whether the attribute has the correct value, enter a command similar to the following on all cluster nodes for each disk device that you want to use:

    # /usr/sbin/lsattr -E -l hdiskn
    
    

    If the required attribute is not set to the correct value on any node, then enter a command similar to one of the following on that node:

    • SSA and FAStT devices

      # /usr/sbin/chdev -l hdiskn  -a reserve_lock=no
      
      
    • ESS, EMC, HDS, CLARiiON, and MPIO-capable devices

      # /usr/sbin/chdev -l hdiskn  -a reserve_policy=no_reserve
      
      
  8. Enter commands similar to the following on any node to clear the PVID from each disk device that you want to use:

    # /usr/sbin/chdev -l hdiskn -a pv=clear
    
    

    When you are installing Oracle Clusterware, you must enter the paths to the appropriate device files when prompted for the path of the OCR and Oracle Clusterware voting disk, for example:

    /dev/rhdisk10
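Two of the text-matching steps above lend themselves to small scripts. In this sketch the here-documents stand in for live `/usr/sbin/lspv` and `ls -alF` output (the first sample PVID is invented for illustration): the first part maps a PVID to its local device name as in step 4, and the second splits the "major,minor" field for `mknod` as in step 6:

```shell
# Step 4: find the local device name that carries a given PVID.
pvid=0009005fb9c23648
device=$(awk -v p="$pvid" '$2 == p {print $1}' <<'EOF'
hdisk17         000900477caa4567                    None
hdisk18         0009005fb9c23648                    None
EOF
)
echo "$device"        # hdisk18

# Step 6: split the "major,minor" field of an ls -l line so the two
# numbers can be passed to mknod (which must be run as root).
line='crw------- 1 root system    24,8192 Dec 05 2001  /dev/rhdiskn'
set -- $(echo "$line" | awk '{sub(",", " ", $5); print $5}')
major=$1 minor=$2
echo "$major $minor"  # 24 8192
# mknod /dev/ora_ocr_raw_100m c "$major" "$minor"
```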
    
    

3.3.3 Configuring Raw Logical Volumes for Oracle Clusterware


Note:

To use raw logical volumes for Oracle Clusterware, HACMP must be installed and configured on all cluster nodes.

This section describes how to configure raw logical volumes for Oracle Clusterware and database file storage. The procedures in this section describe how to create a new volume group that contains the logical volumes required for both types of files.

Before you continue, review the following guidelines, which contain important information about using volume groups with this release of RAC:

  • You must use concurrent-capable volume groups for Oracle Clusterware.

  • The Oracle Clusterware files require less than 200 MB of disk space. To make efficient use of the disk space in a volume group, Oracle recommends that you use the same volume group for the logical volumes for both the Oracle Clusterware files and the database files.

  • If you are upgrading an existing Oracle9i release 2 RAC installation that uses raw logical volumes, then you can use the existing SRVM configuration repository logical volume for the OCR and create a new logical volume in the same volume group for the Oracle Clusterware voting disk. However, you must remove this volume group from the HACMP concurrent resource group that activates it before you install Oracle Clusterware.


    See Also:

    The HACMP documentation for information about removing a volume group from a concurrent resource group.


    Note:

    If you are upgrading a database, then you must also create a new logical volume for the SYSAUX tablespace. Refer to the "Configuring Raw Logical Volumes in the New Oracle Clusterware Volume Group" section for more information about the requirements for the Oracle Clusterware voting disk and SYSAUX logical volumes.

  • You must use a HACMP concurrent resource group to activate new or existing volume groups that contain only database files (not Oracle Clusterware files).


    See Also:

    The HACMP documentation for information about adding a volume group to a new or existing concurrent resource group.

  • All volume groups that you intend to use for Oracle Clusterware must be activated in concurrent mode before you start the installation.

  • The procedures in this section describe how to create basic volume groups and volumes. If you want to configure more complex volumes (using mirroring, for example), then use this section in conjunction with the HACMP documentation.

3.3.4 Creating a Volume Group for Oracle Clusterware

To create a volume group for the Oracle Clusterware files:

  1. If necessary, install the shared disks that you intend to use.

  2. To ensure that the disks are available, enter the following command on every node:

    # /usr/sbin/lsdev -Cc disk
    
    

    The output from this command is similar to the following:

    hdisk0 Available 1A-09-00-8,0  16 Bit LVD SCSI Disk Drive
    hdisk1 Available 1A-09-00-9,0  16 Bit LVD SCSI Disk Drive
    hdisk2 Available 17-08-L       SSA Logical Disk Drive
    
    
  3. If a disk is not listed as available on any node, then enter the following command to configure the new disks:

    # /usr/sbin/cfgmgr
    
    
  4. Enter the following command on any node to identify the device names and any associated volume group for each disk:

    # /usr/sbin/lspv
    
    

    The output from this command is similar to the following:

    hdisk0     0000078752249812   rootvg
    hdisk1     none               none
    hdisk4     00034b6fd4ac1d71   ccvg1
    
    

    For each disk, this command shows:

    • The disk device name

    • Either the 16 character physical volume identifier (PVID) if the disk has one, or none

    • Either the volume group to which the disk belongs, or none

    The disks that you want to use may have a PVID, but they must not belong to existing volume groups.

  5. If a disk that you want to use for the volume group does not have a PVID, then enter a command similar to the following to assign one to it:

    # /usr/sbin/chdev -l hdiskn -a pv=yes
    
    
  6. To identify used device major numbers, enter the following command on each node of the cluster:

    # ls -la /dev | more
    
    

    This command displays information about all configured devices, similar to the following:

    crw-rw----   1 root     system    45,  0 Jul 19 11:56 vg1
    
    

    In this example, 45 is the major number of the vg1 volume group device.

  7. Identify an appropriate major number that is unused on all nodes in the cluster.

  8. To create a volume group, enter a command similar to the following, or use SMIT (smit mkvg):

    # /usr/sbin/mkvg -y VGname -B -s PPsize -V majornum -n \
    -C PhysicalVolumes
    
    
  9. The following table describes the options and variables used in this example. Refer to the mkvg man page for more information about these options.

    -y VGname (SMIT field: VOLUME GROUP name)
    Sample value: oracle_vg1
    Specify the name for the volume group. The name that you specify could be a generic name, as shown, or, for a database volume group, it could specify the name of the database that you intend to create.

    -B (SMIT field: Create a big VG format Volume Group)
    Specify this option to create a big VG format volume group.
    Note: If you are using SMIT, then choose yes for this field.

    -s PPsize (SMIT field: Physical partition SIZE in megabytes)
    Sample value: 32
    Specify the size of the physical partitions for the database. The sample value shown enables you to include a disk up to 32 GB in size (32 MB * 1016).

    -V majornum (SMIT field: Volume Group MAJOR NUMBER)
    Sample value: 46
    Specify the device major number for the volume group that you identified in Step 7.

    -n (SMIT field: Activate volume group AUTOMATICALLY at system restart)
    Specify this option to prevent the volume group from being activated at system restart.
    Note: If you are using SMIT, then choose no for this field.

    -C (SMIT field: Create VG Concurrent Capable)
    Specify this option to create a concurrent capable volume group.
    Note: If you are using SMIT, then choose yes for this field.

    PhysicalVolumes (SMIT field: PHYSICAL VOLUME names)
    Sample value: hdisk3 hdisk4
    Specify the device names of the disks that you want to add to the volume group.

  10. Enter a command similar to the following to vary on the volume group that you created:

    # /usr/sbin/varyonvg VGname
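Two of the survey steps above can also be scripted against captured output. In this sketch the here-documents stand in for live `/usr/sbin/lspv` and `ls -la /dev` output: the first filter picks out disks that belong to no volume group (step 4), and the second collects the device major numbers already in use (steps 6 and 7):

```shell
# Step 4: disks whose volume-group column is "none" are candidates
# for the new volume group.
candidates=$(awk '$3 == "none" {print $1}' <<'EOF'
hdisk0     0000078752249812   rootvg
hdisk1     none               none
hdisk4     00034b6fd4ac1d71   ccvg1
EOF
)
echo "$candidates"     # hdisk1

# Steps 6-7: majors already in use on this node; choose a -V value
# that appears in no node's list. Field 5 is "major," for devices.
used_majors=$(awk '$5 ~ /,$/ {sub(",", "", $5); print $5}' <<'EOF'
crw-rw----   1 root     system    45,  0 Jul 19 11:56 vg1
crw-rw----   1 root     system    46,  0 Jul 19 11:58 vg2
EOF
)
echo "$used_majors"
```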
    

3.3.5 Configuring Raw Logical Volumes in the New Oracle Clusterware Volume Group

To create the required raw logical volumes in the new Oracle Clusterware volume group:

  1. Identify the logical volumes that you must create.

  2. Enter a command similar to the following to create each required logical volume. If you prefer, you can use the command smit mklv instead.

    The following example shows the command used to create a logical volume for the SYSAUX tablespace in the ocr volume group, with a physical partition size of 114 MB (800/7 ≈ 114):

    # /usr/sbin/mklv -y test_sysaux_raw_800m -T O -w n -s n -r n ocr 7
    
    
  3. Change the owner, group, and permissions on the character device files associated with the logical volumes that you created, as follows:


    Note:

    The device file associated with the Oracle Cluster Registry must be owned by root. All other device files must be owned by the Oracle software owner user (oracle).

    # chown oracle:dba /dev/rora_vote_raw_20m
    # chmod 660 /dev/rora_vote_raw_20m
    # chown root:oinstall /dev/rora_ocr_raw_100m
    # chmod 640 /dev/rora_ocr_raw_100m
    

3.3.6 Importing the Volume Group on the Other Cluster Nodes

To make the volume group available to all nodes in the cluster, you must import it on each node, as follows:

  1. Because the physical volume names may be different on the other nodes, enter the following command to determine the PVID of the physical volumes used by the volume group:

    # /usr/sbin/lspv
    
    
  2. Note the PVIDs of the physical devices used by the volume group.

  3. To vary off the volume group that you want to use, enter a command similar to the following on the node where you created it:

    # /usr/sbin/varyoffvg VGname
    
    
  4. On each cluster node, complete the following steps:

    1. Enter the following command to determine the physical volume names associated with the PVIDs you noted previously:

      # /usr/sbin/lspv
      
      
    2. On each node of the cluster, enter commands similar to the following to import the volume group definitions:

      # /usr/sbin/importvg -y VGname -V MajorNumber PhysicalVolume
      
      

      In this example, MajorNumber is the device major number for the volume group and PhysicalVolume is the name of one of the physical volumes in the volume group.

      For example, to import the definition of the oracle_vg1 volume group with device major number 45 on the hdisk3 and hdisk4 physical volumes, enter the following command (you need to specify only one of the physical volumes):

      # /usr/sbin/importvg -y oracle_vg1 -V 45 hdisk3
      
      
    3. Change the owner, group, and permissions on the character device files associated with the logical volumes you created, as follows:

      # chown oracle:dba /dev/rora_vote_raw_20m
      # chmod 660 /dev/rora_vote_raw_20m
      # chown root:oinstall /dev/rora_ocr_raw_100m
      # chmod 640 /dev/rora_ocr_raw_100m
      
      
    4. Enter the following command to ensure that the volume group will not be activated by the operating system when the node starts:

      # /usr/sbin/chvg -a n VGname
      

3.3.7 Activating the Volume Group in Concurrent Mode on All Cluster Nodes

To activate the volume group in concurrent mode on all cluster nodes, enter the following command on each node:

# /usr/sbin/varyonvg -c VGname
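A small wrapper can fan that command out to every node. This is a dry-run sketch with hypothetical node and volume group names: the `echo` only prints each command, and must be removed (with ssh user equivalence configured) to actually execute it:

```shell
# Dry run: print the varyonvg command for each node instead of
# executing it over ssh. Node and volume group names are assumptions.
cmds=$(for node in node1 node2; do
    echo "ssh $node /usr/sbin/varyonvg -c oracle_vg1"
done)
echo "$cmds"
```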

3.4 Choosing a Storage Option for Oracle Database Files

Database files consist of the files that make up the database, and the recovery area files. You can store database files on GPFS, Automatic Storage Management (ASM), or raw devices:

During configuration of Oracle Clusterware, if you selected GPFS and the volumes that you created are large enough to hold the database files and recovery files, then you have completed required pre-installation steps. You can proceed to Chapter 4, "Installing Oracle Clusterware".

If you want to place your database files on ASM, then proceed to "Configuring Disks for Automatic Storage Management".

If you want to place your database files on raw devices, and manually provide storage management for your database and recovery files, then proceed to "Configuring Database File Storage on Raw Devices".


Note:

Databases can consist of a mixture of ASM files and non-ASM files. Refer to Oracle Database Administrator's Guide for additional information about ASM.

3.5 Configuring Disks for Automatic Storage Management

This section describes how to configure disks for use with ASM. Before you configure the disks, you must determine the number of disks and the amount of free disk space that you require. The following sections describe how to identify the requirements and configure the disks on each platform:


Note:

Although this section refers to disks, you can also use zero-padded files on a certified NAS storage device in an ASM disk group. Refer to the Oracle Database Installation Guide for UNIX Systems for information about creating and configuring NAS-based files for use in an ASM disk group.

3.5.1 Identifying Storage Requirements for ASM

To identify the storage requirements for using ASM, you must determine the number of devices and the amount of free disk space that you require. To complete this task, follow these steps:

  1. Determine whether you want to use ASM for Oracle database files, recovery files, or both.


    Note:

    You do not have to use the same storage mechanism for database files and recovery files. You can use the file system for one file type and ASM for the other.

    For RAC installations, if you choose to enable automated backups and you do not have a shared file system available, you must choose ASM for recovery file storage.


    If you enable automated backups during the installation, you can choose ASM as the storage mechanism for recovery files by specifying an ASM disk group for the flash recovery area. Depending on how you choose to create a database during the installation, you have the following options:

    • If you select an installation method that runs DBCA in interactive mode, by choosing the Advanced database configuration option for example, you can decide whether you want to use the same ASM disk group for database files and recovery files, or you can choose to use different disk groups for each file type.

      The same choice is available to you if you use DBCA after the installation to create a database.

    • If you select an installation method that runs DBCA in non-interactive mode, you must use the same ASM disk group for database files and recovery files.

  2. Choose the ASM redundancy level that you want to use for the ASM disk group.

    The redundancy level that you choose for the ASM disk group determines how ASM mirrors files in the disk group and determines the number of disks and amount of disk space that you require, as follows:

    • External redundancy

      An external redundancy disk group requires a minimum of one disk device. The effective disk space in an external redundancy disk group is the sum of the disk space in all of its devices.

      Because ASM does not mirror data in an external redundancy disk group, Oracle recommends that you use only RAID or similar devices that provide their own data protection mechanisms as disk devices in this type of disk group.

    • Normal redundancy

      In a normal redundancy disk group, ASM uses two-way mirroring by default, to increase performance and reliability. A normal redundancy disk group requires a minimum of two disk devices (or two failure groups). The effective disk space in a normal redundancy disk group is half the sum of the disk space in all of its devices.

      For most installations, Oracle recommends that you use normal redundancy disk groups.

    • High redundancy

      In a high redundancy disk group, ASM uses three-way mirroring to increase performance and provide the highest level of reliability. A high redundancy disk group requires a minimum of three disk devices (or three failure groups). The effective disk space in a high redundancy disk group is one-third the sum of the disk space in all of its devices.

      While high redundancy disk groups do provide a high level of data protection, you must consider the higher cost of additional storage devices before deciding to use this redundancy level.
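The effective-capacity rules above can be summarized in a short sketch. The 12 GB raw total is only an illustrative value, not a requirement:

```shell
# Sketch: effective disk group capacity by redundancy level.
# raw_gb is an assumed total of all device sizes in the group.
raw_gb=12
echo "external: ${raw_gb} GB usable"
echo "normal:   $(( raw_gb / 2 )) GB usable"
echo "high:     $(( raw_gb / 3 )) GB usable"
```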

  3. Determine the total amount of disk space that you require for the database files and recovery files.

    Use the following table to determine the minimum number of disks and the minimum disk space requirements for the installation:

    Redundancy Level Minimum Number of Disks Database Files Recovery Files Both File Types
    External 1 1.15 GB 2.3 GB 3.45 GB
    Normal 2 2.3 GB 4.6 GB 6.9 GB
    High 3 3.45 GB 6.9 GB 10.35 GB

    For RAC installations, you must also add additional disk space for the ASM metadata. You can use the following formula to calculate the additional disk space requirements (in MB):

    15 + (2 * number_of_disks) + (126 * number_of_ASM_instances)

    For example, for a four-node RAC installation, using three disks in a high redundancy disk group, you require an additional 525 MB of disk space:

    15 + (2 * 3) + (126 * 4) = 525
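The calculation above can be reproduced in shell arithmetic; the disk and instance counts are the ones from the example, not requirements:

```shell
# Sketch: additional ASM metadata space in MB, using the example values
# (3 disks in the disk group, 4 ASM instances for a four-node cluster).
number_of_disks=3
number_of_asm_instances=4
metadata_mb=$(( 15 + 2 * number_of_disks + 126 * number_of_asm_instances ))
echo "${metadata_mb} MB"   # prints 525 MB
```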

    If an ASM instance is already running on the system, you can use an existing disk group to meet these storage requirements. If necessary, you can add disks to an existing disk group during the installation.

    The following section describes how to identify existing disk groups and determine the free disk space that they contain.

  4. Optionally identify failure groups for the ASM disk group devices.


    Note:

    You need to complete this step only if you intend to use an installation method that runs DBCA in interactive mode, for example, if you intend to choose the Custom installation type or the Advanced database configuration option. Other installation types do not enable you to specify failure groups.

    If you intend to use a normal or high redundancy disk group, you can further protect your database against hardware failure by associating a set of disk devices in a custom failure group. By default, each device comprises its own failure group. However, if two disk devices in a normal redundancy disk group are attached to the same SCSI controller, the disk group becomes unavailable if the controller fails. The controller in this example is a single point of failure.

    To protect against failures of this type, you could use two SCSI controllers, each with two disks, and define a failure group for the disks attached to each controller. This configuration would enable the disk group to tolerate the failure of one SCSI controller.


    Note:

    If you define custom failure groups, you must specify a minimum of two failure groups for normal redundancy disk groups and three failure groups for high redundancy disk groups.

  5. If you are sure that a suitable disk group does not exist on the system, install or identify appropriate disk devices to add to a new disk group. Use the following guidelines when identifying appropriate disk devices:

    • All of the devices in an ASM disk group should be the same size and have the same performance characteristics.

    • Do not specify more than one partition on a single physical disk as a disk group device. ASM expects each disk group device to be on a separate physical disk.

    • Although you can specify a logical volume as a device in an ASM disk group, Oracle does not recommend their use. Logical volume managers can hide the physical disk architecture, preventing ASM from optimizing I/O across the physical devices.

    For information about completing this task, refer to the "Configuring Database File Storage for ASM and Raw Devices" section.

3.5.2 Using an Existing ASM Disk Group

If you want to store either database or recovery files in an existing ASM disk group, you have the following choices, depending on the installation method that you select:

  • If you select an installation method that runs DBCA in interactive mode, by choosing the Advanced database configuration option for example, you can decide whether you want to create a new disk group or use an existing one.

    The same choice is available to you if you use DBCA after the installation to create a database.

  • If you select an installation method that runs DBCA in non-interactive mode, you must choose an existing disk group for the new database; you cannot create a new disk group. However, you can add disk devices to an existing disk group if it has insufficient free space for your requirements.


Note:

The ASM instance that manages the existing disk group can be running in a different Oracle home directory.

To determine whether an ASM disk group already exists, or whether there is sufficient free disk space in a disk group, you can use Oracle Enterprise Manager Grid Control or Database Control. Alternatively, you can use the following procedure:

  1. View the contents of the oratab file to determine whether an ASM instance is configured on the system:

    # more /etc/oratab
    
    

    If an ASM instance is configured on the system, the oratab file should contain a line similar to the following:

    +ASM:oracle_home_path:N
    
    

    In this example, +ASM is the system identifier (SID) of the ASM instance and oracle_home_path is the Oracle home directory where it is installed. By convention, the SID for an ASM instance begins with a plus sign.
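The oratab check above can be scripted. The sample below embeds a hypothetical oratab (the Oracle home paths are assumptions); on a real system you would read /etc/oratab instead:

```shell
# Sketch: find an ASM instance entry (SID begins with +) in sample
# oratab content. Real usage: awk -F: '...' /etc/oratab
oratab='+ASM:/u01/app/oracle/product/10.2.0/asm:N
orcl:/u01/app/oracle/product/10.2.0/db_1:N'
printf '%s\n' "$oratab" | awk -F: '$1 ~ /^\+/ {print "ASM SID: " $1 ", home: " $2}'
```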

  2. Set the ORACLE_SID and ORACLE_HOME environment variables to specify the appropriate values for the ASM instance that you want to use.

  3. Connect to the ASM instance as the SYS user with SYSDBA privilege and start the instance if necessary:

    # $ORACLE_HOME/bin/sqlplus "SYS/SYS_password as SYSDBA"
    SQL> STARTUP
    
    
  4. Enter the following command to view the existing disk groups, their redundancy level, and the amount of free disk space in each one:

    SQL> SELECT NAME,TYPE,TOTAL_MB,FREE_MB FROM V$ASM_DISKGROUP;
    
    
  5. From the output, identify a disk group with the appropriate redundancy level and note the free space that it contains.

  6. If necessary, install or identify the additional disk devices required to meet the storage requirements listed in the previous section.


    Note:

    If you are adding devices to an existing disk group, Oracle recommends that you use devices that have the same size and performance characteristics as the existing devices in that disk group.

3.5.3 Configuring Database File Storage for ASM and Raw Devices

To configure disks for use with ASM on AIX, follow these steps:

  1. On AIX-based systems, you must apply Program Technical Fix (PTF) U496549 or higher to your system before you use ASM.

  2. If necessary, install the shared disks that you intend to use for the ASM disk group and restart the system.

  3. To make sure that the disks are available, enter the following command on every node:

    # /usr/sbin/lsdev -Cc disk
    
    

    The output from this command is similar to the following:

    hdisk0 Available 1A-09-00-8,0  16 Bit LVD SCSI Disk Drive
    hdisk1 Available 1A-09-00-9,0  16 Bit LVD SCSI Disk Drive
    hdisk2 Available 17-08-L       SSA Logical Disk Drive
    
    
  4. If a disk is not listed as available on any node, enter the following command to configure the new disks:

    # /usr/sbin/cfgmgr
    
    
  5. Enter the following command on any node to identify the device names for the physical disks that you want to use:

    # /usr/sbin/lspv | grep -i none
    
    

    This command displays information similar to the following for each disk that is not configured in a volume group:

    hdisk2     0000078752249812   None
    
    

    In this example, hdisk2 is the device name of the disk and 0000078752249812 is the physical volume ID (PVID). The disks that you want to use might have a PVID, but they must not belong to a volume group.

  6. If a disk that you want to use for the disk group does not have a PVID, enter a command similar to the following to assign one to it:

    # /usr/sbin/chdev -l hdiskn -a pv=yes
    
    
  7. On each of the other nodes, enter a command similar to the following to identify the device name associated with each PVID on that node:

    # /usr/sbin/lspv | grep -i 0000078752249812
    
    

    The output from this command should be similar to the following:

    hdisk18        0000078752249812        None  
    
    

    Depending on how each node is configured, the device names may differ between nodes.

  8. If the device names are the same on all nodes, enter commands similar to the following on all nodes to change the owner, group, and permissions on the character raw device files for the disk devices:

    • OCR device:

      # chown root:oinstall /dev/rhdiskn
      # chmod 640 /dev/rhdiskn
      
      
    • Other devices:

      # chown oracle:dba /dev/rhdiskn
      # chmod 660 /dev/rhdiskn
      
      
  9. If the device name associated with the PVID for a disk that you want to use is different on any node, you must create a new device file for the disk on each of the nodes using a common unused name.

    For the new device files, choose an alternative device file name that identifies the purpose of the disk device. Table 3-3 suggests alternative device file names for each database file; replace dbname in the alternative device file name with the name of the database.


    Note:

    Alternatively, you could choose a name that contains a number that will never be used on any of the nodes, for example hdisk99.

    To create a new common device file for a disk device on all nodes, follow these steps on each node:

    1. Enter the following command to determine the device major and minor numbers that identify the disk device, where n is the disk number for the disk device on this node:

      # ls -alF /dev/*hdiskn
      
      

      The output from this command is similar to the following:

      brw------- 1 root system    24,8192 Dec 05 2001  /dev/hdiskn
      crw------- 1 root system    24,8192 Dec 05 2001  /dev/rhdiskn
      
      

      In this example, the device file /dev/rhdiskn represents the character raw device, 24 is the device major number, and 8192 is the device minor number.
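The major and minor numbers can be picked out of an ls line like the one above with plain parameter expansion. The line below is the sample output from the text, not live data:

```shell
# Sketch: extract device major and minor numbers from a sample ls -alF line.
# The fifth whitespace-separated field is "major,minor" for a device file.
ls_line='crw------- 1 root system    24,8192 Dec 05 2001  /dev/rhdiskn'
set -- $ls_line
major=${5%,*}
minor=${5#*,}
echo "major=$major minor=$minor"   # prints major=24 minor=8192
```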

    2. Enter a command similar to the following to create the new device file, specifying the new device file name and the device major and minor numbers that you identified in the previous step:


      Note:

      In the following example, you must specify the character c to create a character raw device file.

      # mknod /dev/ora_ocr_raw_100m c 24 8192
      
      
    3. Enter commands similar to the following to change the owner, group, and permissions on the character raw device file for the disk:

      • OCR:

        # chown root:oinstall /dev/ora_ocr_raw_100m
        # chmod 640 /dev/ora_ocr_raw_100m
        
        
      • Voting disk or database files:

        # chown oracle:dba /dev/ora_vote_raw_20m
        # chmod 660 /dev/ora_vote_raw_20m
        
        
    4. Enter a command similar to the following to verify that you have created the new device file successfully:

      # ls -alF /dev | grep "24,8192"
      
      

      The output should be similar to the following:

      brw------- 1 root   system   24,8192 Dec 05 2001  /dev/hdiskn
      crw-r----- 1 root   oinstall 24,8192 Dec 05 2001  /dev/ora_ocr_raw_100m
      crw------- 1 root   system   24,8192 Dec 05 2001  /dev/rhdiskn
      
      
  10. To enable simultaneous access to a disk device from multiple nodes, you must set the appropriate Object Data Manager (ODM) attribute listed in the following table to the value shown, depending on the disk type:

    Disk Type Attribute Value
    SSA or FAStT disks reserve_lock no
    ESS, EMC, HDS, CLARiiON, or MPIO-capable disks reserve_policy no_reserve

    To determine whether the attribute has the correct value, enter a command similar to the following on all cluster nodes for each disk device that you want to use:

    # /usr/sbin/lsattr -E -l hdiskn
    
    

    If the required attribute is not set to the correct value on any node, enter a command similar to one of the following on that node:

    • SSA and FAStT devices:

      # /usr/sbin/chdev -l hdiskn  -a reserve_lock=no
      
      
    • ESS, EMC, HDS, CLARiiON, and MPIO-capable devices:

      # /usr/sbin/chdev -l hdiskn  -a reserve_policy=no_reserve
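As a sketch, the lsattr output can be inspected to decide whether a chdev command is needed. The attribute line below is a sample for an MPIO-capable disk; single_path is an assumed current value:

```shell
# Sketch: inspect a sample lsattr attribute line (name, current value, ...)
# and report the chdev command needed when the value is not no_reserve.
lsattr_line='reserve_policy single_path Reserve Policy True'
set -- $lsattr_line
if [ "$1" = "reserve_policy" ] && [ "$2" != "no_reserve" ]; then
  echo "needs: chdev -l hdiskn -a reserve_policy=no_reserve"
fi
```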
      
      
  11. Enter commands similar to the following on any node to clear the PVID from each disk device that you want to use:

    # /usr/sbin/chdev -l hdiskn -a pv=clear
    
    
  12. Enter commands similar to the following on every node to change the owner, group, and permissions on the character raw device file for each disk that you want to add to the disk group:

    # chown oracle:dba /dev/rhdiskn
    # chmod 660 /dev/rhdiskn
    

    Note:

    If you are using a multi-pathing disk driver with ASM, make sure that you set the permissions only on the correct logical device name for the disk.

    The device name associated with a disk might be different on other nodes. Make sure that you specify the correct device name on each node.


    When you are installing Oracle Clusterware, you must enter the paths to the appropriate device files when prompted for the path of the OCR and voting disk, for example:

    /dev/rhdisk10
    
    

When you have completed configuring ASM with raw partitions, proceed to Chapter 4, "Installing Oracle Clusterware".

3.6 Configuring Database File Storage on Raw Devices

The following subsections describe how to configure raw partitions for database files.

3.6.1 Identifying Required Raw Partitions for Database Files

Table 3-3 lists the number and size of the raw partitions that you must configure for database files.


Note:

Because each file requires exclusive use of a complete disk device, Oracle recommends that, if possible, you use disk devices with sizes that closely match the size requirements of the files that they will store. You cannot use the disks that you choose for these files for any other purpose.

Table 3-3 Raw Disk Devices Required for Database Files on AIX

Number Size (MB) Purpose and Sample Alternative Device File Name
1 500 SYSTEM tablespace:
dbname_system_raw_500m
1 300 + (Number of instances * 250) SYSAUX tablespace:
dbname_sysaux_raw_800m
Number of instances 500 UNDOTBSn tablespace (One tablespace for each instance, where n is the number of the instance):
dbname_undotbsn_raw_500m
1 250 TEMP tablespace:
dbname_temp_raw_250m
1 160 EXAMPLE tablespace:
dbname_example_raw_160m
1 120 USERS tablespace:
dbname_users_raw_120m
2 * number of instances 120 Two online redo log files for each instance (where n is the number of the instance and m is the log number, 1 or 2):
dbname_redon_m_raw_120m
2 110 First and second control files:
dbname_control{1|2}_raw_110m
1 5 Server parameter file (SPFILE):
dbname_spfile_raw_5m
1 5 Password file:
dbname_pwdfile_raw_5m


Note:

If you prefer to use manual undo management instead of automatic undo management, then in place of the UNDOTBSn raw devices you must create a single rollback segment tablespace (RBS) raw device that is at least 500 MB in size.
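The minimum total raw space implied by Table 3-3 can be estimated with a short sketch. It assumes a two-instance database with automatic undo management:

```shell
# Sketch: minimum total raw partition space (MB) from Table 3-3:
# SYSTEM + SYSAUX + UNDO per instance + TEMP + EXAMPLE + USERS
# + redo logs (2 per instance) + 2 control files + SPFILE + password file.
n=2   # number of instances (assumed)
total_mb=$(( 500 + (300 + n * 250) + n * 500 + 250 + 160 + 120 + 2 * n * 120 + 2 * 110 + 5 + 5 ))
echo "${total_mb} MB"   # prints 3540 MB
```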

3.6.2 Configuring Raw Disk Devices for Database File Storage Without HACMP or GPFS

If you are installing RAC on an AIX cluster without HACMP or GPFS, you can use shared raw disk devices for database file storage. However, Oracle recommends that you use Automatic Storage Management to store database files in this situation. This section describes how to configure the shared raw disk devices for database files.

To configure shared raw disk devices for Oracle Clusterware files, database files, or both:

  1. If you intend to use raw disk devices for database file storage, then specify a name for the database that you want to create.

    The name that you specify must start with a letter and have no more than four characters. For example: orcl.
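The naming rule above can be checked with a short sketch; the sample name orcl comes from the text:

```shell
# Sketch: validate a candidate database name: it must start with a letter
# and have no more than four characters.
dbname=orcl
if printf '%s' "$dbname" | grep -Eq '^[A-Za-z].{0,3}$'; then
  echo "ok: $dbname"
else
  echo "invalid: $dbname"
fi
```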

  2. Identify or configure the required disk devices.

    The disk devices must be shared on all of the cluster nodes.

  3. As the root user, enter the following command on any node to identify the device names for the disk devices that you want to use:

    # /usr/sbin/lspv | grep -i none 
    
    

    This command displays information similar to the following for each disk device that is not configured in a volume group:

    hdisk17         0009005fb9c23648                    None  
    
    

    In this example, hdisk17 is the device name of the disk and 0009005fb9c23648 is the physical volume ID (PVID).

  4. If a disk device that you want to use does not have a PVID, then enter a command similar to the following to assign one to it:

    # /usr/sbin/chdev -l hdiskn -a pv=yes
    
    
  5. On each of the other nodes, enter a command similar to the following to identify the device name associated with each PVID on that node:

    # /usr/sbin/lspv | grep -i "0009005fb9c23648"
    
    

    The output from this command should be similar to the following:

    hdisk18         0009005fb9c23648                    None  
    
    

    In this example, the device name associated with the disk device (hdisk18) is different on this node.

  6. If the device names are the same on all nodes, then enter commands similar to the following on all nodes to change the owner, group, and permissions on the character raw device files for the disk devices you want to use for database files:

    # chown oracle:dba /dev/rhdiskn
    # chmod 660 /dev/rhdiskn
    
    
  7. If the device name associated with the PVID for a disk that you want to use is different on any node, then you must create a new device file for the disk on each of the nodes using a common unused name.

    For the new device files, choose an alternative device file name that identifies the purpose of the disk device. The previous table suggests alternative device file names for each file. For database files, replace dbname in the alternative device file name with the name that you chose for the database in step 1.


    Note:

    Alternatively, you could choose a name that contains a number that will never be used on any of the nodes, for example hdisk99.

    To create a new common device file for a disk device on all nodes, perform these steps on each node:

    1. Enter the following command to determine the device major and minor numbers that identify the disk device, where n is the disk number for the disk device on this node:

      # ls -alF /dev/*hdiskn
      
      

      The output from this command is similar to the following:

      brw------- 1 root system    24,8192 Dec 05 2001  /dev/hdiskn
      crw------- 1 root system    24,8192 Dec 05 2001  /dev/rhdiskn
      
      

      In this example, the device file /dev/rhdiskn represents the character raw device, 24 is the device major number, and 8192 is the device minor number.

    2. Enter a command similar to the following to create the new device file, specifying the new device file name and the device major and minor numbers that you identified in the previous step:


      Note:

      In the following example, you must specify the character c to create a character raw device file.

      # mknod /dev/ora_ocr_raw_100m c 24 8192
      
      
    3. Enter a command similar to the following to verify that you have created the new device file successfully:

      # ls -alF /dev | grep "24,8192"
      
      

      The output should be similar to the following:

      brw------- 1 root   system   24,8192 Dec 05 2001  /dev/hdiskn
      crw-r----- 1 root   oinstall 24,8192 Dec 05 2001  /dev/ora_ocr_raw_100m
      crw------- 1 root   system   24,8192 Dec 05 2001  /dev/rhdiskn
      
      
  8. To enable simultaneous access to a disk device from multiple nodes, you must set the appropriate Object Data Manager (ODM) attribute listed in the following table to the value shown, depending on the disk type:

    Disk Type Attribute Value
    SSA, FAStT, or non-MPIO-capable disks reserve_lock no
    ESS, EMC, HDS, CLARiiON, or MPIO-capable disks reserve_policy no_reserve

    To determine whether the attribute has the correct value, enter a command similar to the following on all cluster nodes for each disk device that you want to use:

    # /usr/sbin/lsattr -E -l hdiskn
    
    

    If the required attribute is not set to the correct value on any node, then enter a command similar to one of the following on that node:

    • SSA and FAStT devices

      # /usr/sbin/chdev -l hdiskn  -a reserve_lock=no
      
      
    • ESS, EMC, HDS, CLARiiON, and MPIO-capable devices

      # /usr/sbin/chdev -l hdiskn  -a reserve_policy=no_reserve
      
      
  9. Enter commands similar to the following on any node to clear the PVID from each disk device that you want to use:

    # /usr/sbin/chdev -l hdiskn -a pv=clear
    
    
  10. If you are using raw disk devices for database files, then follow these steps to create the Database Configuration Assistant raw device mapping file:


    Note:

    You must complete this procedure only if you are using raw devices for database files. The Database Configuration Assistant raw device mapping file enables Database Configuration Assistant to identify the appropriate raw disk device for each database file. You do not specify the raw devices for the Oracle Clusterware files in the Database Configuration Assistant raw device mapping file.

    1. Set the ORACLE_BASE environment variable to specify the Oracle base directory that you identified or created previously:

      • Bourne, Bash, or Korn shell:

        $ ORACLE_BASE=/u01/app/oracle ; export ORACLE_BASE
        
        
      • C shell:

        % setenv ORACLE_BASE /u01/app/oracle
        
        
    2. Create a database file subdirectory under the Oracle base directory and set the appropriate owner, group, and permissions on it:

      # mkdir -p $ORACLE_BASE/oradata/dbname
      # chown -R oracle:oinstall $ORACLE_BASE/oradata
      # chmod -R 775 $ORACLE_BASE/oradata
      
      

      In this example, dbname is the name of the database that you chose previously.

    3. Change directory to the $ORACLE_BASE/oradata/dbname directory.

    4. Using any text editor, create a text file similar to the following that identifies the disk device file name associated with each database file.

      Oracle recommends that you use a file name similar to dbname_raw.conf for this file.


      Note:

      The following example shows a sample mapping file for a two-instance RAC cluster. Some of the devices use alternative disk device file names. Ensure that the device file name that you specify identifies the same disk device on all nodes.

      system=/dev/rhdisk11
      sysaux=/dev/rhdisk12
      example=/dev/rhdisk13
      users=/dev/rhdisk14
      temp=/dev/rhdisk15
      undotbs1=/dev/rhdisk16
      undotbs2=/dev/rhdisk17
      redo1_1=/dev/rhdisk18
      redo1_2=/dev/rhdisk19
      redo2_1=/dev/rhdisk20
      redo2_2=/dev/rhdisk22
      control1=/dev/rhdisk23
      control2=/dev/rhdisk24
      spfile=/dev/dbname_spfile_raw_5m
      pwdfile=/dev/dbname_pwdfile_raw_5m
      
      

      In this example, dbname is the name of the database.

      Use the following guidelines when creating or editing this file:

      • Each line in the file must have the following format:

        database_object_identifier=device_file_name
        
        

        The alternative device file names suggested in the previous table include the database object identifier that you must use in this mapping file. For example, in the following alternative disk device name, redo1_1 is the database object identifier:

        /dev/rac_redo1_1_raw_120m
        
        
      • For a RAC database, the file must specify one automatic undo tablespace datafile (undotbsn) and two redo log files (redon_1, redon_2) for each instance.

      • Specify at least two control files (control1, control2).

      • To use manual instead of automatic undo management, specify a single RBS tablespace datafile (rbs) instead of the automatic undo management tablespace data files.
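The line format described above can be verified with a quick sketch; the two embedded sample lines stand in for a real dbname_raw.conf:

```shell
# Sketch: check that every mapping-file line has the form
# database_object_identifier=device_file_name with a /dev path.
map='system=/dev/rhdisk11
control1=/dev/rhdisk23'
printf '%s\n' "$map" | awk -F= 'NF != 2 || $2 !~ /^\/dev\// { bad = 1 } END { print (bad ? "format error" : "format ok") }'
```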

    5. Save the file and note the file name that you specified.

    6. When you are configuring the oracle user's environment later in this chapter, set the DBCA_RAW_CONFIG environment variable to specify the full path to this file.
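Setting the variable can be sketched as follows (Bourne or Korn shell syntax); the Oracle base path and the dbname_raw.conf file name repeat the examples used earlier in this section and are assumptions for illustration:

```shell
# Sketch: point DBCA at the raw device mapping file (assumed path).
ORACLE_BASE=/u01/app/oracle
DBCA_RAW_CONFIG=$ORACLE_BASE/oradata/dbname/dbname_raw.conf
export DBCA_RAW_CONFIG
echo "$DBCA_RAW_CONFIG"
```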

  11. When you are installing Oracle Clusterware, you must enter the paths to the appropriate device files when prompted for the path of the OCR and Oracle Clusterware voting disk, for example:

    /dev/rhdisk10
    
    

3.6.3 Configuring Raw Logical Volumes for Database File Storage


Note:

To use raw logical volumes for database file storage, HACMP must be installed and configured on all cluster nodes.

This section describes how to configure raw logical volumes for database file storage. The procedures in this section describe how to create a new volume group that contains the logical volumes required for both the Oracle Clusterware files and the database files.

Before you continue, review the following guidelines, which contain important information about using volume groups with this release of RAC:

  • You must use concurrent-capable volume groups for database files.

  • Oracle Clusterware files require less than 200 MB of disk space. To make efficient use of the disk space in a volume group, Oracle recommends that you use the same volume group for the logical volumes for both the Oracle Clusterware files and the database files.

  • If you are upgrading a database, then you must also create a new logical volume for the SYSAUX tablespace. Refer to the "Identifying Required Raw Partitions for Database Files" section for more information about the requirements for the SYSAUX logical volume.


    See Also:

    The HACMP documentation for information about removing a volume group from a concurrent resource group.

  • You must use an HACMP concurrent resource group to activate new or existing volume groups that contain only database files (not Oracle Clusterware files).


    See Also:

    The HACMP documentation for information about adding a volume group to a new or existing concurrent resource group.

  • All volume groups that you intend to use for database files must be activated in concurrent mode before you start the installation.

  • The procedures in this section describe how to create basic volume groups and volumes. If you want to configure more complex volumes (for example, using mirroring), then use this section in conjunction with the HACMP documentation.

3.6.4 Creating a Volume Group for Database Files

To create a volume group for the Oracle Database files:

  1. If necessary, install the shared disks that you intend to use.

  2. To ensure that the disks are available, enter the following command on every node:

    # /usr/sbin/lsdev -Cc disk
    
    

    The output from this command is similar to the following:

    hdisk0 Available 1A-09-00-8,0  16 Bit LVD SCSI Disk Drive
    hdisk1 Available 1A-09-00-9,0  16 Bit LVD SCSI Disk Drive
    hdisk2 Available 17-08-L       SSA Logical Disk Drive
    
    
  3. If a disk is not listed as available on any node, then enter the following command to configure the new disks:

    # /usr/sbin/cfgmgr
    
    
  4. Enter the following command on any node to identify the device names and any associated volume group for each disk:

    # /usr/sbin/lspv
    
    

    The output from this command is similar to the following:

    hdisk0     0000078752249812   rootvg
    hdisk1     none               none
    hdisk4     00034b6fd4ac1d71   ccvg1
    
    

    For each disk, this command shows:

    • The disk device name

    • Either the 16-character physical volume identifier (PVID) if the disk has one, or none

    • Either the volume group to which the disk belongs, or none

    The disks that you want to use may have a PVID, but they must not belong to existing volume groups.

  5. If a disk that you want to use for the volume group does not have a PVID, then enter a command similar to the following to assign one to it:

    # /usr/sbin/chdev -l hdiskn -a pv=yes
    
    
  6. To identify used device major numbers, enter the following command on each node of the cluster:

    # ls -la /dev | more
    
    

    This command displays information about all configured devices, similar to the following:

    crw-rw----   1 root     system    45,  0 Jul 19 11:56 vg1
    
    

    In this example, 45 is the major number of the vg1 volume group device.

  7. Identify an appropriate major number that is unused on all nodes in the cluster.
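    One way to do this (a sketch, assuming the standard AIX lvlstmajor utility is available) is to list the free major numbers on every node and choose a number that appears in all of the outputs:

    ```shell
    # lvlstmajor prints the device major numbers still available on this
    # node; run it on every node and pick a number common to all outputs
    /usr/sbin/lvlstmajor
    ```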

  8. To create a volume group, enter a command similar to the following, or use SMIT (smit mkvg):

    # /usr/sbin/mkvg -y VGname -B -s PPsize -V majornum -n \
    -C PhysicalVolumes
    
    
  9. The following table describes the options and variables used in this example. Refer to the mkvg man page for more information about these options.

    -y VGname
    SMIT field: VOLUME GROUP name
    Sample value: oracle_vg1
    Specify the name for the volume group. The name that you specify could be a generic name, as shown, or for a database volume group, it could specify the name of the database that you intend to create.

    -B
    SMIT field: Create a big VG format Volume Group
    Specify this option to create a big VG format volume group.
    Note: If you are using SMIT, then choose yes for this field.

    -s PPsize
    SMIT field: Physical partition SIZE in megabytes
    Sample value: 32
    Specify the size of the physical partitions for the database. The sample value shown enables you to include a disk up to 32 GB in size (32 MB * 1016).

    -V majornum
    SMIT field: Volume Group MAJOR NUMBER
    Sample value: 46
    Specify the device major number for the volume group that you identified in Step 7.

    -n
    SMIT field: Activate volume group AUTOMATICALLY at system restart
    Specify this option to prevent the volume group from being activated at system restart.
    Note: If you are using SMIT, then choose no for this field.

    -C
    SMIT field: Create VG Concurrent Capable
    Specify this option to create a concurrent capable volume group.
    Note: If you are using SMIT, then choose yes for this field.

    PhysicalVolumes
    SMIT field: PHYSICAL VOLUME names
    Sample value: hdisk3 hdisk4
    Specify the device names of the disks that you want to add to the volume group.
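    For example, with the sample values shown in the table, the assembled command might look like the following sketch; substitute your own volume group name, major number, and disk names:

    ```shell
    # Create a big, concurrent-capable volume group named oracle_vg1 with
    # 32 MB physical partitions and device major number 46, using the
    # hdisk3 and hdisk4 disks (sample values only)
    /usr/sbin/mkvg -y oracle_vg1 -B -s 32 -V 46 -n -C hdisk3 hdisk4
    ```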

  10. Enter a command similar to the following to vary on the volume group that you created:

    # /usr/sbin/varyonvg VGname
    

3.6.5 Creating Database File Raw Logical Volumes in the New Volume Group

To create the required raw logical volumes in the new volume group:

  1. Choose a name for the database that you want to create.

    The name that you choose must start with a letter and have no more than four characters, for example, orcl.

  2. Identify the logical volumes that you must create.

    Table 3-4 lists the number and size of the logical volumes that you must create for database files.

    Table 3-4 Raw Logical Volumes Required for Database Files

    Number: 1    Size (MB): 500
    SYSTEM tablespace: dbname_system_raw_500m

    Number: 1    Size (MB): 300 + (Number of instances * 250)
    SYSAUX tablespace: dbname_sysaux_raw_800m

    Number: Number of instances    Size (MB): 500
    UNDOTBSn tablespace (one tablespace for each instance, where n is the number of the instance): dbname_undotbsn_raw_500m

    Number: 1    Size (MB): 250
    TEMP tablespace: dbname_temp_raw_250m

    Number: 1    Size (MB): 160
    EXAMPLE tablespace: dbname_example_raw_160m

    Number: 1    Size (MB): 120
    USERS tablespace: dbname_users_raw_120m

    Number: 2 * number of instances    Size (MB): 120
    Two online redo log files for each instance (where n is the number of the instance and m is the log number, 1 or 2): dbname_redon_m_raw_120m

    Number: 2    Size (MB): 110
    First and second control files: dbname_control{1|2}_raw_110m

    Number: 1    Size (MB): 5
    Server parameter file (SPFILE): dbname_spfile_raw_5m

    Number: 1    Size (MB): 5
    Password file: dbname_pwdfile_raw_5m
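    As a cross-check of the table, the following sketch prints the complete set of logical volume names for a hypothetical two-instance database named orcl; it only prints names and creates nothing:

    ```shell
    # Build the list of raw logical volume names from Table 3-4 for
    # dbname=orcl with two instances (names only; nothing is created)
    dbname=orcl
    instances=2
    names="${dbname}_system_raw_500m ${dbname}_sysaux_raw_800m"
    n=1
    while [ "$n" -le "$instances" ]; do
      names="$names ${dbname}_undotbs${n}_raw_500m"
      names="$names ${dbname}_redo${n}_1_raw_120m ${dbname}_redo${n}_2_raw_120m"
      n=$((n + 1))
    done
    names="$names ${dbname}_temp_raw_250m ${dbname}_example_raw_160m ${dbname}_users_raw_120m"
    names="$names ${dbname}_control1_raw_110m ${dbname}_control2_raw_110m"
    names="$names ${dbname}_spfile_raw_5m ${dbname}_pwdfile_raw_5m"
    for lv in $names; do
      echo "$lv"
    done
    ```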
    

  3. To create each required logical volume for data files, Oracle recommends that you use a command similar to the following to create logical volumes with a zero offset:

    # /usr/sbin/mklv -y LVname -T O -w n -s n -r n VGname NumPPs
    
    

    In this example:

    • LVname is the name of the logical volume that you want to create

    • The -T O option specifies that the device subtype should be z, which causes Oracle to use a zero offset when accessing this raw logical volume

    • VGname is the name of the volume group where you want to create the logical volume

    • NumPPs is the number of physical partitions to use

      To determine the value to use for NumPPs, divide the required size of the logical volume by the size of the physical partition and round the value up to an integer. For example, if the size of the physical partition is 32 MB and you want to create a 500 MB logical volume, then you should specify 16 for the NumPPs (500/32 = 15.625).

    Using a zero offset improves database performance and fixes the issues described in Oracle bug 2620053.
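    The rounding described above can be done with shell integer arithmetic; this sketch computes NumPPs for a 500 MB logical volume with 32 MB physical partitions:

    ```shell
    # NumPPs = ceiling(required_mb / ppsize_mb), using integer arithmetic
    required_mb=500
    ppsize_mb=32
    numpps=$(( (required_mb + ppsize_mb - 1) / ppsize_mb ))
    echo $numpps    # 16
    ```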


    Note:

    If you create tablespaces in datafiles on raw logical volumes that were not created in this way, a message is recorded in the alert.log file.

    If you prefer, you can also use the command smit mklv to create raw logical volumes.

    The following example shows the command used to create a logical volume for the SYSAUX tablespace of the test database in the oracle_vg1 volume group with a physical partition size of 32 MB (800/32 = 25):

    # /usr/sbin/mklv -y test_sysaux_raw_800m -T O -w n -s n -r n oracle_vg1 25
    
    
  4. Change the owner, group, and permissions on the character device files associated with the logical volumes that you created, as follows:


    Note:

    The device file associated with the Oracle Cluster Registry must be owned by root. All other device files must be owned by the Oracle software owner user (oracle).

    # chown oracle:dba /dev/rdbname*
    # chmod 660 /dev/rdbname*
    
    

3.6.6 Importing the Database File Volume Group on the Other Cluster Nodes

To make the database file volume group available to all nodes in the cluster, you must import it on each node, as follows:

  1. Because the physical volume names may be different on the other nodes, enter the following command to determine the PVID of the physical volumes used by the volume group:

    # /usr/sbin/lspv
    
    
  2. Note the PVIDs of the physical devices used by the volume group.

  3. To vary off the volume group that you want to use, enter a command similar to the following on the node where you created it:

    # /usr/sbin/varyoffvg VGname
    
    
  4. On each cluster node, complete the following steps:

    1. Enter the following command to determine the physical volume names associated with the PVIDs you noted previously:

      # /usr/sbin/lspv
      
      
    2. Enter commands similar to the following to import the volume group definitions:

      # /usr/sbin/importvg -y VGname -V MajorNumber PhysicalVolume
      
      

      In this example, MajorNumber is the device major number for the volume group and PhysicalVolume is the name of one of the physical volumes in the volume group.

      For example, to import the definition of the oracle_vg1 volume group, which uses device major number 45 and includes the hdisk3 and hdisk4 physical volumes, enter the following command (specifying one physical volume is sufficient, because importvg reads the volume group information from it):

      # /usr/sbin/importvg -y oracle_vg1 -V 45 hdisk3
      
      
    3. Change the owner, group, and permissions on the character device files associated with the logical volumes you created, as follows:

      # chown oracle:dba /dev/rdbname*
      # chmod 660 /dev/rdbname*
      
      
    4. Enter the following command to ensure that the volume group will not be activated by the operating system when the node starts:

      # /usr/sbin/chvg -a n VGname
      

3.6.7 Activating the Database File Volume Group in Concurrent Mode on All Cluster Nodes

To activate the volume group in concurrent mode on all cluster nodes, enter the following command on each node:

# /usr/sbin/varyonvg -c VGname
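To confirm that a volume group is active in concurrent mode, you can inspect it with the lsvg command. This is a sketch; the exact field names in lsvg output vary by AIX and HACMP level, but the concurrent state is reported there:

```shell
# Inspect the volume group state; a concurrent-capable volume group that
# is varied on in concurrent mode reports this in the lsvg output
/usr/sbin/lsvg oracle_vg1 | grep -i concurrent
```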

3.6.8 Creating the Database Configuration Assistant Raw Device Mapping File


Note:

You must complete this procedure only if you are using raw devices for database files. You do not specify the raw devices for the Oracle Clusterware files in the Database Configuration Assistant raw device mapping file.

To enable Database Configuration Assistant to identify the appropriate raw device for each database file, you must create a raw device mapping file, as follows:

  1. Set the ORACLE_BASE environment variable to specify the Oracle base directory that you identified or created previously:

    • Bourne, Bash, or Korn shell:

      $ ORACLE_BASE=/u01/app/oracle ; export ORACLE_BASE
      
      
    • C shell:

      % setenv ORACLE_BASE /u01/app/oracle
      
      
  2. Create a database file subdirectory under the Oracle base directory and set the appropriate owner, group, and permissions on it:

    # mkdir -p $ORACLE_BASE/oradata/dbname
    # chown -R oracle:oinstall $ORACLE_BASE/oradata
    # chmod -R 775 $ORACLE_BASE/oradata
    
    

    In this example, dbname is the name of the database that you chose previously.

  3. Change directory to the $ORACLE_BASE/oradata/dbname directory.

  4. Enter the following command to create a text file that you can use to create the raw device mapping file:

    # find /dev -user oracle -name 'r*' -print > dbname_raw.conf
    
    
  5. Edit the dbname_raw.conf file in any text editor to create a file similar to the following:


    Note:

    The following example shows a sample mapping file for a two-instance RAC cluster.

    system=/dev/rdbname_system_raw_500m
    sysaux=/dev/rdbname_sysaux_raw_800m
    example=/dev/rdbname_example_raw_160m
    users=/dev/rdbname_users_raw_120m
    temp=/dev/rdbname_temp_raw_250m
    undotbs1=/dev/rdbname_undotbs1_raw_500m
    undotbs2=/dev/rdbname_undotbs2_raw_500m
    redo1_1=/dev/rdbname_redo1_1_raw_120m
    redo1_2=/dev/rdbname_redo1_2_raw_120m
    redo2_1=/dev/rdbname_redo2_1_raw_120m
    redo2_2=/dev/rdbname_redo2_2_raw_120m
    control1=/dev/rdbname_control1_raw_110m
    control2=/dev/rdbname_control2_raw_110m
    spfile=/dev/rdbname_spfile_raw_5m
    pwdfile=/dev/rdbname_pwdfile_raw_5m
    
    

    In this example, dbname is the name of the database.

    Use the following guidelines when creating or editing this file:

    • Each line in the file must have the following format:

      database_object_identifier=logical_volume
      
      

      The logical volume names suggested in this manual include the database object identifier that you must use in this mapping file. For example, in the following logical volume name, redo1_1 is the database object identifier:

      /dev/rrac_redo1_1_raw_120m
      
      
    • The file must specify one automatic undo tablespace datafile (undotbsn) and two redo log files (redon_1, redon_2) for each instance.

    • Specify at least two control files (control1, control2).

    • To use manual instead of automatic undo management, specify a single RBS tablespace datafile (rbs) instead of the automatic undo management tablespace data files.
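    As a simple guard against typos, the following sketch writes a two-line sample mapping file and then counts lines that do not match the identifier=/dev/device format described above; the file name orcl_raw.conf is an assumption for illustration:

    ```shell
    # Write a small sample mapping file, then count lines that do not
    # have the identifier=/dev/device format (orcl_raw.conf is assumed)
    printf 'system=/dev/rorcl_system_raw_500m\nspfile=/dev/rorcl_spfile_raw_5m\n' > orcl_raw.conf
    good=$(grep -c '^[a-zA-Z0-9_]*=/dev/' orcl_raw.conf)
    total=$(wc -l < orcl_raw.conf)
    echo "malformed lines: $((total - good))"
    ```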

  6. Save the file and note the file name that you specified.

  7. When you are configuring the oracle user's environment later in this chapter, set the DBCA_RAW_CONFIG environment variable to specify the full path to this file.
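    For example, in the Bourne, Bash, or Korn shell, the setting might look like the following sketch; the path assumes the Oracle base directory and mapping file name used earlier in this chapter, with a database named orcl:

    ```shell
    # Point Database Configuration Assistant at the raw device mapping
    # file created in the previous steps (path is an assumed example)
    DBCA_RAW_CONFIG=/u01/app/oracle/oradata/orcl/orcl_raw.conf
    export DBCA_RAW_CONFIG
    echo $DBCA_RAW_CONFIG
    ```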