How DMP Works

Multiported disk arrays can be connected to host systems through multiple paths. To detect the various paths to a disk, DMP uses a mechanism that is specific to each supported array type. DMP can also differentiate between different enclosures of a supported array type that are connected to the same host system.

See Discovering and Configuring Newly Added Disk Devices for a description of how to make newly added disk hardware known to a host system.

The multipathing policy used by DMP depends on the characteristics of the disk array:

  • An Active/Passive array (A/P array) allows access to its LUNs (logical units; real disks or virtual disks created using hardware) via the primary (active) path on a single controller (also known as an access port or a storage processor) during normal operation.
    In implicit failover mode (or autotrespass mode), an A/P array automatically fails over by scheduling I/O to the secondary (passive) path on a separate controller if the primary path fails. This passive port is not used for I/O until the active port fails. In A/P arrays, path failover can occur for a single LUN if I/O fails on the primary path.
    For Active/Passive arrays with LUN group failover (A/PG arrays), a group of LUNs that are connected through a controller is treated as a single failover entity. Unlike A/P arrays, failover occurs at the controller level, and not for individual LUNs. The primary and secondary controllers are each connected to a separate group of LUNs. If a single LUN in the primary controller's LUN group fails, all LUNs in that group fail over to the secondary controller.
    Active/Passive arrays in explicit failover mode (or non-autotrespass mode) are termed A/PF arrays. DMP issues the appropriate low-level command to make the LUNs fail over to the secondary path.
    A/P-C, A/PF-C and A/PG-C arrays are variants of the A/P, A/PF and A/PG array types that support concurrent I/O and load balancing by having multiple primary paths into a controller. This functionality is provided by a controller with multiple ports, or by the insertion of a SAN hub or switch between an array and a controller. Failover to the secondary (passive) path occurs only if all the active primary paths fail.
  • An Active/Active disk array (A/A array) permits several paths to be used concurrently for I/O. Such arrays allow DMP to provide greater I/O throughput by balancing the I/O load uniformly across the multiple paths to the LUNs. In the event that one path fails, DMP automatically routes I/O over the other available paths.
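
As a quick way of seeing how DMP has classified the arrays on a particular host, you can list the enclosures and controllers that it has discovered. This is a minimal sketch; the names and types that are reported depend entirely on the attached hardware and on the array support libraries that are installed:

# vxdmpadm listenclosure all
# vxdmpadm listctlr all

The first command reports each discovered enclosure together with its type and status; the second lists the controllers through which the enclosures are accessed.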

VxVM uses DMP metanodes (DMP nodes) to access disk devices connected to the system. For each disk in a supported array, DMP maps one node to the set of paths that are connected to the disk. Additionally, DMP associates the appropriate multipathing policy for the disk array with the node. For disks in an unsupported array, DMP maps a separate node to each path that is connected to a disk. The raw and block devices for the nodes are created in the directories /dev/vx/rdmp and /dev/vx/dmp respectively.
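
As a rough illustration, you can list the block and raw DMP nodes directly, and display the DMP node that has been created for a particular device. The device name c1t99d0s2 used here is an example only; substitute a device name that exists on your system:

# ls /dev/vx/dmp /dev/vx/rdmp
# vxdmpadm getdmpnode nodename=c1t99d0s2

The output of the getdmpnode operation includes the name of the enclosure and the number of paths that are grouped under the node.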

See the figure, How DMP Represents Multiple Physical Paths to a Disk as One Node, for an illustration of how DMP sets up a node for a disk in a supported disk array.

How DMP Represents Multiple Physical Paths to a Disk as One Node

As described in Enclosure-Based Naming, VxVM implements a disk device naming scheme that allows you to recognize to which array a disk belongs. The figure, Example of Multipathing for a Disk Enclosure in a SAN Environment, shows that two paths, c1t99d0 and c2t99d0, exist to a single disk in the enclosure, but VxVM uses the single DMP node, enc0_0, to access it.
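
Continuing this example, the paths that DMP has grouped under a node, and the state of each path, can be displayed with the vxdmpadm and vxdisk commands. The node name enc0_0 is taken from the figure; substitute a DMP node name that exists on your system:

# vxdmpadm getsubpaths dmpnodename=enc0_0
# vxdisk list enc0_0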

Example of Multipathing for a Disk Enclosure in a SAN Environment

See Changing the Disk-Naming Scheme for details of how to change the naming scheme that VxVM uses for disk devices.


Note    The persistent device naming feature, introduced in VxVM 4.1, makes the names of disk devices (DMP node names) persistent across system reboots. If operating system-based naming is selected, each disk name is usually set to the name of one of the paths to the disk. After hardware reconfiguration and a subsequent reboot, the operating system may generate different names for the paths to the disks. As DDL assigns persistent disk names using the persistent device name database that was generated during a previous boot session, a disk name may no longer correspond to an actual path to the disk. Since DMP device node names are arbitrary, this does not prevent the disks from being used. See Regenerating the Persistent Device Name Database for details of how to regenerate the persistent device name database, and restore the relationship between the disk and path names.


Path Failover Mechanism

The DMP feature of VxVM enhances system reliability when used with multiported disk arrays. In the event of the loss of a connection to the disk array, DMP dynamically selects the next available path for I/O requests without requiring intervention from the administrator.

DMP is also informed when you repair or restore a connection, and when you add or remove devices after the system has been fully booted (provided that the operating system recognizes the devices correctly).
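
For example, after a failed connection has been repaired, and any operating system device discovery that is required has completed, the following commands make VxVM rescan its devices and confirm that the restored paths are enabled again. The controller name c1 is an example only:

# vxdctl enable
# vxdmpadm getsubpaths ctlr=c1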

Load Balancing

By default, DMP uses the balanced path mechanism to provide load balancing across paths for Active/Active, A/P-C, A/PF-C and A/PG-C disk arrays. Load balancing maximizes I/O throughput by using the total bandwidth of all available paths. Sequential I/O starting within a certain range is sent down the same path in order to benefit from disk track caching. Large sequential I/O that does not fall within the range is distributed across the available paths to reduce the overhead on any one path.

For Active/Passive disk arrays, I/O is sent down the primary path. If the primary path fails, I/O is switched over to the other available primary paths or secondary paths. As the continuous transfer of ownership of LUNs from one controller to another results in severe I/O slowdown, load balancing across paths is not performed for Active/Passive disk arrays unless they support concurrent I/O.


Note    The two paths of an Active/Passive array are not considered to be on different controllers when mirroring across controllers (for example, when creating a volume using vxassist make with the mirror=ctlr attribute specified).
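
As an illustrative sketch only (the disk group name mydg and the volume name vol01 are examples), a volume that is mirrored across controllers would be created with a command of the following form:

# vxassist -g mydg make vol01 10g mirror=ctlr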

For A/P-C, A/PF-C and A/PG-C arrays, load balancing is performed across all the currently active paths as is done for Active/Active arrays.

You can use the vxdmpadm command to change the I/O policy for the paths to an enclosure or disk array as described in Specifying the I/O Policy.
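
For example, the I/O policy that is currently in effect for an enclosure can be displayed, and a different policy applied, as shown below. The enclosure name enc0 is an example, and the set of policies that can be selected depends on the type of array:

# vxdmpadm getattr enclosure enc0 iopolicy
# vxdmpadm setattr enclosure enc0 iopolicy=round-robin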

DMP in a Clustered Environment


Note    You need an additional license to use the cluster feature of VxVM.

In a clustered environment where Active/Passive type disk arrays are shared by multiple hosts, all nodes in the cluster must access the disk via the same physical path. Accessing a disk via multiple paths simultaneously can severely degrade I/O performance (sometimes referred to as the ping-pong effect). Path failover on a single cluster node is also coordinated across the cluster so that all the nodes continue to share the same physical path.

Prior to release 4.1 of VxVM, the clustering and DMP features could not handle automatic failback in A/P arrays when a path was restored, and did not support failback for explicit failover mode arrays. Failback could only be implemented manually by running the vxdctl enable command on each cluster node after the path failure had been corrected. In release 4.1, failback is now an automatic cluster-wide operation that is coordinated by the master node. Automatic failback in explicit failover mode arrays is also handled by issuing the appropriate low-level command. If required, this feature can be disabled by selecting the "no failback" option that is defined in the array policy module (APM) for an array.
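
For reference, the manual failback procedure that was required before release 4.1 consisted of running the following command on each node of the cluster after the path failure had been corrected:

# vxdctl enable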


Note    Support for automatic failback of an A/P array requires that an appropriate ASL (and APM, if required) is available for the array, and has been installed on the system. See Administering the Device Discovery Layer, Configuring Array Policy Modules, and the VERITAS Volume Manager Hardware Notes for more information.

For Active/Active type disk arrays, any disk can be simultaneously accessed through all available physical paths to it. In a clustered environment, the nodes do not all need to access a disk via the same physical path.

Enabling or Disabling Controllers with Shared Disk Groups

VxVM does not allow enabling or disabling of controllers connected to a disk that is part of a shared VERITAS Volume Manager disk group.

For example, consider a disk array, containing all or part of a shared disk group, that is connected through a controller on each of the cluster nodes. In such a situation, the vxdmpadm enable and disable operations fail when applied to the controller on any of the nodes, and the following error message is displayed:


VxVM vxio ERROR V-5-1-3490 Operation not supported for shared disk arrays.
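
For example, an attempt such as the following to disable a controller that provides a path to a shared disk group fails with the error shown above (the controller name c1 is an example only):

# vxdmpadm disable ctlr=c1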