Product: Volume Manager Guides   
Manual: Volume Manager 4.1 Administrator's Guide   

Administering VxVM in Cluster Environments

The following sections describe the administration of VxVM's cluster functionality.


Note    Most VxVM commands require superuser or equivalent privileges.

Requesting Node Status and Discovering the Master Node

The vxdctl utility controls the operation of the vxconfigd volume configuration daemon. The -c option can be used to request cluster information and to find out which node is the master. To determine whether the vxconfigd daemon is enabled and/or running, use the following command:


vxdctl -c mode

This produces different output messages depending on the current status of the cluster node:

The status messages and their meanings are as follows:

mode: enabled: cluster active - MASTER
master: mozart

The node is the master.

mode: enabled: cluster active - SLAVE
master: mozart

The node is a slave.

mode: enabled: cluster active - role not set
master: mozart
state: joining
reconfig: master update

The node has not yet been assigned a role, and is in the process of joining the cluster.

mode: enabled: cluster active - SLAVE
master: mozart
state: joining

The node is configured as a slave, and is in the process of joining the cluster.

mode: enabled: cluster inactive

The cluster is not active.


Note    If the vxconfigd daemon is disabled, no cluster information is displayed.

See the vxdctl(1M) manual page for more information.
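In a script, the node's role and the master's name can be extracted from captured vxdctl -c mode output. The following is a minimal sketch using awk; the sample text and variable names are illustrative, not part of VxVM:

```shell
# Sample output from `vxdctl -c mode` on a slave node (captured text,
# not a live command, so this sketch runs anywhere).
status='mode: enabled: cluster active - SLAVE
master: mozart'

# The role follows " - " on the mode: line; the master name follows "master:".
role=$(printf '%s\n' "$status" | awk -F' - ' '/^mode:/ {print $2}')
master=$(printf '%s\n' "$status" | awk '/^master:/ {print $2}')

echo "role=$role master=$master"
```

On a live node, the captured text would instead come from running vxdctl -c mode directly.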

Determining if a Disk is Shareable

The vxdisk utility manages VxVM disks. To use the vxdisk utility to determine whether a disk is part of a cluster-shareable disk group, use the following command:


vxdisk list accessname

where accessname is the disk access name (or device name). A portion of the output from this command (for the device c4t1d0) is shown here:


Device:       c4t1d0
devicetag:    c4t1d0
type:         auto
clusterid:    cvm2
disk:         name=shdg01 id=963616090.1034.cvm2
timeout:      30
group:        name=shdg id=963616065.1032.cvm2
flags:        online ready autoconfig shared imported
...

Note that the clusterid field is set to cvm2 (the name of the cluster), and the flags field includes an entry for shared. When a node is not joined to the cluster, the flags field contains the autoimport flag instead of imported.
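A script can perform the same check by testing the flags line for the shared entry. This sketch greps captured vxdisk list output; the sample text is abbreviated from the listing above:

```shell
# Abbreviated sample output from `vxdisk list c4t1d0` (captured text).
listing='Device:       c4t1d0
clusterid:    cvm2
flags:        online ready autoconfig shared imported'

# The disk is in a cluster-shareable disk group if its flags line
# contains the "shared" entry.
if printf '%s\n' "$listing" | grep -q '^flags:.* shared'; then
  is_shared=yes
else
  is_shared=no
fi

echo "c4t1d0 shared: $is_shared"
```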

Listing Shared Disk Groups

vxdg can be used to list information about shared disk groups. To display information for all disk groups, use the following command:


vxdg list

Example output from this command is displayed here:


NAME         STATE             ID
rootdg       enabled           774215886.1025.teal
group2       enabled,shared    774575420.1170.teal
group1       enabled,shared    774222028.1090.teal

Shared disk groups are designated with the flag shared.

To display information for shared disk groups only, use the following command:


vxdg -s list

Example output from this command is as follows:


NAME         STATE            ID
group2       enabled,shared   774575420.1170.teal
group1       enabled,shared   774222028.1090.teal 
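When post-processing the full vxdg list output rather than re-running it with -s, the names of shared disk groups can be pulled out with awk. A sketch over the sample listing above:

```shell
# Sample output from `vxdg list` (captured text).
listing='NAME         STATE             ID
rootdg       enabled           774215886.1025.teal
group2       enabled,shared    774575420.1170.teal
group1       enabled,shared    774222028.1090.teal'

# Print the NAME column for rows whose STATE field includes the shared flag.
shared_dgs=$(printf '%s\n' "$listing" | awk '$2 ~ /(^|,)shared(,|$)/ {print $1}')

echo "$shared_dgs"
```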

To display information about one specific disk group, use the following command:


vxdg list diskgroup

The following is example output for the command vxdg list group1 on the master:


Group:     group1
dgid:      774222028.1090.teal
import-id: 32768.1749
flags:     shared
version:   120
alignment: 8192 (bytes)
ssb:        on
local-activation: exclusive-write
cluster-actv-modes: node0=ew node1=off
detach-policy: local
private_region_failure: leave
copies:    nconfig=2 nlog=2
config:    seqno=0.1976 permlen=1456 free=1448 templen=6 loglen=220
config disk c1t0d0 copy 1 len=1456 state=clean online
config disk c1t0d0 copy 1 len=1456 state=clean online
log disk c1t0d0 copy 1 len=220
log disk c1t0d0 copy 1 len=220

Note that the flags field is set to shared. The output for the same command when run on a slave is slightly different. The local-activation and cluster-actv-modes fields display the activation mode for this node and for each node in the cluster respectively. The detach-policy and private_region_failure fields indicate how the cluster behaves in the event of loss of connectivity to the disks, and to the configuration and log copies on the disks.
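For scripting, the per-node modes on the cluster-actv-modes line can be split into name=mode pairs. A minimal sketch over the sample line above (the loop and variable names are illustrative):

```shell
# The cluster-actv-modes line from the sample `vxdg list group1` output.
line='cluster-actv-modes: node0=ew node1=off'

# Strip the field name, then split the remaining node=mode pairs on whitespace.
summary=''
for pair in ${line#cluster-actv-modes: }; do
  summary="$summary${summary:+ }node=${pair%%=*} mode=${pair#*=}"
done

echo "$summary"
```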

Creating a Shared Disk Group


Note    Shared disk groups can only be created on the master node.

If the cluster software has been run to set up the cluster, a shared disk group can be created using the following command:


vxdg -s init diskgroup [diskname=]devicename

where diskgroup is the disk group name, diskname is the administrative name chosen for a VM disk, and devicename is the device name (or disk access name).


Caution    The operating system cannot tell if a disk is shared. To protect data integrity when dealing with disks that can be accessed by multiple systems, use the correct designation when adding a disk to a disk group. VxVM allows you to add a disk that is not physically shared to a shared disk group if the node where the disk is accessible is the only node in the cluster. However, this means that other nodes cannot join the cluster. Furthermore, if you attempt to add the same disk to different disk groups (private or shared) on two nodes at the same time, the results are undefined. Perform all configuration on one node only, and preferably on the master node.

Forcibly Adding a Disk to a Disk Group


Note    Disks can only be forcibly added to a shared disk group on the master node.

If VxVM does not add a disk to an existing disk group because that disk is not attached to the same nodes as the other disks in the disk group, you can forcibly add the disk using the following command:


vxdg -f adddisk -g diskgroup [diskname=]devicename


Caution    Use the force option (-f) only if you are fully aware of the consequences, such as possible data corruption.

Importing Disk Groups as Shared


Note    Shared disk groups can only be imported on the master node.

Disk groups can be imported as shared using the vxdg -s import command. If the disk groups are set up before the cluster software is run, the disk groups can be imported into the cluster arrangement using the following command:


vxdg -s import diskgroup

where diskgroup is the disk group name or ID. On subsequent cluster restarts, the disk group is automatically imported as shared. Note that it can be necessary to deport the disk group (using the vxdg deport diskgroup command) before invoking the vxdg utility.

Forcibly Importing a Disk Group

You can use the -f option to the vxdg command to import a disk group forcibly.


Caution    Use the force option (-f) with caution, and only if you are fully aware of the consequences, such as possible data corruption.

When a cluster is restarted, VxVM can refuse to auto-import a disk group for one of the following reasons:

  • A disk in the disk group is no longer accessible because of hardware errors on the disk. In this case, use the following command to forcibly reimport the disk group:

    vxdg -s -f import diskgroup

  • Some of the nodes to which disks in the disk group are attached are not currently in the cluster, so the disk group cannot access all of its disks. In this case, a forced import is unsafe and must not be attempted because it can result in inconsistent mirrors.

Converting a Disk Group from Shared to Private


Note    Shared disk groups can only be deported on the master node.

To convert a shared disk group to a private disk group, first deport it on the master node using this command:


vxdg deport diskgroup

Then reimport the disk group on any cluster node using this command:


vxdg import diskgroup

Moving Objects Between Disk Groups

As described in Moving Objects Between Disk Groups, you can use the vxdg move command to move a self-contained set of VxVM objects such as disks and top-level volumes between disk groups. In a cluster, you can move such objects between private disk groups on any cluster node where those disk groups are imported.


Note    You can only move objects between shared disk groups on the master node. You cannot move objects between private and shared disk groups.

Splitting Disk Groups

As described in Splitting Disk Groups, you can use the vxdg split command to remove a self-contained set of VxVM objects from an imported disk group, and move them to a newly created disk group.

Splitting a private disk group creates a private disk group, and splitting a shared disk group creates a shared disk group. You can split a private disk group on any cluster node where that disk group is imported. You can only split a shared disk group or create a shared target disk group on the master node.

For a description of the other options, see Moving Objects Between Disk Groups.

Joining Disk Groups

As described in Joining Disk Groups, you can use the vxdg join command to merge the contents of two imported disk groups. In a cluster, you can join two private disk groups on any cluster node where those disk groups are imported.

If the source disk group and the target disk group are both shared, you must perform the join on the master node.


Note    You cannot join a private disk group and a shared disk group.

Changing the Activation Mode on a Shared Disk Group


Note    The activation mode for access by a cluster node to a shared disk group is set on that node.

The activation mode of a shared disk group can be changed using the following command:


vxdg -g diskgroup set activation=mode

The activation mode is one of exclusive-write (ew), read-only (ro), shared-read (sr), shared-write (sw), or off. See Activation Modes of Shared Disk Groups for more information.
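Scripts that report the mode in long form can map the abbreviations directly to the full names listed above. A small sketch (the case statement and variable names are illustrative):

```shell
# Expand a vxdg activation-mode abbreviation to its full name.
mode=ew
case "$mode" in
  ew)  desc=exclusive-write ;;
  ro)  desc=read-only ;;
  sr)  desc=shared-read ;;
  sw)  desc=shared-write ;;
  off) desc=off ;;
  *)   desc=unknown ;;
esac

echo "activation mode: $desc"
```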

Setting the Disk Detach Policy on a Shared Disk Group


Note    The disk detach policy for a shared disk group can only be set on the master node.

The vxdg command may be used to set either the global or local disk detach policy for a shared disk group:


vxdg -g diskgroup set diskdetpolicy=global|local

The default disk detach policy is global. See Connectivity Policy of Shared Disk Groups for more information.

Setting the Disk Group Failure Policy on a Shared Disk Group


Note    The disk group failure policy for a shared disk group can only be set on the master node.

The vxdg command may be used to set either the dgdisable or leave failure policy for a shared disk group:


vxdg -g diskgroup set dgfailurepolicy=dgdisable|leave

The default failure policy is dgdisable. See Disk Group Failure Policy for more information.

Creating Volumes with Exclusive Open Access by a Node


Note    All shared volumes, including those with exclusive open access, can only be created on the master node.

When using the vxassist command to create a volume, you can use the exclusive=on attribute to specify that the volume may only be opened by one node in the cluster at a time. For example, to create the mirrored volume volmir in the disk group dskgrp, and configure it for exclusive open, use the following command:


vxassist -g dskgrp make volmir 5g layout=mirror exclusive=on

Multiple opens by the same node are also supported. Any attempts by other nodes to open the volume fail until the final close of the volume by the node that opened it.

Specifying exclusive=off instead means that more than one node in a cluster can open a volume simultaneously.

Setting Exclusive Open Access to a Volume by a Node


Note    Exclusive open access on a volume can only be set on the master node. Ensure that none of the nodes in the cluster have the volume open when setting this attribute.

You can set the exclusive=on attribute with the vxvol command to specify that an existing volume may only be opened by one node in the cluster at a time.

For example, to set exclusive open on the volume volmir in the disk group dskgrp, use the following command:


vxvol -g dskgrp set exclusive=on volmir

Multiple opens by the same node are also supported. Any attempts by other nodes to open the volume fail until the final close of the volume by the node that opened it.

Specifying exclusive=off instead means that more than one node in a cluster can open a volume simultaneously.

Displaying the Cluster Protocol Version

The following command displays the cluster protocol version running on a node:


vxdctl list

This command produces output similar to the following:


Volboot file
version: 3/1
seqno: 0.19
cluster protocol version: 60
hostid: giga
entries:

You can also check the existing cluster protocol version using the following command:


vxdctl protocolversion

This produces output similar to the following:


Cluster running at protocol 60

Displaying the Supported Cluster Protocol Version Range

The following command displays the maximum and minimum protocol version supported by the node and the current protocol version:


vxdctl support

This command produces output similar to the following:


Support information:
  vxconfigd_vrsn:     21
  dg_minimum:         20
  dg_maximum:         120
  kernel:             15
  protocol_minimum:   40
  protocol_maximum:   60
  protocol_current:   60

You can also use the following command to display the maximum and minimum cluster protocol version supported by the current VERITAS Volume Manager release:


vxdctl protocolrange

This produces output similar to the following:


minprotoversion: 40, maxprotoversion: 60

Upgrading the Cluster Protocol Version


Note    The cluster protocol version can only be updated on the master node.

After all the nodes in the cluster have been updated with a new cluster protocol, you can upgrade the entire cluster using the following command on the master node:


vxdctl upgrade
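Before upgrading, a script could compare protocol_current with protocol_maximum in captured vxdctl support output to see whether the upgrade would change anything. A sketch using the sample values shown earlier (the variable names are illustrative):

```shell
# Relevant lines from sample `vxdctl support` output (captured text).
support='protocol_minimum:   40
protocol_maximum:   60
protocol_current:   60'

cur=$(printf '%s\n' "$support" | awk '/protocol_current/ {print $2}')
max=$(printf '%s\n' "$support" | awk '/protocol_maximum/ {print $2}')

if [ "$cur" -lt "$max" ]; then
  verdict="upgrade available (protocol $cur -> $max)"
else
  verdict="already at the latest protocol ($cur)"
fi

echo "$verdict"
```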

Recovering Volumes in Shared Disk Groups


Note    Volumes can only be recovered on the master node.

The vxrecover utility is used to recover plexes and volumes after disk replacement. When a node leaves a cluster, it can leave some mirrors in an inconsistent state. The vxrecover utility can be used to recover such volumes. The -c option to vxrecover causes it to recover all volumes in shared disk groups. The vxconfigd daemon automatically calls the vxrecover utility with the -c option when necessary.


Note    While the vxrecover utility is active, there can be some degradation in system performance.

Obtaining Cluster Performance Statistics

The vxstat utility returns statistics for specified objects. In a cluster environment, vxstat gathers statistics from all of the nodes in the cluster. The statistics give the total usage, by all nodes, for the requested objects. If a local object is specified, its local usage is returned.

You can optionally specify a subset of nodes using the following form of the command:


vxstat -g diskgroup -n node[,node...]

where node is an integer. If a comma-separated list of nodes is supplied, the vxstat utility displays the sum of the statistics for the nodes in the list.

For example, to obtain statistics for volume vol1 on node 2, use the following command:


vxstat -g group1 -n 2 vol1

This command produces output similar to the following:


                 OPERATIONS        BLOCKS          AVG TIME(ms)
TYP  NAME      READ   WRITE     READ    WRITE     READ   WRITE
vol  vol1      2421       0   600000        0     99.0     0.0

To obtain and display statistics for the entire cluster, use the following command:


vxstat -b

The statistics for all nodes are summed. For example, if node 1 performed 100 I/O operations and node 2 performed 200 I/O operations, vxstat -b displays a total of 300 I/O operations.
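The summing behaviour can be reproduced by hand from per-node figures gathered with the -n option. A trivial sketch of the arithmetic in the example above:

```shell
# Per-node I/O operation counts, as in the example: node 1 performed 100
# operations and node 2 performed 200.
node1_ops=100
node2_ops=200

# vxstat -b reports the cluster-wide sum of the per-node statistics.
total_ops=$((node1_ops + node2_ops))

echo "total operations: $total_ops"
```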

VERITAS Software Corporation
www.veritas.com