The terms in this glossary are commonly used in a TruCluster Software environment.
Scripts used to make an application or data highly available by configuring the application or data on a member system. Action scripts break down a procedure (for example, starting an application or exporting data) into a series of steps that are performed in order when the procedure executes. There are five types of action scripts: add, delete, start, stop, and check. Each type has two versions: internal action scripts, which cannot be modified manually, and user-defined action scripts, which let you customize the behavior of the service.
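The script mechanics are managed by the asemgr utility, but as a rough illustration, a user-defined start action script is ordinarily just a shell script that performs its steps in order and reports success or failure through its exit status. The service name, application path, and exit-status convention in this sketch are illustrative assumptions, not the documented interface:

    #!/bin/sh
    # Sketch of a user-defined start action script for an ASE service.
    # Assumption: an exit status of 0 reports success, nonzero reports failure.

    APP=/usr/opt/payroll/bin/payrolld   # hypothetical application

    # Step 1: confirm that the data the application needs is reachable.
    if [ ! -d /payroll_data ]; then
        echo "start action: /payroll_data is not available" >&2
        exit 1
    fi

    # Step 2: start the application.
    $APP -d /payroll_data &

    # Step 3: report success so the service is considered running.
    exit 0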
A device that converts the protocol and hardware interface of one bus type into that of another bus.
Electrical switches on the side or rear of some disk drives that determine the SCSI address setting for the drive.
External interface to console firmware for operating systems that expect firmware compliance with the Advanced RISC Computing Standard Specification.
See advanced RISC computing.
A set of systems, disks, shared SCSI buses and software that allows you to configure applications and disks so that they are highly available to client systems.
See available server environment.
A number from 0 to 63 that identifies an ASE within a cluster and allows the asemgr utility to generate unique clusterwide names for DRD special files. Each ASE in a cluster has its own distinct ASE ID. All cluster members in the same ASE use the same ASE ID.
A service that an administrator sets up in an ASE by using the asemgr utility. TruCluster software uses a service to maintain the availability of applications or data. A service consists of a unique name, an ASP policy, an application or disk specification, and action scripts that contain the commands to start and stop the application or to fail over the disk data. The action scripts implement the status changes for the service by performing necessary configuration changes and starting and stopping processes.
A member system in an ASE runs a service until a hardware or software failure or an explicit action by an administrator causes the service to run on another member system in the ASE.
Enables you to control which member systems are allowed to run a service. You must specify an ASP policy when you add a service. For example, you can allow any member system to run a service, or you can restrict a service to a specific member system or systems.
See automatic service placement policy.
The amount of time that hardware or software is available during the time it is scheduled to be available.
For the TruCluster software, the ability to function despite a specific hardware or software failure. See also highly available.
To make an ASE service available despite a particular failure, you must make the hardware and software it depends on capable of operating despite that failure. For example, a DRD service can be made available despite a MEMORY CHANNEL interconnect failure by configuring a redundant MEMORY CHANNEL interconnect; if the primary MEMORY CHANNEL interconnect fails, the DRD service uses the other MEMORY CHANNEL interconnect.
Flat or twisted-wire cable or a backplane composed of individual identical circuits. A bus interconnects computer system components to provide communications paths for addresses, data, and control information.
A computer system that uses resources provided by another computer, called a server.
A loosely coupled collection of servers that share storage and other resources and make applications and data highly available. A cluster consists of communications media, member systems, peripheral devices, and applications. The systems communicate over a high-performance interconnect.
A file (/etc/CCM) that statically records the hardware configuration of a cluster for display by the Cluster Monitor utility. You use the cluster_map_create utility to generate a cluster configuration map when you first configure a cluster and, subsequently, each time you add or remove hardware.
Private physical bus employed by cluster members for intracluster communications.
Cluster software component that provides a graphical view of the cluster configuration. You can use the Cluster Monitor utility to monitor the availability of services and the connectivity among member systems in the cluster. You can also use it to manage services and to start disk management applications.
The ability to turn off power to a device, replace it, and then turn on power to the device.
Cluster software component that coordinates participation of systems in the cluster, and maintains cluster integrity when computers join or leave the cluster.
A SCSI bus where the signal's level is determined by the potential difference between two wires.
Cluster software component that synchronizes access to shared resources among cooperating processes throughout the cluster.
See distributed lock manager.
A storage technology that uses an ASE service to provide clusterwide access to a disk. The service exports a raw disk to all member systems. The raw disk must be on a shared SCSI bus. If the member system running the DRD service fails, the service can fail over to another member system on the same shared SCSI bus.
See distributed raw disk.
A transfer of the responsibility to provide an ASE service. A failover occurs when a hardware or software failure causes a service to restart on a viable member system.
An optional mode of SCSI-2 that allows transmission rates of up to 10 MB per second.
A bus speed that uses the fast synchronous transfer option, enabling I/O devices to attain high peak-rate transfers (10 MB per second) in synchronous mode.
Software code stored in hardware.
In the TruCluster software, the ability to survive any single hardware or software failure.
A cluster can be considered highly available if the hardware and software provide protection against any single failure, such as a system or disk failure or a SCSI cable disconnection.
An ASE service can be considered highly available if the hardware it depends on provides protection against any single failure, and the service is configured to fail over in case of a failure.
The ability to replace a device on a shared bus while the bus is active.
A member system that is available to run an ASE service if the primary member system running the service fails.
See private SCSI bus.
A file that indicates that operations on one or more other files are restricted or prohibited. The presence of the lock file can be used as the indication, or the lock file can contain information describing the nature of the restrictions.
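As a minimal sketch of the first convention, in which the lock file's presence alone signals the restriction (the path is illustrative, and this simple check is not race-free; it only shows the idea):

    #!/bin/sh
    # The presence of $LOCKFILE marks the database files as restricted.
    LOCKFILE=/var/tmp/mydb.lock         # hypothetical lock file

    if [ -f "$LOCKFILE" ]; then
        echo "files locked by PID `cat $LOCKFILE`" >&2
        exit 1
    fi
    echo $$ > "$LOCKFILE"               # record the lock holder
    # ... operate on the restricted files ...
    rm -f "$LOCKFILE"                   # release the lock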
A disk storage management tool that protects against data loss, improves disk I/O performance, and customizes the disk configuration.
System administrators use LSM to perform disk management functions without disrupting users or applications accessing data on those disks.
In an ASE, you can use LSM to mirror disks across shared SCSI buses. This results in greater data reliability and integrity. You can use a DRD service to make an LSM volume accessible clusterwide.
See Logical Storage Manager.
A group of LSM disks that share a common configuration. The configuration information for an LSM disk group consists of a set of records describing objects including LSM disks, LSM volumes, LSM plexes, and LSM subdisks that are associated with the LSM disk group. Each LSM disk group has an administrator-assigned name that can be used to reference that LSM disk group.
An LSM volume is a DIGITAL UNIX special device that contains data used by a UNIX file system, a database, or other applications. LSM transparently places an LSM volume between applications and a physical disk. Applications then operate on the LSM volume rather than on the physical disk. For example, a file system is created on an LSM volume rather than on a physical disk.
An LSM volume presents block and raw interfaces that are compatible in their use with disk partition special devices. Because an LSM volume is a virtual device, it can be mirrored, spanned across disk drives, moved to use different storage, and striped using administrative commands. The configuration of an LSM volume can be changed using LSM utilities without disrupting applications or file systems that are using the LSM volume.
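As a rough sketch of how a disk group, a volume, and a file system fit together, the following session assumes LSM's vol* administration commands and /dev/vol device naming; the exact command syntax, disk names, and sizes are assumptions to verify against the LSM documentation:

    # Hypothetical LSM session (names and syntax are illustrative).
    voldg init datadg disk01=rz17          # create disk group "datadg" on disk rz17
    volassist -g datadg make vol01 500m    # create a 500 MB LSM volume in it
    newfs /dev/rvol/datadg/vol01           # build a file system on the raw interface
    mount /dev/vol/datadg/vol01 /data      # applications use the volume, not the disk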
An LSM plex is a copy of an LSM volume's logical data address space, sometimes known as a mirror. An LSM volume can have up to eight LSM plexes associated with it. A read can be satisfied from any LSM plex, while a write is directed to all LSM plexes.
A physical or virtual peripheral device addressable through a target. LUNs use their target's bus connection to communicate on a SCSI bus.
See logical unit number.
The basic computing resource in a cluster. A member system must be physically connected to a cluster interconnect and at least one shared SCSI bus. The connection manager dynamically determines cluster membership based on communications among the cluster members.
A PCI-based cluster interconnect that promotes fast and reliable communications between cluster members.
A type of cluster interconnect that consists of a MEMORY CHANNEL adapter installed in a PCI slot in each member system, one or more MEMORY CHANNEL link cables to connect the adapters, and an optional MEMORY CHANNEL hub.
A directory file that is the name of a mounted file system.
Two or more computing systems that are linked for the purpose of exchanging information and sharing resources.
The network adapter and the software that allows a system to communicate over a network.
An abnormal condition in which nodes in an existing TruCluster software configuration divide into two independent clusters.
An industry-standard expansion I/O bus that is a synchronous, asymmetrical I/O channel.
See peripheral component interconnect.
A SCSI bus that connects private storage to the local system.
A storage device on a private SCSI bus. Storage devices include hard disk, floppy disk, and compact disk drives, tape drives, and other devices.
A technique that organizes disk data to improve performance and reliability. RAID has three attributes:
It is a set of physical disks viewed by the user as a single logical device or multiple logical devices.
Disk data is distributed across the physical set of drives in a defined manner.
Redundant disk capacity is added so data can be recovered if a drive fails.
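The recovery property in the last item can be illustrated with exclusive-OR parity, one common way redundant capacity is used. This toy sketch (assuming a POSIX shell with arithmetic expansion; the byte values are made up, and real arrays compute parity per block in the controller or driver) rebuilds one lost value from the parity and the survivors:

    #!/bin/sh
    # Toy XOR-parity illustration for a three-data-drive array.
    d0=202 d1=117 d2=89                 # byte from each data drive
    parity=$(( d0 ^ d1 ^ d2 ))          # byte stored on the redundant drive

    rebuilt=$(( parity ^ d0 ^ d2 ))     # drive 1 fails; rebuild its byte
    echo "rebuilt $rebuilt, original $d1"   # both values are 117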
See redundant array of inexpensive disks.
Describes duplicate hardware that provides spare capacity that can be used when a component fails.
To stop an ASE service on one member system and restart it on another member system.
See ASP policy.
A program to be interpreted and executed by the shell.
See Small Computer System Interface.
An extension to the original SCSI standard featuring multiple systems on the same bus and hot swap. Hot swap is the ability to replace a device on a shared bus while the bus is active. The SCSI-2 standard is ANSI standard X3.T9.2/86-109.
A storage adapter that provides a connection between an I/O bus and a SCSI bus.
A bus that supports the transmission and signalling requirements of a SCSI protocol. See shared SCSI bus and private SCSI bus.
The data transfer speed for a SCSI bus. SCSI bus speed can be either slow, up to 5 million bytes per second, or fast, up to 10 million bytes per second.
An adapter or module that is installed in a member system's I/O bus slot that provides a connection to a shared SCSI bus.
A SCSI controller, peripheral controller, or intelligent peripheral that can be attached to a SCSI bus.
Unique address that identifies a device on a SCSI bus.
A computing system that provides a specific set of applications or data to clients. For a service in an ASE, the server is the member system that is currently running the service.
See ASE service.
A SCSI bus that is connected to more than one member system and, optionally, one or more storage devices.
Disks that are connected to a shared SCSI bus.
Converts signals between a single-ended SCSI bus and a differential SCSI bus.
A signal path in which one data lead and one ground lead are used to make a device connection. This transmission method is economical, but is more susceptible to noise than a differential SCSI bus.
An American National Standards Institute (ANSI) standard interface for connecting disks and other peripheral devices to a computer system. SCSI-based devices can be configured in a series, with multiple devices on the same bus. In this manual, SCSI refers to SCSI-2. SCSI is pronounced skuh-zee.
External interface to console firmware for operating systems that expect firmware compliance with the Alpha System Reference Manual (SRM).
A MEMORY CHANNEL interconnect configuration that uses a MEMORY CHANNEL hub to connect MEMORY CHANNEL adapters. To set up a MEMORY CHANNEL interconnect in standard mode, use a link cable to connect each MEMORY CHANNEL adapter to a line card installed in a MEMORY CHANNEL hub.
DIGITAL's modular storage subsystem (MSS), which consists of a family of mass storage products that can be configured to meet current and future storage needs.
An installable software module that is compatible with the DIGITAL UNIX setld software installation utility.
The private (nonshared) interconnect used on the CPU subsystem. This bus connects the processor module, the memory module, and the I/O module.
A device that can be addressed by a SCSI ID on a SCSI bus.
Resistor array device used for terminating a SCSI bus. A SCSI bus must be terminated at its two physical ends.
One to three disks used by the connection manager to prevent cluster partitions in a two-member cluster that does not use a hub.
A connector that joins two cables to a single device.
A MEMORY CHANNEL interconnect configuration that does not use a MEMORY CHANNEL hub to connect MEMORY CHANNEL adapters. Virtual hub mode is supported only for clusters that have two member systems. To set up a MEMORY CHANNEL interconnect in virtual hub mode, use a MEMORY CHANNEL link cable to connect the MEMORY CHANNEL adapter in one member system to the corresponding MEMORY CHANNEL adapter in the other member system.
To replace a device on a shared bus while the bus is not active.
A cable that joins two cables to a single device.