Cluster

An OpenVMS cluster is a highly integrated organization of OpenVMS software, servers (Alpha, VAX, or Integrity, or a combination of Alpha and VAX or of Alpha and Integrity), and storage devices that operate as a single system. As members of an OpenVMS Cluster system, Alpha and VAX or Alpha and Integrity server systems can share processing resources, data storage, and queues under a single security and management domain, yet they can boot or shut down independently.

Benefits

The benefits of an OpenVMS Cluster include:

  • resource sharing: members of a cluster can share print and batch queues, storage devices, and other resources.
  • flexibility: application programmers do not have to change their application code, and users do not have to know anything about the OpenVMS Cluster environment to take advantage of common resources.
  • high availability: redundant hardware components can be configured to eliminate or withstand a single point of failure.
  • nonstop processing: OpenVMS Clusters facilitate dynamic adjustments to configuration changes; a node can be taken out of the cluster for updates or maintenance while its workload fails over to the other nodes.
  • scalability: computing and storage resources can be added or changed without shutting down the whole system or the applications running on it.
  • performance: OpenVMS Clusters provide high performance.
  • management: management tasks can be performed concurrently for one or more nodes in a cluster.
  • security: OpenVMS Clusters share a single security database.
  • load balancing: OpenVMS Cluster systems distribute work across cluster members based on the current load of each member.

Hardware Components

OpenVMS clusters consist of computers, interconnects, and storage devices.

Computers

Up to 96 computers, ranging from desktop to mainframe systems, can be members of an OpenVMS Cluster system. Active members run the OpenVMS Alpha or OpenVMS Integrity server operating system and participate fully in OpenVMS Cluster negotiations; they include Alpha servers and Integrity servers.
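
From DCL, a node can report its own view of cluster membership; the following is a minimal sketch using the F$GETSYI lexical function (CLUSTER_MEMBER and CLUSTER_NODES are standard item codes; the output formatting is illustrative):

  $ ! Report this node's cluster membership and the current member count
  $ nodename   = F$GETSYI("NODENAME")
  $ is_member  = F$GETSYI("CLUSTER_MEMBER")   ! TRUE if this node is a cluster member
  $ node_count = F$GETSYI("CLUSTER_NODES")    ! number of nodes currently in the cluster
  $ WRITE SYS$OUTPUT "Node ''nodename' cluster member: ''is_member'"
  $ WRITE SYS$OUTPUT "Nodes currently in the cluster: ''node_count'"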

Physical Interconnects

An interconnect is a physical path that connects computers to other computers and storage subsystems.

Interconnect | Platform Support | Comments
IP: UDP | Alpha and Integrity | Supports Fast Ethernet and Gigabit Ethernet on Alpha and Integrity; 10 Gb Ethernet on Integrity only
Fibre Channel | Alpha and Integrity | Node-to-storage only
SAS | Integrity | Node-to-storage only
SCSI | Alpha and Integrity | Node-to-storage and limited storage configurations only
LAN: Ethernet, Fast Ethernet, Gigabit Ethernet, 10 Gb Ethernet | Alpha and Integrity | 10 Gb Ethernet is supported on Integrity only
Memory Channel | Alpha | Node-to-node communications only
SMCI | Alpha | Supports communications between OpenVMS Galaxy instances
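
Which of these interconnect paths a running cluster is actually using can be inspected with the SCA Control Program (SCACP); the following is a brief sketch, assuming the standard SCACP SHOW commands (the exact display fields vary by OpenVMS version):

  $ RUN SYS$SYSTEM:SCACP
  SCACP> SHOW PORT        ! local SCA ports, for example PEA0 for the LAN port driver PEDRIVER
  SCACP> SHOW CHANNEL     ! LAN channels to other cluster members
  SCACP> SHOW VC          ! virtual circuits and their traffic counters
  SCACP> EXIT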

Storage Devices

Systems within an OpenVMS cluster support the following types of storage devices:

Controller | Interconnect
HSG and HSV | FC
LSI 1068 and LSI Logic 1068e | SAS
HSZ | SCSI
K.scsi HSC controller (StorageWorks arrays on the HSC storage subsystem) | SCSI

Software Components

The OpenVMS operating system, which runs on each node in an OpenVMS Cluster, includes several software components that facilitate resource sharing and dynamic adjustments to changes in the underlying hardware configuration. If one computer becomes unavailable, the OpenVMS Cluster system continues operating because OpenVMS is still running on the remaining computers.

OpenVMS Cluster Software

Component | Facilitates | Functions
Connection Manager | Member integrity | Coordinates participation of computers in the cluster and maintains cluster integrity when computers join or leave the cluster.
Distributed Lock Manager | Resource synchronization | Synchronizes operations of the distributed file system, job controller, device allocation, and other cluster facilities. If an OpenVMS Cluster computer shuts down, all locks that it holds are released so that processing can continue on the remaining computers.
Distributed File System | Resource sharing | Allows all computers to share access to mass storage and file records, regardless of the type of storage device (DSA, RF, SCSI, and solid-state subsystem) or its location.
Distributed Job Controller | Queueing | Makes generic and execution queues available across the cluster.
MSCP Server | Disk serving | Implements the proprietary mass storage control protocol in order to make disks available to all nodes that do not have direct access to those disks.
TMSCP Server | Tape serving | Implements the proprietary tape mass storage control protocol in order to make tape drives available to all nodes that do not have direct access to those tape drives.
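
Whether a node loads the MSCP and TMSCP servers, and which of its local disks and tapes it serves to the rest of the cluster, is governed by system parameters; below is a minimal sketch of inspecting them with SYSGEN (MSCP_LOAD, MSCP_SERVE_ALL, TMSCP_LOAD, and TMSCP_SERVE_ALL are the standard parameter names; the values are site specific):

  $ RUN SYS$SYSTEM:SYSGEN
  SYSGEN> USE CURRENT             ! read the parameters the node booted with
  SYSGEN> SHOW MSCP_LOAD          ! 1 = load the MSCP disk server on this node
  SYSGEN> SHOW MSCP_SERVE_ALL     ! controls which local disks are served clusterwide
  SYSGEN> SHOW TMSCP_LOAD         ! 1 = load the TMSCP tape server
  SYSGEN> SHOW TMSCP_SERVE_ALL    ! controls which local tapes are served clusterwide
  SYSGEN> EXIT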

System Communications Architecture

The System Communications Architecture (SCA) defines the communications mechanisms that allow nodes in an OpenVMS Cluster system to cooperate. SCA governs the sharing of data between resources at the nodes and binds together System Applications (SYSAPs) that run on different Integrity server systems and Alpha computers.

SCA consists of the following hierarchy of components:

Communications Software | Function
System applications (SYSAPs) | Consists of clusterwide applications (for example, disk and tape class drivers, connection manager, and MSCP server) that use SCS software for interprocessor communication.
System Communications Services (SCS) | Provides basic connection management and communication services, implemented as a logical path between system applications (SYSAPs) on nodes in an OpenVMS Cluster system.
Port drivers | Control the communication paths between local and remote ports.
Physical interconnects | Consists of ports or adapters for CI, DSSI, Ethernet, ATM, FDDI, and MEMORY CHANNEL interconnects. PEDRIVER is the port driver for the LAN (Ethernet) interconnect; starting with OpenVMS Version 8.4, PEDRIVER can also use TCP/IP for cluster communication.
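
The SCA layers can be observed on a live system with the Show Cluster utility; the following sketch assumes the standard CIRCUITS and CONNECTIONS display classes (the actual display depends on the configuration):

  $ SHOW CLUSTER/CONTINUOUS       ! start a continuously updated cluster display
  Command> ADD CIRCUITS           ! SCS virtual circuits over each port/interconnect
  Command> ADD CONNECTIONS        ! SYSAP-to-SYSAP connections carried on those circuits
  Command> EXIT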

System Management

An OpenVMS Cluster system is easily managed because the multiple members, hardware, and software are designed to cooperate as a single system:

  • Smaller configurations usually include only one system disk (or two for an OpenVMS Cluster configuration with both OpenVMS Alpha and OpenVMS Integrity server operating systems), regardless of the number or location of computers in the configuration.
  • Software must be installed only once for each operating system (Alpha or Integrity servers) and is accessible to every user and node of the OpenVMS Cluster.
  • With the shared UAF file, users need to be added only once to gain access to the resources of the entire OpenVMS Cluster.
  • Several system management utilities and commands facilitate cluster management (a brief usage sketch follows the table):
Tool | Supplied with the OS | Function

Accounting:
VMS Accounting | YES | Tracks how resources are being used.

Configuration and capacity planning:
License Management Facility | YES | Helps the system manager determine which software products are licensed and installed on a standalone system and on each computer in an OpenVMS Cluster system.
System Generation Utility (SYSGEN) | YES | Allows you to tailor your system for a specific hardware and software configuration. Use SYSGEN to modify system parameters, load device drivers, and create additional page and swap files.
CLUSTER_CONFIG.COM | YES | Automates the configuration or reconfiguration of an OpenVMS Cluster system and assumes the use of DECnet.
CLUSTER_CONFIG_LAN.COM | YES | Automates configuration or reconfiguration of an OpenVMS Cluster system without the use of DECnet.
HP Management Agents for OpenVMS | YES | Consists of a web server for system management with management agents that allow you to look at devices on your OpenVMS systems.
HP Insight Manager XE | Supplied with every HP NT server | Centralizes system management in one system to reduce cost, improve operational efficiency and effectiveness, and minimize system downtime. You can use HP Insight Manager XE on an NT server to monitor every system in an OpenVMS Cluster system. In a configuration of heterogeneous VSI systems, you can use HP Insight Manager XE on an NT server to monitor all systems.

Event and fault tolerance:
OPCOM message routing | YES | Provides event information.

Operations management:
Clusterwide process services | YES | Allows OpenVMS system management commands, such as SHOW USERS, SHOW SYSTEM, and STOP/ID=, to operate clusterwide.
Availability Manager | YES | Enables you to monitor one or more OpenVMS nodes on an extended LAN or wide area network (WAN) from either an OpenVMS system or a Windows node. The nodes for which you are collecting information must be in the same extended LAN, and there must be an interface that communicates with the collector nodes as well as the WAN analyzer. The Availability Manager collects system and process data from multiple OpenVMS nodes simultaneously, then analyzes the data and displays the output using a native Java GUI.
HP WBEM Services for OpenVMS | YES | WBEM (Web-Based Enterprise Management) enables management applications to retrieve system information and request system operations wherever and whenever required. It allows customers to manage their systems consistently across multiple platforms and operating systems, providing integrated solutions that optimize your infrastructure for greater operational efficiency.
Systems Communications Architecture Control Program (SCACP) | YES | Enables you to monitor, manage, and diagnose cluster communications and cluster interconnects.
DNS | NO | Configures certain network nodes as name servers that associate objects with network names.
Local Area Transport Control Program (LATCP) | YES | Provides the function to control and obtain information from the LAT port driver.
LAN Control Program (LANCP) | YES | Allows the system manager to configure and control the LAN software on OpenVMS systems.
Network Control Protocol utility (NCP) | NO | Allows the system manager to supply and access information about the DECnet for OpenVMS (Phase IV) network from a configuration database.
Network Control Language utility (NCL) | NO | Allows the system manager to supply and access information about the DECnet-Plus network from a configuration database.
POLYCENTER Software Installation Utility (PCSI) | YES | Provides rapid installations of software products.
Queue Manager | YES | Uses OpenVMS Cluster generic and execution queues to feed node-specific queues across the cluster.
Show Cluster utility | YES | Monitors activity and performance in an OpenVMS Cluster configuration, then collects and sends information about that activity to a terminal or other output device.
System Dump Analyzer (SDA) | YES | Allows you to inspect the contents of memory saved in the dump taken at crash time or as it exists in a running system. You can use SDA interactively or in batch mode.
System Management utility (SYSMAN) | YES | Enables device and processor control commands to take effect across an OpenVMS Cluster.
VMSINSTAL | YES | Provides software installations.

Performance:
AUTOGEN utility | YES | Optimizes system parameter settings based on usage.
Monitor utility | YES | Provides basic performance data.

Security:
Authorize utility | YES | Modifies user account profiles.
SET ACL command | YES | Sets complex protection on many system objects.
SET AUDIT command | YES | Facilitates tracking of sensitive system objects.

Storage management:
BACKUP utility | YES | Allows OpenVMS Cluster system managers to create backup copies of files and directories from storage media and then restore them. This utility can be used on one node to back up data stored on disks throughout the OpenVMS Cluster system.
MOUNT utility | YES | Enables a disk or tape volume for processing by one computer, a subset of OpenVMS Cluster computers, or all OpenVMS Cluster computers.
Volume Shadowing for OpenVMS | NO | Replicates disk data across multiple disks to help OpenVMS Cluster systems survive disk failures.
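
Several of the tools above operate on the whole cluster from a single node; the following is a brief sketch, assuming placeholder device, volume-label, and logical names:

  $ ! Execute a command on every cluster member from one node with SYSMAN
  $ RUN SYS$SYSTEM:SYSMAN
  SYSMAN> SET ENVIRONMENT/CLUSTER     ! subsequent DO commands apply clusterwide
  SYSMAN> DO SHOW TIME                ! runs on each member node in turn
  SYSMAN> EXIT
  $ ! Make a disk volume available to all cluster members (names are placeholders)
  $ MOUNT/CLUSTER $1$DGA100: DATAVOL DATA_DISK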

See also