The cluster utility enables you to manage almost all of the functions and administrative needs of a server cluster from the command line, making it easy to integrate such functions into scripts and dynamic web pages you might create. In this section, I'll take a look at the various options available with cluster and what you can do with the utility.
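For instance, because cluster is an ordinary console program, you can drop it straight into a batch file. Here's a minimal sketch that loops over a few nodes and reports their status (the node names test1 through test3 are just the example names used later in this section, and the node /status syntax is covered in more detail below):
@echo off
rem Report the status of each example node in turn
for %%N in (test1 test2 test3) do cluster node %%N /status
Save it as something like checknodes.bat and run it from any machine that has the cluster administration tools installed.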
A couple of notes before I begin: when using cluster, the locale settings for the user account under which you're logged in must match the system default locale on the computer used to manage the cluster. It's best to match the locales on all cluster nodes and all computers from which you will use the command-line utility.
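If you want to double-check a machine's locale settings from the command line, a quick sketch like the following, using the standard systeminfo tool, displays both the system and input locales:
systeminfo | findstr /i "locale"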
With that out of the way, let's take a look at using the utility. You can create new clusters from the command line; for example, to create a new cluster called "testcluster" at the IP address 192.168.1.140 with the administrator account, use the following:
cluster testcluster /create /ipaddress:192.168.1.140 /pass:Password /user:HASSELLTECH\administrator /verbose
The /verbose option outputs detailed information to the screen about the process of creating the cluster.
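Once the cluster has been created, you can confirm that it's visible by listing the clusters in the domain. This is just a sketch, with HASSELLTECH standing in as the example domain from the command above:
cluster /list:HASSELLTECH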
You can add a node or multiple nodes (as shown in the example below) by using the /add switch. In the next command, I'm adding three nodes, called test1, test2, and test3, respectively, to the testcluster cluster.
cluster testcluster /add:test1,test2,test3 /pass:Password /verbose
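Once the join finishes, it's worth confirming that all three nodes actually came up. If memory serves, omitting the node name from the status command (covered in more detail below) reports on every node at once; treat this as a sketch rather than a guarantee:
cluster testcluster node /status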
You might also wish to change the quorum resource via the command line. You can do so as follows:
cluster testcluster /quorum:disk2 /path:D:\
One thing to note in the preceding command: if you change the location of the quorum resource, do not omit the drive letter, the colon, or any backslashes. Write out the path name as if you were entering the full path at the command line.
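To make that concrete, here's a hedged example: assuming the quorum files live in a folder named MSCS on the D: drive (the folder name and drive letter here are assumptions, not defaults you should rely on), the full command would be:
cluster testcluster /quorum:disk2 /path:D:\MSCS\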
The node option in cluster allows you to check on the status of and administer a cluster node. Some example commands include:
cluster node test1 /status
This command displays the cluster node status (for example, if the node is up, down, or paused).
cluster node test1 /forcecleanup
This command manually restores the configuration of the cluster service on the specified node to its original state.
cluster node test1 /start
This command (and its counterparts /stop, /pause, and /resume) starts, stops, pauses, or resumes the cluster service on the specified node.
cluster node test1 /evict
This command evicts a node from a cluster.
cluster node test1 /listinterfaces
This command lists the node's network interfaces.
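Putting a couple of these switches together, a common pattern is to pause a node before performing maintenance on it and resume it afterward. A rough sketch, using the same example node:
cluster node test1 /pause
rem ...perform maintenance on test1 here...
cluster node test1 /resume
While a node is paused, the groups it already owns keep running, but no new groups can fail over to it.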
There is also a command, called clussvc, that lets you work around a few problems that can keep the cluster service from running properly. You should use this command only if the cluster service fails to start, and it should be run locally, from the node that is having trouble.
To enable the debugging of the resource dynamic-link libraries (DLLs) that are loaded by the resource monitor process, use the following:
clussvc /debug /debugresmon
To allow the cluster service to start up, despite problems with the quorum device, issue the following command:
clussvc /debug /fixquorum
When clussvc is run with the /fixquorum switch on a particular node, the cluster service starts, but all of the resources, including the quorum resource, remain offline. This allows you to then manually bring the quorum resource online and more easily diagnose quorum device failures.
The new quorum file is created using information in the cluster database located in %systemroot%\cluster\CLUSDB. Be careful, however, as that information might be out of date; only use this if no backup is available.
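For example, once the cluster service is running in this degraded state, you could bring the quorum resource online by hand from a second command prompt and watch for errors. This is only a sketch; "Disk Q:" is an assumed resource name, so substitute whatever your quorum resource is actually called:
cluster testcluster resource "Disk Q:" /online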
Use the following command to disallow replication of event log entries:
clussvc /debug /norepevtlogging
This command is useful in reducing the amount of information displayed in the command window by filtering out events already recorded in the event log.
And in the event that nothing else works, you can use the following command to force a quorum between a list of cluster nodes for a majority node set cluster:
clussvc /debug /forcequorum node1,node2,node3
You might use that command in a case where all nodes in one location have lost the ability to communicate with nodes in another location.