Friday, October 4, 2013

Low Latency Transport Protocol (LLT)


The Low Latency Transport protocol is used for all cluster communications as a high-performance, low-latency replacement for the IP stack.

LLT has the following two major functions:

Traffic distribution

LLT provides the communications backbone for GAB. LLT distributes (load balances) inter-system communication across all configured network links, so that cluster traffic is spread evenly across the links for both performance and fault resilience. If a link fails, traffic is redirected to the remaining links. A maximum of eight network links is supported.


Heartbeat

LLT is responsible for sending and receiving heartbeat traffic over each configured network link. Heartbeat traffic is point-to-point unicast. LLT uses Ethernet broadcast to learn the addresses of the nodes in the cluster. All other cluster communications, including status and configuration traffic, are point-to-point unicast. The heartbeat is used by the Group Membership Services to determine cluster membership.

The heartbeat signal is defined as follows:

LLT on each system in the cluster sends heartbeat packets out on all configured LLT interfaces every half second.

LLT on each system tracks the heartbeat status from each peer on each configured LLT interface.

LLT on each system forwards the heartbeat status of each system in the cluster to the local Group Membership Services function of GAB. GAB receives the heartbeat status of all cluster systems from LLT and makes its membership determination based on this information.
LLT can be configured to designate specific cluster interconnect links as either high priority or low priority. High priority links are used for cluster communications to GAB as well as heartbeat signals. Low priority links, during normal operation, are used for heartbeat and link state maintenance only, and the frequency of heartbeats is reduced to 50% of normal to reduce network overhead.
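For example, an /etc/llttab can mark an interconnect as high priority with the link directive and a public-network interface as low priority with the link-lowpri directive. The interface names and device paths below are illustrative only:

```
set-node node1
set-cluster 2
link en1 /dev/dlpi/en:1 - ether - -
link-lowpri en0 /dev/dlpi/en:0 - ether - -
```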

If all configured high priority links fail, LLT switches all cluster communications traffic to the first available low priority link. Traffic reverts to the high priority links as soon as they become available.

While not required, best practice is to configure at least one low priority link, and to configure two high priority links on dedicated cluster interconnects to provide redundancy in the communications path. Low priority links are typically configured on the public or administrative network.
If the private NICs run at different media speeds, Symantec recommends configuring the lower-speed NICs as low-priority links to enhance LLT performance. With this setting, LLT does active-passive load balancing across the private links: at configuration time and on failover, LLT automatically chooses a high-priority link as the active link and uses the low-priority links only when a high-priority link fails.

LLT sends packets on all configured links in a weighted round-robin manner. The linkburst parameter controls the number of back-to-back packets that LLT sends on a link before the next link is chosen.

In addition to the default weighted round-robin load balancing, LLT also provides destination-based load balancing, where the link is chosen based on the destination node ID and the port. With destination-based load balancing, LLT sends all packets for a particular destination on one link. A potential drawback of this approach is that LLT may not fully utilize the available links if the ports have dissimilar traffic. Symantec recommends destination-based load balancing when the setup has more than two cluster nodes and more active LLT ports. You must manually configure the port-to-LLT-link mapping to enable destination-based load balancing for your cluster.
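The round-robin scheduling idea can be illustrated with a small shell sketch. This is not LLT source code: the link names, packet count, and linkburst value are made up for illustration.

```shell
#!/bin/bash
# Illustrative sketch of weighted round-robin with a linkburst of 4:
# send 'linkburst' back-to-back packets on a link, then move to the next link.
linkburst=4
links=("link0" "link1")       # two configured LLT links (names are made up)
assignments=()                # which link each packet was sent on
idx=0
count=0
for pkt in $(seq 1 10); do
  assignments+=("${links[$idx]}")
  count=$((count + 1))
  if [ "$count" -ge "$linkburst" ]; then
    count=0
    idx=$(( (idx + 1) % ${#links[@]} ))
  fi
done
echo "${assignments[@]}"      # four packets on link0, four on link1, two on link0
```

A larger linkburst reduces link-switching overhead at the cost of coarser balancing; the same trade-off applies to the real parameter.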

At startup, LLT sends broadcast packets carrying its LLT node ID and cluster ID onto the LAN to discover any node in the network that has the same node ID and cluster ID pair. Each node in the network replies to this broadcast message with its cluster ID, node ID, and node name.

LLT on the original node does not start and gives an appropriate error in the following cases:

LLT on any other node in the same network is running with the same node id and cluster id pair that it owns.

LLT on the original node receives a response from a node that does not have a node name entry in the /etc/llthosts file.

LLT (Low Latency Transport) provides fast, kernel-to-kernel communications and monitors network connections. The system administrator configures LLT by creating a configuration file (llttab) that describes the systems in the cluster and the private network links among them. LLT runs at layer 2 of the network stack.


The /etc/llthosts file is a database, containing one entry per system, that links the LLT system ID with the host's name. The file is identical on each server in the cluster.


The /etc/llttab file contains information that is derived during installation and is used by the lltconfig utility.

To view verbose output from the lltstat command:

lltstat -nvv | more

To stop LLT:

lltconfig -U

To start LLT:

lltconfig -c

Setting up /etc/llttab for a manual installation

The /etc/llttab file must specify the system's ID number (or its node name), its cluster ID, and the network links that correspond to the system. In addition, the file can contain other directives. Refer also to the sample llttab file in /opt/VRTSllt.

Use vi or another editor to create the file /etc/llttab with entries that resemble:

set-node galaxy
set-cluster 2
link en1 /dev/dlpi/en:1 - ether - -
link en2 /dev/dlpi/en:2 - ether - -
The first line must identify the system where the file exists. In the example, the value for set-node can be galaxy, 0, or the file name /etc/nodename; that file needs to contain the name of the system (galaxy in this example). The next two lines, beginning with the link directive, identify the two private network cards that the LLT protocol uses. The order of directives must be the same as in the sample llttab file in /opt/VRTSllt.


The file llthosts(4) is a database, containing one entry per system, that links the LLT system ID (in the first column) with the LLT host name. This file is identical on each node in the cluster.

For example, the file /etc/llthosts contains entries that resemble:

0 node1

1 node2
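Because /etc/llthosts must be identical on every node and each ID/name pair must be unique, a quick consistency check can be scripted. The sketch below is illustrative only: it builds and checks a sample file in the llthosts format rather than reading a live /etc/llthosts.

```shell
#!/bin/bash
# Sanity-check an llthosts-style file (one "ID name" entry per line)
# for duplicate node IDs or duplicate node names.
llthosts=$(mktemp)            # sample file; on a real node use /etc/llthosts
cat > "$llthosts" <<'EOF'
0 node1
1 node2
EOF
dup_ids=$(awk '{print $1}' "$llthosts" | sort | uniq -d)
dup_names=$(awk '{print $2}' "$llthosts" | sort | uniq -d)
if [ -z "$dup_ids" ] && [ -z "$dup_names" ]; then
  result="ok"
else
  result="duplicates found"
fi
echo "$result"
rm -f "$llthosts"
```

Comparing a checksum of the file across all nodes (for example with cksum) is a similarly simple way to confirm the copies are identical.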

