Seesaw v2


Note: This is not an official Google product.

About

Seesaw v2 is a Linux Virtual Server (LVS) based load balancing platform.

It is capable of providing basic load balancing for servers that are on the same network, through to advanced load balancing functionality such as anycast, Direct Server Return (DSR), support for multiple VLANs and centralised configuration.

Above all, it is designed to be reliable and easy to maintain.

Requirements

A Seesaw v2 load balancing cluster requires two Seesaw nodes - these can be physical machines or virtual instances. Each node must have two network interfaces - one for the host itself and the other for the cluster VIP. All four interfaces should be connected to the same layer 2 network.

Building

Seesaw v2 is developed in Go and depends on several Go packages:

  • golang.org/x/crypto/ssh
  • github.com/dlintw/goconf
  • github.com/golang/glog
  • github.com/miekg/dns
  • github.com/kylelemons/godebug/pretty

Additionally, there is a compile and runtime dependency on libnl and a compile-time dependency on the Go protobuf compiler.

On a Debian/Ubuntu style system, you should be able to prepare for building by running:

apt-get install golang
apt-get install libnl-3-dev libnl-genl-3-dev

If your distro ships a Go version before 1.5, you may need to fetch a newer release from https://golang.org/dl/.
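
For example, a manual install of a newer toolchain might look like this (the version below is illustrative - pick a current release from the downloads page):

# Download and unpack a Go release to /usr/local, then put it on the PATH.
curl -LO https://golang.org/dl/go1.6.2.linux-amd64.tar.gz
tar -C /usr/local -xzf go1.6.2.linux-amd64.tar.gz
export PATH="/usr/local/go/bin:${PATH}"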

Set GOPATH to an appropriate location, for example:
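
export GOPATH="${HOME}/go"

and then fetch the required Go packages: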

go get -u golang.org/x/crypto/ssh
go get -u github.com/dlintw/goconf
go get -u github.com/golang/glog
go get -u github.com/miekg/dns
go get -u github.com/kylelemons/godebug/pretty

Ensure that ${GOPATH}/bin is in your ${PATH}, then from within the seesaw directory run:

make test
make install

If you wish to regenerate the protobuf code, the protobuf compiler and Go protobuf compiler generator are also needed:

apt-get install protobuf-compiler
go get -u github.com/golang/protobuf/{proto,protoc-gen-go}

The protobuf code can then be regenerated with:

make proto

Installing

After make install has run successfully, there should be a number of binaries in ${GOPATH}/bin with a seesaw_ prefix. Install these to the appropriate locations:

SEESAW_BIN="/usr/local/seesaw"
SEESAW_ETC="/etc/seesaw"
SEESAW_LOG="/var/log/seesaw"

INIT=`ps -p 1 -o comm=`

install -d "${SEESAW_BIN}" "${SEESAW_ETC}" "${SEESAW_LOG}"

install "${GOPATH}/bin/seesaw_cli" /usr/bin/seesaw

for component in {ecu,engine,ha,healthcheck,ncc,watchdog}; do
  install "${GOPATH}/bin/seesaw_${component}" "${SEESAW_BIN}"
done

if [ "${INIT}" = "init" ]; then
  install "etc/init/seesaw_watchdog.conf" "/etc/init"
elif [ "${INIT}" = "systemd" ]; then
  install "etc/systemd/system/seesaw_watchdog.service" "/etc/systemd/system"
  systemctl --system daemon-reload
fi
install "etc/seesaw/watchdog.cfg" "${SEESAW_ETC}"

# Enable CAP_NET_RAW for seesaw binaries that require raw sockets.
/sbin/setcap cap_net_raw+ep "${SEESAW_BIN}/seesaw_ha"
/sbin/setcap cap_net_raw+ep "${SEESAW_BIN}/seesaw_healthcheck"

The setcap binary can be found in the libcap2-bin package on Debian/Ubuntu.
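
You can confirm the capabilities were applied with getcap, which ships in the same package:

# Each command should report cap_net_raw+ep for its binary.
/sbin/getcap "${SEESAW_BIN}/seesaw_ha"
/sbin/getcap "${SEESAW_BIN}/seesaw_healthcheck"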

Configuring

Each node needs a /etc/seesaw/seesaw.cfg configuration file, which provides information about the node and its peer. Additionally, each load balancing cluster needs a cluster configuration, in the form of a text-based protobuf, which is stored in /etc/seesaw/cluster.pb.

An example seesaw.cfg file can be found in etc/seesaw/seesaw.cfg.example - a minimal seesaw.cfg provides the following:

  • anycast_enabled - True if anycast should be enabled for this cluster.
  • name - The short name of this cluster.
  • node_ipv4 - The IPv4 address of this Seesaw node.
  • peer_ipv4 - The IPv4 address of our peer Seesaw node.
  • vip_ipv4 - The IPv4 address for this cluster VIP.

The VIP floats between the Seesaw nodes and is only active on the current master. This address needs to be allocated within the same netblock as both the node IP address and peer IP address.
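
As a rough sketch (the authoritative layout is in etc/seesaw/seesaw.cfg.example; the section name and addresses below are illustrative placeholders), a minimal seesaw.cfg looks something like:

[cluster]
anycast_enabled = false
name = seesaw-test
node_ipv4 = 192.168.10.1
peer_ipv4 = 192.168.10.2
vip_ipv4 = 192.168.10.3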

An example cluster.pb file can be found in etc/seesaw/cluster.pb.example - a minimal cluster.pb contains a seesaw_vip entry and two node entries. For each service that you want to load balance, a separate vserver entry is needed, with one or more vserver_entry sections (one per port/proto pair), one or more backends and one or more healthchecks. Further information is available in the protobuf definition - see pb/config/config.proto.
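
For illustration only, a skeletal cluster.pb might look like the following text-format protobuf - the field names here are assumptions based on the description above, so verify them against pb/config/config.proto and the shipped example; all addresses and names are placeholders:

seesaw_vip: <
  fqdn: "seesaw-vip1.example.com."
  ipv4: "192.168.10.3/24"
  status: PRODUCTION
>
node: <
  fqdn: "seesaw1.example.com."
  ipv4: "192.168.10.1/24"
  status: PRODUCTION
>
node: <
  fqdn: "seesaw2.example.com."
  ipv4: "192.168.10.2/24"
  status: PRODUCTION
>
vserver: <
  name: "web.example.com"
  entry_address: <
    fqdn: "web.example.com."
    ipv4: "192.168.10.20/24"
    status: PRODUCTION
  >
  rp: "admin@example.com"
  vserver_entry: <
    protocol: TCP
    port: 80
    scheduler: WLC
    healthcheck: <
      type: HTTP
      port: 80
    >
  >
  backend: <
    host: <
      fqdn: "web1.example.com."
      ipv4: "192.168.10.10/24"
      status: PRODUCTION
    >
    weight: 1
  >
>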

On an upstart based system, running restart seesaw_watchdog will start (or restart) the watchdog process; on a systemd based system, use systemctl restart seesaw_watchdog. The watchdog will in turn start the other components.

Anycast

Seesaw v2 provides full support for anycast VIPs - that is, it will advertise an anycast VIP when it becomes available and will withdraw the anycast VIP if it becomes unavailable. For this to work the Quagga BGP daemon needs to be installed and configured, with the BGP peers accepting host-specific routes that are advertised from the Seesaw nodes within the anycast range (currently hardcoded as 192.168.255.0/24).
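
As a rough illustration of the peer side (AS numbers and addresses are placeholders; consult the Quagga documentation for the full syntax), the upstream router could admit host routes from the anycast range along these lines:

! bgpd.conf fragment on the BGP peer: accept /32s from the Seesaw anycast range.
ip prefix-list seesaw-anycast seq 5 permit 192.168.255.0/24 ge 32
!
router bgp 64512
 neighbor 192.168.10.1 remote-as 64513
 neighbor 192.168.10.1 prefix-list seesaw-anycast in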

Command Line

Once initial configuration has been performed and the Seesaw components are running, the state of the Seesaw can be viewed and controlled via the Seesaw command line interface. Running seesaw (assuming /usr/bin is in your path) will give you an interactive prompt - type ? for a list of top level commands. A quick summary (an example session follows the list):

  • config reload - reload the cluster.pb from the current config source.
  • failover - failover between the Seesaw nodes.
  • show vservers - list all vservers configured on this cluster.
  • show vserver <name> - show the current state for the named vserver.
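
For example (the prompt and vserver name shown here are illustrative):

$ seesaw
seesaw> show vservers
seesaw> show vserver web.example.com
seesaw> exit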

Troubleshooting

A Seesaw has five components running under the watchdog - together with the watchdog itself, the process table should show processes for:

  • seesaw_ecu
  • seesaw_engine
  • seesaw_ha
  • seesaw_healthcheck
  • seesaw_ncc
  • seesaw_watchdog

All Seesaw v2 components have their own logs, in addition to the logging provided by the watchdog. If any of the processes are not running, check the corresponding logs in /var/log/seesaw (e.g. seesaw_engine.{log,INFO}).
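
Assuming the installation paths used above, a quick way to check both:

# List the running Seesaw processes (the [s] avoids matching grep itself).
ps aux | grep '[s]eesaw_'

# Inspect recent log output for a component that is not running.
tail -n 50 /var/log/seesaw/seesaw_engine.INFO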

Directories

  • binaries/seesaw_cli - The seesaw_cli binary implements the Seesaw v2 Command Line Interface (CLI), which allows for user control of the Seesaw v2 Engine component.
  • binaries/seesaw_ecu - The seesaw_ecu binary implements the Seesaw ECU component, which provides an externally accessible interface to monitor and control the Seesaw Node.
  • binaries/seesaw_engine - The seesaw_engine binary implements the Seesaw Engine component, which is responsible for maintaining configuration, handling state transitions and communicating with other Seesaw v2 components.
  • binaries/seesaw_ha - The seesaw_ha binary implements HA peering between Seesaw nodes.
  • binaries/seesaw_healthcheck - The seesaw_healthcheck binary implements the Seesaw Healthcheck component and is responsible for managing the configuration, scheduling and invocation of healthchecks against services and backends.
  • binaries/seesaw_ncc - The seesaw_ncc binary implements the Seesaw Network Control Centre component, which provides an interface that allows the Seesaw Engine to control network configuration, including IPVS, iptables, network interfaces, Quagga and routing.
  • cli - Package cli provides a command line interface that allows for interaction with a Seesaw cluster.
  • common/conn - Package conn provides connectivity and communication with the Seesaw Engine, either over IPC or RPC.
  • common/ipc - Package ipc contains types and functions used for interprocess communication between Seesaw components.
  • common/seesaw - Package seesaw contains structures, interfaces and utility functions that are used by Seesaw v2 components.
  • common/server - Package server contains utility functions for Seesaw v2 server components.
  • ecu - Package ecu implements the Seesaw v2 ECU component, which provides an externally accessible interface to monitor and control the Seesaw Node.
  • engine - Package engine implements the Seesaw v2 engine component, which is responsible for maintaining configuration information, handling state transitions and providing communication between Seesaw v2 components.
  • engine/config - Package config implements functions to manage the configuration for a Seesaw v2 engine.
  • ha - Package ha implements high availability (HA) peering between Seesaw nodes using VRRP v3.
  • healthcheck - Package healthcheck implements backend and service healthchecks.
  • ipvs - Package ipvs provides a Go interface to Linux IPVS.
  • ncc - Package ncc implements the Seesaw v2 Network Control Centre component, which provides an interface for the Seesaw engine to manipulate and control network related configuration.
  • ncc/client - Package client implements the client interface to the Seesaw v2 Network Control Centre component.
  • ncc/types - Package types contains types used to exchange data between the NCC client and the NCC server.
  • netlink - Package netlink provides a Go interface to netlink via the libnl library.
  • pb/config - Package config is a generated protocol buffer package.
  • quagga - Package quagga provides a library for interfacing with Quagga daemons.
  • test_tools/healthcheck_test_tool - The healthcheck_test_tool binary is used to test healthcheckers.
