ipvs

package · v1.28.6 · Published: Jan 17, 2024 · License: Apache-2.0

README

IPVS

This document shows users:

  • what IPVS is
  • the differences between IPVS and IPTABLES
  • how to run kube-proxy in IPVS mode, and how to debug it

What is IPVS

IPVS (IP Virtual Server) implements transport-layer load balancing, usually called Layer 4 LAN switching, as part of the Linux kernel.

IPVS runs on a host and acts as a load balancer in front of a cluster of real servers. IPVS can direct requests for TCP and UDP-based services to the real servers, and make services of real servers appear as virtual services on a single IP address.

IPVS vs. IPTABLES

IPVS mode was introduced in Kubernetes v1.8, went beta in v1.9, and went GA in v1.11. IPTABLES mode was added in v1.1 and has been the default operating mode since v1.2. Both IPVS and IPTABLES are based on netfilter. The differences between IPVS mode and IPTABLES mode are as follows:

  1. IPVS provides better scalability and performance for large clusters.

  2. IPVS supports more sophisticated load balancing algorithms than IPTABLES (least load, least connections, locality, weighted, etc.).

  3. IPVS supports server health checking and connection retries, etc.

When IPVS falls back to IPTABLES

The IPVS proxier employs iptables for packet filtering, SNAT, and masquerading. Specifically, it uses ipset to store the source or destination addresses of traffic that must be dropped or masqueraded, which keeps the number of iptables rules constant no matter how many services exist.

Here is the table of ipset sets that the IPVS proxier uses.

| set name | members | usage |
| --- | --- | --- |
| KUBE-CLUSTER-IP | all service IP + port | mark-masq for cases where masquerade-all=true or clusterCIDR is specified |
| KUBE-LOOP-BACK | all service IP + port + IP | masquerade for hairpin traffic |
| KUBE-EXTERNAL-IP | service external IP + port | masquerade for packets to external IPs |
| KUBE-LOAD-BALANCER | load balancer ingress IP + port | masquerade for packets to LoadBalancer type services |
| KUBE-LOAD-BALANCER-LOCAL | LB ingress IP + port with externalTrafficPolicy=local | accept packets to load balancers with externalTrafficPolicy=local |
| KUBE-LOAD-BALANCER-FW | load balancer ingress IP + port with loadBalancerSourceRanges | packet filter for load balancers with loadBalancerSourceRanges specified |
| KUBE-LOAD-BALANCER-SOURCE-CIDR | load balancer ingress IP + port + source CIDR | packet filter for load balancers with loadBalancerSourceRanges specified |
| KUBE-NODE-PORT-TCP | NodePort type service TCP port | masquerade for packets to nodePort (TCP) |
| KUBE-NODE-PORT-LOCAL-TCP | NodePort type service TCP port with externalTrafficPolicy=local | accept packets to NodePort services with externalTrafficPolicy=local |
| KUBE-NODE-PORT-UDP | NodePort type service UDP port | masquerade for packets to nodePort (UDP) |
| KUBE-NODE-PORT-LOCAL-UDP | NodePort type service UDP port with externalTrafficPolicy=local | accept packets to NodePort services with externalTrafficPolicy=local |

The IPVS proxier falls back on IPTABLES in the following scenarios.

1. kube-proxy starts with --masquerade-all=true

If kube-proxy starts with --masquerade-all=true, the IPVS proxier masquerades all traffic accessing a service Cluster IP, behaving the same as the IPTABLES proxier. Suppose kube-proxy has the flag --masquerade-all=true specified; then the iptables rules installed by the IPVS proxier should look like what is shown below.

# iptables -t nat -nL

Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
KUBE-POSTROUTING  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes postrouting rules */

Chain KUBE-MARK-MASQ (2 references)
target     prot opt source               destination
MARK       all  --  0.0.0.0/0            0.0.0.0/0            MARK or 0x4000

Chain KUBE-POSTROUTING (1 references)
target     prot opt source               destination
MASQUERADE  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000
MASQUERADE  all  --  0.0.0.0/0            0.0.0.0/0            match-set KUBE-LOOP-BACK dst,dst,src

Chain KUBE-SERVICES (2 references)
target     prot opt source               destination
KUBE-MARK-MASQ  all  --  0.0.0.0/0            0.0.0.0/0            match-set KUBE-CLUSTER-IP dst,dst
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            match-set KUBE-CLUSTER-IP dst,dst

2. Specify cluster CIDR in kube-proxy startup

If kube-proxy starts with --cluster-cidr=<cidr>, the IPVS proxier masquerades off-cluster traffic accessing a service Cluster IP, behaving the same as the IPTABLES proxier. Suppose kube-proxy is provided with the cluster CIDR 10.244.16.0/24; then the iptables rules installed by the IPVS proxier should look like what is shown below.

# iptables -t nat -nL

Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
KUBE-POSTROUTING  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes postrouting rules */

Chain KUBE-MARK-MASQ (3 references)
target     prot opt source               destination
MARK       all  --  0.0.0.0/0            0.0.0.0/0            MARK or 0x4000

Chain KUBE-POSTROUTING (1 references)
target     prot opt source               destination
MASQUERADE  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000
MASQUERADE  all  --  0.0.0.0/0            0.0.0.0/0            match-set KUBE-LOOP-BACK dst,dst,src

Chain KUBE-SERVICES (2 references)
target     prot opt source               destination
KUBE-MARK-MASQ  all  -- !10.244.16.0/24       0.0.0.0/0            match-set KUBE-CLUSTER-IP dst,dst
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            match-set KUBE-CLUSTER-IP dst,dst

3. Load Balancer type service

For a LoadBalancer type service, the IPVS proxier installs iptables rules that match the ipset KUBE-LOAD-BALANCER. Additionally, when the service's loadBalancerSourceRanges is specified or externalTrafficPolicy=local is set, the IPVS proxier creates the ipset sets KUBE-LOAD-BALANCER-LOCAL/KUBE-LOAD-BALANCER-FW/KUBE-LOAD-BALANCER-SOURCE-CIDR and installs iptables rules accordingly, which should look like what is shown below.

# iptables -t nat -nL

Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
KUBE-POSTROUTING  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes postrouting rules */

Chain KUBE-FIREWALL (1 references)
target     prot opt source               destination
RETURN     all  --  0.0.0.0/0            0.0.0.0/0            match-set KUBE-LOAD-BALANCER-SOURCE-CIDR dst,dst,src
KUBE-MARK-DROP  all  --  0.0.0.0/0            0.0.0.0/0

Chain KUBE-LOAD-BALANCER (1 references)
target     prot opt source               destination
KUBE-FIREWALL  all  --  0.0.0.0/0            0.0.0.0/0            match-set KUBE-LOAD-BALANCER-FW dst,dst
RETURN     all  --  0.0.0.0/0            0.0.0.0/0            match-set KUBE-LOAD-BALANCER-LOCAL dst,dst
KUBE-MARK-MASQ  all  --  0.0.0.0/0            0.0.0.0/0

Chain KUBE-MARK-DROP (1 references)
target     prot opt source               destination
MARK       all  --  0.0.0.0/0            0.0.0.0/0            MARK or 0x8000

Chain KUBE-MARK-MASQ (2 references)
target     prot opt source               destination
MARK       all  --  0.0.0.0/0            0.0.0.0/0            MARK or 0x4000

Chain KUBE-POSTROUTING (1 references)
target     prot opt source               destination
MASQUERADE  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000
MASQUERADE  all  --  0.0.0.0/0            0.0.0.0/0            match-set KUBE-LOOP-BACK dst,dst,src

Chain KUBE-SERVICES (2 references)
target     prot opt source               destination
KUBE-LOAD-BALANCER  all  --  0.0.0.0/0            0.0.0.0/0            match-set KUBE-LOAD-BALANCER dst,dst
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            match-set KUBE-LOAD-BALANCER dst,dst

4. NodePort type service

For a NodePort type service, the IPVS proxier installs iptables rules that match the ipsets KUBE-NODE-PORT-TCP/KUBE-NODE-PORT-UDP. When externalTrafficPolicy=local is specified, the IPVS proxier creates the ipset sets KUBE-NODE-PORT-LOCAL-TCP/KUBE-NODE-PORT-LOCAL-UDP and installs iptables rules accordingly, which should look like what is shown below.

Suppose the service has a TCP nodePort.

Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
KUBE-POSTROUTING  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes postrouting rules */

Chain KUBE-MARK-MASQ (2 references)
target     prot opt source               destination
MARK       all  --  0.0.0.0/0            0.0.0.0/0            MARK or 0x4000

Chain KUBE-NODE-PORT (1 references)
target     prot opt source               destination
RETURN     all  --  0.0.0.0/0            0.0.0.0/0            match-set KUBE-NODE-PORT-LOCAL-TCP dst
KUBE-MARK-MASQ  all  --  0.0.0.0/0            0.0.0.0/0

Chain KUBE-POSTROUTING (1 references)
target     prot opt source               destination
MASQUERADE  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000
MASQUERADE  all  --  0.0.0.0/0            0.0.0.0/0            match-set KUBE-LOOP-BACK dst,dst,src

Chain KUBE-SERVICES (2 references)
target     prot opt source               destination
KUBE-NODE-PORT  all  --  0.0.0.0/0            0.0.0.0/0            match-set KUBE-NODE-PORT-TCP dst

5. Service with externalIPs specified

For a service with externalIPs specified, the IPVS proxier installs iptables rules that match the ipset KUBE-EXTERNAL-IP. Suppose we have a service with externalIPs specified; the iptables rules should look like what is shown below.

Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
KUBE-POSTROUTING  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes postrouting rules */

Chain KUBE-MARK-MASQ (2 references)
target     prot opt source               destination
MARK       all  --  0.0.0.0/0            0.0.0.0/0            MARK or 0x4000

Chain KUBE-POSTROUTING (1 references)
target     prot opt source               destination
MASQUERADE  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000
MASQUERADE  all  --  0.0.0.0/0            0.0.0.0/0            match-set KUBE-LOOP-BACK dst,dst,src

Chain KUBE-SERVICES (2 references)
target     prot opt source               destination
KUBE-MARK-MASQ  all  --  0.0.0.0/0            0.0.0.0/0            match-set KUBE-EXTERNAL-IP dst,dst
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            match-set KUBE-EXTERNAL-IP dst,dst PHYSDEV match ! --physdev-is-in ADDRTYPE match src-type !LOCAL
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            match-set KUBE-EXTERNAL-IP dst,dst ADDRTYPE match dst-type LOCAL

Run kube-proxy in IPVS mode

Currently, local-up scripts, GCE scripts, and kubeadm support switching to IPVS proxy mode by exporting environment variables or specifying flags.

Prerequisite

Ensure the kernel modules required by IPVS (note: use nf_conntrack instead of nf_conntrack_ipv4 for Linux kernel 4.19 and later)

ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
  1. have been compiled into the node kernel. Use

grep -e ipvs -e nf_conntrack_ipv4 /lib/modules/$(uname -r)/modules.builtin

and get results like the following if they are compiled into the kernel.

kernel/net/ipv4/netfilter/nf_conntrack_ipv4.ko
kernel/net/netfilter/ipvs/ip_vs.ko
kernel/net/netfilter/ipvs/ip_vs_rr.ko
kernel/net/netfilter/ipvs/ip_vs_wrr.ko
kernel/net/netfilter/ipvs/ip_vs_lc.ko
kernel/net/netfilter/ipvs/ip_vs_wlc.ko
kernel/net/netfilter/ipvs/ip_vs_fo.ko
kernel/net/netfilter/ipvs/ip_vs_ovf.ko
kernel/net/netfilter/ipvs/ip_vs_lblc.ko
kernel/net/netfilter/ipvs/ip_vs_lblcr.ko
kernel/net/netfilter/ipvs/ip_vs_dh.ko
kernel/net/netfilter/ipvs/ip_vs_sh.ko
kernel/net/netfilter/ipvs/ip_vs_sed.ko
kernel/net/netfilter/ipvs/ip_vs_nq.ko
kernel/net/netfilter/ipvs/ip_vs_ftp.ko

OR

  2. have been loaded.
# load module <module_name>
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4

# to check loaded modules, use
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# or
cut -f1 -d " "  /proc/modules | grep -e ip_vs -e nf_conntrack_ipv4

Packages such as ipset should also be installed on the node before using IPVS mode.

Kube-proxy will fall back to IPTABLES mode if those requirements are not met.

Local UP Cluster

Kube-proxy will run in IPTABLES mode by default in a local-up cluster.

To use IPVS mode, users should export the env KUBE_PROXY_MODE=ipvs to specify the IPVS mode before starting the cluster:

# before running `hack/local-up-cluster.sh`
export KUBE_PROXY_MODE=ipvs

GCE Cluster

Similar to the local-up cluster, kube-proxy in clusters running on GCE runs in IPTABLES mode by default. Users need to export the env KUBE_PROXY_MODE=ipvs before starting a cluster:

#before running one of the commands chosen to start a cluster:
# curl -sS https://get.k8s.io | bash
# wget -q -O - https://get.k8s.io | bash
# cluster/kube-up.sh
export KUBE_PROXY_MODE=ipvs

Cluster Created by Kubeadm

If you are using kubeadm with a configuration file, you have to add mode: ipvs in a KubeProxyConfiguration (separated by --- and also passed to kubeadm init).

...
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
...

before running

kubeadm init --config <path_to_configuration_file>

to specify the ipvs mode before deploying the cluster.

Note: if IPVS mode is successfully enabled, you should see IPVS proxy rules (use ipvsadm) like

 # ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.1:443 rr persistent 10800
  -> 192.168.0.1:6443             Masq    1      1          0

or similar logs in the kube-proxy log (for example, /tmp/kube-proxy.log for a local-up cluster) while the local cluster is running:

Using ipvs Proxier.

If there are no IPVS proxy rules, or the following logs appear, kube-proxy has failed to use IPVS mode:

Can't use ipvs proxier, trying iptables proxier
Using iptables Proxier.

See the following section for more details on debugging.

Debug

Check IPVS proxy rules

Users can use the ipvsadm tool to check whether kube-proxy is maintaining IPVS rules correctly. For example, suppose we have the following services in the cluster:

 # kubectl get svc --all-namespaces
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
default       kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP         1d
kube-system   kube-dns     ClusterIP   10.0.0.10    <none>        53/UDP,53/TCP   1d

We may get IPVS proxy rules like:

 # ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.1:443 rr persistent 10800
  -> 192.168.0.1:6443             Masq    1      1          0
TCP  10.0.0.10:53 rr
  -> 172.17.0.2:53                Masq    1      0          0
UDP  10.0.0.10:53 rr
  -> 172.17.0.2:53                Masq    1      0          0
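
To get the same view programmatically (for example, in a debugging helper), the utilipvs interface consumed by this package can list the configured virtual and real servers. A minimal sketch, assuming the no-argument utilipvs.New() constructor used by recent releases (constructor signatures have varied across Kubernetes versions):

package main

import (
	"fmt"

	utilipvs "k8s.io/kubernetes/pkg/util/ipvs"
)

func main() {
	// Assumption: utilipvs.New() is the no-argument constructor; older
	// releases passed an exec interface here.
	handle := utilipvs.New()

	// GetVirtualServers returns the virtual servers configured in the
	// kernel, i.e. roughly what `ipvsadm -ln` prints.
	vss, err := handle.GetVirtualServers()
	if err != nil {
		panic(err)
	}
	for _, vs := range vss {
		fmt.Printf("%s %s:%d scheduler=%s\n", vs.Protocol, vs.Address, vs.Port, vs.Scheduler)
		rss, err := handle.GetRealServers(vs)
		if err != nil {
			continue
		}
		for _, rs := range rss {
			fmt.Printf("  -> %s:%d weight=%d\n", rs.Address, rs.Port, rs.Weight)
		}
	}
}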

Why kube-proxy can't start IPVS mode

Use the following checklist to help you solve the problems:

1. Specify proxy-mode=ipvs

Check whether the kube-proxy mode has been set to ipvs.

2. Install required kernel modules and packages

Check whether the kernel modules required by IPVS have been compiled into the kernel (or loaded) and whether the required packages are installed (see Prerequisite above).

Documentation

Constants

const (
	// MinIPSetCheckVersion is the min ipset version we need.  IPv6 is supported in ipset 6.x
	MinIPSetCheckVersion = "6.0"
)

Variables

This section is empty.

Functions

func CanUseIPVSProxier

func CanUseIPVSProxier(ipvs utilipvs.Interface, ipsetver IPSetVersioner, scheduler string) error

CanUseIPVSProxier checks if we can use the ipvs Proxier. The ipset version and the scheduler are checked. If any virtual servers (VS) already exist with the configured scheduler, we just return. Otherwise we check if a dummy VS can be configured with the configured scheduler. Kernel modules will be loaded automatically if necessary.
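
As an illustration, here is a minimal sketch of calling this check directly. It assumes utilipvs.New() and utilipset.New(execer) as the utility constructors (these signatures have shifted between Kubernetes releases) and relies on the ipset utility interface also implementing IPSetVersioner:

package main

import (
	"fmt"

	"k8s.io/kubernetes/pkg/proxy/ipvs"
	utilipset "k8s.io/kubernetes/pkg/util/ipset"
	utilipvs "k8s.io/kubernetes/pkg/util/ipvs"
	utilexec "k8s.io/utils/exec"
)

func main() {
	execer := utilexec.New()
	ipvsInterface := utilipvs.New()         // assumed no-arg constructor
	ipsetInterface := utilipset.New(execer) // also satisfies IPSetVersioner via GetVersion

	if err := ipvs.CanUseIPVSProxier(ipvsInterface, ipsetInterface, "rr"); err != nil {
		fmt.Println("cannot use IPVS proxier:", err)
		return
	}
	fmt.Println("IPVS proxier is usable")
}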

func CleanupLeftovers

func CleanupLeftovers(ipvs utilipvs.Interface, ipt utiliptables.Interface, ipset utilipset.Interface) (encounteredError bool)

CleanupLeftovers cleans up all IPVS and iptables rules created by the ipvs Proxier.
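
A sketch of how a caller (for example, kube-proxy switching a node away from IPVS mode) might invoke it, under the same assumed utility constructors as above:

package main

import (
	"fmt"

	"k8s.io/kubernetes/pkg/proxy/ipvs"
	utiliptables "k8s.io/kubernetes/pkg/util/iptables"
	utilipset "k8s.io/kubernetes/pkg/util/ipset"
	utilipvs "k8s.io/kubernetes/pkg/util/ipvs"
	utilexec "k8s.io/utils/exec"
)

func main() {
	execer := utilexec.New()
	ipt := utiliptables.New(execer, utiliptables.ProtocolIPv4)

	// Cleanup is best-effort; the return value reports whether any
	// error was encountered along the way.
	if encounteredError := ipvs.CleanupLeftovers(utilipvs.New(), ipt, utilipset.New(execer)); encounteredError {
		fmt.Println("some IPVS/iptables leftovers could not be removed")
	}
}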

func GetUniqueRSName added in v1.11.5

func GetUniqueRSName(vs *utilipvs.VirtualServer, rs *utilipvs.RealServer) string

GetUniqueRSName returns a unique real-server (rs) name string that incorporates the virtual server (vs) information.

func NewDualStackProxier added in v1.16.0

func NewDualStackProxier(
	ipt [2]utiliptables.Interface,
	ipvs utilipvs.Interface,
	ipset utilipset.Interface,
	sysctl utilsysctl.Interface,
	exec utilexec.Interface,
	syncPeriod time.Duration,
	minSyncPeriod time.Duration,
	excludeCIDRs []string,
	strictARP bool,
	tcpTimeout time.Duration,
	tcpFinTimeout time.Duration,
	udpTimeout time.Duration,
	masqueradeAll bool,
	masqueradeBit int,
	localDetectors [2]proxyutiliptables.LocalTrafficDetector,
	hostname string,
	nodeIPs map[v1.IPFamily]net.IP,
	recorder events.EventRecorder,
	healthzServer healthcheck.ProxierHealthUpdater,
	scheduler string,
	nodePortAddresses []string,
	kernelHandler KernelHandler,
) (proxy.Provider, error)

NewDualStackProxier returns a new Proxier for dual-stack operation

Types

type GracefulTerminationManager added in v1.11.5

type GracefulTerminationManager struct {
	// contains filtered or unexported fields
}

GracefulTerminationManager manages real server (rs) graceful-termination information and performs the graceful-termination work. rsList is the list of real servers awaiting graceful termination, and ipvs is the IPVS interface used to do the delete/update work.

func NewGracefulTerminationManager added in v1.11.5

func NewGracefulTerminationManager(ipvs utilipvs.Interface) *GracefulTerminationManager

NewGracefulTerminationManager creates a GracefulTerminationManager to manage the graceful termination of IPVS real servers.

func (*GracefulTerminationManager) GracefulDeleteRS added in v1.11.5

func (m *GracefulTerminationManager) GracefulDeleteRS(vs *utilipvs.VirtualServer, rs *utilipvs.RealServer) error

GracefulDeleteRS updates the rs weight to 0 and adds the rs to the graceful-termination list.

func (*GracefulTerminationManager) InTerminationList added in v1.11.5

func (m *GracefulTerminationManager) InTerminationList(uniqueRS string) bool

InTerminationList checks whether the specified unique rs name is in the graceful-termination list.

func (*GracefulTerminationManager) MoveRSOutofGracefulDeleteList added in v1.11.5

func (m *GracefulTerminationManager) MoveRSOutofGracefulDeleteList(uniqueRS string) error

MoveRSOutofGracefulDeleteList deletes an rs immediately and removes it from the rsList.

func (*GracefulTerminationManager) Run added in v1.11.5

func (m *GracefulTerminationManager) Run()

Run starts a goroutine that tries to delete the real servers in the graceful-delete rsList at one-minute intervals.
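
Putting the pieces together, a hedged sketch of typical usage; the VirtualServer/RealServer field names follow pkg/util/ipvs, and the addresses below are hypothetical:

package main

import (
	"fmt"
	"net"

	"k8s.io/kubernetes/pkg/proxy/ipvs"
	utilipvs "k8s.io/kubernetes/pkg/util/ipvs"
)

func main() {
	handle := utilipvs.New() // assumed no-arg constructor
	gtm := ipvs.NewGracefulTerminationManager(handle)

	// Run starts the background goroutine that drains the delete list.
	gtm.Run()

	// Hypothetical virtual/real server pair mirroring the ipvsadm
	// example in the README above.
	vs := &utilipvs.VirtualServer{Address: net.ParseIP("10.0.0.1"), Protocol: "TCP", Port: 443, Scheduler: "rr"}
	rs := &utilipvs.RealServer{Address: net.ParseIP("192.168.0.1"), Port: 6443, Weight: 1}

	// Set the rs weight to 0 and queue it for graceful deletion.
	if err := gtm.GracefulDeleteRS(vs, rs); err != nil {
		fmt.Println("graceful delete failed:", err)
	}

	// The unique name from GetUniqueRSName keys the termination list.
	fmt.Println(gtm.InTerminationList(ipvs.GetUniqueRSName(vs, rs)))
}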

type IPSet added in v1.9.0

type IPSet struct {
	utilipset.IPSet
	// contains filtered or unexported fields
}

IPSet wraps util/ipset which is used by IPVS proxier.

func NewIPSet added in v1.9.0

func NewIPSet(handle utilipset.Interface, name string, setType utilipset.Type, isIPv6 bool, comment string) *IPSet

NewIPSet initializes a new IPSet struct.
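
A minimal sketch of wrapping a set, assuming utilipset.New(execer) as the handle constructor; EXAMPLE-SET is a hypothetical set name analogous to the KUBE-* sets listed in the README table above:

package main

import (
	"k8s.io/kubernetes/pkg/proxy/ipvs"
	utilipset "k8s.io/kubernetes/pkg/util/ipset"
	utilexec "k8s.io/utils/exec"
)

func main() {
	handle := utilipset.New(utilexec.New())

	// A hash:ip,port set; the comment string documents the set's purpose,
	// as the proxier does for each KUBE-* set.
	set := ipvs.NewIPSet(handle, "EXAMPLE-SET", utilipset.HashIPPort, false /* isIPv6 */, "example set for demonstration")
	_ = set
}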

type IPSetVersioner added in v1.9.0

type IPSetVersioner interface {
	// returns "X.Y"
	GetVersion() (string, error)
}

IPSetVersioner can query the current ipset version.
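
Because the interface has a single method, a test double is trivial. A sketch of a hypothetical fake for exercising CanUseIPVSProxier without a real ipset binary (KernelHandler can be faked the same way):

package ipvstest // hypothetical test package

// fakeIPSetVersioner satisfies IPSetVersioner by reporting a fixed
// version string (and optionally a fixed error).
type fakeIPSetVersioner struct {
	version string
	err     error
}

func (f *fakeIPSetVersioner) GetVersion() (string, error) { return f.version, f.err }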

type KernelHandler added in v1.10.0

type KernelHandler interface {
	GetKernelVersion() (string, error)
}

KernelHandler can handle the current installed kernel modules.

type LinuxKernelHandler added in v1.10.0

type LinuxKernelHandler struct {
	// contains filtered or unexported fields
}

LinuxKernelHandler implements KernelHandler interface.

func NewLinuxKernelHandler added in v1.10.0

func NewLinuxKernelHandler() *LinuxKernelHandler

NewLinuxKernelHandler initializes LinuxKernelHandler with exec.

func (*LinuxKernelHandler) GetKernelVersion added in v1.16.0

func (handle *LinuxKernelHandler) GetKernelVersion() (string, error)

GetKernelVersion returns currently running kernel version.

type NetLinkHandle added in v1.9.0

type NetLinkHandle interface {
	// EnsureAddressBind checks if address is bound to the interface and, if not, binds it.  If the address is already bound, return true.
	EnsureAddressBind(address, devName string) (exist bool, err error)
	// UnbindAddress unbind address from the interface
	UnbindAddress(address, devName string) error
	// EnsureDummyDevice checks if dummy device is exist and, if not, create one.  If the dummy device is already exist, return true.
	EnsureDummyDevice(devName string) (exist bool, err error)
	// DeleteDummyDevice deletes the given dummy device by name.
	DeleteDummyDevice(devName string) error
	// ListBindAddress will list all IP addresses which are bound in a given interface
	ListBindAddress(devName string) ([]string, error)
	// GetAllLocalAddresses return all local addresses on the node.
	// Only the addresses of the current family are returned.
	// IPv6 link-local and loopback addresses are excluded.
	GetAllLocalAddresses() (sets.Set[string], error)
	// GetLocalAddresses return all local addresses for an interface.
	// Only the addresses of the current family are returned.
	// IPv6 link-local and loopback addresses are excluded.
	GetLocalAddresses(dev string) (sets.Set[string], error)
	// GetAllLocalAddressesExcept return all local addresses on the node, except from the passed dev.
	// This is not the same as to take the diff between GetAllLocalAddresses and GetLocalAddresses
	// since an address can be assigned to many interfaces. This problem raised
	// https://github.com/kubernetes/kubernetes/issues/114815
	GetAllLocalAddressesExcept(dev string) (sets.Set[string], error)
}

NetLinkHandle wraps the netlink operations used by the IPVS proxier.

func NewNetLinkHandle added in v1.9.0

func NewNetLinkHandle(isIPv6 bool) NetLinkHandle

NewNetLinkHandle creates a new NetLinkHandle.
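
A sketch of typical usage; kube-ipvs0 is the dummy interface the IPVS proxier creates to hold service addresses, and the bound address here is a hypothetical service IP:

package main

import (
	"fmt"

	"k8s.io/kubernetes/pkg/proxy/ipvs"
)

func main() {
	nl := ipvs.NewNetLinkHandle(false /* isIPv6 */)

	// EnsureDummyDevice is idempotent: it creates the device only if it
	// does not already exist.
	if _, err := nl.EnsureDummyDevice("kube-ipvs0"); err != nil {
		panic(err)
	}

	// Bind a (hypothetical) service IP to the dummy device so the kernel
	// accepts packets for it, then list what is bound.
	if _, err := nl.EnsureAddressBind("10.0.0.1", "kube-ipvs0"); err != nil {
		panic(err)
	}
	addrs, err := nl.ListBindAddress("kube-ipvs0")
	if err != nil {
		panic(err)
	}
	fmt.Println(addrs)
}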

type Proxier

type Proxier struct {
	// contains filtered or unexported fields
}

Proxier is an ipvs based proxy for connections between a localhost:lport and services that provide the actual backends.

func NewProxier

func NewProxier(ipFamily v1.IPFamily,
	ipt utiliptables.Interface,
	ipvs utilipvs.Interface,
	ipset utilipset.Interface,
	sysctl utilsysctl.Interface,
	exec utilexec.Interface,
	syncPeriod time.Duration,
	minSyncPeriod time.Duration,
	excludeCIDRs []string,
	strictARP bool,
	tcpTimeout time.Duration,
	tcpFinTimeout time.Duration,
	udpTimeout time.Duration,
	masqueradeAll bool,
	masqueradeBit int,
	localDetector proxyutiliptables.LocalTrafficDetector,
	hostname string,
	nodeIP net.IP,
	recorder events.EventRecorder,
	healthzServer healthcheck.ProxierHealthUpdater,
	scheduler string,
	nodePortAddressStrings []string,
	kernelHandler KernelHandler,
) (*Proxier, error)

NewProxier returns a new Proxier given an iptables and ipvs Interface instance. Because of the iptables and ipvs logic, it is assumed that there is only a single Proxier active on a machine. An error will be returned if it fails to update or acquire the initial lock. Once a proxier is created, it will keep iptables and ipvs rules up to date in the background and will not terminate if a particular iptables or ipvs call fails.
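
A heavily simplified construction sketch. The utility constructor import paths (particularly the sysctl helper) and reasonable parameter values vary across Kubernetes releases, and the hostname and node IP below are hypothetical placeholders:

package main

import (
	"net"
	"time"

	v1 "k8s.io/api/core/v1"
	utilsysctl "k8s.io/component-helpers/node/util/sysctl"
	"k8s.io/kubernetes/pkg/proxy/ipvs"
	proxyutiliptables "k8s.io/kubernetes/pkg/proxy/util/iptables"
	utiliptables "k8s.io/kubernetes/pkg/util/iptables"
	utilipset "k8s.io/kubernetes/pkg/util/ipset"
	utilipvs "k8s.io/kubernetes/pkg/util/ipvs"
	utilexec "k8s.io/utils/exec"
)

func main() {
	execer := utilexec.New()

	proxier, err := ipvs.NewProxier(
		v1.IPv4Protocol,
		utiliptables.New(execer, utiliptables.ProtocolIPv4),
		utilipvs.New(),
		utilipset.New(execer),
		utilsysctl.New(),
		execer,
		30*time.Second, // syncPeriod
		0,              // minSyncPeriod (0 = sync on every change)
		nil,            // excludeCIDRs
		false,          // strictARP
		0,              // tcpTimeout (0 = kernel default)
		0,              // tcpFinTimeout
		0,              // udpTimeout
		false,          // masqueradeAll
		14,             // masqueradeBit (the 0x4000 mark in the rules above)
		proxyutiliptables.NewNoOpLocalDetector(),
		"node-1",                    // hostname (hypothetical)
		net.ParseIP("192.168.0.10"), // nodeIP (hypothetical)
		nil,                         // recorder
		nil,                         // healthzServer
		"rr",                        // scheduler
		nil,                         // nodePortAddresses
		ipvs.NewLinuxKernelHandler(),
	)
	if err != nil {
		panic(err)
	}

	// SyncLoop blocks and keeps the IPVS/iptables/ipset state in sync;
	// the On*Add/Update/Delete handlers feed changes in, and Sync()
	// requests an immediate resync.
	proxier.SyncLoop()
}

NewDualStackProxier follows the same shape, taking paired IPv4/IPv6 iptables interfaces and local traffic detectors.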

func (*Proxier) OnEndpointSliceAdd added in v1.16.0

func (proxier *Proxier) OnEndpointSliceAdd(endpointSlice *discovery.EndpointSlice)

OnEndpointSliceAdd is called whenever creation of a new endpoint slice object is observed.

func (*Proxier) OnEndpointSliceDelete added in v1.16.0

func (proxier *Proxier) OnEndpointSliceDelete(endpointSlice *discovery.EndpointSlice)

OnEndpointSliceDelete is called whenever deletion of an existing endpoint slice object is observed.

func (*Proxier) OnEndpointSliceUpdate added in v1.16.0

func (proxier *Proxier) OnEndpointSliceUpdate(_, endpointSlice *discovery.EndpointSlice)

OnEndpointSliceUpdate is called whenever modification of an existing endpoint slice object is observed.

func (*Proxier) OnEndpointSlicesSynced added in v1.16.0

func (proxier *Proxier) OnEndpointSlicesSynced()

OnEndpointSlicesSynced is called once all the initial event handlers were called and the state is fully propagated to local cache.

func (*Proxier) OnNodeAdd added in v1.17.0

func (proxier *Proxier) OnNodeAdd(node *v1.Node)

OnNodeAdd is called whenever creation of a new node object is observed.

func (*Proxier) OnNodeDelete added in v1.17.0

func (proxier *Proxier) OnNodeDelete(node *v1.Node)

OnNodeDelete is called whenever deletion of an existing node object is observed.

func (*Proxier) OnNodeSynced added in v1.17.0

func (proxier *Proxier) OnNodeSynced()

OnNodeSynced is called once all the initial event handlers were called and the state is fully propagated to local cache.

func (*Proxier) OnNodeUpdate added in v1.17.0

func (proxier *Proxier) OnNodeUpdate(oldNode, node *v1.Node)

OnNodeUpdate is called whenever modification of an existing node object is observed.

func (*Proxier) OnServiceAdd

func (proxier *Proxier) OnServiceAdd(service *v1.Service)

OnServiceAdd is called whenever creation of a new service object is observed.

func (*Proxier) OnServiceDelete

func (proxier *Proxier) OnServiceDelete(service *v1.Service)

OnServiceDelete is called whenever deletion of an existing service object is observed.

func (*Proxier) OnServiceSynced

func (proxier *Proxier) OnServiceSynced()

OnServiceSynced is called once all the initial event handlers were called and the state is fully propagated to local cache.

func (*Proxier) OnServiceUpdate

func (proxier *Proxier) OnServiceUpdate(oldService, service *v1.Service)

OnServiceUpdate is called whenever modification of an existing service object is observed.

func (*Proxier) Sync

func (proxier *Proxier) Sync()

Sync is called to synchronize the proxier state to iptables and ipvs as soon as possible.

func (*Proxier) SyncLoop

func (proxier *Proxier) SyncLoop()

SyncLoop runs periodic work. This is expected to run as a goroutine or as the main loop of the app. It does not return.

