Documentation ¶
Index ¶
- type Proxy
- func (p *Proxy) Close(ctx context.Context)
- func (p *Proxy) InterfaceCreated(intf networking.Iface)
- func (p *Proxy) InterfaceDeleted(intf networking.Iface)
- func (p *Proxy) SetIPContext(ctx context.Context, conn *networkservice.Connection, ...) error
- func (p *Proxy) SetVIPs(vips []string)
- func (p *Proxy) UnsetIPContext(ctx context.Context, conn *networkservice.Connection, ...) error
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Proxy ¶
type Proxy struct {
	Bridge     networking.Bridge
	Subnets    map[ipamAPI.IPFamily]*ipamAPI.Subnet
	IpamClient ipamAPI.IpamClient
	// contains filtered or unexported fields
}
Proxy -
func NewProxy ¶
func NewProxy(conduit *nspAPI.Conduit, nodeName string, ipamClient ipamAPI.IpamClient, ipFamily string, netUtils networking.Utils) *Proxy
NewProxy -
func (*Proxy) InterfaceCreated ¶
func (p *Proxy) InterfaceCreated(intf networking.Iface)
InterfaceCreated -
Note: An NSM connection for which there is already a networking.Iface linked to the bridge can be updated on the fly. Such an update might change the src/dst addresses, for example, which could confuse the bridge if interfaces were always compared on all networking.Iface fields. Therefore, the bridge is expected to handle such cases, for example by also doing a lookup based on interface index. (We expect only one networking.Iface per interface index at a time, which means the proxy must handle interface delete events originating from the kernel to reconfigure linked interfaces.)
Note: Seemingly, upon NSM connection refresh the src and dst addresses might appear in a different order than before.
Note: Watch out when using intf.GetName(): by default it loads the interface name into the underlying struct, which can break logic built on top of comparing interfaces, as that comparison relies on "DeepEqual".
func (*Proxy) InterfaceDeleted ¶
func (p *Proxy) InterfaceDeleted(intf networking.Iface)
InterfaceDeleted - Previously this was called only upon NSM connection close, when the network interface was still available in the kernel. Due to improvements around interfaceMonitor, also expect a callback on interface remove events originating from the kernel (only the interface index and name will be set).
Note: IMHO there's no need for a callback with a missing interface index upon NSM connection close: either the interface exists during close, or it does not, and in the latter case a kernel event is supposed to take care of unlinking. (Therefore, there is no need for an additional callback that contains only address info.) Moreover, this way we don't have to deal with InterfaceDeleted events being spammed by NSM heal when a reconnect attempt fails and heal orders the close of a non-established connection.
Note: I'm a bit puzzled about abrupt proxy container restart. Although the proxy won't remember the next hop addresses, the associated routes might remain in the kernel until source routing of VIPs gets updated for the first time. This might even be beneficial, as such a route can provide continuity of egress communication assuming the associated LB is up. (Old interfaces remain in the POD until NSM cleans up the related connections.)
Note: Might get called in parallel for multiple NSC connections, for example during NSM heal, so locking of shared resources is required.
func (*Proxy) SetIPContext ¶
func (p *Proxy) SetIPContext(ctx context.Context, conn *networkservice.Connection, interfaceType networking.InterfaceType) error
SetIPContext XXX: What should we do about new connection establishment requests that fail, where the allocated IP addresses might be leaked if the originating NSC gives up for some reason? On the other hand, would there be any unwanted side effects of calling UnsetIPContext() from the ipcontext Server/Client when a Request to establish a new connection fails? NSM retry clients like fullMeshClient clone the request on each new try, and thus won't cache any assigned IPs. However, NSM heal with reselect seems weird, as it keeps closing and re-requesting the connection, including the old IPs, until it either succeeds to reconnect or the "user" closes the connection. Thus, due to heal (with reconnect), the IPs might get updated in the NSC case if someone happened to allocate them between two reconnect attempts. This doesn't seem to be a problem, as it should update the connection accordingly. Based on the above, IMHO it would make sense to release allocated IPs from the ipcontext Server/Client upon unsuccessful Requests where the NSM connection was not established. In the server case, though, it's unlikely that the Request would fail at the proxy.
func (*Proxy) UnsetIPContext ¶ added in v0.6.0
func (p *Proxy) UnsetIPContext(ctx context.Context, conn *networkservice.Connection, interfaceType networking.InterfaceType, delay time.Duration) error