Documentation ¶
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type CertCache ¶
type CertCache struct {
// contains filtered or unexported fields
}
func New ¶
func New(certs []*x509.Certificate, ocspCache string) *CertCache
Callers must call Init() on the returned CertCache before using it.
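A minimal construction sketch, not taken from the package itself: the certificate-loading code uses only the standard library, the import path and file paths are placeholders, and since Init's signature isn't shown on this page, the call below assumes it returns an error.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"

	"example.com/certcache" // placeholder: substitute the package's real import path
)

func main() {
	// Load the signing certificate chain from a PEM file (illustrative path).
	pemBytes, err := os.ReadFile("cert.pem")
	if err != nil {
		log.Fatal(err)
	}
	var certs []*x509.Certificate
	for block, rest := pem.Decode(pemBytes); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		certs = append(certs, cert)
	}

	cache := certcache.New(certs, "/tmp/ocspcache") // second arg: where to cache OCSP responses
	if err := cache.Init(); err != nil {            // assumed signature: Init() error
		log.Fatal(err)
	}
}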
func (*CertCache) IsHealthy ¶
If we've been unable to fetch a fresh OCSP response before expiry of the old one, or, at server start-up, if we're unable to fetch a valid OCSP response at all (either from disk or network), then return false. This signals to the packager that it should not try to package anything, and should just proxy the content unsigned. This is per sleevi requirement (see the sketch after the quote):
- Some idea of what to do when "things go bad". What happens when it's been 7 days, no new OCSP response can be obtained, and the current response is about to expire?
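A hypothetical serving-path sketch of that fallback. The doc above says IsHealthy "return[s] false" in the bad case, so this assumes a bool return; proxyUnsigned and serveSignedExchange are illustrative stand-ins, not package functions.

import (
	"net/http"

	"example.com/certcache" // placeholder import path, as above
)

// Hypothetical handlers, stubbed so the sketch is self-contained.
func proxyUnsigned(w http.ResponseWriter, req *http.Request)       { /* proxy content as-is */ }
func serveSignedExchange(w http.ResponseWriter, req *http.Request) { /* package and sign */ }

func handle(cache *certcache.CertCache, w http.ResponseWriter, req *http.Request) {
	// Per the doc above: when the cache is unhealthy, don't try to
	// package; serve the content unsigned instead.
	if !cache.IsHealthy() { // assumed bool return, per "return false" above
		proxyUnsigned(w, req)
		return
	}
	serveSignedExchange(w, req)
}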
type Chained ¶
type Chained struct {
// contains filtered or unexported fields
}
Represents a file backed by two Updateables. If the first is expired, the second is consulted; only if both are expired is update() run (and the contents of both Updateables updated).
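A hypothetical re-implementation of that fallback logic in terms of the package's Updateable interface (shown below under type Updateable). The real Chained's fields are unexported; first and second here are illustrative names.

import "context"

// chained is an illustrative stand-in for Chained.
type chained struct {
	first, second Updateable
}

func (c *chained) Read(ctx context.Context, isExpired func([]byte) bool, update func([]byte) []byte) ([]byte, error) {
	var innerErr error
	contents, err := c.first.Read(ctx, isExpired, func(old []byte) []byte {
		// The first copy is expired, so consult the second. second.Read
		// only calls update() if the second is also expired, writing the
		// result back to the second; first.Read then writes whatever is
		// returned here back to the first, so both end up updated.
		fresh, err := c.second.Read(ctx, isExpired, update)
		if err != nil {
			innerErr = err
			return old // keep the first's stale contents on failure
		}
		return fresh
	})
	if innerErr != nil {
		return nil, innerErr
	}
	return contents, err
}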
type InMemory ¶
type InMemory struct {
// contains filtered or unexported fields
}
Represents an in-memory copy of a file.
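A minimal sketch of what an in-memory Updateable can look like (illustrative; not the package's actual fields): a mutex-guarded byte slice, with the expiry check and update done under the lock.

import (
	"context"
	"sync"
)

// inMemory is an illustrative stand-in for InMemory.
type inMemory struct {
	mu       sync.Mutex
	contents []byte
}

func (m *inMemory) Read(ctx context.Context, isExpired func([]byte) bool, update func([]byte) []byte) ([]byte, error) {
	m.mu.Lock()
	defer m.mu.Unlock()
	if isExpired(m.contents) {
		m.contents = update(m.contents)
	}
	return m.contents, nil
}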
type LocalFile ¶
type LocalFile struct {
// contains filtered or unexported fields
}
Uses the OS's file locking mechanisms to obtain shared/exclusive locks to ensure update() is only called once. This is probably good enough for a few processes running on one server.
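A minimal sketch of that flock pattern (unix-only; not the package's actual code, and the function name is illustrative): take a shared lock to read, convert to an exclusive lock only when an update is needed, and re-check expiry afterwards because another process may have refreshed the file while we waited.

import (
	"io"
	"os"
	"syscall"
)

func readWithLock(path string, isExpired func([]byte) bool, update func([]byte) []byte) ([]byte, error) {
	f, err := os.OpenFile(path, os.O_RDWR|os.O_CREATE, 0600)
	if err != nil {
		return nil, err
	}
	defer f.Close() // closing also releases the flock

	// Shared lock: any number of readers may hold it concurrently.
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_SH); err != nil {
		return nil, err
	}
	contents, err := io.ReadAll(f)
	if err != nil || !isExpired(contents) {
		return contents, err
	}

	// Convert to an exclusive lock, then re-read and re-check: another
	// process may have refreshed the file while we waited for the lock.
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		return nil, err
	}
	if _, err := f.Seek(0, io.SeekStart); err != nil {
		return nil, err
	}
	if contents, err = io.ReadAll(f); err != nil {
		return nil, err
	}
	if isExpired(contents) {
		contents = update(contents)
		if err := f.Truncate(0); err != nil {
			return nil, err
		}
		if _, err := f.WriteAt(contents, 0); err != nil {
			return nil, err
		}
	}
	return contents, nil
}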
For more processes than that, or for a distributed deployment over NFS, more reading and testing would be needed to confirm this is safe. I'm not an expert on distributed systems; http://0pointer.de/blog/projects/locking.html and https://gavv.github.io/blog/file-locks/ have lots of warnings, and I haven't found any documentation on how NFS decides on an exclusive lock owner when there's contention. https://tools.ietf.org/html/rfc3530#section-8.1.5 suggests NFSv4 supports a lock-sequencing mechanism that I assume won't result in starvation, but I don't know how well various clients & servers support it.
Users interested in scaling this widely may want to implement their own Updateable using some reasonable remote storage / leader election libraries.
type Updateable ¶
type Updateable interface {
	// Reads the contents of the file. Calls isExpired(contents); if true,
	// then it calls update() and writes the returned contents back to the
	// file.
	Read(ctx context.Context, isExpired func([]byte) bool, update func([]byte) []byte) ([]byte, error)
}
This is an abstraction over a single file on a remote storage mechanism. It is meant for use cases that are mostly reads. The update callback is assumed to be expensive, so it should be coordinated among all replicas and run only once.
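A hypothetical caller sketch using the interface above. The empty-means-expired check and fetchFromResponder are illustrative only; a real isExpired would parse the cached OCSP response and compare its NextUpdate to the current time.

import (
	"context"
	"errors"
)

func freshOCSP(ctx context.Context, cache Updateable) ([]byte, error) {
	return cache.Read(ctx,
		// isExpired: illustratively treat an empty cache as expired.
		func(contents []byte) bool { return len(contents) == 0 },
		// update: fetch a new response; on failure, keep the old copy
		// so the caller can decide how to degrade.
		func(old []byte) []byte {
			der, err := fetchFromResponder(ctx)
			if err != nil {
				return old
			}
			return der
		})
}

// fetchFromResponder stands in for a network fetch from the CA's OCSP responder.
func fetchFromResponder(ctx context.Context) ([]byte, error) {
	return nil, errors.New("not implemented in this sketch")
}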