Documentation ¶
Overview ¶
Package accesslogs can handle collection and upload of arbitrarily formatted server access logs in the fashion of S3's server access logging.
Copyright (C) 2024 Storj Labs, Inc. See LICENSE for copying information.
Index ¶
Constants ¶
This section is empty.
Variables ¶
var (
	// Error is the error class for this package.
	Error = errs.Class("accesslogs")

	// ErrClosed means that the peer has already been closed.
	ErrClosed = errors.New("closed")

	// ErrTooLarge means that the provided payload is too large.
	ErrTooLarge = errors.New("entry too large")

	// ErrQueueLimit means that the upload queue limit has been reached.
	ErrQueueLimit = errors.New("upload queue limit reached")
)
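These sentinel errors can be distinguished with errors.Is. A minimal sketch, where the import path and the helper name are assumptions made for illustration:

package accesslogsexample

import (
	"errors"

	"storj.io/edge/pkg/accesslogs" // assumed import path
)

// classifyQueueError is a hypothetical helper that maps the package's
// sentinel errors to human-readable reasons using errors.Is.
func classifyQueueError(err error) string {
	switch {
	case err == nil:
		return "queued"
	case errors.Is(err, accesslogs.ErrTooLarge):
		return "entry exceeds the entry size limit"
	case errors.Is(err, accesslogs.ErrQueueLimit):
		return "upload queue limit reached"
	case errors.Is(err, accesslogs.ErrClosed):
		return "processor already closed"
	default:
		return "unexpected error"
	}
}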
Functions ¶
This section is empty.
Types ¶
type Key ¶
Key is a key under which logs for the specified project ID and bucket can be queued. It's not a key under which packed logs are saved.
type Options ¶
type Options struct {
	DefaultEntryLimit    memory.Size `user:"true" help:"log entry size limit" default:"2KiB"`
	DefaultShipmentLimit memory.Size `user:"true" help:"log file size limit" default:"63MiB"`

	UploadingOptions struct {
		QueueLimit      int           `user:"true" help:"log file upload queue limit" default:"100"`
		RetryLimit      int           `user:"true" help:"maximum number of retries for log file uploads" default:"3"`
		ShutdownTimeout time.Duration `user:"true" help:"time limit waiting for queued logs to finish uploading when gateway is shutting down" default:"1m"`
	}
}
Options defines how Processor should be configured when initialized.
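For reference, a sketch of an Options value populated by hand with the documented defaults; in the gateway these values normally come from the config loader reading the struct tags (the import path is an assumption):

package accesslogsexample

import (
	"time"

	"storj.io/common/memory"
	"storj.io/edge/pkg/accesslogs" // assumed import path
)

// defaultOptions mirrors the defaults declared in the struct tags above.
func defaultOptions() accesslogs.Options {
	var opts accesslogs.Options
	opts.DefaultEntryLimit = 2 * memory.KiB              // default:"2KiB"
	opts.DefaultShipmentLimit = 63 * memory.MiB          // default:"63MiB"
	opts.UploadingOptions.QueueLimit = 100               // default:"100"
	opts.UploadingOptions.RetryLimit = 3                 // default:"3"
	opts.UploadingOptions.ShutdownTimeout = time.Minute  // default:"1m"
	return opts
}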
type Processor ¶
type Processor struct {
// contains filtered or unexported fields
}
Processor is a log collection engine that works together with a concurrently running uploader tasked with uploading to the Storage implementation. Logs are collected and packaged, and a package is uploaded when it reaches a certain (configurable) size.
func NewProcessor ¶
NewProcessor returns an initialized Processor.
func (*Processor) Close ¶
Close stops Processor. Upon a call to Close, all buffers are flushed, and the call blocks until all flushing and uploading is done.
Close is like http.Server's Shutdown, which means it must be called while Processor is still Run-ning to gracefully shut it down.
TODO(artur): rename to Shutdown?
TODO(artur): make it take context.Context instead of exposing just the shutdown timer?
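A rough lifecycle sketch of the relationship described above, under the assumption that Run blocks until shutdown and that both Run and Close return an error (their signatures are not shown in this documentation):

package accesslogsexample

import (
	"storj.io/edge/pkg/accesslogs" // assumed import path
)

// runUntilDone is a hypothetical wrapper: it runs the processor in the
// background and, like http.Server's Shutdown, calls Close while the
// processor is still Run-ning so that buffered logs are flushed and
// uploaded (bounded by UploadingOptions.ShutdownTimeout).
func runUntilDone(p *accesslogs.Processor, done <-chan struct{}) error {
	runErr := make(chan error, 1)
	go func() {
		runErr <- p.Run() // assumed to block until the processor stops
	}()

	<-done // e.g. the gateway received a termination signal

	if err := p.Close(); err != nil {
		return err
	}
	return <-runErr
}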
func (*Processor) QueueEntry ¶
QueueEntry saves another entry under key for packaging and upload. The provided access grant will be used for the upload.
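A sketch of queueing an S3-style entry; the parameter list of QueueEntry shown here (access grant, key, entry value) is an assumption, since its signature is not shown above:

package accesslogsexample

import (
	"storj.io/edge/pkg/accesslogs" // assumed import path
	"storj.io/uplink"
)

// queueS3Entry is a hypothetical helper: it formats an S3-style entry and
// queues it under key; the access grant is the one used for the upload.
func queueS3Entry(p *accesslogs.Processor, access *uplink.Access, key accesslogs.Key, o accesslogs.S3AccessLogEntryOptions) error {
	return p.QueueEntry(access, key, accesslogs.NewS3AccessLogEntry(o))
}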
type S3AccessLogEntry ¶ added in v1.82.0
type S3AccessLogEntry struct {
// contains filtered or unexported fields
}
S3AccessLogEntry represents an S3-style server access log entry.
func NewS3AccessLogEntry ¶ added in v1.82.0
func NewS3AccessLogEntry(o S3AccessLogEntryOptions) *S3AccessLogEntry
NewS3AccessLogEntry creates a new S3AccessLogEntry.
It assumes that all relevant fields are already escaped.
func (S3AccessLogEntry) Size ¶ added in v1.82.0
func (e S3AccessLogEntry) Size() memory.Size
Size returns the size of the entry.
func (S3AccessLogEntry) String ¶ added in v1.82.0
func (e S3AccessLogEntry) String() string
String returns the formatted entry (as per https://docs.aws.amazon.com/AmazonS3/latest/userguide/LogFormat.html).
type S3AccessLogEntryOptions ¶ added in v1.82.0
type S3AccessLogEntryOptions struct {
	BucketOwner        string        // example: 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be
	Bucket             string        // example: DOC-EXAMPLE-BUCKET1
	Time               time.Time     // example: [06/Feb/2019:00:00:38 +0000]
	RemoteIP           string        // example: 192.0.2.3
	Requester          string        // example: 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be
	RequestID          string        // example: 3E57427F33A59F07
	Operation          string        // example: REST.PUT.OBJECT
	Key                string        // example: /photos/2019/08/puppy.jpg
	RequestURI         string        // example: "GET /DOC-EXAMPLE-BUCKET1/photos/2019/08/puppy.jpg?x-foo=bar HTTP/1.1"
	HTTPStatus         int           // example: 200
	ErrorCode          string        // example: NoSuchBucket
	BytesSent          int64         // example: 2662992
	ObjectSize         *int64        // example: 3462992
	TotalTime          time.Duration // example: 70
	TurnAroundTime     time.Duration // example: 10
	Referer            string        // example: "http://www.example.com/webservices"
	UserAgent          string        // example: "curl/7.15.1"
	VersionID          string        // example: 3HL4kqtJvjVBH40Nrjfkd
	HostID             string        // example: s9lzHYrFp76ZVxRcpX9+5cjAnEH2ROuNkd2BHfIa6UkFVdtjf5mKR3/eTPFvsiP/XV/VLi31234=
	SignatureVersion   string        // example: SigV2
	CipherSuite        string        // example: ECDHE-RSA-AES128-GCM-SHA256
	AuthenticationType string        // example: AuthHeader
	HostHeader         string        // example: s3.us-west-2.amazonaws.com
	TLSVersion         string        // example: TLSv1.2
	AccessPointARN     string        // example: arn:aws:s3:us-east-1:123456789012:accesspoint/example-AP
	ACLRequired        string        // example: Yes
}
S3AccessLogEntryOptions represents all fields needed to produce an S3-style server access log entry.
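A sketch that fills in a subset of the fields above with the documented example values and formats the entry (the import path is an assumption):

package accesslogsexample

import (
	"fmt"
	"time"

	"storj.io/edge/pkg/accesslogs" // assumed import path
)

// printExampleEntry builds an entry from the example values documented
// above; fields that are left out keep their zero values.
func printExampleEntry() {
	objectSize := int64(3462992)
	entry := accesslogs.NewS3AccessLogEntry(accesslogs.S3AccessLogEntryOptions{
		BucketOwner: "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be",
		Bucket:      "DOC-EXAMPLE-BUCKET1",
		Time:        time.Date(2019, time.February, 6, 0, 0, 38, 0, time.UTC),
		RemoteIP:    "192.0.2.3",
		Operation:   "REST.PUT.OBJECT",
		Key:         "/photos/2019/08/puppy.jpg",
		RequestURI:  "GET /DOC-EXAMPLE-BUCKET1/photos/2019/08/puppy.jpg?x-foo=bar HTTP/1.1",
		HTTPStatus:  200,
		BytesSent:   2662992,
		ObjectSize:  &objectSize,
	})

	// String renders the AWS-style log line; Size reports the entry's size,
	// which is checked against the configured entry size limit.
	fmt.Println(entry.Size(), entry.String())
}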
type StorjStorage ¶
type StorjStorage struct {
// contains filtered or unexported fields
}
StorjStorage is an implementation of Storage that allows uploading to Storj via libuplink.
func NewStorjStorage ¶ added in v1.84.0
func NewStorjStorage(access *uplink.Access) *StorjStorage
NewStorjStorage creates a new instance of StorjStorage with the given access grant and returns a pointer to it.
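A sketch of constructing the storage from a serialized access grant (the import path is an assumption; the grant string comes from the caller):

package accesslogsexample

import (
	"storj.io/edge/pkg/accesslogs" // assumed import path
	"storj.io/uplink"
)

// storageFromSerializedGrant parses a serialized access grant and wraps it
// in a StorjStorage, which uploads packed logs via libuplink.
func storageFromSerializedGrant(serialized string) (*accesslogs.StorjStorage, error) {
	access, err := uplink.ParseAccess(serialized)
	if err != nil {
		return nil, err
	}
	return accesslogs.NewStorjStorage(access), nil
}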
func (StorjStorage) SerializedAccessGrant ¶ added in v1.84.0
func (s StorjStorage) SerializedAccessGrant() (string, error)
SerializedAccessGrant returns a serialized form of the access grant used for this storage.