Documentation ¶
Overview ¶
Package moxio has common i/o functions.
Index ¶
- Variables
- func Base64Writer(w io.Writer) io.WriteCloser
- func IsClosed(err error) bool
- func IsStorageSpace(err error) bool
- func LinkOrCopy(log mlog.Log, dst, src string, srcReaderOpt io.Reader, sync bool) (rerr error)
- func SyncDir(log mlog.Log, dir string) error
- func TLSInfo(conn *tls.Conn) (version, ciphersuite string)
- type AtReader
- type Bufpool
- type LimitAtReader
- type LimitReader
- type PrefixConn
- type TraceReader
- type TraceWriter
- type Work
- type WorkQueue
Constants ¶
This section is empty.
Variables ¶
var ErrLimit = errors.New("input exceeds maximum size") // Returned by LimitReader.
var ErrLineTooLong = errors.New("line from remote too long") // Returned by Bufpool.Readline.
Functions ¶
func Base64Writer ¶ added in v0.0.6
func Base64Writer(w io.Writer) io.WriteCloser
Base64Writer turns a writer for raw data into one that writes base64-encoded content on \r\n-separated lines of at most 78 characters (plus the 2-character line ending).
func IsClosed ¶
IsClosed returns whether i/o failed, typically because the connection is closed or otherwise cannot be used for further i/o.
Used to prevent error logging for connections that are closed.
func IsStorageSpace ¶
IsStorageSpace returns whether the error indicates a storage space problem, such as a full disk, exhausted inodes, or a reached quota.
func LinkOrCopy ¶ added in v0.0.6
LinkOrCopy attempts to create dst as a hard link to src. If that fails, it falls back to a regular file copy. If srcReaderOpt is not nil, it is used for reading. If sync is true and the file is copied, Sync is called on the file after writing to ensure the file is written to disk. Callers should also sync the directory of the destination file, but may want to do that only after linking/copying multiple files. If dst was created and an error occurred, dst is removed.
Types ¶
type Bufpool ¶
type Bufpool struct {
// contains filtered or unexported fields
}
Bufpool caches byte slices for reuse during parsing of line-terminated commands.
func NewBufpool ¶
NewBufpool makes a new pool, initially empty, but holding at most "max" buffers of "size" bytes each.
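The idea behind a bounded buffer pool can be sketched with a buffered channel. A hypothetical stand-in (not moxio's implementation; bufpool, newBufpool, get, and put are invented names):

```go
package main

import "fmt"

// bufpool holds at most cap(c) buffers of a fixed size for reuse.
type bufpool struct {
	c    chan []byte
	size int
}

func newBufpool(max, size int) *bufpool {
	return &bufpool{c: make(chan []byte, max), size: size}
}

// get returns a cached buffer if one is available, allocating otherwise.
func (p *bufpool) get() []byte {
	select {
	case buf := <-p.c:
		return buf
	default:
		return make([]byte, p.size)
	}
}

// put returns a buffer to the pool, dropping it if the pool is full.
func (p *bufpool) put(buf []byte) {
	select {
	case p.c <- buf:
	default:
	}
}

func main() {
	p := newBufpool(2, 8)
	b := p.get()
	p.put(b)
	b2 := p.get()
	fmt.Println(len(b2), &b[0] == &b2[0]) // 8 true: the buffer was reused
}
```

The real Bufpool additionally provides a Readline helper that returns ErrLineTooLong when a line does not fit in a buffer.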
type LimitAtReader ¶
LimitAtReader is an io.ReaderAt that returns ErrLimit when a read would extend beyond Limit.
type LimitReader ¶
LimitReader reads up to Limit bytes, returning an error if more bytes are read. LimitReader can be used to enforce a maximum input length.
type PrefixConn ¶
type PrefixConn struct {
	PrefixReader io.Reader // If not nil, reads are fulfilled from here. It is cleared when a read returns io.EOF.
	net.Conn
}
PrefixConn is a net.Conn prefixed with a reader that is drained first. Used for STARTTLS, where a buffered read may already have consumed initial TLS data.
type TraceReader ¶
type TraceReader struct {
// contains filtered or unexported fields
}
func NewTraceReader ¶
NewTraceReader wraps reader "r" into a reader that logs all reads to "log" with log level trace, prefixed with "prefix".
func (*TraceReader) Read ¶
func (r *TraceReader) Read(buf []byte) (int, error)
Read does a single Read on its underlying reader, logs the data of successful reads, and returns the number of bytes read.
func (*TraceReader) SetTrace ¶
func (r *TraceReader) SetTrace(level slog.Level)
type TraceWriter ¶
type TraceWriter struct {
// contains filtered or unexported fields
}
func NewTraceWriter ¶
NewTraceWriter wraps "w" into a writer that logs all writes to "log" with log level trace, prefixed with "prefix".
func (*TraceWriter) SetTrace ¶
func (w *TraceWriter) SetTrace(level slog.Level)
type WorkQueue ¶ added in v0.0.7
type WorkQueue[T, R any] struct {
	// contains filtered or unexported fields
}
WorkQueue can be used to execute a workload in which many items are processed with a slow step, and where a pool of worker goroutines executing that slow step helps. For example, reading messages from the database file is fast and cannot easily be done concurrently, but reading each message file from disk and parsing its headers is the bottleneck. The work queue manages the goroutines that read the message files from disk and parse them.
func NewWorkQueue ¶ added in v0.0.7
func NewWorkQueue[T, R any](procs, size int, preparer func(in, out chan Work[T, R]), process func(T, R) error) *WorkQueue[T, R]
NewWorkQueue creates a new work queue with "procs" goroutines, and a total work queue size of "size" (e.g. 2*procs). The worker goroutines run "preparer", which should be a loop receiving work from "in" and sending the work result (with Err or Out set) on "out". The preparer function should return when the "in" channel is closed, the signal to stop. WorkQueue processes results in the order the work went in, so prepared work that was scheduled after earlier, not-yet-prepared work waits in the queue.
func (*WorkQueue[T, R]) Add ¶ added in v0.0.7
Add adds new work to be prepared to the queue. If the queue is full, it waits until space becomes available, i.e. when the head of the queue has work that becomes prepared. Add processes the prepared items to make space available.