Documentation ¶
Index ¶
- Variables
- type BigLog
- func (bl *BigLog) After(t time.Time) (int64, error)
- func (bl *BigLog) Close() error
- func (bl *BigLog) Delete(force bool) (err error)
- func (bl *BigLog) DirPath() string
- func (bl *BigLog) Info() (*Info, error)
- func (bl *BigLog) Latest() int64
- func (bl *BigLog) Name() string
- func (bl *BigLog) Oldest() int64
- func (bl *BigLog) ReadFrom(src io.Reader) (written int64, err error)
- func (bl *BigLog) SetOpts(opts ...Option)
- func (bl *BigLog) Split() error
- func (bl *BigLog) Sync() error
- func (bl *BigLog) Trim() (err error)
- func (bl *BigLog) Write(b []byte) (written int, err error)
- func (bl *BigLog) WriteN(b []byte, n int) (written int, err error)
- type Entry
- type IndexReader
- func (r *IndexReader) Close() error
- func (r *IndexReader) Head() int64
- func (r *IndexReader) ReadEntries(n int) (entries []*Entry, err error)
- func (r *IndexReader) ReadSection(maxOffsets, maxBytes int64) (is *IndexSection, err error)
- func (r *IndexReader) Seek(offset int64, whence int) (ret int64, err error)
- type IndexSection
- type Info
- type Option
- type Reader
- type Scanner
- type ScannerOption
- type SegInfo
- type StreamDelta
- type Streamer
- type Watcher
Constants ¶
This section is empty.
Variables ¶
var (
	// ErrEmbeddedOffset is returned when the offset is embedded in a batch
	// and cannot be retrieved individually.
	ErrEmbeddedOffset = errors.New("biglog: embedded offset")

	// ErrNotFound is returned when the requested offset is not in the log.
	ErrNotFound = errors.New("biglog: offset not found")

	// ErrLastSegment is returned when trying to delete the only segment in the biglog.
	// To delete all segments use BigLog.Delete().
	ErrLastSegment = errors.New("biglog: last segment can't be deleted")

	// ErrInvalid is returned when the biglog format is not recognized.
	ErrInvalid = errors.New("biglog: invalid biglog")

	// ErrBusy is returned when there are active readers or watchers while trying
	// to close/delete the biglog.
	ErrBusy = errors.New("biglog: resource busy")
)
var (
	// ErrSegmentFull is returned when the index does not have capacity left.
	ErrSegmentFull = errors.New("biglog: segment full")

	// ErrSegmentBusy is returned when trying to delete a segment that is being read.
	ErrSegmentBusy = errors.New("biglog: segment busy")

	// ErrLoadSegment is returned when segment files could not be loaded; the reason should be logged.
	ErrLoadSegment = errors.New("biglog: failed to load segment")

	// ErrRONotFound is returned when the requested relative offset is not in the segment.
	ErrRONotFound = errors.New("biglog: relative offset not found in segment")

	// ErrROInvalid is returned when the requested offset is out of range.
	ErrROInvalid = errors.New("biglog: invalid relative offset 0 < RO < 4294967295")
)
var ErrEntryTooLong = errors.New("biglog.Scanner: entry too long")
ErrEntryTooLong is returned when the entry is too big to fit in the allowed buffer size.
var ErrInvalidIndexReader = errors.New("biglog: invalid reader - use NewIndexReader")
ErrInvalidIndexReader is returned on reads with a nil pointer.
var ErrInvalidReader = errors.New("biglog: invalid reader - use NewReader")
ErrInvalidReader is returned on reads with a nil pointer.
var ErrInvalidScanner = errors.New("biglog: invalid reader - use NewScanner")
ErrInvalidScanner is returned when using an uninitialized scanner.
var ErrNeedMoreBytes = errors.New("biglog: maxBytes too low for any entry")
ErrNeedMoreBytes is returned by the index reader when a single entry does not fit within the requested byte limit. The client should usually double the limit when possible and request again.
var ErrNeedMoreOffsets = errors.New("biglog: maxOffsets too low for any entry")
ErrNeedMoreOffsets is returned by the index reader when a single entry does not fit within the requested offset limit. The client should usually double the limit when possible and request again.
Logger is the logger instance used by BigLog in case of error.
Functions ¶
This section is empty.
Types ¶
type BigLog ¶
type BigLog struct {
// contains filtered or unexported fields
}
BigLog is the main structure of the package. It manages an append-only log stored on disk as a set of segments, each backed by an index file and a data file; writes always go to the most recent (hot) segment.
func Create ¶
Create creates a new biglog. `dirPath` does not include the name. `indexSize` represents the number of entries that the index can hold; the current entry size in the index is 4 bytes, so every segment will have a preallocated index file of disk size = indexSize * 4 bytes. In the index, each write consumes one entry, independently of how many offsets it contains.
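A minimal sketch of the sizing rule, assuming the package is imported as biglog and that Create's full signature is Create(dirPath string, indexSize int) (*BigLog, error), which this page does not spell out:

// Hypothetical call: the dirPath and indexSize parameters are
// documented above; the exact signature and return values are assumed.
bl, err := biglog.Create("/var/data/events", 1<<20) // index holds 2^20 entries
if err != nil {
	log.Fatal(err)
}
defer bl.Close()
// Preallocated index file per segment: indexSize * 4 bytes,
// here 1<<20 * 4 bytes = 4 MiB, no matter how many offsets
// each write packs into a single entry.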
func Open ¶
Open loads a BigLog from disk by loading all segments from the index files inside the given directory. The last segment - ordered by the base offset encoded in the file names - is set to be the hot segment of the BigLog. The created BigLog is ready to serve any watchers or readers immediately.
ErrInvalid is returned if there are no index files within dirPath. ErrLoadSegment is returned if a segment cannot be loaded.
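A sketch of reopening an existing log and distinguishing the two load errors (Open is assumed here to take the directory path and return (*BigLog, error)):

bl, err := biglog.Open("/var/data/events")
switch err {
case nil:
	defer bl.Close()
case biglog.ErrInvalid:
	log.Fatal("directory contains no index files") // not a biglog
case biglog.ErrLoadSegment:
	log.Fatal("a segment failed to load, see Logger output for the reason")
default:
	log.Fatal(err)
}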
func (*BigLog) Close ¶
func (bl *BigLog) Close() error
Close frees all resources, rendering the BigLog unusable without touching the data persisted on disk.
func (*BigLog) Name ¶
func (bl *BigLog) Name() string
Name returns the big log's name, which maps to the directory path that contains the index and data files.
func (*BigLog) ReadFrom ¶
func (bl *BigLog) ReadFrom(src io.Reader) (written int64, err error)
ReadFrom reads data from src into the currently active segment until EOF or the first error. All read data is indexed as a single entry. Splitting up the BigLog into more segments is handled transparently if the currently active segment is full. It returns the number of bytes written and any error encountered.
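For example, a sketch that indexes a whole file as one entry (bl is an open *BigLog):

f, err := os.Open("payload.bin")
if err != nil {
	log.Fatal(err)
}
defer f.Close()

// Everything read from f until EOF becomes a single index entry.
written, err := bl.ReadFrom(f)
if err != nil {
	log.Fatal(err)
}
log.Printf("indexed %d bytes as one entry", written)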
func (*BigLog) Split ¶
func (bl *BigLog) Split() error
Split creates a new segment in bl's dirPath starting at the highest available offset+1. The new segment has the same size as the old one and becomes the new hot (active) segment.
func (*BigLog) Write ¶
func (bl *BigLog) Write(b []byte) (written int, err error)
Write writes len(b) bytes from b into the currently active segment of bl as a single entry. Splitting up the BigLog into more segments is handled transparently if the currently active segment is full. It returns the number of bytes written from b (0 <= n <= len(b)) and any error encountered that caused the write to stop early.
func (*BigLog) WriteN ¶
func (bl *BigLog) WriteN(b []byte, n int) (written int, err error)
WriteN writes a batch of n entries from b into the currently active segment. Splitting up the BigLog into more segments is handled transparently if the currently active segment is full. It returns the number of bytes written from b (0 <= n <= len(b)) and any error encountered that caused the write to stop early.
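A sketch contrasting the two write paths (bl is an open *BigLog). As the Variables section notes, offsets embedded in a batch cannot later be fetched individually (ErrEmbeddedOffset):

// One index entry holding a single offset.
if _, err := bl.Write([]byte("event-0")); err != nil {
	log.Fatal(err)
}

// One index entry holding two offsets: the batch still consumes a
// single entry, but its inner offsets are only addressable as a batch.
batch := append([]byte("event-1"), []byte("event-2")...)
if _, err := bl.WriteN(batch, 2); err != nil {
	log.Fatal(err)
}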
type Entry ¶
type Entry struct {
	// Timestamp of the entry
	Timestamp time.Time
	// Offset corresponds with the offset of the first message
	Offset int64
	// ODelta is the number of offsets held by this entry
	ODelta int
	// Size is the size of the data mapped by this entry
	Size int
}
Entry holds information about a single entry in the index.
type IndexReader ¶
type IndexReader struct {
// contains filtered or unexported fields
}
IndexReader keeps state across separate concurrent reads. IndexReaders handle segment transitions transparently.
func NewIndexReader ¶
func NewIndexReader(bl *BigLog, from int64) (r *IndexReader, ret int64, err error)
NewIndexReader returns an IndexReader that will start reading from a given offset.
func (*IndexReader) Close ¶
func (r *IndexReader) Close() error
Close frees up the segments and renders the reader unusable. It returns a nil error to satisfy io.Closer.
func (*IndexReader) Head ¶
func (r *IndexReader) Head() int64
Head returns the current offset position of the reader.
func (*IndexReader) ReadEntries ¶
func (r *IndexReader) ReadEntries(n int) (entries []*Entry, err error)
ReadEntries reads n entries from the index. This method is useful when scanning single entries one by one; for streaming use, ReadSection is recommended.
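A sketch of stepping through the index (bl is an open *BigLog; treating io.EOF as the end-of-index signal is an assumption):

r, ret, err := biglog.NewIndexReader(bl, bl.Oldest())
if err != nil {
	log.Fatal(err)
}
defer r.Close()
log.Printf("reader positioned at offset %d", ret)

entries, err := r.ReadEntries(10)
if err != nil && err != io.EOF { // io.EOF handling assumed
	log.Fatal(err)
}
for _, e := range entries {
	log.Printf("offset=%d odelta=%d size=%d", e.Offset, e.ODelta, e.Size)
}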
func (*IndexReader) ReadSection ¶
func (r *IndexReader) ReadSection(maxOffsets, maxBytes int64) (is *IndexSection, err error)
ReadSection reads the section of the index that contains at most maxOffsets offsets or maxBytes bytes. It is lower precision than ReadEntries but better suited for streaming, since it does not need to allocate an Entry struct for every index entry read.
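Continuing the sketch above, a single section can drive a bulk data read without per-entry allocations:

// At most 1000 offsets or 1 MiB, whichever limit is reached first.
is, err := r.ReadSection(1000, 1<<20)
if err != nil {
	log.Fatal(err)
}
log.Printf("section: offset=%d offsets=%d entries=%d bytes=%d",
	is.Offset, is.ODelta, is.EDelta, is.Size)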
type IndexSection ¶
type IndexSection struct {
	// Offset where the section begins, corresponds with the offset of the first message
	Offset int64
	// ODelta is the number of offsets held by the section
	ODelta int64
	// EDelta is the number of index entries held by the section
	EDelta int64
	// Size is the size of the data mapped by this index section
	Size int64
}
IndexSection holds information about a set of entries of an index. This information is often used to "drive" a data reader, since it is more performant than fetching a set of entries.
type Info ¶
type Info struct {
	Name         string     `json:"name"`
	Path         string     `json:"path"`
	DiskSize     int64      `json:"disk_size"`
	FirstOffset  int64      `json:"first_offset"`
	LatestOffset int64      `json:"latest_offset"`
	Segments     []*SegInfo `json:"segments"`
	ModTime      time.Time  `json:"mod_time"`
}
Info holds all BigLog metadata.
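Since every field carries a JSON tag, dumping a log's metadata is straightforward; a sketch (bl is an open *BigLog):

info, err := bl.Info()
if err != nil {
	log.Fatal(err)
}
out, err := json.MarshalIndent(info, "", "  ")
if err != nil {
	log.Fatal(err)
}
fmt.Println(string(out)) // one SegInfo object per segment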
type Option ¶
type Option func(*BigLog)
Option is the type of function used to set internal parameters.
func BufioWriter ¶
BufioWriter option defines the buffer size to use for writing segments.
type Reader ¶
type Reader struct {
// contains filtered or unexported fields
}
Reader keeps state across separate concurrent reads. Readers handle segment transitions transparently.
func NewReader ¶
NewReader returns a Reader that will start reading from a given offset. The reader implements the io.ReadCloser interface.
func (*Reader) Close ¶
Close frees up the segments and renders the reader unusable. It returns a nil error to satisfy io.Closer.
type Scanner ¶
type Scanner struct {
// contains filtered or unexported fields
}
Scanner facilitates reading a BigLog's content one index entry at a time. Always instantiate via NewScanner. A scanner is NOT thread-safe; to use it concurrently, your application must protect it with a mutex.
func NewScanner ¶
func NewScanner(bl *BigLog, from int64, opts ...ScannerOption) (s *Scanner, err error)
NewScanner returns a new Scanner starting at `from` offset.
type ScannerOption ¶
type ScannerOption func(*Scanner)
ScannerOption is the type of function used to set internal scanner parameters.
func MaxBufferSize ¶
func MaxBufferSize(size int) ScannerOption
MaxBufferSize option defines the maximum buffer size that the scanner is allowed to use. If an entry is larger than the max buffer, scanning will fail with the error ErrEntryTooLong. The default size 0 means no limit.
func UseBuffer ¶
func UseBuffer(buf []byte) ScannerOption
UseBuffer option lets you bring your own buffer to the scanner. If the buffer is not big enough for an entry, it will be replaced with a bigger one. You can prevent this behaviour by setting MaxBufferSize to len(buf).
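A sketch combining both options so the scanner never allocates past a fixed cap (bl is an open *BigLog; the scanner's iteration methods are not covered in this section):

buf := make([]byte, 32*1024) // bring a 32 KiB buffer
s, err := biglog.NewScanner(bl, bl.Oldest(),
	biglog.UseBuffer(buf),
	biglog.MaxBufferSize(64*1024), // larger entries fail with ErrEntryTooLong
)
if err != nil {
	log.Fatal(err)
}
_ = s // scan as needed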
type SegInfo ¶
type SegInfo struct {
	FirstOffset int64     `json:"first_offset"`
	DiskSize    int64     `json:"disk_size"`
	DataSize    int64     `json:"data_size"`
	ModTime     time.Time `json:"mod_time"`
}
SegInfo contains information about a segment as returned by segment.Info().
type StreamDelta ¶
type StreamDelta struct {
// contains filtered or unexported fields
}
StreamDelta holds a chunk of data from the BigLog. Metadata can be inspected with the associated methods. StreamDelta implements the io.Reader interface to access the stored data.
func (*StreamDelta) EntryDelta ¶
func (d *StreamDelta) EntryDelta() int64
EntryDelta returns the number of index entries to be streamed.
func (*StreamDelta) Offset ¶
func (d *StreamDelta) Offset() int64
Offset returns the offset of the first message to be streamed.
func (*StreamDelta) OffsetDelta ¶
func (d *StreamDelta) OffsetDelta() int64
OffsetDelta returns the number of offsets to be streamed.
func (*StreamDelta) Read ¶
func (d *StreamDelta) Read(p []byte) (n int, err error)
Read implements the io.Reader interface for this delta.
func (*StreamDelta) Size ¶
func (d *StreamDelta) Size() int64
Size returns the number of bytes to be streamed.
type Streamer ¶
type Streamer struct {
// contains filtered or unexported fields
}
Streamer reads the BigLog in StreamDeltas, which underneath are just chunks of the data file with some metadata. Always instantiate with NewStreamer. See the Get and Put methods for usage details.
func NewStreamer ¶
NewStreamer returns a new Streamer starting at `from` offset.
func (*Streamer) Get ¶
func (st *Streamer) Get(maxOffsets, maxBytes int64) (delta *StreamDelta, err error)
Get, given a maximum number of offsets and a maximum number of bytes, returns a StreamDelta for the biggest chunk that satisfies the limits, until EOF. If the next entry is too big for either limit, ErrNeedMoreOffsets or ErrNeedMoreBytes is returned. IMPORTANT: the StreamDelta must be "Put" back before a new one can be issued.
func (*Streamer) Put ¶
func (st *Streamer) Put(delta *StreamDelta) (err error)
Put must be called once the StreamDelta has been successfully read, so the reader can advance and a new StreamDelta can be issued.
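A sketch of a streaming loop that honors the Get/Put contract and the doubling strategy suggested by ErrNeedMoreBytes (NewStreamer is assumed to mirror NewScanner's parameters and return (*Streamer, error)):

st, err := biglog.NewStreamer(bl, bl.Oldest()) // signature assumed
if err != nil {
	log.Fatal(err)
}

var dst bytes.Buffer
maxBytes := int64(64 * 1024)
for {
	delta, err := st.Get(1000, maxBytes)
	if err == biglog.ErrNeedMoreBytes {
		maxBytes *= 2 // one entry is too big: double the limit and retry
		continue
	}
	if err != nil {
		break // end of log or a real failure
	}
	if _, err := io.Copy(&dst, delta); err != nil { // StreamDelta is an io.Reader
		log.Fatal(err)
	}
	if err := st.Put(delta); err != nil { // must Put back before the next Get
		log.Fatal(err)
	}
}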
type Watcher ¶
type Watcher struct {
// contains filtered or unexported fields
}
Watcher provides a notification channel for changes in a given BigLog.
func NewWatcher ¶
NewWatcher creates a new Watcher for the provided BigLog.