Documentation ¶
Overview ¶
Copyright 2024 The Solaris Authors
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Index ¶
- Constants
- type AppendRecordsResult
- type Chunk
- type ChunkAccessor
- type ChunkReader
- type Config
- type Provider
- func (p *Provider) Close() error
- func (p *Provider) DeleteFileIfEmpty(cID string)
- func (p *Provider) GetFileNameByID(cID string) string
- func (p *Provider) GetOpenedChunk(ctx context.Context, cID string, newFile bool) (lru.Releasable[*Chunk], error)
- func (p *Provider) ReleaseChunk(r *lru.Releasable[*Chunk])
- func (p *Provider) Shutdown()
- type Replicator
- type Scanner
- type ScannerConfig
- type UnsafeRecord
Constants ¶
const (
	RFRemoteDelete = 1
	RFRemoteSync   = 1 << 1
)
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type AppendRecordsResult ¶ added in v0.4.0
type AppendRecordsResult struct {
	// Written is the number of records added to the chunk
	Written int
	// StartID is the first added record ID
	StartID ulid.ULID
	// LastID is the last added record ID
	LastID ulid.ULID
}
AppendRecordsResult is used to report the append records operation result
type Chunk ¶
type Chunk struct {
// contains filtered or unexported fields
}
Chunk allows writing data into a file.
func (*Chunk) AppendRecords ¶
func (c *Chunk) AppendRecords(recs []*solaris.Record) (AppendRecordsResult, error)
AppendRecords adds new records to the chunk. The chunk may be extended if the records do not fit into its current size. Once the chunk reaches its maximum capacity, it will not grow any further; only the records that fit into the chunk are written. The result contains the number of records actually written.
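A minimal sketch of handling a partial write, written as if inside the package; the helper name appendAll and the idea of rolling the remainder over into another chunk are illustrative, not part of the API:

// appendAll is a hypothetical helper that appends records and reports how
// many of them did not fit into the chunk.
func appendAll(c *Chunk, recs []*solaris.Record) (notWritten int, err error) {
	res, err := c.AppendRecords(recs)
	if err != nil {
		return len(recs), err
	}
	// res.Written may be less than len(recs) when the chunk has reached its
	// maximum capacity; the remaining records belong in another chunk.
	return len(recs) - res.Written, nil
}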
func (*Chunk) Close ¶
Close implements io.Closer. It closes the chunk, so the Append and Read operations are no longer available after that. All readers must be closed before the call, otherwise it will block.
func (*Chunk) Open ¶
Open maps the chunk file contents into memory and makes the chunk ready for use.
func (*Chunk) OpenChunkReader ¶
func (c *Chunk) OpenChunkReader(descending bool) (*ChunkReader, error)
OpenChunkReader opens a new read operation. The function returns a ChunkReader, which may be used for reading the chunk records and must be closed when done. The AppendRecords and Close() operations will block until ALL ChunkReaders are closed, so a ChunkReader should be held for a short period of time and closed as soon as possible.
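A sketch of the intended read pattern (open, drain, close quickly), written as if inside the package. The reader's Close() method is assumed from the note that the ChunkReader must be closed; the helper name readPayloads and the payload copies are illustrative:

// readPayloads is a hypothetical helper that reads all records in ascending
// order and returns copies of their payloads.
func readPayloads(c *Chunk) ([][]byte, error) {
	cr, err := c.OpenChunkReader(false) // descending=false, i.e. ascending order
	if err != nil {
		return nil, err
	}
	// Close the reader as soon as possible: AppendRecords and Close on the
	// chunk are blocked while any ChunkReader is open.
	defer cr.Close()

	var payloads [][]byte
	for cr.HasNext() {
		rec, ok := cr.Next()
		if !ok {
			break
		}
		// UnsafePayload is valid only while the reader is open, so copy it.
		payloads = append(payloads, append([]byte(nil), rec.UnsafePayload...))
	}
	return payloads, nil
}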
type ChunkAccessor ¶ added in v0.17.0
type ChunkAccessor struct {
// contains filtered or unexported fields
}
ChunkAccessor implements an FSM for sharing access to the local chunk files. It keeps a state for every chunk file and serves as a synchronization barrier between Chunk and Replicator objects, which may touch the chunk files in parallel.
func NewChunkAccessor ¶ added in v0.17.0
func NewChunkAccessor() *ChunkAccessor
NewChunkAccessor creates the new ChunkAccessor
func (*ChunkAccessor) SetIdle ¶ added in v0.17.0
func (cc *ChunkAccessor) SetIdle(cID string)
SetIdle releases the exclusive Writing (SetWriting) and Deleting (SetDeleting) access
func (*ChunkAccessor) SetWriting ¶ added in v0.17.0
func (cc *ChunkAccessor) SetWriting(ctx context.Context, cID string) error
SetWriting requests writing access to the chunk. The call must be followed by a SetIdle() call to release the write access
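A sketch of the acquire/release pattern, written as if inside the package; the helper name withWriteAccess is illustrative:

// withWriteAccess is a hypothetical helper that runs fn while holding the
// exclusive writing access to the chunk file cID.
func withWriteAccess(ctx context.Context, ca *ChunkAccessor, cID string, fn func() error) error {
	if err := ca.SetWriting(ctx, cID); err != nil {
		return err
	}
	// SetIdle releases the exclusive access acquired by SetWriting.
	defer ca.SetIdle(cID)
	return fn()
}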
func (*ChunkAccessor) Shutdown ¶ added in v0.17.0
func (cc *ChunkAccessor) Shutdown()
Shutdown closes the ChunkAccessor
type ChunkReader ¶
type ChunkReader struct {
// contains filtered or unexported fields
}
ChunkReader is a helper structure for reading records from a chunk. The ChunkReader implements the iterable.Iterator interface. While a ChunkReader is open, Write operations to the chunk are blocked, so the records should be read as soon as possible and the ChunkReader closed.
func (*ChunkReader) HasNext ¶
func (cr *ChunkReader) HasNext() bool
func (*ChunkReader) Next ¶
func (cr *ChunkReader) Next() (UnsafeRecord, bool)
func (*ChunkReader) SetStartID ¶
func (cr *ChunkReader) SetStartID(startID ulid.ULID) int
SetStartID moves the iterator offset to the position startID. The function returns the number of records that will be available for reading after the call, taking into account the direction of the iterator.
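A short sketch of positioning a reader, written as if inside the package; the helper name countFrom is illustrative and the reader's Close() method is assumed:

// countFrom is a hypothetical helper that reports how many records are
// readable starting from startID for an ascending reader.
func countFrom(c *Chunk, startID ulid.ULID) (int, error) {
	cr, err := c.OpenChunkReader(false) // descending=false
	if err != nil {
		return 0, err
	}
	defer cr.Close()
	return cr.SetStartID(startID), nil
}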
type Config ¶ added in v0.6.0
Config defines the chunk settings
func GetDefaultConfig ¶ added in v0.6.0
func GetDefaultConfig() Config
type Provider ¶ added in v0.3.0
type Provider struct {
	Replicator *Replicator    `inject:""`
	CA         *ChunkAccessor `inject:""`
	// contains filtered or unexported fields
}
Provider manages a pool of opened chunks and returns a Chunk object on request. The Provider limits the number of open file descriptors and the local disk space borrowed for the chunks
func NewProvider ¶ added in v0.3.0
NewProvider creates the new Provider instance
func (*Provider) DeleteFileIfEmpty ¶ added in v0.17.0
DeleteFileIfEmpty deletes the chunk file if it is empty
func (*Provider) GetFileNameByID ¶ added in v0.17.0
GetFileNameByID returns the filename for the chunk ID cID provided
func (*Provider) GetOpenedChunk ¶ added in v0.3.0
func (p *Provider) GetOpenedChunk(ctx context.Context, cID string, newFile bool) (lru.Releasable[*Chunk], error)
GetOpenedChunk returns a lru.Releasable object for the *Chunk (ready to be used) by its ID. The function may return ctx.Err() or ErrClosed errors
func (*Provider) ReleaseChunk ¶ added in v0.3.0
func (p *Provider) ReleaseChunk(r *lru.Releasable[*Chunk])
ReleaseChunk must be called as soon as the chunk is not needed anymore
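A sketch of the borrow/release pattern, written as if inside the package; the helper name withChunk and the newFile=false choice are illustrative:

// withChunk is a hypothetical helper that borrows an opened chunk from the
// Provider, passes it to use, and always releases it afterwards.
func withChunk(ctx context.Context, p *Provider, cID string, use func(lru.Releasable[*Chunk]) error) error {
	r, err := p.GetOpenedChunk(ctx, cID, false) // newFile=false
	if err != nil {
		return err // may be ctx.Err() or ErrClosed
	}
	// ReleaseChunk must be called as soon as the chunk is no longer needed.
	defer p.ReleaseChunk(&r)
	return use(r)
}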
type Replicator ¶ added in v0.14.0
type Replicator struct {
	Storage sss.Storage    `inject:""`
	CA      *ChunkAccessor `inject:""`
	// contains filtered or unexported fields
}
Replicator controls the state of the local file-system and moves chunks back and forth between the local FS and a remote Storage.
func NewReplicator ¶ added in v0.17.0
func NewReplicator(fileNameByID func(id string) string) *Replicator
NewReplicator creates a new instance of Replicator
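A minimal sketch of constructing a Replicator with a file-naming callback; the directory layout used here is illustrative only:

// The callback maps a chunk ID to the path of its local file; the directory
// below is a made-up example, not the package's real layout.
r := NewReplicator(func(id string) string {
	return filepath.Join("/var/lib/solaris/chunks", id)
})
_ = r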
func (*Replicator) DeleteChunk ¶ added in v0.14.0
DeleteChunk deletes the chunk locally. Depending on the flags, the function may upload the chunk to the remote Storage before deleting it (flags&RFRemoteSync != 0), remove the chunk locally only (no flags required), or remove it both locally and remotely (flags&RFRemoteDelete != 0)
func (*Replicator) DownloadChunk ¶ added in v0.14.0
DownloadChunk downloads the chunk by its ID from the remote Storage to the local FS. The RFRemoteSync flag forces the download even if the chunk file already exists on the file system. If the chunk file doesn't exist locally, it is downloaded from the remote Storage regardless of the flags
func (*Replicator) UploadChunk ¶ added in v0.14.0
func (r *Replicator) UploadChunk(ctx context.Context, cID string) error
UploadChunk moves the chunk with the given ID (cID) from the local FS to the remote Storage.
type Scanner ¶ added in v0.17.0
type Scanner struct {
	Replicator *Replicator `inject:""`
	// contains filtered or unexported fields
}
Scanner represents a component which monitors the local file-system, cleans it up, and replicates chunks to the remote Storage.
func NewScanner ¶ added in v0.17.0
func NewScanner(r *Replicator, cfg ScannerConfig) *Scanner
NewScanner creates the new instance of Scanner
type ScannerConfig ¶ added in v0.17.0
type ScannerConfig struct {
	// DataPath contains the path to the folder where the chunks are stored
	DataPath string
	// SweepMaxThresholdSize defines the maximum size of the local chunks' folder at which the sweeper
	// starts to remove chunks from the local file-system
	SweepMaxThresholdSize int64
	// SweepMinThresholdSize defines the lower bound of the sweep threshold. The sweeper stops deleting
	// chunks from the local storage once the folder size becomes less than this value.
	SweepMinThresholdSize int64
	// RemoteSyncThreshold defines how much time must pass after the last modification before
	// the chunk is replicated remotely
	RemoteSyncThreshold time.Duration
	// SyncWorkers defines how many folders can be scanned and synced in parallel
	SyncWorkers int
	// GlobalSyncTimeout defines the timeout between scans of ALL chunk folders.
	GlobalSyncTimeout time.Duration
}
ScannerConfig defines the Scanner settings for replicating chunks from the local file-system to the remote Storage
func GetDefaultScannerConfig ¶ added in v0.17.0
func GetDefaultScannerConfig() ScannerConfig
GetDefaultScannerConfig returns the default stand-alone Scanner config.
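A minimal sketch of configuring a stand-alone Scanner; DataPath and the threshold overrides are illustrative values, and r is assumed to be an initialized *Replicator:

cfg := GetDefaultScannerConfig()
cfg.DataPath = "/var/lib/solaris/chunks" // illustrative path
cfg.SweepMaxThresholdSize = 10 << 30     // start sweeping above ~10 GiB
cfg.SweepMinThresholdSize = 8 << 30      // stop sweeping below ~8 GiB
sc := NewScanner(r, cfg)
_ = sc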
func (ScannerConfig) String ¶ added in v0.17.0
func (sc ScannerConfig) String() string
String implements fmt.Stringer
type UnsafeRecord ¶
type UnsafeRecord struct {
	ID            ulid.ULID
	UnsafePayload []byte
}
UnsafeRecord represents a chunk record. It is a short-lived object which may be used ONLY while the ChunkReader is open. If the record must outlive the reader, the UnsafePayload MUST be copied to another memory location.