pool

package
v0.2.2 Latest
Published: Dec 10, 2023 License: Apache-2.0 Imports: 10 Imported by: 0

Documentation

Index

Constants

const DefaultBufferSize = 2 * 1024 * 1024

Variables

var FilePool = sync.Pool{
	New: func() any {
		return &File{CompressedData: bytes.NewBuffer(make([]byte, DefaultBufferSize))}
	},
}
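
FilePool follows the standard sync.Pool pattern: Get a *File, use it, then Put it back so its buffer can be reused. A minimal sketch, written as if inside the package (the path, fs.FileInfo and relativeTo values are illustrative):

func pooledFile(path string, info fs.FileInfo, relativeTo string) (*File, error) {
	f := FilePool.Get().(*File)
	// Reset points the pooled, file-backed buffer at the new file.
	if err := f.Reset(path, info, relativeTo); err != nil {
		FilePool.Put(f)
		return nil, err
	}
	return f, nil
}

Once the caller is done with the *File, returning it with FilePool.Put lets the next Get reuse the existing DefaultBufferSize (2 MiB) buffer instead of allocating a new one.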

Functions

This section is empty.

Types

type Config

type Config struct {
	Concurrency int
	Capacity    int
}

type File

type File struct {
	Info           fs.FileInfo
	Header         *zip.FileHeader
	CompressedData *bytes.Buffer
	Overflow       *os.File
	Compressor     *flate.Writer
	Path           string
	// contains filtered or unexported fields
}

A File is a file-backed buffer that holds the compressed contents of a single file.

func NewFile

func NewFile(path string, info fs.FileInfo, relativeTo string) (*File, error)
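
NewFile is typically called while walking a directory tree; relativeTo is assumed to be the root against which the archived name is made relative. A sketch under that assumption (paths are illustrative):

func newSourceFile() (*File, error) {
	info, err := os.Stat("src/main.go")
	if err != nil {
		return nil, err
	}
	// With relativeTo set to "src", the file would presumably be
	// recorded in the zip header as "main.go".
	return NewFile("src/main.go", info, "src")
}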

func (*File) Overflowed

func (f *File) Overflowed() bool

Overflowed reports whether the compressed contents of the file were too large to fit in the in-memory buffer. The overflowed contents are written to a temporary file.
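
A typical check, assuming File.Write compresses the bytes it receives and spills to the temporary file once the in-memory buffer is full (the helper below is illustrative):

func compressInto(f *File, src io.Reader) (overflowed bool, err error) {
	if _, err := io.Copy(f, src); err != nil {
		return false, err
	}
	// If the compressed output exceeded the in-memory buffer, the
	// overflowed portion lives in the temporary file behind f.Overflow.
	return f.Overflowed(), nil
}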

func (*File) Reset

func (f *File) Reset(path string, info fs.FileInfo, relativeTo string) error

Reset resets the file-backed buffer so it is ready to be reused for another file.

func (*File) Write

func (f *File) Write(p []byte) (n int, err error)

func (*File) Written

func (f *File) Written() int64

Written returns the number of bytes of the file that have been compressed and written to a destination.

type FileWorkerPool

type FileWorkerPool[T any] struct {
	// contains filtered or unexported fields
}

A FileWorkerPool is a worker pool in which files are enqueued and, for each file, the executor function is called. The number of files that can be enqueued for processing at any time is defined by the capacity, and the number of workers processing files is set by the concurrency.

func NewFileWorkerPool

func NewFileWorkerPool[T any](executor func(f *T) error, config *Config) (*FileWorkerPool[T], error)

func (*FileWorkerPool[T]) Close

func (f *FileWorkerPool[T]) Close() error

Close gracefully shuts down the FileWorkerPool, ensuring all enqueued tasks have been processed. Files cannot be enqueued after Close has been called; attempting this will cause a panic. Close returns the first error that was encountered during file processing.

func (*FileWorkerPool[T]) Enqueue

func (f *FileWorkerPool[T]) Enqueue(file *T)

Enqueue enqueues a file for processing.

func (FileWorkerPool[T]) PendingFiles

func (f FileWorkerPool[T]) PendingFiles() int

PendingFiles returns the number of files that are waiting to be processed.

func (*FileWorkerPool[T]) Start

func (f *FileWorkerPool[T]) Start(ctx context.Context)

Start creates n goroutine workers, where n is configured by the concurrency option of the FileWorkerPool. The workers listen for and execute tasks as they are enqueued, and are shut down when an error occurs or the associated ctx is canceled.
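
Putting the pieces together, the usual lifecycle is: construct the pool with an executor and a Config, Start it with a context, Enqueue the work, then Close to wait for completion and collect the first error. A sketch in which the executor body and the input slice are purely illustrative:

func processAll(ctx context.Context, files []*File) error {
	executor := func(f *File) error {
		// Process a single file, e.g. compress it and write it to an archive.
		return nil
	}

	p, err := NewFileWorkerPool(executor, &Config{Concurrency: 4, Capacity: 64})
	if err != nil {
		return err
	}

	p.Start(ctx)
	for _, f := range files {
		p.Enqueue(f)
	}

	// Close blocks until every enqueued file has been processed and
	// returns the first error encountered; never Enqueue after Close.
	return p.Close()
}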

type WorkerPool

type WorkerPool[T any] interface {
	Start(ctx context.Context)
	Close() error
	Enqueue(v *T)
}
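
Based on the documented signatures, FileWorkerPool satisfies this interface, so callers can program against WorkerPool and swap in another implementation; a compile-time assertion (illustrative) makes that explicit:

var _ WorkerPool[File] = (*FileWorkerPool[File])(nil)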
