ioutil

package v0.0.0-...-a6a3a47

Warning: this package is not in the latest version of its module.

Published: Sep 5, 2023 · License: AGPL-3.0 · Imports: 11 · Imported by: 0

Documentation

Overview

Package ioutil implements some I/O utility functions which are not covered by the standard library.

Index

Constants

const (
	BlockSizeSmall       = 32 * humanize.KiByte // Default r/w block size for smaller objects.
	BlockSizeLarge       = 2 * humanize.MiByte  // Default r/w block size for larger objects.
	BlockSizeReallyLarge = 4 * humanize.MiByte  // Default write block size for objects per shard >= 64MiB
)

Block size constants.

const DirectioAlignSize = 4096

DirectioAlignSize - DirectIO alignment needs to be 4K. Defined here because directio.AlignSize is defined as 0 on macOS, which would cause a divide-by-zero error.

Variables

var (
	ODirectPoolXLarge = sync.Pool{
		New: func() interface{} {
			b := disk.AlignedBlock(BlockSizeReallyLarge)
			return &b
		},
	}
	ODirectPoolLarge = sync.Pool{
		New: func() interface{} {
			b := disk.AlignedBlock(BlockSizeLarge)
			return &b
		},
	}
	ODirectPoolSmall = sync.Pool{
		New: func() interface{} {
			b := disk.AlignedBlock(BlockSizeSmall)
			return &b
		},
	}
)

sync.Pools of O_DIRECT-aligned buffers.
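
A minimal usage sketch, assuming this package is imported as ioutil (the package name example, the import path, and the helper readWithPooledBuffer are placeholders, not part of the API). Get returns the *[]byte stored by the New functions above, and the same pointer must be returned to the same pool when done:

package example // illustrative example file

import (
	"os"

	ioutil "example.com/pkg/ioutil" // placeholder import path for this package
)

// readWithPooledBuffer is a hypothetical helper that borrows a large,
// O_DIRECT-aligned block from the pool, uses it for a single read,
// and returns it to the same pool.
func readWithPooledBuffer(f *os.File) (int, error) {
	bufp := ioutil.ODirectPoolLarge.Get().(*[]byte) // the pool stores *[]byte values
	defer ioutil.ODirectPoolLarge.Put(bufp)
	return f.Read(*bufp) // *bufp is BlockSizeLarge bytes, aligned for direct I/O
}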

var (
	// OpenFileDirectIO allows overriding default function.
	OpenFileDirectIO = disk.OpenFileDirectIO
	// OsOpen allows overriding default function.
	OsOpen = os.Open
	// OsOpenFile allows overriding default function.
	OsOpenFile = os.OpenFile
)

var ErrOverread = errors.New("input provided more bytes than specified")

ErrOverread is returned to the reader when the hard limit of HardLimitReader is exceeded.

Functions

func AppendFile

func AppendFile(dst string, src string, osync bool) error

AppendFile - appends the file "src" to the file "dst"

func Copy

func Copy(dst io.Writer, src io.Reader) (written int64, err error)

Copy is exactly like io.Copy but with re-usable buffers.

func CopyAligned

func CopyAligned(w io.Writer, r io.Reader, alignedBuf []byte, totalSize int64, file *os.File) (int64, error)

CopyAligned - copies from reader to writer using the given aligned input buffer; the buffer is expected to be aligned to 4K page boundaries. Passing an unaligned buffer may cause this function to return an error.

This code is similar in spirit to io.Copy, but it is only to be used with DIRECT I/O based file descriptors: the writer is expected to be an *os.File, not a generic io.Writer. Make sure the file is opened for writes with the syscall.O_DIRECT flag.
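
A sketch of the intended call pattern, using the import convention from the pool example above plus the usual standard-library imports. The destination path, the source reader src, and its known size srcSize are illustrative; OpenFileDirectIO is assumed to share os.OpenFile's signature, and passing the destination *os.File as both the writer and the file argument is an assumption based on the signature above:

// writeDirect copies src to the named path using O_DIRECT and a pooled aligned buffer.
func writeDirect(path string, src io.Reader, srcSize int64) error {
	dst, err := ioutil.OpenFileDirectIO(path, os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		return err
	}
	defer dst.Close()

	bufp := ioutil.ODirectPoolLarge.Get().(*[]byte)
	defer ioutil.ODirectPoolLarge.Put(bufp)

	// The pooled buffer is already aligned to 4K page boundaries, as CopyAligned requires.
	_, err = ioutil.CopyAligned(dst, src, *bufp, srcSize, dst)
	return err
}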

func HardLimitReader

func HardLimitReader(r io.Reader, n int64) io.Reader

HardLimitReader returns a Reader that reads from r but returns an error if the source provides more data than allowed. This means the source *will* be overread unless EOF is returned prior. The underlying implementation is a *HardLimitedReader. This will ensure that at most n bytes are returned and EOF is reached.
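
For example (a sketch, with the usual standard-library imports and the same import convention as above), reading a source that is longer than the limit should surface ErrOverread:

func detectOverread() {
	r := ioutil.HardLimitReader(strings.NewReader("hello world"), 5)
	if _, err := io.ReadAll(r); errors.Is(err, ioutil.ErrOverread) {
		fmt.Println("source provided more than 5 bytes")
	}
}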

func NewDeadlineReader

func NewDeadlineReader(r io.ReadCloser, timeout time.Duration) io.ReadCloser

NewDeadlineReader wraps a reader to make it respect the given deadline value per Read(). If a Read blocks past the deadline, the returned Reader returns as soon as the timer fires (with n=0 and err=context.DeadlineExceeded).

func NewDeadlineWriter

func NewDeadlineWriter(w io.WriteCloser, timeout time.Duration) io.WriteCloser

NewDeadlineWriter wraps a writer to make it respect the given deadline value per Write(). If a Write blocks past the deadline, the returned Writer returns as soon as the timer fires (with n=0 and err=context.DeadlineExceeded).
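
A sketch of wrapping a potentially slow connection (conn, payload, and sendWithDeadline are illustrative; same import convention as above):

func sendWithDeadline(conn io.WriteCloser, payload []byte) error {
	w := ioutil.NewDeadlineWriter(conn, 15*time.Second) // each Write gets its own 15s budget
	defer w.Close()

	if _, err := w.Write(payload); err != nil {
		// err is context.DeadlineExceeded if the write blocked past the deadline
		return err
	}
	return nil
}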

func NewSkipReader

func NewSkipReader(r io.Reader, n int64) io.Reader

NewSkipReader - creates a SkipReader that discards the first n bytes of r and then returns all remaining data.
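
A small sketch of the skipping behavior (same import convention; the expected output follows from the SkipReader description further down):

func skipPrefix() {
	r := ioutil.NewSkipReader(strings.NewReader("0123456789"), 4)
	rest, _ := io.ReadAll(r)
	fmt.Println(string(rest)) // expected "456789": the first 4 bytes are skipped
}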

func NopCloser

func NopCloser(w io.Writer) io.WriteCloser

NopCloser returns a WriteCloser with a no-op Close method wrapping the provided Writer w.

func ReadFile

func ReadFile(name string) ([]byte, error)

ReadFile reads the named file and returns the contents. A successful call returns err == nil, not err == EOF. Because ReadFile reads the whole file, it does not treat an EOF from Read as an error to be reported.

ReadFile passes the NOATIME flag for reads on Unix systems to avoid atime updates.
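
Usage mirrors os.ReadFile; a minimal sketch (the helper and file name are illustrative):

func printSize(name string) error {
	data, err := ioutil.ReadFile(name) // avoids atime updates on Unix via NOATIME
	if err != nil {
		return err
	}
	fmt.Printf("%s: %d bytes\n", name, len(data))
	return nil
}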

func ReadFileWithFileInfo

func ReadFileWithFileInfo(name string) ([]byte, fs.FileInfo, error)

ReadFileWithFileInfo reads the named file and returns the contents together with the file's fs.FileInfo. A successful call returns err == nil, not err == EOF. Because ReadFileWithFileInfo reads the whole file, it does not treat an EOF from Read as an error to be reported.

func SameFile

func SameFile(fi1, fi2 os.FileInfo) bool

SameFile reports whether the two files are the same.

func WaitPipe

func WaitPipe() (*PipeReader, *PipeWriter)

WaitPipe implements an io.Pipe backed by a wait group, providing synchronization between the read and write ends.
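
Usage follows io.Pipe; a sketch (same import convention) in which the writer end is driven from a goroutine and both ends are closed with CloseWithError:

func streamThroughPipe(dst io.Writer) error {
	pr, pw := ioutil.WaitPipe()

	go func() {
		_, err := pw.Write([]byte("streamed payload"))
		pw.CloseWithError(err) // unblocks the reader; a nil error means clean EOF
	}()

	_, err := io.Copy(dst, pr)
	pr.CloseWithError(err) // close the reader end; the wait group synchronizes both ends
	return err
}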

Types

type DeadlineReader

type DeadlineReader struct {
	io.ReadCloser
	// contains filtered or unexported fields
}

DeadlineReader is a reader that enforces a timeout on each Read.

func (*DeadlineReader) Close

func (r *DeadlineReader) Close() error

Close implements the io.Closer interface and closes the underlying closer.

func (*DeadlineReader) Read

func (r *DeadlineReader) Read(buf []byte) (int, error)

type DeadlineWorker

type DeadlineWorker struct {
	// contains filtered or unexported fields
}

DeadlineWorker implements the deadline/timeout resiliency pattern.

func NewDeadlineWorker

func NewDeadlineWorker(timeout time.Duration) *DeadlineWorker

NewDeadlineWorker constructs a new DeadlineWorker with the given timeout.

func (*DeadlineWorker) Run

func (d *DeadlineWorker) Run(work func() error) error

Run runs the given function. If the deadline passes before the function finishes executing, Run returns ErrTimeOut to the caller; it does not (and cannot) kill the running function, so the work may keep executing in the background after the deadline passes. If the function finishes before the deadline, its return value is returned from Run.
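
A sketch of the pattern (same import convention; checkDisk is a hypothetical, short-running work function):

func probe() error {
	worker := ioutil.NewDeadlineWorker(500 * time.Millisecond)

	return worker.Run(func() error {
		// Work that is expected to finish quickly; if it does not,
		// Run returns the timeout error while this keeps running in the background.
		return checkDisk() // hypothetical
	})
}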

type DeadlineWriter

type DeadlineWriter struct {
	io.WriteCloser
	// contains filtered or unexported fields
}

DeadlineWriter is a writer that enforces a timeout on each Write.

func (*DeadlineWriter) Close

func (w *DeadlineWriter) Close() error

Close implements the io.Closer interface and closes the underlying closer.

func (*DeadlineWriter) Write

func (w *DeadlineWriter) Write(buf []byte) (int, error)

type HardLimitedReader

type HardLimitedReader struct {
	R io.Reader // underlying reader
	N int64     // max bytes remaining
}

A HardLimitedReader reads from R but limits the amount of data returned to just N bytes. Each call to Read updates N to reflect the new amount remaining. Read returns EOF when N <= 0 or when the underlying R returns EOF.

func (*HardLimitedReader) Read

func (l *HardLimitedReader) Read(p []byte) (n int, err error)

type LimitWriter

type LimitWriter struct {
	io.Writer
	// contains filtered or unexported fields
}

LimitWriter implements io.WriteCloser.

It restricts the encapsulated writer to a configured number of bytes, after skipping a given number of leading bytes.

func LimitedWriter

func LimitedWriter(w io.Writer, skipBytes int64, limit int64) *LimitWriter

LimitedWriter takes an io.Writer and returns an ioutil.LimitWriter.
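
A sketch of the skip-then-limit behavior described above (same import convention; the expected output is inferred from that description, not verified here):

func skipAndLimit() {
	var buf bytes.Buffer

	lw := ioutil.LimitedWriter(&buf, 5, 5) // skip the first 5 bytes, then pass through at most 5
	lw.Write([]byte("0123456789abcdef"))
	lw.Close()

	fmt.Println(buf.String()) // expected "56789"
}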

func (*LimitWriter) Close

func (w *LimitWriter) Close() error

Close closes the LimitWriter. It behaves like io.Closer.

func (*LimitWriter) Write

func (w *LimitWriter) Write(p []byte) (n int, err error)

Write implements the io.Writer interface, limiting writes to the configured length and skipping the first N bytes.

type ODirectReader

type ODirectReader struct {
	File      *os.File
	SmallFile bool
	// contains filtered or unexported fields
}

ODirectReader - to support O_DIRECT reads for erasure backends.
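
A sketch of reading a file opened with O_DIRECT (same import convention; constructing the reader with only the exported fields, and treating SmallFile as a block-size hint, are assumptions based on the documentation above):

func copyDirect(dst io.Writer, path string) error {
	f, err := ioutil.OpenFileDirectIO(path, os.O_RDONLY, 0)
	if err != nil {
		return err
	}

	r := &ioutil.ODirectReader{
		File:      f,
		SmallFile: true, // hint that a small read block size is sufficient
	}
	defer r.Close() // releases the internal buffer and closes the file

	_, err = io.Copy(dst, r)
	return err
}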

func (*ODirectReader) Close

func (o *ODirectReader) Close() error

Close - Release the buffer and close the file.

func (*ODirectReader) Read

func (o *ODirectReader) Read(buf []byte) (n int, err error)

Read - Implements Reader interface.

type PipeReader

type PipeReader struct {
	*io.PipeReader
	// contains filtered or unexported fields
}

PipeReader is similar to io.PipeReader, backed by a wait group.

func (*PipeReader) CloseWithError

func (r *PipeReader) CloseWithError(err error) error

CloseWithError closes the reader end of the pipe with the supplied error.

type PipeWriter

type PipeWriter struct {
	*io.PipeWriter
	// contains filtered or unexported fields
}

PipeWriter is similar to io.PipeWriter, backed by a wait group.

func (*PipeWriter) CloseWithError

func (w *PipeWriter) CloseWithError(err error) error

CloseWithError closes the writer end of the pipe with the supplied error.

type SkipReader

type SkipReader struct {
	io.Reader
	// contains filtered or unexported fields
}

SkipReader skips a given number of bytes and then returns all remaining data.

func (*SkipReader) Read

func (s *SkipReader) Read(p []byte) (int, error)

type WriteOnCloser

type WriteOnCloser struct {
	io.Writer
	// contains filtered or unexported fields
}

WriteOnCloser implements io.WriteCloser and always executes at least one write operation if it is closed.

This can be useful within the context of HTTP. At least one write operation must happen to send the HTTP headers to the peer.

func WriteOnClose

func WriteOnClose(w io.Writer) *WriteOnCloser

WriteOnClose takes an io.Writer and returns an ioutil.WriteOnCloser.
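
A sketch in the HTTP context mentioned above (same import convention; produceBody is a hypothetical response generator):

func handler(w http.ResponseWriter, r *http.Request) {
	wc := ioutil.WriteOnClose(w)

	if err := produceBody(wc); err != nil { // hypothetical; may legitimately write nothing
		// handle or log the error
	}

	// Even if produceBody wrote nothing, Close performs one write,
	// so the HTTP status line and headers are sent to the peer.
	wc.Close()
}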

func (*WriteOnCloser) Close

func (w *WriteOnCloser) Close() error

Close closes the WriteOnCloser. It behaves like io.Closer.

func (*WriteOnCloser) HasWritten

func (w *WriteOnCloser) HasWritten() bool

HasWritten returns true if at least one write operation was performed.

func (*WriteOnCloser) Write

func (w *WriteOnCloser) Write(p []byte) (int, error)
