Documentation ¶
Index ¶
- type ByteReaderPile
- type ByteScannerPile
- type ByteWriterPile
- type CloserPile
- type LimitedReaderPile
- func (d *LimitedReaderPile) Close() (err error)
- func (d *LimitedReaderPile) Done() (done <-chan []*io.LimitedReader)
- func (d *LimitedReaderPile) Iter() (item *io.LimitedReader, ok bool)
- func (d *LimitedReaderPile) Next() (item *io.LimitedReader, ok bool)
- func (d *LimitedReaderPile) Pile(item *io.LimitedReader)
- type PipeReaderPile
- type PipeWriterPile
- type ReadCloserPile
- type ReadSeekerPile
- type ReadWriteCloserPile
- func (d *ReadWriteCloserPile) Close() (err error)
- func (d *ReadWriteCloserPile) Done() (done <-chan []io.ReadWriteCloser)
- func (d *ReadWriteCloserPile) Iter() (item io.ReadWriteCloser, ok bool)
- func (d *ReadWriteCloserPile) Next() (item io.ReadWriteCloser, ok bool)
- func (d *ReadWriteCloserPile) Pile(item io.ReadWriteCloser)
- type ReadWriteSeekerPile
- func (d *ReadWriteSeekerPile) Close() (err error)
- func (d *ReadWriteSeekerPile) Done() (done <-chan []io.ReadWriteSeeker)
- func (d *ReadWriteSeekerPile) Iter() (item io.ReadWriteSeeker, ok bool)
- func (d *ReadWriteSeekerPile) Next() (item io.ReadWriteSeeker, ok bool)
- func (d *ReadWriteSeekerPile) Pile(item io.ReadWriteSeeker)
- type ReadWriterPile
- type ReaderAtPile
- type ReaderFromPile
- type ReaderPile
- type RuneReaderPile
- type RuneScannerPile
- type SectionReaderPile
- func (d *SectionReaderPile) Close() (err error)
- func (d *SectionReaderPile) Done() (done <-chan []*io.SectionReader)
- func (d *SectionReaderPile) Iter() (item *io.SectionReader, ok bool)
- func (d *SectionReaderPile) Next() (item *io.SectionReader, ok bool)
- func (d *SectionReaderPile) Pile(item *io.SectionReader)
- type SeekerPile
- type WriteCloserPile
- type WriteSeekerPile
- type WriterAtPile
- type WriterPile
- type WriterToPile
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type ByteReaderPile ¶
type ByteReaderPile struct {
// contains filtered or unexported fields
}
ByteReaderPile is a hybrid container for a lazily and concurrently populated grow-only slice of items (of type `io.ByteReader`) which may be traversed in parallel to its growth.
Usage for a pile `p`:
p := MakeByteReaderPile(128, 32)
Have it grow concurrently using multiple:
var item io.ByteReader = something
p.Pile(item)
in as many goroutines as You see fit.
In parallel, You may either traverse `p` right away:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
Here p.Iter() starts a new traversal with the first item (if any), and p.Next() keeps traversing the ByteReaderPile.
or traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
Hint: here we get the result in `r` and at the same time discard / deallocate / forget the pile `p` itself.
Note: The traversal is *not* intended to be concurrency safe! Thus: You may call `Pile` concurrently with Your traversal, but use of either `Done` or `Iter` and `Next` *must* be confined to a single goroutine (thread).
func MakeByteReaderPile ¶
func MakeByteReaderPile(size, buff int) *ByteReaderPile
MakeByteReaderPile returns a (pointer to a) fresh pile of items (of type `io.ByteReader`) with size as initial capacity and buff as initial leeway, allowing as many Pile calls to execute without blocking before the respective Done or Next calls.
func (*ByteReaderPile) Close ¶
func (d *ByteReaderPile) Close() (err error)
Close - call once when everything has been piled.
Close intentionally implements io.Closer ¶
Note: After Close(), any further Close(...) or Pile(...) will panic, and any Done() or Next() will return immediately: no blocking, that is.
func (*ByteReaderPile) Done ¶
func (d *ByteReaderPile) Done() (done <-chan []io.ByteReader)
Done returns a channel which emits the result (as slice of ByteReader) once the pile is closed.
Users of Done() *must not* iterate (via Iter()/Next()) before the done-channel is closed!
Done is a convenience - useful iff You do not need to start any traversal before the pile is fully populated. Once the pile is closed, Done() will signal in constant time.
Note: Upon signalling, the pile is reset to its tip, so You may traverse it (via Next) right away. Usage for a pile `p`: Traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
while discarding the pile itself.
func (*ByteReaderPile) Iter ¶
func (d *ByteReaderPile) Iter() (item io.ByteReader, ok bool)
Iter puts the pile iterator back to the beginning and returns the first `Next()`, iff any. Usage for a pile `p`:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
func (*ByteReaderPile) Next ¶
func (d *ByteReaderPile) Next() (item io.ByteReader, ok bool)
Next returns the next item, or false iff the pile is exhausted.
Note: Iff the pile is not closed yet, Next may block, awaiting some Pile().
func (*ByteReaderPile) Pile ¶
func (d *ByteReaderPile) Pile(item io.ByteReader)
Pile appends an `io.ByteReader` item to the ByteReaderPile.
Note: Pile will block iff buff is exceeded and no Done() or Next() calls are made.
type ByteScannerPile ¶
type ByteScannerPile struct {
// contains filtered or unexported fields
}
ByteScannerPile is a hybrid container for a lazily and concurrently populated grow-only slice of items (of type `io.ByteScanner`) which may be traversed in parallel to its growth.
Usage for a pile `p`:
p := MakeByteScannerPile(128, 32)
Have it grow concurrently using multiple:
var item io.ByteScanner = something
p.Pile(item)
in as many goroutines as You see fit.
In parallel, You may either traverse `p` right away:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
Here p.Iter() starts a new traversal with the first item (if any), and p.Next() keeps traversing the ByteScannerPile.
or traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
Hint: here we get the result in `r` and at the same time discard / deallocate / forget the pile `p` itself.
Note: The traversal is *not* intended to be concurrency safe! Thus: You may call `Pile` concurrently with Your traversal, but use of either `Done` or `Iter` and `Next` *must* be confined to a single goroutine (thread).
func MakeByteScannerPile ¶
func MakeByteScannerPile(size, buff int) *ByteScannerPile
MakeByteScannerPile returns a (pointer to a) fresh pile of items (of type `io.ByteScanner`) with size as initial capacity and buff as initial leeway, allowing as many Pile calls to execute without blocking before the respective Done or Next calls.
func (*ByteScannerPile) Close ¶
func (d *ByteScannerPile) Close() (err error)
Close - call once when everything has been piled.
Close intentionally implements io.Closer ¶
Note: After Close(), any further Close(...) or Pile(...) will panic, and any Done() or Next() will return immediately: no blocking, that is.
func (*ByteScannerPile) Done ¶
func (d *ByteScannerPile) Done() (done <-chan []io.ByteScanner)
Done returns a channel which emits the result (as slice of ByteScanner) once the pile is closed.
Users of Done() *must not* iterate (via Iter()/Next()) before the done-channel is closed!
Done is a convenience - useful iff You do not need to start any traversal before the pile is fully populated. Once the pile is closed, Done() will signal in constant time.
Note: Upon signalling, the pile is reset to its tip, so You may traverse it (via Next) right away. Usage for a pile `p`: Traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
while discarding the pile itself.
func (*ByteScannerPile) Iter ¶
func (d *ByteScannerPile) Iter() (item io.ByteScanner, ok bool)
Iter puts the pile iterator back to the beginning and returns the first `Next()`, iff any. Usage for a pile `p`:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
func (*ByteScannerPile) Next ¶
func (d *ByteScannerPile) Next() (item io.ByteScanner, ok bool)
Next returns the next item, or false iff the pile is exhausted.
Note: Iff the pile is not closed yet, Next may block, awaiting some Pile().
func (*ByteScannerPile) Pile ¶
func (d *ByteScannerPile) Pile(item io.ByteScanner)
Pile appends an `io.ByteScanner` item to the ByteScannerPile.
Note: Pile will block iff buff is exceeded and no Done() or Next() calls are made.
type ByteWriterPile ¶
type ByteWriterPile struct {
// contains filtered or unexported fields
}
ByteWriterPile is a hybrid container for a lazily and concurrently populated grow-only slice of items (of type `io.ByteWriter`) which may be traversed in parallel to its growth.
Usage for a pile `p`:
p := MakeByteWriterPile(128, 32)
Have it grow concurrently using multiple:
var item io.ByteWriter = something
p.Pile(item)
in as many goroutines as You see fit.
In parallel, You may either traverse `p` right away:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
Here p.Iter() starts a new traversal with the first item (if any), and p.Next() keeps traversing the ByteWriterPile.
or traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
Hint: here we get the result in `r` and at the same time discard / deallocate / forget the pile `p` itself.
Note: The traversal is *not* intended to be concurrency safe! Thus: You may call `Pile` concurrently with Your traversal, but use of either `Done` or `Iter` and `Next` *must* be confined to a single goroutine (thread).
func MakeByteWriterPile ¶
func MakeByteWriterPile(size, buff int) *ByteWriterPile
MakeByteWriterPile returns a (pointer to a) fresh pile of items (of type `io.ByteWriter`) with size as initial capacity and buff as initial leeway, allowing as many Pile calls to execute without blocking before the respective Done or Next calls.
func (*ByteWriterPile) Close ¶
func (d *ByteWriterPile) Close() (err error)
Close - call once when everything has been piled.
Close intentionally implements io.Closer ¶
Note: After Close(), any further Close(...) or Pile(...) will panic, and any Done() or Next() will return immediately: no blocking, that is.
func (*ByteWriterPile) Done ¶
func (d *ByteWriterPile) Done() (done <-chan []io.ByteWriter)
Done returns a channel which emits the result (as slice of ByteWriter) once the pile is closed.
Users of Done() *must not* iterate (via Iter()/Next()) before the done-channel is closed!
Done is a convenience - useful iff You do not need to start any traversal before the pile is fully populated. Once the pile is closed, Done() will signal in constant time.
Note: Upon signalling, the pile is reset to its tip, so You may traverse it (via Next) right away. Usage for a pile `p`: Traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
while discarding the pile itself.
func (*ByteWriterPile) Iter ¶
func (d *ByteWriterPile) Iter() (item io.ByteWriter, ok bool)
Iter puts the pile iterator back to the beginning and returns the first `Next()`, iff any. Usage for a pile `p`:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
func (*ByteWriterPile) Next ¶
func (d *ByteWriterPile) Next() (item io.ByteWriter, ok bool)
Next returns the next item, or false iff the pile is exhausted.
Note: Iff the pile is not closed yet, Next may block, awaiting some Pile().
func (*ByteWriterPile) Pile ¶
func (d *ByteWriterPile) Pile(item io.ByteWriter)
Pile appends an `io.ByteWriter` item to the ByteWriterPile.
Note: Pile will block iff buff is exceeded and no Done() or Next() calls are made.
type CloserPile ¶
type CloserPile struct {
// contains filtered or unexported fields
}
CloserPile is a hybrid container for a lazily and concurrently populated grow-only slice of items (of type `io.Closer`) which may be traversed in parallel to its growth.
Usage for a pile `p`:
p := MakeCloserPile(128, 32)
Have it grow concurrently using multiple:
var item io.Closer = something
p.Pile(item)
in as many goroutines as You see fit.
In parallel, You may either traverse `p` right away:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
Here p.Iter() starts a new traversal with the first item (if any), and p.Next() keeps traversing the CloserPile.
or traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
Hint: here we get the result in `r` and at the same time discard / deallocate / forget the pile `p` itself.
Note: The traversal is *not* intended to be concurrency safe! Thus: You may call `Pile` concurrently with Your traversal, but use of either `Done` or `Iter` and `Next` *must* be confined to a single goroutine (thread).
func MakeCloserPile ¶
func MakeCloserPile(size, buff int) *CloserPile
MakeCloserPile returns a (pointer to a) fresh pile of items (of type `io.Closer`) with size as initial capacity and buff as initial leeway, allowing as many Pile calls to execute without blocking before the respective Done or Next calls.
func (*CloserPile) Close ¶
func (d *CloserPile) Close() (err error)
Close - call once when everything has been piled.
Close intentionally implements io.Closer ¶
Note: After Close(), any further Close(...) or Pile(...) will panic, and any Done() or Next() will return immediately: no blocking, that is.
func (*CloserPile) Done ¶
func (d *CloserPile) Done() (done <-chan []io.Closer)
Done returns a channel which emits the result (as slice of Closer) once the pile is closed.
Users of Done() *must not* iterate (via Iter()/Next()) before the done-channel is closed!
Done is a convenience - useful iff You do not need to start any traversal before the pile is fully populated. Once the pile is closed, Done() will signal in constant time.
Note: Upon signalling, the pile is reset to its tip, so You may traverse it (via Next) right away. Usage for a pile `p`: Traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
while discarding the pile itself.
func (*CloserPile) Iter ¶
func (d *CloserPile) Iter() (item io.Closer, ok bool)
Iter puts the pile iterator back to the beginning and returns the first `Next()`, iff any. Usage for a pile `p`:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
func (*CloserPile) Next ¶
func (d *CloserPile) Next() (item io.Closer, ok bool)
Next returns the next item, or false iff the pile is exhausted.
Note: Iff the pile is not closed yet, Next may block, awaiting some Pile().
func (*CloserPile) Pile ¶
func (d *CloserPile) Pile(item io.Closer)
Pile appends an `io.Closer` item to the CloserPile.
Note: Pile will block iff buff is exceeded and no Done() or Next() calls are made.
type LimitedReaderPile ¶
type LimitedReaderPile struct {
// contains filtered or unexported fields
}
LimitedReaderPile is a hybrid container for a lazily and concurrently populated grow-only slice of items (of type `*io.LimitedReader`) which may be traversed in parallel to its growth.
Usage for a pile `p`:
p := MakeLimitedReaderPile(128, 32)
Have it grow concurrently using multiple:
var item *io.LimitedReader = something
p.Pile(item)
in as many goroutines as You see fit.
In parallel, You may either traverse `p` right away:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
Here p.Iter() starts a new traversal with the first item (if any), and p.Next() keeps traversing the LimitedReaderPile.
or traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
Hint: here we get the result in `r` and at the same time discard / deallocate / forget the pile `p` itself.
Note: The traversal is *not* intended to be concurrency safe! Thus: You may call `Pile` concurrently with Your traversal, but use of either `Done` or `Iter` and `Next` *must* be confined to a single goroutine (thread).
func MakeLimitedReaderPile ¶
func MakeLimitedReaderPile(size, buff int) *LimitedReaderPile
MakeLimitedReaderPile returns a (pointer to a) fresh pile of items (of type `*io.LimitedReader`) with size as initial capacity and buff as initial leeway, allowing as many Pile calls to execute without blocking before the respective Done or Next calls.
func (*LimitedReaderPile) Close ¶
func (d *LimitedReaderPile) Close() (err error)
Close - call once when everything has been piled.
Close intentionally implements io.Closer ¶
Note: After Close(), any further Close(...) or Pile(...) will panic, and any Done() or Next() will return immediately: no blocking, that is.
func (*LimitedReaderPile) Done ¶
func (d *LimitedReaderPile) Done() (done <-chan []*io.LimitedReader)
Done returns a channel which emits the result (as slice of LimitedReader) once the pile is closed.
Users of Done() *must not* iterate (via Iter()/Next()) before the done-channel is closed!
Done is a convenience - useful iff You do not need to start any traversal before the pile is fully populated. Once the pile is closed, Done() will signal in constant time.
Note: Upon signalling, the pile is reset to its tip, so You may traverse it (via Next) right away. Usage for a pile `p`: Traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
while discarding the pile itself.
func (*LimitedReaderPile) Iter ¶
func (d *LimitedReaderPile) Iter() (item *io.LimitedReader, ok bool)
Iter puts the pile iterator back to the beginning and returns the first `Next()`, iff any. Usage for a pile `p`:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
func (*LimitedReaderPile) Next ¶
func (d *LimitedReaderPile) Next() (item *io.LimitedReader, ok bool)
Next returns the next item, or false iff the pile is exhausted.
Note: Iff the pile is not closed yet, Next may block, awaiting some Pile().
func (*LimitedReaderPile) Pile ¶
func (d *LimitedReaderPile) Pile(item *io.LimitedReader)
Pile appends an `*io.LimitedReader` item to the LimitedReaderPile.
Note: Pile will block iff buff is exceeded and no Done() or Next() calls are made.
type PipeReaderPile ¶
type PipeReaderPile struct {
// contains filtered or unexported fields
}
PipeReaderPile is a hybrid container for a lazily and concurrently populated grow-only slice of items (of type `*io.PipeReader`) which may be traversed in parallel to its growth.
Usage for a pile `p`:
p := MakePipeReaderPile(128, 32)
Have it grow concurrently using multiple:
var item *io.PipeReader = something
p.Pile(item)
in as many goroutines as You see fit.
In parallel, You may either traverse `p` right away:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
Here p.Iter() starts a new traversal with the first item (if any), and p.Next() keeps traversing the PipeReaderPile.
or traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
Hint: here we get the result in `r` and at the same time discard / deallocate / forget the pile `p` itself.
Note: The traversal is *not* intended to be concurrency safe! Thus: You may call `Pile` concurrently with Your traversal, but use of either `Done` or `Iter` and `Next` *must* be confined to a single goroutine (thread).
func MakePipeReaderPile ¶
func MakePipeReaderPile(size, buff int) *PipeReaderPile
MakePipeReaderPile returns a (pointer to a) fresh pile of items (of type `*io.PipeReader`) with size as initial capacity and buff as initial leeway, allowing as many Pile calls to execute without blocking before the respective Done or Next calls.
func (*PipeReaderPile) Close ¶
func (d *PipeReaderPile) Close() (err error)
Close - call once when everything has been piled.
Close intentionally implements io.Closer ¶
Note: After Close(), any further Close(...) or Pile(...) will panic, and any Done() or Next() will return immediately: no blocking, that is.
func (*PipeReaderPile) Done ¶
func (d *PipeReaderPile) Done() (done <-chan []*io.PipeReader)
Done returns a channel which emits the result (as slice of PipeReader) once the pile is closed.
Users of Done() *must not* iterate (via Iter()/Next()) before the done-channel is closed!
Done is a convenience - useful iff You do not need to start any traversal before the pile is fully populated. Once the pile is closed, Done() will signal in constant time.
Note: Upon signalling, the pile is reset to its tip, so You may traverse it (via Next) right away. Usage for a pile `p`: Traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
while discarding the pile itself.
func (*PipeReaderPile) Iter ¶
func (d *PipeReaderPile) Iter() (item *io.PipeReader, ok bool)
Iter puts the pile iterator back to the beginning and returns the first `Next()`, iff any. Usage for a pile `p`:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
func (*PipeReaderPile) Next ¶
func (d *PipeReaderPile) Next() (item *io.PipeReader, ok bool)
Next returns the next item, or false iff the pile is exhausted.
Note: Iff the pile is not closed yet, Next may block, awaiting some Pile().
func (*PipeReaderPile) Pile ¶
func (d *PipeReaderPile) Pile(item *io.PipeReader)
Pile appends an `*io.PipeReader` item to the PipeReaderPile.
Note: Pile will block iff buff is exceeded and no Done() or Next() calls are made.
type PipeWriterPile ¶
type PipeWriterPile struct {
// contains filtered or unexported fields
}
PipeWriterPile is a hybrid container for a lazily and concurrently populated grow-only slice of items (of type `*io.PipeWriter`) which may be traversed in parallel to its growth.
Usage for a pile `p`:
p := MakePipeWriterPile(128, 32)
Have it grow concurrently using multiple:
var item *io.PipeWriter = something
p.Pile(item)
in as many goroutines as You see fit.
In parallel, You may either traverse `p` right away:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
Here p.Iter() starts a new traversal with the first item (if any), and p.Next() keeps traversing the PipeWriterPile.
or traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
Hint: here we get the result in `r` and at the same time discard / deallocate / forget the pile `p` itself.
Note: The traversal is *not* intended to be concurrency safe! Thus: You may call `Pile` concurrently with Your traversal, but use of either `Done` or `Iter` and `Next` *must* be confined to a single goroutine (thread).
func MakePipeWriterPile ¶
func MakePipeWriterPile(size, buff int) *PipeWriterPile
MakePipeWriterPile returns a (pointer to a) fresh pile of items (of type `*io.PipeWriter`) with size as initial capacity and buff as initial leeway, allowing as many Pile calls to execute without blocking before the respective Done or Next calls.
func (*PipeWriterPile) Close ¶
func (d *PipeWriterPile) Close() (err error)
Close - call once when everything has been piled.
Close intentionally implements io.Closer ¶
Note: After Close(), any further Close(...) or Pile(...) will panic, and any Done() or Next() will return immediately: no blocking, that is.
func (*PipeWriterPile) Done ¶
func (d *PipeWriterPile) Done() (done <-chan []*io.PipeWriter)
Done returns a channel which emits the result (as slice of PipeWriter) once the pile is closed.
Users of Done() *must not* iterate (via Iter()/Next()) before the done-channel is closed!
Done is a convenience - useful iff You do not need to start any traversal before the pile is fully populated. Once the pile is closed, Done() will signal in constant time.
Note: Upon signalling, the pile is reset to its tip, so You may traverse it (via Next) right away. Usage for a pile `p`: Traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
while discarding the pile itself.
func (*PipeWriterPile) Iter ¶
func (d *PipeWriterPile) Iter() (item *io.PipeWriter, ok bool)
Iter puts the pile iterator back to the beginning and returns the first `Next()`, iff any. Usage for a pile `p`:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
func (*PipeWriterPile) Next ¶
func (d *PipeWriterPile) Next() (item *io.PipeWriter, ok bool)
Next returns the next item, or false iff the pile is exhausted.
Note: Iff the pile is not closed yet, Next may block, awaiting some Pile().
func (*PipeWriterPile) Pile ¶
func (d *PipeWriterPile) Pile(item *io.PipeWriter)
Pile appends an `*io.PipeWriter` item to the PipeWriterPile.
Note: Pile will block iff buff is exceeded and no Done() or Next() calls are made.
type ReadCloserPile ¶
type ReadCloserPile struct {
// contains filtered or unexported fields
}
ReadCloserPile is a hybrid container for a lazily and concurrently populated grow-only slice of items (of type `io.ReadCloser`) which may be traversed in parallel to its growth.
Usage for a pile `p`:
p := MakeReadCloserPile(128, 32)
Have it grow concurrently using multiple:
var item io.ReadCloser = something
p.Pile(item)
in as many goroutines as You see fit.
In parallel, You may either traverse `p` right away:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
Here p.Iter() starts a new traversal with the first item (if any), and p.Next() keeps traversing the ReadCloserPile.
or traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
Hint: here we get the result in `r` and at the same time discard / deallocate / forget the pile `p` itself.
Note: The traversal is *not* intended to be concurrency safe! Thus: You may call `Pile` concurrently with Your traversal, but use of either `Done` or `Iter` and `Next` *must* be confined to a single goroutine (thread).
func MakeReadCloserPile ¶
func MakeReadCloserPile(size, buff int) *ReadCloserPile
MakeReadCloserPile returns a (pointer to a) fresh pile of items (of type `io.ReadCloser`) with size as initial capacity and buff as initial leeway, allowing as many Pile calls to execute without blocking before the respective Done or Next calls.
func (*ReadCloserPile) Close ¶
func (d *ReadCloserPile) Close() (err error)
Close - call once when everything has been piled.
Close intentionally implements io.Closer ¶
Note: After Close(), any further Close(...) or Pile(...) will panic, and any Done() or Next() will return immediately: no blocking, that is.
func (*ReadCloserPile) Done ¶
func (d *ReadCloserPile) Done() (done <-chan []io.ReadCloser)
Done returns a channel which emits the result (as slice of ReadCloser) once the pile is closed.
Users of Done() *must not* iterate (via Iter()/Next()) before the done-channel is closed!
Done is a convenience - useful iff You do not need to start any traversal before the pile is fully populated. Once the pile is closed, Done() will signal in constant time.
Note: Upon signalling, the pile is reset to its tip, so You may traverse it (via Next) right away. Usage for a pile `p`: Traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
while discarding the pile itself.
func (*ReadCloserPile) Iter ¶
func (d *ReadCloserPile) Iter() (item io.ReadCloser, ok bool)
Iter puts the pile iterator back to the beginning and returns the first `Next()`, iff any. Usage for a pile `p`:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
func (*ReadCloserPile) Next ¶
func (d *ReadCloserPile) Next() (item io.ReadCloser, ok bool)
Next returns the next item, or false iff the pile is exhausted.
Note: Iff the pile is not closed yet, Next may block, awaiting some Pile().
func (*ReadCloserPile) Pile ¶
func (d *ReadCloserPile) Pile(item io.ReadCloser)
Pile appends an `io.ReadCloser` item to the ReadCloserPile.
Note: Pile will block iff buff is exceeded and no Done() or Next() calls are made.
type ReadSeekerPile ¶
type ReadSeekerPile struct {
// contains filtered or unexported fields
}
ReadSeekerPile is a hybrid container for a lazily and concurrently populated grow-only slice of items (of type `io.ReadSeeker`) which may be traversed in parallel to its growth.
Usage for a pile `p`:
p := MakeReadSeekerPile(128, 32)
Have it grow concurrently using multiple:
var item io.ReadSeeker = something
p.Pile(item)
in as many goroutines as You see fit.
In parallel, You may either traverse `p` right away:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
Here p.Iter() starts a new traversal with the first item (if any), and p.Next() keeps traversing the ReadSeekerPile.
or traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
Hint: here we get the result in `r` and at the same time discard / deallocate / forget the pile `p` itself.
Note: The traversal is *not* intended to be concurrency safe! Thus: You may call `Pile` concurrently with Your traversal, but use of either `Done` or `Iter` and `Next` *must* be confined to a single goroutine (thread).
func MakeReadSeekerPile ¶
func MakeReadSeekerPile(size, buff int) *ReadSeekerPile
MakeReadSeekerPile returns a (pointer to a) fresh pile of items (of type `io.ReadSeeker`) with size as initial capacity and buff as initial leeway, allowing as many Pile calls to execute without blocking before the respective Done or Next calls.
func (*ReadSeekerPile) Close ¶
func (d *ReadSeekerPile) Close() (err error)
Close - call once when everything has been piled.
Close intentionally implements io.Closer ¶
Note: After Close(), any further Close(...) or Pile(...) will panic, and any Done() or Next() will return immediately: no blocking, that is.
func (*ReadSeekerPile) Done ¶
func (d *ReadSeekerPile) Done() (done <-chan []io.ReadSeeker)
Done returns a channel which emits the result (as slice of ReadSeeker) once the pile is closed.
Users of Done() *must not* iterate (via Iter()/Next()) before the done-channel is closed!
Done is a convenience - useful iff You do not need to start any traversal before the pile is fully populated. Once the pile is closed, Done() will signal in constant time.
Note: Upon signalling, the pile is reset to its tip, so You may traverse it (via Next) right away. Usage for a pile `p`: Traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
while discarding the pile itself.
func (*ReadSeekerPile) Iter ¶
func (d *ReadSeekerPile) Iter() (item io.ReadSeeker, ok bool)
Iter puts the pile iterator back to the beginning and returns the first `Next()`, iff any. Usage for a pile `p`:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
func (*ReadSeekerPile) Next ¶
func (d *ReadSeekerPile) Next() (item io.ReadSeeker, ok bool)
Next returns the next item, or false iff the pile is exhausted.
Note: Iff the pile is not closed yet, Next may block, awaiting some Pile().
func (*ReadSeekerPile) Pile ¶
func (d *ReadSeekerPile) Pile(item io.ReadSeeker)
Pile appends an `io.ReadSeeker` item to the ReadSeekerPile.
Note: Pile will block iff buff is exceeded and no Done() or Next() calls are being made.
type ReadWriteCloserPile ¶
type ReadWriteCloserPile struct {
// contains filtered or unexported fields
}
ReadWriteCloserPile is a hybrid container for a lazily and concurrently populated, append-only slice of items (of type `io.ReadWriteCloser`) which may be traversed in parallel to its growth.
Usage for a pile `p`:
p := MakeReadWriteCloserPile(128, 32)
Have it grow concurrently using multiple:
var item io.ReadWriteCloser = something
p.Pile(item)
in as many goroutines as You see fit.
In parallel, You may either traverse `p` right away:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
Here p.Iter() starts a new traversal with the first item (if any), and p.Next() continues traversing the ReadWriteCloserPile.
or traverse blocking / awaiting close first:
for _, item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
Hint: here we get the result in `r` and at the same time discard / deallocate / forget the pile `p` itself.
Note: The traversal is *not* intended to be concurrency safe! Thus: You may call `Pile` concurrently with Your traversal, but use of either `Done` or `Iter` and `Next` *must* be confined to a single goroutine (thread).
func MakeReadWriteCloserPile ¶
func MakeReadWriteCloserPile(size, buff int) *ReadWriteCloserPile
MakeReadWriteCloserPile returns a (pointer to a) fresh pile of items (of type `io.ReadWriteCloser`) with size as initial capacity and buff as initial leeway: up to buff calls to Pile may proceed without blocking before any matching Done or Next.
func (*ReadWriteCloserPile) Close ¶
func (d *ReadWriteCloserPile) Close() (err error)
Close - call once when everything has been piled.
Close intentionally implements io.Closer ¶
Note: After Close(), any further Close(...) or Pile(...) will panic, and any Done() or Next() will return immediately, i.e. without ever blocking.
func (*ReadWriteCloserPile) Done ¶
func (d *ReadWriteCloserPile) Done() (done <-chan []io.ReadWriteCloser)
Done returns a channel which emits the result (as slice of ReadWriteCloser) once the pile is closed.
Users of Done() *must not* iterate (via Iter() or Next()) before the done-channel is closed!
Done is a convenience - useful iff You do not want or need to start any traversal before the pile is fully populated. Once the pile is closed, Done() will signal in constant time.
Note: Upon signalling, the pile is reset to its tip, so You may traverse it (via Next) right away. Usage for a pile `p`: Traverse blocking / awaiting close first:
for _, item := range <-p.Done() { ... do sth with item ... }
or use the result when available
r, p := <-p.Done(), nil
while discarding the pile itself.
func (*ReadWriteCloserPile) Iter ¶
func (d *ReadWriteCloserPile) Iter() (item io.ReadWriteCloser, ok bool)
Iter puts the pile iterator back to the beginning and returns the first `Next()`, if any. Usage for a pile `p`:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
func (*ReadWriteCloserPile) Next ¶
func (d *ReadWriteCloserPile) Next() (item io.ReadWriteCloser, ok bool)
Next returns the next item, or false iff the pile is exhausted.
Note: Iff the pile is not closed yet, Next may block, awaiting some Pile().
func (*ReadWriteCloserPile) Pile ¶
func (d *ReadWriteCloserPile) Pile(item io.ReadWriteCloser)
Pile appends an `io.ReadWriteCloser` item to the ReadWriteCloserPile.
Note: Pile will block iff buff is exceeded and no Done() or Next() calls are being made.
type ReadWriteSeekerPile ¶
type ReadWriteSeekerPile struct {
// contains filtered or unexported fields
}
ReadWriteSeekerPile is a hybrid container for a lazily and concurrently populated, append-only slice of items (of type `io.ReadWriteSeeker`) which may be traversed in parallel to its growth.
Usage for a pile `p`:
p := MakeReadWriteSeekerPile(128, 32)
Have it grow concurrently using multiple:
var item io.ReadWriteSeeker = something
p.Pile(item)
in as many goroutines as You see fit.
In parallel, You may either traverse `p` right away:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
Here p.Iter() starts a new traversal with the first item (if any), and p.Next() continues traversing the ReadWriteSeekerPile.
or traverse blocking / awaiting close first:
for _, item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
Hint: here we get the result in `r` and at the same time discard / deallocate / forget the pile `p` itself.
Note: The traversal is *not* intended to be concurrency safe! Thus: You may call `Pile` concurrently with Your traversal, but use of either `Done` or `Iter` and `Next` *must* be confined to a single goroutine (thread).
func MakeReadWriteSeekerPile ¶
func MakeReadWriteSeekerPile(size, buff int) *ReadWriteSeekerPile
MakeReadWriteSeekerPile returns a (pointer to a) fresh pile of items (of type `io.ReadWriteSeeker`) with size as initial capacity and buff as initial leeway: up to buff calls to Pile may proceed without blocking before any matching Done or Next.
func (*ReadWriteSeekerPile) Close ¶
func (d *ReadWriteSeekerPile) Close() (err error)
Close - call once when everything has been piled.
Close intentionally implements io.Closer ¶
Note: After Close(), any further Close(...) or Pile(...) will panic, and any Done() or Next() will return immediately, i.e. without ever blocking.
func (*ReadWriteSeekerPile) Done ¶
func (d *ReadWriteSeekerPile) Done() (done <-chan []io.ReadWriteSeeker)
Done returns a channel which emits the result (as slice of ReadWriteSeeker) once the pile is closed.
Users of Done() *must not* iterate (via Iter() or Next()) before the done-channel is closed!
Done is a convenience - useful iff You do not want or need to start any traversal before the pile is fully populated. Once the pile is closed, Done() will signal in constant time.
Note: Upon signalling, the pile is reset to its tip, so You may traverse it (via Next) right away. Usage for a pile `p`: Traverse blocking / awaiting close first:
for _, item := range <-p.Done() { ... do sth with item ... }
or use the result when available
r, p := <-p.Done(), nil
while discarding the pile itself.
func (*ReadWriteSeekerPile) Iter ¶
func (d *ReadWriteSeekerPile) Iter() (item io.ReadWriteSeeker, ok bool)
Iter puts the pile iterator back to the beginning and returns the first `Next()`, if any. Usage for a pile `p`:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
func (*ReadWriteSeekerPile) Next ¶
func (d *ReadWriteSeekerPile) Next() (item io.ReadWriteSeeker, ok bool)
Next returns the next item, or false iff the pile is exhausted.
Note: Iff the pile is not closed yet, Next may block, awaiting some Pile().
func (*ReadWriteSeekerPile) Pile ¶
func (d *ReadWriteSeekerPile) Pile(item io.ReadWriteSeeker)
Pile appends an `io.ReadWriteSeeker` item to the ReadWriteSeekerPile.
Note: Pile will block iff buff is exceeded and no Done() or Next() calls are being made.
type ReadWriterPile ¶
type ReadWriterPile struct {
// contains filtered or unexported fields
}
ReadWriterPile is a hybrid container for a lazily and concurrently populated, append-only slice of items (of type `io.ReadWriter`) which may be traversed in parallel to its growth.
Usage for a pile `p`:
p := MakeReadWriterPile(128, 32)
Have it grow concurrently using multiple:
var item io.ReadWriter = something
p.Pile(item)
in as many goroutines as You see fit.
In parallel, You may either traverse `p` right away:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
Here p.Iter() starts a new traversal with the first item (if any), and p.Next() continues traversing the ReadWriterPile.
or traverse blocking / awaiting close first:
for _, item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
Hint: here we get the result in `r` and at the same time discard / deallocate / forget the pile `p` itself.
Note: The traversal is *not* intended to be concurrency safe! Thus: You may call `Pile` concurrently with Your traversal, but use of either `Done` or `Iter` and `Next` *must* be confined to a single goroutine (thread).
func MakeReadWriterPile ¶
func MakeReadWriterPile(size, buff int) *ReadWriterPile
MakeReadWriterPile returns a (pointer to a) fresh pile of items (of type `io.ReadWriter`) with size as initial capacity and buff as initial leeway: up to buff calls to Pile may proceed without blocking before any matching Done or Next.
func (*ReadWriterPile) Close ¶
func (d *ReadWriterPile) Close() (err error)
Close - call once when everything has been piled.
Close intentionally implements io.Closer ¶
Note: After Close(), any further Close(...) or Pile(...) will panic, and any Done() or Next() will return immediately, i.e. without ever blocking.
func (*ReadWriterPile) Done ¶
func (d *ReadWriterPile) Done() (done <-chan []io.ReadWriter)
Done returns a channel which emits the result (as slice of ReadWriter) once the pile is closed.
Users of Done() *must not* iterate (via Iter() or Next()) before the done-channel is closed!
Done is a convenience - useful iff You do not want or need to start any traversal before the pile is fully populated. Once the pile is closed, Done() will signal in constant time.
Note: Upon signalling, the pile is reset to its tip, so You may traverse it (via Next) right away. Usage for a pile `p`: Traverse blocking / awaiting close first:
for _, item := range <-p.Done() { ... do sth with item ... }
or use the result when available
r, p := <-p.Done(), nil
while discarding the pile itself.
func (*ReadWriterPile) Iter ¶
func (d *ReadWriterPile) Iter() (item io.ReadWriter, ok bool)
Iter puts the pile iterator back to the beginning and returns the first `Next()`, if any. Usage for a pile `p`:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
func (*ReadWriterPile) Next ¶
func (d *ReadWriterPile) Next() (item io.ReadWriter, ok bool)
Next returns the next item, or false iff the pile is exhausted.
Note: Iff the pile is not closed yet, Next may block, awaiting some Pile().
func (*ReadWriterPile) Pile ¶
func (d *ReadWriterPile) Pile(item io.ReadWriter)
Pile appends an `io.ReadWriter` item to the ReadWriterPile.
Note: Pile will block iff buff is exceeded and no Done() or Next() calls are being made.
type ReaderAtPile ¶
type ReaderAtPile struct {
// contains filtered or unexported fields
}
ReaderAtPile is a hybrid container for a lazily and concurrently populated, append-only slice of items (of type `io.ReaderAt`) which may be traversed in parallel to its growth.
Usage for a pile `p`:
p := MakeReaderAtPile(128, 32)
Have it grow concurrently using multiple:
var item io.ReaderAt = something
p.Pile(item)
in as many goroutines as You see fit.
In parallel, You may either traverse `p` right away:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
Here p.Iter() starts a new traversal with the first item (if any), and p.Next() continues traversing the ReaderAtPile.
or traverse blocking / awaiting close first:
for _, item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
Hint: here we get the result in `r` and at the same time discard / deallocate / forget the pile `p` itself.
Note: The traversal is *not* intended to be concurrency safe! Thus: You may call `Pile` concurrently with Your traversal, but use of either `Done` or `Iter` and `Next` *must* be confined to a single goroutine (thread).
func MakeReaderAtPile ¶
func MakeReaderAtPile(size, buff int) *ReaderAtPile
MakeReaderAtPile returns a (pointer to a) fresh pile of items (of type `io.ReaderAt`) with size as initial capacity and buff as initial leeway: up to buff calls to Pile may proceed without blocking before any matching Done or Next.
func (*ReaderAtPile) Close ¶
func (d *ReaderAtPile) Close() (err error)
Close - call once when everything has been piled.
Close intentionally implements io.Closer ¶
Note: After Close(), any further Close(...) or Pile(...) will panic, and any Done() or Next() will return immediately, i.e. without ever blocking.
func (*ReaderAtPile) Done ¶
func (d *ReaderAtPile) Done() (done <-chan []io.ReaderAt)
Done returns a channel which emits the result (as slice of ReaderAt) once the pile is closed.
Users of Done() *must not* iterate (via Iter() or Next()) before the done-channel is closed!
Done is a convenience - useful iff You do not want or need to start any traversal before the pile is fully populated. Once the pile is closed, Done() will signal in constant time.
Note: Upon signalling, the pile is reset to its tip, so You may traverse it (via Next) right away. Usage for a pile `p`: Traverse blocking / awaiting close first:
for _, item := range <-p.Done() { ... do sth with item ... }
or use the result when available
r, p := <-p.Done(), nil
while discarding the pile itself.
func (*ReaderAtPile) Iter ¶
func (d *ReaderAtPile) Iter() (item io.ReaderAt, ok bool)
Iter puts the pile iterator back to the beginning and returns the first `Next()`, if any. Usage for a pile `p`:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
func (*ReaderAtPile) Next ¶
func (d *ReaderAtPile) Next() (item io.ReaderAt, ok bool)
Next returns the next item, or false iff the pile is exhausted.
Note: Iff the pile is not closed yet, Next may block, awaiting some Pile().
func (*ReaderAtPile) Pile ¶
func (d *ReaderAtPile) Pile(item io.ReaderAt)
Pile appends an `io.ReaderAt` item to the ReaderAtPile.
Note: Pile will block iff buff is exceeded and no Done() or Next() calls are being made.
type ReaderFromPile ¶
type ReaderFromPile struct {
// contains filtered or unexported fields
}
ReaderFromPile is a hybrid container for a lazily and concurrently populated, append-only slice of items (of type `io.ReaderFrom`) which may be traversed in parallel to its growth.
Usage for a pile `p`:
p := MakeReaderFromPile(128, 32)
Have it grow concurrently using multiple:
var item io.ReaderFrom = something
p.Pile(item)
in as many goroutines as You see fit.
In parallel, You may either traverse `p` right away:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
Here p.Iter() starts a new traversal with the first item (if any), and p.Next() continues traversing the ReaderFromPile.
or traverse blocking / awaiting close first:
for _, item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
Hint: here we get the result in `r` and at the same time discard / deallocate / forget the pile `p` itself.
Note: The traversal is *not* intended to be concurrency safe! Thus: You may call `Pile` concurrently with Your traversal, but use of either `Done` or `Iter` and `Next` *must* be confined to a single goroutine (thread).
func MakeReaderFromPile ¶
func MakeReaderFromPile(size, buff int) *ReaderFromPile
MakeReaderFromPile returns a (pointer to a) fresh pile of items (of type `io.ReaderFrom`) with size as initial capacity and buff as initial leeway: up to buff calls to Pile may proceed without blocking before any matching Done or Next.
func (*ReaderFromPile) Close ¶
func (d *ReaderFromPile) Close() (err error)
Close - call once when everything has been piled.
Close intentionally implements io.Closer ¶
Note: After Close(), any further Close(...) or Pile(...) will panic, and any Done() or Next() will return immediately, i.e. without ever blocking.
func (*ReaderFromPile) Done ¶
func (d *ReaderFromPile) Done() (done <-chan []io.ReaderFrom)
Done returns a channel which emits the result (as slice of ReaderFrom) once the pile is closed.
Users of Done() *must not* iterate (via Iter() or Next()) before the done-channel is closed!
Done is a convenience - useful iff You do not want or need to start any traversal before the pile is fully populated. Once the pile is closed, Done() will signal in constant time.
Note: Upon signalling, the pile is reset to its tip, so You may traverse it (via Next) right away. Usage for a pile `p`: Traverse blocking / awaiting close first:
for _, item := range <-p.Done() { ... do sth with item ... }
or use the result when available
r, p := <-p.Done(), nil
while discarding the pile itself.
func (*ReaderFromPile) Iter ¶
func (d *ReaderFromPile) Iter() (item io.ReaderFrom, ok bool)
Iter puts the pile iterator back to the beginning and returns the first `Next()`, if any. Usage for a pile `p`:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
func (*ReaderFromPile) Next ¶
func (d *ReaderFromPile) Next() (item io.ReaderFrom, ok bool)
Next returns the next item, or false iff the pile is exhausted.
Note: Iff the pile is not closed yet, Next may block, awaiting some Pile().
func (*ReaderFromPile) Pile ¶
func (d *ReaderFromPile) Pile(item io.ReaderFrom)
Pile appends an `io.ReaderFrom` item to the ReaderFromPile.
Note: Pile will block iff buff is exceeded and no Done() or Next() calls are being made.
type ReaderPile ¶
type ReaderPile struct {
// contains filtered or unexported fields
}
ReaderPile is a hybrid container for a lazily and concurrently populated, append-only slice of items (of type `io.Reader`) which may be traversed in parallel to its growth.
Usage for a pile `p`:
p := MakeReaderPile(128, 32)
Have it grow concurrently using multiple:
var item io.Reader = something
p.Pile(item)
in as many goroutines as You see fit.
In parallel, You may either traverse `p` right away:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
Here p.Iter() starts a new traversal with the first item (if any), and p.Next() continues traversing the ReaderPile.
or traverse blocking / awaiting close first:
for _, item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
Hint: here we get the result in `r` and at the same time discard / deallocate / forget the pile `p` itself.
Note: The traversal is *not* intended to be concurrency safe! Thus: You may call `Pile` concurrently with Your traversal, but use of either `Done` or `Iter` and `Next` *must* be confined to a single goroutine (thread).
func MakeReaderPile ¶
func MakeReaderPile(size, buff int) *ReaderPile
MakeReaderPile returns a (pointer to a) fresh pile of items (of type `io.Reader`) with size as initial capacity and buff as initial leeway: up to buff calls to Pile may proceed without blocking before any matching Done or Next.
func (*ReaderPile) Close ¶
func (d *ReaderPile) Close() (err error)
Close - call once when everything has been piled.
Close intentionally implements io.Closer ¶
Note: After Close(), any further Close(...) or Pile(...) will panic, and any Done() or Next() will return immediately, i.e. without ever blocking.
func (*ReaderPile) Done ¶
func (d *ReaderPile) Done() (done <-chan []io.Reader)
Done returns a channel which emits the result (as slice of Reader) once the pile is closed.
Users of Done() *must not* iterate (via Iter() or Next()) before the done-channel is closed!
Done is a convenience - useful iff You do not want or need to start any traversal before the pile is fully populated. Once the pile is closed, Done() will signal in constant time.
Note: Upon signalling, the pile is reset to its tip, so You may traverse it (via Next) right away. Usage for a pile `p`: Traverse blocking / awaiting close first:
for _, item := range <-p.Done() { ... do sth with item ... }
or use the result when available
r, p := <-p.Done(), nil
while discarding the pile itself.
func (*ReaderPile) Iter ¶
func (d *ReaderPile) Iter() (item io.Reader, ok bool)
Iter puts the pile iterator back to the beginning and returns the first `Next()`, if any. Usage for a pile `p`:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
func (*ReaderPile) Next ¶
func (d *ReaderPile) Next() (item io.Reader, ok bool)
Next returns the next item, or false iff the pile is exhausted.
Note: Iff the pile is not closed yet, Next may block, awaiting some Pile().
func (*ReaderPile) Pile ¶
func (d *ReaderPile) Pile(item io.Reader)
Pile appends an `io.Reader` item to the ReaderPile.
Note: Pile will block iff buff is exceeded and no Done() or Next() calls are being made.
type RuneReaderPile ¶
type RuneReaderPile struct {
// contains filtered or unexported fields
}
RuneReaderPile is a hybrid container for a lazily and concurrently populated, append-only slice of items (of type `io.RuneReader`) which may be traversed in parallel to its growth.
Usage for a pile `p`:
p := MakeRuneReaderPile(128, 32)
Have it grow concurrently using multiple:
var item io.RuneReader = something
p.Pile(item)
in as many goroutines as You see fit.
In parallel, You may either traverse `p` right away:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
Here p.Iter() starts a new traversal with the first item (if any), and p.Next() continues traversing the RuneReaderPile.
or traverse blocking / awaiting close first:
for _, item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
Hint: here we get the result in `r` and at the same time discard / deallocate / forget the pile `p` itself.
Note: The traversal is *not* intended to be concurrency safe! Thus: You may call `Pile` concurrently with Your traversal, but use of either `Done` or `Iter` and `Next` *must* be confined to a single goroutine (thread).
func MakeRuneReaderPile ¶
func MakeRuneReaderPile(size, buff int) *RuneReaderPile
MakeRuneReaderPile returns a (pointer to a) fresh pile of items (of type `io.RuneReader`) with size as initial capacity and buff as initial leeway: up to buff calls to Pile may proceed without blocking before any matching Done or Next.
func (*RuneReaderPile) Close ¶
func (d *RuneReaderPile) Close() (err error)
Close - call once when everything has been piled.
Close intentionally implements io.Closer ¶
Note: After Close(), any further Close(...) or Pile(...) will panic, and any Done() or Next() will return immediately, i.e. without ever blocking.
func (*RuneReaderPile) Done ¶
func (d *RuneReaderPile) Done() (done <-chan []io.RuneReader)
Done returns a channel which emits the result (as slice of RuneReader) once the pile is closed.
Users of Done() *must not* iterate (via Iter() or Next()) before the done-channel is closed!
Done is a convenience - useful iff You do not want or need to start any traversal before the pile is fully populated. Once the pile is closed, Done() will signal in constant time.
Note: Upon signalling, the pile is reset to its tip, so You may traverse it (via Next) right away. Usage for a pile `p`: Traverse blocking / awaiting close first:
for _, item := range <-p.Done() { ... do sth with item ... }
or use the result when available
r, p := <-p.Done(), nil
while discarding the pile itself.
func (*RuneReaderPile) Iter ¶
func (d *RuneReaderPile) Iter() (item io.RuneReader, ok bool)
Iter puts the pile iterator back to the beginning and returns the first `Next()`, if any. Usage for a pile `p`:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
func (*RuneReaderPile) Next ¶
func (d *RuneReaderPile) Next() (item io.RuneReader, ok bool)
Next returns the next item, or false iff the pile is exhausted.
Note: Iff the pile is not closed yet, Next may block, awaiting some Pile().
func (*RuneReaderPile) Pile ¶
func (d *RuneReaderPile) Pile(item io.RuneReader)
Pile appends an `io.RuneReader` item to the RuneReaderPile.
Note: Pile will block iff buff is exceeded and no Done() or Next() calls are being made.
type RuneScannerPile ¶
type RuneScannerPile struct {
// contains filtered or unexported fields
}
RuneScannerPile is a hybrid container for a lazily and concurrently populated, append-only slice of items (of type `io.RuneScanner`) which may be traversed in parallel to its growth.
Usage for a pile `p`:
p := MakeRuneScannerPile(128, 32)
Have it grow concurrently using multiple:
var item io.RuneScanner = something
p.Pile(item)
in as many goroutines as You see fit.
In parallel, You may either traverse `p` right away:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
Here p.Iter() starts a new traversal with the first item (if any), and p.Next() continues traversing the RuneScannerPile.
or traverse blocking / awaiting close first:
for _, item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
Hint: here we get the result in `r` and at the same time discard / deallocate / forget the pile `p` itself.
Note: The traversal is *not* intended to be concurrency safe! Thus: You may call `Pile` concurrently with Your traversal, but use of either `Done` or `Iter` and `Next` *must* be confined to a single goroutine (thread).
func MakeRuneScannerPile ¶
func MakeRuneScannerPile(size, buff int) *RuneScannerPile
MakeRuneScannerPile returns a (pointer to a) fresh pile of items (of type `io.RuneScanner`) with size as initial capacity and buff as initial leeway: up to buff calls to Pile may proceed without blocking before any matching Done or Next.
func (*RuneScannerPile) Close ¶
func (d *RuneScannerPile) Close() (err error)
Close - call once when everything has been piled.
Close intentionally implements io.Closer ¶
Note: After Close(), any further Close(...) or Pile(...) will panic, and any Done() or Next() will return immediately, i.e. without ever blocking.
func (*RuneScannerPile) Done ¶
func (d *RuneScannerPile) Done() (done <-chan []io.RuneScanner)
Done returns a channel which emits the result (as slice of RuneScanner) once the pile is closed.
Users of Done() *must not* iterate (via Iter() or Next()) before the done-channel is closed!
Done is a convenience - useful iff You do not want or need to start any traversal before the pile is fully populated. Once the pile is closed, Done() will signal in constant time.
Note: Upon signalling, the pile is reset to its tip, so You may traverse it (via Next) right away. Usage for a pile `p`: Traverse blocking / awaiting close first:
for _, item := range <-p.Done() { ... do sth with item ... }
or use the result when available
r, p := <-p.Done(), nil
while discarding the pile itself.
func (*RuneScannerPile) Iter ¶
func (d *RuneScannerPile) Iter() (item io.RuneScanner, ok bool)
Iter puts the pile iterator back to the beginning and returns the first `Next()`, if any. Usage for a pile `p`:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
func (*RuneScannerPile) Next ¶
func (d *RuneScannerPile) Next() (item io.RuneScanner, ok bool)
Next returns the next item, or false iff the pile is exhausted.
Note: Iff the pile is not closed yet, Next may block, awaiting some Pile().
func (*RuneScannerPile) Pile ¶
func (d *RuneScannerPile) Pile(item io.RuneScanner)
Pile appends an `io.RuneScanner` item to the RuneScannerPile.
Note: Pile will block iff buff is exceeded and no Done() or Next() calls are being made.
type SectionReaderPile ¶
type SectionReaderPile struct {
// contains filtered or unexported fields
}
SectionReaderPile is a hybrid container for a lazily and concurrently populated, append-only slice of items (of type `*io.SectionReader`) which may be traversed in parallel to its growth.
Usage for a pile `p`:
p := MakeSectionReaderPile(128, 32)
Have it grow concurrently using multiple:
var item *io.SectionReader = something
p.Pile(item)
in as many goroutines as You see fit.
In parallel, You may either traverse `p` right away:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
Here p.Iter() starts a new traversal with the first item (if any), and p.Next() continues traversing the SectionReaderPile.
or traverse blocking / awaiting close first:
for _, item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
Hint: here we get the result in `r` and at the same time discard / deallocate / forget the pile `p` itself.
Note: The traversal is *not* intended to be concurrency safe! Thus: You may call `Pile` concurrently with Your traversal, but use of either `Done` or `Iter` and `Next` *must* be confined to a single goroutine (thread).
func MakeSectionReaderPile ¶
func MakeSectionReaderPile(size, buff int) *SectionReaderPile
MakeSectionReaderPile returns a (pointer to a) fresh pile of items (of type `*io.SectionReader`) with size as initial capacity and buff as initial leeway: up to buff calls to Pile may proceed without blocking before any matching Done or Next.
func (*SectionReaderPile) Close ¶
func (d *SectionReaderPile) Close() (err error)
Close - call once when everything has been piled.
Close intentionally implements io.Closer ¶
Note: After Close(), any further Close(...) or Pile(...) will panic, and any Done() or Next() will return immediately, i.e. without ever blocking.
func (*SectionReaderPile) Done ¶
func (d *SectionReaderPile) Done() (done <-chan []*io.SectionReader)
Done returns a channel which emits the result (as slice of SectionReader) once the pile is closed.
Users of Done() *must not* iterate (via Iter() or Next()) before the done-channel is closed!
Done is a convenience - useful iff You do not want or need to start any traversal before the pile is fully populated. Once the pile is closed, Done() will signal in constant time.
Note: Upon signalling, the pile is reset to its tip, so You may traverse it (via Next) right away. Usage for a pile `p`: Traverse blocking / awaiting close first:
for _, item := range <-p.Done() { ... do sth with item ... }
or use the result when available
r, p := <-p.Done(), nil
while discarding the pile itself.
func (*SectionReaderPile) Iter ¶
func (d *SectionReaderPile) Iter() (item *io.SectionReader, ok bool)
Iter puts the pile iterator back to the beginning and returns the first `Next()`, if any. Usage for a pile `p`:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
func (*SectionReaderPile) Next ¶
func (d *SectionReaderPile) Next() (item *io.SectionReader, ok bool)
Next returns the next item, or false iff the pile is exhausted.
Note: Iff the pile is not closed yet, Next may block, awaiting some Pile().
func (*SectionReaderPile) Pile ¶
func (d *SectionReaderPile) Pile(item *io.SectionReader)
Pile appends an `*io.SectionReader` item to the SectionReaderPile.
Note: Pile will block iff buff is exceeded and no Done() or Next() calls are being made.
type SeekerPile ¶
type SeekerPile struct {
// contains filtered or unexported fields
}
SeekerPile is a hybrid container for a lazily and concurrently populated, append-only slice of items (of type `io.Seeker`) which may be traversed in parallel to its growth.
Usage for a pile `p`:
p := MakeSeekerPile(128, 32)
Have it grow concurrently using multiple:
var item io.Seeker = something
p.Pile(item)
in as many goroutines as You see fit.
In parallel, You may either traverse `p` right away:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
Here p.Iter() starts a new traversal with the first item (if any), and p.Next() continues traversing the SeekerPile.
or traverse blocking / awaiting close first:
for _, item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
Hint: here we get the result in `r` and at the same time discard / deallocate / forget the pile `p` itself.
Note: The traversal is *not* intended to be concurrency safe! Thus: You may call `Pile` concurrently with Your traversal, but use of either `Done` or `Iter` and `Next` *must* be confined to a single goroutine (thread).
func MakeSeekerPile ¶
func MakeSeekerPile(size, buff int) *SeekerPile
MakeSeekerPile returns a (pointer to a) fresh pile of items (of type `io.Seeker`) with size as initial capacity and buff as initial leeway: up to buff calls to Pile may proceed without blocking before any matching Done or Next.
func (*SeekerPile) Close ¶
func (d *SeekerPile) Close() (err error)
Close - call once when everything has been piled.
Close intentionally implements io.Closer ¶
Note: After Close(), any Close(...) will panic and any Pile(...) will panic and any Done() or Next() will return immediately: no eventual blocking, that is.
func (*SeekerPile) Done ¶
func (d *SeekerPile) Done() (done <-chan []io.Seeker)
Done returns a channel which emits the result (as slice of Seeker) once the pile is closed.
Users of Done() *must not* iterate (via Iter() Next()...) before the done-channel is closed!
Done is a convenience - useful iff You do not need to start any traversal before the pile is fully populated. Once the pile is closed, Done() will signal in constant time.
Note: Upon signalling, the pile is reset to its tip, so You may traverse it (via Next) right away. Usage for a pile `p`: Traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
while discarding the pile itself.
func (*SeekerPile) Iter ¶
func (d *SeekerPile) Iter() (item io.Seeker, ok bool)
Iter puts the pile iterator back to the beginning and returns the first `Next()`, iff any. Usage for a pile `p`:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
func (*SeekerPile) Next ¶
func (d *SeekerPile) Next() (item io.Seeker, ok bool)
Next returns the next item, with ok false iff the pile is exhausted.
Note: Iff the pile is not closed yet, Next may block, awaiting some Pile().
func (*SeekerPile) Pile ¶
func (d *SeekerPile) Pile(item io.Seeker)
Pile appends an `io.Seeker` item to the SeekerPile.
Note: Pile will block iff buff is exceeded and neither Done() nor Next() is being used.
type WriteCloserPile ¶
type WriteCloserPile struct {
// contains filtered or unexported fields
}
WriteCloserPile is a hybrid container for a lazily and concurrently populated grow-only slice of items (of type `io.WriteCloser`) which may be traversed in parallel to its growth.
Usage for a pile `p`:
p := MakeWriteCloserPile(128, 32)
Have it grow concurrently using multiple:
var item io.WriteCloser = something
p.Pile(item)
in as many goroutines as You see fit.
In parallel, You may either traverse `p` right away:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
Here p.Iter() starts a new traversal with the first item (if any), and p.Next() keeps traversing the WriteCloserPile.
or traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
Hint: here we get the result in `r` and at the same time discard / deallocate / forget the pile `p` itself.
Note: The traversal is *not* intended to be concurrency safe! Thus: You may call `Pile` concurrently to Your traversal, but use of either `Done` or `Iter` and `Next` *must* be confined to a single go routine (thread).
func MakeWriteCloserPile ¶
func MakeWriteCloserPile(size, buff int) *WriteCloserPile
MakeWriteCloserPile returns a (pointer to a) fresh pile of items (of type `io.WriteCloser`) with size as initial capacity and with buff as initial leeway, allowing as many Pile() calls to execute without blocking before a respective Done() or Next().
func (*WriteCloserPile) Close ¶
func (d *WriteCloserPile) Close() (err error)
Close - call once when everything has been piled.
Close intentionally implements io.Closer ¶
Note: After Close(), any Close(...) will panic and any Pile(...) will panic and any Done() or Next() will return immediately: no eventual blocking, that is.
func (*WriteCloserPile) Done ¶
func (d *WriteCloserPile) Done() (done <-chan []io.WriteCloser)
Done returns a channel which emits the result (as slice of WriteCloser) once the pile is closed.
Users of Done() *must not* iterate (via Iter() Next()...) before the done-channel is closed!
Done is a convenience - useful iff You do not need to start any traversal before the pile is fully populated. Once the pile is closed, Done() will signal in constant time.
Note: Upon signalling, the pile is reset to its tip, so You may traverse it (via Next) right away. Usage for a pile `p`: Traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
while discarding the pile itself.
func (*WriteCloserPile) Iter ¶
func (d *WriteCloserPile) Iter() (item io.WriteCloser, ok bool)
Iter puts the pile iterator back to the beginning and returns the first `Next()`, iff any. Usage for a pile `p`:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
func (*WriteCloserPile) Next ¶
func (d *WriteCloserPile) Next() (item io.WriteCloser, ok bool)
Next returns the next item, with ok false iff the pile is exhausted.
Note: Iff the pile is not closed yet, Next may block, awaiting some Pile().
func (*WriteCloserPile) Pile ¶
func (d *WriteCloserPile) Pile(item io.WriteCloser)
Pile appends an `io.WriteCloser` item to the WriteCloserPile.
Note: Pile will block iff buff is exceeded and neither Done() nor Next() is being used.
type WriteSeekerPile ¶
type WriteSeekerPile struct {
// contains filtered or unexported fields
}
WriteSeekerPile is a hybrid container for a lazily and concurrently populated grow-only slice of items (of type `io.WriteSeeker`) which may be traversed in parallel to its growth.
Usage for a pile `p`:
p := MakeWriteSeekerPile(128, 32)
Have it grow concurrently using multiple:
var item io.WriteSeeker = something
p.Pile(item)
in as many goroutines as You see fit.
In parallel, You may either traverse `p` right away:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
Here p.Iter() starts a new traversal with the first item (if any), and p.Next() keeps traversing the WriteSeekerPile.
or traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
Hint: here we get the result in `r` and at the same time discard / deallocate / forget the pile `p` itself.
Note: The traversal is *not* intended to be concurrency safe! Thus: You may call `Pile` concurrently to Your traversal, but use of either `Done` or `Iter` and `Next` *must* be confined to a single go routine (thread).
func MakeWriteSeekerPile ¶
func MakeWriteSeekerPile(size, buff int) *WriteSeekerPile
MakeWriteSeekerPile returns a (pointer to a) fresh pile of items (of type `io.WriteSeeker`) with size as initial capacity and with buff as initial leeway, allowing as many Pile() calls to execute without blocking before a respective Done() or Next().
func (*WriteSeekerPile) Close ¶
func (d *WriteSeekerPile) Close() (err error)
Close - call once when everything has been piled.
Close intentionally implements io.Closer ¶
Note: After Close(), any Close(...) will panic and any Pile(...) will panic and any Done() or Next() will return immediately: no eventual blocking, that is.
func (*WriteSeekerPile) Done ¶
func (d *WriteSeekerPile) Done() (done <-chan []io.WriteSeeker)
Done returns a channel which emits the result (as slice of WriteSeeker) once the pile is closed.
Users of Done() *must not* iterate (via Iter() Next()...) before the done-channel is closed!
Done is a convenience - useful iff You do not need to start any traversal before the pile is fully populated. Once the pile is closed, Done() will signal in constant time.
Note: Upon signalling, the pile is reset to its tip, so You may traverse it (via Next) right away. Usage for a pile `p`: Traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
while discarding the pile itself.
func (*WriteSeekerPile) Iter ¶
func (d *WriteSeekerPile) Iter() (item io.WriteSeeker, ok bool)
Iter puts the pile iterator back to the beginning and returns the first `Next()`, iff any. Usage for a pile `p`:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
func (*WriteSeekerPile) Next ¶
func (d *WriteSeekerPile) Next() (item io.WriteSeeker, ok bool)
Next returns the next item, with ok false iff the pile is exhausted.
Note: Iff the pile is not closed yet, Next may block, awaiting some Pile().
func (*WriteSeekerPile) Pile ¶
func (d *WriteSeekerPile) Pile(item io.WriteSeeker)
Pile appends an `io.WriteSeeker` item to the WriteSeekerPile.
Note: Pile will block iff buff is exceeded and neither Done() nor Next() is being used.
type WriterAtPile ¶
type WriterAtPile struct {
// contains filtered or unexported fields
}
WriterAtPile is a hybrid container for a lazily and concurrently populated grow-only slice of items (of type `io.WriterAt`) which may be traversed in parallel to its growth.
Usage for a pile `p`:
p := MakeWriterAtPile(128, 32)
Have it grow concurrently using multiple:
var item io.WriterAt = something
p.Pile(item)
in as many goroutines as You see fit.
In parallel, You may either traverse `p` right away:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
Here p.Iter() starts a new traversal with the first item (if any), and p.Next() keeps traversing the WriterAtPile.
or traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
Hint: here we get the result in `r` and at the same time discard / deallocate / forget the pile `p` itself.
Note: The traversal is *not* intended to be concurrency safe! Thus: You may call `Pile` concurrently to Your traversal, but use of either `Done` or `Iter` and `Next` *must* be confined to a single go routine (thread).
func MakeWriterAtPile ¶
func MakeWriterAtPile(size, buff int) *WriterAtPile
MakeWriterAtPile returns a (pointer to a) fresh pile of items (of type `io.WriterAt`) with size as initial capacity and with buff as initial leeway, allowing as many Pile() calls to execute without blocking before a respective Done() or Next().
func (*WriterAtPile) Close ¶
func (d *WriterAtPile) Close() (err error)
Close - call once when everything has been piled.
Close intentionally implements io.Closer ¶
Note: After Close(), any Close(...) will panic and any Pile(...) will panic and any Done() or Next() will return immediately: no eventual blocking, that is.
func (*WriterAtPile) Done ¶
func (d *WriterAtPile) Done() (done <-chan []io.WriterAt)
Done returns a channel which emits the result (as slice of WriterAt) once the pile is closed.
Users of Done() *must not* iterate (via Iter() Next()...) before the done-channel is closed!
Done is a convenience - useful iff You do not need to start any traversal before the pile is fully populated. Once the pile is closed, Done() will signal in constant time.
Note: Upon signalling, the pile is reset to its tip, so You may traverse it (via Next) right away. Usage for a pile `p`: Traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
while discarding the pile itself.
func (*WriterAtPile) Iter ¶
func (d *WriterAtPile) Iter() (item io.WriterAt, ok bool)
Iter puts the pile iterator back to the beginning and returns the first `Next()`, iff any. Usage for a pile `p`:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
func (*WriterAtPile) Next ¶
func (d *WriterAtPile) Next() (item io.WriterAt, ok bool)
Next returns the next item, with ok false iff the pile is exhausted.
Note: Iff the pile is not closed yet, Next may block, awaiting some Pile().
func (*WriterAtPile) Pile ¶
func (d *WriterAtPile) Pile(item io.WriterAt)
Pile appends an `io.WriterAt` item to the WriterAtPile.
Note: Pile will block iff buff is exceeded and neither Done() nor Next() is being used.
type WriterPile ¶
type WriterPile struct {
// contains filtered or unexported fields
}
WriterPile is a hybrid container for a lazily and concurrently populated grow-only slice of items (of type `io.Writer`) which may be traversed in parallel to its growth.
Usage for a pile `p`:
p := MakeWriterPile(128, 32)
Have it grow concurrently using multiple:
var item io.Writer = something
p.Pile(item)
in as many goroutines as You see fit.
In parallel, You may either traverse `p` right away:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
Here p.Iter() starts a new traversal with the first item (if any), and p.Next() keeps traversing the WriterPile.
or traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
Hint: here we get the result in `r` and at the same time discard / deallocate / forget the pile `p` itself.
Note: The traversal is *not* intended to be concurrency safe! Thus: You may call `Pile` concurrently to Your traversal, but use of either `Done` or `Iter` and `Next` *must* be confined to a single go routine (thread).
func MakeWriterPile ¶
func MakeWriterPile(size, buff int) *WriterPile
MakeWriterPile returns a (pointer to a) fresh pile of items (of type `io.Writer`) with size as initial capacity and with buff as initial leeway, allowing as many Pile() calls to execute without blocking before a respective Done() or Next().
func (*WriterPile) Close ¶
func (d *WriterPile) Close() (err error)
Close - call once when everything has been piled.
Close intentionally implements io.Closer ¶
Note: After Close(), any Close(...) will panic and any Pile(...) will panic and any Done() or Next() will return immediately: no eventual blocking, that is.
func (*WriterPile) Done ¶
func (d *WriterPile) Done() (done <-chan []io.Writer)
Done returns a channel which emits the result (as slice of Writer) once the pile is closed.
Users of Done() *must not* iterate (via Iter() Next()...) before the done-channel is closed!
Done is a convenience - useful iff You do not need to start any traversal before the pile is fully populated. Once the pile is closed, Done() will signal in constant time.
Note: Upon signalling, the pile is reset to its tip, so You may traverse it (via Next) right away. Usage for a pile `p`: Traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
while discarding the pile itself.
func (*WriterPile) Iter ¶
func (d *WriterPile) Iter() (item io.Writer, ok bool)
Iter puts the pile iterator back to the beginning and returns the first `Next()`, iff any. Usage for a pile `p`:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
func (*WriterPile) Next ¶
func (d *WriterPile) Next() (item io.Writer, ok bool)
Next returns the next item, with ok false iff the pile is exhausted.
Note: Iff the pile is not closed yet, Next may block, awaiting some Pile().
func (*WriterPile) Pile ¶
func (d *WriterPile) Pile(item io.Writer)
Pile appends an `io.Writer` item to the WriterPile.
Note: Pile will block iff buff is exceeded and neither Done() nor Next() is being used.
type WriterToPile ¶
type WriterToPile struct {
// contains filtered or unexported fields
}
WriterToPile is a hybrid container for a lazily and concurrently populated grow-only slice of items (of type `io.WriterTo`) which may be traversed in parallel to its growth.
Usage for a pile `p`:
p := MakeWriterToPile(128, 32)
Have it grow concurrently using multiple:
var item io.WriterTo = something
p.Pile(item)
in as many goroutines as You see fit.
In parallel, You may either traverse `p` right away:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
Here p.Iter() starts a new traversal with the first item (if any), and p.Next() keeps traversing the WriterToPile.
or traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
Hint: here we get the result in `r` and at the same time discard / deallocate / forget the pile `p` itself.
Note: The traversal is *not* intended to be concurrency safe! Thus: You may call `Pile` concurrently to Your traversal, but use of either `Done` or `Iter` and `Next` *must* be confined to a single go routine (thread).
func MakeWriterToPile ¶
func MakeWriterToPile(size, buff int) *WriterToPile
MakeWriterToPile returns a (pointer to a) fresh pile of items (of type `io.WriterTo`) with size as initial capacity and with buff as initial leeway, allowing as many Pile() calls to execute without blocking before a respective Done() or Next().
func (*WriterToPile) Close ¶
func (d *WriterToPile) Close() (err error)
Close - call once when everything has been piled.
Close intentionally implements io.Closer ¶
Note: After Close(), any Close(...) will panic and any Pile(...) will panic and any Done() or Next() will return immediately: no eventual blocking, that is.
func (*WriterToPile) Done ¶
func (d *WriterToPile) Done() (done <-chan []io.WriterTo)
Done returns a channel which emits the result (as slice of WriterTo) once the pile is closed.
Users of Done() *must not* iterate (via Iter() Next()...) before the done-channel is closed!
Done is a convenience - useful iff You do not need to start any traversal before the pile is fully populated. Once the pile is closed, Done() will signal in constant time.
Note: Upon signalling, the pile is reset to its tip, so You may traverse it (via Next) right away. Usage for a pile `p`: Traverse blocking / awaiting close first:
for item := range <-p.Done() { ... do sth with item ... }
or use the result when available:
r, p := <-p.Done(), nil
while discarding the pile itself.
func (*WriterToPile) Iter ¶
func (d *WriterToPile) Iter() (item io.WriterTo, ok bool)
Iter puts the pile iterator back to the beginning and returns the first `Next()`, iff any. Usage for a pile `p`:
for item, ok := p.Iter(); ok; item, ok = p.Next() { ... do sth with item ... }
func (*WriterToPile) Next ¶
func (d *WriterToPile) Next() (item io.WriterTo, ok bool)
Next returns the next item, with ok false iff the pile is exhausted.
Note: Iff the pile is not closed yet, Next may block, awaiting some Pile().
func (*WriterToPile) Pile ¶
func (d *WriterToPile) Pile(item io.WriterTo)
Pile appends an `io.WriterTo` item to the WriterToPile.
Note: Pile will block iff buff is exceeded and neither Done() nor Next() is being used.
Source Files ¶
- ByteReaderPile.dot.go
- ByteScannerPile.dot.go
- ByteWriterPile.dot.go
- CloserPile.dot.go
- LimitedReaderPile.dot.go
- PipeReaderPile.dot.go
- PipeWriterPile.dot.go
- ReadCloserPile.dot.go
- ReadSeekerPile.dot.go
- ReadWriteCloserPile.dot.go
- ReadWriteSeekerPile.dot.go
- ReadWriterPile.dot.go
- ReaderAtPile.dot.go
- ReaderFromPile.dot.go
- ReaderPile.dot.go
- RuneReaderPile.dot.go
- RuneScannerPile.dot.go
- SectionReaderPile.dot.go
- SeekerPile.dot.go
- WriteCloserPile.dot.go
- WriteSeekerPile.dot.go
- WriterAtPile.dot.go
- WriterPile.dot.go
- WriterToPile.dot.go
- dot.go