Documentation ¶
Overview ¶
Package object implements repository support for content-addressable objects of arbitrary size.
Index ¶
- Variables
- func IDsToStrings(input []ID) []string
- func PrefetchBackingContents(ctx context.Context, contentMgr contentManager, objectIDs []ID, hint string) ([]content.ID, error)
- func VerifyObject(ctx context.Context, cr contentReader, oid ID) ([]content.ID, error)
- type HasObjectID
- type ID
- type IndirectObjectEntry
- type Manager
- type Reader
- type Writer
- type WriterOptions
Constants ¶
This section is empty.
Variables ¶
var EmptyID = ID{}
EmptyID is an empty object ID equivalent to an empty string.
var ErrObjectNotFound = errors.New("object not found")
ErrObjectNotFound is returned when an object cannot be found.
Functions ¶
func IDsToStrings ¶ added in v0.10.6
IDsToStrings converts the IDs to strings.
func PrefetchBackingContents ¶ added in v0.10.6
func PrefetchBackingContents(ctx context.Context, contentMgr contentManager, objectIDs []ID, hint string) ([]content.ID, error)
PrefetchBackingContents attempts to bring the contents backing the provided object IDs into the cache. This may succeed only partially due to cache size limits and other constraints. Returns the list of content IDs that were prefetched.
Types ¶
type HasObjectID ¶
type HasObjectID interface {
ObjectID() ID
}
HasObjectID exposes the identifier of an object.
type ID ¶
type ID struct {
// contains filtered or unexported fields
}
ID is an identifier of a repository object. Repository objects can be stored:
- In a single content block; this is the most common case, used for small objects.
- In a series of content blocks with an indirect block pointing at them (multiple levels of indirection are allowed); this is used for larger files. Object IDs using indirect blocks start with "I".
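The prefix convention above can be illustrated with a simplified sketch. The toyID type and helper functions below are hypothetical stand-ins for this package's opaque ID struct, showing only the "I" (indirect) and "Z" (compressed) prefixing described here, not the real implementation:

```go
package main

import "fmt"

// toyID is a simplified stand-in for the package's ID type; the real ID is
// an opaque struct with unexported fields.
type toyID string

// indirect wraps an index object ID, marking it with the "I" prefix.
func indirect(indexObjectID toyID) toyID { return "I" + indexObjectID }

// compressed marks an object ID with the "Z" prefix.
func compressed(id toyID) toyID { return "Z" + id }

func main() {
	direct := toyID("abc123")       // small object stored in a single content block
	fmt.Println(indirect(direct))   // I-prefixed: points at an index of blocks
	fmt.Println(compressed(direct)) // Z-prefixed: contents are compressed
}
```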
func Compressed ¶ added in v0.4.0
Compressed returns an object ID with the 'Z' prefix, indicating that it is compressed.
func DirectObjectID ¶
DirectObjectID returns a direct object ID based on the provided block ID.
func IDsFromStrings ¶ added in v0.10.6
IDsFromStrings converts strings to IDs.
func IndirectObjectID ¶
IndirectObjectID returns an indirect object ID based on the underlying index object ID.
func (ID) Append ¶ added in v0.12.0
Append appends the string representation of the ObjectID, suitable for displaying in the UI.
func (ID) IndexObjectID ¶
IndexObjectID returns the object ID of the underlying index object.
func (ID) MarshalJSON ¶ added in v0.11.0
MarshalJSON implements JSON serialization of IDs.
func (ID) String ¶
String returns the string representation of the ObjectID, suitable for displaying in the UI.
func (*ID) UnmarshalJSON ¶ added in v0.11.0
UnmarshalJSON implements JSON deserialization of IDs.
type IndirectObjectEntry ¶ added in v0.11.0
type IndirectObjectEntry struct {
	Start  int64 `json:"s,omitempty"`
	Length int64 `json:"l,omitempty"`
	Object ID    `json:"o,omitempty"`
}
IndirectObjectEntry represents an entry in an indirect object stream.
func LoadIndexObject ¶ added in v0.11.0
func LoadIndexObject(ctx context.Context, cr contentReader, indexObjectID ID) ([]IndirectObjectEntry, error)
LoadIndexObject returns the entries comprising the index object.
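The entries returned describe which backing object covers which byte range of the logical object. The self-contained sketch below uses a hypothetical entry struct mirroring IndirectObjectEntry's fields to show how a large object could be reassembled from its entries; in the real package, Reader performs these range reads:

```go
package main

import "fmt"

// entry mirrors IndirectObjectEntry: the byte range [Start, Start+Length) of
// the logical object is backed by the object with ID Object (a plain string
// here for simplicity).
type entry struct {
	Start  int64
	Length int64
	Object string
}

// assemble concatenates the backing objects in entry order, taking Length
// bytes from each. Illustrative only; the real Reader seeks and reads ranges
// on demand rather than materializing the whole object.
func assemble(entries []entry, read func(string) []byte) []byte {
	var out []byte
	for _, e := range entries {
		out = append(out, read(e.Object)[:e.Length]...)
	}
	return out
}

func main() {
	store := map[string][]byte{"a": []byte("hello "), "b": []byte("world")}
	entries := []entry{
		{Start: 0, Length: 6, Object: "a"},
		{Start: 6, Length: 5, Object: "b"},
	}
	fmt.Println(string(assemble(entries, func(id string) []byte { return store[id] })))
}
```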
type Manager ¶
type Manager struct {
	Format format.ObjectFormat
	// contains filtered or unexported fields
}
Manager implements a content-addressable storage on top of blob storage.
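The core idea of content-addressable storage is that each object is keyed by a hash of its contents, so identical data is stored only once. The toy store below is a hypothetical illustration of that principle only; the real Manager adds splitting, indirection, compression, and encryption on top of blob storage:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// toyStore is a minimal content-addressable store: each blob is keyed by the
// SHA-256 of its contents, so storing the same bytes twice deduplicates.
type toyStore struct {
	blobs map[string][]byte
}

// put stores data under its content hash and returns the hash as the object ID.
func (s *toyStore) put(data []byte) string {
	sum := sha256.Sum256(data)
	id := hex.EncodeToString(sum[:])
	s.blobs[id] = data
	return id
}

// get retrieves data by object ID.
func (s *toyStore) get(id string) ([]byte, bool) {
	b, ok := s.blobs[id]
	return b, ok
}

func main() {
	s := &toyStore{blobs: map[string][]byte{}}
	a := s.put([]byte("hello"))
	b := s.put([]byte("hello")) // identical content -> same ID, no extra blob
	fmt.Println(a == b, len(s.blobs))
}
```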
func NewObjectManager ¶
func NewObjectManager(ctx context.Context, bm contentManager, f format.ObjectFormat) (*Manager, error)
NewObjectManager creates an ObjectManager with the specified content manager and format.
func (*Manager) Concatenate ¶ added in v0.7.0
Concatenate creates an object that is the result of concatenating other objects. This is more efficient than reading and rewriting the objects, because Concatenate can merge index entries without reading the underlying contents.
This function exists primarily to facilitate efficient parallel uploads of very large files (>1GB). Because splitting is inherently sequential, each Writer can use only one CPU core, which limits throughput.
For example, when uploading a 100 GB file it is beneficial to independently upload the sections [0GB..25GB), [25GB..50GB), [50GB..75GB) and [75GB..100GB) and concatenate them together, as this allows four splitters to run in parallel, utilizing more CPU cores. Because some split points now fall at fixed boundaries rather than content-defined ones, there is a slight loss of deduplication at the concatenation points (typically 1-2 contents, usually <10MB), so this method should only be used for very large files where this overhead is relatively small.
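Computing the section boundaries for such a parallel upload is straightforward. The helper below is a hypothetical sketch, not part of the package API; it splits a file into n contiguous half-open ranges that could each be written by an independent Writer and then joined with Concatenate:

```go
package main

import "fmt"

// section is a half-open byte range [start, end) of the file.
type section struct{ start, end int64 }

// sections splits a file of totalSize bytes into n contiguous ranges; the
// last section absorbs any remainder from integer division.
func sections(totalSize int64, n int) []section {
	out := make([]section, 0, n)
	per := totalSize / int64(n)
	var start int64
	for i := 0; i < n; i++ {
		end := start + per
		if i == n-1 {
			end = totalSize // last section absorbs the remainder
		}
		out = append(out, section{start, end})
		start = end
	}
	return out
}

func main() {
	const gb = int64(1) << 30
	for _, s := range sections(100*gb, 4) {
		fmt.Printf("[%dGB..%dGB)\n", s.start/gb, s.end/gb)
	}
}
```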
type Reader ¶
Reader allows reading, seeking, determining the length of, and closing a repository object.
type Writer ¶
type Writer interface {
	io.WriteCloser

	// Checkpoint returns the ID of an object consisting of all contents written to storage so far.
	// This may not include some data still buffered in the writer.
	// If nothing has been written yet, it returns the empty object ID.
	Checkpoint() (ID, error)

	// Result returns the object ID representing all bytes written to the writer.
	Result() (ID, error)
}
Writer allows writing content to the storage and supports automatic deduplication and encryption of written data.
type WriterOptions ¶
type WriterOptions struct {
	Description string
	Prefix      content.IDPrefix // empty string or a single-character ('g'..'z')
	Compressor  compression.Name
	AsyncWrites int // allow up to N content writes to be asynchronous
}
WriterOptions can be passed to Repository.NewWriter().