Documentation ¶
Overview ¶
Package gridfs provides a MongoDB GridFS API. See https://www.mongodb.com/docs/manual/core/gridfs/ for more information about GridFS and its use cases.
Buckets ¶
The main type defined in this package is Bucket. A Bucket wraps a mongo.Database instance and operates on two collections in the database. The first is the files collection, which contains one metadata document per file stored in the bucket. This collection is named "<bucket name>.files". The second is the chunks collection, which contains chunks of files. This collection is named "<bucket name>.chunks".
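A Bucket is typically constructed from an existing mongo.Database handle. A minimal sketch is shown below; it assumes the package's NewBucket constructor and the driver's options.GridFSBucket builder (neither is documented in this section), and that db is an already-connected *mongo.Database:

package main

import (
    "log"

    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/gridfs"
    "go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
    var db *mongo.Database // assumed: an already-connected database handle

    // Create a bucket named "photos". It operates on the "photos.files" and
    // "photos.chunks" collections in db.
    bucket, err := gridfs.NewBucket(db, options.GridFSBucket().SetName("photos"))
    if err != nil {
        log.Fatal(err)
    }
    _ = bucket
}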
Uploading a File ¶
Files can be uploaded in two ways:
OpenUploadStream/OpenUploadStreamWithID - These methods return an UploadStream instance. UploadStream implements the io.Writer interface and the Write() method can be used to upload a file to the database.
UploadFromStream/UploadFromStreamWithID - These methods take an io.Reader, which represents the file to upload. They internally create a new UploadStream and close it once the operation is complete.
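Because UploadStream implements io.Writer, an existing file on disk can also be streamed into a bucket with io.Copy. A minimal sketch, assuming bucket is an initialized *gridfs.Bucket and using a hypothetical local path:

package main

import (
    "io"
    "log"
    "os"

    "go.mongodb.org/mongo-driver/mongo/gridfs"
)

func main() {
    var bucket *gridfs.Bucket // assumed: an initialized bucket

    // Open the local file to upload. The path is hypothetical.
    f, err := os.Open("example.txt")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    uploadStream, err := bucket.OpenUploadStream("example.txt")
    if err != nil {
        log.Fatal(err)
    }
    // Closing the stream writes the file metadata to the files collection.
    defer func() {
        if err := uploadStream.Close(); err != nil {
            log.Fatal(err)
        }
    }()

    // Copy the file contents into the upload stream; chunks are flushed to
    // the database as the internal buffer fills.
    if _, err := io.Copy(uploadStream, f); err != nil {
        log.Fatal(err)
    }
}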
Downloading a File ¶
Similar to uploads, files can be downloaded in two ways:
OpenDownloadStream/OpenDownloadStreamByName - These methods return a DownloadStream instance. DownloadStream implements the io.Reader interface. A file can be read either using the Read() method or any standard library function that reads from an io.Reader, such as io.Copy.
DownloadToStream/DownloadToStreamByName - These methods take an io.Writer, which represents the download destination. They internally create a new DownloadStream and close it once the operation is complete.
Index ¶
- Constants
- Variables
- type Bucket
- func (b *Bucket) Delete(fileID interface{}) error
- func (b *Bucket) DeleteContext(ctx context.Context, fileID interface{}) error
- func (b *Bucket) DownloadToStream(fileID interface{}, stream io.Writer) (int64, error)
- func (b *Bucket) DownloadToStreamByName(filename string, stream io.Writer, opts ...*options.NameOptions) (int64, error)
- func (b *Bucket) Drop() error
- func (b *Bucket) DropContext(ctx context.Context) error
- func (b *Bucket) Find(filter interface{}, opts ...*options.GridFSFindOptions) (*mongo.Cursor, error)
- func (b *Bucket) FindContext(ctx context.Context, filter interface{}, opts ...*options.GridFSFindOptions) (*mongo.Cursor, error)
- func (b *Bucket) GetChunksCollection() *mongo.Collection
- func (b *Bucket) GetFilesCollection() *mongo.Collection
- func (b *Bucket) OpenDownloadStream(fileID interface{}) (*DownloadStream, error)
- func (b *Bucket) OpenDownloadStreamByName(filename string, opts ...*options.NameOptions) (*DownloadStream, error)
- func (b *Bucket) OpenUploadStream(filename string, opts ...*options.UploadOptions) (*UploadStream, error)
- func (b *Bucket) OpenUploadStreamWithID(fileID interface{}, filename string, opts ...*options.UploadOptions) (*UploadStream, error)
- func (b *Bucket) Rename(fileID interface{}, newFilename string) error
- func (b *Bucket) RenameContext(ctx context.Context, fileID interface{}, newFilename string) error
- func (b *Bucket) SetReadDeadline(t time.Time) error
- func (b *Bucket) SetWriteDeadline(t time.Time) error
- func (b *Bucket) UploadFromStream(filename string, source io.Reader, opts ...*options.UploadOptions) (primitive.ObjectID, error)
- func (b *Bucket) UploadFromStreamWithID(fileID interface{}, filename string, source io.Reader, ...) error
- type DownloadStream
- type File
- type Upload
- type UploadStream
Examples ¶
Constants ¶
const DefaultChunkSize int32 = 255 * 1024 // 255 KiB
DefaultChunkSize is the default size of each file chunk.
const UploadBufferSize = 16 * 1024 * 1024 // 16 MiB
UploadBufferSize is the size in bytes of one stream batch. Chunks are written to the database once the total length of buffered chunks reaches this size.
Variables ¶
var ErrFileNotFound = errors.New("file with given parameters not found")
ErrFileNotFound occurs if a user asks to download a file with a file ID that isn't found in the files collection.
var ErrMissingChunkSize = errors.New("files collection document does not contain a 'chunkSize' field")
ErrMissingChunkSize occurs when downloading a file if the files collection document is missing the "chunkSize" field.
var ErrStreamClosed = errors.New("stream is closed or aborted")
ErrStreamClosed is an error returned if an operation is attempted on a closed/aborted stream.
var ErrWrongIndex = errors.New("chunk index does not match expected index")
ErrWrongIndex is used when the chunk retrieved from the server does not have the expected index.
var ErrWrongSize = errors.New("chunk size does not match expected size")
ErrWrongSize is used when the chunk retrieved from the server does not have the expected size.
Functions ¶
This section is empty.
Types ¶
type Bucket ¶
type Bucket struct {
// contains filtered or unexported fields
}
Bucket represents a GridFS bucket.
func (*Bucket) Delete ¶
func (b *Bucket) Delete(fileID interface{}) error
Delete deletes all chunks and metadata associated with the file with the given file ID.
If this operation requires a custom write deadline to be set on the bucket, it cannot be done concurrently with other write operations on this bucket that also require a custom deadline.
Use SetWriteDeadline to set a deadline for the delete operation.
Example ¶
package main

import (
    "log"

    "go.mongodb.org/mongo-driver/bson/primitive"
    "go.mongodb.org/mongo-driver/mongo/gridfs"
)

func main() {
    var bucket *gridfs.Bucket
    var fileID primitive.ObjectID

    if err := bucket.Delete(fileID); err != nil {
        log.Fatal(err)
    }
}
Output:
func (*Bucket) DeleteContext ¶ added in v1.11.0
func (b *Bucket) DeleteContext(ctx context.Context, fileID interface{}) error
DeleteContext deletes all chunks and metadata associated with the file with the given file ID and runs the underlying delete operations with the provided context.
Use the context parameter to time-out or cancel the delete operation. The deadline set by SetWriteDeadline is ignored.
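A minimal sketch of deleting a file with a context-based timeout; bucket and fileID are placeholders, as in the other examples:

package main

import (
    "context"
    "log"
    "time"

    "go.mongodb.org/mongo-driver/bson/primitive"
    "go.mongodb.org/mongo-driver/mongo/gridfs"
)

func main() {
    var bucket *gridfs.Bucket
    var fileID primitive.ObjectID

    // Cancel the delete if it takes longer than 5 seconds.
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    if err := bucket.DeleteContext(ctx, fileID); err != nil {
        log.Fatal(err)
    }
}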
func (*Bucket) DownloadToStream ¶
func (b *Bucket) DownloadToStream(fileID interface{}, stream io.Writer) (int64, error)
DownloadToStream downloads the file with the specified fileID and writes it to the provided io.Writer. It returns the number of bytes written to the stream.
If this download requires a custom read deadline to be set on the bucket, it cannot be done concurrently with other read operations on this bucket that also require a custom deadline.
Example ¶
package main

import (
    "bytes"
    "log"

    "go.mongodb.org/mongo-driver/bson/primitive"
    "go.mongodb.org/mongo-driver/mongo/gridfs"
)

func main() {
    var bucket *gridfs.Bucket
    var fileID primitive.ObjectID

    fileBuffer := bytes.NewBuffer(nil)
    if _, err := bucket.DownloadToStream(fileID, fileBuffer); err != nil {
        log.Fatal(err)
    }
}
Output:
func (*Bucket) DownloadToStreamByName ¶
func (b *Bucket) DownloadToStreamByName(filename string, stream io.Writer, opts ...*options.NameOptions) (int64, error)
DownloadToStreamByName downloads the file with the given name to the given io.Writer.
If this download requires a custom read deadline to be set on the bucket, it cannot be done concurrently with other read operations on this bucket that also require a custom deadline.
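A minimal sketch of downloading a file by name. The revision option, set here with options.GridFSName().SetRevision, selects which revision to download when multiple files share the same name (-1 selects the most recent); the filename is hypothetical:

package main

import (
    "bytes"
    "log"

    "go.mongodb.org/mongo-driver/mongo/gridfs"
    "go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
    var bucket *gridfs.Bucket

    // Download the most recent revision of the named file into a buffer.
    fileBuffer := bytes.NewBuffer(nil)
    nameOpts := options.GridFSName().SetRevision(-1)
    if _, err := bucket.DownloadToStreamByName("report.txt", fileBuffer, nameOpts); err != nil {
        log.Fatal(err)
    }
}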
func (*Bucket) Drop ¶
func (b *Bucket) Drop() error
Drop drops the files and chunks collections associated with this bucket.
If this operation requires a custom write deadline to be set on the bucket, it cannot be done concurrently with other write operations on this bucket that also require a custom deadline.
Use SetWriteDeadline to set a deadline for the drop operation.
Example ¶
package main

import (
    "log"

    "go.mongodb.org/mongo-driver/mongo/gridfs"
)

func main() {
    var bucket *gridfs.Bucket

    if err := bucket.Drop(); err != nil {
        log.Fatal(err)
    }
}
Output:
func (*Bucket) DropContext ¶ added in v1.11.0
func (b *Bucket) DropContext(ctx context.Context) error
DropContext drops the files and chunks collections associated with this bucket and runs the drop operations with the provided context.
Use the context parameter to time-out or cancel the drop operation. The deadline set by SetWriteDeadline is ignored.
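A minimal sketch of dropping a bucket's collections with a context-based timeout; bucket is a placeholder, as in the other examples:

package main

import (
    "context"
    "log"
    "time"

    "go.mongodb.org/mongo-driver/mongo/gridfs"
)

func main() {
    var bucket *gridfs.Bucket

    // Abort the drop if it does not finish within 10 seconds.
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    if err := bucket.DropContext(ctx); err != nil {
        log.Fatal(err)
    }
}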
func (*Bucket) Find ¶
func (b *Bucket) Find(filter interface{}, opts ...*options.GridFSFindOptions) (*mongo.Cursor, error)
Find returns the files collection documents that match the given filter.
If this operation requires a custom read deadline to be set on the bucket, it cannot be done concurrently with other read operations on this bucket that also require a custom deadline.
Use SetReadDeadline to set a deadline for the find operation.
Example ¶
package main

import (
    "context"
    "fmt"
    "log"

    "go.mongodb.org/mongo-driver/bson"
    "go.mongodb.org/mongo-driver/mongo/gridfs"
)

func main() {
    var bucket *gridfs.Bucket

    // Specify a filter to find all files with a length greater than 1000 bytes.
    filter := bson.D{
        {"length", bson.D{{"$gt", 1000}}},
    }
    cursor, err := bucket.Find(filter)
    if err != nil {
        log.Fatal(err)
    }
    defer func() {
        if err := cursor.Close(context.TODO()); err != nil {
            log.Fatal(err)
        }
    }()

    type gridfsFile struct {
        Name   string `bson:"filename"`
        Length int64  `bson:"length"`
    }
    var foundFiles []gridfsFile
    if err = cursor.All(context.TODO(), &foundFiles); err != nil {
        log.Fatal(err)
    }

    for _, file := range foundFiles {
        fmt.Printf("filename: %s, length: %d\n", file.Name, file.Length)
    }
}
Output:
func (*Bucket) FindContext ¶ added in v1.11.0
func (b *Bucket) FindContext(ctx context.Context, filter interface{}, opts ...*options.GridFSFindOptions) (*mongo.Cursor, error)
FindContext returns the files collection documents that match the given filter and runs the underlying find query with the provided context.
Use the context parameter to time-out or cancel the find operation. The deadline set by SetReadDeadline is ignored.
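A minimal sketch that lists matching files with a context-based timeout and a result limit set through GridFSFindOptions; the filter and field names follow the files collection schema used in the Find example above:

package main

import (
    "context"
    "fmt"
    "log"
    "time"

    "go.mongodb.org/mongo-driver/bson"
    "go.mongodb.org/mongo-driver/mongo/gridfs"
    "go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
    var bucket *gridfs.Bucket

    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    // Return at most 10 files larger than 1000 bytes.
    filter := bson.D{{"length", bson.D{{"$gt", 1000}}}}
    findOpts := options.GridFSFind().SetLimit(10)

    cursor, err := bucket.FindContext(ctx, filter, findOpts)
    if err != nil {
        log.Fatal(err)
    }
    defer cursor.Close(ctx)

    type gridfsFile struct {
        Name   string `bson:"filename"`
        Length int64  `bson:"length"`
    }
    var files []gridfsFile
    if err := cursor.All(ctx, &files); err != nil {
        log.Fatal(err)
    }
    for _, f := range files {
        fmt.Printf("filename: %s, length: %d\n", f.Name, f.Length)
    }
}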
func (*Bucket) GetChunksCollection ¶ added in v1.4.0
func (b *Bucket) GetChunksCollection() *mongo.Collection
GetChunksCollection returns a handle to the collection that stores the file chunks for this bucket.
func (*Bucket) GetFilesCollection ¶ added in v1.4.0
func (b *Bucket) GetFilesCollection() *mongo.Collection
GetFilesCollection returns a handle to the collection that stores the file documents for this bucket.
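These handles can be used for operations the Bucket API does not expose directly. A minimal sketch that counts the documents in both collections using the standard mongo.Collection CountDocuments method (bucket is a placeholder, as in the other examples):

package main

import (
    "context"
    "fmt"
    "log"

    "go.mongodb.org/mongo-driver/bson"
    "go.mongodb.org/mongo-driver/mongo/gridfs"
)

func main() {
    var bucket *gridfs.Bucket

    ctx := context.TODO()

    // Count metadata documents in "<bucket name>.files".
    fileCount, err := bucket.GetFilesCollection().CountDocuments(ctx, bson.D{})
    if err != nil {
        log.Fatal(err)
    }

    // Count chunk documents in "<bucket name>.chunks".
    chunkCount, err := bucket.GetChunksCollection().CountDocuments(ctx, bson.D{})
    if err != nil {
        log.Fatal(err)
    }

    fmt.Printf("files: %d, chunks: %d\n", fileCount, chunkCount)
}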
func (*Bucket) OpenDownloadStream ¶
func (b *Bucket) OpenDownloadStream(fileID interface{}) (*DownloadStream, error)
OpenDownloadStream creates a stream from which the contents of the file can be read.
Example ¶
package main

import (
    "bytes"
    "io"
    "log"
    "time"

    "go.mongodb.org/mongo-driver/bson/primitive"
    "go.mongodb.org/mongo-driver/mongo/gridfs"
)

func main() {
    var bucket *gridfs.Bucket
    var fileID primitive.ObjectID

    downloadStream, err := bucket.OpenDownloadStream(fileID)
    if err != nil {
        log.Fatal(err)
    }
    defer func() {
        if err := downloadStream.Close(); err != nil {
            log.Fatal(err)
        }
    }()

    // Use SetReadDeadline to force a timeout if the download does not succeed
    // in 2 seconds.
    err = downloadStream.SetReadDeadline(time.Now().Add(2 * time.Second))
    if err != nil {
        log.Fatal(err)
    }

    fileBuffer := bytes.NewBuffer(nil)
    if _, err := io.Copy(fileBuffer, downloadStream); err != nil {
        log.Fatal(err)
    }
}
Output:
func (*Bucket) OpenDownloadStreamByName ¶
func (b *Bucket) OpenDownloadStreamByName(filename string, opts ...*options.NameOptions) (*DownloadStream, error)
OpenDownloadStreamByName opens a download stream for the file with the given filename.
func (*Bucket) OpenUploadStream ¶
func (b *Bucket) OpenUploadStream(filename string, opts ...*options.UploadOptions) (*UploadStream, error)
OpenUploadStream creates a new file ID and a new upload stream for a file with the given filename.
Example ¶
package main

import (
    "log"
    "time"

    "go.mongodb.org/mongo-driver/bson"
    "go.mongodb.org/mongo-driver/mongo/gridfs"
    "go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
    var fileContent []byte
    var bucket *gridfs.Bucket

    // Specify the Metadata option to include a "metadata" field in the files
    // collection document.
    uploadOpts := options.GridFSUpload().
        SetMetadata(bson.D{{"metadata tag", "tag"}})
    uploadStream, err := bucket.OpenUploadStream("filename", uploadOpts)
    if err != nil {
        log.Fatal(err)
    }
    defer func() {
        if err = uploadStream.Close(); err != nil {
            log.Fatal(err)
        }
    }()

    // Use SetWriteDeadline to force a timeout if the upload does not succeed in
    // 2 seconds.
    err = uploadStream.SetWriteDeadline(time.Now().Add(2 * time.Second))
    if err != nil {
        log.Fatal(err)
    }

    if _, err = uploadStream.Write(fileContent); err != nil {
        log.Fatal(err)
    }
}
Output:
func (*Bucket) OpenUploadStreamWithID ¶
func (b *Bucket) OpenUploadStreamWithID(fileID interface{}, filename string, opts ...*options.UploadOptions) (*UploadStream, error)
OpenUploadStreamWithID creates a new upload stream for a file given the file ID and filename.
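A minimal sketch that supplies an explicit file ID (here a newly generated ObjectID) instead of letting the driver create one; fileContent and bucket are placeholders, as in the other examples:

package main

import (
    "log"

    "go.mongodb.org/mongo-driver/bson/primitive"
    "go.mongodb.org/mongo-driver/mongo/gridfs"
)

func main() {
    var fileContent []byte
    var bucket *gridfs.Bucket

    // Use a caller-chosen file ID rather than one generated by the driver.
    fileID := primitive.NewObjectID()

    uploadStream, err := bucket.OpenUploadStreamWithID(fileID, "filename")
    if err != nil {
        log.Fatal(err)
    }
    defer func() {
        if err := uploadStream.Close(); err != nil {
            log.Fatal(err)
        }
    }()

    if _, err := uploadStream.Write(fileContent); err != nil {
        log.Fatal(err)
    }
}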
func (*Bucket) Rename ¶
func (b *Bucket) Rename(fileID interface{}, newFilename string) error
Rename renames the stored file with the specified file ID.
If this operation requires a custom write deadline to be set on the bucket, it cannot be done concurrently with other write operations on this bucket that also require a custom deadline.
Use SetWriteDeadline to set a deadline for the rename operation.
Example ¶
package main

import (
    "log"

    "go.mongodb.org/mongo-driver/bson/primitive"
    "go.mongodb.org/mongo-driver/mongo/gridfs"
)

func main() {
    var bucket *gridfs.Bucket
    var fileID primitive.ObjectID

    if err := bucket.Rename(fileID, "new file name"); err != nil {
        log.Fatal(err)
    }
}
Output:
func (*Bucket) RenameContext ¶ added in v1.11.0
func (b *Bucket) RenameContext(ctx context.Context, fileID interface{}, newFilename string) error
RenameContext renames the stored file with the specified file ID and runs the underlying update with the provided context.
Use the context parameter to time-out or cancel the rename operation. The deadline set by SetWriteDeadline is ignored.
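A minimal sketch of renaming a file with a context-based timeout; bucket and fileID are placeholders, as in the other examples:

package main

import (
    "context"
    "log"
    "time"

    "go.mongodb.org/mongo-driver/bson/primitive"
    "go.mongodb.org/mongo-driver/mongo/gridfs"
)

func main() {
    var bucket *gridfs.Bucket
    var fileID primitive.ObjectID

    // Cancel the rename if it takes longer than 5 seconds.
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    if err := bucket.RenameContext(ctx, fileID, "new file name"); err != nil {
        log.Fatal(err)
    }
}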
func (*Bucket) SetReadDeadline ¶
func (b *Bucket) SetReadDeadline(t time.Time) error
SetReadDeadline sets the read deadline for this bucket.
func (*Bucket) SetWriteDeadline ¶
func (b *Bucket) SetWriteDeadline(t time.Time) error
SetWriteDeadline sets the write deadline for this bucket.
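A minimal sketch that sets bucket-level deadlines before running operations that do not take a context; bucket and fileID are placeholders, as in the other examples:

package main

import (
    "log"
    "time"

    "go.mongodb.org/mongo-driver/bson/primitive"
    "go.mongodb.org/mongo-driver/mongo/gridfs"
)

func main() {
    var bucket *gridfs.Bucket
    var fileID primitive.ObjectID

    // Operations that use the bucket's read deadline (e.g. Find) fail if they
    // have not finished by this time.
    if err := bucket.SetReadDeadline(time.Now().Add(5 * time.Second)); err != nil {
        log.Fatal(err)
    }

    // Operations that use the bucket's write deadline (e.g. Delete) fail if
    // they have not finished by this time.
    if err := bucket.SetWriteDeadline(time.Now().Add(5 * time.Second)); err != nil {
        log.Fatal(err)
    }

    if err := bucket.Delete(fileID); err != nil {
        log.Fatal(err)
    }
}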
func (*Bucket) UploadFromStream ¶
func (b *Bucket) UploadFromStream(filename string, source io.Reader, opts ...*options.UploadOptions) (primitive.ObjectID, error)
UploadFromStream creates a file ID and uploads a file given a source stream.
If this upload requires a custom write deadline to be set on the bucket, it cannot be done concurrently with other write operations on this bucket that also require a custom deadline.
Example ¶
package main

import (
    "bytes"
    "fmt"
    "log"

    "go.mongodb.org/mongo-driver/bson"
    "go.mongodb.org/mongo-driver/mongo/gridfs"
    "go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
    var fileContent []byte
    var bucket *gridfs.Bucket

    // Specify the Metadata option to include a "metadata" field in the files
    // collection document.
    uploadOpts := options.GridFSUpload().
        SetMetadata(bson.D{{"metadata tag", "tag"}})
    fileID, err := bucket.UploadFromStream(
        "filename",
        bytes.NewBuffer(fileContent),
        uploadOpts)
    if err != nil {
        log.Fatal(err)
    }

    fmt.Printf("new file created with ID %s", fileID)
}
Output:
func (*Bucket) UploadFromStreamWithID ¶
func (b *Bucket) UploadFromStreamWithID(fileID interface{}, filename string, source io.Reader, opts ...*options.UploadOptions) error
UploadFromStreamWithID uploads a file given a source stream.
If this upload requires a custom write deadline to be set on the bucket, it cannot be done concurrently with other write operations on this bucket that also require a custom deadline.
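A minimal sketch that uploads a source stream under a caller-chosen file ID; fileContent and bucket are placeholders, as in the other examples:

package main

import (
    "bytes"
    "log"

    "go.mongodb.org/mongo-driver/bson/primitive"
    "go.mongodb.org/mongo-driver/mongo/gridfs"
)

func main() {
    var fileContent []byte
    var bucket *gridfs.Bucket

    // Upload the contents under a caller-chosen file ID.
    fileID := primitive.NewObjectID()
    err := bucket.UploadFromStreamWithID(fileID, "filename", bytes.NewBuffer(fileContent))
    if err != nil {
        log.Fatal(err)
    }
}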
type DownloadStream ¶
type DownloadStream struct {
// contains filtered or unexported fields
}
DownloadStream is an io.Reader that can be used to download a file from a GridFS bucket.
func (*DownloadStream) Close ¶
func (ds *DownloadStream) Close() error
Close closes this download stream.
func (*DownloadStream) GetFile ¶ added in v1.4.0
func (ds *DownloadStream) GetFile() *File
GetFile returns a File object representing the file being downloaded.
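A minimal sketch that inspects a file's metadata document after opening a download stream; bucket and fileID are placeholders, as in the other examples:

package main

import (
    "fmt"
    "log"

    "go.mongodb.org/mongo-driver/bson/primitive"
    "go.mongodb.org/mongo-driver/mongo/gridfs"
)

func main() {
    var bucket *gridfs.Bucket
    var fileID primitive.ObjectID

    downloadStream, err := bucket.OpenDownloadStream(fileID)
    if err != nil {
        log.Fatal(err)
    }
    defer downloadStream.Close()

    // Inspect the files collection document for the file being downloaded.
    file := downloadStream.GetFile()
    fmt.Printf("name: %s, length: %d, uploaded: %v\n",
        file.Name, file.Length, file.UploadDate)
}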
func (*DownloadStream) Read ¶
func (ds *DownloadStream) Read(p []byte) (int, error)
Read reads the file from the server and writes it to a destination byte slice.
func (*DownloadStream) SetReadDeadline ¶
func (ds *DownloadStream) SetReadDeadline(t time.Time) error
SetReadDeadline sets the read deadline for this download stream.
type File ¶ added in v1.4.0
type File struct {
    // ID is the file's ID. This will match the file ID specified when uploading the file. If an upload helper that
    // does not require a file ID was used, this field will be a primitive.ObjectID.
    ID interface{}

    // Length is the length of this file in bytes.
    Length int64

    // ChunkSize is the maximum number of bytes for each chunk in this file.
    ChunkSize int32

    // UploadDate is the time this file was added to GridFS in UTC. This field is set by the driver and is not configurable.
    // The Metadata field can be used to store a custom date.
    UploadDate time.Time

    // Name is the name of this file.
    Name string

    // Metadata is additional data that was specified when creating this file. This field can be unmarshalled into a
    // custom type using the bson.Unmarshal family of functions.
    Metadata bson.Raw
}
File represents a file stored in GridFS. This type can be used to access file information when downloading using the DownloadStream.GetFile method.
func (*File) UnmarshalBSON ¶ added in v1.4.0
UnmarshalBSON implements the bson.Unmarshaler interface.
type Upload ¶
type Upload struct {
// contains filtered or unexported fields
}
Upload contains options to upload a file to a bucket.
type UploadStream ¶
type UploadStream struct {
    *Upload // chunk size and metadata

    FileID interface{}
    // contains filtered or unexported fields
}
UploadStream is used to upload a file in chunks. This type implements the io.Writer interface and a file can be uploaded using the Write method. After an upload is complete, the Close method must be called to write file metadata.
func (*UploadStream) Abort ¶
func (us *UploadStream) Abort() error
Abort closes the stream and deletes all file chunks that have already been written.
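A minimal sketch that abandons an in-progress upload, discarding any chunks already written; the condition for aborting (a failed Write) is only illustrative, and fileContent and bucket are placeholders:

package main

import (
    "log"

    "go.mongodb.org/mongo-driver/mongo/gridfs"
)

func main() {
    var fileContent []byte
    var bucket *gridfs.Bucket

    uploadStream, err := bucket.OpenUploadStream("filename")
    if err != nil {
        log.Fatal(err)
    }

    if _, err := uploadStream.Write(fileContent); err != nil {
        // Something went wrong mid-upload: delete the chunks written so far
        // instead of finalizing the file with Close.
        if abortErr := uploadStream.Abort(); abortErr != nil {
            log.Fatal(abortErr)
        }
        log.Fatal(err)
    }

    // Finalize the upload on success.
    if err := uploadStream.Close(); err != nil {
        log.Fatal(err)
    }
}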
func (*UploadStream) Close ¶
func (us *UploadStream) Close() error
Close writes file metadata to the files collection and cleans up any resources associated with the UploadStream.
func (*UploadStream) SetWriteDeadline ¶
func (us *UploadStream) SetWriteDeadline(t time.Time) error
SetWriteDeadline sets the write deadline for this stream.