Documentation ¶
Overview ¶
Package mgo (pronounced as "mango") offers a rich MongoDB driver for Go.
Detailed documentation of the API is available at GoDoc:
https://godoc.org/github.com/globalsign/mgo
Usage of the driver revolves around the concept of sessions. To get started, obtain a session using the Dial function:
session, err := mgo.Dial(url)
This will establish one or more connections with the cluster of servers defined by the url parameter. From then on, the cluster may be queried with multiple consistency rules (see SetMode) and documents retrieved with statements such as:
c := session.DB(database).C(collection)
err := c.Find(query).One(&result)
New sessions are typically created by calling session.Copy on the initial session obtained at dial time. These new sessions will share the same cluster information and connection pool, and may be easily handed into other methods and functions for organizing logic. Every session created must have its Close method called at the end of its life time, so its resources may be put back in the pool or collected, depending on the case.
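The dial/copy/close lifecycle above can be sketched as follows; the URL, database, collection, and query are placeholders and require a running server:

```go
package main

import (
	"log"

	"github.com/globalsign/mgo"
	"github.com/globalsign/mgo/bson"
)

func main() {
	// Dial once at startup; "localhost:27017" is a placeholder URL.
	root, err := mgo.Dial("localhost:27017")
	if err != nil {
		log.Fatal(err)
	}
	defer root.Close()

	// Copy the root session for each unit of work; closing the copy
	// returns its resources to the shared pool.
	session := root.Copy()
	defer session.Close()

	var result bson.M
	c := session.DB("test").C("people")
	if err := c.Find(bson.M{"name": "Ada"}).One(&result); err != nil {
		log.Println("find:", err)
	}
}
```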
There is a sub-package that provides support for BSON, which can be used by itself as well:
https://godoc.org/github.com/globalsign/mgo/bson
For more details, see the documentation for the types and methods.
Index ¶
- Constants
- Variables
- func IsDup(err error) bool
- func ResetStats()
- func SetDebug(debug bool)
- func SetLogger(logger log_Logger)
- func SetStats(enabled bool)
- type BuildInfo
- type Bulk
- func (b *Bulk) Insert(docs ...interface{})
- func (b *Bulk) Remove(selectors ...interface{})
- func (b *Bulk) RemoveAll(selectors ...interface{})
- func (b *Bulk) Run() (*BulkResult, error)
- func (b *Bulk) Unordered()
- func (b *Bulk) Update(pairs ...interface{})
- func (b *Bulk) UpdateAll(pairs ...interface{})
- func (b *Bulk) Upsert(pairs ...interface{})
- type BulkError
- type BulkErrorCase
- type BulkResult
- type Change
- type ChangeInfo
- type ChangeStream
- type ChangeStreamOptions
- type Client
- type Collation
- type Collection
- func (c *Collection) Bulk() *Bulk
- func (c *Collection) Count() (n int, err error)
- func (c *Collection) Create(info *CollectionInfo) error
- func (c *Collection) DropAllIndexes() error
- func (c *Collection) DropCollection() error
- func (c *Collection) DropIndex(key ...string) error
- func (c *Collection) DropIndexName(name string) error
- func (c *Collection) EnsureIndex(index Index) error
- func (c *Collection) EnsureIndexKey(key ...string) error
- func (c *Collection) Find(query interface{}) *Query
- func (c *Collection) FindId(id interface{}) *Query
- func (c *Collection) Indexes() (indexes []Index, err error)
- func (c *Collection) Insert(docs ...interface{}) error
- func (c *Collection) NewIter(session *Session, firstBatch []bson.Raw, cursorId int64, err error) *Iter
- func (c *Collection) Pipe(pipeline interface{}) *Pipe
- func (c *Collection) Remove(selector interface{}) error
- func (c *Collection) RemoveAll(selector interface{}) (info *ChangeInfo, err error)
- func (c *Collection) RemoveId(id interface{}) error
- func (c *Collection) Repair() *Iter
- func (c *Collection) Update(selector interface{}, update interface{}) error
- func (c *Collection) UpdateAll(selector interface{}, update interface{}) (info *ChangeInfo, err error)
- func (c *Collection) UpdateId(id interface{}, update interface{}) error
- func (c *Collection) Upsert(selector interface{}, update interface{}) (info *ChangeInfo, err error)
- func (c *Collection) UpsertId(id interface{}, update interface{}) (info *ChangeInfo, err error)
- func (coll *Collection) Watch(pipeline interface{}, options ChangeStreamOptions) (*ChangeStream, error)
- func (c *Collection) With(s *Session) *Collection
- type CollectionInfo
- type Credential
- type DBRef
- type Database
- func (db *Database) AddUser(username, password string, readOnly bool) error
- func (db *Database) C(name string) *Collection
- func (db *Database) CollectionNames() (names []string, err error)
- func (db *Database) CreateView(view string, source string, pipeline interface{}, collation *Collation) error
- func (db *Database) DropDatabase() error
- func (db *Database) FindRef(ref *DBRef) *Query
- func (db *Database) GridFS(prefix string) *GridFS
- func (db *Database) Login(user, pass string) error
- func (db *Database) Logout()
- func (db *Database) RemoveUser(user string) error
- func (db *Database) Run(cmd interface{}, result interface{}) error
- func (db *Database) UpsertUser(user *User) error
- func (db *Database) With(s *Session) *Database
- type DialInfo
- type FullDocument
- type GridFS
- func (gfs *GridFS) Create(name string) (file *GridFile, err error)
- func (gfs *GridFS) Find(query interface{}) *Query
- func (gfs *GridFS) Open(name string) (file *GridFile, err error)
- func (gfs *GridFS) OpenId(id interface{}) (file *GridFile, err error)
- func (gfs *GridFS) OpenNext(iter *Iter, file **GridFile) bool
- func (gfs *GridFS) Remove(name string) (err error)
- func (gfs *GridFS) RemoveId(id interface{}) error
- type GridFile
- func (file *GridFile) Abort()
- func (file *GridFile) Close() (err error)
- func (file *GridFile) ContentType() string
- func (file *GridFile) GetMeta(result interface{}) (err error)
- func (file *GridFile) Id() interface{}
- func (file *GridFile) MD5() (md5 string)
- func (file *GridFile) Name() string
- func (file *GridFile) Read(b []byte) (n int, err error)
- func (file *GridFile) Seek(offset int64, whence int) (pos int64, err error)
- func (file *GridFile) SetChunkSize(bytes int)
- func (file *GridFile) SetContentType(ctype string)
- func (file *GridFile) SetId(id interface{})
- func (file *GridFile) SetMeta(metadata interface{})
- func (file *GridFile) SetName(name string)
- func (file *GridFile) SetUploadDate(t time.Time)
- func (file *GridFile) Size() (bytes int64)
- func (file *GridFile) UploadDate() time.Time
- func (file *GridFile) Write(data []byte) (n int, err error)
- type Index
- type Iter
- func (iter *Iter) All(result interface{}) error
- func (iter *Iter) Close() error
- func (iter *Iter) Done() bool
- func (iter *Iter) Err() error
- func (iter *Iter) For(result interface{}, f func() error) (err error)
- func (iter *Iter) Next(result interface{}) bool
- func (iter *Iter) State() (int64, []bson.Raw)
- func (iter *Iter) Timeout() bool
- type LastError
- type MapReduce
- type MapReduceInfo
- type MapReduceTime
- type Method
- type Mode
- type Pipe
- func (p *Pipe) All(result interface{}) error
- func (p *Pipe) AllowDiskUse() *Pipe
- func (p *Pipe) Batch(n int) *Pipe
- func (p *Pipe) Collation(collation *Collation) *Pipe
- func (p *Pipe) Explain(result interface{}) error
- func (p *Pipe) Iter() *Iter
- func (p *Pipe) One(result interface{}) error
- func (p *Pipe) SetMaxTime(d time.Duration) *Pipe
- type Query
- func (q *Query) All(result interface{}) error
- func (q *Query) Apply(change Change, result interface{}) (info *ChangeInfo, err error)
- func (q *Query) Batch(n int) *Query
- func (q *Query) Collation(collation *Collation) *Query
- func (q *Query) Comment(comment string) *Query
- func (q *Query) Count() (n int, err error)
- func (q *Query) Distinct(key string, result interface{}) error
- func (q *Query) Explain(result interface{}) error
- func (q *Query) For(result interface{}, f func() error) error
- func (q *Query) Hint(indexKey ...string) *Query
- func (q *Query) Iter() *Iter
- func (q *Query) Limit(n int) *Query
- func (q *Query) LogReplay() *Query
- func (q *Query) MapReduce(job *MapReduce, result interface{}) (info *MapReduceInfo, err error)
- func (q *Query) One(result interface{}) (err error)
- func (q *Query) Prefetch(p float64) *Query
- func (q *Query) Select(selector interface{}) *Query
- func (q *Query) SetMaxScan(n int) *Query
- func (q *Query) SetMaxTime(d time.Duration) *Query
- func (q *Query) Skip(n int) *Query
- func (q *Query) Snapshot() *Query
- func (q *Query) Sort(fields ...string) *Query
- func (q *Query) Tail(timeout time.Duration) *Iter
- type QueryError
- type ReadPreference
- type Role
- type Safe
- type ServerAddr
- type Session
- func (s *Session) BuildInfo() (info BuildInfo, err error)
- func (s *Session) Clone() *Session
- func (s *Session) Close()
- func (s *Session) Copy() *Session
- func (s *Session) DB(name string) *Database
- func (s *Session) DatabaseNames() (names []string, err error)
- func (s *Session) EnsureSafe(safe *Safe)
- func (s *Session) FindRef(ref *DBRef) *Query
- func (s *Session) Fsync(async bool) error
- func (s *Session) FsyncLock() error
- func (s *Session) FsyncUnlock() error
- func (s *Session) LiveServers() (addrs []string)
- func (s *Session) Login(cred *Credential) error
- func (s *Session) LogoutAll()
- func (s *Session) Mode() Mode
- func (s *Session) New() *Session
- func (s *Session) Ping() error
- func (s *Session) Refresh()
- func (s *Session) ResetIndexCache()
- func (s *Session) Run(cmd interface{}, result interface{}) error
- func (s *Session) Safe() (safe *Safe)
- func (s *Session) SelectServers(tags ...bson.D)
- func (s *Session) SetBatch(n int)
- func (s *Session) SetBypassValidation(bypass bool)
- func (s *Session) SetCursorTimeout(d time.Duration)
- func (s *Session) SetMode(consistency Mode, refresh bool)
- func (s *Session) SetPoolLimit(limit int)
- func (s *Session) SetPoolTimeout(timeout time.Duration)
- func (s *Session) SetPrefetch(p float64)
- func (s *Session) SetSafe(safe *Safe)
- func (s *Session) SetSocketTimeout(d time.Duration)
- func (s *Session) SetSyncTimeout(d time.Duration)
- type Stats
- type User
Examples ¶
Constants ¶
const (
	// ScramSha1 use the SCRAM-SHA-1 variant
	ScramSha1 = "SCRAM-SHA-1"
	// ScramSha256 use the SCRAM-SHA-256 variant
	ScramSha256 = "SCRAM-SHA-256"
)
const (
	Default      = "default"
	UpdateLookup = "updateLookup"
)
Variables ¶
var (
	// ErrNotFound error returned when a document could not be found
	ErrNotFound = errors.New("not found")

	// ErrCursor error returned when trying to retrieve documents from
	// an invalid cursor
	ErrCursor = errors.New("invalid cursor")
)
Functions ¶
func IsDup ¶
func IsDup(err error) bool
IsDup returns whether err informs of a duplicate key error because a primary key index or a secondary unique index already has an entry with the given value.
func SetDebug ¶
func SetDebug(debug bool)
SetDebug enables the delivery of debug messages to the logger. It is only meaningful if a logger is also set.
Types ¶
type BuildInfo ¶
type BuildInfo struct {
	Version        string
	VersionArray   []int  `bson:"versionArray"` // On MongoDB 2.0+; assembled from Version otherwise
	GitVersion     string `bson:"gitVersion"`
	OpenSSLVersion string `bson:"OpenSSLVersion"`
	SysInfo        string `bson:"sysInfo"` // Deprecated and empty on MongoDB 3.2+.
	Bits           int
	Debug          bool
	MaxObjectSize  int `bson:"maxBsonObjectSize"`
}
The BuildInfo type encapsulates details about the running MongoDB server.
Note that the VersionArray field was introduced in MongoDB 2.0+, but it is internally assembled from the Version information for previous versions. In both cases, VersionArray is guaranteed to have at least 4 entries.
func (*BuildInfo) VersionAtLeast ¶
VersionAtLeast returns whether the BuildInfo version is greater than or equal to the provided version number. If more than one number is provided, numbers will be considered as major, minor, and so on.
type Bulk ¶
type Bulk struct {
// contains filtered or unexported fields
}
Bulk represents an operation that can be prepared with several orthogonal changes before being delivered to the server.
MongoDB servers older than version 2.6 do not have proper support for bulk operations, so the driver attempts to map its API as much as possible into the functionality that works. In particular, in those releases updates and removals are sent individually, and inserts are sent in bulk but have suboptimal error reporting compared to more recent versions of the server. See the documentation of BulkErrorCase for details on that.
Relevant documentation:
http://blog.mongodb.org/post/84922794768/mongodbs-new-bulk-api
func (*Bulk) Insert ¶
func (b *Bulk) Insert(docs ...interface{})
Insert queues up the provided documents for insertion.
func (*Bulk) Remove ¶
func (b *Bulk) Remove(selectors ...interface{})
Remove queues up the provided selectors for removing matching documents. Each selector will remove only a single matching document.
func (*Bulk) RemoveAll ¶
func (b *Bulk) RemoveAll(selectors ...interface{})
RemoveAll queues up the provided selectors for removing all matching documents. Each selector will remove all matching documents.
func (*Bulk) Run ¶
func (b *Bulk) Run() (*BulkResult, error)
Run runs all the operations queued up.
If an error is reported on an unordered bulk operation, the error value may be an aggregation of all issues observed. As an exception to that, Insert operations running on MongoDB versions prior to 2.6 will report the last error only due to a limitation in the wire protocol.
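A minimal sketch of queueing mixed operations and running them in one batch; the collection, field names, and documents are illustrative, and wiring up a live session is omitted:

```go
package main

import (
	"fmt"

	"github.com/globalsign/mgo"
	"github.com/globalsign/mgo/bson"
)

// runBulk queues two inserts and an update, then sends them in one round trip.
func runBulk(c *mgo.Collection) error {
	bulk := c.Bulk()
	bulk.Unordered() // let later operations proceed if earlier ones fail
	bulk.Insert(
		bson.M{"_id": 1, "name": "a"},
		bson.M{"_id": 2, "name": "b"},
	)
	bulk.Update(bson.M{"_id": 1}, bson.M{"$set": bson.M{"name": "a2"}})

	result, err := bulk.Run()
	if err != nil {
		return err
	}
	fmt.Printf("matched=%d modified=%d\n", result.Matched, result.Modified)
	return nil
}

func main() {} // session/collection wiring requires a live server and is omitted
```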
func (*Bulk) Unordered ¶
func (b *Bulk) Unordered()
Unordered puts the bulk operation in unordered mode.
In unordered mode the individual operations may be sent out of order, so later operations may proceed even if earlier ones have failed.
func (*Bulk) Update ¶
func (b *Bulk) Update(pairs ...interface{})
Update queues up the provided pairs of updating instructions. The first element of each pair selects which documents must be updated, and the second element defines how to update them. Each pair updates at most one matching document.
func (*Bulk) UpdateAll ¶
func (b *Bulk) UpdateAll(pairs ...interface{})
UpdateAll queues up the provided pairs of updating instructions. The first element of each pair selects which documents must be updated, and the second element defines how to update them. Each pair updates all documents matching the selector.
func (*Bulk) Upsert ¶
func (b *Bulk) Upsert(pairs ...interface{})
Upsert queues up the provided pairs of upserting instructions. The first element of each pair selects which documents must be updated, and the second element defines how to update them. Each pair updates at most one matching document, inserting a new document when no match is found.
type BulkError ¶
type BulkError struct {
// contains filtered or unexported fields
}
BulkError holds an error returned from running a Bulk operation. Individual errors may be obtained and inspected via the Cases method.
func (*BulkError) Cases ¶
func (e *BulkError) Cases() []BulkErrorCase
Cases returns all individual errors found while attempting the requested changes.
See the documentation of BulkErrorCase for limitations in older MongoDB releases.
type BulkErrorCase ¶
type BulkErrorCase struct {
	Index int // Position of operation that failed, or -1 if unknown.
	Err   error
}
BulkErrorCase holds an individual error found while attempting a single change within a bulk operation, and the position in which it was enqueued.
MongoDB servers older than version 2.6 do not have proper support for bulk operations, so the driver attempts to map its API as much as possible into the functionality that works. In particular, only the last error is reported for bulk inserts and without any positional information, so the Index field is set to -1 in these cases.
type BulkResult ¶
type BulkResult struct {
	Matched  int
	Modified int // Available only for MongoDB 2.6+

	// contains filtered or unexported fields
}
BulkResult holds the results for a bulk operation.
type Change ¶
type Change struct {
	Update    interface{} // The update document
	Upsert    bool        // Whether to insert in case the document isn't found
	Remove    bool        // Whether to remove the document found rather than updating
	ReturnNew bool        // Should the modified document be returned rather than the old one
}
Change holds fields for running a findAndModify MongoDB command via the Query.Apply method.
type ChangeInfo ¶
type ChangeInfo struct {
	// Updated reports the number of existing documents modified.
	// Due to server limitations, this reports the same value as the Matched field when
	// talking to MongoDB <= 2.4 and on Upsert and Apply (findAndModify) operations.
	Updated    int
	Removed    int         // Number of documents removed
	Matched    int         // Number of documents matched but not necessarily changed
	UpsertedId interface{} // Upserted _id field, when not explicitly provided
}
ChangeInfo holds details about the outcome of an update operation.
type ChangeStream ¶
type ChangeStream struct {
// contains filtered or unexported fields
}
func (*ChangeStream) Close ¶
func (changeStream *ChangeStream) Close() error
Close kills the server cursor used by the iterator, if any, and returns nil if no errors happened during iteration, or the actual error otherwise.
func (*ChangeStream) Err ¶
func (changeStream *ChangeStream) Err() error
Err returns nil if no errors happened during iteration, or the actual error otherwise.
func (*ChangeStream) Next ¶
func (changeStream *ChangeStream) Next(result interface{}) bool
Next retrieves the next document from the change stream, blocking if necessary. Next returns true if a document was successfully unmarshalled into result, and false if an error occurred. When Next returns false, the Err method should be called to check what error occurred during iteration. If there were no events available (ErrNotFound), the Err method returns nil so the user can retry the invocation.
For example:
pipeline := []bson.M{}
changeStream, err := collection.Watch(pipeline, ChangeStreamOptions{})
if err != nil {
	return err
}
for changeStream.Next(&changeDoc) {
	fmt.Printf("Change: %v\n", changeDoc)
}
if err := changeStream.Close(); err != nil {
	return err
}
If the pipeline used removes the _id field from the result, Next will error because the _id field is needed to resume iteration when an error occurs.
func (*ChangeStream) ResumeToken ¶
func (changeStream *ChangeStream) ResumeToken() *bson.Raw
ResumeToken returns a copy of the current resume token held by the change stream. This token should be treated as an opaque token that can be provided to instantiate a new change stream.
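One way to persist and reuse the token is sketched below; the collection, the empty pipeline, and where the token is stored are placeholders, and change streams require a replica set:

```go
package main

import (
	"github.com/globalsign/mgo"
	"github.com/globalsign/mgo/bson"
)

// followChanges consumes events and keeps the latest resume token, which
// could be stored durably and passed back via ResumeAfter after a restart.
func followChanges(coll *mgo.Collection, saved *bson.Raw) (*bson.Raw, error) {
	cs, err := coll.Watch([]bson.M{}, mgo.ChangeStreamOptions{
		ResumeAfter: saved, // nil starts the stream from the current point
	})
	if err != nil {
		return nil, err
	}
	var ev bson.M
	for cs.Next(&ev) {
		saved = cs.ResumeToken() // copy of the current token
	}
	return saved, cs.Close()
}

func main() {} // requires a live replica set; wiring omitted
```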
func (*ChangeStream) Timeout ¶
func (changeStream *ChangeStream) Timeout() bool
Timeout returns true if the last call of Next returned false because of an iterator timeout.
type ChangeStreamOptions ¶
type ChangeStreamOptions struct {
	// FullDocument controls the amount of data that the server will return when
	// returning a changes document.
	FullDocument FullDocument

	// ResumeAfter specifies the logical starting point for the new change stream.
	ResumeAfter *bson.Raw

	// MaxAwaitTimeMS specifies the maximum amount of time for the server to wait
	// on new documents to satisfy a change stream query.
	MaxAwaitTimeMS time.Duration

	// BatchSize specifies the number of documents to return per batch.
	BatchSize int
}
type Client ¶
type Client struct {
// contains filtered or unexported fields
}
type Collation ¶
type Collation struct {
	// Locale defines the collation locale.
	Locale string `bson:"locale"`

	// CaseFirst may be set to "upper" or "lower" to define whether
	// to have uppercase or lowercase items first. Default is "off".
	CaseFirst string `bson:"caseFirst,omitempty"`

	// Strength defines the priority of comparison properties, as follows:
	//
	// 1 (primary)    - Strongest level, denote difference between base characters
	// 2 (secondary)  - Accents in characters are considered secondary differences
	// 3 (tertiary)   - Upper and lower case differences in characters are
	//                  distinguished at the tertiary level
	// 4 (quaternary) - When punctuation is ignored at level 1-3, an additional
	//                  level can be used to distinguish words with and without
	//                  punctuation. Should only be used if ignoring punctuation
	//                  is required or when processing Japanese text.
	// 5 (identical)  - When all other levels are equal, the identical level is
	//                  used as a tiebreaker. The Unicode code point values of
	//                  the NFD form of each string are compared at this level,
	//                  just in case there is no difference at levels 1-4
	//
	// Strength defaults to 3.
	Strength int `bson:"strength,omitempty"`

	// Alternate controls whether spaces and punctuation are considered base characters.
	// May be set to "non-ignorable" (spaces and punctuation considered base characters)
	// or "shifted" (spaces and punctuation not considered base characters, and only
	// distinguished at strength > 3). Defaults to "non-ignorable".
	Alternate string `bson:"alternate,omitempty"`

	// MaxVariable defines which characters are affected when the value for Alternate is
	// "shifted". It may be set to "punct" to affect punctuation or spaces, or "space" to
	// affect only spaces.
	MaxVariable string `bson:"maxVariable,omitempty"`

	// Normalization defines whether text is normalized into Unicode NFD.
	Normalization bool `bson:"normalization,omitempty"`

	// CaseLevel defines whether to turn case sensitivity on at strength 1 or 2.
	CaseLevel bool `bson:"caseLevel,omitempty"`

	// NumericOrdering defines whether to order numbers based on numerical
	// order and not collation order.
	NumericOrdering bool `bson:"numericOrdering,omitempty"`

	// Backwards defines whether to have secondary differences considered in reverse order,
	// as done in the French language.
	Backwards bool `bson:"backwards,omitempty"`
}
Collation allows users to specify language-specific rules for string comparison, such as rules for lettercase and accent marks.
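As a sketch, a collation can be applied per query via Query.Collation; the "en" locale and strength 2 (case-insensitive comparison) are illustrative choices:

```go
package main

import (
	"github.com/globalsign/mgo"
	"github.com/globalsign/mgo/bson"
)

// findNameFold looks a document up by name, ignoring case differences.
func findNameFold(c *mgo.Collection, name string, result interface{}) error {
	collation := &mgo.Collation{
		Locale:   "en", // collation rules for English
		Strength: 2,    // compare base characters and accents, ignore case
	}
	return c.Find(bson.M{"name": name}).Collation(collation).One(result)
}

func main() {} // requires a live server; wiring omitted
```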
type Collection ¶
type Collection struct {
	Database *Database
	Name     string // "collection"
	FullName string // "db.collection"
}
Collection stores documents.
Relevant documentation:
https://docs.mongodb.com/manual/core/databases-and-collections/#collections
func (*Collection) Bulk ¶
func (c *Collection) Bulk() *Bulk
Bulk returns a value to prepare the execution of a bulk operation.
func (*Collection) Count ¶
func (c *Collection) Count() (n int, err error)
Count returns the total number of documents in the collection.
func (*Collection) Create ¶
func (c *Collection) Create(info *CollectionInfo) error
Create explicitly creates the c collection with details of info. MongoDB creates collections automatically on use, so this method is only necessary when creating a collection with non-default characteristics, such as capped collections.
Relevant documentation:
http://www.mongodb.org/display/DOCS/createCollection+Command
http://www.mongodb.org/display/DOCS/Capped+Collections
func (*Collection) DropAllIndexes ¶
func (c *Collection) DropAllIndexes() error
DropAllIndexes drops all the indexes from the c collection.
func (*Collection) DropCollection ¶
func (c *Collection) DropCollection() error
DropCollection removes the entire collection including all of its documents.
func (*Collection) DropIndex ¶
func (c *Collection) DropIndex(key ...string) error
DropIndex drops the index with the provided key from the c collection.
See EnsureIndex for details on the accepted key variants.
For example:
err1 := collection.DropIndex("firstField", "-secondField")
err2 := collection.DropIndex("customIndexName")
func (*Collection) DropIndexName ¶
func (c *Collection) DropIndexName(name string) error
DropIndexName removes the index with the provided index name.
For example:
err := collection.DropIndexName("customIndexName")
func (*Collection) EnsureIndex ¶
func (c *Collection) EnsureIndex(index Index) error
EnsureIndex ensures an index with the given key exists, creating it with the provided parameters if necessary. EnsureIndex does not modify a previously existing index with a matching key; the old index must be dropped first instead.
Once EnsureIndex returns successfully, following requests for the same index will not contact the server unless Collection.DropIndex is used to drop the same index, or Session.ResetIndexCache is called.
For example:
index := Index{
	Key:        []string{"lastname", "firstname"},
	Unique:     true,
	DropDups:   true,
	Background: true, // See notes.
	Sparse:     true,
}
err := collection.EnsureIndex(index)
The Key value determines which fields compose the index. The index ordering will be ascending by default. To obtain an index with a descending order, the field name should be prefixed by a dash (e.g. []string{"-time"}). It can also be optionally prefixed by an index kind, as in "$text:summary" or "$2d:-point". The key string format is:
[$<kind>:][-]<field name>
If the Unique field is true, the index must necessarily contain only a single document per Key. With DropDups set to true, documents with the same key as a previously indexed one will be dropped rather than an error returned.
If Background is true, other connections will be allowed to proceed using the collection without the index while it's being built. Note that the session executing EnsureIndex will be blocked for as long as it takes for the index to be built.
If Sparse is true, only documents containing the provided Key fields will be included in the index. When using a sparse index for sorting, only indexed documents will be returned.
If ExpireAfter is non-zero, the server will periodically scan the collection and remove documents containing an indexed time.Time field with a value older than ExpireAfter. See the documentation for details:
http://docs.mongodb.org/manual/tutorial/expire-data
Other kinds of indexes are also supported through that API. Here is an example:
index := Index{
	Key:  []string{"$2d:loc"},
	Bits: 26,
}
err := collection.EnsureIndex(index)
The example above requests the creation of a "2d" index for the "loc" field.
The 2D index bounds may be changed using the Min and Max attributes of the Index value. The default bound setting of (-180, 180) is suitable for latitude/longitude pairs.
The Bits parameter sets the precision of the 2D geohash values. If not provided, 26 bits are used, which is roughly equivalent to 1 foot of precision for the default (-180, 180) index bounds.
Relevant documentation:
http://www.mongodb.org/display/DOCS/Indexes
http://www.mongodb.org/display/DOCS/Indexing+Advice+and+FAQ
http://www.mongodb.org/display/DOCS/Indexing+as+a+Background+Operation
http://www.mongodb.org/display/DOCS/Geospatial+Indexing
http://www.mongodb.org/display/DOCS/Multikeys
func (*Collection) EnsureIndexKey ¶
func (c *Collection) EnsureIndexKey(key ...string) error
EnsureIndexKey ensures an index with the given key exists, creating it if necessary.
This example:
err := collection.EnsureIndexKey("a", "b")
Is equivalent to:
err := collection.EnsureIndex(mgo.Index{Key: []string{"a", "b"}})
See the EnsureIndex method for more details.
func (*Collection) Find ¶
func (c *Collection) Find(query interface{}) *Query
Find prepares a query using the provided document. The document may be a map or a struct value capable of being marshalled with bson. The map may be a generic one using interface{} for its key and/or values, such as bson.M, or it may be a properly typed map. Providing nil as the document is equivalent to providing an empty document such as bson.M{}.
Further details of the query may be tweaked using the resulting Query value, and then executed to retrieve results using methods such as One, For, Iter, or Tail.
If the resulting document includes a field named $err or errmsg, which are standard ways for MongoDB to report query errors, the returned err will be set to a *QueryError value including the Err message and the Code. In those cases, the result argument is still unmarshalled with the received document so that any other custom values may be obtained if desired.
Relevant documentation:
http://www.mongodb.org/display/DOCS/Querying
http://www.mongodb.org/display/DOCS/Advanced+Queries
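A small sketch combining Find with a typed result, sorting, and iteration; the collection and field names are illustrative:

```go
package main

import (
	"github.com/globalsign/mgo"
	"github.com/globalsign/mgo/bson"
)

type person struct {
	Name string `bson:"name"`
	Age  int    `bson:"age"`
}

// adultsByAge returns matching documents, oldest first.
func adultsByAge(c *mgo.Collection) ([]person, error) {
	var out []person
	var p person
	iter := c.Find(bson.M{"age": bson.M{"$gte": 18}}).Sort("-age").Iter()
	for iter.Next(&p) {
		out = append(out, p) // p is copied by value on append
	}
	if err := iter.Close(); err != nil {
		return nil, err
	}
	return out, nil
}

func main() {} // requires a live server; wiring omitted
```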
func (*Collection) FindId ¶
func (c *Collection) FindId(id interface{}) *Query
FindId is a convenience helper equivalent to:
query := collection.Find(bson.M{"_id": id})
See the Find method for more details.
func (*Collection) Indexes ¶
func (c *Collection) Indexes() (indexes []Index, err error)
Indexes returns a list of all indexes for the collection.
See the EnsureIndex method for more details on indexes.
func (*Collection) Insert ¶
func (c *Collection) Insert(docs ...interface{}) error
Insert inserts one or more documents in the respective collection. In case the session is in safe mode (see the SetSafe method) and an error happens while inserting the provided documents, the returned error will be of type *LastError.
func (*Collection) NewIter ¶
func (c *Collection) NewIter(session *Session, firstBatch []bson.Raw, cursorId int64, err error) *Iter
NewIter returns a newly created iterator with the provided parameters. Using this method is not recommended unless the desired functionality is not yet exposed via a more convenient interface (Find, Pipe, etc).
The optional session parameter associates the lifetime of the returned iterator to an arbitrary session. If nil, the iterator will be bound to c's session.
Documents in firstBatch will be individually provided by the returned iterator before documents from cursorId are made available. If cursorId is zero, only the documents in firstBatch are provided.
If err is not nil, the iterator's Err method will report it after exhausting documents in firstBatch.
NewIter must not be called on a collection in Eventual mode, because the cursor id is associated with the specific server that returned it. The provided session parameter may be in any mode or state, though.
The new Iter fetches documents in batches of the server-defined default size; this can be changed with the session's SetBatch method.
When using MongoDB 3.2+ NewIter supports re-using an existing cursor on the server. Ensure the connection has been established (i.e. by calling session.Ping()) before calling NewIter.
func (*Collection) Pipe ¶
func (c *Collection) Pipe(pipeline interface{}) *Pipe
func (*Collection) Remove ¶
func (c *Collection) Remove(selector interface{}) error
Remove finds a single document matching the provided selector document and removes it from the database. If the session is in safe mode (see SetSafe) an ErrNotFound error is returned if a document isn't found, or a value of type *LastError when some other error is detected.
Relevant documentation:
http://www.mongodb.org/display/DOCS/Removing
func (*Collection) RemoveAll ¶
func (c *Collection) RemoveAll(selector interface{}) (info *ChangeInfo, err error)
RemoveAll finds all documents matching the provided selector document and removes them from the database. In case the session is in safe mode (see the SetSafe method) and an error happens when attempting the change, the returned error will be of type *LastError.
Relevant documentation:
http://www.mongodb.org/display/DOCS/Removing
func (*Collection) RemoveId ¶
func (c *Collection) RemoveId(id interface{}) error
RemoveId is a convenience helper equivalent to:
err := collection.Remove(bson.M{"_id": id})
See the Remove method for more details.
func (*Collection) Repair ¶
func (c *Collection) Repair() *Iter
Repair returns an iterator that goes over all recovered documents in the collection, in a best-effort manner. This is most useful when there are damaged data files. Multiple copies of the same document may be returned by the iterator.
Repair is supported in MongoDB 2.7.8 and later.
func (*Collection) Update ¶
func (c *Collection) Update(selector interface{}, update interface{}) error
Update finds a single document matching the provided selector document and modifies it according to the update document. If the session is in safe mode (see SetSafe) an ErrNotFound error is returned if a document isn't found, or a value of type *LastError when some other error is detected.
Relevant documentation:
http://www.mongodb.org/display/DOCS/Updating
http://www.mongodb.org/display/DOCS/Atomic+Operations
func (*Collection) UpdateAll ¶
func (c *Collection) UpdateAll(selector interface{}, update interface{}) (info *ChangeInfo, err error)
UpdateAll finds all documents matching the provided selector document and modifies them according to the update document. If the session is in safe mode (see SetSafe) details of the executed operation are returned in info or an error of type *LastError when some problem is detected. It is not an error for the update to not be applied on any documents because the selector doesn't match.
Relevant documentation:
http://www.mongodb.org/display/DOCS/Updating
http://www.mongodb.org/display/DOCS/Atomic+Operations
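A sketch of a multi-document update with $set, reading the outcome from the returned ChangeInfo; the field names are illustrative:

```go
package main

import (
	"github.com/globalsign/mgo"
	"github.com/globalsign/mgo/bson"
)

// deactivateAll flips every active document to inactive and reports
// how many existing documents were modified.
func deactivateAll(c *mgo.Collection) (int, error) {
	info, err := c.UpdateAll(
		bson.M{"active": true},
		bson.M{"$set": bson.M{"active": false}},
	)
	if err != nil {
		return 0, err
	}
	return info.Updated, nil
}

func main() {} // requires a live server; wiring omitted
```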
func (*Collection) UpdateId ¶
func (c *Collection) UpdateId(id interface{}, update interface{}) error
UpdateId is a convenience helper equivalent to:
err := collection.Update(bson.M{"_id": id}, update)
See the Update method for more details.
func (*Collection) Upsert ¶
func (c *Collection) Upsert(selector interface{}, update interface{}) (info *ChangeInfo, err error)
Upsert finds a single document matching the provided selector document and modifies it according to the update document. If no document matching the selector is found, the update document is applied to the selector document and the result is inserted in the collection. If the session is in safe mode (see SetSafe) details of the executed operation are returned in info, or an error of type *LastError when some problem is detected.
Relevant documentation:
http://www.mongodb.org/display/DOCS/Updating http://www.mongodb.org/display/DOCS/Atomic+Operations
func (*Collection) UpsertId ¶
func (c *Collection) UpsertId(id interface{}, update interface{}) (info *ChangeInfo, err error)
UpsertId is a convenience helper equivalent to:
info, err := collection.Upsert(bson.M{"_id": id}, update)
See the Upsert method for more details.
func (*Collection) Watch ¶
func (coll *Collection) Watch(pipeline interface{}, options ChangeStreamOptions) (*ChangeStream, error)
Watch constructs a new ChangeStream capable of receiving continuing data from the database.
func (*Collection) With ¶
func (c *Collection) With(s *Session) *Collection
With returns a copy of c that uses session s.
type CollectionInfo ¶
type CollectionInfo struct {
	// DisableIdIndex prevents the automatic creation of the index
	// on the _id field for the collection.
	DisableIdIndex bool

	// ForceIdIndex enforces the automatic creation of the index
	// on the _id field for the collection. Capped collections,
	// for example, do not have such an index by default.
	ForceIdIndex bool

	// If Capped is true new documents will replace old ones when
	// the collection is full. MaxBytes must necessarily be set
	// to define the size when the collection wraps around.
	// MaxDocs optionally defines the number of documents when it
	// wraps, but MaxBytes still needs to be set.
	Capped   bool
	MaxBytes int
	MaxDocs  int

	// Validator contains a validation expression that defines which
	// documents should be considered valid for this collection.
	Validator interface{}

	// ValidationLevel may be set to "strict" (the default) to force
	// MongoDB to validate all documents on inserts and updates, to
	// "moderate" to apply the validation rules only to documents
	// that already fulfill the validation criteria, or to "off" to
	// disable validation entirely.
	ValidationLevel string

	// ValidationAction determines how MongoDB handles documents that
	// violate the validation rules. It may be set to "error" (the default)
	// to reject inserts or updates that violate the rules, or to "warn"
	// to log invalid operations but allow them to proceed.
	ValidationAction string

	// StorageEngine allows specifying collection options for the
	// storage engine in use. The map keys must hold the storage engine
	// name for which options are being specified.
	StorageEngine interface{}

	// Collation specifies the default collation for the collection.
	// Collation allows users to specify language-specific rules for string
	// comparison, such as rules for lettercase and accent marks.
	Collation *Collation
}
The CollectionInfo type holds metadata about a collection.
Relevant documentation:
http://www.mongodb.org/display/DOCS/createCollection+Command http://www.mongodb.org/display/DOCS/Capped+Collections
type Credential ¶
type Credential struct {
	// Username and Password hold the basic details for authentication.
	// Password is optional with some authentication mechanisms.
	Username string
	Password string

	// Source is the database used to establish credentials and privileges
	// with a MongoDB server. Defaults to the default database provided
	// during dial, or "admin" if that was unset.
	Source string

	// Service defines the service name to use when authenticating with the
	// GSSAPI mechanism. Defaults to "mongodb".
	Service string

	// ServiceHost defines which hostname to use when authenticating
	// with the GSSAPI mechanism. If not specified, defaults to the MongoDB
	// server's address.
	ServiceHost string

	// Mechanism defines the protocol for credential negotiation.
	// Defaults to "MONGODB-CR".
	Mechanism string

	// Certificate sets the x509 certificate for authentication, see:
	//
	//	https://docs.mongodb.com/manual/tutorial/configure-x509-client-authentication/
	//
	// If using certificate authentication the Username, Mechanism and Source
	// fields should not be set.
	Certificate *x509.Certificate
}
Credential holds details to authenticate with a MongoDB server.
Example (X509Authentication) ¶
// MongoDB follows RFC2253 for the ordering of the DN - if the order is
// incorrect when creating the user in Mongo, the client will not be able to
// connect.
//
// The best way to generate the DN with the correct ordering is with
// openssl:
//
//	openssl x509 -in client.crt -inform PEM -noout -subject -nameopt RFC2253
//	subject= CN=Example App,OU=MongoDB Client Authentication,O=GlobalSign,C=GB
//
// And then create the user in MongoDB with the above DN:
//
//	db.getSiblingDB("$external").runCommand({
//		createUser: "CN=Example App,OU=MongoDB Client Authentication,O=GlobalSign,C=GB",
//		roles: [
//			{ role: 'readWrite', db: 'bananas' },
//			{ role: 'userAdminAnyDatabase', db: 'admin' }
//		],
//		writeConcern: { w: "majority", wtimeout: 5000 }
//	})
//
// References:
//   - https://docs.mongodb.com/manual/tutorial/configure-x509-client-authentication/
//   - https://docs.mongodb.com/manual/core/security-x.509/

// Read in the PEM encoded X509 certificate.
//
// See the client.pem file at the path below.
clientCertPEM, err := ioutil.ReadFile("harness/certs/client.pem")

// Read in the PEM encoded private key.
clientKeyPEM, err := ioutil.ReadFile("harness/certs/client.key")

// Parse the private key, and the public key contained within the
// certificate.
clientCert, err := tls.X509KeyPair(clientCertPEM, clientKeyPEM)

// Parse the actual certificate data.
clientCert.Leaf, err = x509.ParseCertificate(clientCert.Certificate[0])

// Use the cert to set up a TLS connection to Mongo.
tlsConfig := &tls.Config{
	Certificates: []tls.Certificate{clientCert},

	// This is set to true so the example works within the test
	// environment.
	//
	// DO NOT set InsecureSkipVerify to true in a production
	// environment - if you use an untrusted CA/have your own, load
	// its certificate into the RootCAs value instead.
	//
	// RootCAs: myCAChain,
	InsecureSkipVerify: true,
}

// Connect to Mongo using TLS.
host := "localhost:40003"
session, err := DialWithInfo(&DialInfo{
	Addrs: []string{host},
	DialServer: func(addr *ServerAddr) (net.Conn, error) {
		return tls.Dial("tcp", host, tlsConfig)
	},
})

// Authenticate using the certificate.
cred := &Credential{Certificate: tlsConfig.Certificates[0].Leaf}
if err := session.Login(cred); err != nil {
	panic(err)
}

// Done! Use mgo as normal from here.
//
// You should actually check the error at each step.
_ = err
Output:
type DBRef ¶
type DBRef struct { Collection string `bson:"$ref"` Id interface{} `bson:"$id"` Database string `bson:"$db,omitempty"` }
The DBRef type implements support for the database reference MongoDB convention as supported by multiple drivers. This convention enables cross-referencing documents between collections and databases using a structure which includes a collection name, a document id, and optionally a database name.
See the FindRef methods on Session and on Database.
Relevant documentation:
http://www.mongodb.org/display/DOCS/Database+References
type Database ¶
Database holds collections of documents
Relevant documentation:
https://docs.mongodb.com/manual/core/databases-and-collections/#databases
func (*Database) AddUser ¶
AddUser creates or updates the authentication credentials of user within the db database.
WARNING: This method is obsolete and should only be used with MongoDB 2.2 or earlier. For MongoDB 2.4 and on, use UpsertUser instead.
func (*Database) C ¶
func (db *Database) C(name string) *Collection
C returns a value representing the named collection.
Creating this value is a very lightweight operation, and involves no network communication.
func (*Database) CollectionNames ¶
CollectionNames returns the collection names present in the db database.
func (*Database) CreateView ¶
func (db *Database) CreateView(view string, source string, pipeline interface{}, collation *Collation) error
CreateView creates a view as the result of applying the specified aggregation pipeline to the source collection or view. Views act as read-only collections, and are computed on demand during read operations. MongoDB executes read operations on views as part of the underlying aggregation pipeline.
For example:
db := session.DB("mydb")
db.CreateView("myview", "mycoll", []bson.M{{"$match": bson.M{"c": 1}}}, nil)
view := db.C("myview")
Relevant documentation:
https://docs.mongodb.com/manual/core/views/ https://docs.mongodb.com/manual/reference/method/db.createView/
func (*Database) DropDatabase ¶
DropDatabase removes the entire database including all of its collections.
func (*Database) FindRef ¶
FindRef returns a query that looks for the document in the provided reference. If the reference includes the DB field, the document will be retrieved from the respective database.
See also the DBRef type and the FindRef method on Session.
Relevant documentation:
http://www.mongodb.org/display/DOCS/Database+References
func (*Database) GridFS ¶
GridFS returns a GridFS value representing collections in db that follow the standard GridFS specification. The provided prefix (sometimes known as root) will determine which collections to use, and is usually set to "fs" when there is a single GridFS in the database.
See the GridFS Create, Open, and OpenId methods for more details.
Relevant documentation:
http://www.mongodb.org/display/DOCS/GridFS http://www.mongodb.org/display/DOCS/GridFS+Tools http://www.mongodb.org/display/DOCS/GridFS+Specification
func (*Database) Login ¶
Login authenticates with MongoDB using the provided credential. The authentication is valid for the whole session and will stay valid until Logout is explicitly called for the same database, or the session is closed.
func (*Database) Logout ¶
func (db *Database) Logout()
Logout removes any established authentication credentials for the database.
func (*Database) RemoveUser ¶
RemoveUser removes the authentication credentials of user from the database.
func (*Database) Run ¶
Run issues the provided command on the db database and unmarshals its result in the respective argument. The cmd argument may be either a string with the command name itself, in which case an empty document of the form bson.M{cmd: 1} will be used, or it may be a full command document.
Note that MongoDB considers the first marshalled key as the command name, so when providing a command with options, it's important to use an ordering-preserving document, such as a struct value or an instance of bson.D. For instance:
db.Run(bson.D{{"create", "mycollection"}, {"size", 1024}})
For privileged commands typically run on the "admin" database, see the Run method in the Session type.
Relevant documentation:
http://www.mongodb.org/display/DOCS/Commands http://www.mongodb.org/display/DOCS/List+of+Database+Commands
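The reason ordering matters can be shown without mgo at all: Go maps do not preserve key order, while a slice of pairs does. The sketch below is illustrative only - the `docPair` type stands in for one element of bson.D and is not part of the driver:

```go
package main

import "fmt"

// docPair mimics one element of bson.D: a key/value pair stored in a
// slice, which preserves insertion order. Illustrative type only.
type docPair struct {
	Key   string
	Value interface{}
}

// firstKey returns the key that would be marshalled first, i.e. the
// key MongoDB would interpret as the command name.
func firstKey(cmd []docPair) string {
	return cmd[0].Key
}

func main() {
	// With a slice of pairs, "create" is guaranteed to come first.
	cmd := []docPair{{"create", "mycollection"}, {"size", 1024}}
	fmt.Println(firstKey(cmd)) // create

	// A plain map (like bson.M) offers no such guarantee: iteration
	// order is randomized per run, so "size" could be marshalled
	// before "create" and the command would be misinterpreted.
	m := map[string]interface{}{"create": "mycollection", "size": 1024}
	_ = m
}
```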
func (*Database) UpsertUser ¶
UpsertUser updates the authentication credentials and the roles for a MongoDB user within the db database. If the named user doesn't exist it will be created.
This method should only be used from MongoDB 2.4 and on. For older MongoDB releases, use the obsolete AddUser method instead.
Relevant documentation:
http://docs.mongodb.org/manual/reference/user-privileges/ http://docs.mongodb.org/manual/reference/privilege-documents/
type DialInfo ¶
type DialInfo struct {
	// Addrs holds the addresses for the seed servers.
	Addrs []string

	// Timeout is the amount of time to wait for a server to respond when
	// first connecting and on follow up operations in the session. If
	// timeout is zero, the call may block forever waiting for a connection
	// to be established. Timeout does not affect logic in DialServer.
	Timeout time.Duration

	// Database is the default database name used when the Session.DB method
	// is called with an empty name, and is also used during the initial
	// authentication if Source is unset.
	Database string

	// ReplicaSetName, if specified, will prevent the obtained session from
	// communicating with any server which is not part of a replica set
	// with the given name. The default is to communicate with any server
	// specified or discovered via the servers contacted.
	ReplicaSetName string

	// Source is the database used to establish credentials and privileges
	// with a MongoDB server. Defaults to the value of Database, if that is
	// set, or "admin" otherwise.
	Source string

	// Service defines the service name to use when authenticating with the
	// GSSAPI mechanism. Defaults to "mongodb".
	Service string

	// ServiceHost defines which hostname to use when authenticating
	// with the GSSAPI mechanism. If not specified, defaults to the MongoDB
	// server's address.
	ServiceHost string

	// Mechanism defines the protocol for credential negotiation.
	// Defaults to "MONGODB-CR".
	Mechanism string

	// Username and Password inform the credentials for the initial
	// authentication done on the database defined by the Source field.
	// See Session.Login.
	Username string
	Password string

	// PoolLimit defines the per-server socket pool limit. Defaults to
	// DefaultConnectionPoolLimit. See Session.SetPoolLimit for details.
	PoolLimit int

	// PoolTimeout defines the maximum time to wait for a connection to
	// become available if the pool limit is reached. Defaults to zero,
	// which means forever. See Session.SetPoolTimeout for details.
	PoolTimeout time.Duration

	// ReadTimeout defines the maximum duration to wait for a response to be
	// read from MongoDB.
	//
	// This effectively limits the maximum query execution time. If a MongoDB
	// query duration exceeds this timeout, the caller will receive a timeout;
	// however, MongoDB will continue processing the query. This duration must
	// be large enough to allow MongoDB to execute the query and the response
	// to be received over the network connection.
	//
	// Only limits the network read - does not include unmarshalling /
	// processing of the response. Defaults to DialInfo.Timeout. If 0, no
	// timeout is set.
	ReadTimeout time.Duration

	// WriteTimeout defines the maximum duration of a write to MongoDB over
	// the network connection.
	//
	// This can usually be low unless writing large documents, or over a
	// high latency link. Only limits network write time - does not include
	// marshalling/processing the request. Defaults to DialInfo.Timeout.
	// If 0, no timeout is set.
	WriteTimeout time.Duration

	// AppName is the identifier of the client application which ran the
	// operation.
	AppName string

	// ReadPreference defines the manner in which servers are chosen. See
	// Session.SetMode and Session.SelectServers.
	ReadPreference *ReadPreference

	// Safe mostly defines write options, though there is RMode. See
	// Session.SetSafe.
	Safe Safe

	// FailFast will cause connection and query attempts to fail faster when
	// the server is unavailable, instead of retrying until the configured
	// timeout period. Note that an unavailable server may silently drop
	// packets instead of rejecting them, in which case it's impossible to
	// distinguish it from a slow server, so the timeout stays relevant.
	FailFast bool

	// Direct informs whether to establish connections only with the
	// specified seed servers, or to obtain information for the whole
	// cluster and establish connections with further servers too.
	Direct bool

	// MinPoolSize defines the minimum number of connections in the
	// connection pool. Defaults to 0.
	MinPoolSize int

	// MaxIdleTimeMS is the maximum number of milliseconds that a connection
	// can remain idle in the pool before being removed and closed.
	MaxIdleTimeMS int

	// DialServer optionally specifies the dial function for establishing
	// connections with the MongoDB servers.
	DialServer func(addr *ServerAddr) (net.Conn, error)

	// WARNING: This field is obsolete. See DialServer above.
	Dial func(addr net.Addr) (net.Conn, error)
}
DialInfo holds options for establishing a session with a MongoDB cluster. To use a URL, see the Dial function.
type FullDocument ¶
type FullDocument string
type GridFS ¶
type GridFS struct { Files *Collection Chunks *Collection }
GridFS stores files in two collections:
- chunks stores the binary chunks. For details, see the chunks collection.
- files stores the file's metadata. For details, see the files collection.
GridFS places the collections in a common bucket by prefixing each with the bucket name. By default, GridFS uses two collections with a bucket named fs:
- fs.files
- fs.chunks
You can choose a different bucket name, as well as create multiple buckets in a single database. The full collection name, which includes the bucket name, is subject to the namespace length limit.
Relevant documentation:
https://docs.mongodb.com/manual/core/gridfs/ https://docs.mongodb.com/manual/core/gridfs/#gridfs-chunks-collection https://docs.mongodb.com/manual/core/gridfs/#gridfs-files-collection
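The bucket prefix maps onto collection names by simple concatenation. A tiny illustrative helper (not part of mgo - the driver derives these names internally) makes the convention explicit:

```go
package main

import "fmt"

// bucketCollections returns the names of the two collections a GridFS
// bucket uses for a given prefix, mirroring the fs.files/fs.chunks
// convention described above. Illustrative helper only.
func bucketCollections(prefix string) (files, chunks string) {
	return prefix + ".files", prefix + ".chunks"
}

func main() {
	files, chunks := bucketCollections("fs")
	fmt.Println(files, chunks) // fs.files fs.chunks

	// A custom bucket simply changes the prefix.
	files, chunks = bucketCollections("images")
	fmt.Println(files, chunks) // images.files images.chunks
}
```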
func (*GridFS) Create ¶
Create creates a new file with the provided name in the GridFS. If the file name already exists, a new version will be inserted with an up-to-date uploadDate that will cause it to be atomically visible to the Open and OpenId methods. If the file name is not important, an empty name may be provided and the file Id used instead.
It's important to Close files whether they are being written to or read from, and to check the err result to ensure the operation completed successfully.
A simple example inserting a new file:
func check(err error) {
	if err != nil {
		panic(err)
	}
}

file, err := db.GridFS("fs").Create("myfile.txt")
check(err)
n, err := file.Write([]byte("Hello world!"))
check(err)
err = file.Close()
check(err)
fmt.Printf("%d bytes written\n", n)
The io.Writer interface is implemented by *GridFile and may be used to help on the file creation. For example:
file, err := db.GridFS("fs").Create("myfile.txt")
check(err)
messages, err := os.Open("/var/log/messages")
check(err)
defer messages.Close()
_, err = io.Copy(file, messages)
check(err)
err = file.Close()
check(err)
func (*GridFS) Find ¶
Find runs query on GridFS's files collection and returns the resulting Query.
This logic:
gfs := db.GridFS("fs")
iter := gfs.Find(nil).Iter()
Is equivalent to:
files := db.C("fs" + ".files")
iter := files.Find(nil).Iter()
func (*GridFS) Open ¶
Open returns the most recently uploaded file with the provided name, for reading. If the file isn't found, err will be set to mgo.ErrNotFound.
It's important to Close files whether they are being written to or read from, and to check the err result to ensure the operation completed successfully.
The following example will print the first 8192 bytes from the file:
file, err := db.GridFS("fs").Open("myfile.txt")
check(err)
b := make([]byte, 8192)
n, err := file.Read(b)
check(err)
fmt.Println(string(b[:n]))
err = file.Close()
check(err)
fmt.Printf("%d bytes read\n", n)
The io.Reader interface is implemented by *GridFile and may be used to deal with it. As an example, the following snippet will dump the whole file into the standard output:
file, err := db.GridFS("fs").Open("myfile.txt")
check(err)
_, err = io.Copy(os.Stdout, file)
check(err)
err = file.Close()
check(err)
func (*GridFS) OpenId ¶
OpenId returns the file with the provided id, for reading. If the file isn't found, err will be set to mgo.ErrNotFound.
It's important to Close files whether they are being written to or read from, and to check the err result to ensure the operation completed successfully.
The following example will print the first 8192 bytes from the file:
func check(err error) {
	if err != nil {
		panic(err)
	}
}

file, err := db.GridFS("fs").OpenId(objid)
check(err)
b := make([]byte, 8192)
n, err := file.Read(b)
check(err)
fmt.Println(string(b[:n]))
err = file.Close()
check(err)
fmt.Printf("%d bytes read\n", n)
The io.Reader interface is implemented by *GridFile and may be used to deal with it. As an example, the following snippet will dump the whole file into the standard output:
file, err := db.GridFS("fs").OpenId(objid)
check(err)
_, err = io.Copy(os.Stdout, file)
check(err)
err = file.Close()
check(err)
func (*GridFS) OpenNext ¶
OpenNext opens the next file from iter for reading, sets *file to it, and returns true on success. If no more documents are available in iter or an error occurred, *file is set to nil and the result is false. Errors will be available via iter.Err().
The iter parameter must be an iterator on the GridFS files collection. Using the GridFS.Find method is an easy way to obtain such an iterator, but any iterator on the collection will work.
If the provided *file is non-nil, OpenNext will close it before attempting to iterate to the next element. This means that in a loop one only has to worry about closing files when breaking out of the loop early (break, return, or panic).
For example:
gfs := db.GridFS("fs")
query := gfs.Find(nil).Sort("filename")
iter := query.Iter()
var f *mgo.GridFile
for gfs.OpenNext(iter, &f) {
	fmt.Printf("Filename: %s\n", f.Name())
}
if err := iter.Close(); err != nil {
	panic(err)
}
type GridFile ¶
type GridFile struct {
// contains filtered or unexported fields
}
GridFile represents a file document in the GridFS files collection.
func (*GridFile) Abort ¶
func (file *GridFile) Abort()
Abort cancels an in-progress write, preventing the file from being atomically created and ensuring previously written chunks are removed when the file is closed.
It is a runtime error to call Abort when the file was not opened for writing.
func (*GridFile) Close ¶
Close flushes any pending changes in case the file is being written to, waits for any background operations to finish, and closes the file.
It's important to Close files whether they are being written to or read from, and to check the err result to ensure the operation completed successfully.
func (*GridFile) ContentType ¶
ContentType returns the optional file content type. An empty string will be returned in case it is unset.
func (*GridFile) GetMeta ¶
GetMeta unmarshals the optional "metadata" field associated with the file into the result parameter. The meaning of keys under that field is user-defined. For example:
result := struct{ INode int }{}
err = file.GetMeta(&result)
if err != nil {
	panic(err)
}
fmt.Printf("inode: %d\n", result.INode)
func (*GridFile) Name ¶
Name returns the optional file name. An empty string will be returned in case it is unset.
func (*GridFile) Read ¶
Read reads into b the next available data from the file and returns the number of bytes read and an error in case something went wrong. At the end of the file, n will be zero and err will be set to io.EOF.
The parameters and behavior of this function turn the file into an io.Reader.
func (*GridFile) Seek ¶
Seek sets the offset for the next Read or Write on file to offset, interpreted according to whence: 0 means relative to the origin of the file, 1 means relative to the current offset, and 2 means relative to the end. It returns the new offset and an error, if any.
func (*GridFile) SetChunkSize ¶
SetChunkSize sets the size of saved chunks. Once the file is written to, it will be split into blocks of that size and each block saved into an independent chunk document. The default chunk size is 255kb.
It is a runtime error to call this function once the file has started being written to.
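For a given file size and chunk size, the number of chunk documents produced is the ceiling of their ratio. A quick illustrative calculation (the helper is not part of mgo):

```go
package main

import "fmt"

// chunkCount returns how many chunk documents a file of the given size
// produces with the given chunk size. Illustrative arithmetic only.
func chunkCount(fileSize, chunkSize int64) int64 {
	if fileSize == 0 {
		return 0
	}
	// Integer ceiling division.
	return (fileSize + chunkSize - 1) / chunkSize
}

func main() {
	// A 1 MiB file with the default 255kb (255*1024 bytes) chunks
	// needs five chunk documents - four full ones plus a partial tail.
	fmt.Println(chunkCount(1<<20, 255*1024)) // 5
}
```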
func (*GridFile) SetContentType ¶
SetContentType changes the optional file content type. An empty string may be used to unset it.
It is a runtime error to call this function when the file is not open for writing.
func (*GridFile) SetId ¶
func (file *GridFile) SetId(id interface{})
SetId changes the current file Id.
It is a runtime error to call this function once the file has started being written to, or when the file is not open for writing.
func (*GridFile) SetMeta ¶
func (file *GridFile) SetMeta(metadata interface{})
SetMeta changes the optional "metadata" field associated with the file. The meaning of keys under that field is user-defined. For example:
file.SetMeta(bson.M{"inode": inode})
It is a runtime error to call this function when the file is not open for writing.
func (*GridFile) SetName ¶
SetName changes the optional file name. An empty string may be used to unset it.
It is a runtime error to call this function when the file is not open for writing.
func (*GridFile) SetUploadDate ¶
SetUploadDate changes the file upload time.
It is a runtime error to call this function when the file is not open for writing.
func (*GridFile) UploadDate ¶
UploadDate returns the file upload time.
func (*GridFile) Write ¶
Write writes the provided data to the file and returns the number of bytes written and an error in case something went wrong.
The file will internally cache the data so that all but the last chunk sent to the database have the size defined by SetChunkSize. This also means that errors may be deferred until a future call to Write or Close.
The parameters and behavior of this function turn the file into an io.Writer.
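The chunking behavior described above can be sketched with a small stand-alone helper. This is illustrative only - mgo performs this buffering internally when you call Write:

```go
package main

import "fmt"

// splitChunks splits data into blocks of at most chunkSize bytes, the
// way a GridFile slices buffered writes before storing each block as a
// chunk document. Only the final chunk may be smaller than chunkSize.
func splitChunks(data []byte, chunkSize int) [][]byte {
	var chunks [][]byte
	for len(data) > chunkSize {
		chunks = append(chunks, data[:chunkSize])
		data = data[chunkSize:]
	}
	if len(data) > 0 {
		chunks = append(chunks, data)
	}
	return chunks
}

func main() {
	for _, c := range splitChunks([]byte("abcdefgh"), 3) {
		fmt.Println(string(c))
	}
	// abc
	// def
	// gh
}
```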
type Index ¶
type Index struct {
	Key        []string // Index key fields; prefix name with dash (-) for descending order
	Unique     bool     // Prevent two documents from having the same index key
	DropDups   bool     // Drop documents with the same index key as a previously indexed one
	Background bool     // Build index in background and return immediately
	Sparse     bool     // Only index documents containing the Key fields

	PartialFilter bson.M // Partial index filter expression

	// If ExpireAfter is defined the server will periodically delete
	// documents with indexed time.Time older than the provided delta.
	ExpireAfter time.Duration

	// Name holds the stored index name. On creation if this field is unset
	// it is computed by EnsureIndex based on the index key.
	Name string

	// Properties for spatial indexes.
	//
	// Min and Max were improperly typed as int when they should have been
	// floats. To preserve backwards compatibility they are still typed as
	// int and the following two fields enable reading and writing the same
	// fields as float numbers. In mgo.v3, these fields will be dropped and
	// Min/Max will become floats.
	Min, Max   int
	Minf, Maxf float64
	BucketSize float64
	Bits       int

	// Properties for text indexes.
	DefaultLanguage  string
	LanguageOverride string

	// Weights defines the significance of provided fields relative to other
	// fields in a text index. The score for a given word in a document is
	// derived from the weighted sum of the frequency for each of the indexed
	// fields in that document. The default field weight is 1.
	Weights map[string]int

	// Collation defines the collation to use for the index.
	Collation *Collation
}
An Index is a special data structure that stores a small portion of the collection's data set in an easy-to-traverse form. The index stores the value of a specific field or set of fields, ordered by the value of the field. The ordering of the index entries supports efficient equality matches and range-based query operations. In addition, MongoDB can return sorted results by using the ordering in the index.
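The dash-prefix convention used by Index.Key can be sketched as a small stand-alone helper (illustrative, not part of mgo - the driver parses these prefixes internally):

```go
package main

import (
	"fmt"
	"strings"
)

// keyOrder interprets one element of an Index.Key slice: a leading
// dash selects descending order. The 1/-1 return values mirror
// MongoDB's index direction values. Illustrative helper only.
func keyOrder(key string) (field string, direction int) {
	if strings.HasPrefix(key, "-") {
		return key[1:], -1
	}
	return key, 1
}

func main() {
	f, d := keyOrder("-created")
	fmt.Println(f, d) // created -1

	f, d = keyOrder("name")
	fmt.Println(f, d) // name 1
}
```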
type Iter ¶
type Iter struct {
// contains filtered or unexported fields
}
Iter stores information about a cursor.
Relevant documentation:
https://docs.mongodb.com/manual/tutorial/iterate-a-cursor/
func (*Iter) All ¶
All retrieves all documents from the result set into the provided slice and closes the iterator.
The result argument must necessarily be the address for a slice. The slice may be nil or previously allocated.
WARNING: Obviously, All must not be used with result sets that may be potentially large, since it may consume all memory until the system crashes. Consider building the query with a Limit clause to ensure the result size is bounded.
For instance:
var result []struct{ Value int }
iter := collection.Find(nil).Limit(100).Iter()
err := iter.All(&result)
if err != nil {
	return err
}
func (*Iter) Close ¶
Close kills the server cursor used by the iterator, if any, and returns nil if no errors happened during iteration, or the actual error otherwise.
Server cursors are automatically closed at the end of an iteration, which means close will do nothing unless the iteration was interrupted before the server finished sending results to the driver. If Close is not called in such a situation, the cursor will remain available at the server until the default cursor timeout period is reached. No further problems arise.
Close is idempotent. That means it can be called repeatedly and will return the same result every time.
In case a resulting document included a field named $err or errmsg, which are standard ways for MongoDB to report an improper query, the returned value has a *QueryError type.
func (*Iter) Done ¶
Done returns true only if a follow up Next call is guaranteed to return false.
For an iterator created with Tail, Done may return false for an iterator that has no more data. Otherwise it's guaranteed to return false only if there is data or an error happened.
Done may block waiting for a pending query to verify whether more data is actually available or not.
func (*Iter) Err ¶
Err returns nil if no errors happened during iteration, or the actual error otherwise.
In case a resulting document included a field named $err or errmsg, which are standard ways for MongoDB to report an improper query, the returned value has a *QueryError type, and includes the Err message and the Code.
func (*Iter) For ¶
The For method is obsolete and will be removed in a future release. See Iter for an elegant replacement.
func (*Iter) Next ¶
Next retrieves the next document from the result set, blocking if necessary. This method will also automatically retrieve another batch of documents from the server when the current one is exhausted, or before that in background if pre-fetching is enabled (see the Query.Prefetch and Session.SetPrefetch methods).
Next returns true if a document was successfully unmarshalled onto result, and false at the end of the result set or if an error happened. When Next returns false, either the Err method or the Close method should be called to verify if there was an error during iteration. While both will return the error (or nil), Close will also release the cursor on the server. The Timeout method may also be called to verify if the false return value was caused by a timeout (no available results).
For example:
iter := collection.Find(nil).Iter()
for iter.Next(&result) {
	fmt.Printf("Result: %v\n", result.Id)
}
if iter.Timeout() {
	// react to timeout
}
if err := iter.Close(); err != nil {
	return err
}
func (*Iter) State ¶
State returns the current state of Iter. When combined with NewIter an existing cursor can be reused on Mongo 3.2+. Like NewIter, this method should be avoided if the desired functionality is exposed via a more convenient interface.
Care must be taken to resume using Iter only when connected directly to the same server that the cursor was created on (with a Monotonic connection or with the connect=direct connection option).
type LastError ¶
type LastError struct {
	Err             string
	Code, N, Waited int
	FSyncFiles      int `bson:"fsyncFiles"`
	WTimeout        bool
	UpdatedExisting bool        `bson:"updatedExisting"`
	UpsertedId      interface{} `bson:"upserted"`
	// contains filtered or unexported fields
}
LastError holds the error status of the preceding write operation on the current connection.
Relevant documentation:
https://docs.mongodb.com/manual/reference/command/getLastError/
mgo.v3: Use a single user-visible error type.
type MapReduce ¶
type MapReduce struct {
	Map      string      // Map Javascript function code (required)
	Reduce   string      // Reduce Javascript function code (required)
	Finalize string      // Finalize Javascript function code (optional)
	Out      interface{} // Output collection name or document. If nil, results are inlined into the result parameter.
	Scope    interface{} // Optional global scope for Javascript functions
	Verbose  bool
}
MapReduce is used to perform map/reduce operations.
Relevant documentation:
https://docs.mongodb.com/manual/core/map-reduce/
type MapReduceInfo ¶
type MapReduceInfo struct {
    InputCount  int            // Number of documents mapped
    EmitCount   int            // Number of times reduce called emit
    OutputCount int            // Number of documents in resulting collection
    Database    string         // Output database, if results are not inlined
    Collection  string         // Output collection, if results are not inlined
    Time        int64          // Time to run the job, in nanoseconds
    VerboseTime *MapReduceTime // Only defined if Verbose was true
}
MapReduceInfo stores information on a MapReduce operation.
type MapReduceTime ¶
type MapReduceTime struct {
    Total    int64 // Total time, in nanoseconds
    Map      int64 `bson:"mapTime"`  // Time within map function, in nanoseconds
    EmitLoop int64 `bson:"emitLoop"` // Time within the emit/map loop, in nanoseconds
}
MapReduceTime stores the execution time of a MapReduce operation.
type Method ¶
type Method struct {
// contains filtered or unexported fields
}
Method defines the variant of SCRAM to use.
type Mode ¶
type Mode int
Mode denotes the read preference mode. See Eventual, Monotonic and Strong for details.
Relevant documentation on read preference modes:
http://docs.mongodb.org/manual/reference/read-preference/
const (
    // Primary mode is the default mode. All operations read from the current replica set primary.
    Primary Mode = 2
    // PrimaryPreferred mode: read from the primary if available. Read from the secondary otherwise.
    PrimaryPreferred Mode = 3
    // Secondary mode: read from one of the nearest secondary members of the replica set.
    Secondary Mode = 4
    // SecondaryPreferred mode: read from one of the nearest secondaries if available. Read from primary otherwise.
    SecondaryPreferred Mode = 5
    // Nearest mode: read from one of the nearest members, irrespective of it being primary or secondary.
    Nearest Mode = 6
    // Eventual mode is specific to mgo, and is the same as Nearest, but may change servers between reads.
    Eventual Mode = 0
    // Monotonic mode is specific to mgo, and is the same as SecondaryPreferred before the first write. Same as Primary after the first write.
    Monotonic Mode = 1
    // Strong mode is specific to mgo, and is the same as Primary.
    Strong Mode = 2

    // DefaultConnectionPoolLimit defines the default maximum number of
    // connections in the connection pool.
    //
    // To override this value set DialInfo.PoolLimit.
    DefaultConnectionPoolLimit = 4096
)
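To illustrate how these modes are applied in practice, here is a minimal sketch using Session.SetMode; the database and collection names are placeholders, and the session is assumed to have been obtained via Dial.

```go
package main

import (
	mgo "github.com/globalsign/mgo"
)

// countFromNearest relaxes consistency so the read may be served by a
// nearby secondary; the second argument to SetMode discards any
// socket reserved under the previous mode.
func countFromNearest(session *mgo.Session) (int, error) {
	s := session.Copy()
	defer s.Close()

	// Eventual is mgo-specific: like Nearest, but the session may
	// switch servers between reads.
	s.SetMode(mgo.Eventual, true)

	return s.DB("mydb").C("mycoll").Count()
}
```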
type Pipe ¶
type Pipe struct {
// contains filtered or unexported fields
}
Pipe is used to run aggregation queries against a collection.
func (*Pipe) AllowDiskUse ¶
AllowDiskUse enables writing to the "<dbpath>/_tmp" server directory so that aggregation pipelines do not have to be held entirely in memory.
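A sketch of a pipeline that may exceed the server's in-memory aggregation limit; the grouping field and collection are placeholders.

```go
package main

import (
	mgo "github.com/globalsign/mgo"
	"github.com/globalsign/mgo/bson"
)

// countByCountry groups a large collection; AllowDiskUse lets the
// server spill intermediate results to <dbpath>/_tmp if needed.
func countByCountry(coll *mgo.Collection) ([]bson.M, error) {
	pipeline := []bson.M{
		{"$group": bson.M{"_id": "$country", "total": bson.M{"$sum": 1}}},
	}
	var out []bson.M
	err := coll.Pipe(pipeline).AllowDiskUse().All(&out)
	return out, err
}
```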
func (*Pipe) Batch ¶
Batch sets the batch size used when fetching documents from the database. It's possible to change this setting on a per-session basis as well, using the Batch method of Session.
The default batch size is defined by the database server.
func (*Pipe) Collation ¶
Collation allows specifying language-specific rules for string comparison, such as rules for lettercase and accent marks. When specifying collation, the locale field is mandatory; all other collation fields are optional.
Relevant documentation:
https://docs.mongodb.com/manual/reference/collation/
func (*Pipe) Explain ¶
Explain returns a number of details about how the MongoDB server would execute the requested pipeline, such as the number of objects examined, the number of times the read lock was yielded to allow writes to go in, and so on.
For example:
var m bson.M
err := collection.Pipe(pipeline).Explain(&m)
if err == nil {
    fmt.Printf("Explain: %#v\n", m)
}
func (*Pipe) Iter ¶
Iter executes the pipeline and returns an iterator capable of going over all the generated results.
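For example, streaming aggregation results one document at a time instead of materializing them all with All; the match filter and batch size here are placeholders.

```go
package main

import (
	"fmt"

	mgo "github.com/globalsign/mgo"
	"github.com/globalsign/mgo/bson"
)

// streamActive iterates pipeline output incrementally, keeping at
// most one batch of documents in memory at a time.
func streamActive(coll *mgo.Collection) error {
	pipeline := []bson.M{{"$match": bson.M{"active": true}}}
	iter := coll.Pipe(pipeline).Batch(100).Iter()

	var doc bson.M
	for iter.Next(&doc) {
		fmt.Println(doc["_id"])
	}
	// Close releases the server-side cursor and reports any error.
	return iter.Close()
}
```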
type Query ¶
type Query struct {
// contains filtered or unexported fields
}
Query keeps info on the query.
func (*Query) Apply ¶
func (q *Query) Apply(change Change, result interface{}) (info *ChangeInfo, err error)
Apply runs the findAndModify MongoDB command, which allows updating, upserting or removing a document matching a query and atomically returning either the old version (the default) or the new version of the document (when ReturnNew is true). If no objects are found Apply returns ErrNotFound.
If the session is in safe mode, the LastError result will be returned as err.
The Sort and Select query methods affect the result of Apply. In case multiple documents match the query, Sort enables selecting which document to act upon by ordering it first. Select enables retrieving only a selection of fields of the new or old document.
This simple example increments a counter and prints its new value:
change := mgo.Change{
    Update:    bson.M{"$inc": bson.M{"n": 1}},
    ReturnNew: true,
}
info, err = col.Find(bson.M{"_id": id}).Apply(change, &doc)
fmt.Println(doc.N)
This method depends on MongoDB >= 2.0 to work properly.
Relevant documentation:
http://www.mongodb.org/display/DOCS/findAndModify+Command http://www.mongodb.org/display/DOCS/Updating http://www.mongodb.org/display/DOCS/Atomic+Operations
func (*Query) Batch ¶
Batch sets the batch size used when fetching documents from the database. It's possible to change this setting on a per-session basis as well, using the Batch method of Session.
The default batch size is defined by the database itself. As of this writing, MongoDB will use an initial size of min(100 docs, 4MB) on the first batch, and 4MB on remaining ones.
func (*Query) Collation ¶
Collation allows specifying language-specific rules for string comparison, such as rules for lettercase and accent marks. When specifying collation, the locale field is mandatory; all other collation fields are optional.
For example, to perform a case and diacritic insensitive query:
var res []bson.M
collation := &mgo.Collation{Locale: "en", Strength: 1}
err = db.C("mycoll").Find(bson.M{"a": "a"}).Collation(collation).All(&res)
if err != nil {
    return err
}
This query will match following documents:
{"a": "a"}
{"a": "A"}
{"a": "â"}
Relevant documentation:
https://docs.mongodb.com/manual/reference/collation/
func (*Query) Comment ¶
Comment adds a comment to the query to identify it in the database profiler output.
Relevant documentation:
http://docs.mongodb.org/manual/reference/operator/meta/comment http://docs.mongodb.org/manual/reference/command/profile http://docs.mongodb.org/manual/administration/analyzing-mongodb-performance/#database-profiling
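As an illustrative sketch, tagging a query so it can be located later in the profiler output; the comment string and filter are arbitrary placeholders.

```go
package main

import (
	mgo "github.com/globalsign/mgo"
	"github.com/globalsign/mgo/bson"
)

// openReports tags the query with an arbitrary label so it stands out
// in the database profiler output.
func openReports(coll *mgo.Collection) ([]bson.M, error) {
	var out []bson.M
	err := coll.Find(bson.M{"status": "open"}).
		Comment("myapp.report").
		All(&out)
	return out, err
}
```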
func (*Query) Distinct ¶
Distinct unmarshals into result the list of distinct values for the given key.
For example:
var result []int
err := collection.Find(bson.M{"gender": "F"}).Distinct("age", &result)
Relevant documentation:
http://www.mongodb.org/display/DOCS/Aggregation
func (*Query) Explain ¶
Explain returns a number of details about how the MongoDB server would execute the requested query, such as the number of objects examined, the number of times the read lock was yielded to allow writes to go in, and so on.
For example:
m := bson.M{}
err := collection.Find(bson.M{"filename": name}).Explain(m)
if err == nil {
    fmt.Printf("Explain: %#v\n", m)
}
Relevant documentation:
http://www.mongodb.org/display/DOCS/Optimization http://www.mongodb.org/display/DOCS/Query+Optimizer
func (*Query) For ¶
For method is obsolete and will be removed in a future release. See Iter as an elegant replacement.
func (*Query) Hint ¶
Hint will include an explicit "hint" in the query to force the server to use a specified index, potentially improving performance in some situations. The provided parameters are the fields that compose the key of the index to be used. For details on how the indexKey may be built, see the EnsureIndex method.
For example:
query := collection.Find(bson.M{"firstname": "Joe", "lastname": "Winter"})
query.Hint("lastname", "firstname")
Relevant documentation:
http://www.mongodb.org/display/DOCS/Optimization http://www.mongodb.org/display/DOCS/Query+Optimizer
func (*Query) Iter ¶
Iter executes the query and returns an iterator capable of going over all the results. Results will be returned in batches of configurable size (see the Batch method) and more documents will be requested when a configurable number of documents is iterated over (see the Prefetch method).
func (*Query) Limit ¶
Limit restricts the maximum number of documents retrieved to n, and also changes the batch size to the same value. Once n documents have been returned by Next, the following call will return ErrNotFound.
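A common use is skip/limit pagination, sketched below; sorting by _id keeps pages stable, and note that large skips become progressively slower on the server.

```go
package main

import (
	mgo "github.com/globalsign/mgo"
	"github.com/globalsign/mgo/bson"
)

// page fetches page number pageNum (0-based) of perPage documents.
func page(coll *mgo.Collection, pageNum, perPage int) ([]bson.M, error) {
	var out []bson.M
	err := coll.Find(nil).
		Sort("_id").
		Skip(pageNum * perPage).
		Limit(perPage).
		All(&out)
	return out, err
}
```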
func (*Query) LogReplay ¶
LogReplay enables an option that optimizes queries that are typically made on the MongoDB oplog for replaying it. This is an internal implementation aspect and most likely uninteresting for other uses. It has seen at least one use case, though, so it's exposed via the API.
func (*Query) MapReduce ¶
func (q *Query) MapReduce(job *MapReduce, result interface{}) (info *MapReduceInfo, err error)
MapReduce executes a map/reduce job for documents covered by the query. That kind of job is suitable for very flexible bulk aggregation of data performed at the server side via Javascript functions.
Results from the job may be returned as a result of the query itself through the result parameter, provided they will certainly fit in memory and in a single document. If there's a possibility that the amount of data might be too large, results must instead be stored back in an alternative collection or even a separate database, by setting the Out field of the provided MapReduce job. In that case, provide nil as the result parameter.
These are some of the ways to set Out:
nil
    Inline results into the result parameter.

bson.M{"replace": "mycollection"}
    The output will be inserted into a collection which replaces any
    existing collection with the same name.

bson.M{"merge": "mycollection"}
    This option will merge new data into the old output collection. In
    other words, if the same key exists in both the result set and the
    old collection, the new key will overwrite the old one.

bson.M{"reduce": "mycollection"}
    If documents exist for a given key in the result set and in the old
    collection, then a reduce operation (using the specified reduce
    function) will be performed on the two values and the result will
    be written to the output collection. If a finalize function was
    provided, this will be run after the reduce as well.

bson.M{..., "db": "mydb"}
    Any of the above options can have the "db" key included for doing
    the respective action in a separate database.
The following is a trivial example which will count the number of occurrences of a field named n on each document in a collection, and will return results inline:
job := &mgo.MapReduce{
    Map:    "function() { emit(this.n, 1) }",
    Reduce: "function(key, values) { return Array.sum(values) }",
}
var result []struct {
    Id    int "_id"
    Value int
}
_, err := collection.Find(nil).MapReduce(job, &result)
if err != nil {
    return err
}
for _, item := range result {
    fmt.Println(item.Value)
}
This function is compatible with MongoDB 1.7.4+.
Relevant documentation:
http://www.mongodb.org/display/DOCS/MapReduce
func (*Query) One ¶
One executes the query and unmarshals the first obtained document into the result argument. The result must be a struct or map value capable of being unmarshalled into by gobson. This function blocks until either a result is available or an error happens. For example:
err := collection.Find(bson.M{"a": 1}).One(&result)
In case the resulting document includes a field named $err or errmsg, which are standard ways for MongoDB to return query errors, the returned err will be set to a *QueryError value including the Err message and the Code. In those cases, the result argument is still unmarshalled into with the received document so that any other custom values may be obtained if desired.
func (*Query) Prefetch ¶
Prefetch sets the point at which the next batch of results will be requested. When there are p*batch_size remaining documents cached in an Iter, the next batch will be requested in the background. For instance, when using this:
query.Batch(200).Prefetch(0.25)
and there are only 50 documents cached in the Iter to be processed, the next batch of 200 will be requested. It's possible to change this setting on a per-session basis as well, using the SetPrefetch method of Session.
The default prefetch value is 0.25.
func (*Query) Select ¶
Select enables selecting which fields should be retrieved for the results found. For example, the following query would only retrieve the name field:
err := collection.Find(nil).Select(bson.M{"name": 1}).One(&result)
Relevant documentation:
http://www.mongodb.org/display/DOCS/Retrieving+a+Subset+of+Fields
func (*Query) SetMaxScan ¶
SetMaxScan constrains the query to stop after scanning the specified number of documents.
This modifier is generally used to prevent potentially long running queries from disrupting performance by scanning through too much data.
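A sketch of bounding an unindexed query; the regex filter and the 10000-document budget are placeholders.

```go
package main

import (
	mgo "github.com/globalsign/mgo"
	"github.com/globalsign/mgo/bson"
)

// boundedSearch stops after the server has examined 10000 documents,
// so an unindexed regex cannot scan the entire collection.
func boundedSearch(coll *mgo.Collection) ([]bson.M, error) {
	var out []bson.M
	err := coll.Find(bson.M{"desc": bson.M{"$regex": "rare"}}).
		SetMaxScan(10000).
		All(&out)
	return out, err
}
```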
func (*Query) SetMaxTime ¶
SetMaxTime constrains the query to stop after running for the specified time.
When the time limit is reached MongoDB automatically cancels the query. This can be used to efficiently prevent and identify unexpectedly slow queries.
A few important notes about the mechanism enforcing this limit:
Requests can block behind locking operations on the server, and that blocking time is not accounted for. In other words, the timer starts ticking only after the actual start of the query when it initially acquires the appropriate lock;
Operations are interrupted only at interrupt points where an operation can be safely aborted – the total execution time may exceed the specified value;
The limit can be applied to both CRUD operations and commands, but not all commands are interruptible;
While iterating over results, computing follow up batches is included in the total time and the iteration continues until the allotted time is over, but network roundtrips are not taken into account for the limit.
This limit does not override the inactive cursor timeout for idle cursors (default is 10 min).
This mechanism was introduced in MongoDB 2.6.
Relevant documentation:
http://blog.mongodb.org/post/83621787773/maxtimems-and-query-optimizer-introspection-in
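For example, time-boxing a query to two seconds; the filter and the two-second budget are placeholders.

```go
package main

import (
	"time"

	mgo "github.com/globalsign/mgo"
	"github.com/globalsign/mgo/bson"
)

// timeBoxed asks the server to abort the query after two seconds of
// execution time, instead of letting a slow query run unbounded.
func timeBoxed(coll *mgo.Collection) ([]bson.M, error) {
	var out []bson.M
	err := coll.Find(bson.M{"state": "pending"}).
		SetMaxTime(2 * time.Second).
		All(&out)
	return out, err
}
```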
func (*Query) Skip ¶
Skip skips over the n initial documents from the query results. Note that this only makes sense with capped collections where documents are naturally ordered by insertion time, or with sorted results.
func (*Query) Snapshot ¶
Snapshot will force the performed query to make use of an available index on the _id field to prevent the same document from being returned more than once in a single iteration. This might happen without this setting in situations when the document changes in size and thus has to be moved while the iteration is running.
Because snapshot mode traverses the _id index, it may not be used with sorting or explicit hints. It also cannot use any other index for the query.
Even with snapshot mode, items inserted or deleted during the query may or may not be returned; that is, this mode is not a true point-in-time snapshot.
The same effect of Snapshot may be obtained by using any unique index on field(s) that will not be modified (best to use Hint explicitly too). A non-unique index (such as creation time) may be made unique by appending _id to the index when creating it.
Relevant documentation:
http://www.mongodb.org/display/DOCS/How+to+do+Snapshotted+Queries+in+the+Mongo+Database
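A sketch of a full-collection scan in snapshot mode; the visit callback stands in for whatever per-document work is needed.

```go
package main

import (
	mgo "github.com/globalsign/mgo"
	"github.com/globalsign/mgo/bson"
)

// stableScan walks the whole collection without returning any moved
// document twice; the scan is forced onto the _id index, so no Sort
// or Hint may be combined with it.
func stableScan(coll *mgo.Collection, visit func(bson.M)) error {
	iter := coll.Find(nil).Snapshot().Iter()
	var doc bson.M
	for iter.Next(&doc) {
		visit(doc)
	}
	return iter.Close()
}
```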
func (*Query) Sort ¶
Sort asks the database to order returned documents according to the provided field names. A field name may be prefixed by - (minus) for it to be sorted in reverse order.
For example:
query1 := collection.Find(nil).Sort("firstname", "lastname")
query2 := collection.Find(nil).Sort("-age")
query3 := collection.Find(nil).Sort("$natural")
query4 := collection.Find(nil).Select(bson.M{"score": bson.M{"$meta": "textScore"}}).Sort("$textScore:score")
Relevant documentation:
http://www.mongodb.org/display/DOCS/Sorting+and+Natural+Order
func (*Query) Tail ¶
Tail returns a tailable iterator. Unlike a normal iterator, a tailable iterator may wait for new values to be inserted in the collection once the end of the current result set is reached. A tailable iterator may only be used with capped collections.
The timeout parameter indicates how long Next will block waiting for a result before timing out. If set to -1, Next will not timeout, and will continue waiting for a result for as long as the cursor is valid and the session is not closed. If set to 0, Next times out as soon as it reaches the end of the result set. Otherwise, Next will wait for at least the given number of seconds for a new document to be available before timing out.
On timeouts, Next will unblock and return false, and the Timeout method will return true if called. In these cases, Next may still be called again on the same iterator to check if a new value is available at the current cursor position, and again it will block according to the specified timeoutSecs. If the cursor becomes invalid, though, both Next and Timeout will return false and the query must be restarted.
The following example demonstrates timeout handling and query restarting:
iter := collection.Find(nil).Sort("$natural").Tail(5 * time.Second)
for {
    for iter.Next(&result) {
        fmt.Println(result.Id)
        lastId = result.Id
    }
    if iter.Err() != nil {
        return iter.Close()
    }
    if iter.Timeout() {
        continue
    }
    query := collection.Find(bson.M{"_id": bson.M{"$gt": lastId}})
    iter = query.Sort("$natural").Tail(5 * time.Second)
}
iter.Close()
Relevant documentation:
http://www.mongodb.org/display/DOCS/Tailable+Cursors http://www.mongodb.org/display/DOCS/Capped+Collections http://www.mongodb.org/display/DOCS/Sorting+and+Natural+Order
type QueryError ¶
QueryError is returned when a query fails.
func (*QueryError) Error ¶
func (err *QueryError) Error() string
type ReadPreference ¶
type ReadPreference struct {
    // Mode determines the consistency of results. See Session.SetMode.
    Mode Mode

    // TagSets indicates which servers are allowed to be used. See
    // Session.SelectServers.
    TagSets []bson.D
}
ReadPreference defines the manner in which servers are chosen.
type Role ¶
type Role string
Role represents an available role for users.
Relevant documentation:
http://docs.mongodb.org/manual/reference/user-privileges/
const (
    // RoleRoot provides access to the operations and all the resources
    // of the readWriteAnyDatabase, dbAdminAnyDatabase, userAdminAnyDatabase,
    // and clusterAdmin roles, plus the restore and backup roles, combined.
    RoleRoot Role = "root"
    // RoleRead provides the ability to read data on all non-system collections
    // and on the following system collections: the system.indexes, system.js, and
    // system.namespaces collections of a specific database.
    RoleRead Role = "read"
    // RoleReadAny provides the same read-only permissions as read, except it
    // applies to all but the local and config databases in the cluster.
    // The role also provides the listDatabases action on the cluster as a whole.
    RoleReadAny Role = "readAnyDatabase"
    // RoleReadWrite provides all the privileges of the read role plus the ability
    // to modify data on all non-system collections and the system.js collection
    // on a specific database.
    RoleReadWrite Role = "readWrite"
    // RoleReadWriteAny provides the same read and write permissions as readWrite,
    // except it applies to all but the local and config databases in the cluster.
    // The role also provides the listDatabases action on the cluster as a whole.
    RoleReadWriteAny Role = "readWriteAnyDatabase"
    // RoleDBAdmin provides all the privileges of the dbAdmin role on a specific database.
    RoleDBAdmin Role = "dbAdmin"
    // RoleDBAdminAny provides all the privileges of the dbAdmin role on all databases.
    RoleDBAdminAny Role = "dbAdminAnyDatabase"
    // RoleUserAdmin provides the ability to create and modify roles and users on the
    // current database. This role also indirectly provides superuser access to either
    // the database or, if scoped to the admin database, the cluster. The userAdmin role
    // allows users to grant any user any privilege, including themselves.
    RoleUserAdmin Role = "userAdmin"
    // RoleUserAdminAny provides the same access to user administration operations as
    // userAdmin, except it applies to all but the local and config databases in the cluster.
    RoleUserAdminAny Role = "userAdminAnyDatabase"
    // RoleClusterAdmin provides the greatest cluster-management access. This role combines
    // the privileges granted by the clusterManager, clusterMonitor, and hostManager roles.
    // Additionally, the role provides the dropDatabase action.
    RoleClusterAdmin Role = "clusterAdmin"
)
type Safe ¶
type Safe struct {
    W        int    // Min # of servers to ack before success
    WMode    string // Write mode for MongoDB 2.0+ (e.g. "majority")
    RMode    string // Read mode for MongoDB 3.2+ ("majority", "local", "linearizable")
    WTimeout int    // Milliseconds to wait for W before timing out
    FSync    bool   // Sync via the journal if present, or via data files sync otherwise
    J        bool   // Sync via the journal if present
}
Safe holds the session safety mode. See SetSafe for details on the Safe type.
type ServerAddr ¶
type ServerAddr struct {
// contains filtered or unexported fields
}
ServerAddr represents the address for establishing a connection to an individual MongoDB server.
func (*ServerAddr) String ¶
func (addr *ServerAddr) String() string
String returns the address that was provided for the server before resolution.
func (*ServerAddr) TCPAddr ¶
func (addr *ServerAddr) TCPAddr() *net.TCPAddr
TCPAddr returns the resolved TCP address for the server.
type Session ¶
type Session struct {
// contains filtered or unexported fields
}
Session represents a communication session with the database.
All Session methods are concurrency-safe and may be called from multiple goroutines. In all session modes but Eventual, using the session from multiple goroutines will cause them to share the same underlying socket. See the documentation on Session.SetMode for more details.
Example (Concurrency) ¶
// This example shows the best practice for concurrent use of a mgo session.
//
// Internally mgo maintains a connection pool, dialling new connections as
// required.
//
// Some general suggestions:
//   - Define a struct holding the original session, database name and
//     collection name instead of passing them explicitly.
//   - Define an interface abstracting your data access instead of exposing
//     mgo to your application code directly.
//   - Limit concurrency at the application level, not with SetPoolLimit().

// This will be our concurrent worker
var doStuff = func(wg *sync.WaitGroup, session *Session) {
    defer wg.Done()

    // Copy the session - if needed this will dial a new connection which
    // can later be reused.
    //
    // Calling close returns the connection to the pool.
    conn := session.Copy()
    defer conn.Close()

    // Do something(s) with the connection
    _, _ = conn.DB("").C("my_data").Count()
}

///////////////////////////////////////////////

// Dial a connection to Mongo - this creates the connection pool
session, err := Dial("localhost:40003/my_database")
if err != nil {
    panic(err)
}

// Concurrently do things, passing the session to the worker
wg := &sync.WaitGroup{}
for i := 0; i < 10; i++ {
    wg.Add(1)
    go doStuff(wg, session)
}
wg.Wait()

session.Close()
Output:
func Dial ¶
Dial establishes a new session to the cluster identified by the given seed server(s). The session will enable communication with all of the servers in the cluster, so the seed servers are used only to find out about the cluster topology.
Dial will timeout after 10 seconds if a server isn't reached. The returned session will timeout operations after one minute by default if servers aren't available. To customize the timeout, see DialWithTimeout, SetSyncTimeout, and DialInfo Read/WriteTimeout.
This method is generally called just once for a given cluster. Further sessions to the same cluster are then established using the New or Copy methods on the obtained session. This will make them share the underlying cluster, and manage the pool of connections appropriately.
Once the session is not useful anymore, Close must be called to release the resources appropriately.
The seed servers must be provided in the following format:
[mongodb://][user:pass@]host1[:port1][,host2[:port2],...][/database][?options]
For example, it may be as simple as:
localhost
Or more involved like:
mongodb://myuser:mypass@localhost:40001,otherhost:40001/mydb
If the port number is not provided for a server, it defaults to 27017.
The username and password provided in the URL will be used to authenticate into the database named after the slash at the end of the host names, or into the "admin" database if none is provided. The authentication information will persist in sessions obtained through the New method as well.
The following connection options are supported after the question mark:
connect=direct
    Disables the automatic replica set server discovery logic, and
    forces the use of servers provided only (even if secondaries).
    Note that to talk to a secondary the consistency requirements
    must be relaxed to Monotonic or Eventual via SetMode.

connect=replicaSet
    Discover replica sets automatically. Default connection behavior.

replicaSet=<setname>
    If specified will prevent the obtained session from communicating
    with any server which is not part of a replica set with the given
    name. The default is to communicate with any server specified or
    discovered via the servers contacted.

authSource=<db>
    Informs the database used to establish credentials and privileges
    with a MongoDB server. Defaults to the database name provided via
    the URL path, and "admin" if that's unset.

authMechanism=<mechanism>
    Defines the protocol for credential negotiation. Defaults to
    "MONGODB-CR", which is the default username/password
    challenge-response mechanism.

gssapiServiceName=<name>
    Defines the service name to use when authenticating with the
    GSSAPI mechanism. Defaults to "mongodb".

maxPoolSize=<limit>
    Defines the per-server socket pool limit. Defaults to 4096.
    See Session.SetPoolLimit for details.

minPoolSize=<limit>
    Defines the per-server socket pool minimum size. Defaults to 0.

maxIdleTimeMS=<millisecond>
    The maximum number of milliseconds that a connection can remain
    idle in the pool before being removed and closed. If maxIdleTimeMS
    is 0, connections will never be closed due to inactivity.

appName=<appName>
    The identifier of this client application. This parameter is used
    to annotate logs / profiler output and cannot exceed 128 bytes.

ssl=<true|false>
    true: Initiate the connection with TLS/SSL.
    false: Initiate the connection without TLS/SSL.
    The default value is false.
Relevant documentation:
http://docs.mongodb.org/manual/reference/connection-string/
Example (TlsConfig) ¶
// You can define a custom tlsConfig, this one enables TLS, like if you have
// ssl=true in the connection string.
url := "mongodb://localhost:40003"

tlsConfig := &tls.Config{
    // This can be configured to use a private root CA - see the Credential
    // x509 Authentication example.
    //
    // Please don't set InsecureSkipVerify to true - it makes using TLS
    // pointless and is never the right answer!
}

dialInfo, err := ParseURL(url)
dialInfo.DialServer = func(addr *ServerAddr) (net.Conn, error) {
    return tls.Dial("tcp", addr.String(), tlsConfig)
}

session, err := DialWithInfo(dialInfo)
if err != nil {
    panic(err)
}

// Use session as normal
session.Close()
Output:
Example (UsingSSL) ¶
// Connecting via TLS/SSL (enforced for MongoDB Atlas, for example)
// requires setting the ssl query param to true.
url := "mongodb://localhost:40003?ssl=true"

session, err := Dial(url)
if err != nil {
    panic(err)
}

// Use session as normal
session.Close()
Output:
func DialWithInfo ¶
DialWithInfo establishes a new session to the cluster identified by info.
func DialWithTimeout ¶
DialWithTimeout works like Dial, but uses timeout as the amount of time to wait for a server to respond when first connecting and also on follow up operations in the session. If timeout is zero, the call may block forever waiting for a connection to be made.
See SetSyncTimeout for customizing the timeout for the session.
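A sketch of failing fast on the initial connection while allowing slower follow-up operations; the five-second and one-minute values are arbitrary.

```go
package main

import (
	"time"

	mgo "github.com/globalsign/mgo"
)

// connect gives up if no server answers within five seconds, then
// widens the per-operation sync timeout for normal use.
func connect(url string) (*mgo.Session, error) {
	session, err := mgo.DialWithTimeout(url, 5*time.Second)
	if err != nil {
		return nil, err
	}
	session.SetSyncTimeout(1 * time.Minute)
	return session, nil
}
```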
func (*Session) BuildInfo ¶
BuildInfo retrieves the version and other details about the running MongoDB server.
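For example, gating a feature on the server version; the 3.2 threshold is an arbitrary placeholder.

```go
package main

import (
	"fmt"

	mgo "github.com/globalsign/mgo"
)

// requireVersion returns an error when the connected server is older
// than MongoDB 3.2.
func requireVersion(session *mgo.Session) error {
	info, err := session.BuildInfo()
	if err != nil {
		return err
	}
	if !info.VersionAtLeast(3, 2) {
		return fmt.Errorf("server too old: %s", info.Version)
	}
	return nil
}
```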
func (*Session) Clone ¶
Clone works just like Copy, but also reuses the same socket as the original session, in case it had already reserved one due to its consistency guarantees. This behavior ensures that writes performed in the old session are necessarily observed when using the new session, as long as it was a strong or monotonic session. That said, it also means that long operations may cause other goroutines using the original session to wait.
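A sketch of a read-after-write that must observe its own write; the database, collection and document are placeholders, and the session is assumed to be strong or monotonic.

```go
package main

import (
	mgo "github.com/globalsign/mgo"
	"github.com/globalsign/mgo/bson"
)

// writeThenRead clones the session so the read reuses the socket that
// performed the write, guaranteeing the write is observed.
func writeThenRead(session *mgo.Session) (bson.M, error) {
	s := session.Clone()
	defer s.Close()

	c := s.DB("mydb").C("mycoll")
	if err := c.Insert(bson.M{"_id": "x", "n": 1}); err != nil {
		return nil, err
	}
	var doc bson.M
	err := c.FindId("x").One(&doc)
	return doc, err
}
```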
func (*Session) Close ¶
func (s *Session) Close()
Close terminates the session. It's a runtime error to use a session after it has been closed.
func (*Session) Copy ¶
Copy works just like New, but preserves the exact authentication information from the original session.
func (*Session) DB ¶
DB returns a value representing the named database. If name is empty, the database name provided in the dialed URL is used instead. If that is also empty, "test" is used as a fallback in a way equivalent to the mongo shell.
Creating this value is a very lightweight operation, and involves no network communication.
func (*Session) DatabaseNames ¶
DatabaseNames returns the names of non-empty databases present in the cluster.
func (*Session) EnsureSafe ¶
EnsureSafe compares the provided safety parameters with the ones currently in use by the session and picks the most conservative choice for each setting.
That is:
- safe.WMode is always used if set.
- safe.RMode is always used if set.
- safe.W is used if larger than the current W and WMode is empty.
- safe.FSync is always used if true.
- safe.J is used if FSync is false.
- safe.WTimeout is used if set and smaller than the current WTimeout.
For example, the following statement will ensure the session is at least checking for errors, without enforcing further constraints. If a more conservative SetSafe or EnsureSafe call was previously done, the following call will be ignored.
session.EnsureSafe(&mgo.Safe{})
See also the SetSafe method for details on what each option means.
Relevant documentation:
http://www.mongodb.org/display/DOCS/getLastError+Command http://www.mongodb.org/display/DOCS/Verifying+Propagation+of+Writes+with+getLastError http://www.mongodb.org/display/DOCS/Data+Center+Awareness
func (*Session) FindRef ¶
FindRef returns a query that looks for the document in the provided reference. For a DBRef to be resolved correctly at the session level it must necessarily have the optional DB field defined.
See also the DBRef type and the FindRef method on Database.
Relevant documentation:
http://www.mongodb.org/display/DOCS/Database+References
func (*Session) Fsync ¶
Fsync flushes in-memory writes to disk on the server the session is established with. If async is true, the call returns immediately, otherwise it returns after the flush has been made.
func (*Session) FsyncLock ¶
FsyncLock locks all writes in the specific server the session is established with and returns. Any writes attempted to the server after it is successfully locked will block until FsyncUnlock is called for the same server.
This method works on secondaries as well, preventing the oplog from being flushed while the server is locked, but since only the server connected to is locked, for locking specific secondaries it may be necessary to establish a connection directly to the secondary (see Dial's connect=direct option).
As an important caveat, note that once a write is attempted and blocks, follow up reads will block as well due to the way the lock is internally implemented in the server. More details at:
https://jira.mongodb.org/browse/SERVER-4243
FsyncLock is often used for performing consistent backups of the database files on disk.
Relevant documentation:
http://www.mongodb.org/display/DOCS/fsync+Command http://www.mongodb.org/display/DOCS/Backups
func (*Session) FsyncUnlock ¶
FsyncUnlock releases the server for writes. See FsyncLock for details.
func (*Session) LiveServers ¶
LiveServers returns a list of server addresses which are currently known to be alive.
func (*Session) Login ¶
func (s *Session) Login(cred *Credential) error
Login authenticates with MongoDB using the provided credential. The authentication is valid for the whole session and will stay valid until Logout is explicitly called for the same database, or the session is closed.
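A minimal sketch, assuming the session was dialed without credentials in the URL; the username, password, and source database are illustrative:

```go
err := session.Login(&mgo.Credential{
	Username: "appuser", // illustrative credentials
	Password: "secret",
	Source:   "admin", // database holding the user document
})
if err != nil {
	log.Fatal(err)
}
// The credential stays valid until Logout is called for "admin"
// or the session is closed.
```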
func (*Session) LogoutAll ¶
func (s *Session) LogoutAll()
LogoutAll removes all established authentication credentials for the session.
func (*Session) New ¶
New creates a new session with the same parameters as the original session, including consistency, batch size, prefetching, safety mode, etc. The returned session will use sockets from the pool, so there's a chance that writes just performed in another session may not yet be visible.
Login information from the original session will not be copied over into the new session unless it was provided through the initial URL for the Dial function.
See the Copy and Clone methods.
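The per-request pattern built on Copy, as described in the package overview, can be sketched as follows (the handler, database, and collection names are illustrative):

```go
func handleRequest(root *mgo.Session) {
	// Copy shares cluster information and the socket pool with root,
	// but starts with fresh consistency guarantees.
	s := root.Copy()
	defer s.Close() // put the session's resources back in the pool

	c := s.DB("mydb").C("items") // illustrative database/collection
	// ... use c for this request's queries ...
	_ = c
}
```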
func (*Session) Refresh ¶
func (s *Session) Refresh()
Refresh puts back any reserved sockets in use and restarts the consistency guarantees according to the current consistency setting for the session.
func (*Session) ResetIndexCache ¶
func (s *Session) ResetIndexCache()
ResetIndexCache clears the cache of previously ensured indexes. Following requests to EnsureIndex will contact the server.
func (*Session) Run ¶
Run issues the provided command on the "admin" database and unmarshals its result into the respective argument. The cmd argument may be either a string with the command name itself, in which case an empty document of the form bson.M{cmd: 1} will be used, or it may be a full command document.
Note that MongoDB considers the first marshalled key as the command name, so when providing a command with options, it's important to use an ordering-preserving document, such as a struct value or an instance of bson.D. For instance:
db.Run(bson.D{{"create", "mycollection"}, {"size", 1024}})
For commands on arbitrary databases, see the Run method in the Database type.
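Putting the pieces together, a command with options and a typed result might look like this sketch (the result struct is illustrative; the fields follow the server's reply document):

```go
// The command name must be the first marshalled key, so bson.D is
// used rather than bson.M, which does not preserve ordering.
var result struct {
	Ok int `bson:"ok"`
}
err := session.Run(bson.D{{"create", "mycollection"}, {"size", 1024}}, &result)
if err != nil {
	log.Fatal(err)
}
```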
Relevant documentation:
http://www.mongodb.org/display/DOCS/Commands
http://www.mongodb.org/display/DOCS/List+of+Database+Commands
func (*Session) SelectServers ¶
SelectServers restricts communication to servers configured with the given tags. For example, the following statement restricts servers used for reading operations to those with both tag "disk" set to "ssd" and tag "rack" set to 1:
session.SelectServers(bson.D{{"disk", "ssd"}, {"rack", 1}})
Multiple sets of tags may be provided, in which case the used server must match all tags within any one set.
If a connection was previously assigned to the session due to the current session mode (see Session.SetMode), the tag selection will only be enforced after the session is refreshed.
Relevant documentation:
http://docs.mongodb.org/manual/tutorial/configure-replica-set-tag-sets
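Providing multiple tag sets can be sketched like this (the tag names mirror the example above):

```go
// The server used must match every tag in at least one of the sets:
// either {disk=ssd, rack=1} or {disk=ssd, rack=2}.
session.SelectServers(
	bson.D{{"disk", "ssd"}, {"rack", 1}},
	bson.D{{"disk", "ssd"}, {"rack", 2}},
)
```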
func (*Session) SetBatch ¶
SetBatch sets the default batch size used when fetching documents from the database. It's possible to change this setting on a per-query basis as well, using the Query.Batch method.
The default batch size is defined by the database itself. As of this writing, MongoDB will use an initial size of min(100 docs, 4MB) on the first batch, and 4MB on remaining ones.
func (*Session) SetBypassValidation ¶
SetBypassValidation sets whether the server should bypass the registered validation expressions executed when documents are inserted or modified, in the interest of preserving invariants in the collection being modified. The default is to not bypass, and thus to perform the validation expressions registered for modified collections.
Document validation was introduced in MongoDB 3.2.
Relevant documentation:
https://docs.mongodb.org/manual/release-notes/3.2/#bypass-validation
func (*Session) SetCursorTimeout ¶
SetCursorTimeout changes the standard timeout period that the server enforces on created cursors. The only supported value right now is 0, which disables the timeout. The standard server timeout is 10 minutes.
func (*Session) SetMode ¶
SetMode changes the consistency mode for the session.
The default mode is Strong.
In the Strong consistency mode reads and writes will always be made to the primary server using a unique connection so that reads and writes are fully consistent, ordered, and observing the most up-to-date data. This offers the least benefits in terms of distributing load, but the most guarantees. See also Monotonic and Eventual.
In the Monotonic consistency mode reads may not be entirely up-to-date, but they will always see the history of changes moving forward, the data read will be consistent across sequential queries in the same session, and modifications made within the session will be observed in following queries (read-your-writes).
In practice, the Monotonic mode is obtained by performing initial reads on a unique connection to an arbitrary secondary, if one is available, and once the first write happens, the session connection is switched over to the primary server. This manages to distribute some of the reading load with secondaries, while maintaining some useful guarantees.
In the Eventual consistency mode reads will be made to any secondary in the cluster, if one is available, and sequential reads will not necessarily be made with the same connection. This means that data may be observed out of order. Writes will of course be issued to the primary, but independent writes in the same Eventual session may also be made with independent connections, so there are also no guarantees in terms of write ordering (no read-your-writes guarantees either).
The Eventual mode is the fastest and most resource-friendly, but is also the one offering the least guarantees about ordering of the data read and written.
If refresh is true, in addition to ensuring the session is in the given consistency mode, the consistency guarantees will also be reset (e.g. a Monotonic session will be allowed to read from secondaries again). This is equivalent to calling the Refresh function.
Shifting between Monotonic and Strong modes will keep a previously reserved connection for the session unless refresh is true or the connection is unsuitable (to a secondary server in a Strong session).
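A short sketch of using a relaxed mode on a copied session while keeping the original session in the default Strong mode (assumes an already-dialed session; the variable name is illustrative):

```go
// Reads through this session may go to a secondary and are allowed
// to lag behind the primary.
reporting := session.Copy()
defer reporting.Close()
reporting.SetMode(mgo.Eventual, true) // refresh=true also resets guarantees

// session itself is untouched and stays in Strong mode.
```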
func (*Session) SetPoolLimit ¶
SetPoolLimit sets the maximum number of sockets in use in a single server before this session will block waiting for a socket to be available. The default limit is 4096.
This limit must be set to cover more than any expected workload of the application. It is a bad practice and an unsupported use case to use the database driver to define the concurrency limit of an application. Prevent such concurrency "at the door" instead, by properly restricting the amount of used resources and number of goroutines before they are created.
func (*Session) SetPoolTimeout ¶
SetPoolTimeout sets the maximum time connection attempts will wait to reuse an existing connection from the pool if the PoolLimit has been reached. If the value is exceeded, the attempt to use a session will fail with an error. The default value is zero, which means to wait forever with no timeout.
func (*Session) SetPrefetch ¶
SetPrefetch sets the default point at which the next batch of results will be requested. When there are p*batch_size remaining documents cached in an Iter, the next batch will be requested in background. For instance, when using this:
session.SetBatch(200)
session.SetPrefetch(0.25)
and there are only 50 documents cached in the Iter to be processed, the next batch of 200 will be requested. It's possible to change this setting on a per-query basis as well, using the Prefetch method of Query.
The default prefetch value is 0.25.
func (*Session) SetSafe ¶
SetSafe changes the session safety mode.
If the safe parameter is nil, the session is put in unsafe mode, and writes become fire-and-forget, without error checking. The unsafe mode is faster since operations won't hold on waiting for a confirmation.
If the safe parameter is not nil, any changing query (insert, update, ...) will be followed by a getLastError command with the specified parameters, to ensure the request was correctly processed.
The default is &Safe{}, meaning check for errors and use the default behavior for all fields.
The safe.W parameter determines how many servers should confirm a write before the operation is considered successful. If set to 0 or 1, the command will return as soon as the primary is done with the request. If safe.WTimeout is greater than zero, it determines how many milliseconds to wait for the safe.W servers to respond before returning an error.
Starting with MongoDB 2.0.0 the safe.WMode parameter can be used instead of W to request for richer semantics. If set to "majority" the server will wait for a majority of members from the replica set to respond before returning. Custom modes may also be defined within the server to create very detailed placement schemas. See the data awareness documentation in the links below for more details (note that MongoDB internally reuses the "w" field name for WMode).
If safe.J is true, servers will block until write operations have been committed to the journal. Cannot be used in combination with FSync. Prior to MongoDB 2.6 this option was ignored if the server was running without journaling. Starting with MongoDB 2.6 write operations will fail with an exception if this option is used when the server is running without journaling.
If safe.FSync is true and the server is running without journaling, blocks until the server has synced all data files to disk. If the server is running with journaling, this acts the same as the J option, blocking until write operations have been committed to the journal. Cannot be used in combination with J.
Since MongoDB 2.0.0, the safe.J option can also be used instead of FSync to force the server to wait for a group commit in case journaling is enabled. The option has no effect if the server has journaling disabled.
For example, the following statement will make the session check for errors, without imposing further constraints:
session.SetSafe(&mgo.Safe{})
The following statement will force the server to wait for a majority of members of a replica set to return (MongoDB 2.0+ only):
session.SetSafe(&mgo.Safe{WMode: "majority"})
The following statement, on the other hand, ensures that at least two servers have flushed the change to disk before confirming the success of operations:
session.EnsureSafe(&mgo.Safe{W: 2, FSync: true})
The following statement, on the other hand, disables the verification of errors entirely:
session.SetSafe(nil)
See also the EnsureSafe method.
Relevant documentation:
https://docs.mongodb.com/manual/reference/read-concern/
http://www.mongodb.org/display/DOCS/getLastError+Command
http://www.mongodb.org/display/DOCS/Verifying+Propagation+of+Writes+with+getLastError
http://www.mongodb.org/display/DOCS/Data+Center+Awareness
func (*Session) SetSocketTimeout ¶
SetSocketTimeout is deprecated - use DialInfo read/write timeouts instead.
SetSocketTimeout sets the amount of time to wait for a non-responding socket to the database before it is forcefully closed.
The default timeout is 1 minute.
func (*Session) SetSyncTimeout ¶
SetSyncTimeout sets the amount of time an operation with this session will wait before returning an error in case a connection to a usable server can't be established. Set it to zero to wait forever. The default value is 7 seconds.
type Stats ¶
type Stats struct {
	Clusters            int
	MasterConns         int
	SlaveConns          int
	SentOps             int
	ReceivedOps         int
	ReceivedDocs        int
	SocketsAlive        int
	SocketsInUse        int
	SocketRefs          int
	TimesSocketAcquired int
	TimesWaitedForPool  int
	TotalPoolWaitTime   time.Duration
	PoolTimeouts        int
}
Stats holds info on the database state
Relevant documentation:
https://docs.mongodb.com/manual/reference/command/serverStatus/
TODO outdated fields ?
type User ¶
type User struct {
	// Username is how the user identifies itself to the system.
	Username string `bson:"user"`

	// Password is the plaintext password for the user. If set,
	// the UpsertUser method will hash it into PasswordHash and
	// unset it before the user is added to the database.
	Password string `bson:",omitempty"`

	// PasswordHash is the MD5 hash of Username+":mongo:"+Password.
	PasswordHash string `bson:"pwd,omitempty"`

	// CustomData holds arbitrary data admins decide to associate
	// with this user, such as the full name or employee id.
	CustomData interface{} `bson:"customData,omitempty"`

	// Roles indicates the set of roles the user will be provided.
	// See the Role constants.
	Roles []Role `bson:"roles"`

	// OtherDBRoles allows assigning roles in other databases from
	// user documents inserted in the admin database. This field
	// only works in the admin database.
	OtherDBRoles map[string][]Role `bson:"otherDBRoles,omitempty"`

	// UserSource indicates where to look for this user's credentials.
	// It may be set to a database name, or to "$external" for
	// consulting an external resource such as Kerberos. UserSource
	// must not be set if Password or PasswordHash are present.
	//
	// WARNING: This setting was only ever supported in MongoDB 2.4,
	// and is now obsolete.
	UserSource string `bson:"userSource,omitempty"`
}
User represents a MongoDB user.
Relevant documentation:
http://docs.mongodb.org/manual/reference/privilege-documents/
http://docs.mongodb.org/manual/reference/user-privileges/
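A sketch of creating a user with this struct via the UpsertUser method on Database (the username, password, and database name are illustrative; the plaintext Password is hashed into PasswordHash before the document is stored):

```go
user := &mgo.User{
	Username: "appuser", // illustrative values
	Password: "secret",  // hashed by UpsertUser before storage
	Roles:    []mgo.Role{mgo.RoleReadWrite},
}
if err := session.DB("mydb").UpsertUser(user); err != nil {
	log.Fatal(err)
}
```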
Source Files ¶
Directories ¶
Path | Synopsis
---|---
bson | Package bson is an implementation of the BSON specification for Go:
internal |
internal/json | Package json implements encoding and decoding of JSON as defined in RFC 4627.
internal/scram | Package scram implements a SCRAM-{SHA-1,etc} client per RFC5802.
txn | Package txn implements support for multi-document transactions.