dynamo


README

dynamo

import "github.com/guregu/dynamo"

dynamo is an expressive DynamoDB client for Go, with an easy but powerful API. dynamo integrates with the official AWS SDK.

This library is stable and versioned with Go modules.

Example

package main

import (
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/guregu/dynamo"
)

// Use struct tags much like the standard JSON library,
// you can embed anonymous structs too!
type widget struct {
	UserID int       // Hash key, a.k.a. partition key
	Time   time.Time // Range key, a.k.a. sort key

	Msg       string              `dynamo:"Message"`    // Change name in the database
	Count     int                 `dynamo:",omitempty"` // Omits if zero value
	Children  []widget            // List of maps
	Friends   []string            `dynamo:",set"` // Sets
	Set       map[string]struct{} `dynamo:",set"` // Map sets, too!
	SecretKey string              `dynamo:"-"`    // Ignored
}


func main() {
	sess := session.Must(session.NewSession())
	db := dynamo.New(sess, &aws.Config{Region: aws.String("us-west-2")})
	table := db.Table("Widgets")

	// put item
	w := widget{UserID: 613, Time: time.Now(), Msg: "hello"}
	err := table.Put(w).Run()

	// get the same item
	var result widget
	err = table.Get("UserID", w.UserID).
		Range("Time", dynamo.Equal, w.Time).
		One(&result)

	// get all items
	var results []widget
	err = table.Scan().All(&results)

	// use placeholders in filter expressions (see Expressions section below)
	var filtered []widget
	err = table.Scan().Filter("'Count' > ?", 10).All(&filtered)
}

Expressions

dynamo will help you write expressions used to filter results in queries and scans, and add conditions to puts and deletes.

Attribute names may be written as-is if they are not reserved words, or escaped with single quotes (''). You may also use dollar signs ($) as placeholders for attribute names and list indexes. DynamoDB has a very large number of reserved words, so it may be a good idea to just escape everything.

Question marks (?) are used as placeholders for attribute values. DynamoDB doesn't have value literals, so you need to substitute everything.

Please see the DynamoDB reference on expressions for more information. The Comparison Operator and Function Reference is also handy.

// Using single quotes to escape a reserved word, and a question mark as a value placeholder.
// Finds all items whose date is greater than or equal to lastUpdate.
table.Scan().Filter("'Date' >= ?", lastUpdate).All(&results)

// Using dollar signs as a placeholder for attribute names.
// Deletes the item with an ID of 42 if its score is at or below the cutoff, and its name starts with G.
table.Delete("ID", 42).If("Score <= ? AND begins_with($, ?)", cutoff, "Name", "G").Run()

// Put a new item, only if it doesn't already exist.
table.Put(item{ID: 42}).If("attribute_not_exists(ID)").Run()

Encoding support

dynamo automatically handles the following interfaces:

  • dynamo.Marshaler and dynamo.Unmarshaler
  • dynamo.ItemMarshaler and dynamo.ItemUnmarshaler
  • encoding.TextMarshaler and encoding.TextUnmarshaler

This allows you to define custom encodings and provides built-in support for types such as time.Time.

Struct tags and fields

dynamo handles struct tags similarly to the standard library encoding/json package. It uses the dynamo struct tag, which takes the form dynamo:"attributeName,option1,option2,etc". You can omit the attribute name to use the default: dynamo:",option1,etc".

Renaming

By default, dynamo will use the name of each field as the name of the DynamoDB attribute it corresponds to. You can specify a different name with the dynamo struct tag like so: dynamo:"other_name_goes_here". If two fields have the same name, dynamo will prioritize the higher-level field.

Omission

If you set a field's name to "-" (as in dynamo:"-") that field will be ignored. It will be omitted when marshaling and ignored when unmarshaling. Fields that start with a lowercase letter are also ignored. However, embedding a struct whose type name starts with a lowercase letter but which contains exported (uppercase) fields is OK.

Sets

By default, slices will be marshaled as DynamoDB lists. To marshal a field to sets instead, use the dynamo:",set" option. Empty sets will be automatically omitted.

You can use maps as sets too. The following types are supported:

  • []T
  • map[T]struct{}
  • map[T]bool

where T represents any type that marshals into a DynamoDB string, number, or binary value.

Note that the order of objects within a set is undefined.

Omitting empty values (omitempty)

Using the omitempty option (as in dynamo:",omitempty") will omit the field if it has a zero value (e.g. an empty string, 0, or a nil pointer). Structs are supported.

It also supports the isZeroer interface below:

type isZeroer interface {
	IsZero() bool
}

If IsZero() returns true, the field will be omitted. This gives us built-in support for time.Time.

You can also use the dynamo:",omitemptyelem" option to omit empty values inside of slices.

Automatic omission

Some values will be automatically omitted.

  • Empty strings
  • Empty sets
  • Empty structs
  • Nil pointers and interfaces
  • Types that implement encoding.TextMarshaler and whose MarshalText method returns a zero-length or nil slice.
  • Zero-length binary (byte slices)

To override this behavior, use the dynamo:",allowempty" flag. Not all empty types can be stored by DynamoDB. For example, empty sets will still be omitted.

To override auto-omit behavior for children of a map, for example map[string]string, use the dynamo:",allowemptyelem" option.
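
For example, a minimal sketch of both options (the struct and field names here are hypothetical):

type form struct {
	Note   string            `dynamo:",allowempty"`     // an empty string is stored instead of being omitted
	Labels map[string]string `dynamo:",allowemptyelem"` // empty string values inside the map are kept
}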

Using the NULL type

DynamoDB has a special NULL type to represent null values. In general, this library avoids marshaling things as NULL and prefers to omit those values instead. If you want empty/nil values to marshal to NULL, use the dynamo:",null" option.

Unix time

By default, time.Time will marshal to a string because it implements encoding.TextMarshaler.

If you want time.Time to marshal as a Unix time value (number of seconds since the Unix epoch), you can use the dynamo:",unixtime" option. This is useful for TTL fields, which must be Unix time.
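
For example, a hypothetical item with a TTL attribute might look like this (the Expires attribute name is an assumption; the table's TTL attribute itself is configured with Table.UpdateTTL):

type session struct {
	ID      string    `dynamo:"ID,hash"`
	Expires time.Time `dynamo:",unixtime"` // stored as Unix seconds, suitable as the table's TTL attribute
}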

Creating tables

You can use struct tags to specify hash keys, range keys, and indexes when creating a table.

For example:

type UserAction struct {
	UserID string    `dynamo:"ID,hash" index:"Seq-ID-index,range"`
	Time   time.Time `dynamo:",range"`
	Seq    int64     `localIndex:"ID-Seq-index,range" index:"Seq-ID-index,hash"`
	UUID   string    `index:"UUID-index,hash"`
}

This creates a table with the primary hash key ID and range key Time. It creates two global secondary indices called UUID-index and Seq-ID-index, and a local secondary index called ID-Seq-index.
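
To actually create the table, pass a struct like this to DB.CreateTable and run the request; a minimal sketch (the table name is arbitrary):

err := db.CreateTable("UserActions", UserAction{}).Run()
// or call Wait() instead of Run() to block until the table is ready for use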

Retrying

Requests that fail with certain errors (e.g. ThrottlingException) are automatically retried. Methods that take a context.Context will retry until the context is canceled. Methods without a context will use the RetryTimeout global variable, which can be changed; using context is recommended instead.
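
For example, a sketch of bounding retries with a context (the 10-second timeout is arbitrary and someTime stands in for a real sort key value):

ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()

var result widget
err := table.Get("UserID", 613).
	Range("Time", dynamo.Equal, someTime).
	OneWithContext(ctx, &result)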

Limiting or disabling retrying

The maximum number of retries can be configured via the MaxRetries field in the *aws.Config passed to dynamo.New(). A value of 0 will disable retrying. A value of -1 means unlimited and is the default (however, context or RetryTimeout will still apply).

db := dynamo.New(session, &aws.Config{
	MaxRetries: aws.Int(0), // disables automatic retrying
})
Custom retrying logic

If a custom request.Retryer is set via the Retryer field in *aws.Config, dynamo will delegate retrying entirely to it, taking precedence over other retrying settings. This allows you to have full control over all aspects of retrying.

Example using client.DefaultRetryer:

retryer := client.DefaultRetryer{
	NumMaxRetries:    10,
	MinThrottleDelay: 500 * time.Millisecond,
	MaxThrottleDelay: 30 * time.Second,
}
db := dynamo.New(session, &aws.Config{
	Retryer: retryer,
})

Compatibility with the official AWS library

dynamo has been in development since before the official AWS libraries were stable. We use a different encoder and decoder than the dynamodbattribute package. dynamo uses the dynamo struct tag instead of the dynamodbav struct tag, and we also prefer to automatically omit invalid values such as empty strings, whereas the dynamodbattribute package substitutes null values for them. Items that satisfy the dynamodbattribute.(Un)marshaler interfaces are compatible with both libraries.

In order to use dynamodbattribute's encoding facilities, you must wrap objects passed to dynamo with dynamo.AWSEncoding. Here is a quick example:

// Notice the use of the dynamodbav struct tag
type book struct {
	ID    int    `dynamodbav:"id"`
	Title string `dynamodbav:"title"`
}
// Putting an item
err := db.Table("Books").Put(dynamo.AWSEncoding(book{
	ID:    42,
	Title: "Principia Discordia",
})).Run()
// When getting an item you MUST pass a pointer to AWSEncoding!
var someBook book
err := db.Table("Books").Get("ID", 555).One(dynamo.AWSEncoding(&someBook))

Integration tests

By default, tests are run in offline mode. In order to run the integration tests, some environment variables need to be set.

To run the tests against DynamoDB Local:

# Use Docker to run DynamoDB local on port 8880
docker compose -f '.github/docker-compose.yml' up -d

# Run the tests with a fresh table
# The tables will be created automatically
# The '%' in the table name will be replaced with the current timestamp
DYNAMO_TEST_ENDPOINT='http://localhost:8880' \
	DYNAMO_TEST_REGION='local' \
	DYNAMO_TEST_TABLE='TestDB-%' \
	AWS_ACCESS_KEY_ID='dummy' \
	AWS_SECRET_ACCESS_KEY='dummy' \
	AWS_REGION='local' \
	go test -v -race ./... -cover -coverpkg=./...

License

BSD 2-Clause

Documentation

Overview

Package dynamo offers a rich DynamoDB client.

Constants

This section is empty.

Variables

var (
	// ErrNotFound is returned when no items could be found in Get or OldValue and similar operations.
	ErrNotFound = errors.New("dynamo: no item found")
	// ErrTooMany is returned when one item was requested, but the query returned multiple items.
	ErrTooMany = errors.New("dynamo: too many items")
)
var ErrNoInput = errors.New("dynamo: no input items")

ErrNoInput is returned when APIs that can take multiple inputs are run with zero inputs. For example, in a transaction with no operations.

var RetryTimeout = 1 * time.Minute

RetryTimeout defines the maximum amount of time that requests will attempt to automatically retry for. In other words, this is the maximum amount of time that dynamo operations will block. RetryTimeout is only considered by methods that do not take a context. Higher values are better when using tables with lower throughput.
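
For example, to allow operations without a context to retry for longer than the default:

dynamo.RetryTimeout = 5 * time.Minute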

Functions

func IsCondCheckFailed added in v1.17.0

func IsCondCheckFailed(err error) bool

IsCondCheckFailed returns true if the given error is a "conditional check failed" error. This corresponds with a ConditionalCheckFailedException in most APIs, or a TransactionCanceledException with a ConditionalCheckFailed cancellation reason in transactions.
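
For example, a sketch of detecting a failed conditional put (widget and w are from the README example above):

err := table.Put(w).If("attribute_not_exists(UserID)").Run()
if dynamo.IsCondCheckFailed(err) {
	// an item with this key already exists; handle the conflict
}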

func Marshal

func Marshal(v interface{}) (*dynamodb.AttributeValue, error)

Marshal converts the given value into a DynamoDB attribute value.

func MarshalItem

func MarshalItem(v interface{}) (map[string]*dynamodb.AttributeValue, error)

MarshalItem converts the given struct into a DynamoDB item.

func Unmarshal

func Unmarshal(av *dynamodb.AttributeValue, out interface{}) error

Unmarshal decodes a DynamoDB value into out, which must be a pointer.

func UnmarshalItem

func UnmarshalItem(item map[string]*dynamodb.AttributeValue, out interface{}) error

UnmarshalItem decodes a DynamoDB item into out, which must be a pointer.
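
For example, a minimal round trip through the raw item representation (using the widget type from the README example):

item, err := dynamo.MarshalItem(widget{UserID: 613, Time: time.Now(), Msg: "hello"})
if err != nil {
	// handle error
}

var w widget
err = dynamo.UnmarshalItem(item, &w)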

Types

type Batch

type Batch struct {
	// contains filtered or unexported fields
}

Batch stores the names of the hash key and range key for creating new batches.

func (Batch) Get

func (b Batch) Get(keys ...Keyed) *BatchGet

Get creates a new batch get item request with the given keys.

table.Batch("ID", "Month").
	Get(dynamo.Keys{1, "2015-10"}, dynamo.Keys{42, "2015-12"}, dynamo.Keys{42, "1992-02"}).
	All(&results)

func (Batch) Write

func (b Batch) Write() *BatchWrite

Write creates a new batch write request, to which puts and deletes can be added.
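
For example, a sketch of a mixed batch write (the key names and items reuse the widget example; oldTime is a placeholder value):

wrote, err := table.Batch("UserID", "Time").Write().
	Put(widget{UserID: 1, Time: time.Now()}, widget{UserID: 2, Time: time.Now()}).
	Delete(dynamo.Keys{613, oldTime}).
	Run()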

type BatchGet

type BatchGet struct {
	// contains filtered or unexported fields
}

BatchGet is a BatchGetItem operation.

func (*BatchGet) All

func (bg *BatchGet) All(out interface{}) error

All executes this request and unmarshals all results to out, which must be a pointer to a slice.

func (*BatchGet) AllWithContext added in v1.1.0

func (bg *BatchGet) AllWithContext(ctx context.Context, out interface{}) error

AllWithContext executes this request and unmarshals all results to out, which must be a pointer to a slice.

func (*BatchGet) And

func (bg *BatchGet) And(keys ...Keyed) *BatchGet

And adds more keys to be gotten from the default table. To get items from other tables, use BatchGet.From or BatchGet.FromRange.

func (*BatchGet) Consistent

func (bg *BatchGet) Consistent(on bool) *BatchGet

Consistent will, if on is true, make this batch use a strongly consistent read. Reads are eventually consistent by default. Strongly consistent reads are more resource-heavy than eventually consistent reads.

func (*BatchGet) ConsumedCapacity

func (bg *BatchGet) ConsumedCapacity(cc *ConsumedCapacity) *BatchGet

ConsumedCapacity will measure the throughput capacity consumed by this operation and add it to cc.

func (*BatchGet) From added in v1.22.0

func (bg *BatchGet) From(table Table, hashKey string, keys ...Keyed) *BatchGet

From adds more keys to be gotten from the given table. The given table's primary key must be a hash key (partition key) only. For tables with a range key (sort key) primary key, use BatchGet.FromRange.

func (*BatchGet) FromRange added in v1.22.0

func (bg *BatchGet) FromRange(table Table, hashKey, rangeKey string, keys ...Keyed) *BatchGet

FromRange adds more keys to be gotten from the given table. For tables without a range key (sort key) primary key, use BatchGet.From.

func (*BatchGet) Iter

func (bg *BatchGet) Iter() Iter

Iter returns a results iterator for this batch.

func (*BatchGet) IterWithTable added in v1.22.0

func (bg *BatchGet) IterWithTable(tablePtr *string) Iter

IterWithTable is like BatchGet.Iter, but will update the value pointed to by tablePtr after each iteration. This can be useful when getting from multiple tables to determine which table the latest item came from.

func (*BatchGet) Merge added in v1.22.0

func (bg *BatchGet) Merge(srcs ...*BatchGet) *BatchGet

Merge copies operations and settings from srcs to this batch get.

func (*BatchGet) Project added in v1.18.0

func (bg *BatchGet) Project(paths ...string) *BatchGet

Project limits the result attributes to the given paths. This will apply to all tables, but can be overridden by BatchGet.ProjectTable to set specific per-table projections.

func (*BatchGet) ProjectTable added in v1.22.0

func (bg *BatchGet) ProjectTable(table Table, paths ...string) *BatchGet

ProjectTable limits the result attributes to the given paths for the given table.

type BatchWrite

type BatchWrite struct {
	// contains filtered or unexported fields
}

BatchWrite is a BatchWriteItem operation.

func (*BatchWrite) ConsumedCapacity

func (bw *BatchWrite) ConsumedCapacity(cc *ConsumedCapacity) *BatchWrite

ConsumedCapacity will measure the throughput capacity consumed by this operation and add it to cc.

func (*BatchWrite) Delete

func (bw *BatchWrite) Delete(keys ...Keyed) *BatchWrite

Delete adds delete operations for the given keys to this batch, using the default table.

func (*BatchWrite) DeleteIn added in v1.22.0

func (bw *BatchWrite) DeleteIn(table Table, hashKey string, keys ...Keyed) *BatchWrite

DeleteIn adds delete operations for the given keys to this batch, using the given table. hashKey must be the name of the primary key hash (partition) attribute. This function is for tables with a hash key (partition key) only. For tables including a range key (sort key) primary key, use BatchWrite.DeleteInRange instead.

func (*BatchWrite) DeleteInRange added in v1.22.0

func (bw *BatchWrite) DeleteInRange(table Table, hashKey, rangeKey string, keys ...Keyed) *BatchWrite

DeleteInRange adds delete operations for the given keys to this batch, using the given table. hashKey must be the name of the primary key hash (partition) attribute, rangeKey must be the name of the primary key range (sort) attribute. This function is for tables with a hash key (partition key) and range key (sort key). For tables without a range key primary key, use BatchWrite.DeleteIn instead.

func (*BatchWrite) Merge added in v1.22.0

func (bw *BatchWrite) Merge(srcs ...*BatchWrite) *BatchWrite

Merge copies operations from srcs to this batch.

func (*BatchWrite) Put

func (bw *BatchWrite) Put(items ...interface{}) *BatchWrite

Put adds put operations for items to this batch using the default table.

func (*BatchWrite) PutIn added in v1.22.0

func (bw *BatchWrite) PutIn(table Table, items ...interface{}) *BatchWrite

PutIn adds put operations for items to this batch using the given table. This can be useful for writing to multiple different tables.

func (*BatchWrite) Run

func (bw *BatchWrite) Run() (wrote int, err error)

Run executes this batch. For batches with more than 25 operations, an error could indicate that some records have been written and some have not. Consult the wrote return amount to figure out which operations have succeeded.

func (*BatchWrite) RunWithContext

func (bw *BatchWrite) RunWithContext(ctx context.Context) (wrote int, err error)

RunWithContext executes this batch. For batches with more than 25 operations, an error could indicate that some records have been written and some have not. Consult the wrote return amount to figure out which operations have succeeded.

type Coder

type Coder interface {
	Marshaler
	Unmarshaler
}

func AWSEncoding

func AWSEncoding(v interface{}) Coder

AWSEncoding wraps an object, forcing it to use AWS's official dynamodbattribute package for encoding and decoding. This allows you to use the "dynamodbav" struct tags. When decoding, v must be a pointer.

type ConditionCheck added in v1.1.0

type ConditionCheck struct {
	// contains filtered or unexported fields
}

ConditionCheck represents a condition for a write transaction to succeed. It is used along with WriteTx.Check.

func (*ConditionCheck) If added in v1.1.0

func (check *ConditionCheck) If(expr string, args ...interface{}) *ConditionCheck

If specifies a conditional expression for this condition check to succeed. Use single quotes to specify reserved names inline (like 'Count'). Use the placeholder ? within the expression to substitute values, and use $ for names. You need to use quoted or placeholder names when the name is a reserved word in DynamoDB. Multiple calls to If will be combined with AND.

func (*ConditionCheck) IfExists added in v1.1.0

func (check *ConditionCheck) IfExists() *ConditionCheck

IfExists sets this check to succeed if the item exists.

func (*ConditionCheck) IfNotExists added in v1.1.0

func (check *ConditionCheck) IfNotExists() *ConditionCheck

IfNotExists sets this check to succeed if the item does not exist.

func (*ConditionCheck) Range added in v1.1.0

func (check *ConditionCheck) Range(rangeKey string, value interface{}) *ConditionCheck

Range specifies the name and value of the range key for this item.

type ConsumedCapacity

type ConsumedCapacity struct {
	// Total is the total number of capacity units consumed during this operation.
	Total float64
	// Read is the total number of read capacity units consumed during this operation.
	// This seems to be only set for transactions.
	Read float64
	// Write is the total number of write capacity units consumed during this operation.
	// This seems to be only set for transactions.
	Write float64
	// GSI is a map of Global Secondary Index names to total consumed capacity units.
	GSI map[string]float64
	// GSIRead is a map of Global Secondary Index names to consumed read capacity units.
	// This seems to be only set for transactions.
	GSIRead map[string]float64
	// GSIWrite is a map of Global Secondary Index names to consumed write capacity units.
	// This seems to be only set for transactions.
	GSIWrite map[string]float64
	// LSI is a map of Local Secondary Index names to total consumed capacity units.
	LSI map[string]float64
	// LSIRead is a map of Local Secondary Index names to consumed read capacity units.
	// This seems to be only set for transactions.
	LSIRead map[string]float64
	// LSIWrite is a map of Local Secondary Index names to consumed write capacity units.
	// This seems to be only set for transactions.
	LSIWrite map[string]float64
	// Table is the amount of total throughput consumed by the table.
	Table float64
	// TableRead is the amount of read throughput consumed by the table.
	// This seems to be only set for transactions.
	TableRead float64
	// TableWrite is the amount of write throughput consumed by the table.
	// This seems to be only set for transactions.
	TableWrite float64
	// TableName is the name of the table affected by this operation.
	TableName string
}

ConsumedCapacity represents the amount of throughput capacity consumed during an operation.
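
For example, a sketch of measuring the capacity consumed by a scan (widget is from the README example):

var cc dynamo.ConsumedCapacity
var results []widget
err := table.Scan().ConsumedCapacity(&cc).All(&results)
// cc.Total now holds the total capacity units consumed by the scan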

type CreateTable

type CreateTable struct {
	// contains filtered or unexported fields
}

CreateTable is a request to create a new table. See: http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_CreateTable.html

func (*CreateTable) Index

func (ct *CreateTable) Index(index Index) *CreateTable

Index specifies an index to add to this table.

func (*CreateTable) OnDemand added in v1.2.0

func (ct *CreateTable) OnDemand(enabled bool) *CreateTable

OnDemand, if enabled is true, creates the table with on-demand (pay per request) billing mode. On-demand mode is disabled by default.

func (*CreateTable) Project

func (ct *CreateTable) Project(index string, projection IndexProjection, includeAttribs ...string) *CreateTable

Project specifies the projection type for the given index. When using IncludeProjection, you must specify the additional attributes to include via includeAttribs.

func (*CreateTable) Provision

func (ct *CreateTable) Provision(readUnits, writeUnits int64) *CreateTable

Provision specifies the provisioned read and write capacity for this table. If Provision isn't called and on-demand mode is disabled, the table will be created with 1 unit each.

func (*CreateTable) ProvisionIndex

func (ct *CreateTable) ProvisionIndex(index string, readUnits, writeUnits int64) *CreateTable

ProvisionIndex specifies the provisioned read and write capacity for the given global secondary index. Local secondary indices share their capacity with the table.

func (*CreateTable) Run

func (ct *CreateTable) Run() error

Run creates this table or returns an error.

func (*CreateTable) RunWithContext

func (ct *CreateTable) RunWithContext(ctx context.Context) error

RunWithContext creates this table or returns an error.

func (*CreateTable) SSEEncryption added in v1.13.0

func (ct *CreateTable) SSEEncryption(enabled bool, keyID string, sseType SSEType) *CreateTable

SSEEncryption specifies the server side encryption for this table. Encryption is disabled by default.

func (*CreateTable) Stream

func (ct *CreateTable) Stream(view StreamView) *CreateTable

Stream enables DynamoDB Streams for this table with the specified type of view. Streams are disabled by default.

func (*CreateTable) Tag added in v1.4.0

func (ct *CreateTable) Tag(key, value string) *CreateTable

Tag specifies a metadata tag for this table. Multiple tags may be specified.

func (*CreateTable) Wait added in v1.15.0

func (ct *CreateTable) Wait() error

Wait creates this table and blocks until it exists and is ready to use.

func (*CreateTable) WaitWithContext added in v1.15.0

func (ct *CreateTable) WaitWithContext(ctx context.Context) error

WaitWithContext creates this table and blocks until it exists and is ready to use.

type DB

type DB struct {
	// contains filtered or unexported fields
}

DB is a DynamoDB client.

func New

func New(p client.ConfigProvider, cfgs ...*aws.Config) *DB

New creates a new client with the given configuration. If Retryer is configured, retrying responsibility will be delegated to it. If MaxRetries is configured, the maximum number of retry attempts will be limited to the specified value (0 for no retrying, -1 for default behavior of unlimited retries). MaxRetries is ignored if Retryer is set.

func NewFromIface

func NewFromIface(client dynamodbiface.DynamoDBAPI) *DB

NewFromIface creates a new client with the given interface.

func (*DB) Client

func (db *DB) Client() dynamodbiface.DynamoDBAPI

Client returns this DB's internal client used to make API requests.

func (*DB) CreateTable

func (db *DB) CreateTable(name string, from interface{}) *CreateTable

CreateTable begins a new operation to create a table with the given name. The second parameter must be a struct with appropriate hash and range key struct tags for the primary key and all indices.

An example of a from struct follows:

type UserAction struct {
	UserID string    `dynamo:"ID,hash" index:"Seq-ID-index,range"`
	Time   time.Time `dynamo:",range"`
	Seq    int64     `localIndex:"ID-Seq-index,range" index:"Seq-ID-index,hash"`
	UUID   string    `index:"UUID-index,hash"`
}

This creates a table with the primary hash key ID and range key Time. It creates two global secondary indices called UUID-index and Seq-ID-index, and a local secondary index called ID-Seq-index.

func (*DB) GetTx added in v1.1.0

func (db *DB) GetTx() *GetTx

GetTx begins a new get transaction.

func (*DB) ListTables

func (db *DB) ListTables() *ListTables

ListTables begins a new request to list all tables.

func (*DB) Table

func (db *DB) Table(name string) Table

Table returns a Table handle specified by name.

func (*DB) WriteTx added in v1.1.0

func (db *DB) WriteTx() *WriteTx

WriteTx begins a new write transaction.

type Delete

type Delete struct {
	// contains filtered or unexported fields
}

Delete is a request to delete an item. See: http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DeleteItem.html

func (*Delete) ConsumedCapacity

func (d *Delete) ConsumedCapacity(cc *ConsumedCapacity) *Delete

ConsumedCapacity will measure the throughput capacity consumed by this operation and add it to cc.

func (*Delete) If

func (d *Delete) If(expr string, args ...interface{}) *Delete

If specifies a conditional expression for this delete to succeed. Use single quotes to specify reserved names inline (like 'Count'). Use the placeholder ? within the expression to substitute values, and use $ for names. You need to use quoted or placeholder names when the name is a reserved word in DynamoDB. Multiple calls to If will be combined with AND.

func (*Delete) OldValue

func (d *Delete) OldValue(out interface{}) error

OldValue executes this delete request, unmarshaling the previous value to out. Returns ErrNotFound if there was no previous value.
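
For example, a sketch of deleting an item and inspecting its previous value (someTime is a placeholder sort key value):

var old widget
err := table.Delete("UserID", 613).Range("Time", someTime).OldValue(&old)
if err == dynamo.ErrNotFound {
	// nothing was deleted because no item existed at this key
}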

func (*Delete) OldValueWithContext

func (d *Delete) OldValueWithContext(ctx context.Context, out interface{}) error

func (*Delete) Range

func (d *Delete) Range(name string, value interface{}) *Delete

Range specifies the range key (a.k.a. sort key) to delete. Name is the name of the range key. Value is the value of the range key.

func (*Delete) Run

func (d *Delete) Run() error

Run executes this delete request.

func (*Delete) RunWithContext

func (d *Delete) RunWithContext(ctx context.Context) error

type DeleteTable

type DeleteTable struct {
	// contains filtered or unexported fields
}

DeleteTable is a request to delete a table. See: http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DeleteTable.html

func (*DeleteTable) Run

func (dt *DeleteTable) Run() error

Run executes this request and deletes the table.

func (*DeleteTable) RunWithContext

func (dt *DeleteTable) RunWithContext(ctx context.Context) error

RunWithContext executes this request and deletes the table.

func (*DeleteTable) Wait added in v1.15.0

func (dt *DeleteTable) Wait() error

Wait executes this request and blocks until the table is finished deleting.

func (*DeleteTable) WaitWithContext added in v1.15.0

func (dt *DeleteTable) WaitWithContext(ctx context.Context) error

WaitWithContext executes this request and blocks until the table is finished deleting.

type DescribeTTL added in v1.5.0

type DescribeTTL struct {
	// contains filtered or unexported fields
}

DescribeTTL is a request to obtain details about a table's time to live configuration.

func (*DescribeTTL) Run added in v1.5.0

func (d *DescribeTTL) Run() (TTLDescription, error)

Run executes this request and returns details about time to live, or an error.

func (*DescribeTTL) RunWithContext added in v1.5.0

func (d *DescribeTTL) RunWithContext(ctx context.Context) (TTLDescription, error)

RunWithContext executes this request and returns details about time to live, or an error.

type DescribeTable

type DescribeTable struct {
	// contains filtered or unexported fields
}

DescribeTable is a request for information about a table and its indexes. See: http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DescribeTable.html

func (*DescribeTable) Run

func (dt *DescribeTable) Run() (Description, error)

Run executes this request and describes the table.

func (*DescribeTable) RunWithContext

func (dt *DescribeTable) RunWithContext(ctx context.Context) (Description, error)

type Description

type Description struct {
	Name    string
	ARN     string
	Status  Status
	Created time.Time

	// Attribute name of the hash key (a.k.a. partition key).
	HashKey     string
	HashKeyType KeyType
	// Attribute name of the range key (a.k.a. sort key) or blank if nonexistent.
	RangeKey     string
	RangeKeyType KeyType

	// Provisioned throughput for this table.
	Throughput Throughput
	// OnDemand is true if on-demand (pay per request) billing mode is enabled.
	OnDemand bool

	// The number of items of the table, updated every 6 hours.
	Items int64
	// The size of this table in bytes, updated every 6 hours.
	Size int64

	// Global secondary indexes.
	GSI []Index
	// Local secondary indexes.
	LSI []Index

	StreamEnabled     bool
	StreamView        StreamView
	LatestStreamARN   string
	LatestStreamLabel string

	SSEDescription SSEDescription
}

Description contains information about a table.

func (Description) Active

func (d Description) Active() bool

type ExpressionLiteral added in v1.14.0

type ExpressionLiteral struct {
	// Expression is a raw DynamoDB expression.
	Expression string
	// AttributeNames is a map of placeholders (such as #foo) to attribute names.
	AttributeNames map[string]*string
	// AttributeValues is a map of placeholders (such as :bar) to attribute values.
	AttributeValues map[string]*dynamodb.AttributeValue
}

ExpressionLiteral is a raw DynamoDB expression. Its fields are equivalent to FilterExpression (and similar), ExpressionAttributeNames, and ExpressionAttributeValues in the DynamoDB API. This can be passed to any function that takes an expression, as either $ or ?. Your placeholders will be automatically prefixed to avoid clobbering regular placeholder substitution.

dynamo provides many convenience functions around expressions to avoid having to use this. However, this can be useful when you need to handle complex dynamic expressions.

type GetTx added in v1.1.0

type GetTx struct {
	// contains filtered or unexported fields
}

GetTx is a transaction to retrieve items. It can contain up to 100 operations and works across multiple tables. GetTx is analogous to TransactGetItems in DynamoDB's API. See: https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_TransactGetItems.html

func (*GetTx) All added in v1.1.0

func (tx *GetTx) All(out interface{}) error

All executes this transaction and unmarshals every value to out, which must be a pointer to a slice.

func (*GetTx) AllWithContext added in v1.1.0

func (tx *GetTx) AllWithContext(ctx context.Context, out interface{}) error

AllWithContext executes this transaction and unmarshals every value to out, which must be a pointer to a slice.

func (*GetTx) ConsumedCapacity added in v1.1.0

func (tx *GetTx) ConsumedCapacity(cc *ConsumedCapacity) *GetTx

ConsumedCapacity will measure the throughput capacity consumed by this transaction and add it to cc.

func (*GetTx) Get added in v1.1.0

func (tx *GetTx) Get(q *Query) *GetTx

Get adds a get request to this transaction.

func (*GetTx) GetOne added in v1.1.0

func (tx *GetTx) GetOne(q *Query, out interface{}) *GetTx

GetOne adds a get request to this transaction, and specifies out to which the results are marshaled. Out must be a pointer. You can use this multiple times in one transaction.
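
For example, a sketch of a get transaction that retrieves two items from the same table (t1 and t2 are placeholder sort key values):

tx := db.GetTx()
var a, b widget
tx.GetOne(table.Get("UserID", 613).Range("Time", dynamo.Equal, t1), &a)
tx.GetOne(table.Get("UserID", 42).Range("Time", dynamo.Equal, t2), &b)
err := tx.Run()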

func (*GetTx) Run added in v1.1.0

func (tx *GetTx) Run() error

Run executes this transaction and unmarshals everything specified by GetOne.

func (*GetTx) RunWithContext added in v1.1.0

func (tx *GetTx) RunWithContext(ctx context.Context) error

RunWithContext executes this transaction and unmarshals everything specified by GetOne.

type Index

type Index struct {
	Name        string
	ARN         string
	Status      Status
	Backfilling bool // only for GSI

	// Local is true when this index is a local secondary index, otherwise it is a global secondary index.
	Local bool

	// Attribute name of the hash key (a.k.a. partition key).
	HashKey     string
	HashKeyType KeyType
	// Attribute name of the range key (a.k.a. sort key) or blank if nonexistent.
	RangeKey     string
	RangeKeyType KeyType

	// The provisioned throughput for this index.
	Throughput Throughput

	Items int64
	Size  int64

	ProjectionType IndexProjection
	// Non-key attributes for this index's projection (if ProjectionType is IncludeProjection).
	ProjectionAttribs []string
}

type IndexProjection

type IndexProjection string

IndexProjection determines which attributes are mirrored into indices.

var (
	// Only the key attributes of the modified item are written to the stream.
	KeysOnlyProjection IndexProjection = dynamodb.ProjectionTypeKeysOnly
	// All of the table attributes are projected into the index.
	AllProjection IndexProjection = dynamodb.ProjectionTypeAll
	// Only the specified table attributes are projected into the index.
	IncludeProjection IndexProjection = dynamodb.ProjectionTypeInclude
)

type Item added in v1.22.0

type Item = map[string]*dynamodb.AttributeValue

Item is a type alias for the raw DynamoDB item type.

type ItemMarshaler added in v1.8.0

type ItemMarshaler interface {
	MarshalDynamoItem() (map[string]*dynamodb.AttributeValue, error)
}

ItemMarshaler is the interface implemented by objects that can marshal themselves into an Item (a map of strings to AttributeValues).

type ItemUnmarshaler added in v1.8.0

type ItemUnmarshaler interface {
	UnmarshalDynamoItem(item map[string]*dynamodb.AttributeValue) error
}

ItemUnmarshaler is the interface implemented by objects that can unmarshal an Item (a map of strings to AttributeValues) into themselves.

type Iter

type Iter interface {
	// Next tries to unmarshal the next result into out.
	// Returns false when it is complete or if it runs into an error.
	Next(out interface{}) bool
	// NextWithContext tries to unmarshal the next result into out.
	// Returns false when it is complete or if it runs into an error.
	NextWithContext(ctx context.Context, out interface{}) bool
	// Err returns the error encountered, if any.
	// You should check this after Next is finished.
	Err() error
}

Iter is an iterator for request results.
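
For example, a sketch of iterating over query results one item at a time (widget is from the README example):

iter := table.Get("UserID", 613).Iter()
var w widget
for iter.Next(&w) {
	// process w
}
if err := iter.Err(); err != nil {
	// handle error
}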

type KeyType

type KeyType string

KeyType is used to specify the type of hash and range keys for tables and indexes.

const (
	BinaryType KeyType = "B"
	StringType KeyType = "S"
	NumberType KeyType = "N"
	NoneType   KeyType = ""
)

Key types for table and index hash/range keys.

type Keyed

type Keyed interface {
	HashKey() interface{}
	RangeKey() interface{}
}

Keyed provides hash key and range key values.

type Keys

type Keys [2]interface{}

Keys provides an easy way to specify the hash and range keys.

table.Batch("ID", "Month").
	Get(dynamo.Keys{1, "2015-10"}, dynamo.Keys{42, "2015-12"}, dynamo.Keys{42, "1992-02"}).
	All(&results)

func (Keys) HashKey

func (k Keys) HashKey() interface{}

HashKey returns the hash key's value.

func (Keys) RangeKey

func (k Keys) RangeKey() interface{}

RangeKey returns the range key's value.

type ListTables

type ListTables struct {
	// contains filtered or unexported fields
}

ListTables is a request to list tables. See: http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_ListTables.html

func (*ListTables) All

func (lt *ListTables) All() ([]string, error)

All returns every table or an error.

func (*ListTables) AllWithContext

func (lt *ListTables) AllWithContext(ctx context.Context) ([]string, error)

AllWithContext returns every table or an error.

func (*ListTables) Iter

func (lt *ListTables) Iter() Iter

Iter returns an iterator of table names. This iterator's Next functions will only accept type *string as their out parameter.
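
For example, a sketch of listing table names with the iterator:

iter := db.ListTables().Iter()
var name string
for iter.Next(&name) {
	fmt.Println(name)
}
if err := iter.Err(); err != nil {
	// handle error
}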

type Marshaler

type Marshaler interface {
	MarshalDynamo() (*dynamodb.AttributeValue, error)
}

Marshaler is the interface implemented by objects that can marshal themselves into an AttributeValue.

type Operator

type Operator string

Operator is an operation to apply in key comparisons.

const (
	Equal          Operator = "EQ"
	NotEqual       Operator = "NE"
	Less           Operator = "LT"
	LessOrEqual    Operator = "LE"
	Greater        Operator = "GT"
	GreaterOrEqual Operator = "GE"
	BeginsWith     Operator = "BEGINS_WITH"
	Between        Operator = "BETWEEN"
)

Operators used for comparing against the range key in queries.

type Order

type Order bool

Order is used for specifying the order of results.

const (
	Ascending  Order = true  // ScanIndexForward = true
	Descending       = false // ScanIndexForward = false
)

Orders for sorting results.

type PagingIter

type PagingIter interface {
	Iter
	// LastEvaluatedKey returns a key that can be passed to StartFrom in Query or Scan.
	// Combined with SearchLimit, it is useful for paginating partial results.
	LastEvaluatedKey() PagingKey
}

PagingIter is an iterator of request results that can also return a key used for splitting results.

type PagingKey

type PagingKey map[string]*dynamodb.AttributeValue

PagingKey is a key used for splitting up partial results. Get a PagingKey from a PagingIter and pass it to StartFrom in Query or Scan.
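
For example, a sketch of paging through a scan in chunks of 100 items (widget is from the README example):

var page []widget
lastKey, err := table.Scan().SearchLimit(100).AllWithLastEvaluatedKey(&page)

// later, continue from where the previous scan stopped
var nextPage []widget
lastKey, err = table.Scan().SearchLimit(100).StartFrom(lastKey).AllWithLastEvaluatedKey(&nextPage)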

type ParallelIter added in v1.20.0

type ParallelIter interface {
	Iter
	// LastEvaluatedKeys returns each parallel segment's last evaluated key in order of segment number.
	// The slice will be the same size as the number of segments, and the keys can be nil.
	LastEvaluatedKeys() []PagingKey
}

ParallelIter is an iterator of combined request results from multiple iterators running in parallel.

type Put

type Put struct {
	// contains filtered or unexported fields
}

Put is a request to create or replace an item. See: http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_PutItem.html

func (*Put) ConsumedCapacity

func (p *Put) ConsumedCapacity(cc *ConsumedCapacity) *Put

ConsumedCapacity will measure the throughput capacity consumed by this operation and add it to cc.

func (*Put) If

func (p *Put) If(expr string, args ...interface{}) *Put

If specifies a conditional expression for this put to succeed. Use single quotes to specify reserved names inline (like 'Count'). Use the placeholder ? within the expression to substitute values, and use $ for names. You need to use quoted or placeholder names when the name is a reserved word in DynamoDB. Multiple calls to If will be combined with AND.

func (*Put) OldValue

func (p *Put) OldValue(out interface{}) error

OldValue executes this put, unmarshaling the previous value into out. Returns ErrNotFound if there was no previous value.

func (*Put) OldValueWithContext

func (p *Put) OldValueWithContext(ctx context.Context, out interface{}) error

OldValueWithContext executes this put, unmarshaling the previous value into out. Returns ErrNotFound if there was no previous value.

func (*Put) Run

func (p *Put) Run() error

Run executes this put.

func (*Put) RunWithContext

func (p *Put) RunWithContext(ctx context.Context) error

RunWithContext executes this put.

type Query

type Query struct {
	// contains filtered or unexported fields
}

Query is a request to get one or more items in a table. Query uses the DynamoDB query for requests for multiple items, and GetItem for one. See: http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html and http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_GetItem.html

func (*Query) All

func (q *Query) All(out interface{}) error

All executes this request and unmarshals all results to out, which must be a pointer to a slice.

func (*Query) AllWithContext

func (q *Query) AllWithContext(ctx context.Context, out interface{}) error

func (*Query) AllWithLastEvaluatedKey

func (q *Query) AllWithLastEvaluatedKey(out interface{}) (PagingKey, error)

AllWithLastEvaluatedKey executes this request and unmarshals all results to out, which must be a pointer to a slice. This returns a PagingKey you can use with StartFrom to split up results.

func (*Query) AllWithLastEvaluatedKeyContext

func (q *Query) AllWithLastEvaluatedKeyContext(ctx context.Context, out interface{}) (PagingKey, error)

func (*Query) Consistent

func (q *Query) Consistent(on bool) *Query

Consistent will, if on is true, make this query a strongly consistent read. Queries are eventually consistent by default. Strongly consistent reads are more resource-heavy than eventually consistent reads.

func (*Query) ConsumedCapacity

func (q *Query) ConsumedCapacity(cc *ConsumedCapacity) *Query

ConsumedCapacity will measure the throughput capacity consumed by this operation and add it to cc.

func (*Query) Count

func (q *Query) Count() (int64, error)

Count executes this request, returning the number of results.

func (*Query) CountWithContext

func (q *Query) CountWithContext(ctx context.Context) (int64, error)

func (*Query) Filter

func (q *Query) Filter(expr string, args ...interface{}) *Query

Filter takes an expression that all results will be evaluated against. Use single quotes to specify reserved names inline (like 'Count'). Use the placeholder ? within the expression to substitute values, and use $ for names. You need to use quoted or placeholder names when the name is a reserved word in DynamoDB. Multiple calls to Filter will be combined with AND.

func (*Query) Index

func (q *Query) Index(name string) *Query

Index specifies the name of the index that this query will operate on.

func (*Query) Iter

func (q *Query) Iter() PagingIter

Iter returns a results iterator for this request.

func (*Query) Limit

func (q *Query) Limit(limit int64) *Query

Limit specifies the maximum amount of results to return.

func (*Query) One

func (q *Query) One(out interface{}) error

One executes this query and retrieves a single result, unmarshaling the result to out.

func (*Query) OneWithContext

func (q *Query) OneWithContext(ctx context.Context, out interface{}) error

func (*Query) Order

func (q *Query) Order(order Order) *Query

Order specifies the desired result order. Requires a range key (a.k.a. sort key) to be specified.

func (*Query) Project

func (q *Query) Project(paths ...string) *Query

Project limits the result attributes to the given paths.

func (*Query) ProjectExpr

func (q *Query) ProjectExpr(expr string, args ...interface{}) *Query

ProjectExpr limits the result attributes to the given expression. Use single quotes to specify reserved names inline (like 'Count'). Use the placeholder ? within the expression to substitute values, and use $ for names. You need to use quoted or placeholder names when the name is a reserved word in DynamoDB.

func (*Query) Range

func (q *Query) Range(name string, op Operator, values ...interface{}) *Query

Range specifies the range key (a.k.a. sort key) or keys to get. For single item requests using One, op must be Equal. Name is the name of the range key. Op specifies the operator to use when comparing values.

func (*Query) RequestLimit added in v1.23.0

func (q *Query) RequestLimit(limit int) *Query

RequestLimit specifies the maximum amount of requests to make against DynamoDB's API. A limit of zero or less means unlimited requests.

func (*Query) SearchLimit

func (q *Query) SearchLimit(limit int64) *Query

SearchLimit specifies the maximum amount of results to examine. If a filter is not specified, the number of results will be limited. If a filter is specified, the number of results to consider for filtering will be limited. SearchLimit > 0 implies RequestLimit(1).

func (*Query) StartFrom

func (q *Query) StartFrom(key PagingKey) *Query

StartFrom makes this query continue from a previous one. Use Query.Iter's LastEvaluatedKey.

type SSEDescription added in v1.13.0

type SSEDescription struct {
	InaccessibleEncryptionDateTime time.Time
	KMSMasterKeyArn                string
	SSEType                        SSEType
	Status                         string
}

type SSEType added in v1.13.0

type SSEType string

SSEType is used to specify the type of server side encryption to use on a table

const (
	SSETypeAES256 SSEType = "AES256"
	SSETypeKMS    SSEType = "KMS"
)

Possible SSE types for tables

type Scan

type Scan struct {
	// contains filtered or unexported fields
}

Scan is a request to scan all the data in a table. See: http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Scan.html

func (*Scan) All

func (s *Scan) All(out interface{}) error

All executes this request and unmarshals all results to out, which must be a pointer to a slice.

func (*Scan) AllParallel added in v1.20.0

func (s *Scan) AllParallel(ctx context.Context, segments int64, out interface{}) error

AllParallel executes this request by running the given number of segments in parallel, then unmarshaling all results to out, which must be a pointer to a slice.
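
For example, a sketch of scanning a whole table with 8 parallel segments (widget is from the README example; the segment count is arbitrary):

var results []widget
err := table.Scan().AllParallel(context.Background(), 8, &results)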

func (*Scan) AllParallelStartFrom added in v1.20.0

func (s *Scan) AllParallelStartFrom(ctx context.Context, keys []PagingKey, out interface{}) ([]PagingKey, error)

AllParallelStartFrom executes this request by continuing parallel scans from the given LastEvaluatedKeys, then unmarshaling all results to out, which must be a pointer to a slice. Returns a new slice of LastEvaluatedKeys after the scan finishes.

func (*Scan) AllParallelWithLastEvaluatedKeys added in v1.20.0

func (s *Scan) AllParallelWithLastEvaluatedKeys(ctx context.Context, segments int64, out interface{}) ([]PagingKey, error)

AllParallelWithLastEvaluatedKeys executes this request by running the given number of segments in parallel, then unmarshaling all results to out, which must be a pointer to a slice. Returns a slice of LastEvaluatedKeys that can be used to continue the query later.

func (*Scan) AllWithContext

func (s *Scan) AllWithContext(ctx context.Context, out interface{}) error

AllWithContext executes this request and unmarshals all results to out, which must be a pointer to a slice.

func (*Scan) AllWithLastEvaluatedKey

func (s *Scan) AllWithLastEvaluatedKey(out interface{}) (PagingKey, error)

AllWithLastEvaluatedKey executes this request and unmarshals all results to out, which must be a pointer to a slice. It returns a key you can use with StartFrom to continue this query.

func (*Scan) AllWithLastEvaluatedKeyContext

func (s *Scan) AllWithLastEvaluatedKeyContext(ctx context.Context, out interface{}) (PagingKey, error)

AllWithLastEvaluatedKeyContext executes this request and unmarshals all results to out, which must be a pointer to a slice. It returns a key you can use with StartFrom to continue this query.

func (*Scan) Consistent

func (s *Scan) Consistent(on bool) *Scan

Consistent will, if on is true, make this scan use a strongly consistent read. Scans are eventually consistent by default. Strongly consistent reads are more resource-heavy than eventually consistent reads.

func (*Scan) ConsumedCapacity

func (s *Scan) ConsumedCapacity(cc *ConsumedCapacity) *Scan

ConsumedCapacity will measure the throughput capacity consumed by this operation and add it to cc.

func (*Scan) Count added in v1.7.0

func (s *Scan) Count() (int64, error)

Count executes this request and returns the number of items matching the scan. It takes into account the filter, limit, search limit, and all other parameters given. It may return a higher count than the limits.

func (*Scan) CountWithContext added in v1.7.0

func (s *Scan) CountWithContext(ctx context.Context) (int64, error)

CountWithContext executes this request and returns the number of items matching the scan. It takes into account the filter, limit, search limit, and all other parameters given. It may return a higher count than the limits.

func (*Scan) Filter

func (s *Scan) Filter(expr string, args ...interface{}) *Scan

Filter takes an expression that all results will be evaluated against. Use single quotes to specify reserved names inline (like 'Count'). Use the placeholder ? within the expression to substitute values, and use $ for names. You need to use quoted or placeholder names when the name is a reserved word in DynamoDB. Multiple calls to Filter will be combined with AND.

func (*Scan) Index

func (s *Scan) Index(name string) *Scan

Index specifies the name of the index that Scan will operate on.

func (*Scan) Iter

func (s *Scan) Iter() PagingIter

Iter returns a results iterator for this request.

func (*Scan) IterParallel added in v1.20.0

func (s *Scan) IterParallel(ctx context.Context, segments int64) ParallelIter

IterParallel returns a results iterator for this request, running the given number of segments in parallel. Canceling the context given here will cancel the processing of all segments.

func (*Scan) IterParallelStartFrom added in v1.20.0

func (s *Scan) IterParallelStartFrom(ctx context.Context, keys []PagingKey) ParallelIter

IterParallelStartFrom returns a results iterator continued from a previous ParallelIter's LastEvaluatedKeys. Canceling the context given here will cancel the processing of all segments.

func (*Scan) Limit

func (s *Scan) Limit(limit int64) *Scan

Limit specifies the maximum amount of results to return.

func (*Scan) Project

func (s *Scan) Project(paths ...string) *Scan

Project limits the result attributes to the given paths.

func (*Scan) RequestLimit added in v1.23.0

func (s *Scan) RequestLimit(limit int) *Scan

RequestLimit specifies the maximum amount of requests to make against DynamoDB's API. A limit of zero or less means unlimited requests.

func (*Scan) SearchLimit

func (s *Scan) SearchLimit(limit int64) *Scan

SearchLimit specifies the maximum amount of results to evaluate. Use this along with StartFrom and Iter's LastEvaluatedKey to split up results. Note that DynamoDB limits result sets to 1MB. SearchLimit > 0 implies RequestLimit(1).

func (*Scan) Segment added in v1.19.0

func (s *Scan) Segment(segment int64, totalSegments int64) *Scan

Segment specifies the Segment and Total Segments to operate on in a manual parallel scan. This is useful if you want to control the parallel scans by yourself instead of using ParallelIter. Ignored by ParallelIter and friends.

func (*Scan) StartFrom

func (s *Scan) StartFrom(key PagingKey) *Scan

StartFrom makes this scan continue from a previous one. Use Scan.Iter's LastEvaluatedKey. Ignored by ParallelIter and friends, pass multiple keys to ParallelIterStartFrom instead.

type Status

type Status string

Status is an enumeration of table and index statuses.

const (
	// The table or index is ready for use.
	ActiveStatus Status = "ACTIVE"
	// The table or index is being created.
	CreatingStatus Status = "CREATING"
	// The table or index is being updated.
	UpdatingStatus Status = "UPDATING"
	// The table or index is being deleted.
	DeletingStatus Status = "DELETING"

	// NotExistsStatus is a special status you can pass to table.Wait() to wait until a table doesn't exist.
	// DescribeTable will return a ResourceNotFound AWS error instead of this.
	NotExistsStatus Status = "_gone"
)

Table and index statuses.

type StreamView

type StreamView string

StreamView determines what information is written to a table's stream.

var (
	// Only the key attributes of the modified item are written to the stream.
	KeysOnlyView StreamView = dynamodb.StreamViewTypeKeysOnly
	// The entire item, as it appears after it was modified, is written to the stream.
	NewImageView StreamView = dynamodb.StreamViewTypeNewImage
	// The entire item, as it appeared before it was modified, is written to the stream.
	OldImageView StreamView = dynamodb.StreamViewTypeOldImage
	// Both the new and the old item images of the item are written to the stream.
	NewAndOldImagesView StreamView = dynamodb.StreamViewTypeNewAndOldImages
)

type TTLDescription added in v1.5.0

type TTLDescription struct {
	// Attribute is the name of the time to live attribute for the table. Empty if disabled.
	Attribute string
	// Status is the table's time to live status.
	Status TTLStatus
}

TTLDescription represents time to live configuration details for a table.

func (TTLDescription) Enabled added in v1.5.0

func (td TTLDescription) Enabled() bool

Enabled returns true if time to live is enabled (and has finished enabling).

type TTLStatus added in v1.5.0

type TTLStatus string

TTLStatus represents a table's time to live status.

const (
	TTLEnabled   TTLStatus = "ENABLED"
	TTLEnabling  TTLStatus = "ENABLING"
	TTLDisabled  TTLStatus = "DISABLED"
	TTLDisabling TTLStatus = "DISABLING"
)

Possible time to live statuses.

type Table

type Table struct {
	// contains filtered or unexported fields
}

Table is a DynamoDB table.

func (Table) Batch

func (table Table) Batch(hashAndRangeKeyName ...string) Batch

Batch creates a new batch with the given hash key name, and range key name if provided. For purely Put batches, neither is necessary.

func (Table) Check added in v1.1.0

func (table Table) Check(hashKey string, value interface{}) *ConditionCheck

Check creates a new ConditionCheck, which represents a condition for a write transaction to succeed. hashKey specifies the name of the table's hash key and value specifies the value of the hash key. You must use Range to specify a range key for tables with hash and range keys.
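
For example, a sketch of using a condition check inside a write transaction. The WriteTx Check, Put, and Run methods used here are assumptions modeled on the GetTx API shown above; they are not documented in this excerpt (someTime is a placeholder sort key value):

check := table.Check("UserID", 613).Range("Time", someTime).IfExists()
err := db.WriteTx().
	Check(check). // assumed WriteTx.Check
	Put(table.Put(widget{UserID: 42, Time: time.Now()})). // assumed WriteTx.Put
	Run() // assumed WriteTx.Run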

func (Table) Delete

func (table Table) Delete(name string, value interface{}) *Delete

Delete creates a new request to delete an item. Name is the name of the hash key (a.k.a. partition key). Value is the value of the hash key.

func (Table) DeleteTable

func (table Table) DeleteTable() *DeleteTable

DeleteTable begins a new request to delete this table.

func (Table) Describe

func (table Table) Describe() *DescribeTable

Describe begins a new request to describe this table.

func (Table) DescribeTTL added in v1.5.0

func (table Table) DescribeTTL() *DescribeTTL

DescribeTTL begins a new request to obtain details about this table's time to live configuration.

func (Table) Get

func (table Table) Get(name string, value interface{}) *Query

Get creates a new request to get an item. Name is the name of the hash key (a.k.a. partition key). Value is the value of the hash key.

func (Table) Name

func (table Table) Name() string

Name returns this table's name.

func (Table) Put

func (table Table) Put(item interface{}) *Put

Put creates a new request to create or replace an item.

func (Table) Scan

func (table Table) Scan() *Scan

Scan creates a new request to scan this table.

func (Table) Update

func (table Table) Update(hashKey string, value interface{}) *Update

Update creates a new request to modify an existing item.

func (Table) UpdateTTL added in v1.5.0

func (table Table) UpdateTTL(attribute string, enabled bool) *UpdateTTL

UpdateTTL begins a new request to enable or disable this table's time to live. The name of the attribute to use for expiring items is specified by attribute. TTL will be enabled when enabled is true and disabled when it is false. The time to live attribute must be stored as Unix time in seconds. Items without this attribute won't be deleted.

func (Table) UpdateTable

func (table Table) UpdateTable() *UpdateTable

UpdateTable makes changes to this table's settings.

func (Table) Wait added in v1.15.0

func (table Table) Wait(want ...Status) error

Wait blocks until this table's status matches any status provided by want. If no statuses are specified, the active status is used.

func (Table) WaitWithContext added in v1.15.0

func (table Table) WaitWithContext(ctx context.Context, want ...Status) error

WaitWithContext blocks until this table's status matches any status provided by want. If no statuses are specified, the active status is used.

type Throughput

type Throughput struct {
	// Read capacity units.
	Read int64
	// Write capacity units.
	Write int64

	// Time at which throughput was last increased for this table.
	LastInc time.Time
	// Time at which throughput was last decreased for this table.
	LastDec time.Time
	// The number of throughput decreases in this UTC calendar day.
	DecsToday int64
}

type Unmarshaler

type Unmarshaler interface {
	UnmarshalDynamo(av *dynamodb.AttributeValue) error
}

Unmarshaler is the interface implemented by objects that can unmarshal an AttributeValue into themselves.

type Update

type Update struct {
	// contains filtered or unexported fields
}

Update represents changes to an existing item. It uses the UpdateItem API. See: http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateItem.html

func (*Update) Add

func (u *Update) Add(path string, value interface{}) *Update

Add adds value to path. The attribute at path must be a number or a set. If it is a number, value is atomically added to it. If it is a set, value must be a slice, a map[*]struct{}, or a map[*]bool. Path must be a top-level attribute.
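
A minimal sketch of an atomic counter increment combined with a set addition (attribute names and key values are hypothetical):

err := table.Update("UserID", 613).
	Range("Time", t).
	Add("Count", 1).
	AddStringsToSet("Friends", "p4ndora").
	Run()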

func (*Update) AddFloatsToSet

func (u *Update) AddFloatsToSet(path string, values ...float64) *Update

AddFloatsToSet adds the given values to the number set specified by path.

func (*Update) AddIntsToSet

func (u *Update) AddIntsToSet(path string, values ...int) *Update

AddIntsToSet adds the given values to the number set specified by path.

func (*Update) AddStringsToSet

func (u *Update) AddStringsToSet(path string, values ...string) *Update

AddStringsToSet adds the given values to the string set specified by path.

func (*Update) Append

func (u *Update) Append(path string, value interface{}) *Update

Append appends value to the end of the list specified by path.

func (*Update) ConsumedCapacity

func (u *Update) ConsumedCapacity(cc *ConsumedCapacity) *Update

ConsumedCapacity will measure the throughput capacity consumed by this operation and add it to cc.

func (*Update) DeleteFloatsFromSet

func (u *Update) DeleteFloatsFromSet(path string, values ...float64) *Update

DeleteFloatsFromSet deletes the given values from the number set specified by path.

func (*Update) DeleteFromSet added in v1.10.0

func (u *Update) DeleteFromSet(path string, value interface{}) *Update

DeleteFromSet deletes value from the set given by path. If value marshals to a set, those values will be deleted. If value marshals to a number, string, or binary, that value will be deleted. Delete is only for deleting values from sets. See Remove for removing entire attributes.

func (*Update) DeleteIntsFromSet

func (u *Update) DeleteIntsFromSet(path string, values ...int) *Update

DeleteIntsFromSet deletes the given values from the number set specified by path.

func (*Update) DeleteStringsFromSet

func (u *Update) DeleteStringsFromSet(path string, values ...string) *Update

DeleteStringsFromSet deletes the given values from the string set specified by path.

func (*Update) If

func (u *Update) If(expr string, args ...interface{}) *Update

If specifies a conditional expression for this update to succeed. Use single quotes to specify reserved names inline (like 'Count'). Use the placeholder ? within the expression to substitute values, and use $ for names. You need to use quoted or placeholder names when the name is a reserved word in DynamoDB. Multiple calls to If will be combined with AND.
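
A minimal sketch of a conditional update; both conditions must hold for the update to succeed (attribute names and values are hypothetical):

err := table.Update("UserID", 613).
	Range("Time", t).
	Set("Msg", "updated").
	If("'Count' >= ?", minCount).
	If("attribute_exists(Msg)").
	Run()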

func (*Update) OldValue

func (u *Update) OldValue(out interface{}) error

OldValue executes this update, encoding out with the old value before the update. This is equivalent to ReturnValues = ALL_OLD in the DynamoDB API.

func (*Update) OldValueWithContext

func (u *Update) OldValueWithContext(ctx context.Context, out interface{}) error

OldValueWithContext executes this update, encoding out with the old value before the update. This is equivalent to ReturnValues = ALL_OLD in the DynamoDB API.

func (*Update) OnlyUpdatedOldValue added in v1.10.0

func (u *Update) OnlyUpdatedOldValue(out interface{}) error

OnlyUpdatedOldValue executes this update, encoding out with only the old values of the attributes that were changed. This is equivalent to ReturnValues = UPDATED_OLD in the DynamoDB API.

func (*Update) OnlyUpdatedOldValueWithContext added in v1.10.0

func (u *Update) OnlyUpdatedOldValueWithContext(ctx context.Context, out interface{}) error

OnlyUpdatedOldValueWithContext executes this update, encoding out with only the old values of the attributes that were changed. This is equivalent to ReturnValues = UPDATED_OLD in the DynamoDB API.

func (*Update) OnlyUpdatedValue added in v1.10.0

func (u *Update) OnlyUpdatedValue(out interface{}) error

OnlyUpdatedValue executes this update, encoding out with only the new values of the attributes that were changed. This is equivalent to ReturnValues = UPDATED_NEW in the DynamoDB API.

func (*Update) OnlyUpdatedValueWithContext added in v1.10.0

func (u *Update) OnlyUpdatedValueWithContext(ctx context.Context, out interface{}) error

OnlyUpdatedValueWithContext executes this update, encoding out with only the new values of the attributes that were changed. This is equivalent to ReturnValues = UPDATED_NEW in the DynamoDB API.

func (*Update) Prepend

func (u *Update) Prepend(path string, value interface{}) *Update

Prepend inserts value at the beginning of the list specified by path.

func (*Update) Range

func (u *Update) Range(name string, value interface{}) *Update

Range specifies the range key (sort key) for the item to update.

func (*Update) Remove

func (u *Update) Remove(paths ...string) *Update

Remove removes the paths from this item, deleting the specified attributes.
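
A minimal sketch (the removed attribute names and key values are hypothetical):

err := table.Update("UserID", 613).
	Range("Time", t).
	Remove("SecretKey", "LegacyFlags").
	Run()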

func (*Update) RemoveExpr

func (u *Update) RemoveExpr(expr string, args ...interface{}) *Update

RemoveExpr performs a custom remove expression, substituting the args into expr as in filter expressions.

RemoveExpr("MyList[$]", 5)

func (*Update) Run

func (u *Update) Run() error

Run executes this update.

func (*Update) RunWithContext

func (u *Update) RunWithContext(ctx context.Context) error

RunWithContext executes this update.

func (*Update) Set

func (u *Update) Set(path string, value interface{}) *Update

Set changes path to the given value. If value is an empty string or nil, path will be removed instead. Paths that are reserved words are automatically escaped. Use single quotes to escape nested paths with reserved names, like 'User'.'Count'.
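
A minimal sketch, setting both a top-level attribute and a nested path (names and values are hypothetical):

err := table.Update("UserID", 613).
	Range("Time", t).
	Set("Msg", "hello again").
	Set("'Meta'.'Count'", 0).
	Run()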

func (*Update) SetExpr

func (u *Update) SetExpr(expr string, args ...interface{}) *Update

SetExpr performs a custom set expression, substituting the args into expr as in filter expressions. See: http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateItem.html#DDB-UpdateItem-request-UpdateExpression

SetExpr("MyMap.$.$ = ?", key1, key2, val)
SetExpr("'Counter' = 'Counter' + ?", 1)

func (*Update) SetIfNotExists

func (u *Update) SetIfNotExists(path string, value interface{}) *Update

SetIfNotExists changes path to the given value, if it does not already exist.

func (*Update) SetNullable added in v1.9.0

func (u *Update) SetNullable(path string, value interface{}) *Update

SetNullable changes path to the given value, allowing empty and nil values. If value is an empty string or []byte, it will be set as-is. If value is nil, the DynamoDB NULL type will be used. Paths that are reserved words are automatically escaped. Use single quotes to escape nested paths with reserved names, like 'User'.'Count'.

func (*Update) SetSet

func (u *Update) SetSet(path string, value interface{}) *Update

SetSet changes a set at the given path to the given value. SetSet marshals value to a string set, number set, or binary set. If value is of zero length or nil, path will be removed instead. Paths that are reserved words are automatically escaped. Use single quotes to escape nested paths with reserved names, like 'User'.'Count'.

func (*Update) Value

func (u *Update) Value(out interface{}) error

Value executes this update, encoding out with the new value after the update. This is equivalent to ReturnValues = ALL_NEW in the DynamoDB API.

func (*Update) ValueWithContext

func (u *Update) ValueWithContext(ctx context.Context, out interface{}) error

ValueWithContext executes this update, encoding out with the new value after the update. This is equivalent to ReturnValues = ALL_NEW in the DynamoDB API.

type UpdateTTL added in v1.5.0

type UpdateTTL struct {
	// contains filtered or unexported fields
}

UpdateTTL is a request to enable or disable a table's time to live functionality. Note that when time to live is enabled, items are typically deleted within 48 hours of expiring, and items that have expired but not yet been deleted will still appear in your database. See: https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateTimeToLive.html

func (*UpdateTTL) Run added in v1.5.0

func (ttl *UpdateTTL) Run() error

Run executes this request.

func (*UpdateTTL) RunWithContext added in v1.5.0

func (ttl *UpdateTTL) RunWithContext(ctx context.Context) error

RunWithContext executes this request.

type UpdateTable

type UpdateTable struct {
	// contains filtered or unexported fields
}

UpdateTable is a request to change a table's settings. See: http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateTable.html

func (*UpdateTable) CreateIndex

func (ut *UpdateTable) CreateIndex(index Index) *UpdateTable

CreateIndex adds a new global secondary index. You must specify the index name, keys, key types, and projection. If this table is not on-demand, you must also specify throughput.
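
A sketch of adding a GSI to a provisioned table (the index and attribute names are hypothetical, and the exact Index fields shown are assumptions):

_, err := table.UpdateTable().CreateIndex(dynamo.Index{
	Name:           "Msg-index",
	HashKey:        "Msg",
	HashKeyType:    dynamo.StringType,
	ProjectionType: dynamo.AllProjection,
	Throughput:     dynamo.Throughput{Read: 5, Write: 5},
}).Run()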

func (*UpdateTable) DeleteIndex

func (ut *UpdateTable) DeleteIndex(name string) *UpdateTable

DeleteIndex deletes the specified index.

func (*UpdateTable) DisableStream

func (ut *UpdateTable) DisableStream() *UpdateTable

DisableStream disables this table's stream.

func (*UpdateTable) OnDemand added in v1.2.0

func (ut *UpdateTable) OnDemand(enabled bool) *UpdateTable

OnDemand sets this table to use on-demand (pay per request) billing mode if enabled is true. If enabled is false, this table will be changed to provisioned billing mode.
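
A minimal sketch of switching a table to on-demand billing:

_, err := table.UpdateTable().OnDemand(true).Run()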

func (*UpdateTable) Provision

func (ut *UpdateTable) Provision(read, write int64) *UpdateTable

Provision sets this table's read and write throughput capacity.

func (*UpdateTable) ProvisionIndex

func (ut *UpdateTable) ProvisionIndex(name string, read, write int64) *UpdateTable

ProvisionIndex updates a global secondary index's read and write throughput capacity.

func (*UpdateTable) Run

func (ut *UpdateTable) Run() (Description, error)

Run executes this request and describes the table.

func (*UpdateTable) RunWithContext

func (ut *UpdateTable) RunWithContext(ctx context.Context) (Description, error)

RunWithContext executes this request and describes the table.

func (*UpdateTable) Stream

func (ut *UpdateTable) Stream(view StreamView) *UpdateTable

Stream enables streaming and sets the stream view type.

type WriteTx added in v1.1.0

type WriteTx struct {
	// contains filtered or unexported fields
}

WriteTx is a transaction to delete, put, update, and check items. It can contain up to 100 operations and works across multiple tables. Two operations cannot target the same item. WriteTx is analogous to TransactWriteItems in DynamoDB's API. See: https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_TransactWriteItems.html
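
A sketch of a transaction spanning hypothetical users, accounts, and counters tables (table variables, item values, and key names are assumptions):

err := db.WriteTx().
	Check(users.Check("UserID", 613).IfExists()).
	Put(accounts.Put(newAccount)).
	Update(counters.Update("Name", "accounts").Add("Count", 1)).
	Idempotent(true).
	Run()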

func (*WriteTx) Check added in v1.1.0

func (tx *WriteTx) Check(check *ConditionCheck) *WriteTx

Check adds a conditional check to this transaction.

func (*WriteTx) ConsumedCapacity added in v1.1.0

func (tx *WriteTx) ConsumedCapacity(cc *ConsumedCapacity) *WriteTx

ConsumedCapacity will measure the throughput capacity consumed by this transaction and add it to cc.

func (*WriteTx) Delete added in v1.1.0

func (tx *WriteTx) Delete(d *Delete) *WriteTx

Delete adds a new delete operation to this transaction.

func (*WriteTx) Idempotent added in v1.1.0

func (tx *WriteTx) Idempotent(enabled bool) *WriteTx

Idempotent marks this transaction as idempotent when enabled is true. This automatically generates a unique idempotency token for you. An idempotent transaction run multiple times will have the same effect as being run once. An idempotent request is only good for 10 minutes; after that, it will be considered a new request.

func (*WriteTx) IdempotentWithToken added in v1.5.0

func (tx *WriteTx) IdempotentWithToken(token string) *WriteTx

IdempotentWithToken marks this transaction as idempotent and explicitly specifies the token value. If token is empty, idempotency will be disabled instead. Unless you have special circumstances that require a custom token, consider using Idempotent to generate a token for you. An idempotent transaction run multiple times will have the same effect as being run once. An idempotent request (token) is only good for 10 minutes; after that, it will be considered a new request.

func (*WriteTx) Put added in v1.1.0

func (tx *WriteTx) Put(p *Put) *WriteTx

Put adds a put operation to this transaction.

func (*WriteTx) Run added in v1.1.0

func (tx *WriteTx) Run() error

Run executes this transaction.

func (*WriteTx) RunWithContext added in v1.1.0

func (tx *WriteTx) RunWithContext(ctx context.Context) error

RunWithContext executes this transaction.

func (*WriteTx) Update added in v1.1.0

func (tx *WriteTx) Update(u *Update) *WriteTx

Update adds an update operation to this transaction.

Directories

Path Synopsis
internal
exprs
Package exprs is the internal package for parsing DynamoDB "expressions", including condition expressions and filter expressions.
