dat

Published: Feb 25, 2015 License: MIT Imports: 19 Imported by: 0

dat
dat (Data Access Toolkit) is a fast, lightweight and intuitive Postgres library for Go. dat likes SQL.

Highlights

  • Ordinal placeholders - friendlier than ?

    conn.SQL(`SELECT * FROM people WHERE state = $1`, "CA").Exec()
    
  • Intuitive - looks like SQL

    err := conn.
        Select("id, user_name").
        From("users").
        Where("id = $1", id).
        QueryStruct(&user)
  • Performant

    • ordinal placeholder logic has been optimized to be nearly as fast as ? placeholders
    • dat can interpolate queries locally before sending to server

Getting Started

import (
    "database/sql"
    "fmt"

    "github.com/mgutz/dat"
    "github.com/mgutz/dat/sqlx-runner"
    _ "github.com/lib/pq"
)

// global connection (pooling provided by SQL driver)
var conn *runner.Connection

func init() {
    // create a normal database connection through database/sql
    db, err := sql.Open("postgres", "dbname=dat_test user=dat password=!test host=localhost sslmode=disable")
    if err != nil {
        panic(err)
    }

    // set this to enable interpolation
    dat.EnableInterpolation = true
    // set to log SQL, etc
    dat.SetVerbose(false)
    // set to check things like sessions closing.
    // Should be disabled in production/release builds.
    dat.Strict = false
    conn = runner.NewConnection(db, "postgres")
}

type Post struct {
    ID        int64         `db:"id"`
    Title     string        `db:"title"`
    Body      string        `db:"body"`
    UserID    int64         `db:"user_id"`
    State     string        `db:"state"`
    UpdatedAt dat.NullTime  `db:"updated_at"`
    CreatedAt dat.NullTime  `db:"created_at"`
}

func main() {
    var post Post
    err := conn.
        Select("id, title").
        From("posts").
        Where("id = $1", 13).
        QueryStruct(&post)
    fmt.Println("Title", post.Title)
}

Feature highlights

Use Builders or SQL

Query Builder

var posts []*Post
err := conn.
    Select("title", "body").
    From("posts").
    Where("created_at > $1", someTime).
    OrderBy("id ASC").
    Limit(10).
    QueryStructs(&posts)

Plain SQL

conn.SQL(`
    SELECT title, body
    FROM posts WHERE created_at > $1
    ORDER BY id ASC LIMIT 10`,
    someTime,
).QueryStructs(&posts)

Note: dat does not clean the SQL string, thus any extra whitespace is transmitted to the database.

In practice, SQL is easier to write with backticks. Indeed, the reason this library exists is my dissatisfaction with other SQL builders introducing another domain language or AST-like expressions.

Query builders shine when dealing with data transfer objects, records (input structs).

Fetch Data Simply

Query then scan result to struct(s)

var post Post
err := sess.
    Select("id, title, body").
    From("posts").
    Where("id = $1", id).
    QueryStruct(&post)

var posts []*Post
err = sess.
    Select("id, title, body").
    From("posts").
    Where("id > $1", 100).
    QueryStructs(&posts)

Query scalar values or a slice of values

var n int64
conn.SQL("SELECT count(*) FROM posts WHERE title=$1", title).QueryScalar(&n)

var ids []int64
conn.SQL("SELECT id FROM posts").QuerySlice(&ids)
Blacklist and Whitelist

Control which columns get inserted or updated when processing external data

// userData came in from http.Handler, prevent them from setting protected fields
conn.InsertInto("payments").
    Blacklist("id", "updated_at", "created_at").
    Record(userData).
    Returning("id").
    QueryScalar(&userData.ID)

// ensure session user can only update his information
conn.Update("users").
    SetWhitelist(user, "user_name", "avatar", "quote").
    Where("id = $1", session.UserID).
    Exec()
IN queries

applicable when dat.EnableInterpolation == true

Simpler IN queries which expand correctly

ids := []int64{10,20,30,40,50}
b := conn.SQL("SELECT * FROM posts WHERE id IN $1", ids)
b.MustInterpolate() == "SELECT * FROM posts WHERE id IN (10,20,30,40,50)"
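The expansion above can be sketched with a small standalone helper. This is only an illustration of the behavior, not dat's implementation; `expandIn` is a hypothetical name.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// expandIn splices a slice of int64 IDs into an IN (...) list, mimicking
// what dat's interpolation does when "IN $1" is given a slice argument.
// Illustrative only; dat handles more types and validates the slice.
func expandIn(sqlTemplate string, ids []int64) string {
	parts := make([]string, len(ids))
	for i, id := range ids {
		parts[i] = strconv.FormatInt(id, 10)
	}
	list := "(" + strings.Join(parts, ",") + ")"
	return strings.Replace(sqlTemplate, "$1", list, 1)
}

func main() {
	ids := []int64{10, 20, 30, 40, 50}
	fmt.Println(expandIn("SELECT * FROM posts WHERE id IN $1", ids))
	// SELECT * FROM posts WHERE id IN (10,20,30,40,50)
}
```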
Runners

dat was designed to have clear separation between SQL builders and Query execers. This is why the runner is in its own package.

  • sqlx-runner - based on sqlx

CRUD

Create

Use Returning and QueryStruct to insert and update struct fields in one trip

post := Post{Title: "Switch to Postgres", State: "open"}

err := conn.
    InsertInto("posts").
    Columns("title", "state").
    Values("My Post", "draft").
    Returning("id", "created_at", "updated_at").
    QueryStruct(&post)

Use Blacklist and Whitelist to control which record (input struct) fields are inserted.

post := Post{Title: "Go is awesome", State: "open"}

err := conn.
    InsertInto("posts").
    Blacklist("id", "user_id", "created_at", "updated_at").
    Record(post).
    Returning("id", "created_at", "updated_at").
    QueryStruct(&post)

// use wildcard to include all columns
err := sess.
    InsertInto("posts").
    Whitelist("*").
    Record(post).
    Returning("id", "created_at", "updated_at").
    QueryStruct(&post)

Insert Multiple Records

// create builder
b := conn.InsertInto("posts").Columns("title")

// add some new posts
for i := 0; i < 3; i++ {
	b.Record(&Post{Title: fmt.Sprintf("Article %d", i)})
}

// OR (this is more efficient as it does not do any reflection)
for i := 0; i < 3; i++ {
	b.Values(fmt.Sprintf("Article %d", i))
}

// execute statement
_, err := b.Exec()
Read
var other Post
err = conn.
    Select("id, title").
    From("posts").
    Where("id = $1", post.ID).
    QueryStruct(&other)
Update

Use Returning to fetch columns updated by triggers. For example, an update trigger on the "updated_at" column

err = conn.
    Update("posts").
    Set("title", "My New Title").
    Set("body", "markdown text here").
    Where("id = $1", post.ID).
    Returning("updated_at").
    QueryScalar(&post.UpdatedAt)

applicable when dat.EnableInterpolation == true

To reset columns to their default DDL value, use DEFAULT. For example, to reset payment_type

res, err := conn.
    Update("payments").
    Set("payment_type", dat.DEFAULT).
    Where("id = $1", 1).
    Exec()

Use SetBlacklist and SetWhitelist to control which fields are updated.

// create blacklists for each of your structs
blacklist := []string{"id", "created_at"}
p := paymentStructFromHandler

err := conn.
    Update("payments").
    SetBlacklist(p, blacklist...).
    Where("id = $1", p.ID).
    Exec()

Use a map of attributes

attrsMap := map[string]interface{}{"name": "Gopher", "language": "Go"}
result, err := conn.
    Update("developers").
    SetMap(attrsMap).
    Where("language = $1", "Ruby").
    Exec()
Delete
result, err = conn.
    DeleteFrom("posts").
    Where("id = $1", otherPost.ID).
    Limit(1).
    Exec()
Create a Session

All queries are made in the context of a session, which is acquired from the underlying SQL driver's pool

For one-off operations, use a Connection directly

// a global connection usually created in `init`
var conn *runner.Connection
conn = runner.NewConnection(db, "postgres")

err := conn.SQL(...).QueryStruct(&post)

For multiple operations, create a session. Note that a session is really a transaction due to database/sql connection pooling. Session.AutoCommit() or Session.AutoRollback() MUST be called


func PostsIndex(rw http.ResponseWriter, r *http.Request) {
    sess := conn.NewSession()
    defer sess.AutoRollback()

    // Do queries with the session
    var post Post
    err := sess.Select("id, title").
        From("posts").
        Where("id = $1", post.ID).
        QueryStruct(&post)
    if err != nil {
    	// `defer AutoRollback()` is used, no need to rollback on error
    	rw.WriteHeader(500)
    	return
    }

    // do more queries with session ...

    // MUST commit or AutoRollback() will rollback
    sess.Commit()
}
Constants

applicable when dat.EnableInterpolation == true

dat provides often used constants in SQL statements

  • dat.DEFAULT - inserts DEFAULT
  • dat.NOW - inserts NOW()
Defining Constants

UnsafeStrings and constants will panic unless dat.EnableInterpolation == true

To define SQL constants, use UnsafeString

const CURRENT_TIMESTAMP = dat.UnsafeString("NOW()")
conn.SQL("UPDATE table SET updated_at = $1", CURRENT_TIMESTAMP)

UnsafeString is exactly that, UNSAFE. If you must use it, create a constant and NEVER use UnsafeString directly as an argument. This is asking for a SQL injection attack

conn.SQL("UPDATE table SET updated_at = $1", dat.UnsafeString(someVar))
Primitive Values

Load scalar and slice values.

var id int64
var userID string
err := conn.
    Select("id", "user_id").From("posts").Limit(1).QueryScalar(&id, &userID)

var ids []int64
err = conn.Select("id").From("posts").QuerySlice(&ids)
Embedded structs
// Columns are mapped to fields breadth-first
type Post struct {
    ID        int64      `db:"id"`
    Title     string     `db:"title"`
    User      *struct {
        ID int64         `db:"user_id"`
    }
}

var post Post
err := conn.
    Select("id, title, user_id").
    From("posts").
    Limit(1).
    QueryStruct(&post)
JSON encoding of Null* types
// dat.Null* types serialize to JSON properly
post := Post{ID: 1, Title: "Test Title"}
jsonBytes, err := json.Marshal(&post)
fmt.Println(string(jsonBytes)) // {"id":1,"title":"Test Title","created_at":null}
Transactions
// Start transaction
tx, err := conn.Begin()
if err != nil {
    return err
}
// safe to call tx.Rollback() or tx.Commit() when deferring AutoCommit()
defer tx.AutoCommit()

// AutoRollback() is also available if you would rather Commit() at the end
// and not deal with Rollback on every error.

// Issue statements that might cause errors
res, err := tx.
    Update("posts").
    Set("state", "deleted").
    Where("deleted_at IS NOT NULL").
    Exec()

if err != nil {
    tx.Rollback()
    return err
}
Local Interpolation

TL;DR: Interpolation avoids prepared statements and argument processing.

Interpolation is DISABLED by default. Set dat.EnableInterpolation = true to enable.

dat can interpolate locally to inline query arguments. Let's start with a normal SQL statement with arguments

db.Exec(
    "INSERT INTO t (a, b, c, d) VALUES ($1, $2, $3, $4)",
    1, 2, 3, 4,
)

The driver first asks the database to create a prepared statement for the INSERT statement, then it sends the arguments with the prepared statement to the database.

In contrast, dat can interpolate the statement locally resulting in a SQL statement with often no arguments. The code above results in this interpolated SQL

"INSERT INTO t (a, b, c, d) VALUES (1, 2, 3, 4)"

Some of the reasons you might want to use interpolation:

  • Performance improvement
  • Debugging is simpler with interpolated SQL
  • Use SQL constants like NOW and DEFAULT
  • Expand placeholders with expanded slice values $1 => (1, 2, 3)

[]byte, []*byte and any unhandled values are passed through to the driver when interpolating.

Interpolation Safety

Postgres 9.1+ does not allow any escape sequences by default. See String Constants with C-style Escapes. In short, all backslashes are treated literally not as escape sequences.

It's rare to need backslashes to represent special characters in user input. Do you trust users to enter C-like expressions? dat only escapes apostrophes to double apostrophes, eg "Go's world" becomes 'Go''s world'.
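The quote-doubling rule can be sketched as follows; `escapeLiteral` is a hypothetical helper, not dat's source, and it assumes standard_conforming_strings=on so backslashes stay literal.

```go
package main

import (
	"fmt"
	"strings"
)

// escapeLiteral quotes a string for splicing into SQL by doubling
// apostrophes, as described above: "Go's world" -> 'Go''s world'.
func escapeLiteral(s string) string {
	return "'" + strings.ReplaceAll(s, "'", "''") + "'"
}

func main() {
	fmt.Println(escapeLiteral("Go's world")) // 'Go''s world'
}
```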

As an added safety measure, dat checks the Postgres database standard_conforming_strings setting value on a new connection when dat.EnableInterpolation == true. If standard_conforming_strings != "on" you should either set it to "on" or disable interpolation. dat will panic if you try to use interpolation with an unsafe setting.

Why is Interpolation Faster?

Here is a comment from the lib/pq connection source, which was prompted by my asking why Python's psycopg2 was so much faster in my benchmarks a year or so back:

// Check to see if we can use the "simpleExec" interface, which is
// *much* faster than going through prepare/exec
if len(args) == 0 {
    // ignore commandTag, our caller doesn't care
    r, _, err := cn.simpleExec(query)
    return r, err
}

That snippet bypasses the prepare/exec roundtrip to the database.

Keep in mind that prepared statements are only valid for the current session, and unless the same query is executed MANY times in the same session there is little benefit to prepared statements beyond protecting against SQL injection. See Interpolation Safety above.

Interpolation also offloads database workload to your application servers. The database does less work and there is less network chatter when interpolation is performed locally. It's usually much more cost effective to add application servers than to vertically scale a database server.

Benchmarks
  • Dat2 - mgutz/dat runner with 2 args
  • Sql2 - database/sql with 2 args
  • Sqx2 - jmoiron/sqlx with 2 args

Replace 2 with 4, 8 for variants of argument benchmarks. All source is under sqlx-runner/benchmark*

Interpolated v Non-Interpolated Queries

This benchmark compares the time to execute an interpolated SQL statement with zero args against executing the same SQL statement with args.

BenchmarkExecSQLDat2       5000   208345   ns/op  280   B/op  10  allocs/op
BenchmarkExecSQLSql2       5000   298789   ns/op  881   B/op  30  allocs/op
BenchmarkExecSQLSqx2       5000   296948   ns/op  881   B/op  30  allocs/op

BenchmarkExecSQLDat4       5000   210759   ns/op  296   B/op  10  allocs/op
BenchmarkExecSQLSql4       5000   306558   ns/op  978   B/op  35  allocs/op
BenchmarkExecSQLSqx4       5000   305569   ns/op  978   B/op  35  allocs/op

The logic is something like this

// already interpolated
for i := 0; i < b.N; i++ {
    conn.Exec("INSERT INTO t (a, b, c, d) VALUES (1, 2, 3, 4)")
}

// not interpolated
for i := 0; i < b.N; i++ {
    db.Exec("INSERT INTO t (a, b, c, d) VALUES ($1, $2, $3, $4)", 1, 2, 3, 4)
}

To be fair, this benchmark is not meaningful. It does not take into account the time to perform the interpolation. It is only meant to show that interpolated queries avoid the overhead of arguments and skip the prepared statement logic in the underlying driver.

Interpolating then Execing

This benchmark compares the time to build and execute interpolated SQL statement resulting in zero args against executing the same SQL statement with args.

BenchmarkBuildExecSQLDat2  5000   215449   ns/op  832   B/op  21  allocs/op
BenchmarkBuildExecSQLSql2  5000   296281   ns/op  881   B/op  30  allocs/op
BenchmarkBuildExecSQLSqx2  5000   296259   ns/op  881   B/op  30  allocs/op

BenchmarkBuildExecSQLDat4  5000   221287   ns/op  1232  B/op  26  allocs/op
BenchmarkBuildExecSQLSql4  5000   305807   ns/op  978   B/op  35  allocs/op
BenchmarkBuildExecSQLSqx4  5000   305671   ns/op  978   B/op  35  allocs/op

BenchmarkBuildExecSQLDat8  5000   254252   ns/op  1480  B/op  33  allocs/op
BenchmarkBuildExecSQLSql8  5000   347407   ns/op  1194  B/op  44  allocs/op
BenchmarkBuildExecSQLSqx8  5000   346576   ns/op  1194  B/op  44  allocs/op

The logic is something like this

// dat's SQL interpolates the statement then executes it
for i := 0; i < b.N; i++ {
    conn.SQL("INSERT INTO t (a, b, c, d) VALUES ($1, $2, $3, $4)", 1, 2, 3, 4).Exec()
}

// non interpolated
for i := 0; i < b.N; i++ {
    db.Exec("INSERT INTO t (a, b, c, d) VALUES ($1, $2, $3, $4)", 1, 2, 3, 4)
}

The results suggest that local interpolation is faster and does fewer allocations. Interpolation comes at the cost of more bytes used, as it has to inspect the args and splice them into the statement.

database/sql, when presented with arguments, prepares a statement on the connection by sending it to the database, then executes the query using the prepared statement on that same connection. Keep in mind these benchmarks are local, so network latency is not a factor; latency would favor interpolation even more.

Interpolation and Transactions

This benchmark compares the performance of interpolation within a transaction on "level playing field" with database/sql. As mentioned in a previous section, prepared statements MUST be prepared and executed on the same connection to utilize them.

BenchmarkTransactedDat2    10000  111959   ns/op  832   B/op  21  allocs/op
BenchmarkTransactedSql2    10000  173137   ns/op  881   B/op  30  allocs/op
BenchmarkTransactedSqx2    10000  175342   ns/op  881   B/op  30  allocs/op

BenchmarkTransactedDat4    10000  115383   ns/op  1232  B/op  26  allocs/op
BenchmarkTransactedSql4    10000  182626   ns/op  978   B/op  35  allocs/op
BenchmarkTransactedSqx4    10000  181641   ns/op  978   B/op  35  allocs/op

BenchmarkTransactedDat8    10000  145419   ns/op  1480  B/op  33  allocs/op
BenchmarkTransactedSql8    10000  221476   ns/op  1194  B/op  44  allocs/op
BenchmarkTransactedSqx8    10000  222460   ns/op  1194  B/op  44  allocs/op

The logic is something like this

// dat interpolates the statement then execute it as part of transaction
tx := conn.Begin()
defer tx.Commit()
for i := 0; i < b.N; i++ {
	tx.SQL("INSERT INTO t (a, b, c, d) VALUES ($1, $2, $3, $4)", 1, 2, 3, 4).Exec()
}

// non-interpolated
tx = db.Begin()
defer tx.Commit()
for i := 0; i < b.N; i++ {
	tx.Exec("INSERT INTO t (a, b, c, d) VALUES ($1, $2, $3, $4)", 1, 2, 3, 4)
}

Again, interpolation seems faster with fewer allocations. The underlying driver still has to process and send the arguments along with the prepared statement name. I expected database/sql to beat interpolation here; still thinking about this one.

Use With Other Libraries
import "github.com/mgutz/dat"

builder := dat.Select("*").From("posts").Where("user_id = $1", 1)

// Get builder's SQL and arguments
sql, args := builder.ToSQL()
fmt.Println(sql)    // SELECT * FROM posts WHERE (user_id = $1)
fmt.Println(args)   // [1]

// Use raw database/sql for actual query
rows, err := db.Query(sql, args...)

// Alternatively build the interpolated sql statement
sql, args = builder.MustInterpolate()
if len(args) == 0 {
    rows, err = db.Query(sql)
} else {
    rows, err = db.Query(sql, args...)
}

Running Tests and Benchmarks

To setup the task runner and create database

# install godo task runner
go get -u gopkg.in/godo.v1/cmd/godo

# install dependencies
cd tasks
go get -a

# back to root and run
cd ..

Then run any task

# (re)create database
godo createdb

# run tests
godo test

# run benchmarks
godo bench

# see other tasks
godo

When createdb prompts for the superuser, enter a superuser name like 'postgres' to create the test database. On Mac with Postgres.app, use your own user name and password.

TODO

  • more tests
  • hstore query support
  • stored procedure support

Inspiration

  • mapper

    My SQL builder for node.js which has builder, interpolation and exec functionality.

  • dbr

    used this as starting point instead of porting mapper from scratch

Documentation

Index

Constants

View Source
const DEFAULT = UnsafeString("DEFAULT")

DEFAULT SQL value

View Source
const NOW = UnsafeString("NOW()")

NOW SQL value

Variables

View Source
var (
	// ErrNotFound ...
	ErrNotFound = errors.New("not found")
	// ErrNotUTF8 ...
	ErrNotUTF8 = errors.New("invalid UTF-8")
	// ErrInvalidSliceLength ...
	ErrInvalidSliceLength = errors.New("length of slice is 0. length must be >= 1")
	// ErrInvalidSliceValue ...
	ErrInvalidSliceValue = errors.New("trying to interpolate invalid slice value into query")
	// ErrInvalidValue ...
	ErrInvalidValue = errors.New("trying to interpolate invalid value into query")
	// ErrArgumentMismatch ...
	ErrArgumentMismatch = errors.New("mismatch between ? (placeholders) and arguments")
)
View Source
var EnableInterpolation = false

Whether to enable interpolation

View Source
var NameMapping = camelCaseToSnakeCase

NameMapping is the routine to use when mapping column names to struct properties
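The default mapping can be sketched roughly as below. The real `camelCaseToSnakeCase` is unexported in dat, so this is an approximation; note it handles initialisms like "ID" naively.

```go
package main

import (
	"fmt"
	"unicode"
)

// camelCaseToSnakeCase sketches the default NameMapping: a struct field
// like "CreatedAt" maps to a column named "created_at". Each uppercase
// rune after the first is preceded by an underscore and lowercased.
func camelCaseToSnakeCase(name string) string {
	var out []rune
	for i, r := range name {
		if unicode.IsUpper(r) {
			if i > 0 {
				out = append(out, '_')
			}
			out = append(out, unicode.ToLower(r))
		} else {
			out = append(out, r)
		}
	}
	return string(out)
}

func main() {
	fmt.Println(camelCaseToSnakeCase("CreatedAt")) // created_at
	fmt.Println(camelCaseToSnakeCase("ID"))        // i_d (naive; dat's mapping may differ)
}
```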

View Source
var Strict = false

Strict tells dat to raise errors

Functions

func CalculateFieldMap

func CalculateFieldMap(recordType reflect.Type, columns []string,
	requireAllColumns bool) ([][]int, error)

CalculateFieldMap maps columns to a record's field indexes; recordType is the type of a structure

func Interpolate

func Interpolate(sql string, vals []interface{}) (string, []interface{}, error)

Interpolate takes a SQL string with placeholders and a list of arguments to replace them with. Returns a blank string and error if the number of placeholders does not match the number of arguments.
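The mismatch check can be illustrated with a rough sketch that finds the highest $N ordinal in a statement; `countPlaceholders` is a hypothetical helper, not dat's parser.

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// countPlaceholders returns the highest $N ordinal found in the SQL.
// A caller could compare this against len(vals) and return an error on
// mismatch, as Interpolate's contract describes.
func countPlaceholders(sql string) int {
	re := regexp.MustCompile(`\$(\d+)`)
	max := 0
	for _, m := range re.FindAllStringSubmatch(sql, -1) {
		n, _ := strconv.Atoi(m[1])
		if n > max {
			max = n
		}
	}
	return max
}

func main() {
	fmt.Println(countPlaceholders("SELECT * FROM posts WHERE id = $1 AND state = $2")) // 2
}
```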

func PrepareHolderFor

func PrepareHolderFor(record reflect.Value, fieldMap [][]int, holder []interface{}) ([]interface{}, error)

PrepareHolderFor creates holders for a record.

TODO: fill this in

func SQLMapFromFile

func SQLMapFromFile(filename string) (map[string]string, error)

SQLMapFromFile loads a file containing special markers and loads the SQL statements into a map.

func SQLMapFromReader

func SQLMapFromReader(r io.Reader) (map[string]string, error)

SQLMapFromReader creates a SQL map from an io.Reader.

This string

`
--@selectUsers
SELECT * FROM users;

--@selectAccounts
SELECT * FROM accounts;
`

returns map[string]string{
	"selectUsers": "SELECT * FROM users;",
	"selectAccounts": "SELECT * FROM accounts;",
}

func SQLMapFromString

func SQLMapFromString(s string) (map[string]string, error)

SQLMapFromString creates a map of strings from s.

func SQLSliceFromFile

func SQLSliceFromFile(filename string) ([]string, error)

SQLSliceFromFile reads a file's text then passes it to SQLSliceFromString.

func SQLSliceFromString

func SQLSliceFromString(s string) ([]string, error)

SQLSliceFromString converts a multiline string marked by `^GO$` into a slice of SQL statements.

This string

SELECT *
FROM users;
GO
SELECT *
FROM accounts;

returns []string{"SELECT *\nFROM users;", "SELECT *\nFROM accounts;"}
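The splitting behavior can be sketched as follows (illustrative, not dat's source): statements accumulate until a line containing only GO, which flushes the current chunk.

```go
package main

import (
	"fmt"
	"strings"
)

// sqlSliceFromString sketches splitting a multi-statement script on
// lines that contain only "GO", as SQLSliceFromString describes.
func sqlSliceFromString(s string) []string {
	var stmts []string
	var buf []string
	for _, line := range strings.Split(s, "\n") {
		if strings.TrimSpace(line) == "GO" {
			stmts = append(stmts, strings.TrimSpace(strings.Join(buf, "\n")))
			buf = nil
			continue
		}
		buf = append(buf, line)
	}
	if chunk := strings.TrimSpace(strings.Join(buf, "\n")); chunk != "" {
		stmts = append(stmts, chunk)
	}
	return stmts
}

func main() {
	src := "SELECT *\nFROM users;\nGO\nSELECT *\nFROM accounts;"
	fmt.Println(len(sqlSliceFromString(src))) // 2
}
```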

func SetVerbose

func SetVerbose(verbose bool)

SetVerbose sets the verbosity of logging which defaults to none

func ValuesFor

func ValuesFor(recordType reflect.Type, record reflect.Value, columns []string) ([]interface{}, error)

ValuesFor returns the values of a record for the given columns

TODO:

Types

type Builder

type Builder interface {
	ToSQL() (string, []interface{})
	Interpolate() (string, []interface{}, error)
	MustInterpolate() (string, []interface{})
}

Builder interface is used to tie SQL generators to executors.

type DeleteBuilder

type DeleteBuilder struct {
	Execer
	// contains filtered or unexported fields
}

DeleteBuilder contains the clauses for a DELETE statement

func DeleteFrom

func DeleteFrom(table string) *DeleteBuilder

DeleteFrom creates a new DeleteBuilder for the given table.

func NewDeleteBuilder

func NewDeleteBuilder(table string) *DeleteBuilder

NewDeleteBuilder creates a new DeleteBuilder for the given table.

func (*DeleteBuilder) Interpolate

func (b *DeleteBuilder) Interpolate() (string, []interface{}, error)

Interpolate interpolates this builder's SQL.

func (*DeleteBuilder) Limit

func (b *DeleteBuilder) Limit(limit uint64) *DeleteBuilder

Limit sets a LIMIT clause for the statement; overrides any existing LIMIT

func (*DeleteBuilder) MustInterpolate

func (b *DeleteBuilder) MustInterpolate() (string, []interface{})

MustInterpolate interpolates this builder's SQL or panics.

func (*DeleteBuilder) Offset

func (b *DeleteBuilder) Offset(offset uint64) *DeleteBuilder

Offset sets an OFFSET clause for the statement; overrides any existing OFFSET

func (*DeleteBuilder) OrderBy

func (b *DeleteBuilder) OrderBy(ord string) *DeleteBuilder

OrderBy appends an ORDER BY clause to the statement

func (*DeleteBuilder) ToSQL

func (b *DeleteBuilder) ToSQL() (string, []interface{})

ToSQL serializes the DeleteBuilder to a SQL string. It returns the string with placeholders and a slice of query arguments

func (*DeleteBuilder) Where

func (b *DeleteBuilder) Where(whereSqlOrMap interface{}, args ...interface{}) *DeleteBuilder

Where appends a WHERE clause to the statement. whereSqlOrMap can be a string or map. If it's a string, args will replace any placeholders

type Eq

type Eq map[string]interface{}

Eq is a map column -> value pairs which must be matched in a query
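For illustration, a hypothetical helper (`whereFromEq`, not part of dat's API) shows how such a map could render to a WHERE fragment with ordinal placeholders:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// Eq mirrors dat's type: column -> value pairs that must all match.
type Eq map[string]interface{}

// whereFromEq renders an Eq map to "col = $1 AND col = $2" plus args.
// Keys are sorted only to make this sketch deterministic.
func whereFromEq(eq Eq) (string, []interface{}) {
	keys := make([]string, 0, len(eq))
	for k := range eq {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	var conds []string
	var args []interface{}
	for i, k := range keys {
		conds = append(conds, fmt.Sprintf("%s = $%d", k, i+1))
		args = append(args, eq[k])
	}
	return strings.Join(conds, " AND "), args
}

func main() {
	sql, args := whereFromEq(Eq{"state": "open", "id": 13})
	fmt.Println(sql, args) // id = $1 AND state = $2 [13 open]
}
```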

type EventReceiver

type EventReceiver interface {
	Event(eventName string)
	EventKv(eventName string, kvs map[string]string)
	EventErr(eventName string, err error) error
	EventErrKv(eventName string, err error, kvs map[string]string) error
	Timing(eventName string, nanoseconds int64)
	TimingKv(eventName string, nanoseconds int64, kvs map[string]string)
}

EventReceiver gets events from dbr methods for logging purposes

var Events EventReceiver

Events is the event receiver.

type Execer

type Execer interface {
	Exec() (*Result, error)
	QueryScalar(destinations ...interface{}) error
	QuerySlice(dest interface{}) error
	QueryStruct(dest interface{}) error
	QueryStructs(dest interface{}) error
}

Execer is an object that can execute/query a database.

type Expression

type Expression struct {
	Sql    string
	Values []interface{}
}

Expression holds a sub expression.

func Expr

func Expr(sql string, values ...interface{}) *Expression

Expr is a SQL expression with placeholders, and a slice of args to replace them with

type InsertBuilder

type InsertBuilder struct {
	Execer
	// contains filtered or unexported fields
}

InsertBuilder contains the clauses for an INSERT statement

func InsertInto

func InsertInto(table string) *InsertBuilder

InsertInto creates a new InsertBuilder for the given table.

func NewInsertBuilder

func NewInsertBuilder(table string) *InsertBuilder

NewInsertBuilder creates a new InsertBuilder for the given table.

func (*InsertBuilder) Blacklist

func (b *InsertBuilder) Blacklist(columns ...string) *InsertBuilder

Blacklist defines a blacklist of columns and should only be used in conjunction with Record.

func (*InsertBuilder) Columns

func (b *InsertBuilder) Columns(columns ...string) *InsertBuilder

Columns appends columns to insert in the statement

func (*InsertBuilder) Interpolate

func (b *InsertBuilder) Interpolate() (string, []interface{}, error)

Interpolate interpolates this builder's SQL.

func (*InsertBuilder) MustInterpolate

func (b *InsertBuilder) MustInterpolate() (string, []interface{})

MustInterpolate must interpolate or panic.

func (*InsertBuilder) Pair

func (b *InsertBuilder) Pair(column string, value interface{}) *InsertBuilder

Pair adds a key/value pair to the statement

func (*InsertBuilder) Record

func (b *InsertBuilder) Record(record interface{}) *InsertBuilder

Record pulls in values to match Columns from the record

func (*InsertBuilder) Returning

func (b *InsertBuilder) Returning(columns ...string) *InsertBuilder

Returning sets the columns for the RETURNING clause

func (*InsertBuilder) ToSQL

func (b *InsertBuilder) ToSQL() (string, []interface{})

ToSQL serializes the InsertBuilder to a SQL string. It returns the string with placeholders and a slice of query arguments

func (*InsertBuilder) Values

func (b *InsertBuilder) Values(vals ...interface{}) *InsertBuilder

Values appends a set of values to the statement

func (*InsertBuilder) Whitelist

func (b *InsertBuilder) Whitelist(columns ...string) *InsertBuilder

Whitelist defines a whitelist of columns to be inserted. To specify all columns of a record use "*".

type JSONText

type JSONText json.RawMessage

JSONText is a json.RawMessage, which is a []byte underneath. Value() validates the json format in the source, and returns an error if the json is not valid. Scan does no validation. JSONText additionally implements `Unmarshal`, which unmarshals the json within to an interface{}

func (*JSONText) MarshalJSON

func (j *JSONText) MarshalJSON() ([]byte, error)

MarshalJSON returns the *j as the JSON encoding of j.

func (*JSONText) Scan

func (j *JSONText) Scan(src interface{}) error

Scan stores the src in *j. No validation is done.

func (*JSONText) Unmarshal

func (j *JSONText) Unmarshal(v interface{}) error

Unmarshal unmarshal's the json in j to v, as in json.Unmarshal.

func (*JSONText) UnmarshalJSON

func (j *JSONText) UnmarshalJSON(data []byte) error

UnmarshalJSON sets *j to a copy of data

func (JSONText) Value

func (j JSONText) Value() (driver.Value, error)

Value returns j as a value. This does a validating unmarshal into another RawMessage. If j is invalid json, it returns an error.

type LogEventReceiver

type LogEventReceiver struct {
	// contains filtered or unexported fields
}

LogEventReceiver is an EventReceiver that logs events with a prefix

func NewLogEventReceiver

func NewLogEventReceiver(prefix string) *LogEventReceiver

NewLogEventReceiver creates a new LogEventReceiver.

func (*LogEventReceiver) Event

func (ler *LogEventReceiver) Event(eventName string)

Event receives a simple notification when various events occur

func (*LogEventReceiver) EventErr

func (ler *LogEventReceiver) EventErr(eventName string, err error) error

EventErr receives a notification of an error if one occurs

func (*LogEventReceiver) EventErrKv

func (ler *LogEventReceiver) EventErrKv(eventName string, err error, kvs map[string]string) error

EventErrKv receives a notification of an error if one occurs along with optional key/value data

func (*LogEventReceiver) EventKv

func (ler *LogEventReceiver) EventKv(eventName string, kvs map[string]string)

EventKv receives a notification when various events occur along with optional key/value data

func (*LogEventReceiver) Timing

func (ler *LogEventReceiver) Timing(eventName string, nanoseconds int64)

Timing receives the time an event took to happen

func (*LogEventReceiver) TimingKv

func (ler *LogEventReceiver) TimingKv(eventName string, nanoseconds int64, kvs map[string]string)

TimingKv receives the time an event took to happen along with optional key/value data

type NullBool

type NullBool struct {
	sql.NullBool
}

NullBool is a type that can be null or a bool

func (*NullBool) MarshalJSON

func (n *NullBool) MarshalJSON() ([]byte, error)

MarshalJSON correctly serializes a NullBool to JSON

type NullEventReceiver

type NullEventReceiver struct{}

NullEventReceiver is a sentinel EventReceiver; use it if the caller doesn't supply one

func (*NullEventReceiver) Event

func (n *NullEventReceiver) Event(eventName string)

Event receives a simple notification when various events occur

func (*NullEventReceiver) EventErr

func (n *NullEventReceiver) EventErr(eventName string, err error) error

EventErr receives a notification of an error if one occurs

func (*NullEventReceiver) EventErrKv

func (n *NullEventReceiver) EventErrKv(eventName string, err error, kvs map[string]string) error

EventErrKv receives a notification of an error if one occurs along with optional key/value data

func (*NullEventReceiver) EventKv

func (n *NullEventReceiver) EventKv(eventName string, kvs map[string]string)

EventKv receives a notification when various events occur along with optional key/value data

func (*NullEventReceiver) Timing

func (n *NullEventReceiver) Timing(eventName string, nanoseconds int64)

Timing receives the time an event took to happen

func (*NullEventReceiver) TimingKv

func (n *NullEventReceiver) TimingKv(eventName string, nanoseconds int64, kvs map[string]string)

TimingKv receives the time an event took to happen along with optional key/value data

type NullFloat64

type NullFloat64 struct {
	sql.NullFloat64
}

NullFloat64 is a type that can be null or a float64

func (*NullFloat64) MarshalJSON

func (n *NullFloat64) MarshalJSON() ([]byte, error)

MarshalJSON correctly serializes a NullFloat64 to JSON

type NullInt64

type NullInt64 struct {
	sql.NullInt64
}

NullInt64 is a type that can be null or an int

func (*NullInt64) MarshalJSON

func (n *NullInt64) MarshalJSON() ([]byte, error)

MarshalJSON correctly serializes a NullInt64 to JSON

type NullString

type NullString struct {
	sql.NullString
}

NullString is a type that can be null or a string

func (*NullString) MarshalJSON

func (n *NullString) MarshalJSON() ([]byte, error)

MarshalJSON correctly serializes a NullString to JSON

type NullTime

type NullTime struct {
	pq.NullTime
}

NullTime is a type that can be null or a time

func (*NullTime) MarshalJSON

func (n *NullTime) MarshalJSON() ([]byte, error)

MarshalJSON correctly serializes a NullTime to JSON

type PostgresDialect

type PostgresDialect struct{}

PostgresDialect is the PostgreSQL dialect.

func (*PostgresDialect) WriteIdentifier

func (pd *PostgresDialect) WriteIdentifier(buf *bytes.Buffer, ident string)

WriteIdentifier is part of Dialect implementation.

func (*PostgresDialect) WriteStringLiteral

func (pd *PostgresDialect) WriteStringLiteral(buf *bytes.Buffer, val string)

WriteStringLiteral is part of Dialect implementation.

Postgres 9.1+ does not allow any escape sequences in standard string literals by default. See http://www.postgresql.org/docs/9.3/interactive/sql-syntax-lexical.html#SQL-SYNTAX-STRINGS-ESCAPE In short, all backslashes are treated literally, not as escape sequences.

type RawBuilder

type RawBuilder struct {
	Execer
	// contains filtered or unexported fields
}

RawBuilder builds SQL from raw SQL.

func NewRawBuilder

func NewRawBuilder(sql string, args ...interface{}) *RawBuilder

NewRawBuilder creates a new RawBuilder for the given SQL string and arguments

func SQL

func SQL(sql string, args ...interface{}) *RawBuilder

SQL creates a new raw SQL builder.

func (*RawBuilder) Interpolate

func (b *RawBuilder) Interpolate() (string, []interface{}, error)

Interpolate interpolates this builder's SQL.

func (*RawBuilder) MustInterpolate

func (b *RawBuilder) MustInterpolate() (string, []interface{})

MustInterpolate interpolates this builder's SQL or panics.

func (*RawBuilder) ToSQL

func (b *RawBuilder) ToSQL() (string, []interface{})

ToSQL implements the builder interface

type Result

type Result struct {
	LastInsertID int64
	RowsAffected int64
}

Result is Execer result.

type SQLDialect

type SQLDialect interface {
	// WriteStringLiteral writes a string literal.
	WriteStringLiteral(buf *bytes.Buffer, value string)
	// WriteIdentifier writes a quoted identifier such as a column or table.
	WriteIdentifier(buf *bytes.Buffer, column string)
}

SQLDialect represents a vendor specific SQL dialect.

var Dialect SQLDialect = &PostgresDialect{}

Dialect is the active SQLDialect.

type SelectBuilder

type SelectBuilder struct {
	Execer
	// contains filtered or unexported fields
}

SelectBuilder contains the clauses for a SELECT statement

func NewSelectBuilder

func NewSelectBuilder(columns ...string) *SelectBuilder

NewSelectBuilder creates a new SelectBuilder for the given columns

func Select

func Select(columns ...string) *SelectBuilder

Select creates a new SelectBuilder for the given columns.

func (*SelectBuilder) Distinct

func (b *SelectBuilder) Distinct() *SelectBuilder

Distinct marks the statement as a DISTINCT SELECT

func (*SelectBuilder) From

func (b *SelectBuilder) From(from string) *SelectBuilder

From sets the table to SELECT FROM

func (*SelectBuilder) GroupBy

func (b *SelectBuilder) GroupBy(group string) *SelectBuilder

GroupBy appends a column to group the statement

func (*SelectBuilder) Having

func (b *SelectBuilder) Having(whereSqlOrMap interface{}, args ...interface{}) *SelectBuilder

Having appends a HAVING clause to the statement

func (*SelectBuilder) Interpolate

func (b *SelectBuilder) Interpolate() (string, []interface{}, error)

Interpolate interpolates this builder's SQL.

func (*SelectBuilder) Limit

func (b *SelectBuilder) Limit(limit uint64) *SelectBuilder

Limit sets a limit for the statement; overrides any existing LIMIT

func (*SelectBuilder) MustInterpolate

func (b *SelectBuilder) MustInterpolate() (string, []interface{})

MustInterpolate interpolates this builder's SQL or panics.

func (*SelectBuilder) Offset

func (b *SelectBuilder) Offset(offset uint64) *SelectBuilder

Offset sets an offset for the statement; overrides any existing OFFSET

func (*SelectBuilder) OrderBy

func (b *SelectBuilder) OrderBy(ord string) *SelectBuilder

OrderBy appends a column to ORDER the statement by

func (*SelectBuilder) Paginate

func (b *SelectBuilder) Paginate(page, perPage uint64) *SelectBuilder

Paginate sets LIMIT/OFFSET for the statement based on the given page and perPage values, overriding any existing LIMIT/OFFSET. It assumes page and perPage are valid; both must be >= 1.
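Under that precondition, Paginate reduces to the usual arithmetic: page 1 starts at offset 0 and each page is perPage rows wide. A sketch of the mapping it implies (the helper name is ours, not dat's):

```go
package main

import "fmt"

// paginate computes the LIMIT/OFFSET pair that Paginate(page, perPage)
// implies: LIMIT perPage OFFSET (page-1)*perPage.
func paginate(page, perPage uint64) (limit, offset uint64) {
	return perPage, (page - 1) * perPage
}

func main() {
	limit, offset := paginate(3, 25)
	fmt.Printf("LIMIT %d OFFSET %d\n", limit, offset) // LIMIT 25 OFFSET 50
}
```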

func (*SelectBuilder) ToSQL

func (b *SelectBuilder) ToSQL() (string, []interface{})

ToSQL serializes the SelectBuilder to a SQL string. It returns the string with placeholders and a slice of query arguments

func (*SelectBuilder) Where

func (b *SelectBuilder) Where(whereSqlOrMap interface{}, args ...interface{}) *SelectBuilder

Where appends a WHERE clause to the statement, built from the given string and args or from a map of column/value pairs
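For the map form, each column/value pair becomes an equality condition joined with AND. A stdlib-only sketch of how such a map can expand into a clause with ordinal placeholders (columns are sorted here for a deterministic result; dat's actual expansion may differ):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// whereFromMap expands a column/value map into an AND-joined equality
// clause with $1-style placeholders plus the matching argument slice.
func whereFromMap(m map[string]interface{}) (string, []interface{}) {
	cols := make([]string, 0, len(m))
	for c := range m {
		cols = append(cols, c)
	}
	sort.Strings(cols) // maps are unordered; sort for a stable clause

	parts := make([]string, len(cols))
	args := make([]interface{}, len(cols))
	for i, c := range cols {
		parts[i] = fmt.Sprintf("%s = $%d", c, i+1)
		args[i] = m[c]
	}
	return strings.Join(parts, " AND "), args
}

func main() {
	clause, args := whereFromMap(map[string]interface{}{
		"user_id": 5,
		"state":   "published",
	})
	fmt.Println(clause, args) // state = $1 AND user_id = $2 [published 5]
}
```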

type UnsafeString

type UnsafeString string

UnsafeString is interpolated as an unescaped and unquoted value and should only be used to create constants.

func (UnsafeString) Value

func (u UnsafeString) Value() (driver.Value, error)

Value implements driver.Valuer for compatibility

type UpdateBuilder

type UpdateBuilder struct {
	Execer
	// contains filtered or unexported fields
}

UpdateBuilder contains the clauses for an UPDATE statement

func NewUpdateBuilder

func NewUpdateBuilder(table string) *UpdateBuilder

NewUpdateBuilder creates a new UpdateBuilder for the given table

func Update

func Update(table string) *UpdateBuilder

Update creates a new UpdateBuilder for the given table.

func (*UpdateBuilder) Interpolate

func (b *UpdateBuilder) Interpolate() (string, []interface{}, error)

Interpolate interpolates this builder's SQL.

func (*UpdateBuilder) Limit

func (b *UpdateBuilder) Limit(limit uint64) *UpdateBuilder

Limit sets a limit for the statement; overrides any existing LIMIT

func (*UpdateBuilder) MustInterpolate

func (b *UpdateBuilder) MustInterpolate() (string, []interface{})

MustInterpolate interpolates this builder's SQL or panics.

func (*UpdateBuilder) Offset

func (b *UpdateBuilder) Offset(offset uint64) *UpdateBuilder

Offset sets an offset for the statement; overrides any existing OFFSET

func (*UpdateBuilder) OrderBy

func (b *UpdateBuilder) OrderBy(ord string) *UpdateBuilder

OrderBy appends a column to ORDER the statement by

func (*UpdateBuilder) Returning

func (b *UpdateBuilder) Returning(columns ...string) *UpdateBuilder

Returning sets the columns for the RETURNING clause

func (*UpdateBuilder) Set

func (b *UpdateBuilder) Set(column string, value interface{}) *UpdateBuilder

Set appends a column/value pair for the statement

func (*UpdateBuilder) SetBlacklist

func (b *UpdateBuilder) SetBlacklist(rec interface{}, columns ...string) *UpdateBuilder

SetBlacklist creates SET clause(s) using a record and blacklist of columns

func (*UpdateBuilder) SetMap

func (b *UpdateBuilder) SetMap(clauses map[string]interface{}) *UpdateBuilder

SetMap appends the elements of the map as column/value pairs for the statement

func (*UpdateBuilder) SetWhitelist

func (b *UpdateBuilder) SetWhitelist(rec interface{}, columns ...string) *UpdateBuilder

SetWhitelist creates SET clause(s) using a record and whitelist of columns. To specify all columns, use "*".
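SetWhitelist and SetBlacklist both map a record's `db`-tagged fields to SET pairs, differing only in whether the column list keeps or drops matches. A reflection-based sketch of the whitelist side (the helper and its output format are illustrative; dat builds the clause internally):

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

// setWhitelist builds SET pairs from a struct's `db` tags, keeping only
// whitelisted columns; "*" keeps every tagged field.
func setWhitelist(rec interface{}, columns ...string) (string, []interface{}) {
	all := len(columns) == 1 && columns[0] == "*"
	keep := map[string]bool{}
	for _, c := range columns {
		keep[c] = true
	}

	v := reflect.ValueOf(rec)
	t := v.Type()
	var parts []string
	var args []interface{}
	for i := 0; i < t.NumField(); i++ {
		col := t.Field(i).Tag.Get("db")
		if col == "" || (!all && !keep[col]) {
			continue // untagged or not whitelisted
		}
		parts = append(parts, fmt.Sprintf("%s = $%d", col, len(args)+1))
		args = append(args, v.Field(i).Interface())
	}
	return strings.Join(parts, ", "), args
}

type Post struct {
	Title string `db:"title"`
	Body  string `db:"body"`
}

func main() {
	set, args := setWhitelist(Post{Title: "hi", Body: "..."}, "title")
	fmt.Println(set, args) // title = $1 [hi]
}
```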

func (*UpdateBuilder) ToSQL

func (b *UpdateBuilder) ToSQL() (string, []interface{})

ToSQL serializes the UpdateBuilder to a SQL string. It returns the string with placeholders and a slice of query arguments

func (*UpdateBuilder) Where

func (b *UpdateBuilder) Where(whereSqlOrMap interface{}, args ...interface{}) *UpdateBuilder

Where appends a WHERE clause to the statement
