duckdb

package module
v1.7.3
Published: Jul 12, 2024 License: MIT Imports: 20 Imported by: 0

README

Go SQL driver for DuckDB

The DuckDB driver conforms to the built-in database/sql interface.


Installation

go get github.com/marcboeker/go-duckdb

go-duckdb uses CGO to make calls to DuckDB. You must build your binaries with CGO_ENABLED=1.

Usage

go-duckdb hooks into the database/sql interface provided by the Go stdlib. To open a connection, simply specify the driver type as duckdb.

db, err := sql.Open("duckdb", "")
if err != nil {
    ...
}
defer db.Close()

This creates an in-memory instance of DuckDB. To open a persistent database, you need to specify a filepath to the database file. If the file does not exist, then DuckDB creates it.

db, err := sql.Open("duckdb", "/path/to/foo.db")
if err != nil {
	...
}
defer db.Close()

If you want to set specific config options for DuckDB, you can add them to the DSN as query-style parameters in the form of name=value pairs.

db, err := sql.Open("duckdb", "/path/to/foo.db?access_mode=read_only&threads=4")
if err != nil {
    ...
}
defer db.Close()

Alternatively, you can use sql.OpenDB. That way, you can perform initialization steps in a callback function before opening the database. Here's an example that installs and loads the JSON extension when opening a database with sql.OpenDB(connector).

connector, err := duckdb.NewConnector("/path/to/foo.db?access_mode=read_only&threads=4", func(execer driver.ExecerContext) error {
    bootQueries := []string{
        "INSTALL 'json'",
        "LOAD 'json'",
    }

    for _, query := range bootQueries {
        _, err := execer.ExecContext(context.Background(), query, nil)
        if err != nil {
            ...
        }
    }
    return nil
})
if err != nil {
    ...
}

db := sql.OpenDB(connector)
defer db.Close()

Please refer to the database/sql documentation for further usage instructions.
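
For a quick orientation, here is a minimal sketch of a round-trip through database/sql using an already opened db. The table, column names, and values are illustrative.

_, err = db.Exec("CREATE TABLE people (id INTEGER, name VARCHAR)")
if err != nil {
    ...
}

_, err = db.Exec("INSERT INTO people VALUES (?, ?)", 1, "Ada")
if err != nil {
    ...
}

var id int
var name string
err = db.QueryRow("SELECT id, name FROM people WHERE id = ?", 1).Scan(&id, &name)
if err != nil {
    ...
}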

Memory Allocation

DuckDB runs in-process, so all of its allocations live in the host process, which is the Go application. Especially for long-running applications, it is crucial to call the corresponding Close functions as specified in database/sql. The following is a list of examples.

db, err := sql.Open("duckdb", "")
defer db.Close()

conn, err := db.Conn(context.Background())
defer conn.Close()

rows, err := conn.QueryContext(context.Background(), "SELECT 42")
// alternatively, iterate with rows.Next() until it returns false
rows.Close()

appender, err := NewAppenderFromConn(conn, "", "test")
defer appender.Close()

// if not passed to sql.OpenDB
connector, err := NewConnector("", nil)
defer connector.Close()

DuckDB Appender API

If you want to use the DuckDB Appender API, you can obtain a new Appender by passing a DuckDB connection to NewAppenderFromConn().

connector, err := duckdb.NewConnector("test.db", nil)
if err != nil {
	...
}
defer connector.Close()

conn, err := connector.Connect(context.Background())
if err != nil {
	...
}
defer conn.Close()

// obtain an appender from the connection
// NOTE: the table 'test_tbl' must exist in test.db
appender, err := duckdb.NewAppenderFromConn(conn, "", "test_tbl")
if err != nil {
	...
}
defer appender.Close()

err = appender.AppendRow(...)
if err != nil {
	...
}

DuckDB Apache Arrow Interface

If you want to use the DuckDB Arrow Interface, you can obtain a new Arrow by passing a DuckDB connection to NewArrowFromConn().

connector, err := duckdb.NewConnector("", nil)
if err != nil {
	...
}
defer connector.Close()

conn, err := connector.Connect(context.Background())
if err != nil {
	...
}
defer conn.Close()

// obtain the Arrow from the connection
arrow, err := duckdb.NewArrowFromConn(conn)
if err != nil {
	...
}

rdr, err := arrow.QueryContext(context.Background(), "SELECT * FROM generate_series(1, 10)")
if err != nil {
	...
}
defer rdr.Release()

for rdr.Next() {
  // process records
}

The Arrow interface is a heavy dependency. If you do not need it, you can disable it by passing -tags=no_duckdb_arrow to go build. This will be made opt-in in V2.

go build -tags="no_duckdb_arrow"

Vendoring

If you want to vendor a module containing go-duckdb, please use modvendor to include the missing header files and libraries. See issue #174 for more details.

  1. go install github.com/goware/modvendor@latest
  2. go mod vendor
  3. modvendor -copy="**/*.a **/*.h" -v

Now you can build your module as usual.

Linking DuckDB

By default, go-duckdb statically links DuckDB into your binary. Statically linking DuckDB adds around 30 MB to your binary size. On Linux (Intel) and macOS (Intel and ARM), go-duckdb bundles pre-compiled static libraries for fast builds.

Alternatively, you can dynamically link DuckDB by passing -tags=duckdb_use_lib to go build. You must have a copy of libduckdb available on your system (.so on Linux or .dylib on macOS), which you can download from the DuckDB releases page. For example:

# On Linux
CGO_ENABLED=1 CGO_LDFLAGS="-L/path/to/libs" go build -tags=duckdb_use_lib main.go
LD_LIBRARY_PATH=/path/to/libs ./main

# On macOS
CGO_ENABLED=1 CGO_LDFLAGS="-L/path/to/libs" go build -tags=duckdb_use_lib main.go
DYLD_LIBRARY_PATH=/path/to/libs ./main

Notes

TIMESTAMP vs. TIMESTAMP_TZ

In the C API, DuckDB stores both TIMESTAMP and TIMESTAMP_TZ as duckdb_timestamp, which holds the number of microseconds elapsed since January 1, 1970 UTC (i.e., an instant without offset information). When passing a time.Time to go-duckdb, go-duckdb transforms it to an instant with UnixMicro(), even when using TIMESTAMP_TZ. Later, scanning either type of value returns an instant, as SQL types do not model time zone information for individual values.
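
As a sketch of this behavior, using an illustrative table name, a time.Time written to either column type is read back as the same instant (to microsecond precision), while the original time.Location is not preserved:

_, err = db.Exec("CREATE TABLE ts_demo (plain TIMESTAMP, with_tz TIMESTAMPTZ)")
if err != nil {
    ...
}

now := time.Now()
_, err = db.Exec("INSERT INTO ts_demo VALUES (?, ?)", now, now)
if err != nil {
    ...
}

var plain, withTZ time.Time
err = db.QueryRow("SELECT plain, with_tz FROM ts_demo").Scan(&plain, &withTZ)
if err != nil {
    ...
}
// plain and withTZ denote the same instant as now (truncated to microseconds),
// regardless of the time zone now was created in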

Documentation

Overview

Package duckdb implements a database/sql driver for the DuckDB database.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func GetDataChunkCapacity

func GetDataChunkCapacity() int

GetDataChunkCapacity returns the capacity of a data chunk.

func RegisterReplacementScan

func RegisterReplacementScan(connector *Connector, cb ReplacementScanCallback)
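
As a rough sketch of how RegisterReplacementScan can be used, assuming the callback returns a table-function name and its parameters (the function "range" and its argument are illustrative), a replacement scan redirects a query over an unknown table name to a table function:

connector, err := duckdb.NewConnector("", nil)
if err != nil {
	...
}
duckdb.RegisterReplacementScan(connector, func(tableName string) (string, []any, error) {
	return "range", []any{int64(100)}, nil
})

db := sql.OpenDB(connector)
defer db.Close()

// "no_such_table" is not in the catalog, so the callback resolves it to range(100)
rows, err := db.Query("SELECT * FROM no_such_table")
if err != nil {
	...
}
defer rows.Close()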

Types

type Appender

type Appender struct {
	// contains filtered or unexported fields
}

Appender holds the DuckDB appender. It allows efficient bulk loading into a DuckDB database.

func NewAppenderFromConn

func NewAppenderFromConn(driverConn driver.Conn, schema, table string) (*Appender, error)

NewAppenderFromConn returns a new Appender from a DuckDB driver connection.

func (*Appender) AppendRow

func (a *Appender) AppendRow(args ...driver.Value) error

AppendRow loads a row of values into the appender. The values are provided as separate arguments.

func (*Appender) Close

func (a *Appender) Close() error

Close the appender. This will flush the appender to the underlying table. It is vital to call this when you are done with the appender to avoid leaking memory.

func (*Appender) Flush

func (a *Appender) Flush() error

Flush the data chunks to the underlying table and clear the internal cache. Does not close the appender, even if it returns an error. Unless you have a good reason to call this, call Close when you are done with the appender.
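
As a rough sketch, an appender is typically used for bulk loads, optionally flushing between batches; the table name and column types (an INTEGER and a VARCHAR column) are illustrative:

appender, err := duckdb.NewAppenderFromConn(conn, "", "test_tbl")
if err != nil {
	...
}
defer appender.Close()

for i := 0; i < 100000; i++ {
	if err := appender.AppendRow(int32(i), "row"); err != nil {
		...
	}
	// optional: flush completed batches to the table early
	if i%10000 == 0 {
		if err := appender.Flush(); err != nil {
			...
		}
	}
}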

type Arrow

type Arrow struct {
	// contains filtered or unexported fields
}

Arrow exposes DuckDB Apache Arrow interface. https://duckdb.org/docs/api/c/api#arrow-interface

func NewArrowFromConn

func NewArrowFromConn(driverConn driver.Conn) (*Arrow, error)

NewArrowFromConn returns a new Arrow from a DuckDB driver connection.

func (*Arrow) QueryContext

func (a *Arrow) QueryContext(ctx context.Context, query string, args ...any) (array.RecordReader, error)

QueryContext prepares the given statements, executes them, and returns an Apache Arrow array.RecordReader holding the result of the last executed statement. Arguments are bound to the last statement.

type Composite

type Composite[T any] struct {
	// contains filtered or unexported fields
}

Use Composite as the `Scanner` type for composite types such as maps, lists, and structs.

func (Composite[T]) Get

func (s Composite[T]) Get() T

func (*Composite[T]) Scan

func (s *Composite[T]) Scan(v any) error
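
A minimal sketch of scanning a nested value through Composite; the query is illustrative, and the element types of the resulting slice depend on the DuckDB column type:

var list duckdb.Composite[[]any]
err := db.QueryRow("SELECT [1, 2, 3]").Scan(&list)
if err != nil {
	...
}
values := list.Get() // a []any holding the three list elements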

type Connector

type Connector struct {
	// contains filtered or unexported fields
}

func NewConnector

func NewConnector(dsn string, connInitFn func(execer driver.ExecerContext) error) (*Connector, error)

NewConnector opens a new Connector for a DuckDB database. The user must close the Connector if it is not passed to sql.OpenDB. Otherwise, sql.DB closes the Connector when calling sql.DB.Close().

func (*Connector) Close

func (c *Connector) Close() error

func (*Connector) Connect

func (c *Connector) Connect(context.Context) (driver.Conn, error)

func (*Connector) Driver

func (*Connector) Driver() driver.Driver

type DataChunk

type DataChunk struct {
	// contains filtered or unexported fields
}

DataChunk storage of a DuckDB table.

func (*DataChunk) GetSize

func (chunk *DataChunk) GetSize() int

GetSize returns the internal size of the data chunk.

func (*DataChunk) GetValue

func (chunk *DataChunk) GetValue(colIdx int, rowIdx int) (any, error)

GetValue returns a single value of a column.

func (*DataChunk) SetSize

func (chunk *DataChunk) SetSize(size int) error

SetSize sets the internal size of the data chunk. The size cannot exceed the data chunk capacity (see GetDataChunkCapacity).

func (*DataChunk) SetValue

func (chunk *DataChunk) SetValue(colIdx int, rowIdx int, val any) error

SetValue writes a single value to a column in a data chunk. Note that this requires casting the type on each invocation. Custom ENUM types must be passed as string.

type Decimal

type Decimal struct {
	Width uint8
	Scale uint8
	Value *big.Int
}

func (*Decimal) Float64

func (d *Decimal) Float64() float64
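
DECIMAL columns are returned by the driver as this type. As a sketch, assuming the value arrives as a duckdb.Decimal when scanned into an interface value (the literal is illustrative):

var v any
err := db.QueryRow("SELECT 123.45::DECIMAL(5, 2)").Scan(&v)
if err != nil {
	...
}
if d, ok := v.(duckdb.Decimal); ok {
	_ = d.Float64() // approximately 123.45
}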

type Driver

type Driver struct{}

func (Driver) Open

func (d Driver) Open(dsn string) (driver.Conn, error)

func (Driver) OpenConnector

func (Driver) OpenConnector(dsn string) (driver.Connector, error)

type Error

type Error struct {
	Type ErrorType
	Msg  string
}

func (*Error) Error

func (e *Error) Error() string

func (*Error) Is

func (e *Error) Is(err error) bool

type ErrorType

type ErrorType int
const (
	ErrorTypeInvalid              ErrorType = iota // invalid type
	ErrorTypeOutOfRange                            // value out of range error
	ErrorTypeConversion                            // conversion/casting error
	ErrorTypeUnknownType                           // unknown type error
	ErrorTypeDecimal                               // decimal related
	ErrorTypeMismatchType                          // type mismatch
	ErrorTypeDivideByZero                          // divide by 0
	ErrorTypeObjectSize                            // object size exceeded
	ErrorTypeInvalidType                           // incompatible for operation
	ErrorTypeSerialization                         // serialization
	ErrorTypeTransaction                           // transaction management
	ErrorTypeNotImplemented                        // method not implemented
	ErrorTypeExpression                            // expression parsing
	ErrorTypeCatalog                               // catalog related
	ErrorTypeParser                                // parser related
	ErrorTypePlanner                               // planner related
	ErrorTypeScheduler                             // scheduler related
	ErrorTypeExecutor                              // executor related
	ErrorTypeConstraint                            // constraint related
	ErrorTypeIndex                                 // index related
	ErrorTypeStat                                  // stat related
	ErrorTypeConnection                            // connection related
	ErrorTypeSyntax                                // syntax related
	ErrorTypeSettings                              // settings related
	ErrorTypeBinder                                // binder related
	ErrorTypeNetwork                               // network related
	ErrorTypeOptimizer                             // optimizer related
	ErrorTypeNullPointer                           // nullptr exception
	ErrorTypeIO                                    // IO exception
	ErrorTypeInterrupt                             // interrupt
	ErrorTypeFatal                                 // Fatal exceptions are non-recoverable, and render the entire DB in an unusable state
	ErrorTypeInternal                              // Internal exceptions indicate something went wrong internally (i.e. bug in the code base)
	ErrorTypeInvalidInput                          // Input or arguments error
	ErrorTypeOutOfMemory                           // out of memory
	ErrorTypePermission                            // insufficient permissions
	ErrorTypeParameterNotResolved                  // parameter types could not be resolved
	ErrorTypeParameterNotAllowed                   // parameter types not allowed
	ErrorTypeDependency                            // dependency
	ErrorTypeHTTP
	ErrorTypeMissingExtension // Thrown when an extension is used but not loaded
	ErrorTypeAutoLoad         // Thrown when an extension is used but not loaded
	ErrorTypeSequence
)
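
A sketch of inspecting the error type behind a failed statement with errors.As; the failing query and the expected ErrorTypeCatalog are illustrative assumptions:

_, err := db.Exec("SELECT * FROM no_such_table")

var duckErr *duckdb.Error
if errors.As(err, &duckErr) && duckErr.Type == duckdb.ErrorTypeCatalog {
	// the table does not exist
}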

type Interval

type Interval struct {
	Days   int32 `json:"days"`
	Months int32 `json:"months"`
	Micros int64 `json:"micros"`
}
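
As a sketch, assuming INTERVAL values arrive as duckdb.Interval when scanned into an interface value (the literal is illustrative):

var v any
err := db.QueryRow("SELECT INTERVAL 5 DAY").Scan(&v)
if err != nil {
	...
}
if iv, ok := v.(duckdb.Interval); ok {
	_ = iv.Days // 5
}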

type Map

type Map map[any]any

func (*Map) Scan

func (m *Map) Scan(v any) error
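
Map implements sql.Scanner, so a DuckDB MAP can be scanned directly; the map literal is illustrative:

var m duckdb.Map
err := db.QueryRow("SELECT map(['foo', 'bar'], [1, 2])").Scan(&m)
if err != nil {
	...
}
// m is a map[any]any; the Go key and value types depend on the DuckDB column types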

type ReplacementScanCallback

type ReplacementScanCallback func(tableName string) (string, []any, error)

type UUID

type UUID [16]byte

func (*UUID) Scan

func (u *UUID) Scan(v any) error
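
UUID also implements sql.Scanner; the uuid() call is illustrative:

var u duckdb.UUID
err := db.QueryRow("SELECT uuid()").Scan(&u)
if err != nil {
	...
}
// u holds the 16 raw bytes of the UUID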

Directories

Path Synopsis
deps
alpine_amd64
Package alpine_amd64 is required to provide support for vendoring modules DO NOT REMOVE
darwin_amd64
Package darwin_amd64 is required to provide support for vendoring modules DO NOT REMOVE
darwin_arm64
Package darwin_arm64 is required to provide support for vendoring modules DO NOT REMOVE
freebsd_amd64
Package freebsd_amd64 is required to provide support for vendoring modules DO NOT REMOVE
linux_amd64
Package linux_amd64 is required to provide support for vendoring modules DO NOT REMOVE
linux_arm64
Package linux_arm64 is required to provide support for vendoring modules DO NOT REMOVE
