Documentation ¶
Overview ¶
Package duckdb implements a database/sql driver for the DuckDB database.
Example (SimpleConnection) ¶
// Connect to DuckDB using '[database/sql.Open]'.
db, err := sql.Open("duckdb", "?access_mode=READ_WRITE")
checkErr(err, "failed to open connection to duckdb: %s")
defer db.Close()

ctx := context.Background()

createStmt := `CREATE table users(name VARCHAR, age INTEGER)`
_, err = db.ExecContext(ctx, createStmt)
checkErr(err, "failed to create table: %s")

insertStmt := `INSERT INTO users(name, age) VALUES (?, ?);`
res, err := db.ExecContext(ctx, insertStmt, "Marc", 30)
checkErr(err, "failed to insert users: %s")

rowsAffected, err := res.RowsAffected()
checkErr(err, "failed to get number of rows affected")
fmt.Printf("Inserted %d row(s) into users table", rowsAffected)
Output: Inserted 1 row(s) into users table
Index ¶
- func GetDataChunkCapacity() int
- func RegisterReplacementScan(connector *Connector, cb ReplacementScanCallback)
- func RegisterScalarUDF(c *sql.Conn, name string, f ScalarFunc) error
- func RegisterScalarUDFSet(c *sql.Conn, name string, functions ...ScalarFunc) error
- func RegisterTableUDF[TFT TableFunction](c *sql.Conn, name string, f TFT) error
- func SetChunkValue[T any](chunk DataChunk, colIdx int, rowIdx int, val T) error
- func SetRowValue[T any](row Row, colIdx int, val T) error
- type Appender
- type Arrow
- type CardinalityInfo
- type ChunkTableFunction
- type ChunkTableSource
- type ColumnInfo
- type Composite
- type Connector
- type DataChunk
- type Decimal
- type Driver
- type Error
- type ErrorType
- type Interval
- type Map
- type ParallelChunkTableFunction
- type ParallelChunkTableSource
- type ParallelRowTableFunction
- type ParallelRowTableSource
- type ParallelTableSourceInfo
- type ProfilingInfo
- type ReplacementScanCallback
- type Row
- type RowTableFunction
- type RowTableSource
- type ScalarFunc
- type ScalarFuncConfig
- type ScalarFuncExecutor
- type StructEntry
- type TableFunction
- type TableFunctionConfig
- type Type
- type TypeInfo
- func NewDecimalInfo(width uint8, scale uint8) (TypeInfo, error)
- func NewEnumInfo(first string, others ...string) (TypeInfo, error)
- func NewListInfo(childInfo TypeInfo) (TypeInfo, error)
- func NewMapInfo(keyInfo TypeInfo, valueInfo TypeInfo) (TypeInfo, error)
- func NewStructInfo(firstEntry StructEntry, others ...StructEntry) (TypeInfo, error)
- func NewTypeInfo(t Type) (TypeInfo, error)
- type UUID
Examples ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func GetDataChunkCapacity ¶
func GetDataChunkCapacity() int
GetDataChunkCapacity returns the capacity of a data chunk.
func RegisterReplacementScan ¶
func RegisterReplacementScan(connector *Connector, cb ReplacementScanCallback)
func RegisterScalarUDF ¶
func RegisterScalarUDF(c *sql.Conn, name string, f ScalarFunc) error
RegisterScalarUDF registers a user-defined scalar function. *sql.Conn is the SQL connection on which to register the scalar function. name is the function name, and f is the scalar function's interface ScalarFunc. RegisterScalarUDF takes ownership of f, so you must pass it as a pointer.
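A minimal sketch of implementing and registering a scalar function. The myMultiply type, the my_multiply SQL name, and the db handle are illustrative; checkErr is the helper used in the examples above.

// myMultiply implements ScalarFunc: my_multiply(BIGINT, BIGINT) -> BIGINT.
type myMultiply struct{}

func (*myMultiply) Config() ScalarFuncConfig {
    bigint, err := NewTypeInfo(TYPE_BIGINT)
    checkErr(err, "failed to create BIGINT type info: %s")
    return ScalarFuncConfig{
        InputTypeInfos: []TypeInfo{bigint, bigint},
        ResultTypeInfo: bigint,
    }
}

func (*myMultiply) Executor() ScalarFuncExecutor {
    return ScalarFuncExecutor{
        RowExecutor: func(values []driver.Value) (any, error) {
            left, lOk := values[0].(int64)
            right, rOk := values[1].(int64)
            if !lOk || !rOk {
                return nil, errors.New("my_multiply: expected BIGINT inputs")
            }
            return left * right, nil
        },
    }
}

// Register the function on a dedicated connection and call it from SQL.
ctx := context.Background()
conn, err := db.Conn(ctx)
checkErr(err, "failed to get connection: %s")
defer conn.Close()

err = RegisterScalarUDF(conn, "my_multiply", &myMultiply{})
checkErr(err, "failed to register scalar UDF: %s")

var product int64
err = conn.QueryRowContext(ctx, `SELECT my_multiply(6, 7)`).Scan(&product)
checkErr(err, "failed to query scalar UDF: %s")
fmt.Printf("Result is %d", product)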
func RegisterScalarUDFSet ¶
func RegisterScalarUDFSet(c *sql.Conn, name string, functions ...ScalarFunc) error
RegisterScalarUDFSet registers a set of user-defined scalar functions with the same name. This enables overloading of scalar functions. E.g., the function my_length() can have implementations like my_length(LIST(ANY)) and my_length(VARCHAR). *sql.Conn is the SQL connection on which to register the scalar function set. name is the function name of each function in the set. functions contains all ScalarFunc functions of the scalar function set.
func RegisterTableUDF ¶
func RegisterTableUDF[TFT TableFunction](c *sql.Conn, name string, f TFT) error
RegisterTableUDF registers a user-defined table function. Projection pushdown is enabled by default.
func SetChunkValue ¶
SetChunkValue writes a single value to a column in a data chunk. The difference from `chunk.SetValue` is that `SetChunkValue` does not require (implicitly) casting the value to `any`. NOTE: Custom ENUM types must be passed as strings.
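As a hedged illustration, SetChunkValue is typically called from a ChunkTableSource's FillChunk. The chunkSource type below is hypothetical, and a complete implementation must also report how many rows it wrote and when it is done producing chunks, which is omitted here.

func (src *chunkSource) FillChunk(chunk DataChunk) error {
    // Fill column 0 with BIGINT values, up to the chunk capacity.
    for rowIdx := 0; rowIdx < GetDataChunkCapacity(); rowIdx++ {
        // Unlike chunk.SetValue, no cast to `any` is required here.
        if err := SetChunkValue(chunk, 0, rowIdx, int64(rowIdx)); err != nil {
            return err
        }
    }
    // A complete implementation also records how many rows it wrote (omitted in this sketch).
    return nil
}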
Types ¶
type Appender ¶
type Appender struct {
// contains filtered or unexported fields
}
Appender holds the DuckDB appender. It allows efficient bulk loading into a DuckDB database.
func NewAppenderFromConn ¶
NewAppenderFromConn returns a new Appender from a DuckDB driver connection.
func (*Appender) AppendRow ¶
AppendRow loads a row of values into the appender. The values are provided as separate arguments.
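A hedged sketch of bulk loading with the Appender. The schema/table argument order of NewAppenderFromConn and the Close call are assumptions not spelled out in this excerpt; checkErr is the helper used in the examples above.

connector, err := NewConnector("", nil)
checkErr(err, "failed to create connector: %s")

db := sql.OpenDB(connector)
defer db.Close()

_, err = db.Exec(`CREATE TABLE users (name VARCHAR, age INTEGER)`)
checkErr(err, "failed to create table: %s")

conn, err := connector.Connect(context.Background())
checkErr(err, "failed to connect: %s")
defer conn.Close()

// Assumed signature: NewAppenderFromConn(driverConn, schema, table).
appender, err := NewAppenderFromConn(conn, "", "users")
checkErr(err, "failed to create appender: %s")

err = appender.AppendRow("Marc", int32(30))
checkErr(err, "failed to append row: %s")

// Close is assumed to flush the buffered rows into the users table.
err = appender.Close()
checkErr(err, "failed to close appender: %s")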
type Arrow ¶
type Arrow struct {
// contains filtered or unexported fields
}
Arrow exposes the DuckDB Apache Arrow interface. https://duckdb.org/docs/api/c/api#arrow-interface
func NewArrowFromConn ¶
NewArrowFromConn returns a new Arrow from a DuckDB driver connection.
func (*Arrow) QueryContext ¶
func (a *Arrow) QueryContext(ctx context.Context, query string, args ...any) (array.RecordReader, error)
QueryContext prepares and executes the given statements, and returns an Apache Arrow array.RecordReader for the last executed statement. Arguments are bound to the last statement.
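A hedged sketch of querying through the Arrow interface. It assumes NewArrowFromConn accepts a driver connection (for example one obtained from a Connector); note that Arrow support may require a dedicated build tag in some versions.

connector, err := NewConnector("", nil)
checkErr(err, "failed to create connector: %s")
defer connector.Close()

conn, err := connector.Connect(context.Background())
checkErr(err, "failed to connect: %s")
defer conn.Close()

ar, err := NewArrowFromConn(conn)
checkErr(err, "failed to create arrow interface: %s")

reader, err := ar.QueryContext(context.Background(), `SELECT range AS n FROM range(?)`, 10)
checkErr(err, "failed to query: %s")
defer reader.Release()

for reader.Next() {
    record := reader.Record()
    fmt.Println(record.NumRows(), record.NumCols())
}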
func (*Arrow) RegisterView ¶
func (a *Arrow) RegisterView(reader array.RecordReader, name string) (release func(), err error)
RegisterView registers an Arrow record reader as a view with the given name in DuckDB. The returned release function must be called to release the memory once the view is no longer needed.
type CardinalityInfo ¶
type CardinalityInfo struct {
    // Cardinality is the absolute cardinality.
    Cardinality uint
    // Exact indicates whether the cardinality is exact.
    Exact bool
}
CardinalityInfo contains the cardinality of a (table) function. If it is impossible or difficult to determine the exact cardinality, an approximate cardinality may be used.
type ChunkTableFunction ¶
type ChunkTableFunction = tableFunction[ChunkTableSource]
A ChunkTableFunction is a type which can be bound to return a ChunkTableSource.
type ChunkTableSource ¶
type ChunkTableSource interface {
    // FillChunk takes a DataChunk and fills it with values.
    FillChunk(DataChunk) error
    // contains filtered or unexported methods
}
A ChunkTableSource represents anything that produces rows in a vectorised way. The cardinality is requested before function initialization. After initializing the ChunkTableSource, go-duckdb requests the rows. It sequentially calls the FillChunk method with a single thread.
type ColumnInfo ¶
ColumnInfo contains the metadata of a column.
type Composite ¶
type Composite[T any] struct {
    // contains filtered or unexported fields
}
Use Composite as the `Scanner` type for any composite type (maps, lists, structs).
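A hedged sketch of scanning a LIST value with Composite, reusing the db handle from the overview example. The Get method used to retrieve the decoded value is an assumption not shown in this excerpt.

var names Composite[[]string]
row := db.QueryRow(`SELECT ['Jane', 'Marc'] AS names`)
checkErr(row.Scan(&names), "failed to scan names: %s")
// Get is assumed to return the decoded Go value, here a []string.
fmt.Println(names.Get())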
type Connector ¶
type Connector struct {
// contains filtered or unexported fields
}
func NewConnector ¶
func NewConnector(dsn string, connInitFn func(execer driver.ExecerContext) error) (*Connector, error)
NewConnector opens a new Connector for a DuckDB database. The user must close the Connector if it is not passed to the sql.OpenDB function. Otherwise, sql.DB closes the Connector when calling sql.DB.Close().
Example ¶
c, err := NewConnector("duckdb?access_mode=READ_WRITE", func(execer driver.ExecerContext) error {
    initQueries := []string{
        `SET memory_limit = '10GB';`,
        `SET threads TO 1;`,
    }

    ctx := context.Background()
    for _, query := range initQueries {
        _, err := execer.ExecContext(ctx, query, nil)
        if err != nil {
            return err
        }
    }
    return nil
})
checkErr(err, "failed to create new duckdb connector: %s")
defer c.Close()

db := sql.OpenDB(c)
defer db.Close()

var value string
row := db.QueryRow(`SELECT value FROM duckdb_settings() WHERE name = 'memory_limit';`)
if err = row.Scan(&value); err != nil {
    log.Fatalf("failed to scan row: %s", err)
}

fmt.Printf("Setting memory_limit is %s", value)
Output: Setting memory_limit is 9.3 GiB
type DataChunk ¶
type DataChunk struct {
// contains filtered or unexported fields
}
DataChunk represents a chunk of a DuckDB table's storage.
type ErrorType ¶
type ErrorType int
const (
    ErrorTypeInvalid ErrorType = iota // invalid type
    ErrorTypeOutOfRange               // value out of range error
    ErrorTypeConversion               // conversion/casting error
    ErrorTypeUnknownType              // unknown type error
    ErrorTypeDecimal                  // decimal related
    ErrorTypeMismatchType             // type mismatch
    ErrorTypeDivideByZero             // divide by 0
    ErrorTypeObjectSize               // object size exceeded
    ErrorTypeInvalidType              // incompatible for operation
    ErrorTypeSerialization            // serialization
    ErrorTypeTransaction              // transaction management
    ErrorTypeNotImplemented           // method not implemented
    ErrorTypeExpression               // expression parsing
    ErrorTypeCatalog                  // catalog related
    ErrorTypeParser                   // parser related
    ErrorTypePlanner                  // planner related
    ErrorTypeScheduler                // scheduler related
    ErrorTypeExecutor                 // executor related
    ErrorTypeConstraint               // constraint related
    ErrorTypeIndex                    // index related
    ErrorTypeStat                     // stat related
    ErrorTypeConnection               // connection related
    ErrorTypeSyntax                   // syntax related
    ErrorTypeSettings                 // settings related
    ErrorTypeBinder                   // binder related
    ErrorTypeNetwork                  // network related
    ErrorTypeOptimizer                // optimizer related
    ErrorTypeNullPointer              // nullptr exception
    ErrorTypeIO                       // IO exception
    ErrorTypeInterrupt                // interrupt
    ErrorTypeFatal                    // Fatal exceptions are non-recoverable, and render the entire DB in an unusable state
    ErrorTypeInternal                 // Internal exceptions indicate something went wrong internally (i.e. bug in the code base)
    ErrorTypeInvalidInput             // Input or arguments error
    ErrorTypeOutOfMemory              // out of memory
    ErrorTypePermission               // insufficient permissions
    ErrorTypeParameterNotResolved     // parameter types could not be resolved
    ErrorTypeParameterNotAllowed      // parameter types not allowed
    ErrorTypeDependency               // dependency
    ErrorTypeHTTP
    ErrorTypeMissingExtension // Thrown when an extension is used but not loaded
    ErrorTypeAutoLoad         // Thrown when an extension is used but not loaded
    ErrorTypeSequence
    ErrorTypeInvalidConfiguration // An invalid configuration was detected (e.g. a Secret param was missing, or a required setting not found)
)
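A hedged sketch of inspecting a driver error against these constants. It assumes the Error type listed in the index exposes a Type field of type ErrorType and a Msg field, which this excerpt does not show; the db handle is the one from the overview example.

_, err := db.Exec(`SELECT * FROM non_existing_table`)

var duckErr *Error
if errors.As(err, &duckErr) {
    // Type and Msg are assumed field names on Error.
    switch duckErr.Type {
    case ErrorTypeCatalog:
        fmt.Println("catalog error:", duckErr.Msg)
    default:
        fmt.Println("duckdb error:", duckErr.Msg)
    }
}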
type ParallelChunkTableFunction ¶
type ParallelChunkTableFunction = tableFunction[ParallelChunkTableSource]
A ParallelChunkTableFunction is a type which can be bound to return a ParallelChunkTableSource.
type ParallelChunkTableSource ¶
type ParallelChunkTableSource interface {
    // FillChunk takes a DataChunk and fills it with values.
    FillChunk(any, DataChunk) error
    // contains filtered or unexported methods
}
A ParallelChunkTableSource represents anything that produces rows in a vectorised way. The cardinality is requested before function initialization. After initializing the ParallelChunkTableSource, go-duckdb requests the rows. It simultaneously calls the FillChunk method with multiple threads. If ParallelTableSourceInfo.MaxThreads is greater than one, FillChunk must use synchronization primitives to avoid race conditions.
type ParallelRowTableFunction ¶
type ParallelRowTableFunction = tableFunction[ParallelRowTableSource]
A ParallelRowTableFunction is a type which can be bound to return a ParallelRowTableSource.
type ParallelRowTableSource ¶
type ParallelRowTableSource interface {
    // FillRow takes a Row and fills it with values.
    // It returns true if there are more rows to fill.
    FillRow(any, Row) (bool, error)
    // contains filtered or unexported methods
}
A ParallelRowTableSource represents anything that produces rows in a non-vectorised way. The cardinality is requested before function initialization. After initializing the ParallelRowTableSource, go-duckdb requests the rows. It simultaneously calls the FillRow method with multiple threads. If ParallelTableSourceInfo.MaxThreads is greater than one, FillRow must use synchronization primitives to avoid race conditions.
type ParallelTableSourceInfo ¶
type ParallelTableSourceInfo struct {
    // MaxThreads is the maximum number of threads on which to run the table source function.
    // If set to 0, it uses DuckDB's default thread configuration.
    MaxThreads int
}
ParallelTableSourceInfo contains information for initializing a parallelism-aware table source.
type ProfilingInfo ¶
type ProfilingInfo struct {
    // Metrics contains all key-value pairs of the current node.
    // The key is the metric name, and the value is the measured value.
    Metrics map[string]string
    // Children contains all children of the node and their respective metrics.
    Children []ProfilingInfo
}
ProfilingInfo is a recursive type containing metrics for each node in DuckDB's query plan. There are two types of nodes: the QUERY_ROOT and OPERATOR nodes. The QUERY_ROOT refers exclusively to the top-level node; its metrics are measured over the entire query. The OPERATOR nodes refer to the individual operators in the query plan.
func GetProfilingInfo ¶
func GetProfilingInfo(c *sql.Conn) (ProfilingInfo, error)
GetProfilingInfo obtains all available metrics set by the current connection.
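A hedged sketch of collecting metrics with GetProfilingInfo. Enabling profiling is done with DuckDB PRAGMAs, which are assumptions about the database configuration rather than part of this package's API; the db handle and checkErr helper follow the examples above.

ctx := context.Background()
conn, err := db.Conn(ctx)
checkErr(err, "failed to get connection: %s")
defer conn.Close()

// Profiling must be enabled on the connection before running the query.
_, err = conn.ExecContext(ctx, `PRAGMA enable_profiling = 'no_output'`)
checkErr(err, "failed to enable profiling: %s")

_, err = conn.ExecContext(ctx, `SELECT 42`)
checkErr(err, "failed to run query: %s")

info, err := GetProfilingInfo(conn)
checkErr(err, "failed to get profiling info: %s")

// info.Metrics holds the QUERY_ROOT metrics; info.Children holds the OPERATOR nodes.
fmt.Println(len(info.Metrics), len(info.Children))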
type ReplacementScanCallback ¶
type Row ¶
type Row struct {
// contains filtered or unexported fields
}
Row represents one row in DuckDB. It references the internal vectors.
func (Row) IsProjected ¶
IsProjected returns whether the column is projected.
type RowTableFunction ¶
type RowTableFunction = tableFunction[RowTableSource]
A RowTableFunction is a type which can be bound to return a RowTableSource.
type RowTableSource ¶
type RowTableSource interface {
    // FillRow takes a Row and fills it with values.
    // It returns true if there are more rows to fill.
    FillRow(Row) (bool, error)
    // contains filtered or unexported methods
}
A RowTableSource represents anything that produces rows in a non-vectorised way. The cardinality is requested before function initialization. After initializing the RowTableSource, go-duckdb requests the rows. It sequentially calls the FillRow method with a single thread.
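A hedged sketch of a FillRow implementation for a hypothetical rowSource that emits the numbers 1..max in a single BIGINT column. The struct and its fields are illustrative, IsProjected is assumed to take a column index, and a complete table source registered via RegisterTableUDF must also satisfy the interface's unexported methods, which this excerpt does not show.

type rowSource struct {
    max, emitted int64
}

func (src *rowSource) FillRow(row Row) (bool, error) {
    if src.emitted >= src.max {
        // No more rows to produce.
        return false, nil
    }
    src.emitted++

    // Only write columns that the query actually projects.
    if row.IsProjected(0) {
        if err := SetRowValue(row, 0, src.emitted); err != nil {
            return false, err
        }
    }
    return true, nil
}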
type ScalarFunc ¶
type ScalarFunc interface {
    // Config returns ScalarFuncConfig to configure the scalar function.
    Config() ScalarFuncConfig
    // Executor returns ScalarFuncExecutor to execute the scalar function.
    Executor() ScalarFuncExecutor
}
ScalarFunc is the user-defined scalar function interface. Any scalar function must implement a Config function and an Executor function.
type ScalarFuncConfig ¶
type ScalarFuncConfig struct {
    // InputTypeInfos contains Type information for each input parameter of the scalar function.
    InputTypeInfos []TypeInfo
    // ResultTypeInfo holds the Type information of the scalar function's result type.
    ResultTypeInfo TypeInfo

    // VariadicTypeInfo configures the number of input parameters.
    // If this field is nil, then the input parameters match InputTypeInfos.
    // Otherwise, the scalar function's input parameters are set to variadic, allowing any number of input parameters.
    // The Type of the first len(InputTypeInfos) parameters is configured by InputTypeInfos, and all
    // remaining parameters must match the variadic Type. To configure different variadic parameter types,
    // you must set the VariadicTypeInfo's Type to TYPE_ANY.
    VariadicTypeInfo TypeInfo

    // Volatile sets the stability of the scalar function to volatile, if true.
    // Volatile scalar functions might create a different result per row.
    // E.g., random() is a volatile scalar function.
    Volatile bool
    // SpecialNullHandling disables the default NULL handling of scalar functions, if true.
    // The default NULL handling is: NULL in, NULL out. I.e., if any input parameter is NULL, then the result is NULL.
    SpecialNullHandling bool
}
ScalarFuncConfig contains the fields to configure a user-defined scalar function.
type ScalarFuncExecutor ¶
type ScalarFuncExecutor struct {
    // RowExecutor accepts a row-based execution function.
    // []driver.Value contains the row values, and it returns the row execution result, or error.
    RowExecutor func(values []driver.Value) (any, error)
}
ScalarFuncExecutor contains the callback function to execute a user-defined scalar function. Currently, its only field is a row-based executor.
type StructEntry ¶
type StructEntry interface {
    // Info returns a STRUCT entry's type information.
    Info() TypeInfo
    // Name returns a STRUCT entry's name.
    Name() string
}
StructEntry is an interface to provide STRUCT entry information.
func NewStructEntry ¶
func NewStructEntry(info TypeInfo, name string) (StructEntry, error)
NewStructEntry returns a STRUCT entry. info contains information about the entry's type, and name holds the entry's name.
type TableFunction ¶
type TableFunction interface {
    RowTableFunction | ParallelRowTableFunction | ChunkTableFunction | ParallelChunkTableFunction
}
TableFunction is a type constraint covering the different table function types: RowTableFunction, ParallelRowTableFunction, ChunkTableFunction, and ParallelChunkTableFunction.
type TableFunctionConfig ¶
type TableFunctionConfig struct {
    // The Arguments of the table function.
    Arguments []TypeInfo
    // The NamedArguments of the table function.
    NamedArguments map[string]TypeInfo
}
TableFunctionConfig contains any information passed to DuckDB when registering the table function.
type Type ¶
type Type C.duckdb_type
Type wraps the corresponding DuckDB type enum.
const (
    TYPE_INVALID      Type = C.DUCKDB_TYPE_INVALID
    TYPE_BOOLEAN      Type = C.DUCKDB_TYPE_BOOLEAN
    TYPE_TINYINT      Type = C.DUCKDB_TYPE_TINYINT
    TYPE_SMALLINT     Type = C.DUCKDB_TYPE_SMALLINT
    TYPE_INTEGER      Type = C.DUCKDB_TYPE_INTEGER
    TYPE_BIGINT       Type = C.DUCKDB_TYPE_BIGINT
    TYPE_UTINYINT     Type = C.DUCKDB_TYPE_UTINYINT
    TYPE_USMALLINT    Type = C.DUCKDB_TYPE_USMALLINT
    TYPE_UINTEGER     Type = C.DUCKDB_TYPE_UINTEGER
    TYPE_UBIGINT      Type = C.DUCKDB_TYPE_UBIGINT
    TYPE_FLOAT        Type = C.DUCKDB_TYPE_FLOAT
    TYPE_DOUBLE       Type = C.DUCKDB_TYPE_DOUBLE
    TYPE_TIMESTAMP    Type = C.DUCKDB_TYPE_TIMESTAMP
    TYPE_DATE         Type = C.DUCKDB_TYPE_DATE
    TYPE_TIME         Type = C.DUCKDB_TYPE_TIME
    TYPE_INTERVAL     Type = C.DUCKDB_TYPE_INTERVAL
    TYPE_HUGEINT      Type = C.DUCKDB_TYPE_HUGEINT
    TYPE_UHUGEINT     Type = C.DUCKDB_TYPE_UHUGEINT
    TYPE_VARCHAR      Type = C.DUCKDB_TYPE_VARCHAR
    TYPE_BLOB         Type = C.DUCKDB_TYPE_BLOB
    TYPE_DECIMAL      Type = C.DUCKDB_TYPE_DECIMAL
    TYPE_TIMESTAMP_S  Type = C.DUCKDB_TYPE_TIMESTAMP_S
    TYPE_TIMESTAMP_MS Type = C.DUCKDB_TYPE_TIMESTAMP_MS
    TYPE_TIMESTAMP_NS Type = C.DUCKDB_TYPE_TIMESTAMP_NS
    TYPE_ENUM         Type = C.DUCKDB_TYPE_ENUM
    TYPE_LIST         Type = C.DUCKDB_TYPE_LIST
    TYPE_STRUCT       Type = C.DUCKDB_TYPE_STRUCT
    TYPE_MAP          Type = C.DUCKDB_TYPE_MAP
    TYPE_ARRAY        Type = C.DUCKDB_TYPE_ARRAY
    TYPE_UUID         Type = C.DUCKDB_TYPE_UUID
    TYPE_UNION        Type = C.DUCKDB_TYPE_UNION
    TYPE_BIT          Type = C.DUCKDB_TYPE_BIT
    TYPE_TIME_TZ      Type = C.DUCKDB_TYPE_TIME_TZ
    TYPE_TIMESTAMP_TZ Type = C.DUCKDB_TYPE_TIMESTAMP_TZ
    TYPE_ANY          Type = C.DUCKDB_TYPE_ANY
    TYPE_VARINT       Type = C.DUCKDB_TYPE_VARINT
    TYPE_SQLNULL      Type = C.DUCKDB_TYPE_SQLNULL
)
type TypeInfo ¶
type TypeInfo interface {
    // InternalType returns the Type.
    InternalType() Type
    // contains filtered or unexported methods
}
TypeInfo is an interface for a DuckDB type.
func NewDecimalInfo ¶
NewDecimalInfo returns DECIMAL type information. Its input parameters are the width and scale of the DECIMAL type.
func NewEnumInfo ¶
NewEnumInfo returns ENUM type information. Its input parameters are the dictionary values.
func NewListInfo ¶
NewListInfo returns LIST type information. childInfo contains the type information of the LIST's elements.
func NewMapInfo ¶
NewMapInfo returns MAP type information. keyInfo contains the type information of the MAP keys. valueInfo contains the type information of the MAP values.
func NewStructInfo ¶
func NewStructInfo(firstEntry StructEntry, others ...StructEntry) (TypeInfo, error)
NewStructInfo returns STRUCT type information. Its input parameters are the STRUCT entries.
func NewTypeInfo ¶
NewTypeInfo returns type information for DuckDB's primitive types. It returns the TypeInfo if the Type parameter is a valid primitive type; otherwise, it returns nil and an error. Valid types are: TYPE_[BOOLEAN, TINYINT, SMALLINT, INTEGER, BIGINT, UTINYINT, USMALLINT, UINTEGER, UBIGINT, FLOAT, DOUBLE, TIMESTAMP, DATE, TIME, INTERVAL, HUGEINT, VARCHAR, BLOB, TIMESTAMP_S, TIMESTAMP_MS, TIMESTAMP_NS, UUID, TIMESTAMP_TZ, ANY].
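The constructors compose: nested type information for LIST, MAP, and STRUCT columns can be built from primitive TypeInfo values, as in this sketch (checkErr is the helper used in the examples above).

varcharInfo, err := NewTypeInfo(TYPE_VARCHAR)
checkErr(err, "failed to create VARCHAR type info: %s")
intInfo, err := NewTypeInfo(TYPE_INTEGER)
checkErr(err, "failed to create INTEGER type info: %s")

// DECIMAL(18, 3)
decimalInfo, err := NewDecimalInfo(18, 3)
checkErr(err, "failed to create DECIMAL type info: %s")

// LIST(VARCHAR)
listInfo, err := NewListInfo(varcharInfo)
checkErr(err, "failed to create LIST type info: %s")

// MAP(VARCHAR, INTEGER)
mapInfo, err := NewMapInfo(varcharInfo, intInfo)
checkErr(err, "failed to create MAP type info: %s")

// STRUCT(name VARCHAR, tags LIST(VARCHAR), balance DECIMAL(18, 3))
nameEntry, err := NewStructEntry(varcharInfo, "name")
checkErr(err, "failed to create STRUCT entry: %s")
tagsEntry, err := NewStructEntry(listInfo, "tags")
checkErr(err, "failed to create STRUCT entry: %s")
balanceEntry, err := NewStructEntry(decimalInfo, "balance")
checkErr(err, "failed to create STRUCT entry: %s")
structInfo, err := NewStructInfo(nameEntry, tagsEntry, balanceEntry)
checkErr(err, "failed to create STRUCT type info: %s")

// These TypeInfo values can now be used in ScalarFuncConfig or TableFunctionConfig.
_, _ = mapInfo, structInfo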
Source Files ¶
Directories ¶
Path | Synopsis
---|---
deps |
deps/darwin_amd64 | Package darwin_amd64 is required to provide support for vendoring modules DO NOT REMOVE
deps/darwin_arm64 | Package darwin_arm64 is required to provide support for vendoring modules DO NOT REMOVE
deps/freebsd_amd64 | Package freebsd_amd64 is required to provide support for vendoring modules DO NOT REMOVE
deps/linux_amd64 | Package linux_amd64 is required to provide support for vendoring modules DO NOT REMOVE
deps/linux_arm64 | Package linux_arm64 is required to provide support for vendoring modules DO NOT REMOVE
examples |