database

package
v5.0.0-rcx.14 Latest
Warning

This package is not in the latest version of its module.

Published: May 31, 2024 License: MIT Imports: 19 Imported by: 10

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func New

func New(cfg *config.Config, logger func() *slog.Logger) (*gorm.DB, error)

New creates a new connection pool using the settings defined in the given configuration.

To use a specific driver / dialect ("mysql", "sqlite3", ...), you must blank-import it in your main file.

import _ "goyave.dev/goyave/v5/database/dialect/mysql"
import _ "goyave.dev/goyave/v5/database/dialect/postgres"
import _ "goyave.dev/goyave/v5/database/dialect/sqlite"
import _ "goyave.dev/goyave/v5/database/dialect/mssql"
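
The connection settings read by `New` live in the `database` section of the application's configuration. A minimal sketch of the corresponding `config.json` fragment (the key names are assumed from the Goyave configuration documentation; adjust values to your setup):

```json
{
  "database": {
    "connection": "mysql",
    "host": "127.0.0.1",
    "port": 3306,
    "name": "myapp",
    "username": "dbuser",
    "password": "secret",
    "options": "charset=utf8mb4&parseTime=true&loc=Local"
  }
}
```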

func NewFromDialector

func NewFromDialector(cfg *config.Config, logger func() *slog.Logger, dialector gorm.Dialector) (*gorm.DB, error)

NewFromDialector creates a new connection pool from a GORM dialector, using the settings defined in the given configuration.

This can be used in tests to create a mock connection pool.

func RegisterDialect

func RegisterDialect(name, template string, initializer DialectorInitializer)

RegisterDialect registers a connection string template for the given dialect.

You cannot override a dialect that already exists.

Template format accepts the following placeholders, which will be replaced with the corresponding configuration entries automatically:

  • "{username}"
  • "{password}"
  • "{host}"
  • "{port}"
  • "{name}"
  • "{options}"

Example template for the "mysql" dialect:

{username}:{password}@({host}:{port})/{name}?{options}
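
Conceptually, such a template is expanded by substituting each placeholder with its configuration value. A self-contained sketch of that substitution (an illustration of the documented placeholders, not the library's actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// buildDSN replaces the documented placeholders in a connection
// string template with the corresponding configuration values.
func buildDSN(template string, cfg map[string]string) string {
	r := strings.NewReplacer(
		"{username}", cfg["username"],
		"{password}", cfg["password"],
		"{host}", cfg["host"],
		"{port}", cfg["port"],
		"{name}", cfg["name"],
		"{options}", cfg["options"],
	)
	return r.Replace(template)
}

func main() {
	tmpl := "{username}:{password}@({host}:{port})/{name}?{options}"
	dsn := buildDSN(tmpl, map[string]string{
		"username": "root",
		"password": "secret",
		"host":     "127.0.0.1",
		"port":     "3306",
		"name":     "myapp",
		"options":  "charset=utf8mb4&parseTime=true",
	})
	fmt.Println(dsn)
	// root:secret@(127.0.0.1:3306)/myapp?charset=utf8mb4&parseTime=true
}
```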

Types

type DialectorInitializer

type DialectorInitializer func(dsn string) gorm.Dialector

DialectorInitializer is a function initializing a GORM Dialector using the given data source name (DSN).

type Factory

type Factory[T any] struct {
	BatchSize int
	// contains filtered or unexported fields
}

Factory is an object used to generate records or seed the database.

func NewFactory

func NewFactory[T any](generator func() *T) *Factory[T]

NewFactory creates a new Factory. The given generator function will be used to generate records.

func (*Factory[T]) Generate

func (f *Factory[T]) Generate(count int) []*T

Generate generates a number of records using the factory's generator function.

func (*Factory[T]) Override

func (f *Factory[T]) Override(override *T) *Factory[T]

Override sets an override model for generated records. Values present in the override model replace those in the generated records. This function expects a struct pointer as its parameter. It returns the same `Factory` instance so calls can be chained.

func (*Factory[T]) Save

func (f *Factory[T]) Save(db *gorm.DB, count int) []*T

Save generates a number of records using the factory, inserts them into the database, and returns the inserted records.
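
The factory semantics described above (a generator function called once per record) can be sketched in plain Go. This is an illustrative re-implementation of `Generate` only; `Override` merging and database insertion are omitted:

```go
package main

import "fmt"

// factory mirrors the documented behavior: a generator function
// produces each record; generate calls it count times.
type factory[T any] struct {
	generator func() *T
}

func newFactory[T any](generator func() *T) *factory[T] {
	return &factory[T]{generator: generator}
}

// generate produces count records using the generator function.
func (f *factory[T]) generate(count int) []*T {
	records := make([]*T, 0, count)
	for i := 0; i < count; i++ {
		records = append(records, f.generator())
	}
	return records
}

type User struct {
	Name string
}

func main() {
	f := newFactory(func() *User { return &User{Name: "John"} })
	users := f.generate(3)
	fmt.Println(len(users), users[0].Name) // 3 John
}
```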

type Logger

type Logger struct {

	// SlowThreshold defines the minimum query execution time to be considered "slow".
	// If a query takes more time than `SlowThreshold`, the query will be logged at the WARN level.
	// If 0, disables query execution time checking.
	SlowThreshold time.Duration
	// contains filtered or unexported fields
}

Logger is an adapter between `*slog.Logger` and GORM's logger.

func NewLogger

func NewLogger(slogger func() *slog.Logger) *Logger

NewLogger creates a new `Logger` adapter between GORM and `*slog.Logger`. It uses a `SlowThreshold` of 200ms.

func (Logger) Error

func (l Logger) Error(ctx context.Context, msg string, data ...any)

Error logs at `LevelError`.

func (Logger) Info

func (l Logger) Info(ctx context.Context, msg string, data ...any)

Info logs at `LevelInfo`.

func (*Logger) LogMode

func (l *Logger) LogMode(_ logger.LogLevel) logger.Interface

LogMode returns a copy of this logger. The level argument actually has no effect as it is handled by the underlying `*slog.Logger`.

func (Logger) Trace

func (l Logger) Trace(ctx context.Context, begin time.Time, fc func() (sql string, rowsAffected int64), err error)

Trace logs SQL queries at:

  • `LevelDebug`
  • `LevelWarn` if the query is slow
  • `LevelError` if the given error is not nil

func (Logger) Warn

func (l Logger) Warn(ctx context.Context, msg string, data ...any)

Warn logs at `LevelWarn`.

type Paginator

type Paginator[T any] struct {
	DB *gorm.DB `json:"-"`

	Records *[]T `json:"records"`

	MaxPage     int64 `json:"maxPage"`
	Total       int64 `json:"total"`
	PageSize    int   `json:"pageSize"`
	CurrentPage int   `json:"currentPage"`
	// contains filtered or unexported fields
}

Paginator is a structure containing pagination information and result records.

func NewPaginator

func NewPaginator[T any](db *gorm.DB, page, pageSize int, dest *[]T) *Paginator[T]

NewPaginator creates a new Paginator.

The given DB transaction can already contain clauses, such as WHERE, if you want to filter results.

articles := []model.Article{}
tx := db.Where("title LIKE ?", "%"+sqlutil.EscapeLike(search)+"%")
paginator := database.NewPaginator(tx, page, pageSize, &articles)
err := paginator.Find()
if response.WriteDBError(err) {
	return
}
response.JSON(http.StatusOK, paginator)

func (*Paginator[T]) Find

func (p *Paginator[T]) Find() error

Find fetches the page information (total records and max page) via `UpdatePageInfo()` if it hasn't been fetched already, then executes the query. The `Paginator` struct and the destination slice given in `NewPaginator()` are updated automatically.

The two queries are executed inside a transaction.

func (*Paginator[T]) Raw

func (p *Paginator[T]) Raw(query string, vars []any, countQuery string, countVars []any) *Paginator[T]

Raw sets a raw SQL query and count query. The Paginator will execute these raw queries instead of generating them automatically. The raw query must not contain the "LIMIT" and "OFFSET" clauses; they are added automatically. The count query should return a single number (`COUNT(*)` for example).
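
The clauses appended to the raw query reduce to simple arithmetic on the page number and page size. A sketch of that expansion, assuming 1-indexed pages (the library builds these clauses through GORM; this is only an illustration):

```go
package main

import "fmt"

// paginate appends the LIMIT and OFFSET clauses that a paginator
// would add to a raw query, assuming 1-indexed pages.
func paginate(query string, page, pageSize int) string {
	offset := (page - 1) * pageSize
	return fmt.Sprintf("%s LIMIT %d OFFSET %d", query, pageSize, offset)
}

func main() {
	fmt.Println(paginate("SELECT * FROM articles", 3, 10))
	// SELECT * FROM articles LIMIT 10 OFFSET 20
}
```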

func (*Paginator[T]) UpdatePageInfo

func (p *Paginator[T]) UpdatePageInfo() error

UpdatePageInfo executes the count request to calculate `Total` and `MaxPage`. When calling this function manually, it is advised to use the same transaction as the one calling `Find()`, to avoid inconsistencies.
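
Deriving `MaxPage` from `Total` and `PageSize` presumably amounts to a ceiling division. A sketch of that arithmetic (the treatment of an empty result set as one page is an assumption; check the library for the exact edge-case behavior):

```go
package main

import "fmt"

// maxPage computes the number of pages needed to hold total records
// with the given page size, using integer ceiling division.
func maxPage(total, pageSize int64) int64 {
	if total == 0 {
		return 1 // assumption: an empty result still counts as one page
	}
	return (total + pageSize - 1) / pageSize
}

func main() {
	fmt.Println(maxPage(95, 10))  // 10
	fmt.Println(maxPage(100, 10)) // 10
	fmt.Println(maxPage(101, 10)) // 11
}
```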

type PaginatorDTO

type PaginatorDTO[T any] struct {
	Records     []T   `json:"records"`
	MaxPage     int64 `json:"maxPage"`
	Total       int64 `json:"total"`
	PageSize    int   `json:"pageSize"`
	CurrentPage int   `json:"currentPage"`
}

PaginatorDTO is the structure sent to clients as a response.

type TimeoutPlugin

type TimeoutPlugin struct {
	ReadTimeout  time.Duration
	WriteTimeout time.Duration
}

TimeoutPlugin is a GORM plugin that adds a default timeout to SQL queries if none is already applied on the statement. It works by replacing the statement's context with a child context that has the configured timeout. The context is replaced in a "before" callback on all GORM operations. In an "after" callback, the new context is canceled.

The `ReadTimeout` is applied to the `Query` and `Raw` GORM callbacks. The `WriteTimeout` is applied to all other callbacks.

Supports all GORM operations except `Scan()`.

A timeout duration less than or equal to 0 disables the plugin for the relevant operations.

func (*TimeoutPlugin) Initialize

func (p *TimeoutPlugin) Initialize(db *gorm.DB) error

Initialize registers the callbacks for all operations.

func (*TimeoutPlugin) Name

func (p *TimeoutPlugin) Name() string

Name returns the name of the plugin.

Directories

Path Synopsis
dialect
