Documentation ¶
Index ¶
Examples ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type BatchLoader ¶
type BatchLoader struct {
// contains filtered or unexported fields
}
BatchLoader will:
- accept records row-by-row to insert as a batch
- remove records to be replaced or updated
- get a count of removed records
- insert all rows in efficient bulk batches
- perform the entire batch operation in a transaction so the table is updated atomically and can be rolled back if there is a problem
Transaction:
1. Begin transaction
2. Remove old records (if any)
3. Insert new records
4. Commit, or roll back if there is a problem

Need to know:
- table to insert into
- column names and number of columns
- column values
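The "need to know" items above are exactly what it takes to assemble the bulk insert statement. A minimal sketch of how such a statement could be built from the table name, column names, and row count using '?' value placeholders (the function name here is illustrative, not part of this package):

```go
package main

import (
	"fmt"
	"strings"
)

// buildBulkInsert assembles a multi-row INSERT statement of the kind
// a batch loader issues: one '?' placeholder per column, one
// parenthesized group per row.
func buildBulkInsert(table string, cols []string, numRows int) string {
	// one placeholder group per row, e.g. "(?,?)"
	ph := "(" + strings.TrimSuffix(strings.Repeat("?,", len(cols)), ",") + ")"
	rows := make([]string, numRows)
	for i := range rows {
		rows[i] = ph
	}
	return fmt.Sprintf("INSERT INTO %s (%s) VALUES %s",
		table, strings.Join(cols, ","), strings.Join(rows, ","))
}

func main() {
	fmt.Println(buildBulkInsert("test.table", []string{"a", "b"}, 2))
	// INSERT INTO test.table (a,b) VALUES (?,?),(?,?)
}
```

The row values themselves are then passed separately as query arguments, which is what keeps the bulk insert safe from injection.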
func NewBatchLoader ¶
func NewBatchLoader(dbType string, sqlDB *sql.DB) *BatchLoader
NewBatchLoader will return an instance of a BatchLoader that is tested to work with MySQL and Postgres. It will likely work with most other SQL drivers that support the same standard insert syntax used in MySQL and Postgres and use '?' as the value placeholder.
Example ¶
un := "root"
pass := ""
host := "127.0.0.1:5432"
dbName := "ci-test"
connStr := fmt.Sprintf("user=%s password=%s host=%s dbname=%s", un, pass, host, dbName)
_, err := sql.Open("postgres", connStr)
fmt.Println(err)
Output: <nil>
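Since Postgres natively uses '$1'-style ordinals rather than '?', the dbType parameter presumably selects the placeholder style at statement-build time. A minimal sketch of that kind of conversion (the function name is hypothetical, not part of this package):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// toPostgresPlaceholders rewrites '?' placeholders to Postgres-style
// $1, $2, ... ordinals, numbered in order of appearance.
func toPostgresPlaceholders(query string) string {
	var b strings.Builder
	n := 0
	for _, r := range query {
		if r == '?' {
			n++
			b.WriteString("$" + strconv.Itoa(n))
		} else {
			b.WriteRune(r)
		}
	}
	return b.String()
}

func main() {
	fmt.Println(toPostgresPlaceholders("INSERT INTO t (a,b) VALUES (?,?)"))
	// INSERT INTO t (a,b) VALUES ($1,$2)
}
```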
func (*BatchLoader) AddRow ¶
func (l *BatchLoader) AddRow(row []interface{})
func (*BatchLoader) Delete ¶
func (l *BatchLoader) Delete(query string, vals ...interface{})
type NopBatchLoader ¶
type NopBatchLoader struct {
	// Modes supported:
	// commit_err - will return an error on calling Commit
	Mode string

	Stats Stats
}
func NewNopBatchLoader ¶
func NewNopBatchLoader(mode string) *NopBatchLoader
func (*NopBatchLoader) AddRow ¶
func (l *NopBatchLoader) AddRow(row []interface{})
func (*NopBatchLoader) Delete ¶
func (l *NopBatchLoader) Delete(query string, vals ...interface{})
type Stats ¶
type Stats struct {
	// Started is the time when the query was started.
	Started string `json:"started"`

	// Dur is the execution duration.
	Dur Duration `json:"dur"`

	// Table is the table or schema.table value.
	Table string `json:"table"`

	// Removed is the number of records removed before
	// the bulk insert.
	Removed int64 `json:"removed"`

	// Rows is the number of raw rows added. This is not
	// the actual insert count reported back by the db,
	// though it should match it.
	Rows int64 `json:"rows"`

	// Inserted is the number of records inserted with the bulk insert.
	// This is the actual number reported back by the db.
	Inserted int64 `json:"inserted"`

	// Cols is the number of columns in each inserted row.
	Cols int `json:"cols"`

	// BatchDate is the hour of data to which the
	// batch data belongs. Not populated by the bulk
	// inserter.
	BatchDate string `json:"batch_hour"`
	// contains filtered or unexported fields
}
func NewStatsFromBytes ¶
NewStatsFromBytes creates Stats from JSON bytes.
func (*Stats) AddRow ¶
func (s *Stats) AddRow()
AddRow will atomically increment the Inserted value.
func (Stats) Clone ¶
Clone will create a copy of Stats that won't trigger race conditions. Use Clone if you are updating and reading from Stats at the same time. Read from the clone.
func (Stats) JSONString ¶
func (*Stats) ParseBatchDate ¶
ParseBatchDate will attempt to parse the BatchDate field to a time.Time object. ParseBatchDate expects the BatchDate time string is in time.RFC3339. If there is a parse error then the time.Time zero value is returned.
func (*Stats) ParseStarted ¶
ParseStarted will attempt to parse the Started field to a time.Time object. ParseStarted expects the Started time string is in time.RFC3339. If there is a parse error then the time.Time zero value is returned.
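Both parse methods describe the same fallback behavior, which can be sketched with the standard library directly (the helper name is illustrative):

```go
package main

import (
	"fmt"
	"time"
)

// parseRFC3339 parses an RFC3339 time string; on a parse error it
// falls back to the time.Time zero value, as both methods document.
func parseRFC3339(s string) time.Time {
	t, err := time.Parse(time.RFC3339, s)
	if err != nil {
		return time.Time{}
	}
	return t
}

func main() {
	fmt.Println(parseRFC3339("2020-01-02T15:04:05Z").Year()) // 2020
	fmt.Println(parseRFC3339("not-a-time").IsZero())         // true
}
```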
func (*Stats) SetBatchDate ¶
SetBatchDate will set the BatchDate field in the format time.RFC3339.
func (*Stats) SetStarted ¶
SetStarted will set the Started field in the format time.RFC3339.