ndl

package
v0.4.0
Published: Mar 21, 2023 License: Apache-2.0 Imports: 14 Imported by: 0

Documentation

Overview

Package ndl implements the generic table API of Nasdaq Data Link (NDL).

Official documentation is at https://docs.data.nasdaq.com/docs/tables-1.

Each NDL table has a schema: the list of column names and their types, in the order they appear in the table. The schema of the original table can be obtained with FetchTableMetadata(). Each downloaded table page also includes the relevant schema, which may be a subset of the full schema if only a subset of columns was requested.

The raw NDL API returns at most 10K rows in a single page. However, the JSON format used by this package includes a cursor for the next page, which makes it possible to download more than 10K rows by paging. This package implements transparent paging in RowIterator.
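The cursor contract can be sketched with a self-contained paging loop; fetchPage and its canned pages below are hypothetical stand-ins for the NDL table API, not part of this package:

```go
package main

import "fmt"

// page is a hypothetical single response page: some rows plus a cursor
// pointing at the next page ("" means there are no more pages).
type page struct {
	rows   []string
	cursor string
}

// fetchPage stands in for one NDL table API call; here it just serves
// canned pages keyed by cursor.
func fetchPage(cursor string) page {
	pages := map[string]page{
		"":   {rows: []string{"r1", "r2"}, cursor: "c1"},
		"c1": {rows: []string{"r3"}, cursor: ""},
	}
	return pages[cursor]
}

// collectAll keeps requesting pages, threading the cursor through,
// until the server returns an empty cursor -- the loop that
// RowIterator hides from the caller.
func collectAll() []string {
	var all []string
	cursor := ""
	for {
		p := fetchPage(cursor)
		all = append(all, p.rows...)
		if p.cursor == "" {
			return all
		}
		cursor = p.cursor
	}
}

func main() {
	fmt.Println(collectAll()) // [r1 r2 r3]
}
```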

APIs for specific providers and products, such as Sharadar Equities and ETFs, are implemented in subpackages.

Index

Constants

const (
	StatusFresh        = "fresh"
	StatusRegenerating = "regenerating"
	StatusCreating     = "creating"
)

Values of the Status field of BulkDownloadHandle.

Variables

var URL = "https://data.nasdaq.com/api/v3"

URL is the default base URL of the server. It may be overridden in tests before creating a new client.

Functions

func TestTablePage

func TestTablePage(data [][]Value, schema Schema, cursor string) (string, error)

TestTablePage generates a JSON string in the format returned by the NDL Table API, for use in tests.

func UseClient

func UseClient(ctx context.Context, apiKey string) context.Context

UseClient creates a new client based on the API key and injects it into the context.

Types

type BulkDownloadHandle

type BulkDownloadHandle struct {
	Link              string
	Status            string
	SnapshotTime      string
	LastRefreshedTime string
	MonitorFactory    DownloadMonitorFactory
	// contains filtered or unexported fields
}

BulkDownloadHandle is a simplified result of the first asynchronous bulk download call.

func BulkDownload

func BulkDownload(ctx context.Context, table string) (*BulkDownloadHandle, error)

BulkDownload retrieves the bulk download metadata, including the data link.

type CSVReader

type CSVReader struct {
	// contains filtered or unexported fields
}

CSVReader implements a streaming CSV reader, one row at a time, with a Close() method to release its resources.

func BulkDownloadCSV

func BulkDownloadCSV(ctx context.Context, h *BulkDownloadHandle) (*CSVReader, error)

BulkDownloadCSV starts downloading the actual data pointed to by BulkDownloadHandle. It downloads the zip archive containing a single CSV file into memory, and returns a CSVReader which streams the contents of that file. When the returned error is nil, make sure to call CSVReader.Close() when done with the CSV stream.

func (*CSVReader) AddCloser

func (r *CSVReader) AddCloser(c io.Closer)

AddCloser adds c to the list of closers. Close() calls each registered closer in LIFO order.

func (*CSVReader) Close

func (r *CSVReader) Close()

Close the CSVReader and release all of its resources.

func (*CSVReader) Read

func (r *CSVReader) Read() ([]string, error)

Read the next CSV row as a slice of strings. It returns the same errors as the encoding/csv Reader.Read method. In particular, it returns (nil, io.EOF) when there are no more rows.

type Client

type Client struct {
	// contains filtered or unexported fields
}

Client for querying NDL tables and time-series.

func GetClient

func GetClient(ctx context.Context) *Client

GetClient extracts the Client from the context, if any.

type DatatableMeta

type DatatableMeta struct {
	VendorCode  string       `json:"vendor_code"`
	TableCode   string       `json:"datatable_code"`
	Name        string       `json:"name"`
	Description string       `json:"description"`
	Schema      Schema       `json:"columns"`
	Filters     []string     `json:"filters"`
	PrimaryKey  []string     `json:"primary_key"`
	Premium     bool         `json:"premium"`
	Status      TableStatus  `json:"status"`
	Version     TableVersion `json:"data_version"`
}

DatatableMeta is the JSON struct for the table metadata.

type DownloadMonitorFactory

type DownloadMonitorFactory = func(*http.Response) io.ReadCloser

DownloadMonitorFactory creates a pass-through io.Reader from http.Response.Body which allows monitoring of bulk data download progress.

func LoggingMonitorFactory

func LoggingMonitorFactory(ctx context.Context, name string, interval int64) DownloadMonitorFactory

LoggingMonitorFactory is the default download monitor factory which logs the progress report of data identified by name every interval bytes. If interval is not positive, it is set to 10MB.

type RowIterator

type RowIterator struct {
	// contains filtered or unexported fields
}

RowIterator iterates over query results row by row. Paging is handled transparently.

func (*RowIterator) Next

func (it *RowIterator) Next(row ValueLoader) (bool, error)

Next loads the next row. The second return value is false when there are no more rows. Note that the error may be non-nil regardless of whether the iterator has ended.

type Schema

type Schema []SchemaField

Schema definition for a table.

func (Schema) Equal

func (s Schema) Equal(s2 Schema) bool

Equal tests two schemas for exact equality, including the field ordering.

func (Schema) MapCSVColumns

func (s Schema) MapCSVColumns(header []string) (map[string]int, error)

MapCSVColumns creates a map of {field name -> CSV column index} based on the CSV header. Every schema field must be present in the header; otherwise an error is returned.
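A sketch of the mapping this method is documented to produce, with the schema types stubbed locally; mapCSVColumns is an illustration, not the package's implementation:

```go
package main

import "fmt"

// Local stubs of the package's schema types, for illustration only.
type SchemaField struct{ Name, Type string }
type Schema []SchemaField

// mapCSVColumns maps each schema field name to its position in the
// CSV header, failing if any schema field is missing from the header.
func mapCSVColumns(s Schema, header []string) (map[string]int, error) {
	idx := make(map[string]int, len(header))
	for i, h := range header {
		idx[h] = i
	}
	m := make(map[string]int, len(s))
	for _, f := range s {
		i, ok := idx[f.Name]
		if !ok {
			return nil, fmt.Errorf("schema field %q not in CSV header", f.Name)
		}
		m[f.Name] = i
	}
	return m, nil
}

func main() {
	s := Schema{{Name: "ticker", Type: "String"}, {Name: "close", Type: "double"}}
	// The header may contain extra columns; only schema fields are mapped.
	m, err := mapCSVColumns(s, []string{"date", "ticker", "close"})
	fmt.Println(m, err) // map[close:2 ticker:1] <nil>
}
```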

func (Schema) MapFields

func (s Schema) MapFields() map[string]int

MapFields creates a map of {field name -> field index} in the schema.

func (Schema) String

func (s Schema) String() string

String prints a string representation of the schema.

func (Schema) SubsetOf

func (s Schema) SubsetOf(s2 Schema) bool

SubsetOf tests if self is a subset of the other schema. This is useful for robust ValueLoaders that can continue to work when the schema adds new fields.

type SchemaField

type SchemaField struct {
	Name string `json:"name"`
	Type string `json:"type"`
}

SchemaField is the schema definition for a single table column.

type TableMetadata

type TableMetadata struct {
	Datatable DatatableMeta `json:"datatable"`
}

TableMetadata is the format returned by the metadata API.

func FetchTableMetadata

func FetchTableMetadata(ctx context.Context, table string) (*TableMetadata, error)

FetchTableMetadata obtains metadata about the requested table specified as PUBLISHER/TABLE.

type TableQuery

type TableQuery struct {
	// contains filtered or unexported fields
}

TableQuery is a builder for a table query.

func NewTableQuery

func NewTableQuery(table string) *TableQuery

NewTableQuery creates a new query.

func (*TableQuery) Columns

func (q *TableQuery) Columns(columns ...string) *TableQuery

Columns constrains the query result to only these columns.

func (*TableQuery) Copy

func (q *TableQuery) Copy() *TableQuery

Copy creates a deep copy of the query. It is primarily used in its builder methods.

func (*TableQuery) Cursor

func (q *TableQuery) Cursor(cursor string) *TableQuery

Cursor sets the cursor ID for a paging query.

func (*TableQuery) Equal

func (q *TableQuery) Equal(column string, values ...string) *TableQuery

Equal adds an equality filter: the value of the column must equal one of the given values. This and other builder methods always create a deep copy of the query, leaving the original intact.
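The copy-on-write builder style described here can be illustrated with a toy query type; all names below are hypothetical stand-ins, not the package's internals:

```go
package main

import "fmt"

// query is a toy stand-in for TableQuery: builder methods copy first,
// mutate the copy, and return it, so the original stays intact.
type query struct {
	filters map[string]string
}

func newQuery() *query { return &query{filters: map[string]string{}} }

// copy makes a deep copy, so the new query shares no mutable state
// with the original.
func (q *query) copy() *query {
	c := &query{filters: make(map[string]string, len(q.filters))}
	for k, v := range q.filters {
		c.filters[k] = v
	}
	return c
}

// equal mimics the copy-on-write style of TableQuery.Equal.
func (q *query) equal(column, value string) *query {
	c := q.copy()
	c.filters[column] = value
	return c
}

func main() {
	base := newQuery()
	// Chained builder calls; base is never modified.
	q := base.equal("ticker", "AAPL").equal("date", "2023-03-21")
	fmt.Println(len(base.filters), len(q.filters)) // 0 2
}
```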

func (*TableQuery) Ge

func (q *TableQuery) Ge(column string, value string) *TableQuery

Ge adds a non-strict inequality filter: a numerical column's value must be >= value.

func (*TableQuery) Gt

func (q *TableQuery) Gt(column string, value string) *TableQuery

Gt adds a strict inequality filter: a numerical column's value must be > value.

func (*TableQuery) Le

func (q *TableQuery) Le(column string, value string) *TableQuery

Le adds a non-strict inequality filter: a numerical column's value must be <= value.

func (*TableQuery) Lt

func (q *TableQuery) Lt(column string, value string) *TableQuery

Lt adds a strict inequality filter: a numerical column's value must be < value.

func (*TableQuery) Path

func (q *TableQuery) Path() string

Path returns the URL path to add to the base URL.

func (*TableQuery) PerPage

func (q *TableQuery) PerPage(size int) *TableQuery

PerPage sets the maximum number of results in a single response, [0..10000].

func (*TableQuery) Read

func (q *TableQuery) Read(ctx context.Context) *RowIterator

Read sets up the iterator over the result rows, which will execute the query as needed and handle paging transparently.

func (*TableQuery) Values

func (q *TableQuery) Values() url.Values

Values returns the query values for the query. Each call creates a new object, so the caller is free to modify it without affecting the query.

type TableStatus

type TableStatus struct {
	RefreshedAt     db.Time `json:"refreshed_at"`
	Status          string  `json:"status"`
	ExpectedAt      string  `json:"expected_at"`
	UpdateFrequency string  `json:"update_frequency"`
}

TableStatus is a part of DatatableMeta.

type TableVersion

type TableVersion struct {
	Code        string `json:"code"`
	Default     bool   `json:"default"`
	Description string `json:"description"`
}

TableVersion is a part of DatatableMeta.

type Value

type Value any

Value is an arbitrary value of a table cell.

type ValueLoader

type ValueLoader interface {
	Load(v []Value, s Schema) error
}

ValueLoader is the interface that a row type of a specific table must implement.
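A hypothetical row type implementing the interface might index values by field name via the schema that arrives with each page, so it keeps working if the column order changes. The ndl types are stubbed locally here so the sketch runs standalone:

```go
package main

import "fmt"

// Local stand-ins for the package's types, for illustration.
type Value any
type SchemaField struct{ Name, Type string }
type Schema []SchemaField

// Price is a hypothetical row type for a table with ticker and close
// columns; it is not part of this package.
type Price struct {
	Ticker string
	Close  float64
}

// Load matches each schema field by name, so it is robust to column
// reordering and to extra columns it does not know about.
func (p *Price) Load(v []Value, s Schema) error {
	if len(v) != len(s) {
		return fmt.Errorf("row has %d values for %d schema fields", len(v), len(s))
	}
	for i, f := range s {
		switch f.Name {
		case "ticker":
			t, ok := v[i].(string)
			if !ok {
				return fmt.Errorf("ticker: want string, got %T", v[i])
			}
			p.Ticker = t
		case "close":
			c, ok := v[i].(float64)
			if !ok {
				return fmt.Errorf("close: want float64, got %T", v[i])
			}
			p.Close = c
		}
	}
	return nil
}

func main() {
	s := Schema{{Name: "ticker", Type: "String"}, {Name: "close", Type: "double"}}
	var p Price
	if err := p.Load([]Value{"AAPL", 190.5}, s); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", p) // {Ticker:AAPL Close:190.5}
}
```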

Directories

Path Synopsis
Package sharadar implements specific schemas and methods for downloading Sharadar equity and fund prices tables through Nasdaq Data Link.
