Documentation ¶
Overview ¶
Package btcexport provides a service for parsing the blockchain as stored by a btcd full node and exporting it as CSV in chunked files. The BlockExporter struct runs a concurrent process to perform the export and BlockEncoder defines the transformation from deserialized btcd structures to CSV records. The BlockExporter supports multiple file output backends, currently including filesystem output and uploads to Amazon S3 buckets.
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type BlockEncoder ¶
type BlockEncoder interface {
	// BlockRecordHeader returns column headers for the blocks table.
	BlockRecordHeader() []string

	// TxRecordHeader returns column headers for the transactions table.
	TxRecordHeader() []string

	// TxInRecordHeader returns column headers for the transaction inputs table.
	TxInRecordHeader() []string

	// TxOutRecordHeader returns column headers for the transaction outputs
	// table.
	TxOutRecordHeader() []string

	// GenBlockRecord returns a row in the blocks table derived from the given
	// block.
	GenBlockRecord(block *btcutil.Block) ([]string, error)

	// GenTxRecord returns a row in the transactions table derived from the
	// given transaction.
	GenTxRecord(tx *btcutil.Tx, height uint) ([]string, error)

	// GenTxInRecord returns a row in the transaction inputs table derived from
	// the given input.
	GenTxInRecord(txHash *chainhash.Hash, index int, txIn *wire.TxIn, isCoinbase bool) ([]string, error)

	// GenTxOutRecord returns a row in the transaction outputs table derived
	// from the given output.
	GenTxOutRecord(txHash *chainhash.Hash, index int, txOut *wire.TxOut) ([]string, error)
}
BlockEncoder is used to encode block, tx, input, and output data as rows in a tabular data format.
type BlockExporter ¶
type BlockExporter struct {
// contains filtered or unexported fields
}
BlockExporter is used to read a range of raw Bitcoin blocks from the database, encode them as tabular data in a text format (i.e., CSV), and write them to an output data store. Data is not guaranteed to be written to the output in any particular order, as blocks are processed in parallel and data is split across multiple files with a maximum size limit.
func New ¶
func New(cfg Config) (*BlockExporter, error)
New constructs and returns a new BlockExporter.
func (*BlockExporter) BlocksProcessed ¶
func (be *BlockExporter) BlocksProcessed() uint
BlocksProcessed returns the number of blocks processed so far.
func (*BlockExporter) Done ¶
func (be *BlockExporter) Done() <-chan struct{}
Done returns a channel that is closed when the process completes or ends prematurely.
func (*BlockExporter) Errors ¶
func (be *BlockExporter) Errors() <-chan error
Errors returns a channel with errors that occur during the export process. The channel is closed when the process completes or ends prematurely.
func (*BlockExporter) Start ¶
func (be *BlockExporter) Start() error
Start begins the export process. The process is organized as a pipeline of goroutines that run concurrently. This function returns as soon as the process has been started, and the caller can watch the Done() channel to be informed when the process completes.
func (*BlockExporter) Stop ¶
func (be *BlockExporter) Stop()
Stop signals all goroutines that are part of the export process to end immediately. This function may return before they all exit.
func (*BlockExporter) TotalBlocks ¶
func (be *BlockExporter) TotalBlocks() uint
TotalBlocks returns the total number of blocks that are to be processed.
type Config ¶
type Config struct {
	DB        database.DB
	Chain     *blockchain.BlockChain
	NetParams *chaincfg.Params

	// OpenWriter creates a new rotating writer to the output destination.
	OpenWriter WriterFactory

	// StartHeight is the block height to export from.
	StartHeight uint

	// EndHeight is the block height to export until. If this field is 0, it
	// will default to the height of the last confirmed block on the current
	// chain.
	EndHeight uint

	// ConfirmedDepth is the depth required for a block to be considered
	// confirmed. A confirmed block should never be reorganized out of the
	// chain. The confirmed depth is used to determine the ending height of the
	// export if no EndHeight is set explicitly.
	ConfirmedDepth uint

	// FileSizeLimit is the maximum size in bytes that an output file is
	// allowed to be. If the record data exceeds the file size limit, it will
	// be split across multiple files.
	FileSizeLimit int
}
Config is used to construct a new BlockExporter.
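A sketch of populating Config. The variable names and values are illustrative; `db` and `chain` stand for an open btcd database and chain instance obtained elsewhere:

```go
cfg := btcexport.Config{
	DB:             db,    // open btcd database.DB
	Chain:          chain, // *blockchain.BlockChain
	NetParams:      &chaincfg.MainNetParams,
	OpenWriter:     btcexport.RotatingFileWriter("/tmp/export"),
	StartHeight:    0,
	EndHeight:      0, // 0: export up to the last confirmed block
	ConfirmedDepth: 6,
	FileSizeLimit:  100 * 1024 * 1024, // split output at ~100 MiB per file
}
```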
type RotatingWriter ¶
type RotatingWriter struct {
// contains filtered or unexported fields
}
RotatingWriter is used to write to a backing writer that can be rotated periodically. This is used to write to different files with an approximate size limit and start a new file when that size limit is exceeded.
func NewRotatingWriter ¶
func NewRotatingWriter(openWriter func() (io.WriteCloser, error)) (*RotatingWriter, error)
NewRotatingWriter constructs a new RotatingWriter which uses the openWriter parameter function to generate a new backing writer each time it is rotated.
func (*RotatingWriter) BytesWritten ¶
func (w *RotatingWriter) BytesWritten() int
BytesWritten returns the total number of bytes written to the current backing writer. This is reset each time the writer is rotated.
func (*RotatingWriter) Close ¶
func (w *RotatingWriter) Close() error
Close closes the current backing writer.
func (*RotatingWriter) RotateWriter ¶
func (w *RotatingWriter) RotateWriter() error
RotateWriter closes the current backing writer and opens a new one, resetting the count of bytes written.
type WriterFactory ¶
type WriterFactory func(filename string, indexPtr *uint32) (*RotatingWriter, error)
WriterFactory is a function that can be used to create new RotatingWriters with unique file names.
func RotatingFileWriter ¶
func RotatingFileWriter(dir string) WriterFactory
RotatingFileWriter returns a factory for creating new RotatingWriters with a file output destination. File names are generated sequentially using a shared incrementing index.
func RotatingS3Writer ¶
func RotatingS3Writer(uploader *s3manager.Uploader, options *s3manager.UploadInput) WriterFactory
RotatingS3Writer returns a factory for creating new RotatingWriters with an S3 object output destination. Object keys are generated sequentially using a shared incrementing index.