firecore

package module
v1.5.1
Published: Jun 3, 2024 License: Apache-2.0 Imports: 43 Imported by: 17

README

Firehose multi-chain executor

This repository contains all the base components of Firehose and can run the software for multiple blockchains (Bitcoin, Solana, ...) or be used as a library (firehose-ethereum, firehose-antelope)

Compiling

  • go install -v ./cmd/firecore

Or download the latest Release from https://github.com/streamingfast/firehose-core/releases/

Running directly

  • firehose-core can run one or many of the following components:

    • reader-node
    • merger
    • relayer
    • firehose
    • substreams-tier1
    • substreams-tier2
    • (reader-node-stdin -- not run by default)
  • You can use a config file like this (default: firehose.yaml):

start:
  args:
  - reader-node
  - merger
  flags:
    reader-node-path: "/usr/local/bin/firesol"
    reader-node-args: ["fetch", "rpc", "http://localhost:8545", "0"]
  • Run it with firecore start --config-file=./firehose.yaml or set an empty value for config-file (--config-file=) to use the default values.

Using as a library

For chains that implement "firehose block filters" and extensions like "eth_call", this repository can be used as a library by those implementations, such as firehose-ethereum and firehose-antelope.

Philosophy

Firehose maintenance cost comes from two sides. First, there is the chain integration that needs to be maintained. This is done within the chain's code directly by the chain's core developers. The second side of things is the maintenance of the Golang part of the Firehose stack.

Each chain creates its own Firehose Golang repository named firehose-<chain>; the firehose-acme repository acts as an example of this. Firehose is composed of multiple smaller components that can be run independently, each with a set of CLI flags and other configuration parameters.

The initial "Acme" template we had contained a lot of boilerplate code to properly configure and run the Firehose Golang stack. This meant that whenever we needed to add a new feature, add a new flag, change a flag's default value, or make any other kind of improvement, chain integrators maintaining their firehose-<chain> repository were obligated to track the changes made in firehose-acme and apply them back to their repository by hand.

This was also true for continuously tracking updates to the various small libraries that form the Firehose stack. With Firehose becoming more and more streamlined across different chains, that was a recipe for maintenance hell for every chain integration.

This repository aims at solving this maintenance burden by acting as a facade for all the Golang code required to have a functional and up-to-date Firehose stack. This way, we maintain the firehose-core project, adding/changing/removing flags, bumping dependencies, and adding new features, while you, as a maintainer of firehose-<chain> repository, simply need to track firehose-core for new releases and bump a single dependency to be up to date with the latest changes.

Changelog

The CHANGELOG.md of this project is written in such a way that you can copy-paste recent changes straight into your own release notes, so that operators using your firehose-<chain> repository are made aware of deprecation notes, removals, changes and other important elements.

Maintainers, you should copy/paste this content straight into your project; it is written and meant to be copied over. If you were at firehose-core version 1.0.0 and are bumping to 1.1.0, you should copy the content between those two versions to your own repository, replacing the placeholder value fire{chain} with your chain's own binary.

The bash command awk '/## v0.1.11/,/## v0.1.8/' CHANGELOG.md | grep -v '## v0.1.8' can be used to obtain the content between two versions. You can then merge the different Added, Changed, Removed and other sections into a single merged section.
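The extraction above can be wrapped in a small helper (a hypothetical convenience shown here for illustration, not shipped with firehose-core):

```shell
# Hypothetical helper mirroring the awk one-liner above: prints the
# CHANGELOG.md content starting at the "## <to>" heading and stopping
# just before the "## <from>" heading.
changelog_between() {
  local to="$1" from="$2" file="${3:-CHANGELOG.md}"
  awk "/## ${to}/,/## ${from}/" "$file" | grep -v "## ${from}"
}

# Usage: changelog_between v0.1.11 v0.1.8 [CHANGELOG.md]
```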

Update

When bumping firehose-core to a breaking version, the details of such an upgrade will be described in UPDATE.md. A breaking version can currently be noticed when the minor version is bumped: for example, going from v0.1.11 to v0.2.0 introduces some breaking changes. Once we release the very first major version 1, breaking changes will occur when going from v0.y.z to v1.0.0.
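The pre-1.0 rule above can be expressed as a small predicate (a sketch, assuming the "minor bump means breaking before v1, major bump after" convention described here; not part of firehose-core):

```go
package main

import "fmt"

// isBreakingBump reports whether bumping firehose-core from one version to
// another is breaking, under the convention described above: before v1,
// a minor version bump is breaking; any major version bump is breaking.
func isBreakingBump(oldMajor, oldMinor, newMajor, newMinor int) bool {
	if newMajor != oldMajor {
		return true
	}
	return oldMajor == 0 && newMinor > oldMinor
}

func main() {
	fmt.Println(isBreakingBump(0, 1, 0, 2)) // v0.1.x -> v0.2.0
	fmt.Println(isBreakingBump(0, 1, 0, 1)) // v0.1.11 -> v0.1.12
}
```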

Documentation

Index

Constants

const BlockLogPrefix = "BLOCK "
const BlockLogPrefixLen = len(BlockLogPrefix)
const FirePrefix = "FIRE "
const FirePrefixLen = len(FirePrefix)
const InitLogPrefix = "INIT "
const InitLogPrefixLen = len(InitLogPrefix)

Variables

var (
	MaxUint64 = ^uint64(0)
	// Common ports
	MetricsListenAddr string = ":9102"

	// Firehose chain specific port
	IndexBuilderServiceAddr        string = ":10009"
	ReaderNodeGRPCAddr             string = ":10010"
	ReaderNodeManagerAPIAddr       string = ":10011"
	MergerServingAddr              string = ":10012"
	RelayerServingAddr             string = ":10014"
	FirehoseGRPCServingAddr        string = ":10015"
	SubstreamsTier1GRPCServingAddr string = ":10016"
	SubstreamsTier2GRPCServingAddr string = ":10017"

	// Data storage default locations
	BlocksCacheDirectory string = "file://{data-dir}/storage/blocks-cache"
	MergedBlocksStoreURL string = "file://{data-dir}/storage/merged-blocks"
	OneBlockStoreURL     string = "file://{data-dir}/storage/one-blocks"
	ForkedBlocksStoreURL string = "file://{data-dir}/storage/forked-blocks"
	IndexStoreURL        string = "file://{data-dir}/storage/index"
)

Those are `var` and globally available so that some chains can change them to keep backward compatibility. This is not advertised and should **not** be used by new chains.

var ConsoleReaderBlockReadCount = metrics.NewCounter("firecore_console_reader_block_read_count", "Number of blocks read by the console reader")
var Example = func(in string) string {
	return string(cli.Example(in))
}
var ReaderNodeVariablesDocumentation = map[string]string{
	"{data-dir}":        "The current data-dir path defined by the flag 'data-dir'",
	"{node-data-dir}":   "The node data dir path defined by the flag 'reader-node-data-dir'",
	"{hostname}":        "The machine's hostname",
	"{start-block-num}": "The resolved start block number defined by the flag 'reader-node-start-block-num' (can be overwritten)",
	"{stop-block-num}":  "The stop block number defined by the flag 'reader-node-stop-block-num'",
}
var StreamMergedBlocksPreprocThreads = 25
var UnsafeAllowExecutableNameToBeEmpty = false

UnsafeAllowExecutableNameToBeEmpty is used internally and should not be altered.

var UnsafeResolveReaderNodeStartBlock = func(ctx context.Context, startBlockNum uint64, firstStreamableBlock uint64, runtime *launcher.Runtime, rootLog *zap.Logger) (uint64, error) {
	return startBlockNum, nil
}

UnsafeResolveReaderNodeStartBlock is a function that resolves the reader node start block num; by default it simply returns the value of 'reader-node-start-block-num'. However, the function may be overridden by certain chains to perform more complex resolution logic.

var UnsafeRunningFromFirecore = false

UnsafeRunningFromFirecore is used internally and should not be altered.

Functions

func DefaultReaderNodeBootstrapDataURLFlagDescription added in v1.2.4

func DefaultReaderNodeBootstrapDataURLFlagDescription() string

func EncodeBlock added in v0.1.0

func EncodeBlock(b Block) (blk *pbbstream.Block, err error)

func ExamplePrefixed

func ExamplePrefixed[B Block](chain *Chain[B], prefix, in string) string

func GetCommonStoresURLs

func GetCommonStoresURLs(dataDir string) (mergedBlocksStoreURL, oneBlocksStoreURL, forkedBlocksStoreURL string, err error)

func GetIndexStore

func GetIndexStore(dataDir string) (indexStore dstore.Store, possibleIndexSizes []uint64, err error)

func HideGlobalFlagsOnChildCmd added in v0.9.9

func HideGlobalFlagsOnChildCmd(cmd *cobra.Command)

func LastMergedBlockNum added in v0.1.7

func LastMergedBlockNum(ctx context.Context, startBlockNum uint64, store dstore.Store, logger *zap.Logger) uint64

func LowBoundary added in v0.9.9

func LowBoundary(i uint64) uint64

func MakeDirs added in v0.9.9

func MakeDirs(directories []string) error

func MustParseUint64 added in v0.9.9

func MustParseUint64(s string) uint64

func MustReplaceDataDir

func MustReplaceDataDir(dataDir, in string) string

MustReplaceDataDir replaces `{data-dir}` within the `in` argument by the `dataDir` argument
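A minimal sketch of what this substitution amounts to, assuming a plain string replacement (the actual implementation may do more):

```go
package main

import (
	"fmt"
	"strings"
)

// replaceDataDir sketches MustReplaceDataDir's substitution: every
// `{data-dir}` placeholder in `in` is replaced by the resolved data
// directory. A plain string replacement is assumed here.
func replaceDataDir(dataDir, in string) string {
	return strings.ReplaceAll(in, "{data-dir}", dataDir)
}

func main() {
	fmt.Println(replaceDataDir("/var/firehose", "file://{data-dir}/storage/merged-blocks"))
}
```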

func NewConsoleReader added in v0.9.9

func NewConsoleReader(lines chan string, blockEncoder BlockEncoder, logger *zap.Logger, tracer logging.Tracer) (mindreader.ConsolerReader, error)

func ReaderNodeVariablesValues added in v1.2.4

func ReaderNodeVariablesValues(resolver ReaderNodeArgumentResolver) map[string]string

func RegisterMetrics added in v1.1.0

func RegisterMetrics()

Types

type BashNodeBootstrapper added in v1.2.4

type BashNodeBootstrapper struct {
	// contains filtered or unexported fields
}

func NewBashNodeReaderBootstrapper added in v1.2.4

func NewBashNodeReaderBootstrapper(
	cmd *cobra.Command,
	url string,
	resolver ReaderNodeArgumentResolver,
	resolvedNodeArguments []string,
	logger *zap.Logger,
) *BashNodeBootstrapper

func (*BashNodeBootstrapper) Bootstrap added in v1.2.4

func (b *BashNodeBootstrapper) Bootstrap() error

type Block added in v0.1.0

type Block interface {
	proto.Message

	// GetFirehoseBlockID returns the block ID as a string, usually in the representation
	// used by your chain (hex, base58, base64, etc.). The block ID must be unique across
	// all blocks that will ever exist on your chain.
	GetFirehoseBlockID() string

	// GetFirehoseBlockNumber returns the block number as an unsigned integer. The block
	// number could be shared by multiple blocks in which case one is the canonical one
	// and the others are forks (resolution of forks is handled by Firehose core later in the
	// block processing pipeline).
	//
	// The value should be sequentially ordered which means that a block with block number 10
	// has come before block 11. Firehose core will deal with block skips without problem though
	// (e.g. block 1, is produced then block 3 where block 3's parent is block 1).
	GetFirehoseBlockNumber() uint64

	// GetFirehoseBlockParentID returns the block ID of the parent block as a string. All blocks
	// ever produced must have a parent block ID except for the genesis block which is the first
	// one. The value must be the same as the one returned by GetFirehoseBlockID() of the parent.
	//
	// If it's the genesis block, return an empty string.
	GetFirehoseBlockParentID() string

	// GetFirehoseBlockParentNumber returns the block number of the parent block as a uint64.
	// The value must be the same as the one returned by GetFirehoseBlockNumber() of the parent
	// or `0` if the block has no parent
	//
	// This is useful on chains that have holes. On other chains, this is as simple as "BlockNumber - 1".
	GetFirehoseBlockParentNumber() uint64

	// GetFirehoseBlockTime returns the block timestamp as a time.Time of when the block was
	// produced. This should the consensus agreed time of the block.
	GetFirehoseBlockTime() time.Time
}

Block represents the chain-specific Protobuf block. Your chain's specific block model must implement this interface so that Firehose core is able to properly marshal/unmarshal your block into/from the Firehose block envelope binary format.

All the methods are prefixed with `GetFirehoseBlock` to avoid any potential conflicts with the fields/getters of your chain's block model that would prevent you from implementing this interface.

Consumers of your chain's protobuf block model don't need to be aware of those details; they are internal Firehose core information required to function properly.

The value you return for each of those methods must respect the Firehose rules enumerated in the documentation of each method.
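The getters above could be satisfied as follows for a hypothetical "Acme" chain (a sketch only; the real interface also embeds proto.Message, which your generated Protobuf block satisfies, omitted here so the example stays self-contained):

```go
package main

import (
	"fmt"
	"time"
)

// acmeBlock is a hypothetical chain block showing how the Block getters
// are typically implemented.
type acmeBlock struct {
	id        string
	number    uint64
	parentID  string
	parentNum uint64
	timestamp time.Time
}

func (b *acmeBlock) GetFirehoseBlockID() string           { return b.id }
func (b *acmeBlock) GetFirehoseBlockNumber() uint64       { return b.number }
func (b *acmeBlock) GetFirehoseBlockParentID() string     { return b.parentID } // empty for genesis
func (b *acmeBlock) GetFirehoseBlockParentNumber() uint64 { return b.parentNum } // 0 if no parent
func (b *acmeBlock) GetFirehoseBlockTime() time.Time      { return b.timestamp }

func main() {
	blk := &acmeBlock{id: "0xbbb", number: 11, parentID: "0xaaa", parentNum: 10, timestamp: time.Now()}
	fmt.Println(blk.GetFirehoseBlockNumber(), blk.GetFirehoseBlockParentID())
}
```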

type BlockEncoder added in v0.1.0

type BlockEncoder interface {
	Encode(block Block) (blk *pbbstream.Block, err error)
}

BlockEncoder is the interface of an object that encodes a chain-specific block (implementing the Block interface) into the bstream block type, which is the type used by Firehose core to "envelope" the block.

func NewBlockEncoder added in v0.2.1

func NewBlockEncoder() BlockEncoder

type BlockEncoderFunc added in v0.1.0

type BlockEncoderFunc func(block Block) (blk *pbbstream.Block, err error)

func (BlockEncoderFunc) Encode added in v0.1.0

func (f BlockEncoderFunc) Encode(block Block) (blk *pbbstream.Block, err error)

type BlockEnveloppe added in v0.2.1

type BlockEnveloppe struct {
	Block
	LIBNum uint64
}

func (BlockEnveloppe) GetFirehoseBlockLIBNum added in v0.2.1

func (b BlockEnveloppe) GetFirehoseBlockLIBNum() uint64

GetFirehoseBlockLIBNum implements LIBDerivable.

type BlockIndexer added in v0.1.0

type BlockIndexer[B Block] interface {
	ProcessBlock(block B) error
}

type BlockIndexerFactory added in v0.1.0

type BlockIndexerFactory[B Block] func(indexStore dstore.Store, indexSize uint64) (BlockIndexer[B], error)

type BlockLIBNumDerivable added in v0.2.1

type BlockLIBNumDerivable interface {
	// GetFirehoseBlockLIBNum returns the last irreversible block number as an unsigned integer
	// of this block. This is one of the most important piece of information for Firehose core.
	// as it determines when "forks" are now stalled and should be removed from memory and it
	// drives a bunch of important write processes that will write the block to disk only when the
	// block is now irreversible.
	//
	// The value returned should be the newest block that turned irreversible when this
	// block was produced. Assume for example the current block is 100. If the finality rule of a chain
	// is that a block becomes irreversible after 12 blocks have been produced, then the value returned
	// in this case should be 88 (100 - 12), which means that when block 100 was produced, block 88
	// can now be considered irreversible.
	//
	// Irreversibility is chain specific and how the value here is returned depends on the chain. On
	// probabilistically irreversible chains, like Bitcoin, the value returned here is usually the current
	// block number - <threshold> where <threshold> is chosen to be safe enough in all situations (ensure
	// that if the block number is < <threshold>, you properly cap to 0).
	//
	// On deterministically irreversible chains, the last irreversible block number is usually part of the
	// consensus and as such should be part of the Protobuf block model somewhere. In those cases, this
	// value should be returned here.
	GetFirehoseBlockLIBNum() uint64
}

BlockLIBNumDerivable is an optional interface that can be implemented by your chain's block model Block if the LIB can be derived from the Block model directly.

Implementing this makes some Firehose core processes more convenient since less configuration is necessary.
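The probabilistic rule described in the interface documentation (current block number minus a threshold, capped at 0) can be sketched as:

```go
package main

import "fmt"

// probabilisticLIBNum sketches the Bitcoin-style rule: the LIB is the
// current block number minus a safety threshold, capped at 0 when the
// block number is below the threshold.
func probabilisticLIBNum(blockNum, threshold uint64) uint64 {
	if blockNum < threshold {
		return 0
	}
	return blockNum - threshold
}

func main() {
	// When block 100 is produced and the threshold is 12 blocks,
	// block 88 can be considered irreversible.
	fmt.Println(probabilisticLIBNum(100, 12))
}
```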

type BlockTransformerFactory added in v0.1.0

type BlockTransformerFactory func(indexStore dstore.Store, indexPossibleSizes []uint64) (*transform.Factory, error)

BlockTransformerFactory is a bit convoluted, but yes, it's a function acting as a factory that itself returns a factory. The reason for this is that the factory needs access to the index store and the index size to be able to create the actual factory.

In the context of `firehose-core` transform registration, this function will be called exactly once for the overall process. The returned transform.Factory will be used multiple times (once per request requesting this transform).
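The "factory returning a factory" shape can be sketched with plain closures (names here are illustrative only, not the firehose-core API):

```go
package main

import "fmt"

// newTransformFactory is called once per process with the index store
// configuration; the closure it returns plays the role of the per-request
// factory and captures that configuration.
func newTransformFactory(indexStoreURL string, possibleSizes []uint64) func(requestID string) string {
	return func(requestID string) string {
		return fmt.Sprintf("transform for %s (store=%s, %d possible index sizes)",
			requestID, indexStoreURL, len(possibleSizes))
	}
}

func main() {
	// Outer factory invoked once...
	factory := newTransformFactory("file:///data/index", []uint64{1000, 10000})
	// ...returned factory invoked once per request.
	fmt.Println(factory("req-1"))
	fmt.Println(factory("req-2"))
}
```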

type Chain

type Chain[B Block] struct {
	// ShortName is the short name for your Firehose on <Chain> and is usually how
	// your chain's name is represented as a diminutive. If your chain's name is already
	// short, we suggest keeping [ShortName] and [LongName] the same.
	//
	// As an example, Firehose on Ethereum [ShortName] is `eth` while Firehose on NEAR
	// short name is `near`.
	//
	// The [ShortName] **must** be non-empty, lower cased and must **not** contain any spaces.
	ShortName string

	// LongName is the full name of your chain; the case sensitivity of this value is respected.
	// It is used in description of command and some logging output.
	//
	// The [LongName] **must** be non-empty.
	LongName string

	// ExecutableName is the name of the binary that is used to launch a syncing full node for this chain. For example,
	// on Ethereum, the binary by default is `geth`. This is used by the `reader-node` app to specify the
	// `reader-node-binary-name` flag.
	//
	// The [ExecutableName] **must** be non-empty.
	ExecutableName string

	// FullyQualifiedModule is the Go module of your actual `firehose-<chain>` repository and should
	// correspond to the `module` line of the `go.mod` file found at the root of your **own** `firehose-<chain>`
	// repository. The value can be seen using `head -1 go.mod | sed 's/module //'`.
	//
	// The [FullyQualifiedModule] **must** be non-empty.
	FullyQualifiedModule string

	// Version represents the actual version for your Firehose on <Chain>. It should be injected
	// via `ldflags` through your `main` package.
	//
	// The [Version] **must** be non-empty.
	Version string

	// FirstStreamableBlock represents the block number of the first block that is streamable using Firehose.
	// For example on Ethereum it's set to `0`, the genesis block's number, while on Antelope it's
	// set to 2 (the genesis block is 1 there but our instrumentation on this chain instruments
	// only from block #2).
	//
	// This value is actually the default value of the `--common-first-streamable-block` flag and
	// all later usages are done using the flag's value and not this value.
	//
	// So this value is actually dynamic and can be changed at runtime using the
	// `--common-first-streamable-block`.
	//
	// The [FirstStreamableBlock] should be defined but the default 0 value is good enough
	// for most chains.
	FirstStreamableBlock uint64

	// BlockFactory is a factory function that returns a new instance of your chain's Block.
	// This new instance is usually used within `firecore` to unmarshal some bytes into your
	// chain's specific block model and return a [proto.Message] fully instantiated.
	//
	// The [BlockFactory] **must** be non-nil and must return a non-nil [proto.Message].
	BlockFactory func() Block

	// ConsoleReaderFactory is the function that should return the `ConsoleReader` that knows
	// how to transform your chain-specific Firehose instrumentation logs into the proper
	// Block model of your chain.
	//
	// The [ConsoleReaderFactory] **must** be non-nil and must return a non-nil [mindreader.ConsolerReader] or an error.
	ConsoleReaderFactory func(lines chan string, blockEncoder BlockEncoder, logger *zap.Logger, tracer logging.Tracer) (mindreader.ConsolerReader, error)

	// BlockIndexerFactories defines the set of indexes built out of Firehose blocks to be served by Firehose
	// as custom filters.
	//
	// The [BlockIndexerFactories] is optional. If set, each key must be assigned to a non-nil [BlockIndexerFactory]. For now,
	// a single factory can be specified per chain. We use a map to allow for multiple factories in the future.
	//
	// If there are no indexer factories defined, the `index-builder` app will be disabled for this chain.
	BlockIndexerFactories map[string]BlockIndexerFactory[B]

	// BlockTransformerFactories defines the set of transformers that will be enabled when clients request Firehose
	// blocks.
	//
	// The [BlockTransformerFactories] is optional. If set, each key must be assigned to a non-nil
	// [BlockTransformerFactory]. Multiple transformers can be defined.
	BlockTransformerFactories map[protoreflect.FullName]BlockTransformerFactory

	// RegisterExtraStartFlags is a function that is called by the `reader-node` app to allow your chain
	// to register extra custom arguments. This function is called after the common flags are registered.
	//
	// The [RegisterExtraStartFlags] function is optional and not called if nil.
	RegisterExtraStartFlags func(flags *pflag.FlagSet)

	// ReaderNodeBootstrapperFactory enables the `reader-node` app to have a custom bootstrapper for your chain.
	// By default, no specialized bootstrapper is defined.
	//
	// If this is set, the `reader-node` app will use the one bootstrapper returned by this function. The function
	// will receive the `start` command where flags are defined as well as the node's absolute data directory as an
	// argument.
	ReaderNodeBootstrapperFactory func(
		ctx context.Context,
		logger *zap.Logger,
		cmd *cobra.Command,
		resolvedNodeArguments []string,
		resolver ReaderNodeArgumentResolver,
	) (operator.Bootstrapper, error)

	// Tools aggregate together all configuration options required for the various `fire<chain> tools`
	// to work properly for example to print block using chain specific information.
	//
	// The [Tools] element is optional and if not provided, sane defaults will be used.
	Tools *ToolsConfig[B]

	// BlockEncoder is the cached block encoder object that should be used for this chain. It is populated
	// when Init() is called and will be `nil` prior to that.
	//
	// When you need to encode your chain specific block like `pbeth.Block` into a `bstream.Block` you
	// should use this encoder:
	//
	//     bstreamBlock, err := chain.BlockEncoder.Encode(block)
	//
	BlockEncoder BlockEncoder

	DefaultBlockType string

	RegisterSubstreamsExtensions func() (wasm.WASMExtensioner, error)
}

Chain is the omni config object for configuring your chain specific information. It contains various fields that are used everywhere to properly configure the `firehose-<chain>` binary.

Each field is documented about where it's used. Throughout the different Chain options, we will use `Acme` as the chain's name placeholder; replace it with your chain name.

func (*Chain[B]) BinaryName

func (c *Chain[B]) BinaryName() string

BinaryName is the binary name for your Firehose on <Chain>: the lowercased [ShortName] appended to the 'fire' prefix, for example `fireacme`.
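The naming rule amounts to (a sketch of the convention, not the actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// binaryName sketches the convention described above: the lowercased
// short name appended to the "fire" prefix.
func binaryName(shortName string) string {
	return "fire" + strings.ToLower(shortName)
}

func main() {
	fmt.Println(binaryName("Acme"))
}
```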

func (*Chain[B]) Init added in v0.1.0

func (c *Chain[B]) Init()

Init is called when the chain is first loaded to initialize the `bstream` library with the chain specific configuration.

This must be called only once per chain per process.

**Caveats** Two chains in the same Go binary will not work today, as `bstream` uses global variables to store configuration, which prevents multiple chains from existing in the same process.

func (*Chain[B]) LoggerPackageID

func (c *Chain[B]) LoggerPackageID(subPackage string) string

LoggerPackageID computes a logger `packageID` value for a specific sub-package.

func (*Chain[B]) RootLoggerPackageID

func (c *Chain[B]) RootLoggerPackageID() string

RootLoggerPackageID is the `packageID` value used when instantiating the root logger on the chain, which is used by CLI commands and other components.

func (*Chain[B]) Validate

func (c *Chain[B]) Validate()

Validate normalizes some aspects of the Chain values (essentially space trimming) and validates the chain by accumulating errors, panicking with all the errors found along the way.

func (*Chain[B]) VersionString

func (c *Chain[B]) VersionString() string

VersionString computes the version string that will be displayed when calling `firexxx --version`, extracting build information from Git via Golang's `debug.ReadBuildInfo`.

type CommandExecutor

type CommandExecutor func(cmd *cobra.Command, args []string) (err error)

type ConsoleReader added in v0.9.9

type ConsoleReader struct {
	// contains filtered or unexported fields
}

func (*ConsoleReader) Close added in v1.1.0

func (r *ConsoleReader) Close() error

func (*ConsoleReader) Done added in v0.9.9

func (r *ConsoleReader) Done() <-chan interface{}

func (*ConsoleReader) ReadBlock added in v0.9.9

func (r *ConsoleReader) ReadBlock() (out *pbbstream.Block, err error)

type MergedBlocksWriter added in v0.9.9

type MergedBlocksWriter struct {
	Store        dstore.Store
	LowBlockNum  uint64
	StopBlockNum uint64

	Logger *zap.Logger
	Cmd    *cobra.Command

	TweakBlock func(*pbbstream.Block) (*pbbstream.Block, error)
	// contains filtered or unexported fields
}

func (*MergedBlocksWriter) ProcessBlock added in v0.9.9

func (w *MergedBlocksWriter) ProcessBlock(blk *pbbstream.Block, obj interface{}) error

func (*MergedBlocksWriter) WriteBundle added in v1.2.0

func (w *MergedBlocksWriter) WriteBundle() error

type ParsingStats added in v1.1.0

type ParsingStats struct {
}

type ReaderNodeArgumentResolver added in v0.2.1

type ReaderNodeArgumentResolver = func(in string) string

type ReaderNodeBootstrapperFactory added in v1.2.4

type ReaderNodeBootstrapperFactory func(
	ctx context.Context,
	logger *zap.Logger,
	cmd *cobra.Command,
	resolvedNodeArguments []string,
	resolver ReaderNodeArgumentResolver,
) (operator.Bootstrapper, error)

func DefaultReaderNodeBootstrapper added in v1.2.4

func DefaultReaderNodeBootstrapper(
	overrideFactory ReaderNodeBootstrapperFactory,
) ReaderNodeBootstrapperFactory

DefaultReaderNodeBootstrapper is a construction you can use when you want the default bootstrapper logic to be applied but you need to support new bootstrap data URL format(s) or override the default behavior for some types.

The `overrideFactory` argument is a factory function that will be called first, if it returns a non-nil bootstrapper, it will be used and the default logic will be skipped. If it returns nil, the default logic will be applied.
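The override-first resolution described above can be sketched with plain closures (names and signatures are illustrative only, not the firehose-core API):

```go
package main

import "fmt"

type bootstrapper interface{ Bootstrap() error }

type namedBootstrapper struct{ name string }

func (n *namedBootstrapper) Bootstrap() error { return nil }

// resolveBootstrapper sketches the override-first logic: the override
// factory runs first, and a non-nil result short-circuits the default
// logic; a nil result falls back to the default factory.
func resolveBootstrapper(
	override func(url string) bootstrapper,
	defaultFactory func(url string) bootstrapper,
	url string,
) bootstrapper {
	if override != nil {
		if b := override(url); b != nil {
			return b
		}
	}
	return defaultFactory(url)
}

func main() {
	defaultF := func(url string) bootstrapper { return &namedBootstrapper{name: "default"} }
	override := func(url string) bootstrapper {
		if url == "s3://snapshots/latest.tar.gz" { // hypothetical custom URL scheme
			return &namedBootstrapper{name: "custom"}
		}
		return nil // fall back to the default handling
	}

	b := resolveBootstrapper(override, defaultF, "s3://snapshots/latest.tar.gz")
	fmt.Println(b.(*namedBootstrapper).name)
}
```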

type SanitizeBlockForCompareFunc added in v0.1.9

type SanitizeBlockForCompareFunc func(block *pbbstream.Block) *pbbstream.Block

SanitizeBlockForCompareFunc takes a chain agnostic [block] and transforms it in-place, removing fields that should not be compared.

type StreamFactory added in v1.1.1

type StreamFactory struct {
	// contains filtered or unexported fields
}

func NewStreamFactory added in v1.1.1

func NewStreamFactory(
	mergedBlocksStore dstore.Store,
	forkedBlocksStore dstore.Store,
	hub *hub.ForkableHub,
	transformRegistry *transform.Registry,
) *StreamFactory

func (*StreamFactory) New added in v1.1.1

func (sf *StreamFactory) New(
	ctx context.Context,
	handler bstream.Handler,
	request *pbfirehose.Request,
	logger *zap.Logger) (*stream.Stream, error)

type TarballNodeBootstrapper added in v1.2.4

type TarballNodeBootstrapper struct {
	// contains filtered or unexported fields
}

func NewTarballReaderNodeBootstrapper added in v1.2.4

func NewTarballReaderNodeBootstrapper(
	url string,
	dataDir string,
	logger *zap.Logger,
) *TarballNodeBootstrapper

func (*TarballNodeBootstrapper) Bootstrap added in v1.2.4

func (b *TarballNodeBootstrapper) Bootstrap() error

type ToolsConfig

type ToolsConfig[B Block] struct {
	// SanitizeBlockForCompare is a function that takes a chain agnostic [block] and transforms it in-place, removing fields
	// that should not be compared.
	//
	// The [SanitizeBlockForCompare] is optional; if nil, a no-op sanitizer will be used.
	SanitizeBlockForCompare SanitizeBlockForCompareFunc

	// RegisterExtraCmd enables you to register extra commands to the `fire<chain> tools` group.
	// The callback function is called with the `toolsCmd` command that is the root command of the `fire<chain> tools`
	// as well as the chain, the root logger and root tracer for tools.
	//
	// You are responsible for calling `toolsCmd.AddCommand` to register your extra commands.
	//
	// The [RegisterExtraCmd] function is optional and not called if nil.
	RegisterExtraCmd func(chain *Chain[B], toolsCmd *cobra.Command, zlog *zap.Logger, tracer logging.Tracer) error

	// TransformFlags specify chain-specific transform flags (and the parsing of those flags' values). The flags defined
	// there are added to all Firehose-client-like tools commands (`tools firehose-client`, `tools firehose-prometheus-exporter`, etc.)
	// automatically.
	//
	// Refer to [TransformFlags] for further details on how to respect the contract of this field.
	//
	// The [TransformFlags] is optional.
	TransformFlags *TransformFlags

	// MergedBlockUpgrader, when defined, enables your chain to upgrade between different versions of "merged-blocks".
	// It happens from time to time that a data bug is found in the way merged blocks were produced, and it's possible
	// to fix it by applying a transformation to the block. This is what this function is for.
	//
	// When defined, a new command `fire<chain> tools upgrade-merged-blocks` is added. This command enables operators
	// to upgrade from one version of the merged blocks to another.
	//
	// The [MergedBlockUpgrader] is optional; not specifying it disables the command `fire<chain> tools upgrade-merged-blocks`.
	MergedBlockUpgrader func(block *pbbstream.Block) (*pbbstream.Block, error)
}

func (*ToolsConfig[B]) GetSanitizeBlockForCompare added in v0.1.9

func (t *ToolsConfig[B]) GetSanitizeBlockForCompare() SanitizeBlockForCompareFunc

GetSanitizeBlockForCompare returns the [SanitizeBlockForCompare] value if defined, otherwise a no-op sanitizer.

type TransformFlags added in v0.2.1

type TransformFlags struct {
	// Register is a function that will be called when we need to register the flags for the transforms.
	// You receive the command's flag set and you are responsible for registering the flags.
	Register func(flags *pflag.FlagSet)

	// Parse is a function that will be called when we need to extract the transforms out of the flags.
	// You receive the command and the logger and you are responsible for parsing the flags and returning
	// the transforms.
	//
	// Flag values can be obtained with `sflags.MustGetString(cmd, "<flag-name>")`.
	Parse func(cmd *cobra.Command, logger *zap.Logger) ([]*anypb.Any, error)
}

Directories

Path Synopsis
cmd
internal
Code generated by 'go run github.com/streamingfast/firehose-core/protoregistry/generator well_known.go protoregistry', DO NOT EDIT!
Code generated by 'go run github.com/streamingfast/firehose-core/protoregistry/generator well_known.go protoregistry', DO NOT EDIT!
