Documentation ¶
Index ¶
- func BlockToProtoBuf(block *data.Block, db *gorm.DB) *pb.Block
- func EventToProtoBuf(event *data.Event) *pb.Event
- func EventsToProtoBuf(events *data.Events) []*pb.Event
- func ProcessBlock(db *gorm.DB, data []byte, control chan bool)
- func ProtoBufToBlock(block *pb.Block) *_db.PackedBlock
- func ProtoBufToEvent(event *pb.Event) *_db.Events
- func ProtoBufToEvents(events []*pb.Event) []*_db.Events
- func ProtoBufToTransaction(tx *pb.Transaction) *_db.PackedTransaction
- func ProtoBufToTransactions(txs []*pb.Transaction) []*_db.PackedTransaction
- func PutIntoSink(fd io.Writer, count uint64, data chan []byte, done chan bool)
- func RestoreFromSnapshot(db *gorm.DB, file string) (bool, uint64)
- func TakeSnapshot(db *gorm.DB, file string, start uint64, end uint64, count uint64) bool
- func TransactionToProtoBuf(tx *data.Transaction, db *gorm.DB) *pb.Transaction
- func TransactionsToProtoBuf(txs *data.Transactions, db *gorm.DB) []*pb.Transaction
- func UnmarshalCoordinator(control <-chan bool, count <-chan uint64, done chan bool)
- func UnmarshalData(data []byte) *pb.Block
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func BlockToProtoBuf ¶
func BlockToProtoBuf(block *data.Block, db *gorm.DB) *pb.Block
BlockToProtoBuf - Creates a protocol buffer compatible representation of block data, which can be easily serialized & deserialized when taking a snapshot and restoring from it
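A minimal usage sketch of the snapshot side, assuming a block already loaded through the package's data layer and the standard Go protobuf runtime (google.golang.org/protobuf/proto) for serialization; everything else here is documented API:

    // Convert one block into its protobuf form and serialize it;
    // `raw` becomes the payload of one length-prefixed snapshot record.
    pbBlock := BlockToProtoBuf(block, db) // block *data.Block, db *gorm.DB assumed in scope
    raw, err := proto.Marshal(pbBlock)
    if err != nil {
        // a real caller would skip this block or abort the snapshot here
        log.Printf("failed to serialize block: %s", err.Error())
    }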
func EventToProtoBuf ¶
func EventToProtoBuf(event *data.Event) *pb.Event
EventToProtoBuf - Creates a protocol buffer compatible representation of event data, which can be easily serialized & deserialized when taking a snapshot and restoring from it
func EventsToProtoBuf ¶
func EventsToProtoBuf(events *data.Events) []*pb.Event
EventsToProtoBuf - Creates protocol buffer compatible representations of a set of events, which can be easily serialized & deserialized when taking a snapshot and restoring from it
func ProcessBlock ¶
func ProcessBlock(db *gorm.DB, data []byte, control chan bool)
ProcessBlock - Given a byte array read from the snapshot file, attempts to unmarshal it into structured block data, which is then persisted into the DB
It also lets the coordinator go routine know that this worker has completed its job
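A minimal sketch of what one such worker might look like, stitched together from this package's documented functions; the persistence step itself is package-internal and elided here:

    // Hypothetical worker body: deserialize the chunk, convert it back to
    // database form, then signal the coordinator regardless of outcome.
    func processBlockSketch(db *gorm.DB, data []byte, control chan bool) {
        defer func() { control <- true }() // counted by UnmarshalCoordinator
        block := UnmarshalData(data)       // nil if the chunk fails to deserialize
        if block == nil {
            return
        }
        packed := ProtoBufToBlock(block) // *_db.PackedBlock: header, transactions & events
        _ = packed                       // the real implementation persists this into db
    }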
func ProtoBufToBlock ¶
func ProtoBufToBlock(block *pb.Block) *_db.PackedBlock
ProtoBufToBlock - Required while restoring from a snapshot, i.e. when putting whole block data back into the database
func ProtoBufToEvent ¶
func ProtoBufToEvent(event *pb.Event) *_db.Events
ProtoBufToEvent - Required while restoring from a snapshot, i.e. when putting whole block data back into the database
func ProtoBufToEvents ¶
func ProtoBufToEvents(events []*pb.Event) []*_db.Events
ProtoBufToEvents - Required while restoring from a snapshot, i.e. when putting whole block data back into the database
func ProtoBufToTransaction ¶
func ProtoBufToTransaction(tx *pb.Transaction) *_db.PackedTransaction
ProtoBufToTransaction - Required while restoring from a snapshot, i.e. when putting whole block data back into the database
func ProtoBufToTransactions ¶
func ProtoBufToTransactions(txs []*pb.Transaction) []*_db.PackedTransaction
ProtoBufToTransactions - Required while restoring from a snapshot, i.e. when putting whole block data back into the database
func PutIntoSink ¶
func PutIntoSink(fd io.Writer, count uint64, data chan []byte, done chan bool)
PutIntoSink - Given an open file handle and communication channels, waits to receive new data to be written to the snapshot file. Works until all data has been received; once done processing, it lets the coordinator go routine know that it has successfully persisted all contents into the file.
This writer runs as an independent go routine, which simply writes data to the file handle.
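A minimal sketch of such a writer loop, inferred from the signature; the success/failure semantics of the done channel are an assumption:

    // Hypothetical writer: drain `count` chunks from the channel, write each
    // to the sink, then report completion to the coordinator.
    func putIntoSinkSketch(fd io.Writer, count uint64, data chan []byte, done chan bool) {
        ok := true
        for i := uint64(0); i < count; i++ {
            if _, err := fd.Write(<-data); err != nil {
                ok = false // a real implementation would also report the failure
                break
            }
        }
        done <- ok
    }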
func RestoreFromSnapshot ¶
func RestoreFromSnapshot(db *gorm.DB, file string) (bool, uint64)
RestoreFromSnapshot - Given a path to a snapshot file and a handle to the database into which entries are to be restored, attempts to restore from that snapshot file
Reading from the file is done sequentially, but each byte array of interest, i.e. a chunk holding one block's data, is processed concurrently by multiple workers
Workers are hired from a worker pool of fixed size.
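A minimal sketch of the sequential read loop, assuming each record is a 4-byte size prefix followed by the serialized block; the little-endian byte order is an assumption, only the 4-byte prefix is documented:

    // Hypothetical read loop over a length-prefixed snapshot file; `handle`
    // stands in for dispatching the chunk to a worker from the pool.
    func readChunks(r io.Reader, handle func([]byte)) error {
        sizeBuf := make([]byte, 4)
        for {
            if _, err := io.ReadFull(r, sizeBuf); err != nil {
                if err == io.EOF {
                    return nil // clean end of snapshot
                }
                return err
            }
            chunk := make([]byte, binary.LittleEndian.Uint32(sizeBuf))
            if _, err := io.ReadFull(r, chunk); err != nil {
                return err
            }
            handle(chunk) // e.g. hand off to ProcessBlock via the worker pool
        }
    }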
func TakeSnapshot ¶
func TakeSnapshot(db *gorm.DB, file string, start uint64, end uint64, count uint64) bool
TakeSnapshot - Given a sink file path & the number of blocks to be read from the database, attempts to concurrently read whole blocks i.e. block header, transactions & events, serialize them into protocol buffer format & write them into the file, each with its size prepended in 4 bytes of reserved space.
This encoding mechanism lets us encode & decode efficiently while using resources gracefully, i.e. with buffered processing we get to snapshot very large datasets without consuming too much memory.
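A minimal sketch of writing one such record; the 4-byte prefix is documented, while the little-endian byte order is an assumption:

    // Hypothetical framing for one snapshot record: 4-byte size, then payload.
    func writeChunk(fd io.Writer, payload []byte) error {
        sizeBuf := make([]byte, 4)
        binary.LittleEndian.PutUint32(sizeBuf, uint32(len(payload)))
        if _, err := fd.Write(sizeBuf); err != nil {
            return err
        }
        _, err := fd.Write(payload)
        return err
    }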
func TransactionToProtoBuf ¶
func TransactionToProtoBuf(tx *data.Transaction, db *gorm.DB) *pb.Transaction
TransactionToProtoBuf - Creates a protocol buffer compatible representation of transaction data, which can be easily serialized & deserialized when taking a snapshot and restoring from it
func TransactionsToProtoBuf ¶
func TransactionsToProtoBuf(txs *data.Transactions, db *gorm.DB) []*pb.Transaction
TransactionsToProtoBuf - Creates protocol buffer compatible representations of a set of transactions, which can be easily serialized & deserialized when taking a snapshot and restoring from it
func UnmarshalCoordinator ¶
func UnmarshalCoordinator(control <-chan bool, count <-chan uint64, done chan bool)
UnmarshalCoordinator - When a lot of unmarshal workers are created to process, i.e. deserialize & put into the DB, more entries in a smaller amount of time, they need to be synchronized properly
That's all this go routine attempts to achieve
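A minimal sketch of such a coordinator, inferred from the signature; it assumes the count channel delivers the total number of expected completion signals, which is a guess about the protocol:

    // Hypothetical coordinator: learn how many workers to wait for, count
    // their completion signals, then report that restoring may finish.
    func unmarshalCoordinatorSketch(control <-chan bool, count <-chan uint64, done chan bool) {
        total := <-count // total number of chunks handed to workers
        for i := uint64(0); i < total; i++ {
            <-control // one signal per finished worker
        }
        done <- true
    }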
func UnmarshalData ¶
func UnmarshalData(data []byte) *pb.Block
UnmarshalData - Given a byte array, attempts to deserialize it into structured block data, which the caller then attempts to write into the DB
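A minimal sketch of this deserialization step, assuming the standard Go protobuf runtime (google.golang.org/protobuf/proto):

    // Hypothetical body: returns nil when the chunk is corrupt, so callers
    // can skip it instead of aborting the whole restore.
    func unmarshalDataSketch(data []byte) *pb.Block {
        block := &pb.Block{}
        if err := proto.Unmarshal(data, block); err != nil {
            return nil
        }
        return block
    }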
Types ¶
This section is empty.