Let G be a large table of sequential fixed-size binary data entries (say 32 bits each), indexed (0,...,2^H - 1), so that the table holds 2^H entries for some integer H.
Implement and benchmark Merkle traversal in log space and time as defined here -> Algorithm 4: Logarithmic Merkle Tree Traversal.
Given a leaf with index i into the table, compute the Merkle proof for that leaf.
The value of an internal node is the hash of its 2 children.
The Merkle proof is the set of hashes of the siblings of the nodes on the path from the leaf to the root, together with the hash of the root.
table.go provides the table data structure; table_test.go shows how to populate a table with binary data backed by a binary data file.
This functionality is required for our proof-of-space-time protocol.
The main goal is to benchmark an efficient traversal implementation to be able to better model space and time parameters. In other words, how long does it take on a modern CPU to traverse a table with a large number of entries?
For example, given n=36 and an entry size of 1 byte, what are the space and time used on a modern Intel Core CPU to compute the Merkle proof for the last (rightmost) item in the table?
Details
Let G be a table with 2^n fixed-sized binary data entries, say 1 byte per entry.
Input: index i in range [0,2^n-1]
Output: the set of the values/labels of the siblings of the nodes on the path from leaf i to the table's Merkle root, plus the root node value (the Merkle proof)
NewTable creates a new post data table.
id: node id - base58 encoded key
dir: node data directory
mul: table size multiplier, e.g. 2 for twice the minimum table size of S*E