Documentation ¶
Overview ¶
Package fork handles tracking of hard fork status and is used to determine which consensus rules apply at a given block height.
Index ¶
- Variables
- func Argon2i(bytes []byte) []byte
- func BigToCompact(n *big.Int) uint32
- func Blake14lr(bytes []byte) []byte
- func Blake2b(bytes []byte) []byte
- func Blake2s(bytes []byte) []byte
- func CompactToBig(compact uint32) *big.Int
- func Cryptonight7v2(bytes []byte) []byte
- func DivHash(hf func([]byte) []byte, blockbytes []byte, howmany int) []byte
- func GetAlgoID(algoname string, height int32) uint32
- func GetAlgoName(algoVer int32, height int32) (name string)
- func GetAlgoVer(name string, height int32) (version int32)
- func GetAveragingInterval(height int32) (r int64)
- func GetCurrent(height int32) (curr int)
- func GetMinBits(algoname string, height int32) (mb uint32)
- func GetMinDiff(algoname string, height int32) (md *big.Int)
- func GetTargetTimePerBlock(height int32) (r int64)
- func Hash(bytes []byte, name string, height int32) (out chainhash.Hash)
- func Keccak(bytes []byte) []byte
- func Lyra2REv2(bytes []byte) []byte
- func SHA256D(bytes []byte) []byte
- func Scrypt(bytes []byte) []byte
- func Skein(bytes []byte) []byte
- func Stribog(bytes []byte) []byte
- type AlgoParams
- type HardForks
Constants ¶
This section is empty.
Variables ¶
var (
    // AlgoVers is the lookup for the pre-hardfork block version numbers
    AlgoVers = map[int32]string{
        2:   "sha256d",
        514: "scrypt",
    }

    // Algos are the specifications identifying the algorithm used in the
    // block proof
    Algos = map[string]AlgoParams{
        AlgoVers[2]: {
            Version: 2,
            MinBits: MainPowLimitBits,
            NSperOp: 824,
        },
        AlgoVers[514]: {
            Version: 514,
            MinBits: MainPowLimitBits,
            AlgoID:  1,
            NSperOp: 740839,
        },
    }

    // FirstPowLimit is the highest proof-of-work target used after the first hard fork
    FirstPowLimit = func() big.Int {
        mplb, _ := hex.DecodeString(
            "0fffff0000000000000000000000000000000000000000000000000000000000")
        return *big.NewInt(0).SetBytes(mplb)
    }()

    // FirstPowLimitBits is the compact representation of FirstPowLimit
    FirstPowLimitBits = BigToCompact(&FirstPowLimit)

    // IsTestnet is set at startup here to be accessible to all other libraries
    IsTestnet bool

    // List is the list of existing hard forks and when they activate
    List = []HardForks{
        {
            Number:           0,
            Name:             "Halcyon days",
            ActivationHeight: 0,
            Algos:            Algos,
            AlgoVers:         AlgoVers,
            WorkBase: func() (out int64) {
                for i := range Algos {
                    out += Algos[i].NSperOp
                }
                out /= int64(len(Algos))
                return
            }(),
            TargetTimePerBlock: 300,
            AveragingInterval:  10,
            TestnetStart:       0,
        },
        {
            Number:           1,
            Name:             "Plan 9 from Crypto Space",
            ActivationHeight: 250000,
            Algos:            P9Algos,
            AlgoVers:         P9AlgoVers,
            WorkBase: func() (out int64) {
                for i := range P9Algos {
                    out += P9Algos[i].NSperOp
                }
                out /= int64(len(P9Algos))
                return
            }(),
            TargetTimePerBlock: 36,
            AveragingInterval:  3600,
            TestnetStart:       1,
        },
    }

    // P9AlgoVers is the lookup for after the first hardfork
    P9AlgoVers = map[int32]string{
        5:  "blake2b",
        6:  "argon2i",
        7:  "cn7v2",
        8:  "keccak",
        9:  "scrypt",
        10: "sha256d",
        11: "skein",
        12: "stribog",
        13: "lyra2rev2",
    }

    // P9Algos is the algorithm specifications after the hard fork.
    // The given ns/op values are approximate, and kopach bench writes them to
    // a file. They should vary on different architectures due to limitations
    // of division bit width and the cache behaviour of hash functions, and
    // refer to one thread of execution; how this relates to the number of
    // cores will also vary.
    P9Algos = map[string]AlgoParams{
        P9AlgoVers[5]:  {5, FirstPowLimitBits, 0, 60425},
        P9AlgoVers[6]:  {6, FirstPowLimitBits, 1, 66177},
        P9AlgoVers[7]:  {7, FirstPowLimitBits, 2, 102982451},
        P9AlgoVers[8]:  {8, FirstPowLimitBits, 3, 64589},
        P9AlgoVers[9]:  {9, FirstPowLimitBits, 4, 1416611},
        P9AlgoVers[10]: {10, FirstPowLimitBits, 5, 67040},
        P9AlgoVers[11]: {11, FirstPowLimitBits, 7, 66803},
        P9AlgoVers[12]: {12, FirstPowLimitBits, 6, 1858295},
        P9AlgoVers[13]: {13, FirstPowLimitBits, 8, 134025},
    }

    // SecondPowLimit is the highest proof-of-work target used after the second hard fork
    SecondPowLimit = func() big.Int {
        mplb, _ := hex.DecodeString(
            "001f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f1f")
        return *big.NewInt(0).SetBytes(mplb)
    }()

    // SecondPowLimitBits is the compact representation of SecondPowLimit
    SecondPowLimitBits = BigToCompact(&SecondPowLimit)

    // MainPowLimit is the highest proof-of-work target before the first hard fork
    MainPowLimit = func() big.Int {
        mplb, _ := hex.DecodeString(
            "00000fffff000000000000000000000000000000000000000000000000000000")
        return *big.NewInt(0).SetBytes(mplb)
    }()

    // MainPowLimitBits is the compact representation of MainPowLimit
    MainPowLimitBits = BigToCompact(&MainPowLimit)
)
var HashReps = 2
HashReps sets the number of multiplication/division cycles repeated before the final hash. On release for mainnet this will probably be set to 9 or so, to raise the difficulty to a reasonable level for the hard fork. At 5 repetitions (the first pass plus 4 repeats), an example block header produces a number of around 48kb in byte size and ~119,000 decimal digits, which is then finally hashed down to 32 bytes.
Functions ¶
func BigToCompact ¶
BigToCompact converts a whole number N to a compact representation using an unsigned 32-bit number. The compact representation only provides 23 bits of precision, so values larger than (2^23 - 1) only encode the most significant digits of the number. See CompactToBig for details.
func CompactToBig ¶
CompactToBig converts a compact representation of a whole number N, encoded as an unsigned 32-bit number, to a big integer. The representation is similar to IEEE754 floating point numbers. Like IEEE754 floating point, there are three basic components: the sign, the exponent, and the mantissa. They are broken out as follows:
- the most significant 8 bits represent the unsigned base 256 exponent
- bit 23 (the 24th bit) represents the sign bit
- the least significant 23 bits represent the mantissa

    -------------------------------------------------
    |   Exponent     |    Sign    |    Mantissa     |
    -------------------------------------------------
    | 8 bits [31-24] | 1 bit [23] | 23 bits [22-00] |
    -------------------------------------------------
The formula to calculate N is:
N = (-1^sign) * mantissa * 256^(exponent-3)
This compact form is only used in bitcoin to encode unsigned 256-bit numbers which represent difficulty targets, so there is not really a need for a sign bit, but it is implemented here to stay consistent with bitcoind.
func Cryptonight7v2 ¶
Cryptonight7v2 takes bytes and returns a Cryptonight 7 v2 256-bit hash.
func DivHash ¶
DivHash first runs an arbitrary big number calculation involving a very large integer, then hashes the result. In this way, the hash requires both one of 9 arbitrary hash functions plus a big number long division operation and three multiplication operations, unlikely to be performed efficiently on anything other than a CPU or GPU, with contrary advantages on each: GPU division uses 32-bit wide operations while CPU division is 64 bits wide, but a GPU hashes about as fast as a CPU to varying degrees depending on memory hardness (CPU cache size then improves CPU performance on some hashes).
This hash generates very large random numbers from an 80-byte block using a procedure involving two squares of recombined, spliced halves, multiplied together and then divided by the original block reversed, repeated 4 more times, yielding a product of over 48kb, which is hashed afterwards (see the example in fork/scratch/divhash.go).
This would be around half of the available level 1 cache of a Ryzen 5 1600, likely distributed as 6 threads using the 64kb caches and 3 threads using the smaller 32kb ones. Here is a block diagram of the Zen architecture: https://en.wikichip.org/w/images/thumb/0/02/zen_block_diagram.svg/1178px-zen_block_diagram.svg.png This shows one core; with Zen the cores are independently partitioned, and each core has one divider. Thus for a 6-core part, 6 is the right number of threads to use, as other numbers will lead to contention and memory copies. Its ability to branch twice per cycle will probably be a big boost to its performance in this task.
Long division units are expensive and slow, which makes them a perfect application-specific proof of work: a substantial part of a chip's cost is proportional to the relative surface area of its circuitry, and the dividers are substantially more than 10% of the total logic on a CPU. There is little chance that packing a package with half IDIV and half IMUL units, plus enough cache for each, would be economic or accessible to most chip manufacturers at a scale and clock that beats the CPU on price.
Most GPUs still only have 32-bit integer divide units because the mathematics done by GPUs is mainly multiplication, addition and subtraction, specifically on matrices, which are per-bit equivalent to big (128-512 bit) additions, built for walking graphs and generating directional or particle effects, gravity simulations, and the like. Video is very parallelisable, so generally speaking the main bulk of a GPU's processing capability does not help here: its caches hold only a fraction of the number at a time as it is computed, and its special-purpose dividers, only 32 bits wide, are relatively swamped by the stream processors in their big grids.
The cheaper arithmetic units can be programmed to perform the division as well, but the gains are marginal, adding up to less than a 10% improvement once the complexity of the code and its scheduling is accounted for.
Long story short, this hash function should be the end of big-margin ASIC mining, with a lot of R&D funds instead going towards improving smaller fabs to produce such processors.
func GetAlgoID ¶
GetAlgoID returns the 'algo_id', which pre-hardfork is not the same as the block version number, but afterwards is.
func GetAlgoName ¶
GetAlgoName returns the string identifier of an algorithm depending on hard fork activation status
func GetAlgoVer ¶
GetAlgoVer returns the version number for a given algorithm (by string name) at a given height. If "random" is given as the name, a random version is chosen using the system's secure random source (for randomised CPU mining).
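The height-dependent lookups in this package all follow the same pattern: pick the pre- or post-fork table based on the activation height. A minimal sketch of that pattern, using simplified tables and the 250000 activation height from the List variable above (the real functions also account for testnet activation):

```go
package main

import "fmt"

// Simplified version tables, mirroring the shape of AlgoVers/P9AlgoVers.
var (
	algoVers   = map[int32]string{2: "sha256d", 514: "scrypt"}
	p9AlgoVers = map[int32]string{5: "blake2b", 9: "scrypt", 10: "sha256d"}
)

// Activation height of the "Plan 9 from Crypto Space" fork, from List.
const p9ActivationHeight = 250000

// getAlgoName picks the lookup table according to hard fork status.
func getAlgoName(algoVer int32, height int32) string {
	if height >= p9ActivationHeight {
		return p9AlgoVers[algoVer]
	}
	return algoVers[algoVer]
}

func main() {
	fmt.Println(getAlgoName(514, 100))   // pre-fork lookup: scrypt
	fmt.Println(getAlgoName(10, 300000)) // post-fork lookup: sha256d
}
```

Whether the boundary is inclusive (>= versus >) is an assumption here; consult the package source for the exact comparison.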
func GetAveragingInterval ¶
GetAveragingInterval returns the active block interval target based on hard fork status
func GetCurrent ¶
GetCurrent returns the hardfork number code
func GetMinBits ¶
GetMinBits returns the minimum diff bits based on height and testnet
func GetMinDiff ¶
GetMinDiff returns the minimum difficulty in uint256 form
func GetTargetTimePerBlock ¶
GetTargetTimePerBlock returns the active block interval target based on hard fork status
Types ¶
type AlgoParams ¶
AlgoParams holds the identifying block version number and minimum target bits for an algorithm.
type HardForks ¶
type HardForks struct {
    Number             uint32
    ActivationHeight   int32
    Name               string
    Algos              map[string]AlgoParams
    AlgoVers           map[int32]string
    WorkBase           int64
    TargetTimePerBlock int32
    AveragingInterval  int64
    TestnetStart       int32
}
HardForks holds the details related to a hard fork: its number, name, and activation height.
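Resolving which entry of List is active at a given height (what GetCurrent appears to do) can be sketched as follows, assuming forks are ordered by ascending ActivationHeight as in the List variable above; the trimmed struct and helper name here are illustrative, not the package's exact code:

```go
package main

import "fmt"

// hardFork is a trimmed-down copy of the HardForks struct above.
type hardFork struct {
	Number           uint32
	Name             string
	ActivationHeight int32
}

// list mirrors the two entries of the package's List variable.
var list = []hardFork{
	{0, "Halcyon days", 0},
	{1, "Plan 9 from Crypto Space", 250000},
}

// getCurrent returns the index of the latest fork activated at height.
func getCurrent(height int32) int {
	curr := 0
	for i, hf := range list {
		if height >= hf.ActivationHeight {
			curr = i
		}
	}
	return curr
}

func main() {
	fmt.Println(list[getCurrent(100)].Name)    // Halcyon days
	fmt.Println(list[getCurrent(500000)].Name) // Plan 9 from Crypto Space
}
```

Because each fork's parameters (Algos, TargetTimePerBlock, AveragingInterval) hang off the selected entry, this one lookup drives all the height-dependent getters above.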