Documentation ¶
Overview ¶
Package xferspdy provides the basic interfaces around the binary diff and patching process
Example ¶
//Create fingerprint of a file
fingerprint := NewFingerprint("/path/foo_v1.binary", 1024)

//Say the file was updated
//Lets generate the diff
diff := NewDiff("/path/foo_v2.binary", *fingerprint)

//diff is sufficient to recover/recreate the modified file, given the base/source and the diff.
modifiedFile, _ := os.OpenFile("/path/foo_v2_from_v1.binary", os.O_CREATE|os.O_WRONLY, 0777)

//This writes the output to modifiedFile (Writer). The result will be the same binary as /path/foo_v2.binary
PatchFile(diff, "/path/foo_v1.binary", modifiedFile)
Output:
Index ¶
Examples ¶
Constants ¶
This section is empty.
Variables ¶
var (
DEFAULT_GENERATOR = &FingerprintGenerator{ConcurrentMode: true, NumWorkers: 8}
)
Functions ¶
func Patch ¶
func Patch(delta []Block, sign Fingerprint, t io.Writer)
Patch is a wrapper around PatchFile (the current version only supports patching of local files)
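A minimal usage sketch, reusing the paths from the package example (the output file name is an assumption); since a Fingerprint records its Source, Patch needs only the diff, the fingerprint and a destination Writer:

fingerprint := NewFingerprint("/path/foo_v1.binary", 1024)
diff := NewDiff("/path/foo_v2.binary", *fingerprint)

// Rebuild the updated file; the fingerprint carries the source path,
// so it does not have to be passed explicitly as with PatchFile.
out, _ := os.Create("/path/foo_v2_rebuilt.binary")
defer out.Close()
Patch(diff, *fingerprint, out)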
Types ¶
type Block ¶
type Block struct {
    Start, End int64
    Checksum32 uint32
    Sha256hash [sha256.Size]byte
    HasData    bool
    RawBytes   []byte
}
Block represents a byte slice of the file. For each block, the following are computed:
* Adler-32 and SHA256 checksums
* Start and End byte positions of the block
* Whether or not it is a data block. If this is a data block, RawBytes captures the byte data represented by this block.
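A small sketch of how the HasData flag is typically consulted when walking a slice of Blocks (the helper name is hypothetical):

// summarize counts how many Blocks matched the fingerprint and how many
// carry raw data that must travel with the diff.
func summarize(diff []Block) (matched, data int) {
    for _, b := range diff {
        if b.HasData {
            data++ // non-matching block: RawBytes holds the content
        } else {
            matched++ // matching block: only hashes and positions are kept
        }
    }
    return
}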
func NewDiff ¶
func NewDiff(filename string, sign Fingerprint) []Block
NewDiff computes a diff between a given file and a Fingerprint created from some other file. The diff is represented as a slice of Blocks. Matching Blocks are represented just by their hashes and start/end byte positions; non-matching Blocks carry the raw bytes.
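Because the diff is an ordinary slice of Blocks, it can be serialized and moved to wherever the base file lives; a minimal sketch using bytes and encoding/gob (the transport choice is an illustration, not a package requirement):

fingerprint := NewFingerprint("/path/foo_v1.binary", 1024)
diff := NewDiff("/path/foo_v2.binary", *fingerprint)

// Encode the diff for transfer.
var buf bytes.Buffer
_ = gob.NewEncoder(&buf).Encode(diff)

// On the receiving side, decode it back into a slice of Blocks.
var received []Block
_ = gob.NewDecoder(&buf).Decode(&received)
// received can now be passed to Patch or PatchFile on the machine
// that holds /path/foo_v1.binary.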
type Fingerprint ¶
type Fingerprint struct {
    Blocksz  uint32
    BlockMap map[uint32]map[[sha256.Size]byte]Block
    Source   string
}
Fingerprint of a given file. It encapsulates the following mapping:
Adler-32 hash of Block --> SHA256 hash of Block --> Block
It also stores the block size and the source file.
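The two-level map supports the usual rolling-hash lookup: a cheap Adler-32 probe followed by a SHA256 confirmation. A minimal sketch using hash/adler32 and crypto/sha256 from the standard library (the helper name is hypothetical):

// lookup reports whether a byte window is already present in the
// fingerprint, and returns the matching Block if so.
func lookup(f *Fingerprint, window []byte) (Block, bool) {
    byStrong, ok := f.BlockMap[adler32.Checksum(window)]
    if !ok {
        return Block{}, false
    }
    b, ok := byStrong[sha256.Sum256(window)]
    return b, ok
}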
func NewFingerprint ¶
func NewFingerprint(filename string, blocksize uint32) *Fingerprint
NewFingerprint creates a Fingerprint for a given file and block size. By default it processes blocks concurrently to generate the fingerprint; generation switches to sequential mode if the number of blocks is less than 50.
func NewFingerprintFromReader ¶
func NewFingerprintFromReader(r io.Reader, blocksz uint32) *Fingerprint
NewFingerprintFromReader creates a Fingerprint for a given reader and block size. By default it processes blocks concurrently to generate the fingerprint. However, if the number of blocks is small (fewer than 50), the caller should use sequential generation, since concurrent processing would not add much value. Alternatively, when dealing with files, use NewFingerprint(filename, blocksize), which switches modes based on the number of blocks. The number of blocks can be calculated as file size / block size.
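A minimal sketch of fingerprinting data that does not live in a local file, using bytes.NewReader (the in-memory source and sizes are illustrative):

data := make([]byte, 10*1024*1024) // e.g. content received over the network

// 10 MiB / 1 KiB = 10240 blocks, well above 50, so concurrent
// generation is worthwhile here.
fp := NewFingerprintFromReader(bytes.NewReader(data), 1024)
fmt.Println(fp.Blocksz, len(fp.BlockMap))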
func (*Fingerprint) DeepEqual ¶
func (f *Fingerprint) DeepEqual(other *Fingerprint) bool
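DeepEqual can be used, for example, to detect whether a file has changed since it was last fingerprinted; a small sketch reusing the path from the package example:

previous := NewFingerprint("/path/foo_v1.binary", 1024)
// ... later ...
current := NewFingerprint("/path/foo_v1.binary", 1024)
if !previous.DeepEqual(current) {
    fmt.Println("file has changed; regenerate the diff")
}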
func (Fingerprint) String ¶
func (f Fingerprint) String() string
type FingerprintGenerator ¶
type FingerprintGenerator struct {
    Source         io.Reader
    BlockSize      uint32
    ConcurrentMode bool
    NumWorkers     int
}
func (*FingerprintGenerator) Generate ¶
func (g *FingerprintGenerator) Generate() *Fingerprint
Generate creates a Fingerprint using the FingerprintGenerator. Whether processing is concurrent or sequential depends on the generator's ConcurrentMode field.
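A minimal sketch of building a fingerprint through a generator directly, instead of the package-level constructors (the file path and worker count are illustrative):

f, _ := os.Open("/path/foo_v1.binary")
defer f.Close()

g := &FingerprintGenerator{
    Source:         f,
    BlockSize:      1024,
    ConcurrentMode: true, // same mode as DEFAULT_GENERATOR
    NumWorkers:     4,
}
fp := g.Generate()
fmt.Println(fp)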
type State ¶
type State struct {
// contains filtered or unexported fields
}
State of an Adler-32 computation. It contains the byte array window from the most recent computation and the interim sum values.
func Checksum ¶
Checksum returns the Adler-32 checksum computed for the given byte slice. In addition, it returns a State that captures the interim results of the computation. This State can then be used to update the byte window and compute a rolling hash.
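The interim results are just the two 16-bit Adler-32 running sums; a minimal sketch of extracting them from an ordinary checksum using hash/adler32 from the standard library (this illustrates the underlying arithmetic, not the package's Checksum signature):

block := []byte("some block of data")
sum := adler32.Checksum(block)

// Adler-32 packs two running sums into one uint32; keeping them
// separately (as State does) is what makes rolling updates cheap.
a := sum & 0xffff // 1 + sum of all bytes, mod 65521
b := sum >> 16    // sum of the intermediate 'a' values, mod 65521
fmt.Println(a, b)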
func (*State) UpdateWindow ¶
UpdateWindow provides a mechanism to compute the checksum of a rolling window in single-byte increments, reusing the hash parts computed earlier. The checksum is not calculated from scratch; instead, the byte slice window captured in the State struct is updated, similar to a circular buffer, and a rolling hash is calculated.
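A minimal sketch of that rolling update, checked against hash/adler32 from the standard library; this illustrates the arithmetic behind the mechanism, not the UpdateWindow signature itself:

data := []byte("the quick brown fox jumps over the lazy dog")
const adlerMod = 65521 // largest prime below 2^16
const window = 16

// Running sums for the first window, taken from a plain checksum.
sum := adler32.Checksum(data[:window])
a, b := uint64(sum&0xffff), uint64(sum>>16)

// Slide the window one byte to the right: data[0] drops out, data[16]
// enters. Update the sums instead of recomputing them from scratch.
xOld, xNew := uint64(data[0]), uint64(data[window])
a = (a + adlerMod - xOld + xNew) % adlerMod
b = (b + window*adlerMod - window*xOld + a + adlerMod - 1) % adlerMod

// Matches a from-scratch computation over the new window.
fmt.Println(uint32(b<<16|a) == adler32.Checksum(data[1:window+1])) // true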