Documentation ¶
Overview ¶
Package memory stores the entire data set in memory. There are different "layouts" in memory, in which URLs can be looked up using different algorithms. Each implementation is named after its search algorithm.
Index ¶
Examples ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type BinarySearch ¶
type BinarySearch struct {
// contains filtered or unexported fields
}
BinarySearch is an implementation of search over a (sorted) data set that has much better performance than linear search, on average. Best case, it's O(1) if the record is (somehow) in the middle; worst case, it's O(log(n)).
TODO: This probably needs to be a linked list, rather than a slice. A slice will be enormously inefficient when we're talking about reallocating that data.
func NewBinarySearch ¶
func NewBinarySearch() *BinarySearch
NewBinarySearch returns a binary search object with its internal properties initialized.
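This package's documentation does not include an example for BinarySearch, so here is a minimal, self-contained sketch of the underlying algorithm over a slice of redirects sorted by their "from" URL. The redirect type and get function below are illustrative assumptions, not the package's actual (unexported) implementation:

```go
package main

import (
	"fmt"
	"sort"
)

// redirect is a hypothetical (from, to) pair; the real package keeps
// its fields unexported.
type redirect struct {
	from, to string
}

// get performs a classic binary search over a slice sorted by "from".
// Best case O(1) (the record happens to sit in the middle); worst
// case O(log n), halving the candidate range on every comparison.
func get(set []redirect, from string) (string, bool) {
	lo, hi := 0, len(set)-1
	for lo <= hi {
		mid := (lo + hi) / 2
		switch {
		case set[mid].from == from:
			return set[mid].to, true
		case set[mid].from < from:
			lo = mid + 1
		default:
			hi = mid - 1
		}
	}
	return "", false
}

func main() {
	set := []redirect{
		{from: "x40/a", to: "andrewhowden.com/a"},
		{from: "x40/n", to: "andrewhowden.com/longer-n"},
		{from: "x40.link/f00", to: "google.com"},
	}
	// The invariant binary search relies on: the set must be sorted.
	sort.Slice(set, func(i, j int) bool { return set[i].from < set[j].from })

	to, ok := get(set, "x40/a")
	fmt.Println(to, ok)
}
```

Note that every Put into such a structure must preserve the sort order, which is what motivates the TODO above about reallocation cost.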
type HashTable ¶
type HashTable struct {
// contains filtered or unexported fields
}
HashTable stores the entire dataset within Go's implementation of a hash table (a map). Lookups are O(1) on average, as the key is hashed directly to its entry rather than searched for within the set.
func NewHashTable ¶
func NewHashTable() *HashTable
NewHashTable initializes a new hash table, with the appropriate default values. It also exposes the hash table outside this package, without needing to expose its internal properties (e.g. the table and mutexes) and so on.
Example ¶
ExampleNewHashTable describes how to use the hash table based lookup to query what the supplied URL would be.
package main

import (
	"context"
	"fmt"
	"log"
	"net/url"

	"github.com/andrewhowdencom/x40.link/storage/memory"
)

func main() {
	// Consider the following several URLs
	// x40/a → andrewhowden.com/a
	// x40/n → andrewhowden.com/longer-n
	// x40.link/f00 → google.com
	ht := memory.NewHashTable()

	// Insert the URLs into the hash table based lookup
	for _, tu := range []struct {
		f, t string
	}{
		{f: "x40/a", t: "andrewhowden.com/a"},
		{f: "x40/n", t: "andrewhowden.com/longer-n"},
		{f: "x40.link/f00", t: "google.com"},
	} {
		// Normally, we should be handling errors from both the URL parsing as well as attempting to write
		// to storage. The hashmap has no failure modes, but we should not rely on this being persistent
		// indefinitely into the future (e.g. collisions)
		from, _ := url.Parse(tu.f)
		to, _ := url.Parse(tu.t)

		err := ht.Put(context.Background(), from, to)
		if err != nil {
			log.Println(err)
		}
	}

	// Lookup a value supplied by the user
	l, _ := url.Parse("x40/a")

	// Normally, we should handle the error from the fetch operation (e.g. not found)
	ret, _ := ht.Get(context.Background(), l)

	fmt.Println(ret.String())
}
Output: andrewhowden.com/a
type LinearSearch ¶
type LinearSearch struct {
// contains filtered or unexported fields
}
LinearSearch is an implementation of the "worst case" search through the data. It has O(n) complexity (i.e. for each new record added, the worst-case cost of a lookup grows linearly). Additionally, this is especially bad as "not found" requires iterating through the whole set.
func NewLinearSearch ¶
func NewLinearSearch() *LinearSearch
NewLinearSearch initializes a linear search object, the most naive approach to querying the data.
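As with BinarySearch, there is no example for LinearSearch in this documentation; the algorithm itself can be sketched as below. This is a self-contained illustration, not the package's actual (unexported) implementation, and the redirect type and get function are assumed names:

```go
package main

import "fmt"

// redirect is a hypothetical (from, to) pair; the real package keeps
// its fields unexported.
type redirect struct {
	from, to string
}

// get walks the slice from the front until it finds a match: O(n).
// A "not found" result is the worst case, as it always requires a
// full pass over the set.
func get(set []redirect, from string) (string, bool) {
	for _, r := range set {
		if r.from == from {
			return r.to, true
		}
	}
	return "", false
}

func main() {
	set := []redirect{
		{from: "x40/a", to: "andrewhowden.com/a"},
		{from: "x40.link/f00", to: "google.com"},
	}
	to, ok := get(set, "x40.link/f00")
	fmt.Println(to, ok)
}
```

The upside of this layout is that inserts are trivial (append to the slice) and no sort order needs to be maintained; the cost is paid on every lookup instead.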