Documentation ¶
Overview ¶
Package learn contains the learning algorithms.
Index ¶
- func CompleteDataToMatrix(D spn.Dataset) [][]int
- func CompleteDataToMatrixF(D spn.Dataset) [][]float64
- func CopyScope(Sc map[int]*Variable) map[int]*Variable
- func DataToMatrix(D spn.Dataset) ([][]int, map[int]int)
- func DataToMatrixF(D spn.Dataset) ([][]float64, map[int]int)
- func DataToVarData(D []map[int]int, Sc map[int]*Variable) []*utils.VarData
- func DeriveApplyWeights(S spn.SPN, eta float64, storage *spn.Storer, dtk, itk int, c common.Collection, ...) spn.SPN
- func DeriveHard(S spn.SPN, st *spn.Storer, tk int, I spn.VarSet) int
- func DeriveSPN(S spn.SPN, storage *spn.Storer, tk, itk int, c common.Collection) (spn.SPN, int)
- func DeriveWeights(S spn.SPN, storage *spn.Storer, tk, dtk, itk int, c common.Collection) (spn.SPN, int)
- func DeriveWeightsBatch(S spn.SPN, storage *spn.Storer, tk, dtk, itk int, c common.Collection) (spn.SPN, int)
- func Discriminative(S spn.SPN, D spn.Dataset, Y []*Variable) spn.SPN
- func DiscriminativeBGD(S spn.SPN, eta, eps float64, D spn.Dataset, Y []*Variable, norm bool, b int) spn.SPN
- func DiscriminativeGD(S spn.SPN, eta, eps float64, D spn.Dataset, Y []*Variable, norm bool) spn.SPN
- func DiscriminativeHardBGD(S spn.SPN, eta, eps float64, D spn.Dataset, Y []*Variable, norm bool, b int) spn.SPN
- func DiscriminativeHardGD(S spn.SPN, eta, eps float64, D spn.Dataset, Y []*Variable, norm bool) spn.SPN
- func ExtractInstance(v int, D spn.Dataset) []int
- func Generative(S spn.SPN, D spn.Dataset) spn.SPN
- func GenerativeBGD(S spn.SPN, eta, eps float64, data spn.Dataset, c common.Collection, norm bool, ...) spn.SPN
- func GenerativeGD(S spn.SPN, eta, eps float64, data spn.Dataset, c common.Collection, norm bool) spn.SPN
- func GenerativeHardBGD(S spn.SPN, eta, eps float64, data spn.Dataset, c common.Collection, norm bool, ...) spn.SPN
- func GenerativeHardGD(S spn.SPN, eta, eps float64, data spn.Dataset, c common.Collection, norm bool) spn.SPN
- func MatrixToData(M [][]int) spn.Dataset
- func Normalize(v []float64)
- func ReflectScope(Sc map[int]*Variable) map[int]*Variable
- type LearnFunc
- type Scope
- type Variable
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func CompleteDataToMatrix ¶
CompleteDataToMatrix returns a complete dataset's matrix form. A dataset is complete if every variable's varid in D's scope is equivalent to its map position and the scope's maximum varid is equal to the number of variables in D minus one (i.e. there are no "holes" in the scope).
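For illustration, a minimal sketch of a complete dataset, assuming (an assumption, not a spec) GoSPN's usual import path and that spn.Dataset is a slice of spn.VarSet, i.e. of map[int]int:

package main

import (
    "fmt"

    "github.com/RenatoGeh/gospn/learn"
    "github.com/RenatoGeh/gospn/spn"
)

func main() {
    // Varids 0, 1 and 2 are contiguous ("no holes"), so the dataset is
    // complete and maps directly to a 3-column matrix, column j = varid j.
    D := spn.Dataset{
        {0: 1, 1: 0, 2: 1}, // instance 0
        {0: 0, 1: 1, 2: 0}, // instance 1
    }
    M := learn.CompleteDataToMatrix(D)
    fmt.Println(M) // should print [[1 0 1] [0 1 0]]
}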
func CompleteDataToMatrixF ¶
CompleteDataToMatrixF returns a complete dataset's matrix form. A dataset is complete if every variable's varid in D's scope is equivalent to its map position and the scope's maximum varid is equal to the number of variables in D minus one (i.e. there are no "holes" in the scope).
func DataToMatrix ¶
DataToMatrix returns a Dataset's matrix form. Assumes a consistent dataset.
func DataToMatrixF ¶
DataToMatrixF returns a Dataset's matrix form. Assumes a consistent dataset.
func DeriveApplyWeights ¶
func DeriveApplyWeights(S spn.SPN, eta float64, storage *spn.Storer, dtk, itk int, c common.Collection, norm bool) spn.SPN
Unlike DeriveWeights, DeriveApplyWeights does not store the weight derivatives; it computes and applies the gradient on the fly.
func DeriveHard ¶
DeriveHard performs hard inference (MAP) derivation on the SPN. The hard derivative of a weight is the number of times the MAP state passes through that weight's edge. The delta weight is then computed as

eta*c/w

where eta is the learning rate, c is the number of times hard inference passed through the edge and w is the current edge's weight.
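Read concretely, the per-edge update can be sketched as follows (the standalone function is illustrative, not part of GoSPN's API):

// hardDeltaWeight computes eta*c/w: eta is the learning rate, c the number
// of times the MAP state crossed this edge, w the edge's current weight.
func hardDeltaWeight(eta float64, c int, w float64) float64 {
    return eta * float64(c) / w
}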
func DeriveSPN ¶
DeriveSPN computes the derivative dS/dS_i, for every child of S: S_i. The base case dS/dS is trivial and equal to one. For each child S_i and parent node S_n, the derivative is given by:
dS/dS_i <- dS/dS_i + w_{n,i} * dS/dS_n, if S_n is a sum node
dS/dS_j <- dS/dS_j + dS/dS_n * \prod_{k \in Ch(n) \setminus \{j\}} S_k, if S_n is a product node
where w_{n,i} is the weight of edge S_n -> S_i and Ch(n) is the set of children of n. In other words, a node's derivative with respect to the SPN accumulates over its parents: a sum parent contributes its own derivative weighted by the connecting edge, while a product parent contributes its own derivative multiplied by the values of the node's siblings under that parent. Since GoSPN treats values in logspace, all derivatives are in logspace as well.

Argument tk is the ticket to be used for storing the derivatives. Argument itk is the ticket for the stored values of S(X) (i.e. soft inference). A Collection is required for the graph search; if Collection c is nil, a Queue is used. With a Queue the graph search is breadth-first; with a Stack it is depth-first. If tk < 0, a new ticket is created and returned alongside the SPN S. Returns the SPN S and the ticket used.
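To make the propagation rules concrete, here is a toy, self-contained sketch in linear (not log) space; the node type and queue are illustrative and deliberately independent of GoSPN's actual types:

package main

import "fmt"

type kind int

const (
    sum kind = iota
    prod
    leaf
)

type node struct {
    k   kind
    ch  []*node
    w   []float64 // edge weights, parallel to ch (sum nodes only)
    val float64   // S_n(X), from a previous bottom-up inference pass
    dS  float64   // dS/dS_n, filled in by derive
}

// derive propagates dS/dS_n from the root down with a queue (breadth-first),
// mirroring the rules above: sum parents add w_{n,i}*dS/dS_n to child i;
// product parents add dS/dS_n times the product of the child's siblings.
// Note: this naive traversal assumes a tree; GoSPN's DAG traversal via
// tickets is more careful.
func derive(root *node) {
    root.dS = 1 // base case: dS/dS = 1
    q := []*node{root}
    for len(q) > 0 {
        n := q[0]
        q = q[1:]
        for i, c := range n.ch {
            switch n.k {
            case sum:
                c.dS += n.w[i] * n.dS
            case prod:
                p := n.dS
                for j, s := range n.ch {
                    if j != i {
                        p *= s.val // product of siblings of child i
                    }
                }
                c.dS += p
            }
            q = append(q, c)
        }
    }
}

func main() {
    a := &node{k: leaf, val: 0.2}
    b := &node{k: leaf, val: 0.9}
    p := &node{k: prod, ch: []*node{a, b}, val: 0.2 * 0.9}
    r := &node{k: sum, ch: []*node{p}, w: []float64{1.0}, val: 0.18}
    derive(r)
    fmt.Println(a.dS, b.dS) // 0.9 0.2: each leaf gets its sibling's value
}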
func DeriveWeights ¶
func DeriveWeights(S spn.SPN, storage *spn.Storer, tk, dtk, itk int, c common.Collection) (spn.SPN, int)
DeriveWeights computes the derivative dS/dW, where W is the multiset of weights in SPN S. The derivative of S with respect to W is given by
dS/dw_{n,j} <- S_j * dS/dS_n, if S_n is a sum node
It is only relevant to compute dS/dw_{n,j} at sum nodes, since weights do not appear in product nodes. Argument S is the SPN to differentiate. Argument storage is the DP storage object in which the derivatives are stored and from which inference values are extracted. Integers tk, dtk and itk are the tickets for where to store dS/dW, where to locate dS/dS_i, and the stored inference values, respectively. Collection c is the data type used for the graph search: a stack yields a depth-first search, a queue a breadth-first search; the default is a Queue. DeriveWeights returns the SPN S and the ticket used (a new one if tk is negative).
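Since GoSPN keeps values in logspace, the product above becomes a sum. A minimal sketch (names are illustrative; logSj would come from the inference ticket itk and logDn from the derivative ticket dtk):

// logWeightDerivative returns log(dS/dw_{n,j}) = log(S_j) + log(dS/dS_n).
func logWeightDerivative(logSj, logDn float64) float64 {
    return logSj + logDn
}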
func DeriveWeightsBatch ¶
func DeriveWeightsBatch(S spn.SPN, storage *spn.Storer, tk, dtk, itk int, c common.Collection) (spn.SPN, int)
DeriveWeightsBatch computes the derivative dS/dW, where W is the multiset of weights in SPN S and adds it to the given Storer. The derivative of S with respect to W is given by
dS/dw_{n,j} <- S_j * dS/dS_n, if S_n is a sum node
It is only relevant to compute dS/dw_{n,j} at sum nodes, since weights do not appear in product nodes. Argument S is the SPN to differentiate. Argument storage is the DP storage object in which the derivatives are stored and from which inference values are extracted. Integers tk, dtk and itk are the tickets for where to store dS/dW, where to locate dS/dS_i, and the stored inference values, respectively. Collection c is the data type used for the graph search: a stack yields a depth-first search, a queue a breadth-first search; the default is a Queue. DeriveWeightsBatch returns the SPN S and the ticket used (a new one if tk is negative).
func Discriminative ¶
Discriminative performs discriminative parameter learning, taking parameters from the parameters.P object bound to the SPN S. If no parameters.P is found, default parameters are used. See parameters.P for more information.
func DiscriminativeBGD ¶
func DiscriminativeBGD(S spn.SPN, eta, eps float64, D spn.Dataset, Y []*Variable, norm bool, b int) spn.SPN
DiscriminativeBGD performs discriminative mini-batch gradient descent on SPN S given data D. Argument eta is the learning rate, eps is the convergence difference in likelihood, D is the dataset, Y is the set of query variables, norm signals whether to normalize weights at each update and b is the size of the mini-batch.
func DiscriminativeGD ¶
DiscriminativeGD performs discriminative gradient descent on SPN S given data D. Argument eta is the learning rate, eps is the convergence difference in likelihood, D is the dataset, Y is the set of query variables and norm signals whether to normalize weights at each update.
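A hypothetical call site, using only the signature shown in the index; S, D and Y are assumed to be built elsewhere (e.g. by a structure learner and a dataset parser), and the hyperparameter values are placeholders, not recommendations:

package train

import (
    "github.com/RenatoGeh/gospn/learn"
    "github.com/RenatoGeh/gospn/spn"
)

// trainDiscriminative is a hypothetical wrapper around DiscriminativeGD.
func trainDiscriminative(S spn.SPN, D spn.Dataset, Y []*learn.Variable) spn.SPN {
    const (
        eta = 0.1  // learning rate
        eps = 0.01 // stop once the likelihood difference falls below this
    )
    return learn.DiscriminativeGD(S, eta, eps, D, Y, true /* norm */)
}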
func DiscriminativeHardBGD ¶
func DiscriminativeHardBGD(S spn.SPN, eta, eps float64, D spn.Dataset, Y []*Variable, norm bool, b int) spn.SPN
DiscriminativeHardBGD performs hard (MPE) discriminative mini-batch gradient descent on SPN S given data D. Argument eta is the learning rate, eps is the convergence difference in likelihood, D is the dataset, Y is the set of query variables, norm signals whether to normalize weights at each update and b is the size of the mini-batch.
func DiscriminativeHardGD ¶
func DiscriminativeHardGD(S spn.SPN, eta, eps float64, D spn.Dataset, Y []*Variable, norm bool) spn.SPN
DiscriminativeHardGD performs hard (MPE) discriminative gradient descent on SPN S given data D. Argument eta is the learning rate, eps is the convergence difference in likelihood, D is the dataset, Y is the set of query variables and norm signals whether to normalize weights at each update.
func ExtractInstance ¶
ExtractInstance extracts all instances of variable v from dataset D and joins them into a single slice.
func Generative ¶
Generative performs generative parameter learning, taking parameters from the underlying bound parameters.P. If no parameters.P is found, default parameters are used. See parameters.P for more information.
func GenerativeBGD ¶
func GenerativeBGD(S spn.SPN, eta, eps float64, data spn.Dataset, c common.Collection, norm bool, bSize int) spn.SPN
GenerativeBGD performs generative batch gradient descent parameter learning on SPN S. Argument eta is the learning rate; eps is the likelihood difference under which learning is considered to have converged (the smaller eps, the harder GenerativeBGD tries to fit the data); data is the dataset; c determines the graph search: a stack yields a DFS, a queue a BFS, and if c is nil a queue is used. Argument norm indicates whether GenerativeBGD should normalize weights at each node. bSize is the size of the batch.
Batch here means that all derivatives are computed with the same structure and weights: only after a full iteration over the dataset are the delta weights summed and applied through gradient descent.
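A hypothetical call site using the signature above; S and the dataset are assumed to come from elsewhere, and the hyperparameter values are placeholders:

package train

import (
    "github.com/RenatoGeh/gospn/learn"
    "github.com/RenatoGeh/gospn/spn"
)

// trainGenerative is a hypothetical wrapper around GenerativeBGD.
func trainGenerative(S spn.SPN, data spn.Dataset) spn.SPN {
    // nil Collection: per the doc above, a queue (BFS) is then used.
    return learn.GenerativeBGD(S, 0.05, 0.01, data, nil, true, 50)
}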
func GenerativeGD ¶
func GenerativeGD(S spn.SPN, eta, eps float64, data spn.Dataset, c common.Collection, norm bool) spn.SPN
GenerativeGD performs generative gradient descent parameter learning on SPN S. Argument eta is the learning rate; eps is the likelihood difference under which learning is considered to have converged (the smaller eps, the harder GenerativeGD tries to fit the data); data is the dataset; c determines the graph search: a stack yields a DFS, a queue a BFS, and if c is nil a queue is used. Argument norm indicates whether GenerativeGD should normalize weights at each node.
func GenerativeHardBGD ¶
func GenerativeHardBGD(S spn.SPN, eta, eps float64, data spn.Dataset, c common.Collection, norm bool, bSize int) spn.SPN
GenerativeHardBGD performs a batch generative gradient descent using hard inference.
func GenerativeHardGD ¶
func GenerativeHardGD(S spn.SPN, eta, eps float64, data spn.Dataset, c common.Collection, norm bool) spn.SPN
GenerativeHardGD performs a generative gradient descent using hard inference.
func MatrixToData ¶
MatrixToData returns a dataset from matrix M.
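A small round-trip sketch, under the assumption (not stated by the doc) that MatrixToData assigns column j to varid j, which makes the resulting scope complete:

package main

import (
    "fmt"

    "github.com/RenatoGeh/gospn/learn"
)

func main() {
    M := [][]int{{1, 0}, {0, 1}}
    D := learn.MatrixToData(M)
    // If column j maps to varid j, the scope has no holes and the round
    // trip through CompleteDataToMatrix should reproduce M.
    fmt.Println(learn.CompleteDataToMatrix(D))
}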