Documentation ¶
Constants ¶
const (
	HardGD = iota // Hard gradient descent key.
	SoftGD        // Soft gradient descent key.
	HardEM        // Hard expectation-maximization key.
	SoftEM        // Soft expectation-maximization key.
)
Constants to be used for P.LearningType.
const (
	GD = iota
	EM
)
Constants to be used with Method(P.LearningType).
const (
	Hard = iota
	Soft
)
Constants to be used with Hardness(P.LearningType).
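The three constant sets suggest that a LearningType value packs both a method (GD or EM) and a hardness (Hard or Soft), which Method and Hardness then extract. A minimal, self-contained sketch of how such a decomposition could work given the iota ordering above; the arithmetic here is an assumption for illustration, not the library's actual source:

```go
package main

import "fmt"

// Constants mirroring the package's declarations.
const (
	HardGD = iota // Hard gradient descent key.
	SoftGD        // Soft gradient descent key.
	HardEM        // Hard expectation-maximization key.
	SoftEM        // Soft expectation-maximization key.
)

const (
	GD = iota
	EM
)

const (
	Hard = iota
	Soft
)

// Method extracts the optimization method (GD or EM) from a learning type.
// Assumed mapping: HardGD and SoftGD yield GD; HardEM and SoftEM yield EM.
func Method(t int) int { return t / 2 }

// Hardness extracts the hardness (Hard or Soft) from a learning type.
// Assumed mapping: even keys are Hard, odd keys are Soft.
func Hardness(t int) int { return t % 2 }

func main() {
	fmt.Println(Method(SoftEM) == EM)     // true
	fmt.Println(Hardness(SoftEM) == Soft) // true
}
```

With this layout a single int field in P can carry both choices, and learning code can branch on each axis independently.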
Variables ¶
This section is empty.
Functions ¶
func Bind ¶
func Bind(e Parametrizable, p *P)
func Exists ¶
func Exists(p Parametrizable) bool
func Unbind ¶
func Unbind(e Parametrizable)
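Bind, Exists, and Unbind manage an association between an object and its parameters. A self-contained sketch of how such a registry could behave, using a hypothetical package-level map and a stand-in node type; this mimics the documented signatures but is not GoSPN's actual implementation:

```go
package main

import "fmt"

// P holds learning parameters (fields elided to the two used here).
type P struct {
	Eta     float64
	Epsilon float64
}

// Parametrizable is any type that exposes its parameters.
type Parametrizable interface {
	Parameters() *P
}

// bindings is an assumed package-level registry from object to parameters.
var bindings = map[Parametrizable]*P{}

// Bind associates p with e.
func Bind(e Parametrizable, p *P) { bindings[e] = p }

// Exists reports whether e has bound parameters.
func Exists(e Parametrizable) bool { _, ok := bindings[e]; return ok }

// Unbind removes any parameters bound to e.
func Unbind(e Parametrizable) { delete(bindings, e) }

// Retrieve returns the parameters bound to e, if any.
func Retrieve(e Parametrizable) (*P, bool) { p, ok := bindings[e]; return p, ok }

// node is a stand-in for a GoSPN node.
type node struct{ p *P }

func (n *node) Parameters() *P { return n.p }

func main() {
	n := &node{}
	Bind(n, &P{Eta: 0.1, Epsilon: 1.0})
	fmt.Println(Exists(n)) // true
	Unbind(n)
	fmt.Println(Exists(n)) // false
}
```

Keying the map on the Parametrizable itself keeps the binding external to the node, which is consistent with the disclaimer below about not storing a P pointer in each Node.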
Types ¶
type P ¶
type P struct {
	Normalize    bool    // Normalize on weight update.
	HardWeight   bool    // Hard weights (true) or soft weights (false).
	SmoothSum    float64 // Constant for smoothing sum counts when hard weights is true.
	LearningType int     // Soft or hard EM or GD (only applies to weight learning functions).
	Eta          float64 // Learning rate.
	Epsilon      float64 // Epsilon convergence criterion (in logspace).
	BatchSize    int     // Batch size if mini-batch. If bs <= 1, then no batching.
	Lambda       float64 // Regularization constant.
	Iterations   int     // Number of iterations for gradient descent.
}
P is a collection of available parameters for learning algorithms.
Disclaimer: Parameters do not work on inline methods (e.g. S.Value(E)), since that would require GoSPN to store a P pointer in each Node.
func Default ¶
func Default() *P
Default returns a P instance with the following default options:
	Normalize    = true
	HardWeight   = false
	SmoothSum    = 0.01
	LearningType = parameters.SoftGD
	Eta          = 0.1
	Epsilon      = 1.0
	BatchSize    = 0
	Lambda       = 0.01
	Iterations   = 4
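A common pattern is to start from Default and override only the fields that differ. The sketch below reconstructs P and Default locally from the documented fields and values so it is self-contained; the construction itself is an assumption, not the library's source:

```go
package main

import "fmt"

// Learning-type constants as documented.
const (
	HardGD = iota
	SoftGD
	HardEM
	SoftEM
)

// P mirrors the package's parameter struct.
type P struct {
	Normalize    bool
	HardWeight   bool
	SmoothSum    float64
	LearningType int
	Eta          float64
	Epsilon      float64
	BatchSize    int
	Lambda       float64
	Iterations   int
}

// Default returns a P with the documented default options.
func Default() *P {
	return &P{
		Normalize:    true,
		HardWeight:   false,
		SmoothSum:    0.01,
		LearningType: SoftGD,
		Eta:          0.1,
		Epsilon:      1.0,
		BatchSize:    0,
		Lambda:       0.01,
		Iterations:   4,
	}
}

func main() {
	p := Default()
	// Override only what differs from the defaults.
	p.LearningType = SoftEM
	p.BatchSize = 64 // enable mini-batching (bs > 1)
	fmt.Println(p.Eta, p.LearningType)
}
```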
func Retrieve ¶
func Retrieve(e Parametrizable) (*P, bool)
type Parametrizable ¶
type Parametrizable interface {
	// Parameters returns the parameters of this object.
	Parameters() *P
}
Parametrizable defines a type that has parameters.
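Any type can opt in by implementing the single Parameters method. A minimal sketch with a hypothetical sumNode type; the node name and fields are illustrative, not from GoSPN:

```go
package main

import "fmt"

// P mirrors the package's parameter struct (fields elided).
type P struct{ Eta float64 }

// Parametrizable, as documented.
type Parametrizable interface {
	Parameters() *P
}

// sumNode is a hypothetical node type implementing Parametrizable.
type sumNode struct {
	params *P
}

// Parameters returns this node's learning parameters.
func (s *sumNode) Parameters() *P { return s.params }

func main() {
	var e Parametrizable = &sumNode{params: &P{Eta: 0.1}}
	fmt.Println(e.Parameters().Eta) // 0.1
}
```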