Documentation ¶
Index ¶
- func Achlioptas(m mat.Matrix, generator *rand.LockedRand) mat.Matrix
- func Constant(m mat.Matrix, n float64) mat.Matrix
- func Gain(f activation.Activation) float64
- func Normal(m mat.Matrix, mean, std float64, generator *rand.LockedRand) mat.Matrix
- func Ones(m mat.Matrix) mat.Matrix
- func Uniform(m mat.Matrix, min, max float64, generator *rand.LockedRand) mat.Matrix
- func XavierNormal(m mat.Matrix, gain float64, generator *rand.LockedRand) mat.Matrix
- func XavierUniform(m mat.Matrix, gain float64, generator *rand.LockedRand) mat.Matrix
- func Zeros(m mat.Matrix) mat.Matrix
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func Achlioptas ¶
func Achlioptas(m mat.Matrix, generator *rand.LockedRand) mat.Matrix
Achlioptas fills the input matrix with values according to the method described in "Database-friendly random projections: Johnson-Lindenstrauss with binary coins" by Dimitris Achlioptas, 2001 (https://core.ac.uk/download/pdf/82724427.pdf).
The matrix is returned for convenience.
func Constant ¶
func Constant(m mat.Matrix, n float64) mat.Matrix
Constant fills the input matrix with the value n.
The matrix is returned for convenience.
func Gain ¶
func Gain(f activation.Activation) float64
Gain returns a coefficient that helps initialize parameters in a way that keeps gradients stable. Use it to find the gain value for the Xavier initializations.
func Normal ¶
func Normal(m mat.Matrix, mean, std float64, generator *rand.LockedRand) mat.Matrix
Normal fills the input matrix with random samples from a normal (Gaussian) distribution with the given mean and standard deviation.
The matrix is returned for convenience.
func Ones ¶
func Ones(m mat.Matrix) mat.Matrix
Ones fills the input matrix with the scalar value `1`.
The matrix is returned for convenience.
func Uniform ¶
func Uniform(m mat.Matrix, min, max float64, generator *rand.LockedRand) mat.Matrix
Uniform fills the input matrix m with samples from a uniform distribution, where min is the lower bound and max is the upper bound.
The matrix is returned for convenience.
func XavierNormal ¶
func XavierNormal(m mat.Matrix, gain float64, generator *rand.LockedRand) mat.Matrix
XavierNormal fills the input matrix with values according to the method described in "Understanding the difficulty of training deep feedforward neural networks" - Glorot, X. & Bengio, Y. (2010), using a normal distribution.
The matrix is returned for convenience.
func XavierUniform ¶
func XavierUniform(m mat.Matrix, gain float64, generator *rand.LockedRand) mat.Matrix
XavierUniform fills the input matrix with values according to the method described in "Understanding the difficulty of training deep feedforward neural networks" - Glorot, X. & Bengio, Y. (2010), using a uniform distribution.
The matrix is returned for convenience.
func Zeros ¶
func Zeros(m mat.Matrix) mat.Matrix
Zeros fills the input matrix with the scalar value `0`.
The matrix is returned for convenience.
Types ¶
This section is empty.