Documentation ¶
Index ¶
- Variables
- type Layer
- func (l *Layer) BackProp(prevDelta, weights matrix.Matrix) (matrix.Matrix, error)
- func (l *Layer) BackPropOutLayer(expected matrix.Matrix) (matrix.Matrix, error)
- func (l *Layer) Equal(other Layer) bool
- func (l *Layer) ForwProp(input matrix.Matrix) (matrix.Matrix, error)
- func (l *Layer) InSize() int
- func (l *Layer) MarshalBinary() ([]byte, error)
- func (l *Layer) OutSize() int
- func (l *Layer) String() (s string)
- func (l *Layer) UnmarshalBinary(data []byte) error
- func (l *Layer) UpdateBias(derived matrix.Matrix) error
- func (l *Layer) UpdateWeights(derived matrix.Matrix) error
Constants ¶
This section is empty.
Variables ¶
var ActivationFunctions = map[string]fnType{
	"relu":        relu,
	"identity":    identity,
	"binary_step": binaryStep,
	"sigmoid":     sigmoid,
	"tanh":        tanH,
	"lrelu":       lReLU,
	"rrelu":       rReLU,
	"arctan":      arcTan,
	"softmax":     softmax,
}
ActivationFunctions is a collection of the activation functions available for the forward and backward propagation steps. If the requested function is not in this map, the Layer won't be instantiated. Their mathematical formulas can be found at https://en.wikipedia.org/wiki/Activation_function.
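For example, a caller can verify that a function is available before constructing a layer. This is a minimal sketch: the package name layer is an assumption, and since fnType is unexported the looked-up value is only used internally by the package.

	name := "sigmoid"
	if _, ok := layer.ActivationFunctions[name]; !ok {
		log.Fatalf("activation function %q is not available", name)
	}
	// The function itself is applied internally by ForwProp and BackProp.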
var DerivativeFunctions = map[string]fnType{
	"relu":        dRelu,
	"identity":    dIdentity,
	"binary_step": dBinaryStep,
	"sigmoid":     dSigmoid,
	"tanh":        dTanH,
	"lrelu":       dLReLU,
	"rrelu":       dRReLU,
	"arctan":      dArcTan,
	"softmax":     dSoftmax,
}
DerivativeFunctions is a map of the derivatives of the available activation functions. It uses the formulas defined at https://en.wikipedia.org/wiki/Activation_function.
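As a concrete illustration, the sigmoid entry and its derivative follow the standard formulas from the linked page, using the identity sigma'(x) = sigma(x) * (1 - sigma(x)). The package's own implementations are unexported, so the signatures below are an assumption, not the real API.

	import "math"

	// sigmoid computes 1 / (1 + e^(-x)).
	func sigmoid(x float64) float64 {
		return 1 / (1 + math.Exp(-x))
	}

	// dSigmoid uses the identity sigma'(x) = sigma(x) * (1 - sigma(x)).
	func dSigmoid(x float64) float64 {
		s := sigmoid(x)
		return s * (1 - s)
	}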
Functions ¶
This section is empty.
Types ¶
type Layer ¶
type Layer struct {
	Weights, Output, Sum, Bias matrix.Matrix
	// contains filtered or unexported fields
}
Layer is an abstraction of an array of perceptrons. It contains the weights of all perceptrons as well as the layer's activation function. The Output and Sum matrices store the vectors calculated during ForwProp that are needed for BackProp.
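Assuming the standard affine-plus-activation formulation (an inference from the descriptions on this page, not something it states explicitly), the fields relate as:

	Sum    = Weights · input + Bias
	Output = activation(Sum)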
func New ¶
New creates a layer with a given input and output size. It generates a random matrix of weights and allocates memory for the Sum and Output vectors. It checks whether the input size is valid and whether the activation function exists in the ActivationFunctions map. The Bias is initialized as a zero vector.
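A hypothetical construction call; the exact signature of New is not reproduced on this page, so the argument names, order, and types below are assumptions.

	// Assumed signature: New(in, out int, activation string) (*Layer, error).
	l, err := layer.New(784, 128, "relu")
	if err != nil {
		log.Fatal(err) // e.g. invalid size or unknown activation function
	}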
func (*Layer) BackProp ¶
BackProp implements the backpropagation algorithm for any hidden layer. It takes the loss delta and the weights from layer (l+1) and calculates the error of the current layer.
func (*Layer) BackPropOutLayer ¶
BackPropOutLayer is the special case of the backpropagation algorithm performed starting at the output layer. It is very similar to BackProp when the cost function used is the quadratic cost. TODO: It will stay a separate function to make refactoring easier when generalising to arbitrary cost functions.
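Together, BackPropOutLayer and BackProp suggest a backward pass of the following shape. The layers slice, the expected vector, and the error handling are a hypothetical caller-side sketch; only the two method calls come from this package.

	last := len(layers) - 1
	delta, err := layers[last].BackPropOutLayer(expected)
	if err != nil {
		log.Fatal(err)
	}
	for i := last - 1; i >= 0; i-- {
		// Each hidden layer receives the delta and weights of layer (l+1).
		delta, err = layers[i].BackProp(delta, layers[i+1].Weights)
		if err != nil {
			log.Fatal(err)
		}
	}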
func (*Layer) Equal ¶
Equal compares the receiver with another layer, checking the sizes, the activation function, and the Weights and Bias matrices. Sum and Output are just intermediate buffers, so they are not compared.
func (*Layer) ForwProp ¶
ForwProp is called by the MultiLayerPerceptron on each layer sequentially. It calculates the layer's output, saving each intermediate stage in the Sum and Output fields, which are needed for the BackProp method.
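A hedged sketch of a full forward pass, where each layer's output feeds the next layer's input; layers and input are hypothetical caller-side variables.

	out := input
	var err error
	for i := range layers {
		out, err = layers[i].ForwProp(out)
		if err != nil {
			log.Fatal(err) // e.g. input size mismatch
		}
	}
	// out now holds the prediction of the final layer.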
func (*Layer) InSize ¶
InSize returns the expected input vector size. It is needed to check for valid input in the ForwProp method.
func (*Layer) MarshalBinary ¶
func (*Layer) OutSize ¶
OutSize returns the expected output vector size. Both InSize and OutSize need to be exported so they are accessible from the multilayerperceptron package.
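One plausible use is validating that adjacent layers are compatible before calling ForwProp; the layers slice here is again a hypothetical caller-side variable.

	for i := 0; i+1 < len(layers); i++ {
		if layers[i].OutSize() != layers[i+1].InSize() {
			log.Fatalf("layer %d outputs %d values but layer %d expects %d",
				i, layers[i].OutSize(), i+1, layers[i+1].InSize())
		}
	}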
func (*Layer) UnmarshalBinary ¶
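The method signatures match encoding.BinaryMarshaler and encoding.BinaryUnmarshaler, so a layer can presumably be persisted and restored as below. The package name layer is an assumption, and error handling is kept minimal on purpose.

	data, err := l.MarshalBinary()
	if err != nil {
		log.Fatal(err)
	}

	var restored layer.Layer
	if err := restored.UnmarshalBinary(data); err != nil {
		log.Fatal(err)
	}
	fmt.Println(l.Equal(restored)) // expected to print true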
func (*Layer) UpdateBias ¶
UpdateBias receives the error already multiplied by the learning rate and subtracts it from the layer's bias vector.
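A hedged sketch of the corresponding update step after backpropagation. The variables scaledDelta and scaledGrad are hypothetical and assumed to already contain the learning rate multiplied by the respective error terms, as the description above indicates; UpdateWeights appears in the index above but is not documented on this page.

	if err := l.UpdateBias(scaledDelta); err != nil {
		log.Fatal(err)
	}
	if err := l.UpdateWeights(scaledGrad); err != nil {
		log.Fatal(err)
	}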