package loss

Published: May 7, 2020 License: CC-BY-4.0 Imports: 12 Imported by: 3

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Binary

type Binary struct {
}

Binary holds the binary loss and derivative calculations

func MakeBinaryCalculator

func MakeBinaryCalculator() Binary

MakeBinaryCalculator returns a Binary struct used to calculate the binary loss and its derivatives

func (Binary) DerivativeNBatched

func (b Binary) DerivativeNBatched(target, predicted []float32) (allsame []float32)

DerivativeNBatched returns the binary derivative for N batches

func (Binary) DerivativeNSeperated

func (b Binary) DerivativeNSeperated(target, predicted []float32) (alldifferent []float32)

DerivativeNSeperated returns the derivative separated by batch

func (Binary) LossN

func (b Binary) LossN(target, predicted []float32) float32

LossN returns the loss of N batches of binary outputs
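
A minimal CPU sketch of the Binary calculator. The import path and the target/predicted values are assumptions used for illustration:

package main

import (
	"fmt"

	"github.com/dereklstinson/GoCuNets/loss" // assumed import path
)

func main() {
	b := loss.MakeBinaryCalculator()

	// Hypothetical values: target holds 0/1 labels, predicted holds network outputs.
	target := []float32{1, 0, 1, 1}
	predicted := []float32{0.9, 0.2, 0.7, 0.4}

	// LossN reduces all batched outputs to a single loss value.
	l := b.LossN(target, predicted)

	// DerivativeNBatched returns one derivative per element, computed over the batches.
	d := b.DerivativeNBatched(target, predicted)

	fmt.Println(l, d)
}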

type Huber

type Huber struct {
}

Huber holds the methods to compute the Huber loss

func MakeHuberCalculator

func MakeHuberCalculator() Huber

MakeHuberCalculator returns a Huber so that Huber calculations can be made

type MSE

type MSE struct {
	// contains filtered or unexported fields
}

MSE is Mean Squared Error

func CreateMSECalculatorGPU

func CreateMSECalculatorGPU(handle *cudnn.Handler, managed bool) (*MSE, error)

CreateMSECalculatorGPU creates a mean squared error calculator for GPU memory

func (*MSE) ErrorCPU

func (m *MSE) ErrorCPU(generated, target []float32) []float32

ErrorCPU calculates the error on the CPU; it is used for backpropagation

func (*MSE) ErrorGPU

func (m *MSE) ErrorGPU(h *cudnn.Handler, dx, y, dy *layers.Tensor) error

ErrorGPU performs the error calculation on the GPU.

y = network output

dy = target

dx receives the errors.

func (*MSE) ErrorGPUEX

func (m *MSE) ErrorGPUEX(h *cudnn.Handler, x, dx, y *layers.Tensor) error

ErrorGPUEX performs the error calculation. x is the network output, y holds the target values (in y.Volume), and the errors are placed into dx.Volume.

func (*MSE) Loss

func (m *MSE) Loss() float32

Loss returns the loss. It should be called after ErrorGPU has been called.
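
A minimal sketch of the ErrorGPU/Loss call order. It assumes the handler and tensors were created elsewhere, and the import paths shown are assumptions:

package mseexample

import (
	"github.com/dereklstinson/GoCuNets/devices/gpu/nvidia/cudnn" // assumed import path
	"github.com/dereklstinson/GoCuNets/layers"                   // assumed import path
	"github.com/dereklstinson/GoCuNets/loss"                     // assumed import path
)

// mseStep runs one error calculation and reads back the loss.
// y is the network output, dy is the target, dx receives the errors.
func mseStep(m *loss.MSE, h *cudnn.Handler, dx, y, dy *layers.Tensor) (float32, error) {
	if err := m.ErrorGPU(h, dx, y, dy); err != nil {
		return 0, err
	}
	// Loss should only be read after ErrorGPU has run.
	return m.Loss(), nil
}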

func (*MSE) NumAlphaScalars

func (m *MSE) NumAlphaScalars() int

NumAlphaScalars returns the number of alpha scalars the layer uses for both forward and backward propagation.

func (*MSE) NumBetaScalars

func (m *MSE) NumBetaScalars() int

NumBetaScalars returns the number of beta scalars the layer uses for both forward and backward propagation.

func (*MSE) SetAlphaScalars

func (m *MSE) SetAlphaScalars(alphas []float64) error

SetAlphaScalars sets the alpha scalars for the forward and backward passes, in that order in the array

func (*MSE) SetBetaScalars

func (m *MSE) SetBetaScalars(betas []float64) error

SetBetaScalars sets the beta scalars for the forward and backward passes, in that order in the array
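
A sketch of setting the scalars. It assumes the usual cuDNN convention that alpha scales the result and beta scales the existing destination contents (so beta = 0 overwrites); the values and import path are assumptions:

package mseexample

import "github.com/dereklstinson/GoCuNets/loss" // assumed import path

// configureScalars fills the alpha and beta slices in forward-then-backward order.
func configureScalars(m *loss.MSE) error {
	alphas := make([]float64, m.NumAlphaScalars())
	for i := range alphas {
		alphas[i] = 1 // illustrative value
	}
	if err := m.SetAlphaScalars(alphas); err != nil {
		return err
	}
	// Leaving the betas at zero overwrites the destination rather than accumulating
	// into it (assumption based on the usual cuDNN alpha/beta convention).
	betas := make([]float64, m.NumBetaScalars())
	return m.SetBetaScalars(betas)
}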

type MSE2

type MSE2 struct {
	// contains filtered or unexported fields
}

MSE2 computes the MSE without calling any kernel outside of cuDNN

func CreateMSE2

func CreateMSE2(h *cudnn.Handler, target *layers.Tensor) (m *MSE2, err error)

CreateMSE2 creates an MSE2 loss calculator

func (*MSE2) GetAverageBatchLoss

func (m *MSE2) GetAverageBatchLoss() float32

GetAverageBatchLoss gets the average batch loss. It also satisfies the gocunets.LossLayer interface.

func (*MSE2) GetBatchLoss

func (m *MSE2) GetBatchLoss() []float32

GetBatchLoss gets the loss for each batch. It also satisfies the gocunets.LossLayer interface.

func (*MSE2) Inference

func (m *MSE2) Inference(x, y *layers.Tensor) (err error)

Inference satisfies the gocunets.LossLayer interface

func (*MSE2) PerformError

func (m *MSE2) PerformError(x, dx, y, dy *layers.Tensor) error

PerformError performs the error calculation. It satisfies the loss layer interface.
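
A minimal sketch of one MSE2 error pass, assuming the tensors were created elsewhere and that the import paths shown are correct for this module:

package mse2example

import (
	"github.com/dereklstinson/GoCuNets/layers" // assumed import path
	"github.com/dereklstinson/GoCuNets/loss"   // assumed import path
)

// mse2Step performs the error calculation and reads back the per-batch and average losses.
func mse2Step(m *loss.MSE2, x, dx, y, dy *layers.Tensor) (avg float32, perBatch []float32, err error) {
	if err = m.PerformError(x, dx, y, dy); err != nil {
		return 0, nil, err
	}
	return m.GetAverageBatchLoss(), m.GetBatchLoss(), nil
}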

func (*MSE2) TestForward

func (m *MSE2) TestForward(x, y, dy *layers.Tensor) (err error)

type SoftMax

type SoftMax struct {
	// contains filtered or unexported fields
}

SoftMax holds the methods to compute the softmax loss

func CreateSoftMax

func CreateSoftMax(h *cudnn.Handler) (s *SoftMax, err error)

CreateSoftMax creates the softmax loss function

func MakeSoftMaxLossCalculator

func MakeSoftMaxLossCalculator() SoftMax

MakeSoftMaxLossCalculator returns a loss calculator for softmax

func (SoftMax) BatchLossCPU

func (s SoftMax) BatchLossCPU(actual, desired []float32, batchsize, classificationsize int) (percent, loss float32)

BatchLossCPU takes the actual and desired arrays laid out as actual[i*classificationsize+j], where i is the batch index and j is the class index.
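
A small sketch of that layout with a made-up batch of two samples and three classes; the import path and the one-hot encoding of desired are assumptions:

package main

import (
	"fmt"

	"github.com/dereklstinson/GoCuNets/loss" // assumed import path
)

func main() {
	s := loss.MakeSoftMaxLossCalculator()

	batchsize, classificationsize := 2, 3

	// actual[i*classificationsize+j] is the output for batch i, class j.
	actual := []float32{
		0.7, 0.2, 0.1, // batch 0
		0.1, 0.1, 0.8, // batch 1
	}
	// desired uses the same layout; one-hot labels are assumed here.
	desired := []float32{
		1, 0, 0,
		0, 0, 1,
	}

	percent, lossValue := s.BatchLossCPU(actual, desired, batchsize, classificationsize)
	fmt.Println(percent, lossValue)
}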

func (SoftMax) EpocLossCPU

func (s SoftMax) EpocLossCPU(actual, desired [][]float32, batchsize, classificationsize int) (percent, loss float32)

EpocLossCPU returns the loss for an epoch when the batch losses were not calculated with BatchLossCPU.

func (SoftMax) EpocLossFromBatchLosses

func (s SoftMax) EpocLossFromBatchLosses(percentb, lossb []float32) (percent, loss float32)

EpocLossFromBatchLosses takes arrays of percent and loss values accumulated over the batches and returns the total percent and loss over those batches
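
A sketch of accumulating per-batch results and reducing them at the end of an epoch; the batches data and import path are hypothetical:

package main

import (
	"fmt"

	"github.com/dereklstinson/GoCuNets/loss" // assumed import path
)

func main() {
	s := loss.MakeSoftMaxLossCalculator()
	batchsize, classificationsize := 1, 2

	// Hypothetical per-batch (actual, desired) pairs in the actual[i*classificationsize+j] layout.
	batches := [][2][]float32{
		{{0.9, 0.1}, {1, 0}},
		{{0.3, 0.7}, {0, 1}},
	}

	var percents, losses []float32
	for _, b := range batches {
		p, l := s.BatchLossCPU(b[0], b[1], batchsize, classificationsize)
		percents = append(percents, p)
		losses = append(losses, l)
	}

	// Reduce the accumulated batch results into epoch totals.
	percent, lossValue := s.EpocLossFromBatchLosses(percents, losses)
	fmt.Println(percent, lossValue)
}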

func (*SoftMax) GetAverageBatchLoss

func (s *SoftMax) GetAverageBatchLoss() float32

GetAverageBatchLoss gets the average batch loss

func (*SoftMax) Inference

func (s *SoftMax) Inference(x, y *layers.Tensor) (err error)

Inference just performs the forward propagation of the classifier

func (*SoftMax) PerformError

func (s *SoftMax) PerformError(x, dx, y, target *layers.Tensor) (err error)

PerformError performs the softmax error calculation
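
A minimal sketch of one softmax error pass on the GPU, assuming the tensors were created elsewhere; the tensor roles and import paths are assumptions:

package softmaxexample

import (
	"github.com/dereklstinson/GoCuNets/layers" // assumed import path
	"github.com/dereklstinson/GoCuNets/loss"   // assumed import path
)

// softmaxStep computes the softmax error into dx and reads back the average batch loss.
// x is assumed to be the network output and target the labels.
func softmaxStep(s *loss.SoftMax, x, dx, y, target *layers.Tensor) (float32, error) {
	if err := s.PerformError(x, dx, y, target); err != nil {
		return 0, err
	}
	return s.GetAverageBatchLoss(), nil
}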

func (*SoftMax) TestForward

func (s *SoftMax) TestForward(x, y, target *layers.Tensor) (err error)

TestForward still calculates the AverageLoggLoss

