trainer

package

Documentation

Overview

Package trainer is used for training networks. There is not much support for this yet. It will have vanilla and momentum updates running on a device. It is hard to build any kind of trainer using cuDNN; the OpTensor operation is rather limited.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func CreateTrainingMem

func CreateTrainingMem(handle *cudnn.Handler, trainer Trainer, w *layers.Tensor) error

CreateTrainingMem creates the training memory for the trainer.
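
A minimal sketch of attaching training memory before the first update. The import paths, the two handles, and the helper name are assumptions; only SetupAdam and CreateTrainingMem are taken from this package, and the hyperparameter values are illustrative.

import (
	// Import paths are assumed from this module's layout; adjust as needed.
	"github.com/dereklstinson/GoCuNets/devices/gpu/nvidia/cudnn"
	"github.com/dereklstinson/GoCuNets/layers"
	"github.com/dereklstinson/GoCuNets/trainer"
	"github.com/dereklstinson/gocudnn/xtra"
)

// newAdamWithMem is a hypothetical helper: it creates an Adam trainer and
// then allocates its training memory to match the weight tensor w.
func newAdamWithMem(h *cudnn.Handler, xh *xtra.Handle, w *layers.Tensor) (*trainer.Adam, error) {
	adam, err := trainer.SetupAdam(xh, 1e-8, 1e-8, 32) // decay1, decay2, batch (illustrative)
	if err != nil {
		return nil, err
	}
	if err := trainer.CreateTrainingMem(h, adam, w); err != nil {
		return nil, err
	}
	return adam, nil
}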

func DebuggingAdam

func DebuggingAdam()

DebuggingAdam is for debugging purposes

func SetupAdamWandB

func SetupAdamWandB(tctx *xtra.Handle, decay1, decay2 float32, batch int32) (*Adam, *Adam, error)

SetupAdamWandB returns Adam trainers for both the weights (W) and the bias (B).

func SetupAdamWandB2

func SetupAdamWandB2(handle *cudnn.Handler, rate, dwalpha, decay1, decay2 float32, batch int32) (*Adam, *Adam, error)

SetupAdamWandB2 returns Adam trainers for both the weights (W) and the bias (B), and additionally takes the learning rate.
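
A sketch, reusing the assumed imports from the earlier example, of building separate trainers for the weights and the bias with an explicit learning rate; the values are illustrative.

// setupWandB is a hypothetical wrapper: wTrainer will update the weights
// and bTrainer the bias.
func setupWandB(handle *cudnn.Handler) (wTrainer, bTrainer *trainer.Adam, err error) {
	// Arguments: rate, dwalpha, decay1, decay2, batch (values illustrative).
	return trainer.SetupAdamWandB2(handle, 0.001, 0, 1e-8, 1e-8, 32)
}

Both returned trainers presumably still need their training memory attached (see CreateTrainingMem or SetTrainingMem) before the first update.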

Types

type Adam

type Adam struct {
	// contains filtered or unexported fields
}

Adam is a struct that holds the parameters for Adam optimization.

func SetupAdam

func SetupAdam(tctx *xtra.Handle, decay1, decay2 float32, batch int32) (*Adam, error)

SetupAdam sets up an Adam trainer.

func (*Adam) Dims

func (a *Adam) Dims() []int32

Dims returns the dimensions of the training-parameter holders.

func (*Adam) L1L2Loss

func (a *Adam) L1L2Loss() (float32, float32)

L1L2Loss returns the L1 and L2 losses of the memory that Adam was training.

func (*Adam) SetBatch

func (a *Adam) SetBatch(batch float32)

SetBatch sets the batch size.

func (*Adam) SetBeta1

func (a *Adam) SetBeta1(beta1 float32)

SetBeta1 sets beta1

func (*Adam) SetBeta2

func (a *Adam) SetBeta2(beta2 float32)

SetBeta2 sets beta2

func (*Adam) SetDecay1

func (a *Adam) SetDecay1(decay1 float32)

SetDecay1 sets decay1

func (*Adam) SetDecay2

func (a *Adam) SetDecay2(decay2 float32)

SetDecay2 sets decay2.

func (*Adam) SetDecays

func (a *Adam) SetDecays(l1, l2 float32)

SetDecays sets the decay rates for the trainer

func (*Adam) SetEps

func (a *Adam) SetEps(eps float32)

SetEps sets eps

func (*Adam) SetRates

func (a *Adam) SetRates(rate, dwalpha float32)

SetRates sets the learning rate and dwalpha.

func (*Adam) SetTrainingMem

func (a *Adam) SetTrainingMem(han *cudnn.Handler, w *layers.Tensor) error

SetTrainingMem creates the training memory for the Adam trainer.

func (*Adam) UpdateWeights

func (a *Adam) UpdateWeights(handle *cudnn.Handler, dw, w *layers.Tensor, batchsize, counter int) error

UpdateWeights updates the weights
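
A hedged sketch of one step in a training loop, assuming the imports from the first example plus fmt. dw is assumed to hold gradients from backpropagation, and counter is assumed to be the global step count used by Adam's bias correction (the documentation does not spell out its meaning).

// trainLoop is a hypothetical loop; adam should already have its
// training memory attached (SetTrainingMem or CreateTrainingMem).
func trainLoop(handle *cudnn.Handler, adam *trainer.Adam, dw, w *layers.Tensor, batchSize, numSteps int) error {
	for step := 1; step <= numSteps; step++ {
		// ... backpropagation fills dw here, outside this package ...
		if err := adam.UpdateWeights(handle, dw, w, batchSize, step); err != nil {
			return err
		}
		l1, l2 := adam.L1L2Loss() // decay losses recorded during the update
		fmt.Printf("step %d: l1=%v l2=%v\n", step, l1, l2)
	}
	return nil
}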

type Momentum

type Momentum struct {
	// contains filtered or unexported fields
}

Momentum is a struct that is used for the momentum operation in updating weights.

func SetupMomentum

func SetupMomentum(decay1, decay2, rate, momentum, batch float64) *Momentum

SetupMomentum sets up the trainer. For one and zero, pass the cscalar values of 1 and 0 that match the data type (for example, gocudnn.CFloat(1.0) and gocudnn.CFloat(0.0)). This is a hack, but it has to be done for sanity's sake (my sanity, not yours :) ). I know of a way to fix this, but I am not able to do it right now. That being said, maybe the reflect package might help (idk, maybe not). The best thing I can think of is a type switch, but I would have to do that for every type, and I might add some more types to the gocudnn package. Or I could make some training stuff in C in the gocudnn package.
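
A short sketch of the momentum path, again under the assumed imports: unlike the Adam constructors, SetupMomentum returns the trainer directly with no error, and the gsum training memory still has to be loaded before the first update. handle and w are assumed, and the values are illustrative.

// setupMomentum is a hypothetical helper for the momentum trainer.
func setupMomentum(handle *cudnn.Handler, w *layers.Tensor) (*trainer.Momentum, error) {
	mom := trainer.SetupMomentum(
		1e-8, // decay1
		1e-8, // decay2
		0.01, // rate: learning rate
		0.9,  // momentum
		32,   // batch
	)
	mom.SetRate(0.005) // the rate can be changed after setup
	// SetTrainingMem loads the gsum values (see below).
	if err := mom.SetTrainingMem(handle, w); err != nil {
		return nil, err
	}
	return mom, nil
}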

func (*Momentum) L1L2Loss

func (t *Momentum) L1L2Loss() (float32, float32)

L1L2Loss returns the loss that was previously recorded.

func (*Momentum) SetDecays

func (t *Momentum) SetDecays(l1, l2 float32)

SetDecays sets the decay rates for the trainer

func (*Momentum) SetRate

func (t *Momentum) SetRate(rate float32)

SetRate sets the learning rate of momentum.

func (*Momentum) SetTrainingMem

func (t *Momentum) SetTrainingMem(handle *cudnn.Handler, w *layers.Tensor) error

SetTrainingMem will load the gsum values

func (*Momentum) UpdateWeights

func (t *Momentum) UpdateWeights(handle *cudnn.Handler, dw, w *layers.Tensor, batch int) error

UpdateWeights, for now, is just the momentum operation. I might have to make a new CUDA library for gocudnn. I will have to check that out.

type Settings

type Settings struct {
	Beta1    float64 `json:"beta_1,omitempty"`
	Beta2    float64 `json:"beta_2,omitempty"`
	Decay1   float64 `json:"decay_1,omitempty"`
	Decay2   float64 `json:"decay_2,omitempty"`
	Rate     float64 `json:"rate,omitempty"`
	Momentum float64 `json:"momentum,omitempty"`
	Eps      float64 `json:"eps,omitempty"`
	Batch    float64 `json:"batch,omitempty"`
	Managed  bool    `json:"managed,omitempty"`
}

Settings contains the settings of a trainer
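
Since every field carries a json tag with omitempty, Settings round-trips through encoding/json. A minimal, self-contained sketch (field values illustrative):

package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/dereklstinson/GoCuNets/trainer" // import path assumed
)

func main() {
	s := trainer.Settings{
		Beta1: 0.9,
		Beta2: 0.999,
		Rate:  0.001,
		Eps:   1e-8,
		Batch: 32,
	}
	data, err := json.MarshalIndent(s, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(data)) // unset fields are dropped by omitempty
}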

type TSettings

type TSettings struct {
	Adam     Settings
	Momentum Settings
}

TSettings contains the trainer settings per trainer

type Trainer

type Trainer interface {
	UpdateWeights(ctx *cudnn.Handler, dw, w *layers.Tensor, batch, counter int) error
	L1L2Loss() (float32, float32)
	SetRates(rate, dwalpha float32)
	SetDecays(l1, l2 float32)
}

Trainer will be used for updating weights. Only Momentum and Adam are available right now.
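
A sketch of update code written against the interface rather than a concrete type, assuming the imports from the earlier examples. Going by the method sets listed above, *Adam matches every method of the interface; Momentum's UpdateWeights, as listed, takes no counter argument, so *Adam is the safer concrete type for this illustration.

// updateStep is a hypothetical helper that works with any Trainer.
func updateStep(h *cudnn.Handler, t trainer.Trainer, dw, w *layers.Tensor, batch, counter int) (l1, l2 float32, err error) {
	if err = t.UpdateWeights(h, dw, w, batch, counter); err != nil {
		return 0, 0, err
	}
	l1, l2 = t.L1L2Loss() // losses recorded by the update
	return l1, l2, nil
}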
