Documentation ¶
Overview ¶
Package xtra is a collection of functions, backed by CUDA kernels, that I find useful in deep learning. This package can also be used as an example of how to write functions using the cuda subpackage.
Index ¶
- type ConcatEx
- func (c *ConcatEx) GetOutputDimsFromInputDims(srcs [][]int32, frmt gocudnn.TensorFormat) (outputdims []int32, err error)
- func (c *ConcatEx) GetOutputdims(srcs []*gocudnn.TensorD) (outdims []int32, err error)
- func (c *ConcatEx) Op(h *Handle, srcs []*gocudnn.TensorD, srcsmem []cutil.Mem, alpha float64, ...) error
- type Config
- type Config2d
- type Config3d
- type Handle
- type RegParams
- type SofMaxLogLoss
- type Swapper
- type TrainerD
- func (d *TrainerD) GetTrainingDescriptor() (TrainingMode, gocudnn.DataType)
- func (d *TrainerD) L1L2Regularization(h *Handle, desc *gocudnn.TensorD, dw, w, l1, l2 cutil.Mem, params RegParams) error
- func (d *TrainerD) TrainValues(h *Handle, desc *gocudnn.TensorD, dw, w, gsum, xsum cutil.Mem, ...) error
- type TrainingMode
- type TrainingModeFlag
- type TrainingParams
- type XActivationD
- type XActivationMode
- type XLossD
- type XLossMode
- type XLossModeFlag
- type XResizeD
- type XShapetoBatchD
- func (s *XShapetoBatchD) GetBatchtoShapeOutputProperties(descX *gocudnn.TensorD, h, w, hstride, wstride int32) (gocudnn.TensorFormat, gocudnn.DataType, []int32, error)
- func (s *XShapetoBatchD) GetShapetoBatchOutputProperties(descX *gocudnn.TensorD, h, w, hstride, wstride int32) (gocudnn.TensorFormat, gocudnn.DataType, []int32, error)
- func (s *XShapetoBatchD) GetShapetoBatchOutputPropertiesPLUS(descX *gocudnn.TensorD, h, w, hstride, wstride int32) (gocudnn.TensorFormat, gocudnn.DataType, []int32, []int32, error)
- func (s *XShapetoBatchD) ShapeToBatch4d(handle *Handle, xDesc *gocudnn.TensorD, x cutil.Mem, yDesc *gocudnn.TensorD, ...) error
- type XTransposeD
- type Xtra
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type ConcatEx ¶
type ConcatEx struct {
// contains filtered or unexported fields
}
ConcatEx holds the concat kernels
func CreateConcatEx ¶
CreateConcatEx creates a ConcatEx that does concat for NCHW and NHWC half and float tensors.
func (*ConcatEx) GetOutputDimsFromInputDims ¶
func (c *ConcatEx) GetOutputDimsFromInputDims(srcs [][]int32, frmt gocudnn.TensorFormat) (outputdims []int32, err error)
GetOutputDimsFromInputDims gets the output dims from the input dims passed.
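To make the dim bookkeeping concrete, here is a minimal sketch of a channel concat of two NCHW sources. The import paths, the helper name, and the dim values are assumptions for illustration; only GetOutputDimsFromInputDims is from this package.

package examples

import (
	"fmt"

	gocudnn "github.com/dereklstinson/gocudnn"
	"github.com/dereklstinson/gocudnn/xtra"
)

// printConcatDims is a hypothetical helper. c must come from
// CreateConcatEx, and frmt is an NCHW gocudnn.TensorFormat made elsewhere.
func printConcatDims(c *xtra.ConcatEx, frmt gocudnn.TensorFormat) error {
	srcs := [][]int32{
		{32, 16, 28, 28}, // batch 32, 16 channels, 28x28
		{32, 24, 28, 28}, // batch 32, 24 channels, 28x28
	}
	outdims, err := c.GetOutputDimsFromInputDims(srcs, frmt)
	if err != nil {
		return err
	}
	fmt.Println(outdims) // a channel concat would give [32 40 28 28]
	return nil
}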
func (*ConcatEx) GetOutputdims ¶
GetOutputdims gets the dims of the concat output tensor.
type Config2d ¶
type Config2d struct {
	Dimx            int32
	Dimy            int32
	ThreadPerBlockx uint32
	ThreadPerBlocky uint32
	BlockCountx     uint32
	BlockCounty     uint32
}
Config2d holds the parameters for a 2-D kernel launch.
type Config3d ¶
type Config3d struct {
	Dimx            int32
	Dimy            int32
	Dimz            int32
	ThreadPerBlockx uint32
	ThreadPerBlocky uint32
	ThreadPerBlockz uint32
	BlockCountx     uint32
	BlockCounty     uint32
	BlockCountz     uint32
}
Config3d holds the parameters for a 3-D kernel launch.
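The block counts in these configs typically come from ceiling division of the dims by the threads per block. Here is a small sketch of that arithmetic, assuming the standard CUDA launch convention; divUp is illustrative and not part of this package.

package examples

// divUp is the usual ceiling division used to turn a dim and a
// threads-per-block count into a block count.
func divUp(dim int32, threads uint32) uint32 {
	return (uint32(dim) + threads - 1) / threads
}

For Dimx = 1000 and ThreadPerBlockx = 32, divUp yields 32 blocks (1024 threads), and the kernel bounds-checks the extra 24.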
type Handle ¶
type Handle struct {
// contains filtered or unexported fields
}
Handle is a handle for xtra functions. Right now all functions that use Handle are strictly float32, because I use GTX 1080ti(s) and there is basically no motivation to expand the capability. Maybe if someone wants to get me an RTX 2080ti I will do something about that. heh heh heh
func MakeHandle ¶
MakeHandle makes one of them there "Xtra" Handles used for the xtra functions I added to gocudnn.
func MakeHandleEx ¶
MakeHandleEx makes a Handle that uses a worker to feed the GPU functions. If worker is nil, it will use the device on the current host thread this function was called on.
func (*Handle) LaunchConfig ¶
LaunchConfig returns a config struct that is used to configure some kernel launches
func (*Handle) LaunchConfig2d ¶
LaunchConfig2d returns configs for the kernel launch
func (*Handle) LaunchConfig3d ¶
LaunchConfig3d returns configs for the kernel launch
type RegParams ¶
type RegParams struct {
// contains filtered or unexported fields
}
RegParams holds the regularization parameters.
func CreateRegParamsFloat32 ¶
CreateRegParamsFloat32 creates the RegParams for float32. I really don't like this function; it was kind of a shortcut since I use a GTX 1080ti. Now that the new RTX line has come out, users will probably want to take advantage of types other than single precision.
type SofMaxLogLoss ¶
type SofMaxLogLoss struct {
// contains filtered or unexported fields
}
SofMaxLogLoss holds the function needed to compute the log loss of the softmax function.
func NewSoftMaxNegLogLoss ¶
func NewSoftMaxNegLogLoss(h *Handle) (*SofMaxLogLoss, error)
NewSoftMaxNegLogLoss creates a softmaxlogloss handler.
type Swapper ¶
type Swapper struct {
// contains filtered or unexported fields
}
Swapper contains swap kernels that are used through methods
func NewBatchSwapper ¶
NewBatchSwapper makes a Swapper. This is handy if image data is already in tensors in gpu mem.
func (*Swapper) EveryOther ¶
func (s *Swapper) EveryOther(h *Handle, Adesc *gocudnn.TensorD, A cutil.Mem, Bdesc *gocudnn.TensorD, B cutil.Mem, start, stride int32) error
EveryOther swaps the two tensors by every other batch. Even does the evens; if not even, it does the odds.
func (*Swapper) UpperLower ¶
func (s *Swapper) UpperLower(h *Handle, Adesc *gocudnn.TensorD, A cutil.Mem, Bdesc *gocudnn.TensorD, B cutil.Mem, Aupper, Bupper, inverse bool) error
UpperLower swaps batches between two tensors: either the upper half of both tensors or the lower half of both. inverse is a holder variable; it doesn't do anything right now.
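Here is a sketch of swapping the upper halves of two batches already in GPU memory. Only UpperLower's documented signature is from this package; the helper name and import paths are assumptions.

package examples

import (
	"github.com/dereklstinson/cutil"
	gocudnn "github.com/dereklstinson/gocudnn"
	"github.com/dereklstinson/gocudnn/xtra"
)

// swapUpperHalves is a hypothetical helper. A and B already live in GPU
// memory, and their descriptors must describe compatible batches.
func swapUpperHalves(h *xtra.Handle, s *xtra.Swapper,
	Adesc *gocudnn.TensorD, A cutil.Mem,
	Bdesc *gocudnn.TensorD, B cutil.Mem) error {
	// Aupper=true, Bupper=true: swap the upper half of A with the
	// upper half of B. inverse is a holder variable per the docs.
	return s.UpperLower(h, Adesc, A, Bdesc, B, true, true, false)
}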
type TrainerD ¶
type TrainerD struct {
// contains filtered or unexported fields
}
TrainerD is the descriptor of the trainer
func NewTrainingDescriptor ¶
NewTrainingDescriptor creates and sets a TrainerD. All modes get decay1, decay2, and rate; all but vanilla get eps.
func (*TrainerD) GetTrainingDescriptor ¶
func (d *TrainerD) GetTrainingDescriptor() (TrainingMode, gocudnn.DataType)
GetTrainingDescriptor returns the info that was set for the training descriptor
func (*TrainerD) L1L2Regularization ¶
func (d *TrainerD) L1L2Regularization(h *Handle, desc *gocudnn.TensorD, dw, w, l1, l2 cutil.Mem, params RegParams) error
L1L2Regularization does the l1l2 regularization
func (*TrainerD) TrainValues ¶
func (d *TrainerD) TrainValues(h *Handle, desc *gocudnn.TensorD, dw, w, gsum, xsum cutil.Mem, params TrainingParams, counter int32) error
TrainValues trains the weights. Adagrad requires gsum but not xsum; if Adagrad is used, nil can be passed for xsum.
Counter is for Adam. It starts at zero and can be anything; mostly it is used to count the number of times the weights have been trained (epochs). Maybe having a PSO control it would give interesting results.
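Putting the trainer pieces together, here is a sketch of a single update step. CreateParamsFloat32 and TrainValues are the documented calls; the helper name, import paths, and hyperparameter values are assumptions.

package examples

import (
	"github.com/dereklstinson/cutil"
	gocudnn "github.com/dereklstinson/gocudnn"
	"github.com/dereklstinson/gocudnn/xtra"
)

// updateWeights is a hypothetical helper showing one TrainValues step.
// d must already be built with NewTrainingDescriptor. For Adagrad,
// xsum may be nil per the docs.
func updateWeights(h *xtra.Handle, d *xtra.TrainerD, desc *gocudnn.TensorD,
	dw, w, gsum, xsum cutil.Mem, counter int32) error {
	// eps, rate, beta1, beta2, dwalpha. Params the selected mode
	// doesn't use are ignored per the docs.
	params := xtra.CreateParamsFloat32(1e-8, 0.001, 0.9, 0.999, 0)
	return d.TrainValues(h, desc, dw, w, gsum, xsum, params, counter)
}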
type TrainingModeFlag ¶
type TrainingModeFlag struct { }
TrainingModeFlag is an empty struct that passes TrainingMode flags through methods.
func (TrainingModeFlag) AdaDelta ¶
func (t TrainingModeFlag) AdaDelta() TrainingMode
AdaDelta returns the flag for the AdaDelta algorithm.
func (TrainingModeFlag) AdaGrad ¶
func (t TrainingModeFlag) AdaGrad() TrainingMode
AdaGrad returns the flag for the AdaGrad algorithm.
func (TrainingModeFlag) Adam ¶
func (t TrainingModeFlag) Adam() TrainingMode
Adam returns the flag for the Adam algorithm.
type TrainingParams ¶
type TrainingParams struct {
// contains filtered or unexported fields
}
TrainingParams is a struct that holds the training params. When a training mode is selected, the params that are not part of that mode will be ignored.
func CreateParamsFloat32 ¶
func CreateParamsFloat32(eps, rate, beta1, beta2, dwalpha float32) TrainingParams
CreateParamsFloat32 creates float32 parameters for the different types of optimization.
func (*TrainingParams) SetBeta1 ¶
func (a *TrainingParams) SetBeta1(beta1 float32)
SetBeta1 sets beta1
func (*TrainingParams) SetBeta2 ¶
func (a *TrainingParams) SetBeta2(beta2 float32)
SetBeta2 sets beta2
func (*TrainingParams) SetDWalpha ¶
func (a *TrainingParams) SetDWalpha(dwalpha float32)
SetDWalpha sets dwalpha, which is a smoothing factor applied to dw.
func (*TrainingParams) SetRo ¶
func (a *TrainingParams) SetRo(ro float32)
SetRo sets ro, which is used for AdaDelta.
type XActivationD ¶
type XActivationD struct {
// contains filtered or unexported fields
}
XActivationD is the activation descriptor for the "Xtra" stuff that I added to cudnn
func NewXActivationDescriptor ¶
func NewXActivationDescriptor(h *Handle, amode XActivationMode, dtype gocudnn.DataType, nanprop gocudnn.NANProp, coef float64) (*XActivationD, error)
NewXActivationDescriptor creates a descriptor for the xtra activation functions made for gocudnn. Note: only trainable activations will be trained; tmode will be ignored for unsupported activations. Note: only functions requiring coef will get it; coef will be ignored for unsupported activations.
func (*XActivationD) BackProp ¶
func (xA *XActivationD) BackProp(h *Handle, xD *gocudnn.TensorD, x cutil.Mem, dxD *gocudnn.TensorD, dx cutil.Mem, dyD *gocudnn.TensorD, dy cutil.Mem, coefs, dcoefs, thresh, dthresh, coefs1, dcoefs1 cutil.Mem, alpha, beta float64) error
BackProp does the back propagation for xactivation. All of the functions use xD, x, dxD, dx, dyD, dy.
Prelu uses coefs and dcoefs: dx[i] = coefs[i]*dy[i] where x[i] < 0, and dcoefs[i] += dy[i]*x[i].
Threshhold uses coefs, thresh, coefs1 and dcoefs, dthresh, dcoefs1: dx[i] = dy[i]*coefs[i] where x[i] < thresh[i], else dx[i] = coefs1[i]*dy[i]; and dcoefs[i] += x[i]*dy[i], and the same for dcoefs1.
The function will only use the values needed to perform the calculation for the chosen mode; it will ignore the ones that are not used.
func (*XActivationD) ForwardProp ¶
func (xA *XActivationD) ForwardProp(h *Handle, xD *gocudnn.TensorD, x cutil.Mem, yD *gocudnn.TensorD, y cutil.Mem, coefs, thresh, coefs1 cutil.Mem, alpha, beta float64) error
ForwardProp does the forward propagation for xactivation. All of the functions use xD, x, yD, y.
Prelu uses coefs: y[i] = coefs[i]*x[i] where x[i] < 0.
Threshhold uses coefs and coefs1: y[i] = x[i]*coefs[i] where x[i] > thresh[i], else y[i] = x[i]*coefs1[i].
The function will only use the values needed to perform the calculation for the chosen mode; it will ignore the ones that are not used.
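Here is a sketch of a leaky forward pass. Since the docs say unused values are ignored, nil is passed for the coef buffers Leaky does not use. The helper name, import paths, and the alpha/beta interpretation are assumptions.

package examples

import (
	"github.com/dereklstinson/cutil"
	gocudnn "github.com/dereklstinson/gocudnn"
	"github.com/dereklstinson/gocudnn/xtra"
)

// leakyForward is a hypothetical helper. Leaky uses only xD/x and yD/y,
// so nil is passed for coefs, thresh, and coefs1, which the docs say
// are ignored for modes that don't use them.
func leakyForward(h *xtra.Handle, xA *xtra.XActivationD,
	xD *gocudnn.TensorD, x cutil.Mem,
	yD *gocudnn.TensorD, y cutil.Mem) error {
	// alpha=1, beta=0 is assumed to mean "overwrite y", following
	// the usual cuDNN scaling convention.
	return xA.ForwardProp(h, xD, x, yD, y, nil, nil, nil, 1, 0)
}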
type XActivationMode ¶
type XActivationMode uint
XActivationMode holds flags for the xtra activations.
func (*XActivationMode) Leaky ¶
func (x *XActivationMode) Leaky() XActivationMode
Leaky returns the leaky flag
func (*XActivationMode) Prelu ¶
func (x *XActivationMode) Prelu() XActivationMode
Prelu returns the Prelu flag; it is a weighted leaky applied on just the channels.
func (*XActivationMode) Threshhold ¶
func (x *XActivationMode) Threshhold() XActivationMode
Threshhold returns the Threshhold (parametric) flag.
type XLossD ¶
type XLossD struct {
// contains filtered or unexported fields
}
XLossD is the loss descriptor for the loss function
func NewLossDescriptor ¶
NewLossDescriptor creates a loss descriptor to calculate loss.
func (*XLossD) CalculateErrorAndLoss ¶
func (l *XLossD) CalculateErrorAndLoss(h *Handle, dxD *gocudnn.TensorD, dx cutil.Mem, yD *gocudnn.TensorD, y cutil.Mem, dyD *gocudnn.TensorD, dy cutil.Mem, alpha, beta float64) (float32, error)
CalculateErrorAndLoss calculates the error going backward and the loss going forward. dxD, yD, and dyD need to have the same dims and size, and right now they can only be of datatype float.
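Here is a sketch of one combined call, returning the loss while writing the error into dx. Only CalculateErrorAndLoss's documented signature is from this package; the helper name, import paths, and the alpha/beta interpretation are assumptions.

package examples

import (
	"github.com/dereklstinson/cutil"
	gocudnn "github.com/dereklstinson/gocudnn"
	"github.com/dereklstinson/gocudnn/xtra"
)

// errorAndLoss is a hypothetical helper. dxD, yD, and dyD must share
// dims and size, and currently must be float (per the docs).
func errorAndLoss(h *xtra.Handle, l *xtra.XLossD,
	dxD *gocudnn.TensorD, dx cutil.Mem,
	yD *gocudnn.TensorD, y cutil.Mem,
	dyD *gocudnn.TensorD, dy cutil.Mem) (float32, error) {
	// alpha=1, beta=0 is assumed to follow the usual cuDNN
	// scaling convention.
	return l.CalculateErrorAndLoss(h, dxD, dx, yD, y, dyD, dy, 1, 0)
}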
type XLossModeFlag ¶
type XLossModeFlag struct { }
XLossModeFlag passes XLossMode flags through methods
type XResizeD ¶
type XResizeD struct {
// contains filtered or unexported fields
}
XResizeD is a struct that holds the reshape functions
func CreateResizeDesc ¶
CreateResizeDesc creates a descriptor that holds the reshapes.
type XShapetoBatchD ¶
type XShapetoBatchD struct {
// contains filtered or unexported fields
}
XShapetoBatchD holds the kernel function
func CreateShapetoBatchDesc ¶
func CreateShapetoBatchDesc(handle *Handle) (*XShapetoBatchD, error)
CreateShapetoBatchDesc creates a shape to batch desc
func (*XShapetoBatchD) GetBatchtoShapeOutputProperties ¶
func (s *XShapetoBatchD) GetBatchtoShapeOutputProperties(descX *gocudnn.TensorD, h, w, hstride, wstride int32) (gocudnn.TensorFormat, gocudnn.DataType, []int32, error)
GetBatchtoShapeOutputProperties will place the batches into the shape. It will only work if xdims[0]/(h*w) doesn't have a remainder.
func (*XShapetoBatchD) GetShapetoBatchOutputProperties ¶
func (s *XShapetoBatchD) GetShapetoBatchOutputProperties(descX *gocudnn.TensorD, h, w, hstride, wstride int32) (gocudnn.TensorFormat, gocudnn.DataType, []int32, error)
GetShapetoBatchOutputProperties returns properties to make a new descriptor
func (*XShapetoBatchD) GetShapetoBatchOutputPropertiesPLUS ¶
func (s *XShapetoBatchD) GetShapetoBatchOutputPropertiesPLUS(descX *gocudnn.TensorD, h, w, hstride, wstride int32) (gocudnn.TensorFormat, gocudnn.DataType, []int32, []int32, error)
GetShapetoBatchOutputPropertiesPLUS returns properties to make a new descriptor. PLUS the N1,N2 used to resize the dims
func (*XShapetoBatchD) ShapeToBatch4d ¶
func (s *XShapetoBatchD) ShapeToBatch4d(handle *Handle, xDesc *gocudnn.TensorD, x cutil.Mem, yDesc *gocudnn.TensorD, y cutil.Mem, hstride int32, wstride int32, S2B bool) error
ShapeToBatch4d separates chunks of memory into blocks, so that each window is the size of the block passed, and those windows become the new batches. If S2B is true it does the "forward" direction, where the x values are placed into the y tensor; if S2B is false the y values are placed into the x tensor. The C channel is the only thing that needs to be the same between tensors x and y. Any values that don't fit will get the zero value. To get the y tensor please use FindShapetoBatchoutputTensor.
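Here is a sketch of the forward (S2B=true) direction: query the output properties, build y's descriptor and memory from them (elided here), then run the kernel. The helper name, import paths, and window/stride values are assumptions; the two method signatures are documented above.

package examples

import (
	"github.com/dereklstinson/cutil"
	gocudnn "github.com/dereklstinson/gocudnn"
	"github.com/dereklstinson/gocudnn/xtra"
)

// shapeToBatchForward is a hypothetical helper. yDesc and y are assumed
// to have been built from the properties returned below; allocation is
// elided.
func shapeToBatchForward(h *xtra.Handle, s *xtra.XShapetoBatchD,
	xDesc *gocudnn.TensorD, x cutil.Mem,
	yDesc *gocudnn.TensorD, y cutil.Mem) error {
	// 4x4 windows with stride 4: non-overlapping blocks become batches.
	frmt, dtype, dims, err := s.GetShapetoBatchOutputProperties(xDesc, 4, 4, 4, 4)
	if err != nil {
		return err
	}
	_, _, _ = frmt, dtype, dims // the caller uses these to make yDesc/y
	// S2B=true runs the forward direction: x values are placed into y.
	return s.ShapeToBatch4d(h, xDesc, x, yDesc, y, 4, 4, true)
}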
type XTransposeD ¶
type XTransposeD struct {
// contains filtered or unexported fields
}
XTransposeD holds the kernel function
type Xtra ¶
type Xtra struct {
// contains filtered or unexported fields
}
Xtra is a holder for Xtra functions that are made by me, and not by cuda or cudnn.
func (*Xtra) KernelLocation ¶
KernelLocation will set the direct kernel location and make it the kernel location used.