Documentation ¶
Index ¶
- type Info
- type Ops
- func (o *Ops) BackwardProp(handle *cudnn.Handler, ...) error
- func (o *Ops) BiasScaleProperties() (gocudnn.TensorFormat, gocudnn.DataType, []int32)
- func (o *Ops) ForwardInference(handle *cudnn.Handler, alpha, beta, epsilon float64, ...) error
- func (o *Ops) ForwardTraining(handle *cudnn.Handler, alpha, beta, averagingfactor, epsilon float64, ...) error
- func (o *Ops) Info(h *cudnn.Handler) (Info, error)
- func (o *Ops) Stage(handle *cudnn.Handler, x *tensor.Volume) (err error)
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Info ¶
type Info struct {
	Epsilon           float64               `json:"Epsilon"`
	Exponentialfactor uint                  `json:"Exponentialfactor"`
	Mode              gocudnn.BatchNormMode `json:"Mode"`
	Format            gocudnn.TensorFormat  `json:"Format"`
	DataType          gocudnn.DataType      `json:"DataType"`
	Nan               gocudnn.NANProp       `json:"Nan"`
	Dims              []int32               `json:"Dims"`
	Stride            []int32               `json:"Stride"`
	RRM               []byte                `json:"RRM"`
	RRV               []byte                `json:"RRV"`
	RSM               []byte                `json:"RSM"`
	RSV               []byte                `json:"RSV"`
}
Info describes a batch-norm layer's settings and state so the layer can be saved and restored later.
type Ops ¶
type Ops struct {
// contains filtered or unexported fields
}
Ops holds the state needed to perform the batch-norm operation.
func PreStagePerActivation ¶
PreStagePerActivation - normalization is performed per activation. This mode is intended to be used after non-convolutional network layers. In this mode, the tensor dimensions of bnBias and bnScale, the parameters used in the cudnnBatchNormalization* functions, are 1xCxHxW.
func PreStageSpatial ¶
PreStageSpatial - normalization is performed over the N and spatial dimensions. This mode is intended for use after convolutional layers (where spatial invariance is desired). In this mode the bnBias and bnScale tensor dimensions are 1xCx1x1.
func PreStageSpatialPersistant ¶
PreStageSpatialPersistant - similar to the spatial mode, but can be faster.
An optimized path may be selected for CUDNN_DATA_FLOAT and CUDNN_DATA_HALF types, compute capability 6.0 or higher for the following two batch normalization API calls: cudnnBatchNormalizationForwardTraining(), and cudnnBatchNormalizationBackward(). In the case of cudnnBatchNormalizationBackward(), the savedMean and savedInvVariance arguments should not be NULL.
The rest of this section applies for NCHW mode only:
This mode may use a scaled atomic integer reduction that is deterministic but imposes more restrictions on the input data range. When a numerical overflow occurs the algorithm may produce NaN-s or Inf-s (infinity) in output buffers.
When Inf-s/NaN-s are present in the input data, the output in this mode is the same as from a pure floating-point implementation.
For finite but very large input values, the algorithm may encounter overflows more frequently due to a lower dynamic range and emit Inf-s/NaN-s while CUDNN_BATCHNORM_SPATIAL will produce finite results. The user can invoke cudnnQueryRuntimeError() to check if a numerical overflow occurred in this mode.
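The modes above differ only in which axes the batch statistics are reduced over. The following pure-Go sketch (independent of cuDNN; the layout helpers and names are illustrative, not part of this package) contrasts the two reductions for a small NCHW tensor: spatial mode yields one mean/variance per channel (bias/scale are 1xCx1x1), while per-activation mode yields one per (c,h,w) location (bias/scale are 1xCxHxW).

```go
package main

import "fmt"

// Tiny NCHW tensor: element (n,c,h,w) lives at ((n*C+c)*H+h)*W + w.
const N, C, H, W = 2, 3, 2, 2

func idx(n, c, h, w int) int { return ((n*C+c)*H+h)*W + w }

// spatialStats reduces over N, H, and W: one mean/variance per channel,
// matching the 1xCx1x1 bias/scale shape of the spatial mode.
func spatialStats(x []float64) (mean, variance []float64) {
	mean = make([]float64, C)
	variance = make([]float64, C)
	m := float64(N * H * W)
	for c := 0; c < C; c++ {
		var sum, sumSq float64
		for n := 0; n < N; n++ {
			for h := 0; h < H; h++ {
				for w := 0; w < W; w++ {
					v := x[idx(n, c, h, w)]
					sum += v
					sumSq += v * v
				}
			}
		}
		mean[c] = sum / m
		variance[c] = sumSq/m - mean[c]*mean[c]
	}
	return mean, variance
}

// perActivationStats reduces over N only: one mean/variance per (c,h,w)
// location, matching the 1xCxHxW bias/scale shape of the per-activation mode.
func perActivationStats(x []float64) (mean, variance []float64) {
	mean = make([]float64, C*H*W)
	variance = make([]float64, C*H*W)
	m := float64(N)
	for c := 0; c < C; c++ {
		for h := 0; h < H; h++ {
			for w := 0; w < W; w++ {
				j := (c*H+h)*W + w
				var sum, sumSq float64
				for n := 0; n < N; n++ {
					v := x[idx(n, c, h, w)]
					sum += v
					sumSq += v * v
				}
				mean[j] = sum / m
				variance[j] = sumSq/m - mean[j]*mean[j]
			}
		}
	}
	return mean, variance
}

func main() {
	x := make([]float64, N*C*H*W)
	for i := range x {
		x[i] = float64(i%7) - 3 // arbitrary deterministic values
	}
	sm, _ := spatialStats(x)
	pm, _ := perActivationStats(x)
	fmt.Println(len(sm), len(pm)) // prints "3 12": C parameters vs C*H*W parameters
}
```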
func (*Ops) BackwardProp ¶
func (o *Ops) BackwardProp(handle *cudnn.Handler, alphadata, betadata, alphaparam, betaparam, epsilon float64, x, scale, dscale, dbias, dx, dy *tensor.Volume) error
BackwardProp performs the backward propagation of batch norm during training.
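BackwardProp runs cuDNN's batch-norm backward pass on the GPU; as a reference for the math it computes, here is a minimal single-channel CPU sketch (an illustration, not this package's implementation). It also shows a useful property for debugging: the input gradients of a batch-norm channel sum to (numerically) zero.

```go
package main

import (
	"fmt"
	"math"
)

// batchNormBackward computes the batch-norm gradients for one channel:
// dgamma (scale), dbeta (bias), and dx (input), given the original input x
// and the upstream gradient dy.
func batchNormBackward(x, dy []float64, gamma, epsilon float64) (dx []float64, dgamma, dbeta float64) {
	m := float64(len(x))
	var mean float64
	for _, v := range x {
		mean += v
	}
	mean /= m
	var variance float64
	for _, v := range x {
		variance += (v - mean) * (v - mean)
	}
	variance /= m
	invStd := 1 / math.Sqrt(variance+epsilon)

	xhat := make([]float64, len(x))
	for i, v := range x {
		xhat[i] = (v - mean) * invStd
		dgamma += dy[i] * xhat[i] // gradient w.r.t. the scale parameter
		dbeta += dy[i]            // gradient w.r.t. the bias parameter
	}
	dx = make([]float64, len(x))
	for i := range x {
		// Standard batch-norm input gradient:
		// dx_i = gamma*invStd/m * (m*dy_i - sum(dy) - xhat_i*sum(dy*xhat))
		dx[i] = gamma * invStd / m * (m*dy[i] - dbeta - xhat[i]*dgamma)
	}
	return dx, dgamma, dbeta
}

func main() {
	x := []float64{1, 2, 4, 7}
	dy := []float64{0.1, -0.2, 0.3, 0.4}
	dx, dgamma, dbeta := batchNormBackward(x, dy, 1.5, 1e-5)
	var sum float64
	for _, v := range dx {
		sum += v
	}
	// Because xhat has zero mean, the dx entries sum to ~0.
	fmt.Printf("sum(dx)=%.1e dgamma=%.4f dbeta=%.4f\n", sum, dgamma, dbeta)
}
```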
func (*Ops) BiasScaleProperties ¶
BiasScaleProperties returns the tensor format, data type, and dimensions used for the batch-norm bias and scale tensors in the forward and backward passes.
func (*Ops) ForwardInference ¶
func (o *Ops) ForwardInference(handle *cudnn.Handler, alpha, beta, epsilon float64, x, scale, bias, y *tensor.Volume) error
ForwardInference is the forward propagation used for inference (testing and production).
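At inference time, batch norm normalizes with the running statistics accumulated during training rather than with the current batch's statistics; in cuDNN the running statistics are maintained by an exponential average controlled by the averagingfactor argument of ForwardTraining. A single-channel CPU sketch of both pieces (helper names are illustrative, not this package's API):

```go
package main

import (
	"fmt"
	"math"
)

// forwardInference normalizes one channel with precomputed running
// statistics: y = scale*(x-runningMean)/sqrt(runningVar+epsilon) + bias.
// No batch statistics are computed, so single inputs work fine.
func forwardInference(x []float64, scale, bias, runningMean, runningVar, epsilon float64) []float64 {
	y := make([]float64, len(x))
	invStd := 1 / math.Sqrt(runningVar+epsilon)
	for i, v := range x {
		y[i] = scale*(v-runningMean)*invStd + bias
	}
	return y
}

// updateRunning applies the exponential moving average that training uses
// to maintain a running statistic (the averagingfactor role).
func updateRunning(running, batch, factor float64) float64 {
	return (1-factor)*running + factor*batch
}

func main() {
	// Normalize against running mean 1.5, running variance 1.0.
	y := forwardInference([]float64{0.5, 1.5, 2.5}, 1.0, 0.0, 1.5, 1.0, 1e-5)
	fmt.Println(y) // values below/at/above the running mean map below/at/above the bias
	// One training step nudges the running mean toward the batch mean.
	fmt.Println(updateRunning(1.5, 2.0, 0.1))
}
```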