gocunets

package module v0.0.0-...-98db5b7
Published: May 7, 2020 License: CC-BY-4.0 Imports: 24 Imported by: 4

README

gocunets

The kame ha me ha of neural networks in Go.

Pull requests are welcome.

packages needed

go get github.com/nfnt/resize     // Will eventually get rid of this.
go get github.com/dereklstinson/gpu-monitoring-tools/bindings/go/nvml  // Will eventually get bindings in gocudnn.
go get github.com/pkg/browser     // Used by the ui package; it auto-launches a browser. Not so useful on a headless machine.
go get github.com/dereklstinson/nccl
go get github.com/dereklstinson/gocudnn
go get github.com/dereklstinson/cutil
go get -u gonum.org/v1/gonum/...  // Will eventually get rid of this.
go get gonum.org/v1/plot/...      // Will want to get rid of this and use javascript plotting.

GoCuNets does essentially all of its computing on the GPU through the gocudnn package. Right now the only GPU support is Nvidia. Eventually, AMD GPUs will be supported through HIP and MIOpen. I want to make it so that when using this package you don't have to download both the CUDA and HIP libraries.

This package is separated into a few parts:

github.com/dereklstinson/gocunets/devices
github.com/dereklstinson/gocunets/layers
github.com/dereklstinson/gocunets/loss
github.com/dereklstinson/gocunets/ui
github.com/dereklstinson/gocunets/trainer
github.com/dereklstinson/gocunets

The sub-package devices contains sub-packages that wrap device library bindings.

The sub-package layers contains sub-packages that implement the individual layers.

The sub-package loss contains different loss algorithms.

The sub-package ui contains a makeshift real-time neural network monitoring server. Eventually, I would like to have it create a report or reports.

The sub-package trainer contains weight trainers.

The main package contains a higher level interface.

More on GoCuNets

A lot has changed since the beginning of GoCuNets. Some features that used to be available, like the ui interface and model saving and loading, are not available for the moment. Saving and loading will be implemented soon as part of an upcoming Model interface.

Type Builder has exposed flags that can be set through their methods; you only need to set the ones that are going to be used. A Builder builds layers, tensors, randomly initialized tensors, and more. This makes it easier to build networks without having to pass flags all the time (see the sketch after the list below).

These flags are:

	Frmt   TensorFormat
	Dtype  DataType
	Cmode  ConvolutionMode
	Mtype  MathType
	Pmode  PoolingMode
	AMode  ActivationMode
	BNMode BatchNormMode
	Nan    NanProp
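
Below is a minimal sketch, assuming a single visible GPU, of creating a Builder and setting a few of these flags. The seed (42) and the specific flag choices are illustrative only, not a recommended configuration.

```go
package main

import "github.com/dereklstinson/gocunets"

func main() {
	devs, err := gocunets.GetDeviceList()
	if err != nil {
		panic(err)
	}
	w := gocunets.CreateWorker(devs[0])        // a worker is a locked host thread on the device
	h := gocunets.CreateHandle(w, devs[0], 42) // handle and worker must use the same device
	bldr := gocunets.CreateBuilder(h)

	// Set only the flags that will actually be used; the rest keep the
	// defaults listed under CreateBuilder.
	bldr.Frmt.NCHW()
	bldr.Dtype.Float()
	bldr.Cmode.CrossCorrelation()
	bldr.AMode.Relu()
}
```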

Type Handle is a handle to a GPU. A GPU can have more than one Handle, and one worker can be used on multiple handles. A Handle and its worker must use the same device.
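
For example (a sketch reusing the gocunets import from the block above; the seeds are arbitrary), two handles can share one worker on the same device:

```go
// twoHandlesOneWorker shows the relationship described above:
// one device, one locked worker thread, two handles built on it.
func twoHandlesOneWorker() (*gocunets.Handle, *gocunets.Handle, error) {
	devs, err := gocunets.GetDeviceList()
	if err != nil {
		return nil, nil, err
	}
	w := gocunets.CreateWorker(devs[0])
	h1 := gocunets.CreateHandle(w, devs[0], 1)
	h2 := gocunets.CreateHandle(w, devs[0], 2) // same worker, same device
	return h1, h2, nil
}
```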

The Module interface is new. Modules will eventually be used to implement graphs. There are a few modules that I have made. Eventually everything from the layers sub-package will have wrappers in gocunets and will be made into modules.

This is a working package, but it is pre-alpha. Better version management is coming.

Documentation

Index

Constants

This section is empty.

Variables

var Flags struct {
	Format TensorFormat
	Dtype  DataType
	Nan    NanProp
	CMode  ConvolutionMode
	BNMode BatchNormMode
	BNOps  BatchNormOps
	PMode  PoolingMode
	AMode  ActivationMode
	SMMode SoftmaxMode
	SMAlgo SoftmaxAlgo
	MType  MathType
}

Flags is a struct that should only be used for passing flags.

Functions

func BackPropForSharedInputForModuleNetworks

func BackPropForSharedInputForModuleNetworks(m []*SimpleModuleNetwork) (err error)

BackPropForSharedInputForModuleNetworks is a hack for when two module networks share the same input. It will zero out the dx values for each module and then run back propagation.

func ModuleActivationDebug

func ModuleActivationDebug()

ModuleActivationDebug is for debugging

func ModuleBackwardDataDebug

func ModuleBackwardDataDebug()

ModuleBackwardDataDebug is for debugging

func ModuleBackwardFilterDebug

func ModuleBackwardFilterDebug()

ModuleBackwardFilterDebug is for debugging

func ModuleConcatDebug

func ModuleConcatDebug()

ModuleConcatDebug is for debugging

func ModuleForwardDebug

func ModuleForwardDebug()

ModuleForwardDebug is for debugging

func PerformanceDebugging

func PerformanceDebugging()

PerformanceDebugging raises a flag that causes performance information for inner layers to be printed.

func SetPeerAccess

func SetPeerAccess(devs []Device) (connections int, err error)

SetPeerAccess sets peer access across all devices
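
A hedged usage sketch (assumes the gocunets and fmt imports, and that more than one GPU is present):

func enablePeerAccess() error {
	devs, err := gocunets.GetDeviceList()
	if err != nil {
		return err
	}
	// SetPeerAccess reports how many peer connections were established.
	n, err := gocunets.SetPeerAccess(devs)
	if err != nil {
		return err
	}
	fmt.Println("peer connections:", n)
	return nil
}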

Types

type ActMode

type ActMode int

ActMode is the set of activation mode flags

type ActivaitonModeFlag

type ActivaitonModeFlag struct {
}

ActivaitonModeFlag passes ActivationMode flags

func (ActivaitonModeFlag) Logistic

func (a ActivaitonModeFlag) Logistic() ActMode

Logistic returns ActivationMode flag for Logistic

func (ActivaitonModeFlag) SoftMax

func (a ActivaitonModeFlag) SoftMax() ActMode

SoftMax returns ActivationMode flag for softmax

func (ActivaitonModeFlag) Tanh

func (a ActivaitonModeFlag) Tanh() ActMode

Tanh returns ActivationMode flag for Tanh

type ActivationMode

type ActivationMode struct {
	act.Mode
}

ActivationMode struct wrapper for gocudnn.ActivationMode. Look up methods in gocudnn.

func (*ActivationMode) ClippedRelu

func (a *ActivationMode) ClippedRelu() ActivationMode

ClippedRelu sets and returns the ClippedRelu flag

func (*ActivationMode) Elu

func (a *ActivationMode) Elu() ActivationMode

Elu sets and returns the Elu flag

func (*ActivationMode) Identity

func (a *ActivationMode) Identity() ActivationMode

Identity sets and returns the Identity flag

func (*ActivationMode) Leaky

func (a *ActivationMode) Leaky() ActivationMode

Leaky sets and returns the Leaky flag

func (*ActivationMode) PRelu

func (a *ActivationMode) PRelu() ActivationMode

PRelu sets and returns the PRelu flag

func (*ActivationMode) Relu

func (a *ActivationMode) Relu() ActivationMode

Relu sets and returns the Relu flag

func (*ActivationMode) Sigmoid

func (a *ActivationMode) Sigmoid() ActivationMode

Sigmoid sets and returns the Sigmoid flag

func (*ActivationMode) Tanh

func (a *ActivationMode) Tanh() ActivationMode

Tanh sets and returns the Tanh flag

func (*ActivationMode) Threshhold

func (a *ActivationMode) Threshhold() ActivationMode

Threshhold sets and returns the Threshhold flag

type Backwarder

type Backwarder interface {
	Backward() error
}

Backwarder does the backward operation

type BatchNormMode

type BatchNormMode struct {
	gocudnn.BatchNormMode
}

BatchNormMode struct wrapper for gocudnn.BatchNormMode. Look up methods in gocudnn.

func (*BatchNormMode) PerActivation

func (b *BatchNormMode) PerActivation() BatchNormMode

PerActivation sets and returns the PerActivation flag

func (*BatchNormMode) Spatial

func (b *BatchNormMode) Spatial() BatchNormMode

Spatial sets and returns the Spatial flag

func (*BatchNormMode) SpatialPersistent

func (b *BatchNormMode) SpatialPersistent() BatchNormMode

SpatialPersistent sets and returns the SpatialPersistent flag

type BatchNormOps

type BatchNormOps struct {
	gocudnn.BatchNormOps
}

BatchNormOps struct wrapper for gocudnn.BatchNormOps. Look up methods in gocudnn.

func (*BatchNormOps) Activation

func (b *BatchNormOps) Activation() BatchNormOps

Activation sets and returns the Activation flag

func (*BatchNormOps) AddActivation

func (b *BatchNormOps) AddActivation() BatchNormOps

AddActivation sets and returns the AddActivation flag

func (*BatchNormOps) Normal

func (b *BatchNormOps) Normal() BatchNormOps

Normal sets and returns the Normal flag

type Builder

type Builder struct {
	Frmt   TensorFormat
	Dtype  DataType
	Cmode  ConvolutionMode
	Mtype  MathType
	Pmode  PoolingMode
	AMode  ActivationMode
	BNMode BatchNormMode
	Nan    NanProp
	// contains filtered or unexported fields
}

Builder will create layers with the flags set within the struct

func CreateBuilder

func CreateBuilder(h *Handle) (b *Builder)

CreateBuilder creates a Builder. Flags can be set through their methods inside the Builder. The default flags are:

	Frmt.NCHW()
	Mtype.Default()
	Nan.NotPropigate()
	Cmode.CrossCorrelation()
	Dtype.Float()
	Pmode.AverageCountExcludePadding()
	BNMode.Spatial()
	AMode.Leaky()

func (*Builder) Activation

func (l *Builder) Activation(id int64) (a *Layer, err error)

Activation creates an activation layer

func (*Builder) AllocateMemory

func (l *Builder) AllocateMemory(sib uint) (cutil.Pointer, error)

AllocateMemory allocates memory

func (*Builder) BatchNorm

func (l *Builder) BatchNorm(id int64) (batch *Layer, err error)

BatchNorm creates a batch norm layer

func (*Builder) ConnectLayers

func (l *Builder) ConnectLayers(layer1, layer2 *Layer) error

ConnectLayers creates the output of layer1 and connects it as the input of layer2.
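
A rough sketch of the intended use (the ids, filter dims, and pad/stride/dilation values are made up, and the exact required setup order, e.g. whether the first layer's inputs must be set beforehand, may differ):

func connectConvAndActivation(bldr *gocunets.Builder) (conv, act *gocunets.Layer, err error) {
	// 16 filters over 3 input channels with 3x3 kernels (illustrative dims).
	w, dw, b, db, err := bldr.CreateConvolutionWeights([]int32{16, 3, 3, 3})
	if err != nil {
		return nil, nil, err
	}
	conv, err = bldr.ConvolutionLayer(1, 1, w, dw, b, db,
		[]int32{1, 1}, []int32{1, 1}, []int32{1, 1}) // pad, stride, dilation
	if err != nil {
		return nil, nil, err
	}
	act, err = bldr.Activation(2)
	if err != nil {
		return nil, nil, err
	}
	// ConnectLayers builds conv's output tensors and sets them as act's input.
	if err = bldr.ConnectLayers(conv, act); err != nil {
		return nil, nil, err
	}
	return conv, act, nil
}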

func (*Builder) ConvolutionLayer

func (l *Builder) ConvolutionLayer(id int64, groupcount int32, w, dw, b, db *Tensor, pad, stride, dilation []int32) (conv *Layer, err error)

ConvolutionLayer creates a convolution layer

func (*Builder) CreateConvolutionWeights

func (l *Builder) CreateConvolutionWeights(dims []int32) (w, dw, b, db *Tensor, err error)

CreateConvolutionWeights creates the weights and delta weights of a convolution layer

func (*Builder) CreateDeconvolutionWeights

func (l *Builder) CreateDeconvolutionWeights(dims []int32) (w, dw, b, db *Tensor, err error)

CreateDeconvolutionWeights creates the weights and delta weights of a deconvolution layer

func (*Builder) CreateRandomTensor

func (l *Builder) CreateRandomTensor(dims []int32, mean, std float32, seed uint64) (t *Tensor, err error)

CreateRandomTensor creates a random tensor

func (*Builder) CreateTensor

func (l *Builder) CreateTensor(dims []int32) (t *Tensor, err error)

CreateTensor creates a tensor

func (*Builder) Dropout

func (l *Builder) Dropout(id int64, dropoutpercent float32, seed uint64) (d *Layer, err error)

Dropout creates a Dropout layer

func (*Builder) FindBiasTensor

func (l *Builder) FindBiasTensor(dims []int32) (b *Tensor, err error)

FindBiasTensor finds the bias tensor according to the dims

func (*Builder) GetHandle

func (l *Builder) GetHandle() *Handle

GetHandle returns the handle

func (*Builder) PoolingLayer

func (l *Builder) PoolingLayer(id int64, window, padding, stride []int32) (p *Layer, err error)

PoolingLayer creates a pooling layer with flags set in Builder

func (*Builder) ReverseConvolutionLayer

func (l *Builder) ReverseConvolutionLayer(id int64, groupcount int32, w, dw, b, db *Tensor, pad, stride, dilation []int32) (rconv *Layer, err error)

ReverseConvolutionLayer creates a reverse convolution layer

type Classifier

type Classifier struct {
	// contains filtered or unexported fields
}

Classifier takes the outputs of a neural network and finds their error, which is passed back to the rest of the network.

type ClassifierModule

type ClassifierModule struct {
	// contains filtered or unexported fields
}

ClassifierModule is used to classify outputs

func CreateCustomLossLayer

func CreateCustomLossLayer(id int64, b *Builder, l LossLayer) (m *ClassifierModule)

CreateCustomLossLayer creates a module that uses l

func CreateMSEClassifier

func CreateMSEClassifier(id int64, bldr *Builder, x, dx, target *Tensor) (m *ClassifierModule, err error)

CreateMSEClassifier sets the mean squared error classifier

func CreateSoftMaxClassifier

func CreateSoftMaxClassifier(id int64, bldr *Builder, x, dx, y, target *Tensor) (m *ClassifierModule, err error)

CreateSoftMaxClassifier creates a softmax classifier module.

func (*ClassifierModule) GetAverageBatchLoss

func (m *ClassifierModule) GetAverageBatchLoss() float32

GetAverageBatchLoss gets the average batch loss

func (*ClassifierModule) GetTensorDX

func (m *ClassifierModule) GetTensorDX() (dx *Tensor)

GetTensorDX returns set dx tensor

func (*ClassifierModule) GetTensorDY

func (m *ClassifierModule) GetTensorDY() (dy *Tensor)

GetTensorDY returns set dy tensor

func (*ClassifierModule) GetTensorX

func (m *ClassifierModule) GetTensorX() (x *Tensor)

GetTensorX returns set x tensor

func (*ClassifierModule) GetTensorY

func (m *ClassifierModule) GetTensorY() (y *Tensor)

GetTensorY returns set y tensor

func (*ClassifierModule) ID

func (m *ClassifierModule) ID() int64

ID returns the id set for the module

func (*ClassifierModule) Inference

func (m *ClassifierModule) Inference() error

Inference does a forward propagation without calculating errors

func (*ClassifierModule) PerformError

func (m *ClassifierModule) PerformError() error

PerformError does the output and error calculation of the previous layer of the network

func (*ClassifierModule) SetTensorDX

func (m *ClassifierModule) SetTensorDX(dx *Tensor)

SetTensorDX sets dx tensor

func (*ClassifierModule) SetTensorDY

func (m *ClassifierModule) SetTensorDY(dy *Tensor)

SetTensorDY sets dy tensor

func (*ClassifierModule) SetTensorX

func (m *ClassifierModule) SetTensorX(x *Tensor)

SetTensorX sets x tensor

func (*ClassifierModule) SetTensorY

func (m *ClassifierModule) SetTensorY(y *Tensor)

SetTensorY sets y tensor

func (*ClassifierModule) TestForward

func (m *ClassifierModule) TestForward() error

TestForward does a test forward pass so that the loss can be seen

type Comm

type Comm struct {
	// contains filtered or unexported fields
}

Comm is a communicator

func CreateComms

func CreateComms(hs []*Handle) (comm []*Comm, err error)

CreateComms creates Communicators for parallel processes.

type CompressionModule

type CompressionModule struct {
	// contains filtered or unexported fields
}

CompressionModule is a module that concats several layers together when doing the forward and backward passes

func CreateCompressionModule

func CreateCompressionModule(id int64, bldr *Builder,
	batch, inputchannels int32,
	paralleloutputchans, spacialdims []int32,
	paddingoffset int32,
	falpha, fbeta float64) (m *CompressionModule, err error)

CreateCompressionModule will create a simple module with each of the convolution layers running in parallel. The parallel convolutions will have the same h and w, but the channels for each can be changed. The number of convolutions depends on the length of the channel array. Each convolution will have pad = ((dim-1)*d + 1 + offset)/2. Offset is usually 0, but it can be used to change whether the output is even or odd. This makes the output of each convolution equal in the spatial dims. The strides are fixed at 2.

Because the stride is 2, the spatial dims (h, w) need to be odd. The output tensor follows the formula:

	N = batch
	C = neurons[0] + ... + neurons[i]
	H, W (and any further spatial dims) = ((input - 1) / 2) + 1

The dx tensor is zeroed before back propagation. If multiple modules share the same input, each module will need its own tensor as its dx output. Those tensors need to be summed into the output dy tensor of the module it got its x input tensor from.
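
As a quick check of the formula above (a sketch, not part of the package), the spatial output size for an odd input follows directly:

// compressedSpatialDim mirrors the documented stride-2 output formula
// out = ((in - 1) / 2) + 1, e.g. 33 -> 17 and 65 -> 33.
func compressedSpatialDim(in int32) int32 {
	return ((in - 1) / 2) + 1
}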

func (CompressionModule) Backward

func (m CompressionModule) Backward() (err error)

Backward does the backward propagation

func (CompressionModule) FindOutputDims

func (m CompressionModule) FindOutputDims() (dims []int32, err error)

FindOutputDims returns the output dims of the module

func (CompressionModule) Forward

func (m CompressionModule) Forward() (err error)

Forward does the forward operation

func (CompressionModule) GetTensorDX

func (m CompressionModule) GetTensorDX() (dx *Tensor)

GetTensorDX returns set dx tensor

func (CompressionModule) GetTensorDY

func (m CompressionModule) GetTensorDY() (dy *Tensor)

GetTensorDY returns set dy tensor

func (CompressionModule) GetTensorX

func (m CompressionModule) GetTensorX() (x *Tensor)

GetTensorX returns set x tensor

func (CompressionModule) GetTensorY

func (m CompressionModule) GetTensorY() (y *Tensor)

GetTensorY returns set y tensor

func (CompressionModule) ID

func (m CompressionModule) ID() int64

ID is the id

func (CompressionModule) Inference

func (m CompressionModule) Inference() (err error)

Inference does the inference forward operation

func (CompressionModule) InitHiddenLayers

func (m CompressionModule) InitHiddenLayers(rate, decay1, decay2 float32) (err error)

InitHiddenLayers will init the hidden layers.

func (CompressionModule) InitWorkspace

func (m CompressionModule) InitWorkspace() (err error)

InitWorkspace inits the hidden workspace

func (CompressionModule) SetTensorDX

func (m CompressionModule) SetTensorDX(dx *Tensor)

SetTensorDX sets dx tensor

func (CompressionModule) SetTensorDY

func (m CompressionModule) SetTensorDY(dy *Tensor)

SetTensorDY sets dy tensor

func (CompressionModule) SetTensorX

func (m CompressionModule) SetTensorX(x *Tensor)

SetTensorX sets x tensor

func (CompressionModule) SetTensorY

func (m CompressionModule) SetTensorY(y *Tensor)

SetTensorY sets y tensor

func (CompressionModule) Update

func (m CompressionModule) Update(epoch int) (err error)

Update updates the weights of the hidden convolution layer

type Concat

type Concat struct {
	// contains filtered or unexported fields
}

Concat does the concat operation

func CreateConcat

func CreateConcat(h *Handle) (c *Concat, err error)

CreateConcat creates a concat operation handler

func (*Concat) Backward

func (c *Concat) Backward() error

Backward does the backward pass, with data flowing from dest to srcs

func (*Concat) FindOutputDims

func (c *Concat) FindOutputDims(srcs []*Tensor) (outputdims []int32, err error)

FindOutputDims finds the output dims

func (*Concat) FindOutputDimsfromInputDims

func (c *Concat) FindOutputDimsfromInputDims(srcs [][]int32, frmt TensorFormat) (outputdims []int32, err error)

FindOutputDimsfromInputDims finds the output dims from the input dims

func (*Concat) Forward

func (c *Concat) Forward() error

Forward does the forward pass, with data flowing from srcs to dest

func (*Concat) SetDeltaDest

func (c *Concat) SetDeltaDest(deltadest *Tensor)

SetDeltaDest sets the delta dest for back propagation

func (*Concat) SetDest

func (c *Concat) SetDest(dest *Tensor)

SetDest sets the output dest

func (*Concat) SetInputDeltaSrcs

func (c *Concat) SetInputDeltaSrcs(deltasrcs []*Tensor)

SetInputDeltaSrcs sets the delta srcs for back propagation

func (*Concat) SetInputSrcs

func (c *Concat) SetInputSrcs(srcs []*Tensor)

SetInputSrcs sets the input srcs
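
A rough usage sketch of Concat (an assumed call order; the srcs come from elsewhere and error handling is trimmed to the essentials):

func concatForward(h *gocunets.Handle, bldr *gocunets.Builder, srcs []*gocunets.Tensor) (*gocunets.Tensor, error) {
	c, err := gocunets.CreateConcat(h)
	if err != nil {
		return nil, err
	}
	c.SetInputSrcs(srcs)
	dims, err := c.FindOutputDims(srcs)
	if err != nil {
		return nil, err
	}
	dest, err := bldr.CreateTensor(dims)
	if err != nil {
		return nil, err
	}
	c.SetDest(dest)
	// Forward copies the srcs into dest along the channel dimension.
	return dest, c.Forward()
}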

type ConvolutionMode

type ConvolutionMode struct {
	gocudnn.ConvolutionMode
}

ConvolutionMode struct wrapper for gocudnn.ConvolutionMode. Look up methods in gocudnn.

func (*ConvolutionMode) Convolution

func (c *ConvolutionMode) Convolution() ConvolutionMode

Convolution sets and returns the Convolution flag

func (*ConvolutionMode) CrossCorrelation

func (c *ConvolutionMode) CrossCorrelation() ConvolutionMode

CrossCorrelation sets and returns the CrossCorrelation flag

type DataType

type DataType struct {
	gocudnn.DataType
}

DataType struct wrapper for gocudnn.Datatype. Look up methods in gocudnn.

func (*DataType) Double

func (d *DataType) Double() DataType

Double sets and returns the Double flag

func (*DataType) Float

func (d *DataType) Float() DataType

Float sets and returns the Float flag

func (*DataType) Half

func (d *DataType) Half() DataType

Half sets and returns the Half flag

func (*DataType) Int32

func (d *DataType) Int32() DataType

Int32 sets and returns the Int32 flag

func (*DataType) Int8

func (d *DataType) Int8() DataType

Int8 sets and returns the Int8 flag

func (*DataType) Int8x32

func (d *DataType) Int8x32() DataType

Int8x32 sets and returns the Int8x32 flag

func (*DataType) Int8x4

func (d *DataType) Int8x4() DataType

Int8x4 sets and returns the Int8x4 flag

func (*DataType) UInt8

func (d *DataType) UInt8() DataType

UInt8 sets and returns the UInt8 flag

func (*DataType) UInt8x4

func (d *DataType) UInt8x4() DataType

UInt8x4 sets and returns the UInt8x4 flag

type DecompressionModule

type DecompressionModule struct {
	// contains filtered or unexported fields
}

DecompressionModule is a module that concats several layers together when doing the forward and backward passes

func CreateDecompressionModule

func CreateDecompressionModule(id int64, bldr *Builder, batch, inputchannel int32, outputperparallellayer, spacialdims []int32, paddingoffset int32, falpha, fbeta float64) (m *DecompressionModule, err error)

CreateDecompressionModule will create a simple module with each of the deconvolution layers running in parallel. The deconvolution output channels are determined by the size of the neuron channels, and the number of neurons needs to equal the input channels.

The parallel deconvolutions will have the same h and w, but the neuron channels for each can be changed. The number of deconvolutions depends on the length of the channel array. Each deconvolution will have pad = ((dim-1)/2) * dilation. This makes the output of each deconvolution equal in the spatial dims. The strides are fixed at 2.

Because the stride is 2, the spatial dims (h, w) need to be odd. The deconvolution output tensor follows the formula:

	N = batch
	C = neuronchannels[0] + ... + neuronchannels[i]
	H, W (and any further spatial dims) = 2*input - 1

The dx tensor is zeroed before back propagation. If multiple modules share the same input, each module will need its own tensor as its dx output. Those tensors need to be summed into the output dy tensor of the module it got its x input tensor from.
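
And the matching spatial check for the decompression formula (again just a sketch, not part of the package):

// decompressedSpatialDim mirrors the documented output formula out = 2*in - 1,
// e.g. 17 -> 33, which inverts the compression example given earlier.
func decompressedSpatialDim(in int32) int32 {
	return 2*in - 1
}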

func (DecompressionModule) Backward

func (m DecompressionModule) Backward() (err error)

Backward does the backward propagation

func (DecompressionModule) FindOutputDims

func (m DecompressionModule) FindOutputDims() (dims []int32, err error)

FindOutputDims returns the output dims of the module

func (DecompressionModule) Forward

func (m DecompressionModule) Forward() (err error)

Forward does the forward operation

func (DecompressionModule) GetTensorDX

func (m DecompressionModule) GetTensorDX() (dx *Tensor)

GetTensorDX returns set dx tensor

func (DecompressionModule) GetTensorDY

func (m DecompressionModule) GetTensorDY() (dy *Tensor)

GetTensorDY returns set dy tensor

func (DecompressionModule) GetTensorX

func (m DecompressionModule) GetTensorX() (x *Tensor)

GetTensorX returns set x tensor

func (DecompressionModule) GetTensorY

func (m DecompressionModule) GetTensorY() (y *Tensor)

GetTensorY returns set y tensor

func (DecompressionModule) ID

func (m DecompressionModule) ID() int64

ID is the id

func (DecompressionModule) Inference

func (m DecompressionModule) Inference() (err error)

Inference does the inference forward operation

func (DecompressionModule) InitHiddenLayers

func (m DecompressionModule) InitHiddenLayers(rate, decay1, decay2 float32) (err error)

InitHiddenLayers will init the hidden layers.

func (DecompressionModule) InitWorkspace

func (m DecompressionModule) InitWorkspace() (err error)

InitWorkspace inits the hidden workspace

func (DecompressionModule) SetTensorDX

func (m DecompressionModule) SetTensorDX(dx *Tensor)

SetTensorDX sets dx tensor

func (DecompressionModule) SetTensorDY

func (m DecompressionModule) SetTensorDY(dy *Tensor)

SetTensorDY sets dy tensor

func (DecompressionModule) SetTensorX

func (m DecompressionModule) SetTensorX(x *Tensor)

SetTensorX sets x tensor

func (DecompressionModule) SetTensorY

func (m DecompressionModule) SetTensorY(y *Tensor)

SetTensorY sets y tensor

func (DecompressionModule) Update

func (m DecompressionModule) Update(epoch int) (err error)

Update updates the weights of the hidden convolution layer

type Device

type Device struct {
	cudart.Device
	// contains filtered or unexported fields
}

Device is a GPU device

func GetDeviceList

func GetDeviceList() (devices []Device, err error)

GetDeviceList gets the list of available devices

func (Device) Num

func (d Device) Num() int32

Num is the numerical id of the device

type Forwarder

type Forwarder interface {
	Forward() error
}

Forwarder does the forward operation

type Handle

type Handle struct {
	*cudnn.Handler
}

Handle handles the functions of the libraries used in gocunets

func CreateHandle

func CreateHandle(w *Worker, d Device, seed uint64) (h *Handle)

CreateHandle creates a handle for gocunets

func CreateHandles

func CreateHandles(ws []*Worker, ds []Device, seeds []uint64) []*Handle

CreateHandles creates parallel handles, each with its own worker. It also creates non-blocking streams.

func (*Handle) GetWorker

func (h *Handle) GetWorker() *Worker

GetWorker returns the gocu.Worker.

type Layer

type Layer struct {
	// contains filtered or unexported fields
}

Layer is a layer inside a network; it holds inputs and outputs

func CreateOperationLayer

func CreateOperationLayer(id int64, handle *Handle, op Operation) (l *Layer, err error)

CreateOperationLayer creates an operation layer

func (*Layer) Backward

func (l *Layer) Backward() error

Backward performs the backward propagation

func (*Layer) ChangeBatchSize

func (l *Layer) ChangeBatchSize(batchsize int)

ChangeBatchSize will change the batch size

func (*Layer) Forward

func (l *Layer) Forward() error

Forward performs the forward propagation

func (*Layer) GetOutputDims

func (l *Layer) GetOutputDims(input *Tensor) (output []int32, err error)

GetOutputDims gets the dims of the output tensor

func (*Layer) ID

func (l *Layer) ID() int64

ID is the ID of the layer

func (*Layer) LoadTrainer

func (l *Layer) LoadTrainer(handle *cudnn.Handler, batchsize int, trainers ...trainer.Trainer) error

LoadTrainer loads the trainer into the layer

func (*Layer) SetBackwardScalars

func (l *Layer) SetBackwardScalars(alpha, beta float64)

SetBackwardScalars sets backward scalars

func (*Layer) SetForwardScalars

func (l *Layer) SetForwardScalars(alpha, beta float64)

SetForwardScalars sets the forward scalars.

func (*Layer) SetIOs

func (l *Layer) SetIOs(x, dx, y, dy *Tensor)

SetIOs sets the x,dx,y,dy used by the layer

func (*Layer) SetInputs

func (l *Layer) SetInputs(x, dx *Tensor)

SetInputs sets the inputs

func (*Layer) SetOtherScalars

func (l *Layer) SetOtherScalars(alpha, beta float64)

SetOtherScalars sets any other scalars that the layer might have

func (*Layer) SetOutputs

func (l *Layer) SetOutputs(y, dy *Tensor)

SetOutputs sets the outputs

func (*Layer) String

func (l *Layer) String() string

func (*Layer) ToggleAllHiddenLayerValues

func (l *Layer) ToggleAllHiddenLayerValues()

ToggleAllHiddenLayerValues toggles all hidden values on

func (*Layer) ToggleBiasPrint

func (l *Layer) ToggleBiasPrint()

ToggleBiasPrint toggles bias printing if the layer contains a bias.

func (*Layer) ToggleDBiasPrint

func (l *Layer) ToggleDBiasPrint()

ToggleDBiasPrint toggles dbias printing if the layer contains a dbias

func (*Layer) ToggleDWPrint

func (l *Layer) ToggleDWPrint()

ToggleDWPrint toggles delta-weight printing if the layer contains delta weights

func (*Layer) ToggleWPrint

func (l *Layer) ToggleWPrint()

ToggleWPrint toggles the printing of weights or hidden values if the layer contains them

func (*Layer) Update

func (l *Layer) Update(epoch int) error

Update updates weights if layer has them

type LossLayer

type LossLayer interface {
	PerformError(x, dx, y, dy *layers.Tensor) (err error)
	Inference(x, y *layers.Tensor) (err error)
	TestForward(x, y, target *layers.Tensor) (err error)
	GetAverageBatchLoss() float32
}

LossLayer performs two functions: calculating the loss, and calculating the inference forward pass.
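
A minimal skeleton of a custom LossLayer that could be handed to CreateCustomLossLayer. This is a placeholder only: the method bodies are stubs, the id 7 and bldr in the usage comment are assumed, and the layers import path is the sub-package listed in the README.

// assumes: import "github.com/dereklstinson/gocunets/layers"

// stubLoss satisfies LossLayer; a real implementation would compute the loss
// in PerformError and write the gradient into dx.
type stubLoss struct{ avg float32 }

func (s *stubLoss) PerformError(x, dx, y, dy *layers.Tensor) error { return nil }
func (s *stubLoss) Inference(x, y *layers.Tensor) error            { return nil }
func (s *stubLoss) TestForward(x, y, target *layers.Tensor) error  { return nil }
func (s *stubLoss) GetAverageBatchLoss() float32                   { return s.avg }

// usage sketch: module := gocunets.CreateCustomLossLayer(7, bldr, &stubLoss{})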

type LossMode

type LossMode int

LossMode is the flags for loss mode

type LossModeFlag

type LossModeFlag struct {
}

LossModeFlag will return flags for LossMode. More will be added over time.

func (LossModeFlag) Binary

func (l LossModeFlag) Binary() LossMode

Binary is a loss mode

func (LossModeFlag) Huber

func (l LossModeFlag) Huber() LossMode

Huber is a loss mode

type MathType

type MathType struct {
	gocudnn.MathType
}

MathType is math type for tensor cores

func (*MathType) AllowConversion

func (m *MathType) AllowConversion() MathType

AllowConversion sets and returns the AllowConversion flag

func (*MathType) Default

func (m *MathType) Default() MathType

Default sets and returns the Default flag

func (*MathType) TensorOpMath

func (m *MathType) TensorOpMath() MathType

TensorOpMath sets and returns the TensorOpMath flag

type Module

type Module interface {
	ID() int64
	Forward() error
	Backward() error
	Update(counter int) error //counter can count updates or it can count epochs.  I found updates to work best.
	FindOutputDims() ([]int32, error)
	Inference() error
	InitHiddenLayers(rate, decay1, decay2 float32) (err error)
	InitWorkspace() (err error)
	GetTensorX() (x *Tensor)
	GetTensorDX() (dx *Tensor)
	GetTensorY() (y *Tensor)
	GetTensorDY() (dy *Tensor)
	SetTensorX(x *Tensor)
	SetTensorDX(dx *Tensor)
	SetTensorY(y *Tensor)
	SetTensorDY(dy *Tensor)
}

Module is a wrapper around a neural network or set of operations
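
As a sketch of how two already-constructed modules a and b might be chained so that b consumes a's output; whether the tensors are shared this way or copied between modules is an assumption here, not something the docs state:

func chainModules(a, b gocunets.Module) {
	b.SetTensorX(a.GetTensorY())   // b reads the tensor a writes its output to
	b.SetTensorDX(a.GetTensorDY()) // b's input gradient lands where a expects its output gradient
}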

type NanProp

type NanProp struct {
	gocudnn.NANProp
}

NanProp struct wrapper for gocudnn.NanProp. Look up methods in gocudnn.

func (*NanProp) NotPropigate

func (n *NanProp) NotPropigate() NanProp

NotPropigate sets and returns the NotPropigate flag

func (*NanProp) Propigate

func (n *NanProp) Propigate() NanProp

Propigate sets and returns the Propigate flag

type NeutralModule

type NeutralModule struct {
	// contains filtered or unexported fields
}

NeutralModule is for nonsliding modules

func CreateSingleStridedModule

func CreateSingleStridedModule(id int64, bldr *Builder,
	batch, inputchannels int32,
	paralleloutputchans, spacialdims []int32,
	paddingoffset int32,
	falpha, fbeta float64, strides, deconv bool) (m *NeutralModule, err error)

CreateSingleStridedModule creates a Module with stride set to 1. Since there are parallel convolutions in this layer, the input is shared between all of them. The dx tensor is zeroed before back propagation. If multiple modules share the same input, each module will need its own tensor as its dx output. Those tensors need to be summed into the output dy tensor of the module it got its x input tensor from.

func (NeutralModule) Backward

func (m NeutralModule) Backward() (err error)

Backward does the backward propagation

func (NeutralModule) FindOutputDims

func (m NeutralModule) FindOutputDims() (dims []int32, err error)

FindOutputDims returns the output dims of the module

func (NeutralModule) Forward

func (m NeutralModule) Forward() (err error)

Forward does the forward operation

func (NeutralModule) GetTensorDX

func (m NeutralModule) GetTensorDX() (dx *Tensor)

GetTensorDX returns set dx tensor

func (NeutralModule) GetTensorDY

func (m NeutralModule) GetTensorDY() (dy *Tensor)

GetTensorDY returns set dy tensor

func (NeutralModule) GetTensorX

func (m NeutralModule) GetTensorX() (x *Tensor)

GetTensorX returns set x tensor

func (NeutralModule) GetTensorY

func (m NeutralModule) GetTensorY() (y *Tensor)

GetTensorY returns set y tensor

func (NeutralModule) ID

func (m NeutralModule) ID() int64

ID is the id

func (NeutralModule) Inference

func (m NeutralModule) Inference() (err error)

Inference does the inference forward operation

func (NeutralModule) InitHiddenLayers

func (m NeutralModule) InitHiddenLayers(rate, decay1, decay2 float32) (err error)

InitHiddenLayers will init the hidden layers.

func (NeutralModule) InitWorkspace

func (m NeutralModule) InitWorkspace() (err error)

InitWorkspace inits the hidden workspace

func (NeutralModule) SetTensorDX

func (m NeutralModule) SetTensorDX(dx *Tensor)

SetTensorDX sets dx tensor

func (NeutralModule) SetTensorDY

func (m NeutralModule) SetTensorDY(dy *Tensor)

SetTensorDY sets dy tensor

func (NeutralModule) SetTensorX

func (m NeutralModule) SetTensorX(x *Tensor)

SetTensorX sets x tensor

func (NeutralModule) SetTensorY

func (m NeutralModule) SetTensorY(y *Tensor)

SetTensorY sets y tensor

func (NeutralModule) Update

func (m NeutralModule) Update(epoch int) (err error)

Update updates the weights of the hidden convolution layer

type Operation

type Operation interface {
	Forward(handle *cudnn.Handler, x, dx, y, dy *layers.Tensor) error
	Inference(handle *cudnn.Handler, x, y *layers.Tensor) error
	Backward(handle *cudnn.Handler, x, dx, y, dy *Tensor) error
	UpdateWeights(handle *cudnn.Handler) error
	LoadTrainers(handle *cudnn.Handler, trainers ...trainer.Trainer) error
	TrainersNeeded() int
	SetOtherScalars(alpha, beta float64)
	SetForwardScalars(alpha, beta float64)
	SetBackwardScalars(alpha, beta float64)
	GetOutputDims(input *layers.Tensor) ([]int32, error)
}

Operation is a generic operation that a layer uses.

The forward and backward don't need to use all the x,dx,y,and dy, but they do need to be passed.

type OutputModule

type OutputModule struct {
	// contains filtered or unexported fields
}

OutputModule is just a single convolution before the output goes into the loss function

func CreateOutputModule

func CreateOutputModule(id int64, bldr *Builder, batch int32, fdims, pad, stride, dilation []int32, balpha, bbeta, falpha, fbeta float64) (m *OutputModule, err error)

CreateOutputModule creates an output module

func (*OutputModule) Backward

func (m *OutputModule) Backward() error

Backward satisfies module interface

func (*OutputModule) FindOutputDims

func (m *OutputModule) FindOutputDims() ([]int32, error)

FindOutputDims satisfies the module interface

func (*OutputModule) Forward

func (m *OutputModule) Forward() error

Forward satisfies module interface

func (*OutputModule) GetTensorDX

func (m *OutputModule) GetTensorDX() (dx *Tensor)

GetTensorDX returns set dx tensor

func (*OutputModule) GetTensorDY

func (m *OutputModule) GetTensorDY() (dy *Tensor)

GetTensorDY returns set dy tensor

func (*OutputModule) GetTensorX

func (m *OutputModule) GetTensorX() (x *Tensor)

GetTensorX returns set x tensor

func (*OutputModule) GetTensorY

func (m *OutputModule) GetTensorY() (y *Tensor)

GetTensorY returns set y tensor

func (*OutputModule) ID

func (m *OutputModule) ID() int64

ID satisfies module interface

func (*OutputModule) Inference

func (m *OutputModule) Inference() error

Inference satisfies module interface

func (*OutputModule) InitHiddenLayers

func (m *OutputModule) InitHiddenLayers(rate, decay1, decay2 float32) (err error)

InitHiddenLayers will init the hidden operation

func (*OutputModule) InitWorkspace

func (m *OutputModule) InitWorkspace() (err error)

InitWorkspace inits the workspace

func (*OutputModule) SetTensorDX

func (m *OutputModule) SetTensorDX(dx *Tensor)

SetTensorDX sets dx tensor

func (*OutputModule) SetTensorDY

func (m *OutputModule) SetTensorDY(dy *Tensor)

SetTensorDY sets dy tensor

func (*OutputModule) SetTensorX

func (m *OutputModule) SetTensorX(x *Tensor)

SetTensorX sets x tensor

func (*OutputModule) SetTensorY

func (m *OutputModule) SetTensorY(y *Tensor)

SetTensorY sets y tensor

func (*OutputModule) Update

func (m *OutputModule) Update(epoch int) error

Update satisfies the module interface

type PoolingMode

type PoolingMode struct {
	gocudnn.PoolingMode
}

PoolingMode struct wrapper for gocudnn.PoolingMode. Look up methods in gocudnn.

func (*PoolingMode) AverageCountExcludePadding

func (p *PoolingMode) AverageCountExcludePadding() PoolingMode

AverageCountExcludePadding sets and returns the AverageCountExcludePadding flag

func (*PoolingMode) AverageCountIncludePadding

func (p *PoolingMode) AverageCountIncludePadding() PoolingMode

AverageCountIncludePadding sets and returns the AverageCountIncludePadding flag

func (*PoolingMode) Max

func (p *PoolingMode) Max() PoolingMode

Max sets and returns the Max flag

func (*PoolingMode) MaxDeterministic

func (p *PoolingMode) MaxDeterministic() PoolingMode

MaxDeterministic sets and returns the MaxDeterministic flag

type ReverseConcat

type ReverseConcat struct {
	// contains filtered or unexported fields
}

ReverseConcat is a simple way to split a source into multiple dests. If the source channel count is not divisible by the number of dests, the remainder of the source's channels goes to the last dest (for example, splitting 10 channels across 3 dests gives 3, 3, and 4).

func CreateReverseConcat

func CreateReverseConcat(h *Handle) (c *ReverseConcat, err error)

CreateReverseConcat creates a reverse concat

func (*ReverseConcat) Backward

func (c *ReverseConcat) Backward() error

Backward does the backward pass, with delta data flowing from the dests back to the source

func (*ReverseConcat) FindOutputDims

func (c *ReverseConcat) FindOutputDims(Source *Tensor, ndests int32) (outputdims [][]int32, err error)

FindOutputDims finds the output dims for the dests

func (*ReverseConcat) FindOutputDimsfromInputDims

func (c *ReverseConcat) FindOutputDimsfromInputDims(src []int32, ndests int32, frmt TensorFormat) (destdims [][]int32, err error)

FindOutputDimsfromInputDims finds the dest dims from the source dims. The last dest gets the remainder of the channels when the split does not divide evenly.

func (*ReverseConcat) Forward

func (c *ReverseConcat) Forward() error

Forward does the forward pass, with data flowing from the source to the dests

func (*ReverseConcat) SetInputDeltaSource

func (c *ReverseConcat) SetInputDeltaSource(deltasrc *Tensor)

SetInputDeltaSource sets the delta src for back propagation

func (*ReverseConcat) SetInputSource

func (c *ReverseConcat) SetInputSource(src *Tensor)

SetInputSource sets the input source

func (*ReverseConcat) SetOutputDeltaDests

func (c *ReverseConcat) SetOutputDeltaDests(deltadests []*Tensor)

SetOutputDeltaDests sets the delta dests for back propagation

func (*ReverseConcat) SetOutputDests

func (c *ReverseConcat) SetOutputDests(dests []*Tensor)

SetOutputDests sets the output dests

type SimpleModuleNetwork

type SimpleModuleNetwork struct {
	Id         int64             `json:"id,omitempty"`
	C          *Concat           `json:"c,omitempty"`
	Modules    []Module          `json:"modules,omitempty"`
	Output     *OutputModule     `json:"output,omitempty"`
	Classifier *ClassifierModule `json:"classifier,omitempty"`

	Rate, Decay1, Decay2 float32
	// contains filtered or unexported fields
}

SimpleModuleNetwork is a simple module network

func CreateSimpleModuleNetwork

func CreateSimpleModuleNetwork(id int64, b *Builder) (smn *SimpleModuleNetwork)

CreateSimpleModuleNetwork creates a simple module network

func (*SimpleModuleNetwork) Backward

func (m *SimpleModuleNetwork) Backward() (err error)

Backward does a backward pass without a concat

func (*SimpleModuleNetwork) BackwardCustom

func (m *SimpleModuleNetwork) BackwardCustom(backward func() error) (err error)

BackwardCustom does a custom backward function

func (*SimpleModuleNetwork) FindOutputDims

func (m *SimpleModuleNetwork) FindOutputDims() (dims []int32, err error)

FindOutputDims satisfies the Module interface.

(m *SimpleModuleNetwork).SetTensorX() has to be run first. If the module network requires backprop data to go to another module network, then also run (m *SimpleModuleNetwork).SetTensorDX().
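
A rough training-step sketch for a SimpleModuleNetwork that has already been assembled (modules, tensors, and classifier set elsewhere). Whether the classifier's error pass is driven by Backward or needs a separate call is not shown here and is left as an assumption:

func trainStep(net *gocunets.SimpleModuleNetwork, counter int) (loss float32, err error) {
	if err = net.Forward(); err != nil {
		return 0, err
	}
	if err = net.Backward(); err != nil {
		return 0, err
	}
	// counter can count epochs or updates; the package author found updates work best.
	if err = net.Update(counter); err != nil {
		return 0, err
	}
	return net.GetLoss(), nil
}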

func (*SimpleModuleNetwork) Forward

func (m *SimpleModuleNetwork) Forward() (err error)

Forward does a forward without a concat

func (*SimpleModuleNetwork) ForwardCustom

func (m *SimpleModuleNetwork) ForwardCustom(forward func() error) (err error)

ForwardCustom does a custom forward function

func (*SimpleModuleNetwork) GetLoss

func (m *SimpleModuleNetwork) GetLoss() float32

GetLoss returns the loss found.

func (*SimpleModuleNetwork) GetTensorDX

func (m *SimpleModuleNetwork) GetTensorDX() *Tensor

GetTensorDX Gets dx tensor

func (*SimpleModuleNetwork) GetTensorDY

func (m *SimpleModuleNetwork) GetTensorDY() *Tensor

GetTensorDY Gets dy tensor

func (*SimpleModuleNetwork) GetTensorX

func (m *SimpleModuleNetwork) GetTensorX() *Tensor

GetTensorX Gets x tensor

func (*SimpleModuleNetwork) GetTensorY

func (m *SimpleModuleNetwork) GetTensorY() *Tensor

GetTensorY Gets y tensor

func (*SimpleModuleNetwork) ID

func (m *SimpleModuleNetwork) ID() int64

ID satisfies Module interface

func (*SimpleModuleNetwork) Inference

func (m *SimpleModuleNetwork) Inference() (err error)

Inference does a forward without a concat

func (*SimpleModuleNetwork) InitHiddenLayers

func (m *SimpleModuleNetwork) InitHiddenLayers(rate, decay1, decay2 float32) (err error)

InitHiddenLayers satisfies the Module interface

func (*SimpleModuleNetwork) InitWorkspace

func (m *SimpleModuleNetwork) InitWorkspace() (err error)

InitWorkspace inits workspace

func (*SimpleModuleNetwork) SetMSEClassifier

func (m *SimpleModuleNetwork) SetMSEClassifier() (err error)

SetMSEClassifier needs to be made

func (*SimpleModuleNetwork) SetModules

func (m *SimpleModuleNetwork) SetModules(modules []Module)

SetModules sets modules

func (*SimpleModuleNetwork) SetSoftMaxClassifier

func (m *SimpleModuleNetwork) SetSoftMaxClassifier() (err error)

SetSoftMaxClassifier sets the classifier module; it should be added last, after the OutputModule is set.

func (*SimpleModuleNetwork) SetTensorDX

func (m *SimpleModuleNetwork) SetTensorDX(dx *Tensor)

SetTensorDX sets dx tensor

func (*SimpleModuleNetwork) SetTensorDY

func (m *SimpleModuleNetwork) SetTensorDY(dy *Tensor)

SetTensorDY sets dy tensor

func (*SimpleModuleNetwork) SetTensorX

func (m *SimpleModuleNetwork) SetTensorX(x *Tensor)

SetTensorX sets x tensor

func (*SimpleModuleNetwork) SetTensorY

func (m *SimpleModuleNetwork) SetTensorY(y *Tensor)

SetTensorY sets y tensor

func (*SimpleModuleNetwork) TestForward

func (m *SimpleModuleNetwork) TestForward() (err error)

TestForward does the forward prop but it still calculates loss for testing

func (*SimpleModuleNetwork) Update

func (m *SimpleModuleNetwork) Update(counter int) (err error)

Update updates the hidden weights. The counter can count epochs or updates; I found counting updates works best.

type SoftmaxAlgo

type SoftmaxAlgo struct {
	gocudnn.SoftMaxAlgorithm
}

SoftmaxAlgo determines which algorithm to use for softmax

func (*SoftmaxAlgo) Accurate

func (s *SoftmaxAlgo) Accurate() SoftmaxAlgo

Accurate sets and returns the Accurate flag

func (*SoftmaxAlgo) Fast

func (s *SoftmaxAlgo) Fast() SoftmaxAlgo

Fast sets and returns the Fast flag

func (*SoftmaxAlgo) Log

func (s *SoftmaxAlgo) Log() SoftmaxAlgo

Log sets and returns the Log flag

type SoftmaxMode

type SoftmaxMode struct {
	gocudnn.SoftMaxMode
}

SoftmaxMode determines which mode to use for softmax

func (*SoftmaxMode) Channel

func (s *SoftmaxMode) Channel() SoftmaxMode

Channel sets and returns the Channel flag

func (*SoftmaxMode) Instance

func (s *SoftmaxMode) Instance() SoftmaxMode

Instance sets and returns the Instance flag

type Stream

type Stream struct {
	*cudart.Stream
}

Stream is a stream for GPU instructions

func CreateStream

func CreateStream() (s *Stream, err error)

CreateStream creates a stream

type Tensor

type Tensor struct {
	*layers.Tensor
}

Tensor contains 2 tensors, x and dx. Input IOs will contain only the x tensor.

type TensorFormat

type TensorFormat struct {
	gocudnn.TensorFormat
}

TensorFormat struct wrapper for gocudnn.TensorFormat. Look up methods in gocudnn.

func (*TensorFormat) NCHW

func (t *TensorFormat) NCHW() TensorFormat

NCHW sets and returns the NCHW flag

func (*TensorFormat) NCHWvectC

func (t *TensorFormat) NCHWvectC() TensorFormat

NCHWvectC sets and returns the NCHWvectC flag

func (*TensorFormat) NHWC

func (t *TensorFormat) NHWC() TensorFormat

NHWC sets and returns the NHWC flag

func (*TensorFormat) Unknown

func (t *TensorFormat) Unknown() TensorFormat

Unknown sets and returns the Unknown flag

type Updater

type Updater interface {
	Update() error
}

Updater is the interface for the update operation

type VanillaModule

type VanillaModule struct {
	// contains filtered or unexported fields
}

VanillaModule has a convolution and an activation

func CreateVanillaModule

func CreateVanillaModule(id int64, bldr *Builder, batch int32, fdims, pad, stride, dilation []int32, balpha, bbeta, falpha, fbeta float64) (m *VanillaModule, err error)

CreateVanillaModule creates a vanilla module

func (*VanillaModule) Backward

func (m *VanillaModule) Backward() error

Backward satisfies module interface

func (*VanillaModule) FindOutputDims

func (m *VanillaModule) FindOutputDims() ([]int32, error)

FindOutputDims satisfies the module interface

func (*VanillaModule) Forward

func (m *VanillaModule) Forward() error

Forward satisfies module interface

func (*VanillaModule) GetTensorDX

func (m *VanillaModule) GetTensorDX() (dx *Tensor)

GetTensorDX returns set dx tensor

func (*VanillaModule) GetTensorDY

func (m *VanillaModule) GetTensorDY() (dy *Tensor)

GetTensorDY returns set dy tensor

func (*VanillaModule) GetTensorX

func (m *VanillaModule) GetTensorX() (x *Tensor)

GetTensorX returns set x tensor

func (*VanillaModule) GetTensorY

func (m *VanillaModule) GetTensorY() (y *Tensor)

GetTensorY returns set y tensor

func (*VanillaModule) ID

func (m *VanillaModule) ID() int64

ID satisfies module interface

func (*VanillaModule) Inference

func (m *VanillaModule) Inference() error

Inference satisfies module interface

func (*VanillaModule) InitHiddenLayers

func (m *VanillaModule) InitHiddenLayers(rate, decay1, decay2 float32) (err error)

InitHiddenLayers will init the hidden operation

func (*VanillaModule) InitWorkspace

func (m *VanillaModule) InitWorkspace() (err error)

InitWorkspace inits the workspace

func (*VanillaModule) SetTensorDX

func (m *VanillaModule) SetTensorDX(dx *Tensor)

SetTensorDX sets dx tensor

func (*VanillaModule) SetTensorDY

func (m *VanillaModule) SetTensorDY(dy *Tensor)

SetTensorDY sets dy tensor

func (*VanillaModule) SetTensorX

func (m *VanillaModule) SetTensorX(x *Tensor)

SetTensorX sets x tensor

func (*VanillaModule) SetTensorY

func (m *VanillaModule) SetTensorY(y *Tensor)

SetTensorY sets y tensor

func (*VanillaModule) Update

func (m *VanillaModule) Update(epoch int) error

Update satisfies the module interface

type Worker

type Worker struct {
	*gocu.Worker
}

Worker is a wrapper for *gocu.Worker; it assigns a locked host thread to a device. A device can have more than one worker.

func CreateWorker

func CreateWorker(d Device) (w *Worker)

CreateWorker assigns a locked host thread to a device.

Directories

Path                        Synopsis
devices                     Package devices is kind of a pipe dream.
gpu/nvidia/cudnn            Package cudnn takes the descriptors from gocudnn (which is from cudnn) and separates them into separate packages.
gpu/nvidia/cudnn/softmax    Package softmax uses the softmax functions from gocudnn, which is from cudnn.
gpu/nvidia/cudnn/tensor     Package tensor is used to make tensors by using gocudnn.
gpu/nvidia/nvutil           Package nvutil contains functions that use the other nvidia packages and allows them to be used with each other.
layers                      Package layers contains shared things between layers.
cnn                         Package cnn contains structs and methods used to do forward and backward operations for convolution layers.
trainer                     Package trainer is a package that is used for training networks.
ui
