Documentation ¶
Overview ¶
Package axon provides the basic reference axon implementation, using spiking neurons and biologically based, error-driven learning. Specialized variants such as deep predictive learning, PVLV, and PBWM build on this core implementation.
The overall design seeks an "optimal" tradeoff between simplicity, transparency, the ability to flexibly recombine and extend elements, and avoiding unnecessary rewriting of shared code.
The *Stru elements handle the core structural components of the network, and hold emer.* interface pointers to elements such as emer.Layer, which provides a very minimal interface for these elements. Interface values already hold pointers, so think of these as generic pointers to your specific Layers, etc.
This design means the same *Stru infrastructure can be reused across different variants of the algorithm. Because this infrastructure is kept minimal and algorithm-free, it should be much less confusing than the multiple levels of inheritance in C++ emergent. The actual algorithm-specific code is now fully self-contained and largely orthogonal to the infrastructure.
One specific cost of this is the need to cast the emer.* interface pointers into the specific types of interest, when accessing via the *Stru infrastructure.
The *Params elements contain all the (meta)parameters and associated methods for computing various functions. They are the equivalent of Specs from the original C++ emergent, but unlike Specs they are local to each place they are used, and styling is used to apply common parameters across multiple layers, etc. Params is a more explicit, recognizable name than Specs, and it also helps avoid confusion with the rather different nature of the old Specs. Pars is shorter but confusable with "Parents", so Params is unambiguous.
Params are organized into four major categories, labeled functionally rather than just structurally, to keep things clearer and better organized overall (a hedged example of applying them via styling follows this list):

- ActParams -- activation params, at the Neuron level (in act.go)
- InhibParams -- inhibition params, at the Layer / Pool level (in inhib.go)
- LearnNeurParams -- learning parameters at the Neuron level (running averages that drive learning)
- LearnSynParams -- learning parameters at the Synapse level (both in learn.go)
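The sketch below shows how such params are typically applied via styling, using the emergent params package (params.Sheet / params.Sel) and the ApplyParams method listed in the index. The selector classes, parameter paths, and values are illustrative assumptions, not recommended defaults.

```go
package sim

import (
	"log"

	"github.com/emer/axon/axon"
	"github.com/emer/emergent/params"
)

// applyBaseParams applies parameters by styling: each Sel matches layers or
// projections by name, type, or class, and sets fields on the *Params structs.
// The paths and values below are assumptions for illustration only.
func applyBaseParams(net *axon.Network) {
	sheet := params.Sheet{
		{Sel: "Layer", Desc: "defaults for all layers",
			Params: params.Params{
				"Layer.Inhib.Layer.Gi": "1.1", // InhibParams at the Layer level (illustrative)
			}},
		{Sel: ".Hidden", Desc: "layers with class Hidden",
			Params: params.Params{
				"Layer.Act.Decay.Act": "0.2", // ActParams field (path is an assumption)
			}},
	}
	if _, err := net.ApplyParams(&sheet, true); err != nil { // setMsg=true reports each application
		log.Println(err)
	}
}
```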
The levels of structure and state are as follows (a sketch of navigating them follows the list):

- Network
  - .Layers
    - .Pools: pooled inhibition state -- 1 for the layer plus 1 for each sub-pool (unit group) with inhibition
    - .RecvPrjns: receiving projections from other sending layers
    - .SendPrjns: sending projections to other receiving layers
    - .Neurons: neuron state variables
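A minimal sketch of walking down these levels via the accessors listed in the index below; the layer name, 2D index, and variable name are assumptions for illustration.

```go
package sim

import "github.com/emer/axon/axon"

// inspectLayer navigates from the Network down to Layer, Pool, Prjn, and
// neuron-variable levels. "Hidden" and the indexes are illustrative only.
func inspectLayer(net *axon.Network) {
	ly := net.AxonLayerByName("Hidden") // *axon.Layer
	lpl := ly.Pool(0, 0)                // pool 0 = whole-layer pool, data index 0
	_ = lpl
	for i := 0; i < ly.NRecvPrjns(); i++ {
		pj := ly.RecvPrjn(i) // emer.Prjn interface; cast to *axon.Prjn for axon-specific access
		_ = pj.Name()
	}
	act := ly.UnitVal("Act", []int{0, 0}, 0) // one neuron state variable, data index 0
	_ = act
}
```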
There are methods on the Network that perform initialization and overall computation, by iterating over layers and calling methods there. This is typically how most users will run their models.
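For example, here is a hedged sketch of one trial written out by hand using the Network and Context methods listed in the index (most models instead drive this via the Looper helpers such as LooperStdPhases and LooperSimCycleAndLearn). The 150 / 50 minus / plus cycle split and the exact call ordering are assumptions based on the usual axon conventions.

```go
package sim

import (
	"github.com/emer/axon/axon"
	"github.com/emer/emergent/etime"
)

// runTrial runs one trial: 150 minus-phase cycles, 50 plus-phase cycles,
// then learning. External inputs are assumed to have been applied already
// (e.g., via Layer.ApplyExt and Network.ApplyExts).
func runTrial(net *axon.Network, ctx *axon.Context) {
	ctx.NewState(etime.Train) // reset per-trial state for this mode
	net.NewState(ctx)         // decay layer state for the new trial
	for cyc := 0; cyc < 150; cyc++ { // minus phase
		net.Cycle(ctx)
		ctx.CycleInc()
	}
	net.MinusPhase(ctx)
	net.PlusPhaseStart(ctx)
	for cyc := 0; cyc < 50; cyc++ { // plus phase
		net.Cycle(ctx)
		ctx.CycleInc()
	}
	net.PlusPhase(ctx)
	net.DWt(ctx)     // weight changes from minus vs. plus phase activity
	net.WtFmDWt(ctx) // apply weight changes
}
```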
Parallel computation across multiple CPU cores (threading) is achieved through persistent worker goroutines that listen for functions to run on thread-specific channels. Each layer has a designated thread number, so you can experiment with different ways of dividing up the computation. Timing data is kept for per-thread time use -- see TimerReport() on the network.
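A small sketch of configuring threading and getting the timing report, using SetNThreads, SetThread, and TimerReport from the index below; the layer names are hypothetical.

```go
package sim

import "github.com/emer/axon/axon"

// configThreads assigns layers to threads and reports timing after a run.
func configThreads(net *axon.Network) {
	net.SetNThreads(4)                          // number of worker goroutines for CPU computation
	net.AxonLayerByName("Hidden1").SetThread(1) // hypothetical layer names
	net.AxonLayerByName("Hidden2").SetThread(2)
	// ... run the model ...
	net.TimerReport() // prints the per-function / per-thread timing summary
}
```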
The Layer methods directly iterate over Neurons, Pools, and Prjns, and there is no finer-grained level of computation (e.g., at the individual Neuron level), except for the *Params methods that directly compute the relevant functions. Thus, looking directly at the layer.go code should provide a clear sense of exactly how everything is computed -- you may need to refer to act.go, learn.go, etc. to see the relevant details, but at least the overall organization should be clear in layer.go.
Computational methods are generally named VarFmVar, to specify which variable is computed from which input variables; e.g., SpikeFmG computes activation from the conductances G.
The Pools (type Pool, in pool.go) hold state used for computing pooled inhibition, but are also used to hold overall aggregate pooled state variables -- the first element in Pools applies to the layer as a whole, and subsequent ones are for each sub-pool (4D layers). These pools play the same role as the AxonUnGpState structures in C++ emergent.
Prjns directly support all synapse-level computation, and hold the LearnSynParams and iterate directly over all of their synapses. It is the exact same Prjn object that lives in the RecvPrjns of the receiver-side, and the SendPrjns of the sender-side, and it maintains and coordinates both sides of the state. This clarifies and simplifies a lot of code. There is no separate equivalent of AxonConSpec / AxonConState at the level of connection groups per unit per projection.
The pattern of connectivity between units is specified by the prjn.Pattern interface, and all of the standard options are available in the prjn package. The Pattern code generates a full tensor bitmap of binary 1's and 0's for connected (1's) and not-connected (0's) units, and can use any method to do so. This full lookup-table approach is not the most memory-efficient, but it is fully general and shouldn't be too bad memory-wise overall (fully bit-packed arrays are used, and these bitmaps don't need to be retained once connections have been established). This approach lets each pattern focus purely on specifying connectivity, without any concern for how it is used to allocate actual connections.
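For example, a hedged sketch of adding layers and connecting them with patterns from the prjn package, using the ConnectLayers and BidirConnectLayers methods listed in the index; the layer names, sizes, and type constants are illustrative.

```go
package sim

import (
	"github.com/emer/axon/axon"
	"github.com/emer/emergent/prjn"
)

// buildNet adds a few layers and wires them with prjn.Pattern generators.
func buildNet(net *axon.Network) {
	in := net.AddLayer2D("Input", 5, 5, axon.InputLayer)
	hid := net.AddLayer2D("Hidden", 10, 10, axon.SuperLayer)
	out := net.AddLayer2D("Output", 5, 5, axon.TargetLayer)

	full := prjn.NewFull() // every sending unit connects to every receiving unit
	net.ConnectLayers(in, hid, full, axon.ForwardPrjn)
	net.BidirConnectLayers(hid, out, full) // forward and back projections
}
```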
Index ¶
- Constants
- Variables
- func AddGlbDrvV(ctx *Context, di uint32, drIdx uint32, gvar GlobalVars, val float32)
- func AddGlbV(ctx *Context, di uint32, gvar GlobalVars, val float32)
- func AddNrnAvgV(ctx *Context, ni uint32, nvar NeuronAvgVars, val float32)
- func AddNrnV(ctx *Context, ni, di uint32, nvar NeuronVars, val float32)
- func AddSynCaV(ctx *Context, syni, di uint32, svar SynapseCaVars, val float32)
- func AddSynV(ctx *Context, syni uint32, svar SynapseVars, val float32)
- func DecaySynCa(ctx *Context, syni, di uint32, decay float32)
- func DriveVarToZero(ctx *Context, di uint32, gvar GlobalVars)
- func DrivesAddTo(ctx *Context, di uint32, drv uint32, delta float32) float32
- func DrivesEffectiveDrive(ctx *Context, di uint32, i uint32) float32
- func DrivesExpStep(ctx *Context, di uint32, drv uint32, dt, base float32) float32
- func DrivesExpStepAll(ctx *Context, di uint32)
- func DrivesSoftAdd(ctx *Context, di uint32, drv uint32, delta float32) float32
- func DrivesToBaseline(ctx *Context, di uint32)
- func DrivesToZero(ctx *Context, di uint32)
- func EffortAddEffort(ctx *Context, di uint32, inc float32)
- func EffortDiscFmEffort(ctx *Context, di uint32) float32
- func EffortGiveUp(ctx *Context, di uint32) bool
- func EffortReset(ctx *Context, di uint32)
- func GetRandomNumber(index uint32, counter slrand.Counter, funIdx RandFunIdx) float32
- func GlbDrvV(ctx *Context, di uint32, drIdx uint32, gvar GlobalVars) float32
- func GlbUSneg(ctx *Context, di uint32, negIdx uint32) float32
- func GlbV(ctx *Context, di uint32, gvar GlobalVars) float32
- func GlbVTA(ctx *Context, di uint32, vtaType GlobalVTAType, gvar GlobalVars) float32
- func HashEncodeSlice(slice []float32) string
- func InitSynCa(ctx *Context, syni, di uint32)
- func IsExtLayerType(lt LayerTypes) bool
- func JsonToParams(b []byte) string
- func LHbFmPVVS(ctx *Context, di uint32, pvPos, pvNeg, vsPatchPos float32)
- func LHbReset(ctx *Context, di uint32)
- func LHbShouldGiveUp(ctx *Context, di uint32) bool
- func LayerActsLog(net *Network, lg *elog.Logs, di int, gui *egui.GUI)
- func LayerActsLogAvg(net *Network, lg *elog.Logs, gui *egui.GUI, recReset bool)
- func LayerActsLogConfig(net *Network, lg *elog.Logs)
- func LayerActsLogConfigGUI(lg *elog.Logs, gui *egui.GUI)
- func LayerActsLogConfigMetaData(dt *etable.Table)
- func LayerActsLogRecReset(lg *elog.Logs)
- func LogAddCaLrnDiagnosticItems(lg *elog.Logs, mode etime.Modes, net *Network, times ...etime.Times)
- func LogAddDiagnosticItems(lg *elog.Logs, layerNames []string, mode etime.Modes, times ...etime.Times)
- func LogAddExtraDiagnosticItems(lg *elog.Logs, mode etime.Modes, net *Network, times ...etime.Times)
- func LogAddLayerGeActAvgItems(lg *elog.Logs, net *Network, mode etime.Modes, etm etime.Times)
- func LogAddPCAItems(lg *elog.Logs, net *Network, mode etime.Modes, times ...etime.Times)
- func LogAddPulvCorSimItems(lg *elog.Logs, net *Network, mode etime.Modes, times ...etime.Times)
- func LogInputLayer(lg *elog.Logs, net *Network, mode etime.Modes)
- func LogTestErrors(lg *elog.Logs)
- func LooperResetLogBelow(man *looper.Manager, logs *elog.Logs, except ...etime.Times)
- func LooperSimCycleAndLearn(man *looper.Manager, net *Network, ctx *Context, viewupdt *netview.ViewUpdt, ...)
- func LooperStdPhases(man *looper.Manager, ctx *Context, net *Network, plusStart, plusEnd int, ...)
- func LooperUpdtNetView(man *looper.Manager, viewupdt *netview.ViewUpdt, net *Network)
- func LooperUpdtPlots(man *looper.Manager, gui *egui.GUI)
- func MulNrnAvgV(ctx *Context, ni uint32, nvar NeuronAvgVars, val float32)
- func MulNrnV(ctx *Context, ni, di uint32, nvar NeuronVars, val float32)
- func MulSynCaV(ctx *Context, syni, di uint32, svar SynapseCaVars, val float32)
- func MulSynV(ctx *Context, syni uint32, svar SynapseVars, val float32)
- func NeuroModInit(ctx *Context, di uint32)
- func NeuroModSetRew(ctx *Context, di uint32, rew float32, hasRew bool)
- func NeuronVarIdxByName(varNm string) (int, error)
- func NrnAvgV(ctx *Context, ni uint32, nvar NeuronAvgVars) float32
- func NrnClearFlag(ctx *Context, ni, di uint32, flag NeuronFlags)
- func NrnHasFlag(ctx *Context, ni, di uint32, flag NeuronFlags) bool
- func NrnI(ctx *Context, ni uint32, idx NeuronIdxs) uint32
- func NrnIsOff(ctx *Context, ni uint32) bool
- func NrnSetFlag(ctx *Context, ni, di uint32, flag NeuronFlags)
- func NrnV(ctx *Context, ni, di uint32, nvar NeuronVars) float32
- func PCAStats(net *Network, lg *elog.Logs, stats *estats.Stats)
- func PVLVDA(ctx *Context, di uint32) float32
- func PVLVDAImpl(ctx *Context, di uint32, ach float32, hasRew bool) float32
- func PVLVDriveUpdt(ctx *Context, di uint32)
- func PVLVHasPosUS(ctx *Context, di uint32) bool
- func PVLVInitDrives(ctx *Context, di uint32)
- func PVLVInitUS(ctx *Context, di uint32)
- func PVLVNegPV(ctx *Context, di uint32) float32
- func PVLVNetPV(ctx *Context, di uint32) float32
- func PVLVNewState(ctx *Context, di uint32, hadRew bool)
- func PVLVPosPV(ctx *Context, di uint32) float32
- func PVLVPosPVFmDriveEffort(ctx *Context, usValue, drive, effort float32) float32
- func PVLVReset(ctx *Context, di uint32)
- func PVLVSetDrive(ctx *Context, di uint32, dr uint32, val float32)
- func PVLVUSStimVal(ctx *Context, di uint32, usIdx uint32, valence ValenceTypes) float32
- func PVLVUrgencyUpdt(ctx *Context, di uint32, effort float32)
- func PVLVVSPatchMax(ctx *Context, di uint32) float32
- func ParallelChunkRun(fun func(st, ed int), total int, nThreads int)
- func ParallelRun(fun func(st, ed uint32), total uint32, nThreads int)
- func SaveWeights(net *Network, ctrString, runName string) string
- func SaveWeightsIfArgSet(net *Network, args *ecmd.Args, ctrString, runName string) string
- func SaveWeightsIfConfigSet(net *Network, cfgWts bool, ctrString, runName string) string
- func SetAvgMaxFloatFromIntErr(fun func())
- func SetGlbDrvV(ctx *Context, di uint32, drIdx uint32, gvar GlobalVars, val float32)
- func SetGlbUSneg(ctx *Context, di uint32, negIdx uint32, val float32)
- func SetGlbV(ctx *Context, di uint32, gvar GlobalVars, val float32)
- func SetGlbVTA(ctx *Context, di uint32, vtaType GlobalVTAType, gvar GlobalVars, val float32)
- func SetNeuronExtPosNeg(ctx *Context, ni, di uint32, val float32)
- func SetNrnAvgV(ctx *Context, ni uint32, nvar NeuronAvgVars, val float32)
- func SetNrnI(ctx *Context, ni uint32, idx NeuronIdxs, val uint32)
- func SetNrnV(ctx *Context, ni, di uint32, nvar NeuronVars, val float32)
- func SetSynCaV(ctx *Context, syni, di uint32, svar SynapseCaVars, val float32)
- func SetSynI(ctx *Context, syni uint32, idx SynapseIdxs, val uint32)
- func SetSynV(ctx *Context, syni uint32, svar SynapseVars, val float32)
- func SigFun(w, gain, off float32) float32
- func SigFun61(w float32) float32
- func SigInvFun(w, gain, off float32) float32
- func SigInvFun61(w float32) float32
- func SynCaV(ctx *Context, syni, di uint32, svar SynapseCaVars) float32
- func SynI(ctx *Context, syni uint32, idx SynapseIdxs) uint32
- func SynV(ctx *Context, syni uint32, svar SynapseVars) float32
- func SynapseVarByName(varNm string) (int, error)
- func ToggleLayersOff(net *Network, layerNames []string, off bool)
- func USnegToZero(ctx *Context, di uint32)
- func UrgeFmUrgency(ctx *Context, di uint32) float32
- func UrgencyAddEffort(ctx *Context, di uint32, inc float32)
- func UrgencyReset(ctx *Context, di uint32)
- func VTADAFmRaw(ctx *Context, di uint32, ach float32, hasRew bool)
- func VTAReset(ctx *Context, di uint32)
- func VTAZeroVals(ctx *Context, di uint32, vtaType GlobalVTAType)
- func WeightsFileName(net *Network, ctrString, runName string) string
- type ActAvgParams
- type ActAvgVals
- type ActInitParams
- type ActParams
- func (ac *ActParams) AddGeNoise(ctx *Context, ni, di uint32)
- func (ac *ActParams) AddGiNoise(ctx *Context, ni, di uint32)
- func (ac *ActParams) DecayAHP(ctx *Context, ni, di uint32, decay float32)
- func (ac *ActParams) DecayLearnCa(ctx *Context, ni, di uint32, decay float32)
- func (ac *ActParams) DecayState(ctx *Context, ni, di uint32, decay, glong, ahp float32)
- func (ac *ActParams) Defaults()
- func (ac *ActParams) GSkCaFmCa(ctx *Context, ni, di uint32)
- func (ac *ActParams) GeFmSyn(ctx *Context, ni, di uint32, geSyn, geExt float32)
- func (ac *ActParams) GiFmSyn(ctx *Context, ni, di uint32, giSyn float32) float32
- func (ac *ActParams) GkFmVm(ctx *Context, ni, di uint32)
- func (ac *ActParams) GvgccFmVm(ctx *Context, ni, di uint32)
- func (ac *ActParams) InetFmG(vm, ge, gl, gi, gk float32) float32
- func (ac *ActParams) InitActs(ctx *Context, ni, di uint32)
- func (ac *ActParams) InitLongActs(ctx *Context, ni, di uint32)
- func (ac *ActParams) KNaNewState(ctx *Context, ni, di uint32)
- func (ac *ActParams) MaintNMDAFmRaw(ctx *Context, ni, di uint32)
- func (ac *ActParams) NMDAFmRaw(ctx *Context, ni, di uint32, geTot float32)
- func (ac *ActParams) SpikeFmVm(ctx *Context, ni, di uint32)
- func (ac *ActParams) SpikeFmVmVars(nrnISI, nrnISIAvg, nrnSpike, nrnSpiked, nrnAct *float32, nrnVm float32)
- func (ac *ActParams) Update()
- func (ac *ActParams) VmFmG(ctx *Context, ni, di uint32)
- func (ac *ActParams) VmFmInet(vm, dt, inet float32) float32
- func (ac *ActParams) VmInteg(vm, dt, ge, gl, gi, gk float32, nvm, inet *float32)
- type AttnParams
- type AvgMaxI32
- func (am *AvgMaxI32) Calc(refIdx int32)
- func (am *AvgMaxI32) FloatFmIntFactor() float32
- func (am *AvgMaxI32) FloatFromInt(ival, refIdx int32) float32
- func (am *AvgMaxI32) FloatToInt(val float32) int32
- func (am *AvgMaxI32) FloatToIntFactor() float32
- func (am *AvgMaxI32) FloatToIntSum(val float32) int32
- func (am *AvgMaxI32) Init()
- func (am *AvgMaxI32) String() string
- func (am *AvgMaxI32) UpdateVal(val float32)
- func (am *AvgMaxI32) Zero()
- type AvgMaxPhases
- type AxonLayer
- type AxonNetwork
- type AxonPrjn
- type AxonPrjns
- type BLAPrjnParams
- type BurstParams
- type CTParams
- type CaLrnParams
- type CaSpkParams
- type ClampParams
- type Context
- func (ctx *Context) CopyNetStridesFrom(srcCtx *Context)
- func (ctx *Context) CycleInc()
- func (ctx *Context) Defaults()
- func (ctx *Context) GlobalDriveIdx(di uint32, drIdx uint32, gvar GlobalVars) uint32
- func (ctx *Context) GlobalIdx(di uint32, gvar GlobalVars) uint32
- func (ctx *Context) GlobalUSnegIdx(di uint32, negIdx uint32) uint32
- func (ctx *Context) GlobalVNFloats() uint32
- func (ctx *Context) GlobalVTAIdx(di uint32, vtaType GlobalVTAType, gvar GlobalVars) uint32
- func (ctx *Context) NewPhase(plusPhase bool)
- func (ctx *Context) NewState(mode etime.Modes)
- func (ctx *Context) PVLVInitUS(di uint32)
- func (ctx *Context) PVLVSetDrives(di uint32, curiosity, magnitude float32, drives ...int)
- func (ctx *Context) PVLVSetUS(di uint32, valence ValenceTypes, usIdx int, magnitude float32)
- func (ctx *Context) PVLVShouldGiveUp(di uint32, rnd erand.Rand)
- func (ctx *Context) PVLVStepStart(di uint32, rnd erand.Rand)
- func (ctx *Context) Reset()
- func (ctx *Context) SetGlobalStrides()
- func (ctx *Context) SlowInc() bool
- type CorSimStats
- type DAModTypes
- type DecayParams
- type DendParams
- type DriveVals
- type Drives
- type DtParams
- func (dp *DtParams) AvgVarUpdt(avg, vr *float32, val float32)
- func (dp *DtParams) Defaults()
- func (dp *DtParams) GeSynFmRaw(geSyn, geRaw float32) float32
- func (dp *DtParams) GeSynFmRawSteady(geRaw float32) float32
- func (dp *DtParams) GiSynFmRaw(giSyn, giRaw float32) float32
- func (dp *DtParams) GiSynFmRawSteady(giRaw float32) float32
- func (dp *DtParams) Update()
- type Effort
- type GPLayerTypes
- type GPParams
- type GPU
- func (gp *GPU) Config(ctx *Context, net *Network)
- func (gp *GPU) ConfigSynCaBuffs()
- func (gp *GPU) CopyContextFmStaging()
- func (gp *GPU) CopyContextToStaging()
- func (gp *GPU) CopyExtsToStaging()
- func (gp *GPU) CopyGBufToStaging()
- func (gp *GPU) CopyIdxsToStaging()
- func (gp *GPU) CopyLayerStateFmStaging()
- func (gp *GPU) CopyLayerValsFmStaging()
- func (gp *GPU) CopyLayerValsToStaging()
- func (gp *GPU) CopyNeuronsFmStaging()
- func (gp *GPU) CopyNeuronsToStaging()
- func (gp *GPU) CopyParamsToStaging()
- func (gp *GPU) CopyPoolsFmStaging()
- func (gp *GPU) CopyPoolsToStaging()
- func (gp *GPU) CopyStateFmStaging()
- func (gp *GPU) CopyStateToStaging()
- func (gp *GPU) CopySynCaFmStaging()
- func (gp *GPU) CopySynCaToStaging()
- func (gp *GPU) CopySynapsesFmStaging()
- func (gp *GPU) CopySynapsesToStaging()
- func (gp *GPU) Destroy()
- func (gp *GPU) RunApplyExts()
- func (gp *GPU) RunApplyExtsCmd() vk.CommandBuffer
- func (gp *GPU) RunCycle()
- func (gp *GPU) RunCycleOne()
- func (gp *GPU) RunCycleOneCmd() vk.CommandBuffer
- func (gp *GPU) RunCycleSeparateFuns()
- func (gp *GPU) RunCycles()
- func (gp *GPU) RunCyclesCmd() vk.CommandBuffer
- func (gp *GPU) RunDWt()
- func (gp *GPU) RunDWtCmd() vk.CommandBuffer
- func (gp *GPU) RunMinusPhase()
- func (gp *GPU) RunMinusPhaseCmd() vk.CommandBuffer
- func (gp *GPU) RunNewState()
- func (gp *GPU) RunNewStateCmd() vk.CommandBuffer
- func (gp *GPU) RunPipelineMemWait(cmd vk.CommandBuffer, name string, n int)
- func (gp *GPU) RunPipelineNoWait(cmd vk.CommandBuffer, name string, n int)
- func (gp *GPU) RunPipelineOffset(cmd vk.CommandBuffer, name string, n, off int)
- func (gp *GPU) RunPipelineWait(name string, n int)
- func (gp *GPU) RunPlusPhase()
- func (gp *GPU) RunPlusPhaseCmd() vk.CommandBuffer
- func (gp *GPU) RunPlusPhaseStart()
- func (gp *GPU) RunWtFmDWt()
- func (gp *GPU) RunWtFmDWtCmd() vk.CommandBuffer
- func (gp *GPU) SetContext(ctx *Context)
- func (gp *GPU) StartRun(cmd vk.CommandBuffer)
- func (gp *GPU) SynCaBuff(idx uint32) []float32
- func (gp *GPU) SynDataNs() (nCmd, nPer, nLast int)
- func (gp *GPU) SyncAllFmGPU()
- func (gp *GPU) SyncAllToGPU()
- func (gp *GPU) SyncContextFmGPU()
- func (gp *GPU) SyncContextToGPU()
- func (gp *GPU) SyncGBufToGPU()
- func (gp *GPU) SyncLayerStateFmGPU()
- func (gp *GPU) SyncLayerValsFmGPU()
- func (gp *GPU) SyncLayerValsToGPU()
- func (gp *GPU) SyncMemToGPU()
- func (gp *GPU) SyncNeuronsFmGPU()
- func (gp *GPU) SyncNeuronsToGPU()
- func (gp *GPU) SyncParamsToGPU()
- func (gp *GPU) SyncPoolsFmGPU()
- func (gp *GPU) SyncPoolsToGPU()
- func (gp *GPU) SyncRegionStruct(vnm string) vgpu.MemReg
- func (gp *GPU) SyncRegionSynCas(vnm string) vgpu.MemReg
- func (gp *GPU) SyncRegionSyns(vnm string) vgpu.MemReg
- func (gp *GPU) SyncStateFmGPU()
- func (gp *GPU) SyncStateGBufToGPU()
- func (gp *GPU) SyncStateToGPU()
- func (gp *GPU) SyncSynCaFmGPU()
- func (gp *GPU) SyncSynCaToGPU()
- func (gp *GPU) SyncSynapsesFmGPU()
- func (gp *GPU) SyncSynapsesToGPU()
- func (gp *GPU) TestSynCa() bool
- func (gp *GPU) TestSynCaCmd() vk.CommandBuffer
- type GScaleVals
- type GlobalVTAType
- type GlobalVars
- type HipConfig
- type HipPrjnParams
- type InhibParams
- type LDTParams
- func (lp *LDTParams) ACh(ctx *Context, di uint32, ...) float32
- func (lp *LDTParams) Defaults()
- func (lp *LDTParams) MaintFmNotMaint(notMaint float32) float32
- func (lp *LDTParams) MaxSrcAct(maxSrcAct, srcLayAct float32) float32
- func (lp *LDTParams) Thr(val float32) float32
- func (lp *LDTParams) Update()
- type LHb
- type LRateMod
- type LRateParams
- type LaySpecialVals
- type Layer
- func (ly *Layer) AdaptInhib(ctx *Context)
- func (ly *Layer) AllParams() string
- func (ly *Layer) AnyGated(di uint32) bool
- func (ly *Layer) ApplyExt(ctx *Context, di uint32, ext etensor.Tensor)
- func (ly *Layer) ApplyExt1D(ctx *Context, di uint32, ext []float64)
- func (ly *Layer) ApplyExt1D32(ctx *Context, di uint32, ext []float32)
- func (ly *Layer) ApplyExt1DTsr(ctx *Context, di uint32, ext etensor.Tensor)
- func (ly *Layer) ApplyExt2D(ctx *Context, di uint32, ext etensor.Tensor)
- func (ly *Layer) ApplyExt2Dto4D(ctx *Context, di uint32, ext etensor.Tensor)
- func (ly *Layer) ApplyExt4D(ctx *Context, di uint32, ext etensor.Tensor)
- func (ly *Layer) ApplyExtFlags() (clearMask, setMask NeuronFlags, toTarg bool)
- func (ly *Layer) ApplyExtVal(ctx *Context, lni, di uint32, val float32, clearMask, setMask NeuronFlags, ...)
- func (ly *Layer) AsAxon() *Layer
- func (ly *Layer) AvgDifFmTrgAvg(ctx *Context)
- func (ly *Layer) AvgMaxVarByPool(ctx *Context, varNm string, poolIdx, di int) minmax.AvgMax32
- func (ly *Layer) BGThalDefaults()
- func (ly *Layer) BLADefaults()
- func (ly *Layer) BetweenLayerGi(ctx *Context)
- func (ly *Layer) BetweenLayerGiMax(net *Network, di uint32, maxGi float32, layIdx int32) float32
- func (ly *Layer) CTDefParamsFast()
- func (ly *Layer) CTDefParamsLong()
- func (ly *Layer) CTDefParamsMedium()
- func (ly *Layer) CeMDefaults()
- func (ly *Layer) ClearTargExt(ctx *Context)
- func (ly *Layer) CorSimFmActs(ctx *Context)
- func (ly *Layer) CostEst() (neur, syn, tot int)
- func (ly *Layer) CycleNeuron(ctx *Context, ni uint32)
- func (ly *Layer) CyclePost(ctx *Context)
- func (ly *Layer) DTrgSubMean(ctx *Context)
- func (ly *Layer) DWt(ctx *Context, si uint32)
- func (ly *Layer) DWtSubMean(ctx *Context, ri uint32)
- func (ly *Layer) DecayState(ctx *Context, di uint32, decay, glong, ahp float32)
- func (ly *Layer) DecayStateLayer(ctx *Context, di uint32, decay, glong, ahp float32)
- func (ly *Layer) DecayStateNeuronsAll(ctx *Context, decay, glong, ahp float32)
- func (ly *Layer) DecayStatePool(ctx *Context, pool int, decay, glong, ahp float32)
- func (ly *Layer) Defaults()
- func (ly *Layer) GInteg(ctx *Context, ni, di uint32, pl *Pool, vals *LayerVals)
- func (ly *Layer) GPDefaults()
- func (ly *Layer) GPPostBuild()
- func (ly *Layer) GPiDefaults()
- func (ly *Layer) GatedFmSpkMax(di uint32, thr float32) (bool, int)
- func (ly *Layer) GatherSpikes(ctx *Context, ni uint32)
- func (ly *Layer) GiFmSpikes(ctx *Context)
- func (ly *Layer) HasPoolInhib() bool
- func (ly *Layer) InitActAvg(ctx *Context)
- func (ly *Layer) InitActAvgLayer(ctx *Context)
- func (ly *Layer) InitActAvgPools(ctx *Context)
- func (ly *Layer) InitActs(ctx *Context)
- func (ly *Layer) InitExt(ctx *Context)
- func (ly *Layer) InitGScale(ctx *Context)
- func (ly *Layer) InitPrjnGBuffs(ctx *Context)
- func (ly *Layer) InitWtSym(ctx *Context)
- func (ly *Layer) InitWts(ctx *Context, nt *Network)
- func (ly *Layer) LDTDefaults()
- func (ly *Layer) LDTPostBuild()
- func (ly *Layer) LDTSrcLayAct(net *Network, layIdx int32, di uint32) float32
- func (ly *Layer) LRateMod(mod float32)
- func (ly *Layer) LRateSched(sched float32)
- func (ly *Layer) LesionNeurons(prop float32) int
- func (ly *Layer) LocalistErr2D(ctx *Context) (err []bool, minusIdx, plusIdx []int)
- func (ly *Layer) LocalistErr4D(ctx *Context) (err []bool, minusIdx, plusIdx []int)
- func (ly *Layer) MatrixDefaults()
- func (ly *Layer) MatrixGated(ctx *Context)
- func (ly *Layer) MatrixPostBuild()
- func (ly *Layer) MinusPhase(ctx *Context)
- func (ly *Layer) MinusPhasePost(ctx *Context)
- func (ly *Layer) NewState(ctx *Context)
- func (ly *Layer) NewStateNeurons(ctx *Context)
- func (ly *Layer) Object() any
- func (ly *Layer) PTMaintDefaults()
- func (ly *Layer) PTNotMaintDefaults()
- func (ly *Layer) PVLVPostBuild()
- func (ly *Layer) PctUnitErr(ctx *Context) []float64
- func (ly *Layer) PlusPhase(ctx *Context)
- func (ly *Layer) PlusPhaseActAvg(ctx *Context)
- func (ly *Layer) PlusPhasePost(ctx *Context)
- func (ly *Layer) PlusPhaseStart(ctx *Context)
- func (ly *Layer) PoolGiFmSpikes(ctx *Context)
- func (ly *Layer) PostBuild()
- func (ly *Layer) PostSpike(ctx *Context, ni uint32)
- func (ly *Layer) PulvPostBuild()
- func (ly *Layer) PulvinarDriver(ctx *Context, lni, di uint32) (drvGe, nonDrvPct float32)
- func (ly *Layer) RWDaPostBuild()
- func (ly *Layer) ReadWtsJSON(r io.Reader) error
- func (ly *Layer) STNDefaults()
- func (ly *Layer) SendSpike(ctx *Context, ni uint32)
- func (ly *Layer) SetSubMean(trgAvg, prjn float32)
- func (ly *Layer) SetWts(lw *weights.Layer) error
- func (ly *Layer) SlowAdapt(ctx *Context)
- func (ly *Layer) SpikeFmG(ctx *Context, ni, di uint32)
- func (ly *Layer) SpkSt1(ctx *Context)
- func (ly *Layer) SpkSt2(ctx *Context)
- func (ly *Layer) SynCa(ctx *Context, ni uint32)
- func (ly *Layer) SynFail(ctx *Context)
- func (ly *Layer) TDDaPostBuild()
- func (ly *Layer) TDIntegPostBuild()
- func (ly *Layer) TargToExt(ctx *Context)
- func (ly *Layer) TestVals(ctrKey string, vals map[string]float32)
- func (ly *Layer) TrgAvgFmD(ctx *Context)
- func (ly *Layer) UnLesionNeurons()
- func (ly *Layer) Update()
- func (ly *Layer) UpdateExtFlags(ctx *Context)
- func (ly *Layer) UpdateParams()
- func (ly *Layer) VSPatchAdaptThr(ctx *Context)
- func (ly *Layer) WriteWtsJSON(w io.Writer, depth int)
- func (ly *Layer) WtFmDWt(ctx *Context, si uint32)
- func (ly *Layer) WtFmDWtLayer(ctx *Context)
- type LayerBase
- func (ly *LayerBase) AddClass(cls string)
- func (ly *LayerBase) ApplyDefParams()
- func (ly *LayerBase) ApplyParams(pars *params.Sheet, setMsg bool) (bool, error)
- func (ly *LayerBase) Build() error
- func (ly *LayerBase) BuildConfigByName(nm string) (string, error)
- func (ly *LayerBase) BuildConfigFindLayer(nm string, mustName bool) int32
- func (ly *LayerBase) BuildPools(ctx *Context, nn uint32) error
- func (ly *LayerBase) BuildPrjns(ctx *Context) error
- func (ly *LayerBase) BuildSubPools(ctx *Context)
- func (ly *LayerBase) Class() string
- func (ly *LayerBase) Config(shape []int, typ emer.LayerType)
- func (ly *LayerBase) Idx4DFrom2D(x, y int) ([]int, bool)
- func (ly *LayerBase) Index() int
- func (ly *LayerBase) InitName(lay emer.Layer, name string, net emer.Network)
- func (ly *LayerBase) Is2D() bool
- func (ly *LayerBase) Is4D() bool
- func (ly *LayerBase) IsOff() bool
- func (ly *LayerBase) Label() string
- func (ly *LayerBase) LayerType() LayerTypes
- func (ly *LayerBase) LayerVals(di uint32) *LayerVals
- func (ly *LayerBase) NRecvPrjns() int
- func (ly *LayerBase) NSendPrjns() int
- func (ly *LayerBase) NSubPools() int
- func (ly *LayerBase) Name() string
- func (ly *LayerBase) NeurStartIdx() int
- func (ly *LayerBase) NonDefaultParams() string
- func (ly *LayerBase) ParamsApplied(sel *params.Sel)
- func (ly *LayerBase) ParamsHistoryReset()
- func (ly *LayerBase) PlaceAbove(other *Layer)
- func (ly *LayerBase) PlaceBehind(other *Layer, space float32)
- func (ly *LayerBase) PlaceRightOf(other *Layer, space float32)
- func (ly *LayerBase) Pool(pi, di uint32) *Pool
- func (ly *LayerBase) Pos() mat32.Vec3
- func (ly *LayerBase) RecipToRecvPrjn(rpj *Prjn) (*Prjn, bool)
- func (ly *LayerBase) RecipToSendPrjn(spj *Prjn) (*Prjn, bool)
- func (ly *LayerBase) RecvName(receiver string) *Prjn
- func (ly *LayerBase) RecvNameTry(receiver string) (emer.Prjn, error)
- func (ly *LayerBase) RecvNameType(receiver, typ string) *Prjn
- func (ly *LayerBase) RecvNameTypeTry(receiver, typ string) (emer.Prjn, error)
- func (ly *LayerBase) RecvPrjn(idx int) emer.Prjn
- func (ly *LayerBase) RecvPrjnVals(vals *[]float32, varNm string, sendLay emer.Layer, sendIdx1D int, ...) error
- func (ly *LayerBase) RecvPrjns() *AxonPrjns
- func (ly *LayerBase) RelPos() relpos.Rel
- func (ly *LayerBase) RepIdxs() []int
- func (ly *LayerBase) RepShape() *etensor.Shape
- func (ly *LayerBase) SendName(sender string) *Prjn
- func (ly *LayerBase) SendNameTry(sender string) (emer.Prjn, error)
- func (ly *LayerBase) SendNameType(sender, typ string) *Prjn
- func (ly *LayerBase) SendNameTypeTry(sender, typ string) (emer.Prjn, error)
- func (ly *LayerBase) SendPrjn(idx int) emer.Prjn
- func (ly *LayerBase) SendPrjnVals(vals *[]float32, varNm string, recvLay emer.Layer, recvIdx1D int, ...) error
- func (ly *LayerBase) SendPrjns() *AxonPrjns
- func (ly *LayerBase) SetBuildConfig(param, val string)
- func (ly *LayerBase) SetClass(cls string)
- func (ly *LayerBase) SetIndex(idx int)
- func (ly *LayerBase) SetName(nm string)
- func (ly *LayerBase) SetOff(off bool)
- func (ly *LayerBase) SetPos(pos mat32.Vec3)
- func (ly *LayerBase) SetRelPos(rel relpos.Rel)
- func (ly *LayerBase) SetRepIdxsShape(idxs, shape []int)
- func (ly *LayerBase) SetShape(shape []int)
- func (ly *LayerBase) SetThread(thr int)
- func (ly *LayerBase) SetType(typ emer.LayerType)
- func (ly *LayerBase) Shape() *etensor.Shape
- func (ly *LayerBase) Size() mat32.Vec2
- func (ly *LayerBase) SubPool(ctx *Context, ni, di uint32) *Pool
- func (ly *LayerBase) Thread() int
- func (ly *LayerBase) Type() emer.LayerType
- func (ly *LayerBase) TypeName() string
- func (ly *LayerBase) UnitVal(varNm string, idx []int, di int) float32
- func (ly *LayerBase) UnitVal1D(varIdx int, idx, di int) float32
- func (ly *LayerBase) UnitVals(vals *[]float32, varNm string, di int) error
- func (ly *LayerBase) UnitValsRepTensor(tsr etensor.Tensor, varNm string, di int) error
- func (ly *LayerBase) UnitValsTensor(tsr etensor.Tensor, varNm string, di int) error
- func (ly *LayerBase) UnitVarIdx(varNm string) (int, error)
- func (ly *LayerBase) UnitVarNames() []string
- func (ly *LayerBase) UnitVarNum() int
- func (ly *LayerBase) UnitVarProps() map[string]string
- func (ly *LayerBase) VarRange(varNm string) (min, max float32, err error)
- type LayerIdxs
- type LayerInhibIdxs
- type LayerParams
- func (ly *LayerParams) AllParams() string
- func (ly *LayerParams) ApplyExtFlags(clearMask, setMask *NeuronFlags, toTarg *bool)
- func (ly *LayerParams) ApplyExtVal(ctx *Context, ni, di uint32, val float32)
- func (ly *LayerParams) AvgGeM(ctx *Context, vals *LayerVals, geIntMinusMax, giIntMinusMax float32)
- func (ly *LayerParams) CTDefaults()
- func (ly *LayerParams) CyclePostCeMLayer(ctx *Context, di uint32, lpl *Pool)
- func (ly *LayerParams) CyclePostLDTLayer(ctx *Context, di uint32, vals *LayerVals, ...)
- func (ly *LayerParams) CyclePostLayer(ctx *Context, di uint32, lpl *Pool, vals *LayerVals)
- func (ly *LayerParams) CyclePostPTNotMaintLayer(ctx *Context, di uint32, lpl *Pool)
- func (ly *LayerParams) CyclePostRWDaLayer(ctx *Context, di uint32, vals *LayerVals, pvals *LayerVals)
- func (ly *LayerParams) CyclePostTDDaLayer(ctx *Context, di uint32, vals *LayerVals, ivals *LayerVals)
- func (ly *LayerParams) CyclePostTDIntegLayer(ctx *Context, di uint32, vals *LayerVals, pvals *LayerVals)
- func (ly *LayerParams) CyclePostTDPredLayer(ctx *Context, di uint32, vals *LayerVals)
- func (ly *LayerParams) CyclePostVSPatchLayer(ctx *Context, di uint32, pi int32, pl *Pool, vals *LayerVals)
- func (ly *LayerParams) CyclePostVTALayer(ctx *Context, di uint32)
- func (ly *LayerParams) Defaults()
- func (ly *LayerParams) DrivesDefaults()
- func (ly *LayerParams) EffortDefaults()
- func (ly *LayerParams) GFmRawSyn(ctx *Context, ni, di uint32)
- func (ly *LayerParams) GNeuroMod(ctx *Context, ni, di uint32, vals *LayerVals)
- func (ly *LayerParams) GatherSpikesInit(ctx *Context, ni, di uint32)
- func (ly *LayerParams) GiInteg(ctx *Context, ni, di uint32, pl *Pool, vals *LayerVals)
- func (ly *LayerParams) InitExt(ctx *Context, ni, di uint32)
- func (ly *LayerParams) IsInput() bool
- func (ly *LayerParams) IsInputOrTarget() bool
- func (ly *LayerParams) IsLearnTrgAvg() bool
- func (ly *LayerParams) IsTarget() bool
- func (ly *LayerParams) LayPoolGiFmSpikes(ctx *Context, lpl *Pool, vals *LayerVals)
- func (ly *LayerParams) LearnTrgAvgErrLRate() float32
- func (ly *LayerParams) MinusPhaseNeuron(ctx *Context, ni, di uint32, pl *Pool, lpl *Pool, vals *LayerVals)
- func (ly *LayerParams) MinusPhasePool(ctx *Context, pl *Pool)
- func (ly *LayerParams) NewStateLayer(ctx *Context, lpl *Pool, vals *LayerVals)
- func (ly *LayerParams) NewStateLayerActAvg(ctx *Context, vals *LayerVals, actMinusAvg, actPlusAvg float32)
- func (ly *LayerParams) NewStateNeuron(ctx *Context, ni, di uint32, vals *LayerVals)
- func (ly *LayerParams) NewStatePool(ctx *Context, pl *Pool)
- func (ly *LayerParams) PTPredDefaults()
- func (ly *LayerParams) PVDefaults()
- func (ly *LayerParams) PlusPhaseNeuron(ctx *Context, ni, di uint32, pl *Pool, lpl *Pool, vals *LayerVals)
- func (ly *LayerParams) PlusPhaseNeuronSpecial(ctx *Context, ni, di uint32, pl *Pool, lpl *Pool, vals *LayerVals)
- func (ly *LayerParams) PlusPhasePool(ctx *Context, pl *Pool)
- func (ly *LayerParams) PlusPhaseStartNeuron(ctx *Context, ni, di uint32, pl *Pool, lpl *Pool, vals *LayerVals)
- func (ly *LayerParams) PostSpike(ctx *Context, ni, di uint32, pl *Pool, vals *LayerVals)
- func (ly *LayerParams) PostSpikeSpecial(ctx *Context, ni, di uint32, pl *Pool, lpl *Pool, vals *LayerVals)
- func (ly *LayerParams) PulvDefaults()
- func (ly *LayerParams) RWDefaults()
- func (ly *LayerParams) RWPredDefaults()
- func (ly *LayerParams) SpecialPostGs(ctx *Context, ni, di uint32, saveVal float32)
- func (ly *LayerParams) SpecialPreGs(ctx *Context, ni, di uint32, pl *Pool, vals *LayerVals, drvGe float32, ...) float32
- func (ly *LayerParams) SpikeFmG(ctx *Context, ni, di uint32)
- func (ly *LayerParams) SubPoolGiFmSpikes(ctx *Context, di uint32, pl *Pool, lpl *Pool, lyInhib bool, giMult float32)
- func (ly *LayerParams) TDDefaults()
- func (ly *LayerParams) TDPredDefaults()
- func (ly *LayerParams) USDefaults()
- func (ly *LayerParams) Update()
- func (ly *LayerParams) UrgencyDefaults()
- func (ly *LayerParams) VSGatedDefaults()
- func (ly *LayerParams) VSPatchDefaults()
- type LayerTypes
- type LayerVals
- type LearnNeurParams
- type LearnSynParams
- type MatrixParams
- type MatrixPrjnParams
- type NetIdxs
- func (ctx *NetIdxs) DataIdx(idx uint32) uint32
- func (ctx *NetIdxs) DataIdxIsValid(li uint32) bool
- func (ctx *NetIdxs) ItemIdx(idx uint32) uint32
- func (ctx *NetIdxs) LayerIdxIsValid(li uint32) bool
- func (ctx *NetIdxs) NeurIdxIsValid(ni uint32) bool
- func (ctx *NetIdxs) PoolDataIdxIsValid(pi uint32) bool
- func (ctx *NetIdxs) PoolIdxIsValid(pi uint32) bool
- func (ctx *NetIdxs) SynIdxIsValid(si uint32) bool
- func (ctx *NetIdxs) ValsIdx(li, di uint32) uint32
- type Network
- func (net *Network) AddAmygdala(prefix string, neg bool, nUs, nNeurY, nNeurX int, space float32) (blaPosAcq, blaPosExt, blaNegAcq, blaNegExt, cemPos, cemNeg, blaNov *Layer)
- func (net *Network) AddBG(prefix string, nPoolsY, nPoolsX, nNeurY, nNeurX, gpNeurY, gpNeurX int, ...) (mtxGo, mtxNo, gpeTA, stnp, stns, gpi *Layer)
- func (net *Network) AddBG4D(prefix string, nPoolsY, nPoolsX, nNeurY, nNeurX, gpNeurY, gpNeurX int, ...) (mtxGo, mtxNo, gpeTA, stnp, stns, gpi *Layer)
- func (net *Network) AddBGThalLayer2D(name string, nNeurY, nNeurX int) *Layer
- func (net *Network) AddBGThalLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer
- func (net *Network) AddBLALayers(prefix string, pos bool, nUs, nNeurY, nNeurX int, rel relpos.Relations, ...) (acq, ext *Layer)
- func (net *Network) AddBOA(ctx *Context, nUSneg, nYneur, popY, popX, bgY, bgX, pfcY, pfcX int, ...) (...)
- func (net *Network) AddCTLayer2D(name string, nNeurY, nNeurX int) *Layer
- func (net *Network) AddCTLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer
- func (nt *Network) AddClampDaLayer(name string) *Layer
- func (net *Network) AddDrivesLayer(ctx *Context, nNeurY, nNeurX int) *Layer
- func (net *Network) AddDrivesPulvLayer(ctx *Context, nNeurY, nNeurX int, space float32) (drv, drvP *Layer)
- func (net *Network) AddEffortLayer(nNeurY, nNeurX int) *Layer
- func (net *Network) AddEffortPulvLayer(nNeurY, nNeurX int, space float32) (eff, effP *Layer)
- func (net *Network) AddGPeLayer2D(name string, nNeurY, nNeurX int) *Layer
- func (net *Network) AddGPeLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer
- func (net *Network) AddGPiLayer2D(name string, nNeurY, nNeurX int) *Layer
- func (net *Network) AddGPiLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer
- func (net *Network) AddHip(ctx *Context, hip *HipConfig, space float32) (ec2, ec3, dg, ca3, ca1, ec5 *Layer)
- func (net *Network) AddInputPulv2D(name string, nNeurY, nNeurX int, space float32) (*Layer, *Layer)
- func (net *Network) AddInputPulv4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, space float32) (*Layer, *Layer)
- func (net *Network) AddLDTLayer(prefix string) *Layer
- func (net *Network) AddMatrixLayer(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, da DAModTypes) *Layer
- func (net *Network) AddOFCus(ctx *Context, nUSs, nY, ofcY, ofcX int, space float32) (ofc, ofcCT, ofcPT, ofcPTp, ofcMD, notMaint *Layer)
- func (net *Network) AddPFC2D(name, thalSuffix string, nNeurY, nNeurX int, decayOnRew bool, space float32) (pfc, pfcCT, pfcPT, pfcPTp, pfcThal *Layer)
- func (net *Network) AddPFC4D(name, thalSuffix string, nPoolsY, nPoolsX, nNeurY, nNeurX int, decayOnRew bool, ...) (pfc, pfcCT, pfcPT, pfcPTp, pfcThal *Layer)
- func (net *Network) AddPTMaintLayer2D(name string, nNeurY, nNeurX int) *Layer
- func (net *Network) AddPTMaintLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer
- func (net *Network) AddPTMaintThalForSuper(super, ct *Layer, thalSuffix, prjnClass string, superToPT, ptSelf prjn.Pattern, ...) (pt, thal *Layer)
- func (net *Network) AddPTNotMaintLayer(ptMaint *Layer, nNeurY, nNeurX int, space float32) *Layer
- func (net *Network) AddPTPredLayer(ptMaint, ct *Layer, ptToPredPrjn, ctToPredPrjn prjn.Pattern, prjnClass string, ...) (ptPred *Layer)
- func (net *Network) AddPTPredLayer2D(name string, nNeurY, nNeurX int) *Layer
- func (net *Network) AddPTPredLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer
- func (net *Network) AddPVLVOFCus(ctx *Context, nUSneg, nYneur, popY, popX, bgY, bgX, ofcY, ofcX int, ...) (...)
- func (net *Network) AddPVLVPulvLayers(ctx *Context, nUSneg, nYneur, popY, popX int, space float32) (...)
- func (net *Network) AddPVLayers(nNeurY, nNeurX int, rel relpos.Relations, space float32) (pvPos, pvNeg *Layer)
- func (net *Network) AddPVPulvLayers(nNeurY, nNeurX int, rel relpos.Relations, space float32) (pvPos, pvNeg, pvPosP, pvNegP *Layer)
- func (net *Network) AddPulvForLayer(lay *Layer, space float32) *Layer
- func (net *Network) AddPulvForSuper(super *Layer, space float32) *Layer
- func (net *Network) AddPulvLayer2D(name string, nNeurY, nNeurX int) *Layer
- func (net *Network) AddPulvLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer
- func (nt *Network) AddRWLayers(prefix string, rel relpos.Relations, space float32) (rew, rp, da *Layer)
- func (nt *Network) AddRewLayer(name string) *Layer
- func (net *Network) AddSCLayer2D(prefix string, nNeurY, nNeurX int) *Layer
- func (net *Network) AddSCLayer4D(prefix string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer
- func (net *Network) AddSTNLayer2D(name string, nNeurY, nNeurX int) *Layer
- func (net *Network) AddSTNLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer
- func (net *Network) AddSuperCT2D(name, prjnClass string, shapeY, shapeX int, space float32, pat prjn.Pattern) (super, ct *Layer)
- func (net *Network) AddSuperCT4D(name, prjnClass string, nPoolsY, nPoolsX, nNeurY, nNeurX int, space float32, ...) (super, ct *Layer)
- func (net *Network) AddSuperLayer2D(name string, nNeurY, nNeurX int) *Layer
- func (net *Network) AddSuperLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer
- func (nt *Network) AddTDLayers(prefix string, rel relpos.Relations, space float32) (rew, rp, ri, td *Layer)
- func (net *Network) AddUSLayers(nUSpos, nUSneg, nYneur int, rel relpos.Relations, space float32) (usPos, usNeg *Layer)
- func (net *Network) AddUSPulvLayers(nUSpos, nUSneg, nYneur int, rel relpos.Relations, space float32) (usPos, usNeg, usPosP, usNegP *Layer)
- func (net *Network) AddUrgencyLayer(nNeurY, nNeurX int) *Layer
- func (net *Network) AddVS(nUSs, nNeurY, nNeurX, nY int, space float32) (vSmtxGo, vSmtxNo, vSstnp, vSstns, vSgpi, vSpatch, vSgated *Layer)
- func (net *Network) AddVSGatedLayer(prefix string, nYunits int) *Layer
- func (net *Network) AddVSPatchLayer(prefix string, nUs, nNeurY, nNeurX int) *Layer
- func (net *Network) AddVTALHbLDTLayers(rel relpos.Relations, space float32) (vta, lhb, ldt *Layer)
- func (nt *Network) ApplyExts(ctx *Context)
- func (nt *Network) AsAxon() *Network
- func (nt *Network) ClearTargExt(ctx *Context)
- func (nt *Network) CollectDWts(ctx *Context, dwts *[]float32) bool
- func (nt *Network) ConfigGPUnoGUI(ctx *Context)
- func (nt *Network) ConfigGPUwithGUI(ctx *Context)
- func (net *Network) ConfigLoopsHip(ctx *Context, man *looper.Manager, hip *HipConfig, pretrain *bool)
- func (net *Network) ConnectCSToBLAPos(cs, blaAcq, blaNov *Layer) (toAcq, toNov *Prjn)
- func (net *Network) ConnectCTSelf(ly *Layer, pat prjn.Pattern, prjnClass string) (ctxt, maint *Prjn)
- func (net *Network) ConnectCtxtToCT(send, recv *Layer, pat prjn.Pattern) *Prjn
- func (net *Network) ConnectPTMaintSelf(ly *Layer, pat prjn.Pattern, prjnClass string) *Prjn
- func (net *Network) ConnectPTNotMaint(ptMaint, ptNotMaint *Layer, pat prjn.Pattern) *Prjn
- func (net *Network) ConnectPTPredSelf(ly *Layer, pat prjn.Pattern) *Prjn
- func (net *Network) ConnectPTPredToPulv(ptPred, pulv *Layer, toPulvPat, fmPulvPat prjn.Pattern, prjnClass string) (toPulv, toPTPred *Prjn)
- func (net *Network) ConnectSuperToCT(send, recv *Layer, pat prjn.Pattern, prjnClass string) *Prjn
- func (net *Network) ConnectToBLAAcq(send, recv *Layer, pat prjn.Pattern) *Prjn
- func (net *Network) ConnectToBLAExt(send, recv *Layer, pat prjn.Pattern) *Prjn
- func (net *Network) ConnectToMatrix(send, recv *Layer, pat prjn.Pattern) *Prjn
- func (net *Network) ConnectToPFC(lay, layP, pfc, pfcCT, pfcPTp *Layer, pat prjn.Pattern)
- func (net *Network) ConnectToPFCBack(lay, layP, pfc, pfcCT, pfcPTp *Layer, pat prjn.Pattern)
- func (net *Network) ConnectToPFCBidir(lay, layP, pfc, pfcCT, pfcPTp *Layer, pat prjn.Pattern) (ff, fb *Prjn)
- func (net *Network) ConnectToPulv(super, ct, pulv *Layer, toPulvPat, fmPulvPat prjn.Pattern, prjnClass string) (toPulv, toSuper, toCT *Prjn)
- func (nt *Network) ConnectToRWPrjn(send, recv *Layer, pat prjn.Pattern) *Prjn
- func (net *Network) ConnectToSC(send, recv *Layer, pat prjn.Pattern) *Prjn
- func (net *Network) ConnectToSC1to1(send, recv *Layer) *Prjn
- func (net *Network) ConnectToVSPatch(send, recv *Layer, pat prjn.Pattern) *Prjn
- func (net *Network) ConnectUSToBLAPos(us, blaAcq, blaExt *Layer) (toAcq, toExt *Prjn)
- func (nt *Network) Cycle(ctx *Context)
- func (nt *Network) DWt(ctx *Context)
- func (nt *Network) DecayState(ctx *Context, decay, glong, ahp float32)
- func (nt *Network) DecayStateByClass(ctx *Context, decay, glong, ahp float32, classes ...string)
- func (nt *Network) DecayStateByType(ctx *Context, decay, glong, ahp float32, types ...LayerTypes)
- func (nt *Network) DecayStateLayers(ctx *Context, decay, glong, ahp float32, layers ...string)
- func (nt *Network) Defaults()
- func (nt *Network) InitActs(ctx *Context)
- func (nt *Network) InitExt(ctx *Context)
- func (nt *Network) InitGScale(ctx *Context)
- func (nt *Network) InitName(net emer.Network, name string)
- func (nt *Network) InitTopoSWts()
- func (nt *Network) InitWts(ctx *Context)
- func (nt *Network) LRateMod(mod float32)
- func (nt *Network) LRateSched(sched float32)
- func (nt *Network) LayersSetOff(off bool)
- func (nt *Network) MinusPhase(ctx *Context)
- func (nt *Network) NeuronsSlice(vals *[]float32, nrnVar string, di int)
- func (nt *Network) NewLayer() emer.Layer
- func (nt *Network) NewPrjn() emer.Prjn
- func (nt *Network) NewState(ctx *Context)
- func (nt *Network) PlusPhase(ctx *Context)
- func (nt *Network) PlusPhaseStart(ctx *Context)
- func (nt *Network) SetDWts(ctx *Context, dwts []float32, navg int)
- func (nt *Network) SetSubMean(trgAvg, prjn float32)
- func (nt *Network) SizeReport(detail bool) string
- func (nt *Network) SlowAdapt(ctx *Context)
- func (nt *Network) SpkSt1(ctx *Context)
- func (nt *Network) SpkSt2(ctx *Context)
- func (nt *Network) SynFail(ctx *Context)
- func (nt *Network) SynsSlice(vals *[]float32, synvar SynapseVars)
- func (nt *Network) TargToExt(ctx *Context)
- func (nt *Network) UnLesionNeurons(ctx *Context)
- func (nt *Network) UpdateExtFlags(ctx *Context)
- func (nt *Network) UpdateParams()
- func (nt *Network) WtFmDWt(ctx *Context)
- func (nt *Network) WtsHash() string
- type NetworkBase
- func (nt *NetworkBase) AddLayer(name string, shape []int, typ LayerTypes) *Layer
- func (nt *NetworkBase) AddLayer2D(name string, shapeY, shapeX int, typ LayerTypes) *Layer
- func (nt *NetworkBase) AddLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, typ LayerTypes) *Layer
- func (nt *NetworkBase) AddLayerInit(ly *Layer, name string, shape []int, typ LayerTypes)
- func (nt *NetworkBase) AllGlobalVals(ctrKey string, vals map[string]float32)
- func (nt *NetworkBase) AllGlobals() string
- func (nt *NetworkBase) AllLayerInhibs() string
- func (nt *NetworkBase) AllParams() string
- func (nt *NetworkBase) AllPrjnScales() string
- func (nt *NetworkBase) ApplyParams(pars *params.Sheet, setMsg bool) (bool, error)
- func (nt *NetworkBase) AxonLayerByName(name string) *Layer
- func (nt *NetworkBase) BidirConnectLayerNames(low, high string, pat prjn.Pattern) (lowlay, highlay *Layer, fwdpj, backpj *Prjn, err error)
- func (nt *NetworkBase) BidirConnectLayers(low, high *Layer, pat prjn.Pattern) (fwdpj, backpj *Prjn)
- func (nt *NetworkBase) BidirConnectLayersPy(low, high *Layer, pat prjn.Pattern)
- func (nt *NetworkBase) Bounds() (min, max mat32.Vec3)
- func (nt *NetworkBase) BoundsUpdt()
- func (nt *NetworkBase) Build(simCtx *Context) error
- func (nt *NetworkBase) BuildGlobals(ctx *Context)
- func (nt *NetworkBase) BuildPrjnGBuf()
- func (nt *NetworkBase) ConnectLayerNames(send, recv string, pat prjn.Pattern, typ PrjnTypes) (rlay, slay *Layer, pj *Prjn, err error)
- func (nt *NetworkBase) ConnectLayers(send, recv *Layer, pat prjn.Pattern, typ PrjnTypes) *Prjn
- func (nt *NetworkBase) DeleteAll()
- func (nt *NetworkBase) FunTimerStart(fun string)
- func (nt *NetworkBase) FunTimerStop(fun string)
- func (nt *NetworkBase) KeyLayerParams() string
- func (nt *NetworkBase) KeyPrjnParams() string
- func (nt *NetworkBase) Label() string
- func (nt *NetworkBase) LateralConnectLayer(lay *Layer, pat prjn.Pattern) *Prjn
- func (nt *NetworkBase) LateralConnectLayerPrjn(lay *Layer, pat prjn.Pattern, pj *Prjn) *Prjn
- func (nt *NetworkBase) LayByNameTry(name string) (*Layer, error)
- func (nt *NetworkBase) Layer(idx int) emer.Layer
- func (nt *NetworkBase) LayerByName(name string) emer.Layer
- func (nt *NetworkBase) LayerByNameTry(name string) (emer.Layer, error)
- func (nt *NetworkBase) LayerMapPar(fun func(ly *Layer), funame string)
- func (nt *NetworkBase) LayerMapSeq(fun func(ly *Layer), funame string)
- func (nt *NetworkBase) LayerVals(li, di uint32) *LayerVals
- func (nt *NetworkBase) LayersByClass(classes ...string) []string
- func (nt *NetworkBase) LayersByType(layType ...LayerTypes) []string
- func (nt *NetworkBase) Layout()
- func (nt *NetworkBase) MakeLayMap()
- func (nt *NetworkBase) MaxParallelData() int
- func (nt *NetworkBase) NLayers() int
- func (nt *NetworkBase) NParallelData() int
- func (nt *NetworkBase) Name() string
- func (nt *NetworkBase) NeuronMapPar(ctx *Context, fun func(ly *Layer, ni uint32), funame string)
- func (nt *NetworkBase) NeuronMapSeq(ctx *Context, fun func(ly *Layer, ni uint32), funame string)
- func (nt *NetworkBase) NonDefaultParams() string
- func (nt *NetworkBase) OpenWtsCpp(filename gi.FileName) error
- func (nt *NetworkBase) OpenWtsJSON(filename gi.FileName) error
- func (nt *NetworkBase) ParamsApplied(sel *params.Sel)
- func (nt *NetworkBase) ParamsHistoryReset()
- func (nt *NetworkBase) PrjnMapSeq(fun func(pj *Prjn), funame string)
- func (nt *NetworkBase) ReadWtsCpp(r io.Reader) error
- func (nt *NetworkBase) ReadWtsJSON(r io.Reader) error
- func (nt *NetworkBase) ResetRndSeed()
- func (nt *NetworkBase) SaveAllLayerInhibs(filename gi.FileName) error
- func (nt *NetworkBase) SaveAllParams(filename gi.FileName) error
- func (nt *NetworkBase) SaveAllPrjnScales(filename gi.FileName) error
- func (nt *NetworkBase) SaveNonDefaultParams(filename gi.FileName) error
- func (nt *NetworkBase) SaveParamsSnapshot(pars *netparams.Sets, cfg any, good bool) error
- func (nt *NetworkBase) SaveWtsJSON(filename gi.FileName) error
- func (nt *NetworkBase) SetCtxStrides(simCtx *Context)
- func (nt *NetworkBase) SetMaxData(simCtx *Context, maxData int)
- func (nt *NetworkBase) SetNThreads(nthr int)
- func (nt *NetworkBase) SetRndSeed(seed int64)
- func (nt *NetworkBase) SetWts(nw *weights.Network) error
- func (nt *NetworkBase) StdVertLayout()
- func (nt *NetworkBase) SynVarNames() []string
- func (nt *NetworkBase) SynVarProps() map[string]string
- func (nt *NetworkBase) TimerReport()
- func (nt *NetworkBase) UnitVarNames() []string
- func (nt *NetworkBase) UnitVarProps() map[string]string
- func (nt *NetworkBase) VarRange(varNm string) (min, max float32, err error)
- func (nt *NetworkBase) WriteWtsJSON(w io.Writer) error
- type NeuroModParams
- func (nm *NeuroModParams) DAGain(da float32) float32
- func (nm *NeuroModParams) DASign() float32
- func (nm *NeuroModParams) Defaults()
- func (nm *NeuroModParams) GGain(da float32) float32
- func (nm *NeuroModParams) GiFmACh(ach float32) float32
- func (nm *NeuroModParams) IsBLAExt() bool
- func (nm *NeuroModParams) LRMod(da, ach float32) float32
- func (nm *NeuroModParams) LRModFact(pct, val float32) float32
- func (nm *NeuroModParams) Update()
- type NeuronAvgVarStrides
- type NeuronAvgVars
- type NeuronFlags
- type NeuronIdxStrides
- type NeuronIdxs
- type NeuronVarStrides
- type NeuronVars
- type PVLV
- func (pp *PVLV) Defaults()
- func (pp *PVLV) EffortUpdt(ctx *Context, di uint32, rnd erand.Rand, effort float32)
- func (pp *PVLV) EffortUrgencyUpdt(ctx *Context, di uint32, rnd erand.Rand, effort float32)
- func (pp *PVLV) ShouldGiveUp(ctx *Context, di uint32, rnd erand.Rand, hasRew bool) bool
- func (pp *PVLV) Update()
- func (pp *PVLV) VSGated(ctx *Context, di uint32, rnd erand.Rand, gated, hasRew bool, poolIdx int)
- type Pool
- type PoolAvgMax
- type PopCodeParams
- func (pc *PopCodeParams) ClipVal(val float32) float32
- func (pc *PopCodeParams) Defaults()
- func (pc *PopCodeParams) EncodeGe(i, n uint32, val float32) float32
- func (pc *PopCodeParams) EncodeVal(i, n uint32, val float32) float32
- func (pc *PopCodeParams) ProjectParam(minParam, maxParam, clipVal float32) float32
- func (pc *PopCodeParams) SetRange(min, max, minSigma, maxSigma float32)
- func (pc *PopCodeParams) Update()
- type Prjn
- func (pj *Prjn) AllParams() string
- func (pj *Prjn) AsAxon() *Prjn
- func (pj *Prjn) Class() string
- func (pj *Prjn) DWt(ctx *Context, si uint32)
- func (pj *Prjn) DWtSubMean(ctx *Context, ri uint32)
- func (pj *Prjn) Defaults()
- func (pj *Prjn) InitGBuffs()
- func (pj *Prjn) InitSynCa(ctx *Context, syni, di uint32)
- func (pj *Prjn) InitWtSym(ctx *Context, rpj *Prjn)
- func (pj *Prjn) InitWts(ctx *Context, nt *Network)
- func (pj *Prjn) InitWtsSyn(ctx *Context, syni uint32, rnd erand.Rand, mean, spct float32)
- func (pj *Prjn) LRateMod(mod float32)
- func (pj *Prjn) LRateSched(sched float32)
- func (pj *Prjn) Object() any
- func (pj *Prjn) PrjnType() PrjnTypes
- func (pj *Prjn) ReadWtsJSON(r io.Reader) error
- func (pj *Prjn) SWtFmWt(ctx *Context)
- func (pj *Prjn) SWtRescale(ctx *Context)
- func (pj *Prjn) SendSpike(ctx *Context, ni, di, maxData uint32)
- func (pj *Prjn) SetSWtsFunc(ctx *Context, swtFun func(si, ri int, send, recv *etensor.Shape) float32)
- func (pj *Prjn) SetSWtsRPool(ctx *Context, swts etensor.Tensor)
- func (pj *Prjn) SetSynVal(varNm string, sidx, ridx int, val float32) error
- func (pj *Prjn) SetWts(pw *weights.Prjn) error
- func (pj *Prjn) SetWtsFunc(ctx *Context, wtFun func(si, ri int, send, recv *etensor.Shape) float32)
- func (pj *Prjn) SlowAdapt(ctx *Context)
- func (pj *Prjn) SynCaRecv(ctx *Context, ni, di uint32, updtThr float32)
- func (pj *Prjn) SynCaReset(ctx *Context)
- func (pj *Prjn) SynCaSend(ctx *Context, ni, di uint32, updtThr float32)
- func (pj *Prjn) SynFail(ctx *Context)
- func (pj *Prjn) SynScale(ctx *Context)
- func (pj *Prjn) Update()
- func (pj *Prjn) UpdateParams()
- func (pj *Prjn) WriteWtsJSON(w io.Writer, depth int)
- func (pj *Prjn) WtFmDWt(ctx *Context, ni uint32)
- type PrjnBase
- func (pj *PrjnBase) AddClass(cls string)
- func (pj *PrjnBase) ApplyDefParams()
- func (pj *PrjnBase) ApplyParams(pars *params.Sheet, setMsg bool) (bool, error)
- func (pj *PrjnBase) Build() error
- func (pj *PrjnBase) Class() string
- func (pj *PrjnBase) Connect(slay, rlay *Layer, pat prjn.Pattern, typ PrjnTypes)
- func (pj *PrjnBase) Init(prjn emer.Prjn)
- func (pj *PrjnBase) IsOff() bool
- func (pj *PrjnBase) Label() string
- func (pj *PrjnBase) Name() string
- func (pj *PrjnBase) NonDefaultParams() string
- func (pj *PrjnBase) ParamsApplied(sel *params.Sel)
- func (pj *PrjnBase) ParamsHistoryReset()
- func (pj *PrjnBase) Pattern() prjn.Pattern
- func (pj *PrjnBase) PrjnTypeName() string
- func (pj *PrjnBase) RecvLay() emer.Layer
- func (pj *PrjnBase) RecvSynIdxs(ri uint32) []uint32
- func (pj *PrjnBase) SendLay() emer.Layer
- func (pj *PrjnBase) SetClass(cls string) emer.Prjn
- func (pj *PrjnBase) SetConStartN(con *[]StartN, avgmax *minmax.AvgMax32, tn *etensor.Int32) uint32
- func (pj *PrjnBase) SetOff(off bool)
- func (pj *PrjnBase) SetPattern(pat prjn.Pattern) emer.Prjn
- func (pj *PrjnBase) SetType(typ emer.PrjnType) emer.Prjn
- func (pj *PrjnBase) String() string
- func (pj *PrjnBase) Syn1DNum() int
- func (pj *PrjnBase) SynIdx(sidx, ridx int) int
- func (pj *PrjnBase) SynVal(varNm string, sidx, ridx int) float32
- func (pj *PrjnBase) SynVal1D(varIdx int, synIdx int) float32
- func (pj *PrjnBase) SynVal1DDi(varIdx int, synIdx int, di int) float32
- func (pj *PrjnBase) SynValDi(varNm string, sidx, ridx int, di int) float32
- func (pj *PrjnBase) SynVals(vals *[]float32, varNm string) error
- func (pj *PrjnBase) SynVarIdx(varNm string) (int, error)
- func (pj *PrjnBase) SynVarNames() []string
- func (pj *PrjnBase) SynVarNum() int
- func (pj *PrjnBase) SynVarProps() map[string]string
- func (pj *PrjnBase) Type() emer.PrjnType
- func (pj *PrjnBase) TypeName() string
- func (pj *PrjnBase) Validate(logmsg bool) error
- type PrjnGTypes
- type PrjnIdxs
- type PrjnParams
- func (pj *PrjnParams) AllParams() string
- func (pj *PrjnParams) BLADefaults()
- func (pj *PrjnParams) CTCtxtPrjnDefaults()
- func (pj *PrjnParams) DWtFmDiDWtSyn(ctx *Context, syni uint32)
- func (pj *PrjnParams) DWtSyn(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool, isTarget bool)
- func (pj *PrjnParams) DWtSynBLA(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool)
- func (pj *PrjnParams) DWtSynCortex(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool, isTarget bool)
- func (pj *PrjnParams) DWtSynHip(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool, isTarget bool)
- func (pj *PrjnParams) DWtSynMatrix(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool)
- func (pj *PrjnParams) DWtSynRWPred(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool)
- func (pj *PrjnParams) DWtSynTDPred(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool)
- func (pj *PrjnParams) DWtSynVSPatch(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool)
- func (pj *PrjnParams) Defaults()
- func (pj *PrjnParams) DoSynCa() bool
- func (pj *PrjnParams) GatherSpikes(ctx *Context, ly *LayerParams, ni, di uint32, gRaw float32, gSyn *float32)
- func (pj *PrjnParams) HipDefaults()
- func (pj *PrjnParams) IsExcitatory() bool
- func (pj *PrjnParams) IsInhib() bool
- func (pj *PrjnParams) MatrixDefaults()
- func (pj *PrjnParams) RLPredDefaults()
- func (pj *PrjnParams) SetFixedWts()
- func (pj *PrjnParams) SynCaSyn(ctx *Context, syni uint32, ni, di uint32, otherCaSyn, updtThr float32)
- func (pj *PrjnParams) SynRecvLayIdx(ctx *Context, syni uint32) uint32
- func (pj *PrjnParams) SynSendLayIdx(ctx *Context, syni uint32) uint32
- func (pj *PrjnParams) Update()
- func (pj *PrjnParams) VSPatchDefaults()
- func (pj *PrjnParams) WtFmDWtSyn(ctx *Context, syni uint32)
- func (pj *PrjnParams) WtFmDWtSynCortex(ctx *Context, syni uint32)
- func (pj *PrjnParams) WtFmDWtSynNoLimits(ctx *Context, syni uint32)
- type PrjnScaleParams
- type PrjnTypes
- type PulvParams
- type PushOff
- type RLPredPrjnParams
- type RLRateParams
- type RWDaParams
- type RWPredParams
- type RandFunIdx
- type SWtAdaptParams
- type SWtInitParams
- type SWtParams
- func (sp *SWtParams) ClipSWt(swt float32) float32
- func (sp *SWtParams) ClipWt(wt float32) float32
- func (sp *SWtParams) Defaults()
- func (sp *SWtParams) InitWtsSyn(ctx *Context, syni uint32, rnd erand.Rand, mean, spct float32)
- func (sp *SWtParams) LWtFmWts(wt, swt float32) float32
- func (sp *SWtParams) LinFmSigWt(wt float32) float32
- func (sp *SWtParams) SigFmLinWt(lw float32) float32
- func (sp *SWtParams) Update()
- func (sp *SWtParams) WtFmDWt(wt, lwt *float32, dwt, swt float32)
- func (sp *SWtParams) WtVal(swt, lwt float32) float32
- type SpikeNoiseParams
- type SpikeParams
- type StartN
- type SynComParams
- func (sc *SynComParams) Defaults()
- func (sc *SynComParams) Fail(ctx *Context, syni uint32, swt float32)
- func (sc *SynComParams) FloatFromGBuf(ival int32) float32
- func (sc *SynComParams) FloatToGBuf(val float32) int32
- func (sc *SynComParams) FloatToIntFactor() float32
- func (sc *SynComParams) ReadIdx(rnIdx, di uint32, cycTot int32, nRecvNeurs, maxData uint32) uint32
- func (sc *SynComParams) ReadOff(cycTot int32) uint32
- func (sc *SynComParams) RingIdx(i uint32) uint32
- func (sc *SynComParams) Update()
- func (sc *SynComParams) WriteIdx(rnIdx, di uint32, cycTot int32, nRecvNeurs, maxData uint32) uint32
- func (sc *SynComParams) WriteIdxOff(rnIdx, di, wrOff uint32, nRecvNeurs, maxData uint32) uint32
- func (sc *SynComParams) WriteOff(cycTot int32) uint32
- func (sc *SynComParams) WtFail(ctx *Context, swt float32) bool
- func (sc *SynComParams) WtFailP(swt float32) float32
- type SynapseCaStrides
- type SynapseCaVars
- type SynapseIdxStrides
- type SynapseIdxs
- type SynapseVarStrides
- type SynapseVars
- type TDDaParams
- type TDIntegParams
- type TopoInhibParams
- type TraceParams
- type TrgAvgActParams
- type Urgency
- type VSPatchParams
- type VTA
- type VTAVals
- type ValenceTypes
Constants ¶
const (
	Version     = "v1.8.10"
	GitCommit   = "0fb91b5"          // the commit JUST BEFORE the release
	VersionDate = "2023-07-29 02:21" // UTC
)
const CyclesN = 10
CyclesN is the number of cycles to run as a group for ra25: 10 = ~50 msec / trial, 25 = ~48, all 150 / 50 minus / plus = ~44. 10 is good enough and unlikely to mess with anything else.
Variables ¶
var (
	AvgMaxFloatFromIntErr   func()
	AvgMaxFloatFromIntErrMu sync.Mutex
)
AvgMaxFloatFromIntErr is called when there is an overflow error in AvgMaxI32 FloatFromInt
var (
	NeuronVarNames []string
	NeuronVarsMap  map[string]int
)
var (
	NeuronLayerVars  = []string{"DA", "ACh", "NE", "Ser", "Gated"}
	NNeuronLayerVars = len(NeuronLayerVars)
)
NeuronLayerVars are layer-level variables displayed as neuron layers.
var (
	SynapseVarNames []string
	SynapseVarsMap  map[string]int
)
var KiT_DAModTypes = kit.Enums.AddEnum(DAModTypesN, kit.NotBitFlag, nil)
var KiT_GPLayerTypes = kit.Enums.AddEnum(GPLayerTypesN, kit.NotBitFlag, nil)
var KiT_GlobalVTAType = kit.Enums.AddEnum(GlobalVTATypeN, kit.NotBitFlag, nil)
var KiT_GlobalVars = kit.Enums.AddEnum(GlobalVarsN, kit.NotBitFlag, nil)
var KiT_Layer = kit.Types.AddType(&Layer{}, LayerProps)
var KiT_LayerTypes = kit.Enums.AddEnum(LayerTypesN, kit.NotBitFlag, nil)
var KiT_Network = kit.Types.AddType(&Network{}, NetworkProps)
var KiT_NeuronAvgVars = kit.Enums.AddEnum(NeuronAvgVarsN, kit.NotBitFlag, nil)
var KiT_NeuronIdxs = kit.Enums.AddEnum(NeuronIdxsN, kit.NotBitFlag, nil)
var KiT_NeuronVars = kit.Enums.AddEnum(NeuronVarsN, kit.NotBitFlag, nil)
var KiT_Prjn = kit.Types.AddType(&Prjn{}, PrjnProps)
var KiT_PrjnGTypes = kit.Enums.AddEnum(PrjnGTypesN, kit.NotBitFlag, nil)
var KiT_PrjnTypes = kit.Enums.AddEnum(PrjnTypesN, kit.NotBitFlag, nil)
var KiT_SynapseCaVars = kit.Enums.AddEnum(SynapseCaVarsN, kit.NotBitFlag, nil)
var KiT_SynapseIdxs = kit.Enums.AddEnum(SynapseIdxsN, kit.NotBitFlag, nil)
var KiT_SynapseVars = kit.Enums.AddEnum(SynapseVarsN, kit.NotBitFlag, nil)
var KiT_ValenceTypes = kit.Enums.AddEnum(ValenceTypesN, kit.NotBitFlag, nil)
var LayerProps = ki.Props{ "EnumType:Typ": KiT_LayerTypes, "ToolBar": ki.PropSlice{ {"Defaults", ki.Props{ "icon": "reset", "desc": "return all parameters to their intial default values", }}, {"InitWts", ki.Props{ "icon": "update", "desc": "initialize the layer's weight values according to prjn parameters, for all *sending* projections out of this layer", }}, {"InitActs", ki.Props{ "icon": "update", "desc": "initialize the layer's activation values", }}, {"sep-act", ki.BlankProp{}}, {"LesionNeurons", ki.Props{ "icon": "close", "desc": "Lesion (set the Off flag) for given proportion of neurons in the layer (number must be 0 -- 1, NOT percent!)", "Args": ki.PropSlice{ {"Proportion", ki.Props{ "desc": "proportion (0 -- 1) of neurons to lesion", }}, }, }}, {"UnLesionNeurons", ki.Props{ "icon": "reset", "desc": "Un-Lesion (reset the Off flag) for all neurons in the layer", }}, }, }
var NetworkProps = ki.Props{ "ToolBar": ki.PropSlice{ {"SaveWtsJSON", ki.Props{ "label": "Save Wts...", "icon": "file-save", "desc": "Save json-formatted weights", "Args": ki.PropSlice{ {"Weights File Name", ki.Props{ "default-field": "WtsFile", "ext": ".wts,.wts.gz", }}, }, }}, {"OpenWtsJSON", ki.Props{ "label": "Open Wts...", "icon": "file-open", "desc": "Open json-formatted weights", "Args": ki.PropSlice{ {"Weights File Name", ki.Props{ "default-field": "WtsFile", "ext": ".wts,.wts.gz", }}, }, }}, {"sep-file", ki.BlankProp{}}, {"Build", ki.Props{ "icon": "update", "desc": "build the network's neurons and synapses according to current params", }}, {"InitWts", ki.Props{ "icon": "update", "desc": "initialize the network weight values according to prjn parameters", }}, {"InitActs", ki.Props{ "icon": "update", "desc": "initialize the network activation values", }}, {"sep-act", ki.BlankProp{}}, {"AddLayer", ki.Props{ "label": "Add Layer...", "icon": "new", "desc": "add a new layer to network", "Args": ki.PropSlice{ {"Layer Name", ki.Props{}}, {"Layer Shape", ki.Props{ "desc": "shape of layer, typically 2D (Y, X) or 4D (Pools Y, Pools X, Units Y, Units X)", }}, {"Layer Type", ki.Props{ "desc": "type of layer -- used for determining how inputs are applied", }}, }, }}, {"ConnectLayerNames", ki.Props{ "label": "Connect Layers...", "icon": "new", "desc": "add a new connection between layers in the network", "Args": ki.PropSlice{ {"Send Layer Name", ki.Props{}}, {"Recv Layer Name", ki.Props{}}, {"Pattern", ki.Props{ "desc": "pattern to connect with", }}, {"Prjn Type", ki.Props{ "desc": "type of projection -- direction, or other more specialized factors", }}, }, }}, {"AllGlobals", ki.Props{ "icon": "file-sheet", "desc": "Shows the values of all network Global variables, for debugging purposes", "show-return": true, }}, {"AllPrjnScales", ki.Props{ "icon": "file-sheet", "desc": "AllPrjnScales returns a listing of all PrjnScale parameters in the Network in all Layers, Recv projections. These are among the most important and numerous of parameters (in larger networks) -- this helps keep track of what they all are set to.", "show-return": true, }}, }, }
var NeuronVarProps = map[string]string{
"Spike": `desc:"whether neuron has spiked or not on this cycle (0 or 1)"`,
"Spiked": `desc:"1 if neuron has spiked within the last 10 cycles (msecs), corresponding to a nominal max spiking rate of 100 Hz, 0 otherwise -- useful for visualization and computing activity levels in terms of average spiked levels."`,
"Act": `desc:"rate-coded activation value reflecting instantaneous estimated rate of spiking, based on 1 / ISIAvg. This drives feedback inhibition in the FFFB function (todo: this will change when better inhibition is implemented), and is integrated over time for ActInt which is then used for performance statistics and layer average activations, etc. Should not be used for learning or other computations."`,
"ActInt": `desc:"integrated running-average activation value computed from Act with time constant Act.Dt.IntTau, to produce a longer-term integrated value reflecting the overall activation state across the ThetaCycle time scale, as the overall response of network to current input state -- this is copied to ActM and ActP at the ends of the minus and plus phases, respectively, and used in computing performance-level statistics (which are typically based on ActM). Should not be used for learning or other computations."`,
"ActM": `desc:"ActInt activation state at end of third quarter, representing the posterior-cortical minus phase activation -- used for statistics and monitoring network performance. Should not be used for learning or other computations."`,
"ActP": `desc:"ActInt activation state at end of fourth quarter, representing the posterior-cortical plus_phase activation -- used for statistics and monitoring network performance. Should not be used for learning or other computations."`,
"Ext": `desc:"external input: drives activation of unit from outside influences (e.g., sensory input)"`,
"Target": `desc:"target value: drives learning to produce this activation value"`,
"Ge": `range:"2" desc:"total excitatory conductance, including all forms of excitation (e.g., NMDA) -- does *not* include Gbar.E"`,
"Gi": `auto-scale:"+" desc:"total inhibitory synaptic conductance -- the net inhibitory input to the neuron -- does *not* include Gbar.I"`,
"Gk": `auto-scale:"+" desc:"total potassium conductance, typically reflecting sodium-gated potassium currents involved in adaptation effects -- does *not* include Gbar.K"`,
"Inet": `desc:"net current produced by all channels -- drives update of Vm"`,
"Vm": `min:"0" max:"1" desc:"membrane potential -- integrates Inet current over time"`,
"VmDend": `min:"0" max:"1" desc:"dendritic membrane potential -- has a slower time constant, is not subject to the VmR reset after spiking"`,
"ISI": `auto-scale:"+" desc:"current inter-spike-interval -- counts up since last spike. Starts at -1 when initialized."`,
"ISIAvg": `auto-scale:"+" desc:"average inter-spike-interval -- average time interval between spikes, integrated with ISITau rate constant (relatively fast) to capture something close to an instantaneous spiking rate. Starts at -1 when initialized, and goes to -2 after first spike, and is only valid after the second spike post-initialization."`,
"CaSyn": `desc:"spike-driven calcium trace for synapse-level Ca-driven learning: exponential integration of SpikeG * Spike at SynTau time constant (typically 30). Synapses integrate send.CaSyn * recv.CaSyn across M, P, D time integrals for the synaptic trace driving credit assignment in learning. Time constant reflects binding time of Glu to NMDA and Ca buffering postsynaptically, and determines time window where pre * post spiking must overlap to drive learning."`,
"CaSpkM": `desc:"spike-driven calcium trace used as a neuron-level proxy for synpatic credit assignment factor based on continuous time-integrated spiking: exponential integration of SpikeG * Spike at MTau time constant (typically 5). Simulates a calmodulin (CaM) like signal at the most abstract level."`,
"CaSpkP": `desc:"continuous cascaded integration of CaSpkM at PTau time constant (typically 40), representing neuron-level purely spiking version of plus, LTP direction of weight change and capturing the function of CaMKII in the Kinase learning rule. Used for specialized learning and computational functions, statistics, instead of Act."`,
"CaSpkD": `desc:"continuous cascaded integration CaSpkP at DTau time constant (typically 40), representing neuron-level purely spiking version of minus, LTD direction of weight change and capturing the function of DAPK1 in the Kinase learning rule. Used for specialized learning and computational functions, statistics, instead of Act."`,
"CaSpkPM": `desc:"minus-phase snapshot of the CaSpkP value -- similar to ActM but using a more directly spike-integrated value."`,
"CaLrn": `desc:"recv neuron calcium signal used to drive temporal error difference component of standard learning rule, combining NMDA (NmdaCa) and spiking-driven VGCC (VgccCaInt) calcium sources (vs. CaSpk* which only reflects spiking component). This is integrated into CaM, CaP, CaD, and temporal derivative is CaP - CaD (CaMKII - DAPK1). This approximates the backprop error derivative on net input, but VGCC component adds a proportion of recv activation delta as well -- a balance of both works best. The synaptic-level trace multiplier provides the credit assignment factor, reflecting coincident activity and potentially integrated over longer multi-trial timescales."`,
"NrnCaM": `desc:"integrated CaLrn at MTau timescale (typically 5), simulating a calmodulin (CaM) like signal, which then drives CaP, CaD for delta signal driving error-driven learning."`,
"NrnCaP": `desc:"cascaded integration of CaM at PTau time constant (typically 40), representing the plus, LTP direction of weight change and capturing the function of CaMKII in the Kinase learning rule."`,
"NrnCaD": `desc:"cascaded integratoin of CaP at DTau time constant (typically 40), representing the minus, LTD direction of weight change and capturing the function of DAPK1 in the Kinase learning rule."`,
"CaDiff": `desc:"difference between CaP - CaD -- this is the error signal that drives error-driven learning."`,
"RLRate": `auto-scale:"+" desc:"recv-unit based learning rate multiplier, reflecting the sigmoid derivative computed from the CaSpkD of recv unit, and the normalized difference CaSpkP - CaSpkD / MAX(CaSpkP - CaSpkD)."`,
"Attn": `desc:"Attentional modulation factor, which can be set by special layers such as the TRC -- multiplies Ge"`,
"SpkMaxCa": `desc:"Ca integrated like CaSpkP but only starting at MaxCycStart cycle, to prevent inclusion of carryover spiking from prior theta cycle trial -- the PTau time constant otherwise results in significant carryover. This is the input to SpkMax"`,
"SpkMax": `desc:"maximum CaSpkP across one theta cycle time window (max of SpkMaxCa) -- used for specialized algorithms that have more phasic behavior within a single trial, e.g., BG Matrix layer gating. Also useful for visualization of peak activity of neurons."`,
"SpkPrv": `desc:"final CaSpkD activation state at end of previous theta cycle. used for specialized learning mechanisms that operate on delayed sending activations."`,
"SpkSt1": `desc:"the activation state at specific time point within current state processing window (e.g., 50 msec for beta cycle within standard theta cycle), as saved by SpkSt1() function. Used for example in hippocampus for CA3, CA1 learning"`,
"SpkSt2": `desc:"the activation state at specific time point within current state processing window (e.g., 100 msec for beta cycle within standard theta cycle), as saved by SpkSt2() function. Used for example in hippocampus for CA3, CA1 learning"`,
"DASign": `desc:"sign of dopamine-based learning effects for this neuron -- 1 = D1, -1 = D2"`,
"GeNoiseP": `desc:"accumulating poisson probability factor for driving excitatory noise spiking -- multiply times uniform random deviate at each time step, until it gets below the target threshold based on lambda."`,
"GeNoise": `desc:"integrated noise excitatory conductance, added into Ge"`,
"GiNoiseP": `desc:"accumulating poisson probability factor for driving inhibitory noise spiking -- multiply times uniform random deviate at each time step, until it gets below the target threshold based on lambda."`,
"GiNoise": `desc:"integrated noise inhibotyr conductance, added into Gi"`,
"GeExt": `desc:"extra excitatory conductance added to Ge -- from Ext input, GeCtxt etc"`,
"GeRaw": `desc:"raw excitatory conductance (net input) received from senders = current raw spiking drive"`,
"GeSyn": `range:"2" desc:"time-integrated total excitatory synaptic conductance, with an instantaneous rise time from each spike (in GeRaw) and exponential decay with Dt.GeTau, aggregated over projections -- does *not* include Gbar.E"`,
"GiRaw": `desc:"raw inhibitory conductance (net input) received from senders = current raw spiking drive"`,
"GiSyn": `desc:"time-integrated total inhibitory synaptic conductance, with an instantaneous rise time from each spike (in GiRaw) and exponential decay with Dt.GiTau, aggregated over projections -- does *not* include Gbar.I. This is added with computed FFFB inhibition to get the full inhibition in Gi"`,
"GeInt": `range:"2" desc:"integrated running-average activation value computed from Ge with time constant Act.Dt.IntTau, to produce a longer-term integrated value reflecting the overall Ge level across the ThetaCycle time scale (Ge itself fluctuates considerably) -- useful for stats to set strength of connections etc to get neurons into right range of overall excitatory drive"`,
"GeIntMax": `range:"2" desc:"maximum GeInt value across one theta cycle time window."`,
"GiInt": `range:"2" desc:"integrated running-average activation value computed from GiSyn with time constant Act.Dt.IntTau, to produce a longer-term integrated value reflecting the overall synaptic Gi level across the ThetaCycle time scale (Gi itself fluctuates considerably) -- useful for stats to set strength of connections etc to get neurons into right range of overall inhibitory drive"`,
"GModRaw": `desc:"raw modulatory conductance, received from GType = ModulatoryG projections"`,
"GModSyn": `desc:"syn integrated modulatory conductance, received from GType = ModulatoryG projections"`,
"GMaintRaw": `desc:"raw maintenance conductance, received from GType = MaintG projections"`,
"GMaintSyn": `desc:"syn integrated maintenance conductance, integrated using MaintNMDA params."`,
"SSGi": `auto-scale:"+" desc:"SST+ somatostatin positive slow spiking inhibition"`,
"SSGiDend": `auto-scale:"+" desc:"amount of SST+ somatostatin positive slow spiking inhibition applied to dendritic Vm (VmDend)"`,
"Gak": `auto-scale:"+" desc:"conductance of A-type K potassium channels"`,
"MahpN": `auto-scale:"+" desc:"accumulating voltage-gated gating value for the medium time scale AHP"`,
"SahpCa": `desc:"slowly accumulating calcium value that drives the slow AHP"`,
"SahpN": `desc:"sAHP gating value"`,
"GknaMed": `auto-scale:"+" desc:"conductance of sodium-gated potassium channel (KNa) medium dynamics (Slick) -- produces accommodation / adaptation of firing"`,
"GknaSlow": `auto-scale:"+" desc:"conductance of sodium-gated potassium channel (KNa) slow dynamics (Slack) -- produces accommodation / adaptation of firing"`,
"GnmdaSyn": `auto-scale:"+" desc:"integrated NMDA recv synaptic current -- adds GeRaw and decays with time constant"`,
"Gnmda": `auto-scale:"+" desc:"net postsynaptic (recv) NMDA conductance, after Mg V-gating and Gbar -- added directly to Ge as it has the same reversal potential"`,
"GnmdaMaint": `auto-scale:"+" desc:"net postsynaptic maintenance NMDA conductance, computed from GMaintSyn and GMaintRaw, after Mg V-gating and Gbar -- added directly to Ge as it has the same reversal potential"`,
"GnmdaLrn": `auto-scale:"+" desc:"learning version of integrated NMDA recv synaptic current -- adds GeRaw and decays with time constant -- drives NmdaCa that then drives CaM for learning"`,
"NmdaCa": `auto-scale:"+" desc:"NMDA calcium computed from GnmdaLrn, drives learning via CaM"`,
"GgabaB": `auto-scale:"+" desc:"net GABA-B conductance, after Vm gating and Gbar + Gbase -- applies to Gk, not Gi, for GIRK, with .1 reversal potential."`,
"GABAB": `auto-scale:"+" desc:"GABA-B / GIRK activation -- time-integrated value with rise and decay time constants"`,
"GABABx": `auto-scale:"+" desc:"GABA-B / GIRK internal drive variable -- gets the raw activation and decays"`,
"Gvgcc": `auto-scale:"+" desc:"conductance (via Ca) for VGCC voltage gated calcium channels"`,
"VgccM": `desc:"activation gate of VGCC channels"`,
"VgccH": `desc:"inactivation gate of VGCC channels"`,
"VgccCa": `auto-scale:"+" desc:"instantaneous VGCC calcium flux -- can be driven by spiking or directly from Gvgcc"`,
"VgccCaInt": `auto-scale:"+" desc:"time-integrated VGCC calcium flux -- this is actually what drives learning"`,
"SKCaIn": `desc:"intracellular calcium store level, available to be released with spiking as SKCaR, which can bind to SKCa receptors and drive K current. replenishment is a function of spiking activity being below a threshold"`,
"SKCaR": `desc:"released amount of intracellular calcium, from SKCaIn, as a function of spiking events. this can bind to SKCa channels and drive K currents."`,
"SKCaM": `desc:"Calcium-gated potassium channel gating factor, driven by SKCaR via a Hill equation as in chans.SKPCaParams."`,
"Gsk": `desc:"Calcium-gated potassium channel conductance as a function of Gbar * SKCaM."`,
"Burst": `desc:"5IB bursting activation value, computed by thresholding regular CaSpkP value in Super superficial layers"`,
"BurstPrv": `desc:"previous Burst bursting activation from prior time step -- used for context-based learning"`,
"CtxtGe": `desc:"context (temporally delayed) excitatory conductance, driven by deep bursting at end of the plus phase, for CT layers."`,
"CtxtGeRawa": `desc:"raw update of context (temporally delayed) excitatory conductance, driven by deep bursting at end of the plus phase, for CT layers."`,
"CtxtGeOrig": `desc:"original CtxtGe value prior to any decay factor -- updates at end of plus phase."`,
"NrnFlags": `view:"-" desc:"bit flags for external input and other neuron status state"`,
"ActAvg": `desc:"average activation (of minus phase activation state) over long time intervals (time constant = Dt.LongAvgTau) -- useful for finding hog units and seeing overall distribution of activation"`,
"AvgPct": `range:"2" desc:"ActAvg as a proportion of overall layer activation -- this is used for synaptic scaling to match TrgAvg activation -- updated at SlowInterval intervals"`,
"TrgAvg": `range:"2" desc:"neuron's target average activation as a proportion of overall layer activation, assigned during weight initialization, driving synaptic scaling relative to AvgPct"`,
"DTrgAvg": `auto-scale:"+" desc:"change in neuron's target average activation as a result of unit-wise error gradient -- acts like a bias weight. MPI needs to share these across processors."`,
"AvgDif": `desc:"AvgPct - TrgAvg -- i.e., the error in overall activity level relative to set point for this neuron, which drives synaptic scaling -- updated at SlowInterval intervals"`,
"GeBase": `desc:"baseline level of Ge, added to GeRaw, for intrinsic excitability"`,
"GiBase": `desc:"baseline level of Gi, added to GiRaw, for intrinsic excitability"`,
}
NeuronVarProps has all of the display properties for neuron variables, including desc tooltips
var PrjnProps = ki.Props{ "EnumType:Typ": KiT_PrjnTypes, }
var SynapseVarProps = map[string]string{
"Wt ": `desc:"effective synaptic weight value, determining how much conductance one spike drives on the receiving neuron, representing the actual number of effective AMPA receptors in the synapse. Wt = SWt * WtSig(LWt), where WtSig produces values between 0-2 based on LWt, centered on 1."`,
"LWt": `desc:"rapidly learning, linear weight value -- learns according to the lrate specified in the connection spec. Biologically, this represents the internal biochemical processes that drive the trafficking of AMPA receptors in the synaptic density. Initially all LWt are .5, which gives 1 from WtSig function."`,
"SWt": `desc:"slowly adapting structural weight value, which acts as a multiplicative scaling factor on synaptic efficacy: biologically represents the physical size and efficacy of the dendritic spine. SWt values adapt in an outer loop along with synaptic scaling, with constraints to prevent runaway positive feedback loops and maintain variance and further capacity to learn. Initial variance is all in SWt, with LWt set to .5, and scaling absorbs some of LWt into SWt."`,
"DWt": `auto-scale:"+" desc:"delta (change in) synaptic weight, from learning -- updates LWt which then updates Wt."`,
"DSWt": `auto-scale:"+" desc:"change in SWt slow synaptic weight -- accumulates DWt"`,
"CaM": `auto-scale:"+" desc:"first stage running average (mean) Ca calcium level (like CaM = calmodulin), feeds into CaP"`,
"CaP": `auto-scale:"+"desc:"shorter timescale integrated CaM value, representing the plus, LTP direction of weight change and capturing the function of CaMKII in the Kinase learning rule"`,
"CaD": `auto-scale:"+" desc:"longer timescale integrated CaP value, representing the minus, LTD direction of weight change and capturing the function of DAPK1 in the Kinase learning rule"`,
"Tr": `auto-scale:"+" desc:"trace of synaptic activity over time -- used for credit assignment in learning. In MatrixPrjn this is a tag that is then updated later when US occurs."`,
"DTr": `auto-scale:"+" desc:"delta (change in) Tr trace of synaptic activity over time"`,
"DiDWt": `auto-scale:"+" desc:"delta weight for each data parallel index (Di) -- this is directly computed from the Ca values (in cortical version) and then aggregated into the overall DWt (which may be further integrated across MPI nodes), which then drives changes in Wt values"`,
}
SynapseVarProps has all of the display properties for synapse variables, including desc tooltips
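The Wt / LWt / SWt relations described above (Wt = SWt * WtSig(LWt)) can be made concrete with the SWtParams methods listed under Types below. This is a minimal sketch, not part of the package itself; it assumes the standard import path github.com/emer/axon/axon and that WtVal implements the relation as described:

func effectiveWtSketch() {
	sp := &axon.SWtParams{}
	sp.Defaults()
	swt := sp.ClipSWt(0.5)   // structural weight, clipped to its valid range
	lwt := float32(0.5)      // LWt of .5 gives a WtSig contrast factor of 1
	wt := sp.WtVal(swt, lwt) // effective weight, per Wt = SWt * WtSig(LWt) above
	_ = sp.LinFmSigWt(wt)    // map a contrast-enhanced weight back to linear space
}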
var TheGPU *vgpu.GPU
TheGPU is the gpu device, shared across all networks
Functions ¶
func AddGlbDrvV ¶ added in v1.8.0
func AddGlbDrvV(ctx *Context, di uint32, drIdx uint32, gvar GlobalVars, val float32)
AddGlbDrvV is the CPU version of the global Drive / USpos variable adder
func AddGlbV ¶ added in v1.8.0
func AddGlbV(ctx *Context, di uint32, gvar GlobalVars, val float32)
AddGlbV is the CPU version of the global variable adder
func AddNrnAvgV ¶ added in v1.8.0
func AddNrnAvgV(ctx *Context, ni uint32, nvar NeuronAvgVars, val float32)
AddNrnAvgV is the CPU version of the neuron variable adder
func AddNrnV ¶ added in v1.8.0
func AddNrnV(ctx *Context, ni, di uint32, nvar NeuronVars, val float32)
AddNrnV is the CPU version of the neuron variable adder
func AddSynCaV ¶ added in v1.8.0
func AddSynCaV(ctx *Context, syni, di uint32, svar SynapseCaVars, val float32)
AddSynCaV is the CPU version of the synapse variable adder
func AddSynV ¶ added in v1.8.0
func AddSynV(ctx *Context, syni uint32, svar SynapseVars, val float32)
AddSynV is the CPU version of the synapse variable adder
func DecaySynCa ¶ added in v1.3.21
DecaySynCa decays synaptic calcium by given factor (between trials). Not used by default.
func DriveVarToZero ¶ added in v1.8.0
func DriveVarToZero(ctx *Context, di uint32, gvar GlobalVars)
DriveVarToZero sets all values of given drive-sized variable to 0
func DrivesAddTo ¶ added in v1.8.0
DrivesAddTo increments the drive by the given amount, subject to 0-1 range clamping. Returns the new value.
func DrivesEffectiveDrive ¶ added in v1.8.0
DrivesEffectiveDrive returns the Max of Drives at given index and DriveMin. note that index 0 is the novelty / curiosity drive.
func DrivesExpStep ¶ added in v1.8.0
DrivesExpStep updates drive with an exponential step with given dt value toward given baseline value.
func DrivesExpStepAll ¶ added in v1.8.0
DrivesExpStepAll updates given drives with an exponential step using dt values toward baseline values.
func DrivesSoftAdd ¶ added in v1.8.0
DrivesSoftAdd increments drive by given amount, using soft-bounding to 0-1 extremes. if delta is positive, multiply by 1-val, else val. Returns new val.
func DrivesToBaseline ¶ added in v1.8.0
DrivesToBaseline sets all drives to their baseline levels
func DrivesToZero ¶ added in v1.8.0
DrivesToZero sets all drives to 0
func EffortAddEffort ¶ added in v1.8.0
EffortAddEffort adds an increment of effort and updates the Disc discount factor
func EffortDiscFmEffort ¶ added in v1.8.0
EffortDiscFmEffort computes Disc from Raw effort
func EffortGiveUp ¶ added in v1.8.0
EffortGiveUp returns true if maximum effort has been exceeded
func EffortReset ¶ added in v1.8.0
EffortReset resets the raw effort back to zero -- at start of new gating event
func GetRandomNumber ¶ added in v1.7.7
func GetRandomNumber(index uint32, counter slrand.Counter, funIdx RandFunIdx) float32
GetRandomNumber returns a random number that depends on the index, counter and function index. We increment the counter after each cycle, so that we get new random numbers. This whole scheme exists to ensure equal results under different multithreading settings.
func GlbDrvV ¶ added in v1.8.0
func GlbDrvV(ctx *Context, di uint32, drIdx uint32, gvar GlobalVars) float32
GlbDrvV is the CPU version of the global Drive / USpos variable accessor
func GlbV ¶ added in v1.8.0
func GlbV(ctx *Context, di uint32, gvar GlobalVars) float32
GlbV is the CPU version of the global variable accessor
func GlbVTA ¶ added in v1.8.0
func GlbVTA(ctx *Context, di uint32, vtaType GlobalVTAType, gvar GlobalVars) float32
GlbVTA is the CPU version of the global VTA variable accessor
func HashEncodeSlice ¶ added in v1.7.14
func IsExtLayerType ¶ added in v1.7.9
func IsExtLayerType(lt LayerTypes) bool
IsExtLayerType returns true if the layer type deals with external input: Input, Target, Compare
func JsonToParams ¶
JsonToParams reformats json output to suitable params display output
func LHbFmPVVS ¶ added in v1.8.0
LHbFmPVVS computes the overall LHbDip and LHbBurst values from PV (primary value) and VSPatch inputs.
func LHbShouldGiveUp ¶ added in v1.8.0
LHbShouldGiveUp increments DipSum and checks if should give up if above threshold
func LayerActsLog ¶ added in v1.7.11
LayerActsLog records layer activity for tuning the network inhibition, nominal activity, relative scaling, etc. if gui is non-nil, plot is updated.
func LayerActsLogAvg ¶ added in v1.7.11
LayerActsLogAvg computes average of LayerActsRec record of layer activity for tuning the network inhibition, nominal activity, relative scaling, etc. if gui is non-nil, plot is updated. if recReset is true, reset the recorded data after computing average.
func LayerActsLogConfig ¶ added in v1.7.11
LayerActsLogConfig configures Tables to record layer activity for tuning the network inhibition, nominal activity, relative scaling, etc. in elog.MiscTables: LayerActs is current, LayerActsRec is record over trials, LayerActsAvg is average of recorded trials.
func LayerActsLogConfigGUI ¶ added in v1.7.11
LayerActsLogConfigGUI configures GUI for LayerActsLog Plot and LayerActs Avg Plot
func LayerActsLogConfigMetaData ¶ added in v1.7.11
LayerActsLogConfigMetaData configures meta data for LayerActs table
func LayerActsLogRecReset ¶ added in v1.7.11
LayerActsLogRecReset resets the recorded LayerActsRec data used for computing averages
func LogAddCaLrnDiagnosticItems ¶ added in v1.5.3
func LogAddCaLrnDiagnosticItems(lg *elog.Logs, mode etime.Modes, net *Network, times ...etime.Times)
LogAddCaLrnDiagnosticItems adds standard Axon diagnostic statistics to given logs, across two given time levels, in higher to lower order, e.g., Epoch, Trial. These were useful for the development of the Ca-based "trace" learning rule that directly uses NMDA and VGCC-like spiking Ca.
func LogAddDiagnosticItems ¶ added in v1.3.35
func LogAddDiagnosticItems(lg *elog.Logs, layerNames []string, mode etime.Modes, times ...etime.Times)
LogAddDiagnosticItems adds standard Axon diagnostic statistics to given logs, across two given time levels, in higher to lower order, e.g., Epoch, Trial. These are useful for tuning and diagnosing the behavior of the network.
func LogAddExtraDiagnosticItems ¶ added in v1.5.8
func LogAddExtraDiagnosticItems(lg *elog.Logs, mode etime.Modes, net *Network, times ...etime.Times)
LogAddExtraDiagnosticItems adds extra Axon diagnostic statistics to given logs, across two given time levels, in higher to lower order, e.g., Epoch, Trial. These are useful for tuning and diagnosing the behavior of the network.
func LogAddLayerGeActAvgItems ¶ added in v1.3.35
LogAddLayerGeActAvgItems adds Ge and Act average items for Hidden and Target layers for given mode and time (e.g., Test, Cycle). These are useful for monitoring layer activity during testing.
func LogAddPCAItems ¶ added in v1.3.35
LogAddPCAItems adds PCA statistics to the log for Hidden and Target layers across 3 given time levels, in higher to lower order, e.g., Run, Epoch, Trial. These are useful for diagnosing the behavior of the network.
func LogAddPulvCorSimItems ¶ added in v1.7.0
LogAddPulvCorSimItems adds CorSim stats for Pulv / Pulvinar layers aggregated across three time scales, ordered from higher to lower, e.g., Run, Epoch, Trial.
func LogInputLayer ¶ added in v1.7.7
func LogTestErrors ¶ added in v1.3.35
LogTestErrors records all errors made across TestTrials, at Test Epoch scope
func LooperResetLogBelow ¶ added in v1.3.35
LooperResetLogBelow adds a function in OnStart to all stacks and loops to reset the log at the level below each loop -- this is good default behavior. Exceptions can be passed to exclude specific levels -- e.g., if except is Epoch then Epoch does not reset the log below it
func LooperSimCycleAndLearn ¶ added in v1.3.35
func LooperSimCycleAndLearn(man *looper.Manager, net *Network, ctx *Context, viewupdt *netview.ViewUpdt, trial ...etime.Times)
LooperSimCycleAndLearn adds Cycle and DWt, WtFmDWt functions to looper for given network, ctx, and netview update manager. Can pass a trial-level time scale to use instead of the default etime.Trial.
func LooperStdPhases ¶ added in v1.3.35
func LooperStdPhases(man *looper.Manager, ctx *Context, net *Network, plusStart, plusEnd int, trial ...etime.Times)
LooperStdPhases adds the minus and plus phases of the theta cycle, along with embedded beta phases which just record St1 and St2 activity in this case. plusStart is the start of the plus phase, typically 150, and plusEnd is the end of the plus phase, typically 199. Also resets the state at the start of the trial. Can pass a trial-level time scale to use instead of the default etime.Trial.
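A hedged sketch of wiring these looper helpers together in a Sim, using the typical 150 / 199 plus-phase window noted above. The helper name configLoops is illustrative only, and the looper / netview / axon import paths are assumed from the signatures shown here:

func configLoops(man *looper.Manager, ctx *axon.Context, net *axon.Network, viewUpdt *netview.ViewUpdt) {
	axon.LooperStdPhases(man, ctx, net, 150, 199)        // minus phase 0-149, plus phase 150-199, with beta snapshots
	axon.LooperSimCycleAndLearn(man, net, ctx, viewUpdt) // Cycle, DWt, WtFmDWt at the default Trial level
}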
func LooperUpdtNetView ¶ added in v1.3.35
LooperUpdtNetView adds netview update calls at each time level
func LooperUpdtPlots ¶ added in v1.3.35
LooperUpdtPlots adds plot update calls at each time level
func MulNrnAvgV ¶ added in v1.8.0
func MulNrnAvgV(ctx *Context, ni uint32, nvar NeuronAvgVars, val float32)
MulNrnAvgV is the CPU version of the neuron variable multiplier
func MulNrnV ¶ added in v1.8.0
func MulNrnV(ctx *Context, ni, di uint32, nvar NeuronVars, val float32)
MulNrnV is the CPU version of the neuron variable multiplier
func MulSynCaV ¶ added in v1.8.0
func MulSynCaV(ctx *Context, syni, di uint32, svar SynapseCaVars, val float32)
MulSynCaV is the CPU version of the synapse variable multiplier
func MulSynV ¶ added in v1.8.0
func MulSynV(ctx *Context, syni uint32, svar SynapseVars, val float32)
MulSynV is the CPU version of the synapse variable multiplier
func NeuroModInit ¶ added in v1.8.0
NeuroModInit does neuromod initialization
func NeuroModSetRew ¶ added in v1.8.0
NeuroModSetRew is a convenience function for setting the external reward
func NeuronVarIdxByName ¶
NeuronVarIdxByName returns the index of the variable in the Neuron, or error
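A minimal sketch of a name-based lookup, assuming the (int, error) return implied above, together with the exported NeuronVarNames list; the fmt import and helper name are illustrative only:

func printVarName() {
	idx, err := axon.NeuronVarIdxByName("CaSpkP")
	if err == nil {
		fmt.Println(axon.NeuronVarNames[idx]) // prints "CaSpkP"
	}
}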
func NrnAvgV ¶ added in v1.8.0
func NrnAvgV(ctx *Context, ni uint32, nvar NeuronAvgVars) float32
NrnAvgV is the CPU version of the neuron variable accessor
func NrnClearFlag ¶ added in v1.8.0
func NrnClearFlag(ctx *Context, ni, di uint32, flag NeuronFlags)
func NrnHasFlag ¶ added in v1.8.0
func NrnHasFlag(ctx *Context, ni, di uint32, flag NeuronFlags) bool
func NrnI ¶ added in v1.8.0
func NrnI(ctx *Context, ni uint32, idx NeuronIdxs) uint32
NrnI is the CPU version of the neuron idx accessor
func NrnIsOff ¶ added in v1.8.0
NrnIsOff returns true if the neuron has been turned off (lesioned). Only checks the first data item -- all should be consistent.
func NrnSetFlag ¶ added in v1.8.0
func NrnSetFlag(ctx *Context, ni, di uint32, flag NeuronFlags)
func NrnV ¶ added in v1.8.0
func NrnV(ctx *Context, ni, di uint32, nvar NeuronVars) float32
NrnV is the CPU version of the neuron variable accessor
func PCAStats ¶ added in v1.3.35
PCAStats computes PCA statistics on recorded hidden activation patterns from Analyze, Trial log data
func PVLVDA ¶ added in v1.8.0
PVLVDA computes the updated dopamine for the PVLV algorithm from all the current state, including pptg and vsPatchPos (from RewPred) via Context. Call after setting USs, VSPatchVals, Effort, Drives, etc. The resulting DA is stored in VTA.Vals.DA and is returned.
func PVLVDAImpl ¶ added in v1.8.0
PVLVDAImpl computes the updated dopamine from all the current state, including ACh from LDT via Context. Call after setting USs, Effort, Drives, VSPatch vals etc. Resulting DA is in VTA.Vals.DA, and is returned (to be set to Context.NeuroMod.DA)
func PVLVDriveUpdt ¶ added in v1.8.0
PVLVDriveUpdt updates the drives based on the current USs, subtracting USDec * US from current Drive, and calling ExpStep with the Dt and Base params.
func PVLVHasPosUS ¶ added in v1.8.0
PVLVHasPosUS returns true if there is at least one non-zero positive US
func PVLVInitDrives ¶ added in v1.8.0
PVLVInitDrives initializes all the Drives to zero
func PVLVInitUS ¶ added in v1.8.0
PVLVInitUS initializes all the USs to zero
func PVLVNegPV ¶ added in v1.8.0
PVLVNegPV returns the reward for current negative US state -- just a sum of USneg
func PVLVNewState ¶ added in v1.8.0
PVLVNewState is called at start of new state (trial) of processing. hadRew indicates if there was a reward state the previous trial. It calls LHGiveUpFmSum to trigger a "give up" state on this trial if previous expectation of reward exceeds critical sum.
func PVLVPosPV ¶ added in v1.8.0
PVLVPosPV returns the reward for current positive US state relative to current drives
func PVLVPosPVFmDriveEffort ¶ added in v1.8.0
PVLVPosPVFmDriveEffort returns the net primary value ("reward") based on given US value and drive for that value (typically in 0-1 range), and total effort, from which the effort discount factor is computed and applied: usValue * drive * Effort.DiscFun(effort)
func PVLVSetDrive ¶ added in v1.8.0
PVLVSetDrive sets given Drive to given value
func PVLVUSStimVal ¶ added in v1.8.0
func PVLVUSStimVal(ctx *Context, di uint32, usIdx uint32, valence ValenceTypes) float32
PVLVUSStimVal returns stimulus value for US at given index and valence. If US > 0.01, a full 1 US activation is returned.
func PVLVUrgencyUpdt ¶ added in v1.8.0
PVLVUrgencyUpdt updates the urgency based on the given effort increment, resetting instead if HasRewPrev and HasPosUSPrev are true, indicating receipt of an actual positive US. Call this at the start of the trial, in the ApplyPVLV method.
func PVLVVSPatchMax ¶ added in v1.8.0
PVLVVSPatchMax returns the max VSPatch value across drives
func ParallelChunkRun ¶ added in v1.7.24
ParallelChunkRun maps the given function across the [0, total) range of items, using nThreads goroutines, in smaller-sized chunks for better load balancing. This may be better for a larger number of threads, but is not better for small N.
func ParallelRun ¶ added in v1.7.24
ParallelRun maps the given function across the [0, total) range of items, using nThreads goroutines.
func SaveWeights ¶ added in v1.3.29
SaveWeights saves network weights to a file, using WeightsFileName information to identify the weights. Only saves for rank 0 if running MPI. Returns the name of the file saved to, or empty if not saved.
func SaveWeightsIfArgSet ¶ added in v1.3.35
SaveWeightsIfArgSet saves network weights if the "wts" arg has been set to true. Uses WeightsFileName information to identify the weights. Only saves for rank 0 if running MPI. Returns the name of the file saved to, or empty if not saved.
func SaveWeightsIfConfigSet ¶ added in v1.8.4
SaveWeightsIfConfigSet saves network weights if the given config bool value has been set to true. Uses WeightsFileName information to identify the weights. Only saves for rank 0 if running MPI. Returns the name of the file saved to, or empty if not saved.
func SetAvgMaxFloatFromIntErr ¶ added in v1.8.0
func SetAvgMaxFloatFromIntErr(fun func())
func SetGlbDrvV ¶ added in v1.8.0
func SetGlbDrvV(ctx *Context, di uint32, drIdx uint32, gvar GlobalVars, val float32)
SetGlbDrvV is the CPU version of the global Drive / USpos variable setter
func SetGlbUSneg ¶ added in v1.8.0
SetGlbUSneg is the CPU version of the global USneg variable setter
func SetGlbV ¶ added in v1.8.0
func SetGlbV(ctx *Context, di uint32, gvar GlobalVars, val float32)
SetGlbV is the CPU version of the global variable setter
func SetGlbVTA ¶ added in v1.8.0
func SetGlbVTA(ctx *Context, di uint32, vtaType GlobalVTAType, gvar GlobalVars, val float32)
SetGlbVTA is the CPU version of the global VTA variable setter
func SetNeuronExtPosNeg ¶ added in v1.7.0
SetNeuronExtPosNeg sets neuron Ext value based on neuron index with positive values going in first unit, negative values rectified to positive in 2nd unit
func SetNrnAvgV ¶ added in v1.8.0
func SetNrnAvgV(ctx *Context, ni uint32, nvar NeuronAvgVars, val float32)
SetNrnAvgV is the CPU version of the neuron variable setter
func SetNrnI ¶ added in v1.8.0
func SetNrnI(ctx *Context, ni uint32, idx NeuronIdxs, val uint32)
SetNrnI is the CPU version of the neuron idx setter
func SetNrnV ¶ added in v1.8.0
func SetNrnV(ctx *Context, ni, di uint32, nvar NeuronVars, val float32)
SetNrnV is the CPU version of the neuron variable setter
func SetSynCaV ¶ added in v1.8.0
func SetSynCaV(ctx *Context, syni, di uint32, svar SynapseCaVars, val float32)
SetSynCaV is the CPU version of the synapse variable setter
func SetSynI ¶ added in v1.8.0
func SetSynI(ctx *Context, syni uint32, idx SynapseIdxs, val uint32)
SetSynI is the CPU version of the synapse idx setter
func SetSynV ¶ added in v1.8.0
func SetSynV(ctx *Context, syni uint32, svar SynapseVars, val float32)
SetSynV is the CPU version of the synapse variable setter
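The Nrn*, Syn*, and Glb* get / set / add / mul functions above all follow the same CPU-side accessor pattern. A minimal sketch using the NrnV / SetNrnV / MulNrnV signatures shown above (the helper name scaleNrnVar and its use are illustrative only, and the github.com/emer/axon/axon import is assumed):

func scaleNrnVar(ctx *axon.Context, ni, di uint32, nvar axon.NeuronVars, factor float32) {
	v := axon.NrnV(ctx, ni, di, nvar)         // read the current value
	axon.SetNrnV(ctx, ni, di, nvar, v*factor) // write the scaled value back
	// equivalently: axon.MulNrnV(ctx, ni, di, nvar, factor)
}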
func SigFun61 ¶
SigFun61 is the sigmoid function for value w in 0-1 range, with default gain = 6, offset = 1 params
func SigInvFun61 ¶
SigInvFun61 is the inverse of the sigmoid function, with default gain = 6, offset = 1 params
func SynCaV ¶ added in v1.8.0
func SynCaV(ctx *Context, syni, di uint32, svar SynapseCaVars) float32
SynCaV is the CPU version of the synapse variable accessor
func SynI ¶ added in v1.8.0
func SynI(ctx *Context, syni uint32, idx SynapseIdxs) uint32
SynI is the CPU version of the synapse idx accessor
func SynV ¶ added in v1.8.0
func SynV(ctx *Context, syni uint32, svar SynapseVars) float32
SynV is the CPU version of the synapse variable accessor
func SynapseVarByName ¶
SynapseVarByName returns the index of the variable in the Synapse, or error
func ToggleLayersOff ¶ added in v1.3.29
ToggleLayersOff can be used to disable layers in a Network, for example if you are doing an ablation study.
func USnegToZero ¶ added in v1.8.0
USnegToZero sets all values of USneg to zero
func UrgeFmUrgency ¶ added in v1.8.0
UrgeFmUrgency computes Urge from Raw
func UrgencyAddEffort ¶ added in v1.8.0
UrgencyAddEffort adds an effort increment of urgency and updates the Urge factor
func UrgencyReset ¶ added in v1.8.0
UrgencyReset resets the raw urgency back to zero -- at start of new gating event
func VTADAFmRaw ¶ added in v1.8.0
VTADAFmRaw computes the intermediate Vals and final DA value from Raw values that have been set prior to calling. ACh value from LDT is passed as a parameter.
func VTAZeroVals ¶ added in v1.8.0
func VTAZeroVals(ctx *Context, di uint32, vtaType GlobalVTAType)
VTAZeroVals sets all VTA values to zero for given type
func WeightsFileName ¶ added in v1.3.35
WeightsFileName returns the default current weights file name, using train run and epoch counters from looper and the RunName string identifying tag, parameters, and starting run.
Types ¶
type ActAvgParams ¶
type ActAvgParams struct {
	Nominal   float32     `` /* 745-byte string literal not displayed */
	AdaptGi   slbool.Bool `` /* 349-byte string literal not displayed */
	Offset    float32     `` /* 315-byte string literal not displayed */
	HiTol     float32     `` /* 266-byte string literal not displayed */
	LoTol     float32     `` /* 266-byte string literal not displayed */
	AdaptRate float32     `` /* 263-byte string literal not displayed */
	// contains filtered or unexported fields
}
ActAvgParams represents the nominal average activity levels in the layer and parameters for adapting the computed Gi inhibition levels to maintain average activity within a target range.
func (*ActAvgParams) Adapt ¶ added in v1.2.37
func (aa *ActAvgParams) Adapt(gimult *float32, act float32) bool
Adapt adapts the given gi multiplier factor as function of target and actual average activation, given current params.
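A hedged sketch of the adaptation loop this implies, using the Adapt signature shown above; actualAvgAct stands in for the layer's measured average activity and is not from the package docs:

func adaptGiSketch() {
	aa := &axon.ActAvgParams{}
	aa.Defaults()
	giMult := float32(1)
	actualAvgAct := float32(0.05) // stand-in for the layer's measured average activity
	if aa.Adapt(&giMult, actualAvgAct) {
		// activity was outside the tolerance band around Nominal, so giMult
		// was nudged by AdaptRate -- apply it as the layer's Gi multiplier
	}
}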
func (*ActAvgParams) AvgFmAct ¶
func (aa *ActAvgParams) AvgFmAct(avg *float32, act float32, dt float32)
AvgFmAct updates the running-average activation given average activity level in layer
func (*ActAvgParams) Defaults ¶
func (aa *ActAvgParams) Defaults()
func (*ActAvgParams) Update ¶
func (aa *ActAvgParams) Update()
type ActAvgVals ¶ added in v1.2.32
type ActAvgVals struct {
	ActMAvg   float32 `` /* 141-byte string literal not displayed */
	ActPAvg   float32 `inactive:"+" desc:"running-average plus-phase activity integrated at Dt.LongAvgTau"`
	AvgMaxGeM float32 `inactive:"+" desc:"running-average max of minus-phase Ge value across the layer integrated at Dt.LongAvgTau"`
	AvgMaxGiM float32 `inactive:"+" desc:"running-average max of minus-phase Gi value across the layer integrated at Dt.LongAvgTau"`
	GiMult    float32 `inactive:"+" desc:"multiplier on inhibition -- adapted to maintain target activity level"`
	AdaptThr  float32 `inactive:"+" desc:"adaptive threshold -- only used for specialized layers, e.g., VSPatch"`
	// contains filtered or unexported fields
}
ActAvgVals are long-running-average activation levels stored in the LayerVals, for monitoring and adapting inhibition and possibly scaling parameters. All of these integrate over NData within a network, so are the same across them.
func (*ActAvgVals) Init ¶ added in v1.7.9
func (lv *ActAvgVals) Init()
type ActInitParams ¶
type ActInitParams struct {
	Vm     float32 `def:"0.3" desc:"initial membrane potential -- see Erev.L for the resting potential (typically .3)"`
	Act    float32 `def:"0" desc:"initial activation value -- typically 0"`
	GeBase float32 `` /* 268-byte string literal not displayed */
	GiBase float32 `` /* 235-byte string literal not displayed */
	GeVar  float32 `` /* 167-byte string literal not displayed */
	GiVar  float32 `` /* 167-byte string literal not displayed */
	// contains filtered or unexported fields
}
ActInitParams are initial values for key network state variables. Initialized in InitActs called by InitWts, and provides target values for DecayState.
func (*ActInitParams) Defaults ¶
func (ai *ActInitParams) Defaults()
func (*ActInitParams) GetGeBase ¶ added in v1.7.7
func (ai *ActInitParams) GetGeBase(rnd erand.Rand) float32
GetGeBase returns the baseline Ge value: GeBase + rand(GeVar), > 0
func (*ActInitParams) GetGiBase ¶ added in v1.7.7
func (ai *ActInitParams) GetGiBase(rnd erand.Rand) float32
GetGiBase returns the baseline Gi value: GiBase + rand(GiVar), > 0
func (*ActInitParams) Update ¶
func (ai *ActInitParams) Update()
type ActParams ¶
type ActParams struct {
	Spikes    SpikeParams       `view:"inline" desc:"Spiking function parameters"`
	Dend      DendParams        `view:"inline" desc:"dendrite-specific parameters"`
	Init      ActInitParams     `` /* 155-byte string literal not displayed */
	Decay     DecayParams       `` /* 233-byte string literal not displayed */
	Dt        DtParams          `view:"inline" desc:"time and rate constants for temporal derivatives / updating of activation state"`
	Gbar      chans.Chans       `view:"inline" desc:"[Defaults: 1, .2, 1, 1] maximal conductances levels for channels"`
	Erev      chans.Chans       `view:"inline" desc:"[Defaults: 1, .3, .25, .1] reversal potentials for each channel"`
	Clamp     ClampParams       `view:"inline" desc:"how external inputs drive neural activations"`
	Noise     SpikeNoiseParams  `view:"inline" desc:"how, where, when, and how much noise to add"`
	VmRange   minmax.F32        `` /* 165-byte string literal not displayed */
	Mahp      chans.MahpParams  `` /* 173-byte string literal not displayed */
	Sahp      chans.SahpParams  `` /* 182-byte string literal not displayed */
	KNa       chans.KNaMedSlow  `` /* 220-byte string literal not displayed */
	NMDA      chans.NMDAParams  `` /* 252-byte string literal not displayed */
	MaintNMDA chans.NMDAParams  `` /* 252-byte string literal not displayed */
	GabaB     chans.GABABParams `view:"inline" desc:"GABA-B / GIRK channel parameters"`
	VGCC      chans.VGCCParams  `` /* 159-byte string literal not displayed */
	AK        chans.AKsParams   `` /* 135-byte string literal not displayed */
	SKCa      chans.SKCaParams  `` /* 140-byte string literal not displayed */
	AttnMod   AttnParams        `view:"inline" desc:"Attentional modulation parameters: how Attn modulates Ge"`
	PopCode   PopCodeParams     `` /* 165-byte string literal not displayed */
}
axon.ActParams contains all the activation computation params and functions for basic Axon, at the neuron level. This is included in axon.Layer to drive the computation.
func (*ActParams) AddGeNoise ¶ added in v1.8.0
AddGeNoise updates nrn.GeNoise if active
func (*ActParams) AddGiNoise ¶ added in v1.8.0
AddGiNoise updates nrn.GiNoise if active
func (*ActParams) DecayAHP ¶ added in v1.7.18
DecayAHP decays after-hyperpolarization variables by given factor (typically Decay.AHP)
func (*ActParams) DecayLearnCa ¶ added in v1.7.16
DecayLearnCa decays neuron-level calcium learning and spiking variables by given factor. Note: this is generally NOT useful, causing variability in these learning factors as a function of the decay parameter that then has impacts on learning rates etc. see Act.Decay.LearnCa param controlling this
func (*ActParams) DecayState ¶
DecayState decays the activation state toward initial values in proportion to given decay parameter. Special case values such as Glong and KNa are also decayed with their separately parameterized values. Called with ac.Decay.Act by Layer during NewState
func (*ActParams) GeFmSyn ¶ added in v1.5.12
GeFmSyn integrates Ge excitatory conductance from GeSyn. geExt is extra conductance to add to the final Ge value
func (*ActParams) GiFmSyn ¶ added in v1.5.12
GiFmSyn integrates GiSyn inhibitory synaptic conductance from GiRaw value (can add other terms to giRaw prior to calling this)
func (*ActParams) GkFmVm ¶ added in v1.6.0
GkFmVm updates all the Gk-based conductances: Mahp, KNa, Gak
func (*ActParams) GvgccFmVm ¶ added in v1.3.24
GvgccFmVm updates all the VGCC voltage-gated calcium channel variables from VmDend
func (*ActParams) InitActs ¶
InitActs initializes activation state in neuron -- called during InitWts but otherwise not automatically called (DecayState is used instead)
func (*ActParams) InitLongActs ¶ added in v1.2.66
InitLongActs initializes longer time-scale activation states in neuron (SpkPrv, SpkSt*, ActM, ActP, GeInt, GiInt) Called from InitActs, which is called from InitWts, but otherwise not automatically called (DecayState is used instead)
func (*ActParams) KNaNewState ¶ added in v1.7.18
KNaNewState does TrialSlow version of KNa during NewState if option is set
func (*ActParams) MaintNMDAFmRaw ¶ added in v1.7.19
MaintNMDAFmRaw updates all the Maint NMDA variables from GModRaw and current Vm, Spiking
func (*ActParams) NMDAFmRaw ¶ added in v1.3.1
NMDAFmRaw updates all the NMDA variables from total Ge (GeRaw + Ext) and current Vm, Spiking
func (*ActParams) SpikeFmVm ¶ added in v1.6.12
SpikeFmVm computes Spike from Vm and ISI-based activation
func (*ActParams) SpikeFmVmVars ¶ added in v1.8.0
func (ac *ActParams) SpikeFmVmVars(nrnISI, nrnISIAvg, nrnSpike, nrnSpiked, nrnAct *float32, nrnVm float32)
SpikeFmVmVars computes Spike from Vm and ISI-based activation, using pointers to variables
func (*ActParams) Update ¶
func (ac *ActParams) Update()
Update must be called after any changes to parameters
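A hedged sketch of the change-then-Update pattern; Defaults is assumed to exist on ActParams as on the other *Params types, and the tweaked field (Init.Vm) is just one example:

func actParamsSketch() {
	ac := axon.ActParams{}
	ac.Defaults()    // assumed default-setter, as on the other *Params types
	ac.Init.Vm = 0.3 // tweak a parameter (initial membrane potential)
	ac.Update()      // must be called after any parameter change
}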
type AttnParams ¶ added in v1.2.85
type AttnParams struct {
	On    slbool.Bool `desc:"is attentional modulation active?"`
	Min   float32     `viewif:"On" desc:"minimum act multiplier if attention is 0"`
	RTThr float32     `` /* 169-byte string literal not displayed */
	// contains filtered or unexported fields
}
AttnParams determine how the Attn modulates Ge
func (*AttnParams) Defaults ¶ added in v1.2.85
func (at *AttnParams) Defaults()
func (*AttnParams) ModVal ¶ added in v1.2.85
func (at *AttnParams) ModVal(val float32, attn float32) float32
ModVal returns the attn-modulated value -- attn must be between 1-0
func (*AttnParams) Update ¶ added in v1.2.85
func (at *AttnParams) Update()
type AvgMaxI32 ¶ added in v1.7.9
type AvgMaxI32 struct {
	Avg    float32 `inactive:"+" desc:"Average, from Calc when last computed as Sum / N"`
	Max    float32 `inactive:"+" desc:"Maximum value, copied from CurMax in Calc"`
	Sum    int32   `inactive:"+" desc:"sum for computing average -- incremented in UpdateVal, reset in Calc"`
	CurMax int32   `inactive:"+" desc:"current maximum value, updated via UpdateVal, reset in Calc"`
	N      int32   `` /* 181-byte string literal not displayed */
	// contains filtered or unexported fields
}
AvgMaxI32 holds average and max statistics for float32, and values used for computing them incrementally, using a fixed precision int32 based float representation that can be used with GPU-based atomic add and max functions. This ONLY works for positive values with averages around 1, and the N must be set IN ADVANCE to the correct number of items. Once Calc() is called, the incremental values are reset via Init() so it is always ready for updating without a separate Init() pass.
func (*AvgMaxI32) Calc ¶ added in v1.7.9
Calc computes the average given the current Sum and copies over CurMax to Max. refIdx is a reference index of the thing being computed, which will be printed in case there is an overflow, for debugging (can't be a string because this code runs on the GPU).
func (*AvgMaxI32) FloatFmIntFactor ¶ added in v1.7.9
FloatFmIntFactor returns the factor used for converting int32 back to float32 -- this is 1 / FloatToIntFactor for faster multiplication instead of dividing.
func (*AvgMaxI32) FloatFromInt ¶ added in v1.7.9
FloatFromInt converts the given int32 value produced via FloatToInt back into a float32 (divides by factor)
func (*AvgMaxI32) FloatToInt ¶ added in v1.7.9
FloatToInt converts the given floating point value to a large int for max updating.
func (*AvgMaxI32) FloatToIntFactor ¶ added in v1.7.9
FloatToIntFactor returns the factor used for converting float32 to int32 for Max updating, assuming that the overall value is in the general order of 0-1 (127 is the max).
func (*AvgMaxI32) FloatToIntSum ¶ added in v1.7.9
FloatToIntSum converts the given floating point value to a large int for sum accumulating -- divides by N.
func (*AvgMaxI32) Init ¶ added in v1.7.9
func (am *AvgMaxI32) Init()
Init initializes incremental values used during updating.
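A hedged sketch of the incremental pattern described above. The per-item accumulation step is left as a comment because its exact call is not shown here; only Init, N, and Calc from the listed API are used:

func poolStatsSketch() {
	var am axon.AvgMaxI32
	am.Init()  // reset the incremental Sum / CurMax values
	am.N = 10  // must be set IN ADVANCE to the number of items
	// ... accumulate each item (UpdateVal on CPU, atomic int32 add / max on GPU) ...
	am.Calc(0) // Avg = Sum / N, Max = CurMax, then re-Init for the next pass
}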
type AvgMaxPhases ¶ added in v1.7.0
type AvgMaxPhases struct {
	Cycle AvgMaxI32 `view:"inline" desc:"updated every cycle -- this is the source of all subsequent time scales"`
	Minus AvgMaxI32 `view:"inline" desc:"at the end of the minus phase"`
	Plus  AvgMaxI32 `view:"inline" desc:"at the end of the plus phase"`
	Prev  AvgMaxI32 `view:"inline" desc:"at the end of the previous plus phase"`
}
AvgMaxPhases contains the average and maximum values over a Pool of neurons, at different time scales within a standard ThetaCycle of updating. It is much more efficient on the GPU to just grab everything in one pass at the cycle level, and then take snapshots from there. All of the cycle level values are updated at the *start* of the cycle based on values from the prior cycle -- thus are 1 cycle behind in general.
func (*AvgMaxPhases) Calc ¶ added in v1.7.9
func (am *AvgMaxPhases) Calc(refIdx int32)
Calc does Calc on Cycle, which is then ready for aggregation again
func (*AvgMaxPhases) CycleToMinus ¶ added in v1.7.0
func (am *AvgMaxPhases) CycleToMinus()
CycleToMinus grabs current Cycle values into the Minus phase values
func (*AvgMaxPhases) CycleToPlus ¶ added in v1.7.0
func (am *AvgMaxPhases) CycleToPlus()
CycleToPlus grabs current Cycle values into the Plus phase values
func (*AvgMaxPhases) Zero ¶ added in v1.7.9
func (am *AvgMaxPhases) Zero()
Zero does a full reset on everything -- for InitActs
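A hedged sketch of the snapshot flow described for AvgMaxPhases above, using only the methods listed here (Zero, Calc, CycleToMinus, CycleToPlus); how amp.Cycle is accumulated is left as a comment:

func phaseSnapshotsSketch(amp *axon.AvgMaxPhases) {
	amp.Zero() // full reset, e.g., during InitActs
	// every cycle: accumulate into amp.Cycle, then finalize the cycle stats:
	amp.Calc(0)
	// at the end of the minus phase:
	amp.CycleToMinus()
	// at the end of the plus phase:
	amp.CycleToPlus()
}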
type AxonLayer ¶
type AxonLayer interface {
	emer.Layer

	// AsAxon returns this layer as a axon.Layer -- so that the AxonLayer
	// interface does not need to include accessors to all the basic stuff
	AsAxon() *Layer

	// PostBuild performs special post-Build() configuration steps for specific algorithms,
	// using configuration data set in BuildConfig during the ConfigNet process.
	PostBuild()
}
AxonLayer defines the essential algorithmic API for Axon, at the layer level. These are the methods that the axon.Network calls on its layers at each step of processing. Other Layer types can selectively re-implement (override) these methods to modify the computation, while inheriting the basic behavior for non-overridden methods.
All of the structural API is in emer.Layer, which this interface also inherits for convenience.
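A minimal sketch of how a specialized layer type might satisfy this interface by embedding axon.Layer. The MyLayer name and the body of PostBuild are hypothetical; AsAxon is redefined to return the embedded base Layer, as the AsAxon documentation below requires.

    // MyLayer is a hypothetical specialized layer embedding axon.Layer,
    // inheriting its behavior and the emer.Layer structural API.
    type MyLayer struct {
        axon.Layer
    }

    // AsAxon returns the embedded base Layer, satisfying AxonLayer.
    func (ly *MyLayer) AsAxon() *axon.Layer {
        return &ly.Layer
    }

    // PostBuild would read any BuildConfig settings made during ConfigNet.
    func (ly *MyLayer) PostBuild() {
        // algorithm-specific post-Build configuration goes here
    }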
type AxonNetwork ¶
type AxonNetwork interface { emer.Network // AsAxon returns this network as a axon.Network -- so that the // AxonNetwork interface does not need to include accessors // to all the basic stuff AsAxon() *Network }
AxonNetwork defines the essential algorithmic API for Axon, at the network level. These are the methods that the user calls in their Sim code: * NewState * Cycle * NewPhase * DWt * WtFmDwt Because we don't want to have to force the user to use the interface cast in calling these methods, we provide Impl versions here that are the implementations which the user-facing method calls through the interface cast. Specialized algorithms should thus only change the Impl version, which is what is exposed here in this interface.
There is now a strong constraint that all Cycle level computation takes place in one pass at the Layer level, which greatly improves threading efficiency.
All of the structural API is in emer.Network, which this interface also inherits for convenience.
type AxonPrjn ¶
type AxonPrjn interface { emer.Prjn // AsAxon returns this prjn as a axon.Prjn -- so that the AxonPrjn // interface does not need to include accessors to all the basic stuff. AsAxon() *Prjn }
AxonPrjn defines the essential algorithmic API for Axon, at the projection level. These are the methods that the axon.Layer calls on its prjns at each step of processing. Other Prjn types can selectively re-implement (override) these methods to modify the computation, while inheriting the basic behavior for non-overridden methods.
All of the structural API is in emer.Prjn, which this interface also inherits for convenience.
type BLAPrjnParams ¶ added in v1.7.18
type BLAPrjnParams struct { NegDeltaLRate float32 `def:"0.01,1" desc:"use 0.01 for acquisition (don't unlearn) and 1 for extinction -- negative delta learning rate multiplier"` AChThr float32 `def:"0.1" desc:"threshold on this layer's ACh level for trace learning updates"` USTrace float32 `def:"0,0.5" desc:"proportion of US time stimulus activity to use for the trace component of "` // contains filtered or unexported fields }
BLAPrjnParams has parameters for basolateral amygdala learning. Learning is driven by the Tr trace, as a function of ACh * Send Act recorded prior to the US, and at the US, the recv unit delta (CaSpkP - SpkPrv) times normalized GeIntMax for recv unit credit assignment. The Learn.Trace.Tau time constant determines trace updating over trials when ACh is above threshold -- this determines the strength of second-order conditioning -- the default of 1 means none, but it can be increased as needed.
func (*BLAPrjnParams) Defaults ¶ added in v1.7.18
func (bp *BLAPrjnParams) Defaults()
func (*BLAPrjnParams) Update ¶ added in v1.7.18
func (bp *BLAPrjnParams) Update()
type BurstParams ¶ added in v1.7.0
type BurstParams struct { ThrRel float32 `` /* 348-byte string literal not displayed */ ThrAbs float32 `` /* 241-byte string literal not displayed */ // contains filtered or unexported fields }
BurstParams determine how the 5IB Burst activation is computed from CaSpkP integrated spiking values in Super layers -- thresholded.
func (*BurstParams) Defaults ¶ added in v1.7.0
func (bp *BurstParams) Defaults()
func (*BurstParams) ThrFmAvgMax ¶ added in v1.7.0
func (bp *BurstParams) ThrFmAvgMax(avg, mx float32) float32
ThrFmAvgMax returns threshold from average and maximum values
func (*BurstParams) Update ¶ added in v1.7.0
func (bp *BurstParams) Update()
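A hedged usage sketch: the threshold returned by ThrFmAvgMax is applied to the CaSpkP-integrated activity to produce the thresholded Burst value described above. The variable names and the way the pool avg/max are obtained here are illustrative.

    // Illustrative thresholding of a superficial-layer activity value.
    thr := bp.ThrFmAvgMax(poolAvg, poolMax) // bp is a *BurstParams; poolAvg/poolMax from pool stats
    burst := caSpkP
    if burst < thr {
        burst = 0 // below threshold: no 5IB burst signal
    }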
type CTParams ¶ added in v1.7.0
type CTParams struct { GeGain float32 `` /* 239-byte string literal not displayed */ DecayTau float32 `` /* 227-byte string literal not displayed */ DecayDt float32 `view:"-" json:"-" xml:"-" desc:"1 / tau"` // contains filtered or unexported fields }
CTParams control the CT corticothalamic neuron special behavior
type CaLrnParams ¶ added in v1.5.1
type CaLrnParams struct { Norm float32 `` /* 188-byte string literal not displayed */ SpkVGCC slbool.Bool `` /* 133-byte string literal not displayed */ SpkVgccCa float32 `def:"35" desc:"multiplier on spike for computing Ca contribution to CaLrn in SpkVGCC mode"` VgccTau float32 `` /* 268-byte string literal not displayed */ Dt kinase.CaDtParams `view:"inline" desc:"time constants for integrating CaLrn across M, P and D cascading levels"` UpdtThr float32 `` /* 274-byte string literal not displayed */ VgccDt float32 `view:"-" json:"-" xml:"-" inactive:"+" desc:"rate = 1 / tau"` NormInv float32 `view:"-" json:"-" xml:"-" inactive:"+" desc:"= 1 / Norm"` // contains filtered or unexported fields }
CaLrnParams parameterizes the neuron-level calcium signals driving learning: CaLrn = NMDA + VGCC Ca sources, where VGCC can be simulated from spiking or use the more complex and dynamic VGCC channel directly. CaLrn is then integrated in a cascading manner at multiple time scales: CaM (as in calmodulin), CaP (ltP, CaMKII, plus phase), CaD (ltD, DAPK1, minus phase).
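The cascade can be understood as a chain of exponential integrators; this standalone sketch (with arbitrary time constants, not the library's defaults) shows the CaM -> CaP -> CaD pattern of each level integrating the one before it.

    package main

    import "fmt"

    func main() {
        // Arbitrary illustrative time constants (cycles); axon's defaults differ.
        tauM, tauP, tauD := float32(5), float32(40), float32(40)
        var caM, caP, caD float32
        for cyc := 0; cyc < 100; cyc++ {
            caLrn := float32(0)
            if cyc < 50 {
                caLrn = 1 // a square pulse of the combined NMDA+VGCC drive
            }
            caM += (caLrn - caM) / tauM // fastest level integrates the raw signal
            caP += (caM - caP) / tauP   // plus-phase-like level integrates CaM
            caD += (caP - caD) / tauD   // minus-phase-like level integrates CaP
        }
        fmt.Println(caM, caP, caD) // CaD lags CaP, which lags CaM
    }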
func (*CaLrnParams) CaLrns ¶ added in v1.8.0
func (np *CaLrnParams) CaLrns(ctx *Context, ni, di uint32)
CaLrns updates the CaLrn value and its cascaded values, based on NMDA and VGCC Ca. It first calls VgccCaFmSpike to update the spike-driven version of that variable, and performs its time-integration.
func (*CaLrnParams) Defaults ¶ added in v1.5.1
func (np *CaLrnParams) Defaults()
func (*CaLrnParams) Update ¶ added in v1.5.1
func (np *CaLrnParams) Update()
func (*CaLrnParams) VgccCaFmSpike ¶ added in v1.8.0
func (np *CaLrnParams) VgccCaFmSpike(ctx *Context, ni, di uint32)
VgccCaFmSpike updates the simulated VGCC calcium from spiking, if that option is selected, and performs time-integration of VgccCa.
type CaSpkParams ¶ added in v1.5.1
type CaSpkParams struct { SpikeG float32 `` /* 464-byte string literal not displayed */ SynTau float32 `` /* 415-byte string literal not displayed */ SynDt float32 `view:"-" json:"-" xml:"-" inactive:"+" desc:"rate = 1 / tau"` Dt kinase.CaDtParams `` /* 202-byte string literal not displayed */ // contains filtered or unexported fields }
CaSpkParams parameterizes the neuron-level spike-driven calcium signals, starting with CaSyn that is integrated at the neuron level and drives synapse-level, pre * post Ca integration, which provides the Tr trace that multiplies error signals, and drives learning directly for Target layers. CaSpk* values are integrated separately at the Neuron level and used for UpdtThr and RLRate as a proxy for the activation (spiking) based learning signal.
func (*CaSpkParams) CaFmSpike ¶ added in v1.5.1
func (np *CaSpkParams) CaFmSpike(ctx *Context, ni, di uint32)
CaFmSpike computes CaSpk* and CaSyn calcium signals based on current spike.
func (*CaSpkParams) Defaults ¶ added in v1.5.1
func (np *CaSpkParams) Defaults()
func (*CaSpkParams) Update ¶ added in v1.5.1
func (np *CaSpkParams) Update()
type ClampParams ¶
type ClampParams struct { IsInput slbool.Bool `inactive:"+" desc:"is this a clamped input layer? set automatically based on layer type at initialization"` IsTarget slbool.Bool `inactive:"+" desc:"is this a target layer? set automatically based on layer type at initialization"` Ge float32 `def:"0.8,1.5" desc:"amount of Ge driven for clamping -- generally use 0.8 for Target layers, 1.5 for Input layers"` Add slbool.Bool `` /* 207-byte string literal not displayed */ ErrThr float32 `def:"0.5" desc:"threshold on neuron Act activity to count as active for computing error relative to target in PctErr method"` // contains filtered or unexported fields }
ClampParams specify how external inputs drive excitatory conductances (like a current clamp) -- either adds or overwrites existing conductances. Noise is added in either case.
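A hedged sketch of the add-vs-overwrite distinction described above, written as generic Go rather than the library's actual conductance update; the variable names stand in for the corresponding ClampParams fields.

    ext := float32(0.75)    // normalized external input for one neuron
    clampGe := float32(0.8) // stands in for ClampParams.Ge (Target-layer default per the doc)
    addMode := false        // stands in for ClampParams.Add
    var geRaw float32
    drive := clampGe * ext
    if addMode {
        geRaw += drive // Add: external drive adds to existing conductance
    } else {
        geRaw = drive // default: external drive overwrites (current-clamp style)
    }
    _ = geRaw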
func (*ClampParams) Defaults ¶
func (cp *ClampParams) Defaults()
func (*ClampParams) Update ¶
func (cp *ClampParams) Update()
type Context ¶ added in v1.7.0
type Context struct { Mode etime.Modes `desc:"current evaluation mode, e.g., Train, Test, etc"` Testing slbool.Bool `` /* 242-byte string literal not displayed */ Phase int32 `desc:"phase counter: typically 0-1 for minus-plus but can be more phases for other algorithms"` PlusPhase slbool.Bool `` /* 126-byte string literal not displayed */ PhaseCycle int32 `desc:"cycle within current phase -- minus or plus"` Cycle int32 `` /* 156-byte string literal not displayed */ ThetaCycles int32 `` /* 173-byte string literal not displayed */ CyclesTotal int32 `` /* 255-byte string literal not displayed */ Time float32 `desc:"accumulated amount of time the network has been running, in simulation-time (not real world time), in seconds"` TrialsTotal int32 `` /* 182-byte string literal not displayed */ TimePerCycle float32 `def:"0.001" desc:"amount of time to increment per cycle"` SlowInterval int32 `` /* 484-byte string literal not displayed */ SlowCtr int32 `` /* 186-byte string literal not displayed */ SynCaCtr float32 `` /* 274-byte string literal not displayed */ NetIdxs NetIdxs `view:"inline" desc:"indexes and sizes of current network"` NeuronVars NeuronVarStrides `view:"-" desc:"stride offsets for accessing neuron variables"` NeuronAvgVars NeuronAvgVarStrides `view:"-" desc:"stride offsets for accessing neuron average variables"` NeuronIdxs NeuronIdxStrides `view:"-" desc:"stride offsets for accessing neuron indexes"` SynapseVars SynapseVarStrides `view:"-" desc:"stride offsets for accessing synapse variables"` SynapseCaVars SynapseCaStrides `view:"-" desc:"stride offsets for accessing synapse Ca variables"` SynapseIdxs SynapseIdxStrides `view:"-" desc:"stride offsets for accessing synapse indexes"` RandCtr slrand.Counter `` /* 226-byte string literal not displayed */ PVLV PVLV `` /* 367-byte string literal not displayed */ // contains filtered or unexported fields }
Context contains all of the global context state info that is shared across every step of the computation. It is passed around to all relevant computational functions, and is updated on the CPU and synced to the GPU after every cycle. It is the *only* mechanism for communication from CPU to GPU. It contains timing, Testing vs. Training mode, random number context, global neuromodulation, etc.
func NewContext ¶ added in v1.7.0
func NewContext() *Context
NewContext returns a new Context struct with default parameters
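A hedged usage sketch, limited to the Context methods whose signatures are shown in this listing; the loop is illustrative.

    ctx := axon.NewContext() // default parameters are set; returns *Context
    for cyc := 0; cyc < 200; cyc++ {
        // ... network cycle computation using ctx ...
        ctx.CycleInc() // advances the cycle-level counters
    }
    ctx.Reset() // back to zero counters, e.g., when starting a fresh run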
func (*Context) CopyNetStridesFrom ¶ added in v1.8.0
CopyNetStridesFrom copies strides and NetIdxs for accessing variables on a Network -- these must be set properly for the Network in question (from its Ctx field) before calling any compute methods with the context. See SetCtxStrides on Network.
func (*Context) CycleInc ¶ added in v1.7.0
func (ctx *Context) CycleInc()
CycleInc increments at the cycle level
func (*Context) Defaults ¶ added in v1.7.0
func (ctx *Context) Defaults()
Defaults sets default values
func (*Context) GlobalDriveIdx ¶ added in v1.8.0
func (ctx *Context) GlobalDriveIdx(di uint32, drIdx uint32, gvar GlobalVars) uint32
GlobalDriveIdx returns index into Drive and USpos, VSPatch global variables
func (*Context) GlobalIdx ¶ added in v1.8.0
func (ctx *Context) GlobalIdx(di uint32, gvar GlobalVars) uint32
GlobalIdx returns index into main global variables, before GvVtaDA
func (*Context) GlobalUSnegIdx ¶ added in v1.8.0
GlobalUSnegIdx returns index into USneg global variables
func (*Context) GlobalVNFloats ¶ added in v1.8.0
GlobalVNFloats number of floats to allocate for Globals
func (*Context) GlobalVTAIdx ¶ added in v1.8.0
func (ctx *Context) GlobalVTAIdx(di uint32, vtaType GlobalVTAType, gvar GlobalVars) uint32
GlobalVTAIdx returns index into VTA global variables
func (*Context) NewPhase ¶ added in v1.7.0
NewPhase resets PhaseCycle = 0 and sets the plus phase as specified
func (*Context) NewState ¶ added in v1.7.0
NewState resets counters at start of new state (trial) of processing. Pass the evaluation mode associated with this new state -- if not Train, then Testing will be set to true.
func (*Context) PVLVInitUS ¶ added in v1.7.25
PVLVInitUS initializes the US state -- call this before calling PVLVSetUS.
func (*Context) PVLVSetDrives ¶ added in v1.7.18
PVLVSetDrives sets current PVLV drives to given magnitude, and sets the first curiosity drive to given level. Drive indexes are 0 based, so 1 is added automatically to accommodate the first curiosity drive.
func (*Context) PVLVSetUS ¶ added in v1.7.18
func (ctx *Context) PVLVSetUS(di uint32, valence ValenceTypes, usIdx int, magnitude float32)
PVLVSetUS sets the given unconditioned stimulus (US) state for PVLV algorithm. Call PVLVInitUS before calling this, and only call this when a US has been received, at the start of a Trial typically. This then drives activity of relevant PVLV-rendered inputs, and dopamine. The US index is automatically adjusted for the curiosity drive / US for positive US outcomes -- i.e., pass in a value with 0 starting index. By default, negative USs do not set the overall ctx.NeuroMod.HasRew flag, which is the trigger for a full-blown US learning event. Set this yourself if the negative US is more of a discrete outcome vs. something that happens in the course of goal engaged approach.
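A hedged sketch of calling PVLVSetUS with the documented signature. The ValenceTypes constant name (axon.Positive) is an assumption, and PVLVInitUS is only referenced in a comment because its argument list is not shown here.

    // di is the data parallel index; PVLVInitUS must have been called first (see above).
    di := uint32(0)
    ctx.PVLVSetUS(di, axon.Positive, 0, 1.0) // 0-based US index 0, magnitude 1; axon.Positive is an assumed constant name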
func (*Context) PVLVShouldGiveUp ¶ added in v1.7.19
PVLVShouldGiveUp tests whether it is time to give up on the current goal, based on the sum of LHb Dip (missed expected rewards) and maximum effort. Called in PVLVStepStart.
func (*Context) PVLVStepStart ¶ added in v1.7.19
PVLVStepStart must be called at start of a new iteration (trial) of behavior when using the PVLV framework, after applying USs, Drives, and updating Effort (e.g., as last step in ApplyPVLV method). Calls PVLVGiveUp (and potentially other things).
func (*Context) Reset ¶ added in v1.7.0
func (ctx *Context) Reset()
Reset resets the counters all back to zero
func (*Context) SetGlobalStrides ¶ added in v1.8.0
func (ctx *Context) SetGlobalStrides()
SetGlobalStrides sets global variable access offsets and strides
type CorSimStats ¶ added in v1.3.35
type CorSimStats struct { Cor float32 `` /* 203-byte string literal not displayed */ Avg float32 `` /* 138-byte string literal not displayed */ Var float32 `` /* 139-byte string literal not displayed */ // contains filtered or unexported fields }
CorSimStats holds correlation similarity (centered cosine aka normalized dot product) statistics at the layer level
func (*CorSimStats) Init ¶ added in v1.3.35
func (cd *CorSimStats) Init()
type DAModTypes ¶ added in v1.7.0
type DAModTypes int32
DAModTypes are types of dopamine modulation of neural activity.
const ( // NoDAMod means there is no effect of dopamine on neural activity NoDAMod DAModTypes = iota // D1Mod is for neurons that primarily express dopamine D1 receptors, // which are excitatory from DA bursts, inhibitory from dips. // Cortical neurons can generally use this type, while subcortical // populations are more diverse in having both D1 and D2 subtypes. D1Mod // D2Mod is for neurons that primarily express dopamine D2 receptors, // which are excitatory from DA dips, inhibitory from bursts. D2Mod // D1AbsMod is like D1Mod, except the absolute value of DA is used // instead of the signed value. // There are a subset of DA neurons that send increased DA for // both negative and positive outcomes, targeting frontal neurons. D1AbsMod DAModTypesN )
func (*DAModTypes) FromString ¶ added in v1.7.0
func (i *DAModTypes) FromString(s string) error
func (DAModTypes) String ¶ added in v1.7.0
func (i DAModTypes) String() string
type DecayParams ¶ added in v1.2.59
type DecayParams struct { Act float32 `` /* 391-byte string literal not displayed */ Glong float32 `` /* 332-byte string literal not displayed */ AHP float32 `` /* 198-byte string literal not displayed */ LearnCa float32 `` /* 194-byte string literal not displayed */ OnRew slbool.Bool `` /* 129-byte string literal not displayed */ // contains filtered or unexported fields }
DecayParams control the decay of activation state in the DecayState function called in NewState when a new state is to be processed.
func (*DecayParams) Defaults ¶ added in v1.2.59
func (dp *DecayParams) Defaults()
func (*DecayParams) Update ¶ added in v1.2.59
func (dp *DecayParams) Update()
type DendParams ¶ added in v1.2.95
type DendParams struct { GbarExp float32 `` /* 221-byte string literal not displayed */ GbarR float32 `` /* 150-byte string literal not displayed */ SSGi float32 `` /* 337-byte string literal not displayed */ HasMod slbool.Bool `` /* 184-byte string literal not displayed */ ModGain float32 `` /* 210-byte string literal not displayed */ ModBase float32 `desc:"baseline modulatory level for modulatory effects -- net modulation is ModBase + ModGain * GModSyn"` // contains filtered or unexported fields }
DendParams are the parameters for updating dendrite-specific dynamics
func (*DendParams) Defaults ¶ added in v1.2.95
func (dp *DendParams) Defaults()
func (*DendParams) Update ¶ added in v1.2.95
func (dp *DendParams) Update()
type DriveVals ¶ added in v1.7.11
type DriveVals struct { D0 float32 D1 float32 D2 float32 D3 float32 D4 float32 D5 float32 D6 float32 D7 float32 }
DriveVals represents different internal drives, such as hunger, thirst, etc. The first drive is typically reserved for novelty / curiosity. Labels can be provided by specific environments.
type Drives ¶ added in v1.7.11
type Drives struct { NActive uint32 `max:"8" desc:"number of active drives -- first drive is novelty / curiosity drive -- total must be <= 8"` NNegUSs uint32 `` /* 186-byte string literal not displayed */ DriveMin float32 `` /* 294-byte string literal not displayed */ Base DriveVals `` /* 205-byte string literal not displayed */ Tau DriveVals `` /* 138-byte string literal not displayed */ USDec DriveVals `` /* 131-byte string literal not displayed */ Dt DriveVals `view:"-" desc:"1/Tau"` // contains filtered or unexported fields }
Drives manages the drive parameters for updating drive state, and drive state.
type DtParams ¶
type DtParams struct { Integ float32 `` /* 649-byte string literal not displayed */ VmTau float32 `` /* 328-byte string literal not displayed */ VmDendTau float32 `` /* 335-byte string literal not displayed */ VmSteps int32 `` /* 223-byte string literal not displayed */ GeTau float32 `def:"5" min:"1" desc:"time constant for decay of excitatory AMPA receptor conductance."` GiTau float32 `def:"7" min:"1" desc:"time constant for decay of inhibitory GABAa receptor conductance."` IntTau float32 `` /* 409-byte string literal not displayed */ LongAvgTau float32 `` /* 336-byte string literal not displayed */ MaxCycStart int32 `` /* 148-byte string literal not displayed */ VmDt float32 `view:"-" json:"-" xml:"-" desc:"nominal rate = Integ / tau"` VmDendDt float32 `view:"-" json:"-" xml:"-" desc:"nominal rate = Integ / tau"` DtStep float32 `view:"-" json:"-" xml:"-" desc:"1 / VmSteps"` GeDt float32 `view:"-" json:"-" xml:"-" desc:"rate = Integ / tau"` GiDt float32 `view:"-" json:"-" xml:"-" desc:"rate = Integ / tau"` IntDt float32 `view:"-" json:"-" xml:"-" desc:"rate = Integ / tau"` LongAvgDt float32 `view:"-" json:"-" xml:"-" desc:"rate = 1 / tau"` }
DtParams are time and rate constants for temporal derivatives in Axon (Vm, G)
func (*DtParams) AvgVarUpdt ¶ added in v1.2.45
AvgVarUpdt updates the average and variance from current value, using LongAvgDt
func (*DtParams) GeSynFmRaw ¶ added in v1.2.97
GeSynFmRaw integrates a synaptic conductance from raw spiking using GeTau
func (*DtParams) GeSynFmRawSteady ¶ added in v1.5.12
GeSynFmRawSteady returns the steady-state GeSyn that would result from receiving a steady increment of GeRaw every time step = raw * GeTau. dSyn = Raw - dt*Syn; solve for dSyn = 0 to get steady state: dt*Syn = Raw; Syn = Raw / dt = Raw * Tau
func (*DtParams) GiSynFmRaw ¶ added in v1.2.97
GiSynFmRaw integrates a synaptic conductance from raw spiking using GiTau
func (*DtParams) GiSynFmRawSteady ¶ added in v1.5.12
GiSynFmRawSteady returns the steady-state GiSyn that would result from receiving a steady increment of GiRaw every time step = raw * GiTau. dSyn = Raw - dt*Syn; solve for dSyn = 0 to get steady state: dt*Syn = Raw; Syn = Raw / dt = Raw * Tau
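A standalone numeric check of the steady-state relation given above: integrating dSyn = Raw - dt*Syn with a constant Raw converges to Raw * Tau. Here Tau = 5 (the GeTau default) and the Raw value is illustrative.

    package main

    import "fmt"

    func main() {
        tau := float32(5)   // e.g., the GeTau default
        dt := 1 / tau
        raw := float32(0.2) // constant raw increment per time step
        var syn float32
        for i := 0; i < 200; i++ {
            syn += raw - dt*syn // dSyn = Raw - dt*Syn
        }
        fmt.Println(syn, raw*tau) // both ~1.0: steady state Syn = Raw * Tau
    }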
type Effort ¶ added in v1.7.11
type Effort struct { Gain float32 `desc:"gain factor for computing effort discount factor -- larger = quicker discounting"` Max float32 `desc:"default maximum raw effort level, when MaxNovel and MaxPostDip don't apply."` MaxNovel float32 `desc:"maximum raw effort level when novelty / curiosity drive is engaged -- typically shorter than default Max"` MaxPostDip float32 `` /* 254-byte string literal not displayed */ MaxVar float32 `desc:"variance in additional maximum effort level, applied whenever CurMax is updated"` // contains filtered or unexported fields }
Effort has effort and parameters for updating it
func (*Effort) DiscFun ¶ added in v1.7.13
DiscFun is the effort discount function: 1 / (1 + ef.Gain * effort)
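A small standalone computation of the discount function given above, 1 / (1 + Gain * effort), with an illustrative Gain value.

    package main

    import "fmt"

    func main() {
        gain := float32(0.1) // illustrative Effort.Gain value
        for _, effort := range []float32{0, 1, 5, 10} {
            disc := 1 / (1 + gain*effort) // DiscFun: larger gain = quicker discounting
            fmt.Printf("effort=%v disc=%v\n", effort, disc)
        }
    }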
type GPLayerTypes ¶ added in v1.7.0
type GPLayerTypes int32
GPLayerTypes is a GPLayer axon-specific layer type enum.
const ( // GPeOut is Outer layer of GPe neurons, receiving inhibition from MtxGo GPeOut GPLayerTypes = iota // GPeIn is Inner layer of GPe neurons, receiving inhibition from GPeOut and MtxNo GPeIn // GPeTA is arkypallidal layer of GPe neurons, receiving inhibition from GPeIn // and projecting inhibition to Mtx GPeTA // GPi is the inner globus pallidus, functionally equivalent to SNr, // receiving from MtxGo and GPeIn, and sending inhibition to VThal GPi GPLayerTypesN )
The GPLayer types
func (*GPLayerTypes) FromString ¶ added in v1.7.0
func (i *GPLayerTypes) FromString(s string) error
func (GPLayerTypes) MarshalJSON ¶ added in v1.7.0
func (ev GPLayerTypes) MarshalJSON() ([]byte, error)
func (GPLayerTypes) String ¶ added in v1.7.0
func (i GPLayerTypes) String() string
func (*GPLayerTypes) UnmarshalJSON ¶ added in v1.7.0
func (ev *GPLayerTypes) UnmarshalJSON(b []byte) error
type GPParams ¶ added in v1.7.0
type GPParams struct { GPType GPLayerTypes `viewif:"LayType=GPLayer" view:"inline" desc:"type of GP Layer -- must set during config using SetBuildConfig of GPType."` // contains filtered or unexported fields }
GPLayer represents a globus pallidus layer, including: GPeOut, GPeIn, GPeTA (arkypallidal), and GPi (see GPType for type). Typically just a single unit per Pool representing a given stripe.
type GPU ¶ added in v1.7.9
type GPU struct { On bool `desc:"if true, actually use the GPU"` RecFunTimes bool `` /* 284-byte string literal not displayed */ CycleByCycle bool `desc:"if true, process each cycle one at a time. Otherwise, 10 cycles at a time are processed in one batch."` Net *Network `view:"-" desc:"the network we operate on -- we live under this net"` Ctx *Context `view:"-" desc:"the context we use"` Sys *vgpu.System `view:"-" desc:"the vgpu compute system"` Params *vgpu.VarSet `view:"-" desc:"VarSet = 0: the uniform LayerParams"` Idxs *vgpu.VarSet `view:"-" desc:"VarSet = 1: the storage indexes and PrjnParams"` Structs *vgpu.VarSet `view:"-" desc:"VarSet = 2: the Storage buffer for RW state structs and neuron floats"` Syns *vgpu.VarSet `view:"-" desc:"Varset = 3: the Storage buffer for synapses"` SynCas *vgpu.VarSet `view:"-" desc:"Varset = 4: the Storage buffer for SynCa banks"` Semaphores map[string]vk.Semaphore `view:"-" desc:"for sequencing commands"` NThreads int `view:"-" inactive:"-" def:"64" desc:"number of warp threads -- typically 64 -- must update all hlsl files if changed!"` MaxBufferBytes uint32 `view:"-" desc:"maximum number of bytes per individual storage buffer element, from GPUProps.Limits.MaxStorageBufferRange"` SynapseCas0 []float32 `view:"-" desc:"bank of floats for GPU access"` SynapseCas1 []float32 `view:"-" desc:"bank of floats for GPU access"` SynapseCas2 []float32 `view:"-" desc:"bank of floats for GPU access"` SynapseCas3 []float32 `view:"-" desc:"bank of floats for GPU access"` SynapseCas4 []float32 `view:"-" desc:"bank of floats for GPU access"` SynapseCas5 []float32 `view:"-" desc:"bank of floats for GPU access"` SynapseCas6 []float32 `view:"-" desc:"bank of floats for GPU access"` SynapseCas7 []float32 `view:"-" desc:"bank of floats for GPU access"` DidBind map[string]bool `view:"-" desc:"tracks var binding"` }
GPU manages all of the GPU-based computation for a given Network. Lives within the network.
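A hedged sketch of the CPU-vs-GPU dispatch pattern implied by the Run* docs below, which all require the caller to check the On flag. The net.GPU field name and the existence of a corresponding CPU-side cycle method are assumptions about the surrounding Network API.

    // Illustrative per-cycle dispatch; the caller checks GPU.On as the docs require.
    if net.GPU.On {
        net.GPU.RunCycle() // GPU path: runs the cycle shaders and syncs state as needed
    } else {
        // CPU path: the network's own cycle method would be called here (name assumed)
    }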
func (*GPU) Config ¶ added in v1.7.9
Config configures the network -- must call on an already-built network
func (*GPU) ConfigSynCaBuffs ¶ added in v1.8.2
func (gp *GPU) ConfigSynCaBuffs()
ConfigSynCaBuffs configures special SynapseCas buffers needed for larger memory access
func (*GPU) CopyContextFmStaging ¶ added in v1.7.9
func (gp *GPU) CopyContextFmStaging()
CopyContextFmStaging copies Context from staging to CPU, after Sync back down.
func (*GPU) CopyContextToStaging ¶ added in v1.7.9
func (gp *GPU) CopyContextToStaging()
CopyContextToStaging copies current context to staging from CPU. Must call SyncMemToGPU after this (see SyncContextToGPU). See SetContext if there is a new one.
func (*GPU) CopyExtsToStaging ¶ added in v1.7.9
func (gp *GPU) CopyExtsToStaging()
CopyExtsToStaging copies external inputs to staging from CPU. Typically used in RunApplyExts which also does the Sync.
func (*GPU) CopyGBufToStaging ¶ added in v1.8.0
func (gp *GPU) CopyGBufToStaging()
CopyGBufToStaging copies the GBuf and GSyns memory to staging.
func (*GPU) CopyIdxsToStaging ¶ added in v1.7.9
func (gp *GPU) CopyIdxsToStaging()
CopyIdxsToStaging is only called when the network is built to copy the indexes specifying connectivity etc to staging from CPU.
func (*GPU) CopyLayerStateFmStaging ¶ added in v1.7.9
func (gp *GPU) CopyLayerStateFmStaging()
CopyLayerStateFmStaging copies Context, LayerVals and Pools from staging to CPU, after Sync.
func (*GPU) CopyLayerValsFmStaging ¶ added in v1.7.9
func (gp *GPU) CopyLayerValsFmStaging()
CopyLayerValsFmStaging copies LayerVals from staging to CPU, after Sync back down.
func (*GPU) CopyLayerValsToStaging ¶ added in v1.7.9
func (gp *GPU) CopyLayerValsToStaging()
CopyLayerValsToStaging copies LayerVals to staging from CPU. Must call SyncMemToGPU after this (see SyncLayerValsToGPU).
func (*GPU) CopyNeuronsFmStaging ¶ added in v1.7.9
func (gp *GPU) CopyNeuronsFmStaging()
CopyNeuronsFmStaging copies Neurons from staging to CPU, after Sync back down.
func (*GPU) CopyNeuronsToStaging ¶ added in v1.7.9
func (gp *GPU) CopyNeuronsToStaging()
CopyNeuronsToStaging copies neuron state up to staging from CPU. Must call SyncMemToGPU after this (see SyncNeuronsToGPU).
func (*GPU) CopyParamsToStaging ¶ added in v1.7.9
func (gp *GPU) CopyParamsToStaging()
CopyParamsToStaging copies the LayerParams and PrjnParams to staging from CPU. Must call SyncMemToGPU after this (see SyncParamsToGPU).
func (*GPU) CopyPoolsFmStaging ¶ added in v1.7.9
func (gp *GPU) CopyPoolsFmStaging()
CopyPoolsFmStaging copies Pools from staging to CPU, after Sync back down.
func (*GPU) CopyPoolsToStaging ¶ added in v1.7.9
func (gp *GPU) CopyPoolsToStaging()
CopyPoolsToStaging copies Pools to staging from CPU. Must call SyncMemToGPU after this (see SyncPoolsToGPU).
func (*GPU) CopyStateFmStaging ¶ added in v1.7.9
func (gp *GPU) CopyStateFmStaging()
CopyStateFmStaging copies Context, LayerVals, Pools, and Neurons from staging to CPU, after Sync.
func (*GPU) CopyStateToStaging ¶ added in v1.7.9
func (gp *GPU) CopyStateToStaging()
CopyStateToStaging copies LayerVals, Pools, Neurons state to staging from CPU. This is typically sufficient for most syncing -- only missing the Synapses, which must be copied separately. Must call SyncMemToGPU after this (see SyncStateToGPU).
func (*GPU) CopySynCaFmStaging ¶ added in v1.8.1
func (gp *GPU) CopySynCaFmStaging()
CopySynCaFmStaging copies the SynCa variables from staging to CPU, after Sync back down; these are per-Di (even larger). This is only used for GUI viewing -- SynCa vars are otherwise managed entirely on the GPU.
func (*GPU) CopySynCaToStaging ¶ added in v1.8.1
func (gp *GPU) CopySynCaToStaging()
CopySynCaToStaging copies the SynCa variables to staging from CPU; these are per-Di (even larger). This is only used for initialization -- SynCa vars are otherwise managed entirely on the GPU. Must call SyncMemToGPU after this (see SyncSynCaToGPU).
func (*GPU) CopySynapsesFmStaging ¶ added in v1.7.9
func (gp *GPU) CopySynapsesFmStaging()
CopySynapsesFmStaging copies Synapses from staging to CPU, after Sync back down. Does not copy SynCa synapse state -- see SynCa methods.
func (*GPU) CopySynapsesToStaging ¶ added in v1.7.9
func (gp *GPU) CopySynapsesToStaging()
CopySynapsesToStaging copies the synapse memory to staging (large). Does not copy SynCa synapse state -- see SynCa methods. This is not typically needed except when weights are initialized or for the Slow weight update processes that are not on GPU. Must call SyncMemToGPU after this (see SyncSynapsesToGPU).
func (*GPU) Destroy ¶ added in v1.7.9
func (gp *GPU) Destroy()
Destroy should be called to release all the resources allocated by the network
func (*GPU) RunApplyExts ¶ added in v1.7.9
func (gp *GPU) RunApplyExts()
RunApplyExts copies Exts external input memory to the GPU and then runs the ApplyExts shader that applies those external inputs to the GPU-side neuron state. The caller must check the On flag before running this, to use CPU vs. GPU
func (*GPU) RunApplyExtsCmd ¶ added in v1.7.11
func (gp *GPU) RunApplyExtsCmd() vk.CommandBuffer
RunApplyExtsCmd returns the commands to copy Exts external input memory to the GPU and then runs the ApplyExts shader that applies those external inputs to the GPU-side neuron state. The caller must check the On flag before running this, to use CPU vs. GPU
func (*GPU) RunCycle ¶ added in v1.7.9
func (gp *GPU) RunCycle()
RunCycle is the main cycle-level update loop for updating one msec of neuron state. It copies the current Context up to the GPU for updated Cycle counter state and random number state. Different versions of the code are run depending on various flags. By default, it will run the entire minus and plus phase in one big chunk. The caller must check the On flag before running this, to use CPU vs. GPU.
func (*GPU) RunCycleOne ¶ added in v1.7.9
func (gp *GPU) RunCycleOne()
RunCycleOne does one cycle of updating in an optimized manner using Events to sequence each of the pipeline calls. It is for CycleByCycle mode and syncs back full state every cycle.
func (*GPU) RunCycleOneCmd ¶ added in v1.7.11
func (gp *GPU) RunCycleOneCmd() vk.CommandBuffer
RunCycleOneCmd returns commands to do one cycle of updating in an optimized manner using Events to sequence each of the pipeline calls. It is for CycleByCycle mode and syncs back full state every cycle.
func (*GPU) RunCycleSeparateFuns ¶ added in v1.7.9
func (gp *GPU) RunCycleSeparateFuns()
RunCycleSeparateFuns does one cycle of updating in a very slow manner that allows timing to be recorded for each function call, for profiling.
func (*GPU) RunCycles ¶ added in v1.7.9
func (gp *GPU) RunCycles()
RunCycles does multiple cycles of updating in one chunk
func (*GPU) RunCyclesCmd ¶ added in v1.7.11
func (gp *GPU) RunCyclesCmd() vk.CommandBuffer
RunCyclesCmd returns the RunCycles commands to do multiple cycles of updating in one chunk
func (*GPU) RunDWt ¶ added in v1.7.9
func (gp *GPU) RunDWt()
RunDWt runs the DWt shader to compute weight changes. The caller must check the On flag before running this, to use CPU vs. GPU
func (*GPU) RunDWtCmd ¶ added in v1.8.0
func (gp *GPU) RunDWtCmd() vk.CommandBuffer
RunDWtCmd returns the commands to run the DWt shader to compute weight changes.
func (*GPU) RunMinusPhase ¶ added in v1.7.9
func (gp *GPU) RunMinusPhase()
RunMinusPhase runs the MinusPhase shader to update snapshot variables at the end of the minus phase. All non-synapse state is copied back down after this, so it is available for action calls. The caller must check the On flag before running this, to use CPU vs. GPU.
func (*GPU) RunMinusPhaseCmd ¶ added in v1.7.11
func (gp *GPU) RunMinusPhaseCmd() vk.CommandBuffer
RunMinusPhaseCmd returns the commands to run the MinusPhase shader to update snapshot variables at the end of the minus phase. All non-synapse state is copied back down after this, so it is available for action calls
func (*GPU) RunNewState ¶ added in v1.7.9
func (gp *GPU) RunNewState()
RunNewState runs the NewState shader to initialize state at start of new ThetaCycle trial. The caller must check the On flag before running this, to use CPU vs. GPU
func (*GPU) RunNewStateCmd ¶ added in v1.8.0
func (gp *GPU) RunNewStateCmd() vk.CommandBuffer
RunNewStateCmd returns the commands to run the NewState shader to update variables at the start of a new trial.
func (*GPU) RunPipelineMemWait ¶ added in v1.8.2
func (gp *GPU) RunPipelineMemWait(cmd vk.CommandBuffer, name string, n int)
RunPipelineMemWait records command to run given pipeline with a WaitMemWriteRead after it, so subsequent pipeline run will have access to values updated by this command.
func (*GPU) RunPipelineNoWait ¶ added in v1.8.2
func (gp *GPU) RunPipelineNoWait(cmd vk.CommandBuffer, name string, n int)
RunPipelineNoWait records command to run given pipeline without any waiting after it for writes to complete. This should be the last command in the sequence.
func (*GPU) RunPipelineOffset ¶ added in v1.8.2
func (gp *GPU) RunPipelineOffset(cmd vk.CommandBuffer, name string, n, off int)
RunPipelineOffset records command to run given pipeline with a push constant offset for the starting index to compute. This is needed when the total number of dispatch indexes exceeds GPU.MaxComputeWorkGroupCount1D. Does NOT wait for writes, assuming a parallel launch of all.
func (*GPU) RunPipelineWait ¶ added in v1.7.9
RunPipelineWait runs given pipeline in "single shot" mode, which is maximally inefficient if multiple commands need to be run. This is the only mode in which timer information is available.
func (*GPU) RunPlusPhase ¶ added in v1.7.9
func (gp *GPU) RunPlusPhase()
RunPlusPhase runs the PlusPhase shader to update snapshot variables and do additional stats-level processing at end of the plus phase. All non-synapse state is copied back down after this. The caller must check the On flag before running this, to use CPU vs. GPU
func (*GPU) RunPlusPhaseCmd ¶ added in v1.7.11
func (gp *GPU) RunPlusPhaseCmd() vk.CommandBuffer
RunPlusPhaseCmd returns the commands to run the PlusPhase shader to update snapshot variables and do additional stats-level processing at end of the plus phase. All non-synapse state is copied back down after this.
func (*GPU) RunPlusPhaseStart ¶ added in v1.7.10
func (gp *GPU) RunPlusPhaseStart()
RunPlusPhaseStart runs the PlusPhaseStart shader, which does updating at the start of the plus phase: applies Target inputs as External inputs.
func (*GPU) RunWtFmDWt ¶ added in v1.7.9
func (gp *GPU) RunWtFmDWt()
RunWtFmDWt runs the WtFmDWt shader to update weights from weight changes. The caller must check the On flag before running this, to use CPU vs. GPU.
func (*GPU) RunWtFmDWtCmd ¶ added in v1.7.11
func (gp *GPU) RunWtFmDWtCmd() vk.CommandBuffer
RunWtFmDWtCmd returns the commands to run the WtFmDWt shader to update weights from weight changes. This also syncs neuron state from CPU -> GPU because TrgAvgFmD has updated that state.
func (*GPU) SetContext ¶ added in v1.7.9
SetContext sets our context to given context and syncs it to the GPU. Typically a single context is used as it must be synced into the GPU. The GPU never writes to the CPU
func (*GPU) StartRun ¶ added in v1.7.9
func (gp *GPU) StartRun(cmd vk.CommandBuffer)
StartRun resets the given command buffer in preparation for recording commands for a multi-step run. It is much more efficient to record all commands to one buffer, and use Events to synchronize the steps between them, rather than using semaphores. The submit call is by far the most expensive so that should only happen once!
func (*GPU) SynDataNs ¶ added in v1.8.2
SynDataNs returns the numbers for processing SynapseCas vars = Synapses * MaxData. Can exceed thread count limit and require multiple command launches with different offsets. The offset is in terms of synapse index, so everything is computed in terms of synapse indexes, with MaxData then multiplied to get final values. nCmd = number of command launches, nPer = number of synapses per cmd, nLast = number of synapses for last command launch.
func (*GPU) SyncAllFmGPU ¶ added in v1.7.9
func (gp *GPU) SyncAllFmGPU()
SyncAllFmGPU copies all state except Context, plus Synapses, from the GPU to the CPU. This is called before SlowAdapt, which is run CPU-side.
func (*GPU) SyncAllToGPU ¶ added in v1.7.9
func (gp *GPU) SyncAllToGPU()
SyncAllToGPU copies LayerVals, Pools, Neurons, Synapses to GPU. Calls SyncMemToGPU -- use when this is the only copy taking place.
func (*GPU) SyncContextFmGPU ¶ added in v1.7.9
func (gp *GPU) SyncContextFmGPU()
SyncContextFmGPU copies Context from GPU to CPU. This is done at the end of each cycle to get state back from the GPU for CPU-side computations. Use only when this is the only thing being copied -- it is more efficient to get everything at once, e.g., see SyncStateFmGPU.
func (*GPU) SyncContextToGPU ¶ added in v1.7.9
func (gp *GPU) SyncContextToGPU()
SyncContextToGPU copies current context to GPU from CPU. Calls SyncMemToGPU -- use when this is the only copy taking place. See SetContext if there is a new one.
func (*GPU) SyncGBufToGPU ¶ added in v1.7.9
func (gp *GPU) SyncGBufToGPU()
SyncGBufToGPU copies the GBuf and GSyns memory to the GPU.
func (*GPU) SyncLayerStateFmGPU ¶ added in v1.7.9
func (gp *GPU) SyncLayerStateFmGPU()
SyncLayerStateFmGPU copies Context, LayerVals, and Pools from GPU to CPU. This is the main GPU->CPU sync step automatically called after each Cycle.
func (*GPU) SyncLayerValsFmGPU ¶ added in v1.7.9
func (gp *GPU) SyncLayerValsFmGPU()
SyncLayerValsFmGPU copies LayerVals from GPU to CPU. This is done at the end of each cycle to get state back from staging for CPU-side computations. Use only when this is the only thing being copied -- it is more efficient to get everything at once, e.g., see SyncStateFmGPU.
func (*GPU) SyncLayerValsToGPU ¶ added in v1.7.9
func (gp *GPU) SyncLayerValsToGPU()
SyncLayerValsToGPU copies LayerVals to GPU from CPU. Calls SyncMemToGPU -- use when this is the only copy taking place.
func (*GPU) SyncMemToGPU ¶ added in v1.7.9
func (gp *GPU) SyncMemToGPU()
SyncMemToGPU synchronizes any staging memory buffers that have been updated with a Copy function, actually sending the updates from the staging -> GPU. The CopyTo commands just copy Network-local data to a staging buffer, and this command then actually moves that onto the GPU. In unified GPU memory architectures, this staging buffer is actually the same one used directly by the GPU -- otherwise it is a separate staging buffer.
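A hedged sketch of the two-step copy described above, pairing the documented Copy*ToStaging methods with a single SyncMemToGPU (the Sync*ToGPU convenience wrappers do exactly this when only one copy is needed); gp here is the network's GPU object.

    // Stage several buffers, then move them all to the GPU in one transfer.
    gp.CopyNeuronsToStaging()
    gp.CopyLayerValsToStaging()
    gp.CopyPoolsToStaging()
    gp.SyncMemToGPU() // actually pushes the staged data to the GPU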
func (*GPU) SyncNeuronsFmGPU ¶ added in v1.7.9
func (gp *GPU) SyncNeuronsFmGPU()
SyncNeuronsFmGPU copies Neurons from GPU to CPU. Use only when this is the only thing being copied -- it is more efficient to get everything at once, e.g., see SyncStateFmGPU.
func (*GPU) SyncNeuronsToGPU ¶ added in v1.7.9
func (gp *GPU) SyncNeuronsToGPU()
SyncNeuronsToGPU copies neuron state up to GPU from CPU. Calls SyncMemToGPU -- use when this is the only copy taking place.
func (*GPU) SyncParamsToGPU ¶ added in v1.7.9
func (gp *GPU) SyncParamsToGPU()
SyncParamsToGPU copies the LayerParams and PrjnParams to the GPU from CPU. Calls SyncMemToGPU -- use when this is the only copy taking place.
func (*GPU) SyncPoolsFmGPU ¶ added in v1.7.9
func (gp *GPU) SyncPoolsFmGPU()
SyncPoolsFmGPU copies Pools from GPU to CPU. Use only when this is the only thing being copied -- it is more efficient to get everything at once, e.g., see SyncStateFmGPU.
func (*GPU) SyncPoolsToGPU ¶ added in v1.7.9
func (gp *GPU) SyncPoolsToGPU()
SyncPoolsToGPU copies Pools to GPU from CPU. Calls SyncMemToGPU -- use when this is the only copy taking place.
func (*GPU) SyncRegionStruct ¶ added in v1.7.9
SyncRegionStruct returns the SyncRegion with error panic
func (*GPU) SyncRegionSynCas ¶ added in v1.8.1
SyncRegionSynCas returns the SyncRegion with error panic
func (*GPU) SyncRegionSyns ¶ added in v1.8.0
SyncRegionSyns returns the SyncRegion with error panic
func (*GPU) SyncStateFmGPU ¶ added in v1.7.9
func (gp *GPU) SyncStateFmGPU()
SyncStateFmGPU copies Neurons, LayerVals, and Pools from GPU to CPU. This is the main GPU->CPU sync step automatically called in PlusPhase.
func (*GPU) SyncStateGBufToGPU ¶ added in v1.8.0
func (gp *GPU) SyncStateGBufToGPU()
SyncStateGBufToGPU copies LayerVals, Pools, Neurons, and GBuf state to the GPU. This is typically sufficient for most syncing -- only missing the Synapses, which must be copied separately. Calls SyncMemToGPU -- use when this is the only copy taking place.
func (*GPU) SyncStateToGPU ¶ added in v1.7.9
func (gp *GPU) SyncStateToGPU()
SyncStateToGPU copies LayerVals, Pools, and Neurons state to the GPU. This is typically sufficient for most syncing -- only missing the Synapses, which must be copied separately. Calls SyncMemToGPU -- use when this is the only copy taking place.
func (*GPU) SyncSynCaFmGPU ¶ added in v1.8.1
func (gp *GPU) SyncSynCaFmGPU()
SyncSynCaFmGPU copies the SynCa variables from the GPU to the CPU; these are per-Di (even larger). This is only used for GUI viewing -- SynCa vars are otherwise managed entirely on the GPU. Use only when this is the only thing being copied -- it is more efficient to get everything at once.
func (*GPU) SyncSynCaToGPU ¶ added in v1.8.1
func (gp *GPU) SyncSynCaToGPU()
SyncSynCaToGPU copies the SynCa variables to GPU, which are per-Di (even larger). This is only used for initialization -- SynCa vars otherwise managed entirely on GPU. Calls SyncMemToGPU -- use when this is the only copy taking place.
func (*GPU) SyncSynapsesFmGPU ¶ added in v1.7.9
func (gp *GPU) SyncSynapsesFmGPU()
SyncSynapsesFmGPU copies Synapses from GPU to CPU. Does not copy SynCa synapse state -- see the SynCa methods. Use only when this is the only thing being copied -- it is more efficient to get everything at once.
func (*GPU) SyncSynapsesToGPU ¶ added in v1.7.9
func (gp *GPU) SyncSynapsesToGPU()
SyncSynapsesToGPU copies the synapse memory to GPU (large). This is not typically needed except when weights are initialized or for the Slow weight update processes that are not on GPU. Calls SyncMemToGPU -- use when this is the only copy taking place.
func (*GPU) TestSynCaCmd ¶ added in v1.8.2
func (gp *GPU) TestSynCaCmd() vk.CommandBuffer
type GScaleVals ¶ added in v1.2.37
type GScaleVals struct { Scale float32 `` /* 240-byte string literal not displayed */ Rel float32 `` /* 159-byte string literal not displayed */ // contains filtered or unexported fields }
GScaleVals holds the conductance scaling values. These are computed once at start and remain constant thereafter, and therefore belong on Params and not on PrjnVals.
type GlobalVTAType ¶ added in v1.8.0
type GlobalVTAType int32
GlobalVTAType are types of VTA variables
const ( // GvVtaRaw are raw VTA values -- inputs to the computation GvVtaRaw GlobalVTAType = iota // GvVtaVals are computed current VTA values GvVtaVals // GvVtaPrev are previous computed values -- to avoid a data race GvVtaPrev GlobalVTATypeN )
func (*GlobalVTAType) FromString ¶ added in v1.8.0
func (i *GlobalVTAType) FromString(s string) error
func (GlobalVTAType) MarshalJSON ¶ added in v1.8.0
func (ev GlobalVTAType) MarshalJSON() ([]byte, error)
func (GlobalVTAType) String ¶ added in v1.8.0
func (i GlobalVTAType) String() string
func (*GlobalVTAType) UnmarshalJSON ¶ added in v1.8.0
func (ev *GlobalVTAType) UnmarshalJSON(b []byte) error
type GlobalVars ¶ added in v1.8.0
type GlobalVars int32
GlobalVars are network-wide variables, such as neuromodulators, reward, drives, etc including the state for the PVLV phasic dopamine model.
const ( // GvRew is reward value -- this is set here in the Context struct, and the RL Rew layer grabs it from there -- must also set HasRew flag when rew is set -- otherwise is ignored. GvRew GlobalVars = iota // GvHasRew must be set to true when a reward is present -- otherwise Rew is ignored. Also set during extinction by PVLV. This drives ACh release in the PVLV model. GvHasRew // GvRewPred is reward prediction -- computed by a special reward prediction layer GvRewPred // GvPrevPred is previous time step reward prediction -- e.g., for TDPredLayer GvPrevPred // GvDA is dopamine -- represents reward prediction error, signaled as phasic increases or decreases in activity relative to a tonic baseline, which is represented by a value of 0. Released by the VTA -- ventral tegmental area, or SNc -- substantia nigra pars compacta. GvDA // GvACh is acetylcholine -- activated by salient events, particularly at the onset of a reward / punishment outcome (US), or onset of a conditioned stimulus (CS). Driven by BLA -> PPtg that detects changes in BLA activity, via LDTLayer type GvACh // NE is norepinepherine -- not yet in use GvNE // GvSer is serotonin -- not yet in use GvSer // GvAChRaw is raw ACh value used in updating global ACh value by LDTLayer GvAChRaw // GvNotMaint is activity of the PTNotMaintLayer -- drives top-down inhibition of LDT layer / ACh activity. GvNotMaint // GvEffortRaw is raw effort -- increments linearly upward for each additional effort step GvEffortRaw // GvEffortDisc is effort discount factor = 1 / (1 + gain * EffortRaw) -- goes up toward 1 -- the effect of effort is (1 - EffortDisc) multiplier GvEffortDisc // GvEffortCurMax is current maximum raw effort level -- above this point, any current goal will be terminated during the GiveUp function, which also looks for accumulated disappointment. 
See Max, MaxNovel, MaxPostDip for values depending on how the goal was triggered GvEffortCurMax // GvUrgency is the overall urgency activity level (normalized 0-1), computed from logistic function of GvUrgencyRaw GvUrgency // GvUrgencyRaw is raw effort for urgency -- increments linearly upward from effort increments per step GvUrgencyRaw // GvVSMatrixJustGated is VSMatrix just gated (to engage goal maintenance in PFC areas), set at end of plus phase -- this excludes any gating happening at time of US GvVSMatrixJustGated // GvVSMatrixHasGated is VSMatrix has gated since the last time HasRew was set (US outcome received or expected one failed to be received GvVSMatrixHasGated // HasRewPrev is state from the previous trial -- copied from HasRew in NewState -- used for updating Effort, Urgency at start of new trial GvHasRewPrev // HasPosUSPrev is state from the previous trial -- copied from HasPosUS in NewState -- used for updating Effort, Urgency at start of new trial GvHasPosUSPrev // computed LHb activity level that drives more dipping / pausing of DA firing, when VSPatch pos prediction > actual PV reward drive GvLHbDip // GvLHbBurst is computed LHb activity level that drives bursts of DA firing, when actual PV reward drive > VSPatch pos prediction GvLHbBurst // GvLHbDipSumCur is current sum of LHbDip over trials, which is reset when there is a PV value, an above-threshold PPTg value, or when it triggers reset GvLHbDipSumCur // GvLHbDipSum is copy of DipSum that is not reset -- used for driving negative dopamine dips on GiveUp trials GvLHbDipSum // GvLHbGiveUp is true if a reset was triggered from LHbDipSum > Reset Thr GvLHbGiveUp // GvLHbPos is computed PosGain * (VSPatchPos - PVpos) GvLHbPos // GvLHbNeg is computed NegGain * PVneg GvLHbNeg // GvVtaDA is overall dopamine value reflecting all of the different inputs GvVtaDA // GvVtaUSpos is total positive valence primary value = sum of USpos * Drive without effort discounting GvVtaUSpos // GvVtaPVpos is total positive valence primary value = sum of USpos * Drive * (1-Effort.Disc) -- what actually drives DA bursting from actual USs received GvVtaPVpos // GvVtaPVneg is total negative valence primary value = sum of USneg inputs GvVtaPVneg // GvVtaCeMpos is positive valence central nucleus of the amygdala (CeM) LV (learned value) activity, reflecting |BLAPosAcqD1 - BLAPosExtD2|_+ positively rectified. CeM sets Raw directly. Note that a positive US onset even with no active Drive will be reflected here, enabling learning about unexpected outcomes GvVtaCeMpos // GvVtaCeMneg is negative valence central nucleus of the amygdala (CeM) LV (learned value) activity, reflecting |BLANegAcqD2 - BLANegExtD1|_+ positively rectified. CeM sets Raw directly GvVtaCeMneg // GvVtaLHbDip is dip from LHb / RMTg -- net inhibitory drive on VTA DA firing = dips GvVtaLHbDip // GvVtaLHbBurst is burst from LHb / RMTg -- net excitatory drive on VTA DA firing = bursts GvVtaLHbBurst // GvVtaVSPatchPos is net shunting input from VSPatch (PosD1 -- PVi in original PVLV) GvVtaVSPatchPos // GvUSneg is negative valence US outcomes -- NNegUSs of them GvUSneg // GvDrives is current drive state -- updated with optional homeostatic exponential return to baseline values GvDrives // GvDrivesBase are baseline levels for each drive -- what they naturally trend toward in the absence of any input. Set inactive drives to 0 baseline, active ones typically elevated baseline (0-1 range). 
GvBaseDrives // GvDrivesTau are time constants in ThetaCycle (trial) units for natural update toward Base values -- 0 values means no natural update. GvDrivesTau // GvDrivesUSDec are decrement factors for reducing drive value when Drive-US is consumed (multiply the US magnitude) -- these are positive valued numbers. GvDrivesUSDec // GvUSpos is current positive-valence drive-satisfying input(s) (unconditioned stimuli = US) GvUSpos // GvVSPatch is current positive-valence drive-satisfying reward predicting VSPatch (PosD1) values GvVSPatch GlobalVarsN )
func (*GlobalVars) FromString ¶ added in v1.8.0
func (i *GlobalVars) FromString(s string) error
func (GlobalVars) MarshalJSON ¶ added in v1.8.0
func (ev GlobalVars) MarshalJSON() ([]byte, error)
func (GlobalVars) String ¶ added in v1.8.0
func (i GlobalVars) String() string
func (*GlobalVars) UnmarshalJSON ¶ added in v1.8.0
func (ev *GlobalVars) UnmarshalJSON(b []byte) error
type HipConfig ¶ added in v1.8.6
type HipConfig struct { // model size EC2Size evec.Vec2i `nest:"+" desc:"size of EC2"` EC3NPool evec.Vec2i `nest:"+" desc:"number of EC3 pools (outer dimension)"` EC3NNrn evec.Vec2i `nest:"+" desc:"number of neurons in one EC3 pool"` CA1NNrn evec.Vec2i `nest:"+" desc:"number of neurons in one CA1 pool"` CA3Size evec.Vec2i `nest:"+" desc:"size of CA3"` DGRatio float32 `def:"2.236" desc:"size of DG / CA3"` // pcon EC3ToEC2PCon float32 `def:"0.1" desc:"percent connectivity from EC3 to EC2"` EC2ToDGPCon float32 `def:"0.25" desc:"percent connectivity from EC2 to DG"` EC2ToCA3PCon float32 `def:"0.25" desc:"percent connectivity from EC2 to CA3"` CA3ToCA1PCon float32 `def:"0.25" desc:"percent connectivity from CA3 to CA1"` DGToCA3PCon float32 `def:"0.02" desc:"percent connectivity into CA3 from DG"` EC2LatRadius int `desc:"lateral radius of connectivity in EC2"` EC2LatSigma float32 `desc:"lateral gaussian sigma in EC2 for how quickly weights fall off with distance"` MossyDelta float32 `` /* 209-byte string literal not displayed */ MossyDeltaTest float32 `` /* 211-byte string literal not displayed */ ThetaLow float32 `` /* 147-byte string literal not displayed */ ThetaHigh float32 `` /* 146-byte string literal not displayed */ EC5ClampSrc string `` /* 142-byte string literal not displayed */ EC5ClampTest bool `` /* 217-byte string literal not displayed */ EC5ClampThr float32 `` /* 193-byte string literal not displayed */ }
HipConfig have the hippocampus size and connectivity parameters
type HipPrjnParams ¶ added in v1.8.5
type HipPrjnParams struct { Hebb float32 `def:"0" desc:"Hebbian learning proportion"` Err float32 `def:"1" desc:"EDL proportion"` SAvgCor float32 `` /* 161-byte string literal not displayed */ SAvgThr float32 `` /* 144-byte string literal not displayed */ SNominal float32 `def:"0.1" min:"0" desc:"sending layer Nominal (need to manually set it to be the same as the sending layer)"` // contains filtered or unexported fields }
HipPrjnParams define behavior of hippocampus prjns, which have special learning rules
func (*HipPrjnParams) Defaults ¶ added in v1.8.5
func (hp *HipPrjnParams) Defaults()
func (*HipPrjnParams) Update ¶ added in v1.8.5
func (hp *HipPrjnParams) Update()
type InhibParams ¶
type InhibParams struct { ActAvg ActAvgParams `` /* 173-byte string literal not displayed */ Layer fsfffb.GiParams `` /* 218-byte string literal not displayed */ Pool fsfffb.GiParams `` /* 134-byte string literal not displayed */ }
axon.InhibParams contains all the inhibition computation params and functions for basic Axon. This is included in axon.Layer to support computation. This also includes other misc layer-level params, such as the expected average activation in the layer, which is used for Ge rescaling and potentially for adapting inhibition over time.
func (*InhibParams) Defaults ¶
func (ip *InhibParams) Defaults()
func (*InhibParams) Update ¶
func (ip *InhibParams) Update()
type LDTParams ¶ added in v1.7.18
type LDTParams struct { SrcThr float32 `` /* 193-byte string literal not displayed */ Rew slbool.Bool `` /* 153-byte string literal not displayed */ MaintInhib float32 `` /* 167-byte string literal not displayed */ NotMaintMax float32 `` /* 127-byte string literal not displayed */ SrcLay1Idx int32 `` /* 135-byte string literal not displayed */ SrcLay2Idx int32 `` /* 135-byte string literal not displayed */ SrcLay3Idx int32 `` /* 135-byte string literal not displayed */ SrcLay4Idx int32 `` /* 135-byte string literal not displayed */ }
LDTParams compute reward salience as ACh global neuromodulatory signal as a function of the MAX activation of its inputs.
func (*LDTParams) ACh ¶ added in v1.7.19
func (lp *LDTParams) ACh(ctx *Context, di uint32, srcLay1Act, srcLay2Act, srcLay3Act, srcLay4Act float32) float32
ACh returns the computed ACh salience value based on given source layer activations and key values from the ctx Context.
func (*LDTParams) MaintFmNotMaint ¶ added in v1.7.18
MaintFmNotMaint returns a 0-1 value reflecting strength of active maintenance based on the activity of the PTNotMaintLayer as recorded in NeuroMod.NotMaint.
func (*LDTParams) MaxSrcAct ¶ added in v1.7.19
MaxSrcAct returns the updated maxSrcAct value from given source layer activity value.
type LHb ¶ added in v1.7.11
type LHb struct { PosGain float32 `def:"1" desc:"gain multiplier on overall VSPatchPos - PosPV component"` NegGain float32 `def:"1" desc:"gain multiplier on overall PVneg component"` GiveUpThr float32 `def:"0.2" desc:"threshold on summed LHbDip over trials for triggering a reset of goal engaged state"` DipLowThr float32 `` /* 128-byte string literal not displayed */ }
LHb has values for computing LHb & RMTg which drives dips / pauses in DA firing. Positive net LHb activity drives dips / pauses in VTA DA activity, e.g., when predicted pos > actual or actual neg > predicted. Negative net LHb activity drives bursts in VTA DA activity, e.g., when actual pos > predicted (redundant with LV / Amygdala) or "relief" burst when actual neg < predicted.
type LRateMod ¶ added in v1.6.13
type LRateMod struct { On slbool.Bool `desc:"toggle use of this modulation factor"` Base float32 `viewif:"On" min:"0" max:"1" desc:"baseline learning rate -- what you get for correct cases"` Range minmax.F32 `` /* 191-byte string literal not displayed */ // contains filtered or unexported fields }
LRateMod implements global learning rate modulation, based on a performance-based factor, for example error. Increasing levels of the factor = higher learning rate. This can be added to a Sim and called prior to DWt() to dynamically change lrate based on overall network performance.
func (*LRateMod) LRateMod ¶ added in v1.6.13
LRateMod calls LRateMod on given network, using computed Mod factor based on given normalized modulation factor (0 = no error = Base learning rate, 1 = maximum error). returns modulation factor applied.
type LRateParams ¶ added in v1.6.13
type LRateParams struct { Base float32 `` /* 199-byte string literal not displayed */ Sched float32 `desc:"scheduled learning rate multiplier, simulating reduction in plasticity over aging"` Mod float32 `desc:"dynamic learning rate modulation due to neuromodulatory or other such factors"` Eff float32 `inactive:"+" desc:"effective actual learning rate multiplier used in computing DWt: Eff = Mod * Sched * Base"` }
LRateParams manages learning rate parameters
func (*LRateParams) Defaults ¶ added in v1.6.13
func (ls *LRateParams) Defaults()
func (*LRateParams) Init ¶ added in v1.6.13
func (ls *LRateParams) Init()
Init initializes modulation values back to 1 and updates Eff
func (*LRateParams) Update ¶ added in v1.6.13
func (ls *LRateParams) Update()
func (*LRateParams) UpdateEff ¶ added in v1.7.0
func (ls *LRateParams) UpdateEff()
type LaySpecialVals ¶ added in v1.7.0
type LaySpecialVals struct { V1 float32 `inactive:"+" desc:"one value"` V2 float32 `inactive:"+" desc:"one value"` V3 float32 `inactive:"+" desc:"one value"` V4 float32 `inactive:"+" desc:"one value"` }
LaySpecialVals holds special values used to communicate to other layers based on neural values, used for special algorithms such as RL where some of the computation is done algorithmically.
func (*LaySpecialVals) Init ¶ added in v1.7.9
func (lv *LaySpecialVals) Init()
type Layer ¶
type Layer struct { LayerBase Params *LayerParams `desc:"all layer-level parameters -- these must remain constant once configured"` }
axon.Layer implements the basic Axon spiking activation function, and manages learning in the projections.
func (*Layer) AdaptInhib ¶ added in v1.2.37
AdaptInhib adapts inhibition
func (*Layer) AnyGated ¶ added in v1.7.0
AnyGated returns true if the layer-level pool Gated flag is true, which indicates whether any of the pools gated.
func (*Layer) ApplyExt ¶
ApplyExt applies external input in the form of an etensor.Float32 or Float64. Negative values and NaNs are not valid, and will be interpreted as missing inputs. The given data index di is the data parallel index (0 <= di < MaxData): inputs must be presented separately for each data parallel set. If the dimensionality of the tensor matches that of the layer, and is 2D or 4D, then each dimension is iterated separately, so any mismatch preserves dimensional structure. Otherwise, the flat 1D view of the tensor is used. If the layer is a Target or Compare layer type, then the input goes in Target, otherwise it goes in Ext. Also sets the Exts values on the layer, which are used for the GPU version, which requires calling the network ApplyExts() method -- this is a no-op for the CPU.
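As a sketch, a typical per-trial input-application loop might look like the following; the layer lookup and the InitExt / ApplyExt signatures are assumptions for illustration:

// applyInputs presents one pattern per data-parallel index to an input layer,
// then copies the Exts to the GPU (a no-op on CPU).
func applyInputs(net *axon.Network, ctx *axon.Context, patterns []*etensor.Float32) {
	in := net.LayerByName("Input").(*axon.Layer) // hypothetical layer name
	in.InitExt(ctx)                              // clear any prior external input
	for di := uint32(0); di < ctx.NetIdxs.NData; di++ {
		in.ApplyExt(ctx, di, patterns[di])
	}
	net.ApplyExts(ctx) // required for the GPU version
}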
func (*Layer) ApplyExt1D ¶
ApplyExt1D applies external input in the form of a flat 1-dimensional slice of floats. If the layer is a Target or Compare layer type, then it goes in Target, otherwise it goes in Ext.
func (*Layer) ApplyExt1D32 ¶
ApplyExt1D32 applies external input in the form of a flat 1-dimensional slice of float32s. If the layer is a Target or Compare layer type, then it goes in Target otherwise it goes in Ext
func (*Layer) ApplyExt1DTsr ¶
ApplyExt1DTsr applies external input using 1D flat interface into tensor. If the layer is a Target or Compare layer type, then it goes in Target otherwise it goes in Ext
func (*Layer) ApplyExt2D ¶
ApplyExt2D applies 2D tensor external input
func (*Layer) ApplyExt2Dto4D ¶
ApplyExt2Dto4D applies 2D tensor external input to a 4D layer
func (*Layer) ApplyExt4D ¶
ApplyExt4D applies 4D tensor external input
func (*Layer) ApplyExtFlags ¶
func (ly *Layer) ApplyExtFlags() (clearMask, setMask NeuronFlags, toTarg bool)
ApplyExtFlags gets the clear mask and set mask for updating neuron flags based on layer type, and whether input should be applied to Target (else Ext)
func (*Layer) ApplyExtVal ¶ added in v1.7.9
func (ly *Layer) ApplyExtVal(ctx *Context, lni, di uint32, val float32, clearMask, setMask NeuronFlags, toTarg bool)
ApplyExtVal applies given external value to given neuron using clearMask, setMask, and toTarg from ApplyExtFlags. Also saves Val in Exts for potential use by GPU.
func (*Layer) AsAxon ¶
AsAxon returns this layer as an axon.Layer -- all derived layers must redefine this to return the base Layer type, so that the AxonLayer interface does not need to include accessors to all the basic stuff.
func (*Layer) AvgDifFmTrgAvg ¶ added in v1.6.0
AvgDifFmTrgAvg updates neuron-level AvgDif values from AvgPct - TrgAvg which is then used for synaptic scaling of LWt values in Prjn SynScale.
func (*Layer) AvgMaxVarByPool ¶ added in v1.6.0
AvgMaxVarByPool returns the average and maximum value of given variable for given pool index (0 = entire layer, 1.. are subpools for 4D only). Uses fast index-based variable access.
func (*Layer) BGThalDefaults ¶ added in v1.7.18
func (ly *Layer) BGThalDefaults()
func (*Layer) BLADefaults ¶ added in v1.7.11
func (ly *Layer) BLADefaults()
func (*Layer) BetweenLayerGi ¶ added in v1.7.5
BetweenLayerGi computes inhibition Gi between layers
func (*Layer) BetweenLayerGiMax ¶ added in v1.7.5
BetweenLayerGiMax returns max gi value for input maxGi vs the given layIdx layer
func (*Layer) CTDefParamsFast ¶ added in v1.7.18
func (ly *Layer) CTDefParamsFast()
CTDefParamsFast sets fast time-integration parameters for CTLayer. This is what works best in the deep_move 1 trial history case, vs Medium and Long
func (*Layer) CTDefParamsLong ¶ added in v1.7.18
func (ly *Layer) CTDefParamsLong()
CTDefParamsLong sets long time-integration parameters for CTLayer. This is what works best in the deep_music test case integrating over long time windows, compared to Medium and Fast.
func (*Layer) CTDefParamsMedium ¶ added in v1.7.18
func (ly *Layer) CTDefParamsMedium()
CTDefParamsMedium sets medium time-integration parameters for CTLayer. This is what works best in the FSA test case, compared to Fast (deep_move) and Long (deep_music) time integration.
func (*Layer) CeMDefaults ¶ added in v1.7.11
func (ly *Layer) CeMDefaults()
func (*Layer) ClearTargExt ¶ added in v1.2.65
ClearTargExt clears external inputs Ext that were set from target values Target. This can be called to simulate alpha cycles within theta cycles, for example.
func (*Layer) CorSimFmActs ¶ added in v1.3.35
CorSimFmActs computes the correlation similarity (centered cosine aka normalized dot product) in activation state between minus and plus phases.
func (*Layer) CostEst ¶
CostEst returns the estimated computational cost associated with this layer, separated by neuron-level and synapse-level, in arbitrary units where cost per synapse is 1. Neuron-level computation is more expensive but there are typically many fewer neurons, so in larger networks, synaptic costs tend to dominate. Neuron cost is estimated from TimerReport output for large networks.
func (*Layer) CycleNeuron ¶ added in v1.6.0
CycleNeuron does one cycle (msec) of updating at the neuron level. Called directly by Network; iterates over data.
func (*Layer) CyclePost ¶
CyclePost is called after the standard Cycle update, as a separate network layer loop. This is reserved for any kind of special ad-hoc types that need to do something special after Spiking is finally computed and Sent. Typically used for updating global values in the Context state, such as updating a neuromodulatory signal such as dopamine. Any updates here must also be done in gpu_hlsl/gpu_cyclepost.hlsl
func (*Layer) DTrgSubMean ¶ added in v1.6.0
DTrgSubMean subtracts the mean from DTrgAvg values. Called by TrgAvgFmD.
func (*Layer) DWt ¶
DWt computes the weight change (learning), based on synaptically-integrated spiking, computed at the Theta cycle interval. This is the trace version for hidden units, and uses syn CaP - CaD for targets.
func (*Layer) DWtSubMean ¶ added in v1.2.23
DWtSubMean computes subtractive normalization of the DWts
func (*Layer) DecayState ¶
DecayState decays activation state by given proportion (default decay values are ly.Params.Acts.Decay.Act, Glong)
func (*Layer) DecayStateLayer ¶ added in v1.7.0
DecayStateLayer does layer-level decay, but not neuron level
func (*Layer) DecayStateNeuronsAll ¶ added in v1.8.0
DecayStateNeuronsAll decays neural activation state by given proportion (default decay values are ly.Params.Acts.Decay.Act, Glong, AHP) for all data parallel indexes. Does not decay pool or layer state. This is used for minus phase of Pulvinar layers to clear state in prep for driver plus phase.
func (*Layer) DecayStatePool ¶
DecayStatePool decays activation state by given proportion in given sub-pool index (0 based)
func (*Layer) GInteg ¶ added in v1.5.12
GInteg integrates conductances G over time (Ge, NMDA, etc). Calls SpecialGFmRawSyn and GiInteg.
func (*Layer) GPDefaults ¶ added in v1.7.0
func (ly *Layer) GPDefaults()
func (*Layer) GPPostBuild ¶ added in v1.7.0
func (ly *Layer) GPPostBuild()
func (*Layer) GPiDefaults ¶ added in v1.7.0
func (ly *Layer) GPiDefaults()
func (*Layer) GatedFmSpkMax ¶ added in v1.7.0
GatedFmSpkMax updates the Gated state in Pools of the given layer, based on Avg SpkMax being above the given threshold. Returns true if any gated, along with the pool index if a 4D layer (0 = first).
func (*Layer) GatherSpikes ¶ added in v1.7.2
GatherSpikes integrates G*Raw and G*Syn values for given recv neuron while integrating the Recv Prjn-level GSyn integrated values.
func (*Layer) GiFmSpikes ¶ added in v1.5.12
GiFmSpikes gets the Spike, GeRaw, and GeExt from neurons in the pools, where Spike drives FBsRaw (the raw feedback signal) and GeRaw drives FFsRaw (the aggregate feedforward excitatory spiking input). GeExt represents extra excitatory input from other sources. It then integrates new inhibitory conductances from these, at the layer and pool level. Called separately by Network.CycleImpl on all Layers. Also updates all AvgMax values at the Cycle level.
func (*Layer) HasPoolInhib ¶ added in v1.2.79
HasPoolInhib returns true if the layer is using pool-level inhibition (implies 4D too). This is the proper check for using pool-level target average activations, for example.
func (*Layer) InitActAvg ¶
InitActAvg initializes the running-average activation values that drive learning and the longer time averaging values.
func (*Layer) InitActAvgLayer ¶ added in v1.8.0
InitActAvgLayer initializes the running-average activation values that drive learning and the longer time averaging values. version with just overall layer-level inhibition.
func (*Layer) InitActAvgPools ¶ added in v1.8.0
InitActAvgPools initializes the running-average activation values that drive learning and the longer time averaging values. version with pooled inhibition.
func (*Layer) InitActs ¶
InitActs fully initializes activation state -- only called automatically during InitWts
func (*Layer) InitExt ¶
InitExt initializes external input state. Should be called prior to ApplyExt on all layers receiving Ext input.
func (*Layer) InitGScale ¶ added in v1.2.37
InitGScale computes the initial scaling factor for synaptic input conductances G, stored in GScale.Scale, based on sending layer initial activation.
func (*Layer) InitPrjnGBuffs ¶ added in v1.5.12
InitPrjnGBuffs initializes the projection-level conductance buffers and conductance integration values for receiving projections in this layer.
func (*Layer) InitWtSym ¶
InitWtSym initializes the weight symmetry -- higher layers copy weights from lower layers.
func (*Layer) InitWts ¶
InitWts initializes the weight values in the network, i.e., resetting learning. Also calls InitActs.
func (*Layer) LDTDefaults ¶ added in v1.7.18
func (ly *Layer) LDTDefaults()
func (*Layer) LDTPostBuild ¶ added in v1.7.18
func (ly *Layer) LDTPostBuild()
func (*Layer) LDTSrcLayAct ¶ added in v1.7.19
LDTSrcLayAct returns the overall activity level for the given source layer for purposes of computing the ACh salience value. Typically the input is a superior colliculus (SC) layer that rapidly accommodates after the onset of a stimulus. Uses lpl.AvgMax.CaSpkP.Cycle.Max as the layer activity measure.
func (*Layer) LRateMod ¶ added in v1.6.13
LRateMod sets the LRate modulation parameter for Prjns, which is for dynamic modulation of learning rate (see also LRateSched). Updates the effective learning rate factor accordingly.
func (*Layer) LRateSched ¶ added in v1.6.13
LRateSched sets the schedule-based learning rate multiplier. See also LRateMod. Updates the effective learning rate factor accordingly.
func (*Layer) LesionNeurons ¶
LesionNeurons lesions (sets the Off flag) the given proportion (0-1) of neurons in the layer and returns the number of neurons lesioned. Emits an error if prop > 1, as an indication that a percent might have been passed instead of a proportion.
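For instance, a small sketch (the exact signature is an assumption; note that the argument is a proportion, not a percent):

// lesionQuarter turns off 25% of the units in the given layer and
// returns how many were lesioned.
func lesionQuarter(ly *axon.Layer) int {
	return ly.LesionNeurons(0.25) // 0.25, not 25
}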
func (*Layer) LocalistErr2D ¶ added in v1.5.3
LocalistErr2D decodes a 2D layer with Y axis = redundant units, X = localist units, returning the indexes of the maximally activated localist value in the minus and plus phase activities, and whether these are the same or different (err = different). Returns one result per data parallel index ([ctx.NetIdxs.NData]).
func (*Layer) LocalistErr4D ¶ added in v1.5.3
LocalistErr4D decodes a 4D layer with each pool representing a localist value. Returns the flat 1D indexes of the max activated localist value in the minus and plus phase activities, and whether these are the same or different (err = different)
func (*Layer) MatrixDefaults ¶ added in v1.7.0
func (ly *Layer) MatrixDefaults()
func (*Layer) MatrixGated ¶ added in v1.7.0
MatrixGated is called after the standard PlusPhase, on the CPU, once the Pool info has been downloaded from the GPU, to set the Gated flag based on SpkMax activity.
func (*Layer) MatrixPostBuild ¶ added in v1.7.0
func (ly *Layer) MatrixPostBuild()
func (*Layer) MinusPhase ¶ added in v1.2.63
MinusPhase does updating at end of the minus phase
func (*Layer) MinusPhasePost ¶ added in v1.7.10
MinusPhasePost does special algorithm processing at end of minus
func (*Layer) NewState ¶ added in v1.2.63
NewState handles all initialization at start of new input pattern. Does NOT call InitGScale()
func (*Layer) NewStateNeurons ¶ added in v1.8.0
NewStateNeurons only calls the neurons part of new state -- for misbehaving GPU
func (*Layer) Object ¶ added in v1.7.0
Object returns the object with parameters to be set by emer.Params
func (*Layer) PTMaintDefaults ¶ added in v1.7.2
func (ly *Layer) PTMaintDefaults()
func (*Layer) PTNotMaintDefaults ¶ added in v1.7.11
func (ly *Layer) PTNotMaintDefaults()
func (*Layer) PVLVPostBuild ¶ added in v1.7.11
func (ly *Layer) PVLVPostBuild()
PVLVPostBuild is used for BLA, VSPatch, and PVLayer types to set NeuroMod params
func (*Layer) PctUnitErr ¶
PctUnitErr returns the proportion of units where the thresholded value of Target (for Target or Compare layer types) or ActP does not match that of ActM. If Act > ly.Params.Acts.Clamp.ErrThr, the effective activity is 1, else 0, which makes this robust to noisy activations. Returns one result per data parallel index ([ctx.NetIdxs.NData]).
func (*Layer) PlusPhaseActAvg ¶ added in v1.8.0
PlusPhaseActAvg updates ActAvg and DTrgAvg at the plus phase. Note: this could be done on the GPU, but it is not worth it at this point.
func (*Layer) PlusPhasePost ¶ added in v1.7.0
PlusPhasePost does special algorithm processing at end of plus
func (*Layer) PlusPhaseStart ¶ added in v1.7.10
PlusPhaseStart does updating at the start of the plus phase: applies Target inputs as External inputs.
func (*Layer) PoolGiFmSpikes ¶ added in v1.5.12
PoolGiFmSpikes computes inhibition Gi from Spikes within sub-pools, and also between different layers based on the LayInhib* indexes. Must happen after LayPoolGiFmSpikes has been called.
func (*Layer) PostBuild ¶ added in v1.7.0
func (ly *Layer) PostBuild()
PostBuild performs special post-Build() configuration steps for specific algorithms, using configuration data set in BuildConfig during the ConfigNet process.
func (*Layer) PostSpike ¶ added in v1.7.0
PostSpike does updates at neuron level after spiking has been computed. This is where special layer types add extra code. It also updates the CaSpkPCyc stats. Called directly by Network, iterates over data.
func (*Layer) PulvPostBuild ¶ added in v1.7.0
func (ly *Layer) PulvPostBuild()
PulvPostBuild does post-Build config of Pulvinar based on BuildConfig options
func (*Layer) PulvinarDriver ¶ added in v1.7.0
func (*Layer) RWDaPostBuild ¶ added in v1.7.0
func (ly *Layer) RWDaPostBuild()
RWDaPostBuild does post-Build config
func (*Layer) ReadWtsJSON ¶
ReadWtsJSON reads the weights from this layer from the receiver-side perspective in a JSON text format. This is for a set of weights that were saved *for one layer only* and is not used for the network-level ReadWtsJSON, which reads into a separate structure -- see SetWts method.
func (*Layer) STNDefaults ¶ added in v1.7.0
func (ly *Layer) STNDefaults()
func (*Layer) SendSpike ¶
SendSpike sends spike to receivers for all neurons that spiked last step in Cycle, integrated the next time around. Called directly by Network, iterates over data.
func (*Layer) SetSubMean ¶ added in v1.6.11
SetSubMean sets the SubMean parameters in all the layers in the network. trgAvg is for Learn.TrgAvgAct.SubMean; prjn is for the prjns' Learn.Trace.SubMean. In both cases, it is generally best to have both parameters set to 0 at the start of learning.
func (*Layer) SlowAdapt ¶ added in v1.2.37
SlowAdapt is the layer-level slow adaptation functions. Calls AdaptInhib and AvgDifFmTrgAvg for Synaptic Scaling. Does NOT call projection-level methods.
func (*Layer) SpikeFmG ¶ added in v1.6.0
SpikeFmG computes Vm from Ge, Gi, Gl conductances and then Spike from that
func (*Layer) SpkSt1 ¶ added in v1.5.10
SpkSt1 saves current activation state in SpkSt1 variables (using CaP)
func (*Layer) SpkSt2 ¶ added in v1.5.10
SpkSt2 saves current activation state in SpkSt2 variables (using CaP)
func (*Layer) SynCa ¶ added in v1.3.1
SynCa updates synaptic calcium based on spiking, for SynSpkTheta mode. Optimized version only updates at point of spiking, threaded over neurons. Called directly by Network, iterates over data.
func (*Layer) SynFail ¶ added in v1.2.92
SynFail updates synaptic weight failure only -- normally done as part of DWt and WtFmDWt, but this call can be used during testing to update failing synapses.
func (*Layer) TDDaPostBuild ¶ added in v1.7.0
func (ly *Layer) TDDaPostBuild()
TDDaPostBuild does post-Build config
func (*Layer) TDIntegPostBuild ¶ added in v1.7.0
func (ly *Layer) TDIntegPostBuild()
TDIntegPostBuild does post-Build config
func (*Layer) TargToExt ¶ added in v1.2.65
TargToExt sets external input Ext from target values Target. This is done at the end of MinusPhase to allow targets to drive activity in the plus phase. It can be called separately to simulate alpha cycles within theta cycles, for example.
func (*Layer) TestVals ¶ added in v1.8.0
TestVals returns a map of key vals for testing. ctrKey is a key of counters used to contextualize values.
func (*Layer) TrgAvgFmD ¶ added in v1.2.32
TrgAvgFmD updates TrgAvg from DTrgAvg -- called in PlusPhasePost
func (*Layer) UnLesionNeurons ¶
func (ly *Layer) UnLesionNeurons()
UnLesionNeurons unlesions (clears the Off flag) for all neurons in the layer
func (*Layer) Update ¶ added in v1.7.0
func (ly *Layer) Update()
Update is an interface for generically updating after edits. This should be used only for the values on the struct itself. UpdateParams is used to update all parameters, including those of the Prjns.
func (*Layer) UpdateExtFlags ¶
UpdateExtFlags updates the neuron flags for external input based on current layer Type field -- call this if the Type has changed since the last ApplyExt* method call.
func (*Layer) UpdateParams ¶
func (ly *Layer) UpdateParams()
UpdateParams updates all params given any changes that might have been made to individual values including those in the receiving projections of this layer. This is not called Update because it is not just about the local values in the struct.
func (*Layer) VSPatchAdaptThr ¶ added in v1.8.1
VSPatchAdaptThr adapts the learning threshold
func (*Layer) WriteWtsJSON ¶
WriteWtsJSON writes the weights from this layer from the receiver-side perspective in a JSON text format. We build in the indentation logic to make it much faster and more efficient.
func (*Layer) WtFmDWtLayer ¶ added in v1.6.0
WtFmDWtLayer does the weight update at the layer level. It does NOT call the main projection-level WtFmDWt method. In the base type, it only calls TrgAvgFmD.
type LayerBase ¶ added in v1.4.5
type LayerBase struct { AxonLay AxonLayer `` /* 297-byte string literal not displayed */ Network *Network `` /* 141-byte string literal not displayed */ Nm string `` /* 151-byte string literal not displayed */ Cls string `desc:"Class is for applying parameter styles, can be space separated multiple tags"` Off bool `desc:"inactivate this layer -- allows for easy experimentation"` Shp etensor.Shape `` /* 219-byte string literal not displayed */ Typ LayerTypes `` /* 161-byte string literal not displayed */ Rel relpos.Rel `tableview:"-" view:"inline" desc:"Spatial relationship to other layer, determines positioning"` Ps mat32.Vec3 `` /* 168-byte string literal not displayed */ Idx int `` /* 278-byte string literal not displayed */ NNeurons uint32 `view:"-" desc:"number of neurons in the layer"` NeurStIdx uint32 `view:"-" inactive:"-" desc:"starting index of neurons for this layer within the global Network list"` NPools uint32 `view:"-" desc:"number of pools based on layer shape -- at least 1 for layer pool + 4D subpools"` MaxData uint32 `` /* 167-byte string literal not displayed */ RepIxs []int `` /* 128-byte string literal not displayed */ RepShp etensor.Shape `view:"-" desc:"shape of representative units in the layer -- if RepIxs is empty or .Shp is nil, use overall layer shape"` RcvPrjns AxonPrjns `desc:"list of receiving projections into this layer from other layers"` SndPrjns AxonPrjns `desc:"list of sending projections from this layer to other layers"` Vals []LayerVals `` /* 135-byte string literal not displayed */ Pools []Pool `` /* 341-byte string literal not displayed */ Exts []float32 `view:"-" desc:"[Neurons][Data] external input values for this layer, allocated from network global Exts slice"` BuildConfig map[string]string `` /* 537-byte string literal not displayed */ DefParams params.Params `` /* 324-byte string literal not displayed */ ParamsHistory params.HistoryImpl `tableview:"-" desc:"provides a history of parameters applied to the layer"` }
LayerBase manages the structural elements of the layer, which are common to any Layer type. The Base does not have algorithm-specific methods and parameters, so it can be easily reused for different algorithms, and cleanly separates the algorithm-specific code. Any dependency on the algorithm-level Layer can be captured in the AxonLayer interface, accessed via the AxonLay field.
func (*LayerBase) ApplyDefParams ¶ added in v1.7.18
func (ly *LayerBase) ApplyDefParams()
ApplyDefParams applies the DefParams default parameters, if set. Called by Layer.Defaults().
func (*LayerBase) ApplyParams ¶ added in v1.4.5
ApplyParams applies the given parameter style Sheet to this layer and its recv projections. Calls UpdateParams on anything set to ensure derived parameters are all updated. If setMsg is true, then a message is printed to confirm each parameter that is set. It always prints a message if a parameter fails to be set. Returns true if any params were set, and an error if there were any errors.
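A sketch of applying a small styled parameter Sheet to one layer, following the params package conventions; the selector and parameter path are illustrative only:

// applyLayerGi bumps layer inhibition via a params.Sheet, printing each
// parameter that gets set (setMsg = true).
func applyLayerGi(ly *axon.Layer) error {
	sheet := params.Sheet{
		{Sel: "Layer", Desc: "stronger layer inhibition",
			Params: params.Params{
				"Layer.Inhib.Layer.Gi": "1.1",
			}},
	}
	_, err := ly.ApplyParams(&sheet, true)
	return err
}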
func (*LayerBase) Build ¶ added in v1.7.0
Build constructs the layer state, including calling Build on the projections
func (*LayerBase) BuildConfigByName ¶ added in v1.7.0
BuildConfigByName looks for given BuildConfig option by name, and reports & returns an error if not found.
func (*LayerBase) BuildConfigFindLayer ¶ added in v1.7.0
BuildConfigFindLayer looks for a BuildConfig of the given name and, if found, looks for a layer with the corresponding name. If mustName is true, then an error is logged if the BuildConfig name does not exist. An error is always logged if the layer name is not found. -1 is returned in any case of not found.
func (*LayerBase) BuildPools ¶ added in v1.7.0
BuildPools builds the inhibitory pools structures -- nu = number of units in layer
func (*LayerBase) BuildPrjns ¶ added in v1.7.0
BuildPrjns builds the projections, send-side
func (*LayerBase) BuildSubPools ¶ added in v1.7.0
BuildSubPools initializes neuron start / end indexes for sub-pools
func (*LayerBase) Idx4DFrom2D ¶ added in v1.4.5
func (*LayerBase) InitName ¶ added in v1.4.5
InitName MUST be called to initialize the layer's pointer to itself as an emer.Layer which enables the proper interface methods to be called. Also sets the name, and the parent network that this layer belongs to (which layers may want to retain).
func (*LayerBase) LayerType ¶ added in v1.7.0
func (ly *LayerBase) LayerType() LayerTypes
func (*LayerBase) NRecvPrjns ¶ added in v1.4.5
func (*LayerBase) NSendPrjns ¶ added in v1.4.5
func (*LayerBase) NSubPools ¶ added in v1.7.9
NSubPools returns the number of sub-pools of neurons according to the shape parameters. 2D shapes have 0 sub pools. For a 4D shape, the pools are the first set of 2 Y,X dims and then the neurons within the pools are the 2nd set of 2 Y,X dims.
func (*LayerBase) NeurStartIdx ¶ added in v1.6.0
func (*LayerBase) NonDefaultParams ¶ added in v1.4.5
NonDefaultParams returns a listing of all parameters in the Layer that are not at their default values -- useful for setting param styles etc.
func (*LayerBase) ParamsApplied ¶ added in v1.7.11
ParamsApplied is just to satisfy History interface so reset can be applied
func (*LayerBase) ParamsHistoryReset ¶ added in v1.7.11
func (ly *LayerBase) ParamsHistoryReset()
ParamsHistoryReset resets parameter application history
func (*LayerBase) PlaceAbove ¶ added in v1.7.11
PlaceAbove positions the layer above the other layer, using default XAlign = Left, YAlign = Front alignment
func (*LayerBase) PlaceBehind ¶ added in v1.7.11
PlaceBehind positions the layer behind the other layer, with given spacing, using default XAlign = Left alignment
func (*LayerBase) PlaceRightOf ¶ added in v1.7.11
PlaceRightOf positions the layer to the right of the other layer, with given spacing, using default YAlign = Front alignment
func (*LayerBase) RecipToRecvPrjn ¶ added in v1.7.2
RecipToRecvPrjn finds the reciprocal projection to the given recv projection within the ly layer. i.e., where ly is instead the *sending* layer to same other layer B that is the sender of the rpj projection we're receiving from.
ly = A, other layer = B:
rpj: R=A <- S=B spj: S=A -> R=B
returns false if not found.
func (*LayerBase) RecipToSendPrjn ¶ added in v1.4.5
RecipToSendPrjn finds the reciprocal projection to the given sending projection within the ly layer. i.e., where ly is instead the *receiving* layer from same other layer B that is the receiver of the spj projection we're sending to.
ly = A, other layer = B:
spj: S=A -> R=B rpj: R=A <- S=B
returns false if not found.
func (*LayerBase) RecvNameTry ¶ added in v1.7.0
func (*LayerBase) RecvNameType ¶ added in v1.8.9
func (*LayerBase) RecvNameTypeTry ¶ added in v1.7.0
func (*LayerBase) RecvPrjnVals ¶ added in v1.8.0
func (ly *LayerBase) RecvPrjnVals(vals *[]float32, varNm string, sendLay emer.Layer, sendIdx1D int, prjnType string) error
RecvPrjnVals fills in values of the given synapse variable name, for the projection from the given sending layer and neuron 1D index, for all receiving neurons in this layer, into the given float32 slice (only resized if not big enough). prjnType is the string representation of the prjn type -- used if non-empty, which is useful when there are multiple projections between two layers. If the receiving neuron is not connected to the given sending layer or neuron, then the value is set to mat32.NaN(). Returns an error on an invalid var name or lack of a recv prjn (vals are always set to NaN on a prjn error).
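For example, collecting the receiving-side weights from one sending unit (the "Wt" variable name follows the standard synapse variables; unconnected entries come back as NaN):

// recvWtsFrom fills wts with the "Wt" value of every synapse in ly that
// receives from sending unit sendIdx in sendLay; an empty prjnType means
// the first matching projection is used.
func recvWtsFrom(ly, sendLay *axon.Layer, sendIdx int, wts *[]float32) error {
	return ly.RecvPrjnVals(wts, "Wt", sendLay, sendIdx, "")
}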
func (*LayerBase) RepShape ¶ added in v1.4.8
RepShape returns the shape to use for representative units
func (*LayerBase) SendNameTry ¶ added in v1.7.0
func (*LayerBase) SendNameType ¶ added in v1.8.9
func (*LayerBase) SendNameTypeTry ¶ added in v1.7.0
func (*LayerBase) SendPrjnVals ¶ added in v1.8.0
func (ly *LayerBase) SendPrjnVals(vals *[]float32, varNm string, recvLay emer.Layer, recvIdx1D int, prjnType string) error
SendPrjnVals fills in values of the given synapse variable name, for the projection into the given receiving layer and neuron 1D index, for all sending neurons in this layer, into the given float32 slice (only resized if not big enough). prjnType is the string representation of the prjn type -- used if non-empty, which is useful when there are multiple projections between two layers. If the sending neuron is not connected to the given receiving layer or neuron, then the value is set to mat32.NaN(). Returns an error on an invalid var name or lack of a recv prjn (vals are always set to NaN on a prjn error).
func (*LayerBase) SetBuildConfig ¶ added in v1.7.0
SetBuildConfig sets named configuration parameter to given string value to be used in the PostBuild stage -- mainly for layer names that need to be looked up and turned into indexes, after entire network is built.
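For example, the Matrix layer's dopamine sign is configured this way (see MatrixParams); a minimal sketch:

// configMatrixDAMod sets a BuildConfig option that PostBuild later resolves --
// here the dopamine modulation sign for a Matrix (Go) layer.
func configMatrixDAMod(mtxGo *axon.Layer) {
	mtxGo.SetBuildConfig("DAMod", "D1Mod") // read during MatrixPostBuild
}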
func (*LayerBase) SetRepIdxsShape ¶ added in v1.4.8
SetRepIdxsShape sets the RepIdxs and RepShape, given as a list of dimension sizes.
func (*LayerBase) SetShape ¶ added in v1.4.5
SetShape sets the layer shape and also uses default dim names
func (*LayerBase) UnitVal ¶ added in v1.8.0
UnitVal returns value of given variable name on given unit, using shape-based dimensional index
func (*LayerBase) UnitVal1D ¶ added in v1.8.0
UnitVal1D returns value of given variable index on given unit, using 1-dimensional index. returns NaN on invalid index. This is the core unit var access method used by other methods, so it is the only one that needs to be updated for derived layer types.
func (*LayerBase) UnitVals ¶ added in v1.8.0
UnitVals fills in values of given variable name on unit, for each unit in the layer, into given float32 slice (only resized if not big enough). Returns error on invalid var name.
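A small sketch of reading unit activations; whether a trailing data-parallel index argument is required depends on the version, and it is assumed here:

// layerActs reads the "Act" value of every unit in the layer into acts,
// for data-parallel index di (assumed argument).
func layerActs(ly *axon.Layer, acts *[]float32, di int) error {
	return ly.UnitVals(acts, "Act", di)
}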
func (*LayerBase) UnitValsRepTensor ¶ added in v1.8.0
UnitValsRepTensor fills in values of given variable name on unit for a smaller subset of representative units in the layer, into given tensor. This is used for computationally intensive stats or displays that work much better with a smaller number of units. The set of representative units are defined by SetRepIdxs -- all units are used if no such subset has been defined. If tensor is not already big enough to hold the values, it is set to RepShape to hold all the values if subset is defined, otherwise it calls UnitValsTensor and is identical to that. Returns error on invalid var name.
func (*LayerBase) UnitValsTensor ¶ added in v1.8.0
UnitValsTensor returns values of given variable name on unit for each unit in the layer, as a float32 tensor in same shape as layer units.
func (*LayerBase) UnitVarIdx ¶ added in v1.8.0
UnitVarIdx returns the index of given variable within the Neuron, according to *this layer's* UnitVarNames() list (using a map to lookup index), or -1 and error message if not found.
func (*LayerBase) UnitVarNames ¶ added in v1.8.0
UnitVarNames returns a list of variable names available on the units in this layer
func (*LayerBase) UnitVarNum ¶ added in v1.8.0
UnitVarNum returns the number of Neuron-level variables for this layer. This is needed for extending indexes in derived types.
func (*LayerBase) UnitVarProps ¶ added in v1.8.0
UnitVarProps returns properties for variables
type LayerIdxs ¶ added in v1.7.0
type LayerIdxs struct { LayIdx uint32 `inactive:"+" desc:"layer index"` MaxData uint32 `inactive:"+" desc:"maximum number of data parallel elements"` PoolSt uint32 `inactive:"+" desc:"start of pools for this layer -- first one is always the layer-wide pool"` NeurSt uint32 `inactive:"+" desc:"start of neurons for this layer in global array (same as Layer.NeurStIdx)"` NeurN uint32 `inactive:"+" desc:"number of neurons in layer"` RecvSt uint32 `inactive:"+" desc:"start index into RecvPrjns global array"` RecvN uint32 `inactive:"+" desc:"number of recv projections"` SendSt uint32 `inactive:"+" desc:"start index into SendPrjns global array"` SendN uint32 `inactive:"+" desc:"number of send projections"` ExtsSt uint32 `` /* 144-byte string literal not displayed */ ShpPlY int32 `inactive:"+" desc:"layer shape Pools Y dimension -- 1 for 2D"` ShpPlX int32 `inactive:"+" desc:"layer shape Pools X dimension -- 1 for 2D"` ShpUnY int32 `inactive:"+" desc:"layer shape Units Y dimension"` ShpUnX int32 `inactive:"+" desc:"layer shape Units X dimension"` // contains filtered or unexported fields }
LayerIdxs contains index access into network global arrays for GPU.
func (*LayerIdxs) ExtIdx ¶ added in v1.8.0
ExtIdx returns the index for accessing Exts values: [Neuron][Data] Neuron is *layer-relative* lni index -- add the ExtsSt for network level access.
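A sketch of the indexing arithmetic described above (the ExtIdx argument types are assumptions):

// globalExtIdx returns the index into the network-global Exts slice for
// layer-relative neuron lni and data index di: ExtsSt + ExtIdx(lni, di).
func globalExtIdx(lp *axon.LayerParams, lni, di uint32) uint32 {
	return lp.Idxs.ExtsSt + lp.Idxs.ExtIdx(lni, di)
}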
type LayerInhibIdxs ¶ added in v1.7.10
type LayerInhibIdxs struct { Idx1 int32 `` /* 147-byte string literal not displayed */ Idx2 int32 `` /* 147-byte string literal not displayed */ Idx3 int32 `` /* 147-byte string literal not displayed */ Idx4 int32 `` /* 148-byte string literal not displayed */ }
LayerInhibIdxs contains indexes of layers for between-layer inhibition
type LayerParams ¶ added in v1.7.0
type LayerParams struct { LayType LayerTypes `` /* 140-byte string literal not displayed */ Acts ActParams `view:"add-fields" desc:"Activation parameters and methods for computing activations"` Inhib InhibParams `view:"add-fields" desc:"Inhibition parameters and methods for computing layer-level inhibition"` LayInhib LayerInhibIdxs `` /* 158-byte string literal not displayed */ Learn LearnNeurParams `view:"add-fields" desc:"Learning parameters and methods that operate at the neuron level"` Bursts BurstParams `` /* 181-byte string literal not displayed */ CT CTParams `` /* 290-byte string literal not displayed */ Pulv PulvParams `` /* 241-byte string literal not displayed */ LDT LDTParams `` /* 213-byte string literal not displayed */ RWPred RWPredParams `` /* 170-byte string literal not displayed */ RWDa RWDaParams `` /* 177-byte string literal not displayed */ TDInteg TDIntegParams `viewif:"LayType=TDIntegLayer" view:"inline" desc:"parameterizes TD reward integration layer"` TDDa TDDaParams `` /* 180-byte string literal not displayed */ VSPatch VSPatchParams `viewif:"LayType=VSPatchLayer" view:"inline" desc:"parameters for VSPatch learning"` Matrix MatrixParams `` /* 144-byte string literal not displayed */ GP GPParams `viewif:"LayType=GPLayer" view:"inline" desc:"type of GP Layer."` Idxs LayerIdxs `desc:"recv and send projection array access info"` // contains filtered or unexported fields }
LayerParams contains all of the layer parameters. These values must remain constant over the course of computation. On the GPU, they are loaded into a uniform.
func (*LayerParams) AllParams ¶ added in v1.7.0
func (ly *LayerParams) AllParams() string
AllParams returns a listing of all parameters in the Layer
func (*LayerParams) ApplyExtFlags ¶ added in v1.7.9
func (ly *LayerParams) ApplyExtFlags(clearMask, setMask *NeuronFlags, toTarg *bool)
ApplyExtFlags gets the clear mask and set mask for updating neuron flags based on layer type, and whether input should be applied to Target (else Ext)
func (*LayerParams) ApplyExtVal ¶ added in v1.7.9
func (ly *LayerParams) ApplyExtVal(ctx *Context, ni, di uint32, val float32)
ApplyExtVal applies given external value to given neuron, setting flags based on type of layer. Should only be called on Input, Target, Compare layers. Negative values are not valid, and will be interpreted as missing inputs.
func (*LayerParams) AvgGeM ¶ added in v1.7.9
func (ly *LayerParams) AvgGeM(ctx *Context, vals *LayerVals, geIntMinusMax, giIntMinusMax float32)
AvgGeM computes the average and max GeInt, GiInt in minus phase (AvgMaxGeM, AvgMaxGiM) stats, updated in MinusPhase, using values that already max across NData.
func (*LayerParams) CTDefaults ¶ added in v1.7.0
func (ly *LayerParams) CTDefaults()
func (*LayerParams) CyclePostCeMLayer ¶ added in v1.7.18
func (ly *LayerParams) CyclePostCeMLayer(ctx *Context, di uint32, lpl *Pool)
func (*LayerParams) CyclePostLDTLayer ¶ added in v1.7.18
func (ly *LayerParams) CyclePostLDTLayer(ctx *Context, di uint32, vals *LayerVals, srcLay1Act, srcLay2Act, srcLay3Act, srcLay4Act float32)
func (*LayerParams) CyclePostLayer ¶ added in v1.8.0
func (ly *LayerParams) CyclePostLayer(ctx *Context, di uint32, lpl *Pool, vals *LayerVals)
CyclePostLayer is called for all layer types
func (*LayerParams) CyclePostPTNotMaintLayer ¶ added in v1.7.18
func (ly *LayerParams) CyclePostPTNotMaintLayer(ctx *Context, di uint32, lpl *Pool)
func (*LayerParams) CyclePostRWDaLayer ¶ added in v1.7.9
func (ly *LayerParams) CyclePostRWDaLayer(ctx *Context, di uint32, vals *LayerVals, pvals *LayerVals)
func (*LayerParams) CyclePostTDDaLayer ¶ added in v1.7.9
func (ly *LayerParams) CyclePostTDDaLayer(ctx *Context, di uint32, vals *LayerVals, ivals *LayerVals)
func (*LayerParams) CyclePostTDIntegLayer ¶ added in v1.7.9
func (ly *LayerParams) CyclePostTDIntegLayer(ctx *Context, di uint32, vals *LayerVals, pvals *LayerVals)
func (*LayerParams) CyclePostTDPredLayer ¶ added in v1.7.9
func (ly *LayerParams) CyclePostTDPredLayer(ctx *Context, di uint32, vals *LayerVals)
func (*LayerParams) CyclePostVSPatchLayer ¶ added in v1.7.12
func (ly *LayerParams) CyclePostVSPatchLayer(ctx *Context, di uint32, pi int32, pl *Pool, vals *LayerVals)
note: needs to iterate over sub-pools in layer!
func (*LayerParams) CyclePostVTALayer ¶ added in v1.7.12
func (ly *LayerParams) CyclePostVTALayer(ctx *Context, di uint32)
func (*LayerParams) Defaults ¶ added in v1.7.0
func (ly *LayerParams) Defaults()
func (*LayerParams) DrivesDefaults ¶ added in v1.7.11
func (ly *LayerParams) DrivesDefaults()
func (*LayerParams) EffortDefaults ¶ added in v1.7.11
func (ly *LayerParams) EffortDefaults()
func (*LayerParams) GFmRawSyn ¶ added in v1.7.0
func (ly *LayerParams) GFmRawSyn(ctx *Context, ni, di uint32)
GFmRawSyn computes the overall Ge and GiSyn conductances for a neuron from the GeRaw and GeSyn values, including NMDA, VGCC, AMPA, and GABA-A channels.
func (*LayerParams) GNeuroMod ¶ added in v1.7.0
func (ly *LayerParams) GNeuroMod(ctx *Context, ni, di uint32, vals *LayerVals)
GNeuroMod does neuromodulation of conductances
func (*LayerParams) GatherSpikesInit ¶ added in v1.7.2
func (ly *LayerParams) GatherSpikesInit(ctx *Context, ni, di uint32)
GatherSpikesInit initializes G*Raw and G*Syn values for given neuron prior to integration
func (*LayerParams) GiInteg ¶ added in v1.7.0
func (ly *LayerParams) GiInteg(ctx *Context, ni, di uint32, pl *Pool, vals *LayerVals)
GiInteg adds Gi values from all sources including SubPool computed inhib and updates GABAB as well
func (*LayerParams) InitExt ¶ added in v1.7.9
func (ly *LayerParams) InitExt(ctx *Context, ni, di uint32)
InitExt initializes external input state for given neuron
func (*LayerParams) IsInput ¶ added in v1.7.9
func (ly *LayerParams) IsInput() bool
IsInput returns true if this layer is an Input layer. By default, returns true for layers of Type == axon.InputLayer. Used to prevent adapting of inhibition or TrgAvg values.
func (*LayerParams) IsInputOrTarget ¶ added in v1.7.9
func (ly *LayerParams) IsInputOrTarget() bool
IsInputOrTarget returns true if this layer is either an Input or a Target layer.
func (*LayerParams) IsLearnTrgAvg ¶ added in v1.7.9
func (ly *LayerParams) IsLearnTrgAvg() bool
IsLearnTrgAvg returns true if this layer has Learn.TrgAvgAct.On set for learning adjustments based on target average activity levels, and the layer is not an input or target layer.
func (*LayerParams) IsTarget ¶ added in v1.7.9
func (ly *LayerParams) IsTarget() bool
IsTarget returns true if this layer is a Target layer. By default, returns true for layers of Type == TargetLayer. Other Target layers include the TRCLayer in deep predictive learning. It is used in SynScale to not apply it to target layers. In both cases, Target layers are purely error-driven.
func (*LayerParams) LayPoolGiFmSpikes ¶ added in v1.7.0
func (ly *LayerParams) LayPoolGiFmSpikes(ctx *Context, lpl *Pool, vals *LayerVals)
LayPoolGiFmSpikes computes inhibition Gi from Spikes for layer-level pool. Also grabs updated Context NeuroMod values into LayerVals
func (*LayerParams) LearnTrgAvgErrLRate ¶ added in v1.7.9
func (ly *LayerParams) LearnTrgAvgErrLRate() float32
LearnTrgAvgErrLRate returns the effective error-driven learning rate for adjusting target average activity levels. This is 0 if !IsLearnTrgAvg() and otherwise is Learn.TrgAvgAct.ErrLRate
func (*LayerParams) MinusPhaseNeuron ¶ added in v1.7.9
func (ly *LayerParams) MinusPhaseNeuron(ctx *Context, ni, di uint32, pl *Pool, lpl *Pool, vals *LayerVals)
MinusPhaseNeuron does neuron level minus-phase updating
func (*LayerParams) MinusPhasePool ¶ added in v1.7.9
func (ly *LayerParams) MinusPhasePool(ctx *Context, pl *Pool)
func (*LayerParams) NewStateLayer ¶ added in v1.7.9
func (ly *LayerParams) NewStateLayer(ctx *Context, lpl *Pool, vals *LayerVals)
func (*LayerParams) NewStateLayerActAvg ¶ added in v1.8.0
func (ly *LayerParams) NewStateLayerActAvg(ctx *Context, vals *LayerVals, actMinusAvg, actPlusAvg float32)
NewStateLayerActAvg updates ActAvg.ActMAvg and ActPAvg based on current values that have been averaged across NData already.
func (*LayerParams) NewStateNeuron ¶ added in v1.7.9
func (ly *LayerParams) NewStateNeuron(ctx *Context, ni, di uint32, vals *LayerVals)
NewStateNeuron handles all initialization at start of new input pattern. Should already have presented the external input to the network at this point.
func (*LayerParams) NewStatePool ¶ added in v1.7.9
func (ly *LayerParams) NewStatePool(ctx *Context, pl *Pool)
func (*LayerParams) PTPredDefaults ¶ added in v1.7.11
func (ly *LayerParams) PTPredDefaults()
func (*LayerParams) PVDefaults ¶ added in v1.7.11
func (ly *LayerParams) PVDefaults()
func (*LayerParams) PlusPhaseNeuron ¶ added in v1.7.9
func (ly *LayerParams) PlusPhaseNeuron(ctx *Context, ni, di uint32, pl *Pool, lpl *Pool, vals *LayerVals)
PlusPhaseNeuron does neuron level plus-phase updating
func (*LayerParams) PlusPhaseNeuronSpecial ¶ added in v1.7.11
func (ly *LayerParams) PlusPhaseNeuronSpecial(ctx *Context, ni, di uint32, pl *Pool, lpl *Pool, vals *LayerVals)
PlusPhaseNeuronSpecial does special layer type neuron level plus-phase updating
func (*LayerParams) PlusPhasePool ¶ added in v1.7.9
func (ly *LayerParams) PlusPhasePool(ctx *Context, pl *Pool)
func (*LayerParams) PlusPhaseStartNeuron ¶ added in v1.7.10
func (ly *LayerParams) PlusPhaseStartNeuron(ctx *Context, ni, di uint32, pl *Pool, lpl *Pool, vals *LayerVals)
PlusPhaseStartNeuron does neuron level plus-phase start: applies Target inputs as External inputs.
func (*LayerParams) PostSpike ¶ added in v1.7.0
func (ly *LayerParams) PostSpike(ctx *Context, ni, di uint32, pl *Pool, vals *LayerVals)
PostSpike does updates at the neuron level after spiking has been computed. It is called *after* PostSpikeSpecial. It also updates the CaSpkPCyc stats.
func (*LayerParams) PostSpikeSpecial ¶ added in v1.7.0
func (ly *LayerParams) PostSpikeSpecial(ctx *Context, ni, di uint32, pl *Pool, lpl *Pool, vals *LayerVals)
PostSpikeSpecial does updates at neuron level after spiking has been computed. This is where special layer types add extra code. warning: if more than 1 layer writes to vals, gpu will fail!
func (*LayerParams) PulvDefaults ¶ added in v1.7.0
func (ly *LayerParams) PulvDefaults()
called in Defaults for Pulvinar layer type
func (*LayerParams) RWDefaults ¶ added in v1.7.0
func (ly *LayerParams) RWDefaults()
func (*LayerParams) RWPredDefaults ¶ added in v1.7.0
func (ly *LayerParams) RWPredDefaults()
func (*LayerParams) SpecialPostGs ¶ added in v1.7.0
func (ly *LayerParams) SpecialPostGs(ctx *Context, ni, di uint32, saveVal float32)
SpecialPostGs is used for special layer types to do things after the standard updates in GFmRawSyn. It is passed the saveVal from SpecialPreGs
func (*LayerParams) SpecialPreGs ¶ added in v1.7.0
func (ly *LayerParams) SpecialPreGs(ctx *Context, ni, di uint32, pl *Pool, vals *LayerVals, drvGe float32, nonDrvPct float32) float32
SpecialPreGs is used for special layer types to do things to the conductance values prior to doing the standard updates in GFmRawSyn. drvGe is for Pulvinar layers, the conductance from the driving neuron.
func (*LayerParams) SpikeFmG ¶ added in v1.7.0
func (ly *LayerParams) SpikeFmG(ctx *Context, ni, di uint32)
SpikeFmG computes Vm from Ge, Gi, Gl conductances and then Spike from that
func (*LayerParams) SubPoolGiFmSpikes ¶ added in v1.7.0
func (ly *LayerParams) SubPoolGiFmSpikes(ctx *Context, di uint32, pl *Pool, lpl *Pool, lyInhib bool, giMult float32)
SubPoolGiFmSpikes computes inhibition Gi from Spikes within a sub-pool. pl is guaranteed not to be the overall layer pool.
func (*LayerParams) TDDefaults ¶ added in v1.7.0
func (ly *LayerParams) TDDefaults()
func (*LayerParams) TDPredDefaults ¶ added in v1.7.0
func (ly *LayerParams) TDPredDefaults()
func (*LayerParams) USDefaults ¶ added in v1.7.11
func (ly *LayerParams) USDefaults()
func (*LayerParams) Update ¶ added in v1.7.0
func (ly *LayerParams) Update()
func (*LayerParams) UrgencyDefaults ¶ added in v1.7.18
func (ly *LayerParams) UrgencyDefaults()
func (*LayerParams) VSGatedDefaults ¶ added in v1.7.11
func (ly *LayerParams) VSGatedDefaults()
func (*LayerParams) VSPatchDefaults ¶ added in v1.7.11
func (ly *LayerParams) VSPatchDefaults()
type LayerTypes ¶ added in v1.7.0
type LayerTypes int32
LayerTypes is an axon-specific layer type enum, that encompasses all the different algorithm types supported. Class parameter styles automatically key off of these types. The first entries must be kept synchronized with the emer.LayerType, although we replace Hidden -> Super.
const ( // Super is a superficial cortical layer (lamina 2-3-4) // which does not receive direct input or targets. // In more generic models, it should be used as a Hidden layer, // and maps onto the Hidden type in emer.LayerType. SuperLayer LayerTypes = iota // Input is a layer that receives direct external input // in its Ext inputs. Biologically, it can be a primary // sensory layer, or a thalamic layer. InputLayer // Target is a layer that receives direct external target inputs // used for driving plus-phase learning. // Simple target layers are generally not used in more biological // models, which instead use predictive learning via Pulvinar // or related mechanisms. TargetLayer // Compare is a layer that receives external comparison inputs, // which drive statistics but do NOT drive activation // or learning directly. It is rarely used in axon. CompareLayer // CT are layer 6 corticothalamic projecting neurons, // which drive "top down" predictions in Pulvinar layers. // They maintain information over time via stronger NMDA // channels and use maintained prior state information to // generate predictions about current states forming on Super // layers that then drive PT (5IB) bursting activity, which // are the plus-phase drivers of Pulvinar activity. CTLayer // Pulvinar are thalamic relay cell neurons in the higher-order // Pulvinar nucleus of the thalamus, and functionally isomorphic // neurons in the MD thalamus, and potentially other areas. // These cells alternately reflect predictions driven by CT projections, // and actual outcomes driven by 5IB Burst activity from corresponding // PT or Super layer neurons that provide strong driving inputs. PulvinarLayer // TRNLayer is thalamic reticular nucleus layer for inhibitory competition // within the thalamus. TRNLayer // PTMaintLayer implements the subset of pyramidal tract (PT) // layer 5 intrinsic bursting (5IB) deep neurons that exhibit // robust, stable maintenance of activity over the duration of a // goal engaged window, modulated by basal ganglia (BG) disinhibitory // gating, supported by strong MaintNMDA channels and recurrent excitation. // The lateral PTSelfMaint projection uses MaintG to drive GMaintRaw input // that feeds into the stronger, longer MaintNMDA channels, // and the ThalToPT ModulatoryG projection from BGThalamus multiplicatively // modulates the strength of other inputs, such that only at the time of // BG gating are these strong enough to drive sustained active maintenance. // Use Act.Dend.ModGain to parameterize. PTMaintLayer // PTPredLayer implements the subset of pyramidal tract (PT) // layer 5 intrinsic bursting (5IB) deep neurons that combine // modulatory input from PTMaintLayer sustained maintenance and // CTLayer dynamic predictive learning that helps to predict // state changes during the period of active goal maintenance. // This layer provides the primary input to VSPatch US-timing // prediction layers, and other layers that require predictive dynamic PTPredLayer // PTNotMaintLayer implements a tonically active layer that is inhibited // by the PTMaintLayer, thereby providing an active representation of // the *absence* of maintained PT activity, which is useful for driving // appropriate actions (e.g., exploration) when not in goal-engaged mode. PTNotMaintLayer // MatrixLayer represents the matrisome medium spiny neurons (MSNs) // that are the main Go / NoGo gating units in BG. // These are strongly modulated by phasic dopamine: D1 = Go, D2 = NoGo. 
MatrixLayer // STNLayer represents subthalamic nucleus neurons, with two subtypes: // STNp are more strongly driven and get over bursting threshold, driving strong, // rapid activation of the KCa channels, causing a long pause in firing, which // creates a window during which GPe dynamics resolve Go vs. No balance. // STNs are more weakly driven and thus more slowly activate KCa, resulting in // a longer period of activation, during which the GPi is inhibited to prevent // premature gating based only MtxGo inhibition -- gating only occurs when // GPeIn signal has had a chance to integrate its MtxNo inputs. STNLayer // GPLayer represents a globus pallidus layer in the BG, including: // GPeOut, GPeIn, GPeTA (arkypallidal), and GPi. // Typically just a single unit per Pool representing a given stripe. GPLayer // BGThalLayer represents a BG gated thalamic layer, // which receives BG gating in the form of an // inhibitory projection from GPi. Located // mainly in the Ventral thalamus: VA / VM / VL, // and also parts of MD mediodorsal thalamus. BGThalLayer // VSGated represents explicit coding of VS gating status: // JustGated and HasGated (since last US or failed predicted US), // For visualization and / or motor action signaling. VSGatedLayer // BLALayer represents a basolateral amygdala layer // which learns to associate arbitrary stimuli (CSs) // with behaviorally salient outcomes (USs) BLALayer // CeMLayer represents a central nucleus of the amygdala layer. CeMLayer // VSPatchLayer represents a ventral striatum patch layer, // which learns to represent the expected amount of dopamine reward // and projects both directly with shunting inhibition to the VTA // and indirectly via the LHb / RMTg to cancel phasic dopamine firing // to expected rewards (i.e., reward prediction error). VSPatchLayer // LHbLayer represents the lateral habenula, which drives dipping // in the VTA. It tracks the ContextPVLV.LHb values for // visualization purposes -- updated by VTALayer. LHbLayer // DrivesLayer represents the Drives in PVLV framework. // It tracks the ContextPVLV.Drives values for // visualization and predictive learning purposes. DrivesLayer // EffortLayer represents the Effort factor in PVLV framework. // It tracks the ContextPVLV.Effort.Disc value for // visualization and predictive learning purposes. EffortLayer // UrgencyLayer represents the Urgency factor in PVLV framework. // It tracks the ContextPVLV.Urgency.Urge value for // visualization and predictive learning purposes. UrgencyLayer // USLayer represents a US unconditioned stimulus layer (USpos or USneg). // It tracks the ContextPVLV.USpos or USneg, for visualization // and predictive learning purposes. Actual US inputs are set in PVLV. USLayer // PVLayer represents a PV primary value layer (PVpos or PVneg) representing // the total primary value as a function of US inputs, drives, and effort. // It tracks the ContextPVLV.VTA.PVpos, PVneg values for // visualization and predictive learning purposes. PVLayer // LDTLayer represents the laterodorsal tegmentum layer, which // is the primary limbic ACh (acetylcholine) driver to other ACh: // BG cholinergic interneurons (CIN) and nucleus basalis ACh areas. // The phasic ACh release signals reward salient inputs from CS, US // and US omssion, and it drives widespread disinhibition of BG gating // and VTA DA firing. 
// It receives excitation from superior colliculus which computes // a temporal derivative (stimulus specific adaptation, SSA) // of sensory inputs, and inhibitory input from OFC, ACC driving // suppression of distracting inputs during goal-engaged states. LDTLayer // VTALayer represents the ventral tegmental area, which releases // dopamine. It calls the ContextPVLV.VTA methods, // and tracks resulting DA for visualization purposes. VTALayer // RewLayer represents positive or negative reward values across 2 units, // showing spiking rates for each, and Act always represents signed value. RewLayer // RWPredLayer computes reward prediction for a simple Rescorla-Wagner // learning dynamic (i.e., PV learning in the PVLV framework). // Activity is computed as linear function of excitatory conductance // (which can be negative -- there are no constraints). // Use with RWPrjn which does simple delta-rule learning on minus-plus. RWPredLayer // RWDaLayer computes a dopamine (DA) signal based on a simple Rescorla-Wagner // learning dynamic (i.e., PV learning in the PVLV framework). // It computes difference between r(t) and RWPred values. // r(t) is accessed directly from a Rew layer -- if no external input then no // DA is computed -- critical for effective use of RW only for PV cases. // RWPred prediction is also accessed directly from Rew layer to avoid any issues. RWDaLayer // TDPredLayer is the temporal differences reward prediction layer. // It represents estimated value V(t) in the minus phase, and computes // estimated V(t+1) based on its learned weights in plus phase, // using the TDPredPrjn projection type for DA modulated learning. TDPredLayer // TDIntegLayer is the temporal differences reward integration layer. // It represents estimated value V(t) from prior time step in the minus phase, // and estimated discount * V(t+1) + r(t) in the plus phase. // It gets Rew, PrevPred from Context.NeuroMod, and Special // LayerVals from TDPredLayer. TDIntegLayer // TDDaLayer computes a dopamine (DA) signal as the temporal difference (TD) // between the TDIntegLayer activations in the minus and plus phase. // These are retrieved from Special LayerVals. TDDaLayer LayerTypesN )
The layer types
func (*LayerTypes) FromString ¶ added in v1.7.0
func (i *LayerTypes) FromString(s string) error
func (LayerTypes) IsExt ¶ added in v1.7.9
func (lt LayerTypes) IsExt() bool
IsExt returns true if the layer type deals with external input: Input, Target, Compare
func (LayerTypes) MarshalJSON ¶ added in v1.7.0
func (ev LayerTypes) MarshalJSON() ([]byte, error)
func (LayerTypes) String ¶ added in v1.7.0
func (i LayerTypes) String() string
func (*LayerTypes) UnmarshalJSON ¶ added in v1.7.0
func (ev *LayerTypes) UnmarshalJSON(b []byte) error
type LayerVals ¶ added in v1.7.0
type LayerVals struct { LayIdx uint32 `view:"-" desc:"layer index for these vals"` DataIdx uint32 `view:"-" desc:"data index for these vals"` RT float32 `` /* 155-byte string literal not displayed */ // note: ActAvg vals are shared across data parallel ActAvg ActAvgVals `view:"inline" desc:"running-average activation levels used for adaptive inhibition, and other adapting values"` CorSim CorSimStats `desc:"correlation (centered cosine aka normalized dot product) similarity between ActM, ActP states"` Special LaySpecialVals `` /* 282-byte string literal not displayed */ // contains filtered or unexported fields }
LayerVals holds extra layer state that is updated per layer. It is sync'd down from the GPU to the CPU after every Cycle.
type LearnNeurParams ¶
type LearnNeurParams struct { CaLearn CaLrnParams `` /* 376-byte string literal not displayed */ CaSpk CaSpkParams `` /* 456-byte string literal not displayed */ LrnNMDA chans.NMDAParams `` /* 266-byte string literal not displayed */ TrgAvgAct TrgAvgActParams `` /* 126-byte string literal not displayed */ RLRate RLRateParams `` /* 184-byte string literal not displayed */ NeuroMod NeuroModParams `` /* 221-byte string literal not displayed */ }
axon.LearnNeurParams manages learning-related parameters at the neuron-level. This is mainly the running average activations that drive learning
func (*LearnNeurParams) CaFmSpike ¶ added in v1.3.5
func (ln *LearnNeurParams) CaFmSpike(ctx *Context, ni, di uint32)
CaFmSpike updates all spike-driven calcium variables, including CaLrn and CaSpk. Computed after new activation for current cycle is updated.
func (*LearnNeurParams) Defaults ¶
func (ln *LearnNeurParams) Defaults()
func (*LearnNeurParams) InitNeurCa ¶ added in v1.3.9
func (ln *LearnNeurParams) InitNeurCa(ctx *Context, ni, di uint32)
InitNeurCa initializes the neuron-level calcium learning and spiking variables. Called by InitWts (at start of learning).
func (*LearnNeurParams) LrnNMDAFmRaw ¶ added in v1.3.11
func (ln *LearnNeurParams) LrnNMDAFmRaw(ctx *Context, ni, di uint32, geTot float32)
LrnNMDAFmRaw updates the separate NMDA conductance and calcium values based on GeTot = GeRaw + external ge conductance. These are the variables that drive learning -- can be the same as activation but also can be different for testing learning Ca effects independent of activation effects.
func (*LearnNeurParams) Update ¶
func (ln *LearnNeurParams) Update()
type LearnSynParams ¶
type LearnSynParams struct { Learn slbool.Bool `desc:"enable learning for this projection"` LRate LRateParams `viewif:"Learn" desc:"learning rate parameters, supporting two levels of modulation on top of base learning rate."` Trace TraceParams `viewif:"Learn" desc:"trace-based learning parameters"` KinaseCa kinase.CaParams `viewif:"Learn" view:"inline" desc:"kinase calcium Ca integration parameters"` // contains filtered or unexported fields }
LearnSynParams manages learning-related parameters at the synapse-level.
func (*LearnSynParams) CHLdWt ¶
func (ls *LearnSynParams) CHLdWt(suCaP, suCaD, ruCaP, ruCaD float32) float32
CHLdWt returns the error-driven weight change component for a CHL contrastive hebbian learning rule, optionally using the checkmark temporally eXtended Contrastive Attractor Learning (XCAL) function
func (*LearnSynParams) Defaults ¶
func (ls *LearnSynParams) Defaults()
func (*LearnSynParams) DeltaDWt ¶ added in v1.5.1
func (ls *LearnSynParams) DeltaDWt(plus, minus float32) float32
DeltaDWt returns the error-driven weight change component for a simple delta between a minus and plus phase factor, optionally using the checkmark temporally eXtended Contrastive Attractor Learning (XCAL) function
func (*LearnSynParams) Update ¶
func (ls *LearnSynParams) Update()
type MatrixParams ¶ added in v1.7.0
type MatrixParams struct {
	GateThr        float32     `def:"0.05" desc:"threshold on layer Avg SpkMax for Matrix Go and VThal layers to count as having gated"`
	IsVS           slbool.Bool `` /* 180-byte string literal not displayed */
	OtherMatrixIdx int32       `` /* 130-byte string literal not displayed */
	ThalLay1Idx    int32       `` /* 169-byte string literal not displayed */
	ThalLay2Idx    int32       `` /* 169-byte string literal not displayed */
	ThalLay3Idx    int32       `` /* 169-byte string literal not displayed */
	ThalLay4Idx    int32       `` /* 169-byte string literal not displayed */
	ThalLay5Idx    int32       `` /* 169-byte string literal not displayed */
	ThalLay6Idx    int32       `` /* 169-byte string literal not displayed */
	// contains filtered or unexported fields
}
MatrixParams has parameters for BG Striatum Matrix MSN layers These are the main Go / NoGo gating units in BG. DA, ACh learning rate modulation is pre-computed on the recv neuron RLRate variable via NeuroMod. Also uses Pool.Gated for InvertNoGate, updated in PlusPhase prior to DWt call. Must set Learn.NeuroMod.DAMod = D1Mod or D2Mod via SetBuildConfig("DAMod").
func (*MatrixParams) Defaults ¶ added in v1.7.0
func (mp *MatrixParams) Defaults()
func (*MatrixParams) Update ¶ added in v1.7.0
func (mp *MatrixParams) Update()
type MatrixPrjnParams ¶ added in v1.7.0
type MatrixPrjnParams struct {
	NoGateLRate float32 `` /* 290-byte string literal not displayed */
	// contains filtered or unexported fields
}
MatrixPrjnParams for trace-based learning in the MatrixPrjn. A trace of synaptic co-activity is formed, and then modulated by dopamine whenever it occurs. This bridges the temporal gap between gating activity and subsequent activity, and is based biologically on synaptic tags. Trace is applied to DWt and reset at the time of reward.
func (*MatrixPrjnParams) Defaults ¶ added in v1.7.0
func (tp *MatrixPrjnParams) Defaults()
func (*MatrixPrjnParams) Update ¶ added in v1.7.0
func (tp *MatrixPrjnParams) Update()
type NetIdxs ¶ added in v1.8.0
type NetIdxs struct {
	NData            uint32 `min:"1" desc:"number of data parallel items to process currently"`
	NetIdx           uint32 `` /* 181-byte string literal not displayed */
	MaxData          uint32 `inactive:"+" desc:"maximum amount of data parallel"`
	NLayers          uint32 `inactive:"+" desc:"number of layers in the network"`
	NNeurons         uint32 `inactive:"+" desc:"total number of neurons"`
	NPools           uint32 `inactive:"+" desc:"total number of pools excluding * MaxData factor"`
	NSyns            uint32 `inactive:"+" desc:"total number of synapses"`
	GPUMaxBuffFloats uint32 `inactive:"+" desc:"maximum size in float32 (4 bytes) of a GPU buffer -- needed for GPU access"`
	GPUSynCaBanks    uint32 `inactive:"+" desc:"total number of SynCa banks of GPUMaxBufferBytes arrays in GPU"`
	GvVTAOff         uint32 `inactive:"+" desc:"offset into GlobalVars for VTA values"`
	GvVTAStride      uint32 `inactive:"+" desc:"stride into GlobalVars for VTA values"`
	GvUSnegOff       uint32 `inactive:"+" desc:"offset into GlobalVars for USneg values"`
	GvDriveOff       uint32 `inactive:"+" desc:"offset into GlobalVars for Drive and USpos values"`
	GvDriveStride    uint32 `inactive:"+" desc:"stride into GlobalVars for Drive and USpos values"`
	// contains filtered or unexported fields
}
NetIdxs are indexes and sizes for processing network
func (*NetIdxs) DataIdx ¶ added in v1.8.0
DataIdx returns the data index from an overall index over N * MaxData
func (*NetIdxs) DataIdxIsValid ¶ added in v1.8.0
DataIdxIsValid returns true if the data index is valid (< NData)
func (*NetIdxs) ItemIdx ¶ added in v1.8.0
ItemIdx returns the main item index from an overall index over NItems * MaxData (items = layers, neurons, synapses)
func (*NetIdxs) LayerIdxIsValid ¶ added in v1.8.0
LayerIdxIsValid returns true if the layer index is valid (< NLayers)
func (*NetIdxs) NeurIdxIsValid ¶ added in v1.8.0
NeurIdxIsValid returns true if the neuron index is valid (< NNeurons)
func (*NetIdxs) PoolDataIdxIsValid ¶ added in v1.8.0
PoolDataIdxIsValid returns true if the pool*data index is valid (< NPools*MaxData)
func (*NetIdxs) PoolIdxIsValid ¶ added in v1.8.0
PoolIdxIsValid returns true if the pool index is valid (< NPools)
func (*NetIdxs) SynIdxIsValid ¶ added in v1.8.0
SynIdxIsValid returns true if the synapse index is valid (< NSyns)
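A short sketch may help illustrate the flat indexing convention these helpers imply. This is an assumption based on the method descriptions above (overall index = item*MaxData + data), not a copy of the package code; the helper names itemIdx and dataIdx are hypothetical.

	// Hypothetical illustration of the assumed NItems * MaxData flat layout:
	// overall = item*maxData + data
	func itemIdx(overall, maxData uint32) uint32 { return overall / maxData }
	func dataIdx(overall, maxData uint32) uint32 { return overall % maxData }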
type Network ¶
type Network struct {
NetworkBase
}
axon.Network implements the Axon spiking model, building on the algorithm-independent NetworkBase that manages all the infrastructure.
var (
	// TheNetwork is the one current network in use, needed for GPU shader kernel
	// compatible variable access in CPU mode, for !multinet build tags case.
	// Typically there is just one and it is faster to access directly.
	// This is set in Network.InitName.
	TheNetwork *Network

	// Networks is a global list of networks, needed for GPU shader kernel
	// compatible variable access in CPU mode, for multinet build tags case.
	// This is updated in Network.InitName, which sets NetIdx.
	Networks []*Network
)
func GlobalNetwork ¶ added in v1.8.1
func NewNetwork ¶ added in v1.2.94
NewNetwork returns a new axon Network
func (*Network) AddAmygdala ¶ added in v1.7.0
func (net *Network) AddAmygdala(prefix string, neg bool, nUs, nNeurY, nNeurX int, space float32) (blaPosAcq, blaPosExt, blaNegAcq, blaNegExt, cemPos, cemNeg, blaNov *Layer)
AddAmygdala adds a full amygdala complex including BLA, CeM, and LDT. Inclusion of negative valence is optional with neg arg -- neg* layers are nil if not included.
func (*Network) AddBG ¶ added in v1.7.0
func (net *Network) AddBG(prefix string, nPoolsY, nPoolsX, nNeurY, nNeurX, gpNeurY, gpNeurX int, space float32) (mtxGo, mtxNo, gpeTA, stnp, stns, gpi *Layer)
AddBG adds MtxGo, MtxNo, GPeOut, GPeIn, GPeTA, STNp, STNs, GPi layers, with given optional prefix. Doesn't return GPeOut, GPeIn, which are purely internal. Only the Matrix has a pool-based 4D shape by default -- use pools for "role"-like elements where matches need to be detected. All GP / STN layers have gpNeur neurons. Appropriate connections are made between layers, using standard styles. space is the spacing between layers (2 typical). A minimal call sketch follows below.
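For orientation, a minimal sketch of calling AddBG with illustrative sizes; the prefix, pool counts, and neuron counts here are hypothetical:

	// 1x1 pools of 6x6 Matrix neurons; 6x6 GP/STN units; layer spacing 2
	mtxGo, mtxNo, gpeTA, stnp, stns, gpi := net.AddBG("", 1, 1, 6, 6, 6, 6, 2)
	_, _, _, _, _, _ = mtxGo, mtxNo, gpeTA, stnp, stns, gpi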
func (*Network) AddBG4D ¶ added in v1.7.0
func (net *Network) AddBG4D(prefix string, nPoolsY, nPoolsX, nNeurY, nNeurX, gpNeurY, gpNeurX int, space float32) (mtxGo, mtxNo, gpeTA, stnp, stns, gpi *Layer)
AddBG4D adds MtxGo, MtxNo, GPeOut, GPeIn, GPeTA, STNp, STNs, GPi layers, with given optional prefix. Doesn't return GPeOut, GPeIn, which are purely internal. This version makes 4D pools throughout the GP layers, with Pools representing separable gating domains. All GP / STN layers have gpNeur neurons. Appropriate PoolOneToOne connections are made between layers, using standard styles. space is the spacing between layers (2 typical). A CIN or the more widely used RSalienceLayer should be added and project ACh to the MtxGo, No layers.
func (*Network) AddBGThalLayer2D ¶ added in v1.7.18
AddBGThalLayer2D adds a BG gated thalamus (e.g., VA/VL/VM, MD) Layer of given size, with given name. This version has a 2D structure
func (*Network) AddBGThalLayer4D ¶ added in v1.7.18
AddBGThalLayer4D adds a BG gated thalamus (e.g., VA/VL/VM, MD) Layer of given size, with given name. This version has a 4D structure, with Pools representing separable gating domains.
func (*Network) AddBLALayers ¶ added in v1.7.0
func (net *Network) AddBLALayers(prefix string, pos bool, nUs, nNeurY, nNeurX int, rel relpos.Relations, space float32) (acq, ext *Layer)
AddBLALayers adds two BLA layers, acquisition / extinction / D1 / D2, for positive or negative valence
func (*Network) AddBOA ¶ added in v1.7.18
func (net *Network) AddBOA(ctx *Context, nUSneg, nYneur, popY, popX, bgY, bgX, pfcY, pfcX int, space float32) (vSgpi, effort, effortP, urgency, pvPos, blaPosAcq, blaPosExt, blaNegAcq, blaNegExt, blaNov, ofcUS, ofcUSCT, ofcUSPTp, ofcVal, ofcValCT, ofcValPTp, accCost, accCostCT, accCostPTp, accUtil, sc, notMaint *Layer)
AddBOA builds a complete BOA (BG, OFC, ACC) for goal-driven decision making.
  * AddPVLVOFCus -- PVLV, and OFC us coding
Makes all appropriate interconnections and sets default parameters. Needs CS -> BLA, OFC connections to be made. Returns layers most likely to be used for remaining connections and positions.
func (*Network) AddCTLayer2D ¶ added in v1.7.0
AddCTLayer2D adds a CT Layer of given size, with given name.
func (*Network) AddCTLayer4D ¶ added in v1.7.0
AddCTLayer4D adds a CT Layer of given size, with given name.
func (*Network) AddClampDaLayer ¶ added in v1.7.0
AddClampDaLayer adds a ClampDaLayer of given name
func (*Network) AddDrivesLayer ¶ added in v1.7.11
AddDrivesLayer adds PVLV layer representing current drive activity, from ContextPVLV.Drive.Drives. Uses a PopCode representation based on LayerParams.Act.PopCode, distributed over given numbers of neurons in the X and Y dimensions, per drive pool.
func (*Network) AddDrivesPulvLayer ¶ added in v1.7.11
func (net *Network) AddDrivesPulvLayer(ctx *Context, nNeurY, nNeurX int, space float32) (drv, drvP *Layer)
AddDrivesPulvLayer adds PVLV layer representing current drive activity, from ContextPVLV.Drive.Drives. Uses a PopCode representation based on LayerParams.Act.PopCode, distributed over given numbers of neurons in the X and Y dimensions, per drive pool. Adds Pulvinar predictive layers for Drives.
func (*Network) AddEffortLayer ¶ added in v1.7.11
AddEffortLayer adds a PVLV layer representing the current effort factor, from ContextPVLV.Effort.Disc. Uses a PopCode representation based on LayerParams.Act.PopCode, distributed over given numbers of neurons in the X and Y dimensions.
func (*Network) AddEffortPulvLayer ¶ added in v1.7.11
AddEffortPulvLayer adds a PVLV layer representing the current effort factor, from ContextPVLV.Effort.Disc. Uses a PopCode representation based on LayerParams.Act.PopCode, distributed over given numbers of neurons in the X and Y dimensions. Adds a Pulvinar predictive layer for Effort.
func (*Network) AddGPeLayer2D ¶ added in v1.7.0
AddGPeLayer2D adds a GPLayer of given size, with given name. Must set the GPType BuildConfig setting to the appropriate GPLayerType.
func (*Network) AddGPeLayer4D ¶ added in v1.7.0
AddGPeLayer4D adds a GPLayer of given size, with given name. Makes a 4D structure with Pools representing separable gating domains.
func (*Network) AddGPiLayer2D ¶ added in v1.7.0
AddGPiLayer2D adds a GPiLayer of given size, with given name.
func (*Network) AddGPiLayer4D ¶ added in v1.7.0
AddGPiLayer4D adds a GPiLayer of given size, with given name. Makes a 4D structure with Pools representing separable gating domains.
func (*Network) AddHip ¶ added in v1.8.6
func (net *Network) AddHip(ctx *Context, hip *HipConfig, space float32) (ec2, ec3, dg, ca3, ca1, ec5 *Layer)
AddHip adds a new Hippocampal network for episodic memory. Returns layers most likely to be used for remaining connections and positions.
func (*Network) AddInputPulv2D ¶ added in v1.7.0
AddInputPulv2D adds an Input layer and a Pulvinar layer of given size, with given name. The Input layer is set as the Driver of the Pulvinar layer. Both layers have SetClass(name) called to allow shared params.
func (*Network) AddInputPulv4D ¶ added in v1.7.0
func (net *Network) AddInputPulv4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, space float32) (*Layer, *Layer)
AddInputPulv4D adds an Input layer and a Pulvinar layer of given size, with given name. The Input layer is set as the Driver of the Pulvinar layer. Both layers have SetClass(name) called to allow shared params.
func (*Network) AddLDTLayer ¶ added in v1.7.18
AddLDTLayer adds a LDTLayer
func (*Network) AddMatrixLayer ¶ added in v1.7.0
func (net *Network) AddMatrixLayer(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, da DAModTypes) *Layer
AddMatrixLayer adds a MatrixLayer of given size, with given name. Assumes that a 4D structure will be used, with Pools representing separable gating domains. da gives the DaReceptor type (D1R = Go, D2R = NoGo)
func (*Network) AddOFCus ¶ added in v1.7.18
func (net *Network) AddOFCus(ctx *Context, nUSs, nY, ofcY, ofcX int, space float32) (ofc, ofcCT, ofcPT, ofcPTp, ofcMD, notMaint *Layer)
AddOFCus adds orbital frontal cortex US-coding layers, for given number of US pools (first is novelty / curiosity pool), with given number of units per pool. Also adds a PTNotMaintLayer called NotMaint with nY units.
func (*Network) AddPFC2D ¶ added in v1.7.18
func (net *Network) AddPFC2D(name, thalSuffix string, nNeurY, nNeurX int, decayOnRew bool, space float32) (pfc, pfcCT, pfcPT, pfcPTp, pfcThal *Layer)
AddPFC2D adds a "full stack" of 2D PFC layers:
  * AddSuperCT2D (Super and CT)
  * AddPTMaintThal (PTMaint, BGThal)
  * AddPTPredLayer (PTPred)
with given name prefix, which is also set as the Class for all layers, and suffix for the BGThal layer (e.g., "MD" or "VM" etc for different thalamic nuclei). Sets PFCLayer as additional class for all cortical layers. OneToOne, full connectivity is used between layers. decayOnRew determines the Act.Decay.OnRew setting (true of OFC, ACC type for sure). CT layer uses the Medium timescale params.
func (*Network) AddPFC4D ¶ added in v1.7.18
func (net *Network) AddPFC4D(name, thalSuffix string, nPoolsY, nPoolsX, nNeurY, nNeurX int, decayOnRew bool, space float32) (pfc, pfcCT, pfcPT, pfcPTp, pfcThal *Layer)
AddPFC4D adds a "full stack" of 4D PFC layers:
  * AddSuperCT4D (Super and CT)
  * AddPTMaintThal (PTMaint, BGThal)
  * AddPTPredLayer (PTPred)
with given name prefix, which is also set as the Class for all layers, and suffix for the BGThal layer (e.g., "MD" or "VM" etc for different thalamic nuclei). Sets PFCLayer as additional class for all cortical layers. OneToOne and PoolOneToOne connectivity is used between layers. decayOnRew determines the Act.Decay.OnRew setting (true of OFC, ACC type for sure). CT layer uses the Medium timescale params. Use, e.g., pfcCT.DefParams["Layer.Inhib.Layer.Gi"] = "2.8" to change default params, as sketched below.
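A minimal sketch of adding a 2D PFC stack and overriding a default parameter via DefParams, as mentioned above; the layer name, sizes, and Gi value are illustrative:

	pfc, pfcCT, pfcPT, pfcPTp, pfcThal := net.AddPFC2D("OFC", "MD", 10, 10, true, 2)
	pfcCT.DefParams["Layer.Inhib.Layer.Gi"] = "2.8" // override before Build / ApplyParams
	_, _, _, _ = pfc, pfcPT, pfcPTp, pfcThal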
func (*Network) AddPTMaintLayer2D ¶ added in v1.7.2
AddPTMaintLayer2D adds a PTMaintLayer of given size, with given name.
func (*Network) AddPTMaintLayer4D ¶ added in v1.7.2
AddPTMaintLayer4D adds a PTMaintLayer of given size, with given name.
func (*Network) AddPTMaintThalForSuper ¶ added in v1.7.2
func (net *Network) AddPTMaintThalForSuper(super, ct *Layer, thalSuffix, prjnClass string, superToPT, ptSelf prjn.Pattern, space float32) (pt, thal *Layer)
AddPTMaintThalForSuper adds a PTMaint pyramidal tract active maintenance layer and a BG gated Thalamus layer for given superficial layer (SuperLayer) and associated CT, with given thal suffix (e.g., MD, VM). PT and Thal have SetClass(super.Name()) called to allow shared params. Projections are made with given classes: SuperToPT, PTSelfMaint, PTtoThal, ThalToPT, with optional extra class. The PT and BGThal layers are positioned behind the CT layer.
func (*Network) AddPTNotMaintLayer ¶ added in v1.7.11
AddPTNotMaintLayer adds a PTNotMaintLayer of given size, for given PTMaintLayer -- places it to the right of this layer, and calls ConnectPTNotMaint to connect the two, using full connectivity.
func (*Network) AddPTPredLayer ¶ added in v1.7.11
func (net *Network) AddPTPredLayer(ptMaint, ct *Layer, ptToPredPrjn, ctToPredPrjn prjn.Pattern, prjnClass string, space float32) (ptPred *Layer)
AddPTPredLayer adds a PTPred pyramidal tract prediction layer for given PTMaint layer and associated CT. Sets SetClass(super.Name()) to allow shared params. Projections are made with given classes: PTtoPred, CTtoPred. The PTPred layer is positioned behind the PT layer.
func (*Network) AddPTPredLayer2D ¶ added in v1.7.11
AddPTPredLayer2D adds a PTPredLayer of given size, with given name.
func (*Network) AddPTPredLayer4D ¶ added in v1.7.11
AddPTPredLayer4D adds a PTPredLayer of given size, with given name.
func (*Network) AddPVLVOFCus ¶ added in v1.7.18
func (net *Network) AddPVLVOFCus(ctx *Context, nUSneg, nYneur, popY, popX, bgY, bgX, ofcY, ofcX int, space float32) (vSgpi, vSmtxGo, vSmtxNo, vSpatch, effort, effortP, urgency, usPos, pvPos, usNeg, usNegP, pvNeg, pvNegP, blaPosAcq, blaPosExt, blaNegAcq, blaNegExt, blaNov, ofcUS, ofcUSCT, ofcUSPTp, ofcVal, ofcValCT, ofcValPTp, ofcValMD, sc, notMaint *Layer)
AddPVLVOFCus builds a complete PVLV network with OFCus (orbital frontal cortex) US-coding layers, calling:
  * AddVTALHbLDTLayers
  * AddPVLVPulvLayers
  * AddVS
  * AddAmygdala
  * AddOFCus
Makes all appropriate interconnections and sets default parameters. Needs CS -> BLA, OFC connections to be made. Returns layers most likely to be used for remaining connections and positions.
func (*Network) AddPVLVPulvLayers ¶ added in v1.7.18
func (net *Network) AddPVLVPulvLayers(ctx *Context, nUSneg, nYneur, popY, popX int, space float32) (drives, drivesP, effort, effortP, urgency, usPos, usNeg, usPosP, usNegP, pvPos, pvNeg, pvPosP, pvNegP *Layer)
AddPVLVPulvLayers adds PVLV layers for PV-related information visualizing the internal states of the ContextPVLV state, with Pulvinar prediction layers for training PFC layers:
  * drives = popcode representation of drive strength (no activity for 0); the number of active drives comes from Context; popY, popX neurons per pool.
  * effort = popcode representation of effort discount factor, popY, popX neurons.
  * urgency = popcode representation of urgency Go bias factor, popY, popX neurons.
  * us = nYneur per US, represented as present or absent
  * pv = popcode representation of final primary value on positive and negative valences -- this is what the dopamine value ends up coding (pos - neg).
Layers are organized in depth per type: USs in one column, PVs in the next, with Drives in the back; effort and urgency behind that.
func (*Network) AddPVLayers ¶ added in v1.7.11
func (net *Network) AddPVLayers(nNeurY, nNeurX int, rel relpos.Relations, space float32) (pvPos, pvNeg *Layer)
AddPVLayers adds PVpos and PVneg layers for positive or negative valence primary value representations, representing the total drive and effort weighted USpos outcome, or total USneg outcome. Uses a PopCode representation based on LayerParams.Act.PopCode, distributed over given numbers of neurons in the X and Y dimensions.
func (*Network) AddPVPulvLayers ¶ added in v1.7.11
func (net *Network) AddPVPulvLayers(nNeurY, nNeurX int, rel relpos.Relations, space float32) (pvPos, pvNeg, pvPosP, pvNegP *Layer)
AddPVPulvLayers adds PVpos and PVneg layers for positive or negative valence primary value representations, representing the total drive and effort weighted USpos outcomes, or total USneg outcomes. Uses a PopCode representation based on LayerParams.Act.PopCode, distributed over given numbers of neurons in the X and Y dimensions. Adds Pulvinar predictive layers for each.
func (*Network) AddPulvForLayer ¶ added in v1.7.11
AddPulvForLayer adds a Pulvinar for given Layer (typically an Input type layer) with a P suffix. The Pulv.Driver is set to given Layer. The Pulv layer needs other CT connections from higher up to predict this layer. Pulvinar is positioned behind the given Layer.
func (*Network) AddPulvForSuper ¶ added in v1.7.0
AddPulvForSuper adds a Pulvinar for given superficial layer (SuperLayer) with a P suffix. The Pulv.Driver is set to Super, as is the Class on Pulv. The Pulv layer needs other CT connections from higher up to predict this layer. Pulvinar is positioned behind the CT layer.
func (*Network) AddPulvLayer2D ¶ added in v1.7.0
AddPulvLayer2D adds a Pulvinar Layer of given size, with given name.
func (*Network) AddPulvLayer4D ¶ added in v1.7.0
AddPulvLayer4D adds a Pulvinar Layer of given size, with given name.
func (*Network) AddRWLayers ¶ added in v1.7.0
func (nt *Network) AddRWLayers(prefix string, rel relpos.Relations, space float32) (rew, rp, da *Layer)
AddRWLayers adds a simple Rescorla-Wagner (PV only) dopamine system, with a primary Reward layer, an RWPred prediction layer, and a dopamine layer that computes the difference between them. Only generates DA when the Rew layer has external input -- otherwise zero.
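A minimal sketch of adding the Rescorla-Wagner dopamine layers; the relative position and spacing are illustrative, and the relpos package from emergent is assumed for the Relations value:

	// assumes: import "github.com/emer/emergent/relpos"
	rew, rwPred, da := net.AddRWLayers("", relpos.Behind, 2)
	_, _, _ = rew, rwPred, da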
func (*Network) AddRewLayer ¶ added in v1.7.0
AddRewLayer adds a RewLayer of given name
func (*Network) AddSCLayer2D ¶ added in v1.7.18
AddSCLayer2D adds a superior colliculus 2D layer which computes stimulus onset via trial-delayed inhibition (Inhib.FFPrv) -- connect with fixed random input from sensory input layers. Sets base name and class name to SC. Must set Inhib.FFPrv > 0 and Act.Decay.* = 0
func (*Network) AddSCLayer4D ¶ added in v1.7.18
AddSCLayer4D adds a superior colliculus 4D layer which computes stimulus onset via trial-delayed inhibition (Inhib.FFPrv) -- connect with fixed random input from sensory input layers. Sets base name and class name to SC. Must set Inhib.FFPrv > 0 and Act.Decay.* = 0
func (*Network) AddSTNLayer2D ¶ added in v1.7.0
AddSTNLayer2D adds a subthalamic nucleus Layer of given size, with given name.
func (*Network) AddSTNLayer4D ¶ added in v1.7.0
AddSTNLayer4D adds a subthalamic nucleus Layer of given size, with given name. Makes a 4D structure with Pools representing separable gating domains.
func (*Network) AddSuperCT2D ¶ added in v1.7.0
func (net *Network) AddSuperCT2D(name, prjnClass string, shapeY, shapeX int, space float32, pat prjn.Pattern) (super, ct *Layer)
AddSuperCT2D adds a superficial (SuperLayer) and corresponding CT (CT suffix) layer with CTCtxtPrjn projection from Super to CT using given projection pattern, and NO Pulv Pulvinar. CT is placed Behind Super.
func (*Network) AddSuperCT4D ¶ added in v1.7.0
func (net *Network) AddSuperCT4D(name, prjnClass string, nPoolsY, nPoolsX, nNeurY, nNeurX int, space float32, pat prjn.Pattern) (super, ct *Layer)
AddSuperCT4D adds a superficial (SuperLayer) and corresponding CT (CT suffix) layer with CTCtxtPrjn projection from Super to CT using given projection pattern, and NO Pulv Pulvinar. CT is placed Behind Super.
func (*Network) AddSuperLayer2D ¶ added in v1.7.0
AddSuperLayer2D adds a Super Layer of given size, with given name.
func (*Network) AddSuperLayer4D ¶ added in v1.7.0
AddSuperLayer4D adds a Super Layer of given size, with given name.
func (*Network) AddTDLayers ¶ added in v1.7.0
func (nt *Network) AddTDLayers(prefix string, rel relpos.Relations, space float32) (rew, rp, ri, td *Layer)
AddTDLayers adds the standard TD temporal differences layers, generating a DA signal. Projection from Rew to RewInteg is given class TDRewToInteg -- should have no learning and 1 weight.
func (*Network) AddUSLayers ¶ added in v1.7.11
func (net *Network) AddUSLayers(nUSpos, nUSneg, nYneur int, rel relpos.Relations, space float32) (usPos, usNeg *Layer)
AddUSLayers adds USpos and USneg layers for positive or negative valence unconditioned stimuli (USs). These track the ContextPVLV.USpos or USneg, for visualization purposes. Actual US inputs are set in PVLV.
func (*Network) AddUSPulvLayers ¶ added in v1.7.11
func (net *Network) AddUSPulvLayers(nUSpos, nUSneg, nYneur int, rel relpos.Relations, space float32) (usPos, usNeg, usPosP, usNegP *Layer)
AddUSPulvLayers adds USpos and USneg layers for positive or negative valence unconditioned stimuli (USs). These track the ContextPVLV.USpos or USneg, for visualization purposes. Actual US inputs are set in PVLV. Adds Pulvinar predictive layers for each.
func (*Network) AddUrgencyLayer ¶ added in v1.7.18
AddUrgencyLayer adds a PVLV layer representing the current urgency factor, from ContextPVLV.Urgency.Urge. Uses a PopCode representation based on LayerParams.Act.PopCode, distributed over given numbers of neurons in the X and Y dimensions.
func (*Network) AddVS ¶ added in v1.7.18
func (net *Network) AddVS(nUSs, nNeurY, nNeurX, nY int, space float32) (vSmtxGo, vSmtxNo, vSstnp, vSstns, vSgpi, vSpatch, vSgated *Layer)
AddVS adds a Ventral Striatum (VS, mostly Nucleus Accumbens = NAcc) set of layers including extensive Ventral Pallidum (VP) using the pcore BG framework, via the AddBG method. Also adds VSPatch and VSGated layers. vSmtxGo and No have VSMatrixLayer class set and default params appropriate for multi-pool etc
func (*Network) AddVSGatedLayer ¶ added in v1.7.11
AddVSGatedLayer adds a VSGatedLayer with given number of Y units and 2 pools, first one represents JustGated, second is HasGated.
func (*Network) AddVSPatchLayer ¶ added in v1.7.11
AddVSPatchLayer adds VSPatch (Pos, D1)
func (*Network) AddVTALHbLDTLayers ¶ added in v1.7.18
AddVTALHbLDTLayers adds VTA dopamine, LHb DA dipping, and LDT ACh layers which are driven by corresponding values in ContextPVLV
func (*Network) ApplyExts ¶ added in v1.7.9
ApplyExts applies external inputs to layers, based on values that were set in prior layer-specific ApplyExt calls. This does nothing on the CPU, but is critical for the GPU, and should be added to all sims where GPU will be used.
func (*Network) ClearTargExt ¶ added in v1.2.65
ClearTargExt clears external inputs Ext that were set from target values Target. This can be called to simulate alpha cycles within theta cycles, for example.
func (*Network) CollectDWts ¶
CollectDWts writes all of the synaptic DWt values to the given dwts slice, which is pre-allocated to the given nwts size if dwts is nil, in which case the method returns true so that the actual length of dwts can be passed next time around. Used for MPI sharing of weight changes across processors. This calls SyncSynapsesFmGPU() (a no-op if not using the GPU) first.
func (*Network) ConfigGPUnoGUI ¶ added in v1.7.9
ConfigGPUnoGUI turns on GPU mode in the case where no GUI is being used. This directly accesses the GPU hardware. It does not work well when a GUI is also being used. Configures the GPU -- call after the Network is Built, initialized, params are set, and everything is ready to run.
func (*Network) ConfigGPUwithGUI ¶ added in v1.7.9
ConfigGPUwithGUI turns on GPU mode in context of an active GUI where Vulkan has been initialized etc. Configures the GPU -- call after Network is Built, initialized, params are set, and everything is ready to run.
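A sketch of the typical ordering for enabling GPU mode in a non-GUI simulation; ConfigGPUnoGUI is assumed here to take the simulation Context, and GPU.Destroy is assumed as the cleanup call on the GPU field shown in NetworkBase -- check the GPU type docs for the exact signatures in your version:

	// after net.Build(ctx), parameter application, and weight initialization:
	net.ConfigGPUnoGUI(ctx)  // assumed signature: takes the simulation *Context
	defer net.GPU.Destroy()  // assumed cleanup call when the run is done
	// ... run the simulation ...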
func (*Network) ConfigLoopsHip ¶ added in v1.8.7
func (net *Network) ConfigLoopsHip(ctx *Context, man *looper.Manager, hip *HipConfig, pretrain *bool)
ConfigLoopsHip configures the hippocampal looper and should be included in ConfigLoops in the model to make sure the hip loops are configured correctly. See hip.go for an example implementation of this function. ec5ClampFrom specifies the layer to clamp EC5 plus-phase values from: EC3 is the biological source, but the Input layer can be used for a simple testing net.
func (*Network) ConnectCSToBLAPos ¶ added in v1.7.18
ConnectCSToBLAPos connects the CS input to BLAPosAcqD1, BLANovelCS layers using fixed, higher-variance weights, full projection. Sets classes to: CSToBLAPos, CSToBLANovel with default params
func (*Network) ConnectCTSelf ¶ added in v1.7.0
func (net *Network) ConnectCTSelf(ly *Layer, pat prjn.Pattern, prjnClass string) (ctxt, maint *Prjn)
ConnectCTSelf adds a Self (Lateral) CTCtxtPrjn projection within a CT layer, in addition to a regular lateral projection, which supports active maintenance. The CTCtxtPrjn has a Class label of CTSelfCtxt, and the regular one is CTSelfMaint with optional class added.
func (*Network) ConnectCtxtToCT ¶ added in v1.7.0
ConnectCtxtToCT adds a CTCtxtPrjn from given sending layer to a CT layer
func (*Network) ConnectPTMaintSelf ¶ added in v1.7.2
ConnectPTMaintSelf adds a Self (Lateral) projection within a PTMaintLayer, which supports active maintenance, with a class of PTSelfMaint
func (*Network) ConnectPTNotMaint ¶ added in v1.7.11
ConnectPTNotMaint adds a projection from PTMaintLayer to PTNotMaintLayer, as fixed inhibitory connections, with class ToPTNotMaintInhib
func (*Network) ConnectPTPredSelf ¶ added in v1.7.11
ConnectPTPredSelf adds a Self (Lateral) projection within a PTPredLayer, which supports active maintenance, with a class of PTSelfMaint
func (*Network) ConnectPTPredToPulv ¶ added in v1.7.11
func (net *Network) ConnectPTPredToPulv(ptPred, pulv *Layer, toPulvPat, fmPulvPat prjn.Pattern, prjnClass string) (toPulv, toPTPred *Prjn)
ConnectPTPredToPulv connects PTPred with given Pulv: PTPred -> Pulv is class PTPredToPulv; from Pulv = type Back, class = FmPulv. toPulvPat is the prjn.Pattern for PTPred -> Pulv, and fmPulvPat is for Pulv -> PTPred. Typically Pulv is a different shape than PTPred, so use Full or an appropriate topological pattern. Adds optional class name to projections.
func (*Network) ConnectSuperToCT ¶ added in v1.7.0
ConnectSuperToCT adds a CTCtxtPrjn from given sending Super layer to a CT layer. This automatically sets the FmSuper flag to engage proper defaults. Uses given projection pattern -- e.g., Full, OneToOne, or PoolOneToOne.
func (*Network) ConnectToBLAAcq ¶ added in v1.7.11
ConnectToBLAAcq adds a BLAPrjn from given sending layer to a BLA layer, and configures it for acquisition parameters. Sets class to BLAAcqPrjn.
func (*Network) ConnectToBLAExt ¶ added in v1.7.11
ConnectToBLAExt adds a BLAPrjn from given sending layer to a BLA layer, and configures it for extinction parameters. Sets class to BLAExtPrjn.
func (*Network) ConnectToMatrix ¶ added in v1.7.0
ConnectToMatrix adds a MatrixPrjn from given sending layer to a matrix layer
func (*Network) ConnectToPFC ¶ added in v1.7.18
ConnectToPFC connects given predictively learned input to all relevant PFC layers:
  lay -> pfc (skipped if lay == nil)
  layP -> pfc, layP <-> pfcCT
  pfcPTp <-> layP
Sets PFCPrjn class name for projections.
func (*Network) ConnectToPFCBack ¶ added in v1.7.18
ConnectToPFCBack connects given predictively learned input to all relevant PFC layers:
  lay -> pfc using a BackPrjn -- weaker
  layP -> pfc, layP <-> pfcCT
  pfcPTp <-> layP
func (*Network) ConnectToPFCBidir ¶ added in v1.7.18
func (net *Network) ConnectToPFCBidir(lay, layP, pfc, pfcCT, pfcPTp *Layer, pat prjn.Pattern) (ff, fb *Prjn)
ConnectToPFCBidir connects given predictively learned input to all relevant PFC layers, using bidirectional connections to super layers:
  lay <-> pfc bidirectional
  layP -> pfc, layP <-> pfcCT
  pfcPTp <-> layP
func (*Network) ConnectToPulv ¶ added in v1.7.0
func (net *Network) ConnectToPulv(super, ct, pulv *Layer, toPulvPat, fmPulvPat prjn.Pattern, prjnClass string) (toPulv, toSuper, toCT *Prjn)
ConnectToPulv adds the following projections:

  layers      | class      | prjn type   | prjn pat
  ------------+------------+-------------+----------
  ct -> pulv  | "CTToPulv" | ForwardPrjn | toPulvPat
  pulv->super | "FmPulv"   | BackPrjn    | fmPulvPat
  pulv->ct    | "FmPulv"   | BackPrjn    | fmPulvPat
Typically pulv is a different shape than super and ct, so use Full or appropriate topological pattern. Adds optional prjnClass name as a suffix.
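A minimal sketch of wiring a SuperCT pair to a Pulvinar layer with ConnectToPulv; the layer names, shapes, and the AddPulvLayer2D signature used here are assumptions for illustration:

	// assumes: import "github.com/emer/emergent/prjn"
	full := prjn.NewFull()
	super, ct := net.AddSuperCT2D("TE", "TEPrjn", 10, 10, 2, full)
	pulv := net.AddPulvLayer2D("InputP", 10, 10) // assumed signature
	net.ConnectToPulv(super, ct, pulv, full, full, "TEPrjn")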
func (*Network) ConnectToRWPrjn ¶ added in v1.7.0
ConnectToRWPrjn adds a RWPrjn from given sending layer to a RWPred layer
func (*Network) ConnectToSC ¶ added in v1.7.18
ConnectToSC adds a ForwardPrjn from given sending layer to a SC layer, setting class as ToSC -- should set params as fixed random with more variance than usual.
func (*Network) ConnectToSC1to1 ¶ added in v1.7.18
ConnectToSC1to1 adds a 1to1 ForwardPrjn from given sending layer to a SC layer, copying the geometry of the sending layer, setting class as ToSC. The connection weights are set to uniform.
func (*Network) ConnectToVSPatch ¶ added in v1.7.11
ConnectToVSPatch adds a VSPatchPrjn from given sending layer to a VSPatch layer
func (*Network) ConnectUSToBLAPos ¶ added in v1.7.18
ConnectUSToBLAPos connects the US input to BLAPosAcqD1 and BLAPosExtD2 layers, using fixed, higher-variance weights, full projection. Sets classes to: USToBLAAcq and USToBLAExt
func (*Network) DWt ¶
DWt computes the weight change (learning) based on current running-average activation values
func (*Network) DecayState ¶
DecayState decays activation state by given proportion e.g., 1 = decay completely, and 0 = decay not at all. glong = separate decay factor for long-timescale conductances (g) This is called automatically in NewState, but is avail here for ad-hoc decay cases.
func (*Network) DecayStateByClass ¶ added in v1.5.10
DecayStateByClass decays activation state for given class name(s) by given proportion e.g., 1 = decay completely, and 0 = decay not at all. glong = separate decay factor for long-timescale conductances (g)
func (*Network) DecayStateByType ¶ added in v1.7.1
func (nt *Network) DecayStateByType(ctx *Context, decay, glong, ahp float32, types ...LayerTypes)
DecayStateByType decays activation state for given layer types by given proportion e.g., 1 = decay completely, and 0 = decay not at all. glong = separate decay factor for long-timescale conductances (g)
func (*Network) DecayStateLayers ¶ added in v1.7.10
DecayStateLayers decays activation state for given layers by given proportion e.g., 1 = decay completely, and 0 = decay not at all. glong = separate decay factor for long-timescale conductances (g). If this is not being called at the start, around NewState call, then you should also call: nt.GPU.SyncGBufToGPU() to zero the GBuf values which otherwise will persist spikes in flight.
func (*Network) Defaults ¶
func (nt *Network) Defaults()
Defaults sets all the default parameters for all layers and projections
func (*Network) InitExt ¶
InitExt initializes external input state. Call prior to applying external inputs to layers.
func (*Network) InitGScale ¶ added in v1.2.92
InitGScale computes the initial scaling factor for synaptic input conductances G, stored in GScale.Scale, based on sending layer initial activation.
func (*Network) InitName ¶ added in v1.8.0
InitName MUST be called to initialize the network's pointer to itself as an emer.Network which enables the proper interface methods to be called. Also sets the name, and initializes NetIdx in global list of Network
func (*Network) InitTopoSWts ¶ added in v1.2.75
func (nt *Network) InitTopoSWts()
InitTopoSWts initializes SWt structural weight parameters from prjn types that support topographic weight patterns, having flags set to support it; includes: prjn.PoolTile, prjn.Circle. Call before InitWts if using Topo wts.
func (*Network) InitWts ¶
InitWts initializes synaptic weights and all other associated long-term state variables including running-average state values (e.g., layer running average activations etc)
func (*Network) LRateMod ¶ added in v1.6.13
LRateMod sets the LRate modulation parameter for Prjns, which is for dynamic modulation of learning rate (see also LRateSched). Updates the effective learning rate factor accordingly.
func (*Network) LRateSched ¶ added in v1.6.13
LRateSched sets the schedule-based learning rate multiplier. See also LRateMod. Updates the effective learning rate factor accordingly.
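A sketch of a typical schedule-based learning rate drop at fixed epochs; LRateSched is assumed to take a single float32 multiplier, and the epochs and factors are illustrative:

	switch epoch { // epoch is the current training epoch in the sim's own loop
	case 40:
		net.LRateSched(0.5) // assumed: single float32 multiplier
	case 80:
		net.LRateSched(0.1)
	}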
func (*Network) LayersSetOff ¶
LayersSetOff sets the Off flag for all layers to given setting
func (*Network) MinusPhase ¶ added in v1.2.63
MinusPhase does updating after end of minus phase
func (*Network) NeuronsSlice ¶ added in v1.8.0
NeuronsSlice returns a slice of neuron values using given neuron variable, resizing as needed.
func (*Network) NewState ¶ added in v1.2.63
NewState handles all initialization at start of new input pattern. This is called *before* applying external input data and operates across all data parallel values. The current Context.NData should be set properly prior to calling this and subsequent Cycle methods.
func (*Network) PlusPhaseStart ¶ added in v1.7.10
PlusPhaseStart does updating at the start of the plus phase: applies Target inputs as External inputs.
func (*Network) SetDWts ¶
SetDWts sets the DWt weight changes from the given array of floats, which must be the correct size. navg is the number of processors aggregated in these dwts -- some variables need to be averaged instead of summed (e.g., ActAvg). This calls SyncSynapsesToGPU() (a no-op if not using the GPU) after.
func (*Network) SetSubMean ¶ added in v1.6.11
SetSubMean sets the SubMean parameters in all the layers in the network. trgAvg is for Learn.TrgAvgAct.SubMean; prjn is for the prjns' Learn.Trace.SubMean. In both cases, it is generally best to have both parameters set to 0 at the start of learning.
func (*Network) SizeReport ¶
SizeReport returns a string reporting the size of each layer and projection in the network, and total memory footprint. If detail flag is true, details per layer, projection is included.
func (*Network) SlowAdapt ¶ added in v1.2.37
SlowAdapt is the layer-level slow adaptation functions: Synaptic scaling, and adapting inhibition
func (*Network) SynsSlice ¶ added in v1.8.0
func (nt *Network) SynsSlice(vals *[]float32, synvar SynapseVars)
SynsSlice returns a slice of synaptic values, in natural sending order, using given synaptic variable, resizing as needed.
func (*Network) TargToExt ¶ added in v1.2.65
TargToExt sets external input Ext from target values Target This is done at end of MinusPhase to allow targets to drive activity in plus phase. This can be called separately to simulate alpha cycles within theta cycles, for example.
func (*Network) UnLesionNeurons ¶
UnLesionNeurons unlesions neurons in all layers in the network. Provides a clean starting point for subsequent lesion experiments.
func (*Network) UpdateExtFlags ¶
UpdateExtFlags updates the neuron flags for external input based on current layer Type field -- call this if the Type has changed since the last ApplyExt* method call.
func (*Network) UpdateParams ¶
func (nt *Network) UpdateParams()
UpdateParams updates all the derived parameters if any have changed, for all layers and projections
type NetworkBase ¶ added in v1.4.5
type NetworkBase struct {
	EmerNet     emer.Network        `` /* 274-byte string literal not displayed */
	Nm          string              `desc:"overall name of network -- helps discriminate if there are multiple"`
	WtsFile     string              `desc:"filename of last weights file loaded or saved"`
	LayMap      map[string]*Layer   `view:"-" desc:"map of name to layers -- layer names must be unique"`
	LayClassMap map[string][]string `view:"-" desc:"map of layer classes -- made during Build"`
	MinPos      mat32.Vec3          `view:"-" desc:"minimum display position in network"`
	MaxPos      mat32.Vec3          `view:"-" desc:"maximum display position in network"`
	MetaData    map[string]string   `` /* 194-byte string literal not displayed */
	UseGPUOrder bool                `` /* 190-byte string literal not displayed */

	// Implementation level code below:
	NetIdx       uint32                 `` /* 177-byte string literal not displayed */
	MaxDelay     uint32                 `` /* 137-byte string literal not displayed */
	MaxData      uint32                 `` /* 211-byte string literal not displayed */
	NNeurons     uint32                 `inactive:"+" desc:"total number of neurons"`
	NSyns        uint32                 `inactive:"+" desc:"total number of synapses"`
	Globals      []float32              `view:"-" desc:"storage for global vars"`
	Layers       []*Layer               `desc:"array of layers"`
	LayParams    []LayerParams          `view:"-" desc:"[Layers] array of layer parameters, in 1-to-1 correspondence with Layers"`
	LayVals      []LayerVals            `view:"-" desc:"[Layers][MaxData] array of layer values, with extra per data"`
	Pools        []Pool                 `view:"-" desc:"[Layers][Pools][MaxData] array of inhibitory pools for all layers."`
	Neurons      []float32              `` /* 141-byte string literal not displayed */
	NeuronAvgs   []float32              `` /* 154-byte string literal not displayed */
	NeuronIxs    []uint32               `` /* 138-byte string literal not displayed */
	Prjns        []*Prjn                `view:"-" desc:"[Layers][SendPrjns] pointers to all projections in the network, sender-based"`
	PrjnParams   []PrjnParams           `view:"-" desc:"[Layers][SendPrjns] array of projection parameters, in 1-to-1 correspondence with Prjns, sender-based"`
	SynapseIxs   []uint32               `` /* 185-byte string literal not displayed */
	Synapses     []float32              `` /* 177-byte string literal not displayed */
	SynapseCas   []float32              `` /* 195-byte string literal not displayed */
	PrjnSendCon  []StartN               `` /* 171-byte string literal not displayed */
	PrjnRecvCon  []StartN               `` /* 205-byte string literal not displayed */
	PrjnGBuf     []int32                `` /* 223-byte string literal not displayed */
	PrjnGSyns    []float32              `` /* 207-byte string literal not displayed */
	RecvPrjnIdxs []uint32               `` /* 171-byte string literal not displayed */
	RecvSynIdxs  []uint32               `` /* 173-byte string literal not displayed */
	Exts         []float32              `` /* 224-byte string literal not displayed */
	Ctx          Context                `` /* 134-byte string literal not displayed */
	Rand         erand.SysRand          `` /* 139-byte string literal not displayed */
	RndSeed      int64                  `` /* 156-byte string literal not displayed */
	NThreads     int                    `desc:"number of threads to use for parallel processing"`
	GPU          GPU                    `view:"inline" desc:"GPU implementation"`
	RecFunTimes  bool                   `view:"-" desc:"record function timer information"`
	FunTimes     map[string]*timer.Time `view:"-" desc:"timers for each major function (step of processing)"`
}
NetworkBase manages the basic structural components of a network (layers). The main Network then can just have the algorithm-specific code.
func (*NetworkBase) AddLayer ¶ added in v1.4.5
func (nt *NetworkBase) AddLayer(name string, shape []int, typ LayerTypes) *Layer
AddLayer adds a new layer with given name and shape to the network. 2D and 4D layer shapes are generally preferred but not essential -- see AddLayer2D and 4D for convenience methods for those. 4D layers enable pool (unit-group) level inhibition in Axon networks, for example. shape is in row-major format with outer-most dimensions first: e.g., 4D 3, 2, 4, 5 = 3 rows (Y) of 2 cols (X) of pools, with each unit group having 4 rows (Y) of 5 (X) units.
func (*NetworkBase) AddLayer2D ¶ added in v1.4.5
func (nt *NetworkBase) AddLayer2D(name string, shapeY, shapeX int, typ LayerTypes) *Layer
AddLayer2D adds a new layer with given name and 2D shape to the network. 2D and 4D layer shapes are generally preferred but not essential.
func (*NetworkBase) AddLayer4D ¶ added in v1.4.5
func (nt *NetworkBase) AddLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, typ LayerTypes) *Layer
AddLayer4D adds a new layer with given name and 4D shape to the network. 4D layers enable pool (unit-group) level inhibition in Axon networks, for example. shape is in row-major format with outer-most dimensions first: e.g., 4D 3, 2, 4, 5 = 3 rows (Y) of 2 cols (X) of pools, with each pool having 4 rows (Y) of 5 (X) neurons.
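Putting the layer-construction and connection methods together, a minimal end-to-end build sketch. The layer names and shapes are illustrative, and the NewNetwork, NewContext, ConnectLayers, and InitWts signatures are assumptions based on this API family rather than verbatim copies:

	// assumes: import ( "github.com/emer/axon/axon"; "github.com/emer/emergent/prjn" )
	net := axon.NewNetwork("demo") // assumed one-arg constructor; InitName is required (see below)
	in := net.AddLayer2D("Input", 10, 10, axon.InputLayer)
	hid := net.AddLayer4D("Hidden", 2, 2, 5, 5, axon.SuperLayer)
	out := net.AddLayer2D("Output", 5, 5, axon.TargetLayer)
	full := prjn.NewFull()
	net.ConnectLayers(in, hid, full, axon.ForwardPrjn) // assumed (send, recv, pat, type) signature
	net.BidirConnectLayers(hid, out, full)
	ctx := axon.NewContext() // assumed constructor for the simulation Context
	if err := net.Build(ctx); err != nil {
		panic(err)
	}
	net.Defaults()
	net.InitWts(ctx) // assumed to take the Context in data-parallel versions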
func (*NetworkBase) AddLayerInit ¶ added in v1.4.5
func (nt *NetworkBase) AddLayerInit(ly *Layer, name string, shape []int, typ LayerTypes)
AddLayerInit is implementation routine that takes a given layer and adds it to the network, and initializes and configures it properly.
func (*NetworkBase) AllGlobalVals ¶ added in v1.8.0
func (nt *NetworkBase) AllGlobalVals(ctrKey string, vals map[string]float32)
AllGlobalVals adds to map of all Global variables and values. ctrKey is a key of counters to contextualize values.
func (*NetworkBase) AllGlobals ¶ added in v1.8.0
func (nt *NetworkBase) AllGlobals() string
AllGlobals returns a listing of all Global variables and values.
func (*NetworkBase) AllLayerInhibs ¶ added in v1.7.11
func (nt *NetworkBase) AllLayerInhibs() string
AllLayerInhibs returns a listing of all Layer Inhibition parameters in the Network
func (*NetworkBase) AllParams ¶ added in v1.4.5
func (nt *NetworkBase) AllParams() string
AllParams returns a listing of all parameters in the Network.
func (*NetworkBase) AllPrjnScales ¶ added in v1.4.5
func (nt *NetworkBase) AllPrjnScales() string
AllPrjnScales returns a listing of all PrjnScale parameters in the Network in all Layers, Recv projections. These are among the most important and numerous of parameters (in larger networks) -- this helps keep track of what they all are set to.
func (*NetworkBase) ApplyParams ¶ added in v1.4.5
ApplyParams applies the given parameter style Sheet to layers and prjns in this network. Calls UpdateParams to ensure derived parameters are all updated. If setMsg is true, then a message is printed to confirm each parameter that is set. It always prints a message if a parameter fails to be set. Returns true if any params were set, and an error if there were any errors.
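A sketch of applying a styled parameter Sheet; the Sheet/Sel structure is assumed here to be the one from the emergent params package, and the selectors and values are illustrative (the "Layer.Inhib.Layer.Gi" path appears in the PFC docs above):

	// assumes: import "github.com/emer/emergent/params"
	sheet := &params.Sheet{
		&params.Sel{Sel: "Layer", Desc: "all layers", Params: params.Params{
			"Layer.Inhib.Layer.Gi": "1.1", // illustrative value
		}},
		&params.Sel{Sel: ".PFCLayer", Desc: "class-based selector", Params: params.Params{
			"Layer.Inhib.Layer.Gi": "2.8",
		}},
	}
	applied, err := net.ApplyParams(sheet, true) // setMsg=true prints each applied param
	_ = applied
	if err != nil {
		panic(err)
	}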
func (*NetworkBase) AxonLayerByName ¶ added in v1.7.12
func (nt *NetworkBase) AxonLayerByName(name string) *Layer
AxonLayerByName returns a layer by looking it up by name in the layer map (nil if not found). Will create the layer map if it is nil or a different size than the layers slice, but otherwise it needs to be updated manually.
func (*NetworkBase) BidirConnectLayerNames ¶ added in v1.4.5
func (nt *NetworkBase) BidirConnectLayerNames(low, high string, pat prjn.Pattern) (lowlay, highlay *Layer, fwdpj, backpj *Prjn, err error)
BidirConnectLayerNames establishes bidirectional projections between two layers, referenced by name, with low = the lower layer that sends a Forward projection to the high layer, and receives a Back projection in the opposite direction. Returns error if not successful. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkBase) BidirConnectLayers ¶ added in v1.4.5
func (nt *NetworkBase) BidirConnectLayers(low, high *Layer, pat prjn.Pattern) (fwdpj, backpj *Prjn)
BidirConnectLayers establishes bidirectional projections between two layers, with low = lower layer that sends a Forward projection to the high layer, and receives a Back projection in the opposite direction. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkBase) BidirConnectLayersPy ¶ added in v1.4.5
func (nt *NetworkBase) BidirConnectLayersPy(low, high *Layer, pat prjn.Pattern)
BidirConnectLayersPy establishes bidirectional projections between two layers, with low = lower layer that sends a Forward projection to the high layer, and receives a Back projection in the opposite direction. Does not yet actually connect the units within the layers -- that requires Build. Py = python version with no return vals.
func (*NetworkBase) Bounds ¶ added in v1.4.5
func (nt *NetworkBase) Bounds() (min, max mat32.Vec3)
func (*NetworkBase) BoundsUpdt ¶ added in v1.4.5
func (nt *NetworkBase) BoundsUpdt()
BoundsUpdt updates the Min / Max display bounds for 3D display
func (*NetworkBase) Build ¶ added in v1.4.5
func (nt *NetworkBase) Build(simCtx *Context) error
Build constructs the layer and projection state based on the layer shapes and patterns of interconnectivity. Configures threading using heuristics based on final network size. Must set UseGPUOrder properly prior to calling. Configures the given Context object used in the simulation with the memory access strides for this network -- must be set properly -- see SetCtxStrides.
func (*NetworkBase) BuildGlobals ¶ added in v1.8.0
func (nt *NetworkBase) BuildGlobals(ctx *Context)
BuildGlobals builds Globals vars, using params set in given context
func (*NetworkBase) BuildPrjnGBuf ¶ added in v1.7.2
func (nt *NetworkBase) BuildPrjnGBuf()
BuildPrjnGBuf builds the PrjnGBuf, PrjnGSyns, based on the MaxDelay values in the PrjnParams, which should have been configured by this point. Called by default in InitWts()
func (*NetworkBase) ConnectLayerNames ¶ added in v1.4.5
func (nt *NetworkBase) ConnectLayerNames(send, recv string, pat prjn.Pattern, typ PrjnTypes) (rlay, slay *Layer, pj *Prjn, err error)
ConnectLayerNames establishes a projection between two layers, referenced by name adding to the recv and send projection lists on each side of the connection. Returns error if not successful. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkBase) ConnectLayers ¶ added in v1.4.5
ConnectLayers establishes a projection between two layers, adding to the recv and send projection lists on each side of the connection. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkBase) DeleteAll ¶ added in v1.4.5
func (nt *NetworkBase) DeleteAll()
DeleteAll deletes all layers, prepares network for re-configuring and building
func (*NetworkBase) FunTimerStart ¶ added in v1.4.5
func (nt *NetworkBase) FunTimerStart(fun string)
FunTimerStart starts function timer for given function name -- ensures creation of timer
func (*NetworkBase) FunTimerStop ¶ added in v1.4.5
func (nt *NetworkBase) FunTimerStop(fun string)
FunTimerStop stops function timer -- timer must already exist
func (*NetworkBase) KeyLayerParams ¶ added in v1.7.11
func (nt *NetworkBase) KeyLayerParams() string
KeyLayerParams returns a listing for all layers in the network, of the most important layer-level params (specific to each algorithm).
func (*NetworkBase) KeyPrjnParams ¶ added in v1.7.11
func (nt *NetworkBase) KeyPrjnParams() string
KeyPrjnParams returns a listing for all Recv projections in the network, of the most important projection-level params (specific to each algorithm).
func (*NetworkBase) Label ¶ added in v1.4.5
func (nt *NetworkBase) Label() string
func (*NetworkBase) LateralConnectLayer ¶ added in v1.4.5
func (nt *NetworkBase) LateralConnectLayer(lay *Layer, pat prjn.Pattern) *Prjn
LateralConnectLayer establishes a self-projection within given layer. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkBase) LateralConnectLayerPrjn ¶ added in v1.4.5
LateralConnectLayerPrjn makes lateral self-projection using given projection. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkBase) LayByNameTry ¶ added in v1.7.12
func (nt *NetworkBase) LayByNameTry(name string) (*Layer, error)
LayByNameTry returns a layer by looking it up by name -- returns error message if layer is not found
func (*NetworkBase) LayerByName ¶ added in v1.4.5
func (nt *NetworkBase) LayerByName(name string) emer.Layer
LayerByName returns a layer by looking it up by name in the layer map (nil if not found). Will create the layer map if it is nil or a different size than the layers slice, but otherwise it needs to be updated manually.
func (*NetworkBase) LayerByNameTry ¶ added in v1.4.5
func (nt *NetworkBase) LayerByNameTry(name string) (emer.Layer, error)
LayerByNameTry returns a layer by looking it up by name -- returns error message if layer is not found
func (*NetworkBase) LayerMapPar ¶ added in v1.7.24
func (nt *NetworkBase) LayerMapPar(fun func(ly *Layer), funame string)
LayerMapPar applies function of given name to all layers using as many go routines as configured in NetThreads.Neurons.
func (*NetworkBase) LayerMapSeq ¶ added in v1.6.17
func (nt *NetworkBase) LayerMapSeq(fun func(ly *Layer), funame string)
LayerMapSeq applies function of given name to all layers sequentially.
func (*NetworkBase) LayerVals ¶ added in v1.8.0
func (nt *NetworkBase) LayerVals(li, di uint32) *LayerVals
LayerVals returns the LayerVals for given layer and data parallel indexes
func (*NetworkBase) LayersByClass ¶ added in v1.4.5
func (nt *NetworkBase) LayersByClass(classes ...string) []string
LayersByClass returns a list of layer names by given class(es). Lists are compiled when the network Build() function is called. The layer Type is always included as a Class, along with any other space-separated strings specified in Class for parameter styling, etc. If no classes are passed, all layer names in order are returned.
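A sketch of iterating over layers selected by class; the class names used here are illustrative (PFCLayer is mentioned above as a class set by AddPFC2D/4D):

	// assumes: import "fmt"
	for _, nm := range net.LayersByClass("PFCLayer") {
		ly := net.AxonLayerByName(nm)
		fmt.Println(ly.Name()) // inspect or configure each matching layer
	}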
func (*NetworkBase) LayersByType ¶ added in v1.7.1
func (nt *NetworkBase) LayersByType(layType ...LayerTypes) []string
LayersByType returns a list of layer names by given layer types. Lists are compiled when the network Build() function is called. The layer Type is always included as a Class, along with any other space-separated strings specified in Class for parameter styling, etc. If no types are passed, all layer names in order are returned.
func (*NetworkBase) Layout ¶ added in v1.4.5
func (nt *NetworkBase) Layout()
Layout computes the 3D layout of layers based on their relative position settings
func (*NetworkBase) MakeLayMap ¶ added in v1.4.5
func (nt *NetworkBase) MakeLayMap()
MakeLayMap updates layer map based on current layers
func (*NetworkBase) MaxParallelData ¶ added in v1.8.0
func (nt *NetworkBase) MaxParallelData() int
func (*NetworkBase) NLayers ¶ added in v1.4.5
func (nt *NetworkBase) NLayers() int
func (*NetworkBase) NParallelData ¶ added in v1.8.0
func (nt *NetworkBase) NParallelData() int
func (*NetworkBase) Name ¶ added in v1.4.5
func (nt *NetworkBase) Name() string
emer.Network interface methods:
func (*NetworkBase) NeuronMapPar ¶ added in v1.7.24
func (nt *NetworkBase) NeuronMapPar(ctx *Context, fun func(ly *Layer, ni uint32), funame string)
NeuronMapPar applies function of given name to all neurons using as many go routines as configured in NetThreads.Neurons.
func (*NetworkBase) NeuronMapSeq ¶ added in v1.7.24
func (nt *NetworkBase) NeuronMapSeq(ctx *Context, fun func(ly *Layer, ni uint32), funame string)
NeuronMapSeq applies function of given name to all neurons sequentially.
func (*NetworkBase) NonDefaultParams ¶ added in v1.4.5
func (nt *NetworkBase) NonDefaultParams() string
NonDefaultParams returns a listing of all parameters in the Network that are not at their default values -- useful for setting param styles etc.
func (*NetworkBase) OpenWtsCpp ¶ added in v1.4.5
func (nt *NetworkBase) OpenWtsCpp(filename gi.FileName) error
OpenWtsCpp opens network weights (and any other state that adapts with learning) from old C++ emergent format. If filename has .gz extension, then file is gzip uncompressed.
func (*NetworkBase) OpenWtsJSON ¶ added in v1.4.5
func (nt *NetworkBase) OpenWtsJSON(filename gi.FileName) error
OpenWtsJSON opens network weights (and any other state that adapts with learning) from a JSON-formatted file. If filename has .gz extension, then file is gzip uncompressed.
func (*NetworkBase) ParamsApplied ¶ added in v1.7.11
func (nt *NetworkBase) ParamsApplied(sel *params.Sel)
ParamsApplied is just to satisfy History interface so reset can be applied
func (*NetworkBase) ParamsHistoryReset ¶ added in v1.7.11
func (nt *NetworkBase) ParamsHistoryReset()
ParamsHistoryReset resets parameter application history
func (*NetworkBase) PrjnMapSeq ¶ added in v1.6.17
func (nt *NetworkBase) PrjnMapSeq(fun func(pj *Prjn), funame string)
PrjnMapSeq applies function of given name to all projections sequentially.
func (*NetworkBase) ReadWtsCpp ¶ added in v1.4.5
func (nt *NetworkBase) ReadWtsCpp(r io.Reader) error
ReadWtsCpp reads the weights from old C++ emergent format. Reads entire file into a temporary weights.Weights structure that is then passed to Layers etc using SetWts method.
func (*NetworkBase) ReadWtsJSON ¶ added in v1.4.5
func (nt *NetworkBase) ReadWtsJSON(r io.Reader) error
ReadWtsJSON reads network weights from the receiver-side perspective in a JSON text format. Reads entire file into a temporary weights.Weights structure that is then passed to Layers etc using SetWts method.
func (*NetworkBase) ResetRndSeed ¶ added in v1.7.12
func (nt *NetworkBase) ResetRndSeed()
ResetRndSeed sets random seed to saved RndSeed, ensuring that the network-specific random seed generator has been created.
func (*NetworkBase) SaveAllLayerInhibs ¶ added in v1.8.4
func (nt *NetworkBase) SaveAllLayerInhibs(filename gi.FileName) error
SaveAllLayerInhibs saves list of all layer Inhibition parameters to given file
func (*NetworkBase) SaveAllParams ¶ added in v1.8.4
func (nt *NetworkBase) SaveAllParams(filename gi.FileName) error
SaveAllParams saves list of all parameters in Network to given file.
func (*NetworkBase) SaveAllPrjnScales ¶ added in v1.8.4
func (nt *NetworkBase) SaveAllPrjnScales(filename gi.FileName) error
SaveAllPrjnScales saves a listing of all PrjnScale parameters in the Network in all Layers, Recv projections. These are among the most important and numerous of parameters (in larger networks) -- this helps keep track of what they all are set to.
func (*NetworkBase) SaveNonDefaultParams ¶ added in v1.8.4
func (nt *NetworkBase) SaveNonDefaultParams(filename gi.FileName) error
SaveNonDefaultParams saves list of all non-default parameters in Network to given file.
func (*NetworkBase) SaveParamsSnapshot ¶ added in v1.8.4
SaveParamsSnapshot saves various views of current parameters to either `params_good` if good = true (for current good reference params) or `params_2006_01_02` (year, month, day) datestamp, providing a snapshot of the simulation params for easy diffs and later reference. Also saves current Config and Params state.
func (*NetworkBase) SaveWtsJSON ¶ added in v1.4.5
func (nt *NetworkBase) SaveWtsJSON(filename gi.FileName) error
SaveWtsJSON saves network weights (and any other state that adapts with learning) to a JSON-formatted file. If filename has .gz extension, then file is gzip compressed.
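A sketch of saving and reloading trained weights; per the docs above, a .gz extension triggers gzip compression, and gi.FileName is assumed to be the string-based file name type from the gi package. The file name is illustrative:

	// assumes: import "github.com/goki/gi/gi"
	if err := net.SaveWtsJSON(gi.FileName("trained.wts.gz")); err != nil {
		panic(err)
	}
	// later, to restore:
	if err := net.OpenWtsJSON(gi.FileName("trained.wts.gz")); err != nil {
		panic(err)
	}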
func (*NetworkBase) SetCtxStrides ¶ added in v1.8.0
func (nt *NetworkBase) SetCtxStrides(simCtx *Context)
SetCtxStrides sets the given simulation context strides for accessing variables on this network -- these must be set properly before calling any compute methods with the context.
func (*NetworkBase) SetMaxData ¶ added in v1.8.0
func (nt *NetworkBase) SetMaxData(simCtx *Context, maxData int)
SetMaxData sets the MaxData and current NData for both the Network and the Context
func (*NetworkBase) SetNThreads ¶ added in v1.7.24
func (nt *NetworkBase) SetNThreads(nthr int)
SetNThreads sets the number of threads to use for CPU parallel processing. Pass 0 to use a default heuristic number based on the current GOMAXPROCS processors and the number of neurons in the network (call after building).
func (*NetworkBase) SetRndSeed ¶ added in v1.7.12
func (nt *NetworkBase) SetRndSeed(seed int64)
SetRndSeed sets random seed and calls ResetRndSeed
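Taken together, a hedged sketch of typical setup using SetMaxData, SetNThreads, and SetRndSeed, assuming net (*Network) and ctx (Context) are configured elsewhere and the Config / Build steps are elided:

    net.SetMaxData(&ctx, 16) // before Build: process 16 input patterns in parallel per step
    // ... configure layers / prjns and Build the network here ...
    net.SetNThreads(0) // after Build: heuristic thread count from GOMAXPROCS and network size
    net.SetRndSeed(42) // sets RndSeed and calls ResetRndSeed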
func (*NetworkBase) SetWts ¶ added in v1.4.5
func (nt *NetworkBase) SetWts(nw *weights.Network) error
SetWts sets the weights for this network from weights.Network decoded values
func (*NetworkBase) StdVertLayout ¶ added in v1.4.5
func (nt *NetworkBase) StdVertLayout()
StdVertLayout arranges layers in a standard vertical (z axis stack) layout, by setting the Rel settings
func (*NetworkBase) SynVarNames ¶ added in v1.8.0
func (nt *NetworkBase) SynVarNames() []string
SynVarNames returns the names of all the variables on the synapses in this network. Not all projections need to support all variables, but must safely return 0's for unsupported ones. The order of this list determines NetView variable display order. This is typically a global list so do not modify!
func (*NetworkBase) SynVarProps ¶ added in v1.8.0
func (nt *NetworkBase) SynVarProps() map[string]string
SynVarProps returns properties for variables
func (*NetworkBase) TimerReport ¶ added in v1.4.5
func (nt *NetworkBase) TimerReport()
TimerReport reports the amount of time spent in each function, and in each thread
func (*NetworkBase) UnitVarNames ¶ added in v1.8.0
func (nt *NetworkBase) UnitVarNames() []string
UnitVarNames returns a list of variable names available on the units in this network. Not all layers need to support all variables, but must safely return 0's for unsupported ones. The order of this list determines NetView variable display order. This is typically a global list so do not modify!
func (*NetworkBase) UnitVarProps ¶ added in v1.8.0
func (nt *NetworkBase) UnitVarProps() map[string]string
UnitVarProps returns properties for variables
func (*NetworkBase) VarRange ¶ added in v1.4.5
func (nt *NetworkBase) VarRange(varNm string) (min, max float32, err error)
VarRange returns the min / max values for the given variable. TODO: support r. / s. projection values.
func (*NetworkBase) WriteWtsJSON ¶ added in v1.4.5
func (nt *NetworkBase) WriteWtsJSON(w io.Writer) error
WriteWtsJSON writes the weights from this network from the receiver-side perspective in a JSON text format. We build in the indentation logic to make it much faster and more efficient.
type NeuroModParams ¶ added in v1.7.0
type NeuroModParams struct { DAMod DAModTypes `` /* 176-byte string literal not displayed */ Valence ValenceTypes `desc:"valence coding of this layer -- may affect specific layer types but does not directly affect neuromodulators currently"` DAModGain float32 `` /* 194-byte string literal not displayed */ DALRateSign slbool.Bool `` /* 346-byte string literal not displayed */ DALRateMod float32 `` /* 220-byte string literal not displayed */ AChLRateMod float32 `` /* 147-byte string literal not displayed */ AChDisInhib float32 `min:"0" def:"0,5" desc:"amount of extra Gi inhibition added in proportion to 1 - ACh level -- makes ACh disinhibitory"` BurstGain float32 `` /* 189-byte string literal not displayed */ DipGain float32 `` /* 249-byte string literal not displayed */ // contains filtered or unexported fields }
NeuroModParams specifies the effects of neuromodulators on neural activity and learning rate. These can apply to any neuron type, and are applied in the core cycle update equations.
func (*NeuroModParams) DAGain ¶ added in v1.7.11
func (nm *NeuroModParams) DAGain(da float32) float32
DAGain returns DA dopamine value with Burst / Dip Gain factors applied
func (*NeuroModParams) DASign ¶ added in v1.7.16
func (nm *NeuroModParams) DASign() float32
DASign returns the sign of dopamine effects: D2Mod = -1, else 1
func (*NeuroModParams) Defaults ¶ added in v1.7.0
func (nm *NeuroModParams) Defaults()
func (*NeuroModParams) GGain ¶ added in v1.7.0
func (nm *NeuroModParams) GGain(da float32) float32
GGain returns effective Ge and Gi gain factor given dopamine (DA) +/- burst / dip value (0 = tonic level). factor is 1 for no modulation, otherwise higher or lower.
func (*NeuroModParams) GiFmACh ¶ added in v1.7.0
func (nm *NeuroModParams) GiFmACh(ach float32) float32
GiFmACh returns the amount of extra inhibition to add based on disinhibitory effects of ACh -- no inhibition when ACh = 1, extra when < 1.
func (*NeuroModParams) IsBLAExt ¶ added in v1.7.18
func (nm *NeuroModParams) IsBLAExt() bool
IsBLAExt returns true if this is Positive, D2 or Negative D1 -- BLA extinction
func (*NeuroModParams) LRMod ¶ added in v1.7.0
func (nm *NeuroModParams) LRMod(da, ach float32) float32
LRMod returns overall learning rate modulation factor due to neuromodulation from given dopamine (DA) and ACh inputs. If DALRateMod is true and DAMod == D1Mod or D2Mod, then the sign is a function of the DA
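For example, an illustrative sketch of querying the modulation factor (with purely default parameters the factors are typically neutral, so meaningful values depend on how DAMod, DALRateMod, and AChLRateMod are configured; assumes this package is imported as axon):

    var nm axon.NeuroModParams
    nm.Defaults()
    lrBurst := nm.LRMod(0.5, 1.0) // learning rate factor under a phasic DA burst, full ACh
    lrDip := nm.LRMod(-0.5, 1.0)  // learning rate factor under a phasic DA dip
    fmt.Println(lrBurst, lrDip)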
func (*NeuroModParams) LRModFact ¶ added in v1.7.0
func (nm *NeuroModParams) LRModFact(pct, val float32) float32
LRModFact returns learning rate modulation factor for given inputs.
func (*NeuroModParams) Update ¶ added in v1.7.0
func (nm *NeuroModParams) Update()
type NeuronAvgVarStrides ¶ added in v1.8.0
type NeuronAvgVarStrides struct { Neuron uint32 `desc:"neuron level"` Var uint32 `desc:"variable level"` // contains filtered or unexported fields }
NeuronAvgVarStrides encodes the stride offsets for neuron variable access into network float32 array. Data is always the inner-most variable.
func (*NeuronAvgVarStrides) Idx ¶ added in v1.8.0
func (ns *NeuronAvgVarStrides) Idx(neurIdx uint32, nvar NeuronAvgVars) uint32
Idx returns the index into network float32 array for given neuron and variable
func (*NeuronAvgVarStrides) SetNeuronOuter ¶ added in v1.8.0
func (ns *NeuronAvgVarStrides) SetNeuronOuter()
SetNeuronOuter sets strides with neurons as outer loop: [Neurons][Vars], which is optimal for CPU-based computation.
func (*NeuronAvgVarStrides) SetVarOuter ¶ added in v1.8.0
func (ns *NeuronAvgVarStrides) SetVarOuter(nneur int)
SetVarOuter sets strides with vars as outer loop: [Vars][Neurons], which is optimal for GPU-based computation.
type NeuronAvgVars ¶ added in v1.8.0
type NeuronAvgVars int32
NeuronAvgVars are mostly neuron variables involved in longer-term average activity which is aggregated over time and not specific to each input data state, along with any other state that is not input data specific.
const ( // ActAvg is average activation (of minus phase activation state) over long time intervals (time constant = Dt.LongAvgTau) -- useful for finding hog units and seeing overall distribution of activation ActAvg NeuronAvgVars = iota // AvgPct is ActAvg as a proportion of overall layer activation -- this is used for synaptic scaling to match TrgAvg activation -- updated at SlowInterval intervals AvgPct // TrgAvg is neuron's target average activation as a proportion of overall layer activation, assigned during weight initialization, driving synaptic scaling relative to AvgPct TrgAvg // DTrgAvg is change in neuron's target average activation as a result of unit-wise error gradient -- acts like a bias weight. MPI needs to share these across processors. DTrgAvg // AvgDif is AvgPct - TrgAvg -- i.e., the error in overall activity level relative to set point for this neuron, which drives synaptic scaling -- updated at SlowInterval intervals AvgDif // GeBase is baseline level of Ge, added to GeRaw, for intrinsic excitability GeBase // GiBase is baseline level of Gi, added to GiRaw, for intrinsic excitability GiBase NeuronAvgVarsN )
func (*NeuronAvgVars) FromString ¶ added in v1.8.0
func (i *NeuronAvgVars) FromString(s string) error
func (NeuronAvgVars) MarshalJSON ¶ added in v1.8.0
func (ev NeuronAvgVars) MarshalJSON() ([]byte, error)
func (NeuronAvgVars) String ¶ added in v1.8.0
func (i NeuronAvgVars) String() string
func (*NeuronAvgVars) UnmarshalJSON ¶ added in v1.8.0
func (ev *NeuronAvgVars) UnmarshalJSON(b []byte) error
type NeuronFlags ¶ added in v1.6.4
type NeuronFlags int32
NeuronFlags are bit-flags encoding relevant binary state for neurons
const ( // NeuronOff flag indicates that this neuron has been turned off (i.e., lesioned) NeuronOff NeuronFlags = 1 // NeuronHasExt means the neuron has external input in its Ext field NeuronHasExt NeuronFlags = 2 // NeuronHasTarg means the neuron has external target input in its Target field NeuronHasTarg NeuronFlags = 4 // NeuronHasCmpr means the neuron has external comparison input in its Target field -- used for computing // comparison statistics but does not drive neural activity ever NeuronHasCmpr NeuronFlags = 8 )
The neuron flags
func (NeuronFlags) String ¶ added in v1.6.4
func (i NeuronFlags) String() string
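Because these are bit flags, several can be set together in the uint32-encoded NrnFlags neuron variable; a brief illustrative sketch of combining and testing them (assumes this package is imported as axon):

    flags := uint32(axon.NeuronHasExt | axon.NeuronHasTarg) // combine flags
    hasExt := flags&uint32(axon.NeuronHasExt) != 0          // test one flag
    isOff := flags&uint32(axon.NeuronOff) != 0
    fmt.Println(hasExt, isOff) // true false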
type NeuronIdxStrides ¶ added in v1.8.0
type NeuronIdxStrides struct { Neuron uint32 `desc:"neuron level"` Index uint32 `desc:"index value level"` // contains filtered or unexported fields }
NeuronIdxStrides encodes the stride offsets for neuron index access into network uint32 array.
func (*NeuronIdxStrides) Idx ¶ added in v1.8.0
func (ns *NeuronIdxStrides) Idx(neurIdx uint32, idx NeuronIdxs) uint32
Idx returns the index into network uint32 array for given neuron, index value
func (*NeuronIdxStrides) SetIdxOuter ¶ added in v1.8.0
func (ns *NeuronIdxStrides) SetIdxOuter(nneur int)
SetIdxOuter sets strides with indexes as outer dimension: [Idxs][Neurons] (outer to inner), which is optimal for GPU-based computation.
func (*NeuronIdxStrides) SetNeuronOuter ¶ added in v1.8.0
func (ns *NeuronIdxStrides) SetNeuronOuter()
SetNeuronOuter sets strides with neurons as outer dimension: [Neurons][Idxs] (outer to inner), which is optimal for CPU-based computation.
type NeuronIdxs ¶ added in v1.8.0
type NeuronIdxs int32
NeuronIdxs are the neuron indexes and other uint32 values. There is only one of these per neuron -- not data parallel. note: Flags are encoded in Vars because they are data parallel and writable, whereas indexes are read-only.
const ( // NrnNeurIdx is the index of this neuron within its owning layer NrnNeurIdx NeuronIdxs = iota // NrnLayIdx is the index of the layer that this neuron belongs to, // needed for neuron-level parallel code. NrnLayIdx // NrnSubPool is the index of the sub-level inhibitory pool for this neuron // (only for 4D shapes, the pool (unit-group / hypercolumn) structure level). // Indices start at 1 -- 0 is layer-level pool (is 0 if no sub-pools). NrnSubPool NeuronIdxsN )
func (*NeuronIdxs) FromString ¶ added in v1.8.0
func (i *NeuronIdxs) FromString(s string) error
func (NeuronIdxs) MarshalJSON ¶ added in v1.8.0
func (ev NeuronIdxs) MarshalJSON() ([]byte, error)
func (NeuronIdxs) String ¶ added in v1.8.0
func (i NeuronIdxs) String() string
func (*NeuronIdxs) UnmarshalJSON ¶ added in v1.8.0
func (ev *NeuronIdxs) UnmarshalJSON(b []byte) error
type NeuronVarStrides ¶ added in v1.8.0
type NeuronVarStrides struct { Neuron uint32 `desc:"neuron level"` Var uint32 `desc:"variable level"` // contains filtered or unexported fields }
NeuronVarStrides encodes the stride offsets for neuron variable access into network float32 array. Data is always the inner-most variable.
func (*NeuronVarStrides) Idx ¶ added in v1.8.0
func (ns *NeuronVarStrides) Idx(neurIdx, di uint32, nvar NeuronVars) uint32
Idx returns the index into network float32 array for given neuron, data, and variable
func (*NeuronVarStrides) SetNeuronOuter ¶ added in v1.8.0
func (ns *NeuronVarStrides) SetNeuronOuter(ndata int)
SetNeuronOuter sets strides with neurons as outer loop: [Neurons][Vars][Data], which is optimal for CPU-based computation.
func (*NeuronVarStrides) SetVarOuter ¶ added in v1.8.0
func (ns *NeuronVarStrides) SetVarOuter(nneur, ndata int)
SetVarOuter sets strides with vars as outer loop: [Vars][Neurons][Data], which is optimal for GPU-based computation.
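A brief sketch contrasting the two layouts (illustrative only -- user code normally accesses neuron variables through higher-level Layer / Network methods rather than computing raw offsets; assumes this package is imported as axon):

    var ns axon.NeuronVarStrides
    ns.SetNeuronOuter(4)             // CPU layout: [Neurons][Vars][Data], with ndata = 4
    cpuIdx := ns.Idx(10, 2, axon.Vm) // offset of Vm for neuron 10, data index 2
    ns.SetVarOuter(100, 4)           // GPU layout: [Vars][Neurons][Data], with nneur = 100
    gpuIdx := ns.Idx(10, 2, axon.Vm)
    fmt.Println(cpuIdx, gpuIdx)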
type NeuronVars ¶
type NeuronVars int32
NeuronVars are the neuron variables representing current active state, specific to each input data state. See NeuronAvgVars for vars shared across data.
const ( // Spike is whether neuron has spiked or not on this cycle (0 or 1) Spike NeuronVars = iota // Spiked is 1 if neuron has spiked within the last 10 cycles (msecs), corresponding to a nominal max spiking rate of 100 Hz, 0 otherwise -- useful for visualization and computing activity levels in terms of average spiked levels. Spiked // Act is rate-coded activation value reflecting instantaneous estimated rate of spiking, based on 1 / ISIAvg. This drives feedback inhibition in the FFFB function (todo: this will change when better inhibition is implemented), and is integrated over time for ActInt which is then used for performance statistics and layer average activations, etc. Should not be used for learning or other computations. Act // ActInt is integrated running-average activation value computed from Act with time constant Act.Dt.IntTau, to produce a longer-term integrated value reflecting the overall activation state across the ThetaCycle time scale, as the overall response of network to current input state -- this is copied to ActM and ActP at the ends of the minus and plus phases, respectively, and used in computing performance-level statistics (which are typically based on ActM). Should not be used for learning or other computations. ActInt // ActM is ActInt activation state at end of third quarter, representing the posterior-cortical minus phase activation -- used for statistics and monitoring network performance. Should not be used for learning or other computations. ActM // ActP is ActInt activation state at end of fourth quarter, representing the posterior-cortical plus_phase activation -- used for statistics and monitoring network performance. Should not be used for learning or other computations. ActP // Ext is external input: drives activation of unit from outside influences (e.g., sensory input) Ext // Target is the target value: drives learning to produce this activation value Target // Ge is total excitatory conductance, including all forms of excitation (e.g., NMDA) -- does *not* include Gbar.E Ge // Gi is total inhibitory synaptic conductance -- the net inhibitory input to the neuron -- does *not* include Gbar.I Gi // Gk is total potassium conductance, typically reflecting sodium-gated potassium currents involved in adaptation effects -- does *not* include Gbar.K Gk // Inet is net current produced by all channels -- drives update of Vm Inet // Vm is membrane potential -- integrates Inet current over time Vm // VmDend is dendritic membrane potential -- has a slower time constant, is not subject to the VmR reset after spiking VmDend // ISI is current inter-spike-interval -- counts up since last spike. Starts at -1 when initialized. ISI // ISIAvg is average inter-spike-interval -- average time interval between spikes, integrated with ISITau rate constant (relatively fast) to capture something close to an instantaneous spiking rate. Starts at -1 when initialized, and goes to -2 after first spike, and is only valid after the second spike post-initialization. ISIAvg // CaSpkP is continuous cascaded integration of CaSpkM at PTau time constant (typically 40), representing neuron-level purely spiking version of plus, LTP direction of weight change and capturing the function of CaMKII in the Kinase learning rule. Used for specialized learning and computational functions, statistics, instead of Act. 
CaSpkP // CaSpkD is continuous cascaded integration CaSpkP at DTau time constant (typically 40), representing neuron-level purely spiking version of minus, LTD direction of weight change and capturing the function of DAPK1 in the Kinase learning rule. Used for specialized learning and computational functions, statistics, instead of Act. CaSpkD // CaSyn is spike-driven calcium trace for synapse-level Ca-driven learning: exponential integration of SpikeG * Spike at SynTau time constant (typically 30). Synapses integrate send.CaSyn * recv.CaSyn across M, P, D time integrals for the synaptic trace driving credit assignment in learning. Time constant reflects binding time of Glu to NMDA and Ca buffering postsynaptically, and determines time window where pre * post spiking must overlap to drive learning. CaSyn // CaSpkM is spike-driven calcium trace used as a neuron-level proxy for synpatic credit assignment factor based on continuous time-integrated spiking: exponential integration of SpikeG * Spike at MTau time constant (typically 5). Simulates a calmodulin (CaM) like signal at the most abstract level. CaSpkM // CaSpkPM is minus-phase snapshot of the CaSpkP value -- similar to ActM but using a more directly spike-integrated value. CaSpkPM // CaLrn is recv neuron calcium signal used to drive temporal error difference component of standard learning rule, combining NMDA (NmdaCa) and spiking-driven VGCC (VgccCaInt) calcium sources (vs. CaSpk* which only reflects spiking component). This is integrated into CaM, CaP, CaD, and temporal derivative is CaP - CaD (CaMKII - DAPK1). This approximates the backprop error derivative on net input, but VGCC component adds a proportion of recv activation delta as well -- a balance of both works best. The synaptic-level trace multiplier provides the credit assignment factor, reflecting coincident activity and potentially integrated over longer multi-trial timescales. CaLrn // NrnCaM is integrated CaLrn at MTau timescale (typically 5), simulating a calmodulin (CaM) like signal, which then drives CaP, CaD for delta signal driving error-driven learning. NrnCaM // NrnCaP is cascaded integration of CaM at PTau time constant (typically 40), representing the plus, LTP direction of weight change and capturing the function of CaMKII in the Kinase learning rule. NrnCaP // NrnCaD is cascaded integratoin of CaP at DTau time constant (typically 40), representing the minus, LTD direction of weight change and capturing the function of DAPK1 in the Kinase learning rule. NrnCaD // CaDiff is difference between CaP - CaD -- this is the error signal that drives error-driven learning. CaDiff // Attn is Attentional modulation factor, which can be set by special layers such as the TRC -- multiplies Ge Attn // RLRate is recv-unit based learning rate multiplier, reflecting the sigmoid derivative computed from the CaSpkD of recv unit, and the normalized difference CaSpkP - CaSpkD / MAX(CaSpkP - CaSpkD). RLRate // SpkMaxCa is Ca integrated like CaSpkP but only starting at MaxCycStart cycle, to prevent inclusion of carryover spiking from prior theta cycle trial -- the PTau time constant otherwise results in significant carryover. This is the input to SpkMax SpkMaxCa // SpkMax is maximum CaSpkP across one theta cycle time window (max of SpkMaxCa) -- used for specialized algorithms that have more phasic behavior within a single trial, e.g., BG Matrix layer gating. Also useful for visualization of peak activity of neurons. 
SpkMax // SpkPrv is final CaSpkD activation state at end of previous theta cycle. used for specialized learning mechanisms that operate on delayed sending activations. SpkPrv // SpkSt1 is the activation state at specific time point within current state processing window (e.g., 50 msec for beta cycle within standard theta cycle), as saved by SpkSt1() function. Used for example in hippocampus for CA3, CA1 learning SpkSt1 // SpkSt2 is the activation state at specific time point within current state processing window (e.g., 100 msec for beta cycle within standard theta cycle), as saved by SpkSt2() function. Used for example in hippocampus for CA3, CA1 learning SpkSt2 // GeNoiseP is accumulating poisson probability factor for driving excitatory noise spiking -- multiply times uniform random deviate at each time step, until it gets below the target threshold based on lambda. GeNoiseP // GeNoise is integrated noise excitatory conductance, added into Ge GeNoise // GiNoiseP is accumulating poisson probability factor for driving inhibitory noise spiking -- multiply times uniform random deviate at each time step, until it gets below the target threshold based on lambda. GiNoiseP // GiNoise is integrated noise inhibotyr conductance, added into Gi GiNoise // GeExt is extra excitatory conductance added to Ge -- from Ext input, GeCtxt etc GeExt // GeRaw is raw excitatory conductance (net input) received from senders = current raw spiking drive GeRaw // GeSyn is time-integrated total excitatory synaptic conductance, with an instantaneous rise time from each spike (in GeRaw) and exponential decay with Dt.GeTau, aggregated over projections -- does *not* include Gbar.E GeSyn // GiRaw is raw inhibitory conductance (net input) received from senders = current raw spiking drive GiRaw // GiSyn is time-integrated total inhibitory synaptic conductance, with an instantaneous rise time from each spike (in GiRaw) and exponential decay with Dt.GiTau, aggregated over projections -- does *not* include Gbar.I. This is added with computed FFFB inhibition to get the full inhibition in Gi GiSyn // GeInt is integrated running-average activation value computed from Ge with time constant Act.Dt.IntTau, to produce a longer-term integrated value reflecting the overall Ge level across the ThetaCycle time scale (Ge itself fluctuates considerably) -- useful for stats to set strength of connections etc to get neurons into right range of overall excitatory drive GeInt // GeIntMax is maximum GeInt value across one theta cycle time window. GeIntMax // GiInt is integrated running-average activation value computed from GiSyn with time constant Act.Dt.IntTau, to produce a longer-term integrated value reflecting the overall synaptic Gi level across the ThetaCycle time scale (Gi itself fluctuates considerably) -- useful for stats to set strength of connections etc to get neurons into right range of overall inhibitory drive GiInt // GModRaw is raw modulatory conductance, received from GType = ModulatoryG projections GModRaw // GModSyn is syn integrated modulatory conductance, received from GType = ModulatoryG projections GModSyn // GMaintRaw is raw maintenance conductance, received from GType = MaintG projections GMaintRaw // GMaintSyn is syn integrated maintenance conductance, integrated using MaintNMDA params. 
GMaintSyn // SSGi is SST+ somatostatin positive slow spiking inhibition SSGi // SSGiDend is amount of SST+ somatostatin positive slow spiking inhibition applied to dendritic Vm (VmDend) SSGiDend // Gak is conductance of A-type K potassium channels Gak // MahpN is accumulating voltage-gated gating value for the medium time scale AHP MahpN // SahpCa is slowly accumulating calcium value that drives the slow AHP SahpCa // SahpN is sAHP gating value SahpN // GknaMed is conductance of sodium-gated potassium channel (KNa) medium dynamics (Slick) -- produces accommodation / adaptation of firing GknaMed // GknaSlow is conductance of sodium-gated potassium channel (KNa) slow dynamics (Slack) -- produces accommodation / adaptation of firing GknaSlow // GnmdaSyn is integrated NMDA recv synaptic current -- adds GeRaw and decays with time constant GnmdaSyn // Gnmda is net postsynaptic (recv) NMDA conductance, after Mg V-gating and Gbar -- added directly to Ge as it has the same reversal potential Gnmda // GnmdaMaint is net postsynaptic maintenance NMDA conductance, computed from GMaintSyn and GMaintRaw, after Mg V-gating and Gbar -- added directly to Ge as it has the same reversal potential GnmdaMaint // GnmdaLrn is learning version of integrated NMDA recv synaptic current -- adds GeRaw and decays with time constant -- drives NmdaCa that then drives CaM for learning GnmdaLrn // NmdaCa is NMDA calcium computed from GnmdaLrn, drives learning via CaM NmdaCa // GgabaB is net GABA-B conductance, after Vm gating and Gbar + Gbase -- applies to Gk, not Gi, for GIRK, with .1 reversal potential. GgabaB // GABAB is GABA-B / GIRK activation -- time-integrated value with rise and decay time constants GABAB // GABABx is GABA-B / GIRK internal drive variable -- gets the raw activation and decays GABABx // Gvgcc is conductance (via Ca) for VGCC voltage gated calcium channels Gvgcc // VgccM is activation gate of VGCC channels VgccM // VgccH inactivation gate of VGCC channels VgccH // VgccCa is instantaneous VGCC calcium flux -- can be driven by spiking or directly from Gvgcc VgccCa // VgccCaInt time-integrated VGCC calcium flux -- this is actually what drives learning VgccCaInt // SKCaIn is intracellular calcium store level, available to be released with spiking as SKCaR, which can bind to SKCa receptors and drive K current. replenishment is a function of spiking activity being below a threshold SKCaIn // SKCaR released amount of intracellular calcium, from SKCaIn, as a function of spiking events. this can bind to SKCa channels and drive K currents. SKCaR // SKCaM is Calcium-gated potassium channel gating factor, driven by SKCaR via a Hill equation as in chans.SKPCaParams. SKCaM // Gsk is Calcium-gated potassium channel conductance as a function of Gbar * SKCaM. Gsk // Burst is 5IB bursting activation value, computed by thresholding regular CaSpkP value in Super superficial layers Burst // BurstPrv is previous Burst bursting activation from prior time step -- used for context-based learning BurstPrv // CtxtGe is context (temporally delayed) excitatory conductance, driven by deep bursting at end of the plus phase, for CT layers. CtxtGe // CtxtGeRaw is raw update of context (temporally delayed) excitatory conductance, driven by deep bursting at end of the plus phase, for CT layers. CtxtGeRaw // CtxtGeOrig is original CtxtGe value prior to any decay factor -- updates at end of plus phase. CtxtGeOrig // NrnFlags are bit flags for binary state variables, which are converted to / from uint32. 
// These need to be in Vars because they can be differential per data (for ext inputs) // and are writable (indexes are read only). NrnFlags NeuronVarsN )
func (*NeuronVars) FromString ¶ added in v1.8.0
func (i *NeuronVars) FromString(s string) error
func (NeuronVars) MarshalJSON ¶ added in v1.8.0
func (ev NeuronVars) MarshalJSON() ([]byte, error)
func (NeuronVars) String ¶ added in v1.8.0
func (i NeuronVars) String() string
func (*NeuronVars) UnmarshalJSON ¶ added in v1.8.0
func (ev *NeuronVars) UnmarshalJSON(b []byte) error
type PVLV ¶ added in v1.7.11
type PVLV struct { Drive Drives `` /* 195-byte string literal not displayed */ Effort Effort `` /* 147-byte string literal not displayed */ Urgency Urgency `` /* 188-byte string literal not displayed */ VTA VTA `` /* 247-byte string literal not displayed */ LHb LHb `` /* 266-byte string literal not displayed */ }
PVLV represents the core brainstem-level (hypothalamus) bodily drives and resulting dopamine from US (unconditioned stimulus) inputs, as computed by the PVLV model of primary value (PV) and learned value (LV), describing the functions of the Amygdala, Ventral Striatum, VTA, and associated midbrain nuclei (LDT, LHb, RMTg). Core LHb (lateral habenula) and VTA (ventral tegmental area) dopamine are computed in equations using inputs from specialized network layers (LDTLayer driven by BLA, CeM layers, VSPatchLayer). Renders USLayer, PVLayer, DrivesLayer representations based on state updated here.
func (*PVLV) EffortUpdt ¶ added in v1.7.11
EffortUpdt updates the effort based on given effort increment, resetting instead if HasRewPrev flag is true. Call this at the start of the trial, in ApplyPVLV method.
func (*PVLV) EffortUrgencyUpdt ¶ added in v1.7.18
EffortUrgencyUpdt updates the Effort & Urgency based on given effort increment, resetting instead if HasRewPrev flag is true. Call this at the start of the trial, in ApplyPVLV method.
func (*PVLV) ShouldGiveUp ¶ added in v1.7.19
ShouldGiveUp tests whether it is time to give up on the current goal, based on sum of LHb Dip (missed expected rewards) and maximum effort.
type Pool ¶
type Pool struct {
StIdx, EdIdx uint32 `inactive:"+" desc:"starting and ending (exclusive) layer-wise indexes for the list of neurons in this pool"`
LayIdx uint32 `view:"-" desc:"layer index in global layer list"`
DataIdx uint32 `view:"-" desc:"data parallel index (innermost index per layer)"`
PoolIdx uint32 `view:"-" desc:"pool index in global pool list: [Layer][Pool][Data]"`
IsLayPool slbool.Bool `inactive:"+" desc:"is this a layer-wide pool? if not, it represents a sub-pool of units within a 4D layer"`
Gated slbool.Bool `inactive:"+" desc:"for special types where relevant (e.g., MatrixLayer, BGThalLayer), indicates if the pool was gated"`
Inhib fsfffb.Inhib `inactive:"+" desc:"fast-slow FFFB inhibition values"`
// note: these last two have elements that are shared across data parallel -- not worth separating though?
AvgMax PoolAvgMax `desc:"average and max values for relevant variables in this pool, at different time scales"`
AvgDif AvgMaxI32 `inactive:"+" view:"inline" desc:"absolute value of AvgDif differences from actual neuron ActPct relative to TrgAvg"`
// contains filtered or unexported fields
}
Pool contains computed values for FS-FFFB inhibition, and various other state values for layers and pools (unit groups) that can be subject to inhibition
func (*Pool) AvgMaxUpdate ¶ added in v1.8.0
AvgMaxUpdate updates the AvgMax values based on current neuron values
type PoolAvgMax ¶ added in v1.7.0
type PoolAvgMax struct { CaSpkP AvgMaxPhases `` /* 252-byte string literal not displayed */ CaSpkD AvgMaxPhases `inactive:"+" view:"inline" desc:"avg and maximum CaSpkD longer-term depression / DAPK1 signal in layer"` SpkMax AvgMaxPhases `` /* 136-byte string literal not displayed */ Act AvgMaxPhases `inactive:"+" view:"inline" desc:"avg and maximum Act firing rate value"` GeInt AvgMaxPhases `inactive:"+" view:"inline" desc:"avg and maximum GeInt integrated running-average excitatory conductance value"` GeIntMax AvgMaxPhases `inactive:"+" view:"inline" desc:"avg and maximum GeIntMax integrated running-average excitatory conductance value"` GiInt AvgMaxPhases `inactive:"+" view:"inline" desc:"avg and maximum GiInt integrated running-average inhibitory conductance value"` }
PoolAvgMax contains the average and maximum values over a Pool of neurons for different variables of interest, at Cycle, Minus and Plus phase timescales. All of the cycle level values are updated at the *start* of the cycle based on values from the prior cycle -- thus are 1 cycle behind in general.
func (*PoolAvgMax) Calc ¶ added in v1.7.9
func (am *PoolAvgMax) Calc(refIdx int32)
Calc runs the Calc computation on the Cycle-level values, and then re-initializes them for the next cycle's aggregation
func (*PoolAvgMax) CycleToMinus ¶ added in v1.7.0
func (am *PoolAvgMax) CycleToMinus()
CycleToMinus grabs current Cycle values into the Minus phase values
func (*PoolAvgMax) CycleToPlus ¶ added in v1.7.0
func (am *PoolAvgMax) CycleToPlus()
CycleToPlus grabs current Cycle values into the Plus phase values
func (*PoolAvgMax) Init ¶ added in v1.7.0
func (am *PoolAvgMax) Init()
Init does Init on the Cycle values, for the start of an update. These are always left initialized, so calling this is generally unnecessary.
func (*PoolAvgMax) SetN ¶ added in v1.7.9
func (am *PoolAvgMax) SetN(n int32)
SetN sets the N for aggregation
func (*PoolAvgMax) Zero ¶ added in v1.7.9
func (am *PoolAvgMax) Zero()
Zero does full reset on everything -- for InitActs
type PopCodeParams ¶ added in v1.7.11
type PopCodeParams struct { On slbool.Bool `desc:"use popcode encoding of variable(s) that this layer represents"` Ge float32 `` /* 137-byte string literal not displayed */ Min float32 `` /* 191-byte string literal not displayed */ Max float32 `` /* 190-byte string literal not displayed */ MinAct float32 `` /* 272-byte string literal not displayed */ MinSigma float32 `` /* 264-byte string literal not displayed */ MaxSigma float32 `` /* 264-byte string literal not displayed */ Clip slbool.Bool `viewif:"On" desc:"ensure that encoded and decoded value remains within specified range"` }
PopCodeParams provides an encoding of scalar value using population code, where a single continuous (scalar) value is encoded as a gaussian bump across a population of neurons (1 dimensional). It can also modulate rate code and number of neurons active according to the value. This is for layers that represent values as in the PVLV system (from Context.PVLV). Both normalized activation values (1 max) and Ge conductance values can be generated.
func (*PopCodeParams) ClipVal ¶ added in v1.7.11
func (pc *PopCodeParams) ClipVal(val float32) float32
ClipVal returns clipped (clamped) value in min / max range
func (*PopCodeParams) Defaults ¶ added in v1.7.11
func (pc *PopCodeParams) Defaults()
func (*PopCodeParams) EncodeGe ¶ added in v1.7.11
func (pc *PopCodeParams) EncodeGe(i, n uint32, val float32) float32
EncodeGe returns the Ge conductance value encoding the given scalar value, for neuron index i out of n total neurons. n must be 2 or more.
func (*PopCodeParams) EncodeVal ¶ added in v1.7.11
func (pc *PopCodeParams) EncodeVal(i, n uint32, val float32) float32
EncodeVal returns the encoded activation value for the given scalar value, for neuron index i out of n total neurons. n must be 2 or more.
func (*PopCodeParams) ProjectParam ¶ added in v1.7.11
func (pc *PopCodeParams) ProjectParam(minParam, maxParam, clipVal float32) float32
ProjectParam projects given min / max param value onto val within range
func (*PopCodeParams) SetRange ¶ added in v1.7.11
func (pc *PopCodeParams) SetRange(min, max, minSigma, maxSigma float32)
SetRange sets the min, max and sigma values
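A brief sketch of encoding a scalar value as a gaussian population bump (illustrative parameter values only; assumes this package is imported as axon):

    var pc axon.PopCodeParams
    pc.Defaults()
    pc.SetRange(0, 1, 0.1, 0.1) // represent values in 0..1, sigma = 0.1 at both ends
    n := uint32(12)
    for i := uint32(0); i < n; i++ {
        fmt.Printf("%d: %.3f\n", i, pc.EncodeVal(i, n, 0.75)) // unit i's activation for value 0.75
    }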
func (*PopCodeParams) Update ¶ added in v1.7.11
func (pc *PopCodeParams) Update()
type Prjn ¶
type Prjn struct { PrjnBase Params *PrjnParams `desc:"all prjn-level parameters -- these must remain constant once configured"` }
axon.Prjn is a basic Axon projection with synaptic learning parameters
func (*Prjn) AsAxon ¶
AsAxon returns this prjn as an axon.Prjn -- all derived prjns must redefine this to return the base Prjn type, so that the AxonPrjn interface does not need to include accessors to all the basic stuff.
func (*Prjn) DWt ¶
DWt computes the weight change (learning), based on synaptically-integrated spiking, computed at the Theta cycle interval. This is the trace version for hidden units, and uses syn CaP - CaD for targets.
func (*Prjn) DWtSubMean ¶ added in v1.2.23
DWtSubMean subtracts the mean from any projections that have SubMean > 0. This is called on *receiving* projections, prior to WtFmDwt.
func (*Prjn) InitGBuffs ¶ added in v1.5.10
func (pj *Prjn) InitGBuffs()
InitGBuffs initializes the per-projection synaptic conductance buffers. This is not typically needed (called during InitWts, InitActs) but can be called when needed. Must be called to completely initialize prior activity, e.g., full Glong clearing.
func (*Prjn) InitWtSym ¶
InitWtSym initializes weight symmetry. Is given the reciprocal projection where the Send and Recv layers are reversed (see LayerBase RecipToRecvPrjn)
func (*Prjn) InitWts ¶
InitWts initializes weight values according to SWt params, enforcing current constraints.
func (*Prjn) InitWtsSyn ¶
InitWtsSyn initializes weight values based on WtInit randomness parameters for an individual synapse. It also updates the linear weight value based on the sigmoidal weight value.
func (*Prjn) LRateMod ¶ added in v1.6.13
LRateMod sets the LRate modulation parameter for Prjns, which is for dynamic modulation of learning rate (see also LRateSched). Updates the effective learning rate factor accordingly.
func (*Prjn) LRateSched ¶ added in v1.6.13
LRateSched sets the schedule-based learning rate multiplier. See also LRateMod. Updates the effective learning rate factor accordingly.
func (*Prjn) Object ¶ added in v1.7.0
Object returns the object with parameters to be set by emer.Params
func (*Prjn) ReadWtsJSON ¶
ReadWtsJSON reads the weights from this projection from the receiver-side perspective in a JSON text format. This is for a set of weights that were saved *for one prjn only* and is not used for the network-level ReadWtsJSON, which reads into a separate structure -- see SetWts method.
func (*Prjn) SWtFmWt ¶ added in v1.2.45
SWtFmWt updates structural, slowly-adapting SWt value based on accumulated DSWt values, which are zero-summed with additional soft bounding relative to SWt limits.
func (*Prjn) SWtRescale ¶ added in v1.2.45
SWtRescale rescales the SWt values to preserve the target overall mean value, using subtractive normalization.
func (*Prjn) SendSpike ¶
SendSpike sends a spike from the sending neuron at index sendIdx into the GBuf buffer on the receiver side. The buffer on the receiver side is a ring buffer, which is used for modelling the time delay between sending and receiving spikes.
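Conceptually, the delay can be pictured as a small ring buffer per receiving neuron, where a value written now is read back Delay cycles later. The following is a conceptual sketch only, not the package's actual GBuf implementation:

    // delayBuf is a hypothetical delay line: buf has length delay+1
    type delayBuf struct {
        buf []float32 // pending conductance, one slot per cycle of delay
        wr  int       // slot receiving spikes that arrive delay cycles from now
    }

    func (d *delayBuf) send(g float32) { d.buf[d.wr] += g }

    func (d *delayBuf) step() float32 {
        rd := (d.wr + 1) % len(d.buf) // oldest slot: conductance arriving this cycle
        g := d.buf[rd]
        d.buf[rd] = 0
        d.wr = rd // the just-emptied slot becomes the new write slot
        return g
    }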
func (*Prjn) SetSWtsFunc ¶ added in v1.2.75
func (pj *Prjn) SetSWtsFunc(ctx *Context, swtFun func(si, ri int, send, recv *etensor.Shape) float32)
SetSWtsFunc initializes structural SWt values using given function based on receiving and sending unit indexes.
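For example, a hedged sketch of a topographic initialization, assuming pj and ctx are configured elsewhere, 2D layers wider than one unit, and the etensor and mat32 packages from the emergent ecosystem:

    pj.SetSWtsFunc(ctx, func(si, ri int, send, recv *etensor.Shape) float32 {
        // normalized position of sender and receiver along the inner (x) dimension
        sx := float32(si%send.Dim(1)) / float32(send.Dim(1)-1)
        rx := float32(ri%recv.Dim(1)) / float32(recv.Dim(1)-1)
        return 0.8 - 0.4*mat32.Abs(sx-rx) // nearer positions get larger structural weights
    })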
func (*Prjn) SetSWtsRPool ¶ added in v1.2.75
SetSWtsRPool initializes SWt structural weight values using given tensor of values which has unique values for each recv neuron within a given pool.
func (*Prjn) SetSynVal ¶
SetSynVal sets value of given variable name on the synapse between given send, recv unit indexes (1D, flat indexes) returns error for access errors.
func (*Prjn) SetWtsFunc ¶
SetWtsFunc initializes synaptic Wt value using given function based on receiving and sending unit indexes. Strongly suggest calling SWtRescale after.
func (*Prjn) SlowAdapt ¶ added in v1.2.37
SlowAdapt does the slow adaptation: SWt learning and SynScale
func (*Prjn) SynCaRecv ¶ added in v1.7.9
SynCaRecv updates synaptic calcium based on spiking, for SynSpkTheta mode. Optimized version only updates at point of spiking. This pass goes through in recv order, filtering on recv spike, and skips when sender spiked, as those were already done in Send version.
func (*Prjn) SynCaReset ¶ added in v1.8.0
SynCaReset resets SynCa values -- called during SlowAdapt
func (*Prjn) SynCaSend ¶ added in v1.7.9
SynCaSend updates synaptic calcium based on spiking, for SynSpkTheta mode. Optimized version only updates at point of spiking. This pass goes through in sending order, filtering on sending spike. Sender will update even if recv neuron spiked -- recv will skip sender spike cases.
func (*Prjn) SynFail ¶ added in v1.2.92
SynFail updates synaptic weight failure only -- normally done as part of DWt and WtFmDWt, but this call can be used during testing to update failing synapses.
func (*Prjn) SynScale ¶ added in v1.2.23
SynScale performs synaptic scaling based on running average activation vs. targets. Layer-level AvgDifFmTrgAvg function must be called first.
func (*Prjn) Update ¶ added in v1.7.0
func (pj *Prjn) Update()
Update is interface that does local update of struct vals
func (*Prjn) UpdateParams ¶
func (pj *Prjn) UpdateParams()
UpdateParams updates all params given any changes that might have been made to individual values
func (*Prjn) WriteWtsJSON ¶
WriteWtsJSON writes the weights from this projection from the receiver-side perspective in a JSON text format. We build in the indentation logic to make it much faster and more efficient.
type PrjnBase ¶ added in v1.4.14
type PrjnBase struct { AxonPrj AxonPrjn `` /* 267-byte string literal not displayed */ Off bool `desc:"inactivate this projection -- allows for easy experimentation"` Cls string `desc:"Class is for applying parameter styles, can be space separated multple tags"` Notes string `desc:"can record notes about this projection here"` Send *Layer `desc:"sending layer for this projection"` Recv *Layer `desc:"receiving layer for this projection"` Pat prjn.Pattern `tableview:"-" desc:"pattern of connectivity"` Typ PrjnTypes `` /* 154-byte string literal not displayed */ DefParams params.Params `` /* 317-byte string literal not displayed */ ParamsHistory params.HistoryImpl `tableview:"-" desc:"provides a history of parameters applied to the layer"` RecvConNAvgMax minmax.AvgMax32 `tableview:"-" inactive:"+" view:"inline" desc:"average and maximum number of recv connections in the receiving layer"` SendConNAvgMax minmax.AvgMax32 `tableview:"-" inactive:"+" view:"inline" desc:"average and maximum number of sending connections in the sending layer"` SynStIdx uint32 `view:"-" desc:"start index into global Synapse array: [Layer][SendPrjns][Synapses]"` NSyns uint32 `view:"-" desc:"number of synapses in this projection"` RecvCon []StartN `` /* 301-byte string literal not displayed */ RecvSynIdx []uint32 `` /* 236-byte string literal not displayed */ RecvConIdx []uint32 `` /* 281-byte string literal not displayed */ SendCon []StartN `` /* 267-byte string literal not displayed */ SendConIdx []uint32 `` /* 394-byte string literal not displayed */ // spike aggregation values: GBuf []int32 `` /* 305-byte string literal not displayed */ GSyns []float32 `` /* 254-byte string literal not displayed */ }
PrjnBase contains the basic structural information for specifying a projection of synaptic connections between two layers, and maintaining all the synaptic connection-level data. The same struct token is added to the Recv and Send layer prjn lists, and it manages everything about the connectivity, and methods on the Prjn handle all the relevant computation. The Base does not have algorithm-specific methods and parameters, so it can be easily reused for different algorithms, and cleanly separates the algorithm-specific code. Any dependency on the algorithm-level Prjn can be captured in the AxonPrjn interface, accessed via the AxonPrj field.
func (*PrjnBase) ApplyDefParams ¶ added in v1.7.18
func (pj *PrjnBase) ApplyDefParams()
ApplyDefParams applies DefParams default parameters if set. Called by Prjn.Defaults().
func (*PrjnBase) ApplyParams ¶ added in v1.4.14
ApplyParams applies given parameter style Sheet to this projection. Calls UpdateParams if anything set to ensure derived parameters are all updated. If setMsg is true, then a message is printed to confirm each parameter that is set. It always prints a message if a parameter fails to be set. Returns true if any params were set, and an error if there were any errors.
func (*PrjnBase) Build ¶ added in v1.7.0
Build constructs the full connectivity among the layers. Calls Validate and returns error if invalid. Pat.Connect is called to get the pattern of the connection. Then the connection indexes are configured according to that pattern. Does NOT allocate synapses -- these are set by Network from global slice.
func (*PrjnBase) Connect ¶ added in v1.4.14
Connect sets the connectivity between two layers and the pattern to use in interconnecting them
func (*PrjnBase) Init ¶ added in v1.4.14
Init MUST be called to initialize the prjn's pointer to itself as an emer.Prjn which enables the proper interface methods to be called.
func (*PrjnBase) NonDefaultParams ¶ added in v1.4.14
NonDefaultParams returns a listing of all parameters in this projection that are not at their default values -- useful for setting param styles etc.
func (*PrjnBase) ParamsApplied ¶ added in v1.7.11
ParamsApplied is provided just to satisfy the History interface, so that reset can be applied
func (*PrjnBase) ParamsHistoryReset ¶ added in v1.7.11
func (pj *PrjnBase) ParamsHistoryReset()
ParamsHistoryReset resets parameter application history
func (*PrjnBase) PrjnTypeName ¶ added in v1.4.14
func (*PrjnBase) RecvSynIdxs ¶ added in v1.7.24
RecvSynIdxs returns the receiving synapse indexes for given recv unit index within the receiving layer, to be iterated over for recv-based processing.
func (*PrjnBase) SetConStartN ¶ added in v1.7.2
SetConStartN sets the *Con StartN values given n tensor from Pat. Returns total number of connections for this direction.
func (*PrjnBase) SetOff ¶ added in v1.4.14
SetOff sets the Off status of this individual projection. Careful: Layer.SetOff(true) will reactivate all prjns of that layer, so prjn-level lesioning should always be done last.
func (*PrjnBase) SetPattern ¶ added in v1.7.0
func (*PrjnBase) Syn1DNum ¶ added in v1.7.0
Syn1DNum returns the number of synapses for this prjn as a 1D array. This is the max idx for SynVal1D and the number of vals set by SynVals.
func (*PrjnBase) SynIdx ¶ added in v1.7.0
SynIdx returns the index of the synapse between given send, recv unit indexes (1D, flat indexes, layer relative). Returns -1 if synapse not found between these two neurons. Requires searching within connections for sending unit.
func (*PrjnBase) SynVal ¶ added in v1.7.0
SynVal returns value of given variable name on the synapse between given send, recv unit indexes (1D, flat indexes). Returns mat32.NaN() for access errors (see SynValTry for error message)
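For example (a hedged sketch, assuming the standard emer-style signature SynVal(varNm string, sidx, ridx int) float32 and a configured projection pj):

    wt := pj.SynVal("Wt", 3, 7) // weight from sending unit 3 to receiving unit 7
    if !mat32.IsNaN(wt) {
        fmt.Println("Wt =", wt)
    }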
func (*PrjnBase) SynVal1D ¶ added in v1.7.0
SynVal1D returns value of given variable index (from SynVarIdx) on given SynIdx. Returns NaN on invalid index. This is the core synapse var access method used by other methods.
func (*PrjnBase) SynVal1DDi ¶ added in v1.8.0
SynVal1DDi returns value of given variable index (from SynVarIdx) on given SynIdx. Returns NaN on invalid index. This is the core synapse var access method used by other methods. Includes Di data parallel index for data-parallel synaptic values.
func (*PrjnBase) SynValDi ¶ added in v1.8.0
SynValDi returns value of given variable name on the synapse between given send, recv unit indexes (1D, flat indexes). Returns mat32.NaN() for access errors (see SynValTry for error message) Includes Di data parallel index for data-parallel synaptic values.
func (*PrjnBase) SynVals ¶ added in v1.7.0
SynVals sets values of given variable name for each synapse, using the natural ordering of the synapses (sender based for Axon), into given float32 slice (only resized if not big enough). Returns error on invalid var name.
func (*PrjnBase) SynVarIdx ¶ added in v1.7.0
SynVarIdx returns the index of given variable within the synapse, according to *this prjn's* SynVarNames() list (using a map to lookup index), or -1 and error message if not found.
func (*PrjnBase) SynVarNames ¶ added in v1.7.0
func (*PrjnBase) SynVarNum ¶ added in v1.7.0
SynVarNum returns the number of synapse-level variables for this prjn. This is needed for extending indexes in derived types.
func (*PrjnBase) SynVarProps ¶ added in v1.7.0
SynVarProps returns properties for variables
type PrjnGTypes ¶ added in v1.7.0
type PrjnGTypes int32
PrjnGTypes represents the conductance (G) effects of a given projection, including excitatory, inhibitory, and modulatory.
const ( // Excitatory projections drive Ge conductance on receiving neurons, // which send to GeRaw and GeSyn neuron variables. ExcitatoryG PrjnGTypes = iota // Inhibitory projections drive Gi inhibitory conductance, // which send to GiRaw and GiSyn neuron variables. InhibitoryG // Modulatory projections have a multiplicative effect on other inputs, // which send to GModRaw and GModSyn neuron variables. ModulatoryG // Maintenance projections drive a unique set of NMDA channels that support // strong active maintenance abilities. // Send to GMaintRaw and GMaintSyn neuron variables. MaintG // Context projections are for inputs to CT layers, which update // only at the end of the plus phase, and send to CtxtGe. ContextG PrjnGTypesN )
The projection conductance types
func (*PrjnGTypes) FromString ¶ added in v1.7.0
func (i *PrjnGTypes) FromString(s string) error
func (PrjnGTypes) MarshalJSON ¶ added in v1.7.0
func (ev PrjnGTypes) MarshalJSON() ([]byte, error)
func (PrjnGTypes) String ¶ added in v1.7.0
func (i PrjnGTypes) String() string
func (*PrjnGTypes) UnmarshalJSON ¶ added in v1.7.0
func (ev *PrjnGTypes) UnmarshalJSON(b []byte) error
type PrjnIdxs ¶ added in v1.7.0
type PrjnIdxs struct { PrjnIdx uint32 // index of the projection in global prjn list: [Layer][SendPrjns] RecvLay uint32 // index of the receiving layer in global list of layers RecvNeurSt uint32 // starting index of neurons in recv layer -- so we don't need layer to get to neurons RecvNeurN uint32 // number of neurons in recv layer SendLay uint32 // index of the sending layer in global list of layers SendNeurSt uint32 // starting index of neurons in sending layer -- so we don't need layer to get to neurons SendNeurN uint32 // number of neurons in send layer SynapseSt uint32 // start index into global Synapse array: [Layer][SendPrjns][Synapses] SendConSt uint32 // start index into global PrjnSendCon array: [Layer][SendPrjns][SendNeurons] RecvConSt uint32 // start index into global PrjnRecvCon array: [Layer][RecvPrjns][RecvNeurons] RecvSynSt uint32 // start index into global sender-based Synapse index array: [Layer][SendPrjns][Synapses] GBufSt uint32 // start index into global PrjnGBuf global array: [Layer][RecvPrjns][RecvNeurons][MaxDelay+1] GSynSt uint32 // start index into global PrjnGSyn global array: [Layer][RecvPrjns][RecvNeurons] // contains filtered or unexported fields }
PrjnIdxs contains prjn-level index information into global memory arrays
func (*PrjnIdxs) RecvNIdxToLayIdx ¶ added in v1.7.2
RecvNIdxToLayIdx converts a neuron's index in the network-level global list of all neurons to the receiving layer-specific index -- e.g., for accessing GBuf and GSyn values. Just subtracts RecvNeurSt -- essentially a documentation function.
func (*PrjnIdxs) SendNIdxToLayIdx ¶ added in v1.7.2
SendNIdxToLayIdx converts a neuron's index in the network-level global list of all neurons to the sending layer-specific index. Just subtracts SendNeurSt -- essentially a documentation function.
type PrjnParams ¶ added in v1.7.0
type PrjnParams struct { PrjnType PrjnTypes `` /* 138-byte string literal not displayed */ Idxs PrjnIdxs `view:"-" desc:"recv and send neuron-level projection index array access info"` Com SynComParams `view:"inline" desc:"synaptic communication parameters: delay, probability of failure"` PrjnScale PrjnScaleParams `` /* 215-byte string literal not displayed */ SWts SWtParams `` /* 147-byte string literal not displayed */ Learn LearnSynParams `view:"add-fields" desc:"synaptic-level learning parameters for learning in the fast LWt values."` GScale GScaleVals `view:"inline" desc:"conductance scaling values"` RLPred RLPredPrjnParams `` /* 418-byte string literal not displayed */ Matrix MatrixPrjnParams `` /* 374-byte string literal not displayed */ BLA BLAPrjnParams `viewif:"PrjnType=BLAPrjn" view:"inline" desc:"Basolateral Amygdala projection parameters."` Hip HipPrjnParams `viewif:"PrjnType=HipPrjn" view:"inline" desc:"Hip bench parameters."` // contains filtered or unexported fields }
PrjnParams contains all of the prjn parameters. These values must remain constant over the course of computation. On the GPU, they are loaded into a uniform.
func (*PrjnParams) AllParams ¶ added in v1.7.0
func (pj *PrjnParams) AllParams() string
func (*PrjnParams) BLADefaults ¶ added in v1.7.18
func (pj *PrjnParams) BLADefaults()
func (*PrjnParams) CTCtxtPrjnDefaults ¶ added in v1.7.18
func (pj *PrjnParams) CTCtxtPrjnDefaults()
func (*PrjnParams) DWtFmDiDWtSyn ¶ added in v1.8.0
func (pj *PrjnParams) DWtFmDiDWtSyn(ctx *Context, syni uint32)
DWtFmDiDWtSyn updates DWt from data parallel DiDWt values
func (*PrjnParams) DWtSyn ¶ added in v1.7.0
func (pj *PrjnParams) DWtSyn(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool, isTarget bool)
DWtSyn is the overall entry point for weight change (learning) at given synapse. It selects the appropriate function based on the projection type. subPool is the receiving layer SubPool (layPool is the layer-level Pool).
func (*PrjnParams) DWtSynBLA ¶ added in v1.7.18
func (pj *PrjnParams) DWtSynBLA(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool)
DWtSynBLA computes the weight change (learning) at given synapse for BLAPrjn type. Like the BG Matrix learning rule, a synaptic tag "trace" is established at CS onset (ACh) and learning at US / extinction is a function of trace * delta from US activity (temporal difference), which limits learning.
func (*PrjnParams) DWtSynCortex ¶ added in v1.7.0
func (pj *PrjnParams) DWtSynCortex(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool, isTarget bool)
DWtSynCortex computes the weight change (learning) at given synapse for cortex. Uses synaptically-integrated spiking, computed at the Theta cycle interval. This is the trace version for hidden units, and uses syn CaP - CaD for targets.
func (*PrjnParams) DWtSynHip ¶ added in v1.8.5
func (pj *PrjnParams) DWtSynHip(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool, isTarget bool)
DWtSynHip computes the weight change (learning) at given synapse for cortex + Hip (CPCA Hebb learning). Uses synaptically-integrated spiking, computed at the Theta cycle interval. This is the trace version for hidden units, and uses syn CaP - CaD for targets. Adds proportional CPCA learning rule for hip-specific prjns
func (*PrjnParams) DWtSynMatrix ¶ added in v1.7.0
func (pj *PrjnParams) DWtSynMatrix(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool)
DWtSynMatrix computes the weight change (learning) at given synapse, for the MatrixPrjn type.
func (*PrjnParams) DWtSynRWPred ¶ added in v1.7.0
func (pj *PrjnParams) DWtSynRWPred(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool)
DWtSynRWPred computes the weight change (learning) at given synapse, for the RWPredPrjn type
func (*PrjnParams) DWtSynTDPred ¶ added in v1.7.0
func (pj *PrjnParams) DWtSynTDPred(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool)
DWtSynTDPred computes the weight change (learning) at given synapse, for the TDRewPredPrjn type
func (*PrjnParams) DWtSynVSPatch ¶ added in v1.7.11
func (pj *PrjnParams) DWtSynVSPatch(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool)
DWtSynVSPatch computes the weight change (learning) at given synapse, for the VSPatchPrjn type. Currently only supporting the Pos D1 type.
func (*PrjnParams) Defaults ¶ added in v1.7.0
func (pj *PrjnParams) Defaults()
func (*PrjnParams) DoSynCa ¶ added in v1.7.11
func (pj *PrjnParams) DoSynCa() bool
DoSynCa returns false if should not do synaptic-level calcium updating. Done by default in Cortex, not for some other special projection types.
func (*PrjnParams) GatherSpikes ¶ added in v1.7.2
func (pj *PrjnParams) GatherSpikes(ctx *Context, ly *LayerParams, ni, di uint32, gRaw float32, gSyn *float32)
GatherSpikes integrates G*Raw and G*Syn values for given neuron from the given Prjn-level GRaw value, first integrating projection-level GSyn value.
func (*PrjnParams) HipDefaults ¶ added in v1.8.5
func (pj *PrjnParams) HipDefaults()
func (*PrjnParams) IsExcitatory ¶ added in v1.7.0
func (pj *PrjnParams) IsExcitatory() bool
func (*PrjnParams) IsInhib ¶ added in v1.7.0
func (pj *PrjnParams) IsInhib() bool
func (*PrjnParams) MatrixDefaults ¶ added in v1.7.11
func (pj *PrjnParams) MatrixDefaults()
func (*PrjnParams) RLPredDefaults ¶ added in v1.7.18
func (pj *PrjnParams) RLPredDefaults()
func (*PrjnParams) SetFixedWts ¶ added in v1.7.11
func (pj *PrjnParams) SetFixedWts()
SetFixedWts sets parameters for fixed, non-learning weights with a default of Mean = 0.8, Var = 0 strength
func (*PrjnParams) SynCaSyn ¶ added in v1.7.24
func (pj *PrjnParams) SynCaSyn(ctx *Context, syni uint32, ni, di uint32, otherCaSyn, updtThr float32)
SynCaSyn updates synaptic calcium based on spiking, for SynSpkTheta mode. Optimized version only updates at point of spiking, threaded over neurons.
func (*PrjnParams) SynRecvLayIdx ¶ added in v1.7.2
func (pj *PrjnParams) SynRecvLayIdx(ctx *Context, syni uint32) uint32
SynRecvLayIdx converts the Synapse RecvIdx of recv neuron's index in network level global list of all neurons to receiving layer-specific index.
func (*PrjnParams) SynSendLayIdx ¶ added in v1.7.2
func (pj *PrjnParams) SynSendLayIdx(ctx *Context, syni uint32) uint32
SynSendLayIdx converts the Synapse SendIdx of sending neuron's index in network level global list of all neurons to sending layer-specific index.
func (*PrjnParams) Update ¶ added in v1.7.0
func (pj *PrjnParams) Update()
func (*PrjnParams) VSPatchDefaults ¶ added in v1.7.18
func (pj *PrjnParams) VSPatchDefaults()
func (*PrjnParams) WtFmDWtSyn ¶ added in v1.7.0
func (pj *PrjnParams) WtFmDWtSyn(ctx *Context, syni uint32)
WtFmDWtSyn is the overall entry point for updating weights from weight changes.
func (*PrjnParams) WtFmDWtSynCortex ¶ added in v1.7.0
func (pj *PrjnParams) WtFmDWtSynCortex(ctx *Context, syni uint32)
WtFmDWtSynCortex updates weights from dwt changes
func (*PrjnParams) WtFmDWtSynNoLimits ¶ added in v1.7.0
func (pj *PrjnParams) WtFmDWtSynNoLimits(ctx *Context, syni uint32)
WtFmDWtSynNoLimits -- weight update without limits
type PrjnScaleParams ¶ added in v1.2.45
type PrjnScaleParams struct { Rel float32 `` /* 255-byte string literal not displayed */ Abs float32 `` /* 334-byte string literal not displayed */ // contains filtered or unexported fields }
PrjnScaleParams are projection scaling parameters: modulates overall strength of projection, using both absolute and relative factors.
func (*PrjnScaleParams) Defaults ¶ added in v1.2.45
func (ws *PrjnScaleParams) Defaults()
func (*PrjnScaleParams) FullScale ¶ added in v1.2.45
func (ws *PrjnScaleParams) FullScale(savg, snu, ncon float32) float32
FullScale returns full scaling factor, which is product of Abs * Rel * SLayActScale
func (*PrjnScaleParams) SLayActScale ¶ added in v1.2.45
func (ws *PrjnScaleParams) SLayActScale(savg, snu, ncon float32) float32
SLayActScale computes the scaling factor based on sending layer activity level (savg), number of units in the sending layer (snu), and number of recv connections (ncon). Uses a fixed sem_extra standard-error-of-the-mean (SEM) extra value of 2, added to the average expected number of active connections received, for purposes of computing scaling factors with partial connectivity. For 25% layer activity, the binomial SEM = sqrt(p(1-p)) = .43, so 3x = 1.3, so 2 is a reasonable default.
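For intuition: at 25% sending activity with around 100 receiving connections under partial connectivity, the expected number of active inputs is about 25; padding by the SEM extra of 2 gives 27, so the factor is roughly 1/27. The following is a minimal sketch of this kind of computation (illustrative only, not the exact library code), together with the FullScale product of Abs * Rel * SLayActScale:

    // sLayActScale sketches a sending-activity-based scaling factor:
    // savg = sending layer average activity, snu = number of sending units,
    // ncon = average number of receiving connections; semExtra = 2 as described above.
    func sLayActScale(savg, snu, ncon float32) float32 {
        const semExtra = 2
        if ncon < 1 {
            ncon = 1
        }
        slayActN := int(savg*snu + 0.5) // expected number of active sending units
        if slayActN < 1 {
            slayActN = 1
        }
        if ncon >= snu { // effectively full connectivity
            return 1 / float32(slayActN)
        }
        avgActN := int(savg*ncon + 0.5) // expected number of active inputs received
        if avgActN < 1 {
            avgActN = 1
        }
        expActN := avgActN + semExtra // pad by the SEM extra
        if expActN > slayActN {
            expActN = slayActN
        }
        return 1 / float32(expActN)
    }

    // fullScale sketches the overall Abs * Rel * SLayActScale product.
    func fullScale(abs, rel, savg, snu, ncon float32) float32 {
        return abs * rel * sLayActScale(savg, snu, ncon)
    }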
func (*PrjnScaleParams) Update ¶ added in v1.2.45
func (ws *PrjnScaleParams) Update()
type PrjnTypes ¶ added in v1.7.0
type PrjnTypes int32
PrjnTypes is an axon-specific prjn type enum, that encompasses all the different algorithm types supported. Class parameter styles automatically key off of these types. The first entries must be kept synchronized with the emer.PrjnType.
const (
    // Forward is a feedforward, bottom-up projection from sensory inputs to higher layers
    ForwardPrjn PrjnTypes = iota

    // Back is a feedback, top-down projection from higher layers back to lower layers
    BackPrjn

    // Lateral is a lateral projection within the same layer / area
    LateralPrjn

    // Inhib is an inhibitory projection that drives inhibitory
    // synaptic conductances instead of the default excitatory ones.
    InhibPrjn

    // CTCtxt are projections from Superficial layers to CT layers that
    // send Burst activations to drive updating of the CtxtGe excitatory conductance,
    // at the end of the plus (5IB Bursting) phase. Biologically, this projection
    // comes from the PT layer 5IB neurons, but it is simpler to use the
    // Super neurons directly, and PT are optional for most network types.
    // These projections also use a special learning rule that
    // takes into account the temporal delays in the activation states.
    // Can also add self context from CT for deeper temporal context.
    CTCtxtPrjn

    // RWPrjn does dopamine-modulated learning for reward prediction:
    // Da * Send.CaSpkP (integrated current spiking activity).
    // Uses RLPredPrjn parameters.
    // Use in RWPredLayer typically to generate reward predictions.
    // If the Da sign is positive, the first recv unit learns fully;
    // for negative, second one learns fully. Lower lrate applies for
    // opposite cases. Weights are positive-only.
    RWPrjn

    // TDPredPrjn does dopamine-modulated learning for reward prediction:
    // DWt = Da * Send.SpkPrv (activity on *previous* timestep)
    // Uses RLPredPrjn parameters.
    // Use in TDPredLayer typically to generate reward predictions.
    // If the Da sign is positive, the first recv unit learns fully;
    // for negative, second one learns fully. Lower lrate applies for
    // opposite cases. Weights are positive-only.
    TDPredPrjn

    // BLAPrjn implements the PVLV BLA learning rule:
    // dW = ACh * X_t-1 * (Y_t - Y_t-1)
    // The recv delta is across trials, where the US should activate on trial
    // boundary, to enable sufficient time for gating through to OFC, so
    // BLA initially learns based on US present - US absent.
    // It can also learn based on CS onset if there is a prior CS that predicts that.
    BLAPrjn

    HipPrjn

    // VSPatchPrjn implements the VSPatch learning rule:
    // dW = ACh * DA * X * Y
    // where DA is D1 vs. D2 modulated DA level, X = sending activity factor,
    // Y = receiving activity factor, and ACh provides overall modulation.
    VSPatchPrjn

    // MatrixPrjn supports trace-based learning, where an initial
    // trace of synaptic co-activity is formed, and then modulated
    // by subsequent phasic dopamine & ACh when an outcome occurs.
    // This bridges the temporal gap between gating activity
    // and subsequent outcomes, and is based biologically on synaptic tags.
    // Trace is reset at time of reward based on ACh level (from CINs in biology).
    MatrixPrjn

    PrjnTypesN
)
The projection types
func (*PrjnTypes) FromString ¶ added in v1.7.0
func (PrjnTypes) MarshalJSON ¶ added in v1.7.0
func (*PrjnTypes) UnmarshalJSON ¶ added in v1.7.0
type PulvParams ¶ added in v1.7.0
type PulvParams struct { DriveScale float32 `` /* 144-byte string literal not displayed */ FullDriveAct float32 `` /* 352-byte string literal not displayed */ DriveLayIdx int32 `` /* 132-byte string literal not displayed */ // contains filtered or unexported fields }
PulvParams provides parameters for how the plus-phase (outcome) state of Pulvinar thalamic relay cell neurons is computed from the corresponding driver neuron Burst activation (or CaSpkP if not Super)
func (*PulvParams) Defaults ¶ added in v1.7.0
func (tp *PulvParams) Defaults()
func (*PulvParams) DriveGe ¶ added in v1.7.0
func (tp *PulvParams) DriveGe(act float32) float32
DriveGe returns effective excitatory conductance to use for given driver input Burst activation
func (*PulvParams) NonDrivePct ¶ added in v1.7.0
func (tp *PulvParams) NonDrivePct(drvMax float32) float32
NonDrivePct returns the multiplier proportion of the non-driver based Ge to keep around, based on FullDriveAct and the max activity in driver layer.
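A plausible reading of these two methods, stated as an assumption rather than copied from the source: DriveGe simply scales the driver Burst activation by DriveScale, and NonDrivePct fades out the non-driver Ge in proportion to how strongly the driver layer is active relative to FullDriveAct.

    // pulvDriveGe and pulvNonDrivePct sketch the Pulvinar driver computations described
    // above; driveScale and fullDriveAct stand in for the PulvParams fields.
    func pulvDriveGe(driveScale, act float32) float32 {
        return driveScale * act // driver burst activity scaled into an excitatory conductance
    }

    func pulvNonDrivePct(fullDriveAct, drvMax float32) float32 {
        frac := drvMax / fullDriveAct // how "full" the driver activity is
        if frac > 1 {
            frac = 1
        }
        return 1 - frac // proportion of non-driver Ge to keep
    }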
func (*PulvParams) Update ¶ added in v1.7.0
func (tp *PulvParams) Update()
type PushOff ¶ added in v1.8.2
type PushOff struct { Off uint32 `desc:"offset"` // contains filtered or unexported fields }
PushOff has push constants for setting offset into compute shader
type RLPredPrjnParams ¶ added in v1.7.0
type RLPredPrjnParams struct { OppSignLRate float32 `desc:"how much to learn on opposite DA sign coding neuron (0..1)"` DaTol float32 `` /* 208-byte string literal not displayed */ // contains filtered or unexported fields }
RLPredPrjnParams does dopamine-modulated learning for reward prediction: Da * Send.Act. Used by RWPrjn and TDPredPrjn within the corresponding RWPredLayer or TDPredLayer to generate reward predictions based on its incoming weights, using a linear activation function. Has no weight bounds or limits on sign etc.
func (*RLPredPrjnParams) Defaults ¶ added in v1.7.0
func (pj *RLPredPrjnParams) Defaults()
func (*RLPredPrjnParams) Update ¶ added in v1.7.0
func (pj *RLPredPrjnParams) Update()
type RLRateParams ¶ added in v1.6.13
type RLRateParams struct { On slbool.Bool `def:"true" desc:"use learning rate modulation"` SigmoidMin float32 `` /* 238-byte string literal not displayed */ Diff slbool.Bool `viewif:"On" desc:"modulate learning rate as a function of plus - minus differences"` SpkThr float32 `` /* 135-byte string literal not displayed */ DiffThr float32 `` /* 131-byte string literal not displayed */ Min float32 `viewif:"On&&Diff" def:"0.001" desc:"for Diff component, minimum learning rate value when below ActDiffThr"` // contains filtered or unexported fields }
RLRateParams are recv neuron learning rate modulation parameters. Has two factors: the derivative of the sigmoid based on CaSpkD activity levels, and based on the phase-wise differences in activity (Diff).
func (*RLRateParams) Defaults ¶ added in v1.6.13
func (rl *RLRateParams) Defaults()
func (*RLRateParams) RLRateDiff ¶ added in v1.6.13
func (rl *RLRateParams) RLRateDiff(scap, scad float32) float32
RLRateDiff returns the learning rate as a function of difference between CaSpkP and CaSpkD values
func (*RLRateParams) RLRateSigDeriv ¶ added in v1.6.13
func (rl *RLRateParams) RLRateSigDeriv(act float32, laymax float32) float32
RLRateSigDeriv returns the sigmoid derivative learning rate factor as a function of spiking activity, with mid-range values having full learning and extreme values a reduced learning rate: deriv = act * (1 - act). The activity should be CaSpkP, and the layer maximum is used to normalize it to a 0-1 range.
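A sketch of this factor under the assumptions stated above: activity is normalized by the layer maximum, the derivative shape is rescaled so that mid-range activity gets a factor of 1 (the 4x rescaling and the SigmoidMin floor are assumptions here, not confirmed details).

    // rlRateSigDeriv sketches the sigmoid-derivative learning rate factor:
    // act is the neuron's CaSpkP, layMax the layer maximum, sigmoidMin a lower floor.
    // The 4x factor rescales the peak (at normalized activity 0.5) to 1.
    func rlRateSigDeriv(act, layMax, sigmoidMin float32) float32 {
        if layMax <= 0 {
            return 1
        }
        ca := act / layMax      // normalize to 0-1 range
        lr := 4 * ca * (1 - ca) // derivative-of-sigmoid shape: 0 at extremes, 1 at mid-range
        if lr < sigmoidMin {
            lr = sigmoidMin
        }
        return lr
    }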
func (*RLRateParams) Update ¶ added in v1.6.13
func (rl *RLRateParams) Update()
type RWDaParams ¶ added in v1.7.0
type RWDaParams struct { TonicGe float32 `desc:"tonic baseline Ge level for DA = 0 -- +/- are between 0 and 2*TonicGe -- just for spiking display of computed DA value"` RWPredLayIdx int32 `inactive:"+" desc:"idx of RWPredLayer to get reward prediction from -- set during Build from BuildConfig RWPredLayName"` // contains filtered or unexported fields }
RWDaParams computes a dopamine (DA) signal using simple Rescorla-Wagner learning dynamic (i.e., PV learning in the PVLV framework).
func (*RWDaParams) Defaults ¶ added in v1.7.0
func (rp *RWDaParams) Defaults()
func (*RWDaParams) GeFmDA ¶ added in v1.7.0
func (rp *RWDaParams) GeFmDA(da float32) float32
GeFmDA returns excitatory conductance from DA dopamine value
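Given the TonicGe field description ("+/- are between 0 and 2*TonicGe"), a reasonable reading is that the signed DA value modulates Ge around the tonic baseline and is clamped at zero. This is a sketch of that assumption, not a verified copy of the implementation:

    // geFromDA sketches conversion of a signed dopamine value (roughly -1..1)
    // into an excitatory conductance centered on a tonic baseline.
    func geFromDA(tonicGe, da float32) float32 {
        ge := tonicGe * (1 + da) // da = 0 -> tonic baseline; da = +/-1 -> 2*tonic / 0
        if ge < 0 {
            ge = 0
        }
        return ge
    }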
func (*RWDaParams) Update ¶ added in v1.7.0
func (rp *RWDaParams) Update()
type RWPredParams ¶ added in v1.7.0
RWPredParams parameterizes reward prediction for a simple Rescorla-Wagner learning dynamic (i.e., PV learning in the PVLV framework).
func (*RWPredParams) Defaults ¶ added in v1.7.0
func (rp *RWPredParams) Defaults()
func (*RWPredParams) Update ¶ added in v1.7.0
func (rp *RWPredParams) Update()
type RandFunIdx ¶ added in v1.7.7
type RandFunIdx uint32
const ( RandFunActPGe RandFunIdx = iota RandFunActPGi RandFunIdxN )
We use this enum to store a unique index for each function that requires random number generation. If you add a new function, you need to add a new enum entry here. RandFunIdxN is the total number of random functions. It autoincrements due to iota.
type SWtAdaptParams ¶ added in v1.2.45
type SWtAdaptParams struct { On slbool.Bool `` /* 137-byte string literal not displayed */ LRate float32 `` /* 388-byte string literal not displayed */ SubMean float32 `viewif:"On" def:"1" desc:"amount of mean to subtract from SWt delta when updating -- generally best to set to 1"` SigGain float32 `` /* 135-byte string literal not displayed */ }
SWtAdaptParams manages adaptation of SWt values
func (*SWtAdaptParams) Defaults ¶ added in v1.2.45
func (sp *SWtAdaptParams) Defaults()
func (*SWtAdaptParams) Update ¶ added in v1.2.45
func (sp *SWtAdaptParams) Update()
type SWtInitParams ¶ added in v1.2.45
type SWtInitParams struct { SPct float32 `` /* 315-byte string literal not displayed */ Mean float32 `` /* 199-byte string literal not displayed */ Var float32 `def:"0.25" desc:"initial variance in weight values, prior to constraints."` Sym slbool.Bool `` /* 149-byte string literal not displayed */ }
SWtInitParams for initial SWt values
func (*SWtInitParams) Defaults ¶ added in v1.2.45
func (sp *SWtInitParams) Defaults()
func (*SWtInitParams) RndVar ¶ added in v1.2.45
func (sp *SWtInitParams) RndVar(rnd erand.Rand) float32
RndVar returns the random variance in weight value (zero mean) based on Var param
func (*SWtInitParams) Update ¶ added in v1.2.45
func (sp *SWtInitParams) Update()
type SWtParams ¶ added in v1.2.45
type SWtParams struct { Init SWtInitParams `view:"inline" desc:"initialization of SWt values"` Adapt SWtAdaptParams `view:"inline" desc:"adaptation of SWt values in response to LWt learning"` Limit minmax.F32 `def:"{'Min':0.2,'Max':0.8}" view:"inline" desc:"range limits for SWt values"` }
SWtParams manages structural, slowly adapting weight values (SWt), in terms of initialization and updating over course of learning. SWts impose initial and slowly adapting constraints on neuron connectivity to encourage differentiation of neuron representations and overall good behavior in terms of not hogging the representational space. The TrgAvg activity constraint is not enforced through SWt -- it needs to be more dynamic and supported by the regular learned weights.
func (*SWtParams) InitWtsSyn ¶ added in v1.3.5
InitWtsSyn initializes weight values based on WtInit randomness parameters for an individual synapse. It also updates the linear weight value based on the sigmoidal weight value.
func (*SWtParams) LWtFmWts ¶ added in v1.2.47
LWtFmWts returns linear, learning LWt from wt and swt. LWt is set to reproduce given Wt relative to given SWt base value.
func (*SWtParams) LinFmSigWt ¶ added in v1.2.45
LinFmSigWt returns the linear weight from the sigmoidal contrast-enhanced weight. wt is centered at 1 and normed in a +/- 1 range around that; the return value is in the 0-1 range, centered at .5.
func (*SWtParams) SigFmLinWt ¶ added in v1.2.45
SigFmLinWt returns the sigmoidal contrast-enhanced weight from the linear weight, centered at 1 and normed in a +/- 1 range around that, in preparation for multiplying times SWt.
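As a sketch of the kind of mutually inverse contrast-enhancement pair these two methods describe: the linear weight in 0-1 (centered at .5) is pushed through a gain-sharpened sigmoid and expanded into the 0-2 range (centered at 1), and the inverse mapping recovers the linear value. The gain of 6 used here is an assumed, illustrative value, not necessarily the library default.

    import "math"

    // sigFromLinWt maps a linear weight lw in 0-1 to a sigmoidal weight in 0-2.
    func sigFromLinWt(lw float32) float32 {
        const gain = 6.0
        if lw <= 0 {
            return 0
        }
        if lw >= 1 {
            return 2
        }
        sig := 1 / (1 + math.Pow(float64((1-lw)/lw), gain)) // gain-sharpened sigmoid in 0-1
        return 2 * float32(sig)                             // expand into the 0-2 range
    }

    // linFromSigWt inverts sigFromLinWt, recovering the linear 0-1 weight.
    func linFromSigWt(wt float32) float32 {
        const gain = 6.0
        w := wt / 2 // back into 0-1
        if w <= 0 {
            return 0
        }
        if w >= 1 {
            return 1
        }
        return float32(1 / (1 + math.Pow(float64((1-w)/w), 1/gain)))
    }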
type SpikeNoiseParams ¶ added in v1.2.94
type SpikeNoiseParams struct { On slbool.Bool `desc:"add noise simulating background spiking levels"` GeHz float32 `` /* 163-byte string literal not displayed */ Ge float32 `` /* 162-byte string literal not displayed */ GiHz float32 `` /* 177-byte string literal not displayed */ Gi float32 `` /* 162-byte string literal not displayed */ GeExpInt float32 `view:"-" json:"-" xml:"-" desc:"Exp(-Interval) which is the threshold for GeNoiseP as it is updated"` GiExpInt float32 `view:"-" json:"-" xml:"-" desc:"Exp(-Interval) which is the threshold for GiNoiseP as it is updated"` // contains filtered or unexported fields }
SpikeNoiseParams parameterizes background spiking activity impinging on the neuron, simulated using a Poisson spiking process.
func (*SpikeNoiseParams) Defaults ¶ added in v1.2.94
func (an *SpikeNoiseParams) Defaults()
func (*SpikeNoiseParams) PGe ¶ added in v1.2.94
func (an *SpikeNoiseParams) PGe(ctx *Context, p *float32, ni uint32) float32
PGe updates the GeNoiseP probability, multiplying it by a uniform random number in [0-1], and returns Ge from spiking if a spike is triggered
func (*SpikeNoiseParams) PGi ¶ added in v1.2.94
func (an *SpikeNoiseParams) PGi(ctx *Context, p *float32, ni uint32) float32
PGi updates the GiNoiseP probability, multiplying it by a uniform random number in [0-1], and returns Gi from spiking if a spike is triggered
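These two methods implement a standard trick for generating Poisson spike trains incrementally: keep a running product p of uniform random numbers and emit a spike (resetting p to 1) whenever p drops below exp(-interval), where interval is the expected number of cycles between spikes at the given Hz. A self-contained sketch of that technique (type and field names here are illustrative, not the axon ones):

    import (
        "math"
        "math/rand"
    )

    // poissonNoise keeps the running-product state for one Poisson noise generator.
    type poissonNoise struct {
        p      float64 // running product of uniforms, reset to 1 on spike
        expInt float64 // exp(-interval): spiking threshold on p
        g      float32 // conductance delivered per noise spike
    }

    func newPoissonNoise(hz, msPerCycle float64, g float32) *poissonNoise {
        interval := 1000.0 / (hz * msPerCycle) // expected cycles between spikes
        return &poissonNoise{p: 1, expInt: math.Exp(-interval), g: g}
    }

    // step advances one cycle, returning g on spike cycles and 0 otherwise.
    func (pn *poissonNoise) step() float32 {
        pn.p *= rand.Float64()
        if pn.p <= pn.expInt {
            pn.p = 1
            return pn.g
        }
        return 0
    }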
func (*SpikeNoiseParams) Update ¶ added in v1.2.94
func (an *SpikeNoiseParams) Update()
type SpikeParams ¶
type SpikeParams struct { Thr float32 `` /* 152-byte string literal not displayed */ VmR float32 `` /* 217-byte string literal not displayed */ Tr int32 `` /* 242-byte string literal not displayed */ RTau float32 `` /* 285-byte string literal not displayed */ Exp slbool.Bool `` /* 274-byte string literal not displayed */ ExpSlope float32 `` /* 325-byte string literal not displayed */ ExpThr float32 `` /* 127-byte string literal not displayed */ MaxHz float32 `` /* 182-byte string literal not displayed */ ISITau float32 `def:"5" min:"1" desc:"constant for integrating the spiking interval in estimating spiking rate"` ISIDt float32 `view:"-" desc:"rate = 1 / tau"` RDt float32 `view:"-" desc:"rate = 1 / tau"` // contains filtered or unexported fields }
SpikeParams contains spiking activation function params. Implements a basic thresholded Vm model, and optionally the AdEx adaptive exponential function (adapt is KNaAdapt)
func (*SpikeParams) ActFmISI ¶
func (sk *SpikeParams) ActFmISI(isi, timeInc, integ float32) float32
ActFmISI computes rate-code activation from estimated spiking interval
func (*SpikeParams) ActToISI ¶
func (sk *SpikeParams) ActToISI(act, timeInc, integ float32) float32
ActToISI computes the spiking interval from a given rate-coded activation, based on the time increment (.001 = 1msec default) and Act.Dt.Integ
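These conversions are essentially reciprocal mappings between a 0-1 rate-coded activation (relative to MaxHz) and an inter-spike interval measured in integration steps. A sketch under that assumption (the exact formulas in the library may differ in detail):

    // isiFromAct and actFromISI sketch the reciprocal mapping between a rate-coded
    // activation (0-1, relative to maxHz) and an inter-spike interval in update steps.
    // timeInc is the step size in seconds (e.g. 0.001), integ an integration rate factor.
    func isiFromAct(act, timeInc, integ, maxHz float32) float32 {
        if act <= 0 {
            return 0 // no spiking
        }
        return 1 / (timeInc * integ * act * maxHz) // steps between spikes
    }

    func actFromISI(isi, timeInc, integ, maxHz float32) float32 {
        if isi <= 0 {
            return 0
        }
        maxInt := 1 / (timeInc * integ * maxHz) // interval at the maximum firing rate
        return maxInt / isi                     // normalized 0-1 activation
    }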
func (*SpikeParams) AvgFmISI ¶
func (sk *SpikeParams) AvgFmISI(avg float32, isi float32) float32
AvgFmISI returns updated spiking ISI from current isi interval value
func (*SpikeParams) Defaults ¶
func (sk *SpikeParams) Defaults()
func (*SpikeParams) Update ¶
func (sk *SpikeParams) Update()
type StartN ¶ added in v1.7.2
type StartN struct { Start uint32 `desc:"starting offset"` N uint32 `desc:"number of items -- [Start:Start+N]"` // contains filtered or unexported fields }
StartN holds a starting offset index and a number of items arranged from Start to Start+N (exclusive). This is not 16 byte padded and only for use on CPU side.
type SynComParams ¶
type SynComParams struct { GType PrjnGTypes `desc:"type of conductance (G) communicated by this projection"` Delay uint32 `` /* 405-byte string literal not displayed */ MaxDelay uint32 `` /* 286-byte string literal not displayed */ PFail float32 `` /* 149-byte string literal not displayed */ PFailSWt slbool.Bool `` /* 141-byte string literal not displayed */ DelLen uint32 `view:"-" desc:"delay length = actual length of the GBuf buffer per neuron = Delay+1 -- just for speed"` // contains filtered or unexported fields }
SynComParams are synaptic communication parameters: used in the Prjn parameters. Includes delay and probability of failure, and Inhib for inhibitory connections, and modulatory projections that have multiplicative-like effects.
func (*SynComParams) Defaults ¶
func (sc *SynComParams) Defaults()
func (*SynComParams) Fail ¶
func (sc *SynComParams) Fail(ctx *Context, syni uint32, swt float32)
Fail updates failure status of given weight, given SWt value
func (*SynComParams) FloatFromGBuf ¶ added in v1.7.9
func (sc *SynComParams) FloatFromGBuf(ival int32) float32
FloatFromGBuf converts the given int32 value produced via FloatToGBuf back into a float32 (divides by factor). If the value is negative, a panic is triggered indicating there was numerical overflow in the aggregation. If this occurs, the FloatToIntFactor needs to be decreased.
func (*SynComParams) FloatToGBuf ¶ added in v1.7.9
func (sc *SynComParams) FloatToGBuf(val float32) int32
FloatToGBuf converts the given floating point value to a large int32 for accumulating in GBuf. Note: more efficient to bake factor into scale factor per prjn.
func (*SynComParams) FloatToIntFactor ¶ added in v1.7.9
func (sc *SynComParams) FloatToIntFactor() float32
FloatToIntFactor returns the factor used for converting float32 to int32 in GBuf encoding. Because total G is constrained via scaling factors to be around ~1, it is safe to use a factor that uses most of the available bits, leaving enough room to prevent overflow when adding together the different vals. For encoding, bake this into scale factor in SendSpike, and cast the result to int32.
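The idea behind the GBuf encoding is a fixed-point representation, so that conductances can be accumulated with integer (atomic) adds: multiply by a large factor, cast to int32, sum, then divide the factor back out, with a negative sum signalling overflow. A sketch with an assumed factor (the library panics on overflow; this sketch returns a flag instead):

    // Fixed-point encoding sketch for accumulating float conductances in an int32
    // buffer. The factor is illustrative; it must leave headroom so that summing
    // many contributions does not overflow int32.
    const gbufFactor = float32(1 << 24)

    func floatToGBuf(val float32) int32 {
        return int32(val * gbufFactor)
    }

    func floatFromGBuf(ival int32) (float32, bool) {
        if ival < 0 {
            return 0, false // overflow occurred during accumulation
        }
        return float32(ival) / gbufFactor, true
    }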
func (*SynComParams) ReadIdx ¶ added in v1.7.2
func (sc *SynComParams) ReadIdx(rnIdx, di uint32, cycTot int32, nRecvNeurs, maxData uint32) uint32
ReadIdx returns index for reading existing spikes from the GBuf buffer, based on the layer-based recv neuron index, data parallel idx, and the ReadOff offset from the CyclesTotal.
func (*SynComParams) ReadOff ¶ added in v1.7.2
func (sc *SynComParams) ReadOff(cycTot int32) uint32
ReadOff returns offset for reading existing spikes from the GBuf buffer, based on Context CyclesTotal counter which increments each cycle. This is logically the zero position in the ring buffer.
func (*SynComParams) RingIdx ¶ added in v1.7.2
func (sc *SynComParams) RingIdx(i uint32) uint32
RingIdx returns the wrap-around ring index for given raw index. For writing and reading spikes to the GBuf buffer, based on the Context.CyclesTotal counter.

    RN: 0     1     2      <- recv neuron indexes
    DI: 0 1 2 0 1 2 0 1 2  <- delay indexes
    C0: ^ v                <- cycle 0, ring index: ^ = write, v = read
    C1:   ^ v              <- cycle 1, shift over by 1 -- overwrite last read
    C2: v   ^              <- cycle 2: read out value stored on C0 -- index wraps around
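A minimal sketch of the delay-line ring buffer these index methods manage, simplified to a single neuron and a single data-parallel index (the exact off-by-one conventions in axon may differ): each cycle, the new input is written Delay slots ahead of the current read position, the value due this cycle is read out, and the read position advances by one with wrap-around.

    // spikeDelayLine sketches the per-neuron ring buffer behind WriteIdx/ReadIdx:
    // a value written on one cycle is read out Delay cycles later.
    type spikeDelayLine struct {
        buf    []float32 // ring buffer of length delay+1
        cycTot int       // total cycle counter (Context.CyclesTotal analog)
    }

    func newSpikeDelayLine(delay int) *spikeDelayLine {
        return &spikeDelayLine{buf: make([]float32, delay+1)}
    }

    func (dl *spikeDelayLine) ringIdx(i int) int {
        return i % len(dl.buf) // wrap-around ring index
    }

    // step writes the new input for delivery Delay cycles from now, reads out the
    // value due this cycle, and advances the cycle counter.
    func (dl *spikeDelayLine) step(input float32) float32 {
        writeIdx := dl.ringIdx(dl.cycTot + len(dl.buf) - 1) // slot Delay steps ahead of the read slot
        dl.buf[writeIdx] += input
        readIdx := dl.ringIdx(dl.cycTot) // logical zero position for this cycle
        out := dl.buf[readIdx]
        dl.buf[readIdx] = 0 // consume so the slot can be reused for future writes
        dl.cycTot++
        return out
    }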
func (*SynComParams) Update ¶
func (sc *SynComParams) Update()
func (*SynComParams) WriteIdx ¶ added in v1.7.2
func (sc *SynComParams) WriteIdx(rnIdx, di uint32, cycTot int32, nRecvNeurs, maxData uint32) uint32
WriteIdx returns actual index for writing new spikes into the GBuf buffer, based on the layer-based recv neuron index, data parallel idx, and the WriteOff offset computed from the CyclesTotal.
func (*SynComParams) WriteIdxOff ¶ added in v1.7.2
func (sc *SynComParams) WriteIdxOff(rnIdx, di, wrOff uint32, nRecvNeurs, maxData uint32) uint32
WriteIdxOff returns actual index for writing new spikes into the GBuf buffer, based on the layer-based recv neuron index and the given WriteOff offset.
func (*SynComParams) WriteOff ¶ added in v1.7.2
func (sc *SynComParams) WriteOff(cycTot int32) uint32
WriteOff returns offset for writing new spikes into the GBuf buffer, based on Context CyclesTotal counter which increments each cycle. This is logically the last position in the ring buffer.
func (*SynComParams) WtFail ¶
func (sc *SynComParams) WtFail(ctx *Context, swt float32) bool
WtFail returns true if the synapse should fail, optionally as a function of the SWt value
func (*SynComParams) WtFailP ¶
func (sc *SynComParams) WtFailP(swt float32) float32
WtFailP returns probability of weight (synapse) failure given current SWt value
type SynapseCaStrides ¶ added in v1.8.0
type SynapseCaStrides struct { Synapse uint64 `desc:"synapse level"` Var uint64 `desc:"variable level"` }
SynapseCaStrides encodes the stride offsets for synapse variable access into network float32 array. Data is always the inner-most variable.
func (*SynapseCaStrides) Idx ¶ added in v1.8.0
func (ns *SynapseCaStrides) Idx(synIdx, di uint32, nvar SynapseCaVars) uint64
Idx returns the index into network float32 array for given synapse, data, and variable
func (*SynapseCaStrides) SetSynapseOuter ¶ added in v1.8.0
func (ns *SynapseCaStrides) SetSynapseOuter(ndata int)
SetSynapseOuter sets strides with synapses as outer loop: [Synapses][Vars][Data], which is optimal for CPU-based computation.
func (*SynapseCaStrides) SetVarOuter ¶ added in v1.8.0
func (ns *SynapseCaStrides) SetVarOuter(nsyn, ndata int)
SetVarOuter sets strides with vars as outer loop: [Vars][Synapses][Data], which is optimal for GPU-based computation.
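These stride structs implement a simple flat-array addressing scheme: the index of a (synapse, variable, data) triple is the dot product of the coordinates with the strides, and switching between synapse-outer and variable-outer layouts only changes the stride values, never the indexing code. A sketch of that idea (names are illustrative):

    // caStrides sketches stride-based addressing into a flat []float32 holding
    // per-synapse, per-variable, per-data values; data is always the inner-most dimension.
    type caStrides struct {
        synapse uint64
        vari    uint64
    }

    func (s *caStrides) idx(synIdx, di, v uint32) uint64 {
        return uint64(synIdx)*s.synapse + uint64(v)*s.vari + uint64(di)
    }

    // setSynapseOuter: [Synapses][Vars][Data] layout, good for CPU loops over synapses.
    func (s *caStrides) setSynapseOuter(nvars, ndata int) {
        s.vari = uint64(ndata)
        s.synapse = uint64(nvars * ndata)
    }

    // setVarOuter: [Vars][Synapses][Data] layout, good for GPU loops over variables.
    func (s *caStrides) setVarOuter(nsyn, ndata int) {
        s.synapse = uint64(ndata)
        s.vari = uint64(nsyn * ndata)
    }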
type SynapseCaVars ¶ added in v1.8.0
type SynapseCaVars int32
SynapseCaVars are synapse variables for calcium involved in learning, which are data parallel input specific.
const (
    // CaM is first stage running average (mean) Ca calcium level (like CaM = calmodulin), feeds into CaP
    CaM SynapseCaVars = iota

    // CaP is shorter timescale integrated CaM value, representing the plus, LTP direction
    // of weight change and capturing the function of CaMKII in the Kinase learning rule
    CaP

    // CaD is longer timescale integrated CaP value, representing the minus, LTD direction
    // of weight change and capturing the function of DAPK1 in the Kinase learning rule
    CaD

    // CaUpT is time in CyclesTotal of last updating of Ca values at the synapse level,
    // for optimized synaptic-level Ca integration -- converted to / from uint32
    CaUpT

    // Tr is trace of synaptic activity over time -- used for credit assignment in learning.
    // In MatrixPrjn this is a tag that is then updated later when US occurs.
    Tr

    // DTr is delta (change in) Tr trace of synaptic activity over time
    DTr

    // DiDWt is delta weight for each data parallel index (Di) -- this is directly computed
    // from the Ca values (in cortical version) and then aggregated into the overall DWt
    // (which may be further integrated across MPI nodes), which then drives changes in Wt values
    DiDWt

    SynapseCaVarsN
)
func (*SynapseCaVars) FromString ¶ added in v1.8.0
func (i *SynapseCaVars) FromString(s string) error
func (SynapseCaVars) MarshalJSON ¶ added in v1.8.0
func (ev SynapseCaVars) MarshalJSON() ([]byte, error)
func (SynapseCaVars) String ¶ added in v1.8.0
func (i SynapseCaVars) String() string
func (*SynapseCaVars) UnmarshalJSON ¶ added in v1.8.0
func (ev *SynapseCaVars) UnmarshalJSON(b []byte) error
type SynapseIdxStrides ¶ added in v1.8.0
type SynapseIdxStrides struct { Synapse uint32 `desc:"synapse level"` Index uint32 `desc:"index value level"` // contains filtered or unexported fields }
SynapseIdxStrides encodes the stride offsets for synapse index access into network uint32 array.
func (*SynapseIdxStrides) Idx ¶ added in v1.8.0
func (ns *SynapseIdxStrides) Idx(synIdx uint32, idx SynapseIdxs) uint32
Idx returns the index into network uint32 array for given synapse, index value
func (*SynapseIdxStrides) SetIdxOuter ¶ added in v1.8.0
func (ns *SynapseIdxStrides) SetIdxOuter(nsyn int)
SetIdxOuter sets strides with indexes as outer dimension: [Idxs][Synapses] (outer to inner), which is optimal for GPU-based computation.
func (*SynapseIdxStrides) SetSynapseOuter ¶ added in v1.8.0
func (ns *SynapseIdxStrides) SetSynapseOuter()
SetSynapseOuter sets strides with synapses as outer dimension: [Synapses][Idxs] (outer to inner), which is optimal for CPU-based computation.
type SynapseIdxs ¶ added in v1.8.0
type SynapseIdxs int32
SynapseIdxs are the synapse's neuron and projection indexes and other uint32 values (flags, etc). There is only one of these per synapse -- not data parallel.
const (
    // SynRecvIdx is receiving neuron index in network's global list of neurons
    SynRecvIdx SynapseIdxs = iota

    // SynSendIdx is sending neuron index in network's global list of neurons
    SynSendIdx

    // SynPrjnIdx is projection index in global list of projections organized as [Layers][RecvPrjns]
    SynPrjnIdx

    SynapseIdxsN
)
func (*SynapseIdxs) FromString ¶ added in v1.8.0
func (i *SynapseIdxs) FromString(s string) error
func (SynapseIdxs) MarshalJSON ¶ added in v1.8.0
func (ev SynapseIdxs) MarshalJSON() ([]byte, error)
func (SynapseIdxs) String ¶ added in v1.8.0
func (i SynapseIdxs) String() string
func (*SynapseIdxs) UnmarshalJSON ¶ added in v1.8.0
func (ev *SynapseIdxs) UnmarshalJSON(b []byte) error
type SynapseVarStrides ¶ added in v1.8.0
type SynapseVarStrides struct { Synapse uint32 `desc:"synapse level"` Var uint32 `desc:"variable level"` // contains filtered or unexported fields }
SynapseVarStrides encodes the stride offsets for synapse variable access into network float32 array.
func (*SynapseVarStrides) Idx ¶ added in v1.8.0
func (ns *SynapseVarStrides) Idx(synIdx uint32, nvar SynapseVars) uint32
Idx returns the index into network float32 array for given synapse, and variable
func (*SynapseVarStrides) SetSynapseOuter ¶ added in v1.8.0
func (ns *SynapseVarStrides) SetSynapseOuter()
SetSynapseOuter sets strides with synapses as outer loop: [Synapses][Vars], which is optimal for CPU-based computation.
func (*SynapseVarStrides) SetVarOuter ¶ added in v1.8.0
func (ns *SynapseVarStrides) SetVarOuter(nsyn int)
SetVarOuter sets strides with vars as outer loop: [Vars][Synapses], which is optimal for GPU-based computation.
type SynapseVars ¶
type SynapseVars int32
SynapseVars are the synapse variables representing current synaptic state, specifically weights.
const (
    // Wt is effective synaptic weight value, determining how much conductance one spike drives
    // on the receiving neuron, representing the actual number of effective AMPA receptors in the synapse.
    // Wt = SWt * WtSig(LWt), where WtSig produces values between 0-2 based on LWt, centered on 1.
    Wt SynapseVars = iota

    // LWt is rapidly learning, linear weight value -- learns according to the lrate specified
    // in the connection spec. Biologically, this represents the internal biochemical processes
    // that drive the trafficking of AMPA receptors in the synaptic density.
    // Initially all LWt are .5, which gives 1 from WtSig function.
    LWt

    // SWt is slowly adapting structural weight value, which acts as a multiplicative scaling
    // factor on synaptic efficacy: biologically represents the physical size and efficacy of
    // the dendritic spine. SWt values adapt in an outer loop along with synaptic scaling, with
    // constraints to prevent runaway positive feedback loops and maintain variance and further
    // capacity to learn. Initial variance is all in SWt, with LWt set to .5, and scaling absorbs
    // some of LWt into SWt.
    SWt

    // DWt is delta (change in) synaptic weight, from learning -- updates LWt which then updates Wt.
    DWt

    // DSWt is change in SWt slow synaptic weight -- accumulates DWt
    DSWt

    SynapseVarsN
)
func (*SynapseVars) FromString ¶ added in v1.8.0
func (i *SynapseVars) FromString(s string) error
func (SynapseVars) MarshalJSON ¶ added in v1.8.0
func (ev SynapseVars) MarshalJSON() ([]byte, error)
func (SynapseVars) String ¶ added in v1.8.0
func (i SynapseVars) String() string
func (*SynapseVars) UnmarshalJSON ¶ added in v1.8.0
func (ev *SynapseVars) UnmarshalJSON(b []byte) error
type TDDaParams ¶ added in v1.7.0
type TDDaParams struct { TonicGe float32 `desc:"tonic baseline Ge level for DA = 0 -- +/- are between 0 and 2*TonicGe -- just for spiking display of computed DA value"` TDIntegLayIdx int32 `inactive:"+" desc:"idx of TDIntegLayer to get reward prediction from -- set during Build from BuildConfig TDIntegLayName"` // contains filtered or unexported fields }
TDDaParams are params for dopamine (DA) signal as the temporal difference (TD) between the TDIntegLayer activations in the minus and plus phase.
func (*TDDaParams) Defaults ¶ added in v1.7.0
func (tp *TDDaParams) Defaults()
func (*TDDaParams) GeFmDA ¶ added in v1.7.0
func (tp *TDDaParams) GeFmDA(da float32) float32
GeFmDA returns excitatory conductance from DA dopamine value
func (*TDDaParams) Update ¶ added in v1.7.0
func (tp *TDDaParams) Update()
type TDIntegParams ¶ added in v1.7.0
type TDIntegParams struct { Discount float32 `desc:"discount factor -- how much to discount the future prediction from TDPred"` PredGain float32 `desc:"gain factor on TD rew pred activations"` TDPredLayIdx int32 `inactive:"+" desc:"idx of TDPredLayer to get reward prediction from -- set during Build from BuildConfig TDPredLayName"` // contains filtered or unexported fields }
TDIntegParams are params for reward integrator layer
func (*TDIntegParams) Defaults ¶ added in v1.7.0
func (tp *TDIntegParams) Defaults()
func (*TDIntegParams) Update ¶ added in v1.7.0
func (tp *TDIntegParams) Update()
type TopoInhibParams ¶ added in v1.2.85
type TopoInhibParams struct { On slbool.Bool `desc:"use topographic inhibition"` Width int32 `viewif:"On" desc:"half-width of topographic inhibition within layer"` Sigma float32 `viewif:"On" desc:"normalized gaussian sigma as proportion of Width, for gaussian weighting"` Wrap slbool.Bool `viewif:"On" desc:"use wrap-around coordinates for topographic inhibition within layer"` Gi float32 `viewif:"On" desc:"overall inhibition multiplier for topographic inhibition (generally <= 1)"` FF float32 `` /* 133-byte string literal not displayed */ FB float32 `` /* 139-byte string literal not displayed */ FF0 float32 `` /* 186-byte string literal not displayed */ WidthWt float32 `inactive:"+" desc:"weight value at width -- to assess the value of Sigma"` // contains filtered or unexported fields }
TopoInhibParams provides for topographic gaussian inhibition integrating over neighborhood. TODO: not currently being used
func (*TopoInhibParams) Defaults ¶ added in v1.2.85
func (ti *TopoInhibParams) Defaults()
func (*TopoInhibParams) GiFmGeAct ¶ added in v1.2.85
func (ti *TopoInhibParams) GiFmGeAct(ge, act, ff0 float32) float32
func (*TopoInhibParams) Update ¶ added in v1.2.85
func (ti *TopoInhibParams) Update()
type TraceParams ¶ added in v1.5.1
type TraceParams struct { Tau float32 `` /* 126-byte string literal not displayed */ SubMean float32 `` /* 409-byte string literal not displayed */ LearnThr float32 `` /* 196-byte string literal not displayed */ Dt float32 `view:"-" json:"-" xml:"-" inactive:"+" desc:"rate = 1 / tau"` }
TraceParams manages temporal trace learning parameters
func (*TraceParams) Defaults ¶ added in v1.5.1
func (tp *TraceParams) Defaults()
func (*TraceParams) TrFmCa ¶ added in v1.5.1
func (tp *TraceParams) TrFmCa(tr float32, ca float32) float32
TrFmCa returns updated trace factor as function of a synaptic calcium update factor and current trace
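Given the Tau / Dt fields in TraceParams, this is presumably a standard exponential integration of the calcium factor into the running trace; a one-line sketch of that assumption:

    // trFromCa sketches exponential integration of a synaptic calcium factor ca
    // into the running trace tr, with dt = 1/tau.
    func trFromCa(tr, ca, dt float32) float32 {
        return tr + dt*(ca-tr) // move trace toward ca at rate dt
    }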
func (*TraceParams) Update ¶ added in v1.5.1
func (tp *TraceParams) Update()
type TrgAvgActParams ¶ added in v1.2.45
type TrgAvgActParams struct { On slbool.Bool `desc:"whether to use target average activity mechanism to scale synaptic weights"` ErrLRate float32 `` /* 255-byte string literal not displayed */ SynScaleRate float32 `` /* 289-byte string literal not displayed */ SubMean float32 `` /* 235-byte string literal not displayed */ TrgRange minmax.F32 `` /* 193-byte string literal not displayed */ Permute slbool.Bool `` /* 236-byte string literal not displayed */ Pool slbool.Bool `` /* 206-byte string literal not displayed */ // contains filtered or unexported fields }
TrgAvgActParams govern the target and actual long-term average activity in neurons. The target value is adapted by neuron-wise error, and the difference in actual vs. target drives synaptic scaling at a slow timescale (Network.SlowInterval).
func (*TrgAvgActParams) Defaults ¶ added in v1.2.45
func (ta *TrgAvgActParams) Defaults()
func (*TrgAvgActParams) Update ¶ added in v1.2.45
func (ta *TrgAvgActParams) Update()
type Urgency ¶ added in v1.7.18
type Urgency struct { U50 float32 `desc:"value of raw urgency where the urgency activation level is 50%"` Power int32 `def:"4" desc:"exponent on the urge factor -- valid numbers are 1,2,4,6"` Thr float32 `def:"0.2" desc:"threshold for urge -- cuts off small baseline values"` // contains filtered or unexported fields }
Urgency has urgency (increasing pressure to do something) and parameters for updating it. Raw urgency is incremented by same units as effort, but is only reset with a positive US. Could also make it a function of drives and bodily state factors e.g., desperate thirst, hunger. Drive activations probably have limited range and renormalization, so urgency can be another dimension with more impact by directly biasing Go.
type VSPatchParams ¶ added in v1.7.11
type VSPatchParams struct { NoDALRate float32 `` /* 243-byte string literal not displayed */ NoDAThr float32 `def:"0.01" desc:"threshold on DA level to engage the NoDALRate -- use a small positive number just in case"` Gain float32 `desc:"multiplier applied after Thr threshold"` ThrInit float32 `desc:"initial value for overall threshold, which adapts over time -- in LayerVals.ActAvgVals.AdaptThr"` ThrLRate float32 `desc:"learning rate for the threshold -- moves in proportion to same predictive error signal that drives synaptic learning"` ThrNonRew float32 `desc:"extra gain factor for non-reward trials, which is the most critical"` // contains filtered or unexported fields }
VSPatchParams are parameters for VSPatch learning
func (*VSPatchParams) DALRate ¶ added in v1.7.11
func (vp *VSPatchParams) DALRate(da, modlr float32) float32
DALRate returns the learning rate modulation factor modlr based on dopamine level
func (*VSPatchParams) Defaults ¶ added in v1.7.11
func (vp *VSPatchParams) Defaults()
func (*VSPatchParams) ThrVal ¶ added in v1.8.1
func (vp *VSPatchParams) ThrVal(act, thr float32) float32
ThrVal returns the thresholded value, gain-multiplied value for given VSPatch activity level
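Reading this together with the Gain field description ("multiplier applied after Thr threshold"), a plausible sketch of ThrVal is a rectified, gain-multiplied difference. This is an assumption, not verified against the source:

    // thrVal sketches a thresholded, gain-multiplied activity value:
    // anything below thr is cut to zero, and the remainder is scaled by gain.
    func thrVal(act, thr, gain float32) float32 {
        v := act - thr
        if v < 0 {
            return 0
        }
        return gain * v
    }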
func (*VSPatchParams) Update ¶ added in v1.7.11
func (vp *VSPatchParams) Update()
type VTA ¶ added in v1.7.11
type VTA struct { PVThr float32 `desc:"threshold for activity of PVpos or VSPatchPos to determine if a PV event (actual PV or omission thereof) is present"` Gain VTAVals `view:"inline" desc:"gain multipliers on inputs from each input"` // contains filtered or unexported fields }
VTA has parameters and values for computing VTA DA dopamine, as a function of:
- PV (primary value) driving inputs reflecting US inputs, which are modulated by Drives and discounted by Effort for positive.
- LV / Amygdala which drives bursting for unexpected CSs or USs via CeM.
- Shunting expectations of reward from VSPatchPosD1 - D2.
- Dipping / pausing inhibitory inputs from lateral habenula (LHb) reflecting predicted positive outcome > actual, or actual negative > predicted.
- ACh from LDT (laterodorsal tegmentum) reflecting sensory / reward salience, which disinhibits VTA activity.
type VTAVals ¶ added in v1.7.11
type VTAVals struct { DA float32 `desc:"overall dopamine value reflecting all of the different inputs"` USpos float32 `desc:"total positive valence primary value = sum of USpos * Drive without effort discounting"` PVpos float32 `` /* 145-byte string literal not displayed */ PVneg float32 `desc:"total negative valence primary value = sum of USneg inputs"` CeMpos float32 `` /* 303-byte string literal not displayed */ CeMneg float32 `` /* 177-byte string literal not displayed */ LHbDip float32 `desc:"dip from LHb / RMTg -- net inhibitory drive on VTA DA firing = dips"` LHbBurst float32 `desc:"burst from LHb / RMTg -- net excitatory drive on VTA DA firing = bursts"` VSPatchPos float32 `desc:"net shunting input from VSPatch (PosD1 -- PVi in original PVLV)"` // contains filtered or unexported fields }
VTAVals has values for all the inputs to the VTA. Used as gain factors and computed values.
type ValenceTypes ¶ added in v1.7.11
type ValenceTypes int32
ValenceTypes are types of valence coding: positive or negative.
const (
    // Positive valence codes for outcomes aligned with drives / goals.
    Positive ValenceTypes = iota

    // Negative valence codes for harmful or aversive outcomes.
    Negative

    ValenceTypesN
)
func (*ValenceTypes) FromString ¶ added in v1.7.11
func (i *ValenceTypes) FromString(s string) error
func (ValenceTypes) String ¶ added in v1.7.11
func (i ValenceTypes) String() string
Source Files ¶
- act.go
- act_prjn.go
- avgmax.go
- axon.go
- context.go
- damodtypes_string.go
- deep_layers.go
- deep_net.go
- deep_prjns.go
- doc.go
- globals.go
- globalvars_string.go
- globalvtatype_string.go
- gplayertypes_string.go
- gpu.go
- hebbprjn.go
- helpers.go
- hip_net.go
- hip_prjns.go
- inhib.go
- layer.go
- layer_compute.go
- layerbase.go
- layerparams.go
- layertypes.go
- layertypes_string.go
- layervals.go
- learn.go
- logging.go
- looper.go
- network.go
- network_single.go
- networkbase.go
- neuromod.go
- neuron.go
- neuronavgvars_string.go
- neuronflags_string.go
- neuronidxs_string.go
- neuronvars_string.go
- pcore_layers.go
- pcore_net.go
- pcore_prjns.go
- pool.go
- prjn.go
- prjn_compute.go
- prjnbase.go
- prjngtypes_string.go
- prjnparams.go
- prjntypes.go
- prjntypes_string.go
- pvlv.go
- pvlv_layers.go
- pvlv_net.go
- pvlv_prjns.go
- rand.go
- rl_layers.go
- rl_net.go
- rl_prjns.go
- synapse.go
- synapsecavars_string.go
- synapseidxs_string.go
- synapsevars_string.go
- threads.go
- valencetypes_string.go
- version.go