Documentation
Overview
Package axon provides the basic reference axon implementation, with spiking activations and error-driven learning. Specialized variants such as deep predictive learning, Rubicon, and PBWM build on this core implementation.
The overall design seeks an "optimal" tradeoff between simplicity, transparency, the ability to flexibly recombine and extend elements, and avoiding the need to rewrite existing code.
The *Stru elements handle the core structural components of the network, and hold emer.* interface pointers to elements such as emer.Layer, which provides a very minimal interface for these elements. Interface values in Go refer to the underlying concrete type, so think of these as generic pointers to your specific Layers, etc.
This design means the same *Stru infrastructure can be reused across different variants of the algorithm. Because this infrastructure is kept minimal and algorithm-free, it should be much less confusing than the multiple levels of inheritance in C++ emergent. The actual algorithm-specific code is now fully self-contained, and largely orthogonal to the infrastructure.
One specific cost of this is the need to cast the emer.* interface values back into the specific types of interest when accessing them via the *Stru infrastructure, using a Go type assertion as in the sketch below.
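For example, a minimal sketch of recovering the concrete axon type from a generic emer.Layer value (assuming the axon and emer packages are imported; AxonLayerByName on the network does an equivalent lookup-and-cast by name):

    // asAxonLayer casts a generic emer.Layer, as held by the *Stru
    // infrastructure, back to the concrete *axon.Layer via a type assertion.
    func asAxonLayer(li emer.Layer) *axon.Layer {
        if ly, ok := li.(*axon.Layer); ok {
            return ly
        }
        return nil // some other emer.Layer implementation
    }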
The *Params elements contain all the (meta)parameters and associated methods for computing various functions. They are the equivalent of Specs from the original emergent, but unlike specs they are local to each place they are used, and styling is used to apply common parameters across multiple layers, etc. Params is a more explicit, recognizable name than specs, and it also helps avoid confusion with the different nature of the old specs. Pars is shorter but confusable with "Parents", so "Params" is unambiguous.
Params are organized into four major categories, which are functionally labeled (rather than just structurally) to keep things clearer and better organized overall:

- ActParams -- activation params, at the Neuron level (in act.go)
- InhibParams -- inhibition params, at the Layer / Pool level (in inhib.go)
- LearnNeurParams -- learning parameters at the Neuron level (running averages that drive learning)
- LearnSynParams -- learning parameters at the Synapse level (both in learn.go)
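As a minimal sketch of this pattern, each *Params struct is initialized with Defaults and then Update is called after any manual changes to recompute derived values (the specific Decay.Glong field shown here is an assumption, for illustration only):

    func configAct() axon.ActParams {
        var ap axon.ActParams
        ap.Defaults()        // standard default values for all activation params
        ap.Decay.Glong = 0.6 // example tweak (field path assumed, for illustration)
        ap.Update()          // recompute derived values after manual changes
        return ap
    }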
The levels of structure and state are:

- Network
  - .Layers
    - .Pools: pooled inhibition state -- 1 for the layer plus 1 for each sub-pool (unit group) with inhibition
    - .RecvPaths: receiving pathways from other sending layers
    - .SendPaths: sending pathways to other receiving layers
    - .Neurons: neuron state variables
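A brief sketch of navigating this hierarchy by name (AxonLayerByName returns the concrete *axon.Layer; RecvPath returns the generic emer.Path interface, which is cast back to *axon.Path; assumes fmt and the axon package are imported):

    func listRecvPaths(net *axon.Network, layName string) {
        ly := net.AxonLayerByName(layName) // concrete *axon.Layer, or nil if not found
        if ly == nil {
            return
        }
        for pi := 0; pi < ly.NRecvPaths(); pi++ {
            pj := ly.RecvPath(pi).(*axon.Path) // emer.Path cast to the concrete axon type
            fmt.Println(pj.String())
        }
    }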
There are methods on the Network that perform initialization and overall computation, by iterating over layers and calling methods there. This is typically how most users will run their models.
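In practice the looper helpers (LooperStdPhases, LooperSimCycleAndLearn) wire this up, but a rough hand-written sketch of one trial looks something like the following (the 150/50 cycle split and phase timing are assumptions; inputs are applied beforehand, e.g., via ApplyExts):

    // runTrial sketches one trial: minus-phase cycles, plus-phase cycles,
    // then learning. Exact cycle counts are configurable in real models.
    func runTrial(net *axon.Network, ctx *axon.Context) {
        net.NewState(ctx)
        for cyc := 0; cyc < 200; cyc++ {
            net.Cycle(ctx)
            ctx.CycleInc()
            if cyc == 149 { // end of minus phase (assumed timing)
                net.MinusPhase(ctx)
                net.PlusPhaseStart(ctx)
            }
        }
        net.PlusPhase(ctx)
        net.DWt(ctx)       // compute weight changes from this trial
        net.WtFromDWt(ctx) // apply the weight changes
    }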
Parallel computation across multiple CPU cores (threading) is achieved through persistent worker goroutines that listen for functions to run on thread-specific channels. Each layer has a designated thread number, so you can experiment with different ways of dividing up the computation. Timing data is kept per thread -- see TimerReport() on the network.
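For example (a sketch; pinning a layer to a specific thread via SetThread is optional):

    func configThreads(net *axon.Network, ly *axon.Layer) {
        net.SetNThreads(4) // number of worker goroutines for CPU computation
        ly.SetThread(1)    // optionally assign this layer to a specific thread
        // ... run the model ...
        net.TimerReport()  // report per-function / per-thread timing
    }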
The Layer methods directly iterate over Neurons, Pools, and Paths, and there is no finer-grained level of computation (e.g., at the individual Neuron level), except for the *Params methods that directly compute the relevant functions. Thus, looking directly at the layer.go code should provide a clear sense of exactly how everything is computed -- you may need to refer to act.go, learn.go, etc. to see the relevant details, but at least the overall organization should be clear in layer.go.
Computational methods are generally named VarFromVar, to specifically name the variable being computed from the other input variables, e.g., SpikeFromG computes spiking activation from conductances G.
The Pools (type Pool, in pool.go) hold state used for computing pooled inhibition, and also hold overall aggregate pooled state variables -- the first element in Pools applies to the layer as a whole, and subsequent ones are for each sub-pool (unit group) in 4D layers. These pools play the same role as the AxonUnGpState structures in C++ emergent.
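A brief sketch of accessing pool state for data-parallel index di (pool 0 is the aggregate pool for the whole layer; sub-pools follow when pool-level inhibition is in use):

    func layerPools(ly *axon.Layer, di uint32) {
        lpl := ly.Pool(0, di) // index 0 = aggregate pool for the layer as a whole
        _ = lpl
        if ly.HasPoolInhib() {
            spl := ly.Pool(1, di) // first sub-pool (unit group) of a 4D layer
            _ = spl
        }
    }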
Paths directly support all synapse-level computation: they hold the LearnSynParams and iterate directly over all of their synapses. The exact same Path object lives in the RecvPaths of the receiving layer and the SendPaths of the sending layer, and it maintains and coordinates both sides of the state. This clarifies and simplifies a lot of code. There is no separate equivalent of AxonConSpec / AxonConState at the level of connection groups per unit per pathway.
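For example, a hedged sketch of reading a synapse-level value through the Path (SynValue looks up by variable name and sending / receiving unit indexes; the "Wt" variable name is assumed here):

    func pathWeight(pj *axon.Path, si, ri int) float32 {
        return pj.SynValue("Wt", si, ri) // weight from sending unit si to receiving unit ri
    }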
The pattern of connectivity between units is specified by the paths.Pattern interface, and all of the standard options are available in the paths package. The Pattern code generates a full tensor bitmap of binary 1's and 0's for connected (1's) and not-connected (0's) units, and can use any method to do so. This full lookup-table approach is not the most memory-efficient, but it is fully general and should not be too costly memory-wise overall (fully bit-packed arrays are used, and these bitmaps do not need to be retained once connections have been established). This approach lets patterns focus entirely on the pattern itself, without any concern for how it is used to allocate the actual connections.
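A small sketch of building connectivity with a Pattern (the InputLayer, SuperLayer, and ForwardPath constants and paths.NewFull are assumptions here, standing in for whatever layer / path types and pattern a given model needs):

    func buildNet(net *axon.Network) {
        in := net.AddLayer2D("Input", 10, 10, axon.InputLayer)
        hid := net.AddLayer2D("Hidden", 10, 10, axon.SuperLayer)
        full := paths.NewFull() // full all-to-all connectivity pattern
        net.ConnectLayers(in, hid, full, axon.ForwardPath)
    }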
Index
- Constants
- Variables
- func AddGlbCostV(ctx *Context, di uint32, gvar GlobalVars, negIndex uint32, val float32)
- func AddGlbUSnegV(ctx *Context, di uint32, gvar GlobalVars, negIndex uint32, val float32)
- func AddGlbUSposV(ctx *Context, di uint32, gvar GlobalVars, posIndex uint32, val float32)
- func AddGlbV(ctx *Context, di uint32, gvar GlobalVars, val float32)
- func AddNrnAvgV(ctx *Context, ni uint32, nvar NeuronAvgVars, val float32)
- func AddNrnV(ctx *Context, ni, di uint32, nvar NeuronVars, val float32)
- func AddSynCaV(ctx *Context, syni, di uint32, svar SynapseCaVars, val float32)
- func AddSynV(ctx *Context, syni uint32, svar SynapseVars, val float32)
- func GetRandomNumber(index uint32, counter slrand.Counter, funIndex RandFunIndex) float32
- func GlbCostV(ctx *Context, di uint32, gvar GlobalVars, negIndex uint32) float32
- func GlbUSnegV(ctx *Context, di uint32, gvar GlobalVars, negIndex uint32) float32
- func GlbUSposV(ctx *Context, di uint32, gvar GlobalVars, posIndex uint32) float32
- func GlbV(ctx *Context, di uint32, gvar GlobalVars) float32
- func GlobalSetRew(ctx *Context, di uint32, rew float32, hasRew bool)
- func GlobalsReset(ctx *Context)
- func HashEncodeSlice(slice []float32) string
- func IsExtLayerType(lt LayerTypes) bool
- func JsonToParams(b []byte) string
- func LayerActsLog(net *Network, lg *elog.Logs, di int, gui *egui.GUI)
- func LayerActsLogAvg(net *Network, lg *elog.Logs, gui *egui.GUI, recReset bool)
- func LayerActsLogConfig(net *Network, lg *elog.Logs)
- func LayerActsLogConfigGUI(lg *elog.Logs, gui *egui.GUI)
- func LayerActsLogConfigMetaData(dt *table.Table)
- func LayerActsLogRecReset(lg *elog.Logs)
- func LogAddCaLrnDiagnosticItems(lg *elog.Logs, mode etime.Modes, net *Network, times ...etime.Times)
- func LogAddDiagnosticItems(lg *elog.Logs, layerNames []string, mode etime.Modes, times ...etime.Times)
- func LogAddExtraDiagnosticItems(lg *elog.Logs, mode etime.Modes, net *Network, times ...etime.Times)
- func LogAddGlobals(lg *elog.Logs, ctx *Context, mode etime.Modes, times ...etime.Times)
- func LogAddLayerGeActAvgItems(lg *elog.Logs, net *Network, mode etime.Modes, etm etime.Times)
- func LogAddPCAItems(lg *elog.Logs, net *Network, mode etime.Modes, times ...etime.Times)
- func LogAddPulvCorSimItems(lg *elog.Logs, net *Network, mode etime.Modes, times ...etime.Times)
- func LogInputLayer(lg *elog.Logs, net *Network, mode etime.Modes)
- func LogTestErrors(lg *elog.Logs)
- func LooperResetLogBelow(man *looper.Manager, logs *elog.Logs, except ...etime.Times)
- func LooperSimCycleAndLearn(man *looper.Manager, net *Network, ctx *Context, viewupdt *netview.ViewUpdate, ...)
- func LooperStdPhases(man *looper.Manager, ctx *Context, net *Network, plusStart, plusEnd int, ...)
- func LooperUpdateNetView(man *looper.Manager, viewupdt *netview.ViewUpdate, net *Network, ...)
- func LooperUpdatePlots(man *looper.Manager, gui *egui.GUI)
- func MulNrnAvgV(ctx *Context, ni uint32, nvar NeuronAvgVars, val float32)
- func MulNrnV(ctx *Context, ni, di uint32, nvar NeuronVars, val float32)
- func MulSynCaV(ctx *Context, syni, di uint32, svar SynapseCaVars, val float32)
- func MulSynV(ctx *Context, syni uint32, svar SynapseVars, val float32)
- func NeuronVarIndexByName(varNm string) (int, error)
- func NrnAvgV(ctx *Context, ni uint32, nvar NeuronAvgVars) float32
- func NrnClearFlag(ctx *Context, ni, di uint32, flag NeuronFlags)
- func NrnHasFlag(ctx *Context, ni, di uint32, flag NeuronFlags) bool
- func NrnI(ctx *Context, ni uint32, idx NeuronIndexes) uint32
- func NrnIsOff(ctx *Context, ni uint32) bool
- func NrnSetFlag(ctx *Context, ni, di uint32, flag NeuronFlags)
- func NrnV(ctx *Context, ni, di uint32, nvar NeuronVars) float32
- func PCAStats(net *Network, lg *elog.Logs, stats *estats.Stats)
- func ParallelChunkRun(fun func(st, ed int), total int, nThreads int)
- func ParallelRun(fun func(st, ed uint32), total uint32, nThreads int)
- func RubiconNormFun(raw float32) float32
- func RubiconUSStimValue(ctx *Context, di uint32, usIndex uint32, valence ValenceTypes) float32
- func SaveWeights(net *Network, ctrString, runName string) string
- func SaveWeightsIfArgSet(net *Network, args *ecmd.Args, ctrString, runName string) string
- func SaveWeightsIfConfigSet(net *Network, cfgWts bool, ctrString, runName string) string
- func SetAvgMaxFloatFromIntErr(fun func())
- func SetGlbCostV(ctx *Context, di uint32, gvar GlobalVars, negIndex uint32, val float32)
- func SetGlbUSnegV(ctx *Context, di uint32, gvar GlobalVars, negIndex uint32, val float32)
- func SetGlbUSposV(ctx *Context, di uint32, gvar GlobalVars, posIndex uint32, val float32)
- func SetGlbV(ctx *Context, di uint32, gvar GlobalVars, val float32)
- func SetNeuronExtPosNeg(ctx *Context, ni, di uint32, val float32)
- func SetNrnAvgV(ctx *Context, ni uint32, nvar NeuronAvgVars, val float32)
- func SetNrnI(ctx *Context, ni uint32, idx NeuronIndexes, val uint32)
- func SetNrnV(ctx *Context, ni, di uint32, nvar NeuronVars, val float32)
- func SetSynCaV(ctx *Context, syni, di uint32, svar SynapseCaVars, val float32)
- func SetSynI(ctx *Context, syni uint32, idx SynapseIndexes, val uint32)
- func SetSynV(ctx *Context, syni uint32, svar SynapseVars, val float32)
- func SigFun(w, gain, off float32) float32
- func SigFun61(w float32) float32
- func SigInvFun(w, gain, off float32) float32
- func SigInvFun61(w float32) float32
- func SigmoidFun(cnSum, guSum float32) float32
- func SynCaV(ctx *Context, syni, di uint32, svar SynapseCaVars) float32
- func SynI(ctx *Context, syni uint32, idx SynapseIndexes) uint32
- func SynV(ctx *Context, syni uint32, svar SynapseVars) float32
- func SynapseVarByName(varNm string) (int, error)
- func ToggleLayersOff(net *Network, layerNames []string, off bool)
- func WeightsFilename(net *Network, ctrString, runName string) string
- type ActAvgParams
- type ActAvgValues
- type ActInitParams
- type ActParams
- func (ac *ActParams) AddGeNoise(ctx *Context, ni, di uint32)
- func (ac *ActParams) AddGiNoise(ctx *Context, ni, di uint32)
- func (ac *ActParams) DecayAHP(ctx *Context, ni, di uint32, decay float32)
- func (ac *ActParams) DecayLearnCa(ctx *Context, ni, di uint32, decay float32)
- func (ac *ActParams) DecayState(ctx *Context, ni, di uint32, decay, glong, ahp float32)
- func (ac *ActParams) Defaults()
- func (ac *ActParams) GSkCaFromCa(ctx *Context, ni, di uint32)
- func (ac *ActParams) GeFromSyn(ctx *Context, ni, di uint32, geSyn, geExt float32)
- func (ac *ActParams) GiFromSyn(ctx *Context, ni, di uint32, giSyn float32) float32
- func (ac *ActParams) GkFromVm(ctx *Context, ni, di uint32)
- func (ac *ActParams) GvgccFromVm(ctx *Context, ni, di uint32)
- func (ac *ActParams) InetFromG(vm, ge, gl, gi, gk float32) float32
- func (ac *ActParams) InitActs(ctx *Context, ni, di uint32)
- func (ac *ActParams) InitLongActs(ctx *Context, ni, di uint32)
- func (ac *ActParams) KNaNewState(ctx *Context, ni, di uint32)
- func (ac *ActParams) MaintNMDAFromRaw(ctx *Context, ni, di uint32)
- func (ac *ActParams) NMDAFromRaw(ctx *Context, ni, di uint32, geTot float32)
- func (ac *ActParams) SMaintFromISI(ctx *Context, ni, di uint32)
- func (ac *ActParams) SpikeFromVm(ctx *Context, ni, di uint32)
- func (ac *ActParams) SpikeFromVmVars(nrnISI, nrnISIAvg, nrnSpike, nrnSpiked, nrnAct *float32, nrnVm float32)
- func (ac *ActParams) Update()
- func (ac *ActParams) VmFromG(ctx *Context, ni, di uint32)
- func (ac *ActParams) VmFromInet(vm, dt, inet float32) float32
- func (ac *ActParams) VmInteg(vm, dt, ge, gl, gi, gk float32, nvm, inet *float32)
- type AvgMaxI32
- func (am *AvgMaxI32) Calc(refIndex int32)
- func (am *AvgMaxI32) FloatFromInt(ival, refIndex int32) float32
- func (am *AvgMaxI32) FloatFromIntFactor() float32
- func (am *AvgMaxI32) FloatToInt(val float32) int32
- func (am *AvgMaxI32) FloatToIntFactor() float32
- func (am *AvgMaxI32) FloatToIntSum(val float32) int32
- func (am *AvgMaxI32) Init()
- func (am *AvgMaxI32) String() string
- func (am *AvgMaxI32) UpdateValue(val float32)
- func (am *AvgMaxI32) Zero()
- type AvgMaxPhases
- type AxonLayer
- type AxonNetwork
- type AxonPath
- type AxonPaths
- type BLANovelPath
- type BLAPathParams
- type BurstParams
- type CTParams
- type CaLrnParams
- type ClampParams
- type Context
- func (ctx *Context) CopyNetStridesFrom(srcCtx *Context)
- func (ctx *Context) CycleInc()
- func (ctx *Context) Defaults()
- func (ctx *Context) GlobalCostIndex(di uint32, gvar GlobalVars, negIndex uint32) uint32
- func (ctx *Context) GlobalIndex(di uint32, gvar GlobalVars) uint32
- func (ctx *Context) GlobalUSnegIndex(di uint32, gvar GlobalVars, negIndex uint32) uint32
- func (ctx *Context) GlobalUSposIndex(di uint32, gvar GlobalVars, posIndex uint32) uint32
- func (ctx *Context) GlobalVNFloats() uint32
- func (ctx *Context) NewPhase(plusPhase bool)
- func (ctx *Context) NewState(mode etime.Modes)
- func (ctx *Context) Reset()
- func (ctx *Context) SetGlobalStrides()
- func (ctx *Context) SlowInc() bool
- type CorSimStats
- type DAModTypes
- func (i DAModTypes) Desc() string
- func (i DAModTypes) Int64() int64
- func (i DAModTypes) MarshalText() ([]byte, error)
- func (i *DAModTypes) SetInt64(in int64)
- func (i *DAModTypes) SetString(s string) error
- func (i DAModTypes) String() string
- func (i *DAModTypes) UnmarshalText(text []byte) error
- func (i DAModTypes) Values() []enums.Enum
- type DecayParams
- type DendParams
- type DriveParams
- func (dp *DriveParams) AddTo(ctx *Context, di uint32, drv uint32, delta float32) float32
- func (dp *DriveParams) Alloc(nDrives int)
- func (dp *DriveParams) Defaults()
- func (dp *DriveParams) EffectiveDrive(ctx *Context, di uint32, i uint32) float32
- func (dp *DriveParams) ExpStep(ctx *Context, di uint32, drv uint32, dt, base float32) float32
- func (dp *DriveParams) ExpStepAll(ctx *Context, di uint32)
- func (dp *DriveParams) SoftAdd(ctx *Context, di uint32, drv uint32, delta float32) float32
- func (dp *DriveParams) ToBaseline(ctx *Context, di uint32)
- func (dp *DriveParams) ToZero(ctx *Context, di uint32)
- func (dp *DriveParams) Update()
- func (dp *DriveParams) VarToZero(ctx *Context, di uint32, gvar GlobalVars)
- type DtParams
- func (dp *DtParams) AvgVarUpdate(avg, vr *float32, val float32)
- func (dp *DtParams) Defaults()
- func (dp *DtParams) GeSynFromRaw(geSyn, geRaw float32) float32
- func (dp *DtParams) GeSynFromRawSteady(geRaw float32) float32
- func (dp *DtParams) GiSynFromRaw(giSyn, giRaw float32) float32
- func (dp *DtParams) GiSynFromRawSteady(giRaw float32) float32
- func (dp *DtParams) Update()
- type GPLayerTypes
- func (i GPLayerTypes) Desc() string
- func (i GPLayerTypes) Int64() int64
- func (i GPLayerTypes) MarshalText() ([]byte, error)
- func (i *GPLayerTypes) SetInt64(in int64)
- func (i *GPLayerTypes) SetString(s string) error
- func (i GPLayerTypes) String() string
- func (i *GPLayerTypes) UnmarshalText(text []byte) error
- func (i GPLayerTypes) Values() []enums.Enum
- type GPParams
- type GPU
- func (gp *GPU) Config(ctx *Context, net *Network)
- func (gp *GPU) ConfigSynCaBuffs()
- func (gp *GPU) CopyContextFromStaging()
- func (gp *GPU) CopyContextToStaging()
- func (gp *GPU) CopyExtsToStaging()
- func (gp *GPU) CopyGBufToStaging()
- func (gp *GPU) CopyIndexesToStaging()
- func (gp *GPU) CopyLayerStateFromStaging()
- func (gp *GPU) CopyLayerValuesFromStaging()
- func (gp *GPU) CopyLayerValuesToStaging()
- func (gp *GPU) CopyNeuronsFromStaging()
- func (gp *GPU) CopyNeuronsToStaging()
- func (gp *GPU) CopyParamsToStaging()
- func (gp *GPU) CopyPoolsFromStaging()
- func (gp *GPU) CopyPoolsToStaging()
- func (gp *GPU) CopyStateFromStaging()
- func (gp *GPU) CopyStateToStaging()
- func (gp *GPU) CopySynCaFromStaging()
- func (gp *GPU) CopySynCaToStaging()
- func (gp *GPU) CopySynapsesFromStaging()
- func (gp *GPU) CopySynapsesToStaging()
- func (gp *GPU) Destroy()
- func (gp *GPU) RunApplyExts()
- func (gp *GPU) RunApplyExtsCmd() vk.CommandBuffer
- func (gp *GPU) RunCycle()
- func (gp *GPU) RunCycleOne()
- func (gp *GPU) RunCycleOneCmd() vk.CommandBuffer
- func (gp *GPU) RunCycleSeparateFuns()
- func (gp *GPU) RunCycles()
- func (gp *GPU) RunCyclesCmd() vk.CommandBuffer
- func (gp *GPU) RunDWt()
- func (gp *GPU) RunDWtCmd() vk.CommandBuffer
- func (gp *GPU) RunMinusPhase()
- func (gp *GPU) RunMinusPhaseCmd() vk.CommandBuffer
- func (gp *GPU) RunNewState()
- func (gp *GPU) RunNewStateCmd() vk.CommandBuffer
- func (gp *GPU) RunPipelineMemWait(cmd vk.CommandBuffer, name string, n int)
- func (gp *GPU) RunPipelineNoWait(cmd vk.CommandBuffer, name string, n int)
- func (gp *GPU) RunPipelineOffset(cmd vk.CommandBuffer, name string, n, off int)
- func (gp *GPU) RunPipelineWait(name string, n int)
- func (gp *GPU) RunPlusPhase()
- func (gp *GPU) RunPlusPhaseCmd() vk.CommandBuffer
- func (gp *GPU) RunPlusPhaseStart()
- func (gp *GPU) RunWtFromDWt()
- func (gp *GPU) RunWtFromDWtCmd() vk.CommandBuffer
- func (gp *GPU) SetContext(ctx *Context)
- func (gp *GPU) StartRun(cmd vk.CommandBuffer)
- func (gp *GPU) SynCaBuff(idx uint32) []float32
- func (gp *GPU) SynDataNs() (nCmd, nPer, nLast int)
- func (gp *GPU) SyncAllFromGPU()
- func (gp *GPU) SyncAllToGPU()
- func (gp *GPU) SyncContextFromGPU()
- func (gp *GPU) SyncContextToGPU()
- func (gp *GPU) SyncGBufToGPU()
- func (gp *GPU) SyncLayerStateFromGPU()
- func (gp *GPU) SyncLayerValuesFromGPU()
- func (gp *GPU) SyncLayerValuesToGPU()
- func (gp *GPU) SyncMemToGPU()
- func (gp *GPU) SyncNeuronsFromGPU()
- func (gp *GPU) SyncNeuronsToGPU()
- func (gp *GPU) SyncParamsToGPU()
- func (gp *GPU) SyncPoolsFromGPU()
- func (gp *GPU) SyncPoolsToGPU()
- func (gp *GPU) SyncRegionStruct(vnm string) vgpu.MemReg
- func (gp *GPU) SyncRegionSynCas(vnm string) vgpu.MemReg
- func (gp *GPU) SyncRegionSyns(vnm string) vgpu.MemReg
- func (gp *GPU) SyncStateFromGPU()
- func (gp *GPU) SyncStateGBufToGPU()
- func (gp *GPU) SyncStateToGPU()
- func (gp *GPU) SyncSynCaFromGPU()
- func (gp *GPU) SyncSynCaToGPU()
- func (gp *GPU) SyncSynapsesFromGPU()
- func (gp *GPU) SyncSynapsesToGPU()
- func (gp *GPU) TestSynCa() bool
- func (gp *GPU) TestSynCaCmd() vk.CommandBuffer
- type GScaleValues
- type GiveUpParams
- type GlobalVars
- func (i GlobalVars) Desc() string
- func (i GlobalVars) Int64() int64
- func (i GlobalVars) MarshalText() ([]byte, error)
- func (i *GlobalVars) SetInt64(in int64)
- func (i *GlobalVars) SetString(s string) error
- func (i GlobalVars) String() string
- func (i *GlobalVars) UnmarshalText(text []byte) error
- func (i GlobalVars) Values() []enums.Enum
- type HebbParams
- type HipConfig
- type HipPathParams
- type InhibParams
- type LDTParams
- type LHbParams
- func (lh *LHbParams) DAFromPVs(pvPos, pvNeg, vsPatchPos, vsPatchPosSum float32) (burst, dip, da, rew float32)
- func (lh *LHbParams) DAforNoUS(ctx *Context, di uint32) float32
- func (lh *LHbParams) DAforUS(ctx *Context, di uint32, pvPos, pvNeg, vsPatchPos, vsPatchPosSum float32) float32
- func (lh *LHbParams) Defaults()
- func (lh *LHbParams) Reset(ctx *Context, di uint32)
- func (lh *LHbParams) Update()
- type LRateMod
- type LRateParams
- type LaySpecialValues
- type Layer
- func (ly *Layer) AdaptInhib(ctx *Context)
- func (ly *Layer) AddClass(cls ...string) emer.Layer
- func (ly *Layer) AllParams() string
- func (ly *Layer) AnyGated(di uint32) bool
- func (ly *Layer) ApplyExt(ctx *Context, di uint32, ext tensor.Tensor)
- func (ly *Layer) ApplyExt1D(ctx *Context, di uint32, ext []float64)
- func (ly *Layer) ApplyExt1D32(ctx *Context, di uint32, ext []float32)
- func (ly *Layer) ApplyExt1DTsr(ctx *Context, di uint32, ext tensor.Tensor)
- func (ly *Layer) ApplyExt2D(ctx *Context, di uint32, ext tensor.Tensor)
- func (ly *Layer) ApplyExt2Dto4D(ctx *Context, di uint32, ext tensor.Tensor)
- func (ly *Layer) ApplyExt4D(ctx *Context, di uint32, ext tensor.Tensor)
- func (ly *Layer) ApplyExtFlags() (clearMask, setMask NeuronFlags, toTarg bool)
- func (ly *Layer) ApplyExtValue(ctx *Context, lni, di uint32, val float32, clearMask, setMask NeuronFlags, ...)
- func (ly *Layer) AsAxon() *Layer
- func (ly *Layer) AvgDifFromTrgAvg(ctx *Context)
- func (ly *Layer) AvgMaxVarByPool(ctx *Context, varNm string, poolIndex, di int) minmax.AvgMax32
- func (ly *Layer) BGThalDefaults()
- func (ly *Layer) BLADefaults()
- func (ly *Layer) BetweenLayerGi(ctx *Context)
- func (ly *Layer) BetweenLayerGiMax(net *Network, di uint32, maxGi float32, layIndex int32) float32
- func (ly *Layer) CTDefParamsFast()
- func (ly *Layer) CTDefParamsLong()
- func (ly *Layer) CTDefParamsMedium()
- func (ly *Layer) CeMDefaults()
- func (ly *Layer) ClearTargExt(ctx *Context)
- func (ly *Layer) CorSimFromActs(ctx *Context)
- func (ly *Layer) CostEst() (neur, syn, tot int)
- func (ly *Layer) CycleNeuron(ctx *Context, ni uint32)
- func (ly *Layer) CyclePost(ctx *Context)
- func (ly *Layer) DTrgSubMean(ctx *Context)
- func (ly *Layer) DWt(ctx *Context, si uint32)
- func (ly *Layer) DWtSubMean(ctx *Context, ri uint32)
- func (ly *Layer) DecayState(ctx *Context, di uint32, decay, glong, ahp float32)
- func (ly *Layer) DecayStateLayer(ctx *Context, di uint32, decay, glong, ahp float32)
- func (ly *Layer) DecayStateNeuronsAll(ctx *Context, decay, glong, ahp float32)
- func (ly *Layer) DecayStatePool(ctx *Context, pool int, decay, glong, ahp float32)
- func (ly *Layer) Defaults()
- func (ly *Layer) GInteg(ctx *Context, ni, di uint32, pl *Pool, vals *LayerValues)
- func (ly *Layer) GPDefaults()
- func (ly *Layer) GPPostBuild()
- func (ly *Layer) GPiDefaults()
- func (ly *Layer) GatedFromSpkMax(di uint32, thr float32) (bool, int)
- func (ly *Layer) GatherSpikes(ctx *Context, ni uint32)
- func (ly *Layer) GiFromSpikes(ctx *Context)
- func (ly *Layer) HasPoolInhib() bool
- func (ly *Layer) InitActAvg(ctx *Context)
- func (ly *Layer) InitActAvgLayer(ctx *Context)
- func (ly *Layer) InitActAvgPools(ctx *Context)
- func (ly *Layer) InitActs(ctx *Context)
- func (ly *Layer) InitExt(ctx *Context)
- func (ly *Layer) InitGScale(ctx *Context)
- func (ly *Layer) InitPathGBuffs(ctx *Context)
- func (ly *Layer) InitWtSym(ctx *Context)
- func (ly *Layer) InitWts(ctx *Context, nt *Network)
- func (ly *Layer) LDTDefaults()
- func (ly *Layer) LDTPostBuild()
- func (ly *Layer) LDTSrcLayAct(net *Network, layIndex int32, di uint32) float32
- func (ly *Layer) LRateMod(mod float32)
- func (ly *Layer) LRateSched(sched float32)
- func (ly *Layer) LesionNeurons(prop float32) int
- func (ly *Layer) LocalistErr2D(ctx *Context) (err []bool, minusIndex, plusIndex []int)
- func (ly *Layer) LocalistErr4D(ctx *Context) (err []bool, minusIndex, plusIndex []int)
- func (ly *Layer) MakeToolbar(p *tree.Plan)
- func (ly *Layer) MatrixDefaults()
- func (ly *Layer) MatrixGated(ctx *Context)
- func (ly *Layer) MatrixPostBuild()
- func (ly *Layer) MinusPhase(ctx *Context)
- func (ly *Layer) MinusPhasePost(ctx *Context)
- func (ly *Layer) NewState(ctx *Context)
- func (ly *Layer) NewStateNeurons(ctx *Context)
- func (ly *Layer) Object() any
- func (ly *Layer) PTMaintDefaults()
- func (ly *Layer) PctUnitErr(ctx *Context) []float64
- func (ly *Layer) PlusPhase(ctx *Context)
- func (ly *Layer) PlusPhaseActAvg(ctx *Context)
- func (ly *Layer) PlusPhasePost(ctx *Context)
- func (ly *Layer) PlusPhaseStart(ctx *Context)
- func (ly *Layer) PoolGiFromSpikes(ctx *Context)
- func (ly *Layer) PostBuild()
- func (ly *Layer) PostSpike(ctx *Context, ni uint32)
- func (ly *Layer) PulvPostBuild()
- func (ly *Layer) PulvinarDriver(ctx *Context, lni, di uint32) (drvGe, nonDrivePct float32)
- func (ly *Layer) RWDaPostBuild()
- func (ly *Layer) ReadWtsJSON(r io.Reader) error
- func (ly *Layer) RubiconPostBuild()
- func (ly *Layer) STNDefaults()
- func (ly *Layer) SendSpike(ctx *Context, ni uint32)
- func (ly *Layer) SetParam(path, val string) error
- func (ly *Layer) SetSubMean(trgAvg, path float32)
- func (ly *Layer) SetWts(lw *weights.Layer) error
- func (ly *Layer) SlowAdapt(ctx *Context)
- func (ly *Layer) SpikeFromG(ctx *Context, ni, di uint32, lpl *Pool)
- func (ly *Layer) SpkSt1(ctx *Context)
- func (ly *Layer) SpkSt2(ctx *Context)
- func (ly *Layer) SynFail(ctx *Context)
- func (ly *Layer) TDDaPostBuild()
- func (ly *Layer) TDIntegPostBuild()
- func (ly *Layer) TargToExt(ctx *Context)
- func (ly *Layer) TestValues(ctrKey string, vals map[string]float32)
- func (ly *Layer) TrgAvgFromD(ctx *Context)
- func (ly *Layer) UnLesionNeurons()
- func (ly *Layer) Update()
- func (ly *Layer) UpdateExtFlags(ctx *Context)
- func (ly *Layer) UpdateParams()
- func (ly *Layer) WriteWtsJSON(w io.Writer, depth int)
- func (ly *Layer) WtFromDWt(ctx *Context, si uint32)
- func (ly *Layer) WtFromDWtLayer(ctx *Context)
- type LayerBase
- func (ly *LayerBase) ApplyDefParams()
- func (ly *LayerBase) ApplyParams(pars *params.Sheet, setMsg bool) (bool, error)
- func (ly *LayerBase) Build() error
- func (ly *LayerBase) BuildConfigByName(nm string) (string, error)
- func (ly *LayerBase) BuildConfigFindLayer(nm string, mustName bool) int32
- func (ly *LayerBase) BuildPaths(ctx *Context) error
- func (ly *LayerBase) BuildPools(ctx *Context, nn uint32) error
- func (ly *LayerBase) BuildSubPools(ctx *Context)
- func (ly *LayerBase) Class() string
- func (ly *LayerBase) Config(shape []int, typ emer.LayerType)
- func (ly *LayerBase) Index() int
- func (ly *LayerBase) Index4DFrom2D(x, y int) ([]int, bool)
- func (ly *LayerBase) InitName(lay emer.Layer, name string, net emer.Network)
- func (ly *LayerBase) Is2D() bool
- func (ly *LayerBase) Is4D() bool
- func (ly *LayerBase) IsOff() bool
- func (ly *LayerBase) Label() string
- func (ly *LayerBase) LayerType() LayerTypes
- func (ly *LayerBase) LayerValues(di uint32) *LayerValues
- func (ly *LayerBase) NRecvPaths() int
- func (ly *LayerBase) NSendPaths() int
- func (ly *LayerBase) NSubPools() int
- func (ly *LayerBase) Name() string
- func (ly *LayerBase) NeurStartIndex() int
- func (ly *LayerBase) NonDefaultParams() string
- func (ly *LayerBase) ParamsApplied(sel *params.Sel)
- func (ly *LayerBase) ParamsHistoryReset()
- func (ly *LayerBase) PlaceAbove(other *Layer)
- func (ly *LayerBase) PlaceBehind(other *Layer, space float32)
- func (ly *LayerBase) PlaceRightOf(other *Layer, space float32)
- func (ly *LayerBase) Pool(pi, di uint32) *Pool
- func (ly *LayerBase) Pos() math32.Vector3
- func (ly *LayerBase) RecipToRecvPath(rpj *Path) (*Path, bool)
- func (ly *LayerBase) RecipToSendPath(spj *Path) (*Path, bool)
- func (ly *LayerBase) RecvName(receiver string) *Path
- func (ly *LayerBase) RecvNameTry(receiver string) (emer.Path, error)
- func (ly *LayerBase) RecvNameType(receiver, typ string) *Path
- func (ly *LayerBase) RecvNameTypeTry(receiver, typ string) (emer.Path, error)
- func (ly *LayerBase) RecvPath(idx int) emer.Path
- func (ly *LayerBase) RecvPathValues(vals *[]float32, varNm string, sendLay emer.Layer, sendIndex1D int, ...) error
- func (ly *LayerBase) RecvPaths() *AxonPaths
- func (ly *LayerBase) RelPos() relpos.Rel
- func (ly *LayerBase) RepIndexes() []int
- func (ly *LayerBase) RepShape() *tensor.Shape
- func (ly *LayerBase) SendName(sender string) *Path
- func (ly *LayerBase) SendNameTry(sender string) (emer.Path, error)
- func (ly *LayerBase) SendNameType(sender, typ string) *Path
- func (ly *LayerBase) SendNameTypeTry(sender, typ string) (emer.Path, error)
- func (ly *LayerBase) SendPath(idx int) emer.Path
- func (ly *LayerBase) SendPathValues(vals *[]float32, varNm string, recvLay emer.Layer, recvIndex1D int, ...) error
- func (ly *LayerBase) SendPaths() *AxonPaths
- func (ly *LayerBase) SetBuildConfig(param, val string)
- func (ly *LayerBase) SetIndex(idx int)
- func (ly *LayerBase) SetName(nm string)
- func (ly *LayerBase) SetOff(off bool)
- func (ly *LayerBase) SetPos(pos math32.Vector3)
- func (ly *LayerBase) SetRelPos(rel relpos.Rel)
- func (ly *LayerBase) SetRepIndexesShape(idxs, shape []int)
- func (ly *LayerBase) SetShape(shape []int)
- func (ly *LayerBase) SetThread(thr int)
- func (ly *LayerBase) SetType(typ emer.LayerType)
- func (ly *LayerBase) Shape() *tensor.Shape
- func (ly *LayerBase) Size() math32.Vector2
- func (ly *LayerBase) SubPool(ctx *Context, ni, di uint32) *Pool
- func (ly *LayerBase) Thread() int
- func (ly *LayerBase) Type() emer.LayerType
- func (ly *LayerBase) TypeName() string
- func (ly *LayerBase) UnitVal1D(varIndex int, idx, di int) float32
- func (ly *LayerBase) UnitValue(varNm string, idx []int, di int) float32
- func (ly *LayerBase) UnitValues(vals *[]float32, varNm string, di int) error
- func (ly *LayerBase) UnitValuesRepTensor(tsr tensor.Tensor, varNm string, di int) error
- func (ly *LayerBase) UnitValuesTensor(tsr tensor.Tensor, varNm string, di int) error
- func (ly *LayerBase) UnitVarIndex(varNm string) (int, error)
- func (ly *LayerBase) UnitVarNames() []string
- func (ly *LayerBase) UnitVarNum() int
- func (ly *LayerBase) UnitVarProps() map[string]string
- func (ly *LayerBase) VarRange(varNm string) (min, max float32, err error)
- type LayerIndexes
- type LayerInhibIndexes
- type LayerParams
- func (ly *LayerParams) AllParams() string
- func (ly *LayerParams) ApplyExtFlags(clearMask, setMask *NeuronFlags, toTarg *bool)
- func (ly *LayerParams) ApplyExtValue(ctx *Context, ni, di uint32, val float32)
- func (ly *LayerParams) AvgGeM(ctx *Context, vals *LayerValues, geIntMinusMax, giIntMinusMax float32)
- func (ly *LayerParams) CTDefaults()
- func (ly *LayerParams) CyclePostCeMLayer(ctx *Context, di uint32, lpl *Pool)
- func (ly *LayerParams) CyclePostLDTLayer(ctx *Context, di uint32, vals *LayerValues, ...)
- func (ly *LayerParams) CyclePostLayer(ctx *Context, di uint32, lpl *Pool, vals *LayerValues)
- func (ly *LayerParams) CyclePostRWDaLayer(ctx *Context, di uint32, vals *LayerValues, pvals *LayerValues)
- func (ly *LayerParams) CyclePostTDDaLayer(ctx *Context, di uint32, vals *LayerValues, ivals *LayerValues)
- func (ly *LayerParams) CyclePostTDIntegLayer(ctx *Context, di uint32, vals *LayerValues, pvals *LayerValues)
- func (ly *LayerParams) CyclePostTDPredLayer(ctx *Context, di uint32, vals *LayerValues)
- func (ly *LayerParams) CyclePostVSPatchLayer(ctx *Context, di uint32, pi int32, pl *Pool, vals *LayerValues)
- func (ly *LayerParams) CyclePostVTALayer(ctx *Context, di uint32)
- func (ly *LayerParams) Defaults()
- func (ly *LayerParams) DrivesDefaults()
- func (ly *LayerParams) GFromRawSyn(ctx *Context, ni, di uint32)
- func (ly *LayerParams) GNeuroMod(ctx *Context, ni, di uint32, vals *LayerValues)
- func (ly *LayerParams) GatherSpikesInit(ctx *Context, ni, di uint32)
- func (ly *LayerParams) GiInteg(ctx *Context, ni, di uint32, pl *Pool, vals *LayerValues)
- func (ly *LayerParams) InitExt(ctx *Context, ni, di uint32)
- func (ly *LayerParams) IsInput() bool
- func (ly *LayerParams) IsInputOrTarget() bool
- func (ly *LayerParams) IsLearnTrgAvg() bool
- func (ly *LayerParams) IsTarget() bool
- func (ly *LayerParams) LayPoolGiFromSpikes(ctx *Context, lpl *Pool, vals *LayerValues)
- func (ly *LayerParams) LearnTrgAvgErrLRate() float32
- func (ly *LayerParams) MinusPhaseNeuron(ctx *Context, ni, di uint32, pl *Pool, lpl *Pool, vals *LayerValues)
- func (ly *LayerParams) MinusPhasePool(ctx *Context, pl *Pool)
- func (ly *LayerParams) NewStateLayer(ctx *Context, lpl *Pool, vals *LayerValues)
- func (ly *LayerParams) NewStateLayerActAvg(ctx *Context, vals *LayerValues, actMinusAvg, actPlusAvg float32)
- func (ly *LayerParams) NewStateNeuron(ctx *Context, ni, di uint32, vals *LayerValues, pl *Pool)
- func (ly *LayerParams) NewStatePool(ctx *Context, pl *Pool)
- func (ly *LayerParams) PTPredDefaults()
- func (ly *LayerParams) PVDefaults()
- func (ly *LayerParams) PlusPhaseNeuron(ctx *Context, ni, di uint32, pl *Pool, lpl *Pool, vals *LayerValues)
- func (ly *LayerParams) PlusPhaseNeuronSpecial(ctx *Context, ni, di uint32, pl *Pool, lpl *Pool, vals *LayerValues)
- func (ly *LayerParams) PlusPhasePool(ctx *Context, pl *Pool)
- func (ly *LayerParams) PlusPhaseStartNeuron(ctx *Context, ni, di uint32, pl *Pool, lpl *Pool, vals *LayerValues)
- func (ly *LayerParams) PostSpike(ctx *Context, ni, di uint32, pl *Pool, vals *LayerValues)
- func (ly *LayerParams) PostSpikeSpecial(ctx *Context, ni, di uint32, pl *Pool, lpl *Pool, vals *LayerValues)
- func (ly *LayerParams) PulvDefaults()
- func (ly *LayerParams) RWDefaults()
- func (ly *LayerParams) RWPredDefaults()
- func (ly *LayerParams) ShouldDisplay(field string) bool
- func (ly *LayerParams) SpecialPostGs(ctx *Context, ni, di uint32, saveVal float32)
- func (ly *LayerParams) SpecialPreGs(ctx *Context, ni, di uint32, pl *Pool, vals *LayerValues, drvGe float32, ...) float32
- func (ly *LayerParams) SpikeFromG(ctx *Context, ni, di uint32, lpl *Pool)
- func (ly *LayerParams) SubPoolGiFromSpikes(ctx *Context, di uint32, pl *Pool, lpl *Pool, lyInhib bool, giMult float32)
- func (ly *LayerParams) TDDefaults()
- func (ly *LayerParams) TDPredDefaults()
- func (ly *LayerParams) USDefaults()
- func (ly *LayerParams) Update()
- func (ly *LayerParams) UrgencyDefaults()
- func (ly *LayerParams) VSGatedDefaults()
- func (ly *LayerParams) VSPatchDefaults()
- type LayerTypes
- func (i LayerTypes) Desc() string
- func (i LayerTypes) Int64() int64
- func (lt LayerTypes) IsExt() bool
- func (i LayerTypes) MarshalText() ([]byte, error)
- func (i *LayerTypes) SetInt64(in int64)
- func (i *LayerTypes) SetString(s string) error
- func (i LayerTypes) String() string
- func (i *LayerTypes) UnmarshalText(text []byte) error
- func (i LayerTypes) Values() []enums.Enum
- type LayerValues
- type LearnNeurParams
- type LearnSynParams
- type MatrixParams
- type MatrixPathParams
- type NetIndexes
- func (ctx *NetIndexes) DataIndex(idx uint32) uint32
- func (ctx *NetIndexes) DataIndexIsValid(li uint32) bool
- func (ctx *NetIndexes) ItemIndex(idx uint32) uint32
- func (ctx *NetIndexes) LayerIndexIsValid(li uint32) bool
- func (ctx *NetIndexes) NeurIndexIsValid(ni uint32) bool
- func (ctx *NetIndexes) PoolDataIndexIsValid(pi uint32) bool
- func (ctx *NetIndexes) PoolIndexIsValid(pi uint32) bool
- func (ctx *NetIndexes) SynIndexIsValid(si uint32) bool
- func (ctx *NetIndexes) ValuesIndex(li, di uint32) uint32
- type Network
- func (net *Network) AddACCost(ctx *Context, nCosts, accY, accX int, space float32) (acc, accCT, accPT, accPTp, accMD *Layer)
- func (net *Network) AddAmygdala(prefix string, neg bool, nNeurY, nNeurX int, space float32) (blaPosAcq, blaPosExt, blaNegAcq, blaNegExt, cemPos, cemNeg, blaNov *Layer)
- func (net *Network) AddBGThalLayer2D(name string, nNeurY, nNeurX int) *Layer
- func (net *Network) AddBGThalLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer
- func (net *Network) AddBLALayers(prefix string, pos bool, nUs, nNeurY, nNeurX int, rel relpos.Relations, ...) (acq, ext *Layer)
- func (net *Network) AddCTLayer2D(name string, nNeurY, nNeurX int) *Layer
- func (net *Network) AddCTLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer
- func (nt *Network) AddClampDaLayer(name string) *Layer
- func (net *Network) AddDBG(prefix string, nPoolsY, nPoolsX, nNeurY, nNeurX, gpNeurY, gpNeurX int, ...) (mtxGo, mtxNo, gpePr, gpeAk, stn, gpi, pf *Layer)
- func (net *Network) AddDMatrixLayer(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, da DAModTypes) *Layer
- func (net *Network) AddDrivesLayer(ctx *Context, nNeurY, nNeurX int) *Layer
- func (net *Network) AddDrivesPulvLayer(ctx *Context, nNeurY, nNeurX int, space float32) (drv, drvP *Layer)
- func (net *Network) AddGPeLayer2D(name, class string, nNeurY, nNeurX int) *Layer
- func (net *Network) AddGPeLayer4D(name, class string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer
- func (net *Network) AddGPiLayer2D(name, class string, nNeurY, nNeurX int) *Layer
- func (net *Network) AddGPiLayer4D(name, class string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer
- func (net *Network) AddHip(ctx *Context, hip *HipConfig, space float32) (ec2, ec3, dg, ca3, ca1, ec5 *Layer)
- func (net *Network) AddInputPulv2D(name string, nNeurY, nNeurX int, space float32) (*Layer, *Layer)
- func (net *Network) AddInputPulv4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, space float32) (*Layer, *Layer)
- func (net *Network) AddLDTLayer(prefix string) *Layer
- func (net *Network) AddOFCneg(ctx *Context, nUSs, ofcY, ofcX int, space float32) (ofc, ofcCT, ofcPT, ofcPTp, ofcMD *Layer)
- func (net *Network) AddOFCpos(ctx *Context, nUSs, nY, ofcY, ofcX int, space float32) (ofc, ofcCT, ofcPT, ofcPTp, ofcMD *Layer)
- func (net *Network) AddPFC2D(name, thalSuffix string, nNeurY, nNeurX int, decayOnRew, selfMaint bool, ...) (pfc, pfcCT, pfcPT, pfcPTp, pfcThal *Layer)
- func (net *Network) AddPFC4D(name, thalSuffix string, nPoolsY, nPoolsX, nNeurY, nNeurX int, ...) (pfc, pfcCT, pfcPT, pfcPTp, pfcThal *Layer)
- func (net *Network) AddPTMaintLayer2D(name string, nNeurY, nNeurX int) *Layer
- func (net *Network) AddPTMaintLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer
- func (net *Network) AddPTMaintThalForSuper(super, ct *Layer, thalSuffix, pathClass string, ...) (pt, thal *Layer)
- func (net *Network) AddPTPredLayer(ptMaint, ct *Layer, ptToPredPath, ctToPredPath paths.Pattern, pathClass string, ...) (ptPred *Layer)
- func (net *Network) AddPTPredLayer2D(name string, nNeurY, nNeurX int) *Layer
- func (net *Network) AddPTPredLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer
- func (net *Network) AddPVLayers(nNeurY, nNeurX int, rel relpos.Relations, space float32) (pvPos, pvNeg *Layer)
- func (net *Network) AddPVPulvLayers(nNeurY, nNeurX int, rel relpos.Relations, space float32) (pvPos, pvNeg, pvPosP, pvNegP *Layer)
- func (net *Network) AddPulvForLayer(lay *Layer, space float32) *Layer
- func (net *Network) AddPulvForSuper(super *Layer, space float32) *Layer
- func (net *Network) AddPulvLayer2D(name string, nNeurY, nNeurX int) *Layer
- func (net *Network) AddPulvLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer
- func (nt *Network) AddRWLayers(prefix string, rel relpos.Relations, space float32) (rew, rp, da *Layer)
- func (nt *Network) AddRewLayer(name string) *Layer
- func (net *Network) AddRubicon(ctx *Context, nYneur, popY, popX, bgY, bgX, pfcY, pfcX int, space float32) (...)
- func (net *Network) AddRubiconOFCus(ctx *Context, nYneur, popY, popX, bgY, bgX, ofcY, ofcX int, space float32) (...)
- func (net *Network) AddRubiconPulvLayers(ctx *Context, nYneur, popY, popX int, space float32) (...)
- func (net *Network) AddSCLayer2D(prefix string, nNeurY, nNeurX int) *Layer
- func (net *Network) AddSCLayer4D(prefix string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer
- func (net *Network) AddSTNLayer2D(name, class string, nNeurY, nNeurX int) *Layer
- func (net *Network) AddSTNLayer4D(name, class string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer
- func (net *Network) AddSuperCT2D(name, pathClass string, shapeY, shapeX int, space float32, pat paths.Pattern) (super, ct *Layer)
- func (net *Network) AddSuperCT4D(name, pathClass string, nPoolsY, nPoolsX, nNeurY, nNeurX int, space float32, ...) (super, ct *Layer)
- func (net *Network) AddSuperLayer2D(name string, nNeurY, nNeurX int) *Layer
- func (net *Network) AddSuperLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int) *Layer
- func (nt *Network) AddTDLayers(prefix string, rel relpos.Relations, space float32) (rew, rp, ri, td *Layer)
- func (net *Network) AddUSLayers(popY, popX int, rel relpos.Relations, space float32) (usPos, usNeg, cost, costFinal *Layer)
- func (net *Network) AddUSPulvLayers(popY, popX int, rel relpos.Relations, space float32) (usPos, usNeg, cost, costFinal, usPosP, usNegP, costP *Layer)
- func (net *Network) AddUrgencyLayer(nNeurY, nNeurX int) *Layer
- func (net *Network) AddVBG(prefix string, nPoolsY, nPoolsX, nNeurY, nNeurX, gpNeurY, gpNeurX int, ...) (mtxGo, mtxNo, gpePr, gpeAk, stn, gpi *Layer)
- func (net *Network) AddVMatrixLayer(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, da DAModTypes) *Layer
- func (net *Network) AddVSGatedLayer(prefix string, nYunits int) *Layer
- func (net *Network) AddVSPatchLayers(prefix string, nUs, nNeurY, nNeurX int, space float32) (d1, d2 *Layer)
- func (net *Network) AddVTALHbLDTLayers(rel relpos.Relations, space float32) (vta, lhb, ldt *Layer)
- func (nt *Network) ApplyExts(ctx *Context)
- func (nt *Network) AsAxon() *Network
- func (nt *Network) ClearTargExt(ctx *Context)
- func (nt *Network) CollectDWts(ctx *Context, dwts *[]float32) bool
- func (nt *Network) ConfigGPUnoGUI(ctx *Context)
- func (nt *Network) ConfigGPUwithGUI(ctx *Context)
- func (net *Network) ConfigLoopsHip(ctx *Context, man *looper.Manager, hip *HipConfig, pretrain *bool)
- func (net *Network) ConnectCSToBLApos(cs, blaAcq, blaNov *Layer) (toAcq, toNov, novInhib *Path)
- func (net *Network) ConnectCTSelf(ly *Layer, pat paths.Pattern, pathClass string) (ctxt, maint *Path)
- func (net *Network) ConnectCtxtToCT(send, recv *Layer, pat paths.Pattern) *Path
- func (net *Network) ConnectPTMaintSelf(ly *Layer, pat paths.Pattern, pathClass string) *Path
- func (net *Network) ConnectPTPredSelf(ly *Layer, pat paths.Pattern) *Path
- func (net *Network) ConnectPTToPulv(pt, ptPred, pulv *Layer, toPulvPat, fmPulvPat paths.Pattern, pathClass string) (ptToPulv, ptPredToPulv, toPTPred *Path)
- func (net *Network) ConnectPTpToPulv(ptPred, pulv *Layer, toPulvPat, fmPulvPat paths.Pattern, pathClass string) (ptToPulv, ptPredToPulv, toPTPred *Path)
- func (net *Network) ConnectSuperToCT(send, recv *Layer, pat paths.Pattern, pathClass string) *Path
- func (net *Network) ConnectToBLAAcq(send, recv *Layer, pat paths.Pattern) *Path
- func (net *Network) ConnectToBLAExt(send, recv *Layer, pat paths.Pattern) *Path
- func (net *Network) ConnectToDSMatrix(send, recv *Layer, pat paths.Pattern) *Path
- func (net *Network) ConnectToPFC(lay, layP, pfc, pfcCT, pfcPT, pfcPTp *Layer, pat paths.Pattern, ...)
- func (net *Network) ConnectToPFCBack(lay, layP, pfc, pfcCT, pfcPT, pfcPTp *Layer, pat paths.Pattern, ...)
- func (net *Network) ConnectToPFCBidir(lay, layP, pfc, pfcCT, pfcPT, pfcPTp *Layer, pat paths.Pattern, ...) (ff, fb *Path)
- func (net *Network) ConnectToPulv(super, ct, pulv *Layer, toPulvPat, fmPulvPat paths.Pattern, pathClass string) (toPulv, toSuper, toCT *Path)
- func (nt *Network) ConnectToRWPath(send, recv *Layer, pat paths.Pattern) *Path
- func (net *Network) ConnectToSC(send, recv *Layer, pat paths.Pattern) *Path
- func (net *Network) ConnectToSC1to1(send, recv *Layer) *Path
- func (net *Network) ConnectToVSMatrix(send, recv *Layer, pat paths.Pattern) *Path
- func (net *Network) ConnectToVSPatch(send, vspD1, vspD2 *Layer, pat paths.Pattern) (*Path, *Path)
- func (net *Network) ConnectUSToBLA(us, blaAcq, blaExt *Layer) (toAcq, toExt *Path)
- func (nt *Network) Cycle(ctx *Context)
- func (nt *Network) DWt(ctx *Context)
- func (nt *Network) DecayState(ctx *Context, decay, glong, ahp float32)
- func (nt *Network) DecayStateByClass(ctx *Context, decay, glong, ahp float32, classes ...string)
- func (nt *Network) DecayStateByType(ctx *Context, decay, glong, ahp float32, types ...LayerTypes)
- func (nt *Network) DecayStateLayers(ctx *Context, decay, glong, ahp float32, layers ...string)
- func (nt *Network) Defaults()
- func (nt *Network) InitActs(ctx *Context)
- func (nt *Network) InitExt(ctx *Context)
- func (nt *Network) InitGScale(ctx *Context)
- func (nt *Network) InitName(net emer.Network, name string)
- func (nt *Network) InitTopoSWts()
- func (nt *Network) InitWts(ctx *Context)
- func (nt *Network) LRateMod(mod float32)
- func (nt *Network) LRateSched(sched float32)
- func (nt *Network) LayersSetOff(off bool)
- func (nt *Network) MakeToolbar(p *tree.Plan)
- func (nt *Network) MinusPhase(ctx *Context)
- func (nt *Network) NeuronsSlice(vals *[]float32, nrnVar string, di int)
- func (nt *Network) NewLayer() emer.Layer
- func (nt *Network) NewPath() emer.Path
- func (nt *Network) NewState(ctx *Context)
- func (nt *Network) PlusPhase(ctx *Context)
- func (nt *Network) PlusPhaseStart(ctx *Context)
- func (nt *Network) SetDWts(ctx *Context, dwts []float32, navg int)
- func (nt *Network) SetSubMean(trgAvg, path float32)
- func (nt *Network) SizeReport(detail bool) string
- func (nt *Network) SlowAdapt(ctx *Context)
- func (nt *Network) SpkSt1(ctx *Context)
- func (nt *Network) SpkSt2(ctx *Context)
- func (nt *Network) SynFail(ctx *Context)
- func (nt *Network) SynsSlice(vals *[]float32, synvar SynapseVars)
- func (nt *Network) TargToExt(ctx *Context)
- func (nt *Network) UnLesionNeurons(ctx *Context)
- func (nt *Network) UpdateExtFlags(ctx *Context)
- func (nt *Network) UpdateParams()
- func (nt *Network) WtFromDWt(ctx *Context)
- func (nt *Network) WtsHash() string
- type NetworkBase
- func (nt *NetworkBase) AddLayer(name string, shape []int, typ LayerTypes) *Layer
- func (nt *NetworkBase) AddLayer2D(name string, shapeY, shapeX int, typ LayerTypes) *Layer
- func (nt *NetworkBase) AddLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, typ LayerTypes) *Layer
- func (nt *NetworkBase) AddLayerInit(ly *Layer, name string, shape []int, typ LayerTypes)
- func (nt *NetworkBase) AllGlobalValues(ctrKey string, vals map[string]float32)
- func (nt *NetworkBase) AllGlobals() string
- func (nt *NetworkBase) AllLayerInhibs() string
- func (nt *NetworkBase) AllParams() string
- func (nt *NetworkBase) AllPathScales() string
- func (nt *NetworkBase) ApplyParams(pars *params.Sheet, setMsg bool) (bool, error)
- func (nt *NetworkBase) AxonLayerByName(name string) *Layer
- func (nt *NetworkBase) AxonPathByName(name string) *Path
- func (nt *NetworkBase) BidirConnectLayerNames(low, high string, pat paths.Pattern) (lowlay, highlay *Layer, fwdpj, backpj *Path, err error)
- func (nt *NetworkBase) BidirConnectLayers(low, high *Layer, pat paths.Pattern) (fwdpj, backpj *Path)
- func (nt *NetworkBase) BidirConnectLayersPy(low, high *Layer, pat paths.Pattern)
- func (nt *NetworkBase) Bounds() (min, max math32.Vector3)
- func (nt *NetworkBase) BoundsUpdate()
- func (nt *NetworkBase) Build(simCtx *Context) error
- func (nt *NetworkBase) BuildGlobals(ctx *Context)
- func (nt *NetworkBase) BuildPathGBuf()
- func (nt *NetworkBase) CheckSameSize(on *NetworkBase) error
- func (nt *NetworkBase) ConnectLayerNames(send, recv string, pat paths.Pattern, typ PathTypes) (rlay, slay *Layer, pj *Path, err error)
- func (nt *NetworkBase) ConnectLayers(send, recv *Layer, pat paths.Pattern, typ PathTypes) *Path
- func (nt *NetworkBase) CopyStateFrom(on *NetworkBase) error
- func (nt *NetworkBase) DeleteAll()
- func (nt *NetworkBase) DiffFrom(ctx *Context, on *NetworkBase, maxDiff int) string
- func (nt *NetworkBase) FunTimerStart(fun string)
- func (nt *NetworkBase) FunTimerStop(fun string)
- func (nt *NetworkBase) KeyLayerParams() string
- func (nt *NetworkBase) KeyPathParams() string
- func (nt *NetworkBase) Label() string
- func (nt *NetworkBase) LateralConnectLayer(lay *Layer, pat paths.Pattern) *Path
- func (nt *NetworkBase) LateralConnectLayerPath(lay *Layer, pat paths.Pattern, pj *Path) *Path
- func (nt *NetworkBase) LayByNameTry(name string) (*Layer, error)
- func (nt *NetworkBase) Layer(idx int) emer.Layer
- func (nt *NetworkBase) LayerByName(name string) emer.Layer
- func (nt *NetworkBase) LayerByNameTry(name string) (emer.Layer, error)
- func (nt *NetworkBase) LayerMapPar(fun func(ly *Layer), funame string)
- func (nt *NetworkBase) LayerMapSeq(fun func(ly *Layer), funame string)
- func (nt *NetworkBase) LayerValues(li, di uint32) *LayerValues
- func (nt *NetworkBase) LayersByClass(classes ...string) []string
- func (nt *NetworkBase) LayersByType(layType ...LayerTypes) []string
- func (nt *NetworkBase) Layout()
- func (nt *NetworkBase) MakeLayMap()
- func (nt *NetworkBase) MaxParallelData() int
- func (nt *NetworkBase) NLayers() int
- func (nt *NetworkBase) NParallelData() int
- func (nt *NetworkBase) Name() string
- func (nt *NetworkBase) NeuronMapPar(ctx *Context, fun func(ly *Layer, ni uint32), funame string)
- func (nt *NetworkBase) NeuronMapSeq(ctx *Context, fun func(ly *Layer, ni uint32), funame string)
- func (nt *NetworkBase) NonDefaultParams() string
- func (nt *NetworkBase) OpenWtsCpp(filename core.Filename) error
- func (nt *NetworkBase) OpenWtsJSON(filename core.Filename) error
- func (nt *NetworkBase) ParamsApplied(sel *params.Sel)
- func (nt *NetworkBase) ParamsHistoryReset()
- func (nt *NetworkBase) PathByNameTry(name string) (emer.Path, error)
- func (nt *NetworkBase) PathMapSeq(fun func(pj *Path), funame string)
- func (nt *NetworkBase) ReadWtsCpp(r io.Reader) error
- func (nt *NetworkBase) ReadWtsJSON(r io.Reader) error
- func (nt *NetworkBase) ResetRandSeed()
- func (nt *NetworkBase) SaveAllLayerInhibs(filename core.Filename) error
- func (nt *NetworkBase) SaveAllParams(filename core.Filename) error
- func (nt *NetworkBase) SaveAllPathScales(filename core.Filename) error
- func (nt *NetworkBase) SaveNonDefaultParams(filename core.Filename) error
- func (nt *NetworkBase) SaveParamsSnapshot(pars *netparams.Sets, cfg any, good bool) error
- func (nt *NetworkBase) SaveWtsJSON(filename core.Filename) error
- func (nt *NetworkBase) SetCtxStrides(simCtx *Context)
- func (nt *NetworkBase) SetMaxData(simCtx *Context, maxData int)
- func (nt *NetworkBase) SetNThreads(nthr int)
- func (nt *NetworkBase) SetRandSeed(seed int64)
- func (nt *NetworkBase) SetWts(nw *weights.Network) error
- func (nt *NetworkBase) ShowAllGlobals()
- func (nt *NetworkBase) StdVertLayout()
- func (nt *NetworkBase) SynVarNames() []string
- func (nt *NetworkBase) SynVarProps() map[string]string
- func (nt *NetworkBase) TimerReport()
- func (nt *NetworkBase) UnitVarNames() []string
- func (nt *NetworkBase) UnitVarProps() map[string]string
- func (nt *NetworkBase) VarRange(varNm string) (min, max float32, err error)
- func (nt *NetworkBase) WriteWtsJSON(w io.Writer) error
- type NeuroModParams
- func (nm *NeuroModParams) DAGain(da float32) float32
- func (nm *NeuroModParams) DASign() float32
- func (nm *NeuroModParams) Defaults()
- func (nm *NeuroModParams) GGain(da float32) float32
- func (nm *NeuroModParams) GiFromACh(ach float32) float32
- func (nm *NeuroModParams) IsBLAExt() bool
- func (nm *NeuroModParams) LRMod(da, ach float32) float32
- func (nm *NeuroModParams) LRModFact(pct, val float32) float32
- func (nm *NeuroModParams) ShouldDisplay(field string) bool
- func (nm *NeuroModParams) Update()
- type NeuronAvgVarStrides
- type NeuronAvgVars
- func (i NeuronAvgVars) Desc() string
- func (i NeuronAvgVars) Int64() int64
- func (i NeuronAvgVars) MarshalText() ([]byte, error)
- func (i *NeuronAvgVars) SetInt64(in int64)
- func (i *NeuronAvgVars) SetString(s string) error
- func (i NeuronAvgVars) String() string
- func (i *NeuronAvgVars) UnmarshalText(text []byte) error
- func (i NeuronAvgVars) Values() []enums.Enum
- type NeuronFlags
- func (i NeuronFlags) Desc() string
- func (i NeuronFlags) Int64() int64
- func (i NeuronFlags) MarshalText() ([]byte, error)
- func (i *NeuronFlags) SetInt64(in int64)
- func (i *NeuronFlags) SetString(s string) error
- func (i NeuronFlags) String() string
- func (i *NeuronFlags) UnmarshalText(text []byte) error
- func (i NeuronFlags) Values() []enums.Enum
- type NeuronIndexStrides
- type NeuronIndexes
- func (i NeuronIndexes) Desc() string
- func (i NeuronIndexes) Int64() int64
- func (i NeuronIndexes) MarshalText() ([]byte, error)
- func (i *NeuronIndexes) SetInt64(in int64)
- func (i *NeuronIndexes) SetString(s string) error
- func (i NeuronIndexes) String() string
- func (i *NeuronIndexes) UnmarshalText(text []byte) error
- func (i NeuronIndexes) Values() []enums.Enum
- type NeuronVarStrides
- type NeuronVars
- func (i NeuronVars) Desc() string
- func (i NeuronVars) Int64() int64
- func (i NeuronVars) MarshalText() ([]byte, error)
- func (i *NeuronVars) SetInt64(in int64)
- func (i *NeuronVars) SetString(s string) error
- func (i NeuronVars) String() string
- func (i *NeuronVars) UnmarshalText(text []byte) error
- func (i NeuronVars) Values() []enums.Enum
- type Path
- func (pj *Path) AddClass(cls ...string) emer.Path
- func (pj *Path) AllParams() string
- func (pj *Path) AsAxon() *Path
- func (pj *Path) Class() string
- func (pj *Path) DWt(ctx *Context, si uint32)
- func (pj *Path) DWtSubMean(ctx *Context, ri uint32)
- func (pj *Path) Defaults()
- func (pj *Path) InitGBuffs()
- func (pj *Path) InitWtSym(ctx *Context, rpj *Path)
- func (pj *Path) InitWts(ctx *Context, nt *Network)
- func (pj *Path) InitWtsSyn(ctx *Context, syni uint32, rnd randx.Rand, mean, spct float32)
- func (pj *Path) LRateMod(mod float32)
- func (pj *Path) LRateSched(sched float32)
- func (pj *Path) Object() any
- func (pj *Path) PathType() PathTypes
- func (pj *Path) ReadWtsJSON(r io.Reader) error
- func (pj *Path) SWtFromWt(ctx *Context)
- func (pj *Path) SWtRescale(ctx *Context)
- func (pj *Path) SendSpike(ctx *Context, ni, di, maxData uint32)
- func (pj *Path) SetParam(path, val string) error
- func (pj *Path) SetSWtsFunc(ctx *Context, swtFun func(si, ri int, send, recv *tensor.Shape) float32)
- func (pj *Path) SetSWtsRPool(ctx *Context, swts tensor.Tensor)
- func (pj *Path) SetSynValue(varNm string, sidx, ridx int, val float32) error
- func (pj *Path) SetWts(pw *weights.Path) error
- func (pj *Path) SetWtsFunc(ctx *Context, wtFun func(si, ri int, send, recv *tensor.Shape) float32)
- func (pj *Path) SlowAdapt(ctx *Context)
- func (pj *Path) SynFail(ctx *Context)
- func (pj *Path) SynScale(ctx *Context)
- func (pj *Path) Update()
- func (pj *Path) UpdateParams()
- func (pj *Path) WriteWtsJSON(w io.Writer, depth int)
- func (pj *Path) WtFromDWt(ctx *Context, ni uint32)
- type PathBase
- func (pj *PathBase) ApplyDefParams()
- func (pj *PathBase) ApplyParams(pars *params.Sheet, setMsg bool) (bool, error)
- func (pj *PathBase) Build() error
- func (pj *PathBase) Class() string
- func (pj *PathBase) Connect(slay, rlay *Layer, pat paths.Pattern, typ PathTypes)
- func (pj *PathBase) Init(path emer.Path)
- func (pj *PathBase) IsOff() bool
- func (pj *PathBase) Label() string
- func (pj *PathBase) Name() string
- func (pj *PathBase) NonDefaultParams() string
- func (pj *PathBase) ParamsApplied(sel *params.Sel)
- func (pj *PathBase) ParamsHistoryReset()
- func (pj *PathBase) PathTypeName() string
- func (pj *PathBase) Pattern() paths.Pattern
- func (pj *PathBase) RecvLay() emer.Layer
- func (pj *PathBase) RecvSynIndexes(ri uint32) []uint32
- func (pj *PathBase) SendLay() emer.Layer
- func (pj *PathBase) SetConStartN(con *[]StartN, avgmax *minmax.AvgMax32, tn *tensor.Int32) uint32
- func (pj *PathBase) SetOff(off bool)
- func (pj *PathBase) SetPattern(pat paths.Pattern) emer.Path
- func (pj *PathBase) SetType(typ emer.PathType) emer.Path
- func (pj *PathBase) String() string
- func (pj *PathBase) Syn1DNum() int
- func (pj *PathBase) SynIndex(sidx, ridx int) int
- func (pj *PathBase) SynVal1D(varIndex int, synIndex int) float32
- func (pj *PathBase) SynVal1DDi(varIndex int, synIndex int, di int) float32
- func (pj *PathBase) SynValDi(varNm string, sidx, ridx int, di int) float32
- func (pj *PathBase) SynValue(varNm string, sidx, ridx int) float32
- func (pj *PathBase) SynValues(vals *[]float32, varNm string) error
- func (pj *PathBase) SynVarIndex(varNm string) (int, error)
- func (pj *PathBase) SynVarNames() []string
- func (pj *PathBase) SynVarNum() int
- func (pj *PathBase) SynVarProps() map[string]string
- func (pj *PathBase) Type() emer.PathType
- func (pj *PathBase) TypeName() string
- func (pj *PathBase) Validate(logmsg bool) error
- type PathGTypes
- func (i PathGTypes) Desc() string
- func (i PathGTypes) Int64() int64
- func (i PathGTypes) MarshalText() ([]byte, error)
- func (i *PathGTypes) SetInt64(in int64)
- func (i *PathGTypes) SetString(s string) error
- func (i PathGTypes) String() string
- func (i *PathGTypes) UnmarshalText(text []byte) error
- func (i PathGTypes) Values() []enums.Enum
- type PathIndexes
- type PathParams
- func (pj *PathParams) AllParams() string
- func (pj *PathParams) BLADefaults()
- func (pj *PathParams) CTCtxtPathDefaults()
- func (pj *PathParams) DWtFromDiDWtSyn(ctx *Context, syni uint32)
- func (pj *PathParams) DWtSyn(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool, isTarget bool)
- func (pj *PathParams) DWtSynBLA(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool)
- func (pj *PathParams) DWtSynCortex(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool, isTarget bool)
- func (pj *PathParams) DWtSynDSMatrix(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool)
- func (pj *PathParams) DWtSynHebb(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool)
- func (pj *PathParams) DWtSynHip(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool, isTarget bool)
- func (pj *PathParams) DWtSynRWPred(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool)
- func (pj *PathParams) DWtSynTDPred(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool)
- func (pj *PathParams) DWtSynVSMatrix(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool)
- func (pj *PathParams) DWtSynVSPatch(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool)
- func (pj *PathParams) Defaults()
- func (pj *PathParams) GatherSpikes(ctx *Context, ly *LayerParams, ni, di uint32, gRaw float32, gSyn *float32)
- func (pj *PathParams) HipDefaults()
- func (pj *PathParams) IsExcitatory() bool
- func (pj *PathParams) IsInhib() bool
- func (pj *PathParams) MatrixDefaults()
- func (pj *PathParams) RLPredDefaults()
- func (pj *PathParams) SetFixedWts()
- func (pj *PathParams) ShouldDisplay(field string) bool
- func (pj *PathParams) SynCa(ctx *Context, si, ri, di uint32, syCaP, syCaD *float32)
- func (pj *PathParams) SynRecvLayIndex(ctx *Context, syni uint32) uint32
- func (pj *PathParams) SynSendLayIndex(ctx *Context, syni uint32) uint32
- func (pj *PathParams) Update()
- func (pj *PathParams) VSPatchDefaults()
- func (pj *PathParams) WtFromDWtSyn(ctx *Context, syni uint32)
- func (pj *PathParams) WtFromDWtSynCortex(ctx *Context, syni uint32)
- func (pj *PathParams) WtFromDWtSynNoLimits(ctx *Context, syni uint32)
- type PathScaleParams
- type PathTypes
- func (i PathTypes) Desc() string
- func (i PathTypes) Int64() int64
- func (i PathTypes) MarshalText() ([]byte, error)
- func (i *PathTypes) SetInt64(in int64)
- func (i *PathTypes) SetString(s string) error
- func (i PathTypes) String() string
- func (i *PathTypes) UnmarshalText(text []byte) error
- func (i PathTypes) Values() []enums.Enum
- type Pool
- type PoolAvgMax
- type PopCodeParams
- func (pc *PopCodeParams) ClipValue(val float32) float32
- func (pc *PopCodeParams) Decode(acts []float32) float32
- func (pc *PopCodeParams) Defaults()
- func (pc *PopCodeParams) EncodeGe(i, n uint32, val float32) float32
- func (pc *PopCodeParams) EncodeValue(i, n uint32, val float32) float32
- func (pc *PopCodeParams) ProjectParam(minParam, maxParam, clipVal float32) float32
- func (pc *PopCodeParams) SetRange(min, max, minSigma, maxSigma float32)
- func (pc *PopCodeParams) ShouldDisplay(field string) bool
- func (pc *PopCodeParams) Uncertainty(val float32, acts []float32) float32
- func (pc *PopCodeParams) Update()
- type PulvParams
- type PushOff
- type RLPredPathParams
- type RLRateParams
- type RWDaParams
- type RWPredParams
- type RandFunIndex
- type Rubicon
- func (rp *Rubicon) AddTimeEffort(ctx *Context, di uint32, effort float32)
- func (rp *Rubicon) DAFromPVs(pvPos, pvNeg, vsPatchPos, vsPatchPosSum float32) (burst, dip, da, rew float32)
- func (rp *Rubicon) DecodeFromLayer(ctx *Context, di uint32, net *Network, layName string) (val, vr float32, err error)
- func (rp *Rubicon) DecodePVEsts(ctx *Context, di uint32, net *Network)
- func (rp *Rubicon) Defaults()
- func (rp *Rubicon) DriveUpdate(ctx *Context, di uint32)
- func (rp *Rubicon) EffortUrgencyUpdate(ctx *Context, di uint32, effort float32)
- func (rp *Rubicon) GiveUpOnGoal(ctx *Context, di uint32, rnd randx.Rand) bool
- func (rp *Rubicon) HasPosUS(ctx *Context, di uint32) bool
- func (rp *Rubicon) InitDrives(ctx *Context, di uint32)
- func (rp *Rubicon) InitUS(ctx *Context, di uint32)
- func (rp *Rubicon) NewState(ctx *Context, di uint32, rnd randx.Rand)
- func (rp *Rubicon) PVDA(ctx *Context, di uint32, rnd randx.Rand)
- func (rp *Rubicon) PVcostEstFromCosts(costs []float32) (pvCostSum, pvNeg float32)
- func (rp *Rubicon) PVneg(ctx *Context, di uint32) (pvNegSum, pvNeg float32)
- func (rp *Rubicon) PVnegEstFromUSs(uss []float32) (pvNegSum, pvNeg float32)
- func (rp *Rubicon) PVpos(ctx *Context, di uint32) (pvPosSum, pvPos float32)
- func (rp *Rubicon) PVposEstFromUSs(ctx *Context, di uint32, uss []float32) (pvPosSum, pvPos float32)
- func (rp *Rubicon) PVposEstFromUSsDrives(uss, drives []float32) (pvPosSum, pvPos float32)
- func (rp *Rubicon) PVsFromUSs(ctx *Context, di uint32)
- func (rp *Rubicon) Reset(ctx *Context, di uint32)
- func (rp *Rubicon) ResetGiveUp(ctx *Context, di uint32)
- func (rp *Rubicon) ResetGoalState(ctx *Context, di uint32)
- func (rp *Rubicon) SetDrive(ctx *Context, di uint32, dr uint32, val float32)
- func (rp *Rubicon) SetDrives(ctx *Context, di uint32, curiosity float32, drives ...float32)
- func (rp *Rubicon) SetGoalDistEst(ctx *Context, di uint32, dist float32)
- func (rp *Rubicon) SetGoalMaintFromLayer(ctx *Context, di uint32, net *Network, layName string, maxAct float32) error
- func (rp *Rubicon) SetNUSs(ctx *Context, nPos, nNeg int)
- func (rp *Rubicon) SetUS(ctx *Context, di uint32, valence ValenceTypes, usIndex int, magnitude float32)
- func (rp *Rubicon) Step(ctx *Context, di uint32, rnd randx.Rand)
- func (rp *Rubicon) TimeEffortReset(ctx *Context, di uint32)
- func (rp *Rubicon) USnegIndex(simUsIndex int) int
- func (rp *Rubicon) USposIndex(simUsIndex int) int
- func (rp *Rubicon) Update()
- func (rp *Rubicon) VSPatchNewState(ctx *Context, di uint32)
- type SMaintParams
- type SWtAdaptParams
- type SWtInitParams
- type SWtParams
- func (sp *SWtParams) ClipSWt(swt float32) float32
- func (sp *SWtParams) ClipWt(wt float32) float32
- func (sp *SWtParams) Defaults()
- func (sp *SWtParams) InitWtsSyn(ctx *Context, syni uint32, rnd randx.Rand, mean, spct float32)
- func (sp *SWtParams) LWtFromWts(wt, swt float32) float32
- func (sp *SWtParams) LinFromSigWt(wt float32) float32
- func (sp *SWtParams) SigFromLinWt(lw float32) float32
- func (sp *SWtParams) Update()
- func (sp *SWtParams) WtFromDWt(wt, lwt *float32, dwt, swt float32)
- func (sp *SWtParams) WtValue(swt, lwt float32) float32
- type SpikeNoiseParams
- type SpikeParams
- func (sk *SpikeParams) ActFromISI(isi, timeInc, integ float32) float32
- func (sk *SpikeParams) ActToISI(act, timeInc, integ float32) float32
- func (sk *SpikeParams) AvgFromISI(avg float32, isi float32) float32
- func (sk *SpikeParams) Defaults()
- func (sk *SpikeParams) ShouldDisplay(field string) bool
- func (sk *SpikeParams) Update()
- type StartN
- type SynComParams
- func (sc *SynComParams) Defaults()
- func (sc *SynComParams) Fail(ctx *Context, syni uint32, swt float32)
- func (sc *SynComParams) FloatFromGBuf(ival int32) float32
- func (sc *SynComParams) FloatToGBuf(val float32) int32
- func (sc *SynComParams) FloatToIntFactor() float32
- func (sc *SynComParams) ReadIndex(rnIndex, di uint32, cycTot int32, nRecvNeurs, maxData uint32) uint32
- func (sc *SynComParams) ReadOff(cycTot int32) uint32
- func (sc *SynComParams) RingIndex(i uint32) uint32
- func (sc *SynComParams) Update()
- func (sc *SynComParams) WriteIndex(rnIndex, di uint32, cycTot int32, nRecvNeurs, maxData uint32) uint32
- func (sc *SynComParams) WriteIndexOff(rnIndex, di, wrOff uint32, nRecvNeurs, maxData uint32) uint32
- func (sc *SynComParams) WriteOff(cycTot int32) uint32
- func (sc *SynComParams) WtFail(ctx *Context, swt float32) bool
- func (sc *SynComParams) WtFailP(swt float32) float32
- type SynapseCaStrides
- type SynapseCaVars
- func (i SynapseCaVars) Desc() string
- func (i SynapseCaVars) Int64() int64
- func (i SynapseCaVars) MarshalText() ([]byte, error)
- func (i *SynapseCaVars) SetInt64(in int64)
- func (i *SynapseCaVars) SetString(s string) error
- func (i SynapseCaVars) String() string
- func (i *SynapseCaVars) UnmarshalText(text []byte) error
- func (i SynapseCaVars) Values() []enums.Enum
- type SynapseIndexStrides
- type SynapseIndexes
- func (i SynapseIndexes) Desc() string
- func (i SynapseIndexes) Int64() int64
- func (i SynapseIndexes) MarshalText() ([]byte, error)
- func (i *SynapseIndexes) SetInt64(in int64)
- func (i *SynapseIndexes) SetString(s string) error
- func (i SynapseIndexes) String() string
- func (i *SynapseIndexes) UnmarshalText(text []byte) error
- func (i SynapseIndexes) Values() []enums.Enum
- type SynapseVarStrides
- type SynapseVars
- func (i SynapseVars) Desc() string
- func (i SynapseVars) Int64() int64
- func (i SynapseVars) MarshalText() ([]byte, error)
- func (i *SynapseVars) SetInt64(in int64)
- func (i *SynapseVars) SetString(s string) error
- func (i SynapseVars) String() string
- func (i *SynapseVars) UnmarshalText(text []byte) error
- func (i SynapseVars) Values() []enums.Enum
- type TDDaParams
- type TDIntegParams
- type TopoInhibParams
- type TraceParams
- type TrgAvgActParams
- type USParams
- func (us *USParams) Alloc(nPos, nNeg, nCost int)
- func (us *USParams) CostToZero(ctx *Context, di uint32)
- func (us *USParams) Defaults()
- func (us *USParams) USnegCostFromRaw(ctx *Context, di uint32)
- func (us *USParams) USnegToZero(ctx *Context, di uint32)
- func (us *USParams) USposToZero(ctx *Context, di uint32)
- func (us *USParams) Update()
- type UrgencyParams
- func (ur *UrgencyParams) AddEffort(ctx *Context, di uint32, inc float32)
- func (ur *UrgencyParams) Defaults()
- func (ur *UrgencyParams) Reset(ctx *Context, di uint32)
- func (ur *UrgencyParams) Update()
- func (ur *UrgencyParams) Urge(ctx *Context, di uint32) float32
- func (ur *UrgencyParams) UrgeFun(urgency float32) float32
- type VTAParams
- type ValenceTypes
- func (i ValenceTypes) Desc() string
- func (i ValenceTypes) Int64() int64
- func (i ValenceTypes) MarshalText() ([]byte, error)
- func (i *ValenceTypes) SetInt64(in int64)
- func (i *ValenceTypes) SetString(s string) error
- func (i ValenceTypes) String() string
- func (i *ValenceTypes) UnmarshalText(text []byte) error
- func (i ValenceTypes) Values() []enums.Enum
Constants ¶
const CyclesN = 10
CyclesN is the number of cycles to run as a group for ra25: 10 = ~50 msec / trial, 25 = ~48, all 150 / 50 minus / plus = ~44. 10 is good enough and unlikely to mess with anything else.
Variables ¶
var ( AvgMaxFloatFromIntErr func() AvgMaxFloatFromIntErrMu sync.Mutex )
AvgMaxFloatFromIntErr is called when there is an overflow error in AvgMaxI32 FloatFromInt
var ( NeuronVarNames []string NeuronVarsMap map[string]int )
var ( NeuronLayerVars = []string{"DA", "ACh", "NE", "Ser", "Gated"} NNeuronLayerVars = len(NeuronLayerVars) )
NeuronLayerVars are layer-level variables displayed as neuron layers.
var ( SynapseVarNames []string SynapseVarsMap map[string]int )
var NeuronVarProps = map[string]string{
"Spike": `desc:"whether neuron has spiked or not on this cycle (0 or 1)"`,
"Spiked": `desc:"1 if neuron has spiked within the last 10 cycles (msecs), corresponding to a nominal max spiking rate of 100 Hz, 0 otherwise -- useful for visualization and computing activity levels in terms of average spiked levels."`,
"Act": `desc:"rate-coded activation value reflecting instantaneous estimated rate of spiking, based on 1 / ISIAvg. This drives feedback inhibition in the FFFB function (todo: this will change when better inhibition is implemented), and is integrated over time for ActInt which is then used for performance statistics and layer average activations, etc. Should not be used for learning or other computations."`,
"ActInt": `desc:"integrated running-average activation value computed from Act with time constant Act.Dt.IntTau, to produce a longer-term integrated value reflecting the overall activation state across the ThetaCycle time scale, as the overall response of network to current input state -- this is copied to ActM and ActP at the ends of the minus and plus phases, respectively, and used in computing performance-level statistics (which are typically based on ActM). Should not be used for learning or other computations."`,
"ActM": `desc:"ActInt activation state at end of third quarter, representing the posterior-cortical minus phase activation -- used for statistics and monitoring network performance. Should not be used for learning or other computations."`,
"ActP": `desc:"ActInt activation state at end of fourth quarter, representing the posterior-cortical plus_phase activation -- used for statistics and monitoring network performance. Should not be used for learning or other computations."`,
"Ext": `desc:"external input: drives activation of unit from outside influences (e.g., sensory input)"`,
"Target": `desc:"target value: drives learning to produce this activation value"`,
"Ge": `range:"2" desc:"total excitatory conductance, including all forms of excitation (e.g., NMDA) -- does *not* include Gbar.E"`,
"Gi": `auto-scale:"+" desc:"total inhibitory synaptic conductance -- the net inhibitory input to the neuron -- does *not* include Gbar.I"`,
"Gk": `auto-scale:"+" desc:"total potassium conductance, typically reflecting sodium-gated potassium currents involved in adaptation effects -- does *not* include Gbar.K"`,
"Inet": `desc:"net current produced by all channels -- drives update of Vm"`,
"Vm": `min:"0" max:"1" desc:"membrane potential -- integrates Inet current over time"`,
"VmDend": `min:"0" max:"1" desc:"dendritic membrane potential -- has a slower time constant, is not subject to the VmR reset after spiking"`,
"ISI": `auto-scale:"+" desc:"current inter-spike-interval -- counts up since last spike. Starts at -1 when initialized."`,
"ISIAvg": `auto-scale:"+" desc:"average inter-spike-interval -- average time interval between spikes, integrated with ISITau rate constant (relatively fast) to capture something close to an instantaneous spiking rate. Starts at -1 when initialized, and goes to -2 after first spike, and is only valid after the second spike post-initialization."`,
"CaSpkM": `desc:"spike-driven calcium trace used as a neuron-level proxy for synpatic credit assignment factor based on continuous time-integrated spiking: exponential integration of SpikeG * Spike at MTau time constant (typically 5). Simulates a calmodulin (CaM) like signal at the most abstract level."`,
"CaSpkP": `desc:"continuous cascaded integration of CaSpkM at PTau time constant (typically 40), representing neuron-level purely spiking version of plus, LTP direction of weight change and capturing the function of CaMKII in the Kinase learning rule. Used for specialized learning and computational functions, statistics, instead of Act."`,
"CaSpkD": `desc:"continuous cascaded integration CaSpkP at DTau time constant (typically 40), representing neuron-level purely spiking version of minus, LTD direction of weight change and capturing the function of DAPK1 in the Kinase learning rule. Used for specialized learning and computational functions, statistics, instead of Act."`,
"CaSpkPM": `desc:"minus-phase snapshot of the CaSpkP value -- similar to ActM but using a more directly spike-integrated value."`,
"CaLrn": `desc:"recv neuron calcium signal used to drive temporal error difference component of standard learning rule, combining NMDA (NmdaCa) and spiking-driven VGCC (VgccCaInt) calcium sources (vs. CaSpk* which only reflects spiking component). This is integrated into CaM, CaP, CaD, and temporal derivative is CaP - CaD (CaMKII - DAPK1). This approximates the backprop error derivative on net input, but VGCC component adds a proportion of recv activation delta as well -- a balance of both works best. The synaptic-level trace multiplier provides the credit assignment factor, reflecting coincident activity and potentially integrated over longer multi-trial timescales."`,
"NrnCaM": `desc:"integrated CaLrn at MTau timescale (typically 5), simulating a calmodulin (CaM) like signal, which then drives CaP, CaD for delta signal driving error-driven learning."`,
"NrnCaP": `desc:"cascaded integration of CaM at PTau time constant (typically 40), representing the plus, LTP direction of weight change and capturing the function of CaMKII in the Kinase learning rule."`,
"NrnCaD": `desc:"cascaded integratoin of CaP at DTau time constant (typically 40), representing the minus, LTD direction of weight change and capturing the function of DAPK1 in the Kinase learning rule."`,
"CaDiff": `desc:"difference between CaP - CaD -- this is the error signal that drives error-driven learning."`,
"RLRate": `auto-scale:"+" desc:"recv-unit based learning rate multiplier, reflecting the sigmoid derivative computed from the CaSpkD of recv unit, and the normalized difference CaSpkP - CaSpkD / MAX(CaSpkP - CaSpkD)."`,
"Attn": `desc:"Attentional modulation factor, which can be set by special layers such as the TRC -- multiplies Ge"`,
"SpkMaxCa": `desc:"Ca integrated like CaSpkP but only starting at MaxCycStart cycle, to prevent inclusion of carryover spiking from prior theta cycle trial -- the PTau time constant otherwise results in significant carryover. This is the input to SpkMax"`,
"SpkMax": `desc:"maximum CaSpkP across one theta cycle time window (max of SpkMaxCa) -- used for specialized algorithms that have more phasic behavior within a single trial, e.g., BG Matrix layer gating. Also useful for visualization of peak activity of neurons."`,
"SpkBin0": `min:"0" max:"10" desc:"aggregated spikes within 8 bins across the theta cycle, for computing synaptic calcium efficiently."`,
"SpkBin1": `min:"0" max:"10" desc:"aggregated spikes within 8 bins across the theta cycle, for computing synaptic calcium efficiently."`,
"SpkBin2": `min:"0" max:"10" desc:"aggregated spikes within 8 bins across the theta cycle, for computing synaptic calcium efficiently."`,
"SpkBin3": `min:"0" max:"10" desc:"aggregated spikes within 8 bins across the theta cycle, for computing synaptic calcium efficiently."`,
"SpkBin4": `min:"0" max:"10" desc:"aggregated spikes within 8 bins across the theta cycle, for computing synaptic calcium efficiently."`,
"SpkBin5": `min:"0" max:"10" desc:"aggregated spikes within 8 bins across the theta cycle, for computing synaptic calcium efficiently."`,
"SpkBin6": `min:"0" max:"10" desc:"aggregated spikes within 8 bins across the theta cycle, for computing synaptic calcium efficiently."`,
"SpkBin7": `min:"0" max:"10" desc:"aggregated spikes within 8 bins across the theta cycle, for computing synaptic calcium efficiently."`,
"SpkPrv": `desc:"final CaSpkD activation state at end of previous theta cycle. used for specialized learning mechanisms that operate on delayed sending activations."`,
"SpkSt1": `desc:"the activation state at specific time point within current state processing window (e.g., 50 msec for beta cycle within standard theta cycle), as saved by SpkSt1() function. Used for example in hippocampus for CA3, CA1 learning"`,
"SpkSt2": `desc:"the activation state at specific time point within current state processing window (e.g., 100 msec for beta cycle within standard theta cycle), as saved by SpkSt2() function. Used for example in hippocampus for CA3, CA1 learning"`,
"DASign": `desc:"sign of dopamine-based learning effects for this neuron -- 1 = D1, -1 = D2"`,
"GeNoiseP": `desc:"accumulating poisson probability factor for driving excitatory noise spiking -- multiply times uniform random deviate at each time step, until it gets below the target threshold based on lambda."`,
"GeNoise": `desc:"integrated noise excitatory conductance, added into Ge"`,
"GiNoiseP": `desc:"accumulating poisson probability factor for driving inhibitory noise spiking -- multiply times uniform random deviate at each time step, until it gets below the target threshold based on lambda."`,
"GiNoise": `desc:"integrated noise inhibotyr conductance, added into Gi"`,
"GeExt": `desc:"extra excitatory conductance added to Ge -- from Ext input, GeCtxt etc"`,
"GeRaw": `desc:"raw excitatory conductance (net input) received from senders = current raw spiking drive"`,
"GeSyn": `range:"2" desc:"time-integrated total excitatory synaptic conductance, with an instantaneous rise time from each spike (in GeRaw) and exponential decay with Dt.GeTau, aggregated over pathways -- does *not* include Gbar.E"`,
"GiRaw": `desc:"raw inhibitory conductance (net input) received from senders = current raw spiking drive"`,
"GiSyn": `desc:"time-integrated total inhibitory synaptic conductance, with an instantaneous rise time from each spike (in GiRaw) and exponential decay with Dt.GiTau, aggregated over pathways -- does *not* include Gbar.I. This is added with computed FFFB inhibition to get the full inhibition in Gi"`,
"SMaintP": `desc:"accumulating poisson probability factor for driving self-maintenance by simulating a population of mutually interconnected neurons. multiply times uniform random deviate at each time step, until it gets below the target threshold based on poisson lambda based on accumulating self maint factor"`,
"GeInt": `range:"2" desc:"integrated running-average activation value computed from Ge with time constant Act.Dt.IntTau, to produce a longer-term integrated value reflecting the overall Ge level across the ThetaCycle time scale (Ge itself fluctuates considerably) -- useful for stats to set strength of connections etc to get neurons into right range of overall excitatory drive"`,
"GeIntNorm": `range:"1" desc:"GeIntNorm is normalized GeInt value (divided by the layer maximum) -- this is used for learning in layers that require learning on subthreshold activity."`,
"GiInt": `range:"2" desc:"integrated running-average activation value computed from GiSyn with time constant Act.Dt.IntTau, to produce a longer-term integrated value reflecting the overall synaptic Gi level across the ThetaCycle time scale (Gi itself fluctuates considerably) -- useful for stats to set strength of connections etc to get neurons into right range of overall inhibitory drive"`,
"GModRaw": `desc:"raw modulatory conductance, received from GType = ModulatoryG pathways"`,
"GModSyn": `desc:"syn integrated modulatory conductance, received from GType = ModulatoryG pathways"`,
"GMaintRaw": `desc:"raw maintenance conductance, received from GType = MaintG pathways"`,
"GMaintSyn": `desc:"syn integrated maintenance conductance, integrated using MaintNMDA params."`,
"SSGi": `auto-scale:"+" desc:"SST+ somatostatin positive slow spiking inhibition"`,
"SSGiDend": `auto-scale:"+" desc:"amount of SST+ somatostatin positive slow spiking inhibition applied to dendritic Vm (VmDend)"`,
"Gak": `auto-scale:"+" desc:"conductance of A-type K potassium channels"`,
"MahpN": `auto-scale:"+" desc:"accumulating voltage-gated gating value for the medium time scale AHP"`,
"SahpCa": `desc:"slowly accumulating calcium value that drives the slow AHP"`,
"SahpN": `desc:"sAHP gating value"`,
"GknaMed": `auto-scale:"+" desc:"conductance of sodium-gated potassium channel (KNa) medium dynamics (Slick) -- produces accommodation / adaptation of firing"`,
"GknaSlow": `auto-scale:"+" desc:"conductance of sodium-gated potassium channel (KNa) slow dynamics (Slack) -- produces accommodation / adaptation of firing"`,
"KirM": `desc:"the Kir gating value"`,
"Gkir": `desc:"the conductance of the potassium (K) inwardly rectifying channel, which is strongest at low membrane potentials. Can be modulated by DA."`,
"GnmdaSyn": `auto-scale:"+" desc:"integrated NMDA recv synaptic current -- adds GeRaw and decays with time constant"`,
"Gnmda": `auto-scale:"+" desc:"net postsynaptic (recv) NMDA conductance, after Mg V-gating and Gbar -- added directly to Ge as it has the same reversal potential"`,
"GnmdaMaint": `auto-scale:"+" desc:"net postsynaptic maintenance NMDA conductance, computed from GMaintSyn and GMaintRaw, after Mg V-gating and Gbar -- added directly to Ge as it has the same reversal potential"`,
"GnmdaLrn": `auto-scale:"+" desc:"learning version of integrated NMDA recv synaptic current -- adds GeRaw and decays with time constant -- drives NmdaCa that then drives CaM for learning"`,
"NmdaCa": `auto-scale:"+" desc:"NMDA calcium computed from GnmdaLrn, drives learning via CaM"`,
"GgabaB": `auto-scale:"+" desc:"net GABA-B conductance, after Vm gating and Gbar + Gbase -- applies to Gk, not Gi, for GIRK, with .1 reversal potential."`,
"GABAB": `auto-scale:"+" desc:"GABA-B / GIRK activation -- time-integrated value with rise and decay time constants"`,
"GABABx": `auto-scale:"+" desc:"GABA-B / GIRK internal drive variable -- gets the raw activation and decays"`,
"Gvgcc": `auto-scale:"+" desc:"conductance (via Ca) for VGCC voltage gated calcium channels"`,
"VgccM": `desc:"activation gate of VGCC channels"`,
"VgccH": `desc:"inactivation gate of VGCC channels"`,
"VgccCa": `auto-scale:"+" desc:"instantaneous VGCC calcium flux -- can be driven by spiking or directly from Gvgcc"`,
"VgccCaInt": `auto-scale:"+" desc:"time-integrated VGCC calcium flux -- this is actually what drives learning"`,
"SKCaIn": `desc:"intracellular calcium store level, available to be released with spiking as SKCaR, which can bind to SKCa receptors and drive K current. replenishment is a function of spiking activity being below a threshold"`,
"SKCaR": `desc:"released amount of intracellular calcium, from SKCaIn, as a function of spiking events. this can bind to SKCa channels and drive K currents."`,
"SKCaM": `desc:"Calcium-gated potassium channel gating factor, driven by SKCaR via a Hill equation as in chans.SKPCaParams."`,
"Gsk": `desc:"Calcium-gated potassium channel conductance as a function of Gbar * SKCaM."`,
"Burst": `desc:"5IB bursting activation value, computed by thresholding regular CaSpkP value in Super superficial layers"`,
"BurstPrv": `desc:"previous Burst bursting activation from prior time step -- used for context-based learning"`,
"CtxtGe": `desc:"context (temporally delayed) excitatory conductance, driven by deep bursting at end of the plus phase, for CT layers."`,
"CtxtGeRawa": `desc:"raw update of context (temporally delayed) excitatory conductance, driven by deep bursting at end of the plus phase, for CT layers."`,
"CtxtGeOrig": `desc:"original CtxtGe value prior to any decay factor -- updates at end of plus phase."`,
"NrnFlags": `display:"-" desc:"bit flags for external input and other neuron status state"`,
"ActAvg": `desc:"average activation (of minus phase activation state) over long time intervals (time constant = Dt.LongAvgTau) -- useful for finding hog units and seeing overall distribution of activation"`,
"AvgPct": `range:"2" desc:"ActAvg as a proportion of overall layer activation -- this is used for synaptic scaling to match TrgAvg activation -- updated at SlowInterval intervals"`,
"TrgAvg": `range:"2" desc:"neuron's target average activation as a proportion of overall layer activation, assigned during weight initialization, driving synaptic scaling relative to AvgPct"`,
"DTrgAvg": `auto-scale:"+" desc:"change in neuron's target average activation as a result of unit-wise error gradient -- acts like a bias weight. MPI needs to share these across processors."`,
"AvgDif": `desc:"AvgPct - TrgAvg -- i.e., the error in overall activity level relative to set point for this neuron, which drives synaptic scaling -- updated at SlowInterval intervals"`,
"GeBase": `desc:"baseline level of Ge, added to GeRaw, for intrinsic excitability"`,
"GiBase": `desc:"baseline level of Gi, added to GiRaw, for intrinsic excitability"`,
}
NeuronVarProps has all of the display properties for neuron variables, including desc tooltips
var SynapseVarProps = map[string]string{
"Wt ": `desc:"effective synaptic weight value, determining how much conductance one spike drives on the receiving neuron, representing the actual number of effective AMPA receptors in the synapse. Wt = SWt * WtSig(LWt), where WtSig produces values between 0-2 based on LWt, centered on 1."`,
"LWt": `desc:"rapidly learning, linear weight value -- learns according to the lrate specified in the connection spec. Biologically, this represents the internal biochemical processes that drive the trafficking of AMPA receptors in the synaptic density. Initially all LWt are .5, which gives 1 from WtSig function."`,
"SWt": `desc:"slowly adapting structural weight value, which acts as a multiplicative scaling factor on synaptic efficacy: biologically represents the physical size and efficacy of the dendritic spine. SWt values adapt in an outer loop along with synaptic scaling, with constraints to prevent runaway positive feedback loops and maintain variance and further capacity to learn. Initial variance is all in SWt, with LWt set to .5, and scaling absorbs some of LWt into SWt."`,
"DWt": `auto-scale:"+" desc:"delta (change in) synaptic weight, from learning -- updates LWt which then updates Wt."`,
"DSWt": `auto-scale:"+" desc:"change in SWt slow synaptic weight -- accumulates DWt"`,
"CaM": `auto-scale:"+" desc:"first stage running average (mean) Ca calcium level (like CaM = calmodulin), feeds into CaP"`,
"CaP": `auto-scale:"+"desc:"shorter timescale integrated CaM value, representing the plus, LTP direction of weight change and capturing the function of CaMKII in the Kinase learning rule"`,
"CaD": `auto-scale:"+" desc:"longer timescale integrated CaP value, representing the minus, LTD direction of weight change and capturing the function of DAPK1 in the Kinase learning rule"`,
"Tr": `auto-scale:"+" desc:"trace of synaptic activity over time -- used for credit assignment in learning. In MatrixPath this is a tag that is then updated later when US occurs."`,
"DTr": `auto-scale:"+" desc:"delta (change in) Tr trace of synaptic activity over time"`,
"DiDWt": `auto-scale:"+" desc:"delta weight for each data parallel index (Di) -- this is directly computed from the Ca values (in cortical version) and then aggregated into the overall DWt (which may be further integrated across MPI nodes), which then drives changes in Wt values"`,
}
SynapseVarProps has all of the display properties for synapse variables, including desc tooltips
var TheGPU *vgpu.GPU
TheGPU is the gpu device, shared across all networks
Functions ¶
func AddGlbCostV ¶
func AddGlbCostV(ctx *Context, di uint32, gvar GlobalVars, negIndex uint32, val float32)
AddGlbCostV is the CPU version of the global Cost variable adder
func AddGlbUSnegV ¶
func AddGlbUSnegV(ctx *Context, di uint32, gvar GlobalVars, negIndex uint32, val float32)
AddGlbUSnegV is the CPU version of the global USneg variable adder
func AddGlbUSposV ¶
func AddGlbUSposV(ctx *Context, di uint32, gvar GlobalVars, posIndex uint32, val float32)
AddGlbUSposV is the CPU version of the global Drive, USpos variable adder
func AddGlbV ¶
func AddGlbV(ctx *Context, di uint32, gvar GlobalVars, val float32)
AddGlbV is the CPU version of the global variable adder
func AddNrnAvgV ¶
func AddNrnAvgV(ctx *Context, ni uint32, nvar NeuronAvgVars, val float32)
AddNrnAvgV is the CPU version of the neuron variable adder
func AddNrnV ¶
func AddNrnV(ctx *Context, ni, di uint32, nvar NeuronVars, val float32)
AddNrnV is the CPU version of the neuron variable adder
func AddSynCaV ¶
func AddSynCaV(ctx *Context, syni, di uint32, svar SynapseCaVars, val float32)
AddSynCaV is the CPU version of the synapse variable adder
func AddSynV ¶
func AddSynV(ctx *Context, syni uint32, svar SynapseVars, val float32)
AddSynV is the CPU version of the synapse variable adder
func GetRandomNumber ¶
func GetRandomNumber(index uint32, counter slrand.Counter, funIndex RandFunIndex) float32
GetRandomNumber returns a random number that depends on the index, counter and function index. We increment the counter after each cycle, so that we get new random numbers. This whole scheme exists to ensure equal results under different multithreading settings.
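The following is a minimal, self-contained sketch of the counter-based idea (not the actual slrand algorithm): because the output depends only on (index, counter, funIndex), the same numbers are produced regardless of how the work is divided across goroutines. The mixing function below is illustrative only.

    package main

    import "fmt"

    // counterRand is an illustrative counter-based RNG: the result depends only
    // on (index, counter, funIndex), so it is reproducible under any threading.
    // This splitmix-style mix is for illustration; axon uses slrand.
    func counterRand(index, counter, funIndex uint32) float32 {
        x := uint64(index)<<40 ^ uint64(counter)<<8 ^ uint64(funIndex)
        x ^= x >> 30
        x *= 0xbf58476d1ce4e5b9
        x ^= x >> 27
        x *= 0x94d049bb133111eb
        x ^= x >> 31
        return float32(x&0xffffff) / float32(1<<24) // uniform in [0, 1)
    }

    func main() {
        // identical inputs give identical outputs, independent of thread layout
        fmt.Println(counterRand(3, 17, 0) == counterRand(3, 17, 0)) // true
    }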
func GlbCostV ¶
func GlbCostV(ctx *Context, di uint32, gvar GlobalVars, negIndex uint32) float32
GlbCostV is the CPU version of the global Cost variable accessor
func GlbUSnegV ¶
func GlbUSnegV(ctx *Context, di uint32, gvar GlobalVars, negIndex uint32) float32
GlbUSnegV is the CPU version of the global USneg variable accessor
func GlbUSposV ¶
func GlbUSposV(ctx *Context, di uint32, gvar GlobalVars, posIndex uint32) float32
GlbUSposV is the CPU version of the global Drive, USpos variable accessor
func GlbV ¶
func GlbV(ctx *Context, di uint32, gvar GlobalVars) float32
GlbV is the CPU version of the global variable accessor
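The Glb* functions follow a consistent accessor / setter / adder naming pattern (see also SetGlbV and AddGlbV below). A minimal sketch of reading and updating one global value on the CPU, assuming the package is imported as axon and that GvDA is a valid GlobalVars value (illustrative only):

    // bumpDA reads the dopamine global for data-parallel index di, increments
    // it, and returns the new value. GvDA is assumed here for illustration;
    // any GlobalVars value follows the same read / set / add pattern.
    func bumpDA(ctx *axon.Context, di uint32, inc float32) float32 {
        da := axon.GlbV(ctx, di, axon.GvDA)      // read current value
        axon.SetGlbV(ctx, di, axon.GvDA, da+inc) // write it back
        // equivalently: axon.AddGlbV(ctx, di, axon.GvDA, inc)
        return axon.GlbV(ctx, di, axon.GvDA)
    }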
func GlobalSetRew ¶
GlobalSetRew is a convenience function for setting the external reward state in the Globals variables
func GlobalsReset ¶
func GlobalsReset(ctx *Context)
GlobalsReset resets all global values to 0, for all NData
func HashEncodeSlice ¶
func IsExtLayerType ¶
func IsExtLayerType(lt LayerTypes) bool
IsExtLayerType returns true if the layer type deals with external input: Input, Target, Compare
func JsonToParams ¶
JsonToParams reformats JSON output into suitable params display output
func LayerActsLog ¶
LayerActsLog records layer activity for tuning the network inhibition, nominal activity, relative scaling, etc. If gui is non-nil, the plot is updated.
func LayerActsLogAvg ¶
LayerActsLogAvg computes the average of the LayerActsRec record of layer activity for tuning the network inhibition, nominal activity, relative scaling, etc. If gui is non-nil, the plot is updated. If recReset is true, the recorded data is reset after computing the average.
func LayerActsLogConfig ¶
LayerActsLogConfig configures Tables to record layer activity for tuning the network inhibition, nominal activity, relative scaling, etc. in elog.MiscTables: LayerActs is current, LayerActsRec is record over trials, LayerActsAvg is average of recorded trials.
func LayerActsLogConfigGUI ¶
LayerActsLogConfigGUI configures GUI for LayerActsLog Plot and LayerActs Avg Plot
func LayerActsLogConfigMetaData ¶
LayerActsLogConfigMetaData configures meta data for LayerActs table
func LayerActsLogRecReset ¶
LayerActsLogRecReset resets the recorded LayerActsRec data used for computing averages
func LogAddCaLrnDiagnosticItems ¶
func LogAddCaLrnDiagnosticItems(lg *elog.Logs, mode etime.Modes, net *Network, times ...etime.Times)
LogAddCaLrnDiagnosticItems adds standard Axon diagnostic statistics to given logs, across the given time levels, in higher to lower order, e.g., Epoch, Trial. These were useful for the development of the Ca-based "trace" learning rule that directly uses NMDA and VGCC-like spiking Ca.
func LogAddDiagnosticItems ¶
func LogAddDiagnosticItems(lg *elog.Logs, layerNames []string, mode etime.Modes, times ...etime.Times)
LogAddDiagnosticItems adds standard Axon diagnostic statistics to given logs, across the given time levels, in higher to lower order, e.g., Epoch, Trial. These are useful for tuning and diagnosing the behavior of the network.
func LogAddExtraDiagnosticItems ¶
func LogAddExtraDiagnosticItems(lg *elog.Logs, mode etime.Modes, net *Network, times ...etime.Times)
LogAddExtraDiagnosticItems adds extra Axon diagnostic statistics to given logs, across the given time levels, in higher to lower order, e.g., Epoch, Trial. These are useful for tuning and diagnosing the behavior of the network.
func LogAddGlobals ¶
LogAddGlobals adds all the Global variable values across the given time levels, in higher to lower order, e.g., Epoch, Trial. These are useful for tuning and diagnosing the behavior of the network.
func LogAddLayerGeActAvgItems ¶
LogAddLayerGeActAvgItems adds Ge and Act average items for Hidden and Target layers for the given mode and time (e.g., Test, Cycle). These are useful for monitoring layer activity during testing.
func LogAddPCAItems ¶
LogAddPCAItems adds PCA statistics to the log for Hidden and Target layers across the given time levels, in higher to lower order, e.g., Run, Epoch, Trial. These are useful for diagnosing the behavior of the network.
func LogAddPulvCorSimItems ¶
LogAddPulvCorSimItems adds CorSim stats for Pulv / Pulvinar layers aggregated across three time scales, ordered from higher to lower, e.g., Run, Epoch, Trial.
func LogTestErrors ¶
LogTestErrors records all errors made across TestTrials, at Test Epoch scope
func LooperResetLogBelow ¶
LooperResetLogBelow adds a function in OnStart to all stacks and loops to reset the log at the level below each loop -- this is good default behavior. Exceptions can be passed to exclude specific levels -- e.g., if except is Epoch then Epoch does not reset the log below it
func LooperSimCycleAndLearn ¶
func LooperSimCycleAndLearn(man *looper.Manager, net *Network, ctx *Context, viewupdt *netview.ViewUpdate, trial ...etime.Times)
LooperSimCycleAndLearn adds Cycle and DWt, WtFromDWt functions to looper for the given network, ctx, and netview update manager. Can pass a trial-level time scale to use instead of the default etime.Trial.
func LooperStdPhases ¶
func LooperStdPhases(man *looper.Manager, ctx *Context, net *Network, plusStart, plusEnd int, trial ...etime.Times)
LooperStdPhases adds the minus and plus phases of the theta cycle, along with embedded beta phases which just record St1 and St2 activity in this case. plusStart is the start of the plus phase, typically 150, and plusEnd is the end of the plus phase, typically 199. Resets the state at the start of the trial. Can pass a trial-level time scale to use instead of the default etime.Trial.
func LooperUpdateNetView ¶
func LooperUpdateNetView(man *looper.Manager, viewupdt *netview.ViewUpdate, net *Network, ctrUpdateFunc func(tm etime.Times))
LooperUpdateNetView adds netview update calls at each time level
func LooperUpdatePlots ¶
LooperUpdatePlots adds plot update calls at each time level
func MulNrnAvgV ¶
func MulNrnAvgV(ctx *Context, ni uint32, nvar NeuronAvgVars, val float32)
MulNrnAvgV is the CPU version of the neuron variable multiplier
func MulNrnV ¶
func MulNrnV(ctx *Context, ni, di uint32, nvar NeuronVars, val float32)
MulNrnV is the CPU version of the neuron variable multiplier
func MulSynCaV ¶
func MulSynCaV(ctx *Context, syni, di uint32, svar SynapseCaVars, val float32)
MulSynCaV is the CPU version of the synapse variable multiplier
func MulSynV ¶
func MulSynV(ctx *Context, syni uint32, svar SynapseVars, val float32)
MulSynV is the CPU version of the synapse variable multiplier
func NeuronVarIndexByName ¶
NeuronVarIndexByName returns the index of the variable in the Neuron, or error
func NrnAvgV ¶
func NrnAvgV(ctx *Context, ni uint32, nvar NeuronAvgVars) float32
NrnAvgV is the CPU version of the neuron variable accessor
func NrnClearFlag ¶
func NrnClearFlag(ctx *Context, ni, di uint32, flag NeuronFlags)
func NrnHasFlag ¶
func NrnHasFlag(ctx *Context, ni, di uint32, flag NeuronFlags) bool
func NrnI ¶
func NrnI(ctx *Context, ni uint32, idx NeuronIndexes) uint32
NrnI is the CPU version of the neuron idx accessor
func NrnIsOff ¶
NrnIsOff returns true if the neuron has been turned off (lesioned) Only checks the first data item -- all should be consistent.
func NrnSetFlag ¶
func NrnSetFlag(ctx *Context, ni, di uint32, flag NeuronFlags)
func NrnV ¶
func NrnV(ctx *Context, ni, di uint32, nvar NeuronVars) float32
NrnV is the CPU version of the neuron variable accessor
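Neuron state follows the same accessor pattern (see also SetNrnV, AddNrnV, and MulNrnV). A minimal CPU-side sketch, assuming the package is imported as axon and that Ge is the NeuronVars constant corresponding to the "Ge" variable listed in NeuronVarProps (illustrative only):

    // scaleGe reads neuron ni's Ge for data-parallel index di and scales it.
    // The Ge constant is assumed for illustration; any NeuronVars value
    // follows the same pattern.
    func scaleGe(ctx *axon.Context, ni, di uint32, factor float32) {
        ge := axon.NrnV(ctx, ni, di, axon.Ge)         // read
        axon.SetNrnV(ctx, ni, di, axon.Ge, ge*factor) // write
        // equivalently: axon.MulNrnV(ctx, ni, di, axon.Ge, factor)
    }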
func PCAStats ¶
PCAStats computes PCA statistics on recorded hidden activation patterns from Analyze, Trial log data
func ParallelChunkRun ¶
ParallelChunkRun maps the given function across the [0, total) range of items, using nThreads goroutines, in smaller-sized chunks for better load balancing. This may be better for a larger number of threads, but is not better for small N.
func ParallelRun ¶
ParallelRun maps the given function across the [0, total) range of items, using nThreads goroutines.
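A self-contained sketch of the basic pattern these helpers implement (closest to ParallelRun's equal division of items across nThreads goroutines; ParallelChunkRun additionally uses smaller chunks for load balancing). This is illustrative, not the axon implementation:

    package main

    import (
        "fmt"
        "sync"
    )

    // parallelRun maps fun over [0, total) using roughly nThreads goroutines,
    // each handling one contiguous chunk of items (illustrative only).
    func parallelRun(fun func(i int), total, nThreads int) {
        chunk := (total + nThreads - 1) / nThreads
        var wg sync.WaitGroup
        for start := 0; start < total; start += chunk {
            end := start + chunk
            if end > total {
                end = total
            }
            wg.Add(1)
            go func(s, e int) {
                defer wg.Done()
                for i := s; i < e; i++ {
                    fun(i)
                }
            }(start, end)
        }
        wg.Wait()
    }

    func main() {
        sq := make([]int, 8)
        parallelRun(func(i int) { sq[i] = i * i }, len(sq), 4)
        fmt.Println(sq) // [0 1 4 9 16 25 36 49]
    }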
func RubiconNormFun ¶
RubiconNormFun is the normalizing function applied to the sum of all weighted raw values: 1 - (1 / (1 + usRaw.Sum()))
func RubiconUSStimValue ¶
func RubiconUSStimValue(ctx *Context, di uint32, usIndex uint32, valence ValenceTypes) float32
RubiconUSStimValue returns the stimulus value for the US at the given index and valence (includes Cost). If US > 0.01, a full 1 US activation is returned.
func SaveWeights ¶
SaveWeights saves network weights to a filename with WeightsFilename information to identify the weights. Only saves for rank 0 if running under MPI. Returns the name of the file saved to, or empty if not saved.
func SaveWeightsIfArgSet ¶
SaveWeightsIfArgSet saves network weights if the "wts" arg has been set to true. Uses WeightsFilename information to identify the weights. Only saves for rank 0 if running under MPI. Returns the name of the file saved to, or empty if not saved.
func SaveWeightsIfConfigSet ¶
SaveWeightsIfConfigSet saves network weights if the given config bool value has been set to true. Uses WeightsFilename information to identify the weights. Only saves for rank 0 if running under MPI. Returns the name of the file saved to, or empty if not saved.
func SetAvgMaxFloatFromIntErr ¶
func SetAvgMaxFloatFromIntErr(fun func())
func SetGlbCostV ¶
func SetGlbCostV(ctx *Context, di uint32, gvar GlobalVars, negIndex uint32, val float32)
SetGlbCostV is the CPU version of the global Cost variable setter
func SetGlbUSnegV ¶
func SetGlbUSnegV(ctx *Context, di uint32, gvar GlobalVars, negIndex uint32, val float32)
SetGlbUSnegV is the CPU version of the global USneg variable setter
func SetGlbUSposV ¶
func SetGlbUSposV(ctx *Context, di uint32, gvar GlobalVars, posIndex uint32, val float32)
SetGlbUSposV is the CPU version of the global Drive, USpos variable setter
func SetGlbV ¶
func SetGlbV(ctx *Context, di uint32, gvar GlobalVars, val float32)
SetGlbV is the CPU version of the global variable setter
func SetNeuronExtPosNeg ¶
SetNeuronExtPosNeg sets the neuron Ext value based on neuron index, with positive values going into the first unit and negative values rectified to positive in the 2nd unit.
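A small self-contained sketch of the two-unit encoding described above (illustrative only; the real function writes into neuron Ext state):

    package main

    import "fmt"

    // posNegExt rectifies a signed scalar into a two-element Ext pattern:
    // positive values drive the first unit, negative values (made positive)
    // drive the second, as described for SetNeuronExtPosNeg.
    func posNegExt(val float32) [2]float32 {
        if val >= 0 {
            return [2]float32{val, 0}
        }
        return [2]float32{0, -val}
    }

    func main() {
        fmt.Println(posNegExt(0.7))  // [0.7 0]
        fmt.Println(posNegExt(-0.3)) // [0 0.3]
    }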
func SetNrnAvgV ¶
func SetNrnAvgV(ctx *Context, ni uint32, nvar NeuronAvgVars, val float32)
SetNrnAvgV is the CPU version of the neuron variable setter
func SetNrnI ¶
func SetNrnI(ctx *Context, ni uint32, idx NeuronIndexes, val uint32)
SetNrnI is the CPU version of the neuron idx setter
func SetNrnV ¶
func SetNrnV(ctx *Context, ni, di uint32, nvar NeuronVars, val float32)
SetNrnV is the CPU version of the neuron variable setter
func SetSynCaV ¶
func SetSynCaV(ctx *Context, syni, di uint32, svar SynapseCaVars, val float32)
SetSynCaV is the CPU version of the synapse variable setter
func SetSynI ¶
func SetSynI(ctx *Context, syni uint32, idx SynapseIndexes, val uint32)
SetSynI is the CPU version of the synapse idx setter
func SetSynV ¶
func SetSynV(ctx *Context, syni uint32, svar SynapseVars, val float32)
SetSynV is the CPU version of the synapse variable setter
func SigFun61 ¶
SigFun61 is the sigmoid function for value w in 0-1 range, with default gain = 6, offset = 1 params
func SigInvFun61 ¶
SigInvFun61 is the inverse of the sigmoid function, with default gain = 6, offset = 1 params
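The following sketch shows the typical gain / offset sigmoid form that these convenience functions specialize (gain = 6, offset = 1), along with its inverse. This is the standard Leabra-style contrast-enhancement form; the exact axon implementation may differ in details:

    package main

    import (
        "fmt"
        "math"
    )

    // sigFun is a sketch of the gain/offset sigmoid for w in the 0-1 range;
    // SigFun61 corresponds to gain = 6, off = 1 (illustrative form).
    func sigFun(w, gain, off float64) float64 {
        if w <= 0 {
            return 0
        }
        if w >= 1 {
            return 1
        }
        return 1.0 / (1.0 + math.Pow(off*(1-w)/w, gain))
    }

    // sigInvFun inverts sigFun for the same gain and offset, as SigInvFun61
    // does for gain = 6, off = 1.
    func sigInvFun(y, gain, off float64) float64 {
        if y <= 0 {
            return 0
        }
        if y >= 1 {
            return 1
        }
        return 1.0 / (1.0 + math.Pow((1-y)/y, 1/gain)/off)
    }

    func main() {
        w := 0.6
        y := sigFun(w, 6, 1)
        fmt.Printf("%.4f %.4f\n", y, sigInvFun(y, 6, 1)) // round-trips to ~0.6
    }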
func SigmoidFun ¶
SigmoidFun is the sigmoid function for computing give up probabilities
func SynCaV ¶
func SynCaV(ctx *Context, syni, di uint32, svar SynapseCaVars) float32
SynCaV is the CPU version of the synapse variable accessor
func SynI ¶
func SynI(ctx *Context, syni uint32, idx SynapseIndexes) uint32
SynI is the CPU version of the synapse idx accessor
func SynV ¶
func SynV(ctx *Context, syni uint32, svar SynapseVars) float32
SynV is the CPU version of the synapse variable accessor
func SynapseVarByName ¶
SynapseVarByName returns the index of the variable in the Synapse, or error
func ToggleLayersOff ¶
ToggleLayersOff can be used to disable layers in a Network, for example if you are doing an ablation study.
func WeightsFilename ¶
WeightsFilename returns the default current weights file name, using train run and epoch counters from looper and the RunName string identifying the tag, parameters, and starting run.
Types ¶
type ActAvgParams ¶
type ActAvgParams struct { // nominal estimated average activity level in the layer, which is used in computing the scaling factor on sending pathways from this layer. In general it should roughly match the layer ActAvg.ActMAvg value, which can be logged using the axon.LogAddDiagnosticItems function. If layers receiving from this layer are not getting enough Ge excitation, then this Nominal level can be lowered to increase pathway strength (fewer active neurons means each one contributes more, so scaling factor goes as the inverse of activity level), or vice-versa if Ge is too high. It is also the basis for the target activity level used for the AdaptGi option -- see the Offset which is added to this value. Nominal float32 `min:"0" step:"0.01"` // enable adapting of layer inhibition Gi multiplier factor (stored in layer GiMult value) to maintain a target layer level of ActAvg.Nominal. This generally works well and improves the long-term stability of the models. It is not enabled by default because it depends on having established a reasonable Nominal + Offset target activity level. AdaptGi slbool.Bool // offset to add to Nominal for the target average activity that drives adaptation of Gi for this layer. Typically the Nominal level is good, but sometimes Nominal must be adjusted up or down to achieve desired Ge scaling, so this Offset can compensate accordingly. Offset float32 `default:"0" min:"0" step:"0.01"` // tolerance for higher than Target target average activation as a proportion of that target value (0 = exactly the target, 0.2 = 20% higher than target) -- only once activations move outside this tolerance are inhibitory values adapted. HiTol float32 `default:"0"` // tolerance for lower than Target target average activation as a proportion of that target value (0 = exactly the target, 0.5 = 50% lower than target) -- only once activations move outside this tolerance are inhibitory values adapted. LoTol float32 `default:"0.8"` // rate of Gi adaptation as function of AdaptRate * (Target - ActMAvg) / Target -- occurs at spaced intervals determined by Network.SlowInterval value -- slower values such as 0.01 may be needed for large networks and sparse layers. AdaptRate float32 `default:"0.1"` // contains filtered or unexported fields }
ActAvgParams represents the nominal average activity levels in the layer and parameters for adapting the computed Gi inhibition levels to maintain average activity within a target range.
func (*ActAvgParams) Adapt ¶
func (aa *ActAvgParams) Adapt(gimult *float32, act float32) bool
Adapt adapts the given gi multiplier factor as function of target and actual average activation, given current params.
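A minimal sketch of the adaptation step, following the field descriptions above: GiMult moves in proportion to AdaptRate * (act - target) / target, and only once activity falls outside the HiTol / LoTol tolerance band. The exact sign conventions and ordering in the real Adapt may differ:

    // adaptGiSketch illustrates the Gi multiplier adaptation described in the
    // ActAvgParams field docs (not the actual implementation): gi is raised
    // when average activity exceeds the target and lowered when it is below,
    // but only outside the tolerance band. Returns true if adapted.
    func adaptGiSketch(giMult *float32, act, target, hiTol, loTol, adaptRate float32) bool {
        del := (act - target) / target
        if del <= hiTol && del >= -loTol { // within tolerance: no change
            return false
        }
        *giMult += adaptRate * del
        return true
    }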
func (*ActAvgParams) AvgFromAct ¶
func (aa *ActAvgParams) AvgFromAct(avg *float32, act float32, dt float32)
AvgFromAct updates the running-average activation given average activity level in layer
func (*ActAvgParams) Defaults ¶
func (aa *ActAvgParams) Defaults()
func (*ActAvgParams) ShouldDisplay ¶
func (aa *ActAvgParams) ShouldDisplay(field string) bool
func (*ActAvgParams) Update ¶
func (aa *ActAvgParams) Update()
type ActAvgValues ¶
type ActAvgValues struct { // running-average minus-phase activity integrated at Dt.LongAvgTau -- used for adapting inhibition relative to target level ActMAvg float32 `edit:"-"` // running-average plus-phase activity integrated at Dt.LongAvgTau ActPAvg float32 `edit:"-"` // running-average max of minus-phase Ge value across the layer integrated at Dt.LongAvgTau AvgMaxGeM float32 `edit:"-"` // running-average max of minus-phase Gi value across the layer integrated at Dt.LongAvgTau AvgMaxGiM float32 `edit:"-"` // multiplier on inhibition -- adapted to maintain target activity level GiMult float32 `edit:"-"` // contains filtered or unexported fields }
ActAvgValues are long-running-average activation levels stored in the LayerValues, for monitoring and adapting inhibition and possibly scaling parameters. All of these integrate over NData within a network, so are the same across them.
func (*ActAvgValues) Init ¶
func (lv *ActAvgValues) Init()
type ActInitParams ¶
type ActInitParams struct { // initial membrane potential -- see Erev.L for the resting potential (typically .3) Vm float32 `default:"0.3"` // initial activation value -- typically 0 Act float32 `default:"0"` // baseline level of excitatory conductance (net input) -- Ge is initialized to this value, and it is added in as a constant background level of excitatory input -- captures all the other inputs not represented in the model, and intrinsic excitability, etc GeBase float32 `default:"0"` // baseline level of inhibitory conductance (net input) -- Gi is initialized to this value, and it is added in as a constant background level of inhibitory input -- captures all the other inputs not represented in the model GiBase float32 `default:"0"` // variance (sigma) of gaussian distribution around baseline Ge values, per unit, to establish variability in intrinsic excitability. value never goes < 0 GeVar float32 `default:"0"` // variance (sigma) of gaussian distribution around baseline Gi values, per unit, to establish variability in intrinsic excitability. value never goes < 0 GiVar float32 `default:"0"` // contains filtered or unexported fields }
ActInitParams are initial values for key network state variables. Initialized in InitActs called by InitWts, and provides target values for DecayState.
func (*ActInitParams) Defaults ¶
func (ai *ActInitParams) Defaults()
func (*ActInitParams) GetGeBase ¶
func (ai *ActInitParams) GetGeBase(rnd randx.Rand) float32
GetGeBase returns the baseline Ge value: GeBase + rand(GeVar), clipped to be >= 0
func (*ActInitParams) GetGiBase ¶
func (ai *ActInitParams) GetGiBase(rnd randx.Rand) float32
GetGiBase returns the baseline Gi value: GiBase + rand(GiVar), clipped to be >= 0
func (*ActInitParams) Update ¶
func (ai *ActInitParams) Update()
type ActParams ¶
type ActParams struct { // Spiking function parameters Spikes SpikeParams `display:"inline"` // dendrite-specific parameters Dend DendParams `display:"inline"` // initial values for key network state variables -- initialized in InitActs called by InitWts, and provides target values for DecayState Init ActInitParams `display:"inline"` // amount to decay between AlphaCycles, simulating passage of time and effects of saccades etc, especially important for environments with random temporal structure (e.g., most standard neural net training corpora) Decay DecayParams `display:"inline"` // time and rate constants for temporal derivatives / updating of activation state Dt DtParams `display:"inline"` // maximal conductance levels for channels Gbar chans.Chans `display:"inline"` // reversal potentials for each channel Erev chans.Chans `display:"inline"` // how external inputs drive neural activations Clamp ClampParams `display:"inline"` // how, where, when, and how much noise to add Noise SpikeNoiseParams `display:"inline"` // range for Vm membrane potential -- important to keep just at extreme range of reversal potentials to prevent numerical instability VmRange minmax.F32 `display:"inline"` // M-type medium time-scale afterhyperpolarization mAHP current -- this is the primary form of adaptation on the time scale of multiple sequences of spikes Mahp chans.MahpParams `display:"inline"` // slow time-scale afterhyperpolarization sAHP current -- integrates CaSpkD at theta cycle intervals and produces a hard cutoff on sustained activity for any neuron Sahp chans.SahpParams `display:"inline"` // sodium-gated potassium channel adaptation parameters -- activates a leak-like current as a function of neural activity (firing = Na influx) at two different time-scales (Slick = medium, Slack = slow) KNa chans.KNaMedSlow `display:"inline"` // potassium (K) inwardly rectifying (ir) current, which is similar to GABAB (which is a GABA modulated Kir channel). This channel is off by default but plays a critical role in making medium spiny neurons (MSNs) relatively quiet in the striatum. Kir chans.KirParams `display:"inline"` // NMDA channel parameters used in computing Gnmda conductance for bistability, and postsynaptic calcium flux used in learning. Note that Learn.Snmda has distinct parameters used in computing sending NMDA parameters used in learning. NMDA chans.NMDAParams `display:"inline"` // NMDA channel parameters used in computing Gnmda conductance for bistability, and postsynaptic calcium flux used in learning. Note that Learn.Snmda has distinct parameters used in computing sending NMDA parameters used in learning. MaintNMDA chans.NMDAParams `display:"inline"` // GABA-B / GIRK channel parameters GabaB chans.GABABParams `display:"inline"` // voltage gated calcium channels -- provide a key additional source of Ca for learning and positive-feedback loop upstate for active neurons VGCC chans.VGCCParams `display:"inline"` // A-type potassium (K) channel that is particularly important for limiting the runaway excitation from VGCC channels AK chans.AKsParams `display:"inline"` // small-conductance calcium-activated potassium channel produces the pausing function as a consequence of rapid bursting. SKCa chans.SKCaParams `display:"inline"` // for self-maintenance simulating a population of NMDA-interconnected spiking neurons SMaint SMaintParams `display:"inline"` // provides encoding population codes, used to represent a single continuous (scalar) value, across a population of units / neurons (1 dimensional) PopCode PopCodeParams `display:"inline"` }
axon.ActParams contains all the activation computation params and functions for basic Axon, at the neuron level. This is included in axon.Layer to drive the computation.
func (*ActParams) AddGeNoise ¶
AddGeNoise updates nrn.GeNoise if active
func (*ActParams) AddGiNoise ¶
AddGiNoise updates nrn.GiNoise if active
func (*ActParams) DecayAHP ¶
DecayAHP decays after-hyperpolarization variables by given factor (typically Decay.AHP)
func (*ActParams) DecayLearnCa ¶
DecayLearnCa decays neuron-level calcium learning and spiking variables by given factor. Note: this is generally NOT useful, causing variability in these learning factors as a function of the decay parameter that then has impacts on learning rates etc. see Act.Decay.LearnCa param controlling this
func (*ActParams) DecayState ¶
DecayState decays the activation state toward initial values in proportion to given decay parameter. Special case values such as Glong and KNa are also decayed with their separately parameterized values. Called with ac.Decay.Act by Layer during NewState
func (*ActParams) GSkCaFromCa ¶
GSkCaFromCa updates the SKCa channel if used
func (*ActParams) GeFromSyn ¶
GeFromSyn integrates Ge excitatory conductance from GeSyn. geExt is extra conductance to add to the final Ge value
func (*ActParams) GiFromSyn ¶
GiFromSyn integrates GiSyn inhibitory synaptic conductance from the GiRaw value (can add other terms to giRaw prior to calling this)
func (*ActParams) GvgccFromVm ¶
GvgccFromVm updates all the VGCC voltage-gated calcium channel variables from VmDend
func (*ActParams) InitActs ¶
InitActs initializes activation state in neuron -- called during InitWts but otherwise not automatically called (DecayState is used instead)
func (*ActParams) InitLongActs ¶
InitLongActs initializes longer time-scale activation states in neuron (SpkPrv, SpkSt*, ActM, ActP) Called from InitActs, which is called from InitWts, but otherwise not automatically called (DecayState is used instead)
func (*ActParams) KNaNewState ¶
KNaNewState does TrialSlow version of KNa during NewState if option is set
func (*ActParams) MaintNMDAFromRaw ¶
MaintNMDAFromRaw updates all the Maint NMDA variables from GModRaw and current Vm, Spiking
func (*ActParams) NMDAFromRaw ¶
NMDAFromRaw updates all the NMDA variables from total Ge (GeRaw + Ext) and current Vm, Spiking
func (*ActParams) SMaintFromISI ¶
SMaintFromISI updates the SMaint self-maintenance current into GMaintRaw
func (*ActParams) SpikeFromVm ¶
SpikeFromVm computes Spike from Vm and ISI-based activation
func (*ActParams) SpikeFromVmVars ¶
func (ac *ActParams) SpikeFromVmVars(nrnISI, nrnISIAvg, nrnSpike, nrnSpiked, nrnAct *float32, nrnVm float32)
SpikeFromVmVars computes Spike from Vm and ISI-based activation, using pointers to variables
func (*ActParams) Update ¶
func (ac *ActParams) Update()
Update must be called after any changes to parameters
func (*ActParams) VmFromG ¶
VmFromG computes membrane potential Vm from conductances Ge, Gi, and Gk.
func (*ActParams) VmFromInet ¶
VmFromInet computes new Vm value from inet, clamping range
type AvgMaxI32 ¶
type AvgMaxI32 struct { // Average, from Calc when last computed as Sum / N Avg float32 `edit:"-"` // Maximum value, copied from CurMax in Calc Max float32 `edit:"-"` // sum for computing average -- incremented in UpdateVal, reset in Calc Sum int32 `edit:"-"` // current maximum value, updated via UpdateVal, reset in Calc CurMax int32 `edit:"-"` // number of items in the sum -- this must be set in advance to a known value and it is used in computing the float <-> int conversion factor to maximize precision. N int32 `edit:"-"` // contains filtered or unexported fields }
AvgMaxI32 holds average and max statistics for float32, and values used for computing them incrementally, using a fixed precision int32 based float representation that can be used with GPU-based atomic add and max functions. This ONLY works for positive values with averages around 1, and the N must be set IN ADVANCE to the correct number of items. Once Calc() is called, the incremental values are reset via Init() so it is always ready for updating without a separate Init() pass.
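As a rough usage sketch (not part of the package documentation; the exact UpdateValue and Calc signatures, and setting N directly, are assumptions based on the descriptions below):

	acts := []float32{0.5, 1.0, 0.8} // example values in the ~0-1 range
	var am AvgMaxI32
	am.N = int32(len(acts)) // must be known in advance for the float <-> int factor
	am.Init()               // reset incremental Sum and CurMax
	for _, a := range acts {
		am.UpdateValue(a) // accumulate Sum and CurMax incrementally
	}
	am.Calc(0) // Avg = Sum / N, Max = CurMax; incrementals reset via Init()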
func (*AvgMaxI32) Calc ¶
Calc computes the average given the current Sum and copies CurMax over to Max. refIndex is a reference index of the thing being computed, which will be printed in case there is an overflow, for debugging (it can't be a string because this code runs on the GPU).
func (*AvgMaxI32) FloatFromInt ¶
FloatFromInt converts the given int32 value produced via FloatToInt back into a float32 (divides by factor)
func (*AvgMaxI32) FloatFromIntFactor ¶
FloatFromIntFactor returns the factor used for converting int32 back to float32 -- this is 1 / FloatToIntFactor for faster multiplication instead of dividing.
func (*AvgMaxI32) FloatToInt ¶
FloatToInt converts the given floating point value to a large int for max updating.
func (*AvgMaxI32) FloatToIntFactor ¶
FloatToIntFactor returns the factor used for converting float32 to int32 for Max updating, assuming that the overall value is in the general order of 0-1 (127 is the max).
func (*AvgMaxI32) FloatToIntSum ¶
FloatToIntSum converts the given floating point value to a large int for sum accumulating -- divides by N.
func (*AvgMaxI32) Init ¶
func (am *AvgMaxI32) Init()
Init initializes incremental values used during updating.
func (*AvgMaxI32) UpdateValue ¶
UpdateValue updates stats from the given value
type AvgMaxPhases ¶
type AvgMaxPhases struct { // updated every cycle -- this is the source of all subsequent time scales Cycle AvgMaxI32 `display:"inline"` // at the end of the minus phase Minus AvgMaxI32 `display:"inline"` // at the end of the plus phase Plus AvgMaxI32 `display:"inline"` // at the end of the previous plus phase Prev AvgMaxI32 `display:"inline"` }
AvgMaxPhases contains the average and maximum values over a Pool of neurons, at different time scales within a standard ThetaCycle of updating. It is much more efficient on the GPU to just grab everything in one pass at the cycle level, and then take snapshots from there. All of the cycle level values are updated at the *start* of the cycle based on values from the prior cycle -- thus are 1 cycle behind in general.
func (*AvgMaxPhases) Calc ¶
func (am *AvgMaxPhases) Calc(refIndex int32)
Calc does Calc on Cycle, which is then ready for aggregation again
func (*AvgMaxPhases) CycleToMinus ¶
func (am *AvgMaxPhases) CycleToMinus()
CycleToMinus grabs current Cycle values into the Minus phase values
func (*AvgMaxPhases) CycleToPlus ¶
func (am *AvgMaxPhases) CycleToPlus()
CycleToPlus grabs current Cycle values into the Plus phase values
func (*AvgMaxPhases) Zero ¶
func (am *AvgMaxPhases) Zero()
Zero does a full reset on everything -- for InitActs
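Taken together, a plausible per-trial flow for these methods is sketched below (illustrative only; the cycle counts, the snapshot timing, and the per-cycle pool updating are assumptions):

	var am AvgMaxPhases
	am.Zero() // full reset, e.g., from InitActs
	for cyc := 0; cyc < 200; cyc++ {
		// ... update am.Cycle incrementally via its AvgMaxI32 methods ...
		am.Calc(0) // finalize Cycle Avg / Max for this cycle
		if cyc == 149 {
			am.CycleToMinus() // snapshot Cycle values at end of minus phase
		}
	}
	am.CycleToPlus() // snapshot Cycle values at end of plus phase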
type AxonLayer ¶
type AxonLayer interface { emer.Layer // AsAxon returns this layer as a axon.Layer -- so that the AxonLayer // interface does not need to include accessors to all the basic stuff AsAxon() *Layer // PostBuild performs special post-Build() configuration steps for specific algorithms, // using configuration data set in BuildConfig during the ConfigNet process. PostBuild() }
AxonLayer defines the essential algorithmic API for Axon, at the layer level. These are the methods that the axon.Network calls on its layers at each step of processing. Other Layer types can selectively re-implement (override) these methods to modify the computation, while inheriting the basic behavior for non-overridden methods.
All of the structural API is in emer.Layer, which this interface also inherits for convenience.
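For example, a layer obtained through the generic emer API can be cast back to the axon-specific type (a minimal sketch):

	// ly is an emer.Layer obtained from the network's generic structural API
	if al, ok := ly.(AxonLayer); ok {
		axLy := al.AsAxon() // *Layer, with all the axon-specific state and params
		_ = axLy
	}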
type AxonNetwork ¶
type AxonNetwork interface { emer.Network // AsAxon returns this network as a axon.Network -- so that the // AxonNetwork interface does not need to include accessors // to all the basic stuff AsAxon() *Network }
AxonNetwork defines the essential algorithmic API for Axon, at the network level. These are the methods that the user calls in their Sim code: * NewState * Cycle * NewPhase * DWt * WtFromDwt Because we don't want to have to force the user to use the interface cast in calling these methods, we provide Impl versions here that are the implementations which the user-facing method calls through the interface cast. Specialized algorithms should thus only change the Impl version, which is what is exposed here in this interface.
There is now a strong constraint that all Cycle level computation takes place in one pass at the Layer level, which greatly improves threading efficiency.
All of the structural API is in emer.Network, which this interface also inherits for convenience.
type AxonPath ¶
type AxonPath interface { emer.Path // AsAxon returns this path as a axon.Path -- so that the AxonPath // interface does not need to include accessors to all the basic stuff. AsAxon() *Path }
AxonPath defines the essential algorithmic API for Axon, at the pathway level. These are the methods that the axon.Layer calls on its paths at each step of processing. Other Path types can selectively re-implement (override) these methods to modify the computation, while inheriting the basic behavior for non-overridden methods.
All of the structural API is in emer.Path, which this interface also inherits for convenience.
type BLANovelPath ¶
type BLANovelPath struct { }
BLANovelPath connects all other pools to the first, Novelty, pool in a BLA layer. This allows the known US representations to specifically inhibit the novelty pool.
func NewBLANovelPath ¶
func NewBLANovelPath() *BLANovelPath
func (*BLANovelPath) Name ¶
func (ot *BLANovelPath) Name() string
type BLAPathParams ¶
type BLAPathParams struct { // use 0.01 for acquisition (don't unlearn) and 1 for extinction -- negative delta learning rate multiplier NegDeltaLRate float32 `default:"0.01,1"` // threshold on this layer's ACh level for trace learning updates AChThr float32 `default:"0.1"` // proportion of US time stimulus activity to use for the trace component of USTrace float32 `default:"0,0.5"` // contains filtered or unexported fields }
BLAPathParams has parameters for basolateral amygdala learning. Learning is driven by the Tr trace as a function of ACh * Send Act recorded prior to the US, and at the US, by the recv unit delta (CaSpkP - SpkPrv) times the normalized GeIntNorm for recv unit credit assignment. The Learn.Trace.Tau time constant determines trace updating over trials when ACh is above threshold -- this determines the strength of second-order conditioning -- the default of 1 means none, but it can be increased as needed.
func (*BLAPathParams) Defaults ¶
func (bp *BLAPathParams) Defaults()
func (*BLAPathParams) Update ¶
func (bp *BLAPathParams) Update()
type BurstParams ¶
type BurstParams struct { // Relative component of threshold on superficial activation value, below which it does not drive Burst (and above which, Burst = CaSpkP). This is the distance between the average and maximum activation values within layer (e.g., 0 = average, 1 = max). Overall effective threshold is MAX of relative and absolute thresholds. ThrRel float32 `max:"1" default:"0.1"` // Absolute component of threshold on superficial activation value, below which it does not drive Burst (and above which, Burst = CaSpkP). Overall effective threshold is MAX of relative and absolute thresholds. ThrAbs float32 `min:"0" max:"1" default:"0.1"` // contains filtered or unexported fields }
BurstParams determine how the 5IB Burst activation is computed from CaSpkP integrated spiking values in Super layers -- thresholded.
func (*BurstParams) Defaults ¶
func (bp *BurstParams) Defaults()
func (*BurstParams) ThrFromAvgMax ¶
func (bp *BurstParams) ThrFromAvgMax(avg, mx float32) float32
ThrFromAvgMax returns threshold from average and maximum values
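Based on the ThrRel and ThrAbs descriptions above, the effective threshold is presumably computed along these lines (a sketch, not the verified source):

	func thrFromAvgMax(bp *BurstParams, avg, mx float32) float32 {
		thr := avg + bp.ThrRel*(mx-avg) // relative threshold in activation units
		if bp.ThrAbs > thr {
			thr = bp.ThrAbs // effective threshold is the MAX of relative and absolute
		}
		return thr
	}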
func (*BurstParams) Update ¶
func (bp *BurstParams) Update()
type CTParams ¶
type CTParams struct { // gain factor for context excitatory input, which is constant as compared to the spiking input from other pathways, so it must be downscaled accordingly. This can make a difference and may need to be scaled up or down. GeGain float32 `default:"0.05,0.1,1,2"` // decay time constant for context Ge input -- if > 0, decays over time so intrinsic circuit dynamics have to take over. For single-step copy-based cases, set to 0, while longer-time-scale dynamics should use 50 (80 for 280 cycles) DecayTau float32 `default:"0,50,70"` // 1 / tau DecayDt float32 `display:"-" json:"-" xml:"-"` // contains filtered or unexported fields }
CTParams control the CT corticothalamic neuron special behavior
func (*CTParams) DecayForNCycles ¶
type CaLrnParams ¶
type CaLrnParams struct { // denominator used for normalizing CaLrn, so the max is roughly 1 - 1.5 or so, which works best in terms of previous standard learning rules, and overall learning performance Norm float32 `default:"80"` // use spikes to generate VGCC instead of actual VGCC current -- see SpkVGCCa for calcium contribution from each spike SpkVGCC slbool.Bool `default:"true"` // multiplier on spike for computing Ca contribution to CaLrn in SpkVGCC mode SpkVgccCa float32 `default:"35"` // time constant of decay for VgccCa calcium -- it is highly transient around spikes, so decay and diffusion factors are more important than for long-lasting NMDA factor. VgccCa is integrated separately into VgccCaInt prior to adding into NMDA Ca in CaLrn VgccTau float32 `default:"10"` // time constants for integrating CaLrn across M, P and D cascading levels Dt kinase.CaDtParams `display:"inline"` // Threshold on CaSpkP CaSpkD value for updating synapse-level Ca values (SynCa) -- this is purely a performance optimization that excludes random infrequent spikes -- 0.05 works well on larger networks but not smaller, which require the .01 default. UpdateThr float32 `default:"0.01,0.02,0.5"` // rate = 1 / tau VgccDt float32 `display:"-" json:"-" xml:"-" edit:"-"` // = 1 / Norm NormInv float32 `display:"-" json:"-" xml:"-" edit:"-"` // contains filtered or unexported fields }
CaLrnParams parameterizes the neuron-level calcium signals driving learning: CaLrn = NMDA + VGCC Ca sources, where VGCC can be simulated from spiking or use the more complex and dynamic VGCC channel directly. CaLrn is then integrated in a cascading manner at multiple time scales: CaM (as in calmodulin), CaP (ltP, CaMKII, plus phase), CaD (ltD, DAPK1, minus phase).
func (*CaLrnParams) CaLrns ¶
func (np *CaLrnParams) CaLrns(ctx *Context, ni, di uint32)
CaLrns updates the CaLrn value and its cascaded values, based on NMDA and VGCC Ca. It first calls VgccCaFromSpike to update the spike-driven version of that variable, and performs its time-integration.
func (*CaLrnParams) Defaults ¶
func (np *CaLrnParams) Defaults()
func (*CaLrnParams) Update ¶
func (np *CaLrnParams) Update()
func (*CaLrnParams) VgccCaFromSpike ¶
func (np *CaLrnParams) VgccCaFromSpike(ctx *Context, ni, di uint32)
VgccCaFromSpike updates the simulated VGCC calcium from spiking, if that option is selected, and performs time-integration of VgccCa
type ClampParams ¶
type ClampParams struct { // is this a clamped input layer? set automatically based on layer type at initialization IsInput slbool.Bool `edit:"-"` // is this a target layer? set automatically based on layer type at initialization IsTarget slbool.Bool `edit:"-"` // amount of Ge driven for clamping -- generally use 0.8 for Target layers, 1.5 for Input layers Ge float32 `default:"0.8,1.5"` // add external conductance on top of any existing -- generally this is not a good idea for target layers (creates a main effect that learning can never match), but may be ok for input layers Add slbool.Bool `default:"false"` // threshold on neuron Act activity to count as active for computing error relative to target in PctErr method ErrThr float32 `default:"0.5"` // contains filtered or unexported fields }
ClampParams specify how external inputs drive excitatory conductances (like a current clamp) -- either adds or overwrites existing conductances. Noise is added in either case.
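Schematically, for an external input ext in the 0-1 range, the clamp conductance is applied roughly as follows (a sketch; the variable names and the Add handling are illustrative):

	ext := float32(1.0)    // external input value (0-1)
	geSyn := float32(0.3)  // existing synaptic excitation (illustrative)
	add := false           // cp.Add as a plain bool
	geClamp := cp.Ge * ext // cp.Ge is 0.8 (Target) or 1.5 (Input) by default
	ge := geClamp          // overwrite existing drive by default
	if add {
		ge = geSyn + geClamp // or add on top of any existing drive
	}
	_ = ge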
func (*ClampParams) Defaults ¶
func (cp *ClampParams) Defaults()
func (*ClampParams) Update ¶
func (cp *ClampParams) Update()
type Context ¶
type Context struct { // current evaluation mode, e.g., Train, Test, etc Mode etime.Modes // if true, the model is being run in a testing mode, so no weight changes or other associated computations are needed. this flag should only affect learning-related behavior. Is automatically updated based on Mode != Train Testing slbool.Bool `edit:"-"` // phase counter: typically 0-1 for minus-plus but can be more phases for other algorithms Phase int32 // true if this is the plus phase, when the outcome / bursting is occurring, driving positive learning -- else minus phase PlusPhase slbool.Bool // cycle within current phase -- minus or plus PhaseCycle int32 // cycle counter: number of iterations of activation updating (settling) on the current state -- this counts time sequentially until reset with NewState Cycle int32 // length of the theta cycle in terms of 1 msec Cycles -- some network update steps depend on doing something at the end of the theta cycle (e.g., CTCtxtPath). ThetaCycles int32 `default:"200"` // total cycle count -- increments continuously from whenever it was last reset -- typically this is number of milliseconds in simulation time -- is int32 and not uint32 b/c used with Synapse CaUpT which needs to have a -1 case for expired update time CyclesTotal int32 // accumulated amount of time the network has been running, in simulation-time (not real world time), in seconds Time float32 // total trial count -- increments continuously in NewState call *only in Train mode* from whenever it was last reset -- can be used for synchronizing weight updates across nodes TrialsTotal int32 // amount of time to increment per cycle TimePerCycle float32 `default:"0.001"` // how frequently to perform slow adaptive processes such as synaptic scaling, inhibition adaptation, associated in the brain with sleep, in the SlowAdapt method. This should be long enough for meaningful changes to accumulate -- 100 is default but could easily be longer in larger models. Because SlowCtr is incremented by NData, high NData cases (e.g. 16) likely need to increase this value -- e.g., 400 seems to produce overall consistent results in various models. SlowInterval int32 `default:"100"` // counter for how long it has been since last SlowAdapt step. Note that this is incremented by NData to maintain consistency across different values of this parameter. SlowCtr int32 `edit:"-"` // synaptic calcium counter, which drives the CaUpT synaptic value to optimize updating of this computationally expensive factor. It is incremented by 1 for each cycle, and reset at the SlowInterval, at which point the synaptic calcium values are all reset. SynCaCtr float32 `edit:"-"` // indexes and sizes of current network NetIndexes NetIndexes `display:"inline"` // stride offsets for accessing neuron variables NeuronVars NeuronVarStrides `display:"-"` // stride offsets for accessing neuron average variables NeuronAvgVars NeuronAvgVarStrides `display:"-"` // stride offsets for accessing neuron indexes NeuronIndexes NeuronIndexStrides `display:"-"` // stride offsets for accessing synapse variables SynapseVars SynapseVarStrides `display:"-"` // stride offsets for accessing synapse Ca variables SynapseCaVars SynapseCaStrides `display:"-"` // stride offsets for accessing synapse indexes SynapseIndexes SynapseIndexStrides `display:"-"` // random counter -- incremented by maximum number of possible random numbers generated per cycle, regardless of how many are actually used -- this is shared across all layers so must encompass all possible param settings.
RandCtr slrand.Counter // contains filtered or unexported fields }
Context contains all of the global context state info that is shared across every step of the computation. It is passed around to all relevant computational functions, and is updated on the CPU and synced to the GPU after every cycle. It is the *only* mechanism for communication from CPU to GPU. It contains timing, Testing vs. Training mode, random number context, global neuromodulation, etc.
func NewContext ¶
func NewContext() *Context
NewContext returns a new Context with default parameters
func (*Context) CopyNetStridesFrom ¶
CopyNetStridesFrom copies strides and NetIndexes for accessing variables on a Network -- these must be set properly for the Network in question (from its Ctx field) before calling any compute methods with the context. See SetCtxStrides on Network.
func (*Context) GlobalCostIndex ¶
func (ctx *Context) GlobalCostIndex(di uint32, gvar GlobalVars, negIndex uint32) uint32
GlobalCostIndex returns index into Cost global variables
func (*Context) GlobalIndex ¶
func (ctx *Context) GlobalIndex(di uint32, gvar GlobalVars) uint32
GlobalIndex returns index into main global variables, before GvVtaDA
func (*Context) GlobalUSnegIndex ¶
func (ctx *Context) GlobalUSnegIndex(di uint32, gvar GlobalVars, negIndex uint32) uint32
GlobalUSnegIndex returns index into USneg global variables
func (*Context) GlobalUSposIndex ¶
func (ctx *Context) GlobalUSposIndex(di uint32, gvar GlobalVars, posIndex uint32) uint32
GlobalUSposIndex returns index into USpos, Drive, VSPatch global variables
func (*Context) GlobalVNFloats ¶
GlobalVNFloats number of floats to allocate for Globals
func (*Context) NewState ¶
NewState resets counters at the start of a new state (trial) of processing. Pass the evaluation mode associated with this new state -- if it is not Train, then Testing will be set to true.
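A minimal usage sketch (the NewState signature taking an etime mode, and the per-cycle network call, are assumptions):

	ctx := NewContext()       // default parameters
	ctx.NewState(etime.Train) // new trial in Train mode; Testing is set to false
	for cyc := int32(0); cyc < ctx.ThetaCycles; cyc++ {
		// net.Cycle(ctx) // illustrative: run one cycle of network updating
	}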
func (*Context) SetGlobalStrides ¶
func (ctx *Context) SetGlobalStrides()
SetGlobalStrides sets global variable access offsets and strides
type CorSimStats ¶
type CorSimStats struct { // correlation (centered cosine aka normalized dot product) activation difference between ActP and ActM on this alpha-cycle for this layer -- computed by CorSimFromActs called by PlusPhase Cor float32 `edit:"-"` // running average of correlation similarity between ActP and ActM -- computed with CorSim.Tau time constant in PlusPhase Avg float32 `edit:"-"` // running variance of correlation similarity between ActP and ActM -- computed with CorSim.Tau time constant in PlusPhase Var float32 `edit:"-"` // contains filtered or unexported fields }
CorSimStats holds correlation similarity (centered cosine aka normalized dot product) statistics at the layer level
func (*CorSimStats) Init ¶
func (cd *CorSimStats) Init()
type DAModTypes ¶
type DAModTypes int32 //enums:enum
DAModTypes are types of dopamine modulation of neural activity.
const ( // NoDAMod means there is no effect of dopamine on neural activity NoDAMod DAModTypes = iota // D1Mod is for neurons that primarily express dopamine D1 receptors, // which are excitatory from DA bursts, inhibitory from dips. // Cortical neurons can generally use this type, while subcortical // populations are more diverse in having both D1 and D2 subtypes. D1Mod // D2Mod is for neurons that primarily express dopamine D2 receptors, // which are excitatory from DA dips, inhibitory from bursts. D2Mod // D1AbsMod is like D1Mod, except the absolute value of DA is used // instead of the signed value. // There are a subset of DA neurons that send increased DA for // both negative and positive outcomes, targeting frontal neurons. D1AbsMod )
const DAModTypesN DAModTypes = 4
DAModTypesN is the highest valid value for type DAModTypes, plus one.
func DAModTypesValues ¶
func DAModTypesValues() []DAModTypes
DAModTypesValues returns all possible values for the type DAModTypes.
func (DAModTypes) Desc ¶
func (i DAModTypes) Desc() string
Desc returns the description of the DAModTypes value.
func (DAModTypes) Int64 ¶
func (i DAModTypes) Int64() int64
Int64 returns the DAModTypes value as an int64.
func (DAModTypes) MarshalText ¶
func (i DAModTypes) MarshalText() ([]byte, error)
MarshalText implements the encoding.TextMarshaler interface.
func (*DAModTypes) SetInt64 ¶
func (i *DAModTypes) SetInt64(in int64)
SetInt64 sets the DAModTypes value from an int64.
func (*DAModTypes) SetString ¶
func (i *DAModTypes) SetString(s string) error
SetString sets the DAModTypes value from its string representation, and returns an error if the string is invalid.
func (DAModTypes) String ¶
func (i DAModTypes) String() string
String returns the string representation of this DAModTypes value.
func (*DAModTypes) UnmarshalText ¶
func (i *DAModTypes) UnmarshalText(text []byte) error
UnmarshalText implements the encoding.TextUnmarshaler interface.
func (DAModTypes) Values ¶
func (i DAModTypes) Values() []enums.Enum
Values returns all possible values for the type DAModTypes.
type DecayParams ¶
type DecayParams struct { // proportion to decay most activation state variables toward initial values at start of every ThetaCycle (except those controlled separately below) -- if 1 it is effectively equivalent to full clear, resetting other derived values. ISI is reset every AlphaCycle to get a fresh sample of activations (doesn't affect direct computation -- only readout). Act float32 `default:"0,0.2,0.5,1" max:"1" min:"0"` // proportion to decay long-lasting conductances, NMDA and GABA, and also the dendritic membrane potential -- when using random stimulus order, it is important to decay this significantly to allow a fresh start -- but set Act to 0 to enable ongoing activity to keep neurons in their sensitive regime. Glong float32 `default:"0,0.6" max:"1" min:"0"` // decay of afterhyperpolarization currents, including mAHP, sAHP, and KNa, Kir -- has a separate decay because often useful to have this not decay at all even if decay is on. AHP float32 `default:"0" max:"1" min:"0"` // decay of Ca variables driven by spiking activity used in learning: CaSpk* and Ca* variables. These are typically not decayed but may need to be in some situations. LearnCa float32 `default:"0" max:"1" min:"0"` // decay layer at end of ThetaCycle when there is a global reward -- true by default for PTPred, PTMaint and PFC Super layers OnRew slbool.Bool // contains filtered or unexported fields }
DecayParams control the decay of activation state in the DecayState function called in NewState when a new state is to be processed.
func (*DecayParams) Defaults ¶
func (dp *DecayParams) Defaults()
func (*DecayParams) Update ¶
func (dp *DecayParams) Update()
type DendParams ¶
type DendParams struct { // dendrite-specific strength multiplier of the exponential spiking drive on Vm -- e.g., .5 makes it half as strong as at the soma (which uses Gbar.L as a strength multiplier per the AdEx standard model) GbarExp float32 `default:"0.2,0.5"` // dendrite-specific conductance of Kdr delayed rectifier currents, used to reset membrane potential for dendrite -- applied for Tr msec GbarR float32 `default:"3,6"` // SST+ somatostatin positive slow spiking inhibition level specifically affecting dendritic Vm (VmDend) -- this is important for countering a positive feedback loop from NMDA getting stronger over the course of learning -- also typically requires SubMean = 1 for TrgAvgAct and learning to fully counter this feedback loop. SSGi float32 `default:"0,2"` // set automatically based on whether this layer has any recv pathways that have a GType conductance type of Modulatory -- if so, then multiply GeSyn etc by GModSyn HasMod slbool.Bool `edit:"-"` // multiplicative gain factor on the total modulatory input -- this can also be controlled by the PathScale.Abs factor on ModulatoryG inputs, but it is convenient to be able to control on the layer as well. ModGain float32 // if true, modulatory signal also includes ACh multiplicative factor ModACh slbool.Bool // baseline modulatory level for modulatory effects -- net modulation is ModBase + ModGain * GModSyn ModBase float32 // contains filtered or unexported fields }
DendParams are the parameters for updating dendrite-specific dynamics
func (*DendParams) Defaults ¶
func (dp *DendParams) Defaults()
func (*DendParams) ShouldDisplay ¶
func (dp *DendParams) ShouldDisplay(field string) bool
func (*DendParams) Update ¶
func (dp *DendParams) Update()
type DriveParams ¶
type DriveParams struct { // minimum effective drive value, which is an automatic baseline ensuring // that a positive US results in at least some minimal level of reward. // Unlike Base values, this is not reflected in the activity of the drive // values, and applies at the time of reward calculation as a minimum baseline. DriveMin float32 // baseline levels for each drive, which is what they naturally trend toward // in the absence of any input. Set inactive drives to 0 baseline, // active ones typically elevated baseline (0-1 range). Base []float32 // time constants in ThetaCycle (trial) units for natural update toward // Base values. 0 values means no natural update (can be updated externally). Tau []float32 // decrement in drive value when US is consumed, thus partially satisfying // the drive. Positive values are subtracted from current Drive value. Satisfaction []float32 // 1/Tau Dt []float32 `display:"-"` }
DriveParams manages the drive parameters for computing and updating drive state. Most of the params are for optional case where drives are automatically updated based on US consumption (which satisfies drives) and time passing (which increases drives).
func (*DriveParams) AddTo ¶
AddTo increments drive by given amount, subject to 0-1 range clamping. Returns new val.
func (*DriveParams) Alloc ¶
func (dp *DriveParams) Alloc(nDrives int)
func (*DriveParams) Defaults ¶
func (dp *DriveParams) Defaults()
func (*DriveParams) EffectiveDrive ¶
func (dp *DriveParams) EffectiveDrive(ctx *Context, di uint32, i uint32) float32
EffectiveDrive returns the Max of Drives at given index and DriveMin. Note that index 0 is the novelty / curiosity drive, which doesn't use DriveMin.
func (*DriveParams) ExpStep ¶
ExpStep updates drive with an exponential step with given dt value toward given baseline value.
func (*DriveParams) ExpStepAll ¶
func (dp *DriveParams) ExpStepAll(ctx *Context, di uint32)
ExpStepAll updates given drives with an exponential step using dt values toward baseline values.
func (*DriveParams) SoftAdd ¶
SoftAdd increments drive by the given amount, using soft-bounding toward the 0-1 extremes: if delta is positive, it is multiplied by (1 - val), otherwise by val. Returns the new val.
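In other words, roughly (a sketch of the soft-bounding arithmetic described above):

	// soft-bounded increment of drive value val by delta, staying within 0-1
	if delta > 0 {
		val += delta * (1 - val)
	} else {
		val += delta * val
	}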
func (*DriveParams) ToBaseline ¶
func (dp *DriveParams) ToBaseline(ctx *Context, di uint32)
ToBaseline sets all drives to their baseline levels
func (*DriveParams) ToZero ¶
func (dp *DriveParams) ToZero(ctx *Context, di uint32)
ToZero sets all drives to 0
func (*DriveParams) Update ¶
func (dp *DriveParams) Update()
func (*DriveParams) VarToZero ¶
func (dp *DriveParams) VarToZero(ctx *Context, di uint32, gvar GlobalVars)
VarToZero sets all values of given drive-sized variable to 0
type DtParams ¶
type DtParams struct { // overall rate constant for numerical integration, for all equations at the unit level -- all time constants are specified in millisecond units, with one cycle = 1 msec -- if you instead want to make one cycle = 2 msec, you can do this globally by setting this integ value to 2 (etc). However, stability issues will likely arise if you go too high. For improved numerical stability, you may even need to reduce this value to 0.5 or possibly even lower (typically however this is not necessary). MUST also coordinate this with network.time_inc variable to ensure that global network.time reflects simulated time accurately Integ float32 `default:"1,0.5" min:"0"` // membrane potential time constant in cycles, which should be milliseconds typically (tau is roughly how long it takes for value to change significantly -- 1.4x the half-life) -- reflects the capacitance of the neuron in principle -- biological default for AdEx spiking model C = 281 pF = 2.81 normalized VmTau float32 `default:"2.81" min:"1"` // dendritic membrane potential time constant in cycles, which should be milliseconds typically (tau is roughly how long it takes for value to change significantly -- 1.4x the half-life) -- reflects the capacitance of the neuron in principle -- biological default for AdEx spiking model C = 281 pF = 2.81 normalized VmDendTau float32 `default:"5" min:"1"` // number of integration steps to take in computing new Vm value -- this is the one computation that can be most numerically unstable so taking multiple steps with proportionally smaller dt is beneficial VmSteps int32 `default:"2" min:"1"` // time constant for decay of excitatory AMPA receptor conductance. GeTau float32 `default:"5" min:"1"` // time constant for decay of inhibitory GABAa receptor conductance. GiTau float32 `default:"7" min:"1"` // time constant for integrating values over timescale of an individual input state (e.g., roughly 200 msec -- theta cycle), used in computing ActInt, GeInt from Ge, and GiInt from GiSyn -- this is used for scoring performance, not for learning, in cycles, which should be milliseconds typically (tau is roughly how long it takes for value to change significantly -- 1.4x the half-life), IntTau float32 `default:"40" min:"1"` // time constant for integrating slower long-time-scale averages, such as nrn.ActAvg, Pool.ActsMAvg, ActsPAvg -- computed in NewState when a new input state is present (i.e., not msec but in units of a theta cycle) (tau is roughly how long it takes for value to change significantly) -- set lower for smaller models LongAvgTau float32 `default:"20" min:"1"` // cycle to start updating the SpkMaxCa, SpkMax values within a theta cycle -- early cycles often reflect prior state MaxCycStart int32 `default:"10" min:"0"` // nominal rate = Integ / tau VmDt float32 `display:"-" json:"-" xml:"-"` // nominal rate = Integ / tau VmDendDt float32 `display:"-" json:"-" xml:"-"` // 1 / VmSteps DtStep float32 `display:"-" json:"-" xml:"-"` // rate = Integ / tau GeDt float32 `display:"-" json:"-" xml:"-"` // rate = Integ / tau GiDt float32 `display:"-" json:"-" xml:"-"` // rate = Integ / tau IntDt float32 `display:"-" json:"-" xml:"-"` // rate = 1 / tau LongAvgDt float32 `display:"-" json:"-" xml:"-"` }
DtParams are time and rate constants for temporal derivatives in Axon (Vm, G)
func (*DtParams) AvgVarUpdate ¶
AvgVarUpdate updates the average and variance from current value, using LongAvgDt
func (*DtParams) GeSynFromRaw ¶
GeSynFromRaw integrates a synaptic conductance from raw spiking using GeTau
func (*DtParams) GeSynFromRawSteady ¶
GeSynFromRawSteady returns the steady-state GeSyn that would result from receiving a steady increment of GeRaw every time step = raw * GeTau. dSyn = Raw - dt*Syn; solve for dSyn = 0 to get steady state: dt*Syn = Raw; Syn = Raw / dt = Raw * Tau
func (*DtParams) GiSynFromRaw ¶
GiSynFromRaw integrates a synaptic conductance from raw spiking using GiTau
func (*DtParams) GiSynFromRawSteady ¶
GiSynFromRawSteady returns the steady-state GiSyn that would result from receiving a steady increment of GiRaw every time step = raw * GiTau. dSyn = Raw - dt*Syn; solve for dSyn = 0 to get steady state: dt*Syn = Raw; Syn = Raw / dt = Raw * Tau
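For example, with the default GiTau = 7, a constant GiRaw = 0.1 arriving every cycle settles at GiSyn = 0.1 * 7 = 0.7 (and likewise GeSyn = GeRaw * GeTau for the excitatory case).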
type GPLayerTypes ¶
type GPLayerTypes int32 //enums:enum
GPLayerTypes is a GPLayer axon-specific layer type enum.
const ( // GPePr is the set of prototypical GPe neurons, mediating classical NoGo GPePr GPLayerTypes = iota // GPeAk is arkypallidal layer of GPe neurons, receiving inhibition from GPePr // and projecting inhibition to Mtx GPeAk // GPi is the inner globus pallidus, functionally equivalent to SNr, // receiving from MtxGo and GPePr, and sending inhibition to VThal GPi )
The GPLayer types
const GPLayerTypesN GPLayerTypes = 3
GPLayerTypesN is the highest valid value for type GPLayerTypes, plus one.
func GPLayerTypesValues ¶
func GPLayerTypesValues() []GPLayerTypes
GPLayerTypesValues returns all possible values for the type GPLayerTypes.
func (GPLayerTypes) Desc ¶
func (i GPLayerTypes) Desc() string
Desc returns the description of the GPLayerTypes value.
func (GPLayerTypes) Int64 ¶
func (i GPLayerTypes) Int64() int64
Int64 returns the GPLayerTypes value as an int64.
func (GPLayerTypes) MarshalText ¶
func (i GPLayerTypes) MarshalText() ([]byte, error)
MarshalText implements the encoding.TextMarshaler interface.
func (*GPLayerTypes) SetInt64 ¶
func (i *GPLayerTypes) SetInt64(in int64)
SetInt64 sets the GPLayerTypes value from an int64.
func (*GPLayerTypes) SetString ¶
func (i *GPLayerTypes) SetString(s string) error
SetString sets the GPLayerTypes value from its string representation, and returns an error if the string is invalid.
func (GPLayerTypes) String ¶
func (i GPLayerTypes) String() string
String returns the string representation of this GPLayerTypes value.
func (*GPLayerTypes) UnmarshalText ¶
func (i *GPLayerTypes) UnmarshalText(text []byte) error
UnmarshalText implements the encoding.TextUnmarshaler interface.
func (GPLayerTypes) Values ¶
func (i GPLayerTypes) Values() []enums.Enum
Values returns all possible values for the type GPLayerTypes.
type GPParams ¶
type GPParams struct { // type of GP Layer -- must set during config using SetBuildConfig of GPType. GPType GPLayerTypes // contains filtered or unexported fields }
GPParams configure a globus pallidus (GP) layer, including: GPePr, GPeAk (arkypallidal), and GPi (see GPType for the type). Typically there is just a single unit per Pool, representing a given stripe.
type GPU ¶
type GPU struct { // if true, actually use the GPU On bool // if true, slower separate shader pipeline runs are used, with a CPU-sync Wait at the end, to enable timing information about each individual shader to be collected using the network FunTimer system. otherwise, only aggregate information is available about the entire Cycle call. RecFunTimes bool // if true, process each cycle one at a time. Otherwise, 10 cycles at a time are processed in one batch. CycleByCycle bool // the network we operate on -- we live under this net Net *Network `display:"-"` // the context we use Ctx *Context `display:"-"` // the vgpu compute system Sys *vgpu.System `display:"-"` // VarSet = 0: the uniform LayerParams Params *vgpu.VarSet `display:"-"` // VarSet = 1: the storage indexes and PathParams Indexes *vgpu.VarSet `display:"-"` // VarSet = 2: the Storage buffer for RW state structs and neuron floats Structs *vgpu.VarSet `display:"-"` // Varset = 3: the Storage buffer for synapses Syns *vgpu.VarSet `display:"-"` // Varset = 4: the Storage buffer for SynCa banks SynCas *vgpu.VarSet `display:"-"` // for sequencing commands Semaphores map[string]vk.Semaphore `display:"-"` // number of warp threads -- typically 64 -- must update all hlsl files if changed! NThreads int `display:"-" inactive:"-" default:"64"` // maximum number of bytes per individual storage buffer element, from GPUProps.Limits.MaxStorageBufferRange MaxBufferBytes uint32 `display:"-"` // bank of floats for GPU access SynapseCas0 []float32 `display:"-"` // bank of floats for GPU access SynapseCas1 []float32 `display:"-"` // bank of floats for GPU access SynapseCas2 []float32 `display:"-"` // bank of floats for GPU access SynapseCas3 []float32 `display:"-"` // bank of floats for GPU access SynapseCas4 []float32 `display:"-"` // bank of floats for GPU access SynapseCas5 []float32 `display:"-"` // bank of floats for GPU access SynapseCas6 []float32 `display:"-"` // bank of floats for GPU access SynapseCas7 []float32 `display:"-"` // tracks var binding DidBind map[string]bool `display:"-"` }
GPU manages all of the GPU-based computation for a given Network. Lives within the network.
func (*GPU) ConfigSynCaBuffs ¶
func (gp *GPU) ConfigSynCaBuffs()
ConfigSynCaBuffs configures special SynapseCas buffers needed for larger memory access
func (*GPU) CopyContextFromStaging ¶
func (gp *GPU) CopyContextFromStaging()
CopyContextFromStaging copies Context from staging to CPU, after Sync back down.
func (*GPU) CopyContextToStaging ¶
func (gp *GPU) CopyContextToStaging()
CopyContextToStaging copies current context to staging from CPU. Must call SyncMemToGPU after this (see SyncContextToGPU). See SetContext if there is a new one.
func (*GPU) CopyExtsToStaging ¶
func (gp *GPU) CopyExtsToStaging()
CopyExtsToStaging copies external inputs to staging from CPU. Typically used in RunApplyExts which also does the Sync.
func (*GPU) CopyGBufToStaging ¶
func (gp *GPU) CopyGBufToStaging()
CopyGBufToStaging copies the GBuf and GSyns memory to staging.
func (*GPU) CopyIndexesToStaging ¶
func (gp *GPU) CopyIndexesToStaging()
CopyIndexesToStaging is only called when the network is built to copy the indexes specifying connectivity etc to staging from CPU.
func (*GPU) CopyLayerStateFromStaging ¶
func (gp *GPU) CopyLayerStateFromStaging()
CopyLayerStateFromStaging copies Context, LayerValues and Pools from staging to CPU, after Sync.
func (*GPU) CopyLayerValuesFromStaging ¶
func (gp *GPU) CopyLayerValuesFromStaging()
CopyLayerValuesFromStaging copies LayerValues from staging to CPU, after Sync back down.
func (*GPU) CopyLayerValuesToStaging ¶
func (gp *GPU) CopyLayerValuesToStaging()
CopyLayerValuesToStaging copies LayerValues to staging from CPU. Must call SyncMemToGPU after this (see SyncLayerValuesToGPU).
func (*GPU) CopyNeuronsFromStaging ¶
func (gp *GPU) CopyNeuronsFromStaging()
CopyNeuronsFromStaging copies Neurons from staging to CPU, after Sync back down.
func (*GPU) CopyNeuronsToStaging ¶
func (gp *GPU) CopyNeuronsToStaging()
CopyNeuronsToStaging copies neuron state up to staging from CPU. Must call SyncMemToGPU after this (see SyncNeuronsToGPU).
func (*GPU) CopyParamsToStaging ¶
func (gp *GPU) CopyParamsToStaging()
CopyParamsToStaging copies the LayerParams and PathParams to staging from CPU. Must call SyncMemToGPU after this (see SyncParamsToGPU).
func (*GPU) CopyPoolsFromStaging ¶
func (gp *GPU) CopyPoolsFromStaging()
CopyPoolsFromStaging copies Pools from staging to CPU, after Sync back down.
func (*GPU) CopyPoolsToStaging ¶
func (gp *GPU) CopyPoolsToStaging()
CopyPoolsToStaging copies Pools to staging from CPU. Must call SyncMemToGPU after this (see SyncPoolsToGPU).
func (*GPU) CopyStateFromStaging ¶
func (gp *GPU) CopyStateFromStaging()
CopyStateFromStaging copies Context, LayerValues, Pools, and Neurons from staging to CPU, after Sync.
func (*GPU) CopyStateToStaging ¶
func (gp *GPU) CopyStateToStaging()
CopyStateToStaging copies LayerValues, Pools, Neurons state to staging from CPU. This is typically sufficient for most syncing -- only missing the Synapses, which must be copied separately. Must call SyncMemToGPU after this (see SyncStateToGPU).
func (*GPU) CopySynCaFromStaging ¶
func (gp *GPU) CopySynCaFromStaging()
CopySynCaFromStaging copies the SynCa variables from staging to CPU, which are per-Di (even larger). This is only used for GUI viewing -- SynCa vars otherwise managed entirely on GPU.
func (*GPU) CopySynCaToStaging ¶
func (gp *GPU) CopySynCaToStaging()
CopySynCaToStaging copies the SynCa variables to GPU, which are per-Di (even larger). This is only used for initialization -- SynCa vars otherwise managed entirely on GPU. Must call SyncMemToGPU after this (see SyncSynCaToGPU).
func (*GPU) CopySynapsesFromStaging ¶
func (gp *GPU) CopySynapsesFromStaging()
CopySynapsesFromStaging copies Synapses from staging to CPU, after Sync back down. Does not copy SynCa synapse state -- see SynCa methods.
func (*GPU) CopySynapsesToStaging ¶
func (gp *GPU) CopySynapsesToStaging()
CopySynapsesToStaging copies the synapse memory to staging (large). Does not copy SynCa synapse state -- see SynCa methods. This is not typically needed except when weights are initialized or for the Slow weight update processes that are not on GPU. Must call SyncMemToGPU after this (see SyncSynapsesToGPU).
func (*GPU) Destroy ¶
func (gp *GPU) Destroy()
Destroy should be called to release all the resources allocated by the network
func (*GPU) RunApplyExts ¶
func (gp *GPU) RunApplyExts()
RunApplyExts copies Exts external input memory to the GPU and then runs the ApplyExts shader that applies those external inputs to the GPU-side neuron state. The caller must check the On flag before running this, to use CPU vs. GPU
func (*GPU) RunApplyExtsCmd ¶
func (gp *GPU) RunApplyExtsCmd() vk.CommandBuffer
RunApplyExtsCmd returns the commands to copy Exts external input memory to the GPU and then runs the ApplyExts shader that applies those external inputs to the GPU-side neuron state. The caller must check the On flag before running this, to use CPU vs. GPU
func (*GPU) RunCycle ¶
func (gp *GPU) RunCycle()
RunCycle is the main cycle-level update loop for updating one msec of neuron state. It copies the current Context up to the GPU for updated Cycle counter state and random number state. Different versions of the code are run depending on various flags. By default, it will run the entire minus and plus phase in one big chunk. The caller must check the On flag before running this, to use CPU vs. GPU.
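A typical calling pattern implied by the On flag (a sketch; the Network's GPU field name and the CPU-side call are illustrative, not part of this API listing):

	if net.GPU.On {
		net.GPU.RunCycle() // GPU path: runs the cycle shaders and syncs state back
	} else {
		// CPU path: run the equivalent cycle-level computation on the CPU
	}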
func (*GPU) RunCycleOne ¶
func (gp *GPU) RunCycleOne()
RunCycleOne does one cycle of updating in an optimized manner using Events to sequence each of the pipeline calls. It is for CycleByCycle mode and syncs back full state every cycle.
func (*GPU) RunCycleOneCmd ¶
func (gp *GPU) RunCycleOneCmd() vk.CommandBuffer
RunCycleOneCmd returns commands to do one cycle of updating in an optimized manner using Events to sequence each of the pipeline calls. It is for CycleByCycle mode and syncs back full state every cycle.
func (*GPU) RunCycleSeparateFuns ¶
func (gp *GPU) RunCycleSeparateFuns()
RunCycleSeparateFuns does one cycle of updating in a very slow manner that allows timing to be recorded for each function call, for profiling.
func (*GPU) RunCycles ¶
func (gp *GPU) RunCycles()
RunCycles does multiple cycles of updating in one chunk
func (*GPU) RunCyclesCmd ¶
func (gp *GPU) RunCyclesCmd() vk.CommandBuffer
RunCyclesCmd returns the RunCycles commands to do multiple cycles of updating in one chunk
func (*GPU) RunDWt ¶
func (gp *GPU) RunDWt()
RunDWt runs the DWt shader to compute weight changes. The caller must check the On flag before running this, to use CPU vs. GPU
func (*GPU) RunDWtCmd ¶
func (gp *GPU) RunDWtCmd() vk.CommandBuffer
RunDWtCmd returns the commands to run the DWt shader to compute weight changes.
func (*GPU) RunMinusPhase ¶
func (gp *GPU) RunMinusPhase()
RunMinusPhase runs the MinusPhase shader to update snapshot variables at the end of the minus phase. All non-synapse state is copied back down after this, so it is available for action calls. The caller must check the On flag before running this, to use CPU vs. GPU.
func (*GPU) RunMinusPhaseCmd ¶
func (gp *GPU) RunMinusPhaseCmd() vk.CommandBuffer
RunMinusPhaseCmd returns the commands to run the MinusPhase shader to update snapshot variables at the end of the minus phase. All non-synapse state is copied back down after this, so it is available for action calls
func (*GPU) RunNewState ¶
func (gp *GPU) RunNewState()
RunNewState runs the NewState shader to initialize state at start of new ThetaCycle trial. The caller must check the On flag before running this, to use CPU vs. GPU
func (*GPU) RunNewStateCmd ¶
func (gp *GPU) RunNewStateCmd() vk.CommandBuffer
RunNewStateCmd returns the commands to run the NewState shader to update variables at the start of a new trial.
func (*GPU) RunPipelineMemWait ¶
func (gp *GPU) RunPipelineMemWait(cmd vk.CommandBuffer, name string, n int)
RunPipelineMemWait records command to run given pipeline with a WaitMemWriteRead after it, so subsequent pipeline run will have access to values updated by this command.
func (*GPU) RunPipelineNoWait ¶
func (gp *GPU) RunPipelineNoWait(cmd vk.CommandBuffer, name string, n int)
RunPipelineNoWait records command to run given pipeline without any waiting after it for writes to complete. This should be the last command in the sequence.
func (*GPU) RunPipelineOffset ¶
func (gp *GPU) RunPipelineOffset(cmd vk.CommandBuffer, name string, n, off int)
RunPipelineOffset records command to run given pipeline with a push constant offset for the starting index to compute. This is needed when the total number of dispatch indexes exceeds GPU.MaxComputeWorkGroupCount1D. Does NOT wait for writes, assuming a parallel launch of all.
func (*GPU) RunPipelineWait ¶
RunPipelineWait runs given pipeline in "single shot" mode, which is maximally inefficient if multiple commands need to be run. This is the only mode in which timer information is available.
func (*GPU) RunPlusPhase ¶
func (gp *GPU) RunPlusPhase()
RunPlusPhase runs the PlusPhase shader to update snapshot variables and do additional stats-level processing at end of the plus phase. All non-synapse state is copied back down after this. The caller must check the On flag before running this, to use CPU vs. GPU
func (*GPU) RunPlusPhaseCmd ¶
func (gp *GPU) RunPlusPhaseCmd() vk.CommandBuffer
RunPlusPhaseCmd returns the commands to run the PlusPhase shader to update snapshot variables and do additional stats-level processing at end of the plus phase. All non-synapse state is copied back down after this.
func (*GPU) RunPlusPhaseStart ¶
func (gp *GPU) RunPlusPhaseStart()
RunPlusPhaseStart runs the PlusPhaseStart shader, which does updating at the start of the plus phase: it applies Target inputs as External inputs.
func (*GPU) RunWtFromDWt ¶
func (gp *GPU) RunWtFromDWt()
RunWtFromDWt runs the WtFromDWt shader to update weights from weight changes. The caller must check the On flag before running this, to use CPU vs. GPU.
func (*GPU) RunWtFromDWtCmd ¶
func (gp *GPU) RunWtFromDWtCmd() vk.CommandBuffer
RunWtFromDWtCmd returns the commands to run the WtFromDWt shader to update weights from weight changes. This also syncs neuron state from CPU -> GPU because TrgAvgFromD has updated that state.
func (*GPU) SetContext ¶
SetContext sets our context to the given context and syncs it to the GPU. Typically a single context is used, as it must be synced into the GPU. The GPU never writes to the CPU.
func (*GPU) StartRun ¶
func (gp *GPU) StartRun(cmd vk.CommandBuffer)
StartRun resets the given command buffer in preparation for recording commands for a multi-step run. It is much more efficient to record all commands to one buffer, and use Events to synchronize the steps between them, rather than using semaphores. The submit call is by far the most expensive so that should only happen once!
func (*GPU) SynDataNs ¶
SynDataNs returns the numbers for processing SynapseCas vars = Synapses * MaxData. Can exceed thread count limit and require multiple command launches with different offsets. The offset is in terms of synapse index, so everything is computed in terms of synapse indexes, with MaxData then multiplied to get final values. nCmd = number of command launches, nPer = number of synapses per cmd, nLast = number of synapses for last command launch.
func (*GPU) SyncAllFromGPU ¶
func (gp *GPU) SyncAllFromGPU()
SyncAllFromGPU copies all State except Context, plus Synapses, from GPU to CPU. This is called before SlowAdapt, which is run CPU-side.
func (*GPU) SyncAllToGPU ¶
func (gp *GPU) SyncAllToGPU()
SyncAllToGPU copies LayerValues, Pools, Neurons, Synapses to GPU. Calls SyncMemToGPU -- use when this is the only copy taking place.
func (*GPU) SyncContextFromGPU ¶
func (gp *GPU) SyncContextFromGPU()
SyncContextFromGPU copies Context from GPU to CPU. This is done at the end of each cycle to get state back from GPU for CPU-side computations. Use only when only thing being copied -- more efficient to get all at once. e.g. see SyncStateFromGPU
func (*GPU) SyncContextToGPU ¶
func (gp *GPU) SyncContextToGPU()
SyncContextToGPU copies current context to GPU from CPU. Calls SyncMemToGPU -- use when this is the only copy taking place. See SetContext if there is a new one.
func (*GPU) SyncGBufToGPU ¶
func (gp *GPU) SyncGBufToGPU()
SyncGBufToGPU copies the GBuf and GSyns memory to the GPU.
func (*GPU) SyncLayerStateFromGPU ¶
func (gp *GPU) SyncLayerStateFromGPU()
SyncLayerStateFromGPU copies Context, LayerValues, and Pools from GPU to CPU. This is the main GPU->CPU sync step, automatically called after each Cycle.
func (*GPU) SyncLayerValuesFromGPU ¶
func (gp *GPU) SyncLayerValuesFromGPU()
SyncLayerValuesFromGPU copies LayerValues from GPU to CPU. This is done at the end of each cycle to get state back from staging for CPU-side computations. Use only when only thing being copied -- more efficient to get all at once. e.g. see SyncStateFromGPU
func (*GPU) SyncLayerValuesToGPU ¶
func (gp *GPU) SyncLayerValuesToGPU()
SyncLayerValuesToGPU copies LayerValues to GPU from CPU. Calls SyncMemToGPU -- use when this is the only copy taking place.
func (*GPU) SyncMemToGPU ¶
func (gp *GPU) SyncMemToGPU()
SyncMemToGPU synchronizes any staging memory buffers that have been updated with a Copy function, actually sending the updates from the staging -> GPU. The CopyTo commands just copy Network-local data to a staging buffer, and this command then actually moves that onto the GPU. In unified GPU memory architectures, this staging buffer is actually the same one used directly by the GPU -- otherwise it is a separate staging buffer.
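For example, the two-step copy-then-sync pattern, and the convenience wrapper that combines them when neurons are the only thing being synced:

	gp.CopyNeuronsToStaging() // stage neuron state from the CPU-side Network
	gp.SyncMemToGPU()         // push all updated staging buffers to the GPU

	// equivalently, when this is the only copy taking place:
	gp.SyncNeuronsToGPU()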
func (*GPU) SyncNeuronsFromGPU ¶
func (gp *GPU) SyncNeuronsFromGPU()
SyncNeuronsFromGPU copies Neurons from GPU to CPU. Use only when only thing being copied -- more efficient to get all at once. e.g. see SyncStateFromGPU
func (*GPU) SyncNeuronsToGPU ¶
func (gp *GPU) SyncNeuronsToGPU()
SyncNeuronsToGPU copies neuron state up to GPU from CPU. Calls SyncMemToGPU -- use when this is the only copy taking place.
func (*GPU) SyncParamsToGPU ¶
func (gp *GPU) SyncParamsToGPU()
SyncParamsToGPU copies the LayerParams and PathParams to the GPU from CPU. Calls SyncMemToGPU -- use when this is the only copy taking place.
func (*GPU) SyncPoolsFromGPU ¶
func (gp *GPU) SyncPoolsFromGPU()
SyncPoolsFromGPU copies Pools from GPU to CPU. Use only when only thing being copied -- more efficient to get all at once. e.g. see SyncStateFromGPU
func (*GPU) SyncPoolsToGPU ¶
func (gp *GPU) SyncPoolsToGPU()
SyncPoolsToGPU copies Pools to GPU from CPU. Calls SyncMemToGPU -- use when this is the only copy taking place.
func (*GPU) SyncRegionStruct ¶
SyncRegionStruct returns the SyncRegion with error panic
func (*GPU) SyncRegionSynCas ¶
SyncRegionSynCas returns the SyncRegion with error panic
func (*GPU) SyncRegionSyns ¶
SyncRegionSyns returns the SyncRegion with error panic
func (*GPU) SyncStateFromGPU ¶
func (gp *GPU) SyncStateFromGPU()
SyncStateFromGPU copies Neurons, LayerValues, and Pools from GPU to CPU. This is the main GPU->CPU sync step, automatically called in PlusPhase.
func (*GPU) SyncStateGBufToGPU ¶
func (gp *GPU) SyncStateGBufToGPU()
SyncStateGBufToGPU copies LayerValues, Pools, Neurons, GBuf state to GPU. This is typically sufficient for most syncing -- only missing the Synapses, which must be copied separately. Calls SyncMemToGPU -- use when this is the only copy taking place.
func (*GPU) SyncStateToGPU ¶
func (gp *GPU) SyncStateToGPU()
SyncStateToGPU copies LayerValues, Pools, Neurons state to GPU. This is typically sufficient for most syncing -- only missing the Synapses, which must be copied separately. Calls SyncMemToGPU -- use when this is the only copy taking place.
func (*GPU) SyncSynCaFromGPU ¶
func (gp *GPU) SyncSynCaFromGPU()
SyncSynCaFromGPU copies the SynCa variables from GPU to CPU, which are per-Di (even larger). This is only used for GUI viewing -- SynCa vars otherwise managed entirely on GPU. Use only when this is the only thing being copied -- more efficient to get all at once.
func (*GPU) SyncSynCaToGPU ¶
func (gp *GPU) SyncSynCaToGPU()
SyncSynCaToGPU copies the SynCa variables to GPU, which are per-Di (even larger). This is only used for initialization -- SynCa vars otherwise managed entirely on GPU. Calls SyncMemToGPU -- use when this is the only copy taking place.
func (*GPU) SyncSynapsesFromGPU ¶
func (gp *GPU) SyncSynapsesFromGPU()
SyncSynapsesFromGPU copies Synapses from GPU to CPU. Does not copy SynCa synapse state -- see SynCa methods. Use only when only thing being copied -- more efficient to get all at once.
func (*GPU) SyncSynapsesToGPU ¶
func (gp *GPU) SyncSynapsesToGPU()
SyncSynapsesToGPU copies the synapse memory to GPU (large). This is not typically needed except when weights are initialized or for the Slow weight update processes that are not on GPU. Calls SyncMemToGPU -- use when this is the only copy taking place.
func (*GPU) TestSynCaCmd ¶
func (gp *GPU) TestSynCaCmd() vk.CommandBuffer
type GScaleValues ¶
type GScaleValues struct { // scaling factor for integrating synaptic input conductances (G's), originally computed as a function of sending layer activity and number of connections, and typically adapted from there -- see Path.PathScale adapt params Scale float32 `edit:"-"` // normalized relative proportion of total receiving conductance for this pathway: PathScale.Rel / sum(PathScale.Rel across relevant paths) Rel float32 `edit:"-"` // contains filtered or unexported fields }
GScaleValues holds the conductance scaling values. These are computed once at start and remain constant thereafter, and therefore belong on Params and not on PathValues.
type GiveUpParams ¶
type GiveUpParams struct { // threshold on GiveUp probability, below which no give up is triggered ProbThr float32 `default:"0.5"` // minimum GiveUpSum value, which is the denominator in the sigmoidal function. // This minimum prevents division by zero and any other degenerate values. MinGiveUpSum float32 `default:"0.1"` // the factor multiplying utility values: cost and expected positive outcome Utility float32 `default:"1"` // the factor multiplying timing values from VSPatch Timing float32 `default:"2"` // the factor multiplying progress values based on time-integrated progress // toward the goal Progress float32 `default:"1"` // minimum utility cost and reward estimate values -- when they are below // these levels (at the start) then utility is effectively neutral, // so the other factors take precedence. MinUtility float32 `default:"0.2"` // maximum VSPatchPosSum for normalizing the value for give-up weighing VSPatchSumMax float32 `default:"1"` // maximum VSPatchPosVar for normalizing the value for give-up weighing VSPatchVarMax float32 `default:"0.5"` // time constant for integrating the ProgressRate // values over time ProgressRateTau float32 `default:"2"` // 1/tau ProgressRateDt float32 `display:"-"` }
GiveUpParams are parameters for computing when to give up, based on Utility, Timing and Progress factors.
func (*GiveUpParams) Defaults ¶
func (gp *GiveUpParams) Defaults()
func (*GiveUpParams) Prob ¶
Prob returns the probability of giving up, and the discrete bool give-up decision, based on the given sums of continue and give-up factors.
func (*GiveUpParams) Sums ¶
func (gp *GiveUpParams) Sums(ctx *Context, di uint32) (cnSum, guSum float32)
Sums computes the summed weighting factors that drive continue and give up contributions to the probability function.
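Putting Sums and Prob together, the give-up decision presumably looks roughly like this (a sketch; the ProbThr gating and the random draw are assumptions based on the field descriptions above and the GiveUpProb formula in GlobalVars below; rand is math/rand):

	cnSum, guSum := gp.Sums(ctx, di)
	if guSum < gp.MinGiveUpSum {
		guSum = gp.MinGiveUpSum // avoid a degenerate denominator
	}
	prob := 1 / (1 + cnSum/guSum) // GiveUpProb as documented in GlobalVars
	giveUp := prob > gp.ProbThr && rand.Float32() < prob
	_ = giveUp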
func (*GiveUpParams) Update ¶
func (gp *GiveUpParams) Update()
type GlobalVars ¶
type GlobalVars int32 //enums:enum
GlobalVars are network-wide variables, such as neuromodulators, reward, drives, etc including the state for the Rubicon phasic dopamine model. These are stored in the Network.Globals float32 slice and corresponding global GPU slice.
const (
	// Rew is the external reward value. Must also set HasRew flag when Rew is set,
	// otherwise it is ignored. This is computed by the Rubicon algorithm from US
	// inputs set by Net.Rubicon methods, and can be directly set in simpler RL cases.
	GvRew GlobalVars = iota

	// HasRew must be set to true (1) when an external reward / US input is present,
	// otherwise Rew is ignored. This is also set when Rubicon BOA model gives up.
	// This drives ACh release in the Rubicon model.
	GvHasRew

	// RewPred is the reward prediction, computed by a special reward prediction layer,
	// e.g., the VSPatch layer in the Rubicon algorithm.
	GvRewPred

	// PrevPred is previous time step reward prediction, e.g., for TDPredLayer
	GvPrevPred

	// HadRew is HasRew state from the previous trial, copied from HasRew in NewState.
	// Used for updating Effort, Urgency at start of new trial.
	GvHadRew

	// DA is phasic dopamine that drives learning moreso than performance,
	// representing reward prediction error, signaled as phasic
	// increases or decreases in activity relative to a tonic baseline, which is
	// represented by a value of 0. Released by the VTA (ventral tegmental area),
	// or SNc (substantia nigra pars compacta).
	GvDA

	// DAtonic is tonic dopamine, which has modulatory instead of learning effects.
	// Increases can drive greater propensity to engage in activities by biasing Go
	// vs No pathways in the basal ganglia, for example as a function of Urgency.
	GvDAtonic

	// ACh is acetylcholine, activated by salient events, particularly at the onset
	// of a reward / punishment outcome (US), or onset of a conditioned stimulus (CS).
	// Driven by BLA -> PPtg that detects changes in BLA activity, via LDTLayer type.
	GvACh

	// NE is norepinephrine -- not yet in use
	GvNE

	// Ser is serotonin -- not yet in use
	GvSer

	// AChRaw is raw ACh value used in updating global ACh value by LDTLayer.
	GvAChRaw

	// GoalMaint is the normalized (0-1) goal maintenance activity,
	// set in ApplyRubicon function at start of trial.
	// Drives top-down inhibition of LDT layer / ACh activity.
	GvGoalMaint

	// VSMatrixJustGated is VSMatrix just gated (to engage goal maintenance
	// in PFC areas), set at end of plus phase. This excludes any gating
	// happening at time of US.
	GvVSMatrixJustGated

	// VSMatrixHasGated is VSMatrix has gated since the last time HasRew was set
	// (US outcome received or expected one failed to be received).
	GvVSMatrixHasGated

	// CuriosityPoolGated is true if VSMatrixJustGated and the first pool
	// representing the curiosity / novelty drive gated. This can change the
	// giving up Effort.Max parameter.
	GvCuriosityPoolGated

	// Time is the raw time counter, incrementing upward during goal engaged window.
	// This is also copied directly into NegUS[0] which tracks time, but we maintain
	// a separate effort value to make it clearer.
	GvTime

	// Effort is the raw effort counter, incrementing upward for each effort step
	// during goal engaged window.
	// This is also copied directly into NegUS[1] which tracks effort, but we maintain
	// a separate effort value to make it clearer.
	GvEffort

	// UrgencyRaw is the raw effort for urgency, incrementing upward from effort
	// increments per step when _not_ goal engaged.
	GvUrgencyRaw

	// Urgency is the overall urgency activity level (normalized 0-1),
	// computed from logistic function of GvUrgencyRaw. This drives DAtonic
	// activity to increasingly bias Go firing.
	GvUrgency

	// HasPosUS indicates has positive US on this trial,
	// drives goal accomplishment logic and gating.
	GvHasPosUS

	// HadPosUS is state from the previous trial (copied from HasPosUS in NewState).
	GvHadPosUS

	// NegUSOutcome indicates that a phasic negative US stimulus was experienced,
	// driving phasic ACh, VSMatrix gating to reset current goal engaged plan (if any),
	// and phasic dopamine based on the outcome.
	GvNegUSOutcome

	// HadNegUSOutcome is state from the previous trial (copied from NegUSOutcome
	// in NewState)
	GvHadNegUSOutcome

	// PVposSum is the total weighted positive valence primary value
	// = sum of Weight * USpos * Drive
	GvPVposSum

	// PVpos is the normalized positive valence primary value
	// = (1 - 1/(1+PVposGain * PVposSum))
	GvPVpos

	// PVnegSum is the total weighted negative valence primary values including costs
	// = sum of Weight * Cost + Weight * USneg
	GvPVnegSum

	// PVneg is the normalized negative valence primary values, including costs
	// = (1 - 1/(1+PVnegGain * PVnegSum))
	GvPVneg

	// PVposEst is the estimated PVpos final outcome value
	// decoded from the network PVposFinal layer
	GvPVposEst

	// PVposVar is the estimated variance or uncertainty in the PVpos
	// final outcome value decoded from the network PVposFinal layer
	GvPVposVar

	// PVnegEst is the estimated PVneg final outcome value
	// decoded from the network PVnegFinal layer
	GvPVnegEst

	// PVnegVar is the estimated variance or uncertainty in the PVneg
	// final outcome value decoded from the network PVnegFinal layer
	GvPVnegVar

	// GoalDistEst is the estimate of distance to the goal, in trial step units,
	// decreasing down to 0 as the goal approaches.
	GvGoalDistEst

	// GoalDistPrev is the previous estimate of distance to the goal,
	// in trial step units, decreasing down to 0 as the goal approaches.
	GvGoalDistPrev

	// ProgressRate is the negative time average change in GoalDistEst,
	// i.e., positive values indicate continued approach to the goal,
	// while negative values represent moving away from the goal.
	GvProgressRate

	// GiveUpUtility is total GiveUp weight as a function of Cost
	GvGiveUpUtility

	// ContUtility is total Continue weight as a function of expected positive outcome PVposEst
	GvContUtility

	// GiveUpTiming is total GiveUp weight as a function of VSPatchPosSum * (1 - VSPatchPosVar)
	GvGiveUpTiming

	// ContTiming is total Continue weight as a function of (1 - VSPatchPosSum) * VSPatchPosVar
	GvContTiming

	// GiveUpProgress is total GiveUp weight as a function of ProgressRate
	GvGiveUpProgress

	// ContProgress is total Continue weight as a function of ProgressRate
	GvContProgress

	// GiveUpSum is total GiveUp weight: Utility + Timing + Progress
	GvGiveUpSum

	// ContSum is total Continue weight: Utility + Timing + Progress
	GvContSum

	// GiveUpProb is the probability of giving up: 1 / (1 + (GvContSum / GvGiveUpSum))
	GvGiveUpProb

	// GiveUp is true if a reset was triggered probabilistically based on GiveUpProb
	GvGiveUp

	// GaveUp is copy of GiveUp from previous trial
	GvGaveUp

	// VSPatchPos is the net shunting input from VSPatch (PosD1, named PVi in original Rubicon)
	// computed as the Max of US-specific VSPatch saved values, subtracting D1 - D2.
	// This is also stored as GvRewPred.
	GvVSPatchPos

	// VSPatchPosThr is a thresholded version of GvVSPatchPos,
	// applying Rubicon.LHb.VSPatchNonRewThr threshold for non-reward trials.
	// This is the version used for computing DA.
	GvVSPatchPosThr

	// VSPatchPosRPE is the reward prediction error for the VSPatchPos reward prediction
	// without any thresholding applied, and only for PV events.
	// This is used to train the VSPatch, assuming a local feedback circuit that does
	// not have the effective thresholding used for the broadcast critic signal that
	// trains the rest of the network.
	GvVSPatchPosRPE

	// VSPatchPosSum is the sum of VSPatchPos over goal engaged trials,
	// representing the integrated prediction that the US is going to occur
	GvVSPatchPosSum

	// VSPatchPosPrev is the previous trial VSPatchPosSum
	GvVSPatchPosPrev

	// VSPatchPosVar is the integrated temporal variance of VSPatchPos over goal engaged trials,
	// which determines when the VSPatchPosSum has stabilized
	GvVSPatchPosVar

	// LHbDip is the computed LHb activity level that drives dipping / pausing of DA firing,
	// when VSPatch pos prediction > actual PV reward drive
	// or PVneg > PVpos
	GvLHbDip

	// LHbBurst is computed LHb activity level that drives bursts of DA firing,
	// when actual PV reward drive > VSPatch pos prediction
	GvLHbBurst

	// LHbPVDA is GvLHbBurst - GvLHbDip -- the LHb contribution to DA,
	// reflecting PV and VSPatch (PVi), but not the CS (LV) contributions
	GvLHbPVDA

	// CeMpos is positive valence central nucleus of the amygdala (CeM)
	// LV (learned value) activity, reflecting
	// |BLAposAcqD1 - BLAposExtD2|_+ positively rectified.
	// CeM sets Raw directly. Note that a positive US onset even with no
	// active Drive will be reflected here, enabling learning about unexpected outcomes.
	GvCeMpos

	// CeMneg is negative valence central nucleus of the amygdala (CeM)
	// LV (learned value) activity, reflecting
	// |BLAnegAcqD2 - BLAnegExtD1|_+ positively rectified. CeM sets Raw directly
	GvCeMneg

	// VtaDA is overall dopamine value reflecting all of the different inputs
	GvVtaDA

	// Cost are Time, Effort, etc costs, as normalized version of corresponding raw.
	// NCosts of them
	GvCost

	// CostRaw are raw, linearly incremented negative valence US outcomes,
	// this value is also integrated together with all US vals for PVneg
	GvCostRaw

	// USneg are negative valence US outcomes, normalized version of raw.
	// NNegUSs of them
	GvUSneg

	// USnegRaw are raw, linearly incremented negative valence US outcomes,
	// this value is also integrated together with all US vals for PVneg
	GvUSnegRaw

	// Drives are current drive state, updated with optional homeostatic
	// exponential return to baseline values.
	GvDrives

	// USpos are current positive-valence drive-satisfying input(s)
	// (unconditioned stimuli = US)
	GvUSpos

	// VSPatchD1 is current reward predicting VSPatch (PosD1) values.
	GvVSPatchD1

	// VSPatchD2 is current reward predicting VSPatch (PosD2) values.
	GvVSPatchD2

	// OFCposPTMaint is activity level of given OFCposPT maintenance pool
	// used in anticipating potential USpos outcome value.
	GvOFCposPTMaint

	// VSMatrixPoolGated indicates whether given VSMatrix pool gated
	// this is reset after last goal accomplished -- records gating since then.
	GvVSMatrixPoolGated
)
const GlobalVarsN GlobalVars = 67
GlobalVarsN is the highest valid value for type GlobalVars, plus one.
func GlobalVarsValues ¶
func GlobalVarsValues() []GlobalVars
GlobalVarsValues returns all possible values for the type GlobalVars.
func (GlobalVars) Desc ¶
func (i GlobalVars) Desc() string
Desc returns the description of the GlobalVars value.
func (GlobalVars) Int64 ¶
func (i GlobalVars) Int64() int64
Int64 returns the GlobalVars value as an int64.
func (GlobalVars) MarshalText ¶
func (i GlobalVars) MarshalText() ([]byte, error)
MarshalText implements the encoding.TextMarshaler interface.
func (*GlobalVars) SetInt64 ¶
func (i *GlobalVars) SetInt64(in int64)
SetInt64 sets the GlobalVars value from an int64.
func (*GlobalVars) SetString ¶
func (i *GlobalVars) SetString(s string) error
SetString sets the GlobalVars value from its string representation, and returns an error if the string is invalid.
func (GlobalVars) String ¶
func (i GlobalVars) String() string
String returns the string representation of this GlobalVars value.
func (*GlobalVars) UnmarshalText ¶
func (i *GlobalVars) UnmarshalText(text []byte) error
UnmarshalText implements the encoding.TextUnmarshaler interface.
func (GlobalVars) Values ¶
func (i GlobalVars) Values() []enums.Enum
Values returns all possible values for the type GlobalVars.
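As an illustration of these generated enum methods, the following sketch converts a GlobalVars value to its name and back, and iterates over all defined values. The import path is an assumption and may differ for the version in use.

package main

import (
    "fmt"

    "github.com/emer/axon/v2/axon" // assumed import path; adjust to the version in use
)

func main() {
    // round-trip a value through its string name using the generated methods
    var gv axon.GlobalVars
    if err := gv.SetString(axon.GvDA.String()); err != nil {
        fmt.Println("unknown GlobalVars name:", err)
    }
    fmt.Println(gv.Int64(), gv.String())

    // list every defined global variable with its numeric index
    for _, v := range axon.GlobalVarsValues() {
        fmt.Println(v.Int64(), v.String())
    }
}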
type HebbParams ¶
type HebbParams struct { // if On, then the standard learning rule is replaced with a Hebbian learning rule On slbool.Bool // strength multiplier for hebbian increases, based on R * S * (1-LWt) Up float32 `default:"0.5"` // strength multiplier for hebbian decreases, based on R * (1 - S) * LWt Down float32 `default:"1"` // contains filtered or unexported fields }
HebbParams for optional hebbian learning that replaces the default learning rule, based on S = sending activity, R = receiving activity
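As a rough illustration of how the Up and Down multipliers relate to the quantities named in the field comments (R = receiving activity, S = sending activity, LWt = linear weight), the following sketch computes a hebbian weight change. It is an illustrative reading of the parameters, not the package's actual DWt code, and assumes the axon package is imported.

// hebbDWt sketches a hebbian weight change from sending activity s,
// receiving activity r, and current linear weight lwt, using the Up and
// Down multipliers described for HebbParams. Illustrative only.
func hebbDWt(hp *axon.HebbParams, s, r, lwt float32) float32 {
    inc := hp.Up * r * s * (1 - lwt)   // hebbian increase: R * S * (1-LWt)
    dec := hp.Down * r * (1 - s) * lwt // hebbian decrease: R * (1-S) * LWt
    return inc - dec
}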
func (*HebbParams) Defaults ¶
func (hp *HebbParams) Defaults()
func (*HebbParams) ShouldDisplay ¶
func (hp *HebbParams) ShouldDisplay(field string) bool
func (*HebbParams) Update ¶
func (hp *HebbParams) Update()
type HipConfig ¶
type HipConfig struct { // size of EC2 EC2Size vecint.Vector2i `nest:"+"` // number of EC3 pools (outer dimension) EC3NPool vecint.Vector2i `nest:"+"` // number of neurons in one EC3 pool EC3NNrn vecint.Vector2i `nest:"+"` // number of neurons in one CA1 pool CA1NNrn vecint.Vector2i `nest:"+"` // size of CA3 CA3Size vecint.Vector2i `nest:"+"` // size of DG / CA3 DGRatio float32 `default:"2.236"` // percent connectivity from EC3 to EC2 EC3ToEC2PCon float32 `default:"0.1"` // percent connectivity from EC2 to DG EC2ToDGPCon float32 `default:"0.25"` // percent connectivity from EC2 to CA3 EC2ToCA3PCon float32 `default:"0.25"` // percent connectivity from CA3 to CA1 CA3ToCA1PCon float32 `default:"0.25"` // percent connectivity into CA3 from DG DGToCA3PCon float32 `default:"0.02"` // lateral radius of connectivity in EC2 EC2LatRadius int // lateral gaussian sigma in EC2 for how quickly weights fall off with distance EC2LatSigma float32 // proportion of full mossy fiber strength (PathScale.Rel) for CA3 EDL in training, applied at the start of a trial to reduce DG -> CA3 strength. 1 = fully reduce strength, .5 = 50% reduction, etc MossyDelta float32 `default:"1"` // proportion of full mossy fiber strength (PathScale.Rel) for CA3 EDL in testing, applied during 2nd-3rd quarters to reduce DG -> CA3 strength. 1 = fully reduce strength, .5 = 50% reduction, etc MossyDeltaTest float32 `default:"0.75"` // low theta modulation value for temporal difference EDL -- sets PathScale.Rel on CA1 <-> EC paths consistent with Theta phase model ThetaLow float32 `default:"0.9"` // high theta modulation value for temporal difference EDL -- sets PathScale.Rel on CA1 <-> EC paths consistent with Theta phase model ThetaHigh float32 `default:"1"` // flag for clamping the EC5 from EC5ClampSrc EC5Clamp bool `default:"true"` // source layer for EC5 clamping activations in the plus phase -- biologically it is EC3 but can use an Input layer if available EC5ClampSrc string `default:"EC3"` // clamp the EC5 from EC5ClampSrc during testing as well as training -- this will overwrite any target values that might be used in stats (e.g., in the basic hip example), so it must be turned off there EC5ClampTest bool `default:"true"` // threshold for binarizing EC5 clamp values -- any value above this is clamped to 1, else 0 -- helps produce a cleaner learning signal. Set to 0 to not perform any binarization. EC5ClampThr float32 `default:"0.1"` }
HipConfig has the hippocampus size and connectivity parameters.
type HipPathParams ¶
type HipPathParams struct { // Hebbian learning proportion Hebb float32 `default:"0"` // EDL proportion Err float32 `default:"1"` // proportion of correction to apply to sending average activation for hebbian learning component (0=none, 1=all, .5=half, etc) SAvgCor float32 `default:"0.4:0.8" min:"0" max:"1"` // threshold of sending average activation below which learning does not occur (prevents learning when there is no input) SAvgThr float32 `default:"0.01" min:"0"` // sending layer Nominal (need to manually set it to be the same as the sending layer) SNominal float32 `default:"0.1" min:"0"` // contains filtered or unexported fields }
HipPathParams defines the behavior of hippocampus paths, which have special learning rules.
func (*HipPathParams) Defaults ¶
func (hp *HipPathParams) Defaults()
func (*HipPathParams) Update ¶
func (hp *HipPathParams) Update()
type InhibParams ¶
type InhibParams struct { // layer-level and pool-level average activation initial values and updating / adaptation thereof -- initial values help determine initial scaling factors. ActAvg ActAvgParams `display:"inline"` // inhibition across the entire layer -- inputs generally use Gi = 0.8 or 0.9, 1.3 or higher for sparse layers. If the layer has sub-pools (4D shape) then this is effectively between-pool inhibition. Layer fsfffb.GiParams `display:"inline"` // inhibition within sub-pools of units, for layers with 4D shape -- almost always need this if the layer has pools. Pool fsfffb.GiParams `display:"inline"` }
axon.InhibParams contains all the inhibition computation params and functions for basic Axon. This is included in axon.Layer to support computation. It also includes other misc layer-level params, such as the expected average activation in the layer, which is used for Ge rescaling and potentially for adapting inhibition over time.
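A minimal configuration sketch, following the guidance in the field comments above; the Gi field on fsfffb.GiParams is assumed from standard usage, and the specific values are illustrative.

var ip axon.InhibParams
ip.Layer.Gi = 0.9 // whole-layer (or between-pool) inhibition, e.g., for an input layer
ip.Pool.Gi = 1.1  // within-pool inhibition for a 4D layer with sub-pools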
func (*InhibParams) Defaults ¶
func (ip *InhibParams) Defaults()
func (*InhibParams) Update ¶
func (ip *InhibParams) Update()
type LDTParams ¶
type LDTParams struct { // threshold per input source, on absolute value (magnitude), to count as a significant reward event, which then drives maximal ACh -- set to 0 to disable this nonlinear behavior SrcThr float32 `default:"0.05"` // use the global Context.NeuroMod.HasRew flag -- if there is some kind of external reward being given, then ACh goes to 1, else 0 for this component Rew slbool.Bool `default:"true"` // extent to which active goal maintenance (via Global GoalMaint) // inhibits ACh signals: when goal engaged, distractability is lower. MaintInhib float32 `default:"0.8" max:"1" min:"0"` // idx of Layer to get max activity from -- set during Build from BuildConfig SrcLay1Name if present -- -1 if not used SrcLay1Index int32 `edit:"-"` // idx of Layer to get max activity from -- set during Build from BuildConfig SrcLay2Name if present -- -1 if not used SrcLay2Index int32 `edit:"-"` // idx of Layer to get max activity from -- set during Build from BuildConfig SrcLay3Name if present -- -1 if not used SrcLay3Index int32 `edit:"-"` // idx of Layer to get max activity from -- set during Build from BuildConfig SrcLay4Name if present -- -1 if not used SrcLay4Index int32 `edit:"-"` // contains filtered or unexported fields }
LDTParams compute reward salience as ACh global neuromodulatory signal as a function of the MAX activation of its inputs from salience detecting layers (e.g., the superior colliculus: SC), and whenever there is an external US outcome input (signalled by the global GvHasRew flag). ACh from salience inputs is discounted by GoalMaint activity, reducing distraction when pursuing a goal, but US ACh activity is not so reduced. ACh modulates excitability of goal-gating layers.
func (*LDTParams) ACh ¶
func (lp *LDTParams) ACh(ctx *Context, di uint32, srcLay1Act, srcLay2Act, srcLay3Act, srcLay4Act float32) float32
ACh returns the computed ACh salience value based on given source layer activations and key values from the ctx Context.
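The following sketch illustrates the logic described above and in LDTParams -- max over salience inputs, the SrcThr nonlinearity, discounting by goal maintenance, and maximal ACh when an external reward / US is present. It is illustrative only; see the actual ACh method for the implementation.

// achSketch illustrates the LDT ACh computation described above.
// maxSrcAct is the maximum activity over the configured source layers,
// goalMaint is the GvGoalMaint global, and hasRew is the GvHasRew flag.
func achSketch(lp *axon.LDTParams, maxSrcAct, goalMaint float32, hasRew bool) float32 {
    if hasRew {
        return 1 // an external reward / US (when Rew is enabled) drives maximal ACh
    }
    ach := maxSrcAct
    if ach < lp.SrcThr {
        ach = 0 // below threshold: not counted as a significant salience event
    }
    // active goal maintenance reduces distractability from salience inputs
    return ach * (1 - lp.MaintInhib*goalMaint)
}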
func (*LDTParams) MaxSrcAct ¶
MaxSrcAct returns the updated maxSrcAct value from given source layer activity value.
type LHbParams ¶
type LHbParams struct { // threshold on VSPatch prediction during a non-reward trial VSPatchNonRewThr float32 `default:"0.1"` // gain on the VSPatchD1 - D2 difference to drive the net VSPatch DA // prediction signal, which goes in VSPatchPos and RewPred global variables VSPatchGain float32 `default:"4"` // decay time constant for computing the temporal variance in VSPatch // values over time VSPatchVarTau float32 `default:"2"` // threshold factor that multiplies integrated pvNeg value // to establish a threshold for whether the integrated pvPos value // is good enough to drive overall net positive reward. // If pvPos wins, it is then multiplicatively discounted by pvNeg; // otherwise, pvNeg is discounted by pvPos. NegThr float32 `default:"1"` // gain multiplier on PVpos for purposes of generating bursts // (not for discounting negative dips). BurstGain float32 `default:"1"` // gain multiplier on PVneg for purposes of generating dips // (not for discounting positive bursts). DipGain float32 `default:"1"` // 1/tau VSPatchVarDt float32 `display:"-"` }
LHbParams has values for computing LHb & RMTg which drives dips / pauses in DA firing. LHb handles all US-related (PV = primary value) processing. Positive net LHb activity drives dips / pauses in VTA DA activity, e.g., when predicted pos > actual or actual neg > predicted. Negative net LHb activity drives bursts in VTA DA activity, e.g., when actual pos > predicted (redundant with LV / Amygdala) or "relief" burst when actual neg < predicted.
func (*LHbParams) DAFromPVs ¶
func (lh *LHbParams) DAFromPVs(pvPos, pvNeg, vsPatchPos, vsPatchPosSum float32) (burst, dip, da, rew float32)
DAFromPVs computes the overall PV DA in terms of LHb burst and dip activity from given pvPos, pvNeg, and vsPatchPos values. Also returns the net "reward" value as the discounted PV value, separate from the vsPatchPos prediction error factor.
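A usage sketch based on the documented signature; the input values are arbitrary illustrative numbers, and the parameter settings are the documented defaults from the struct tags above.

// daFromPVsExample sketches a call to DAFromPVs with illustrative inputs.
func daFromPVsExample() float32 {
    var lh axon.LHbParams
    lh.VSPatchNonRewThr = 0.1 // documented defaults
    lh.VSPatchGain = 4
    lh.NegThr = 1
    lh.BurstGain = 1
    lh.DipGain = 1
    // pvPos, pvNeg are normalized primary values; vsPatchPos is the shunting
    // prediction and vsPatchPosSum its sum over goal engaged trials.
    burst, dip, da, rew := lh.DAFromPVs(0.8, 0.1, 0.3, 0.5)
    _, _, _ = burst, dip, rew
    return da
}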
func (*LHbParams) DAforNoUS ¶
DAforNoUS computes the LHb response when there is _NOT_ a primary positive reward value or a give-up state. In this case, inhibition of VS via tonic ACh is assumed to prevent activity of PVneg (and there is no PVpos). Because the LHb only responds when it decides to GiveUp, there is no response in this case. DA is instead driven by CS-based computation, in rubicon_layers.go, VTAParams.VTADA
func (*LHbParams) DAforUS ¶
func (lh *LHbParams) DAforUS(ctx *Context, di uint32, pvPos, pvNeg, vsPatchPos, vsPatchPosSum float32) float32
DAforUS computes the overall LHb Dip or Burst (one is always 0), and PVDA ~= Burst - Dip, for case when there is a primary positive reward value or a give-up state has triggered. Returns the overall net reward magnitude, prior to VSPatch discounting.
type LRateMod ¶
type LRateMod struct { // toggle use of this modulation factor On slbool.Bool // baseline learning rate -- what you get for correct cases Base float32 `min:"0" max:"1"` // defines the range over which modulation occurs for the modulator factor -- Min and below get the Base level of learning rate modulation, Max and above get a modulation of 1 Range minmax.F32 // contains filtered or unexported fields }
LRateMod implements global learning rate modulation, based on a performance-based factor, for example error. Increasing levels of the factor = higher learning rate. This can be added to a Sim and called prior to DWt() to dynamically change lrate based on overall network performance.
func (*LRateMod) LRateMod ¶
LRateMod calls LRateMod on the given network, using the computed Mod factor based on the given normalized modulation factor (0 = no error = Base learning rate, 1 = maximum error). Returns the modulation factor applied.
func (*LRateMod) Mod ¶
Mod returns the learning rate modulation factor as a function of any kind of normalized modulation factor, e.g., an error measure. If fact <= Range.Min, returns Base. If fact >= Range.Max, returns 1. Otherwise, returns a proportional value between Base and 1.
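A sketch of the piecewise mapping described above, written out explicitly; it is an illustrative restatement, not the package's implementation.

// modSketch maps a normalized factor through the described LRateMod function:
// fact <= Range.Min yields Base, fact >= Range.Max yields 1, and values in
// between interpolate linearly from Base to 1.
func modSketch(lr *axon.LRateMod, fact float32) float32 {
    if fact <= lr.Range.Min {
        return lr.Base
    }
    if fact >= lr.Range.Max {
        return 1
    }
    prop := (fact - lr.Range.Min) / (lr.Range.Max - lr.Range.Min)
    return lr.Base + prop*(1-lr.Base)
}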
func (*LRateMod) ShouldDisplay ¶
type LRateParams ¶
type LRateParams struct { // base learning rate for this pathway -- can be modulated by other factors below -- for larger networks, use slower rates such as 0.04, smaller networks can use faster 0.2. Base float32 `default:"0.04,0.1,0.2"` // scheduled learning rate multiplier, simulating reduction in plasticity over aging Sched float32 // dynamic learning rate modulation due to neuromodulatory or other such factors Mod float32 // effective actual learning rate multiplier used in computing DWt: Eff = eMod * Sched * Base Eff float32 `edit:"-"` }
LRateParams manages learning rate parameters
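Per the Eff field description above, the effective learning rate is the product of the other factors; a one-line sketch (illustrative only):

// effLRate composes the effective learning rate: Eff = Mod * Sched * Base.
func effLRate(ls *axon.LRateParams) float32 {
    return ls.Mod * ls.Sched * ls.Base
}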
func (*LRateParams) Defaults ¶
func (ls *LRateParams) Defaults()
func (*LRateParams) Init ¶
func (ls *LRateParams) Init()
Init initializes modulation values back to 1 and updates Eff
func (*LRateParams) Update ¶
func (ls *LRateParams) Update()
func (*LRateParams) UpdateEff ¶
func (ls *LRateParams) UpdateEff()
type LaySpecialValues ¶
type LaySpecialValues struct { // one value V1 float32 `edit:"-"` // one value V2 float32 `edit:"-"` // one value V3 float32 `edit:"-"` // one value V4 float32 `edit:"-"` }
LaySpecialValues holds special values used to communicate to other layers based on neural values, used for special algorithms such as RL where some of the computation is done algorithmically.
func (*LaySpecialValues) Init ¶
func (lv *LaySpecialValues) Init()
type Layer ¶
type Layer struct { LayerBase // all layer-level parameters -- these must remain constant once configured Params *LayerParams }
axon.Layer implements the basic Axon spiking activation function, and manages learning in the pathways.
func (*Layer) AnyGated ¶
AnyGated returns true if the layer-level pool Gated flag is true, which indicates whether any of the pools gated.
func (*Layer) ApplyExt ¶
ApplyExt applies external input in the form of a tensor.Float32 or Float64. Negative values and NaNs are not valid, and will be interpreted as missing inputs. The given data index di is the data parallel index (0 <= di < MaxData): inputs must be presented separately for each data parallel set. If the dimensionality of the tensor matches that of the layer, and is 2D or 4D, then each dimension is iterated separately, so any mismatch preserves dimensional structure. Otherwise, the flat 1D view of the tensor is used. If the layer is a Target or Compare layer type, then the input goes in Target, otherwise it goes in Ext. Also sets the Exts values on the layer, which are used for the GPU version and require calling the network ApplyExts() method; this is a no-op for the CPU version.
func (*Layer) ApplyExt1D ¶
ApplyExt1D applies external input in the form of a flat 1-dimensional slice of floats. If the layer is a Target or Compare layer type, then it goes in Target, otherwise it goes in Ext.
func (*Layer) ApplyExt1D32 ¶
ApplyExt1D32 applies external input in the form of a flat 1-dimensional slice of float32s. If the layer is a Target or Compare layer type, then it goes in Target, otherwise it goes in Ext.
func (*Layer) ApplyExt1DTsr ¶
ApplyExt1DTsr applies external input using the 1D flat interface into the tensor. If the layer is a Target or Compare layer type, then it goes in Target, otherwise it goes in Ext.
func (*Layer) ApplyExt2D ¶
ApplyExt2D applies 2D tensor external input
func (*Layer) ApplyExt2Dto4D ¶
ApplyExt2Dto4D applies 2D tensor external input to a 4D layer
func (*Layer) ApplyExt4D ¶
ApplyExt4D applies 4D tensor external input
func (*Layer) ApplyExtFlags ¶
func (ly *Layer) ApplyExtFlags() (clearMask, setMask NeuronFlags, toTarg bool)
ApplyExtFlags gets the clear mask and set mask for updating neuron flags based on layer type, and whether input should be applied to Target (else Ext)
func (*Layer) ApplyExtValue ¶
func (ly *Layer) ApplyExtValue(ctx *Context, lni, di uint32, val float32, clearMask, setMask NeuronFlags, toTarg bool)
ApplyExtValue applies the given external value to the given neuron using clearMask, setMask, and toTarg from ApplyExtFlags. Also saves the value in Exts for potential use by the GPU.
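A usage sketch combining the documented ApplyExtFlags and ApplyExtValue signatures to apply a single external value to one neuron; lni is the layer-relative neuron index and di the data parallel index.

// applyOneValue applies one external input value to one neuron in ly.
func applyOneValue(ctx *axon.Context, ly *axon.Layer, lni, di uint32, val float32) {
    clearMask, setMask, toTarg := ly.ApplyExtFlags()
    ly.ApplyExtValue(ctx, lni, di, val, clearMask, setMask, toTarg)
}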
func (*Layer) AsAxon ¶
AsAxon returns this layer as an axon.Layer -- all derived layers must redefine this to return the base Layer type, so that the AxonLayer interface does not need to include accessors to all the basic stuff.
func (*Layer) AvgDifFromTrgAvg ¶
AvgDifFromTrgAvg updates neuron-level AvgDif values from AvgPct - TrgAvg which is then used for synaptic scaling of LWt values in Path SynScale.
func (*Layer) AvgMaxVarByPool ¶
AvgMaxVarByPool returns the average and maximum value of given variable for given pool index (0 = entire layer, 1.. are subpools for 4D only). Uses fast index-based variable access.
func (*Layer) BGThalDefaults ¶
func (ly *Layer) BGThalDefaults()
func (*Layer) BLADefaults ¶
func (ly *Layer) BLADefaults()
func (*Layer) BetweenLayerGi ¶
BetweenLayerGi computes inhibition Gi between layers
func (*Layer) BetweenLayerGiMax ¶
BetweenLayerGiMax returns max gi value for input maxGi vs the given layIndex layer
func (*Layer) CTDefParamsFast ¶
func (ly *Layer) CTDefParamsFast()
CTDefParamsFast sets fast time-integration parameters for CTLayer. This is what works best in the deep_move 1 trial history case, vs Medium and Long
func (*Layer) CTDefParamsLong ¶
func (ly *Layer) CTDefParamsLong()
CTDefParamsLong sets long time-integration parameters for CTLayer. This is what works best in the deep_music test case integrating over long time windows, compared to Medium and Fast.
func (*Layer) CTDefParamsMedium ¶
func (ly *Layer) CTDefParamsMedium()
CTDefParamsMedium sets medium time-integration parameters for CTLayer. This is what works best in the FSA test case, compared to Fast (deep_move) and Long (deep_music) time integration.
func (*Layer) CeMDefaults ¶
func (ly *Layer) CeMDefaults()
func (*Layer) ClearTargExt ¶
ClearTargExt clears external inputs Ext that were set from target values Target. This can be called to simulate alpha cycles within theta cycles, for example.
func (*Layer) CorSimFromActs ¶
CorSimFromActs computes the correlation similarity (centered cosine aka normalized dot product) in activation state between minus and plus phases.
func (*Layer) CostEst ¶
CostEst returns the estimated computational cost associated with this layer, separated by neuron-level and synapse-level, in arbitrary units where cost per synapse is 1. Neuron-level computation is more expensive but there are typically many fewer neurons, so in larger networks, synaptic costs tend to dominate. Neuron cost is estimated from TimerReport output for large networks.
func (*Layer) CycleNeuron ¶
CycleNeuron does one cycle (msec) of updating at the neuron level Called directly by Network, iterates over data.
func (*Layer) CyclePost ¶
CyclePost is called after the standard Cycle update, as a separate network layer loop. This is reserved for any kind of special ad-hoc types that need to do something special after Spiking is finally computed and Sent. Typically used for updating global values in the Context state, such as updating a neuromodulatory signal such as dopamine. Any updates here must also be done in gpu_hlsl/gpu_cyclepost.hlsl
func (*Layer) DTrgSubMean ¶
DTrgSubMean subtracts the mean from DTrgAvg values. Called by TrgAvgFromD.
func (*Layer) DWt ¶
DWt computes the weight change (learning), based on synaptically integrated spiking, computed at the Theta cycle interval. This is the trace version for hidden units, and uses syn CaP - CaD for targets.
func (*Layer) DWtSubMean ¶
DWtSubMean computes subtractive normalization of the DWts
func (*Layer) DecayState ¶
DecayState decays activation state by given proportion (default decay values are ly.Params.Acts.Decay.Act, Glong)
func (*Layer) DecayStateLayer ¶
DecayStateLayer does layer-level decay, but not neuron level
func (*Layer) DecayStateNeuronsAll ¶
DecayStateNeuronsAll decays neural activation state by given proportion (default decay values are ly.Params.Acts.Decay.Act, Glong, AHP) for all data parallel indexes. Does not decay pool or layer state. This is used for minus phase of Pulvinar layers to clear state in prep for driver plus phase.
func (*Layer) DecayStatePool ¶
DecayStatePool decays activation state by given proportion in given sub-pool index (0 based)
func (*Layer) GInteg ¶
func (ly *Layer) GInteg(ctx *Context, ni, di uint32, pl *Pool, vals *LayerValues)
GInteg integrates conductances G over time (Ge, NMDA, etc). calls SpecialGFromRawSyn, GiInteg
func (*Layer) GPDefaults ¶
func (ly *Layer) GPDefaults()
func (*Layer) GPPostBuild ¶
func (ly *Layer) GPPostBuild()
func (*Layer) GPiDefaults ¶
func (ly *Layer) GPiDefaults()
func (*Layer) GatedFromSpkMax ¶
GatedFromSpkMax updates the Gated state in the Pools of the given layer, based on the Avg SpkMax being above the given threshold. Returns true if any gated, and the pool index if a 4D layer (0 = first).
func (*Layer) GatherSpikes ¶
GatherSpikes integrates G*Raw and G*Syn values for given recv neuron while integrating the Recv Path-level GSyn integrated values.
func (*Layer) GiFromSpikes ¶
GiFromSpikes gets the Spike, GeRaw and GeExt from neurons in the pools, where Spike drives FBsRaw = raw feedback signal, and GeRaw drives FFsRaw = aggregate feedforward excitatory spiking input. GeExt represents extra excitatory input from other sources. Then integrates new inhibitory conductances therefrom, at the layer and pool level. Called separately by Network.CycleImpl on all Layers. Also updates all AvgMax values at the Cycle level.
func (*Layer) HasPoolInhib ¶
HasPoolInhib returns true if the layer is using pool-level inhibition (implies 4D too). This is the proper check for using pool-level target average activations, for example.
func (*Layer) InitActAvg ¶
InitActAvg initializes the running-average activation values that drive learning and the longer time averaging values.
func (*Layer) InitActAvgLayer ¶
InitActAvgLayer initializes the running-average activation values that drive learning and the longer time averaging values. version with just overall layer-level inhibition.
func (*Layer) InitActAvgPools ¶
InitActAvgPools initializes the running-average activation values that drive learning and the longer time averaging values. version with pooled inhibition.
func (*Layer) InitActs ¶
InitActs fully initializes activation state -- only called automatically during InitWts
func (*Layer) InitExt ¶
InitExt initializes external input state. Should be called prior to ApplyExt on all layers receiving Ext input.
func (*Layer) InitGScale ¶
InitGScale computes the initial scaling factor for synaptic input conductances G, stored in GScale.Scale, based on sending layer initial activation.
func (*Layer) InitPathGBuffs ¶
InitPathGBuffs initializes the pathway-level conductance buffers and conductance integration values for receiving pathways in this layer.
func (*Layer) InitWtSym ¶
InitWtSym initializes the weight symmetry -- higher layers copy weights from lower layers.
func (*Layer) InitWts ¶
InitWts initializes the weight values in the network, i.e., resetting learning. Also calls InitActs.
func (*Layer) LDTDefaults ¶
func (ly *Layer) LDTDefaults()
func (*Layer) LDTPostBuild ¶
func (ly *Layer) LDTPostBuild()
func (*Layer) LDTSrcLayAct ¶
LDTSrcLayAct returns the overall activity level for the given source layer for purposes of computing the ACh salience value. Typically the input is a superior colliculus (SC) layer that rapidly accommodates after the onset of a stimulus. Uses lpl.AvgMax.CaSpkP.Cycle.Max as the layer activity measure.
func (*Layer) LRateMod ¶
LRateMod sets the LRate modulation parameter for Paths, which is for dynamic modulation of learning rate (see also LRateSched). Updates the effective learning rate factor accordingly.
func (*Layer) LRateSched ¶
LRateSched sets the schedule-based learning rate multiplier. See also LRateMod. Updates the effective learning rate factor accordingly.
func (*Layer) LesionNeurons ¶
LesionNeurons lesions (sets the Off flag) for the given proportion (0-1) of neurons in the layer, and returns the number of neurons lesioned. Emits an error if prop > 1, as an indication that a percent might have been passed instead of a proportion.
func (*Layer) LocalistErr2D ¶
LocalistErr2D decodes a 2D layer with Y axis = redundant units, X = localist units, returning the indexes of the max activated localist value in the minus and plus phase activities, and whether these are the same or different (err = different). Returns one result per data parallel index ([ctx.NetIndexes.NData]).
func (*Layer) LocalistErr4D ¶
LocalistErr4D decodes a 4D layer with each pool representing a localist value. Returns the flat 1D indexes of the max activated localist value in the minus and plus phase activities, and whether these are the same or different (err = different)
func (*Layer) MakeToolbar ¶
func (*Layer) MatrixDefaults ¶
func (ly *Layer) MatrixDefaults()
func (*Layer) MatrixGated ¶
MatrixGated is called after the standard PlusPhase, on the CPU, once Pool info has been downloaded from the GPU, to set the Gated flag based on SpkMax activity.
func (*Layer) MatrixPostBuild ¶
func (ly *Layer) MatrixPostBuild()
func (*Layer) MinusPhase ¶
MinusPhase does updating at end of the minus phase
func (*Layer) MinusPhasePost ¶
MinusPhasePost does special algorithm processing at end of minus
func (*Layer) NewState ¶
NewState handles all initialization at start of new input pattern. Does NOT call InitGScale()
func (*Layer) NewStateNeurons ¶
NewStateNeurons only calls the neurons part of new state -- for misbehaving GPU
func (*Layer) PTMaintDefaults ¶
func (ly *Layer) PTMaintDefaults()
func (*Layer) PctUnitErr ¶
PctUnitErr returns the proportion of units where the thresholded value of Target (Target or Compare types) or ActP does not match that of ActM. If Act > ly.Params.Acts.Clamp.ErrThr, effective activity = 1, else 0, which is robust to noisy activations. Returns one result per data parallel index ([ctx.NetIndexes.NData]).
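The per-unit test described above can be sketched as binarizing both phases at the ErrThr threshold and comparing; this is an illustrative restatement, not the package code.

// thresholdedMismatch reports whether the minus-phase and plus-phase (or
// target) activities disagree after binarizing each at errThr.
func thresholdedMismatch(actM, actP, errThr float32) bool {
    thr := func(a float32) float32 {
        if a > errThr {
            return 1
        }
        return 0
    }
    return thr(actM) != thr(actP)
}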
func (*Layer) PlusPhaseActAvg ¶
PlusPhaseActAvg updates ActAvg and DTrgAvg at the plus phase. Note: could be done on GPU but not worth it at this point.
func (*Layer) PlusPhasePost ¶
PlusPhasePost does special algorithm processing at end of plus
func (*Layer) PlusPhaseStart ¶
PlusPhaseStart does updating at the start of the plus phase: applies Target inputs as External inputs.
func (*Layer) PoolGiFromSpikes ¶
PoolGiFromSpikes computes inhibition Gi from Spikes within sub-pools, and also between different layers based on LayInhib* indexes. Must be called after LayPoolGiFromSpikes.
func (*Layer) PostBuild ¶
func (ly *Layer) PostBuild()
PostBuild performs special post-Build() configuration steps for specific algorithms, using configuration data set in BuildConfig during the ConfigNet process.
func (*Layer) PostSpike ¶
PostSpike does updates at neuron level after spiking has been computed. This is where special layer types add extra code. It also updates the CaSpkPCyc stats. Called directly by Network, iterates over data.
func (*Layer) PulvPostBuild ¶
func (ly *Layer) PulvPostBuild()
PulvPostBuild does post-Build config of Pulvinar based on BuildConfig options
func (*Layer) PulvinarDriver ¶
func (*Layer) ReadWtsJSON ¶
ReadWtsJSON reads the weights from this layer from the receiver-side perspective in a JSON text format. This is for a set of weights that were saved *for one layer only* and is not used for the network-level ReadWtsJSON, which reads into a separate structure -- see SetWts method.
func (*Layer) RubiconPostBuild ¶
func (ly *Layer) RubiconPostBuild()
RubiconPostBuild is used for BLA, VSPatch, and PVLayer types to set NeuroMod params
func (*Layer) STNDefaults ¶
func (ly *Layer) STNDefaults()
func (*Layer) SendSpike ¶
SendSpike sends spike to receivers for all neurons that spiked last step in Cycle, integrated the next time around. Called directly by Network, iterates over data.
func (*Layer) SetParam ¶
SetParam sets parameter at given path to given value. returns error if path not found or value cannot be set.
func (*Layer) SetSubMean ¶
SetSubMean sets the SubMean parameters in all the layers in the network. trgAvg is for Learn.TrgAvgAct.SubMean; path is for the paths' Learn.Trace.SubMean. In both cases, it is generally best to have both parameters set to 0 at the start of learning.
func (*Layer) SlowAdapt ¶
SlowAdapt is the layer-level slow adaptation functions. Calls AdaptInhib and AvgDifFromTrgAvg for Synaptic Scaling. Does NOT call pathway-level methods.
func (*Layer) SpikeFromG ¶
SpikeFromG computes Vm from Ge, Gi, Gl conductances and then Spike from that
func (*Layer) SynFail ¶
SynFail updates synaptic weight failure only -- normally done as part of DWt and WtFromDWt, but this call can be used during testing to update failing synapses.
func (*Layer) TDIntegPostBuild ¶
func (ly *Layer) TDIntegPostBuild()
TDIntegPostBuild does post-Build config
func (*Layer) TargToExt ¶
TargToExt sets external input Ext from target values Target This is done at end of MinusPhase to allow targets to drive activity in plus phase. This can be called separately to simulate alpha cycles within theta cycles, for example.
func (*Layer) TestValues ¶
TestValues returns a map of key values for testing. ctrKey is a key of counters to contextualize values.
func (*Layer) TrgAvgFromD ¶
TrgAvgFromD updates TrgAvg from DTrgAvg -- called in PlusPhasePost
func (*Layer) UnLesionNeurons ¶
func (ly *Layer) UnLesionNeurons()
UnLesionNeurons unlesions (clears the Off flag) for all neurons in the layer
func (*Layer) Update ¶
func (ly *Layer) Update()
Update is an interface method for generically updating after edits. This should be used only for the values on the struct itself. UpdateParams is used to update all parameters, including Path.
func (*Layer) UpdateExtFlags ¶
UpdateExtFlags updates the neuron flags for external input based on current layer Type field -- call this if the Type has changed since the last ApplyExt* method call.
func (*Layer) UpdateParams ¶
func (ly *Layer) UpdateParams()
UpdateParams updates all params given any changes that might have been made to individual values including those in the receiving pathways of this layer. This is not called Update because it is not just about the local values in the struct.
func (*Layer) WriteWtsJSON ¶
WriteWtsJSON writes the weights from this layer from the receiver-side perspective in a JSON text format. We build in the indentation logic to make it much faster and more efficient.
func (*Layer) WtFromDWtLayer ¶
WtFromDWtLayer does the weight update at the layer level. Does NOT call the main pathway-level WtFromDWt method. In the base implementation, only calls TrgAvgFromD.
type LayerBase ¶
type LayerBase struct { // we need a pointer to ourselves as an AxonLayer (which subsumes emer.Layer), which can always be used to extract the true underlying type of object when layer is embedded in other structs -- function receivers do not have this ability so this is necessary. AxonLay AxonLayer `copier:"-" json:"-" xml:"-" display:"-"` // our parent network, in case we need to use it to find other layers etc -- set when added by network Network *Network `copier:"-" json:"-" xml:"-" display:"-"` // Name of the layer -- this must be unique within the network, which has a map for quick lookup and layers are typically accessed directly by name Nm string // Class is for applying parameter styles, can be space separated multple tags Cls string // inactivate this layer -- allows for easy experimentation Off bool // shape of the layer -- can be 2D for basic layers and 4D for layers with sub-groups (hypercolumns) -- order is outer-to-inner (row major) so Y then X for 2D and for 4D: Y-X unit pools then Y-X neurons within pools Shp tensor.Shape // type of layer -- Hidden, Input, Target, Compare, or extended type in specialized algorithms -- matches against .Class parameter styles (e.g., .Hidden etc) Typ LayerTypes // Spatial relationship to other layer, determines positioning Rel relpos.Rel `table:"-" display:"inline"` // position of lower-left-hand corner of layer in 3D space, computed from Rel. Layers are in X-Y width - height planes, stacked vertically in Z axis. Ps math32.Vector3 `table:"-"` // a 0..n-1 index of the position of the layer within list of layers in the network. For Axon networks, it only has significance in determining who gets which weights for enforcing initial weight symmetry -- higher layers get weights from lower layers. Idx int `display:"-" inactive:"-"` // number of neurons in the layer NNeurons uint32 `display:"-"` // starting index of neurons for this layer within the global Network list NeurStIndex uint32 `display:"-" inactive:"-"` // number of pools based on layer shape -- at least 1 for layer pool + 4D subpools NPools uint32 `display:"-"` // maximum amount of input data that can be processed in parallel in one pass of the network. Neuron, Pool, Values storage is allocated to hold this amount. MaxData uint32 `display:"-"` // indexes of representative units in the layer, for computationally expensive stats or displays -- also set RepShp RepIxs []int `display:"-"` // shape of representative units in the layer -- if RepIxs is empty or .Shp is nil, use overall layer shape RepShp tensor.Shape `display:"-"` // list of receiving pathways into this layer from other layers RcvPaths AxonPaths // list of sending pathways from this layer to other layers SndPaths AxonPaths // layer-level state values that are updated during computation -- one for each data parallel -- is a sub-slice of network full set Values []LayerValues // computes FS-FFFB inhibition and other pooled, aggregate state variables -- has at least 1 for entire layer (lpl = layer pool), and one for each sub-pool if shape supports that (4D) * 1 per data parallel (inner loop). This is a sub-slice from overall Network Pools slice. You must iterate over index and use pointer to modify values. Pools []Pool // external input values for this layer, allocated from network global Exts slice Exts []float32 `display:"-"` // configuration data set when the network is configured, that is used during the network Build() process via PostBuild method, after all the structure of the network has been fully constructed. 
In particular, the Params is nil until Build, so setting anything specific in there (e.g., an index to another layer) must be done as a second pass. Note that Params are all applied after Build and can set user-modifiable params, so this is for more special algorithm structural parameters set during ConfigNet() methods. BuildConfig map[string]string `table:"-"` // default parameters that are applied prior to user-set parameters -- these are useful for specific layer functionality in specialized brain areas (e.g., Rubicon, BG etc) not associated with a layer type, which otherwise is used to hard-code initial default parameters -- typically just set to a literal map. DefParams params.Params `table:"-"` // provides a history of parameters applied to the layer ParamsHistory params.HistoryImpl `table:"-"` }
LayerBase manages the structural elements of the layer, which are common to any Layer type. The Base does not have algorithm-specific methods and parameters, so it can be easily reused for different algorithms, and cleanly separates the algorithm-specific code. Any dependency on the algorithm-level Layer can be captured in the AxonLayer interface, accessed via the AxonLay field.
func (*LayerBase) ApplyDefParams ¶
func (ly *LayerBase) ApplyDefParams()
ApplyDefParams applies DefParams default parameters if set Called by Layer.Defaults()
func (*LayerBase) ApplyParams ¶
ApplyParams applies the given parameter style Sheet to this layer and its recv pathways. Calls UpdateParams on anything set to ensure derived parameters are all updated. If setMsg is true, then a message is printed to confirm each parameter that is set. It always prints a message if a parameter fails to be set. Returns true if any params were set, and an error if there were any errors.
func (*LayerBase) BuildConfigByName ¶
BuildConfigByName looks for given BuildConfig option by name, and reports & returns an error if not found.
func (*LayerBase) BuildConfigFindLayer ¶
BuildConfigFindLayer looks for a BuildConfig of the given name and, if found, looks for a layer with the corresponding name. If mustName is true, then an error is logged if the BuildConfig name does not exist. An error is always logged if the layer name is not found. -1 is returned in any case of not found.
func (*LayerBase) BuildPaths ¶
BuildPaths builds the pathways, send-side
func (*LayerBase) BuildPools ¶
BuildPools builds the inhibitory pools structures -- nu = number of units in layer
func (*LayerBase) BuildSubPools ¶
BuildSubPools initializes neuron start / end indexes for sub-pools
func (*LayerBase) InitName ¶
InitName MUST be called to initialize the layer's pointer to itself as an emer.Layer which enables the proper interface methods to be called. Also sets the name, and the parent network that this layer belongs to (which layers may want to retain).
func (*LayerBase) LayerType ¶
func (ly *LayerBase) LayerType() LayerTypes
func (*LayerBase) LayerValues ¶
func (ly *LayerBase) LayerValues(di uint32) *LayerValues
LayerValues returns LayerValues at given data index
func (*LayerBase) NRecvPaths ¶
func (*LayerBase) NSendPaths ¶
func (*LayerBase) NSubPools ¶
NSubPools returns the number of sub-pools of neurons according to the shape parameters. 2D shapes have 0 sub pools. For a 4D shape, the pools are the first set of 2 Y,X dims and then the neurons within the pools are the 2nd set of 2 Y,X dims.
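For example, under the rule described above, a 4D shape of [3, 4, 5, 5] has 3 x 4 = 12 sub-pools of 5 x 5 neurons each, while a 2D shape such as [10, 10] has 0 sub-pools. A sketch of that rule (illustrative only):

// nSubPools returns the number of sub-pools implied by a layer shape,
// per the 2D vs 4D rule described for NSubPools.
func nSubPools(shp []int) int {
    if len(shp) != 4 {
        return 0
    }
    return shp[0] * shp[1] // outer Y-X pool dimensions
}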
func (*LayerBase) NeurStartIndex ¶
func (*LayerBase) NonDefaultParams ¶
NonDefaultParams returns a listing of all parameters in the Layer that are not at their default values -- useful for setting param styles etc.
func (*LayerBase) ParamsApplied ¶
ParamsApplied is just to satisfy History interface so reset can be applied
func (*LayerBase) ParamsHistoryReset ¶
func (ly *LayerBase) ParamsHistoryReset()
ParamsHistoryReset resets parameter application history
func (*LayerBase) PlaceAbove ¶
PlaceAbove positions the layer above the other layer, using default XAlign = Left, YAlign = Front alignment
func (*LayerBase) PlaceBehind ¶
PlaceBehind positions the layer behind the other layer, with given spacing, using default XAlign = Left alignment
func (*LayerBase) PlaceRightOf ¶
PlaceRightOf positions the layer to the right of the other layer, with given spacing, using default YAlign = Front alignment
func (*LayerBase) RecipToRecvPath ¶
RecipToRecvPath finds the reciprocal pathway to the given recv pathway within the ly layer. i.e., where ly is instead the *sending* layer to same other layer B that is the sender of the rpj pathway we're receiving from.
ly = A, other layer = B:
rpj: R=A <- S=B
spj: S=A -> R=B
returns false if not found.
func (*LayerBase) RecipToSendPath ¶
RecipToSendPath finds the reciprocal pathway to the given sending pathway within the ly layer. i.e., where ly is instead the *receiving* layer from same other layer B that is the receiver of the spj pathway we're sending to.
ly = A, other layer = B:
spj: S=A -> R=B
rpj: R=A <- S=B
returns false if not found.
func (*LayerBase) RecvNameTry ¶
func (*LayerBase) RecvNameType ¶
func (*LayerBase) RecvNameTypeTry ¶
func (*LayerBase) RecvPathValues ¶
func (ly *LayerBase) RecvPathValues(vals *[]float32, varNm string, sendLay emer.Layer, sendIndex1D int, pathType string) error
RecvPathValues fills in values of the given synapse variable name, for the pathway from the given sending layer and neuron 1D index, for all receiving neurons in this layer, into the given float32 slice (only resized if not big enough). pathType is the string representation of the path type -- used if non-empty, which is useful when there are multiple pathways between two layers. If the receiving neuron is not connected to the given sending layer or neuron, then the value is set to math32.NaN(). Returns an error on invalid var name or lack of a recv path (vals are always set to NaN on a path error).
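A usage sketch based on the documented signature; the "Wt" variable name and the sending neuron index 0 are illustrative.

// readRecvWts reads a synaptic variable for the pathway from send into ly,
// for sending neuron 0, into a local slice.
func readRecvWts(ly *axon.Layer, send *axon.Layer) ([]float32, error) {
    var vals []float32
    err := ly.RecvPathValues(&vals, "Wt", send, 0, "")
    return vals, err
}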
func (*LayerBase) RepIndexes ¶
func (*LayerBase) SendNameType ¶
func (*LayerBase) SendNameTypeTry ¶
func (*LayerBase) SendPathValues ¶
func (ly *LayerBase) SendPathValues(vals *[]float32, varNm string, recvLay emer.Layer, recvIndex1D int, pathType string) error
SendPathValues fills in values of the given synapse variable name, for the pathway into the given receiving layer and neuron 1D index, for all sending neurons in this layer, into the given float32 slice (only resized if not big enough). pathType is the string representation of the path type -- used if non-empty, which is useful when there are multiple pathways between two layers. If the sending neuron is not connected to the given receiving layer or neuron, then the value is set to math32.NaN(). Returns an error on invalid var name or lack of a path (vals are always set to NaN on a path error).
func (*LayerBase) SetBuildConfig ¶
SetBuildConfig sets named configuration parameter to given string value to be used in the PostBuild stage -- mainly for layer names that need to be looked up and turned into indexes, after entire network is built.
func (*LayerBase) SetRepIndexesShape ¶
SetRepIndexesShape sets the RepIndexes, and the RepShape as a list of dimension sizes.
func (*LayerBase) UnitVal1D ¶
UnitVal1D returns value of given variable index on given unit, using 1-dimensional index. returns NaN on invalid index. This is the core unit var access method used by other methods, so it is the only one that needs to be updated for derived layer types.
func (*LayerBase) UnitValue ¶
UnitValue returns the value of the given variable name on the given unit, using a shape-based dimensional index.
func (*LayerBase) UnitValues ¶
UnitValues fills in values of given variable name on unit, for each unit in the layer, into given float32 slice (only resized if not big enough). Returns error on invalid var name.
func (*LayerBase) UnitValuesRepTensor ¶
UnitValuesRepTensor fills in values of given variable name on unit for a smaller subset of representative units in the layer, into given tensor. This is used for computationally intensive stats or displays that work much better with a smaller number of units. The set of representative units are defined by SetRepIndexes -- all units are used if no such subset has been defined. If tensor is not already big enough to hold the values, it is set to RepShape to hold all the values if subset is defined, otherwise it calls UnitValuesTensor and is identical to that. Returns error on invalid var name.
func (*LayerBase) UnitValuesTensor ¶
UnitValuesTensor returns values of given variable name on unit for each unit in the layer, as a float32 tensor in same shape as layer units.
func (*LayerBase) UnitVarIndex ¶
UnitVarIndex returns the index of given variable within the Neuron, according to *this layer's* UnitVarNames() list (using a map to lookup index), or -1 and error message if not found.
func (*LayerBase) UnitVarNames ¶
UnitVarNames returns a list of variable names available on the units in this layer
func (*LayerBase) UnitVarNum ¶
UnitVarNum returns the number of Neuron-level variables for this layer. This is needed for extending indexes in derived types.
func (*LayerBase) UnitVarProps ¶
UnitVarProps returns properties for variables
type LayerIndexes ¶
type LayerIndexes struct {
	// layer index
	LayIndex uint32 `edit:"-"`
	// maximum number of data parallel elements
	MaxData uint32 `edit:"-"`
	// start of pools for this layer -- first one is always the layer-wide pool
	PoolSt uint32 `edit:"-"`
	// start of neurons for this layer in global array (same as Layer.NeurStIndex)
	NeurSt uint32 `edit:"-"`
	// number of neurons in layer
	NeurN uint32 `edit:"-"`
	// start index into RecvPaths global array
	RecvSt uint32 `edit:"-"`
	// number of recv pathways
	RecvN uint32 `edit:"-"`
	// start index into SendPaths global array
	SendSt uint32 `edit:"-"`
	// number of send pathways
	SendN uint32 `edit:"-"`
	// starting index in network global Exts list of external input for this layer -- only for Input / Target / Compare layer types
	ExtsSt uint32 `edit:"-"`
	// layer shape Pools Y dimension -- 1 for 2D
	ShpPlY int32 `edit:"-"`
	// layer shape Pools X dimension -- 1 for 2D
	ShpPlX int32 `edit:"-"`
	// layer shape Units Y dimension
	ShpUnY int32 `edit:"-"`
	// layer shape Units X dimension
	ShpUnX int32 `edit:"-"`
	// contains filtered or unexported fields
}
LayerIndexes contains index access into network global arrays for GPU.
func (*LayerIndexes) ExtIndex ¶
func (lx *LayerIndexes) ExtIndex(ni, di uint32) uint32
ExtIndex returns the index for accessing Exts values: [Neuron][Data]. Neuron is the *layer-relative* lni index -- add the ExtsSt for network-level access.
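A sketch of the [Neuron][Data] indexing described above, assuming a neuron-major layout; the layout is an assumption for illustration only.

// extIndexSketch computes idx = ni*MaxData + di for a [Neuron][Data] layout.
func extIndexSketch(lx *axon.LayerIndexes, ni, di uint32) uint32 {
    return ni*lx.MaxData + di
}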
func (*LayerIndexes) PoolIndex ¶
func (lx *LayerIndexes) PoolIndex(pi, di uint32) uint32
PoolIndex returns the global network index for pool with given pool (0 = layer pool, 1+ = subpools) and data parallel indexes
func (*LayerIndexes) ValuesIndex ¶
func (lx *LayerIndexes) ValuesIndex(di uint32) uint32
ValuesIndex returns the global network index for LayerValues with given data parallel index.
type LayerInhibIndexes ¶
type LayerInhibIndexes struct {
	// idx of Layer to get layer-level inhibition from -- set during Build from BuildConfig LayInhib1Name if present -- -1 if not used
	Index1 int32 `edit:"-"`
	// idx of Layer to get layer-level inhibition from -- set during Build from BuildConfig LayInhib2Name if present -- -1 if not used
	Index2 int32 `edit:"-"`
	// idx of Layer to get layer-level inhibition from -- set during Build from BuildConfig LayInhib3Name if present -- -1 if not used
	Index3 int32 `edit:"-"`
	// idx of Layer to get layer-level inhibition from -- set during Build from BuildConfig LayInhib4Name if present -- -1 if not used
	Index4 int32 `edit:"-"`
}
LayerInhibIndexes contains indexes of layers for between-layer inhibition
type LayerParams ¶
type LayerParams struct { // functional type of layer -- determines functional code path for specialized layer types, and is synchronized with the Layer.Typ value LayType LayerTypes // Activation parameters and methods for computing activations Acts ActParams `display:"add-fields"` // Inhibition parameters and methods for computing layer-level inhibition Inhib InhibParams `display:"add-fields"` // indexes of layers that contribute between-layer inhibition to this layer -- set these indexes via BuildConfig LayInhibXName (X = 1, 2...) LayInhib LayerInhibIndexes `display:"inline"` // Learning parameters and methods that operate at the neuron level Learn LearnNeurParams `display:"add-fields"` // BurstParams determine how the 5IB Burst activation is computed from CaSpkP integrated spiking values in Super layers -- thresholded. Bursts BurstParams `display:"inline"` // ] params for the CT corticothalamic layer and PTPred layer that generates predictions over the Pulvinar using context -- uses the CtxtGe excitatory input plus stronger NMDA channels to maintain context trace CT CTParams `display:"inline"` // provides parameters for how the plus-phase (outcome) state of Pulvinar thalamic relay cell neurons is computed from the corresponding driver neuron Burst activation (or CaSpkP if not Super) Pulv PulvParams `display:"inline"` // parameters for BG Striatum Matrix MSN layers, which are the main Go / NoGo gating units in BG. Matrix MatrixParams `display:"inline"` // type of GP Layer. GP GPParams `display:"inline"` // parameterizes laterodorsal tegmentum ACh salience neuromodulatory signal, driven by superior colliculus stimulus novelty, US input / absence, and OFC / ACC inhibition LDT LDTParams `display:"inline"` // parameterizes computing overall VTA DA based on LHb PVDA (primary value -- at US time, computed at start of each trial and stored in LHbPVDA global value) and Amygdala (CeM) CS / learned value (LV) activations, which update every cycle. VTA VTAParams `display:"inline"` // parameterizes reward prediction for a simple Rescorla-Wagner learning dynamic (i.e., PV learning in the Rubicon framework). RWPred RWPredParams `display:"inline"` // parameterizes reward prediction dopamine for a simple Rescorla-Wagner learning dynamic (i.e., PV learning in the Rubicon framework). RWDa RWDaParams `display:"inline"` // parameterizes TD reward integration layer TDInteg TDIntegParams `display:"inline"` // parameterizes dopamine (DA) signal as the temporal difference (TD) between the TDIntegLayer activations in the minus and plus phase. TDDa TDDaParams `display:"inline"` // recv and send pathway array access info Indexes LayerIndexes // contains filtered or unexported fields }
LayerParams contains all of the layer parameters. These values must remain constant over the course of computation. On the GPU, they are loaded into a uniform.
func (*LayerParams) AllParams ¶
func (ly *LayerParams) AllParams() string
AllParams returns a listing of all parameters in the Layer
func (*LayerParams) ApplyExtFlags ¶
func (ly *LayerParams) ApplyExtFlags(clearMask, setMask *NeuronFlags, toTarg *bool)
ApplyExtFlags gets the clear mask and set mask for updating neuron flags based on layer type, and whether input should be applied to Target (else Ext)
func (*LayerParams) ApplyExtValue ¶
func (ly *LayerParams) ApplyExtValue(ctx *Context, ni, di uint32, val float32)
ApplyExtValue applies the given external value to the given neuron, setting flags based on the type of layer. Should only be called on Input, Target, or Compare layers. Negative values are not valid, and will be interpreted as missing inputs.
func (*LayerParams) AvgGeM ¶
func (ly *LayerParams) AvgGeM(ctx *Context, vals *LayerValues, geIntMinusMax, giIntMinusMax float32)
AvgGeM computes the average and max GeInt, GiInt in minus phase (AvgMaxGeM, AvgMaxGiM) stats, updated in MinusPhase, using values that already max across NData.
func (*LayerParams) CTDefaults ¶
func (ly *LayerParams) CTDefaults()
func (*LayerParams) CyclePostCeMLayer ¶
func (ly *LayerParams) CyclePostCeMLayer(ctx *Context, di uint32, lpl *Pool)
func (*LayerParams) CyclePostLDTLayer ¶
func (ly *LayerParams) CyclePostLDTLayer(ctx *Context, di uint32, vals *LayerValues, srcLay1Act, srcLay2Act, srcLay3Act, srcLay4Act float32)
func (*LayerParams) CyclePostLayer ¶
func (ly *LayerParams) CyclePostLayer(ctx *Context, di uint32, lpl *Pool, vals *LayerValues)
CyclePostLayer is called for all layer types
func (*LayerParams) CyclePostRWDaLayer ¶
func (ly *LayerParams) CyclePostRWDaLayer(ctx *Context, di uint32, vals *LayerValues, pvals *LayerValues)
func (*LayerParams) CyclePostTDDaLayer ¶
func (ly *LayerParams) CyclePostTDDaLayer(ctx *Context, di uint32, vals *LayerValues, ivals *LayerValues)
func (*LayerParams) CyclePostTDIntegLayer ¶
func (ly *LayerParams) CyclePostTDIntegLayer(ctx *Context, di uint32, vals *LayerValues, pvals *LayerValues)
func (*LayerParams) CyclePostTDPredLayer ¶
func (ly *LayerParams) CyclePostTDPredLayer(ctx *Context, di uint32, vals *LayerValues)
func (*LayerParams) CyclePostVSPatchLayer ¶
func (ly *LayerParams) CyclePostVSPatchLayer(ctx *Context, di uint32, pi int32, pl *Pool, vals *LayerValues)
Note: this needs to iterate over the sub-pools in the layer!
func (*LayerParams) CyclePostVTALayer ¶
func (ly *LayerParams) CyclePostVTALayer(ctx *Context, di uint32)
func (*LayerParams) Defaults ¶
func (ly *LayerParams) Defaults()
func (*LayerParams) DrivesDefaults ¶
func (ly *LayerParams) DrivesDefaults()
func (*LayerParams) GFromRawSyn ¶
func (ly *LayerParams) GFromRawSyn(ctx *Context, ni, di uint32)
GFromRawSyn computes the overall Ge and GiSyn conductances for a neuron from its GeRaw and GeSyn values, including NMDA, VGCC, AMPA, and GABA-A channels. For Pulvinar layers, the driver-neuron conductance is incorporated beforehand via SpecialPreGs.
func (*LayerParams) GNeuroMod ¶
func (ly *LayerParams) GNeuroMod(ctx *Context, ni, di uint32, vals *LayerValues)
GNeuroMod does neuromodulation of conductances
func (*LayerParams) GatherSpikesInit ¶
func (ly *LayerParams) GatherSpikesInit(ctx *Context, ni, di uint32)
GatherSpikesInit initializes G*Raw and G*Syn values for given neuron prior to integration
func (*LayerParams) GiInteg ¶
func (ly *LayerParams) GiInteg(ctx *Context, ni, di uint32, pl *Pool, vals *LayerValues)
GiInteg adds Gi values from all sources including SubPool computed inhib and updates GABAB as well
func (*LayerParams) InitExt ¶
func (ly *LayerParams) InitExt(ctx *Context, ni, di uint32)
InitExt initializes external input state for given neuron
func (*LayerParams) IsInput ¶
func (ly *LayerParams) IsInput() bool
IsInput returns true if this layer is an Input layer. By default, this returns true for layers of Type == axon.InputLayer. Used to prevent adapting of inhibition or TrgAvg values.
func (*LayerParams) IsInputOrTarget ¶
func (ly *LayerParams) IsInputOrTarget() bool
IsInputOrTarget returns true if this layer is either an Input or a Target layer.
func (*LayerParams) IsLearnTrgAvg ¶
func (ly *LayerParams) IsLearnTrgAvg() bool
IsLearnTrgAvg returns true if this layer has Learn.TrgAvgAct.RescaleOn set for learning adjustments based on target average activity levels, and the layer is not an input or target layer.
func (*LayerParams) IsTarget ¶
func (ly *LayerParams) IsTarget() bool
IsTarget returns true if this layer is a Target layer. By default, this returns true for layers of Type == TargetLayer. Other Target layers include the TRCLayer in deep predictive learning. This is used in SynScale to exclude target layers from scaling. In both cases, Target layers are purely error-driven.
func (*LayerParams) LayPoolGiFromSpikes ¶
func (ly *LayerParams) LayPoolGiFromSpikes(ctx *Context, lpl *Pool, vals *LayerValues)
LayPoolGiFromSpikes computes inhibition Gi from Spikes for layer-level pool. Also grabs updated Context NeuroMod values into LayerValues
func (*LayerParams) LearnTrgAvgErrLRate ¶
func (ly *LayerParams) LearnTrgAvgErrLRate() float32
LearnTrgAvgErrLRate returns the effective error-driven learning rate for adjusting target average activity levels. This is 0 if !IsLearnTrgAvg(), and otherwise is Learn.TrgAvgAct.ErrLRate.
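As a hedged illustration of how these predicates fit together (this helper is not part of the library):

	// trgAvgLRate returns the effective target-average learning rate:
	// zero for Input / Target layers or when Learn.TrgAvgAct.RescaleOn is off,
	// per IsLearnTrgAvg, and Learn.TrgAvgAct.ErrLRate otherwise.
	func trgAvgLRate(lp *axon.LayerParams) float32 {
		if !lp.IsLearnTrgAvg() {
			return 0
		}
		return lp.LearnTrgAvgErrLRate()
	}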
func (*LayerParams) MinusPhaseNeuron ¶
func (ly *LayerParams) MinusPhaseNeuron(ctx *Context, ni, di uint32, pl *Pool, lpl *Pool, vals *LayerValues)
MinusPhaseNeuron does neuron level minus-phase updating
func (*LayerParams) MinusPhasePool ¶
func (ly *LayerParams) MinusPhasePool(ctx *Context, pl *Pool)
func (*LayerParams) NewStateLayer ¶
func (ly *LayerParams) NewStateLayer(ctx *Context, lpl *Pool, vals *LayerValues)
func (*LayerParams) NewStateLayerActAvg ¶
func (ly *LayerParams) NewStateLayerActAvg(ctx *Context, vals *LayerValues, actMinusAvg, actPlusAvg float32)
NewStateLayerActAvg updates ActAvg.ActMAvg and ActPAvg based on current values that have been averaged across NData already.
func (*LayerParams) NewStateNeuron ¶
func (ly *LayerParams) NewStateNeuron(ctx *Context, ni, di uint32, vals *LayerValues, pl *Pool)
NewStateNeuron handles all initialization at the start of a new input pattern. The external input should already have been presented to the network at this point.
func (*LayerParams) NewStatePool ¶
func (ly *LayerParams) NewStatePool(ctx *Context, pl *Pool)
func (*LayerParams) PTPredDefaults ¶
func (ly *LayerParams) PTPredDefaults()
func (*LayerParams) PVDefaults ¶
func (ly *LayerParams) PVDefaults()
func (*LayerParams) PlusPhaseNeuron ¶
func (ly *LayerParams) PlusPhaseNeuron(ctx *Context, ni, di uint32, pl *Pool, lpl *Pool, vals *LayerValues)
PlusPhaseNeuron does neuron level plus-phase updating
func (*LayerParams) PlusPhaseNeuronSpecial ¶
func (ly *LayerParams) PlusPhaseNeuronSpecial(ctx *Context, ni, di uint32, pl *Pool, lpl *Pool, vals *LayerValues)
PlusPhaseNeuronSpecial does special layer type neuron level plus-phase updating
func (*LayerParams) PlusPhasePool ¶
func (ly *LayerParams) PlusPhasePool(ctx *Context, pl *Pool)
func (*LayerParams) PlusPhaseStartNeuron ¶
func (ly *LayerParams) PlusPhaseStartNeuron(ctx *Context, ni, di uint32, pl *Pool, lpl *Pool, vals *LayerValues)
PlusPhaseStartNeuron does neuron level plus-phase start: applies Target inputs as External inputs.
func (*LayerParams) PostSpike ¶
func (ly *LayerParams) PostSpike(ctx *Context, ni, di uint32, pl *Pool, vals *LayerValues)
PostSpike does updates at the neuron level after spiking has been computed. It is called *after* PostSpikeSpecial. It also updates the CaSpkPCyc stats.
func (*LayerParams) PostSpikeSpecial ¶
func (ly *LayerParams) PostSpikeSpecial(ctx *Context, ni, di uint32, pl *Pool, lpl *Pool, vals *LayerValues)
PostSpikeSpecial does updates at the neuron level after spiking has been computed. This is where special layer types add extra code. Warning: if more than one layer writes to vals, the GPU will fail!
func (*LayerParams) PulvDefaults ¶
func (ly *LayerParams) PulvDefaults()
PulvDefaults is called in Defaults for the Pulvinar layer type.
func (*LayerParams) RWDefaults ¶
func (ly *LayerParams) RWDefaults()
func (*LayerParams) RWPredDefaults ¶
func (ly *LayerParams) RWPredDefaults()
func (*LayerParams) ShouldDisplay ¶
func (ly *LayerParams) ShouldDisplay(field string) bool
func (*LayerParams) SpecialPostGs ¶
func (ly *LayerParams) SpecialPostGs(ctx *Context, ni, di uint32, saveVal float32)
SpecialPostGs is used for special layer types to do things after the standard updates in GFromRawSyn. It is passed the saveVal returned by SpecialPreGs.
func (*LayerParams) SpecialPreGs ¶
func (ly *LayerParams) SpecialPreGs(ctx *Context, ni, di uint32, pl *Pool, vals *LayerValues, drvGe float32, nonDrivePct float32) float32
SpecialPreGs is used for special layer types to modify conductance values prior to the standard updates in GFromRawSyn. For Pulvinar layers, drvGe provides the excitatory conductance derived from the driving neuron's activation.
func (*LayerParams) SpikeFromG ¶
func (ly *LayerParams) SpikeFromG(ctx *Context, ni, di uint32, lpl *Pool)
SpikeFromG computes Vm from Ge, Gi, Gl conductances and then Spike from that
func (*LayerParams) SubPoolGiFromSpikes ¶
func (ly *LayerParams) SubPoolGiFromSpikes(ctx *Context, di uint32, pl *Pool, lpl *Pool, lyInhib bool, giMult float32)
SubPoolGiFromSpikes computes inhibition Gi from Spikes within a sub-pool. pl is guaranteed not to be the overall layer pool.
func (*LayerParams) TDDefaults ¶
func (ly *LayerParams) TDDefaults()
func (*LayerParams) TDPredDefaults ¶
func (ly *LayerParams) TDPredDefaults()
func (*LayerParams) USDefaults ¶
func (ly *LayerParams) USDefaults()
func (*LayerParams) Update ¶
func (ly *LayerParams) Update()
func (*LayerParams) UrgencyDefaults ¶
func (ly *LayerParams) UrgencyDefaults()
func (*LayerParams) VSGatedDefaults ¶
func (ly *LayerParams) VSGatedDefaults()
func (*LayerParams) VSPatchDefaults ¶
func (ly *LayerParams) VSPatchDefaults()
type LayerTypes ¶
type LayerTypes int32 //enums:enum
LayerTypes is an axon-specific layer type enum that encompasses all of the different algorithm types supported. Class parameter styles automatically key off of these types. The first entries must be kept synchronized with the emer.LayerType, although we replace Hidden -> Super.
const ( // Super is a superficial cortical layer (lamina 2-3-4) // which does not receive direct input or targets. // In more generic models, it should be used as a Hidden layer, // and maps onto the Hidden type in emer.LayerType. SuperLayer LayerTypes = iota // Input is a layer that receives direct external input // in its Ext inputs. Biologically, it can be a primary // sensory layer, or a thalamic layer. InputLayer // Target is a layer that receives direct external target inputs // used for driving plus-phase learning. // Simple target layers are generally not used in more biological // models, which instead use predictive learning via Pulvinar // or related mechanisms. TargetLayer // Compare is a layer that receives external comparison inputs, // which drive statistics but do NOT drive activation // or learning directly. It is rarely used in axon. CompareLayer // CT are layer 6 corticothalamic projecting neurons, // which drive "top down" predictions in Pulvinar layers. // They maintain information over time via stronger NMDA // channels and use maintained prior state information to // generate predictions about current states forming on Super // layers that then drive PT (5IB) bursting activity, which // are the plus-phase drivers of Pulvinar activity. CTLayer // Pulvinar are thalamic relay cell neurons in the higher-order // Pulvinar nucleus of the thalamus, and functionally isomorphic // neurons in the MD thalamus, and potentially other areas. // These cells alternately reflect predictions driven by CT pathways, // and actual outcomes driven by 5IB Burst activity from corresponding // PT or Super layer neurons that provide strong driving inputs. PulvinarLayer // TRNLayer is thalamic reticular nucleus layer for inhibitory competition // within the thalamus. TRNLayer // PTMaintLayer implements the subset of pyramidal tract (PT) // layer 5 intrinsic bursting (5IB) deep neurons that exhibit // robust, stable maintenance of activity over the duration of a // goal engaged window, modulated by basal ganglia (BG) disinhibitory // gating, supported by strong MaintNMDA channels and recurrent excitation. // The lateral PTSelfMaint pathway uses MaintG to drive GMaintRaw input // that feeds into the stronger, longer MaintNMDA channels, // and the ThalToPT ModulatoryG pathway from BGThalamus multiplicatively // modulates the strength of other inputs, such that only at the time of // BG gating are these strong enough to drive sustained active maintenance. // Use Act.Dend.ModGain to parameterize. PTMaintLayer // PTPredLayer implements the subset of pyramidal tract (PT) // layer 5 intrinsic bursting (5IB) deep neurons that combine // modulatory input from PTMaintLayer sustained maintenance and // CTLayer dynamic predictive learning that helps to predict // state changes during the period of active goal maintenance. // This layer provides the primary input to VSPatch US-timing // prediction layers, and other layers that require predictive dynamic PTPredLayer // MatrixLayer represents the matrisome medium spiny neurons (MSNs) // that are the main Go / NoGo gating units in BG. // These are strongly modulated by phasic dopamine: D1 = Go, D2 = NoGo. MatrixLayer // STNLayer represents subthalamic nucleus neurons, with two subtypes: // STNp are more strongly driven and get over bursting threshold, driving strong, // rapid activation of the KCa channels, causing a long pause in firing, which // creates a window during which GPe dynamics resolve Go vs. No balance. 
// STNs are more weakly driven and thus more slowly activate KCa, resulting in // a longer period of activation, during which the GPi is inhibited to prevent // premature gating based only on MtxGo inhibition -- gating only occurs when // GPePr signal has had a chance to integrate its MtxNo inputs. STNLayer // GPLayer represents a globus pallidus layer in the BG, including: // GPeOut, GPePr, GPeAk (arkypallidal), and GPi. // Typically just a single unit per Pool representing a given stripe. GPLayer // BGThalLayer represents a BG gated thalamic layer, // which receives BG gating in the form of an // inhibitory pathway from GPi. Located // mainly in the Ventral thalamus: VA / VM / VL, // and also parts of MD mediodorsal thalamus. BGThalLayer // VSGated represents explicit coding of VS gating status: // JustGated and HasGated (since last US or failed predicted US), // For visualization and / or motor action signaling. VSGatedLayer // BLALayer represents a basolateral amygdala layer // which learns to associate arbitrary stimuli (CSs) // with behaviorally salient outcomes (USs) BLALayer // CeMLayer represents a central nucleus of the amygdala layer. CeMLayer // VSPatchLayer represents a ventral striatum patch layer, // which learns to represent the expected amount of dopamine reward // and projects both directly with shunting inhibition to the VTA // and indirectly via the LHb / RMTg to cancel phasic dopamine firing // to expected rewards (i.e., reward prediction error). VSPatchLayer // LHbLayer represents the lateral habenula, which drives dipping // in the VTA. It tracks the Global LHb values for // visualization purposes -- updated by VTALayer. LHbLayer // DrivesLayer represents the Drives in .Rubicon framework. // It tracks the Global Drives values for // visualization and predictive learning purposes. DrivesLayer // UrgencyLayer represents the Urgency factor in Rubicon framework. // It tracks the Global Urgency.Urge value for // visualization and predictive learning purposes. UrgencyLayer // USLayer represents a US unconditioned stimulus layer (USpos or USneg). // It tracks the Global USpos or USneg, for visualization // and predictive learning purposes. Actual US inputs are set in Rubicon. USLayer // PVLayer represents a PV primary value layer (PVpos or PVneg) representing // the total primary value as a function of US inputs, drives, and effort. // It tracks the Global VTA.PVpos, PVneg values for // visualization and predictive learning purposes. PVLayer // LDTLayer represents the laterodorsal tegmentum layer, which // is the primary limbic ACh (acetylcholine) driver to other ACh: // BG cholinergic interneurons (CIN) and nucleus basalis ACh areas. // The phasic ACh release signals reward salient inputs from CS, US // and US omission, and it drives widespread disinhibition of BG gating // and VTA DA firing. // It receives excitation from superior colliculus which computes // a temporal derivative (stimulus specific adaptation, SSA) // of sensory inputs, and inhibitory input from OFC, ACC driving // suppression of distracting inputs during goal-engaged states. LDTLayer // VTALayer represents the ventral tegmental area, which releases // dopamine. It computes final DA value from Rubicon-computed // LHb PVDA (primary value DA), updated at start of each trial from // updated US, Effort, etc state, and cycle-by-cycle LV learned value // state reflecting CS inputs, in the Amygdala (CeM). 
// Its activity reflects this DA level, which is effectively broadcast // via Global state values to all layers. VTALayer // RewLayer represents positive or negative reward values across 2 units, // showing spiking rates for each, and Act always represents signed value. RewLayer // RWPredLayer computes reward prediction for a simple Rescorla-Wagner // learning dynamic (i.e., PV learning in the Rubicon framework). // Activity is computed as linear function of excitatory conductance // (which can be negative -- there are no constraints). // Use with RWPath which does simple delta-rule learning on minus-plus. RWPredLayer // RWDaLayer computes a dopamine (DA) signal based on a simple Rescorla-Wagner // learning dynamic (i.e., PV learning in the Rubicon framework). // It computes difference between r(t) and RWPred values. // r(t) is accessed directly from a Rew layer -- if no external input then no // DA is computed -- critical for effective use of RW only for PV cases. // RWPred prediction is also accessed directly from Rew layer to avoid any issues. RWDaLayer // TDPredLayer is the temporal differences reward prediction layer. // It represents estimated value V(t) in the minus phase, and computes // estimated V(t+1) based on its learned weights in plus phase, // using the TDPredPath pathway type for DA modulated learning. TDPredLayer // TDIntegLayer is the temporal differences reward integration layer. // It represents estimated value V(t) from prior time step in the minus phase, // and estimated discount * V(t+1) + r(t) in the plus phase. // It gets Rew, PrevPred from Context.NeuroMod, and Special // LayerValues from TDPredLayer. TDIntegLayer // TDDaLayer computes a dopamine (DA) signal as the temporal difference (TD) // between the TDIntegLayer activations in the minus and plus phase. // These are retrieved from Special LayerValues. TDDaLayer )
The layer types
const LayerTypesN LayerTypes = 30
LayerTypesN is the highest valid value for type LayerTypes, plus one.
func LayerTypesValues ¶
func LayerTypesValues() []LayerTypes
LayerTypesValues returns all possible values for the type LayerTypes.
func (LayerTypes) Desc ¶
func (i LayerTypes) Desc() string
Desc returns the description of the LayerTypes value.
func (LayerTypes) Int64 ¶
func (i LayerTypes) Int64() int64
Int64 returns the LayerTypes value as an int64.
func (LayerTypes) IsExt ¶
func (lt LayerTypes) IsExt() bool
IsExt returns true if the layer type deals with external input: Input, Target, Compare
func (LayerTypes) MarshalText ¶
func (i LayerTypes) MarshalText() ([]byte, error)
MarshalText implements the encoding.TextMarshaler interface.
func (*LayerTypes) SetInt64 ¶
func (i *LayerTypes) SetInt64(in int64)
SetInt64 sets the LayerTypes value from an int64.
func (*LayerTypes) SetString ¶
func (i *LayerTypes) SetString(s string) error
SetString sets the LayerTypes value from its string representation, and returns an error if the string is invalid.
func (LayerTypes) String ¶
func (i LayerTypes) String() string
String returns the string representation of this LayerTypes value.
func (*LayerTypes) UnmarshalText ¶
func (i *LayerTypes) UnmarshalText(text []byte) error
UnmarshalText implements the encoding.TextUnmarshaler interface.
func (LayerTypes) Values ¶
func (i LayerTypes) Values() []enums.Enum
Values returns all possible values for the type LayerTypes.
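A brief sketch of round-tripping a LayerTypes value through its string form and checking for external-input handling (assumes the standard fmt and log imports; the specific value is illustrative):

	lt := axon.InputLayer
	name := lt.String() // textual name of this layer type

	var lt2 axon.LayerTypes
	if err := lt2.SetString(name); err != nil {
		log.Fatal(err) // only fails for an invalid string
	}
	fmt.Println(lt2.IsExt()) // true: Input, Target, and Compare layers handle external input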
type LayerValues ¶
type LayerValues struct { // layer index for these vals LayIndex uint32 `display:"-"` // data index for these vals DataIndex uint32 `display:"-"` // reaction time for this layer in cycles, which is -1 until the Max CaSpkP level (after MaxCycStart) exceeds the Act.Attn.RTThr threshold RT float32 `edit:"-"` // running-average activation levels used for adaptive inhibition, and other adapting values ActAvg ActAvgValues `display:"inline"` // correlation (centered cosine aka normalized dot product) similarity between ActM, ActP states CorSim CorSimStats // special values used to communicate to other layers based on neural values computed on the GPU -- special cross-layer computations happen CPU-side and are sent back into the network via Context on the next cycle -- used for special algorithms such as RL / DA etc Special LaySpecialValues `display:"inline"` // contains filtered or unexported fields }
LayerValues holds extra layer state that is updated per layer. It is sync'd down from the GPU to the CPU after every Cycle.
func (*LayerValues) Init ¶
func (lv *LayerValues) Init()
type LearnNeurParams ¶
type LearnNeurParams struct { // parameterizes the neuron-level calcium signals driving learning: CaLrn = NMDA + VGCC Ca sources, where VGCC can be simulated from spiking or use the more complex and dynamic VGCC channel directly. CaLrn is then integrated in a cascading manner at multiple time scales: CaM (as in calmodulin), CaP (ltP, CaMKII, plus phase), CaD (ltD, DAPK1, minus phase). CaLearn CaLrnParams `display:"inline"` // parameterizes the neuron-level spike-driven calcium signals, starting with CaSyn that is integrated at the neuron level, and drives synapse-level, pre * post Ca integration, which provides the Tr trace that multiplies error signals, and drives learning directly for Target layers. CaSpk* values are integrated separately at the Neuron level and used for UpdateThr and RLRate as a proxy for the activation (spiking) based learning signal. CaSpk kinase.NeurCaParams `display:"inline"` // NMDA channel parameters used for learning, vs. the ones driving activation -- allows exploration of learning parameters independent of their effects on active maintenance contributions of NMDA, and may be supported by different receptor subtypes LrnNMDA chans.NMDAParams `display:"inline"` // synaptic scaling parameters for regulating overall average activity compared to neuron's own target level TrgAvgAct TrgAvgActParams `display:"inline"` // recv neuron learning rate modulation params -- an additional error-based modulation of learning for receiver side: RLRate = |CaSpkP - CaSpkD| / Max(CaSpkP, CaSpkD) RLRate RLRateParams `display:"inline"` // neuromodulation effects on learning rate and activity, as a function of layer-level DA and ACh values, which are updated from global Context values, and computed from reinforcement learning algorithms NeuroMod NeuroModParams `display:"inline"` }
axon.LearnNeurParams manages learning-related parameters at the neuron level. This is mainly the running-average activations that drive learning.
func (*LearnNeurParams) CaFromSpike ¶
func (ln *LearnNeurParams) CaFromSpike(ctx *Context, ni, di uint32)
CaFromSpike updates all spike-driven calcium variables, including CaLrn and CaSpk. Computed after new activation for current cycle is updated.
func (*LearnNeurParams) Defaults ¶
func (ln *LearnNeurParams) Defaults()
func (*LearnNeurParams) InitNeurCa ¶
func (ln *LearnNeurParams) InitNeurCa(ctx *Context, ni, di uint32)
InitNeurCa initializes the neuron-level calcium learning and spiking variables. Called by InitWts (at start of learning).
func (*LearnNeurParams) LrnNMDAFromRaw ¶
func (ln *LearnNeurParams) LrnNMDAFromRaw(ctx *Context, ni, di uint32, geTot float32)
LrnNMDAFromRaw updates the separate NMDA conductance and calcium values based on GeTot = GeRaw + external ge conductance. These are the variables that drive learning -- can be the same as activation but also can be different for testing learning Ca effects independent of activation effects.
func (*LearnNeurParams) Update ¶
func (ln *LearnNeurParams) Update()
type LearnSynParams ¶
type LearnSynParams struct { // enable learning for this pathway Learn slbool.Bool // learning rate parameters, supporting two levels of modulation on top of base learning rate. LRate LRateParams `display:"inline"` // trace-based learning parameters Trace TraceParams `display:"inline"` // kinase calcium Ca integration parameters: using linear regression parameters KinaseCa kinase.SynCaLinear `display:"inline"` // hebbian learning option, which overrides the default learning rules Hebb HebbParams `display:"inline"` // contains filtered or unexported fields }
LearnSynParams manages learning-related parameters at the synapse-level.
func (*LearnSynParams) CHLdWt ¶
func (ls *LearnSynParams) CHLdWt(suCaP, suCaD, ruCaP, ruCaD float32) float32
CHLdWt returns the error-driven weight change component for a CHL (Contrastive Hebbian Learning) rule, optionally using the check-mark-shaped temporally eXtended Contrastive Attractor Learning (XCAL) function.
func (*LearnSynParams) Defaults ¶
func (ls *LearnSynParams) Defaults()
func (*LearnSynParams) DeltaDWt ¶
func (ls *LearnSynParams) DeltaDWt(plus, minus float32) float32
DeltaDWt returns the error-driven weight change component for a simple delta between a minus and plus phase factor, optionally using the check-mark-shaped temporally eXtended Contrastive Attractor Learning (XCAL) function.
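For orientation only (this is not the library's implementation), the simple delta component without the XCAL option reduces to the difference between the plus-phase and minus-phase factors:

	// plainDelta is an illustrative stand-in for the non-XCAL case: the
	// error-driven component is the plus-phase factor minus the minus-phase
	// factor, which is then scaled by the learning rate elsewhere.
	func plainDelta(plus, minus float32) float32 {
		return plus - minus
	}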
func (*LearnSynParams) ShouldDisplay ¶
func (ls *LearnSynParams) ShouldDisplay(field string) bool
func (*LearnSynParams) Update ¶
func (ls *LearnSynParams) Update()
type MatrixParams ¶
type MatrixParams struct { // threshold on layer Avg SpkMax for Matrix Go and VThal layers to count as having gated GateThr float32 `default:"0.05"` // is this a ventral striatum (VS) matrix layer? if true, the gating status of this layer is recorded in the Global state, and used for updating effort and other factors. IsVS slbool.Bool // index of other matrix (Go if we are NoGo and vice-versa). Set during Build from BuildConfig OtherMatrixName OtherMatrixIndex int32 `edit:"-"` // index of thalamus layer that we gate. needed to get gating information. Set during Build from BuildConfig ThalLay1Name if present -- -1 if not used ThalLay1Index int32 `edit:"-"` // index of thalamus layer that we gate. needed to get gating information. Set during Build from BuildConfig ThalLay2Name if present -- -1 if not used ThalLay2Index int32 `edit:"-"` // index of thalamus layer that we gate. needed to get gating information. Set during Build from BuildConfig ThalLay3Name if present -- -1 if not used ThalLay3Index int32 `edit:"-"` // index of thalamus layer that we gate. needed to get gating information. Set during Build from BuildConfig ThalLay4Name if present -- -1 if not used ThalLay4Index int32 `edit:"-"` // index of thalamus layer that we gate. needed to get gating information. Set during Build from BuildConfig ThalLay5Name if present -- -1 if not used ThalLay5Index int32 `edit:"-"` // index of thalamus layer that we gate. needed to get gating information. Set during Build from BuildConfig ThalLay6Name if present -- -1 if not used ThalLay6Index int32 `edit:"-"` // contains filtered or unexported fields }
MatrixParams has parameters for BG Striatum Matrix MSN layers These are the main Go / NoGo gating units in BG. DA, ACh learning rate modulation is pre-computed on the recv neuron RLRate variable via NeuroMod. Also uses Pool.Gated for InvertNoGate, updated in PlusPhase prior to DWt call. Must set Learn.NeuroMod.DAMod = D1Mod or D2Mod via SetBuildConfig("DAMod").
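A hedged sketch of the BuildConfig settings referenced above; mtxGo, mtxNo, and thal are assumed to be *axon.Layer values created elsewhere (e.g., by AddVMatrixLayer and AddBGThalLayer4D), and the Add* helper methods normally set these entries for you:

	// D1 = Go, D2 = NoGo dopamine modulation, as required by MatrixParams.
	mtxGo.SetBuildConfig("DAMod", "D1Mod")
	mtxNo.SetBuildConfig("DAMod", "D2Mod")

	// ThalLay1Name is resolved to ThalLay1Index during Build.
	mtxGo.SetBuildConfig("ThalLay1Name", thal.Name())
	mtxNo.SetBuildConfig("ThalLay1Name", thal.Name())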
func (*MatrixParams) Defaults ¶
func (mp *MatrixParams) Defaults()
func (*MatrixParams) Update ¶
func (mp *MatrixParams) Update()
type MatrixPathParams ¶
type MatrixPathParams struct { // proportion of trace activity driven by the basic credit assignment factor // based on the PF modulatory inputs and activity of the receiving neuron, // relative to the delta factor which is generally going to be smaller // because it is an activity difference. Credit float32 `default:"0.6"` // baseline amount of PF activity that modulates credit assignment learning, // for neurons with zero PF modulatory activity. // These were not part of the actual motor action, but can still get some // smaller amount of credit learning. BasePF float32 `default:"0.005"` // weight for trace activity that is a function of the minus-plus delta // activity signal on the receiving MSN neuron, independent of PF modulation. // This should always be 1 except for testing disabling: adjust NonDelta // relative to it, and the overall learning rate. Delta float32 `default:"1"` // for ventral striatum, learn based on activity at time of reward, // in inverse proportion to the GoalMaint activity: i.e., if there was no // goal maintenance, learn at reward to encourage goal engagement next time, // but otherwise, do not further reinforce at time of reward, because the // actual goal gating learning trace is a better learning signal. // Otherwise, only uses accumulated trace but doesn't include rew-time activity, // e.g., for testing cases that do not have GoalMaint. VSRewLearn slbool.Bool `default:"true"` }
MatrixPathParams for trace-based learning in the MatrixPath. A trace of synaptic co-activity is formed, and then modulated by dopamine whenever it occurs. This bridges the temporal gap between gating activity and subsequent activity, and is based biologically on synaptic tags. Trace is applied to DWt and reset at the time of reward.
func (*MatrixPathParams) Defaults ¶
func (tp *MatrixPathParams) Defaults()
func (*MatrixPathParams) Update ¶
func (tp *MatrixPathParams) Update()
type NetIndexes ¶
type NetIndexes struct { // number of data parallel items to process currently NData uint32 `min:"1"` // network index in global Networks list of networks -- needed for GPU shader kernel compatible network variable access functions (e.g., NrnV, SynV etc) in CPU mode NetIndex uint32 `edit:"-"` // maximum amount of data parallel MaxData uint32 `edit:"-"` // number of layers in the network NLayers uint32 `edit:"-"` // total number of neurons NNeurons uint32 `edit:"-"` // total number of pools excluding * MaxData factor NPools uint32 `edit:"-"` // total number of synapses NSyns uint32 `edit:"-"` // maximum size in float32 (4 bytes) of a GPU buffer -- needed for GPU access GPUMaxBuffFloats uint32 `edit:"-"` // total number of SynCa banks of GPUMaxBufferBytes arrays in GPU GPUSynCaBanks uint32 `edit:"-"` // total number of .Rubicon Drives / positive USs RubiconNPosUSs uint32 `edit:"-"` // total number of .Rubicon Costs RubiconNCosts uint32 `edit:"-"` // total number of .Rubicon Negative USs RubiconNNegUSs uint32 `edit:"-"` // offset into GlobalVars for Cost values GvCostOff uint32 `edit:"-"` // stride into GlobalVars for Cost values GvCostStride uint32 `edit:"-"` // offset into GlobalVars for USneg values GvUSnegOff uint32 `edit:"-"` // stride into GlobalVars for USneg values GvUSnegStride uint32 `edit:"-"` // offset into GlobalVars for USpos, Drive, VSPatch values GvUSposOff uint32 `edit:"-"` // stride into GlobalVars for USpos, Drive, VSPatch values GvUSposStride uint32 `edit:"-"` // contains filtered or unexported fields }
NetIndexes are indexes and sizes for processing the network.
func (*NetIndexes) DataIndex ¶
func (ctx *NetIndexes) DataIndex(idx uint32) uint32
DataIndex returns the data index from an overall index over N * MaxData
func (*NetIndexes) DataIndexIsValid ¶
func (ctx *NetIndexes) DataIndexIsValid(li uint32) bool
DataIndexIsValid returns true if the data index is valid (< NData)
func (*NetIndexes) ItemIndex ¶
func (ctx *NetIndexes) ItemIndex(idx uint32) uint32
ItemIndex returns the main item index from an overall index over NItems * MaxData (items = layers, neurons, synapses).
func (*NetIndexes) LayerIndexIsValid ¶
func (ctx *NetIndexes) LayerIndexIsValid(li uint32) bool
LayerIndexIsValid returns true if the layer index is valid (< NLayers)
func (*NetIndexes) NeurIndexIsValid ¶
func (ctx *NetIndexes) NeurIndexIsValid(ni uint32) bool
NeurIndexIsValid returns true if the neuron index is valid (< NNeurons)
func (*NetIndexes) PoolDataIndexIsValid ¶
func (ctx *NetIndexes) PoolDataIndexIsValid(pi uint32) bool
PoolDataIndexIsValid returns true if the pool*data index is valid (< NPools*MaxData)
func (*NetIndexes) PoolIndexIsValid ¶
func (ctx *NetIndexes) PoolIndexIsValid(pi uint32) bool
PoolIndexIsValid returns true if the pool index is valid (< NPools)
func (*NetIndexes) SynIndexIsValid ¶
func (ctx *NetIndexes) SynIndexIsValid(si uint32) bool
SynIndexIsValid returns true if the synapse index is valid (< NSyns)
func (*NetIndexes) ValuesIndex ¶
func (ctx *NetIndexes) ValuesIndex(li, di uint32) uint32
ValuesIndex returns the global network index for LayerValues with given layer index and data parallel index.
type Network ¶
type Network struct {
NetworkBase
}
axon.Network implements the Axon spiking model, building on the algorithm-independent NetworkBase that manages all the infrastructure.
var ( // TheNetwork is the one current network in use, needed for GPU shader kernel // compatible variable access in CPU mode, for !multinet build tags case. // Typically there is just one and it is faster to access directly. // This is set in Network.InitName. TheNetwork *Network // Networks is a global list of networks, needed for GPU shader kernel // compatible variable access in CPU mode, for multinet build tags case. // This is updated in Network.InitName, which sets NetIndex. Networks []*Network )
func GlobalNetwork ¶
func (*Network) AddACCost ¶
func (net *Network) AddACCost(ctx *Context, nCosts, accY, accX int, space float32) (acc, accCT, accPT, accPTp, accMD *Layer)
AddACCost adds anterior cingulate cost coding layers, for given number of cost pools (typically 2: time, effort), with given number of units per pool.
func (*Network) AddAmygdala ¶
func (net *Network) AddAmygdala(prefix string, neg bool, nNeurY, nNeurX int, space float32) (blaPosAcq, blaPosExt, blaNegAcq, blaNegExt, cemPos, cemNeg, blaNov *Layer)
AddAmygdala adds a full amygdala complex including BLA, CeM, and LDT. Inclusion of negative valence is optional with neg arg -- neg* layers are nil if not included. Uses the network Rubicon.NPosUSs and NNegUSs for number of pools -- must be configured prior to calling this.
func (*Network) AddBGThalLayer2D ¶
AddBGThalLayer2D adds a BG gated thalamus (e.g., VA/VL/VM, MD) Layer of given size, with given name. This version has a 2D structure
func (*Network) AddBGThalLayer4D ¶
AddBGThalLayer4D adds a BG gated thalamus (e.g., VA/VL/VM, MD) Layer of given size, with given name. This version has a 4D structure, with Pools representing separable gating domains.
func (*Network) AddBLALayers ¶
func (net *Network) AddBLALayers(prefix string, pos bool, nUs, nNeurY, nNeurX int, rel relpos.Relations, space float32) (acq, ext *Layer)
AddBLALayers adds two BLA layers, acquisition / extinction / D1 / D2, for positive or negative valence
func (*Network) AddCTLayer2D ¶
AddCTLayer2D adds a CT Layer of given size, with given name.
func (*Network) AddCTLayer4D ¶
AddCTLayer4D adds a CT Layer of given size, with given name.
func (*Network) AddClampDaLayer ¶
AddClampDaLayer adds a ClampDaLayer of given name
func (*Network) AddDBG ¶
func (net *Network) AddDBG(prefix string, nPoolsY, nPoolsX, nNeurY, nNeurX, gpNeurY, gpNeurX int, space float32) (mtxGo, mtxNo, gpePr, gpeAk, stn, gpi, pf *Layer)
AddDBG adds Dorsal Basal Ganglia layers, using the PCore Pallidal Core framework where GPe plays a central role. Returns DMtxGo, DMtxNo, DGPePr, DGPeAk, DSTN, DGPi, PF layers, with given optional prefix. Makes 4D pools throughout the GP layers, with Pools representing separable gating domains, i.e., action domains. All GP / STN layers have gpNeur neurons. Appropriate PoolOneToOne connections are made between layers, using standard styles. space is the spacing between layers (2 typical)
func (*Network) AddDMatrixLayer ¶
func (net *Network) AddDMatrixLayer(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, da DAModTypes) *Layer
AddDMatrixLayer adds a Dorsal MatrixLayer of given size, with given name. Assumes that a 4D structure will be used, with Pools representing separable gating domains. da gives the DaReceptor type (D1R = Go, D2R = NoGo)
func (*Network) AddDrivesLayer ¶
AddDrivesLayer adds Rubicon layer representing current drive activity, from Global Drive.Drives. Uses a PopCode representation based on LayerParams.Act.PopCode, distributed over given numbers of neurons in the X and Y dimensions, per drive pool.
func (*Network) AddDrivesPulvLayer ¶
func (net *Network) AddDrivesPulvLayer(ctx *Context, nNeurY, nNeurX int, space float32) (drv, drvP *Layer)
AddDrivesPulvLayer adds Rubicon layer representing current drive activity, from Global Drive.Drives. Uses a PopCode representation based on LayerParams.Act.PopCode, distributed over given numbers of neurons in the X and Y dimensions, per drive pool. Adds Pulvinar predictive layers for Drives.
func (*Network) AddGPeLayer2D ¶
AddGPeLayer2D adds a GPLayer of given size, with given name. Must set the GPType BuildConfig setting to the appropriate GPLayerType.
func (*Network) AddGPeLayer4D ¶
AddGPeLayer4D adds a GPLayer of given size, with given name. Makes a 4D structure with Pools representing separable gating domains.
func (*Network) AddGPiLayer2D ¶
AddGPiLayer2D adds a GPiLayer of given size, with given name.
func (*Network) AddGPiLayer4D ¶
AddGPiLayer4D adds a GPiLayer of given size, with given name. Makes a 4D structure with Pools representing separable gating domains.
func (*Network) AddHip ¶
func (net *Network) AddHip(ctx *Context, hip *HipConfig, space float32) (ec2, ec3, dg, ca3, ca1, ec5 *Layer)
AddHip adds a new Hippocampal network for episodic memory. Returns layers most likely to be used for remaining connections and positions.
func (*Network) AddInputPulv2D ¶
AddInputPulv2D adds an Input layer and a Pulvinar layer of given size, with given name. The Input layer is set as the Driver of the Pulvinar layer. Both layers have SetClass(name) called to allow shared params.
func (*Network) AddInputPulv4D ¶
func (net *Network) AddInputPulv4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, space float32) (*Layer, *Layer)
AddInputPulv4D adds an Input layer and a Pulvinar layer of given size, with given name. The Input layer is set as the Driver of the Pulvinar layer. Both layers have SetClass(name) called to allow shared params.
func (*Network) AddLDTLayer ¶
AddLDTLayer adds a LDTLayer
func (*Network) AddOFCneg ¶
func (net *Network) AddOFCneg(ctx *Context, nUSs, ofcY, ofcX int, space float32) (ofc, ofcCT, ofcPT, ofcPTp, ofcMD *Layer)
AddOFCneg adds orbital frontal cortex negative US-coding layers, for given number of neg US pools with given number of units per pool.
func (*Network) AddOFCpos ¶
func (net *Network) AddOFCpos(ctx *Context, nUSs, nY, ofcY, ofcX int, space float32) (ofc, ofcCT, ofcPT, ofcPTp, ofcMD *Layer)
AddOFCpos adds orbital frontal cortex positive US-coding layers, for given number of pos US pools (first is novelty / curiosity pool), with given number of units per pool.
func (*Network) AddPFC2D ¶
func (net *Network) AddPFC2D(name, thalSuffix string, nNeurY, nNeurX int, decayOnRew, selfMaint bool, space float32) (pfc, pfcCT, pfcPT, pfcPTp, pfcThal *Layer)
AddPFC2D adds a "full stack" of 2D PFC layers: * AddSuperCT2D (Super and CT) * AddPTMaintThal (PTMaint, BGThal) * AddPTPredLayer (PTPred) with given name prefix, which is also set as the Class for all layers & paths (+"Path"), and suffix for the BGThal layer (e.g., "MD" or "VM" etc for different thalamic nuclei). Sets PFCLayer as additional class for all cortical layers. OneToOne, full connectivity is used between layers. decayOnRew determines the Act.Decay.OnRew setting (true for OFC / ACC type layers). if selfMaint is true, the SMaint self-maintenance mechanism is used instead of lateral connections. CT layer uses the Medium timescale params.
func (*Network) AddPFC4D ¶
func (net *Network) AddPFC4D(name, thalSuffix string, nPoolsY, nPoolsX, nNeurY, nNeurX int, decayOnRew, selfMaint bool, space float32) (pfc, pfcCT, pfcPT, pfcPTp, pfcThal *Layer)
AddPFC4D adds a "full stack" of 4D PFC layers: * AddSuperCT4D (Super and CT) * AddPTMaintThal (PTMaint, BGThal) * AddPTPredLayer (PTPred) with given name prefix, which is also set as the Class for all layers & paths (+"Path"), and suffix for the BGThal layer (e.g., "MD" or "VM" etc for different thalamic nuclei). Sets PFCLayer as additional class for all cortical layers. OneToOne and PoolOneToOne connectivity is used between layers. decayOnRew determines the Act.Decay.OnRew setting (true for OFC / ACC type layers). if selfMaint is true, the SMaint self-maintenance mechanism is used instead of lateral connections. CT layer uses the Medium timescale params. Use, e.g., pfcCT.DefParams["Layer.Inhib.Layer.Gi"] = "2.8" to change default params.
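A hedged sketch of adding a 2D PFC stack and overriding one default parameter, following the DefParams usage noted above (layer name, sizes, and the "MD" suffix are illustrative; net is assumed to be an *axon.Network prior to Build):

	_, pfcCT, _, _, _ := net.AddPFC2D("PFC", "MD", 10, 10, true, false, 2)

	// DefParams overrides default parameters prior to Build, per the note above.
	pfcCT.DefParams["Layer.Inhib.Layer.Gi"] = "2.8"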
func (*Network) AddPTMaintLayer2D ¶
AddPTMaintLayer2D adds a PTMaintLayer of given size, with given name.
func (*Network) AddPTMaintLayer4D ¶
AddPTMaintLayer4D adds a PTMaintLayer of given size, with given name.
func (*Network) AddPTMaintThalForSuper ¶
func (net *Network) AddPTMaintThalForSuper(super, ct *Layer, thalSuffix, pathClass string, superToPT, ptSelf, ptThal paths.Pattern, selfMaint bool, space float32) (pt, thal *Layer)
AddPTMaintThalForSuper adds a PTMaint pyramidal tract active maintenance layer and a BG gated Thalamus layer for given superficial layer (SuperLayer) and associated CT, with given thal suffix (e.g., MD, VM). PT and Thal have SetClass(super.Name()) called to allow shared params. Projections are made with given classes: SuperToPT, PTSelfMaint, PTtoThal, ThalToPT, with optional extra class. if selfMaint is true, the SMaint self-maintenance mechanism is used instead of lateral connections. The PT and BGThal layers are positioned behind the CT layer.
func (*Network) AddPTPredLayer ¶
func (net *Network) AddPTPredLayer(ptMaint, ct *Layer, ptToPredPath, ctToPredPath paths.Pattern, pathClass string, space float32) (ptPred *Layer)
AddPTPredLayer adds a PTPred pyramidal tract prediction layer for given PTMaint layer and associated CT. Sets SetClass(super.Name()) to allow shared params. Projections are made with given classes: PTtoPred, CTtoPred The PTPred layer is positioned behind the PT layer.
func (*Network) AddPTPredLayer2D ¶
AddPTPredLayer2D adds a PTPredLayer of given size, with given name.
func (*Network) AddPTPredLayer4D ¶
AddPTPredLayer4D adds a PTPredLayer of given size, with given name.
func (*Network) AddPVLayers ¶
func (net *Network) AddPVLayers(nNeurY, nNeurX int, rel relpos.Relations, space float32) (pvPos, pvNeg *Layer)
AddPVLayers adds PVpos and PVneg layers for positive or negative valence primary value representations, representing the total drive and effort weighted USpos outcome, or total USneg outcome. Uses a PopCode representation based on LayerParams.Act.PopCode, distributed over given numbers of neurons in the X and Y dimensions.
func (*Network) AddPVPulvLayers ¶
func (net *Network) AddPVPulvLayers(nNeurY, nNeurX int, rel relpos.Relations, space float32) (pvPos, pvNeg, pvPosP, pvNegP *Layer)
AddPVPulvLayers adds PVpos and PVneg layers for positive or negative valence primary value representations, representing the total drive and effort weighted USpos outcomes, or total USneg outcomes. Uses a PopCode representation based on LayerParams.Act.PopCode, distributed over given numbers of neurons in the X and Y dimensions. Adds Pulvinar predictive layers for each.
func (*Network) AddPulvForLayer ¶
AddPulvForLayer adds a Pulvinar for given Layer (typically an Input type layer) with a P suffix. The Pulv.Driver is set to given Layer. The Pulv layer needs other CT connections from higher up to predict this layer. Pulvinar is positioned behind the given Layer.
func (*Network) AddPulvForSuper ¶
AddPulvForSuper adds a Pulvinar for given superficial layer (SuperLayer) with a P suffix. The Pulv.Driver is set to Super, as is the Class on Pulv. The Pulv layer needs other CT connections from higher up to predict this layer. Pulvinar is positioned behind the CT layer.
func (*Network) AddPulvLayer2D ¶
AddPulvLayer2D adds a Pulvinar Layer of given size, with given name.
func (*Network) AddPulvLayer4D ¶
AddPulvLayer4D adds a Pulvinar Layer of given size, with given name.
func (*Network) AddRWLayers ¶
func (nt *Network) AddRWLayers(prefix string, rel relpos.Relations, space float32) (rew, rp, da *Layer)
AddRWLayers adds a simple Rescorla-Wagner (PV only) dopamine system, with a primary Reward layer, a RWPred prediction layer, and a dopamine layer that computes the difference. Only generates DA when the Rew layer has external input -- otherwise zero.
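A minimal sketch of adding the Rescorla-Wagner dopamine layers (net is assumed to be an *axon.Network; relpos is the layout-relations package referenced in the signature above, and fmt is the standard library package):

	// empty prefix uses the default layer names; layers are placed Behind with spacing 2.
	rew, rwPred, rwDa := net.AddRWLayers("", relpos.Behind, 2)
	fmt.Println(rew.Name(), rwPred.Name(), rwDa.Name())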
func (*Network) AddRewLayer ¶
AddRewLayer adds a RewLayer of given name
func (*Network) AddRubicon ¶
func (net *Network) AddRubicon(ctx *Context, nYneur, popY, popX, bgY, bgX, pfcY, pfcX int, space float32) (vSgpi, vSmtxGo, vSmtxNo, urgency, pvPos, blaPosAcq, blaPosExt, blaNegAcq, blaNegExt, blaNov, ofcPos, ofcPosCT, ofcPosPT, ofcPosPTp, ilPos, ilPosCT, ilPosPT, ilPosPTp, ofcNeg, ofcNegCT, ofcNegPT, ofcNegPTp, ilNeg, ilNegCT, ilNegPT, ilNegPTp, accCost, plUtil, sc *Layer)
AddRubicon builds a complete Rubicon model for goal-driven decision making. Uses the network Rubicon.NPosUSs and NNegUSs for number of pools -- must be configured prior to calling this. Calls AddRubiconOFCus (Rubicon and OFC US-coding layers). Makes all appropriate interconnections and sets default parameters. Needs CS -> BLA, OFC connections to be made. Returns layers most likely to be used for remaining connections and positions.
func (*Network) AddRubiconOFCus ¶
func (net *Network) AddRubiconOFCus(ctx *Context, nYneur, popY, popX, bgY, bgX, ofcY, ofcX int, space float32) (vSgpi, vSmtxGo, vSmtxNo, vSpatchD1, vSpatchD2, urgency, usPos, pvPos, usNeg, usNegP, pvNeg, pvNegP, blaPosAcq, blaPosExt, blaNegAcq, blaNegExt, blaNov, ofcPos, ofcPosCT, ofcPosPT, ofcPosPTp, ilPos, ilPosCT, ilPosPT, ilPosPTp, ilPosMD, ofcNeg, ofcNegCT, ofcNegPT, ofcNegPTp, accCost, accCostCT, accCostPT, accCostPTp, accCostMD, ilNeg, ilNegCT, ilNegPT, ilNegPTp, ilNegMD, sc *Layer)
AddRubiconOFCus builds a complete Rubicon network with OFCpos (orbital frontal cortex) US-coding layers, ILpos infralimbic abstract positive value, OFCneg for negative value inputs, and ILneg value layers, and ACCost cost prediction layers. Uses the network Rubicon.NPosUSs, NNegUSs, NCosts for number of pools -- must be configured prior to calling this. Calls: AddVTALHbLDTLayers, AddRubiconPulvLayers, AddVS, AddAmygdala, AddOFCpos, AddOFCneg. Makes all appropriate interconnections and sets default parameters. Needs CS -> BLA, OFC connections to be made. Returns layers most likely to be used for remaining connections and positions.
func (*Network) AddRubiconPulvLayers ¶
func (net *Network) AddRubiconPulvLayers(ctx *Context, nYneur, popY, popX int, space float32) (drives, drivesP, urgency, usPos, usNeg, cost, costFinal, usPosP, usNegP, costP, pvPos, pvNeg, pvPosP, pvNegP *Layer)
AddRubiconPulvLayers adds Rubicon layers for PV-related information visualizing the internal states of the Global state, with Pulvinar prediction layers for training PFC layers. Uses the network Rubicon.NPosUSs, NNegUSs, NCosts for number of pools -- must be configured prior to calling this. * drives = popcode representation of drive strength (no activity for 0) number of active drives comes from Context; popY, popX neurons per pool. * urgency = popcode representation of urgency Go bias factor, popY, popX neurons. * us = popcode per US, positive & negative, cost * pv = popcode representation of final primary value on positive and negative valences -- this is what the dopamine value ends up coding (pos - neg). Layers are organized in depth per type: USs in one column, PVs in the next, with Drives in the back; urgency behind that.
func (*Network) AddSCLayer2D ¶
AddSCLayer2D adds a superior colliculus 2D layer which computes stimulus onset via trial-delayed inhibition (Inhib.FFPrv) -- connect with fixed random input from sensory input layers. Sets base name and class name to SC. Must set Inhib.FFPrv > 0 and Act.Decay.* = 0.
func (*Network) AddSCLayer4D ¶
AddSCLayer4D adds a superior colliculus 4D layer which computes stimulus onset via trial-delayed inhibition (Inhib.FFPrv) -- connect with fixed random input from sensory input layers. Sets base name and class name to SC. Must set Inhib.FFPrv > 0 and Act.Decay.* = 0.
func (*Network) AddSTNLayer2D ¶
AddSTNLayer2D adds a subthalamic nucleus Layer of given size, with given name.
func (*Network) AddSTNLayer4D ¶
AddSTNLayer4D adds a subthalamic nucleus Layer of given size, with given name. Makes a 4D structure with Pools representing separable gating domains.
func (*Network) AddSuperCT2D ¶
func (net *Network) AddSuperCT2D(name, pathClass string, shapeY, shapeX int, space float32, pat paths.Pattern) (super, ct *Layer)
AddSuperCT2D adds a superficial (SuperLayer) and corresponding CT (CT suffix) layer with CTCtxtPath pathway from Super to CT using given pathway pattern, and NO Pulv Pulvinar. CT is placed Behind Super.
func (*Network) AddSuperCT4D ¶
func (net *Network) AddSuperCT4D(name, pathClass string, nPoolsY, nPoolsX, nNeurY, nNeurX int, space float32, pat paths.Pattern) (super, ct *Layer)
AddSuperCT4D adds a superficial (SuperLayer) and corresponding CT (CT suffix) layer with CTCtxtPath pathway from Super to CT using given pathway pattern, and NO Pulv Pulvinar. CT is placed Behind Super.
func (*Network) AddSuperLayer2D ¶
AddSuperLayer2D adds a Super Layer of given size, with given name.
func (*Network) AddSuperLayer4D ¶
AddSuperLayer4D adds a Super Layer of given size, with given name.
func (*Network) AddTDLayers ¶
func (nt *Network) AddTDLayers(prefix string, rel relpos.Relations, space float32) (rew, rp, ri, td *Layer)
AddTDLayers adds the standard TD temporal differences layers, generating a DA signal. Projection from Rew to RewInteg is given class TDRewToInteg -- should have no learning and 1 weight.
func (*Network) AddUSLayers ¶
func (net *Network) AddUSLayers(popY, popX int, rel relpos.Relations, space float32) (usPos, usNeg, cost, costFinal *Layer)
AddUSLayers adds USpos, USneg, and Cost layers for positive or negative valence unconditioned stimuli (USs), using a pop-code representation of US magnitude. These track the Global USpos, USneg, Cost for visualization and predictive learning. Actual US inputs are set in Rubicon. Uses the network Rubicon.NPosUSs, NNegUSs, and NCosts for number of pools -- must be configured prior to calling this.
func (*Network) AddUSPulvLayers ¶
func (net *Network) AddUSPulvLayers(popY, popX int, rel relpos.Relations, space float32) (usPos, usNeg, cost, costFinal, usPosP, usNegP, costP *Layer)
AddUSPulvLayers adds USpos, USneg, and Cost layers for positive or negative valence unconditioned stimuli (USs), using a pop-code representation of US magnitude. These track the Global USpos, USneg, Cost, for visualization and predictive learning. Actual US inputs are set in Rubicon. Adds Pulvinar predictive layers for each.
func (*Network) AddUrgencyLayer ¶
AddUrgencyLayer adds a Rubicon layer representing the current urgency factor, from Global Urgency.Urge. Uses a PopCode representation based on LayerParams.Act.PopCode, distributed over given numbers of neurons in the X and Y dimensions.
func (*Network) AddVBG ¶
func (net *Network) AddVBG(prefix string, nPoolsY, nPoolsX, nNeurY, nNeurX, gpNeurY, gpNeurX int, space float32) (mtxGo, mtxNo, gpePr, gpeAk, stn, gpi *Layer)
AddVBG adds Ventral Basal Ganglia layers, using the PCore Pallidal Core framework where GPe plays a central role. Returns VMtxGo, VMtxNo, VGPePr, VGPeAk, VSTN, VGPi layers, with given optional prefix. Only the Matrix has pool-based 4D shape by default -- use pool for "role" like elements where matches need to be detected. All GP / STN layers have gpNeur neurons. Appropriate connections are made between layers, using standard styles. space is the spacing between layers (2 typical).
func (*Network) AddVMatrixLayer ¶
func (net *Network) AddVMatrixLayer(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, da DAModTypes) *Layer
AddVMatrixLayer adds a Ventral MatrixLayer of given size, with given name. Assumes that a 4D structure will be used, with Pools representing separable gating domains. da gives the DaReceptor type (D1R = Go, D2R = NoGo)
func (*Network) AddVSGatedLayer ¶
AddVSGatedLayer adds a VSGatedLayer with given number of Y units and 2 pools, first one represents JustGated, second is HasGated.
func (*Network) AddVSPatchLayers ¶
func (net *Network) AddVSPatchLayers(prefix string, nUs, nNeurY, nNeurX int, space float32) (d1, d2 *Layer)
AddVSPatchLayers adds VSPatch (Pos, D1, D2)
func (*Network) AddVTALHbLDTLayers ¶
AddVTALHbLDTLayers adds VTA dopamine, LHb DA dipping, and LDT ACh layers which are driven by corresponding values in Global
func (*Network) ApplyExts ¶
ApplyExts applies external inputs to layers, based on values that were set in prior layer-specific ApplyExt calls. This does nothing on the CPU, but is critical for the GPU, and should be added to all sims where GPU will be used.
func (*Network) ClearTargExt ¶
ClearTargExt clears external inputs Ext that were set from target values Target. This can be called to simulate alpha cycles within theta cycles, for example.
func (*Network) CollectDWts ¶
CollectDWts writes all of the synaptic DWt values to given dwts slice which is pre-allocated to given nwts size if dwts is nil, in which case the method returns true so that the actual length of dwts can be passed next time around. Used for MPI sharing of weight changes across processors. This calls SyncSynapsesFromGPU() (nop if not GPU) first.
func (*Network) ConfigGPUnoGUI ¶
ConfigGPUnoGUI turns on GPU mode in the case where no GUI is being used. This directly accesses the GPU hardware. It does not work well when a GUI is also being used. Configures the GPU -- call after the Network is Built, initialized, params are set, and everything is ready to run.
func (*Network) ConfigGPUwithGUI ¶
ConfigGPUwithGUI turns on GPU mode in context of an active GUI where Vulkan has been initialized etc. Configures the GPU -- call after Network is Built, initialized, params are set, and everything is ready to run.
func (*Network) ConfigLoopsHip ¶
func (net *Network) ConfigLoopsHip(ctx *Context, man *looper.Manager, hip *HipConfig, pretrain *bool)
ConfigLoopsHip configures the hippocampal looper and should be included in ConfigLoops in the model to make sure the hip loops are configured correctly. See hip.go for an example implementation of this function. ec5ClampFrom specifies the layer to clamp EC5 plus-phase values from: EC3 is the biological source, but an Input layer can be used for a simple testing net.
func (*Network) ConnectCSToBLApos ¶
ConnectCSToBLApos connects the CS input to BLAposAcqD1, BLANovelCS layers using fixed, higher-variance weights, full pathway. Sets classes to: CSToBLApos, CSToBLANovel with default params
func (*Network) ConnectCTSelf ¶
func (net *Network) ConnectCTSelf(ly *Layer, pat paths.Pattern, pathClass string) (ctxt, maint *Path)
ConnectCTSelf adds a Self (Lateral) CTCtxtPath pathway within a CT layer, in addition to a regular lateral pathway, which supports active maintenance. The CTCtxtPath has a Class label of CTSelfCtxt, and the regular one is CTSelfMaint with optional class added.
func (*Network) ConnectCtxtToCT ¶
ConnectCtxtToCT adds a CTCtxtPath from given sending layer to a CT layer
func (*Network) ConnectPTMaintSelf ¶
ConnectPTMaintSelf adds a Self (Lateral) pathway within a PTMaintLayer, which supports active maintenance, with a class of PTSelfMaint
func (*Network) ConnectPTPredSelf ¶
ConnectPTPredSelf adds a Self (Lateral) pathway within a PTPredLayer, which supports active maintenance, with a class of PTSelfMaint
func (*Network) ConnectPTToPulv ¶
func (net *Network) ConnectPTToPulv(pt, ptPred, pulv *Layer, toPulvPat, fmPulvPat paths.Pattern, pathClass string) (ptToPulv, ptPredToPulv, toPTPred *Path)
ConnectPTToPulv connects PT and PTPred with the given Pulv: PT -> Pulv is class PTToPulv; PT does NOT receive back from Pulv. PTPred -> Pulv is class PTPredToPulv; the return path from Pulv is type Back, class FromPulv. toPulvPat is the paths.Pattern for PT -> Pulv, and fmPulvPat is for Pulv -> PTPred. Typically Pulv is a different shape than PTPred, so use Full or an appropriate topological pattern. Adds the optional class name to the pathways.
func (*Network) ConnectPTpToPulv ¶
func (net *Network) ConnectPTpToPulv(ptPred, pulv *Layer, toPulvPat, fmPulvPat paths.Pattern, pathClass string) (ptToPulv, ptPredToPulv, toPTPred *Path)
ConnectPTpToPulv connects PTPred with the given Pulv: PTPred -> Pulv is class PTPredToPulv; the return path from Pulv is type Back, class FromPulv. toPulvPat is the paths.Pattern for PTPred -> Pulv, and fmPulvPat is for Pulv -> PTPred. Typically Pulv is a different shape than PTPred, so use Full or an appropriate topological pattern. Adds the optional class name to the pathways.
func (*Network) ConnectSuperToCT ¶
ConnectSuperToCT adds a CTCtxtPath from given sending Super layer to a CT layer. This automatically sets the FromSuper flag to engage proper defaults. Uses given pathway pattern -- e.g., Full, OneToOne, or PoolOneToOne.
func (*Network) ConnectToBLAAcq ¶
ConnectToBLAAcq adds a BLAPath from given sending layer to a BLA layer, and configures it for acquisition parameters. Sets class to BLAAcqPath. This is for any CS or contextual inputs that drive acquisition.
func (*Network) ConnectToBLAExt ¶
ConnectToBLAExt adds a BLAPath from given sending layer to a BLA layer, and configures it for extinction parameters. Sets class to BLAExtPath. This is for any CS or contextual inputs that drive extinction neurons to fire and override the acquisition ones.
func (*Network) ConnectToDSMatrix ¶
ConnectToDSMatrix adds a DSMatrixPath from given sending layer to a matrix layer
func (*Network) ConnectToPFC ¶
func (net *Network) ConnectToPFC(lay, layP, pfc, pfcCT, pfcPT, pfcPTp *Layer, pat paths.Pattern, pathClass string)
ConnectToPFC connects given predictively learned input to all relevant PFC layers: lay -> pfc (skipped if lay == nil); layP -> pfc; layP <-> pfcCT; pfcPTp <-> layP; and, if pfcPT != nil, pfcPT <-> layP. Sets the PFCPath class name for the pathways.
func (*Network) ConnectToPFCBack ¶
func (net *Network) ConnectToPFCBack(lay, layP, pfc, pfcCT, pfcPT, pfcPTp *Layer, pat paths.Pattern, pathClass string)
ConnectToPFCBack connects given predictively learned input to all relevant PFC layers: lay -> pfc using a BackPath (weaker); layP -> pfc; layP <-> pfcCT; pfcPTp <-> layP.
func (*Network) ConnectToPFCBidir ¶
func (net *Network) ConnectToPFCBidir(lay, layP, pfc, pfcCT, pfcPT, pfcPTp *Layer, pat paths.Pattern, pathClass string) (ff, fb *Path)
ConnectToPFCBidir connects given predictively learned input to all relevant PFC layers, using bidirectional connections to super layers: lay <-> pfc (bidirectional); layP -> pfc; layP <-> pfcCT; pfcPTp <-> layP.
func (*Network) ConnectToPulv ¶
func (net *Network) ConnectToPulv(super, ct, pulv *Layer, toPulvPat, fmPulvPat paths.Pattern, pathClass string) (toPulv, toSuper, toCT *Path)
ConnectToPulv adds the following pathways:

    layers        | class      | path type   | path pat
    --------------+------------+-------------+----------
    ct -> pulv    | "CTToPulv" | ForwardPath | toPulvPat
    pulv -> super | "FromPulv" | BackPath    | fmPulvPat
    pulv -> ct    | "FromPulv" | BackPath    | fmPulvPat
Typically pulv is a different shape than super and ct, so use Full or appropriate topological pattern. Adds optional pathClass name as a suffix.
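A minimal sketch of wiring a superficial / CT / pulvinar triple with this method; the layer variables and the Full patterns are illustrative, and only ConnectToPulv itself is documented here:

    // assumes: import "github.com/emer/emergent/v2/paths"
    // super, ct, pulv are *Layer values created earlier with the AddLayer* methods
    full := paths.NewFull()
    toPulv, toSuper, toCT := net.ConnectToPulv(super, ct, pulv, full, full, "V1")
    _, _, _ = toPulv, toSuper, toCT // returned Paths can be further parameterized via styling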
func (*Network) ConnectToRWPath ¶
ConnectToRWPath adds a RWPath from given sending layer to a RWPred layer
func (*Network) ConnectToSC ¶
ConnectToSC adds a ForwardPath from given sending layer to a SC layer, setting class as ToSC -- should set params as fixed random with more variance than usual.
func (*Network) ConnectToSC1to1 ¶
ConnectToSC1to1 adds a 1to1 ForwardPath from given sending layer to a SC layer, copying the geometry of the sending layer, setting class as ToSC. The connection weights are set to uniform.
func (*Network) ConnectToVSMatrix ¶
ConnectToVSMatrix adds a VSMatrixPath from given sending layer to a matrix layer
func (*Network) ConnectToVSPatch ¶
ConnectToVSPatch adds a VSPatchPath from given sending layer to VSPatchD1, D2 layers
func (*Network) ConnectUSToBLA ¶
ConnectUSToBLA connects the US input to BLApos(Neg)AcqD1(D2) and BLApos(Neg)ExtD2(D1) layers, using fixed, higher-variance weights, full pathway. Sets classes to: USToBLAAcq and USToBLAExt
func (*Network) DWt ¶
DWt computes the weight change (learning) based on current running-average activation values
func (*Network) DecayState ¶
DecayState decays activation state by the given proportion, e.g., 1 = decay completely and 0 = decay not at all. glong is a separate decay factor for long-timescale conductances (g). This is called automatically in NewState, but is available here for ad-hoc decay cases.
func (*Network) DecayStateByClass ¶
DecayStateByClass decays activation state for given class name(s) by given proportion e.g., 1 = decay completely, and 0 = decay not at all. glong = separate decay factor for long-timescale conductances (g)
func (*Network) DecayStateByType ¶
func (nt *Network) DecayStateByType(ctx *Context, decay, glong, ahp float32, types ...LayerTypes)
DecayStateByType decays activation state for given layer types by given proportion e.g., 1 = decay completely, and 0 = decay not at all. glong = separate decay factor for long-timescale conductances (g)
func (*Network) DecayStateLayers ¶
DecayStateLayers decays activation state for the given layers by the given proportion, e.g., 1 = decay completely and 0 = decay not at all. glong is a separate decay factor for long-timescale conductances (g). If this is not being called at the start (around the NewState call), then you should also call nt.GPU.SyncGBufToGPU() to zero the GBuf values, which otherwise will persist spikes in flight.
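As a concrete illustration of this caveat, using DecayStateByType (whose signature is shown above) for an ad-hoc decay outside of NewState; the decay values and layer type are arbitrary:

    // fully decay all Input layers mid-trial ...
    net.DecayStateByType(ctx, 1, 1, 0, InputLayer)
    // ... then zero the GBuf values per the note above, so spikes in flight do not persist
    net.GPU.SyncGBufToGPU()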
func (*Network) Defaults ¶
func (nt *Network) Defaults()
Defaults sets all the default parameters for all layers and pathways
func (*Network) InitExt ¶
InitExt initializes external input state. Call prior to applying external inputs to layers.
func (*Network) InitGScale ¶
InitGScale computes the initial scaling factor for synaptic input conductances G, stored in GScale.Scale, based on sending layer initial activation.
func (*Network) InitName ¶
InitName MUST be called to initialize the network's pointer to itself as an emer.Network, which enables the proper interface methods to be called. Also sets the name, and initializes NetIndex in the global list of Networks.
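A minimal sketch of the required initialization pattern, assuming the standard emer-style InitName(net, name) signature (the name is arbitrary):

    net := &Network{}
    net.InitName(net, "MyNet") // registers the emer.Network self-pointer and the name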
func (*Network) InitTopoSWts ¶
func (nt *Network) InitTopoSWts()
InitTopoSWts initializes SWt structural weight parameters from path types that support topographic weight patterns and have the relevant flags set, including paths.PoolTile and paths.Circle. Call before InitWts if using topographic weights.
func (*Network) InitWts ¶
InitWts initializes synaptic weights and all other associated long-term state variables including running-average state values (e.g., layer running average activations etc)
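Putting the two initialization calls in the order the InitTopoSWts doc requires; whether InitWts takes a Context argument here is an assumption based on the rest of this API:

    net.InitTopoSWts() // only needed when using topographic weight patterns (PoolTile, Circle)
    net.InitWts(ctx)   // assumed Context argument; initializes weights and long-term state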
func (*Network) LRateMod ¶
LRateMod sets the LRate modulation parameter for Paths, which is for dynamic modulation of learning rate (see also LRateSched). Updates the effective learning rate factor accordingly.
func (*Network) LRateSched ¶
LRateSched sets the schedule-based learning rate multiplier. See also LRateMod. Updates the effective learning rate factor accordingly.
func (*Network) LayersSetOff ¶
LayersSetOff sets the Off flag for all layers to given setting
func (*Network) MakeToolbar ¶
func (*Network) MinusPhase ¶
MinusPhase does updating after end of minus phase
func (*Network) NeuronsSlice ¶
NeuronsSlice returns a slice of neuron values using given neuron variable, resizing as needed.
func (*Network) NewState ¶
NewState handles all initialization at start of new input pattern. This is called *before* applying external input data and operates across all data parallel values. The current Context.NData should be set properly prior to calling this and subsequent Cycle methods.
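A hedged sketch of the per-trial sequence implied here; the Context arguments and the elided input-application and cycle steps are assumptions:

    // Context.NData must already be set (see SetMaxData) before NewState
    net.NewState(ctx)
    // ... apply external inputs for each data-parallel index, run minus-phase cycles ...
    net.MinusPhase(ctx)
    net.PlusPhaseStart(ctx) // applies Target inputs as external inputs
    // ... run plus-phase cycles, then the plus-phase wrap-up and DWt / weight updates ...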
func (*Network) PlusPhaseStart ¶
PlusPhaseStart does updating at the start of the plus phase: applies Target inputs as External inputs.
func (*Network) SetDWts ¶
SetDWts sets the DWt weight changes from the given array of floats, which must be the correct size. navg is the number of processors aggregated in these dwts -- some variables need to be averaged instead of summed (e.g., ActAvg). This calls SyncSynapsesToGPU() (a nop if not on the GPU) afterward.
func (*Network) SetSubMean ¶
SetSubMean sets the SubMean parameters in all the layers in the network. trgAvg is for Learn.TrgAvgAct.SubMean; path is for the paths' Learn.Trace.SubMean. In both cases, it is generally best to have both parameters set to 0 at the start of learning.
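A sketch of the schedule suggested above (when to flip the values is up to the simulation):

    net.SetSubMean(0, 0) // start of learning: no mean subtraction
    // ... later, once initial learning is established (e.g., in an epoch callback) ...
    net.SetSubMean(1, 1) // enable mean subtraction for TrgAvgAct and the path Trace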
func (*Network) SizeReport ¶
SizeReport returns a string reporting the size of each layer and pathway in the network, and the total memory footprint. If the detail flag is true, details per layer and pathway are included.
func (*Network) SlowAdapt ¶
SlowAdapt is the layer-level slow adaptation functions: Synaptic scaling, and adapting inhibition
func (*Network) SynsSlice ¶
func (nt *Network) SynsSlice(vals *[]float32, synvar SynapseVars)
SynsSlice returns a slice of synaptic values, in natural sending order, using given synaptic variable, resizing as needed.
func (*Network) TargToExt ¶
TargToExt sets external input Ext from target values Target This is done at end of MinusPhase to allow targets to drive activity in plus phase. This can be called separately to simulate alpha cycles within theta cycles, for example.
func (*Network) UnLesionNeurons ¶
UnLesionNeurons unlesions neurons in all layers in the network. Provides a clean starting point for subsequent lesion experiments.
func (*Network) UpdateExtFlags ¶
UpdateExtFlags updates the neuron flags for external input based on current layer Type field -- call this if the Type has changed since the last ApplyExt* method call.
func (*Network) UpdateParams ¶
func (nt *Network) UpdateParams()
UpdateParams updates all the derived parameters if any have changed, for all layers and pathways
type NetworkBase ¶
type NetworkBase struct { // we need a pointer to ourselves as an emer.Network, which can always be used to extract the true underlying type of object when network is embedded in other structs -- function receivers do not have this ability so this is necessary. EmerNet emer.Network `copier:"-" json:"-" xml:"-" display:"-"` // overall name of network -- helps discriminate if there are multiple Nm string // filename of last weights file loaded or saved WtsFile string // Rubicon system for goal-driven motivated behavior, including Rubicon phasic dopamine signaling. Manages internal drives, US outcomes. Core LHb (lateral habenula) and VTA (ventral tegmental area) dopamine are computed in equations using inputs from specialized network layers (LDTLayer driven by BLA, CeM layers, VSPatchLayer). Renders USLayer, PVLayer, DrivesLayer representations based on state updated here. Rubicon Rubicon // map of name to layers -- layer names must be unique LayMap map[string]*Layer `display:"-"` // map of layer classes -- made during Build LayClassMap map[string][]string `display:"-"` // minimum display position in network MinPos math32.Vector3 `display:"-"` // maximum display position in network MaxPos math32.Vector3 `display:"-"` // optional metadata that is saved in network weights files -- e.g., can indicate number of epochs that were trained, or any other information about this network that would be useful to save MetaData map[string]string // if true, the neuron and synapse variables will be organized into a gpu-optimized memory order, otherwise cpu-optimized. This must be set before network Build() is called. UseGPUOrder bool `edit:"-"` // network index in global Networks list of networks -- needed for GPU shader kernel compatible network variable access functions (e.g., NrnV, SynV etc) in CPU mode NetIndex uint32 `display:"-"` // maximum synaptic delay across any pathway in the network -- used for sizing the GBuf accumulation buffer. MaxDelay uint32 `edit:"-" display:"-"` // maximum number of data inputs that can be processed in parallel in one pass of the network. Neuron storage is allocated to hold this amount during Build process, and this value reflects that. MaxData uint32 `edit:"-"` // total number of neurons NNeurons uint32 `edit:"-"` // total number of synapses NSyns uint32 `edit:"-"` // storage for global vars Globals []float32 `display:"-"` // array of layers Layers []*Layer // array of layer parameters, in 1-to-1 correspondence with Layers LayParams []LayerParams `display:"-"` // array of layer values, with extra per data LayValues []LayerValues `display:"-"` // array of inhibitory pools for all layers. 
Pools []Pool `display:"-"` // entire network's allocation of neuron variables, accessed via NrnV function with flexible striding Neurons []float32 `display:"-"` // entire network's allocation of neuron average variables, accessed via NrnAvgV function with flexible striding NeuronAvgs []float32 `display:"-"` // entire network's allocation of neuron index variables, accessed via NrnI function with flexible striding NeuronIxs []uint32 `display:"-"` // pointers to all pathways in the network, sender-based Paths []*Path `display:"-"` // array of pathway parameters, in 1-to-1 correspondence with Paths, sender-based PathParams []PathParams `display:"-"` // entire network's allocation of synapse idx vars, organized sender-based, with flexible striding, accessed via SynI function SynapseIxs []uint32 `display:"-"` // entire network's allocation of synapses, organized sender-based, with flexible striding, accessed via SynV function Synapses []float32 `display:"-"` // entire network's allocation of synapse Ca vars, organized sender-based, with flexible striding, accessed via SynCaV function SynapseCas []float32 `display:"-"` // starting offset and N cons for each sending neuron, for indexing into the Syns synapses, which are organized sender-based. PathSendCon []StartN `display:"-"` // starting offset and N cons for each recv neuron, for indexing into the RecvSynIndex array of indexes into the Syns synapses, which are organized sender-based. PathRecvCon []StartN `display:"-"` // conductance buffer for accumulating spikes -- subslices are allocated to each pathway -- uses int-encoded float values for faster GPU atomic integration PathGBuf []int32 `display:"-"` // synaptic conductance integrated over time per pathway per recv neuron -- spikes come in via PathBuf -- subslices are allocated to each pathway PathGSyns []float32 `display:"-"` // indexes into Paths (organized by SendPath) organized by recv pathways -- needed for iterating through recv paths efficiently on GPU. RecvPathIndexes []uint32 `display:"-"` // indexes into Synapses for each recv neuron, organized into blocks according to PathRecvCon, for receiver-based access. RecvSynIndexes []uint32 `display:"-"` // external input values for all Input / Target / Compare layers in the network -- the ApplyExt methods write to this per layer, and it is then actually applied in one consistent method. Exts []float32 // context used only for accessing neurons for display -- NetIndexes.NData in here is copied from active context in NewState Ctx Context `display:"-"` // random number generator for the network -- all random calls must use this -- set seed here for weight initialization values Rand randx.SysRand `display:"-"` // random seed to be set at the start of configuring the network and initializing the weights -- set this to get a different set of weights RandSeed int64 `edit:"-"` // number of threads to use for parallel processing NThreads int // GPU implementation GPU GPU `display:"inline"` // record function timer information RecFunTimes bool `display:"-"` // timers for each major function (step of processing) FunTimes map[string]*timer.Time `display:"-"` }
NetworkBase manages the basic structural components of a network (layers). The main Network then can just have the algorithm-specific code.
func (*NetworkBase) AddLayer ¶
func (nt *NetworkBase) AddLayer(name string, shape []int, typ LayerTypes) *Layer
AddLayer adds a new layer with given name and shape to the network. 2D and 4D layer shapes are generally preferred but not essential -- see AddLayer2D and 4D for convenience methods for those. 4D layers enable pool (unit-group) level inhibition in Axon networks, for example. shape is in row-major format with outer-most dimensions first: e.g., 4D 3, 2, 4, 5 = 3 rows (Y) of 2 cols (X) of pools, with each unit group having 4 rows (Y) of 5 (X) units.
func (*NetworkBase) AddLayer2D ¶
func (nt *NetworkBase) AddLayer2D(name string, shapeY, shapeX int, typ LayerTypes) *Layer
AddLayer2D adds a new layer with given name and 2D shape to the network. 2D and 4D layer shapes are generally preferred but not essential.
func (*NetworkBase) AddLayer4D ¶
func (nt *NetworkBase) AddLayer4D(name string, nPoolsY, nPoolsX, nNeurY, nNeurX int, typ LayerTypes) *Layer
AddLayer4D adds a new layer with given name and 4D shape to the network. 4D layers enable pool (unit-group) level inhibition in Axon networks, for example. shape is in row-major format with outer-most dimensions first: e.g., 4D 3, 2, 4, 5 = 3 rows (Y) of 2 cols (X) of pools, with each pool having 4 rows (Y) of 5 (X) neurons.
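The shape convention from the example above, as code (the layer names are arbitrary, and SuperLayer / InputLayer are assumed LayerTypes values):

    // 3 x 2 grid of pools (Y x X), each pool with 4 x 5 neurons = 120 neurons total
    hid := net.AddLayer4D("Hidden", 3, 2, 4, 5, SuperLayer)
    in := net.AddLayer2D("Input", 10, 10, InputLayer)
    _, _ = hid, in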
func (*NetworkBase) AddLayerInit ¶
func (nt *NetworkBase) AddLayerInit(ly *Layer, name string, shape []int, typ LayerTypes)
AddLayerInit is an implementation routine that takes a given layer, adds it to the network, and initializes and configures it properly.
func (*NetworkBase) AllGlobalValues ¶
func (nt *NetworkBase) AllGlobalValues(ctrKey string, vals map[string]float32)
AllGlobalValues adds to map of all Global variables and values. ctrKey is a key of counters to contextualize values.
func (*NetworkBase) AllGlobals ¶
func (nt *NetworkBase) AllGlobals() string
AllGlobals returns a listing of all Global variables and values.
func (*NetworkBase) AllLayerInhibs ¶
func (nt *NetworkBase) AllLayerInhibs() string
AllLayerInhibs returns a listing of all Layer Inhibition parameters in the Network
func (*NetworkBase) AllParams ¶
func (nt *NetworkBase) AllParams() string
AllParams returns a listing of all parameters in the Network.
func (*NetworkBase) AllPathScales ¶
func (nt *NetworkBase) AllPathScales() string
AllPathScales returns a listing of all PathScale parameters in the Network in all Layers, Recv pathways. These are among the most important and numerous of parameters (in larger networks) -- this helps keep track of what they all are set to.
func (*NetworkBase) ApplyParams ¶
ApplyParams applies the given parameter style Sheet to layers and paths in this network. Calls UpdateParams to ensure derived parameters are all updated. If setMsg is true, then a message is printed to confirm each parameter that is set. It always prints a message if a parameter fails to be set. Returns true if any params were set, and an error if there were any errors.
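A hedged sketch of applying a styled parameter Sheet; the Sel selector and the parameter path shown are illustrative assumptions about the emergent params package, not something this documentation specifies:

    // assumes: import "github.com/emer/emergent/v2/params"
    sheet := &params.Sheet{
        {Sel: "Layer", Params: params.Params{
            "Layer.Inhib.Layer.Gi": "1.1", // hypothetical parameter path
        }},
    }
    applied, err := net.ApplyParams(sheet, true) // true = log each parameter that is set
    _, _ = applied, err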
func (*NetworkBase) AxonLayerByName ¶
func (nt *NetworkBase) AxonLayerByName(name string) *Layer
AxonLayerByName returns a layer by looking it up by name in the layer map (nil if not found). Will create the layer map if it is nil or a different size than the layers slice, but otherwise it needs to be updated manually.
func (*NetworkBase) AxonPathByName ¶
func (nt *NetworkBase) AxonPathByName(name string) *Path
AxonPathByName returns a Path by looking it up by name in the list of pathways (nil if not found).
func (*NetworkBase) BidirConnectLayerNames ¶
func (nt *NetworkBase) BidirConnectLayerNames(low, high string, pat paths.Pattern) (lowlay, highlay *Layer, fwdpj, backpj *Path, err error)
BidirConnectLayerNames establishes bidirectional pathways between two layers, referenced by name, with low = the lower layer that sends a Forward pathway to the high layer, and receives a Back pathway in the opposite direction. Returns error if not successful. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkBase) BidirConnectLayers ¶
func (nt *NetworkBase) BidirConnectLayers(low, high *Layer, pat paths.Pattern) (fwdpj, backpj *Path)
BidirConnectLayers establishes bidirectional pathways between two layers, with low = lower layer that sends a Forward pathway to the high layer, and receives a Back pathway in the opposite direction. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkBase) BidirConnectLayersPy ¶
func (nt *NetworkBase) BidirConnectLayersPy(low, high *Layer, pat paths.Pattern)
BidirConnectLayersPy establishes bidirectional pathways between two layers, with low = lower layer that sends a Forward pathway to the high layer, and receives a Back pathway in the opposite direction. Does not yet actually connect the units within the layers -- that requires Build. Py = python version with no return vals.
func (*NetworkBase) Bounds ¶
func (nt *NetworkBase) Bounds() (min, max math32.Vector3)
func (*NetworkBase) BoundsUpdate ¶
func (nt *NetworkBase) BoundsUpdate()
BoundsUpdate updates the Min / Max display bounds for 3D display
func (*NetworkBase) Build ¶
func (nt *NetworkBase) Build(simCtx *Context) error
Build constructs the layer and pathway state based on the layer shapes and patterns of interconnectivity. Configures threading using heuristics based on final network size. Must set UseGPUOrder properly prior to calling. Configures the given Context object used in the simulation with the memory access strides for this network -- must be set properly -- see SetCtxStrides.
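A sketch of the overall build sequence implied above; NewContext and the InitWts signature are assumptions about the surrounding API:

    ctx := NewContext()     // simulation Context (assumed constructor)
    net.UseGPUOrder = false // must be set before Build; true when targeting the GPU
    if err := net.Build(ctx); err != nil {
        panic(err)
    }
    net.Defaults()
    net.InitWts(ctx)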
func (*NetworkBase) BuildGlobals ¶
func (nt *NetworkBase) BuildGlobals(ctx *Context)
BuildGlobals builds Globals vars, using params set in given context
func (*NetworkBase) BuildPathGBuf ¶
func (nt *NetworkBase) BuildPathGBuf()
BuildPathGBuf builds the PathGBuf and PathGSyns buffers, based on the MaxDelay values in the PathParams, which should have been configured by this point. Called by default in InitWts().
func (*NetworkBase) CheckSameSize ¶
func (nt *NetworkBase) CheckSameSize(on *NetworkBase) error
CheckSameSize checks if this network is the same size as given other, in terms of NNeurons, MaxData, and NSyns. Returns error message if not.
func (*NetworkBase) ConnectLayerNames ¶
func (nt *NetworkBase) ConnectLayerNames(send, recv string, pat paths.Pattern, typ PathTypes) (rlay, slay *Layer, pj *Path, err error)
ConnectLayerNames establishes a pathway between two layers, referenced by name adding to the recv and send pathway lists on each side of the connection. Returns error if not successful. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkBase) ConnectLayers ¶
ConnectLayers establishes a pathway between two layers, adding to the recv and send pathway lists on each side of the connection. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkBase) CopyStateFrom ¶
func (nt *NetworkBase) CopyStateFrom(on *NetworkBase) error
CopyStateFrom copies entire network state from other network. Other network must have identical configuration, as this just does a literal copy of the state values. This is checked and errors are returned (and logged). See also DiffFrom.
func (*NetworkBase) DeleteAll ¶
func (nt *NetworkBase) DeleteAll()
DeleteAll deletes all layers, prepares network for re-configuring and building
func (*NetworkBase) DiffFrom ¶
func (nt *NetworkBase) DiffFrom(ctx *Context, on *NetworkBase, maxDiff int) string
DiffFrom returns a string reporting differences between this network and given other, up to given max number of differences (0 = all), for each state value.
func (*NetworkBase) FunTimerStart ¶
func (nt *NetworkBase) FunTimerStart(fun string)
FunTimerStart starts function timer for given function name -- ensures creation of timer
func (*NetworkBase) FunTimerStop ¶
func (nt *NetworkBase) FunTimerStop(fun string)
FunTimerStop stops function timer -- timer must already exist
func (*NetworkBase) KeyLayerParams ¶
func (nt *NetworkBase) KeyLayerParams() string
KeyLayerParams returns a listing for all layers in the network, of the most important layer-level params (specific to each algorithm).
func (*NetworkBase) KeyPathParams ¶
func (nt *NetworkBase) KeyPathParams() string
KeyPathParams returns a listing for all Recv pathways in the network, of the most important pathway-level params (specific to each algorithm).
func (*NetworkBase) Label ¶
func (nt *NetworkBase) Label() string
func (*NetworkBase) LateralConnectLayer ¶
func (nt *NetworkBase) LateralConnectLayer(lay *Layer, pat paths.Pattern) *Path
LateralConnectLayer establishes a self-pathway within given layer. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkBase) LateralConnectLayerPath ¶
LateralConnectLayerPath makes lateral self-pathway using given pathway. Does not yet actually connect the units within the layers -- that requires Build.
func (*NetworkBase) LayByNameTry ¶
func (nt *NetworkBase) LayByNameTry(name string) (*Layer, error)
LayByNameTry returns a layer by looking it up by name -- returns error message if layer is not found
func (*NetworkBase) LayerByName ¶
func (nt *NetworkBase) LayerByName(name string) emer.Layer
LayerByName returns a layer by looking it up by name in the layer map (nil if not found). Will create the layer map if it is nil or a different size than the layers slice, but otherwise it needs to be updated manually.
func (*NetworkBase) LayerByNameTry ¶
func (nt *NetworkBase) LayerByNameTry(name string) (emer.Layer, error)
LayerByNameTry returns a layer by looking it up by name -- returns error message if layer is not found
func (*NetworkBase) LayerMapPar ¶
func (nt *NetworkBase) LayerMapPar(fun func(ly *Layer), funame string)
LayerMapPar applies function of given name to all layers using as many go routines as configured in NetThreads.Neurons.
func (*NetworkBase) LayerMapSeq ¶
func (nt *NetworkBase) LayerMapSeq(fun func(ly *Layer), funame string)
LayerMapSeq applies function of given name to all layers sequentially.
func (*NetworkBase) LayerValues ¶
func (nt *NetworkBase) LayerValues(li, di uint32) *LayerValues
LayerValues returns the LayerValues for the given layer and data parallel indexes.
func (*NetworkBase) LayersByClass ¶
func (nt *NetworkBase) LayersByClass(classes ...string) []string
LayersByClass returns a list of layer names by given class(es). Lists are compiled when network Build() function called. The layer Type is always included as a Class, along with any other space-separated strings specified in Class for parameter styling, etc. If no classes are passed, all layer names in order are returned.
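For example (the class name is hypothetical):

    // all layer names tagged with the "Hidden" class (or of a type named Hidden)
    for _, nm := range net.LayersByClass("Hidden") {
        ly := net.AxonLayerByName(nm)
        _ = ly // e.g., set Off flags, collect stats, etc.
    }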
func (*NetworkBase) LayersByType ¶
func (nt *NetworkBase) LayersByType(layType ...LayerTypes) []string
LayersByType returns a list of layer names of the given layer types. Lists are compiled when the network Build() function is called. The layer Type is always included as a Class, along with any other space-separated strings specified in Class for parameter styling, etc. If no types are passed, all layer names in order are returned.
func (*NetworkBase) Layout ¶
func (nt *NetworkBase) Layout()
Layout computes the 3D layout of layers based on their relative position settings
func (*NetworkBase) MakeLayMap ¶
func (nt *NetworkBase) MakeLayMap()
MakeLayMap updates layer map based on current layers
func (*NetworkBase) MaxParallelData ¶
func (nt *NetworkBase) MaxParallelData() int
func (*NetworkBase) NLayers ¶
func (nt *NetworkBase) NLayers() int
func (*NetworkBase) NParallelData ¶
func (nt *NetworkBase) NParallelData() int
func (*NetworkBase) NeuronMapPar ¶
func (nt *NetworkBase) NeuronMapPar(ctx *Context, fun func(ly *Layer, ni uint32), funame string)
NeuronMapPar applies function of given name to all neurons using as many go routines as configured in NetThreads.Neurons.
func (*NetworkBase) NeuronMapSeq ¶
func (nt *NetworkBase) NeuronMapSeq(ctx *Context, fun func(ly *Layer, ni uint32), funame string)
NeuronMapSeq applies function of given name to all neurons sequentially.
func (*NetworkBase) NonDefaultParams ¶
func (nt *NetworkBase) NonDefaultParams() string
NonDefaultParams returns a listing of all parameters in the Network that are not at their default values -- useful for setting param styles etc.
func (*NetworkBase) OpenWtsCpp ¶
func (nt *NetworkBase) OpenWtsCpp(filename core.Filename) error
OpenWtsCpp opens network weights (and any other state that adapts with learning) from old C++ emergent format. If filename has .gz extension, then file is gzip uncompressed.
func (*NetworkBase) OpenWtsJSON ¶
func (nt *NetworkBase) OpenWtsJSON(filename core.Filename) error
OpenWtsJSON opens network weights (and any other state that adapts with learning) from a JSON-formatted file. If filename has .gz extension, then file is gzip uncompressed.
func (*NetworkBase) ParamsApplied ¶
func (nt *NetworkBase) ParamsApplied(sel *params.Sel)
ParamsApplied is just to satisfy History interface so reset can be applied
func (*NetworkBase) ParamsHistoryReset ¶
func (nt *NetworkBase) ParamsHistoryReset()
ParamsHistoryReset resets parameter application history
func (*NetworkBase) PathByNameTry ¶
func (nt *NetworkBase) PathByNameTry(name string) (emer.Path, error)
PathByNameTry returns a Path by looking it up by name in the list of pathways, returning an error if not found.
func (*NetworkBase) PathMapSeq ¶
func (nt *NetworkBase) PathMapSeq(fun func(pj *Path), funame string)
PathMapSeq applies function of given name to all pathways sequentially.
func (*NetworkBase) ReadWtsCpp ¶
func (nt *NetworkBase) ReadWtsCpp(r io.Reader) error
ReadWtsCpp reads the weights from old C++ emergent format. Reads entire file into a temporary weights.Weights structure that is then passed to Layers etc using SetWts method.
func (*NetworkBase) ReadWtsJSON ¶
func (nt *NetworkBase) ReadWtsJSON(r io.Reader) error
ReadWtsJSON reads network weights from the receiver-side perspective in a JSON text format. Reads entire file into a temporary weights.Weights structure that is then passed to Layers etc using SetWts method.
func (*NetworkBase) ResetRandSeed ¶
func (nt *NetworkBase) ResetRandSeed()
ResetRandSeed sets random seed to saved RandSeed, ensuring that the network-specific random seed generator has been created.
func (*NetworkBase) SaveAllLayerInhibs ¶
func (nt *NetworkBase) SaveAllLayerInhibs(filename core.Filename) error
SaveAllLayerInhibs saves list of all layer Inhibition parameters to given file
func (*NetworkBase) SaveAllParams ¶
func (nt *NetworkBase) SaveAllParams(filename core.Filename) error
SaveAllParams saves list of all parameters in Network to given file.
func (*NetworkBase) SaveAllPathScales ¶
func (nt *NetworkBase) SaveAllPathScales(filename core.Filename) error
SaveAllPathScales saves a listing of all PathScale parameters in the Network in all Layers, Recv pathways. These are among the most important and numerous of parameters (in larger networks) -- this helps keep track of what they all are set to.
func (*NetworkBase) SaveNonDefaultParams ¶
func (nt *NetworkBase) SaveNonDefaultParams(filename core.Filename) error
SaveNonDefaultParams saves list of all non-default parameters in Network to given file.
func (*NetworkBase) SaveParamsSnapshot ¶
SaveParamsSnapshot saves various views of current parameters to either `params_good` if good = true (for current good reference params) or `params_2006_01_02` (year, month, day) datestamp, providing a snapshot of the simulation params for easy diffs and later reference. Also saves current Config and Params state.
func (*NetworkBase) SaveWtsJSON ¶
func (nt *NetworkBase) SaveWtsJSON(filename core.Filename) error
SaveWtsJSON saves network weights (and any other state that adapts with learning) to a JSON-formatted file. If filename has .gz extension, then file is gzip compressed.
func (*NetworkBase) SetCtxStrides ¶
func (nt *NetworkBase) SetCtxStrides(simCtx *Context)
SetCtxStrides sets the given simulation context strides for accessing variables on this network -- these must be set properly before calling any compute methods with the context.
func (*NetworkBase) SetMaxData ¶
func (nt *NetworkBase) SetMaxData(simCtx *Context, maxData int)
SetMaxData sets the MaxData and current NData for both the Network and the Context
func (*NetworkBase) SetNThreads ¶
func (nt *NetworkBase) SetNThreads(nthr int)
SetNThreads sets the number of threads to use for CPU parallel processing. Pass 0 to use a default heuristic number based on the current GOMAXPROCS processors and the number of neurons in the network (call after building).
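For example, after Build:

    net.SetNThreads(0) // 0 = heuristic based on GOMAXPROCS and the number of neurons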
func (*NetworkBase) SetRandSeed ¶
func (nt *NetworkBase) SetRandSeed(seed int64)
SetRandSeed sets random seed and calls ResetRandSeed
func (*NetworkBase) SetWts ¶
func (nt *NetworkBase) SetWts(nw *weights.Network) error
SetWts sets the weights for this network from weights.Network decoded values
func (*NetworkBase) ShowAllGlobals ¶
func (nt *NetworkBase) ShowAllGlobals()
ShowAllGlobals shows a listing of all Global variables and values.
func (*NetworkBase) StdVertLayout ¶
func (nt *NetworkBase) StdVertLayout()
StdVertLayout arranges layers in a standard vertical (z axis stack) layout, by setting the Rel settings
func (*NetworkBase) SynVarNames ¶
func (nt *NetworkBase) SynVarNames() []string
SynVarNames returns the names of all the variables on the synapses in this network. Not all pathways need to support all variables, but must safely return 0's for unsupported ones. The order of this list determines NetView variable display order. This is typically a global list so do not modify!
func (*NetworkBase) SynVarProps ¶
func (nt *NetworkBase) SynVarProps() map[string]string
SynVarProps returns properties for variables
func (*NetworkBase) TimerReport ¶
func (nt *NetworkBase) TimerReport()
TimerReport reports the amount of time spent in each function, and in each thread
func (*NetworkBase) UnitVarNames ¶
func (nt *NetworkBase) UnitVarNames() []string
UnitVarNames returns a list of variable names available on the units in this network. Not all layers need to support all variables, but must safely return 0's for unsupported ones. The order of this list determines NetView variable display order. This is typically a global list so do not modify!
func (*NetworkBase) UnitVarProps ¶
func (nt *NetworkBase) UnitVarProps() map[string]string
UnitVarProps returns properties for variables
func (*NetworkBase) VarRange ¶
func (nt *NetworkBase) VarRange(varNm string) (min, max float32, err error)
VarRange returns the min / max values for the given variable. TODO: support r. and s. pathway values.
func (*NetworkBase) WriteWtsJSON ¶
func (nt *NetworkBase) WriteWtsJSON(w io.Writer) error
WriteWtsJSON writes the weights from this network from the receiver-side perspective in a JSON text format. We build in the indentation logic to make it much faster and more efficient.
type NeuroModParams ¶
type NeuroModParams struct { // dopamine receptor-based effects of dopamine modulation on excitatory and inhibitory conductances: D1 is excitatory, D2 is inhibitory as a function of increasing dopamine DAMod DAModTypes // valence coding of this layer -- may affect specific layer types but does not directly affect neuromodulators currently Valence ValenceTypes // dopamine modulation of excitatory and inhibitory conductances (i.e., "performance dopamine" effect -- this does NOT affect learning dopamine modulation in terms of RLrate): g *= 1 + (DAModGain * DA) DAModGain float32 // modulate the sign of the learning rate factor according to the DA sign, taking into account the DAMod sign reversal for D2Mod, also using BurstGain and DipGain to modulate DA value -- otherwise, only the magnitude of the learning rate is modulated as a function of raw DA magnitude according to DALRateMod (without additional gain factors) DALRateSign slbool.Bool // if not using DALRateSign, this is the proportion of maximum learning rate that Abs(DA) magnitude can modulate -- e.g., if 0.2, then DA = 0 = 80% of std learning rate, 1 = 100% DALRateMod float32 `min:"0" max:"1"` // proportion of maximum learning rate that ACh can modulate -- e.g., if 0.2, then ACh = 0 = 80% of std learning rate, 1 = 100% AChLRateMod float32 `min:"0" max:"1"` // amount of extra Gi inhibition added in proportion to 1 - ACh level -- makes ACh disinhibitory AChDisInhib float32 `min:"0" default:"0,5"` // multiplicative gain factor applied to positive dopamine signals -- this operates on the raw dopamine signal prior to any effect of D2 receptors in reversing its sign! BurstGain float32 `min:"0" default:"1"` // multiplicative gain factor applied to negative dopamine signals -- this operates on the raw dopamine signal prior to any effect of D2 receptors in reversing its sign! should be small for acq, but roughly equal to burst for ext DipGain float32 `min:"0" default:"1"` // contains filtered or unexported fields }
NeuroModParams specifies the effects of neuromodulators on neural activity and learning rate. These can apply to any neuron type, and are applied in the core cycle update equations.
func (*NeuroModParams) DAGain ¶
func (nm *NeuroModParams) DAGain(da float32) float32
DAGain returns DA dopamine value with Burst / Dip Gain factors applied
func (*NeuroModParams) DASign ¶
func (nm *NeuroModParams) DASign() float32
DASign returns the sign of dopamine effects: D2Mod = -1, else 1
func (*NeuroModParams) Defaults ¶
func (nm *NeuroModParams) Defaults()
func (*NeuroModParams) GGain ¶
func (nm *NeuroModParams) GGain(da float32) float32
GGain returns effective Ge and Gi gain factor given total dopamine (DA) value: tonic + phasic. factor is 1 for no modulation, otherwise higher or lower.
func (*NeuroModParams) GiFromACh ¶
func (nm *NeuroModParams) GiFromACh(ach float32) float32
GiFromACh returns the amount of extra inhibition to add based on disinhibitory effects of ACh -- no inhibition when ACh = 1, extra when < 1.
func (*NeuroModParams) IsBLAExt ¶
func (nm *NeuroModParams) IsBLAExt() bool
IsBLAExt returns true if this is Positive, D2 or Negative D1 -- BLA extinction
func (*NeuroModParams) LRMod ¶
func (nm *NeuroModParams) LRMod(da, ach float32) float32
LRMod returns the overall learning rate modulation factor due to neuromodulation from the given dopamine (DA) and ACh inputs. If DALRateSign is set and DAMod == D1Mod or D2Mod, then the sign is a function of the DA.
func (*NeuroModParams) LRModFact ¶
func (nm *NeuroModParams) LRModFact(pct, val float32) float32
LRModFact returns learning rate modulation factor for given inputs.
func (*NeuroModParams) ShouldDisplay ¶
func (nm *NeuroModParams) ShouldDisplay(field string) bool
func (*NeuroModParams) Update ¶
func (nm *NeuroModParams) Update()
type NeuronAvgVarStrides ¶
type NeuronAvgVarStrides struct { // neuron level Neuron uint32 // variable level Var uint32 // contains filtered or unexported fields }
NeuronAvgVarStrides encodes the stride offsets for neuron variable access into network float32 array. Data is always the inner-most variable.
func (*NeuronAvgVarStrides) Index ¶
func (ns *NeuronAvgVarStrides) Index(neurIndex uint32, nvar NeuronAvgVars) uint32
Index returns the index into network float32 array for given neuron and variable
func (*NeuronAvgVarStrides) SetNeuronOuter ¶
func (ns *NeuronAvgVarStrides) SetNeuronOuter()
SetNeuronOuter sets strides with neurons as outer loop: [Neurons][Vars], which is optimal for CPU-based computation.
func (*NeuronAvgVarStrides) SetVarOuter ¶
func (ns *NeuronAvgVarStrides) SetVarOuter(nneur int)
SetVarOuter sets strides with vars as outer loop: [Vars][Neurons], which is optimal for GPU-based computation.
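These strides are normally configured by the network during Build; purely as an illustration, the index arithmetic implied by the Index method under the two layouts (the formula is an assumption, not a documented guarantee):

    var ns NeuronAvgVarStrides // normally: the network's configured strides
    ni := uint32(12)           // neuron index
    // [Neurons][Vars] (CPU): Index(ni, v) = ni*ns.Neuron + uint32(v)*ns.Var, with Var stride = 1
    // [Vars][Neurons] (GPU): same formula, with Neuron stride = 1 and Var stride = NNeurons
    off := ns.Index(ni, ActAvg) // flat offset into the network's NeuronAvgs []float32
    _ = off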
type NeuronAvgVars ¶
type NeuronAvgVars int32 //enums:enum
NeuronAvgVars are mostly neuron variables involved in longer-term average activity which is aggregated over time and not specific to each input data state, along with any other state that is not input data specific.
const ( // ActAvg is average activation (of minus phase activation state) over long time intervals (time constant = Dt.LongAvgTau) -- useful for finding hog units and seeing overall distribution of activation ActAvg NeuronAvgVars = iota // AvgPct is ActAvg as a proportion of overall layer activation -- this is used for synaptic scaling to match TrgAvg activation -- updated at SlowInterval intervals AvgPct // TrgAvg is neuron's target average activation as a proportion of overall layer activation, assigned during weight initialization, driving synaptic scaling relative to AvgPct TrgAvg // DTrgAvg is change in neuron's target average activation as a result of unit-wise error gradient -- acts like a bias weight. MPI needs to share these across processors. DTrgAvg // AvgDif is AvgPct - TrgAvg -- i.e., the error in overall activity level relative to set point for this neuron, which drives synaptic scaling -- updated at SlowInterval intervals AvgDif // GeBase is baseline level of Ge, added to GeRaw, for intrinsic excitability GeBase // GiBase is baseline level of Gi, added to GiRaw, for intrinsic excitability GiBase )
const NeuronAvgVarsN NeuronAvgVars = 7
NeuronAvgVarsN is the highest valid value for type NeuronAvgVars, plus one.
func NeuronAvgVarsValues ¶
func NeuronAvgVarsValues() []NeuronAvgVars
NeuronAvgVarsValues returns all possible values for the type NeuronAvgVars.
func (NeuronAvgVars) Desc ¶
func (i NeuronAvgVars) Desc() string
Desc returns the description of the NeuronAvgVars value.
func (NeuronAvgVars) Int64 ¶
func (i NeuronAvgVars) Int64() int64
Int64 returns the NeuronAvgVars value as an int64.
func (NeuronAvgVars) MarshalText ¶
func (i NeuronAvgVars) MarshalText() ([]byte, error)
MarshalText implements the encoding.TextMarshaler interface.
func (*NeuronAvgVars) SetInt64 ¶
func (i *NeuronAvgVars) SetInt64(in int64)
SetInt64 sets the NeuronAvgVars value from an int64.
func (*NeuronAvgVars) SetString ¶
func (i *NeuronAvgVars) SetString(s string) error
SetString sets the NeuronAvgVars value from its string representation, and returns an error if the string is invalid.
func (NeuronAvgVars) String ¶
func (i NeuronAvgVars) String() string
String returns the string representation of this NeuronAvgVars value.
func (*NeuronAvgVars) UnmarshalText ¶
func (i *NeuronAvgVars) UnmarshalText(text []byte) error
UnmarshalText implements the encoding.TextUnmarshaler interface.
func (NeuronAvgVars) Values ¶
func (i NeuronAvgVars) Values() []enums.Enum
Values returns all possible values for the type NeuronAvgVars.
type NeuronFlags ¶
type NeuronFlags int32 //enums:enum
NeuronFlags are bit-flags encoding relevant binary state for neurons
const ( // NeuronOff flag indicates that this neuron has been turned off (i.e., lesioned) NeuronOff NeuronFlags = 1 // NeuronHasExt means the neuron has external input in its Ext field NeuronHasExt NeuronFlags = 2 // NeuronHasTarg means the neuron has external target input in its Target field NeuronHasTarg NeuronFlags = 4 // NeuronHasCmpr means the neuron has external comparison input in its Target field -- used for computing // comparison statistics but does not drive neural activity ever NeuronHasCmpr NeuronFlags = 8 )
The neuron flags
const NeuronFlagsN NeuronFlags = 9
NeuronFlagsN is the highest valid value for type NeuronFlags, plus one.
func NeuronFlagsValues ¶
func NeuronFlagsValues() []NeuronFlags
NeuronFlagsValues returns all possible values for the type NeuronFlags.
func (NeuronFlags) Desc ¶
func (i NeuronFlags) Desc() string
Desc returns the description of the NeuronFlags value.
func (NeuronFlags) Int64 ¶
func (i NeuronFlags) Int64() int64
Int64 returns the NeuronFlags value as an int64.
func (NeuronFlags) MarshalText ¶
func (i NeuronFlags) MarshalText() ([]byte, error)
MarshalText implements the encoding.TextMarshaler interface.
func (*NeuronFlags) SetInt64 ¶
func (i *NeuronFlags) SetInt64(in int64)
SetInt64 sets the NeuronFlags value from an int64.
func (*NeuronFlags) SetString ¶
func (i *NeuronFlags) SetString(s string) error
SetString sets the NeuronFlags value from its string representation, and returns an error if the string is invalid.
func (NeuronFlags) String ¶
func (i NeuronFlags) String() string
String returns the string representation of this NeuronFlags value.
func (*NeuronFlags) UnmarshalText ¶
func (i *NeuronFlags) UnmarshalText(text []byte) error
UnmarshalText implements the encoding.TextUnmarshaler interface.
func (NeuronFlags) Values ¶
func (i NeuronFlags) Values() []enums.Enum
Values returns all possible values for the type NeuronFlags.
type NeuronIndexStrides ¶
type NeuronIndexStrides struct { // neuron level Neuron uint32 // index value level Idx uint32 // contains filtered or unexported fields }
NeuronIndexStrides encodes the stride offsets for neuron index access into network uint32 array.
func (*NeuronIndexStrides) Index ¶
func (ns *NeuronIndexStrides) Index(neurIdx uint32, idx NeuronIndexes) uint32
Index returns the index into network uint32 array for given neuron, index value
func (*NeuronIndexStrides) SetIndexOuter ¶
func (ns *NeuronIndexStrides) SetIndexOuter(nneur int)
SetIndexOuter sets strides with indexes as outer dimension: [Indexes][Neurons] (outer to inner), which is optimal for GPU-based computation.
func (*NeuronIndexStrides) SetNeuronOuter ¶
func (ns *NeuronIndexStrides) SetNeuronOuter()
SetNeuronOuter sets strides with neurons as outer dimension: [Neurons][Indexes] (outer to inner), which is optimal for CPU-based computation.
type NeuronIndexes ¶
type NeuronIndexes int32 //enums:enum
NeuronIndexes are the neuron indexes and other uint32 values. There is only one of these per neuron -- not data parallel. note: Flags are encoded in Vars because they are data parallel and writable, whereas indexes are read-only.
const ( // NrnNeurIndex is the index of this neuron within its owning layer NrnNeurIndex NeuronIndexes = iota // NrnLayIndex is the index of the layer that this neuron belongs to, // needed for neuron-level parallel code. NrnLayIndex // NrnSubPool is the index of the sub-level inhibitory pool for this neuron // (only for 4D shapes, the pool (unit-group / hypercolumn) structure level). // Indices start at 1 -- 0 is layer-level pool (is 0 if no sub-pools). NrnSubPool )
const NeuronIndexesN NeuronIndexes = 3
NeuronIndexesN is the highest valid value for type NeuronIndexes, plus one.
func NeuronIndexesValues ¶
func NeuronIndexesValues() []NeuronIndexes
NeuronIndexesValues returns all possible values for the type NeuronIndexes.
func (NeuronIndexes) Desc ¶
func (i NeuronIndexes) Desc() string
Desc returns the description of the NeuronIndexes value.
func (NeuronIndexes) Int64 ¶
func (i NeuronIndexes) Int64() int64
Int64 returns the NeuronIndexes value as an int64.
func (NeuronIndexes) MarshalText ¶
func (i NeuronIndexes) MarshalText() ([]byte, error)
MarshalText implements the encoding.TextMarshaler interface.
func (*NeuronIndexes) SetInt64 ¶
func (i *NeuronIndexes) SetInt64(in int64)
SetInt64 sets the NeuronIndexes value from an int64.
func (*NeuronIndexes) SetString ¶
func (i *NeuronIndexes) SetString(s string) error
SetString sets the NeuronIndexes value from its string representation, and returns an error if the string is invalid.
func (NeuronIndexes) String ¶
func (i NeuronIndexes) String() string
String returns the string representation of this NeuronIndexes value.
func (*NeuronIndexes) UnmarshalText ¶
func (i *NeuronIndexes) UnmarshalText(text []byte) error
UnmarshalText implements the encoding.TextUnmarshaler interface.
func (NeuronIndexes) Values ¶
func (i NeuronIndexes) Values() []enums.Enum
Values returns all possible values for the type NeuronIndexes.
type NeuronVarStrides ¶
type NeuronVarStrides struct { // neuron level Neuron uint32 // variable level Var uint32 // contains filtered or unexported fields }
NeuronVarStrides encodes the stride offsets for neuron variable access into network float32 array. Data is always the inner-most variable.
func (*NeuronVarStrides) Index ¶
func (ns *NeuronVarStrides) Index(neurIndex, di uint32, nvar NeuronVars) uint32
Index returns the index into network float32 array for given neuron, data, and variable
func (*NeuronVarStrides) SetNeuronOuter ¶
func (ns *NeuronVarStrides) SetNeuronOuter(ndata int)
SetNeuronOuter sets strides with neurons as outer loop: [Neurons][Vars][Data], which is optimal for CPU-based computation.
func (*NeuronVarStrides) SetVarOuter ¶
func (ns *NeuronVarStrides) SetVarOuter(nneur, ndata int)
SetVarOuter sets strides with vars as outer loop: [Vars][Neurons][Data], which is optimal for GPU-based computation.
type NeuronVars ¶
type NeuronVars int32 //enums:enum
NeuronVars are the neuron variables representing current active state, specific to each input data state. See NeuronAvgVars for vars shared across data.
const ( // Spike is whether neuron has spiked or not on this cycle (0 or 1) Spike NeuronVars = iota // Spiked is 1 if neuron has spiked within the last 10 cycles (msecs), corresponding to a nominal max spiking rate of 100 Hz, 0 otherwise -- useful for visualization and computing activity levels in terms of average spiked levels. Spiked // Act is rate-coded activation value reflecting instantaneous estimated rate of spiking, based on 1 / ISIAvg. This drives feedback inhibition in the FFFB function (todo: this will change when better inhibition is implemented), and is integrated over time for ActInt which is then used for performance statistics and layer average activations, etc. Should not be used for learning or other computations. Act // ActInt is integrated running-average activation value computed from Act with time constant Act.Dt.IntTau, to produce a longer-term integrated value reflecting the overall activation state across the ThetaCycle time scale, as the overall response of network to current input state -- this is copied to ActM and ActP at the ends of the minus and plus phases, respectively, and used in computing performance-level statistics (which are typically based on ActM). Should not be used for learning or other computations. ActInt // ActM is ActInt activation state at end of third quarter, representing the posterior-cortical minus phase activation -- used for statistics and monitoring network performance. Should not be used for learning or other computations. ActM // ActP is ActInt activation state at end of fourth quarter, representing the posterior-cortical plus_phase activation -- used for statistics and monitoring network performance. Should not be used for learning or other computations. ActP // Ext is external input: drives activation of unit from outside influences (e.g., sensory input) Ext // Target is the target value: drives learning to produce this activation value Target // Ge is total excitatory conductance, including all forms of excitation (e.g., NMDA) -- does *not* include Gbar.E Ge // Gi is total inhibitory synaptic conductance -- the net inhibitory input to the neuron -- does *not* include Gbar.I Gi // Gk is total potassium conductance, typically reflecting sodium-gated potassium currents involved in adaptation effects -- does *not* include Gbar.K Gk // Inet is net current produced by all channels -- drives update of Vm Inet // Vm is membrane potential -- integrates Inet current over time Vm // VmDend is dendritic membrane potential -- has a slower time constant, is not subject to the VmR reset after spiking VmDend // ISI is current inter-spike-interval -- counts up since last spike. Starts at -1 when initialized. ISI // ISIAvg is average inter-spike-interval -- average time interval between spikes, integrated with ISITau rate constant (relatively fast) to capture something close to an instantaneous spiking rate. Starts at -1 when initialized, and goes to -2 after first spike, and is only valid after the second spike post-initialization. ISIAvg // CaSpkM is spike-driven calcium trace used as a neuron-level proxy for synpatic credit assignment factor based on continuous time-integrated spiking: exponential integration of SpikeG * Spike at MTau time constant (typically 5). Simulates a calmodulin (CaM) like signal at the most abstract level. 
CaSpkM // CaSpkP is continuous cascaded integration of CaSpkM at PTau time constant (typically 40), representing neuron-level purely spiking version of plus, LTP direction of weight change and capturing the function of CaMKII in the Kinase learning rule. Used for specialized learning and computational functions, statistics, instead of Act. CaSpkP // CaSpkD is continuous cascaded integration CaSpkP at DTau time constant (typically 40), representing neuron-level purely spiking version of minus, LTD direction of weight change and capturing the function of DAPK1 in the Kinase learning rule. Used for specialized learning and computational functions, statistics, instead of Act. CaSpkD // CaSpkPM is minus-phase snapshot of the CaSpkP value -- similar to ActM but using a more directly spike-integrated value. CaSpkPM // CaLrn is recv neuron calcium signal used to drive temporal error difference component of standard learning rule, combining NMDA (NmdaCa) and spiking-driven VGCC (VgccCaInt) calcium sources (vs. CaSpk* which only reflects spiking component). This is integrated into CaM, CaP, CaD, and temporal derivative is CaP - CaD (CaMKII - DAPK1). This approximates the backprop error derivative on net input, but VGCC component adds a proportion of recv activation delta as well -- a balance of both works best. The synaptic-level trace multiplier provides the credit assignment factor, reflecting coincident activity and potentially integrated over longer multi-trial timescales. CaLrn // NrnCaM is integrated CaLrn at MTau timescale (typically 5), simulating a calmodulin (CaM) like signal, which then drives CaP, CaD for delta signal driving error-driven learning. NrnCaM // NrnCaP is cascaded integration of CaM at PTau time constant (typically 40), representing the plus, LTP direction of weight change and capturing the function of CaMKII in the Kinase learning rule. NrnCaP // NrnCaD is cascaded integratoin of CaP at DTau time constant (typically 40), representing the minus, LTD direction of weight change and capturing the function of DAPK1 in the Kinase learning rule. NrnCaD // CaDiff is difference between CaP - CaD -- this is the error signal that drives error-driven learning. CaDiff // RLRate is recv-unit based learning rate multiplier, reflecting the sigmoid derivative computed from the CaSpkD of recv unit, and the normalized difference CaSpkP - CaSpkD / MAX(CaSpkP - CaSpkD). RLRate // SpkMaxCa is Ca integrated like CaSpkP but only starting at MaxCycStart cycle, to prevent inclusion of carryover spiking from prior theta cycle trial -- the PTau time constant otherwise results in significant carryover. This is the input to SpkMax SpkMaxCa // SpkMax is maximum CaSpkP across one theta cycle time window (max of SpkMaxCa) -- used for specialized algorithms that have more phasic behavior within a single trial, e.g., BG Matrix layer gating. Also useful for visualization of peak activity of neurons. SpkMax // SpkBin has aggregated spikes within 50 msec bins across the theta cycle, for computing synaptic calcium efficiently SpkBin0 SpkBin1 SpkBin2 SpkBin3 SpkBin4 SpkBin5 SpkBin6 SpkBin7 // SpkPrv is final CaSpkD activation state at end of previous theta cycle. used for specialized learning mechanisms that operate on delayed sending activations. SpkPrv // SpkSt1 is the activation state at specific time point within current state processing window (e.g., 50 msec for beta cycle within standard theta cycle), as saved by SpkSt1() function. 
Used for example in hippocampus for CA3, CA1 learning SpkSt1 // SpkSt2 is the activation state at specific time point within current state processing window (e.g., 100 msec for beta cycle within standard theta cycle), as saved by SpkSt2() function. Used for example in hippocampus for CA3, CA1 learning SpkSt2 // NrnFlags are bit flags for binary state variables, which are converted to / from uint32. // These need to be in Vars because they can be differential per data (for ext inputs) // and are writable (indexes are read only). NrnFlags // GeNoiseP is accumulating poisson probability factor for driving excitatory noise spiking -- multiply times uniform random deviate at each time step, until it gets below the target threshold based on poisson lambda as function of noise firing rate. GeNoiseP // GeNoise is integrated noise excitatory conductance, added into Ge GeNoise // GiNoiseP is accumulating poisson probability factor for driving inhibitory noise spiking -- multiply times uniform random deviate at each time step, until it gets below the target threshold based on poisson lambda as a function of noise firing rate. GiNoiseP // GiNoise is integrated noise inhibotyr conductance, added into Gi GiNoise // GeExt is extra excitatory conductance added to Ge -- from Ext input, GeCtxt etc GeExt // GeRaw is raw excitatory conductance (net input) received from senders = current raw spiking drive GeRaw // GeSyn is time-integrated total excitatory synaptic conductance, with an instantaneous rise time from each spike (in GeRaw) and exponential decay with Dt.GeTau, aggregated over pathways -- does *not* include Gbar.E GeSyn // GiRaw is raw inhibitory conductance (net input) received from senders = current raw spiking drive GiRaw // GiSyn is time-integrated total inhibitory synaptic conductance, with an instantaneous rise time from each spike (in GiRaw) and exponential decay with Dt.GiTau, aggregated over pathways -- does *not* include Gbar.I. This is added with computed FFFB inhibition to get the full inhibition in Gi GiSyn // SMaintP is accumulating poisson probability factor for driving self-maintenance by simulating a population of mutually interconnected neurons. 
multiply times uniform random deviate at each time step, until it gets below the target threshold based on poisson lambda based on accumulating self maint factor SMaintP // GeInt is integrated running-average activation value computed from Ge with time constant Act.Dt.IntTau, to produce a longer-term integrated value reflecting the overall Ge level across the ThetaCycle time scale (Ge itself fluctuates considerably) -- useful for stats to set strength of connections etc to get neurons into right range of overall excitatory drive GeInt // GeIntNorm is normalized GeInt value (divided by the layer maximum) -- this is used for learning in layers that require learning on subthreshold activity GeIntNorm // GiInt is integrated running-average activation value computed from GiSyn with time constant Act.Dt.IntTau, to produce a longer-term integrated value reflecting the overall synaptic Gi level across the ThetaCycle time scale (Gi itself fluctuates considerably) -- useful for stats to set strength of connections etc to get neurons into right range of overall inhibitory drive GiInt // GModRaw is raw modulatory conductance, received from GType = ModulatoryG pathways GModRaw // GModSyn is syn integrated modulatory conductance, received from GType = ModulatoryG pathways GModSyn // GMaintRaw is raw maintenance conductance, received from GType = MaintG pathways GMaintRaw // GMaintSyn is syn integrated maintenance conductance, integrated using MaintNMDA params. GMaintSyn // SSGi is SST+ somatostatin positive slow spiking inhibition SSGi // SSGiDend is amount of SST+ somatostatin positive slow spiking inhibition applied to dendritic Vm (VmDend) SSGiDend // Gak is conductance of A-type K potassium channels Gak // MahpN is accumulating voltage-gated gating value for the medium time scale AHP MahpN // Gmahp is medium time scale AHP conductance Gmahp // SahpCa is slowly accumulating calcium value that drives the slow AHP SahpCa // SahpN is the sAHP gating value SahpN // Gsahp is slow time scale AHP conductance Gsahp // GknaMed is conductance of sodium-gated potassium channel (KNa) medium dynamics (Slick), which produces accommodation / adaptation of firing GknaMed // GknaSlow is conductance of sodium-gated potassium channel (KNa) slow dynamics (Slack), which produces accommodation / adaptation of firing GknaSlow // KirM is the Kir potassium (K) inwardly rectifying gating value KirM // Gkir is the conductance of the potassium (K) inwardly rectifying channel, // which is strongest at low membrane potentials. Can be modulated by DA. Gkir // GnmdaSyn is integrated NMDA recv synaptic current -- adds GeRaw and decays with time constant GnmdaSyn // Gnmda is net postsynaptic (recv) NMDA conductance, after Mg V-gating and Gbar -- added directly to Ge as it has the same reversal potential Gnmda // GnmdaMaint is net postsynaptic maintenance NMDA conductance, computed from GMaintSyn and GMaintRaw, after Mg V-gating and Gbar -- added directly to Ge as it has the same reversal potential GnmdaMaint // GnmdaLrn is learning version of integrated NMDA recv synaptic current -- adds GeRaw and decays with time constant -- drives NmdaCa that then drives CaM for learning GnmdaLrn // NmdaCa is NMDA calcium computed from GnmdaLrn, drives learning via CaM NmdaCa // GgabaB is net GABA-B conductance, after Vm gating and Gbar + Gbase -- applies to Gk, not Gi, for GIRK, with .1 reversal potential. 
GgabaB // GABAB is GABA-B / GIRK activation -- time-integrated value with rise and decay time constants GABAB // GABABx is GABA-B / GIRK internal drive variable -- gets the raw activation and decays GABABx // Gvgcc is conductance (via Ca) for VGCC voltage gated calcium channels Gvgcc // VgccM is activation gate of VGCC channels VgccM // VgccH is inactivation gate of VGCC channels VgccH // VgccCa is instantaneous VGCC calcium flux -- can be driven by spiking or directly from Gvgcc VgccCa // VgccCaInt is time-integrated VGCC calcium flux -- this is actually what drives learning VgccCaInt // SKCaIn is intracellular calcium store level, available to be released with spiking as SKCaR, which can bind to SKCa receptors and drive K current. Replenishment is a function of spiking activity being below a threshold SKCaIn // SKCaR is released amount of intracellular calcium, from SKCaIn, as a function of spiking events. This can bind to SKCa channels and drive K currents. SKCaR // SKCaM is Calcium-gated potassium channel gating factor, driven by SKCaR via a Hill equation as in chans.SKPCaParams. SKCaM // Gsk is Calcium-gated potassium channel conductance as a function of Gbar * SKCaM. Gsk // Burst is 5IB bursting activation value, computed by thresholding regular CaSpkP value in Super superficial layers Burst // BurstPrv is previous Burst bursting activation from prior time step -- used for context-based learning BurstPrv // CtxtGe is context (temporally delayed) excitatory conductance, driven by deep bursting at end of the plus phase, for CT layers. CtxtGe // CtxtGeRaw is raw update of context (temporally delayed) excitatory conductance, driven by deep bursting at end of the plus phase, for CT layers. CtxtGeRaw // CtxtGeOrig is original CtxtGe value prior to any decay factor -- updates at end of plus phase. CtxtGeOrig )
const NeuronVarsN NeuronVars = 91
NeuronVarsN is the highest valid value for type NeuronVars, plus one.
func NeuronVarsValues ¶
func NeuronVarsValues() []NeuronVars
NeuronVarsValues returns all possible values for the type NeuronVars.
func (NeuronVars) Desc ¶
func (i NeuronVars) Desc() string
Desc returns the description of the NeuronVars value.
func (NeuronVars) Int64 ¶
func (i NeuronVars) Int64() int64
Int64 returns the NeuronVars value as an int64.
func (NeuronVars) MarshalText ¶
func (i NeuronVars) MarshalText() ([]byte, error)
MarshalText implements the encoding.TextMarshaler interface.
func (*NeuronVars) SetInt64 ¶
func (i *NeuronVars) SetInt64(in int64)
SetInt64 sets the NeuronVars value from an int64.
func (*NeuronVars) SetString ¶
func (i *NeuronVars) SetString(s string) error
SetString sets the NeuronVars value from its string representation, and returns an error if the string is invalid.
func (NeuronVars) String ¶
func (i NeuronVars) String() string
String returns the string representation of this NeuronVars value.
func (*NeuronVars) UnmarshalText ¶
func (i *NeuronVars) UnmarshalText(text []byte) error
UnmarshalText implements the encoding.TextUnmarshaler interface.
func (NeuronVars) Values ¶
func (i NeuronVars) Values() []enums.Enum
Values returns all possible values for the type NeuronVars.
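As a concrete illustration of these enum methods, the following minimal sketch parses a variable name, converts it back to its string and integer forms, and marshals it to text. The import path is an assumption and should be adjusted to the module version in use:

	package main

	import (
		"fmt"

		"github.com/emer/axon/axon" // import path is an assumption; adjust to your module version
	)

	func main() {
		var nv axon.NeuronVars
		if err := nv.SetString("GeSyn"); err != nil { // parse enum value from its string name
			fmt.Println("invalid variable name:", err)
			return
		}
		fmt.Println(nv.String(), nv.Int64()) // "GeSyn" and its integer index

		// MarshalText / UnmarshalText use the same string form, e.g., for JSON
		b, _ := nv.MarshalText()
		fmt.Println(string(b))
	}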
type Path ¶
type Path struct { PathBase // all path-level parameters -- these must remain constant once configured Params *PathParams }
axon.Path is a basic Axon pathway with synaptic learning parameters
func (*Path) AsAxon ¶
AsAxon returns this path as an axon.Path -- all derived paths must redefine this to return the base Path type, so that the AxonPath interface does not need to include accessors to all the basic stuff.
func (*Path) DWt ¶
DWt computes the weight change (learning), based on synaptically integrated spiking, computed at the Theta cycle interval. This is the trace version for hidden units, and uses syn CaP - CaD for targets.
func (*Path) DWtSubMean ¶
DWtSubMean subtracts the mean from any pathways that have SubMean > 0. This is called on *receiving* pathways, prior to WtFromDwt.
func (*Path) InitGBuffs ¶
func (pj *Path) InitGBuffs()
InitGBuffs initializes the per-pathway synaptic conductance buffers. This is not typically needed (called during InitWts, InitActs) but can be called when needed. Must be called to completely initialize prior activity, e.g., full Glong clearing.
func (*Path) InitWtSym ¶
InitWtSym initializes weight symmetry. Is given the reciprocal pathway where the Send and Recv layers are reversed (see LayerBase RecipToRecvPath)
func (*Path) InitWts ¶
InitWts initializes weight values according to SWt params, enforcing current constraints.
func (*Path) InitWtsSyn ¶
InitWtsSyn initializes weight values based on WtInit randomness parameters for an individual synapse. It also updates the linear weight value based on the sigmoidal weight value.
func (*Path) LRateMod ¶
LRateMod sets the LRate modulation parameter for Paths, which is for dynamic modulation of learning rate (see also LRateSched). Updates the effective learning rate factor accordingly.
func (*Path) LRateSched ¶
LRateSched sets the schedule-based learning rate multiplier. See also LRateMod. Updates the effective learning rate factor accordingly.
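For example, a simulation could lower the effective learning rate on an epoch schedule by calling LRateSched on every pathway. This is a hedged sketch only: the single float32 argument, and the Layers / RecvPaths slice fields used to reach each pathway, are assumptions:

	// lrateSched is a hypothetical helper applying a schedule-based learning
	// rate multiplier to every pathway in the network.
	func lrateSched(net *axon.Network, epoch int) {
		sched := float32(1)
		switch {
		case epoch >= 100:
			sched = 0.1
		case epoch >= 50:
			sched = 0.5
		}
		for _, ly := range net.Layers { // field access assumed per the network structure outline
			for _, pj := range ly.RecvPaths {
				pj.LRateSched(sched) // updates the effective learning rate factor
			}
		}
	}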
func (*Path) ReadWtsJSON ¶
ReadWtsJSON reads the weights from this pathway from the receiver-side perspective in a JSON text format. This is for a set of weights that were saved *for one path only* and is not used for the network-level ReadWtsJSON, which reads into a separate structure -- see SetWts method.
func (*Path) SWtFromWt ¶
SWtFromWt updates structural, slowly adapting SWt value based on accumulated DSWt values, which are zero-summed with additional soft bounding relative to SWt limits.
func (*Path) SWtRescale ¶
SWtRescale rescales the SWt values to preserve the target overall mean value, using subtractive normalization.
func (*Path) SendSpike ¶
SendSpike sends a spike from the sending neuron at index sendIndex into the GBuf buffer on the receiver side. The buffer on the receiver side is a ring buffer, which is used for modelling the time delay between sending and receiving spikes.
func (*Path) SetParam ¶
SetParam sets the parameter at the given path to the given value. Returns an error if the path is not found or the value cannot be set.
func (*Path) SetSWtsFunc ¶
func (pj *Path) SetSWtsFunc(ctx *Context, swtFun func(si, ri int, send, recv *tensor.Shape) float32)
SetSWtsFunc initializes structural SWt values using given function based on receiving and sending unit indexes.
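For instance, SWt values could be initialized topographically so that send / recv units at similar normalized positions get stronger structural weights. A minimal sketch using the signature above, assuming pj and ctx are in scope, that tensor.Shape has a Len method, that layers have more than one unit, and with purely illustrative constants:

	pj.SetSWtsFunc(ctx, func(si, ri int, send, recv *tensor.Shape) float32 {
		// normalized 0..1 positions of the sending and receiving units (flat indexes)
		sp := float32(si) / float32(send.Len()-1)
		rp := float32(ri) / float32(recv.Len()-1)
		d := sp - rp
		// gaussian falloff with distance; 0.5 base, 0.3 amplitude, 0.3 width are arbitrary
		return 0.5 + 0.3*math32.Exp(-(d*d)/(2*0.3*0.3))
	})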
func (*Path) SetSWtsRPool ¶
SetSWtsRPool initializes SWt structural weight values using given tensor of values which has unique values for each recv neuron within a given pool.
func (*Path) SetSynValue ¶
SetSynValue sets the value of the given variable name on the synapse between given send, recv unit indexes (1D, flat indexes). Returns an error for access errors.
func (*Path) SetWtsFunc ¶
SetWtsFunc initializes synaptic Wt value using given function based on receiving and sending unit indexes. Strongly suggest calling SWtRescale after.
func (*Path) SynFail ¶
SynFail updates synaptic weight failure only -- normally done as part of DWt and WtFromDWt, but this call can be used during testing to update failing synapses.
func (*Path) SynScale ¶
SynScale performs synaptic scaling based on running average activation vs. targets. Layer-level AvgDifFromTrgAvg function must be called first.
func (*Path) Update ¶
func (pj *Path) Update()
Update is the interface method that does a local update of struct values.
func (*Path) UpdateParams ¶
func (pj *Path) UpdateParams()
UpdateParams updates all params given any changes that might have been made to individual values
func (*Path) WriteWtsJSON ¶
WriteWtsJSON writes the weights from this pathway from the receiver-side perspective in a JSON text format. We build in the indentation logic to make it much faster and more efficient.
type PathBase ¶
type PathBase struct { // we need a pointer to ourselves as an AxonPath, which can always be used to extract the true underlying type of object when path is embedded in other structs -- function receivers do not have this ability so this is necessary. AxonPth AxonPath `copier:"-" json:"-" xml:"-" display:"-"` // inactivate this pathway -- allows for easy experimentation Off bool // Class is for applying parameter styles, can be space separated multiple tags Cls string // can record notes about this pathway here Notes string // sending layer for this pathway Send *Layer // receiving layer for this pathway Recv *Layer // pattern of connectivity Pat paths.Pattern `table:"-"` // type of pathway: Forward, Back, Lateral, or extended type in specialized algorithms. // Matches against .Cls parameter styles (e.g., .Back etc) Typ PathTypes // default parameters that are applied prior to user-set parameters. // these are useful for specific functionality in specialized brain areas // (e.g., Rubicon, BG etc) not associated with a path type, which otherwise // is used to hard-code initial default parameters. // Typically just set to a literal map. DefParams params.Params `table:"-"` // provides a history of parameters applied to this pathway ParamsHistory params.HistoryImpl `table:"-"` // average and maximum number of recv connections in the receiving layer RecvConNAvgMax minmax.AvgMax32 `table:"-" edit:"-" display:"inline"` // average and maximum number of sending connections in the sending layer SendConNAvgMax minmax.AvgMax32 `table:"-" edit:"-" display:"inline"` // start index into global Synapse array: SynStIndex uint32 `display:"-"` // number of synapses in this pathway NSyns uint32 `display:"-"` // starting offset and N cons for each recv neuron, for indexing into the RecvSynIndex array of indexes into the Syns synapses, which are organized sender-based. This is locally managed during build process, but also copied to network global PathRecvCons slice for GPU usage. RecvCon []StartN `display:"-"` // index into Syns synaptic state for each sending unit and connection within that, for the sending pathway which does not own the synapses, and instead indexes into recv-ordered list RecvSynIndex []uint32 `display:"-"` // for each recv synapse, this is index of *sending* neuron. It is generally preferable to use the Synapse SendIndex where needed, instead of this slice, because then the memory access will be close by other values on the synapse. RecvConIndex []uint32 `display:"-"` // starting offset and N cons for each sending neuron, for indexing into the Syns synapses, which are organized sender-based. This is locally managed during build process, but also copied to network global PathSendCons slice for GPU usage. SendCon []StartN `display:"-"` // index of other neuron that receives the sender's synaptic input, ordered by the sending layer's order of units as the outer loop, and SendCon.N receiving units within that. It is generally preferable to use the Synapse RecvIndex where needed, instead of this slice, because then the memory access will be close by other values on the synapse. SendConIndex []uint32 `display:"-"` // Ge or Gi conductance ring buffer for each neuron, accessed through Params.Com.ReadIndex, WriteIndex -- scale * weight is added with Com delay offset -- a subslice from network PathGBuf.
Uses int-encoded float values for faster GPU atomic integration GBuf []int32 `display:"-"` // pathway-level synaptic conductance values, integrated by path before being integrated at the neuron level, which enables the neuron to perform non-linear integration as needed -- a subslice from network PathGSyn. GSyns []float32 `display:"-"` }
PathBase contains the basic structural information for specifying a pathway of synaptic connections between two layers, and maintaining all the synaptic connection-level data. The same struct token is added to the Recv and Send layer path lists, and it manages everything about the connectivity, and methods on the Path handle all the relevant computation. The Base does not have algorithm-specific methods and parameters, so it can be easily reused for different algorithms, and cleanly separates the algorithm-specific code. Any dependency on the algorithm-level Path can be captured in the AxonPath interface, accessed via the AxonPth field.
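As an illustration of the sender-based organization described above, the following hedged sketch walks the synapses owned by one sending neuron; it assumes StartN has Start and N uint32 fields, per the field comments, and that fmt is imported:

	// printSendCons lists, for a given sending neuron (layer-relative index si),
	// the receiving neurons it connects to, using the sender-ordered indexes.
	func printSendCons(pj *axon.PathBase, si int) {
		sc := pj.SendCon[si] // starting offset and number of sender-owned synapses
		for ci := uint32(0); ci < sc.N; ci++ {
			syi := sc.Start + ci       // index into the sender-ordered synapse list
			ri := pj.SendConIndex[syi] // layer-relative index of the receiving neuron
			fmt.Printf("syn %d: send %d -> recv %d\n", syi, si, ri)
		}
	}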
func (*PathBase) ApplyDefParams ¶
func (pj *PathBase) ApplyDefParams()
ApplyDefParams applies DefParams default parameters if set. Called by Path.Defaults().
func (*PathBase) ApplyParams ¶
ApplyParams applies given parameter style Sheet to this pathway. Calls UpdateParams if anything set to ensure derived parameters are all updated. If setMsg is true, then a message is printed to confirm each parameter that is set. It always prints a message if a parameter fails to be set. Returns true if any params were set, and error if there were any errors.
func (*PathBase) Build ¶
Build constructs the full connectivity among the layers. Calls Validate and returns error if invalid. Pat.Connect is called to get the pattern of the connection. Then the connection indexes are configured according to that pattern. Does NOT allocate synapses -- these are set by Network from global slice.
func (*PathBase) Connect ¶
Connect sets the connectivity between two layers and the pattern to use in interconnecting them
func (*PathBase) Init ¶
Init MUST be called to initialize the path's pointer to itself as an emer.Path which enables the proper interface methods to be called.
func (*PathBase) NonDefaultParams ¶
NonDefaultParams returns a listing of all parameters in the Layer that are not at their default values -- useful for setting param styles etc.
func (*PathBase) ParamsApplied ¶
ParamsApplied is just to satisfy History interface so reset can be applied
func (*PathBase) ParamsHistoryReset ¶
func (pj *PathBase) ParamsHistoryReset()
ParamsHistoryReset resets parameter application history
func (*PathBase) PathTypeName ¶
func (*PathBase) RecvSynIndexes ¶
RecvSynIndexes returns the receiving synapse indexes for given recv unit index within the receiving layer, to be iterated over for recv-based processing.
func (*PathBase) SetConStartN ¶
SetConStartN sets the *Con StartN values given n tensor from Pat. Returns total number of connections for this direction.
func (*PathBase) SetOff ¶
SetOff individual pathway. Careful: Layer.SetOff(true) will reactivate all paths of that layer, so path-level lesioning should always be done last.
func (*PathBase) Syn1DNum ¶
Syn1DNum returns the number of synapses for this path as a 1D array. This is the maximum index for SynVal1D and the number of values set by SynValues.
func (*PathBase) SynIndex ¶
SynIndex returns the index of the synapse between given send, recv unit indexes (1D, flat indexes, layer relative). Returns -1 if synapse not found between these two neurons. Requires searching within connections for sending unit.
func (*PathBase) SynVal1D ¶
SynVal1D returns value of given variable index (from SynVarIndex) on given SynIndex. Returns NaN on invalid index. This is the core synapse var access method used by other methods.
func (*PathBase) SynVal1DDi ¶
SynVal1DDi returns value of given variable index (from SynVarIndex) on given SynIndex. Returns NaN on invalid index. This is the core synapse var access method used by other methods. Includes Di data parallel index for data-parallel synaptic values.
func (*PathBase) SynValDi ¶
SynValDi returns value of given variable name on the synapse between given send, recv unit indexes (1D, flat indexes). Returns math32.NaN() for access errors (see SynValTry for error message) Includes Di data parallel index for data-parallel synaptic values.
func (*PathBase) SynValue ¶
SynVal returns value of given variable name on the synapse between given send, recv unit indexes (1D, flat indexes). Returns math32.NaN() for access errors (see SynValTry for error message)
func (*PathBase) SynValues ¶
SynValues sets values of given variable name for each synapse, using the natural ordering of the synapses (sender based for Axon), into given float32 slice (only resized if not big enough). Returns error on invalid var name.
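A typical use of these accessors is to probe a single learned weight. A hedged sketch, assuming pj is in scope and that SynIndex and SynValue take (send, recv) flat indexes as described above (exact signatures assumed), with math32 and fmt imported:

	// "Wt" is a standard synapse variable name; indexes 3 and 7 are arbitrary.
	if syi := pj.SynIndex(3, 7); syi >= 0 { // -1 if no synapse between these units
		wt := pj.SynValue("Wt", 3, 7) // NaN on access errors
		if !math32.IsNaN(wt) {
			fmt.Println("Wt between units 3 and 7:", wt)
		}
	}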
func (*PathBase) SynVarIndex ¶
SynVarIndex returns the index of given variable within the synapse, according to *this path's* SynVarNames() list (using a map to lookup index), or -1 and error message if not found.
func (*PathBase) SynVarNames ¶
func (*PathBase) SynVarNum ¶
SynVarNum returns the number of synapse-level variables for this path. This is needed for extending indexes in derived types.
func (*PathBase) SynVarProps ¶
SynVarProps returns properties for variables
type PathGTypes ¶
type PathGTypes int32 //enums:enum
PathGTypes represents the conductance (G) effects of a given pathway, including excitatory, inhibitory, and modulatory.
const ( // Excitatory pathways drive Ge conductance on receiving neurons, // which send to GeRaw and GeSyn neuron variables. ExcitatoryG PathGTypes = iota // Inhibitory pathways drive Gi inhibitory conductance, // which send to GiRaw and GiSyn neuron variables. InhibitoryG // Modulatory pathways have a multiplicative effect on other inputs, // which send to GModRaw and GModSyn neuron variables. ModulatoryG // Maintenance pathways drive unique set of NMDA channels that support // strong active maintenance abilities. // Send to GMaintRaw and GMaintSyn neuron variables. MaintG // Context pathways are for inputs to CT layers, which update // only at the end of the plus phase, and send to CtxtGe. ContextG )
The pathway conductance types
const PathGTypesN PathGTypes = 5
PathGTypesN is the highest valid value for type PathGTypes, plus one.
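The constant comments above name which neuron variables each conductance type feeds. The following purely illustrative mapping restates that routing; the actual integration is done in GatherSpikes and related methods:

	// routeTargets maps each conductance type to the neuron variable names it
	// feeds, per the constant comments above; illustrative only.
	func routeTargets(gt axon.PathGTypes) (raw, syn string) {
		switch gt {
		case axon.ExcitatoryG:
			return "GeRaw", "GeSyn"
		case axon.InhibitoryG:
			return "GiRaw", "GiSyn"
		case axon.ModulatoryG:
			return "GModRaw", "GModSyn"
		case axon.MaintG:
			return "GMaintRaw", "GMaintSyn"
		case axon.ContextG:
			return "CtxtGeRaw", "CtxtGe"
		}
		return "", ""
	}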
func PathGTypesValues ¶
func PathGTypesValues() []PathGTypes
PathGTypesValues returns all possible values for the type PathGTypes.
func (PathGTypes) Desc ¶
func (i PathGTypes) Desc() string
Desc returns the description of the PathGTypes value.
func (PathGTypes) Int64 ¶
func (i PathGTypes) Int64() int64
Int64 returns the PathGTypes value as an int64.
func (PathGTypes) MarshalText ¶
func (i PathGTypes) MarshalText() ([]byte, error)
MarshalText implements the encoding.TextMarshaler interface.
func (*PathGTypes) SetInt64 ¶
func (i *PathGTypes) SetInt64(in int64)
SetInt64 sets the PathGTypes value from an int64.
func (*PathGTypes) SetString ¶
func (i *PathGTypes) SetString(s string) error
SetString sets the PathGTypes value from its string representation, and returns an error if the string is invalid.
func (PathGTypes) String ¶
func (i PathGTypes) String() string
String returns the string representation of this PathGTypes value.
func (*PathGTypes) UnmarshalText ¶
func (i *PathGTypes) UnmarshalText(text []byte) error
UnmarshalText implements the encoding.TextUnmarshaler interface.
func (PathGTypes) Values ¶
func (i PathGTypes) Values() []enums.Enum
Values returns all possible values for the type PathGTypes.
type PathIndexes ¶
type PathIndexes struct { PathIndex uint32 // index of the pathway in global path list: [Layer][SendPaths] RecvLay uint32 // index of the receiving layer in global list of layers RecvNeurSt uint32 // starting index of neurons in recv layer -- so we don't need layer to get to neurons RecvNeurN uint32 // number of neurons in recv layer SendLay uint32 // index of the sending layer in global list of layers SendNeurSt uint32 // starting index of neurons in sending layer -- so we don't need layer to get to neurons SendNeurN uint32 // number of neurons in send layer SynapseSt uint32 // start index into global Synapse array: [Layer][SendPaths][Synapses] SendConSt uint32 // start index into global PathSendCon array: [Layer][SendPaths][SendNeurons] RecvConSt uint32 // start index into global PathRecvCon array: [Layer][RecvPaths][RecvNeurons] RecvSynSt uint32 // start index into global sender-based Synapse index array: [Layer][SendPaths][Synapses] GBufSt uint32 // start index into global PathGBuf global array: [Layer][RecvPaths][RecvNeurons][MaxDelay+1] GSynSt uint32 // start index into global PathGSyn global array: [Layer][RecvPaths][RecvNeurons] // contains filtered or unexported fields }
PathIndexes contains path-level index information into global memory arrays
func (*PathIndexes) RecvNIndexToLayIndex ¶
func (pi *PathIndexes) RecvNIndexToLayIndex(ni uint32) uint32
RecvNIndexToLayIndex converts a neuron's index in the network-level global list of all neurons to the receiving layer-specific index -- e.g., for accessing GBuf and GSyn values. Just subtracts RecvNeurSt; provided mainly as documentation of that convention.
func (*PathIndexes) SendNIndexToLayIndex ¶
func (pi *PathIndexes) SendNIndexToLayIndex(ni uint32) uint32
SendNIndexToLayIndex converts a neuron's index in the network-level global list of all neurons to the sending layer-specific index. Just subtracts SendNeurSt; provided mainly as documentation of that convention.
type PathParams ¶
type PathParams struct { // functional type of path, which determines functional code path // for specialized layer types, and is synchronized with the Path.Typ value PathType PathTypes // recv and send neuron-level pathway index array access info Indexes PathIndexes `display:"-"` // synaptic communication parameters: delay, probability of failure Com SynComParams `display:"inline"` // pathway scaling parameters for computing GScale: // modulates overall strength of pathway, using both // absolute and relative factors, with adaptation option to maintain target max conductances PathScale PathScaleParams `display:"inline"` // slowly adapting, structural weight value parameters, // which control initial weight values and slower outer-loop adjustments SWts SWtParams `display:"add-fields"` // synaptic-level learning parameters for learning in the fast LWt values. Learn LearnSynParams `display:"add-fields"` // conductance scaling values GScale GScaleValues `display:"inline"` // Params for RWPath and TDPredPath for doing dopamine-modulated learning // for reward prediction: Da * Send activity. // Use in RWPredLayer or TDPredLayer typically to generate reward predictions. // If the Da sign is positive, the first recv unit learns fully; for negative, // second one learns fully. // Lower lrate applies for opposite cases. Weights are positive-only. RLPred RLPredPathParams `display:"inline"` // for trace-based learning in the MatrixPath. A trace of synaptic co-activity // is formed, and then modulated by dopamine whenever it occurs. // This bridges the temporal gap between gating activity and subsequent activity, // and is based biologically on synaptic tags. // Trace is reset at time of reward based on ACh level from CINs. Matrix MatrixPathParams `display:"inline"` // Basolateral Amygdala pathway parameters. BLA BLAPathParams `display:"inline"` // Hip bench parameters. Hip HipPathParams `display:"inline"` // contains filtered or unexported fields }
PathParams contains all of the path parameters. These values must remain constant over the course of computation. On the GPU, they are loaded into a uniform.
func (*PathParams) AllParams ¶
func (pj *PathParams) AllParams() string
func (*PathParams) BLADefaults ¶
func (pj *PathParams) BLADefaults()
func (*PathParams) CTCtxtPathDefaults ¶
func (pj *PathParams) CTCtxtPathDefaults()
func (*PathParams) DWtFromDiDWtSyn ¶
func (pj *PathParams) DWtFromDiDWtSyn(ctx *Context, syni uint32)
DWtFromDiDWtSyn updates DWt from data parallel DiDWt values
func (*PathParams) DWtSyn ¶
func (pj *PathParams) DWtSyn(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool, isTarget bool)
DWtSyn is the overall entry point for weight change (learning) at given synapse. It selects appropriate function based on pathway type. subPool is the receiving layer SubPool.
func (*PathParams) DWtSynBLA ¶
func (pj *PathParams) DWtSynBLA(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool)
DWtSynBLA computes the weight change (learning) at given synapse for BLAPath type. Like the BG Matrix learning rule, a synaptic tag "trace" is established at CS onset (ACh) and learning at US / extinction is a function of trace * delta from US activity (temporal difference), which limits learning.
func (*PathParams) DWtSynCortex ¶
func (pj *PathParams) DWtSynCortex(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool, isTarget bool)
DWtSynCortex computes the weight change (learning) at given synapse for cortex. Uses synaptically integrated spiking, computed at the Theta cycle interval. This is the trace version for hidden units, and uses syn CaP - CaD for targets.
func (*PathParams) DWtSynDSMatrix ¶
func (pj *PathParams) DWtSynDSMatrix(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool)
DWtSynDSMatrix computes the weight change (learning) at given synapse, for the DSMatrixPath type.
func (*PathParams) DWtSynHebb ¶
func (pj *PathParams) DWtSynHebb(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool)
DWtSynHebb computes the weight change (learning) at given synapse for cortex. Uses synaptically integrated spiking, computed at the Theta cycle interval. This is the trace version for hidden units, and uses syn CaP - CaD for targets.
func (*PathParams) DWtSynHip ¶
func (pj *PathParams) DWtSynHip(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool, isTarget bool)
DWtSynHip computes the weight change (learning) at given synapse for cortex + Hip (CPCA Hebb learning). Uses synaptically integrated spiking, computed at the Theta cycle interval. This is the trace version for hidden units, and uses syn CaP - CaD for targets. Adds proportional CPCA learning rule for hip-specific paths
func (*PathParams) DWtSynRWPred ¶
func (pj *PathParams) DWtSynRWPred(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool)
DWtSynRWPred computes the weight change (learning) at given synapse, for the RWPredPath type
func (*PathParams) DWtSynTDPred ¶
func (pj *PathParams) DWtSynTDPred(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool)
DWtSynTDPred computes the weight change (learning) at given synapse, for the TDRewPredPath type
func (*PathParams) DWtSynVSMatrix ¶
func (pj *PathParams) DWtSynVSMatrix(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool)
DWtSynVSMatrix computes the weight change (learning) at given synapse, for the VSMatrixPath type.
func (*PathParams) DWtSynVSPatch ¶
func (pj *PathParams) DWtSynVSPatch(ctx *Context, syni, si, ri, di uint32, layPool, subPool *Pool)
DWtSynVSPatch computes the weight change (learning) at given synapse, for the VSPatchPath type.
func (*PathParams) Defaults ¶
func (pj *PathParams) Defaults()
func (*PathParams) GatherSpikes ¶
func (pj *PathParams) GatherSpikes(ctx *Context, ly *LayerParams, ni, di uint32, gRaw float32, gSyn *float32)
GatherSpikes integrates G*Raw and G*Syn values for given neuron from the given Path-level GRaw value, first integrating pathway-level GSyn value.
func (*PathParams) HipDefaults ¶
func (pj *PathParams) HipDefaults()
func (*PathParams) IsExcitatory ¶
func (pj *PathParams) IsExcitatory() bool
func (*PathParams) IsInhib ¶
func (pj *PathParams) IsInhib() bool
func (*PathParams) MatrixDefaults ¶
func (pj *PathParams) MatrixDefaults()
func (*PathParams) RLPredDefaults ¶
func (pj *PathParams) RLPredDefaults()
func (*PathParams) SetFixedWts ¶
func (pj *PathParams) SetFixedWts()
SetFixedWts sets parameters for fixed, non-learning weights with a default of Mean = 0.8, Var = 0 strength
func (*PathParams) ShouldDisplay ¶
func (pj *PathParams) ShouldDisplay(field string) bool
func (*PathParams) SynCa ¶
func (pj *PathParams) SynCa(ctx *Context, si, ri, di uint32, syCaP, syCaD *float32)
SynCa gets the synaptic calcium P (potentiation) and D (depression) values, using optimized computation.
func (*PathParams) SynRecvLayIndex ¶
func (pj *PathParams) SynRecvLayIndex(ctx *Context, syni uint32) uint32
SynRecvLayIndex converts the Synapse RecvIndex of recv neuron's index in network level global list of all neurons to receiving layer-specific index.
func (*PathParams) SynSendLayIndex ¶
func (pj *PathParams) SynSendLayIndex(ctx *Context, syni uint32) uint32
SynSendLayIndex converts the Synapse SendIndex of sending neuron's index in network level global list of all neurons to sending layer-specific index.
func (*PathParams) Update ¶
func (pj *PathParams) Update()
func (*PathParams) VSPatchDefaults ¶
func (pj *PathParams) VSPatchDefaults()
func (*PathParams) WtFromDWtSyn ¶
func (pj *PathParams) WtFromDWtSyn(ctx *Context, syni uint32)
WtFromDWtSyn is the overall entry point for updating weights from weight changes.
func (*PathParams) WtFromDWtSynCortex ¶
func (pj *PathParams) WtFromDWtSynCortex(ctx *Context, syni uint32)
WtFromDWtSynCortex updates weights from dwt changes
func (*PathParams) WtFromDWtSynNoLimits ¶
func (pj *PathParams) WtFromDWtSynNoLimits(ctx *Context, syni uint32)
WtFromDWtSynNoLimits -- weight update without limits
type PathScaleParams ¶
type PathScaleParams struct { // relative scaling that shifts balance between different pathways -- this is subject to normalization across all other pathways into receiving neuron, and determines the GScale.Target for adapting scaling Rel float32 `min:"0"` // absolute multiplier adjustment factor for the path scaling -- can be used to adjust for idiosyncrasies not accommodated by the standard scaling based on initial target activation level and relative scaling factors -- any adaptation operates by directly adjusting scaling factor from the initially computed value Abs float32 `default:"1" min:"0"` // contains filtered or unexported fields }
PathScaleParams are pathway scaling parameters: modulates overall strength of pathway, using both absolute and relative factors.
func (*PathScaleParams) Defaults ¶
func (ws *PathScaleParams) Defaults()
func (*PathScaleParams) FullScale ¶
func (ws *PathScaleParams) FullScale(savg, snu, ncon float32) float32
FullScale returns full scaling factor, which is product of Abs * Rel * SLayActScale
func (*PathScaleParams) SLayActScale ¶
func (ws *PathScaleParams) SLayActScale(savg, snu, ncon float32) float32
SLayActScale computes scaling factor based on sending layer activity level (savg), number of units in sending layer (snu), and number of recv connections (ncon). Uses a fixed sem_extra standard-error-of-the-mean (SEM) extra value of 2 to add to the average expected number of active connections to receive, for purposes of computing scaling factors with partial connectivity. For 25% layer activity, the binomial SEM = sqrt(p(1-p)) = 0.43, so 3x that is roughly 1.3, making 2 a reasonable default.
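To make the scaling logic concrete, here is a simplified, assumed sketch (not the actual implementation) that normalizes by the expected number of active incoming connections, padded by the fixed sem_extra of 2; FullScale then multiplies this by Abs and Rel per its description above:

	// expActScale is an assumed, simplified version of the sending-activity
	// scaling described above: normalize by the expected number of active
	// connections, padded by sem_extra = 2, clamped to [1, ncon].
	func expActScale(savg, ncon float32) float32 {
		const semExtra = 2
		expAct := savg*ncon + semExtra // expected active conns + SEM-based padding
		if expAct > ncon {
			expAct = ncon
		}
		if expAct < 1 {
			expAct = 1
		}
		return 1 / expAct
	}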
func (*PathScaleParams) Update ¶
func (ws *PathScaleParams) Update()
type PathTypes ¶
type PathTypes int32 //enums:enum
PathTypes is an axon-specific path type enum, that encompasses all the different algorithm types supported. Class parameter styles automatically key off of these types. The first entries must be kept synchronized with the emer.PathType.
const ( // Forward is a feedforward, bottom-up pathway from sensory inputs to higher layers ForwardPath PathTypes = iota // Back is a feedback, top-down pathway from higher layers back to lower layers BackPath // Lateral is a lateral pathway within the same layer / area LateralPath // Inhib is an inhibitory pathway that drives inhibitory // synaptic conductances instead of the default excitatory ones. InhibPath // CTCtxt are pathways from Superficial layers to CT layers that // send Burst activations to drive updating of CtxtGe excitatory conductance, // at end of plus (5IB Bursting) phase. Biologically, this pathway // comes from the PT layer 5IB neurons, but it is simpler to use the // Super neurons directly, and PT are optional for most network types. // These pathways also use a special learning rule that // takes into account the temporal delays in the activation states. // Can also add self context from CT for deeper temporal context. CTCtxtPath // RWPath does dopamine-modulated learning for reward prediction: // Da * Send.CaSpkP (integrated current spiking activity). // Uses RLPredPath parameters. // Use in RWPredLayer typically to generate reward predictions. // If the Da sign is positive, the first recv unit learns fully; // for negative, second one learns fully. Lower lrate applies for // opposite cases. Weights are positive-only. RWPath // TDPredPath does dopamine-modulated learning for reward prediction: // DWt = Da * Send.SpkPrv (activity on *previous* timestep) // Uses RLPredPath parameters. // Use in TDPredLayer typically to generate reward predictions. // If the Da sign is positive, the first recv unit learns fully; // for negative, second one learns fully. Lower lrate applies for // opposite cases. Weights are positive-only. TDPredPath // BLAPath implements the Rubicon BLA learning rule: // dW = ACh * X_t-1 * (Y_t - Y_t-1) // The recv delta is across trials, where the US should activate on trial // boundary, to enable sufficient time for gating through to OFC, so // BLA initially learns based on US present - US absent. // It can also learn based on CS onset if there is a prior CS that predicts that. BLAPath HipPath // VSPatchPath implements the VSPatch learning rule: // dW = ACh * DA * X * Y // where DA is D1 vs. D2 modulated DA level, X = sending activity factor, // Y = receiving activity factor, and ACh provides overall modulation. VSPatchPath // VSMatrixPath is for ventral striatum matrix (SPN / MSN) neurons // supporting trace-based learning, where an initial // trace of synaptic co-activity is formed, and then modulated // by subsequent phasic dopamine & ACh when an outcome occurs. // This bridges the temporal gap between gating activity // and subsequent outcomes, and is based biologically on synaptic tags. // Trace is reset at time of reward based on ACh level (from CINs in biology). VSMatrixPath // DSMatrixPath is for dorsal striatum matrix (SPN / MSN) neurons // supporting trace-based learning, where an initial // trace of synaptic co-activity is formed, and then modulated // by subsequent phasic dopamine & ACh when an outcome occurs. // This bridges the temporal gap between gating activity // and subsequent outcomes, and is based biologically on synaptic tags. // Trace is reset at time of reward based on ACh level (from CINs in biology). DSMatrixPath )
The pathway types
const PathTypesN PathTypes = 12
PathTypesN is the highest valid value for type PathTypes, plus one.
func PathTypesValues ¶
func PathTypesValues() []PathTypes
PathTypesValues returns all possible values for the type PathTypes.
func (PathTypes) MarshalText ¶
MarshalText implements the encoding.TextMarshaler interface.
func (*PathTypes) SetString ¶
SetString sets the PathTypes value from its string representation, and returns an error if the string is invalid.
func (*PathTypes) UnmarshalText ¶
UnmarshalText implements the encoding.TextUnmarshaler interface.
type Pool ¶
type Pool struct {
// starting and ending (exclusive) layer-wise indexes for the list of neurons in this pool
StIndex, EdIndex uint32 `edit:"-"`
// layer index in global layer list
LayIndex uint32 `display:"-"`
// data parallel index (innermost index per layer)
DataIndex uint32 `display:"-"`
// pool index in global pool list:
PoolIndex uint32 `display:"-"`
// is this a layer-wide pool? if not, it represents a sub-pool of units within a 4D layer
IsLayPool slbool.Bool `edit:"-"`
// for special types where relevant (e.g., MatrixLayer, BGThalLayer), indicates if the pool was gated
Gated slbool.Bool `edit:"-"`
// fast-slow FFFB inhibition values
Inhib fsfffb.Inhib `edit:"-"`
// average and max values for relevant variables in this pool, at different time scales
AvgMax PoolAvgMax
// absolute value of AvgDif differences from actual neuron ActPct relative to TrgAvg
AvgDif AvgMaxI32 `edit:"-" display:"inline"`
// contains filtered or unexported fields
}
Pool contains computed values for FS-FFFB inhibition, and various other state values for layers and pools (unit groups) that can be subject to inhibition
func (*Pool) AvgMaxUpdate ¶
AvgMaxUpdate updates the AvgMax values based on current neuron values
type PoolAvgMax ¶
type PoolAvgMax struct { // avg and maximum CaSpkP (continuously updated at roughly 40 msec integration window timescale, ends up capturing potentiation, plus-phase signal) -- this is the primary variable to use for tracking overall pool activity CaSpkP AvgMaxPhases `edit:"-" display:"inline"` // avg and maximum CaSpkD longer-term depression / DAPK1 signal in layer CaSpkD AvgMaxPhases `edit:"-" display:"inline"` // avg and maximum SpkMax value (based on CaSpkP) -- reflects peak activity at any point across the cycle SpkMax AvgMaxPhases `edit:"-" display:"inline"` // avg and maximum Act firing rate value Act AvgMaxPhases `edit:"-" display:"inline"` // avg and maximum GeInt integrated running-average excitatory conductance value GeInt AvgMaxPhases `edit:"-" display:"inline"` // avg and maximum GiInt integrated running-average inhibitory conductance value GiInt AvgMaxPhases `edit:"-" display:"inline"` }
PoolAvgMax contains the average and maximum values over a Pool of neurons for different variables of interest, at Cycle, Minus and Plus phase timescales. All of the cycle level values are updated at the *start* of the cycle based on values from the prior cycle -- thus are 1 cycle behind in general.
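For example, a stats routine holding a *Pool could read pooled activity at a given phase. The Minus / Plus and Avg / Max sub-field names below are assumptions based on the description above, and fmt is assumed imported:

	// pl is a *axon.Pool obtained from the layer's pool list.
	plusAvg := pl.AvgMax.CaSpkP.Plus.Avg   // plus-phase average CaSpkP over the pool (assumed fields)
	minusMax := pl.AvgMax.CaSpkP.Minus.Max // minus-phase maximum CaSpkP over the pool (assumed fields)
	fmt.Printf("CaSpkP plus avg: %g  minus max: %g\n", plusAvg, minusMax)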
func (*PoolAvgMax) Calc ¶
func (am *PoolAvgMax) Calc(refIndex int32)
Calc does Calc on Cycle level, and re-inits
func (*PoolAvgMax) CycleToMinus ¶
func (am *PoolAvgMax) CycleToMinus()
CycleToMinus grabs current Cycle values into the Minus phase values
func (*PoolAvgMax) CycleToPlus ¶
func (am *PoolAvgMax) CycleToPlus()
CycleToPlus grabs current Cycle values into the Plus phase values
func (*PoolAvgMax) Init ¶
func (am *PoolAvgMax) Init()
Init does Init on Cycle values -- for update start. Always left initialized, so generally unnecessary.
func (*PoolAvgMax) Zero ¶
func (am *PoolAvgMax) Zero()
Zero does full reset on everything -- for InitActs
type PopCodeParams ¶
type PopCodeParams struct { // use popcode encoding of variable(s) that this layer represents On slbool.Bool // Ge multiplier for driving excitatory conductance based on PopCode -- multiplies normalized activation values Ge float32 `default:"0.1"` // minimum value representable -- for GaussBump, typically include extra to allow mean with activity on either side to represent the lowest value you want to encode Min float32 `default:"-0.1"` // maximum value representable -- for GaussBump, typically include extra to allow mean with activity on either side to represent the highest value you want to encode Max float32 `default:"1.1"` // activation multiplier for values at Min end of range, where values at Max end have an activation of 1 -- if this is < 1, then there is a rate code proportional to the value in addition to the popcode pattern -- see also MinSigma, MaxSigma MinAct float32 `default:"1,0.5"` // sigma parameter of a gaussian specifying the tuning width of the coarse-coded units, in normalized 0-1 range -- for Min value -- if MinSigma < MaxSigma then more units are activated for Max values vs. Min values, proportionally MinSigma float32 `default:"0.1,0.08"` // sigma parameter of a gaussian specifying the tuning width of the coarse-coded units, in normalized 0-1 range -- for Max value -- if MinSigma < MaxSigma then more units are activated for Max values vs. Min values, proportionally MaxSigma float32 `default:"0.1,0.12"` // ensure that encoded and decoded value remains within specified range Clip slbool.Bool }
PopCodeParams provides an encoding of scalar value using population code, where a single continuous (scalar) value is encoded as a gaussian bump across a population of neurons (1 dimensional). It can also modulate rate code and number of neurons active according to the value. This is for layers that represent values as in the Rubicon system (from Context.Rubicon). Both normalized activation values (1 max) and Ge conductance values can be generated.
func (*PopCodeParams) ClipValue ¶
func (pc *PopCodeParams) ClipValue(val float32) float32
ClipValue returns clipped (clamped) value in min / max range.
func (*PopCodeParams) Decode ¶
func (pc *PopCodeParams) Decode(acts []float32) float32
Decode decodes value from a pattern of activation as the activation-weighted-average of the unit's preferred tuning values. Must have 2 or more values in acts.
func (*PopCodeParams) Defaults ¶
func (pc *PopCodeParams) Defaults()
func (*PopCodeParams) EncodeGe ¶
func (pc *PopCodeParams) EncodeGe(i, n uint32, val float32) float32
EncodeGe returns Ge value for given value, for neuron index i out of n total neurons. n must be 2 or more.
func (*PopCodeParams) EncodeValue ¶
func (pc *PopCodeParams) EncodeValue(i, n uint32, val float32) float32
EncodeValue returns value for given value, for neuron index i out of n total neurons. n must be 2 or more.
func (*PopCodeParams) ProjectParam ¶
func (pc *PopCodeParams) ProjectParam(minParam, maxParam, clipVal float32) float32
ProjectParam projects given min / max param value onto val within range
func (*PopCodeParams) SetRange ¶
func (pc *PopCodeParams) SetRange(min, max, minSigma, maxSigma float32)
SetRange sets the min, max and sigma values
func (*PopCodeParams) ShouldDisplay ¶
func (pc *PopCodeParams) ShouldDisplay(field string) bool
func (*PopCodeParams) Uncertainty ¶
func (pc *PopCodeParams) Uncertainty(val float32, acts []float32) float32
Uncertainty returns the uncertainty of the given distribution of activities relative to a perfect code for the given value. Uncertainty is the average unit-wise standard deviation between the pop code encoding and the max-normalized activities.
func (*PopCodeParams) Update ¶
func (pc *PopCodeParams) Update()
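Putting the encode / decode methods together, a minimal round-trip sketch; the 16 units and the 0..1 range are arbitrary choices, and fmt is assumed imported:

	pc := axon.PopCodeParams{}
	pc.Defaults()
	pc.SetRange(0, 1, 0.1, 0.1) // encode values in the 0..1 range with a fixed tuning width

	n := uint32(16)
	acts := make([]float32, n)
	val := float32(0.3)
	for i := uint32(0); i < n; i++ {
		acts[i] = pc.EncodeValue(i, n, val) // normalized activation for unit i
	}
	fmt.Println("decoded:", pc.Decode(acts)) // should be close to 0.3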
type PulvParams ¶
type PulvParams struct { // multiplier on driver input strength, multiplies CaSpkP from driver layer to produce Ge excitatory input to Pulv unit. DriveScale float32 `default:"0.1" min:"0.0"` // Level of Max driver layer CaSpkP at which the drivers fully drive the burst phase activation. If there is weaker driver input, then (Max/FullDriveAct) proportion of the non-driver inputs remain and this critically prevents the network from learning to turn activation off, which is difficult and severely degrades learning. FullDriveAct float32 `default:"0.6" min:"0.01"` // index of layer that generates the driving activity into this one -- set via SetBuildConfig(DriveLayName) setting DriveLayIndex int32 `edit:"-"` // contains filtered or unexported fields }
PulvParams provides parameters for how the plus-phase (outcome) state of Pulvinar thalamic relay cell neurons is computed from the corresponding driver neuron Burst activation (or CaSpkP if not Super)
func (*PulvParams) Defaults ¶
func (tp *PulvParams) Defaults()
func (*PulvParams) DriveGe ¶
func (tp *PulvParams) DriveGe(act float32) float32
DriveGe returns effective excitatory conductance to use for given driver input Burst activation
func (*PulvParams) NonDrivePct ¶
func (tp *PulvParams) NonDrivePct(drvMax float32) float32
NonDrivePct returns the multiplier proportion of the non-driver based Ge to keep around, based on FullDriveAct and the max activity in driver layer.
func (*PulvParams) Update ¶
func (tp *PulvParams) Update()
type PushOff ¶
type PushOff struct { // offset Off uint32 // contains filtered or unexported fields }
PushOff has push constants for setting offset into compute shader
type RLPredPathParams ¶
type RLPredPathParams struct { // how much to learn on opposite DA sign coding neuron (0..1) OppSignLRate float32 // tolerance on DA -- if below this abs value, then DA goes to zero and there is no learning -- prevents prediction from exactly learning to cancel out reward value, retaining a residual valence of signal DaTol float32 // contains filtered or unexported fields }
RLPredPathParams does dopamine-modulated learning for reward prediction: Da * Send.Act. Used by RWPath and TDPredPath within corresponding RWPredLayer or TDPredLayer to generate reward predictions based on its incoming weights, using linear activation function. Has no weight bounds or limits on sign etc.
func (*RLPredPathParams) Defaults ¶
func (pj *RLPredPathParams) Defaults()
func (*RLPredPathParams) Update ¶
func (pj *RLPredPathParams) Update()
type RLRateParams ¶
type RLRateParams struct { // use learning rate modulation On slbool.Bool `default:"true"` // use a linear sigmoid function: if act > .5: 1-act; else act // otherwise use the actual sigmoid derivative which is squared: a(1-a) SigmoidLinear slbool.Bool `default:"true"` // minimum learning rate multiplier for sigmoidal act (1-act) factor, // which prevents lrate from going too low for extreme values. // Set to 1 to disable Sigmoid derivative factor, which is default for Target layers. SigmoidMin float32 `default:"0.05,1"` // modulate learning rate as a function of plus - minus differences Diff slbool.Bool // threshold on Max(CaSpkP, CaSpkD) below which Min lrate applies. // must be > 0 to prevent div by zero. SpkThr float32 `default:"0.1"` // threshold on recv neuron error delta, i.e., |CaSpkP - CaSpkD| below which lrate is at Min value DiffThr float32 `default:"0.02"` // for Diff component, minimum learning rate value when below DiffThr Min float32 `default:"0.001"` // contains filtered or unexported fields }
RLRateParams are recv neuron learning rate modulation parameters. Has two factors: the derivative of the sigmoid based on CaSpkD activity levels, and based on the phase-wise differences in activity (Diff).
func (*RLRateParams) Defaults ¶
func (rl *RLRateParams) Defaults()
func (*RLRateParams) RLRateDiff ¶
func (rl *RLRateParams) RLRateDiff(scap, scad float32) float32
RLRateDiff returns the learning rate as a function of difference between CaSpkP and CaSpkD values
func (*RLRateParams) RLRateSigDeriv ¶
func (rl *RLRateParams) RLRateSigDeriv(act float32, laymax float32) float32
RLRateSigDeriv returns the sigmoid derivative learning rate factor as a function of spiking activity, with mid-range values having full learning and extreme values a reduced learning rate: deriv = 4*act*(1-act) or linear: if act > .5: 2*(1-act); else 2*act The activity should be CaSpkP and the layer maximum is used to normalize that to a 0-1 range.
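A sketch of that factor, written directly from the formula in the description; applying SigmoidMin as a floor is an assumption about how that parameter enters:

	// sigDeriv computes the sigmoid-derivative learning rate factor as described
	// above. Assumes layMax > 0; the SigmoidMin floor is an assumption.
	func sigDeriv(caSpkP, layMax, sigmoidMin float32, linear bool) float32 {
		act := caSpkP / layMax // normalize activity to 0..1 using the layer maximum
		var lr float32
		if linear {
			if act > 0.5 {
				lr = 2 * (1 - act)
			} else {
				lr = 2 * act
			}
		} else {
			lr = 4 * act * (1 - act)
		}
		if lr < sigmoidMin {
			lr = sigmoidMin
		}
		return lr
	}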
func (*RLRateParams) ShouldDisplay ¶
func (rl *RLRateParams) ShouldDisplay(field string) bool
func (*RLRateParams) Update ¶
func (rl *RLRateParams) Update()
type RWDaParams ¶
type RWDaParams struct { // tonic baseline Ge level for DA = 0 -- +/- are between 0 and 2*TonicGe -- just for spiking display of computed DA value TonicGe float32 // index of RWPredLayer to get reward prediction from -- set during Build from BuildConfig RWPredLayName RWPredLayIndex int32 `edit:"-"` // contains filtered or unexported fields }
RWDaParams computes a dopamine (DA) signal using simple Rescorla-Wagner learning dynamic (i.e., PV learning in the Rubicon framework).
func (*RWDaParams) Defaults ¶
func (rp *RWDaParams) Defaults()
func (*RWDaParams) GeFromDA ¶
func (rp *RWDaParams) GeFromDA(da float32) float32
GeFromDA returns excitatory conductance from DA dopamine value
func (*RWDaParams) Update ¶
func (rp *RWDaParams) Update()
type RWPredParams ¶
type RWPredParams struct { // default 0.1..0.99 range of predictions that can be represented -- having a truncated range preserves some sensitivity in dopamine at the extremes of good or poor performance PredRange minmax.F32 }
RWPredParams parameterizes reward prediction for a simple Rescorla-Wagner learning dynamic (i.e., PV learning in the Rubicon framework).
func (*RWPredParams) Defaults ¶
func (rp *RWPredParams) Defaults()
func (*RWPredParams) Update ¶
func (rp *RWPredParams) Update()
type RandFunIndex ¶
type RandFunIndex uint32
const ( RandFunActPGe RandFunIndex = iota RandFunActPGi RandFunActSMaintP RandFunIndexN )
We use this enum to store a unique index for each function that requires random number generation. If you add a new function, you need to add a new enum entry here. RandFunIndexN is the total number of random functions. It autoincrements due to iota.
type Rubicon ¶
type Rubicon struct { // number of possible positive US states and corresponding drives. // The first is always reserved for novelty / curiosity. // Must be set programmatically via SetNUSs method, // which allocates corresponding parameters. NPosUSs uint32 `edit:"-"` // number of possible phasic negative US states (e.g., shock, impact etc). // Must be set programmatically via SetNUSs method, which allocates corresponding // parameters. NNegUSs uint32 `edit:"-"` // number of possible costs, typically including accumulated time and effort costs. // Must be set programmatically via SetNUSs method, which allocates corresponding // parameters. NCosts uint32 `edit:"-"` // parameters and state for built-in drives that form the core motivations // of the agent, controlled by lateral hypothalamus and associated // body state monitoring such as glucose levels and thirst. Drive DriveParams `display:"inline"` // urgency (increasing pressure to do something) and parameters for // updating it. Raw urgency is incremented by same units as effort, // but is only reset with a positive US. Urgency UrgencyParams `display:"inline"` // controls how positive and negative USs are weighted and integrated to // compute an overall PV primary value. USs USParams `display:"inline"` // lateral habenula (LHb) parameters and state, which drives // dipping / pausing in dopamine when the predicted positive // outcome > actual, or actual negative outcome > predicted. // Can also drive bursting for the converse, and via matrix phasic firing. LHb LHbParams `display:"inline"` // parameters for giving up based on PV pos - neg difference GiveUp GiveUpParams `display:"inline"` // population code decoding parameters for estimates from layers ValDecode popcode.OneD `display:"inline"` // contains filtered or unexported fields }
Rubicon implements core elements of the Rubicon goal-directed motivational model, representing the core brainstem-level (hypothalamus) bodily drives and resulting dopamine from US (unconditioned stimulus) inputs, subsuming the earlier Rubicon model of primary value (PV) and learned value (LV), describing the functions of the Amygdala, Ventral Striatum, VTA and associated midbrain nuclei (LDT, LHb, RMTg). Core LHb (lateral habenula) and VTA (ventral tegmental area) dopamine are computed in equations using inputs from specialized network layers (LDTLayer driven by BLA, CeM layers, VSPatchLayer). The Drives, Effort, US and resulting LHb PV dopamine computation all happens at the start of each trial (NewState, Step). The LV / CS dopamine is computed cycle-by-cycle by the VTA layer using parameters set by the VTA layer. Renders USLayer, PVLayer, DrivesLayer representations based on state updated here.
func (*Rubicon) AddTimeEffort ¶
AddTimeEffort adds a unit of time and an increment of effort
func (*Rubicon) DAFromPVs ¶
func (rp *Rubicon) DAFromPVs(pvPos, pvNeg, vsPatchPos, vsPatchPosSum float32) (burst, dip, da, rew float32)
DAFromPVs computes the overall PV DA in terms of LHb burst and dip activity from given pvPos, pvNeg, and vsPatchPos values. Also returns the net "reward" value as the discounted PV value, separate from the vsPatchPos prediction error factor.
func (*Rubicon) DecodeFromLayer ¶
func (rp *Rubicon) DecodeFromLayer(ctx *Context, di uint32, net *Network, layName string) (val, vr float32, err error)
DecodeFromLayer decodes value and variance from the average activity (CaSpkD) of the given layer name. Use for decoding PVposEst and Var, and PVnegEst and Var
func (*Rubicon) DecodePVEsts ¶
DecodePVEsts decodes estimated PV outcome values from PVposP and PVnegP prediction layers, saves in global PVposEst, Var and PVnegEst, Var
func (*Rubicon) DriveUpdate ¶
DriveUpdate is used when auto-updating drive levels based on US consumption, which partially satisfies (decrements) corresponding drive, and on time passing, where drives adapt to their overall baseline levels.
func (*Rubicon) EffortUrgencyUpdate ¶
EffortUrgencyUpdate updates the Effort or Urgency based on given effort increment. Effort is incremented when VSMatrixHasGated (i.e., goal engaged) and Urgency updates otherwise (when not goal engaged) Call this at the start of the trial, in ApplyRubicon method, after NewState.
func (*Rubicon) GiveUpOnGoal ¶
GiveUpOnGoal determines whether to give up on current goal based on Utility, Timing, and Progress weight factors.
func (*Rubicon) InitDrives ¶
InitDrives initializes all the Drives to baseline values (default = 0)
func (*Rubicon) NewState ¶
NewState is called at very start of new state (trial) of processing. Sets HadRew = HasRew from last trial -- used to then reset various things after reward.
func (*Rubicon) PVDA ¶
PVDA computes the PV (primary value) based dopamine based on current state information, at the start of a trial. PV DA is computed by the VS (ventral striatum) and the LHb / RMTg, and the resulting values are stored in global variables. Called after updating USs, Effort, Drives at start of trial step, in Step.
func (*Rubicon) PVcostEstFromCosts ¶
PVcostEstFromCosts returns the estimated negative PV value based on given externally provided Cost values. This can be used to compute estimates to compare network performance.
func (*Rubicon) PVneg ¶
PVneg returns the summed weighted negative value of current costs and negative US state, where each US is multiplied by a weighting factor and summed (usNegSum) and the normalized version of this sum (PVneg = overall negative PV) as 1 / (1 + (PVnegGain * PVnegSum))
func (*Rubicon) PVnegEstFromUSs ¶
PVnegEstFromUSs returns the estimated negative PV value based on given externally provided US values. This can be used to compute estimates to compare network performance.
func (*Rubicon) PVpos ¶
PVpos returns the summed weighted positive value of current positive US state, where each US is multiplied by its current drive and weighting factor (pvPosSum), and the normalized version of this sum (PVpos = overall positive PV) as 1 / (1 + (PVposGain * pvPosSum))
func (*Rubicon) PVposEstFromUSs ¶
func (rp *Rubicon) PVposEstFromUSs(ctx *Context, di uint32, uss []float32) (pvPosSum, pvPos float32)
PVposEstFromUSs returns the estimated positive PV value based on drives and given US values. This can be used to compute estimates to compare network performance.
func (*Rubicon) PVposEstFromUSsDrives ¶
PVposEstFromUSsDrives returns the estimated positive PV value based on given externally provided drives and US values. This can be used to compute estimates to compare network performance.
func (*Rubicon) PVsFromUSs ¶
PVsFromUSs updates the current PV summed, weighted, normalized values from the underlying US values.
func (*Rubicon) ResetGiveUp ¶
ResetGiveUp resets all the give-up related global values.
func (*Rubicon) ResetGoalState ¶
ResetGoalState resets all the goal-engaged global values. Critically, this is only called after goal accomplishment, not after goal gating -- prevents "shortcutting" by re-gating.
func (*Rubicon) SetDrives ¶
SetDrives is used when directly controlling drive levels externally. Curiosity sets the strength for the curiosity drive, and drives are strengths of the remaining sim-specified drives, in order. Any drives not so specified are at the InitDrives baseline level.
func (*Rubicon) SetGoalDistEst ¶
SetGoalDistEst sets the current estimated distance to the goal, in trial step units, which should decrease to 0 at the goal. This should be set at the start of every trial. Also computes the ProgressRate.
func (*Rubicon) SetGoalMaintFromLayer ¶
func (rp *Rubicon) SetGoalMaintFromLayer(ctx *Context, di uint32, net *Network, layName string, maxAct float32) error
SetGoalMaintFromLayer sets the GoalMaint global state variable from the average activity (CaSpkD) of the given layer name. GoalMaint is normalized 0-1 based on the given max activity level, with anything out of range clamped to 0-1 range. Returns (and logs) an error if layer name not found.
func (*Rubicon) SetNUSs ¶
SetNUSs sets the number of _additional_ simulation-specific phasic positive and negative USs (primary value outcomes). This must be called _before_ network Build, which allocates global values that depend on these numbers. Any change must also call network.BuildGlobals. 1 PosUS (curiosity / novelty) is managed automatically by the Rubicon code. Two costs (Time, Effort) are also automatically allocated and managed. The USs specified here need to be managed by the simulation via the SetUS method. Positive USs each have corresponding Drives.
func (*Rubicon) SetUS ¶
func (rp *Rubicon) SetUS(ctx *Context, di uint32, valence ValenceTypes, usIndex int, magnitude float32)
SetUS sets the given _simulation specific_ unconditioned stimulus (US) state for Rubicon algorithm. usIndex = 0 is first US, etc. The US then drives activity of relevant Rubicon-rendered inputs, and dopamine, and sets the global HasRew flag, thus triggering a US learning event. Note that costs can be used to track negative USs that are not strong enough to trigger a US learning event.
func (*Rubicon) Step ¶
Step does one step (trial) after applying USs, Drives, and updating Effort. It should be the final call in ApplyRubicon. Calls PVDA which does all US, PV, LHb, GiveUp updating.
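A hypothetical per-trial helper showing this ordering (NewState, apply USs, EffortUrgencyUpdate, then Step); all argument lists except SetUS, which is documented above, are assumptions, as is the Positive constant name:

	// applyRubicon sketches the per-trial call ordering described in the method
	// docs above; signatures other than SetUS are assumptions.
	func applyRubicon(ctx *axon.Context, di uint32, rp *axon.Rubicon, gotUS bool) {
		rp.NewState(ctx, di) // very start of trial: HadRew = HasRew from last trial
		if gotUS {
			// first simulation-specific positive US at full magnitude;
			// Positive is assumed to be the ValenceTypes constant name
			rp.SetUS(ctx, di, axon.Positive, 0, 1.0)
		}
		rp.EffortUrgencyUpdate(ctx, di, 1) // after NewState: increments effort or urgency
		rp.Step(ctx, di)                   // final call: computes US / PV / LHb / GiveUp dopamine via PVDA
	}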
func (*Rubicon) TimeEffortReset ¶
TimeEffortReset resets the raw time and effort back to zero, at the start of a new gating event.
func (*Rubicon) USnegIndex ¶
USnegIndex allows for automatically managed negative USs, by adding their count to the given _simulation specific_ negative US index to get the actual US index.
func (*Rubicon) USposIndex ¶
USposIndex adds 1 to the given _simulation specific_ positive US index to get the actual US / Drive index, where the first pool is reserved for curiosity / novelty.
func (*Rubicon) VSPatchNewState ¶
VSPatchNewState does VSPatch processing in NewState: updates the global VSPatchPos and VSPatchPosSum values and sets RewPred accordingly, using the max across recorded VSPatch activity levels.
type SMaintParams ¶
type SMaintParams struct {
    // is self maintenance active?
    On slbool.Bool

    // number of neurons within the self-maintenance pool,
    // each of which is assumed to have the same probability of spiking
    NNeurons float32 `default:"10"`

    // conductance multiplier for self maintenance synapses
    Gbar float32 `default:"0.2"`

    // inhib controls how much of the extra maintenance conductance goes to the GeExt, which drives extra proportional inhibition
    Inhib float32 `default:"1"`

    // ISI (inter spike interval) range -- min is used as min ISIAvg for poisson spike rate expected from the population, and above max, no additional maintenance conductance is added
    ISI minmax.F32 `display:"inline"`
}
SMaintParams are parameters for self-maintenance, simulating a population of NMDA-interconnected spiking neurons.
func (*SMaintParams) Defaults ¶
func (sm *SMaintParams) Defaults()
func (*SMaintParams) ExpInt ¶
func (sm *SMaintParams) ExpInt(isi float32) float32
ExpInt returns the exponential interval value for determining when the next excitatory spike will arrive, based on given ISI value for this neuron.
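As a generic illustration of how the next excitatory spike arrival can be drawn for a poisson spike train with a given mean ISI (a sketch of exponential interval sampling, not the package's ExpInt implementation):

package main

import (
    "fmt"
    "math"
    "math/rand"
)

// nextSpikeInterval draws an exponentially distributed interval with mean isi,
// the standard inter-spike interval distribution for a poisson spike train.
func nextSpikeInterval(isi float32) float32 {
    // -ln(U) has mean 1 for U uniform in (0,1], so scaling by isi gives mean isi.
    return isi * float32(-math.Log(1.0-rand.Float64()))
}

func main() {
    fmt.Println(nextSpikeInterval(10)) // interval in cycles for a mean ISI of 10
}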
func (*SMaintParams) ShouldDisplay ¶
func (sm *SMaintParams) ShouldDisplay(field string) bool
func (*SMaintParams) Update ¶
func (sm *SMaintParams) Update()
type SWtAdaptParams ¶
type SWtAdaptParams struct {
    // if true, adaptation is active -- if false, SWt values are not updated, in which case it is generally good to have Init.SPct=0 too.
    On slbool.Bool

    // learning rate multiplier on the accumulated DWt values (which already have fast LRate applied) to incorporate into SWt during slow outer loop updating -- lower values impose stronger constraints, for larger networks that need more structural support, e.g., 0.001 is better after 1,000 epochs in large models. 0.1 is fine for smaller models.
    LRate float32 `default:"0.1,0.01,0.001,0.0002"`

    // amount of mean to subtract from SWt delta when updating -- generally best to set to 1
    SubMean float32 `default:"1"`

    // gain of sigmoidal contrast enhancement function used to transform learned, linear LWt values into Wt values
    SigGain float32 `default:"6"`
}
SWtAdaptParams manages adaptation of SWt values
func (*SWtAdaptParams) Defaults ¶
func (sp *SWtAdaptParams) Defaults()
func (*SWtAdaptParams) ShouldDisplay ¶
func (sp *SWtAdaptParams) ShouldDisplay(field string) bool
func (*SWtAdaptParams) Update ¶
func (sp *SWtAdaptParams) Update()
type SWtInitParams ¶
type SWtInitParams struct {
    // how much of the initial random weights are captured in the SWt values -- rest goes into the LWt values. 1 gives the strongest initial biasing effect, for larger models that need more structural support. 0.5 should work for most models where stronger constraints are not needed.
    SPct float32 `min:"0" max:"1" default:"0,1,0.5"`

    // target mean weight values across receiving neuron's pathway -- the mean SWt values are constrained to remain at this value. some pathways may benefit from lower mean of .4
    Mean float32 `default:"0.5,0.4"`

    // initial variance in weight values, prior to constraints.
    Var float32 `default:"0.25"`

    // symmetrize the initial weight values with those in reciprocal pathway -- typically true for bidirectional excitatory connections
    Sym slbool.Bool `default:"true"`
}
SWtInitParams for initial SWt values
func (*SWtInitParams) Defaults ¶
func (sp *SWtInitParams) Defaults()
func (*SWtInitParams) RandVar ¶
func (sp *SWtInitParams) RandVar(rnd randx.Rand) float32
RandVar returns a zero-mean random variation in weight value, with magnitude based on the Var param.
func (*SWtInitParams) Update ¶
func (sp *SWtInitParams) Update()
type SWtParams ¶
type SWtParams struct {
    // initialization of SWt values
    Init SWtInitParams `display:"inline"`

    // adaptation of SWt values in response to LWt learning
    Adapt SWtAdaptParams `display:"inline"`

    // range limits for SWt values
    Limit minmax.F32 `default:"{'Min':0.2,'Max':0.8}" display:"inline"`
}
SWtParams manages structural, slowly adapting weight values (SWt), in terms of initialization and updating over course of learning. SWts impose initial and slowly adapting constraints on neuron connectivity to encourage differentiation of neuron representations and overall good behavior in terms of not hogging the representational space. The TrgAvg activity constraint is not enforced through SWt -- it needs to be more dynamic and supported by the regular learned weights.
func (*SWtParams) InitWtsSyn ¶
InitWtsSyn initializes weight values based on WtInit randomness parameters for an individual synapse. It also updates the linear weight value based on the sigmoidal weight value.
func (*SWtParams) LWtFromWts ¶
LWtFromWts returns linear, learning LWt from wt and swt. LWt is set to reproduce given Wt relative to given SWt base value.
func (*SWtParams) LinFromSigWt ¶
LinFromSigWt returns linear weight from sigmoidal contrast-enhanced weight. wt is centered at 1 and normed in a +/- 1 range around that; the return value is in the 0-1 range, centered at .5
func (*SWtParams) SigFromLinWt ¶
SigFromLinWt returns sigmoidal contrast-enhanced weight from linear weight, centered at 1 and normed in range +/- 1 around that in preparation for multiplying times SWt
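As an illustration of the sigmoidal contrast enhancement relating linear LWt and effective Wt values, here is a sketch using the standard Leabra-style sigmoid with offset 1 and the default SigGain of 6; the package helpers may differ in detail:

package main

import (
    "fmt"
    "math"
)

// sigFun maps a linear weight in (0,1) to (0,1) with a fixed point at .5,
// sharpening contrast as gain increases (offset assumed to be 1).
func sigFun(lw, gain float64) float64 {
    if lw <= 0 {
        return 0
    }
    if lw >= 1 {
        return 1
    }
    return 1.0 / (1.0 + math.Pow((1.0-lw)/lw, gain))
}

func main() {
    gain := 6.0 // SWtAdaptParams.SigGain default
    for _, lw := range []float64{0.3, 0.5, 0.7} {
        wt := 2.0 * sigFun(lw, gain) // centered at 1, range 0-2, multiplied by SWt
        fmt.Printf("LWt=%.1f -> contrast-enhanced %.3f\n", lw, wt)
    }
}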
type SpikeNoiseParams ¶
type SpikeNoiseParams struct {
    // add noise simulating background spiking levels
    On slbool.Bool

    // mean frequency of excitatory spikes -- typically 50Hz but multiple inputs increase rate -- poisson lambda parameter, also the variance
    GeHz float32 `default:"100"`

    // excitatory conductance per spike -- .001 has minimal impact, .01 can be strong, and .15 is needed to influence timing of clamped inputs
    Ge float32 `min:"0"`

    // mean frequency of inhibitory spikes -- typically 100Hz fast spiking but multiple inputs increase rate -- poisson lambda parameter, also the variance
    GiHz float32 `default:"200"`

    // inhibitory conductance per spike -- .001 has minimal impact, .01 can be strong, and .15 is needed to influence timing of clamped inputs
    Gi float32 `min:"0"`

    // add Ge noise to GeMaintRaw instead of standard Ge -- used for PTMaintLayer for example
    MaintGe slbool.Bool

    // Exp(-Interval) which is the threshold for GeNoiseP as it is updated
    GeExpInt float32 `display:"-" json:"-" xml:"-"`

    // Exp(-Interval) which is the threshold for GiNoiseP as it is updated
    GiExpInt float32 `display:"-" json:"-" xml:"-"`
}
SpikeNoiseParams parameterizes background spiking activity impinging on the neuron, simulated using a poisson spiking process.
func (*SpikeNoiseParams) Defaults ¶
func (an *SpikeNoiseParams) Defaults()
func (*SpikeNoiseParams) PGe ¶
func (an *SpikeNoiseParams) PGe(ctx *Context, p *float32, ni, di uint32) float32
PGe updates the GeNoiseP probability by multiplying it by a uniform random number [0-1], and returns the Ge noise conductance if a spike is triggered
func (*SpikeNoiseParams) PGi ¶
func (an *SpikeNoiseParams) PGi(ctx *Context, p *float32, ni, di uint32) float32
PGi updates the GiNoiseP probability by multiplying it by a uniform random number [0-1], and returns the Gi noise conductance if a spike is triggered
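The probability-update scheme implied by the GeExpInt / GiExpInt thresholds can be sketched as follows; this is an illustration only, and the exact interval and reset handling are assumptions:

package main

import (
    "fmt"
    "math"
    "math/rand"
)

// poissonNoise accumulates a running product p of uniform random numbers each
// cycle, and triggers one spike's worth of conductance g whenever p falls
// below exp(-interval), where interval is the mean inter-spike interval in
// 1 msec cycles (1000 / Hz).
func poissonNoise(p *float64, hz, g float64) float64 {
    interval := 1000.0 / hz       // mean cycles between spikes
    expInt := math.Exp(-interval) // threshold, cf. GeExpInt / GiExpInt
    *p *= rand.Float64()
    if *p <= expInt {
        *p = 1 // reset for the next interval
        return g
    }
    return 0
}

func main() {
    p := 1.0
    n := 0
    for cyc := 0; cyc < 1000; cyc++ { // 1 second of 1 msec cycles
        if poissonNoise(&p, 100, 0.001) > 0 { // ~100 Hz excitatory noise
            n++
        }
    }
    fmt.Println("spikes in 1 sec:", n) // on the order of the requested 100 Hz
}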
func (*SpikeNoiseParams) ShouldDisplay ¶
func (an *SpikeNoiseParams) ShouldDisplay(field string) bool
func (*SpikeNoiseParams) Update ¶
func (an *SpikeNoiseParams) Update()
type SpikeParams ¶
type SpikeParams struct {
    // threshold value Theta (Q) for firing output activation (.5 is more accurate value based on AdEx biological parameters and normalization)
    Thr float32 `default:"0.5"`

    // post-spiking membrane potential to reset to, produces refractory effect if lower than VmInit -- 0.3 is appropriate biologically based value for AdEx (Brette & Gerstner, 2005) parameters. See also RTau
    VmR float32 `default:"0.3"`

    // post-spiking explicit refractory period, in cycles -- prevents Vm updating for this number of cycles post firing -- Vm is reduced in exponential steps over this period according to RTau, being fixed at Tr to VmR exactly
    Tr int32 `min:"1" default:"3"`

    // time constant for decaying Vm down to VmR -- at end of Tr it is set to VmR exactly -- this provides a more realistic shape of the post-spiking Vm which is only relevant for more realistic channels that key off of Vm -- does not otherwise affect standard computation
    RTau float32 `default:"1.6667"`

    // if true, turn on exponential excitatory current that drives Vm rapidly upward for spiking as it gets past its nominal firing threshold (Thr) -- nicely captures the Hodgkin Huxley dynamics of Na and K channels -- uses Brette & Gerstner 2005 AdEx formulation
    Exp slbool.Bool `default:"true"`

    // slope in Vm (2 mV = .02 in normalized units) for extra exponential excitatory current that drives Vm rapidly upward for spiking as it gets past its nominal firing threshold (Thr) -- nicely captures the Hodgkin Huxley dynamics of Na and K channels -- uses Brette & Gerstner 2005 AdEx formulation
    ExpSlope float32 `default:"0.02"`

    // membrane potential threshold for actually triggering a spike when using the exponential mechanism
    ExpThr float32 `default:"0.9"`

    // for translating spiking interval (rate) into rate-code activation equivalent, what is the maximum firing rate associated with a maximum activation value of 1
    MaxHz float32 `default:"180" min:"1"`

    // time constant for integrating the spiking interval in estimating spiking rate
    ISITau float32 `default:"5" min:"1"`

    // rate = 1 / tau
    ISIDt float32 `display:"-"`

    // rate = 1 / tau
    RDt float32 `display:"-"`

    // contains filtered or unexported fields
}
SpikeParams contains spiking activation function params. Implements a basic thresholded Vm model, and optionally the AdEx adaptive exponential function (adapt is KNaAdapt)
func (*SpikeParams) ActFromISI ¶
func (sk *SpikeParams) ActFromISI(isi, timeInc, integ float32) float32
ActFromISI computes rate-code activation from estimated spiking interval
func (*SpikeParams) ActToISI ¶
func (sk *SpikeParams) ActToISI(act, timeInc, integ float32) float32
ActToISI computes the spiking interval from a given rate-coded activation, based on time increment (.001 = 1msec default), Act.Dt.Integ
func (*SpikeParams) AvgFromISI ¶
func (sk *SpikeParams) AvgFromISI(avg float32, isi float32) float32
AvgFromISI returns updated spiking ISI from current isi interval value
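A sketch of the ISI-to-activation conversion, assuming the MaxHz normalization described in SpikeParams (illustrative; the package's ActFromISI may differ in detail):

package main

import "fmt"

// rateCodeFromISI converts an inter-spike interval (in cycles, 1 msec each)
// into a normalized rate-code activation, with MaxHz mapping to 1.
func rateCodeFromISI(isi, timeInc, integ, maxHz float32) float32 {
    if isi <= 0 {
        return 0
    }
    maxInt := 1.0 / (timeInc * integ * maxHz) // ISI (in cycles) corresponding to MaxHz
    act := maxInt / isi                       // normalized firing rate
    if act > 1 {
        act = 1
    }
    return act
}

func main() {
    // With timeInc = .001 sec per cycle, integ = 1, MaxHz = 180:
    // an ISI of 10 cycles is 100 Hz, giving an activation of ~0.56.
    fmt.Println(rateCodeFromISI(10, 0.001, 1, 180))
}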
func (*SpikeParams) Defaults ¶
func (sk *SpikeParams) Defaults()
func (*SpikeParams) ShouldDisplay ¶
func (sk *SpikeParams) ShouldDisplay(field string) bool
func (*SpikeParams) Update ¶
func (sk *SpikeParams) Update()
type StartN ¶
type StartN struct {
    // starting offset
    Start uint32

    // number of items
    N uint32

    // contains filtered or unexported fields
}
StartN holds a starting offset index and a number of items arranged from Start to Start+N (exclusive). This is not 16 byte padded and only for use on CPU side.
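A minimal sketch of how a StartN-style (Start, N) pair is typically consumed:

package sim

// forEachInBlock visits indexes from start up to but not including start+n,
// as in iterating the items described by a StartN value.
func forEachInBlock(start, n uint32, visit func(i uint32)) {
    for i := start; i < start+n; i++ {
        visit(i)
    }
}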
type SynComParams ¶
type SynComParams struct {
    // type of conductance (G) communicated by this pathway
    GType PathGTypes

    // additional synaptic delay in msec for inputs arriving at this pathway. Must be <= MaxDelay which is set during network building based on MaxDelay of any existing Path in the network. Delay = 0 means a spike reaches receivers in the next Cycle, which is the minimum time (1 msec). Biologically, subtract 1 from biological synaptic delay values to set corresponding Delay value.
    Delay uint32 `min:"0" default:"2"`

    // maximum value of Delay -- based on MaxDelay values when the BuildGBuf function was called when the network was built -- cannot set it longer than this, except by calling BuildGBuf on network after changing MaxDelay to a larger value in any pathway in the network.
    MaxDelay uint32 `edit:"-"`

    // probability of synaptic transmission failure -- if > 0, then weights are turned off at random as a function of PFail (times 1-SWt if PFailSWt)
    PFail float32

    // if true, then probability of failure is inversely proportional to SWt structural / slow weight value (i.e., multiply PFail * (1-SWt))
    PFailSWt slbool.Bool

    // delay length = actual length of the GBuf buffer per neuron = Delay+1 -- just for speed
    DelLen uint32 `display:"-"`

    // contains filtered or unexported fields
}
SynComParams are synaptic communication parameters, used in the Path parameters. Includes delay and probability of failure, and applies to inhibitory connections and to modulatory pathways that have multiplicative-like effects.
func (*SynComParams) Defaults ¶
func (sc *SynComParams) Defaults()
func (*SynComParams) Fail ¶
func (sc *SynComParams) Fail(ctx *Context, syni uint32, swt float32)
Fail updates failure status of given weight, given SWt value
func (*SynComParams) FloatFromGBuf ¶
func (sc *SynComParams) FloatFromGBuf(ival int32) float32
FloatFromGBuf converts the given int32 value produced via FloatToGBuf back into a float32 (divides by factor). If the value is negative, a panic is triggered indicating there was numerical overflow in the aggregation. If this occurs, the FloatToIntFactor needs to be decreased.
func (*SynComParams) FloatToGBuf ¶
func (sc *SynComParams) FloatToGBuf(val float32) int32
FloatToGBuf converts the given floating point value to a large int32 for accumulating in GBuf. Note: it is more efficient to bake the factor into the scale factor per path.
func (*SynComParams) FloatToIntFactor ¶
func (sc *SynComParams) FloatToIntFactor() float32
FloatToIntFactor returns the factor used for converting float32 to int32 in GBuf encoding. Because total G is constrained via scaling factors to be around ~1, it is safe to use a factor that uses most of the available bits, leaving enough room to prevent overflow when adding together the different vals. For encoding, bake this into scale factor in SendSpike, and cast the result to int32.
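A sketch of the fixed-point GBuf round trip described by FloatToGBuf / FloatFromGBuf; the scaling factor here is illustrative, whereas FloatToIntFactor chooses one that uses most of the int32 range while leaving headroom for summation:

package main

import "fmt"

const gbufFactor = float32(1 << 24) // illustrative fixed-point scale factor

func floatToGBuf(v float32) int32   { return int32(v * gbufFactor) }
func floatFromGBuf(i int32) float32 { return float32(i) / gbufFactor }

func main() {
    // accumulate three synaptic inputs as int32 and convert back
    var acc int32
    for _, g := range []float32{0.12, 0.03, 0.4} {
        acc += floatToGBuf(g)
    }
    fmt.Println(floatFromGBuf(acc)) // ~0.55
    // a negative accumulator would indicate int32 overflow -> decrease the factor
}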
func (*SynComParams) ReadIndex ¶
func (sc *SynComParams) ReadIndex(rnIndex, di uint32, cycTot int32, nRecvNeurs, maxData uint32) uint32
ReadIndex returns index for reading existing spikes from the GBuf buffer, based on the layer-based recv neuron index, data parallel idx, and the ReadOff offset from the CyclesTotal.
func (*SynComParams) ReadOff ¶
func (sc *SynComParams) ReadOff(cycTot int32) uint32
ReadOff returns offset for reading existing spikes from the GBuf buffer, based on Context CyclesTotal counter which increments each cycle. This is logically the zero position in the ring buffer.
func (*SynComParams) RingIndex ¶
func (sc *SynComParams) RingIndex(i uint32) uint32
RingIndex returns the wrap-around ring index for given raw index. For writing and reading spikes to GBuf buffer, based on Context.CyclesTotal counter.

    RN: 0 1 2 <- recv neuron indexes
    DI: 0 1 2 0 1 2 0 1 2 <- delay indexes
    C0: ^ v <- cycle 0, ring index: ^ = write, v = read
    C1: ^ v <- cycle 1, shift over by 1 -- overwrite last read
    C2: v ^ <- cycle 2: read out value stored on C0 -- index wraps around
func (*SynComParams) Update ¶
func (sc *SynComParams) Update()
func (*SynComParams) WriteIndex ¶
func (sc *SynComParams) WriteIndex(rnIndex, di uint32, cycTot int32, nRecvNeurs, maxData uint32) uint32
WriteIndex returns actual index for writing new spikes into the GBuf buffer, based on the layer-based recv neuron index, data parallel idx, and the WriteOff offset computed from the CyclesTotal.
func (*SynComParams) WriteIndexOff ¶
func (sc *SynComParams) WriteIndexOff(rnIndex, di, wrOff uint32, nRecvNeurs, maxData uint32) uint32
WriteIndexOff returns actual index for writing new spikes into the GBuf buffer, based on the layer-based recv neuron index and the given WriteOff offset.
func (*SynComParams) WriteOff ¶
func (sc *SynComParams) WriteOff(cycTot int32) uint32
WriteOff returns offset for writing new spikes into the GBuf buffer, based on Context CyclesTotal counter which increments each cycle. This is logically the last position in the ring buffer.
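A generic sketch of the delay-line ring buffer scheme behind these Read / Write index functions (illustrative only; the package's exact offsets include the +1 so that Delay = 0 still delivers on the next cycle):

package main

import "fmt"

// delayRing is a per-neuron spike buffer written delay slots ahead of where
// it is read, with both positions derived from the global cycle counter.
type delayRing struct {
    buf   []int32
    delay uint32
}

func (d *delayRing) ring(i uint32) uint32 { return i % uint32(len(d.buf)) }

func (d *delayRing) write(cycTot uint32, spike int32) {
    d.buf[d.ring(cycTot+d.delay)] += spike // cf. WriteOff: last position in ring
}

func (d *delayRing) read(cycTot uint32) int32 {
    i := d.ring(cycTot) // cf. ReadOff: logical zero position in ring
    v := d.buf[i]
    d.buf[i] = 0 // clear after reading so the slot can be rewritten
    return v
}

func main() {
    dr := &delayRing{buf: make([]int32, 3), delay: 2}
    for cyc := uint32(0); cyc < 6; cyc++ {
        if cyc == 0 {
            dr.write(cyc, 1) // spike sent on cycle 0
        }
        fmt.Println("cycle", cyc, "received:", dr.read(cyc)) // arrives 2 cycles later
    }
}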
func (*SynComParams) WtFail ¶
func (sc *SynComParams) WtFail(ctx *Context, swt float32) bool
WtFail returns true if the synapse should fail, optionally as a function of the SWt value
func (*SynComParams) WtFailP ¶
func (sc *SynComParams) WtFailP(swt float32) float32
WtFailP returns probability of weight (synapse) failure given current SWt value
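A sketch of the failure probability described above: PFail, scaled by (1-SWt) when PFailSWt is on, compared against a uniform random draw:

package sim

import "math/rand"

// wtFailP returns the failure probability for a synapse with the given SWt.
func wtFailP(pFail, swt float32, pFailSWt bool) float32 {
    if pFailSWt {
        return pFail * (1 - swt) // stronger (higher SWt) synapses fail less often
    }
    return pFail
}

// wtFail returns true if the synapse fails on this trial.
func wtFail(pFail, swt float32, pFailSWt bool) bool {
    return rand.Float32() < wtFailP(pFail, swt, pFailSWt)
}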
type SynapseCaStrides ¶
SynapseCaStrides encodes the stride offsets for synapse variable access into the network float32 array. Data is always the inner-most dimension.
func (*SynapseCaStrides) Index ¶
func (ns *SynapseCaStrides) Index(synIndex, di uint32, nvar SynapseCaVars) uint64
Index returns the index into network float32 array for given synapse, data, and variable
func (*SynapseCaStrides) SetSynapseOuter ¶
func (ns *SynapseCaStrides) SetSynapseOuter(ndata int)
SetSynapseOuter sets strides with synapses as outer loop: [Synapses][Vars][Data], which is optimal for CPU-based computation.
func (*SynapseCaStrides) SetVarOuter ¶
func (ns *SynapseCaStrides) SetVarOuter(nsyn, ndata int)
SetVarOuter sets strides with vars as outer loop: [Vars][Synapses][Data], which is optimal for GPU-based computation.
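An illustration of the two stride layouts, indexing into a single flat float32 array; the struct and index function here are simplified stand-ins for the package's stride types:

package main

import "fmt"

// caStrides mirrors a [Synapses][Vars][Data] style stride layout, where data
// is always the inner-most dimension (stride 1).
type caStrides struct {
    Synapse uint64
    Var     uint64
    Data    uint64
}

func (s *caStrides) index(syn, v, di uint64) uint64 {
    return syn*s.Synapse + v*s.Var + di*s.Data
}

func main() {
    nSyn, nVars, nData := uint64(1000), uint64(3), uint64(4)

    // synapse-outer (CPU friendly): [Synapses][Vars][Data]
    cpu := caStrides{Synapse: nVars * nData, Var: nData, Data: 1}

    // var-outer (GPU friendly): [Vars][Synapses][Data]
    gpu := caStrides{Var: nSyn * nData, Synapse: nData, Data: 1}

    fmt.Println(cpu.index(10, 2, 1), gpu.index(10, 2, 1))
}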
type SynapseCaVars ¶
type SynapseCaVars int32 //enums:enum
SynapseCaVars are synapse variables for calcium involved in learning, which are specific to each data-parallel input.
const (
    // Tr is trace of synaptic activity over time, which is used for
    // credit assignment in learning.
    // In MatrixPath this is a tag that is then updated later when US occurs.
    Tr SynapseCaVars = iota

    // DTr is delta (change in) Tr trace of synaptic activity over time
    DTr

    // DiDWt is delta weight for each data parallel index (Di).
    // This is directly computed from the Ca values (in cortical version)
    // and then aggregated into the overall DWt (which may be further
    // integrated across MPI nodes), which then drives changes in Wt values.
    DiDWt
)
const SynapseCaVarsN SynapseCaVars = 3
SynapseCaVarsN is the highest valid value for type SynapseCaVars, plus one.
func SynapseCaVarsValues ¶
func SynapseCaVarsValues() []SynapseCaVars
SynapseCaVarsValues returns all possible values for the type SynapseCaVars.
func (SynapseCaVars) Desc ¶
func (i SynapseCaVars) Desc() string
Desc returns the description of the SynapseCaVars value.
func (SynapseCaVars) Int64 ¶
func (i SynapseCaVars) Int64() int64
Int64 returns the SynapseCaVars value as an int64.
func (SynapseCaVars) MarshalText ¶
func (i SynapseCaVars) MarshalText() ([]byte, error)
MarshalText implements the encoding.TextMarshaler interface.
func (*SynapseCaVars) SetInt64 ¶
func (i *SynapseCaVars) SetInt64(in int64)
SetInt64 sets the SynapseCaVars value from an int64.
func (*SynapseCaVars) SetString ¶
func (i *SynapseCaVars) SetString(s string) error
SetString sets the SynapseCaVars value from its string representation, and returns an error if the string is invalid.
func (SynapseCaVars) String ¶
func (i SynapseCaVars) String() string
String returns the string representation of this SynapseCaVars value.
func (*SynapseCaVars) UnmarshalText ¶
func (i *SynapseCaVars) UnmarshalText(text []byte) error
UnmarshalText implements the encoding.TextUnmarshaler interface.
func (SynapseCaVars) Values ¶
func (i SynapseCaVars) Values() []enums.Enum
Values returns all possible values for the type SynapseCaVars.
type SynapseIndexStrides ¶
type SynapseIndexStrides struct {
    // synapse level
    Synapse uint32

    // index value level
    Idx uint32

    // contains filtered or unexported fields
}
SynapseIndexStrides encodes the stride offsets for synapse index access into network uint32 array.
func (*SynapseIndexStrides) Index ¶
func (ns *SynapseIndexStrides) Index(synIdx uint32, idx SynapseIndexes) uint32
Index returns the index into network uint32 array for given synapse, index value
func (*SynapseIndexStrides) SetIndexOuter ¶
func (ns *SynapseIndexStrides) SetIndexOuter(nsyn int)
SetIndexOuter sets strides with indexes as outer dimension: [Indexes][Synapses] (outer to inner), which is optimal for GPU-based computation.
func (*SynapseIndexStrides) SetSynapseOuter ¶
func (ns *SynapseIndexStrides) SetSynapseOuter()
SetSynapseOuter sets strides with synapses as outer dimension: [Synapses][Indexes] (outer to inner), which is optimal for CPU-based computation.
type SynapseIndexes ¶
type SynapseIndexes int32 //enums:enum
SynapseIndexes are the synapse-level indexes and other uint32 values (flags, etc). There is only one of these per synapse -- not data parallel.
const (
    // SynRecvIndex is receiving neuron index in network's global list of neurons
    SynRecvIndex SynapseIndexes = iota

    // SynSendIndex is sending neuron index in network's global list of neurons
    SynSendIndex

    // SynPathIndex is pathway index in global list of pathways organized as [Layers][RecvPaths]
    SynPathIndex
)
const SynapseIndexesN SynapseIndexes = 3
SynapseIndexesN is the highest valid value for type SynapseIndexes, plus one.
func SynapseIndexesValues ¶
func SynapseIndexesValues() []SynapseIndexes
SynapseIndexesValues returns all possible values for the type SynapseIndexes.
func (SynapseIndexes) Desc ¶
func (i SynapseIndexes) Desc() string
Desc returns the description of the SynapseIndexes value.
func (SynapseIndexes) Int64 ¶
func (i SynapseIndexes) Int64() int64
Int64 returns the SynapseIndexes value as an int64.
func (SynapseIndexes) MarshalText ¶
func (i SynapseIndexes) MarshalText() ([]byte, error)
MarshalText implements the encoding.TextMarshaler interface.
func (*SynapseIndexes) SetInt64 ¶
func (i *SynapseIndexes) SetInt64(in int64)
SetInt64 sets the SynapseIndexes value from an int64.
func (*SynapseIndexes) SetString ¶
func (i *SynapseIndexes) SetString(s string) error
SetString sets the SynapseIndexes value from its string representation, and returns an error if the string is invalid.
func (SynapseIndexes) String ¶
func (i SynapseIndexes) String() string
String returns the string representation of this SynapseIndexes value.
func (*SynapseIndexes) UnmarshalText ¶
func (i *SynapseIndexes) UnmarshalText(text []byte) error
UnmarshalText implements the encoding.TextUnmarshaler interface.
func (SynapseIndexes) Values ¶
func (i SynapseIndexes) Values() []enums.Enum
Values returns all possible values for the type SynapseIndexes.
type SynapseVarStrides ¶
type SynapseVarStrides struct {
    // synapse level
    Synapse uint32

    // variable level
    Var uint32

    // contains filtered or unexported fields
}
SynapseVarStrides encodes the stride offsets for synapse variable access into network float32 array.
func (*SynapseVarStrides) Index ¶
func (ns *SynapseVarStrides) Index(synIndex uint32, nvar SynapseVars) uint32
Index returns the index into network float32 array for given synapse, and variable
func (*SynapseVarStrides) SetSynapseOuter ¶
func (ns *SynapseVarStrides) SetSynapseOuter()
SetSynapseOuter sets strides with synapses as outer loop: [Synapses][Vars], which is optimal for CPU-based computation.
func (*SynapseVarStrides) SetVarOuter ¶
func (ns *SynapseVarStrides) SetVarOuter(nsyn int)
SetVarOuter sets strides with vars as outer loop: [Vars][Synapses], which is optimal for GPU-based computation.
type SynapseVars ¶
type SynapseVars int32 //enums:enum
SynapseVars are the synapse variables representing current synaptic state, specifically weights.
const (
    // Wt is effective synaptic weight value, determining how much conductance one spike drives on the receiving neuron, representing the actual number of effective AMPA receptors in the synapse. Wt = SWt * WtSig(LWt), where WtSig produces values between 0-2 based on LWt, centered on 1.
    Wt SynapseVars = iota

    // LWt is rapidly learning, linear weight value -- learns according to the lrate specified in the connection spec. Biologically, this represents the internal biochemical processes that drive the trafficking of AMPA receptors in the synaptic density. Initially all LWt are .5, which gives 1 from WtSig function.
    LWt

    // SWt is slowly adapting structural weight value, which acts as a multiplicative scaling factor on synaptic efficacy: biologically represents the physical size and efficacy of the dendritic spine. SWt values adapt in an outer loop along with synaptic scaling, with constraints to prevent runaway positive feedback loops and maintain variance and further capacity to learn. Initial variance is all in SWt, with LWt set to .5, and scaling absorbs some of LWt into SWt.
    SWt

    // DWt is delta (change in) synaptic weight, from learning -- updates LWt which then updates Wt.
    DWt

    // DSWt is change in SWt slow synaptic weight -- accumulates DWt
    DSWt
)
const SynapseVarsN SynapseVars = 5
SynapseVarsN is the highest valid value for type SynapseVars, plus one.
func SynapseVarsValues ¶
func SynapseVarsValues() []SynapseVars
SynapseVarsValues returns all possible values for the type SynapseVars.
func (SynapseVars) Desc ¶
func (i SynapseVars) Desc() string
Desc returns the description of the SynapseVars value.
func (SynapseVars) Int64 ¶
func (i SynapseVars) Int64() int64
Int64 returns the SynapseVars value as an int64.
func (SynapseVars) MarshalText ¶
func (i SynapseVars) MarshalText() ([]byte, error)
MarshalText implements the encoding.TextMarshaler interface.
func (*SynapseVars) SetInt64 ¶
func (i *SynapseVars) SetInt64(in int64)
SetInt64 sets the SynapseVars value from an int64.
func (*SynapseVars) SetString ¶
func (i *SynapseVars) SetString(s string) error
SetString sets the SynapseVars value from its string representation, and returns an error if the string is invalid.
func (SynapseVars) String ¶
func (i SynapseVars) String() string
String returns the string representation of this SynapseVars value.
func (*SynapseVars) UnmarshalText ¶
func (i *SynapseVars) UnmarshalText(text []byte) error
UnmarshalText implements the encoding.TextUnmarshaler interface.
func (SynapseVars) Values ¶
func (i SynapseVars) Values() []enums.Enum
Values returns all possible values for the type SynapseVars.
type TDDaParams ¶
type TDDaParams struct {
    // tonic baseline Ge level for DA = 0 -- +/- are between 0 and 2*TonicGe -- just for spiking display of computed DA value
    TonicGe float32

    // idx of TDIntegLayer to get reward prediction from -- set during Build from BuildConfig TDIntegLayName
    TDIntegLayIndex int32 `edit:"-"`

    // contains filtered or unexported fields
}
TDDaParams are params for dopamine (DA) signal as the temporal difference (TD) between the TDIntegLayer activations in the minus and plus phase.
func (*TDDaParams) Defaults ¶
func (tp *TDDaParams) Defaults()
func (*TDDaParams) GeFromDA ¶
func (tp *TDDaParams) GeFromDA(da float32) float32
GeFromDA returns excitatory conductance from DA dopamine value
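A plausible sketch of the DA-to-Ge mapping implied by the TonicGe description (DA = 0 gives TonicGe, and values range between 0 and 2*TonicGe); this is an assumption for illustration, not the package's implementation:

package sim

// geFromDA maps a dopamine value in roughly -1..1 onto an excitatory
// conductance centered on tonicGe, clamped at zero.
func geFromDA(tonicGe, da float32) float32 {
    ge := tonicGe * (1 + da)
    if ge < 0 {
        ge = 0
    }
    return ge
}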
func (*TDDaParams) Update ¶
func (tp *TDDaParams) Update()
type TDIntegParams ¶
type TDIntegParams struct {
    // discount factor -- how much to discount the future prediction from TDPred
    Discount float32

    // gain factor on TD rew pred activations
    PredGain float32

    // idx of TDPredLayer to get reward prediction from -- set during Build from BuildConfig TDPredLayName
    TDPredLayIndex int32 `edit:"-"`

    // contains filtered or unexported fields
}
TDIntegParams are params for reward integrator layer
func (*TDIntegParams) Defaults ¶
func (tp *TDIntegParams) Defaults()
func (*TDIntegParams) Update ¶
func (tp *TDIntegParams) Update()
type TopoInhibParams ¶
type TopoInhibParams struct {
    // use topographic inhibition
    On slbool.Bool

    // half-width of topographic inhibition within layer
    Width int32

    // normalized gaussian sigma as proportion of Width, for gaussian weighting
    Sigma float32

    // whether to wrap the topographic inhibition around the layer edges
    Wrap slbool.Bool

    // overall inhibition multiplier for topographic inhibition (generally <= 1)
    Gi float32

    // overall inhibitory contribution from feedforward inhibition -- multiplies average Ge from pools or Ge from neurons
    FF float32

    // overall inhibitory contribution from feedback inhibition -- multiplies average activation from pools or Act from neurons
    FB float32

    // feedforward zero point for Ge per neuron (summed Ge is compared to N * FF0) -- below this level, no FF inhibition is computed, above this it is FF * (Sum Ge - N * FF0)
    FF0 float32

    // weight value at width -- to assess the value of Sigma
    WidthWt float32 `edit:"-"`

    // contains filtered or unexported fields
}
TopoInhibParams provides for topographic gaussian inhibition integrating over neighborhood. TODO: not currently being used
func (*TopoInhibParams) Defaults ¶
func (ti *TopoInhibParams) Defaults()
func (*TopoInhibParams) GiFromGeAct ¶
func (ti *TopoInhibParams) GiFromGeAct(ge, act, ff0 float32) float32
func (*TopoInhibParams) ShouldDisplay ¶
func (ti *TopoInhibParams) ShouldDisplay(field string) bool
func (*TopoInhibParams) Update ¶
func (ti *TopoInhibParams) Update()
type TraceParams ¶
type TraceParams struct {
    // time constant for integrating trace over theta cycle timescales.
    // governs the decay rate of synaptic trace
    Tau float32 `default:"1,2,4"`

    // amount of the mean dWt to subtract, producing a zero-sum effect -- 1.0 = full zero-sum dWt -- only on non-zero DWts. typically set to 0 for standard trace learning pathways, although some require it for stability over the long haul. can use SetSubMean to set to 1 after significant early learning has occurred with 0. Some special path types (e.g., Hebb) benefit from SubMean = 1 always
    SubMean float32 `default:"0,1"`

    // threshold for learning, depending on different algorithms -- in Matrix and VSPatch it applies to normalized GeIntNorm value -- setting this relatively high encourages sparser representations
    LearnThr float32

    // rate = 1 / tau
    Dt float32 `display:"-" json:"-" xml:"-" edit:"-"`
}
TraceParams manages parameters associated with temporal trace learning
func (*TraceParams) Defaults ¶
func (tp *TraceParams) Defaults()
func (*TraceParams) TrFromCa ¶
func (tp *TraceParams) TrFromCa(tr float32, ca float32) float32
TrFromCa returns the updated trace factor as a function of a synaptic calcium update factor and the current trace
func (*TraceParams) Update ¶
func (tp *TraceParams) Update()
type TrgAvgActParams ¶
type TrgAvgActParams struct {
    // if this is > 0, then each neuron's GiBase is initialized as this proportion of TrgRange.Max - TrgAvg -- gives neurons differences in intrinsic inhibition / leak as a starting bias. This is independent of using the target values to scale synaptic weights.
    GiBaseInit float32

    // whether to use target average activity mechanism to rescale synaptic weights, so that activity tracks the target values
    RescaleOn slbool.Bool

    // learning rate for adjustments to Trg value based on unit-level error signal. Population TrgAvg values are renormalized to fixed overall average in TrgRange. Generally, deviating from the default doesn't make much difference.
    ErrLRate float32 `default:"0.02"`

    // rate parameter for how much to scale synaptic weights in proportion to the AvgDif between target and actual proportion activity -- this determines the effective strength of the constraint, and larger models may need more than the weaker default value.
    SynScaleRate float32 `default:"0.005,0.0002"`

    // amount of mean trg change to subtract -- 1 = full zero sum. 1 works best in general -- but in some cases it may be better to start with 0 and then increase using network SetSubMean method at a later point.
    SubMean float32 `default:"0,1"`

    // permute the order of TrgAvg values within layer -- otherwise they are just assigned in order from highest to lowest for easy visualization -- generally must be true if any topographic weights are being used
    Permute slbool.Bool `default:"true"`

    // use pool-level target values if pool-level inhibition and 4D pooled layers are present -- if pool sizes are relatively small, then may not be useful to distribute targets just within pool
    Pool slbool.Bool

    // range of target normalized average activations -- individual neurons are assigned values within this range to TrgAvg, and clamped within this range.
    TrgRange minmax.F32 `default:"{'Min':0.5,'Max':2}"`

    // contains filtered or unexported fields
}
TrgAvgActParams govern the target and actual long-term average activity in neurons. The target value is adapted by neuron-wise error and the difference between actual and target, and drives synaptic scaling at a slow timescale (Network.SlowInterval).
func (*TrgAvgActParams) Defaults ¶
func (ta *TrgAvgActParams) Defaults()
func (*TrgAvgActParams) ShouldDisplay ¶
func (ta *TrgAvgActParams) ShouldDisplay(field string) bool
func (*TrgAvgActParams) Update ¶
func (ta *TrgAvgActParams) Update()
type USParams ¶
type USParams struct {
    // gain factor applied to sum of weighted, drive-scaled positive USs
    // to compute PVpos primary summary value.
    // This is multiplied prior to 1/(1+x) normalization.
    // Use this to adjust the overall scaling of PVpos reward within 0-1
    // normalized range (see also PVnegGain).
    // Each USpos is assumed to be in 0-1 range, with a default of 1.
    PVposGain float32 `default:"2"`

    // gain factor applied to sum of weighted negative USs and Costs
    // to compute PVneg primary summary value.
    // This is multiplied prior to 1/(1+x) normalization.
    // Use this to adjust overall scaling of PVneg within 0-1
    // normalized range (see also PVposGain).
    PVnegGain float32 `default:"1"`

    // Negative US gain factor for encoding each individual negative US,
    // within their own separate input pools, multiplied prior to 1/(1+x)
    // normalization of each term for activating the USneg pools.
    // These gains are _not_ applied in computing summary PVneg value
    // (see PVnegWts), and generally must be larger than the weights to leverage
    // the dynamic range within each US pool.
    USnegGains []float32

    // Cost gain factor for encoding the individual Time, Effort etc costs
    // within their own separate input pools, multiplied prior to 1/(1+x)
    // normalization of each term for activating the Cost pools.
    // These gains are _not_ applied in computing summary PVneg value
    // (see CostWts), and generally must be larger than the weights to use
    // the full dynamic range within each US pool.
    CostGains []float32

    // weight factor applied to each separate positive US on the way to computing
    // the overall PVpos summary value, to control the weighting of each US
    // relative to the others. Each pos US is also multiplied by its dynamic
    // Drive factor as well.
    // Use PVposGain to control the overall scaling of the PVpos value.
    PVposWts []float32

    // weight factor applied to each separate negative US on the way to computing
    // the overall PVneg summary value, to control the weighting of each US
    // relative to the others, and to the Costs. These default to 1.
    PVnegWts []float32

    // weight factor applied to each separate Cost (Time, Effort, etc) on the
    // way to computing the overall PVneg summary value, to control the weighting
    // of each Cost relative to the others, and relative to the negative USs.
    // The first pool is Time, second is Effort, and these are typically weighted
    // lower (.02) than salient simulation-specific USs (1).
    PVcostWts []float32

    // computed estimated US values, based on OFCposPT and VSMatrix gating, in PVposEst
    USposEst []float32 `edit:"-"`
}
USParams control how positive and negative USs and Costs are weighted and integrated to compute an overall PV primary value.
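A sketch of the PVpos summary computation described above: each positive US is weighted by its PVposWts entry and its current drive, summed, scaled by PVposGain, and passed through a saturating normalization (assumed here to be the x/(1+x) form consistent with the "1/(1+x) normalization" wording); illustrative only:

package main

import "fmt"

// pvPos computes the weighted, drive-scaled sum of positive USs and its
// normalized 0-1 summary value.
func pvPos(usPos, drives, wts []float32, gain float32) (pvPosSum, pv float32) {
    for i, us := range usPos {
        pvPosSum += wts[i] * drives[i] * us
    }
    x := gain * pvPosSum
    pv = x / (1 + x) // == 1 - 1/(1+x): 0 with no USs, saturating toward 1
    return
}

func main() {
    us := []float32{1, 0}         // first US active at full magnitude
    drives := []float32{0.8, 0.5} // current drive strengths
    wts := []float32{1, 1}        // PVposWts
    fmt.Println(pvPos(us, drives, wts, 2)) // pvPosSum=0.8, PVpos ~0.615
}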
func (*USParams) CostToZero ¶
CostToZero sets all values of Cost, CostRaw to zero
func (*USParams) USnegCostFromRaw ¶
USnegCostFromRaw sets normalized NegUS, Cost values from Raw values
func (*USParams) USnegToZero ¶
USnegToZero sets all values of USneg, USnegRaw to zero
func (*USParams) USposToZero ¶
USposToZero sets all values of USpos to zero
type UrgencyParams ¶
type UrgencyParams struct {
    // value of raw urgency where the urgency activation level is 50%
    U50 float32

    // exponent on the urge factor -- valid numbers are 1,2,4,6
    Power int32 `default:"4"`

    // threshold for urge -- cuts off small baseline values
    Thr float32 `default:"0.2"`

    // gain factor for driving tonic DA levels as a function of urgency
    DAtonic float32 `default:"50"`
}
UrgencyParams has urgency (increasing pressure to do something) and parameters for updating it. Raw urgency integrates effort when _not_ goal engaged, while effort (negative US 0) integrates when a goal _is_ engaged.
func (*UrgencyParams) AddEffort ¶
func (ur *UrgencyParams) AddEffort(ctx *Context, di uint32, inc float32)
AddEffort adds an effort increment of urgency and updates the Urge factor
func (*UrgencyParams) Defaults ¶
func (ur *UrgencyParams) Defaults()
func (*UrgencyParams) Reset ¶
func (ur *UrgencyParams) Reset(ctx *Context, di uint32)
Reset resets the raw urgency back to zero -- at start of new gating event
func (*UrgencyParams) Update ¶
func (ur *UrgencyParams) Update()
func (*UrgencyParams) Urge ¶
func (ur *UrgencyParams) Urge(ctx *Context, di uint32) float32
Urge computes normalized Urge value from Raw, and sets DAtonic from that
func (*UrgencyParams) UrgeFun ¶
func (ur *UrgencyParams) UrgeFun(urgency float32) float32
UrgeFun is the urgency function: urgency / (urgency + 1) where urgency = (Raw / U50)^Power
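A worked sketch of the urgency function described above; the cutoff to zero below Thr is an assumption based on the Thr description:

package main

import (
    "fmt"
    "math"
)

// urge computes urgency / (urgency + 1) where urgency = (Raw / U50)^Power,
// with small values below thr cut off.
func urge(raw, u50 float32, power int32, thr float32) float32 {
    u := float32(math.Pow(float64(raw/u50), float64(power)))
    v := u / (u + 1)
    if v < thr {
        return 0
    }
    return v
}

func main() {
    fmt.Println(urge(10, 10, 4, 0.2)) // Raw = U50 -> 0.5
    fmt.Println(urge(5, 10, 4, 0.2))  // (0.5)^4 / (1 + (0.5)^4) ~= 0.059, below Thr -> 0
}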
type VTAParams ¶
type VTAParams struct {
    // gain on CeM activity difference (CeMPos - CeMNeg) for generating LV CS-driven dopamine values
    CeMGain float32 `default:"0.75"`

    // gain on computed LHb DA (Burst - Dip) -- for controlling DA levels
    LHbGain float32 `default:"1.25"`

    // threshold on ACh level required to generate LV CS-driven dopamine burst
    AChThr float32 `default:"0.5"`

    // contains filtered or unexported fields
}
VTAParams are for computing overall VTA DA based on LHb PVDA (primary value -- at US time, computed at start of each trial and stored in LHbPVDA global value) and Amygdala (CeM) CS / learned value (LV) activations, which update every cycle.
type ValenceTypes ¶
type ValenceTypes int32 //enums:enum
ValenceTypes are types of valence coding: positive or negative.
const (
    // Positive valence codes for outcomes aligned with drives / goals.
    Positive ValenceTypes = iota

    // Negative valence codes for harmful or aversive outcomes.
    Negative

    // Cost codes for continuous ongoing cost factors such as Time and Effort
    Cost
)
const ValenceTypesN ValenceTypes = 3
ValenceTypesN is the highest valid value for type ValenceTypes, plus one.
func ValenceTypesValues ¶
func ValenceTypesValues() []ValenceTypes
ValenceTypesValues returns all possible values for the type ValenceTypes.
func (ValenceTypes) Desc ¶
func (i ValenceTypes) Desc() string
Desc returns the description of the ValenceTypes value.
func (ValenceTypes) Int64 ¶
func (i ValenceTypes) Int64() int64
Int64 returns the ValenceTypes value as an int64.
func (ValenceTypes) MarshalText ¶
func (i ValenceTypes) MarshalText() ([]byte, error)
MarshalText implements the encoding.TextMarshaler interface.
func (*ValenceTypes) SetInt64 ¶
func (i *ValenceTypes) SetInt64(in int64)
SetInt64 sets the ValenceTypes value from an int64.
func (*ValenceTypes) SetString ¶
func (i *ValenceTypes) SetString(s string) error
SetString sets the ValenceTypes value from its string representation, and returns an error if the string is invalid.
func (ValenceTypes) String ¶
func (i ValenceTypes) String() string
String returns the string representation of this ValenceTypes value.
func (*ValenceTypes) UnmarshalText ¶
func (i *ValenceTypes) UnmarshalText(text []byte) error
UnmarshalText implements the encoding.TextUnmarshaler interface.
func (ValenceTypes) Values ¶
func (i ValenceTypes) Values() []enums.Enum
Values returns all possible values for the type ValenceTypes.
Source Files ¶
- act.go
- act_path.go
- avgmax.go
- axon.go
- blanovelprjn.go
- context.go
- deep_layers.go
- deep_net.go
- deep_paths.go
- doc.go
- enumgen.go
- globals.go
- gpu.go
- helpers.go
- hip_net.go
- hip_paths.go
- inhib.go
- layer.go
- layer_compute.go
- layerbase.go
- layerparams.go
- layertypes.go
- layervals.go
- learn.go
- logging.go
- looper.go
- network.go
- network_single.go
- networkbase.go
- neuromod.go
- neuron.go
- path.go
- path_compute.go
- pathbase.go
- pathparams.go
- pathtypes.go
- pcore_layers.go
- pcore_net.go
- pcore_paths.go
- pool.go
- rand.go
- rl_layers.go
- rl_net.go
- rl_paths.go
- rubicon.go
- rubicon_layers.go
- rubicon_net.go
- rubicon_paths.go
- synapse.go
- threads.go
- typegen.go