Documentation ¶
Index ¶
Constants ¶
This section is empty.
Variables ¶
var ContSynVars = []string{"TDWt", "CaDMax"}
Functions ¶
This section is empty.
Types ¶
type ContPrjn ¶
type ContPrjn struct {
	axon.Prjn // access as .Prjn

	Cont     KinContParams `view:"inline" desc:"kinase continuous learning rule params"`
	ContSyns []ContSyn     `desc:"continuous synaptic state values, ordered by the sending layer units which owns them -- one-to-one with SConIdx array"`
}
ContPrjn is an axon Prjn that does not explicitly depend on theta cycle timing dynamics for learning -- it implements the SynSpkCont or SynNMDACont variants of the original Kinase rules.

It implements synaptic-level Ca signals at an abstract level, driven purely by spikes rather than NMDA channel Ca, as a product of sender and recv CaSyn values that capture the decaying Ca trace from spiking, qualitatively as in the NMDA dynamics. These spike-driven Ca signals are integrated in a cascaded manner via CaM, then CaP (reflecting CaMKII), and finally CaD (reflecting DAPK1).

Learning is continuous, based on temporary DWt (TDWt) values computed within the TWindow around spikes, which convert into DWt after a pause in synaptic activity, with no arbitrary ThetaCycle boundaries.

There is an option to compare with SynSpkTheta by only doing DWt updates at the theta cycle level, in which case the key difference is the use of TDWt, which can remove some of the variability associated with the arbitrary timing of the end of trials.
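The cascaded CaM -> CaP -> CaD integration described above can be sketched as a chain of simple exponential integrators driven by a spike-derived Ca signal. The rate constants and variable names here are illustrative assumptions, not the package's actual defaults:

```go
package main

import "fmt"

// Illustrative integration rates (assumptions, not the package defaults).
const (
	mDt = 1.0 / 5.0  // CaM integration rate (fast)
	pDt = 1.0 / 40.0 // CaP (CaMKII) integration rate (slower)
	dDt = 1.0 / 40.0 // CaD (DAPK1) integration rate (slowest in effect, as it lags CaP)
)

// caCascade integrates a spike-driven synaptic Ca value through the
// CaM -> CaP -> CaD cascade: each stage exponentially approaches the
// stage before it.
func caCascade(ca float32, caM, caP, caD *float32) {
	*caM += mDt * (ca - *caM)
	*caP += pDt * (*caM - *caP)
	*caD += dDt * (*caP - *caD)
}

func main() {
	var caM, caP, caD float32
	// synaptic Ca as a product of sender and receiver CaSyn traces
	sCa, rCa := float32(0.8), float32(0.6)
	for t := 0; t < 100; t++ {
		caCascade(sCa*rCa, &caM, &caP, &caD)
	}
	fmt.Printf("CaM=%.3f CaP=%.3f CaD=%.3f\n", caM, caP, caD)
}
```

Because each stage lags the previous one, CaP (LTP-associated) transiently exceeds CaD (LTD-associated) while activity is rising, which is what makes the CaP - CaD difference a useful learning signal.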
func (*ContPrjn) DWt ¶
func (pj *ContPrjn) DWt(ltime *Time)
DWt computes the weight change (learning) -- on sending projections
func (*ContPrjn) DWtCont ¶
DWtCont computes the weight change (learning) for the continuous learning variant SynSpkCont, which has already continuously computed DWt from TDWt. It applies post-trial decay to simulate the passage of time, and checks whether learning should occur.
func (*ContPrjn) InitContCa ¶
func (pj *ContPrjn) InitContCa()
func (*ContPrjn) SendSynCa ¶
func (pj *ContPrjn) SendSynCa(ltime *Time)
SendSynCa does Kinase learning based on Ca driven from pre-post spiking, for the SynSpkCont and SynNMDACont learning variants. It updates Ca, CaM, CaP, and CaD, cascaded at progressively longer time scales, with CaP representing CaMKII LTP activity and CaD representing DAPK1 LTD activity. Within the window of elevated synaptic Ca, CaP - CaD computes a temporary DWt (TDWt) reflecting the balance of CaMKII vs. DAPK1 binding at the NMDA N2B site. When the synaptic activity has fallen from a local peak (CaDMax) by a threshold amount (CaDMaxPct), the last TDWt value converts to an actual synaptic change: DWt.
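The peak-detection step at the end of this description can be sketched as follows. The function name and exact comparison are assumptions drawn from the prose, not the actual package implementation:

```go
package main

import "fmt"

// dwtFromTDWt is an illustrative sketch: once synaptic activity (caD)
// has fallen from its running local peak (caDMax) by the caDMaxPct
// fraction, the pending temporary weight change (tdwt) is converted
// into an actual weight change, accumulated into dwt.
func dwtFromTDWt(caD, caDMax, caDMaxPct, tdwt float32, dwt *float32) bool {
	if caDMax > 0 && caD < caDMax*(1-caDMaxPct) {
		*dwt += tdwt
		return true
	}
	return false
}

func main() {
	var dwt float32
	// still near the peak: no conversion yet
	fmt.Println(dwtFromTDWt(0.48, 0.5, 0.1, 0.2, &dwt), dwt)
	// fallen more than 10% below the peak: TDWt converts to DWt
	fmt.Println(dwtFromTDWt(0.40, 0.5, 0.1, 0.2, &dwt), dwt)
}
```

The design point is that learning is triggered by the synapse's own activity trace falling off its peak, rather than by an externally imposed trial boundary.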
func (*ContPrjn) UpdateParams ¶
func (pj *ContPrjn) UpdateParams()
type ContSyn ¶
type ContSyn struct {
	TDWt   float32 `` /* 273-byte string literal not displayed */
	CaDMax float32 `` /* 135-byte string literal not displayed */
}
ContSyn holds extra synaptic state for continuous learning
func (*ContSyn) VarByIndex ¶
VarByIndex returns synapse variable by index
type KinContParams ¶
type KinContParams struct {
	Rule    kinase.Rules `` /* 129-byte string literal not displayed */
	NMDAG   float32      `` /* 230-byte string literal not displayed */
	TWindow int          `` /* 246-byte string literal not displayed */
	DMaxPct float32      `` /* 272-byte string literal not displayed */
	DScale  float32      `` /* 186-byte string literal not displayed */
}
KinContParams has parameters controlling Kinase-based learning rules
func (*KinContParams) DWt ¶
func (kp *KinContParams) DWt(caM, caP, caD float32, tdwt *float32) bool
DWt computes the temporary weight change (TDWt) from the CaP and CaD values, as the simple subtraction CaP - CaD, applying DScale to CaD, and only when the CaM level is above the threshold. Returns true if TDWt was updated.
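A minimal sketch of this rule, with the caMThr threshold name and parameter values as assumptions (the actual method operates through the KinContParams fields):

```go
package main

import "fmt"

// tdwtUpdate sketches the rule described above: TDWt = CaP - DScale*CaD,
// computed only while CaM is above a threshold indicating that the
// synapse is within an active Ca window. Returns true if TDWt was updated.
func tdwtUpdate(caM, caP, caD, dScale, caMThr float32, tdwt *float32) bool {
	if caM < caMThr {
		return false
	}
	*tdwt = caP - dScale*caD
	return true
}

func main() {
	var tdwt float32
	// example values: CaP exceeds scaled CaD, so TDWt is LTP-direction
	if tdwtUpdate(0.6, 0.5, 0.4, 0.93, 0.2, &tdwt) {
		fmt.Printf("TDWt = %.3f\n", tdwt)
	}
}
```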
func (*KinContParams) DWtFmTDWt ¶
func (kp *KinContParams) DWtFmTDWt(sy *Synapse, lr float32) bool
DWtFmTDWt updates the DWt from the TDWt, checking the learning threshold, using the given aggregate learning rate. Returns true if DWt was updated.
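The thresholded application of TDWt can be sketched as below. The dwtThr magnitude threshold and the reset of TDWt after application are assumptions about the mechanism, not confirmed details of the package:

```go
package main

import "fmt"

// dwtFmTDWt applies a pending temporary weight change (tdwt) to the
// actual weight change (dwt) under an aggregate learning rate lr,
// but only if its magnitude exceeds the learning threshold dwtThr.
// The pending tdwt is cleared once applied.
func dwtFmTDWt(lr, dwtThr float32, tdwt, dwt *float32) bool {
	if *tdwt > -dwtThr && *tdwt < dwtThr {
		return false // below threshold: no learning
	}
	*dwt += lr * *tdwt
	*tdwt = 0
	return true
}

func main() {
	tdwt, dwt := float32(0.1), float32(0)
	if dwtFmTDWt(0.02, 0.05, &tdwt, &dwt) {
		fmt.Printf("DWt = %.4f, TDWt reset to %.1f\n", dwt, tdwt)
	}
}
```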
func (*KinContParams) Defaults ¶
func (kp *KinContParams) Defaults()
func (*KinContParams) Update ¶
func (kp *KinContParams) Update()