kinasex

package
v1.4.14
Published: Aug 18, 2022 License: BSD-3-Clause Imports: 3 Imported by: 0

README

KinasEx

This package contains experimental Kinase-based learning rules.

See https://github.com/emer/axon/tree/master/examples/kinaseq for exploration of the implemented equations, and https://github.com/ccnlab/kinase/tree/main/sims/kinase for biophysical basis of the equations.

In the initial nomenclature (early 2022), the space of algorithms was enumerated in kinase/rules.go as follows:

const (
	// SynSpkCont implements synaptic-level Ca signals at an abstract level,
	// purely driven by spikes, not NMDA channel Ca, as a product of
	// sender and recv CaSyn values that capture the decaying Ca trace
	// from spiking, qualitatively as in the NMDA dynamics.  These spike-driven
	// Ca signals are integrated in a cascaded manner via CaM,
	// then CaP (reflecting CaMKII) and finally CaD (reflecting DAPK1).
	// It uses continuous learning based on temporary DWt (TDWt) values
	// computed within the TWindow around spikes, which convert into DWt
	// after a pause in synaptic activity (no arbitrary ThetaCycle boundaries).
	// There is an option to compare with SynSpkTheta by only doing DWt updates
	// at the theta cycle level, in which case the key difference is the use of
	// TDWt, which can remove some variability associated with the arbitrary
	// timing of the end of trials.
	SynSpkCont Rules = iota

	// SynNMDACont is the same as SynSpkCont with NMDA-driven calcium signals
	// computed according to the very close approximation to the
	// Urakubo et al (2008) allosteric NMDA dynamics, then integrated at P vs. D
	// time scales.  This is the most biologically realistic yet computationally
	// tractable version of the Kinase learning algorithm.
	SynNMDACont

	// SynSpkTheta abstracts the SynSpkCont algorithm by only computing the
	// DWt change at the end of the ThetaCycle, instead of continuous updating.
	// This allows an optimized implementation that is roughly 1/3 slower than
	// the fastest NeurSpkTheta version, while still capturing much of the
	// learning dynamics by virtue of synaptic-level integration.
	SynSpkTheta

	// NeurSpkTheta uses neuron-level spike-driven calcium signals
	// integrated at P vs. D time scales -- this is the original
	// Leabra and Axon XCAL / CHL learning rule.
	// It exhibits strong sensitivity to final spikes and thus
	// high levels of variance.
	NeurSpkTheta
)

This package contains implementations of SynSpkCont and SynNMDACont.
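
As a quick orientation to the API documented below, here is a minimal sketch of selecting one of these rules via KinContParams (which lives on a ContPrjn's Cont field); the import paths are assumptions based on the emer/axon repository layout, and wiring a ContPrjn into a full network is omitted.

package main

import (
	"fmt"

	"github.com/emer/axon/kinase"  // assumed path for kinase.Rules
	"github.com/emer/axon/kinasex" // assumed path for this package
)

func main() {
	// Configure just the continuous-learning parameters; in a real model
	// these would be set on a ContPrjn before building the network.
	var kp kinasex.KinContParams
	kp.Defaults()
	kp.Rule = kinase.SynSpkCont // or kinase.SynNMDACont
	kp.Update()
	fmt.Println("selected rule:", kp.Rule)
}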

Documentation

Index

Constants

This section is empty.

Variables

var ContSynVars = []string{"TDWt", "CaDMax"}

Functions

This section is empty.

Types

type ContPrjn

type ContPrjn struct {
	axon.Prjn               // access as .Prjn
	Cont      KinContParams `view:"inline" desc:"kinase continuous learning rule params"`
	ContSyns  []ContSyn     `desc:"continuous synaptic state values, ordered by the sending layer units that own them -- one-to-one with the SConIdx array"`
}

ContPrjn is an axon Prjn that does not explicitly depend on the theta cycle timing dynamics for learning -- it implements SynSpkCont or SynNMDACont from the original Kinase rules. It implements synaptic-level Ca signals at an abstract level, purely driven by spikes rather than NMDA channel Ca, as a product of sender and recv CaSyn values that capture the decaying Ca trace from spiking, qualitatively as in the NMDA dynamics. These spike-driven Ca signals are integrated in a cascaded manner via CaM, then CaP (reflecting CaMKII) and finally CaD (reflecting DAPK1). It uses continuous learning based on temporary DWt (TDWt) values computed within the TWindow around spikes, which convert into DWt after a pause in synaptic activity (no arbitrary ThetaCycle boundaries). There is an option to compare with SynSpkTheta by only doing DWt updates at the theta cycle level, in which case the key difference is the use of TDWt, which can remove some variability associated with the arbitrary timing of the end of trials.
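
To make the cascaded integration concrete, the following is an illustrative sketch of the CaM, CaP, CaD cascade driven by a spike-product Ca signal; the exponential form and the time constants are assumptions for exposition, not the package's actual integration code.

package main

import "fmt"

// cascadeCa sketches the cascade described above: a synaptic Ca signal
// (product of sender and receiver CaSyn traces) drives CaM, which drives
// CaP (CaMKII-like LTP), which drives CaD (DAPK1-like LTD).
// The time constants (in cycles) are illustrative only.
func cascadeCa(sendCaSyn, recvCaSyn float32, caM, caP, caD *float32) {
	const mTau, pTau, dTau = 5, 40, 40
	ca := sendCaSyn * recvCaSyn // spike-driven synaptic Ca signal
	*caM += (ca - *caM) / mTau
	*caP += (*caM - *caP) / pTau
	*caD += (*caP - *caD) / dTau
}

func main() {
	var caM, caP, caD float32
	for i := 0; i < 200; i++ { // constant drive, just to show the lag structure
		cascadeCa(0.5, 0.5, &caM, &caP, &caD)
	}
	fmt.Printf("CaM=%.3f CaP=%.3f CaD=%.3f\n", caM, caP, caD)
}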

func (*ContPrjn) Build

func (pj *ContPrjn) Build() error

func (*ContPrjn) DWt

func (pj *ContPrjn) DWt(ltime *axon.Time)

DWt computes the weight change (learning) -- on sending projections

func (*ContPrjn) DWtCont

func (pj *ContPrjn) DWtCont(ltime *axon.Time)

DWtCont computes the weight change (learning) for the continuous learning variant SynSpkCont, which has already continuously computed DWt from TDWt. Applies post-trial decay to simulate time passage, and checks whether learning should occur.
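
A rough sketch of the post-trial decay step described here, under the assumption that it is a simple proportional decay of the synaptic Ca variables (the decay parameter name is hypothetical):

// postTrialDecay illustrates decaying the synaptic Ca variables to
// simulate time passing after a trial, before checking whether the
// accumulated TDWt should be converted into DWt.
// decay is a hypothetical parameter in [0, 1].
func postTrialDecay(decay float32, caM, caP, caD *float32) {
	*caM -= decay * *caM
	*caP -= decay * *caP
	*caD -= decay * *caD
}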

func (*ContPrjn) Defaults

func (pj *ContPrjn) Defaults()

func (*ContPrjn) InitContCa

func (pj *ContPrjn) InitContCa()

func (*ContPrjn) SendSynCa

func (pj *ContPrjn) SendSynCa(ltime *axon.Time)

SendSynCa does Kinase learning based on Ca driven from pre-post spiking, for the SynSpkCont and SynNMDACont learning variants. Updates Ca, CaM, CaP, CaD cascaded at longer time scales, with CaP representing CaMKII LTP activity and CaD representing DAPK1 LTD activity. Within the window of elevated synaptic Ca, CaP - CaD computes a temporary DWt (TDWt) reflecting the balance of CaMKII vs. DAPK1 binding at the NMDA N2B site. When the synaptic activity has fallen from a local peak (CaDMax) by a threshold amount (CaDMaxPct), the last TDWt value is converted into an actual synaptic change: DWt.
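
The peak-detection logic described here can be reconstructed roughly as follows; CaDMax and CaDMaxPct come from the docs above, but the control flow and the learning-rate handling are illustrative assumptions, not the package source.

// convertTDWt sketches the conversion described above: track the local
// peak of CaD in caDMax, and once CaD has fallen below caDMaxPct of that
// peak, commit the temporary TDWt into DWt and reset for the next window.
func convertTDWt(caD, caDMaxPct, lr float32, caDMax, tdwt, dwt *float32) {
	if caD > *caDMax {
		*caDMax = caD // activity still rising: update the local peak
		return
	}
	if caD < caDMaxPct * *caDMax {
		*dwt += lr * *tdwt // fallen enough below the peak: commit
		*tdwt = 0
		*caDMax = 0
	}
}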

func (*ContPrjn) UpdateParams

func (pj *ContPrjn) UpdateParams()

type ContSyn

type ContSyn struct {
	TDWt   float32 `` /* 273-byte string literal not displayed */
	CaDMax float32 `` /* 135-byte string literal not displayed */
}

ContSyn holds extra synaptic state for continuous learning

func (*ContSyn) VarByIndex

func (sy *ContSyn) VarByIndex(varIdx int) float32

VarByIndex returns synapse variable by index

func (*ContSyn) VarByName

func (sy *ContSyn) VarByName(varNm string) float32

VarByName returns synapse variable by name
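
For reference, a small sketch of reading the extra synapse state generically via ContSynVars and VarByName (the import path is an assumption):

package main

import (
	"fmt"

	"github.com/emer/axon/kinasex" // assumed path for this package
)

func main() {
	sy := kinasex.ContSyn{TDWt: 0.02, CaDMax: 0.7}
	for _, nm := range kinasex.ContSynVars { // "TDWt", "CaDMax"
		fmt.Printf("%s = %g\n", nm, sy.VarByName(nm))
	}
}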

type KinContParams

type KinContParams struct {
	Rule    kinase.Rules `` /* 129-byte string literal not displayed */
	NMDAG   float32      `` /* 230-byte string literal not displayed */
	TWindow int          `` /* 246-byte string literal not displayed */
	DMaxPct float32      `` /* 272-byte string literal not displayed */
	DScale  float32      `` /* 186-byte string literal not displayed */
}

KinContParams has parameters controlling Kinase-based learning rules

func (*KinContParams) DWt

func (kp *KinContParams) DWt(caM, caP, caD float32, tdwt *float32) bool

DWt computes the temporary weight change (TDWt) from the CaP and CaD values, as a simple subtraction with DScale applied to CaD, only when the CaM level is above the threshold. Returns true if updated.
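
Written out, the computation described here looks roughly like the following; caMThr stands in for the elided threshold parameter and is a hypothetical name:

// tdwtSketch illustrates the DWt method's documented behavior: when CaM
// exceeds a threshold, the temporary weight change is the simple
// subtraction CaP - DScale*CaD; otherwise nothing is updated.
func tdwtSketch(caM, caP, caD, dScale, caMThr float32, tdwt *float32) bool {
	if caM < caMThr {
		return false // too little synaptic activity to update TDWt
	}
	*tdwt = caP - dScale*caD
	return true
}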

func (*KinContParams) DWtFmTDWt

func (kp *KinContParams) DWtFmTDWt(sy *Synapse, lr float32) bool

DWtFmTDWt updates the DWt from the TDWt, checking the learning threshold, using the given aggregate learning rate. Returns true if DWt was updated.

func (*KinContParams) Defaults

func (kp *KinContParams) Defaults()

func (*KinContParams) Update

func (kp *KinContParams) Update()
