pvlv

v1.5.4
Published: Oct 5, 2022 License: BSD-3-Clause Imports: 8 Imported by: 0

README

PVLV: Primary Value, Learned Value


This is a ground-up rewrite of PVLV (Mollick et al., 2020) for axon, designed to capture the essential properties of the Go leabra version in a simpler and cleaner way. Each layer type is implemented in a more self-contained manner using the axon trace-based learning rule, which has better natural affordances for DA modulation. Thus, these layer types can be used in a mix-and-match manner (e.g., the BLA can be used to train OFC, independently of its phasic dopamine role).

This is an incremental work-in-progress, documented as it goes along.

BLA

There are 2x2 BLA types: Positive or Negative valence USs, with Acquisition vs. Extinction:

  • BLAPosD1 = Pos / Acq
  • BLAPosD2 = Pos / Ext
  • BLANegD2 = Neg / Acq
  • BLANegD1 = Neg / Ext

The D1 / D2 flips the sign of the influence of DA on the plus-phase activation of the BLA neuron (D1 = excitatory, D2 = inhibitory).
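The D1 / D2 sign flip can be sketched as follows. This is a minimal illustration of the rule stated above, not the package's actual API; the type and function names here are hypothetical stand-ins for `DARs` and the layer-internal modulation code.

```go
package main

import "fmt"

// DAR is a hypothetical stand-in for the package's DARs receptor type.
type DAR int

const (
	D1R DAR = iota // primarily D1 receptors: excitatory from DA bursts
	D2R            // primarily D2 receptors: inhibitory from DA dips
)

// daSign returns the effective sign of dopamine's influence on plus-phase
// activation: D1 passes DA through unchanged (bursts excite), while D2
// flips it (bursts inhibit, dips excite).
func daSign(r DAR, da float32) float32 {
	if r == D2R {
		return -da
	}
	return da
}

func main() {
	fmt.Println(daSign(D1R, 0.5)) // a burst excites a D1 neuron
	fmt.Println(daSign(D2R, 0.5)) // the same burst inhibits a D2 neuron
}
```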

A major simplification and improvement in the axon version is that the extinction neurons receive input from the OFC neurons that are activated by the corresponding acquisition neurons. This solves the "learn from disappointment" problem in a much better way: when we are OFC-expecting a given US, and we give up on that expectation and absorb the resulting negative DA, the corresponding BLA extinction neurons get punished.

The (new) learning rule based on the axon trace code is:

  • DWt = lr * Tr_prv * (1 + abs(DA)) * (CaP - CaD)
    • CaP: plus phase Ca -- has DA modulation reflected in a D1 / D2 direction
    • CaD: minus phase Ca, prior to DA -- in the original model this was t-1; axon uses longer theta cycles, and the plus phase is when DA is delivered by default
    • Tr_prv: the S * R trace from the previous time step (i.e., not yet reflecting the new Tr update -- as in CTPrjn)
    • DA also drives the learning rate, via RLRate, which also does the small-delta filtering included in the prior implementation
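The core of the rule above can be written out directly. This is a sketch of the equation only (the actual implementation is in BLAPrjn.DWt, and the function and variable names here are illustrative):

```go
package main

import (
	"fmt"
	"math"
)

// blaDWt sketches the BLA trace learning rule:
// DWt = lr * Tr_prv * (1 + |DA|) * (CaP - CaD)
// trPrv is the S*R trace from the previous time step; caP and caD are the
// plus- and minus-phase Ca values; |da| amplifies the effective lrate.
func blaDWt(lr, trPrv, da, caP, caD float32) float32 {
	return lr * trPrv * (1 + float32(math.Abs(float64(da)))) * (caP - caD)
}

func main() {
	// With a positive prior trace and CaP > CaD (plus-phase increase),
	// a larger |DA| produces a larger weight change.
	fmt.Println(blaDWt(0.02, 1.0, 0.5, 0.8, 0.6))
	fmt.Println(blaDWt(0.02, 1.0, 0.0, 0.8, 0.6))
}
```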

References

  • Mollick, J. A., Hazy, T. E., Krueger, K. A., Nair, A., Mackie, P., Herd, S. A., & O'Reilly, R. C. (2020). A systems-neuroscience model of phasic dopamine. Psychological Review, 127(6), 972–1021. https://doi.org/10.1037/rev0000199

Documentation

Index

Constants

This section is empty.

Variables

View Source
var KiT_BLALayer = kit.Types.AddType(&BLALayer{}, axon.LayerProps)
View Source
var KiT_BLAPrjn = kit.Types.AddType(&BLAPrjn{}, axon.PrjnProps)
View Source
var KiT_DARs = kit.Enums.AddEnum(DARsN, kit.NotBitFlag, nil)

Functions

func AddBLALayers added in v1.5.2

func AddBLALayers(nt *axon.Network, prefix string, pos bool, nUs, unY, unX int, rel relpos.Relations, space float32) (acq, ext axon.AxonLayer)

AddBLALayers adds two BLA layers, acquisition and extinction (with D1 / D2 assigned according to valence), for positive or negative valence USs.

Types

type BLALayer added in v1.5.2

type BLALayer struct {
	rl.Layer
	DaMod DaModParams `view:"inline" desc:"dopamine modulation parameters"`
}

BLALayer represents a basolateral amygdala layer

func (*BLALayer) Defaults added in v1.5.2

func (ly *BLALayer) Defaults()

func (*BLALayer) GFmInc added in v1.5.2

func (ly *BLALayer) GFmInc(ltime *axon.Time)

func (*BLALayer) PlusPhase added in v1.5.2

func (ly *BLALayer) PlusPhase(ltime *axon.Time)

type BLAPrjn added in v1.5.2

type BLAPrjn struct {
	axon.Prjn
}

BLAPrjn implements standard trace learning using phase differences modulated by DA and by the US, using the prior-time sending activation to capture the temporal asymmetry in sending activity.
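The key point, that learning uses the trace from the previous time step and only then updates it, can be sketched like this. The type and field names are illustrative (the real bookkeeping is inside BLAPrjn.DWt and the axon synapse state):

```go
package main

import "fmt"

// synapse sketches the trace bookkeeping described above: the weight change
// uses the trace from the *previous* theta cycle (Tr_prv), and only
// afterwards is the trace updated from the prior sending activation times
// the current receiving activation.
type synapse struct {
	Tr float32 // trace: prior sending act * receiving act
}

// step consumes the previous trace for learning, then updates the trace.
// sPrv is the prior-time sending activation, r the receiving activation,
// and err the DA/phase-difference error term.
func (sy *synapse) step(sPrv, r, err float32) (dwt float32) {
	dwt = sy.Tr * err // learn from the previous-time trace
	sy.Tr = sPrv * r  // then update the trace for next time
	return dwt
}

func main() {
	sy := &synapse{}
	fmt.Println(sy.step(1, 1, 0.5)) // first step: no prior trace, so dwt = 0
	fmt.Println(sy.step(1, 1, 0.5)) // second step: uses the trace from step 1
}
```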

func (*BLAPrjn) DWt added in v1.5.2

func (pj *BLAPrjn) DWt(ltime *axon.Time)

DWt computes the weight change (learning) -- on sending projections.

func (*BLAPrjn) Defaults added in v1.5.2

func (pj *BLAPrjn) Defaults()

func (*BLAPrjn) WtFmDWt added in v1.5.2

func (pj *BLAPrjn) WtFmDWt(ltime *axon.Time)

WtFmDWt updates the synaptic weight values from delta-weight changes -- on sending projections

type DARs added in v1.5.2

type DARs int

Dopamine receptor type, for D1R and D2R dopamine receptors

const (
	// D1R: primarily expresses Dopamine D1 Receptors, which are excitatory from DA bursts
	D1R DARs = iota

	// D2R: primarily expresses Dopamine D2 Receptors, which are inhibitory from DA dips
	D2R

	DARsN
)

func (*DARs) FromString added in v1.5.2

func (i *DARs) FromString(s string) error

func (DARs) String added in v1.5.2

func (i DARs) String() string

type DaModParams

type DaModParams struct {
	On        bool    `desc:"whether to use dopamine modulation"`
	DAR       DARs    `desc:"dopamine receptor type, D1 or D2"`
	BurstGain float32 `` /* 173-byte string literal not displayed */
	DipGain   float32 `` /* 233-byte string literal not displayed */
}

DaModParams specifies parameters shared by all layers that receive dopaminergic modulatory input.

func (*DaModParams) Defaults added in v1.5.2

func (dp *DaModParams) Defaults()

func (*DaModParams) Gain added in v1.5.2

func (dp *DaModParams) Gain(da float32) float32

Gain returns the effective DA gain factor given the raw da value (+ for bursts, - for dips).
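One plausible reading of Gain, given the BurstGain / DipGain fields above, is sketched below. This is an assumption about the implementation, not the package's verified source; the function name and signature here are standalone stand-ins:

```go
package main

import "fmt"

// daGain sketches the presumed behavior of DaModParams.Gain: positive DA
// (bursts) is scaled by BurstGain, and negative DA (dips) by DipGain,
// allowing bursts and dips to be weighted asymmetrically.
func daGain(da, burstGain, dipGain float32) float32 {
	if da > 0 {
		return da * burstGain
	}
	return da * dipGain
}

func main() {
	fmt.Println(daGain(0.5, 1, 0.1))  // burst scaled by BurstGain
	fmt.Println(daGain(-0.5, 1, 0.1)) // dip scaled down by DipGain
}
```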
