sims

package module
v1.3.1 Latest
Published: Sep 3, 2021 License: BSD-3-Clause Imports: 0 Imported by: 0

README

Computational Cognitive Neuroscience Simulations

This repository contains the neural network simulation models for the CCN Textbook, managed on a GitHub Repository.

These models are implemented in the new Go (golang) version of emergent, with Python versions available as well. This repository contains the full source code; you can build and run the models by cloning the repository and building / running the individual projects, as described step-by-step in the Build From Source section below.

The simplest, recommended way to run the simulations is by downloading a zip (or tar.gz for linux) file of all of the built models for your platform. These are fully self-contained executable files and should "just work" on each platform.
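
For example, on Linux this might look like the following (the archive and directory names here are placeholders -- use the actual file from the releases page for your platform and version):

$ tar -xzf ccn_sims_linux.tar.gz   # or unzip the .zip file on mac / windows
$ cd ccn_sims                      # cd into whatever directory the archive unpacks to
$ ./neuron &                       # run any of the extracted executables directly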

Alternatively, you can use the Python version -- see the Python section below for how to install it (only recommended for Mac or Linux platforms).

Usage

Each simulation has a README button, which directs your browser to open the corresponding README.md file on github. This contains full step-by-step instructions for running the model, and questions to answer for classroom usage of the models. See your syllabus etc for more info.

Use standard Ctrl+ and Ctrl- key sequences to zoom the display to desired scale, and the GoGi preferences menu has an option to save the zoom (and various other options).

The main actions for running are in the Toolbar at the top, while the parameters of most relevance to the model are in the Control panel on the left. Different output displays are selectable in the Tabbed views on the right of the window.

The Go Emergent Wiki contains various help pages for using things like the NetView that displays the network.

You can always access more detailed parameters by clicking on the button to the right of Net in the control panel (also by clicking on the layer names in the NetView); custom params for this model are set in the Params field.

Mac notes

If double-clicking on the program doesn't work (error message about unsigned application -- google "mac unsigned application" for more information), you may have to do a "right mouse click" (e.g., Ctrl + click) to open the executables in the .zip version -- it may be easier to just open the Terminal app, cd to the directory, and run the files from the command line directly.
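
For example (the directory name is hypothetical -- cd to wherever you unpacked the zip file):

$ cd ~/Downloads/ccn_sims
$ ./neuron &   # run the model directly from the command line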

Status

  • 9/3/2021: Version 1.3.1 release: bug fixes, deep leabra version of sg, Python works on Windows.

  • 11/23/2020: Version 1.2.2 release: full set of Python versions and the pvlv model.

  • See https://github.com/CompCogNeuro/sims/releases for full history

List of Sims and Exercise Questions

Here's a full list of all the simulations and the textbook exercise questions associated with them:

Chapter 2: Neuron

  • neuron: Integration, spiking and rate code activation. (Questions 2.1 -- 2.7)

  • detector: The neuron as a detector -- demonstrates the critical function of synaptic weights in determining what a neuron detects. (Questions 2.8 -- 2.10)

Chapter 3: Networks

  • face_categ: Face categorization, including bottom-up and top-down processing (used for multiple explorations in Networks chapter) (Questions 3.1 -- 3.3)

  • cats_dogs: Constraint satisfaction in the Cats and Dogs model. (Question 3.4)

  • necker_cube: Constraint satisfaction and the role of noise and accommodation in the Necker Cube model. (Question 3.5)

  • inhib: Inhibitory interactions via inhibitory interneurons, and FFFB approximation. (Questions 3.6 -- 3.8)

Chapter 4: Learning

  • self_org: Self-organizing learning using the BCM-like dynamics of XCAL (Questions 4.1 -- 4.2).

  • pat_assoc: Basic two-layer network learning simple input/output mapping tasks (pattern associator) with Hebbian and Error-driven mechanisms (Questions 4.3 -- 4.6).

  • err_driven_hidden: Full error-driven learning with a hidden layer, which can solve any input-output mapping (Question 4.7).

  • family_trees: Learning in a deep (multi-hidden-layer) network, showing advantages of combination of self-organizing and error-driven learning (Questions 4.8 -- 4.9).

  • hebberr_combo: Hebbian learning in combination with error-driven learning facilitates generalization (Questions 4.10 -- 4.12).

Note: no sims for chapter 5

Chapter 6: Perception and Attention

  • v1rf: V1 receptive fields from Hebbian learning, with lateral topography. (Questions 6.1 -- 6.2)

  • objrec: Invariant object recognition over hierarchical transforms. (Questions 6.3 -- 6.5)

  • attn: Spatial attention interacting with object recognition pathway, in a small-scale model. (Questions 6.6 -- 6.11)

Chapter 7: Motor Control and Reinforcement Learning

  • bg: Action selection / gating and reinforcement learning in the basal ganglia. (Questions 7.1 -- 7.4)

  • rl_cond: Pavlovian Conditioning using Temporal Differences Reinforcement Learning. (Questions 7.5 -- 7.6)

  • pvlv: Pavlovian Conditioning with the PVLV model (Questions 7.7 -- 7.9)

  • cereb: Cerebellum role in motor learning, learning from errors. (Questions 7.10 -- 7.11) NOT YET AVAIL!

Chapter 8: Learning and Memory

  • abac: Paired associate AB-AC learning and catastrophic interference. (Questions 8.1 -- 8.3)

  • hip: Hippocampus model and overcoming interference. (Questions 8.4 -- 8.6)

  • priming: Weight and Activation-based priming. (Questions 8.7 -- 8.8)

Chapter 9: Language

  • dyslex: Normal and disordered reading and the distributed lexicon. (Questions 9.1 -- 9.6)

  • ss: Orthography to Phonology mapping and regularity, frequency effects. (Questions 9.7 -- 9.8)

  • sem: Semantic Representations from World Co-occurrences and Hebbian Learning. (Questions 9.9 -- 9.11)

  • sg: The Sentence Gestalt model. (Question 9.12)

Chapter 10: Executive Function

  • stroop: The Stroop effect and PFC top-down biasing (Questions 10.1 -- 10.3)

  • a_not_b: Development of PFC active maintenance and the A-not-B task (Questions 10.4 -- 10.6)

  • sir: Store/Ignore/Recall Task - Updating and Maintenance in more complex PFC model (Questions 10.7 -- 10.8)

Python

Running the sims under Python uses a compiled version of the underlying Go-based simulation infrastructure (i.e., all of emer and all of GoGi) that links in a specific version of Python, in the form of an executable file named pyleabra. The pyleabra executable is just like a python3 executable in all other respects.
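
Since pyleabra behaves like python3, you can use it in the same ways (a minimal sketch; myscript.py is a hypothetical file):

$ pyleabra              # starts an interactive interpreter -- the startup message shows the python version
$ pyleabra myscript.py  # runs an ordinary python script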

Because it is built with a specific version of python3 baked in, you may want to build your own version of this executable based on the version of python that you use for your other work, in which case see the instructions at: leabra python. Also, there can be various library path issues for finding the python library that the executable is linked against -- the install process attempts to ensure that your machine has the same version ours was built from.

To use our released version, download the py version for your OS from the releases page.

Un-zip / un-tar that file (e.g., using the unzip command or tar -xzf, or your desktop interface), and cd in a terminal to that directory.
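
For example (the archive name is a placeholder -- use the actual release file for your OS):

$ tar -xzf pyleabra-linux.tar.gz   # or unzip the .zip version
$ cd pyleabra                      # cd into whatever directory it unpacks to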

The README.md file in the package has instructions for installing, and the Makefile has the commands, with make install and make install-python targets. Once you get the pyleabra program working, you can then download this git repository.
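
For example, running the targets named above from the unpacked directory:

$ make install
$ make install-python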

To download the sims using git (the repository will show up as a sims directory, so you might want to make a subdirectory for it), e.g.:

$ mkdir ~/ccnsims
$ cd ~/ccnsims
$ git clone https://github.com/CompCogNeuro/sims

Then you can go to the location of the sims source, and just run the .py executables, e.g.,

$ cd ~/ccnsims/sims/ch2/neuron
$ ./neuron.py

Installing other python packages

As noted above, pyleabra is built with a specific version of Python (e.g., 3.8.x -- you can check by just running pyleabra and looking at the startup message), so if your usual work is with a different version of Python, you may need to install the other packages you typically use for this version as well. There may be more complex steps required for environments like Anaconda. For example, here's how you would install numpy and pandas:

$ pyleabra -m pip install numpy pandas

Build From Source

First, you must read and follow the GoGi Install instructions, and build the examples/widgets example and make sure it runs -- that page has all the details for extra things needed for different operating systems.

We now recommend using the newer modules mode of Go, and these instructions assume it.

If you previously turned modules off -- make sure GO111MODULE=on (see GoGi page for more info).
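
For example, in a bash-style shell:

$ export GO111MODULE=on   # or equivalently: go env -w GO111MODULE=on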

(If you're doing this for the first time, just proceed -- modules are on by default.)

The # notes after each line are comments explaining the command -- don't type those!

$ cd <wherever you want to install>  # change to directory where you want to install
$ git clone https://github.com/CompCogNeuro/sims   # get the code, makes a sims dir
$ cd sims        # go into it
$ cd ch6/objrec  # this has the most dependencies -- test it
$ go build       # this will get all the dependencies and build everything
$ ./objrec &     # this will run the newly-built executable

All the dependencies (emergent packages, gogi gui packages, etc) will be installed in:

~/go/pkg/mod/github.com/

where ~ means your home directory (can also be changed by setting GOPATH to any directory).

Dev Notes

To build all Windows targets on Windows, you have to use cygwin with native make installed -- we could not get recursive invocation of make to work in PowerShell. You also have to mv /usr/bin/gcc.exe to /usr/bin/gcc-cyg.exe so that the TDM-GCC-64 version of gcc is used -- otherwise it won't build.
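
Concretely, the rename described above would be (assuming the standard cygwin paths):

$ mv /usr/bin/gcc.exe /usr/bin/gcc-cyg.exe   # so the TDM-GCC-64 gcc is used instead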

Documentation

Overview

Package sims contains the neural network simulation models for the [CCN Textbook](https://github.com/CompCogNeuro/ed4).

These models are implemented in the new *Go* (golang) version of [emergent](https://github.com/emer/emergent), with Python versions available as well.

This github repository contains the full source code and you can build and run the models by cloning the repository and building / running the individual projects as described in the emergent Wiki help page: [Wiki Install](https://github.com/emer/emergent/wiki/Install).

Index

Constants

const (
	Version     = "v1.3.1"
	GitCommit   = "51b9ffa"          // the commit JUST BEFORE the release
	VersionDate = "2021-09-03 08:15" // UTC
)

Variables

This section is empty.

Functions

This section is empty.

Types

This section is empty.

Directories

Path Synopsis
ch10
a_not_b
a_not_b explores how the development of PFC active maintenance abilities can help to make behavior more flexible, in the sense that it can rapidly shift with changes in the environment.
sir
sir illustrates the dynamic gating of information into PFC active maintenance, by the basal ganglia (BG).
stroop
stroop illustrates how the PFC can produce top-down biasing for executive control, in the context of the widely-studied Stroop task.
ch2
detector
detector: This simulation shows how an individual neuron can act like a detector, picking out specific patterns from its inputs and responding with varying degrees of selectivity to the match between its synaptic weights and the input activity pattern.
neuron
neuron: This simulation illustrates the basic properties of neural spiking and rate-code activation, reflecting a balance of excitatory and inhibitory influences (including leak and synaptic inhibition).
ch3
cats_dogs
cats_dogs: This project explores a simple **semantic network** intended to represent a (very small) set of relationships among different features used to represent a set of entities in the world.
face_categ
face_categ: This project explores how sensory inputs (in this case simple cartoon faces) can be categorized in multiple different ways, to extract the relevant information and collapse across the irrelevant.
inhib
inhib: This simulation explores how inhibitory interneurons can dynamically control overall activity levels within the network, by providing both feedforward and feedback inhibition to excitatory pyramidal neurons.
necker_cube
necker_cube: This simulation explores the use of constraint satisfaction in processing ambiguous stimuli.
ch4
err_driven_hidden
err_driven_hidden shows how XCal error driven learning can train a hidden layer to solve problems that are otherwise impossible for a simple two layer network (as we saw in the Pattern Associator exploration, which should be completed first before doing this one).
family_trees
family_trees shows how learning can recode inputs that have no similarity structure into a hidden layer that captures the *functional* similarity structure of the items.
hebberr_combo
hebberr_combo shows how XCal hebbian learning in shallower layers of a network can aid an error driven learning network to generalize to unseen combinations of patterns.
pat_assoc
pat_assoc illustrates how error-driven and hebbian learning can operate within a simple task-driven learning context, with no hidden layers.
self_org
self_org illustrates how self-organizing learning emerges from the interactions between inhibitory competition, rich-get-richer Hebbian learning, and homeostasis (negative feedback).
ch6
attn
attn: This simulation illustrates how object recognition (ventral, what) and spatial (dorsal, where) pathways interact to produce spatial attention effects, and accurately capture the effects of brain damage to the spatial pathway.
objrec
objrec explores how a hierarchy of areas in the ventral stream of visual processing (up to inferotemporal (IT) cortex) can produce robust object recognition that is invariant to changes in position, size, etc of retinal input images.
v1rf
v1rf illustrates how self-organizing learning in response to natural images produces the oriented edge detector receptive field properties of neurons in primary visual cortex (V1).
ch7
bg
bg is a simplified basal ganglia (BG) network showing how dopamine bursts can reinforce *Go* (direct pathway) firing for actions that lead to reward, and dopamine dips reinforce *NoGo* (indirect pathway) firing for actions that do not lead to positive outcomes, producing Thorndike's classic *Law of Effect* for instrumental conditioning, and also providing a mechanism to learn and select among actions with different reward probabilities over multiple experiences.
rl_cond
rl_cond explores the temporal differences (TD) reinforcement learning algorithm under some basic Pavlovian conditioning environments.
ch8
abac
abac explores the classic paired associates learning task in a cortical-like network, which exhibits catastrophic levels of interference.
hip
hip runs a hippocampus model on the AB-AC paired associate learning task.
priming
priming illustrates *weight-based priming*, that is, how small weight changes caused by the standard slow cortical learning rate can produce significant behavioral priming, causing the network to favor one output pattern over another.
ch9
dyslex
dyslex simulates normal and disordered (dyslexic) reading performance in terms of a distributed representation of word-level knowledge across Orthography, Semantics, and Phonology.
sem
sem is trained using Hebbian learning on paragraphs from an early draft of the *Computational Explorations...* textbook, allowing it to learn about the overall statistics of when different words co-occur with other words, and thereby learning a surprisingly capable (though clearly imperfect) level of semantic knowledge about the topics covered in the textbook.
sg
sg is the sentence gestalt model, which learns to encode both syntax and semantics of sentences in an integrated "gestalt" hidden layer.
ss
ss explores the way that regularities and exceptions are learned in the mapping between spelling (orthography) and sound (phonology), in the context of a "direct pathway" mapping between these two forms of word representations.
