elog

package
v2.0.0-dev0.0.14
Published: Apr 15, 2024 License: BSD-3-Clause Imports: 17 Imported by: 18

README

elog

Docs: GoDoc

elog provides a full infrastructure for recording data of all sorts at multiple time scales and evaluation modes (training, testing, validation, etc.).

Each distinct logged item is defined by an elog.Item, which provides a map of Write functions keyed by a scope string reflecting the evaluation mode and time scale. The same function can be used across multiple scopes, or a different function can be given for each scope.

The Items are written to the table in the order added, so you can take advantage of previously computed item values based on that ordering. For example, intermediate values can be stored in and retrieved from Stats, or read from other items on a log, e.g., using the Context.ItemFloat function.

The Items are then processed in CreateTables() to create a set of etable.Table tables to hold the data.

The elog.Logs struct holds all the relevant data and functions for managing the logging process.

  • Log(mode, time) does logging, adding a new row

  • LogRow(mode, time, row) does logging at given row

Both of these functions automatically write incrementally to a tsv File if it has been opened.
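For example, a tsv log file can be opened via the package-level SetLogFile function (documented below). A minimal sketch, assuming a Sim with a Logs field -- the name strings here are purely illustrative:

```go
// open a tsv file for the Train/Epoch log; "epc" is the logName (extension)
// part of the standard netName_runName_logName.tsv filename
elog.SetLogFile(&ss.Logs, true, etime.Train, etime.Epoch, "epc", "ra25", "Base")
```

Subsequent Log / LogRow calls then write each new row to that file incrementally.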

The Context object is passed to the Item Write functions and has all the info typically needed; you must call SetContext(stats, net) on the Logs to provide those elements. Write functions can do most standard things by calling methods on Context -- see the Docs link above for more info.

Scopes

Everything is organized according to an etime.ScopeKey, which is just a string formatted to represent two factors: an evaluation mode (standard versions defined by the etime.Modes enum) and a time scale (the etime.Times enum).

Standard etime.Modes are:

  • Train
  • Test
  • Validate
  • Analyze -- used for internal representational analysis functions such as PCA, ActRF, SimMat, etc.

Standard etime.Times are based on the Env TimeScales augmented with Leabra / Axon finer-grained scales, including:

  • Cycle
  • Trial
  • Epoch
  • Run

Other arbitrary scope values can be used: there are Scope versions of every method that take an arbitrary etime.ScopeKey, which can be composed from any two strings using the ScopeStr method, along with "plain" versions of these methods that take the standard mode and time enums for convenience. The enums can themselves be extended, but it is probably easier to just use strings.
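The scope composition can be sketched in plain Go. This is an illustrative reimplementation, not the etime package itself, and the "&" separator is an assumption about the key format (check etime.ScopeStr for the actual one):

```go
package main

import "fmt"

// ScopeKey mirrors etime.ScopeKey: just a string combining mode and time.
type ScopeKey string

// ScopeStr composes a scope key from any two strings, standard or custom.
// The "&" separator is an assumption for illustration only.
func ScopeStr(mode, time string) ScopeKey {
	return ScopeKey(mode + "&" + time)
}

func main() {
	fmt.Println(ScopeStr("Train", "Epoch"))    // standard mode and time
	fmt.Println(ScopeStr("Pretrain", "Block")) // arbitrary custom scope
}
```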

Examples

The ra25 example has a fully updated implementation of this new logging infrastructure. The individual log Items are added in the logitems.go file, which keeps the main sim file smaller and easier to navigate. It is also a good idea to put the params in a separate params.go file, as we now do in this example.

Main Config and Log functions

The ConfigLogs function configures the items, creates the tables, and configures any other log-like entities including spike rasters.

func (ss *Sim) ConfigLogs() {
    ss.ConfigLogItems()
    ss.Logs.CreateTables()
    ss.Logs.SetContext(&ss.Stats, ss.Net)
    // don't plot certain combinations we don't use
    ss.Logs.NoPlot(etime.Train, etime.Cycle)
    ss.Logs.NoPlot(etime.Test, etime.Run)
    // note: Analyze not plotted by default
    ss.Logs.SetMeta(etime.Train, etime.Run, "LegendCol", "Params")
    ss.Stats.ConfigRasters(ss.Net, ss.Net.LayersByClass())
}

There is one master Log function that handles any details associated with different levels of logging. It is called with the scope elements, e.g., ss.Log(etime.Train, etime.Trial).

// Log is the main logging function, handles special things for different scopes
func (ss *Sim) Log(mode etime.Modes, time etime.Times) {
    dt := ss.Logs.Table(mode, time)
    row := dt.Rows
    switch {
    case mode == etime.Test && time == etime.Epoch:
        ss.LogTestErrors()
    case mode == etime.Train && time == etime.Epoch:
        epc := ss.TrainEnv.Epoch.Cur
        if (ss.PCAInterval > 0) && ((epc-1)%ss.PCAInterval == 0) { // -1 so runs on first epc
            ss.PCAStats()
        }
    case time == etime.Cycle:
        row = ss.Stats.Int("Cycle")
    case time == etime.Trial:
        row = ss.Stats.Int("Trial")
    }

    ss.Logs.LogRow(mode, time, row) // also logs to file, etc
    if time == etime.Cycle {
        ss.GUI.UpdateCyclePlot(etime.Test, ss.Time.Cycle)
    } else {
        ss.GUI.UpdatePlot(mode, time)
    }

    // post-logging special statistics
    switch {
    case mode == etime.Train && time == etime.Run:
        ss.LogRunStats()
    case mode == etime.Train && time == etime.Trial:
        epc := ss.TrainEnv.Epoch.Cur
        if (ss.PCAInterval > 0) && (epc%ss.PCAInterval == 0) {
            ss.Log(etime.Analyze, etime.Trial)
        }
    }
}

Resetting logs

Often, at the end of the Log function, you need to reset logs at a lower level, after the data has been aggregated. This is critical for logs that add rows incrementally, and also when using MPI aggregation.

	if time == etime.Epoch { // Reset Trial log after Epoch
		ss.Logs.ResetLog(mode, etime.Trial)
	}

MPI Aggregation

When splitting trials across different processors using mpi, you typically need to gather the trial-level data for aggregating at the epoch level. There is a function that handles this:

	if ss.UseMPI && time == etime.Epoch { // Must gather data for trial level if doing epoch level
		ss.Logs.MPIGatherTableRows(mode, etime.Trial, ss.Comm)
	}

The function switches the aggregated table in place of the local table, so that all the usual functions accessing the trial data will work properly. Because of this, it is essential to do the ResetLog or otherwise call SetNumRows to restore the trial log back to the proper number of rows -- otherwise it will grow exponentially!

Additional stats

There are various additional analysis functions called here, for example this one, which generates summary statistics about overall performance across runs -- these are stored in the MiscTables field on the Logs object:

// LogRunStats records stats across all runs, at Train Run scope
func (ss *Sim) LogRunStats() {
    sk := etime.Scope(etime.Train, etime.Run)
    lt := ss.Logs.TableDetailsScope(sk)
    ix, _ := lt.NamedIndexView("RunStats")

    spl := split.GroupBy(ix, []string{"Params"})
    split.Desc(spl, "FirstZero")
    split.Desc(spl, "PctCor")
    ss.Logs.MiscTables["RunStats"] = spl.AggsToTable(etable.AddAggName)
}

Counter Items

All counters of interest should be written to estats Stats elements, whenever the counters might be updated, and then logging just reads those stats. Here's a StatCounters function:

// StatCounters saves current counters to Stats, so they are available for logging etc
// Also saves a string rep of them to the GUI, if the GUI is active
func (ss *Sim) StatCounters(train bool) {
    ev := ss.TrainEnv
    if !train {
        ev = ss.TestEnv
    }
    ss.Stats.SetInt("Run", ss.TrainEnv.Run.Cur)
    ss.Stats.SetInt("Epoch", ss.TrainEnv.Epoch.Cur)
    ss.Stats.SetInt("Trial", ev.Trial.Cur)
    ss.Stats.SetString("TrialName", ev.TrialName.Cur)
    ss.Stats.SetInt("Cycle", ss.Time.Cycle)
    ss.GUI.NetViewText = ss.Stats.Print([]string{"Run", "Epoch", "Trial", "TrialName", "Cycle", "TrlUnitErr", "TrlErr", "TrlCosDiff"})
}

Then they are easily logged -- just showing different Scope expressions here:

    ss.Logs.AddItem(&elog.Item{
        Name: "Run",
        Type: etensor.INT64,
        Plot: false,
        Write: elog.WriteMap{
            etime.Scope(etime.AllModes, etime.AllTimes): func(ctx *elog.Context) {
                ctx.SetStatInt("Run")
            }}})
    ss.Logs.AddItem(&elog.Item{
        Name: "Epoch",
        Type: etensor.INT64,
        Plot: false,
        Write: elog.WriteMap{
            etime.Scopes([]etime.Modes{etime.AllModes}, []etime.Times{etime.Epoch, etime.Trial}): func(ctx *elog.Context) {
                ctx.SetStatInt("Epoch")
            }}})
    ss.Logs.AddItem(&elog.Item{
        Name: "Trial",
        Type: etensor.INT64,
        Write: elog.WriteMap{
            etime.Scope(etime.AllModes, etime.Trial): func(ctx *elog.Context) {
                ctx.SetStatInt("Trial")
            }}})

Performance Stats

Overall summary performance statistics have multiple Write functions for different scopes, performing aggregation over log data at lower levels:

    ss.Logs.AddItem(&elog.Item{
        Name: "UnitErr",
        Type: etensor.FLOAT64,
        Plot: false,
        Write: elog.WriteMap{
            etime.Scope(etime.AllModes, etime.Trial): func(ctx *elog.Context) {
                ctx.SetStatFloat("TrlUnitErr")
            }, etime.Scope(etime.AllModes, etime.Epoch): func(ctx *elog.Context) {
                ctx.SetAgg(ctx.Mode, etime.Trial, agg.AggMean)
            }, etime.Scope(etime.AllModes, etime.Run): func(ctx *elog.Context) {
                ix := ctx.LastNRows(ctx.Mode, etime.Epoch, 5)
                ctx.SetFloat64(agg.Mean(ix, ctx.Item.Name)[0])
            }}})

Copy Stats from Testing (or any other log)

It is often convenient to have just one log file with both training and testing performance recorded -- this code copies over relevant stats from the testing epoch log to the training epoch log:

    // Copy over Testing items
    stats := []string{"UnitErr", "PctErr", "PctCor", "PctErr2", "CosDiff"}
    for _, st := range stats {
        stnm := st
        tstnm := "Tst" + st
        ss.Logs.AddItem(&elog.Item{
            Name: tstnm,
            Type: etensor.FLOAT64,
            Plot: false,
            Write: elog.WriteMap{
                etime.Scope(etime.Train, etime.Epoch): func(ctx *elog.Context) {
                    ctx.SetFloat64(ctx.ItemFloat(etime.Test, etime.Epoch, stnm))
                }}})
    }

Layer Stats

Iterate over layers of interest (use LayersByClass function). It is essential to create a local variable inside the loop for the lnm variable, which is then captured by the closure (see https://github.com/golang/go/wiki/CommonMistakes):

    // Standard stats for Ge and AvgAct tuning -- for all hidden, output layers
    layers := ss.Net.LayersByClass("Hidden", "Target")
    for _, lnm := range layers {
        clnm := lnm
        ss.Logs.AddItem(&elog.Item{
            Name:   clnm + "_ActAvg",
            Type:   etensor.FLOAT64,
            Plot:   false,
            FixMax: false,
            Range:  minmax.F64{Max: 1},
            Write: elog.WriteMap{
                etime.Scope(etime.Train, etime.Epoch): func(ctx *elog.Context) {
                    ly := ctx.Layer(clnm).(axon.AxonLayer).AsAxon()
                    ctx.SetFloat32(ly.ActAvg.ActMAvg)
                }}})
          ...
    }

Here's how to log a projection variable:

    ss.Logs.AddItem(&elog.Item{
        Name:  clnm + "_FF_AvgMaxG",
        Type:  etensor.FLOAT64,
        Plot:  false,
        Range: minmax.F64{Max: 1},
        Write: elog.WriteMap{
            etime.Scope(etime.Train, etime.Trial): func(ctx *elog.Context) {
                ffpj := cly.RecvPrjn(0).(*axon.Prjn)
                ctx.SetFloat32(ffpj.GScale.AvgMax)
            }, etime.Scope(etime.AllModes, etime.Epoch): func(ctx *elog.Context) {
                ctx.SetAgg(ctx.Mode, etime.Trial, agg.AggMean)
            }}})

Layer Activity Patterns

A log column can be a tensor of any shape -- the SetLayerTensor method on the Context grabs the data from the layer into a reused tensor (no memory churning after first initialization), and then stores that tensor into the log column.

    // input / output layer activity patterns during testing
    layers = ss.Net.LayersByClass("Input", "Target")
    for _, lnm := range layers {
        clnm := lnm
        cly := ss.Net.LayerByName(clnm)
        ss.Logs.AddItem(&elog.Item{
            Name:      clnm + "_Act",
            Type:      etensor.FLOAT64,
            CellShape: cly.Shape().Shp,
            FixMax:    true,
            Range:     minmax.F64{Max: 1},
            Write: elog.WriteMap{
                etime.Scope(etime.Test, etime.Trial): func(ctx *elog.Context) {
                    ctx.SetLayerTensor(clnm, "Act")
                }}})
    }

PCA on Activity

Computing stats based on principal components analysis (PCA) of the variance across different input patterns is very informative about the nature of the internal representations in hidden layers. The estats package has support for this. It is fairly expensive computationally, so we only do it every N epochs (10 or so), calling this method:

// PCAStats computes PCA statistics on recorded hidden activation patterns
// from Analyze, Trial log data
func (ss *Sim) PCAStats() {
    ss.Stats.PCAStats(ss.Logs.IndexView(etime.Analyze, etime.Trial), "ActM", ss.Net.LayersByClass("Hidden"))
    ss.Logs.ResetLog(etime.Analyze, etime.Trial)
}

Here's how you record the data and log the resulting stats, using the Analyze EvalMode:

    // hidden activities for PCA analysis, and PCA results
    layers = ss.Net.LayersByClass("Hidden")
    for _, lnm := range layers {
        clnm := lnm
        cly := ss.Net.LayerByName(clnm)
        ss.Logs.AddItem(&elog.Item{
            Name:      clnm + "_ActM",
            Type:      etensor.FLOAT64,
            CellShape: cly.Shape().Shp,
            FixMax:    true,
            Range:     minmax.F64{Max: 1},
            Write: elog.WriteMap{
                etime.Scope(etime.Analyze, etime.Trial): func(ctx *elog.Context) {
                    ctx.SetLayerTensor(clnm, "ActM")
                }}})
        ss.Logs.AddItem(&elog.Item{
            Name: clnm + "_PCA_NStrong",
            Type: etensor.FLOAT64,
            Plot: false,
            Write: elog.WriteMap{
                etime.Scope(etime.Train, etime.Epoch): func(ctx *elog.Context) {
                    ctx.SetStatFloat(ctx.Item.Name)
                }, etime.Scope(etime.AllModes, etime.Run): func(ctx *elog.Context) {
                    ix := ctx.LastNRows(ctx.Mode, etime.Epoch, 5)
                    ctx.SetFloat64(agg.Mean(ix, ctx.Item.Name)[0])
                }}})
       ...
    }

Error by Input Category

This item creates a tensor column that records the average error for each category of input stimulus (e.g., for images from object categories), using the split.GroupBy function for etable. The IndexView function (see also NamedIndexView) automatically manages the etable.IndexView indexed view onto a log table; this view is used for all aggregation and further analysis of data, so you can efficiently analyze filtered subsets of the original data.

    ss.Logs.AddItem(&elog.Item{
        Name:        "CatErr",
        Type:        etensor.FLOAT64,
        CellShape:   []int{20},
        DimNames:    []string{"Cat"},
        Plot:        true,
        Range:       minmax.F64{Min: 0},
        TensorIndex: -1, // plot all values
        Write: elog.WriteMap{
            etime.Scope(etime.Test, etime.Epoch): func(ctx *elog.Context) {
                ix := ctx.Logs.IndexView(etime.Test, etime.Trial)
                spl := split.GroupBy(ix, []string{"Cat"})
                split.AggTry(spl, "Err", agg.AggMean)
                cats := spl.AggsToTable(etable.ColNameOnly)
                ss.Logs.MiscTables[ctx.Item.Name] = cats
                ctx.SetTensor(cats.Cols[1])
            }}})

Confusion matrices

The estats package has a Confusion object to manage computation of a confusion matrix -- see confusion for more info.

Closest Pattern Stat

The estats package has a ClosestPat function that grabs the activity from a given variable in a given layer and compares it to a list of patterns in a table, returning the pattern that is closest to the layer activity pattern. It uses the Correlation metric, which is the most robust metric in terms of ignoring differences in overall activity levels. You can also compare that closest pattern name to a (list of) acceptable target names and use that as an error measure.

    row, cor, cnm := ss.Stats.ClosestPat(ss.Net, "Output", "ActM", ss.Pats, "Output", "Name")
    ss.Stats.SetString("TrlClosest", cnm)
    ss.Stats.SetFloat("TrlCorrel", float64(cor))
    tnm := ss.TrainEnv.TrialName
    if cnm == tnm {
        ss.Stats.SetFloat("TrlErr", 0)
    } else {
        ss.Stats.SetFloat("TrlErr", 1)
    }

Activation-based Receptive Fields

The estats package has support for recording activation-based receptive fields (actrf), which are very useful for decoding what units represent.

First, initialize the ActRFs in the ConfigLogs function, using strings that specify the layer name to record activity from, followed by the source data for the receptive field. The source can be anything that might help you understand what the units are responding to, including the name of another layer. If it is not another layer name, the code looks for the name in the Stats.F32Tensors map of named tensors.

    ss.Stats.SetF32Tensor("Image", &ss.TestEnv.Vis.ImgTsr) // image used for actrfs, must be there first
    ss.Stats.InitActRFs(ss.Net, []string{"V4:Image", "V4:Output", "IT:Image", "IT:Output"}, "ActM")

To add tabs in the gui to visualize the resulting RFs, add this in your ConfigGUI function (note that this also adds a tab to visualize the input Image being presented to the network):

    tg := ss.GUI.TabView.AddNewTab(etview.KiT_TensorGrid, "Image").(*etview.TensorGrid)
    tg.SetStretchMax()
    ss.GUI.SetGrid("Image", tg)
    tg.SetTensor(&ss.TrainEnv.Vis.ImgTsr)

    ss.GUI.AddActRFGridTabs(&ss.Stats.ActRFs)

At the relevant Trial level, call the function to update the RF data based on current network activity state:

    ss.Stats.UpdateActRFs(ss.Net, "ActM", 0.01)

Here's a TestAll function that manages the testing of a large number of inputs to compute the RFs (you often need a large amount of testing data to sample the space sufficiently to get meaningful results):

// TestAll runs through the full set of testing items
func (ss *Sim) TestAll() {
    ss.TestEnv.Init(ss.TrainEnv.Run.Cur)
    ss.Stats.ActRFs.Reset() // initialize prior to testing
    for {
        ss.TestTrial(true)
        ss.Stats.UpdateActRFs(ss.Net, "ActM", 0.01)
        _, _, chg := ss.TestEnv.Counter(env.Epoch)
        if chg || ss.StopNow {
            break
        }
    }
    ss.Stats.ActRFsAvgNorm() // final 
    ss.GUI.ViewActRFs(&ss.Stats.ActRFs)
}

Representational Similarity Analysis (SimMat)

Cluster Plots

Documentation

Index

Constants

const (
	// DTrue is deprecated -- just use true
	DTrue = true
	// DFalse is deprecated -- just use false
	DFalse = false
)
const LogPrec = 4

LogPrec is precision for saving float values in logs

Variables

var LogDir = ""

LogDir is a directory that is prefixed for saving log files

Functions

func LogFilename

func LogFilename(logName, netName, runName string) string

LogFilename returns a standard log file name as netName_runName_logName.tsv
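The documented convention reduces to a single Sprintf. A hedged sketch -- the helper name here is illustrative; use elog.LogFilename itself in real code:

```go
package main

import "fmt"

// logFilename mirrors the documented netName_runName_logName.tsv convention.
func logFilename(logName, netName, runName string) string {
	return fmt.Sprintf("%s_%s_%s.tsv", netName, runName, logName)
}

func main() {
	fmt.Println(logFilename("epc", "ra25", "Base")) // ra25_Base_epc.tsv
}
```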

func SetLogFile

func SetLogFile(logs *Logs, configOn bool, mode etime.Modes, time etime.Times, logName, netName, runName string)

SetLogFile sets the log file for given mode and time, using given logName (extension), netName and runName, if the configOn flag is set.

Types

type Context

type Context struct {

	// pointer to the Logs object with all log data
	Logs *Logs

	// pointer to stats
	Stats *estats.Stats

	// network
	Net emer.Network

	// data parallel index for accessing data from network
	Di int

	// current log Item
	Item *Item

	// current scope key
	Scope etime.ScopeKey

	// current scope eval mode (if standard)
	Mode etime.Modes

	// current scope timescale (if standard)
	Time etime.Times

	// LogTable with extra data for the table
	LogTable *LogTable

	// current table to record value to
	Table *etable.Table

	// current row in table to write to
	Row int
}

Context provides the context for logging Write functions. SetContext must be called on Logs to set the Stats and Net values. Context provides various convenience functions for setting log values and other commonly used operations.

func (*Context) ClosestPat

func (ctx *Context) ClosestPat(layNm, unitVar string, pats *etable.Table, colnm, namecol string) (int, float32, string)

ClosestPat finds the closest pattern in given column of given pats table to given layer activation pattern using given variable. Returns the row number, correlation value, and value of a column named namecol for that row if non-empty. Column must be etensor.Float32

func (*Context) GetLayerRepTensor

func (ctx *Context) GetLayerRepTensor(layNm, unitVar string) *etensor.Float32

GetLayerRepTensor gets tensor of representative Unit values on a layer for given variable from current ctx.Di data parallel index.

func (*Context) GetLayerTensor

func (ctx *Context) GetLayerTensor(layNm, unitVar string) *etensor.Float32

GetLayerTensor gets tensor of Unit values on a layer for given variable from current ctx.Di data parallel index.

func (*Context) ItemColTensor

func (ctx *Context) ItemColTensor(mode etime.Modes, time etime.Times, itemNm string) etensor.Tensor

ItemColTensor returns an etensor.Tensor of the entire column of given item name in log for given mode, time

func (*Context) ItemColTensorScope

func (ctx *Context) ItemColTensorScope(scope etime.ScopeKey, itemNm string) etensor.Tensor

ItemColTensorScope returns an etensor.Tensor of the entire column of given item name in log for given scope.

func (*Context) ItemFloat

func (ctx *Context) ItemFloat(mode etime.Modes, time etime.Times, itemNm string) float64

ItemFloat returns a float64 value of the last row of given item name in log for given mode, time

func (*Context) ItemFloatScope

func (ctx *Context) ItemFloatScope(scope etime.ScopeKey, itemNm string) float64

ItemFloatScope returns a float64 value of the last row of given item name in log for given scope.

func (*Context) ItemString

func (ctx *Context) ItemString(mode etime.Modes, time etime.Times, itemNm string) string

ItemString returns a string value of the last row of given item name in log for given mode, time

func (*Context) ItemStringScope

func (ctx *Context) ItemStringScope(scope etime.ScopeKey, itemNm string) string

ItemStringScope returns a string value of the last row of given item name in log for given scope.

func (*Context) ItemTensor

func (ctx *Context) ItemTensor(mode etime.Modes, time etime.Times, itemNm string) etensor.Tensor

ItemTensor returns an etensor.Tensor of the last row of given item name in log for given mode, time

func (*Context) ItemTensorScope

func (ctx *Context) ItemTensorScope(scope etime.ScopeKey, itemNm string) etensor.Tensor

ItemTensorScope returns an etensor.Tensor of the last row of given item name in log for given scope.

func (*Context) LastNRows

func (ctx *Context) LastNRows(mode etime.Modes, time etime.Times, n int) *etable.IndexView

LastNRows returns an IndexView onto table for given scope with the last n rows of the table (only valid rows, if less than n). This index view is available later with the "LastNRows" name via NamedIndexView functions.

func (*Context) LastNRowsScope

func (ctx *Context) LastNRowsScope(sk etime.ScopeKey, n int) *etable.IndexView

LastNRowsScope returns an IndexView onto table for given scope with the last n rows of the table (only valid rows, if less than n). This index view is available later with the "LastNRows" name via NamedIndexView functions.

func (*Context) Layer

func (ctx *Context) Layer(layNm string) emer.Layer

Layer returns the layer of the given name as the emer.Layer interface -- you may then need to convert it to a concrete type depending on your use.

func (*Context) SetAgg

func (ctx *Context) SetAgg(mode etime.Modes, time etime.Times, ag agg.Aggs) []float64

SetAgg sets an aggregated value computed from given eval mode and time scale with same Item name, to current item, row. Supports scalar or tensor cells. Returns aggregated value(s).

func (*Context) SetAggItem

func (ctx *Context) SetAggItem(mode etime.Modes, time etime.Times, itemNm string, ag agg.Aggs) []float64

SetAggItem sets an aggregated value computed from given eval mode and time scale with given Item name, to current item, row. Supports scalar or tensor cells. Returns aggregated value(s).

func (*Context) SetAggItemScope

func (ctx *Context) SetAggItemScope(scope etime.ScopeKey, itemNm string, ag agg.Aggs) []float64

SetAggItemScope sets an aggregated value computed from another scope (ScopeKey) with given Item name, to current item, row. Supports scalar or tensor cells. Returns aggregated value(s).

func (*Context) SetAggScope

func (ctx *Context) SetAggScope(scope etime.ScopeKey, ag agg.Aggs) []float64

SetAggScope sets an aggregated value computed from another scope (ScopeKey) with same Item name, to current item, row. Supports scalar or tensor cells. Returns aggregated value(s).

func (*Context) SetFloat32

func (ctx *Context) SetFloat32(val float32)

SetFloat32 sets a float32 to current table, item, row

func (*Context) SetFloat64

func (ctx *Context) SetFloat64(val float64)

SetFloat64 sets a float64 to current table, item, row

func (*Context) SetFloat64Cells

func (ctx *Context) SetFloat64Cells(vals []float64)

SetFloat64Cells sets float64 values into the tensor cells of the current table, item, row

func (*Context) SetInt

func (ctx *Context) SetInt(val int)

SetInt sets an int to current table, item, row

func (*Context) SetLayerRepTensor

func (ctx *Context) SetLayerRepTensor(layNm, unitVar string) *etensor.Float32

SetLayerRepTensor sets tensor of representative Unit values on a layer for given variable to current ctx.Di data parallel index.

func (*Context) SetLayerTensor

func (ctx *Context) SetLayerTensor(layNm, unitVar string) *etensor.Float32

SetLayerTensor sets tensor of Unit values on a layer for given variable to current ctx.Di data parallel index.

func (*Context) SetStatFloat

func (ctx *Context) SetStatFloat(name string)

SetStatFloat sets a Stats Float of given name to current table, item, row

func (*Context) SetStatInt

func (ctx *Context) SetStatInt(name string)

SetStatInt sets a Stats int of given name to current table, item, row

func (*Context) SetStatString

func (ctx *Context) SetStatString(name string)

SetStatString sets a Stats string of given name to current table, item, row

func (*Context) SetString

func (ctx *Context) SetString(val string)

SetString sets a string to current table, item, row

func (*Context) SetTable

func (ctx *Context) SetTable(sk etime.ScopeKey, lt *LogTable, row int)

SetTable sets the current table & scope -- called by WriteItems

func (*Context) SetTensor

func (ctx *Context) SetTensor(val etensor.Tensor)

SetTensor sets a Tensor to current table, item, row

type Item

type Item struct {

	// name of column -- must be unique for a table
	Name string

	// data type, using etensor types which are isomorphic with arrow.Type
	Type etensor.Type

	// shape of a single cell in the column (i.e., without the row dimension) -- for scalars this is nil -- tensor column will add the outer row dimension to this shape
	CellShape []int

	// names of the dimensions within the CellShape -- 'Row' will be added to outer dimension
	DimNames []string

	// holds Write functions for different scopes.  After processing, the scope key will be a single mode and time, from Scope(mode, time), but the initial specification can include lists for each, or the All* option, if there is a Write function that works across scopes
	Write WriteMap

	// Whether or not to plot it
	Plot bool

	// The minimum and maximum values, for plotting
	Range minmax.F64

	// Whether to fix the minimum in the display
	FixMin bool

	// Whether to fix the maximum in the display
	FixMax bool

	// Name of other item that has the error bar values for this item -- for plotting
	ErrCol string

	// index of tensor to plot -- defaults to 0 -- use -1 to plot all
	TensorIndex int

	// specific color for plot -- uses default ordering of colors if empty
	Color string

	// map of eval modes that this item has a Write function for
	Modes map[string]bool

	// map of times that this item has a Write function for
	Times map[string]bool
}

Item describes one item to be logged -- has all the info for this item, across all scopes where it is relevant.

func (*Item) CompileScopes

func (item *Item) CompileScopes()

CompileScopes compiles maps of modes and times where this item appears, based on the final updated Write map.

func (*Item) HasMode

func (item *Item) HasMode(mode etime.Modes) bool

func (*Item) HasTime

func (item *Item) HasTime(time etime.Times) bool

func (*Item) SetEachScopeKey

func (item *Item) SetEachScopeKey()

SetEachScopeKey updates the Write map so that it only contains entries for a unique Mode,Time pair, where multiple modes and times may have originally been specified.

func (*Item) SetWriteFunc

func (item *Item) SetWriteFunc(mode etime.Modes, time etime.Times, theFunc WriteFunc)

SetWriteFunc sets Write function for one mode, time

func (*Item) SetWriteFuncAll

func (item *Item) SetWriteFuncAll(theFunc WriteFunc)

SetWriteFuncAll sets the Write function for all existing Modes and Times Can be used to replace a Write func after the fact.

func (*Item) SetWriteFuncOver

func (item *Item) SetWriteFuncOver(modes []etime.Modes, times []etime.Times, theFunc WriteFunc)

SetWriteFuncOver sets the Write function over range of modes and times

func (*Item) WriteFunc

func (item *Item) WriteFunc(mode, time string) (WriteFunc, bool)

type LogTable

type LogTable struct {

	// Actual data stored.
	Table *etable.Table

	// arbitrary meta-data for each table, e.g., hints for plotting: Plot = false to not plot, XAxisCol, LegendCol
	Meta map[string]string

	// Index View of the table -- automatically updated when a new row of data is logged to the table.
	IndexView *etable.IndexView `view:"-"`

	// named index views onto the table that can be saved and used across multiple items -- these are reset to nil after a new row is written -- see the NamedIndexView function for more details.
	NamedViews map[string]*etable.IndexView `view:"-"`

	// File to store the log into.
	File *os.File `view:"-"`

	// true if headers for File have already been written
	WroteHeaders bool `view:"-"`
}

LogTable contains all the data for one log table

func NewLogTable

func NewLogTable(table *etable.Table) *LogTable

NewLogTable returns a new LogTable entry for given table, initializing values

func (*LogTable) GetIndexView

func (lt *LogTable) GetIndexView() *etable.IndexView

GetIndexView returns the index view for the whole table. It is reset to nil after log row is written, and if nil then it is initialized to reflect current rows.

func (*LogTable) NamedIndexView

func (lt *LogTable) NamedIndexView(name string) (*etable.IndexView, bool)

NamedIndexView returns a named Index View of the table, and true if this index view was newly created to show entire table (else false). This is used for additional data aggregation, filtering etc. It is reset to nil after log row is written, and if nil then it is initialized to reflect current rows as a starting point (returning true). Thus, the bool return value can be used for re-using cached indexes.
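The bool return supports computing a filtered view once per logged row and reusing it across later items. A sketch inside an Item Write function, with illustrative view name and filter (the Filter call assumes the etable.IndexView API):

```go
// build the filtered view only when it is newly created for this row;
// later items reuse the cached version under the same name
ix, isNew := ctx.LogTable.NamedIndexView("ErrTrials")
if isNew {
	ix.Filter(func(et *etable.Table, row int) bool {
		return et.CellFloat("Err", row) > 0 // keep only error trials
	})
}
```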

func (*LogTable) ResetIndexViews

func (lt *LogTable) ResetIndexViews()

ResetIndexViews resets all IndexViews -- after log row is written

type Logs

type Logs struct {

	// Tables storing log data, auto-generated from Items.
	Tables map[etime.ScopeKey]*LogTable

	// holds additional tables not computed from items -- e.g., aggregation results, intermediate computations, etc
	MiscTables map[string]*etable.Table

	// A list of the items that should be logged. Each item should describe one column that you want to log, and how.  Order in list determines order in logs.
	Items []*Item `view:"-"`

	// context information passed to logging Write functions -- has all the information needed to compute and write log values -- is updated for each item in turn
	Context Context `view:"-"`

	// All the eval modes that appear in any of the items of this log.
	Modes map[string]bool `view:"-"`

	// All the timescales that appear in any of the items of this log.
	Times map[string]bool `view:"-"`

	// map of item indexes by name, for rapid access to items if they need to be modified after adding.
	ItemIndexMap map[string]int `view:"-"`

	// sorted order of table scopes
	TableOrder []etime.ScopeKey `view:"-"`
}

Logs contains all logging state and API for doing logging. Call AddItem to add any number of items, at different eval mode, time scopes. Each Item has its own Write functions, at each scope as needed. Then call CreateTables to generate log Tables from those items. Call Log with a scope to add a new row of data to the log, and ResetLog to reset the log to empty.
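The sequential Write ordering (items written in the order added, so later items can read intermediate values stored by earlier ones) can be illustrated with a minimal stand-in; the Item type here is a hypothetical simplification, not the actual elog.Item:

```go
package main

import "fmt"

// Item is a minimal stand-in for the elog pattern described above:
// items are written in the order added, so a later item can read a
// value that an earlier item stored in shared stats.
type Item struct {
	Name  string
	Write func(stats map[string]float64) float64
}

func main() {
	stats := map[string]float64{}
	items := []*Item{
		{Name: "TrlErr", Write: func(s map[string]float64) float64 {
			s["TrlErr"] = 1 // store intermediate value
			return s["TrlErr"]
		}},
		{Name: "PctCor", Write: func(s map[string]float64) float64 {
			return 1 - s["TrlErr"] // depends on the prior item
		}},
	}
	row := map[string]float64{}
	for _, it := range items { // same order as added
		row[it.Name] = it.Write(stats)
	}
	fmt.Println(row["TrlErr"], row["PctCor"]) // 1 0
}
```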

func (*Logs) AddCopyFromFloatItems

func (lg *Logs) AddCopyFromFloatItems(toMode etime.Modes, toTimes []etime.Times, fmMode etime.Modes, fmTime etime.Times, prefix string, itemNames ...string)

AddCopyFromFloatItems adds items that copy from one log to another, adding the given prefix string to each. If toTimes has more than one entry, subsequent times are AggMean aggregates of the first. Items are float64 type.

func (*Logs) AddCounterItems

func (lg *Logs) AddCounterItems(ctrs ...etime.Times)

AddCounterItems adds given Int counters from Stats, typically by recording looper counter values to Stats.

func (*Logs) AddErrStatAggItems

func (lg *Logs) AddErrStatAggItems(statName string, times ...etime.Times)

AddErrStatAggItems adds Err, PctErr, PctCor items recording overall performance from the given statName statistic (e.g., "TrlErr") across the 3 time scales, ordered from higher to lower, e.g., Run, Epoch, Trial.

func (*Logs) AddItem

func (lg *Logs) AddItem(item *Item) *Item

AddItem adds an item to the list. The items are stored in the order they are added, and this order is used for calling the item Write functions, so you can rely on that ordering for any sequential dependencies across items (e.g., in using intermediate computed values). Note: item names must be unique -- use different scopes for Write functions where needed.

func (*Logs) AddLayerTensorItems

func (lg *Logs) AddLayerTensorItems(net emer.Network, varNm string, mode etime.Modes, etm etime.Times, layClasses ...string)

AddLayerTensorItems adds tensor recording items for given variable, classes of layers, mode and time (e.g., Test, Trial). If another item already exists for a different mode / time, this is added to it so there aren't any duplicate items. di is a data parallel index, for networks capable of processing input patterns in parallel.

func (*Logs) AddPerTrlMSec

func (lg *Logs) AddPerTrlMSec(itemName string, times ...etime.Times) *Item

AddPerTrlMSec adds a log item that records the time taken to process one trial, in milliseconds. itemName is PerTrlMSec by default, and times are the relevant 3 time scales to record, ordered from higher to lower, e.g., Run, Epoch, Trial.

func (*Logs) AddStatAggItem

func (lg *Logs) AddStatAggItem(statName string, times ...etime.Times) *Item

AddStatAggItem adds a Float64 stat that is aggregated with agg.Mean across the given time scales, ordered from higher to lower, e.g., Run, Epoch, Trial. The statName is the source statistic in stats at the lowest level, and is also used for the log item name. For the Run or Condition level, aggregation is the mean over the last 5 rows of the prior level (Epoch).

func (*Logs) AddStatFloatNoAggItem

func (lg *Logs) AddStatFloatNoAggItem(mode etime.Modes, etm etime.Times, stats ...string)

AddStatFloatNoAggItem adds float statistic(s) of given names for just one mode, time, with no aggregation. If another item already exists for a different mode / time, this is added to it so there aren't any duplicate items.

func (*Logs) AddStatIntNoAggItem

func (lg *Logs) AddStatIntNoAggItem(mode etime.Modes, etm etime.Times, stats ...string)

AddStatIntNoAggItem adds int statistic(s) of given names for just one mode, time, with no aggregation. If another item already exists for a different mode / time, this is added to it so there aren't any duplicate items.

func (*Logs) AddStatStringItem

func (lg *Logs) AddStatStringItem(mode etime.Modes, etm etime.Times, stats ...string)

AddStatStringItem adds string stat item(s) to given mode and time (e.g., AllModes, Trial). If another item already exists for a different mode / time, this is added to it so there aren't any duplicate items.

func (*Logs) CloseLogFiles

func (lg *Logs) CloseLogFiles()

CloseLogFiles closes all open log files

func (*Logs) CompileAllScopes

func (lg *Logs) CompileAllScopes()

CompileAllScopes gathers all the modes and times used across all items

func (*Logs) CreateTables

func (lg *Logs) CreateTables() error

CreateTables creates the log tables based on all the specified log items. It first calls ProcessItems to instantiate specific scopes.

func (*Logs) IndexView

func (lg *Logs) IndexView(mode etime.Modes, time etime.Times) *etable.IndexView

IndexView returns the Index View of a log table for a given mode, time. This is used for data aggregation functions over the entire table. It should not be altered (don't Filter!) and always shows the whole table. See NamedIndexView for custom index views.

func (*Logs) IndexViewScope

func (lg *Logs) IndexViewScope(sk etime.ScopeKey) *etable.IndexView

IndexViewScope returns the Index View of a log table for the given etime.ScopeKey. This is used for data aggregation functions over the entire table. This view should not be altered and always shows the whole table. See NamedIndexView for custom index views.

func (*Logs) InitErrStats

func (lg *Logs) InitErrStats()

InitErrStats initializes the base stats variables used for AddErrStatAggItems: TrlErr, FirstZero, LastZero, NZero

func (*Logs) ItemBindAllScopes

func (lg *Logs) ItemBindAllScopes(item *Item)

ItemBindAllScopes translates the AllModes or AllTimes scopes into a concrete list of actual Modes and Times used across all items

func (*Logs) ItemByName

func (lg *Logs) ItemByName(name string) (*Item, bool)

ItemByName returns item by given name, false if not found

func (*Logs) LastNRows

func (lg *Logs) LastNRows(mode etime.Modes, time etime.Times, n int) *etable.IndexView

LastNRows returns an IndexView onto table for given scope with the last n rows of the table (only valid rows, if less than n). This index view is available later with the "LastNRows" name via NamedIndexView functions.
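The "only valid rows, if less than n" clamping can be sketched with a simplified index computation (plain row indexes standing in for an etable.IndexView):

```go
package main

import "fmt"

// lastNRows is a stand-in for the clamping described above: the
// view covers the last n row indexes, or all rows if fewer than
// n exist.
func lastNRows(rows, n int) []int {
	if n > rows {
		n = rows // only valid rows, if fewer than n
	}
	idxs := make([]int, n)
	for i := range idxs {
		idxs[i] = rows - n + i
	}
	return idxs
}

func main() {
	fmt.Println(lastNRows(10, 3)) // [7 8 9]
	fmt.Println(lastNRows(2, 5))  // [0 1]
}
```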

func (*Logs) LastNRowsScope

func (lg *Logs) LastNRowsScope(sk etime.ScopeKey, n int) *etable.IndexView

LastNRowsScope returns an IndexView onto table for given scope with the last n rows of the table (only valid rows, if less than n). This index view is available later with the "LastNRows" name via NamedIndexView functions.

func (*Logs) Log

func (lg *Logs) Log(mode etime.Modes, time etime.Times) *etable.Table

Log performs logging for given mode, time. Adds a new row, writes all the items, and saves data to file if open.

func (*Logs) LogRow

func (lg *Logs) LogRow(mode etime.Modes, time etime.Times, row int) *etable.Table

LogRow performs logging for given mode, time, at given row. Saves data to file if open.

func (*Logs) LogRowDi

func (lg *Logs) LogRowDi(mode etime.Modes, time etime.Times, row int, di int) *etable.Table

LogRowDi performs logging for given mode, time, at given row, using given data parallel index di, which adds to the row and all network access routines use this index for accessing network data. Saves data to file if open.

func (*Logs) LogRowScope

func (lg *Logs) LogRowScope(sk etime.ScopeKey, row int, di int) *etable.Table

LogRowScope performs logging for the given etime.ScopeKey, at given row, and saves data to file if open. di is a data parallel index, for networks capable of processing input patterns in parallel; the effective row is row + di.

func (*Logs) LogScope

func (lg *Logs) LogScope(sk etime.ScopeKey) *etable.Table

LogScope performs logging for the given etime.ScopeKey. Adds a new row, writes all the items, and saves data to file if open.

func (*Logs) MPIGatherTableRows

func (lg *Logs) MPIGatherTableRows(mode etime.Modes, time etime.Times, comm *mpi.Comm)

MPIGatherTableRows calls empi.GatherTableRows on the given log table using an "MPI" suffixed MiscTable that is then switched out with the main table, so that any subsequent aggregation etc operates as usual on the full set of data. IMPORTANT: this switch means that the number of rows in the table MUST be reset back to either 0 (e.g., ResetLog) or the target number of rows, after the table is used, otherwise it will grow exponentially!
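The exponential-growth hazard described above can be modeled with a toy gather step: each gather replaces a rank's table with the combined rows from all processes, so without resetting the row count between gathers, the table multiplies by nProcs on every iteration (the gather function here is a hypothetical simplification, not empi.GatherTableRows):

```go
package main

import "fmt"

// gather models the row-count effect of gathering each rank's rows
// across nProcs processes: the combined table has rows * nProcs rows.
func gather(rows, nProcs int) int { return rows * nProcs }

func main() {
	rows := 1
	for epoch := 0; epoch < 3; epoch++ {
		rows = gather(rows, 2)
		fmt.Println("after gather:", rows) // 2, then 4, then 8
		// correct usage: reset rows back to 1 (or 0) here, as the
		// doc above requires, to avoid this compounding
	}
}
```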

func (*Logs) MiscTable

func (lg *Logs) MiscTable(name string) *etable.Table

MiscTable gets a miscellaneous table, e.g., for misc analysis. If it doesn't exist, one is created.

func (*Logs) NamedIndexView

func (lg *Logs) NamedIndexView(mode etime.Modes, time etime.Times, name string) (*etable.IndexView, bool)

NamedIndexView returns a named Index View of a log table for a given mode, time. This is used for additional data aggregation, filtering etc. When accessing the first time during writing a new row of the log, it automatically shows a view of the entire table and returns true for 2nd arg. You can then filter, sort, etc as needed. Subsequent calls within same row Write will return the last filtered view, and false for 2nd arg -- can then just reuse view.
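The create-then-reuse pattern (full view plus true on first access, cached view plus false afterwards) can be sketched with a simplified stand-in; the types here are illustrative only:

```go
package main

import "fmt"

// LogTable is a stand-in for the NamedIndexView caching pattern
// described above: the first access in a row creates a full view
// and returns true; later accesses return the (possibly filtered)
// cached view and false.
type LogTable struct {
	Rows       int
	NamedViews map[string][]int
}

func (lt *LogTable) NamedIndexView(name string) ([]int, bool) {
	if v, ok := lt.NamedViews[name]; ok {
		return v, false // cached: reuse as-is
	}
	v := make([]int, lt.Rows) // newly created: full table
	for i := range v {
		v[i] = i
	}
	lt.NamedViews[name] = v
	return v, true
}

func main() {
	lt := &LogTable{Rows: 4, NamedViews: map[string][]int{}}
	v, isNew := lt.NamedIndexView("LastNRows")
	fmt.Println(len(v), isNew) // 4 true
	if isNew {
		lt.NamedViews["LastNRows"] = v[2:] // filter once on first access
	}
	v, isNew = lt.NamedIndexView("LastNRows")
	fmt.Println(len(v), isNew) // 2 false
}
```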

func (*Logs) NamedIndexViewScope

func (lg *Logs) NamedIndexViewScope(sk etime.ScopeKey, name string) (*etable.IndexView, bool)

NamedIndexViewScope returns a named Index View of a log table for the given etime.ScopeKey. This is used for additional data aggregation, filtering etc. When accessing the first time during writing a new row of the log, it automatically shows a view of the entire table and returns true for 2nd arg. You can then filter, sort, etc as needed. Subsequent calls within same row Write will return the last filtered view, and false for 2nd arg -- can then just reuse view.

func (*Logs) NewTable

func (lg *Logs) NewTable(mode, time string) *etable.Table

NewTable returns a new table configured for given mode, time scope

func (*Logs) NoPlot

func (lg *Logs) NoPlot(mode etime.Modes, time ...etime.Times)

NoPlot sets meta data to not plot for given scope mode, time. Typically all combinations of mode and time end up being generated, so you have to turn off plotting of cases not used.

func (*Logs) NoPlotScope

func (lg *Logs) NoPlotScope(sk etime.ScopeKey)

NoPlotScope sets meta data to not plot for given scope mode, time. Typically all combinations of mode and time end up being generated, so you have to turn off plotting of cases not used.

func (*Logs) PlotItems

func (lg *Logs) PlotItems(itemNames ...string)

PlotItems turns on Plot flag for given items

func (*Logs) ProcessItems

func (lg *Logs) ProcessItems()

ProcessItems is called in CreateTables, after all items have been added. It instantiates All scopes, and compiles multi-list scopes into single mode, time pairs

func (*Logs) ResetLog

func (lg *Logs) ResetLog(mode etime.Modes, time etime.Times)

ResetLog resets the log for given mode, time by setting the number of rows to 0. The IndexViews are reset too.

func (*Logs) RunStats

func (lg *Logs) RunStats(stats ...string)

RunStats records descriptive values for given stats across all runs, at Train Run scope, saving to RunStats misc table

func (*Logs) SetContext

func (lg *Logs) SetContext(stats *estats.Stats, net emer.Network)

SetContext sets the Context for logging Write functions to give general access to the stats and network

func (*Logs) SetFixMaxItems

func (lg *Logs) SetFixMaxItems(max float64, itemNames ...string)

SetFixMaxItems sets the FixMax flag and Range Max val for given items

func (*Logs) SetFixMinItems

func (lg *Logs) SetFixMinItems(min float64, itemNames ...string)

SetFixMinItems sets the FixMin flag and Range Min val for given items

func (*Logs) SetFloatMaxItems

func (lg *Logs) SetFloatMaxItems(itemNames ...string)

SetFloatMaxItems turns off the FixMax flag for given items

func (*Logs) SetFloatMinItems

func (lg *Logs) SetFloatMinItems(itemNames ...string)

SetFloatMinItems turns off the FixMin flag for given items

func (*Logs) SetLogFile

func (lg *Logs) SetLogFile(mode etime.Modes, time etime.Times, fnm string)

SetLogFile sets the log filename for given scope

func (*Logs) SetMeta

func (lg *Logs) SetMeta(mode etime.Modes, time etime.Times, key, val string)

SetMeta sets table meta data for given scope mode, time.

func (*Logs) SetMetaScope

func (lg *Logs) SetMetaScope(sk etime.ScopeKey, key, val string)

SetMetaScope sets table meta data for given scope

func (*Logs) Table

func (lg *Logs) Table(mode etime.Modes, time etime.Times) *etable.Table

Table returns the table for given mode, time

func (*Logs) TableDetails

func (lg *Logs) TableDetails(mode etime.Modes, time etime.Times) *LogTable

TableDetails returns the LogTable record of associated info for given table

func (*Logs) TableDetailsScope

func (lg *Logs) TableDetailsScope(sk etime.ScopeKey) *LogTable

TableDetailsScope returns the LogTable record of associated info for given table

func (*Logs) TableScope

func (lg *Logs) TableScope(sk etime.ScopeKey) *etable.Table

TableScope returns the table for given etime.ScopeKey

func (*Logs) WriteItems

func (lg *Logs) WriteItems(sk etime.ScopeKey, row int)

WriteItems calls all item Write functions within given scope providing the relevant Context for the function. Items are processed in the order added, to enable sequential dependencies to be used.

func (*Logs) WriteLastRowToFile

func (lg *Logs) WriteLastRowToFile(lt *LogTable)

WriteLastRowToFile writes the last row of table to file, if File != nil

type WriteFunc

type WriteFunc func(ctx *Context)

WriteFunc is a function that computes and sets log values. The Context provides information typically needed for logging.

type WriteMap

type WriteMap map[etime.ScopeKey]WriteFunc

WriteMap holds log writing functions for scope keys
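A WriteMap keyed by "Mode:Time" scope strings, with one function shared across multiple scopes, can be sketched as follows (the ScopeKey and Context types here are simplified stand-ins, not the actual etime/elog types):

```go
package main

import "fmt"

// Simplified stand-ins for etime.ScopeKey and elog.Context,
// for illustration only.
type ScopeKey string
type Context struct{ Row int }
type WriteFunc func(ctx *Context)
type WriteMap map[ScopeKey]WriteFunc

func main() {
	logRow := func(ctx *Context) { fmt.Println("logging row", ctx.Row) }
	wm := WriteMap{
		"Train:Epoch": logRow, // the same function used for two scopes
		"Test:Epoch":  logRow,
	}
	ctx := &Context{Row: 7}
	if fn, ok := wm["Train:Epoch"]; ok {
		fn(ctx) // prints "logging row 7"
	}
}
```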
