Documentation ¶
Index ¶
- Constants
- func HellingerDiff(v1 *Variable, v2 *Variable) float64
- func JSDivergence(v1 *Variable, v2 *Variable) float64
- func MaxAbsDiff(v1 *Variable, v2 *Variable) float64
- func MeanAbsDiff(v1 *Variable, v2 *Variable) float64
- type ErrorSuite
- type FieldReader
- type Function
- type Model
- type Reader
- type SolReader
- type Solution
- type UAIReader
- type Variable
- type VariableIter
Constants ¶
const ( BAYES = "BAYES" MARKOV = "MARKOV" )
Model type constants (strings) matching the UAI file formats
Variables ¶
This section is empty.
Functions ¶
func HellingerDiff ¶
HellingerDiff returns the Hellinger error between the model's current marginal estimate and this solution. Like AbsError, the result is averaged over the variables. The solution's marginals are assumed normalized (sum=1.0), while the model's marginals are assumed non-normalized (but positive).
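As an illustration of the assumptions above, here is a minimal, self-contained sketch of a Hellinger distance between two discrete distributions, where the estimate is scaled to sum to 1 before comparison. The function name `hellinger` is hypothetical and this is not the package's actual implementation:

```go
package main

import (
	"fmt"
	"math"
)

// hellinger computes the Hellinger distance between two discrete
// distributions. est may be non-normalized (as with a model's working
// marginals), so it is scaled to sum to 1 first; sol is assumed
// already normalized.
func hellinger(est, sol []float64) float64 {
	tot := 0.0
	for _, v := range est {
		tot += v
	}
	sum := 0.0
	for i := range est {
		d := math.Sqrt(est[i]/tot) - math.Sqrt(sol[i])
		sum += d * d
	}
	return math.Sqrt(sum) / math.Sqrt2
}

func main() {
	// The non-normalized estimate [2, 2] normalizes to uniform,
	// so its distance to [0.5, 0.5] is 0.
	fmt.Println(hellinger([]float64{2, 2}, []float64{0.5, 0.5}))
}
```

The 1/sqrt(2) factor bounds the result to [0, 1], with 1 reached only by distributions with disjoint support.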
func JSDivergence ¶
JSDivergence returns the Jensen-Shannon divergence, which is a symmetric generalization of the KL divergence.
func MaxAbsDiff ¶
MaxAbsDiff returns the maximum difference found between the two prob dists
func MeanAbsDiff ¶
MeanAbsDiff returns the mean of the differences found between the two prob dists
Types ¶
type ErrorSuite ¶
type ErrorSuite struct { MeanMeanAbsError float64 MeanMaxAbsError float64 MeanHellinger float64 MeanJSDiverge float64 MaxMeanAbsError float64 MaxMaxAbsError float64 MaxHellinger float64 MaxJSDiverge float64 }
ErrorSuite represents all the loss/error functions we use to judge progress across a joint distribution. Errors beginning with Mean are the mean across all the variables in the joint distribution, while those beginning with Max are the maximum across all the variables. So MeanMaxAbsError is the MEAN of the Maximum Absolute Error for each of the marginal variables. Likewise, MaxMeanAbsError is the MAXIMUM of the mean absolute difference for each of the marginal variables.
func NewErrorSuite ¶
func NewErrorSuite(vars1 []*Variable, vars2 []*Variable) (*ErrorSuite, error)
NewErrorSuite returns an ErrorSuite with all calculated error functions
type FieldReader ¶
FieldReader is just a simple reader for basic file formats.
func NewFieldReader ¶
func NewFieldReader(data string) *FieldReader
NewFieldReader constructs a new field reader around the given data
func (*FieldReader) Read ¶
func (fr *FieldReader) Read() (string, error)
Read returns the next space-delimited field/token
func (*FieldReader) ReadFloat ¶
func (fr *FieldReader) ReadFloat() (float64, error)
ReadFloat reads the next token as a float
func (*FieldReader) ReadInt ¶
func (fr *FieldReader) ReadInt() (int, error)
ReadInt reads the next token as an int
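To make the FieldReader behavior concrete, here is a self-contained sketch of a whitespace-delimited token reader built on bufio.Scanner. The type and constructor names are lowercased stand-ins, not the package's code, but the Read/ReadInt semantics mirror the descriptions above:

```go
package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

// fieldReader yields whitespace-delimited tokens from a string and
// parses them on demand, similar in spirit to the package's FieldReader.
type fieldReader struct{ sc *bufio.Scanner }

func newFieldReader(data string) *fieldReader {
	sc := bufio.NewScanner(strings.NewReader(data))
	sc.Split(bufio.ScanWords) // split on any whitespace, including newlines
	return &fieldReader{sc: sc}
}

// Read returns the next space-delimited field/token.
func (fr *fieldReader) Read() (string, error) {
	if !fr.sc.Scan() {
		return "", fmt.Errorf("no more fields")
	}
	return fr.sc.Text(), nil
}

// ReadInt reads the next token and parses it as an int.
func (fr *fieldReader) ReadInt() (int, error) {
	tok, err := fr.Read()
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(tok)
}

func main() {
	// UAI-style preambles are exactly this kind of token stream.
	fr := newFieldReader("MARKOV 3\n2 2 2")
	typ, _ := fr.Read()
	n, _ := fr.ReadInt()
	fmt.Println(typ, n) // MARKOV 3
}
```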
type Function ¶
type Function struct { Name string // Name for function (or just a 0-based index in UAI formats) Vars []*Variable // Vars in function Table []float64 // CPT - len is product of variables' Card IsLog bool // True if values are log(v) - default is false }
A Function represents a function of Variables (which may be a CPT or a more general factor). In a Markov network, this is defined on a clique. Note that factors in a Markov network are assumed to return NON-normalized probabilities. You need Z (the partition function) to normalize to "real" probabilities.
The actual ordering of the Table values matches the order of the variables, where the variables are ordered from "most" to "least" significant. (This is the same order used in UAI data files). As an example, let's assume 3 boolean variables in the order [A, B, C]. Let's further assume that this joint probability distribution is completely uniform. The CPT would look like:
ABC   P(A,B,C)
---   --------
000    0.125
001    0.125
010    0.125
011    0.125
100    0.125
101    0.125
110    0.125
111    0.125
And our Table array would be in the same order. Since we assume that a variable's domain is [0, C-1] where C is cardinality (e.g. a boolean var has C=2 with values {0,1}), we can map directly from an ordered list of values (in the same order as the variables) to an index in the table (see Eval).
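The mapping just described is mixed-radix positional arithmetic; this standalone sketch (with a hypothetical helper name, not the package's Eval) shows it:

```go
package main

import "fmt"

// tableIndex maps an assignment of values (ordered like the function's
// variables, most significant first) to a flat index into Table, given
// the cardinalities of those variables. This is standard mixed-radix
// positional arithmetic: idx = ((v0*c1 + v1)*c2 + v2)...
func tableIndex(cards, vals []int) int {
	idx := 0
	for i := range cards {
		idx = idx*cards[i] + vals[i]
	}
	return idx
}

func main() {
	// Three boolean variables [A, B, C]: the assignment 101
	// reads as binary 101, i.e. Table index 5.
	fmt.Println(tableIndex([]int{2, 2, 2}, []int{1, 0, 1})) // 5
}
```

With all-boolean variables this degenerates to reading the assignment as a binary number, which is exactly how the 000..111 rows of the table above line up with Table indices 0..7.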
func NewFunction ¶
NewFunction creates a function from an index and a list of variables
func (*Function) AddValue ¶
AddValue adds a value based on the current input setting in values. This is used for building new functions (e.g. see collapsed-gibbs)
func (*Function) Eval ¶
Eval returns the result of the function, assuming that values are in the same order as f.Vars.
func (*Function) UseLogSpace ¶
UseLogSpace converts the current factor to Log (base-e) space IFF it has not already been done
type Model ¶
type Model struct { Type string // PGM type - should match a constant Name string // Model name Vars []*Variable // Variables (nodes) in the model Funcs []*Function `json:"-"` // Function of variables (CPT) in the model }
Model represents a PGM
func NewModelFromBuffer ¶
NewModelFromBuffer creates a model from the given pre-read data
func NewModelFromFile ¶
NewModelFromFile initializes and creates a model from the specified source.
func (*Model) ApplyEvidenceFromFile ¶
ApplyEvidenceFromFile will read, parse, and apply the evidence
type Reader ¶
type Reader interface { ReadModel(data []byte) (*Model, error) ApplyEvidence(data []byte, m *Model) error }
Reader implementors instantiate a model from a byte stream and optionally apply evidence from a second byte stream.
type SolReader ¶
SolReader implementors read a solution (currently we only support marginal solutions)
type Solution ¶
type Solution struct {
Vars []*Variable // Variables with their marginals
}
Solution to a marginal estimation problem specified on a Model. It also provides metrics for evaluating a model's marginal estimates against the solution.
func NewSolutionFromBuffer ¶
NewSolutionFromBuffer reads a UAI MAR solution file from the specified buffer
func NewSolutionFromFile ¶
NewSolutionFromFile reads a UAI MAR solution file
type UAIReader ¶
type UAIReader struct { }
UAIReader reads the UAI inference data set format. This format has also been used at competitions like PIC2001 at PASCAL2. In fact, a very good description of the format is available at http://www.cs.huji.ac.il/project/PASCAL/fileFormat.php
func (UAIReader) ApplyEvidence ¶
ApplyEvidence is part of the reader interface - read the evidence file and apply to the model.
func (UAIReader) ReadMargSolution ¶
ReadMargSolution implements the model.SolReader interface
type Variable ¶
type Variable struct { ID int // A numeric ID for tracking a variable Name string // Variable name (just a zero-based index in UAI formats) Card int // Cardinality - values are assumed to be 0 to Card-1 FixedVal int // Current fixed value (fixed by evidence): -1 is no evidence, else 0 to Card-1 Marginal []float64 // Current best estimate for marginal distribution: len should equal Card State map[string]float64 // State/stats a sampler can track - mainly for JSON tracking Collapsed bool // For Collapsed == True, you should just sample from Marginal (default is False) }
Variable represents a single node in a PGM, a random variable, or a marginal distribution.
func NewVariable ¶
NewVariable is our standard way to create a variable from an index and a cardinality. The marginal will be set to uniform.
func (*Variable) Clone ¶
Clone returns a deep copy of the variable. Marginal is normalized, and the State map is copied.
func (*Variable) CreateName ¶
CreateName just gives a name to a variable based on a numeric index
func (*Variable) NormMarginal ¶
NormMarginal ensures/scales the current Marginal vector to sum to 1
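Normalization here is just dividing by the vector's sum. The sketch below is an independent version (the uniform fallback for a degenerate all-zero vector is my assumption, not documented package behavior):

```go
package main

import "fmt"

// normMarginal scales m in place so that it sums to 1. If the vector
// sums to zero (degenerate), it falls back to uniform - an assumption
// for this sketch, not necessarily what the package does.
func normMarginal(m []float64) {
	tot := 0.0
	for _, v := range m {
		tot += v
	}
	if tot <= 0 {
		for i := range m {
			m[i] = 1.0 / float64(len(m))
		}
		return
	}
	for i := range m {
		m[i] /= tot
	}
}

func main() {
	m := []float64{2, 1, 1}
	normMarginal(m)
	fmt.Println(m) // [0.5 0.25 0.25]
}
```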
type VariableIter ¶
type VariableIter struct {
// contains filtered or unexported fields
}
VariableIter is an iterator over all possible values for a list of variables
func NewVariableIter ¶
func NewVariableIter(src []*Variable, honorFixed bool) (*VariableIter, error)
NewVariableIter returns a new iterator over the list of variables
func (*VariableIter) Next ¶
func (vi *VariableIter) Next() bool
Next advances to the next value and returns true if there are still values to see
func (*VariableIter) Val ¶
func (vi *VariableIter) Val(curr []int) error
Val populates curr with the current value
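Iterating over all assignments of a variable list works like an odometer: the last (least significant) variable advances fastest, matching the Table ordering described under Function. Here is a self-contained sketch of that loop (a callback-based helper, not the package's iterator API, and it ignores the honorFixed/evidence handling):

```go
package main

import "fmt"

// iterate visits every assignment of values for variables with the
// given cardinalities, least significant (last) variable fastest.
// Evidence/fixed values are not handled in this sketch.
func iterate(cards []int, visit func(vals []int)) {
	vals := make([]int, len(cards))
	for {
		visit(append([]int(nil), vals...)) // pass a copy to the callback
		// Advance like an odometer, carrying left on overflow.
		i := len(vals) - 1
		for ; i >= 0; i-- {
			vals[i]++
			if vals[i] < cards[i] {
				break
			}
			vals[i] = 0
		}
		if i < 0 {
			return // every position wrapped: all assignments seen
		}
	}
}

func main() {
	// Two boolean variables: 00, 01, 10, 11 - the same order as
	// the rows of a UAI-style factor table.
	iterate([]int{2, 2}, func(vals []int) {
		fmt.Println(vals)
	})
}
```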