Documentation ¶
Overview ¶
Package sampleuv implements advanced sampling routines from explicit and implicit probability distributions.
Each sampling routine is implemented as a stateless function with a complementary wrapper type. The wrapper types allow the sampling routines to implement interfaces.
Index ¶
- Variables
- func IID(batch []float64, d distuv.Rander)
- func Importance(batch, weights []float64, target distuv.LogProber, ...)
- func LatinHypercube(batch []float64, q distuv.Quantiler, src *rand.Rand)
- func MetropolisHastings(batch []float64, initial float64, target distuv.LogProber, proposal MHProposal, ...)
- func Rejection(batch []float64, target distuv.LogProber, proposal distuv.RandLogProber, ...) (nProposed int, ok bool)
- type IIDer
- type Importancer
- type LatinHypercuber
- type MHProposal
- type MetropolisHastingser
- type Rejectioner
- type SampleUniformWeighted
- type Sampler
- type Weighted
- type WeightedSampler
Examples ¶
Constants ¶
This section is empty.
Variables ¶
var ErrRejection = errors.New("rejection: acceptance ratio above 1")
ErrRejection is returned when the constant in Rejection is not sufficiently high.
Functions ¶
func IID ¶
func IID(batch []float64, d distuv.Rander)
IID generates a set of independently and identically distributed samples from the input distribution.
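For illustration, a minimal sketch (with unqualified calls, in the in-package style of the examples below) that fills a slice with draws from a standard normal; the choice of distribution is arbitrary:

package main

import "github.com/gonum/stat/distuv"

func main() {
	// distuv.Normal implements distuv.Rander, so it can be passed
	// directly to IID.
	batch := make([]float64, 100)
	IID(batch, distuv.Normal{Mu: 0, Sigma: 1})
	// batch now holds 100 independent standard-normal draws.
}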
func Importance ¶
func Importance(batch, weights []float64, target distuv.LogProber, proposal distuv.RandLogProber)
Importance sampling generates len(batch) samples from the proposal distribution, and stores the locations and importance sampling weights in place.
Importance sampling is a variance reduction technique where samples are generated from a proposal distribution, q(x), instead of the target distribution p(x). This allows relatively unlikely samples in p(x) to be generated more frequently.
The importance sampling weight at x is given by p(x)/q(x). To reduce variance, a good proposal distribution will bound this sampling weight. This implies the support of q(x) should be at least as broad as p(x), and q(x) should be "fatter tailed" than p(x).
If weights is nil, the weights are not stored. The length of weights must equal the length of batch, otherwise Importance will panic.
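As a sketch of the idea, the following estimates the mean of a narrow normal target by sampling from a wider normal proposal, so the importance weights stay bounded. The distributions are arbitrary choices for illustration, not part of the package:

package main

import "github.com/gonum/stat/distuv"

func main() {
	target := distuv.Normal{Mu: 1, Sigma: 0.5}
	proposal := distuv.Normal{Mu: 1, Sigma: 2} // broader support than target

	batch := make([]float64, 1000)
	weights := make([]float64, 1000)
	Importance(batch, weights, target, proposal)

	// Form the self-normalized importance sampling estimate of the mean.
	var sum, wsum float64
	for i, x := range batch {
		sum += weights[i] * x
		wsum += weights[i]
	}
	mean := sum / wsum
	_ = mean
}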
func LatinHypercube ¶
func LatinHypercube(batch []float64, q distuv.Quantiler, src *rand.Rand)
LatinHypercube generates len(batch) samples using Latin hypercube sampling from the given distribution. If src != nil, it will be used to generate random numbers, otherwise rand.Float64 will be used.
Latin hypercube sampling divides the cumulative distribution function into equally spaced bins and guarantees that one sample is generated per bin. Within each bin, the location is randomly sampled. The distuv.UnitUniform variable can be used for easy generation from the unit interval.
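A minimal sketch of stratified generation on the unit interval, using the distuv.UnitUniform variable mentioned above:

package main

import "github.com/gonum/stat/distuv"

func main() {
	// One sample is generated in each of the 10 equal-probability
	// bins of the unit interval; the location within each bin is
	// chosen at random.
	batch := make([]float64, 10)
	LatinHypercube(batch, distuv.UnitUniform, nil)
}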
func MetropolisHastings ¶
func MetropolisHastings(batch []float64, initial float64, target distuv.LogProber, proposal MHProposal, src *rand.Rand)
MetropolisHastings generates len(batch) samples using the Metropolis-Hastings algorithm (http://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm), with the given target and proposal distributions, starting at the initial location and storing the results in-place into batch. If src != nil, it will be used to generate random numbers, otherwise rand.Float64 will be used.
Metropolis-Hastings is a Markov-chain Monte Carlo algorithm that generates samples according to the distribution specified by target by using the Markov chain implicitly defined by the proposal distribution. At each iteration, a proposal point is generated randomly from the current location. This proposal point is accepted with probability
p = min(1, (target(new) * proposal(current|new)) / (target(current) * proposal(new|current)))
If the new location is accepted, it is stored into batch and becomes the new current location. If it is rejected, the current location remains and is stored into batch. Thus, a location is stored into batch at every iteration.
The samples in Metropolis Hastings are correlated with one another through the Markov chain. As a result, the initial value can have a significant influence on the early samples, and so, typically, the first samples generated by the chain are ignored. This is known as "burn-in", and can be accomplished with slicing. The best choice for burn-in length will depend on the sampling and target distributions.
Many choose to have a sampling "rate" where a number of samples are ignored in between each kept sample. This helps decorrelate the samples from one another, but also reduces the number of available samples. A sampling rate can be implemented with successive calls to MetropolisHastings.
Example (Burnin) ¶
package main

import "github.com/gonum/stat/distuv"

type ProposalDist struct {
	Sigma float64
}

func (p ProposalDist) ConditionalRand(y float64) float64 {
	return distuv.Normal{Mu: y, Sigma: p.Sigma}.Rand()
}

func (p ProposalDist) ConditionalLogProb(x, y float64) float64 {
	return distuv.Normal{Mu: y, Sigma: p.Sigma}.LogProb(x)
}

func main() {
	n := 1000    // The number of samples to generate.
	burnin := 50 // Number of samples to ignore at the start.
	var initial float64
	// target is the distribution from which we would like to sample.
	target := distuv.Weibull{K: 5, Lambda: 0.5}
	// proposal is the proposal distribution. Here, we are choosing
	// a tight Gaussian distribution around the current location. In
	// typical problems, if Sigma is too small, it takes a lot of samples
	// to move around the distribution. If Sigma is too large, it can be hard
	// to find acceptable samples.
	proposal := ProposalDist{Sigma: 0.2}

	samples := make([]float64, n+burnin)
	MetropolisHastings(samples, initial, target, proposal, nil)

	// Remove the initial samples through slicing.
	samples = samples[burnin:]
}
Output:
Example (SamplingRate) ¶
package main

import "github.com/gonum/stat/distuv"

func max(a, b int) int {
	if a < b {
		return b
	}
	return a
}

func main() {
	// See Burnin example for a description of these quantities.
	n := 1000
	burnin := 300
	var initial float64
	target := distuv.Weibull{K: 5, Lambda: 0.5}
	proposal := ProposalDist{Sigma: 0.2}

	// Successive samples are correlated with one another through the
	// Markov chain defined by the proposal distribution. To get less
	// correlated samples, one may use a sampling rate, in which only
	// one sample from every few is accepted from the chain. This can
	// be accomplished through a for loop.
	rate := 50

	tmp := make([]float64, max(rate, burnin))

	// First deal with burnin.
	tmp = tmp[:burnin]
	MetropolisHastings(tmp, initial, target, proposal, nil)
	// The final sample in tmp is the final point in the chain.
	// Use it as the new initial location.
	initial = tmp[len(tmp)-1]

	// Now, generate samples by keeping one of every rate samples.
	tmp = tmp[:rate]
	samples := make([]float64, n)
	samples[0] = initial
	for i := 1; i < len(samples); i++ {
		MetropolisHastings(tmp, initial, target, proposal, nil)
		initial = tmp[len(tmp)-1]
		samples[i] = initial
	}
}
Output:
func Rejection ¶
func Rejection(batch []float64, target distuv.LogProber, proposal distuv.RandLogProber, c float64, src *rand.Rand) (nProposed int, ok bool)
Rejection generates len(batch) samples using the rejection sampling algorithm and stores them in-place into batch. Sampling continues until batch is filled. Rejection returns the total number of proposed locations and a boolean indicating whether the rejection sampling assumption is violated (see details below). If the returned boolean is false, all elements of batch are set to NaN. If src is not nil, it will be used to generate random numbers, otherwise rand.Float64 will be used.
Rejection sampling generates points from the target distribution by using the proposal distribution. At each step of the algorithm, the proposed point is accepted with probability
p = target(x) / (proposal(x) * c)
where target(x) is the probability of the point according to the target distribution and proposal(x) is the probability according to the proposal distribution. The constant c must be chosen such that target(x) < proposal(x) * c for all x. The expected number of proposed samples is len(batch) * c.
Target may return the true (log of the) probability of the location, or it may return a value that is proportional to the probability (logprob + constant). This is useful for cases where the probability distribution is only known up to a normalization constant.
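A minimal sketch with an arbitrary choice of distributions: a unit normal target under a wider normal proposal. The density ratio target(x)/proposal(x) peaks at x = 0 with value 2 (the ratio of the standard deviations), so c = 2.5 bounds it safely:

package main

import "github.com/gonum/stat/distuv"

func main() {
	target := distuv.Normal{Mu: 0, Sigma: 1}
	proposal := distuv.Normal{Mu: 0, Sigma: 2}

	batch := make([]float64, 1000)
	nProposed, ok := Rejection(batch, target, proposal, 2.5, nil)
	if !ok {
		// c was too small: the assumption target(x) < proposal(x)*c
		// was violated, and batch is filled with NaN.
	}
	_ = nProposed
}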
Types ¶
type Importancer ¶
type Importancer struct {
	Target   distuv.LogProber
	Proposal distuv.RandLogProber
}
Importancer is a wrapper around the Importance sampling generation method.
func (Importancer) SampleWeighted ¶
func (l Importancer) SampleWeighted(batch, weights []float64)
SampleWeighted generates len(batch) samples using the Importance sampling generation procedure.
type LatinHypercuber ¶
type LatinHypercuber struct {
	Q   distuv.Quantiler
	Src *rand.Rand
}
LatinHypercuber is a wrapper around the LatinHypercube sampling generation method.
func (LatinHypercuber) Sample ¶
func (l LatinHypercuber) Sample(batch []float64)
Sample generates len(batch) samples using the LatinHypercube generation procedure.
type MHProposal ¶
type MHProposal interface {
	// ConditionalLogProb returns the log of the probability of the
	// first argument conditioned on being at the second argument,
	//  p(x|y)
	ConditionalLogProb(x, y float64) (prob float64)

	// ConditionalRand generates a new random location conditioned on
	// being at the location y.
	ConditionalRand(y float64) (x float64)
}
MHProposal defines a proposal distribution for Metropolis Hastings.
type MetropolisHastingser ¶
type MetropolisHastingser struct {
	Initial  float64
	Target   distuv.LogProber
	Proposal MHProposal
	Src      *rand.Rand

	BurnIn int
	Rate   int
}
MetropolisHastingser is a wrapper around the MetropolisHastings sampling type.
BurnIn sets the number of samples to discard before keeping the first sample. A properly set BurnIn rate will decorrelate the sampling chain from the initial location. The proper BurnIn value will depend on the mixing time of the Markov chain defined by the target and proposal distributions.
Rate sets the number of samples to discard in between each kept sample. A higher rate will better approximate independently and identically distributed samples, while a lower rate will keep more information (at the cost of higher correlation between samples). If Rate is 0 it is defaulted to 1.
The initial value is NOT changed during calls to Sample.
func (MetropolisHastingser) Sample ¶
func (m MetropolisHastingser) Sample(batch []float64)
Sample generates len(batch) samples using the Metropolis Hastings sample generation method. The initial location is NOT updated during the call to Sample.
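A sketch of the wrapper in use, reusing the illustrative ProposalDist type from the examples above; BurnIn and Rate behave as described for the struct fields:

m := MetropolisHastingser{
	Initial:  0,
	Target:   distuv.Weibull{K: 5, Lambda: 0.5},
	Proposal: ProposalDist{Sigma: 0.2},
	BurnIn:   300,
	Rate:     50,
}
samples := make([]float64, 1000)
// Burn-in and the sampling rate are handled internally; samples
// receives only the kept locations.
m.Sample(samples)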
type Rejectioner ¶
type Rejectioner struct {
	C        float64
	Target   distuv.LogProber
	Proposal distuv.RandLogProber
	Src      *rand.Rand
	// contains filtered or unexported fields
}
Rejectioner is a wrapper around the Rejection sampling generation procedure. If the rejection sampling fails during the call to Sample, all samples will be set to math.NaN() and a call to Err will return a non-nil value.
func (*Rejectioner) Err ¶
func (r *Rejectioner) Err() error
Err returns nil if the most recent call to Sample was successful, and returns ErrRejection if it was not.
func (*Rejectioner) Proposed ¶
func (r *Rejectioner) Proposed() int
Proposed returns the number of samples proposed during the most recent call to Sample.
func (*Rejectioner) Sample ¶
func (r *Rejectioner) Sample(batch []float64)
Sample generates len(batch) samples using the Rejection sampling generation procedure. Rejection sampling may fail if the constant is insufficiently high, as described in the function comment for Rejection. If the generation fails, the samples are set to math.NaN(), and a call to Err will return a non-nil value.
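A sketch mirroring the Rejection example above, with failure checked through the wrapper:

r := &Rejectioner{
	C:        2.5,
	Target:   distuv.Normal{Mu: 0, Sigma: 1},
	Proposal: distuv.Normal{Mu: 0, Sigma: 2},
}
batch := make([]float64, 1000)
r.Sample(batch)
if r.Err() != nil {
	// C was too small; batch is filled with math.NaN().
}
_ = r.Proposed() // Number of proposals used by the last Sample call.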
type SampleUniformWeighted ¶
type SampleUniformWeighted struct {
	Sampler
}
SampleUniformWeighted wraps a Sampler type to create a WeightedSampler where all weights are equal.
func (SampleUniformWeighted) SampleWeighted ¶
func (w SampleUniformWeighted) SampleWeighted(batch, weights []float64)
SampleWeighted generates len(batch) samples from the embedded Sampler type and sets all of the weights equal to 1. If len(batch) and len(weights) are not equal, SampleWeighted will panic.
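For instance, any Sampler can be lifted to a WeightedSampler; here a MetropolisHastingser (with the illustrative ProposalDist from the examples above) is wrapped so it can be used where weights are expected:

w := SampleUniformWeighted{
	Sampler: MetropolisHastingser{
		Target:   distuv.Weibull{K: 5, Lambda: 0.5},
		Proposal: ProposalDist{Sigma: 0.2},
	},
}
batch := make([]float64, 100)
weights := make([]float64, 100)
w.SampleWeighted(batch, weights) // Every weights[i] is set to 1.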
type Sampler ¶
type Sampler interface {
	Sample(batch []float64)
}
Sampler generates a batch of samples according to the rule specified by the implementing type. The number of samples generated is equal to len(batch), and the samples are stored in-place into the input.
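A sketch of code written against the interface; estimateMean is a hypothetical helper, not part of the package:

// estimateMean works with any Sampler, so the generation strategy
// (LatinHypercuber, MetropolisHastingser, ...) can be swapped
// without changing downstream code.
func estimateMean(s Sampler, n int) float64 {
	batch := make([]float64, n)
	s.Sample(batch)
	var sum float64
	for _, v := range batch {
		sum += v
	}
	return sum / float64(n)
}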
type Weighted ¶
type Weighted struct {
	// contains filtered or unexported fields
}
Weighted provides sampling without replacement from a collection of items with non-uniform probability.
func NewWeighted ¶
func NewWeighted(w []float64, src *rand.Rand) Weighted
NewWeighted returns a Weighted for the weights w. If src is nil, rand.Rand is used as the random source.
Note that sampling from weights with a high variance or overall low absolute value sum may result in problems with numerical stability.
func (Weighted) Len ¶
func (s Weighted) Len() int
Len returns the number of items held by the Weighted, including items already taken.
func (Weighted) ReweightAll ¶
func (s Weighted) ReweightAll(w []float64)
ReweightAll sets the weight of all items in the Weighted. ReweightAll panics if len(w) != s.Len().
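The drawing method itself is not shown in this excerpt; assuming a Take method that returns the index of the chosen item and reports whether any weighted item remained (as in gonum's implementation), usage might look like:

w := NewWeighted([]float64{1, 10, 100}, nil)
for {
	idx, ok := w.Take()
	if !ok {
		break // All items have been drawn.
	}
	// idx was chosen with probability proportional to its weight;
	// it will not be returned again.
	_ = idx
}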
type WeightedSampler ¶
type WeightedSampler interface {
	SampleWeighted(batch, weights []float64)
}
WeightedSampler generates a batch of samples and their relative weights according to the rule specified by the implementing type. The number of samples generated is equal to len(batch), and the samples and weights are stored in-place into the inputs. The length of weights must equal len(batch), otherwise SampleWeighted will panic.
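A sketch of a consumer of the interface; weightedMean is a hypothetical helper, not part of the package:

// weightedMean accepts any WeightedSampler, such as Importancer or
// SampleUniformWeighted, and returns the weighted sample average.
func weightedMean(ws WeightedSampler, n int) float64 {
	batch := make([]float64, n)
	weights := make([]float64, n)
	ws.SampleWeighted(batch, weights)
	var sum, wsum float64
	for i, x := range batch {
		sum += weights[i] * x
		wsum += weights[i]
	}
	return sum / wsum
}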