neldermead

package module
v0.1.2
Published: Feb 13, 2024 License: MIT Imports: 4 Imported by: 0

README

A Nelder-Mead implementation in Go

The Go package documentation has in-depth usage help and algorithm-tuning suggestions. This package aims to be easy to use and easy to tune.

This repository contains a Go implementation of the Nelder-Mead optimization algorithm, which is a gradient-free optimization method for continuous, possibly non-convex, and noisy functions in low to moderate dimensions.

The algorithm uses a simplex, a polytope with n+1 vertices in n-dimensional space, to explore the search space. The algorithm iteratively updates the simplex by reflecting, expanding, contracting, or shrinking it, based on the values of the objective function at its vertices. The algorithm terminates when the difference in the objective function values between the best and worst points in the simplex falls below the specified tolerance or when the maximum number of iterations is reached.
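The reflect, expand, contract, and shrink moves are all affine combinations of the worst vertex and the centroid of the remaining vertices. As a self-contained sketch of the geometry (not the package's internal code), a single reflection step in 2-D looks like this:

```go
package main

import "fmt"

// centroid averages every simplex vertex except the one at index skip
// (conventionally the worst vertex).
func centroid(simplex [][]float64, skip int) []float64 {
	c := make([]float64, len(simplex[0]))
	for i, p := range simplex {
		if i == skip {
			continue
		}
		for j := range c {
			c[j] += p[j]
		}
	}
	for j := range c {
		c[j] /= float64(len(simplex) - 1)
	}
	return c
}

// reflect maps the worst vertex through the centroid:
// x_r = c + alpha*(c - x_worst), where alpha is the reflection
// coefficient (DefaultAlpha = 1.0 in this package).
func reflect(worst, c []float64, alpha float64) []float64 {
	r := make([]float64, len(c))
	for j := range r {
		r[j] = c[j] + alpha*(c[j]-worst[j])
	}
	return r
}

func main() {
	simplex := [][]float64{{0, 0}, {1, 0}, {0, 1}} // 3 vertices in 2-D
	worst := 2
	cen := centroid(simplex, worst)
	fmt.Println(reflect(simplex[worst], cen, 1.0)) // prints [1 -1]
}
```

If the reflected candidate improves on the best vertex, the expansion step pushes further along the same direction with the Gamma coefficient; otherwise contraction and shrinking pull the simplex inward.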

Installation

This package can be installed using Go modules with the following command:

go get github.com/crhntr/neldermead

Usage

Try it in the Go playground

The Rosenbrock function is a well-known test function for optimization algorithms, and its global minimum is at (1, 1), where the function value is 0. The Run function takes the Rosenbrock function as the objective function to minimize, the initial guess x0 as the starting point, and the Options struct with the desired parameters for the Nelder-Mead algorithm.

In the example below, the algorithm finds a solution close to the global minimum with a very small objective function value.

package main

import (
	"fmt"
	"math"

	"github.com/crhntr/neldermead"
)

func main() {
	rosenbrock := func(x []float64) float64 {
		return math.Pow(1-x[0], 2) + 100*math.Pow(x[1]-x[0]*x[0], 2)
	}
	x0 := []float64{-1, -1} // initial guess
	options := neldermead.NewOptions()

	point, err := neldermead.Run(rosenbrock, x0, options)
	if err != nil {
		fmt.Printf("Error: %v\n", err)
		return
	}
	fmt.Printf("Solution: %.6f\n", point.X)
	fmt.Printf("Objective function value: %.6f\n", point.F)
}

The output of this program should be:

Solution: [1.000329 1.000714]
Objective function value: 0.000000

Now let's look at an example with constraints:

package main

import (
	"fmt"
	"math"
	
	"github.com/crhntr/neldermead"
)

func main() {
	// Define the objective function to be minimized.
	objective := func(x []float64) float64 {
		// y = x^3
		return math.Pow(x[0], 3) - x[1]
	}

	// Define the starting point for the optimizer.
	x0 := []float64{4, 5}

	// Define the optimization options.
	options := neldermead.NewOptions()
	// (optional) add constraints
	options.Constraints = []neldermead.Constraint{
		{Min: 3, Max: 5},
		{Min: 0, Max: 10},
	}

	// Run the optimizer.
	result, err := neldermead.Run(objective, x0, options)
	if err != nil {
		panic(err)
	}

	fmt.Printf("Found minimum at x = %v, f(x) = %.2f\n", result.X, result.F)
}

The output of this program should be:

Found minimum at x = [3 10], f(x) = 17.00

Documentation

Index

Examples

Constants

const (
	DefaultAlpha         = 1.0
	DefaultBeta          = 0.5
	DefaultGamma         = 2.0
	DefaultDelta         = 0.5
	DefaultTolerance     = 1e-6
	DefaultMaxIterations = 1000
)

Variables

This section is empty.

Functions

This section is empty.

Types

type Constraint

type Constraint struct {
	Min, Max float64
}
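A Constraint is a per-dimension box bound. The documentation does not spell out how the bounds are enforced, but a common way to honor box constraints in Nelder-Mead is to clamp each coordinate of a candidate point into [Min, Max]. A hypothetical, self-contained sketch of that idea (the local Constraint type mirrors the package's):

```go
package main

import "fmt"

// Constraint mirrors the package's per-dimension bound.
type Constraint struct {
	Min, Max float64
}

// clamp projects x into the box described by constraints, one
// dimension at a time. This illustrates how box bounds are typically
// enforced; it is not necessarily what the package does internally.
func clamp(x []float64, constraints []Constraint) {
	for i, c := range constraints {
		if x[i] < c.Min {
			x[i] = c.Min
		}
		if x[i] > c.Max {
			x[i] = c.Max
		}
	}
}

func main() {
	x := []float64{2.5, 11}
	clamp(x, []Constraint{{Min: 3, Max: 5}, {Min: 0, Max: 10}})
	fmt.Println(x) // prints [3 10]
}
```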

type ErrorSimplexCollapse

type ErrorSimplexCollapse struct{}

func (ErrorSimplexCollapse) Error

func (ErrorSimplexCollapse) Error() string

type Objective

type Objective = func(x []float64) float64
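Since Objective is a type alias for a plain function signature, any closure with that shape works, which makes it easy to capture fixed data such as observations for a least-squares fit. A small illustrative example, not taken from the package:

```go
package main

import "fmt"

// Objective mirrors the package's alias.
type Objective = func(x []float64) float64

// sumOfSquares builds an objective measuring squared distance from a
// fixed target captured by the closure.
func sumOfSquares(target []float64) Objective {
	return func(x []float64) float64 {
		var s float64
		for i := range x {
			d := x[i] - target[i]
			s += d * d
		}
		return s
	}
}

func main() {
	f := sumOfSquares([]float64{1, 2})
	fmt.Println(f([]float64{1, 2})) // prints 0: x is exactly the target
	fmt.Println(f([]float64{0, 0})) // prints 5: 1*1 + 2*2
}
```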

type Options

type Options struct {
	// Alpha is the reflection coefficient used in the Nelder-Mead algorithm.
	// It controls the size of the reflection step.
	// A value of 1.0 is a common choice.
	// Increasing Alpha may speed up convergence, but setting it too large may cause instability or oscillations.
	// In general, it is a positive value.
	// It should be tuned based on the specific optimization problem and the characteristics of the objective function.
	Alpha float64

	// Beta is the contraction coefficient used in the Nelder-Mead algorithm.
	// It controls the size of the contraction step when the reflected point does not improve the objective function value.
	// A common choice is 0.5, which corresponds to a step halfway between the worst point and the centroid.
	// Beta should be a positive value between 0 and 1.
	// Decreasing Beta may lead to faster convergence for functions with narrow valleys, but setting it too small may cause instability or slow convergence.
	Beta float64

	// Gamma is the expansion coefficient used in the Nelder-Mead algorithm.
	// It controls the size of the expansion step when the reflected point improves the objective function value significantly.
	// A common choice is 2.0, which doubles the step size from the centroid to the reflected point.
	// Gamma should be a positive value greater than 1.
	// Increasing Gamma can accelerate convergence for functions with wide valleys, but setting it too large may cause instability or overshooting.
	Gamma float64

	// Delta is the shrinkage coefficient used in the Nelder-Mead algorithm.
	// It controls the size of the shrinking step when the contraction step fails to improve the objective function value.
	// A common choice is 0.5, which reduces the size of the simplex by half along each edge.
	// Delta should be a positive value between 0 and 1.
	// Decreasing Delta can lead to a more thorough search in the local region, but setting it too small may cause excessive computational effort or slow convergence.
	Delta float64

	// Tolerance is the convergence criterion used in the Nelder-Mead algorithm.
	// It is the threshold for the difference in objective function values between the best and worst points in the simplex.
	// The algorithm terminates when this difference is less than or equal to Tolerance.
	// A smaller Tolerance value leads to a more accurate solution but may require more iterations to converge.
	Tolerance float64

	// CollapseThreshold is the threshold used to detect the collapse of the simplex in the Nelder-Mead algorithm.
	// It is the minimum average edge length of the simplex below which the algorithm is considered to have collapsed and returns an error.
	// A collapse may indicate that the optimization process is stuck in a degenerate region or that the chosen parameters (Alpha, Beta, Gamma, and Delta) are not suitable for the specific optimization problem.
	// If CollapseThreshold is set to 0, the collapse detection feature is disabled.
	CollapseThreshold float64

	// MaxIterations sets an upper bound on how long the algorithm should run to find the minima.
	MaxIterations int

	// Constraints is an optional list of constraints for each dimension of the optimization problem. Each Constraint
	// specifies a Min and Max value, which define the lower and upper bounds for the corresponding dimension.
	// If Constraints is not provided or is empty, the optimization problem is considered unconstrained.
	// Providing Constraints can help guide the optimization process and prevent the algorithm from exploring
	// infeasible regions of the search space. The appropriate constraints should be chosen based on the problem's
	// specific requirements and the characteristics of the objective function.
	Constraints []Constraint
}

Options should be configured for your particular function and optimization problem. The defaults configured in NewOptions are a starting point and are likely not well suited to your problem.
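For example, a plausible tuning for a problem needing a more precise answer might tighten the tolerance and enable collapse detection. The field values below are illustrative, not recommendations:

```go
options := neldermead.NewOptions()
options.Tolerance = 1e-9          // tighter convergence criterion
options.MaxIterations = 5000      // allow more iterations to reach it
options.CollapseThreshold = 1e-12 // fail fast on a degenerate simplex
options.Alpha = 1.0               // keep the standard reflection step
```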

func NewOptions

func NewOptions() Options

NewOptions returns default options that should be treated as a starting point; they may not be suited to your optimization problem.

type Point

type Point struct {
	X []float64
	F float64
}

func Run

func Run(f Objective, x0 []float64, options Options) (Point, error)

Run is the main function that performs optimization using the Nelder-Mead algorithm. It takes an objective function 'f', an initial guess 'x0', and an Options struct that configures the behavior of the algorithm. Run returns the best solution found as a Point, containing the optimized input vector and the corresponding value of the objective function. If an error occurs during the optimization process, such as a simplex collapse or validation error, Run returns an error.

The Nelder-Mead algorithm is a gradient-free optimization method that uses a simplex (a polytope with n+1 vertices in n-dimensional space) to explore the search space. The algorithm iteratively updates the simplex by reflecting, expanding, contracting, or shrinking it, based on the values of the objective function at its vertices. The algorithm terminates when the difference in the objective function values between the best and worst points in the simplex falls below the specified Tolerance, or when the maximum number of iterations is reached.

The Run function is suitable for optimizing continuous, possibly non-convex, and noisy functions in low to moderate dimensions. However, its performance may degrade as the dimensionality of the problem increases or if the objective function has numerous local minima or sharp features.

Example
package main

import (
	"fmt"
	"math"

	"github.com/crhntr/neldermead"
)

func main() {
	// Define the objective function to be minimized.
	objective := func(x []float64) float64 {
		// y = x^3
		return math.Pow(x[0], 3) - x[1]
	}

	// Define the starting point for the optimizer.
	x0 := []float64{4, 5}

	// Define the optimization options.
	options := neldermead.NewOptions()
	options.Constraints = []neldermead.Constraint{
		{Min: 3, Max: 5},
		{Min: 0, Max: 10},
	}

	// Run the optimizer.
	result, err := neldermead.Run(objective, x0, options)
	if err != nil {
		panic(err)
	}

	// Print the result.
	fmt.Printf("Found minimum at x = %v, f(x) = %.2f\n", result.X, result.F)
}
Output:

Found minimum at x = [3 10], f(x) = 17.00

type Simplex

type Simplex struct {
	Points []Point
}
