Documentation ¶
Overview ¶
Package optimize provides a set of services for optimizing algo parameters. Parameter optimization is a process of systematically searching for the optimal set of parameters given a target objective. Multiple methods are available, each implementing the Optimizer interface.
Example ¶
// Verbose error handling omitted for brevity

// Identify the bot (algo) to optimize by supplying a factory function
// Here we're using the classic moving average (MA) cross variant of trend bot
bot := trend.MakeCrossBotFromConfig

// Define the parameter space to optimize
// Param names must match those expected by the MakeBot function passed to the optimizer
// Here we're optimizing the lookback period of a fast and slow MA
// and the Market Meanness Index (MMI) filter
paramSpace := ParamMap{
    "mafastlength": []any{30, 90, 180},
    "maslowlength": []any{90, 180, 360},
    "mmilength":    []any{200, 300},
}

// Read price samples to use for optimization
btc, err := market.ReadKlinesFromCSV("./testdata/BTCUSDT-1H/")
if err != nil {
    log.Fatal(err)
}
eth, err := market.ReadKlinesFromCSV("./testdata/ETHUSDT-1H/")
if err != nil {
    log.Fatal(err)
}
priceSamples := [][]market.Kline{btc, eth}

// Create a new brute style optimizer with a default simulated dealer (no broker costs)
// The default optimization objective is the param set with the highest sharpe ratio
optimizer := NewBruteOptimizer()
optimizer.SampleSplitPct = 0   // Do not split samples due to small sample size
optimizer.WarmupBarCount = 360 // Set as maximum lookback of your param space
optimizer.MakeBot = bot        // Tell the optimizer which bot to use

// Prepare the optimizer and get an estimate of the number of trials (backtests) required
trialCount, _ := optimizer.Prepare(paramSpace, priceSamples)
fmt.Printf("%d backtest trials to run during optimization\n", trialCount)

// Start the optimization process and monitor with a receive-only channel
// Trials will execute concurrently with a default worker pool matching the number of CPUs
trials, _ := optimizer.Start(context.Background())
for range trials {
}

// Inspect the study results following optimization
study := optimizer.Study()
if len(study.ValidationResults) == 0 {
    fmt.Println("Optima not found because highest ranked param set made no trades during optimization trials.")
    return
}

// Read out the optimal param set and results
optimaPSet := study.Validation[0]
fmt.Printf("Optima params: fast: %d slow: %d MMI: %d\n",
    optimaPSet.Params["mafastlength"], optimaPSet.Params["maslowlength"], optimaPSet.Params["mmilength"])
optimaResult := study.ValidationResults[optimaPSet.ID]
fmt.Printf("Optima sharpe ratio is %.1f", optimaResult.Sharpe)
Output:

38 backtest trials to run during optimization
Optima params: fast: 30 slow: 90 MMI: 200
Optima sharpe ratio is 2.5
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
Types ¶
type BruteOptimizer ¶
type BruteOptimizer struct {
    SampleSplitPct float64
    WarmupBarCount int
    MakeBot        trader.MakeFromConfig
    MakeDealer     broker.MakeSimulatedDealer
    Ranker         ObjectiveRanker
    MaxWorkers     int
    // contains filtered or unexported fields
}
BruteOptimizer implements a 'brute-force peak objective' optimization approach which tests all given parameter combinations and selects the highest ranked (peak) param set. The optima is selected by the given ObjectiveRanker func. Optimization trials are executed concurrently using a worker pool. The optimization method proceeds in 3 stages (a minimal usage sketch follows the list below):
Prepare:
- Accept 1 or more price data samples
- Split sample price data into in-sample (training) and out-of-sample (validation) datasets
- Generate 1 or more param sets using the cartesian product of given ranges that define the param space
Train:
- Execute each algo param set over the in-sample price data
- Average the performance for each param set over the in-sample data
- Rank the param sets based on the performance objective (Profit Factor, Sharpe etc)
Validate:
- Execute the highest ranked ("trained") algo param set over the out-of-sample price data
- Accept or reject the hypothesis based on statistical significance of the study report
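The three stages map onto the Prepare, Start, and Study calls. The following is a minimal sketch under the assumption that makeBot (a trader.MakeFromConfig factory), paramSpace, and priceSamples have already been set up by the caller; it is not a definitive recipe:

    optimizer := NewBruteOptimizer()
    optimizer.MakeBot = makeBot // placeholder bot factory; param names in paramSpace must match it

    // Prepare: enumerate the param space and split each price sample
    trialCount, err := optimizer.Prepare(paramSpace, priceSamples)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%d trials to run\n", trialCount)

    // Train + Validate: Start executes both phases; drain the channel to wait for completion
    trials, err := optimizer.Start(context.Background())
    if err != nil {
        log.Fatal(err)
    }
    for trial := range trials {
        if trial.Err != nil {
            log.Print(trial.Err)
        }
    }

    // Read the ranked results
    study := optimizer.Study()
    fmt.Println(len(study.Validation), "validated param set(s)")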
func NewBruteOptimizer ¶
func NewBruteOptimizer() BruteOptimizer
NewBruteOptimizer creates a new BruteOptimizer instance with sensible defaults. Call Prepare before Start to set up the study.
func (*BruteOptimizer) Prepare ¶
Prepare prepares a study based on the given param ranges and price data samples. It returns the estimated number of trials (backtests) to be performed.
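As a rough guide to how the estimate arises: the package example above defines 3 × 3 × 2 = 18 param combinations across 2 price samples, giving 36 training trials; adding one validation trial per sample for the top-ranked set is consistent with the 38 trials reported in the example output. The exact accounting is owned by the Optimizer implementation.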
func (*BruteOptimizer) Start ¶
func (o *BruteOptimizer) Start(ctx context.Context) (<-chan OptimizerTrial, error)
Start starts the prepared optimization process and returns with a channel to monitor the progress.
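Each value received on the channel is an OptimizerTrial, so progress and per-trial errors can be reported as results arrive. A sketch, assuming the optimizer has already been prepared and trialCount was returned by Prepare:

    trials, err := optimizer.Start(context.Background())
    if err != nil {
        log.Fatal(err)
    }
    done := 0
    for trial := range trials {
        done++
        if trial.Err != nil {
            log.Printf("trial failed for params %v: %v", trial.PSet.Params, trial.Err)
            continue
        }
        fmt.Printf("%d/%d trials complete (phase %v)\n", done, trialCount, trial.Phase)
    }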
func (*BruteOptimizer) Study ¶
func (o *BruteOptimizer) Study() Study
Study returns the current study. Call after the optimizer has finished to read the results.
type CartesianProduct ¶
CartesianProduct is a map of named params returned by CartesianBuilder.
func CartesianBuilder ¶
func CartesianBuilder(in map[string]any) []CartesianProduct
CartesianBuilder is a utility to build a cartesian product of named params. Used by the BruteOptimizer to find all param combinations.
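A sketch of expanding a small param space, assuming (as in the ParamMap example above) that each input value is a slice of candidate values:

    products := CartesianBuilder(map[string]any{
        "mafastlength": []any{30, 90},
        "mmilength":    []any{200, 300},
    })
    fmt.Println(len(products)) // 4 products: one for each combination of the two params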
type ObjectiveRanker ¶
ObjectiveRanker is used by an Optimizer to sort the results of backtest trials and select the best performing ParamSet. ObjectiveRanker is equivalent to a 'less' comparison function.
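The exact function signature is not shown on this page. Assuming the ranker is a 'less' comparison over two trial Reports (an assumption, not confirmed here), a custom ranker that prefers a higher CAGR might be sketched as:

    // Assumed signature: compares two Report values as a 'less' func
    cagrLess := func(a, b Report) bool { return a.CAGR < b.CAGR }

    optimizer := NewBruteOptimizer()
    optimizer.Ranker = cagrLess // only valid if ObjectiveRanker matches the assumed signature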
type Optimizer ¶
type Optimizer interface {
    Prepare(ParamMap, [][]market.Kline) (int, error)
    Start(context.Context) (<-chan OptimizerTrial, error)
    Study() Study
}
Optimizer is the interface for an optimization method.
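Because the interface is small, calling code can be written against Optimizer rather than a concrete implementation. A sketch of a hypothetical helper (not part of the package API) that runs any Optimizer to completion:

    // RunStudy is a hypothetical helper: prepare, run all trials, return the study.
    func RunStudy(ctx context.Context, o Optimizer, space ParamMap, samples [][]market.Kline) (Study, error) {
        if _, err := o.Prepare(space, samples); err != nil {
            return Study{}, err
        }
        trials, err := o.Start(ctx)
        if err != nil {
            return Study{}, err
        }
        for range trials {
        }
        return o.Study(), nil
    }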
type OptimizerTrial ¶
type OptimizerTrial struct {
    Phase  Phase
    PSet   ParamSet
    Result perf.PerformanceReport
    Err    error
}
OptimizerTrial is a discrete trial conducted by an Optimizer on a single ParamSet.
type ParamSet ¶
type ParamSet struct {
    ID     ParamSetID
    Params ParamMap
}
ParamSet is a set of algo parameters to trial.
func NewParamSet ¶
func NewParamSet() ParamSet
NewParamSet returns a new ParamSet with an initialized ID.
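A sketch of building a ParamSet by hand; the param names are illustrative placeholders and must match whatever the target bot factory expects:

    pset := NewParamSet()
    pset.Params = ParamMap{
        "mafastlength": 30,
        "maslowlength": 90,
    }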
type Report ¶
type Report struct {
    ID          string                   `csv:"id"`
    Phase       Phase                    `csv:"phase"`
    Subject     ParamSet                 `csv:",inline"`
    PRR         float64                  `csv:"prr"`
    MDD         float64                  `csv:"mdd"`
    CAGR        float64                  `csv:"cagr"`
    Sharpe      float64                  `csv:"sharpe"`
    Calmar      float64                  `csv:"calmar"`
    WinPct      float64                  `csv:"win_pct"`
    Kelly       float64                  `csv:"kelly"`
    OptimalF    float64                  `csv:"optimalf"`
    SampleCount int                      `csv:"sample_count"`
    TradeCount  int                      `csv:"trade_count"`
    Backtests   []perf.PerformanceReport `csv:"-"`
}
Report is the aggregated performance of a ParamSet across one or more price samples (trials). The method of aggregation is owned by the Optimizer implementation, but will typically be the mean (average) of the individual trials.
type Study ¶
type Study struct {
    ID                string
    Training          []ParamSet
    TrainingSamples   [][]market.Kline
    TrainingResults   map[ParamSetID]Report
    Validation        []ParamSet
    ValidationSamples [][]market.Kline
    ValidationResults map[ParamSetID]Report
}
Study is an optimization experiment, prepared and executed by an Optimizer. First, a training (in-sample) phase is conducted, followed by a validation (out-of-sample) phase. The validation phase reports the out-of-sample (OOS) performance of the optimum param set; a sketch of reading these results follows the experiment summary below.
The experiment can be summarised as:
- Hypothesis: the optimized algo params will generate positive market returns in live trading.
- Null hypothesis: algo has zero positive expectancy of returns in the tested param space.
- Independent variable (aka predictor / feature): algo parameter space defined by []ParamSet.
- Dependent variable: algo backtest performance (Sharpe, CAGR et al) measured by Report.
- Control variables: price data samples, backtest simulator settings etc.
- Method: as defined by the Optimizer implementation (e.g. brute force, genetic et al) and its ObjectiveRanker func.
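One way to read a completed study is to compare the in-sample and out-of-sample performance of the top-ranked param set. A sketch, following the package example and assuming non-empty results and that the validated param set keeps the ID it had during training:

    study := optimizer.Study()
    optimum := study.Validation[0]
    training := study.TrainingResults[optimum.ID]
    validation := study.ValidationResults[optimum.ID]
    fmt.Printf("in-sample sharpe %.2f vs out-of-sample sharpe %.2f over %d trades\n",
        training.Sharpe, validation.Sharpe, validation.TradeCount)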