onnxruntime_go

README

Cross-Platform onnxruntime Wrapper for Go

About

This library seeks to provide an interface for loading and executing neural networks from Go(lang) code, while remaining as simple to use as possible.

A few example applications using this library can be found in the onnxruntime_go_examples repository.

The onnxruntime library provides a way to load and execute ONNX-format neural networks, though the library primarily supports C and C++ APIs. Several efforts have been made to write Go(lang) wrappers for the onnxruntime library, but as far as I can tell, none of the existing wrappers support Windows. This is because Microsoft's onnxruntime library assumes the user will be using the MSVC compiler on Windows systems, while CGo on Windows requires using Mingw.

This wrapper works around the issue by manually loading the onnxruntime shared library, removing any dependency on the onnxruntime source code beyond the header files. Naturally, this approach works equally well on non-Windows systems.

Additionally, this library uses Go's recent addition of generics to support multiple Tensor data types; see the NewTensor or NewEmptyTensor functions.

Note on onnxruntime Library Versions

At the time of writing, this library uses version 1.15.1 of the onnxruntime C API headers. So, it will probably only work with version 1.15.1 of the onnxruntime shared libraries, as well. If you need to use a different version, or if I get behind on updating this repository, updating or changing the onnxruntime version should be fairly easy:

  1. Replace the onnxruntime_c_api.h file with the version corresponding to the onnxruntime version you wish to use.

  2. Replace the test_data/onnxruntime.dll (or test_data/onnxruntime*.so) file with the version corresponding to the onnxruntime version you wish to use.

Note that both the C API header and the shared library files are available to download from the releases page in the official repo. Download the archive for the release you want to use, and extract it. The header file is located in the "include" subdirectory, and the shared library is located in the "lib" subdirectory. (On Linux systems, you'll need the version of the .so with the appended version numbers, e.g., libonnxruntime.so.1.15.1, and not libonnxruntime.so, which is just a symbolic link.)

The archive will contain several other files containing C++ headers, debug symbols, and so on, but you shouldn't need anything other than the single onnxruntime shared library and onnxruntime_c_api.h. (The exception is if you want to enable GPU support, in which case you may need other shared-library files, such as execution_providers_cuda.dll and execution_providers_shared.dll on Windows.)

Requirements

To use this library, you'll need a version of Go with cgo support. If you are not using an amd64 version of Windows or Linux (or if you want to provide your own library for some other reason), you simply need to provide the correct path to the shared library when initializing the wrapper. This is seen in the first few lines of the following example.

Note that if you want to use CUDA, you'll need to be using a version of the onnxruntime shared library with CUDA support, as well as be using a CUDA version supported by the underlying version of your onnxruntime library. For example, version 1.15.1 of the onnxruntime library only supports CUDA 11.8. See the onnxruntime CUDA support documentation for more specifics.

Example Usage

The full documentation can be found at pkg.go.dev.

Additionally, several example command-line applications complete with necessary networks and data can be found in the onnxruntime_go_examples repository.

The following example illustrates how this library can be used to load and run an ONNX network taking a single input tensor and producing a single output tensor, both of which contain 32-bit floating point values. Note that most error handling is omitted for brevity; each of these functions returns an err value, which will be non-nil in the case of failure.

package main

import (
    "fmt"
    ort "github.com/yalue/onnxruntime_go"
)

func main() {
    // This line may be optional; by default, the library will try to load
    // "onnxruntime.dll" on Windows, and "onnxruntime.so" on any other system.
    ort.SetSharedLibraryPath("path/to/onnxruntime.so")

    err := ort.InitializeEnvironment()
    defer ort.DestroyEnvironment()

    // To make it easier to work with the C API, this library requires the user
    // to create all input and output tensors prior to creating the session.
    inputData := []float32{0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}
    inputShape := ort.NewShape(2, 5)
    inputTensor, err := ort.NewTensor(inputShape, inputData)
    defer inputTensor.Destroy()
    // This hypothetical network maps a 2x5 input -> 2x3x4 output.
    outputShape := ort.NewShape(2, 3, 4)
    outputTensor, err := ort.NewEmptyTensor[float32](outputShape)
    defer outputTensor.Destroy()

    session, err := ort.NewAdvancedSession("path/to/network.onnx",
        []string{"Input 1 Name"}, []string{"Output 1 Name"},
        []ort.ArbitraryTensor{inputTensor}, []ort.ArbitraryTensor{outputTensor},
        nil)
    defer session.Destroy()

    // Calling Run() will run the network, reading the current contents of the
    // input tensors and modifying the contents of the output tensors. Simply
    // modify the input tensor's data (available via inputTensor.GetData())
    // before calling Run().
    err = session.Run()
    if err != nil {
        fmt.Printf("Error running the network: %s\n", err)
        return
    }

    outputData := outputTensor.GetData()
    fmt.Println("Output data:", outputData)

    // ...
}

Deprecated APIs

Older versions of this library used a typed Session[T] struct to keep track of sessions. In retrospect, associating type parameters with Sessions was unnecessary, and the AdvancedSession type, along with its associated APIs, was added to rectify this mistake. For backwards compatibility, the old typed Session[T] and DynamicSession[T] types are still included and unlikely to be removed. However, they now delegate their functionality to AdvancedSession internally. New code should always favor using AdvancedSession directly.

Running Tests and System Compatibility for Testing

Navigate to this directory and run "go test -v" (or "go test -v -bench=." to also run benchmarks). All tests should pass; tests relating to CUDA or other accelerator support will be skipped on systems or onnxruntime builds that don't support them.

Currently, this repository includes a copy of onnxruntime.dll for AMD64 Windows, and onnxruntime_arm64.so for ARM64 Linux, in its test_data directory, in order to (hopefully!) allow all tests to pass on those systems without users needing to copy additional libraries beyond cloning this repository. In the future, however, this may change if support for more systems is added or removed.

You may want to use a different version of the onnxruntime shared library for a couple reasons. In particular:

  1. The included shared library copies do not include support for CUDA or other accelerated execution providers, so CUDA-related tests will always fail.

  2. Many systems, including AMD64 and i386 Linux, and ARM64 or x86 macOS, do not have shared libraries included in test_data in the first place. (At least for now.)

To handle these cases, the test code checks the ONNXRUNTIME_SHARED_LIBRARY_PATH environment variable before attempting to load a library from test_data/. So, if you are using one of these systems or want accelerator-related tests to run, set this environment variable to the path of the onnxruntime shared library you wish to use. Afterwards, go test -v should run and pass.

Documentation

Overview

This library wraps the C "onnxruntime" library maintained at https://github.com/microsoft/onnxruntime. It seeks to provide as simple an interface as possible to load and run ONNX-format neural networks from Go code.

Constants

This section is empty.

Variables

var NotInitializedError error = fmt.Errorf("InitializeRuntime() has either " +
	"not yet been called, or did not return successfully")

var ShapeOverflowError error = fmt.Errorf("The shape's flattened size " +
	"overflows an int64")

var ZeroShapeLengthError error = fmt.Errorf("The shape has no dimensions")

Functions

func DestroyEnvironment

func DestroyEnvironment() error

Call this function to clean up the internal onnxruntime environment when it is no longer needed.

func DisableTelemetry

func DisableTelemetry() error

Disables telemetry events for the onnxruntime environment. Must be called after initializing the environment using InitializeEnvironment(). It is unclear from the onnxruntime docs whether this will cause an error or silently return if telemetry is already disabled.

func EnableTelemetry

func EnableTelemetry() error

Enables telemetry events for the onnxruntime environment. Must be called after initializing the environment using InitializeEnvironment(). It is unclear from the onnxruntime docs whether this will cause an error or silently return if telemetry is already enabled.

func GetTensorElementDataType

func GetTensorElementDataType[T TensorData]() C.ONNXTensorElementDataType

Returns the ONNX enum value used to indicate TensorData type T.

func InitializeEnvironment

func InitializeEnvironment() error

Call this function to initialize the internal onnxruntime environment. If this doesn't return an error, the caller will be responsible for calling DestroyEnvironment to free the onnxruntime state when no longer needed.

func IsInitialized

func IsInitialized() bool

Returns false if the onnxruntime package is not initialized. Called internally by several functions, to avoid segfaulting if InitializeEnvironment hasn't been called yet.

func SetSharedLibraryPath

func SetSharedLibraryPath(path string)

Use this function to set the path to the "onnxruntime.so" or "onnxruntime.dll" file. By default, it will be set to "onnxruntime.so" on non-Windows systems, and "onnxruntime.dll" on Windows. Users wishing to specify a particular location of this library must call this function prior to calling onnxruntime.InitializeEnvironment().

Types

type AdvancedSession

type AdvancedSession struct {
	// contains filtered or unexported fields
}

A wrapper around the OrtSession C struct. Requires the user to maintain all input and output tensors; unlike the older Session[T] type, the input and output tensors are not required to share a single data type. Created using NewAdvancedSession(...) or NewAdvancedSessionWithONNXData(...). The caller is responsible for calling the Destroy() function on each session when it is no longer needed.

func NewAdvancedSession

func NewAdvancedSession(onnxFilePath string, inputNames, outputNames []string,
	inputs, outputs []ArbitraryTensor,
	options *SessionOptions) (*AdvancedSession, error)

Loads the ONNX network at the given path, and initializes an AdvancedSession instance. If this returns successfully, the caller must call Destroy() on the returned session when it is no longer needed. We require the user to provide the input and output tensors and names at this point, in order to not need to re-allocate them every time Run() is called. The user instead can just update or access the input/output tensor data after calling Run(). The input and output tensors MUST outlive this session, and calling session.Destroy() will not destroy the input or output tensors. If the provided SessionOptions pointer is nil, then the new session will use default options.

func NewAdvancedSessionWithONNXData

func NewAdvancedSessionWithONNXData(onnxData []byte, inputNames,
	outputNames []string, inputs, outputs []ArbitraryTensor,
	options *SessionOptions) (*AdvancedSession, error)

The same as NewAdvancedSession, but takes a slice of bytes containing the .onnx network rather than a file path.
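
For instance, a minimal sketch (assuming the package is imported as ort and the standard library "os" package is imported, that inputTensor and outputTensor were created as in the README example above, and that the file path and tensor names are placeholders):

onnxBytes, err := os.ReadFile("path/to/network.onnx")
if err != nil {
    // Handle the error.
}
session, err := ort.NewAdvancedSessionWithONNXData(onnxBytes,
    []string{"Input 1 Name"}, []string{"Output 1 Name"},
    []ort.ArbitraryTensor{inputTensor}, []ort.ArbitraryTensor{outputTensor},
    nil)
if err != nil {
    // Handle the error.
}
defer session.Destroy()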

func (*AdvancedSession) Destroy

func (s *AdvancedSession) Destroy() error

func (*AdvancedSession) Run

func (s *AdvancedSession) Run() error

Runs the session, updating the contents of the output tensors on success.

type ArbitraryTensor

type ArbitraryTensor interface {
	DataType() C.ONNXTensorElementDataType
	GetShape() Shape
	Destroy() error
	GetInternals() *TensorInternalData
}

An interface for managing tensors where we don't care about accessing the underlying data slice. All typed tensors will support this interface, regardless of the underlying data type.

type BadShapeDimensionError

type BadShapeDimensionError struct {
	DimensionIndex int
	DimensionSize  int64
}

This type of error is returned when we attempt to validate a tensor that has a negative or 0 dimension.

func (*BadShapeDimensionError) Error

func (e *BadShapeDimensionError) Error() string

type CUDAProviderOptions

type CUDAProviderOptions struct {
	// contains filtered or unexported fields
}

Holds options required when enabling the CUDA backend for a session. This struct wraps C onnxruntime types; users must create instances of this using the NewCUDAProviderOptions() function. So, to enable CUDA for a session, follow these steps:

  1. Call NewSessionOptions() to create a SessionOptions struct.
  2. Call NewCUDAProviderOptions() to obtain a CUDAProviderOptions struct.
  3. Call the CUDAProviderOptions struct's Update(...) function to pass a list of settings to CUDA. (See the comment on the Update() function.)
  4. Pass the CUDA options struct pointer to the SessionOptions.AppendExecutionProviderCUDA(...) function.
  5. Call the Destroy() function on the CUDA provider options.
  6. Call NewAdvancedSession(...), passing the SessionOptions struct to it.
  7. Call the Destroy() function on the SessionOptions struct.

Admittedly, this is a bit of a mess, but that's how it's handled by the C API internally. (The onnxruntime python API hides a bunch of this complexity using getter and setter functions, for which Go does not have a terse equivalent.)
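
For illustration, here is a minimal sketch of that sequence (assuming the package is imported as ort, that inputTensor and outputTensor were created as in the README example, and that the model path and the "device_id" value are placeholders):

func newCUDASession(inputTensor, outputTensor ort.ArbitraryTensor) (*ort.AdvancedSession, error) {
    sessionOptions, err := ort.NewSessionOptions()
    if err != nil {
        return nil, err
    }
    // Safe to destroy once NewAdvancedSession has returned.
    defer sessionOptions.Destroy()

    cudaOptions, err := ort.NewCUDAProviderOptions()
    if err != nil {
        return nil, err
    }
    // Safe to destroy once AppendExecutionProviderCUDA has been called.
    defer cudaOptions.Destroy()

    err = cudaOptions.Update(map[string]string{"device_id": "0"})
    if err != nil {
        return nil, err
    }
    err = sessionOptions.AppendExecutionProviderCUDA(cudaOptions)
    if err != nil {
        return nil, err
    }

    return ort.NewAdvancedSession("path/to/network.onnx",
        []string{"Input 1 Name"}, []string{"Output 1 Name"},
        []ort.ArbitraryTensor{inputTensor}, []ort.ArbitraryTensor{outputTensor},
        sessionOptions)
}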

func NewCUDAProviderOptions

func NewCUDAProviderOptions() (*CUDAProviderOptions, error)

Initializes and returns a CUDAProviderOptions struct, used when enabling CUDA in a SessionOptions instance. (i.e., a CUDAProviderOptions must be configured, then passed to SessionOptions.AppendExecutionProviderCUDA.) The caller must call the Destroy() function on the returned struct when it's no longer needed.

func (*CUDAProviderOptions) Destroy

func (o *CUDAProviderOptions) Destroy() error

Must be called when the CUDAProviderOptions struct is no longer needed; frees internal C-allocated state. Note that the CUDAProviderOptions struct can be destroyed as soon as options.AppendExecutionProviderCUDA has been called.

func (*CUDAProviderOptions) Update

func (o *CUDAProviderOptions) Update(options map[string]string) error

Wraps the call to the UpdateCUDAProviderOptions function in the onnxruntime C API. Requires a map of string keys to values for configuring the CUDA backend. For example, set the key "device_id" to "1" to use GPU 1 rather than GPU 0.

The onnxruntime headers refer users to https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#configuration-options for a full list of available keys and values.

type DynamicAdvancedSession

type DynamicAdvancedSession struct {
	// contains filtered or unexported fields
}

This type of session does not require specifying input and output tensors ahead of time; instead, users pass the lists of input and output tensors when calling Run(). As with AdvancedSession, users must still call Destroy() on a DynamicAdvancedSession that is no longer needed.

func NewDynamicAdvancedSession

func NewDynamicAdvancedSession(onnxFilePath string, inputNames,
	outputNames []string, options *SessionOptions) (*DynamicAdvancedSession,
	error)

Like NewAdvancedSession, but does not require specifying input and output tensors.

func NewDynamicAdvancedSessionWithONNXData

func NewDynamicAdvancedSessionWithONNXData(onnxData []byte,
	inputNames, outputNames []string,
	options *SessionOptions) (*DynamicAdvancedSession, error)

Like NewAdvancedSessionWithONNXData, but does not require specifying input and output tensors.

func (*DynamicAdvancedSession) Destroy

func (s *DynamicAdvancedSession) Destroy() error

func (*DynamicAdvancedSession) Run

func (s *DynamicAdvancedSession) Run(inputs, outputs []ArbitraryTensor) error

Runs the network on the given input and output tensors. The number of input and output tensors must match the number (and order) of the input and output names specified to NewDynamicAdvancedSession.
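
A minimal sketch of this per-call style (assuming the package is imported as ort; the model path, tensor names, shapes, and data are placeholders, and most error handling is omitted for brevity):

session, _ := ort.NewDynamicAdvancedSession("path/to/network.onnx",
    []string{"Input 1 Name"}, []string{"Output 1 Name"}, nil)
defer session.Destroy()

// Tensors can be created (and destroyed) per call rather than being fixed at
// session-creation time.
inputTensor, _ := ort.NewTensor(ort.NewShape(2, 5), make([]float32, 10))
defer inputTensor.Destroy()
outputTensor, _ := ort.NewEmptyTensor[float32](ort.NewShape(2, 3, 4))
defer outputTensor.Destroy()

if err := session.Run([]ort.ArbitraryTensor{inputTensor},
    []ort.ArbitraryTensor{outputTensor}); err != nil {
    // Handle the error.
}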

type DynamicSession

type DynamicSession[In TensorData, Out TensorData] struct {
	// contains filtered or unexported fields
}

Similar to Session, but does not require the specification of the input and output shapes at session creation time, and allows for input and output tensors to have different types. This allows for fully dynamic input to the onnx model.

NOTE: As with Session[T], new users should probably be using DynamicAdvancedSession in the future.

func NewDynamicSession

func NewDynamicSession[in TensorData, out TensorData](onnxFilePath string,
	inputNames, outputNames []string) (*DynamicSession[in, out], error)

Same as NewSession, but for dynamic sessions.

func NewDynamicSessionWithONNXData

func NewDynamicSessionWithONNXData[in TensorData, out TensorData](onnxData []byte, inputNames, outputNames []string) (*DynamicSession[in, out], error)

Similar to NewSessionWithONNXData, but for dynamic sessions.

func (*DynamicSession[_, _]) Destroy

func (s *DynamicSession[_, _]) Destroy() error

func (*DynamicSession[in, out]) Run

func (s *DynamicSession[in, out]) Run(inputs []*Tensor[in],
	outputs []*Tensor[out]) error

Unlike the non-dynamic equivalents, the DynamicSession's Run() function takes a list of input and output tensors rather than requiring the tensors to be specified at Session creation time. It is still the caller's responsibility to create and Destroy all tensors passed to this function.

type FloatData

type FloatData interface {
	~float32 | ~float64
}

type IntData

type IntData interface {
	~int8 | ~uint8 | ~int16 | ~uint16 | ~int32 | ~uint32 | ~int64 | ~uint64
}

type Session

type Session[T TensorData] struct {
	// contains filtered or unexported fields
}

This type of session is for ONNX networks with the same input and output data types.

NOTE: This type was written with a type parameter despite the fact that a type parameter is not necessary for any of its underlying implementation, which is a mistake in retrospect. It is preserved only for compatibility with older code, and new users should almost certainly be using an AdvancedSession instead.

Using an AdvancedSession struct should be easier, and supports arbitrary combination of input and output tensor data types as well as more options.

func NewSession

func NewSession[T TensorData](onnxFilePath string, inputNames,
	outputNames []string, inputs, outputs []*Tensor[T]) (*Session[T], error)

Loads the ONNX network at the given path, and initializes a Session instance. If this returns successfully, the caller must call Destroy() on the returned session when it is no longer needed. We require the user to provide the input and output tensors and names at this point, in order to not need to re-allocate them every time Run() is called. The user instead can just update or access the input/output tensor data after calling Run(). The input and output tensors MUST outlive this session, and calling session.Destroy() will not destroy the input or output tensors.

func NewSessionWithONNXData

func NewSessionWithONNXData[T TensorData](onnxData []byte, inputNames,
	outputNames []string, inputs, outputs []*Tensor[T]) (*Session[T], error)

The same as NewSession, but takes a slice of bytes containing the .onnx network rather than a file path.

func (*Session[_]) Destroy

func (s *Session[_]) Destroy() error

func (*Session[T]) Run

func (s *Session[T]) Run() error

type SessionOptions

type SessionOptions struct {
	// contains filtered or unexported fields
}

Used to set options when creating an ONNXRuntime session. There is currently not a way to change options after the session is created, apart from destroying the session and creating a new one. This struct opaquely wraps a C OrtSessionOptions struct, which users must modify via function calls. (The OrtSessionOptions struct is opaque in the C API, too.)

Users must instantiate this struct using the NewSessionOptions function. Instances must be destroyed by calling the Destroy() method after the options are no longer needed (after NewAdvancedSession(...) has returned).

func NewSessionOptions

func NewSessionOptions() (*SessionOptions, error)

Initializes and returns a SessionOptions struct, used when setting options in new AdvancedSession instances. The caller must call the Destroy() function on the returned struct when it's no longer needed.

func (*SessionOptions) AppendExecutionProviderCUDA

func (o *SessionOptions) AppendExecutionProviderCUDA(
	cudaOptions *CUDAProviderOptions) error

Takes a pointer to an initialized CUDAProviderOptions instance, and applies them to the session options. This is what you'll need to call if you want the session to use CUDA. Returns an error if your device (or onnxruntime library) does not support CUDA. The CUDAProviderOptions struct can be destroyed after this.

func (*SessionOptions) AppendExecutionProviderCoreML

func (o *SessionOptions) AppendExecutionProviderCoreML(flags uint32) error

Enables the CoreML backend for the given session options on supported platforms. Unlike the other AppendExecutionProvider* functions, this one only takes a bitfield of flags rather than an options object, though it wouldn't surprise me if onnxruntime deprecated this API in the future as it did with the others. If that happens, we'll likely add a CoreMLProviderOptions struct and an AppendExecutionProviderCoreMLV2 function to the Go wrapper library, but for now the simpler API is the only thing available.

Regardless, the meanings of the flag bits are currently defined in the coreml_provider_factory.h file which is provided in the include/ directory of the onnxruntime releases for Apple platforms.
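
A minimal sketch (assuming the package is imported as ort; passing 0 sets no flag bits, and whether CoreML is actually available depends on the platform and the onnxruntime build):

options, err := ort.NewSessionOptions()
if err != nil {
    // Handle the error.
}
defer options.Destroy()
if err := options.AppendExecutionProviderCoreML(0); err != nil {
    // Handle the error; CoreML may simply be unsupported on this system.
}
// Pass options to ort.NewAdvancedSession(...) as usual.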

func (*SessionOptions) AppendExecutionProviderTensorRT

func (o *SessionOptions) AppendExecutionProviderTensorRT(
	tensorRTOptions *TensorRTProviderOptions) error

Takes an initialized TensorRTProviderOptions instance, and applies them to the session options. You'll need to call this if you want the session to use TensorRT. Returns an error if your device (or onnxruntime library version) does not support TensorRT. The TensorRTProviderOptions can be destroyed after this.

func (*SessionOptions) Destroy

func (o *SessionOptions) Destroy() error

func (*SessionOptions) SetInterOpNumThreads

func (o *SessionOptions) SetInterOpNumThreads(n int) error

Sets the number of threads used to parallelize execution across separate onnxruntime graph nodes. A value of 0 uses the default number of threads.

func (*SessionOptions) SetIntraOpNumThreads

func (o *SessionOptions) SetIntraOpNumThreads(n int) error

Sets the number of threads used to parallelize execution within onnxruntime graph nodes. A value of 0 uses the default number of threads.
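
A minimal sketch covering both thread settings (assuming the package is imported as ort; the specific values are illustrative):

options, err := ort.NewSessionOptions()
if err != nil {
    // Handle the error.
}
defer options.Destroy()
// Use up to 4 threads within a single graph node, and run at most 2 nodes
// concurrently. A value of 0 keeps onnxruntime's defaults.
if err := options.SetIntraOpNumThreads(4); err != nil {
    // Handle the error.
}
if err := options.SetInterOpNumThreads(2); err != nil {
    // Handle the error.
}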

type Shape

type Shape []int64

The Shape type holds the shape of the tensors used by the network's inputs and outputs.

func NewShape

func NewShape(dimensions ...int64) Shape

Returns a Shape with the given dimensions.

func (Shape) Clone

func (s Shape) Clone() Shape

Makes and returns a deep copy of the Shape.

func (Shape) Equals

func (s Shape) Equals(other Shape) bool

Returns true if both shapes match in every dimension.

func (Shape) FlattenedSize

func (s Shape) FlattenedSize() int64

Returns the total number of elements in a tensor with the given shape. Note that this may be an invalid value due to overflow or negative dimensions. If a shape comes from an untrusted source, it may be a good practice to call Validate() prior to trusting the FlattenedSize.
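
For example, a small sketch of that practice (assuming the package is imported as ort; the dimensions are placeholders):

shape := ort.NewShape(2, 3, 4)
if err := shape.Validate(); err != nil {
    // The shape has zero, negative, or overflowing dimensions; don't trust
    // FlattenedSize() in this case.
}
// 2*3*4 = 24 elements for this shape.
backingData := make([]float32, shape.FlattenedSize())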

func (Shape) String

func (s Shape) String() string

func (Shape) Validate

func (s Shape) Validate() error

Returns a non-nil error if the shape has bad or zero dimensions. May return a ZeroShapeLengthError, a ShapeOverflowError, or a BadShapeDimensionError. In the future, this may return other types of errors if they become necessary.

type Tensor

type Tensor[T TensorData] struct {
	// contains filtered or unexported fields
}

Used to manage all input and output data for onnxruntime networks. A Tensor always has an associated type and refers to data contained in an underlying Go slice. New tensors should be created using the NewTensor or NewEmptyTensor functions, and must be destroyed using the Destroy function when no longer needed.

func NewEmptyTensor

func NewEmptyTensor[T TensorData](s Shape) (*Tensor[T], error)

Creates a new empty tensor with the given shape. The shape provided to this function is copied, and is no longer needed after this function returns.

func NewTensor

func NewTensor[T TensorData](s Shape, data []T) (*Tensor[T], error)

Creates a new tensor backed by an existing data slice. The shape provided to this function is copied, and is no longer needed after this function returns. If the data slice is longer than s.FlattenedSize(), then only the first portion of the data will be used.
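
A small sketch (assuming the package is imported as ort; the shape and values are placeholders):

data := []float32{1, 2, 3, 4, 5, 6}
tensor, err := ort.NewTensor(ort.NewShape(2, 3), data)
if err != nil {
    // Handle the error.
}
defer tensor.Destroy()
// The tensor's contents can be updated in place (e.g., before another Run()
// call) via the slice returned by GetData().
tensor.GetData()[0] = 42.0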

func (*Tensor[T]) Clone

func (t *Tensor[T]) Clone() (*Tensor[T], error)

Makes a deep copy of the tensor, including its ONNXRuntime value. The Tensor returned by this function must be destroyed when no longer needed. The returned tensor will also no longer refer to the same underlying data; use GetData() to obtain the new underlying slice.
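
For example, continuing the NewTensor sketch above (assuming tensor was created there):

copyOfTensor, err := tensor.Clone()
if err != nil {
    // Handle the error.
}
defer copyOfTensor.Destroy()
// copyOfTensor.GetData() returns a different slice than tensor.GetData();
// modifying one does not affect the other.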

func (*Tensor[T]) DataType

func (t *Tensor[T]) DataType() C.ONNXTensorElementDataType

Returns the value from the ONNXTensorElementDataType C enum corresponding to the type of data held by this tensor.

func (*Tensor[_]) Destroy

func (t *Tensor[_]) Destroy() error

Cleans up and frees the memory associated with this tensor.

func (*Tensor[T]) GetData

func (t *Tensor[T]) GetData() []T

Returns the slice containing the tensor's underlying data. The contents of the slice can be read or written to get or set the tensor's contents.

func (*Tensor[_]) GetInternals

func (t *Tensor[_]) GetInternals() *TensorInternalData

func (*Tensor[_]) GetShape

func (t *Tensor[_]) GetShape() Shape

Returns the shape of the tensor. The returned shape is only a copy; modifying this does *not* change the shape of the underlying tensor. (Modifying the tensor's shape can only be accomplished by Destroying and recreating the tensor with the same data.)

type TensorData

type TensorData interface {
	FloatData | IntData
}

This is used as a type constraint for the generic Tensor type.

type TensorInternalData

type TensorInternalData struct {
	// contains filtered or unexported fields
}

This wraps internal implementation details to avoid exposing them to users via the ArbitraryTensor interface.

type TensorRTProviderOptions

type TensorRTProviderOptions struct {
	// contains filtered or unexported fields
}

Like the CUDAProviderOptions struct, but used for configuring TensorRT options. Instances of this struct must be initialized using NewTensorRTProviderOptions() and cleaned up by calling their Destroy() function when they are no longer needed.

func NewTensorRTProviderOptions

func NewTensorRTProviderOptions() (*TensorRTProviderOptions, error)

Initializes and returns a TensorRTProviderOptions struct, used when enabling the TensorRT backend. The caller must call the Destroy() function on the returned struct when it's no longer needed.

func (*TensorRTProviderOptions) Destroy

func (o *TensorRTProviderOptions) Destroy() error

Must be called when the TensorRTProviderOptions struct is no longer needed, in order to free internal state. The struct is no longer needed once it has been passed to the AppendExecutionProviderTensorRT function.

func (*TensorRTProviderOptions) Update

func (o *TensorRTProviderOptions) Update(options map[string]string) error

Wraps the call to the UpdateTensorRTProviderOptions function in the C API. Requires a map of string keys to values.

The onnxruntime headers refer users to https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#cc for the list of available keys and values.
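
A minimal sketch (assuming the package is imported as ort; "device_id" selects the GPU, and any other keys from the page linked above can be set the same way):

trtOptions, err := ort.NewTensorRTProviderOptions()
if err != nil {
    // Handle the error.
}
defer trtOptions.Destroy()
if err := trtOptions.Update(map[string]string{"device_id": "0"}); err != nil {
    // Handle the error.
}
// Then pass trtOptions to SessionOptions.AppendExecutionProviderTensorRT.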

Directories

examples
