codegen

package
v0.0.0-...-fd0ffed
Published: May 26, 2022 License: BSD-3-Clause Imports: 5 Imported by: 0

README

// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

The codegen directory contains code generation tests for the gc
compiler.


- Introduction

The test harness compiles Go code inside files in this directory and
matches the generated assembly (the output of `go tool compile -S`)
against a set of regexps to be specified in comments that follow a
special syntax (described below). The test driver is implemented as a
step of the top-level test/run.go suite, called "asmcheck".

The codegen harness is part of the all.bash test suite, but for
performance reasons only the codegen tests for the host machine's
GOARCH are enabled by default, and only on GOOS=linux.

To perform comprehensive tests for all the supported architectures
(even on a non-Linux system), one can run the following command

  $ ../bin/go run run.go -all_codegen -v codegen

in the top-level test directory. This is recommended after any change
that affects the compiler's code.

The test harness compiles the tests with the same Go toolchain that is
used to run run.go. After writing tests for a newly added codegen
transformation, it can be useful to first run the test harness with a
toolchain from a released Go version (and verify that the new tests
fail), and then re-run the tests using the devel toolchain.


- Regexp comment syntax

Instructions to match are specified inside plain comments that start
with an architecture tag, followed by a colon and a quoted Go-style
regexp to be matched. For example, the following test:

  func Sqrt(x float64) float64 {
  	   // amd64:"SQRTSD"
  	   // arm64:"FSQRTD"
  	   return math.Sqrt(x)
  }

verifies that math.Sqrt calls are intrinsified to a SQRTSD instruction
on amd64, and to a FSQRTD instruction on arm64.

It is possible to put checks for multiple architectures on the same
line, as in:

  // amd64:"SQRTSD" arm64:"FSQRTD"

although this form should be avoided when it would make the line of
regexps excessively long and difficult to read.

Comments that are on their own line will be matched against the first
subsequent non-comment line. Inline comments are also supported; the
regexp will be matched against the code found on the same line:

  func Sqrt(x float64) float64 {
  	   return math.Sqrt(x) // arm:"SQRTD"
  }

It's possible to specify a comma-separated list of regexps to be
matched. For example, the following test:

  func TZ8(n uint8) int {
  	   // amd64:"BSFQ","ORQ\t\\$256"
  	   return bits.TrailingZeros8(n)
  }

verifies that the code generated for a bits.TrailingZeros8 call on
amd64 contains both a "BSFQ" instruction and an "ORQ $256".

Note how the ORQ regexp includes a tab character (\t). In Go assembly
syntax, opcodes are separated from their operands by a tab.

Regexps can be quoted using either " or `. Special characters must be
escaped accordingly. Both of these are accepted, and equivalent:

  // amd64:"ADDQ\t\\$3"
  // amd64:`ADDQ\t\$3`

and they'll match this assembly line:

  ADDQ	$3

Negative matches can be specified using a - before the quoted regexp.
For example:

  func MoveSmall() {
  	   x := [...]byte{1, 2, 3, 4, 5, 6, 7}
  	   copy(x[1:], x[:]) // arm64:-".*memmove"
  }

verifies that NO memmove call is present in the assembly generated for
the copy() line.


- Architecture specifiers

There are three different ways to specify on which architecture a test
should be run:

* Specify only the architecture (e.g. "amd64"). This indicates that the
  check should be run on all supported variants of that architecture.
  For instance, arm checks will be run against all supported GOARM
  variations (5, 6, 7).
* Specify both the architecture and a variant, separated by a slash
  (e.g. "arm/7"). This means that the check will be run only on that
  specific variant.
* Specify the operating system, the architecture and the variant,
  separated by slashes (e.g. "plan9/386/sse2", "plan9/amd64/"). This is
  needed in the rare case that a codegen test is affected by the target
  operating system; by default, tests are compiled targeting only
  linux. The sketch below shows all three forms.


- Remarks and Caveats

-- Write small test functions

As a general guideline, test functions should be small, to avoid
possible interactions between unrelated lines of code that may be
introduced, for example, by the compiler's optimization passes.

Any given line of Go code may be assigned more instructions than
reading the source would suggest. In particular, matching all MOV
instructions should be avoided; the compiler may add them for
unrelated reasons and this may render the test ineffective.
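
As a sketch of this guideline (the checks below are illustrative,
loosely modeled on the existing math/bits tests rather than copied
from the test files, and assume math/bits is imported), a small
single-purpose function keeps each regexp tied to the one operation
being tested:

  func Rot8(x uint32) uint32 {
  	   // amd64:"ROLL"
  	   // arm64:"RORW"
  	   return bits.RotateLeft32(x, 8)
  }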

-- Line matching logic

Regexps are always matched from the start of the instruction line.
This means, for example, that the "MULQ" regexp is equivalent to
"^MULQ" (with ^ representing the start of the line), and it will NOT
match the following assembly line:

  IMULQ	$99, AX

To force a match at any point of the line, ".*MULQ" should be used.

For the same reason, a negative regexp like -"memmove" is not enough
to make sure that no memmove call is included in the assembly. A
memmove call looks like this:

  CALL	runtime.memmove(SB)

To make sure that the "memmove" symbol does not appear anywhere in the
assembly, the negative regexp to be used is -".*memmove".
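
The difference can be made concrete by annotating the two forms
against the CALL line above. In a real test file the comment would
contain only the checks; the trailing explanations here are added for
illustration:

  // amd64:-"memmove"     too weak: anchored at the line start, it can
  //                      never match "CALL runtime.memmove(SB)", so the
  //                      check passes even if a memmove call is present
  // amd64:-".*memmove"   effective: the test fails if any line of the
  //                      generated assembly mentions memmove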

Documentation

Index

Constants

const (
	A = 7777777777777777
	B = 8888888888888888
)

Variables

This section is empty.

Functions

func AccessInt1

func AccessInt1(m map[int]int) int

func AccessInt2

func AccessInt2(m map[int]int) bool

func AccessString1

func AccessString1(m map[string]int) int

func AccessString2

func AccessString2(m map[string]int) bool

func Add

func Add(x, y, ci uint) (r, co uint)

func Add64

func Add64(x, y, ci uint64) (r, co uint64)

func Add64C

func Add64C(x, ci uint64) (r, co uint64)

func Add64M

func Add64M(p, q, r *[3]uint64)

func Add64MPanicOnOverflowEQ

func Add64MPanicOnOverflowEQ(a, b [2]uint64) [2]uint64

func Add64MPanicOnOverflowGT

func Add64MPanicOnOverflowGT(a, b [2]uint64) [2]uint64

func Add64MPanicOnOverflowNE

func Add64MPanicOnOverflowNE(a, b [2]uint64) [2]uint64

func Add64MSaveC

func Add64MSaveC(p, q, r, c *[2]uint64)

func Add64PanicOnOverflowEQ

func Add64PanicOnOverflowEQ(a, b uint64) uint64

func Add64PanicOnOverflowGT

func Add64PanicOnOverflowGT(a, b uint64) uint64

func Add64PanicOnOverflowNE

func Add64PanicOnOverflowNE(a, b uint64) uint64

func Add64R

func Add64R(x, y, ci uint64) uint64

func Add64Z

func Add64Z(x, y uint64) (r, co uint64)

func AddAddSubSimplify

func AddAddSubSimplify(a, b, c int) int

func AddC

func AddC(x, ci uint) (r, co uint)

func AddM

func AddM(p, q, r *[3]uint)

func AddMul

func AddMul(x int) int

func AddR

func AddR(x, y, ci uint) uint

func AddSubFromConst

func AddSubFromConst(a int) int

func AddZ

func AddZ(x, y uint) (r, co uint)

func ArrayAdd64

func ArrayAdd64(a, b [4]float64) [4]float64

Notes:
- 386 fails due to spilling a register
- arm & mips fail due to softfloat calls

amd64:"TEXT\t.*, [$]0-"
arm64:"TEXT\t.*, [$]0-"
ppc64:"TEXT\t.*, [$]0-"
ppc64le:"TEXT\t.*, [$]0-"
s390x:"TEXT\t.*, [$]0-"

func ArrayCopy

func ArrayCopy(a [16]byte) (b [16]byte)

func ArrayInit

func ArrayInit(i, j int) [4]int

386:"TEXT\t.*, [$]0-" amd64:"TEXT\t.*, [$]0-" arm:"TEXT\t.*, [$]0-" (spills return address) arm64:"TEXT\t.*, [$]0-" mips:"TEXT\t.*, [$]-4-" ppc64:"TEXT\t.*, [$]0-" ppc64le:"TEXT\t.*, [$]0-" s390x:"TEXT\t.*, [$]0-"

func ArrayZero

func ArrayZero() [16]byte

func CallFunc

func CallFunc(f func())

func CallInterface

func CallInterface(x interface{ M() })

func CapDiv

func CapDiv(a []int) int

func CapMod

func CapMod(a []int) int

func Cmp

func Cmp(f float64) bool

func CmpFold

func CmpFold(x uint32)

func CmpLogicalToZero

func CmpLogicalToZero(a, b, c uint32, d, e uint64) uint64

func CmpMem1

func CmpMem1(p int, q *int) bool

func CmpMem2

func CmpMem2(p *int, q int) bool

func CmpMem3

func CmpMem3(p *int) bool

func CmpMem4

func CmpMem4(p *int) bool

func CmpMem5

func CmpMem5(p **int)

func CmpMem6

func CmpMem6(a []int) int

func CmpToOneU_ex1

func CmpToOneU_ex1(a uint8, b uint16, c uint32, d uint64) int

func CmpToOneU_ex2

func CmpToOneU_ex2(a uint8, b uint16, c uint32, d uint64) int

func CmpToZero

func CmpToZero(a, b, d int32, e, f int64, deOptC0, deOptC1 bool) int32

func CmpToZeroU_ex1

func CmpToZeroU_ex1(a uint8, b uint16, c uint32, d uint64) int

func CmpToZeroU_ex2

func CmpToZeroU_ex2(a uint8, b uint16, c uint32, d uint64) int

func CmpToZero_ex1

func CmpToZero_ex1(a int64, e int32) int

var + const
'x-const' might be canonicalized to 'x+(-const)', so we check both CMN
and CMP for subtraction expressions to make the pattern robust.

func CmpToZero_ex2

func CmpToZero_ex2(a, b, c int64, e, f, g int32) int

var + var
TODO: optimize 'var - var'

func CmpToZero_ex3

func CmpToZero_ex3(a, b, c, d int64, e, f, g, h int32) int

var + var*var

func CmpToZero_ex4

func CmpToZero_ex4(a, b, c, d int64, e, f, g, h int32) int

var - var*var

func CmpToZero_ex5

func CmpToZero_ex5(e, f int32, u uint32) int

func CmpWithAdd

func CmpWithAdd(a float64, b float64) bool

func CmpWithSub

func CmpWithSub(a float64, b float64) bool

func CmpZero1

func CmpZero1(a int32, ptr *int)

func CmpZero2

func CmpZero2(a int64, ptr *int)

func CmpZero3

func CmpZero3(a int32, ptr *int)

func CmpZero32

func CmpZero32(f float32) bool

func CmpZero4

func CmpZero4(a int64, ptr *int)

func CmpZero64

func CmpZero64(f float64) bool

func CompareArray1

func CompareArray1(a, b [2]byte) bool

func CompareArray2

func CompareArray2(a, b [3]uint16) bool

func CompareArray3

func CompareArray3(a, b [3]int16) bool

func CompareArray4

func CompareArray4(a, b [12]int8) bool

func CompareArray5

func CompareArray5(a, b [15]byte) bool

func CompareArray6

func CompareArray6(a, b unsafe.Pointer) bool

This was a TODO in mapaccess1_faststr

func CompareString1

func CompareString1(s string) bool

func CompareString2

func CompareString2(s string) bool

func CompareString3

func CompareString3(s string) bool

func ConstDivs

func ConstDivs(n1 uint, n2 int) (uint, int)

Check that constant divisions get turned into MULs

func ConstMods

func ConstMods(n1 uint, n2 int) (uint, int)

Check that constant modulo divs get turned into MULs

func ConstantLoad

func ConstantLoad()

Loading from read-only symbols should get transformed into constants.

func CountRunes

func CountRunes(s string) int

func Defer

func Defer()

Put a defer in a loop, so the second defer is not open-coded.

func Div

func Div(hi, lo, x uint) (q, r uint)

func Div32

func Div32(hi, lo, x uint32) (q, r uint32)

func Div64

func Div64(hi, lo, x uint64) (q, r uint64)

func Div64degenerate

func Div64degenerate(x uint64) (q, r uint64)

func DivMemSrc

func DivMemSrc(a []float64)

func DivPow2

func DivPow2(f1, f2, f3 float64) (float64, float64, float64)

func Divisible

func Divisible(n1 uint, n2 int) (bool, bool, bool, bool)

Check that divisibility checks x%c==0 are converted to MULs and rotates

func F

func F()

func FloatDivs

func FloatDivs(a []float32) float32

func FusedAdd32

func FusedAdd32(x, y, z float32) float32

func FusedAdd64

func FusedAdd64(x, y, z float64) float64

func FusedSub32_a

func FusedSub32_a(x, y, z float32) float32

func FusedSub32_b

func FusedSub32_b(x, y, z float32) float32

func FusedSub64_a

func FusedSub64_a(x, y, z float64) float64

func FusedSub64_b

func FusedSub64_b(x, y, z float64) float64

func IndexArray

func IndexArray(x *[10]int, i int) int

func IndexSlice

func IndexSlice(x []float64, i int) float64

func IndexString

func IndexString(x string, i int) byte

func Init1

func Init1(p *I1)

func InitNotSmallSliceLiteral

func InitNotSmallSliceLiteral() []int

func InitSmallSliceLiteral

func InitSmallSliceLiteral() []int

Init slice literal. See issue 21561.

func IterateBits

func IterateBits(n uint) int

func IterateBits16

func IterateBits16(n uint16) int

func IterateBits32

func IterateBits32(n uint32) int

func IterateBits64

func IterateBits64(n uint64) int

func IterateBits8

func IterateBits8(n uint8) int

func KeepWanted

func KeepWanted(t *T)

Notes:
- 386 currently fails because it has to spill a register

amd64:"TEXT\t.*, [$]0-"
arm:"TEXT\t.*, [$]0-" (spills return address)
arm64:"TEXT\t.*, [$]0-"
ppc64:"TEXT\t.*, [$]0-"
ppc64le:"TEXT\t.*, [$]0-"
s390x:"TEXT\t.*, [$]0-"

func LeadingZeros

func LeadingZeros(n uint) int

func LeadingZeros16

func LeadingZeros16(n uint16) int

func LeadingZeros32

func LeadingZeros32(n uint32) int

func LeadingZeros64

func LeadingZeros64(n uint64) int

func LeadingZeros8

func LeadingZeros8(n uint8) int

func Len

func Len(n uint) int

func Len16

func Len16(n uint16) int

func Len32

func Len32(n uint32) int

func Len64

func Len64(n uint64) int

func Len8

func Len8(n uint8) int

func LenDiv1

func LenDiv1(a []int) int

func LenDiv2

func LenDiv2(s string) int

func LenMod1

func LenMod1(a []int) int

func LenMod2

func LenMod2(s string) int

func LookupStringConversionArrayLit

func LookupStringConversionArrayLit(m map[[2]string]int, bytes []byte) int

func LookupStringConversionKeyedArrayLit

func LookupStringConversionKeyedArrayLit(m map[[2]string]int, bytes []byte) int

func LookupStringConversionNestedLit

func LookupStringConversionNestedLit(m map[[1]struct{ s [1]string }]int, bytes []byte) int

func LookupStringConversionSimple

func LookupStringConversionSimple(m map[string]int, bytes []byte) int

func LookupStringConversionStructLit

func LookupStringConversionStructLit(m map[struct{ string }]int, bytes []byte) int

func MULA

func MULA(a, b, c uint32) (uint32, uint32, uint32)

func MULS

func MULS(a, b, c uint32) (uint32, uint32, uint32)

func MapClearIndirect

func MapClearIndirect(m map[int]int)

func MapClearInterface

func MapClearInterface(m map[interface{}]int)

func MapClearNotReflexive

func MapClearNotReflexive(m map[float64]int)

func MapClearPointer

func MapClearPointer(m map[*byte]int)

func MapClearReflexive

func MapClearReflexive(m map[int]int)

func MapClearSideEffect

func MapClearSideEffect(m map[int]int) int

func MapLiteralSizing

func MapLiteralSizing(x int) (map[int]int, map[int]int)

func MergeMuls1

func MergeMuls1(n int) int

func MergeMuls2

func MergeMuls2(n int) int

func MergeMuls3

func MergeMuls3(a, n int) int

func MergeMuls4

func MergeMuls4(n int) int

func MergeMuls5

func MergeMuls5(a, n int) int

func MightPanic

func MightPanic(a []int, i, j, k, s int)

Check that simple functions get promoted to nosplit, even when they might panic in various ways. See issue 31219.

amd64:"TEXT\t.*NOSPLIT.*"

func Mul

func Mul(x, y uint) (hi, lo uint)

func Mul2

func Mul2(f float64) float64

func Mul64

func Mul64(x, y uint64) (hi, lo uint64)

func MulMemSrc

func MulMemSrc(a []uint32, b []float32)

func Mul_96

func Mul_96(n int) int

func Mul_n120

func Mul_n120(n int) int

func NegAddFromConstNeg

func NegAddFromConstNeg(a int) int

func NegSubFromConst

func NegSubFromConst(a int) int

func NoFix16A

func NoFix16A(divr int16) (int16, int16)

func NoFix16B

func NoFix16B(divd int16) (int16, int16)

func NoFix32A

func NoFix32A(divr int32) (int32, int32)

func NoFix32B

func NoFix32B(divd int32) (int32, int32)

func NoFix64A

func NoFix64A(divr int64) (int64, int64)

Check that fix-up code is not generated for divisions where it has been proven that the divisor is not -1 or that the dividend is > MinIntNN.

func NoFix64B

func NoFix64B(divd int64) (int64, int64)

func OnesCount

func OnesCount(n uint) int

TODO(register args) Restore a m d 6 4 / v 1 :.*x86HasPOPCNT when only one ABI is tested.

func OnesCount16

func OnesCount16(n uint16) int

func OnesCount32

func OnesCount32(n uint32) int

func OnesCount64

func OnesCount64(n uint64) int

func OnesCount8

func OnesCount8(n uint8) int

func Pow2DivisibleSigned

func Pow2DivisibleSigned(n1, n2 int) (bool, bool)

Check that signed divisibility checks get converted to AND on low bits

func Pow2Divs

func Pow2Divs(n1 uint, n2 int) (uint, int)

func Pow2Mods

func Pow2Mods(n1 uint, n2 int) (uint, int)

func Pow2Muls

func Pow2Muls(n1, n2 int) (int, int)

func RaceMightPanic

func RaceMightPanic(a []int, i, j, k, s int)

Check that we elide racefuncenter/racefuncexit for functions with no calls (but which might panic in various ways). See issue 31219.

amd64:-"CALL.*racefuncenter.*"
arm64:-"CALL.*racefuncenter.*"
ppc64le:-"CALL.*racefuncenter.*"

func RegArgsCall

func RegArgsCall(int, int, int, S)

func ReverseBytes

func ReverseBytes(n uint) uint

func ReverseBytes16

func ReverseBytes16(n uint16) uint16

func ReverseBytes32

func ReverseBytes32(n uint32) uint32

func ReverseBytes64

func ReverseBytes64(n uint64) uint64

func RotateLeft16

func RotateLeft16(n uint16) uint16

func RotateLeft32

func RotateLeft32(n uint32) uint32

func RotateLeft64

func RotateLeft64(n uint64) uint64

func RotateLeft8

func RotateLeft8(n uint8) uint8

func RotateLeftVariable

func RotateLeftVariable(n uint, m int) uint

func RotateLeftVariable32

func RotateLeftVariable32(n uint32, m int) uint32

func RotateLeftVariable64

func RotateLeftVariable64(n uint64, m int) uint64

func SliceArray

func SliceArray(x *[10]int, i, j int) []int

func SliceClear

func SliceClear(s []int) []int

func SliceClearPointers

func SliceClearPointers(s []*int) []*int

func SliceExtensionConst

func SliceExtensionConst(s []int) []int

func SliceExtensionConstInt64

func SliceExtensionConstInt64(s []int) []int

func SliceExtensionConstUint

func SliceExtensionConstUint(s []int) []int

func SliceExtensionConstUint64

func SliceExtensionConstUint64(s []int) []int

func SliceExtensionInt64

func SliceExtensionInt64(s []int, l64 int64) []int

func SliceExtensionPointer

func SliceExtensionPointer(s []*int, l int) []*int

func SliceExtensionVar

func SliceExtensionVar(s []byte, l int) []byte

func SliceExtensionVarInt64

func SliceExtensionVarInt64(s []byte, l int64) []byte

func SliceExtensionVarUint

func SliceExtensionVarUint(s []byte, l uint) []byte

func SliceExtensionVarUint64

func SliceExtensionVarUint64(s []byte, l uint64) []byte

func SliceMakeCopyConst

func SliceMakeCopyConst(s []int) []int

func SliceMakeCopyConstPtr

func SliceMakeCopyConstPtr(s []*int) []*int

func SliceMakeCopyLen

func SliceMakeCopyLen(s []int) []int

func SliceMakeCopyLenPtr

func SliceMakeCopyLenPtr(s []*int) []*int

func SliceMakeCopyNoMemmoveDifferentLen

func SliceMakeCopyNoMemmoveDifferentLen(s []int) []int

func SliceMakeCopyNoOptBlank

func SliceMakeCopyNoOptBlank(s []*int) []*int

func SliceMakeCopyNoOptCap

func SliceMakeCopyNoOptCap(s []int) []int

func SliceMakeCopyNoOptCopyLength

func SliceMakeCopyNoOptCopyLength(s []*int) (int, []*int)

func SliceMakeCopyNoOptNoCap

func SliceMakeCopyNoOptNoCap(s []*int) []*int

func SliceMakeCopyNoOptNoCopy

func SliceMakeCopyNoOptNoCopy(s []*int) []*int

func SliceMakeCopyNoOptNoDeref

func SliceMakeCopyNoOptNoDeref(s []*int) []*int

func SliceMakeCopyNoOptNoHeapAlloc

func SliceMakeCopyNoOptNoHeapAlloc(s []*int) int

func SliceMakeCopyNoOptNoMake

func SliceMakeCopyNoOptNoMake(s []*int) []*int

func SliceMakeCopyNoOptNoVar

func SliceMakeCopyNoOptNoVar(s []*int) []*int

func SliceMakeCopyNoOptSelfCopy

func SliceMakeCopyNoOptSelfCopy(s []*int) []*int

func SliceMakeCopyNoOptTargetReference

func SliceMakeCopyNoOptTargetReference(s []*int) []*int

func SliceMakeCopyNoOptWrongAssign

func SliceMakeCopyNoOptWrongAssign(s []*int) []*int

func SliceMakeCopyNoOptWrongOrder

func SliceMakeCopyNoOptWrongOrder(s []*int) []*int

func SliceNilCheck

func SliceNilCheck(s []int)

Nil check of &s[0]. See issue 30366.

func SliceSlice

func SliceSlice(x []float64, i, j int) []float64

func SliceString

func SliceString(x string, i, j int) string

func SliceWithConstCompare

func SliceWithConstCompare(a []int, b int) []int

func SliceWithSubtractBound

func SliceWithSubtractBound(a []int, b int) []int

func StackArgsCall

func StackArgsCall([10]int)

func StackStore

func StackStore() int

386:"TEXT\t.*, [$]0-" amd64:"TEXT\t.*, [$]0-" arm:"TEXT\t.*, [$]-4-" arm64:"TEXT\t.*, [$]0-" mips:"TEXT\t.*, [$]-4-" ppc64:"TEXT\t.*, [$]0-" ppc64le:"TEXT\t.*, [$]0-" s390x:"TEXT\t.*, [$]0-"

func Sub

func Sub(x, y, ci uint) (r, co uint)

func Sub64

func Sub64(x, y, ci uint64) (r, co uint64)

func Sub64C

func Sub64C(x, ci uint64) (r, co uint64)

func Sub64M

func Sub64M(p, q, r *[3]uint64)

func Sub64MPanicOnOverflowEQ

func Sub64MPanicOnOverflowEQ(a, b [2]uint64) [2]uint64

func Sub64MPanicOnOverflowGT

func Sub64MPanicOnOverflowGT(a, b [2]uint64) [2]uint64

func Sub64MPanicOnOverflowNE

func Sub64MPanicOnOverflowNE(a, b [2]uint64) [2]uint64

func Sub64MSaveC

func Sub64MSaveC(p, q, r, c *[2]uint64)

func Sub64PanicOnOverflowEQ

func Sub64PanicOnOverflowEQ(a, b uint64) uint64

func Sub64PanicOnOverflowGT

func Sub64PanicOnOverflowGT(a, b uint64) uint64

func Sub64PanicOnOverflowNE

func Sub64PanicOnOverflowNE(a, b uint64) uint64

func Sub64R

func Sub64R(x, y, ci uint64) uint64

func Sub64Z

func Sub64Z(x, y uint64) (r, co uint64)

func SubAddNegSimplify

func SubAddNegSimplify(a, b int) int

func SubAddSimplify

func SubAddSimplify(a, b int) int

func SubC

func SubC(x, ci uint) (r, co uint)

func SubFromConst

func SubFromConst(a int) int

func SubFromConstNeg

func SubFromConstNeg(a int) int

func SubFromLen64

func SubFromLen64(n uint64) int

func SubM

func SubM(p, q, r *[3]uint)

func SubMem

func SubMem(arr []int, b, c, d int) int

func SubR

func SubR(x, y, ci uint) uint

func SubSubFromConst

func SubSubFromConst(a int) int

func SubSubNegSimplify

func SubSubNegSimplify(a, b int) int

func SubZ

func SubZ(x, y uint) (r, co uint)

func ToByteSlice

func ToByteSlice() []byte

func TrailingZeros

func TrailingZeros(n uint) int

func TrailingZeros16

func TrailingZeros16(n uint16) int

func TrailingZeros32

func TrailingZeros32(n uint32) int

func TrailingZeros64

func TrailingZeros64(n uint64) int

func TrailingZeros64Subtract

func TrailingZeros64Subtract(n uint64) int

func TrailingZeros8

func TrailingZeros8(n uint8) int

func UintGeqOne

func UintGeqOne(a uint8, b uint16, c uint32, d uint64) int

func UintGeqZero

func UintGeqZero(a uint8, b uint16, c uint32, d uint64) int

func UintGtZero

func UintGtZero(a uint8, b uint16, c uint32, d uint64) int

func UintLeqZero

func UintLeqZero(a uint8, b uint16, c uint32, d uint64) int

func UintLtOne

func UintLtOne(a uint8, b uint16, c uint32, d uint64) int

func UintLtZero

func UintLtZero(a uint8, b uint16, c uint32, d uint64) int

func Zero1

func Zero1(t *Z1)

func Zero2

func Zero2(t *Z2)

func ZeroLargeStruct

func ZeroLargeStruct(x *T)

386:"TEXT\t.*, [$]0-" amd64:"TEXT\t.*, [$]0-" arm:"TEXT\t.*, [$]0-" (spills return address) arm64:"TEXT\t.*, [$]0-" mips:"TEXT\t.*, [$]-4-" ppc64:"TEXT\t.*, [$]0-" ppc64le:"TEXT\t.*, [$]0-" s390x:"TEXT\t.*, [$]0-"

Types

type I

type I interface {
	// contains filtered or unexported methods
}

type I1

type I1 struct {
	// contains filtered or unexported fields
}

type S

type S struct {
	// contains filtered or unexported fields
}

type T

type T struct {
	A, B, C, D int // keep exported fields
	// contains filtered or unexported fields
}

type Z1

type Z1 struct {
	// contains filtered or unexported fields
}

type Z2

type Z2 struct {
	// contains filtered or unexported fields
}
