sync

package
v0.0.0-...-9ec6d29 Latest
Published: Dec 9, 2022 License: Apache-2.0, MIT, BSD-3-Clause Imports: 7 Imported by: 0

README

sync

This package provides additional synchronization primitives not provided by the Go stdlib 'sync' package. It is partially derived from the upstream 'sync' package from go1.10.

Documentation

Overview

Package sync provides synchronization primitives.

+checkalignedignore

Index

Constants

const (
	WaitReasonSelect      uint8 = 9
	WaitReasonChanReceive uint8 = 14
	WaitReasonSemacquire  uint8 = 18
)

Values for the reason argument to gopark, from Go's src/runtime/runtime2.go.

const (
	TraceEvGoBlockRecv   byte = 23
	TraceEvGoBlockSelect byte = 24
	TraceEvGoBlockSync   byte = 25
)

Values for the traceEv argument to gopark, from Go's src/runtime/trace.go.

const RaceEnabled = false

RaceEnabled is true if the Go data race detector is enabled.

Variables

This section is empty.

Functions

func Gopark

func Gopark(unlockf func(uintptr, unsafe.Pointer) bool, lock unsafe.Pointer, reason uint8, traceEv byte, traceskip int)

Gopark is runtime.gopark. Gopark calls unlockf(pointer to runtime.g, lock); if unlockf returns true, Gopark blocks until Goready(pointer to runtime.g) is called. unlockf and its callees must be nosplit and norace, since stack splitting and race context are not available where it is called.

func Goready

func Goready(gp uintptr, traceskip int, wakep bool)

Goready is runtime.goready.

The additional wakep argument controls whether a new thread will be kicked to execute the P. This should be true in most circumstances. However, if the current thread is about to sleep, then this can be false for efficiency.

func Goyield

func Goyield()

Goyield is runtime.goyield, which is similar to runtime.Gosched but only yields the processor to other goroutines already on the processor's runqueue.
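For illustration, a hedged micro-sketch of a spin-wait that yields between checks; the atomic done flag and the sync/atomic import are assumptions, and this package is assumed to be imported as sync:

var done atomic.Bool // hypothetical completion flag

for !done.Load() {
	// Let goroutines already queued on this P run before checking again.
	sync.Goyield()
}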

func MapKeyHasher

func MapKeyHasher(m any) func(unsafe.Pointer, uintptr) uintptr

MapKeyHasher returns a hash function for pointers of m's key type.

Preconditions: m must be a map.
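A minimal usage sketch, assuming this package is imported as sync and that a nil map value is acceptable because only the map's type is inspected (an assumption, not stated by the docs):

hash := sync.MapKeyHasher(map[string]int(nil)) // hasher for string keys
key := "example"
seed := sync.RandUintptr() // arbitrary seed chosen for illustration
sum := hash(unsafe.Pointer(&key), seed)
_ = sum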

func RaceAcquire

func RaceAcquire(addr unsafe.Pointer)

RaceAcquire has the same semantics as runtime.RaceAcquire.

func RaceDisable

func RaceDisable()

RaceDisable has the same semantics as runtime.RaceDisable.

func RaceEnable

func RaceEnable()

RaceEnable has the same semantics as runtime.RaceEnable.

func RaceRelease

func RaceRelease(addr unsafe.Pointer)

RaceRelease has the same semantics as runtime.RaceRelease.
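These annotations are no-ops when RaceEnabled is false. As a hedged sketch of typical acquire/release pairing (not taken from this package's docs), consider a hand-rolled publish flag in which sync/atomic provides the actual ordering and the annotations only record the happens-before edge for the detector; flag, data, and the function names are hypothetical:

var (
	flag uint32 // becomes 1 once data has been published
	data int
)

func publish(v int) {
	data = v
	sync.RaceRelease(unsafe.Pointer(&flag))
	atomic.StoreUint32(&flag, 1)
}

func tryConsume() (int, bool) {
	if atomic.LoadUint32(&flag) == 0 {
		return 0, false
	}
	sync.RaceAcquire(unsafe.Pointer(&flag))
	return data, true
}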

func RaceReleaseMerge

func RaceReleaseMerge(addr unsafe.Pointer)

RaceReleaseMerge has the same semantics as runtime.RaceReleaseMerge.

func RaceUncheckedAtomicCompareAndSwapUintptr

func RaceUncheckedAtomicCompareAndSwapUintptr(ptr *uintptr, old, new uintptr) bool

RaceUncheckedAtomicCompareAndSwapUintptr is equivalent to sync/atomic.CompareAndSwapUintptr, but is not checked by the race detector. This is necessary when implementing gopark callbacks, since no race context is available during their execution.
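A hedged sketch of the kind of gopark callback this is intended for; the waiter type, its state field, and the state constants are hypothetical:

const (
	stateIdle    uintptr = 0
	stateWaiting uintptr = 1
)

type waiter struct {
	state uintptr
}

// unlockf for Gopark: returning true commits the park; a later Goready(g) wakes it.
func parkCommit(g uintptr, wp unsafe.Pointer) bool {
	w := (*waiter)(wp)
	return sync.RaceUncheckedAtomicCompareAndSwapUintptr(&w.state, stateIdle, stateWaiting)
}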

func Rand32

func Rand32() uint32

Rand32 returns a non-cryptographically-secure random uint32.

func Rand64

func Rand64() uint64

Rand64 returns a non-cryptographically-secure random uint64.

func RandUintptr

func RandUintptr() uintptr

RandUintptr returns a non-cryptographically-secure random uintptr.
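For example, a cheap way to pick a shard or bucket (buckets is hypothetical):

i := int(sync.Rand32() % uint32(len(buckets))) // cheap, not cryptographically secure
buckets[i]++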

func Wakep

func Wakep()

Wakep is runtime.wakep.

Types

type Cond

type Cond = sync.Cond

Cond is an alias of sync.Cond.

func NewCond

func NewCond(l Locker) *Cond

NewCond is a wrapper around sync.NewCond.
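A minimal wait/signal sketch, assuming this package is imported as sync; its *Mutex satisfies Locker, so &mu can be passed to NewCond:

var (
	mu    sync.Mutex
	cond  = sync.NewCond(&mu)
	ready bool
)

// Waiter:
mu.Lock()
for !ready {
	cond.Wait()
}
mu.Unlock()

// Signaler (another goroutine):
mu.Lock()
ready = true
cond.Signal()
mu.Unlock()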

type CrossGoroutineMutex

type CrossGoroutineMutex struct {
	// contains filtered or unexported fields
}

CrossGoroutineMutex is equivalent to Mutex, but it need not be unlocked by the same goroutine that locked the mutex.

func (*CrossGoroutineMutex) Lock

func (m *CrossGoroutineMutex) Lock()

Lock locks the underlying Mutex. +checklocksignore

func (*CrossGoroutineMutex) TryLock

func (m *CrossGoroutineMutex) TryLock() bool

TryLock tries to acquire the mutex. It returns true if it succeeds and false otherwise. TryLock does not block.

func (*CrossGoroutineMutex) Unlock

func (m *CrossGoroutineMutex) Unlock()

Unlock unlocks the underlying Mutex. +checklocksignore

type CrossGoroutineRWMutex

type CrossGoroutineRWMutex struct {
	// contains filtered or unexported fields
}

CrossGoroutineRWMutex is equivalent to RWMutex, but it need not be unlocked by the same goroutine that locked the mutex.

func (*CrossGoroutineRWMutex) DowngradeLock

func (rw *CrossGoroutineRWMutex) DowngradeLock()

DowngradeLock atomically unlocks rw for writing and locks it for reading.

Preconditions:

  • rw is locked for writing.

+checklocksignore

func (*CrossGoroutineRWMutex) Lock

func (rw *CrossGoroutineRWMutex) Lock()

Lock locks rw for writing. If the lock is already locked for reading or writing, Lock blocks until the lock is available. +checklocksignore

func (*CrossGoroutineRWMutex) RLock

func (rw *CrossGoroutineRWMutex) RLock()

RLock locks rw for reading.

It should not be used for recursive read locking; a blocked Lock call excludes new readers from acquiring the lock. See the documentation on the RWMutex type. +checklocksignore

func (*CrossGoroutineRWMutex) RUnlock

func (rw *CrossGoroutineRWMutex) RUnlock()

RUnlock undoes a single RLock call.

Preconditions:

  • rw is locked for reading.

+checklocksignore

func (*CrossGoroutineRWMutex) TryLock

func (rw *CrossGoroutineRWMutex) TryLock() bool

TryLock locks rw for writing. It returns true if it succeeds and false otherwise. It does not block. +checklocksignore

func (*CrossGoroutineRWMutex) TryRLock

func (rw *CrossGoroutineRWMutex) TryRLock() bool

TryRLock locks rw for reading. It returns true if it succeeds and false otherwise. It does not block. +checklocksignore

func (*CrossGoroutineRWMutex) Unlock

func (rw *CrossGoroutineRWMutex) Unlock()

Unlock unlocks rw for writing.

Preconditions:

  • rw is locked for writing.

+checklocksignore

type Gate

type Gate struct {
	// contains filtered or unexported fields
}

Gate is a synchronization primitive that allows concurrent goroutines to "enter" it as long as it hasn't been closed yet. Once it's been closed, goroutines cannot enter it anymore, but are allowed to leave, and the closer will be informed when all goroutines have left.

Gate is similar to WaitGroup:

  • Gate.Enter() is analogous to WaitGroup.Add(1), but may be called even if the Gate counter is 0 and fails if Gate.Close() has been called.

  • Gate.Leave() is equivalent to WaitGroup.Done().

  • Gate.Close() is analogous to WaitGroup.Wait(), but also causes future calls to Gate.Enter() to fail and may only be called once, from a single goroutine.

This is useful, for example, in cases when a goroutine is trying to clean up an object for which multiple goroutines have pointers. In such a case, users would be required to enter and leave the Gate, and the cleaner would wait until all users are gone (and no new ones are allowed) before proceeding.

Users:

if !g.Enter() {
	// Gate is closed, we can't use the object.
	return
}

// Do something with object.
[...]

g.Leave()

Closer:

// Prevent new users from using the object, and wait for the existing
// ones to complete.
g.Close()

// Clean up the object.
[...]

func (*Gate) Close

func (g *Gate) Close()

Close closes the gate, causing future calls to Enter to fail, and waits until all goroutines that are currently inside the gate leave before returning.

Only one goroutine can call this function.

func (*Gate) Enter

func (g *Gate) Enter() bool

Enter tries to enter the gate. It will succeed if it hasn't been closed yet, in which case the caller must eventually call Leave().

This function is thread-safe.

func (*Gate) Leave

func (g *Gate) Leave()

Leave leaves the gate. This must only be called after a successful call to Enter(). If the gate has been closed and this is the last one inside the gate, it will notify the closer that the gate is done.

This function is thread-safe.

type Locker

type Locker = sync.Locker

Locker is an alias of sync.Locker.

type Map

type Map = sync.Map

Map is an alias of sync.Map.

type Mutex

type Mutex struct {
	// contains filtered or unexported fields
}

Mutex is a mutual exclusion lock. The zero value for a Mutex is an unlocked mutex.

A Mutex must not be copied after first use.

A Mutex must be unlocked by the same goroutine that locked it. This invariant is enforced with the 'checklocks' build tag.
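A minimal usage sketch; the counter is hypothetical:

var (
	mu      sync.Mutex
	counter int
)

func increment() {
	mu.Lock()
	defer mu.Unlock() // unlocked by the same goroutine, as required
	counter++
}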

func (*Mutex) Lock

func (m *Mutex) Lock()

Lock locks m. If the lock is already in use, the calling goroutine blocks until the mutex is available. +checklocksignore

func (*Mutex) TryLock

func (m *Mutex) TryLock() bool

TryLock tries to acquire the mutex. It returns true if it succeeds and false otherwise. TryLock does not block. +checklocksignore

func (*Mutex) Unlock

func (m *Mutex) Unlock()

Unlock unlocks m.

Preconditions:

  • m is locked.
  • m was locked by this goroutine.

+checklocksignore

type NoCopy

type NoCopy struct{}

NoCopy may be embedded into structs which must not be copied after the first use.

See https://golang.org/issues/8005#issuecomment-190753527 for details.
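A hedged sketch: embedding NoCopy in a hypothetical Registry type makes go vet's copylocks check report any copy of the struct:

type Registry struct {
	sync.NoCopy // Registry must not be copied after first use

	entries map[string]int
}

func process(r Registry) {} // go vet: passes lock by value, since Registry contains sync.NoCopy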

func (*NoCopy) Lock

func (*NoCopy) Lock()

Lock is a no-op used by -copylocks checker from `go vet`.

func (*NoCopy) Unlock

func (*NoCopy) Unlock()

Unlock is a no-op used by -copylocks checker from `go vet`.

type Once

type Once = sync.Once

Once is an alias of sync.Once.

type Pool

type Pool = sync.Pool

Pool is an alias of sync.Pool.

type RWMutex

type RWMutex struct {
	// contains filtered or unexported fields
}

A RWMutex is a reader/writer mutual exclusion lock. The lock can be held by an arbitrary number of readers or a single writer. The zero value for a RWMutex is an unlocked mutex.

A RWMutex must not be copied after first use.

If a goroutine holds a RWMutex for reading and another goroutine might call Lock, no goroutine should expect to be able to acquire a read lock until the initial read lock is released. In particular, this prohibits recursive read locking. This is to ensure that the lock eventually becomes available; a blocked Lock call excludes new readers from acquiring the lock.

A RWMutex must be unlocked by the same goroutine that locked it. This invariant is enforced with the 'checklocks' build tag.

func (*RWMutex) DowngradeLock

func (rw *RWMutex) DowngradeLock()

DowngradeLock atomically unlocks rw for writing and locks it for reading.

Preconditions:

  • rw is locked for writing.

+checklocksignore
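A hedged sketch of the downgrade pattern (cache, key, and compute are hypothetical): the writer installs a value and then reads it back with no window in which another writer could intervene before the read lock is taken:

rw.Lock()
cache[key] = compute()
rw.DowngradeLock() // atomically trade the write lock for a read lock
v := cache[key]    // observes the value written above
rw.RUnlock()
_ = v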

func (*RWMutex) Lock

func (rw *RWMutex) Lock()

Lock locks rw for writing. If the lock is already locked for reading or writing, Lock blocks until the lock is available. +checklocksignore

func (*RWMutex) RLock

func (rw *RWMutex) RLock()

RLock locks rw for reading.

It should not be used for recursive read locking; a blocked Lock call excludes new readers from acquiring the lock. See the documentation on the RWMutex type. +checklocksignore

func (*RWMutex) RUnlock

func (rw *RWMutex) RUnlock()

RUnlock undoes a single RLock call.

Preconditions:

  • rw is locked for reading.
  • rw was locked by this goroutine.

+checklocksignore

func (*RWMutex) TryLock

func (rw *RWMutex) TryLock() bool

TryLock locks rw for writing. It returns true if it succeeds and false otherwise. It does not block. +checklocksignore

func (*RWMutex) TryRLock

func (rw *RWMutex) TryRLock() bool

TryRLock locks rw for reading. It returns true if it succeeds and false otherwise. It does not block. +checklocksignore

func (*RWMutex) Unlock

func (rw *RWMutex) Unlock()

Unlock unlocks rw for writing.

Preconditions:

  • rw is locked for writing.
  • rw was locked by this goroutine.

+checklocksignore

type SeqCount

type SeqCount struct {
	// contains filtered or unexported fields
}

SeqCount is a synchronization primitive for optimistic reader/writer synchronization in cases where readers can work with stale data and therefore do not need to block writers.

Compared to sync/atomic.Value:

  • Mutation of SeqCount-protected data does not require memory allocation, whereas atomic.Value generally does. This is a significant advantage when writes are common.

  • Atomic reads of SeqCount-protected data require copying. This is a disadvantage when atomic reads are common.

  • SeqCount may be more flexible: correct use of SeqCount.ReadOk allows other operations to be made atomic with reads of SeqCount-protected data.

  • SeqCount is more cumbersome to use; atomic reads of SeqCount-protected data require instantiating function templates using go_generics (see seqatomic.go).

func (*SeqCount) BeginRead

func (s *SeqCount) BeginRead() SeqCountEpoch

BeginRead indicates the beginning of a reader critical section. Reader critical sections DO NOT BLOCK writer critical sections, so operations in a reader critical section MAY RACE with writer critical sections. Races are detected by ReadOk at the end of the reader critical section. Thus, the low-level structure of readers is generally:

for {
    epoch := seq.BeginRead()
    // do something idempotent with seq-protected data
    if seq.ReadOk(epoch) {
        break
    }
}

However, since reader critical sections may race with writer critical sections, the Go race detector will (accurately) flag data races in readers using this pattern. Most users of SeqCount will need to use the SeqAtomicLoad function template in seqatomic.go.

func (*SeqCount) BeginWrite

func (s *SeqCount) BeginWrite()

BeginWrite indicates the beginning of a writer critical section.

SeqCount does not support concurrent writer critical sections; clients with concurrent writers must synchronize them using e.g. sync.Mutex.
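A minimal writer sketch, serializing writers with a Mutex as suggested above; seq, mu, and the protected fields are hypothetical:

mu.Lock()
seq.BeginWrite()
protected.x = newX
protected.y = newY
seq.EndWrite()
mu.Unlock()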

func (*SeqCount) BeginWriteOk

func (s *SeqCount) BeginWriteOk(epoch SeqCountEpoch) bool

BeginWriteOk combines the semantics of ReadOk and BeginWrite. If the reader critical section initiated by a previous call to BeginRead() that returned epoch did not race with any writer critical sections, it begins a writer critical section and returns true. Otherwise it does nothing and returns false.
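A hedged sketch of an optimistic read-modify-write built on BeginWriteOk, assuming that a false return only means the read raced with a writer and the loop should retry; seq, protected, and derive are hypothetical:

for {
	epoch := seq.BeginRead()
	next := derive(protected.x) // idempotent work on a possibly-stale snapshot
	if seq.BeginWriteOk(epoch) {
		protected.x = next
		seq.EndWrite()
		break
	}
	// The read raced with a writer; retry.
}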

func (*SeqCount) EndWrite

func (s *SeqCount) EndWrite()

EndWrite ends the effect of a preceding BeginWrite or successful BeginWriteOk.

func (*SeqCount) ReadOk

func (s *SeqCount) ReadOk(epoch SeqCountEpoch) bool

ReadOk returns true if the reader critical section initiated by a previous call to BeginRead() that returned epoch did not race with any writer critical sections.

ReadOk may be called any number of times during a reader critical section. Reader critical sections do not need to be explicitly terminated; the last call to ReadOk is implicitly the end of the reader critical section.

type SeqCountEpoch

type SeqCountEpoch uint32

SeqCountEpoch tracks writer critical sections in a SeqCount.

type WaitGroup

type WaitGroup = sync.WaitGroup

WaitGroup is an alias of sync.WaitGroup.

Directories

Path Synopsis
Package seqatomic doesn't exist.
Package atomicptrmap instantiates generic_atomicptrmap for testing.
Package locking implements lock primitives with the correctness validator.
Package seqatomic doesn't exist.
