Documentation ¶
Overview ¶
Package sync provides synchronization primitives.
+checkalignedignore
Index ¶
- Constants
- func Gopark(unlockf func(uintptr, unsafe.Pointer) bool, lock unsafe.Pointer, reason uint8, ...)
- func Goready(gp uintptr, traceskip int, wakep bool)
- func Goyield()
- func MapKeyHasher(m any) func(unsafe.Pointer, uintptr) uintptr
- func RaceAcquire(addr unsafe.Pointer)
- func RaceDisable()
- func RaceEnable()
- func RaceRelease(addr unsafe.Pointer)
- func RaceReleaseMerge(addr unsafe.Pointer)
- func RaceUncheckedAtomicCompareAndSwapUintptr(ptr *uintptr, old, new uintptr) bool
- func Rand32() uint32
- func Rand64() uint64
- func RandUintptr() uintptr
- func Wakep()
- type Cond
- type CrossGoroutineMutex
- type CrossGoroutineRWMutex
- func (rw *CrossGoroutineRWMutex) DowngradeLock()
- func (rw *CrossGoroutineRWMutex) Lock()
- func (rw *CrossGoroutineRWMutex) RLock()
- func (rw *CrossGoroutineRWMutex) RUnlock()
- func (rw *CrossGoroutineRWMutex) TryLock() bool
- func (rw *CrossGoroutineRWMutex) TryRLock() bool
- func (rw *CrossGoroutineRWMutex) Unlock()
- type Gate
- type Locker
- type Map
- type Mutex
- type NoCopy
- type Once
- type Pool
- type RWMutex
- type SeqCount
- type SeqCountEpoch
- type WaitGroup
Constants ¶
const (
	WaitReasonSelect      uint8 = 9
	WaitReasonChanReceive uint8 = 14
	WaitReasonSemacquire  uint8 = 18
)
Values for the reason argument to gopark, from Go's src/runtime/runtime2.go.
const (
	TraceEvGoBlockRecv   byte = 23
	TraceEvGoBlockSelect byte = 24
	TraceEvGoBlockSync   byte = 25
)
Values for the traceEv argument to gopark, from Go's src/runtime/trace.go.
const RaceEnabled = false
RaceEnabled is true if the Go data race detector is enabled.
Variables ¶
This section is empty.
Functions ¶
func Gopark ¶
func Gopark(unlockf func(uintptr, unsafe.Pointer) bool, lock unsafe.Pointer, reason uint8, traceEv byte, traceskip int)
Gopark is runtime.gopark. Gopark calls unlockf(pointer to runtime.g, lock); if unlockf returns true, Gopark blocks until Goready(pointer to runtime.g) is called. unlockf and its callees must be nosplit and norace, since stack splitting and race context are not available where it is called.
func Goready ¶
func Goready(gp uintptr, traceskip int, wakep bool)
Goready is runtime.goready.
The additional wakep argument controls whether a new thread will be kicked to execute the P. This should be true in most circumstances. However, if the current thread is about to sleep, then this can be false for efficiency.
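Taken together, Gopark and Goready implement a hand-rolled park/unpark protocol. The following is a minimal sketch of that protocol; the Waiter type, its field encoding, and the commit helper are hypothetical, chosen only to illustrate how unlockf, Goready, and the unchecked CAS fit together:

package waiter

import (
	"sync/atomic"
	"unsafe"

	"gvisor.dev/gvisor/pkg/sync"
)

// wakedVal marks a Waiter that has already been woken.
const wakedVal = ^uintptr(0)

// Waiter lets one goroutine park until another wakes it.
type Waiter struct {
	g uintptr // 0: empty; wakedVal: woken; otherwise: the parked g
}

// commit publishes the parking goroutine's g, or aborts parking if a
// wakeup already arrived. As gopark's unlockf it must be nosplit and
// norace, which is why it uses the unchecked CAS.
//
//go:nosplit
//go:norace
func commit(gp uintptr, wp unsafe.Pointer) bool {
	return sync.RaceUncheckedAtomicCompareAndSwapUintptr(&(*Waiter)(wp).g, 0, gp)
}

// Park blocks the calling goroutine until Wake is called.
func (w *Waiter) Park() {
	sync.Gopark(commit, unsafe.Pointer(w), sync.WaitReasonSelect, sync.TraceEvGoBlockSelect, 0)
}

// Wake unparks the goroutine parked in w, if any.
func (w *Waiter) Wake() {
	if g := atomic.SwapUintptr(&w.g, wakedVal); g != 0 && g != wakedVal {
		sync.Goready(g, 0, true /* wakep */)
	}
}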
func Goyield ¶
func Goyield()
Goyield is runtime.goyield, which is similar to runtime.Gosched but only yields the processor to other goroutines already on the processor's runqueue.
func MapKeyHasher ¶
func MapKeyHasher(m any) func(unsafe.Pointer, uintptr) uintptr
MapKeyHasher returns a hash function for pointers of m's key type.
Preconditions: m must be a map.
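As an illustrative sketch, the returned hasher takes a pointer to a key and a seed; the fixed strSeed and the hashString helper below are hypothetical choices, not part of this package:

import (
	"unsafe"

	"gvisor.dev/gvisor/pkg/sync"
)

var (
	strHasher = sync.MapKeyHasher(map[string]struct{}(nil)) // hasher for string keys
	strSeed   = sync.RandUintptr()                          // fixed per-process seed (hypothetical choice)
)

// hashString hashes s using the runtime's hasher for string map keys.
func hashString(s string) uintptr {
	return strHasher(unsafe.Pointer(&s), strSeed)
}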
func RaceAcquire ¶
func RaceAcquire(addr unsafe.Pointer)
RaceAcquire has the same semantics as runtime.RaceAcquire.
func RaceRelease ¶
func RaceRelease(addr unsafe.Pointer)
RaceRelease has the same semantics as runtime.RaceRelease.
func RaceReleaseMerge ¶
func RaceReleaseMerge(addr unsafe.Pointer)
RaceReleaseMerge has the same semantics as runtime.RaceReleaseMerge.
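Together with RaceAcquire, these annotations let hand-rolled synchronization describe its happens-before edges to the race detector. A minimal sketch, in which publish, consume, and the surrounding custom mechanism are hypothetical:

import (
	"unsafe"

	"gvisor.dev/gvisor/pkg/sync"
)

// publish and consume are hypothetical halves of a custom
// synchronization mechanism. addr is any stable address that
// identifies the happens-before edge being described.
func publish(addr unsafe.Pointer) {
	if sync.RaceEnabled {
		sync.RaceRelease(addr) // memory accesses before this...
	}
	// ... make the data observable via the custom mechanism ...
}

func consume(addr unsafe.Pointer) {
	// ... observe the publication via the custom mechanism ...
	if sync.RaceEnabled {
		sync.RaceAcquire(addr) // ...happen before accesses after this
	}
}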
func RaceUncheckedAtomicCompareAndSwapUintptr ¶
func RaceUncheckedAtomicCompareAndSwapUintptr(ptr *uintptr, old, new uintptr) bool
RaceUncheckedAtomicCompareAndSwapUintptr is equivalent to sync/atomic.CompareAndSwapUintptr, but is not checked by the race detector. This is necessary when implementing gopark callbacks, since no race context is available during their execution.
func RandUintptr ¶
func RandUintptr() uintptr
RandUintptr returns a non-cryptographically-secure random uintptr.
Types ¶
type CrossGoroutineMutex ¶
type CrossGoroutineMutex struct {
// contains filtered or unexported fields
}
CrossGoroutineMutex is equivalent to Mutex, but it need not be unlocked by the same goroutine that locked the mutex.
func (*CrossGoroutineMutex) Lock ¶
func (m *CrossGoroutineMutex) Lock()
Lock locks the underlying Mutex. +checklocksignore
func (*CrossGoroutineMutex) TryLock ¶
func (m *CrossGoroutineMutex) TryLock() bool
TryLock tries to acquire the mutex. It returns true if it succeeds and false otherwise. TryLock does not block.
func (*CrossGoroutineMutex) Unlock ¶
func (m *CrossGoroutineMutex) Unlock()
Unlock unlocks the underlying Mutex. +checklocksignore
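A minimal sketch of the cross-goroutine handoff this permits, which checklocks would reject for an ordinary Mutex (beginCriticalSection is illustrative):

import "gvisor.dev/gvisor/pkg/sync"

var cgmu sync.CrossGoroutineMutex

// beginCriticalSection locks on one goroutine and hands the unlock
// off to another, which an ordinary Mutex forbids.
func beginCriticalSection() {
	cgmu.Lock()
	go func() {
		defer cgmu.Unlock() // legal: unlocked by a different goroutine
		// ... finish the work started by the locking goroutine ...
	}()
}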
type CrossGoroutineRWMutex ¶
type CrossGoroutineRWMutex struct {
// contains filtered or unexported fields
}
CrossGoroutineRWMutex is equivalent to RWMutex, but it need not be unlocked by the same goroutine that locked the mutex.
func (*CrossGoroutineRWMutex) DowngradeLock ¶
func (rw *CrossGoroutineRWMutex) DowngradeLock()
DowngradeLock atomically unlocks rw for writing and locks it for reading.
Preconditions:
- rw is locked for writing.
+checklocksignore
func (*CrossGoroutineRWMutex) Lock ¶
func (rw *CrossGoroutineRWMutex) Lock()
Lock locks rw for writing. If the lock is already locked for reading or writing, Lock blocks until the lock is available. +checklocksignore
func (*CrossGoroutineRWMutex) RLock ¶
func (rw *CrossGoroutineRWMutex) RLock()
RLock locks rw for reading.
It should not be used for recursive read locking; a blocked Lock call excludes new readers from acquiring the lock. See the documentation on the RWMutex type. +checklocksignore
func (*CrossGoroutineRWMutex) RUnlock ¶
func (rw *CrossGoroutineRWMutex) RUnlock()
RUnlock undoes a single RLock call.
Preconditions:
- rw is locked for reading.
+checklocksignore
func (*CrossGoroutineRWMutex) TryLock ¶
func (rw *CrossGoroutineRWMutex) TryLock() bool
TryLock locks rw for writing. It returns true if it succeeds and false otherwise. It does not block. +checklocksignore
func (*CrossGoroutineRWMutex) TryRLock ¶
func (rw *CrossGoroutineRWMutex) TryRLock() bool
TryRLock locks rw for reading. It returns true if it succeeds and false otherwise. It does not block. +checklocksignore
func (*CrossGoroutineRWMutex) Unlock ¶
func (rw *CrossGoroutineRWMutex) Unlock()
Unlock unlocks rw for writing.
Preconditions:
- rw is locked for writing.
+checklocksignore
type Gate ¶
type Gate struct {
// contains filtered or unexported fields
}
Gate is a synchronization primitive that allows concurrent goroutines to "enter" it as long as it hasn't been closed yet. Once it's been closed, goroutines cannot enter it anymore, but are allowed to leave, and the closer will be informed when all goroutines have left.
Gate is similar to WaitGroup:
- Gate.Enter() is analogous to WaitGroup.Add(1), but may be called even if the Gate counter is 0 and fails if Gate.Close() has been called.
- Gate.Leave() is equivalent to WaitGroup.Done().
- Gate.Close() is analogous to WaitGroup.Wait(), but also causes future calls to Gate.Enter() to fail and may only be called once, from a single goroutine.
This is useful, for example, in cases when a goroutine is trying to clean up an object for which multiple goroutines have pointers. In such a case, users would be required to enter and leave the Gate, and the cleaner would wait until all users are gone (and no new ones are allowed) before proceeding.
Users:
if !g.Enter() {
	// Gate is closed, we can't use the object.
	return
}
// Do something with object.
[...]
g.Leave()
Closer:
// Prevent new users from using the object, and wait for the existing
// ones to complete.
g.Close()
// Clean up the object.
[...]
func (*Gate) Close ¶
func (g *Gate) Close()
Close closes the gate, causing future calls to Enter to fail, and waits until all goroutines that are currently inside the gate leave before returning.
Only one goroutine can call this function.
type Mutex ¶
type Mutex struct {
// contains filtered or unexported fields
}
Mutex is a mutual exclusion lock. The zero value for a Mutex is an unlocked mutex.
A Mutex must not be copied after first use.
A Mutex must be unlocked by the same goroutine that locked it. This invariant is enforced with the 'checklocks' build tag.
func (*Mutex) Lock ¶
func (m *Mutex) Lock()
Lock locks m. If the lock is already in use, the calling goroutine blocks until the mutex is available. +checklocksignore
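A minimal usage sketch; since checklocks requires the locking goroutine to unlock, the conventional Lock/defer Unlock pairing applies (mu and count are illustrative):

import "gvisor.dev/gvisor/pkg/sync"

var (
	mu    sync.Mutex
	count int
)

// increment locks and unlocks on the same goroutine, as required.
func increment() {
	mu.Lock()
	defer mu.Unlock()
	count++
}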
type NoCopy ¶
type NoCopy struct{}
NoCopy may be embedded into structs which must not be copied after the first use.
See https://golang.org/issues/8005#issuecomment-190753527 for details.
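A sketch of the intended embedding, with a hypothetical Resource type; per the pattern in the linked issue, copies of such a struct are meant to be reported by go vet's copylocks check:

import "gvisor.dev/gvisor/pkg/sync"

// Resource must not be copied after first use; go vet's copylocks
// check is intended to report copies of values containing NoCopy.
type Resource struct {
	noCopy sync.NoCopy

	fd int
}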
type RWMutex ¶
type RWMutex struct {
// contains filtered or unexported fields
}
A RWMutex is a reader/writer mutual exclusion lock. The lock can be held by an arbitrary number of readers or a single writer. The zero value for a RWMutex is an unlocked mutex.
A RWMutex must not be copied after first use.
If a goroutine holds a RWMutex for reading and another goroutine might call Lock, no goroutine should expect to be able to acquire a read lock until the initial read lock is released. In particular, this prohibits recursive read locking. This is to ensure that the lock eventually becomes available; a blocked Lock call excludes new readers from acquiring the lock.
A Mutex must be unlocked by the same goroutine that locked it. This invariant is enforced with the 'checklocks' build tag.
func (*RWMutex) DowngradeLock ¶
func (rw *RWMutex) DowngradeLock()
DowngradeLock atomically unlocks rw for writing and locks it for reading.
Preconditions:
- rw is locked for writing.
+checklocksignore
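A sketch of the write-then-read handoff this enables (rw, state, and mutateThenRead are illustrative):

import "gvisor.dev/gvisor/pkg/sync"

var (
	rw    sync.RWMutex
	state []int
)

// mutateThenRead writes under the write lock, then downgrades so
// other readers can proceed while it finishes reading.
func mutateThenRead() {
	rw.Lock()
	state = append(state, 1) // exclusive access
	rw.DowngradeLock()       // atomically trade the write lock for a read lock
	_ = state[len(state)-1]  // shared access; no writer could have intervened
	rw.RUnlock()
}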
func (*RWMutex) Lock ¶
func (rw *RWMutex) Lock()
Lock locks rw for writing. If the lock is already locked for reading or writing, Lock blocks until the lock is available. +checklocksignore
func (*RWMutex) RLock ¶
func (rw *RWMutex) RLock()
RLock locks rw for reading.
It should not be used for recursive read locking; a blocked Lock call excludes new readers from acquiring the lock. See the documentation on the RWMutex type. +checklocksignore
func (*RWMutex) RUnlock ¶
func (rw *RWMutex) RUnlock()
RUnlock undoes a single RLock call.
Preconditions:
- rw is locked for reading.
- rw was locked by this goroutine.
+checklocksignore
func (*RWMutex) TryLock ¶
func (rw *RWMutex) TryLock() bool
TryLock locks rw for writing. It returns true if it succeeds and false otherwise. It does not block. +checklocksignore
type SeqCount ¶
type SeqCount struct {
// contains filtered or unexported fields
}
SeqCount is a synchronization primitive for optimistic reader/writer synchronization in cases where readers can work with stale data and therefore do not need to block writers.
Compared to sync/atomic.Value:
- Mutation of SeqCount-protected data does not require memory allocation, whereas atomic.Value generally does. This is a significant advantage when writes are common.
- Atomic reads of SeqCount-protected data require copying. This is a disadvantage when atomic reads are common.
- SeqCount may be more flexible: correct use of SeqCount.ReadOk allows other operations to be made atomic with reads of SeqCount-protected data.
- SeqCount is more cumbersome to use; atomic reads of SeqCount-protected data require instantiating function templates using go_generics (see seqatomic.go).
func (*SeqCount) BeginRead ¶
func (s *SeqCount) BeginRead() SeqCountEpoch
BeginRead indicates the beginning of a reader critical section. Reader critical sections DO NOT BLOCK writer critical sections, so operations in a reader critical section MAY RACE with writer critical sections. Races are detected by ReadOk at the end of the reader critical section. Thus, the low-level structure of readers is generally:
for {
	epoch := seq.BeginRead()
	// do something idempotent with seq-protected data
	if seq.ReadOk(epoch) {
		break
	}
}
However, since reader critical sections may race with writer critical sections, the Go race detector will (accurately) flag data races in readers using this pattern. Most users of SeqCount will need to use the SeqAtomicLoad function template in seqatomic.go.
func (*SeqCount) BeginWrite ¶
func (s *SeqCount) BeginWrite()
BeginWrite indicates the beginning of a writer critical section.
SeqCount does not support concurrent writer critical sections; clients with concurrent writers must synchronize them using e.g. sync.Mutex.
func (*SeqCount) BeginWriteOk ¶
func (s *SeqCount) BeginWriteOk(epoch SeqCountEpoch) bool
BeginWriteOk combines the semantics of ReadOk and BeginWrite. If the reader critical section initiated by a previous call to BeginRead() that returned epoch did not race with any writer critical sections, it begins a writer critical section and returns true. Otherwise it does nothing and returns false.
func (*SeqCount) EndWrite ¶
func (s *SeqCount) EndWrite()
EndWrite ends the effect of a preceding BeginWrite or successful BeginWriteOk.
func (*SeqCount) ReadOk ¶
func (s *SeqCount) ReadOk(epoch SeqCountEpoch) bool
ReadOk returns true if the reader critical section initiated by a previous call to BeginRead() that returned epoch did not race with any writer critical sections.
ReadOk may be called any number of times during a reader critical section. Reader critical sections do not need to be explicitly terminated; the last call to ReadOk is implicitly the end of the reader critical section.
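Putting the pieces together, a minimal sketch of a SeqCount-protected value; point, setPoint, and getPoint are illustrative, and real readers should use the SeqAtomicLoad template so the race detector is not tripped:

import "gvisor.dev/gvisor/pkg/sync"

// point is an illustrative value small enough to copy.
type point struct{ x, y int64 }

var (
	seq  sync.SeqCount
	data point
)

// setPoint runs a writer critical section. Concurrent writers must be
// serialized externally (e.g. by a Mutex).
func setPoint(p point) {
	seq.BeginWrite()
	data = p
	seq.EndWrite()
}

// getPoint retries until it observes a consistent copy. The plain read
// of data intentionally races with writers; under the race detector,
// real code uses the SeqAtomicLoad template from seqatomic.go instead.
func getPoint() point {
	for {
		epoch := seq.BeginRead()
		p := data // idempotent, possibly-racy copy
		if seq.ReadOk(epoch) {
			return p
		}
	}
}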
type SeqCountEpoch ¶
type SeqCountEpoch uint32
SeqCountEpoch tracks writer critical sections in a SeqCount.
Source Files ¶
Directories ¶
Path | Synopsis
---|---
 | Package seqatomic doesn't exist.
 | Package atomicptrmap instantiates generic_atomicptrmap for testing.
 | Package locking implements lock primitives with the correctness validator.
 | Package seqatomic doesn't exist.