Documentation ¶
Overview ¶
Package sync provides synchronization primitives.
Index ¶
- Constants
- func Gopark(unlockf func(uintptr, unsafe.Pointer) bool, lock unsafe.Pointer, reason uint8, ...)
- func Goready(gp uintptr, traceskip int)
- func MapKeyHasher(m interface{}) func(unsafe.Pointer, uintptr) uintptr
- func Memmove(to, from unsafe.Pointer, n uintptr)
- func RaceAcquire(addr unsafe.Pointer)
- func RaceDisable()
- func RaceEnable()
- func RaceRelease(addr unsafe.Pointer)
- func RaceReleaseMerge(addr unsafe.Pointer)
- func RaceUncheckedAtomicCompareAndSwapUintptr(ptr *uintptr, old, new uintptr) bool
- func Rand32() uint32
- func Rand64() uint64
- func RandUintptr() uintptr
- type Cond
- type CrossGoroutineMutex
- type CrossGoroutineRWMutex
- func (rw *CrossGoroutineRWMutex) DowngradeLock()
- func (rw *CrossGoroutineRWMutex) Lock()
- func (rw *CrossGoroutineRWMutex) RLock()
- func (rw *CrossGoroutineRWMutex) RUnlock()
- func (rw *CrossGoroutineRWMutex) TryLock() bool
- func (rw *CrossGoroutineRWMutex) TryRLock() bool
- func (rw *CrossGoroutineRWMutex) Unlock()
- type Locker
- type Map
- type Mutex
- type NoCopy
- type Once
- type Pool
- type RWMutex
- type SeqCount
- type SeqCountEpoch
- type WaitGroup
Constants ¶
const RaceEnabled = false
RaceEnabled is true if the Go data race detector is enabled.
const (
TraceEvGoBlockSelect byte = 24
)
Values for the traceEv argument to gopark, from Go's src/runtime/trace.go.
const (
WaitReasonSelect uint8 = 9
)
Values for the reason argument to gopark, from Go's src/runtime/runtime2.go.
Variables ¶
This section is empty.
Functions ¶
func Gopark ¶
func Gopark(unlockf func(uintptr, unsafe.Pointer) bool, lock unsafe.Pointer, reason uint8, traceEv byte, traceskip int)
Gopark is runtime.gopark. Gopark calls unlockf(pointer to runtime.g, lock); if unlockf returns true, Gopark blocks until Goready(pointer to runtime.g) is called. unlockf and its callees must be nosplit and norace, since stack splitting and race context are not available where it is called.
func MapKeyHasher ¶
func MapKeyHasher(m interface{}) func(unsafe.Pointer, uintptr) uintptr
MapKeyHasher returns a hash function for pointers of m's key type.
Preconditions: m must be a map.
func RaceAcquire ¶
func RaceAcquire(addr unsafe.Pointer)
RaceAcquire has the same semantics as runtime.RaceAcquire.
func RaceRelease ¶
func RaceRelease(addr unsafe.Pointer)
RaceRelease has the same semantics as runtime.RaceRelease.
func RaceReleaseMerge ¶
func RaceReleaseMerge(addr unsafe.Pointer)
RaceReleaseMerge has the same semantics as runtime.RaceReleaseMerge.
func RaceUncheckedAtomicCompareAndSwapUintptr ¶
func RaceUncheckedAtomicCompareAndSwapUintptr(ptr *uintptr, old, new uintptr) bool
RaceUncheckedAtomicCompareAndSwapUintptr is equivalent to sync/atomic.CompareAndSwapUintptr, but is not checked by the race detector. This is necessary when implementing gopark callbacks, since no race context is available during their execution.
func RandUintptr ¶
func RandUintptr() uintptr
RandUintptr returns a non-cryptographically-secure random uintptr.
Types ¶
type CrossGoroutineMutex ¶
CrossGoroutineMutex is equivalent to Mutex, but it need not be unlocked by the same goroutine that locked it.
func (*CrossGoroutineMutex) TryLock ¶
func (m *CrossGoroutineMutex) TryLock() bool
TryLock tries to acquire the mutex. It returns true if it succeeds and false otherwise. TryLock does not block.
type CrossGoroutineRWMutex ¶
type CrossGoroutineRWMutex struct {
// contains filtered or unexported fields
}
CrossGoroutineRWMutex is equivalent to RWMutex, but it need not be unlocked by the same goroutine that locked it.
func (*CrossGoroutineRWMutex) DowngradeLock ¶
func (rw *CrossGoroutineRWMutex) DowngradeLock()
DowngradeLock atomically unlocks rw for writing and locks it for reading.
Preconditions:
* rw is locked for writing.
func (*CrossGoroutineRWMutex) Lock ¶
func (rw *CrossGoroutineRWMutex) Lock()
Lock locks rw for writing. If the lock is already locked for reading or writing, Lock blocks until the lock is available.
func (*CrossGoroutineRWMutex) RLock ¶
func (rw *CrossGoroutineRWMutex) RLock()
RLock locks rw for reading.
It should not be used for recursive read locking; a blocked Lock call excludes new readers from acquiring the lock. See the documentation on the RWMutex type.
func (*CrossGoroutineRWMutex) RUnlock ¶
func (rw *CrossGoroutineRWMutex) RUnlock()
RUnlock undoes a single RLock call.
Preconditions:
* rw is locked for reading.
func (*CrossGoroutineRWMutex) TryLock ¶
func (rw *CrossGoroutineRWMutex) TryLock() bool
TryLock locks rw for writing. It returns true if it succeeds and false otherwise. It does not block.
func (*CrossGoroutineRWMutex) TryRLock ¶
func (rw *CrossGoroutineRWMutex) TryRLock() bool
TryRLock locks rw for reading. It returns true if it succeeds and false otherwise. It does not block.
func (*CrossGoroutineRWMutex) Unlock ¶
func (rw *CrossGoroutineRWMutex) Unlock()
Unlock unlocks rw for writing.
Preconditions:
* rw is locked for writing.
type Mutex ¶
type Mutex struct {
// contains filtered or unexported fields
}
Mutex is a mutual exclusion lock. The zero value for a Mutex is an unlocked mutex.
A Mutex must not be copied after first use.
A Mutex must be unlocked by the same goroutine that locked it. This invariant is enforced with the 'checklocks' build tag.
func (*Mutex) Lock ¶
func (m *Mutex) Lock()
Lock locks m. If the lock is already in use, the calling goroutine blocks until the mutex is available.
type NoCopy ¶
type NoCopy struct{}
NoCopy may be embedded into structs which must not be copied after the first use.
See https://golang.org/issues/8005#issuecomment-190753527 for details.
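The pattern behind NoCopy (from the linked issue comment) is a zero-size type with no-op Lock/Unlock methods: embedding it makes go vet's copylocks analyzer report any copy of the enclosing struct. A minimal sketch, re-declaring the helper locally since the real type lives in this package:

```go
package main

import "fmt"

// noCopy mimics NoCopy: its Lock/Unlock methods exist only so that
// `go vet` (copylocks) flags copies of structs that embed it.
type noCopy struct{}

func (*noCopy) Lock()   {}
func (*noCopy) Unlock() {}

// Registry must not be copied after first use.
type Registry struct {
	noCopy  noCopy
	entries map[string]int
}

func main() {
	r := Registry{entries: map[string]int{"a": 1}}
	// bad := r // go vet would report: assignment copies lock value
	fmt.Println(r.entries["a"])
}
```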
type RWMutex ¶
type RWMutex struct {
// contains filtered or unexported fields
}
An RWMutex is a reader/writer mutual exclusion lock. The lock can be held by an arbitrary number of readers or a single writer. The zero value for an RWMutex is an unlocked mutex.
An RWMutex must not be copied after first use.
If a goroutine holds an RWMutex for reading and another goroutine might call Lock, no goroutine should expect to be able to acquire a read lock until the initial read lock is released. In particular, this prohibits recursive read locking. This is to ensure that the lock eventually becomes available; a blocked Lock call excludes new readers from acquiring the lock.
An RWMutex must be unlocked by the same goroutine that locked it. This invariant is enforced with the 'checklocks' build tag.
func (*RWMutex) DowngradeLock ¶
func (rw *RWMutex) DowngradeLock()
DowngradeLock atomically unlocks rw for writing and locks it for reading.
Preconditions:
* rw is locked for writing.
func (*RWMutex) Lock ¶
func (rw *RWMutex) Lock()
Lock locks rw for writing. If the lock is already locked for reading or writing, Lock blocks until the lock is available.
func (*RWMutex) RLock ¶
func (rw *RWMutex) RLock()
RLock locks rw for reading.
It should not be used for recursive read locking; a blocked Lock call excludes new readers from acquiring the lock. See the documentation on the RWMutex type.
func (*RWMutex) RUnlock ¶
func (rw *RWMutex) RUnlock()
RUnlock undoes a single RLock call.
Preconditions:
* rw is locked for reading.
* rw was locked by this goroutine.
func (*RWMutex) TryLock ¶
func (rw *RWMutex) TryLock() bool
TryLock locks rw for writing. It returns true if it succeeds and false otherwise. It does not block.
type SeqCount ¶
type SeqCount struct {
// contains filtered or unexported fields
}
SeqCount is a synchronization primitive for optimistic reader/writer synchronization in cases where readers can work with stale data and therefore do not need to block writers.
Compared to sync/atomic.Value:
- Mutation of SeqCount-protected data does not require memory allocation, whereas atomic.Value generally does. This is a significant advantage when writes are common.
- Atomic reads of SeqCount-protected data require copying. This is a disadvantage when atomic reads are common.
- SeqCount may be more flexible: correct use of SeqCount.ReadOk allows other operations to be made atomic with reads of SeqCount-protected data.
- SeqCount is more cumbersome to use; atomic reads of SeqCount-protected data require instantiating function templates using go_generics (see seqatomic.go).
func (*SeqCount) BeginRead ¶
func (s *SeqCount) BeginRead() SeqCountEpoch
BeginRead indicates the beginning of a reader critical section. Reader critical sections DO NOT BLOCK writer critical sections, so operations in a reader critical section MAY RACE with writer critical sections. Races are detected by ReadOk at the end of the reader critical section. Thus, the low-level structure of readers is generally:
	for {
		epoch := seq.BeginRead()
		// do something idempotent with seq-protected data
		if seq.ReadOk(epoch) {
			break
		}
	}
However, since reader critical sections may race with writer critical sections, the Go race detector will (accurately) flag data races in readers using this pattern. Most users of SeqCount will need to use the SeqAtomicLoad function template in seqatomic.go.
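To make the BeginRead/ReadOk/BeginWrite/EndWrite protocol concrete, here is an illustrative re-implementation of a sequence counter using sync/atomic; it is a sketch of the technique, not this package's code, and it deliberately omits the seqatomic data-load machinery (a real multi-goroutine reader must load the protected data atomically, as noted above, or the race detector will flag it).

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// seqCount sketches SeqCount: the epoch is odd while a writer critical
// section is in progress, and incremented again when it ends.
type seqCount struct {
	epoch atomic.Uint32
}

func (s *seqCount) BeginRead() uint32 {
	for {
		e := s.epoch.Load()
		if e&1 == 0 { // even: no writer currently active
			return e
		}
		// odd: a writer is in its critical section; retry
	}
}

func (s *seqCount) ReadOk(epoch uint32) bool {
	// The read raced with no writer iff the epoch is unchanged.
	return s.epoch.Load() == epoch
}

func (s *seqCount) BeginWrite() { s.epoch.Add(1) } // even -> odd
func (s *seqCount) EndWrite()   { s.epoch.Add(1) } // odd -> even

func main() {
	var (
		s    seqCount
		data int
	)

	// Writer critical section.
	s.BeginWrite()
	data = 7
	s.EndWrite()

	// Reader critical section, retried until it races no writer.
	for {
		epoch := s.BeginRead()
		v := data // idempotent read of the protected data
		if s.ReadOk(epoch) {
			fmt.Println(v)
			break
		}
	}
}
```

Note how ReadOk carries the whole correctness burden: the reader's loads may be stale or torn mid-loop, but a stale epoch forces a retry, so only consistent snapshots escape the loop.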
func (*SeqCount) BeginWrite ¶
func (s *SeqCount) BeginWrite()
BeginWrite indicates the beginning of a writer critical section.
SeqCount does not support concurrent writer critical sections; clients with concurrent writers must synchronize them using e.g. sync.Mutex.
func (*SeqCount) EndWrite ¶
func (s *SeqCount) EndWrite()
EndWrite ends the effect of a preceding BeginWrite.
func (*SeqCount) ReadOk ¶
func (s *SeqCount) ReadOk(epoch SeqCountEpoch) bool
ReadOk returns true if the reader critical section initiated by a previous call to BeginRead() that returned epoch did not race with any writer critical sections.
ReadOk may be called any number of times during a reader critical section. Reader critical sections do not need to be explicitly terminated; the last call to ReadOk is implicitly the end of the reader critical section.
type SeqCountEpoch ¶
type SeqCountEpoch uint32
SeqCountEpoch tracks writer critical sections in a SeqCount.