Documentation ¶
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func ScaledDotProductAttention ¶
func ScaledDotProductAttention(q []mat.Tensor, k, v, scaleFactor mat.Tensor, useCausalMask bool) ([]mat.Tensor, []mat.Tensor)
ScaledDotProductAttention is a self-attention mechanism that relates different positions of a single sequence in order to compute a representation of that sequence. The function requires that the query, key, and value vectors have already been obtained from the input sequence. The scale factor is the square root of the dimension of the key vectors.
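For intuition, below is a minimal, self-contained Go sketch of the computation such a function performs. It uses plain float64 slices and hypothetical attention and softmax helpers rather than the spago mat.Tensor API, so it illustrates the underlying math only, not the library's actual implementation.

	// Illustrative standalone sketch of scaled dot-product attention.
	// It mirrors the math of the documented function but uses none of the
	// spago/mat API; names and helpers here are hypothetical.
	package main

	import (
		"fmt"
		"math"
	)

	// softmax returns the normalized exponentials of scores.
	func softmax(scores []float64) []float64 {
		maxScore := math.Inf(-1)
		for _, s := range scores {
			if s > maxScore {
				maxScore = s
			}
		}
		var sum float64
		out := make([]float64, len(scores))
		for i, s := range scores {
			out[i] = math.Exp(s - maxScore)
			sum += out[i]
		}
		for i := range out {
			out[i] /= sum
		}
		return out
	}

	// attention computes, for each query vector, a weighted sum of the value
	// vectors. Scores are dot products scaled by 1/sqrt(d_k); with useCausalMask,
	// query i only attends to keys at positions <= i, so the probability rows
	// have increasing lengths.
	func attention(q, k, v [][]float64, useCausalMask bool) (context, probs [][]float64) {
		scale := 1.0 / math.Sqrt(float64(len(k[0])))
		for i, qi := range q {
			limit := len(k)
			if useCausalMask && i+1 < limit {
				limit = i + 1
			}
			scores := make([]float64, limit)
			for j := 0; j < limit; j++ {
				var dot float64
				for d := range qi {
					dot += qi[d] * k[j][d]
				}
				scores[j] = dot * scale
			}
			p := softmax(scores)
			ctx := make([]float64, len(v[0]))
			for j, pj := range p {
				for d := range ctx {
					ctx[d] += pj * v[j][d]
				}
			}
			context = append(context, ctx)
			probs = append(probs, p)
		}
		return context, probs
	}

	func main() {
		q := [][]float64{{1, 0}, {0, 1}}
		k := [][]float64{{1, 0}, {0, 1}}
		v := [][]float64{{1, 2}, {3, 4}}
		ctx, probs := attention(q, k, v, true)
		fmt.Println(ctx, probs)
	}

The library function follows the same pattern, but operates on mat.Tensor values, takes the scale factor explicitly as a parameter, and returns both the context vectors and the attention probabilities.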
Types ¶
This section is empty.