Documentation ¶
Overview ¶
Package stackedembeddings provides convenient types to stack multiple word-embedding representations by concatenating them. The concatenation is followed by a linear layer, which serves a double purpose: it can project the concatenated embeddings into a smaller dimension, and it allows the final word representations to be further trained.
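The concatenate-then-project idea can be sketched without the package itself. The snippet below is illustrative only: it uses plain float64 slices and a hand-written linear layer, whereas the real model operates on ag.Node values inside a computation graph, and the weights shown are invented for the example.

```go
package main

import "fmt"

// concat stacks the vectors produced by multiple embedding sources for
// the same token into a single vector.
func concat(vecs ...[]float64) []float64 {
	var out []float64
	for _, v := range vecs {
		out = append(out, v...)
	}
	return out
}

// project applies a linear layer y = W·x + b, reducing the concatenated
// vector to a smaller, trainable representation.
func project(x []float64, w [][]float64, b []float64) []float64 {
	y := make([]float64, len(w))
	for i, row := range w {
		y[i] = b[i]
		for j, v := range row {
			y[i] += v * x[j]
		}
	}
	return y
}

func main() {
	// Two hypothetical encoders produce a 3-d and a 2-d vector for a token.
	e1 := []float64{1, 0, 1}
	e2 := []float64{0.5, 0.5}
	x := concat(e1, e2) // 5-d stacked representation

	// A hypothetical 5→2 projection (weights chosen arbitrarily).
	w := [][]float64{
		{1, 0, 0, 0, 0},
		{0, 0, 0, 1, 1},
	}
	b := []float64{0, 0}
	fmt.Println(project(x, w, b)) // [1 1]
}
```

In the package, the projection weights are parameters of ProjectionLayer and are learned jointly with the rest of the network.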
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Model ¶
type Model struct {
	nn.BaseModel
	WordsEncoders   []WordsEncoderProcessor
	ProjectionLayer *linear.Model
}
Model implements a stacked embeddings model. TODO: optional use of the projection layer? TODO: include an optional layer normalization?
type WordsEncoderProcessor ¶
type WordsEncoderProcessor interface {
	nn.Model

	// Encode transforms a string sequence into an encoded representation.
	Encode([]string) []ag.Node
}
WordsEncoderProcessor extends nn.Model, providing the Encode method to transform a string sequence into an encoded representation.
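The interface pattern can be illustrated with a simplified stand-in. This is not spago's API: ag.Node is replaced by plain float64 slices, the nn.Model embedding is dropped, and both toy encoders are invented for the example. It only shows how several independent encoders feed the stacking step.

```go
package main

import "fmt"

// encoder is a simplified stand-in for WordsEncoderProcessor: anything
// that can turn a word sequence into one vector per word.
type encoder interface {
	Encode(words []string) [][]float64
}

// lengthEncoder is a toy encoder emitting each word's length as a 1-d vector.
type lengthEncoder struct{}

func (lengthEncoder) Encode(words []string) [][]float64 {
	out := make([][]float64, len(words))
	for i, w := range words {
		out[i] = []float64{float64(len(w))}
	}
	return out
}

// firstByteEncoder is a toy encoder emitting each word's first byte value.
type firstByteEncoder struct{}

func (firstByteEncoder) Encode(words []string) [][]float64 {
	out := make([][]float64, len(words))
	for i, w := range words {
		out[i] = []float64{float64(w[0])}
	}
	return out
}

// stack runs every encoder and concatenates the per-word vectors,
// mirroring how the stacked model combines its WordsEncoders before
// the projection layer.
func stack(encs []encoder, words []string) [][]float64 {
	out := make([][]float64, len(words))
	for _, e := range encs {
		for i, v := range e.Encode(words) {
			out[i] = append(out[i], v...)
		}
	}
	return out
}

func main() {
	encs := []encoder{lengthEncoder{}, firstByteEncoder{}}
	fmt.Println(stack(encs, []string{"hi", "go"})) // [[2 104] [2 103]]
}
```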