Documentation
There is no documentation for this package.
Directories
Path | Synopsis |
---|---|
cmd | |
ner | This is the first attempt to launch a sequence labeling server from the command line. |
embeddings | |
store/diskstore | Module |
graphviz | Module |
nn | |
approxlinear | Module |
pkg | |
mat/internal/asm/f64 | Package f64 provides float64 vector primitives. |
ml/ag/fn | SparseMax implementation based on https://github.com/gokceneraslan/SparseMax.torch |
ml/nn/birnncrf | Bidirectional Recurrent Neural Network (BiRNN) with a Conditional Random Field (CRF) on top. |
ml/nn/bls | Implementation of the Broad Learning System (BLS) described in "Broad Learning System: An Effective and Efficient Incremental Learning System Without the Need for Deep Architecture" by C. L. Philip Chen and Zhulin Liu, 2017. |
ml/nn/gnn/slstm | slstm Reference: "Sentence-State LSTM for Text Representation" by Zhang et al., 2018. |
ml/nn/gnn/startransformer | StarTransformer is a variant of the model introduced by Qipeng Guo, Xipeng Qiu et al. |
ml/nn/lshattention | LSH-Attention as in "Reformer: The Efficient Transformer" by N. Kitaev, Ł. Kaiser and A. Levskaya. |
ml/nn/normalization/adanorm | Reference: "Understanding and Improving Layer Normalization" by Jingjing Xu, Xu Sun, Zhiyuan Zhang, Guangxiang Zhao and Junyang Lin (2019). |
ml/nn/normalization/fixnorm | Reference: "Improving Lexical Choice in Neural Machine Translation" by Toan Q. Nguyen and David Chiang (2018) (https://arxiv.org/pdf/1710.01329.pdf). |
ml/nn/normalization/layernorm | Reference: "Layer Normalization" by Jimmy Lei Ba, Jamie Ryan Kiros and Geoffrey E. Hinton (2016). A generic formula sketch follows this table. |
ml/nn/normalization/layernormsimple | Reference: "Understanding and Improving Layer Normalization" by Jingjing Xu, Xu Sun, Zhiyuan Zhang, Guangxiang Zhao and Junyang Lin (2019). |
ml/nn/normalization/rmsnorm | Reference: "Root Mean Square Layer Normalization" by Biao Zhang and Rico Sennrich (2019). |
ml/nn/rae | Implementation of the recursive auto-encoder strategy described in "Towards Lossless Encoding of Sentences" by Prato et al., 2019. |
ml/nn/rc | This package contains built-in Residual Connections (RC). |
ml/nn/rec/horn | Higher Order Recurrent Neural Networks (HORN). |
ml/nn/rec/lstmsc | LSTM enriched with a PolicyGradient to enable Dynamic Skip Connections. |
ml/nn/rec/mist | Implementation of the MIST (MIxed hiSTory) recurrent network as described in "Analyzing and Exploiting NARX Recurrent Neural Networks for Long-Term Dependencies" by Di Pietro et al., 2018 (https://arxiv.org/pdf/1702.07805.pdf). |
ml/nn/rec/nru | Implementation of the NRU (Non-Saturating Recurrent Units) recurrent network as described in "Towards Non-Saturating Recurrent Units for Modelling Long-Term Dependencies" by Chandar et al., 2019. |
ml/nn/rec/rla | RLA (Recurrent Linear Attention), from "Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention" by Katharopoulos et al., 2020. |
ml/nn/rec/srnn | srnn implements the SRNN (Shuffling Recurrent Neural Networks) by Rotman and Wolf, 2020. |
ml/nn/syntheticattention | Implementation of the Synthetic Attention described in "SYNTHESIZER: Rethinking Self-Attention in Transformer Models" by Tay et al., 2020. |
nlp/charlm | CharLM implements a character-level language model that uses a recurrent neural network as its backbone. |
nlp/contextualstringembeddings | Implementation of the "Contextual String Embeddings" of words (Akbik et al., 2018). |
nlp/evolvingembeddings | A word embedding model that evolves itself by dynamically aggregating contextual embeddings over time during inference. |
nlp/sequencelabeler | Implementation of a sequence labeling architecture composed of Embeddings -> BiRNN -> Scorer -> CRF. |
nlp/stackedembeddings | StackedEmbeddings is a convenient module that stacks multiple word embedding representations by concatenating them. |
nlp/tokenizers | This package is an interim solution while developing `gotokenizers` (https://github.com/nlpodyssey/gotokenizers). |
nlp/tokenizers/basetokenizer | BaseTokenizer is a very simple tokenizer that splits on whitespace (and the like) and punctuation symbols. |
nlp/transformers/bert | Reference: "Attention Is All You Need" by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin (2017) (http://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf). |