README ¶

Onward

Uses self-entropy/self-attention to learn the first layer of a two-layer neural network in a single pass. Self-entropy is calculated as:

entropy(softmax(softmax(X*X^T)*X))

Citations

Attention Is All You Need
The Forward-Forward Algorithm: Some Preliminary Investigations

Documentation ¶

There is no documentation for this package.

Source Files ¶

main.go
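
Example ¶

The README does not spell out how the self-entropy expression is evaluated, so the following is a minimal sketch, assuming the softmax is applied row-wise, the entropy is the Shannon entropy (natural log) summed over the rows of the final softmax output, and X stores one sample per row. All function names here are illustrative, not part of the package's actual main.go.

```go
// A sketch of entropy(softmax(softmax(X*X^T)*X)) under the assumptions
// stated above; the package's real implementation may differ in detail.
package main

import (
	"fmt"
	"math"
)

// softmaxRows applies softmax independently to each row of m,
// subtracting the row maximum first for numerical stability.
func softmaxRows(m [][]float64) [][]float64 {
	out := make([][]float64, len(m))
	for i, row := range m {
		max := math.Inf(-1)
		for _, v := range row {
			if v > max {
				max = v
			}
		}
		sum := 0.0
		out[i] = make([]float64, len(row))
		for j, v := range row {
			out[i][j] = math.Exp(v - max)
			sum += out[i][j]
		}
		for j := range out[i] {
			out[i][j] /= sum
		}
	}
	return out
}

// matMul returns a*b for an (n×k) a and a (k×m) b.
func matMul(a, b [][]float64) [][]float64 {
	n, k, m := len(a), len(b), len(b[0])
	out := make([][]float64, n)
	for i := range out {
		out[i] = make([]float64, m)
		for l := 0; l < k; l++ {
			for j := 0; j < m; j++ {
				out[i][j] += a[i][l] * b[l][j]
			}
		}
	}
	return out
}

// transpose returns m^T.
func transpose(m [][]float64) [][]float64 {
	out := make([][]float64, len(m[0]))
	for j := range out {
		out[j] = make([]float64, len(m))
		for i := range m {
			out[j][i] = m[i][j]
		}
	}
	return out
}

// selfEntropy computes entropy(softmax(softmax(X*X^T)*X)): a self-attention
// pass over X followed by the summed Shannon entropy of the resulting rows.
func selfEntropy(x [][]float64) float64 {
	attention := softmaxRows(matMul(x, transpose(x))) // softmax(X*X^T)
	p := softmaxRows(matMul(attention, x))            // softmax(... * X)
	e := 0.0
	for _, row := range p {
		for _, v := range row {
			if v > 0 {
				e -= v * math.Log(v)
			}
		}
	}
	return e
}

func main() {
	// A toy 3×3 input; each row is one sample.
	x := [][]float64{
		{1, 0, 0},
		{0, 1, 0},
		{0, 0, 1},
	}
	fmt.Printf("self-entropy: %f\n", selfEntropy(x))
}
```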