This repository contains tutorial-style notebooks for the Lecture Notes article "Bio-Inspired Artificial Neural Networks Based on Predictive Coding."
This folder contains a notebook demonstrating how backpropagation gradients can be approximated using predictive coding, as discussed in the Lecture Notes and in the following publication: "An Approximation of the Error Backpropagation Algorithm in Predictive Coding Networks".
This folder contains the classification tasks. It includes a models folder with the network architectures, a FLOPs file analyzing resource usage, and a comparison file evaluating model performance when trained with backpropagation (BP) or predictive coding (PC).
This folder contains the autoencoder models used for compression. It includes a models folder with the network architectures, a FLOPs file analyzing resource usage, and a comparison file evaluating model performance when trained with BP or PC.
This folder contains a schematic of the architectures used, showing the differences between PC and BP networks for both classification and compression/generation settings. Additionally, the file 'pseudocodes.png' provides the pseudocode for both algorithms.
Predictive coding models must use a fixed batch size throughout training because the neural activities are stored as nn.Parameter tensors, just like the network weights. As a result, their shape, including the batch dimension, cannot be changed during training.
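The minimal sketch below illustrates this constraint. The PCLayer class, its arguments, and the batch size of 64 are illustrative assumptions, not the repository's actual API.

```python
import torch
import torch.nn as nn

# Illustrative sketch: neural activities are registered as nn.Parameter,
# so their shape, including the batch dimension, is fixed when the model
# is created (names and sizes here are hypothetical, not the repo's API).
class PCLayer(nn.Module):
    def __init__(self, batch_size, dim):
        super().__init__()
        # The activity tensor lives alongside the weights as a learnable
        # parameter; a batch with a different number of samples cannot be
        # written into it without rebuilding the module.
        self.x = nn.Parameter(torch.zeros(batch_size, dim))

layer = PCLayer(batch_size=64, dim=128)
print(layer.x.shape)  # torch.Size([64, 128])
```

In practice this means every training batch must contain exactly the same number of samples, for example by passing drop_last=True to the DataLoader so that a smaller final batch is discarded.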
When the Free Energy gradient is computed with self.energy.backward(), gradients are produced for both the neural activities and the weights. After running T updates of the neural activities during the predictive coding inference phase, the activities that PyTorch uses for the weight gradients correspond to the state after only T-1 updates: PyTorch computes gradients from the tensors saved when the energy was evaluated, and updating the activities after that backward call does not change gradients that have already been computed. To make the weight gradients use activities optimized over all T steps, run the inference phase for T+1 steps before performing the weight update; the final backward call then sees the fully optimized state after T updates.
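The toy example below sketches this schedule on a single-hidden-layer predictive coding network. All names (W1, W2, x, energy) and hyperparameter values are illustrative assumptions rather than the repository's code; the point is only to show why T+1 inference iterations are run before the weight update.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
batch, d_in, d_hid, d_out = 8, 10, 16, 4
inputs, targets = torch.randn(batch, d_in), torch.randn(batch, d_out)

W1, W2 = nn.Linear(d_in, d_hid), nn.Linear(d_hid, d_out)
x = nn.Parameter(torch.zeros(batch, d_hid))  # neural activity (fixed batch size)

def energy():
    # Free Energy as the sum of squared prediction errors at each layer
    e_hid = x - W1(inputs)
    e_out = targets - W2(x)
    return 0.5 * (e_hid.pow(2).sum() + e_out.pow(2).sum())

x_opt = torch.optim.SGD([x], lr=0.1)  # inference (activity) optimizer
w_opt = torch.optim.SGD(list(W1.parameters()) + list(W2.parameters()), lr=0.01)

T = 20
# Run T + 1 iterations: the gradients left by the *last* backward() call
# were computed from the activities as they stood before that iteration's
# update, i.e. after exactly T activity updates.
for _ in range(T + 1):
    x_opt.zero_grad()
    w_opt.zero_grad()      # discard stale weight gradients
    F = energy()
    F.backward()           # fills .grad for the activities and the weights
    x_opt.step()           # inference phase: update activities only

w_opt.step()               # weight update now uses the fully optimized activities
```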
This project is licensed under the Apache License 2.0.