SKA - Structured Knowledge Accumulation
Interactive visualization of the SKA forward learning algorithm on MNIST. Adjust the architecture, the number of steps K, and the learning budget τ to explore the entropy dynamics.
Reference Paper
Abstract
We introduce the Structured Knowledge Accumulation (SKA) framework, which reinterprets entropy as a dynamic, layer-wise measure of knowledge alignment in neural networks. Instead of relying on traditional gradient-based optimization, SKA defines entropy in terms of knowledge vectors and their influence on decision probabilities across multiple layers. This formulation naturally leads to the emergence of activation functions such as the sigmoid as a consequence of entropy minimization. Unlike conventional backpropagation, SKA allows each layer to optimize independently by aligning its knowledge representation with changes in decision probabilities. As a result, total network entropy decreases in a hierarchical manner, allowing knowledge structures to evolve progressively. This approach provides a scalable, biologically plausible alternative to gradient-based learning, bridging information theory and artificial intelligence while offering promising applications in resource-constrained and parallel computing environments.
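The abstract's central idea, that entropy measures the alignment between knowledge and changes in decision probabilities, can be illustrated numerically. Below is a hedged sketch, assuming a layer entropy of the form H = -(1/ln 2) Σ z·ΔD with D the sigmoid of the knowledge z; the precise definitions are in the referenced paper, and the array sizes here are arbitrary:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Knowledge values z and their decision probabilities D = sigmoid(z)
z = np.linspace(-4.0, 4.0, 9)
D = sigmoid(z)

# Assumed layer-entropy form: H = -(1/ln 2) * sum(z * ΔD), where ΔD is
# the change in decision probabilities induced by a knowledge update.
def entropy(z, delta_D):
    return -np.sum(z * delta_D) / np.log(2)

# An update that reinforces existing knowledge (scales z up) produces
# ΔD with the same sign as z, so every term z*ΔD is non-negative and
# the entropy contribution is negative.
H_aligned = entropy(z, sigmoid(1.1 * z) - D)

# An update that opposes existing knowledge (shrinks z toward zero)
# produces ΔD with the opposite sign, so the entropy contribution is positive.
H_opposed = entropy(z, sigmoid(0.9 * z) - D)

assert H_aligned < 0 < H_opposed
```

Under this assumed form, updates that align knowledge with probability changes drive the entropy down, which is the sense in which minimization "selects" the sigmoid-shaped decision function.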
SKA Explorer Suite
About this App
SKA learns without backpropagation: each forward pass accumulates knowledge by minimizing entropy layer by layer. Adjust the architecture, the learning budget τ, and the number of steps K to explore how the entropy trajectory evolves.
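The loop described above can be sketched for a single layer. This is an illustrative sketch only, not the app's actual implementation: the local update rule, the role of τ as a step size, the initial perturbation, and all shapes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical single layer: inputs x, weights W, knowledge z = x @ W
x = rng.normal(size=(128, 16))
W = rng.normal(scale=0.1, size=(16, 8))

K = 20     # number of forward steps
tau = 0.05 # learning budget, used here as the step size of the local update

entropies = []
D_prev = None
for k in range(K):
    z = x @ W                  # knowledge for this forward pass
    D = sigmoid(z)             # decision probabilities
    if D_prev is None:
        # Seed an initial change so the first ΔD is nonzero.
        W += rng.normal(scale=0.01, size=W.shape)
    else:
        delta_D = D - D_prev
        # Assumed layer-entropy form: H = -(1/ln 2) * sum(z * ΔD)
        entropies.append(-np.sum(z * delta_D) / np.log(2))
        # Local, backprop-free update: move W along x^T ΔD so the
        # knowledge aligns with the change in decision probabilities.
        W += tau * (x.T @ delta_D) / x.shape[0]
    D_prev = D

print(entropies)  # the per-step entropy trajectory the app visualizes
```

Each layer only needs its own inputs and probabilities, which is why the updates can run forward, layer by layer, without a backward pass.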