# Rich Learning Documentation
Rich Learning is a reinforcement learning paradigm that replaces mutable weight matrices with a navigable Topological Graph Memory. No hidden layers. No gradient descent. Just structured navigation over a persistent knowledge graph.
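To make "structured navigation over a persistent knowledge graph" concrete, here is a minimal Python sketch of the idea: states are stored as immutable nodes, observed transitions as directed edges, and "inference" is path search over that topology rather than a forward pass through weights. The class and method names here are illustrative only, not the project's actual API.

```python
from collections import defaultdict, deque

class TopologicalGraphMemory:
    """Illustrative sketch: learning adds topology, planning is graph search.
    This is NOT the Rich Learning API; names are hypothetical."""

    def __init__(self):
        # state -> set of successor states; nodes and edges are only ever added
        self.edges = defaultdict(set)

    def observe(self, state, next_state):
        # Learning = recording a transition; existing entries are never mutated.
        self.edges[state].add(next_state)

    def navigate(self, start, goal):
        # Planning = breadth-first search over the stored topology.
        frontier = deque([[start]])
        seen = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt in self.edges[path[-1]]:
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None  # goal unreachable from start

memory = TopologicalGraphMemory()
for a, b in [("s0", "s1"), ("s1", "s2"), ("s0", "s3"), ("s3", "s2")]:
    memory.observe(a, b)
print(memory.navigate("s0", "s2"))  # a shortest path from s0 to s2
```

Because knowledge is stored as explicit graph structure, adding new states or edges later can never corrupt what is already there; at worst it adds new routes.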
## Why Rich Learning?
Traditional deep RL suffers from catastrophic forgetting: training on a new task overwrites the shared weights that encoded the old one. Rich Learning avoids this by encoding knowledge as graph topology rather than neural weights, so every learned state is preserved as an immutable node. The result: 100% retention across sequential tasks on the benchmarks below.
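The retention claim follows from a simple invariant: writes to the memory are insert-only, so task B can never touch task A's nodes. A toy Python sketch of that invariant (the class, the state encoding, and the task splits are all hypothetical, chosen only to mirror a Split-MNIST-style sequential setup):

```python
class GraphClassifierMemory:
    """Toy sketch of the retention argument: per-task knowledge lives in
    disjoint, write-once nodes. Hypothetical names, not the real API."""

    def __init__(self):
        self.nodes = {}  # encoded state -> label; write-once

    def learn(self, state, label):
        # Insert-only: an existing node is never overwritten.
        self.nodes.setdefault(state, label)

    def recall(self, state):
        return self.nodes.get(state)

mem = GraphClassifierMemory()
# Two sequential "tasks" over disjoint state sets, Split-MNIST style.
task_a = {("split-mnist", d): f"digit-{d}" for d in (0, 1)}
task_b = {("split-mnist", d): f"digit-{d}" for d in (2, 3)}

for s, y in task_a.items():
    mem.learn(s, y)
for s, y in task_b.items():  # second task, trained afterwards
    mem.learn(s, y)

# Task A recall is untouched by training on task B.
retention = sum(mem.recall(s) == y for s, y in task_a.items()) / len(task_a)
print(retention)  # 1.0
```

Contrast this with an MLP, where both tasks compete for the same weight matrix: there, gradient updates for task B necessarily move the parameters task A depends on.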
100% knowledge retention. Zero hidden layers. One graph.
## Key Results
| Benchmark | Method | Retention |
|---|---|---|
| Split-MNIST | Bare MLP | 0.0% |
| Split-MNIST | EWC (λ=100) | 19.5% |
| Split-MNIST | Topological Memory | 100.0% |
| Split-Audio (FSD50K) | Topological Memory | 100.0% |
## Documentation Sections
### 🚀 Getting Started
Install, configure, and run your first Rich Learning agent in under 5 minutes.
### 🏗 Architecture
How Topological Graph Memory, the Cartographer planner, and the exploration strategies fit together.
### ⚙️ Configuration
Choose between LiteDB (embedded) and Neo4j (server) backends, and tune parameters.
### 📖 API Reference
Complete interface and model reference for IGraphMemory, IStateEncoder, and more.