// The Proof
Smarter Than a Neural Net
We trapped a Deep Learning agent and a Rich Learning agent in a visual loop:
a "Doppelgänger Trap." Only one escaped.
Deep Learning Agent
1 Finds Trap (99.97% similarity) → commits immediately
2 Trap loops back to start
3 Finds Trap again (99.97%, still the best match!)
→ Spins in circles forever, confident it's right.
Rich Learning Agent
1 Finds Trap (99.97% similarity) → moves toward it
2 Trap loops back → recognizes the loop
3 Marks Trap as poison. Penalizes the path that led there.
4 Takes alternate route → finds the real target
Intelligence isn't just pattern matching. It's knowing where you've been.
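The escape behavior above can be sketched in a few lines of Python. This is a toy reconstruction, not the actual DAPSA implementation: the graph, the vectors, and the penalty scheme are all hypothetical stand-ins.

```python
import math

def cos(a, b):
    """Plain cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy world: "trap" is ~99.97% similar to the target, but committing to it
# loops back to start. Names and vectors here are illustrative only.
emb = {
    "start":       [1.0, 0.0, 0.0],
    "trap":        [0.1, 1.0, 0.024],
    "detour":      [0.6, 0.6, 0.0],
    "real_target": [0.1, 1.0, 0.0],
}
graph = {"start": ["trap", "detour"], "detour": ["real_target"]}
target = emb["real_target"]

def move(node, choice):
    # Environment dynamic: stepping into the trap teleports you back.
    return "start" if choice == "trap" else choice

def navigate(start, max_steps=10):
    poison, visited, node = set(), {start}, start
    for _ in range(max_steps):
        options = [n for n in graph.get(node, []) if n not in poison]
        if not options:
            break                  # dead end, or nowhere left to go
        choice = max(options, key=lambda n: cos(emb[n], target))
        arrived = move(node, choice)
        if arrived in visited:     # loop detected: we have been here before
            poison.add(choice)     # mark the trap as poison...
            continue               # ...and re-choose among remaining routes
        visited.add(arrived)
        node = arrived
    return node, poison

final, poisoned = navigate("start")
```

A matcher that ranks only by similarity keeps choosing `trap` forever; one set of memory (`visited`) plus one set of penalties (`poison`) is enough to reroute the walk to `real_target`.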
// The Engineering Log
Experiments, Not Marketing
We document our experiments with the same rigor we write our code.
No hype. Just data.
Featured Experiment
The 20 Million Node Experiment: Solving Chess Endgames Without Neural Networks
Fugue-GraphNative scales to 20.8 million positions using independent piece-level actors.
Zero lookahead. Zero neural compute. 87% draw rate against Stockfish 18.
The definitive record of our journey from architecture to superhuman knowledge density.
March 2026 · 18 min read
Read the definitive experiment → Experiment
What If Identity Search Didn't Need Deep Learning?
A doppelgänger with cosine similarity 0.9998 to the real target.
A pure embedding matcher accepts it unconditionally.
A four-step DAPSA walk with backward reinforcement detects the trap and escapes,
in two independent implementations, Python and C#.
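For intuition on how small the geometric gap is, here is the baseline failure: a pure top-1 cosine matcher has no way to reject a 0.9998 impostor. The vectors below are made-up stand-ins, not the experiment's embeddings.

```python
import math

def cos(a, b):
    """Plain cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

target = [1.0, 0.0]
candidates = {
    "doppelganger": [1.0, 0.02],   # cosine to target is roughly 0.9998
    "bystander":    [0.7, 0.7],
}

# A pure embedding matcher commits to the top score unconditionally:
# no fixed threshold separates a 0.9998 impostor from the real thing.
best = max(candidates, key=lambda k: cos(candidates[k], target))
score = cos(candidates[best], target)
accepted = score > 0.95
```

Stateless similarity search has to accept the doppelgänger; only something outside the embedding, such as a memory of where the walk has already been, can tell them apart.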
February 2026 · 8 min read
Read the experiment → Experiment
83% Chess Accuracy: A 26K-Parameter Neural Approximation
A micro-network (768→32→32→1, ~26K params) was trained on Stockfish evaluations
to bootstrap opening-phase value estimates. 83% accuracy at 5K samples, trained in
seconds on CPU: a navigable map, not a replacement engine.
February 2026 · 7 min read
Read the experiment → Experiment
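The "~26K params" figure is easy to verify from the stated layer widths (assuming one bias per output unit, which the post does not spell out):

```python
# Layer widths from the post: 768 -> 32 -> 32 -> 1.
layers = [768, 32, 32, 1]

# Each fully connected layer: weights (n_in * n_out) plus biases (n_out).
params = sum(n_in * n_out + n_out for n_in, n_out in zip(layers, layers[1:]))
# 24,608 + 1,056 + 33 = 25,697 parameters, i.e. ~26K
```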
44 Autonomous Agents, Zero Collisions
A warehouse digital twin where 44 AGVs achieve 154 deliveries with zero collisions
and zero training. Side-by-side comparison against a 3,000-episode Deep Q-Network
that collapsed at inference.
February 2026 · 10 min read
Read the experiment → Coming Soon
Project Chimera: DAPSA Beyond Chess
The same per-piece actor architecture that masters chess endgames,
applied to domains where the "pieces" aren't chess pieces at all.
Q2 2026
Coming Soon
Financial Regime Detection Without Retraining
How topological memory lets a trading agent recognize market shifts
it has never seen, without forgetting what it already knows.
Q2 2026
Coming Soon
Medical Triage: Routing Without Neural Networks
A stateful navigation agent that routes emergency patients
using structured memory instead of deep classifiers.
Q2 2026