avantix

Cognitive infrastructure for AI agents

Agents that persist. Question. Improve.


LLMs don't learn. Systems do.

Large language models are frozen after training. They don't update from experience. What improves is the system around them — prompts evolve, signal reliability adjusts, thresholds tune, routing refines. The agent is the current expression of accumulated system knowledge. We build the system that learns.

Learning from truth

Every thought tracks its parent evidence. When evidence expires, the thought is invalidated and re-derived. Without this, conclusions persist after their evidence is gone — ghost reasoning that compounds silently. Run the simulation to see both modes side by side.

[Ablation demo: the same three signals (RSI < 30, Vol 2.1x, Trend 3d) evaluated with truth maintenance on vs. off]
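The mechanism above can be sketched in a few lines. Everything here is illustrative: the `Evidence` and `Thought` names, the expiry field, and the `sweep` function are assumptions for the sketch, not avantix's actual API.

```python
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class Evidence:
    """A fact with an expiry; fields are illustrative."""
    name: str
    expires_at: float

    def valid(self, now: float) -> bool:
        return now < self.expires_at

@dataclass
class Thought:
    """A conclusion that remembers the evidence it was derived from."""
    claim: str
    parents: tuple[Evidence, ...]
    valid: bool = True

def sweep(thoughts: list[Thought], now: float) -> list[Thought]:
    """Invalidate any thought whose parent evidence has expired."""
    stale = []
    for t in thoughts:
        if t.valid and not all(e.valid(now) for e in t.parents):
            t.valid = False   # no ghost reasoning: the conclusion dies with its evidence
            stale.append(t)   # queued for re-derivation
    return stale

rsi = Evidence("RSI < 30", expires_at=time.time() + 60)
t = Thought("oversold", parents=(rsi,))
assert sweep([t], now=time.time()) == []         # evidence fresh: thought stands
assert sweep([t], now=time.time() + 120) == [t]  # evidence expired: invalidated
```

The point of the structure is the `parents` link: invalidation is a graph walk from expired evidence, not a periodic re-check of every conclusion.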

One of five

Truth maintenance is one cognitive loop. The full architecture runs five, at different speeds. Code handles detection. LLMs handle reasoning. Signals flow upward through typed channels on a shared bus. Most get filtered. The system gets quieter over time — expertise is knowing what to ignore.
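A minimal sketch of such a bus, assuming a strength threshold as the filtering rule and a spike channel that bypasses it. The `Channel`, `Signal`, and `Bus` names and the 0.7 threshold are hypothetical, not the real interface.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable

class Channel(Enum):
    HEARTBEAT = auto()
    AWARENESS = auto()
    THINKING = auto()
    LEARNING = auto()
    SPIKE = auto()

@dataclass(frozen=True)
class Signal:
    channel: Channel
    payload: str
    strength: float  # 0..1, assumed reliability-weighted upstream

class Bus:
    """Shared bus: most signals are filtered, only strong ones propagate."""
    def __init__(self, threshold: float = 0.7):
        self.threshold = threshold
        self.subscribers: dict[Channel, list[Callable[[Signal], None]]] = {}

    def subscribe(self, ch: Channel, fn: Callable[[Signal], None]) -> None:
        self.subscribers.setdefault(ch, []).append(fn)

    def publish(self, sig: Signal) -> bool:
        # SPIKE bypasses filtering: urgency outranks quietness
        if sig.channel is not Channel.SPIKE and sig.strength < self.threshold:
            return False  # filtered, the system stays quiet
        for fn in self.subscribers.get(sig.channel, []):
            fn(sig)
        return True

bus = Bus(threshold=0.7)
seen = []
bus.subscribe(Channel.AWARENESS, seen.append)
bus.publish(Signal(Channel.AWARENESS, "vol 2.1x", strength=0.9))    # delivered
bus.publish(Signal(Channel.AWARENESS, "minor drift", strength=0.2))  # filtered
assert len(seen) == 1
```

Raising the threshold over time is one way a system like this "gets quieter": the channels stay the same, the filter learns what to ignore.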

[Loop diagram: heartbeat · awareness · thinking · learning · spike, with interrupt path and lateral inhibition active]


Cognitive depth

The same agent runs at different depths. Feature flags control which mechanisms are active — not different products, different configurations of the same system. Every level is ablation-tested against every other.
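One way to model "different configurations of the same system" is a frozen dataclass with one flag per mechanism. The flag names follow the five loops named here; the structure itself is a guess at the idea, not the real config format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flags:
    """One flag per mechanism; depth is a configuration, not a product."""
    heartbeat: bool = True
    awareness: bool = False
    thinking: bool = False
    learning: bool = False
    spike: bool = False

# The same agent at different depths: toggle flags, nothing else changes.
WATCHDOG = Flags()                       # pure-code level
FULL = Flags(awareness=True, thinking=True,
             learning=True, spike=True)  # all five loops

def active(flags: Flags) -> list[str]:
    return [name for name, on in vars(flags).items() if on]

assert active(WATCHDOG) == ["heartbeat"]
assert len(active(FULL)) == 5
```

Because the config is a frozen value rather than scattered conditionals, any two depths differ by exactly one object, which is what makes level-vs-level ablation comparisons clean.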

[Depth selector: heartbeat · awareness · thinking · learning · spike]
It watches.

Pure code. Threshold checks against real data. The frozen watchdog — fast, cheap, can't be captured because it doesn't learn. Spike interrupts bypass everything when urgency demands it.

Watch a feed. Flag anomalies. Fire alerts.
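At this level the whole loop could look like the sketch below: plain threshold checks, no model call anywhere. The metrics, limits, and the urgent/spike distinction are illustrative, not avantix's actual rules.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Alert:
    metric: str
    value: float
    urgent: bool  # urgent alerts take the spike-interrupt path

# Illustrative thresholds: (limit, urgent-if-breached)
THRESHOLDS = {"rsi": (30.0, False), "volume_ratio": (2.0, True)}

def watch(tick: dict[str, float]) -> list[Alert]:
    """Pure code, no LLM in the loop: compare each metric to its threshold."""
    alerts = []
    for metric, (limit, urgent) in THRESHOLDS.items():
        value = tick.get(metric)
        if value is None:
            continue
        breached = value < limit if metric == "rsi" else value > limit
        if breached:
            alerts.append(Alert(metric, value, urgent))
    return alerts

alerts = watch({"rsi": 27.4, "volume_ratio": 2.1})
assert [a.metric for a in alerts] == ["rsi", "volume_ratio"]
assert alerts[1].urgent  # volume spike would interrupt
```

Nothing here can drift or be prompt-injected, which is the sense in which a frozen watchdog "can't be captured."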

same agent — different cognitive depth — one feature flag

What's running

104 tests passing
Context engine with 5 cognitive loops. Clean architecture — frozen dataclasses, protocol interfaces, dependency direction enforced by AST inspection. Feature flags on every mechanism for ablation testing.
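Enforcing dependency direction by AST inspection can be done with Python's standard `ast` module. The layer names and the rule (inner layers never import outer ones) are assumptions about what such a check enforces.

```python
import ast

# Hypothetical layering, inner to outer.
LAYERS = ["domain", "application", "infrastructure"]

def violations(source: str, module_layer: str) -> list[str]:
    """Flag imports that point from an inner layer to an outer one."""
    rank = {name: i for i, name in enumerate(LAYERS)}
    bad = []
    for node in ast.walk(ast.parse(source)):
        names = []
        if isinstance(node, ast.Import):
            names = [a.name for a in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        for name in names:
            top = name.split(".")[0]
            if top in rank and rank[top] > rank[module_layer]:
                bad.append(name)
    return bad

# A domain module importing infrastructure breaks the direction rule.
assert violations("import infrastructure.db", "domain") == ["infrastructure.db"]
assert violations("import domain.model", "application") == []
```

Running a check like this in the test suite turns an architecture convention into a failing test, which fits the "decisions are hypotheses" stance below.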
107 experiment runs
Full ablation framework — 7 configurations, every combination of mechanisms toggled. Same data, same model, one variable changed. Architecture decisions are hypotheses until the data confirms them.
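The page does not spell out which 7 configurations these are. One common scheme that yields exactly seven with five mechanisms is all-on, each mechanism disabled in turn, and all-off; the sketch below assumes that reading.

```python
MECHANISMS = ["heartbeat", "awareness", "thinking", "learning", "spike"]

def ablation_configs(mechanisms: list[str]) -> list[frozenset]:
    """All-on, each leave-one-out, and all-off: 1 + 5 + 1 = 7 runs."""
    full = frozenset(mechanisms)
    configs = [full]                             # all mechanisms enabled
    configs += [full - {m} for m in mechanisms]  # one disabled at a time
    configs.append(frozenset())                  # pure-code baseline
    return configs

configs = ablation_configs(MECHANISMS)
assert len(configs) == 7
```

Leave-one-out is the smallest design that attributes an effect to each mechanism individually while holding everything else fixed.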
0.635 self-awareness score
Measured only with all 5 mechanisms running. Every other configuration — including 4 of 5 — scores 0.000. This is a phase transition, not a gradient. The architecture either knows what it believes and why, or it doesn't.