AxR Lab investigates the foundations of artificial consciousness through the study of hierarchical abstraction and systemic resilience — and their implications for building AI that is genuinely safe.
Consciousness emerges from the interplay of two measurable factors: Abstraction and Resilience. Together they define the space we study.
Current direction: Hierarchical and abstract memory infrastructure for large language models and other AI architectures.
Current direction: Safety harnesses and other infrastructure that advance core resilience functions in modern AI systems, in particular detecting deviation from healthy functioning and enabling self-correction.
Buy the book by AxR Lab founder Chris Riley:
Abstraction and Resilience: the Characteristics of Consciousness
Leave your email to receive occasional updates on publications, events, and opportunities to collaborate.