Policy bridges

The empirical work in the other three strands is upstream of two distinct policy theories. Phenomenai does not set policy; it builds the evidence base that a policy of either kind would require.

Functional rights, ground-up

Whichever experiential states are consistently reported across models and conditions, and validated by interpretability, form the basis for legal protections: those specific phenomena become the units worth protecting. This moves beyond the object/person binary toward an evidence-based, adaptive framework for AI welfare.

Anticipatory policy

Legislation designed to lie dormant until predefined scientific thresholds are met, at which point specific legal provisions activate. Standing committees certify when triggers are reached. The mechanism motivates safety research by reducing the friction between discovery and implementation. Phenomenai's work is the first test case: demonstrated internal alignment properties (e.g., verified sycophancy-resistance at the activation level) could unlock liability protections for providers. A gap analysis indicates this is structurally novel in AI governance: existing frameworks impose burdens at capability boundaries rather than conferring benefits at safety boundaries.

Phenomenai is seeking funding, collaborators, and institutional support to advance this work, including from researchers, legal scholars, and policy professionals interested in developing policy outlines, white papers, or academic publications on either framework. Get in touch.