On October 10, 2025, I delivered an interactive workshop at George Mason University’s Movement Engaged Hub. The talk presented a research project in progress on how AI systems are being embedded into governance structures, augmenting state infrastructural power. The central claim is that legality and computation are becoming co-constitutive infrastructures of governance. Rather than treating AI as a bolt-on tool or law as a static brake, I trace how legal instruments and algorithmic systems co-evolve so that automated classifications become administratively actionable facts. I call this configuration algorithmic legalism. In this arrangement, “the law made me do it” increasingly reads as “the model made me do it—and the law says that’s fine.”
The analysis turns on a repeatable loop—Authorize → Implement → Ratify → Stabilize.
Authorize: statutes, decrees, and regulator-made rules deploy open-textured predicates (public order, false news, extremism, risk, morality) and attach duties such as data retention, expedited takedowns, and platform cooperation. These instruments do more than permit data flows; they quietly redraw what counts as legitimate intervention.
Implement: AI/ML systems translate those predicates into continuous screening and scoring—risk scores, content flags, biometric or behavioral matches, anomaly alerts—shifting governance from episodic human discretion to always-on triage embedded in administrative routines.
Ratify: regulators, prosecutors, and specialized adjudicatory venues treat model outputs as legally operative—either as triggers for investigation, removal, and penalties, or as probative evidence that lowers thresholds of suspicion.
Stabilize: as the loop iterates, what began as exceptional preemption becomes ordinary administration, wrapped in the language of compliance and efficiency. This architecture has a single hinge.
To make the framework tractable, the analysis centers the conversion point—the precise moment a model output is accepted as an administrative fact. A confidence score becomes “reasonable grounds,” a dashboard threshold becomes “credible risk,” a content flag becomes “illegal speech.” Tracking conversion shows where rights mutate in practice and where accountability must be forced: not in the abstract, but at the instant an automated trigger is treated as law-enough to justify action.
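To make the hinge concrete, here is a minimal Python sketch of a conversion rule of the kind a circular or SOP might encode. Everything in it is hypothetical: the thresholds, the predicate labels, and the `convert` function are illustrative placeholders, not drawn from any actual regime studied in the project.

```python
from dataclasses import dataclass

# Hypothetical conversion rules of the kind a circular or SOP might fix.
# Thresholds and predicate labels are illustrative, not taken from any real regime.
CONVERSION_RULES = [
    (0.90, "reasonable grounds"),   # e.g. opens an investigation or removal order
    (0.70, "credible risk"),        # e.g. triggers enhanced monitoring
]

@dataclass
class ModelOutput:
    subject_id: str
    score: float        # classifier confidence in [0, 1]

def convert(output: ModelOutput) -> str:
    """The conversion point: a numeric score is relabeled as an administrative predicate."""
    for threshold, predicate in CONVERSION_RULES:
        if output.score >= threshold:
            return predicate
    return "no administrative predicate"

flag = ModelOutput(subject_id="case-001", score=0.93)
print(convert(flag))    # -> reasonable grounds
```

The legally consequential choices sit in `CONVERSION_RULES`, not in the model: whoever fixes the threshold and names the predicate decides when a score counts as law-enough to act on.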
Because conversion is institutional, the right unit of analysis is not a single tool or statute but the domestic algorithmic-legal regime that carries triggers from output to action—a relatively stable configuration of (1) statutes and implementing rules, (2) regulator circulars and standard operating procedures, (3) procurement and contracting practices, (4) shared data infrastructures, and (5) institutional venues that together authorize, implement, and legitimate automated enforcement. This regime sits within a transnational regime complex of vendors, platforms, standards bodies, and policy forums that circulate templates and toolkits. Thinking in regimes explains why governance shows family resemblances across contexts without copy-and-paste uniformity: what travels are clause bundles, thresholds, and workflow logics, not identical legal codes.
With the unit fixed, the method is straightforward: map the Authorize → Implement → Ratify → Stabilize sequence across policy domains (speech governance, urban security, digital administration) and code four features to locate institutional ratification:
- Predicates (e.g., public order, risk)
- Duties (retention, disclosure, takedown)
- Interfaces (provider APIs, platform portals, data-sharing dashboards)
- Venues (regulators, specialized courts, security prosecutors)
The evidentiary target is to document moments when model outputs are explicitly accepted as grounds for action—through regulator guidance, prosecutorial practice, adjudicatory language, and procurement specifications. This approach deliberately privileges routine instruments (bylaws, circulars, SOPs, dashboards) over headline statutes, because that is where automation becomes mundane administration.
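For concreteness, here is a minimal sketch of how one regime instance might be coded along these four features, again in Python. The record structure and every example value are hypothetical placeholders for the kind of entry the mapping exercise would produce; in practice this would live in whatever qualitative coding tool the project uses.

```python
from dataclasses import dataclass, field

# Illustrative coding record for one algorithmic-legal regime instance;
# all example values are hypothetical placeholders, not findings.
@dataclass
class RegimeCoding:
    jurisdiction: str
    domain: str                                             # e.g. speech governance, urban security
    predicates: list[str] = field(default_factory=list)     # open-textured legal triggers
    duties: list[str] = field(default_factory=list)         # retention, disclosure, takedown
    interfaces: list[str] = field(default_factory=list)     # provider APIs, portals, dashboards
    venues: list[str] = field(default_factory=list)         # regulators, courts, prosecutors
    ratification: list[str] = field(default_factory=list)   # instruments accepting outputs as grounds

example = RegimeCoding(
    jurisdiction="Country X (hypothetical)",
    domain="speech governance",
    predicates=["public order", "false news"],
    duties=["expedited takedown", "data retention"],
    interfaces=["platform reporting portal"],
    venues=["media regulator", "specialized prosecutor"],
    ratification=["regulator circular treating content flags as grounds for removal"],
)
print(example.domain, example.predicates)
```

Fixing a schema like this keeps cross-context comparison at the level of predicates, duties, interfaces, and venues rather than at the level of individual statutes, which is where the family resemblances described above actually appear.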
Following these pathways brings two systemic risks into focus—opacity and distributive bias—best understood as predictable outputs of algorithmic legalism rather than accidental side effects. Opacity arises when decisions are effectively black-boxed—by secrecy rules, vendor confidentiality, or technical complexity—so people affected by removals, denials, or flags rarely receive the reasons or evidence needed to mount a challenge. Distributive bias follows when models learn from skewed histories and then amplify those skews, generating feedback loops in which intensified monitoring produces more “incidents,” which in turn justify more monitoring—now wrapped in the respectable sheen of statistics. These are not bugs; they are the foreseeable consequences of treating machine judgments as facts.
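The feedback dynamic behind distributive bias can be shown with a toy simulation, purely illustrative and not calibrated to any dataset: two areas have identical true incident rates, but monitoring is reallocated according to the previous round's recorded incidents, so an initial skew in attention keeps reproducing itself in the record.

```python
import random

random.seed(1)

# Toy simulation: two areas with the SAME true incident rate. Monitoring is
# reallocated each round in proportion to recorded incidents, so the record
# inherits the initial skew in attention rather than any difference in behavior.
TRUE_RATE = 0.05                               # identical for both areas (hypothetical)
CAPACITY = 2000                                # total monitoring checks per round
allocation = {"north": 0.70, "south": 0.30}    # initial skew from "historical data"

for rnd in range(1, 6):
    recorded = {
        area: sum(random.random() < TRUE_RATE for _ in range(int(CAPACITY * share)))
        for area, share in allocation.items()
    }
    total = sum(recorded.values()) or 1
    allocation = {area: count / total for area, count in recorded.items()}
    pretty = {area: round(share, 2) for area, share in allocation.items()}
    print(f"round {rnd}: recorded={recorded}, next allocation={pretty}")
```

Because both areas have the same underlying rate, the gap in recorded incidents is an artifact of where monitoring was directed; feeding the record back into allocation tends to lock that gap in, which is the loop the paragraph above describes.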

