From a pool of about seventy applicants, I was chosen—along with four other young scholars—for the APSA MENA Research Development Group (RDG), a highly competitive workshop convened at the APSA Annual Meeting in Vancouver this month. The program gathers early-career researchers around pre-circulated, article-length manuscripts for a full-day seminar with senior discussants, targeted feedback on writing and publishing, and a schedule of panels and networking to deepen scholarly linkages. Because it expects projects headed to peer-reviewed journals—no dissertation chapters or co-authored pieces—the conversations are concrete, developmental, and publication-oriented.
The paper I presented, “From Legalism to Algorithmic Control,” asks a simple but unsettling question: what happens when legal instruments (statutes, regulations, administrative rules) are redesigned to license algorithmic systems, and those systems’ outputs are then treated as legally operative judgments? I call this arrangement algorithmic legalism—a governance configuration in which law doesn’t retreat before automation; it redeploys itself to enable “governing through code.” Elastic legal categories (think “public order,” “false news,” or “risk”) become mandates that algorithms operationalize at scale; their outputs are then ratified administratively or judicially, normalizing preemptive governance behind a veneer of neutrality.
The workshop was led by Curtis R. Ryan (Appalachian State University), with Zahra Babar (Georgetown University’s Center for International and Regional Studies) and Summer Forester (Carleton College) serving as co-leads—bringing complementary expertise in international and comparative politics; migration, labor governance, and citizenship; and feminist mobilization and gender policymaking. Discussants Lindsay J. Benstead (Portland State University) and Lisel Hintz (Johns Hopkins University–SAIS) contributed trenchant insights from work on identity politics, public diplomacy, survey methodology, and the politics of identity contestation, helping sharpen the project’s conceptual claims and evidentiary thresholds.

Mechanistically, the project identifies a three-part loop:
- Prior legal authorization. Open-textured rules and administrative powers (e.g., data-retention duties or platform-cooperation clauses) lower thresholds for intervention.
- Algorithmic implementation. Facial recognition, predictive analytics, and automated moderation convert those predicates into continuous identification, scoring, and triage.
- Institutional ratification. Regulators, prosecutors, or specialized venues accept machine outputs as actionable, with weak duties of explanation or audit.
When these steps reinforce one another, you get a durable algorithmic-legal regime: elastic categories are written into law; models translate them into routinized enforcement; and opaque outputs are blessed as administrative facts—stabilizing the very juridical–technical architecture that made automated preemption possible in the first place.
Why this matters (beyond any single setting)
The paper situates algorithmic legalism within a transnational regime complex—an overlapping web of venues, vendor pipelines, platform rules, and model-law templates through which ideas, tools, and justifications travel. This complex lowers the transaction costs of adopting automated control, harmonizes vocabulary (“risk,” “false news,” “extremism”), and disperses responsibility across public and private actors.
Two implications follow:
First, law is the infrastructure of automation. Rather than pitting technology against law, the evidence points to co-constitution: open-ended statutes authorize surveillance in advance, and algorithmic systems operationalize those mandates afterward, at scale and with opacity. This shifts practical judgment from human officials to socio-technical systems while maintaining the appearance of legality.
Second, opacity and accountability gaps widen. When model outputs function as quasi-evidence, the burden of explanation often evaporates. Without strong duties of disclosure, audit, and reason-giving, automated classifications harden into administrative facts, even as feedback loops can entrench unequal impacts.
What I took from the workshop
The seminar format—pre-circulated papers, senior discussants, and professionalization sessions—was ideal for sharpening the argument’s theoretical spine and tightening its evidentiary standards: what counts as sufficient triangulation, how to flag single-source claims, and how to trace diffusion without overreach. The conversations pushed me to crystallize the project’s contribution in one line: contemporary repression is co-constitutive—legal instruments supply authority and legitimacy, while algorithms deliver capacity and deniability.
I’m grateful for the rigorous critique and collegial energy. Next steps are straightforward: clarify the coding scheme for legal predicates and administrative powers, foreground the authorization-implementation-ratification loop in the introduction, and tighten the section on the regime complex so the diffusion mechanisms are explicit rather than implied. The goal is a paper that speaks across fields—socio-legal studies, governance, and critical data studies—without sacrificing analytic precision.

