ih8scargo 2 hours ago
The idea started in a strange place: I was building a small tutoring system called the Digital Learning Companion to help students practice fraction multiplication.
While experimenting with adding a language model to the system, I ran into a design problem: if a probabilistic model is allowed to quietly decide when a system advances state, that system eventually becomes difficult to reason about or audit.
So we started experimenting with a boundary:
AI models can help interpret messy signals, but the system itself remains responsible for state-changing decisions.
That design pattern gradually turned into what we’re calling an Emergent State Machine.
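To make the boundary concrete, here is a minimal sketch of the pattern in Python. All names (`State`, `Signal`, `interpret`, `transition`) and the fractions scenario details are my own illustration, not taken from the repository: the model's job is reduced to producing a structured, advisory `Signal` from messy input, while a plain deterministic function is the only code path that can change state.

```python
from dataclasses import dataclass
from enum import Enum, auto

class State(Enum):
    ASKING = auto()
    CHECKING = auto()
    ADVANCED = auto()
    RETRY = auto()

@dataclass(frozen=True)
class Signal:
    """Structured interpretation of a messy student answer."""
    answer: str        # normalized answer text, e.g. "3/8"
    confidence: float  # model-reported confidence in [0, 1]

def interpret(raw_answer: str) -> Signal:
    # Stand-in for the LLM call: the real model might normalize
    # "three eighths" to "3/8". Faked deterministically here.
    normalized = raw_answer.strip().replace("three eighths", "3/8")
    return Signal(answer=normalized, confidence=0.9)

def transition(state: State, signal: Signal, expected: str) -> State:
    """Deterministic transition function: the ONLY place state changes.
    The model's Signal is advisory input; these rules decide."""
    if state is not State.CHECKING:
        return state
    if signal.confidence < 0.8:
        return State.RETRY  # low-confidence interpretations never advance
    return State.ADVANCED if signal.answer == expected else State.RETRY
```

The point of the split is auditability: every state change traces to an explicit rule in `transition`, so the model can be swapped, logged, or distrusted without the system's control flow becoming opaque.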
The repository contains the architecture spec and papers describing the idea.
I also recorded a video explaining the origin story if anyone is curious: How a Fractions Tutor Accidentally Led to a New Way to Control AI | The Emergent State Machine
I’m still exploring how general this architecture might be, so I’d genuinely welcome critiques, questions, or examples where people think this approach might or might not apply.