Applied Coordination Technology (ACT) Workbench
See also: our work · expanded vision
What ACT Is (Right Now)
ACT is my workbench for exploring how small human teams (or pairs) + AI-augmented micro-interventions can:
- surface hidden divergence,
- turn that into clearer shared models,
- and, sometimes, generate the "spark" of feeling more aligned and in motion together.
The core question I'm interested in this round is:
Can very small, AI-steered daily reflections (≈1 minute per person) meaningfully improve clarity and alignment in small, mission-driven teams?
And if so, what "style" and "dosage" of surfacing divergence actually helps vs. overwhelms?
Why This Intervention
This gets at something deeper than "AI doing existing tasks better." It's easy to imagine using AI to automate what organizations already do — research grants, summarize meetings, draft documents. That's useful, but it doesn't change how people work together.
I'm exploring whether AI enables new organizational paradigms entirely — new ways for small teams to coordinate, align, and make decisions that weren't really possible before. This micro-intervention is the smallest testable unit of that idea.
If you give a mission-aligned pair or team a tiny, daily AI-driven reflective prompt, does it create a higher-resolution shared model of what they're doing and why?
If the answer is yes, even modestly, that hints at what new structures or workflows might be possible when AI isn't just a tool, but part of the coordinating fabric. This is small by design, but it's aimed at probing the basic mechanics of how AI might help people build organizations that are inherently more aligned and effective.
This Month's Core Experiment
I plan to run a short, real-world experiment with a small number of mission-aligned teams (e.g. a two-person nonprofit or a small startup):
Baseline alignment snapshot
Each participant independently writes a 3–6 month roadmap / vision and lightly rates or comments on the others' versions. This gives us an initial picture of where they converge and where they silently diverge.
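To make the shape of that snapshot concrete, here is a minimal sketch of the data it produces. The field names and the 1–5 agreement scale are my own assumptions for illustration, not a finalized instrument.

```python
# Illustrative sketch only -- the real instrument is a short form / shared doc.
# Field names and the 1-5 agreement scale are assumptions, not a fixed schema.
from dataclasses import dataclass, field


@dataclass
class Roadmap:
    participant_id: str
    text: str                 # independently written 3-6 month roadmap / vision


@dataclass
class PeerRating:
    rater_id: str
    author_id: str            # whose roadmap is being rated
    agreement: int            # e.g. 1-5: "how closely does this match my own picture?"
    comment: str = ""


@dataclass
class AlignmentSnapshot:
    team_id: str
    roadmaps: list[Roadmap] = field(default_factory=list)
    ratings: list[PeerRating] = field(default_factory=list)
```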
10–14 days of AI-guided micro-interventions
Each weekday, participants receive 1–3 short questions (≈1 minute total) via a simple interface.
The AI agent:
- analyzes their previous answers,
- gently surfaces areas of difference, and
- asks reflective questions designed to clarify why they see things the way they do, as sketched below.
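A minimal sketch of one such daily round, assuming a placeholder `call_llm()` in place of whatever model API the agent actually uses; the prompt wording is illustrative, not the deployed prompt.

```python
# Rough sketch of one daily round. call_llm() is a placeholder for whichever
# model API is actually used; the prompt wording here is illustrative only.
def call_llm(prompt: str) -> str:
    """Stand-in for the real model call."""
    raise NotImplementedError


def daily_questions(answers: dict[str, list[str]], style: str) -> dict[str, list[str]]:
    """answers maps participant_id -> that person's previous daily answers."""
    questions: dict[str, list[str]] = {}
    for person, own in answers.items():
        others = {p: a for p, a in answers.items() if p != person}
        prompt = (
            f"Surfacing style: {style}\n\n"
            f"{person}'s previous answers:\n" + "\n".join(own) + "\n\n"
            "Teammates' previous answers:\n"
            + "\n".join(f"{p}: {'; '.join(a)}" for p, a in others.items()) + "\n\n"
            "Write 1-3 short reflective questions (about one minute to answer in total) "
            "that surface where this person's view differs from their teammates' and "
            "ask why they see it the way they do."
        )
        questions[person] = [q for q in call_llm(prompt).splitlines() if q.strip()]
    return questions
```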
I'm especially interested in comparing different "levels" of divergence surfacing, for example:
- very gentle, convergence-oriented prompts
- neutral, contrast-highlighting prompts
- more explicit "here are two different views; reflect on yours" prompts
The goal is not to create conflict, but to understand how much and what kind of contrast actually helps teams feel clearer and more coordinated.
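As one provisional way to parameterize this, the three styles could be handed to the agent as short prompt fragments (the `style` argument in the sketch above would then just select one of them). The wording below is an assumption to iterate on, not a finished instrument.

```python
# Provisional prompt fragments for the three surfacing styles; the exact
# wording is an assumption and would be iterated on with real teams.
SURFACING_STYLES = {
    "gentle": (
        "Emphasize common ground. If views differ, hint at it indirectly and "
        "invite reflection without naming the disagreement."
    ),
    "neutral": (
        "Briefly note where answers contrast, without framing either view as "
        "better, then ask what sits behind the respondent's own view."
    ),
    "explicit": (
        "Paraphrase the two differing views side by side and ask the "
        "respondent to reflect on why they hold theirs."
    ),
}
```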
Post-intervention snapshot
At the end, participants repeat a lightweight version of the initial exercise (new roadmap / vision + mutual ratings + a few qualitative questions about how it felt to work together during the period).
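One deliberately simple way to compare the two snapshots, offered as an assumption rather than a committed analysis plan, is to look at how the mutual agreement ratings shift; the scores below are purely hypothetical.

```python
# A deliberately simple pre/post comparison (not a committed analysis plan):
# how do the mutual agreement ratings shift across the intervention window?
def mean_rating(ratings: list[int]) -> float:
    return sum(ratings) / len(ratings) if ratings else float("nan")


# Hypothetical 1-5 agreement scores, purely to show the shape of the comparison.
baseline = [3, 2, 4, 3]
post = [4, 4, 4, 3]

print(f"alignment shift: {mean_rating(post) - mean_rating(baseline):+.2f}")  # +0.75
```

The qualitative questions matter at least as much as any single number here; the point is only to have some before/after quantity to anchor the write-up.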
What I'll Deliver
By the end of this exploration window, I aim to come away with:
- what we tried, with whom, and in what context
- how the different "divergence surfacing" styles behaved in practice
- examples of when (if at all) a real "spark" showed up for a team
- enough structure that others could try a similar 10–14 day intervention with their own small teams
  - including suggested parameters for how gentle vs. explicit the prompts should be
- initial and final team roadmaps / visions
- anonymized prompt–response traces that let us inspect which kinds of questions seemed to matter
- qualitative feedback from participants on trust, burden, and perceived value
The bar here isn't to "solve coordination," but to produce one or two well-documented, real-world probes that clarify:
- whether this kind of AI-mediated structured reflection is worth scaling up, and
- how it might plug into larger FLF-aligned efforts (e.g. coordination labs, Epistack) if the signal looks promising.