Note: This vision is still a work in progress. If you're interested or curious to learn more, please reach out.

The Opportunity

We're entering an era where AI-human hybrid organizations are becoming inevitable. Organizations that integrate AI coordination will outpace those that don't—but speed alone doesn't lead to flourishing. It can just as easily lead to systems that are powerful but misaligned with human values and needs.

This lab exists to build the knowledge infrastructure for steering this transition. We need to discover and document what coordination patterns actually produce alignment, agency, and collective benefit—before the defaults are set by whoever moves fastest without care for these outcomes.

The Core Hypothesis

Step 1: If an organization lets us build an AI-powered tool that intervenes in their coordination process—and they're willing to iterate with us—we can increase both their alignment and productivity.

Step 2: As we build more of these tools across different contexts, new AI-human symbiotic organizational structures will emerge naturally. We'll discover patterns across the spectrum of human-AI control—from AI-facilitated to AI-governed—and understand what balance produces the best outcomes.

Step 3: The key is doing the groundwork without abstracting too early. As in the parable of the pottery class, where the group that made the most pots also produced the best ones, we learn by building, testing, and documenting at scale. Real experiments, real stakes, rigorous documentation.

What We Do

The Applied Coordination Technology Lab (ACT Lab) conducts applied research through tool-building. We:

  • Build coordination tools for mission-aligned organizations — Working with nonprofits, philanthropies, co-ops, and early-stage mission-driven startups to create AI interventions tailored to their actual bottlenecks
  • Test AI governance models at small scale — Running controlled experiments where AI has genuine decision-making authority over resource allocation, product direction, and strategic choices
  • Document everything — Every tool built, every intervention attempted, every failure mode encountered becomes research material
  • Synthesize into transferable knowledge — Creating an open compendium (wiki-style) that maps coordination mechanisms, their failure modes, and tested applications (one possible entry format is sketched after this list)
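
To make the compendium idea concrete, here is a minimal sketch of what one entry could record. The schema, field names, and example content are illustrative assumptions, not the wiki's actual format.

```python
# Hedged sketch of one possible schema for a compendium entry.
# Field names and the example content are illustrative, not the wiki's format.

from dataclasses import dataclass, field


@dataclass
class CompendiumEntry:
    mechanism: str                      # name of the coordination mechanism
    context: str                        # where it was tried (org type, team size)
    intervention: str                   # what the AI tool actually did
    failure_modes: list[str] = field(default_factory=list)
    tested_applications: list[str] = field(default_factory=list)
    outcome_notes: str = ""


# Purely illustrative example entry, not a reported result.
example = CompendiumEntry(
    mechanism="Anonymous pre-meeting input aggregation",
    context="Small nonprofit program team, weekly staff meeting",
    intervention="AI collects anonymous tension reports before the meeting and surfaces themes",
    failure_modes=["participation drops off over time", "synthesized themes too generic to act on"],
    tested_applications=["conflict resolution pilot", "retrospective facilitation"],
    outcome_notes="Outcomes documented in the intervention log as they are measured.",
)

if __name__ == "__main__":
    print(example)
```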

This isn't consulting that happens to document. This is research that happens through building real tools for real stakes.

Why This Matters Now

We're in a narrow window where:

  • Organizations that integrate AI coordination will dramatically outpace those that don't
  • The patterns that emerge now will shape the future of collective coordination
  • There's a gap in the research landscape between academic theory and practical implementation

The question isn't whether AI will mediate human coordination—it's whether we'll have the knowledge infrastructure to steer that toward collective benefit.

Theory of Change

Power without the ability to control and direct it is a destructive force. Power with that ability is a productive force.

AI gives us unprecedented coordination capability. But capability alone doesn't tell us:

  • Which mechanisms produce genuine alignment vs. manufactured consensus
  • How to measure whether a system enables flourishing or produces other outcomes
  • What governance models enable groups to retain agency

This lab builds that knowledge base through systematic experimentation and rigorous documentation, in service of collective flourishing.

Concrete Deliverables

14 Days:

  • First pilot: An AI agent that gets 5-10 minutes of the founder's time daily and runs an optimization loop to function as a life coach
  • Alternative pilot: Small team uses existing tools (Updraft, Prune) to collaboratively build and launch an MVP, with full documentation
  • AI-supported sensemaking circles: Weekly AI-facilitated discussions in a small co-op or philanthropy group, where the AI aggregates member inputs on shared challenges, synthesizes themes in real time, and prompts clarifying questions, testing how this enhances collective intelligence and decision quality while preserving human agency (a minimal sketch of this loop appears after this list)
  • AI-mediated conflict resolution: Embed an AI mediator in a small nonprofit team's weekly meetings, anonymously collecting pre-meeting input on tensions, then facilitating dialogue through neutral reframes and compromise suggestions—tracking resolution speed and team satisfaction to identify effective hybrid governance patterns
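
To ground the sensemaking-circle pilot, here is a minimal sketch of the weekly loop it describes: collect member input, anonymize it, synthesize themes, and generate clarifying questions for the group. Everything here is an illustrative assumption rather than a committed design; in particular, the keyword-counting synthesis stands in for the LLM call a real deployment would use.

```python
# Minimal sketch of one weekly sensemaking-circle session.
# All names (MemberInput, synthesize_themes, etc.) are illustrative assumptions.

from dataclasses import dataclass
from collections import Counter
import re


@dataclass
class MemberInput:
    member_id: str          # kept only for de-duplication; stripped before synthesis
    challenge: str          # free-text description of a shared challenge


@dataclass
class SessionSummary:
    themes: list[str]
    clarifying_questions: list[str]
    raw_count: int


def anonymize(inputs: list[MemberInput]) -> list[str]:
    """Drop member identity so the synthesis step sees only the content."""
    return [i.challenge for i in inputs]


def synthesize_themes(texts: list[str], top_k: int = 3) -> list[str]:
    """Placeholder synthesis: surface the most frequent content words.
    A real deployment would replace this with an LLM call that clusters
    and names themes rather than counting keywords."""
    words = Counter()
    for text in texts:
        words.update(w for w in re.findall(r"[a-z']+", text.lower()) if len(w) > 4)
    return [word for word, _ in words.most_common(top_k)]


def clarifying_questions(themes: list[str]) -> list[str]:
    """Turn each theme into a prompt the facilitator can put to the group."""
    return [f"What would progress on '{t}' look like by next week?" for t in themes]


def run_session(inputs: list[MemberInput]) -> SessionSummary:
    texts = anonymize(inputs)
    themes = synthesize_themes(texts)
    return SessionSummary(
        themes=themes,
        clarifying_questions=clarifying_questions(themes),
        raw_count=len(texts),
    )


if __name__ == "__main__":
    demo = [
        MemberInput("a", "Our grant reporting keeps slipping because ownership is unclear"),
        MemberInput("b", "Nobody owns the grant reporting timeline, so deadlines slip"),
        MemberInput("c", "Volunteer onboarding takes too long and burns out coordinators"),
    ]
    print(run_session(demo))
```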

6 Months:

  • 5+ documented tool interventions with partner organizations
  • Public wiki of tested coordination mechanisms
  • Working examples demonstrating different points on the human-AI control spectrum

12 Months:

  • Validated playbook for specific organizational archetypes
  • Published research on what measurably improves coordination (focusing on alignment delta and sensemaking velocity)
  • Active pipeline of mission-aligned organizations wanting interventions

3 Years:

  • Multiple AI-human hybrid organizations operating globally using these frameworks
  • Clear metrics for measuring successful coordination outcomes
  • Tools and propagation strategies shown to spread these practices toward collective benefit

How We Position Ourselves

We bring over a decade of building tools for nonprofits and mission-driven organizations, combined with a proven track record of creating novel coordination tools (Updraft for group alignment, Prune for personal epistemics, Resonance for collective resonance mapping). We combine design thinking, agency-style intervention work, and the ethnographic sensitivity needed to understand what groups actually need. And we can go from "here's a coordination problem" to "here's a working prototype" to "here's what we learned" in rapid cycles: moving fast, documenting rigorously, and building knowledge that serves the broader field rather than just individual clients.

Methodology

We don't try to abstract the perfect process before doing the work. Our approach:

  1. Outreach First — Find organizations facing real coordination challenges who are willing to experiment
  2. Build + Test Rapidly — Create custom tools in tight iteration cycles with real stakes
  3. Document Exhaustively — Every tool, every meeting, every hypothesis becomes research material
  4. Synthesize Publicly — Wiki updates, blog posts, open research sharing

The failure mode is building in silence. The success mode is building a knowledge commons that others can learn from and build upon.

Current Experiments

Active daily practice: Testing AI-facilitated alignment tracking through daily reflective conversations that follow how individual alignment shifts over time.
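
As one hedged illustration of what tracking alignment over time could look like in practice, the sketch below assumes each daily reflection yields a 1-5 self-rating per value dimension and defines an alignment delta as the week-over-week change in the average rating. Both the rating scheme and this definition of the delta are assumptions for illustration, not the lab's settled metric.

```python
# Hedged sketch: one way to represent daily alignment check-ins and compute
# a week-over-week "alignment delta". The 1-5 rating scheme and the delta
# definition are illustrative assumptions, not a settled metric.

from dataclasses import dataclass
from datetime import date
from statistics import mean


@dataclass
class DailyCheckin:
    day: date
    ratings: dict[str, int]   # value dimension -> self-rating on a 1-5 scale

    def score(self) -> float:
        """Average rating across dimensions for this day."""
        return mean(self.ratings.values())


def weekly_average(checkins: list[DailyCheckin]) -> float:
    return mean(c.score() for c in checkins)


def alignment_delta(last_week: list[DailyCheckin], this_week: list[DailyCheckin]) -> float:
    """Change in average alignment score from one week to the next."""
    return weekly_average(this_week) - weekly_average(last_week)


if __name__ == "__main__":
    # Illustrative data only.
    last = [DailyCheckin(date(2025, 1, d), {"autonomy": 3, "purpose": 2}) for d in range(6, 13)]
    this = [DailyCheckin(date(2025, 1, d), {"autonomy": 4, "purpose": 3}) for d in range(13, 20)]
    print(f"alignment delta: {alignment_delta(last, this):+.2f}")
```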

Tool development and learning: Building and refining coordination tools (Updraft, Prune, Resonance) through real-world use, documenting where human-AI intervention creates value and where it falls short.

Founder Background

Rob combines technical skill (web development, UX, rapid prototyping) with ethnographic sensitivity honed through years of agency work with nonprofits.

Key earned insight: People's ideas about what they need diverge radically from their actual needs until there's something concrete to react to. This lab creates those concrete artifacts rapidly, learns from real usage, and documents what works.

Vision: Headlines from 2030

What the future looks like if this work succeeds:

  • "Local Community Organizers Quadruple Food Donations Through AI Coordination Platform"
  • "Three Teens from Three Continents Build Multi-Million Dollar Company with AI Co-Founder"
  • "Research Breakthrough Made Possible Through AI Coordination Layer Between Global Labs"
  • "Experimental AI-Enabled Direct Democracy Holds First Election in Bhutan"
  • "No More Political Gridlock: City Council Passes Most Ordinances in 6 Years—Citizens Celebrate Cleaner Air, New Jobs"

In this future, groups at every scale have access to tested frameworks for AI-mediated coordination. The transition to AI-human hybrid organizations has happened—and it's produced more agency, more alignment, and more collective flourishing.

The knowledge infrastructure exists to steer toward this outcome. The work starts now.

The Ask

We're looking for two things:

Funding to support this research work—building tools, running interventions, and creating the knowledge infrastructure the field needs.

Partner organizations who are willing to be early adopters—mission-aligned groups facing real coordination challenges who want to experiment with us.

Above all, we want to connect with people and build a network. If you're working in this space, thinking about these problems, or know organizations that could benefit from this work—let's talk.

Contact: [Your contact information]