Most AI copilots are reactive. They wait for the user to type a question, then answer it. They have no idea what the user was doing before they opened the chat, no memory of what they’ve already learned, and no sense of where they should be going next.
A proactive copilot is different. It watches what the user is doing in real time, knows their history and which product workflows they’ve completed, and understands the ideal path through your product — so it can surface the right next step before the user has to ask.
Getting there is a progressive build-up. Each layer you add makes your copilot meaningfully smarter.
The three layers
```text
Layer 1: Real-time events  → what is the user doing RIGHT NOW?
Layer 2: User memory       → what do they already know? what are their gaps?
Layer 3: Knowledge base    → what SHOULD they be doing? what's the ideal path?
                                ↓
               copilot surfaces the precise next step
```
You can build and ship Layer 1 today. Layers 2 and 3 are in active development.
Step 1 — Connect real-time events
What you get
Every action a user takes in your product — page views, clicks, form inputs — is captured and delivered to your agent as a structured, LLM-ready payload.
This is not raw browser telemetry. Before events reach your agent, Autoplay’s pipeline processes them:
- Extraction — raw DOM events are extracted and normalised into typed actions
- Labelling — each action is given a human-readable description with inferred intent
- Grouping — actions are collected into a session with an inferred goal
- Summarisation (optional) — the connector can run an LLM pass to produce a compact prose summary of the full session
The result is an ActionsPayload that your agent can read and reason over directly.
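For intuition, here is a minimal sketch of what a labelled, grouped session could render as. The field names and `to_text`-style output below are illustrative assumptions, not the SDK's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical shape of a processed session -- illustrative only,
# not the real ActionsPayload schema.
@dataclass
class Action:
    kind: str   # e.g. "navigate", "click", "input"
    label: str  # human-readable description with inferred intent

@dataclass
class Session:
    goal: str
    actions: list[Action] = field(default_factory=list)

def to_text(session: Session) -> str:
    """Render the session the way a to_text()-style helper might."""
    lines = [f"Session goal: {session.goal}"]
    lines += [f"- [{a.kind}] {a.label}" for a in session.actions]
    return "\n".join(lines)

session = Session(
    goal="Set up Slack integration",
    actions=[
        Action("navigate", "Opened the Integrations settings page"),
        Action("click", "Clicked 'Add Slack' (intent: start Slack setup)"),
    ],
)
print(to_text(session))
```

The point is that by the time the payload reaches your agent, it is already prose-like and typed, so it can go straight into an LLM prompt.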
What your copilot can do at this stage
Your agent can see exactly what the user is doing right now — which page they’re on, what they just clicked, and what their inferred intent is. It can respond contextually to any action without waiting for the user to describe the situation in chat.
Limitation
Every session still looks like a first session. Your agent has no memory of what this user has done before, which workflows they’ve already completed, or where they typically get stuck.
Code
```python
from autoplay_sdk import AsyncConnectorClient, ActionsPayload

async def on_actions(payload: ActionsPayload) -> None:
    # payload.to_text() is ready to inject directly into your LLM context
    current_context = payload.to_text()

    suggestion = await your_llm(
        system="You are a product copilot. Based on what the user is doing, "
               "suggest one helpful next step. Be brief.",
        user=f"## What the user is doing\n{current_context}",
    )
    if suggestion:
        await push_to_ui(payload.session_id, suggestion)

client = AsyncConnectorClient(url=CONNECTOR_URL, token=API_KEY)
client.on_actions(on_actions)
```
See Async client and Webhook receiver for setup details.
Step 2 — Attach user memory
What you add
Alongside the live event payload, fetch the user’s cross-session memory profile. This profile is built automatically by Autoplay — after every session, an LLM distillation pass updates it with what the user did, what they completed, and where they struggled.
```python
memory = await client.get_user_memory(user_id=payload.user_id)
```
The profile tells you exactly where this user is in the product’s adoption journey:
```json
{
  "knowledge_state": {
    "mastered": ["connect_data_source", "create_dashboard"],
    "in_progress": {
      "setup_slack_integration": {
        "attempts": 2,
        "last_issue": "missed final step: verify connection success message",
        "best_completion_pct": 85.0
      }
    },
    "untouched": ["set_up_alerts", "configure_webhooks", "export_to_csv"]
  },
  "journey_completion": {
    "completed_workflows": ["connect_data_source", "create_dashboard"],
    "total_known_workflows": 7,
    "completion_rate": 0.29
  }
}
```
What your copilot can do now
User memory is the relevance filter. Without it, your copilot treats every user as if it’s their first day. With it, suggestions are grounded in exactly what this person has and hasn’t done:
| Memory state | Copilot behaviour |
|---|---|
| Workflow in `mastered` | Skip it — the user already knows it |
| Workflow in `in_progress` | Surface the specific missed step, not the whole flow again |
| Workflow in `untouched` | Proactively introduce it when the user is in a relevant context |
| Area in `struggle_patterns` | Change approach — repeating what already failed is noise |
Code
```python
async def on_actions(payload: ActionsPayload) -> None:
    memory = await client.get_user_memory(user_id=payload.user_id)

    # Don't suggest what they already know
    mastered = memory.knowledge_state.mastered
    gaps = memory.knowledge_state.untouched + list(memory.knowledge_state.in_progress)

    suggestion = await your_llm(
        system="You are a product copilot. Only suggest things the user hasn't mastered yet.",
        user=f"## What the user is doing\n{payload.to_text()}\n\n"
             f"## Already mastered (do not suggest)\n{', '.join(mastered)}\n\n"
             f"## Gaps to focus on\n{', '.join(gaps)}",
    )
```
See User memory for the full schema and field reference.
Step 3 — Define golden paths and connect the knowledge base
What you add
Record the ideal step-by-step journey for every key workflow in your product — using the Autoplay Chrome extension to capture golden paths directly in your UI. These are indexed in Autoplay’s vector database and queryable by semantic search at inference time.
```python
golden_paths = await client.query_knowledge_base(
    query=payload.to_text(),
    top_k=2,
)
```
What your copilot can do now
With all three signals available, your copilot can compute an exact gap:
```text
Where the user is now       ← real-time events
What they've already done   ← user memory (mastered / in_progress)
Where they should be going  ← golden path from knowledge base
             ↓
gap = golden path steps − completed steps
             ↓
copilot surfaces the single most relevant next step
```
This is what makes a suggestion feel precise rather than generic. The copilot isn’t guessing at what might be useful — it knows which step in the ideal journey this specific user is missing, right now.
Code
```python
import asyncio

async def on_actions(payload: ActionsPayload) -> None:
    # Fetch memory and golden paths in parallel
    # (the real-time events are already in the payload)
    memory, golden_paths = await asyncio.gather(
        client.get_user_memory(user_id=payload.user_id),
        client.query_knowledge_base(query=payload.to_text(), top_k=2),
    )

    context = f"""
## What the user just did
{payload.to_text()}

## Their workflow history
Mastered: {', '.join(memory.knowledge_state.mastered) or 'none'}
In progress: {', '.join(memory.knowledge_state.in_progress) or 'none'}
Never started: {', '.join(memory.knowledge_state.untouched) or 'none'}

## Ideal path for this area
{golden_paths.to_text()}
"""

    suggestion = await your_llm(
        system="You are a proactive product copilot. Based on what the user is doing, "
               "their history, and the ideal path, suggest one concrete next step. "
               "Do not suggest anything they have already mastered. "
               "Be brief and specific. Stay silent if there is no clear gap.",
        user=context,
    )
```
See Knowledge base for the endpoint reference and golden path management.
Step 4 — Define your proactive triggers
Plan ahead before you go live.
Not every event should fire a suggestion — that becomes noise. Define which intent and workflow conditions should cause your copilot to reach out, using signals from your real-time payload and (when you have it) user memory — not ad-hoc URL rules.
Examples:
- Inferred intent is stuck on the same workflow step (no forward progress toward the session goal) for more than 60 seconds
- The user hits failure signals for the same workflow three times in a row (e.g. validation or integration errors within one adoption flow)
- The user starts a feature workflow they have never completed before (first-time path through that workflow, not “first visit” to a URL)
- Workflow completion for a key journey stays low across attempts — e.g. stuck in `in_progress` with completion below a threshold while memory shows repeat struggle on that workflow
Encode these as conditional checks in your on_actions callback (and optionally against memory / golden-path state) before calling your LLM, so the copilot stays silent unless a genuine trigger is met.
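A minimal sketch of such a gate, run before any LLM call. The threshold values and the shape of the `signals` dict are assumptions for illustration, not SDK fields:

```python
# Hypothetical trigger gate: the copilot stays silent unless
# at least one genuine trigger condition is met.
STALL_SECONDS = 60   # matches the "stuck for more than 60 seconds" example
MAX_FAILURES = 3     # matches the "three times in a row" example

def should_trigger(signals: dict) -> bool:
    stalled = signals.get("seconds_on_step", 0) > STALL_SECONDS
    repeated_failures = signals.get("failures_in_row", 0) >= MAX_FAILURES
    first_time_workflow = signals.get("workflow_never_completed", False)
    return stalled or repeated_failures or first_time_workflow

print(should_trigger({"seconds_on_step": 90}))                    # stalled
print(should_trigger({"seconds_on_step": 10, "failures_in_row": 1}))  # quiet
```

Cheap checks like these also save cost: the LLM is only invoked when a trigger fires, not on every event.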
When to trigger proactively
A good heuristic for which signals warrant reaching out:
| Signal | Trigger condition | Example suggestion |
|---|---|---|
| Workflow stall (no progress on inferred goal) | Same workflow step / intent with no forward motion for several actions or minutes | “It looks like you’re circling this step — want a nudge on what comes next?” |
| User skips a step in the golden path | Off the ideal path | “Most users connect a data source before creating a dashboard.” |
| User returns after a gap of 3+ days | Resuming an incomplete task | “Last time you were setting up your first integration — want to pick up where you left off?” |
| User has never completed a key workflow | Adoption gap | “You haven’t finished exporting a report yet — here’s the short path.” |
| User is in `in_progress` on a workflow | Repeat attempt without crossing completion | “You got 85% through this last time — the one remaining step is verifying the success message.” |
What’s available today
| Layer | Status |
|---|---|
| Real-time events (SSE stream / push webhook) | Available now |
| User memory | Coming soon |
| Knowledge base (golden paths) | Coming soon |
Real-time events via the SSE stream and push webhook are available today — you can build and ship the event ingestion layer now, and progressively add user memory and golden paths as they ship.