DIGEST
Claude Dispatch & Computer Use: The Agent
Stack That Gets Work Off Your Desk
Source: Video transcript  |  Speaker: Nate (AI productivity commentator)  |  Topic: Anthropic product
launches — Dispatch, Scheduled Tasks, Computer Use
BOTTOM LINE UP FRONT
Anthropic has shipped a functional, managed alternative to OpenClaw —
combining scheduled cloud tasks, Dispatch (mobile orchestration), and computer
use into a coherent agent stack.
The defining quality of useful AI agents is work off your desk, not "proactive
briefings" or polish-optimized demos. An agent that generates more documents to read
is producing pseudo-work.
Three primitives — scheduled tasks, Dispatch, computer use — compose into a
system that can run work asynchronously, across parallel sessions, even through
apps with no API.
The self-hosted vs. managed split mirrors historical compute shifts (sendmail
→ Gmail, rack servers → AWS): OpenClaw proves the category; Anthropic
captures mass adoption.
The hardest human shift is learning to walk away — trusting that the agent is
working when unobserved. That behavioral change, not the tooling, is the real
bottleneck in 2026.

The Central Distinction: Real Work vs. Pseudo-Work
The organizing claim of this analysis is a simple binary: work that lands on your desk vs.
work that gets off it. Most AI agent demos are optimized to look impressive, not to actually
reduce cognitive load or task completion burden. The tell is the output format — if the
agent's deliverable is another document to read, another briefing to review, another draft to
approve, it has created work, not removed it.
The briefing trap: Many agentic products showcase "proactive briefings" before
meetings as a flagship capability. But a briefing is still a doc to read. The bar for a
genuinely useful agent is higher: the thing itself just gets sorted — finished work
delivered while you're away, not a summary for review.
This framing has direct implications for how to evaluate and build agentic workflows. The
test is not "does this impress in a demo?" but "does this clear something off my plate that
would otherwise occupy mental space or calendar time?"
The Three Primitives
1. Scheduled Cloud Tasks
Claude's scheduled tasks run on Anthropic's infrastructure — not a local machine, not a
closet server. The execution environment is a controlled cloud setup with configurable
network access, environment variables, and setup scripts. Critically, tasks run whether or not
your laptop is on.

Anthropic uses scheduled tasks internally to keep a Go and Python library in sync — a
codebase in one language automatically mirroring a codebase in another on a recurring
schedule. This is a production engineering workflow that would otherwise require
several hours of engineer time per week on work that is important but never urgent —
exactly the work that falls through the cracks.
The practical cadence fits tasks that run every one to three hours — not real-time monitoring.
The primitive connects to any MCP server already configured in Claude (Linear, GitHub,
Slack, Google Drive, OpenBrain), so connectors carry without reconfiguration.
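Connector reuse works because scheduled tasks read the same MCP configuration that interactive Claude already uses. As a rough illustration only — the server entries below follow the standard Claude Desktop `claude_desktop_config.json` format, and the token placeholder is an assumption, not something shown in the talk:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>" }
    },
    "slack": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-slack"]
    }
  }
}
```

Any server registered this way is available to a scheduled task without separate setup, which is what makes the "connectors carry without reconfiguration" claim work in practice.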
Non-developer use cases include: nightly AI news digests fed into a knowledge base; flight
price alerts on a specific route triggered below a threshold; bill payment reminders for
services that don't support autopay. Any recurring task with a clear trigger condition and a
defined output is a candidate.
2. Dispatch as an Orchestration Layer
Dispatch is commonly described as "persistent chat for your phone." That undersells its
architecture. When paired with Claude desktop via QR code, the phone becomes a command
surface and the desktop becomes an execution surface. From a single mobile conversation,
multiple Claude co-work sessions can be spawned and managed simultaneously — each
running independently with its own context, file access, and connectors.
This is parallel asynchronous work from your pocket. The mental model is not
remote control of a single thread — it is dispatching work to a pool of parallel agents,
then stepping away to do other things while they execute.
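The "dispatch to a pool, then step away" model is the same shape as fire-and-forget task submission in ordinary code. A minimal sketch under that analogy — the task functions are invented stand-ins, and Dispatch itself exposes no such Python API:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Stand-in "co-work sessions": each runs independently with its own context.
def competitor_analysis() -> str:
    return "competitor analysis: done"

def stakeholder_draft() -> str:
    return "stakeholder draft: done"

def legacy_data_pull() -> str:
    return "legacy data pull: done"

tasks = [competitor_analysis, stakeholder_draft, legacy_data_pull]

# Submit everything up front, then step away; collect finished work
# in whatever order it completes when you return.
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    futures = [pool.submit(task) for task in tasks]
    results = [f.result() for f in as_completed(futures)]
```

The design point the analogy captures: the human interaction is the `submit` calls, not the execution. Everything between dispatch and collection happens without supervision.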

Product manager Pavle Hurin ran Dispatch for 48 consecutive hours. Over two days he
spent roughly 25 minutes entering commands. Claude executed in parallel across
multiple co-work instances for several hours of total work. He conducted competitor
analysis, drafted stakeholder messaging, and directed multiple rounds of iteration —
from a bounce house, watching his kids.
Current constraints worth noting: each subtask spawned by Dispatch individually requests
folder access (no bulk approval); files cannot yet be attached from the phone or received
back directly (workaround: sync co-work to Google Drive or Dropbox); complex multi-app
tasks succeed roughly 50% of the time in early testing; the desktop must remain powered on.
These are expected to improve — the product is labeled a research preview.
3. Computer Use for Apps Without MCP
MCP coverage is inherently incomplete. More than half the web — including legacy
enterprise software, bespoke ERP screens, old SAP instances, outdated Jira deployments —
will never have a clean API or MCP connector. Computer use addresses this directly: Claude
can operate the keyboard and mouse remotely through co-work, navigating any app a human
could navigate on screen.
Why This Is Consequential
The class of work this unlocks is enormous. Manual data extraction from legacy
systems, cross-system copy-paste workflows, portal-based reporting that requires
clicking through multiple screens — all of it becomes delegatable. The agent can be
sent the task via Dispatch, complete it using computer use, and deposit results in a
shared folder. The human never sits through the agonizing process.

OpenClaw vs. Anthropic: Self-Hosted vs. Managed
The difference between OpenClaw and Anthropic's stack is not primarily about safety — it is
about who maintains the infrastructure. OpenClaw requires the user to configure the server,
manage credentials, vet the skills marketplace, troubleshoot websocket connections, and
decide what the agent can access. For developers who want that control, it is a powerful
option. For most people, it is a second job.
Anthropic's stack abstracts all of that away. Scheduled tasks run on Anthropic servers.
Dispatch runs in a sandboxed environment with explicitly granted permissions. Computer
use asks before touching new applications. Network configuration, skill vetting, and server
maintenance are not the user's problem.
Historical pattern: Self-hosted always comes first and proves the category (sendmail,
rack servers, Jenkins). Managed infrastructure comes second and captures mass
adoption (Gmail, AWS, GitHub Actions). OpenClaw proved that always-on persistent
agents are something people want. Anthropic is now shipping the managed version.
The trade-off is real: OpenClaw offers more raw freedom — any LLM, local Ollama, deeper
permissions, no cloud dependency. Anthropic's stack is cloud-only and Claude-only. But for
the large majority of people who want agents that work without infrastructure overhead, the
managed version is sufficient and meaningfully lower friction.
A Framework for Getting Work Off the Desk
Three categories of work are especially well suited for agent delegation:
Open commitment loops. Every promise made — to a client, a team, a stakeholder — that
hasn't been delivered is an open loop generating low-grade cognitive load. Revised scopes,
meeting minutes, memoranda, follow-ups: these are exactly the kind of recurring outputs that agents can produce with well-constructed prompts and appropriate context. When delegation fails, the cause is, in most cases, a prompting problem rather than agent quality.
Decision preparation under time pressure. Walking into meetings having reviewed 30% of
available information is common. Agents can close that gap — scheduled or dispatched to
pull relevant dashboards, summarize docs, and surface data points that would not otherwise
be consulted. The right use of AI here is information expansion, not opinion confirmation.
Compound signal detection. When an agent has access to a persistent knowledge base
(such as OpenBrain via MCP), it can surface patterns across time — connecting a
competitor's hiring activity to a strategy conversation from three weeks ago, or linking patent
filings to market moves. This transitions the agent from reactive tool to proactive
collaborator.
Starting Points
Identify one recurring manual task that happens on a fixed schedule — automate it
with a scheduled cloud task.
Find one open commitment loop — a deliverable promised but not yet produced —
and delegate the drafting to a Dispatch co-work session.
Identify one legacy-system data extraction workflow that currently takes hours —
test computer use to automate it.
Practice walking away after dispatching a task. Check in on results, not process.
The Trust Problem: Learning to Walk Away
The hardest part of agentic workflows is not the tooling — it is the human behavioral shift.
The instinct to hover, to check whether the agent is really working, to pull focus back to the screen, runs counter to the entire value proposition of asynchronous delegation. Watching an agent work consumes exactly the time the delegation was supposed to free up.
The people who extract the most value from agents in 2026 will be those who can genuinely
step away — who can go to the bounce house, cook dinner, pick up kids — and return to
completed work rather than progress to monitor. This is the management pattern: set clear
intent, delegate with context, check results.
Broader Significance
AI is not a bubble — token demand is outstripping supply and infrastructure is being
built as fast as possible. The real constraint is adoption quality: whether users learn to
treat agents as leverage rather than toys, and whether the work assigned to agents is
real work that would otherwise occupy human time. The shift from task-doer to
manager-of-agents is the meta-skill of the second half of 2026.
Further Exploration
What is the practical prompting stack that reliably produces quality outputs for open
commitment loops (meeting minutes, scope docs, MOU drafts)?
How does OpenBrain's MCP integration compound value over time, and what is the
minimum viable setup for compound signal detection?
As Dispatch matures — bulk folder approval, file attachment from phone, desktop wake
capability — which constraints will fall first, and how does that change the use case
surface?
What does the equivalent of "GitHub Actions for agents" look like when applied to non-
developer knowledge work?

How do organizations establish trust thresholds for agent-completed work — what QA
patterns apply at scale?