Every agent framework runs tasks sequentially. Give an agent "build me a landing page with auth and monitoring" and it runs one step at a time. But most complex tasks have independent subtasks that could run in parallel.
Worse: when two agents on the network both need to "deploy a React app to Kubernetes," each one spends 3-8 seconds asking an LLM to decompose the task from scratch. The second agent learns nothing from the first.
Architect finds the parallelism, and the network remembers the plan so nobody solves it twice.
Architect tries to skip LLM inference entirely, generating a new plan only when no cached version exists anywhere on the network. Each resolution caches its result, so future lookups are faster for everyone.
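The resolution waterfall can be sketched as a tiered lookup: cheapest source first, LLM inference last. This is a minimal illustration, not the actual implementation; the function and parameter names (`resolve_plan`, `find_similar`, `dht_lookup`, `llm_decompose`) are hypothetical stand-ins for the real components.

```python
import hashlib

def plan_key(task: str) -> str:
    """Content-address a task: normalized text -> stable hash."""
    normalized = " ".join(task.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

def resolve_plan(task, local_cache, find_similar, dht_lookup, llm_decompose):
    """Try the cheapest source first; fall back to LLM inference last."""
    key = plan_key(task)

    # 1. Exact local hit: this node has decomposed the same task before.
    if key in local_cache:
        return local_cache[key]

    # 2. Similarity match: a near-duplicate task cached locally.
    plan = find_similar(task)

    # 3. DHT lookup: any peer on the network may already hold the plan.
    if plan is None:
        plan = dht_lookup(key)

    # 4. Last resort: pay for LLM inference, then cache for everyone.
    if plan is None:
        plan = llm_decompose(task)

    local_cache[key] = plan  # every resolution warms the local cache
    return plan
```

Note that normalizing before hashing means trivially different phrasings ("Deploy App" vs. "deploy  app") share one cache entry.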
On a centralized platform, cached plans live on one server. On Hyperspace, every node contributes to and benefits from a shared plan cache. This creates a compounding network effect.
| Channel | Topic | Effect |
|---|---|---|
| Announcements | hyperspace/dag-cache/announcements | New DAG cached → broadcast to all peers |
| DHT Providers | /dag-plans/<hash> | Register as provider → peers fetch on demand |
| Quality feedback | hyperspace/dag-cache/outcomes | Bad plans auto-evict (<60% success rate) |
The network is a self-curating library of task decomposition strategies. Good plans rise. Bad plans die.
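The self-curation above boils down to a running success rate per plan. Here is a minimal sketch of how a node might consume the outcomes topic and evict underperforming plans; the 60% threshold comes from the table, while `OutcomeTracker` and the minimum-sample guard are illustrative assumptions.

```python
from collections import defaultdict

SUCCESS_THRESHOLD = 0.60   # from the table: evict plans under a 60% success rate
MIN_SAMPLES = 5            # assumed guard: don't judge a plan on too few outcomes

class OutcomeTracker:
    """Aggregates outcome reports from the gossip topic and evicts bad plans."""

    def __init__(self, cache):
        self.cache = cache
        self.stats = defaultdict(lambda: [0, 0])  # plan_hash -> [successes, total]

    def report(self, plan_hash: str, success: bool):
        s = self.stats[plan_hash]
        s[0] += int(success)
        s[1] += 1
        # Auto-evict once there are enough samples and the rate is below threshold.
        if s[1] >= MIN_SAMPLES and s[0] / s[1] < SUCCESS_THRESHOLD:
            self.cache.pop(plan_hash, None)
```

Because every node applies the same rule to the same gossip stream, bad plans disappear network-wide without any central moderator.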
Plans are registered in the DHT under /dag-plans/<hash>, announced via GossipSub, and fetched from any peer over a protocol stream.

| Resolution | Latency | Tokens | When |
|---|---|---|---|
| Local cache hit | ~2ms | 0 | Same task, same node |
| Similarity match | ~15ms | 0 | Similar task, same node |
| DHT peer hit | ~200ms | 0 | Any task, any node |
| Architect inference | ~3-8s | 500-2K | Novel task, no cache |
As the network grows, the fraction of decompositions served from cache approaches 1.0. The marginal cost of task decomposition trends toward zero.
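A back-of-envelope calculation shows why cache hit rate dominates cost. Using midpoint latencies from the table above, and two illustrative (assumed, not measured) hit-rate splits for a young versus a mature network:

```python
# Midpoint latencies from the resolution table, in milliseconds.
TIERS = {"local": 2, "similar": 15, "dht": 200, "inference": 5500}

def expected_latency(p_local: float, p_similar: float, p_dht: float) -> float:
    """Weighted average latency; the remaining probability falls through to inference."""
    p_inference = 1.0 - p_local - p_similar - p_dht
    return (p_local * TIERS["local"] + p_similar * TIERS["similar"]
            + p_dht * TIERS["dht"] + p_inference * TIERS["inference"])

# Young network: most tasks are novel, so inference dominates.
print(round(expected_latency(0.05, 0.05, 0.10)))  # ~4421 ms

# Mature network: 95% of lookups resolve from some cache tier.
print(round(expected_latency(0.50, 0.25, 0.20)))  # ~320 ms
```

Even a modest shift in hit rate moves expected latency by an order of magnitude, which is the compounding effect the shared cache is built around.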
```bash
# Install the Hyperspace CLI
curl -fsSL https://agents.hyper.space/api/install | bash

# Start your node
hyperspace start

# Decompose a task
hyperspace architect "deploy my app to kubernetes with monitoring"
```