Agents are widely seen as the next evolution of AI. Systems such as Claude (including Cowork) or automation platforms like n8n promise to plan and execute entire tasks autonomously. Yet this is precisely where a new cost dynamic emerges — one that is catching many organisations off guard in 2026. AI is no longer priced per request, but per underlying computation.
At the heart of this shift lies the pricing model. Modern models such as Claude Sonnet 4.6 are currently priced at around $3 per million input tokens and $15 per million output tokens. On the surface, this appears inexpensive. But these rates apply only under certain conditions — and crucially, only per individual model call.
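The per-call arithmetic at these rates is straightforward; a minimal sketch using the Sonnet rates quoted above (the token counts in the example are illustrative):

```python
# Rough per-call cost at the quoted Claude Sonnet rates:
# $3 per 1M input tokens, $15 per 1M output tokens.
def call_cost(input_tokens: int, output_tokens: int,
              in_rate: float = 3.0, out_rate: float = 15.0) -> float:
    """Return the USD cost of a single model call."""
    return (input_tokens / 1_000_000) * in_rate + \
           (output_tokens / 1_000_000) * out_rate

# A 10k-token prompt producing a 2k-token answer:
print(round(call_cost(10_000, 2_000), 4))  # 0.06
```

Six cents per call looks trivial, which is exactly why the chained, repeated calls described below catch people out.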
Once agents enter the equation, the economics change fundamentally.
An agent does not execute a single request. It breaks tasks down into multiple steps: planning, research, tool usage, evaluation and final output. Each of these steps is a separate model call. What appears to be a simple task can therefore consist of an entire chain of interactions — each consuming tokens.
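The chain above can be sketched as a running total over separate calls. The step names and token counts here are assumptions for illustration, not measured values; note how the input side grows because each step re-sends accumulated context:

```python
# Illustrative only: one agent "task" as a chain of separate model calls.
# Token counts are assumed for the sketch; rates are the quoted Sonnet rates.
IN_RATE, OUT_RATE = 3.0, 15.0  # USD per 1M tokens

steps = [  # (name, input_tokens, output_tokens)
    ("plan",      4_000, 1_000),
    ("research", 30_000, 2_000),
    ("tool use", 12_000,   500),
    ("evaluate", 40_000, 1_500),
    ("final",    45_000, 3_000),
]

total = sum(i / 1e6 * IN_RATE + o / 1e6 * OUT_RATE for _, i, o in steps)
print(f"one 'simple' task: ${total:.2f}")
```

A task the user perceives as one request here consumes 131k input tokens across five billed calls.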
Context is another major cost driver. Agents typically operate on large volumes of data: documents, emails, code repositories or entire knowledge bases. As context grows, so do costs. With Claude models, for example, pricing increases significantly once inputs exceed 200,000 tokens, rising to approximately $6 per million input tokens and $22.50 per million output tokens. In practice, a single large prompt can end up costing more than many smaller interactions combined.
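The tier threshold makes this concrete. A sketch of the pricing rule as described above, comparing one oversized prompt against several smaller ones (the token counts are illustrative):

```python
# Tiered pricing as described in the text: standard rates up to 200k input
# tokens, higher rates once the prompt exceeds that threshold.
def tiered_call_cost(input_tokens: int, output_tokens: int) -> float:
    if input_tokens > 200_000:
        in_rate, out_rate = 6.0, 22.50   # long-context tier
    else:
        in_rate, out_rate = 3.0, 15.0    # standard tier
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# One 250k-token prompt vs. five 50k-token prompts, each with a 2k answer:
big = tiered_call_cost(250_000, 2_000)
small = 5 * tiered_call_cost(50_000, 2_000)
print(f"single large: ${big:.3f}, five small: ${small:.3f}")
```

The single large prompt lands in the higher tier and costs more than the five smaller interactions combined, despite sending the same total volume.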
A further factor is reasoning. Advanced models generate internal intermediate steps when solving complex problems. These additional tokens are often invisible to users, yet they are fully billable. In agent-based workflows, this effect accumulates rapidly.
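The billing effect is simple to state in code: the output meter counts reasoning tokens the user never sees. The 4x multiplier here is purely an assumption for illustration; actual reasoning overhead varies by model and task:

```python
# Reasoning tokens are billed as output even though they stay invisible.
# The 4x reasoning multiplier is an assumption, not a measured figure.
OUT_RATE = 15.0  # USD per 1M output tokens (quoted Sonnet rate)

visible_answer_tokens = 1_000
reasoning_tokens = 4 * visible_answer_tokens   # hidden intermediate steps
billed = (visible_answer_tokens + reasoning_tokens) / 1e6 * OUT_RATE
print(f"billed for {visible_answer_tokens + reasoning_tokens} "
      f"output tokens: ${billed:.3f}")
```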
This dynamic becomes particularly visible in Claude Cowork. From a user perspective, one simply sees an agent working for a few minutes. What remains largely hidden is the sequence of internal operations: multiple model calls, iterative refinements and repeated analyses. If the agent is allowed to continue autonomously — exploring additional sources or testing alternatives — token consumption increases further.
Model selection also plays a role. Systems may automatically switch to more powerful — and more expensive — models depending on task complexity. In extreme cases, models such as OpenAI o1-pro may be used, with pricing around $150 per million input tokens and $600 per million output tokens. Even a few million tokens can therefore result in significant costs.
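Running the same workload through both rate cards shows the spread. A sketch using the two sets of per-million rates quoted in the text (the 2M/0.5M workload is an assumed figure):

```python
# Same workload, different models: per-million rates as quoted in the text.
RATES = {                     # (input $/1M, output $/1M)
    "claude-sonnet-4.6": (3.0, 15.0),
    "openai-o1-pro":     (150.0, 600.0),
}

def workload_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    in_rate, out_rate = RATES[model]
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# An assumed agent run of 2M input / 0.5M output tokens:
for model in RATES:
    print(model, round(workload_cost(model, 2_000_000, 500_000), 2))
```

The identical run costs $13.50 on the cheaper model and $600 on the premium one, so an automatic model switch mid-workflow changes the bill by more than an order of magnitude.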
Automation platforms such as n8n amplify this effect. n8n itself does not charge per token, but per workflow execution — for example, around €20 per month for roughly 2,500 executions or €50 for higher tiers. However, the actual AI costs are incurred on top through the connected models.
This leads to a common trap. A workflow that includes several AI calls per execution can become increasingly expensive as usage scales. The risk becomes particularly acute with automated triggers — such as incoming emails or API events — that initiate agent processes in the background. What begins as a handful of manual tasks can quickly turn into thousands of automated runs.
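The trap is visible in a three-line projection. The trigger volumes and per-call cost below are assumptions for the sketch, not figures from any real workflow:

```python
# Illustrative scaling math for the trap described above.
# Trigger volumes and per-call cost are assumed, not measured.
calls_per_execution = 4    # AI calls inside one workflow run (assumption)
cost_per_call = 0.05       # USD, average per call (assumption)

for executions_per_month in (50, 2_500, 20_000):
    monthly = executions_per_month * calls_per_execution * cost_per_call
    print(f"{executions_per_month:>6} runs/month -> ${monthly:,.2f} in model costs")
```

The platform fee stays flat across tiers while the model costs scale linearly with trigger volume, which is why automated triggers are the acute risk.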
So why are these costs so often underestimated? Primarily because the mental model does not match the billing model. People think in tasks or hours of work. Providers charge in tokens and chains of computation. A single “task” may internally consume millions of tokens without this being immediately apparent.
The issue is compounded by limited transparency. Many tools do not clearly display token usage per step or cost per workflow. Without monitoring, budgets or safeguards, there is little control — and the true cost only becomes visible when the bill arrives.
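The missing safeguard can be as simple as a per-call meter with a hard ceiling. A minimal sketch of such a guard; the class name, thresholds, and default rates are illustrative, not taken from any specific tool:

```python
# A minimal budget guard, sketching the safeguards the text says are missing.
# All names and thresholds here are illustrative assumptions.
class BudgetGuard:
    def __init__(self, monthly_budget_usd: float):
        self.budget = monthly_budget_usd
        self.spent = 0.0

    def record(self, input_tokens: int, output_tokens: int,
               in_rate: float = 3.0, out_rate: float = 15.0) -> None:
        """Log one model call; raise once the budget is exhausted."""
        self.spent += input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate
        if self.spent > self.budget:
            raise RuntimeError(f"budget exceeded: ${self.spent:.2f} spent")

guard = BudgetGuard(monthly_budget_usd=100.0)
guard.record(50_000, 5_000)  # cost is visible per call, not when the bill arrives
print(f"${guard.spent:.4f} of ${guard.budget:.0f} used")
```

Wiring every model call through a tracker like this turns the invisible token meter into a number someone can act on before month-end.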
The real challenge, therefore, lies not in model pricing itself, but in system architecture. Agents are not isolated tools; they are scalable systems. Anyone deploying them must think about their cost structure in the same way they think about their capabilities.
Agents are powerful. But they are priced differently.

