DEPLOYMENT · DEEP DIVE

46% of teams say integration is the #1 blocker. Here's what they actually mean.


The 2026 Claude State of Agents report dropped a number that surprised nobody who actually ships agents for a living: 46% of teams cite integration with existing systems as their #1 blocker to production deployment. Not model capability. Not cost. Not safety. Integration.

This number gets quoted in a lot of LinkedIn posts. What rarely gets explained is what "integration is hard" actually means in the room with the client at hour 40 of an engagement. So here's what it looks like up close — and the four patterns that have unblocked it for us.

What "integration is hard" actually looks like

When an analyst writes "integration is the top blocker," what they're describing is, almost every time, one of these four situations:

1. The API exists, but it's from 2014

The client has the API you need. It's documented. It works. It also returns XML, requires SOAP envelopes, has no rate limit headers, and uses authentication that involves a 30-minute token refresh cycle plus a backup mechanism that nobody has touched since the original engineer left in 2019. This is most enterprise APIs. This is the default state of "integration available."

2. The API doesn't exist, but the data does

The client's data is in a system that has no public API at all. Maybe it's an on-prem ERP. Maybe it's a legacy CRM running on a SQL database that nobody wants to expose. Maybe it's a SaaS product that has an API but not for the data your agent actually needs. The data exists, but it lives behind a wall, and the wall is load-bearing.

3. The data is in seven places at once

The "customer record" your agent needs to read isn't in one place. It's a join across the CRM (basic info), the support ticketing system (history), the billing system (status), the data warehouse (segmentation), and a Google Sheet that someone in operations maintains by hand (the actual ground truth). To answer one question, the agent has to pull from five sources, each with its own auth, schema, and freshness.

4. Authorization is a maze

The agent needs to act on behalf of different users. The CRM uses one identity provider. The ticketing system uses another. The data warehouse has its own row-level permissions. Mapping "this customer's request" to "what this agent is allowed to see and do" is a multi-day project before you write any agent logic at all.

If you've shipped agents in production, every one of those probably sounds painfully familiar. They're not exotic edge cases — they're the median enterprise environment.

The four patterns that have unblocked this for us

None of these are silver bullets. All of them have been the difference between "stuck in pilot" and "running in production" on at least one engagement.

Pattern 1: The MCP wrapper for the legacy API

When the client has a 2014-era API, we don't expose it directly to the agent. We build a thin MCP server that sits in front of it and translates the messy SOAP/XML/token-refresh dance into clean tool calls the agent can use. The agent never sees the legacy weirdness. It sees three clean tools: get_customer, list_orders, create_ticket. The MCP server eats the complexity.

This sounds obvious in retrospect, but it took us a while to learn. Our first instinct was to teach the agent how to handle the legacy API directly. That was a mistake. Models can technically do it — they just do it badly enough often enough that you'll spend weeks debugging it. Wrap the ugly thing in a clean tool, every time.
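To make the shape of that wrapper concrete, here's a minimal Python sketch. Everything in it is hypothetical (the operation name, the field names, the 30-minute token TTL), and a real version would register get_customer as an MCP tool on an actual MCP server rather than a plain method. The point it illustrates is the division of labor: the envelope, token, and XML mess live in one place the agent never sees.

```python
import time
import xml.etree.ElementTree as ET

class LegacyCrmWrapper:
    """Translation layer in front of a 2014-era SOAP API.

    The soap_client handles envelopes, HTTP, and retries; this class
    handles token refresh and flattens the XML into clean tool results.
    """

    TOKEN_TTL = 30 * 60  # legacy tokens expire every 30 minutes

    def __init__(self, soap_client):
        self._soap = soap_client
        self._token = None
        self._token_expires_at = 0.0

    def _ensure_token(self):
        # Refresh the auth token before it expires; the agent never
        # sees this dance.
        if time.time() >= self._token_expires_at:
            self._token = self._soap.refresh_token()
            self._token_expires_at = time.time() + self.TOKEN_TTL

    def get_customer(self, customer_id: str) -> dict:
        """Clean tool surface: an id goes in, a flat dict comes out."""
        self._ensure_token()
        xml_payload = self._soap.call("GetCustomerV2", self._token, customer_id)
        root = ET.fromstring(xml_payload)
        # Flatten the legacy XML into the fields the agent needs.
        return {
            "id": root.findtext("CustId"),
            "name": root.findtext("CustName"),
            "status": root.findtext("AcctStatus"),
        }
```

The agent's tool schema describes only the clean surface; swapping the legacy backend later costs nothing on the agent side.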

Pattern 2: The read-only data layer first

When the data is in seven places, we don't try to unify them in week one. We build a single read-only data layer that the agent queries — usually a thin GraphQL or REST endpoint that does the joins server-side. The agent doesn't know how many systems sit underneath. It just queries the layer and gets what it needs.

This is the part where you'll be tempted to talk yourself out of it because "that's basically a custom backend, that's a lot of work." Yes. It is. It's still less work than teaching an agent to handle five auth flows and five schemas correctly.
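The core of that layer fits in a page. The sketch below assumes each backing system is reachable through some fetch function (the names and shapes are illustrative, not from any real engagement); the layer fans out, joins server-side, and surfaces partial failures explicitly instead of hiding them from the agent.

```python
from typing import Callable

# One callable per backing system: customer_id -> partial record.
# In practice these would be the CRM, ticketing, billing, warehouse,
# and spreadsheet clients, each with its own auth handled here.
Source = Callable[[str], dict]

def build_customer_view(sources: dict[str, Source], customer_id: str) -> dict:
    """Single read-only view: joins N sources server-side so the agent
    issues one query and never touches the underlying systems."""
    view: dict = {"customer_id": customer_id}
    for name, fetch in sources.items():
        try:
            view[name] = fetch(customer_id)
        except Exception as exc:
            # Surface the failure as data rather than crashing the
            # whole join; the agent can reason about what's missing.
            view[name] = {"error": str(exc)}
    return view
```

Caching and freshness policy would layer on top; the design choice that matters is that auth and schema knowledge stay on the server side of this boundary.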

Pattern 3: The shadow mode launch

When authorization is a maze, we don't try to solve it perfectly before launch. We launch the agent in shadow mode: it runs alongside the human team, watches what they do, and outputs what it would have done — but doesn't actually do anything. For 2-4 weeks. We compare its outputs to the human decisions, find the divergences, fix the auth issues that come up in real situations instead of imagined ones, and only then flip the switch to active mode.

This gets you out of the trap where you spend two months mapping every possible permission edge case before launch. Most of the edge cases never come up. The ones that do come up come up fast, and you fix them with real data instead of speculative whiteboarding.
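The mechanics of shadow mode are deliberately boring: log what the agent would have done next to what the human actually did, then mine the divergences. A minimal harness, with illustrative names, might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class ShadowLog:
    """Shadow-mode harness: the agent proposes, the human decides,
    and nothing the agent proposes is executed."""
    records: list = field(default_factory=list)

    def observe(self, case_id: str, agent_action: str, human_action: str):
        self.records.append({
            "case": case_id,
            "agent": agent_action,
            "human": human_action,
            "diverged": agent_action != human_action,
        })

    def divergences(self) -> list:
        """The cases worth reviewing: where agent and human disagreed."""
        return [r for r in self.records if r["diverged"]]

    def divergence_rate(self) -> float:
        if not self.records:
            return 0.0
        return len(self.divergences()) / len(self.records)
```

The divergence list is the work queue: each entry is either an auth gap to fix or a behavior to correct, found with real traffic instead of whiteboard speculation.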

Pattern 4: The human handoff as a feature, not a fallback

When integration is genuinely impossible — the data is on-prem, the auth is legacy SSO with no programmatic access, the API requires manual approval per request — we stop trying to force it. Instead, the agent's tool for that integration is "ask a human." It generates a clean Slack message describing what it needs, a human in operations responds, and the agent continues. Yes, this is slower than full automation. It's also shippable, which "perfect automation" usually isn't.
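As a tool, the handoff is just careful message formatting plus a blocking wait. In the sketch below, post_and_wait is a stand-in for whatever messaging client the team uses (Slack, email, a ticket queue); everything else is illustrative.

```python
def ask_a_human(need: str, context: dict, post_and_wait) -> str:
    """Integration tool of last resort: when there is no programmatic
    path, the tool call is a well-formatted request to a person.

    post_and_wait posts the message to the ops channel and blocks
    until someone replies; its reply becomes the tool result.
    """
    lines = [f"Agent request: {need}", ""]
    # Give the human everything they need to answer in one pass.
    lines += [f"- {key}: {value}" for key, value in context.items()]
    lines += ["", "Reply in thread; the agent resumes with your answer."]
    return post_and_wait("\n".join(lines))
```

From the agent's perspective this is indistinguishable from any other slow tool, which is exactly what makes it shippable: you can replace it with a real integration later without touching the agent.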

This is where the human-in-the-loop conversation gets practical. We'll write more about this in a future post, but the short version: if integration is the blocker, sometimes the answer is a smarter handoff, not a smarter agent.

The thing nobody admits

Most of the work in shipping an agent is not building the agent. It's the integration. The model is the easy part. The prompting is the easy part. The part that takes 80% of the time is plumbing — connecting the agent to the systems it needs to actually do useful work in the messy, 15-year-old, half-documented environment where real businesses operate.

The integration is the engineering. Anyone telling you otherwise hasn't shipped an agent into a real company.

If you're evaluating agentic AI vendors and one of them tells you integration "isn't really a problem anymore" — that's how you know they haven't shipped one into a real enterprise. The 46% number isn't a phase. It's the work.