By Bill Sourour

Agentic AI Is a Feature, Not a Product

agentic-ai · ai-automation · enterprise-ai

Salesforce launched Agentforce in September 2024. By October 2025, it had a 2.0 and a 360. Microsoft shipped Copilot agents for Teams, Azure, and retail in the same window. ServiceNow acquired Moveworks. Google rebranded its agent builder twice.

Gartner's assessment of the oversupply was blunt: the supply of agentic AI far exceeds demand, and a market correction is coming. The vendors are racing to define a category before the market figures out it doesn't need one.

Under the branding

Agentforce, Copilot agents, ServiceNow AI Agents, Vertex AI agents: each connects a language model to a set of actions within an existing platform. The model interprets a request, picks the right action, executes it, and handles the result.

This is automation. Better automation, because language models handle unstructured inputs and make judgment calls that rule engines can't. But it sits on the same continuum as every automation tool that came before it. What gets it a new budget line is the name.
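The loop underneath all of these products fits in a few lines. A minimal sketch, with a stubbed-out model and invented action names — nothing here is any vendor's actual API:

```python
# Sketch of the shared "agent" loop: interpret a request, pick an action,
# execute it, handle the result. model_pick_action() is a stand-in for
# the language model; the action names are illustrative.

def model_pick_action(request: str) -> str:
    """Stand-in for the model: map free text to a known action name."""
    text = request.lower()
    if "refund" in text:
        return "issue_refund"
    if "status" in text:
        return "lookup_status"
    return "escalate_to_human"

# The "set of actions within an existing platform" — here, trivial stubs.
ACTIONS = {
    "issue_refund": lambda req: f"refund queued for: {req}",
    "lookup_status": lambda req: f"status fetched for: {req}",
    "escalate_to_human": lambda req: f"escalated: {req}",
}

def handle(request: str) -> str:
    """Interpret the request, pick an action, execute it, return the result."""
    action = model_pick_action(request)
    return ACTIONS[action](request)
```

The value the model adds is the first step — mapping unstructured text to a structured action. Everything after that is the same dispatch table automation has always been.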

This happened before

Robotic process automation (RPA) was going to automate everything. Organizations bought platforms, hired teams, built centers of excellence. Then the bots started breaking: a new interface element, a shifted data field, a process change nobody flagged. RPA failed because vendors sold a capability as a category. "Buy our RPA platform" became a procurement decision instead of an engineering decision.

Big data did the same thing. Blockchain after that. Digital transformation became a department. The pattern: a real technical capability gets a marketing name, a Gartner quadrant, a conference circuit, and a buying frenzy. The capability survives. The category doesn't.

RPA vendors promised rapid payback. Organizations expected nine months. Actual results for the implementations that worked: twelve months. Most bots broke on small changes. The companies that got lasting value added automation to their existing systems. The bots that worked were features of a larger workflow.

Gartner's October 2025 warning on agentic AI reads the same way: the underlying technology is sound, but undifferentiated agent platforms will merge or disappear. The capability will live on inside the software companies already use.

The adoption data

A PYMNTS study found that only 11% of organizations are using AI agents in production. Another 38% are piloting. The rest are exploring or haven't started.

The 11% are almost entirely companies that already had high automation maturity. By mid-2025, 25% of highly automated companies had adopted agentic AI; in companies with medium or low automation, adoption was effectively zero.

Companies are trying to buy their way to agent-based automation without the foundation: data pipelines that actually work, workflows defined end-to-end, systems that talk to each other. Gartner predicts that by 2027, over 40% of agentic AI projects will fail because the systems underneath can't support what the agents need.

An agent platform on top of broken plumbing just surfaces the broken plumbing.

The pattern plays out the same way. A CTO buys an agent platform, the vendor runs a polished proof of concept on clean data, and the pilot looks strong. Then the team tries to connect it to the actual claims system, the actual CRM, the actual document store. The integration work takes longer than the pilot did. The board approved a product purchase; what they needed was an architecture investment.

What to measure

Decision velocity: how fast a signal becomes a governed action.

A claim arrives. The system reads it, checks it against policy, flags what needs human review, routes the rest for payment. The metric: days from filing to resolution, and whether that number shrank.
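That flow is small enough to sketch. The threshold, field names, and triage rule below are illustrative assumptions, not any real policy:

```python
from datetime import date

# Hypothetical claim triage following the flow above: check a claim
# against policy, flag exceptions for human review, route the rest
# for payment. AUTO_PAY_LIMIT and the field names are made up.

AUTO_PAY_LIMIT = 5_000  # assumed policy: small, complete claims auto-pay

def triage(claim: dict) -> str:
    """Route a claim: exceptions to a human, the rest to payment."""
    if claim["amount"] > AUTO_PAY_LIMIT or claim.get("missing_docs"):
        return "human_review"
    return "auto_pay"

def decision_velocity_days(filed: date, resolved: date) -> int:
    """The metric itself: days from filing to resolution."""
    return (resolved - filed).days
```

Note that the metric function knows nothing about the agent. Whether triage is done by a rule, a model, or a person, the number being tracked is the same: did resolution get faster?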

Decision velocity captures the whole chain: the agent's work, the human review, the system handoff, the compliance check. McKinsey's data on the 6% of organizations generating real earnings before interest and taxes (EBIT) from AI tells the same story: they measure the business process, not the model. The agent is a tool inside that measurement.

Where to start

Where are decisions waiting on a person to move data between systems? The starting point is the integration.

Where does a process stall because someone has to read, interpret, and act on unstructured information: a document, an email, a form? A language model adds something there that rule-based automation never could. But only if the downstream workflow is already automated.

Where is decision velocity slowest? The bottleneck is usually two systems that don't share data, or an approval step that takes three emails and a week.
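Finding that bottleneck is a measurement exercise, not an AI exercise. One illustrative way to do it, assuming each process instance logs a timestamp per stage (the stage names are invented for the example):

```python
from datetime import datetime

# Locate the slowest handoff in a process by comparing the waits
# between consecutive stage timestamps.

def slowest_step(timestamps: dict[str, datetime]) -> tuple[str, float]:
    """Return the stage transition with the longest wait, in hours."""
    ordered = sorted(timestamps.items(), key=lambda kv: kv[1])
    waits = {
        f"{a} -> {b}": (tb - ta).total_seconds() / 3600
        for (a, ta), (b, tb) in zip(ordered, ordered[1:])
    }
    step = max(waits, key=waits.get)
    return step, waits[step]
```

Run against real process logs, a report like this usually points at a handoff between two systems or a human approval queue — the places to fix before any agent is added.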

Fix the plumbing. The agents will follow.

Bill Sourour

Founder, Arcnovus

25 years in enterprise technology. Writes about AI strategy for CTOs.

Featured in Fortune, WIRED, and CBC