The Role of AI Agents in Modernizing Healthcare Records and Manufacturing Inspections


The rapid rise of agentic AI in industries ranging from healthcare to manufacturing is pushing enterprises to a critical juncture, and the core challenge is not the technology itself but the trustworthiness of these AI agents. The numbers highlight the problem starkly: 85% of enterprises are experimenting with agent technology, yet only 5% have moved it into production, with most organizations stalling at the pilot phase. This gap reflects a deeper identity governance crisis that could endanger sensitive systems and operations.

The Trust Deficit Explained

Cisco President Jeetu Patel summarized the implications succinctly at a recent event: organizations lack clarity over which AI agents have access to crucial systems and who is accountable for their actions. This uncertainty compounds an already fraught threat landscape: the 2026 IBM X-Force Threat Intelligence Index reported a 44% increase in attacks targeting public-facing applications, a surge driven by inadequate security controls. Layering AI agents on top only complicates these existing vulnerabilities.

Architectural Implications

Michael Dickman, Cisco's SVP and GM of Campus Networking, raises a pressing architectural concern: traditional technology rollouts have prioritized productivity, with security treated as an afterthought. "Trust is one of the key requirements," he emphasizes, urging executives to reconsider their approach. Unlike past transitions, where the pace of deployment often overwhelmed security frameworks, agentic AI strategies must treat trust as a foundational principle.

Challenges of Implementation

The rush to deploy autonomous agents exposes organizations to a multitude of risks. Decisions made by these agents span a wide range of operations, from updating patient records in real time to managing financial transactions. "The blast radius of a compromised identity has expanded dramatically," Dickman warns, necessitating a comprehensive approach to agent governance. His framework identifies four essential preconditions: secure delegation, cultural readiness, token economics, and the irreplaceable value of human judgment.

Visibility into Agent Behavior

Agents operate in a landscape where data is increasingly siloed, and organizations are often unaware of how different systems exchange vital information. Dickman stresses the distinction between inferred connections and actual data transmissions, arguing that network visibility offers insights that traditional security measures miss. As IoT and AI proliferate, the complexity grows, demanding tighter controls to prevent misuse of highly sensitive information.

The Risk of Silos

In many organizations, different teams independently develop agents, each built on fragmented data sets, resulting in superficial automation rather than deep insight. Without cohesive strategies to tackle these silos, enterprises risk repeating past mistakes, where one team's agent cannot communicate or correlate with another's outputs. Independent analysts have echoed this concern, pointing out that cloning human permissions for agents often produces pervasive permission sprawl from day one. The fallout is that control over who does what becomes as complex and unwieldy as the systems it is meant to protect.
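The permission-sprawl point the analysts make can be shown as simple set arithmetic: cloning a human's full permission set gives an agent far more than its task requires. The permission names below are hypothetical, chosen only to make the difference visible.

```python
# What a human operator holds across many duties (hypothetical names).
human_permissions = {
    "patient_record:read", "patient_record:update",
    "billing:read", "billing:approve",
    "hr:read", "email:send",
}

# What the agent's one task actually requires.
task_required = {"patient_record:read", "patient_record:update"}

cloned_grant = set(human_permissions)                 # naive clone of the human
least_privilege_grant = human_permissions & task_required

# The sprawl: permissions the agent holds but never needs.
sprawl = cloned_grant - least_privilege_grant
print(sorted(sprawl))
# → ['billing:approve', 'billing:read', 'email:send', 'hr:read']
```

Even in this toy example, two-thirds of the cloned grant is unneeded surface area; at enterprise scale, that excess is what makes agent access "as complex and unwieldy as the systems it is meant to protect."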

Assuring Trust in Production

The critical question remains: how can companies operationalize trust as they scale agent deployment? Dickman emphasizes building a clear governance structure, starting with agent identity and access management (IAM). Each agent should be associated with defined actions and backed by human accountability, ensuring clarity and responsibility in critical operations. "If something goes wrong, there's a person to talk to," he stresses, suggesting that a solid foundation in critical operations paves the way for smoother rollouts in less sensitive environments.
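A minimal sketch of that idea, with every name (`AgentRegistry`, `authorize`, the agent and owner identifiers) being an illustrative assumption rather than a real API: each agent is registered with its defined actions and a named, accountable human owner, and no agent can be registered without one.

```python
class AgentRegistry:
    """Toy agent IAM: defined actions plus an accountable human per agent."""

    def __init__(self):
        self._agents = {}

    def register(self, agent_id, owner, allowed_actions):
        # "If something goes wrong, there's a person to talk to":
        # registration fails without an accountable human owner.
        if not owner:
            raise ValueError("every agent needs an accountable human owner")
        self._agents[agent_id] = {
            "owner": owner,
            "allowed_actions": set(allowed_actions),
        }

    def authorize(self, agent_id, action):
        # Returns (is_allowed, accountable_owner) for audit trails.
        entry = self._agents.get(agent_id)
        if entry is None:
            return False, None
        return action in entry["allowed_actions"], entry["owner"]

registry = AgentRegistry()
registry.register("inspection-agent-7", "bob@example.com",
                  ["inspection_report:create"])

ok, owner = registry.authorize("inspection-agent-7", "inspection_report:create")
print(ok, owner)  # True bob@example.com
```

Returning the owner alongside every authorization decision is the design point: accountability is looked up in the same step as access, not reconstructed after an incident.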

Five Strategic Priorities

To bridge the divide between pilots and production, Dickman proposes essential steps for organizations to prioritize:

  • Cross-Functional Alignment: Establish shared expectations for agentic AI across all departments involved.
  • Mature IAM and Privileged Access Management: Ready the governance structures for the complexities introduced by agents.
  • Platform Strategy: Utilize a cohesive data-sharing approach to foster cross-domain correlation.
  • Hybrid Architecture Design: Combine agentic AI with traditional tools to balance flexibility and reliability.
  • Solidify Use Cases: Start with several high-value applications to build confidence through robust practices.

In Dickman's view, proactive steps in these areas can mitigate the trust gap that persists within agent deployments. He asserts that identifying and understanding these critical intersections among identity management, visibility, and governance is non-negotiable for any enterprise gearing up for the future of work with agentic AI.

The Path Forward

Ultimately, the obstacles organizations currently face aren't purely technical; they mirror a pattern seen across industries whenever innovation outpaces security considerations. The urgency of building governance robust enough to anticipate the fallout from agentic AI cannot be overstated. As enterprises look to capitalize on AI's transformative potential, those that establish a resolute framework of trust will pull ahead of their peers in both capability and security. The conversation around agentic AI is evolving rapidly, and failing to keep pace on identity management may only widen the trust gap in the years ahead.