Unravelling the Enterprise AI Maze: Unpacking Audit, Ownership, and Costs with MCP (Part 2 of 3)

In the first part of this series, we introduced the Model Context Protocol (MCP) as a vital standardisation layer for integrating AI applications, particularly Large Language Models (LLMs), with diverse enterprise systems. We explored how unmanaged MCP adoption can create significant hurdles in terms of scalability, developer experience, and user trust. Now, as we continue our journey through the complexities of enterprise AI, we turn our attention to three equally critical areas: the challenges of security and auditing, the concerns of system owners, and the cost-oversight demands of FinOps.
They represent fundamental governance and operational issues that, if left unaddressed, can severely impede the responsible and cost-effective scaling of AI within your organisation.
The Security and Audit Challenge

For any enterprise, robust security and clear audit trails are non-negotiable. When AI applications become deeply embedded through MCP, ensuring compliance and accountability becomes a complex undertaking:
- Verification of Credential and Consent Management
One of the most significant headaches is the difficulty of verifying that all AI applications are managing user credentials and consents properly. With a multitude of AI applications potentially holding or processing sensitive access information, maintaining a clear, auditable record of how these are handled becomes incredibly challenging. Without a standardised approach, verifying compliance with data protection regulations like GDPR, or with internal security policies, is a constant uphill battle.
- Integration Compliance Verification
Ensuring that systems are integrated according to the original planned design presents a significant challenge. AI applications are dynamic, and their interactions, especially when powered by LLMs, can evolve. Auditors need a clear, consistent way to verify that these integrations adhere to architectural blueprints, security baselines, and data flow policies, preventing unapproved data access or system manipulation.
- Adequacy of Tool Usage Logging
Verifying that AI applications maintain proper logging of their MCP tool usage is currently difficult. For accountability and troubleshooting, it's crucial to know what MCP calls were made, by which AI application, at what time, and with what parameters. In a fragmented MCP landscape, gaining a holistic and trustworthy view of these interactions is often impossible, creating significant blind spots for security teams and compliance officers.
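As a concrete illustration of the audit record this section calls for — which MCP call was made, by which AI application, on whose behalf, when, and with what parameters — the sketch below shows a thin logging wrapper. The `MCPAuditLogger` class and its field names are illustrative assumptions, not part of any MCP SDK:

```python
import json
import hashlib
from datetime import datetime, timezone


class MCPAuditLogger:
    """Illustrative audit trail for MCP tool calls (all field names are assumptions)."""

    def __init__(self):
        self.records = []

    def log_tool_call(self, app_id: str, user_id: str, tool: str, params: dict) -> dict:
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "ai_application": app_id,   # which AI application made the call
            "on_behalf_of": user_id,    # the human user whose consent applies
            "tool": tool,               # which MCP tool was invoked
            # Hash the parameters rather than storing them verbatim, so the
            # audit trail itself does not leak sensitive values.
            "params_sha256": hashlib.sha256(
                json.dumps(params, sort_keys=True).encode()
            ).hexdigest(),
        }
        self.records.append(record)
        return record


audit = MCPAuditLogger()
entry = audit.log_tool_call(
    app_id="hr-assistant",
    user_id="jdoe",
    tool="hr_system.get_leave_balance",
    params={"employee": "jdoe"},
)
```

In practice this would sit in a shared gateway in front of every MCP server, so security teams get one trustworthy view rather than per-application blind spots.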
The System Owner's Challenge

System Owners – those responsible for the core enterprise applications like ERPs, CRMs, or HR systems – face unique anxieties as AI integrations proliferate. Their primary concerns revolve around control, visibility, and accountability for their critical assets:
- Lack of Integration Visibility
It is often unclear to system owners which AI applications are integrating with their systems. Without a centralised registry or notification mechanism, a system owner might suddenly see unexpected traffic or data access, with no immediate way to identify the consuming AI application or its purpose. This lack of transparency can lead to suspicion and resistance to further AI adoption.
- Uncertainty of Action Origin
Distinguishing between actions performed by a human user and those performed by an AI application is difficult. If a record is updated or a transaction is initiated, system owners need to pinpoint the source precisely for auditing, compliance, and troubleshooting. When AI applications act on behalf of users via MCP, this distinction can blur, complicating incident response and accountability.
- Inability to Control AI Access
Crucially, system owners cannot easily prevent specific AI applications from accessing their systems. If an AI application is deemed rogue, inefficient, or simply no longer needed to access a particular system, system owners often lack a granular, self-service mechanism to revoke that specific AI's access without impacting legitimate human users or other AI applications.
- Insufficient Integration Notification
System owners are not adequately informed when their systems are being integrated with AI applications. This lack of proactive communication can leave them feeling bypassed and unprepared for the new demands placed on their systems, fostering a sense of disempowerment rather than collaboration.
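One pattern that addresses several of these concerns at once — visibility, revocable access, and per-application control — is a central integration registry that every MCP call is checked against. The sketch below is a minimal, hypothetical in-memory version; a real deployment would back this with an API gateway and an identity provider, and all names here are illustrative:

```python
class IntegrationRegistry:
    """Hypothetical central registry mapping enterprise systems to the AI
    applications approved to access them. All identifiers are illustrative."""

    def __init__(self):
        # system name -> set of approved AI application IDs
        self._approvals: dict[str, set[str]] = {}

    def register(self, system: str, app_id: str) -> None:
        """System owner approves an AI application — giving them visibility
        into exactly who is integrating with their system."""
        self._approvals.setdefault(system, set()).add(app_id)

    def revoke(self, system: str, app_id: str) -> None:
        """System owner revokes one AI application's access without
        affecting human users or other AI applications."""
        self._approvals.get(system, set()).discard(app_id)

    def is_allowed(self, system: str, app_id: str) -> bool:
        """Gateway-side check performed before forwarding an MCP call."""
        return app_id in self._approvals.get(system, set())


registry = IntegrationRegistry()
registry.register("crm", "sales-copilot")
allowed_before = registry.is_allowed("crm", "sales-copilot")  # True
registry.revoke("crm", "sales-copilot")
allowed_after = registry.is_allowed("crm", "sales-copilot")   # False
```

Because every AI application must present its own identity to pass this check, the registry also answers the action-origin question: calls carrying an `app_id` are machine-initiated by construction.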
The FinOps Challenge

The financial operations (FinOps) team plays a key role in optimising technology spend and ensuring cost efficiency. The uncoordinated deployment of MCP can lead to significant, often hidden, expenditure:
- Duplication of Tool Development
Similar MCP tools are being developed by different teams without effective collaboration. For instance, multiple teams might independently build MCP servers to integrate with the same HR system, each incurring development, testing, and maintenance costs. This 'reinvention of the wheel' is a direct drain on resources and budget.
- Cost Inefficiencies
The lack of coordinated development leads to inefficient use of resources and increased costs. Beyond development duplication, consider the compute resources consumed by multiple, potentially inefficient, MCP servers performing similar tasks. Without a centralised strategy, it's incredibly difficult to track, attribute, and optimise these operational expenses, leading to ballooning cloud bills that lack clear justification.
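For these operational costs to be attributable at all, each MCP call needs to carry cost metadata that FinOps can aggregate by owning team and application. A minimal sketch, with hypothetical tag names and invented per-call costs purely for illustration:

```python
from collections import defaultdict


def attribute_costs(call_records: list[dict]) -> dict[str, float]:
    """Aggregate per-call costs by owning team.
    Each record carries illustrative tags: 'team', 'app', 'cost_usd'."""
    totals: dict[str, float] = defaultdict(float)
    for rec in call_records:
        totals[rec["team"]] += rec["cost_usd"]
    return dict(totals)


# Two teams independently running MCP servers against the same HR system
# show up as separate line items -- surfacing the duplication this section
# describes. The costs below are made-up illustrative figures.
records = [
    {"team": "hr-platform", "app": "hr-mcp-server-a", "cost_usd": 0.004},
    {"team": "people-ops",  "app": "hr-mcp-server-b", "cost_usd": 0.006},
    {"team": "hr-platform", "app": "hr-mcp-server-a", "cost_usd": 0.002},
]
totals = attribute_costs(records)
```

Without this kind of tagging at the gateway, duplicated servers blend into one undifferentiated cloud bill and the inefficiency stays invisible.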
The Path Forward
The challenges we've outlined across security, system ownership, and FinOps are symptomatic of a broader issue: the lack of a cohesive, enterprise-wide strategy for MCP implementation. Ad-hoc adoption, while perhaps quick in isolated instances, introduces systemic risks and inefficiencies that undermine the very benefits AI promises.
In the final post of this series, we will pivot from problems to solutions. Part 3 explores how a strategic, centralised approach to MCP can not only mitigate these risks but also unlock truly scalable, secure, and cost-effective AI integration across your entire enterprise. Stay tuned for insights into architectural patterns, governance models, and best practices that can transform your AI landscape.

Derek Ho
Senior AI & Cloud Consultant