MCP Server

interop.io MCP Server – Getting Started

In this post, we explore how the interop.io MCP Server enables AI agents to take real action across financial desktops, going behind the scenes in a real workflow. This is the first in a series of developer-focused blog posts authored by our Developer Community Manager, highlighting hands-on guidance, use cases, and lessons from the field.

Just a few months after ChatGPT was released and the world went crazy about Generative AI, autonomous AI projects like AutoGPT started popping up, promising to be the next big thing. But even now, several years later, we still haven’t seen many real-world examples of Autonomous AI agents. One of the reasons is that to act, Large Language Models (LLMs) like GPT, Claude and Gemini need to talk to other apps or APIs, and that communication logic needs to be coded first (as a tool/function call).

What is MCP?

The Model Context Protocol (MCP) is an open standard that gives LLM applications a common way to discover and call external tools. It allows tools to be added and removed by just copy-pasting a few lines in a config file (and usually restarting the application). That’s it. Then, when you chat with the AI, it will use the relevant tool based on the situation.
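For example, adding a server to Claude Desktop typically means dropping an entry like this into its claude_desktop_config.json (the path argument here is illustrative):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/Documents"]
    }
  }
}
```

Restart the app, and the filesystem tools show up in the chat, no code required.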

For example, if you tell it “send that file to my boss” without specifying how, it will look up the list of tools it has access to → ask you for permission to access the file system → find the file → consult the tool list again → select e.g. Outlook and ask whether it should use it. After you confirm each tool use (keeping you in charge), it takes action and sends the email.
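That step-by-step flow can be sketched as a simple loop on the host side. The tool names, registry, and confirmation function below are all hypothetical, just to illustrate the confirm-then-act pattern:

```python
# Hypothetical sketch of the host-side loop: the model proposes a tool call,
# the human confirms, and only then does the host execute it.

def confirm(action: str) -> bool:
    """Stand-in for the UI prompt that keeps the human in charge."""
    return True  # a real host would ask the user here

# Illustrative tool registry the model can "look up".
TOOLS = {
    "filesystem.find_file": lambda name: f"/home/me/{name}",
    "outlook.send_email": lambda to, attachment: f"sent {attachment} to {to}",
}

def run_plan(plan):
    results = []
    for tool_name, kwargs in plan:
        if confirm(f"Use {tool_name} with {kwargs}?"):
            results.append(TOOLS[tool_name](**kwargs))
    return results

# "Send that file to my boss" might expand into a plan like this:
plan = [
    ("filesystem.find_file", {"name": "report.xlsx"}),
    ("outlook.send_email", {"to": "boss@example.com", "attachment": "report.xlsx"}),
]
print(run_plan(plan))
```

The key point is that the plan is built by the model at runtime, while every individual tool call still passes through the human checkpoint.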

That makes it possible to hand the AI complex, multi-step tasks: it chains applications and services together and builds the whole workflow logic itself.

How did MCP catch on?

Anthropic released MCP in November 2024. There wasn’t much noise around it until a couple of months ago, when interest exploded and many developers started building community MCP servers. Some companies have also released official MCP servers, which let an AI agent connect to their services and tools and act semi-autonomously. The ecosystem that grew around it made adding tools to AI applications as easy as installing a browser extension.

How does MCP actually work?

The architecture of the Model Context Protocol (MCP) is straightforward, with three main components: the Host, the Client, and the Server.

  • Host: The MCP Host is usually an app that leverages LLMs (think Claude Desktop, Cursor, or VS Code). It’s the one asking for tools or APIs to do specific tasks. For instance, your host might say, “I need to fetch contacts from Outlook,” or “Give me the current figures from Excel.”
  • Client: The MCP Client is like a middle layer maintaining 1:1 connection with servers, inside the host application. It manages connections, makes sure requests get routed properly, and maintains state (which makes MCP feel closer to something like FTP than to stateless protocols like HTTP).

Note: Because the client lives inside the host, popular docs often call the host a “client,” and you’ll see people refer to Claude Desktop and the like as clients. In practice the two terms are used interchangeably; the spec reserves “client” for the low-level connector, but I wouldn’t get hung up on the distinction.

  • Server: The MCP Server is where the actual tool lives. Servers expose APIs or services the client wants to access. You can have servers for local desktop apps (like Outlook or Excel), databases, your file system, or web services. These servers handle the actual data fetching, updating, or processing.

Because MCP is stateful, sessions are maintained between requests. This statefulness means you can have longer interactions with richer context. Think of it like chatting with your APIs rather than just firing off requests and forgetting about them.
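As a rough illustration of what that statefulness buys you, here is a minimal sketch of a session object that carries context across requests; every name here is invented for the example:

```python
# Minimal sketch: a stateful session accumulates context across requests,
# unlike a fire-and-forget stateless call.

class Session:
    def __init__(self):
        self.context = []  # grows with every exchange

    def request(self, message: str) -> str:
        self.context.append(message)
        # A real MCP client would route this to a server over the
        # maintained connection; here we just echo with the context depth.
        return f"reply #{len(self.context)} to: {message}"

session = Session()
session.request("open portfolio for Alice")
reply = session.request("now chart her Tesla exposure")
# The second request can rely on everything said before it.
print(reply)
```

With a stateless protocol, every request would have to re-send that accumulated context itself.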

You mentioned Anthropic – am I required to use their models?

No. MCP was created by Anthropic, but it is open source, and both model- and app-agnostic.

Many early examples use Claude Desktop as the MCP host, since the concept was pioneered by Anthropic. But there are now many alternatives, like Visual Studio Code, Cursor, Cline, and more.

On the Large Language Model front you also have the freedom to choose which one to use and to switch without rewriting hundreds of functions and tool calls. You can go with Claude or pick from the many models from OpenAI, Google, xAI, etc. If you want to run everything locally and not share any information with a third-party provider, you can even run an open-source model like Llama, Mistral, Gemma, Phi, or DeepSeek using Ollama.

Why use the interop.io MCP server?

interop.io + MCP is the perfect combo. Running locally, it connects to desktop programs such as Outlook and Excel, as well as your custom apps, through already existing APIs and adapters without extensive coding. Centralized SaaS hubs route all communication through a cloud server somewhere, slowing down interactions and limiting real-time responsiveness. The interop.io MCP Server, by contrast, acts as a local orchestrator: no multiple back-and-forth round trips, and no data shared with additional third-party vendors.

In the case of Claude we still connect to a remotely hosted LLM, so in one of my next articles I plan to cover how to replace it with a self-hosted model running locally, keeping all data and traffic inside the organization. This approach boosts both efficiency and reliability, enabling smoother, faster, and more responsive AI-driven workflows.

Integration services running in the cloud and iPaaS providers require predefined integrations, meaning every change or addition involves significant engineering effort. With interop.io MCP’s plug-and-play capability, you get immediate runtime integration, reduced deployment time, and greater agility.

Sample use case

Enough talking; how about we see it in action?

Let’s imagine I’m a wealth advisor, just preparing my first cup of coffee when a message from Alice Johnson lands in my inbox. She’s worried about some concerning rumours about Tesla she read on X. Her ask is clear and familiar: “What’s my exposure, and can you send me an updated report? Should I be worried?”

What usually follows is a scramble: dig through portfolio data, prepare spreadsheets, create PowerPoint slides (since we can’t simply send an Excel file), and draft a response. Busy work that eats up hours… but not today.

This time, we’ve got an AI helper, powered by interop.io’s MCP Server and APIs behind the scenes. For simplicity I’ll use Claude Desktop, but other hosts will work as well (OpenAI recently announced MCP support in their Agents SDK and support for remote MCP servers in their new Responses API).

Note: While we are users of many AI systems (like Claude) and appreciate their features, we strongly believe that the enterprise use case benefits from enterprise-grade tooling. To that end, we are developing a ChatUI with multi-LLM support, prompt libraries, built-in observability, and the ability to natively participate in FDC3 channels and understand and act on FDC3 context. More to come here, and we are excited to partner with clients on this new capability.

Let’s now give some instructions and guardrails to keep our helper in check. In this case, we say: “Act as my research analyst. Use only our internal tools. If anything’s unclear, ask first. Don’t send anything out without my review.” That’s our safety net — Human-Centered AI.


I hand off Alice’s message, so Claude can read it, analyze the intent, and start building an action plan.

Then, it pulls the right client ID using getClientList, opens PowerPoint, pulls Tesla-specific data from our internal systems via extractData, and then runs generateReport to create a clean, firm-branded presentation all using secure interop.io tools.
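Under the hood, that sequence amounts to chaining three tool calls. The tool names below come from the demo, but their signatures and data are mocked up purely for illustration:

```python
# Mocked versions of the interop.io tools used in the demo.
# Real signatures and payloads will differ; this only shows the chaining.

def getClientList():
    return [{"id": "C-001", "name": "Alice Johnson", "holdings": ["TSLA", "AAPL"]}]

def extractData(client_id: str, ticker: str):
    return {"client": client_id, "ticker": ticker, "exposure_pct": 12.5}

def generateReport(data: dict) -> str:
    return f"deck for {data['client']}: {data['ticker']} exposure {data['exposure_pct']}%"

# The agent's plan: resolve the client, pull the data, build the deck.
client = next(c for c in getClientList() if c["name"] == "Alice Johnson")
data = extractData(client["id"], "TSLA")
deck = generateReport(data)
print(deck)
```

The agent composes this chain on its own from the tool descriptions; nobody hand-coded the sequence in advance.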

When the deck is ready, Claude pauses for me to inspect. The result is a clean five-slide summary — exposure data, market context, a volatility chart, and the mandatory disclosure slide. All of it polished and aligned to my company’s standards.


Next up, let’s ask Claude to draft an email. It pulls info from our CRM, summarizes the exposure clearly, includes a bullet point or two about Tesla’s recent performance, and even wraps with a Calendly link and our signature. Again, I need to confirm, since AI needs human supervision.

And just before sending, we think maybe other customers with exposure to that stock will be interested in an update. Let’s type:

“AI Helper, repeat this for every other client who holds Tesla.”

Claude runs through the books, filters appropriately, and starts spinning up custom decks and emails for each affected client. It even remembered that there was no need to generate another one for Alice.

I can again see everything my AI helper is doing and approve or reject it, and that setup turns hours of manual effort into minutes of focused decision-making. All without creating accounts in the cloud, building complex workflows, or fighting with supposedly user-friendly drag-and-drop tools.

I can say that my first test drive of the interop.io MCP server was surprisingly productive. No wrestling with cloud setups or complicated APIs — just edit a few lines, restart, and new capabilities are ready to go. The AI assistant didn’t replace me; it just made my tools smarter, letting me skip repetitive tasks and focus on governance. And now, I’ve even got time for another cup of coffee.

Stay tuned for the next articles, where we’ll explore other use cases and how to leverage different tools and LLMs, or even run an open-source model locally so data privacy is no longer a concern.

Want to see this demo live? Check out the video.


About the author
Toni Petrov
Developer Community Manager
Toni is the Developer Community Manager at interop.io, where he's leading the charge to grow a new developer community from the ground up in 2025. With over a decade of experience in web development, community building, and open-source engagement — he’s passionate about helping developers connect, share, and build smarter.