July 10, 2025

Azure AI Agents and Model Context Protocol (MCP) with Dynamic Tool Discovery: Step-by-Step Guide

Introduction:

In the era of AI agents, it’s common to see agents interacting with a wide range of external tools, APIs, and services. Whether it’s fetching data, triggering workflows, or connecting with domain-specific systems, tools are how agents get things done.

But as the number of tools grows, so does the complexity of managing them. Hardcoding every tool into the agent logic isn’t scalable. Each update means changing code, redeploying agents, and tightly coupling functionality that should stay flexible.

That’s where dynamic tool discovery comes in, and this is exactly what the Model Context Protocol (MCP) is built for.
In this guide, we’ll learn how to integrate MCP with an Azure AI Agent, enabling it to discover tools at runtime. No redeployment. No manual updates. Just a clean, modular setup where tools live independently and are made available to the agent on demand.

We’ll walk through building a medical assistant agent that can:

1. Retrieve upcoming patient appointments
2. Fetch recent medical history
3. Estimate wait times for specialists

All powered by tools registered on an MCP server and dynamically discovered by the agent.

Before we start building, let’s take a moment to understand what MCP is and why dynamic tool discovery matters in the context of Azure AI Agents.

Understanding MCP and Tool Discovery:

What is MCP?

The Model Context Protocol (MCP) is an open protocol that enables AI agents to discover and interact with external tools at runtime. Instead of defining tools inside the agent, MCP exposes them through a centralized server. Each tool is defined with clear inputs, outputs, and documentation, and can be registered or updated without touching the agent code.

You can think of MCP as a live tool catalog that your agent connects to whenever it needs functionality. This means tools can be managed independently of the agent itself.

Now that we know where the tools live, let’s see how agents actually discover and use them.

What is Dynamic Tool Discovery?

Dynamic tool discovery allows an agent to:

1. Query the MCP server
2. Fetch the available tools
3. Generate function wrappers for each
4. Use them as if they were built into the agent

All of this happens at runtime, so your agent always has access to the latest toolset without needing redeployment or hardcoded logic.

So how does this all fit into the Azure ecosystem? Let’s connect the dots.

Why it works well with Azure AI Agents:

Azure AI Agents are designed to be modular and extensible. By integrating MCP:

1. You keep your agent code clean and focused on behavior
2. Tools can be maintained and scaled independently
3. Your system becomes more adaptable to changes in external services or APIs

This combination lets your agents evolve without rewrites, making your setup easier to scale and maintain.

Now that we understand how dynamic tool discovery works with MCP, let’s get our Azure AI Agent environment ready, starting with setting up Azure AI Foundry.

Set Up Azure AI Foundry and Agent Service:

Before we dive into code, we'll need a working Azure AI Foundry project with a deployed model. Follow the steps below to create one.

1. Open a browser and go to https://ai.azure.com.
2. Sign in using your Azure credentials.
3. On the homepage, select Create an agent.
4. When prompted, create a new project. Use a valid name and expand Advanced options to configure:
    1. Azure AI Foundry resource: Choose or create one
    2. Subscription: Your Azure subscription
    3. Resource group: Select or create
    4. Region: Pick any supported region (note that model quotas may vary)
5. Select Create and wait for the project to be provisioned.
6. If needed, deploy a GPT-4o model using the Global Standard or Standard option (based on quota availability).
7. Once the project is ready, go to the Overview tab in the left-hand menu.
8. Copy the Project Endpoint URL; we’ll need it later to connect the client script to this agent.


That’s all we need from Azure for now. 

With the Azure AI Foundry project and agent in place, let’s shift to the local environment setup where we’ll start wiring everything together with MCP.

Set Up the Local Environment:

With our Azure AI project ready, it’s time to prepare the local development environment where the MCP server and client will run. This setup allows our agent to discover tools dynamically at runtime.

Step 1: Create a Virtual Environment
Open your terminal and run the following commands to create and activate a Python virtual environment:

python -m venv mcpVenv
mcpVenv\Scripts\activate   # For Windows

# OR

source mcpVenv/bin/activate   # For macOS/Linux

Step 2: Create .env File with Required Variables
Before installing the requirements, create a .env file in the project root and add the following:

# Project Connection String from Azure AI Project in Foundry
PROJECT_ENDPOINT="<<Project Connection String>>"

# Model deployment name from Azure
MODEL_DEPLOYMENT_NAME=gpt-4o

Note: PROJECT_ENDPOINT is the project endpoint you copied in the previous step when setting up the Azure AI project in Foundry.

Step 3: Install Required Packages
Create a requirements.txt file and add the following dependencies:

fastapi
aiohttp
python-dotenv
azure-ai-agents
azure-identity
mcp

Then install them:

pip install -r requirements.txt

That’s it! Our environment is now ready to build the MCP server and client.

With the environment ready, let’s start building the core of our setup: the MCP server that will host and expose tools to our Azure AI Agent.

Build the MCP Server:

The MCP server acts as a centralized registry where tools are defined and exposed. These tools will later be discovered and used by the agent dynamically at runtime.

We’ll use the FastMCP class from the mcp package to quickly spin up an MCP server and register tools using decorators.

Here’s the full server script (server.py):

import json
from mcp.server.fastmcp import FastMCP

# Initialize the MCP server
mcp = FastMCP("HealthAssistant")

# -----------------------------------------
# Tool: Get upcoming appointments for a patient
# -----------------------------------------
@mcp.tool()
def get_appointment_schedule(patient_id: str) -> str:
    """
    Retrieves the list of upcoming medical appointments scheduled for the specified patient.
    
    This tool accepts a patient ID and returns a structured list of upcoming appointments,
    including the date and department for each appointment. It can be used to display the
    patient's future visit schedule to help them stay informed and manage their time.

    Parameters:
        patient_id (str): The unique identifier of the patient.

    Returns:
        str: A JSON-formatted string containing the patient ID and a list of upcoming appointments,
             each with a date and department name.
    """
    data = {
        "patient_id": patient_id,
        "appointments": [
            {"date": "2025-07-15", "department": "Cardiology"},
            {"date": "2025-08-03", "department": "Dermatology"},
            {"date": "2025-08-18", "department": "Lab Work"}
        ]
    }
    return json.dumps(data)

# -----------------------------------------
# Tool: Get recent medical history
# -----------------------------------------
@mcp.tool()
def get_medical_history(patient_id: str) -> str:
    """
    Retrieves the recent medical history for the currently active patient.

    This tool returns a chronological summary of recent medical events, including visits,
    diagnoses, treatments, and test results. It's useful for reviewing a patient’s recent
    healthcare activities and identifying patterns or follow-up needs.

    Returns:
        str: A JSON-formatted string representing a list of medical history entries,
             each including the date and a summary of the medical event or consultation.
    """
    history = [
      {"date": "2025-06-20", "summary": "Annual physical checkup - normal"},
      {"date": "2025-05-02", "summary": "Prescribed allergy medication"},
      {"date": "2025-03-10", "summary": "Blood test - slightly elevated cholesterol"}
    ]
    result = {
        "patient_id": patient_id,
        "history": history
    }
    return json.dumps(result)

# -----------------------------------------
# Tool: Estimate wait time for a specialist
# -----------------------------------------
@mcp.tool()
def estimate_wait_time(specialty: str) -> str:
    """
    Provides an estimated wait time for scheduling an appointment with a medical specialist.

    By specifying a medical specialty (e.g., Cardiology, Dermatology), this tool returns
    an estimated waiting period based on current scheduling trends. This helps patients
    plan ahead and manage expectations for their care timeline.

    Parameters:
        specialty (str): The name of the medical specialty.

    Returns:
        str: A JSON-formatted string with the specialty name and its corresponding estimated wait time.
              If the specialty is not recognized, the wait time will be marked as 'Unavailable'.
    """
    wait_times = {
        "Cardiology": "2 weeks",
        "Dermatology": "3 weeks",
        "Neurology": "1 month",
        "Orthopedics": "10 days",
        "General Medicine": "3 days"
    }
    data = {
        "specialty": specialty,
        "estimated_wait_time": wait_times.get(specialty, "Unavailable")
    }
    return json.dumps(data)

# Run the server
mcp.run()

Key Components:

1. FastMCP("HealthAssistant"): Initializes the server with a label. This acts as the tool catalog name.
2. @mcp.tool(): This decorator registers each function as a tool. The server automatically exposes these tools for discovery.
3. Return values: Each tool returns structured data in JSON format. This helps the client parse responses cleanly.
4. mcp.run(): Starts the server and makes the tools available for discovery. By default it uses the stdio transport, which is what lets our client launch the server as a subprocess in the next step.

Now that our tools are live and exposed through the MCP server, let’s build the MCP client that discovers these tools and hands them off to the Azure AI Agent.

Build the MCP Client and Connect to Azure AI Agent:

The MCP client is responsible for discovering tools from the MCP server and registering them with the Azure AI Agent. Once connected, the agent can call these tools as if they were native functions, all without hardcoding anything.

Create a client script and use the snippets below to build the full client script (client.py):

import os, time
import asyncio
import json
from dotenv import load_dotenv
from contextlib import AsyncExitStack
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from azure.ai.agents import AgentsClient
from azure.ai.agents.models import FunctionTool, MessageRole, ListSortOrder
from azure.identity import DefaultAzureCredential

os.system('cls' if os.name=='nt' else 'clear')

load_dotenv()
project_endpoint = os.getenv("PROJECT_ENDPOINT")
model_deployment = os.getenv("MODEL_DEPLOYMENT_NAME")

Key Components (Part 1: Setup)

1. .env variables: These hold your Azure AI project endpoint and model deployment name.
2. AsyncExitStack: Manages async connections, useful for clean startup and teardown.
3. stdio_client(): This starts the MCP server as a subprocess and connects the client to it.

Now we define the function that connects to the MCP server:

async def connect_to_server(exit_stack: AsyncExitStack):
    server_params = StdioServerParameters(
        command="python",
        args=["server.py"],
        env=None
    )

    stdio_transport = await exit_stack.enter_async_context(stdio_client(server_params))
    stdio, write = stdio_transport

    session = await exit_stack.enter_async_context(ClientSession(stdio, write))
    await session.initialize()

    response = await session.list_tools()
    tools = response.tools
    print("\nConnected to server with tools:", [tool.name for tool in tools]) 
    return session

Key Components (Part 2: Tool Discovery)

1. list_tools(): Queries the MCP server and returns available tools.
2. ClientSession: Wraps communication with the server and is used later to invoke tools.

Next, we’ll create a chat loop and connect it to the Azure AI Agent:

async def chat_loop(session):
    agents_client = AgentsClient(
        endpoint=project_endpoint,
        credential=DefaultAzureCredential(
            exclude_environment_credential=True,
            exclude_managed_identity_credential=True
        )
    )

    response = await session.list_tools()
    tools = response.tools

    def make_tool_func(tool_name):
        async def tool_func(**kwargs):
            result = await session.call_tool(tool_name, kwargs)
            return result
        tool_func.__name__ = tool_name
        return tool_func

    functions_dict = {tool.name: make_tool_func(tool.name) for tool in tools}
    mcp_function_tool = FunctionTool(functions=list(functions_dict.values()))

Key Components (Part 3: Wrapping Tools)

1. make_tool_func(): Dynamically wraps each discovered tool as an async Python function.
2. FunctionTool: Binds those functions to the agent using Azure’s SDK.

With the tools discovered and wrapped, the final step is to bring your Azure AI Agent to life, connecting it to the tools and handling real-time user interactions.

This part handles:

1. Agent creation
2. Chat loop
3. Tool call execution
4. Printing the agent’s response

Here’s the final section of (client.py):

    agent = agents_client.create_agent(
        model=model_deployment,
        name="medical-agent",
        instructions="""
            You are a virtual healthcare assistant. Follow these guidelines:
            - If a user provides a patient ID, retrieve their upcoming appointments or medical history using the tools.
            - Estimate wait time if a user asks about seeing a specialist using specific tools.
            - Keep responses friendly, informative, and medically accurate.
        """,
        tools=mcp_function_tool.definitions
    )

    agents_client.enable_auto_function_calls(tools=mcp_function_tool)
    thread = agents_client.threads.create()

    while True:
        user_input = input("Enter a prompt for the medical agent (type 'quit' to exit):\nUSER: ").strip()
        if user_input.lower() == "quit":
            print("Exiting chat.")
            break

        message = agents_client.messages.create(
            thread_id=thread.id,
            role=MessageRole.USER,
            content=user_input,
        )

        run = agents_client.runs.create(thread_id=thread.id, agent_id=agent.id)

        while run.status in ["queued", "in_progress", "requires_action"]:
            time.sleep(1)
            run = agents_client.runs.get(thread_id=thread.id, run_id=run.id)

            tool_outputs = []

            if run.status == "requires_action":
                tool_calls = run.required_action.submit_tool_outputs.tool_calls

                for tool_call in tool_calls:
                    function_name = tool_call.function.name
                    args_json = tool_call.function.arguments
                    kwargs = json.loads(args_json)
                    required_function = functions_dict.get(function_name)
                    output = await required_function(**kwargs)

                    tool_outputs.append({
                        "tool_call_id": tool_call.id,
                        "output": output.content[0].text,
                    })

                agents_client.runs.submit_tool_outputs(
                    thread_id=thread.id,
                    run_id=run.id,
                    tool_outputs=tool_outputs
                )

        if run.status == "failed":
            print(f"Run failed: {run.last_error}")

        messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING)
        for message in messages:
            if message.text_messages:
                last_msg = message.text_messages[-1]
                print(f"{message.role}:\n{last_msg.text.value}\n")

    print("Cleaning up agents:")
    agents_client.delete_agent(agent.id)
    print("Deleted medical-agent agent.")

Key Highlights:

1. create_agent(): Instantiates your agent and links it to the discovered tools.
2. enable_auto_function_calls(): Lets the agent decide when to use a tool based on user input.
3. Tool call handling: When a tool is needed, the client maps the request to the matching async wrapper function, runs it, and submits the result back to the agent.
4. Thread and run monitoring: This keeps track of each user message and agent response loop.
5. Cleanup: The agent is deleted at the end to avoid clutter or quota issues.
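
One piece the snippets above don’t show is the entry point that wires connect_to_server and chat_loop together. Here’s a minimal sketch of how the end of client.py could look, using only the names already defined above; adjust it if you structure your script differently.

async def main():
    exit_stack = AsyncExitStack()
    try:
        session = await connect_to_server(exit_stack)
        await chat_loop(session)
    finally:
        # Close the MCP session and stop the server subprocess cleanly
        await exit_stack.aclose()

if __name__ == "__main__":
    asyncio.run(main())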

Test the Agent and See It in Action:

With everything set up, it’s time to run the client script, which will also spin up the MCP server automatically and connect to your Azure AI Agent.

Step 1: Run the Client Script
In your terminal (with the virtual environment activated), run:

python client.py

The script will:

1. Launch the MCP server as a subprocess
2. Discover available tools
3. Register them with your agent
4. Start a real-time chat loop

Note: In this setup, the client starts the server as a subprocess. This is suitable for local testing or development.
In a production environment, the MCP server and client should run independently, typically as separate services or containers.
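
For reference, here’s a minimal sketch of what a standalone server could look like, assuming your installed mcp SDK version supports the SSE transport (check the SDK documentation for the exact options and default host/port settings):

# standalone_server.py - a minimal sketch; assumes your mcp SDK version supports the SSE transport
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("HealthAssistant")

# ... register the same @mcp.tool() functions as in server.py ...

if __name__ == "__main__":
    # Serve over HTTP/SSE instead of stdio so the client can connect
    # to it as a separate, independently deployed service.
    mcp.run(transport="sse")

On the client side, you would then connect with the SDK’s SSE client (mcp.client.sse.sse_client) pointed at the server’s URL instead of launching a subprocess with stdio_client; the tool discovery code stays the same.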

Step 2: Interact with Your Agent

Get all the Medical History for the Patient "Amey Good" With ID "453T6"

Step 3: Output It Generates:


Step 4: Cleanup (Optional)
The client script deletes the agent automatically at the end of the session, so you won’t need to manage it manually. Just exit the script by typing:

quit

It will close the session and delete the agent automatically.

Conclusion:

We’ve just built an Azure AI Agent that discovers and uses tools dynamically through Model Context Protocol (MCP).

This setup keeps our agent logic clean, our tools modular, and our updates smooth: no hardcoding, no redeployments.

While we ran the server and client together for testing, in production they should run independently.

With this foundation, we’re ready to scale dynamic agents across healthcare, support, and beyond.

If you have any questions, you can reach out to our SharePoint Consulting team here.

Test Your AI Voice Assistant with Realistic TTS Using Piper

Introduction
In today's landscape, where AI, and particularly voice integration, is essential for delivering exceptional customer experiences, testing voice interfaces introduces unique challenges for QA teams.
Unlike traditional UI or API testing, voice-based applications require validation across various tones, languages, and speaking styles, making it difficult to scale test coverage efficiently.

This is where Piper, a lightweight and open-source text-to-speech (TTS) model, proves invaluable. It empowers testers to generate realistic voice inputs in multiple languages, accents, and frequencies, enabling comprehensive, automated testing of voice-enabled systems.

What is Piper?
  • Piper is a high-quality, lightweight TTS engine. It runs locally, is fast, and supports multiple voices and languages. Best of all, it’s open-source (MIT licensed), making it suitable for development, testing, and deployment in privacy-sensitive environments.
  • Piper supports over 35 languages and various voice variants, ideal for testing diverse speech scenarios (accents, languages, frequencies). The default audio output format for Piper is .wav (Waveform Audio File).

Key benefits:

  • Fully local, no internet needed
  • Fast inference and output
  • High-quality speech output
  • MIT licensed (safe for commercial use)  


Installing Piper:

  • Navigate to https://github.com/rhasspy/piper/releases
  • Download piper_windows_amd64.zip (or the build that matches your system's configuration)


Generating Audio for Voice Assistant Testing:

  • Open a terminal (e.g., Git Bash)
  • Change directory to the Piper folder
  • Run the following command to generate a sample audio file:

echo "Hello, this is a test using Piper TTS." | .\piper.exe -m en_US-kathleen-low.onnx -c en_en_US_kathleen_low_en_US-kathleen-low.onnx.json -f test1.wav
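
If you need many test utterances (for example, one per test case), you can also drive Piper from Python instead of typing the command by hand. Here’s a minimal sketch, assuming piper.exe and the model/config files from the command above sit in the working directory; adjust the paths for your setup.

# generate_audio.py - a minimal sketch for batch-generating test utterances with Piper
import subprocess

def synthesize(text: str, out_file: str) -> None:
    # Piper reads the text from stdin and writes the .wav file given by -f
    subprocess.run(
        [
            "piper.exe",
            "-m", "en_US-kathleen-low.onnx",
            "-c", "en_en_US_kathleen_low_en_US-kathleen-low.onnx.json",
            "-f", out_file,
        ],
        input=text.encode("utf-8"),
        check=True,
    )

if __name__ == "__main__":
    synthesize("Add product to cart", "product_A_final.wav")
    synthesize("Hello, this is a test using Piper TTS.", "test1.wav")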

Feeding Audio into the Assistant (Example with WebSocket)


import wave

# "ws" is assumed to be an already-open WebSocket connection to the voice assistant
with wave.open("product_A_final.wav", "rb") as wf:
    # Stream the audio in small chunks, then signal end-of-stream
    while chunk := wf.readframes(3200):
        await ws.send(chunk)
await ws.send("EOS")

Validating the Assistant’s Response:

  • Was the transcript accurate?
  • Did it invoke the correct tool/function?
  • Was the backend action completed?

Define each test case declaratively, like the JSON below, and then validate programmatically via API assertions or database checks.

{
  "audio_file": "product_A_final.wav",
  "expected_transcript": "Add product to cart",
  "expected_function": "add_product",
  "expected_arguments": {"product": "product A"}
}
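
As an illustration, here’s a minimal pytest-style sketch of how such a test case could be checked. The send_audio_and_get_result helper and the fields it returns are hypothetical placeholders for however your assistant exposes its transcript and tool-call results; replace them with your own API assertions or database checks.

# test_voice_assistant.py - a minimal sketch; the helper below is a hypothetical placeholder
import json

# Hypothetical helper: streams the .wav file to the assistant (e.g. over the WebSocket
# shown earlier) and returns its transcript plus the tool call it triggered.
from assistant_client import send_audio_and_get_result

def load_case(path: str) -> dict:
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

def test_add_product_to_cart():
    case = load_case("cases/add_product_to_cart.json")

    result = send_audio_and_get_result(case["audio_file"])

    # Transcript check (allowing minor casing differences)
    assert case["expected_transcript"].lower() in result["transcript"].lower()
    # Did the assistant invoke the right tool with the right arguments?
    assert result["function"] == case["expected_function"]
    assert result["arguments"] == case["expected_arguments"]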

Real-World Use Cases where Piper can help:

  • Automated regression testing for voice assistants
  • Offline voice testing for edge devices
  • Multilingual testing using various Piper voices
  • CI/CD pipeline integration (e.g., GitHub Actions) 

Conclusion:

Piper offers a powerful and lightweight solution to simulate voice input in automated test scenarios. Combined with tools like pytest, FastAPI, and WebSocket clients, you can create robust test suites for your AI assistant workflows without relying on cloud-based TTS providers.

If you’re building or testing a voice assistant with Azure Voice Live, Dialogflow, or your own NLP stack, Piper is a tool worth integrating into your QA strategy.

If you have any questions, you can reach out to our SharePoint Consulting team here.

July 3, 2025

Flexible Sections in SharePoint: Customize Modern Pages with Responsive Layouts

Introduction

Modern SharePoint has transformed how organizations build intranet pages, team sites, and communication platforms. One standout feature that significantly boosts design flexibility and layout control is Flexible Sections. Introduced as an enhancement to the modern page editing experience, flexible sections allow for more customized, responsive, and user-centric page designs.


What Are Flexible Sections in SharePoint?

Flexible sections are an improvement over the traditional one-, two-, or three-column layouts. They enable page authors to:

  • Mix different column widths 
  • Nest web parts within columns more creatively 
  • Use vertical section alignment 
  • Adapt content for multiple screen sizes more effectively 

Example Layouts: 

  • 70/30 or 30/70 (instead of fixed 50/50) 
  • Left-heavy or right-heavy content 
  • One large section with two smaller columns beneath it 

 

Key Features

1. Custom Column Widths

Unlike the standard section layouts that lock you into preset column sizes, flexible sections allow for more granular control. Want a 66/34 layout? You can do that.

2. Improved Responsiveness

Flexible sections adapt more cleanly on mobile and tablet views, ensuring your content remains readable and well-structured across devices. 

3. Better Design Flow

You can now match branding or content flow requirements more easily. Want to highlight a large image on the left and a text box with a button on the right? Easy.

4. Integration with Existing Web Parts

Flexible sections work seamlessly with modern SharePoint web parts—like Quick Links, Hero, Image, or News—offering more freedom in arranging them. 

 

How to Add a Flexible Section

  1. Go to the SharePoint page you want to edit. 
  2. Click Edit at the top right corner. 
  3. Hover over the area where you want to add a section, then click the + icon. 
  4. Choose Flexible from the section layout options. 

  5. Add your desired web parts and resize them based on your layout needs (e.g., 2, 3, or 5 columns). 

  6. Adjust the height of the flexible section manually by dragging the resize handle located at the bottom-right corner of the section. This helps you control the vertical spacing to better fit your content. 

  7. Once done, click Save or Republish the page. 

Tip: Combine flexible sections with full-width sections to create visually impactful pages that guide users' attention effectively. 

When to Use Flexible Sections

  • Landing Pages: Great for homepage layouts where you need hero banners, quick links, and announcements in various arrangements. 
  • Team Sites: Align team tools and updates in a clean, user-friendly way. 
  • Internal Communications: Combine visuals and text to improve engagement. 

 

Limitations to Consider

While flexible sections offer powerful capabilities, there are a few things to keep in mind: 

  • Not supported in classic pages – Only available in modern SharePoint pages. 
  • Too many custom sections can clutter – Use them purposefully; don’t overload the page with too many designs. 
  • Some third-party web parts might not fully support flexible layouts. 

 

Final Thoughts

  • Flexible sections in SharePoint are a game changer for organizations seeking more control over their page design without needing to code. By using them smartly, you can build beautiful, engaging, and functional pages that users will actually enjoy navigating. 
  • Start experimenting with flexible layouts today to see how they can elevate your SharePoint experience!  


If you have any questions, you can reach out to our SharePoint Consulting team here.