July 24, 2025

Modern SharePoint - Creating Site Pages with Section Templates and Real-Time Previews

Introduction:

SharePoint has long been a cornerstone for businesses looking to manage content and collaborate effectively. With its user-friendly interface and out-of-the-box features, SharePoint enables users to create dynamic, engaging web pages with minimal technical knowledge. The Modern SharePoint experience has taken this a step further with new features like Section Templates, which allow you to create visually consistent pages without needing to start from scratch.

These templates not only save time but also ensure your pages are visually appealing, structured, and aligned with your brand's identity. SharePoint’s recent updates bring even more flexibility and real-time feedback to the content creation process, empowering users to build professional, polished pages.

In this blog, we’ll dive deep into how you can leverage Section Templates, customize section properties, and make the most of real-time previews to build dynamic, publication-ready pages - all without having to publish first just to see how they will look.


Step-by-Step Guide to Using SharePoint Modern Page Templates and Section Templates:

1. Navigating to the Page Editor:

To get started, head to the Site Contents section of your SharePoint site. From here:

  • Click on ‘New’ and select ‘Page’. This action will open the Page Editor. 


2. Accessing Section Templates:

  • Once you’re inside the page editor, on the right-side pane, you’ll find the ‘Section Templates’ option. Click on ‘See All Section Templates’ at the bottom to view a comprehensive list of pre-designed templates.

  • These templates are organized based on the content type and layout, making it easy to choose one that fits your needs.

3. Selecting and Customizing Your Template:

SharePoint gives you the ability to browse and select from a wide variety of templates. Once you pick one that fits your page’s needs, you can start customizing it to fit your brand’s style. You can:

  • Modify text and images
  • Change the layout and section arrangement

This allows you to design pages without needing to start from scratch, which is especially useful for non-developers or anyone looking to save time.

4. Customizing Section Properties:

Once you’ve chosen your template, you can further customize individual sections to suit your specific needs. SharePoint allows you to modify several properties for each section, including:

  • Collapsible Sections: You can make sections collapsible to save space and control how much content is visible. This is especially useful for pages with a lot of information.
  • Heading Levels: Customize the heading structure for better hierarchy and readability.
  • Divider Lines: Use divider lines to create separation between sections for a cleaner, more organized look.
  • Alignment Options: Control the alignment of content within each section (e.g., left or right alignment for expand/collapse icons).
  • Mobile and Email Reflow: Adjust how content reflows on mobile devices or in email formats (top to bottom, left to right). This ensures a seamless experience across all devices.

By customizing these properties, you can fine-tune the page layout and user experience, all while maintaining flexibility for different content types.

5. Previewing the Changes:

One of the best features of SharePoint’s page editor is the Preview function. You can see how your page will look across devices (desktop, tablet, mobile) in real time. This means you can:

  • Make adjustments without having to publish the page first.
  • Fine-tune the layout to make sure it’s perfect across different screen sizes.

This feature eliminates the guesswork that typically comes with designing web pages, ensuring your page looks great before it’s live.

6. Saving and Publishing Your Page:

Once you’ve customized the template, previewed your changes, and are happy with the result, it’s time to save and publish your page. SharePoint makes this process as easy as clicking a button, allowing you to get your content online quickly.


Benefits of Using Modern Section Templates and Real-Time Previews in SharePoint:

1. Time Efficiency:

  • With pre-designed templates, the process of creating web pages becomes much faster. Instead of starting from scratch, you can simply select a template that fits your needs and modify it to suit your content.

2. Flexibility and Customization:

  • The ability to customize section properties—from text and images to layouts and headings—ensures that your page aligns with your organization’s branding while allowing for personal adjustments.

3. Real-Time Preview:

  • The Preview feature allows you to instantly see how your page will look across devices. This ensures that your page is mobile-responsive, aesthetically pleasing, and ready for prime-time publishing.

4. Empowering Non-Developers:

  • Thanks to SharePoint’s drag-and-drop interface, even users with little to no development experience can quickly create professional pages. The ease of use is one of the platform's strongest features, making it accessible for everyone in your organization to contribute to content creation.


Advantages of Section Templates:

  • Consistency: Ensures uniform layout structure within a single page for a professional look.
  • Time-saving: Pre-designed sections make it quick and easy to build structured content without starting from scratch.
  • Flexibility: Offers customization options for text, images, and content layout within the selected section.
  • Mobile Responsiveness: Automatically adjusts to provide an optimal viewing experience across devices. 
  • User-friendly: No coding required, making it easy for non-developers to create visually appealing pages. 


Disadvantages to Consider:

  • Limited Scope: Section templates apply only to a single page, with no cross-page functionality for consistency across multiple pages. 
  • Customization Constraints: While flexible, templates may not allow advanced customizations for complex page layouts.
  • Dependency on Pre-Designed Layouts: Overreliance on templates could lead to uniformity across pages, reducing uniqueness.
  • Performance Issues: Certain complex sections might introduce performance bottlenecks if not optimized properly.

 

Conclusion:

The introduction of Modern Section Templates in SharePoint has dramatically improved the content creation process. These tools allow users to quickly build professional, brand-consistent pages without needing advanced design skills or coding experience. Whether you’re updating a page with a new event, creating a team introduction, or posting an internal status update, these features make it easier than ever to create pages that look great and are optimized for mobile devices.

With real-time previews, customizable section properties, and a growing library of templates, SharePoint’s new features bring efficiency and flexibility to web content management. While there are limitations, such as limited customization options and the potential for design uniformity, the benefits - time savings, ease of use, and consistency - are undeniable. These tools are a game-changer for developers and non-developers alike.

So, whether you’re building your first page or refining an existing one, SharePoint’s Section Templates are here to help you create stunning web pages that are ready to publish at the click of a button.


If you have any questions, you can reach out to our SharePoint Consulting team here.

React Best Practices: A Comprehensive Guide

In this guide, you'll learn the top React best practices that can help you create clean, scalable, and high-performing apps. Whether you're just getting started or looking to improve your skills, this comprehensive guide covers everything from component structure and state management to performance optimization and code organization.

Component Design and Architecture:

Keep Components Small and Focused
The Single Responsibility Principle applies strongly to React components. Each component should have one clear purpose and do it well. Large components become difficult to test, debug, and maintain.
// ❌ Bad: Large component doing too much
function UserDashboard({ userId }) {
  // 200+ lines of code handling user data, notifications, settings...
}

// ✅ Good: Focused components
function UserDashboard({ userId }) {
  return (
    <div>
      <UserProfile userId={userId} />
      <NotificationPanel userId={userId} />
      <UserSettings userId={userId} />
    </div>
  );
}
Use Composition Over Inheritance
React favors composition patterns. Instead of complex inheritance hierarchies, compose components together to build complex UIs.
// ✅ Good: Composition pattern
function Card({ children, title }) {
  return (
    <div className="card">
      <h2>{title}</h2>
      {children}
    </div>
  );
}

function UserCard({ user }) {
  return (
    <Card title="User Profile">
      <img src={user.avatar} alt={user.name} />
      <p>{user.bio}</p>
    </Card>
  );
}

State Management:

Use the Right Tool for State
Not all state needs to be in a global store. Choose the appropriate state management solution based on your needs:
  • Local state (useState): Component-specific data that doesn't need sharing
  • Lifted state: Shared between a few related components
  • Context: App-wide state like themes or user authentication
  • External libraries: Complex state logic (Redux, Zustand, MobX)
// ✅ Good: Local state for component-specific data
function SearchInput({ onSearch }) {
  const [query, setQuery] = useState('');

  const handleSubmit = (e) => {
    e.preventDefault();
    onSearch(query);
  };

  return (
    <form onSubmit={handleSubmit}>
      <input 
        value={query}
        onChange={(e) => setQuery(e.target.value)}
        placeholder="Search..."
      />
    </form>
  );
}
Minimize State Dependencies
Keep your state flat and avoid deeply nested objects. This makes updates easier and prevents unnecessary re-renders.
// ❌ Bad: Nested state
const [user, setUser] = useState({
  profile: {
    personal: {
      name: '',
      email: ''
    }
  }
});

// ✅ Good: Flat state
const [userName, setUserName] = useState('');
const [userEmail, setUserEmail] = useState('');

Performance Optimization:

Memoization Strategies
Use React's built-in memoization tools wisely, but don't overuse them. Premature optimization can make code harder to read without significant benefits.
// ✅ Good: Memoize expensive calculations
const ExpensiveComponent = memo(function ExpensiveComponent({ data }) {
  const processedData = useMemo(() => {
    return data.map(item => complexCalculation(item));
  }, [data]);

  return <div>{processedData.map(renderItem)}</div>;
});

// ✅ Good: Memoize callback functions passed to children
function ParentComponent({ items }) {
  const handleItemClick = useCallback((id) => {
    // Handle click logic
  }, []);

  return (
    <div>
      {items.map(item => 
        <ChildComponent 
          key={item.id} 
          item={item} 
          onClick={handleItemClick} 
        />
      )}
    </div>
  );
}
Avoid Inline Objects and Functions
Creating objects or functions inline during render gives child components a new prop reference on every render, which defeats memoization and can trigger unnecessary re-renders.
// ❌ Bad: Inline objects cause re-renders
function MyComponent() {
  return <ChildComponent style={{ margin: 10 }} />;
}

// ✅ Good: Define objects outside render or use useMemo
const styles = { margin: 10 };

function MyComponent() {
  return <ChildComponent style={styles} />;
}

Custom Hooks:

Extract Reusable Logic
Custom hooks are perfect for sharing stateful logic between components. They keep your components clean and promote code reuse.
// ✅ Good: Custom hook for API calls
function useApi(url) {
  const [data, setData] = useState(null);
  const [loading, setLoading] = useState(true);
  const [error, setError] = useState(null);

  useEffect(() => {
    const fetchData = async () => {
      try {
        setLoading(true);
        const response = await fetch(url);
        const result = await response.json();
        setData(result);
      } catch (err) {
        setError(err);
      } finally {
        setLoading(false);
      }
    };

    fetchData();
  }, [url]);

  return { data, loading, error };
}

// Usage in component
function UserProfile({ userId }) {
  const { data: user, loading, error } = useApi(`/api/users/${userId}`);

  if (loading) return <Spinner />;
  if (error) return <ErrorMessage error={error} />;

  return <div>{user.name}</div>;
}
Follow Hook Rules
Always follow the Rules of Hooks: only call hooks at the top level (never inside loops, conditions, or nested functions) and only from React function components or custom hooks.
// ❌ Bad: Conditional hook usage
function MyComponent({ shouldFetch }) {
  if (shouldFetch) {
    const data = useFetch('/api/data'); // Violates rules of hooks
  }
}

// ✅ Good: Hooks at top level
function MyComponent({ shouldFetch }) {
  const data = useFetch(shouldFetch ? '/api/data' : null);
}

Error Handling:

Implement Error Boundaries
Error boundaries catch JavaScript errors in component trees and display fallback UIs instead of crashing the entire application.
class ErrorBoundary extends Component {
  constructor(props) {
    super(props);
    this.state = { hasError: false };
  }

  static getDerivedStateFromError(error) {
    return { hasError: true };
  }

  componentDidCatch(error, errorInfo) {
    console.error('Error caught by boundary:', error, errorInfo);
  }

  render() {
    if (this.state.hasError) {
      return <h1>Something went wrong.</h1>;
    }

    return this.props.children;
  }
}

// Usage
function App() {
  return (
    <ErrorBoundary>
      <UserDashboard />
    </ErrorBoundary>
  );
}
Handle Async Errors Gracefully
Always handle potential errors in async operations and provide meaningful feedback to users.
function DataComponent() {
  const [data, setData] = useState(null);
  const [error, setError] = useState(null);

  useEffect(() => {
    fetchData()
      .then(setData)
      .catch(err => {
        setError('Failed to load data. Please try again.');
        console.error(err);
      });
  }, []);

  if (error) {
    return <div className="error">{error}</div>;
  }

  return data ? <DataDisplay data={data} /> : <Loading />;
}

Code Organization and Structure:

Use Consistent File Structure
Organize your files in a logical, consistent manner. Group related files together and use clear naming conventions.
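A common feature-based layout, sketched here purely as an illustration (folder and file names are placeholders rather than a prescribed standard):

src/
  components/
    Button/
      Button.jsx
      Button.test.jsx
      Button.module.css
      index.js
  hooks/
    useApi.js
    useLocalStorage.js
  pages/
    Dashboard/
      Dashboard.jsx
      index.js
  utils/
    formatDate.js
  App.jsx
  index.js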

Use TypeScript for Better Development Experience
TypeScript provides an excellent developer experience with better autocomplete and refactoring support, and it catches errors at compile time.
interface User {
  id: string;
  name: string;
  email: string;
}

interface UserProfileProps {
  user: User;
  onEdit: (user: User) => void;
}

function UserProfile({ user, onEdit }: UserProfileProps) {
  return (
    <div>
      <h2>{user.name}</h2>
      <button onClick={() => onEdit(user)}>
        Edit Profile
      </button>
    </div>
  );
}

Accessibility:

Make Your Apps Inclusive
Always consider accessibility when building React components. Use semantic HTML, proper ARIA attributes, and ensure keyboard navigation works.
function Modal({ isOpen, onClose, title, children }) {
  useEffect(() => {
    if (isOpen) {
      document.body.style.overflow = 'hidden';
    }
    return () => {
      document.body.style.overflow = 'unset';
    };
  }, [isOpen]);

  if (!isOpen) return null;

  return (
    <div 
      className="modal-overlay" 
      onClick={onClose}
      role="dialog"
      aria-modal="true"
      aria-labelledby="modal-title"
    >
      <div className="modal-content" onClick={e => e.stopPropagation()}>
        <h2 id="modal-title">{title}</h2>
        <button 
          onClick={onClose}
          aria-label="Close modal"
          className="close-button"
        >
          ×
        </button>
        {children}
      </div>
    </div>
  );
}

Key Takeaways:

Following these React best practices will help you build applications that are:
  • Maintainable: Clean, organized code that's easy to update and extend
  • Performant: Optimized components that render efficiently
  • Accessible: Inclusive applications that work for all users
  • Scalable: Architecture that grows well with your application's needs
Remember, best practices evolve with the React ecosystem. Stay updated with the latest React documentation, follow the community discussions, and always consider the specific needs of your project when applying these guidelines. The goal is to write code that not only works today but remains maintainable and performant as your application grows.

If you have any questions, you can reach out to our SharePoint Consulting team here.

Defect Prediction and Risk Analysis with AI: Smarter Testing Decisions

AI in QA: Finding Bugs Before They Happen

AI can help testers by predicting where bugs are likely to appear even before they actually show up. This means testers can focus on risky parts of the code first and test them more carefully.

Instead of waiting for bugs to be found later, predictive QA helps teams catch problems early, test smarter, and avoid issues in the final product - especially when the software is large and changing quickly.

What Is Defect Prediction?

Defect prediction uses past project data and AI to find out which parts of the software are most likely to have bugs in the future. By using machine learning and data analysis, it helps the QA team focus their testing on high-risk areas. 

This way, they can catch problems early and use their time and resources more effectively, instead of waiting for bugs to show up later.

How AI Helps Predict Defects


1. Historical Defect Mining and Code Changes

AI studies old bug reports (like from Jira or Bugzilla), code updates (like from Git), and code review comments to learn what kind of changes usually cause bugs.

Example: 

If one part of the code was changed a lot and had many bugs in the past, AI will mark it as “high-risk.” So, the team will test that part more in the next sprint.
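
A minimal sketch of what that prediction step can look like, assuming scikit-learn and made-up per-file history features (this is illustrative, not a production model):

# Hypothetical sketch: rank files by predicted defect risk from simple
# per-file history features (synthetic data).
from sklearn.ensemble import RandomForestClassifier

# Features per file: [commits in last 90 days, past bug fixes, lines changed]
X_train = [
    [25, 9, 1200],   # churn-heavy file that has caused many bugs
    [3, 0, 80],      # stable file
    [14, 4, 560],
    [1, 0, 20],
]
y_train = [1, 0, 1, 0]  # 1 = a defect was reported in the following release

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score files in the current release; higher probability = test that area first
candidates = {"payment/checkout.py": [18, 6, 900], "utils/format.py": [2, 0, 40]}
for path, features in candidates.items():
    risk = model.predict_proba([features])[0][1]
    print(f"{path}: defect risk {risk:.2f}")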

2. Change Impact Analysis

AI checks how risky a code change is by looking at things like:

  • How big or complex the change is
  • Who made the change (e.g., experienced or new developer)
  • How often that part of the code changes
  • How connected that part is to other parts of the system

Example:

If a junior developer updates an important part of the payment system, AI will mark it as high-risk and suggest testing that area again to make sure nothing breaks.
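
Stripped down to its essentials, that kind of change-impact scoring can be a weighted combination of the factors listed above; the weights and thresholds below are purely illustrative:

# Hypothetical change-impact score built from the factors described above.
def change_risk_score(lines_changed, author_commits_to_repo,
                      file_change_frequency, dependent_modules):
    size_risk = min(lines_changed / 500, 1.0)           # larger changes are riskier
    experience_risk = 1.0 if author_commits_to_repo < 10 else 0.3
    churn_risk = min(file_change_frequency / 20, 1.0)   # frequently-changed code
    coupling_risk = min(dependent_modules / 15, 1.0)    # code many modules depend on
    return round(0.3 * size_risk + 0.25 * experience_risk
                 + 0.2 * churn_risk + 0.25 * coupling_risk, 2)

# A newer developer touching a widely-used payment module scores high:
print(change_risk_score(lines_changed=320, author_commits_to_repo=4,
                        file_change_frequency=18, dependent_modules=12))  # 0.82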

3. Risk-Based Test Prioritization

AI helps choose which tests to run first by focusing on the parts of the app that are most likely to have bugs.

Example:

Instead of running all 2,000 test cases every night, AI picks the 300 most important ones that are more likely to find new bugs in the latest version.
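
In practice the selection step can be as simple as ranking tests by a predicted failure probability and running only the top slice; here is a hedged sketch of that idea:

# Hypothetical sketch: run only the N tests most likely to fail on this build.
def prioritize_tests(test_scores, budget=300):
    """test_scores maps test name -> predicted probability of failure."""
    ranked = sorted(test_scores.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:budget]]

nightly_suite = {
    "test_checkout_total": 0.81,
    "test_refund_flow": 0.64,
    "test_login": 0.05,
    "test_help_page": 0.01,
}
print(prioritize_tests(nightly_suite, budget=2))
# ['test_checkout_total', 'test_refund_flow']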

4. Real-Time Risk Warnings in CI/CD Pipelines

When AI is added to the CI/CD process, it can check code changes in real time and give quick feedback on risk levels.

Example:

  • A developer sends a pull request (PR).
  • AI scans it and sees that it changes old, sensitive code.
  • It marks the PR as “high-risk” and automatically adds extra tests to make sure nothing breaks.
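
The gating logic that consumes such a prediction can stay very small; this illustrative sketch assumes a risk score is already available (the threshold and labels are made up):

# Hypothetical CI step: label a pull request by predicted risk and decide
# whether to trigger the extended regression suite.
RISK_THRESHOLD = 0.7

def gate_pull_request(predicted_risk):
    if predicted_risk >= RISK_THRESHOLD:
        return {"label": "high-risk", "run_extended_tests": True}
    return {"label": "low-risk", "run_extended_tests": False}

print(gate_pull_request(0.82))  # {'label': 'high-risk', 'run_extended_tests': True}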


Real-Life Example

A global financial company used AI to look at 3 years of bug and code change data. The AI found that just 15% of the code was causing almost 70% of the bugs. So, the QA team focused their testing on those parts.

As a Result:

  • Testing became 38% more efficient
  • Bugs found in UAT were reduced by 50%
  • They reduced the number of regression tests without losing quality


Tools That Support AI-Based Defect Prediction

Tool/Platform                 | Key Feature
------------------------------|------------------------------------------------------------------
Microsoft Azure DevOps + ML   | Integrates ML models to predict defect-prone areas using pipelines
CodeScene                     | Behavioral code analysis for hotspot detection
Seerene                       | Visual code and defect analytics for enterprise codebases
Bugasura + ML                 | AI-based insights on issue trends, velocity, and risk areas
SonarQube + AI plugins        | Predictive metrics for technical debt and defect probability


Benefits of AI-Driven Risk Analysis

  • Smarter Testing – QA teams can focus on the parts of the software that are most likely to have bugs.
  • Saves Time – No need to run all tests every time. Just test the areas with higher risk.
  • Real-Time Feedback – Get instant risk alerts during Agile sprints or in CI/CD pipelines.
  • Lower Costs – Finding bugs early means less rework and lower costs later.


Challenges to Consider
  • Needs quality historical data (bugs, code commits, test runs) for training 
  • Cannot replace exploratory or critical business logic testing 
  • Explainability challenge: AI predictions can be opaque unless backed by transparency tools


Conclusion

AI-powered defect prediction adds smart, data-based decision-making to software testing. Instead of guessing, QA teams can focus on real risks, test faster, and reduce costs.
As companies move toward Agile and continuous delivery, using AI to guide testing will play a big role in building better, more reliable software.

If you have any questions, you can reach out to our SharePoint Consulting team here.

July 10, 2025

Azure AI Agents and Model Context Protocol (MCP) with Dynamic Tool Discovery: Step-by-Step Guide

Introduction:

In the era of AI agents, it’s common to see agents interacting with a wide range of external tools, APIs, and services. Whether it’s fetching data, triggering workflows, or connecting to domain-specific systems, tools are how agents get things done.

But as the number of tools grows, so does the complexity of managing them. Hardcoding every tool into the agent logic isn’t scalable. Each update means changing code, redeploying agents, and tightly coupling functionality that should stay flexible.

That’s where dynamic tool discovery comes in, and it’s exactly what the Model Context Protocol (MCP) is built for.
In this guide, we’ll learn how to integrate MCP with an Azure AI Agent, enabling it to discover tools at runtime. No redeployment. No manual updates. Just a clean, modular setup where tools live independently and are made available to the agent on demand.

We’ll walk through building a medical assistant agent that can:

1. Retrieve upcoming patient appointments
2. Fetch recent medical history
3. Estimate wait times for specialists

All powered by tools registered on an MCP server and dynamically discovered by the agent.

Before we start building, let’s take a moment to understand what MCP is and why dynamic tool discovery matters in the context of Azure AI Agents.

Understanding MCP and Tool Discovery:

What is MCP?

Model Context Protocol (MCP) is a protocol that enables AI agents to discover and interact with external tools at runtime. Instead of defining tools inside the agent, MCP exposes them through a centralized server. Each tool is defined with clear inputs, outputs, and documentation and can be registered or updated without touching the agent code.

You can think of MCP as a live tool catalog that your agent connects to whenever it needs functionality. This means tools can be managed independently of the agent itself.

Now that we know where the tools live, let’s see how agents actually discover and use them.

What is Dynamic Tool Discovery?

Dynamic tool discovery allows an agent to:

1. Query the MCP server
2. Fetch the available tools
3. Generate function wrappers for each
4. Use them as if they were built into the agent

All of this happens at runtime, so your agent always has access to the latest toolset without needing redeployment or hardcoded logic.

So how does this all fit into the Azure ecosystem? Let’s connect the dots.

Why it works well with Azure AI Agents:

Azure AI Agents are designed to be modular and extensible. By integrating MCP:

1. You keep your agent code clean and focused on behavior
2. Tools can be maintained and scaled independently
3. Your system becomes more adaptable to changes in external services or APIs

This combination lets your agents evolve without rewrites, making your setup easier to scale and maintain.

Now that we understand how dynamic tool discovery works with MCP, let’s quickly get our Azure AI Agent environment ready, starting with setting up Azure AI Foundry.

Set Up Azure AI Foundry and Agent Service:

Before we dive into code, we'll need a working Azure AI Foundry project with a deployed model. Follow the steps below to create one.

1. Open a browser and go to https://ai.azure.com.


2. Sign in using your Azure credentials.
3. On the homepage, select Create an agent.
4. When prompted, create a new project. Use a valid name and expand Advanced options to configure:
    1. Azure AI Foundry resource: Choose or create one
    2. Subscription: Your Azure subscription
    3. Resource group: Select or create
    4. Region: Pick any supported region (note that model quotas may vary)
5. Select Create and wait for the project to be provisioned.
6. If needed, deploy a GPT-4o model using the Global Standard or Standard option (based on quota availability).
7. Once the project is ready, go to the Overview tab in the left-hand menu.
8. Copy the Project Endpoint URL - we’ll need it later to connect the client script to this agent.


That’s all we need from Azure for now. 

With the Azure AI Foundry project and agent in place, let’s shift to the local environment setup where we’ll start wiring everything together with MCP.

Set Up the Local Environment:

With our Azure AI project ready, it’s time to prepare the local development environment where the MCP server and client will run. This setup allows our agent to discover tools dynamically at runtime.

Step 1: Create a Virtual Environment
Open your terminal and run the following commands to create and activate a Python virtual environment:

python -m venv mcpVenv
mcpVenv\Scripts\activate   # For Windows

# OR

source mcpVenv/bin/activate   # For macOS/Linux

Step 2: Create .env File with Required Variables
Before installing the requirements, create a .env file in the project root and add the following:

# Project Connection String from Azure AI Project in Foundry
PROJECT_ENDPOINT="<<Project Connection String>>"

# Model deployment name from Azure
MODEL_DEPLOYMENT_NAME=gpt-4o

Note: The PROJECT_ENDPOINT is the connection string you copied in the previous step when setting up the Azure AI Project in Foundry.

Step 3: Install Required Packages
Create a requirements.txt file and add the following dependencies:

fastapi
aiohttp
python-dotenv
azure-ai-agents
azure-identity
mcp

Then install them:

pip install -r requirements.txt

That’s it - our environment is now ready to build the MCP server and client.

With the environment ready, let’s start building the core of our setup: the MCP server that will host and expose tools to our Azure AI Agent.

Build the MCP Server:

The MCP server acts as a centralized registry where tools are defined and exposed. These tools will later be discovered and used by the agent dynamically at runtime.

We’ll use the FastMCP class from the mcp package to quickly spin up the server and register tools using decorators.

Here’s the full server script (server.py):

import json
from mcp.server.fastmcp import FastMCP

# Initialize the MCP server
mcp = FastMCP("HealthAssistant")

# -----------------------------------------
# Tool: Get upcoming appointments for a patient
# -----------------------------------------
@mcp.tool()
def get_appointment_schedule(patient_id: str) -> str:
    """
    Retrieves the list of upcoming medical appointments scheduled for the specified patient.
    
    This tool accepts a patient ID and returns a structured list of upcoming appointments,
    including the date and department for each appointment. It can be used to display the
    patient's future visit schedule to help them stay informed and manage their time.

    Parameters:
        patient_id (str): The unique identifier of the patient.

    Returns:
        str: A JSON-formatted string containing the patient ID and a list of upcoming appointments,
             each with a date and department name.
    """
    data = {
        "patient_id": patient_id,
        "appointments": [
            {"date": "2025-07-15", "department": "Cardiology"},
            {"date": "2025-08-03", "department": "Dermatology"},
            {"date": "2025-08-18", "department": "Lab Work"}
        ]
    }
    return json.dumps(data)

# -----------------------------------------
# Tool: Get recent medical history
# -----------------------------------------
@mcp.tool()
def get_medical_history(patient_id: str) -> str:
    """
    Retrieves the recent medical history for the currently active patient.

    This tool returns a chronological summary of recent medical events, including visits,
    diagnoses, treatments, and test results. It's useful for reviewing a patient’s recent
    healthcare activities and identifying patterns or follow-up needs.

    Returns:
        str: A JSON-formatted string representing a list of medical history entries,
             each including the date and a summary of the medical event or consultation.
    """
    history = [
      {"date": "2025-06-20", "summary": "Annual physical checkup - normal"},
      {"date": "2025-05-02", "summary": "Prescribed allergy medication"},
      {"date": "2025-03-10", "summary": "Blood test - slightly elevated cholesterol"}
    ]
    result = {
        "patient_id": patient_id,
        "history": history
    }
    return json.dumps(result)

# -----------------------------------------
# Tool: Estimate wait time for a specialist
# -----------------------------------------
@mcp.tool()
def estimate_wait_time(specialty: str) -> str:
    """
    Provides an estimated wait time for scheduling an appointment with a medical specialist.

    By specifying a medical specialty (e.g., Cardiology, Dermatology), this tool returns
    an estimated waiting period based on current scheduling trends. This helps patients
    plan ahead and manage expectations for their care timeline.

    Parameters:
        specialty (str): The name of the medical specialty.

    Returns:
        str: A JSON-formatted string with the specialty name and its corresponding estimated wait time.
              If the specialty is not recognized, the wait time will be marked as 'Unavailable'.
    """
    wait_times = {
        "Cardiology": "2 weeks",
        "Dermatology": "3 weeks",
        "Neurology": "1 month",
        "Orthopedics": "10 days",
        "General Medicine": "3 days"
    }
    data = {
        "specialty": specialty,
        "estimated_wait_time": wait_times.get(specialty, "Unavailable")
    }
    return json.dumps(data)

# Run the server
mcp.run()

Key Components:

1. FastMCP("HealthAssistant"): Initializes the server with a label. This acts as the tool catalog name.
2. @mcp.tool(): This decorator registers each function as a tool. The server automatically exposes these tools for discovery.
3. Return values: Each tool returns structured data in JSON format. This helps the client parse responses cleanly.
4. mcp.run(): Starts the server and makes the registered tools available to connecting clients (in this setup the client launches the server as a subprocess and talks to it over stdio).

Now that our tools are live and exposed through the MCP server, let’s build the MCP client that discovers these tools and hands them off to the Azure AI Agent.

Build the MCP Client and Connect to Azure AI Agent:

The MCP client is responsible for discovering tools from the MCP server and registering them with the Azure AI Agent. Once connected, the agent can call these tools as if they were native functions, all without hardcoding anything.

Create a client script and use the snippets below to build the full client.py:

import os, time
import asyncio
import json
from dotenv import load_dotenv
from contextlib import AsyncExitStack
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from azure.ai.agents import AgentsClient
from azure.ai.agents.models import FunctionTool, MessageRole, ListSortOrder
from azure.identity import DefaultAzureCredential

os.system('cls' if os.name=='nt' else 'clear')

load_dotenv()
project_endpoint = os.getenv("PROJECT_ENDPOINT")
model_deployment = os.getenv("MODEL_DEPLOYMENT_NAME")

Key Components (Part 1: Setup)

1. .env variables: These hold your Azure AI project endpoint and model deployment name.
2. AsyncExitStack: Manages async connections, which keeps startup and teardown clean.
3. stdio_client(): This starts the MCP server as a subprocess and connects the client to it.

Now we define the function that connects to the MCP server:

async def connect_to_server(exit_stack: AsyncExitStack):
    server_params = StdioServerParameters(
        command="python",
        args=["server.py"],
        env=None
    )

    stdio_transport = await exit_stack.enter_async_context(stdio_client(server_params))
    stdio, write = stdio_transport

    session = await exit_stack.enter_async_context(ClientSession(stdio, write))
    await session.initialize()

    response = await session.list_tools()
    tools = response.tools
    print("\nConnected to server with tools:", [tool.name for tool in tools]) 
    return session

Key Components (Part 2: Tool Discovery)

1. list_tools(): Queries the MCP server and returns available tools.
2. ClientSession: Wraps communication with the server and is used later to invoke tools.

Next, we’ll create a chat loop and connect it to the Azure AI Agent:

async def chat_loop(session):
    agents_client = AgentsClient(
        endpoint=project_endpoint,
        credential=DefaultAzureCredential(
            exclude_environment_credential=True,
            exclude_managed_identity_credential=True
        )
    )

    response = await session.list_tools()
    tools = response.tools

    def make_tool_func(tool_name):
        async def tool_func(**kwargs):
            result = await session.call_tool(tool_name, kwargs)
            return result
        tool_func.__name__ = tool_name
        return tool_func

    functions_dict = {tool.name: make_tool_func(tool.name) for tool in tools}
    mcp_function_tool = FunctionTool(functions=list(functions_dict.values()))

Key Components (Part 3: Wrapping Tools)

1. make_tool_func(): Dynamically wraps each discovered tool as an async Python function.
2. FunctionTool: Binds those functions to the agent using Azure’s SDK.

With the tools discovered and wrapped, the final step is to bring your Azure AI Agent to life by connecting it to the tools and handling real-time user interactions.

This part handles:

1. Agent creation
2. Chat loop
3. Tool call execution
4. Printing the agent’s response

Here’s the final section of client.py:

    agent = agents_client.create_agent(
        model=model_deployment,
        name="medical-agent",
        instructions="""
            You are a virtual healthcare assistant. Follow these guidelines:
            - If a user provides a patient ID, retrieve their upcoming appointments or medical history using the tools.
            - Estimate wait time if a user asks about seeing a specialist using specific tools.
            - Keep responses friendly, informative, and medically accurate.
        """,
        tools=mcp_function_tool.definitions
    )

    agents_client.enable_auto_function_calls(tools=mcp_function_tool)
    thread = agents_client.threads.create()

    while True:
        user_input = input("Enter a prompt for the medical agent (type 'quit' to exit):\nUSER: ").strip()
        if user_input.lower() == "quit":
            print("Exiting chat.")
            break

        message = agents_client.messages.create(
            thread_id=thread.id,
            role=MessageRole.USER,
            content=user_input,
        )

        run = agents_client.runs.create(thread_id=thread.id, agent_id=agent.id)

        while run.status in ["queued", "in_progress", "requires_action"]:
            time.sleep(1)
            run = agents_client.runs.get(thread_id=thread.id, run_id=run.id)

            tool_outputs = []

            if run.status == "requires_action":
                tool_calls = run.required_action.submit_tool_outputs.tool_calls

                for tool_call in tool_calls:
                    function_name = tool_call.function.name
                    args_json = tool_call.function.arguments
                    kwargs = json.loads(args_json)
                    required_function = functions_dict.get(function_name)
                    output = await required_function(**kwargs)

                    tool_outputs.append({
                        "tool_call_id": tool_call.id,
                        "output": output.content[0].text,
                    })

                agents_client.runs.submit_tool_outputs(
                    thread_id=thread.id,
                    run_id=run.id,
                    tool_outputs=tool_outputs
                )

        if run.status == "failed":
            print(f"Run failed: {run.last_error}")

        messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING)
        for message in messages:
            if message.text_messages:
                last_msg = message.text_messages[-1]
                print(f"{message.role}:\n{last_msg.text.value}\n")

    print("Cleaning up agents:")
    agents_client.delete_agent(agent.id)
    print("Deleted medical-agent agent.")

Key Highlights:

1. create_agent(): Instantiates your agent and links it to the discovered tools.
2. enable_auto_function_calls(): Lets the agent decide when to use a tool based on user input.
3. Tool call handling: When a tool is needed, the client maps the request to the matching async wrapper function, runs it, and submits the result back to the agent.
4. Thread and run monitoring: This keeps track of each user message and agent response loop.
5. Cleanup: The agent is deleted at the end to avoid clutter or quota issues.

Test the Agent and See It in Action:

With everything set up, it’s time to run the client script, which will also spin up the MCP server automatically and connect to your Azure AI Agent.

Step 1: Run the Client Script
In your terminal (with the virtual environment activated), run:

python client.py

The script will:

1. Launch the MCP server as a subprocess
2. Discover available tools
3. Register them with your agent
4. Start a real-time chat loop

Note: In this setup, the client starts the server as a subprocess. This is suitable for local testing or development.
In a production environment, the MCP server and client should run independently, typically as separate services or containers.

Step 2: Interact with Your Agent

Get all the Medical History for the Patient "Amey Good" With ID "453T6"

Step 3: Output It Generates:

Based on the tools registered in server.py, the agent should call get_medical_history for that patient and summarize the history entries defined there.

Step 4: Cleanup (Optional)
The client script deletes the agent automatically at the end of the session, so you won’t need to manage it manually. Just exit the script by typing:

quit

It will close the session and delete the agent automatically.

Conclusion:

We’ve just built an Azure AI Agent that discovers and uses tools dynamically through Model Context Protocol (MCP).

This setup keeps our agent logic clean, our tools modular, and our updates smooth: no hardcoding, no redeployments.

While we ran the server and client together for testing, in production they should run independently.

With this foundation, we’re ready to scale dynamic agents across healthcare, support, and beyond.

If you have any questions, you can reach out to our SharePoint Consulting team here.