December 18, 2025

Security Trimming in Azure AI Search for Safe and Compliant RAG Pipelines

In modern enterprises, access to the right information drives productivity — but controlling who can see what is just as important. From HR policies and payroll guidelines to financial reports and legal contracts, sensitive documents must only be available to authorized users. This is where security trimming in Azure AI Search plays a crucial role. It ensures that users can only access the data they are permitted to see, even when using Retrieval-Augmented Generation (RAG) AI pipelines, which pull together enterprise data to answer natural language questions.


What is Security Trimming?

Security trimming in Azure AI Search is the process of filtering search results at query time based on user identity, group membership, or other security principals. Instead of enforcing direct authentication or full access-control lists (ACLs) on the search service, Azure AI Search utilizes a filterable field in the search index, such as group_ids, to simulate document-level authorization by dynamically filtering search results.

For example, when a user queries the search index, a filter expression matches their group IDs to the document’s group_ids field, so only authorized documents appear in the results.

Example Filter Query:

{
  "filter": "group_ids/any(g:search.in(g, 'group_id1, group_id2'))"
}

Here, group_ids is a field in your Azure AI Search index that stores which groups a document belongs to.
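In application code, the filter string is typically assembled from the caller's group memberships at query time. Here is a hedged sketch (not an official SDK helper) of one way to do that; the `group_ids` field name matches the index above, and the explicit `'|'` delimiter is a `search.in` option that keeps IDs containing commas or spaces intact:

```python
def build_security_filter(group_ids, field="group_ids"):
    """Build an OData security-trimming filter for Azure AI Search.

    Uses search.in with an explicit '|' delimiter so group IDs that
    happen to contain commas or spaces are still matched correctly.
    """
    if not group_ids:
        # A user with no groups should match nothing, not everything.
        return f"{field}/any(g: g eq '__none__')"
    joined = "|".join(group_ids)
    return f"{field}/any(g: search.in(g, '{joined}', '|'))"

# Example: the user belongs to two groups.
print(build_security_filter(["group_id1", "group_id2"]))
# group_ids/any(g: search.in(g, 'group_id1|group_id2', '|'))
```

Centralizing filter construction in one function like this also gives you a single place to handle the "no groups" edge case, which would otherwise silently return unfiltered results.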


Why Security Trimming Matters for RAG

Retrieval-Augmented Generation (RAG) pipelines are architectures combining document retrieval with generative AI models. These pipelines synthesize answers based strictly on enterprise data sources. Without security trimming, RAG pipelines risk exposing confidential or restricted content to unauthorized users, leading to compliance violations and privacy risks.

Without Security Trimming:

  • Sensitive documents might be exposed to unauthorized users.
  • Compliance violations can occur.
  • AI-generated answers may leak confidential information.

Key Benefits of Security Trimming:

  • Confidentiality: Ensures sensitive documents are only accessible to authorized users.
  • Compliance: Adheres to internal policies and regulatory requirements like GDPR.
  • Context-Aware Generation: Answers are produced only from documents the user can access, preventing accidental leaks.

Use Case

Consider an enterprise scenario with two distinct user groups: HR and Finance.

  • HR users should access documents like leave policies, working guidelines, and salary rules, but should never see finance records.
  • Finance users require access to budgets, audits, and financial statements, but are barred from HR files.

Step 1: Defining the Search Index With a Security Field

Create an index schema including a filterable security field (group_ids) that stores group or user IDs as a collection of strings. The field should be filterable but not retrievable.

POST https://[search-service].search.windows.net/indexes/securedfiles?api-version=2025-09-01
Content-Type: application/json
api-key: [ADMIN_API_KEY]

{
  "name": "securedfiles",
  "fields": [
    { "name": "file_id", "type": "Edm.String", "key": true, "searchable": false },
    { "name": "file_name", "type": "Edm.String", "searchable": true },
    { "name": "file_description", "type": "Edm.String", "searchable": true },
    { "name": "group_ids", "type": "Collection(Edm.String)", "filterable": true, "retrievable": false }
  ]
}

Key Points:

  • filterable: true → allows filtering by group IDs.
  • retrievable: false → prevents exposing group IDs in search responses.

With the index schema in place, your foundation for secure, scalable search is ready—each document will now respect access policies from the start.


Step 2: Upload Documents With Group IDs

Push documents to the index, including the groups authorized to access each document.

POST https://[search-service].search.windows.net/indexes/securedfiles/docs/index?api-version=2025-09-01
Content-Type: application/json
api-key: [ADMIN_API_KEY]

{
  "value": [
    {
      "@search.action": "upload",
      "file_id": "1",
      "file_name": "secured_file_a",
      "file_description": "File access restricted to Human Resources",
      "group_ids": ["group_id1"]
    },
    {
      "@search.action": "upload",
      "file_id": "2",
      "file_name": "secured_file_b",
      "file_description": "File access restricted to HR and Recruiting",
      "group_ids": ["group_id1", "group_id2"]
    },
    {
      "@search.action": "upload",
      "file_id": "3",
      "file_name": "secured_file_c",
      "file_description": "File access restricted to Operations and Logistics",
      "group_ids": ["group_id5", "group_id6"]
    }
  ]
}

If document groups need updating, use the merge or mergeOrUpload action:

POST https://[search-service].search.windows.net/indexes/securedfiles/docs/index?api-version=2025-09-01
Content-Type: application/json
api-key: [ADMIN_API_KEY]

{
  "value": [
    {
      "@search.action": "mergeOrUpload",
      "file_id": "3",
      "group_ids": ["group_id7", "group_id8", "group_id9"]
    }
  ]
}

By assigning group IDs at upload, you ensure that every document is automatically filtered for the right audience—security is built into your search pipeline.
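The upload and merge payloads above share the same shape, so they are easy to generate programmatically. As an illustrative sketch (the helper name and document dicts are assumptions, not part of the Azure API), a small builder can produce the request body for either action:

```python
def build_index_batch(docs, action="upload"):
    """Build the request body for the Azure AI Search documents index API.

    Each entry in `docs` is a dict of index fields; the @search.action
    key tells the service whether to upload, merge, or mergeOrUpload.
    """
    return {"value": [{"@search.action": action, **doc} for doc in docs]}

# Re-point document 3 at a new set of groups without re-sending its content.
batch = build_index_batch(
    [{"file_id": "3", "group_ids": ["group_id7", "group_id8", "group_id9"]}],
    action="mergeOrUpload",
)
```

The resulting `batch` dict serializes to exactly the JSON body shown above, ready to POST to the `/docs/index` endpoint.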


Step 3: Perform a Filtered Search Query

When a user searches, issue a search query with a filter that restricts results to documents containing the user’s authorized groups.

POST https://[search-service].search.windows.net/indexes/securedfiles/docs/search?api-version=2025-09-01
Content-Type: application/json
api-key: [QUERY_API_KEY]

{
  "search": "*",
  "filter": "group_ids/any(g:search.in(g, 'group_id1, group_id2'))"
}

This query returns only documents whose group_ids field contains either "group_id1" or "group_id2", matching the user's groups.

Sample response:

[
  {
    "@search.score": 1.0,
    "file_id": "1",
    "file_name": "secured_file_a"
  },
  {
    "@search.score": 1.0,
    "file_id": "2",
    "file_name": "secured_file_b"
  }
]

Executing a filtered search now guarantees that users see only what they’re authorized to access—empowering secure, context-aware AI responses.
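Putting the pieces together, the query step can be issued from application code with nothing more than the standard library. This is a minimal sketch, assuming the service name, index name, and key shown earlier; the request is assembled but not sent here:

```python
import json
import urllib.request

def build_search_request(service, index, api_key, user_groups,
                         api_version="2025-09-01"):
    """Assemble a security-trimmed search POST for Azure AI Search."""
    url = (f"https://{service}.search.windows.net/indexes/{index}"
           f"/docs/search?api-version={api_version}")
    body = {
        "search": "*",
        "filter": "group_ids/any(g: search.in(g, '{}'))".format(
            ", ".join(user_groups)),
    }
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json", "api-key": api_key},
        method="POST",
    )

req = build_search_request("my-search", "securedfiles", "QUERY_KEY",
                           ["group_id1", "group_id2"])
# urllib.request.urlopen(req)  # returns only documents the user may see
```

In production the user's groups would come from their authenticated token (for example, Microsoft Entra ID group claims) rather than a hard-coded list.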


How Security Trimming Works Under the Hood

Azure AI Search uses OData filter expressions to simulate document-level authorization. It filters results purely based on string values stored in the security field (group_ids) without direct authentication or ACL enforcement. This approach provides simple, performant security filtering that scales to large enterprises and integrates seamlessly into RAG AI pipelines.
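To make the `any`/`search.in` semantics concrete, here is a small in-memory simulation (not the service's actual implementation) of how the filter admits or rejects documents:

```python
def passes_trimming(doc_group_ids, user_group_ids):
    """Mimic group_ids/any(g: search.in(g, '<user groups>')):
    a document passes if ANY of its group_ids is in the user's set."""
    user_set = set(user_group_ids)
    return any(g in user_set for g in doc_group_ids)

docs = {
    "secured_file_a": ["group_id1"],
    "secured_file_c": ["group_id5", "group_id6"],
}
user = ["group_id1", "group_id2"]
visible = [name for name, groups in docs.items()
           if passes_trimming(groups, user)]
# visible == ["secured_file_a"]
```

The "any" semantics matter: a document shared with several groups is visible to a user who belongs to just one of them.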


Conclusion

Security trimming in Azure AI Search is essential for building enterprise-grade, compliant knowledge retrieval systems. Implementing group-based access filtering at the search layer empowers organizations to deliver personalized, secure AI experiences while safeguarding sensitive content and meeting regulatory requirements.

For AI-powered knowledge assistants leveraging RAG, security trimming should be the first priority—ensuring users receive answers strictly from content they are authorized to access.

By implementing security trimming in Azure AI Search, your enterprise ensures that AI-driven insights are both powerful and secure - delivering the right information to the right people, every time.

Extending C# MCP Server with GitHub Copilot and Custom Tools

Introduction

AI assistants are becoming more capable, but their real power emerges when they can tap into your systems, logic, and data. The Model Context Protocol (MCP) makes this possible by providing a standardized way for tools and services to interact directly with assistants like GitHub Copilot Chat. By exposing your backend capabilities through an MCP server, you can extend Copilot far beyond code suggestions and turn it into a practical interface for your applications.

Key Topics Covered:

  • A breakdown of how the Model Context Protocol works and the components that make up its architecture.
  • Steps to create an MCP server in C# and implement your own custom tools.
  • How to link your .NET-based MCP server with GitHub Copilot Chat in VS Code so they can communicate seamlessly.

What is MCP?

MCP defines a standard protocol for AI clients to connect to external servers.

  • MCP Server → your app or API that provides tools.
  • MCP Client → AI assistant (like GitHub Copilot Chat) that calls those tools.

Think of it like plugins for Copilot, but built with simple attributes and a lightweight protocol.


Project Setup:

  1. Create a new ASP.NET Core application to serve as the base for your MCP server.
  2. Add the required dependencies, including:
      • ModelContextProtocol.AspNetCore
      • Microsoft.Azure.Functions.Worker.Extensions.Mcp (only if you host the server in Azure Functions)
      • System.Data.SqlClient (for database communication)

Defining a tool:

  • A tool is a simple class decorated with McpServerToolType. Each method marked McpServerTool is automatically exposed to the MCP client.
  • Define tools with clear, detailed descriptions so the LLM can interpret them effectively and deliver more accurate responses.
  • Below is an example of the EmployeeTool.cs that has the tool defined:
using System.ComponentModel;
using ModelContextProtocol.Server;

[McpServerToolType]
public class EmployeeTool
{
    [McpServerTool, Description("MCP tool that fetches employee records based on hiring dates.")]
    public string GetEmployeeDetails(
        [Description("Start date of the provided date range.")] string startDate,
        [Description("End date of the provided date range.")] string endDate)
    {
        // Write business logic to retrieve data
        return $"Fetching employees hired between {startDate} and {endDate}";
    }
}

  • Register your tool in the Program.cs file as shown below.
var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddMcpServer()
    .WithHttpTransport()
    .WithToolsFromAssembly();

builder.Services.AddSingleton<EmployeeTool>();

var app = builder.Build();

app.MapMcp();

app.Run();
  • Once you’ve defined your tools, run the project with dotnet run. The server starts listening on its configured port (for example, http://localhost:5000).

Connecting with GitHub Copilot Chat:

Now that the tools are defined, let’s connect them to Copilot Chat. You’ll need:

  • A GitHub account
  • The GitHub Copilot and GitHub Copilot Chat extensions installed in VS Code

Next, we’ll add the server using the steps below:

  1. Open the Command Palette.
  2. Search for “MCP: Add Server” and select it.
  3. Choose HTTP as the transport mode.
  4. Enter the server URL (for example: http://localhost:5000).
  5. Provide a name for your server and choose whether to save it as Global (user) or just for the current workspace.
  6. When asked, confirm that you trust this MCP server.
  7. Your MCP server is now registered and ready to be used through Copilot Chat.
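Behind the scenes, this flow records the server in an MCP configuration file (workspace-level .vscode/mcp.json, or your user profile if you chose Global). As a sketch, assuming the example name and port from the steps above, the resulting entry looks roughly like this:

```json
{
  "servers": {
    "employee-mcp-server": {
      "type": "http",
      "url": "http://localhost:5000"
    }
  }
}
```

Editing this file directly is an alternative to the Command Palette flow, and it can be committed to a repository so teammates pick up the same server.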

Verify the Server:

  • Access the Command Palette and select “MCP: List Servers” to verify the server’s presence in the list.
  • Alternatively, navigate to the Extensions view and examine the section labeled MCP Servers => Installed.

Using MCP Tools Inside Copilot Chat:

Once the MCP server is added, you can start using the tools directly inside Copilot Chat:

  1. Open the Copilot Chat interface in VS Code.
  2. Switch to Agent mode from the drop-down beneath the chat box.
  3. Click the Tools icon to explore available MCP tools.
  4. Provide a prompt like: “Provide me with the employees hired in the last month.”
  5. To explicitly invoke a tool, type # and select it by name.
  6. When Copilot suggests a tool invocation, review it and click Continue to execute.

That’s it - Copilot will now call your MCP tools and return live data straight into chat.


Conclusion

In this guide, we explored how to build a custom MCP server using C# .NET, define powerful tools, and integrate them with GitHub Copilot Chat to extend its capabilities. With MCP, you can enable Copilot to access real-time data, execute business logic, and provide accurate, context-aware responses.

For more details and official documentation, check out the C# MCP SDK on GitHub.