January 9, 2026

GraphQL in Action: Building an API with .NET Core

Introduction: What is GraphQL?

GraphQL is:

  • A query language used to request data
  • A runtime that executes those queries
  • An API style that typically exposes a single endpoint

Unlike REST, which returns a fixed response shape, GraphQL lets the client request exactly the data it needs. This makes the API more frontend-friendly and adaptable to changing UI requirements.

Why GraphQL?

Modern applications, whether React, Angular, or mobile apps, need APIs that are flexible and fast.
Frontend developers usually expect:
  • Only the data they need
  • Fewer API calls
  • Faster UI development
But with traditional REST APIs, we often run into problems like:
  • The API sends extra data the UI never uses
  • Multiple API calls are needed for one screen
  • UI changes force backend API changes
These issues are what make GraphQL a good fit. It lets the client (frontend) decide what data it wants, instead of the backend forcing a fixed response.

REST vs GraphQL

REST API

 GET /api/products  
This returns every field, even if the UI needs only the product name and price.

GraphQL

 query {
  products {
   name
   price
  }
 }
This returns only name and price, nothing extra.
This is the biggest advantage of GraphQL.

Why GraphQL is Better Than REST (Practical View)

  • UI gets only required data
  • One endpoint works for many clients
  • No unnecessary payload
  • The backend does not need to change for every UI change
This makes the frontend and backend more independent.

When Should You Use GraphQL?

GraphQL is good when:

  • Frontend has heavy UI logic
  • Same backend is used by web and mobile apps
  • UI keeps changing frequently
  • You want to reduce network calls

GraphQL is not ideal when:

  • Application is very simple CRUD
  • API mainly handles file uploads/downloads
  • Strong HTTP caching is a top requirement

Setting Up GraphQL in .NET Core (Working Example)

 Create a New Project

 dotnet new webapi -n GraphQLDemo  
 cd GraphQLDemo  
This creates a basic ASP.NET Core Web API project.

Install Required Packages

 dotnet add package GraphQL  
 dotnet add package GraphQL.Server.Transports.AspNetCore  
 dotnet add package GraphQL.Server.Ui.GraphiQL  

These packages help us:

  • Define GraphQL schema
  • Run GraphQL queries
  • Use the GraphiQL UI to test queries

Product Model
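
A minimal sketch of the model, matching the fields used in the queries later in this post:

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; } = string.Empty;
    public decimal Price { get; set; }
}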

What this code does

  • Creates a simple C# class
  • Represents a product entity

Why this is required

  • GraphQL works with strongly typed objects
  • This model acts as the data source
  • In real projects, this usually maps to a database table

So this model is the base of our GraphQL response.


ProductType (GraphQL Object Type)
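
A minimal sketch of the type (the exact field-definition syntax varies slightly between GraphQL.NET versions):

using GraphQL.Types;

public class ProductType : ObjectGraphType<Product>
{
    public ProductType()
    {
        // Only the fields listed here are visible to GraphQL clients
        Field(p => p.Id);
        Field(p => p.Name);
        Field(p => p.Price);
    }
}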

What this code does

  • Converts the C# Product class into a GraphQL type
  • Exposes fields like id, name, and price

Why this is required

  • GraphQL does not directly expose C# models
  • You must clearly define which fields are allowed

Security benefit

Only fields defined here can be queried, so sensitive data is automatically protected.

ProductQuery (Query Resolver)
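
A minimal sketch of the query type. It returns the in-memory data shown in the sample response later in this post; in a real project the resolver body would call a service layer or EF Core instead:

using GraphQL.Types;

public class ProductQuery : ObjectGraphType
{
    public ProductQuery()
    {
        // "products" query returns a list of ProductType items
        Field<ListGraphType<ProductType>>("products")
            .Resolve(context => new List<Product>
            {
                new Product { Id = 1, Name = "Apple", Price = 120 },
                new Product { Id = 2, Name = "Banana", Price = 60 }
            });
    }
}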

What this code does

  • Creates a GraphQL query named products
  • Defines how product data is fetched
  • Executes resolver logic when query runs

Why this is required

  • GraphQL needs resolver logic to get data
  • This is similar to a controller method in Web API
In real projects, the resolver usually calls:
  • A service layer
  • EF Core
  • External APIs
The resolver acts as a bridge between the query and the data.

AppSchema (GraphQL Schema)
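
A minimal sketch of the schema class:

using GraphQL.Types;
using Microsoft.Extensions.DependencyInjection;

public class AppSchema : Schema
{
    public AppSchema(IServiceProvider provider) : base(provider)
    {
        // Register the root query; mutations can be added here later
        Query = provider.GetRequiredService<ProductQuery>();
    }
}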

What this code does

  • Registers all available queries
  • Acts as the main entry point for GraphQL execution

Why this is required

  • GraphQL cannot work without schema
  • Schema defines:
    • Available queries
    • Available mutations (later)
The schema is the contract between the frontend and the backend.

Registering GraphQL Services (Dependency Injection)
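
A minimal sketch of the registrations in Program.cs (lifetimes and style may differ in your project):

using GraphQL.Types;

var builder = WebApplication.CreateBuilder(args);

// Register the GraphQL building blocks with the ASP.NET Core container
builder.Services.AddSingleton<ProductType>();
builder.Services.AddSingleton<ProductQuery>();
builder.Services.AddSingleton<ISchema, AppSchema>();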

What this code does

  • Registers GraphQL components in ASP.NET Core DI
  • Allows GraphQL to resolve dependencies properly

Why this is required

  • GraphQL.NET depends on dependency injection.
  • Without this:
    • Schema won’t load
    • Queries will fail
This follows standard ASP.NET Core practices.

Add GraphQL Configuration 
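
A minimal sketch, assuming GraphQL.NET 7-style builder extensions. AddSystemTextJson() comes from the GraphQL.SystemTextJson package, which you may need to add explicitly depending on your package versions:

using GraphQL;

// Enable GraphQL execution and serialize responses with System.Text.Json
builder.Services.AddGraphQL(b => b
    .AddSystemTextJson());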


What this code does

  • Enables GraphQL in ASP.NET Core
  • Configures JSON serialization

Why this is required

  • GraphQL responses are returned in JSON format
  • Uses System.Text.Json for better performance
  • Registers GraphQL middleware internally
Without this setup, the GraphQL endpoint won’t work.

GraphQL Middleware Configuration
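
A minimal sketch of the middleware setup in Program.cs:

using GraphQL.Types;

var app = builder.Build();

app.UseGraphQL<ISchema>();        // exposes the /graphql endpoint

if (app.Environment.IsDevelopment())
{
    app.UseGraphQLGraphiQL();     // GraphiQL UI for testing (default path: /ui/graphiql)
}

app.Run();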

What this code does

  • UseGraphQL<ISchema>() exposes /graphql endpoint
  • UseGraphQLGraphiQL() provides UI to test queries

Why this is required

  • GraphQL works over HTTP
  • The middleware connects HTTP requests to the GraphQL engine
GraphiQL helps developers:
  • Test queries
  • Explore schema
  • Debug responses
GraphiQL should be disabled in production.

GraphQL Query Example

 query {  
  products {  
   id  
   name  
   price  
  }  
 }  

What happens here

  • Client requests only required fields
  • products resolver is executed

Why this is powerful

  • No over-fetching
  • One backend supports multiple UIs
  • Client controls response format
This is the core strength of GraphQL.

Testing Using the GraphiQL UI

Open:

 https://localhost:{port}/ui/graphiql

Run the query:

 query {  
  products {  
   id  
   name  
   price  
  }  
 }  

Response

 {  
  "data": {  
   "products": [  
    { "id": 1, "name": "Apple", "price": 120 },  
    { "id": 2, "name": "Banana", "price": 60 }  
   ]  
  }  
 }  

Security Considerations

GraphQL is not secure by default. You must:
  • Add authentication (JWT / OAuth)
  • Limit query depth
  • Disable schema introspection in production
  • Apply rate limiting
Security depends on how you implement GraphQL, not GraphQL itself.

Real-World Architecture

Frontend (React / Mobile)
→ GraphQL Query
→ Resolver
→ Service Layer
→ Database
GraphQL acts as a smart data layer between UI and backend.

Conclusion

GraphQL is a strong API solution when:
  • UI changes frequently
  • Multiple clients use the same backend
  • Performance and flexibility are important
But for simple CRUD applications, REST APIs are still a very good and simple choice.


PostgreSQL Major Version Upgrades on Azure: A Terraform-based Approach

Introduction

PostgreSQL 11 has reached its end of life, and Azure recommends upgrading to PostgreSQL 13 or later for enhanced security, improved performance, and long-term support. Unlike minor upgrades, Azure Database for PostgreSQL (Flexible Server) does not support in-place major version upgrades. This makes the upgrade process slightly non-trivial—especially when the server is provisioned using Terraform, and some environments use VNet integration.

In this blog, we’ll walk through:
  • How Azure PostgreSQL upgrades work
  • Why does Terraform recreate the server
  • Multiple migration strategies
  • The exact steps I followed to upgrade PostgreSQL 11 → 13 safely


Existing Setup

My environment had the following characteristics:

  • Azure Database for PostgreSQL – Flexible Server
  • PostgreSQL version: 11
  • SKU: Burstable B1ms (1 vCore, 2 GiB RAM)
  • Storage: 32 GiB
  • Region: Central US
  • Provisioned using Terraform
  • Mixed environments: Some with public access, some with VNet integration
  • Firewall rules restricted to specific IPs

Terraform snippet (simplified):
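
A representative sketch of the server resource, based on the setup described above (resource and variable names are illustrative):

resource "azurerm_postgresql_flexible_server" "pg" {
  name                   = "myserver"
  resource_group_name    = azurerm_resource_group.rg.name
  location               = "centralus"
  version                = "11"                # the only line that changes for the upgrade
  administrator_login    = "pgsqladmin"
  administrator_password = var.pg_admin_password
  sku_name               = "B_Standard_B1ms"   # Burstable B1ms: 1 vCore, 2 GiB RAM
  storage_mb             = 32768               # 32 GiB

  # VNet-integrated environments additionally set delegated_subnet_id
  # and private_dns_zone_id.
}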



Important Reality: No In-Place Major Version Upgrade

This is the most critical thing to understand: Azure PostgreSQL Flexible Server does NOT support in-place major version upgrades.

That means:
  • You cannot upgrade PostgreSQL 11 → 13 on the same server
  • Changing version = "13" in Terraform:
    • Deletes the existing PostgreSQL 11 server
    • Creates a brand-new PostgreSQL 13 server
  • All data is lost unless you migrate or restore it manually

 

Terraform makes this very clear: forces replacement. This is not really an upgrade — it’s a rebuild and a migration.

Why This Upgrade Looks Simple — and Why It Isn’t

At first glance, the upgrade appears trivial: version = "13" 

But behind this single line:
  • Azure treats PostgreSQL major versions as immutable
  • Terraform maps this to a ForceNew operation
  • Automated backups are tied to the old server lifecycle
  • Configuration and data do not carry over


What Actually Happens (Timeline)

Understanding the timeline helps avoid surprises:

T-0: PostgreSQL 11 running

  • Applications connected
  • Data live
  • Automated backups available


T-1: Terraform version updated

  • version = "11" → version = "13"
  • Plan shows forces replacement


T-2: Terraform apply

  • PostgreSQL 11 server is deleted
  • Databases and backups disappear


T-3: PostgreSQL 13 server created

  • Empty server
  • Default parameters
  • No firewall rules
  • No databases


T-4: Manual restore

  • Data restored
  • Configuration reapplied
  • Applications reconnect


Available Upgrade Approaches

1. Azure Database Migration Service (DMS)
2. Backup & Restore (pg_dump / pgAdmin)
3. Temporary Public Access

Here we focus on Option 3, which was simple, cost-effective, and acceptable for my downtime window.

Step 1: Take a Backup

I used pgAdmin 4 with a custom format backup.

Why Custom format?
  • Includes schema + data
  • Best compatibility across versions
  • Works cleanly with pg_restore

# pg_dump has no --sslmode option; set SSL mode via the PGSSLMODE environment variable instead
$env:PGSSLMODE = "require"
pg_dump `
  -h myserver.postgres.database.azure.com `
  -U pgsqladmin@myserver `
  -d master_data_service `
  -Fc `
  -f master_data_service_v11.dump

Step 2: Upgrade PostgreSQL Version via Terraform

In Terraform, change the version: version = "13"

Run:

terraform plan
terraform apply

Important: this immediately destroys the PostgreSQL 11 server and creates a new, empty PostgreSQL 13 server with the same name.

Step 3: Restore the Database to PostgreSQL 13

# As with pg_dump, SSL mode is set via the PGSSLMODE environment variable rather than a flag
$env:PGSSLMODE = "require"
pg_restore `
  -h myserver.postgres.database.azure.com `
  -U pgsqladmin@myserver `
  -d postgres `
  --create `
  -Fc `
  master_data_service_v11.dump

This:
  • Recreated the database
  • Restored schema and data
  • Worked cleanly from v11 → v13


Step 4: Server Parameters & Configuration

Azure applies default server parameters when a new PostgreSQL server is created.

Key learning:

  • Server parameters are NOT automatically migrated
  • If you changed parameters manually in the portal, you must reapply them


Step 5: VNet-Integrated Environments

For servers with VNet integration:
  • No public endpoint exists
  • Local pgAdmin / pg_dump won’t connect

Available options:
  • Use Azure DMS inside the VNet
  • Use a VM or jumpbox
  • Temporarily enable public access

We temporarily enabled public access with strict /32 firewall rules and disabled it immediately after migration.
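
Where the firewall is also managed in Terraform, the temporary /32 rule can be sketched roughly like this (resource names and the IP address are illustrative) and removed again once the restore completes:

resource "azurerm_postgresql_flexible_server_firewall_rule" "temp_migration" {
  name             = "temp-migration-access"
  server_id        = azurerm_postgresql_flexible_server.pg.id
  start_ip_address = "203.0.113.10"   # your workstation's public IP, as a /32
  end_ip_address   = "203.0.113.10"
}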

Step 6: Validate & Cutover

After restoring:
  • Verified tables, row counts, and extensions
  • Tested application connectivity
  • Updated connection strings where required
  • Disabled public access again for private environments

Cost Considerations
  • PostgreSQL B1ms server: ~$25/month
  • Temporary overlap or migration time: a few dollars
  • Azure DMS (Standard): Often free for migration scenarios
  • Overall upgrade cost: minimal


Key Takeaways

  • Azure PostgreSQL major upgrades are not in-place
  • Terraform recreates the server when version changes
  • Always backup before upgrading
  • Server parameters must be reapplied
  • For VNet setups, plan connectivity carefully
  • PostgreSQL supports a direct dump/restore jump from 11 → 13 (no intermediate stop at 12)


Final Thoughts

Upgrading PostgreSQL on Azure requires careful planning, but with the right approach, it can be a predictable and safe process.

If you’re using Terraform:

  • Treat major version upgrades as rebuild + restore
  • Automate as much as possible
  • Test in lower environments first

December 18, 2025

Security Trimming in Azure AI Search for Safe and Compliant RAG Pipelines

In modern enterprises, access to the right information drives productivity — but controlling who can see what is just as important. From HR policies and payroll guidelines to financial reports and legal contracts, sensitive documents must only be available to authorized users. This is where security trimming in Azure AI Search plays a crucial role. It ensures that users can only access the data they are permitted to see, even when using Retrieval-Augmented Generation (RAG) AI pipelines, which pull together enterprise data to answer natural language questions.


What is Security Trimming?

Security trimming in Azure AI Search is the process of filtering search results at query time based on user identity, group membership, or other security principals. Instead of enforcing direct authentication or full access-control lists (ACLs) on the search service, Azure AI Search utilizes a filterable field in the search index, such as group_ids, to simulate document-level authorization by dynamically filtering search results.

For example, when a user queries the search index, a filter expression matches their group IDs to the document’s group_ids field, so only authorized documents appear in the results.

Example Filter Query:

{
  "filter": "group_ids/any(g:search.in(g, 'group_id1, group_id2'))"
}

Here, group_ids is a field in your Azure AI Search index that stores which groups a document belongs to.


Why Security Trimming Matters for RAG

Retrieval-Augmented Generation (RAG) pipelines are architectures combining document retrieval with generative AI models. These pipelines synthesize answers based strictly on enterprise data sources. Without security trimming, RAG pipelines risk exposing confidential or restricted content to unauthorized users, leading to compliance violations and privacy risks.

Without Security Trimming:

  • Sensitive documents might be exposed to unauthorized users.
  • Compliance violations can occur.
  • AI-generated answers may leak confidential information.

Key Benefits of Security Trimming:

  • Confidentiality: Ensures sensitive documents are only accessible to authorized users.
  • Compliance: Adheres to internal policies and regulatory requirements like GDPR.
  • Context-Aware Generation: Answers are produced only from documents the user can access, preventing accidental leaks.

Use Case

Consider an enterprise scenario with two distinct user groups: HR and Finance.

  • HR users should access documents like leave policies, working guidelines, and salary rules, but should never see finance records.
  • Finance users require access to budgets, audits, and financial statements, but are barred from HR files.

Step 1: Defining the Search Index With a Security Field

Create an index schema including a filterable security field (group_ids) that stores group or user IDs as a collection of strings. The field should be filterable but not retrievable.

POST https://[search-service].search.windows.net/indexes/securedfiles?api-version=2025-09-01
Content-Type: application/json
api-key: [ADMIN_API_KEY]

{
  "name": "securedfiles",
  "fields": [
    { "name": "file_id", "type": "Edm.String", "key": true, "searchable": false },
    { "name": "file_name", "type": "Edm.String", "searchable": true },
    { "name": "file_description", "type": "Edm.String", "searchable": true },
    { "name": "group_ids", "type": "Collection(Edm.String)", "filterable": true, "retrievable": false }
  ]
}

Key Points:

  • filterable: true → allows filtering by group IDs.
  • retrievable: false → prevents exposing group IDs in search responses.

With the index schema in place, your foundation for secure, scalable search is ready—each document will now respect access policies from the start.


Step 2: Upload Documents With Group IDs

Push documents to the index, including the groups authorized to access each document.

POST https://[search-service].search.windows.net/indexes/securedfiles/docs/index?api-version=2025-09-01
Content-Type: application/json
api-key: [ADMIN_API_KEY]

{
  "value": [
    {
      "@search.action": "upload",
      "file_id": "1",
      "file_name": "secured_file_a",
      "file_description": "File access restricted to Human Resources",
      "group_ids": ["group_id1"]
    },
    {
      "@search.action": "upload",
      "file_id": "2",
      "file_name": "secured_file_b",
      "file_description": "File access restricted to HR and Recruiting",
      "group_ids": ["group_id1", "group_id2"]
    },
    {
      "@search.action": "upload",
      "file_id": "3",
      "file_name": "secured_file_c",
      "file_description": "File access restricted to Operations and Logistics",
      "group_ids": ["group_id5", "group_id6"]
    }
  ]
}

If document groups need updating, use the merge or mergeOrUpload action:

POST https://[search-service].search.windows.net/indexes/securedfiles/docs/index?api-version=2025-09-01
Content-Type: application/json
api-key: [ADMIN_API_KEY]

{
  "value": [
    {
      "@search.action": "mergeOrUpload",
      "file_id": "3",
      "group_ids": ["group_id7", "group_id8", "group_id9"]
    }
  ]
}

By assigning group IDs at upload, you ensure that every document is automatically filtered for the right audience—security is built into your search pipeline.


Step 3: Perform Filterable Search Query

When a user searches, issue a search query with a filter that restricts results to documents containing the user’s authorized groups.

POST https://[search-service].search.windows.net/indexes/securedfiles/docs/search?api-version=2025-09-01
Content-Type: application/json
api-key: [QUERY_API_KEY]

{
  "search": "*",
  "filter": "group_ids/any(g:search.in(g, 'group_id1, group_id2'))"
}

This query returns only documents where group_ids contains either "group_id1" or "group_id2", matching the user’s groups.

Sample response:

[
  {
    "@search.score": 1.0,
    "file_id": "1",
    "file_name": "secured_file_a"
  },
  {
    "@search.score": 1.0,
    "file_id": "2",
    "file_name": "secured_file_b"
  }
]

Executing a filtered search now guarantees that users see only what they’re authorized to access—empowering secure, context-aware AI responses.


How Security Trimming Works Under the Hood

Azure AI Search uses OData filter expressions to simulate document-level authorization. It filters results purely based on string values stored in the security field (group_ids) without direct authentication or ACL enforcement. This approach provides simple, performant security filtering that scales to large enterprises and integrates seamlessly into RAG AI pipelines.
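
In application code, this filter is typically built at query time from the caller’s group memberships (taken from token claims or a Microsoft Graph lookup). A minimal sketch using the Azure.Search.Documents SDK, with the group list, service name, and API key hard-coded purely for illustration:

using Azure;
using Azure.Search.Documents;
using Azure.Search.Documents.Models;

// In a real pipeline the group IDs come from the signed-in user's token claims
// or a Microsoft Graph lookup; they are hard-coded here for illustration.
string[] groupIds = { "group_id1", "group_id2" };

var searchClient = new SearchClient(
    new Uri("https://my-search-service.search.windows.net"),   // your search endpoint
    "securedfiles",
    new AzureKeyCredential("<query-api-key>"));

var options = new SearchOptions
{
    // Produces: group_ids/any(g: search.in(g, 'group_id1, group_id2'))
    Filter = $"group_ids/any(g: search.in(g, '{string.Join(", ", groupIds)}'))"
};

var response = searchClient.Search<SearchDocument>("*", options);

foreach (var result in response.Value.GetResults())
{
    Console.WriteLine(result.Document["file_name"]);
}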


Conclusion

Security trimming in Azure AI Search is essential for building enterprise-grade, compliant knowledge retrieval systems. Implementing group-based access filtering at the search layer empowers organizations to deliver personalized, secure AI experiences while safeguarding sensitive content and meeting regulatory requirements.

For AI-powered knowledge assistants leveraging RAG, security trimming should be the first priority—ensuring users receive answers strictly from content they are authorized to access.

By implementing security trimming in Azure AI Search, your enterprise ensures that AI-driven insights are both powerful and secure - delivering the right information to the right people, every time.