April 27, 2026

How to Fix "App Unresponsive" in Power Apps Using Chunked Processing

Introduction

You've built a Power App that works perfectly on small datasets. Then a colleague opens it with 400 real product records - and the dreaded "App Unresponsive" warning appears. The app isn't broken. It's doing exactly what you asked - processing everything simultaneously, on a single thread, with no capacity left for anything else.

Power Apps applications frequently handle large datasets with complex, record-level business logic such as pricing rules, tax computation, and discounts. When this logic runs entirely on the client side in a single operation, performance problems like freezing and "App Unresponsive" warnings become inevitable.

In this post, we walk through exactly why this happens and demonstrate how chunking - processing records in smaller sequential batches - resolves the problem without changing your underlying business logic.

Technical Challenge

Consider a typical Power Apps scenario where each record may contain nested or related data, complex per-record calculations such as pricing rules, tax computation, or discounts, and where all processing is performed using client-side Power Fx formulas.

When a large dataset is processed in a single operation, the application may freeze and users may receive "Unresponsive" warnings - even though the app is still executing logic in the background.

Important: Performance degradation depends not only on the number of records, but also on the complexity of calculations per record and the user's system configuration - CPU, memory, browser, and device performance.

Why Power Apps Become Unresponsive

1. All Records Are Processed at Once

A single ForAll() loop executes calculations for every record simultaneously. This becomes especially impactful when working with large collections that contain heavy logic per record. For light calculations, even 1,000+ records may process without issue. For heavy nested formulas, as few as 200–300 records can trigger an unresponsive state.

2. High Memory and CPU Consumption

Each record evaluation produces intermediate calculation results. When hundreds of records are processed simultaneously, memory and CPU usage spike. Three compounding failures occur at once:

  • Memory Spike - All intermediate results are held in memory simultaneously, causing excessive consumption that overwhelms the client device.
  • CPU Overload - Parallel processing of hundreds of complex calculations saturates the processor, leaving no capacity for UI rendering or user interaction.
  • UI Freeze - The render thread is blocked entirely, preventing any screen updates until all processing completes.

3. UI Thread Blocking

Power Apps evaluates formulas on the same thread that renders the UI. While large calculations run, the UI cannot refresh - making the app appear frozen even if execution is still ongoing in the background.

Before Chunking: Processing All Records at Once

The following example processes all records in a single pass. While straightforward to write, this pattern blocks the UI thread for the entire duration and is the primary cause of "App Unresponsive" warnings in production.

// Before chunking: process every record in a single pass
ForAll(
    colProducts,
    Patch(
        colProducts,
        ThisRecord,
        {
            BaseRevenue: BasePrice * QuantitySold,
            DiscountAmount: BasePrice * QuantitySold * DiscountRate,
            TaxAmount: ((BasePrice * QuantitySold) -
                (BasePrice * QuantitySold * DiscountRate)) * TaxRate,
            FinalRevenue:
                (BasePrice * QuantitySold)
                - (BasePrice * QuantitySold * DiscountRate)
                - (((BasePrice * QuantitySold) -
                   (BasePrice * QuantitySold * DiscountRate)) * TaxRate)
        }
    )
);

Why this causes problems: the entire loop runs as one synchronous operation on the UI thread, so nothing can render until every record is patched. As noted earlier, the failure threshold is not fixed - it scales directly with per-record logic complexity, and heavy nested formulas like the one above can trigger an unresponsive state at just a few hundred records.

After Chunking: Processing Records in Batches

The chunked implementation below processes records incrementally. The business logic is identical - only the delivery mechanism changes. Start with a batch size of 100 and adjust based on the complexity of your formulas and the capabilities of your target devices.

// Adjust _chunkSize based on formula complexity and device capability
Set(_chunkSize, 100);
Set(_totalRecords, CountRows(colProducts));

ForAll(
    Sequence(RoundUp(_totalRecords / _chunkSize, 0)) As _batch,
    With(
        {
            _start: (_batch.Value - 1) * _chunkSize + 1,
            _end:   Min(_batch.Value * _chunkSize, _totalRecords)
        },
        ForAll(
            Sequence(_end - _start + 1, _start, 1) As _row,
            With(
                { _p: Last(FirstN(colProducts, _row.Value)) },
                Patch(
                    colProducts,
                    _p,
                    {
                        BaseRevenue:    _p.BasePrice * _p.QuantitySold,
                        DiscountAmount: _p.BasePrice * _p.QuantitySold * _p.DiscountRate,
                        TaxAmount:      ((_p.BasePrice * _p.QuantitySold) -
                                         (_p.BasePrice * _p.QuantitySold * _p.DiscountRate)) * _p.TaxRate,
                        FinalRevenue:   (_p.BasePrice * _p.QuantitySold)
                                        - (_p.BasePrice * _p.QuantitySold * _p.DiscountRate)
                                        - (((_p.BasePrice * _p.QuantitySold) -
                                            (_p.BasePrice * _p.QuantitySold * _p.DiscountRate)) * _p.TaxRate)
                    }
                )
            )
        )
    )
);

How it works: The outer ForAll(Sequence(...)) iterates over batch numbers. For each batch, a With() scope calculates the start and end record indices. The inner loop retrieves and processes each record individually using Last(FirstN()) - a standard Power Fx idiom for positional record access. The UI thread is free to refresh between batches, keeping the experience smooth throughout.

Frequently asked questions

Does chunking change the final output or results?

No. Chunking only changes the order in which records are processed, not the calculations applied to each one. The final state of your collection will be identical to what a single ForAll() would produce - it simply gets there without freezing the app.

When should I not use chunking?

For small collections - typically fewer than 100 records with simple arithmetic - a standard ForAll() loop remains perfectly appropriate and easier to maintain. Chunking adds structural complexity, so apply it where the performance benefit is real.

How do I know if my batch size is too large?

If users still report freezes or unresponsive warnings after applying chunking, reduce the batch size. Start at 100, test on your lowest-spec target device, and decrease by 25–50 until the experience is consistently smooth.

Can this pattern be used with SharePoint lists instead of local collections?

Yes, with some adaptation. When working directly against a SharePoint data source, the same batching logic applies - however, each Patch() call will be a network request, so consider the additional latency and delegate where possible to reduce client-side load.
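As a rough sketch - assuming a SharePoint list named Products whose ID column lines up with the records in colProducts (hypothetical list and column names) - writing one chunk of computed values back might look like this:

// Write one chunk back to a SharePoint list named 'Products'
// (hypothetical names). Each Patch() here is a separate network
// call, so keep chunk sizes conservative for remote data sources.
ForAll(
    FirstN(colProducts, _chunkSize) As _p,
    Patch(
        Products,
        LookUp(Products, ID = _p.ID),
        { BaseRevenue: _p.BasePrice * _p.QuantitySold }
    )
);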

Conclusion

A single ForAll() loop across a large collection blocks the UI thread entirely until every record is processed - causing the freezes and "App Unresponsive" errors that users experience as crashes. The underlying logic isn't wrong; the delivery mechanism is.

Chunking resolves this by processing records in sequential batches. Memory stays bounded, the UI thread has room to breathe, and the app remains interactive throughout the operation - regardless of how complex your per-record formulas are.

How to Lock Down SharePoint Access with Sites.Selected Permissions

Introduction

If you've ever granted an app Sites.ReadWrite.All permissions in SharePoint, you know that sinking feeling: you've just given it the keys to every single site in your tenant. For most applications, that's like using a sledgehammer to hang a picture frame.

There's a better way: Sites.Selected permissions. This feature lets you grant your application access to only the specific SharePoint sites it actually needs - nothing more, nothing less. It's the principle of least privilege in action, and it's surprisingly straightforward to set up.

In this guide, I'll walk you through the complete process of configuring Sites.Selected with read-only access to a specific SharePoint site. We'll use Microsoft Graph Explorer (no PowerShell required), and by the end, you'll have a secure, properly scoped application that can only access the data it needs.

What Is Sites.Selected and Why Should You Care?

Sites.Selected is a Microsoft Graph permission that flips the traditional access model on its head. Instead of granting tenant-wide access upfront, you start with zero access and explicitly grant permissions to individual sites as needed.

Think of it this way:

  • Sites.ReadWrite.All = Master key to every door in the building
  • Sites.Selected = Specific key cards for only the rooms you need

This approach drastically reduces your attack surface. If your application's credentials are ever compromised, the blast radius is limited to just the sites you've explicitly granted access to - not your entire SharePoint environment.

The Three-Phase Process

Setting up Sites.Selected permissions involves three distinct phases, each building on the last:

Phase | Action | Where
1 | Create app registration & add Sites.Selected permission | Azure Portal
2 | Get SharePoint site ID | Microsoft Graph Explorer
3 | Grant read permission to site | Microsoft Graph Explorer

Here's the crucial thing to understand: after Phase 1, your app has Sites.Selected consent but zero actual access. It's not until Phase 3 that you grant permission to specific sites. This two-step process is what makes the permission model secure.

What You'll Need

Before we dive in, make sure you have:

  • Global Admin or Application Admin role in Azure AD
  • SharePoint Admin access
  • The URL of the SharePoint site you want to grant access to

That's it. No PowerShell modules to install, no scripts to run - just your browser and admin credentials.

Phase 1: Creating Your App Registration

The first phase happens entirely in the Azure Portal. We'll create a new app registration and add the Sites.Selected permission - but remember, this doesn't grant access to anything yet.

Navigate to App Registrations

Head to https://portal.azure.com and sign in with your admin account. Use the search bar at the top to find App registrations and select it.

Register Your Application

Click + New registration and fill in these details:

Field | Value
Name | SharePoint-Reader-App
Supported account types | Accounts in this organizational directory only
Redirect URI | Leave blank

Click Register and you'll land on the app overview page.

Save Your Application ID

On the overview page, you'll see an Application (client) ID. This is a GUID that looks something like a3f44e63-e46e-4c31-9213-888c172ca160. Copy this and save it somewhere safe - you'll need it in Phase 3.

Add the Sites.Selected Permission

In the left menu, click API permissions, then + Add a permission. Here's where it gets specific:

  1. Select Microsoft Graph
  2. Choose Application permissions (not Delegated - this is critical)
  3. Search for Sites.Selected
  4. Check the box and click Add permissions

Application permissions are for background services and daemon apps that run without a signed-in user. Delegated permissions are for apps that act on behalf of a user. For Sites.Selected to work, you must use Application permissions.

Grant Admin Consent

Back on the API permissions page, click Grant admin consent for [Your Organization] and confirm. You'll see a green checkmark appear next to Sites.Selected with a status of Granted.

Important: At this point, your app has Sites.Selected consent, but it still can't access any sites. That's by design. Phase 3 is where you grant the actual site-level permissions.

Phase 2: Getting the SharePoint Site ID

To grant permissions to a specific site, you need its Site ID. This is a long, comma-separated string that uniquely identifies the site in your tenant. We'll use Microsoft Graph Explorer to retrieve it.

Open Graph Explorer and Sign In

Navigate to https://developer.microsoft.com/en-us/graph/graph-explorer and click Sign in to Graph Explorer in the top right. Use your admin account.

Consent to Permissions (Critical Step)

Here's something that trips people up: Graph Explorer needs its own permissions to make API calls. These are separate from your app's permissions.

Click the Modify permissions tab below the query box. Find these two permissions and consent to both:

  • Sites.Read.All (needed to retrieve the site ID)
  • Sites.FullControl.All (needed in Phase 3 to grant permissions)

For each permission, click Consent, then Accept in the pop-up. Verify both show a Consented status before proceeding.

Construct Your Query

Set the method dropdown to GET. Now you need to build the URL. The format is:

https://graph.microsoft.com/v1.0/sites/{tenant}.sharepoint.com:/sites/{sitename}

Let's say your site is https://contoso.sharepoint.com/sites/ProjectAlpha. Your Graph API URL would be:

https://graph.microsoft.com/v1.0/sites/contoso.sharepoint.com:/sites/ProjectAlpha

Note: Use the site name from the URL, not the display name. If your site is called "Project Alpha Site" but the URL is ProjectAlpha, use ProjectAlpha.

Run the Query and Extract the Site ID

Click Run query. In the Response preview, you'll see JSON that looks like this:

{
  "id": "contoso.sharepoint.com,a1b2c3d4-1234-5678-abcd-111122223333,e5f6g7h8-4321-8765-dcba-444455556666",
  "name": "ProjectAlpha",
  "displayName": "Project Alpha",
  "webUrl": "https://contoso.sharepoint.com/sites/ProjectAlpha"
}

Copy the entire id value - the whole thing, including the commas. This is your Site ID. Save it alongside your Application ID.

The Site ID format is {hostname},{site-collection-id},{web-id}. Don't try to reconstruct it manually or use just part of it - you need the complete string.

Phase 3: Granting Read Permission to Your Site

This is where everything comes together. We'll use a POST request in Graph Explorer to grant your application read access to the specific site.

Set Up the POST Request

In Graph Explorer, change the method to POST. The URL format is:

https://graph.microsoft.com/v1.0/sites/{siteId}/permissions

Replace {siteId} with the full Site ID you copied in Phase 2. For our example:

https://graph.microsoft.com/v1.0/sites/contoso.sharepoint.com,a1b2c3d4-1234-5678-abcd-111122223333,e5f6g7h8-4321-8765-dcba-444455556666/permissions

Add the Request Body

Click the Request body tab and enter this JSON:

{
  "roles": ["read"],
  "grantedToIdentities": [
    {
      "application": {
        "id": "YOUR-APPLICATION-CLIENT-ID",
        "displayName": "SharePoint-Reader-App"
      }
    }
  ]
}

Critical: Replace YOUR-APPLICATION-CLIENT-ID with the Application ID you saved in Phase 1. Using our example ID, the complete JSON would be:

{
  "roles": ["read"],
  "grantedToIdentities": [
    {
      "application": {
        "id": "a3f44e63-e46e-4c31-9213-888c172ca160",
        "displayName": "SharePoint-Reader-App"
      }
    }
  ]
}

The roles array specifies the permission level. We're using read for read-only access. Other options include write, manage, and fullcontrol.

Execute and Verify

Click Run query. If everything is configured correctly, you'll receive a 201 Created response with JSON that looks like:

{
  "id": "aTowaS50fG1zLnNwLmV4dHxlYTVmMDVlZ...",
  "roles": ["read"],
  "grantedToIdentitiesV2": [...],
  "grantedToIdentities": [...]
}

That id field in the response is the permission ID. Save it if you think you might need to update or revoke this permission later.

Verifying Everything Works

To confirm the permission was granted successfully, you can query the permissions endpoint. Change the method back to GET and use:

https://graph.microsoft.com/v1.0/sites/{siteId}/permissions

Click Run query. You should see your application listed in the response with roles set to ["read"].
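You can also confirm access from the application itself. Here's a minimal C# sketch - assuming the Microsoft.Graph v5 SDK and Azure.Identity packages, with the IDs and secret from Phases 1 and 2 as placeholders (store the real secret securely, never in source):

// Minimal sketch: authenticate as the registered app and read the granted site.
using Azure.Identity;
using Microsoft.Graph;

var credential = new ClientSecretCredential(
    tenantId: "YOUR-TENANT-ID",
    clientId: "a3f44e63-e46e-4c31-9213-888c172ca160",  // Application ID from Phase 1
    clientSecret: "YOUR-CLIENT-SECRET");

var graph = new GraphServiceClient(credential, new[] { "https://graph.microsoft.com/.default" });

// Succeeds only for sites granted in Phase 3; any other site returns 403 Forbidden.
var site = await graph.Sites["{site-id-from-phase-2}"].GetAsync();
Console.WriteLine(site?.DisplayName);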

Understanding Available Permission Roles

We used read in this guide, but Sites.Selected supports four permission levels:

Role | Description
read | Read-only access (used in this guide)
write | Read and write access
manage | Manage lists and libraries
fullcontrol | Full control over the site

Choose the most restrictive role that meets your application's needs. If you only need to read documents, stick with read.

Common Issues and How to Fix Them

Here are the most common problems people run into and their solutions:

403 Forbidden Error

If you get a 403 error when trying to grant permissions in Phase 3, you likely haven't consented to Sites.FullControl.All in Graph Explorer. Go back to the Modify permissions tab and consent to it.

Site Not Found

Double-check that you're using the site name from the URL, not the display name. If your site URL is /sites/proj-alpha but the display name is Project Alpha Team Site, use proj-alpha.

Insufficient Privileges

Make sure you've clicked Modify permissions in Graph Explorer and consented to both Sites.Read.All and Sites.FullControl.All. These are Graph Explorer's permissions, not your app's.

Invalid Request Body

Check your JSON syntax carefully. Common mistakes include missing commas, mismatched brackets, or forgetting to replace YOUR-APPLICATION-CLIENT-ID with your actual Application ID.

Wrapping Up

Sites.Selected permissions represent a massive improvement in security posture for SharePoint integrations. Instead of granting blanket access to your entire tenant, you can scope applications down to exactly what they need and nothing more.

The process involves three phases: creating an app registration with Sites.Selected consent in Azure Portal, retrieving the Site ID using Graph Explorer, and granting site-specific permissions via a POST request. Each phase builds on the last, and by the end, you have a properly scoped application that follows the principle of least privilege.

If you're building SharePoint integrations, this should be your default approach.

If you have any questions, you can reach out to our SharePoint Consulting team here.

Deploying .NET Core Web API to IIS on Windows Server (Fix HTTP 500.19 Error)

Introduction

Deploying a .NET Core Web API to IIS on Windows Server is a standard requirement in enterprise environments. However, many developers encounter deployment failures such as HTTP Error 500.19 (Error Code: 0x8007000d) immediately after publishing.

This error is rarely complex. It is almost always caused by missing prerequisites, incorrect IIS configuration, or an improperly installed Hosting Bundle.

This guide walks you through the complete IIS deployment process step-by-step - and explains exactly how to diagnose and resolve HTTP 500.19 errors with confidence.

Prerequisites

Before beginning deployment, ensure the following components are installed and ready:

  • Windows Server installed and accessible
  • IIS (Internet Information Services) installed and running
  • .NET Core / .NET Hosting Bundle installed (version must match your project)
  • Visual Studio project built successfully in Release mode
  • Published output folder generated and accessible

Important: The Hosting Bundle version must precisely match your target framework (.NET 6, 7, 8, etc.). A version mismatch is one of the most common root causes of HTTP Error 500.19.

01 Install IIS

Open Server Manager → Add Roles and Features → Select Role-based installation → Choose your server → Select Web Server (IIS).

During feature selection, ensure the following are included:

  • Web Server (IIS)
  • Application Development Features
  • .NET Extensibility
  • ASP.NET Core Module (if available)
  • Management Tools
  • IIS Management Console

Command - Verify IIS Manager

inetmgr

Run this command in the Run dialog (Win + R) to confirm IIS Manager opens successfully.

02 Install .NET Core Hosting Bundle

Download the Hosting Bundle from the official Microsoft .NET download page and install the version that precisely matches your project's target framework. Running a mismatched version is a leading cause of 500.19 errors and should be verified before any other troubleshooting.
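Command - Verify Installed Runtimes

dotnet --list-runtimes

Run this from a command prompt on the server. The output should list Microsoft.AspNetCore.App runtimes; confirm one matches your project's target framework version before troubleshooting anything else.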

Command - Restart IIS After Installation

iisreset

After installation, always restart IIS to ensure the .NET Core Module is properly registered.

03 Select Publish Target

  • Right-click your project in Solution Explorer → Click Publish
  • Select Folder as the publish target → Click Next
  • Provide a publish path, e.g.: C:\Users\YourName\Desktop\PublishedFolder
  • Click Finish to save the publish profile

04 Publish the Web API

  • Set Configuration to Release
  • Set Deployment Mode to Framework-dependent
  • Confirm Target Framework matches your project version
  • Enable Delete all existing files prior to publish to avoid stale artifacts
  • Click Publish and wait for the process to complete

Once published, copy the output folder to the IIS web root directory:

C:\inetpub\wwwroot\MyWebApi

05 Create Application Pool

  • Open IIS Manager → Click Application Pools
  • Click Add Application Pool in the Actions panel
  • Name: MyApiPool
  • .NET CLR Version: No Managed Code (required for all .NET Core apps)
  • Managed Pipeline Mode: Integrated

Setting the .NET CLR Version to No Managed Code is critical. .NET Core manages its own runtime independently and does not rely on the IIS CLR. Selecting a CLR version here will cause application pool startup errors.

06 Create Website in IIS

  • In IIS Manager, right-click Sites → Click Add Website
  • Site Name: MyWebApi
  • Physical Path: point to your published output folder
  • Application Pool: select MyApiPool (created in Step 05)
  • Binding Port: 80 (or a custom port such as 5000)
  • Click OK to create the site

07 Configure Windows Firewall (For Custom Ports)

If your IIS website is configured to use a custom port (e.g., 5000), you must create an inbound firewall rule to allow traffic on that port. Without this step, external requests will be blocked silently by Windows Firewall.

  • Press Win + R, type wf.msc, and press Enter
  • Click Inbound Rules → New Rule
  • Select Port as the rule type → Click Next
  • Choose TCP and enter 5000 under Specific local ports
  • Select Allow the connection → Click Next
  • Apply to profiles: Domain, Private, and Public
  • Name the rule MyWebApi Port 5000 → Click Finish

After the rule is created, verify connectivity by navigating to:

http://your-server-ip:5000

Common Issue: HTTP Error 500.19 - Internal Server Error

Error Code: 0x8007000d  |  HTTP Status: 500.19 – Internal Server Error

This error consistently points to one of the following root causes:

  • Hosting Bundle not installed - or the version does not match the target framework
  • Corrupted or invalid web.config - IIS cannot parse the configuration file (see the sample below)
  • Missing AspNetCoreModuleV2 - module not registered after Hosting Bundle installation
  • Incorrect Application Pool configuration - CLR version not set to No Managed Code
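For reference, a framework-dependent publish generates a web.config roughly like the sketch below (the assembly name MyWebApi.dll is a placeholder). If this file references AspNetCoreModuleV2 but the Hosting Bundle is missing, or the XML is malformed, IIS cannot parse it and returns 500.19:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <location path="." inheritInChildApplications="false">
    <system.webServer>
      <handlers>
        <!-- This module is registered by the .NET Hosting Bundle -->
        <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
      </handlers>
      <aspNetCore processPath="dotnet" arguments=".\MyWebApi.dll"
                  stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout"
                  hostingModel="inprocess" />
    </system.webServer>
  </location>
</configuration>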

Diagnosis Tip: After each corrective action, run iisreset from an elevated command prompt and re-test. Most 500.19 errors resolve after reinstalling the correct Hosting Bundle version followed by an IIS restart.

Conclusion

The majority of HTTP 500.19 errors are environment configuration issues - not application code problems. By correctly installing IIS, precisely matching the Hosting Bundle version to your target framework, setting the Application Pool to No Managed Code, and opening the required ports in Windows Firewall, you can deploy your .NET Core Web API reliably and predictably on any Windows Server environment.

Follow these steps in sequence, validate each stage before proceeding to the next, and run iisreset after any configuration change to ensure changes take effect immediately.

.NET Architecture Patterns: A Complete Developer's Guide

Introduction

If you've been developing .NET applications for any length of time, you've probably encountered the same question over and over: "Which architecture pattern should I use for this project?"

The .NET ecosystem offers a dizzying array of architectural patterns: Repository, Unit of Work, CQRS, Clean Architecture, Onion Architecture, Vertical Slice, and more. Each has its advocates, each promises to solve your problems, and each can absolutely make things worse if applied incorrectly.

In this guide, I'll walk through the most important architecture patterns in C# and .NET, explain how each one works, and - most importantly - give you practical guidance on when to use each pattern based on your project's actual needs.

Part 1: Data Access Patterns

Let's start with patterns that govern how your application interacts with data. These are the foundation of most .NET applications.

1. Repository Pattern

What It Is

The Repository Pattern creates an abstraction layer between your business logic and data access logic. Instead of scattering database queries throughout your application, you centralize them in repository classes that expose methods like GetById, GetAll, Add, Update, and Delete.

Think of repositories as a collection-like interface to your data. Your business logic asks the repository for entities without knowing whether they come from SQL Server, Cosmos DB, or an API. The repository handles all the data retrieval details.
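As a minimal sketch - assuming EF Core, with a hypothetical Product entity and AppDbContext - a repository might look like this:

// Minimal Repository Pattern sketch (Product and AppDbContext assumed
// defined elsewhere). Business logic depends on the interface, never on
// EF Core directly.
using Microsoft.EntityFrameworkCore;

public interface IProductRepository
{
    Task<Product?> GetByIdAsync(int id);
    Task<IReadOnlyList<Product>> GetAllAsync();
    Task AddAsync(Product product);
}

public class ProductRepository : IProductRepository
{
    private readonly AppDbContext _db;
    public ProductRepository(AppDbContext db) => _db = db;

    public Task<Product?> GetByIdAsync(int id) =>
        _db.Products.FirstOrDefaultAsync(p => p.Id == id);

    public async Task<IReadOnlyList<Product>> GetAllAsync() =>
        await _db.Products.AsNoTracking().ToListAsync();

    public async Task AddAsync(Product product) =>
        await _db.Products.AddAsync(product);
}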

Key Benefits

  • Centralizes data access logic in one place
  • Makes your code more testable through interfaces
  • Easier to swap data sources without changing business logic
  • Enforces consistent data access patterns across teams

When to Use It

  • You're using Entity Framework Core and want to abstract away EF-specific code from your business layer
  • You need to switch between different data sources (SQL, NoSQL, APIs) or anticipate doing so
  • You want highly testable code without hitting the actual database in unit tests
  • Your team needs consistent, reusable data access patterns across the application
  • Complex querying logic that you want to encapsulate and reuse

When NOT to Use It

  • Simple CRUD applications where EF Core's DbContext already provides sufficient abstraction
  • You're using Dapper or raw SQL queries (Repository adds unnecessary complexity and overhead)
  • Your application has complex, custom queries that don't fit the generic repository pattern well
  • You're building a prototype or MVP where speed matters more than perfect architecture

2. Unit of Work Pattern

What It Is

Unit of Work maintains a list of objects affected by a business transaction and coordinates writing changes to the database. It ensures that all repository operations within a single business transaction share the same database context and get committed or rolled back together.

Imagine you're processing an order that requires updating inventory, creating an order record, and charging a payment. The Unit of Work ensures all these changes happen together - if any step fails, everything rolls back. It's the transactional glue that binds multiple repository operations.
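A minimal sketch, assuming the repository style from the previous pattern and one shared EF Core DbContext (IOrderRepository/OrderRepository are hypothetical counterparts to the product versions):

// Minimal Unit of Work sketch: repositories share one DbContext, and
// CommitAsync persists all pending changes in a single transaction.
public interface IUnitOfWork : IDisposable
{
    IProductRepository Products { get; }
    IOrderRepository Orders { get; }
    Task<int> CommitAsync();
}

public class UnitOfWork : IUnitOfWork
{
    private readonly AppDbContext _db;

    public UnitOfWork(AppDbContext db)
    {
        _db = db;
        Products = new ProductRepository(db);
        Orders = new OrderRepository(db);
    }

    public IProductRepository Products { get; }
    public IOrderRepository Orders { get; }

    // SaveChangesAsync wraps all tracked changes in one database transaction,
    // so inventory, order, and payment updates succeed or fail together.
    public Task<int> CommitAsync() => _db.SaveChangesAsync();

    public void Dispose() => _db.Dispose();
}

A handler would use Products and Orders freely, then call CommitAsync() exactly once at the end of the business operation.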

Key Benefits

  • Ensures transactional consistency across multiple operations
  • Coordinates commits across multiple repositories
  • Reduces database round-trips by batching changes
  • Provides a clear transaction boundary in your code

When to Use It

  • You have operations that span multiple repositories and need guaranteed transactional consistency
  • You're using the Repository Pattern and need coordinated saves across different entity types
  • Complex business operations that modify multiple entities and must succeed or fail as a unit
  • You want explicit control over when database changes are persisted

When NOT to Use It

  • EF Core's DbContext already implements Unit of Work - adding another layer is redundant
  • Simple applications with single-entity operations that don't need cross-repository coordination
  • Microservices architecture where each service has its own database (distributed transactions are different)
  • You're not using the Repository Pattern (Unit of Work is typically paired with repositories)

3. Specification Pattern

What It Is

The Specification Pattern encapsulates business rules and query logic into reusable specification objects. Instead of writing the same filter conditions repeatedly throughout your codebase, you define them once as specifications and compose them as needed.

For example, instead of writing "where product.IsActive && !product.IsDeleted && product.Price > 0" in multiple places, you create an "ActiveProductsSpecification" that encapsulates this logic. You can then combine specifications (active AND in price range AND in stock) to build complex queries.
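A minimal sketch of the idea (hypothetical Product entity; composition helpers like And() are omitted for brevity):

// Minimal Specification sketch: the rule lives in one place as an expression
// EF Core can translate, and can also be evaluated in memory for validation.
using System.Linq.Expressions;

public abstract class Specification<T>
{
    public abstract Expression<Func<T, bool>> ToExpression();

    public bool IsSatisfiedBy(T entity) => ToExpression().Compile()(entity);
}

public class ActiveProductsSpecification : Specification<Product>
{
    public override Expression<Func<Product, bool>> ToExpression() =>
        p => p.IsActive && !p.IsDeleted && p.Price > 0;
}

// Usage: dbContext.Products.Where(new ActiveProductsSpecification().ToExpression())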

Key Benefits

  • Makes business rules explicit and reusable
  • Easy to test business rules in isolation
  • Supports composing complex queries from simple building blocks
  • Reduces code duplication across queries

When to Use It

  • Complex filtering logic that gets reused across different queries and contexts
  • Business rules that need to be validated, tested, and maintained independently
  • Dynamic query building based on user input or varying business conditions
  • Domain-driven design projects where specifications are part of the domain model

When NOT to Use It

  • Simple, straightforward queries that don't get reused or aren't business-critical
  • When LINQ queries are already clear, readable, and don't duplicate logic
  • Small applications where the added abstraction outweighs the benefits

Part 2: Architectural Patterns

These patterns define the overall structure of your application: how layers communicate, where dependencies point, and how your code is organized at the highest level.

4. Layered (N-Tier) Architecture

What It Is

The classic approach to organizing code: horizontal layers where each layer has a specific responsibility. Typically you have a Presentation Layer (UI/Controllers), Business Logic Layer (Services), Data Access Layer (Repositories), and the Database. Each layer depends only on the layer directly below it, creating a top-down dependency flow.

This is the architecture pattern most developers learn first. It's straightforward: controllers call services, services call repositories, repositories talk to the database. Dependencies flow downward like water, and each layer is blissfully unaware of what's above it.

Key Benefits

  • Easy to understand and widely recognized
  • Clear separation of concerns by technical responsibility
  • Natural fit for traditional team structures (frontend, backend, data teams)
  • Works well for moderate complexity applications

When to Use It

  • Traditional enterprise applications with clear separation of UI, business logic, and data
  • Teams familiar with classic MVC or three-tier architecture patterns
  • Monolithic applications with moderate complexity that don't require advanced domain modeling
  • Internal business applications where speed of development matters more than perfect architecture

When NOT to Use It

  • Domain-driven design projects dependencies flow the wrong direction (infrastructure depends on domain)
  • Highly complex business domains requiring rich domain models with business logic
  • Microservices architecture where you need strong boundaries and independence
  • Applications where you frequently need to swap infrastructure components

5. Clean Architecture

What It Is

Clean Architecture inverts the traditional layering philosophy. Instead of having your database and infrastructure at the bottom supporting everything above, your business domain sits at the center, and everything else - UI, database, external services - depends on it. Infrastructure details are pushed to the outer layers.

Picture concentric circles: the innermost circle is your domain entities and business rules. The next layer out is your application use cases. Then comes the infrastructure layer (database, APIs, frameworks). Finally, the outermost layer is your UI. Dependencies point inward: outer layers know about inner layers, but inner layers know nothing about what's outside them. (The sketch after the layer list below shows this rule in code.)

The Four Layers

  • Domain Layer: Entities, value objects, domain events - your core business logic with zero dependencies
  • Application Layer: Use cases, commands, queries, DTOs - orchestrates domain logic
  • Infrastructure Layer: Database access, external APIs, file systems - implements interfaces defined in the domain
  • Presentation Layer: Web API, UI, controllers - handles user interaction
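Here's a minimal sketch of the dependency rule in code (all names hypothetical; EF Core assumed in the infrastructure layer). The domain defines both the entity and the repository interface; infrastructure implements that interface, never the reverse:

using Microsoft.EntityFrameworkCore;   // referenced only by the infrastructure class below

// Domain layer - pure C#, zero framework references.
public class Invoice
{
    public int Id { get; set; }
    public decimal Amount { get; set; }
    public DateTime DueDate { get; set; }

    // A business rule that can be tested without any database.
    public bool IsOverdue(DateTime today) => today > DueDate;
}

// Also part of the core: the contract outer layers must satisfy.
public interface IInvoiceRepository
{
    Task<Invoice?> GetAsync(int id);
}

// Infrastructure layer - depends inward on the domain.
public class EfInvoiceRepository : IInvoiceRepository
{
    private readonly AppDbContext _db;   // AppDbContext assumed defined elsewhere
    public EfInvoiceRepository(AppDbContext db) => _db = db;

    public Task<Invoice?> GetAsync(int id) =>
        _db.Invoices.FirstOrDefaultAsync(i => i.Id == id);
}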

Key Benefits

  • Business logic is completely independent of frameworks and infrastructure
  • Easy to test - you can test business rules without touching databases or UI
  • Highly maintainable for long-lived applications
  • Can swap infrastructure components (database, messaging) without touching business logic

When to Use It

  • Complex business domains with rich, evolving business logic
  • Long-lived applications where business rules change frequently but infrastructure stays stable
  • Need to swap infrastructure components (different databases, message queues, cloud providers)
  • Team values testability, maintainability, and separation of concerns
  • Enterprise applications with multiple teams working on different aspects

When NOT to Use It

  • Simple CRUD applications - this is massive architectural overkill
  • Prototypes or MVPs that need to ship quickly and prove market fit
  • Small teams unfamiliar with domain-driven design principles
  • Projects with tight deadlines where speed matters more than perfect architecture

6. Onion Architecture

What It Is

Onion Architecture is similar to Clean Architecture but emphasizes the dependency rule even more strictly. Think of it as concentric circles (like an onion) where the domain model is the innermost circle, and all dependencies point inward toward the domain. No inner layer ever references an outer layer.

Key Differences from Clean Architecture

  • More explicit emphasis on domain services as a separate layer
  • Infrastructure and UI are strictly in the outermost layer with no exceptions
  • Application services orchestrate domain logic but never contain business rules
  • More prescriptive about what belongs in each layer

When to Use It

Same scenarios as Clean Architecture. The choice between them is mostly philosophical: Onion Architecture tends to be more prescriptive and strict about layer organization, while Clean Architecture allows slightly more flexibility. Both achieve the same goal: domain-centric design with proper dependency management.

7. Hexagonal Architecture (Ports and Adapters)

What It Is

Hexagonal Architecture (also called Ports and Adapters) focuses on isolating the application core from external concerns through well-defined interfaces. The hexagon represents your application's business logic, with ports (interfaces) on each side connecting to different adapters (implementations).

Think of your application as a hexagon with multiple sides. Each side has a port - an interface that defines how the outside world can interact with your application. Adapters plug into these ports to provide actual implementations. You might have a port for payment processing with adapters for Stripe, PayPal, and Square. Your business logic only knows about the port, not which adapter is plugged in. (The sketch after the key concepts below makes this concrete.)

Key Concepts

  • Ports: Interfaces defined by your application core (e.g., IPaymentGateway, IEmailService)
  • Adapters: Concrete implementations that plug into ports (e.g., StripeAdapter, SendGridAdapter)
  • Primary Adapters: Drive the application (Web API, Console UI)
  • Secondary Adapters: Driven by the application (Database, Email, External APIs)
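Continuing the payment example, a minimal sketch (all names hypothetical; the Stripe call is stubbed out):

// Port: an interface owned by the application core.
public interface IPaymentGateway
{
    Task<bool> ChargeAsync(decimal amount, string customerId);
}

// Secondary adapter: plugs Stripe into the port. Swapping to PayPal means
// writing another adapter - the core never changes.
public class StripeAdapter : IPaymentGateway
{
    public async Task<bool> ChargeAsync(decimal amount, string customerId)
    {
        await Task.Delay(10); // stand-in for a real Stripe API call
        return true;
    }
}

// Application core: knows only the port, not which adapter is plugged in.
public class CheckoutService
{
    private readonly IPaymentGateway _payments;
    public CheckoutService(IPaymentGateway payments) => _payments = payments;

    public Task<bool> PlaceOrderAsync(decimal total, string customerId) =>
        _payments.ChargeAsync(total, customerId);
}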

Key Benefits

  • Extremely testable - mock any adapter easily
  • Swap external services without touching core logic
  • Business logic remains technology-agnostic
  • Great for applications with many integrations

When to Use It

  • Applications with multiple external integrations (payment gateways, shipping providers, notification services)
  • Need to frequently swap external services based on configuration or business needs
  • Testing is critical and you need to easily mock all external dependencies
  • Building a system that might have different frontends (web, mobile, desktop, CLI)

When NOT to Use It

  • Simple applications with few external dependencies
  • When you're certain about your technology choices and won't need to swap them

8. Vertical Slice Architecture

What It Is

Vertical Slice Architecture is a radical departure from traditional layering. Instead of organizing code by technical layers (controllers, services, repositories), you organize by features or use cases. Each feature is a vertical slice through all layers, containing everything needed for that specific functionality: request handling, business logic, data access, and response.

For example, instead of having a Products folder with controllers, a separate Services folder with ProductService, and a Repositories folder with ProductRepository, you'd have a CreateProduct folder containing everything needed to create a product: the command, handler, validator, and even the endpoint definition. Each feature stands alone.
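As a minimal sketch of a single slice - assuming ASP.NET Core minimal APIs and EF Core, with hypothetical names - everything for the feature lives in one place:

// Features/CreateProduct.cs - the entire feature in one file: request,
// handler, and endpoint. Deleting the feature means deleting this file.
public static class CreateProduct
{
    public record Command(string Name, decimal Price);

    public static async Task<int> Handle(Command cmd, AppDbContext db)
    {
        var product = new Product { Name = cmd.Name, Price = cmd.Price };
        db.Products.Add(product);
        await db.SaveChangesAsync();
        return product.Id;
    }

    // Registered once from Program.cs: CreateProduct.MapEndpoint(app);
    public static void MapEndpoint(IEndpointRouteBuilder app) =>
        app.MapPost("/products", async (Command cmd, AppDbContext db) =>
            Results.Ok(await Handle(cmd, db)));
}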

Key Benefits

  • High cohesion - everything related to a feature is together
  • Low coupling - features are independent and don't share code
  • Easy to understand - navigate by feature, not by technical layer
  • Teams can own features end-to-end
  • Easier to delete unused features - just remove the folder

When to Use It

  • Feature-rich applications where features are relatively independent
  • Teams organized around features or product areas rather than technical roles
  • Rapid feature development with minimal cross-feature dependencies
  • Microservices where each service has focused, cohesive functionality
  • Applications using CQRS where commands and queries are naturally isolated

When NOT to Use It

  • Significant shared business logic across many features (leads to code duplication)
  • Complex domain logic requiring rich domain models with relationships
  • Teams organized by technical specialization (frontend, backend, data)

Part 3: Behavioral Patterns

These patterns govern how objects communicate and how responsibilities are distributed within your application.

9. CQRS (Command Query Responsibility Segregation)

What It Is

CQRS separates read operations (queries) from write operations (commands). Instead of using the same model, methods, and sometimes even database for both reading and writing data, you create distinct models optimized for each purpose.

On the write side (commands), you might have a normalized relational database optimized for transactional integrity. On the read side (queries), you could have denormalized views, read-optimized databases, or even cached projections. The two sides can evolve independently based on their specific performance and complexity needs.

Levels of CQRS

  • Simple CQRS: Different models and handlers for commands vs queries, same database (sketched below)
  • CQRS with read models: Denormalized read tables optimized for queries
  • Full CQRS: Separate databases for reads and writes with eventual consistency
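A minimal sketch of that first level, assuming EF Core and hypothetical names: the command mutates state through the domain, while the query bypasses the domain model and projects straight to a screen-shaped DTO.

using Microsoft.EntityFrameworkCore;

// Write side: a command with a handler that enforces business rules.
public record CreateOrderCommand(int CustomerId, decimal Total);

public class CreateOrderHandler
{
    private readonly AppDbContext _db;
    public CreateOrderHandler(AppDbContext db) => _db = db;

    public async Task<int> Handle(CreateOrderCommand cmd)
    {
        var order = new Order { CustomerId = cmd.CustomerId, Total = cmd.Total };
        _db.Orders.Add(order);
        await _db.SaveChangesAsync();
        return order.Id;
    }
}

// Read side: no domain model, just a projection optimized for display
// (Order.Customer is an assumed navigation property).
public record OrderSummaryDto(int OrderId, string CustomerName, decimal Total);

public class GetOrderSummaryHandler
{
    private readonly AppDbContext _db;
    public GetOrderSummaryHandler(AppDbContext db) => _db = db;

    public Task<OrderSummaryDto?> Handle(int orderId) =>
        _db.Orders
           .Where(o => o.Id == orderId)
           .Select(o => new OrderSummaryDto(o.Id, o.Customer.Name, o.Total))
           .FirstOrDefaultAsync();
}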

Key Benefits

  • Read and write models can be optimized independently
  • Scales reads and writes separately
  • Simpler query logic - no complex joins for reporting
  • Clear separation of intent (changing data vs reading data)

When to Use It

  • Complex domains where read and write requirements differ significantly
  • High-performance applications needing optimized read models for reporting
  • Systems with eventual consistency requirements between read and write sides
  • Different scaling needs for reads vs. writes (e.g., high read volume, low write volume)
  • Combining with Event Sourcing for audit trails and temporal queries

When NOT to Use It

  • Simple CRUD applications where the added complexity provides no benefit
  • Strong consistency requirements across all operations
  • Small applications where a single model works perfectly fine
  • Teams not ready to handle eventual consistency complexity

10. Mediator Pattern

What It Is

The Mediator Pattern reduces coupling between components by introducing a mediator object that handles all communication. Instead of your controllers directly calling multiple services, they send requests to a mediator, which routes them to the appropriate handler. In .NET, MediatR is the most popular implementation.

Without Mediator, your controller knows about OrderService, EmailService, InventoryService, PaymentService - it's tightly coupled to all of them. With Mediator, your controller only knows about the mediator and sends a single command. The mediator routes it to a handler that knows about all those services. Your controller stays thin and focused solely on HTTP concerns.
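A minimal sketch with MediatR (IRequest, IRequestHandler, and ISender are MediatR's real interfaces; the command and handler names are hypothetical):

using MediatR;
using Microsoft.AspNetCore.Mvc;

// The request and its single-purpose handler.
public record PlaceOrderCommand(int CustomerId, decimal Total) : IRequest<int>;

public class PlaceOrderHandler : IRequestHandler<PlaceOrderCommand, int>
{
    public Task<int> Handle(PlaceOrderCommand request, CancellationToken cancellationToken)
    {
        // Orchestrate OrderService, PaymentService, InventoryService here -
        // the controller never sees any of them.
        return Task.FromResult(42); // placeholder order id
    }
}

// The controller stays thin: one dependency, one Send().
[ApiController]
[Route("api/orders")]
public class OrdersController : ControllerBase
{
    private readonly ISender _mediator;
    public OrdersController(ISender mediator) => _mediator = mediator;

    [HttpPost]
    public async Task<IActionResult> Place(PlaceOrderCommand command) =>
        Ok(await _mediator.Send(command));
}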

Key Benefits

  • Controllers/endpoints become thin and focused only on HTTP concerns
  • Handlers are single-purpose and easy to test in isolation
  • Supports pipeline behaviors for cross-cutting concerns (logging, validation, caching)
  • Works perfectly with CQRS pattern

When to Use It

  • You want thin controllers/endpoints focused only on HTTP/presentation concerns
  • Using the CQRS pattern - MediatR works perfectly with commands and queries
  • Need cross-cutting concerns like logging, validation, caching via pipeline behaviors
  • Testing individual handlers in isolation without controller overhead
  • Large applications where organizing by request/response pairs makes sense

When NOT to Use It

  • Simple applications where direct service injection is clearer and more straightforward
  • Team is unfamiliar with the pattern and doesn't have time for the learning curve
  • You prefer seeing all dependencies explicitly in constructor parameters

Part 4: Choosing the Right Pattern - A Decision Framework

Now that we've covered the patterns, let's talk about how to actually choose. Here's a practical decision framework based on your project's real characteristics.

Project Type | Recommended Pattern | Why
Simple CRUD API | Minimal API + EF Core (no pattern) | Keep it simple - patterns would add overhead without benefit
Internal Business App | Layered Architecture + Repository | Familiar pattern, moderate complexity, fast delivery
Complex Domain Logic | Clean Architecture + DDD | Rich domain models, evolving business rules, long-term maintainability
Microservice | Vertical Slice + CQRS | Feature isolation, independent deployment, minimal coupling
High-Performance System | CQRS + Event Sourcing | Optimized read models, independent scaling, audit trails
Integration-Heavy App | Hexagonal + Ports/Adapters | Easy to swap external services, highly testable
E-commerce Platform | Clean Architecture + CQRS + MediatR | Complex workflows, separate read/write needs, testability
Rapid Prototyping/MVP | No pattern - ship fast | Validate market fit first, refactor later if needed

Common Pattern Combinations That Work Well Together

Patterns aren't mutually exclusive. In fact, some patterns complement each other beautifully. Here are proven combinations used in production systems:

Combination 1: Clean Architecture + CQRS + MediatR

Perfect for: Enterprise applications with complex business domains

Why it works

  • Clean Architecture provides the overall structure and dependency flow
  • CQRS separates reads and writes at the application layer
  • MediatR handles command/query dispatch and cross-cutting concerns
  • Domain layer stays pure with rich business logic

Combination 2: Vertical Slice + MediatR + Feature Folders

Perfect for: Modern microservices and feature-focused teams

Why it works

  • Each feature is completely self-contained with everything it needs
  • MediatR provides consistent structure within each slice
  • Minimal cross-feature dependencies allow independent deployment
  • Easy to understand and navigate by business capability

Combination 3: Layered Architecture + Repository + Unit of Work

Perfect for: Traditional business applications with established teams

Why it works

  • Proven pattern familiar to most .NET developers
  • Good separation of concerns without excessive complexity
  • Transactional consistency via Unit of Work across repositories
  • Testable through repository interfaces

Combination 4: Hexagonal Architecture + Specification Pattern

Perfect for: Applications with complex business rules and multiple integrations

Why it works

  • Hexagonal keeps core logic isolated from infrastructure
  • Specifications encapsulate business rules that work across different adapters
  • Easy to test rules independently of infrastructure

Red Flags: Warning Signs You're Using the Wrong Pattern

Sometimes the pattern itself isn't the problem - it's the wrong pattern for your situation. Watch out for these warning signs:

Too Much Abstraction

  • You have 5+ layers of indirection for a simple CRUD app
  • Generic repositories with only one implementation that never changes
  • More boilerplate and ceremony than actual business logic
  • Developers spend more time navigating the architecture than solving problems

Fighting the Framework

  • Wrapping EF Core in repositories that just delegate every call to DbContext
  • Complex workarounds and hacks to make the pattern fit your use case
  • Team constantly confused about where code should go
  • Documentation explaining the architecture is longer than the actual code

Premature Optimization

  • Implementing CQRS with Event Sourcing for a weekend prototype
  • Clean Architecture with full DDD for a simple internal tool
  • Microservices architecture when a single API would suffice
  • Planning for scale you'll never reach while ignoring features users actually need

Copy-Paste Architecture

  • Using a pattern because you saw it in a blog post or conference talk
  • Implementing a pattern without understanding why it exists
  • "This is how we always do it" without questioning if it fits this project

Practical Advice: Start Simple, Evolve Gradually

Here's the approach I recommend for most projects, learned from years of both under-engineering and over-engineering applications:

Phase 1: Start with the Simplest Thing That Could Work

  • Use Minimal APIs or controllers directly with EF Core - no abstraction layers
  • No repositories unless you have a concrete, immediate reason for them
  • Focus on delivering features and understanding the domain
  • Let pain points emerge naturally through actual development

Phase 2: Add Patterns When Pain Points Become Clear

  • Controllers getting fat with too much logic? Add services or command handlers
  • Duplicate query logic everywhere? Consider Repository or Specification
  • Complex business rules scattered? Introduce domain services
  • Different read and write needs? Evaluate CQRS

Phase 3: Refactor to Full Pattern Only If Genuinely Needed

  • Project growing significantly in complexity? Consider Clean Architecture
  • Performance bottlenecks in reads vs writes? Full CQRS implementation
  • Multiple teams working on same codebase? Look at Vertical Slice
  • Growing list of external integrations? Hexagonal Architecture pays off

Key Principle: Let real problems drive architectural decisions, not theoretical ones. Your architecture should solve actual pain points you're experiencing, not hypothetical problems you might face someday.

Final Thoughts: Pattern Selection is Context-Dependent

Architecture patterns are tools in your toolbox, not commandments carved in stone. The "best" pattern is the one that solves your actual problems without creating new ones that are worse.

Don't choose Clean Architecture because it sounds impressive on your resume. Don't use CQRS because you read about it in this blog post (or any blog post). Don't implement microservices because Netflix does it. Choose patterns based on your project's real constraints: team size, domain complexity, timeline pressures, performance requirements, and long-term maintenance needs.

The best architecture is one your team can understand, maintain, and evolve over time. Sometimes that's a sophisticated multi-layered Clean Architecture with CQRS and Event Sourcing. Sometimes it's Minimal APIs with EF Core and some well-organized service classes. Both can be exactly right for their context.

Remember: premature abstraction is just as dangerous as premature optimization. Start simple. Add complexity only when simplicity fails. Refactor toward patterns when you feel the pain of not having them, not because you think you might need them someday.

Your future self - the one maintaining this code at 2 AM, trying to fix a production bug - will thank you for choosing clarity over cleverness.

Quick Reference Cheat Sheet

Pattern | Best Use Case | Avoid When
Repository | Multiple data sources, need for testability | Simple CRUD with EF Core
Unit of Work | Multi-repository transactions | EF Core already provides it
Clean Architecture | Complex domain, long-lived apps | Prototypes, MVPs, simple apps
CQRS | Different read/write optimization needs | Simple CRUD, strong consistency
Vertical Slice | Feature-focused, independent features | Lots of shared business logic
Hexagonal | Many external integrations | Few external dependencies
Mediator | Thin controllers, CQRS, cross-cutting concerns | Direct service calls sufficient
Layered | Traditional enterprise, familiar teams | DDD projects, microservices

Remember: The goal is to ship working software that solves real problems, not to build the perfect architecture. Choose patterns that serve your users and your team.

March 3, 2026

Power Platform CI/CD Using Azure DevOps: Complete Step-by-Step Guide

Introduction

When working with multiple environments (Development, UAT, Production), manually moving Power Platform solutions can slow down your delivery cycle. With Azure DevOps and Microsoft's Power Platform Build Tools, we can automate:

  • Exporting the solution from the source
  • Storing and unpacking it in a repository
  • Packing and importing it into the target environment

Step 1: Azure App Registration

Go to Azure Portal and open App registrations.

Click on New registration and name your app (e.g., Pipelines CICD Deployment Power Apps).

After registration:

Save the Application (client) ID

Save the Directory (tenant) ID

Navigate to API permissions and add:

  • Azure DevOps → user_impersonation
  • Dynamics CRM → user_impersonation
  • PowerApps Runtime → user_impersonation

Go to Certificates & Secrets → New client secret and store it securely.

Azure App Registration screenshot

Step 2: Power Platform Setup

Create Your Solution and Components (Source Environment)

Start by logging into make.powerapps.com and select your source environment (e.g., "Dev" or "Default").

Click on Solutions → New solution.

Give it a name (e.g., Client Feedback Solution) and select a publisher.

After creating the solution, add your components — for example:

A Dataverse table (e.g., Client Feedback)

A Canvas app that connects to this table and allows users to submit feedback.

Once created, be sure to save and publish your solution.

Power Apps Solution screenshot
Grant Deployment Access: Configure Application Users

To enable Azure DevOps to interact with Power Platform environments, you must authorize your registered Azure App by creating Application Users with the right permissions.

Here's how to do it:

Go to the Power Platform Admin Center.

In the left menu, click on Environments, then choose your source environment.

Click on Settings → Users + permissions → Application users.

Application Users settings screenshot

Click + New app user and follow these steps:

Select the app: Choose the Azure AD app you registered in Step 1.

Business Unit: Choose the root business unit.

Security Role: Assign the System Administrator role (this is required for full deployment access).

Then click Save.

After saving, open the app user you just added and click the refresh button (tooltip: "Update application name from Microsoft Entra ID").

Repeat the same steps for your target environment (e.g., "Test" or "Production").

New app user configuration screenshot

This setup ensures your DevOps pipeline can export and import solutions securely across environments.


Step 3: DevOps Project Configuration

Go to Azure DevOps and create a new private project.

Azure DevOps new project screenshot

Install the Microsoft Power Platform Build Tools from the Azure Marketplace

Power Platform Build Tools marketplace screenshot

Create Service Connections for both Source and Target environments using:

  • Client ID
  • Tenant ID
  • Client Secret
  • Environment URLs
Service connections page screenshot

Go to Project Settings (bottom-left gear icon) → Service connections → Click "Create service connection".

Search Power Platform service connection screenshot

Search for "Power Platform", select it, and click Next.

Source Connection configuration screenshot

Fill in the Source Connection details: select "Application Id and client secret" as the authentication method, enter the Source environment URL, Tenant ID, Application (Client) ID, Client Secret, and name it "Source Connection". Check "Grant access permission to all pipelines" and click Save. Repeat the same steps for the Target Connection using the Target environment URL.

Both service connections created screenshot

Both Source Connection and Target Connection are now created and visible under Service connections.


Step 4: Build Pipeline (Continuous Integration)

Navigate to Repos -> Files -> Initialize a Git Repository and configure permissions.

Initialize Git repository screenshot

In Project Settings → Repositories → Security, set the Contribute permission to Allow for the pipeline's build service user.

Repository security settings screenshot
Contribute permission allow screenshot
Install a Self-Hosted Agent for Parallel Pipelines (or Use a Microsoft-Hosted Agent)
Step 1: Log in to Azure DevOps

Go to https://dev.azure.com with your account. Once you're in your organization's portal, click on Organization Settings in the bottom left corner.

Organization Settings screenshot
Step 2: Create a new Pool or use Default

Inside Organization Settings, select Agent Pools. Here you'll see a list of pools - typically a Default pool (for self-hosted agents) and the Microsoft-hosted Azure Pipelines pool are already there.

To create a new pool, click New Pool or reuse Default.

Agent Pools screenshot
Step 3: Download the Self-Hosted Agent

Select Default or your preferred pool. Click New Agent. This will prompt you to Download the agent package (for 64-bit or 32-bit).

Download agent package screenshot

Save this ZIP file to your machine.

Step 4: Extract and Configure

Extract the downloaded ZIP and inside, you'll see files like:

config.cmd

run.cmd

bin

Extracted agent files screenshot
Step 5: Run config.cmd as administrator

Navigate to your extracted directory. Then run: config.cmd

✅ This will launch a configuration script in cmd.

config.cmd running screenshot
Step 6: Provide Organization and Authentication

When prompted:

Enter your Server URL — typically your Azure DevOps organization's URL (e.g.: https://dev.azure.com/tenant).

Select Personal Access Token for authentication.

Server URL and authentication screenshot
Step 7: Generate and Provide PAT (Personal Access Token)

Go to your Azure DevOps profile -> Personal Access Tokens.

Personal Access Tokens menu screenshot

Click on New Token. Create a new PAT with an expiration as needed - Full Access works, though scoping it to Agent Pools (Read & manage) is sufficient and safer. Copy this PAT and paste it into the configuration script.

New PAT creation screenshot
Step 8: Name Your Agent

It will ask:

"Enter agent pool (press enter for default)"

"Enter agent name"

Consider a custom, descriptive name (for example: self-hosted-agent).

Step 9: Finalize and Enable as a Service

It will scan for tool capabilities and connect successfully. Then:

Confirm working directory with Enter.

Say Y to enable service installation (run agent as service).

Say Y to SERVICE_SID_TYPE_UNRESTRICTED when prompted.

When asked for the service account, enter credentials or press Enter to use the default NETWORK SERVICE account.

Agent service configuration screenshot
Step 10: Start the Service

Once configured, you can start the service:

Open Services (press Windows + S, then search Services -> run as admin).

Look for Azure Pipelines Agents with your agent's name.

Right Click on it -> Start the service if it's not already running.

Windows Services agent screenshot
Step 11: Confirm Online in Azure DevOps

Go back to Azure DevOps -> Organization Settings -> Agent Pools -> [Your Pool]. Your new agent should appear as Online, ready to be used in your pipeline.

Congratulations! Your self-hosted agent is up and running. It's now available for your pipeline's parallel job execution in your CI process.

Add App to Source Control

Create a new pipeline using the Classic Editor.

Select Empty job.

Add the tasks below in order.

Add a variable

On the Variables tab, add a pipeline variable named SolutionName and set its value to your solution's internal name; the tasks that follow reference it as $(SolutionName).

Pipeline classic editor screenshot
Pipeline variable configuration screenshot
Solution name variable screenshot
Power Platform Tool Installer

Search for "Power Platform Tool Installer" and add it as the first task.

Power Platform Tool Installer task screenshot
Add Power Platform Export Solution

Search for "export" and add the "Power Platform Export Solution" task.

Export Solution task screenshot

Set the following properties

Export Solution properties screenshot

Configure the Export Solution task: set Authentication type to Service Principal, choose the Source Connection, leave Environment URL as $(BuildTools.EnvironmentUrl), set Solution Name to $(SolutionName), and set Solution Output File to $(Build.StagingDirectory)/$(SolutionName)_unmanaged.zip. Leave "Export as Managed" unchecked.
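For intuition (and for testing locally), here is roughly what this task does, expressed with the Power Platform CLI. The sketch assumes you have already run pac auth create against the source environment; the solution name reuses the example from this post.

rem export the solution from the source environment (unmanaged by default)
pac solution export --name PipelinesAzureDevOpsDemo --path .\PipelinesAzureDevOpsDemo_unmanaged.zip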

Unpack Solution

Search for "unpack" and add the "Power Platform Unpack Solution" task.

Unpack Solution task screenshot

Set the following properties

Unpack Solution properties screenshot

Configure the Unpack Solution task: set Solution Input File to $(Build.StagingDirectory)/$(SolutionName)_unmanaged.zip, Target Folder to $(Build.SourcesDirectory)/$(SolutionName)/Unmanaged, and Type of Solution to "Unmanaged".
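The equivalent Power Platform CLI sketch, using the same zip-to-folder mapping as the task (paths are placeholders):

rem unpack the exported zip into source-control-friendly component files
pac solution unpack --zipfile .\PipelinesAzureDevOpsDemo_unmanaged.zip --folder .\PipelinesAzureDevOpsDemo\Unmanaged --packagetype Unmanaged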

Publish Build Artifacts

Search for the "Publish build artifacts" and add that to steps.

Publish Build Artifacts task screenshot

Set the following properties.

Publish Artifacts properties screenshot

Set the Path to Publish to $(Build.SourcesDirectory)/$(SolutionName)/Unmanaged. This publishes the unpacked solution components as a build artifact.

Command Line Script

Search for the "Command line" and add that to steps

Command Line task screenshot

Add the following script in the command line task:

echo commit all changes

rem identify the committer for this automated commit
git config user.email "test@test.com"
git config user.name "Name"
rem create or reset the main branch, stage everything, and commit
git checkout -B main
git add --all
git commit -m "code commit"
rem push using the pipeline's OAuth token for authentication
git -c http.extraHeader="AUTHORIZATION: Bearer $(System.AccessToken)" push origin main
Command line script configuration screenshot

Remember to:

Use the $(SolutionName) variable rather than hard-coding the solution name.

Enable "Allow scripts to access OAuth token" for Git push.

Allow scripts to access OAuth token screenshot
Save and Run

Click Save and Queue

Save and Queue screenshot

Adjust the settings if needed, add a commit message, and click Save and run.

Run pipeline dialog screenshot

Check the logs to confirm each task completed.

Pipeline logs screenshot

The build process has now completed successfully.

Build pipeline succeeded screenshot

Step 5: Release Pipeline (Continuous Deployment)

Go to Releases → New Release Pipeline.

New release pipeline screenshot

Navigate to Pipelines → Releases in the left menu. Click "New pipeline" to create a new release pipeline.

Release pipeline view screenshot

The new release pipeline view shows two sections: Artifacts (left) and Stages (right). We need to configure both.

Add your build artifact as a source.

Add artifact source screenshot

Click "+ Add" under Artifacts. Select "Azure Repos Git" as the source type, choose your project, select the repository and "main" branch, set Default version to "Latest from the default branch", and click Add.

Add stage empty job screenshot

After the artifact is linked, click "+ Add a stage" under Stages. Select "Empty job" from the template list.

Name the stage screenshot

Name the stage (e.g., "Stage 2") and click on "1 job, 0 task" to start adding tasks.

Rename stage to First Release screenshot

Rename the stage to "First Release" under the Tasks tab. This is where you'll add the release tasks.

Pipeline variable SolutionName screenshot

Go to the Variables tab and add a pipeline variable: Name = SolutionName, Value = your solution's internal name (e.g., PipelinesAzureDevOpsDemo).

In the stage:

Tool Installer
Release Tool Installer task screenshot

Search for "Power Platform Tool Installer" and click Add. This must be the first task in the release stage.

Pack Solution
Pack Solution search screenshot

Search for "pack" and add the "Power Platform Pack Solution" task.

Pack Solution browse source folder screenshot

In the Pack Solution task, click the browse button (...) next to Source Folder. Navigate to the linked artifact's Unmanaged folder inside your solution directory.

Pack Solution properties screenshot

Configure the Pack Solution properties: set Source Folder to $(System.DefaultWorkingDirectory)/_Power Apps CI-CD Pipeline/$(Build.SourcesDirectory)/$(SolutionName)/Unmanaged, Solution Output File to $(Build.StagingDirectory)/$(SolutionName).zip, and Type of Solution to "Unmanaged".
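Pack is the inverse of the build pipeline's unpack step. A minimal Power Platform CLI sketch with placeholder paths:

rem re-create the solution zip from the unpacked source files
pac solution pack --zipfile .\PipelinesAzureDevOpsDemo.zip --folder .\PipelinesAzureDevOpsDemo\Unmanaged --packagetype Unmanaged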

Import Solution
Import Solution search screenshot

Search for "import" and add the "Power Platform Import Solution" task.

Import Solution properties screenshot

Configure the Import Solution task: set Authentication type to "Service Principal/client secret", select the Target Connection as the service connection, leave Environment URL as $(BuildTools.EnvironmentUrl), and set Solution Input File to $(Build.StagingDirectory)/$(SolutionName).zip. Check "Import solution as asynchronous operation".
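The CLI equivalent is sketched below; it assumes pac auth create has been run against the target environment, and --async mirrors the "Import solution as asynchronous operation" checkbox.

rem import the packed solution into the target environment asynchronously
pac solution import --path .\PipelinesAzureDevOpsDemo.zip --async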

Publish Customizations
Publish Customizations search screenshot

Search for "publish" and add the "Power Platform Publish Customizations" task.

Publish Customizations properties screenshot

Configure Publish Customizations: set Authentication type to "Service Principal/client secret", select the Target Connection, and leave the Environment URL as default. Check "Publish Customizations as asynchronous operation".
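The CLI equivalent of this final task simply publishes all customizations in the connected (target) environment, making the imported changes visible to users.

rem publish all customizations in the connected environment
pac solution publish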

Save release pipeline screenshot

Click Save at the top-right. Add a comment (e.g., "Release Created") and click OK.

Create a release.
All four tasks visible screenshot

Once saved, all four tasks are visible in the stage. Click "Create release" at the top-right.

Create new release dialog screenshot

In the "Create a new release" dialog, confirm the stage ("First Release"), verify the artifact version, add a release description (e.g., "Prod Release"), and click Create.

Release created banner screenshot

The release is created. A banner confirms "Release-1 has been created."

Monitor logs and verify the deployment in the target environment.

Deploy stage dropdown screenshot

Click the Deploy dropdown and select "Deploy stage" or "Deploy multiple" to trigger the deployment.

Deploy release panel screenshot

In the Deploy release panel, select the "First Release" stage, optionally add a comment, and click Deploy.

Release in progress screenshot

The release is now in progress. You can see the stage status showing "In progress" with a task counter and timer.

Release succeeded screenshot

The release has succeeded! The "First Release Stage" shows a green "Succeeded" status.

Solution in target environment screenshot

Verify the deployment by navigating to the target environment in Power Apps (make.powerapps.com). The solution (e.g., "Pipelines Azure DevOps…") now appears in the Solutions list, confirming a successful CI/CD deployment.


Conclusion

Congratulations! You now have an end-to-end automated pipeline for deploying Power Platform solutions using Azure DevOps. This setup:

  • Reduces manual effort
  • Ensures consistency across environments
  • Improves delivery speed and confidence

With this CI/CD process, you're better equipped to handle modern ALM practices for Power Platform.

If you need assistance implementing Power Platform ALM or automating enterprise deployments, feel free to contact our SharePoint & Power Platform consulting team here.