April 10, 2025

Restoring Windows Server from Backup – Bare Metal and Active Directory Recovery

Windows Server Backup Guide

Introduction

If your server crashes or Active Directory becomes corrupt, restoring from backup ensures minimal downtime. This guide explains the steps for Bare Metal Recovery and System State recovery.

📂 If you haven't created a backup yet, start with our Windows Server Backup Guide

Bare Metal Recovery (BMR) Steps

  1. Prepare a bootable Windows USB drive, insert it, and boot the server from it.
  2. Follow the on-screen instructions to configure language, time, and keyboard, then click Next.
  3. Select Repair your computer.
  4. Select Troubleshoot.
  5. Select Advanced options.
  6. Select System Image Recovery.
  7. Select your backup path (for demo purposes, we select a local drive), then click Next.
  8. Choose Additional restore options and click Next.
  9. Click Finish.
  10. Confirm that all disks will be formatted and replaced by clicking Yes.
  11. The restore process begins.
  12. After the restoration completes, the system restarts automatically.
  13. After the restart, log in using a Domain Administrator account.
  14. The entire server is now restored, including the OS, system state, installed applications, and data.

System State Backup Restore (Active Directory)

  1. Launch Server Manager, navigate to Tools, select Windows Server Backup, and click Recover.
  2. Select your backup location and click Next.
  3. Choose the backup date and click Next.
  4. Select System State as the recovery type, then click Next.
  5. Select the location for system state recovery and click Next.
  6. When prompted, restart into Active Directory Repair Mode and sign in using DSRM authentication.

Note: Before restarting, ensure Active Directory Repair Mode is enabled and DSRM authentication is set up. If not, refer to the guide on resetting the DSRM password.

DSRM Authentication Password Reset

Before you reset the DSRM password, note that the DSRM password is the Directory Services Restore Mode administrator password set when the server was promoted to a domain controller.

Reset the Password Using PowerShell

Open PowerShell as administrator and execute the following commands:

 ntdsutil  
 set dsrm password  
 reset password on server null  
  • Enter the new password and confirm it. ntdsutil confirms the password has been set successfully.
  • Type q (quit) twice to exit. The DSRM password has now been reset.

Enable Active Directory Repair Mode

  • Open the Run command: msconfig
  • Navigate to Boot tab
  • Check Safe boot and Active Directory Repair

Restart your system and press F8.

Select Directory Services Restore Mode

Enter the DSRM administrator password set above.

Windows Server Backup and Recovery

  • Launch Server Manager
  • Navigate to Tools > Windows Server Backup
  • Click on Recover
  • Select the backup date, time, and location.
  • Choose System State and click Next.
  • Select Original Location and confirm.
  • Click Recover and then Yes. Recovery progress starts.

Disable Active Directory Repair Mode

  • Open Run: msconfig
  • Go to the Boot tab and uncheck Safe boot

Final Steps

  • Log in using your domain name and administrator account.
  • Confirm that system recovery has been completed successfully.
  • Check if Active Directory Users, OUs, Groups, etc., have been restored.

Troubleshooting

During an in-place upgrade of Windows Server, you may encounter two issues:

FSMO (Flexible Single Master Operations)

We hit an FSMO role error where an FSMO role holder was incorrectly identified. To resolve:

  • Go to Active Directory Users and Computers
  • Navigate to Domain Controllers
  • Right-click the incorrectly identified FSMO role holder and delete it.

Run the following command to check FSMO roles:

netdom query fsmo

Hyper-V Issue

After upgrading, virtual machines lost internet connectivity. The solution is to reinstall the Hyper-V role.

Windows Server Backup – Step-by-Step Guide to Safeguard Your System

Windows Server Backup Guide

Introduction

Data loss can be catastrophic for any organization, making a reliable backup and restore strategy essential for maintaining business continuity. Windows Server provides built-in tools to help administrators safeguard critical data.

In this guide, we’ll walk you through setting up Windows Server Backup, creating different types of backups, and best practices to ensure your system is secure and resilient.

⚠️ Need help with restoring data? Check out our restore guide here: How to Restore Windows Server from Backup?

Pre-requisites

  • Windows Server 2012/2016/2019/2022
  • Administrator Role via Server Manager
  • Resilient Storage (Local/Remote)

Backup Guide Steps

This backup guide is divided into four main steps:

  • Windows Server Backup
  • Restore Bare Metal Recovery Backup
  • Restore System State Backup
  • Troubleshooting

Windows Server Backup

Follow these steps to configure Windows Server Backup:

  1. Open Server Manager, select Tools, and then select Windows Server Backup.
  2. Select Local Backup.
  3. On the Action menu, select Backup once.
  4. In the Backup Once Wizard, on the Backup options page, select Different options, and then select Next.
  5. On the Select Backup Configuration page, choose one of the following:
    • Full Server: Backs up the entire system.
    • Custom: Select Bare metal recovery for a full system restore, including critical items.
    • System State: If you want to back up Active Directory.
  6. Click on Advanced Settings.
  7. Click on VSS settings and select VSS Full Backup.
What is Bare Metal Recovery?

Bare Metal Recovery (BMR) is a feature in Windows Server Backup that allows for a complete server restoration in case of catastrophic failure. This process restores the entire system, including:

  • Operating System
  • System State
  • Installed Applications
  • Data

What is System State Backup?

A System State backup includes essential components:

  • Active Directory database (on domain controllers)
  • System Registry
  • COM+ Class Registration database
  • Boot files
  • Performance Counters configuration
  • Cluster service information (on clustered servers)
  • Certificate Services database (if installed)
  • IIS Metabase (if installed)
  • System files protected by Windows File Protection

A System State backup is often a prerequisite for Bare Metal Recovery (BMR) or Active Directory recovery.

Backup Destination

  1. On the Specify destination type page, select Local drives or Remote shared folder, and click Next.
  2. On the Select Backup Destination page, choose a backup location.
  3. On the confirmation screen, click Backup.
  4. Once completed, close Windows Server Backup.

🔄 Ready to recover your server? Head to Part 2: Restore Guide for Windows Server Backup

Aligning QA and Development: Strategies for Seamless Collaboration

QA (Quality Assurance) and Developers have the same goal: delivering great software. However, misunderstandings or frustrations can sometimes lead to conflicts.

Here is how to work together better:

1. Work as a Team, Not Against Each Other:
QA and Developers are not rivals. QA finds issues to improve the product, not to blame developers. Think of it as teamwork to build the best software possible.

🔹 Example: Instead of saying, "You always introduce bugs" a QA can say, "I noticed this issue - let's check it together to avoid similar ones in the future."

2. Communicate Clearly:
When reporting bugs, be specific. Instead of saying, "The feature is broken", explain what went wrong and how to reproduce it. Take screenshots, screen recordings, or logs to make things clearer.
Developers should also communicate openly if they disagree with a bug report - ask questions instead of rejecting it outright.

🔹 Example: Instead of saying, "Login is not working", say, "After entering valid credentials, clicking 'Login' causes the app to freeze. This occurs in Chrome (Version) and Edge (Version)."

3. Define Responsibilities from the Start:
Everyone should clearly understand their roles and responsibilities.
Both Developers and QA should clearly understand what needs to be developed and what needs to be tested. This prevents last-minute disagreements.

🔹 Example: Developers may think their task is complete once they implement the requirements. However, a product is truly high quality only if it is bug-free and tested for both positive and negative scenarios. Without QA, completion is not truly complete.

4. Involve QA Early in Development:
Instead of waiting until the end to test, QA should be involved from the start. This way, Developers can avoid common issues, and QA doesn't just find problems but helps prevent them.

🔹 Example: In an Agile project, QA can review requirements and suggest missing edge cases before coding starts. This helps catch issues before they become expensive to fix.

5. Use Facts, Not Opinions:
If there is a disagreement about a bug, check the logs, test reports, or user feedback.
Data helps settle arguments better than personal opinions.

🔹 Example: A Developer says, "This bug is not a big deal" but the QA shows that it crashes the app for 20% of users. Hard Facts make decisions easier.

6. Give and Accept Feedback Gracefully:
If a Developer missed a bug, don't attack them - offer to help fix it.
If a QA report is unclear, Developers should ask for details rather than ignore it.
Feedback should always be about improving the product, not about blaming people.

🔹 Example: Instead of saying, "You made a mistake", say, "Let's check this together to avoid similar issues in the future."

7. Work Together More Often:
Developers and QA can do joint reviews of features before testing starts. QA can explain common mistakes to developers, and Developers can show how certain parts of the code work. Pairing up can reduce misunderstandings and make bug fixing faster.

🔹 Example: Instead of Developers writing code alone and QA testing afterward, they can do a quick QA-Dev sync after each major change to catch issues early.

8. Handle Disagreements Professionally:
If there is a disagreement that cannot be resolved, involve a neutral person like a QA Lead or Scrum Master. Stay focused on fixing the problem, not arguing about who is right.

🔹 Example: If QA says a bug is critical and the Developer disagrees, both can discuss with the Product Owner to decide its priority instead of arguing.

9. Celebrate Successes Together:
If a release goes smoothly or an important bug was caught early, appreciate each other's efforts.
Recognizing teamwork improves relationships between QA and Developers.

🔹 Example: A simple "Great catch!" from a Developer or "Nice fix!" from QA can improve teamwork and morale.

Final Thoughts:

🔹Conflicts between QA and Developers are normal, but they don't have to harm the team. With clear communication, teamwork, and a focus on quality, both teams can work together smoothly to build great software.

🔹Instead of seeing testing as a "blocker", Developers should see it as a way to improve their code. And QA should work with developers as partners, not critics. When both sides respect each other's roles, software quality improves, deadlines are met faster, and everyone benefits.

If you have any questions, you can reach out to our SharePoint Consulting team here.

April 3, 2025

Managing Multiple Azure Environments with Terraform

Introduction

Managing cloud infrastructure across multiple environments can be complex. Terraform simplifies this process with modules and workspaces, enabling more efficient and scalable infrastructure management in any cloud. This guide explores leveraging Terraform modules in a multi-workspace setup for Microsoft Azure.

Benefits of Terraform Modules and Workspaces

Terraform Modules: Enhancing Reusability

Modules allow infrastructure components to be defined once and reused across different environments. This reduces redundancy and enhances maintainability.


Terraform Workspaces: Isolating Environments

Workspaces create separate states for different environments, ensuring isolation and preventing conflicts between deployments. Utilizing Terraform variables further refines environment-specific configurations.


Structuring Terraform for Multi-Environment Deployment

A well-structured Terraform directory simplifies management across environments. Below is a recommended directory structure:


Directory Layout

$ tree complete-module/
.
├── README.md
├── main.tf
├── variables.tf
├── outputs.tf
├── ...
├── modules/
│   ├── nestedA/
│   │   ├── README.md
│   │   ├── variables.tf
│   │   ├── main.tf
│   │   ├── outputs.tf
│   ├── nestedB/
│   ├── .../
├── examples/
│   ├── exampleA/
│   │   ├── main.tf
│   ├── exampleB/
│   ├── .../

Creating a Reusable Terraform Module

Defining a Virtual Network Module:

 - modules/network/main.tf
resource "azurerm_virtual_network" "network" {
  name                = var.network_name
  location            = var.location
  resource_group_name = var.resource_group_name
  address_space       = var.address_space
}

 - modules/network/variables.tf

variable "network_name" {
  type = string
}

variable "location" {
  type = string
}

variable "resource_group_name" {
  type = string
}

variable "address_space" {
  type = list(string)
}

 - modules/network/outputs.tf

output "network_id" {
  value = azurerm_virtual_network.network.id
}

Utilizing the Module in the Main Configuration

- main.tf

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "4.16.0"
    }
  }

  backend "azurerm" {
    resource_group_name  = "terraform-state-rg"
    storage_account_name = "terraformstate"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
  }
}

provider "azurerm" {
  features {}
}

module "network" {
  source              = "./modules/network"
  network_name        = "my-network-${terraform.workspace}"
  location            = "East US"
  resource_group_name = "my-rg"
  address_space       = ["10.0.0.0/16"]
}

Managing Workspaces for Different Environments

Initializing and Creating Workspaces

Run the following commands to initialize Terraform and create new workspaces:

terraform init
terraform workspace new development
terraform workspace new staging
terraform workspace new production

Switch between workspaces:

terraform workspace select development

Applying Configuration to a Specific Workspace

terraform apply -var-file=environments/development.tfvars
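
The -var-file flag above implies per-environment value files. As a hedged sketch (the main.tf shown earlier hard-codes these values, so matching variable declarations would need to be added to the root module), a development file might look like:

```hcl
# environments/development.tfvars -- hypothetical values for the
# development workspace; assumes the root module declares matching
# "location" and "address_space" variables instead of hard-coding them.
location      = "East US"
address_space = ["10.0.1.0/16"]
```

Each workspace can then be applied with its own file (staging.tfvars, production.tfvars) while sharing the same module code.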

Terraform plan output:

Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.network.azurerm_virtual_network.network will be created
  + resource "azurerm_virtual_network" "network" {
      + address_space       = ["10.0.0.0/16"]
      + id                  = (known after apply)
      + location            = "East US"
      + name                = "my-network-default"
      + resource_group_name = "my-rg"
    }

Plan: 1 to add, 0 to change, 0 to destroy.


Terraform apply output:

module.network.azurerm_virtual_network.network: Creating...
module.network.azurerm_virtual_network.network: Creation complete after 30s ...

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

network_id = "/.../my-rg/.../Microsoft.Network/virtualNetworks/my-network-default"

Advantages of This Approach

  • Code Efficiency: Reusable modules minimize code duplication.
  • Environment Segregation: Workspaces maintain separate state for each environment.
  • Scalability: New environments can be added easily as needed.

Conclusion

Using Terraform modules and workspaces in Azure streamlines environment management, improves reusability, and enhances scalability. This structured approach keeps infrastructure organized and adaptable to change.

Happy Terraforming!

Mastering SQL Indexes: Boosting Database Performance with Smart Indexing

SQL indexing can be a game-changer for database performance, but its effectiveness hinges on how well you implement it. A while back, we faced a production database issue where queries were painfully slow, taking hours to complete. After some digging, we discovered that proper indexing was the key to solving the problem. This experience inspired me to dive deeper into SQL indexes, and in this guide, I’ll share what I’ve learned. We’ll cover the basics, explore how I used indexing to tackle a real-world challenge, and discuss the pros and cons to help you optimize your database effectively.


What is an Index in SQL?

An index in SQL is like the index in a book; it helps the database find data quickly without scanning every page (or row) in a table. Technically, it’s a database object created on one or more columns to improve the speed of data retrieval operations by providing an efficient way to locate data.


How Does It Work?

When you create an index on a column, the database builds a separate structure that organizes the data in that column for fast searching. Imagine a sorted list you can quickly reference instead of flipping through an entire unsorted table. Most databases use structures like B-trees behind the scenes, which allow for speedy lookups, inserts, and deletes. The result? The database can jump straight to the data it needs rather than checking every row.
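
To see this in action, here is a small self-contained sketch using Python's built-in sqlite3 module (the table, column, and index names are invented for the demo): adding an index changes the query plan from a full table scan to an index search.

```python
import sqlite3

# In-memory demo database; any SQL engine behaves similarly.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, first_name TEXT)")
cur.executemany(
    "INSERT INTO users (email, first_name) VALUES (?, ?)",
    [(f"user{i}@example.com", f"name{i}") for i in range(10_000)],
)

query = "SELECT * FROM users WHERE email = 'user42@example.com'"

# Without an index, the engine must scan every row.
plan_before = cur.execute("EXPLAIN QUERY PLAN " + query).fetchall()

# With an index, the engine searches the B-tree instead.
cur.execute("CREATE INDEX idx_users_email ON users(email)")
plan_after = cur.execute("EXPLAIN QUERY PLAN " + query).fetchall()

print(plan_before[-1][-1])  # a full table scan
print(plan_after[-1][-1])   # a search using idx_users_email
```

The "detail" column of the plan changes from a scan to a search, which is exactly the jump from O(n) row checks to an index lookup.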


Benefits of Indexing

Indexes turbocharge data retrieval, especially in large tables. Here’s how they shine:

  • Faster Searches: The database locates data quickly without a full table scan.
  • Quick Data Retrieval: Specific rows are fetched instantly using the index.
  • Better Query Performance: Queries with filters, sorts, or joins run more efficiently.
  • Easy Sorting: Indexed data can be pre-arranged, speeding up ORDER BY operations.


Understanding the Major Types of SQL Indexes

Choosing the right index depends on your application’s workload and query patterns. To make this clear, let’s break down the main types with examples.


Clustered Index

A clustered index dictates the physical order of data in a table, like a phone book sorted by last name. Since the data itself is stored in this order, a table can have only one clustered index. It is particularly useful for - 

 - Range queries (e.g., WHERE date BETWEEN '2023-01-01' AND '2023-12-31').
 - Primary key lookups (e.g., WHERE id = 123).

Non-Clustered Index

A non-clustered index is a separate structure from the table, like the index at the back of a book. It contains the indexed column values and pointers to the actual data rows. A table can have multiple non-clustered indexes. It is particularly useful for - 

 - Performing search on non-primary key columns (e.g., WHERE email = 'user@example.com').
 - Executing queries with WHERE, JOIN, or GROUP BY on non-clustered columns.

Unique Index

A unique index ensures no duplicate values exist in the indexed column(s), similar to a primary key but more flexible since it can apply to any column.

 - Example: A unique index on an email column prevents two users from registering with the same email address.

 - Best For: Enforcing data integrity (e.g., unique usernames or IDs).

Composite Index 

A composite index spans multiple columns, and the order of columns matters for query efficiency.

 - Example: In an orders table, a composite index on customer_id and order_date speeds up queries like WHERE customer_id = 100 AND order_date > '2023-01-01'.

 - Best for: Queries filtering or sorting on multiple columns.

Covering Index

A covering index (a type of non-clustered index) includes all columns a query needs, so the database can fetch everything from the index alone—like a mini-table.

 - Example: For SELECT first_name, email FROM users WHERE email = 'user@example.com', a covering index on email and first_name avoids accessing the full table.

 - Best For: Read-heavy queries retrieving multiple columns.
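
As a quick illustration of the covering-index idea, here is a hedged sketch using Python's built-in sqlite3 module (all names are invented for the demo); SQLite explicitly reports when a query is answered from the index alone.

```python
import sqlite3

# Demo schema: the query only needs email and first_name, both of which
# live in the composite index, so the base table is never touched.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, first_name TEXT, bio TEXT)"
)
cur.execute("CREATE INDEX idx_users_email_name ON users (email, first_name)")

plan = cur.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT first_name FROM users WHERE email = 'user@example.com'"
).fetchall()
print(plan[-1][-1])  # a search using a COVERING INDEX
```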

How I Optimized Indexing to Resolve a Major Database Performance Issue

Here’s a real-world example from my experience that shows indexing in action.

The Problem:

In our production environment, we had SQL jobs running stored procedures with data manipulation (DML) operations on tables holding 14 to 74 million rows. These jobs, which ran twice daily, took 7 to 9 hours to complete, which was unacceptable for our needs. The stored procedures also relied heavily on SQL functions, adding to the performance drag.

The Investigation:

We monitored the database and spotted a query with a staggering 2 billion logical reads. (Logical reads measure how many pages the database engine pulls from the buffer cache—a high number signals inefficiency.) This query was performing full table scans because the table lacked a non-clustered index on the columns in its WHERE clause.



The Solution:

We created a non-clustered index on the relevant columns. The impact was immediate: logical reads dropped dramatically, and query execution time shrank significantly.

Results:

To measure the improvement, we used SET STATISTICS IO ON; to track logical reads. Here’s the before-and-after:

Before: the query showed roughly 2 billion logical reads from full table scans.

After: logical reads dropped dramatically with the new non-clustered index in place.


This fix not only sped up the jobs but also eased the load on the server.

When to Use Indexes

Indexes shine in these scenarios:

✅ Large Datasets: Speed up searches in tables with millions of rows.
✅ Frequent Filtering: Columns in WHERE, JOIN, or ORDER BY clauses.
✅ Uniqueness: Enforce constraints like unique emails or IDs.
✅ Primary/Foreign Keys: Often queried columns benefit from indexing.


When NOT to Use Indexes

Avoid indexes when:

🚫 Small Tables: The overhead outweighs the benefits for tiny datasets.

🚫 Heavy Writes: Indexes slow down INSERT, UPDATE, and DELETE operations since the index must be updated too.

🚫 Low-Cardinality Columns: Columns with few unique values (e.g., gender or status) don’t benefit much.

🚫 Temporary Tables: Indexing rarely justifies the cost for short-lived data.


Bringing It All Together

SQL indexing is a powerful tool for boosting database performance, but it requires a strategy. Index the columns that are frequently used in queries, especially for filtering, sorting, or joining, to unlock significant speed gains. However, avoid over-indexing: too many indexes can bloat storage and slow down write operations. By applying indexes thoughtfully, as we did to slash those 7-hour jobs, you can optimize performance without unnecessary overhead.

March 27, 2025

Bulk Delete SharePoint List Items Using Power Automate and REST API

Introduction

Deleting items one by one from a SharePoint list can be time-consuming and inefficient, especially when dealing with large volumes of data. Using Power Automate in combination with SharePoint’s $batch REST API offers a much faster and more scalable solution. In this article, we’ll walk through how to build a Flow that can delete multiple SharePoint list items in a single batch request - ideal for cleanups, data resets, or bulk removals.


Why Use Batch Operations?

  • Performance: Fewer API calls improve speed and reduce throttling risks.
  • Scalability: Handles thousands of items effortlessly.
  • Automation: No more manual deletions.

Batch Delete SharePoint List Items - Use Case

You’re managing a SharePoint list and need to delete all existing items to start fresh. Manually deleting items or making individual API calls against a large list is slow and tedious; with Power Automate, you can batch delete items efficiently. Let’s build the Flow step by step.
 
Batch Delete Flow

Setting Up the Batch Delete Flow

The goal is to delete all items in batches, looping until the list is empty.
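
Before wiring up the Flow actions, it helps to see the control structure they implement. The sketch below expresses it in plain Python with hypothetical stand-in callables (fetch_page and delete_batch are not a real SharePoint client):

```python
# Sketch of the Flow's control structure. fetch_page and delete_batch are
# hypothetical stand-ins for the "Get items" action and the batch-delete
# HTTP request; this is not a real SharePoint client.
def drain_list(fetch_page, delete_batch, page_size=4999):
    """Fetch up to page_size item IDs and batch-delete them until none remain."""
    while True:
        ids = fetch_page(page_size)  # Get items (Top Count: 4999)
        if not ids:                  # ItemCount reached 0 -> Do Until exits
            break
        delete_batch(ids)            # Send an HTTP request to SharePoint ($batch)

# Demo against an in-memory stand-in for the list.
store = list(range(12_000))
deleted = []

def fetch_page(n):
    return store[:n]

def delete_batch(ids):
    deleted.extend(ids)
    del store[:len(ids)]

drain_list(fetch_page, delete_batch)
print(len(deleted), len(store))  # → 12000 0
```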

1. Initialize Variables

Add an Initialize variable action:
  • Name: ItemCount

  • Type: Integer

  • Value: -1 (to initiate the loop)

This variable tracks the remaining items and drives the deletion loop.

2. Add Scope Action for Perform Batch Delete Functionality

1. Add a Compose action (named Sharepoint_List):

This Compose defines the SharePoint list details, including the site address and list name.

 {   
        "siteAddress": "https://Tenant-Name.sharepoint.com/sites/Site-Name",   
        "listName": "List Name"   
 }   

Replace the placeholders with your tenant, site, and list names. 

2. Add a Compose action for the Batch Delete Template (named BatchDelete_Template):

This prepares a template for batch deletion, specifying the structure of the requests used to delete multiple items. The Batch Delete Template allows Power Automate to delete multiple items from a SharePoint list in a single request, using the _api/web/lists/getByTitle endpoint to delete items by their IDs.

   --changeset_@{actions('Sharepoint_List')?['trackedProperties']['changeSetGUID']}   
   Content-Type: application/http   
   Content-Transfer-Encoding: binary   
   DELETE @{outputs('Sharepoint_List')['siteAddress']}/_api/web/lists/getByTitle('@{outputs('Sharepoint_List')['listName']}')/items(|ID|)   
   HTTP/1.1   
   Content-Type: application/json;odata=verbose   
   Accept: application/json;odata=verbose   
   IF-MATCH: *

Step 2: Looping Through the Deletion Process 

Add a Do Until loop that runs until ItemCount equals 0. Inside the loop: 

1. Get SharePoint List Items 

Add a Get Items action: 

  • Site Address: @{outputs('Sharepoint_List')['siteAddress']} 

  • List Name: @{outputs('Sharepoint_List')['listName']} 

  • Top Count: 4999 (SharePoint’s retrieval limit). 

2. Update ItemCount 

Add a Set Variable action: 

 @{length(body('Get_items')['value'])}  

3. Select Item IDs 

Add a Select action: 

 From: @{body('Get_items')['value']}   
 Map: "@replace(outputs('BatchDelete_Template'), '|ID|', string(item()['Id']))"  

4. Combine Requests 

Add a Compose action (named BatchDelete): 

 @{join(body('Select_Item_ID'), decodeUriComponent('%0A'))}  

5. Send the Batch Delete Request 

Send an HTTP Request to SharePoint

Add a Send an HTTP Request to SharePoint action: 

  • Method: POST 

  • URI:

  • /_api/$batch  
    

  • Headers:

  •  X-RequestDigest: digest   
     Content-Type: multipart/mixed; boundary=batch_@{actions('Sharepoint_List')?['trackedProperties']['batchGUID']}  
  • Body:

  •  --batch_@{actions('Sharepoint_List')?['trackedProperties']['batchGUID']}   
     Content-Type: multipart/mixed; boundary="changeset_@{actions('Sharepoint_List')?['trackedProperties']['changeSetGUID']}"   
     Content-Length: @{length(outputs('BatchDelete'))}   
     Content-Transfer-Encoding: binary   
     @{outputs('BatchDelete')}   
     --changeset_@{actions('Sharepoint_List')?['trackedProperties']['changeSetGUID']}--   
     --batch_@{actions('Sharepoint_List')?['trackedProperties']['batchGUID']}--   
    

6. Check Results 

Add a Compose action: 

 @{base64ToString(body('Send_an_HTTP_request_to_SharePoint')['$content'])}  


How It Works?

  • The loop fetches up to 4999 items per batch, prepares a batch DELETE request, and repeats until ItemCount is 0.
  • Once complete, your list is empty!
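
To make the string assembly concrete, here is a hedged Python sketch of what the Select, join, and HTTP-body expressions above produce (the site address, list name, and item IDs are invented placeholders):

```python
import uuid

# Hypothetical values standing in for the Flow's outputs and tracked properties.
site_address = "https://tenant.sharepoint.com/sites/Site-Name"  # placeholder
list_name = "List Name"                                         # placeholder
item_ids = [101, 102, 103]                                      # placeholder IDs
batch_guid = uuid.uuid4()
changeset_guid = uuid.uuid4()

# One DELETE sub-request per item, mirroring the Batch Delete Template.
template = (
    "--changeset_{cs}\n"
    "Content-Type: application/http\n"
    "Content-Transfer-Encoding: binary\n"
    "\n"
    "DELETE {site}/_api/web/lists/getByTitle('{list}')/items({id}) HTTP/1.1\n"
    "Content-Type: application/json;odata=verbose\n"
    "Accept: application/json;odata=verbose\n"
    "IF-MATCH: *\n"
)
requests_block = "\n".join(
    template.format(cs=changeset_guid, site=site_address, list=list_name, id=i)
    for i in item_ids
)

# The full multipart body wraps the changeset in the outer batch boundary,
# matching the Send an HTTP Request to SharePoint body shown above.
body = (
    f"--batch_{batch_guid}\n"
    f'Content-Type: multipart/mixed; boundary="changeset_{changeset_guid}"\n'
    f"Content-Length: {len(requests_block)}\n"
    "Content-Transfer-Encoding: binary\n"
    "\n"
    f"{requests_block}\n"
    f"--changeset_{changeset_guid}--\n"
    f"--batch_{batch_guid}--\n"
)
print(body.count("DELETE "))  # → 3
```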

Why Choose Batch Deletion?

  • Speeds Things Up: Cuts down API calls for a smoother run.
  • Tackles Big Loads: Clears up to 4999 items in one go.
  • Ditches the Grind: Turns a chore into a hands-off win.

Conclusion

Using Power Automate with the SharePoint REST API offers a practical and efficient way to delete list items in bulk. Whether you're performing a routine cleanup or resetting data, this approach helps streamline the process and save time.

If you have any questions, you can reach out to our SharePoint Consulting team here.