January 30, 2025

Replicate SharePoint Lists with Lookup Lists Automatically using Power Automate

Challenge:

Recently, we faced the challenge of duplicating a SharePoint site schema, which included lists and libraries with lookup columns. The task was to replicate the structure of a site seamlessly, but there was no simple, automated way to do this using Power Automate. Handling lookup columns added extra complexity to the process.

Solution:

After exploring different options, we came up with a solution using SharePoint's internal HTTP calls. By using these internal APIs, we were able to copy the lists, including their lookup lists, while keeping the relationships between the lists properly connected.

 

Step 1: Get an existing template list and get its site script

Action: Send an HTTP request to SharePoint

Site Address: Use the source site URL

Set the request method to POST.

Endpoint URL:

_api/Microsoft.Sharepoint.Utilities.WebTemplateExtensions.SiteScriptUtility.GetSiteScriptFromList()

This API retrieves the site script for the specified list.

The request body:

{"listUrl":"Source_ListURL"}

The HTTP request body provides the list URL to the SharePoint API, which retrieves the site script containing the schema, columns, content types, and settings needed to duplicate the list.
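
For reference, the site script returned by this call is a JSON document along these lines (a trimmed, illustrative sample; the actual actions and field definitions depend on the source list):

{
  "$schema": "https://developer.microsoft.com/json-schemas/sp/site-design-script-actions.schema.json",
  "actions": [
    {
      "verb": "createSPList",
      "listName": "Projects",
      "templateType": 100,
      "subactions": [
        { "verb": "addSPFieldXml", "schemaXml": "<Field Type=\"Lookup\" ... />" }
      ]
    }
  ]
}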


Step 2: Parse and Structure the HTTP Response Data

After receiving the response from the HTTP request in Step 1, the next step is to extract and use the data.

Action: Parse JSON

Input: Use the Body of the HTTP response from the previous action.

In Power Automate, this is typically accessed using dynamic content like

outputs('Send_an_HTTP_Request_to_SharePoint')?['body']

Schema: Generate the schema for the Parse JSON action by providing a sample of the response body.
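
A minimal sketch of the schema might look like the following; in practice, use "Generate from sample" with your actual response body so that all nested properties are captured:

{
  "type": "object",
  "properties": {
    "$schema": { "type": "string" },
    "actions": { "type": "array" }
  }
}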

 


Step 3: Process the Response with Compose

 In this step, the Compose action processes the HTTP response by removing the $schema property from the list script, preparing it for further use.
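
For example, assuming the Parse JSON action from Step 2 keeps its default name, the Compose input could use an expression such as the following (the action name is a placeholder and depends on your flow):

removeProperty(body('Parse_JSON'), '$schema')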

 


 

Step 4: Create the List on the Destination Site

In this final step, the list is created on the destination site using the HTTP Request action.

Site Address: Use the destination site URL

Set the request method to POST.

Endpoint URL:

_api/Microsoft.Sharepoint.Utilities.WebTemplateExtensions.SiteScriptUtility.ExecuteTemplateScript()

The request body:

 Use the output of the Compose action from Step 3.
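
In the flow, this can be referenced with dynamic content or an expression such as outputs('Compose'), assuming the Compose action from Step 3 keeps its default name.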


Conclusion:

This API automatically creates the base list and its lookup lists with intact relationships and all formatting, including views. It significantly reduces the number of calls previously required to create a lookup list and bind it to the parent list. 



If you have any questions, you can reach out to our SharePoint Consulting team here.

January 23, 2025

Upload a Large File in SharePoint Document Library from SPFx Web Part

      Introduction:

      • In this blog, we will demonstrate how to upload large files to a SharePoint document library from an SPFx web part, even when the file size exceeds 100 MB or reaches the GB range.
      • SharePoint provides us with a REST API for uploading files to the document library. The API is {site_url}/_api/web/getfolderbyserverrelativeurl('/sites/{site_name}/{library_name}')/files/add(overwrite=true,url='{file_name}'). However, the issue is that this API only allows us to upload files up to 2MB in size. Any files larger than 2MB cannot be uploaded using this API.
      • Here I came up with a solution that allows us to upload files of practically any size, from tens of MB up to several GB, to a SharePoint document library from an SPFx web part. To achieve this, we can use the chunked upload process. The SharePoint REST API provides methods for this purpose: "Start Upload", "Continue Upload", and "Finish Upload". Using chunked upload, we can handle a file of any size.

      Function that handles the large upload in the SharePoint document library:

      • Create a custom function to handle large file uploads in the library. This function requires parameters such as file data, filename, SharePoint site URL, document library name, user's digest value, and desired chunk size.
      • At the beginning of the function, we declare some variables, such as the headers to be passed to the REST API calls and the starting and ending byte positions of the current chunk.
      • Next, we call another function to start the upload session. This function simply adds a blank file to the document library to initialize the upload session. From the API response, we receive the unique ID of the blank file, which identifies the file whose content we will overwrite.
      • We then generate a unique GUID that is used in the "Start Upload", "Continue Upload", and "Finish Upload" calls.
      • Next, we check the starting position within the file and divide the file into chunks accordingly. For the first chunk, we call the "Start Upload" method with the generated GUID to begin uploading the file to the document library.
      • After uploading the first chunk, we loop through every subsequent chunk and call the "Continue Upload" REST API with the same GUID. We continue uploading chunks until we reach the second-to-last chunk.
      • Finally, we upload the last chunk using the "Finish Upload" REST API method, which signals to SharePoint that this is the final chunk and completes the upload process.

      private async UploadLargeFile(
        file: Blob,
        siteUrl: string,
        libraryName: string,
        fileName: string,
        chunkSize: number,
        digest: any
      ) {
        const headers = {
          "Accept": "application/json;odata=verbose",
          "X-RequestDigest": digest
        };
        const fileSize = file.size;
        const uploadId = this.GenerateUploadId();
        let start = 0;
        let end = chunkSize;
        let chunkNumber = 0;
        let fileId = "";
      
        const uploadSessionResponse = await this.StartUploadSession(siteUrl, libraryName, fileName, headers);
        fileId = uploadSessionResponse.d.UniqueId;
      
        while (start < fileSize) {
          const chunk = file.slice(start, end);
          const isLastChunk = end >= fileSize;
      
          if (chunkNumber === 0) {
            await this.UploadFirstChunk(siteUrl, libraryName, fileName, chunk, uploadId, headers, fileId);
          } else if (isLastChunk) {
            await this.UploadLastChunk(siteUrl, libraryName, fileName, chunk, uploadId, headers, start, fileId);
          } else {
            await this.UploadIntermediateChunk(siteUrl, libraryName, fileName, chunk, uploadId, headers, start, fileId);
          }
      
          start = end;
          end = start + chunkSize;
          chunkNumber++;
        }
      }
      
      // Function to generate a unique upload GUID
      private GenerateUploadId(): string {
        return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, c => {
          const r = Math.random() * 16 | 0;
          const v = c === 'x' ? r : (r & 0x3 | 0x8);
          return v.toString(16);
        });
      }
      
      // Starting Upload Session Method
      private async StartUploadSession(siteUrl: string, libraryName: string, fileName: string, headers: any) {
        try {
          return await this.Retry(async () => {
            const response = await fetch(
              `${siteUrl}/_api/Web/Lists/getByTitle('${libraryName}')/RootFolder/Files/Add(url='${fileName}',overwrite=true)`,
              {
                method: 'POST',
                headers: headers
              }
            );
      
            if (!response.ok) {
              const errorText = await response.text();
              console.error('Failed to start upload session:', errorText);
              throw new Error(`Failed to start upload session: ${errorText}`);
            }
      
            return response.json();
          });
        } catch (error) {
          console.error('Failed to start upload session after retries:', error);
          throw error;
        }
      }

      Start Upload Method:

      • This method is called when attempting to upload the first chunk of the file to our SharePoint document library.
      • In this method, we call the SharePoint REST API (POST) "StartUpload" method, passing the unique GUID we generated as the uploadId parameter.
      • The API endpoint is: "${siteUrl}/_api/web/GetFileById('${fileId}')/StartUpload(uploadId=guid'${uploadId}')".
      private async UploadFirstChunk(
        siteUrl: string,
        libraryName: string,
        fileName: string,
        chunk: any,
        uploadId: string,
        headers: any,
        fileId: string
      ) {
        try {
          return await this.Retry(async () => {
            const response = await fetch(
              `${siteUrl}/_api/web/GetFileById('${fileId}')/StartUpload(uploadId=guid'${uploadId}')`,
              {
                method: 'POST',
                headers: headers,
                body: chunk
              }
            );
      
            if (!response.ok) {
              const errorText = await response.text();
              console.error('Failed to upload first chunk:', errorText);
              throw new Error(`Failed to upload first chunk: ${errorText}`);
            }
      
            return response.json();
          });
        } catch (error) {
          console.error('Failed to upload first chunk after retries:', error);
          await this.CancelUpload(siteUrl, fileId, uploadId, headers);
          await this.DeleteFile(siteUrl, fileId, headers);
          throw error;
        }
      }


      Continue Upload Method:

      • The "Continue Upload" method in SharePoint's REST API allows for the upload of intermediate chunks of a file during a large file upload session.
      • The API endpoint for continuing the upload is: "/_api/web/GetFileById('<fileId>')/ContinueUpload(uploadId=guid'<uploadId>',fileOffset=<fileOffset>)".
      • This endpoint specifies the file being uploaded (fileId), the unique upload session ID (uploadId), and the starting byte position of the chunk (fileOffset).
      • The fileOffset parameter specifies the starting byte position of the chunk being uploaded within the overall file, so SharePoint knows where the chunk fits in the complete file.
      • For example, if each chunk is 1 MB (1048576 bytes) in size, the fileOffset for the second chunk would be 1048576, for the third chunk 2097152, and so on.
      private async UploadIntermediateChunk(siteUrl: string, libraryName: string, fileName: string, chunk: any, uploadId: string, headers: any, start: any, fileId: string) {
          try {
            return await this.Retry(async () => {
              const response = await fetch(`${siteUrl}/_api/web/GetFileById('${fileId}')/ContinueUpload(uploadId=guid'${uploadId}',fileOffset=${start})`, {
                method: 'POST',
                headers: headers,
                body: chunk
              });
      
              if (!response.ok) {
                const errorText = await response.text();
                console.error('Failed to upload chunk:', errorText);
                throw new Error(`Failed to upload chunk: ${errorText}`);
              }
              return response.json();
            });
          } catch (error) {
            console.error('Failed to upload intermediate chunk after retries:', error);
            await this.CancelUpload(siteUrl, fileId, uploadId, headers);
            await this.DeleteFile(siteUrl, fileId, headers);
            throw error;
          }
        }

      Finish Upload Method:

      • The "Finish Upload" method is used to upload the final chunk of a large file to a SharePoint library, signaling the end of the upload process.
      • The method sends a POST request to the SharePoint API endpoint to finish the upload.
      • API endpoint is: "/_api/web/GetFileById('<fileId>')/FinishUpload(uploadId=guid'<uploadId>',fileOffset=<start>)".
      private async UploadLastChunk(siteUrl: string, libraryName: string, fileName: string, chunk: any, uploadId: string, headers: any, start: any, fileId: string) {
        try {
          return await this.Retry(async () => {
            const response = await fetch(`${siteUrl}/_api/web/GetFileById('${fileId}')/FinishUpload(uploadId=guid'${uploadId}',fileOffset=${start})`, {
              method: 'POST',
              headers: headers,
              body: chunk
            });
      
            if (!response.ok) {
              const errorText = await response.text();
              console.error('Failed to upload chunk:', errorText);
              throw new Error(`Failed to upload chunk: ${errorText}`);
            }
      
            return response.json();
          });
        } catch (error) {
          console.error('Failed to upload last chunk after retries:', error);
          await this.CancelUpload(siteUrl, fileId, uploadId, headers);
          await this.DeleteFile(siteUrl, fileId, headers);
          throw error;
        }
      }


      Cancel Upload And Delete File:

      • The "Cancel Upload" method is used to cancel an ongoing large file upload session in SharePoint. This is typically done when an error occurs during the upload process, and you want to terminate the session to prevent incomplete or corrupted files from being saved.
      • Sends a request to the SharePoint API to cancel the current upload session, using the unique fileId and uploadId to identify which session to cancel.
      • Helps ensure that partially uploaded files are not left in an inconsistent state.
      • The "Delete File" method is used to delete a file from a SharePoint library. This is usually called after canceling an upload session to remove any partially uploaded files and clean up the SharePoint library.
      • Sends a request to the SharePoint API to delete the file identified by fileId.
      • Ensures that any incomplete or unwanted file uploads are removed, maintaining the integrity of the document library.
      private async CancelUpload(siteUrl: string, fileId: string, uploadId: string, headers: any) {
         try {
           const response = await fetch(`${siteUrl}/_api/web/GetFileById('${fileId}')/CancelUpload(uploadId=guid'${uploadId}')`, {
             method: 'POST',
             headers: headers
           });
      
           if (!response.ok) {
             const errorText = await response.text();
             console.error('Failed to cancel upload session:', errorText);
             throw new Error(`Failed to cancel upload session: ${errorText}`);
           }
      
         } catch (error) {
           console.error('Error occurred while canceling upload session:', error);
         }
       };
       private async DeleteFile(siteUrl: string, fileId: string, headers: any) {
         try {
           const response = await fetch(`${siteUrl}/_api/web/GetFileById('${fileId}')`, {
             method: 'DELETE',
             headers: headers
           });
      
           if (!response.ok) {
             const errorText = await response.text();
             console.error('Failed to delete file:', errorText);
             throw new Error(`Failed to delete file: ${errorText}`);
           }
         } catch (error) {
           console.error('Error occurred while deleting file:', error);
         }
       }
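

      Retry Helper:

      • The upload methods above wrap their REST calls in a Retry helper, which is not shown in the original snippets. Below is a minimal sketch of such a helper; the retry count and delay are assumptions and can be tuned as needed.

      // Generic retry helper: runs the given async action, retrying on failure
      // with a short delay between attempts (retry count and delay are assumptions).
      private async Retry<T>(action: () => Promise<T>, retries: number = 3, delayMs: number = 2000): Promise<T> {
        let lastError: any;
        for (let attempt = 1; attempt <= retries; attempt++) {
          try {
            return await action();
          } catch (error) {
            lastError = error;
            console.warn(`Attempt ${attempt} of ${retries} failed.`, error);
            if (attempt < retries) {
              // Wait before the next attempt
              await new Promise(resolve => setTimeout(resolve, delayMs));
            }
          }
        }
        throw lastError;
      }

      • For completeness, a hypothetical caller might obtain the request digest from /_api/contextinfo and then invoke the upload function, for example (the library name and chunk size below are placeholders):

      // Hypothetical usage from an SPFx web part:
      // selectedFile is a File obtained from, e.g., an <input type="file"> element.
      const digestResponse = await fetch(`${siteUrl}/_api/contextinfo`, {
        method: 'POST',
        headers: { "Accept": "application/json;odata=verbose" }
      });
      const digestJson = await digestResponse.json();
      const digest = digestJson.d.GetContextWebInformation.FormDigestValue;
      // Upload in 10 MB chunks (chunk size is an assumption; adjust as needed).
      await this.UploadLargeFile(selectedFile, siteUrl, "Documents", selectedFile.name, 10 * 1024 * 1024, digest);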


      Summary:

      This blog explains how to upload large files to SharePoint using a chunked approach for efficiency and reliability. It starts with the Start Upload method, which initializes the upload session and uploads the first chunk. Next, the Continue Upload method handles the intermediate chunks, ensuring sequential upload using fileOffset. Finally, the Finish Upload method completes the upload by sending the last chunk, ensuring all parts are assembled into the file in SharePoint. These methods include error handling and retries to ensure successful uploads, overcome file size limits, and make the upload process more reliable.

      If you have any questions, you can reach out to our SharePoint Consulting team here.

      January 16, 2025

      "Intermittent Bugs": How to deal with it?

      Introduction:

      An intermittent bug refers to a bug that exists in the application but is difficult to reproduce. 
      This means that if you execute the same task twice, the behavior may differ each time. 
      It does not appear consistently, making debugging a herculean task. Any complex system or application, regardless of the underlying technology, may have intermittent bugs.

      Techniques to troubleshoot intermittent bugs:

      1. Document the bug by recording detailed information such as steps to reproduce, expected vs. actual results, error messages, and environment details to assist in troubleshooting and future reference: 

      Create a detailed bug report which should include the following details:
      • Title: The title of the bug should be short and self-explanatory [What, When and Where].
      • Preconditions: Steps to get the environment ready for testing.
      • Description: A detailed description of the bug.
      • Application Details: Build number and environment details.
      • Devices: Mobile or other devices used for testing.
      • Repro: It should be represented as the ratio of the number of times the bug occurred to the total number of times the issue was verified.
      • Steps: Proper steps to reproduce the bug.
      • Actual Result: What is the outcome of the steps?
      • Expected Result: What should happen?
      • Notes: Other insights related to bug behavior.
      • Bug Evidence: Clear video, image or screenshot.

      2.  Reproduce the bug to analyze its behavior and identify the root cause:

      The bug may have occurred due to various reasons, and identifying the cause is important. This could be due to environmental factors, browsers used, or devices involved. Intermittent bugs can occur because of any of these reasons; therefore, identifying the root cause is essential for effective analysis.
      It is good to think from an end user perspective while testing.
       

      3.  Isolate the problem to narrow down the potential causes and simplify the debugging process:

      It is always beneficial to break down a complex system into smaller, more manageable parts, as this helps identify the root cause more effectively.

      4.  Use debugging tools to analyze the issue and gather detailed information about the bug: 

      Debugging tools are available, such as IDE debuggers (Visual Studio, Eclipse), standalone debuggers (GDB), and logging utilities. These tools can help capture additional information about the bug.

      5.  Check the test environment to ensure it matches the production setup and is free from configuration issues: 

      Check for external factors such as network issues, server load, or third-party service availability. These environmental factors can significantly impact the performance and behavior of the application and should be thoroughly assessed during troubleshooting.

      6.  Use version control to track changes in the code and ensure consistency across different environments:

      Use the correct code version to eliminate any confusion. There should be no ambiguity between the development and production environments.

      7.  Collaborate with team members to leverage their expertise, share insights, and collectively identify the root cause of the issue:

      A collaborative approach always leads to better perspectives, which can help in identifying potential causes.

      8. Conduct code reviews to ensure code quality, identify potential bugs, and improve overall system reliability through collaborative feedback:

      The code should undergo a thorough review by experienced developers. This will be helpful in eliminating coding level issues.

      9.  Perform regression testing to ensure that recent changes or fixes have not introduced new issues or affected existing functionality:

      Create a regression plan for the modules, which should run automatically whenever code changes are made. This helps catch bugs at an early stage, before they disappear or start showing intermittent behavior.

      10.  Implement monitoring and alerting systems to detect and notify you of any issues in real-time, ensuring timely resolution of problems:

      We can implement a monitoring and alerting system in the production environment. This can provide valuable insights into the bug's frequency and the conditions under which it occurs.

      11. Keep records of bugs, fixes, and test results to track progress, identify recurring issues, and maintain a history for future reference:

      Each testing round should be recorded for future reference. Patterns in the bug's behavior might help point to its cause.

      12. Conduct periodic defect checks to regularly evaluate the system for potential issues, ensuring that new bugs are identified early and existing ones are resolved promptly: 

      There should be a periodic cadence to revisit the application and test it thoroughly. The cadence for testing can be determined based on the interdependency between modules. The higher the interdependency, the shorter the gap between cadences.

      13. Conduct Exploratory Testing: 

      At least 30 minutes of daily exploratory testing by experienced testers can help reduce the occurrence of intermittent bugs. Exploratory testing can uncover edge-case and negative scenarios that may lead to intermittent bugs.

      14. Determination and patience are essential in debugging, as solving complex issues often requires time, thorough investigation, and continuous effort:

      Dealing with an intermittent bug can sometimes be a frustrating affair, so patience is key.

      Intermittent bugs may not occur consistently, but when they do, they can disrupt the normal functioning of the application. Resolving these bugs ensures a more robust system or application.


      January 2, 2025

      Navigating the Start of Projects: Choosing the Right Path and Estimating Success

      A new project has arrived on the table. The client’s requirements are ambitious, and the stakes are high. The first crucial decision awaits: choosing the right approach to guide the project to success.

      As we begin the evaluation, it becomes clear that the initial choice between methodologies will set the tone for the project’s journey. Should we follow a predictive, structured framework with meticulously planned milestones? Or is an adaptive approach more suitable, allowing flexibility and evolution with changing requirements? Let’s explore these choices and how they influence the process of estimation.


      The Crossroads: Predictive vs. Adaptive

      The Predictive Path (Waterfall): Charting a Clear Course

      If we imagine embarking on a journey where every step is preplanned, the predictive approach - often referred to as the waterfall model - relies on thorough upfront planning, clearly defined requirements, and a step-by-step execution process. It’s like navigating with a detailed map, where the destination is known, and the path is fixed. This method works well for projects with:

      • Stable and well-defined requirements.
      • Minimal likelihood of changes.

      For example, if we were developing a payroll management system for an organization with fixed requirements and regulatory compliance, a predictive approach could ensure accurate delivery within defined timelines.

      The Adaptive Route (Agile): Navigating the Unknown

      Now, if we consider the adaptive approach, it’s like using a dynamic GPS that adjusts the route based on factors such as traffic conditions, unexpected roadblocks, or even a change in destination. Agile methodologies - Scrum, Kanban, or Lean - allow us to adapt swiftly to new information, priorities, and challenges. Instead of detailed blueprints, the focus is on iterative progress and incremental value delivery. This method is a good fit for:

      • Projects with evolving goals.
      • Uncertain or volatile environments.
      • Continuous user feedback shaping the end product.

      For instance, if we were building a Generative AI-powered Content Creation Platform, where user needs for personalized writing styles, support for new languages, and integration with evolving tools like image or video generation are continuously changing, an adaptive approach would allow iterative enhancements based on real-time feedback and emerging AI advancements.

      When to Choose What?

      Scenario                                Predictive    Adaptive
      Requirements are well-defined               ✓
      Requirements are evolving                                 ✓
      Stakeholders need fixed timelines           ✓
      Stakeholders expect collaboration                         ✓
      Innovation and uncertainty exist                          ✓


      Estimation Challenges in Both Paths

      Both predictive and adaptive approaches come with unique estimation challenges:

      In Predictive Projects

      • Initial Precision: Every phase’s cost, time, and resource requirements need to be estimated upfront. Inaccurate estimates can lead to delays and budget overruns.
      • Risk Assessment: Anticipating potential obstacles and including contingencies in the plan can be vital.


      In Adaptive Projects

      • Dynamic Planning: Estimates evolve with each iteration, requiring continuous recalibration.
      • Measuring Velocity: Tracking the team’s pace of work helps refine future estimations.
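
      For example, if a team completes about 30 story points per sprint on average, a backlog estimated at 120 points suggests roughly four more sprints of work (the figures here are hypothetical).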


      Exploring Estimation Techniques

      For Predictive Projects

      • Bottom-Up Estimation: This method involves breaking the project into smaller tasks, estimating each one, and aggregating the totals. It is highly accurate but can be time-intensive, making it a great option for detailed planning phases.
      • Analogous Estimation: This technique relies on historical data from similar projects to make predictions. For instance, if a previous project with similar scope and complexity took six months to complete, it can serve as a reference point. While quick and efficient, this approach may lack precision, making it more suitable for early-stage planning.
      • Other Techniques: There are additional approaches, such as:
        • Parametric Estimating: Uses statistical models based on variables, like cost per line of code, for predictions.
        • Single-Point Estimating: Provides one fixed estimate, which might oversimplify and add risks.
        • Three-Point Estimating: Considers optimistic, pessimistic, and most likely scenarios to create a balanced estimate. This can help in managing uncertainties.
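
      As a simple worked example with hypothetical figures, an optimistic estimate of 4 days, a most likely estimate of 6 days, and a pessimistic estimate of 14 days give a weighted (PERT) estimate of (4 + 4×6 + 14) / 6 = 7 days.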


      For Adaptive Projects

      • Affinity Estimating (Story Points): Tasks are grouped based on complexity or effort and assigned relative points. This collaborative approach allows for quick estimation while leveraging team consensus.
      • T-Shirt Sizing: Tasks are classified into sizes such as XS, S, M, L, and XL based on their complexity. This method is helpful in the early stages when tasks need broad categorization for planning.
      • Planning Poker: A gamified method where team members simultaneously share their estimates using cards. This fosters collaboration and helps refine discrepancies, often leading to more accurate estimates.


      Conclusion

      Starting a project often involves navigating through a range of decisions, and choosing the right methodology is one of the most critical steps. Both predictive and adaptive approaches have their merits, and sometimes a blend of both can address specific project needs effectively.

      Ultimately, the key for us is to align the methodology with the project’s characteristics and use the right estimation techniques to set realistic expectations. With thoughtful consideration and collaboration, a project can move closer to its goals, step by step.

      If you have any questions, you can reach out to our SharePoint Consulting team here.