August 28, 2025

Building a Reusable React Component Library with TypeScript and Rollup - A Step-by-Step Guide

Thinking of building your own reusable React component library? Whether it’s to keep your projects consistent or to make collaboration with your team easier, you’re in the right place.

In this guide, I’ll walk you through exactly how I created a shareable React component library from setup to publishing, complete with real code examples, clear explanations, and practical tips. Everything you need is right here in one place.

Use Case

Maintaining multiple React projects with variations of the same UI components presented significant challenges for our team. We encountered frequent issues such as inconsistent styling, duplicated bug fixes, and difficulties in propagating enhancements across codebases. This approach led to inefficiencies, unnecessary overhead, and a lack of coherence in user experience.

To address these challenges, we developed a centralized Reusable Component Library, a standardized collection of UI components designed for use across all our React projects. By consolidating our shared components into a single, well-maintained package, we significantly reduced development redundancy and ensured visual and behavioral consistency throughout our applications. Updates or improvements made to the component library are seamlessly integrated wherever the library is used, streamlining maintenance and accelerating development cycles.


1. Set Up Your Project Folder

First, create a new folder for your component library and initialize it:


mkdir my-react-component-library
cd my-react-component-library
npm init -y

With your project folder in place, you have established a solid foundation for the steps ahead.


2. Install Essential Dependencies

Install React, TypeScript, and essential build tools for a robust library setup:


npm install react react-dom
npm install --save-dev typescript @types/react @types/react-dom
npm install --save-dev rollup rollup-plugin-peer-deps-external rollup-plugin-postcss @rollup/plugin-node-resolve @rollup/plugin-commonjs @rollup/plugin-typescript sass

 The right dependencies are now in place, ensuring your project is equipped for modern development and efficient bundling.


3. Organize Your Project Structure

Establish a clear and logical directory structure for your components and outputs:
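
For example (the exact layout is flexible; these file names match the component and configuration files created in the following steps, and declarations.d.ts is just one possible name for the type declarations file):

my-react-component-library/
├── src/
│   ├── HelloWorld.tsx
│   ├── HelloWorld.module.scss
│   ├── declarations.d.ts
│   └── index.ts
├── dist/                 (generated build output)
├── package.json
├── rollup.config.js
└── tsconfig.json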


With your file structure organized, you are primed for scalable code and easy project navigation.

4. Write Your Component

Develop a simple reusable React component as a starting point for your library:


import React from 'react';
import styles from './HelloWorld.module.scss';
type HelloWorldProps = {
  name: string;
};
export const HelloWorld: React.FC<HelloWorldProps> = ({ name }) => (
  <div className={styles.centerScreen}>
    <div className={styles.card}>
      <span className={styles.waveEmoji}>👋</span>
      <div className={styles.textBlock}>
        <span className={styles.helloSmall}>Hello,</span>
        <span className={styles.name}>{name}</span>
      </div>
    </div>
  </div>
);

Having your first component ready sets the stage for further expansion and consistent styling across your library.


5. Set Up TypeScript

Configure TypeScript by creating a tsconfig.json at the project root, enabling type safety and the generation of type declarations:

{
  "compilerOptions": {
    "declaration": true,
    "declarationDir": "dist/types",
    "emitDeclarationOnly": false,
    "jsx": "react",
    "module": "ESNext",
    "moduleResolution": "node",
    "outDir": "dist",
    "rootDir": "src",
    "target": "ES6",
    "strict": true,
    "esModuleInterop": true
  },
  "include": ["src"]
}

TypeScript is now fully configured, bringing type safety and easy downstream integration for consumers.


6. Create an Index Export

Create src/index.ts and re-export your components from it:

export { HelloWorld } from './HelloWorld';

Centralizing your exports prepares your library for seamless adoption in other projects.

7. Add a Type Declarations File

Add a declaration file inside src (for example, src/declarations.d.ts) so TypeScript recognizes SCSS module imports without type errors:

declare module '*.module.scss' {
  const classes: { [key: string]: string };
  export default classes;
}

With declaration files in place, your styling workflow integrates smoothly with TypeScript.


8. Configure Rollup

Create a rollup.config.js at the project root so Rollup can bundle the library into both CommonJS and ES module outputs:

import peerDepsExternal from "rollup-plugin-peer-deps-external";
import postcss from "rollup-plugin-postcss";
import resolve from "@rollup/plugin-node-resolve";
import commonjs from "@rollup/plugin-commonjs";
import typescript from "@rollup/plugin-typescript";

export default {
  input: "src/index.ts",
  output: [
    {
      file: "dist/index.js",
      format: "cjs",
      sourcemap: true,
    },
    {
      file: "dist/index.esm.js",
      format: "esm",
      sourcemap: true,
    },
  ],
  plugins: [
    peerDepsExternal(),
    resolve(),
    commonjs(),
    typescript({ tsconfig: "./tsconfig.json" }),
    postcss({
      modules: true,
      use: ["sass"],
    }),
  ],
  external: ["react", "react-dom"],
};


An optimized bundling process now supports your library's compatibility with a variety of JavaScript environments.

9. Update package.json

Reference all build outputs and dependencies accurately in your package.json - note that react and react-dom belong under peerDependencies so consuming apps provide their own copy:

{
  "main": "dist/index.js",
  "module": "dist/index.esm.js",
  "types": "dist/types/index.d.ts",
  "files": [
    "dist"
  ],
  "scripts": {
    "build": "rollup -c"
  },
  "peerDependencies": {
    "react": "^17.0.0 || ^18.0.0",
    "react-dom": "^17.0.0 || ^18.0.0"
  }
}

Your package metadata is set, paving the way for effortless installation and use.


10. Build the Package

Trigger Rollup to bundle your components:

npm run build

With a completed build, your library files are now ready for distribution.


11. Publishing to Azure Artifacts npm Registry

a) Set up your Azure Artifacts Feed

Go to Azure DevOps > Artifacts and create (or use) an npm feed.


b) Configure npm for Azure Artifacts

In your project root, create or update a .npmrc file with:

@yourscope:registry=https://pkgs.dev.azure.com/yourorg/_packaging/yourfeed/npm/registry/
always-auth=true

Replace @yourscope, yourorg, and yourfeed with your actual values. Because the registry is scoped, the "name" field in package.json must use the same scope (for example, @yourscope/my-react-component-library) so npm routes the publish to your feed.

c) Authenticate Locally

Use Azure's instructions for authentication, such as:

npm login --registry=https://pkgs.dev.azure.com/yourorg/_packaging/yourfeed/npm/registry/

In some setups, especially on Windows, you might need to install and run vsts-npm-auth to complete authentication.
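
That typically looks like this (run from the project root so the tool can find your .npmrc):

npm install -g vsts-npm-auth
vsts-npm-auth -config .npmrc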

d) Build Your Package

Ensure your package is built and ready to publish (e.g., run npm run build if you have a build step).

e) Publish Your Package

From the project root, run:

npm publish

You do not need to specify the registry in the publish command; it is picked up automatically from your .npmrc.

And just like that, your component library is available in your Azure feed for your team or organization to install and use!

If you’d prefer to publish to the public npm registry instead, follow these steps:

12. Publishing to npm

Prerequisites

  • You already built your library (dist/ exists, with all outputs, after running npm run build).
  • You have an npmjs.com account.

a) Log in to npm 

In your terminal, from the root of your project, type:

npm login

Enter your npm username, password, and email when prompted.

b)  Publish 

Publish the package:

npm publish

After publishing to npmjs.com, you can confirm the package is live from your npm dashboard.


Instructions:

  1. Go to npmjs.com and log in to your account.

  2. Click on your username (top-right) and select Packages from the dropdown.

  3. Find and click your newly published package.



Seeing your package live in npm’s dashboard is a proud milestone—your code is now out there, ready to make life easier for every developer who needs it!

Once published, install the library in any compatible React project:


npm install your-package-name

Output:

After a successful publish, npm prints a confirmation with your package name and version. That confirmation means your component library can now be installed and used in any of your React projects.
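
To confirm everything is wired up, import the component in a consuming app (this uses the HelloWorld component from earlier; replace your-package-name with your actual package name):

import React from 'react';
import { HelloWorld } from 'your-package-name';

export const App = () => <HelloWorld name="Team" />;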

Troubleshooting/Common Tips:

  • If the package name and version combination already exists on npm, bump the version in package.json before publishing again.
  • Make sure your main, module, and types fields point to valid files in your dist/ directory (you’ve already done this!).
  • Check .npmignore or the "files" section in package.json so only necessary files are published.

Conclusion:

You've now created, bundled, and published your reusable React component library with TypeScript and Rollup.
This new workflow helps you:

  • Speed up development: No more duplicating code between projects.
  • Guarantee consistency: All your apps share the same reliable components.
  • Simplify updates: Bug fixes or enhancements are made once and shared everywhere.
  • Easily distribute privately or publicly: Works with both internal feeds (like Azure Artifacts) and public npm.

Now your custom components are ready to power future projects, speed up development, and ensure consistency across your apps.

Micro‑Frontends & Component-Driven Architecture: Modern Approaches to Scalable React Applications

In today’s fast-paced development world, building scalable and maintainable frontend applications is critical. As React continues to dominate the ecosystem, two powerful concepts can elevate your architecture: Micro-Frontends and Component-Driven Architecture (CDA). These paradigms help developers break down complex UI systems into manageable, reusable, and independently deployable parts - ideal for large teams and enterprise-scale applications.

What Are Micro-Frontends?


Definition

Micro-Frontends extend the idea of microservices to the frontend. Instead of building a monolithic frontend app, you split it into smaller, independent pieces that are owned by separate teams and developed, tested, and deployed independently.

Example Use Case

Imagine an e-commerce platform where the cart, product list, search, and user profile are each handled by different teams. With Micro-Frontends, each team builds and ships their part of the UI as a self-contained app.

Key Benefits

  • Scalability: Different teams can work in parallel.
  • Independence: Teams can choose their own tech stack.
  • Incremental upgrades: Refactor or rewrite parts without affecting the entire system.
  • Faster deployment cycles.

Common Implementation Strategies

  1. Module Federation (Webpack 5): Share and load code from remote sources at runtime (see the configuration sketch after this list).
  2. Iframe-based isolation: Not very modern, but sometimes used.
  3. Runtime integration with single-spa or qiankun.
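
To make the Module Federation option concrete, here is a minimal sketch of a host (shell) application's webpack configuration; the shell and cart names and the remoteEntry URL are made-up placeholders, not a real deployment:

// webpack.config.ts of the host application (a TypeScript config assumes ts-node is available)
import { container, Configuration } from "webpack";

const config: Configuration = {
  plugins: [
    new container.ModuleFederationPlugin({
      name: "shell",
      // Load the cart micro-frontend from its own deployment at runtime
      remotes: {
        cart: "cart@https://cart.example.com/remoteEntry.js",
      },
      // Share a single copy of React between the shell and all remotes
      shared: {
        react: { singleton: true },
        "react-dom": { singleton: true },
      },
    }),
  ],
};

export default config;

Each remote does the mirror image of this with the exposes option, and the shell then pulls pieces in through dynamic imports such as import("cart/Cart").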

What Is Component-Driven Architecture (CDA)?


Definition

CDA is a design methodology where UIs are built from the bottom-up using independent, reusable components. This aligns closely with tools like Storybook that enable development in isolation.

Example Use Case

In a component-driven app, a <Button />, <Card />, <LoginForm />, etc., are all designed and tested individually. They’re then composed to form complete pages.
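
As a tiny illustration (the components below are simplified stand-ins, not from any particular design system), a page then becomes a thin composition of those pre-tested pieces:

import React from 'react';

// Simplified stand-ins for independently developed, pre-tested design-system components
const Button = ({ children }: { children: React.ReactNode }) => <button>{children}</button>;
const Card = ({ title, children }: { title: string; children: React.ReactNode }) => (
  <section>
    <h2>{title}</h2>
    {children}
  </section>
);

// A page is just a composition of the building blocks
export const ProfilePage = ({ userName }: { userName: string }) => (
  <Card title="Your Profile">
    <p>{userName}</p>
    <Button>Save changes</Button>
  </Card>
);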

Key Benefits

  • Reusability: Components are shareable across projects.
  • Faster development: Build UIs from pre-tested building blocks.
  • Design-system friendly: Ensures consistency across your app.
  • Better testing: Easier to write unit and visual tests.

Tools and Best Practices

  • Storybook: Develop and preview components in isolation (see the story sketch after this list).
  • Bit.dev: Share and reuse components across repositories.
  • Atomic Design Principles: Organize components as atoms, molecules, organisms, etc.
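
For a flavor of what developing in isolation looks like, here is a minimal Component Story Format file (this assumes Storybook 7+ and a Button component exported from ./Button; both are illustrative):

// Button.stories.tsx - developed and previewed in isolation in Storybook
import type { Meta, StoryObj } from '@storybook/react';
import { Button } from './Button';

const meta: Meta<typeof Button> = {
  title: 'Atoms/Button', // Atomic Design: a Button is an "atom"
  component: Button,
};
export default meta;

type Story = StoryObj<typeof Button>;

// Each named export renders one isolated state of the component
export const Primary: Story = {
  args: { children: 'Save changes' },
};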

Micro-Frontends + CDA = Ultimate Scalability

Combining these two approaches creates a highly modular, flexible, and scalable frontend system.

  • Micro-Frontends provide separation of concerns at the app level.
  • Component-Driven Architecture ensures reusability and consistency within each micro-app.

Example Flow

  • Each Micro-Frontend owns a domain (e.g., “User Management”).
  • Within that domain, components are developed following CDA.
  • A design system or shared library may be used across teams to maintain consistency.

Challenges & Considerations


Micro-Frontends

  • Shared state management across MFEs can be tricky.
  • Initial setup and CI/CD pipeline management are complex.
  • Performance issues may arise if not optimized properly.

Component-Driven

  • Requires discipline and planning around component design.
  • Over-engineering risk - avoid breaking everything into components unnecessarily.

Conclusion

If you’re building modern React apps at scale, Micro-Frontends and Component-Driven Architecture are concepts worth exploring. They help bring agility, modularity, and maintainability to frontend development - especially in large teams or enterprise-grade projects.

As React ecosystems mature, we’ll see even tighter integrations between these approaches and tools like Turborepo, Module Federation, and Design Systems.

The Developer Toolkit: Essential Custom GPTs for Productivity

Custom GPTs Every Developer Should Try

With so many GPTs available, finding the ones that actually make a difference in your workflow can be a game-changer. Whether you're into architecture planning, coding, SQL optimization, or UI design, there’s something out there to make your life easier.


Here are a few of my personal favorites that I keep going back to. They’ve genuinely saved me time and boosted my productivity.

1. Software Architecture GPT

This is like having a senior architect in your pocket. It guides you through creating a complete software architecture document by asking the right questions and following industry best practices.


The best part? It uses the MoSCoW prioritization technique (Must Have, Should Have, Could Have, Won’t Have), which helps you focus on what's essential and avoid feature overload.

Try Software Architect GPT →

2. Code Copilot

Think of this GPT as your coding buddy who never sleeps. Whether you're stuck, looking to optimize a block of code, or want auto-complete magic, this tool’s got your back. It’s fast, smart, and feels like working alongside a 10x developer.


Try Code Copilot →

3. SQL Expert

SQL queries getting messy? Performance taking a hit? This GPT helps you write and optimize SQL queries effortlessly. From query structure to index suggestions, it makes database work a whole lot smoother.


Try SQL Expert →

4. Screenshot to Code GPT

UI/UX folks, you’ll love this. You can upload a screenshot of a website, or even a rough UI sketch from your notebook, and this GPT turns it into clean HTML, Tailwind CSS, and JavaScript. Great for prototyping or getting started quickly with frontend development.


Try Screenshot to Code GPT →

Final Thoughts

These custom GPTs aren’t just cool—they’re genuinely helpful. Whether you’re planning architecture, writing code, or designing interfaces, these tools can seriously level up your workflow.

If you’ve got other cool GPTs that you’ve used, feel free to drop them in the comments. Always happy to explore more!

August 8, 2025

Predictive Analytics with AI and Big Data: Turning Data into Future Insights


Introduction

As developers, we’re surrounded by data every day - logs, metrics, events, sensor streams, and much more. Storing data is easier than ever thanks to cloud technologies, but making sense of it, identifying patterns, and predicting future trends? That’s where the real challenge lies.

This is where Predictive Analytics comes into play.

Powered by the unstoppable duo of Artificial Intelligence (AI) and Big Data, predictive analytics is more than a buzzword - it’s a developer’s playground for building smarter, adaptive systems.


What is Predictive Analytics?

In simple terms: Predictive Analytics uses past and present data to make informed predictions about the future.

For developers, that could mean:

  • Predicting which users are likely to return in the next 30 days.
  • Flagging suspicious transactions before they are completed.
  • Estimating server load in advance and scaling early.

It’s about creating systems that don’t just respond to input - they anticipate it.

[Raw Data / Historical Logs] ➝ [ETL Pipeline] ➝ [Cleaned Input] ➝ [Feature Engineering] ➝ [Trained Model] ➝ [Prediction / Action]

[ Visual: Predictive Analytics Flow ]


Big Data: The Fuel That Powers AI

Before AI can be useful, it needs one thing: lots of high-quality data.

Big Data is often defined by the 3 Vs:

  • Volume: massive datasets (TBs, PBs) - e.g., IoT sensor logs
  • Velocity: high-speed incoming data - e.g., tweets per second
  • Variety: structured + unstructured formats - e.g., JSON, SQL, videos, CSV

Turning Data into Value

Simply dumping data into storage isn’t enough. We need to:

  • Build clean, scalable ETL/ELT pipelines using tools like Apache Spark, Apache Flink, or Airflow.
  • Optimize storage for refined data.
  • Plan for schema changes and maintenance.

Where AI Comes In

Once the data is prepared, AI helps us learn from it and make predictions at scale. The process includes:

1. Feature Engineering at Scale

  • Transforming raw data into meaningful inputs for models.
  • Popular tools: Spark MLlib, Tecton, custom Python pipelines.

2. Model Training and Validation

  • Training models that can forecast or classify data.
  • Popular frameworks:
    • Scikit-learn
    • XGBoost
    • TensorFlow
    • PyTorch

Model Training Example

[ Visual: Model training example using Scikit-learn ]

3. Inference at Scale

  • Deploying models to production for real-time or batch predictions.
  • Ensuring efficient execution over large-scale data.

Conclusion

Artificial Intelligence (AI) and Big Data are no longer just tools we integrate into our applications - they are reshaping how we build software. Our systems don’t just run code anymore; they learn, adapt, and evolve with the data they see.

If you’re exploring predictive analytics, consider diving deeper into:

  • Distributed data pipelines
  • Model deployment strategies
  • Advanced model training processes

The future is data-driven, and as developers, we’re the ones driving it forward.

AI-Powered Test Case Prioritization: Making Cypress Faster, Smarter, and More Efficient

In today’s fast-paced world of continuous delivery and agile development, speed alone isn’t enough - test automation must also be strategic and results-driven.

While Cypress is a go-to framework for modern end-to-end web testing, many teams still struggle with:

  • Slow test execution as suites grow
  • Unstable results and flaky tests
  • Suboptimal coverage of high-risk, business-critical areas

These issues intensify as applications scale and release cycles shorten.

The solution?

AI-based test case prioritization - combining Cypress’s reliability with machine-learning intelligence to run the right tests first, catch critical bugs earlier, and streamline every CI/CD run.


What is Test Case Prioritization?

Test case prioritization orders tests so the most important or high-risk scenarios execute first, delivering the fastest path to defect detection.


Common Prioritization Criteria

  • Recent code changes and touched files
  • Areas with a history of defects or flakiness
  • Business-critical functionality and usage frequency
  • Test execution time and infrastructure cost
  • Module dependencies and integration impact

Manual prioritization helps, but it lacks the speed, precision, and adaptability that modern CI/CD pipelines demand.


Key Objectives

  • Catch high-priority issues early in the testing cycle
  • Speed up pipelines by executing the highest-value tests first
  • Optimize CI/CD resources by reducing unnecessary runs
  • Align testing with real risk in frequently used or fragile areas

AI Takes the Lead: Smarter, Data-Backed Prioritization

AI-based prioritization uses machine learning, historical data, and predictive analytics to automatically determine the optimal execution order. It can analyze:

  • Recent commits and file diffs
  • Pass/fail history and flakiness signals
  • Consistency vs. intermittency of failures
  • Execution time and compute cost
  • Usage analytics and business impact

The result: critical tests run first to catch regressions early - often without needing to run the entire suite every time.


Why Cypress + AI is a Powerful Combination

Cypress offers developer-friendly syntax, quick runs, and real-time browser feedback. Paired with AI-driven prioritization, teams gain:

  • Faster feedback loops: high-risk results in minutes, not hours
  • Shorter CI times: skip or defer low-impact, stable tests
  • Smarter debugging: detect recurring failures and flaky patterns
  • Better resource focus: spend time on new tests and coverage, not sorting noise

How It Works

A high-level workflow for integrating AI-based prioritization into Cypress:

1. Data Collection

  • Collect execution data: durations, pass/fail trends, flakiness
  • Extract metadata: tags, test names, file paths
  • Map tests to source changes via Git history

2. Feature Engineering

  • Compute stability scores, failure frequency, and “time since last change/failure”

3. Model Training

  • Train supervised or reinforcement models to predict failure likelihood/importance

4. Dynamic Test Ordering

  • Reorder Cypress tests pre-run based on AI recommendations
  • Run high-priority tests first; defer or batch low-impact ones

5. Continuous Learning

  • With every run, feed results back to the model to improve future prioritization

A note on limits: Cypress doesn’t ship any AI capability natively, so the prioritization logic has to live outside the test runner - in your CI scripts or in a separate tool that decides the spec order before each run.
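
As a rough sketch of what that outside layer can look like (the results-history.json file, its shape, and the scoring weights are assumptions for illustration, not part of any specific tool):

// prioritize-and-run.ts - order Cypress specs by a simple risk score, then run them
import { execSync } from "node:child_process";
import { readFileSync } from "node:fs";

// Assumed shape of historical run data collected from previous CI runs
type SpecStats = {
  spec: string;             // e.g. "cypress/e2e/checkout.cy.ts"
  failRate: number;         // failures / total runs over a recent window
  avgDurationSec: number;
  recentlyTouched: boolean; // related source files changed in this commit
};

const history: SpecStats[] = JSON.parse(readFileSync("results-history.json", "utf8"));

// Naive scoring: favour specs that fail often, cover recently changed code, and run fast
const score = (s: SpecStats) =>
  s.failRate * 10 + (s.recentlyTouched ? 5 : 0) - s.avgDurationSec / 60;

const orderedSpecs = [...history]
  .sort((a, b) => score(b) - score(a))
  .map((s) => s.spec);

// Run the highest-priority specs first; --spec accepts a comma-separated list
execSync(`npx cypress run --spec "${orderedSpecs.join(",")}"`, { stdio: "inherit" });

A real implementation would replace the hand-written score with a trained model and feed each run's results back into it, as described in step 5 above.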


AI-Powered Test Prioritization Flow

    Code Commit / Change
              │
              ▼
    AI Prioritization Engine
              │
              ▼
    High-Risk Tests Run First
              │
              ▼
    Faster Feedback & Bug Detection
              │
              ▼
    Continuous Learning & Model Updates

This simple loop ensures that every code change triggers the most relevant tests first, leading to faster detection of regressions and more efficient pipelines.


Solution: Tools and platforms that bridge the gap

1. Testim

  • AI-assisted prioritization and maintenance of automated tests
  • Adapts to UI/code changes to reduce flakiness

2. Launchable

  • Predictive test selection and prioritization with ML
  • Integrates with CI to run the most relevant tests first

3. PractiTest

  • Test management with analytics-driven decision making
  • Highlights which tests to run first based on impact/history

4. Applitools Test Manager

  • Visual AI to analyze UI changes and prioritize affected tests
  • Reduces unnecessary runs by focusing on impacted areas

5. Allure TestOps

  • Advanced test analytics with ML-assisted planning
  • Prioritization informed by historical execution data

6. CircleCI + Launchable Integration

  • ML-based test selection embedded directly in CI pipelines

A Quick Example

Let’s say you have a Cypress suite with 500 tests taking 40 minutes. With AI-based prioritization:

  • The top 50 high-risk tests run first in under 8 minutes
  • They cover ~85% of recent bugs based on commit and failure history
  • Low-impact or stable tests are deferred to off-peak hours or batched weekly

Best Practices for Implementation

  • Start small: bootstrap with historical Cypress runs
  • Phase it in: run AI ordering alongside full suites to validate
  • Keep feedback loops: review, retrain, and tune regularly
  • Combine tactics: parallelization, retries, and CI caching amplify gains

Conclusion

As test suites grow and release velocity increases, smart execution matters as much as fast execution. AI-driven test case prioritization helps Cypress teams detect critical issues sooner, trim CI/CD time and cost, and focus effort where it matters most.

“The next generation of test automation is not only fast - it’s smart.”

July 31, 2025

Automating Flow Duplication in Power Automate for New SharePoint Site Creations

Introduction:

Setting up workflows in Power Automate can take a lot of time, especially when the same workflows need to be recreated every time a new SharePoint site is created. 

Instead of manually creating the same workflows repeatedly, you can automate the process. This means that whenever a new SharePoint site is created, the necessary workflows are automatically duplicated and configured without any manual intervention. 

In this blog, we will walk through the steps to automatically duplicate Power Automate flows whenever a new SharePoint site is created.

Use case:

One of our clients required that a specific Power Automate flow be automatically replicated whenever a new SharePoint site was created. Manually duplicating the flow each time wasn’t scalable, so we implemented an automated solution. 

Architecture Overview:

Here's a high-level overview of the automation process: 

  • Trigger: A new SharePoint site is created. 

  • Retrieve: The definition of the existing (source) flow is fetched. 

  • Update: The flow definition is modified to align with the new site’s parameters. 

  • Recreate: A new flow is created from the modified definition and assigned to the new site.


Step-by-Step Guide to Automating Workflow Duplication

Step 1: Detect New Site Creation

Add a trigger that detects when a new SharePoint site is created.

Step 2: Get the Source Flow (Template Flow)

Use the Power Automate Management connector. 

Add the action "Get Flow" to retrieve the definition of the existing (template) flow. 

This action returns a JSON object containing the flow’s full definition, including triggers, actions, and metadata. 




Step 3: Get Flow Definition and Modify Site-Specific Values

You will now modify the values in the flow definition to suit the new site. 

Update the flow definition retrieved from the "Get Flow" action by replacing the template’s Site URL and List Name or Library ID with the values from the newly created SharePoint site. 

In Power Automate, this is typically done with an expression like:

string(body('Get_Flow')?['properties']['definition'])
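
For example, a Compose action could swap the template site URL for the new site's URL with the replace() expression - the template URL and the webUrl property below are placeholders that depend on how your Step 1 trigger exposes the new site:

replace(
  string(body('Get_Flow')?['properties']['definition']),
  'https://yourtenant.sharepoint.com/sites/TemplateSite',
  triggerBody()?['webUrl']
)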

 


Step 4: Get All Connection References

Use the "Select" action to format the connection references by mapping fields like connectionName, id, and source from the connectionReferences array. These will be used when creating the new flow.

 

 

Step 5: Create New Flow in Target Environment

Use the "Create Flow" action from the Power Automate Management connector to create the new flow using the modified definition and updated connection references.

  • Environment Name: choose your environment
  • Flow Display Name: provide a unique name
  • Flow Definition: pass the modified JSON definition from Step 3
  • Flow State: set this to control whether the flow is turned on/off after creation
  • connectionReferences: pass the formatted connection references from Step 4



Conclusion:

This blog demonstrated how to automate the creation of workflows in Power Automate by duplicating an existing flow. By implementing this automation, you can eliminate repetitive manual setup each time a new SharePoint site is created. This approach not only saves time and reduces the chance of errors but also ensures consistency across all sites.


If you have any questions, you can reach out to our SharePoint Consulting team here.