November 14, 2024

Level-up your .NET Skills: Automate, Validate, and Secure your Code


In the .NET ecosystem, many libraries simplify common tasks, making the development process smoother and more efficient. In this post, we will explore three libraries that are indispensable in modern .NET applications: AutoMapper, FluentValidation, and BCrypt.Net. These libraries help with data mapping, validation, and security, respectively.


1. AutoMapper: Simplifying Object Mappings

AutoMapper is a library that eliminates the need to manually map properties from one object to another. This is especially helpful when working with Data Transfer Objects (DTOs) or ViewModels, whose structure may differ from the domain entities.

Problem:

Consider a scenario where you have a User entity with a lot of fields, but you only need a subset of those fields to be sent in an API response. Manually copying each property from the entity to a DTO can become tedious and error-prone.

Solution with AutoMapper:

AutoMapper provides a streamlined approach to map these objects.

Step-by-Step Example:

1. Define your domain model (User) and DTO (UserDTO):

public class User
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Email { get; set; }
    public string PasswordHash { get; set; }
    public DateTime DateOfBirth { get; set; }
}

public class UserDTO
{
    public int Id { get; set; }
    public string FullName { get; set; }
    public string Email { get; set; }
}


2. Create an AutoMapper Profile to define the mapping:

using AutoMapper;

public class UserProfile : Profile
{
    public UserProfile()
    {
        CreateMap<User, UserDTO>()
            .ForMember(dest => dest.FullName,
                opt => opt.MapFrom(src => $"{src.FirstName} {src.LastName}"));
    }
}

Here, CreateMap<User, UserDTO>() defines the mapping between the User entity and the UserDTO. The ForMember call maps the FullName property on UserDTO to the concatenation of FirstName and LastName from User.

3. Configure AutoMapper in your Startup class (or using Dependency Injection):

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddAutoMapper(typeof(Startup));
    }
}

4. Use AutoMapper in your application:

public class UserController : ControllerBase
{
    private readonly AppDbContext _dbContext; // EF Core context (name assumed)
    private readonly IMapper _mapper;

    public UserController(AppDbContext dbContext, IMapper mapper)
    {
        _dbContext = dbContext;
        _mapper = mapper;
    }

    [HttpGet("{id}")]
    public ActionResult<UserDTO> GetUser(int id)
    {
        var user = _dbContext.Users.Find(id);
        if (user == null) return NotFound();

        // Map User to UserDTO
        var userDto = _mapper.Map<UserDTO>(user);
        return Ok(userDto);
    }
}

Explanation:

  • The User object is retrieved from the database.

  • The IMapper.Map method is used to convert the User object to a UserDTO object, with minimal effort.
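
If you map directly from an EF Core query, AutoMapper can also push the mapping into the database query itself via the ProjectTo extension from AutoMapper.QueryableExtensions. A minimal sketch, reusing the mapping above:

using AutoMapper.QueryableExtensions;

// Translates the mapping into the SQL projection, so only the
// columns UserDTO needs are fetched from the database.
var userDtos = _dbContext.Users
    .ProjectTo<UserDTO>(_mapper.ConfigurationProvider)
    .ToList();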

2. FluentValidation: Clean and Readable Model Validation

FluentValidation simplifies model validation by allowing developers to write validation logic in a fluent, expressive syntax, keeping the validation logic separate from the model itself.

Problem:

Manually validating model fields (e.g., ensuring required fields are filled, data formats are correct, etc.) often leads to messy and repetitive code.

Solution with FluentValidation:

FluentValidation provides a cleaner way to handle validations with reusable, strongly-typed rules.

Step-by-Step Example:

1. Define your model (User):

public class User
{
    public string Email { get; set; }
    public string Password { get; set; }
    public DateTime DateOfBirth { get; set; }
}

2. Create a Validator class for the model:

using FluentValidation;

public class UserValidator : AbstractValidator<User>
{
    public UserValidator()
    {
        RuleFor(user => user.Email)
            .NotEmpty().WithMessage("Email is required.")
            .EmailAddress().WithMessage("A valid email is required.");

        RuleFor(user => user.Password)
            .NotEmpty().WithMessage("Password is required.")
            .MinimumLength(8).WithMessage("Password must be at least 8 characters long.");

        RuleFor(user => user.DateOfBirth)
            .NotEmpty().WithMessage("Date of birth is required.")
            .Must(BeAtLeast18).WithMessage("You must be at least 18 years old.");
    }

    private bool BeAtLeast18(DateTime dateOfBirth)
    {
        return dateOfBirth <= DateTime.Now.AddYears(-18);
    }
}

3. Use FluentValidation in your Controller or Service:

public class UserController : ControllerBase
{
    private readonly IValidator<User> _validator;

    public UserController(IValidator<User> validator)
    {
        _validator = validator;
    }

    [HttpPost]
    public IActionResult Register(User user)
    {
        var validationResult = _validator.Validate(user);
        if (!validationResult.IsValid)
        {
            return BadRequest(validationResult.Errors);
        }

        // Proceed with registration logic
        return Ok();
    }
}
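
For the IValidator<User> above to be injectable, the validator must be registered with the container. A minimal sketch using FluentValidation's DI extensions (the exact package providing this method varies by version):

public void ConfigureServices(IServiceCollection services)
{
    // Scans the assembly containing UserValidator and registers
    // every validator it finds with the DI container.
    services.AddValidatorsFromAssemblyContaining<UserValidator>();
}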

Explanation:

  • The UserValidator class defines validation rules for the User model.

  • The RuleFor method applies specific validation rules to each property, including a custom rule for the age check.

  • The Validate method checks whether the model is valid, and any errors are returned in the response.

3. BCrypt.Net: Securing User Passwords

BCrypt.Net is a library for hashing passwords securely. Passwords should never be stored in plain text, and BCrypt helps ensure password security with hashing and salting.

Problem:

Storing passwords as plain text in databases makes user accounts vulnerable to data breaches and attacks.

Solution with BCrypt.Net:

BCrypt is widely regarded as a secure way to hash passwords, incorporating salt to protect against rainbow table attacks.

Step-by-Step Example:

1. Install BCrypt.Net:

dotnet add package BCrypt.Net-Next

2. Hash a password before storing it:  

public class UserService
{
    public string HashPassword(string password)
    {
        return BCrypt.Net.BCrypt.HashPassword(password);
    }
    public bool VerifyPassword(string password, string hash)
    {
       return BCrypt.Net.BCrypt.Verify(password, hash);
    }
}
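
BCrypt.Net-Next also lets you raise the work factor (cost) above the default, which makes each hash, and therefore each brute-force guess, slower. A minimal sketch; the value 12 is a common choice, not a requirement:

public string HashPasswordStronger(string password)
{
    // 12 = work factor (cost); higher is slower and harder to brute-force
    return BCrypt.Net.BCrypt.HashPassword(password, 12);
}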

3. Use BCrypt in your registration and login logic:

public class UserController : ControllerBase
{
    private readonly UserService _userService;

    public UserController(UserService userService)
    {
        _userService = userService;
    }

    [HttpPost("register")]
    public IActionResult Register(string password)
    {
        var hashedPassword = _userService.HashPassword(password);

        // Save hashedPassword to the database (omitted for brevity)

        return Ok("User registered successfully.");
    }

    [HttpPost("login")]
    public IActionResult Login(string password, string storedHash)
    {
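        // NOTE: in a real application, fetch the stored hash from the
        // database by user name rather than accepting it from the client.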
        if (_userService.VerifyPassword(password, storedHash))
        {
            return Ok("Login successful.");
        }

        return Unauthorized("Invalid password.");
    }
}

Explanation:

  • HashPassword is used to hash the user's password before storing it in the database.
  • During login, VerifyPassword checks whether the entered password matches the stored hash, ensuring secure authentication.

Conclusion

These libraries (AutoMapper, FluentValidation, and BCrypt.Net) offer solutions to common problems encountered in .NET development. By using them, you can focus on writing cleaner, more maintainable, and more secure code while relying on well-tested solutions for routine tasks.

If you have any questions, you can reach out to our SharePoint Consulting team here.

October 15, 2024

How to Set Up SharePoint Brand Center: A Step-by-Step Guide

The SharePoint Brand Center is a centralized application designed for managing branding elements such as logos, colors, and fonts, making it easier for organizations to maintain consistent branding. It utilizes the SharePoint Organization Asset Library (OAL) for backend storage and management of these assets. The Brand Center app can be found on a specific site within your tenant.

Important Considerations Before Activating the Brand Center:
  • One Brand Center per Organization: SharePoint currently allows only one Brand Center for each tenant. All branding assets and font management will be centralized in one location. 
  • Global Administrator Setup: Global Administrator privileges are necessary to activate the Brand Center, ensuring appropriate configuration and management of the organization's brand identity.

How to Set Up a SharePoint Brand Center 

Global admins can activate the SharePoint Brand Center app from the Microsoft 365 admin center by following these steps:

1. Go to the Microsoft 365 Admin Center: Sign in to your admin account and navigate to the admin center.

2. Navigate to Settings: From the left-hand navigation, go to Settings → Org settings.

3. Select Brand Center (Preview): In the "Services" section, choose Brand Center (preview).

4. Create a Site: Enter the desired site name and address. Alternatively, use the suggested name "Brand Guide" and activate the public CDN by providing your consent. This creates your centralized Brand Center.

5. Finalize Setup: Click Create site. Once complete, links to the Brand Center site and app will be generated. Copy the necessary link and sign in to access the app.


Access your Brand Center via the following URL format:


Adding Custom Fonts to the SharePoint Brand Center

Now that your Brand Center is set up, follow these steps to add custom fonts:

1. Go to the Brand Center Site: Navigate to the site associated with your Brand Center in SharePoint Online. You will find it listed under your SharePoint Online sites.

2. Open the Brand Center App: Click on Settings and select Brand Center (preview). Once inside the app, click Add Fonts.

3. Upload Custom Fonts: Click the Upload button to upload your custom fonts. Ensure your font files are in one of the supported formats:
  • TrueType fonts (.ttf)
  • OpenType fonts (.otf)
  • Web Open Font Format fonts (.woff)
  • Web Open Font Format 2.0 fonts (.woff2)

4. Apply Branding to SharePoint: Navigate to the Brand Center (preview) and click SharePoint to apply the selected branding assets across your sites.

5. Choose your font in the Display font and Content font sections: Select the appropriate fonts for your site's display and content areas to ensure a consistent look across your SharePoint site.

6. Assign Fonts for Site Elements: Specify the fonts for titles, headings, body text, and interactive elements (e.g., buttons).

7. Name and Save the Font Package: Give your font package a unique name, save the configuration, and switch the Visible toggle to "Yes" to make it available on the Change the look page.


User Experience

Now that we have discussed how admins can add new fonts to the Brand Center app, let's look at how users can make the most of them. To do this, navigate to a SharePoint site → click the gear icon → select Change the look.

Select the name of the font package you just uploaded from the list, then click Save.

Old SharePoint View:

New SharePoint View:

The new fonts are now applied across the site. Similarly, you can personalize your site's appearance by selecting the newly added fonts via the Brand Center.

The SharePoint Brand Center not only centralizes branding management but also introduces the long-awaited support for custom fonts, ensuring a consistent and unique look across all your SharePoint sites. By utilizing the Brand Center, you can enhance your brand identity, improve recognition, and provide a cohesive user experience throughout your digital workspace.

If you have any questions, you can reach out to our SharePoint Consulting team here.


September 26, 2024

Mastering SQL: 9 Best Practices for Writing Efficient Queries

Optimizing SQL queries is crucial for improving performance and readability in any database-driven application. In this blog, we will walk through key SQL query best practices, offering actionable tips to help developers write cleaner, faster, and more maintainable code.

SQL tips for beginners

SQL (Structured Query Language) is the backbone of relational databases, enabling us to interact with data seamlessly. However, writing effective SQL queries isn't just about getting the right results; it's also about optimizing performance, ensuring scalability, and keeping the code readable for future developers.

In this post, we’ll cover essential techniques that will improve your SQL query writing, making your code more efficient and easier to manage.

1. Use SQL Keywords in Capital Letters

When writing SQL queries, it's a good practice to capitalize all SQL keywords such as SELECT, WHERE, JOIN, etc. This simple step improves readability, making it easier for other developers to quickly understand the structure and logic of your query.

Example:
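A simple query with capitalized keywords (table and column names are illustrative):

SELECT first_name, last_name
FROM employees
WHERE department_id = 10
ORDER BY last_name;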

2. Use Table Aliases with Columns When Joining Multiple Tables

Aliases provide a shorthand for table names, making your queries easier to read, especially when dealing with long or multiple table names. Aliases also help prevent ambiguity when columns from different tables share the same name.

Example:
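A hypothetical join between employees and departments tables:

SELECT e.employee_id, e.first_name, d.department_name
FROM employees e
JOIN departments d ON e.department_id = d.department_id;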
In this example, e and d are aliases that make the query shorter and cleaner.

3. Never Use SELECT *; Always Specify Columns in SELECT Clause

Using SELECT * might seem convenient, but it fetches all columns, which can be inefficient, especially in tables with many columns. Instead, explicitly specify the columns you need to reduce unnecessary data retrieval and enhance performance.

Example:
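For illustration, against a wide employees table:

-- Avoid: fetches every column
SELECT * FROM employees;

-- Prefer: fetch only what you need
SELECT employee_id, first_name, last_name FROM employees;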
By only selecting the necessary columns, you optimize both performance and clarity.

4. Add Useful Comments for Complex Logic, But Avoid Over-Commenting

While it's important to comment on complex logic to explain your thought process, over-commenting can clutter the query. Focus on adding comments only when necessary, such as when the logic might not be immediately clear to others.

Example:
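A sketch of a well-placed comment (the business rule is invented for illustration):

-- Exclude pre-2020 hires: legacy records use a different salary structure
SELECT employee_id, salary
FROM employees
WHERE hire_date >= '2020-01-01';
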
5. Use Joins Instead of Subqueries for Better Performance

Joins are often more efficient than subqueries because they avoid the need to process multiple queries. Subqueries can slow down performance, especially with large datasets.

Example:
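An illustrative rewrite from a subquery to a join:

-- Subquery version:
SELECT first_name
FROM employees
WHERE department_id IN (SELECT department_id
                        FROM departments
                        WHERE location = 'New York');

-- Join version, which the optimizer often handles better:
SELECT e.first_name
FROM employees e
JOIN departments d ON e.department_id = d.department_id
WHERE d.location = 'New York';
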
6. Create CTEs Instead of Multiple Subqueries for Better Readability

Common Table Expressions (CTEs) make your queries easier to read and debug, especially when working with complex queries. CTEs provide a temporary result set that you can reference in subsequent queries, improving both readability and maintainability.

Example:
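A sketch using a CTE in place of a repeated subquery:

WITH department_totals AS (
    SELECT department_id, SUM(salary) AS total_salary
    FROM employees
    GROUP BY department_id
)
SELECT d.department_name, t.total_salary
FROM department_totals t
JOIN departments d ON t.department_id = d.department_id;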

7. Use JOIN Keywords Instead of Writing Join Conditions in WHERE Clause

Using the JOIN keyword makes your SQL queries more readable and semantically clear, rather than placing join conditions in the WHERE clause.

Example:
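The same query written both ways (illustrative schema):

-- Join condition buried in WHERE:
SELECT e.first_name, d.department_name
FROM employees e, departments d
WHERE e.department_id = d.department_id;

-- Explicit JOIN keyword, clearer intent:
SELECT e.first_name, d.department_name
FROM employees e
JOIN departments d ON e.department_id = d.department_id;
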
8. Never Use ORDER BY in Subqueries, It Increases Runtime Unnecessarily

Using ORDER BY in subqueries can cause unnecessary performance hits since the sorting is often unnecessary at that point. Instead, apply ORDER BY only when absolutely needed in the final result set.

Example:
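An illustrative before/after:

-- Avoid: the inner ORDER BY is wasted work
SELECT first_name
FROM (SELECT first_name, salary FROM employees ORDER BY salary) AS ranked;

-- Prefer: sort once, in the final result set
SELECT first_name
FROM employees
ORDER BY salary;
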
9. Use UNION ALL Instead of UNION When You Know There Are No Duplicates

The UNION operator removes duplicates by default, which adds overhead to your query. If you're certain there are no duplicates, using UNION ALL can greatly improve performance by skipping the duplicate-checking step.

Example:
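For example, combining two sets that cannot overlap (hypothetical tables):

-- UNION sorts and compares rows to drop duplicates:
SELECT employee_id FROM current_employees
UNION
SELECT employee_id FROM former_employees;

-- UNION ALL skips that work when duplicates are impossible:
SELECT employee_id FROM current_employees
UNION ALL
SELECT employee_id FROM former_employees;
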
By following these best practices, you’ll be able to write queries that run faster, are easier to understand, and scale well as your database grows. Small optimizations like avoiding SELECT *, using proper joins, and leveraging CTEs can make a big difference in the long run. 

Happy querying❗

If you have any questions, you can reach out to our SharePoint Consulting team here.

September 18, 2024

Automating Workflows: How Apache Airflow Can Streamline Data Processes - A Step-by-Step Guide

Apache Airflow is an open-source platform for developing, scheduling, and monitoring batch-oriented workflows. Airflow’s extensible Python framework enables you to build workflows connecting with virtually any technology. A web interface helps manage the state of your workflows. Airflow is deployable in many ways, varying from a single process on your machine to a distributed setup to support even the biggest workflows.
Airflow is used to author workflows as Directed Acyclic Graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. A rich CLI makes performing complex surgeries on DAGs a snap, and the rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed.

Requirements:

Source     | Main Version (Dev)         | Stable Version (2.10.0)
-----------|----------------------------|---------------------------
Python     | 3.8, 3.9, 3.10, 3.11, 3.12 | 3.8, 3.9, 3.10, 3.11, 3.12
Platform   | AMD64/ARM64(*)             | AMD64/ARM64(*)
Kubernetes | 1.28, 1.29, 1.30, 1.31     | 1.27, 1.28, 1.29, 1.30
PostgreSQL | 12, 13, 14, 15, 16         | 12, 13, 14, 15, 16
MySQL      | 8.0, 8.4, Innovation       | 8.0, 8.4, Innovation
SQLite     | 3.15.0+                    | 3.15.0+


Workflows as Code:

The main characteristic of Airflow workflows is that all workflows are defined in Python code. Defining workflows as code serves several purposes:

  1. Dynamic: Airflow pipelines are configured as Python code, allowing for dynamic pipeline generation.
  2. Extensible: The Airflow framework contains operators to connect with numerous technologies. All Airflow components are extensible to easily adjust to your environment.
  3. Flexible: Workflow parameterization is built in, leveraging the Jinja templating engine.
  4. Elegant: Airflow pipelines are lean and explicit.
  5. Scalable: Airflow has a modular architecture and uses a message queue to orchestrate an arbitrary number of workers.


Implementation sources:
Apache Airflow can be implemented using the below methods.
  1. PyPi
  2. Docker Images
  3. Docker Files
  4. Helm Charts
  5. Released sources
  6. Managed Airflow Services.
The recommended method for installing and running Apache Airflow is a combination of a Docker image/Dockerfile and a Docker Compose file. This runs the Airflow components in isolation from other software on the same physical or virtual machine while keeping dependency maintenance easy.

Installing Pre-requisites: 

Install Docker Community Edition (CE) on your workstation. Depending on your OS, you may need to configure Docker to use at least 4.00 GB of memory for the Airflow containers to run properly.

Install Docker Compose v2.14.0 or newer on your workstation.

Implementation Steps:
We will create a Docker Compose YAML file for installing and deploying the Apache Airflow containers (a command for fetching the official file follows the list). This docker-compose file will have the following service definitions:
 - Airflow Scheduler, which monitors all tasks and DAGs and triggers task instances once their dependencies are complete.
 - Airflow Webserver, which provides a UI at the specified IP:PORT URL.
 - Airflow Worker, which executes the tasks handed out by the scheduler.
 - Airflow Triggerer, which runs an event loop for deferrable tasks.
 - Airflow Init, the initialization service.
 - Postgres, the database.
 - Redis, the broker that forwards messages from the scheduler to the worker.
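
Rather than writing this file from scratch, you can start from the official docker-compose.yaml that the Airflow project publishes for each release, for example (URL shown for the 2.10.0 stable version discussed above):

curl -LfO 'https://airflow.apache.org/docs/apache-airflow/2.10.0/docker-compose.yaml'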


For this Docker Compose setup, we will mount directories so that their contents are synchronized between the host machine and the container. The directories use user-based ownership, generally the airflow user.

 - DAGs Directory: The DAG files are stored here.
 - Logs Directory: Contains logs from task execution and the scheduler.
 - Config Directory: Holds the local configuration file airflow_local_settings.py for customizations.
 - Plugins Directory: Used to add custom plugins.


The crucial step before starting the implementation is to create the above directories and set the correct permissions, since the containers run as the airflow user. The docker-compose file lists many services, so note the UID (and GID) of the airflow user on the host machine to make sure the Docker Compose environment gets the correct values:

mkdir -p ./dags ./logs ./plugins ./config
echo -e "AIRFLOW_UID=$(id -u)" > .env

Now we run the Docker command to deploy the airflow-init service, which initializes the database and creates the first user login. The account created has the login airflow and the password airflow.

docker compose up airflow-init

Once the initialization has completed, clean up the environment before running the full set of services listed in the docker-compose file. Run the command below in the directory where you downloaded the docker-compose YAML file. Once everything is removed, recreate the docker-compose file using the guide and store it in the same directory.

docker compose down --volumes --remove-orphans

Now we can start all services:

docker compose up -d --build


The above implementation can use a Dockerfile instead of a plain Docker image, which allows more customization via a requirements text file referenced inside the Dockerfile. To do that, create a Dockerfile in the same directory where your docker-compose YAML file is stored, place the requirements text file next to it, and run the command below to deploy the containers using the Dockerfile instead of the image.
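
A minimal Dockerfile sketch (the base-image tag matches the 2.10.0 stable version discussed above; pin it to whichever release you deploy):

FROM apache/airflow:2.10.0
COPY requirements.txt /requirements.txt
RUN pip install --no-cache-dir -r /requirements.txt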

docker compose up -d --build

Once the cluster has started up, you can log in to the web interface and begin experimenting with DAGs.

Using Airflow:
Let’s look at the following snippet of code (DAG):
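
A sketch of such a DAG, reconstructed from the breakdown below using the TaskFlow API (the schedule, catchup setting, and exact age value are assumptions):

from datetime import datetime, timedelta

from airflow.decorators import dag, task

default_args = {
    'retries': 5,                         # retry failed tasks 5 times
    'retry_delay': timedelta(minutes=5),  # wait 5 minutes between retries
}

@dag(
    dag_id='dag_with_taskflow_api_v02',
    default_args=default_args,
    start_date=datetime(2024, 9, 4),
    schedule='@daily',   # assumed; any schedule works
    catchup=False,       # skip backfilling missed runs
)
def taskflow_example():

    @task(multiple_outputs=True)
    def get_name():
        # "Extract" a first and last name
        return {'first_name': 'Smit', 'last_name': 'Shanischara'}

    @task()
    def get_age():
        # "Extract" a static age (value is illustrative)
        return 30

    @task()
    def greet(first_name, last_name, age):
        # "Load": print a greeting built from the extracted data
        print(f'Hello! My name is {first_name} {last_name} and I am {age} years old.')

    name = get_name()
    greet(first_name=name['first_name'],
          last_name=name['last_name'],
          age=get_age())

taskflow_example()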


DAG Breakdown:

DAG Definition:

dag_id = dag_with_taskflow_api_v02
This is the unique identifier for the DAG.

default_args
Defines default parameters with retries (5 attempts) and delay between retries (5 minutes).

start_date = datetime(2024, 9, 4)
The DAG can start running from this date. If the DAG is triggered after this date, any missed runs will be backfilled unless catchup is turned off.

Tasks Breakdown:

get_name()
Returns a dictionary containing a first name and last name: 
{'first_name': 'Smit', 'last_name': 'Shanischara'}.

get_age()
Returns a static integer, representing an age.

greet()
Prints a greeting message built from the extracted name and age.

Task Execution Flow:

The DAG starts by executing the get_name() and get_age() tasks in parallel, since there is no dependency between them.
Once both tasks are complete, their outputs are passed to the greet() task, which uses the extracted data to print a message.

To summarize, this DAG demonstrates a simple workflow in which data is "extracted" (name and age), combined in a task, and then "loaded" by printing a greeting message.

Airflow evaluates this script and executes the tasks at the set interval and in the defined order. The status of the DAG is visible in the web interface.

Just like the above DAG, we can have multiple DAGs running on a single Apache Airflow instance.

Removing Airflow:

To stop and delete containers, delete volumes with database data, and remove the downloaded images, run:

docker compose down --volumes --rmi all


Best Practices: 

 - Modular DAGs: Break large workflows into smaller, reusable DAGs. This promotes better maintainability and scalability.
 - Task Dependencies: Properly define task dependencies using the set_upstream() and set_downstream() methods or the >> bitshift operator to ensure correct execution order (see the sketch after this list).
 - Handle Failures: Use retries, on_failure_callback, and alerts to handle task failures gracefully.
 - Version Control: Keep your DAGs under version control (e.g., Git) for better collaboration and tracking changes.
 - Monitoring: Set up monitoring for failed tasks and performance bottlenecks using tools like Grafana or integrating with cloud monitoring solutions.
 - Security: Secure Airflow with role-based access control (RBAC) and authentication mechanisms.
 - Logging: Ensure logs are properly stored in a centralized system for debugging and compliance.
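
A tiny illustration of the dependency styles mentioned above (extract, transform, and load are hypothetical task names):

# Method-call form:
extract.set_downstream(transform)   # transform runs after extract
transform.set_downstream(load)      # load runs after transform

# Equivalent bitshift form, which most DAGs use:
extract >> transform >> load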
