November 23, 2023

Unlocking the Power of Azure AI Language Service: A Comprehensive Overview and Document Summarization

Introduction:

In the ever-evolving world of artificial intelligence, Azure AI Language Service stands out as a formidable tool that promises to revolutionize the way we interact with and analyze textual content.


This article will take you into the depths of Azure AI Language Service, offering a comprehensive overview and insight into its capabilities. We will also witness the magic of document summarization in a small yet powerful React application.


This interactive experience will showcase how seamlessly Azure AI Language Service integrates with modern web technologies, providing a practical demonstration of its capabilities using Natural Language Processing (NLP) features for understanding and analyzing text.

Prerequisites:

  • Azure Subscription - Create a free subscription from here.

  • NodeJS Installed on Machine (Tested on Node.js 16.19.0)

Azure AI Language Service: Unleashing Its Power:

Azure AI Language is a cloud-based service offering Natural Language Processing (NLP) capabilities for text comprehension and analysis. Using this service, we can develop intelligent applications that work with textual content. Here is an overview of what the Language service can do with its powerful features:


Named Entity Recognition (NER): Identifies entities such as names, events, places, and dates in text.


Personally Identifiable Information (PII) and health information (PHI) detection: Detects and redacts sensitive information such as phone numbers, email addresses, and IDs in text.


Language detection: Determines the language of a document and returns a language code; it supports a wide range of languages and variants.


Sentiment analysis and opinion mining: Learn what people think about your topic. These features analyze text to detect positive or negative sentiment and link it to specific aspects.


Summarization: Generates document or conversation summaries by extracting key sentences that capture the most crucial information from the original content.


Key phrase extraction: A preconfigured feature that identifies and lists the main concepts in text.


You can explore additional features and functionality of the Language service in the documentation available here. Let's now take a closer look at the Summarization feature of the Language service and integrate it into our compact React application.


Azure AI Language Service: Document Summarization:

In today's fast-paced and information-rich world, the need for efficient content processing has become paramount. Summarization plays a crucial role in addressing this need by distilling lengthy and complex information into concise and digestible forms.

Summarization constitutes one of the capabilities provided by Azure AI Language, a suite of cloud-based machine learning and AI algorithms tailored for crafting intelligent applications centered around written language.

Document summarization employs natural language processing techniques to create a condensed version of a document. The API supports two main approaches to automatic summarization: extractive and abstractive.

Extractive: Selects and extracts sentences directly from the original content that collectively capture the most crucial information.

Abstractive: Creates a summary by generating concise and coherent sentences or words, not limited to extracting sentences from the original document. This approach aims to provide a shortened version of lengthy content.

Let’s create an instance of the Language service to showcase practical summarization and seamlessly integrate it into our React application.

Follow the below steps to create an instance of the Language service:

  • To create an instance, log in to your Azure subscription, go to “Create a resource”, and search for “Language”.


  • Click Create, then click Continue to create your resource at the bottom.

  • Fill in all the details, including the name of the instance and the resource group. (You can use the free pricing tier (Free F0) to try the service, and upgrade later to a paid tier for production.)

  • Click Next until you reach the Review + Create tab.

  • Verify all the details and then click Create.


After creating the service instance, review the details in the resource group. To use the Language service, we now need an endpoint and an API key, which we can obtain by accessing Language Studio through this link. Log in using the Azure subscription in which you created the instance.

Navigate to the Summarization text tab within Language Studio and choose the "Summarize Information" option.


Now you can explore summarization directly in the Playground, or integrate it into our application using the provided endpoint and API key. Scroll to the bottom to find the Language endpoint and subscription key, and ensure you have chosen the correct resource for the Language service.


Copy the Subscription Key and Endpoint URL; we will utilize them in our React project.

Setting Up a React Application for Azure Language Service Integration:

The API, along with the obtained Endpoints from the above step, can be employed in various frontend applications. However, for demonstration purposes, we will utilize them in the React app.

Follow the below steps to create the React app and install all the packages needed for this integration:

Note: Ensure that your local development machine has Node version 14 or higher.


  • Run the "npx create-react-app document-summarize" command to set up the scaffolding for the React app.

  • Then install the client library with “npm install --save @azure/ai-language-text@1.1.0” in order to work with Azure AI Language.

  • Now open the project in VS Code.

  • Create a .env file in the root folder.

  • Store the endpoint and API key in it as shown below.
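
A minimal sketch of the .env contents, assuming the variable names used by the App.js code further below (Create React App only exposes environment variables prefixed with REACT_APP_, and the values are the endpoint and key copied from Language Studio):

REACT_APP_ENDPOINT=https://<your-language-resource>.cognitiveservices.azure.com/
REACT_APP_APIKEY=<your-subscription-key>

Restart the development server after changing .env so the new values are picked up.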



  • Navigate to the App.js file in the src folder.

  • Replace its code with the code below.


import React, { useState } from 'react';
import { AzureKeyCredential, TextAnalysisClient } from "@azure/ai-language-text";

const endpoint = process.env.REACT_APP_ENDPOINT;
const apiKey = process.env.REACT_APP_APIKEY;

function App() {
  const [loading, setLoading] = useState(false);

  // In Order to Generate the Download Link of the File
  const download = async(filename, text) => {
    var previousElement = document.getElementById('downloadLink')
    if(previousElement){
      document.body.removeChild(previousElement);
    }
    var element = document.createElement('a');
    element.setAttribute('id', "downloadLink");
    element.setAttribute('href', 'data:text/plain;charset=utf-8,' + encodeURIComponent(text));
    element.setAttribute('download', filename);
    var linkText = document.createTextNode("Download the summarized version of the file");
    element.appendChild(linkText);
    document.body.appendChild(element);
  }

  // In Order to Handle the Input element
  const handleFileChange = async (event) => {
    setLoading(true)
    const file = event.target.files[0];
    var input = event.target;
    var reader = new FileReader();
    reader.onload = async function () {
      var text = reader.result;
      await analyzeAndSummarizeText(file.name,text)
      setLoading(false)
    };
    reader.readAsText(input.files[0]);
  };

  // Analyze and Summarize the Text
  const analyzeAndSummarizeText = async (inputFileName, originalText) => {
    const client = new TextAnalysisClient(endpoint, new AzureKeyCredential(apiKey));
    const actions = [
      {
        kind: "ExtractiveSummarization",
        maxSentenceCount: 2,
      },
    ];
    const analyzeBatch = await client.beginAnalyzeBatch(actions, [originalText], "en");
    analyzeBatch.onProgress(() => {
      console.log(
        `Last time the operation was updated was on: ${analyzeBatch.getOperationState().modifiedOn}`
      );
    });
    const results = await analyzeBatch.pollUntilDone();
    for await (const actionResult of results) {
      if (actionResult.kind !== "ExtractiveSummarization") {
        throw new Error(`Expected extractive summarization results but got: ${actionResult.kind}`);
      }
      if (actionResult.error) {
        const { code, message } = actionResult.error;
        throw new Error(`Unexpected error (${code}): ${message}`);
      }
      for (const result of actionResult.results) {
        console.log(`- Document ${result.id}`);
        if (result.error) {
          const { code, message } = result.error;
          throw new Error(`Unexpected error (${code}): ${message}`);
        }
        let summarizedTextContent = result.sentences.map((sentence) => sentence.text).join("\n");
        await download(inputFileName, summarizedTextContent);
      }
    }
  };

  return (
    <div id="inputFile">
      <input type="file" onChange={handleFileChange} />
      {
        loading && <p>Summarizing the document please wait a while...</p>
      }
    </div>
  );
}

export default App;
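
The code above uses extractive summarization. If your version of the @azure/ai-language-text package also exposes the abstractive summarization action (check the package documentation; the action kind and result shape below are assumptions, not confirmed by this article), swapping it in inside analyzeAndSummarizeText is a small change, sketched here:

// Hypothetical variation inside analyzeAndSummarizeText: request an abstractive summary instead.
const actions = [
  {
    kind: "AbstractiveSummarization", // assumed action kind; verify against the SDK documentation
  },
];
const poller = await client.beginAnalyzeBatch(actions, [originalText], "en");
const results = await poller.pollUntilDone();
for await (const actionResult of results) {
  if (actionResult.kind !== "AbstractiveSummarization" || actionResult.error) continue;
  for (const result of actionResult.results) {
    if (result.error) continue;
    // Abstractive results expose generated summaries rather than extracted sentences.
    const summarizedTextContent = result.summaries.map((summary) => summary.text).join("\n");
    await download(inputFileName, summarizedTextContent);
  }
}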

Save the files, then run "npm start" to start the development server on port 3000 and test the solution. Upload a document and wait briefly while the summarized version is generated. Once ready, click the download link to retrieve it.

Output:



Conclusion:

Throughout this article, we explored the capabilities of Azure AI Language Service, delving into its features and functionality. Specifically, we seamlessly integrated the Document Summarization feature of the Language service into a React application. By doing so, we harnessed the power of Azure AI Language Service to enhance document processing in a practical and user-friendly manner.

If you have any questions, you can reach out to our SharePoint Consulting team here.

September 1, 2023

ChatGPT Code Interpreter: Revolutionizing How We Write and Understand Code

In the digital age, the intersection of Artificial Intelligence (AI) and coding has given rise to powerful tools that transform the way we approach programming. Among these, the ChatGPT Code Interpreter stands out as a remarkable innovation. If you've ever wondered, "Is there a way for AI to help me understand or write code?", or "How can I simplify the coding process with the help of AI?", you're in the right place.

Dive into the world of ChatGPT Code Interpreter and discover how it's making waves in the programming landscape.

Discover the power of AI in coding with the ChatGPT Code Interpreter. Whether you're a seasoned developer or just starting, see how AI can revolutionize your coding experience.

What is AI?

Artificial Intelligence, commonly referred to as AI, is a branch of computer science that aims to create machines that can perform tasks that typically require human intelligence. These tasks include problem-solving, understanding natural language, recognizing patterns, and making decisions. With advancements in machine learning and neural networks, AI systems like ChatGPT are now capable of mimicking human thought processes to an unprecedented degree.

Why Use the ChatGPT Code Interpreter?

  1. Efficiency: No more endless hours of debugging. The Code Interpreter can assist in identifying and suggesting fixes for your coding challenges.
  2. Learning: Whether you're a beginner trying to understand a complex code snippet or an expert seeking to optimize your code, ChatGPT offers insights and explanations tailored to your needs.
  3. Collaboration: Sharing code with peers? ChatGPT can act as a mediator, interpreting and explaining code segments for better team understanding.
  4. Versatility: From Python to JavaScript, the Code Interpreter is designed to understand and assist with a wide range of programming languages.

The most recent Code Interpreter ChatGPT model has new functionalities that prior AI models lacked. OpenAI has done an excellent job of allowing you to run Python inside of ChatGPT to perform interactive tasks. This blog serves as a reference for the Code Interpreter.

According to OpenAI's website, the Code Interpreter is a new experimental model of ChatGPT. In this blog, I'll describe and demonstrate how it allows you to upload files and run code in a Python sandbox. People are already using it to create games in minutes, map population density by ZIP code, and even create pretty good diagrams for statistics and information based on Excel spreadsheets.

It can now do a few more things, such as use Python, upload files, and download files. 

So, how does this new model run code? 

Python runs in a sandbox, so code is firewalled and executed inside a temporary environment. This means that this version of ChatGPT can now do math, addressing one of the most significant restrictions of prior versions.

How to enable ChatGPT Code Interpreter?

Let's give it a shot. It has been enabled for all premium subscribers. Go to Settings, then Beta features, and tick Code interpreter to make it available under GPT-4 in the drop-down list.



Different ways to use Code Interpreter

Create Graphical Representations: 

First, I asked it to "generate a graphical depiction of Pi". It doesn't just tell me what pi is; it also tries to produce the depiction in a code sandbox. Unfortunately, the initial attempt fails, but because this model is intelligent, it can detect when it fails.
This time I told it to use Python libraries, and it did. The end result is this diagram, which is exactly what I was looking for. I can go back and look at the Python code: it imports a library, the code is clean and well-commented, and I could copy and paste it directly into an application.


Mathematical Calculations: 

You can ask it mathematical questions such as "How long would it take to drive to the nearest city from New Delhi at 100 kilometers per hour?" The ChatGPT Code Interpreter identifies each city and its distance, then develops a formula to calculate the travel time for each distance. I double-checked this on Google, and it was mostly correct.


Work with files: 

The next amazing feature is the ability to upload files. You can upload nearly anything as long as it is 100 MB or less. In this scenario, I'll attach a PDF invoice for the product as well as a document for the product module. I can then ask questions such as, "What is this PDF about?", "How much tax have I paid for this product?", or "Explain the purpose of the module."


Analyzing Excel files or CSVs is one of the nicest things that the Code Interpreter can do. I uploaded a large CSV file and asked some interesting questions, such as asking it to create a bar chart. This is where ChatGPT can conduct some pretty impressive data analysis on spreadsheets like this.


Image Editing: 

Another capability of the new ChatGPT Code Interpreter model is the ability to upload, access, and alter photos. I submitted a photo and asked if it could recognize the face and its location in the photo. It uses Python for this task; there are Python packages for things like face detection that it can use to locate the face. It did an excellent job drawing a red square around the face in the supplied image. Then I asked it to crop around the face so I could make an avatar, and it could do that as well.



The ChatGPT Code Interpreter isn't just another tool; it's a testament to how AI is reshaping our approach to coding. For professionals, hobbyists, and learners alike, this AI-powered assistant offers an unparalleled blend of guidance, interpretation, and optimization. If you're on the fence about integrating AI into your coding journey, remember that in the world of programming, staying ahead means embracing the future. And the future is undeniably intertwined with AI.

There are numerous ways to use the OpenAI Code Interpreter. If I missed any, please feel free to mention them in the comments section.

If you have any questions, you can reach out to our SharePoint Consulting team here.

August 24, 2023

Integrating Azure OpenAI into Microsoft Teams using Teams Toolkit: A How-To Guide

Introduction:

In today's digital landscape, effective communication and collaboration are essential for productive teamwork. Microsoft Teams has emerged as a popular platform that brings people together. Now, imagine taking your Microsoft Teams experience to the next level by integrating it with Azure OpenAI, a powerful language model capable of generating human-like responses.


This article will walk you through the integration of Azure OpenAI with Microsoft Teams, enabling users to engage in chat-based conversations and receive intelligent responses within the familiar Teams interface.

Prerequisites:

  • Access to OpenAI Service on Azure (Please be aware that access to Azure OpenAI services is currently limited. If your Azure tenant does not have access, you have the option to apply for access through this link)

  • An M365 account.

  • NodeJS (Tested on Node.js 16.19.0)

  • Latest stable version of Teams Toolkit Visual Studio Code Extension (Tested on version 5.0.1)

Teams Toolkit:

Teams Toolkit is a user-friendly development framework by Microsoft, designed for creating apps, bots, and integrations within Microsoft Teams. It streamlines the process, making it easier to build collaborative solutions.


To install Teams Toolkit Visual Studio Code extensions, follow these steps:

  • Click on the Extensions icon on the left sidebar.

  • Search and install Teams Toolkit.


To scaffold the project, follow these steps within the Teams Toolkit interface:

  • Click on the Teams Toolkit icon located in the left sidebar.

  • Select "Create a New App" and then choose the "Bot" option.

 
  • Now in the next step select "Basic Bot".

  • Move ahead with selecting "TypeScript" as the programming language.

  • Specify the location and name of the app; in our case it's "TeamsGPT".


After scaffolding the project, you can test the bot solution by following these steps:

  • From the left menu, select "Run and Debug."

  • Choose the desired run profile.

  • Click on the "Run" button to test the bot solution.

Azure OpenAI Model Deployment:

Azure OpenAI is a service provided by Microsoft that allows us to access powerful artificial intelligence models. It enables us to integrate AI capabilities into our applications, making them smarter and more capable.


Follow the below steps to create an instance of Azure OpenAI:

  • Navigate to Azure OpenAI under Cognitive Services in the Azure Portal.

  • Click on "Create New".

  • Fill in all the details, including the name of the instance and the resource group.

  • Click Next until you reach the Review + Submit tab.

  • Verify all the details and then click Create.


This will create a resource group containing the Azure OpenAI service instance. Now navigate to the resource group and select this instance.


Follow the below steps to deploy the model using this instance:

  • Click on "Model Deployment" on the left-hand side.

  • Then select "Manage Deployment", which will take us to Azure OpenAI Studio.

  • Now, click on "Deployments" on the left-hand side.

  • Then click on "Create new deployment", which will prompt us to add the details for our model.


In the "Select a model" field, choose "text-davinci-003", use the same value for the deployment name, and then click Create.


The text-davinci-003 model is a language model developed by OpenAI. It is designed to generate human-like text and provide natural language processing capabilities, and it can be used for tasks such as chatbots, language translation, content generation, and more.


  • Now, select the model and click on Open in Playground.


  • Click on View Code and note down the Endpoint and Key.


Now, coming back to our Teams Toolkit solution, we will store this endpoint and key in the configuration file. Open the config.ts file and modify the code as follows:

const config = {
  botId: process.env.BOT_ID,
  botPassword: process.env.BOT_PASSWORD,
  // Your endpoint URL. The fetch call in teamsBot.ts posts directly to this value, so use the full
  // completions URL for your deployment, e.g. https://XXXX-openai.openai.azure.com/openai/deployments/<deployment-name>/completions?api-version=<api-version>
  Endpoint: 'https://XXXX-openai.openai.azure.com/',
  APIKey: 'XXXX' //Your API Key
};

export default config;

To enable receiving the response back in the Teams interface, modify the implementation of the welcome Adaptive Card logic as follows.


Go to the adaptiveCards folder, open the welcome.json file, and modify the code as below.

{
  "type": "AdaptiveCard",
  "body": [
    {
      "type": "TextBlock",
      "size": "Medium",
      "weight": "Bolder",
      "text": "${title}"
    },
    {
      "type": "TextBlock",
      "text": "${body}",
      "wrap": true
    }
  ],
  "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
  "version": "1.4"
}

Now, for the final change, modify the teamsBot.ts file as below.

import {
  TeamsActivityHandler,
  CardFactory,
  TurnContext,
} from "botbuilder";
import rawWelcomeCard from "./adaptiveCards/welcome.json";
import { AdaptiveCards } from "@microsoft/adaptivecards-tools";
import config from "./config";

export class TeamsBot extends TeamsActivityHandler {

  constructor() {
    super();
    
    this.onMessage(async (context, next) => {
      console.log("Running with Message Activity.");
      let text = context.activity.text;
      const removeMentionedText = TurnContext.removeRecipientMention(context.activity);
      if (removeMentionedText) {
        text = removeMentionedText.toLowerCase().replace(/\n|\r/g, "").trim();
      }

      const fetch = require('node-fetch');

      const endpoint = config.Endpoint;
      const apiKey = config.APIKey;

      const prompt = text;
      const maxTokens = 100;
      const temperature = 1;
      const frequencyPenalty = 0;
      const presencePenalty = 0;
      const topP = 0.5;
      const bestOf = 1;
      const stop = null;

      const requestBody = JSON.stringify({
        prompt,
        max_tokens: maxTokens,
        temperature,
        frequency_penalty: frequencyPenalty,
        presence_penalty: presencePenalty,
        top_p: topP,
        best_of: bestOf,
        stop,
      });

      const headers = {
        'Content-Type': 'application/json',
        'api-key': apiKey,
      };

      const response = await fetch(endpoint, {
        method: 'POST',
        headers,
        body: requestBody,
      });

      const data = await response.json();
      if (response.ok) {
        const generatedtext = data.choices[0].text;
        const cardData = {
          title: "Response From Open AI",
          body: generatedtext,
        };
        const card = AdaptiveCards.declare(rawWelcomeCard).render(cardData);
        await context.sendActivity({ attachments: [CardFactory.adaptiveCard(card)] });
      } else {
        throw new Error('Failed to generate Response');
      }

      // By calling next() you ensure that the next BotHandler is run.
      await next();
    });
  }
}
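
Note: the code above pulls in node-fetch, which may not be part of the scaffolded project's dependencies. If it is missing, install it from the project folder; version 2.x still supports the CommonJS require used here:

npm install node-fetch@2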

Now, follow the below steps to run and debug the bot.

  • From the left menu, select "Run and Debug."

  • Choose the desired run profile.

  • Click on the "Run" button to test the bot solution.


Now, imagine having the ability to effortlessly generate responses and engage in intelligent conversations within the familiar interface of Microsoft Teams.

Conclusion:

With the integration of Azure OpenAI, this becomes a reality. Now, you can ask whatever comes to mind and receive seamless, insightful replies, unlocking a whole new level of communication and collaboration within Teams. Say goodbye to limitations and welcome a world where your thoughts and queries are met with intelligent conversations at your fingertips.


If you have any questions, you can reach out to our SharePoint Consulting team here.

August 3, 2023

To create terms dynamically in a TermSet and set them in a Managed Metadata column using Power Automate

Introduction: 

In this blog post, we will learn how to dynamically create Terms in a TermSet of a Termstore and set them in a managed metadata column using Microsoft Power Automate.

Requirement:

The requirement is to obtain information about Skills and Past Projects from the Office 365 user profile, add it to a SharePoint list, and enable Out-of-the-Box (OOTB) filtering and sorting operations on these fields.
We could use a Single line of text or Multiple lines of text site column and store the information with delimiters, but the limitation is that OOTB filtering and sorting operations cannot be performed on such columns.

To overcome this issue, we are going to use a Managed Metadata site column to store the term values dynamically. This approach allows us to perform OOTB filtering and sorting operations on the information, providing a more effective solution than Single line of text or Multiple lines of text site columns with delimiters.

Approach:

Create a “Managed Metadata” field with the “Allow multiple values” checkbox enabled and the “Customize your term set” option selected, as shown in the below screenshot.





Below are the steps in Power Automate to dynamically create Terms in a TermSet of a Termstore and then create an item in a SharePoint list with those terms in a Managed Metadata field.

Step 1: Add an “Initialize variable” action and name it “Initialize variable Skills”, as shown in the below screenshot. This variable will be used to store the JSON used for SharePoint list item creation.

 


Step 2: Add the “Get user profile (V2)” action with the user principal name or email ID in “User (UPN)”, as shown below.

 


Step 3: Append the value to the “Skills” variable as shown in the below screenshot.


Step 4: Add the below set of actions to process each element in the “Skills” array (which we retrieved in Step 2).
 



Step 5: Then we need to add the “Send an HTTP request to SharePoint” action to retrieve the terms from the TermSet.

Uri - _api/v2.1/termStore/groups/97ac8ae5-1608-4047-8951-585b3a2640c5/sets/e0af947c-6cef-4766-8c79-9c454cc5a323/children.

In this Uri, “97ac8ae5-1608-4047-8951-585b3a2640c5” is the term group ID and “e0af947c-6cef-4766-8c79-9c454cc5a323” is the TermSet ID in which we need to store or create terms.
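
For reference, each child term returned by this call has roughly the following shape (a trimmed sketch; real responses contain additional properties). The labels[0].name and id values are what the later steps rely on:

{
  "value": [
    {
      "id": "<term id>",
      "labels": [
        { "name": "SharePoint", "languageTag": "en-US", "isDefault": true }
      ]
    }
  ]
}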

 


Step 6: To compare the current item in the loop with the term name, we need to filter the values retrieved in Step-5 to check if they are "equal to" the term name.

The expression string(item()?['labels'][0]['name']) is used in the comparison, as shown in the below screenshot.

 


Step 7: Add the following set of actions to form JSON or create JSON data for the "Skills" variable. The condition, as shown in the below screenshot, checks whether the term is present in the "Skills" TermSet or not, using the expression length(body('Filter_skills_array')). If the term is not present, we are adding a POST call to create a new term and then appending the values to the "Skills" variable. On the other hand, if the term is already present, we simply append the values to the "Skills" variable.


 
Step 8: If the condition is "Yes," indicating that the term is present in the "Skills" TermSet of the Termstore, then we simply append the values to the "Skills" variable.

Note: The value will be appended in JSON format as shown below; the property name must match the schema used in the Parse JSON action in Step 13.
{
"Value": "Sharepoint|3198df72-311b-4757-9246-052296af2"
}

Here, the expression shown in the below screenshot is “{current term name in loop}|body('Filter_skills_array')[0]['id']”.

 


Step 9: If the condition is "No," meaning that the term is not present in the "Skills" TermSet of the Termstore, we need to create the term using a POST call with the "Send an HTTP request to SharePoint" action, as shown in the below screenshot.

Uri - _api/v2.1/termStore/groups/97ac8ae5-1608-4047-8951-585b3a2640c5/sets/e0af947c-6cef-4766-8c79-9c454cc5a323/children.

In this Uri, “97ac8ae5-1608-4047-8951-585b3a2640c5” is the term group ID and “e0af947c-6cef-4766-8c79-9c454cc5a323” is the TermSet ID in which we need to store or create terms.
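
The POST call also needs a request body containing the label of the new term. A minimal sketch, based on the v2.1 termStore API (verify the exact shape against the term store documentation), with the term name taken from the current loop item:

{
  "labels": [
    {
      "languageTag": "en-US",
      "name": "<current term name in loop>",
      "isDefault": true
    }
  ]
}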



Step 10: Add a “Compose” action to fetch and store the term ID from the Step 9 response, as shown below.

 


Step 11: After that, append the JSON value, as shown in the below screenshot, with the term name and ID from the "Step-10" compose action.



Step 12: Add the below action after completing the Condition and loop actions to append the closing of JSON in the "Skills" variable.

 


Step 13: Add the below action to parse the JSON, enabling us to create and store the data in a SharePoint list item.

Schema: {
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "Value": {
                "type": "string"
            }
        },
        "required": [
            "Value"
        ]
    }
}
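
Assuming the steps above, the assembled "Skills" variable should parse into an array along these lines (the term names and IDs are placeholders):

[
  { "Value": "SharePoint|<term id>" },
  { "Value": "<another term name>|<term id>" }
]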

 


Step 14: Use the parsed JSON from Step 13 in the “Create item” action, as shown in the below screenshot.

 


Finally, we can see that the terms have been added to the "Skills" TermSet using the Power Automate flow.
 


The terms have also been added to the "Skills" Managed Metadata field in the SharePoint list.

 


Additionally, we can view the "Skills" managed metadata field from the list settings, where we have dynamically appended terms using Power Automate.






If you have any questions, you can reach out to our SharePoint Consulting team here.