December 16, 2016

Automating SQL Server Error Logs to a Single HTML File

Normally, we check the SQL Server error log through SQL Server Management Studio or a text editor. By default, the error log is located at ~\Program Files\Microsoft SQL Server\MSSQL.n\MSSQL\LOG\ERRORLOG.n1 (here, n is the instance name and n1 is the log number, appended dynamically). But if you're managing hundreds or even thousands of SQL Server systems, you might spend most of your day manually checking each server.

So, what is a better way to automate the error log check and write the results for each instance to a single HTML file?

Solution:
One possible solution is to use a PowerShell script. Below is a script that automates the error log check and generates a report for daily monitoring of SQL Server error logs. We list all SQL Server instance names in one text file; the script reads the instances from that file one by one and generates an error log report in an HTML file.

Prerequisites:
The invokesqlquery module needs to be installed. It can be downloaded from http://powershell4sql.codeplex.com

PowerShell script:
<#
This script displays the error log entries for all listed servers for the last one day.
The script can take a few minutes to run if the error logs are large and you are looking back over several days.
#>
cls # Clear the PowerShell window.
import-module invokesqlquery # Import the invokesqlquery module we downloaded and placed in the module folder.
$today    = (get-date).toString() # Store today's date in $today.
$all      = @() # Array variable that accumulates the output.
$lookback = ((get-date).adddays(-1)).ToString() # Store yesterday's date in $lookback.
$serversList = "{your serverlist's text file path}\serverlist.txt" # Full path of the text file listing the SQL Server names.
foreach ($server in get-content $serversList | select-object @{Name = 'serverName'; Expression = {$_}}) # Read the server list and run the checks below for each entry.
{
"$((get-date).toString()) - Checking SQL Error Logs on $($server.servername)..." # Display the server currently being checked.
try  {
<# Explanation of the code below:
invoke-sqlquery -query : executes the given SQL query.
master..xp_readerrorlog : a SQL Server stored procedure that reads the error log; the parameters filter the entries by log number and date range.
select-object : specifies the columns to include in the output.
where-object : filters the output to rows whose Text contains "error"; all other rows are discarded.
` : the backtick continues a command onto the next line.
 #>
$all += invoke-sqlquery -query "EXEC master..xp_readerrorlog 0, 1, null, null, '$lookback', '$today' " -server $server.servername | select-object @{Name="Server"; Expression={$server.servername}}, LogDate, ProcessInfo, Text | `
where-object {$_.Text -match 'error'}| `
where-object {$_.Text -notmatch 'setting database option recovery to'}
}
catch {"Unable to read SQL error log from server $($server.serverName)"}}
$all | ConvertTo-Html | Out-File "{Full Path of the output}\HTMLReport.html" # Export the output to an HTML file at the given path.
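The final ConvertTo-Html | Out-File step is the only HTML-specific piece of the script. As a language-neutral illustration only (the helper function and the sample row below are hypothetical, not part of the script), here is a minimal Python sketch of the same rows-to-HTML-table rendering:

```python
import html

def rows_to_html(rows, columns):
    """Render a list of dicts as a simple HTML table (mirrors ConvertTo-Html)."""
    head = "".join("<th>%s</th>" % html.escape(c) for c in columns)
    body = "".join(
        "<tr>%s</tr>" % "".join(
            "<td>%s</td>" % html.escape(str(r.get(c, ""))) for c in columns
        )
        for r in rows
    )
    return "<table><tr>%s</tr>%s</table>" % (head, body)

# Hypothetical example row, shaped like the script's output objects.
entries = [
    {"Server": "SQL01", "LogDate": "2016-12-15", "ProcessInfo": "spid52", "Text": "Error: 18456"},
]
print(rows_to_html(entries, ["Server", "LogDate", "ProcessInfo", "Text"]))
```

Writing the returned string to a file gives the same kind of report as the PowerShell pipeline above.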
 
This PowerShell script can be scheduled with Task Scheduler to run on a daily basis. Hope this is useful!

If you have any questions, you can reach out to our SharePoint Consulting team here.

SSRS: How to open a linked SSRS Report with parameter in popup

Last month, while working on an SSRS-related project, the requirement was to open another report on click of a date field: when the user clicks the date field in the main SSRS report, it should open another report in a popup.

Approach: To achieve this, I used a JavaScript function. Here are the steps:
1. Click on the Textbox Property of the Date field.
2. Click on Action and select Go to URL.
3. Click on Expression and apply the code below.

 ="javascript:void(window.open('" & Cstr(Parameters!BaseUrl.Value) & "?ScheduleId=" & Cstr(Fields!BS_ID.Value) & "','blank','toolbar=no, menubar=no,scrollbars=no, resizable=no, location=no, addressbar=no, left=100,top=100,height=350,width=500'))"  
 

Now, I'll explain how we can pass the BaseUrl and other fields as parameters to generate the other SSRS report. I have a page named "AdminReport.aspx" which has the following markup.

 <rsweb:ReportViewer ID="rptViewer" runat="server" Height="500px" Style="-ms-overflow-y: scroll" Width="1100px" ShowToolBar="False" ShowParameterPrompts="False" ShowCredentialPrompts="False"></rsweb:ReportViewer>   

In Code behind of AdminReport Page, I've used below code to bind a second report:
 private void ShowReport()  
     {  
       try  
       {  
         string urlReportServer = ConfigurationManager.AppSettings["ReportViewerUrl"]; //It is a URL Of Report Server.
         rptViewer.ProcessingMode = ProcessingMode.Remote; // Processing Mode will be either Remote or Local.
         rptViewer.ServerReport.ReportServerUrl = new Uri(urlReportServer); //Set the Report Server URL.
         rptViewer.ServerReport.ReportPath = "/AdminReport"; //Set path of Report Page.
         //Create an Array List to combine the Parameters which are to be passed into SSRS Report.  
         ArrayList reportParam = new ArrayList();  
         reportParam = ReportDefaultParam();  
         ReportParameter[] param = new ReportParameter[reportParam.Count];  
         for (int k = 0; k < reportParam.Count; k++)  
         {  
           param[k] = (ReportParameter)reportParam[k];  
         }  
         rptViewer.ServerReport.ReportServerCredentials = new CustomReportServerCredentials();  // Set Credentials.
         rptViewer.ServerReport.SetParameters(param); //Set Report Parameters  
         rptViewer.ServerReport.Refresh();  
       }  
       catch (Exception ex)  
       {  
         CommonUtilities.WriteEventToLogFile(CommonUtilities.GetCurrentTime() + ":AdminViewReport : ShowReport : " + ex.ToString(), "", "", "");  
         // throw ex;  
       }  
     }  
     /// <summary>  
     /// Pass default parameters to the report  
     /// </summary>  
     /// <returns></returns>  
     private ArrayList ReportDefaultParam()  
     {  
       ArrayList arrLstDefaultParam = new ArrayList();  
       string strPathAndQuery = HttpContext.Current.Request.Url.PathAndQuery;  
       string strUrl = HttpContext.Current.Request.Url.AbsoluteUri.Replace(strPathAndQuery, "/");  
       string baseUrl = strUrl + "ScheduleMaterialReport.aspx";  
       arrLstDefaultParam.Add(CreateReportParameter("BaseUrl", baseUrl));  
       return arrLstDefaultParam;  
     }  
     /// <summary>  
     /// Create report using parameters  
     /// </summary>  
     /// <param name="paramName"></param>  
     /// <param name="pramValue"></param>  
     /// <returns></returns>  
     private ReportParameter CreateReportParameter(string paramName, string pramValue)  
     {  
       ReportParameter aParam = new ReportParameter(paramName, pramValue);  
       return aParam;  
     }  
  • BaseUrl - the URL of the second report page ("/SubReport.aspx").
  • BS_ID.Value - the ID (value) of the selected record in the main report.
  • ScheduleId - the field passed as a query string parameter to the second SSRS report.
We pass this value from the main report to the other report as a query string parameter, e.g. http://siteurl/SubReport.aspx?ScheduleId=6, and read it in the second report's code-behind:
 int ScheduleId = Convert.ToInt32(Request.QueryString["ScheduleId"]);  

Now, when the user clicks the date field in the main report (the "AdminReport.aspx" page), the other report opens in the "SubReport.aspx" page as a popup.

Hope this would be helpful! If you have any questions, you can reach out to our SharePoint Consulting team here.

December 12, 2016

Upload Multiple Files with a Single Element and Remove Option Without Using jQuery

Recently, while working on a C# ASP.NET project, one requirement was to provide users an option to upload multiple files with a single element, along with a delete option, in an entry form. All files should be saved on Add/Save click. It was a bit tricky, as I had to achieve this WITHOUT using JavaScript or jQuery.

Solution: To achieve this requirement, I used the popular View State concept in C#: bind a DataTable stored in View State, display it in a grid, and add or remove rows from the DataTable dynamically.

Source Code:

//Fetch a data table from View State.
DataTable dtCurrentTable = (DataTable)ViewState["EditMaterial"];
DataRow drCurrentRow = null;
drCurrentRow = dtCurrentTable.NewRow();

//Add row in View State when new file is added.
if (flMaterialUpload1.HasFile)
{
     drCurrentRow["FileName"] = objProperty.fileName;
     drCurrentRow["FilePath"] = objProperty.materialaPath;
     dtCurrentTable.Rows.Add(drCurrentRow);
}
gveditmaterial.DataSource = dtCurrentTable;
gveditmaterial.DataBind();
ViewState["EditMaterial"] = dtCurrentTable;

//Write row command event of grid to delete file.
DataTable dtCurrentTable = (DataTable)ViewState["EditMaterial"];
DataRow[] rows;
rows = dtCurrentTable.Select("BPD_MaterialPath ='" + hdnmaterialpath.Value + "'");
foreach (DataRow row in rows)
{
    row.Delete();
}
dtCurrentTable.AcceptChanges();
gveditmaterial.DataSource = dtCurrentTable;
gveditmaterial.DataBind();
ViewState["EditMaterial"] = dtCurrentTable;

The Multiple File Upload screen will look like below.

If you have any questions, you can reach out to our SharePoint Consulting team here.

December 8, 2016

How to send an Email having HTML formatted Body through SharePoint Add-in

Scenario: 
While working with SharePoint Add-ins (SharePoint-Hosted Apps), there was a requirement to send an email to users. We can do it through the REST API with an AJAX call in SharePoint, but my concern was how to send an email with an HTML-formatted body.

Remember, the recipients should be valid SharePoint users. Emails cannot be sent to non-SharePoint users or external users.

Solution: Here is how I achieved it!
We have to include "AdditionalHeaders" in the mailObject; this is required to specify the content type of the email, here HTML. Without this header, the email is sent as plain text and the HTML formatting is ignored.

function sendEMail(toList, subject, mailContent) {
    appweburl = decodeURIComponent(getQueryStringParameter('SPAppWebUrl'));
    var restUrl = appweburl + "/_api/SP.Utilities.Utility.SendEmail",
        restHeaders = {
            "Accept": "application/json;odata=verbose",
            "X-RequestDigest": $("#__REQUESTDIGEST").val(),
            "Content-Type": "application/json;odata=verbose"
        },
        mailObject = {
            'properties': {
                '__metadata': { 'type': 'SP.Utilities.EmailProperties' },
                'To': { 'results': toList },
                'Subject': subject,
                'Body': mailContent,
                'AdditionalHeaders': {
                    '__metadata': { 'type': 'Collection(SP.KeyValue)' },
                    'results': [{
                        '__metadata': { 'type': 'SP.KeyValue' },
                        'Key': 'content-type',
                        'Value': 'text/html',
                        'ValueType': 'Edm.String'
                    }]
                }
            }
        };
    return $.ajax({
        contentType: "application/json",
        url: restUrl,
        type: "POST",
        data: JSON.stringify(mailObject),
        headers: restHeaders
    });
}
The above method can be called from a SharePoint Add-in or client-side code as shown below.
//An array of valid SharePoint users; external users and non-SharePoint email addresses are not supported.
var toUserList = ["user1@email.com", "user2@email.com"];
//Subject of the Email.
var mailSubject = "Download documents";
//HTML formatted Email Body.
var mailContent = "<h3>Here are all documents</h3><p>Links</p><div>document 1</div>";
sendEMail(toUserList, mailSubject, mailContent).done(function (response) {
    console.log("E-Mail Sent successfully.");
}).fail(function () {
    console.error("Error while sending an E-Mail.");
});
 
Hope this will help!

If you have any questions, you can reach out to our SharePoint Consulting team here.

November 25, 2016

Wrangling Large Data in Python (Part 1)

Present Day (TL;DR)

Before you go ahead and peek at the answer without understanding how it came about, I urge you to read the backstory and then come back to this section. By the way, what I show below is only about 30% of the answer; you can fill in the rest for your use case.

You make a skeleton like so:

import pandas as pd

file_name = "folder/largefile.dat"  
nrows, frames = 0, []
for chunk in pd.read_csv(file_name, sep = "|", chunksize = 2500000, usecols = ["Col1", "Col2"]):
    try:
        frames.append(chunk[chunk.Col1.isin(range(10))])
        nrows += chunk.shape[0]
        print("%d rows processed." % nrows)
    except Exception as e:
        print(e)

The place of interest is the line that filters each chunk. I am merely building a list of data frames, chunk by chunk, based on the condition that Col1 has values between 0 and 9 inclusive. But you can do a lot of things there.
Here’s a hint: Create a function where you just take inputs such as the file name, columns of interest, and, most importantly, the function you want to apply to each chunk. Maybe you do not want to extract records. Maybe you want to build count tables, apply transformations, and about a million other things. Good news is: the limit to what you can do is your creativity.
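To make that hint concrete, here is a hedged sketch of such a function; the name `process_by_chunk` and its signature are my own invention, not from the original post:

```python
import pandas as pd

def process_by_chunk(file_name, columns, per_chunk, sep="|", chunksize=2500000):
    """Apply per_chunk to each chunk of a large delimited file and collect the results."""
    frames = []
    for chunk in pd.read_csv(file_name, sep=sep, usecols=columns, chunksize=chunksize):
        result = per_chunk(chunk)  # e.g. filter rows, aggregate counts, transform columns
        if result is not None:
            frames.append(result)
    return pd.concat(frames, ignore_index=True) if frames else pd.DataFrame(columns=columns)

# Usage: the same filter as the skeleton above.
# small = process_by_chunk("folder/largefile.dat", ["Col1", "Col2"],
#                          lambda c: c[c.Col1.isin(range(10))])
```

For aggregations (count tables and the like), per_chunk would return partial results that you combine afterward instead of concatenating rows.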
One issue I faced is that when an "EOF character found" exception occurs, the skiprows argument does not skip that record; the iterator simply stops and no exception is thrown.
My immediate idea for next steps is to try parallelizing this relatively snail-paced version that serially processes the file chunk by chunk.

Early in November 2016 (The month Trump became US President)

The day started like any other day. I came to the office, fired up my machine, and went to fetch my customary cup of java.

This was the day when everything changed. My faithful Jupyter Notebook environment was inundated with super-massive delimiter-separated files upward of 10 GB in size. Suddenly, 30 GB of RAM was not enough. The application stuttered. MemoryError exceptions were everywhere.
It was a welcome challenge to work with such large files without immediately resorting to Spark or such.

I wrote a simple first version that used nrows and skiprows arguments to manually iterate through chunks of the file. This version basically used a loop counter variable that I multiplied by a chosen chunk size. Then I’d manually set the values as nrows = <chunk size> and skiprows = i * <chunk size>. One exercise for the reader is to write this “manual” version.
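For readers who want to check their answer to that exercise, here is one possible sketch (assuming, as above, a pipe-delimited file with a single header row; the helper name is mine):

```python
import pandas as pd

def manual_chunks(file_name, columns, chunk_size):
    """Iterate a delimited file with nrows/skiprows instead of chunksize."""
    frames, i = [], 0
    while True:
        # Row 0 is the header; skip the i data chunks already read.
        chunk = pd.read_csv(file_name, sep="|", usecols=columns,
                            nrows=chunk_size, skiprows=range(1, 1 + i * chunk_size))
        if chunk.empty:
            break
        frames.append(chunk)
        i += 1
    return frames
```

Note the off-by-one trap: skiprows must start at 1 so the header is parsed on every read.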

There was nothing wrong with how the above approach worked. But why not use something built-in and Pythonic? For now, the first approach, which uses the chunksize argument, works well, and we use it for anything we do with large files.

In the next part of this post, we will discuss performance comparison. We will also begin considering more optimal ways of working with very large files including parallelism.

Watch this space for other posts and articles! If you have any questions, you can reach out to our team here.

November 23, 2016

Default Print button in SSRS Report is not showing/working in Chrome and Firefox browsers

Scenario
The SSRS toolbox provides Print functionality, but it's browser dependent: it works fine only with Internet Explorer and is not compatible with the Firefox and Chrome browsers.

Resolution
To work around this, we have to use a custom Print button and JavaScript code that executes on button click.

HTML code for Print button and Report viewer:
 <asp:Button runat="server" CssClass="btn-addschedule-bot" Style="margin-left: 10px;" ID="btnPrint" CausesValidation="true" ValidationGroup="vgSubmit" OnClientClick="printReportClick();" Text="Print Report" />  
 <div style="border: 1px solid #A7B0E8; margin: 0px 10px; padding: 5px; float: left;">  
 <rsweb:ReportViewer ID="rptViewer" runat="server" Height="500px" Style="-ms-overflow-y: scroll" Width="1100px" ShowToolBar="False" ShowParameterPrompts="False" ShowCredentialPrompts="False"></rsweb:ReportViewer>  
 </div>  

JavaScript Code to print a report in Chrome and Firefox:
 <script type="text/javascript">  
     function printReport(report_ID) {  
       var rv1 = $('#' + report_ID);  
       var iDoc = rv1.parents('html');  
       // Reading the report styles  
       var styles = iDoc.find("head style[id$='ReportControl_styles']").html();  
       if ((styles == undefined) || (styles == '')) {  
         iDoc.find('head script').each(function () {  
           var cnt = $(this).html();  
           var p1 = cnt.indexOf('ReportStyles":"');  
           if (p1 > 0) {  
             p1 += 15;  
             var p2 = cnt.indexOf('"', p1);  
             styles = cnt.substr(p1, p2 - p1);  
           }  
         });  
       }  
       if (styles == '') { alert("Cannot generate styles, Displaying without styles.."); }  
       styles = '<style type="text/css">' + styles + "</style>";  
       //--- Reading the report html  
       var table = rv1.find("div[id$='_oReportDiv']");  
       if (table == undefined) {  
         alert("Report source not found.");  
         return;  
       }  
       //-- Generating a copy of the report in a new window  
       var docType = '<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/loose.dtd">';  
       var docCnt = styles + table.parent().html();  
       var docHead = '<head><title>Printing ...</title><style>body{margin:5;padding:0;}</style></head>';  
       var winAttr = "location=yes, statusbar=no, directories=no, menubar=no, titlebar=no, toolbar=no, dependent=no, width=720, height=600, resizable=yes, screenX=200, screenY=200, personalbar=no, scrollbars=yes";  
       var newWin = window.open("", "_blank", winAttr);  
       writeDoc = newWin.document;  
       writeDoc.open();  
       writeDoc.write(docType + '<html>' + docHead + '<body onload="window.print();">' + docCnt + '</body></html>');  
       writeDoc.close();  
       // The print event will fire as soon as the window loads  
       newWin.focus();  
       // uncomment to autoclose the preview window when printing is confirmed or canceled.  
       // newWin.close();  
     };  
     function printReportClick() {  
       printReport('<%=rptViewer.ClientID %>');  
     }  
   </script>  

Print Preview in Chrome browser:



I hope this helps you get the print functionality working in Chrome and Firefox.

If you have any questions, you can reach out to our SharePoint Consulting team here.

November 22, 2016

SharePoint 2013/SharePoint 2010: Service Unavailable - HTTP Error 503. The service is unavailable.

Issue:
Sometimes, when we try to access a SharePoint site, SharePoint 2010 or SharePoint 2013 gives an error like "Service Unavailable - HTTP Error 503. The service is unavailable".



Cause of the issue:
In IIS, the Application Pool of the specific Web Application is Disabled/Stopped.

Resolution Steps:
Important Note: If the SharePoint Farm is a multi-tier farm, the steps below need to be verified on all web servers.

1. Login to SharePoint Server as administrator.
2. Open Internet Information Services (IIS) Manager.
3. Navigate to the Application Pools option under the server name as shown in the screenshot below, and verify that the Application Pool for the specific web application is in "Started" status. In the screenshot, we can see that the status of the specific Application Pool is Stopped.


4. Right click on this Application Pool and click "Start".
5. Open the site in browser and it should work as expected.


If you have any questions, you can reach out to our SharePoint Consulting team here.

November 15, 2016

How to Clear cache in SharePoint Designer?

When do you need to clear SharePoint Designer cache? 
  • Once you have opened a SharePoint site as one user in SharePoint Designer, SPD saves the connection details. The next time you try to open the same site from SPD, it opens the site with the saved connection details. If you want to open the site with different credentials, you need to clear the SharePoint Designer cache.
  • Sometimes, SPD shows files as checked out even though they aren't. When trying to check in, SPD gives the error "Cannot perform this operation. The file is no longer checked out or has been deleted." This happens because SharePoint Designer is out of sync with SharePoint, and you have to clear the cache to rebuild it.
  • The cache is not cleared even by a reinstall, so clearing it manually really is the only way. Here are the detailed steps.

Steps to Clear SharePoint Designer Cache:

Step 1: 
  • Close SharePoint Designer if it is open.

Step 2: [Clear Website Cache]
  • Open Windows Explorer, Click the address bar and Paste “%USERPROFILE%\AppData\Local\Microsoft\WebsiteCache”.
  • Delete everything within this location.

Step 3: [Clear Web Server Extensions Cache]
  • Open Windows Explorer, Click the address bar and Paste “%APPDATA%\Microsoft\Web Server Extensions\Cache”.
  • Delete everything in this location.
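Steps 2 and 3 can also be scripted. Below is a small, hedged Python sketch (the helper name is my own) that empties both cache folders; make sure SharePoint Designer is closed first:

```python
import os
import shutil

def clear_cache_dirs(paths):
    """Delete everything inside each directory, leaving the directory itself."""
    for path in paths:
        if not os.path.isdir(path):
            continue  # skip paths that don't exist on this machine
        for name in os.listdir(path):
            full = os.path.join(path, name)
            if os.path.isdir(full):
                shutil.rmtree(full, ignore_errors=True)
            else:
                os.remove(full)

# The two SharePoint Designer cache locations from the steps above.
clear_cache_dirs([
    os.path.expandvars(r"%USERPROFILE%\AppData\Local\Microsoft\WebsiteCache"),
    os.path.expandvars(r"%APPDATA%\Microsoft\Web Server Extensions\Cache"),
])
```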

Step 4: [Disable “Cache site data across SharePoint Designer sessions”]
  • Open SharePoint Designer 2010/2013.
  • Navigate to the "File" menu then select "Options" -> "General" -> "Application Options".


     
  • On the “General” tab, under the “General” heading, uncheck “Cache site data across SharePoint Designer sessions”.



If you have any questions, you can reach out to our SharePoint Consulting team here.

November 14, 2016

Exchange Server 2013 - EAC/OWA is not accessible

Problem:
With Exchange Server 2013, there are certain scenarios wherein we might face issues accessing Outlook Web App and the Exchange Admin Center. This might be due to a lack of permissions or an invalid configuration of Exchange Server 2013.

Such errors include:
  1. "400 Bad Request Error".
  2. "500 Internal Server Error".
  3. "Sorry, something went wrong".
  4. "404 File Not Found".

For these kinds of errors, there are certain remedies that can be followed to resolve them.

We faced such issues after re-installing Exchange Server 2013 on a server. After spending a few hours, we took various steps from different sources to resolve the errors, and in this article we list the possible solutions that can be applied.

Solution - 1 [Check Account Permission]:

The service account used for administering Exchange Server should be a member of the "Organization Management", "Records Management" and "Recipient Management" groups. To check this, run the commands below in the Exchange Management Shell.

Get-RoleGroupMember "Organization Management"
Get-RoleGroupMember "Records Management"
Get-RoleGroupMember "Recipient Management"

If the Exchange Admin user account is not in the results, add it to the above-mentioned groups in Active Directory.

Open your ECP/OWA URL in a browser to verify whether the issue is resolved. If not, proceed with the steps below.

Solution -2 [Check .NET Framework for Application Pools]:

Check that the OWA and ECP Application Pools are running on .NET Framework v4.0. An incomplete installation of the framework may cause the error.

If this is the case, run the command below as Administrator in a Command Prompt, and then restart IIS:

%windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis.exe -i

The error should be resolved now; if not, follow Solution 3 below.

Solution -3 [Rebuild OWA interface]: 

  1. Login to your Exchange 2013 CAS server.
  2. Start the Exchange Management Shell.
  3. Navigate to Exchange 2013 binaries location, for example:"C:\Program Files\Microsoft\Exchange Server\V15\Bin".
  4. Execute the UpdateCas.ps1 PowerShell script to rebuild OWA interface.
  5. Now, execute UpdateConfigFiles.ps1 PowerShell script. [It is recommended to execute this PowerShell script after installing latest cumulative update of Microsoft Exchange 2013].
OK, now it's time to check whether the error is resolved. If you are still facing it, follow the steps below.

Solution -4 [Delete Canary Data]:

  • For detailed information on canary data and guidelines for deleting it, kindly refer to the Microsoft article.
If you are still struggling with the errors, you can proceed with the next step: recreating the AD schema and system mailbox attributes.

Solution - 5 [Recreate System mailbox attributes and AD schema]:

If you are still struggling with the errors after that, the next step is to proceed with creating new virtual directories.

Solution - 6 [Create new virtual directories for ECP and OWA]:

Now, it's time to create new virtual directories for ECP and OWA. As a first step, we will remove the ECP virtual directories with the commands below in the Exchange Management Shell.

  • Remove-EcpVirtualDirectory -Identity "BRExchange1\ecp (Default Web Site)"
  • Remove-EcpVirtualDirectory -Identity "BRExchange2\ecp (exchange back end)"

Now, we will create new virtual directories with the commands below. Here we have to specify the internal and external URLs for ECP.

  • New-EcpVirtualDirectory -Server BRExchange1 -InternalUrl "https://BRExchange1.mydomain.com/ecp" -ExternalUrl "https://ecp.mydomain.com" -WebSiteName "Default Web Site"
  • New-EcpVirtualDirectory -Server BRExchange2 -InternalUrl "https://BRExchange2.mydomain.com/ecp" -ExternalUrl "https://ecp.mydomain.com" -WebSiteName "Exchange Back End"

In the second step, we will remove the OWA virtual directories as below. This step is only required if you are facing the above error while opening a mailbox.

  • Remove-OwaVirtualDirectory  -Identity "BRExchange1\owa (Default Web Site)"
  • Remove-OwaVirtualDirectory  -Identity "BREXchange2\owa (exchange back end)"

Now, we will create new virtual directories with the commands below. Here we have to specify the internal and external URLs for OWA.

  • New-OwaVirtualDirectory -Server BRExchange1 -InternalUrl "https://BRExchange1.mydomain.com/owa" -ExternalUrl "https://mail.mydomain.com/owa" -WebSiteName "Default Web Site"
  • New-OwaVirtualDirectory -Server BRExchange2 -InternalUrl "https://BRExchange2.mydomain.com/owa" -ExternalUrl "https://mail.mydomain.com/owa" -WebSiteName "Exchange Back End"

Hope the possible solutions listed here help you resolve the Exchange 2013 EAC/OWA errors.

If you have any questions, you can reach out to our SharePoint Consulting team here.

October 19, 2016

How to Create a Custom Action to Deploy with SharePoint Add-ins (Apps)

There are two basic types of SharePoint Apps/Add-ins: SharePoint-Hosted and Provider-Hosted. Here, I will show how we can create a custom action in a host web document library through a SharePoint Add-in and open a modal popup on click. Follow the steps below:

Step 1:
Open your SharePoint App project in Visual Studio. Add a New Item, select Ribbon Custom Action, give it a name, and click Add.


Step 2:
Now, in the second dialog box, we have to set the properties for the custom action. Select the appropriate options based on your requirement. Here, I have set the custom action scope to "List Template" and the custom action scope location to "Document Library"; then click Next.

You can select any type like Custom List, Calendar, Form Library etc.


Step 3:
Now, in the third dialog box, we have to specify the settings that generate a button control for the ribbon. Here, I have selected "Ribbon.Documents.EditCheckout" as the control location (the ribbon group where you would like to put the control), "Demo Modal" as the button control text (the name of your control), and provided the default page URL to which the button control navigates. Click Finish. The custom action is now created.


Step 4: 
Open the Elements.xml file and add the following attributes to the <CustomAction> tag.

                HostWebDialog="TRUE"
                HostWebDialogWidth="600"
                HostWebDialogHeight="400" 


Your Elements.xml file will look like below:

<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <CustomAction Id="26a48039-5fc9-45fc-aabb-287b3ccedd3f.Demo_Modal"
                RegistrationType="List"
                RegistrationId="101"
                Location="CommandUI.Ribbon"
                Sequence="10001"
                HostWebDialog="TRUE"
                HostWebDialogWidth="600"
                HostWebDialogHeight="400"
                Title="Demo Modal">
    <CommandUIExtension>      
      <CommandUIDefinitions>
        <CommandUIDefinition Location="Ribbon.Documents.EditCheckout.Controls._children">
          <Button Id="Ribbon.Documents.EditCheckout.Demo_ModalButton"
                  Alt="Demo_Modal"
                  Sequence="100"
                  Command="Invoke_Demo_ModalButtonRequest"
                  LabelText="Demo_Modal"
                  TemplateAlias="o1"
                  Image32by32="_layouts/15/images/placeholder32x32.png"
                  Image16by16="_layouts/15/images/placeholder16x16.png" />
        </CommandUIDefinition>
      </CommandUIDefinitions>
      <CommandUIHandlers>
        <CommandUIHandler 
          Command="Invoke_Demo_ModalButtonRequest"
          CommandAction="~appWebUrl/Pages/Default.aspx?{StandardTokens}&amp;SPListItemId={SelectedItemId}&amp;SPListId={SelectedListId}&amp;SPListURLDir={ListUrlDir}&amp;SPSource={Source}"/>
      </CommandUIHandlers>
    </CommandUIExtension>
  </CustomAction>
</Elements>

Step 5:
Now open the navigation page (App Page - default.aspx) of the custom action and add the code below:
<WebPartPages:AllowFraming ID="AllowFraming" runat="server"/>

All done!

Now, we can deploy the app in SharePoint and the custom action will be created. It will open a modal popup on click of "Demo_Modal".

The deployed app will look like the image above. This is a great way to extend the functionality of SharePoint.

If you have any questions, you can reach out to our SharePoint Consulting team here.

October 14, 2016

How to get all the Lotus Notes Documents and their metadata from a Lotus Notes Database programmatically using C#


Before looking at the code for getting the Lotus Notes Documents, we will review the structure of a Lotus Notes Document and how to analyse its properties.

Understanding the Lotus Notes Database structure:

When we open any Lotus Notes Database in the Lotus Notes Client, we can see different views of the documents in the left panel. In the right panel, we can see the document items of the view selected in the left panel, as shown in the image below.


Now, to identify the Lotus Notes Document metadata (properties), we need to double-click a document in the right panel. It shows the Lotus Notes Document body with properties like Creator/Author, Created Date, and Subject, as shown in the image below.


The Lotus Notes Document contains many other metadata items apart from the above. To get the other metadata, we need to open the document properties by right-clicking the Lotus Notes Document item in the right panel, as shown in the image below.

In the Document Properties window, we can analyse the different properties of the document, as shown in the image below.


Thus, we can analyse all the metadata of the Lotus Notes Document Item.

Programmatically accessing the Lotus Notes Documents and its properties:

We will understand the code step by step following the below sequence:

Note: This requires adding a reference to the "Interop.Domino.dll" assembly. This assembly is also available as a NuGet package.

Get Lotus Notes Database

The code below is used to get a Lotus Notes Database instance from the database path and the Domino server:


        private static NotesSession session; // cached Notes session, shared across calls

        //Used to get the Notes Database from the database path and Domino server.
        public static NotesDatabase GetDatabase()
        {
            InitializeSession();
            if (session == null)
            {
                return null;
            }
            return session.GetDatabase(Common.DominoServer, Common.DatabasePath, false);
        }

        private static NotesSession InitializeSession()
        {
            if (session != null)
                return session;
            session = new NotesSession();
            session.Initialize(Common.ClientPassword);
            return session;
        }

Note:
  1. Common.DominoServer is the Domino server's full name.
  2. Common.DatabasePath is the database path on the Domino server.
  3. Common.ClientPassword is the password of the Lotus Notes client.
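The Common helper referenced above is not shown in the original post; a minimal sketch might look like the following (the server name, database path, and password are placeholder values you would replace with your own):

```csharp
// Hypothetical helper holding connection settings (all values are placeholders).
internal static class Common
{
    public const string DominoServer   = "CN=YourDominoServer/O=YourOrg"; // Domino server full name
    public const string DatabasePath   = "mail\\sample.nsf";              // database path on the server
    public const string ClientPassword = "your-notes-client-password";    // Lotus Notes client password
}
```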

Get all Documents Collection from database

The code below gets all the documents from the database:

     //Get document collection from Lotus Notes Database
     NotesDatabase database = null;
     NotesDocument doc = null;
     database = GetDatabase();
     NotesDocumentCollection documents = database.AllDocuments;
     doc = documents.GetFirstDocument();
     while (doc != null)
     {
         // Do Your Stuff with Lotus Notes documents
         doc = documents.GetNextDocument(doc);
     }

     //If we want to get documents from specific view, below code can be used:
     //Get “All” view from the Lotus Notes Database.
     NotesView view = null;
     NotesDatabase database = null;
     NotesDocument doc = null;
     database = GetDatabase();
     view = database.GetView("All");
     doc = view.GetFirstDocument();
     while (doc != null)
     {
         // Do Your Stuff with Lotus Notes documents
         doc = view.GetNextDocument(doc);
     }

Iterate the Lotus Notes Documents one by one and get their properties

The code below accesses each document's properties from a document collection or a view:

     //Variables used to store Metadata
     static string category = string.Empty;
     static string subject = string.Empty;
     static string immediateParentSubject = string.Empty;
     static string newsLetterSubject = string.Empty;
     static string composedDate = string.Empty;
     static string composedTime = string.Empty;
     static object[] from = null;
     static object[] Authors = null;
     static string form = string.Empty;
     static object[] proLeader = null;
     static string body = string.Empty;
     static bool hasParent = false;
     
     //Iterating each document in a view or in the AllDocuments collection.
     while (doc != null) {
      var docMap = new Dictionary<NotesEmbeddedObject, string>(); //used to store the attachments of the Notes Document shown in the body
      object[] items = (object[]) doc.Items; //items array used to retrieve the properties of the document, like Subject, Body, etc.
      DateTime composedDateTime = (DateTime) doc.Created;

      //Retrieving  authors of Document
      Authors = (object[]) doc.Authors;
      if (Authors != null && Authors.Length > 0) {
       Authors = Authors.Where(val => val.ToString() != "").ToArray();
       Authors = Authors.Distinct().ToArray();
      }

      //If the document has any attachments or embedded files
      if (doc.HasEmbedded) {
       NotesRichTextItem bodyItem = (NotesRichTextItem) doc.GetFirstItem("Body"); // named bodyItem to avoid shadowing the 'body' field above
       if (bodyItem != null) {
        foreach (object embedded in bodyItem.EmbeddedObjects) {
         NotesEmbeddedObject file = (NotesEmbeddedObject) embedded;
         if (file.type.ToString() == "EMBED_ATTACHMENT") {
          docMap.Add(file, file.Source);
         }
        }
       }
      }

      //Get all properties of specific document from Lotus Notes database.
      foreach(NotesItem nItem in items) {
       if (nItem.Name == "Categories") // if item is “Categories” Property
       {
        category = ((object[]) nItem.Values)[0].ToString();
       } else if (nItem.Name == "Subject") // if item is “Subject” Property
       {
        subject = ((object[]) nItem.Values)[0].ToString();
       } else if (nItem.Name == "ParentForm") // if item is “ParentForm” Property
       {
        hasParent = true;
       } else if (nItem.Name == "ImmediateParentSubject") // if item is “ImmediateParentSubject” Property
       {
        immediateParentSubject = ((object[]) nItem.Values)[0].ToString();
       } else if (nItem.Name == "NewsLetterSubject") // if item is “NewsLetterSubject” Property
       {
        newsLetterSubject = ((object[]) nItem.Values)[0].ToString();
       } else if (nItem.Name == "Form") // if item is “Form” Property
       {
        if (((object[]) nItem.Values).Length > 0)
         form = ((object[]) nItem.Values)[0].ToString();
       } else if (nItem.Name == "From") // if item is “From” Property
       {
        if ((object[]) nItem.Values != null) {
         from = (object[]) nItem.Values;
        }
       } else if (nItem.Name == "ProLeader") // if item is “ProLeader” Property
       {
        if ((object[]) nItem.Values != null) {
         proLeader = (object[]) nItem.Values;
        }
       } else if (nItem.Name == "Body") // if item is “Body” Property
       {
        if (!string.IsNullOrEmpty(nItem.Text)) { // Values is an object[]; Text returns the item's value as plain text
         body = body + nItem.Text;
        }
       }
      }
     
      doc = view.GetNextDocument(doc); // or doc=documents.GetNextDocument(doc);                                
     
     }

Additional properties can be handled by adding more conditions to the snippet above while iterating over the document's items.

To summarize, the code above lets us access a Lotus Notes Database, its Documents, and their properties programmatically using C#.
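Putting the pieces together, a minimal console program could look like the sketch below. It assumes the Interop.Domino reference described earlier; the server name, database path, and password are placeholders, and the program simply prints each document's Subject:

```csharp
using System;
using Domino; // Interop.Domino COM interop namespace

class Program
{
    static void Main()
    {
        // Open a Notes session and the target database (placeholder credentials/paths).
        NotesSession session = new NotesSession();
        session.Initialize("your-notes-client-password");
        NotesDatabase database = session.GetDatabase("CN=YourServer/O=YourOrg", "mail\\sample.nsf", false);

        // Walk every document and print its Subject item, if present.
        NotesDocumentCollection documents = database.AllDocuments;
        NotesDocument doc = documents.GetFirstDocument();
        while (doc != null)
        {
            NotesItem subject = doc.GetFirstItem("Subject");
            Console.WriteLine(subject != null ? subject.Text : "(no subject)");
            doc = documents.GetNextDocument(doc);
        }
    }
}
```

Note that this requires the Lotus Notes client to be installed on the machine running the program, since the COM interop talks to the local client.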

If you have any questions, you can reach out to our SharePoint Consulting team here.

October 6, 2016

SharePoint 2010: Excel Services workbook larger than the maximum size allowed to be opened in the browser.

Issue: Recently, while trying to open an Excel workbook from a SharePoint document library in the browser, I got this error: "This workbook is larger than the maximum workbook size allowed to be opened in the browser".


Resolution:
By default, SharePoint allows workbooks of up to 10 MB to be opened in the browser. Trying to open a larger workbook produces this error.

To resolve this, we can change the workbook size limit from Central Administration by following these steps:

1. Log in to Central Administration as a Farm Administrator.
2. Click “Application Management” in the left navigation.
3. Click “Manage service applications” under Service Applications.
4. Click “Excel Services Application”.
5. Click the “Trusted File Locations” link.
6. Select the Reports library or Document library where the files are stored (e.g., https://www.binaryrepublik.com/work).
7. Scroll down to “Workbook Properties” and change the “Maximum Workbook Size” as required (e.g., 50 MB).
8. Save the changes.
9. Now, any workbook up to 50 MB from that location can be opened in the browser.
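The same limit can also be raised from the SharePoint 2010 Management Shell using the Excel Services cmdlets. A sketch is shown below; the trusted-location URL filter and the 50 MB size are example values, and this assumes a single Excel Services service application in the farm:

```powershell
# Find the trusted file location for the library and raise its Maximum Workbook Size (in MB).
$excelApp = Get-SPExcelServiceApplication
Get-SPExcelFileLocation -ExcelServiceApplication $excelApp |
    Where-Object { $_.Address -like "*binaryrepublik.com/work*" } |
    Set-SPExcelFileLocation -ExcelServiceApplication $excelApp -MaxWorkbookSize 50
```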

If you have any questions, you can reach out to our SharePoint Consulting team here.