
Cross-site publishing alternatives in SharePoint Online/Office 365


Cross-site publishing is one of the powerful new capabilities in SharePoint 2013.  It enables the separation of data entry from display and breaks down the container barriers that have traditionally existed in SharePoint (ex: rolling up information across site collections).  Cross-site publishing is delivered through search and a number of new features, including list/library catalogs, catalog connections, and the content search web part.  Unfortunately, SharePoint Online/Office 365 doesn’t currently support these features.  Until they are added to the service (possibly in a quarterly update), customers will be looking for alternatives to close the gap.  In this post, I will outline several alternatives for delivering cross-site and search-driven content in SharePoint Online and how to template these views for reuse.  Here is a video that outlines the solution:

(Please visit the site to view this video)

 

NOTE: I’m a huge proponent of SharePoint Online.  After visiting several Microsoft data centers, I feel confident that Microsoft is better positioned to run SharePoint infrastructure than almost any organization in the world.  SharePoint Online has very close feature parity with SharePoint on-premises, with the primary gaps being cross-site publishing and advanced business intelligence.  Although these capabilities have acceptable alternatives in the cloud (as will be outlined in this post), organizations looking to maximize the cloud might consider SharePoint running in IaaS for immediate access to these features.

 

Apps for SharePoint

The new SharePoint app model is fully supported in SharePoint Online and can be used to deliver customizations to SharePoint using any web technology.  New SharePoint APIs can be used with the app model to deliver an experience similar to cross-site publishing.  In fact, the content search web part could be re-written for delivery through the app model as an “App Part” for SharePoint Online. 
Although the app model provides great flexibility and reuse, it does come with some drawbacks.  First, because an app part is delivered through a glorified IFRAME, it would be challenging to navigate to a new page from within the app part.  A link within the app would only navigate within the IFRAME (not the parent of the IFRAME).  Second, there isn’t a great mechanism for templating a site to automatically leverage an app part on its page(s).  Apps do not work with site templates, so a site that contains an app cannot be saved as a template.  Apps can be “stapled” to sites, but the app installed event (which would be needed to add the app part to a page) only fires when the app is installed into the app catalog.
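
As a rough illustration of one possible workaround for the IFRAME navigation limitation (not something the app model provides out of the box), HTML5 postMessage can relay a navigation request from the app part to the host page, assuming you can add a listener to the host page (ex: with a script editor web part).  The page URL and message shape below are hypothetical:

//in the app part page (inside the IFRAME): ask the host page to navigate
window.parent.postMessage(JSON.stringify({ navigateTo: "/sites/somesite/SitePages/News.aspx" }), "*");

//on the host page (ex: added via a script editor web part): listen and navigate
window.addEventListener("message", function (e) {
    //in production, validate e.origin against the app domain before acting
    var msg = JSON.parse(e.data);
    if (msg.navigateTo)
        window.location.href = msg.navigateTo;
}, false);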

REST APIs and Script Editor

The script editor web part is a powerful new tool that can help deliver flexible customization into SharePoint Online.  The script editor web part allows a block of client-side script to be added to any wiki or web part page in a site.  Combined with the new SharePoint REST APIs, the script editor web part can deliver mash-ups very similar to cross-site publishing and the content search web part.  Unlike apps for SharePoint, the script editor isn’t constrained by IFRAME containers, app permissions, or templating limitations.  In fact, a well-configured script editor web part could be exported and re-imported into the web part gallery for reuse.

Cross-site publishing leverages “catalogs” for precise querying of specific content.  Any list/library can be designated as a catalog.  By making this designation, SharePoint will automatically create managed properties for columns of the list/library and ultimately generate a search result source in sites that consume the catalog.  Although SharePoint Online doesn’t support catalogs, it supports the building blocks, such as managed properties and result sources.  These can be manually configured to provide the same precise querying in SharePoint Online and exploited in the script editor web part for display.

Calling Search REST APIs
<divid="divContentContainer"></div>
<scripttype="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://tenant.sharepoint.com/sites/somesite/_api/";
        $.ajax({
            url: basePath + "search/query?Querytext='ContentType:News'",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //script to build UI HERE
            },
            error: function (data) {
                //output error HERE
            }
        });
    });
</script>
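
If a custom result source has been configured as described above, the same query can be scoped to it with the SourceId parameter.  This is a hypothetical sketch; the GUID below is a placeholder for your own result source ID:

<div id="divContentContainer"></div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://tenant.sharepoint.com/sites/somesite/_api/";
        $.ajax({
            //scope the query to a custom result source (placeholder GUID)
            url: basePath + "search/query?Querytext='*'&SourceId='00000000-0000-0000-0000-000000000000'",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //script to build UI HERE
            },
            error: function (data) {
                //output error HERE
            }
        });
    });
</script>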

 

An easier approach might be to directly reference a list/library in the REST call of our client-side script.  This wouldn’t require manual search configuration and would provide real-time publishing (no waiting for new items to get indexed).  You can think of this approach as a content by query web part that works across site collections (possibly even farms); the REST API makes it all possible!

List REST APIs
<divid="divContentContainer"></div>
<scripttype="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://tenant.sharepoint.com/sites/somesite/_api/";
        $.ajax({
            url: basePath + "web/lists/GetByTitle('News')/items/?$select=Title&$filter=Feature eq 0",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //script to build UI HERE
            },
            error: function (data) {
                //output error HERE
            }
        });
    });
</script>

 

The content search web part uses display templates to render search results in different arrangements (ex: list with images, image carousel, etc).  There are two types of display templates the content search web part leverages…the control template, which renders the container around the items, and the item template, which renders each individual item in the search results.  This is very similar to the way a Repeater control works in ASP.NET.  Display templates are authored using HTML, but are converted to client-side script automatically by SharePoint for rendering.  I mention this because our approach is very similar…we will leverage a container and then loop through and render items in script.  In fact, all the examples in this post were converted from display templates in a public site I’m working on. 

Item display template for content search web part

<!--#_
var encodedId = $htmlEncode(ctx.ClientControl.get_nextUniqueId() + "_ImageTitle_");
var rem = index % 3;
var even = true;
if (rem == 1)
    even = false;

//the linkURL/line1 slot definitions below are assumed from the standard item template
var linkURL = $getItemValue(ctx, "Link URL");
var line1 = $getItemValue(ctx, "Line 1");
var pictureURL = $getItemValue(ctx, "Picture URL");
var pictureId = encodedId + "picture";
var pictureMarkup = Srch.ContentBySearch.getPictureMarkup(pictureURL, 140, 90, ctx.CurrentItem, "mtcImg140", line1, pictureId);
var pictureLinkId = encodedId + "pictureLink";
var pictureContainerId = encodedId + "pictureContainer";
var dataContainerId = encodedId + "dataContainer";
var dataContainerOverlayId = encodedId + "dataContainerOverlay";
var line1LinkId = encodedId + "line1Link";
var line1Id = encodedId + "line1";
 _#-->
<divstyle="width: 320px; float: left; display: table; margin-bottom: 10px; margin-top: 5px;">
   <ahref="_#= linkURL =#_">
      <divstyle="float: left; width: 140px; padding-right: 10px;">
         <imgsrc="_#= pictureURL =#_" class="mtcImg140" style="width: 140px;" />
      </div>
      <divstyle="float: left; width: 170px">
         <divclass="mtcProfileHeadermtcProfileHeaderP">_#= line1 =#_</div>
      </div>
   </a>
</div>

 

Script equivalent

<divid="divUnfeaturedNews"></div>
<scripttype="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://richdizzcom.sharepoint.com/sites/dallasmtcauth/_api/";
        $.ajax({
            url: basePath + "web/lists/GetByTitle('News')/items/?$select=Title&$filter=Feature eq 0",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //get the details for each item
                var listData = data.d.results;
                var itemCount = listData.length;
                var processedCount = 0;
                var ul = $("<ul style='list-style-type: none; padding-left: 0px;' class='cbs-List'>");
                for (i = 0; i < listData.length; i++) {
                    $.ajax({
                        url: listData[i].__metadata["uri"] + "/FieldValuesAsHtml",
                        type: "GET",
                        headers: { "Accept": "application/json;odata=verbose" },
                        success: function (data) {
                            processedCount++;
                            var htmlStr = "<li style='display: inline;'><div style='width: 320px; float: left; display: table; margin-bottom: 10px; margin-top: 5px;'>";
                            htmlStr += "<a href='#'>";
                            htmlStr += "<div style='float: left; width: 140px; padding-right: 10px;'>";
                            htmlStr += setImageWidth(data.d.PublishingRollupImage, '140');
                            htmlStr += "</div>";
                            htmlStr += "<div style='float: left; width: 170px'>";
                            htmlStr += "<div class='mtcProfileHeader mtcProfileHeaderP'>" + data.d.Title + "</div>";
                            htmlStr += "</div></a></div></li>";
                            ul.append($(htmlStr))
                            if (processedCount == itemCount) {
                                $("#divUnfeaturedNews").append(ul);
                            }
                        },
                        error: function (data) {
                            alert(data.statusText);
                        }
                    });
                }
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    });

    function setImageWidth(imgString, width) {
        var img = $(imgString);
        img.css('width', width);
        return img[0].outerHTML;
    }
</script>

 

Even one of the more complex carousel views from my site took less than 30 minutes to convert to the script editor approach.

Advanced carousel script

<divid="divFeaturedNews">
    <divclass="mtc-Slideshow" id="divSlideShow" style="width: 610px;">
        <divstyle="width: 100%; float: left;">
            <divid="divSlideShowSection">
                <divstyle="width: 100%;">
                    <divclass="mtc-SlideshowItems" id="divSlideShowSectionContainer" style="width: 610px; height: 275px; float: left; border-style: none; overflow: hidden; position: relative;">
                        <divid="divFeaturedNewsItemContainer">
                        </div>
                    </div>
                </div>
            </div>
        </div>
    </div>
</div>
<scripttype="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://richdizzcom.sharepoint.com/sites/dallasmtcauth/_api/";
        $.ajax({
            url: basePath + "web/lists/GetByTitle('News')/items/?$select=Title&$filter=Feature eq 1&$top=4",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                var listData = data.d.results;
                for (i = 0; i < listData.length; i++) {
                    getItemDetails(listData, i, listData.length);
                }
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    });
    var processCount = 0;
    function getItemDetails(listData, i, count) {
        $.ajax({
            url: listData[i].__metadata["uri"] + "/FieldValuesAsHtml",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                processCount++;
                var itemHtml = "<div class='mtcItems' id='divPic_" + i + "' style='width: 610px; height: 275px; float: left; position: absolute; border-bottom: 1px dotted #ababab; z-index: 1; left: 0px;'>"
                itemHtml += "<div id='container_" + i + "' style='width: 610px; height: 275px; float: left;'>";
                itemHtml += "<a href='#' title='" + data.d.Caption_x005f_x0020_x005f_Title + "' style='width: 610px; height: 275px;'>";
                itemHtml += data.d.Feature_x005f_x0020_x005f_Image;
                itemHtml += "</a></div></div>";
                itemHtml += "<div class='titleContainerClass' id='divTitle_" + i + "' data-originalidx='" + i + "' data-currentidx='" + i + "' style='height: 25px; z-index: 2; position: absolute; background-color: rgba(255, 255, 255, 0.8); cursor: pointer; padding-right: 10px; margin: 0px; padding-left: 10px; margin-top: 4px; color: #000; font-size: 18px;' onclick='changeSlide(this);'>";
                itemHtml += data.d.Caption_x005f_x0020_x005f_Title;
                itemHtml += "<span id='currentSpan_" + i + "' style='display: none; font-size: 16px;'>" + data.d.Caption_x005f_x0020_x005f_Body + "</span></div>";
                $('#divFeaturedNewsItemContainer').append(itemHtml);

                if (processCount == count) {
                    allItemsLoaded();
                }
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    }
    window.mtc_init = function (controlDiv) {
        var slideItems = controlDiv.children;
        for (var i = 0; i < slideItems.length; i++) {
            if (i > 0) {
                slideItems[i].style.left = '610px';
            }
        };
    };

    function allItemsLoaded() {
        var slideshows = document.querySelectorAll(".mtc-SlideshowItems");
        for (var i = 0; i < slideshows.length; i++) {
            mtc_init(slideshows[i].children[0]);
        }

        var div = $('#divTitle_0');
        cssTitle(div, true);
        var top = 160;
        for (i = 1; i < 4; i++) {
            var divx = $('#divTitle_' + i);
            cssTitle(divx, false);
            divx.css('top', top);
            top += 35;
        }
    }

    function cssTitle(div, selected) {
        if (selected) {
            div.css('height', 'auto');
            div.css('width', '300px');
            div.css('top', '10px');
            div.css('left', '0px');
            div.css('font-size', '26px');
            div.css('padding-top', '5px');
            div.css('padding-bottom', '5px');
            div.find('span').css('display', 'block');
        }
        else {
            div.css('height', '25px');
            div.css('width', 'auto');
            div.css('left', '0px');
            div.css('font-size', '18px');
            div.css('padding-top', '0px');
            div.css('padding-bottom', '0px');
            div.find('span').css('display', 'none');
        }
    }

    window.changeSlide = function (item) {
        //get all title containers
        var listItems = document.querySelectorAll('.titleContainerClass');
        var currentIndexVals = { 0: null, 1: null, 2: null, 3: null };
        var newIndexVals = { 0: null, 1: null, 2: null, 3: null };

        for (var i = 0; i < listItems.length; i++) {
            //current Index
            currentIndexVals[i] = parseInt(listItems[i].getAttribute('data-currentidx'));
        }

        var selectedIndex = 0; //selected Index will always be 0
        var leftOffset = '';
        var originalSelectedIndex = '';

        var nextSelected = '';
        var originalNextIndex = '';

        if (item == null) {
            var item0 = document.querySelector('[data-currentidx="' + currentIndexVals[0] + '"]');
            originalSelectedIndex = parseInt(item0.getAttribute('data-originalidx'));
            originalNextIndex = originalSelectedIndex + 1;
            nextSelected = currentIndexVals[0] + 1;
        }
        else {
            nextSelected = item.getAttribute('data-currentidx');
            originalNextIndex = item.getAttribute('data-originalidx');
        }

        if (nextSelected == 0) { return; }

        for (i = 0; i < listItems.length; i++) {
            if (currentIndexVals[i] == selectedIndex) {
                //this is the selected item, so move to bottom and animate
                var div = $('[data-currentidx="0"]');
                cssTitle(div, false);
                div.css('left', '-400px');
                div.css('top', '230px');

                newIndexVals[i] = 3;
                var item0 = document.querySelector('[data-currentidx="0"]');
                originalSelectedIndex = item0.getAttribute('data-originalidx');

                //animate
                div.delay(500).animate(
                    { left: '0px' }, 500, function () {
                    });
            }
            else if (currentIndexVals[i] == nextSelected) {
                //this is the NEW selected item, so resize and slide in as selected
                var div = $('[data-currentidx="' + nextSelected + '"]');
                cssTitle(div, true);
                div.css('left', '-610px');

                newIndexVals[i] = 0;

                //animate
                div.delay(500).animate(
                    { left: '0px' }, 500, function () {
                    });
            }
            else {
                //move up in queue
                var curIdx = currentIndexVals[i];
                var div = $('[data-currentidx="' + curIdx + '"]');

                var topStr = div.css('top');
                var topInt = parseInt(topStr.substring(0, topStr.length - 1));

                if (curIdx != 1 && nextSelected == 1 || curIdx > nextSelected) {
                    topInt = topInt - 35;
                    if (curIdx - 1 == 2) { newIndexVals[i] = 2 };
                    if (curIdx - 1 == 1) { newIndexVals[i] = 1 };
                }

                //move up
                div.animate(
                    { top: topInt }, 500, function () {
                    });
            }
        };

        if (originalNextIndex < 0)
            originalNextIndex = itemCount - 1;

        //adjust pictures
        $('#divPic_' + originalNextIndex).css('left', '610px');
        leftOffset = '-610px';

        $('#divPic_' + originalSelectedIndex).animate(
            { left: leftOffset }, 500, function () {
            });

        $('#divPic_' + originalNextIndex).animate(
            { left: '0px' }, 500, function () {
            });

        var item0 = document.querySelector('[data-currentidx="' + currentIndexVals[0] + '"]');
        var item1 = document.querySelector('[data-currentidx="' + currentIndexVals[1] + '"]');
        var item2 = document.querySelector('[data-currentidx="' + currentIndexVals[2] + '"]');
        var item3 = document.querySelector('[data-currentidx="' + currentIndexVals[3] + '"]');
        if (newIndexVals[0] != null) { item0.setAttribute('data-currentidx', newIndexVals[0]) };
        if (newIndexVals[1] != null) { item1.setAttribute('data-currentidx', newIndexVals[1]) };
        if (newIndexVals[2] != null) { item2.setAttribute('data-currentidx', newIndexVals[2]) };
        if (newIndexVals[3] != null) { item3.setAttribute('data-currentidx', newIndexVals[3]) };
    };
</script>

 

End-result of script editors in SharePoint Online

Separate authoring site collection

Final Thoughts

I hope this post helped illustrate ways to display content across traditional SharePoint boundaries without cross-site publishing and how to template those displays for reuse.  SharePoint Online might eventually get the cross-site publishing features, but that doesn’t mean you have to wait to achieve the same result.  In fact, the script approach is so similar to display templates that it should be an easy transition to cross-site publishing in the future.  I want to give a shout out to my colleague Nathan Miller for his assistance with this vision.


Developing Apps against the Office Graph - Part 1


Last week, Microsoft started rolling out Delve to Office 365 customers. Delve is a cool new way to discover relevant information and connections across your work life. As cool as Delve is, I’m even more excited about the Office Graph that powers it. The Office Graph puts sophisticated machine learning on top of all the interactions you and your colleagues make with each other and content in Office 365. With the Office Graph you can identify information trending around people, content you have in common with others, and social connections that traverse organizational boundaries. Best of all, developers can leverage the Office Graph to create new and exciting scenarios that extend Office 365 like never before. In this post, I’ll illustrate some findings in developing my first Office Graph app. The video below illustrates some of the concepts of this post:

Also see Developing Apps against the Office Graph - Part 2

(Please visit the site to view this video)

 

NOTE: The Office Graph “learns” through the “actions” of users (aka “actors”) against other users or objects in Office 365 (ex: sites, documents, etc.). Actions may take time to show up in the Office Graph because it leverages SharePoint’s search and search analytics technologies. Additionally, the more actors and actions, the more you will get out of the Office Graph. It might take some work to achieve this in a demo or developer tenant of Office 365. As a point of reference, I spent a good part of a Saturday signing in/out of 25 test accounts to generate enough desired activity and waited another 6-12 hours to see that activity show up in the Office Graph. Happy waiting :)

 

APIs and Graph Query Language (GQL)

I was extremely pleased to see detailed MSDN documentation accompany the release of Office Graph/Delve. The MSDN article “Using GQL with the SharePoint Online Search REST API to query the Office Graph” outlines the Graph Query Language (GQL) syntax and how to use it with the SharePoint Search APIs. The original “Marketecture” images for the Office Graph show a spider web of connections between people and content (see below).

In reality, this is exactly how the Office Graph and GQL work. People are “Actors” that perform activities/actions on other actors and objects. Ultimately, an activity/action generates a connection or “Edge”. When you query the Office Graph with GQL, you typically provide the Actor(s) and Action(s) and the Office Graph returns the Objects with “Edges” that match the actor/action criteria. Again, the Office Graph is queried through the standard SharePoint REST APIs for search, but with the additional GraphQuery syntax. Below are a few examples:

REST Examples with GQL

//Objects related to current user (ie - ME)
/_api/search/query?Querytext='*'&Properties='GraphQuery:ACTOR(ME)'

//Objects related to actor 342
/_api/search/query?Querytext='*'&Properties='GraphQuery:ACTOR(342)'

//Objects trending around current user (trending = action type 1020)
/_api/search/query?Querytext='*'&Properties='GraphQuery:ACTOR(ME\, action\:1020)'

//Objects related to current user and actor 342
/_api/search/query?Querytext='*'&Properties='GraphQuery:AND(ACTOR(ME)\, ACTOR(342))'

//Objects recently viewed by current user and modified by actor 342
/_api/search/query?Querytext='*'&Properties='GraphQuery:AND(ACTOR(ME\, action\:1001)\, ACTOR(342\, action\:1003))'

//People the current user works with
/_api/search/query?Querytext='*'&Properties='GraphQuery:ACTOR(ME\, action\:1019)'

//Objects related to actor 342 with 'Delve' in the title
/_api/search/query?Querytext='Title:Delve'&Properties='GraphQuery:ACTOR(342)'

 

Notice the use of ME or specific IDs in the ACTOR part of the queries, and the numeric action type code for the connection.  Actors and actions can be combined in numerous ways to deliver interesting intersections in the Office Graph.  Below is a comprehensive list of action types and their visibility scope.

Action Type | Description | Visibility | ID
PersonalFeed | The actor’s personal feed as shown on their Home view in Delve. | Private | 1021
Modified | Items that the actor has modified in the last three months. | Public | 1003
OrgColleague | Everyone who reports to the same manager as the actor. | Public | 1015
OrgDirect | The actor’s direct reports. | Public | 1014
OrgManager | The person whom the actor reports to. | Public | 1013
OrgSkipLevelManager | The actor’s skip-level manager. | Public | 1016
WorkingWith | People whom the actor communicates or works with frequently. | Private | 1019
TrendingAround | Items popular with people whom the actor works or communicates with frequently. | Public | 1020
Viewed | Items viewed by the actor in the last three months. | Private | 1001
WorkingWithPublic | A public version of the WorkingWith edge. | Public | 1033

 

The results returned from GQL queries are in a format similar to regular search queries against SharePoint’s REST APIs. However, GQL will add an additional “Edges” managed property that includes details about the action, the date of the action, and the weight assigned by the ranking model. Below is an example of this property returned as part of the RelevantResults result table of a Search API call.

Edges Managed Property
<d:element m:type="SP.KeyValue">
    <d:Key>Edges</d:Key>
    <d:Value>[{"ActorId":41391607,"ObjectId":151088624,
        "Properties":{"Action":1001,"Blob":[],
        "ObjectSource":1,"Time":"2014-08-19T13:46:29.0000000Z",
        "Weight":2}}]
    </d:Value>
    <d:ValueType>Edm.String</d:ValueType>
</d:element>

 

These Edge properties come into play when performing advanced GQL queries that specify a sort based on time (EdgeTime) or closeness (EdgeWeight). The sample below shows a search query that returns the people the current user works with, sorted by closeness:

WorkingWith by Closeness
//People the current user works with, sorted by closeness
/_api/search/query?Querytext='*'&Properties='GraphQuery:ACTOR(ME\, action\:1019),GraphRankingModel:{"features"\:[{"function"\:"EdgeWeight"}]}'&RankingModelId='0c77ded8-c3ef-466d-929d-905670ea1d72'

 

One important note is that the Office 365 APIs do not currently support a search permission scope (I’m told it is coming). Until that exists, you will need to use the standard SharePoint app model to develop against the Office Graph.

Image Previews

One glimpse at Delve, and you immediately notice an attractively visual user interface. This isn’t your mama’s standard SharePoint search results. Visual previews accompany most Delve results without having to mouse/hover over anything. I could tell these visual previews would be helpful in building my own applications against the Office Graph. After investigation, it appears that (at least for Office docs) a new layouts web handler generates on-demand previews based on some document parameters (very similar to dynamic image renditions). The getpreview.ashx handler accepts the Office document’s SiteID, WebID, UniqueID, and DocID to generate previews. All of these parameters can be retrieved as managed properties in a GQL query, as seen below and used in my app.

Managed Properties for Image Previews

//make REST call to get items trending around the current user GraphQuery:ACTOR(ME\, action\:1020)
$.ajax({
    url: appWebUrl + "/_api/search/query?Querytext='*'&Properties='GraphQuery:ACTOR(ME\\, action\\:1020)'&RowLimit=50" +
        "&SelectProperties='DocId,WebId,UniqueId,SiteID,ViewCountLifetime,Path,DisplayAuthor,FileExtension,Title,SiteTitle,SitePath'",
    method: "GET",
    headers: { "Accept": "application/json; odata=verbose" },
    success: function (data) {
        //parse the managed properties and build preview URLs HERE
    }
});

 

Building Image Preview URL w/ getpreview.ashx
//build an image preview based on uniqueid, siteid, webid, and docid
o.pic = hostWebUrl + '/_layouts/15/getpreview.ashx?guidFile=' + o.uniqueId + '&guidSite=' + o.siteId + '&guidWeb=' + o.webId + '&docid=' + o.docId + '&ClientType=CodenameOsloWeb&size=small';

 

DISCLAIMER: The getpreview.ashx handler is an undocumented discovery. Use it with caution as it is subject to change without notice until officially documented.

 

The Final Product

For my first Office Graph app, I didn’t try to get too creative with GQL. Instead, I aimed at delivering an alternate/creative visual on top of some standard GQL queries. Specifically, I used d3.js to display Office Graph query results as animated bubbles sized by number of views. It’s a neat way to see the same results as Delve, but emphasized by popularity.

Delve (Browser)

Delve (Windows 8 Client)

Office Graph Bubbles App

Final Thoughts

I hope this post helped spark your interest in the power of the Office Graph for developers. The Office Graph opens up a new world of intelligence in Office 365 that you can harness in your applications. You can download the “Office Graph Bubbles” app I built for this post HERE.

Developing Apps against the Office Graph – Part 2


Earlier this week I authored a blog post on Developing Apps against the Office Graph. In that post, I used static Graph Query Language (GQL) to display Office Graph results in a visualization different from Delve. In this post, I’ll take the solution further to utilize “edge weight” and dynamic GQL queries that include both objects AND actors. Check out the new solution in the video below, see how I built it in this post, and start contributing to the effort on GitHub!

(Please visit the site to view this video)

Dynamic GQL Queries

In my first post, I used a static GQL query that displayed trending content for the current user (GraphQuery:ACTOR(ME, action:1020)). I made no attempt to query other actors, actions, or combine GQL in complex AND/OR logic. It was a simple introduction into GQL with apps and served its purpose.

In the new solution, I wanted to add the ability to query different actions of different actors (not just ME). I accomplished this by enabling actor navigation within the visualization and adding an actions filter panel. The active actor will always display in the “nucleus” and will default to the current user (similar to the first app).

Because the actor can change (through navigation) and multiple actions can be selected (through the filter panel), the app needed to support dynamic GQL queries. I broke the queries up into two REST calls…one for object action types (ex: show trending content, show viewed content, etc.) and one for actor action types (ex: show direct reports, show colleagues, etc.). This made it easy to parse query results with completely different managed properties. Some action types are considered Private and only work for the current user (ex: show viewed content), while others have separate Public action types (ex: 1019 = WorkingWith and 1033 = WorkingWithPublic). Notice how this is handled as the dynamic GQL query is constructed.

Building Dynamic GQL

//load the user by querying the Office Graph for trending content, Colleagues, WorkingWith, and Manager
var loadUser = function (actorId, callback) {
    var oLoaded = false, aLoaded = false, children = [], workingWithActionID = 1033; //1033 is the public WorkingWith action type
    if (actorId == 'ME')
        workingWithActionID = 1019; //use the private WorkingWith action type

    //build the object query
    var objectGQL = '', objectGQLcnt = 0;
    if ($('#showTrending').hasClass('selected')) {
        objectGQLcnt++;
        objectGQL += "ACTOR(" + actorId + "\\, action\\:1020)";
    }
    if ($('#showModified').hasClass('selected')) {
        objectGQLcnt++;
        if (objectGQLcnt > 1)
            objectGQL += "\\, ";
        objectGQL += "ACTOR(" + actorId + "\\, action\\:1003)";
    }
    if ($('#showViewed').hasClass('selected') && actorId == 'ME') {
        objectGQLcnt++;
        if (objectGQLcnt > 1)
            objectGQL += "\\, ";
        objectGQL += "ACTOR(" + actorId + "\\, action\\:1001)";
    }
    if (objectGQLcnt > 1)
        objectGQL = "OR(" + objectGQL + ")";

    //determine if the object query should be executed
    if (objectGQLcnt == 0)
        oLoaded = true;
    else {
        //get objects around the current actor
        $.ajax({
            url: appWebUrl + "/_api/search/query?Querytext='*'&Properties='GraphQuery:" + objectGQL + "'&RowLimit=50&SelectProperties='DocId,WebId,UniqueId,SiteID,ViewCountLifetime,Path,DisplayAuthor,FileExtension,Title,SiteTitle,SitePath'",
            method: 'GET',
            headers: { "Accept": "application/json; odata=verbose" },
            success: function (d) {
                if (d.d.query.PrimaryQueryResult != null &&
                    d.d.query.PrimaryQueryResult.RelevantResults != null &&
                    d.d.query.PrimaryQueryResult.RelevantResults.Table != null &&
                    d.d.query.PrimaryQueryResult.RelevantResults.Table.Rows != null &&
                    d.d.query.PrimaryQueryResult.RelevantResults.Table.Rows.results != null &&
                    d.d.query.PrimaryQueryResult.RelevantResults.Table.Rows.results.length > 0) {
                    $(d.d.query.PrimaryQueryResult.RelevantResults.Table.Rows.results).each(function (i, row) {
                        children.push(parseObjectResults(row));
                    });
                }

                oLoaded = true;
                if (aLoaded)
                    callback(children);
            },
            error: function (err) {
                showMessage('<div id="private" class="message">Error calling the Office Graph for objects...refresh your browser and try again (<span class="hyperlink" onclick="javascript:$(this).parent().remove();">dismiss</span>).</div>');
            }
        });
    }

    //build the actor query
    var actorGQL = '', actorGQLcnt = 0;
    if ($('#showColleagues').hasClass('selected')) {
        actorGQLcnt++;
        actorGQL += "ACTOR(" + actorId + "\\, action\\:1015)";
    }
    if ($('#showWorkingwith').hasClass('selected')) {
        actorGQLcnt++;
        if (actorGQLcnt > 1)
            actorGQL += "\\, ";
        actorGQL += "ACTOR(" + actorId + "\\, action\\:" + workingWithActionID + ")";
    }
    if ($('#showManager').hasClass('selected')) {
        actorGQLcnt++;
        if (actorGQLcnt > 1)
            actorGQL += "\\, ";
        actorGQL += "ACTOR(" + actorId + "\\, action\\:1013)";
    }
    if ($('#showDirectreports').hasClass('selected')) {
        actorGQLcnt++;
        if (actorGQLcnt > 1)
            actorGQL += "\\, ";
        actorGQL += "ACTOR(" + actorId + "\\, action\\:1014)";
    }
    if (actorGQLcnt > 1)
        actorGQL = "OR(" + actorGQL + ")";

    //determine if the actor query should be executed
    if (actorGQLcnt == 0)
        aLoaded = true;
    else {
        //get actors around current actor
        $.ajax({
            url: appWebUrl + "/_api/search/query?Querytext='*'&Properties='GraphQuery:" + actorGQL + "'&RowLimit=200&SelectProperties='PictureURL,PreferredName,JobTitle,Path,Department'",
            method: 'GET',
            headers: { "Accept": "application/json; odata=verbose" },
            success: function (d) {
                if (d.d.query.PrimaryQueryResult != null &&
                    d.d.query.PrimaryQueryResult.RelevantResults != null &&
                    d.d.query.PrimaryQueryResult.RelevantResults.Table != null &&
                    d.d.query.PrimaryQueryResult.RelevantResults.Table.Rows != null &&
                    d.d.query.PrimaryQueryResult.RelevantResults.Table.Rows.results != null &&
                    d.d.query.PrimaryQueryResult.RelevantResults.Table.Rows.results.length > 0) {
                    $(d.d.query.PrimaryQueryResult.RelevantResults.Table.Rows.results).each(function (i, row) {
                        children.push(parseActorResults(row));
                    });
                }
                           
                aLoaded = true;
                if (oLoaded)
                    callback(children);
            },
            error: function (err) {
                showMessage('<div id="private" class="message">Error calling Office Graph for actors...refresh your browser and try again (<span class="hyperlink" onclick="javascript:$(this).parent().remove();">dismiss</span>).</div>');
            }
        });
    }
}

 

Using Edges

You might be wondering where ActorIDs come from in the code above. These are returned from the Office Graph as part of the “Edges” managed property that is included in Office Graph query results. In my last post, I provided an example of this property. The Edges value will be a JSON string that parses into an object array. Each item in the object array represents an edge that connects the actor to an object (the object could be any item/user in the Office Graph). The array can have multiple items if multiple edges/connections exist between the actor and object. Here are two examples, described and shown as returned from the Office Graph.

Scenario 1: Bob (actor with ID of 111) might have an edge/connection to Frank (object with ID of 222) because Frank is his manager (action type 1013) and he works with him (action type 1019).

<d:element m:type="SP.KeyValue">
    <d:Key>Edges</d:Key>
    <d:Value>[{"ActorId":111,"ObjectId":222,
        "Properties":{"Action":1013,"Blob":[],
        "ObjectSource":1,"Time":"2014-08-19T10:46:49.0000000Z",
        "Weight":296}},
        {"ActorId":111,"ObjectId":222,
        "Properties":{"Action":1019,"Blob":[],
        "ObjectSource":1,"Time":"2014-08-17T18:16:21.0000000Z",
        "Weight":51}}]
    </d:Value>
    <d:ValueType>Edm.String</d:ValueType>
</d:element>

 

Scenario 2: Bob (actor with ID of 111) might have an edge/connection to Proposal.docx (object with ID of 333) because he viewed it recently (action type 1001) and it is trending around him (action type 1020).

<d:element m:type="SP.KeyValue">
    <d:Key>Edges</d:Key>
    <d:Value>[{"ActorId":111,"ObjectId":333,
        "Properties":{"Action":1001,"Blob":[],
        "ObjectSource":1,"Time":"2014-08-19T13:12:57.0000000Z",
        "Weight":7014}},
        {"ActorId":111,"ObjectId":333,
        "Properties":{"Action":1020,"Blob":[],
        "ObjectSource":1,"Time":"2014-08-17T11:44:34.0000000Z",
        "Weight":4056}}]
    </d:Value>
    <d:ValueType>Edm.String</d:ValueType>
</d:element>

 

To traverse the Office Graph through an actor, you need to use the ObjectId from the Edges managed property. The ActorId represents the actor the results came from. The new solution uses edge weight (i.e., closeness) instead of view counts for bubble size. Because an object can have multiple edges, I decided to use the largest edge weight, as seen in my parse function below:

Parse Query Results
//parse a search result row into an actor
var parseActorResults = function (row) {
    var o = {};
    o.type = 'actor';
    $(row.Cells.results).each(function (ii, ee) {
        if (ee.Key == 'PreferredName')
            o.title = ee.Value;
        else if (ee.Key == 'PictureURL')
            o.pic = ee.Value;
        else if (ee.Key == 'JobTitle')
            o.text1 = ee.Value;
        else if (ee.Key == 'Department')
            o.text2 = ee.Value;
        else if (ee.Key == 'Path')
            o.path = ee.Value;
        else if (ee.Key == 'DocId')
            o.docId = ee.Value;
        else if (ee.Key == 'Rank')
            o.rank = parseFloat(ee.Value);
        else if (ee.Key == 'Edges') {
            //get the highest edge weight
            var edges = JSON.parse(ee.Value);
            o.actorId = edges[0].ObjectId;
            $(edges).each(function (i, e) {
                var w = parseInt(e.Properties.Weight);
                if (o.edgeWeight == null || w > o.edgeWeight)
                    o.edgeWeight = w;
            });
        }
    });
    return o;
}

 

I also found that document objects had significantly larger edge weights than user objects. To adjust for this across two queries, I perform a normalization on user objects to keep their bubble size similar to document objects.

Edge Weight Normalization

//go through all children to counts and sum for edgeWeight normalization
var cntO = 0, totO = 0, cntA = 0, totA = 0;
$(entity.children).each(function (i, e) {
    if (e.type == 'actor') {
        totA += e.edgeWeight;
        cntA++;
    }
    else if (e.type == 'object') {
        totO += e.edgeWeight;
        cntO++;
    }
});

//normalize edgeWeight across objects and actors
totalEdgeWeight = 0;
$(entity.children).each(function (i, e) {
    //adjust edgeWeight for actors only
    if (e.type == 'actor') {
        //pct of average * average of objects
        e.edgeWeight = (e.edgeWeight / (totA / cntA)) * (totO / cntO);
    }
    totalEdgeWeight += e.edgeWeight
});

 

More on Preview Images

In the first post, I introduced the getpreview.ashx handler for generating on-demand preview images for documents. Some documents such as Excel and PDF don’t always render previews, so I added some logic for this. Ultimately, I try to pre-load the images (which I was already doing for SVG) and then revert to a static image if the pre-loaded image has a height or width of 0px. I also do this for users that don’t have a profile picture.

Handle Bad Preview Images

//load the images so we can get the natural dimensions
$('#divHide img').remove();
var hide = $('<div></div>');
hide.append('<img src="' + entity.pic + '" />');
$(entity.children).each(function (i, e) {
    hide.append('<img src="' + e.pic + '" />');
});
hide.appendTo('#divHide');
$('#divHide img').each(function (i, e) {
    if (i == 0) {
        entity.width = parseInt(e.naturalWidth);
        entity.height = parseInt(e.naturalHeight);
    }
    else {
        entity.children[i - 1].width = parseInt(e.naturalWidth);
        entity.children[i - 1].height = parseInt(e.naturalHeight);

        if (entity.children[i - 1].width == 0 ||
            entity.children[i - 1].height == 0) {
            if (entity.children[i - 1].type == 'actor') {
                entity.children[i - 1].width = 96;
                entity.children[i - 1].height = 96;
                entity.children[i - 1].pic = '../images/nopic.png';
            }
            else if (entity.children[i - 1].ext == 'xlsx' || entity.children[i - 1].ext == 'xls') {
                entity.children[i - 1].width = 300;
                entity.children[i - 1].height = 300;
                entity.children[i - 1].pic = '../images/excel.png';
            }
            else if (entity.children[i - 1].ext == 'docx' || entity.children[i - 1].ext == 'doc') {
                entity.children[i - 1].width = 300;
                entity.children[i - 1].height = 300;
                entity.children[i - 1].pic = '../images/word.png';
            }
            else if (entity.children[i - 1].ext == 'pdf') {
                entity.children[i - 1].width = 300;
                entity.children[i - 1].height = 300;
                entity.children[i - 1].pic = '../images/pdf.png';
            }
        }
    }
});

 

NOTE: the solution displays cross-domain user profile pictures, which can be buggy with Internet Explorer Security Zones. I’ve written a blog about handling cross-site images. However, I didn’t implement this pattern in the solution due to the potential volume of results. For best results, I recommend one or all of the following:

  • Sign into Office 365 with the “keep me signed in” option checked
  • Install the app in the MySiteHost or at least authenticate against the MySiteHost or OneDrive before opening the app
  • Make sure the app URL is in the same IE Security Zone as the hostweb and MySiteHost

 

Final Thoughts

Due to the popularity of the first post, I’ve decided to post the new solution on GitHub. Hopefully this will facilitate a community effort to add enhancements and fix bugs. Together, we can take the Office Graph to exciting new places.

Building Apps with the new Power BI APIs


Last month, Microsoft unveiled the new and improved Power BI, a cloud-based business analytics service for non-technical business users. The new Power BI is available for preview in the US. It has amazing new (HTML5) visuals, data sources, mobile applications, and developer APIs. This post will focus on the new Power BI APIs and how to use them to create and load data into Power BI datasets in the cloud. Microsoft is also working with strategic partners to add native data connectors to the Power BI service. If you have a great connector idea, you can submit it HERE. However, ANYONE can build applications that leverage the new APIs to send data into Power BI, so let’s get started!

Video: http://www.youtube.com/watch?v=5DCW834Vt6I

Yammer Analytics Revisited

I’ve done a ton of research and development on using Power BI with Yammer data. In fact, last year I built a custom cloud service that exported Yammer data and loaded it into workbooks (with pre-built models). The process was wildly popular, but required several manual steps that were prone to user error. As such, I decided to use the Yammer use case for my Power BI API sample. Regardless of whether you are interested in Yammer data, you will find generic functions for interacting with Power BI.

Why are Power BI APIs significant?

Regardless of how easy Microsoft makes data modeling, end-users (the audience for Power BI) don’t care about modeling and would rather just answer questions with the data. Power BI APIs can automate modeling/loading and give end-users immediate access to answers. Secondly, some data sources might be proprietary, highly normalized, or overly complex to model. Again, Power BI APIs can solve this through automation. Finally, some data sources might have unique constraints that make them hard to query using normal connectors. For example, Yammer has REST end-points to query data. However, these end-points have unique rate limits that cause exceptions with normal OData connectors. Throttling is just one example of a unique constraint that can be addressed by owning the data export/query process in a 3rd party application that uses the Power BI APIs.
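
As a hypothetical sketch of what owning the export process enables, a custom exporter could retry automatically when a rate-limited endpoint responds with HTTP 429 (the delay and retry count below are illustrative assumptions):

//hypothetical retry wrapper for a rate-limited REST endpoint (ex: Yammer)
function getWithRetry(url, retriesLeft, callback) {
    $.ajax({
        url: url,
        type: "GET",
        success: function (data) {
            callback(data);
        },
        error: function (xhr) {
            if (xhr.status === 429 && retriesLeft > 0) {
                //throttled...wait 10 seconds and try again
                setTimeout(function () {
                    getWithRetry(url, retriesLeft - 1, callback);
                }, 10000);
            }
        }
    });
}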

Common Consent Vision

My exploration of the Power BI APIs really emphasized Microsoft’s commitment to Azure AD and “Common Consent” applications. Common Consent refers to the ability of an application leveraging Azure AD to authenticate ONCE and get access to multiple Microsoft services such as SharePoint Online, Exchange Online, CRM Online, and (now) Power BI. All a developer needs to do is request appropriate permissions and (silently) get service-specific access tokens to communicate with the different services. Azure AD will light up with more services in the future, but I’m really excited to see how far Microsoft has come in one year and the types of applications they are enabling.

Power BI API Permissions

Power BI APIs use Azure Active Directory and OAuth 2.0 to authenticate users and authorize 3rd party applications. An application leveraging the Power BI APIs must first be registered as an Azure AD Application with permissions to Power BI. Currently, Azure AD supports three delegated permissions to Power BI from 3rd party applications: “View content properties”, “Create content”, and “Add data to a user’s dataset”. “Delegated Permissions” means that the API calls are made on behalf of an authenticated user…not an elevated account as would be the case with “Application Permissions” (“Application Permissions” could be added in the future). The permissions for an Azure AD App can be configured in the Azure Management Portal as seen below.

Access Tokens and API Calls

With an Azure AD App configured with Power BI permissions, the application can request resource-specific access tokens to Power BI (using the resource ID “https://analysis.windows.net/powerbi/api”). The method below shows an asynchronous call to get a Power BI access token in a web project.

getAccessToken for Power BI APIs
/// <summary>
/// Gets a resource specific access token for Power BI ("https://analysis.windows.net/powerbi/api")
/// </summary>
/// <returns>Access Token string</returns>
private static async Task<string> getAccessToken()
{
    // fetch the needed identifiers from the user's claims
    var signInUserId = ClaimsPrincipal.Current.FindFirst(ClaimTypes.NameIdentifier).Value;
    var userObjectId = ClaimsPrincipal.Current.FindFirst(SettingsHelper.ClaimTypeObjectIdentifier).Value;
    // setup app info for AuthenticationContext
    var clientCredential = new ClientCredential(SettingsHelper.ClientId, SettingsHelper.ClientSecret);
    var userIdentifier = new UserIdentifier(userObjectId, UserIdentifierType.UniqueId);
    // create auth context (note: no token cache leveraged)
    AuthenticationContext authContext = new AuthenticationContext(SettingsHelper.AzureADAuthority);
    // get access token for Power BI
    return authContext.AcquireToken(SettingsHelper.PowerBIResourceId, clientCredential, new UserAssertion(userObjectId, UserIdentifierType.UniqueId.ToString())).AccessToken;
}

 

The Power BI APIs offer REST endpoints to interact with datasets in Power BI. In order to call the REST end-points, a Power BI access token must be placed as a Bearer token in the Authorization header of all API calls. This can be accomplished server-side or client-side. In fact, the Power BI team has an API Explorer to see how most API calls can be performed in just about any language. I decided to wrap my API calls behind a Web API Controller as seen below. Take note of the Bearer token set in the Authorization header of each HttpClient call.
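
For reference, a purely client-side call follows the same pattern.  This is a hypothetical sketch that assumes an accessToken variable already holds a Power BI access token:

//hypothetical client-side call to the Power BI REST endpoint
$.ajax({
    url: "https://api.powerbi.com/beta/myorg/datasets",
    type: "GET",
    headers: { "Authorization": "Bearer " + accessToken, "Accept": "application/json" },
    success: function (data) {
        //enumerate the user's datasets HERE
    },
    error: function (er) {
        //output error HERE
    }
});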

Web API Controller
public class PowerBIController : ApiController
{
    [HttpGet]
    public async Task<List<PowerBIDataset>> GetDatasets()
    {
        return await PowerBIModel.GetDatasets();
    }
    [HttpGet]
    public async Task<PowerBIDataset> GetDataset(Guid id)
    {
        return await PowerBIModel.GetDataset(id);
    }
    [HttpPost]
    public async Task<Guid> CreateDataset(PowerBIDataset dataset)
    {
        return await PowerBIModel.CreateDataset(dataset);
    }
    [HttpDelete]
    public async Task<bool> DeleteDataset(Guid id)
    {
        //DELETE IS UNSUPPORTED
        return await PowerBIModel.DeleteDataset(id);
    }
    [HttpPost]
    public async Task<bool> ClearTable(PowerBITableRef tableRef)
    {
        return await PowerBIModel.ClearTable(tableRef.datasetId, tableRef.tableName);
    }
    [HttpPost]
    public async Task<bool> AddTableRows(PowerBITableRows rows)
    {
        return await PowerBIModel.AddTableRows(rows.datasetId, rows.tableName, rows.rows);
    }
}

 

Power BI Model Class
/// <summary>
/// Gets all datasets for the user
/// </summary>
/// <returns>List of PowerBIDataset</returns>
public static async Task<List<PowerBIDataset>> GetDatasets()
{
    List<PowerBIDataset> datasets = new List<PowerBIDataset>();
    var token = await getAccessToken();
    var baseAddress = new Uri("https://api.powerbi.com/beta/myorg/");
    using (var client = new HttpClient{ BaseAddress = baseAddress })
    {
        client.DefaultRequestHeaders.Add("Authorization", "Bearer " + token);
        client.DefaultRequestHeaders.Add("Accept", "application/json; odata=verbose");
        using (var response = await client.GetAsync("datasets"))
        {
            string responseString = await response.Content.ReadAsStringAsync();
            JObject oResponse = JObject.Parse(responseString);
            datasets = oResponse.SelectToken("datasets").ToObject<List<PowerBIDataset>>();
        }
    }
    return datasets;
}
/// <summary>
/// Gets a specific dataset based on id
/// </summary>
/// <param name="id">Guid id of dataset</param>
/// <returns>PowerBIDataset</returns>
public static async Task<PowerBIDataset> GetDataset(Guid id)
{
    PowerBIDataset dataset = null;
    var token = await getAccessToken();
    var baseAddress = new Uri("https://api.powerbi.com/beta/myorg/");
    using (var client = new HttpClient { BaseAddress = baseAddress })
    {
        client.DefaultRequestHeaders.Add("Authorization", "Bearer " + token);
        client.DefaultRequestHeaders.Add("Accept", "application/json; odata=verbose");
        using (var response = await client.GetAsync(String.Format("datasets/{0}", id.ToString())))
        {
            string responseString = await response.Content.ReadAsStringAsync();
            JObject oResponse = JObject.Parse(responseString);
            //NOTE: assumes the response body maps directly to PowerBIDataset
            dataset = oResponse.ToObject<PowerBIDataset>();
        }
    }
    return dataset;
}
/// <summary>
/// Creates a dataset, including tables/columns
/// </summary>
/// <param name="dataset">PowerBIDataset</param>
/// <returns>Guid id of the new dataset</returns>
public static async Task<Guid> CreateDataset(PowerBIDataset dataset)
{
    var token = await getAccessToken();
    var baseAddress = new Uri("https://api.powerbi.com/beta/myorg/");
    using (var client = new HttpClient{ BaseAddress = baseAddress })
    {
        var content = new StringContent(JsonConvert.SerializeObject(dataset).Replace("\"id\":\"00000000-0000-0000-0000-000000000000\",", ""), System.Text.Encoding.Default, "application/json");
        client.DefaultRequestHeaders.Add("Authorization", "Bearer " + token);
        client.DefaultRequestHeaders.Add("Accept", "application/json");
        using (var response = await client.PostAsync("datasets", content))
        {
            string responseString = await response.Content.ReadAsStringAsync();
            JObject oResponse = JObject.Parse(responseString);
            dataset.id = new Guid(oResponse.SelectToken("id").ToString());
        }
    }
    return dataset.id;
}
/// <summary>
/// !!!!!!!!!!!! THIS IS CURRENTLY UNSUPPORTED !!!!!!!!!!!!
/// Deletes a dataset
/// </summary>
/// <param name="dataset">Guid id of the dataset</param>
/// <returns>bool indicating success</returns>
public static async Task<bool> DeleteDataset(Guid dataset)
{
    bool success = false;
    var token = await getAccessToken();
    var baseAddress = new Uri("https://api.powerbi.com/beta/myorg/");
    using (var client = new HttpClient { BaseAddress = baseAddress })
    {
        client.DefaultRequestHeaders.Add("Authorization", "Bearer " + token);
        client.DefaultRequestHeaders.Add("Accept", "application/json");
        using (var response = await client.DeleteAsync(String.Format("datasets/{0}", dataset.ToString())))
        {
            string responseString = await response.Content.ReadAsStringAsync();
            success = true;
        }
    }
    return success;
}
/// <summary>
/// Clear all data our of a given table of a dataset
/// </summary>
/// <param name="dataset">Guid dataset id</param>
/// <param name="table">string table name</param>
/// <returns>bool indicating success</returns>
public static async Task<bool> ClearTable(Guid dataset, string table)
{
    bool success = false;
    var token = await getAccessToken();
    var baseAddress = new Uri("https://api.powerbi.com/beta/myorg/");
    using (var client = new HttpClient { BaseAddress = baseAddress })
    {
        client.DefaultRequestHeaders.Add("Authorization", "Bearer " + token);
        client.DefaultRequestHeaders.Add("Accept", "application/json");
        using (var response = await client.DeleteAsync(String.Format("datasets/{0}/tables/{1}/rows", dataset.ToString(), table)))
        {
            string responseString = await response.Content.ReadAsStringAsync();
            success = true;
        }
    }
    return success;
}
/// <summary>
/// Adds rows to a given table and dataset in Power BI
/// </summary>
/// <param name="dataset">PowerBIDataset</param>
/// <param name="table">PowerBITable</param>
/// <param name="rows">List<Dictionary<string, object>></param>
/// <returns></returns>
public static async Task<bool> AddTableRows(Guid dataset, string table, List<Dictionary<string, object>> rows)
{
    bool success = false;
    var token = await getAccessToken();
    var baseAddress = new Uri("https://api.powerbi.com/beta/myorg/");
    using (var client = new HttpClient { BaseAddress = baseAddress })
    {
        //build the json post by looping through the rows and columns for each row
        string json = "{\"rows\": [";
        foreach (var row in rows)
        {
            //process each column on the row
            json += "{";
            foreach (var key in row.Keys)
            {
                json += "\"" + key + "\": \"" + row[key].ToString() + "\",";
            }
            json = json.Substring(0, json.Length - 1) + "},";
        }
        json = json.Substring(0, json.Length - 1) + "]}";
        var content = new StringContent(json, System.Text.Encoding.Default, "application/json");
        client.DefaultRequestHeaders.Add("Authorization", "Bearer " + token);
        client.DefaultRequestHeaders.Add("Accept", "application/json");
        using (var response = await client.PostAsync(String.Format("datasets/{0}/tables/{1}/rows", dataset.ToString(), table), content))
        {
            string responseString = await response.Content.ReadAsStringAsync();
            success = true;
        }
    }
    return success;
}

 

Here are a few examples of calling these Web API methods client-side.

Client-side Calls to Web API
// sets up the dataset for loading
function createDataset(name, callback) {
    var data = {
        name: name, tables: [{
                name: "Messages", columns: [
                    { name: "Id", dataType: "string" },
                    { name: "Thread", dataType: "string" },
                    { name: "Created", dataType: "DateTime" },
                    { name: "Client", dataType: "string" },
                    { name: "User", dataType: "string" },
                    { name: "UserPic", dataType: "string" },
                    { name: "Attachments", dataType: "Int64" },
                    { name: "Likes", dataType: "Int64" },
                    { name: "Url", dataType: "string" }]
        }]};
    $.ajax({
        url: "/api/PowerBI/CreateDataset",
        type: "POST",
        data: JSON.stringify(data),
        contentType: "application/json",
        success: function (datasetId) {
            callback(datasetId);
        },
        error: function (er) {
            $("#alert").html("Error creating dataset…");
            $("#alert").show();
        }
    });
}
// clear rows from existing dataset
function clearDataset(datasetId, callback) {
    var data = { datasetId: datasetId, tableName: "Messages" };
    $.ajax({
        url: "/api/PowerBI/ClearTable",
        type: "POST",
        data: JSON.stringify(data),
        contentType: "application/json",
        success: function (data) {
            callback();
        },
        error: function (er) {
            $("#alert").html(("Error clearing rows in dataset {0}…").replace("{0}", $("#cboDataset option:selected").text()));
            $("#alert").show();
        }
    });
}
// adds rows to the dataset
function addRows(datasetId, rows, callback) {
    var data = { datasetId: datasetId, tableName: "Messages", rows: rows };
    $.ajax({
        url: "/api/PowerBI/AddTableRows",
        type: "POST",
        data: JSON.stringify(data),
        contentType: "application/json",
        success: function (data) {
            callback();
        },
        error: function (er) {
            $("#alert").html("Error adding rows to dataset");
            $("#alert").show();
        }
    });
}

 

My application can create new datasets in Power BI or update existing datasets. For existing datasets, it can append new rows or purge old rows before loading. Once the processing is complete, the dataset can be explored immediately in Power BI.
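
To tie these together, here is a minimal sketch of that load flow built on the server-side helper methods above. The "Messages" table name and the purgeFirst flag are illustrative assumptions, not part of the original solution.

Sketch: loading a dataset with the helper methods
public static async Task LoadMessages(Guid datasetId, List<Dictionary<string, object>> rows, bool purgeFirst)
{
    //optionally purge old rows before loading (caller's choice)
    if (purgeFirst)
        await ClearTable(datasetId, "Messages");
    //append the new rows to the table
    await AddTableRows(datasetId, "Messages", rows);
}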

Conclusion

The new Power BI is a game-changer for business analytics. The Power BI APIs offer amazing opportunities for ISVs/Developers. They can enable completely new data-driven scenarios and help take the modeling burden off the end-user. You can download the completed solution outlined in this post below (please note you will need to generate your own application IDs for Azure AD and Yammer).

Solution Download

Using SignalR to communicate between an App for Office and Popups


Apps for Office are a powerful and flexible way to extend Office across all the new Office form factors (browser, PC, phone). Apps for Office come in many sizes/shapes (Mail Apps, Compose Apps, Task Pane Apps, Content Apps). Although users can resize apps for Office, they will typically launch in a default dimension that developers should design around. Often this doesn't provide enough screen real estate to display everything the app needs (ex: task pane apps default to 320px wide). A good example is an OAuth flow against a 3rd party, where the app has no control over the design. For these scenarios, it might be appropriate to leverage popups (although in general popups should be avoided). The challenge is that apps run in isolation that prevents popups from communicating back into them. This post outlines the use of SignalR to solve this communication challenge.

[View:http://www.youtube.com/watch?v=FGsa2J5qmcE]

What is SignalR

SignalR is an ASP.NET technology that enables near real-time communication between servers and web browsers. It uses "Hubs" on the server that can broadcast data over WebSockets to all or specific web browsers that are connected to the hub. WebSockets enable SignalR to push data to the web browser (as opposed to the web browser polling for data). Here is the SignalR Hub for the sample app…it simply accepts a message and sends it to the specified client ID.

SignalR Hub for sending messages
public class PopupCommunicationHub : Hub
{
    public void Initialize()
    {
    }
    public void SendMessage(string clientID, string message)
    {
        //send the message to the specific client passed in
        Clients.Client(clientID).sendMessage(message);
    }
}

 

How it Works

When a web browser establishes a connection to a SignalR hub, it is assigned a unique client ID (a GUID).  The Office app and its popup(s) will each have their own unique client ID. SignalR can push messages through the hub to all or specific client IDs. We can enable app-to-popup communication by making each aware of the other's client ID. First, we'll pass the client ID of the parent to the popup via a URL parameter on the popup.

Passing the SignalR client ID of the app to the popup
// Get a handle to the hub (proxy generated by SignalR)
var hub = $.connection.popupCommunicationHub;
// Start the connection.
$.connection.hub.start().done(function () {
    hub.server.initialize();
    //get the parentId off the hub
    parentId = $.connection.hub.id;
    //wire the event to launch popup (passing the parentId)
    $("#btnLaunchPopup").click(function () {
        //pass the parentId to the popup via url parameter
        window.open("/Home/PopupWindow?parentId=" + parentId, "", "width=850, height=600, scrollbars=0, toolbar=0, menubar=0, resizable=0, status=0, titlebar=0");
        $("#authorizeModal").modal("show");
        $("#btnLaunchPopup").html("Waiting on popup handshake");
        $("#btnLaunchPopup").attr("disabled", "disabled");
    });
    //wire the send message
    $("#btnSend").click(function () {
        if (popupId != null) {
            //send the message over the hub
            hub.server.sendMessage(popupId, $("#txtMessage").val());
            $("#txtMessage").val("");
        }
    });
});

 

The popup can read the app's client ID off the URL and then send its own client ID as the first message to the parent (once the hub connection is set up).

Logic for the popup to read the client ID of app and send app its own client ID
var parentId = null, popupId = null;
$(document).ready(function () {
    //utility function to get parameter from query string
    var getQueryStringParameter = function (urlParameterKey) {
        var params = document.URL.split('?')[1].split('&');
        for (var i = 0; i < params.length; i = i + 1) {
            var singleParam = params[i].split('=');
            if (singleParam[0] == urlParameterKey)
                return singleParam[1];
        }
    }
    //get the parentId off the url parameters
    parentId = decodeURIComponent(getQueryStringParameter('parentId')).split('#')[0];
    //setup signalR hub
    var hub = $.connection.popupCommunicationHub;
    // Create a function that the hub can call to broadcast messages
    hub.client.sendMessage = function (message) {
        $("#theList").append($("<li class='list-group-item'>" + message + "</li>"));
    };
    // Start the connection.
    $.connection.hub.start().done(function () {
        hub.server.initialize();
        //get the popupId off the hub and send to the parent
        popupId = $.connection.hub.id;
        hub.server.sendMessage(parentId, popupId);
        //initialize the textbox
        $("#txtMessage").removeAttr("disabled");
        //wire the send message
        $("#btnSend").click(function () {
            //send the message over the hub
            hub.server.sendMessage(parentId, $("#txtMessage").val());
            $("#txtMessage").val("");
        });
    });
});

 

The app will expect the popup client ID as the first message it receives from the hub. At this point, the app and the popup each know the other's client ID. These client IDs are like the browser window's phone number for communication. Messages can be sent to specific client IDs and get pushed through the hub in near real time.

Logic in app to treat first message as the client ID of the popup
// Create a function that the hub can call to broadcast messages
hub.client.sendMessage = function (message) {
    //first message should be the popupId
    if (popupId == null) {
        popupId = message;
        $("#init").hide();
        $("#send").show();
    }
    else {
        $("#theList").append($("<li class='list-group-item'>" + message + "</li>"));
    }
};

 

Conclusion

Although popups should be avoided with apps for Office, they are sometimes unavoidable. In those scenarios, SignalR gives apps a better user experience. You can download the completed solution outlined in the video and post below.

Download the Solution: http://1drv.ms/1EOhtyl

Next Generation Office 365 Development with APIs and Add-ins


This week at //build, Microsoft made a number of exciting announcements regarding Office 365 development. If you haven’t had a chance, I highly encourage you to watch the foundational keynote that Jeremy Thake and Rob Lefferts delivered on the opening day…it was epic. In the months leading up to //build, I had the pleasure of working with Do.com on a solution to showcase many of the new Office 365 extensibility investments. I thought I’d give my perspective on working with the new extensibility options in Office 365 and how we applied them to Do.com (a solution already richly integrated with Office 365). I’ll break my thoughts down into “Next Generation Add-ins” and “New and Unified APIs”. But first, here is the prototype of the Do.com Outlook add-in that uses a number of the new announcements.

 

[View:http://www.youtube.com/watch?v=IcBix75IqlE]

 

NOTE: You should really check out Do.com if you haven't already. I really hate senseless and unorganized meetings, which Do.com has helped me reduce. The video above is a prototype, but they already have a great site and mobile apps that integrate nicely with Office 365, and an aggressive vision for more integration.

 

Next Generation Add-ins

This week we demonstrated Office apps running within the iPad Office clients. This was an exciting announcement that confirms Microsoft's commitment to Office extensibility and "write-once run anywhere". However, it also set off some concerns that an "app within an app" could be confusing (if the term "app" wasn't confusing enough already). Moving forward, these extensions to the Office experience will be called "add-ins" (a term we hope more people can relate to).

Office Add-in in Office for iPad

We also announced a new type of add-in called an "add-in command". Add-in commands are custom commands pinned to the Office user experience to perform custom operations. An add-in command might launch a full add-in for the user to interact with or just perform some background process (similar to Outlook's "mark as read"). The first generation of add-in commands is concentrated in the ribbon (an area developers want to target and on par with VSTO solutions). At //build we showcased a Do.com task pane add-in launched from the Outlook ribbon (task pane read add-ins are also new). For Do.com, an add-in command provided more visibility to their add-in and brand from within Outlook (especially compared to previous mail add-ins). Check out the presence of the Do.com logo in the Outlook ribbon.

Do.com Outlook add-in via add-in command

Speaking of Outlook add-ins, we also announced that the same Outlook add-ins built for Office 365 and Exchange Server will be able to target the 400 million users of Outlook.com. This is just one example of unification efforts across consumer and commercial Microsoft services. If you build for Office, you could have a HUGE customer audience!

New and Unified APIs

APIs have been a significant investment area for Office 365 extensibility. About a year ago, Microsoft announced the preview of the Office 365 APIs. Since that time, the APIs have graduated to general availability and added several new services. At //build we announced perhaps our most strategic move with these APIs…unifying them under a single end-point (https://graph.microsoft.com).

Why is the Office 365 Unified API end-point so significant? Most of the services in Office 365 offer APIs, but they have traditionally been resolved under service-specific or even tenant-specific end-points. For tenant-specific end-points like SharePoint/Files, the Discovery Service had to be leveraged just to determine where to make API calls. Although 3rd party apps could provide a single sign-on experience across all the Office 365 services, resource-specific access tokens had to be requested behind the scenes. Both the Discovery Service and token management made first-gen Office 365 apps chatty. The Office 365 Unified API end-point solves both these challenges by eliminating the need for the Discovery Service and providing a single access token that can be used against any service that falls under the unified end-point.

Consider Do.com, which needed access to Azure AD (for first/last name), Exchange Online (for the high-res profile picture), and OneNote in Office 365 (for exporting agendas). The comparison below shows the flow with and without the Office 365 Unified API end-point:

With the O365 Unified API end-point:

  1. Get an access token for the resource https://graph.microsoft.com (the O365 Unified API end-point)
  2. Use the unified end-point to get the user's properties (first/last name and manager): https://graph.microsoft.com/beta/me
  3. Use the unified end-point to get the user's high-res profile picture from Exchange Online: https://graph.microsoft.com/beta/me/userPhoto/$value
  4. Use the unified end-point to get the user's notebooks in Office 365*: https://graph.microsoft.com/beta/me/notes/notebooks

Without the O365 Unified API end-point:

  1. Get an access token for the resource https://api.office.com/discovery/ (Discovery Service)
  2. Call the Discovery Service to get the user's capabilities: https://api.office.com/discovery/v1.0/me/
  3. Get an access token for the resource https://graph.windows.net (Azure AD Graph)
  4. Call Azure AD Graph to get the user's properties (first/last name and manager): https://graph.windows.net/me
  5. Get an access token for the resource https://outlook.office365.com (Exchange Online)
  6. Call Exchange Online to get the user's high-resolution profile picture: https://outlook.office365.com/api/beta/me/userphoto/$value
  7. Get an access token for the resource https://onenote.com/ (OneNote in Office 365)
  8. Call the OneNote API to get the user's notebooks in Office 365: https://www.onenote.com/api/beta/me/notes/notebooks
* OneNote APIs for Office 365 were announced this week, but the unified end-point won’t be live until a future date

Hopefully this illustrates the significance of unification. Oh and by the way…the Office 365 Unified API end-point also supports CORS from day one (#HighFiveRedmond).
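
To make the single-token flow concrete, here is a minimal C# sketch that reuses one access token from the unified end-point across all three calls (token acquisition via ADAL is omitted; the beta URLs are the ones listed above):

Sketch: one token, three calls against the unified end-point
public static async Task GetUserBasics(string accessToken)
{
    using (var client = new HttpClient())
    {
        client.DefaultRequestHeaders.Add("Authorization", "Bearer " + accessToken);
        client.DefaultRequestHeaders.Add("Accept", "application/json");

        //user properties (first/last name and manager)
        string profile = await client.GetStringAsync("https://graph.microsoft.com/beta/me");

        //high-res profile picture from Exchange Online
        byte[] photo = await client.GetByteArrayAsync("https://graph.microsoft.com/beta/me/userPhoto/$value");

        //OneNote notebooks (this end-point was not yet live at the time of this post)
        string notebooks = await client.GetStringAsync("https://graph.microsoft.com/beta/me/notes/notebooks");
    }
}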

The Microsoft Unified API wasn’t the only exciting API announcement made at //build. We also announced a number of completely new services, permissions to existing services, and SDKs:

The Do.com site already had options to export meeting agendas to Evernote, so this offered an opportunity to integrate the new OneNote APIs with Office 365. "New" might be a little deceiving, as the APIs are identical to the existing OneNote consumer APIs (just different authentication/access token providers). In fact, big efforts were announced to provide common APIs across consumer/commercial services with OneDrive/OneDrive for Business, OneNote, and Outlook/Outlook.com.

OneNote integration with Office 365

Do.com is all about facilitating a more productive meeting, and in the Land of Gates, most meetings involve Skype for Business (formerly Lync) as a meeting bridge. The new Skype Web SDK posed a great opportunity to Skype-enable the Do.com add-in with audio and video (the SDK supports other modalities).

Skype Web SDK integration for Audio/Video

Finally, the Do.com add-in leveraged the new Exchange Online endpoints to display a user’s high-resolution profile picture. This is a nice touch that almost any application connecting into Office 365 can benefit from.

Do.com leveraging new Exchange Online APIs for Profile Picture

Conclusion

I hope you can see how this was a significant week of announcements for Office 365 developers. Be sure to check out some of the great sessions delivered at //build on Channel 9 and let us know what you think of the new Office 365 extensibility announcements! Below are some helpful links related to the announcements at //build, but you can always find the latest info at http://dev.office.com

Office 365 Unified API Endpoint
http://channel9.msdn.com/Events/Build/2015/3-641

OneNote APIs for Office 365
http://channel9.msdn.com/Events/Build/2015/2-715

Office 365 Groups REST API
http://channel9.msdn.com/Events/Build/2015/3-701

Developing for Outlook.com AND Office 365
http://channel9.msdn.com/Events/Build/2015/3-742

Developing for OneDrive AND OneDrive for Business
http://channel9.msdn.com/Events/Build/2015/3-734

Skype Developer Platform
http://channel9.msdn.com/Events/Build/2015/3-643

Building Solutions with Office Graph
http://channel9.msdn.com/Events/Build/2015/3-676

Next-gen Outlook Add-ins
http://channel9.msdn.com/Events/Build/2015/3-694

Busy week…time to relax in my #DoSocks

Performing app-only operations on SharePoint Online through Azure AD


As all the shock and awe announcements were made this week at //build, Microsoft quietly turned on the ability to make app-only calls into SharePoint Online using Azure AD. This enables a whole new variety of scenarios that SharePoint Online and ACS couldn't deliver alone (such as leveraging multiple services secured by Azure AD). It also provides a more secure way of performing background operations against Office 365 services (more on that later). In this post, I will provide a step-by-step outline for creating a background process that talks to SharePoint Online and can run as an Azure Web Job.

Azure AD App-only vs. ACS App-only

Before jumping into the technical gymnastics of implementation, you might wonder why not just use SharePoint and appregnew.aspx/appinv.aspx to register an app that has app-only permissions? After all, this is a popular approach and has been well documented by numerous people, including myself. Well, consider the scenario where you want a background or anonymous service to leverage more than just SharePoint. Applications defined through Azure Active Directory can leverage the full breadth of the common consent framework. That is, they can connect to any service that is defined in Azure AD and offers application permissions. Secondly, these applications are (in my opinion) a little more secure, since their trust is established through a certificate instead of the application secret that ACS uses.

Getting Started

Applications defined in Azure AD are allowed to make app-only calls by sharing a certificate with Azure AD. Azure AD will get the public key certificate and the app will get the private key certificate. Although a trusted certificate should be used for production deployments, makecert/self-signed certificates are fine for testing/debugging (similar to local web debugging with https). Here are the steps to generate a self-signed certificate with makecert.exe and export it for use with Azure AD.

Part 1: Generate a Self-signed Certificate

1. Open Visual Studio Tools Command Prompt

2. Run makecert.exe with the following syntax:

makecert -r -pe -n "CN=MyCompanyName MyAppName Cert" -b 12/15/2014 -e 12/15/2016 -ss my -len 2048

Example:

makecert -r -pe -n "CN=Richdizz O365AppOnly Cert" -b 05/03/2015 -e 05/03/2017 -ss my -len 2048

 

3. Run mmc.exe and add snap-in for Certificates >> My user account

4. Locate the certificate from step 2 in the Personal certificate store

 

5. Right-click and select All tasks >> Export

6. Complete the Certificate Export Wizard twice…once with the private key (specify a password and save as .pfx) and once without the private key (save as .cer)

Part 2: Prepare the certificate public key for Azure AD

1. Open Windows PowerShell and run the following commands:

$certPath = Read-Host "Enter certificate path (.cer)"
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2
$cert.Import($certPath)
$rawCert = $cert.GetRawCertData()
$base64Cert = [System.Convert]::ToBase64String($rawCert)
$rawCertHash = $cert.GetCertHash()
$base64CertHash = [System.Convert]::ToBase64String($rawCertHash)
$KeyId = [System.Guid]::NewGuid().ToString()
Write-Host $base64Cert
Write-Host $base64CertHash
Write-Host $KeyId

2. Copy the values output for $base64Cert, $base64CertHash, and $KeyId for Part 3

Part 3: Create the Azure AD App

1. Log into the Azure Management Portal and go to the Azure Active Directory for your Office 365 tenant

2. Go to the Applications tab and click the Add button in the footer to manually add an application

3. Select “Add an application my organization is developing”

4. Give the application a name, keep the default selection of “Web Application and/or Web API” and click the next arrow

5. Enter a Sign-on URL and App ID Uri (values of these don’t really matter other than being unique) and click next to create the application

6. Click on the “Configure” tab and scroll to the bottom of the page to the section titled “Permissions to other applications”

7. Select the desired “Application Permissions” such as permissions to SharePoint Online and/or Exchange Online and click the Save button in the footer

Part 4: Configure certificate public key for App

1. Click the Manage Manifest button in the footer and select “Download Manifest” to save the app manifest locally

2. Open the downloaded manifest file and locate the empty keyCredentials attribute

3. Update the keyCredentials attribute with the following settings:

keyCredentials section

  "keyCredentials": [
    {
      "customKeyIdentifier": "<$base64CertHash FROM ABOVE>",
      "keyId": "<$KeyId FROM ABOVE>",
      "type": "AsymmetricX509Cert",
      "usage": "Verify",
      "value": "<$base64Cert FROM ABOVE>"
    }
  ],

Example:

  "keyCredentials": [
    {
      "customKeyIdentifier": "r12cfITjq64d4FakvA3g3teZRQs=",
      "keyId": "e0c93388-695e-426b-8202-4249f8664301",
      "type": "AsymmetricX509Cert",
      "usage": "Verify",
      "value": "MIIDIzCCAg+gAwI…shortened…hXvgAo0ElrOgrkh"
    }
  ],

 

4. Save the updated manifest and upload it back into Windows Azure using the same Manage Manifest button in the footer (select “Upload Manifest” this time)

5. Everything should now be set up in Azure AD for the app to run in the background and get app-only access tokens from Azure AD.

Building the background process

I used the Visual Studio console application template to build my background service. Just the normal template with NuGet packages for the Azure Active Directory Authentication Libraries (ADAL) and JSON.NET. In fact, most of the code is exactly like a normal .NET project that leverages ADAL and makes REST calls into Office 365. The only difference is that the certificate's private key is passed into the authenticationContext.AcquireTokenAsync method using the ClientAssertionCertificate class.

A few important notes:

  • Although an app-only AAD app can be multi-tenant, it cannot use the /common authority. You must determine the tenant id to tack onto the authority (ex: request id_token response on authorize end-point)
  • My method for storing the certificate and private key is atrocious and only done this way for brevity. Azure Key Vault is a really good solution for securing these sensitive items and is outlined HERE
Console Applications with App-only AAD Tokens

using Microsoft.IdentityModel.Clients.ActiveDirectory;
using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Security.Cryptography.X509Certificates;
using System.Text;
using System.Threading.Tasks;

namespace MyO365BackgroundProcess
{
    class Program
    {
        private static string CLIENT_ID = "4b7fb8dd-0b22-45a2-8248-3cc87a3560a7";
        private static string PRIVATE_KEY_PASSWORD = "P@ssword"; //THIS IS BAD…USE AZURE KEY VAULT
        static void Main(string[] args)
        {
            doStuffInOffice365().Wait();
        }

        private async static Task doStuffInOffice365()
        {
            //set the authentication context
            //you can do multi-tenant app-only, but you cannot use /common for authority…must get tenant ID
            string authority = "https://login.windows.net/rzna.onmicrosoft.com/";
            AuthenticationContext authenticationContext = new AuthenticationContext(authority, false);

            //read the certificate private key from the executing location
            //NOTE: This is a hack…Azure Key Vault is the best approach
            var certPath = System.Reflection.Assembly.GetExecutingAssembly().Location;
            certPath = certPath.Substring(0, certPath.LastIndexOf('\\')) + "\\O365AppOnly_private.pfx";
            var certfile = System.IO.File.OpenRead(certPath);
            var certificateBytes = new byte[certfile.Length];
            certfile.Read(certificateBytes, 0, (int)certfile.Length);
            var cert = new X509Certificate2(
                certificateBytes,
                PRIVATE_KEY_PASSWORD,
                X509KeyStorageFlags.Exportable |
                X509KeyStorageFlags.MachineKeySet |
                X509KeyStorageFlags.PersistKeySet); //switches are important for running in a WebJob
            ClientAssertionCertificate cac = new ClientAssertionCertificate(CLIENT_ID, cert);

            //get the access token to SharePoint using the ClientAssertionCertificate
            Console.WriteLine("Getting app-only access token to SharePoint Online");
            var authenticationResult = await authenticationContext.AcquireTokenAsync("https://rzna.sharepoint.com/", cac);
            var token = authenticationResult.AccessToken;
            Console.WriteLine("App-only access token retrieved");

            //perform a post using the app-only access token to add SharePoint list item in Attendee list
            HttpClient client = new HttpClient();
            client.DefaultRequestHeaders.Add("Authorization", "Bearer " + token);
            client.DefaultRequestHeaders.Add("Accept", "application/json;odata=verbose");

            //create the item payload for saving into SharePoint
            var itemPayload = new
            {
                __metadata = new { type = "SP.Data.SampleListItem" },
                Title = String.Format("Created at {0} {1} from app-only AAD token", DateTime.Now.ToShortDateString(), DateTime.Now.ToShortTimeString())
            };

            //setup the client post
            HttpContent content = new StringContent(JsonConvert.SerializeObject(itemPayload));
            content.Headers.ContentType = MediaTypeHeaderValue.Parse("application/json;odata=verbose");
            Console.WriteLine("Posting ListItem to SharePoint Online");
            using (HttpResponseMessage response = await client.PostAsync("https://rzna.sharepoint.com/_api/web/Lists/getbytitle('Sample')/items", content))
            {
                if (!response.IsSuccessStatusCode)
                    Console.WriteLine("ERROR: SharePoint ListItem Creation Failed!");
                else
                    Console.WriteLine("SharePoint ListItem Created!");
            }
        }
    }
}

 

Conclusion

There you have it…performing background processing against SharePoint Online (and other Office 365 services) using app-only tokens from Azure AD. You can download the solution from the following GitHub repo: https://github.com/richdizz/MyO365BackgroundProcess 

Connecting to Office 365 APIs from a Windows 10 UWP


Unless you have been living under a rock, you probably heard that Microsoft released Windows 10 last week. For app developers, Windows 10 and the new Universal Windows Platform (UWP) realizes a vision of write-once run on any Windows device (desktop, tablet, mobile). In this post, I’ll illustrate how to build a Windows 10 UWP connected to Office 365 using the new WebAccountProvider approach.

[View:http://www.youtube.com/watch?v=Ui2g8Fl79y0]

Truly Universal

Microsoft first introduced the concept of a Universal Windows App at the Build Conference in 2014. This first generation universal app contained separate projects for desktop and mobile with a “shared” project for common code. The goal was to put as much code as possible in the shared project, which often required some technical gymnastics to accomplish. The Windows 10 UWP collapses this 3-project solution into a single unified project.

Old Universal App Structure vs. New Universal App Structure (project screenshots)

 

Connecting to Office 365

Connecting to Office 365 from a Windows 10 UWP uses an updated Connected Service Wizard within Visual Studio 2015. This wizard registers the native application in Azure AD, copies details from Azure AD into the application (ex: Client ID, authority, etc), and pulls down important Nuget packages such as the Office 365 SDK.

Once the Office 365 Service has been added to the UWP, you can start coding against the Office 365 APIs (either via REST or the Office 365 SDK). However, all the Office 365 APIs require access tokens from Azure AD, which requires the app to perform an OAuth flow. In the past, native Windows apps used a WebAuthenticationBroker to manage this flow. The WebAuthenticationBroker was a browser control on OAuth steroids. The Azure AD Authentication Libraries (ADAL) automatically leveraged this when you requested a token. The WebAuthenticationBroker worked great, but didn't always look great within an app given it was loading a framed login screen. The WebAuthenticationBroker still exists in 2015, but the WebAccountProvider is a new mechanism for UWPs and provides a first-class experience.

The WebAccountProvider is optimized for multi-provider scenarios. Imagine building a UWP that leverages file storage across a number of providers (ex: OneDrive, OneDrive for Business, DropBox, Box, etc). Or maybe files from one place but calendar from another. The WebAccountProvider handles these scenarios and token management in a more generic and consistent way when compared to the WebAuthenticationBroker. The WebAccountProvider will be the default authentication experience for Office 365 in a Windows 10 UWP. In fact, if you look at the application that the Connected Service Wizard registers in Azure AD, you will notice a new reply URI format that is specific to supporting the WebAccountProvider.

Working with the WebAccountProvider is very similar to traditional ADAL. We will use it to get access tokens by resource. When we do this, we will first try to get the token silently (the WebAccountProvider could have a token cached) and then revert to prompting the user if the silent request fails. Here is a completed block of code that does all of this:

Using WebAccountProvider to get Azure AD Access Tokens
private static async Task<string> GetAccessTokenForResource(string resource)
{
    string token = null;
    //first try to get the token silently
    WebAccountProvider aadAccountProvider = await WebAuthenticationCoreManager.FindAccountProviderAsync("https://login.windows.net");
    WebTokenRequest webTokenRequest = new WebTokenRequest(aadAccountProvider, String.Empty, App.Current.Resources["ida:ClientID"].ToString(), WebTokenRequestPromptType.Default);
    webTokenRequest.Properties.Add("authority", "https://login.windows.net");
    webTokenRequest.Properties.Add("resource", resource);
    WebTokenRequestResult webTokenRequestResult = await WebAuthenticationCoreManager.GetTokenSilentlyAsync(webTokenRequest);
    if (webTokenRequestResult.ResponseStatus == WebTokenRequestStatus.Success)
    {
        WebTokenResponse webTokenResponse = webTokenRequestResult.ResponseData[0];
        token = webTokenResponse.Token;
    }
    else if (webTokenRequestResult.ResponseStatus == WebTokenRequestStatus.UserInteractionRequired)
    {
        //get token through prompt
        webTokenRequest = new WebTokenRequest(aadAccountProvider, String.Empty, App.Current.Resources["ida:ClientID"].ToString(), WebTokenRequestPromptType.ForceAuthentication);
        webTokenRequest.Properties.Add("authority", "https://login.windows.net");
        webTokenRequest.Properties.Add("resource", resource);
        webTokenRequestResult = await WebAuthenticationCoreManager.RequestTokenAsync(webTokenRequest);
        if (webTokenRequestResult.ResponseStatus == WebTokenRequestStatus.Success)
        {
            WebTokenResponse webTokenResponse = webTokenRequestResult.ResponseData[0];
            token = webTokenResponse.Token;
        }
    }
    return token;
}

 

The WebAccountProvider also looks much different from the WebAuthenticationBroker. This should provide a more consistent sign-in experience across different providers:

WebAuthenticationBroker vs. WebAccountProvider (sign-in screenshots)

Once you have tokens, you can easily use them in REST calls to the Office 365 APIs, or use the GetAccessTokenForResource call in the constructor of Office 365 SDK clients (SharePointClient, OutlookServicesClient, etc).

Using Office 365 SDKs
private static async Task<OutlookServicesClient> EnsureClient()
{
    return new OutlookServicesClient(new Uri("https://outlook.office365.com/ews/odata"), async () => {
        return await GetAccessTokenForResource("https://outlook.office365.com/");
    });
}
public static async Task<List<IContact>> GetContacts()
{
    var client = await EnsureClient();
    var contacts = await client.Me.Contacts.ExecuteAsync();
    return contacts.CurrentPage.ToList();
}

 

Using REST
public static async Task<byte[]> GetImage(string email)
{
    HttpClient client = new HttpClient();
    var token = await GetAccessTokenForResource("https://outlook.office365.com/");
    client.DefaultRequestHeaders.Add("Authorization", "Bearer " + token);
    client.DefaultRequestHeaders.Add("Accept", "application/json");
    using (HttpResponseMessage response = await client.GetAsync(new Uri(String.Format("https://outlook.office365.com/api/beta/Users('{0}')/userphotos('64x64')/$value", email))))
    {
        if (response.IsSuccessStatusCode)
        {
            var stream = await response.Content.ReadAsStreamAsync();
            var bytes = new byte[stream.Length];
            stream.Read(bytes, 0, (int)stream.Length);
            return bytes;
        }
        else
            return null;
    }
}

 

Conclusion

The unification achieved with the new Universal Windows Platform (UWP) is exactly what Windows developers have been waiting for. Office 365 is poised to be a dominant force with Windows 10. Together, some amazing scenarios can be achieved that developers have the power to deliver. I have published two complete Windows 10 UWP samples on GitHub that you can fork/clone today:

Contacts API Win10 UWP
https://github.com/OfficeDev/Contacts-API-Win10-UWP

MyFiles API Win10 UWP
https://github.com/OfficeDev/MyFiles-API-Win10_UWP


Connecting to Office 365 from an Office Add-in


Earlier in the year, I authored a post on Connecting to SharePoint from an Office add-in. In that post, I illustrated 5 approaches that were largely specific to SharePoint. However, the last pattern connected to SharePoint using the Office 365 APIs. SharePoint is one of many powerful services exposed through the Office 365 APIs. In this post, I'll expand on leveraging the Office 365 APIs from an Office add-in. I'll try to clear up some confusion on implementation and outline some patterns to deliver the best user experience possible with Office add-ins that connect to Office 365. Although I'm authoring this post specific to the Office 365 APIs, the same challenges exist for almost any OAuth scenario with Office add-ins (and the same patterns apply).

Mail CRM sample provided in post

 

The Office add-in Identity Crisis

Since 2013, users have become accustomed to signing into Office. Identity was introduced into Office for license management and roaming settings such as file storage in OneDrive and OneDrive for Business. However, this identity is not currently made available to Office add-ins. The one exception is in Outlook mail add-ins, which can get identity and access tokens specifically for calling into Exchange Online APIs. All other scenarios (at the time of this post) require manual authentication flows to establish identity and retrieve tokens for calling APIs.

User may sign into Office, but identity isn’t available to add-ins

 

Why Pop-ups are a Necessary Evil

Office add-ins can display almost any page that can be displayed in a frame and whose domain is registered in the add-in manifest (in the AppDomains section). Both of these constraints can be challenging when performing OAuth flows. Due to the popularity of clickjacking on the internet, it is common to prevent login pages from being displayed inside frames. The X-FRAME-Options response header makes it easy for providers to implement this safeguard on a widespread or domain/origin-specific basis. Pages that are not "frameable" will not load consistently in an Office add-in. For example, Office Online displays Office add-ins in IFRAME elements. Below is an example of an Office add-in displaying a page that cannot be displayed in a frame:

Office add-in displaying a page that is NOT "frameable"
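
As a point of reference, this safeguard is a single response header set on the provider's side. Here is a minimal ASP.NET sketch (a hypothetical Global.asax handler, not code from any sample in this post):

//hypothetical provider-side code: refuse to be framed by other origins
protected void Application_BeginRequest(object sender, EventArgs e)
{
    Response.AddHeader("X-Frame-Options", "SAMEORIGIN"); //or "DENY" to block all framing
}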

 

The other challenge facing Office add-ins that perform OAuth flows is in establishing trusted domains. If an add-in tries to load any domain not registered in the add-in manifest, Office will launch the page in a new browser window. In some cases, this can be avoided by registering the 3rd party domain(s) in the AppDomains section of the add-in manifest (ex: https://login.microsoftonline.com). However, this might be impossible with identity providers that support federated logins. Take Office 365 as an example. Most large organizations use a federated login to Office 365 (usually with Active Directory Federation Services (ADFS)). In these scenarios, the organization/subscriber owns the federated login and the domain that hosts it. It is impossible for an add-in developer to anticipate all domains customers might leverage. Furthermore, Office add-ins do not support wildcard entries for trusted domains. In short, popups are unavoidable.

Rather than trying to avoid popups, it is better to accept them as a necessary evil in Office add-ins that perform OAuth/logins. Redirect your attention to popup patterns that can deliver a better user experience (which I cover in the next section).

Good User Experience without Identity

To address the challenges with identity in Office add-ins, I'm going to concentrate on patterns for improving the user experience in the popup and with "single sign-on". For popups, we want to deliver an experience where the popup feels connected to the add-in (a feat that can be challenging in some browsers). For "single sign-on" we want to provide a connected experience without requiring the user to sign in every time they use the add-in. Technically, this isn't really "single sign-on" as much as token cache management (which is why I put "single sign-on" in quotes).

Mastering the Popup

Almost as soon as the internet introduced popups, they started being used maliciously by both hackers and advertisers. For this reason, popups have established a bad reputation and browsers have built-in mechanisms to control them.  These safeguards can make client-side communication between add-ins and popups problematic (don't get me started on IE Security Zones). Ultimately, we are using popups to acquire access tokens so that add-ins can make API calls. Instead of passing tokens back client-side (via window.opener or window.returnValue), consider a server-side approach that browsers cannot (easily) interfere with.

One server-side method for popup/add-in communication is by temporarily caching tokens on a server or in a database that both the popup and add-in can communicate with. With this approach, the add-in launches the popup with an identifier it can use to later retrieve the access token for making API calls. The popup performs the OAuth flow and then caches the token by the identifier passed from the add-in. This was the approach I outlined in the Connecting to SharePoint from an Office add-in blog post. It is solid, but relies upon cache/storage and requires the add-in to poll for tokens or the user to query for tokens once the OAuth flow is complete.
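
One way to picture this approach is a simple token "drop box" that both the add-in and the popup can reach. Here is a minimal sketch (a hypothetical Web API controller, not code from the sample; a real implementation would add expiry, authentication, and durable storage):

Sketch: server-side token cache the add-in polls
using System.Collections.Concurrent;
using System.Web.Http;

public class TokenCacheController : ApiController
{
    //tokens keyed by the identifier the add-in passed to the popup
    private static readonly ConcurrentDictionary<string, string> cache =
        new ConcurrentDictionary<string, string>();

    [HttpPost]
    public void Post(string id, [FromBody]string token)
    {
        cache[id] = token; //popup caches the token after completing OAuth
    }

    [HttpGet]
    public string Get(string id)
    {
        string token;
        //add-in polls; the token is removed on the first successful read
        return cache.TryRemove(id, out token) ? token : null;
    }
}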

[View:http://www.youtube.com/watch?v=bgWNQcmPfoo]

 

We can address both these limitations by delivering popup/add-in communication via web sockets. This method is similar to the previous approach. The add-in still passes an identifier to the popup window, but now "listens" for tokens using web sockets. The popup still handles the OAuth flow, but can now push the token directly to the add-in via the web socket the add-in is listening on (this "push" goes through a server and is thus considered server-side). The benefit of this method is that nothing needs to be persisted and the add-in can immediately proceed when it gets the access token (read: no polling or user actions required). Web sockets can be implemented numerous ways, but I've become a big fan of ASP.NET SignalR. Interestingly, SignalR already provides an identifier when a client establishes a connection to the server (which I can use as my identifier sent to the popup).

Sound complicated? It can be, so I’ll try to break it down. When the add-in launches, we need to get the identifier (to pass into the popup) and then start listening for tokens:

Get the Client Identifier and Start “listening” on Hub for Tokens
//initialize called when add-in loads to setup web sockets
stateSvc.initialize = function () {
    //get a handle to the oAuthHub on the server
    hub = $.connection.oAuthHub;

    //create a function that the hub can call to broadcast oauth completion messages
    hub.client.oAuthComplete = function (user) {
        //the server just sent the add-in a token
        stateSvc.idToken.user = user;
        $rootScope.$broadcast("oAuthComplete", "/lookup");
    };

    //start listening on the hub for tokens
    $.connection.hub.start().done(function () {
        hub.server.initialize();
        //get the client identifier the popup will use to talk back
        stateSvc.clientId = $.connection.hub.id;
    });
};

 

The client identifier is passed as part of the redirect_uri parameter in the OAuth flow of the popup:

Page loaded in the popup to perform the OAuth flow
https://login.microsoftonline.com/common/oauth2/authorize?
client_id=cb88b4df-db4b-4cbe-be95-b40f76dccb14
&resource=https://graph.microsoft.com/
&response_type=code
&redirect_uri=https://localhost:44321/OAuth/AuthCode/A5ED5F48-8014-4E6C-95D4-AA7972D95EC9/C7D6F7C7-4EBE-4F45-9CE2-EEA1D5C08372
//the first GUID is the User ID in DocumentDB
//the second GUID is the Client Identifier listening on the web socket for tokens…think of this as the "address" of the add-in

 

The OAuthController completes the OAuth flow and then uses the client identifier to push the token information to the add-in via the web socket:

OAuthController that handles the OAuth reply
[Route("OAuth/AuthCode/{userid}/{signalrRef}/")]
public async Task<ActionResult> AuthCode(string userid, string signalrRef)
{
    //Request should have a code from AAD and an id that represents the user in the data store
    if (Request["code"] == null)
        return RedirectToAction("Error", "Home", new { error = "Authorization code not passed from the authentication flow" });
    else if (String.IsNullOrEmpty(userid))
        return RedirectToAction("Error", "Home", new { error = "User reference code not passed from the authentication flow" });

    //get access token using the authorization code
    var token = await TokenHelper.GetAccessTokenWithCode(userid.ToLower(), signalrRef, Request["code"], SettingsHelper.O365UnifiedAPIResourceId);

    //get the user from the datastore in DocumentDB
    var idString = userid.ToLower();
    var user = DocumentDBRepository<UserModel>.GetItem("Users", i => i.id == idString);
    if (user == null)
        return RedirectToAction("Error", "Home", new { error = "User placeholder does not exist" });

    //update the user with the refresh token and other details we just acquired
    user.refresh_token = token.refresh_token;
    await DocumentDBRepository<UserModel>.UpdateItemAsync("Users", idString, user);

    //notify the client through the hub
    var hubContext = GlobalHost.ConnectionManager.GetHubContext<OAuthHub>();
    hubContext.Clients.Client(signalrRef).oAuthComplete(user);

    //return view successfully
    return View();
}

 

Here is a video that illustrates the web socket approach. Notice that the add-in continues on after the OAuth flow without the user having to do anything.

[View:http://www.youtube.com/watch?v=irn_pToBinw]

 

Cache Management

Ok, we have established a consistent and smooth method for getting tokens. However, you probably don't want to force the user through this flow every time they use the add-in. Fortunately, we can cache user tokens to provide long-term access to Office 365 data. An access token from Azure AD only has a one-hour lifetime. So instead, we will cache the refresh token, which has a sliding 14-day lifetime (maximum of 90 days without forcing a login). Caching techniques will depend on the type of app.
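
For reference, redeeming a cached refresh token is a single POST to the Azure AD token end-point. Here is a minimal sketch of the standard OAuth2 refresh_token grant (ADAL normally wraps this for you; the parameter values are app-specific):

Sketch: exchanging a cached refresh token for a new access token
static async Task<string> RedeemRefreshToken(string clientId, string clientSecret, string refreshToken, string resource)
{
    using (var client = new HttpClient())
    {
        var body = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            { "grant_type", "refresh_token" },
            { "refresh_token", refreshToken },
            { "client_id", clientId },
            { "client_secret", clientSecret },
            { "resource", resource }
        });
        var response = await client.PostAsync("https://login.microsoftonline.com/common/oauth2/token", body);
        return await response.Content.ReadAsStringAsync(); //JSON containing access_token and a new refresh_token
    }
}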

The Exchange/Outlook Team already has a published best practice for caching tokens in an Outlook mail add-in. It involves using the Identity Token that is available through JSOM (Office.context.mailbox.getUserIdentityTokenAsync) and creating a hashed combination of ExchangeID and AuthenticatedMetadataUrl. This hashed value is the lookup identifier the refresh token is stored by. The Outlook/Exchange Team has this documented on MSDN, including a full code sample. I followed this guidance in my solutions. For the sample referenced in this post, I used Azure’s DocumentDB (a NoSQL solution similar to Mongo) to cache refresh tokens by this hash value. Below, you can see a JSON document that reflects a cached user record. Take note of the values for hash and refresh_token:

DocumentDB record for user (with cached refresh token by hash)
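
For illustration, the lookup key can be computed along these lines (a minimal sketch; SHA-256 over the concatenated claims is an assumption here…see the MSDN sample for the exact recipe, including salting):

Sketch: hashed lookup key for the cached refresh token
static string ComputeUserHash(string exchangeId, string authenticatedMetadataUrl)
{
    using (var sha = System.Security.Cryptography.SHA256.Create())
    {
        //hash the ExchangeID and AuthenticatedMetadataUrl claims together
        byte[] bytes = System.Text.Encoding.UTF8.GetBytes(exchangeId + authenticatedMetadataUrl);
        return Convert.ToBase64String(sha.ComputeHash(bytes));
    }
}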

 

For document-centric add-ins with Excel, Word, and PowerPoint, there is no concept of an identity in JSOM. Thus, these types of add-ins can’t take the same token caching approach as an Outlook mail add-in. Instead, we must revert to traditional web caching techniques such as cookies, session state, or database storage. I would probably not recommend local cache of the actual refresh tokens. So if you want to use cookies, try storing some lookup value in the cookie that the add-in can use to retrieve the refresh token stored on a server. Consider also that cookie caching in an Office add-in could expose information in a shared workstation scenario. Ultimately, be careful with your approach here.

Conclusion

I have full confidence that these add-in identity challenges will be short lived. In the meantime, the patterns outlined in this post can help deliver a better user experience to users. To get you jumpstarted, you can download a Mail CRM sample that uses these patterns and many more. You can also download the Office 365 API sample from the Connecting to SharePoint from an Office add-in post back in March. Happy coding!

Mail CRM Sample outlined in blog post: https://github.com/OfficeDev/PnP-Store/tree/master/DXDemos.Office365

Connecting with SharePoint from add-in sample (from March 2015): http://1drv.ms/1HaiupJ 

Working with the converged Azure AD v2 app model


Microsoft recently announced the public preview of a new application model that offers a unified developer experience across Microsoft consumer and commercial services. This is so significant it is being called the “V2” application model. Why is it so significant? Now a single application definition and OAuth flow can be used for consumer services (ex: OneDrive, Outlook.com, etc) AND commercial services in Office 365 (ex: Exchange Online, SharePoint Online, OneDrive for Business). In this post, I’ll outline the major differences in the v2 app model and how to perform a basic OAuth flow using it.

[View:http://www.youtube.com/watch?v=ZhGemMWFEWI]

What’s Different

Registering applications and performing OAuth have become common practices when building applications that connect to Microsoft services. However, the new converged “V2” app model brings some significant changes to both of these tasks. I have listed the major differences below, but you should also read the announcement by the Azure Active Directory team.

  • Unified Applications – V2 Apps converge the disparate application definitions that exist today between Microsoft Accounts (MSA) that are used for consumer services and Azure AD (AAD) accounts that are used for Office 365. By offering one unified application, developers can register apps from a centralized portal (https://apps.dev.microsoft.com) that work with either MSA or AAD accounts.
  • One App, Multiple Platforms – V2 Apps support multiple platforms within a single application definition. In the past, multiple application definitions were required to deliver web and mobile experiences. In V2 apps, both web and mobile experiences can be delivered from the same application definition.
  • Permissions at Runtime – V2 apps don’t declare permissions during app registration. Instead, they request permission dynamically by providing a scope parameter in token requests.
  • Deferred Resources – V2 apps no longer pass a resource parameter to get resource-specific access tokens. Instead, the resource can be automatically determined by the service based on the scopes passed in. 
  • Refresh Tokens by Request – V2 apps do not automatically get refresh tokens when requesting tokens from the service. Instead, you must explicitly request a refresh token by using the offline_access permission scope in the token request.

Performing OAuth

There are a number of OAuth flows that the V2 model supports. I'm going to walk through the OAuth2 Authorization Code Flow, which is the most popular and used in most web applications. To demonstrate the flow, I'm going to take the raw browser/Fiddler approach popularized by Rob Howard and Chakkaradeep "Chaks" Chandran, blogged about HERE. The OAuth2 Authorization Code Flow can be simplified into these simple steps:

  1. Redirect the user to an authorize URL in Azure AD with some app details, including the URL Azure should reply back to with an authorization code once the user logs in and consents to the application.
  2. Post additional app details (including the authorization code from Step 1) to a token end-point in Azure AD to get an access token.
  3. Include the access token from Step 2 in the header when calling services secured by the V2 app model.

Sounds simple enough right? The Azure Active Directory Authentication Libraries (ADAL) make this flow simple on a number of platforms, but I find it very helpful to understand the flow ADAL manages. Let’s perform this flow using nothing but a browser and Fiddler (any web request editor will work in place of Fiddler).

Step 0 – Register the V2 Application

Before we can perform an OAuth flow, we need to register a new V2 application in the new registration portal.

  1. Open a browser and navigate to https://apps.dev.microsoft.com.
  2. Sign in with either a Microsoft Account (MSA) such as outlook.com/live.com/hotmail.com or an Azure AD account you use for Office 365.
  3. Once you are signed in, click the Add an app button in the upper right.
  4. Give the application a name and click Create application.
  5. Once the application is provisioned, copy the Application Id somewhere where it will be readily available for the next section.
  6. Next, generate a new application password by clicking the Generate New Password button in the Application Secrets section. When the password is displayed, copy it down for use in the next section. Warning: this is the only time the app registration portal will display the password.
  7. Next, locate the Platforms section and click Add Platform to launch the Add Platform dialog.
  8. Select Web for the application type. Notice that the V2 application model supports multiple platforms in the same application.
  9. Finally, update the Redirect URI of the new platform to https://localhost and save your changes by clicking the Save button at the bottom of the screen.
  10. The V2 application should be ready to use!

Step 1 – Get Authorization Code

The first step of the OAuth2 Authorization Code Flow is to redirect the user to an authorize URL in Azure AD with some app details, including the URL Azure should reply back with an authorization code once the user logs in and consents the application. The format of this authorize URL is listed below. Replace the placeholders with details from your app registration and paste the entire URI into your browser.

NOTE: The authorize URI uses the new v2.0 end-point versioning. It also uses the scope parameter to tell the authorize flow what permissions the application is requesting (aka – Runtime Permissions). Here we are requesting openid (sign-in), https://outlook.office.com/contacts.read (read access to contacts), and offline_access (required to get refresh tokens back for long-term access).

 

Authorize URI
https://login.microsoftonline.com/common/oauth2/v2.0/authorize
?client_id={paste your client id}
&scope=openid+https://outlook.office.com/contacts.read+offline_access
&redirect_uri={paste your reply url}
&response_type=code

 

Immediately after pasting the authorization URI into the browser, the user should be directed to a login screen. Here, they can provide either a consumer account (MSA) or an Azure AD account (if an MSA account is provided, the login screen changes)

Azure AD Sign-in vs. MSA Sign-in (screenshots)

 

Once the user signs in, they will be asked to grant consent for the permissions the application is requesting. This consent screen will only display the first time through this flow. The screen will look a little different based on the type of account provided.

Azure AD Grant Consent vs. MSA Grant Consent (screenshots)

 

After granting consent to the application, the browser will be redirected to the location specified in the redirect_uri parameter. However, the authorization flow will include a code URL parameter as part of this redirect. This is your authorization code and completes this section!

Step 2 – Get Access Token

After acquiring the authorization code with the help of the user (logging in and granting consent) you can get an access token silently. To do this, POST additional app details (including the authorization code, application password, and permission scopes) to a token end-point in Azure AD. To perform the POST, you need a web request editor such as Fiddler or Postman. The end-point, headers, and body are listed below, but make sure you replace the placeholders with details from your app registration.

NOTE: The token end-point also uses the new v2.0 end-point versioning. The POST body also uses the same scope parameters you used to get the authorization code.

 

Get Access Token with Authorization Code
Method: POST
———————————————————-
End-Point: https://login.microsoftonline.com/common/oauth2/v2.0/token
———————————————————-
Headers:
Content-Type: application/x-www-form-urlencoded
———————————————————-
Body:
grant_type=authorization_code
&redirect_uri={paste your reply url}
&client_id={paste your client id}
&client_secret={paste your client secret}
&code={paste authorization code from previous step}
&scope=openid+https://outlook.office.com/contacts.read+offline_access

 

Here I’m using Fiddler’s Composer to perform the POST to get an access token.

The response to this POST should include both an access token and refresh token (because we included the offline_access scope).

Step 3 – Call Service with Access Token 

Congratulations…you have an access token, which is your key to calling services secured by the V2 application model. For the initial preview, only Outlook.com/Exchange Online services support this new flow. However, Microsoft is working hard to deliver widespread support for this flow, so other popular services will become available very soon. For Outlook.com/Exchange Online, we can hit one API end-point and the service will determine which mail platform to use based on the token provided. Use an MSA account and the API will automatically go against Outlook.com. Use an AAD account and the API will automatically hit Exchange Online in Office 365. It's magic!

You can call a service in Outlook.com/Exchange Online using the web request editor. Use the REST end-point and headers below to GET contacts for the user. The header has a placeholder that should be replaced with the access_token acquired in the previous section. 

Calling Outlook.com/Exchange Online REST API
Method: GET
———————————————————- 
End-Point: https://outlook.office.com/api/v1.0/me/contacts
———————————————————-
Headers:
Accept:application/json
Content-Type:application/json
Authorization: Bearer {access_token from previous step}

 

[Screenshots: the GET request in Fiddler Composer and the JSON response]

 

NOTE: There are millions of MSA accounts around the world, and not all of them have been migrated to support this flow. Microsoft is working hard to migrate all MSA accounts, but it won’t happen overnight. If your MSA account hasn’t been migrated, you will get a 404 response when querying contacts, with the following error:

{"error":{"code":"MailboxNotEnabledForRESTAPI","message":"REST API is not yet supported for this mailbox."}}

 

Conclusion

App Unification…OAuth Unification…End-Point Unification…goodness all around! I’ll be posting an actual code sample in the next few days, so check back soon. Below is a raw text file with the calls used in this post:

https://raw.githubusercontent.com/richdizz/Azure-AD-v2-Authorization-Code-Flow/master/OAuthRaw.txt

Building Office 365 Applications with Node.js and the Azure AD v2 app model


Earlier today I authored a post on the new Azure AD v2 app model that converges the developer experience across consumer and commercial applications. The post outlines the key differences in the v2 app model and illustrates how to perform a manual OAuth flow with it. Most developers won’t have to perform this manual flow, because the Azure AD team is building authentication libraries (ADAL) to handle OAuth on most popular platforms. ADAL is a great accelerator for application developers working with Microsoft connected services. However, the lack of an ADAL library doesn’t prevent a platform from working in this new app model. In this post, I’ll share a Node.js application that doesn’t use any special libraries to perform OAuth in the v2 app model.

Video: http://www.youtube.com/watch?v=5r5JmdoP3J4

NOTE: Node.js has an ADAL library, but it wasn’t updated to support the v2 app model flows at the time of this post. The Azure AD team is working hard on an updated Node.js library. The Outlook/Exchange team has published a sample that uses the simple-oauth2 library for Node.js.

authHelper.js

The solution uses an authHelper.js file, containing application registration details (client id, client secret, reply URL, permission scopes, etc) and utility functions for interacting with Azure AD. The three primary utility functions are detailed below:

  • getAuthUrl returns the authorization end-point in Azure AD with app details concatenated as URL parameters. The application can redirect to this end-point to initiate the first step of OAuth.
  • getTokenFromCode returns an access token using the app registration details and a provided authorization code (which is returned to the application after the user signs in and authorizes the app).
  • getTokenFromRefreshToken returns an access token using the app registration details and a provided refresh token (which might come from cache).
authHelper.js

var https = require('https');

var appDetails = {
 authority: 'https://login.microsoftonline.com/common',
 client_id: '1d9e332b-6c7d-4554-8b51-d398fef5f8a7',
 client_secret: 'Y0tgHpYAy3wQ0eF9NPkMPOf',
 redirect_url: 'http://localhost:5858/login',
 scopes: 'openid+https://outlook.office.com/contacts.read+offline_access'
};

//builds a redirect url based on app details
function getAuthUrl(res) {
 return appDetails.authority + '/oauth2/v2.0/authorize' +
  '?client_id=' + appDetails.client_id +
  '&scope=' + appDetails.scopes +
  '&redirect_uri=' + appDetails.redirect_url +
  '&response_type=code';
};

//gets a token given an authorization code
function getTokenFromCode(code, callback) {
 var payload = 'grant_type=authorization_code' +
  '&redirect_uri=' + appDetails.redirect_url +
  '&client_id=' + appDetails.client_id +
  '&client_secret=' + appDetails.client_secret +
  '&code=' + code +
  '&scope=' + appDetails.scopes;

 postJson('login.microsoftonline.com',
  '/common/oauth2/v2.0/token',
  payload,
  function(token) {
   callback(token);
  });
};

//gets a new token given a refresh token
function getTokenFromRefreshToken(token, callback) {
 var payload = 'grant_type=refresh_token' +
  '&redirect_uri=' + appDetails.redirect_url +
  '&client_id=' + appDetails.client_id +
  '&client_secret=' + appDetails.client_secret +
  '&refresh_token=' + token +
  '&scope=' + appDetails.scopes;

 postJson('login.microsoftonline.com',
  '/common/oauth2/v2.0/token',
  payload,
  function(token) {
   callback(token);
  });
};

//performs a generic https POST and returns parsed JSON to the callback
function postJson(host, path, payload, callback) {
  var options = {
    host: host,
    path: path,
    method: 'POST',
    headers: {
      'Content-Type': 'application/x-www-form-urlencoded',
      'Content-Length': Buffer.byteLength(payload, 'utf8')
    }
  };

  var reqPost = https.request(options, function(res) {
    var body = '';
    res.on('data', function(d) {
      body += d;
    });
    res.on('end', function() {
      callback(JSON.parse(body));
    });
    res.on('error', function(e) {
      callback(null);
    });
  });

  //write the POST body
  reqPost.write(payload);
  reqPost.end();
};

exports.getAuthUrl = getAuthUrl;
exports.getTokenFromCode = getTokenFromCode;
exports.getTokenFromRefreshToken = getTokenFromRefreshToken;
exports.TOKEN_CACHE_KEY = 'TOKEN_CACHE_KEY';

 

Application Routes

The Node.js solution was built using Express and Handlebars. Two routes handle the entire flow:

Index Route

  • If the user has a cached refresh token, use it to get a new token
    • If the new token is valid, get and display data
    • If the new token is invalid, send the user to login
  • If the user doesn’t have a cached refresh token, send the user to login

Login Route

  • If the URL contains an authorization code, use it to get tokens
    • If the token is valid, cache the refresh token and send the user back to index
    • If the token is invalid, an error must have occurred
  • If the URL doesn't contain an authorization code, get the redirect URL for authorization and send the user there

Here is the JavaScript implementation of this.

Route Controller Logic

var express = require('express');
var router = express.Router();
var authHelper = require('../authHelper.js');
var https = require('https');

/* GET home page. */
router.get('/', function(req, res, next) {
  if (req.cookies.TOKEN_CACHE_KEY === undefined)
    res.redirect('/login');
  else {
    //get data
    authHelper.getTokenFromRefreshToken(req.cookies.TOKEN_CACHE_KEY, function(token) {
      if (token !== null) {
        getJson('outlook.office.com', '/api/v1.0/me/contacts', token.access_token, function(contacts) {
          if (contacts.error && contacts.error.code === 'MailboxNotEnabledForRESTAPI')
            res.render('index', { title: 'My Contacts', contacts: [], restDisabled: true });
          else
            res.render('index', { title: 'My Contacts', contacts: contacts['value'], restDisabled: false });
        });
      }
      else {
        //TODO: handle error
      }
    });
  }
});

router.get('/login', function(req, res, next) {
  //look for code from AAD reply
  if (req.query.code !== undefined) {
    //use the code to get a token
    authHelper.getTokenFromCode(req.query.code, function(token) {
      //check for null token
      if (token !== null) {
        res.cookie(authHelper.TOKEN_CACHE_KEY, token.refresh_token);
        res.redirect('/');
      }
      else {
        //TODO: handle error
      }
    });
  }
  else {
    res.render('login', { title: 'Login', authRedirect: authHelper.getAuthUrl });
  }
});

//performs a GET based on parameters and returns a JSON object
function getJson(host, path, token, callback) {
  var options = {
    host: host,
    path: path,
    method: 'GET',
    headers: {
      'Content-Type': 'application/json',
      'Accept': 'application/json',
      'Authorization': 'Bearer ' + token
    }
  };

  https.get(options, function(res) {
    var body = '';
    res.on('data', function(d) {
      body += d;
    });
    res.on('end', function() {
      callback(JSON.parse(body));
    });
    res.on('error', function(e) {
      callback(null);
    });
  });
};

module.exports = router;

 

Conclusion

Authentication libraries make great solution accelerators, but they certainly aren’t necessary to leverage the Azure AD v2 app model or consume Microsoft connected services. You can get the full Node.js solution on GitHub using the links below:

Sample used in this post
https://github.com/OfficeDev/Contacts-API-NodeJS-AppModelV2

Outlook/Exchange Team Sample
https://dev.outlook.com/RestGettingStarted/Tutorial/node

Building File Handler Add-ins for Office 365


Microsoft recently announced the general availability of file handler add-ins for Office 365. This add-in type enables Office 365 customers to implement custom icons, previews and editors for specific file extensions in SharePoint Online, OneDrive for Business, and Outlook Web App (OWA). In this post, I’ll outline a solution for creating a file handler add-in for .png images. The add-in will allow .png images to be opened in an in-browser editor (think browser-based paint) for drawing/annotation/whiteboarding.

Video: http://www.youtube.com/watch?v=4lCVWqj2EUE

Azure AD and Add-ins

Until now, Azure AD applications were considered stand-alone in the context of Office 365. File handler add-ins are similar to a stand-alone Azure AD web application, but are dependent on contextual information passed when invoked from Office 365 (more on that later).

The file handler add-in can be provisioned in Azure AD using the Connected Service Wizard in Visual Studio or manually through the Azure Management Portal or the new Getting Started experience. However, file handler add-ins require a new permission that is currently only surfaced through the Azure Management Portal. This new permission is Read and write user selected files (preview) using the Office 365 Unified API. This permission allows a 3rd party file handler add-in to get access only to the files the user selects (vs. ALL of the user’s files).

Once the application is registered, some additional configuration needs to be performed that the Azure Management Portal doesn’t (yet) surface in the user interface. Azure needs to know the extension, icon URL, open URL, and preview URL for the file handler. This can be accomplished by submitting a JSON manifest update using Azure AD Graph API queries (outlined HERE). However, the Office Extensibility Team created an Add-in Manager website that provides a nice interface for making these updates. It allows you to select an Azure AD application and register file handler add-ins against it. Below, you can see the file handler registration for my .png file handler.
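The registration screenshot isn’t reproduced here, so the snippet below is a rough sketch of the kind of addIns entry such an update creates in the application manifest. The property keys are illustrative stand-ins for the four values described above (extension, icon URL, open URL, and preview URL), not the exact manifest schema:

addIns manifest entry (illustrative sketch)
"addIns": [
  {
    "id": "{guid}",
    "type": "FileHandler",
    "properties": [
      { "key": "extension", "value": ".png" },
      { "key": "iconUrl", "value": "https://localhost:44300/images/pngicon.png" },
      { "key": "openUrl", "value": "https://localhost:44300/" },
      { "key": "previewUrl", "value": "https://localhost:44300/preview" }
    ]
  }
]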

When the file handler add-in is invoked, the host application (SharePoint/OneDrive/OWA) posts some form details to the application. These include the following parameters (an illustrative example follows the list):

  • Client – The Office 365 client from which the file is opened or previewed; for example, “SharePoint”.
  • CultureName – The culture name of the current thread, used for localization.
  • FileGet – The full URL of the REST endpoint your app calls to retrieve the file from Office 365. Your app retrieves the file using a GET request.
  • FilePut – The full URL of the REST endpoint your app calls to save the file back to Office 365. You must call this with the HTTP POST method.
  • ResourceID – The URL of the Office 365 tenant used to get the access token from Azure AD.
  • DocumentID – The document ID for a specific document; allows your application to open more than one document at the same time.
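
As a purely illustrative example (every value below is made up or a placeholder), the posted form body decodes to something like:

Example activation parameters (illustrative values)
Client=SharePoint
CultureName=en-US
FileGet=https://tenant.sharepoint.com/_vti_bin/{file service path}
FilePut=https://tenant.sharepoint.com/_vti_bin/{file service path}
ResourceID=https://tenant.sharepoint.com
DocumentID={document id}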

If the application (which is secured by Azure AD) isn’t authenticated, it will need to perform an OAuth flow with Azure AD. This OAuth flow (which includes a redirect) will cause the application to lose the contextual parameters posted from Office 365. To preserve these parameters, you should employ a caching technique so they can be used after the OAuth flow completes. In the code below, you can see that the form data is cached in a cookie using the RedirectToIdentityProvider handler provided with OpenID Connect.

ConfigureAuth in Startup.Auth.cs

public void ConfigureAuth(IAppBuilder app)
{
    ApplicationDbContext db = new ApplicationDbContext();

    app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType);

    app.UseCookieAuthentication(new CookieAuthenticationOptions());

    app.UseOpenIdConnectAuthentication(
        new OpenIdConnectAuthenticationOptions
        {
            ClientId = clientId,
            Authority = Authority,
            PostLogoutRedirectUri = postLogoutRedirectUri,
            TokenValidationParameters = new System.IdentityModel.Tokens.TokenValidationParameters
            {
                // instead of using the default validation (validating against a single issuer value, as we do in line of business apps), 
                // we inject our own multitenant validation logic
                ValidateIssuer = false,
            },
            Notifications = new OpenIdConnectAuthenticationNotifications()
            {
                // If there is a code in the OpenID Connect response, redeem it for an access token and refresh token, and store those away.
                AuthorizationCodeReceived = (context) =>
                {
                    var code = context.Code;
                    ClientCredential credential = new ClientCredential(clientId, appKey);
                    string signedInUserID = context.AuthenticationTicket.Identity.FindFirst(ClaimTypes.NameIdentifier).Value;
                    AuthenticationContext authContext = new AuthenticationContext(Authority, new ADALTokenCache(signedInUserID));
                    AuthenticationResult result = authContext.AcquireTokenByAuthorizationCode(
                    code, new Uri(HttpContext.Current.Request.Url.GetLeftPart(UriPartial.Path)), credential, graphResourceId);

                    //cache the token in session state
                    HttpContext.Current.Session[SettingsHelper.UserTokenCacheKey] = result;

                    return Task.FromResult(0);
                },
                RedirectToIdentityProvider = (context) =>
                {
                    FormDataCookie cookie = new FormDataCookie(SettingsHelper.SavedFormDataName);
                    cookie.SaveRequestFormToCookie();
                    return Task.FromResult(0);
                }
            }
        });
}

 

Opening and Saving Files

One of the form parameters Office 365 posts to the application is the GET URI for the file. The code below shows the controller used to open the file with this URI. Notice that it first loads the form data using the ActivationParameters object, then gets an access token, and finally retrieves the file. The controller also adds a number of ViewData values that will be used later for saving changes to the file (including the refresh token, resource, and file put URI).

View Controller for Opening Files

public async Task<ActionResult> Index()
{
    //get activation parameters off the request
    ActivationParameters parameters = ActivationParameters.LoadActivationParameters(System.Web.HttpContext.Current);

    //try to get access token using refresh token
    var token = await GetAccessToken(parameters.ResourceId);

    //get the image
    HttpClient client = new HttpClient();
    client.DefaultRequestHeaders.Add("Authorization", "Bearer " + token.AccessToken);
    var imgBytes = await client.GetByteArrayAsync(parameters.FileGet);

    //return the image as a base64 string
    ViewData["img"] = "data:image/png;base64, " + Convert.ToBase64String(imgBytes);
    ViewData["resource"] = parameters.ResourceId;
    ViewData["refresh_token"] = token.RefreshToken;
    ViewData["file_put"] = parameters.FilePut;
    ViewData["return_url"] = parameters.FilePut.Substring(0, parameters.FilePut.IndexOf("_vti_bin"));
    return View();
}

 

Saving changes to the file is implemented in a Web API controller. The application POSTs the save details (including the updated image, resource, refresh token, and file put URI) to this end-point. The end-point gets an access token (using the refresh token) and POSTs an update to the existing image.

WebAPI Controller to Save Files

[Route("api/Save/")]
[HttpPost]
public async Task<HttpResponseMessage> Post([FromBody]SaveModel value)
{
    //get an access token using the refresh token posted in the request
    var token = await GetAccessToken(value.resource, value.token);
    HttpClient client = new HttpClient();
    client.DefaultRequestHeaders.Add("Authorization", "Bearer " + token.AccessToken);

    //convert base64 image string into byte[] and then stream
    byte[] bytes = Convert.FromBase64String(value.image);
    using (Stream stream = new MemoryStream(bytes))
    {
        //prepare the content body
        var fileContent = new StreamContent(stream);
        fileContent.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
        var response = await client.PostAsync(value.fileput, fileContent);
    }

    return Request.CreateResponse<bool>(HttpStatusCode.OK, true);
}

 

Client-side Script for Calling the Save Web API

var canvas = document.getElementById("canvas");
var data = JSON.stringify({
    token: $("#refresh_token").val(),
    image: canvas.toDataURL("image/png").substring(22),
    resource: $("#resource").val(),
    fileput: $("#file_put").val()
});
$.ajax({
    type: "POST",
    contentType: "application/json",
    url: "../api/Save",
    data: data,
    dataType: "json",
    success: function (d) {
        toggleSpinner(false);
        window.location = $("#return_url").val();
    },
    error: function (e) {
        alert("Save failed");
    }
});

 

The completed solution uses a highly modified version of sketch.js to provide client-side drawing on a canvas element (modified to support images and scaling).

Conclusion

In my opinion, these new file handler add-ins are incredibly powerful. Imagine the scenarios for proprietary file extensions or extensions that haven’t traditionally been considered first-class citizens in Office 365 (ex: .cad, .pdf, etc.). You can download the .png file handler add-in solution from GitHub: https://github.com/OfficeDev/Image-FileHandler. You should also check out Dorrene Brown’s (@dorreneb) Ignite talk on file handler add-ins and Sonya Koptyev’s (@SonyaKoptyev) Office Dev Show on the same subject.

Angular 2.0 and the Microsoft Graph


Yesterday, the AngularJS Team announced the official beta release of Angular 2. The beta milestone is a significant achievement that should motivate developers to give Angular 2 serious consideration for new web/mobile projects. Besides getting ready for primetime, Angular 2 offers some significant improvements over Angular 1 (just read here). In this post, I’ll describe the steps of building an Angular 2 application that authenticates to Azure Active Directory (using an implicit OAuth2 flow) and calls into the Microsoft Graph API. In addition to the completed solution on GitHub, I’ve also provided a step-by-step video of building the Angular 2/Office 365 app from scratch:

Video: http://www.youtube.com/watch?v=QoTKK2_-dC0

Angular 2 and TypeScript

The AngularJS Team worked closely with Microsoft to take advantage of TypeScript in Angular 2. If you are new to TypeScript, it is a strongly-typed superset of JavaScript that compiles down to plain JavaScript. Because it is class-based and type-strict, it is favored by many developers for client-side development (especially those with object-oriented backgrounds). TypeScript introduces a few interesting challenges to a web project.

TypeScript must be compiled to JavaScript before it can run in a browser (some browsers can compile TypeScript, but this is slow). Luckily, the TypeScript compiler can be run in a watch mode that automatically re-compiles whenever a TypeScript file is saved. Notice the start script we have configured below in the package.json (used by the Node Package Manager). This script will concurrently start the TypeScript compiler in watch mode (tsc -w) and start the live-server web host (which also listens for file changes to provide automatic refreshes).

package.json with start script
{
	"name": "Angular2Files",
	"version": "1.0.0",
	"scripts": {
		"start": "concurrent \"tsc -w\" \"live-server\""
	},
	"dependencies": {
		"jquery": "*",
		"bootstrap": "*",
		"angular2": "2.0.0-beta.0",
		"systemjs": "0.19.6",
		"es6-promise": "^3.0.2",
		"es6-shim": "0.33.0",
		"reflect-metadata": "0.1.2",
		"rxjs": "5.0.0-beta.0",
		"zone.js": "0.5.10"
	},
	"devDependencies": {
		"concurrently": "^1.0.0",
		"live-server": "^0.9.0",
		"typescript": "1.7.3"
	}
}
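
Assuming Node.js and npm are installed, restoring these packages and launching the compiler/host pair defined by the start script is just two commands:

npm install
npm start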

 

The class-based implementation of TypeScript promotes the separation of different classes into different script files, which can make script references messy in HTML. To overcome this, the project (and the samples on angular.io) uses SystemJS to reference all the generated .js files as a package. Below is the System.config that tells SystemJS to reference all the .js files found under the "src" folder.

System.config to dynamically load .js scripts
<script type="text/javascript">
	System.config({
		packages: {
			"src": {
				format: "register",
				defaultExtension: "js"
			}
		}
	});
	System.import("src/app/app");
</script>

 

OAuth2 with Azure AD

The Azure AD Team created an Angular module for integrating the Azure AD Authentication Library (ADAL) into Angular 1 projects. For this project, I decided to leverage a manual/raw OAuth2 flow instead of using a library (which I think is valuable for all developers to understand). However, I’m starting to investigate the ngUpgrade and ngForward goodness the AngularJS Team is offering for mixing Angular 1 and 2 in one project…perhaps my next post.

For an implicit OAuth flow with Azure AD, we will first redirect the user through a sign-in and consent flow to get an id_token. Once we have an id_token, we know the user is signed in, and we should be able to get an access_token using a different redirect. These two redirects are depicted in the AuthHelper.ts file as the login and getAccessToken functions. Note that both the id_token and access_token will be passed back from Azure AD as URL parameters; the constructor handles this parameter check.

authHelper for managing Azure AD authentication
import { Injectable } from "angular2/core";
import { SvcConsts } from "../svcConsts/svcConsts";

@Injectable()
export class AuthHelper {
	//function to parse the url query string
	private parseQueryString = function(url) {
		var params = {}, queryString = url.substring(1),
		regex = /([^&=]+)=([^&]*)/g, m;
		while (m = regex.exec(queryString)) {
			params[decodeURIComponent(m[1])] = decodeURIComponent(m[2]);
		}
		return params;
	}
	private params = this.parseQueryString(location.hash);
	public access_token:string = null;

	constructor() {
		//check for id_token or access_token in url
		if (this.params["id_token"] != null)
			this.getAccessToken();
		else if (this.params["access_token"] != null)
			this.access_token = this.params["access_token"];
	}

	login() {
		//redirect to get id_token
		window.location.href = "https://login.microsoftonline.com/" + SvcConsts.TENTANT_ID +
			"/oauth2/authorize?response_type=id_token&client_id=" + SvcConsts.CLIENT_ID +
			"&redirect_uri=" + encodeURIComponent(window.location.href) +
			"&state=SomeState&nonce=SomeNonce";
	}

	private getAccessToken() {
		//redirect to get access_token
		window.location.href = "https://login.microsoftonline.com/" + SvcConsts.TENTANT_ID +
			"/oauth2/authorize?response_type=token&client_id=" + SvcConsts.CLIENT_ID +
			"&resource=" + SvcConsts.GRAPH_RESOURCE +
			"&redirect_uri=" + encodeURIComponent(window.location.href) +
			"&prompt=none&state=SomeState&nonce=SomeNonce";
	}
}

 

Angular 2 Routes

Many single page applications in Angular 1 make use of Angular routing (or the Angular UI Router) to dynamically load partial views without reloading the entire page. Implementing Angular 1 routing required an additional script reference and a dependency on the routing module (ex: ngRoute). Angular 2 routing has many similarities to its predecessor…it requires an additional script reference, is added as a dependency on the root module, contains route config, and offers objects/functions to navigate between views. However, these look much different when implemented in TypeScript. Below is the routing implementation for my project, where I’ve highlighted routing-specific code.

Note: the ADAL module for Angular 1 provided a route extension for determining if a view required authentication or not (via requireADLogin flag). Given my simple project contains only two views (login and files), I simply perform a check in the constructor of the App to navigate between the two based on the existence of an access token.

Route configuration in Angular 2
import { Component, provide } from "angular2/core";
import { bootstrap } from "angular2/platform/browser";
import { Router, RouteConfig, ROUTER_DIRECTIVES, ROUTER_PROVIDERS, LocationStrategy, HashLocationStrategy } from "angular2/router";
import { HTTP_PROVIDERS } from "angular2/http";

import { Login } from "../login/login";
import { Files } from "../files/files";
import { AuthHelper } from "../authHelper/authHelper";

@Component({
	selector: "files-app",
	template: "<router-outlet></router-outlet>",
	directives: [ROUTER_DIRECTIVES],
	providers: [HTTP_PROVIDERS]
})

// Configure the routes for the app
@RouteConfig([ { name: "Login", component: Login, path: "/login" }, { name: "Files", component: Files, path: "/files" } ])

export class App {
	constructor(router:Router, auth:AuthHelper) {
		// Route the user to a view based on presence of access token
		if (auth.access_token !== null) {
			// access token exists...display the users files
			router.navigate(["/Files"]);
		}
		else {
			// access token doesn't exist, so the user needs to login
			router.navigate(["/Login"]);
		}
	}
}

bootstrap(App, [AuthHelper, ROUTER_PROVIDERS, provide(LocationStrategy, { useClass: HashLocationStrategy })]);

 

Calling the Microsoft Graph

In Angular 1, the $http object was commonly used for performing REST calls into the Microsoft Graph. Angular 2 offers an Http object that performs the same operations. This requires an additional script reference and import as seen below. Also notice the addition of the Authorization Bearer token included in the header of the Microsoft Graph request.

Calling Microsoft Graph
import { Component, View } from "angular2/core";
import { Http, Headers } from "angular2/http";

import { AuthHelper } from "../authHelper/authHelper"

@Component({
	selector: "files"
})

@View({
	templateUrl: "src/files/view-files.html"
})

export class Files {
	private files = [];
	constructor(http:Http, authHelper:AuthHelper) {
		// Perform REST call into Microsoft Graph for files on OneDrive for Business
		http.get("https://graph.microsoft.com/v1.0/me/drive/root/children", {
			headers: new Headers({ "Authorization": "Bearer " + authHelper.access_token })
		})
		.subscribe(res => {
			// Check the response status before trying to display files
			if (res.status === 200)
				this.files = res.json().value;
			else
				alert("An error occurred calling the Microsoft Graph: " + res.status);
		});
	}
}

 

Conclusions

I’m really excited to see Angular 2 reach beta and anxious to see the creative ways the Microsoft community leverages it in their solutions. You can download the Angular 2 Files project from GitHub: https://github.com/OfficeDev/O365-Angular2-Microsoft-Graph-MyFiles

Using OneDrive and Excel APIs in the Microsoft Graph for App Storage


The Microsoft Graph is constantly evolving with new and powerful endpoints. If you want a glimpse into current engineering investments and the future of the Graph, take a look at the latest developments on the beta branch. One of the beta endpoints I’m particularly excited about is the new Excel APIs, which allow you to perform advanced manipulations of Excel remotely via REST calls into the Microsoft Graph. I recently recorded an Office Dev Show on Channel 9 discussing these APIs. I found it incredibly easy to manipulate worksheet data using the Excel APIs…so much so that I thought I would try to use Excel and OneDrive as the data layer for a mobile application. In this post, I’ll illustrate how to perform CRUD operations on worksheet data using the Excel APIs in the Microsoft Graph. I’ll also discuss a few patterns for working with files in OneDrive for Business and provisioning application assets at run-time.

NOTE: The sample used in this post is built with Ionic2/Angular2/TypeScript, but the patterns and API end-points apply to any language platform.

Video showcase of solution:

Ensuring App Resources

If OneDrive for Business and Excel are to serve as the data layer for my application, I need to ensure the appropriate files and folders are configured each time a user launches the application. OneDrive already provides a special folder for each custom application to store app-specific resources. These special app folders get provisioned on-demand with the app’s name under the "Apps" folder of the drive (ex: Apps/MyExpenses). You aren’t limited to working in this folder, but it keeps things clean for app-specific files and is consistent with competitors like Dropbox. You can reference your application’s special folder via the endpoint /drive/special/approot. My expense application uses this special folder to store receipt images captured by the application and an Expenses.xlsx file that is used to store all application data (more on this later). Below is the TypeScript I use to check for the existence of Expenses.xlsx.

ensureConfig ensures the Expenses.xlsx is provisioned:

    //ensures the "Expenses.xlsx" file exists in the approot
    ensureConfig() {
        let helper = this;
        return new Promise((resolve, reject) => {
            helper.authHelper.getTokenForResource(helper.authHelper._graphResource).then(function(token: Microsoft.ADAL.AuthenticationResult) {
               helper.http.get('https://graph.microsoft.com/v1.0/me/drive/special/approot:/Expenses.xlsx', {
                   headers: new Headers({ 'Authorization': 'Bearer ' + token.accessToken })
               })
               .subscribe(res => {
                   // Check the response status
                   if (res.status === 200) {
                        helper.workbookItemId = res.json().id;
                        window.localStorage.setItem('CACHE_KEY_WORKBOOK', helper.workbookItemId);
                        resolve(true);
                   }
                   else {
                       //create the files
                       helper.createWorkbook().then(function(datasourceId: string) {
                           helper.workbookItemId = datasourceId;
                           window.localStorage.setItem('CACHE_KEY_WORKBOOK', helper.workbookItemId);
                           resolve(true);
                       }, function(err) {
                           reject(err);
                       });
                  }
               });
            }, function(err) {
                reject(err); //error getting token for MS Graph
            });
        });
    }

Provisioning the Workbook

If the Expenses.xlsx file does not exist, the mobile application will provision it on-demand (creating its own datasource…pretty cool). For this, I decided to store the Excel workbook template IN the Cordova mobile application. If it is determined that the Expenses.xlsx file doesn’t exist, the application will read this template and provision it in OneDrive for Business. The function below illustrates this provisioning. One important note…the Angular2 documentation for http.put indicates that the body can be any object. However, my testing determined it only supports a string body right now, which is not appropriate for the binary content of an upload. For this reason, I’m using an XMLHttpRequest instead. Angular2 is still in beta, so hopefully this will be fixed before the final release.

createWorkbook provisions the Expenses.xlsx file when it does not exist in OneDrive:

    //creates the "Expenses.xlsx" workbook in the approot folder
    createWorkbook() {
        //adds a the workbook to OneDrive
        let helper = this;
        return new Promise((resolve, reject) => {
            //get token for resource
            helper.authHelper.getTokenForResource(helper.authHelper._graphResource).then(function(token: Microsoft.ADAL.AuthenticationResult) {
                //reference the Excel document template at the root application www directory
                window.resolveLocalFileSystemURL(cordova.file.applicationDirectory + 'www/Expenses.xlsx', function (fileEntry) {
                    fileEntry.file(function (file) {
                        //open the file with a FileReader
                        var reader = new FileReader();
                        reader.onloadend = function(evt: ProgressEvent) {
                            //read base64 file and convert to binary
                            let base64 = evt.target.result;
                            base64 = base64.substring(base64.indexOf(',') + 1);

                            //perform the PUT
                            helper.uploadFile(base64, 'Expenses.xlsx').then(function(id: string) {
                                resolve(id);
                            }, function(err) {
                                reject(err);
                            });
                        };

                        //catch read errors
                        reader.onerror = function(err) {
                            reject('Error loading file');
                        };

                        //read the file as a data URL (base64)
                        reader.readAsDataURL(file);
                    },
                    function(err) {
                        reject('Error opening file');
                    });
                }, function(err) {
                    reject('Error resolving file on file system');
                });
            }, function(err) {
                reject(err); //error getting token for MS Graph
            });
        });
    }

 

CRUD Operations with Excel

CRUD operations with the Excel APIs are relatively easy if you understand the data format Excel expects (a multi-dimensional array of values). Retrieving data is accomplished by performing a GET on the rows of a specific table (ex: /drive/items/workbook_id/workbook/worksheets('worksheet_id')/tables('table_id')/rows)

getRows function retrieves rows from the Excel workbook:

    //gets rows from the Expenses.xlsx workbook
    getRows() {
        let helper = this;
        return new Promise((resolve, reject) => {
            helper.authHelper.getTokenForResource(helper.authHelper._graphResource).then(function(token: Microsoft.ADAL.AuthenticationResult) {
               helper.http.get('https://graph.microsoft.com/beta/me/drive/items/' + helper.workbookItemId + '/workbook/worksheets(\'Sheet1\')/tables(\'Table1\')/rows', {
                   headers: new Headers({ 'Authorization': 'Bearer ' + token.accessToken })
               })
               .subscribe(res => {
                  // Check the response status before trying to resolve
                  if (res.status === 200)
                     resolve(res.json().value);
                  else
                     reject('Get rows failed');
               });
            }, function(err) {
                reject(err); //error getting token for MS Graph
            });
        });
    }

Adding data to the worksheet uses the exact same endpoint as above, but with a POST. Also, the row data to add must be included in the body of the POST and formatted as a multi-dimensional array (ex: { "values": [["col1Value", "col2Value", "col3Value"]] }).

addRow function adds a row to the Excel workbook:

    //adds a row to the Excel datasource
    addRow(rowData: any) {
        let helper = this;
        return new Promise((resolve, reject) => {
            helper.authHelper.getTokenForResource(helper.authHelper._graphResource).then(function(token: Microsoft.ADAL.AuthenticationResult) {
               helper.http.post('https://graph.microsoft.com/beta/me/drive/items/' + helper.workbookItemId + '/workbook/worksheets(\'Sheet1\')/tables(\'Table1\')/rows', JSON.stringify(rowData), {
                   headers: new Headers({ 'Authorization': 'Bearer ' + token.accessToken })
               })
               .subscribe(res => {
                  // Check the response status before trying to resolve
                  if (res.status === 201)
                     resolve();
                  else
                     reject('Add row failed');
               });
            }, function(err) {
                reject(err); //error getting token for MS Graph
            });
        });
    }

Updating a row in a worksheet is a little different than you might expect. Instead of referencing the table in Excel, you PATCH a specific range with new values (in the same multi-dimensional array format as an add). Because updates (and deletes, as you will see later) are performed against ranges, it is important to keep track of the row index of the data you work with.

updateRow function updates a row in the Excel workbook (via Range):

    //updates a row in the Excel datasource
    updateRow(index:number, rowData:any) {
        let helper = this;
        return new Promise((resolve, reject) => {
            helper.authHelper.getTokenForResource(helper.authHelper._graphResource).then(function(token: Microsoft.ADAL.AuthenticationResult) {
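               //the row index is 0-based and the table has a header row, so data row "index" maps to worksheet row index + 2 (assumes the table starts at A1)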
               let address = 'Sheet1!A' + (index + 2) + ':D' + (index + 2);
               helper.http.patch('https://graph.microsoft.com/beta/me/drive/items/' + helper.workbookItemId + '/workbook/worksheets(\'Sheet1\')/range(address=\'' + address + '\')', JSON.stringify(rowData), {
                   headers: new Headers({ 'Authorization': 'Bearer ' + token.accessToken })
               })
               .subscribe(res => {
                  // Check the response status before trying to resolve
                  if (res.status === 200)
                     resolve();
                  else
                     reject('Update row failed');
               });
            }, function(err) {
                reject(err); //error getting token for MS Graph
            });
        });
    }

Deleting a row is accomplished by performing a POST to a specific range with /delete tacked on to the end. You can also include instructions in the body on how Excel should treat the delete. For CRUD operations, we want deleting to shift rows up, so { "shift": "Up" } is in the body of the POST.

deleteRow function deletes a row in the Excel workbook (via Range):

    //deletes a row in the Excel datasource
    deleteRow(index:number) {
        let helper = this;
        return new Promise((resolve, reject) => {
            helper.authHelper.getTokenForResource(helper.authHelper._graphResource).then(function(token: Microsoft.ADAL.AuthenticationResult) {
                let address = 'Sheet1!A' + (index + 2) + ':D' + (index + 2);
               helper.http.post('https://graph.microsoft.com/beta/me/drive/items/' + helper.workbookItemId + '/workbook/worksheets(\'Sheet1\')/range(address=\'' + address + '\')/delete', JSON.stringify({ 'shift': 'Up' }), {
                   headers: new Headers({ 'Authorization': 'Bearer ' + token.accessToken })
               })
               .subscribe(res => {
                  // Check the response status before trying to resolve
                  if (res.status === 204)
                     resolve();
                  else
                     reject('Delete row failed');
               });
            }, function(err) {
                reject(err); //error getting token for MS Graph
            });
       });
    }

CRUD Operations with OneDrive

Nothing too special about working with files and OneDrive. However, I found the OneDrive documentation to be a little unrealistic and unhelpful, as it shows operations on text files rather than binary data. Binary data (ex: images and Office documents) introduces additional complexity (especially for a client-side application), so I thought I would document some of the utilities I wrote to work with files. Thanks to Waldek Mastykarz, Stefan Bauer, and Sahil Malik for advisement on upload…it wasn’t working at first (turned out to be an Angular2 bug), and they were very helpful. The getBinaryFileContents function is directly from Waldek’s blog HERE.

uploadFile uploads a binary file to a specific location in OneDrive using PUT

    //uploads a file to the MyExpenses folder
    uploadFile(base64: string, name: string) {
        let helper = this;
        return new Promise((resolve, reject) => {
            helper.authHelper.getTokenForResource(helper.authHelper._graphResource).then(function(token: Microsoft.ADAL.AuthenticationResult) {
                //convert base64 string to binary
                let binary = helper.getBinaryFileContents(base64);

                //prepare the request
                let req = new XMLHttpRequest();
                req.open('PUT', 'https://graph.microsoft.com/v1.0/me/drive/special/approot:/' + name + '/content', false);
                req.setRequestHeader('Content-type', 'application/octet-stream');
                req.setRequestHeader('Content-length', binary.length.toString());
                req.setRequestHeader('Authorization', 'Bearer ' + token.accessToken);
                req.setRequestHeader('Accept', 'application/json;odata.metadata=full');
                req.send(binary);

                //check response
                if (req.status === 201)
                    resolve(JSON.parse(req.responseText).id); //resolve id of new file
                else
                    reject('Failed to upload file');
            }, function(err) {
                reject(err); //error getting token for MS Graph
            });
        });
    }

deleteFile deletes a specific file using DELETE

    //deletes a file from OneDrive for business
    deleteFile(id:string) {
        let helper = this;
        return new Promise((resolve, reject) => {
            helper.authHelper.getTokenForResource(helper.authHelper._graphResource).then(function(token: Microsoft.ADAL.AuthenticationResult) {
               helper.http.delete('https://graph.microsoft.com/beta/me/drive/items/' + id, {
                   headers: new Headers({ 'Authorization': 'Bearer ' + token.accessToken })
               })
               .subscribe(res => {
                  // Check the response status before trying to resolve
                  if (res.status === 204)
                     resolve();
                  else
                     reject('Delete file failed');
               });
            }, function(err) {
                reject(err); //error getting token for MS Graph
            });
       });
    }

For images, I decided to take advantage of another beta API in the Microsoft Graph…thumbnails. The thumbnails API allows you to download small formats of an image in OneDrive; it will generate Small, Medium, and Large thumbnails for all images. I decided a medium thumbnail would look fine in my mobile application and be MUCH more performant.

loadPhoto loads the medium thumbnail for a specified image in OneDrive

    //loads a photo from OneDrive for Business
    loadPhoto(id:string) {
        //loads a photo for display
        let helper = this;
        return new Promise((resolve, reject) => {
            helper.authHelper.getTokenForResource(helper.authHelper._graphResource).then(function(token: Microsoft.ADAL.AuthenticationResult) {
                //first get the thumbnails
                helper.http.get('https://graph.microsoft.com/beta/me/drive/items/' + id + '/thumbnails', {
                   headers: new Headers({ 'Authorization': 'Bearer ' + token.accessToken })
               })
               .subscribe(res => {
                    // Check the response status before trying to resolve
                    if (res.status === 200) {
                        var data = res.json().value;
                        var resource = data[0].medium.url.substring(8);
                        resource = "https://" + resource.substring(0, resource.indexOf('/'));
                        helper.authHelper.getTokenForResource(resource).then(function(thumbtoken: Microsoft.ADAL.AuthenticationResult) {
                            //prepare the content request
                            let req = new XMLHttpRequest();
                            req.open('GET', data[0].medium.url, true);
                            req.responseType = 'blob';
                            req.setRequestHeader('Authorization', 'Bearer ' + thumbtoken.accessToken);
                            req.setRequestHeader('Accept', 'application/json;odata=verbose');
                            req.onload = function(e) {
                                //check response
                                if (this.status === 200) {
                                    //get the blob and convert to base64 using FileReader
                                    var blob = req.response;
                                    var reader = new FileReader();
                                    reader.onload = function(evt){
                                        var base64 = evt.target.result;
                                        base64 = base64.substring(base64.indexOf(',') + 1);
                                        resolve(base64);
                                    };
                                    reader.readAsDataURL(blob);
                                }
                                else
                                    reject('Failed to read image');
                            };
                            req.onerror = function(e) {
                               reject('Failed to download image');
                            };
                            req.send();
                        }, function(err) {
                            reject('Error getting token for thumbnail');
                        });
                    }
                    else
                        reject('Thumbnail load failed');
               });
            }, function(err) {
                reject(err); //error getting token for MS Graph
            });
       });
    }

Final Thoughts

The Excel APIs in the Microsoft Graph have capabilities well beyond CRUD operations for an app, but I thought this was an interesting pattern (especially for a guy who is always going over his monthly Azure quota). You can grab the MyExpenses solution written with Ionic2/Angular2/TypeScript at the GitHub repo listed below:

https://github.com/richdizz/Ionic2-Angular2-ExcelAPI-Expense-App

 

Adding Office 365 Connectors from a Mobile App


Office 365 Connectors are a powerful way to aggregate group information and content into one place (regardless of the source). Third parties can use connectors to send useful information and content into Office 365 Groups where users can have conversations, collaborate, and take action on that third party data. Developers can leverage the Connectors Developer Dashboard to register custom Connectors and even generate the HTML markup for integrating the Connector with their websites. However, there is little guidance on adding the same functionality to a mobile application. In this post, I will detail the steps to integrate a third-party Office 365 Connector with a custom mobile application.

Adding Connectors from Mobile Apps

The two documented ways to add an Office 365 Connector to a group are through the Connector Catalog in Office 365 or via a "Connect to Office 365" button on a third party website. All published Connectors are displayed in the Connector Catalog, but third parties can choose to update their website(s) to support the "Connect to" flow. I call this a "flow" because it involves navigating the user to Office 365 to select a group and consent to the Connector before returning to the third party website with connection details. I found this very similar to an OAuth2 code authorization flow…so much so, I thought I could use OAuth2 mobile patterns to add Connectors from a mobile application.

The "Connect to" flow for Connectors involves a redirect to the Connector consent screen (where the user also selects a group). This redirect includes a callback_url that Office 365 will use to send back connection details (the group name and webhook URL). This can be handled in a native mobile application by leveraging a browser control (or in-app browser plug-in) and an event listener for when the callback_url is called. Every mobile platform supports some form of browser control with events for address changes in that control. Below is a code block in TypeScript that uses the Cordova InAppBrowser plugin to "listen" for the callback and then grab the group and webhook details off that response.

Use of In-App Browser to perform Connector consent flow (TypeScript/Cordova):

  connectToO365() {
    let ctrl = this;
    let ref = cordova.InAppBrowser.open("https://outlook.office.com/connectors/Connect?state=myAppsState&app_id=4b543361-dbc2-4726-a351-c4b43711d6c5&callback_url=https://localhost:44300/callback", "_blank", "location=no,clearcache=yes,toolbar=yes");
    ref.addEventListener("loadstart", function(evt) {
      ctrl.zone.run(() => {
        //listen for the callback URI
        if (evt.url.indexOf('https://localhost:44300/callback') == 0) {
          //decode the uri into the response parameters
          let parts = evt.url.split('&');
          ctrl.sub.webhook = decodeURIComponent(parts[1].split('=')[1]);
          ctrl.sub.group = decodeURIComponent(parts[2].split('=')[1]);
          ref.close();
        }
      });
    });
  }
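
With the webhook URL captured, the application can push messages into the group at any time by POSTing JSON to it. The sketch below follows the same TypeScript/Cordova style; postCard is a hypothetical helper name, and the { text: ... } payload is the simplest body an Office 365 Connector webhook accepts.

Posting a card to the captured webhook (illustrative sketch):

  //illustrative sketch: POST a simple text card to the captured webhook
  postCard(message: string) {
    let req = new XMLHttpRequest();
    req.open('POST', this.sub.webhook, true);
    req.setRequestHeader('Content-Type', 'application/json');
    req.onload = function() {
      //the webhook responds with 200 when the card is accepted
      if (req.status !== 200)
        console.log('Card post failed: ' + req.status);
    };
    req.send(JSON.stringify({ text: message }));
  }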

Final Thoughts

I believe that Office 365 Groups are incredibly powerful and the future of Microsoft collaboration. In fact, most of the mobile applications I have been building are group-centric (check out Good with Faces), so incorporating Connectors just adds an additional layer of integration value. You can download the full Angular2/Ionic2/TypeScript sample used in this post at the repo below:

https://github.com/richdizz/Adding-O365-Connector-From-Mobile-App


Delivering Better Applications through Microsoft Teams


When Microsoft announced the public preview of Microsoft Teams, I think they unveiled the future of Microsoft productivity. It provides a single canvas that aggregates relevant information and tools across Office 365 and many 3rd party applications. 3rd party integration is made possible by a compelling extensibility story that developers can take advantage of today.

While in preview, developers can integrate custom experiences into Microsoft Teams through bots, custom tabs, and connectors. Each of these offers unique value, but developers can also combine them to build even more compelling and comprehensive applications. Although I’ll cover each capability in more detail, here is my perspective on where each fits:

  • Bots – great for real-time interactions with individuals and teams
  • Tabs – great for apps that need a more dynamic or complex user interface
  • Connectors – great for actionable notifications/messages with a lightweight user experience

It all started with bots…

My journey with Microsoft Teams actually started with a bot project. Mat Velloso gives a great talk on “what is a bot” (he even built a bot that presents about bots). As Mat describes it, a bot is just another app. With that in mind, I decided to take an existing mobile app I built last year and deliver it as a bot. Even with cross-platform tools like Xamarin and Cordova, developers are still challenged to get users to install their mobile app(s). A bot on the other hand can run within existing applications that have enormous user bases (ex: Facebook Messenger, Skype, WeChat, etc).

As I started to build my bot, I quickly realized that more advanced user interfaces were hard to deliver. In my case, photo tagging was a challenge. Yes, cognitive services can face-detect most pictures, but some photos need manually drawn photo tags. I also found that long/paged lists could be more easily delivered in a traditional user interface. I’m sure there are many other scenarios, but these were my roadblocks.

Note: many bot channels support the ability to send messages with rich formatting and controls. These are great, but don’t yet come close to the empty canvas of a traditional user interface.

Enter Microsoft Teams. Not only did Microsoft Teams provide a channel for my bot, but it also offered custom tabs for complex user interfaces. I also found connectors to be a valuable tool in Microsoft Teams for sending proactive, actionable notifications to users. Below is a video of my original mobile application and the application re-imagined for Microsoft Teams:

Bots

2016 was a huge year for bots, with a surge of interest coming from exciting platforms like Alexa/Cortana and new bot channels like Facebook Messenger and Skype. Microsoft went "all-in" on bots with high-profile bot launches (who can forget Tay), significant investments in the Bot Framework, and bot support in popular platforms like Skype. Microsoft Teams was launched with bot support (and even has a bot of its own). For developers, Microsoft Teams is just another channel in the Bot Framework, which means bot developers can enable their bot(s) for Microsoft Teams with little to no code modification. Users can interact with bots in Microsoft Teams through one:one chats and team conversations (the latter not enabled during preview).

Bots in Microsoft Teams can take advantage of "cards", a lightweight UI framework for delivering formatted and actionable messages/controls to users. It is the exact card framework used by Office 365 Connectors (discussed later in this post). It uses JSON instead of markup to deliver messages that can render across a number of platforms and clients.
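
As a hedged illustration, a minimal card in this JSON format looks something like the following (the MessageCard shape used by connectors; only a small subset of fields is shown, and all values are made up):

Example card payload (minimal sketch)
{
  "@type": "MessageCard",
  "@context": "http://schema.org/extensions",
  "summary": "New item added",
  "title": "Team Updates",
  "text": "A new photo was added to the team album."
}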

Lots of information exists around building bots, especially with the Bot Framework. Microsoft is busy creating a Bot Design Center, a set of online documentation that will detail best practices and patterns for building bots (I will provide a link once published). The only unique constraint in my bot was authenticating users to establish identity and call into the Microsoft Graph. In the next few weeks, I’ll author a post on the effort of performing a secure OAuth flow from within a bot.

Tabs

Tabs are Microsoft Teams’ version of an Office Add-in. They give developers the ability to load web content into a specific channel of a team. There are three potential web pages tab developers need to consider:

  • Configuration Page: this is the initial page that is launched when a tab is added to Microsoft Teams. It gives the end-user the ability to configure specifics about the tab and allows the developer to capture configuration information and set the page that will be displayed in the tab. This page is also used if the developer allows the tab configuration to be updated.
  • Tab Content Page: this is the actual page that displays in a tab within Microsoft Teams.
  • Tab Remove Page: this is an optional page a developer can use to “clean up” when a user deletes a tab. For example, you might want to allow the user to delete any data associated with the tab.

The Microsoft Teams Tab Library is a JavaScript library that helps developers integrate their web content with Microsoft Teams. One of the most important components of this is the settings object. Developers can use this to retrieve and save settings for the tab. Retrieving settings is useful in determining what team/channel the tab is running in or the Theme the user has applied to Microsoft Teams. Saving settings is useful in capturing both the content and remove page URLs.

Retrieving Settings

microsoftTeams.getContext(function (context) {
    // Do something with context...ex: context.theme, context.groupId, context.channelId;
});
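
For completeness, the settings object also supports reading back previously saved values. Here is a minimal sketch, assuming the preview library exposes getSettings alongside setSettings and that microsoftTeams.initialize() has already been called on the page:

Reading saved settings (sketch)

microsoftTeams.settings.getSettings(function (settings) {
    // Previously saved values such as contentUrl, websiteUrl, and removeUrl
    console.log(settings.contentUrl);
});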

Saving Settings

microsoftTeams.settings.setSettings({
    contentUrl: "https://somedomain/tab.html",
    suggestedDisplayName: "Some default tab name",
    websiteUrl: "https://somedomain/index.html",
    removeUrl: "https://somedomain/tabremove.html"
});
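
On a configuration page, setSettings is typically wired into a save handler so the values are committed when the user clicks Save. The sketch below assumes the preview library’s setValidityState and registerOnSaveHandler functions behave as described in the tab documentation:

Wiring setSettings into the configuration save flow (sketch)

microsoftTeams.initialize();
microsoftTeams.settings.setValidityState(true); // enables the Save button
microsoftTeams.settings.registerOnSaveHandler(function (saveEvent) {
    microsoftTeams.settings.setSettings({
        contentUrl: "https://somedomain/tab.html",
        suggestedDisplayName: "Some default tab name",
        removeUrl: "https://somedomain/tabremove.html"
    });
    saveEvent.notifySuccess(); // tells Microsoft Teams the save succeeded
});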

Beyond settings, the Microsoft Teams Tab Library has a number of additional objects, functions, and events that can be used to build great solutions. In the next few weeks, I’ll author posts that demonstrate how to use this library for setting styles/themes for a tab (including listening for theme changes in Microsoft Teams) and for performing authentication flows and establishing identity. In the meantime, Microsoft Teams engineering has aggregated some great documentation about tab development.

Connectors

Office 365 Connectors are the same connectors that run in Outlook. The only difference is that Microsoft Teams expands the group/team construct with an additional layer of detail…the channel, which provides additional organization for Microsoft Teams. Connectors are added through a trust flow that gives a 3rd party app a “web hook” for sending cards into Microsoft Teams. During preview, Connectors cannot be side-loaded into Microsoft Teams like they can in Outlook. Instead, you would use the add connector menu option on a channel and select one from the Connector Gallery. It should be noted that bots can deliver the same cards as connectors. However, there are compelling scenarios for using each. In my case, I only wanted my bot used in one-on-one conversations and the connectors to message an entire team (even when the bot wasn’t added to the team).
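
Once an app has been granted a web hook URL, sending a card is just an HTTP POST of the card JSON to that URL. Here is a hedged sketch; the webhook URL is a placeholder and the card body assumes the simple title/text form shown earlier:

Posting a card to a connector web hook (sketch)

// webhookUrl is issued to the 3rd party app during the connector trust flow
var webhookUrl = "https://outlook.office.com/webhook/..."; // placeholder
var req = new XMLHttpRequest();
req.open("POST", webhookUrl);
req.setRequestHeader("Content-Type", "application/json");
req.send(JSON.stringify({
    title: "Deployment finished",
    text: "Build **1234** was deployed to production."
}));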

Conclusion

I hope this post expanded your knowledge of Microsoft Teams extensibility and started to illustrate how you can use it to deliver powerful applications. I hate the concept of a blog “series”, so instead I will post some additional patterns and learnings from working with Microsoft Teams in the coming weeks. In the meantime, check out the documentation at https://msdn.microsoft.com/en-us/microsoft-teams/index

Microsoft Teams and Custom Tab Theme


Custom tabs are an extensibility component in Microsoft Teams that allows developers to embed web content within a team channel. Tab content is effectively loaded in an IFRAME to ensure the appropriate isolation. Isolation should not compromise the native feel of a tab, so Microsoft offers the Microsoft Teams Tab Library to enable a tab page to establish context and interact with Microsoft Teams. One way this library can be used is in matching the active theme in Microsoft Teams (at preview, the options include light, dark, and high contrast). It is important to draw the distinction between theme and style…tab developers are encouraged to use their own brand styles, but should try to match the theme so their tab feels integrated. In this post, I will detail how to use the Microsoft Teams Tab Library to get the active theme in Microsoft Teams and how to register an event handler to listen for theme changes (neither of these are in the preview documentation).

(Image: example of handling theme changes in a custom tab)

Getting Active Theme

The context provided by the Microsoft Teams Tab Library includes the active theme the user is leveraging in the Microsoft Teams client. The context can be retrieved by calling microsoftTeams.getContext as seen below (notice the theme attribute on context):

Using microsoftTeams.getContext for active theme

microsoftTeams.getContext(function (context) {
    setTheme(context.theme);
});

Listening for Theme Changes

The Microsoft Teams Tab Library also includes an event handler that can be used to “listen” for theme changes in the Microsoft Teams client. The event handler can be registered by passing a callback function into microsoftTeams.registerOnThemeChangeHandler as seen below:

Register handler to listen for theme changes

// Setup themes refs from teams css
var themedStyleSheets = [];
themedStyleSheets.push("https://statics.teams.microsoft.com/hashedcss/stylesheets.min-e05e0092.css");
themedStyleSheets.push("https://statics.teams.microsoft.com/hashedcss/stylesheets.theme-contrast.min-669e1eed.css");
themedStyleSheets.push("https://statics.teams.microsoft.com/hashedcss/stylesheets.theme-dark.min-fe14eeb8.css");

// setTheme function for initialize and theme changes
var setTheme = function (theme) {
    if (theme === "default")
        document.getElementById("themeCSS").setAttribute("href", themedStyleSheets[0]);
    else if (theme === "contrast")
        document.getElementById("themeCSS").setAttribute("href", themedStyleSheets[1]);
    else if (theme === "dark")
        document.getElementById("themeCSS").setAttribute("href", themedStyleSheets[2]);
};
microsoftTeams.registerOnThemeChangeHandler(setTheme);

Notice that for matching the theme, my approach was to leverage the same themed CSS that Microsoft Teams uses. At the very least, you should probably leverage the background colors used for the default, dark, and contrast themes (in preview these are #eef1f5, #2b2b30, and #000000 respectively).
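
If you would rather not take a dependency on Microsoft’s hashed stylesheets, a minimal alternative is to map just those background colors to the theme names. This sketch reuses the registerOnThemeChangeHandler function shown above:

Minimal theme matching with background colors only (sketch)

// Background colors observed in preview for each theme
var themeBackgrounds = {
    "default": "#eef1f5",
    "dark": "#2b2b30",
    "contrast": "#000000"
};
var setThemeBackground = function (theme) {
    document.body.style.background = themeBackgrounds[theme] || themeBackgrounds["default"];
};
microsoftTeams.registerOnThemeChangeHandler(setThemeBackground);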

What about Config/Remove pages?

The tab config and remove pages are loaded into a Microsoft Teams dialog. This dialog is meant to feel native, which makes it even more important to match the theme and styles of Microsoft Teams. Notice I say theme and style here…you shouldn’t try to use your brand styles in this dialog…it is for settings. In general, you can use the exact same approach with these pages as we used with the main tab content (get active theme via getContext and listen for theme changes via registerOnThemeChangeHandler). However, the one difference is that the dialog displays with a white background in the default theme instead of #eef1f5 (all other themes can inherit background color). Here is my slightly modified script to handle this.

Theme management from Config/Remove pages

// setTheme function for initialize and theme changes
var themeChanged = function (theme) {
    if (theme === "default") {
        document.getElementById("themeCSS").setAttribute("href", themedStyleSheets[0]);
        document.getElementById("body").style.background = "#fff"; //special case for default
    }
    else if (theme === "contrast") {
        document.getElementById("themeCSS").setAttribute("href", themedStyleSheets[1]);
        document.getElementById("body").style.background = "inherit";
    }
    else if (theme === "dark") {
        document.getElementById("themeCSS").setAttribute("href", themedStyleSheets[2]);
        document.getElementById("body").style.background = "inherit";
    }
};

Conclusion

Developers should build their tabs to feel integrated with Microsoft Teams. Matching the theme is a great step in achieving an integrated feel and an optimal user experience. Config and Remove pages should take that a step further to match the theme AND style of Teams. Hopefully this post illustrated how to use the Microsoft Teams Tab Library to work with themes. You can find the sample used in this post on GitHub: https://github.com/richdizz/Microsoft-Teams-Tab-Themes.

Microsoft Teams and OAuth in Custom Tab


Custom Tabs provide developers with a canvas to integrate their own user interface into Microsoft Teams. Developers have a lot of flexibility in what they load in the tab, but source domain(s) must be registered in the tab manifest. This is very similar to Office add-ins. In fact, you can think of tabs as Microsoft Teams’ version of an Office add-in. Like Office add-ins, there are some scenarios where source domains can be hard to predict. This is especially true in federated authentication scenarios. But like Office add-ins, Microsoft Teams offers a dialog API that can be used to achieve complex authentication flows. In this post, I will illustrate how to use the Microsoft Teams authenticate dialog to perform an OAuth flow. I’ll use Azure AD and the Microsoft Graph, but you could replace those with any identity provider/service.


Microsoft Teams Authentication Dialog

The Microsoft Teams Tab Library provides an authentication dialog for performing authentication flows. For a comprehensive explanation on why dialogs are necessary, you should review my older post on Connecting to Office 365 from an Office Add-in. You need a page or view to manage the authentication flow and pass the appropriate authentication results (ex: tokens) back to the tab via the Microsoft Teams Tab Library. The dialog can be launched by calling microsoftTeams.authentication.authenticate:

Launching authentication dialog

microsoftTeams.authentication.authenticate({
    url: "/app/auth.html",
    width: 400,
    height: 400,
    successCallback: function(token) {
        // Use access token to get some data from a service
        getData(token);
    },
    failureCallback: function(err) {
        document.getElementById("auth").innerText("Token failure.");
    }
});

Passing information back to the tab is done by calling microsoftTeams.authentication.notifySuccess and microsoftTeams.authentication.notifyFailure. Here is an example of passing an access token back from the authentication dialog.

Passing information from dialog to tab

authContext.acquireToken("https://graph.microsoft.com", function(error, token) {
    if (error || !token)
        microsoftTeams.authentication.notifyFailure(null);
    else
        microsoftTeams.authentication.notifySuccess(token);
});

As mentioned in the opening, my sample performs OAuth against Azure AD to call the Microsoft Graph. To do this, I leverage the Azure AD Authentication Library for JavaScript (adal.js). Here is my completed auth.html that is used in the authentication dialog.

Auth.html using adal.js

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Document</title>
</head>
<body>
    <script src="https://secure.aadcdn.microsoftonline-p.com/lib/1.0.13/js/adal.min.js"></script>
    <script src="https://statics.teams.microsoft.com/sdk/v0.2/js/MicrosoftTeams.min.js"></script>
    <script type="text/javascript">
    // Initialize microsoft teams tab library
    microsoftTeams.initialize();

    // Setup auth parameters for ADAL
    window.config = {
        instance: "https://login.microsoftonline.com/",
        tenant: "common",
        clientId: "c6951c6d-dcaa-4e45-b4b8-2763c7916569",
        postLogoutRedirectUri: window.location.origin,
        cacheLocation: "localStorage",
        endpoints: {
            "https://graph.microsoft.com": "https://graph.microsoft.com"
        }
    };

    // Setup authcontext
    var authContext = new AuthenticationContext(window.config);
    if (authContext.isCallback(window.location.hash))
        authContext.handleWindowCallback(window.location.hash);
    else {
        // Check if user is cached
        var user = authContext.getCachedUser();
        if (!user)
            authContext.login(); // No cached user...force login
        else {
            authContext.acquireToken("https://graph.microsoft.com", function(error, token) {
                if (error || !token) {
                    // TODO: this could cause infinite loop
                    // Should use microsoftTeams.authentication.notifyFailure after one try
                    authContext.login();
                }
                else
                    microsoftTeams.authentication.notifySuccess(token);
            });
        }
    }
    </script>
</body>
</html>

I should note that Microsoft Teams offers a “silent” authentication option if Azure AD is the identity provider. This is only valid for scenarios where the existing Azure AD session cookies (from the user having already signed in to Microsoft Teams) will be sufficient for the custom tab. Any user consent, federation, or two-factor authentication constraints would require the dialog approach.
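
I haven’t covered the silent flow in detail here, but as a rough sketch: the tab could pass the user’s UPN from the Teams context to ADAL as a login hint and attempt token acquisition without the dialog, falling back to microsoftTeams.authentication.authenticate on failure. This assumes the preview context exposes the UPN (context.upn) and that ADAL honors the login_hint parameter:

Silent token acquisition sketch

microsoftTeams.getContext(function (context) {
    // login_hint lets ADAL reuse the existing Azure AD session without prompting
    window.config.extraQueryParameter = "login_hint=" + encodeURIComponent(context.upn);
    var silentContext = new AuthenticationContext(window.config);
    silentContext.acquireToken("https://graph.microsoft.com", function (error, token) {
        if (token)
            getData(token);
        // else fall back to the dialog flow shown earlier
    });
});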

Conclusion

Integrating 3rd party identity and services into Microsoft Teams can deliver compelling scenarios to users. Hopefully this post illustrated how to achieve that with the Microsoft Teams Tab Library. You can find the official documentation on the authentication dialog here and the sample used in this post on GitHub: https://github.com/richdizz/Microsoft-Teams-Tab-Auth

SharePoint Framework and Contextual Bots via Back Channel


This year Microsoft has made significant developer investments in SharePoint and bots, with new developer surfaces in the all-new SharePoint Framework and Bot Framework (respectively). Combining these technologies can deliver some very powerful scenarios. In fact, SharePoint PnP has a sample on embedding a bot into SharePoint. The sample does a good job of explaining the basics of the Bot Framework DirectLine channel and WebChat component (built with React). However, it really just shows how to embed a bot in SharePoint with no deeper integration. I imagine scenarios where the embedded bot automatically knows who the SharePoint user is and makes REST calls on behalf of the user. In this post, I will demonstrate how a bot can interact and get contextual information from SharePoint through the Bot Framework “back channel”.

Bot Architecture

To build a more contextual SharePoint/bot experience, it helps to understand the architecture of a bot built with the Bot Framework. Bot Framework bots use a REST endpoint that clients POST activity to. The activity type that is most obvious is a “Message”. However, clients can pass additional activity types such as pings, typing, etc. The “back channel” involves posting activity to the same REST endpoint with the “Event” activity type. The “back channel” is bi-directional, so a bot endpoint can send “invisible” messages to a bot client by using the same “Event” activity type. Bot endpoints and bot clients just need to have additional logic to listen and respond to “Event” activity. This post will cover both.
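
Concretely, a back channel event is just an activity with type “event”. Here is a sketch of the shape as a JavaScript object (the name and value are app-defined placeholders); the same shape flows in both directions:

Shape of an “Event” activity (sketch)

var eventActivity = {
    type: "event",                 // instead of "message"
    name: "sendUserInfo",          // app-defined event name
    value: { },                    // any payload you want to send
    from: { id: "user@contoso.com", name: "User Name" }
};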

Setup

Similar to the PnP sample, our back channel samples will leverage the Bot Framework WebChat control and the DirectLine channel of the Bot Framework. If you haven’t used the Bot Framework before, you build a bot and then configure channels for that bot (ex: Skype, Microsoft Teams, Facebook, Slack, etc). DirectLine is a channel that allows more customized bot applications. You can learn more about it in the Bot Framework documentation or the PnP sample. I have checked in my SPFx samples with a DirectLine secret for a published bot…you are welcome to use this for testing. As a baseline, here is the code to leverage this control without any use of the back channel.

Bot Framework WebChat before back channel code

import { App } from 'botframework-webchat';
import { DirectLine } from 'botframework-directlinejs';
require('../../../node_modules/BotFramework-WebChat/botchat.css');
import styles from './EchoBot.module.scss';
...
public render(): void {
   this.domElement.innerHTML = `<div id="${this.context.instanceId}" class="${styles.echobot}"></div>`;

   // Initialize DirectLine connection
   var botConnection = new DirectLine({
      secret: "AAos-s9yFEI.cwA.atA.qMoxsYRlWzZPgKBuo5ZfsRpASbo6XsER9i6gBOORIZ8"
   });

   // Initialize the BotChat.App with basic config data and the wrapper element
   App({
      user: { id: "Unknown", name: "Unknown" },
      botConnection: botConnection
   }, document.getElementById(this.context.instanceId));
}

Client to Bot Back Channel

I don’t think all embedded bots need bi-directional use of the back channel. However, I do think all embedded bots can benefit from the client-to-bot direction, if only for contextual user/profile information. To use the back channel in this direction, the client needs to call the postActivity method on the DirectLine botConnection with event data. Event data includes type (“event”), name (a unique name for your event), value (any data you want to send on the back channel), and from (the user object containing id and name). In the sample below, we are calling the SharePoint REST endpoint for profiles to retrieve the user’s profile and sending their name through the back channel (using the event name “sendUserInfo”).

Sending data from client to bot via back channel

// Get userprofile from SharePoint REST endpoint
var req = new XMLHttpRequest();
req.open("GET", "/_api/SP.UserProfiles.PeopleManager/GetMyProperties", false);
req.setRequestHeader("Accept", "application/json");
req.send();
var user = { id: "userid", name: "unknown" };
if (req.status == 200) {
   var result = JSON.parse(req.responseText);
   user.id = result.Email;
   user.name = result.DisplayName;
}

// Initialize the BotChat.App with basic config data and the wrapper element
App({
   user: user,
   botConnection: botConnection
}, document.getElementById(this.context.instanceId));

// Call the bot backchannel to give it user information
botConnection
   .postActivity({ type: "event", name: "sendUserInfo", value: user.name, from: user })
   .subscribe(id => console.log("success"));

On the bot endpoint, you need to listen for activity of type event. This will be slightly different depending on whether the bot is implemented in C# or Node; my sample uses C#. For C#, the activity type check can easily be implemented in the messages Web API (see here for a Node example of the back channel). Notice in the sample below we are extracting the user information sent through the back channel (on activity.Value) and saving it in UserState so it can be used throughout the conversation.

Using data sent through the back channel from client to bot

public async Task<HttpResponseMessage> Post([FromBody]Activity activity)
{
   if (activity.Type == ActivityTypes.Event && activity.Name == "sendUserInfo")
   {
      // Get the username from activity value then save it into BotState
      var username = activity.Value.ToString();
      var state = activity.GetStateClient();
      var userdata = state.BotState.GetUserData(activity.ChannelId, activity.From.Id);
      userdata.SetProperty<string>("username", username);
      state.BotState.SetUserData(activity.ChannelId, activity.From.Id, userdata);

      ConnectorClient connector = new ConnectorClient(new Uri(activity.ServiceUrl));
      Activity reply = activity.CreateReply($"The back channel has told me you are {username}. How cool is that!");
      await connector.Conversations.ReplyToActivityAsync(reply);
   }
   else if (activity.Type == ActivityTypes.Message)
   {
      // Handle actual messages coming from client
      // Removed for readability
   }
   var response = Request.CreateResponse(HttpStatusCode.OK);
   return response;
}

Bot to Client Back Channel

Sending data through the back channel from bot to client is as simple as sending a message. The only difference is you need to format the activity as an event with name and value. This is a little tricky in C# as you need to cast an IMessageActivity to IEventActivity and back (as seen below). The IEventActivity is new to BotBuilder, so you should update the Microsoft.Bot.Builder package to the latest (mine uses 3.5.2).

Sending data from bot to client via back channel

[Serializable]
public class RootDialog : IDialog<IMessageActivity>
{
   public async Task StartAsync(IDialogContext context)
   {
      context.Wait(MessageReceivedAsync);
   }

   public async Task MessageReceivedAsync(IDialogContext context, IAwaitable<IMessageActivity> result)
   {
      var msg = await result;

      string[] options = new string[] { "Lists", "Webs", "ContentTypes"};
      var user = context.UserData.Get<string>("username");
      string prompt = $"Hey {user}, I'm a bot that can read your mind...well maybe not but I can count things in your SharePoint site. What do you want to count?";
      PromptDialog.Choice(context, async (IDialogContext choiceContext, IAwaitable<string> choiceResult) =>
      {
         var selection = await choiceResult;

         // Send the query through the backchannel using Event activity
         var reply = choiceContext.MakeMessage() as IEventActivity;
         reply.Type = "event";
         reply.Name = "runShptQuery";
         reply.Value = $"/_api/web/{selection}";
         await choiceContext.PostAsync((IMessageActivity)reply);
      }, options, prompt);
   }
}

Listening for the back channel events on the client again involves the DirectLine botConnection object where you filter and subscribe to specific activity. In the sample below we listen for activity type of event and name runShptQuery. When this type of activity is received, we perform a SharePoint REST query and return the aggregated results to the bot (again via back channel).

Using data sent through the back channel from bot to client

// Listen for events on the backchannel
var act:any = botConnection.activity$;
act.filter(activity => activity.type == "event" && activity.name == "runShptQuery")
   .subscribe(a => {
      var activity:any = a;
      // Parse the entityType out of the value query string
      var entityType = activity.value.substr(activity.value.lastIndexOf("/") + 1);

      // Perform the REST call against SharePoint
      var shptReq = new XMLHttpRequest();
      shptReq.open("GET", activity.value, false);
      shptReq.setRequestHeader("Accept", "application/json");
      shptReq.send();
      var shptResult = JSON.parse(shptReq.responseText);

      // Call the bot backchannel to give the aggregated results
      botConnection
        .postActivity({ type: "event", name: "queryResults", value: { entityType: entityType, count: shptResult.value.length }, from: user })
        .subscribe(id => console.log("success sending results"));
   });

Conclusion

Hopefully you can see how much more powerful a SharePoint or Office embedded bot can become with additional context provided through the back channel. I’m really excited to see what creative solutions developers come up with this approach, so keep me posted. Big props to Bill Barnes and Ryan Volum on my team for their awesome work on the WebChat and the back channel. Below, I have listed four repositories used in this post. I have purposefully checked in the SharePoint Framework projects with a DirectLine secret to my bot so you can immediately run them without deploying your own bot.

SPFx-1-Way-Bot-Back-Channel
Simple SharePoint Framework Project that embeds a bot and uses the Bot Framework back channel to silently send the bot contextual information about the user.

SPFx-2-Way-Bot-Back-Channel
Simple SharePoint Framework Project that embeds a bot and uses the Bot Framework back channel to silently integrate the bot and client to share contextual information and API calls.

CSharp-BotFramework-OneWay-BackChannel
Simple C# Bot Framework project that listens on the back channel for contextual information sent from the client (WebChat client in SharePoint page)

CSharp-BotFramework-TwoWay-BackChannel
Simple C# Bot Framework project that uses the back channel for contextual information and API calls against the client (WebChat client in SharePoint page)

Office Add-ins with Contextual Bots via Back Channel


Last week I published a post on using the SharePoint Framework to embed contextual bots in SharePoint. In it, I described how the same approach could be used to embed a contextual bot in an Office Add-in. This post will illustrate how to do exactly that. I will walk through the development of a modern day “Clippy” powered by the Bot Framework and Office.js. The sample that accompanies this post works in Word, Excel, and Outlook, but could also be updated for OneNote and PowerPoint.

Determine the host

Clippy was a help tool that was pervasive across the Office suite. As such, I wanted to build my Clippy add-in to run in most of the Office clients. This required my add-in and bot to be “client aware”. If you are writing an Office Add-in for a specific client, then you can likely omit this section and code directly against that Office product. For Word/Excel/PowerPoint/OneNote you can use Office.context.host to determine what Office product the add-in is being hosted in. For Outlook/OWA, you can check if Office.context.mailbox is defined. In the Clippy Bot, I check for the host as soon as the add-in is launched and pass that information to my bot via an “initialize” event to the back channel. You can use any event name you want with the back channel and can use the name for event listening logic.

Initializing the add-in upon launch

<div id="botContainer"></div>
<script src="https://unpkg.com/botframework-webchat/botchat.js"></script>
<script type="text/javascript">
Office.initialize = function (reason) {
   var host = "", user = { id: "", name: "" };

   // Determine the host and user information (Word/Excel/PowerPoint vs. Outlook)
   if (Office.context.host) {
      host = Office.context.host;
      user.id = host + " User";
      user.name = host + " User";
   }
   else if (Office.context.mailbox) {
      host = "Outlook";
      user.id = Office.context.mailbox.userProfile.emailAddress;
      user.name = Office.context.mailbox.userProfile.displayName;
   }

   // Initialize the bot connection and webchat component
   var botConnection = new BotChat.DirectLine({
      secret: "ezjFbUYS_TRUNCATED_0w6f0"
   });
   BotChat.App({
      botConnection: botConnection,
      user: user
   }, document.getElementById("botContainer"));

   // Post the initialize event to the bot backend
   botConnection
      .postActivity({ type: "event", name: "initialize", value: host, from: user })
      .subscribe(function () { console.log("success"); });
};
</script>

Mail add-ins provide better context

Office.js provides richer contextual information for Mail Add-ins, such as user context and the ability to get additional data from Exchange/Exchange Online (all in the context of the mailbox user). In the code above, you may have noticed I use Office.js to retrieve user information via Office.context.mailbox.userProfile and pass the details to my bot (via the back channel). However, I could have gone much further with Office.js by retrieving tokens that can be used to call into Exchange/Exchange Online (Office.context.mailbox.getCallbackTokenAsync or Office.context.mailbox.makeEwsRequestAsync).
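
As a hedged sketch of that deeper integration (the “ewsToken” event name is hypothetical, and your bot would need corresponding logic to accept it), the add-in could fetch an Exchange callback token and forward it to the bot through the same back channel:

Forwarding an Exchange token through the back channel (sketch)

Office.context.mailbox.getCallbackTokenAsync(function (result) {
    if (result.status === Office.AsyncResultStatus.Succeeded) {
        // Forward the Exchange token to the bot via the back channel
        botConnection
            .postActivity({ type: "event", name: "ewsToken", value: result.value, from: user })
            .subscribe(function () { console.log("token sent"); });
    }
});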

Bot Logic

Earlier in the post I demonstrated sending an “initialize” event through the back channel from my add-in to the bot. The bot looks for events as activity is sent to the messages endpoint. When an activity of type “event” and name “initialize” occurs, the bot pulls out the data sent with the event (host client and user information) and stores it in the bot state.

Listening for events in the bot’s messaging endpoint

[BotAuthentication]
public class MessagesController : ApiController
{
   // POST: api/Messages
   public async Task<HttpResponseMessage> Post([FromBody]Activity activity)
   {
      if (activity.Type == ActivityTypes.Event)
      {
         if (activity.Name == "initialize")
         {
            // Get the Office host from the activity value, then save it into BotState
            var host = activity.Value.ToString();
            var state = activity.GetStateClient();
            var userdata = state.BotState.GetUserData(activity.ChannelId, activity.From.Id);
            userdata.SetProperty<string>("host", host);
            userdata.SetProperty<string>("user", activity.From.Name);
            state.BotState.SetUserData(activity.ChannelId, activity.From.Id, userdata);

            // Route the activity to the correct dialog
            await routeActivity(activity);
         }
         ...TRUNCATED

Messages sent into the bot are dispatched to client-specific dialogs corresponding to the different Office clients that can host the add-in.

Dispatching messages to client-specific dialogs

private async Task routeActivity(Activity activity)
{
   // Make sure we know the host
   var state = activity.GetStateClient();
   var userdata = state.BotState.GetUserData(activity.ChannelId, activity.From.Id);
   var host = userdata.GetProperty<string>("host");

   switch (host)
   {
      case "Word":
         await Conversation.SendAsync(activity, () => new WordDialog());
         break;
      case "Excel":
         await Conversation.SendAsync(activity, () => new ExcelDialog());
         break;
      case "Outlook":
         await Conversation.SendAsync(activity, () => new OutlookDialog());
         break;
      default:
         ConnectorClient connector = new ConnectorClient(new Uri(activity.ServiceUrl));
         Activity reply = activity.CreateReply($"Sorry, I can't figure out where you are running me from. You may not have given me enough time to initialize.");
         await connector.Conversations.ReplyToActivityAsync(reply);
         break;
   }
}

Each of the client-specific dialogs offer the user a choice of operations. For example, the ExcelDialog might allow the user to insert a Range or a Chart. When they select an operation, the bot sends an event with the name “officeOperation” to the add-in through the back channel. In the case of the Clippy Bot, I am only sending the operation name, but it could be any complex data.

Client-specific dialog logic

[Serializable]
public class ExcelDialog : IDialog<IMessageActivity>
{
   public async Task StartAsync(IDialogContext context)
   {
      context.Wait(MessageReceivedAsync);
   }

   public async Task MessageReceivedAsync(IDialogContext context, IAwaitable<IMessageActivity> result)
   {
      var msg = await result;

      string[] options = new string[] { "Chart", "Range" };
      string prompt = $"I see you are running the Clippy Bot in Excel. Select something and I'll insert it into a *NEW* worksheet:";
      PromptDialog.Choice(context, async (IDialogContext choiceContext, IAwaitable<string> choiceResult) =>
      {
         var selection = await choiceResult;
         OfficeOperation op = (OfficeOperation)Enum.Parse(typeof(OfficeOperation), selection);

         // Send the operation through the backchannel using Event activity
         var reply = choiceContext.MakeMessage() as IEventActivity;
         reply.Type = "event";
         reply.Name = "officeOperation";
         reply.Value = op.ToString();
         await choiceContext.PostAsync((IMessageActivity)reply);
      }, options, prompt);
   }
}

Add-in listening on back channel

The add-in can easily listen for events sent through the back channel by leveraging the filter and subscribe functions of the activity$ observable on the bot connection. In the Clippy Bot, the add-in listens for activity of type=event and name=officeOperation. When one of these activities comes through, the activity.value contains the data sent from the bot. In the case of Clippy Bot, this is the operation the add-in should perform with Office.js.

Listening and responding to events in the add-in

// Listen for events from the bot back channel
botConnection.activity$
   .filter(function (a) {
      return (a.type === "event" && a.name === "officeOperation");
   })
   .subscribe(function (a) {
      switch (a.value)
      {
         case "Reply":
            Office.context.mailbox.item.displayReplyForm(
            {
               "htmlBody": "<h1>Hello from the Clippy Bot!!!</h1><img src='https://klippybot.azurewebsites.net/images/clippy.png' alt='Clippy Bot Image' />"
            });
            botConnection.postActivity({ type: "event", name: "confirmation", value: true, from: user }).subscribe(function () { console.log("bot operation success"); });
            break;
         case "ReplyAll":
            Office.context.mailbox.item.displayReplyAllForm(
            {
               "htmlBody": "<h1>Hello from the Clippy Bot!!!</h1><img src='https://klippybot.azurewebsites.net/images/clippy.png' alt='Clippy Bot Image' />"
            });
            botConnection.postActivity({ type: "event", name: "confirmation", value: true, from: user }).subscribe(function () { console.log("bot operation success"); });
            break;
         case "Chart":
            if (Office.context.requirements.isSetSupported("ExcelApi", "1.2")) {
               Excel.run(function (context) {
                  var sheet = context.workbook.worksheets.add("ClippySheet" + cnt++);
                  var rangeData = [["Character", "Coolness Score"],
                     ["Clippy", 10],
                     ["Cortana", 8],
                     ["Siri", 4],
                     ["Alexa", 6]];
                  sheet.getRange("A1:B5").values = rangeData;
                  sheet.getRange("A1:B1").format.font.bold = true;
                  sheet.tables.add("A1:B5", true);
                  sheet.charts.add("ColumnClustered", sheet.getRange("A1:B5"), "auto");
                   return context.sync().then(function () { botConnection.postActivity({ type: "event", name: "confirmation", value: true, from: user }).subscribe(function () { console.log("bot operation success"); }); });
                   // Ignore any errors on context.sync
                }); // Ignore any errors on Excel.run
            } // Ignore old versions of Office
            break;
         case "Range":
            if (Office.context.requirements.isSetSupported("ExcelApi", "1.2")) {
               Excel.run(function (context) {
                  var sheet = context.workbook.worksheets.add("ClippySheet" + cnt++);
                  var rangeData = [["Character", "Coolness Score"],
                     ["Clippy", 10],
                     ["Cortana", 8],
                     ["Siri", 4],
                     ["Alexa", 6]];
                  sheet.getRange("A1:B5").values = rangeData;
                  sheet.getRange("A1:B1").format.font.bold = true;
                  sheet.tables.add("A1:B5", true);
                   return context.sync().then(function () { botConnection.postActivity({ type: "event", name: "confirmation", value: true, from: user }).subscribe(function () { console.log("bot operation success"); }); });
                   // Ignore any errors on context.sync
                }); // Ignore any errors on Excel.run
            } // Ignore old versions of Office
            break;
         case "Image":
            if (Office.context.requirements.isSetSupported("WordApi", "1.2")) {
               Word.run(function (context) {
                  context.document.body.insertInlinePictureFromBase64("iVB_TRUNCATED_5CYII=", "End");
                   return context.sync().then(function () { botConnection.postActivity({ type: "event", name: "confirmation", value: true, from: user }).subscribe(function () { console.log("bot operation success"); }); });
                  // Ignore any errors on context.sync
               }); // Ignore any errors on Word.run
            } // Ignore old versions of Office
            break;
         case "Paragraph":
            if (Office.context.requirements.isSetSupported("WordApi", "1.2")) {
               Word.run(function (context) {
                  context.document.body.insertText("Hello from the Clippy Bot!!!", "End");
                   return context.sync().then(function () { botConnection.postActivity({ type: "event", name: "confirmation", value: true, from: user }).subscribe(function () { console.log("bot operation success"); }); });
                  // Ignore any errors on context.sync
               }); // Ignore any errors on Word.run
            } // Ignore old versions of Office
            break;
      }
   });

Final Thoughts

Although “Clippy” was just a silly way to illustrate the back channel techniques with bots and add-ins, I hope you can see how these techniques could be used to deliver powerful scenarios. The entire Clippy Bot solution is available at the GitHub repo below. Like the SharePoint Framework samples, I have checked in the solution with a working DirectLine secret of a published bot so you can try it immediately. Enjoy!

https://github.com/richdizz/Office-Embedded-Bot-Back-Channel
