The SharePoint Trip to Modern – Are We There Yet? Part 2

On the trip to SharePoint “Modern”… I’m having a serious case of déjà vu. Macro-patterns I saw eons ago as a Microsoft Windows developer are showing up again in SharePoint “Modern”… especially in SharePoint “Modern” forms.

SharePoint forms – “classic” and “Modern” – fall into three broad categories: list and library forms, survey forms, and “forms over data” applications (aka InfoPath). Classic forms in each category are functional… but hard to customize, and definitely not beautiful. So where are we, exactly, on the trip to modern forms, and what are these “macro-patterns” I see?

Let’s break it down.

Microsoft Forms is the “Modern” replacement for SharePoint classic surveys, PowerApps is the “Modern” replacement for InfoPath, and Modern list forms… we’ll discuss those momentarily.

I never cared much for SharePoint “classic”  surveys.  They seem too toy-like for anything but the simplest of surveys, and look ugly (to me). In fact, beautifying them requires custom development… typically with React or Angular, and supporting documentation for users to use it, site owners to configure it, and developers to support it. All for very little functionality. Which is why I was happy to see Microsoft Forms…  the “Modern” answer to SharePoint “classic” surveys.

Actually, I thought Microsoft Forms (which sort of sneaked up on me last year) was also too toy-like at first. But with constant improvements over the last year…  I think Microsoft has a winner here. The surveys are easy to design, look nice, are easily embedded in a SharePoint page, and easy to configure. Microsoft Forms plays well with others too – particularly with Microsoft Flow. There is even a curious integration with Excel (someone please tell me if this is useful). If Microsoft Forms stays on its current trajectory, it’ll be the SurveyMonkey® for corporate intranets.

PowerApps is out of the toy-stage, and positioned to replace InfoPath.  I expect PowerApps will do for SharePoint what Visual Basic did for Microsoft Windows development… and become the most popular tool for SharePoint “Modern” power users and developers.  A word to the wise for “I’m too cool for PowerApps… I do React/Angular/Vue” developers out there….  Visual Basic supplanted C++/MFC for corporate Windows development – because it was easier to learn, and quicker for creating “Forms over Data” applications. Will history repeat here, and relegate SPFx development to a smaller – but important – niche for high-end customizations? Maybe.  Regardless, I predict the PowerApps tsunami is coming. Are you ready?

Lastly, consider SharePoint List forms. Specifically, the classic NewForm.aspx, EditForm.aspx and DispForm.aspx forms, and their Modern alternatives. Many sophisticated SharePoint workflows depend on highly customized versions of these forms to create, update and view SharePoint list items.

SharePoint List/Library forms can be quite complex mini-applications in service to SharePoint workflows. Form data must be loaded, and fields labelled, disabled/enabled, validated, hidden or revealed – depending on the workflow state and who is looking. Error handling must be friendly and helpful.  Extra points for beauty.

Clients often don’t realize the effort required to achieve all this…  so it’s important to set expectations, and decide if functional – albeit ugly – forms are sufficient.  Or is beauty important too?

The “Microsoft approved” way to customize SharePoint “classic” forms is via JSLink and Client Side Rendering (CSR) – you add customizations with client-side JavaScript/TypeScript, using Microsoft-provided hooks at key points in the form life-cycle. The coding patterns take some getting used to. And to make the forms visually appealing, you’re on your own. Many SharePoint developers have crashed and burned attempting to wrangle the ridiculously daunting CSS in SharePoint “classic” forms. Which brings me to SharePoint “Modern” form options.

We’re still screwed…  but less so. Beauty is within reach – with Microsoft Office UI Fabric React-based components and SPFx. But a few things are missing for developers.

When I built my first SharePoint “Modern” modal dialog box as an SPFx web part with these components, I was reminded of the early days of Microsoft Windows development, when Microsoft shipped the Windows Software Development Kit (SDK) without a GUI-based dialog editor. Windows developers resorted to “imagining” their dialog layouts while coding them in a text editor. It was highly inefficient… much like the situation today with SharePoint “Modern” modal dialogs. There is no GUI-based form editor… the React-based field controls are “laid out” using JSX in a code editor. I hope Microsoft will eventually provide a GUI-based editor for list-item forms built with Office UI Fabric React components (yes… I know Modern lists can be customized with PowerApps – but that’s not a viable option – not yet – for a SharePoint Modern modal dialog box containing Office UI Fabric components).
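To make that concrete, here’s a minimal sketch of what “laying out” a dialog in JSX looks like. The Dialog, DialogFooter, TextField and button components (and their props) come from Office UI Fabric React; the component name, fields and titles are made up for illustration:

import * as React from "react";
import { Dialog, DialogType, DialogFooter } from "office-ui-fabric-react/lib/Dialog";
import { TextField } from "office-ui-fabric-react/lib/TextField";
import { PrimaryButton, DefaultButton } from "office-ui-fabric-react/lib/Button";

export interface IMemberDialogProps {
    hidden: boolean;
    onDismiss: () => void;
}

// Every field is "positioned" by hand-writing JSX... there is no designer surface.
export class MemberDialog extends React.Component<IMemberDialogProps, {}> {
    public render(): JSX.Element {
        return (
            <Dialog
                hidden={this.props.hidden}
                onDismiss={this.props.onDismiss}
                dialogContentProps={{ type: DialogType.normal, title: "Edit Member" }}>
                <TextField label="Member name" />
                <TextField label="Notes" multiline />
                <DialogFooter>
                    <PrimaryButton text="Save" />
                    <DefaultButton text="Cancel" onClick={this.props.onDismiss} />
                </DialogFooter>
            </Dialog>
        );
    }
}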

What else is missing? For starters, the Office UI Fabric React components lack “SharePoint awareness”, so you’ll need an abstraction layer over them. You could write your own… but it’s smarter to use the SharePoint Patterns and Practices (PnP) SPFx React controls, which extend the Office UI Fabric React controls… adding SharePoint awareness. But even here… we see signs of immaturity. Take, for instance, the oh-so-innocent-looking People Picker control.

A SharePoint People Picker control is one of the nastiest, most difficult form controls to write yourself – I know from personal experience. It’s the “iceberg” of SharePoint field controls, because so much is hidden beneath the surface. So I was very happy to see it offered in the Office UI Fabric React component library… that is, until I discovered it was neutered. The darn thing isn’t connected to SharePoint (or rather, to the Azure AD instance attached to a SharePoint tenant).

But the PnP People Picker control fixes this.  You add it within your Office UI Fabric Dialog component…  and it just works…. almost. At the time of this writing, it lacks a property to initialize the people picker with users and/or SharePoint Groups (I’ve requested an enhancement to rectify this… so stay tuned). This means that if you have an existing list item containing a people field, and that people field is not empty… you cannot initialize the people picker control with that person.  While we’re waiting for a fix, you can do what I did and create your own people picker abstraction layer, starting from this React people picker control.

My beloved Knockout library is useless with these Office UI Fabric controls (and the PnP offshoots)… because you cannot add the Knockout bindings to the HTML attributes within the controls… you don’t have access to the innards. Knowing how much Knockout reduces “classic” form complexity, I worried that Modern form complexity would be unwieldy. But my fears were unfounded, since the controls have sufficient hooks to capture mouse and keyboard events, and properties to control the enabling, disabling, and show/hide behavior. At this point, it’s probably better to show you, rather than tell you about it.
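Still, to give you a flavor of those hooks and properties before we get to the real code, here’s a tiny sketch (not the actual Clean Team form… and note that prop names such as onChanged vary between Office UI Fabric versions): a field that is enabled, and a button that is shown, only when the current user is a site owner.

import * as React from "react";
import { TextField } from "office-ui-fabric-react/lib/TextField";
import { PrimaryButton } from "office-ui-fabric-react/lib/Button";

interface IMemberFieldState {
    comments: string;
    isSiteOwner: boolean;   // determined elsewhere, e.g. from a SharePoint group lookup
}

class MemberField extends React.Component<{}, IMemberFieldState> {
    public state: IMemberFieldState = { comments: "", isSiteOwner: false };

    public render(): JSX.Element {
        return (
            <div>
                <TextField
                    label="Comments"
                    value={this.state.comments}
                    disabled={!this.state.isSiteOwner}   // enable/disable via a prop
                    onChanged={(newValue: string) => this.setState({ comments: newValue })} />
                {/* show/hide by conditionally rendering the element */}
                {this.state.isSiteOwner &&
                    <PrimaryButton text="Save" onClick={() => this.save()} />}
            </div>
        );
    }

    private save(): void {
        // persist the field via REST or the PnP library...
    }
}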

But let’s pull over at the next rest stop….   I need a short break before showing you some code. And no… stop asking… we’re still not there yet… on our SharePoint trip to Modern. But almost.

Here’s our stop… want some ice cream?

-bob

Want Simplicity? Use the SharePoint PnP JavaScript Core Library

If you’re a SharePoint developer and not using the SharePoint PnP JavaScript Core Library… you’re working too hard.

In an earlier blog post, I looked at TypeScript’s async/await feature as a way to simplify asynchronous SharePoint code. In retrospect, the SharePoint PnP JavaScript Core Library is a better, simpler, more elegant solution.

We’ve all been here…  deciding to drop what works… in favor of risking something better. For SharePoint client-side asynchronous code… this was an easy decision.  Here’s a short snippet to illustrate the point:

static GetRoles ()
{
    let deferred = $.Deferred();

    // Get the groups the current user belongs to.
    $.ajax({
        url: _spPageContextInfo.webAbsoluteUrl +
            "/_api/web/currentuser/Groups?$select=Title",
        method: "GET",
        headers: { "accept": "application/json;odata=verbose" },
        success: function (resultsData)
        {
            deferred.resolve(resultsData.d.results);
        },
        error: function (jqXHR, textStatus, errorThrown)
        {
            window.console.log('error: loggedInUser.GetRoles returned an error');
            deferred.reject();
        }
    });

    return deferred.promise();
}

:
:

let promise1 = GetRoles();
:
$.when (promise1).done(function (data1)
   {
   processRoles (data1);
   });

The GetRoles function makes an async call into SharePoint, returning a promise, and the results are processed later. The GetRoles function is loaded with the (typical) ugly async artifacts.

Now compare that version to this PnP version:
$pnp.sp.web.siteUsers.getById(_spPageContextInfo.userId).groups.select("Title").get()
    .then(groups =>
    {
        for (let group of groups)
        {
            // process each group...
        }
    });

Wow… the GetRoles function is reduced to a one-liner… with the low-level async plumbing out of sight. I found that stringing multiple async calls together in parallel or serial fashion was easy, too.
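Here’s roughly what that looks like (a sketch, not code from the actual form… the “CTMembers” list name is just a stand-in):

// Parallel: start both requests immediately, process the results together.
Promise.all([
    $pnp.sp.web.currentUser.get(),
    $pnp.sp.web.lists.getByTitle("CTMembers").items.select("Title").get()
]).then(([user, items]) => {
    console.log(user.Title + " can see " + items.length + " items");
});

// Serial: start the second request only after the first completes.
$pnp.sp.web.currentUser.get()
    .then(user => $pnp.sp.web.siteUsers.getById(user.Id).groups.select("Title").get())
    .then(groups => console.log("member of " + groups.length + " groups"));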

I took the PnP code snippet above from a “traditional CSR” SharePoint list form…  where the PnP library was “included” via this JSLink expression:

https://cdnjs.cloudflare.com/ajax/libs/sp-pnp-js/3.0.2/pnp.min.js

I used the dollar sign ("$pnp") to reference the PnP library in my (non-React) TypeScript code.

But in my React components, I import the PnP library like this:

import pnp from "sp-pnp-js";

and reference the PnP library without the $, like this:

pnp.sp.web.siteUsers.getById(loggedInUserId).get().then(result =>
    {
        // process the result...
    });

It’s all explained on the PnP Core Library GitHub site.
I hope you find the SharePoint PnP JavaScript Core Library as enjoyable as I do.

Was waiting for async/await worth it?

I read once that async and await were coming to JavaScript (they eventually landed in ES2017) and thought “hmm… that’s nice – call me when it works in ES5.” Time passed. Then Microsoft announced TypeScript 1.7 support for async/await – but only when targeting ES2015 or above. More time passed. Then recently, Hell froze over and ES5 support arrived with TypeScript 2.1.

Prior to async/await, I used jQuery “Deferreds” to manage asynchronous function calls. My needs were simple – typically requiring one asynchronous function call. Sometimes, two – called sequentially or in parallel. In all cases, jQuery “Deferreds” met my needs.

But I could never remember the intricacies of these jQuery “Deferreds”. Anytime I needed to work with asynchronous functions, I had to review – once more – how Promises worked, how to chain together two function calls, how to handle errors.

I wondered, would async/await make asynchronous programming easier? The logic clearer? The error-handling better? I investigated. And here is what I found.

Yes, async/await does make asynchronous function call error-handling easier to understand, and easier to implement. And I found that async/await makes it easier to know which code fragments will “run now” versus those that will “run later”.

But async/await doesn’t remove your need to know what promises are, and how to use them. And I did not experience a significant reduction in lines of code after refactoring some legacy code to use async/await. And you can abandon all hope of trying to understand the ES5 code generated from the async/await statements, unless you’re a masochist (or work on the TypeScript team). The TypeScript transpiler literally vomits bizarre ES5 code… bearing no resemblance to the original TypeScript source. So you absolutely need to use Source Maps for debugging, and trust that Microsoft got the transpilation right. I trust they did, and I like debugging with Source Maps (as long as the output isn’t minified) so that I can follow the TypeScript code I wrote and not the JavaScript code emitted.

(Incidentally, you will benefit greatly if you understand how JavaScript uses a single threaded event loop in conjunction with a message queue to process “run now” and “run later” code fragments. There are many good articles explaining the details… just search on ‘JavaScript single threaded event loop’).

If you recall from my last blog post, I asked whether async/await would reduce the number of lines of code in my jQuery “Deferred” code (as the examples below show, the answer is “yes”… at least for the sequential case). But there are additional benefits – better syntax, better error-handling – which make async/await a must going forward.

Let me show you two simple examples I used to explore the goodness of async/await. In the first example, I used a classic “Promises-only” approach (well… classic for me, anyway). The second example uses async/await.

Example 1 – Calling two asynchronous functions using Promises

Here is the first example (the VS Code project files are here). It consists of two asynchronous functions, asynch1 and asynch2, a function that calls them sequentially, and another function that calls them in parallel.

The getAsyncDataSequentially function calls asynch1 and waits for it to return before calling asynch2, and then waits for asynch2 to return before proceeding. This is the classic “sequential” pattern.

The getAsyncDataInParallel function calls asynch1 and then – without waiting – calls asynch2 allowing both asynchronous functions to execute in parallel, and waits for both to return before proceeding. This is the classic “parallel” pattern.

Note that asynch1 and asynch2 are identical, except for their name, and the duration of their setTimeout delays. They simulate asynchronous ajax functions that take different amounts of time to complete.

[Figure: Example 1 – asynch1, asynch2, getAsyncDataSequentially and getAsyncDataInParallel]
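If you don’t want to open the project, here’s a minimal sketch of the shape of asynch1 (asynch2 is the same apart from its name and delay… the actual delays in the project don’t matter):

function asynch1(): Promise<string> {
    return new Promise<string>((resolve, reject) => {
        // simulate an ajax call that takes a little while to complete
        setTimeout(() => resolve("data from asynch1"), 1000);
    });
}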

I’m not going to delve into the mechanics of how all this works, since I expect you’ve already used these patterns yourself. But you may not be familiar with the import statement at the top:

import "es6-promise"
This is needed to bring in the ES5 “Promise” polyfill (and described nicely here).

With a “Promises only” approach, you use .then to resolve a promise, and .catch to handle errors. These add considerable complexity to the getAsyncDataSequentially function, due to the need to nest them. The getAsyncDataInParallel function is less complex because we don’t need to nest .then and .catch.
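For reference, the two calling functions have roughly this shape (a simplified sketch consistent with the description… the project files contain the full versions):

function getAsyncDataSequentially(): void {
    asynch1()
        .then(result1 => {
            console.log(result1);
            // asynch2 starts only after asynch1 resolves... note the nesting
            asynch2()
                .then(result2 => console.log(result2))
                .catch(err => console.log("asynch2 failed: " + err));
        })
        .catch(err => console.log("asynch1 failed: " + err));
}

function getAsyncDataInParallel(): void {
    // both calls start immediately; Promise.all resolves when both have resolved
    Promise.all([asynch1(), asynch2()])
        .then(([result1, result2]) => console.log(result1, result2))
        .catch(err => console.log("one of the calls failed: " + err));
}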

Example 2 – Calling two asynchronous functions using async/await

Example 2 (the VS Code project files are here) refactors Example 1 to use async/await. I was expecting syntactic miracles with async/await… hoping they’d remove the need for Promises. But it was magical thinking on my part. Promises are still needed in the asynchronous functions. So asynch1 and asynch2 don’t get refactored.  The refactoring is with getAsyncDataSequentially and getAsyncDataInParallel – the functions that call asynch1 and asynch2. This was a surprise.

Here is a side-by-side comparison of getAsyncDataSequentially – before and after using async/await… so you can see for yourself:

[Figure: getAsyncDataSequentially – before (Promises only) and after (async/await)]

To my eyes…  async/await (on the right) cleans up the code (on the left) beautifully. In the refactored code, error-handling is exclusively try/catch. And 10 lines of code were eliminated. These benefits – cleaner code, better error-handling – convinced me to use async/await going forward.
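If you can’t make out the screenshot, the refactored version looks roughly like this (a sketch… the project’s logging is more verbose):

async function getAsyncDataSequentially(): Promise<void> {
    try {
        const result1 = await asynch1();   // pause here until asynch1 resolves
        console.log(result1);
        const result2 = await asynch2();   // then pause until asynch2 resolves
        console.log(result2);
    } catch (err) {
        // a rejection from either call lands here
        console.log("getAsyncDataSequentially failed: " + err);
    }
}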

Here’s getAsyncDataInParallel – before and after:

[Figure: getAsyncDataInParallel – before and after async/await]

The refactoring didn’t reduce the code… but I still like the improved try/catch error-handling. I also like the cleaner semantics. In the example on the left, the statement on line 84:

console.log("3");

executes before the .then block. Not so with the async/await code on the right: it pauses at the await statement, and proceeds only after the asynchronous calls complete (with or without an error). I found this much more intuitive.
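Here’s a sketch of the async/await version of getAsyncDataInParallel, with that console.log("3") statement included to show the ordering:

async function getAsyncDataInParallel(): Promise<void> {
    try {
        // both calls start immediately; await pauses until both have resolved
        const [result1, result2] = await Promise.all([asynch1(), asynch2()]);
        console.log(result1, result2);
    } catch (err) {
        console.log("one of the calls failed: " + err);
    }
    console.log("3");   // runs only after the awaited calls complete (or fail)
}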

Feel free to try out the examples yourself (I’ve configured a launch.json file so you can execute & debug them from within VS Code).

In my last blog post, I promised to refactor my Client Side Rendering example to see if async/await improved the code.  It did…  consistent with the examples above. The VS Code files are here.

Here is a comparison of using jQuery Deferreds vs ES2015 Promises in the getRoles asynchronous function:

[Figure: getRoles – jQuery Deferreds vs ES2015 Promises]

I replaced the jQuery “Deferred” statements with “real Promises”. Other than that, the code remained the same.

The bigger change was in the postProcessEditForm function, which calls two asynchronous functions using a parallel pattern:

[Figure: postProcessEditForm – jQuery Deferred ($.when) version vs async/await version]

Both of the postProcessEditForm functions above call two asynchronous functions – GetRoles and GetCTMemberValues – “in parallel”,  pausing until both return. The jQuery “Deferred” version uses $.when to pause, the async/await version uses await.

I didn’t use error-handling in the jQuery Deferred version (no reason not to, I merely left it out of the example), but I used try/catch with the async/await version, and I find it easy to understand.
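Here’s the gist of that async/await version (a sketch… GetRoles and GetCTMemberValues are the real function names from the form, the body is trimmed):

async function postProcessEditForm(): Promise<void> {
    try {
        // start both REST calls, then pause until both have returned
        const [roles, ctMemberValues] = await Promise.all([GetRoles(), GetCTMemberValues()]);
        // apply the field-level permissions based on the results...
    } catch (err) {
        console.log("postProcessEditForm failed: " + err);
    }
}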

In Conclusion…

I heartily recommend using async/await and real “ES2015 Promises” in place of jQuery “Deferreds”. They make your code cleaner, and the error-handling better. And you will get the benefit of less code if you need to chain several asynchronous functions sequentially.

You’ll continue to need Promises… async/await requires them.

In my next blog post, I’ll see if the latest PnP JS Core JavaScript library can simplify my code even more than async/await did. From what I’ve heard, it wraps the SharePoint REST API in a fluent, promise-based interface… no more hand-rolling the Ajax calls and Deferreds yourself.

Stay tuned.

Is TypeScript worth the bother for SharePoint JavaScript snippets?

Introduction

With the SharePoint Framework and other JavaScript frameworks (React, Angular, VueJS, Aurelia) all the rage, and TypeScript (and Babel) bringing C#-like capabilities to JavaScript… does it still make sense to write “raw JavaScript” (ES5) for SharePoint customizations? You know what I’m talking about… those small snippets of JavaScript you’ve embedded within Script Editor Web Parts, or linked to from Content Editor Web Parts, or injected into a SharePoint page via JSLink. You’ve been following the latest advancements in tooling, keeping abreast of TypeScript and ECMAScript developments. Perhaps you’ve taken some online training. But still, you’ve hung back… stayed in your comfort zone with JavaScript. And you’ve wondered… do the benefits of TypeScript – with its static type checking, ES2015 language enhancements, and advanced language features like async/await – outweigh the hassle of setting up a “build” process for transpilation, bundling and minification – for relatively small amounts of JavaScript?

(Note…  if you’ve been out of JavaScript development for a year, then this article is required reading before proceeding further.  Special thanks to Eric Maffei for bringing it to my attention)

I decided to investigate. And for a test case, I chose an example from an actual project, but simplified for this article. The JavaScript example I chose uses Client Side Rendering (CSR) to provide “field-level permissions” for items in a custom list.  CSR was introduced by Microsoft a few years ago as a way to control – via client side JavaScript – the rendering of list views and list item forms.  I’ve found CSR especially useful for manipulating field controls on the New/Edit/View forms of SharePoint custom lists.  Essentially, CSR allows for custom “field-level” permissions.

(I often hear “CSR” confused with “JSLink”. They are not the same.  “CSR” is a Microsoft-supplied SharePoint framework for allowing developers to customize fields within lists and forms. “JSLink” refers to a new string property exposed in list and form web parts to allow CSR JavaScript files to be injected into the page that is rendering the list or form.)

The chosen example contains almost 300 lines of ES5 code, and uses two popular JavaScript libraries – jQuery (for DOM manipulation and Ajax calls) and KnockOutJS (for field validation and display). After converting the example to TypeScript, using several TypeScript-specific language features, experimenting with various “build process” tool chains, and examining the debugging experience, what is my conclusion?

TypeScript is worth the effort (for 300 lines of code definitely… 30 lines of code… maybe not). Especially if you’re starting from scratch. Converting the JavaScript to TypeScript was easy, and went quickly. And by “convert”, I mean using classes, type annotations, namespaces, and a few other TypeScript features. During the conversion, I half-expected to see lots of TypeScript-flagged bugs, but that didn’t happen – probably because the JavaScript had already been thoroughly tested and debugged (that was a first… being disappointed not to see bugs). But in fairness, TypeScript didn’t drastically simplify the code, or reduce the number of lines of code. With only 300 lines of (dare I say) well-crafted code to start with, TypeScript language improvements could only go so far. So, you ask, why my favorable conclusion?

For one thing, TypeScript eliminated several traditional JavaScript “gotchas” that tend to blow up at runtime. You know the usual suspects – bugs arising from JavaScript hoisting “var” declarations you placed in a block deep down inside a function, and bugs from using “==” instead of “===”. TypeScript can eliminate them easily (for example, using let instead of var eliminates hoisting bugs). Also, TypeScript helps clarify the code. TypeScript type annotations – while providing the wonderful benefit of “transpile-time” type checking – improve code “readability” by making variables, function parameters and function return types explicit. So the developer maintaining your code immediately knows your intentions rather than having to infer them. Additionally, TypeScript classes provide that “Object-Oriented” expressiveness I sorely missed with JavaScript. Interestingly, the latest version of TypeScript (2.2.2 as I write this) boasts the ability to transpile async/await statements to ES5 (previous versions were restricted to ES2015 and above), and using async/await did simplify my example’s ajax calls somewhat – but not enough to declare them a “must use” feature.

Ok…  so TypeScript offers these nice features (and many more I haven’t touched on here). But what about the hassle of creating a “tool chain” for transpiling, bundling and minifying that TypeScript code?

To my surprise, establishing a tool chain wasn’t too much hassle. It’s largely a “one-time cost” in the initial setup… knowing which node.js npm packages to assemble and configure. But once that’s figured out… the tool chain is easily re-used on other projects.  During my investigation, I used two popular “tool chains” – Gulp with Browserify, and webpack. (Spoiler alert… I found webpack to be the better approach).

So yes, I think TypeScript is worth the effort even for small amounts of JavaScript code – for the likely chance it will catch bugs earlier in development, improve code readability, and allow for more elegant code abstractions.  For larger amounts of code, TypeScript is a no-brainer.  If only because of the immense advantages of using ES2015 modules to break the code into manageable pieces (demonstrated in this article – admittedly somewhat artificially – by importing jQuery and KnockOut).

If you’re interested in the details of my investigation, please read on. We’ll take a quick look at the example EditForm.aspx page I “modified” with the CSR script, look at some of the TypeScript conversions used, discuss my experience with the two tool chains and debugging experience, and conclude with some final observations.

The Clean Team Form example

Here’s some context about our example.

The ACME Crime Labs corporation uses SharePoint Online to collaborate on criminal investigations. For each investigation, a custom site is provisioned, containing a custom list – CTMembers – for detectives and aides of a “Clean Team” crime unit (“Clean” means the members have no conflicts of interest in the crime investigation). Clean Team members can request access to a secure data room containing highly confidential documents. In our example, we have a CTMembers list with two items… I mean, two detectives – Robert and Duke:

[Screenshot: the CTMembers list with its two detectives]

The list has a “Person” column and a “Yes/No” column for Data Room Access:

[Screenshot: the CTMembers list columns]

Clean Team members can only edit their own “Needs Data Room Access” setting (I said this example was simplified), unless they are site “Owners”. Owners can edit anyone.

Robert is a site “Owner”. Duke is not. Here is what they see when attempting to edit a member:

Duke cannot edit Robert (“Save” button and checkbox are disabled):
[Screenshot]

Duke can edit himself:
[Screenshot]

Robert can edit himself:
[Screenshot]

Robert can edit Duke:
[Screenshot]

Duke can check only his own checkbox, while Robert can check anyone’s – since Robert is a site owner and Duke isn’t. The “field-level” permissions that control this edit behavior are implemented with CSR in under 300 lines of JavaScript. (If you want to learn more about CSR, here’s a nice example by my friend and colleague Javed Ansari. For CSR fundamentals, go here.)

Getting the CSR script into the EditForm.aspx page (along with the jQuery and Knockout libraries) is easy. I put them in a Site Assets library within the site collection – and “inject them” into the page via the JSLink property in the EditForm.aspx web part tool pane:

[Screenshot: the JSLink property in the EditForm.aspx web part tool pane]

To get all three JavaScript files injected via the JSLink property… I concatenated the URLs with a vertical separator character ‘|’, like this:

https://…/SiteAssets/CSR.js | https://…/SiteAssets/jQuery.js | https://…/SiteAssets/knockout.js

[I prefer SharePoint Designer to set the JSLink property – it’s easier than navigating the SharePoint UI].

My dev/test/deploy cycle is straightforward:  Edit the CSR JavaScript with VS Code, copy it to SharePoint, debug in Chrome.  Wash, rinse and repeat.

With only 300 lines of JavaScript code to deal with, this “application life cycle” is not terrible… nor is the debugging too bad (I’m particularly fond of Chrome Developer Tools for debugging). But it took many cycles to ensure the JavaScript code was solid.

Some Easy-Peezee TypeScript Refactoring

So what magic can TypeScript bring to this party? And what is the least amount of effort needed to configure the VS Code editor for TypeScript – no Gulp, no Browserify, no webpack? (Bowden Kelly, Program Manager for Visual Studio, gave an excellent talk on this very topic.) But I had to see for myself, so here’s what I did.

“No Tool Chain” VS Code Configuration for TypeScript
(Effort: 5 minutes)

There are tons of examples for how to configure VS Code for TypeScript with a “build” tool chain to transpile/bundle/minify.  I wanted none of it… just VS Code (on my PC) and TypeScript. Here are the steps:

  1. From the integrated terminal prompt within VS Code:
    • Type “npm init” (use all the defaults) to create and initialize package.json.
    • Add the following tsconfig.json file to the folder:
    [Screenshot: tsconfig.json]
  2. Add the JavaScript file “CTMemberEditFormCSR.js” to the top folder and change the extension from “.js” to “.ts”.
  3. Type CTRL-SHIFT-B to compile (transpile) “CTMemberEditFormCSR.ts”. The first time you do this, you’ll be prompted to configure a task runner:
    [Screenshot: the “Configure Task Runner” prompt]

Go ahead and click “Configure Task Runner”, and then click the menu option for “TypeScript – tsconfig.json”:

[Screenshot: the task runner options]

This creates a tasks.json file.

  4. Type CTRL-SHIFT-B again – the task runner invokes tsc.exe (the TypeScript compiler) and transpiles “CTMemberEditFormCSR.ts”, outputting “CTMemberEditFormCSR.js” to the “dist” folder, and places the TypeScript compiler into watch mode (I’m assuming much of this is familiar to you).
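Since the tsconfig.json screenshot from step 1 isn’t reproduced here, a minimal configuration along these lines produces the behavior described in step 4 – ES5 output, written to a “dist” folder (the exact settings in my screenshot may differ):

{
    "compilerOptions": {
        "target": "es5",
        "outDir": "dist"
    }
}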

Here is a screenshot after the initial transpilation:

[Screenshot: VS Code after the initial transpilation, showing several “Cannot find name…” errors]

Wow… look at those errors.  The “Cannot find name…” errors are because TypeScript doesn’t know about jQuery, KnockOutJS, and SharePoint (e.g. SP.js). So let’s fix that.

5. Add Type Declarations for External Libraries

To be clear… I don’t want to download and bundle jQuery and KnockOutJS with my code. Those libraries are already deployed to SharePoint (and “included” in the SharePoint page via JSLink). I just want their TypeScript declarations so VS Code can provide Intellisense for them. According to Marius Schulz (one of my favorite TypeScript bloggers):

In TypeScript 2.0, it has become significantly easier to acquire type information for JavaScript libraries. There’s no longer a need for additional tools such as typings or tsd. Instead, type declaration packages are directly available on npm.

So to obtain the TypeScript declarations for these JavaScript libraries, I typed these npm commands in the VS Code terminal window:

npm install --save @types/jquery
npm install --save @types/knockout
npm install --save @types/sharepoint

This created a node_modules folder, downloaded the declaration files into it, and updated the package.json file (you may need to restart VS Code to see the changes). For an in-depth discussion on this “typings” goodness, see Marius’s excellent blog.

At this point, VS Code can transpile my one TypeScript file to ES5 JavaScript, and the resulting JavaScript code is identical to the TypeScript code, except that it has been stripped of all blank lines.

Here are the files for VS Code.

This setup took about 5 minutes, so not too annoying. And now I’m ready to investigate the pros and cons of TypeScript. There are 5 areas I specifically wanted to refactor in my example code:

  1. Replace the IIFE with a namespace
  2. Replace var with let
  3. Remove implicit casts
  4. Add type annotations
  5. Replace object literals with static classes

So without further ado, here are the results:

Refactoring – Replacing the IIFE with a namespace (Effort: 20 seconds)

I replaced the JavaScript IIFE (Immediately Invoked Function Expression) – a hideous ES5 mechanism for keeping variables out of the global JavaScript namespace – with a respectable TypeScript “namespace”.

That is, I replaced this:

//  IIFE

(function () {
    // all code goes here...
})();

with this:

namespace myCTMemberEditFormCSR
{
    // all code goes here...
}

Refactoring – Replacing var with let (Effort:  20 seconds)

JavaScript variables declared with var are scoped to the nearest enclosing function – not the nearest enclosing block – always leaving me with that sneaking suspicion a bug may be lurking in my ES5 code. I much prefer let, which scopes variables to the nearest enclosing block – like C# declarations. In fact, I see no reason to ever use var when let is available. So I replaced all occurrences of var with let in the example.
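A quick illustration of the kind of bug this avoids (contrived, not from the example code):

// With var, the loop variable is hoisted to the enclosing function...
for (var i = 0; i < 3; i++) {
    // ...
}
console.log(i);     // still in scope here... a frequent source of subtle bugs

// With let, the variable is scoped to the block:
for (let j = 0; j < 3; j++) {
    // ...
}
// console.log(j);  // transpile-time error: cannot find name 'j'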

Refactoring – Remove Implicit Cast (Effort: 5 seconds)

I had been relying on the implicit casting behavior of ‘==’ in ES5 to compare a string to a number. Here is the expression I was using:

// using "==" instead of "===" to cast string to number
//
(myCTMemberEditForm.CTMemberId == _spPageContextInfo.userId);

Relying on the implicit casting behavior of the double ‘==’ is a known source of errors in ES5 if done without awareness of that behavior – the reason many “Best Practice” recommendations avoid it in favor of the triple ‘===’, which doesn’t cast. TypeScript will have none of this implicit casting nonsense and requires you to be explicit. Hence the need to tack toString() onto the end of the _spPageContextInfo.userId ‘number’ variable in order to keep TypeScript happy:

(myCTMemberEditForm.CTMemberId == _spPageContextInfo.userId.toString());

Refactoring – Replacing Object Literals with Static Classes
(Effort: 10 minutes)

I often use JavaScript object literals as singletons to organize my code. These are “sort of” analogous to C# static class singletons. They do a decent job of making the code easier to read by grouping functions and variables that logically belong together. In the example, you’ll notice two “object literal singletons” – loggedInUser (representing the user who is editing the form) and myCTMemberEditForm (representing the form and its field controls).

By refactoring them into TypeScript classes with static properties and methods… we get the additional benefits of static type checking. So let’s do that.

Here’s the original:

var loggedInUser = loggedInUser || {};
loggedInUser.IsCTMember = false;
loggedInUser.IsSiteOwner = false;
:

And here’s the refactored version:

class loggedInUser
{
    static IsCTMember: boolean = false;
    static IsSiteOwner: boolean = false;
    :
}

By annotating each member variable with a type, my intentions are explicit. And VS Code warns if there is a type mismatch, with visual “squiggles” in the editor.

Babel – another popular transpiler – does not offer type annotations. Babelites tout this as a benefit… arguing that types can usually be inferred from the code anyway. TypeScript also infers types in the absence of type annotations. And this can be helpful in certain instances – like when it’s too painful to figure out the proper type for a complex JSON string (similar to using var in C# to handle LINQ results). But in general, I prefer TypeScript annotations to make my intentions explicit, for better readability. But that’s just my opinion.

One thing I noticed while refactoring the “object literal singletons” into classes is how I had sprinkled declarations throughout the code, rather than in one place. In retrospect, this makes the JavaScript harder for someone else to understand. TypeScript classes tighten this up, improving readability.

Refactoring – Annotating the CSR object literal (Effort: 1 hour)

When using CSR, you create a CSR “Template Override” object literal and initialize it with various callback functions. I wanted to annotate the object, and the callbacks.  It took significant effort. For example, I changed this:

var overrideCtx= {};

To this:

let overrideCtx: SPClientTemplates.TemplateOverridesOptions = {};

It was difficult locating the correct annotations… especially for the nested types within overrideCtx. In particular, it took me a while to figure out that the correct function parameter for overrideCtx.OnPostRender is:

static OnPostRenderFunc (ctx:SPClientTemplates.RenderContext_Form)

I suppose if I were doing many CSR scripts, this would be less of a burden, but for a one-off… it was a chore achieving this level of TypeScript perfection.
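For context, here’s the skeleton those annotations hang off of (a trimmed-down sketch of the CSR registration pattern, not the full script):

namespace myCTMemberEditFormCSR
{
    class formController
    {
        static OnPostRenderFunc(ctx: SPClientTemplates.RenderContext_Form): void
        {
            // enable/disable and show/hide fields based on the logged-in user...
        }
    }

    let overrideCtx: SPClientTemplates.TemplateOverridesOptions = {};
    overrideCtx.OnPostRender = formController.OnPostRenderFunc;
    SPClientTemplates.TemplateManager.RegisterTemplateOverrides(overrideCtx);
}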

Debugging TypeScript

It is comforting to debug in TypeScript. If you’re familiar with debugging JavaScript in Chrome – it’s almost identical. You set your breakpoints in a TS file (you can also add a “debugger” statement in TS, just as you would in JS, and the Chrome debugger will pause when it hits it) and away you go.

To get this to work, I needed to do three things:

  1. Set “sourceMap” to true in the tsconfig.json file:
    [Screenshot: tsconfig.json with “sourceMap” set to true]
  2. Copy the transpiled output (and source map) to SharePoint. In my example, I copied them here:
    [Screenshot: the SharePoint location of the transpiled .js and .js.map files]
  3. Copy the original TypeScript file to SharePoint, with the same folder hierarchy used in VS Code. Here it is in SharePoint:

[Screenshot: the TypeScript file in SharePoint]

Bring in the Chains

You are probably using a more sophisticated “build” process (“tool chain”) for client-side development than what I’ve demonstrated so far. But really, is that needed here? In my opinion, perhaps not needed, but recommended. The hassle of configuring a more advanced build process is mostly a “one time” cost assembling the right combination of node.js npm packages to perform transpilation, bundling and minification tasks. And orchestrating those tasks. Once you’ve figured that out…. the tooling is easily re-used in subsequent projects.

I investigated two of the most popular “tool chains” to compare the effort needed to set them up, and to offer my advice choosing one over the other.

If you’ve been following the changes with client side tooling, you’ll know change is a constant. Within the last 18 months, I’ve personally used Grunt, then Gulp, then npm scripts, and experienced the joys of ES2015 modules and how best to link modules together, starting with runtime (dynamic) linking with loaders like SystemJS and JSPM, before switching to build-time (static) linking with bundlers like Browserify, and more recently, webpack. I see no end to the churn.

I suppose a tool chain might be overkill for one TypeScript file since there’s nothing to bundle. But then again, it might be nice to package up everything –  the TypeScript file + libraries – into a single bundle.js file. Here’s the idea graphically:

[Figure: bundling workflow – the TypeScript file and libraries packaged into a single bundle.js]

The “Browserify Tool Chain” required several npm packages, a simplified tsconfig.json file, and gulp to coordinate the transpilation, bundling and minification tasks. The three files that control it all are shown below (and will look familiar if you’ve used gulp before):

[Screenshot: the three files that control the Browserify build]

To separate my TypeScript file from these “controlling” files, I placed the TypeScript file into a src folder.  Here is the new folder structure in vscode:

[Screenshot: the new folder structure in VS Code, with the TypeScript file in a src folder]

I admit my “Browserify” configuration is not optimal…  but my objective was to judge the effort to configure a “traditional front-end build process”. And so, while it’s more complex than before… it didn’t take too long to throw this together. Type “gulp” at a terminal prompt… and watch gulp do its thing.
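If you’re curious, the gulpfile follows the standard Browserify + tsify pattern… something along these lines (a sketch, not my exact file… the later commented-out minification step isn’t shown):

var gulp = require("gulp");
var browserify = require("browserify");
var tsify = require("tsify");
var source = require("vinyl-source-stream");

gulp.task("default", function () {
    return browserify({ entries: ["src/CTMemberEditFormCSR.ts"], debug: true })
        .plugin(tsify)               // transpile the TypeScript
        .bundle()                    // bundle everything into a single stream
        .pipe(source("bundle.js"))   // name the output file
        .pipe(gulp.dest("dist"));    // write it to the dist folder
});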

I was disappointed in minification, however (which is why I commented out the minification step on line 25 in gulpfile.js. And minification isn’t going to do much with 300 lines anyway). The Browserify “Uglifier” severely degraded the runtime debugging experience in Chrome….   I couldn’t set breakpoints on every line nor reliably single-step through the minified code. So I recommend not minifying your transpiled TypeScript code.  And sure, I could optimize my Browserify build process, but Browserify is – in my opinion – obsolete. I prefer webpack (webpack 2, actually).

Here are the files for the Browserify build.

Webpack is another popular tool for transpiling, bundling and minifying TypeScript. It’s been around for several years, but seems (to me) to have really taken off in the last year. No doubt webpack is already obsolete for some of you (rollup, anyone?), but I think webpack will remain very popular for the foreseeable future.

Some of the coolest webpack features have cool names – like “Hot Module Reloading” and “tree shaking”. Two benefits I particularly like are “no gulp needed” (self-explanatory), and “vendor code splitting” (webpack bundles can be split up – such that 3rd party libraries are bundled separately from application code).

I was disappointed to find that debugging webpack-minified code remained problematic, so I omitted minification as before. A good work-around is to use “vendor code splitting” to separate jQuery & KnockOutJS from the TypeScript code, keeping the application code relatively small while also providing a good debugging experience (maybe I’ll do that in another blog post).

Here are the three files that control the “webpack 2 tool chain”:

[Screenshot: the three files that control the webpack 2 build]

And here is the file structure in VS Code:

[Screenshot: the folder structure in VS Code for the webpack build]

Note the npm script (in package.json) named “build”. To kick-off the build, just type “npm run build” in a terminal window prompt.
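For reference, a webpack 2 setup along these lines would do it (a sketch with assumed file names… minification omitted, as discussed):

// webpack.config.js
var path = require("path");

module.exports = {
    entry: "./src/CTMemberEditFormCSR.ts",
    output: {
        filename: "bundle.js",
        path: path.resolve(__dirname, "dist")
    },
    resolve: {
        extensions: [".ts", ".js"]
    },
    module: {
        rules: [
            { test: /\.ts$/, loader: "ts-loader" }   // transpile the TypeScript
        ]
    },
    devtool: "source-map"                            // keep the debugging experience pleasant
};

// package.json (scripts section)
// "scripts": {
//     "build": "webpack"
// }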

Here are the files for the webpack build.

In Conclusion…

So is TypeScript worth it for relatively small amounts of JavaScript? For new snippets, yes… I think TypeScript is worth the effort (along with a re-usable build process). TypeScript will catch more bugs at “transpile time” (instead of at run-time). And TypeScript offers much better language abstractions to better model your application code, and the debugging experience (as long as you don’t minify it) is good. But it’s probably not worth the effort to convert existing snippets of JavaScript to TypeScript.

One area I didn’t cover in this blog… was how TypeScript might improve asynchronous REST calls with async/await. I’ll cover that in my next blog post (sneak peek… it helped). And I’ll also look at the new Microsoft PnP JavaScript library offerings – which do promise (pun intended) to greatly simplify the asynchronous REST calls (especially PUT).

Happy Coding.