
Azure CLI, DAX, PDFs, VUE, Accessibility, WPF

JAN/FEB 2020
codemag.com - THE LEADING INDEPENDENT DEVELOPER MAGAZINE - US $8.95 Can $11.95

Compiling Scripts to Get Compiled Language Performance

Azure CLI Basics
Data Reporting with DAX
Building Business Apps with WPF
April attendees registering by March 20 will receive a Disney park ticket at twilight.

REGISTER EARLY for a WORKSHOP PACKAGE and receive a choice of hardware or hotel gift card! Shown are samples of past hardware choices: Surface Go, Xbox One X, Xbox One S, Surface Headphones.

@DEVintersection @AzureAIConf

DEVintersection.com 203-264-8220 M-F, 9-4 EDT AzureAIConf.com


SPRING & FALL DATES
April 7–9, 2020
Workshops April 5, 6, 10

Orlando, FL
WALT DISNEY WORLD SWAN
AND DOLPHIN

Dec 8–10, 2020


Workshops December 6, 7, 11

Las Vegas, NV
MGM GRAND

Powered by DEVintersection.com
203-264-8220 M-F, 9-4 EDT
TABLE OF CONTENTS

Features

8 Azure CLI
Sahil shows you how to use your browser and Azure CLI to manage Azure resources no matter which platform you choose.
Sahil Malik

14 Working with iText for PDFs
You’ve probably worked with PDFs. Did you know that you can use them to create great interactive documents, like sign-in screens or usage agreement screens? John shows you how.
John V. Petersen

20 A Design Pattern for Building WPF Business Applications: Part 4
Paul continues his series on WPF with how to manage the state of buttons, and to add, edit, and delete users.
Paul D. Sheriff

28 Vuex: State Management Simplified in Vue.js
Shawn shows you how—and why—to use Vuex in your Vue.js projects to simplify and centralize state management.
Shawn Wildermuth

34 Accessibility Guidelines and Tools: How Do I Know My Website Is Accessible?
Of course, you want the maximum number of people to use your website or product. To achieve this potential, read Ashleigh’s insightful exploration of accessibility to make sure that people with disabilities can have access to your work too.
Ashleigh Lodge

46 Compiling Scripts to Get Compiled Language Performance
Vassili teaches you how to improve compiler performance of a scripting language by splitting the script into functions. That way, you don’t have to worry about one statement’s failure causing the whole script to fail.
Vassili Kaplan

56 Nest.js Step-by-Step: Part 3 (Users and Authentication)
This third part of Bilal’s Nest.js exploration introduces a Users Module that allows you to create a user and locate them in the database, and shows how to add the Auth Module that allows users to register their new account and log in.
Bilal Haidar

64 Financial Modeling with Power BI and DAX: Life Insurance Calculations
Insurance companies keep track of an astonishing amount of data and manage complex calculations. Helen shows you how to use DAX to keep up with it all.
Helen Wall

Columns

74 Managed Coder: On Contribution
Ted Neward

Departments

6 Editorial
18 Advertisers Index
73 Code Compilers

US subscriptions are US $29.99 for one year. Subscriptions outside the US pay $49.99 USD. Payments should be made in US dollars drawn on a US bank. American Express,
MasterCard, Visa, and Discover credit cards are accepted. Bill Me option is available only for US subscriptions. Back issues are available. For subscription information,
send e-mail to subscriptions@codemag.com or contact Customer Service at 832-717-4445 ext. 9.
Subscribe online at www.codemag.com
CODE Component Developer Magazine (ISSN # 1547-5166) is published bimonthly by EPS Software Corporation, 6605 Cypresswood Drive, Suite 425, Spring, TX 77379 U.S.A.
POSTMASTER: Send address changes to CODE Component Developer Magazine, 6605 Cypresswood Drive, Suite 425, Spring, TX 77379 U.S.A.

4 Table of Contents codemag.com


EDITORIAL

Theory Meets Reality


I’ve been programming full time for nearly three decades now. Early in my college days, I learned about
structured programming, how to design systems to be modular, and to re-use code as much as possible.
In the early 90s, I brought what I learned in the procedural era forward into the object-oriented era.

I was a big fan of the Design Patterns book. Over the years, I continued to move this practice forward in some way, shape, or form.

One of the tenets of this style of development was the idea that if applications were written properly, you could replace entire sub-systems without rewriting the entire application. So how many times have I had to actually perform this task? Doing a scientific survey of the various systems I have written over the years, I have come to a final calculated value: ZERO. Until last week, that is…

As I took the Tube from Heathrow into Central London, an e-mail from a client popped up on my phone. It read: “We are currently experiencing show-stopping errors for the mobile application for people who upgraded to iOS 13.” As of late, a lot of issues with our HTML5 application stem from Apple’s arbitrary changes to their JavaScript implementation in Safari. My question to myself was “What did Apple do THIS time?”

It turns out that a feature we relied on heavily was deprecated in the latest version of Safari. When we started this offline application, we built the foundation of our application using a feature of the HTML5 spec called Web-SQL. It turns out that Web-SQL was deprecated a few years ago because of browser vendor politics. Apple chose to snuff out this feature in iOS 13 once and for all. For those readers savvy about such things, I’m aware of a feature of iOS-Safari called “Experimental Features,” and how I can theoretically turn Web-SQL back on via the simple flick of a switch.

In our case, turning this feature on didn’t fix the issue. We issued an alert to our engineers in the field asking them to hold off upgrading to iOS 13 until we could come up with a fix. I thought to myself (actually, cursed to myself), “What am I going to do to solve this?”

Well, lucky for me, those principles I learned early in my education and career finally came into play. Three or four years ago, I rewrote big parts of this system so we could build new features using a “cookie-cutter” approach. The logic for managing our forms library was encapsulated into a common module called baseform.js. This module has three main components: a set of HTML files for the interface, a set of definition files used to define the data used by the form, and a single wire-up file that’s used to tell the baseform.js module how to stitch all of these different parts together into a running form. Part of this process is used to create the SQLite tables used to persist our form data. These tables are generated using the definition files required by the base form.

This is where the story really begins. How do we change from a SQL-based storage engine to an in-memory storage engine? Luckily for us, we had a foundation to start with.

Over the last few months, we’ve been working on a time-tracking application that will provide the foundation for our changes. During the construction of the time tracker, we decided to store its data in memory versus using the SQL infrastructure we’d always used. Basically, we stored all our time data in an array of JSON-encoded data. Could we use this same mechanism for our form data? With this question in mind, I started my investigation.

The first step was to narrow down how baseform.js handled the CRUD (Create, Read, Update, Delete) operations for a given form. Fortune shined upon me and the process of handling these operations was confined to a few narrow slices of code. These slices of code copied the contents of form data into arrays. The original logic then took these arrays and created their respective SQL INSERT and UPDATE statements. With the heavy lifting already completed, I simply needed to create a mechanism for storing the data into a large JSON-encoded array. That was the only really difficult part, and it wasn’t that tough, in all reality. It took around eight hours from start to finish to complete this structural overhaul.

When looking back on this overhaul, I consider that a combination of several factors helped its success, which brings to mind a prophetic saying: “Luck is what happens when preparation meets opportunity.” I was both prepared and lucky.

When it comes to being prepared, the initial design of the code was modular enough that we had specific points of code. We’d isolated their functions properly, particularly the points that gathered and stored data from forms. This was also lucky. We essentially created our own luck. Finally, the “opportunity” was created by our fine friends at Apple. They broke our code, which created this opportunity for us to switch out our storage engine. As luck would have it, we were prepared to have our theory meet reality head on!

Rod Paddock

GET YOUR FREE HOUR

TAKE AN HOUR ON US!

Does your team lack the technical knowledge or the resources to start new software development projects,
or keep existing projects moving forward? CODE Consulting has top-tier developers available to fill in
the technical skills and manpower gaps to make your projects successful. With in-depth experience in .NET,
.NET Core, web development, Azure, custom apps for iOS and Android, and more, CODE Consulting can
get your software project back on track.

Contact us today for a free 1-hour consultation to see how we can help you succeed.

codemag.com/OneHourConsulting
832-717-4445 ext. 9 • info@codemag.com
ONLINE QUICK ID 2001021

Azure CLI
CLI—or command line interface—because the clickety click in the Azure portal gets real old real quick! The thing is, the Azure portal is great, but as soon as I’m done doing something, I want to document it. That means taking lots of screenshots, marking them nicely with annotations, and then saving those screenshots in a folder, referencing them in markdown, and, because I use a retina screen, those screenshots are never the right size! Ugh!

I could just write up Azure CLI commands instead. That way, I can automate stuff, it’s documented, checked in, version controlled, repeatable, and frankly, it isn’t much more effort than doing it in the portal. In fact, I’d argue that Azure CLI almost makes me never actually use the portal, except for trivial one-time use-and-toss things!

So what is the Azure CLI? It’s the cross-platform command line experience that lets you manage Azure resources.

Sahil Malik
www.winsmarts.com
@sahilmalik

Sahil Malik is a Microsoft MVP, INETA speaker, a .NET author, consultant, and trainer. Sahil loves interacting with fellow geeks in real time. His talks and trainings are full of humor and practical nuggets. His areas of expertise are cross-platform mobile app development, Microsoft anything, and security and identity.

Fair Warning

All right, I’m going to get some flak for this, but this article is severely opinionated toward the *nix shell. I realize that Azure CLI runs fine on a DOS prompt. And I also realize that this is purely my opinion. But let’s be honest, PowerShell works better on Windows. I realize there’s a PowerShell for Mac as well, but it never seems to have the commands I need. All the modules I love to use don’t work on my Mac. Meanwhile, I love the *nix terminal; it’s so much more powerful than the usual DOS prompt. When I think of Azure CLI, I just go with the assumption that it’s running on some sort of environment that’s *nix.

The good news is that even on Windows, you can easily get the *nix shell. It’s really not a showstopper.

Use Azure CLI Without Installing It

The best part about Azure CLI is that you don’t really need to install it to use it. Simply open your browser and visit https://shell.azure.com. Before you know it, you’re using Azure CLI right within the browser. You can see it running in Figure 1.

You can access Azure CLI directly from your mobile phone as well. Just download the Azure app and look for the cloud shell icon on the bottom right. Tapping on it brings up Azure CLI right on your phone. This can be seen in Figure 2.

There’s something really awesome about being able to automate the power of the cloud from your phone.

Install Azure CLI

Although accessing Azure CLI from the browser is convenient and accessing it from the phone is cool, let’s be honest: you’ll frequently want to install Azure CLI. Why? Because installing it on your local computer allows greater control, a better typing experience, an environment and shell you can customize to your liking, etc.

Figure 1: Using Azure CLI in the browser
Figure 2: Azure CLI on my phone

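That “documented, checked in, version controlled, repeatable” workflow can be as simple as committing a shell script of az commands alongside your project. A minimal sketch—the resource names are invented for illustration, and `bash -n` only syntax-checks the script without touching Azure:

```shell
# Write the portal clicks down as a script you can check in and re-run.
cat > create-dev-env.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail   # stop at the first failed command

az group create --name dev-rg --location westus2
az appservice plan create --name dev-plan --resource-group dev-rg
EOF

# Syntax-check the script; this doesn't execute any az command.
bash -n create-dev-env.sh && echo "script parses cleanly"
```

Because the script lives in version control, the script itself is the documentation—no screenshots required.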


You can install Azure CLI on Mac, Windows, or Linux. Or, if you prefer, you can even run it as a Docker image. Here’s how you can do any of the four.

To install Azure CLI on Windows, you can use this command:

Invoke-WebRequest
  -Uri https://aka.ms/installazurecliwindows
  -OutFile .\AzureCLI.msi;
Start-Process msiexec.exe -Wait
  -ArgumentList '/I AzureCLI.msi /quiet'

Running this command downloads the Azure CLI installer and installs it. If you have Azure CLI already installed, this command updates it.

To install Azure CLI on Mac OS, you can use Homebrew. Once Homebrew is installed, go ahead and run the command below:

brew update && brew install azure-cli

Similarly, if you wish to upgrade Azure CLI on a Mac, run this command:

brew upgrade azure-cli

On Linux, you can install Azure CLI using the command below:

curl -sL
  https://aka.ms/InstallAzureCLIDeb
  | sudo bash

Let’s say that you don’t want to install WSL and you don’t want to install Azure CLI. Just use Docker. Simply use the command below to run Azure CLI as a Docker image:

docker run -it mcr.microsoft.com/azure-cli

As you can see, this is one of the reasons that Azure CLI is so popular. It runs everywhere, it runs easily, and it’s consistent across various operating systems.

Use Azure CLI

No matter how you installed Azure CLI and on what OS, the process of using it is consistent. This is a big deal because it bridges the differences among operating systems. Now you can check in your Azure CLI scripts and be assured that they’ll work for your co-workers, no matter what OS they use.

This is why I prefer to do this on a *nix operating system. Invariably, you’ll see open source projects that write Azure CLI commands in shell scripts intertwined with *nix commands. Just make your life easier and install WSL on Windows.

To use Azure CLI, you type “az” on your terminal prompt. Writing this command throws a wall of ASCII at you. It shows all the commands you can use. Pick any of those commands. For instance, “vm” is a valid command that lets you manage Linux or Windows virtual machines. Run this command on the terminal:

az vm

This should produce an output, as shown in Figure 3.

Figure 3: Output of az vm

As you can see, this command shows you all the sub-commands that “az vm” supports. For instance, “az vm create”. If you ever wish to find out how to use that command, you simply say “az vm create -h” for help text. This writes out examples, documentation, and all that good stuff you need to get going with this command.

This is another thing I love about Azure CLI: it teaches itself to you.

Azure CLI Login

Azure CLI exposes a number of commands, but hardly any of them are usable unless you log in first. I think it’s quite important to understand how the login process works, not only as a user of Azure CLI but also behind the scenes, because more and more other CLIs, such as Terraform, are piggybacking on this mechanism.

To log in using Azure CLI, you simply issue this command:

az login

Running this command launches your operating system browser. At this point, you do your usual Azure AD login and go back to your terminal. All of the usual advantages of the Azure AD login apply here, such as security policies, threat detection, MFA, etc., to keep your Azure CLI login experience protected.

What really happened here? Azure CLI acts as a native client. The process of authentication was similar to that of any first-class identity citizen. At the end of authentication, it stores a refresh token and access token pair in ~/.azure/TokenCache.dat. Now, any third-party CLI can piggyback on this authenticated session. This truly simplifies the job of other CLIs. There are a number of additional files in that same folder, so feel free to explore further.

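That cached session is also what lets a script mint tokens: `az account get-access-token` (which the article returns to later) reads the cache and hands back a bearer token you can pass straight to curl. A sketch—the helper is only defined here; actually calling it requires a prior az login:

```shell
# Fetch a bearer token for Microsoft Graph from the cached az login,
# then call the /me endpoint with it.
call_graph_me() {
  local token
  token=$(az account get-access-token \
    --resource https://graph.microsoft.com \
    --query accessToken --output tsv)
  curl -s -H "Authorization: Bearer $token" \
    https://graph.microsoft.com/v1.0/me
}

echo "call_graph_me helper defined"
```

The `--output tsv` flag strips the JSON quoting so the token drops cleanly into the shell variable.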


As you can see, the browser pops open to authenticate you. But what if you don’t have the flexibility of launching the browser? For instance, let’s say that you run Azure CLI as a Docker container. Or perhaps you SSH into another computer and wish to launch Azure CLI from there? It’s simple; you issue the command like this:

az login --use-device-code

Using this login mechanism, you’re now using an alternate authentication mechanism called the device code flow. This mechanism gives you a code that you can type into a URL and perform authentication, so there’s no dependence on the native browser. Once you’re authenticated, you can party as usual.

You can also choose to log in as a service principal using the command:

az login --service-principal
  -u <username> -p <password> -t <tenant>

You will find logging in as a service principal particularly useful when automating various processes, such as in Azure DevOps pipelines or unattended processes. Another particularly useful trick using service principals is when you wish to get access tokens for any random API that you wish to test. More on that later in this article.

Another really amazing feature of Azure is managed identity. A managed identity is an identity that you assign to an Azure resource so that the resource can do things as that identity. This is a great feature because you have no password to manage; the identity is self-contained within Azure.

You could have a Docker container, or perhaps a virtual machine, or perhaps an app service, that’s given an Azure identity. Wouldn’t it be nice to use Azure CLI under the permissions of that managed identity from within that process? You can do so using this command:

az login --identity

You can use managed identities or service principals to perform automation scenarios. We all know that certain APIs are stubborn, albeit those are very few. Those APIs must require a user identity, or perhaps your boss dictates that you must use a user identity for some silly reason. Azure CLI can entirely skip the browser for authentication and allow you to specify credentials directly in the login command. You can do so using the command shown below:

az login -u <username> -p <password>

I must discourage you from using this approach as much as you can. Not only do you have the headache of managing that username and password, but this approach won’t work if you use things such as MFA. Behind the scenes, this mechanism uses something called the ROPC grant, which suffers from numerous issues. You can read more about it here: https://winsmarts.com/ropc-prefer-strongly-to-use-8b99039573d8.

Finally, sometimes you may wish to use Azure CLI without an Azure subscription. For instance, you may want to manage an Azure AD that has no Azure subscriptions tied to it. You can do so using this command:

az login --allow-no-subscriptions

As you can see, there’s almost no conceivable situation that they haven’t thought of. This makes Azure CLI a very versatile tool.

So now you’ve logged in using Azure CLI, but what can you do with it?

Figure 4: Azure CLI in interactive mode

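The service-principal login is the piece you’d drop into an unattended pipeline step. A sketch: SP_APP_ID, SP_PASSWORD, and SP_TENANT are placeholder variable names you’d populate from your pipeline’s secret store, and nothing runs against Azure until sp_login is actually called:

```shell
# Non-interactive login for CI/CD; credentials come from the environment,
# never from the script itself.
sp_login() {
  az login --service-principal \
    -u "$SP_APP_ID" \
    -p "$SP_PASSWORD" \
    -t "$SP_TENANT" \
    --output none
}

# A pipeline step would export the three variables, then run: sp_login
echo "sp_login helper defined"
```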


Tenants and Subscriptions

An important concept to know is that Azure AD is tied to a tenant, i.e., one tenant on Azure AD. However, you may have multiple subscriptions attached to a tenant. Additionally, your login account, for example, sahilmalik@tenantxyz.onmicrosoft.com, may have access to another subscription in a completely different tenant, such as tenantabc.onmicrosoft.com.

This is, in fact, a very common scenario, given that companies frequently hire third parties to manage certain functions in their Azure subscriptions. This is all tied down using Azure RBAC (role-based access control).

The reason I bring this up now is that when you log in using Azure CLI, you’ll have authenticated as a user, a service principal, etc. But this identity can have access to numerous subscriptions in your tenant or other tenants.

You can easily see the subscriptions attached to your account using the command below:

az account list

Like any other Azure CLI command, if you wish to learn how to manage these accounts, you can issue the command below:

az account -h

As you can see, you can switch subscriptions using the set command or view the current subscription using the show command.

Azure CLI Interactive Mode

Perhaps the best part of using Azure CLI is that it teaches itself to you. You can write --help, or the shorthand -h, in front of any command, and it writes out exactly what you can do.

But there is an even cooler way to learn: Azure CLI interactive mode. You can start this mode using this command:

az interactive --update

Running this command turns your entire terminal into an interactive studio, as can be seen in Figure 4. This is quite incredible, really. There’s a lot of interesting detail in Figure 4. You can see basic help for the command and parameters you’re using, and it’ll even show you examples as you build your command by showing you IntelliSense. It’s a great way to get started with Azure CLI.

This is where the fun ends. The first time I saw Azure CLI interactive, I was blown away. But as I got more proficient with Azure CLI, I felt that it slowed me down. It’s a great way to get started with or familiarize yourself with a new command set, but once you get used to the underlying commands, you’ll just type them yourself, zippity fast.

Azure CLI Auto Complete

Interactive mode may feel too heavy after a while. And constantly typing -h to understand what commands are available may feel annoying. There’s a middle ground, and that’s autocomplete.

Autocomplete simply means that you can type a command partially and then hit the Tab key, and your shell will show you the various autocompletions available for the typed command.

To enable autocomplete, just issue this command at the terminal:

autoload -U +X bashcompinit && bashcompinit
source /usr/local/etc/bash_completion.d/az

Now type in any partial Azure CLI command, hit Tab, and you’ll be shown autocomplete choices, as can be seen in Figure 5.

Figure 5: Azure CLI autocomplete

You can hit Tab repeatedly to scroll through the various options, which even works on command parameters, as can be seen in Figure 6.

Figure 6: Azure CLI autocomplete on parameters

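Switching subscriptions by name is a handy thing to script. The sketch below combines `az account list` (using a JMESPath `--query`, which the article covers in the query results section) with `az account set`; the subscription name is an invented example, and the helper is only defined here, not executed:

```shell
# Look up a subscription ID by its display name and make it current.
switch_sub() {
  local sub_id
  sub_id=$(az account list \
    --query "[?name=='$1'].id | [0]" \
    --output tsv)
  az account set --subscription "$sub_id"
}

# Usage (requires a live az login):
#   switch_sub "Pay-As-You-Go"
echo "switch_sub helper defined"
```

The `[?name=='…']` filter selects the matching subscription, and `| [0]` unwraps the single-element result so only the bare ID comes back.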


Azure CLI Query Results

Frequently, you’ll run a command and it spews out a wall of JSON in your face. For instance, if you have many service principals in your subscription, you can run a command, such as the one in the next snippet, to list all the service principals in your tenant:

az ad sp list --all

The output will be a bit of a shock. It scrolls out reams of JSON super fast! What if all you wanted was to know the names of those service principals?

Every Azure CLI command supports a --query input flag. You can simply pass in a query parameter that lets you do basic things, such as perform some basic filtering, or extract and show some basic properties. This command is:

az ad sp list
  --query "[].appDisplayName"

Running this command writes out only the app display name property of all the apps in your subscription.

I don’t know about you, but frequently, when I’m trying to figure out complex queries in a complex JSON document, I can’t seem to come up with the JMES query in the first go. I wish there were an interactive way to play with the data and figure out the query I want. That’s where jpterm/jmespath-terminal comes in. The jpterm/jmespath-terminal is a Python package that gives you an interactive UI to work with JMES queries. To use it, first go ahead and install it, like this:

pip install jmespath-terminal

Oh wait, what is “pip”? Well, it’s a Python package installer. If you don’t have it on your OS, run a Google/Bing search for it and install pip first. Once you have it installed, you can run this command:

az ad sp list | jpterm

Now you’ll see a user interface like that shown in Figure 7.

Figure 7: JMES queries in an interactive mode

Now you can play around with the data to your heart’s content and figure out exactly the JMES query you need. This really simplifies the task of building JMES queries.

Remember, you’re on a Unix shell, so you don’t have to rely on JMES as the only querying mechanism. For instance, I can simply request the results as a table, pick column #2 using awk, and get the results using a single command, like this:

az ad sp list --output table
  | awk '//{print $2}'

As you can imagine, as you get more proficient, you’ll start interacting with the power of Azure at the speed of thought.

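The table-plus-awk pattern is easy to experiment with offline. The snippet below runs the same awk filter against a canned table shaped like `az ad sp list --output table` output; the rows and column layout are invented for illustration, and `NR>1` additionally skips the header row:

```shell
# Stand-in for `az ad sp list --output table` output.
sample_table='AppId  DisplayName   ObjectId
111    first-sp      aaa
222    second-sp     bbb'

# Column 2 holds the display name in this sample layout.
names=$(printf '%s\n' "$sample_table" | awk 'NR>1 {print $2}')
printf '%s\n' "$names"   # prints: first-sp, then second-sp
```

awk’s default field splitting handles the variable-width whitespace in table output, which is what makes this one-liner so convenient.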


The Azure CLI Find Extension

This section was hard to name. It’s not about “finding an extension”; it’s about an extension called “find.”

Azure CLI is awesome, but it isn’t enough. For instance, you may want to extend it yourself, for your organization, and you’ll want to author extensions to do so. Microsoft has authored a number of such extensions for various products as well. You can find the various extensions that Microsoft offers at https://docs.microsoft.com/en-us/cli/azure/azure-cli-extensions-overview.

One of my favorite extensions is the find extension. The find extension talks to a Microsoft AI-powered API called Azure Aladdin, and helps you find commands. Let’s take it for a spin.

The first step is to install the extension using the command below:

az extension add -n find

If you wish to use this extension, you type “az find” and, as usual, you can ask for help using az find -h.

Imagine that I wanted to do something but I don’t quite remember what the exact command was. For instance, let’s say that I want to create a virtual machine. I can simply issue a command like this:

az find "create vm"

The output of this command can be seen in Figure 8.

Figure 8: Output of the Find command

As you can see from Figure 8, the AI-based engine tried to interpret what I was trying to do and gave the exact command, and even command examples, on how to achieve that.

Azure CLI Debug Mode

Azure CLI can do some amazing things with so many valuable commands. I frequently find myself scratching my head, thinking, hmm…how did they do that? I mean, I’m sure some really smart folks have figured out some amazing code and REST API calls to get the information they need. And sometimes I want to have similar functionality in my applications. For example, the command az group list lists all resource groups. And for some crazy reason, I wish to know what REST API I can call to get the same information in my applications.

Well, you simply issue the command:

az group list --debug --verbose

This command scrolls a lot of text, but inside that text, it clearly shows the REST URL that a request was made to. It shows the details of the request and the response received. Now all you have to do is copy-and-paste that URL into your app, make sure that it has a valid access token and the necessary permissions, and make the same REST call as Azure CLI showed you. Voila, you should have the same results!

Get the Access Token

I’ll bet you’ve seen or heard of products like Microsoft Graph and that there’s a lot of power behind Microsoft Graph. Really, Graph is just a bunch of well-documented REST APIs. Calling the REST API isn’t hard, and figuring out how to get an access token to call the API is the fun part.

Azure CLI can get you an access token. You might ask why you should use Azure CLI to get an access token. There are two situations. First, in your automation scripts, you might wish to do some automation that requires Microsoft Graph or, for that matter, any Azure AD-protected API. Second, you’ve written an API, you wish to test it, and the onerous process of hand-performing the auth code flow isn’t the best thing to do on a lazy afternoon. In either of those two cases, Azure CLI can help.

As long as you’re logged into Azure CLI, you can request a token for Microsoft Graph using the following command:

az account get-access-token
  --resource https://graph.microsoft.com

Really, that’s how simple it is. No more opening a browser window, performing a sign-in, catching a redirect, grabbing the auth code, and then replaying the auth code on a POST request to the authorization endpoint to get an access token. All of that can be simplified with just that one call.

I do have some bad news for you. This approach will only work for a small list of white-listed APIs that Azure CLI has access to. For a more generic mechanism that lets you get an access token for any API, use a service principal. Here’s how.

Create an app:

az ad app create
  --display-name sampleApp
  --identifier-uris <identifier_uri>

Next, create a service principal:

az ad sp create-for-rbac
  --name dellaterserviceprincipal

Once this service principal is created, carefully note down the created appid (username) and password.

Next, log in using Azure CLI as this service principal:

az login --service-principal
  --username "appid"
  --password "password"
  --tenant "tenantid"

Now that you are logged in, request an access token:

az account get-access-token
  --resource <resource_uri>

This simple command will save you loads of headaches if you’re authoring Azure AD-protected APIs or simply testing other APIs.

SPONSORED SIDEBAR: Moving to Azure? CODE Can Help!

Microsoft Azure is a robust and full-featured cloud platform. Take advantage of a FREE hour-long CODE consulting session (yes, FREE!) to jumpstart your organization’s plans to develop solutions on the Microsoft Azure platform. For more information, visit www.codemag.com/consulting or email us at info@codemag.com.

Summary

I haven’t even begun to touch the surface of the productivity superpowers that Azure CLI gives you. Once you start combining Azure CLI with the power of the Unix terminal, you can move at the speed of thought. Pair that with the power of Azure, and now you’re seriously talking about the next level in productivity.

Just a few years ago, we were shipping servers, replacing hard disks, and plugging in network cables. Now, a few keystrokes on a keyboard, and you have a globally distributed datacenter ready to go.

I can’t wait to see what we’ll build in the next few years.

Until next time, be careful with those Azure credits.

Sahil Malik



ONLINE QUICK ID 2001031

Working with iText for PDFs


If your application requires some form of document rendering, you’ve likely needed to work with PDF files. Rendering output
as PDFs has several advantages: they are browser and device independent; and they can be interacted with in a disconnected
way. If you’re adept at JavaScript, you already possess the skills necessary to extend a PDF’s functionality. The question may be

"How can developers and users easily interact with PDFs in our applications?" The first step to answering that question involves understanding what PDFs are. PDFs are just another way to host data. A PDF is just a document that may have one or more fields, and it can be highly graphic and formatted. In your applications, the most common task that users need is to read and write data to and from fields.

The second step to answering that question is to build a library to handle the core functions of reading and writing to and from a data source. In the Java and .NET world, there are the iText 7 PDF libraries (https://itextpdf.com/). In this article, I'll demonstrate a library I created to interact with PDF files. Specifically, using the custom PDFLibrary illustrated in this article, you'll be able to read data from a PDF and write data to a PDF. You'll also be able to determine what fields a specific PDF has. With the PDFLibrary, built on the iText NuGet Package, you'll be able to easily incorporate PDF functionality into your applications, whether they be WPF-, Web-, or API-based.

PDF stands for Portable Document Format.

This article isn't a detailed or in-depth examination of iText. iText is a very extensive set of libraries that allow you to fully interact with and manipulate PDF files. If you've worked with older versions of iText (v5 or iTextSharp), you'll find a number of changes to the iText API in v7. Most of the time, somebody else has built the PDF and typically, all your applications need to do is take a PDF template and apply data values to it for some downstream process. In other cases, a PDF, and specifically its data fields, are the input to a process. This article covers the basics of reading and writing data to and from a PDF file. In another article, I'll demonstrate how to consume the library in a Web application.

John V. Petersen
johnvpetersen@gmail.com
linkedin.com/in/johnvpetersen

Based near Philadelphia, Pennsylvania, John is an attorney, information technology developer, consultant, and author.

PDFLibrary Source Code and Samples
If you want to go directly to the code and work through the tests, you can find the code in the following GitHub repository: https://github.com/johnvpetersen/PDFLibrary.

Examining the iText7 NuGet Package
Figure 1 illustrates the NuGet Package Manager with the iText7 package highlighted. It's interesting to note that under .NET Standard, the number of dependencies skyrockets! As a matter of full disclosure, I didn't create a .NET-Standard version and therefore, I can't opine on whether that aspect of the NuGet Package is well constructed.

The source code for this article was compiled under .NET Framework 4.7. The Bouncy Castle package supports cryptographic features that are often incident to secured PDFs in HIPAA, PCI, and other environments that have regulatory requirements related to personal and medical information. That dependency isn't necessary if you're not using

Figure 1: The iText7 entry in the NuGet Package Manager



iText 7's cryptologic features. The same goes for the Common Logging NuGet Package dependencies. Unless you're going to take advantage of iText's logging features, you won't need these logging packages.

Given today's corporate IT governance requirements, you should always examine and understand every dependency your application directly and indirectly takes on because of other package dependencies. The iText7 NuGet Package is like many other packages with dependencies that aren't hard dependencies. In this case, it would be a better design if iText7 had additional NuGet Packages to bring in the cryptographic and logging features on an as-needed basis. The good news is that with NuGet, you can break packages apart and recompose them for better dependency management.

In your applications, the most common task is to read and write data to and from fields. PDFs are just another way to host data.

iText's License: The AGPL 3.0 License
What follows is not specific legal advice on what you do in your situation. What follows is just my own opinion from the standpoint of being both a software developer and an attorney. In all things legal that you encounter, you should always engage competent counsel in your own jurisdiction for legal advice appropriate for your specific facts and circumstances.

The iText7 DLLs in the NuGet Package are distributed under the AGPL License: http://www.gnu.org/licenses/agpl-3.0.html. The AGPL license grants you the right to use, modify, and distribute code (and DLLs) in your applications, including commercial applications. The AGPL's primary purpose is to remediate what's generally referred to as the "application service-provider loophole." In a service environment, you never "distribute" software. The AGPL is based on the GPL. GPL/AGPL is a strong copy-left license mandating that modifications to covered work are shared when such code is "distributed," but it doesn't cover what's referred to today as SaaS (software as a service). AGPL addresses that gap.

It's a common misconception that if you merely "incorporate" GPL/AGPL code in your applications, your application's source code must be made available to the public. This gets to what's commonly referred to as the "viral nature" of GPL/AGPL. I put the phrase "incorporate" in quotes because in all things legal, conclusions depend on terms, and more specifically, defined terms. Reviewing the AGPL text, the term "incorporate" is in the preamble. The preamble is NOT binding language. The only binding language in the GPL/AGPL is enumerated under the terms and conditions. The first place to review is the definitions. The definitions of interest are the following (emphasis mine):

• "The program" refers to any copyrightable work licensed under this license. Each licensee is addressed as "you." "Licensees" and "recipients" may be individuals or organizations.
• A "covered work" means either the unmodified program or a work based on the program.
• To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work.

What can you conclude from these three definitions? First, the "program" is what's directly licensed under the GPL/AGPL. A "covered work" is the program itself or other code that's based on the program, which includes your modifications to the program. This is what strong copy-left is all about: to ensure that modifications to GPL/AGPL code are shared and made available in the OSS ecosystem. Instead of making changes to the program, you may create a new work based on the program. What does "based on" mean? In the copyright context, it's a derivative work. Whether you make changes to a program or create a new work based on the program, for copyright purposes, you're creating a derivative work. Therefore, what the GPL and AGPL call a covered work is one of two things: the original program or a derivative work. In other words, the only things that can be subject to the GPL/AGPL, based on the GPL/AGPL terms, are the original program or a derivative work.

The PDF Standard
PDFs are often referred to as "Adobe PDF Files" in the way all tissues may be referred to as Kleenex or photocopies as Xeroxes. Adobe is only one of many brands that markets tools to interact with PDFs. Chances are, you have Adobe Acrobat Reader or the full Adobe Acrobat Pro DC application. But you don't need to purchase the ability to create and edit PDFs. Microsoft Word, like many other tools, is capable of creating PDF documents because PDF is an open ISO Standard: https://www.iso.org/standard/51502.html.

Does that mean that a Using statement referencing a GPL/AGPL library makes your entire application a derivative work? In my opinion, you need something more than just a library reference. If the majority of your application's functionality requires the GPL/AGPL library, then your application may be a derivative work. The PDFLibrary I created for this article, in my opinion, is a derivative work and would need to be licensed under the GPL/AGPL. What about a website that leverages the PDFLibrary and, by reference, the GPL/AGPL library? Would the website be a derivative work? In my opinion, the answer is no because an entire website is not based on any one specific library. A website is often a composite of many things, including open source works with different licenses, and although the website itself wouldn't need to be licensed under any open source license, the requirements of any associated open source licenses still apply. If you're interested in more in-depth coverage on this issue, you can read my recently published LinkedIn article: https://www.linkedin.com/pulse/dispelling-myth-gnu-licenses-gpl-agpl-lgpl-john-petersen/.

The following is the link to iText's license: https://github.com/itext/itext7-dotnet/blob/develop/LICENSE.md. The provision I want to draw your attention to is the final paragraph:

You can be released from the requirements of the license [AGPL] by purchasing a commercial license. Buying such a license is mandatory as soon as you develop commercial activities involving the iText software without disclosing the source code of your own applications. These activities include: offering paid services to customers as an ASP, serving PDFs on the fly in a Web application, and shipping iText with a closed source product.

If you find yourself scratching your head from confusion, you're not alone. I'm a lawyer with a lot of experience in OSS and OSS licensing and I was confused. The first point of confusion is the implication that the AGPL forbids commercial usage. The GPL/AGPL has no such prohibition. The



second point of confusion is the implication that if you use the library under the AGPL, your application's entire source code must be disclosed. I just provided a detailed explanation of why this isn't the case. The other interesting point is that if you obtained this library in a .NET environment, you likely did so via NuGet. The NuGet license URL is https://www.gnu.org/licenses/agpl-3.0.html. In other words, unless you interrogated the GitHub repository, you wouldn't have notice of other license terms.

Is the paragraph quoted above supposed to represent a dual-license scenario? If it is such an attempt, the attempt is inartfully drafted. In the legal context, words matter. Words are composed to create instruments that a party seeks to enforce in an effort to achieve some remedy. Is the paragraph enforceable? I'll go as far as to say that it creates ambiguities. The only thing I'm certain of is the AGPL's role in the licensing scheme. Setting aside the license drafting problems, the iText library is functionally quite good. To that end, if you find iText useful in your commercial application, you should support their efforts and purchase a commercial license.

Even if, legally, there are enforceability problems, there's the question of ethics. There's a tremendous free-rider problem in OSS today. Too many just take…and too few contribute. For open source to thrive, it needs more than unicorns and rainbows: It requires funding. At the same time, companies that seek to monetize OSS need to be good citizens too. In that spirit, I submitted a pull request to fix the license verbiage: https://github.com/itext/itext7-dotnet/pull/10/commits/a672afe5522f39436c68f720366e593fdda48111. The modified text achieves the dual-licensing objective in the AGPL context that was iText's original intent. It remains to be seen whether iText accepts my pull request.

Who says the legal stuff can't be interesting! Let's get to the stuff we know is interesting: the code.

Listing 1: IsPDF Method

//Determine if a file is a PDF
public static ImmutableBoolean IsPDF(ImmutableArray<byte> file)
{
    using (var ms = new MemoryStream(file.ToArray()))
    using (var reader = new PdfReader(ms))
    {
        return new ImmutableBoolean(true);
    }
}

Figure 2: HelloWorld.cs program example from iText's online tutorials.

The PDFLibrary and Its Methods
The PDFLibrary's primary function is to be an abstraction over the iText 7 library. To that end, the PDFLibrary handles two broad tasks: to read data from a PDF and to write data to a PDF. These two broad tasks encompass four distinct functions:

• File-Based Functions:
  • Read a byte array from an existing PDF file.
  • Write a byte array to create a new PDF file or replace an existing PDF file.



• Field-Based Functions:
  • Read field data from a PDF byte array.
  • Write field data to a PDF byte array.

Additional functions include:

• Determining whether a given file is a PDF.
• Retrieving a field name list from a PDF.

iText7 has many objects and sparse documentation. The PDFLibrary's goal is two-fold. First, to make it as easy as possible to handle the basic functions that applications need to perform on a PDF, namely reading and writing data. Second, to improve upon iText's samples, which, candidly, perpetuate poor .NET coding practices. Figure 2 illustrates the problem with iText's HelloWorld.cs program.

There are several problems with the code. First, although the text mentions that streams can be used to create files, nowhere is that technique demonstrated. Instead, they just show a figure with a rendered PDF, notwithstanding the fact that the HelloWorld.cs code illustrated doesn't render a PDF! Second, in the real world, not all layers in your application architecture have knowledge of or can use a file path. This is especially true if you're using loosely coupled services (as you should be). In the real world, you need to deal with byte arrays and streams to read from and write to components that, in turn, will eventually write that data to some source, whether it's a disk or another service. And when you deal with streams or iText objects like PdfDocument, PdfReader, and PdfWriter, you're dealing with components that implement the IDisposable interface. That necessarily means that when you use such things, you must wrap their usage in a Using statement. Not doing so leads to the real and likely possibility of memory leaks. Just because something is a "Hello World" type of example, that's not an excuse to perpetuate bad and incomplete programming practices.

Determine Whether a File Is a PDF
There are techniques that advocate opening the file and reading the first five bytes looking for { %PDF- }. As Listing 1 illustrates, I like to take it a step further and use iText to open the file with the PdfReader object. If it's successful, no exception will be thrown. If it's not successful, an exception is thrown. As for how to deal with the exception, that's up to the application that consumes the PDFLibrary. The PDFLibrary is a low-level library and its job isn't to trap and swallow exceptions. That's a higher-level function.

Riddling a low-level function with excessive try/catch blocks is another common mistake. If a low-level function traps and swallows an error, your component will fail silently. For the purposes of this library, I made the design decision to vest such error handling in a calling library.

The IsPDF method accepts one parameter, an immutable array of bytes. You'll see byte arrays used often in code because that's the only way to pass data from one application layer to another. In Listing 1, once the byte array is received, it's applied to a memory stream that, in turn, is used to create an iText7 PdfReader object. If that instantiation process succeeds, you can infer that the file is indeed a valid PDF file. If the file isn't a valid PDF, iText throws an exception. How that exception is dealt with is a matter for the calling code to handle.

Note on the Immutable Classes
Throughout the code in this article, you'll see numerous references to immutable classes. For every parameter argument sent to a PDFLibrary method and for every type the PDFLibrary returns, an immutable type is involved. In this way, you can presume that the data, when created, is what was sent to the PDFLibrary and is what the PDFLibrary received. In other words, variables used in PDFLibrary calls can't be subject to side effects. If you're interested in delving further into the immutable classes, you can find that source code here: https://github.com/johnvpetersen/ImmutableClass.

Retrieving a List of PDF Field Names
You may need to determine the field names contained in a PDF. Listing 2 illustrates the call to get a field array, which, in turn, relies on the private method that retrieves the form fields.

Reviewing Listing 3, you can see the core iText objects relied upon:

• PdfReader: Used to read and expose the PDF attributes at a low level (think security, encryption, etc.)
• PdfDocument: Uses a PdfReader to expose a PDF's form and its fields
• PdfFormField: Exposes the data and attributes of a specific form field in a PDF

In this case, the code cycles through a PDF's form fields and extracts the field name and the PdfFormField reference and adds each to a dictionary. The PdfFormField object contains a lot of information that concerns the field's attributes in the context of the form and the PDF itself that, in many cases, isn't relevant if all you're concerned with is reading and writing data. If your application is concerned with PDF generation, where field attributes such as page placement, size, font name, etc. are concerned, then the PdfFormField object takes on more relevance.

Listing 2: Fields Method

//Get a list of fields in a PDF
public static ImmutableArray<string> Fields(ImmutableArray<byte> pdf)
{
    return getFormFields(pdf).Keys.ToArray().ToImmutableArray();
}

Listing 3: getFormFields Method (private)

static ImmutableDictionary<string, PdfFormField> getFormFields(ImmutableArray<byte> pdf)
{
    using (var ms = new MemoryStream(pdf.ToArray()))
    using (var reader = new PdfReader(ms))
    using (var doc = new PdfDocument(reader))
    {
        var builder = ImmutableDictionary.CreateBuilder<string, PdfFormField>();
        PdfAcroForm.GetAcroForm(doc, false)
            .GetFormFields().ToList()
            .ForEach(x => builder.Add(x.Key, x.Value));

        return builder.ToImmutable();
    }
}
the calling code to handle.



Read and Write PDF Files
So far, I've covered how to determine whether a file is a PDF and the fields a PDF contains. The next task is how to read and write PDF files. Listing 4 illustrates the Read and Write methods. The capabilities in these methods come from System.IO in .NET, not from iText. This is one area where a path value becomes necessary because the ultimate disposition of these methods is to read from or write to a file on disk. What's returned is a byte array. The Write method, after the bytes are written to the specified path, invokes the Read method to return the bytes just written. I concede that this is a bit inefficient because why should I have to read to get the bytes I just provided? That's a fair critique. Like making the one method private, this too was an arbitrary design decision I made. The idea is that if I get bytes back, the calling program can presume the bytes were written in the first place. Could the method return a Boolean instead? Of course it could, and if you want that behavior, you're free to change the code as you see fit.

Listing 4: Read and Write Methods

public static ImmutableArray<byte> Read(ImmutableString path)
{
    return ImmutableArray.Create<byte>(File.ReadAllBytes(path.Value));
}

public static ImmutableArray<byte> Write(ImmutableString path, ImmutableArray<byte> bytes)
{
    File.WriteAllBytes(path.Value, bytes.ToArray());
    return Read(path);
}

Read from and Write to PDF Files
Now that you can create and read PDF files, the next and final step is to read and write field values from and to those PDF files. Listing 5 illustrates the code to accomplish the first task, to get field data. The GetData method leverages the same getFormFields private method as does the public Fields method. The one thing to take note of is that in all cases, the value retrieved is a string. Once a field value is rendered on a PDF, it's a string and often, it's a formatted string, as would be the case for phone numbers, dates, and social security numbers, to name three examples.

Listing 5: GetData Method

public static ImmutableDictionary<string,string> GetData(ImmutableArray<byte> pdf)
{
    return getFormFields(pdf).ToDictionary(x => x.Key,
        x => x.Value.GetValueAsString()).ToImmutableDictionary();
}

The SetData method, illustrated in Listing 6, accepts two arguments:

• A PdfField array where each element contains the field name, the value, and the formatted value. Listing 7 illustrates the PdfField class.
• A byte array that's the target PDF containing the fields to update.

Why doesn't the SetData method use the getFormFields method? The getFormFields method closes the PdfDocument instance. When writing data to PDF fields, you need the PdfDocument to remain open for that process. Accordingly, while you strive for maximum re-use, practicality often dictates that you sometimes need to put that ideal on the shelf.

Why is the field value in this context always a string? It has to do with the context in which the data is being used. When you render data in a report, the fact that the underlying value is a date, integer, or Boolean type isn't important. When you deal with and manipulate data in code, its underlying data type matters. Integers are a great example. In code, an integer value will be 12345. When rendered on a report or a PDF, the value becomes "12345" and the formatted value may be "12,345." It's up to some other facility to know what the underlying types and display formats are. The PDFLibrary simply takes that information and applies it. From a separation of concerns standpoint, the PDFLibrary isn't concerned with how or why a given piece of data is an integer or a date or why it's displayed in a certain way. Rather, the PDFLibrary takes the data as it finds it.

Why Are the PDFLibrary Methods Static?
The PDFLibrary is stateless. It accepts arguments (immutable arguments, to be precise), acts on those arguments, and provides an immutable response. The PDFLibrary is inherently thread safe. Therefore, there's nothing to be gained from creating a PDFLibrary instance variable. If you don't need to create state, don't. If you don't need a variable, don't create


Listing 6: SetData Method

public static ImmutableArray<byte> SetData(
    ImmutableArray<PdfField> fields, ImmutableArray<byte> pdf)
{
    using (var stream = new MemoryStream(pdf.ToArray()))
    using (var ms = new MemoryStream())
    {
        var writer = new PdfWriter(ms);
        using (var doc = new PdfDocument(new PdfReader(stream), writer))
        {
            var form = PdfAcroForm.GetAcroForm(doc, false);
            var formFields = form.GetFormFields();

            foreach (var field in fields)
            {
                if (string.IsNullOrEmpty(field.DisplayValue))
                {
                    formFields[field.Name].SetValue(field.Value);
                }
                else
                {
                    formFields[field.Name].SetValue(field.Value, field.DisplayValue);
                }
            }
            doc.Close();
        }
        return ImmutableArray.Create<byte>(ms.ToArray());
    }
}

Listing 7: PdfField Class

public class PdfField
{
    public PdfField(string name, string value, string displayValue = null)
    {
        DisplayValue = displayValue;
        Value = value;
        Name = name;
    }

    public string Name { get; }
    public string Value { get; }
    public string DisplayValue { get; }
}
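Putting the pieces together, a typical round trip through the library might look like the sketch below. This is a hypothetical usage, not code from the article: it assumes the PDFLibrary and ImmutableClass assemblies from the author's GitHub repos are referenced, that the static methods shown in Listings 1 through 6 live on a class named PDFLibrary, that ImmutableString wraps a string and ImmutableBoolean exposes a Value property, and that the file paths and field names are placeholders:

```csharp
using System.Collections.Immutable;

// Read a PDF template from disk, stamp two fields, and save a filled-in copy.
// (Paths and field names below are illustrative only.)
var pdf = PDFLibrary.Read(new ImmutableString(@"C:\Templates\Form.pdf"));

// IsPDF returns an ImmutableBoolean; a Value property is assumed here.
if (PDFLibrary.IsPDF(pdf).Value)
{
    var fields = ImmutableArray.Create(
        new PdfField("FirstName", "John"),
        // Raw value plus a formatted display value.
        new PdfField("Salary", "12345", "12,345"));

    var stamped = PDFLibrary.SetData(fields, pdf);
    PDFLibrary.Write(new ImmutableString(@"C:\Output\Form-filled.pdf"), stamped);
}
```

Because every argument and return value is immutable, each step hands the next one a snapshot that no other caller can mutate, which is what makes the static, stateless design thread safe.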
one. The same goes for always implementing a Using statement when the underlying class implements the IDisposable interface. Performance issues always start with a simple overlooked detail, which eventually repeats itself to the point that the "death by a thousand cuts" idiom applies.

iText
The iText .NET-specific documentation is a bit sparse. A good place to start is the .NET jump start tutorial: https://itextpdf.com/en/resources/books/itext-7-jump-start-tutorial-net. Most of the core API documentation is from their Java API. The good news is that most of the Java docs translate to .NET. iText 7's NuGet page can be found here: https://www.nuget.org/packages/itext7/. If you're interested in taking the deepest dive into the iText7 .NET ecosystem, check out the GitHub repo: https://github.com/itext/itext7-dotnet.

Next Steps
The PDFLibrary is my approach to a simple abstraction over the iText library. Now that you have an understanding of how the PDFLibrary works, I encourage you to get and run the code from GitHub. The best place to start is with the unit tests. In my next article, I'll discuss how to implement the PDFLibrary in a Web application. In the Web application, I'll cover how to create an abstraction to the PDFLibrary that's better suited to meet the Web application's needs.

John V. Petersen



ONLINE QUICK ID 2001041

A Design Pattern for Building WPF Business Applications: Part 4
In the previous articles in this series on building a WPF business application (check www.CODEMag.com for the others), you
created a new WPF business application using a pre-existing architecture. You added code to display a message while loading
resources in the background. You also learned how to load and close user controls on a main window. You built a login screen,

a user feedback screen, and a user maintenance screen to display a list of users, and the detail for a single user. In this article, you're going to finish this user maintenance screen by learning to manage button state, and to add, edit, and delete users.

This article is the fourth, and final, in a multi-part series on how to create a WPF business application. Instead of starting completely from scratch, I've created a starting architecture that you can learn about by reading the blog post entitled "An Architecture for WPF Applications" located at https://bit.ly/2BxpK0P. Download the samples that go along with the blog post to follow along step-by-step with this article. This series of articles is also a Pluralsight.com course that you can view at https://bit.ly/2SjwTeb. You can also read the previous articles in the May/June, July/August, and September/October issues of CODE Magazine (https://www.codemag.com/Magazine/AllIssues).

Paul D. Sheriff
www.fairwaytech.com

Paul D. Sheriff is a Business Solutions Architect with Fairway Technologies, Inc. Fairway Technologies is a premier provider of expert technology consulting and software development services, helping leading firms convert requirements into top-quality results. Paul is also a Pluralsight author. Check out his videos at http://www.pluralsight.com/author/paul-sheriff.

A Design Pattern for Master/Detail Screens in WPF
In the last article, you built the UI for a user maintenance screen (Figure 1). In this article, you're going to add the code necessary to manage state so that different controls on the screen can be enabled or disabled depending on what the user is doing. You also create code to add, edit, and delete users.

Overview of Managing State
When building a maintenance screen that will list, add, edit, and delete any records in a database, you need to keep track of what state the user is currently in. There are three different states that you need to keep track of:

• Displaying the list of users
• Editing user information
• Adding a new user

When you are in each of these states, you need to change various controls on the maintenance screen to be enabled or disabled. The following sections describe the state that each of the controls should be in, depending on what the user is currently doing on that screen. The reason to enable or disable controls is to keep the user focused on what they are currently doing with the data on the screen. For example, the user may be browsing the list of users or maybe trying to modify a single user.

The List State
When you first enter the user maintenance screen, the list of all users is displayed. When you are in this List state, the following controls on the screen should be set to the following states.

• The List View is enabled.
• The Detail User control is disabled.
• The Add button is enabled.
• The Edit button is enabled.
• The Delete button is enabled.
• The Undo button is disabled.
• The Save button is disabled.

The Edit State
When the user clicks on the button to add or edit a user, you want them to focus only on the detail area until they're finished modifying that user. When in this Edit state, the only way to finish is to click the Undo or the Save buttons. Modify the controls on the screen to the following states.

• The List View is disabled.
• The Detail User control is enabled.
• The Add button is disabled.
• The Edit button is disabled.
• The Delete button is disabled.
• The Undo button is enabled.
• The Save button is enabled.

The reason to disable the List view when editing is to force the user to focus on just editing a user. If you didn't disable the List view, they might accidentally click on the list and move to a new user before they've had a chance to save their changes. Yes, you could add an IsDirty property on the view model, but this requires a lot more code than simply disabling the list view.

The Add State
Adding a user is almost the same state as when editing a user. When you are ready to save the data, you need a flag, so you know to add a new record using the Entity Framework versus merely updating the record. You're going to use an IsAddMode property to keep track of this state.

A View Model Base Class for Add, Edit, Delete
For any maintenance screen like the one shown in Figure 1, you're going to need to keep track of the different states outlined in the previous section. Create a class named ViewModelAddEditDeleteBase in the ViewModelLayer project, have it inherit from the ViewModelBase class, and add three new properties: IsListEnabled, IsDetailEnabled and IsAddMode, as shown in Listing 1.

Add a BeginEdit() method in this class to set these properties to the valid state for adding or editing a record.



public virtual void BeginEdit(
  bool isAddMode = false)
{
  IsListEnabled = false;
  IsDetailEnabled = true;
  IsAddMode = isAddMode;
}

Add a CancelEdit() method to reset these properties back to the normal mode of displaying a list of users only.

public virtual void CancelEdit()
{
  base.Clear();

  IsListEnabled = true;
  IsDetailEnabled = false;
  IsAddMode = false;
}
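To see the two transitions side by side, here's a stripped-down stand-in for the base class (no INotifyPropertyChanged plumbing, and the base.Clear() call is omitted) that you can exercise in isolation:

```csharp
// Minimal sketch of the add/edit/delete state machine.
public class AddEditDeleteState
{
    public bool IsListEnabled { get; private set; } = true;
    public bool IsDetailEnabled { get; private set; }
    public bool IsAddMode { get; private set; }

    // Entering Add or Edit mode locks the list and unlocks the detail area.
    public void BeginEdit(bool isAddMode = false)
    {
        IsListEnabled = false;
        IsDetailEnabled = true;
        IsAddMode = isAddMode;
    }

    // Undo or Save returns the screen to List mode.
    public void CancelEdit()
    {
        IsListEnabled = true;
        IsDetailEnabled = false;
        IsAddMode = false;
    }
}
```

In the real class, each setter also calls RaisePropertyChanged so that WPF bindings on IsEnabled pick up the new state automatically.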

Add two additional methods to this class, Save() and De-


lete(). The Save() method is a virtual method with no func-
tionality in this class; it provides a design pattern for you to
use in your view models. Figure 1: The sample application with a user list and detail user controls

public virtual bool Save() {
  return true;
}

The Delete() method is also virtual and provides no functionality in this class; it also serves as a design pattern for your own view models. The signature for this method is as follows.

public virtual bool Delete() {
  return true;
}

Modify User Maintenance List View Model
Originally, you had the UserMaintenanceListViewModel class inherit from the ViewModelBase class. Change that class to inherit from the ViewModelAddEditDeleteBase class by opening the UserMaintenanceListViewModel.cs file in the WPF.Sample.ViewModelLayer project and modifying the inheritance, as shown in the code snippet below.

public class UserMaintenanceListViewModel :
  ViewModelAddEditDeleteBase

Modify User Maintenance Detail View Model
Open the UserMaintenanceDetailViewModel.cs file in the WPF.Sample.ViewModelLayer project and override the Save() and Delete() methods. In the Save() method, call the CancelEdit() method to put the state back to List mode, as shown in the following code. You are going to fill in the code to save and delete records later in this article.

public override bool Save()
{
  // TODO: Save User
  CancelEdit();
  return true;
}

public override bool Delete()
{
  // TODO: Delete User
  return true;
}

Listing 1: Create a view model to handle standard add, edit and delete screens

using Common.Library;

namespace WPF.Sample.ViewModelLayer
{
  public class ViewModelAddEditDeleteBase : ViewModelBase
  {
    private bool _IsListEnabled = true;
    private bool _IsDetailEnabled = false;
    private bool _IsAddMode = false;

    public bool IsListEnabled
    {
      get { return _IsListEnabled; }
      set {
        _IsListEnabled = value;
        RaisePropertyChanged("IsListEnabled");
      }
    }

    public bool IsDetailEnabled
    {
      get { return _IsDetailEnabled; }
      set {
        _IsDetailEnabled = value;
        RaisePropertyChanged("IsDetailEnabled");
      }
    }

    public bool IsAddMode
    {
      get { return _IsAddMode; }
      set {
        _IsAddMode = value;
        RaisePropertyChanged("IsAddMode");
      }
    }
  }
}
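Listing 1 assumes the ViewModelBase class built in the earlier parts of this series, which isn't shown in this installment. A minimal INotifyPropertyChanged base along the same lines might look like the sketch below; this is for context only and is not the sample's actual class:

```csharp
using System.ComponentModel;

public class ViewModelBase : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    // Raise PropertyChanged so WPF bindings re-read the named property.
    protected void RaisePropertyChanged(string propertyName)
    {
        PropertyChanged?.Invoke(this,
            new PropertyChangedEventArgs(propertyName));
    }
}
```

Any property setter that calls RaisePropertyChanged (as the three properties in Listing 1 do) automatically refreshes every control bound to that property.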



Bind Controls to State Properties
Now that you've changed the inheritance on your view model that's bound to your WPF screen, you can bind up each of the three new properties to the appropriate controls. These properties enable and disable controls, depending on the state the form is in.

Save and Undo Buttons
Open the UserMaintenanceDetailControl.xaml file and locate the Undo and the Save buttons. Bind the IsDetailEnabled property to the IsEnabled property of each of these buttons.

<Button IsCancel="True"
        IsEnabled="{Binding Path=IsDetailEnabled}"
        Style="{StaticResource toolbarButton}">
  <StackPanel Orientation="Horizontal">
    ...
  </StackPanel>
</Button>
<Button IsDefault="True"
        IsEnabled="{Binding Path=IsDetailEnabled}"
        Style="{StaticResource toolbarButton}">
  <StackPanel Orientation="Horizontal">
    ...
  </StackPanel>
</Button>

List Control
Open the UserMaintenanceListControl.xaml file, locate the ListView control, and bind the IsListEnabled property to the IsEnabled property of this control.

<ListView ItemsSource="{Binding Path=Users}"
          IsEnabled="{Binding Path=IsListEnabled}"
          SelectedItem="{Binding Path=Entity}">
  ...
</ListView>

Toolbar Buttons
Open the UserMaintenanceControl.xaml file and locate the buttons within the toolbar control. Bind each button's IsEnabled property to the appropriate property in the ViewModelAddEditDeleteBase class, as shown in Listing 2.

Detail User Control
The final binding is on the UserMaintenanceDetailControl user control. Bind the IsEnabled property of this user control to the IsDetailEnabled property. Binding this property on the user control is much less code than setting each input control's IsEnabled property individually.

<UserControls:UserMaintenanceDetailControl
  Grid.Row="2"
  x:Name="detailControl"
  IsEnabled="{Binding Path=IsDetailEnabled}"
  DataContext="{StaticResource viewModel}" />

Try it Out
Run the application and click on the Users menu item. You should see that the various buttons are all enabled or disabled because they have been bound to the Boolean properties in your view model.

Changing State
When you want to go into Add or Edit mode, call the BeginEdit() method in the view model. You do this by adding click events on each button in the toolbar and the buttons on the detail screen. You can use WPF commanding if you want to, but I prefer click events because I can follow the logic of the screen more easily.

Add Events to Detail Screen
Open the UserMaintenanceDetailControl.xaml file and add a Loaded event to the <UserControl> element.

<UserControl
  x:Class="WPF.Sample.UserControls.UserMaintenanceDetailControl"
  ...
  mc:Ignorable="d"
  d:DesignHeight="450"
  d:DesignWidth="800"
  Loaded="UserControl_Loaded">

Open the UserMaintenanceDetailControl.xaml.cs file and add a using statement at the top of this file so you can reference the view model class from the code behind.

Listing 2: Bind all the toolbar buttons to the appropriate properties

<ToolBar Grid.Row="0">
  <Button Style="{StaticResource toolbarButton}"
          IsEnabled="{Binding Path=IsListEnabled}"
          ToolTip="Add New User">
    <Image Source="pack://application:,,,/WPF.Common;component/Images/Plus_Black.png"
           Style="{StaticResource toolbarImage}" />
  </Button>
  <Separator />
  <Button Style="{StaticResource toolbarButton}"
          IsEnabled="{Binding Path=IsListEnabled}"
          ToolTip="Edit Current User">
    <Image Source="pack://application:,,,/WPF.Common;component/Images/Edit_Black.png"
           Style="{StaticResource toolbarImage}" />
  </Button>
  <Button Style="{StaticResource toolbarButton}"
          IsEnabled="{Binding Path=IsListEnabled}"
          ToolTip="Delete Current User">
    <Image Source="pack://application:,,,/WPF.Common;component/Images/Trash_Black.png"
           Style="{StaticResource toolbarImage}" />
  </Button>
  <Separator />
  <Button Style="{StaticResource toolbarButton}"
          IsEnabled="{Binding Path=IsDetailEnabled}"
          ToolTip="Undo Changes">
    <Image Source="pack://application:,,,/WPF.Common;component/Images/Undo_Black.png"
           Style="{StaticResource toolbarImage}" />
  </Button>
  <Button Style="{StaticResource toolbarButton}"
          IsEnabled="{Binding Path=IsDetailEnabled}"
          ToolTip="Save Changes">
    <Image Source="pack://application:,,,/WPF.Common;component/Images/Save_Black.png"
           Style="{StaticResource toolbarImage}" />
  </Button>
</ToolBar>



using WPF.Sample.ViewModelLayer;

Create a private field in the screen to reference the UserMaintenanceViewModel object. In the UserControl_Loaded() event, grab the instance of the UserMaintenanceViewModel object from the DataContext and assign that value to the field _viewModel, as shown in the code below.

private UserMaintenanceViewModel _viewModel;

private void UserControl_Loaded(object sender, RoutedEventArgs e)
{
  _viewModel = (UserMaintenanceViewModel)this.DataContext;
}

Open the UserMaintenanceDetailControl.xaml file and modify the Undo and Save buttons to fire a click event.

<Button IsCancel="True" ...
        Click="UndoButton_Click"
        Style="{StaticResource toolbarButton}">
  <StackPanel Orientation="Horizontal"
    ...
  </StackPanel>
</Button>
<Button IsDefault="True" ...
        Click="SaveButton_Click"
        Style="{StaticResource toolbarButton}">
  <StackPanel Orientation="Horizontal"
    ...
  </StackPanel>
</Button>

In the UndoButton_Click event, call the CancelEdit() method on the view model to reset the state back to List mode. In the SaveButton_Click event, call the Save() method on the view model. The Save() method right now just calls the CancelEdit() method to reset the state back to List mode. Later in this article, you'll write code to save the user information.

private void UndoButton_Click(object sender, RoutedEventArgs e)
{
  _viewModel.CancelEdit();
}

private void SaveButton_Click(object sender, RoutedEventArgs e)
{
  _viewModel.Save();
}

Add Click Events to List Screen
Open the UserMaintenanceListControl.xaml file and, at the top of this user control, add a Loaded event procedure to the <UserControl> element.

<UserControl
  x:Class="WPF.Sample.UserControls.UserMaintenanceListControl"
  mc:Ignorable="d"
  d:DesignHeight="450"
  d:DesignWidth="800"
  Loaded="UserControl_Loaded">

Open the UserMaintenanceListControl.xaml.cs file and add two using statements so you can access classes in the data layer and the view model layer projects.

using WPF.Sample.DataLayer;
using WPF.Sample.ViewModelLayer;

Create a private field in the screen to reference the UserMaintenanceViewModel object. In the UserControl_Loaded() event, grab the instance of the UserMaintenanceViewModel object from the DataContext and assign it to the field _viewModel.

private UserMaintenanceViewModel _viewModel;

private void UserControl_Loaded(object sender, RoutedEventArgs e)
{
  _viewModel = (UserMaintenanceViewModel)this.DataContext;
}

Write the code for the EditButton_Click event to set the Entity property in the view model to the value you retrieve from the button's Tag property. Once this property has been set, call the BeginEdit() method. The reason you need to set the Entity property is in case the ListView is currently displaying the first user, but the user clicks on the third user. You need the Entity property to be set to the third user instead of the one that has focus.

private void EditButton_Click(object sender, RoutedEventArgs e)
{
  // Set selected item
  _viewModel.Entity = (User)((Button)sender).Tag;

  // Go into Edit mode
  _viewModel.BeginEdit(false);
}

Write a DeleteUser() method to be called from the DeleteButton_Click event procedure you created earlier. This method asks the user if they wish to delete the current user. If they answer Yes, call the Delete() method on the view model to delete the user from the database.

public void DeleteUser()
{
  // Ask if the user wants to delete this user
  if (MessageBox.Show("Delete User " +
      _viewModel.Entity.LastName + ", " +
      _viewModel.Entity.FirstName + "?",
      "Delete?", MessageBoxButton.YesNo)
      == MessageBoxResult.Yes) {
    _viewModel.Delete();
  }
}

Write the DeleteButton_Click event to set the Entity property on the view model from the button's Tag property. Just like you did for the Edit button, ensure that the Entity property is set to the one the user clicked on, and not the one currently selected in the DataGrid. Once the Entity property has been set, call the DeleteUser() method.

private void DeleteButton_Click(object sender, RoutedEventArgs e)
{
  // Set selected item
  _viewModel.Entity = (User)((Button)sender).Tag;

  // Delete user
  DeleteUser();
}
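Both of the handlers above read the row's User object from the clicked button's Tag property. The markup that sets Tag isn't shown in this excerpt; inside the ListView's row template, it would typically be bound to the row's data context, roughly like the fragment below. This is illustrative markup only, not the sample's actual XAML:

```xml
<!-- Inside the ListView's ItemTemplate: each row's DataContext is a User,
     so Tag="{Binding}" hands that User to the Click handler. -->
<Button Style="{StaticResource toolbarButton}"
        Tag="{Binding}"
        Click="EditButton_Click"
        ToolTip="Edit this User" />
```

Because Tag carries the row's own User, the handler works correctly even when the clicked row isn't the ListView's currently selected item.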



Add Click Events to Toolbar
Open the UserMaintenanceControl.xaml file and add Click event procedures to each Toolbar button. The complete code for the Toolbar is shown in Listing 3. After you create each of these Click events, add the appropriate calls to the methods in the view model class, as shown in Listing 4.

Notice that the DeleteButton_Click event procedure calls the public DeleteUser() method on the UserMaintenanceListControl class. This is done because the DeleteUser() method must display a message box, and UI code doesn't belong in a view model class.

Try it Out
Run the application, click on the Users screen, and then try pressing the different buttons to watch the screen move in and out of the different states.

Begin and Cancel Edits
When a user starts making changes to the user data in the text boxes, those changes are updated into the bound properties in your view model. If the user wishes to cancel the edit mode, you must have some way of putting back the original data. One way to do this is to add another field to your view model to hold the original entity data. Open the UserMaintenanceDetailViewModel.cs file and add the following variable.

private User _OriginalEntity = new User();

You are going to set each property on this User object with the values from the currently selected user.

Override BeginEdit Method
In the UserMaintenanceDetailViewModel class, override the BeginEdit() method from the ViewModelAddEditDeleteBase class. Copy all of the properties in the Entity object and place them into the corresponding properties of the _OriginalEntity field. The CommonBase class has a Clone() method that performs this copying for you.

public override void BeginEdit(bool isAddMode = false)
{
  // Create a copy in case the user wants to undo their changes
  base.Clone<User>(Entity, _OriginalEntity);
  if (isAddMode) {
    Entity = new User();
  }
  base.BeginEdit(isAddMode);
}

You can't just assign the _OriginalEntity property equal to the Entity property, as that creates a reference between the two objects. When you have a reference between two objects, the changes you make to one are changed in the other too. The Clone() method uses reflection to perform a GET on each property in the Entity object, then calls the SET on the corresponding property in the _OriginalEntity object. This accomplishes two goals: it makes a copy of the data, and it fires each property's RaisePropertyChanged event. This is important when the user cancels the

Listing 3: Add Click events to each of the Toolbar items

<ToolBar Grid.Row="0">
  <Button Style="{StaticResource toolbarButton}"
          Click="AddButton_Click"
          IsEnabled="{Binding Path=IsListEnabled}"
          ToolTip="Add New User">
    <Image Source="pack://application:,,,/WPF.Common;component/Images/Plus_Black.png"
           Style="{StaticResource toolbarImage}" />
  </Button>
  <Separator />
  <Button Style="{StaticResource toolbarButton}"
          Click="EditButton_Click"
          IsEnabled="{Binding Path=IsListEnabled}"
          ToolTip="Edit Current User">
    <Image Source="pack://application:,,,/WPF.Common;component/Images/Edit_Black.png"
           Style="{StaticResource toolbarImage}" />
  </Button>
  <Button Style="{StaticResource toolbarButton}"
          Click="DeleteButton_Click"
          IsEnabled="{Binding Path=IsListEnabled}"
          ToolTip="Delete Current User">
    <Image Source="pack://application:,,,/WPF.Common;component/Images/Trash_Black.png"
           Style="{StaticResource toolbarImage}" />
  </Button>
  <Separator />
  <Button Style="{StaticResource toolbarButton}"
          Click="UndoButton_Click"
          IsEnabled="{Binding Path=IsDetailEnabled}"
          ToolTip="Undo Changes">
    <Image Source="pack://application:,,,/WPF.Common;component/Images/Undo_Black.png"
           Style="{StaticResource toolbarImage}" />
  </Button>
  <Button Style="{StaticResource toolbarButton}"
          Click="SaveButton_Click"
          IsEnabled="{Binding Path=IsDetailEnabled}"
          ToolTip="Save Changes">
    <Image Source="pack://application:,,,/WPF.Common;component/Images/Save_Black.png"
           Style="{StaticResource toolbarImage}" />
  </Button>
</ToolBar>

Listing 4: Call methods in the view model from each Toolbar's Click event

private void AddButton_Click(object sender, RoutedEventArgs e)
{
  _viewModel.BeginEdit(true);
}

private void EditButton_Click(object sender, RoutedEventArgs e)
{
  _viewModel.BeginEdit(false);
}

private void DeleteButton_Click(object sender, RoutedEventArgs e)
{
  listControl.DeleteUser();
}

private void UndoButton_Click(object sender, RoutedEventArgs e)
{
  _viewModel.CancelEdit();
}

private void SaveButton_Click(object sender, RoutedEventArgs e)
{
  _viewModel.Save();
}



Listing 5: Use the Entity Framework to add or update a user in the user table

public override bool Save()
{
  var ret = false;
  SampleDbContext db = null;
  try
  {
    db = new SampleDbContext();
    if (IsAddMode)
    {
      // Generate a random password
      Entity.Password = StringHelper.CreateRandomString(16);

      // Add new user to EF Users collection
      db.Users.Add(Entity);
    }
    else
    {
      db.Entry(Entity).State = EntityState.Modified;
    }
    db.SaveChanges();
    ret = true;

    // Set Original Entity equal to changed entity
    _OriginalEntity = Entity;

    // If new entity, add to view model Users collection
    if (IsAddMode)
    {
      Users.Add(Entity);
      // TODO: Send user name and password to user
    }

    // Set mode back to normal display
    CancelEdit();
  }
  catch (DbEntityValidationException ex)
  {
    ValidationMessages =
      new ObservableCollection<ValidationMessage>(
        db.CreateValidationMessages(ex));
    IsValidationVisible = true;
  }
  catch (Exception ex)
  {
    PublishException(ex);
  }
  return ret;
}

edit because you want the old values to propagate to the screen, and this is done by firing the RaisePropertyChanged event.

After cloning the user, check whether the user is adding a new user and, if so, create a new User and put it into the Entity property. The Entity property is bound to the user detail user control, so when you create a new instance of the User class, all fields are displayed as blanks. Finally, call the BeginEdit() method to change the state of the UI.

Override CancelEdit Method
Override the CancelEdit() method so that if the user clicks the Undo button, the Entity property is set back to what it was prior to beginning the add or edit process. You once again call the Clone() method to put all the values from the _OriginalEntity object into the Entity object.

public override void CancelEdit()
{
  base.CancelEdit();

  // Clone Original to Entity object
  // so each RaisePropertyChanged event fires
  base.Clone<User>(_OriginalEntity, Entity);
}

Add/Update a User
It's now time to write code to add or update a user in the User table using the Entity Framework. To start, add some using statements to the top of the UserMaintenanceDetailViewModel.cs file.

using System;
using System.Collections.ObjectModel;
using System.Data.Entity;
using System.Data.Entity.Validation;
using System.Linq;
using Common.Library;

Modify the Save Method
Modify the Save() method you created earlier with the appropriate code to add or update a record in the User table. The complete Save() method is shown in Listing 5.

The Save() method creates a new instance of the SampleDbContext class. This class inherits from the Entity Framework's DbContext class. If the user is adding a new user, a random password is generated for the new user. Optionally, you could add a PasswordBox control that only shows up on the screen when you're in add mode. This allows the person entering a new user to add a password. The new user object in the Entity property is added to the Users collection in the EF object. If the user is in Edit mode, the state of the existing entity is changed to Modified.

The SaveChanges() method is called to have EF submit the changes to the User table in SQL Server. If this call is successful, the return value is set to true and the _OriginalEntity field is set to the Entity property. If adding a new record, the new user object is added to the Users collection property in the view model. The ListView control is notified so that it can redisplay the new collection. Finally, the CancelEdit() method is called to reset the state of the form back to List mode.

Try it Out
Run the application, add a new user, then save the new user. Also, try starting the add process, but then click the Undo button to ensure that the new user is aborted and the original user is put back in place. Try editing a user and, while editing, try clicking the Undo button to ensure that the changes you made to the existing user are reverted to the original values.

Validation Messages
Add a list box to display validation messages that may arise from the user entering incorrect information. Open the UserMaintenanceDetailControl.xaml file and, just before the final </Grid> element, add the following XAML.

<!-- Validation Message Area -->
<ListBox Grid.Row="5"
         Grid.ColumnSpan="2"
         Style="{StaticResource validationArea}"
         Visibility="{Binding IsValidationVisible,
           Converter={StaticResource visibilityConverter}}"
         ItemsSource="{Binding ValidationMessages}"
         DisplayMemberPath="Message" />
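The CommonBase.Clone() method used by the BeginEdit() and CancelEdit() overrides isn't shown in this installment. A minimal reflection-based sketch of the copying behavior described earlier might look like the code below; the class names here (CloneHelper, UserDto) are illustrative stand-ins, not the sample's actual code:

```csharp
public static class CloneHelper
{
    // Copy every readable/writable public property from source to target.
    // In the article's view model, each SET on the target also fires
    // that property's RaisePropertyChanged event.
    public static void Clone<T>(T source, T target)
    {
        foreach (var prop in typeof(T).GetProperties())
        {
            if (prop.CanRead && prop.CanWrite)
            {
                prop.SetValue(target, prop.GetValue(source));
            }
        }
    }
}

// Illustrative type for demonstration (a stand-in for the sample's User class).
public class UserDto
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}
```

Calling Clone(entity, original) before an edit, and Clone(original, entity) on undo, gives the copy-then-restore behavior the two overrides rely on, without the two variables ever pointing at the same object.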



Listing 6: Delete a user using the Entity Framework

public override bool Delete()
{
  var ret = false;
  int index;
  SampleDbContext db;
  User entity;

  try
  {
    db = new SampleDbContext();

    // Find entity in EF Users collection
    entity = db.Users.Find(Entity.UserId);
    if (entity != null)
    {
      // Find index where this entity is located
      index = db.Users.ToList().IndexOf(entity);

      // Remove entity from EF collection
      db.Users.Remove(entity);

      // Save changes to database
      db.SaveChanges();
      ret = true;

      // Remove from view model collection
      Users.Remove(Entity);

      // Calculate the selected entity after deleting.
      // After the removal, the item that followed the deleted user
      // now sits at the same index; clamp to the last item if the
      // deleted user was at the end of the list.
      if (Users.Count > 0)
      {
        if (index >= Users.Count)
        {
          index = Users.Count - 1;
        }
        Entity = Users[index];
      }
      else
      {
        Entity = null;
      }
    }
  }
  catch (Exception ex)
  {
    PublishException(ex);
  }
  return ret;
}

If you look at the User class, you can see that it's decorated with Data Annotations. The validation system works the same as that described for the login screen shown in the second part of this article series (CODE Magazine, July/August 2019).

Getting the Sample Code
You can download the sample code for this article by visiting www.CODEMag.com under the issue and article, or by visiting resources.fairwaytech.com/downloads. Select "Fairway/PDSA Articles" from the Category drop-down. Then select "A Design Pattern for Building WPF Business Applications - Part 4" from the Item drop-down.

Delete a User
The last bit of functionality to add to your user maintenance screen is to delete a user. Modify the Delete() method that you created earlier in the UserMaintenanceDetailViewModel class and fill in the appropriate code to delete a record from the User table, as shown in Listing 6.

Although there may seem to be a lot of code in Listing 6 for a simple delete operation, it's necessary. If you delete a record from the User table, you also need to delete the entity object from the Users collection property in the view model class. This leaves the Entity property pointing to an invalid user, and thus the ListView object isn't highlighting a user. The screen is now in an invalid state, as there's nothing selected in the list view and nothing displayed in the detail area of the screen.

To avoid this invalid state, locate the user to delete in the EF Users collection. Retrieve the index of where this user is in the EF Users collection and save this index in the variable index. Remove the user from the EF collection and call SaveChanges() on the DbContext object.

Remove the user from the Users collection property of the view model class. Find a valid user in the Users collection to set the Entity property to, so the ListView control can display a valid user. If there are no users left, set the Entity property to a null value.

Try it Out
Run the application and try deleting a user. Ensure that a valid user is selected after the delete has been run.

Summary
In this article, you learned to move from one state to another on the user maintenance screen. With just a few properties and a few lines of code, you can keep the user focused on what they're doing, and they always know what state they're in just by looking at the buttons. You also added code to add, edit, and delete users in the User table. Having a good set of base classes helps you follow a design pattern for standard add, edit, and delete screens. Use reflection to copy properties from a current user into another User object. This allows you to put values back if the user cancels the editing process.

Paul D. Sheriff



ONLINE QUICK ID 2001051

Vuex: State Management Simplified in Vue.js
Sometimes when you think about managing state in large applications, it’s easy to think of everything as its own small
island of functionality. But sometimes centralizing state actually simplifies the tangled web of properties and events.
In this article, I’ll show you how and why to use Vuex in your Vue.js projects to simplify and centralize your state.

Complexity Is the Problem
Lots of projects start out small. Maybe yours did. Before too long, you're up to a large number of components in your Vue project. This isn't special about Vue or about React, Angular, WinForms, C++ apps, etc. They all eventually run into this problem. It comes down to having individual components that need to interact with others.

Naively, devs often think of a Vue application as a hierarchy (as seen in Figure 1). The problem with this approach is that you think of navigating the hierarchy in order to communicate with other components. This leads to fragile code that's dependent on the existing design of a particular app; or worse, refactoring the composition could easily break it and lead to performance issues.

The other issue at the heart of the problem is coupling. A lot of modern frameworks (including Vue) are built to allow for building and testing of components in isolation. Loose coupling is the best way to allow this, but once you hit real requirements, it can be easy to start coupling Vue components through props and events (as seen in Figure 2).

Last, reactivity is a double-edged sword. It can be easy for components to change each other's data without realizing it. Finding these bugs can be laborious and requires the understanding of complex interactions.

I think that a solution to this is to centralize state throughout many Vue applications. Although this isn't necessary for the more basic Vue projects, it does represent a powerful tool for projects whose complexity is starting to increase.

What Is Vuex?
As per the Vuex website: "Vuex is a state management pattern + library for Vue.js applications." What does that mean? It's a way of centralizing state in your application so that every component has access to the state your application needs. This may be typical application data (e.g., CRUD data), but could also include UI data (e.g., errors and busy flags) as well as lookup data (e.g., states, countries).

Part of the central conceit in Vuex is that allowing components to use data should be easy and changing state should be purposeful. In other words, everyone should be able to get the state they need without regard to who "owns" that data. But at the same time, it should be more difficult to change that state, to prevent accidental changes or copies of the data. Enough talk; let's see it in action.

Applying Vuex
To start out, you'll need to add Vuex to your project. Usually this is done by calling npm or Yarn to add the package:

C:\>npm install vuex --save

Once the package is installed, you can start using Vuex. Like many libraries with Vue.js, you need to opt into using Vuex like so:

import Vue from "vue";
import Vuex from "vuex";

Vue.use(Vuex);

Shawn Wildermuth
shawn@wildermuth.com
wildermuth.com
twitter.com/shawnwildermuth

Shawn Wildermuth has been tinkering with computers and software since he got a Vic-20 back in the early '80s. As a Microsoft MVP since 2003, he's also involved with Microsoft as an ASP.NET Insider and ClientDev Insider. He's the author of over twenty Pluralsight courses, has written eight books, is an international conference speaker, and is one of the Wilder Minds. You can reach him at his blog at http://wildermuth.com. He's also making his first feature-length documentary about software developers, called "Hello World: The Film." You can see more about it at http://helloworldfilm.com.

Figure 1: Hierarchies

28 Vuex: State Management Simplified in Vue.js codemag.com


Figure 2: Coupling

For most cases, you’ll opt into using Vuex directly in the The store object is structured as four main properties that
store that you need to create. Let’s do that next. contain the four parts of the store:

Creating a Store export default new Vuex.Store({


The central idea behind Vuex is to create a Store that’s accessi- state: {
ble from anywhere in your Vue.js application. This happens by },
creating an instance of an object inside of Vuex called Store: mutations: {
},
// store.js actions: {
import Vue from “vue”; },
import Api from “@/services/api”; getters: {
}
Vue.use(Vuex); });

export default new Vuex.Store({ Each part of the store has its own job:
...
}); • State: The actual data in the store
• Mutations: Where state is changed (or mutated)
Notice that you’re exporting the store, so you probably • Actions: Operations on the data. Often resulting in
know that the next step is to import it into the application: one or more mutations
• Getters: Computed operations on the state
// main.js
import Vue from ‘vue’; One of the key ideas here is to centralize the changing of
import App from ‘./App.vue’; state. This way, any changes to state (and reactivity from
import router from ‘./router’; that state) only need to be concerned in one place. In fact,
import store from “@/store.js”; you can ensure that data isn’t changed anywhere but in mu-
Vue.config.productionTip = false tations by adding:
new Vue({
router, export default new Vuex.Store({
store, strict: true,
render: h => h(App) state: {
}).$mount(‘#app’) },
mutations: {
Notice that as you import the store, you’re just adding it },
to the new Vue object. The effect of this is that the store actions: {
is available through out every view throughout the system. },
Because this Vue object is the parent of the entire project, getters: {
it’s projected as a property throughout the app. Note that }
in this case, you’re injecting it in the main.js (of a Vue CLI });
project) so that it’s available everywhere. To access it, you
use the $store property in your code. As you add to the Turning on strictness throws errors when the state is at-
store, that will become more apparent. tempted to be changed outside of the mutations. Strictness



is a double-edged sword in that it forces more checking code to be included, so it's generally suggested that you only turn on strictness during development.
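One common way to apply that advice is to key the strict flag off the build environment, so development builds get the extra checking and production builds skip it. Here's a sketch, assuming a Vue CLI-style build where process.env.NODE_ENV is set; the commented line shows where the options would feed the real store:

```javascript
// store.js — enable Vuex strict mode only outside of production builds.
const debug = process.env.NODE_ENV !== "production";

const storeOptions = {
  strict: debug,
  state: { isBusy: false, error: "" },
  mutations: {},
  actions: {},
  getters: {}
};

// export default new Vuex.Store(storeOptions);
```

With this in place, accidental state changes outside of mutations throw during development but cost nothing once the app is built for production.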
Let's walk through using the store to see how it works, beginning with state.
Using State
Let's start with some simple state: isBusy and error. This way, you can show errors and wait cursors wherever you need them in your app:

export default new Vuex.Store({
  strict: true,
  state: {
    isBusy: false,
    error: ""
  },
  mutations: {
  },
  actions: {
  },
  getters: {
  }
});

You can see that state is just a bag of properties. These can be simple, scalar properties like you have here, or they can be collections or trees of information. In a view, you can simply bind to these using the $store object. For example:

<div class="alert alert-warning" v-if="$store.state.error">
  {{ $store.state.error }}
</div>
<div v-if="$store.state.isBusy">
  <i class="fas fa-spinner fa-spin"/>
  Please Wait...
</div>

Of course, you're accessing these through the state directly from the $store object. That's pretty ugly. In fact, this can cause some issues with reactivity. To use the state, you really want to expose the state as computed values in your code:

computed: {
  isBusy() {
    return this.$store.state.isBusy;
  },
  error() {
    return this.$store.state.error;
  }
},

Why are computed properties important here? Because that's the path to making sure that the properties are reactive, so that changes to isBusy and error mark changes to the views. Accessing the global $store object directly gives you the underlying reactive object, but making sure your view reacts when that object (or its properties) changes means you really should use computed properties.

With the computed values in place, you can simplify the markup:

<div class="alert alert-warning" v-if="error">
  {{ error }}
</div>
<div v-if="isBusy">
  <i class="fas fa-spinner fa-spin"/>
  Please Wait...
</div>

At this point, you might be frustrated. To get this simple piece of data exposed, there's a lot of boilerplate. Stay with me. We're headed somewhere amazing.

The magic happens when you use Vuex's helpers. In the view, you can import a few helpers, including mapState, mapMutations, mapActions, and mapGetters. Let's start with mapState to see how this works:

import { mapState } from "vuex";
export default {
  computed: mapState(["error", "isBusy"])
}

This simplifies the mapping from the store to the state you want to expose. It allows you to simply pick the parts of the state that this view uses and map them as computed properties. The result is that your views can use most of Vuex as the standard data and methods that you're already used to implementing. Let's look at mutations next.

Mutations
Now you have state, but no way to change it. That's where mutations come in. Mutations are simple functions that make the change:

mutations: {
  setError(state, error) {
    state.error = error;
  },
  setBusy(state, busy) {
    state.isBusy = busy;
  }
},

The first parameter of a mutation is always the state object, and the second is the payload sent to the mutation. Mutations are called by using the commit call on the store:

this.$store.commit("setError", "Failed to get sites");

Of note, if you need to send more than one parameter, you'll need to wrap it in an object because mutations always take only one parameter. For example:

this.$store.commit("setError",
  {
    message: "Failed to get sites",
    exception
  });

30 Vuex: State Management Simplified in Vue.js codemag.com


If you have to do this, you can easily destructure it in the mutation:

mutations: {
  setError(state, { message, exception }) {
    state.error = message;
  },
  setBusy(state, busy) {
    state.isBusy = busy;
  }
},

Although it's less common, you can use the mapMutations helper to map your mutations to methods on your view:

methods: {
  ...mapMutations(["setError", "setBusy"])
}

In the case of mapMutations, you'll need to use the spread operator (...) to expand the mutations as separately named methods. Like mapState, it takes an array of strings that represent the name(s) of the mutations. In this way, you can just call a mutation like any other method:

try {
  this.setBusy(true);
  await this.loadRegions();
}
catch {
  this.setError("Failed to load regions");
}
finally {
  this.setBusy(false);
}

One thing to be sure of is that you should do as little work in the mutation as possible. Mutations are required to be synchronous, so that means that you should change the state and move on. If you have other work to do, that's what actions are for. Let's look at those next.

Actions
Actions are where a lot of the work happens in Vuex. Actions are simply methods that have access to the store but can be asynchronous as well. Typically, I use actions to do network operations and search operations. Although I usually keep the actual API calls in a separate class, using async and await in actions seems to be a natural fit for API calls. For example, assume that you have a state for a set of regions and you want to be able to set the regions and load the regions:

export default new Vuex.Store({
  strict: true,
  state: {
    error: "",
    isBusy: false,
    regions: []
  },
  mutations: {
    setError(state, error) {
      state.error = error;
    },
    setBusy(state, busy) {
      state.isBusy = busy;
    },
    setRegions(state, regions) {
      state.regions = regions;
    }
  },
  actions: {
    async loadRegions({ commit }) {
      let regions = await Api.loadRegions();
      commit("setRegions", regions);
    }
  }
});

What this generally means is that most of the interesting work you're doing ends up in actions. The idea here is that these things chain together:

• Actions execute mutations
• Mutations change state
• Views bind to state (or getters, as you'll see soon)

To execute an action, you can use the dispatch function:

this.$store.dispatch("loadRegions");

Again, using the Vuex helpers can simplify this:

import { mapState, mapMutations, mapActions }
  from "vuex";

export default {
  computed: mapState(["regions"]),
  // ...
  methods: {
    ...mapActions(["loadRegions"])
  }
};

This really means that if you use the mappings, your views and view code shouldn't even realize that they're using Vuex. Vuex should be a transparent way to integrate with your components; the mapping should just be the glue between the state management and the components.

Getters
Finally, the last part of Vuex (and likely the least used) is getters. Think of getters as computed values for Vuex:

getters: {
  siteCount(state) {
    return state.currentSites.length;
  }
}



You're passed the state (and, optionally, the getters object itself) so that you can compute a value if necessary. Getters are accessed via the getters object on the store and can be accessed like properties:

let count = this.$store.getters.siteCount

Getters are treated like computed values, so multiple calls to a getter return the cached value instead of re-executing the code. And because getters are like computed values, they respond to reactivity by re-executing the code if the underlying state changes.

Getters also support a function-like syntax that can take a variable number of arguments, using arrow functions:

getters: {
  siteCount(state, getters) {
    return state.currentSites.length;
  },
  findSite: (state) => (siteId) =>
    state.currentSites.find(s => s.id == siteId)
}

Using the method-style access requires that you call it like a function:

let site = this.$store.getters.findSite(1);

Unlike property-style getters, method-style getters aren't cached and are executed every time they're called.

Probably unsurprisingly, Vuex helpers support mapping getters as well:

import {
  mapState,
  mapMutations,
  mapActions,
  mapGetters
} from "vuex";

export default {
  name: "home",
  computed: {
    ...mapState(["regions"]),
    ...mapGetters(["siteCount"])
  }
};

If you need to map getters and state, you'll need to use the spread operator to mix them.

Where Are We?
At this point, you can see the value of centralizing your state with Vuex. I've talked about the benefits of controlling where change can happen to prevent unintended state change. I've also talked about increasing the testability of code that uses Vuex. Hopefully, walking through an existing application and seeing how I'd add Vuex to it will help you get started; the complete store is shown in Listing 1. Although it's common to start a project with Vuex, it's also not uncommon to migrate an application to Vuex once it's reached some level of complexity.

You can download the source code at: https://github.com/shawnwildermuth/VueStateManagementExample.

Shawn Wildermuth

Listing 1: Complete Store

// store.js
import Vue from "vue";
import Vuex from "vuex";
import Api from "@/services/api";

Vue.use(Vuex);

export default new Vuex.Store({
  strict: true,
  state: {
    error: "",
    isBusy: false,
    regions: [],
    currentSites: [],
    siteCart: []
  },
  mutations: {
    setError(state, error) { state.error = error; },
    setBusy(state, busy) { state.isBusy = busy; },
    setRegions(state, regions) { state.regions = regions; },
    setCurrentSites(state, sites) { state.currentSites = sites; },
    addToCart(state, site) { state.siteCart.push(site); },
    clearCart(state) { state.siteCart = []; }
  },
  actions: {
    async loadRegions({ commit }) {
      let regions = await Api.loadRegions();
      commit("setRegions", regions);
    },
    async loadSites({ state, commit }, key) {
      let region = state.regions.find(r => r.key == key);
      if (region) {
        let sites = await Api.loadSites(region);
        commit("setCurrentSites", sites);
        return;
      }
      commit("setError", "Failed to get sites");
    }
  },
  getters: {
    siteCount(state, getters) {
      return state.currentSites.length;
    }
  }
});



ONLINE QUICK ID 2001061

Accessibility Guidelines and Tools: How Do I Know My Website Is Accessible?

Once you've learned a little bit about accessibility—what it is, why it's important, how to start implementing it—you naturally also start to think about testing and verifying accessibility. How do you know that the changes you're making are good, or right, or helpful? What if you're just making new, more interesting mistakes? Or maybe you've been told that a third party will be doing an accessibility audit on your site or application, and you want to see for yourself what kind of issues may be found.

Ashleigh Lodge
ashleigh.lodge@gmail.com
twitter.com/shimmoril

Ashleigh is the Application Development Manager at Neovation Learning Solutions (www.neovation.com/) in Winnipeg, Manitoba, Canada. Ashleigh is a vocal advocate for accessibility and inclusive design. Earlier this year, Ashleigh was a speaker at TedxWinnipeg, bringing the idea and foundations of digital accessibility to an audience of nearly 700 Winnipeggers. In her free time, Ashleigh consumes a truly frightening amount of pop culture media, including movies, TV shows, comic books, and novels. You can usually find her with Pokémon Go open on her phone, no matter where she is or what she's (supposed to be) doing.

I'll give you a bit of a spoiler: There's no "one neat trick" for accessibility; there's no such thing as a 100% accessible website or application; and it's always something you'll be working on, just like security and performance. Two people with the same disability can have different accessibility needs, and accessibility fixes or accommodations for one user can make things worse for a different user. Fortunately, there are a number of tools and resources out there to help you with this journey, and more are being created every day.

Standards
Accessibility has been around since the very beginning of the Web, so it's no surprise that there are official standards out there. You're probably familiar with the World Wide Web Consortium (W3C) (https://www.w3.org/), which is the primary standards body for the Web. The Web Accessibility Initiative (WAI) (https://www.w3.org/WAI/) is a part of the W3C that focuses on accessibility and is where you will find all the official standards and documents for Web accessibility.

WCAG
One of the primary standards created by the WAI is called the Web Content Accessibility Guidelines, or WCAG. It's currently at version 2.1; version 2.0 was released in 2008, and version 3 is being developed. If you're being held to WCAG 2.0, you can use version 2.1, as they're backward compatible, and of course there have been many developments in the last 10 years that you don't want to miss out on.

WCAG has three levels, or ratings: single A, double A (AA), and triple A (AAA), where A indicates the lowest level of accessibility and AAA is the highest. For the most part, you'll want to aim for AA, as this is the level commonly targeted by legislation and the first functional level for actual accessibility. In some cases, it'll be relatively easy to jump up to AAA for some criteria, so keep those in mind and aim for AAA when at all possible. For instance, the AA criterion for sufficient color contrast is 4.5:1, while the AAA criterion is 7:1. If you're already working to create an accessible color palette, focusing on meeting AAA instead of AA won't be much, if any, extra work.

POUR
There are four major principles of digital accessibility: Perceivable, Operable, Understandable, and Robust, or POUR. Every WCAG criterion falls under one of these principles in some way or another, so if you're confused or not sure how to make a feature accessible, start with POUR:

• Is it Perceivable by all users, including those with low vision, those who are blind, and/or users who are d/Deaf or hard of hearing?
• Is it Operable by all users, even if they don't use a mouse? Is it always clear where you are on the page, and are users able to interact with all the controls?
• Is it Understandable by all users? Is your language too esoteric, or is your functionality obscured by the use of non-standard icons?
• And finally, is it Robust? Can users choose their own technologies, including devices, browsers, and assistive technology?

ARIA
Accessible Rich Internet Applications (ARIA) is a technique used when it's not possible to make a feature or control accessible using the standard functionality and behavior that exists in HTML. ARIA should always be considered a solution of last resort, which can seem a bit odd at first glance. It's right in the name: ARIA means making accessible Internet applications, so having ARIA is better than not having ARIA, right? Well…

Many disabled people are familiar with how sites and elements break for them: This is (unfortunately) a frequent occurrence, and they have learned to recognize common issues and know how to work around them. When ARIA is introduced and used badly, the result can be strange and unknown ways for things to break, leaving the assistive technology user more lost than ever.

34 Accessibility Guidelines and Tools: How Do I Know My Website Is Accessible? codemag.com


Figure 1: The Understanding WCAG document

WCAG, POUR, and ARIA are the standards and guidelines you'll be using to check and verify that your sites and apps are accessible, but how do you actually use this information?

Understanding WCAG
For WCAG, you may want to refer to these three documents when working to meet the criteria:

• Understanding WCAG 2.1 (https://www.w3.org/WAI/WCAG21/Understanding/)
• Techniques for WCAG 2.1 (https://www.w3.org/WAI/WCAG21/Techniques/)
• How to Meet WCAG 2 (Quick Reference) (https://www.w3.org/WAI/WCAG21/quickref)

Understanding WCAG 2.1 provides detailed information on each of the principles, as well as suggestions for implementing accessible functionality. As seen in Figure 1, it's an official document and isn't very accessible!

The primary issue with these documents is that they are official, and in some cases, legally binding standards. If you're not already familiar with a significant amount of accessibility-specific jargon, it's going to be very hard to read through or even skim for what you're interested in. In addition, the language throughout is very formal and uses longer, more complicated words when shorter, clearer ones would do the job just as well (and more accessibly).

Once you dig into a specific guideline—let's go with Text Alternatives, as seen in Figure 2—you can see that it's a little more readable. The initial paragraph describing the criteria is relatively straightforward and clear, and the Intent section provides some handy background as to why this criterion is important for accessibility. The Success Criteria are also helpful—it's always nice to have an example of good accessibility to work toward.

In general, this document is overly complicated and it's a lot to parse, especially if you're not sure what you should be looking for.

Techniques
Next is the Techniques document (Figure 3), which focuses on ARIA, client-side scripting, and CSS techniques for accessibility. As mentioned earlier, ARIA should only be used when absolutely necessary, but when that does happen, it's important to follow the guidelines and rules that have been developed.

The Techniques document is very similar to the Understanding document: It's an official specification, the language isn't very accessible, and it's not very easy to understand without reading it through multiple times.

One neat thing about the Techniques document is the examples section. As seen in Figure 4, near the bottom of each criterion you'll see a simplified code example of how to use a specific ARIA or CSS feature. If you're unsure how to implement a specific technique, start with one of the examples provided, and build up from there.

Quick Reference
Last, there is the How to Meet WCAG Quick Reference, which is an interactive document that you can filter to show


the levels, techniques, and technologies that you're interested in (Figure 5).

Figure 2: The Text Alternatives section of the Understanding WCAG document

Out of the three documents, I use the Quick Reference most frequently. Accessibility-wise it's the easiest and most straightforward of the three documents, and the built-in filtering can be very useful. If you have any questions or would like more detail on a criterion, it includes links to the Understanding and Techniques documents.

Let's take a more in-depth look at the Quick Reference and how you may want to use it.

First, there's the Filters section, as you can see in Figure 6. As I mentioned before, you'll want to target version 2.1, which is conveniently also the default. Remember that 2.1 is backward-compatible with 2.0, so you shouldn't need to target version 2 specifically.

The Tags filters can be helpful if there's a specific type of functionality you're looking to make accessible. For instance, maybe you have a carousel of hero images at the top of your site, so you can add the Carousel tag to filter the results down to criteria that are specific to that type of functionality.

For the Levels filter, I'd suggest removing the checkmark by Level A to clear out some of the clutter and allow you to focus on the AA and AAA criteria. One interesting thing to note: Un-checking a level doesn't remove any of the content from this page, which is an interesting way to keep the guide accessible; it simply collapses the criteria down to their headers and makes them inactive.

The last couple of filters (Techniques and Technologies) can be handy if you're already aware of what you're looking for, but I'll suggest leaving them alone at first, until you're more comfortable with this document and the information it provides.

Switching back to the Table of Contents tabs, you'll see four main sections, one for each of the POUR principles (Perceivable, Operable, Understandable, and Robust).

Within each section, you'll find a breakdown of the various aspects of a criterion. For example, under Perceivable, you can see Distinguishable, which covers areas like making sure that color isn't the only thing that distinguishes two pieces of information and ensuring that there's enough contrast between the background and text colors.

You can click on any of the headers in the table of contents to jump directly to that section. If you look for guidelines around contrast, you can see that it's in the list twice: once for Minimum and once for Enhanced.


Figure 3: The Techniques for WCAG document

Figure 4: An example from the Techniques document, showing how to use aria-describedby



Note that Contrast (Minimum) is the AA criterion, while Contrast (Enhanced) is the AAA criterion. Just as with Level A, if you were to go into the filters and uncheck Level AAA, the Contrast Enhanced option would be grayed out and disabled. In Figure 7, you can see that each criterion consists of a short description and then a longer, more detailed description hidden in a hide/show area. If you expand the full description for Contrast (Minimum), you can see that larger text has a smaller required contrast ratio and that you don't need to worry about the contrast ratio of decorative images or logos.

Figure 5: The How to Meet WCAG Quick Reference guide

Figure 6: The How to Meet WCAG Quick Reference filters section
Each criterion also includes information on techniques for implementation and examples of what failures may look like. For Contrast (Minimum), two very useful scenarios are provided under Sufficient Techniques, offering exact text size guidelines and links to the Techniques document for even more information.

The failures listed for contrast aren't terribly interesting, as it's a pretty straightforward principle, but for more complicated criteria, it can be helpful to know what a failure might look like.

The Quick Reference, in particular, is a great resource, but are you really going to open up your site or app and go through this document manually to check that you've met all the criteria? Not likely! Fortunately, there are several automated and semi-automated ways to get started with checking accessibility, and they're easy to integrate into your current design, development, and testing processes.

Checking Accessibility
It's critical to understand that accessibility testing CANNOT be fully automated—there's just too much nuance and too many judgment calls to eliminate humans from the process. However, there are a huge number of tools and processes that will help with some of the verification and allow you to automate what you can, so you can focus your limited resources where they're needed most.

Linting
As with other best practices, accessibility linters are out there to do some basic checking as you write your code. If you're using Vue or JSX, look for ESLint plugins, and there are many others for different languages and frameworks.

If you can't find one for your particular situation, all of the information you'd need to create your own is freely available. You're probably not the only person looking for accessibility linting in a specific language, so think about creating one and putting it up on GitHub for everyone in the community.

It wasn't a linter per se, but at Neovation, we ended up doing something similar for a project that makes heavy use of iframes, which tend to confuse a lot of automated accessibility checkers. In our case, it was easier to start from scratch and develop a tool that would work for us and our very particular situation than to jerry-rig an existing tool.

As with any kind of testing, the earlier in the development process you can begin accessibility checking, the better. You don't want to be trying to do a bunch of testing and verification the day before you're due to go live or, even worse, skipping it altogether.

CI/CD Pipeline
Along with a linter, you'll likely want to add some automated accessibility checking to your code check-in and deployment processes.

"Capital-D" Deaf vs. deaf
You may have seen "d/Deaf" in other articles, or maybe this is your first time encountering it. This term is meant to indicate and distinguish someone who is culturally Deaf versus someone who is deaf but not integrated with Deaf culture. Capital-D Deaf indicates people who are culturally Deaf: They attended Deaf schools, use Deaf languages (officially recognized sign languages and dialects as well as private family ones), and are interested in Deaf arts, such as cinema and theatre. People who identify as capital-D Deaf have strong ties to their local Deaf culture, and being Deaf means belonging to a community. On the other hand, small-d deaf people are not usually connected with the wider Deaf culture, which can occur for multiple reasons: They were unable to participate in Deaf programs or schools; their hearing loss is recent; or they see deafness as a strictly medical issue.

Figure 7: The Level AA guideline for contrast
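For the ESLint route mentioned in the Linting section, enabling a plugin is usually just a couple of lines of configuration. Here's a sketch of an .eslintrc.json for the JSX plugin, assuming eslint and eslint-plugin-jsx-a11y are installed from NPM (the recommended preset name shown is the one that plugin documents):

```json
{
  "plugins": ["jsx-a11y"],
  "extends": ["plugin:jsx-a11y/recommended"]
}
```

With that in place, issues like images missing alt text are flagged in your editor as you type, long before any manual testing pass.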



Figure 8: Results from an axe analysis showing a color contrast issue

A tool from Deque Labs, called axe, is an excellent automated accessibility checker that you can add to your continuous integration and/or continuous deployment pipeline. It's open-source and hosted on GitHub, making it easy for those who might be interested in making changes or improvements.

Although I do recommend automated accessibility checking tools such as linters and pipeline integrations, remember that automated testing will never find all accessibility issues. These types of tools are really good at finding the low-hanging fruit, but at some point, you're going to need to get your hands dirty and do some manual testing.

Extensions and Bookmarklets
Browser extensions and bookmarklets are a perfect place to start your manual checking: They are easy to find and use, and they provide a lot of value.

Although extensions and bookmarklets are similar to automated testing in that they check the obvious and easy things, some also provide a checklist of items that can't be tested automatically. More importantly, they get you actually using a site or app and seeing how items pass or fail in the real world.

The axe tool is also available as a browser extension, which makes it available in your browser's dev tools. As you will see, the line between extensions and dev tools is pretty fuzzy. I had to choose to install axe, so that's where I've decided to draw the line: Dev tools are built right into the browser and available by default, and extensions require some manual intervention to use.

By first using axe as a browser extension, you can start to see how it works and what the results of a scan look like. This can be a good first step before adding axe to your pipeline so you know what to expect.

Once axe is installed, it will be available as a tab in your dev tools (F12) with some general update and version information, as well as a big Analyze button. Activating this button kicks off the checking process, and you'll get back a list of issues for your page, such as "Elements must have sufficient color contrast" or "Images must have alternate text," as shown in Figure 8.

Selecting an issue provides you with more information, including a link to inspect the problematic code, a description of the issue, and suggestions about how to fix it. For any color contrast issues, axe provides you with the hex color values for the background and text, as well as the current contrast value.

For alt text errors, you'll be provided with a list of ways to fix the problem: You could simply add an alt or title attribute to the image; you could use ARIA to indicate that the image is decorative and doesn't require alt text; or you could provide a label with aria-label or aria-labelledby.

Figure 9: The WAVE extension panel showing the results of an analysis

Alt text is crucial for non-decorative images: A screen reader user who encounters an image without alt text will have the image's file name read out as a fallback. Depending on your file names, this could be inconvenient—or absolutely awful. A file called "MyCompanyLogo.png" is probably pretty easy


to figure out, but one named "4d7946696c65576974684e6f416c7454657874.jpg" provides absolutely no information about the image whatsoever. The crucial consideration for alt text is whether it provides the same experience to screen reader users as it does to sighted ones.

Figure 9 shows another accessibility checking extension that I like to use: WAVE from WebAIM (Web Accessibility in Mind), a nonprofit focused on Web accessibility. As you use more of these types of tools, you'll notice that they generally pick up on the same types and categories of issues, even if the UI and organization of each tool is unique.

WAVE adds icons to the page for each of the accessibility issues it finds, which makes it very easy to find all of the issues detected without needing to individually inspect them.

One unique issue picked up by WAVE is the order of headings on the page. These are reported as structural errors, and there's also a tab dedicated to Structure. Many screen readers have hotkeys that allow users to navigate through a page via the headings, so it's crucial to ensure that your headings are used in order, without skipping a level. Think of it like reading a table of contents or a bulleted list:

• If the list
• Starts out like this
  • Then having it suddenly skip an indentation level
• Like this
• Would be really confusing

It's easy to control how headings display visually, so there's no reason to use levels out of order or to skip one (or more) levels.

A feature of WAVE that I like to use is the Styles toggle. Turning this off strips all of the CSS and other styling from the page and gives you an HTML-only version. Some people, disabled or not, may prefer to interact with sites in this way, due to problems with contrast or color schemes, so it can be enlightening to see what your site looks like naked. Is everything still in the correct order? Does anything strange happen with the content, making it unreadable or inaccessible?

Finally, there is HTML_CodeSniffer, which picks up on most of the same issues you've seen with other tools. I like the UI for CodeSniffer and the fact that you can choose your WCAG level before running a scan. Additionally, CodeSniffer is available as a JavaScript bookmarklet, rather than a browser extension, so it can be a bit easier to use on locked-down systems.

Browser Dev Tools
Both Chrome and Firefox have accessibility tools that come installed by default. For example, both browsers have built-in contrast checkers in their code inspectors. Select an element and open the color picker for the text/foreground color, and you'll see a little icon indicating success or failure as well as the contrast value. To try this for yourself, you'll need to open your dev tools (F12) and inspect an element. Then you can click on the color preview (the circle or square showing the color) in the Styles pane to see the color picker and the contrast information in the popup that appears.

Firefox has an Accessibility tab in dev tools with an automated accessibility checker similar to others that you've seen already in this article. It also has a really unique and interesting feature—it provides a visual representation of the accessibility tree.

If you're familiar with the DOM (document object model), the accessibility tree is similar, but it's generated based on anything relevant for accessibility, rather than the entire code base for the page. CSS and styling are stripped out, anything that's hidden from assistive technology is removed, etc.

You can think of the accessibility tree as a more robust version of WAVE's Styles toggle that allows you to see exactly what information is being presented to assistive devices, such as screen readers. Understanding and interacting with the accessibility tree is an advanced technique, so keep it in your back pocket until you're ready to level up your accessibility skills.

Chrome has an Audits tab that you may be familiar with: You can run Performance, Progressive Web App, Best Practices, and Accessibility audits, while also emulating desktop or mobile displays, or throttling your connection to look at loading speeds. The accessibility audit is, again, a fairly standard automated test, but the nice thing about this one is that it provides you with a list of items that need to be manually tested, including tab order and keyboard navigation, heading levels, and whether landmark elements are properly used to improve navigation.

The Chrome accessibility audit also provides an accessibility "grade," as it does for the other types of audits (Figure 10). I'm not fond of the idea of putting a number value on accessibility for a couple of reasons: These types of tests can't check or find all possible issues, there's no such thing as 100% accessible, and how accessible a site is varies from user to user.

However, the "grade" can be a good way to present information to a manager or to non-technical team members. If you have a site that's getting 70% on the audit, and you have five color contrast issues and 10 images without alt text, it can help to say something like "we're at 70% right now, but if we spent half an hour fixing these 15 small issues, we'd be over 80%!"

There are many options for automated accessibility checking, and they're all quite similar, so I won't recommend any one solution specifically. It's more important that you have

Accessibility Tools: Linters and Pipeline
The ESLint accessibility plugins for Vue and JSX are available on NPM:

• eslint-plugin-vue-a11y: www.npmjs.com/package/eslint-plugin-vue-a11y OR npm install eslint-plugin-vue-a11y
• eslint-plugin-jsx-a11y: www.npmjs.com/package/eslint-plugin-jsx-a11y OR npm install eslint-plugin-jsx-a11y

There are multiple versions of the axe library, and they can all be found on GitHub: https://github.com/dequelabs. You can use axe-core for your CI/CD pipeline, react-axe if you're working with that framework, and there are axe libraries for iOS and Android native development as well.

codemag.com Accessibility Guidelines and Tools: How Do I Know My Website Is Accessible? 41


Accessibility Tools: Extensions and Bookmarklets
The axe and WAVE browser extensions are available on the Chrome Web Store (chrome://extensions) and via Firefox Browser Add-ons (about:addons).

HTML_CodeSniffer is developed by SquizLabs and is available on GitHub: https://squizlabs.github.io/HTML_CodeSniffer/.

There are many other similar tools available, such as:

• AInspector WCAG (Firefox only): http://ainspector.org/
• Accessibility Insights for Web (Chrome and Edge): https://accessibilityinsights.io/
• ARIA Validator (Chrome only): https://code.google.com/archive/p/aria-toolkit/

Figure 10: Chrome Accessibility Audit results

any accessibility checking at all, so pick whichever tool or tools that you (or your company) prefer.

Keyboard Navigation
There's one accessibility test that can't be automated, but it's so easy and simple to do that it's baffling to me that it's not tested more often: keyboard navigation. Unplug your mouse, turn off your touchpad, and try to navigate through your site. Is it always clear where you are on the page (focus state)? Can you interact with controls as expected? Do you ever get trapped somewhere, like in a menu or pop-up, with no ability to get back to the main content of the page?

Marcy Sutton recently released an NPM package called No Mouse Days (https://github.com/marcysutton/no-mouse-days) that allows you to force developers to use only their keyboard, configurable by day of the week. Maybe it's time to implement No Mouse Mondays in your company or team?

As you start tabbing through your site, you'll want to keep an eye out for distinct focus states on every interactive element. A lot of the time, you'll see nothing more than a dotted light gray outline, which is the browser default kicking in. When you see this, it's almost always because the CSS for this site has deliberately removed focus states, and the browser is attempting to compensate for that removal. And don't forget about contrast: your focus state must meet contrast criteria on all elements!

When you hit an interactive element (a button or link, a menu, a drop-down, or an input field), can you activate it, enter and remove data, and move between options? When you tab out of a field and there's client-side validation on the page, does tabbing out trigger that validation, or does it rely on click events?

If you've launched a pop-up or overlay, are you placed into the pop-up, or do you keep moving through the page underneath instead? Is there a way to close this overlay via the keyboard, or is it click-only? When you reach the end of the content in the pop-up, does it bring you back to the top of the pop-up content, or do you move back to the main page while the pop-up remains open?
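The contrast criteria mentioned above for focus states are defined numerically by WCAG as a ratio of relative luminances. As a rough, framework-free sketch (plain JavaScript; the helper names are mine, not from any library), the calculation looks like this:

```javascript
// WCAG relative luminance: linearize each sRGB channel, then weight it.
function channel(c) {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function luminance([r, g, b]) {
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

// Contrast ratio = (lighter + 0.05) / (darker + 0.05), from 1:1 up to 21:1.
function contrastRatio(rgb1, rgb2) {
  const [hi, lo] = [luminance(rgb1), luminance(rgb2)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// Black on white is the maximum possible ratio, 21:1.
console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(1)); // "21.0"
// Light gray (#999) on white fails the 4.5:1 AA threshold for body text.
console.log(contrastRatio([153, 153, 153], [255, 255, 255]) >= 4.5); // false
```

This is the same math that tools like Contrast Ratio and the browser inspectors apply for you; running it over your whole palette is one way to cover the hover-state and focus-state combinations that automated page scanners tend to miss.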



Keyboard navigation is also considered a feature for power users: I'm usually comfortable tabbing through a well-designed site, rather than reaching for my mouse every few seconds. This means that useful, consistent, and well-designed keyboard navigation will benefit more than just your disabled users.

Contrast and Color Blindness
Color contrast is more crucial and more complicated than people expect. Someone with low vision may have difficulty with common color schemes that use light gray text on white backgrounds. Buttons and links can have hover states with different colors; these also need to be checked for contrast. And of course, you need to have enough contrast on focus states as well.

Fortunately, checking for color contrast is both easy and easily automated—mostly. Automated tools won't generally be able to pick up on hover states or links that change after they're clicked, so you may need to use a different kind of tool to check your entire palette at once.

I don't do a lot of design work, so one I like to use is Contrast Ratio by Lea Verou (https://contrast-ratio.com/). This is a Web-based tool that allows you to enter your colors by name, Hex, HSLA, or RGBA. It also provides a link for your color combination and shows you how the two colors look against each other right on the page.

Focusing on contrast is meant to make things easier for low-vision users, as well as those who are colorblind. Additional issues that can affect colorblind users include only using color to distinguish elements. You can use a colorblindness simulator to check for issues like this: Access your site or take a screenshot of it (or use a mockup or prototype) and play around with the different types of colorblindness. You'll see issues that you never would have expected, like that your beautiful corporate color scheme looks significantly less beautiful to one in 10 users. Oops!

Accessibility Tools: Other
There are many other types of accessibility tools out there, from contrast checkers, to color blindness simulators, to disability simulators:

• Contrast Ratio by Lea Verou: https://contrast-ratio.com/
• Color Blindness Simulator: https://www.color-blindness.com/coblis-color-blindness-simulator/
• Colorblinding (Chrome only): https://github.com/LeonardoCardoso/Colorblinding
• Readability Analyzer: https://datayze.com/readability-analyzer.php
• Funkify (Chrome only): https://www.funkify.org/

Text Analysis
Depending on your role, you may not be responsible for developing text or content, but if you are, keep in mind that language choices matter to accessibility.

As a general guideline, you'll want to target a fourth-grade reading level on the Flesch-Kincaid scale. Use tools like the Readability Analyzer, shown in Figure 11, to check your content.

Add a passage to the page and click on the Analyze button to receive a report on your text, including information that helps you more easily target problematic parts. In particular, you may want to look at the Percentage of Difficult Words and possibly run your text through the Difficult and Extraneous Word Finder. Be aware that it doesn't deal particularly well with contractions. Also note that, just because you have a higher reading level or larger percentage of difficult words, that doesn't necessarily mean that you need to make any changes to your text.

As with almost everything, these types of checks are contextual: If you're in a situation or speaking to an audience with domain-specific knowledge, it's likely that you'll receive numbers that are "too high" from both of these tests. You could make changes to your text to make it more readable for general audiences, but that will probably negatively affect the readability for your particular audience. Run the analysis but use your best judgment before making any changes.

Accessibility is always going to be a series of judgment calls. There are types and aspects of disability that conflict with each other, and you'll need to pick a direction. Or maybe there's an extremely good reason to go outside of the WCAG criteria in some situations; you'll need to decide if it's worth any potential accessibility (or usability) problems.

Accessibility is always going to be a series of judgment calls.

Screen Readers
I strongly believe that everyone (developers, designers, project managers, product owners) should be familiar with the basics of how a screen reader works. Screen readers are highly specialized tools, though, and they are difficult to master when not used on a consistent basis.

Because of this, as well as availability, licensing, and pricing issues, I'll suggest becoming familiar with the screen

Figure 11: Readability Analyzer results
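Reports like the one in Figure 11 are built on formulas such as the Flesch-Kincaid grade level, which only looks at sentence length and syllable counts. A minimal sketch follows (plain JavaScript; the syllable counter is a naive vowel-group heuristic of my own, so treat its output as approximate):

```javascript
// Naive syllable estimate: count vowel groups, ignoring a trailing silent "e".
function countSyllables(word) {
  const w = word.toLowerCase().replace(/[^a-z]/g, "");
  if (!w) return 0;
  const groups = w.replace(/e$/, "").match(/[aeiouy]+/g);
  return Math.max(1, groups ? groups.length : 0);
}

// Flesch-Kincaid grade level:
// 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
function fleschKincaidGrade(text) {
  const sentences = text.split(/[.!?]+/).filter(s => s.trim()).length;
  const words = text.split(/\s+/).filter(Boolean);
  const syllables = words.reduce((n, w) => n + countSyllables(w), 0);
  return 0.39 * (words.length / sentences) +
         11.8 * (syllables / words.length) - 15.59;
}

console.log(fleschKincaidGrade("The cat sat on the mat.") < 4); // true
```

Short, common words and short sentences push the grade down toward the fourth-grade target; domain-specific vocabulary pushes it up, which is exactly why the numbers still need human judgment.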



reader built into your phone or tablet. On both Apple and Android devices, a built-in screen reader is available in the Settings, usually under Accessibility (Figure 12). The Apple screen reader is called VoiceOver, and Android's is TalkBack. They're both relatively easy to pick up and start using, although they can behave differently from what someone might expect based on previous experience with desktop screen readers, such as NVDA or JAWS.

Figure 12: iOS Accessibility settings, with the VoiceOver toggle at the top

One issue to be aware of for all screen readers is that they rely on the information provided via the accessibility tree of the browser. This means that using the same screen reader in two different browsers can provide completely different results. As you progress in your accessibility efforts, you'll need to move beyond browser testing in a single dimension and instead create a matrix of browsers and screen readers.

When beginning to use a screen reader, you'll want the voice speed to be relatively slow. If you ever have the chance to watch someone who uses a screen reader frequently, you'll notice that they usually have the voice speed set very fast. This is similar to speed reading or visually skimming a page, where the user learns to listen for certain keywords, rather than fully listening to each element as it's announced.

Test with Actual Disabled People!
There's another browser extension out there called Funkify (https://www.funkify.org/) that bills itself as a disability simulator. The idea is that you browse to a website and then pick a persona from Funkify, each of whom has a specific type of disability. Selecting a persona/disability will then change the site in some way: The low-vision persona may make the content blurry, or the fine motor control persona will add some jerkiness to your mouse movements.

As interesting and enlightening as tools such as Funkify can be, they are absolutely not a replacement for testing your site or application with actual disabled human beings. In the same way that it's not sufficient to only perform automated accessibility checking, it's highly unlikely that your team alone will find or be aware of all the accessibility issues that can exist on your site. Even if you're lucky enough to have one or more disabled people on your team, or in your company in general, disabilities are different for every person, and there's a huge variety of disability and accessibility needs out there.

Test your site or app with actual disabled human beings.

If you're doing user research or A/B testing, make sure to include people with a variety of disabilities, whether in person or online. It's also possible to hire disabled people as contractors or advisors to assist with planning and testing.

When Neovation was first starting out with accessibility in our newest product, we worked with the Canadian National Institute for the Blind (CNIB), who put us in contact with a blind woman. She came into our office and showed us how she interacts with commonly used sites like Facebook and Twitter. It was very early in our planning stages, so we didn't have anything of our own to have her try out, but experiencing how she uses a screen reader gave us a lot of information on what works, what doesn't—and why.

In many cases, companies effectively outsource this type of testing to their users by releasing their application or site and waiting for feedback. If this is how you plan to handle accessibility checking, above and beyond automated testing, be aware that someone experiencing accessibility issues with a site or app isn't going to be very motivated to file a bug report. Most of the time, they're just going to go somewhere else.

Connecting with disability groups and organizations in your community and asking members to test your site or application is beneficial for both parties. It's also less risky than leaving proper accessibility testing to your prospective or existing users.

Easy Accessibility Wins
As you start with linting and automated accessibility checking, you'll notice that a lot of accessibility issues are small


and very quick to repair. In that spirit, I'd like to suggest a few more changes that you can implement in five minutes or less that will significantly improve usability for anyone using your site or application—not just people with disabilities.

Setting the Language
It's crucial to set the language appropriately for your site and for any content that doesn't use the page's default language. Not only is this an accessibility issue because it controls the voice and pronunciation used by screen readers, it's also critical for internationalization because it allows browsers to identify whether a page needs to be translated, and it also determines the dictionary used for spell checking. It's a win-win-win!

If all content on your page is in one language, you only need to set the lang attribute on the HTML element:

<html lang="en"></html>

If your page contains content in multiple languages, you still need to set the primary language on the HTML element, but you can also override it on individual elements, such as divs or paragraphs:

<html lang="en">
<p lang="fr"></p>
</html>

There's also an attribute called hreflang indicating that activating a button or a link will change the primary language of the page. This is particularly useful if you have a language-switching element on your site:

<html lang="en">
<ul>
<li lang="en" hreflang="en">English</li>
<li lang="fr" hreflang="fr">French</li>
<li lang="es" hreflang="es">Spanish</li>
</ul>
</html>

This code indicates that the page's content is in English. Each item in the list has its language specified: English, French, or Spanish, as well as the hreflang attribute letting users know that selecting this option will change the primary language for the page. If you had a drop-down that allowed users to set their language for their profile, you wouldn't use hreflang in that case, just the lang attribute.

Associate Labels and Controls
When creating a form, ensure that each label has a for attribute that correctly associates the label with its input. This has multiple benefits: Users will be able to use labels to activate controls, providing more affordance for users without fine motor control. And screen reader users will always know which labels and controls belong together, even if something odd or unexpected has happened with the visual order of the controls in the form.

As an aside, if you're thinking that this means you can't use placeholders to label your form fields—you're right! Using placeholders instead of labels is a terrible design pattern for all users, not just those with disabilities. What happens when someone fills out multiple fields in a form and then starts getting errors? How are they to know which data was supposed to be entered into each field—and what was that format for phone numbers again? Just say no to placeholders as labels.

Just say no to placeholders as labels.

200% Zoom
A large (and growing) number of people have vision issues that require them to use zoom functionality of some kind, whether it's the magnifier on their computer or phone or the browser's zoom feature, so it's crucial to make sure that it works as expected.

The criterion for browser zoom is that it should work up to 200% without any controls overlapping each other or pushing any content off the page. You'll also want to work to ensure that there's little to no horizontal scrolling, as this is annoying and tedious for zoomed-in users. This is less of an issue than it used to be, due to more focus on responsive design, but it still needs to be checked.

Start by using relative units in your CSS: EMs, REMs, and percentages will all help with creating clean and usable sites when zoomed, whereas anything specifically in pixels won't respond nearly as well. And of course, browser zooming is just as easy to test as keyboard navigation, so make sure that it's on your manual testing list.

Semantic Code
Although I haven't mentioned it explicitly, the first and primary thing you should focus on when developing accessible websites and apps is to use semantic code. Semantic code provides context and meaning to your code beyond what you've explicitly written and is heavily used by assistive devices to present relevant information to users.

For websites and Progressive Web Apps (PWAs), this means never doing things like styling a div to look like a button or using some kind of custom drop-down element you've created yourself. In HTML in particular, each element has functionality and accessibility built in, and you can either take advantage of this or start from scratch—and inevitably miss or forget to re-implement a crucial feature.

If you're developing an app, use native controls as much as possible. If you've got a drop-down in your app somewhere, the native component is going to be much more familiar, usable, and accessible to users than any third-party library or custom implementation you can come up with.

Move toward building accessible websites and applications by becoming familiar with the standards and guidelines, start with automated accessibility testing, and don't forget to include manual testing on your own, as well as with disabled users, community members, and testers.

Ashleigh Lodge

Screen Readers
Screen readers are tools used by people with impaired vision, or who are blind, to access websites and applications. The two most popular desktop screen readers are NonVisual Desktop Access (NVDA) and Job Access With Speech (JAWS). ChromeVox is a screen reader that comes pre-installed on Chromebooks, and can also be used in the Chrome browser.

• NVDA: www.nvaccess.org/
• JAWS: www.freedomscientific.com/products/software/jaws/
• ChromeVox: www.chromevox.com/



ONLINE QUICK ID 2001071

Compiling Scripts to Get Compiled Language Performance
Everyone assumes that a scripting language is an interpreted language, i.e., its instructions are interpreted by the runtime one by one, rather than being compiled to machine-executable code all at once. The main advantage of pre-compilation is performance. There's also a drawback: Some flexibility will be lost. For instance, if your script's nine statements are correct and the tenth is wrong, at least the first nine will be executed when being interpreted. But when you compile these 10 statements, either all 10 will be compiled and then executed, or none of them will.

In this article, I'm going to discuss how you can improve the performance of a scripting language. The article is directed mostly at custom scripting languages but could also be applied to industry-standard scripting languages. I'm going to use CSCS (Customized Scripting in C#) as a sample scripting language to be compiled. I've talked about this language in previous CODE Magazine articles: https://www.codemag.com/article/1607081 introduced it, https://www.codemag.com/article/1711081 showed how you can use it on top of Xamarin to create cross-platform native mobile apps, and https://www.codemag.com/article/1903081 showed how you can use it for Unity programming.

To simplify things, I'm going to pre-compile not the whole script, but a function containing script code. Ultimately, the whole script can be split into different functions.

To precompile a function, I'll use the following strategy: I'll translate the function scripting code into C# code, then compile it into a C# assembly, and add it to the executing binary at runtime. Then, as soon as there's a request to run the compiled code, I'll bind the run-time function arguments with the pre-compiled function arguments and execute the compiled code.

To accomplish this task, let's use the Microsoft.CSharp and System.CodeDom.Compiler namespaces. These are standard namespaces that come with every .NET distribution, with the exception of Xamarin mobile development for iOS and Android—unfortunately, you can't use the techniques explained in this article for mobile development (this restriction is imposed by the iOS and Android architectures). CSCS can still be used for cross-platform mobile development, but without pre-compilation.

Vassili Kaplan
vassilik@gmail.com

Vassili Kaplan is a former Microsoft Lync developer. He's been studying and working in a few countries, such as Russia, Mexico, the USA, and Switzerland.

He has a Master's in Applied Mathematics with Specialization in Computational Sciences from Purdue University, West Lafayette, Indiana, and a Bachelor in Applied Mathematics from ITAM, Mexico City.

In his spare time, Vassili works on the CSCS scripting language. His other hobbies are traveling, biking, badminton, and enjoying a glass of a good red wine.

You can contact him through his website: http://www.iLanguage.ch
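The translate-compile-bind strategy described above isn't specific to C#. As a cross-language illustration (a JavaScript sketch with a made-up toy script language, not the article's CSCS-to-C# pipeline), the same three steps look like this: generate host-language source from the script, compile it once, then bind arguments on each call.

```javascript
// Step 1: "translate" a toy script (statements that happen to be valid JS
// over an accumulator named "out") into a JavaScript function body.
// Step 2: compile that source once with the Function constructor.
// Step 3: the returned function binds runtime arguments on every call.
function compileScript(argNames, scriptLines) {
  const body = [
    "let out = 0;",
    ...scriptLines.map(line => line + ";"),
    "return out;",
  ].join("\n");
  return new Function(...argNames, body); // compiled once, reused many times
}

const sumSquares = compileScript(["n"], [
  "for (let i = 1; i <= n; i++) out = out + i * i",
]);

console.log(sumSquares(3)); // 14  (1 + 4 + 9)
```

After the one-time compilation cost, each call runs as ordinary host-language code, which is the performance win the article is after; and a syntax error in the script surfaces at compile time, all or nothing, rather than after nine statements have already run.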

46 Compiling Scripts to Get Compiled Language Performance codemag.com


Listing 1: C# Code Generated from the CSCS Function helloCompiled

using System;
using System.Collections;
using System.Collections.Generic;
using System.Collections.Specialized;
using System.Globalization;
using System.Linq;
using System.Linq.Expressions;
using System.Reflection;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using static System.Math;

namespace SplitAndMerge
{
    public partial class Precompiler
    {
        public static Variable helloCompiled(
            List<string> __varStr,
            List<double> __varNum,
            List<List<string>> __varArrStr,
            List<List<double>> __varArrNum,
            List<Dictionary<string, string>> __varMapStr,
            List<Dictionary<string, double>> __varMapNum,
            List<Variable> __varVar)
        {
            string __argsTempStr = "";
            string __actionTempVar = "";
            ParsingScript __scriptTempVar = null;
            ParserFunction __funcTempVar = null;
            Variable __varTempVar = null;

            var i = 1;
            for (i = 1; i <= __varNum[0]; i++)
            {
                ParserFunction.AddGlobalOrLocalVariable("i",
                    new GetVarFunction(Variable.ConvertToVariable(i)));
                Console.WriteLine("Hello, " + __varStr[0] + "! 2^" +
                    i + " = " + Pow(2, i));
            }
            __varTempVar = Variable.EmptyInstance;
            return __varTempVar;
        }
    }
}

That’s it! Sounds easy? Well, one part is not necessarily


straightforward: translating the scripting code into the C#
code. It depends on the scripting language—if it’s your own
language, chances are that it doesn’t have as many options
and functions as Python and you should be able to do the
translation.

There’s an advantage of taking CSCS as a sample language,


because CSCS is implemented in C#. Still, there are some
quirks because the syntax isn’t the same and the variable
types aren’t declared explicitly in CSCS but are deduced from
the context. I hope you can use the techniques explained in
this article for other scripting languages as well.

Strategy: Translate the function scripting code into C# code, compile it into a C# assembly, then add it to the executing binary at runtime.

Compiling a “Hello, World!” Script


Let’s start with a relatively simple example that should
tell you where you’re heading. Consider the following CSCS
function that you’ll compile:

cfunction helloCompiled(name, int n)
{
    for (i = 1; i <= n; i++) {
        printc("Hello, " + name + "! 2^" + i +
               " = " + pow(2, i));
    }
}

Listing 1 shows the resulting C# code after translating the


code above to C#. The main function body of the resulting
C# code is the following:



Listing 2: Main Definitions of the Precompiler Class

using System.Collections.Generic;
using System.Text;
using System.Reflection;
using System.Linq.Expressions;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

namespace SplitAndMerge
{
    public class Precompiler
    {
        string m_cscsCode;
        public string CSharpCode { get; private set; }
        string[] m_actualArgs;
        StringBuilder m_converted = new StringBuilder();
        Dictionary<string, Variable> m_argsMap;
        Dictionary<string, string> m_;
        HashSet<string> m_newVariables = new HashSet<string>();
        string m_currentStatement;
        string m_nextStatement;
        string m_depth;
        bool m_knownExpression;

        Func<List<string>,
             List<double>,
             List<List<string>>,
             List<List<double>>,
             List<Dictionary<string, string>>,
             List<Dictionary<string, double>>,
             List<Variable>,
             Variable> m_compiledFunc;

        static List<string> s_namespaces = new List<string>();

        Dictionary<string, string> m_paramMap =
            new Dictionary<string, string>();

        static string NUMERIC_VAR_ARG = "__varNum";
        static string STRING_VAR_ARG = "__varStr";
        static string NUMERIC_ARRAY_ARG = "__varArrNum";
        static string STRING_ARRAY_ARG = "__varArrStr";
        static string NUMERIC_MAP_ARG = "__varMapNum";
        static string STRING_MAP_ARG = "__varMapStr";
        static string CSCS_VAR_ARG = "__varVar";

        static string ARGS_TEMP_VAR = "__argsTempStr";
        static string SCRIPT_TEMP_VAR = "__scriptTempVar";
        static string PARSER_TEMP_VAR = "__funcTempVar";
        static string ACTION_TEMP_VAR = "__actionTempVar";
        static string VARIABLE_TEMP_VAR = "__varTempVar";
    }
}

Listing 3: Main Method to Compile C# Code

public void Compile()
{
    var CompilerParams = new CompilerParameters();

    CompilerParams.GenerateInMemory = true;
    CompilerParams.TreatWarningsAsErrors = false;
    CompilerParams.GenerateExecutable = false;
    CompilerParams.CompilerOptions = "/optimize";

    Assembly[] assemblies = AppDomain.CurrentDomain.GetAssemblies();
    foreach (Assembly asm in assemblies) {
        AssemblyName asmName = asm.GetName();
        if (asmName == null ||
            string.IsNullOrWhiteSpace(asmName.CodeBase)) {
            continue;
        }

        var uri = new Uri(asmName.CodeBase);
        if (uri != null && File.Exists(uri.LocalPath)) {
            CompilerParams.ReferencedAssemblies.Add(uri.LocalPath);
        }
    }

    CSharpCode = ConvertScript();

    var provider = new CSharpCodeProvider();
    var compile = provider.CompileAssemblyFromSource(CompilerParams,
        CSharpCode);
    if (compile.Errors.HasErrors) {
        string text = "Compile error: ";
        foreach (var ce in compile.Errors) {
            text += ce.ToString() + " -- ";
        }
        throw new ArgumentException(text);
    }

    m_compiledFunc = CompileAndCache(compile, m_functionName);
}

Variable __varTempVar = null;
var i = 1;
for (i = 1; i <= __varNum[0]; i++) {
    ParserFunction.AddGlobalOrLocalVariable("i",
        new GetVarFunction(
            Variable.ConvertToVariable(i)));
    Console.WriteLine("Hello, " + __varStr[0] +
        "! 2^" + i + " = " + Pow(2, i));
}
__varTempVar = Variable.EmptyInstance;
return __varTempVar;

Listing 1 contains much more C# stuff. Below, you'll see why it's needed and what it does.

The main difference between a normal CSCS function and a CSCS function that's intended to be compiled is the header. It's the cfunction keyword that tells the CSCS parsing runtime that the function is intended to be compiled. The arguments may contain the types (CSCS function arguments never have types because they are deduced at runtime). But in C#, all types of the arguments must be supplied at compile time. When an argument's type isn't supplied (like "name" in the helloCompiled code snippet above), it's considered to be a string.

Note that it's not necessary to specify the function return type. This is because when translated to C#, the resulting C# function always returns a Variable object. This Variable object is a static Variable.EmptyInstance in case nothing needs to be returned, as in our example above. In other words, Variable.EmptyInstance imitates a void function. Otherwise, the Variable object being returned holds the return value.

Here's the created C# function signature (this signature will be the same for all of the C# functions compiled from CSCS):

public static Variable helloCompiled(
    List<string> __varStr,
    List<double> __varNum,
    List<List<string>> __varArrStr,
    List<List<double>> __varArrNum,
    List<Dictionary<string, string>> __varMapStr,
    List<Dictionary<string, double>> __varMapNum,
    List<Variable> __varVar) {


The arguments for all of the compiled C# functions are always the same. This is because all of the string arguments in the CSCS cfunction definition are a part of the C# List of strings __varStr, all of the numeric arguments are a part of the C# List of doubles __varNum, all of the arrays of strings are inside of the List<List<string>> __varArrStr, and so on.

You can have an unlimited number of function arguments of different types.

This is the explanation of why the CSCS string variable name was replaced with the C# variable __varStr[0] and the CSCS integer variable "n" was replaced with __varNum[0] in the resulting C# function in Listing 1.

Because you're using lists in the function signature, you can have an unlimited number of function arguments of different types.

To run the compiled function, the CSCS call is the same as it would've been when running a non-compiled version:

helloCompiled("World", 5);

The results of running the helloCompiled script defined above are the following:

Hello, World! 2^1 = 2
Hello, World! 2^2 = 4
Hello, World! 2^3 = 8
Hello, World! 2^4 = 16
Hello, World! 2^5 = 32

One question might arise: How come C# recognizes that "pow(2, i)" is in the Math namespace and is equivalent to Math.Pow(2, i)? You can check in Listing 1 for the presence of the following header line:

using static System.Math;

This allows writing all of the Math functions without specifying the namespace.

You just need to uppercase the first letter and lowercase the rest. That's how pow(2, i) got converted to Pow(2, i). Note that all of the functions in the Math namespace have a first upper-case letter and the rest are lowercase. The only exception to this rule is the Math.PI constant, so you deal with it explicitly. See the definition of IsMathFunction() in the next section's code snippet. This function also checks to see if the passed parameter is a math function or not.

I'll be looking into how to compile CSCS functions in the next section.

Compiling C# Code at Runtime
To compile the C# code at runtime, you're going to use the Microsoft.CSharp and System.CodeDom.Compiler namespaces. Listing 2 contains the Precompiler variable definitions that are going to be used at the compilation stage.

The main result of the compilation is the following function delegate:

Func<List<string>,
     List<double>,
     List<List<string>>,
     List<List<double>>,
     List<Dictionary<string, string>>,
     List<Dictionary<string, double>>,
     List<Variable>,
     Variable> m_compiledFunc;

This function is going to be used when running a pre-compiled function at runtime. The first seven arguments are the arguments of the C# method that you're going to create (see the definition of the helloCompiled() function in the previous section). The last argument, Variable, is the return value of the C# method.

Listing 3 contains the compilation code. Let's briefly discuss it.

First, the code creates the System.CodeDom.Compiler.CompilerParameters object that's going to be used at the compila-
Listing 4: An Auxiliary Function to Compile and Cache the Results


static Func<List<string>, List <double>, List<List<string>>, typeof(List <Variable>), CSCS_VAR_ARG));
List<List<double>>, List<Dictionary<string, string>>,
List<Dictionary<string, double>>, List<Variable>, Variable> List<Type> argTypes = new List<Type>();
CompileAndCache(CompilerResults compile, string functionName) for (int i = 0; i < paramTypes.Count; i++) {
{ argTypes.Add(paramTypes[i].Type);
Module module = compile.CompiledAssembly.GetModules()[0]; }
Type mt = module.GetType(“SplitAndMerge.Precompiler”);
MethodInfo methodInfo =
var paramTypes = new List<ParameterExpression>(); mt.GetMethod(functionName, argTypes.ToArray());
paramTypes.Add(Expression.Parameter( MethodCallExpression methodCall =
typeof(List <string>), STRING_VAR_ARG)); Expression.Call(methodInfo, paramTypes);
paramTypes.Add(Expression.Parameter(
typeof(List <double>), NUMERIC_VAR_ARG)); var lambda = Expression.Lambda< Func<List <string>,
paramTypes.Add(Expression.Parameter( List<double>, List<List<string>>,
typeof(List<List<string>>), STRING_ARRAY_ARG)); List<List<double>>, List<Dictionary<string, string>>,
paramTypes.Add(Expression.Parameter( List<Dictionary<string, double>>, List<Variable>,
typeof(List<List<double>>), NUMERIC_ARRAY_ARG)); Variable >>(methodCall, paramTypes.ToArray());
paramTypes.Add(Expression.Parameter( var func = lambda.Compile();
typeof(List<Dictionary<string, string>>), STRING_MAP_ARG)); return func;
paramTypes.Add(Expression.Parameter( }
typeof(List<Dictionary<string, double>>), NUMERIC_MAP_ARG));
paramTypes.Add(Expression.Parameter(

codemag.com Compiling Scripts to Get Compiled Language Performance 49


tion stage. Note that you add all of the assemblies, refer- Converting Scripting Code to C#
enced in the currently running assembly, to the assembly Probably the most complicated step is converting the script-
being compiled. This way, you can use all of your C# classes ing code to the C# code. In this section, you’ll see how it’s
in the compiled functions. This is done by collecting these done for CSCS scripting. The strategy is to start converting
assemblies as follows: something small and then gradually extend the conversion.

Assembly[] assemblies = Listing 5 shows the main conversion method, ConvertScript().


AppDomain.CurrentDomain.GetAssemblies(); First, it splits the script into statements (the separation to-
kens being “;”, “{“, and “}” characters) and then converts the
Then you get the actual C# code to compile in the Con- statements one by one, looking ahead into the next statement.
vertScript() method—I’ll talk about how to covert CSCS script
to the C# code in the next section. Note the following for-loop in the ConvertScript method:

The actual compilation and the creation of the new assem- for (int i = 0; i < s_namespaces.Count; i++)
bly takes place in the Microsoft.CSharp.CSharpCodeProvider. {
CompileAssemblyFromSource() method. m_converted.AppendLine(s_namespaces[i]);
}
After the code has been compiled, you need to be able to
use it at some later point in time. This is done in the Compi- It allows adding any namespace to the Precompiler by using
leAndCache() method in Listing 4. The CompileAndCache() its AddNamespace() static method:
method creates a System.Linq.Expressions.Expression ob-
ject and binds to it the method input parameters. This ob- public static void AddNamespace(string ns)
ject name is m_compiledFunc (see its definition in Listing {
2). It will be used later on to invoke the compiled method s_namespaces.Add(ns);
at runtime. }

Listing 5: Main Method to Convert CSCS Script to C#

string ConvertScript()
{
    m_converted.Clear();
    int numIndex = 0;
    int strIndex = 0;
    int arrNumIndex = 0;
    int arrStrIndex = 0;
    int mapNumIndex = 0;
    int mapStrIndex = 0;
    int varIndex = 0;
    // Mapping from the original arg to the element array it is in
    for (int i = 0; i < m_actualArgs.Length; i++) {
        Variable typeVar = m_argsMap[m_actualArgs[i]];
        m_paramMap[m_actualArgs[i]] =
            typeVar.Type == Variable.VarType.STRING ? STRING_VAR_ARG +
                "[" + (strIndex++) + "]" :
            typeVar.Type == Variable.VarType.NUMBER ? NUMERIC_VAR_ARG +
                "[" + (numIndex++) + "]" :
            typeVar.Type == Variable.VarType.ARRAY_STR ? STRING_ARRAY_ARG +
                "[" + (arrStrIndex++) + "]" :
            typeVar.Type == Variable.VarType.ARRAY_NUM ? NUMERIC_ARRAY_ARG +
                "[" + (arrNumIndex++) + "]" :
            typeVar.Type == Variable.VarType.MAP_STR ? STRING_MAP_ARG +
                "[" + (mapStrIndex++) + "]" :
            typeVar.Type == Variable.VarType.MAP_NUM ? NUMERIC_MAP_ARG +
                "[" + (mapNumIndex++) + "]" :
            typeVar.Type == Variable.VarType.VARIABLE ? CSCS_VAR_ARG +
                "[" + (varIndex++) + "]" : "";
    }

    m_converted.AppendLine("using System; using System.Collections;
        using System.Collections.Generic;
        using System.Collections.Specialized; using System.Globalization;
        using System.Linq; using System.Linq.Expressions;
        using System.Reflection; using System.Text;
        using System.Threading;
        using System.Threading.Tasks; using static System.Math;");
    for (int i = 0; i < s_namespaces.Count; i++) {
        m_converted.AppendLine(s_namespaces[i]);
    }
    m_converted.AppendLine("namespace SplitAndMerge {\n" +
        " public partial class Precompiler {");
    m_converted.AppendLine(" public static Variable " +
        m_functionName);
    m_converted.AppendLine("(List<string> " + STRING_VAR_ARG + ",\n" +
        " List<double> " + NUMERIC_VAR_ARG + ",\n" +
        " List<List<string>> " + STRING_ARRAY_ARG + ",\n" +
        " List<List<double>> " + NUMERIC_ARRAY_ARG + ",\n" +
        " List<Dictionary<string, string>> " + STRING_MAP_ARG + ",\n" +
        " List<Dictionary<string, double>> " + NUMERIC_MAP_ARG + ",\n" +
        " List<Variable> " + CSCS_VAR_ARG + ") {\n");
    m_depth = " ";

    m_converted.AppendLine(" string " + ARGS_TEMP_VAR + "= \"\";");
    m_converted.AppendLine(" string " + ACTION_TEMP_VAR + " = \"\";");
    m_converted.AppendLine(" ParsingScript " + SCRIPT_TEMP_VAR +
        " = null;");
    m_converted.AppendLine(" ParserFunction " + PARSER_TEMP_VAR +
        " = null;");
    m_converted.AppendLine(" Variable " + VARIABLE_TEMP_VAR + " = null;");
    m_newVariables.Add(ARGS_TEMP_VAR);
    m_newVariables.Add(ACTION_TEMP_VAR);
    m_newVariables.Add(SCRIPT_TEMP_VAR);
    m_newVariables.Add(PARSER_TEMP_VAR);
    m_newVariables.Add(VARIABLE_TEMP_VAR);

    m_cscsCode = Utils.ConvertToScript(m_originalCode, out _);
    RemoveIrrelevant(m_cscsCode);

    m_statements = TokenizeScript(m_cscsCode);
    m_statementId = 0;
    while (m_statementId < m_statements.Count) {
        m_currentStatement = m_statements[m_statementId];
        m_nextStatement = m_statementId < m_statements.Count - 1 ?
            m_statements[m_statementId + 1] : "";
        string converted = ProcessStatement(m_currentStatement,
            m_nextStatement, true);
        if (!string.IsNullOrWhiteSpace(converted)) {
            m_converted.Append(m_depth + converted);
        }
        m_statementId++;
    }

    if (!m_lastStatementReturn) {
        m_converted.AppendLine(CreateReturnStatement(
            "Variable.EmptyInstance"));
    }

    m_converted.AppendLine("\n }\n }\n}");
    return m_converted.ToString();
}
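To make the output of ConvertScript() concrete, here is a rough sketch of the C# method it generates for the helloCompiled() function from the first section. This is an abbreviated approximation of the generated code described around Listing 1, not the exact output: the real signature carries all seven list arguments, and the generated text may differ in details.

```csharp
// Sketch only: an approximation of the generated code (cf. Listing 1).
// The CSCS parameter "name" became __varStr[0], and "n" became __varNum[0].
namespace SplitAndMerge {
    public partial class Precompiler {
        public static Variable helloCompiled(List<string> __varStr,
            List<double> __varNum /* ...the remaining five list arguments... */) {
            var i = 1; // prepended because "i" wasn't defined before the for-loop
            for (i = 1; i <= __varNum[0]; i++) {
                // printc was overridden with Console.WriteLine in GetCSharpFunction()
                Console.WriteLine("Hello, " + __varStr[0] + "! 2^" + i +
                    " = " + Pow(2, i)); // Pow resolves via "using static System.Math"
            }
            return Variable.EmptyInstance;
        }
    }
}
```

The real generated method also registers every variable change back with the parser runtime, as described later in this article.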



Listing 6: Implementation of the ResolveToken Method

string ResolveToken(string token, out bool resolved,
                    string arguments = "")
{
    resolved = true;
    if (IsString(token) || IsNumber(token)) {
        return token;
    }
    string replacement;
    if (IsMathFunction(token, out replacement)) {
        return replacement;
    }

    replacement = GetCSharpFunction(token, arguments);
    if (!string.IsNullOrEmpty(replacement)) {
        return replacement;
    }

    if (ProcessArray(token, ref replacement)) {
        return replacement;
    }

    string arrayName, arrayArg;
    if (IsArrayElement(token, out arrayName, out arrayArg)) {
        token = arrayName;
    }

    if (m_paramMap.TryGetValue(token, out replacement)) {
        return replacement + arrayArg;
    }

    resolved = !string.IsNullOrWhiteSpace(arrayArg) ||
        m_newVariables.Contains(token);
    return token + arrayArg;
}
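To illustrate the resolution order that Listing 6 implements, here are a few hypothetical inputs and the results you'd expect, assuming a function whose first string parameter is name (as in the helloCompiled() example):

```csharp
// Hypothetical examples only; r is the out bool "resolved".
// ResolveToken("\"abc\"", out r) -> "\"abc\""     (a string literal, r = true)
// ResolveToken("pow", out r)     -> "Pow"         (via IsMathFunction; valid
//                                                  thanks to using static System.Math)
// ResolveToken("name", out r)    -> "__varStr[0]" (via m_paramMap, r = true)
// ResolveToken("print", out r)   -> "print", r = false (unresolved; handled
//                                   afterwards by GetCSCSFunction as a CSCS token)
```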

For example, this can be used as follows:

Precompiler.AddNamespace("using MyNamespace;");

Each statement is split into a list of tokens and each token is
processed one by one (looking ahead to a few next tokens).
I won't show you the full implementation here (it can be
consulted in the accompanying source code or on GitHub),
but I'm going to discuss some of the main points of the
conversion.

When processing each statement token, there are different
checks being made. For instance, you check to see if you
override a particular CSCS token with a C# function. This
is the case of the printc token shown in the "Hello, World"
example in the first section, where it was replaced by the
C# Console.WriteLine() statement. All of these token overrides
happen in the GetCSharpFunction() method. Here's an
implementation of this method with just one token, printc,
overridden; this is the place where you can add as many
overrides as you wish:

string GetCSharpFunction(string functionName,
                         string arguments = "")
{
    if (functionName == "printc") {
        arguments = ReplaceArgsInString(
            arguments.Replace("\\\"", "\""));
        return "Console.WriteLine(" + arguments + ");";
    }
    return "";
}

A special case is parsing a for-loop. Consider the statement
"for (i = 1; i <= n; i++)" of the helloCompiled function from
the first section. You don't know if the variable "i" was
defined before the for-loop or not. It doesn't matter for CSCS
because you don't define variables before they're used and
their type is always deduced from the expression. Because
this does matter in C#, you check whether the variable "i"
has been defined in this method before the for-loop, and if
not, you prepend the "var i = 1;" statement before the for-loop
(see Listing 1).

Let's now see how you can figure out if a particular token is a
mathematical function or not. Note that the CSCS language
is case-insensitive, but most of the mathematical functions
have the first letter in uppercase and the rest in lowercase,
with the exception of the Math.PI constant. This is how we
deal with this case:

public static bool IsMathFunction(string name,
                                  out string corrected)
{
    corrected = name;
    string candidate = name[0].ToString().
        ToUpperInvariant() +
        name.Substring(1).ToLower();
    if (candidate == "Pi")
    {
        corrected = "Math.PI";
        return true;
    }

Otherwise, you check if the passed token exists in the
System.Math namespace:

    Type mathType = typeof(System.Math);
    try {
        MethodInfo myMethod =
            mathType.GetMethod(candidate);
        if (myMethod != null) {
            corrected = candidate;
            return true;
        }
        return false;
    } catch (AmbiguousMatchException) {
        corrected = candidate;
        return true;
    }
}

References (sidebar):
• GitHub CSCS Source Code: https://github.com/vassilych/cscs
• VS Code CSCS Debugger Extension: https://marketplace.visualstudio.com/items?itemName=vassilik.cscs-debugger
• Split-and-Merge Algorithm and CSCS Language Free E-book: https://www.syncfusion.com/ebooks/implementing-a-custom-language

"You start with a small trivial project, and you should never
expect it to get large. If you do, you'll just overdesign."
– Linus Torvalds

One of the most important methods for converting the CSCS
script to the C# code is the ResolveToken() method shown
in Listing 6. What happens if the ResolveToken() method



doesn't resolve the token (i.e., the value of the "resolved"
variable will be false after calling this method)? This will
happen when the passed token:

• Is not a string or a number
• Is not one of the function arguments (they are all keys
  of the m_paramMap dictionary)
• Is not a mathematical function from System.Math
• Is not a special case of a C# function defined in the
  GetCSharpFunction() method
• Is not one of the variables that have already been
  defined in this method (they are all part of the
  m_newVariables list)

If a token can't be resolved, you do with it what you'd have
done with any CSCS token: think of it as if it were a CSCS
function or a variable, resolving it as if it were a part of
the CSCS script. This part is done in the GetCSCSFunction()
method:

string GetCSCSFunction(string argsStr,
    string functionName, char ch = '(') {
    StringBuilder sb = new StringBuilder();
    sb.AppendLine(m_depth + ARGS_TEMP_VAR + " =\""
        + argsStr + "\";");
    sb.AppendLine(m_depth + SCRIPT_TEMP_VAR +
        " = new ParsingScript(" + ARGS_TEMP_VAR + ");");
    sb.AppendLine(m_depth + PARSER_TEMP_VAR +
        " = new ParserFunction(" + SCRIPT_TEMP_VAR +
        ", \"" + functionName + "\", '" + ch +
        "', ref " + ACTION_TEMP_VAR + ");");
    sb.AppendLine(m_depth + VARIABLE_TEMP_VAR +
        " = " + PARSER_TEMP_VAR +
        ".GetValue(" + SCRIPT_TEMP_VAR + ");");
    return sb.ToString();
}

Basically, what the above method does is, among other things,
print the following lines of the Split-and-Merge parsing
algorithm, which is the base for CSCS parsing (see https://
msdn.microsoft.com/en-us/magazine/mt573716.aspx):

__funcTempVar = new ParserFunction(
    __scriptTempVar, functionName,
    ch, ref __actionTempVar);
__varTempVar = __funcTempVar.GetValue(
    __scriptTempVar);

The first statement gets the appropriate CSCS implementation
function (an object derived from the ParserFunction
class and previously registered with the Parser), and the
second statement will eventually invoke the Evaluate()
protected method on the object from the first statement.

Every time you can't resolve a token, you assume that it's a
CSCS token and perform the same steps that you would've
performed when interpreting this token with the CSCS parser.

Let's see an example where you have a print token instead
of the printc token in the "Hello, World!" script you saw in
the first section, i.e., suppose that the CSCS script is the
following:

cfunction helloCompiled2(name, int n)
{
    for (i = 1; i <= n; i++) {
        print("Hello, " + name + "! 2^" + i +
            " = " + pow(2, i));
    }
}

Then the resulting C# code inside of the for-loop will be
different because the print token won't be resolved in the

Listing 7: Implementation of the Custom Function RunCompiled() Method

public Variable RunCompiled(List<Variable> args)
{
    RegisterArguments(args);
    var argsStr = new List<string>();
    var argsNum = new List<double>();
    var argsArrStr = new List<List<string>>();
    var argsArrNum = new List<List<double>>();
    var argsMapStr = new List<Dictionary<string, string>>();
    var argsMapNum = new List<Dictionary<string, double>>();
    var argsVar = new List<Variable>();

    for (int i = 0; i < m_args.Length; i++) {
        Variable typeVar = m_argsMap[m_args[i]];
        if (typeVar.Type == Variable.VarType.STRING) {
            argsStr.Add(args[i].AsString());
        }
        else if (typeVar.Type == Variable.VarType.NUMBER) {
            argsNum.Add(args[i].AsDouble());
        }
        else if (typeVar.Type == Variable.VarType.ARRAY_STR) {
            var subArrayStr = new List<string>();
            var tuple = args[i].Tuple;
            for (int j = 0; j < tuple.Count; j++) {
                subArrayStr.Add(tuple[j].AsString());
            }
            argsArrStr.Add(subArrayStr);
        }
        else if (typeVar.Type == Variable.VarType.ARRAY_NUM) {
            var subArrayNum = new List<double>();
            var tuple = args[i].Tuple;
            for (int j = 0; j < tuple.Count; j++) {
                subArrayNum.Add(tuple[j].AsDouble());
            }
            argsArrNum.Add(subArrayNum);
        }
        else if (typeVar.Type == Variable.VarType.MAP_STR) {
            var subMapStr = new Dictionary<string, string>();
            var tuple = args[i].Tuple;
            var keys = args[i].GetKeys();
            for (int j = 0; j < tuple.Count; j++) {
                subMapStr.Add(keys[j], tuple[j].AsString());
            }
            argsMapStr.Add(subMapStr);
        }
        else if (typeVar.Type == Variable.VarType.MAP_NUM) {
            var subMapNum = new Dictionary<string, double>();
            var tuple = args[i].Tuple;
            var keys = args[i].GetKeys();
            for (int j = 0; j < tuple.Count; j++) {
                subMapNum.Add(keys[j], tuple[j].AsDouble());
            }
            argsMapNum.Add(subMapNum);
        }
        else if (typeVar.Type == Variable.VarType.VARIABLE) {
            argsVar.Add(args[i]);
        }
    }

    Variable result = m_precompiler.Run(argsStr, argsNum, argsArrStr,
        argsArrNum, argsMapStr, argsMapNum, argsVar, false);
    ParserFunction.PopLocalVariables();
    return result;
}



ResolveToken() method and therefore GetCSCSFunction()
will be called. This function creates most of the code inside
of the for-loop:

for (i = 1; i <= __varNum[0]; i++) {
    ParserFunction.AddGlobalOrLocalVariable("i",
        new GetVarFunction(
            Variable.ConvertToVariable(i)));
    __actionTempVar = "";
    __argsTempStr = "\"Hello, \"+name+
        \"! 2^\"+i+\" = \"+pow(2,i)";
    __scriptTempVar = new ParsingScript(
        __argsTempStr);
    __funcTempVar = new ParserFunction(
        __scriptTempVar, "print", '(',
        ref __actionTempVar);
    __varTempVar = __funcTempVar.GetValue(
        __scriptTempVar);
}

You probably noticed that every time a variable changes its
value in the CSCS script (either by an assignment "=" or by
any other operator like "*=", "+=", etc.), a statement like
the following is inserted into the C# code:

ParserFunction.AddGlobalOrLocalVariable("i", new
    GetVarFunction(Variable.ConvertToVariable(i)));

This registers the new variable value with the Parser runtime,
so the Parser runtime knows about any changes done
in the C# code. Without the statement above, the value of
"i" would be updated in the C# code but not in any CSCS
function that might be called from the C# code. The
ConvertToVariable() is just a convenience method that creates
a variable as a wrapper of any type passed to this method
(string, number, array, etc.).

Registering CSCS Functions for Compilation with the Parser
To let the CSCS Parser runtime know that the token cfunction
means "pre-compile a function," you need to register
the cfunction handler in the initialization phase as follows:

ParserFunction.RegisterFunction("cfunction", new
    CompiledFunctionCreator());

As usual, the Evaluate() method of the CompiledFunctionCreator
class will do the actual work of the CSCS script
translation into C# and its consequent compilation:

protected override Variable Evaluate(
    ParsingScript script)
{
    string funcName;
    Utils.GetCompiledArgs(script, out funcName);
    Dictionary<string, Variable> argsMap;
    var args = Utils.GetCompiledFunctionSignature(
        script, out argsMap);
    string body = Utils.GetBodyBetween(script,
        '{', '}');
    Precompiler precompiler = new Precompiler(
        funcName, args, argsMap, body, script);
    precompiler.Compile();
    var customFunc = new CustomCompiledFunction(
        funcName, body, args,
        precompiler, argsMap, script);
    ParserFunction.RegisterFunction(
        funcName, customFunc);
    return new Variable(funcName);
}

The GetCompiledFunctionSignature() gets all of the function
arguments and their types (if the types are provided; by
default, they're all strings). This function can be consulted in
the accompanying source code download (see the link to
GitHub in the sidebar). The mapping is between the variable
name and a variable object. The variable's Type field shows
the actual argument type.

At the end, a CustomCompiledFunction object is created
and registered with the Parser so that its Evaluate() method
will be triggered as soon as the function name is encountered
by the CSCS Parser runtime.

Running Compiled Functions at Runtime
At runtime, the CustomCompiledFunction's Evaluate()
method, shown below, is triggered:

protected override Variable Evaluate(
    ParsingScript script)
{
    List<Variable> args =
        script.GetFunctionArgs();
    if (args.Count != m_args.Length) {
        throw new ArgumentException("Function [" +
            m_name + "] arguments mismatch: " +
            m_args.Length + " declared, " +
            args.Count + " supplied");
    }
    Variable result = RunCompiled(args);
    return result;
}

The implementation of the RunCompiled() method is shown
in Listing 7. In particular, it binds the passed arguments to the
compiled function arguments. Here is the implementation
of the Precompiler.Run() method:

public Variable Run(List<string> argsStr,
    List<double> argsNum,
    List<List<string>> argsArrStr,
    List<List<double>> argsArrNum,
    List<Dictionary<string, string>> argsMapStr,
    List<Dictionary<string, double>> argsMapNum,
    List<Variable> argsVar,
    bool throwExc = true)
{
    if (m_compiledFunc == null) { // "Late binding"
        Compile();
    }
    Variable result = m_compiledFunc.Invoke(
        argsStr, argsNum, argsArrStr, argsArrNum,
        argsMapStr, argsMapNum, argsVar);
    return result;
}

As you can see, the function body is almost trivial, because all
the work was done in the compiling and caching stages above.



Performance Gains from Pre-compilation
In this section, you're going to see if it makes sense to
pre-compile scripting functions from the performance point of
view. Let me relieve you from the suspense: yes, it does!

Consider this CSCS function:

cfunction exprCompiled(int n)
{
    start = pstime;
    complexExpr = 0.0;
    for (i = 0; i < n; i++) {
        baseVar = exp(sin(i) + cos(i));
        complexExpr += pow(baseVar, pi) * 2;
    }
    end = pstime;
    print("Result (Compiled) =" + complexExpr +
        " Time: ", (end - start), " ms. Runs: " + n);
    return complexExpr;
}

This was the version that requires precompiling. A "normal"
CSCS function looks very similar (only the header differs):

function exprNotCompiled(n)
{
    start = pstime;
    complexExpr = 0.0;
    for (i = 0; i < n; i++) {
        baseVar = exp(sin(i) + cos(i));
        complexExpr += pow(baseVar, pi) * 2;
    }
    end = pstime;
    print("Result (Not Compiled) =" + complexExpr +
        " Time: ", (end - start), " ms. Runs: " + n);
    return complexExpr;
}

The pstime is a CSCS function that returns the CPU time
in milliseconds for the current process. This is how you're
going to run the scripts above and measure the execution time:

runs = 100;
exprCompiled(runs);
exprNotCompiled(runs);

This is a sample output when runs = 100:

Result (Compiled) = 3348.26807565568
Time: 3 ms.
Runs: 100
Result (Not Compiled) = 3348.26807565568
Time: 68 ms.
Runs: 100

When the for-loop is executed 100 times, the pre-compiled
version runs about 20 times faster! I did some testing for
different numbers of runs. The results are shown in Table 1.

Runs (n)   Compiled Version, ms   Not Compiled Version, ms   Numerical Result
100        3                      68                         3348.26807565568
500        8                      298                        16718.7736536161
1000       14                     577                        33259.1836291118
5000       44                     2888                       166297.860117355
10000      83                     5769                       332591.313075893
50000      345                    31428                      1662530.99749815

Table 1: Comparison of the Running Times for the Compiled and Not Compiled Functions

Note that the running time increases faster for the non-compiled
version than for the compiled one. I think the reason
is that the optimized compiled C# version deals much better
with loops internally than the straightforward way of
executing a loop, statement by statement, as it's done in the
scripting version. This means that there's still some work to
do to make the interpreted version more efficient.

Wrapping Up
The main disadvantage of a scripting, or interpreted, language
is that it's usually much slower than a compiled language.
In this article, you saw how you can have the best of
both worlds by converting a script to C# at runtime and then
compiling the created C# code into a C# assembly.

Note that this technique makes sense only if you intend to
use the compiled code more than a few times; otherwise, the
script conversion and compilation time should also be taken
into account.

"Premature optimization is the root of all evil."
– Donald Knuth

Also, you saw an example of a performance gain when
pre-compiling a script with some mathematical calculations. The
performance gains were between 20 and 100 times. The
mathematical functions are where you see the most speed
improvements. In general, it should be evaluated case by case
whether the script pre-compilation makes sense or not. For short
scripts, and in most cases without big loops, it probably doesn't
matter if a script runs for three or for 68 milliseconds. In other
words, you should always remember the famous Knuth quote.

One of the improvements might be saving the compiled
assemblies to disk (this is done by setting the parameter value
CompilerParams.GenerateInMemory = false; and setting the
assembly name with the CompilerParams.OutputAssembly
parameter or with the "/out" command-line option set in the
CompilerParams.CompilerOptions property) and then loading
all of the compiled assemblies at startup time. See Listing 3
for details on setting different compiler options.

Note that the complete, up-to-date pre-compiler code is
available at the GitHub repository (see the links in the sidebar).

I'd be happy to hear from you about how you're pre-compiling
your scripts and what performance gains you observe.
Also, it would be interesting to hear if you can use the
compilation explained here for any other scripting language.

Vassili Kaplan


ONLINE QUICK ID 2001081

Nest.js Step-by-Step:
Part 3 (Users and Authentication)
Bilal Haidar
bhaidar@gmail.com
https://www.bhaidar.dev
twitter.com/bhaidar

Bilal Haidar is an accomplished author, a Microsoft MVP of
10 years, an ASP.NET Insider, and has been writing for CODE
Magazine since 2007. With 15 years of extensive experience
in Web development, Bilal is an expert in providing enterprise
Web solutions. He works at Consolidated Contractors Company
in Athens, Greece as a full-stack senior developer. Bilal offers
technical consultancy for a variety of technologies including
Nest JS, Angular, Vue JS, JavaScript and TypeScript.

In the second part of this series, published in the September/
October issue (https://www.codemag.com/Article/1909081/Nest.
js-Step-by-Step-Part-2), I linked the To Do REST API to a real
database by making use of PostgreSQL, TypeORM, and the
@nestjs/typeorm module. Now, for every To Do item created by
the API, there must be a valid Owner. The Owner is the user who's
currently logged in. This article, Part 3 in the series, introduces
a new Users Module that allows the application to create a user
and to locate them in the database. To support user
authentication, you'll add the Auth Module that exposes two
endpoints and allows users to register new accounts and log in.

For user authentication, I've chosen to use the Passport.js
module. By far, this is the most popular and flexible Node.js
authentication module because it supports a variety of
authentication strategies, ranging from the Local Strategy,
to the JWT Strategy, to the Google Authentication Strategy
and other social media authentication strategies.

Nest.js embraces Passport.js and wraps it inside the
@nestjs/passport library. This library integrates the Passport.js
module into the Nest.js Dependency Injection system,
giving you a smooth and Nest-native experience in
authenticating users using the Passport.js authentication module.

Let's start by introducing Passport.js and how it works, then
explore how Nest.js integrates with the Passport.js module
via the @nestjs/passport library. Finally, the step-by-step
demonstration shows you how I introduced the concept of
users into the To Do REST API, how users register themselves,
and how they can authenticate via JWT tokens generated by
the application in response to successful authentications.

You can find the source code of this article and the rest of
this series here: https://github.com/bhaidar/nestjs-todo-app.

What Is Passport.js?
Passport.js is a mature, popular, and flexible Node.js
authentication middleware that offers more than 300 Request
Authentication strategies. All of these strategies can be
accessed via this URL: http://www.passportjs.org/packages.

Passport.js handles user authentication based on selected
strategies in your application. For the To Do REST API, I've
selected the JWT Strategy that's implemented by the
passport-jwt library.

Why JWT? JSON Web Tokens (JWT) is an authentication standard
that works by generating and signing tokens, passing them
around between the client-side and server-side applications
via query strings, authorization headers, or other mediums.
Having such a valid and non-expired token, extracted from an
HTTP Request, signals the fact that the user is authenticated
and is allowed to access protected resources. You can read
more about JWT by following this URL: https://jwt.io/.

The ultimate benefit of using JWTs is going stateless by
removing the need to track session data on the server and
cookies on the client, which is, by today's standards, an
outdated practice.

In brief, a token consists of several sections. The most
important section is the body of the token. The JWT body is
called the JWT payload. It's the application's duty to decide
what goes into the payload. The recommendation is always
not to overload it and to keep only the relevant information
that identifies the user when they log in next. It's of utmost
importance not to include sensitive data or private data like
passwords in your payload.

The authentication cycle with Passport.js involves a few
steps that give the user access to protected parts of your app:

1. The user submits their registration to the back-end app
   for validation.
2. If the user is successful:
   a. The app creates the token.
   b. The app signs the token using the jsonwebtoken
      library that you download as an NPM package
      (https://www.npmjs.com/package/jsonwebtoken).
   c. The back-end app returns a response to the client-side
      app including the signed token and any relevant
      information.
3. The client-side app usually stores the token inside
   LocalStorage, SessionStorage, or inside a cookie in some
   cases.
4. On each subsequent request sent to the server, the
   client-side app includes the token stored locally in an
   authorization header, or in other parts of the request,
   in the form of Bearer {Token}.
5. The back-end app, using the Passport.js JWT strategy:
   a. Extracts the token.
   b. Validates the token to make sure it was signed by
      this app and wasn't tampered with.
   c. Hands in the validation of the user, whose information
      is contained inside the token payload.
   d. Prompts the back-end app to ensure that the user
      in the payload is stored in the database and has a
      real account.

That, in brief, is how users are authenticated using
Passport.js and JWTs.

How Nest Framework Integrates with Passport.js
The @nestjs/passport package wraps the Passport.js
authentication middleware, configures, and uses the Passport.js


on your behalf, gives you a way to customize the Passport.js default configurations, and, in return, it exposes the AuthGuard() decorator that you can use in your application to protect any Route Handler or Controller class and force the user to be authenticated before accessing the resource.

The @nestjs/passport package integrates the Passport.js middleware into the Nest.js Dependency Injection system by providing the PassportModule.register() and PassportModule.registerAsync() methods that you have to import to your Auth Module in your application to provide any configuration needed by the Passport.js middleware.

In addition, the package provides the Passport Strategy class that you extend when creating your own Passport Strategy to be used for authenticating users in your application. For any custom strategy you create, you have to provide the Passport Strategy class in the AuthModule so that @nestjs/passport is aware of it, to pass over to the Passport.js middleware later on. Listing 1 shows a sample AuthModule setup.

Listing 1: AuthModule setup

@Module({
  imports: [
    ...,
    PassportModule.register({
      defaultStrategy: 'jwt',
      property: 'user',
      session: false,
    }),
    ...
  ],
  controllers: [AuthController],
  providers: [AuthService, JwtStrategy],
  exports: [PassportModule],
})
export class AuthModule {}

The PassportModule.register() takes an instance of the AuthModuleOptions as input. The most important property to configure on the PassportModule is the AuthModuleOptions.defaultStrategy property. Without it, the PassportModule throws an exception.

By default, once the PassportModule runs the Passport.js Strategy, it extends the Request object and appends a new property pointing to the authenticated user (or whatever is placed in the JWT Payload). The property name added to the Request object is user by default. You can change this default behavior by assigning a new property name to the AuthModuleOptions.property property. In addition, the PassportModule, by default, disables storing any authentication information in the Server Session. This can be changed by enabling the AuthModuleOptions.session property.

Remember to export the PassportModule from your AuthModule. The reason for this is that in every module where you want to make use of AuthGuard(), you have to import the AuthModule and import the PassportModule.

To protect any Route Handler or Controller, use the @UseGuards() decorator provided by Nest.js as follows:

// Controller
@UseGuards(AuthGuard())
export class FeatureController { ... }

// Route Handler
@Post()
@UseGuards(AuthGuard())
public async createTodo(...):
  Promise<any> { ... }

You can check the source code for this package by following this URL: https://github.com/nestjs/passport.

Demo
Let's start by installing the required NPM packages.

Step 1: Add the following NPM packages that you need to use throughout building the AuthModule:

yarn add bcrypt @nestjs/passport @nestjs/jwt passport passport-jwt

In addition, you need to install some dev-dependencies for the types of the above non-Nest.js packages.

yarn add @types/bcrypt @types/passport @types/passport-jwt -D

Step 2: Create the Users Module that will eventually hold all code related to Users and their management, by running the command:

nest g m user

The command creates a new folder and places the new UsersModule inside it. In addition, this module is imported by default on the AppModule.

Step 3: Create the /users/entity/user.entity.ts class. Listing 2 shows the source code for the UserEntity.

Listing 2: User Entity

@Entity('user')
export class UserEntity {
  @PrimaryGeneratedColumn('uuid') id: string;
  @Column({ type: 'varchar', nullable: false, unique: true }) username: string;
  @Column({ type: 'varchar', nullable: false }) password: string;
  @Column({ type: 'varchar', nullable: false }) email: string;

  @BeforeInsert()
  async hashPassword() {
    this.password = await bcrypt.hash(this.password, 10);
  }
}

The UserEntity class holds only the basic information needed to authenticate a user in your application. If you were to build a full user management module, of course, you'd capture more user information.

Notice the @BeforeInsert() hook that the code uses from the TypeORM module. This hook runs and gives the developer the opportunity to run any code before saving the Entity in the database. In this case, the code hashes the original password entered by the user so that you don't store any plain text passwords. For this purpose, the code makes use of the bcrypt package.

Finally, make sure that you import the TypeORM module into the UsersModule and provide the UserEntity so that @nestjs/



TypeOrm can generate a corresponding Repository class that you're going to use later when you build the UsersService class.

@Module({
  imports:
    [TypeOrmModule.forFeature([UserEntity])],
  ...
})
export class UsersModule {}

Step 4: Generate a TypeORM migration to create the user table inside the database by running the following command:

yarn run "migration:generate" AddUserTable

The next time you run the application, the migrations are checked and if there are any pending migrations, the application runs them automatically, ensuring that the database structure is always in sync with the entity structure in your application.

Step 5: Create the DTO objects the application needs.

Listing 3 shows the source code for the CreateUserDto class.

Listing 3: CreateUserDto class

export class CreateUserDto {
  @IsNotEmpty()
  username: string;

  @IsNotEmpty()
  password: string;

  @IsNotEmpty()
  @IsEmail()
  email: string;
}

The CreateUserDto class is used to pass the information provided by the user upon registering a new account.

Listing 4 shows the source code for the UserDto class:

Listing 4: UserDto class

export class UserDto {
  @IsNotEmpty()
  id: string;

  @IsNotEmpty()
  username: string;

  @IsNotEmpty()
  @IsEmail()
  email: string;
}

The UserDto is used when you want to return the User information. Notice how the password field is omitted from this class because you don't ever want to return the user's stored password.

The last DTO you need for the application is the LoginUserDto class that the application uses to verify the user's credentials when they are trying to log in.

export class LoginUserDto {
  @IsNotEmpty()
  readonly username: string;

  @IsNotEmpty()
  readonly password: string;
}

Step 6: Create the /users/users.service.ts class by running this command:

nest g s users

The command creates the UsersService class and imports it automatically to the UsersModule.

You're going to build only the necessary pieces you need to facilitate the user authentication process in the To Do application.

Step 7: Locate the /src/shared/mapper.ts file and add a new mapper utility function to map a UserEntity to a UserDto instance. Listing 5 shows the source code for the toUserDto() mapping function.

Listing 5: toUserDto helper method

export const toUserDto = (data: UserEntity): UserDto => {
  const { id, username, email } = data;

  let userDto: UserDto = {
    id,
    username,
    email,
  };

  return userDto;
};

Step 8: Open the /users/users.service.ts class that you generated in Step 6 so that you can start adding functions to it.

Step 9: Inject the UserEntity repository into the constructor of the UsersService class as follows:

constructor(
  @InjectRepository(UserEntity)
  private readonly userRepo:
    Repository<UserEntity>,
) {}

Listing 6: findByLogin() method

async findByLogin({ username, password }: LoginUserDto):
  Promise<UserDto> {
  const user = await this.userRepo.findOne({ where: { username } });
  if (!user) {
    throw new HttpException('User not found', HttpStatus.UNAUTHORIZED);
  }
  // compare passwords
  const areEqual = await comparePasswords(user.password, password);
  if (!areEqual) {
    throw new HttpException('Invalid credentials', HttpStatus.UNAUTHORIZED);
  }
  return toUserDto(user);
}

Step 10: Add the findOne() function to the service as follows:



async findOne(options?: object): Promise<UserDto> {
  const user =
    await this.userRepo.findOne(options);
  return toUserDto(user);
}

This function is a building block for other functions. As input, it accepts an object containing any valid TypeORM Filter object structure.

The function uses the repository to find a single user record in the database and returns the user in the form of a UserDto.

Step 11: Add the findByLogin() function to the service.

Listing 6 shows the complete source code. This function is used later when the user wants to log in to the application. It accepts the user's username and password. It starts by querying for the user and then comparing the user's stored hashed password to the one passed to the function. If the user isn't found or the passwords don't match, the function throws an Unauthorized HttpException.

Step 12: Add the findByPayload() function to the service as follows:

async findByPayload({ username }: any):
  Promise<UserDto> {
  return await this.findOne({ where:
    { username } });
}

Once Passport.js validates the JWT on the current Request and the token is valid, it calls a Callback function, defined by your application, to check for the user in the database (maybe check if the user is not locked, etc.). The callback function then passes the user object back to the Passport.js middleware so that it can append it to the current Request object.

Step 13: Add the create() function to the service. Listing 7 shows the complete source code for this function. It's used to register a new user in the application and makes sure that the user is a new one.

Listing 7: create() method

async create(userDto: CreateUserDto): Promise<UserDto> {
  const { username, password, email } = userDto;
  // check if the user exists in the db
  const userInDb = await this.userRepo.findOne({ where: { username } });
  if (userInDb) {
    throw new HttpException('User already exists', HttpStatus.BAD_REQUEST);
  }
  const user: UserEntity = await this.userRepo.create({
    username, password, email, });
  await this.userRepo.save(user);
  return toUserDto(user);
}

Step 14: Finally, make sure to export the UsersService on the UsersModule so that other modules, specifically the AuthModule, can communicate with the database via access to the UsersService.

@Module({
  ...
  exports: [UsersService],
})
export class UsersModule {}

The UsersService is now ready.

Building the AuthModule
Let's switch gears and start building the AuthModule.

Step 1: Create the Auth Module that will eventually expose the /auth endpoint to allow user registration, login, and privacy protection in your application. Generate the module by running the following command:

nest g m auth

The command creates a new folder and inside it, the new AuthModule. In addition, this module is imported by default on the AppModule.

Step 2: Configure the AuthModule to use the @nestjs/passport package and configure a few settings in the Passport.js middleware.

Listing 8 shows the complete source code for the AuthModule.

Listing 8: AuthModule class

@Module({
  imports: [
    UsersModule,
    PassportModule.register({
      defaultStrategy: 'jwt',
      property: 'user',
      session: false,
    }),
    JwtModule.register({
      secret: process.env.SECRETKEY,
      signOptions: {
        expiresIn: process.env.EXPIRESIN,
      },
    }),
  ],
  controllers: [AuthController],
  providers: [AuthService, JwtStrategy],
  exports: [PassportModule, JwtModule],
})
export class AuthModule {}

The module:

• Imports the UsersModule to enable the use of UsersService.
• Imports the PassportModule provided by the @nestjs/passport package. It also configures this module by explicitly specifying the default strategy to use to authenticate users, in this case, the jwt strategy.
• Imports the JwtModule provided by the @nestjs/jwt package. This module provides utility functions related to JWT authentication. The only function you're interested in from this module is the sign() function that you'll use to sign the tokens with. The module requires setting the JWT expiry time and the secret code that's used to sign the token.
• Provides the JwtStrategy class. The implementation of this class will be discussed very shortly.
• Exports the PassportModule and JwtModule so that other modules in the application can import the AuthModule and make use of the AuthGuard() decorator to protect Route Handlers or entire Controllers.

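The last two bullets mention signing tokens with a secret key and an expiry. For intuition only, here is what an HS256-signed JWT looks like structurally, sketched with nothing but Node's built-in crypto module. signHs256 is a hypothetical helper written for this illustration; it skips registered claims such as the expiry that @nestjs/jwt manages for you, so keep using the library in the real application.

```typescript
import { createHmac } from "crypto";

// Illustrative only: the shape of an HS256-signed JWT, which is what a
// call like jwtService.sign(payload) produces conceptually.
const b64url = (data: string): string =>
  Buffer.from(data).toString("base64url");

function signHs256(payload: object, secret: string): string {
  const header = b64url(JSON.stringify({ alg: "HS256", typ: "JWT" }));
  const body = b64url(JSON.stringify(payload));
  // The signature is HMAC-SHA256 over "<header>.<payload>" keyed by the secret.
  const signature = createHmac("sha256", secret)
    .update(`${header}.${body}`)
    .digest("base64url");
  return `${header}.${body}.${signature}`;
}
```

Because the signature depends on the secret, a consumer holding the same secret (the same process.env.SECRETKEY you pass to JwtModule.register() and the JwtStrategy) can both verify the token and read its payload.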


Listing 9: JwtStrategy class

import { PassportStrategy } from '@nestjs/passport';
import { ExtractJwt, Strategy } from 'passport-jwt';

@Injectable()
export class JwtStrategy extends PassportStrategy(Strategy) {
  constructor(private readonly authService: AuthService) {
    super({
      jwtFromRequest: ExtractJwt.fromAuthHeaderAsBearerToken(),
      secretOrKey: process.env.SECRETKEY,
    });
  }

  async validate(payload: JwtPayload): Promise<UserDto> {
    const user = await this.authService.validateUser(payload);
    if (!user) {
      throw new HttpException('Invalid token', HttpStatus.UNAUTHORIZED);
    }
    return user;
  }
}

Step 3: Add the /auth/jwt.strategy.ts class. Listing 9 shows the complete source code for the JwtStrategy class.

The JwtStrategy class is defined as an @Injectable() service. Hence, Nest.js can inject it anywhere this service is needed via its Dependency Injection system.

The class extends the PassportStrategy class defined by the @nestjs/passport package. The PassportStrategy class takes as input a Passport.js strategy. In this case, you're passing the JWT Strategy defined by the passport-jwt Node.js package.

The constructor of this service injects the AuthService class. This service will be implemented in a moment.

In addition, the constructor calls the PassportStrategy's constructor and passes two important options.

jwtFromRequest:
  ExtractJwt.fromAuthHeaderAsBearerToken()

This configures the Strategy (imported from the passport-jwt package) to look for the JWT in the Authorization Header of the current Request, passed over as a Bearer token.

secretOrKey: process.env.SECRETKEY

This configures the secret key that the JWT Strategy uses to decrypt the JWT token in order to validate it and access its payload.

Make sure to pass the same secret key in the JWT Strategy and the JwtModule once it's imported into AuthModule.

What actually happens is that the JWT Strategy extracts the token and validates it. If the token is invalid, the current Request is stopped and a 401 Unauthorized response is returned to the user. Otherwise, the validate() function is called and passed the JWT token's payload, to allow your application to check whether the user exists in the database (and maybe also check that the user isn't locked, etc.).

The validate() function should throw an Unauthorized exception if the user isn't valid. Otherwise, it should return the user back to the PassportModule. The PassportModule, in return, appends the user object returned by the validate() function into the current Request object.

You are free to return any information on the User object to be appended on the current Request object so that you can retrieve it later inside the Route Handlers.

The JwtPayload object is a helper object to hold the content of the JWT payload and is defined as follows:

export interface JwtPayload {
  username: string;
}

Step 4: Generate the /auth/auth.service.ts class by running this command:

nest g s auth

The command creates the AuthService class and automatically provides this service inside the AuthModule.

Figure 1: POST /auth/register

Step 5: Inject the UsersService and JwtService classes into the constructor of the AuthService class as follows:

constructor(
  private readonly usersService: UsersService,
  private readonly jwtService: JwtService,
) {}

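The jwtFromRequest option discussed above, ExtractJwt.fromAuthHeaderAsBearerToken(), encapsulates a small parsing step: read the Authorization header and take the credentials that follow the Bearer scheme. The sketch below illustrates the idea with a hypothetical helper named tokenFromAuthHeader; the strategy itself should keep using passport-jwt's built-in extractor.

```typescript
// Illustration of what a Bearer-token extractor does conceptually:
// given a header value of "Bearer <token>", return <token>; otherwise null.
function tokenFromAuthHeader(header: string | undefined): string | null {
  if (!header) return null;
  const parts = header.split(" ");
  if (parts.length !== 2 || parts[0].toLowerCase() !== "bearer" || !parts[1]) {
    return null;
  }
  return parts[1];
}
```

This is why requests without an Authorization header, or with a different scheme such as Basic, never reach your validate() function: no token is extracted, so the guard rejects the request up front.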


The JwtService is imported from the @nestjs/jwt package. This service exposes utilities to help sign a JWT payload.

Step 6: Add the register() function to the service. Listing 10 shows the complete source code for the register() function. This function takes the CreateUserDto as an input parameter and delegates the actual user creation to the UsersService.create() function. It returns a RegistrationStatus to indicate a successful or failed user creation.

Listing 10: register() method

async register(userDto: CreateUserDto): Promise<RegistrationStatus> {
  let status: RegistrationStatus = {
    success: true,
    message: 'user registered',
  };
  try {
    await this.usersService.create(userDto);
  }
  catch (err) {
    status = { success: false, message: err, };
  }
  return status;
}

The RegistrationStatus helper class is defined as:

export interface RegistrationStatus {
  success: boolean;
  message: string;
}

Step 7: Add the login() function to the service. Listing 11 shows the complete source code for the login() function. The function receives the LoginUserDto as an input parameter. Internally, it uses the UsersService.findByLogin() function to validate the user credentials.

It then prepares the JWT payload and signs this payload using the JwtService.sign() function. Finally, it returns the signed token together with the username of the current user. You must return the signed token, and you can also return any arbitrary user fields you wish to return to the client-side app upon a successful login.

Listing 11: login()

async login(loginUserDto: LoginUserDto): Promise<LoginStatus> {
  // find user in db
  const user = await this.usersService.findByLogin(loginUserDto);
  // generate and sign token
  const token = this._createToken(user);
  return {
    username: user.username,
    ...token,
  };
}

private _createToken({ username }: UserDto): any {
  const user: JwtPayload = { username };
  const accessToken = this.jwtService.sign(user);
  return {
    expiresIn: process.env.EXPIRESIN,
    accessToken,
  };
}

Step 8: Add the validateUser() function to the service. Listing 12 shows the complete source code for the validateUser() function. The function receives the JWT payload as input and retrieves the user from the database via the UsersService.findByPayload() function.

Remember from above that this function is called by the JwtStrategy.validate() function once a token is validated by the Passport.js middleware.

Listing 12: validateUser() method

async validateUser(payload: JwtPayload): Promise<UserDto> {
  const user = await this.usersService.findByPayload(payload);
  if (!user) {
    throw new HttpException('Invalid token', HttpStatus.UNAUTHORIZED);
  }
  return user;
}

Step 9: Generate the /auth/auth.controller.ts class by running this command:

nest g c auth

The command creates the AuthController class and automatically adds it into the controllers property on the AuthModule.

Step 10: Configure the controller's endpoint name by giving it a prefix of auth:

@Controller('auth')
export class AuthController { ... }

Step 11: Inject the AuthService into the constructor of this controller:

constructor(private readonly authService:
  AuthService) {}

Step 12: Add the register() route handler. Listing 13 shows the complete source code for this route handler. The register() route handler is a POST route handler that receives an instance of the CreateUserDto object and delegates creating a new user to the AuthService.register() function. Depending on the status of registration, this route handler might either throw a BAD_REQUEST exception or return the actual registration status.

Listing 13: register() action

@Post('register')
public async register(@Body() createUserDto: CreateUserDto,): Promise<RegistrationStatus>
{
  const result: RegistrationStatus = await this.authService.register(createUserDto,);
  if (!result.success) {
    throw new HttpException(result.message, HttpStatus.BAD_REQUEST);
  }
  return result;
}

Step 13: Add the login() route handler as follows:

@Post('login')
public async login(@Body() loginUserDto: LoginUserDto):
  Promise<LoginStatus> {
  return await this.authService.login(loginUserDto);
}

The login() route handler simply returns the response of the call to the AuthService.login() function. Basically, if the user credentials are valid, this route handler returns a signed JWT to the calling app.

The application is now ready to register users and authenticate them with JWT.

Let's register a new user by sending a "POST /auth/register" request with a payload, using the Postman client as in Figure 1.

Make sure that you add the Content-Type: application/json request header; otherwise, Nest.js won't be able to read your request payload.

The application successfully registers the user. Let's now log into the application by sending a "POST /auth/login" request with a payload, as in Figure 2.

Figure 2: POST /auth/login

The response of a successful login returns the Access Token (JWT) together with other information that the application sends with it, such as the username and expiresIn fields.

Users Must Be Logged-In to Create a New To-Do Item
Now that authentication works in the application, let's switch to the TodoModule and ensure that users must be logged in before they can create any To Do or Task items.

Step 1: Import the UsersModule and AuthModule into the TodoModule as follows:

@Module({
  imports: [
    UsersModule,
    AuthModule,
    TypeOrmModule.forFeature([
      TodoEntity, TaskEntity, UserEntity]),
  ],
  ...
})
export class TodoModule {}

By importing the AuthModule, you'll be able to make use of AuthGuard() to protect the Route Handlers and force a logged-in user.

Also notice that the code injects the UserEntity class into the TypeOrmModule so that the TodoModule can access it to retrieve a User entity from the database.

Step 2: Extend the TodoEntity class by adding the owner property:

@ManyToOne(type => UserEntity)
owner?: UserEntity;

The owner property is of type UserEntity. The @ManyToOne() decorator on this new property signals to the TypeORM module to store the User ID on the Todo table and configure it as a Foreign Key. Every user can own one or more To Do items and, in return, every To Do is owned by one and only one user.

Listing 14: User AuthGuard

@Post()
@UseGuards(AuthGuard())
async create(@Body() createTodoDto: CreateTodoDto, @Req() req: any,): Promise<TodoDto> {
  const user = <UserDto>req.user;
  return await this.todoService.createTodo(user, createTodoDto);
}

Figure 3: Create todo

Step 3: Generate a TypeORM migration to add the owner column on the todo table inside the database by running the following command:

yarn run "migration:generate" AddOwnerColumnToTodoTable



The next time you run the application, the migrations are checked and if there are any pending ones, the application runs them automatically, ensuring that the database structure is always in sync with the entity structure in your application.

Step 4: Protect the route handlers to force a logged-in user. Listing 14 shows how to require the AuthGuard inside the TodoController.

The JWT Authentication Strategy kicks in whenever the create() route handler is called to validate the JWT and the user. If it succeeds in doing so, the create() route handler is executed.

Note how the code makes use of @UseGuards(AuthGuard()) and also injects the @Req() req as an input parameter to the create() route handler.

The body of the route handler retrieves the logged-in user via req.user. This information was injected into the current Request object by the Passport.js middleware. It then passes this information to the TodoService.createTodo() function.

Step 5: Query for the user inside the TodoService.createTodo() function. Listing 15 shows the complete source code for the createTodo() function. The function queries the database for the logged-in user via the UsersService.findOne() function. It then sets the owner property on the TodoEntity to the value of the user object. Finally, it saves the new To Do item into the database.

Listing 15: createTodo() method

async createTodo({ username }: UserDto, createTodoDto: CreateTodoDto,): Promise<TodoDto> {
  const { name, description } = createTodoDto;
  // get the user from db
  const owner = await this.usersService.findOne({ where: { username } });
  const todo: TodoEntity = await this.todoRepo.create({ name, description, owner, });
  await this.todoRepo.save(todo);
  return toTodoDto(todo);
}

Let's create a new To Do item by sending a POST /api/todos/ request with a payload, using the Postman client, as in Figure 3.

Make sure that you add the Content-Type: application/json request header; otherwise, Nest.js won't be able to read your request payload.

Also, add the Authorization request header; otherwise, Nest.js won't be able to find the token and it won't authenticate the request. The authorization header should look similar to this (except without the line breaks forced by the printing process):

Authorization Bearer eyJhbGciOiJIUzI1
NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6ImJpbGFsa
GFpZGFyIiwiaWF0IjoxNTU1MjU3NDg0LCJleHAiOjE1NTUz
MDA2ODR9.-wUfMaJ37gkM6OWqvKpNck5nQGV8SlGvl_Dwdd
LkYJU

The application responds with a 200 OK response signaling the success of creating a new To Do item. Notice how the owner property is now populated on the To Do item with the currently logged-in user details.

Let's try to create a new To Do item without supplying an authorization request header, as in Figure 4.

Figure 4: Invalid createTodo() request

The application now responds with a 401 Unauthorized response signaling the failure of creating a new To Do item because the application had no access to the user token and couldn't validate the request.

You can go deeper into the source code accompanying this article to see where else in the application source code I've made use of @UseGuards(AuthGuard()) to protect other route handlers and force a logged-in user before they're able to execute those route handlers.

You can find the source code of this article and the rest of this series here: https://github.com/bhaidar/nestjs-todo-app.

Conclusion
You've seen how easy it is to add authentication to your Nest.js application using Passport.js, the famous and flexible Node.js authentication middleware.

Soon, you'll be looking at integrating Swagger into your Nest.js application to provide full documentation of the To Do REST API, and adding an Angular client-side application that connects to the REST API and allows the user to register, log in, and manage To Do items via a Web app instead of relying only on Postman.

Happy Nesting!

Bilal Haidar

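As a code-form recap of the Postman walk-through from the Nest.js article above: the three calls are plain JSON POSTs, with the create-todo call adding the Bearer token returned by the login response. The sketch below only builds request descriptions (every helper name here is hypothetical, and the base URL is whatever host your Nest.js app runs on); hand them to fetch() or any HTTP client to actually send them.

```typescript
// Hypothetical client-side helpers mirroring the article's Postman requests.
interface RequestSpec {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

const jsonHeaders = { "Content-Type": "application/json" };

function registerRequest(
  base: string,
  user: { username: string; password: string; email: string },
): RequestSpec {
  return {
    url: `${base}/auth/register`,
    method: "POST",
    headers: { ...jsonHeaders },
    body: JSON.stringify(user),
  };
}

function loginRequest(
  base: string,
  creds: { username: string; password: string },
): RequestSpec {
  return {
    url: `${base}/auth/login`,
    method: "POST",
    headers: { ...jsonHeaders },
    body: JSON.stringify(creds),
  };
}

// The Authorization header carries the accessToken from the login response.
function createTodoRequest(
  base: string,
  accessToken: string,
  todo: { name: string; description: string },
): RequestSpec {
  return {
    url: `${base}/api/todos/`,
    method: "POST",
    headers: { ...jsonHeaders, Authorization: `Bearer ${accessToken}` },
    body: JSON.stringify(todo),
  };
}
```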


ONLINE QUICK ID 2001091

Financial Modeling with Power BI and DAX: Life Insurance Calculations

Microsoft debuted the Power BI application in 2015 to enable scalable data analysis, analytics, and visualizations across even the largest of organizations. Opening up a Power BI dashboard enables users to dynamically interact with the data and visuals they see on the screen. Microsoft officially defines these dashboard views as reports, but I think the term dashboards invites users to interact with the data dynamically rather than statically, as you would with a paper report, for example. I recommend that you pick your own data visualization applications (if you can) based on how you anticipate that the end user will interact with the data.

Helen Wall
www.helendatadesign.com
Helen Wall is a power user of Microsoft Power BI, Excel, and Tableau. The primary driver behind working in these tools is finding the point where data analytics meets design principles, thus making data visualization platforms both an art and a science. She considers herself both a life-long teacher and learner. She is a LinkedIn Learning instructor for Power BI courses that focus on all aspects of using the application, including data methods, dashboard design, and programming in the DAX and M formula languages. Her work background spans an array of industries and numerous functional groups, including actuarial, financial reporting, forecasting, IT, and management consulting. She has a double bachelor's degree from the University of Washington, where she studied math and economics, and also was a Division I varsity rower. On a note about brushing with history, the real-life characters from the book The Boys in the Boat were also Husky rowers that came before her. She also has a master's degree in financial management from Durham University (in the United Kingdom).

The capabilities of Power BI lie in the semantic layer of the application, which enables you to create powerful models with an unlimited number of calculations. Creating financial models in Power BI presents challenges like how to replicate financial calculations dynamically so the user can control the input. The solution comes through creating DAX measure calculations using Microsoft's DAX language.

Insurance Aggregates Risk Among Many
When you buy an insurance policy, you're distributing the quite low risk of needing your policy benefits with the many other similar policyholders in your much larger insurance pool. The likelihood that one individual in the entire risk pool will receive a payout is quite high, but the probability that you will personally receive a payout is, conversely, quite low.

Centuries ago, merchants swapped cargo space on their ships with other merchants to mitigate the risk of losing all their goods in a catastrophic event, such as the ship sinking. These risk mitigation activities eventually evolved into more organized insurance exchanges, such as Lloyd's of London, which today serves as the standard bearer of the modern insurance exchange market.

The term insurance refers to a wide range of insurance product types. You can purchase insurance policies for your house, car, and business. Large companies purchase commercial insurance policies to mitigate their risks. Insurance companies enter into reinsurance contracts to redistribute their larger risks among an even larger pool of insurers. I'm going to focus on personal insurance contracts, and more specifically life insurance contracts, as the data source for this article.

Setting Up Insurance Calculations
When you purchase a life insurance policy, you select the desired face amount for the policy that your beneficiaries will receive as the death benefit value in the event of your death. In exchange for a financial offset to the event of your death, you pay premiums to the insurance company to keep the policy in force. Insurance companies calculate the premium they will charge for you to enter their insurance pools by analyzing factors such as:

• The face amount of the insurance policy
• The age of the policyholder when they purchase the contract
• Underwriting risks such as smoking and general health
• Duration of the life insurance contract
• Interest rate projected over the policy duration

You can set up these calculations dynamically using DAX measure calculations, which allows the users to select some of their own assumptions to put into the model. For example, you will see that higher interest rates lead to lower premiums because of how the time value of money impacts the cash flow over the term duration.

Actuarial Life Tables
In the last few years, the premise of merging a traditional business segment, such as finance or insurance, with the capabilities of technology went from niche discussion areas to widely known buzzwords such as "Fintech" and "Insurtech." The actuarial science profession, which predates most of modern technology, calculates the costs of financial risks and determines how to mitigate them.

Actuarial tables refer to standardized life tables that provide data to use in risk analysis and in insurance calculations. To create your own life insurance calculations, you're going to use the life tables provided by an actuarial professional organization, the Society of Actuaries. These tables provide key figures for actuarial calculations, such as the numbers of surviving lives for each aggregated age cohort (Figure 1). These tables can also break down risks into more detailed categories, such as gender and smoking risk factors, but for the sake of simplicity, let's use a single standardized life table.

You can check out the Society of Actuaries tables on their website, here: https://www.soa.org/globalassets/assets/Files/Edu/2018/ltam-standard-ultimate-life-table.pdf

Setting Up Financial Models in Power BI
I chose to use Power BI for this data visualization endeavor because it lets you develop impressive financial models using the application's built-in capabilities. In Power BI, Microsoft combines functionalities from Excel, Access, Power Query, and PowerPivot together in a single power tool.
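To make the earlier time-value-of-money point concrete before diving into the data work: the present value of a future payment falls as the discount rate rises, which is why higher assumed interest rates produce lower premiums. Here is a minimal numeric sketch of discounting (written in TypeScript purely for illustration; the actual model expresses its calculations in DAX):

```typescript
// Present value of a single future payment:
// PV = amount / (1 + rate)^years. Higher rates discount future
// claims more heavily, so less premium is needed today to fund them.
function presentValue(amount: number, rate: number, years: number): number {
  return amount / Math.pow(1 + rate, years);
}
```

For example, $100,000 payable in 20 years has a present value of roughly $67,300 at a 2% rate, but only roughly $37,700 at 5%, so less money set aside today covers the same future claim.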

Data Connections in Power Query
To import the data into Power BI, you create a new query in the Power Query Editor for a Web connection to the online life tables from the Society of Actuaries website, which are available in a PDF format. You want to specifically connect to the tables that provide the number of surviving lives at each age. This data spans two pages in the PDF, so you need to combine these two table objects into a single data table. The code below, written in the Power Query M language, connects to the online tables separately and then combines them with the Table.Combine function.

let
    Source1 = Pdf.Tables(Web.Contents("https://www.soa.org/globalassets/assets/Files/Edu/2018/ltam-standard-ultimate-life-table.pdf")),
    Table1 = Source1{[Id="Table003"]}[Data],
    Headers1 = Table.PromoteHeaders(Table1, [PromoteAllScalars=true]),
    Table2 = Source1{[Id="Table004"]}[Data],
    Headers2 = Table.PromoteHeaders(Table2, [PromoteAllScalars=true]),
    Source = Table.Combine({Headers1, Headers2}),
    Columns = Table.SelectColumns(Source, {"x", "l_{x}"}),
    Type = Table.TransformColumnTypes(
        Columns, {{"x", Int64.Type}, {"l_{x}", Int64.Type}}),
    Rename = Table.RenameColumns(
        Type, {{"l_{x}", "l(x)"}})
in
    Rename

To set up your own query connection in Power BI:

1. Open a new Power BI Desktop file.
2. On the splash screen, select Get Data.
3. When the Power Query Editor dialog window opens, search for the Blank Query option.
4. Don't populate the formula bar in this new query, but instead select Advanced Editor from the top ribbon.
5. This opens another dialog box where you delete the existing M code in the frame and copy and paste the code above into the now empty space. Then select Okay. Make sure to remove any stray spaces in the code if you're having trouble getting the query to work.
6. Double-click on the query name in the query list on the right side of the screen and rename it SOA Life Tables.
7. Select Close and Load in the upper left side of the screen to load this Web data connection into Power BI.

I encourage you to learn more about Power Query because it allows you to easily import large data sets, perform ETL processes on them, and refresh the data queries. Its functionality scales across other Microsoft tools, including Excel. For the purposes of this article, I want to focus your Power BI efforts on the DAX language, so you only need to copy the query into your own Power BI file to proceed to the next steps.

Working with the DAX Language
To set up the financial model in Power BI, you're going to leverage the DAX programming language. Microsoft didn't create DAX specifically for Power BI, but they do maximize its capabilities within the application by letting you create an amazing array of calculations. Many Power BI developers don't like using it because it takes them out of their comfort zones. This means that they miss out on creating powerful models in Power BI. I personally really like DAX because it involves a lot of creativity, data interaction, and critical thinking to set up the calculations that I want the model to do.

For those new to DAX, you will find the language syntax pretty straightforward. Unlike many other programming languages, it does not give you grief about case sensitivity. Understanding the logic behind DAX, however, presents a much bigger challenge. You'll find that putting the calculation into the context of how DAX returns results will best help you understand how to create the intended calculation. In this Power BI financial model, you're going to use two different DAX functionalities:

• DAX queries
• DAX measures

Figure 1: Example of an actuarial life table

DAX Calculations, Part 1: Premiums
If you go out to purchase an insurance policy, you can choose among many insurance product options, including whether you want to pay the premium monthly or annually. For these calculations, you want to simplify the assumptions for the life insurance model to eliminate the noise and make it easier to replicate. Your term life insurance model works under the following assumptions:

1. Term life insurance contracts have fixed durations. You'll use a thirty-year term period for your calculations.
2. The face amount of the policy represents the amount the beneficiaries will receive if the policyholder dies before the end of the thirty-year term.
3. The insurer pays the death benefit out at the end of the year if the policyholder dies that year.
4. The policyholder pays level premiums at the beginning of each year to keep the policy in force.
5. If they die, they no longer need to pay premiums on the policy (the death benefit gets paid out instead).
6. The policy pays out the face amount if the policyholder dies, but nothing otherwise. Some insurance products have cash values, but term life insurance does not.

Determining Relationships Between Tables
When I started learning DAX several years ago, I learned that getting the DAX measures to work started with understanding the context of the calculations I wanted to do. The Lives table you imported into Power BI gives you the numbers of lives surviving to a given age. If you look at the data trends, you see that at age 20, the table says 100,000 lives. This actual number doesn't mean much, other than giving a benchmark to set up the rest of your calculations against. At age 21, you see a slightly smaller number of lives because some in the age group died in the intervening year. The difference between those two numbers represents the number of deaths that year relative to the 100,000 lives the table starts with at age 20. This logic continues down the table for the rest of the age cohorts.

Starting at the age when the policyholder purchased the contract, they get older each year into the term duration. Therefore, the life insurance calculations need to use the age at each subsequent year into the policy term rather than the initial purchase age. To get the DAX logic to work for this scenario, you need to disconnect the purchase age from the life table by creating a separate table for these ages.

The life table's ages range from 20 to 100, but you realistically want to set the upper age to 70 because that means the last year of a thirty-year term duration occurs when they turn age 100. You can create this table in multiple ways, but because
you're already working in DAX, you can create a new table through the DAX function GENERATESERIES. This function produces a table of continuous values over the number range that you specify in the formula. Create it using the following steps:

1. Select the option in the Home ribbon for New Table.
2. In the formula bar, enter the DAX function to create a continuous table starting at 20 and ending at 70 using the GENERATESERIES function you see below. You don't need to enter the third formula term because the GENERATESERIES function defaults to using 1 as the interval unless you specify otherwise.

Ages = GENERATESERIES(20,70)

In order to set up life insurance calculations over a thirty-year period, you also need to create a separate table for the years. The term duration, unlike the ages, doesn't already exist in the data, but it will serve as a table to do the DAX measure calculations against. You can also set it up using the GENERATESERIES function.

Years into Term = GENERATESERIES(1,30)

You can rename the Value field in both these tables to Ages and Year, respectively, to make them easier to recognize when setting up the visuals and DAX measure calculations.

Start your modeling process by adding a matrix visual to the Power BI dashboard. In the matrix, you can add fields for the rows and columns. You then add values that populate the cells in the middle of the matrix visual. Think of the cell locations as the pivot coordinates where the row and column values create the matrix table dimensions. In this matrix table, you're going to populate the values in the middle with the DAX measures that you set up to model the life insurance calculations.

Building DAX Measures Step-by-Step
DAX measures work as portable formulas that allow you to create dynamic calculations in Power BI dashboards. Unlike other programming languages, the results that a DAX measure calculation returns depend on the context in which you're evaluating the expression. DAX measures don't depend on any other DAX measures in the matrix or table visual, but they do depend on the row and column dimensions (the pivot coordinates) that you add to the tables. You're going to create DAX measures to populate the values field of this matrix visual. Because the starting age remains the same no matter how many years into the term duration you're calculating the measure at, you expect to see these numbers remain unchanged for each of the column's pivot coordinates within each of the fixed row pivot coordinates (Figure 2).

Figure 2: Starting age DAX measure

Let's set up a matrix with the purchase age in the rows, and the years into term in the columns:

1. Select the Matrix visual from the visualization pane options at the top (next to the Table visual).
2. Add the Ages field to the rows field space in the visualization pane, and the Year to the columns field below that.
3. Make sure that Power BI doesn't aggregate either of these fields by right-clicking on the drop-down arrow in the field name and selecting Do Not Summarize.
4. This leaves us with the Values field to populate. If you drag the lives field to this space, notice that Power BI throws an error because you set up the tables in the disconnected format, which means that it can't recognize the relationship between them. To populate this field, you need to build out your DAX measure calculations.

To make it easier to track calculations, you can create a separate table to hold only these DAX measures:

1. Select the option in the Home ribbon to Enter Data.
2. Don't enter anything in the dialog box that opens, but rename this table Calculations and hit Okay to close it.
3. After you add the Starting Age measure to this table, you'll see that the new DAX measure has a calculator icon next to it.
4. You can then delete Column1 from the model and the entire table will contain only the measures you need for this financial model. From there, you just add more DAX measures to the table.

5. To add a DAX measure to the Calculations table, keep this new table selected, and choose New Measure from the Home menu. You now see a formula bar appear at the top where you can enter the DAX code.

Another key to setting up DAX measures is building them in pieces. You first create a starting age measure that references the age values in the row dimension. In setting up your calculations, you need to convert the starting age from the values in the age table to a DAX measure for the starting age. If you drag the Age field to the values space, Power BI gives an error message. Not only does this Age field not tie to the Years in the columns, but you also need to turn the age into a DAX measure. You need to set up a parameter-harvesting measure to convert these ages from field values into DAX measure values.

Parameter-Harvesting Measures
If you want to use the values in the row or column dimensions in your DAX measure calculations, you need to convert them into a measure first. To convert a field into a parameter-harvesting measure, you need to add the MAX function around the referenced field to convert the field into a measure value that you can use in other calculations.

You use MAX as the function to reference the age in the DAX measure formula below, but you can theoretically use other DAX calculation functions such as MIN or SUM as well. I default to using the MAX function for consistency across many calculations as I build out the model. You also see this function nested inside the CALCULATE function. I'll discuss the CALCULATE function in much more detail later, but for now, I added it as a safety step to make sure that the DAX measure works in the built-out calculations.

Starting Age = CALCULATE(MAX(Ages[Age]))

Unlike other coding, where the code gets us to the result, DAX calculates the result through the pivot coordinates that you see through the row and column labels, for example. When you add the Starting Age measure to the Values field space for the matrix visual, you see the same age for each row of the table across all the columns (Figure 2).

You now need to determine the age of the policy purchaser at each year into the term duration, which represents their current age. This means that you need to create a new DAX measure to reference the Year values in the column labels, and you also need to add it to the existing starting age measure and subtract one year from this expression.

Current Age = CALCULATE(MAX('Year in Term'[Year]))+[Starting Age]-1

When you replace the starting age, this DAX measure represents the policyholder getting older within the duration of the policy. You now see a table matrix (Figure 3) filled with ages that change depending on the pivot coordinates of the row and the column dimensions within the table.

Figure 3: Current age

On their own, these ages don't tell us much, but you want to use these DAX measures to determine the corresponding number of lives for each age. Getting these DAX measures for the number of lives by age cohort (or group) set up requires understanding the logic behind DAX measures. The CALCULATE function can work as a conditional function, like the logic you see in the SUMIF or COUNTIF Excel functions. You can break this logic into two components to make setting up the formulas a more methodical process:

• Work backward in the expression by first setting up the filters on the tables you want to reference. The FILTER function allows you to select a table, then create a condition to match fields within this table. You want to filter the Life Table by the age field x so that it matches the Current Age measure.
• Take this table that you just filtered and apply a calculation on this filtered table. Because the Life Table only contains unique age values, the filtered table at each pivot coordinate in the table only matches up to one age in the filtered table. This means that you can use any arithmetic function, such as SUM or MAX, and the expression will return the same result.

Current Lives (Start of Year) =
CALCULATE(MAX('Life Tables'[l(x)]),
FILTER(ALL('Life Tables'),
'Life Tables'[x]=[Current Age]))

You can add this measure to the matrix table by first removing the Current Age measure from the Values section and replacing it with the Current Lives measure. You see that the number of lives reflects the matching age in the life table for each policyholder age at the pivot coordinates in the values (Figure 4), and that the number of lives decreases as the years progress into the term duration.

Figure 4: Current lives

To get the starting lives for each of the ages, you set up another DAX measure using similar logic to the Current Lives measure, except you want to reference the Starting Age measure rather than the Current Age as the filtering condition. Notice that each row in the matrix visual has the same starting age that matches the age you see in the row dimensions when you swap the new DAX measure into the matrix visual (Figure 5), and each of the columns in a row has the same starting lives.

Starting Lives =
CALCULATE(MAX('Life Tables'[l(x)]),
FILTER('Life Tables',
'Life Tables'[x]=[Starting Age]))

Figure 5: Starting lives

Calculating Probability Rates
You can then use the built-out DAX measures calculating the number of surviving lives by age in other DAX measures to determine key probability numbers for each of these age cohorts. You can determine the likelihood of an age group as a whole surviving to a given age by dividing the number of surviving lives for each age within the 30-year term by the starting number of lives at the age when they purchased their life insurance policy. The DIVIDE function works in the same way as the division operator within a DAX expression, except that you can add an alternative result (in this case the value of 0) to return in case the calculation returns an error.

Survival Probability = DIVIDE([Current Lives
(Start of Year)],[Starting Lives],0)

You can see the trends of these probabilities in Figure 6.

Figure 6: Survival probability

You also need to determine the probability of death each year by age group. To get this likelihood, you first need to determine the number of deaths for a given age group in a single year. The Current Lives value represents the number of lives at the beginning of the year. The number of lives at the beginning of the next year represents the same number as the lives at the end of the previous year.

Current Lives (End of Year) =
CALCULATE(MAX('Life Tables'[l(x)]),
FILTER('Life Tables',
'Life Tables'[x]=[Current Age]+1))

The difference between the two measures gives us the total number of deaths that year. Because you already set up both these calculations as DAX measures, you can simply subtract the Current Lives (End of Year) from the Current Lives (Start of Year) measure.

Deaths = [Current Lives (Start of Year)]-
[Current Lives (End of Year)]

To determine the likelihood of death each year, the policyholder must first survive to the start of that year and then die in exactly that year. You can also think of it as the number of deaths each year divided by the starting number of lives. Like the calculation for the probability of survival, you can use the DIVIDE function to efficiently set up the probability of death in a given year.

Probability of Death at current age =
DIVIDE([Deaths that year],[Starting Lives],0)

You can see how the numbers change by each age cohort in Figure 7.

Figure 7: Probability of death

Calculating Time Value of Money
In financial modeling, you want to make assumptions for the time value of money to use in your calculations. You can make these assumptions complex by modeling the interest rates on their own or using historical numbers, and you can also use an interest rate assumption to make the calculations easier (at least to start with). Let's use a 4% interest rate for the calculations because this closely reflects the interest rate you see toward the end of 2019. Historically, life insurance calculations used a 6% interest rate, but let's set up the initial calculations using the more realistic lower interest rate. I recommend storing fixed values (such as the current selected interest rate) as standalone DAX measures, which makes it easier to change these numbers later.

Interest Rate = 0.04

You use this 4% interest rate to calculate the time value of money, or the projected discounted amount of money at a time period in the future. For timing purposes, you can anticipate that the policyholder pays insurance premiums at the beginning of the year, and the insurance company pays out any death benefits at the end of the year. This means that you need to factor in two discount factors for each year in the term. You don't need to reference the ages for this calculation, as you'll see that each age group has the same discount rate for each year into the term's duration. You use the POWER DAX function to calculate the time value of money. You then reference the interest rate measure in the first part of the function, and the pivot coordinates for the Years columns in the second part of the function. To reference the years in the columns, you need to set up a parameter-harvesting measure to reference this field as a measure by putting a MAX function around it.

Discount Factor = POWER(1+[Interest Rate],
-CALCULATE(MAX('Year in Term'[Year])))

You also need to put a negative sign in front of this parameter-harvesting measure because you're discounting at the interest rate, which in turn makes the time value of money grow smaller over time (shown in Figure 8).

You also want to calculate the time value of money at the beginning of the year. You set this up using the same structure as the discount factor at the end of the year, except you need to add one to the second part of the expression after the parameter-harvesting measure to move it back a year (as seen in Figure 9).

Discount Factor (n-1) = POWER(1+[Interest Rate],
-CALCULATE(MAX('Year in Term'[Year]))+1)

Calculating Present Values
Once you create the calculations for the mortality rates, as well as the discount factors, you can start to create the DAX measures that directly shape the financial model from these other previously built-out DAX measure calculations. For your example, assume a face amount for a term life insurance policy of $1,000,000. Enter the face amount as a standalone DAX measure to make it easier to reference and change later if you decide to do so.

Face Amount = 1000000

The death benefits represent the amount that the insurance company pays out to beneficiaries if the policyholder dies. It doesn't represent the face amount, but rather the projected cash payment per policy based on the probabilities and discount factors you already set up. To calculate the death benefit by year (this logic works the same for all the age groups), think about
doing the financial projection not at each year within the term duration, but instead at the purchase year of the policy, at the beginning of year one when the insurer issues the policy, which I will refer to as "time zero." For thirty years into the future, you want to know the cost of insurance, but you want to measure it at the point when they issue the policy by bringing the death benefit at each year and age back to this period of time zero.

Figure 8: Discount factors
Figure 9: Discount factor (start of period)

Each one of these years has a cost of insurance that represents the present value of the death benefits for each year calculated at time zero. To calculate the value of a death benefit, you multiply the likelihood of death each year by the discount factor for each year by the face amount of the insurance policy. This gives the present value of the cost of insurance for each year. You see that because the chance of death rises as the policyholder gets older, the cost of insurance also rises, even though the discount factor offsets the calculations in the other direction (as shown in Figure 10).

PV of Death Benefits (Cost of Insurance) =
[Face Amount]*[Death Probability]
*[Discount Factor]

Figure 10: Present value of death benefits

You use similar logic to get the present value of what the policyholder pays to keep the insurance policy in force every year, again calculated when the insurer issues the policy at time zero. This time, you need to multiply the probability that they'll survive to the beginning of that year (which means that they make a payment that year) by the time value of money discounted to the start of that year in the term duration. You see that these values decrease over time as the probability of survival and the time value of money decrease over the years in the term duration (as shown in Figure 11).

PV of Annuity Due = [Survival Probability]
*[Discount Factor (n-1)]

Figure 11: Present value of annuity due

Working with Filters
Before you think about the calculation you want to do when setting up a DAX measure, first think about the filters to apply to the table you want to use in the calculation. If you already calculated a measure from the life tables, for example, you already referenced the life table in the calculations. To use this measure in another calculation, you need to override the filters for these calculations by putting an ALL function around the table name.

Solving for Premiums
The present value of the payments into the policy (which you initially set to $1), and the present value of the cost of insurance each year, serve together as key components of the calculations, but this doesn't answer the key question that you ask when you purchase a policy: How much do you need to pay each year for the premium?

You calculated the present value at time zero for each year in the term duration, but now you need to add all of the cost-of-insurance pieces together to get the net present value of the death benefit. To accomplish this, you use the SUMX function to override the term-year dimension in the columns. Like the CALCULATE function, you set up the SUMX function by first thinking about the filters to apply to the expression. In this case, you already calculated the present values of the annuity and insurance based on the row and column pivot coordinates. Each row's coordinates represent a different age. You want to keep this dimension in the matrix visual. You want to filter the term-year column coordinates so that the DAX expression no longer references the column coordinate, but instead references all the column values from 1-30 years. You place the ALL function around the year field to remove the filters for the years and reference all the years when you perform the calculation. The second part of the expression references the DAX measure that you created for the present value for each year. When you add this new DAX measure to the matrix, notice that you now get the same NPV (net present value) of death benefits for each year in the term duration (as shown in Figure 12).

NPV of Death Benefits (at Time 0) =
SUMX(ALL('Year in Term'[Year]),
[PV of Death Benefits (Cost of Insurance)])

If you're purchasing a term life insurance policy with a single lump-sum payment, the NPV of the death benefit technically represents the total cost of the life insurance policy. However, you want to set up your calculations using thirty flat yearly premiums. You can again use the ALL and SUMX functions to calculate the NPV of the annuity due at time zero (as shown in Figure 13).

NPV of Annuity Due (at Time 0) =
SUMX(ALL('Year in Term'[Year]),
[PV of Annuity Due])

You calculate the premium by setting the NPV of the death benefit equal to the annuity due multiplied by the premium that you want to solve for. This means that you divide the NPV of the death benefit by the annuity due to get the premium as a DAX measure calculation (as shown in Figure 14).
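The same solve can be sanity-checked outside Power BI. The Python sketch below is not the article's code; it replicates the arithmetic on an invented three-year mini-term with a made-up slice of a life table (the article's model uses a thirty-year term and the real SOA table), assuming the same 4% rate and $1,000,000 face amount.

```python
# Plain-Python check of the premium solve: death benefits discount a full
# year; premiums, paid up front, discount one year less. All numbers toy.

life_table = {20: 100000, 21: 99970, 22: 99935, 23: 99895}
interest, face, term, start_age = 0.04, 1_000_000, 3, 20
start_lives = life_table[start_age]

# PV of Death Benefits (Cost of Insurance) per year:
# face * probability of dying in exactly that year * end-of-year discount
pv_death = [
    face
    * (life_table[start_age + y - 1] - life_table[start_age + y]) / start_lives
    * (1 + interest) ** -y
    for y in range(1, term + 1)
]

# PV of Annuity Due per year: probability of surviving to the start of the
# year (so a premium gets paid) * start-of-year discount
pv_annuity = [
    life_table[start_age + y - 1] / start_lives * (1 + interest) ** (1 - y)
    for y in range(1, term + 1)
]

# Setting NPV(death benefits) = premium * NPV(annuity due) and solving:
premium = sum(pv_death) / sum(pv_annuity)
print(round(premium, 2))
```

Because the premium is just the ratio of the two NPVs, it comes out as a single level number, which is exactly why the Premium measure you're about to build is flat across every year of the matrix.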

Premium = DIVIDE(
[NPV of Death Benefits (at Time 0)],
[NPV of Annuity Due (at Time 0)],0)

Figure 12: Net present value of death benefits
Figure 13: Net present value of annuity due
Figure 14: Premiums

Notice that the premium remains level over the course of the term duration, so you don't even have to use the year field to analyze the life insurance calculations anymore. These tables can become fatiguing to read, so you can convert these DAX calculations into visuals to make them easier to digest and to maximize Power BI's graphic capabilities.

Encouraging User Interactivity
You just calculated the yearly premium amount for a fifty-year range of policyholder ages in a table visual format. I highly recommend setting up and testing the DAX measures using these table and matrix visuals because it allows you to easily confirm that the calculations work as you expect them to. However, the end users of your dashboard will much more easily see the trends in the data and calculations if you represent them in stacked bar charts (like that shown in Figure 15), where you put the age on the axis and the premium DAX measure you just created in the values field. Make sure that the chart only shows the age range from 20-70. I also selected the colors in this visual (and the other visuals in the final dashboard) based on the colors in the Society of Actuaries' logo. To change the color values in the bar chart, go to the formatting section, and in the Colors menu, select Custom Color. You can then enter the hex value that matches the colors you want to use, such as 4E7DA6 for this bar chart.

Figure 15: Premium bar chart

You set up the life insurance calculations using a fixed interest rate of 4%. Although this makes it easy to set up and test the DAX measure calculations, it limits the users' interactivity with the dashboard. To make the dashboard more dynamic, you can let the user select the interest rate that they want to model in the calculations. You first create a new table for the interest rates using the DAX GENERATESERIES function. The formula starts the values at 1% and continues until 20% in intervals of 1%, which you add as the third component in the DAX expression.

Interest Rates = GENERATESERIES(0.01,0.2,0.01)

You then need to update the Interest Rate DAX measure that you already set up to replace the fixed interest rate used in the calculations. To make these updates, you go to the Interest Rate DAX measure in the Calculations table and change the fixed 4% interest rate to the MAX function with the new interest rates table field you created using GENERATESERIES nested inside it. It defaults to using the maximum interest rate value of 20% if you don't have an interest rate selected in the dashboard.

Interest Rate =
MAX('Interest Rates'[Interest Rate])

To allow the user to dynamically select the interest rate they want to use, you need to set up a separate slicer visual as a user-selected filter when you design the dashboard. You want to use the interest rates field you just created with the GENERATESERIES function, and not the Interest Rate DAX measure used in the life insurance calculations. Changing the slicer visual format into a drop-down menu rather than a list saves space in the dashboard. You can turn off the formatting for the slicer visual header and turn on the title. You want to nudge the user to change the interest rate used in the calculation by putting a very brief instruction at the top of this visual. Rename the title of the slicer visual "Select interest rate for calculation" and change the font size and color (to a dark gray) to make the title easier to read.

Notice how the premiums decrease if you select a 10% interest rate instead of 4% (as shown in Figure 16). This occurs because higher interest rates mean that the time value of

money will decrease in the future, which then means that the present value of future death benefits decreases as well.

Figure 16: Dynamic interest rates for premium chart

DAX Calculations, Part 2: Actuarial Reserves
Financial modeling allows you to analyze numbers and calculations from several different perspectives. At the time the insurer sold the policy, the NPV (net present value) of the death benefit for the face amount equals the NPV of the premiums. However, these balances change a year later if you calculate the NPV balances again for both the death benefits and premiums. Because the aggregated population dies at higher rates at older ages, you see an initial premium higher than the payouts that the insurance company expects to make to the policyholders. In later years, this pattern reverses and the payments become higher than the premiums. You see a year into the policy that this balance changes for the present values for each year (as shown in Figure 17). The difference between the NPV of the death benefits and the NPV of the premiums at a given year in the term duration represents the actuarial reserve that the insurance company needs to set aside to pay projected death benefits in the future.

Figure 17: PV of death benefits versus PV of annuity due trends
Figure 18: Future NPV of death benefits versus future NPV of premiums
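The reserve idea just described, future death benefits minus future premiums, can be checked with a short Python sketch. As before, this is not the article's code: it reuses the invented three-year mini-term and toy life-table slice, and at each policy year k it sums only the per-year present values from year k onward.

```python
# Toy reserve check: reserve(k) = NPV of future death benefits minus NPV of
# future premiums, counting only years k..term. All inputs are invented.

life_table = {20: 100000, 21: 99970, 22: 99935, 23: 99895}
interest, face, term, start_age = 0.04, 1_000_000, 3, 20
start_lives = life_table[start_age]

pv_death = [
    face
    * (life_table[start_age + y - 1] - life_table[start_age + y]) / start_lives
    * (1 + interest) ** -y
    for y in range(1, term + 1)
]
pv_annuity = [
    life_table[start_age + y - 1] / start_lives * (1 + interest) ** (1 - y)
    for y in range(1, term + 1)
]
premium = sum(pv_death) / sum(pv_annuity)  # level premium, as solved earlier

def reserve(k):
    """Future death benefits minus future premiums for policy years k..term."""
    future_benefits = sum(pv_death[k - 1:])
    future_premiums = premium * sum(pv_annuity[k - 1:])
    return future_benefits - future_premiums

print(abs(round(reserve(1), 9)))  # 0.0: at issue, premiums exactly fund benefits
print(reserve(2) > 0)             # True: early premium overpayment builds a reserve
```

At issue the two NPVs balance by construction, and because mortality rises with age, the level premium overpays in the early years, so the reserve grows positive before it runs back toward zero at the end of the term.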

Calculating by Time Dimensions

In this model, you assume no expenses or profits for the insurance company so that you can focus on setting up the calculation logic in DAX measures. You can calculate the NPV of the future death benefits for each year in the term duration, again using the SUMX function. First, you set up the filters for the function using the FILTER function on the year in term table. Unlike the NPV of the death benefits and payments at time zero, however, you don't want to set up the filter to calculate the expression across all the years in the term; instead, use the ALL function to remove the filters for the pivot coordinates that define the present value for the year in term field. Next, you can set up the conditions for the filters where you filter the year in the term duration to refer only to the years after the current year referenced in the calculation.

NPV of Future Death Benefits =
SUMX(FILTER(ALL('Year in Term'),
'Year in Term'[Year]>=MAX('Year in Term'[Year])),
[PV of Death Benefits (Cost of Insurance)])

From there, you just calculate the net present value over the remaining years. You do the same for the present value of premiums over the same time period. Notice that the second part of the DAX expression for the premiums multiplies the premium by the present value of the annuity due for each of the future years.

NPV of Future Premiums =
SUMX(FILTER(ALL('Year in Term'),
'Year in Term'[Year]>=MAX('Year in Term'[Year])),
[PV of Annuity Due]*[Premium])

If you put the NPVs for the future death benefits and premiums together in a single chart, you can see the difference in the trends between the two future NPV amounts (Figure 18). I used hex value 8AA6BF for the light blue in the premium bars, and hex value 024873 for the blue death benefit bars. The difference between the trends gives you the actuarial reserve at each year.

Actuarial Reserve = [NPV of Future Death Benefits]
-[NPV of Future Premiums]

Let's see what the actuarial reserve looks like in a bar graph over the entire duration of the term life insurance policy (Figure 19). Notice the humpback shape of the trend. I find it helpful to think of the numbers as calculated measurements at each year within the term duration, where the highest

Figure 19: Actuarial reserves by year
Figure 20: Finished Power BI dashboard
measurements represent where the policyholder pays a lot of the premium for the policy, but still has the risk to receive death benefits from the policy. Again, I used a color from the Society of Actuaries' logo, except this time I formatted the bar using the teal color in hex value 025373.

Curating Dynamic Dashboards

To provide the best path for the user to interact with your calculations and maximize their understanding of this financial model, you want to communicate the premium and reserve trends through visuals that also interact with one another. Put the Premiums chart at the top of the dashboard toward the right (as shown in Figure 20). Now put the reserves bar chart directly underneath it so they have the same width. I put a thin gray line horizontally to separate the charts and indicate to the end user the difference in the dimensions that you want to nudge the user to see. I also added the Society of Actuaries logo to the top left corner, along with a label underneath it that gives the user the title of the dashboard. Underneath that, you can add a filter for the interest rate slicer, as well as a slicer visual with an actual slider that you can move around for the ages. Add nudge prompts to the titles of these slicer visuals so that the user knows to select input values that update the rest of the dashboard. You also want to indicate on the premium bar chart that they can select a bar in the chart to filter the rest of the dashboard by that purchaser age.

You can also add other helpful summaries for the end user, such as summary card visuals, to show the premium and interest rate. If you add borders to the summary cards and place them in prominent locations on the dashboard, those selected totals stand out as key figures in the dashboard. Nudge prompts give instructions in the titles to encourage the end user to interact with the dashboard by subtly telling them to make selections within visuals. You can also line up the color scheme to match the Society of Actuaries logo so that the blues and grays in the logo match the blue and gray in the bar charts.

For first-time users of the DAX language, the syntax and logic can seem intimidating, but testing out DAX measure calculations really teaches you the language's logic, even if it involves a little frustration along the way. As you can see with this example, Power BI allows you to perform some incredible financial modeling calculations once you understand how the logic works. What financial modeling do you want to do? Don't let the initial hurdle of trying to understand DAX all at once intimidate you. Think about how the calculations work, and then build and test these calculations step-by-step until you arrive at the model you want to see in your dashboard.

Helen Wall

CODE Developer Magazine, Jan/Feb 2020, Volume 21 Issue 1
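The two SUMX measures and the reserve difference can also be mirrored outside DAX. The Python sketch below (using the same made-up mortality assumptions as the earlier sketch, not the article's actual model) restricts each sum to the remaining years, the analogue of FILTER(ALL('Year in Term'), 'Year in Term'[Year] >= MAX('Year in Term'[Year])), and takes benefits minus premiums. It uses a standard prospective-reserve formulation (discounting to the valuation year and conditioning on survival), which differs in normalization from the article's time-zero measures but produces the same humpback pattern seen in Figure 19.

```python
# Hypothetical inputs for illustration only.
FACE, RATE = 100_000, 0.04
QX = [0.010, 0.011, 0.013, 0.015, 0.018,
      0.022, 0.027, 0.033, 0.040, 0.048]   # assumed annual death probabilities

def prospective_reserves(qx, face, rate):
    """Reserve at each year t: PV_t(future benefits) - PV_t(future premiums)."""
    n = len(qx)
    v = 1 / (1 + rate)
    alive = [1.0]                  # survival probability to the start of year t
    for q in qx:
        alive.append(alive[-1] * (1 - q))
    # Level premium from the time-zero equivalence principle.
    ann0 = sum(alive[t] * v**t for t in range(n))
    ben0 = sum(alive[t] * qx[t] * v**(t + 1) for t in range(n))
    prem = face * ben0 / ann0
    reserves = []
    for t in range(n):
        # Only years s >= t contribute, like FILTER(ALL(...), Year >= MAX(Year)).
        ben = sum(alive[s] / alive[t] * qx[s] * v**(s + 1 - t) for s in range(t, n))
        ann = sum(alive[s] / alive[t] * v**(s - t) for s in range(t, n))
        reserves.append(face * ben - prem * ann)
    return reserves

res = prospective_reserves(QX, FACE, RATE)
print([round(r) for r in res])   # starts near zero, builds up, then runs down
```

The reserve is zero at issue (the equivalence principle forces the two NPVs to match), grows while the level premium exceeds the rising expected cost of insurance, and runs back down as the remaining benefit years are paid out.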
(Continued from 74)

to be able to make productive the fragment the specialist produces." That is to say, we have to think about how best to combine the various output of the different specialist communities within our organizations—developers, QA, operations, agilists, business analysts, product owners, or however your company has segregated your specialization—into a concrete and useful outcome.

This means that managers are, in many ways, troubleshooters. The job of management becomes that of figuring out what obstacles lie in the path of the team and removing them. "A policy does nothing by itself," Drucker quotes a new chief executive as saying. "My contribution is to make sure that this actually gets done."

Your Contribution

This, then, brings us to the last point of contribution: What's your contribution? Regardless of role or position in the hierarchy, you can begin to walk this path of management-slash-leadership by asking yourself this basic question: What can I contribute? This doesn't have to be something that's unique to you, or even within the company, but it does require an honest and thorough assessment of your skillset and the needs of the company around you. What do you do well? What does the company do poorly that you can take on as a personal quest?

Drucker sums up by suggesting that "the focus on contribution by itself supplies the four basic requirements of effective human relations: communications; teamwork; self-development; and development of others." In other words, if you focus on how you can contribute, you can communicate that more effectively with your own management, you begin to see how your contribution is used by others and can tailor it accordingly, you begin to see where you require additional growth, and you can see how you can help others grow in turn.

Summary

Contribution is, in many ways, the lifeblood of the successful enterprise (commercial or otherwise). When you have a team of people who are engaged in asking themselves how they can contribute and who see their efforts as part of the larger fabric of the team and the company, they begin to develop the necessary view to be successful in making others successful, which in turn becomes the force multiplier that the company needs. But it all begins with the basic question: What can you contribute?

Ted Neward
MANAGED CODER

On Contribution

For an industry that prides itself on its analytical ability and abstract mental processing, we often don't do a great job applying that mental skill to the most important element of the programmer's tool chest—that is, ourselves. Ever wondered what your contribution is?

As developers, we're often measured by metrics that have something to do with code. The amount of code used to be a common metric, at least until managers realized that we could write programs that could generate programs that could generate programs. (It turns out that software is great for automating tasks—who knew?) We create it, we design it, we architect it, we test it, we deploy it, we modify it, we sometimes replace it with more of it, we even sometimes get the chance to delete it, but no matter how you slice it, it feels like the key contribution developers make is entirely around code.

Except that's not really the case.

Developer Contribution

Let's imagine for a moment that you're looking for a new place to live. You go and talk to a real estate agent about finding a place to live. They immediately come back with a list of six different places, ranging from $700,000 to $900,000 in price, on some lovely lots, all across the Eastside Seattle area. That would be great, except you were looking for rentals in Florida. Was it reasonable to expect the real estate agent to understand that you wanted to rent instead of buy, or that you were looking for a place in the Sunshine State rather than the Pacific Northwest? Perhaps, but the real estate agent could easily come back with, "Yes, but my job is to help people buy property."

Except that's not really the case. Your job, dear agent, is to help me find a place to live. And if that's not something you can help me with, then I should probably find a new agent.

In the end, that's really the point: If I engage with you in some sort of economic transaction, what I'm really looking for is satisfaction. In other words, I compensate you when you've provided me with some good or service that, for some reason, I don't care to provide for myself (regardless of the reason—it's immaterial whether I can't or simply choose not to).

When developers fall back on the idea that their contributions to the world revolve entirely around code, they take a dangerously short-sighted view of what the transaction is between customer and provider. Developers—or, rather, their organizations, either internal or external—are retained to provide satisfaction to the customer. Certainly, it's often the case that the customer wants some bespoke software created, but to make that assumption going into the exchange is making the same mistake that the real estate agent made earlier. Sometimes the right answer is to write some bespoke C# applications that are full-blown Web applications; at other times, though, the right answer is to slam out a PowerShell script. And in most cases, the answer will be a range of possible solutions, and it's up to the developer—and the organization—to help the customer find the "right" solution, where "right" is something contextual to the customer's particular situation. The "right" solution may—and often will—involve bespoke code, but that's simply a step toward the actual goal. When developers lose sight of this basic reality, we lose sight of so many other things that all of the other trappings (like Kanban boards or burndown charts or bug counts or…) essentially lose all meaning.

Management Contribution

What if you're in management? What's your contribution there?

To many of those new to the role or observing it from a distance, it seems that management's job is to make big, strategic decisions and then observe the results. For this reason, managers are (sometimes inspirationally, other times aspirationally) referred to as "leaders" or "leadership," and are expected to be the rallying point for the company and so on. (For the moment, we'll ignore the more cynical take, that managers make no contribution whatsoever.)

Frankly, this is a pretty distorted view of the job.

To be sure, managers are sometimes called upon to make hard decisions. This often isn't by choice or preference. Many companies seek to make decisions by consensus, but this often leads to an unacceptable state of affairs; when the entire room has to come to an agreement before they can consider a decision made, the person willing to be the most stubborn and intransigent often gets the decision they want, largely because others just reach a point of emotional exhaustion. Weak managers prefer this approach, because it means that they can avoid the consequences of a bad decision: "Hey, the team agreed, so it's not my fault." As much as it may feel authoritarian to overrule members of the team, strong managers understand that a decision made is often better than languishing in a pit of ambiguity for months or years.

The act of making decisions or setting out grand strategy is, in and of itself, only a small fraction of the job. Management—or leadership, if you prefer—is much more about seeing what the job needs and moving to adapt. "The executive who keeps on doing what he has done successfully before he moved is almost bound to fail," writes Peter F. Drucker, of the Drucker School of Management. "The executive who fails to understand this will suddenly do the wrong things the wrong way—even though he does exactly what, in his old job, had been the right things done the right way." Management is about much more than the "same job different title" that many developers (and other leaf-node contributors) believe it to be. Technical leads can make technology decisions. Developers can figure out which testing framework to use. Management is about people—and more to the point, about figuring out how best to work with the highly nondeterministic, incredibly diverse collection of skills, history, and preferences that they represent.

Drucker talks almost specifically to developers when he talks about "specialists" and "knowledge workers" in the same section. "Knowledge workers do not produce a 'thing'," he writes. "They produce ideas, information, concepts. The knowledge worker, moreover, is usually a specialist. In fact, he can, as a rule, be effective only if he has learned to do one thing very well. By itself, however, a specialty is a fragment and sterile. Its output has to be put together with the output of other specialists before it can produce results." In a world where we find ourselves in companies that have begun to collapse developers with QA and operations folks into teams of DevOps, this seems to be entirely prescient.

"The task is not to breed generalists," he continues. "It is to enable the specialist to make himself and his specialty effective. This means that he must think through who is to use his output and what the user needs to know and to understand

(Continued on page 73)


