
7th Anniversary Edition

Let's connect socially:
facebook: facebook.com/dotnetcurry
twitter: twitter.com/dotnetcurry
linkedin: linkedin.com/suprotimagarwal
GitHub: github.com/dotnetcurry

LETTER FROM THE EDITOR

"If you want to go fast, go alone. If you want to go far, go together!"

Today, we celebrate DotNetCurry (DNC) Magazine's 7th Anniversary edition, and I value you all who are sharing this special day with us.

I want to take this opportunity to congratulate my team of authors and reviewers for all their time, efforts and accomplishments, to you the reader who continues to inspire us, and to our sponsors who have helped us keep this magazine freely available for the dev community.

Like every year, we will use this milestone as a springboard to raise the bar a little higher, and bring you some awesome tuts in the coming months.

Enjoy this edition, and do not forget to email me your feedback at suprotimagarwal@dotnetcurry.com or reach out to us on twitter @dotnetcurry. Cheers!

Editor In Chief: Suprotim Agarwal (suprotimagarwal@dotnetcurry.com)
Art Director: Minal Agarwal

Contributing Authors: Damir Arh, Daniel Jimenez Garcia, Dobromir Nikolov, Gouri Sohoni, Hardik Mistry, Imran Siddique, Mahathi, Subodh Sohoni, Vikram Pendse, Yacoub Massad

Technical Reviewers: Damir Arh, Daniel Jimenez Garcia, Dobromir Nikolov, Gerald Verslius, Gouri Sohoni, Subodh Sohoni, Tim Sommer

Next Edition: Sep 2019

Copyright @A2Z Knowledge Visuals Pvt. Ltd. Reproductions in whole or part prohibited except by written permission. Email requests to "suprotimagarwal@dotnetcurry.com". The information in this magazine has been reviewed for accuracy at the time of its publication, however the information is distributed without any warranty expressed or implied.

Windows, Visual Studio, ASP.NET, Azure, TFS & other Microsoft products & technologies are trademarks of the Microsoft group of companies. 'DNC Magazine' is an independent publication and is not affiliated with, nor has it been authorized, sponsored, or otherwise approved by Microsoft Corporation. Microsoft is a registered trademark of Microsoft corporation in the United States and/or other countries.
CONTENTS – We are 7!

• The Maybe Monad (C#)
• Building a Cloud Roadmap with Microsoft Azure
• Authentication in ASP.NET Core, SignalR and Vue applications
• Deploy an ASP.NET Core application to Azure Kubernetes Service (AKS)
• Azure DevOps Search - Deep Dive
• Integration Testing of Real-time communication in ASP.NET Core
• Developing Desktop applications in .NET
• Configuration driven Mobile DevOps
• Using YAML in Azure Pipelines

Chance to feature in the next edition: We at dotnetcurry are extremely happy to release our 7th anniversary edition today! We would love to hear from you about how long you have been reading the magazine, what you like/dislike, or any random thoughts that come to your mind. Do write to me at suprotimagarwal@dotnetcurry.com and your comments could get featured in our next edition!
PATTERNS & PRACTICES

Yacoub Massad

THE MAYBE MONAD

In this article, I will talk about the Maybe monad; a container that represents a value that might or might not exist.


Introduction:
In C#, null is a valid value for variables whose types are reference types (e.g. classes).

For example, this is valid C# code:

string str = null;

Additionally, if a method has a return type that is a reference type, null is a valid return value:

string GetLogContents(int id)


{
var filename = "c:\\logs\\" + id + ".log";

if (File.Exists(filename))
return File.ReadAllText(filename);

return null;
}

The above method returns null if a log file that corresponds to the requested id is not found.

The caller of the GetLogContents method will receive a string after the method is called. Should the
calling method check the value for null before using it?

In this case, it should.

If it doesn’t and the method returns null, a NullReferenceException is thrown if the calling method
tries to access a member of the returned string.

What’s more dangerous is that the returned string might be passed several times from method to
method before a member of the string is finally used. It is only when a member of the string is used that
the NullReferenceException is thrown. This might make it hard to figure out the real cause of the
NullReferenceException.

A string return type can also be used in methods that never return null. In these cases, the caller
shouldn’t have to check for null.

In C# (before C# 8), there is no way to distinguish between the two cases. Methods that never return null
and methods that might return null, both have the return type string.

Note: C# 8 is expected to have nullable reference types. This means that methods that might return null
can have the return type of string? and methods that never return null can have the return type string.

Editorial Note: If you haven’t yet read about the new C# 8 features, read them here > New C# 8 Features in
Visual Studio 2019. C# 8 is currently in preview at the time of this writing.

Here is an updated version of the GetLogContents method:

Maybe<string> GetLogContents(int id) {
var filename = "c:\\logs\\" + id + ".log";

if (File.Exists(filename))
return File.ReadAllText(filename);

return Maybe.None;
}

The signature of the method now tells us that it may return a string or it may not return a value.
Now, methods that always return a string (non-null) can have a return type of string, and methods that may
return a string, can have a return type of Maybe<string>.

In the rest of the article, I will:

• discuss different implementations of the Maybe type


• talk about Map and Bind
• show you how to use LINQ to work with Maybe values

A sum-type implementation
Note: the source code for this section is found here: https://github.com/ymassad/MaybeExamples/tree/master/MaybeAsASumType

In the Designing Data Objects in C# and F# article, I talked about sum types. A sum type is a data structure
that can be any one of a fixed set of types. For example, we can define a Shape sum type that has the
following three sub-types:

1. Square (int sideLength)
2. Rectangle (int width, int height)
3. Circle (int diameter)

Maybe can be designed to be a sum type in the following way:

public abstract class Maybe<T>
{
    private Maybe()
    {
    }

    public sealed class Some : Maybe<T>
    {
        public Some(T value) => Value = value;

        public T Value { get; }
    }

    public sealed class None : Maybe<T>
    {
    }
}

Maybe<T> is a sum type. It has two subtypes: Some and None. This means that a variable of type Maybe<T>
can only hold an instance of type Maybe<T>.Some or Maybe<T>.None.



Note that it cannot hold an instance of type Maybe<T> because Maybe<T> is abstract. Also, the private
constructor means that no other classes can inherit from Maybe<T>.

The Some subtype has a single property, the Value property. The None subtype has no properties because
it models the case where the value is missing.

Now, after a caller calls the GetLogContents method and gets a Maybe<string>, it can do something like
the following:

var contents = GetLogContents(1);

if (contents is Maybe<string>.Some some)
{
    Console.WriteLine(some.Value);
}
else
{
    Console.WriteLine("Log file not found");
}

Here, we use the pattern matching feature of C# 7 to check if the contents variable is of type Maybe<string>.Some. If so, we write the log contents to the console. Otherwise, we inform the user that the log file is not found.

Note that here, there is no way (or at least it is hard) to access the value without first making sure that
there is actually a value.

Still, this does not look very elegant.

The need to write Maybe<string>.Some here is inconvenient. The fact that we have to define the some
variable here and then access the Value property, is also inconvenient.

One thing we can do is create a TryGetValue method in Maybe:

public bool TryGetValue(out T value)
{
    if (this is Some some)
    {
        value = some.Value;
        return true;
    }

    value = default(T);
    return false;
}

This method returns true if there is a value, and false otherwise. Additionally, if there is a value, the value
out parameter will get the contained value.

Here is how it can be used:

if (contents.TryGetValue(out var value))
{
    Console.WriteLine(value);
}
else
{
    Console.WriteLine("Log file not found");
}

This is better since we don’t have to type the full type of the variable (e.g. Maybe<string>.Some). Also, we
don’t have to define a some variable.

However, there is a bigger issue here. Now, the value variable can be accessed anywhere, even in the else
branch where it does not contain a valid value.

A Roslyn analyzer can be built to prevent access to the value variable in a location where TryGetValue is
not known to have returned true.

Another option is to define a Match method for Maybe. I talked about Match methods in the Designing
Data Objects in C# and F# article.

Here is how the consuming code would look:

var contents = GetLogContents(1);

contents.Match(
    some: value =>
    {
        Console.WriteLine(value);
    },
    none: () =>
    {
        Console.WriteLine("Log file not found");
    });

The source code of the Match method used above is defined here.

Similar to the first solution, the value lambda parameter can only be accessed when there is a value.

One potential issue with this solution is performance. Using lambdas to describe how to handle the two
different cases (some and none) might allocate objects that will later need to be garbage collected.

In many cases, this is not an issue. Always measure when it comes to performance.

Note: there is another overload of Match that allows you to return something in each case instead of doing
something in each case.
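The linked source is not reproduced in the magazine, but for the sum-type implementation above, a pair of Match members along these lines would do the job. This is a sketch under the article's design, not the exact repository code:

// Sketch of Match members added inside the abstract Maybe<T> class shown earlier.
// The Action-based overload runs one of the two callbacks; the Func-based overload
// returns a result for each case instead of performing an action.
public void Match(Action<T> some, Action none)
{
    if (this is Some someValue)
        some(someValue.Value);
    else
        none();
}

public TResult Match<TResult>(Func<T, TResult> some, Func<TResult> none)
{
    return this is Some someValue ? some(someValue.Value) : none();
}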

Defining Maybe as a struct


Note: The source code for this section can be found here: https://github.com/ymassad/MaybeExamples/tree/master/MaybeAsAStruct

Consider this method:

static Maybe<string> GetLogContents(int id)
{
    return null;
}



In the previous section, because we defined Maybe as a class, null is a valid value. That
is, Maybe<string> can actually be one of three things: null, Maybe<string>.None and
Maybe<string>.Some.

This method returns null and therefore the consuming code will most likely not behave as expected. For
example, calling the Match method on null will throw a NullReferenceException.

Also, the following if statement:

if (contents is Maybe<string>.None)

..will evaluate to false because null is not equivalent to Maybe<string>.None.

Because of this, it makes sense to define Maybe as a struct.

Structs in C# cannot have the value null. For example, this code is invalid:

int a = null;

Therefore, if we define Maybe as a struct, we are guaranteed that it will never have the value null.

The struct version of Maybe can be found here. Here is some of the code:

public struct Maybe<T>
{
    private readonly T value;

    private readonly bool hasValue;

    private Maybe(T value)
    {
        this.value = value;
        hasValue = true;
    }
    //...
}

Structs always have a public parameterless constructor that initializes all fields to their default values. This
means that if we construct Maybe<string> like this:

var maybe = new Maybe<string>();

..the value field will get the value of null (the default for string), and the hasValue field will get the
value of false (the default of bool). This will indicate that this instance contains no value.

The constructor defined above always sets hasValue to true. It is private, so it can only be used from
within the class. It is used in the following member:

public static implicit operator Maybe<T>(T value)
{
    if (value == null)
        return new Maybe<T>();

    return new Maybe<T>(value);
}

This is the declaration of an implicit operator for converting T to Maybe<T>. This means that we can assign
a string value to a variable of type Maybe<string>. It also means that we can return a string value
inside a method that has Maybe<string> as the return type.

The code here checks the value for null. If it is null, it returns a Maybe instance that contains no value.
Otherwise, it uses the defined constructor to return a Maybe instance that contains the value.

This operator is the reason why the GetLogContents method returns the result of calling the
File.ReadAllText method directly (which is of type string) without constructing a Maybe<string>.

I also defined a static non-generic Maybe class that has some interesting members:

None: a static property that is implicitly convertible to Maybe<T> for any T. That is, there is an implicit conversion operator defined (see here) that allows it to be converted to a Maybe<T> that contains no value. See the GetLogContents method from before: the code in one of its branches returns Maybe.None.

Some: a static method that allows us to create an instance of Maybe<T> that contains a value. Unlike the
implicit operator, this method throws an exception if the value is null.
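For reference, the non-generic class could be shaped roughly as follows. The MaybeNone marker type and the extra implicit operator are my assumptions about how the linked code achieves the behavior described above (assumes using System;):

// Sketch only: one possible shape for the non-generic Maybe helper described above.
public struct MaybeNone
{
}

public static class Maybe
{
    // Implicitly convertible to an empty Maybe<T> for any T via the operator sketched below.
    public static MaybeNone None => new MaybeNone();

    // Unlike the implicit T -> Maybe<T> operator, this throws instead of silently
    // producing an empty Maybe when the value is null.
    public static Maybe<T> Some<T>(T value)
    {
        if (value == null)
            throw new ArgumentException("Value cannot be null", nameof(value));

        return value; // uses the implicit T -> Maybe<T> operator
    }
}

// And inside Maybe<T>:
// public static implicit operator Maybe<T>(MaybeNone none) => new Maybe<T>();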

The Map method


Maybe<T> has a method called Map. Consider this example:

Maybe<string> str = "hello";


Maybe<int> length = str.Map(x => x.Length);

In this example, we have a Maybe<string> that we convert into a Maybe<int> via the Map method. Map
allows us to convert the value inside the Maybe if there is a value. If there is no value, Map simply returns
an empty Maybe of the new type. In the example above, we want to get the length of the string inside the
Maybe.

The lambda given to the Map method will only be used if there is a value inside the Maybe.

I will show you another example of Map soon.
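For the struct version, Map can be implemented in just a few lines. The following is a sketch of the idea rather than the exact code in the repository:

// Sketch of Map as a member of the struct Maybe<T>.
public Maybe<TResult> Map<TResult>(Func<T, TResult> convert)
{
    if (!hasValue)
        return new Maybe<TResult>();    // empty Maybe of the new type

    // The implicit T -> Maybe<T> operator wraps the converted value
    // (and yields an empty Maybe if convert happens to return null).
    return convert(value);
}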

The Bind method


Consider this example:

static Maybe<int> FindErrorCode(string logContents)
{
    var logLines = logContents.Split(Environment.NewLine,
        StringSplitOptions.RemoveEmptyEntries);

    return
        logLines
            .FirstOrNone(x => x.StartsWith("Error code: "))
            .Map(x => x.Substring("Error code: ".Length))
            .Bind(x => x.TryParseToInt());
}

This method takes the contents of some log file and tries to find an error code inside it. Some line in the
file is expected to contain something like this:

Error code: 981

Some log files might not contain an error and thus might not contain such a line.

First, the method splits the content into lines. Then, it tries to find a line that starts with “Error code: “.

The FirstOrNone method is just like the FirstOrDefault method in LINQ. I defined this method as an
extension method over IEnumerable<T>. If there is at least one item in the enumerable, FirstOrNone will
return a Maybe<T> that contains the first item. If the enumerable is empty, a Maybe<T> that has no value is
returned.
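The repository contains the real implementation; an assumed sketch of the predicate overload used here looks like this:

using System;
using System.Collections.Generic;

public static class EnumerableExtensions
{
    // Sketch of FirstOrNone: like FirstOrDefault, but returns an empty Maybe<T>
    // instead of default(T) when no matching item is found.
    public static Maybe<T> FirstOrNone<T>(this IEnumerable<T> source, Func<T, bool> predicate)
    {
        foreach (var item in source)
        {
            if (predicate(item))
                return Maybe.Some(item);
        }

        return Maybe.None;
    }
}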

The Map method is used to convert the value inside the Maybe<string> (if there is a value). Here we want
to take a substring of the line. More specifically, we want to remove the “Error code: “ part from the line.

Now comes the Bind method. Like Map, Bind is also about converting or transforming the value inside the
Maybe.

There is a difference though. Let’s look at the signatures of both these methods:

public Maybe<TResult> Map<TResult>(Func<T, TResult> convert)

public Maybe<TResult> Bind<TResult>(Func<T, Maybe<TResult>> convert)

The difference is in the conversion function. When calling Map, we tell it how to convert T (the original value
type) to TResult (the new value type). When calling Bind, the conversion function is expected to return
Maybe<TResult>, not TResult.

Let’s look at the signature of the TryParseToInt method used in the example above:

public static Maybe<int> TryParseToInt(this string str)

This method is similar to int.TryParse. It takes a string and tries to parse it into a Maybe<int>. If the
string can be parsed into an int, the returned Maybe<int> will contain the result. Otherwise, an empty
Maybe<int> is returned.
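Its implementation (sketched here, assuming the helpers from the previous sections) is a thin wrapper around int.TryParse:

public static class StringExtensions
{
    // Sketch: parse the string if possible, otherwise return an empty Maybe<int>.
    public static Maybe<int> TryParseToInt(this string str)
    {
        if (int.TryParse(str, out var result))
            return Maybe.Some(result);

        return Maybe.None;
    }
}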

If we had used Map instead of Bind in the FindErrorCode method above, the type returned would have
been Maybe<Maybe<int>>.

This type is not really useful and is hard to work with. Bind simply flattens Maybe<Maybe<TResult>> into
Maybe<TResult>. This is why Bind is sometimes called FlatMap.
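A sketch of Bind for the struct Maybe makes the flattening visible: because the conversion function already returns a Maybe<TResult>, its result is returned as-is instead of being wrapped again.

// Sketch of Bind as a member of the struct Maybe<T>.
public Maybe<TResult> Bind<TResult>(Func<T, Maybe<TResult>> convert)
{
    if (!hasValue)
        return new Maybe<TResult>();    // empty Maybe of the new type

    // convert already returns Maybe<TResult>, so no extra wrapping
    // (and therefore no Maybe<Maybe<TResult>>) is produced.
    return convert(value);
}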

Why is Maybe called a Monad?
A Monad is a container of something C<T> that defines two functions:

Return: a function that takes a value of type T and gives us a C<T> where C is the type of the container. For
example, we can convert 1 into a Maybe<int> by using the Maybe.Some method:

var maybe = Maybe.Some(1);

Bind: a function that takes a C<T> and a function from T to C<TResult> and returns a C<TResult>.

That is, Bind looks like this:

(C<T>, T => C<TResult>) => C<TResult>

In the implementation in the source code, Bind is an instance method. So basically, the C<T> it takes is the
instance itself.

There are some rules that a Monad has to follow regarding the Bind function. I am not talking about these
rules here because I want to keep this article practical and not theoretical.

IEnumerable<T> is also a Monad.

The Return function for IEnumerable<T> is simply the creation of an array that contains a single item.
We can also define a Return method like this:

public static IEnumerable<T> Return<T>(T item)
{
    yield return item;
}

The Bind function for IEnumerable<T> is SelectMany. Consider its signature here (I changed TSource
to T to make it easy to read):

public static IEnumerable<TResult> SelectMany<T, TResult>(
    this IEnumerable<T> source,
    Func<T, IEnumerable<TResult>> selector)

It takes an IEnumerable<T> (C<T>) and a function from T to IEnumerable<TResult> (from T to C<TResult>) and returns an IEnumerable<TResult> (C<TResult>).
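As a quick illustration of SelectMany playing the role of Bind, each item below is converted into an enumerable and the results are flattened into a single sequence:

using System.Collections.Generic;
using System.Linq;

public static class SelectManyExample
{
    public static void Run()
    {
        IEnumerable<int> numbers = new[] { 1, 2, 3 };

        // Each int becomes an IEnumerable<int>; SelectMany flattens the results.
        IEnumerable<int> duplicated = numbers.SelectMany(x => new[] { x, x });

        // duplicated: 1, 1, 2, 2, 3, 3
    }
}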

Using LINQ to work with Maybe (and


other Monads)
Consider the following method from the source code:



static void Test4()
{
var errorDescriptionMaybe =
GetLogContents(13)
.Bind(contents => FindErrorCode(contents))
.Bind(errorCode => GetErrorDescription(errorCode));
}

I have talked about GetLogContents and FindErrorCode earlier. GetErrorDescription takes an int
representing the error code and returns Maybe<string> representing the error description. This method
might return an empty Maybe if no error description can be found for the specified error. Here is the
definition of this method:

static Maybe<string> GetErrorDescription(int errorCode)
{
    var filename = "c:\\errorCodes\\" + errorCode + ".txt";

    if (File.Exists(filename))
        return File.ReadAllText(filename);

    return Maybe.None;
}

What the Test4 method does is that it gets the log contents (if any), finds the error code inside the log
contents (if any), and finally gets a description of the error code (if any).

Currently, GetErrorDescription only requires access to errorCode because it uses the file system to
find the description of the error based on what is stored in the files.

Consider this overload of GetErrorDescription:

static Maybe<string> GetErrorDescription(int errorCode, string logContents)
{
    var logLines = logContents.Split(Environment.NewLine,
        StringSplitOptions.RemoveEmptyEntries);

    var linePrefix = "Error description for code " + errorCode + ": ";

    return
        logLines
            .FirstOrNone(x => x.StartsWith(linePrefix))
            .Map(x => x.Substring(linePrefix.Length));
}

This method expects to find the error description in the log contents in a special line. For example, a line
might contain the following:

Error description for code 534: Database is down!

This GetErrorDescription overload requires the log contents as a parameter. We can pass contents to
this method in the following way:

static void Test5()
{
    var errorDescriptionMaybe =
        GetLogContents(13)
            .Bind(contents => FindErrorCode(contents)
                .Bind(errorCode => GetErrorDescription(errorCode, contents)));
}

Notice how the second call to Bind is now nested. I did this to be able to access the contents lambda
parameter.

Test4 was not great when it comes to readability. Test5 is even a bit less readable. Imagine if we have 10
operations instead of just 3. That will be even less readable.

Consider this now:

static void Test6()
{
    var errorDescriptionMaybe =
        from contents in GetLogContents(13)
        from errorCode in FindErrorCode(contents)
        from errorDescription in GetErrorDescription(errorCode, contents)
        select errorDescription;
}

Here in Test6, I am using LINQ query syntax to do the same thing as Test5.

Is it more readable now?

To make Maybe work with LINQ, I had to define some Select and SelectMany methods inside Maybe.
With this, Select works exactly like Map. SelectMany is similar to Bind but has an extra parameter:

public Maybe<TResult> SelectMany<T2, TResult>(
    Func<T, Maybe<T2>> convert,
    Func<T, T2, TResult> finalSelect)

If the first Maybe value contains a value (T), and the result of calling convert (Maybe<T2>) contains a
value (T2), then the finalSelect function is called to compute something from T and T2.

Consider this example:

var customer = from age in Maybe.Some(30)
               from name in Maybe.Some("Adam")
               select new Customer(name, age);

This is translated to:

Maybe<Customer> customer =
    Maybe.Some(30)
        .SelectMany(
            convert: (int age) => Maybe.Some("Adam"),
            finalSelect: (int age, string name) => new Customer(name, age));

In select new Customer(name, age), we need access to both name and age and this is what the
finalSelect function gives us. It also allows us to produce a value of a type that is different from the types
of the two involved Maybe objects.



I also added a Where method that allows us to use where in a query syntax:

static void Test7()
{
    var errorDescriptionMaybe =
        from contents in GetLogContents(13)
        from errorCode in FindErrorCode(contents)
        where errorCode < 1000
        from errorDescription in GetErrorDescription(errorCode, contents)
        select errorDescription;
}

Where returns an empty Maybe if the value does not meet the condition.
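A sketch of such a Where member for the struct Maybe could be as simple as:

// Sketch of Where as a member of the struct Maybe<T>:
// keep the value only if it satisfies the predicate.
public Maybe<T> Where(Func<T, bool> predicate)
{
    if (hasValue && predicate(value))
        return this;

    return new Maybe<T>();    // empty Maybe
}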

LINQ query syntax was designed to be extensible in the way we just saw. I might talk in detail about this in an upcoming article.

Conclusion:

In this article, I talked about Maybe; a container that may or may not contain a value.

The most important thing Maybe does is that it allows us to express when something is optional.

I showed two implementations of Maybe; one that uses a class and one that uses a struct. I think using a
struct is better because a struct Maybe models exactly two states (has a value and has no value), while a
class Maybe models another state which is null.

I talked about the Map and Bind methods and how they allow us to convert/transform the value inside
Maybe.

I also talked very briefly about what it means to be a Monad and gave an example of IEnumerable<T> as
a Monad.

Finally, I explained how we can use LINQ query syntax to work with Maybe in a more readable way.

Download the entire source code from GitHub at bit.ly/dncm43-maybemonad

Yacoub Massad
Author
Yacoub Massad is a software developer who works mainly with Microsoft technologies. Currently, he works
at Zeva International where he uses C#, .NET, and other technologies to create eDiscovery solutions. He
is interested in learning and writing about software design principles that aim at creating maintainable
software. You can view his blog posts at criticalsoftwareblog.com.

Thanks to Damir Arh for reviewing this article.

AZURE

Vikram Pendse

Building a Cloud Roadmap with Microsoft Azure

"The massive wave of 'Cloud' is bringing about a true 'Digital Transformation' at everyone's doorstep."

Cloud-enabled businesses are putting in their efforts and investments to go "Global, Scalable and Available". Right from small startups to big enterprises, everyone has understood the importance of Cloud, and some of these businesses are now taking a step ahead with Artificial Intelligence (AI) by using the intelligent services offered by Cloud providers like Microsoft Azure.

However, there are huge gaps in the following areas:

• Adoption of Azure as a cloud platform,
• Migration to Azure from on-premise or a competing cloud provider,
• Lack of awareness about migration tools,
• Services offered by Azure at a large scale.

This article attempts to address these gaps and concerns, and shares some advice and best practices to make your Microsoft Azure journey meaningful and profitable.


Building a Cloud Roadmap with Microsoft Azure
As a case study, we’ll take the fictitious “Foo Solutions Ltd.” as a reference.

The CXO board, IT Head and the Technical and Solutions Architect group of Foo Solutions have decided to
adopt Microsoft Azure as their cloud platform on the following basis:

1. They have a large .NET based application portfolio

2. Their current Datacenter contract is on the verge of expiring

3. They recently acquired a small firm who has a large Open Source Applications portfolio

4. They want to go global and reach out to their customers in different geographies

However, they don’t have any Microsoft Azure experts or Architects who can guide them through the
process.

So now, let us discuss a few things the team of Foo Solutions should know about and consider while
migrating their existing applications to Azure, and build new Cloud First applications in their due course of
adopting Microsoft Azure.

Building Migration Roadmap for Microsoft Azure


First, the decision makers should do an extensive exercise of bucketing their applications into the following
categories.

1. Low business impact, sizable userbase, no critical or sensitive data, and public facing.

2. Legacy Web applications (maybe some Classic ASP apps).

3. Applications which are stable, critical, having an impact on business, public facing and handling sensitive or critical data.

4. Applications which are on the verge of EOL a.k.a. end-of-life (like Silverlight apps which need to be migrated, or .NET 2.0 apps which need to be moved to the latest .NET framework).

5. Applications which need to be scrapped and re-written again. Potential "Cloud First" apps with minimum reusability of the existing app and tending towards a new design. Applications which need to embrace Microsoft Azure Services.

There are many assessment and migration tools offered by Microsoft and by 3rd Party Partners/Vendors of Microsoft. Ideally, the technical group at Foo Solutions should do a detailed analysis of each tool, accounting for the challenges they might face during migration, cost impact, business risks, downtimes etc.

Accordingly, a migration roadmap can be built. To ease this activity of assessment and migration, let us
discuss a few commonly used tools which will ease your initial assessment work and also help in the actual
migration to Microsoft Azure.

Many customers are still running Classic ASP based apps live in production, running their business as usual with a certain number of sizable users. If such customers are not re-writing their apps and wish to continue with the legacy platform, they can leverage the Azure IaaS platform to host their applications. Note that there is no out-of-the-box tool from Microsoft Azure which will give you assurance of migration, so you may have to do some configuration changes.

Azure PaaS does not support Classic ASP/Legacy workloads.

Azure App Service Migration Assistant


Many a time, people who are aware of the differences between Azure IaaS and Azure PaaS can't make a direct decision on what to opt for, and most importantly, can't validate the approach.

It is sometimes a difficult and challenging situation if the migration needs to be performed in a short time span. Hence, some quick automated assessment is required, which will reduce the risk of making the wrong choice between Azure IaaS and PaaS.

Microsoft addresses these concerns for its customers with a quick, handy and easy-to-use tool. In order to check whether your existing on-premise hosted (or any other datacenter hosted) application is suitable for moving to Azure PaaS or not, Microsoft has an App Service Migration tool, which helps you do the primary assessment and gives you insights about all the technologies used and whether they can be ported to Azure as an Azure App Service (which is Azure PaaS). This is a FREE tool available at https://appmigration.microsoft.com/ and you can also install it on your existing on-premise environment.

Figure 1: Azure App Service Migration Tool

It will scan your endpoint (the URL of your application, or your on-premise environment if you install the tool locally) and build a detailed report for you.

Figure 2: Azure PaaS ASP Migration Report

This is however currently available for .NET applications only, and Microsoft will soon support other applications as well. The assessment report is not just a Boolean result stating whether the application can be migrated or not; it does a detailed readiness check for the following points:

• Port Bindings
• Protocols
• Certificates
• Location Tags
• ISAPI Filters
• Application Pools
• Application Identity
• Authentication Type
• Application Settings
• Connection Strings
• Frameworks
• Configuration Error
• Virtual Directories

For more details, you can refer to the detailed metadata information mentioned here
https://appmigration.microsoft.com/readinesschecks

Migrate your SQL database to Microsoft Azure with Microsoft Data
Migration Assistant
This is one of the popular tools (also known as “DMA Tool”) to migrate your on-premise SQL database
instance to Azure SQL Server or SQL instance on Azure VM, accessible from an on-premise network.

Like the App Service Migration Tool mentioned earlier, this tool also does an assessment, gives details of blocking issues and lists the unsupported features. It also accounts for breaking changes and deprecated features.

In order to run this tool, you need to have the sysadmin role assigned to you. This is also a FREE Tool and
can be downloaded from here - https://www.microsoft.com/en-us/download/details.aspx?id=53595.

Figure 3: Azure Data Migration Assistant Tool

Besides assessment, it allows you to migrate from your on-premise instance to Azure SQL, Azure SQL Managed Instance or SQL on an Azure VM.

Note: If you are running SQL Server 2008 for your applications/business, kindly check the end of life (EOL)
announcement for SQL Server 2008 and the newly announced “Azure Hybrid Benefit” offer from Microsoft for SQL
Server 2008 migration. More details here - https://azure.microsoft.com/en-us/pricing/hybrid-benefit/

Migrating to Cosmos DB
Microsoft Azure Cosmos DB is a revamped version of the previously available DocumentDB with many more
new features and enhancements.



Cosmos DB is a truly globally distributed, multi-model database service available in an Azure PaaS flavor.
Cosmos DB is schema agnostic and no additional efforts are required to maintain indexing. It is highly
scalable and available with low latency and enterprise grade SLAs.

Cosmos DB is mainly used in apps that leverage a schema-agnostic model, like various IoT and e-Commerce solutions, and it has its own use cases. Now, in order to migrate to Cosmos DB, Microsoft provides another tool, similar to DMA, known as the Azure Cosmos DB Data Migration Tool.

Figure 4: Azure Cosmos DB Data Migration Tool

Azure Cosmos DB Data Migration Tool is an Open Source Project from Microsoft
https://github.com/azure/azure-documentdb-datamigrationtool and you can download it from here
https://www.microsoft.com/en-us/download/details.aspx?id=46436

The Azure Cosmos DB Data Migration Tool enables enterprises to move their collections/schemas from JSON, MongoDB, Azure Table, SQL and a few other data sources. Cosmos DB provides a rich set of APIs: SQL, MongoDB, Cassandra, Gremlin (Graph) and Table. So, in case you want to replace your current Azure Table Storage with Cosmos DB, you don't have to make much effort as most of your code remains as-is. This is because the Table API provides the same set of method signatures. So, with minimum configuration changes, you can swiftly move to Azure Cosmos DB. This is again a FREE tool from Microsoft.

Migrating with “Azure Migration” Service


This is a managed service hosted in Azure and is responsible for doing assessment of your on-premise
environment/datacenter.

Figure 5: Azure Migration Service for VMware

This however currently supports only the VMware environment. Hyper-V support is not made generally
available yet. It is an “agentless” discovery mechanism and it works by having a collector VM inside your
on-premise environment. Although this is a FREE service, the components getting provisioned using this
service will be charged as per their respective pricing.

Figure 6: Migration Discovery Collector

Post assessment, you can then perform the actual migration using different Azure Services. For SQL
databases, we have already discussed about DMA and you can also explore Azure Database Migration
service on the same lines.



Migrating SQL databases with “Azure Database Migration Service”

Azure Database Migration Service is a fully managed online service which enables migration of multiple databases. It still requires you to install DMA (Data Migration Assistant) to carry out the migration from on-premise to Azure SQL Server.

Figure 7: Azure Database Migration Service

Besides the tools and services we have seen so far, you can always create new infrastructure using the Azure CLI or PowerShell, and you can also try some popular 3rd party tools like Movere.

Securing your Azure workload


One very common question we all face during customer meetings and conversations with IT and Compliance experts is: "Is Azure secure?"

Although it may look like a very simple question and the obvious answer is “Yes”, you still need to have a
detailed conversation with customers or stakeholders to understand their requirements for Security.

Governance and Security always go hand in hand. So along with security, having governance is equally
important. Mechanisms like Role Based Access Control (RBAC) and Azure Policy will allow you to customize
these governance policies. Let us quickly go through the most common security challenges you face on any
cloud.

• Lack of Monitoring services
• Data movement in an insecure way
• Application Vulnerabilities
• Lack of Patch and Update management
• Lack of security-specific education
• Compromised users
• Lack of Role Based Access Control (RBAC)
• Wrong security assumptions

Security is a broader topic and has different flavors. In Microsoft Azure, we can bucket “Security” into two
parts – One is Application Security and the other is Environment Security (regardless of using IaaS or PaaS).
Data Security is also a subset of this conversation.

In Azure, data in transit is encrypted and hence it is secure. Stored data is partially secure, with the assumption that your data stores are not compromised. For example, data in Azure Storage is secure as long as Keys/SAS tokens are taken care of and not compromised. Data on VMs is secure as long as it is not accessed by unwanted users, whether in the public domain or even within the organization.

Azure Security Center


Many customers who don't have any cybersecurity or security experts on board are always concerned about choosing the best security services and applying them to their organizations.

The most generic, very powerful, but highly underestimated service, unknown to many customers, is the Azure Security Center.

It comes with two pricing models - “Free” and “Standard”. Check what features are covered under each
pricing model here.

Many customers have a perception that it just shows the status of VM updates and patches and puts
recommendations on top of them. However today, “Azure Security Center” is one of the very powerful single
dashboard services for your entire Azure workload which closely monitors your Azure components and
gives you a real time feed of the current status of your workloads. It also gives you a compliance score
using which you can ensure whether your workload and services configurations are aligned with your IT
policies and standards, or not.

Besides being a Security Dashboard of the entire subscription, it covers five major aspects –

• Policy and Compliance – Scoring against standard compliances like PCI, SOC, ISO etc.
• Resource Security Hygiene – Recommendations at Resource level (Compute, Identity, Networking etc.)
• Advanced Cloud Defense – Recommendations at VM and VNET level by providing Just-in-time VM Access and Adaptive Network hardening
• Threat Protection – Setting up custom alert rules
• Automation & Orchestration – Creating playbooks and integrating logic apps



Figure 8: Azure Security Center Dashboard

Figure 9: Security Center Regulatory Compliance

Azure Security Center pricing is based on the pricing model tier you choose i.e. Free and Standard. Once you
enable Azure Security Center, it starts collecting the necessary data from your Azure components. To know
more about Data Privacy and data collection policies, do read Azure Security Center documentation before
opting.

Web Application Firewall (WAF)
You can configure Web Application Firewall (WAF) inside your application gateway. This enables you
to validate your application against OWASP Top 10/Mod Security Rules (ver. 2.2.9 and 3.0). This web
application firewall also works for workload deployed with Classic mode deployment along with ARM.

Figure 10: WAF Dashboard

It also protects your application from DDoS attacks. We already have Azure DDoS Protection as a separate service in Azure, but it is expensive compared to WAF. WAF provides you with real-time protection with a Detection and a Prevention mode. Detection Mode is usually turned on in the Dev/Test phase, and if we keep logs on, we can capture more details. Prevention mode is usually turned on for the production phase. In case of any attack, it returns a 403 error.

Azure Front Door Service (aka AFD)


Azure Front Door (AFD) service has built-in WAF and DDoS Protection.

Note: There is a separate “Web Application Firewall” managed service for Azure Front Door, so avoid the name
conflict of WAF built inside Application Gateway, against managed service of Web Application Firewall for Front
Door. AFD also has traffic manager capabilities with low latency features. So based on latency, it automatically
manages these requests. Also note that AFD has a dedicated designer, unlike WAF.



Figure 11: Azure Front Door Designer

In the frontend host, you can configure your app, and in backend pool, the requests are routed based on
latency by AFD.

You can configure routing rules as per your business requirements. Traffic Manager and AFD can run in
parallel and you can also replace Traffic Manager with AFD for web apps.

Note that AFD can route only to public endpoints, so while designing the architecture, you need to make a call on which to opt for out of WAF, Front Door and Traffic Manager, based on the scenarios you are dealing with. AFD can certainly be a good choice when you have origins in multiple regions or globally distributed users, and performance is key.

Azure Sentinel (Currently in “Preview”)


Azure Sentinel is a cloud-native Security Information and Event Management (SIEM) tool from Microsoft.

It provides state-of-the-art analytics with minute details of different Azure service components through a set of rich connectors. It has a small built-in Case Management board (a very small flavor of ticketing systems like Zendesk) which allows you to investigate security incidents and issues by assigning them to the respective users.

Figure 12: Sentinel Dashboard

With different data connectors, it captures and displays all the data as single point dashboard or SIEM
dashboard.

Figure 13: Sentinel Data Connectors

If you are familiar with Log Analytics – OMS (Operational Management Suite), the dashboard of connectors
is pretty much the same visually. It provides built-in queries and gives detailed RCA in case of any threats.



Figure 14 shows an example of Security Threats detected by Sentinel from a country and you can also see
the attack details and attempt description along with the IP in Figure 15.

Figure 14: Sentinel showing malicious attacks

Figure 15: IP Address and other details of the attack

The Case management allows you to assign a particular incident to your Users (Users of Azure Portal with
appropriate Roles in place). To hunt down the issue, the Hunting option gives you a decent number of built-
in queries which you can run.

Figure 16: Azure Sentinel Case Management

So along with other monitoring tools like Log Analytics (OMS) and Application Insights, Azure Sentinel serves the purpose of a true cloud-native SIEM tool.

General security guidance for Azure hosted workload


Azure VMs (IaaS) can be protected by the following measures –

• Applying NSGs (Network Security Group) on Subnet or at VM level to control Inbound and Outbound
traffic by providing IP range and rules

• Installing Antimalware and Antivirus and regularly patching them

• Blocking Ports which can be a threat and not needed to be exposed to other Azure Services or public
traffic. RDP can be blocked and if someone still needs to do RDP on VM for any administrative work,
then make use of Jump Server

• Use of appropriate DMZ and making use of 3rd party firewalls like Barracuda

• Azure RBAC and Policies in place for better control and governance

Azure PaaS (mostly App Service Model) hosted apps can be protected by the following measures –

• Applying WAF (Web Application Firewall) to protect your applications



• Enable Threat Protection for Azure SQL DBs

• Manage SAS Tokens and Keys effectively for Azure Storage and keys of other APIs

• Implement Multi-Factor Authentication for applications

• Implement AD Authentication to enforce policies

• Ensure to classify your data (Public Vs Confidential) and accordingly choose appropriate data source and
protect the same

• Use Azure Key Vault to store secret keys (including passwords of Azure VMs)

• Ensure to run OWASP Top 10 testing for your application and align as per OWASP Top 10 policies

• Restrict IP address by adding your resources to Virtual Network

• Use Azure DDoS protection and Azure Pen Testing to ensure the highest level of security for your application

With this, we have covered the major items for Foo Solution Ltd. and provided guidance for their Migration
approach, Security of the applications and Cloud components.

Now let us discuss some reasons why organizations fail in their Cloud Migration journey, and how it impacts
adoption.

Common reasons of Failures and Extra Costs Incurred


during Azure Adoption and how to avoid these mistakes
Let us quickly understand some high-level points due to which enterprises/companies moving to Microsoft Azure fail to get the maximum Return on Investment (ROI) from the platform, or even take the decision to opt out.

Moving to the Cloud is not an easy decision and thus opting out is equally painful. But to avoid such painful
acts, I will enlist some preventive measures and points to consider in order to illustrate an ROI on your
Cloud investment.

We will basically bucket them into two categories (Technical and Non-Technical).

Technical Challenges
• Assuming Azure IaaS is the final solution and burning out – By not designing appropriate High Availability/Availability Zones, moving everything to Azure IaaS can be a disaster. We have discussed a couple of assessment tools in this article. Enterprises/companies should first do a thorough analysis using the tools available, and then make a clear choice of IaaS or PaaS. Usually PaaS is cheaper, more flexible, and easier to deploy and maintain.

• Lack of awareness of Azure Services and Tooling – Microsoft Azure is a dynamic cloud platform and
is continuously evolving with new features. Microsoft keeps adding and updating their value-added
services. After doing an assessment, Architects and Decision makers need to map Azure Services with their existing apps and see what is best suitable for them to achieve their business goals, as well as
customer satisfaction.

• Blindly Mapping Services with Competing Cloud Providers (e.g. Amazon AWS) – Many customers, while moving from Amazon AWS or while having a multi-cloud strategy, tend to map services head to head and assume it will work hassle free. I recommend doing a quick assessment, especially for Microsoft Azure where there is a plethora of services and wider choices available. For example, when mapping AWS Lambda, of course, the equivalent choice is Azure Functions since both are serverless offerings. But do revisit the requirement, since it may happen that what you are looking for can be served using Azure API Apps as well. This is just a high-level example, but besides this, "Cost" is also a factor, so ensure you are not blindly mapping services, but rather evaluating them for a better optimized use.

• Wrong Technical assumptions and SLA assumptions – Enterprises/companies are first required to
understand the different SLAs for different services in Azure. They also need to understand the terms
and conditions to achieve those SLAs and ensure the steps to be taken to fulfill them. “High Availability”
and “Maintenance of VMs” (especially in Azure IaaS) are the most misunderstood terminologies. For
Azure IaaS, do understand the “Shared Responsibilities” concept before opting for it.

• Wrong assumptions about Security – In an earlier section of the article, I mentioned that customers often ask "Is Azure Secure?" Do feel free to have a conversation with the customer and ask her/him a few questions of your own, like "Is your application secure in its current environment, and what measures have been taken to ensure its security?".

While this may open up Pandora’s box, you will get the opportunity to showcase some of the built-in
security measures or cloud native security services, Microsoft offers. This should lead to a good value
proposition. You need to understand and help the customer understand the following:

o Data Classification – Difference between Public Data and Private Data. How Microsoft treats data hosted in Azure. What the Microsoft policies are for the same (check the Microsoft Trust Center for more details - https://www.microsoft.com/en-us/trustcenter/cloudservices/azure).

o Help educate the customer on how Microsoft ensures enterprise grade security in its data centers across the world, and the compliances they have.

o Educate the customer to differentiate between Application Security and Cloud Security, and the different measures and services associated with each.

o Encourage customers to opt for Monitoring services (many customers bypass this recommendation to save a few $ in the monthly bills).

Non-Technical Challenges (Sales / Pre-Sales phase)


• Wrong mapping of services or service choices for saving the cost in proposals/RFPs.

• Lack of tools/questionnaire to capture the requirements for Azure (capturing Business goals, high level
details of current application/infrastructure etc).

• Poor understanding of Security and Compliance offerings from Azure.



• Poor knowledge of Azure cost calculator and different pricing models like:
o Cloud Solution Provider (CSP)
o Enterprise Agreement (EA)
o Pay-As-You-Go (PAYG) etc.

• Missing out non-functional requirements (NFRs).

• Lack of knowledge and wrong assumptions about 3rd Party Services integration in Azure.

• Lack of knowledge of different Support Model Microsoft offers for Azure.

• Poor knowledge of different product licensing especially in Hybrid or Lift and Shift migration scenarios
in Azure. Lack of knowledge of license reusability.

• Poor communication with ground Sales and Partner teams of Microsoft who can frequently share
publicly available value-added updates, and can share more insights.

Value added Services and Tools


We detailed out the Migration and Security aspects along with common challenges and reasons for failure
on Microsoft Azure. Now once you embrace Microsoft Azure, in order to illustrate a better ROI, here are some
services and tools which will not only ease your Azure journey, but will also add value to your customers.

App Configurator Service (Currently in “Preview”)


Many large enterprise applications have huge and complex configuration settings which play a key role in running these apps successfully. Maintaining them is a complex task, and overriding these settings is a challenge.

Being an enterprise friendly organization, Microsoft understood this aspect, and to resolve this problem they have introduced the "App Configurator Service", which is a single-stop repository to store all your key values and configurations securely. Just like you read your configuration files, you can read these settings with a set of APIs.

Figure 17: Azure App Configuration Explorer for storing Keys

You can also Import and Export them any time, and it is quite easy to manage them from the Azure Portal
too.
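As a rough illustration only (it assumes the Microsoft.Extensions.Configuration.AzureAppConfiguration preview package, a store connection string exposed through a hypothetical APP_CONFIG_CONNECTION environment variable, and a made-up key name), reading a setting from the service in a .NET Core app could look like this:

// Illustrative sketch, not the service documentation: assumes the
// Microsoft.Extensions.Configuration.AzureAppConfiguration preview package
// and a connection string in the (hypothetical) APP_CONFIG_CONNECTION variable.
using System;
using Microsoft.Extensions.Configuration;

public static class AppConfigurationSample
{
    public static void Main()
    {
        var configuration = new ConfigurationBuilder()
            .AddAzureAppConfiguration(Environment.GetEnvironmentVariable("APP_CONFIG_CONNECTION"))
            .Build();

        // Keys stored in the service are read like any other configuration value.
        Console.WriteLine(configuration["FooSolutions:Theme:BackgroundColor"]);
    }
}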

Cloudockit
Cloudockit is a multi-cloud third party solution to document your Cloud workload with in-depth details. This tool produces in-depth technical documentation and works well where you have compliance rules to share documents with customers, or maintain them for audit purposes. It is a quick tool which will save you the time you would otherwise spend building documentation manually.

Figure 18: Cloudockit Tool

Cloudockit supports Microsoft Azure (including multiple subscriptions).

This is a paid tool and you can take a free trial at cloudockit.com.

Choosing the right compute type and size


This is a complex and critical area and requires a lot of exercise and extensive experience. There is no documentation which selectively states that for 1 million users you should use this VM type, or that for a peak load of 50 million expected users you should use a different class of VM to get good performance.

The number of cores and memory usually can be picked with the following parameters:

• Nature of the business and availability in the multiple regions


• SLAs committed to end customers/consumers
• Ballpark number of users



• Data heavy or media heavy application
• Ballpark number of concurrent users

Although these are not precise measures, at an initial level they are good enough to pick the VM type and size. You always have scaling mechanisms like VM Scale Sets which can scale on demand.

Usually I have seen that many people do a Proof of Concept (PoC) followed by a Load Test and check the
overall performance before choosing the VM specs. Here is a quick chart which can help you choose a series
of VMs based on the nature of your business:

How to check/validate Website is accessible and running from multiple locations
Many a time, we hear customers complaining about availability issues of a site from their geography. For example, let us say a company hosts a site for their UK and APAC customers. Now the UK consumers complain that the site is not accessible to them and raise a ticket.

Now how do you validate this?

If it is a partner/dev team, you may ask to share the screen over Skype or Microsoft Teams and check or have
a screenshot sent over email. But for a production environment and for a large customer base where users
are consumers, it is not possible to do so. Traditionally, people would provision VMs in those regions or manipulate the geo/time to test.

Figure 19: App Insight Availability

This is not a standard or proven technique, especially in the Cloud era. Hence, if you have Application Insights applied to your application, you can check this with the "Availability" feature as shown in Figure 19, and can check or run the test from different regions.

Microservices
If at all you are considering a Microservices based design or architecture, then just make a note of the
following offerings which will help you to pick the correct service in Azure –

ACS – Azure Container Service (deprecated service – Kubernetes is now the industry standard, hence AKS is the new alternative)

AKS – Azure Kubernetes Service. Good for Linux/Open Source workloads

ASF – Azure Service Fabric. Good for Windows workloads. Ideal for non-containerized and Stateful apps

ASFM – Azure Service Fabric Mesh – Managed Service offering for ASF

DevOps
VSTS (Visual Studio Team Services) is now branded as Azure DevOps with many more new capabilities and
services. Azure DevOps enables you to build different dashboards, build CI-CD and CT pipelines with many
open source version controls and tools like Maven, Jenkins etc.



If you are new to Azure DevOps, the best way to get hands-on experience is to try out the FREE step by step
labs from Microsoft here - https://azuredevopslabs.com as well as check out some tutorials at
www.dotnetcurry.com/tutorials/devops.

Conclusion:
Microsoft Azure is one of the top leading Public Cloud Platforms, with unique offerings and true hybrid, secure and enterprise grade SLA offerings.

Azure gives good ROIs provided you align your migration and new application development strategy to it. I
hope this tutorial has helped you get over common misconceptions about Microsoft Azure.

The suggestions described in this tutorial will also help you avoid mistakes, illustrate a better ROI and
enable you to take decisions and build a long term, sustainable, profitable and secure Cloud roadmap for
your organization, to serve your customers and consumers better!

Vikram Pendse
Author
Vikram Pendse is currently working as Cloud Solution Architect in e-Zest Solutions Ltd.
in (Pune) India. He has 12+ years of IT experience spanning a diverse mix of clients and
geographies in the Microsoft Domain. He is an active Microsoft MVP since year 2008
and has currently received the MVP award in Microsoft Azure. Vikram is responsible
for building "Digital Innovation" strategy for e-Zest customers globally using Microsoft
Azure and AI. He is a core member of local Microsoft Communities and participates as a
Speaker in many Microsoft and other community events talking about Microsoft Azure
and AI.

Thanks to Tim Sommer for reviewing this article.

ASP.NET CORE

Daniel Jimenez Garcia

AUTHENTICATION IN ASP.NET CORE, SIGNALR AND VUE APPLICATIONS



Early on this year, I published an article describing how to integrate ASP.NET Core SignalR and Vue.js. Through said article, a minimalistic version of Stack Overflow was built in order to showcase how all these technologies can be integrated into a working application.

Setting up an application where all these technologies collaborate to create a simple but working version of one of the most popular (if not the most popular) website for developers, made for a long albeit interesting article. However, in order to maintain the focus of the article and keep its length under control, there was a big topic not covered in that article: Authentication!

In this article, we will revisit the same application built in the first article with the sole aim of discussing authentication. We will start with cookie based authentication, one of the most widely used options in web applications that many of you might be familiar with. Then we will investigate how an application can support different authentication schemes (or mechanisms). That will allow us to introduce a second authentication scheme based on JWT Bearer tokens, which is sometimes favored by SPA and mobile applications.

Once users can login into our site (Figure 1), we will see how SignalR seamlessly integrates with ASP.NET Core authentication, adding a simple live chat to the application (Figure 2). Along the way, we will also see how Vuex can be used to elegantly solve the problem of shared data in a Vue application!

There is no question about it, security is a complex topic. I hope you will find this article both useful and interesting, giving you enough information to understand the various authentication choices and tools available for your Vue and SignalR applications.

The companion source code for the article can be found on GitHub. If you want to follow along with the article, use the branch authentication-start which does not contain any of the authentication code changes.

Figure 1, users will now be able to login

Figure 2, a simple live chat will be created



COOKIE BASED AUTHENTICATION
The application we will use throughout this article provides users with a basic site similar to the popular
Stack Overflow site. It lets them create questions, provide answers, and even lets them vote on the Q&A.

As we saw in the previous article, SignalR was used to provide real-time updates of votes and answers.
However, authentication was nowhere to be seen! In this first section of the article, we will update the
application so users can login/logout and the site is effectively read-only for anonymous users, including
the SignalR hubs.

We will start following the most common authentication scheme: cookie-based authentication. At a very
high level, it works like this (an illustrative exchange of headers follows the steps below):

1. The browser sends user entered credentials (like username and password) for a server to validate.

2. If the server determines the credentials are valid, it generates an encrypted cookie used to identify the
user and includes a Set-Cookie header in the response sent back to the browser.

3. The browser receives the response and reads the Set-Cookie header, saving the cookie to the cookie jar.

4. Upon any further requests, the browser automatically includes the cookie within the requests.

5. The server inspects the received headers on every request, expecting to find the authentication cookie
it sent upon authentication. In order to authorize the request, it can decrypt and verify the cookie
contents.

Of course, things are a little more complicated. There are multiple ways a server can log in a user (not just
username and password; for example, OAuth with 3rd party services like Google or Twitter), and cookies
themselves need to be configured to be secure. (They shouldn’t be accessible to JavaScript, ideally sent over
HTTPS only and restricted to specific domains/sites).

ASP.NET Core Identity takes care of it all, providing a complete solution and a very convenient way of
adding authentication to ASP.NET Core web applications.

However, there is a problem with so much convenience, and that is, its controllers and views are geared
towards traditional server-side rendered applications! That is, Razor pages/views will render elements
like login forms; these in turn will send full-page POST requests to the controllers, which finally respond
with a redirect back to the home page.

This is nothing but the well-known Post/Redirect/Get pattern.

This might not work so well in the context of SPA applications like the one used in this article (unless you
can live with full page posts and redirects in your authentication pages). Ideally, the server will just provide
an authentication API, leaving the UX workflow to the client side of the SPA (the Vue application in our
case).

COOKIE BASED AUTHENTICATION API

We will then begin by introducing a new API into our server side ASP.NET Core application in order to
provide cookie-based authentication.

In order to maintain pace and focus, during this article we will leave aside 3rd party OAuth providers and
consider local accounts only (many of the problems and techniques you will face are similar, so you will be
better equipped once you understand local accounts! Who knows, it might be the subject of a future article?)

There are two ways we can build such an API.

We could use the scaffolding provided by ASP.NET Core Identity or we could manually write the controller
using the Cookie authentication services.

In this article, I will manually write the controllers due to the following reasons:

• The controller code generated by the scaffolder for login/logout actions assumes the application will
use full posts followed by redirects, instead of an API called from JavaScript.

• We need to write the client elements ourselves as part of our Vue application.

• I will not include code to manage accounts, only to login/logout.

However, there will be nothing wrong if you decide to use the provided scaffolding. Simply discard the
generated views and manually modify the generated controller code.

Enough about setting up the context, let’s start writing some code! The first thing we are going to do is to
enable the necessary services and middleware in our Startup class. First let’s define a new constant for
the Cookie authentication scheme:

public const string CookieAuthScheme = "CookieAuthScheme";

Next, add the following code to the ConfigureServices method. It will add the authentication services
using cookie-based authentication as the default scheme:

// Add Authentication services, using cookie-based auth as the default scheme
services.AddAuthentication(CookieAuthScheme)
// Now add and configure cookie authentication
.AddCookie(CookieAuthScheme, options =>
{
// Set the cookie name (optional)
options.Cookie.Name = "soSignalR.AuthCookie";
// Set the SameSite cookie parameter to none,
// otherwise it won't work with clients hosted on a different domain!
options.Cookie.SameSite = Microsoft.AspNetCore.Http.SameSiteMode.None;
// Simply return 401 responses when authentication fails
// as opposed to the default of redirecting to the login page
options.Events = new CookieAuthenticationEvents
{
OnRedirectToLogin = redirectContext =>
{
redirectContext.HttpContext.Response.StatusCode = 401;
return Task.CompletedTask;
}
};
})

Hopefully the code is self-explanatory.



Let me direct your attention to the SameSite cookie setting. Since ASP.NET Core 2.2, its default is lax, which
means the browser will only include the cookie in requests to different sites for GET requests.

If you remember from the earlier article, our client and server side applications are deployed
independently. Even during development, the client application runs in localhost:8080 while the server
runs in localhost:5100. This means we need to change the default SameSite setting or we won’t be able to
authenticate our app (as long as they are deployed as different sites). Of course, if you are deploying both
client and server side from the same site, you should leave this with its default lax value!

Now that all the required services are added and configured, update the Configure method to add the
authentication middleware right after the CORS middleware:

app.UseAuthentication();

This is the middleware that extracts user information from the request (using the configured scheme),
enabling the application to perform authentication challenges, for example when adding the [Authorize]
attribute.
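For reference, a minimal sketch of how that ordering might look inside the Configure method; the CORS policy name shown here is an assumption, while the hub route matches the one used by the client later in the article:

public void Configure(IApplicationBuilder app)
{
    // CORS must run before authentication so cross-site requests are allowed through
    app.UseCors("CorsPolicy");   // policy name is illustrative
    // Populates HttpContext.User from the cookie (or the JWT token added later)
    app.UseAuthentication();
    // Hubs and controllers can now rely on the authenticated user
    app.UseSignalR(routes => routes.MapHub<QuestionHub>("/question-hub"));
    app.UseMvc();
}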

Before we continue, notice how we are not adding the Identity services that provide the functionality to
create, retrieve and validate user accounts. There is good documentation on how to add the Identity services,
but doing so would add significant noise to the article (it relies on using a database and Entity
Framework, neither of which is used by our sample application).

As you will see, when we look at the AccountController implementation, we will simulate the Identity
functionality by manually validating user credentials and manually creating the ClaimsPrincipal
instances.

Let’s finish the server side changes by adding a new AccountController that provides the new /account
API with login and logout endpoints:

public class LoginCredentials


{
public string Email { get; set;}
public string Password { get; set;}
}

[Route("[controller]")]
public class AccountController : Controller
{
[HttpPost("login")]
public async Task<IActionResult> Login([FromBody]LoginCredentials creds)
{
// We will typically move the validation of credentials
// and return of matched principal into its own AuthenticationService
// Leaving it here for convenience of the sample project/article
if (!ValidateLogin(creds))
{
return Json(new
{
error = "Login failed"
});
}
var principal = GetPrincipal(creds, Startup.CookieAuthScheme);
await HttpContext.SignInAsync(Startup.CookieAuthScheme, principal);

return Json(new
{
name = principal.Identity.Name,
email = principal.FindFirstValue(ClaimTypes.Email),
role = principal.FindFirstValue(ClaimTypes.Role)
});
}

[HttpPost("logout")]
[Authorize]
public async Task<IActionResult> Logout()
{
await HttpContext.SignOutAsync();
return StatusCode(200);
}

// On a real project, you would use a SignInManager<ApplicationUser>
// to verify the identity using:
// _signInManager.PasswordSignInAsync(user, password, lockoutOnFailure: false);
// With JWT you would rather avoid that to prevent cookies being set and use:
// _signInManager.UserManager.FindByEmailAsync(email);
// _signInManager.CheckPasswordSignInAsync(user, password, lockoutOnFailure: false);
private bool ValidateLogin(LoginCredentials creds)
{
// For our sample app, all logins are successful!
return true;
}

// On a real project, you would use the SignInManager<ApplicationUser>


// to locate the user by its email and build its ClaimsPrincipal:
// var user = await _signInManager.UserManager.FindByEmailAsync(email);
// var principal = await _signInManager.CreateUserPrincipalAsync(user)
private ClaimsPrincipal GetPrincipal(LoginCredentials creds, string authScheme)
{
// Here we are just creating a Principal for any user,
// using its email and a hardcoded “User” role
var claims = new List<Claim>
{
new Claim(ClaimTypes.Name, creds.Email),
new Claim(ClaimTypes.Email, creds.Email),
new Claim(ClaimTypes.Role, "User"),
};
return new ClaimsPrincipal(new ClaimsIdentity(claims, authScheme));
}
}

This is very similar to the code you might have seen so far in scaffolded account controllers. The most
important bits are two lines that actually perform the login and logout functionality:

• In the login method, await HttpContext.SignInAsync(Startup.CookieAuthScheme, principal); uses the
Cookie scheme we configured earlier in the Startup class in order to generate a Cookie and include it in
a Set-Cookie header in the HTTP response.

• In the logout method, await HttpContext.SignOutAsync(); uses the Cookie scheme and includes
another Set-Cookie header in the HTTP response that instructs the browser to remove the cookie.



There are also a few differences worth discussing when compared against the code scaffolded by
ASP.NET Core Identity:

• The controller actions do not return a ViewResult nor a RedirectResult. Instead they return
JsonResult and StatusCodeResult! This is vital for the Vue application to call this API using
JavaScript.

• The login controller expects credentials to be received as part of the body, so the client can send them
as a JSON.

• As mentioned earlier, we are not using the Identity services like the SignInManager class to validate
user credentials and create ClaimsPrincipal instances. Instead we are replacing that with stub
functionality that will let anyone authenticate! Replace these methods with real implementations in
your application.

At this point, you should be able to test your API using any tool like Postman or cURL to send a JSON with
some username and password credentials. You should see the Set-Cookie header in the response:

Figure 3, testing the login endpoint using cURL
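For reference, a call along these lines should show that header (the credentials are made up; remember that in this sample any credentials are accepted, and the -i flag prints the response headers):

curl -i -X POST http://localhost:5100/account/login \
     -H "Content-Type: application/json" \
     -d '{ "email": "someone@example.com", "password": "any-password" }'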

That’s it, we have a simple but functional API that allows the Vue application to use JavaScript in order to
login and logout from the application.

Let’s turn our attention to the client side.

ADDING AUTHENTICATION FUNCTIONALITY TO THE VUE CLIENT

Now that our server provides a simple authentication mechanism, we need to update the Vue application
with the necessary elements so users can login by entering their credentials and logout if already
authenticated.

We will update the navbar to show a Login button on the top right. Upon being clicked, a modal will be
displayed for users to enter their credentials:

Figure 4, the first iteration of the login modal, opened from the Login button in the navbar

Once the user enters the credentials, we will send an AJAX request to the login endpoint, and will update
the navbar so it now displays the user name and a Logout button:

Figure 5, once logged in, the navbar will display the username and Logout button

This brings some interesting design questions, particularly around where the data identifying the currently logged in user should be stored.

• Should that be in the root App.vue component, passed as props to any child component like the navbar?

• What happens when authenticating in a modal component? Should events be propagated up across the
component tree until it reaches App.vue where the data is finally updated?

• How can any component know if the user is authenticated or not, for example in order to disable some
buttons?

Luckily for us, Vuex is the perfect answer for shared data like the current user context, data that belongs to
none and all components! Apart from being a great fit for this problem, you will see that using it is
quite straightforward. (If you want to learn more, check out one of my previous articles taking a closer look
at Vuex)

Now that we know what we will build and how, let’s begin.

The first thing we will do is to extract the main navbar from the App.vue component into its own
component. Create a new main-navbar.vue file inside the components folder, and copy the navbar from
App.vue into the <template></template> section of the component.

Then import the new main-navbar component inside the App.vue script section:

import MainNavbar from './components/main-navbar'


export default {
name: 'App',
components: {
MainNavbar
},

}

And finally replace the navbar in the App.vue template section with the component we just included: <main-navbar />.

Let’s now create the login modal component, where we will make use of bootstrap-vue’s modal component
(as in the existing modal for adding questions and answers):

<template>
<b-modal id="loginModal" ref="loginModal" hide-footer title="Login" @hidden="onHidden">
<b-form @submit.prevent="onSubmit" @reset.prevent="onCancel">
<b-alert show variant="warning">In this test app, any credentials are valid!
</b-alert>
<b-form-group label="Email:" label-for="emailInput">
<b-form-input id="emailInput"
type="email"
v-model="form.email"
required
placeholder="Enter your email address">
</b-form-input>
</b-form-group>
<b-form-group label="Password:" label-for="passwordInput">
<b-form-input id="passwordInput"
type="password"

v-model="form.password"
required
placeholder="Enter your password">
</b-form-input>
</b-form-group>
<button class="btn btn-primary float-right ml-2" type="submit">Login</button>
<button class="btn btn-secondary float-right" type="reset">Cancel</button>
</b-form>
</b-modal>
</template>

<script>
export default {
data () {
return {
form: {
email: '',
password: ''
}
}
},
methods: {
onSubmit (evt) {
// to be completed
},
onCancel (evt) {
this.$refs.loginModal.hide()
},
onHidden () {
Object.assign(this.form, {
email: '',
password: ''
})
}
}
}
</script>

Nothing too exciting here. Just some regular Vue code providing a modal, and an empty onSubmit method
which we will come back to later!

Before we can display the modal, it needs to be part of the Vue application. Follow the same steps we took
to include the main-navbar component inside App.vue. With this, the modal is ready to be displayed, all we
need is a button!

Update the main-navbar component and replace the form providing a sample search box with a form that
provides a Login button. This button uses bootstrap-vue’s v-b-modal directive to show the login modal we
just created and wired inside App.vue:

<form v-else class="form-inline my-2 my-lg-0">


<button v-b-modal.prevent.loginModal class="btn btn-secondary my-2 my-sm-0"
type="submit">Login</button>
</form>

If you run the application, you should see the modal appearing after clicking on the Login button. However,
we left the onSubmit method empty, so it will do nothing yet!



USING VUEX TO STORE THE SHARED AUTHENTICATION CONTEXT DATA

To implement the login functionality and keep the user context data, we will use Vuex. The very first thing
to do is to install the library. Run the following command from the root folder of the client application:

npm install --save vuex

Once installed, create a new folder named store inside the client/src folder. Create two new files, named
index.js and context.js. The first one, index.js, will be used to wire Vuex into the Vue application and to
compose together all the different Vuex modules:

import Vue from 'vue'


import Vuex from 'vuex'
import context from './context'

Vue.use(Vuex)

export default new Vuex.Store({


modules: {
context
}
})

The second file, context.js, will provide a module for everything related with the user context. Let’s start
with an empty module:

export default {
namespaced: true,
state: {
},
getters: {
},
mutations: {
},
actions: {
}
}

Don’t worry, we will fill it up as we build the functionality. Let’s start by providing the code necessary to
perform the login action. This code will send a request to the login endpoint of our server-side API, and will
save the returned user profile into the module state:

import axios from 'axios'

export default {
namespaced: true,
state: {
profile: {}
},
getters: {
isAuthenticated: state => state.profile.name && state.profile.email
},
mutations: {
setProfile (state, profile) {
state.profile = profile
},
},
actions: {
login ({ commit }, credentials) {
return axios.post('account/login', credentials).then(res => {

commit('setProfile', res.data)
})
},
logout ({ commit }) {
return axios.post('account/logout').then(() => {
commit('setProfile', {})
})
}
}
}

Our context module now provides:

• a login action that the login modal can use. This action will send a request to the server API and will
update the module state with the returned user profile

• a logout action that the navbar can use. Similar to the login action, this will send a request to the
server API and will clear out the current profile from the module’s state

• a profile property in its state, which any component can map. For example, the navbar can include a
Welcome, username message when logged in.

• an isAuthenticated getter that any component can map. This returns a Boolean indicating whether
the user is currently logged in or not, which will be widely used. For example, the navbar can use it
to render either a login or a logout button; while buttons that require authentication, can be disabled
based on its value.

Let’s finish with the login process. Update the login modal to map the login action of the module:

import { mapActions } from 'vuex'


export default {
...
methods: {
...mapActions('context', [
'login'
]),
onSubmit (evt) {
  this.login(this.form).then(() => {
    this.$refs.loginModal.hide()
  })
},
...
}
}

Here, we are just mapping the action from the context store, calling it when the form is submitted, and
closing the modal once the action succeeded.

Next, let’s update the navbar so it displays either of the following:


• a login button
• the username and a logout button

..based on the data currently stored in the context store. It is as simple as mapping the logout action, the
profile property of the state (so we can render the profile.name property) and the isAuthenticated
getter (so we can decide between the two options in the template).

Replace the login form of the template section with:

<span v-if="isAuthenticated" class="navbar-text mr-2">
  Welcome back, {{ profile.name }}
</span>

<form v-if="isAuthenticated" class="form-inline my-2 my-lg-0">
  <button class="btn btn-secondary my-2 my-sm-0" type="submit" @click.prevent="logout">Logout</button>
</form>

<form v-else class="form-inline my-2 my-lg-0">
  <button v-b-modal.prevent.loginModal class="btn btn-secondary my-2 my-sm-0" type="submit">Login</button>
</form>

Of course, the script section needs to be updated so it maps these elements from the context module
(otherwise they wouldn’t be available in the template):

import { mapGetters, mapState, mapActions } from 'vuex'

export default {
computed: {
...mapState('context', [
'profile'
]),
...mapGetters('context', [
'isAuthenticated'
])
},
methods: {
...mapActions('context', [
'logout'
])
}
}

That’s it, now you should be able to login and logout from the application. There is a little problem
however. As soon as you reload the page, you will appear as logged out, even if your browser still has the
auth cookie!

This is because our components rely on the state kept in the Vuex store, which is gone as soon as you
reload the page, since it is kept in memory. We will need to restore this context when our Vue application
starts!

In order to solve this problem, we will include a new endpoint in our server-side API to load the details
of the currently logged in user (Note how the properties will be empty in case the user isn’t currently
authenticated, so the isAuthenticated getter of the client application detects it):

[HttpGet("context")]
public JsonResult Context()
{
return Json(new
{
name = this.User?.Identity?.Name,

email = this.User?.FindFirstValue(ClaimTypes.Email),
role = this.User?.FindFirstValue(ClaimTypes.Role),
});
}

We will then provide a new Vuex action to call this endpoint and update the store profile state with its
response:

restoreContext ({ commit}) {
return axios.get('account/context').then(res => {
commit('setProfile', res.data)
})
},

Finally, we will call this action from the created hook of the App.vue component:

import { mapActions } from 'vuex'



export default {

created () {
this.restoreContext()
},
methods: {
...mapActions('context', [
'restoreContext'
])
}
}

After these changes, you should be able to login/logout and stay logged in when reloading the page.

That was quite a journey, but we now have the basic functionality wired end to end and we can start with
the more interesting parts!

SECURING THE APPLICATION


SECURING THE REST API

Since our users can now login and logout, we can start restricting parts of our application to authenticated
users. Let’s begin with the controller actions to create new questions, up/down vote them and create new
answers.

This is as simple as adding the [Authorize] attribute on all these endpoints. The attribute requires
users to be authenticated, so as long as users are logged in, the cookie will be sent and the attribute will
grant access to the controller endpoint.

Sadly, if you try to add a question, you will notice the site no longer works after we added the
[Authorize] attribute. Even when you are authenticated, the server returns a 401 response!

The problem lies again in the fact that client and server are running as different applications, one at
localhost:8080 and the other at localhost:5100. When this happens, browsers will not include cookies along
with AJAX requests, unless specifically instructed to do so.



So all you have to do is update main.js and set up the axios defaults so credentials (like cookies) are
included along with requests:

axios.defaults.baseURL = 'http://localhost:5100'
axios.defaults.withCredentials = true

Note, this topic is closely related with the topic of CORS! The CORS server side middleware was configured during
the first article to allow communication between the client and server applications.

Of course, if your application will end up deployed with the client and server on the same domain, you would
not need to worry about these issues. If this is your case, you can add a vue.config.js file that points the Vue
development server towards your ASP.NET Core server. This means, from your browser point of view, everything
will be running in localhost:8080 and you won’t have to face these cross-site issues.
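For completeness, such a vue.config.js could look roughly like the sketch below; the target URL assumes the ASP.NET Core server used in this article, and the file is not part of the sample project:

// vue.config.js at the root of the client application (illustrative only)
module.exports = {
  devServer: {
    // forward any request the dev server cannot resolve to the ASP.NET Core backend
    proxy: 'http://localhost:5100'
  }
}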

Now that we have solved this small hiccup, our application is working again!

Authenticated users can create questions, up/down vote them and create answers. However, anonymous
users can still attempt to perform these actions, just to get a 401 response in return.

We can very easily provide them with a better UX where buttons that trigger actions unavailable to
anonymous users, are disabled or invisible.

Remember the isAuthenticated getter we added to the context Vuex store? This is another use case
where Vuex shines.

For example, update the home.vue component so the add question button is disabled based on the
isAuthenticated getter. All you have to do is to map the getter and use it to set the disabled attribute
of the button:

// In the component template


<button v-b-modal.addQuestionModal :disabled="!isAuthenticated" class="btn btn-primary mt-2 float-right">
<i class="fas fa-plus"/> Ask a question
</button>

// In the component script


computed: {
...mapGetters('context', [
'isAuthenticated'
])

},

Rinse and repeat! You can follow the same approach to disable/hide any links that trigger actions available
only for authenticated users. (Feel free to check the final code on github)

SECURING THE SIGNALR HUB

Securing the SignalR hub is as simple as adding the [Authorize] attribute to either the Hub class or
individual Hub methods.

Let’s add the attribute to our QuestionHub class. That was easy, right?
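In code, that is a one-line change on the hub class (the attribute could equally be applied to individual hub methods instead):

[Authorize]
public class QuestionHub : Hub<IQuestionHub>
{
    // existing hub methods remain unchanged
}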

Well, hold on!

If you open an incognito window or logout and reload the page, you will notice an endless series of calls to
http://localhost:5100/question-hub/negotiate that end in 401.

Figure 6, having trouble connecting to the SignalR hub

This is because our Vue application will try to connect to the SignalR hub as soon as the application
starts, regardless of whether the user is authenticated or not. What’s worse, we included some code to
automatically reconnect, which ends in this endless loop.

We need to rethink this behaviour.

Since our QuestionHub now requires users to be authenticated we should then:

• On application startup, only start a connection with the hub if we are logged in
• Start a connection after a successful login action
• Stop the connection after a logout action

Luckily for us, the question-hub.js Vue plugin we created and the Vuex context module can easily play
together in order to achieve this behavior in a way that’s transparent for the rest of the application!
Let’s start with the question-hub plugin. Rather than automatically trying to establish a connection on
application startup, we will provide methods to start and stop the connection:

export default {
install (Vue) {
// use a new Vue instance as the interface for Vue components
// to receive/send SignalR events. This way every component



// can listen to events or send new events using this.$questionHub
const questionHub = new Vue()
Vue.prototype.$questionHub = questionHub

// Provide methods to connect/disconnect from the SignalR hub


let connection = null
let startedPromise = null
let manuallyClosed = false
Vue.prototype.startSignalR = (jwtToken) => {

}
Vue.prototype.stopSignalR = () => {
}

// Provide methods for components to send messages back to server


// Make sure no invocation happens until the connection is established
questionHub.questionOpened = (questionId) => {
if (!startedPromise) return

return startedPromise
.then(() => connection.invoke('JoinQuestionGroup', questionId))
.catch(console.error)
}
questionHub.questionClosed = (questionId) => {
if (!startedPromise) return

return startedPromise
.then(() => connection.invoke('LeaveQuestionGroup', questionId))
.catch(console.error)
}
}
}

As you can see, the questionHub can be created straight away, meaning that components can add listeners
to SignalR events regardless of whether we are connected or not. (If we are not connected, then they will
never receive an event through the questionHub).

We are also checking if the connection process has been started before trying to send an event through the
SignalR connection. Since the connection might be instantiated but not fully opened, this is a little more
complicated than checking if it is not null. We will see more once we implement the start/stop methods.
Implementing the start method is mostly a matter of moving the initialization code inside this method:

Vue.prototype.startSignalR = (jwtToken) => {


connection = new HubConnectionBuilder()
.withUrl(`${Vue.prototype.$http.defaults.baseURL}/question-hub`)
.configureLogging(LogLevel.Information)
.build()

// Forward hub events through the event, so we can
// listen for them in the Vue components
connection.on('QuestionAdded', (question) => {
questionHub.$emit('question-added', question)
})
connection.on('QuestionScoreChange', (questionId, score) => {
questionHub.$emit('score-changed', { questionId, score })
})
connection.on('AnswerCountChange', (questionId, answerCount) => {
questionHub.$emit('answer-count-changed', { questionId, answerCount })

})
connection.on('AnswerAdded', answer => {
questionHub.$emit('answer-added', answer)
})

// You need to call connection.start() to establish the connection
// but the client won't handle reconnecting for you!
// Docs recommend listening onclose and handling it there.
// This is the simplest of the strategies
function start () {
startedPromise = connection.start()
.catch(err => {
console.error('Failed to connect with hub', err)
return new Promise((resolve, reject) => setTimeout(() => start().
then(resolve).catch(reject), 5000))
})
return startedPromise
}
connection.onclose(() => {
if (!manuallyClosed) start()
})

// Start everything
manuallyClosed = false
start()
}

This is mostly the same code as before, with the addition of the manuallyClosed flag. Since we are adding
a stop method that we will invoke after the user logs out, we need to prevent the reconnecting code from
retrying forever, something we achieve by setting this flag to true.

Next, implement the stop method, which simply calls the connection stop method and clears our flags:

Vue.prototype.stopSignalR = () => {
if (!startedPromise) return

manuallyClosed = true
return startedPromise
.then(() => connection.stop())
.then(() => { startedPromise = null })
}

All that’s needed is for our context module to automatically call the startSignalR and stopSignalR as
a result of the login, logout and restoreContext actions! Notice how we added the methods to the
Vue.prototype earlier, so we can call them from the store:

import Vue from 'vue'



actions: {
restoreContext ({ commit, getters, state }) {
return axios.get('account/context').then(res => {
commit('setProfile', res.data)
if (getters.isAuthenticated) return Vue.prototype.startSignalR()
})
},
login ({ commit }, credentials) {
return axios.post('account/login', credentials).then(res => {
commit('setProfile', res.data)
}).then(() =>



Vue.prototype.startSignalR()
)
},
logout ({ commit, state }) {
return axios.post('account/logout').then(() => {
commit('setProfile', {})
return Vue.prototype.stopSignalR()
})
}
}

That’s it, the endless loop of 401 requests trying to connect to the hub when not authenticated, should be
gone now.

You will also notice the browser starting/stopping the connection as soon as you login/logout from the
app. Of course, the functionality provided by the hub should work as long as you are logged in, for example
open two browser windows, login in both and try to add new answers and votes.

Take a moment to notice how no component of our Vue application other than these two files had to
be modified!

ADDING A LIVE CHAT

After all this hard work, let’s have a little fun by adding a simple chat to our application! With all the
building blocks we have so far, this will require little work.

On the server side, all we need to do is to:

• add a new method to our IQuestionHub interface that defines the event received by clients when a
message is sent to the chat

• add a new method to the QuestionHub class that clients can invoke when they want to send a message
to the chat

These changes look like the following:

public interface IQuestionHub


{
...
Task LiveChatMessageReceived(string username, string message);
}

[Authorize]
public class QuestionHub: Hub<IQuestionHub>
{
...
public async Task SendLiveChatMessage(string message)
{
await Clients.All.LiveChatMessageReceived(Context.UserIdentifier, message);
}
}

Which means we are implementing a general chat where all messages are sent to everyone.
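As a side note, if you later wanted to scope the chat (for example, to the users viewing a particular question), a variation could send to a SignalR group instead of all clients. The sketch below is hypothetical and assumes the group name matches the one used by JoinQuestionGroup:

// Hypothetical variation, not part of the sample: a per-question chat message
public async Task SendQuestionChatMessage(string questionId, string message)
{
    // Assumes questionId is also the group name used by JoinQuestionGroup
    await Clients.Group(questionId)
        .LiveChatMessageReceived(Context.UserIdentifier, message);
}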

There is one little extra detail to take care of.

Notice the usage of Context.UserIdentifier in the method implementation. We basically want to include
the user name along with the event payload, so we can display the name of the user who sent each
message.

We need to tell SignalR how to extract this user identifier from the ClaimsPrincipal object that results
from a successful authentication. Implement the IUserIdProvider interface, for example we will use the
principal’s name, since we were setting it from the email address:

public class NameUserIdProvider : IUserIdProvider


{
public string GetUserId(HubConnectionContext connection)
{
return connection.User?.Identity?.Name;
}
}

Then include this as part of the ConfigureServices method of the Startup class:

services.AddSingleton<IUserIdProvider, NameUserIdProvider>();

That completes the server-side part.

On the frontend, let’s start by updating the question-hub.js plugin with the new listener for the
LiveChatMessageReceived event, and the new method to invoke the SendLiveChatMessage hub method:

connection.on('LiveChatMessageReceived', (username, text) => {


questionHub.$emit('chat-message-received', { username, text })
})
...
questionHub.sendMessage = (message) => {
if (!startedPromise) return

return startedPromise
.then(() => connection.invoke('SendLiveChatMessage', message))
.catch(console.error)
}

Next let’s create a new modal where the users can see the messages received and send new messages. Add
a new live-chat-modal.vue file inside the components folder with the following contents:

<template>
<b-modal id="liveChatModal" ref="liveChatModal" hide-footer title="Live Chat"
size="lg" @hidden="onHidden">

<div class="bg-light messages-container">


<ul v-if="messages.length" class="list-unstyled container">
<li v-for="(message, index) in messages" :key="index" class="row my-2">
<span class="col-3">
{{ message.username === profile.name ? 'You' : message.username }}
</span>
<vue-markdown
:class="{'col-9': true, 'text-muted': message.username === profile.name}"
:source="message.text" />



</li>
</ul>
<p v-else class="text-muted text-center">
Welcome to the chat...<br />
Say hi!
</p>
</div>

<b-form class="border-top mt-2 pt-2" @submit.prevent="onSendMessage">


<b-form-group label="Your message:" label-for="messageInput">
<b-form-textarea
id="messageInput"
v-model="form.message"
placeholder="What do you have to say?"
:rows="2"
:max-rows="10">
</b-form-textarea>
</b-form-group>
<button class="btn btn-primary float-right ml-2" type="submit">Send</button>
</b-form>
</b-modal>
</template>

<script>
import { mapState } from 'vuex'
import VueMarkdown from 'vue-markdown'

export default {
components: {
VueMarkdown
},
data () {
return {
messages: [],
form: {
message: ''
}
}
},
computed: {
...mapState('context', [
'profile'
])
},
created () {
// Listen to answer changes from SignalR event
this.$questionHub.$on('chat-message-received', this.onMessageReceived)
},
beforeDestroy () {
// Make sure to cleanup SignalR event handlers when removing the component
this.$questionHub.$off('chat-message-received', this.onMessageReceived)
},
methods: {
onMessageReceived ({ username, text }) {
this.messages = [...this.messages, { username, text }]
},
onSendMessage (evt) {
this.$questionHub.sendMessage(this.form.message)
this.form.message = ''
},
onHidden () {

Object.assign(this.form, {
message: ''
})
}
}
}
</script>

<style scoped>
.messages-container{
max-height: 450px;
overflow-y: auto;
}
</style>

While it might look scary, it is mostly presentation! Logic-wise, there is not much going on here.

The component starts with an empty array of received messages. It then listens to chat-message-received
events, adding them to the array of received messages. Whenever the user clicks on the send
button, it calls the sendMessage method exposed by the hub plugin.

It’s important to note that the component will be receiving messages and updating its array regardless of
whether the modal is actually visible or not! Let’s update App.vue again to include this new modal as part
of its template, and finally update the home.vue component with a button to show the modal:

<button v-b-modal.liveChatModal :disabled="!isAuthenticated" class="btn btn-secondary mt-2 mr-2 float-right">
  <i class="fas fa-comments"/> Live chat
</button>

That’s all that is required to add a functional chat to your application! Feel free to expand on it and add
more functionality like private chats or a list of connected members!

JWT BEARER AUTHENTICATION


We now have a fully functional application where users can login and access secured APIs and SignalR
hubs, implemented using Cookie based authentication.

While this might be ideal in many scenarios, some people might want/need to use JSON Web Tokens,
particularly those in the context of SPAs and mobile applications. If this sounds new to you, don’t worry,
there are plenty of articles out there comparing both options (like this one or this one), apart from the
usual suspect questions on Stack Overflow.

I will leave aside (the article is already quite long as it is!) design considerations like when to use JWT
instead of Cookies, where to securely store them or how to refresh the tokens, leaving these questions for
you to answer based on your needs and context. However, I want to provide an example that uses JWT so
you can see what this means in practical terms for SignalR and Vue.

ALLOWING THE SERVER TO CHOOSE BETWEEN MULTIPLE AUTHENTICATION SCHEMAS

Our server currently supports a single authentication scheme, the Cookie based one. However, ASP.NET Core
supports multiple authentication schemas as long as we tell it how to choose between them:

public const string JWTAuthScheme = "JWTAuthScheme";



services.AddAuthentication(CookieAuthScheme)
// Now configure specific Cookie and JWT auth options
.AddCookie(CookieAuthScheme, options =>
{

// In order to decide the between both schemas
// inspect whether there is a JWT token either in the header or query string
options.ForwardDefaultSelector = ctx =>
{
if (ctx.Request.Query.ContainsKey("access_token")) return JWTAuthScheme;
if (ctx.Request.Headers.ContainsKey("Authorization")) return JWTAuthScheme;
return CookieAuthScheme;
};
})
.AddJwtBearer(JWTAuthScheme, options =>
{
// to be filled
});

We have basically added a second authentication scheme, the one tagged with the JWTAuthScheme
constant. We have then added the ForwardDefaultSelector to the default scheme (the
CookieAuthScheme) so the framework can choose the right scheme for each request. The logic we are
following is based on whether the request contains either of:

• The access_token query string parameter. This is where SignalR will include the token when
establishing connections

• The Authorization header. This is where our client application will include the token as part of AJAX
requests.

If any of those are found in the incoming request, then we select the JWTAuthScheme scheme. Otherwise
we choose the default CookieAuthScheme scheme.

Now we need to configure the JWT scheme:

• Define the key that will be used to sign the tokens

• Define how the token will be validated, for example based on its lifetime

Update the Startup class with:

// NOTE: you want this to be part of the configuration and a real secret!
public static readonly SymmetricSecurityKey SecurityKey =
new SymmetricSecurityKey(
Encoding.Default.GetBytes("this would be a real secret"));
...
.AddJwtBearer(JWTAuthScheme, options =>
{
options.TokenValidationParameters = new TokenValidationParameters
{
LifetimeValidator = (before, expires, token, param) =>
{

return expires > DateTime.UtcNow;
},
ValidateAudience = false,
ValidateIssuer = false,
ValidateActor = false,
ValidateLifetime = true,
IssuerSigningKey = SecurityKey,
};
});

Notice how we are defining the key using a publicly accessible constant. We will need to access the key
from the AccountController once we implement the actual code that logs in and generates a token. For
the purposes of this app, a hardcoded secret is fine, but in a real application, make sure this is a real secret
that lives in your configuration!
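For example, a slightly more realistic setup might read the secret from configuration instead; a minimal sketch, assuming an injected IConfiguration and a Jwt:SigningKey entry in your settings (both names are assumptions):

// In Startup, assuming IConfiguration is injected as _configuration
// and appsettings/user secrets/environment variables define "Jwt:SigningKey"
var securityKey = new SymmetricSecurityKey(
    Encoding.Default.GetBytes(_configuration["Jwt:SigningKey"]));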

JWT BEARER AUTHENTICATION API

With the changes made in the earlier section, our application will be able to authenticate and authorize
users as long as they include a valid token as part of their request.

However, how will the client application get hold of a token?

We need to provide a new endpoint in the AccountController that verifies the supplied credentials
and generates a token instead of a cookie. This is relatively straightforward to implement using the
JwtSecurityToken class and the same credentials configured for the JWTAuthScheme:

public class AccountController: Controller


{
// Same key configured in startup to validate the JWT tokens
private static readonly SigningCredentials SigningCreds = new
SigningCredentials(Startup.SecurityKey, SecurityAlgorithms.HmacSha256);

private readonly JwtSecurityTokenHandler _tokenHandler = new


JwtSecurityTokenHandler();

...

[HttpPost("token")]
public async Task<IActionResult> Token([FromBody]LoginCredentials creds)
{
// We will typically move the validation of credentials
// and return of matched principal into its own AuthenticationService
// Leaving it here for convenience of the sample project/article
if (!ValidateLogin(creds))
{
return Json(new
{
error = "Login failed"
});
}
var principal = GetPrincipal(creds, Startup.JWTAuthScheme);
var token = new JwtSecurityToken(
"soSignalR",
"soSignalR",
principal.Claims,
expires: DateTime.UtcNow.AddDays(30),
signingCredentials: SigningCreds);



return Json(new
{
token = _tokenHandler.WriteToken(token),
name = principal.Identity.Name,
email = principal.FindFirstValue(ClaimTypes.Email),
role = principal.FindFirstValue(ClaimTypes.Role)
});
}
}

This should look very similar to the existing login endpoint, with the difference of generating a token that
is manually included in the JSON response as opposed to generating a Cookie sent in a Set-Cookie response
header.

Notice how no new logout endpoint or even changes to the existing logout endpoint, are needed. That is
because for a client to logout when using tokens, they just need to forget that token.

USING JWT WITH THE VUE APPLICATION

Now that our server can use either a Cookie based authentication scheme or a JWT based one, let’s update
our Vue application so users can choose in which way they want to login.

Figure 7, login modal letting you choose between cookies and JWT authentication

Of course, you would never ask the user to make such a decision in a real application, but this will come in very
handy for the purposes of this application, which is to demonstrate how these features work!

Start by updating the login-modal.vue component, so it includes the radio buttons to select the
authentication scheme and passes the selected one down to the context store’s login action:

// On the template
<b-form-group label="Authentication mode">
<b-form-radio-group
id="authMode"
v-model="authMode"
:options="authOptions"/>
</b-form-group>

// On the script
export default {
data () {
return {
...
authMode: 'cookie',
authOptions: [
{ text: 'Cookie', value: 'cookie' },
{ text: 'JWT Bearer', value: 'jwt' }
]
}
},
methods: {
...
onSubmit (evt) {
this.login({ authMethod: this.authMode, credentials: this.form }).then(() => {
this.$refs.loginModal.hide()
})
},
...
}
}

Now the interesting part begins.

The login action of the context store needs to send a request to either the /account/login or the /account/
token endpoints based on the authMethod property. It also needs to store the received token in case of the
JWT scheme, since we will need to include it as part of the Authorization header on future AJAX requests.

state: {
profile: {},
jwtToken: null
},
mutations: {
...
setJwtToken (state, jwtToken) {
state.jwtToken = jwtToken
}
},

actions: {
...
// Login methods. Either use cookie-based auth or jwt-based auth
login ({ state, dispatch }, { authMethod, credentials }) {



const loginAction = authMethod === 'jwt'
? dispatch('loginToken', credentials)
: dispatch('loginCookies', credentials)

return loginAction.then(() => Vue.prototype.startSignalR())


},
loginCookies ({ commit }, credentials) {
return axios.post('account/login', credentials).then(res => {
commit('setProfile', res.data)
})
},
loginToken ({ commit }, credentials) {
return axios.post('account/token', credentials).then(res => {
const profile = res.data
const jwtToken = res.data.token
delete profile.token
commit('setProfile', profile)
commit('setJwtToken', jwtToken)
})
},
...
}

With these changes, you should now be able to successfully login using the JWT scheme. If you inspect the
HTTP requests in your browser developer tools, you should see the token included as part of the response:

Figure 8, response from a successful login using the JWT scheme

Unfortunately, this isn’t enough. If you then try to upvote a question, you will notice a 401 response from
the server. That’s because even though we received a JWT token and we stored it inside our context store,
we are not sending it back along with AJAX requests.

In order to do so, we will use an axios interceptor. This will be invoked by axios on every request, and it will
inspect the context store for a JWT token. In case there is a token, it will automatically add the Authorization
header to the request. Update main.js with this interceptor:

axios.interceptors.request.use(request => {
if (store.state.context.jwtToken) request.headers['Authorization'] =
'Bearer ' + store.state.context.jwtToken
return request
})

Notice the format of the header is the constant Bearer followed by the token, separated with a space. That
is exactly what the JwtAuthScheme expects on the server! After these changes, you should now be able
to interact with the site without receiving 401 responses (except for the SignalR hub, which we haven’t
updated yet).

Let’s now make a quick change to the logout action, so we don’t send a request to the server when using the JWT
scheme, and we also delete the token from the store:

logout ({ commit, state }) {


const logoutAction = state.jwtToken
? Promise.resolve()
: axios.post('account/logout')

return logoutAction.then(() => {


commit('setProfile', {})
commit('setJwtToken', null)
return Vue.prototype.stopSignalR()
})
}

If everything went right, your users should now be able to login and logout when using the JWT scheme.
However, they will notice something odd.

As soon as they reload the page, they are logged out! The explanation is simple, the token is stored in the
vuex store, and that information is gone as soon as you reload the page. We will need to store the token
somewhere that survives a simple page refresh!

NOTE: For our purposes, we will simply use local storage. However, you should know that this simple approach
has security drawbacks. If you plan on using JWT in your SPA, read more about the storage options.

Update the context store so the token gets saved and restored from local storage:

mutations: {
...
setJwtToken (state, jwtToken) {
state.jwtToken = jwtToken
if (jwtToken) window.localStorage.setItem('jwtToken', jwtToken)
else window.localStorage.removeItem('jwtToken')
}
},
actions: {
restoreContext ({ commit, getters, state }) {
const jwtToken = window.localStorage.getItem('jwtToken')
if (jwtToken) commit('setJwtToken', jwtToken)

return axios.get('account/context').then(res => {


commit('setProfile', res.data)
if (getters.isAuthenticated) return Vue.prototype.startSignalR()
})
},
...
}

Now authentication with JWT should work as expected, even after page reloads. Let’s wrap up by making
sure we can connect to the SignalR hub when using JWT.

USING JWT WITH SIGNALR

By now, most of the heavy lifting has already been done. The server can authenticate users with a valid JWT
token and the Vue application is able to login using the JWT scheme.



Allowing the SignalR hub to work with JWT is pretty straightforward. Remember the
ForwardDefaultSelector, where one of the conditions was to look at the query string for a parameter
named access_token?

We need to update the JwtAuthScheme, which by default only knows to look at the Authorization header, so
it also looks at this parameter. Update the AddJwtBearer segment of the ConfigureServices method in
the Startup class:

.AddJwtBearer(JWTAuthScheme, options =>


{
...
options.Events = new JwtBearerEvents
{
OnMessageReceived = ctx =>
{
if (ctx.Request.Query.ContainsKey("access_token")){
ctx.Token = ctx.Request.Query["access_token"];
}
return Task.CompletedTask;
}
};
});

The final part is for the client application to include this query string parameter as part of the SignalR
connection when using JWT! First update all the calls to the startSignalR method made from the context
store, so any current JWT token is provided:

Vue.prototype.startSignalR(state.jwtToken)

Then update the startSignalR method itself. We just need to include an accessTokenFactory property
as part of the HubConnectionBuilder in case we received a non-empty token:

Vue.prototype.startSignalR = (jwtToken) => {


connection = new HubConnectionBuilder()
.withUrl(
`${Vue.prototype.$http.defaults.baseURL}/question-hub`,
jwtToken ? { accessTokenFactory: () => jwtToken } : null
)
.configureLogging(LogLevel.Information)
.build()

...
}

This way the HubConnectionBuilder will include the access_token query string parameter only when a
valid token has been passed, which will only happen when users are authenticated using the JWT scheme!

And this concludes the tutorial. Your application should be fully functional regardless of whether you
choose to use JWT or Cookies as the authentication scheme.

Conclusion

ASP.NET Core is flexible enough so you can implement authentication using different schemes in a way
that’s transparent to the rest of the application.

True, the documentation is mostly geared towards using the default Identity implementation with Cookies,
but the flexibility is there, and it is relatively easy to find resources such as these blog posts that have been
created by the community to fill the gap.

It is no wonder then that SignalR, built on top of ASP.NET Core, inherits this flexibility. Adding
authentication to SignalR hubs and clients is a simple step once you have already added authentication to
the rest of your application.

Finally, Vue and its ecosystem, with libraries like Vuex, does a great job at being flexible and extensible
itself! As demonstrated in the article, cross-cutting concerns like authentication can be added
cleanly and with very little repercussion to most components other than the root ones!

As a final note, I understand there is a lot to process in the article, bringing together quite a few different
tools in order to build a working application, all of it mixed with a hairy subject like authentication.

Don’t feel discouraged if it didn’t make complete sense the first time.

Download the entire source code from GitHub at bit.ly/dncm40-so-signalr

Daniel Jimenez Garcia


Author

Daniel Jimenez Garcia is a passionate software developer with 10+ years of experience.
He started as a Microsoft developer and learned to love C# in general and ASP MVC in
particular. In the latter half of his career he worked on a broader set of technologies
and platforms, and these days he is particularly interested in .NET Core and Node.js. He
is always looking for better practices and can be seen answering questions on Stack
Overflow.

Thanks to Dobromir Nikolov for reviewing this article.

AZURE DEVOPS

Subodh Sohoni

USING AZURE DEVOPS FOR CI / CD OF ASP.NET CORE APPLICATION TO AZURE KUBERNETES SERVICE (AKS)



In this tutorial, I am going to do a walkthrough of the process to continuously integrate and deploy an
ASP.NET Core application (Docker support enabled), to Azure Kubernetes Service (AKS) using Azure DevOps.

In a professional software development process, one would like to completely automate the process to
create and deploy even containerized applications.

Azure DevOps provides the tools and services to do so.

The flow of what we are going to do in this walkthrough is as follows:

Concepts related to Containers and Kubernetes


Let us start with some concepts related to containerized applications:

1. A Container is a standard unit of software that packages up code and all its dependencies so the
application runs quickly and reliably from one computing environment to another. Containers are
lightweight virtualization units which run only one process.

2. Docker containers are standalone, executable packages of software that include everything needed
to run an application: code, runtime, system tools, system libraries and settings. Although technically
possible, running multiple processes in a Docker container is discouraged, to keep areas of concern
separate. Containers are encouraged to use services provided by the host operating system, which can
be Linux or Windows, through the Docker engine.

Image Ref:
https://www.docker.com/resources/what-container

3. Docker hosts are machines / VMs that run Docker engine and support Docker containers.

4. Container images are the basis of containers. An Image is an ordered collection of root filesystem
changes and the corresponding execution parameters for use within a container runtime. An image
typically contains a union of layered filesystems stacked on top of each other. It is like a template of a
container.

5. Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of
containerized applications. Kubernetes groups the containers that make up an application into
logical units called pods for easy management and discovery. Pods are hosted on VMs called nodes.
Kubernetes manages nodes and pods. Ref: https://kubernetes.io/ and
https://www.dotnetcurry.com/microsoft-azure/1434/kubernetes

6. Azure Kubernetes Service (AKS) provides the support for implementation of Kubernetes in Azure. I will
be providing more details about this later.

7. Azure Container Registry (ACR) is an Azure service which maintains the repository of container images
in Azure.

Creation of Azure resources

Let us now start our walkthrough by creating the resources in Azure.

If you do not have an Azure Account, you can create a trial account from https://portal.azure.com. With
this 30-day free trial account, you will get a credit of USD 200 (or equivalent in your local currency for
supported countries) that you can use to create resources in Azure for trial purposes, similar to this
walkthrough. After the credit and trial period, you can take a decision to continue by converting the trial
into a paid account.

We will first create an Azure Kubernetes Cluster (AKS Cluster).

This is a cluster of nodes, which are virtual machines that will be hosting the containers. When we create
AKS cluster, along with the nodes, another VM is created in the cluster. That VM is a Cluster Master which
manages the nodes in the cluster.

To create the AKS Cluster, open the Azure Portal and login to the Azure Account. Then create a new resource
of the type Kubernetes Service which opens the wizard to create Kubernetes Cluster.

Provide the name of the cluster, resource group in which this cluster is to be created, the size of the nodes
and the number of nodes in the cluster.

The node size that is automatically chosen is Standard DS2 v2. This size, in my opinion, is ideal for running the containers in a professional environment. The default number of nodes selected for the cluster is three. Since this is just an example and not for professional use, I changed that to one node.

Note: The number of nodes to be created depends upon the expected load and the containers that will need to be created.
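For readers who prefer scripting over the portal, an equivalent cluster can also be created with the Azure CLI. This is only a sketch: the resource group name ("research", used later in this walkthrough), the location and the node settings are assumptions you should adjust to your own environment.

$ az group create --name research --location eastus

$ az aks create --resource-group research --name AKSDevOpsDemoCluster \
    --node-count 1 --node-vm-size Standard_DS2_v2 --generate-ssh-keys

If no Service Principal is supplied, az aks create creates one for the cluster, similar to what the portal wizard does for us in the next step.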



Figure 1: Create Kubernetes cluster

In this wizard, we will also ensure that a Service Principal is created for the AKS Cluster.

A Service Principal is like a service account that gets created for a service in Azure Active Directory. The
purpose of creating a Service Principal is to grant permissions to the service to access some resources.
Whatever permissions are granted to a Service Principal, are automatically transferred to the service it
represents.

In this case, a Service Principal will be created for the AKS Cluster and we will give it permission to pull

images from the Azure Container Registry that we will create later in the walkthrough.

Figure 2: Service principal creation

This Service Principal will be given a default name that we can check from Azure Active Directory of our account, under Registered Applications. It is also possible to create a Service Principal in advance and assign it to the AKS Cluster at the time of creation, but we are not using that route, as it is easier to let the wizard create the Service Principal and link it with the AKS Cluster while the cluster is being created.

The name of the cluster that I created is AKSDevOpsDemoCluster and the Service Principal for that is
AKSDevOpsDemoClusterSP-xxxxxxxxxx.

The next resource that we will create in Azure is the Azure Container Registry (ACR). This is a registry which will store the Docker container images that we create. This resource is also created from the Azure Portal.



I have named the ACR aksdevopsdemo while creating it, so its login server is aksdevopsdemo.azurecr.io. It is created in the same resource group as the AKS Cluster.

Once the ACR is ready, we edit the Access Control (IAM) of that ACR to add the Service Principal of the AKS Cluster to the Contributor role. This role has permission to push and pull images from this ACR.
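The same two steps can also be scripted with the Azure CLI as a rough sketch. The registry name matches the one used in this walkthrough, while the Service Principal application id below is a placeholder that you would look up under Registered Applications in Azure Active Directory:

$ az acr create --resource-group research --name aksdevopsdemo --sku Standard

$ az role assignment create --role Contributor \
    --assignee <appId-of-AKSDevOpsDemoClusterSP> \
    --scope $(az acr show --name aksdevopsdemo --query id --output tsv)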

Figure 3: ACR Creation and RBAC

Creation of Team Project and git repository

Now that the resources in Azure are ready, we will create a Team Project in Azure DevOps Account.

If you do not have an account in Azure DevOps, you can create a free account from https://dev.azure.com. To
use or create the Azure DevOps account, I strongly suggest using the same email address that was used to
create an Azure Account. This will make the authentication process seamless to access Azure resources from
Azure DevOps.

If you have to use different email accounts for Azure and Azure DevOps, then you can follow the guidelines provided at https://docs.microsoft.com/en-us/azure/devops/pipelines/library/connect-to-azure?view=azure-devops to create a connection to Azure.

The name of the team project that I have created is AKSDevOps. For the sake of consistency, I suggest that
you also do so while following this walkthrough.

When the team project is created, I selected git as the version control which created a git repository on
Azure DevOps. This git repository is going to work as a remote repository for all team members who are
developing the application. We can now clone this repository to create a local repository.

Clone operation can be executed in Team Explorer which is part of Visual Studio. To start with, in the Team
Explorer, connect to the newly created team project and click the Clone button once the connection is
established.

Figure 4: Connect to team project and clone repository
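Alternatively, the clone can be done from the command line. The URL below follows the standard Azure DevOps pattern and assumes the default repository name matches the team project:

$ git clone https://dev.azure.com/<your org name>/AKSDevOps/_git/AKSDevOps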

Creation of Application with Docker Support and changes in code
Once the remote repository is cloned to create a local repository, let us create an ASP.NET Core Application
in that local repository. We are creating an ASP.NET Core Application as we are going to use Docker support
to containerize the application.

Docker containers are predominantly Linux-based, and .NET Core applications run on Linux because .NET Core is cross-platform. At the time of creating the project, we will add it to the local git repository
created in the previous step. To ensure that it is added to the same repository, click the “New” link under
Solutions section of Team Explorer.

Provide a name to the project, AKSDevOpsApp and select the template of “ASP.NET Core Web Application”
from the sub-section of “.NET Core” under the section of “Visual C#”.

Figure 5: Create new ASP.NET Core Project



While creating that project, we will select “Enable Docker Support” so that the created application will
contain the Dockerfile.

Figure 6: Add Docker support to the application

Once the project is created, we will open the Dockerfile and make the following changes:

Figure 7: Dockerfile code

My observation is that the Dockerfile as created by the project creation wizard does not work as it is. It can
be taken as a starting point and modified. I have experimented with various options and finally came to the
conclusion that the code shown in Figure 7 always works.
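For reference, a working multi-stage Dockerfile for an ASP.NET Core 2.x application generally follows the shape sketched below. Treat this only as a sketch: the base image tags, project path and assembly name are assumptions that should match your own project, and the exact file used in this walkthrough is the one shown in Figure 7.

# build and publish in the SDK image
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
WORKDIR /src
COPY . .
RUN dotnet publish AKSDevOpsApp/AKSDevOpsApp.csproj -c Release -o /app/publish

# run the published output in the smaller ASP.NET Core runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2 AS runtime
WORKDIR /app
COPY --from=build /app/publish .
EXPOSE 80
ENTRYPOINT ["dotnet", "AKSDevOpsApp.dll"]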

We will also make some more changes to the code of the application. Open the index.cshtml from the
solution explorer and change the code of the “Welcome” message.

Figure 8: Code change in index.cshtml

We will also add a YAML file named deployment.yml. This file will be used to deploy the image from the ACR to the AKS Cluster. It will be created in a folder named "Manifest" (this is only for convenience, to keep the deployment file separate from the application code; it is not a technical necessity). This file will be passed by the build to release management as part of the artifact, so that it can be used at the time of deployment. The code of that file will be somewhat like this:

Figure 9: Deployment.yml code

This YAML file specifies the image to be used, replicas of the pods to be created, ports to be opened on the
container and LoadBalancer service to expose the containers to the outside world.
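As a rough sketch, such a manifest combines a Deployment and a LoadBalancer Service. The image name, labels, replica count and port below are assumptions chosen to mirror this walkthrough; the actual file used here is the one shown in Figure 9.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: aksdevopsapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: aksdevopsapp
  template:
    metadata:
      labels:
        app: aksdevopsapp
    spec:
      containers:
      - name: aksdevopsapp
        image: aksdevopsdemo.azurecr.io/aksdevopsapp:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: aksdevopsapp
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: aksdevopsapp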

If necessary, open the Settings tab in Team Explorer and click the link to Add a “.gitignore” file so that binary
files and their folders like bin and obj will be omitted from changes that are ready for commit. Now we can
Commit code to the local repository and Push that to the remote repository on Azure DevOps.
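From the command line, the equivalent of the Commit and Push buttons would roughly be the following (the commit message is, of course, just an example):

$ git add .
$ git commit -m "Add ASP.NET Core app, Dockerfile and deployment manifest"
$ git push origin master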



Figure 10: Commit and push code to the remote repository

Create a build pipeline

In the next step, we will create a new Build pipeline in which we will create an image based upon the
Dockerfile and then Push it to the ACR that we have created earlier.

To create the new Build pipeline, open the page of your organization in Azure DevOps at https://dev.azure.com/<your org name>/AKSDevOps and then select Builds from the Pipelines section in the left pane.

Select the link “Use the classic editor” to create the pipeline without YAML.

Figure 11: Create new build pipeline

On the subsequent page, select the “Docker container” template.

Figure 12: Select Docker container template for build pipeline

This template will provide tasks to create the container image and to push it to the ACR.

Before making changes in the parameters of the tasks, open the Pipeline node. On this node, change the name of the pipeline to AKSDevOpsDemoBuild and then select the Hosted Ubuntu 1604 agent pool. Since we are creating Docker images, we use this pool because its agents support the actions to create and push those images.

Let’s now set the values for parameters of the “Build an image” task. Use the following guidelines to set
those values:

Enter similar values in the Push an image task.



We need to create an artifact and put the Deployment.yml file in that artifact. To add this file in the artifact,
begin by copying that file into the folder on the agent which maps to the artifact.
This folder is represented by a built-in build variable called “Build.ArtifactStagingDirectory”. Let’s add a
“Copy File” task where

• the source folder is the Manifest folder, selected by drill down in the version control repository,

• contents are the file named deployment.yml and

• destination is folder represented by the variable “Build.ArtifactStagingDirectory”.

Figure 13: Copy files task details

The last task that we will add is “Publish Artifact” where we need not make changes in the parameters. It
will publish the artifact named “drop” in which the deployment.yml file is present.

After configuring these tasks, we will "Save and Queue" this build. When the build succeeds, the image is created as configured in the Dockerfile and pushed to the ACR, and the artifact mentioned earlier is created and published.

Create a release pipeline

We now have to deploy the created image on AKS Cluster. We will do that using the Release Management
service.

Let’s create a new release pipeline from Pipelines – Releases section in the left-hand pane of the Azure
DevOps page. In this release pipeline we will add only one stage (for the sake of this example), but normally
there may be multiple stages in a release pipeline.

For this release pipeline, we will select the “Deploy to Kubernetes cluster” template.

Figure 14: Select Deploy to Kubernetes cluster template for release pipeline

This template by default adds one stage, let’s call it “QA”. We will add the build pipeline definition that we
have created earlier, as the artifact source.

Figure 15: Select artifact source and note artifact alias

Let’s now set the parameters for the task that is added by the template.

That task is the "kubectl" task. Before we set other parameters, let's set up a connection to the AKS Cluster. This is done through the wizard that is started by clicking the New button for the Kubernetes Service Connection parameter.



We will base our connection on the Azure Subscription which is common between Azure and Azure DevOps.
Once we select the Azure subscription in this wizard, we can select the AKS Cluster that we had created
earlier. We will use the “default” namespace. Click OK to create a new connection.

Figure 16: Create connection to AKS Cluster

We will set the parameters for this task as shown here:

Figure 17: Kubectl task in release pipeline

Now create a release which will pull the image that we had built and deploy the containers in the pods on
a node in AKS Cluster. Let’s view those pods.

View pods and services

One excellent tool within the Azure Portal is called CloudShell.

It is a shell that can be accessed without leaving the portal (in the browser). Once we open it, we can execute either PowerShell or Bash commands on the command prompt. In this example, let's open the CloudShell in Bash mode. We will now connect to our AKS Cluster by using the command:

$ az aks get-credentials --resource-group research --name AKSDevOpsDemoCluster

In the above-mentioned command, "research" and "AKSDevOpsDemoCluster" are the resource group and cluster names, and may be different in your case.

The next command is to get a list of pods:

$ kubectl get pods

This command will list the pods that are created by Azure DevOps Release.

Another command is to view the services created:

$ kubectl get services

This command will list the services including the LoadBalancer service and the service that manages the
cluster.

Figure 18: Pods and services created by release

Once we get the External IP address of LoadBalancer service, we can browse to it to view the application.

Figure 19: Application running in the container



Issue related to CI/CD of image with “latest” tag

In this exercise, we have deployed the image that has the name aksdevopsdemo and the tag "latest". What we realize is that if we update the image by changing some code and building it again, the newly created image will have one tag that is the build number and another tag that is "latest".

If we try to redeploy the image with the tag "latest", it will not replace the image in the running containers. It becomes obvious that deploying an image with the "latest" tag is not a useful strategy if we want containers to re-pull the image without a break.

Update running containers with new image without service break
Let’s consider the option of deploying the image that has the tag of build id.

Every time the build executes to create the image, the build id will be different, and so will the tag given to that image. In fact, this is the standard behavior of the Docker image creation task in an Azure DevOps Build pipeline.

When redeployed, the image with the same name but a different tag will be used, and the running containers will have no problem pulling that image. We now have to ensure that the image pulled at deployment is the one with the latest build id.

Release management gets the image name and tag of the image to deploy from Deployment.yml file.
In this file, we will need to replace the tag “latest” with the ID of the build. This needs to be done every
time a new build artifact is passed to the release management service. We will need to update the
Deployment.yml file before the actual deployment takes place.

As part of the release, we will use the "sed" command to make an in-place change in that file to replace the word "latest" with the value of the build id. This will have to be done in the first step of the release, where we can access the build id and pass it on to a shell script which also needs to be part of the artifact.

Let’s write a shell script as follows:

Figure 20: Code of run.sh Bash script

This shell script accepts an argument. That argument is the Build Id passed from the release task. It
replaces the first instance of the word “latest”.

The path of the file is the path of the artifact that is passed to that release, so that the replacement takes place in the artifact itself. We will save this Bash script to version control in the Manifest folder as "run.sh" (the name can be different, but make sure to change it wherever we have used it).
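A minimal version of such a script could look like the sketch below. It assumes GNU sed (available on the hosted Ubuntu agent) and that run.sh and deployment.yml sit next to each other in the published artifact, which is the case here because both are copied from the Manifest folder; the real file used in this walkthrough is the one shown in Figure 20.

#!/bin/bash
# $1 is the Build Id passed in from the release task.
# Replace only the first occurrence of "latest" in the deployment manifest
# that sits in the same artifact folder as this script.
sed -i "0,/latest/s/latest/$1/" "$(dirname "$0")/deployment.yml"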

After pushing that to the remote repository, make a change in the build pipeline definition that we had
created earlier to also copy this file in “Build.ArtifactStagingDirectory”. This way, it gets added to the artifact
that is passed to the release.

Figure 21: Changes in the Copy files task in build pipeline

We will now add a task of “Bash” script execution to the release pipeline definition. Set the following
parameter values:

Figure 22: Task to execute created Bash script



The release can access the Build Id from the artifact that is passed to it. In this case, the variable Release.Artifacts._AKSDevOpsDemoBuild.BuildId takes the alias of the artifact provider, which is _AKSDevOpsDemoBuild, and gets the BuildId from it.

When we create a release, it will first run that Bash script on the agent. That Bash script will take the
deployment.yml file from the same artifact where the Bash script is, and replace the word “latest” with the
build id.

The deployment.yml is saved back in that artifact. It is used in the next step to do the deployment. As the
deployment proceeds, the image with the tag of the latest build id will be pulled by the running containers
and the application is updated.

Summary:

In this article, we have learnt many concepts and practices.

1. Create an Azure Kubernetes Service (AKS) Cluster with a Service Principal.


2. Create an Azure Container Registry (ACR) and add the AKS Cluster created in the earlier step to the
contributors role in the Access Control of ACR so that it can pull images from that ACR.
3. Create an application using ASP.NET Core technology that has Docker support.
4. Configure Docker containers creation using Dockerfile.
5. Configure deployment of containers to AKS Cluster using a YAML file.
6. Create a build pipeline definition to create an image based upon the configuration provided in
Dockerfile and push that to ACR that was created earlier.
7. Use Release Management service to do deployment to AKS Cluster using the YAML file.
8. Modify YAML file in artifact to reflect Build Id so that every time a new build is deployed, it will update
the running containers also.

Subodh Sohoni
Author

Subodh is a consultant and corporate trainer. He has overall 28+ years of experience. His
specialization is Application Lifecycle Management and Team Foundation Server. He is
Microsoft MVP – VS ALM, MCSD – ALM and MCT.

He has conducted more than 300 corporate trainings and consulting assignments. He is
also a Professional SCRUM Master. He guides teams to become Agile and implement SCRUM.
Subodh is authorized by Microsoft to do ALM Assessments on behalf of Microsoft.
Follow him on twitter @subodhsohoni

Thanks to Gouri Sohoni for reviewing this article.

AZURE DEVOPS

Imran Siddique

Azure DevOps
Search
- Deep Dive
Search service of Azure DevOps makes it easy to locate information across all your
projects, from any computer or mobile device, using just a web browser.



In this article, we will do a deep dive to see:

• how the Azure DevOps Search service is designed,
• how it functions,
• the different capabilities it has to offer and
• how DevOps Search can be leveraged to build more
impactful enterprise apps.

Introduction
The Azure DevOps service consists of dozens of microservices communicating with each other to give the
user a consistent and feature rich experience.

Azure DevOps Search (Search) service is one of the microservices of Azure DevOps that powers its search
functionality.

The Search service provides support for searching different entities of Azure DevOps like code, work items, wiki and packages, to name a few. Its unique proposition comes from providing semantic relevance for query results and deep filters during query. More examples and information can be found in the search documentation.

Figure 1: A sample code search result window

The Search service is a completely hosted solution that supports a scale of billions of documents, running into petabytes of index data spread across multiple Azure regions and Elasticsearch clusters. The platform also
supports critical Enterprise-ready features like honoring security permissions, multi-tenancy and GDPR
compliance.

In this article, we will talk more about how the platform is architected to support both the search
functionality as well as the service fundamentals at scale.

Why Azure DevOps Search?
Four of the biggest needs that Azure DevOps Search faced were:

1. Availability: Search is such an integral part of the product capability that there is an inherent need to
ensure it is almost always available.

2. Scale: With the large scale of users and usage in Azure DevOps, scale was always at the back of our
minds (I am from the Azure DevOps Search team) while we were designing the architecture.

3. Performance: While we were achieving high availability and tremendous scale, we could not compromise on performance, and wanted most of the important queries to return results in sub-second time.

4. Complexity: The way users search for code and work items is very different from your average search request. There are several complex scenarios that search supports. For example, in code search you can find a code file based on a comment you wrote in it by just typing "comment:todo", and in work item search you can find a bug, user story or feature based on its assignment, state, creation time and thousands of other filters that you would associate with a work item.

Azure DevOps Search - Architecture


Search service platform is based on a common framework layer, that powers all the other Azure DevOps
services. The index data is maintained on Elasticsearch indices.

Search service has two major processing pipelines – Indexing pipeline and Query pipeline.

Indexing pipeline is the set of components that come together to support pulling content from other Azure
DevOps services, processing it to add annotated semantic information, and pushing it to the Elasticsearch
indices.

Figure 2: Azure DevOps Search higher level architecture

The Query pipeline provides a REST endpoint for the Azure DevOps portal and external tools to search. It
performs key functions such as identity validations, authorization checks and retrieving the relevant and
accessible content from the Elasticsearch index. The Elasticsearch indices themselves are hosted on Azure
VMs that handle the ingestion and query of documents.



Indexing pipeline

Figure 3: Index Pipeline workflow

Crawlers
The first stage of indexing is to trigger the crawling of the contents (code in case of code search, work-item
details in case of work-item search and so on) once an account is onboarded. Onboarding in context of Azure
DevOps account means enabling the search functionality for the users of the account.

Multiple types of crawlers are available and hosted in the Search service – each entity has its own crawler implementation, and for some entities there is more than one implementation.

Crawling happens on Azure job agents, which are Azure worker roles where all our background processing
happens. The crawling either happens in chunks (split across multiple executions of the same job) or in a
single execution of the job. This is to ensure fairness across multiple accounts running in parallel and
ensuring resources are utilized efficiently.

Incremental crawling is triggered using notifications whenever there are changes in the system. For
instance, in case of code search whenever there are pushes or changes on a given repository, there is a
notification which is sent from Version Control service of Azure DevOps to Search service. Search service
then reacts to the notification and retrieves the contents of the code files that are changed.

Incremental crawling can also be processed in chunks. Once the contents are crawled, they are processed by the next layer – parsing.

Parsers
Once the contents are available from the crawler, the documents are passed through the parsing layer. In this phase, the documents are parsed from different angles to extract more meaningful information, so that it can be indexed as well.

For example, in case of code search, Search service uses language specific parsers for C, C++, Java and C#
that generate the partial Abstract Syntax Tree (AST) for each file in the repository during indexing time.
Parsers take the bare files and generate semantic token information for the file and add them to the
document content that needs to be indexed.

For example, when a C++ code file is being processed, the class, method tokens within the file are also
parsed, and added to the document mapping information for Elasticsearch. The document mapping for code
files in Elasticsearch today holds not just the content of the file, but also a per-term code token information.

Parsers run out-of-process to ensure isolation (for security reasons) as well as the ability to host language
specific runtimes. Parsing failures cause fallback to text parsing to ensure that the file is still text
searchable.

Feeders
Once the parsed content is available, the documents are fed into the Elasticsearch indices via the feeders.
Feeders convert the parsed content into an Elasticsearch compatible mapping, batch multiple parsed files
into an Elasticsearch indexing request, and index them.

To ensure that the cluster doesn’t get overwhelmed with a huge set of indexing requests at the same time,
there are throttling mechanisms to control the indexing throughput across multiple job agents.

Query pipeline
The search experience is available for Azure DevOps users both from the Azure DevOps portal as well
as the REST APIs. The Azure DevOps portal experience is built on top of these REST APIs exposed by the
Search service.
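As a rough illustration of what such a call looks like (the host, route and api-version shown here are assumptions and may differ from the current public API), a code search query is a POST with a small JSON body:

POST https://almsearch.dev.azure.com/{organization}/{project}/_apis/search/codesearchresults?api-version=5.1-preview.1

{
  "searchText": "comment:todo ext:cs",
  "$top": 25
}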

The incoming search requests go through multiple processing stages like validations and transformation.
The request is first validated to ensure the information available is correct, supported and meets all the
security/throttling criteria. The request is then transformed and optimized so that it carries information about the index and shard where the search will happen, the filters that need to be applied, the boosting that will be carried out, the fields that will be retrieved, and so on.

Figure 4: Query Pipeline workflow



Search service supports full fidelity read permission on searches!

This means even a preview of search results is not shown if the user doesn't have permission to them. This is supported for queries that are scoped across multiple projects and repos as well. The results returned from Elasticsearch are filtered to ensure that only the results the users have access to are returned.

The Search service supports queries scoped at different levels, like account/project for most of the entity types and, in some cases, even more granular scopes like repository/path. It also supports searching across multi-selected entity instances at the same time.

Figure 5: A typical Azure DevOps Elasticsearch cluster

Cluster topology
Elasticsearch indices are stored in Azure Premium storage blobs and supported via nodes hosted on
Windows based Azure IaaS VMs.

Each Elasticsearch cluster contains 3 master nodes, 3+ client nodes (depending on the indexing and query load on the cluster) and 3+ data nodes (depending on the size of the indices). Our largest clusters have 80+ data nodes and have an index utilization (amount of indexed data that is queried) of ~70%.

To ensure Elasticsearch runs smoothly with Azure, Elasticsearch’s node allocation awareness attributes are
configured to honor the availability sets (fault domain and update domain) within Azure. These settings
ensure that a given set of primary + replica is always available during unplanned outages or planned
upgrades.
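Purely as an illustration of what such a configuration looks like (the attribute names below are assumptions, not the team's actual settings), allocation awareness is expressed in elasticsearch.yml roughly as:

# on each node, tag the node with the Azure fault/update domain it runs in
node.attr.fault_domain: fd0
node.attr.update_domain: ud0

# tell the cluster to spread primaries and replicas across those attributes
cluster.routing.allocation.awareness.attributes: fault_domain,update_domain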

DATA NODES: 8 CORES, 28 GB RAM, 56 GB SSD
MASTER NODES: 2 CORES, 7 GB RAM, 14 GB SSD
CLIENT NODES: 2 CORES, 7 GB RAM, 14 GB SSD

Indices have primary + 2 replicas, with a quorum based write consistency model. Index refresh is set to a
minute.

The Search service has Elasticsearch clusters deployed in multiple Azure regions, at least one cluster per
each region supported by Azure DevOps. This helps ensure data sovereignty is honored, as the index data
for accounts within a given Azure DevOps region is stored within the same region.


Index/Data model
The mapping for documents inside Elasticsearch contains some information that is similar for all
entity types in the Search service and some which are entity specific. All the documents have metadata
information like account/project they belong to. Each entity can have additional metadata information like
the repository a document belongs to, in case of code.

Each document also has a set of information that uniquely identifies it from other documents. For instance,
work-items have a work-item Id associated with them that uniquely identifies a work-item in an account.
Similarly, a combination of branch name, file path, file name and content hash uniquely identify the code
file in a given repository of Azure DevOps account. Document Id of the Elasticsearch document is built
using some of the information mentioned above.

The mapping also contains entity specific information that helps in enabling the search experience for the
given entity type.

For instance, in case of code search, the code token information for a given term (say class “Foo”), along
with its positional information is stored as a term vector payload in the index. The entire content of the file,
including operators, is stored in the file content to support full text search.

Routing
The default index routing ensures that data in a single entity instance goes to the same shard, and
wherever possible, data from multiple entity instances of a given account go to the same shard as well.
This doesn’t suffice for very large entity instances or accounts, which have millions of documents that can’t
sit on the same shard. Based on different heuristics, when certain entity instances are deemed large, those



repositories are split across multiple shards, to ensure shards don’t become too big.

Handling growth
A single account typically sits in a single index on Elasticsearch split across multiple shards of that index
based on size. It is also possible for some very large accounts to have multiple indices dedicated to them.

At any given point of time, there are a few tens of indices that are marked “active”, so new accounts can be
indexed into them. Based on certain account/entity instance size heuristics, indices are deemed “full” and
are closed to addition of new accounts. Existing accounts continue to grow within the same indices once
assigned. When there are no active indices available, new set of active indices are created automatically to
support new account additions.

Periodically, jobs run to determine if some shards/indices are really “large” because of high growth of
accounts on that shard, and selected accounts on that shard are marked for “move” to a new index to
ensure they don’t become a bottleneck and influence other accounts on that index. These moves are then
orchestrated by the trigger/monitor job that handles re-indexing, to ensure the number of moves at any
given point of time is regulated/throttled.

Monitoring is built in to indicate capacity crunch or spare capacity. This helps react by increasing/
decreasing nodes in the cluster.

Search Service Fundamentals


Multi-tenancy

Event/Job processing pipeline

The indexing pipeline is a shared job execution model across multiple accounts that are hosted within
the Search service. Jobs are scheduled per entity instance (for example – repository in case of code, project in
case of work-item and so on) to handle any complete/incremental changes that are detected for that entity
instance.

To ensure index consistency, the event processing pipeline has robust locking semantics that ensures that
only a single operation (indexing, metadata change processing etc.) is running for a given entity instance at a
given point of time.

Metadata changes, addition of new projects and repositories are also controlled at a per account level, to
ensure semantic consistency of the account’s information.

Each entity treats its accounts differently, so the locking semantics don’t span across entities for the same
account. Indexing is typically done in a single job for an entity instance, but it can be dynamically expanded
to multiple parallel jobs (if the change to be processed or the entity instance itself is very large).

Resource Utilization Management


Multi-tenancy (support for multiple accounts within the same service) is handled throughout the indexing
pipeline, and inside Elasticsearch indices. The service has support for ensuring effective resource sharing
across accounts (basically avoidance of a single account hogging the entire pipeline, starvation etc.) through
job schedulers that look at the current job load, pending job queue and total available resources, before

allocating new job resources to an account for indexing.

Every job run also executes in a time-bound manner to ensure it doesn’t continue to hog resources while
starving another account for a very long time, yielding every so often to ensure that a job resource can be
allocated to another account if needed.

Similar mechanisms are applied at the entity level as well, to ensure jobs for a given entity type don't hog resources needed by jobs of another entity type.

Shared Indices
Inside the Elasticsearch indices, data across multiple accounts/entity instance is shared and stored in a
single index. This helps with reducing the total number of indices and shards (partitions) that need to be
managed and caters to many small accounts that don’t have a lot of data.

At the same time, for large accounts or entity instances, the Search service scales to support dedicated
indices, thus the effects of noisy neighbors are minimized. Large is determined as a heuristic. Shared indices
have a cap on the max number of accounts/entity instances that are accepted, to ensure there is room for
growth.

The indices at entity type level are different for each entity and the same is not shared across entity types.
This gives room for each entity type to have its own indexing and querying characteristics; also, how it
wants to group the accounts’ data for optimal query performance.

Monitoring and Deployment


Logs from Elasticsearch are pumped into the Azure core monitoring system and Microsoft's homegrown log analysis systems. Deployments are completely integrated into the Azure DevOps Release Management system.

Azure DevOps Search - Best Practices


Code Search
• You can use code type filters to search for specific kinds of code such as definitions, references, functions, comments, strings, namespaces, and more (see the sample queries after this list). You can use Code Search to narrow down your results to exact code type matches. This will be useful when all you want to do is quickly get to the implementation of (say) an API your code might be taking a dependency on!

• You can narrow your search by using project, repository, path, file name, and other filter operators. This
will help you achieve your desired results even faster. Start with a higher-level search if you don’t know
where the results would be and keep filtering till you have a subset of results to browse through and
work on.

• You can use wildcards to widen your search and Boolean operators to fine-tune it. This will ensure you
get to the results you desire even when you are not sure of the exact term you are looking for.

• When you find an item of interest, simply place the cursor on it and use the shortcut menu to quickly
search for that text across all your projects and files. This will help you find more information about an
item of interest faster and with minimal efforts.



• Similarly, you can also easily trace how your code works by using the shortcut menu to search for
related items such as definitions and references - directly from inside a file or from the search results.
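For instance, a few sample queries combining these capabilities could look like the following (the identifiers are made up; the filter names are the ones documented for Code Search):

def:TestHubDispatcher ext:cs proj:AKSDevOps
comment:todo AND ext:cs
Dispatch* repo:AKSDevOps

The first jumps straight to a definition in C# files of a given project, the second finds TODO comments in C# files, and the third combines a wildcard term with a repository filter.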

Work Item Search


You can use a text search across all fields to efficiently locate relevant work items. This will be useful when you are trying to (say) search for all work items that had a similar exception trace!

You can also use the quick in-line search filters on any work item field to narrow down to a list of work
items in seconds. The dropdown list of suggestions helps complete your search faster.
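For example, a query along the lines of the following (the a:, t: and s: shorthands are the ones documented for Work Item Search; the name is made up) narrows the results to active bugs assigned to a person that mention a specific exception:

a: "Jamal" t: Bug s: Active NullReferenceException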

Wiki Search
When you search from Wiki, you'll automatically navigate to wiki search results. Text search across the wiki
is supported by the search platform.

Know More
If you would like to see what Search looks like in action, you can watch the video here! In this video, Biju Venugopal (Principal PM Manager @ Microsoft) walks us through a demo of Search and talks through important aspects of the service.

Imran Siddique
Author
Imran Siddique is a software engineer specializing in software & distributed systems
development. He has over 11 years of experience designing and architecting different
Microsoft cloud services. Imran is passionate about distributed systems, designing at scale
and engineering improvements. Check out his LinkedIn profile at
https://www.linkedin.com/in/mohammadimransiddique/

Mahathi
Author
Mahathi is an engineering manager on the Azure DevOps Search team, leading the areas of Code Search, scale and resilience. Prior to this, she worked on real-time media
networking protocols, .NET framework at Microsoft and Apps for Business at Google. Her
passion is to build software that is elegantly designed, highly scalable and extensible.
She holds an MS in Computer Science from Stanford. Outside work, she loves music and
performs live shows with her family. More at www.linked.in/mmahathi

Thanks to Subodh Sohoni for reviewing this article.

ASP.NET CORE SIGNALR

Dobromir Nikolov

Since ASP.NET Core 2.1, Microsoft


has brought SignalR back to
life again. SignalR is a library
that allows you to add real-
time functionality to your web
applications. It supports both
“server to client” and “client to
server” communication. It does
all sorts of magic like handling
connection management, scaling
and automatically choosing the
best supported transport method.

INTEGRATION TESTING

Integration testing of
real - time communication in
ASP.NET Core using
Kestrel and SignalR
Integration testing is getting more and more popular amongst developers who
care about shipping quality products. Real-time functionality is now a norm
and is included in the requirements of modern web applications. Learn how
you can incorporate these two concepts by building a robust integration tests
infrastructure using SignalR and Kestrel.
If you’re not familiar with SignalR, I suggest going through the docs before continuing further, as the rest of
the article assumes that you have at least some basic knowledge about using the library.

Editorial Note: Check out this rock solid tutorial of building a webapp using ASP.NET Core and SignalR
www.dotnetcurry.com/aspnet-core/1480/aspnet-core-vuejs-signalr-app.

If you don’t feel like going through the docs right now, you may get away with knowing that SignalR uses a
concept called a “hub”. A SignalR hub is basically an endpoint to which clients can connect to start receiving
or sending messages.

On the server, a hub is represented by a class. You can define methods on it that can be called by the
clients, or send messages to those clients through the IHubContext<THub> interface.

Setting up the test scenario

For our test case, the hub won’t define any methods for the clients to call. It will just sit there and wait for
connections.

public class TestHub : Hub
{
}

We will expose it on the /testHub route using the UseSignalR middleware.

app.UseSignalR(routes =>
{
routes.MapHub<TestHub>("/testHub");
});

What MapHub does is it creates an endpoint that clients can connect to. If we wanted to have methods that
the clients could call, we could’ve defined them inside the TestHub class.

In our test scenario, however, we will be testing only “server to client” communication. Let’s define an object
that will provide us the ability to send messages to the hub subscribers. As we mentioned earlier, the
implementation will make use of the IHubContext<THub> interface.

public interface ITestHubDispatcher
{
    Task Dispatch(Notification notification);
}

public class TestHubDispatcher : ITestHubDispatcher
{
    private readonly IHubContext<TestHub> _hubContext;

    public TestHubDispatcher(IHubContext<TestHub> hubContext)
    {
        _hubContext = hubContext;
    }

    public Task Dispatch(Notification notification) =>
        _hubContext
            .Clients
            .All
            .SendAsync(nameof(Notification), notification);
}

public class Notification
{
    public string Message { get; set; }
}

I like to call such objects “dispatchers”.
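For these pieces to work at runtime, both SignalR and the dispatcher have to be registered in Startup.ConfigureServices. The AddSignalR call is required for the UseSignalR middleware shown earlier; registering the dispatcher as a singleton is an assumption here, any lifetime that suits your application will do.

public void ConfigureServices(IServiceCollection services)
{
    services.AddSignalR();

    // Register the dispatcher so the controller below can receive it via DI.
    services.AddSingleton<ITestHubDispatcher, TestHubDispatcher>();
}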

In most applications, events or notifications will usually be dispatched after some API operation has
completed.

Let’s create a mock API endpoint that will use our new dispatcher to send a notification to all hub
subscribers.

[Route("[controller]")]
public class HubController : Controller
{
    private readonly ITestHubDispatcher _dispatcher;

    public HubController(ITestHubDispatcher dispatcher)
    {
        _dispatcher = dispatcher;
    }

    [HttpPost("test")]
    public async Task<IActionResult> Test([FromBody] Notification notification)
    {
        await _dispatcher.Dispatch(notification);
        return Ok();
    }
}

The integration tests we will be writing will test that when a POST request is submitted to the hub/test
endpoint, all subscribers to the TestHub are properly notified.

Why integration tests and not just unit tests?

Unit tests are nice, but all of the mocking and setup can easily distract from what we’re actually testing. We
need to get intimately familiar with how objects are constructed, the application interfaces, their behavior
and role in the implementation.

Of course, unit tests play an important part in the act of delivering quality software. It’s not worth it to spin
up a whole integration testing infrastructure just to cover a few pure, reusable components.

However, for interactive functionality that you will expose to users, integration tests are substantially more
valuable.

Editorial Note: Read more about Integration Testing for ASP.NET Core Applications at www.dotnetcurry.com/aspnet-core/1420/integration-testing-aspnet-core

Sure, the infrastructure setup may be a bit tedious sometimes, but with tools such as Docker, this shouldn’t



be a problem. After the initial setup, we are free to test the system in the same way it’s going to be used by
users and without polluting our tests with implementation details.

For our test setup, we will be aiming for our tests to look more like this:

// Connect to the TestHub on {appUrl}/testHub
// Submit a POST request on {appUrl}/hub/test with a valid message
// Verify that a correct message was received

and less like this:

// Mock the IHubContext<TestHub>
// Create a new instance of TestHubDispatcher using the HubContext mock
// Call the dispatcher's Dispatch function
// Verify that the HubContext mock was called with the proper arguments
// ...repeat for the controller

You see how in the second case we are required to know that there is a TestHubDispatcher
implementation that uses an IHubContext<TestHub>, and that the HubController depends on a
TestHubDispatcher instance, and so on.

All of the mocking and setup distracts us from what we’re trying to test. And what we’re trying to test is
whether the system behaves as expected when interacted with from the outside.

Setting up the tests infrastructure

Normally, as we will find in the docs, to write integration tests for an ASP.NET Core application, we would use the TestServer class. TestServer can be used for calling the controller HTTP endpoint, but after that, we will quickly (or not so quickly, depending on how much time we spend debugging) find out that SignalR won't work, because TestServer does not yet support WebSockets (more info about that here).

Fortunately, there is an easy solution to this problem, and it lies just in front of us - inside the Program.cs
file. If you open it, it probably looks something like this:

public class Program
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build();
}

What this call to CreateDefaultBuilder does is call UseKestrel behind the scenes. If you haven’t heard
of Kestrel up until now, it’s the web server that was introduced together with ASP.NET Core.

I won’t get into details, but if you’ve seen the console window that pops up when you press “Ctrl + F5” in
Visual Studio, then you’ve seen Kestrel (you can learn more about it by reading the docs).

Kestrel is what allows .NET Core apps to be cross platform. For the sake of this article, think about Kestrel
as our application.

And how does that help?

Well, Kestrel decouples our application from specific server implementations such as IIS, Apache or Nginx
by providing a consistent startup pipeline.

We can execute this pipeline ourselves to get a running instance of our application that we can use for
integration testing. This goes around the problem of TestServer not supporting WebSockets by not using
TestServer at all.

We just need to create a class that will encapsulate this startup logic.

public class AppFixture
{
    public const string BaseUrl = "http://localhost:54321";

    static AppFixture()
    {
        var webhost = WebHost
            .CreateDefaultBuilder(null)
            .UseStartup<Startup>()
            .UseUrls(BaseUrl)
            .Build();

        webhost.Start();
    }

    public string GetCompleteServerUrl(string route)
    {
        route = route?.TrimStart('/', '\\');
        return $"{BaseUrl}/{route}";
    }
}

AppFixture is simply mimicking what our application’s Main method is doing - starting the Kestrel web
server.

When this class is instantiated for the first time, an instance of our app will be started on port 54321.

Why a static constructor you may ask? Because we really only need one server running per test run.

AppFixture also provides a neat way of building urls through GetCompleteServerUrl, which will later
come in handy.

An example usage would look like

// Gives us a running server at http://localhost:54321
var fixture = new AppFixture();

// Returns http://localhost:54321/some/route
var url = fixture
    .GetCompleteServerUrl("/some/route");



There are a few more things we need to be clear on about before starting to write the actual test code.

Communicating with our SignalR hub

For communicating with the SignalR hub, we will be using the SignalR.Client package. It gives us a way
of creating persistent connections to our hub and listening for messages that are emitted from it. Some
example usage:

HubConnection connection = new HubConnectionBuilder()
    .WithUrl("http://localhost:54321/testHub")
    .Build();

await connection.StartAsync();

// "Notification" being the expected event name
connection.On("Notification", notification =>
{
    // Do something with the notification
});

You can read more about SignalR.Client in the docs.

For our tests, in order to instantiate new connections, we’ll be using the following helper function.

private static async Task<HubConnection> StartConnectionAsync(string hubUrl)
{
    var connection = new HubConnectionBuilder()
        .WithUrl(hubUrl)
        .Build();

    await connection.StartAsync();

    return connection;
}

Verifying handlers were called with proper arguments

You see how the HubConnection’s On method accepts a callback? Later on, when verifying whether a correct
message was received, we’ll need to check whether the callback we’ve passed has been called with the
proper arguments.

This can be a pain to implement.

Fortunately, there is Moq. Moq allows us to create a mock function and then use its built-in Verify method
to check whether it was called with the correct parameters. The following snippet will create a mock
Action<Notification> and assert that it was called with a message of “whatever”.

var mockHandler = new Mock<Action<Notification>>();

mockHandler.Verify(
x => x(It.Is<Notification>(n => n.Message == "whatever")),
Times.Once());

It even gives us a Times struct. How cool is that!

Converting our test description to actual code

We’ve collected enough knowledge to start converting our test description into actual, working code. How
about we take one more look at it?

// 1. Connect to the TestHub on {appUrl}/testHub
// 2. Submit a POST request on {appUrl}/hub/test with a valid message
// 3. Verify that a correct message was received

Let’s go operation by operation.

// 1. Connect to the TestHub on {appUrl}/testHub

// Arrange
var fixture = new AppFixture();

var connection = await StartConnectionAsync(
    fixture.GetCompleteServerUrl("/testHub"));

// Using a mock handler so we can make use of the Verify method
var mockHandler = new Mock<Action<Notification>>();
connection.On(nameof(Notification), mockHandler.Object);

This code looks pretty obvious after we’ve gotten familiar with SignalR.Client and Moq.

// 2. Submit a POST request on {appUrl}/hub/test with a valid message
var notificationToSend = new Notification { Message = "test message" };

// Act
using (var httpClient = new HttpClient())
{
    // POST the notification to http://localhost:54321/hub/test
    await httpClient.PostAsJsonAsync(
        fixture.GetCompleteServerUrl("/hub/test"), notificationToSend);
}

We’re using the built-in HttpClient. If you need more info about it, check out the docs.

// 3. Verify that a correct message was received

// Assert
mockHandler.Verify(
    x => x(It.Is<Notification>(n => n.Message == notificationToSend.Message)),
    Times.Once());

This is where we thank Moq’s creators for the awesome syntax.

The whole class now looks like this:

public class TestHubTests
{
    [Fact]
    public async Task ShouldNotifySubscribers()
    {
        // Arrange
        var fixture = new AppFixture();

        // 1. Connect to the TestHub on {appUrl}/testHub
        var connection = await StartConnectionAsync(
            fixture.GetCompleteServerUrl("/testHub"));

        // Using a mock handler so we can make use of the Verify method
        var mockHandler = new Mock<Action<Notification>>();
        connection.On(nameof(Notification), mockHandler.Object);

        var notificationToSend = new Notification { Message = "test message" };

        // Act
        using (var httpClient = new HttpClient())
        {
            // 2. Submit a POST request on {appUrl}/hub/test with a valid message
            await httpClient.PostAsJsonAsync(
                fixture.GetCompleteServerUrl("/hub/test"), notificationToSend);
        }

        // Assert
        // 3. Verify that a correct message was received
        mockHandler.Verify(
            x => x(It.Is<Notification>(n => n.Message == notificationToSend.Message)),
            Times.Once());
    }

    private static async Task<HubConnection> StartConnectionAsync(string hubUrl)
    {
        var connection = new HubConnectionBuilder()
            .WithUrl(hubUrl)
            .Build();

        await connection.StartAsync();

        return connection;
    }
}

A bit verbose, but working nonetheless. Sadly, there is one gotcha.

This test will pass, but only sometimes.

Testing real web sockets

You see, testing real web sockets isn’t that simple.

Since the nature of websocket communication is asynchronous and there is a real web server running in
the background, there is no guarantee that the Assert part of the test will be executed after the message
has been received.

In other words, the test may be valid, but might exit too early for the assertion to pass.

So what do we do?

Verify with timeout.

Thankfully, we’re using C# and we can easily “plug into” Moq through an extension method.

public static async Task VerifyWithTimeoutAsync<T>(this Mock<T> mock,
    Expression<Action<T>> expression, Times times, int timeoutInMs)
    where T : class
{
    bool hasBeenExecuted = false;
    bool hasTimedOut = false;

    var stopwatch = new Stopwatch();
    stopwatch.Start();

    while (!hasBeenExecuted && !hasTimedOut)
    {
        if (stopwatch.ElapsedMilliseconds > timeoutInMs)
        {
            hasTimedOut = true;
        }

        try
        {
            mock.Verify(expression, times);
            hasBeenExecuted = true;
        }
        catch (Exception)
        {
        }

        // Feel free to make this configurable
        await Task.Delay(20);
    }
}

What VerifyWithTimeoutAsync does is retry the built-in Verify until either it has been completed
successfully or a timeout has been reached.

mockHandler.Verify(
    x => x(It.Is<Notification>(n => n.Message == notificationToSend.Message)),
    Times.Once());

..now becomes

await mockHandler.VerifyWithTimeoutAsync(
    x => x(It.Is<Notification>(n => n.Message == notificationToSend.Message)),
    Times.Once(), 1000);

If the first .Verify fails, Moq will continue retrying for 1 more second.

Our test now looks like this.

[Fact]
public async Task ShouldNotifySubscribers()
{
    // Arrange
    var fixture = new AppFixture();

    // 1. Connect to the TestHub on {appUrl}/testHub
    var connection = await StartConnectionAsync(
        fixture.GetCompleteServerUrl("/testHub"));

    // Using a mock handler so we can make use of the Verify method
    var mockHandler = new Mock<Action<Notification>>();
    connection.On(nameof(Notification), mockHandler.Object);

    var notificationToSend = new Notification
    {
        Message = "test message"
    };

    // Act
    using (var httpClient = new HttpClient())
    {
        // 2. Submit a POST request on {appUrl}/hub/test with a valid message
        await httpClient.PostAsJsonAsync(
            fixture.GetCompleteServerUrl("/hub/test"), notificationToSend);
    }

    // Assert
    // 3. Verify that a correct message was received
    await mockHandler.VerifyWithTimeoutAsync(
        x => x(It.Is<Notification>(n => n.Message == notificationToSend.Message)),
        Times.Once(), 1000);
}

It definitely isn’t horrible, and it works, but it’s still not as simple as the description we started with.

// 1. Connect to the TestHub on {appUrl}/testHub
// 2. Submit a POST request on {appUrl}/hub/test with a valid message
// 3. Verify that a correct message was received

Let’s think a bit about how we could refactor things so the test looks more like the example description
without distancing us from the details too much.
// 1. Connect to the TestHub on {appUrl}/testHub

If we wrapped SignalR.Client’s HubConnection class into our own, we could perhaps end up with a builder
allowing us to do something like:

var connection = new TestHubConnectionBuilder()
    .OnHub(_fixture.GetCompleteServerUrl("/testHub"))
    .WithExpectedEvent<Notification>(nameof(Notification))
    .Build();

await connection.StartAsync();

It definitely makes it more obvious that we’re connecting to the /testHub endpoint and expecting a
message called “Notification”.

// 2. Submit a POST request on {appUrl}/hub/test with a valid message

What we can do is move the HttpClient construction into the AppFixture class itself.

public async Task ExecuteHttpClientAsync(Func<HttpClient, Task> action)
{
    var httpClient = new HttpClient();
    httpClient.BaseAddress = new Uri(BaseUrl);

    using (httpClient)
    {
        await action(httpClient);
    }
}

Our test’s Act part then becomes:

await fixture.ExecuteHttpClientAsync(httpClient =>
    httpClient.PostAsJsonAsync("/hub/test", notificationToSend));

// 3. Verify that a correct message was received

Now that we’ve wrapped SignalR’s HubConnection into a TestHubConnection, we cannot call
VerifyWithTimeoutAsync on the message handler, as it is not in scope.

What we’ll do is move the verification to the test connection itself.

await connection.VerifyMessageReceived(
    n => n.Message == notificationToSend.Message,
    Times.Once());

..instead of

await mockHandler.VerifyWithTimeoutAsync(
    x => x(It.Is<Notification>(n => n.Message == notificationToSend.Message)),
    Times.Once(),
    1000);

Now move the AppFixture into a private field and our test class looks like this:

public class TestHubTests
{
    private readonly AppFixture _fixture;

    public TestHubTests()
    {
        _fixture = new AppFixture();
    }

    [Fact]
    public async Task ShouldNotifySubscribers()
    {
        // Arrange
        var notificationToSend = new Notification { Message = "test message" };

        var connection = new TestHubConnectionBuilder()
            .OnHub(_fixture.GetCompleteServerUrl("/testHub"))
            .WithExpectedEvent<Notification>(nameof(Notification))
            .Build();

        await connection.StartAsync();

        // Act
        await _fixture.ExecuteHttpClientAsync(httpClient =>
            httpClient.PostAsJsonAsync("/hub/test", notificationToSend));

        // Assert
        await connection.VerifyMessageReceived(
            n => n.Message == notificationToSend.Message,
            Times.Once());
    }
}

Much better, isn’t it? Except for the fact that it doesn’t compile, but we’ll get to that in a second.

Implementing our HubConnection wrapper

The implementation is fairly straightforward, since the squiggly red underlines tell us exactly what methods we will need to expose (StartAsync and VerifyMessageReceived).

Initially, the class will look something like this:

public class TestHubConnection
{
private readonly HubConnection _connection;
private readonly Dictionary<Type, object> _handlersMap;
private readonly int _verificationTimeout;

internal TestHubConnection(string url, int verificationTimeout = 1000)
{
_connection = new HubConnectionBuilder()
.WithUrl(url)
.Build();

_verificationTimeout = verificationTimeout;
_handlersMap = new Dictionary<Type, object>();
}
}

We keep some default verification timeout, the underlying connection (SignalR.Client.HubConnection)
and a collection of mappings between types and their handlers.

Dictionary<Type, object> may look intimidating at first, but things will become clearer in a second.

This dictionary will hold expected event types and a collection of their handlers. These handlers, as we saw
earlier, will be just mock functions created using the Moq library.
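
In case you haven't mocked a delegate with Moq before, here's a minimal standalone sketch (separate from the article's code) of how such a handler mock records and verifies calls:

using System;
using Moq;

public static class MoqDelegateDemo
{
  public static void Run()
  {
    // Mocking the Action<Notification> delegate itself; mock.Object is the delegate.
    var handlerMock = new Mock<Action<Notification>>();

    // This is effectively what SignalR does when a message arrives.
    handlerMock.Object(new Notification { Message = "test message" });

    // Assert that the handler was invoked with the expected payload.
    handlerMock.Verify(
      h => h(It.Is<Notification>(n => n.Message == "test message")),
      Times.Once());
  }
}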

Some example key-value pairs that could be stored are:

{ key: typeof(Notification), value: new List<Mock<Action<Notification>>>() },
{ key: typeof(Notification2), value: new List<Mock<Action<Notification2>>>() },
{ key: typeof(Notification3), value: new List<Mock<Action<Notification3>>>() }

You see how we’re storing different generic types inside the values? This is why we need to use object as
the value type, so we can merge them under a common abstraction.

Later, if we want to assert that an event of type Notification was received, we can just take all its
handlers and run a predicate against them.

StartAsync will simply wrap around HubConnection’s StartAsync.

public Task StartAsync() =>
  _connection.StartAsync();

..and VerifyMessageReceived<TEvent> will check whether we have a registered handler for the
specified TEvent, and if we do, call VerifyWithTimeoutAsync on it.

public async Task VerifyMessageReceived<TEvent>(
Expression<Func<TEvent, bool>> predicate,
Times times)
{
if (!_handlersMap.ContainsKey(typeof(TEvent)))
// Just a custom exception
throw new HandlerNotRegisteredException(typeof(TEvent));

var handlersForType = _handlersMap[typeof(TEvent)];

foreach (var handler in (List<Mock<Action<TEvent>>>)handlersForType)
{
await handler.VerifyWithTimeoutAsync(
x => x(It.Is(predicate)),
times,
_verificationTimeout);
}
}

Implementing TestHubConnectionBuilder

There’s not much to comment on implementing the builder, it’s a very standard implementation you’ll find
hundreds of tutorials for.

public class TestHubConnectionBuilder
{
private List<(Type Type, string Name)> _expectedEventNames;
private string _hubUrl;

public TestHubConnection Build()
{
if (string.IsNullOrEmpty(_hubUrl))
throw new InvalidOperationException($"Use {nameof(OnHub)} to set the hub
url.");

if (_expectedEventNames == null || _expectedEventNames.Count == 0)
  throw new InvalidOperationException($"Use {nameof(WithExpectedEvent)} to set the expected event name.");

var testConnection = new TestHubConnection(_hubUrl);

foreach (var expected in _expectedEventNames)
{
testConnection.Expect(expected.Name, expected.Type);
}

Clear();

return testConnection;
}

public TestHubConnectionBuilder OnHub(string hubUrl)
{
_hubUrl = hubUrl;
return this;
}

public TestHubConnectionBuilder WithExpectedEvent<TEvent>(string eventName)
{
if (_expectedEventNames == null)
_expectedEventNames = new List<(Type, string)>();

_expectedEventNames.Add((typeof(TEvent), eventName));
return this;
}

private void Clear()
{
_expectedEventNames = null;
_hubUrl = null;
}
}

The only missing item that we find out while implementing it is that we need an Expect method on the
TestHubConnection. Let’s implement that.

public class TestHubConnection
{
public void Expect<TEvent>(string expectedName)
{
var handlerMock = new Mock<Action<TEvent>>();
RegisterHandler(handlerMock);
_connection.On(expectedName, handlerMock.Object);
}

public void Expect(string expectedName, Type expectedType)
{
var genericExpectMethod = GetGenericMethod(
nameof(Expect),
new[] { expectedType });
genericExpectMethod.Invoke(this, new[] { expectedName });
}

private MethodInfo GetGenericMethod(string name, Type[] genericArguments)
{
var method = typeof(TestHubConnection)
.GetMethods()
.First(m => m.ContainsGenericParameters && m.Name == name)
.MakeGenericMethod(genericArguments);

return method;
}

private void RegisterHandler<TEvent>(Mock<Action<TEvent>> handler)
{
if (!_handlersMap.TryGetValue(typeof(TEvent), out object handlersForType))
{
handlersForType = new List<Mock<Action<TEvent>>>();
_handlersMap[typeof(TEvent)] = handlersForType;
}

var handlers = (List<Mock<Action<TEvent>>>)handlersForType;
handlers.Add(handler);
}
}

It’s very verbose, but that’s what you get when you want to have cool syntax. What Expect does
is register a new mock handler for the type we’ve given. We can later use this handler to call
VerifyWithTimeoutAsync and assert that a correct message was received.

Sadly, this requires some reflection gymnastics, but implementation details can be ugly sometimes.

Let’s take one more look at the completed test.

[Fact]
public async Task ShouldNotifySubscribers()
{
// Arrange
var notificationToSend = new Notification { Message = "test message" };

var connection = new TestHubConnectionBuilder()
.OnHub(_fixture.GetCompleteServerUrl("/testHub"))
.WithExpectedEvent<Notification>(nameof(Notification))
.Build();

await connection.StartAsync();

// Act
await _fixture.ExecuteHttpClientAsync(httpClient =>
httpClient.PostAsJsonAsync("/hub/test", notificationToSend));

// Assert
await connection.VerifyMessageReceived<Notification>(
  n => n.Message == notificationToSend.Message,
  Times.Once());
}

..and if we click the “Run” button.

Perfect.

We’ve now set up the foundation for a readable and functional SignalR integration test suite. For more
advanced examples of this approach that include support for access tokens, tests for notifying a specific
user, etc. visit https://github.com/dnikolovv/cafe. Look for the Api/Hubs tests in the /server folder.

And if you just want to check out and play around with the complete code of this article – you can also find
it on GitHub here - https://github.com/dnikolovv/signalr-integration-tests.

That’s it! The only thing left now is to show off your newly acquired knowledge by writing some robust and
well-tested real-time functionality!

Dobromir Nikolov
Author

Dobromir Nikolov is a software developer working mainly with Microsoft technologies, with his
specialty being enterprise web applications and services. Very driven towards constantly improving
the development process, he is an avid supporter of functional programming and test-driven
development. In his spare time, you’ll find him tinkering with Haskell, building some project on
GitHub (https://github.com/dnikolovv), or occasionally talking in front of the local tech community.

Thanks to Damir Arh for reviewing this article.

DEVOPS

Hardik Mistry

Configuration driven Mobile DevOps

THE CHALLENGE

Shipping 5-star apps is easier said than done!

While there are tools that allow us to configure and setup a CI/CD (Continuous Integration/Continuous Deployment) pipeline, there are times where you as a developer would want to be able to tune these configurations on the fly, either before the build starts, or after the build succeeded/failed.

Things you'll need to get going:

• App Center account (sign up here for a free account: https://appcenter.ms)

• Visual Studio 2017 or higher with Xamarin SDK and related components installed
  - Installation guide: https://developer.xamarin.com/guides/cross-platform/getting_started/installation/windows/

• Visual Studio for Mac with Xamarin SDK and related components installed
  - Installation guide: https://docs.microsoft.com/en-us/visualstudio/mac/installation?view=vsmac-2019

• Optional: Code used in this article: https://github.com/mistryhardik/ms-workshops

We usually focus on writing and building an app, but we don't give much thought to how we will distribute it. With so many options in the market, the choice gets quite abstract and ambiguous.

Think of each of these options as choosing between a BMW and a Mercedes: both are performance vehicles with an equal commitment to quality and luxury, and both can very well take you from point A to point B. However, there are subtle differences between the two that make each a strong contender in its own segment.

We'll explore one such tool to help you with your Mobile DevOps journey. This tool is App Center
(previously known as Visual Studio Mobile Center).

Signing up to App Center is a breeze. You can start with a free account here: https://appcenter.ms. While
you can get started for free, you may want to choose a paid plan to obtain more build time (in minutes per
month) or other additional services. Explore the pricing and plans here: https://visualstudio.microsoft.com/app-center/pricing/

Once you are logged in, you need to define an app. If you are working across customers or have a large
team, you can define an organisation to group the apps you would be working on.

Figure 1: Appcenter default home page

At this point of time, I will click the Add app button and configure it as an iOS app developed using the
Xamarin platform, as illustrated in Figure 2. Notice that you could pick any other flavour of OS and platform
as well.

Figure 2: Appcenter Add new app options

Now click the build menu (on the left, the play icon) to configure our repository which contains the
solution/project we intend to build.

As you can see, at the time of writing this post, App Center supports: Azure DevOps, GitHub and BitBucket as
your repository providers.

In my case, I will connect using GitHub, by clicking the GitHub button (see that icon in Figure 3 on the right-hand side of the screen).

Figure 3: Connect Repository Source

I will have to log in using my GitHub account to allow App Center to use my repository for builds (you will have to grant permission when prompted so that App Center can access your repository).

Alright, once that is in place, we can see the branch(es) available under the repository we selected. I will
click on the development branch to be able to configure the build steps.

In your case this could be different. If you fork my repository, you too should see development as a branch
option as shown in Figure 4.

Figure 4: Appcenter Branch view

I will click the Configure Build button (the blue button on right hand side of the screen as seen in Figure 5).

Figure 5: Appcenter configure build for development

Next depending upon the app target, you need to choose from a variety of settings.

Figure 6 shows the settings I have used to build a Xamarin.iOS project. This will vary depending upon
which target platform you choose to build.

Figure 6: Appcenter build configuration

You need to select the .csproj to build Xamarin.iOS projects.

If you are building native iOS apps, you would need to define a shared scheme in your workspace settings using Xcode.

Once we are set with our desired configuration, click Save if you do not plan to run the build right now, or Save and Build to save your configuration and trigger the build immediately.

I have clicked Save & Build.

OK, after a few minutes, we will observe that the build was successful. If that's not the output for you, you should be able to see the cause of the failed build and make changes in your repository to fix it.

What we would want to do now is be able to update the version number to the next one. So, say the current
version number is 1.0, we would want to update it to 1.1.

To be able to do that, we will be wiring the build with some custom build scripts.

CUSTOM BUILD SCRIPTS


Build scripts are bash scripts which App Center can execute post-clone, pre-build and post-build. You'll need to name the file appropriately for App Center to recognise that your code includes a build script. See the App Center documentation to explore more about build scripts.

If you are trying to build a native iOS app (app built using Objective C or Swift instead of Xamarin as talked
about here), keep the .sh script files in the same directory level as where you have the .xcworkspace file.

Figure 7: iOS workspace directory

If you're trying to build a native Android app (an app built using Java or Kotlin), keep the .sh script files in the /app directory.

Figure 8: Android app directory

appcenter-post-clone.sh

The appcenter-post-clone.sh script will do some housekeeping, such as downloading a utility onto the build agent and installing it, or setting up configuration. We will need to parse and edit a .json file, and for that, we will install a utility called jq. To perform addition and other arithmetic operations, we will use a utility called bc.

#!/usr/bin/env bash -e

cd $APPCENTER_SOURCE_DIRECTORY

# Attempt to update node
curl -O https://nodejs.org/dist/v8.11.3/node-v8.11.3.pkg
sudo installer -pkg node-v8.11.3.pkg -target /

# Install jq, more information here: https://stedolan.github.io/jq/download/
brew install jq

# Install bc, more information here: http://www.gnu.org/software/bc/
brew install bc

npm install

appcenter-pre-build.sh

The appcenter-pre-build.sh script will parse the Info.plist file or the AndroidManifest.xml file to read the current version information. For the .plist file, we will convert it into a temporary json file with the help of the plutil utility and read it with jq. For the AndroidManifest.xml file, we will read the version attributes with grep and sed. In both cases we make use of the bc utility we installed in the post-clone step to increment the version information by 0.1.

Sample scripts

iOS
# The following is test script to execute in pre build process

#!/usr/bin/env bash
#
# For Xamarin Android or iOS, change the package name located in AndroidManifest.xml and Info.plist.

INFO_PLIST_FILE=$APPCENTER_SOURCE_DIRECTORY/MyWeatherApp/MyWeatherApp.iOS/Info.plist

if [ ! -n "$INFO_PLIST_FILE" ]
then
    echo "You need define Info.plist in your iOS project"
    exit
fi
echo "APPCENTER_SOURCE_DIRECTORY: " $APPCENTER_SOURCE_DIRECTORY
echo "INFO_PLIST_FILE: " $INFO_PLIST_FILE

# Check branch and run commands if so:
if [ "$APPCENTER_BRANCH" == "master" ]; then

    # Convert .plist file to .json
    plutil -convert json $INFO_PLIST_FILE -o temp.json

    jq '.' temp.json

    VERSION=$(jq -r '.CFBundleShortVersionString' temp.json)
    BUILD=$(jq -r '.CFBundleVersion' temp.json)

    echo "Current version: " $VERSION
    echo "Current build: " $BUILD

    # Actually increment the version in this line
    UPDATED_VERSION=$(bc <<< "$VERSION + 0.1")

    echo "Updated version: " $UPDATED_VERSION

    echo "Updating Build to $APPCENTER_BUILD_ID in Info.plist"
    plutil -replace CFBundleVersion -string $APPCENTER_BUILD_ID $INFO_PLIST_FILE

    echo "Updating Version to $UPDATED_VERSION in Info.plist"
    plutil -replace CFBundleShortVersionString -string $UPDATED_VERSION $INFO_PLIST_FILE

fi
if [ "$APPCENTER_BRANCH" == "development" ]; then

    # Convert .plist file to .json
    plutil -convert json $INFO_PLIST_FILE -o temp.json

    jq '.' temp.json

    VERSION=$(jq -r '.CFBundleShortVersionString' temp.json)
    BUILD=$(jq -r '.CFBundleVersion' temp.json)

    # Print the values
    echo "Current version: " $VERSION
    echo "Current build: " $BUILD

fi

echo "Info.plist file content:"
cat $INFO_PLIST_FILE

Android

# The following is test script to execute in pre build process

#!/usr/bin/env bash
#
# For Xamarin Android or iOS, change the package name located in AndroidManifest.xml and Info.plist.
# AN IMPORTANT THING: YOU NEED DECLARE BASE_URL, SECRET and TEST_COLOR ENVIRONMENT VARIABLE IN APP CENTER BUILD CONFIGURATION.

# This path will vary depending upon your project structure
ANDROID_MANIFEST_FILE=$APPCENTER_SOURCE_DIRECTORY/MyWeatherApp/MyWeatherApp.Android/Properties/AndroidManifest.xml

if [ ! -n "$ANDROID_MANIFEST_FILE" ]
then
    echo "You need define AndroidManifest.xml in your Android project"
    exit
fi
echo "APPCENTER_SOURCE_DIRECTORY: " $APPCENTER_SOURCE_DIRECTORY
echo "ANDROID_MANIFEST_FILE: " $ANDROID_MANIFEST_FILE

# Check branch and run commands if so:
if [ "$APPCENTER_BRANCH" == "master" ]; then

    VERSIONCODE=`grep versionCode $ANDROID_MANIFEST_FILE | sed 's/.*versionCode="//;s/".*//'`
    VERSIONNAME=`grep versionName $ANDROID_MANIFEST_FILE | sed 's/.*versionName="//;s/".*//'`

    echo "Current VersionCode: " $VERSIONCODE
    echo "Current VersionName: " $VERSIONNAME

    # Actually increment the version in this line
    UPDATED_VERSIONNAME=$(bc <<< "$VERSIONNAME + 0.1")

    echo "Updating versionCode to $APPCENTER_BUILD_ID and versionName to $UPDATED_VERSIONNAME in AndroidManifest.xml"
    sed -i '' 's/versionCode="[0-9.]*"/versionCode="'$APPCENTER_BUILD_ID'"/; s/versionName *= *"[^"]*"/versionName="'$UPDATED_VERSIONNAME'"/' $ANDROID_MANIFEST_FILE

fi
if [ "$APPCENTER_BRANCH" == "staging" ]; then

    VERSIONCODE=`grep versionCode $ANDROID_MANIFEST_FILE | sed 's/.*versionCode="//;s/".*//'`
    VERSIONNAME=`grep versionName $ANDROID_MANIFEST_FILE | sed 's/.*versionName="//;s/".*//'`

    # Print the values
    echo "Current VersionCode: " $VERSIONCODE
    echo "Current VersionName: " $VERSIONNAME

fi

echo "Manifest file content:"
cat $ANDROID_MANIFEST_FILE

If you now (after pushing the scripts, if you tried to create your own repository and project) try and check
the configuration, notice that in Figure 9, it shows the post-clone and pre-build scripts.

Note: As of today, you cannot specify the scripts directly from this configuration screen; you will have to put the files in the right directory with the correct file name.

Figure 9: Verify the scripts in configuration

Alright, so the scripts are already in place. Whenever a build runs in App Center (whether we re-run it manually right now or a new build is triggered), these scripts will be executed every time. To fail-safe ourselves, if you read the scripts again, I have included a condition: if the branch is master, update the version; otherwise, only display the version.

Figure 10: Build log view

Summary

I have explored and used other alternative tools as well, such as Jenkins, Bitrise etc. While these are options worth trying, in my personal opinion Jenkins had an overhead in administering and managing the server, whereas in the case of Bitrise, it got overwhelming to manage the build steps.

Again, this does not mean that these and the other options available in the market are not worth trying; it is just my personal opinion, and one of the other options might just be the best fit for your scenario.

App Center did it just right, at least for me ;)

In this post we explored how we could customize the default build experience using the custom build
scripts in App Center. You can review more exciting features at https://docs.microsoft.com/en-us/appcenter
and also check out their product blog for latest and greatest updates.

I would love to hear from you, be it thoughts around the post or any other help you might need with
AppCenter, tweet me @mistryhardik05.

Happy building!

Download the entire source code from GitHub at


bit.ly/dncm43-mobiledevops

Hardik Mistry
Author

Hardik Mistry is a Consultant for .NET, Azure, Xamarin and DevOps scenarios and workloads. He
is a Microsoft MVP with proven experience of 7+ years of engineering mobile-first and cloud-
first scenarios for select startups and enterprise customers. You can reach out to him via twitter
@mistryhardik05.

Thanks to Gerald Verslius for reviewing this article.

AZURE DEVOPS

Gouri Sohoni

AZURE DEVOPS
- YAML FOR
CI-CD PIPELINES
In this tutorial, I will give an overview of how to use YAML in Azure Pipelines.

Azure Pipelines is a service which provides CI (Continuous Integration) and CD (Continuous Delivery). It can integrate with various repositories like GitHub, GitHub Enterprise, BitBucket or even Azure Repositories for source code.

Continuous Integration (CI) is a process which automatically starts the server side build the moment any team member checks-in or commits the code to source control. The build can be automated and deployed to Microsoft Azure and tested.

A common way to create and configure your build and release pipelines in the web portal is by using the classic editor. Though Azure Pipelines can work with a classic editor (formerly called as vNext – which is GUI based), I am going to show how YAML can be used.

In this article, I will discuss:

• the basics of YAML
• how to use it with Azure Pipelines and
• how it can be used for configuring CI-CD pipelines in Azure DevOps.

128 DNC MAGAZINE 7TH ANNIVERSARY ISSUE (43) - JULY-AUG 2019


You may have heard of “Configuration as Code”. YAML makes it possible to code your configuration
management by defining build and release pipelines in the YAML code.

YAML OVERVIEW
YAML stands for "YAML Ain't Markup Language". It is a human friendly serialization language mainly used for configuration files. It can also be used for storing debugging output or document headers. It has a very limited syntax. The name originally stood for "Yet Another Markup Language" before it was changed to the current recursive acronym "YAML Ain't Markup Language".

The following conventions are followed when you want to create yml file:

• Comments start with #

  Eg: # This is a comment line

• Scalar type variables will be declared as

  key: value
  number: 1000

• Strings don't need to have quotes

  string_without_quotes: This string is without quotes
  string_with_quotes: 'This string is with quotes'

  o Multiple lines can be specified with | or >

    Line_block: |
      This is
      multiple lines block
    line_block2: >
      This is also
      multiple lines block

• Nesting can be used for collection types

  nested_value:
    key: value
    second_key: second value

• A complex (multi-line) key can be defined starting with ?

  ? This key is with
    multiple lines with value
  : this is the value

Remember that you cannot use tab as indentation, but can add space for indentation.

In order to work with Azure Pipelines, we need to have the source code we will use to create a build. For
build creation, we need to have an agent to do the job. The same agent can also be used to deploy and test
after deployment.

An agent can either be installed on a machine on-premises (self-hosted) or used from Microsoft-hosted
agents. This agent is responsible for running one job at a time, after communicating with Azure Pipelines as
to which job to run. It will also determine system capabilities like name of the machine, OS, or take care of
special installations. It will also create logs after the job is over.

I will first use the hosted agent, and later show how your own agent and pool can be configured and used.

CREATE AND WORK WITH OUR OWN AZURE PIPELINES
I will use GitHub as source control, use build in Azure DevOps with YAML, and deploy to Azure Web App
Service.

Pre-requisites: GitHub account with at least 1 repository.

Let us see a walkthrough of the same to use CI CD service with Azure Pipelines. In order to get Azure
Pipelines, use this link.

Note: Figure 1 contains two buttons. Even if you use the button ‘Start free with Pipelines’, you can later
connect to GitHub for source control.

Figure 1: Create Azure Pipelines

CONNECT TO AZURE DEVOPS OR CREATE A NEW ORGANIZATION WITH AZURE PIPELINES
After clicking on the link, use any of your Microsoft Accounts to work with Azure Pipelines. You will be asked
to continue to work with Azure DevOps and automatically a new account (organization) will be created for
you. If you already have an Azure DevOps account, you can work with that.

Now that we have an Azure DevOps account, we can create a Team Project. A Team Project can be based on a process: Basic, Agile, Scrum or CMMI. I have selected Scrum here (selecting the process will not make any difference to the build and release pipelines; I selected Scrum for demonstration purposes, but feel free to choose any other).

Figure 2: Create Team Project in your organization

Now that we have created a Team Project, we need to create a Build Pipeline. Select Pipelines – Builds and click on the New Pipeline button. Now provide the source of our code (GitHub in this case), along with the authentication to connect to the required repository.

AUTHENTICATE GITHUB
Figure 3: Authenticate and integrate with GitHub

When you are providing GitHub a connection to a Public repository, you will get a warning that you are trying to connect to a Public repo from a Private one (assuming your Azure DevOps repo is Private). The moment you add the repository, you will get suggestions for the build template as follows (the template is suggested based on the code - Java, .NET etc).

Figure 4: Build Pipelines which suggests Ant template

Figure 5: Build Pipelines suggest ASP.NET template based on repository

After selecting the template, we can save the yml file (the extension for YAML is .yml) and trigger the build. The created .yml files will look as follows, depending on whether they are for Ant or for ASP.NET.

YAML code for Ant


# Ant
# Build your Java projects and run tests with Apache Ant.
# Add steps that save build artifacts and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/java

trigger:
- master

pool:
vmImage: 'ubuntu-latest'

steps:
- task: Ant@1
inputs:
workingDirectory: ''
buildFile: 'build.xml'
javaHomeOption: 'JDKVersion'
jdkVersionOption: '1.8'
jdkArchitectureOption: 'x64'
publishJUnitResults: false
testResultsFiles: '**/TEST-*.xml'

Observe that the Ant task is added and refers to build.xml. You can change the inputs if required.

Pipeline for VS Build

# ASP.NET
# Build and test ASP.NET projects.
# Add steps that publish symbols, save build artifacts, deploy, and more:
# https://docs.microsoft.com/azure/devops/pipelines/apps/aspnet/build-aspnet-4

trigger:
- master

pool:
vmImage: 'windows-latest'

variables:
solution: '**/*.sln'
buildPlatform: 'Any CPU'
buildConfiguration: 'Release'

steps:
- task: NuGetToolInstaller@0

- task: NuGetCommand@2
inputs:
restoreSolution: '$(solution)'

- task: VSBuild@1
inputs:
solution: '$(solution)'
msbuildArgs: '/p:DeployOnBuild=true /p:WebPublishMethod=Package
/p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true
/p:PackageLocation="$(build.artifactStagingDirectory)"'
platform: '$(buildPlatform)'
configuration: '$(buildConfiguration)'

- task: VSTest@2
inputs:
platform: '$(buildPlatform)'
configuration: '$(buildConfiguration)'

The VSBuild task is very similar to working with the classic editor (it creates a single zip file since it is packaging a web application).

The YAML file has a schema based on the following structure.

YAML schema for build pipelines

• Pipelines
  o Stage 1 / Environment 1
    - Job 1
      - Step 1 for Job 1
      - Step 2 for Job 1
    - Job 2
      - Step 1 for Job 2
  o Stage 2 / Environment 2
  o ….

The schema shows that we can add as many stages as required in the pipeline, and as many jobs within each stage. Each job can have many steps, and the steps in turn can contain various tasks.

Observe that there are NuGet package related tasks along with the build and test tasks. You can also see that the pool name depends on whether we are using a hosted agent or the default agent. If there is a single job, we do not have to mention it explicitly.

DEPLOY TO AZURE WEB APP SERVICE WITH RELEASE DEFINITION
Let us add a release definition and deploy the web application to Azure Web App Service.

Select New Pipeline from the Releases tab and select the template for App Service Deployment. You need to have a Web App Service in Azure to deploy our app to. You can create a new web app service by signing in to the Azure Portal. Use this link to learn more.

For our deployment to be successful, we need to publish the artefacts created in the build. Let us add a task at the end of the YAML file to publish the artefacts.

Edit the build definition, go to the end of the YAML file, search for Publish build artefacts and click on Add. The task can be seen as shown in Figure 6:

Figure 6: Customize YAML with a task

Save the build and execute it so that the artefacts can be published and used for our release.

CONFIGURE THE TASK FOR AZURE APP SERVICE IN RELEASE DEFINITION
For any release definition, we need to provide the build from where the artefacts will be fetched.

Configure the task for the app service. Authorize the task to use the service created with your Azure subscription. Although YAML for release pipelines is not yet commonly used, it is certainly possible and has recently been added to Azure DevOps.

Figure 7: Create Release Definition

Save the release definition and create a release. After successful deployment, you should be able to see the
application deployed.

AUTOMATED CI AND CD
Edit the build definition to enable the continuous integration trigger, and also enable the trigger for continuous deployment. To enable it, click on the ellipsis button and select Triggers. Save the pipeline.

Figure 8: Enable Continuous Trigger

Enable and save the trigger for release definition. Change the code in GitHub and ensure that both the
triggers work as expected.

CUSTOMIZE YOUR BUILD PIPELINE


You can add or modify existing tasks in the build pipeline if required. For example, you can add variable(s), or add multiple pools and jobs to it. I will show you how to add a variable and a PowerShell script task. Let us add a variable named UserName in the yml file.

We can also select the task of PowerShell, do the required configuration and click on Add.

Figure 9: Add PowerShell Script to YAML

The code in yml for variable declaration and task looks as follows:

variables:
solution: '**/*.sln'
buildPlatform: 'Any CPU'
buildConfiguration: 'Release'
UserName: 'Gouri Sohoni'
configuration: debug
platform: x64
……..
- task: PowerShell@2
inputs:
targetType: 'inline'
script: '# Write your powershell commands here.

Write-Host "Hello " $(UserName)

# Use the environment variables input below to pass secret variables to this script.'

Note: When you create the .yml file, it automatically gets added to the Source Code.

CREATE AZURE PIPELINES FOR AZURE DEVOPS TEAM PROJECT
Let us create a build pipeline for the Azure DevOps project.

We just have to specify that the source is from Azure DevOps repository and the wizard will show you the
template to choose. In this case, I am going to work with only the .csproj and not the whole solution. In
order to achieve this, I will have to customize the yml file. When we select .sln file to build, it selects all the
projects which are part of the solution which may not be required in some cases. If we just want to create a
build for a single project (which is a part of solution), we need to change the .sln to .csproj (or .vbproj if we
are working in VB.Net).

- task: VSBuild@1
inputs:
solution: Solution1/UnitTestProject1/UnitTestProject1.csproj
msbuildArgs: '/p:OutputPath="$(build.artifactstagingdirectory)\\"'
platform: '$(BuildPlatform)'
configuration: '$(BuildConfiguration)'

I want to copy the artefacts to a local shared folder. In order to do that, I will have to change the pool from hosted to the default pool. For this, I need to first create a PAT (Personal Access Token), then download and configure the agent pool. It will be done as follows:

pool:
name: <name of your pool>

The name should be the same as the agent pool you have configured. To know more about how to download and configure the agent, follow this link.

For copy task to be successful, I created a shared folder on the machine on which I have my agent
configured and pool created, and provided the copy file task as follows:

- task: CopyFiles@2
inputs:
SourceFolder: '$(Build.ArtifactStagingDirectory)'
Contents: '**/*.dll'
TargetFolder: '\\<machine name>\<shared folder name>'

Ensure that the artefacts are published to the ArtifactStagingDirectory for the copy to be successful. After
successful creation of the build, I found the artefacts in the shared location. Customizing your YAML file is
thus very easy and straightforward.

It is very easy to create YAML from any existing classic editor build: you just have to edit the existing build, select the agent and click on View YAML as shown in Figure 10.

Figure 10: Create YAML from classic editor

Conclusion:

In this article, we have seen how to get started with the creation of Azure Pipelines. I showed how to fetch code from a GitHub repository and create a build pipeline with yml, followed by a release pipeline. We also discussed how the source code can come from an Azure DevOps repository, and how the yml can be customized.

Gouri Sohoni
Author

Gouri Sohoni is a Trainer and Consultant for over two decades. She specializes in Visual Studio -
Application Lifecycle Management (ALM) and Team Foundation Server (TFS). She is a Microsoft
MVP in VS ALM, MCSD (VS ALM) and has conducted several corporate trainings and consulting
assignments. She has also created various products that extend the capability of Team Foundation
Server.

Thanks to Subodh Sohoni for reviewing this article.

.NET

Damir Arh

DEVELOPING
DESKTOP
APPLICATIONS IN
.NET

Although there haven't been as many new developments in approaches to desktop development as there were in web development, there are still several different frameworks available to .NET developers for creating desktop apps. This article provides an overview of them, compares their features and gives recommendations on what to choose depending on the application requirements.

Since the initial release of .NET framework almost 20 years ago, the importance of desktop applications has
significantly decreased. At that time, most new applications were first developed for Windows because it
was by far the most common operating system. Today, it’s much more common to create a web application
first, because it will work on all operating systems. Despite that, there’s still a place for native applications
in scenarios where performance or user experience delivered by web applications, isn’t good enough.
There are three main application frameworks available for developing Windows applications in .NET
framework.

They don't only differ in the user interface that can be created with them, but also in the way the code is written and the user interface is designed. I will introduce these application frameworks one by one in
the order they were released.

WINDOWS FORMS
The first version of Windows Forms was released in 2002 at the same time as .NET framework 1.0. At that
time, the most popular tools for developing Windows applications were Visual Basic 6 and Borland Delphi 6.

Both followed the principles of rapid application development (RAD). To increase developer productivity,
they offered graphical designers for creating user interfaces by arranging available user interface controls
in the window. The code was written in an event-driven manner, i.e. developers were implementing event
handlers which responded to user’s interaction with the application.

Windows Forms takes the same approach. Applications consist of multiple windows, called forms. Using the
designer, the developer can place the controls on the forms and customize their appearance and behavior
by modifying their properties in the editor.

Figure 1: Windows Forms designer in Visual Studio 2019

As a result, most Windows Forms applications have a very similar appearance which is often referred to
as battleship gray. The best way to avoid this is to use custom third-party controls instead of the ones
included in the framework. Unfortunately, there aren't many available as open-source or freeware. The most
important commercial control vendors are DevExpress, Infragistics and Telerik.

Since the designer output is code, each form has two separate code files so that the code generated by the
designer doesn’t interfere with manually written code. Partial classes are used so that the code from both
files gets compiled into the same class.
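
To picture the split, a stripped-down sketch of the two files might look like this (simplified for illustration; the real designer file contains much more, as shown below):

// SubscribeForm.Designer.cs - owned by the designer
partial class SubscribeForm
{
  private System.Windows.Forms.Button submitButton;

  private void InitializeComponent()
  {
    // generated control creation and layout code lives here
  }
}

// SubscribeForm.cs - owned by the developer
public partial class SubscribeForm : System.Windows.Forms.Form
{
  public SubscribeForm()
  {
    InitializeComponent();
  }
}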

• Designer-generated code isn’t meant to be modified manually which is also stated in the comments of
the generated file:

/// <summary>
/// Required method for Designer support - do not modify
/// the contents of this method with the code editor.
/// </summary>
private void InitializeComponent()
{
this.emailAddressLabel = new System.Windows.Forms.Label();
this.emailAddressTextBox = new System.Windows.Forms.TextBox();
this.submitButton = new System.Windows.Forms.Button();
this.resetButton = new System.Windows.Forms.Button();
this.SuspendLayout();
//
// emailAddressLabel
//
this.emailAddressLabel.AutoSize = true;
this.emailAddressLabel.Location = new System.Drawing.Point(13, 13);
this.emailAddressLabel.Name = "emailAddressLabel";
this.emailAddressLabel.Size = new System.Drawing.Size(75, 13);
this.emailAddressLabel.TabIndex = 0;
this.emailAddressLabel.Text = "Email address:";
//
// a lot code skipped for brevity
//
this.AcceptButton = this.submitButton;
this.AutoScaleDimensions = new System.Drawing.SizeF(6F, 13F);
this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font;
this.CancelButton = this.resetButton;
this.ClientSize = new System.Drawing.Size(272, 72);
this.Controls.Add(this.resetButton);
this.Controls.Add(this.submitButton);
this.Controls.Add(this.emailAddressTextBox);
this.Controls.Add(this.emailAddressLabel);
this.Name = "SubscribeForm";
this.Text = "Subscribe";
this.ResumeLayout(false);
this.PerformLayout();
}

• All the other code belonging to the form is placed in the second file and is under full control of the
developer.

Each control raises different events during its lifetime in response to which the code in corresponding
event handlers gets executed.

public partial class SubscribeForm : Form
{
public SubscribeForm()
{
InitializeComponent();
}

private void ResetButton_Click(object sender, EventArgs e)
{
emailAddressTextBox.Text = String.Empty;
}

private void SubmitButton_Click(object sender, EventArgs e)
{
var emailAddress = emailAddressTextBox.Text;
// submit entered email address
}
}

Having the application business logic spread across many event handlers in multiple forms can make the
application difficult to maintain as it grows in size. It’s also challenging to write unit tests for it, leaving UI
tests as the only option for automated testing. UI Tests are more fragile and more time consuming to create,
than unit tests.

To avoid this issue, the model-view-presenter (MVP) design pattern can be used. This approach allows
most of the code to be moved from the form (i.e. the view) to the presenter class which is responsible for
reacting to events and updating the view. By mocking the views, presenters can be fully unit-tested.

Figure 2: Class interaction in model-view-presenter design pattern
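
As a rough sketch of how this could look for the subscribe form shown earlier (the interface and member names are illustrative, not taken from any particular MVP library):

using System;

// The form implements this interface; the presenter only knows the abstraction.
public interface ISubscribeView
{
  string EmailAddress { get; set; }
  event EventHandler SubmitClicked;
}

public class SubscribePresenter
{
  private readonly ISubscribeView view;

  public SubscribePresenter(ISubscribeView view)
  {
    this.view = view;
    this.view.SubmitClicked += OnSubmitClicked;
  }

  private void OnSubmitClicked(object sender, EventArgs e)
  {
    var emailAddress = view.EmailAddress;
    // submit entered email address, then clear the input
    view.EmailAddress = String.Empty;
  }
}

In a unit test, ISubscribeView can be replaced with a mock, so the presenter logic is exercised without any Windows Forms dependency.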

The MVP design pattern requires additional plumbing code to be written. This could be avoided by using
a library for that purpose, such as Composite UI Application Block (CAB) or MVC#. Although both are still
available for download, neither is supported anymore.

All of this makes Windows Forms not very suitable for creating new applications. An exception could be
where the nature of the application to be created makes the restrictions less important (e.g. it’s a small
application that’s not customer-facing) and the developers are more experienced with this framework than
with any of the others.

Another argument in favor of choosing Windows Forms over other frameworks can be its Mono
implementation which also works on Linux and macOS. Although not developed or supported by Microsoft,
it is highly compatible and can be a good approach for developing a desktop application for multiple
operating systems.

Editorial Note: If you are still into Windows Forms development, these WinForm tutorials may come in
handy.

WINDOWS PRESENTATION FOUNDATION (WPF)


Windows Presentation Foundation (WPF) was released as a part of .NET framework 3.0 in 2006. Although
there’s still a designer available for arranging the controls in a window, the created layout is not stored as
code.

Figure 3: WPF Designer in Visual Studio 2019

Instead, it is saved as an XML file using a special syntax named XAML (Extensible Application Markup
Language). Unlike the code for Windows Forms, this XML file can be much easier to understand and edit
manually.

<StackPanel>
<Grid>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="Auto"/>
<ColumnDefinition Width="*"/>
</Grid.ColumnDefinitions>
<Label Grid.Row="0" Grid.Column="0">Email address:</Label>
<TextBox Grid.Row="0" Grid.Column="1"
Text="{Binding EmailAddress, UpdateSourceTrigger=PropertyChanged}"/>
</Grid>
<StackPanel Orientation="Horizontal" HorizontalAlignment="Right">
<Button IsCancel="True" Command="{Binding ResetCommand}">Reset</Button>
<Button IsDefault="True" Command="{Binding SubmitCommand}">Submit</Button>
</StackPanel>
</StackPanel>

Also, the synchronization between the designer and the XML file is bidirectional: any changes made directly
to the XML file are immediately visible in the designer. This allows for greater flexibility when editing
the layout: individual changes can be made either in the designer or in the XAML markup, wherever the
developer finds it easier to achieve her/his goal.

Additionally, the positioning and appearance of controls can be decoupled from control declaration:

• Instead of absolutely positioning controls in the window using offsets, it is preferred to use separate
layout controls like StackPanel and Grid for that purpose:

Figure 4: Positing controls in the window with layout controls

• Styles can be used to define appearance and then applied to controls by control type or style name. This
makes it easier to achieve unified appearance of all controls and to modify appearance of controls even
after the windows were initially created.

<Application.Resources>
<Style TargetType="StackPanel">
<Setter Property="Margin" Value="2"/>

</Style>
<Style TargetType="TextBox">
<Setter Property="VerticalAlignment" Value="Center"/>
</Style>
<Style TargetType="Button">
<Setter Property="Margin" Value="2"/>
<Setter Property="Padding" Value="2"/>
<Setter Property="Width" Value="60"/>
</Style>
</Application.Resources>

All the controls in the framework are highly customizable. Therefore, WPF applications show much more
visual variety than Windows Forms and their technical origin can’t be recognized as easily.

Figure 5: Screenshot of 3M ChartScriptMD WPF application designed by Next Version Systems

However, creating highly-customized visually appealing applications has a steep learning curve and
requires experienced WPF developers.

To fill the space between the plain WPF applications and those polished manually to the highest extent,
there are control collections available, both open-source (e.g. Modern UI for WPF, MahApps.Metro, and
Material Design In XAML Toolkit) and commercial (e.g. available from DevExpress, Infragistics and Telerik).

Code is still event-driven. However, because of excellent binding support, it is much easier to decouple
code from the layout. Both data properties and event handlers (in the form of commands) can be bound to
controls in XAML markup.

By taking advantage of this, model-view-viewmodel (MVVM) design pattern became the standard approach
to developing WPF applications soon after the framework was released. It is somewhat similar to the MVP
pattern except that instead of the presenter directly interacting with the view, two-way binding is used for
exchanging data and events between the view and the viewmodel.

Figure 6: Class interaction in model-view-viewmodel design pattern
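
To give an idea of what that plumbing involves, here is a minimal hand-rolled ICommand implementation of the kind a viewmodel would expose for binding (a generic sketch, not code from any of the libraries listed below):

using System;
using System.Windows.Input;

// A small command wrapper around delegates, typically exposed as a
// viewmodel property and bound to a Button's Command in XAML.
public class RelayCommand : ICommand
{
  private readonly Action execute;
  private readonly Func<bool> canExecute;

  public RelayCommand(Action execute, Func<bool> canExecute = null)
  {
    this.execute = execute;
    this.canExecute = canExecute;
  }

  public event EventHandler CanExecuteChanged;

  public bool CanExecute(object parameter) => canExecute?.Invoke() ?? true;

  public void Execute(object parameter) => execute();

  // Called by the viewmodel when the command's availability changes.
  public void RaiseCanExecuteChanged() =>
    CanExecuteChanged?.Invoke(this, EventArgs.Empty);
}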

To avoid some of the plumbing code, one of the many open-source MVVM frameworks can be used:

• Prism was originally developed by Microsoft’s Patterns and Practices team but was taken over by
community once that team was disbanded.

• MVVM Light Toolkit was developed by Laurent Bugnion, now a Microsoft employee.

• Caliburn.Micro was developed by Rob Eisenberg whose latest project is the Aurelia JavaScript
framework.

Although the frameworks take slightly different approaches, they all primarily make it easier to create
commands, match viewmodels to views, and navigate between views.

The following viewmodel class uses Prism:

class MainWindowViewModel : BindableBase
{
private string emailAddress;
public string EmailAddress
{
get
{

return emailAddress;
}
set
{
SetProperty(ref emailAddress, value);
SubmitCommand.RaiseCanExecuteChanged();
}
}

public DelegateCommand ResetCommand { get; }
public DelegateCommand SubmitCommand { get; }

public MainWindowViewModel()
{
ResetCommand = new DelegateCommand(Reset);
SubmitCommand = new DelegateCommand(Submit, CanSubmit);
}

private void Reset()
{
EmailAddress = String.Empty;
}

private void Submit()
{
var emailAddress = EmailAddress;
// submit entered email address
}

private bool CanSubmit()
{
return EmailAddress?.Length > 0;
}
}

Even today, WPF is the most versatile and flexible framework for creating Windows desktop applications
and as such the recommended choice for most new Windows desktop applications.

Editorial Note: If you are into WPF programming, check out these WPF tutorials.

UNIVERSAL WINDOWS PLATFORM (UWP)


The origin of Universal Windows Platform (UWP) can be traced back to the release of Windows 8 in 2012
and the accompanying framework for development of touch-first applications, called Metro applications.

The framework evolved through the years, making it possible to target different Windows devices with the
same codebase.

First, support was added for Windows Phone 8.1 applications. At that time, these applications were called
Windows Store applications.

With the release of Windows 10 in 2015, the framework got its final name and eventually supported
development of applications for Windows desktop, Windows Mobile (successor of Windows Phone which
was in the meantime discontinued and will reach end-of-life in December 2019), Windows IoT Core,
Windows Mixed Reality (called Windows Holographic when first introduced), and Xbox One.

At first glance, UWP is very similar to WPF.

User interfaces created in the designer are saved as XAML files. Good binding support lends itself well to
the MVVM pattern. However, the controls are different enough from their WPF counterparts to make porting
of user interfaces from one platform to the other, difficult.

From their beginnings in Metro applications, UWP controls focus on consistent recognizable design, support
for different screen sizes and different input methods, including touch. In their latest incarnation, they
follow the Fluent Design System which is also used in most if not all Microsoft’s applications distributed
through Microsoft Store today.

Figure 7: Fluent Design System used in Windows Weather application

Also, because UWP applications are designed to be published in Microsoft Store, they run in a sandbox and
don’t have direct access to all Win32 APIs. However, additional Windows 10 UWP APIs are available to them
(providing access to Microsoft Store functionalities, such as live tiles, notifications, in-app purchases etc.)
which were previously not available to WPF and Windows Forms applications.
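
As a small illustration of one such API, the following sketch raises a toast notification through the Windows.UI.Notifications types (one of several ways to build toast content, shown here purely as an example):

using Windows.Data.Xml.Dom;
using Windows.UI.Notifications;

public static class ToastHelper
{
  public static void ShowToast(string message)
  {
    // Start from a built-in single-line text template and fill in the text.
    XmlDocument toastXml = ToastNotificationManager.GetTemplateContent(ToastTemplateType.ToastText01);
    XmlNodeList textNodes = toastXml.GetElementsByTagName("text");
    textNodes[0].AppendChild(toastXml.CreateTextNode(message));

    // Hand the toast over to the shell for display.
    var toast = new ToastNotification(toastXml);
    ToastNotificationManager.CreateToastNotifier().Show(toast);
  }
}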

Today, the differences between UWP applications and regular Windows desktop applications are much smaller than they were initially, mostly because Windows desktop applications can now call Windows 10 UWP APIs and can also be published in the Microsoft Store when using the so-called Desktop Bridge tooling (originally named Project Centennial). They are of course still restricted to targeting Windows desktop devices only.

On the other hand, UWP applications can call some Win32 APIs (support differs between the devices) when
their code is written in C++/CX (C++ component extensions).

UWP applications are your only choice if you want to target any non-desktop Windows devices. You might
also prefer them over WPF for Windows desktop applications if you want to target other Windows devices
with the same application or want to publish your application in Microsoft Store as long as you don’t need
any Win32 APIs not available to you in UWP applications.

Editorial Note: If you are an UWP developer, check out our UWP tutorials.

.NET CORE 3.0


In version 3.0, .NET Core will be expanded with support for Windows desktop applications written using
Windows Forms or WPF. Unlike other types of .NET Core applications, these will not be cross-platform and
will run only on Windows.

.NET Core 3.0 is planned for release in September 2019 and is only available in preview at the time of
writing. With the latest preview of Visual Studio 2019 and .NET Core 3.0, new Windows Forms and WPF
projects can already be created, built, and run. The biggest limitation at the moment is the fact that
Windows Forms designer doesn’t yet work with .NET Core projects which makes it difficult to do any kind
of serious development with .NET Core based Windows Forms applications. However, the issue should be
resolved before the final release.

Both Windows Forms and WPF applications are also being extended with the ability to use selected UWP
controls inside them (InkCanvas, MapControl, MediaPlayerElement, and WebView for now). This feature is
named XAML Islands and is currently available in preview for .NET Core 3.0 and .NET framework 4.6.2 or
newer. The final release for both platforms is planned to coincide with the final release of .NET Core 3.0 in
September 2019.

When this happens, .NET Core 3.0 based WPF applications will most probably replace .NET framework
based WPF applications as the recommended framework choice for most new Windows desktop
applications. Since version 4.8 was the last feature release for .NET framework, using .NET Core instead
of .NET framework for new applications will allow you to take advantage of the latest improvements (e.g.
better performance, C# 8 support) which aren’t going to be ported back to the .NET framework.

It will probably only make sense to port existing .NET framework-based Windows Forms and WPF
applications to .NET Core when they are still actively developed and would greatly benefit from .NET Core
exclusive features (e.g. side-by-side installation of different .NET Core versions). Although the process of
porting will likely improve until the final release, it will still probably require a non-trivial amount of work.

Conclusion:

The framework choice for desktop applications mostly depends on the devices which you want to target.
For applications targeting Windows desktop only, WPF is usually the best choice. Once the final release of
.NET Core 3.0 is available in September 2019, it will make sense to develop new WPF applications in it. But
until then, the .NET framework is your only option.

Since WPF applications don’t work on other Windows devices (such as IoT Core, Mixed Reality etc.), your best
choice is to use UWP instead. This will restrict which Win32 APIs are available to you, which is the reason
why WPF is preferred for desktop-only applications in most cases.

The only desktop framework not really recommended for writing new applications is Windows Forms.
Despite that, it is still fully supported and will even be available in .NET Core 3.0 when released in
September 2019. This means that there’s no need for rewriting existing Windows Forms applications in a
different application framework.

Damir Arh
Author
Damir Arh has many years of experience with Microsoft development tools; both in
complex enterprise software projects and modern cross-platform mobile applications.
In his drive towards better development processes, he is a proponent of test driven
development, continuous integration and continuous deployment. He shares his
knowledge by speaking at local user groups and conferences, blogging, and answering
questions on Stack Overflow. He is an awarded Microsoft MVP for .NET since 2012.

Thanks to Daniel Jimenez Garcia for reviewing this article.

Thank You
for the 7th Anniversary Edition

@yacoubmassad @dani_djg @damirarh dnikolovv

@subodhsohoni @vikrampendse @sommertim @gourisohoni

@jfversluis @mistryhardik05 Imran Siddique Mahathi

@suprotimagarwal @saffronstroke

Write for us - mailto: suprotimagarwal@dotnetcurry.com
