7th Anniversary Edition

Let's connect socially:
Facebook: facebook.com/dotnetcurry
Twitter: twitter.com/dotnetcurry
LinkedIn: linkedin.com/suprotimagarwal
GitHub: github.com/dotnetcurry
Letter from the Editor
"If you want to go fast, go alone. If you want to go far, go together!"

Today, we celebrate DotNetCurry (DNC) Magazine's 7th Anniversary edition, and I value you all who are sharing this special day with us.

I want to take this opportunity to congratulate my team of authors and reviewers for all their time, efforts and accomplishments; to thank you, the reader, who continues to inspire us; and our sponsors who have helped us keep this magazine freely available for the dev community.

Like every year, we will use this milestone as a springboard to raise the bar a little higher, and bring you some awesome tutorials in the coming months.

Enjoy this edition, and do not forget to email me your feedback at suprotimagarwal@dotnetcurry.com or reach out to us on twitter @dotnetcurry. Cheers!

Suprotim Agarwal (@suprotimagarwal)

Contributing Authors: Damir Arh, Daniel Jimenez Garcia, Dobromir Nikolov, Gouri Sohoni, Hardik Mistry, Imran Siddique, Mahathi, Subodh Sohoni, Vikram Pendse, Yacoub Massad

Technical Reviewers: Damir Arh, Daniel Jimenez Garcia, Dobromir Nikolov, Gerald Verslius, Gouri Sohoni, Subodh Sohoni, Tim Sommer
Next Edition: Sep 2019

Editor In Chief: Suprotim Agarwal (suprotimagarwal@dotnetcurry.com)
Art Director: Minal Agarwal
Copyright @A2Z Knowledge Visuals Pvt. Ltd.

Disclaimer: Reproductions in whole or part prohibited except by written permission. Email requests to “suprotimagarwal@dotnetcurry.com”. The information in this magazine has been reviewed for accuracy at the time of its publication, however the information is distributed without any warranty expressed or implied.

Windows, Visual Studio, ASP.NET, Azure, TFS & other Microsoft products & technologies are trademarks of the Microsoft group of companies. ‘DNC Magazine’ is an independent publication and is not affiliated with, nor has it been authorized, sponsored, or otherwise approved by Microsoft Corporation. Microsoft is a registered trademark of Microsoft corporation in the United States and/or other countries.
CONTENTS (We are 7)

06  The Maybe Monad (C#)
18  Building a Cloud Roadmap with Microsoft Azure
40  Authentication in ASP.NET Core, SignalR and Vue applications
72  Deploy an ASP.NET Core application to Azure Kubernetes Service (AKS)
90  Azure DevOps Search - Deep Dive
100 Integration Testing of Real-time communication in ASP.NET Core
116 Configuration driven Mobile DevOps
128 Developing Desktop applications in .NET
140 Using YAML in Azure Pipelines
THE MAYBE MONAD

Yacoub Massad

In this article, I will talk about the Maybe monad: a container that represents a value that might or might not exist.
Additionally, if a method has a return type that is a reference type, null is a valid return value:

string GetLogContents(int id) {
  var filename = "c:\\logs\\" + id + ".log";
  if (File.Exists(filename))
    return File.ReadAllText(filename);
  return null;
}

The above method returns null if a log file that corresponds to the requested id is not found.
The caller of the GetLogContents method will receive a string after the method is called. Should the
calling method check the value for null before using it?
If it doesn’t and the method returns null, a NullReferenceException is thrown if the calling method
tries to access a member of the returned string.
What’s more dangerous is that the returned string might be passed several times from method to
method before a member of the string is finally used. It is only when a member of the string is used that
the NullReferenceException is thrown. This might make it hard to figure out the real cause of the
NullReferenceException.
A string return type can also be used in methods that never return null. In these cases, the caller
shouldn’t have to check for null.
In C# (before C# 8), there is no way to distinguish between the two cases. Methods that never return null
and methods that might return null, both have the return type string.
Note: C# 8 is expected to have nullable reference types. This means that methods that might return null
can have the return type of string? and methods that never return null can have the return type string.
Editorial Note: If you haven’t yet read about the new C# 8 features, read them here > New C# 8 Features in
Visual Studio 2019. C# 8 is currently in preview at the time of this writing.
www.dotnetcurry.com/magazine 7
Maybe<string> GetLogContents(int id) {
var filename = "c:\\logs\\" + id + ".log";
if (File.Exists(filename))
return File.ReadAllText(filename);
return Maybe.None;
}
The signature of the method now tells us that it may return a string or it may not return a value.
Now, methods that always return a string (non-null) can have a return type of string, and methods that may
return a string, can have a return type of Maybe<string>.
A sum-type implementation
Note: the source code for this section is found here: https://github.com/ymassad/MaybeExamples/tree/
master/MaybeAsASumType
In the Designing Data Objects in C# and F# article, I talked about sum types. A sum type is a data structure
that can be any one of a fixed set of types. For example, we can define a Shape sum type that has the
following three sub-types:
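The three sub-types themselves were shown in a figure in the original article; the following is a sketch of what such a Shape sum type looks like (the sub-type names and properties here are illustrative):

```csharp
public abstract class Shape
{
    private Shape() { } // private constructor: only the nested sub-types below can inherit

    public sealed class Square : Shape
    {
        public Square(double sideLength) { SideLength = sideLength; }
        public double SideLength { get; }
    }

    public sealed class Circle : Shape
    {
        public Circle(double radius) { Radius = radius; }
        public double Radius { get; }
    }

    public sealed class Rectangle : Shape
    {
        public Rectangle(double width, double height) { Width = width; Height = height; }
        public double Width { get; }
        public double Height { get; }
    }
}
```

The private constructor ensures the set of sub-types is fixed, which is what makes this a sum type.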
Maybe<T> is a sum type. It has two subtypes: Some and None. This means that a variable of type Maybe<T>
can only hold an instance of type Maybe<T>.Some or Maybe<T>.None.
The Some subtype has a single property, the Value property. The None subtype has no properties because
it models the case where the value is missing.
Now, after a caller calls the GetLogContents method and gets a Maybe<string>, it can do something like
the following:
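The call site was shown as a figure in the original article; a sketch of it, assuming the sum-type Maybe<T> from the article's accompanying repository, might be:

```csharp
Maybe<string> contents = GetLogContents(13);

if (contents is Maybe<string>.Some some)
{
    // there is a value: write the log contents to the console
    Console.WriteLine(some.Value);
}
else
{
    Console.WriteLine("Log file not found");
}
```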
Here, we use the pattern matching feature of C# 7 to check if the contents variable is of type Maybe<string>.Some. If so, we write the log contents to the console. Otherwise, we inform the user that the log file was not found.
Note that here, there is no way (or at least it is hard) to access the value without first making sure that
there is actually a value.
The need to write Maybe<string>.Some here is inconvenient. The fact that we have to define the some
variable here and then access the Value property, is also inconvenient.
public bool TryGetValue(out T value) // reconstructed sketch; the repository code may differ
{
  if (this is Maybe<T>.Some some) { value = some.Value; return true; }
  value = default(T);
  return false;
}
This method returns true if there is a value, and false otherwise. Additionally, if there is a value, the value
out parameter will get the contained value.
if (contents.TryGetValue(out var value))
{
  Console.WriteLine(value);
}
else
{
  Console.WriteLine("Log file not found");
}
This is better since we don’t have to type the full type of the variable (e.g. Maybe<string>.Some). Also, we
don’t have to define a some variable.
However, there is a bigger issue here. Now, the value variable can be accessed anywhere, even in the else
branch where it does not contain a valid value.
A Roslyn analyzer can be built to prevent access to the value variable in a location where TryGetValue is
not known to have returned true.
Another option is to define a Match method for Maybe. I talked about Match methods in the Designing
Data Objects in C# and F# article.
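As a sketch (the parameter names here are assumptions, not necessarily what the repository uses), the Match-based call site might look like:

```csharp
GetLogContents(13)
    .Match(
        some: value => Console.WriteLine(value),
        none: () => Console.WriteLine("Log file not found"));
```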
The source code of the Match method used above is defined here.
Similar to the first solution, the value lambda parameter can only be accessed when there is a value.
One potential issue with this solution is performance. Using lambdas to describe how to handle the two
different cases (some and none) might allocate objects that will later need to be garbage collected.
In many cases, this is not an issue. Always measure when it comes to performance.
Note: there is another overload of Match that allows you to return something in each case instead of doing
something in each case.
This method returns null and therefore the consuming code will most likely not behave as expected. For
example, calling the Match method on null will throw a NullReferenceException.
Structs in C# cannot have the value null. For example, this code is invalid:
int a = null;
Therefore, if we define Maybe as a struct, we are guaranteed that it will never have the value null.
The struct version of Maybe can be found here. Here is some of the code:
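Condensed from the repository, the core of the struct version looks roughly like this (the real code has more members, and details may differ):

```csharp
public struct Maybe<T>
{
    private readonly T value;
    private readonly bool hasValue;

    private Maybe(T value)
    {
        this.value = value;
        this.hasValue = true;
    }

    public bool TryGetValue(out T value)
    {
        value = this.value;
        return hasValue;
    }
}
```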
Structs always have a public parameterless constructor that initializes all fields to their default values. This means that if we construct Maybe<string> like this:

var maybe = new Maybe<string>();

…the value field will get the value of null (the default for string), and the hasValue field will get the value of false (the default for bool). This will indicate that this instance contains no value.
The constructor defined above always sets hasValue to true. It is private, so it can only be used from
within the class. It is used in the following member:
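The member in question was shown as a figure; based on the description that follows, a sketch of it is:

```csharp
public static implicit operator Maybe<T>(T value)
{
    if (value == null)
        return new Maybe<T>(); // no value

    return new Maybe<T>(value); // uses the private constructor
}
```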
This is the declaration of an implicit operator for converting T to Maybe<T>. This means that we can assign
a string value to a variable of type Maybe<string>. It also means that we can return a string value
inside a method that has Maybe<string> as the return type.
The code here checks the value for null. If it is null, it returns a Maybe instance that contains no value.
Otherwise, it uses the defined constructor to return a Maybe instance that contains the value.
This operator is the reason why the GetLogContents method returns the result of calling the
File.ReadAllText method directly (which is of type string) without constructing a Maybe<string>.
I also defined a static non-generic Maybe class that has some interesting members:
None: a static property that is implicitly convertible to Maybe<T> for any T. That is, there is an implicit
conversion operator defined (see here) that allows it to be converted to Maybe<T> that contains no value.
See the GetLogContents method from before. The code in one of its branches returns Maybe.None.
Some: a static method that allows us to create an instance of Maybe<T> that contains a value. Unlike the
implicit operator, this method throws an exception if the value is null.
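The Map example from the original figure, reconstructed as a sketch:

```csharp
Maybe<string> maybeString = Maybe.Some("hello");

// convert Maybe<string> into Maybe<int> (the length of the string, if any)
Maybe<int> maybeLength = maybeString.Map(str => str.Length);
```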
In this example, we have a Maybe<string> that we convert into a Maybe<int> via the Map method. Map
allows us to convert the value inside the Maybe if there is a value. If there is no value, Map simply returns
an empty Maybe of the new type. In the example above, we want to get the length of the string inside the
Maybe.
The lambda given to the Map method will only be used if there is a value inside the Maybe.
return
logLines
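The fragment above comes from a method along these lines (reconstructed from the description that follows; the exact line-splitting code is an assumption):

```csharp
Maybe<int> FindErrorCode(string logContents)
{
    // split the contents into lines
    var logLines = logContents.Split(new[] { Environment.NewLine }, StringSplitOptions.None);

    return
        logLines
            .FirstOrNone(line => line.StartsWith("Error code: "))
            .Map(line => line.Substring("Error code: ".Length))
            .Bind(line => TryParseToInt(line));
}
```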
This method takes the contents of some log file and tries to find an error code inside it. Some line in the
file is expected to contain something like this:
Some log files might not contain an error and thus might not contain such a line.
First, the method splits the content into lines. Then, it tries to find a line that starts with “Error code: “.
The FirstOrNone method is just like the FirstOrDefault method in LINQ. I defined this method as an
extension method over IEnumerable<T>. If there is at least one item in the enumerable, FirstOrNone will
return a Maybe<T> that contains the first item. If the enumerable is empty, a Maybe<T> that has no value is
returned.
The Map method is used to convert the value inside the Maybe<string> (if there is a value). Here we want
to take a substring of the line. More specifically, we want to remove the “Error code: “ part from the line.
Now comes the Bind method. Like Map, Bind is also about converting or transforming the value inside the
Maybe.
There is a difference though. Let’s look at the signatures of both these methods:
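From the descriptions in the article, the two signatures are presumably along these lines (parameter names assumed):

```csharp
public Maybe<TResult> Map<TResult>(Func<T, TResult> convert)

public Maybe<TResult> Bind<TResult>(Func<T, Maybe<TResult>> convert)
```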
The difference is in the conversion function. When calling Map, we tell it how to convert T (the original value
type) to TResult (the new value type). When calling Bind, the conversion function is expected to return
Maybe<TResult>, not TResult.
Let’s look at the signature of the TryParseToInt method used in the example above:
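Based on the description that follows, its signature is presumably:

```csharp
public static Maybe<int> TryParseToInt(string str)
```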
This method is similar to int.TryParse. It takes a string and tries to parse it into a Maybe<int>. If the
string can be parsed into an int, the returned Maybe<int> will contain the result. Otherwise, an empty
Maybe<int> is returned.
If we had used Map instead of Bind in the FindErrorCode method above, the type returned would have
been Maybe<Maybe<int>>.
This type is not really useful and is hard to work with. Bind simply flattens Maybe<Maybe<TResult>> into
Maybe<TResult>. This is why Bind is sometimes called FlatMap.
Why is Maybe called a Monad?
A Monad is a container type C<T> that defines two functions:
Return: a function that takes a value of type T and gives us a C<T> where C is the type of the container. For
example, we can convert 1 into a Maybe<int> by using the Maybe.Some method:
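For example (reconstructing the one-line figure):

```csharp
Maybe<int> maybeOne = Maybe.Some(1);
```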
Bind: a function that takes a C<T> and a function from T to C<TResult> and returns a C<TResult>.
In the implementation in the source code, Bind is an instance method. So basically, the C<T> it takes is the
instance itself.
There are some rules that a Monad has to follow regarding the Bind function. I am not talking about these
rules here because I want to keep this article practical and not theoretical.
The Return function for IEnumerable<T> is simply the creation of an array that contains a single item.
We can also define a Return method like this:
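A minimal version might be (the containing class name here is illustrative):

```csharp
using System.Collections.Generic;

public static class EnumerableMonad
{
    // Return for IEnumerable<T>: wrap a single value in a one-item array
    public static IEnumerable<T> Return<T>(T value)
    {
        return new[] { value };
    }
}
```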
The Bind function for IEnumerable<T> is SelectMany. Consider its signature here (I changed TSource
to T to make it easy to read):
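With that renaming, the (simplified) signature is:

```csharp
public static IEnumerable<TResult> SelectMany<T, TResult>(
    this IEnumerable<T> source,
    Func<T, IEnumerable<TResult>> selector)
```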
I have talked about GetLogContents and FindErrorCode earlier. GetErrorDescription takes an int
representing the error code and returns Maybe<string> representing the error description. This method
might return an empty Maybe if no error description can be found for the specified error. Here is the
definition of this method:
Maybe<string> GetErrorDescription(int errorCode)
{
  var filename = "c:\\errorDescriptions\\" + errorCode + ".txt"; // path illustrative
  if (File.Exists(filename))
    return File.ReadAllText(filename);
  return Maybe.None;
}
What the Test4 method does is that it gets the log contents (if any), finds the error code inside the log
contents (if any), and finally gets a description of the error code (if any).
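The original figure for Test4 is missing here; based on the description above, it presumably chains the three methods with Bind, roughly:

```csharp
void Test4()
{
    Maybe<string> errorDescription =
        GetLogContents(13)
            .Bind(contents => FindErrorCode(contents))
            .Bind(errorCode => GetErrorDescription(errorCode));
}
```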
Currently, GetErrorDescription only requires access to errorCode because it uses the file system to
find the description of the error based on what is stored in the files.
Maybe<string> GetErrorDescription(int errorCode, string logContents)
{
  // split into lines (exact splitting code assumed)
  var logLines = logContents.Split(new[] { Environment.NewLine }, StringSplitOptions.None);
  var linePrefix = "Error description for code " + errorCode + ": ";
  return
    logLines
      .FirstOrNone(x => x.StartsWith(linePrefix))
      .Map(x => x.Substring(linePrefix.Length));
}
This method expects to find the error description in the log contents in a special line. For example, a line
might contain the following:
This GetErrorDescription overload requires the log contents as a parameter. We can pass contents to
this method in the following way:
void Test5()
{
  GetLogContents(13)
    .Bind(contents => FindErrorCode(contents)
      .Bind(errorCode => GetErrorDescription(errorCode, contents)));
}
Notice how the second call to Bind is now nested. I did this to be able to access the contents lambda
parameter.
Test4 was not great when it comes to readability. Test5 is even a bit less readable. Imagine if we have 10
operations instead of just 3. That will be even less readable.
Here in Test6, I am using LINQ query syntax to do the same thing as Test5.
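Reconstructed as a sketch (the original code was shown as a figure), Test6 might read:

```csharp
void Test6()
{
    Maybe<string> errorDescription =
        from contents in GetLogContents(13)
        from errorCode in FindErrorCode(contents)
        from description in GetErrorDescription(errorCode, contents)
        select description;
}
```

Note how query syntax keeps both contents and errorCode in scope without nesting the Bind calls by hand.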
To make Maybe work with LINQ, I had to define some Select and SelectMany methods inside Maybe.
With this, Select works exactly like Map. SelectMany is similar to Bind but has an extra parameter:
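Its signature might look like this (names taken from the description below):

```csharp
public Maybe<TResult> SelectMany<T2, TResult>(
    Func<T, Maybe<T2>> convert,
    Func<T, T2, TResult> finalSelect)
```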
If the first Maybe value contains a value (T), and the result of calling convert (Maybe<T2>) contains a
value (T2), then the finalSelect function is called to compute something from T and T2.
Maybe<Customer> customer =
Maybe.Some(30)
.SelectMany(
convert: (int age) => Maybe.Some("Adam"),
finalSelect: (int age, string name) => new Customer(name, age));
In select new Customer(name, age), we need access to both name and age and this is what the
finalSelect function gives us. It also allows us to produce a value of a type that is different from the types
of the two involved Maybe objects.
Where returns an empty Maybe if the value does not meet the condition.
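For example (a sketch, assuming a Where method is defined on Maybe as described):

```csharp
Maybe<int> adult = Maybe.Some(30).Where(age => age >= 18); // still contains 30
Maybe<int> minor = Maybe.Some(12).Where(age => age >= 18); // empty Maybe
```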
LINQ query syntax was designed to be extensible in the way we just saw. I might talk about this in detail in an upcoming article.
Conclusion:
In this article, I talked about Maybe; a container that may or may not contain a value.
The most important thing Maybe does is that it allows us to express when something is optional.
I showed two implementations of Maybe; one that uses a class and one that uses a struct. I think using a
struct is better because a struct Maybe models exactly two states (has a value and has no value), while a
class Maybe models another state which is null.
I talked about the Map and Bind methods and how they allow us to convert/transform the value inside
Maybe.
I also talked very briefly about what it means to be a Monad and gave an example of IEnumerable<T> as
a Monad.
Finally, I explained how we can use LINQ query syntax to work with Maybe in a more readable way.
Yacoub Massad
Author
Yacoub Massad is a software developer who works mainly with Microsoft technologies. Currently, he works
at Zeva International where he uses C#, .NET, and other technologies to create eDiscovery solutions. He
is interested in learning and writing about software design principles that aim at creating maintainable
software. You can view his blog posts at criticalsoftwareblog.com.
AZURE

Building a Cloud Roadmap with Microsoft Azure

Vikram Pendse
The CXO board, IT Head and the Technical and Solutions Architect group of Foo Solutions have decided to
adopt Microsoft Azure as their cloud platform on the following basis:
3. They recently acquired a small firm who has a large Open Source Applications portfolio
4. They want to go global and reach out to their customers in different geographies
However, they don’t have any Microsoft Azure experts or Architects who can guide them through the
process.
So now, let us discuss a few things the team of Foo Solutions should know about and consider while
migrating their existing applications to Azure, and build new Cloud First applications in their due course of
adopting Microsoft Azure.
1. Low business impact, sizable userbase, no critical or sensitive data, and public facing.
3. Applications which are stable, critical, have an impact on business, are public facing, and handle sensitive or critical data.
4. Applications which are on the verge of EOL a.k.a. end-of-life (like Silverlight apps which need to be migrated, or .NET 2.0 apps which need to be moved to the latest .NET Framework).
5. Applications which need to be scrapped and re-written. Potential “Cloud First” apps with minimum reusability of the existing app, tending towards a new design. Applications which need to embrace Microsoft Azure Services.
There are many assessment and migration tools offered by Microsoft and 3rd party partners/vendors of Microsoft. Ideally, the technical group at Foo Solutions should do a detailed analysis of each tool, accounting for the challenges they might face during migration, cost impact, business risks, downtimes etc.
Accordingly, a migration roadmap can be built. To ease this activity of assessment and migration, let us
discuss a few commonly used tools which will ease your initial assessment work and also help in the actual
migration to Microsoft Azure.
Many customers are still running Classic ASP based apps live in production, running their business as usual with a certain sizable number of users. If such customers are not re-writing their apps and wish to continue with the legacy platform, they can leverage the Azure IaaS platform to host their applications. Note that there is no out-of-the-box tool from Microsoft Azure which will give you assurance of migration, so you may have to do some configuration changes.
Migrations can be difficult and challenging when they need to be performed in a short time span. Hence some quick automated assessment is required, which reduces the risk in making the Azure IaaS vs. PaaS decision.
Microsoft addresses these concerns with a quick, handy and easy to use tool. To check whether your existing application, hosted on-premise or in any other datacenter, is suitable for moving to Azure PaaS, Microsoft provides an App Service Migration tool, which helps you do the primary assessment and gives you insights about all the technologies used and whether they can be ported to Azure as an Azure App Service (which is Azure PaaS). This is a FREE tool available at https://appmigration.microsoft.com/ and you can also install it in your existing on-premise environment. It will scan your endpoint (the URL of your application), or, if you install it, your on-premise environment.
This is however currently available for .NET applications; Microsoft will soon support other application types as well. The assessment report is not just a Boolean result stating whether the application can be migrated or not; it does a detailed readiness check of the following points:
• Port Bindings
• Protocols
• Certificates
• Location Tags
• ISAPI Filters
• Application Pools
• Application Identity
• Authentication Type
• Application Settings
• Connection Strings
• Frameworks
• Configuration Error
• Virtual Directories
For more details, you can refer to the detailed metadata information mentioned here
https://appmigration.microsoft.com/readinesschecks
Migrate your SQL database to Microsoft Azure with Microsoft Data
Migration Assistant
This is one of the popular tools (also known as “DMA Tool”) to migrate your on-premise SQL database
instance to Azure SQL Server or SQL instance on Azure VM, accessible from an on-premise network.
Like the App Service Migration Tool mentioned earlier, this tool also does an assessment and gives
details of blocking issues and enlists the unsupported features. It also accounts for breaking changes and
deprecated features.
In order to run this tool, you need to have the sysadmin role assigned to you. This is also a FREE Tool and
can be downloaded from here - https://www.microsoft.com/en-us/download/details.aspx?id=53595.
Besides assessment, it allows you to migrate your on-premise instance to Azure SQL, Azure SQL Managed Instance, or SQL on an Azure VM.
Note: If you are running SQL Server 2008 for your applications/business, kindly check the end of life (EOL)
announcement for SQL Server 2008 and the newly announced “Azure Hybrid Benefit” offer from Microsoft for SQL
Server 2008 migration. More details here - https://azure.microsoft.com/en-us/pricing/hybrid-benefit/
Migrating to Cosmos DB
Microsoft Azure Cosmos DB is a revamped version of the previously available DocumentDB with many more
new features and enhancements.
Cosmos DB is also mainly used in apps leveraging schema agnostic model like various IoT and e-Commerce
solutions. Cosmos DB has its own use cases. Now in order to migrate to Cosmos DB, Microsoft provides
another tool like DMA which is known as Azure Cosmos DB Data Migration Tool.
Azure Cosmos DB Data Migration Tool is an Open Source Project from Microsoft
https://github.com/azure/azure-documentdb-datamigrationtool and you can download it from here
https://www.microsoft.com/en-us/download/details.aspx?id=46436
Azure Cosmos DB Data Migration Tool enables enterprises to move their collections/schemas from JSON, MongoDB, Azure Table, SQL and a few other data sources. Cosmos DB provides a rich set of APIs for SQL, Graph, Table and Gremlin. So, in case you want to replace your current Azure Table Storage with Cosmos DB, you don’t have to make much effort as most of your code remains as-is. This is because the Table API provides the same set of method signatures. So, with minimal configuration changes, you can swiftly move to Azure Cosmos DB. This is again a FREE tool from Microsoft.
Figure 5: Azure Migration Service for VMware
This however currently supports only the VMware environment; Hyper-V support is not generally available yet. It is an “agentless” discovery mechanism and it works by having a collector VM inside your on-premise environment. Although this is a FREE service, the components provisioned using this service will be charged as per their respective pricing.
Post assessment, you can then perform the actual migration using different Azure Services. For SQL
databases, we have already discussed about DMA and you can also explore Azure Database Migration
service on the same lines.
Beside these tools and services that we saw so far, you can always create a new infrastructure using Azure
CLI or PowerShell and can also try some popular 3rd party tools like Movere.
Although it may look like a very simple question and the obvious answer is “Yes”, you still need to have a
detailed conversation with customers or stakeholders to understand their requirements for Security.
Governance and Security always go hand in hand. So along with security, having governance is equally
important. Mechanisms like Role Based Access Control (RBAC) and Azure Policy will allow you to customize
these governance policies. Let us quickly go through the most common security challenges you face on any
cloud.
• Lack of monitoring services
• Data moved in an insecure way
• Application vulnerabilities
• Lack of patch and update management
• Lack of security specific education
• Compromised users
• Lack of Role Based Access Control (RBAC)
• Wrong security assumptions
Security is a broader topic and has different flavors. In Microsoft Azure, we can bucket “Security” into two
parts – One is Application Security and the other is Environment Security (regardless of using IaaS or PaaS).
Data Security is also a subset of this conversation.
In Azure, data in transit is encrypted and hence it is secure. Stored Data is partially secure with the
assumption that your data stores are not compromised. Example – Data in Azure Storage is secured as long
as Keys/SAS tokens are taken care of, and not compromised. Data on VMs is secured, as long as it is not
getting accessed by unwanted users in public domain, and even within organizations.
The most generic and powerful, yet highly underestimated service, unknown to many customers, is the Azure Security Center.
It comes with two pricing models - “Free” and “Standard”. Check what features are covered under each
pricing model here.
Many customers have a perception that it just shows the status of VM updates and patches and puts
recommendations on top of them. However today, “Azure Security Center” is one of the very powerful single
dashboard services for your entire Azure workload which closely monitors your Azure components and
gives you a real time feed of the current status of your workloads. It also gives you a compliance score
using which you can ensure whether your workload and services configurations are aligned with your IT
policies and standards, or not.
Besides being a security dashboard for the entire subscription, it covers five major aspects –
• Policy and Compliance – Scoring against standard compliances like PCI, SOC, ISO etc.
• Resource Security Hygiene – Recommendations at the resource level (Compute, Identity, Networking etc.)
• Advanced Cloud Defense – Recommendations at the VM and VNET level by providing Just-in-time VM Access and Adaptive Network Hardening
• Threat Protection – Setting up custom alert rules
• Automation & Orchestration – Creating playbooks and integrating Logic Apps
Azure Security Center pricing is based on the pricing model tier you choose i.e. Free and Standard. Once you
enable Azure Security Center, it starts collecting the necessary data from your Azure components. To know
more about Data Privacy and data collection policies, do read Azure Security Center documentation before
opting.
Web Application Firewall (WAF)
You can configure a Web Application Firewall (WAF) inside your Application Gateway. This enables you to validate your application against the OWASP Top 10 / ModSecurity rules (versions 2.2.9 and 3.0). This web application firewall also works for workloads deployed with Classic mode deployment along with ARM.
It also protects your application from DDoS attacks. We already have Azure DDoS Protection as a separate service in Azure, but it is expensive compared to WAF. WAF provides real time protection with Detection and Prevention modes. Detection mode is usually turned on in the Dev/Test phase, and if we keep logs on, we can capture more details. Prevention mode is usually turned on for the production phase. In case of an attack, it returns a 403 error.
Note: There is a separate “Web Application Firewall” managed service for Azure Front Door, so do not confuse the WAF built into Application Gateway with the managed Web Application Firewall service for Front Door. AFD also has Traffic Manager-like capabilities with low latency features, so based on latency, it automatically manages requests. Also note that AFD has a dedicated designer, unlike WAF.
In the frontend host, you can configure your app, and in backend pool, the requests are routed based on
latency by AFD.
You can configure routing rules as per your business requirements. Traffic Manager and AFD can run in
parallel and you can also replace Traffic Manager with AFD for web apps.
Note that AFD can route to only public endpoints, so while designing the architecture, you need to make
a call of what to opt out of WAF, Front Door and Traffic Manager based on what scenarios you are dealing
with. AFD can certainly be a good choice when you have multiple region origins or globally distributed
users, and performance is key.
Azure Sentinel provides state-of-the-art analytics with minute details of different Azure service components through a set of rich connectors. It has a small built-in case management board (a lightweight flavor of ticketing systems like Zendesk) which allows you to investigate security incidents and issues by assigning them to the respective users.
Figure 12: Sentinel Dashboard
With different data connectors, it captures and displays all the data in a single dashboard, i.e. a SIEM dashboard.
If you are familiar with Log Analytics – OMS (Operational Management Suite), the dashboard of connectors
is pretty much the same visually. It provides built-in queries and gives detailed RCA in case of any threats.
Case management allows you to assign a particular incident to your users (users of the Azure Portal with appropriate roles in place). To hunt down an issue, the Hunting option gives you a decent number of built-in queries which you can run.
So along with other monitoring tools like Log Analytics (OMS) and Application Insights, Azure Sentinel serves the purpose of a true cloud-native SIEM tool.
• Applying NSGs (Network Security Groups) at the subnet or VM level to control inbound and outbound traffic by providing IP ranges and rules
• Blocking ports which can be a threat and need not be exposed to other Azure services or public traffic. RDP can be blocked, and if someone still needs RDP access to a VM for administrative work, make use of a Jump Server
• Use of an appropriate DMZ and 3rd party firewalls like Barracuda
• Azure RBAC and Policies in place for better control and governance
Azure PaaS (mostly the App Service model) hosted apps can be protected by the following measures –
• Manage SAS tokens and keys effectively for Azure Storage and the keys of other APIs
• Classify your data (Public vs. Confidential), and accordingly choose an appropriate data source and protect it
• Use Azure Key Vault to store secret keys (including passwords of Azure VMs)
• Run OWASP Top 10 testing for your application and align it with OWASP Top 10 policies
• Use Azure DDoS protection and Azure Pen Testing to ensure the highest level of security for your application
With this, we have covered the major items for Foo Solution Ltd. and provided guidance on their migration
approach and on the security of their applications and cloud components.
Now let us discuss some reasons why organizations fail in their Cloud Migration journey, and how it impacts
adoption.
Moving to the Cloud is not an easy decision, and opting out later is equally painful. To help you avoid that, I
will list some preventive measures and points to consider in order to realize an ROI on your
Cloud investment.
We will bucket them into two categories: Technical and Non-Technical.
Technical Challenges
• Assuming Azure IaaS is the final solution, and burning out – Moving everything to Azure IaaS without
designing appropriate High Availability/Availability Zones can be a disaster. We have discussed a
couple of assessment tools in this article. Enterprises/companies should first do a thorough analysis
using the tools available, and then make a clear choice between IaaS and PaaS. PaaS is usually cheaper,
more flexible, and easier to deploy and maintain.
• Lack of awareness of Azure Services and Tooling – Microsoft Azure is a dynamic cloud platform and
is continuously evolving with new features. Microsoft keeps adding and updating its value-added
services. After doing an assessment, architects and decision makers need to map Azure services to
their existing apps and see what is best suited to achieve their business goals, as well as
customer satisfaction.
• Blindly Mapping Services to Competing Cloud Providers (e.g. Amazon AWS) – Many customers moving
from Amazon AWS, or pursuing a multi-cloud strategy, tend to map services head to head and assume
everything will work hassle free. I recommend doing a quick assessment, especially for Microsoft Azure
where there is a plethora of services and wider choices available. For example, when mapping
AWS Lambda, the obvious equivalent is Azure Functions, since both are serverless offerings. But do
revisit the requirement, since it may happen that what you are looking for can be served by Azure
API Apps as well. This is just a high-level example, but besides this, cost is also a factor, so ensure
you are not blindly mapping services, but rather evaluating them for better optimized use.
• Wrong Technical assumptions and SLA assumptions – Enterprises/companies first need to
understand the different SLAs for the different services in Azure. They also need to understand the terms
and conditions to achieve those SLAs, and ensure the steps required to fulfill them are taken. “High Availability”
and “Maintenance of VMs” (especially in Azure IaaS) are the most misunderstood terms. For
Azure IaaS, do understand the “Shared Responsibility” concept before opting for it.
• Wrong assumptions about Security – In an earlier section of the article, I mentioned that customers
often ask “Is Azure Secure?” Feel free to have a conversation with the customer and ask her/him
a few questions of your own, like “Is your application secure in its current environment, and what
measures have been taken to ensure its security?”.
While this may open up Pandora’s box, you will get the opportunity to showcase some of the built-in
security measures and cloud-native security services Microsoft offers. This should lead to a good value
proposition. You need to understand, and help the customer understand, the following:
o Data Classification – The difference between Public Data and Private Data; how Microsoft treats data
hosted in Azure; and what Microsoft’s policies for the same are (check the Microsoft Trust Center for more
details - https://www.microsoft.com/en-us/trustcenter/cloudservices/azure).
o Educate the customer on how Microsoft ensures enterprise-grade security in its data centers across
the world, and on the compliance certifications they have.
o Educate the customer to differentiate between Application Security and Cloud Security, and the
different measures and services associated with each.
o Encourage customers to opt for Monitoring services (many customers bypass this recommendation
to save a few dollars in the monthly bills)
• Lack of tools/questionnaires to capture the requirements for Azure (capturing business goals, high-level
details of the current application/infrastructure, etc.).
• Lack of knowledge, and wrong assumptions, about 3rd party service integration in Azure.
• Poor knowledge of product licensing, especially in Hybrid or Lift and Shift migration scenarios
in Azure, and lack of knowledge of license reusability.
• Poor communication with the ground Sales and Partner teams of Microsoft, who can frequently share
publicly available value-added updates and more insights.
Being an enterprise friendly organization, Microsoft understood this aspect, and to resolve this problem
they introduced the Azure App Configuration service, which is a one-stop repository to store all your key-values
and configuration settings securely. Just like you read your configuration files, you can read these
settings with a set of APIs.
You can also import and export them at any time, and it is quite easy to manage them from the Azure portal
too.
Cloudockit
Cloudockit is a third party, multi-cloud solution to document your cloud workloads in depth.
The tool produces detailed technical documentation, and is useful where compliance rules require you
to share documents with customers, or to maintain them for audit purposes. It is a quick tool
which will save you the time you would otherwise spend building documentation manually.
This is a paid tool, and you can take a free trial at cloudockit.com.
The number of cores and the amount of memory can usually be picked based on the following parameters:
Although this is not a precise measure, at an initial level it is good enough to pick the VM type
and size. You always have scaling mechanisms like VM Scale Sets which can scale on demand.
I have usually seen many people do a Proof of Concept (PoC) followed by a Load Test, and check the
overall performance before choosing the VM specs. Here is a quick chart which can help you choose a series
of VMs based on the nature of your business:
If it is a partner/dev team, you may ask them to share their screen over Skype or Microsoft Teams, or have
a screenshot sent over email. But for a production environment with a large consumer user base,
that is not possible. Traditionally, people would provision VMs in those regions or
manipulate the geo/time settings to test.
This is not a standard or proven technique, especially in the Cloud era. Hence, if you have Application
Insights wired into your application, you can check this with the “Availability” feature as shown in Figure 19,
and can run the test from different regions.
Microservices
If at all you are considering a Microservices based design or architecture, then make a note of the
following offerings, which will help you pick the correct service in Azure –
ACS – Azure Container Service (Deprecated – Kubernetes is now the industry standard, hence AKS, the Azure
Kubernetes Service, is the new alternative)
ASF – Azure Service Fabric. Good for Windows workloads. Ideal for non-containerized and stateful apps
ASFM – Azure Service Fabric Mesh – Managed service offering for ASF
DevOps
VSTS (Visual Studio Team Services) has been rebranded as Azure DevOps, with many new capabilities and
services. Azure DevOps enables you to build different dashboards, and to build CI-CD and CT pipelines with many
open source tools like Maven, Jenkins etc.
Conclusion:
Microsoft Azure is one of the leading public cloud platforms, with unique hybrid, secure and
enterprise-grade SLA offerings.
Azure gives a good ROI, provided you align your migration and new application development strategy to it. I
hope this tutorial has helped you get over common misconceptions about Microsoft Azure.
The suggestions described in this tutorial will also help you avoid mistakes, realize a better ROI, and
enable you to make decisions and build a long term, sustainable, profitable and secure Cloud roadmap for
your organization, to serve your customers and consumers better!
Vikram Pendse
Author
Vikram Pendse is currently working as a Cloud Solution Architect at e-Zest Solutions Ltd.
in Pune, India. He has 12+ years of IT experience spanning a diverse mix of clients and
geographies in the Microsoft domain. He has been an active Microsoft MVP since 2008,
and currently holds the MVP award in Microsoft Azure. Vikram is responsible
for building the "Digital Innovation" strategy for e-Zest customers globally using Microsoft
Azure and AI. He is a core member of local Microsoft communities, and participates as a
speaker in many Microsoft and other community events, talking about Microsoft Azure
and AI.
ASP.NET CORE
AUTHENTICATION IN ASP.NET CORE, SIGNALR AND VUE APPLICATIONS
Figure 1, users will now be able to login
As we saw in the previous article, SignalR was used to provide real time updates of votes and answers.
However, authentication was nowhere to be seen! In this first section of the article, we will update the
application so users can login/logout, and so the site is effectively read-only for anonymous users, including
the SignalR hubs.
We will start following the most common authentication scheme: cookie based authentication. At a very
high-level, it works like this:
1. The browser sends user entered credentials (like username and password) for a server to validate.
2. If the server determines the credentials are valid, it generates an encrypted cookie used to identify the
user and includes a Set-Cookie header in the response sent back to the browser.
3. The browser receives the response and reads the Set-Cookie header, saving the cookie to the cookie jar.
4. Upon any further requests, the browser automatically includes the cookie within the requests.
5. The server inspects the received headers on every request, expecting to find the authentication cookie
it sent upon authentication. In order to authorize the request, it can decrypt and verify the cookie
contents.
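The round trip above can be sketched in a few lines of plain JavaScript. The header names (Set-Cookie, Cookie) are real HTTP headers, but the server, browser and cookie value below are simplified stand-ins, not real HTTP machinery:

```javascript
// Minimal simulation of the cookie round trip described above.
const server = {
  login (credentials) {
    // Step 2: valid credentials produce a Set-Cookie response header
    if (credentials.password !== 'secret') return { status: 401, headers: {} }
    return { status: 200, headers: { 'Set-Cookie': '.AspNetCore.Cookie=opaque-encrypted-value' } }
  },
  authorize (headers) {
    // Step 5: the server looks for the cookie on every request
    return headers.Cookie === '.AspNetCore.Cookie=opaque-encrypted-value'
  }
}

const browser = { jar: {} }

// Steps 1-3: send credentials, store the cookie from the Set-Cookie header
const response = server.login({ username: 'jane', password: 'secret' })
if (response.headers['Set-Cookie']) {
  browser.jar.Cookie = response.headers['Set-Cookie']
}

// Step 4: further requests automatically include the cookie
const authorized = server.authorize({ Cookie: browser.jar.Cookie })
console.log(authorized) // true
```

A real cookie value is encrypted and opaque to the browser; only the server can decrypt and verify it.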
Of course, things are a little more complicated. There are multiple ways a server can log in a user (not just
username and password; for example, OAuth with 3rd party services like Google or Twitter), and cookies
themselves need to be configured to be secure (they shouldn’t be accessible to JavaScript, should ideally be
sent over HTTPS only, and should be restricted to specific domains/sites).
ASP.NET Core Identity takes care of it all, providing a complete solution and a very convenient way of
adding authentication to ASP.NET Core web applications.
However, there is a problem with so much convenience, and that is, its controllers and views are geared
towards traditionally server-side rendered applications! That is, Razor pages/views will render elements
like login forms, these in turn will send full page POST requests to the controllers, which finally respond
with a redirect back to the home page.
This might not work so well in the context of SPA applications like the one used in this article (unless you
can live with full page posts and redirects in your authentication pages). Ideally, the server will just provide
an authentication API, leaving the UX workflow to the client side of the SPA (the Vue application in our
case).
We will then begin by introducing a new API into our server side ASP.NET Core application in order to
provide cookie-based authentication.
In order to maintain pace and focus, we will leave aside 3rd party OAuth providers during this article and
consider local accounts only. (Many of the problems and techniques you will face are similar, so you will be
better equipped once you understand local accounts! And who knows, it might be the subject of a future article?)
We could use the scaffolding provided by ASP.NET Core Identity or we could manually write the controller
using the Cookie authentication services.
In this article, I will manually write the controllers due to the following reasons:
• The controller code generated by the scaffolder for login/logout actions assumes the application will
use full posts followed by redirects, instead of an API called from JavaScript.
• We need to write the client elements ourselves as part of our Vue application.
However, there will be nothing wrong if you decide to use the provided scaffolding. Simply discard the
generated views and manually modify the generated controller code.
Enough about setting up the context, let’s start writing some code! The first thing we are going to do is to
enable the necessary services and middleware in our Startup class. First let’s define a new constant for
the Cookie authentication scheme:
Next, add the following code to the ConfigureServices method. It will add the authentication services
using Cookie based authentication as the default scheme:
If you remember from the earlier article, our client and server side applications are deployed
independently. Even during development, the client application runs in localhost:8080 while the server
runs in localhost:5100. This means we need to change the default SameSite setting or we won’t be able to
authenticate our app (as long as they are deployed as different sites). Of course, if you are deploying both
client and server side from the same site, you should leave this with its default lax value!
Now that all the required services are added and configured, update the Configure method to add the
authentication middleware right after the CORS middleware:
app.UseAuthentication();
This is the middleware that extracts user information from the request (using the configured scheme),
enabling the application to perform authentication challenges, for example when adding the [Authorize]
attribute.
Before we continue, notice how we are not adding the Identity services that provide the functionality to
create, retrieve and validate user accounts. There is good documentation on how to add the Identity services,
but doing so would add significant noise to the article (it relies on using a database and Entity
Framework, neither of which are used by our sample application).
As you will see, when we look at the AccountController implementation, we will simulate the Identity
functionality by manually validating user credentials and manually creating the ClaimsPrincipal
instances.
Let’s finish the server side changes by adding a new AccountController that provides the new /account
API with login and logout endpoints:
[Route("[controller]")]
public class AccountController : Controller
{
[HttpPost("login")]
public async Task<IActionResult> Login([FromBody]LoginCredentials creds)
{
// You would typically move the validation of credentials
// and the creation of the matched principal into its own AuthenticationService.
// It is left here for convenience of the sample project/article
if (!ValidateLogin(creds))
{
return Json(new
{
error = "Login failed"
});
}
var principal = GetPrincipal(creds, Startup.CookieAuthScheme);
await HttpContext.SignInAsync(Startup.CookieAuthScheme, principal);
return Json(new
{
name = principal.Identity.Name,
email = principal.FindFirstValue(ClaimTypes.Email),
role = principal.FindFirstValue(ClaimTypes.Role)
});
}
[HttpPost("logout")]
[Authorize]
public async Task<IActionResult> Logout()
{
await HttpContext.SignOutAsync();
return StatusCode(200);
}
This is very similar to the code you might have seen so far in scaffolded account controllers. The most
important bits are the two lines that actually perform the login and logout functionality:
• In the login method, await HttpContext.SignInAsync(Startup.CookieAuthScheme, principal); uses the
Cookie scheme to generate the encrypted cookie, and includes a Set-Cookie header in the HTTP response
that instructs the browser to save it.
• In the logout method, await HttpContext.SignOutAsync(); uses the Cookie scheme and includes
another Set-Cookie header in the HTTP response that instructs the browser to remove the cookie.
A few more details are worth noting:
• The controller actions do not return a ViewResult nor a RedirectResult. Instead they return
JsonResult and StatusCodeResult! This is vital for the Vue application to call this API using
JavaScript.
• The login controller expects credentials to be received as part of the body, so the client can send them
as JSON.
• As mentioned earlier, we are not using the Identity services like the SignInManager class to validate
user credentials and create ClaimsPrincipal instances. Instead, we are replacing that with stub
functionality that will let anyone authenticate! Replace these methods with real implementations in
your application.
At this point, you should be able to test your API using a tool like Postman or cURL to send a JSON body with
some username and password credentials. You should see the Set-Cookie header in the response:
That’s it, we have a simple but functional API that allows the Vue application to use JavaScript in order to
login and logout from the application.
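As a sketch of what the client side will do with this API, the hypothetical helper below builds the options for a fetch call to the login endpoint. The helper name and option shape are illustrative (not part of the article's code); credentials: 'include' is the standard fetch option that makes the browser send and accept cookies on cross-site requests:

```javascript
// Hypothetical helper building the options object for calling /account/login
// from JavaScript. 'credentials: include' makes the browser handle cookies
// even when client and server run on different origins.
function buildLoginRequest (creds) {
  return {
    method: 'POST',
    credentials: 'include',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(creds)
  }
}

const request = buildLoginRequest({ email: 'jane@example.com', password: 'secret' })
// Usage (against a running server):
// fetch('http://localhost:5100/account/login', request).then(res => res.json())
console.log(request.method) // 'POST'
```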
Now that our server provides a simple authentication mechanism, we need to update the Vue application
with the necessary elements so users can login by entering their credentials and logout if already
authenticated.
We will update the navbar to show a Login button on the top right. Upon being clicked, a modal will be
displayed for users to enter their credentials:
Figure 4, the first iteration of the login modal, opened from the Login button in the navbar
Once the user enters the credentials, we will send an AJAX request to the login endpoint, and will update
the navbar so it now displays the user name and a Logout button:
Figure 5, once logged in, the navbar will display the username and Logout button
This brings some interesting design questions, particularly around where the data identifying the current
user should be stored:
• Should that be in the root App.vue component, passed as props to any child component like the navbar?
• What happens when authenticating in a modal component? Should events be propagated up across the
component tree until it reaches App.vue where the data is finally updated?
• How can any component know if the user is authenticated or not, for example in order to disable some
buttons?
Luckily for us, Vuex is the perfect answer for shared data like the current user context: data that belongs to
none and all components! Apart from being the perfect answer to this problem, you will see that using it is
quite straightforward. (If you want to learn more, check out one of my previous articles taking a closer look
at Vuex.)
Now that we know what we will build and how, let’s begin.
The first thing we will do is to extract the main navbar from the App.vue component into its own
component. Create a new main-navbar.vue file inside the components folder, and copy the navbar from
App.vue into the <template></template> section of the component.
Then import the new main-navbar component inside the App.vue script section:
And finally, replace the navbar in the App.vue template section with the component we just included: <main-navbar />.
Let’s now create the login modal component, where we will make use of bootstrap-vue’s modal component
(as in the existing modal for adding questions and answers):
<template>
<b-modal id="loginModal" ref="loginModal" hide-footer title="Login" @hidden="onHidden">
<b-form @submit.prevent="onSubmit" @reset.prevent="onCancel">
<b-alert show variant="warning">In this test app, any credentials are valid!
</b-alert>
<b-form-group label="Email:" label-for="emailInput">
<b-form-input id="emailInput"
type="email"
v-model="form.email"
required
placeholder="Enter your email address">
</b-form-input>
</b-form-group>
<b-form-group label="Password:" label-for="passwordInput">
<b-form-input id="passwordInput"
type="password"
v-model="form.password"
required
placeholder="Enter your password">
</b-form-input>
</b-form-group>
<button class="btn btn-primary float-right ml-2" type="submit">Login</button>
<button class="btn btn-secondary float-right" type="reset">Cancel</button>
</b-form>
</b-modal>
</template>
<script>
export default {
data () {
return {
form: {
email: '',
password: ''
}
}
},
methods: {
onSubmit (evt) {
// to be completed
},
onCancel (evt) {
this.$refs.loginModal.hide()
},
onHidden () {
Object.assign(this.form, {
email: '',
password: ''
})
}
}
}
</script>
Nothing too exciting here. Just some regular Vue code providing a modal, and an empty onSubmit method
which we will come back to later!
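The onHidden handler relies on a small but deliberate detail: Object.assign mutates the existing form object instead of replacing it, so anything holding a reference to that object keeps pointing at the same one while its fields are cleared. A minimal sketch:

```javascript
// Why onHidden uses Object.assign rather than reassigning the form:
// the same object is mutated in place, so every field is cleared
// without swapping the object reference.
const form = { email: 'jane@example.com', password: 'secret' }
const sameReference = form

Object.assign(form, { email: '', password: '' })

console.log(form.email === '' && form.password === '') // true
console.log(sameReference === form) // true: still the same object
```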
Before we can display the modal, it needs to be part of the Vue application. Follow the same steps we took
to include the main-navbar component inside App.vue. With this, the modal is ready to be displayed, all we
need is a button!
Update the main-navbar component and replace the form providing a sample search box with a form that
provides a Login button. This button uses bootstrap-vue’s v-b-modal directive to show the login modal we
just created and wired inside App.vue:
If you run the application, you should see the modal appearing after clicking on the Login button. However,
we left the onSubmit method empty, so it will do nothing yet!
To implement the login functionality and keep the user context data, we will use Vuex. The very first thing
to do is to install the library. Run the following command from the root folder of the client application:
Once installed, create a new folder named store inside the client/src folder. Create two new files, named
index.js and context.js. The first one, index.js, will be used to wire Vuex into the Vue application and to
compose together all the different Vuex modules:
import Vue from 'vue'
import Vuex from 'vuex'

Vue.use(Vuex)
The second file, context.js, will provide a module for everything related with the user context. Let’s start
with an empty module:
export default {
namespaced: true,
state: {
},
getters: {
},
mutations: {
},
actions: {
}
}
Don’t worry, we will fill it up as we build the functionality. Let’s start by providing the code necessary to
perform the login action. This code will send a request to the login endpoint of our server-side API, and will
save the returned user profile into the module state:
export default {
namespaced: true,
state: {
profile: {}
},
getters: {
isAuthenticated: state => state.profile.name && state.profile.email
},
mutations: {
setProfile (state, profile) {
state.profile = profile
},
},
actions: {
    login ({ commit }, credentials) {
      return axios.post('account/login', credentials).then(res => {
        commit('setProfile', res.data)
      })
    },
    logout ({ commit }) {
      return axios.post('account/logout').then(() => {
        commit('setProfile', {})
      })
    }
  }
}
The context module now provides:
• a login action that the login modal can use. This action will send a request to the server API and will
update the module state with the returned user profile
• a logout action that the navbar can use. Similar to the login action, this will send a request to the
server API and will clear out the current profile from the module’s state
• a profile property in its state, which any component can map. For example, the navbar can include a
Welcome, username message when logged in.
• an isAuthenticated getter that any component can map. This returns a Boolean indicating whether
the user is currently logged in or not, which will be widely used. For example, the navbar can use it
to render either a login or a logout button; while buttons that require authentication, can be disabled
based on its value.
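The isAuthenticated getter can be exercised in isolation. Extracted as a plain function (with an explicit boolean coercion added here for clarity) and fed hand-built state objects rather than a real Vuex store:

```javascript
// The getter from the context module, as a plain function.
// (The !! coercion is added here so the result is strictly a boolean.)
const isAuthenticated = state => !!(state.profile.name && state.profile.email)

// An empty profile (anonymous user) yields false...
console.log(isAuthenticated({ profile: {} })) // false

// ...while a populated profile (after login) yields true.
console.log(isAuthenticated({ profile: { name: 'Jane', email: 'jane@example.com' } })) // true
```

Because Vuex getters are plain functions of the state, this kind of table-style check is all it takes to unit test them.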
Let’s finish with the login process. Update the login modal to map the login action of the module:
Here, we are just mapping the action from the context store, calling it when the form is submitted, and
closing the modal once the action succeeds.
Next, we will update the navbar so it renders either the Login button, or the username and a Logout button,
based on the data currently stored in the context store. It is as simple as mapping the logout action, the
profile property of the state (so we can render the profile.name property) and the isAuthenticated getter.
Of course, the script section needs to be updated so it maps these elements from the context module
(otherwise they wouldn’t be available in the template):
export default {
computed: {
...mapState('context', [
'profile'
]),
...mapGetters('context', [
'isAuthenticated'
])
},
methods: {
...mapActions('context', [
'logout'
])
}
}
That’s it! Now you should be able to login and logout from the application. There is a little problem,
however: as soon as you reload the page, you will appear as logged out, even if your browser still has the
auth cookie!
This is because our components rely on the state kept in the Vuex store, which is gone as soon as you
reload the page, since it is kept in memory. We will need to restore this context when our Vue application
starts!
In order to solve this problem, we will include a new endpoint in our server-side API to load the details
of the currently logged in user (Note how the properties will be empty in case the user isn’t currently
authenticated, so the isAuthenticated getter of the client application detects it):
[HttpGet("context")]
public JsonResult Context()
{
return Json(new
{
name = this.User?.Identity?.Name,
email = this.User?.FindFirstValue(ClaimTypes.Email),
role = this.User?.FindFirstValue(ClaimTypes.Role),
});
}
We will then provide a new Vuex action to call this endpoint and update the store profile state with its
response:
restoreContext ({ commit }) {
return axios.get('account/context').then(res => {
commit('setProfile', res.data)
})
},
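Because Vuex actions are plain functions receiving commit, restoreContext is easy to exercise without a server. In this sketch, axios is replaced by a hand-rolled stub that resolves with a canned profile; the stub and the profile values are illustrative only:

```javascript
// Exercising the restoreContext action with a stubbed axios and commit.
// The real action calls the server; here axios.get is a fake that
// resolves with a canned profile.
const axios = {
  get: url => Promise.resolve({ data: { name: 'Jane', email: 'jane@example.com', role: 'user' } })
}

const actions = {
  restoreContext ({ commit }) {
    return axios.get('account/context').then(res => {
      commit('setProfile', res.data)
    })
  }
}

const committed = []
actions
  .restoreContext({ commit: (type, payload) => committed.push({ type, payload }) })
  .then(() => {
    console.log(committed[0].type)         // 'setProfile'
    console.log(committed[0].payload.name) // 'Jane'
  })
```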
Finally, we will call this endpoint from the App.vue component’s mounted hook:
After these changes, you should be able to login/logout and stay logged in when reloading the page.
That was quite a journey, but we now have the basic functionality wired end to end and we can start with
the more interesting parts!
Since our users can now login and logout, we can start restricting parts of our application to authenticated
users. Let’s begin with the controller actions to create new questions, up/down vote them and create new
answers.
This is as simple as adding the [Authorize] attribute on all these endpoints. The attribute will enforce
users to be authenticated, so as long as users are logged in, the cookie will be sent and the attribute will
grant access to the controller endpoint.
Sadly, if you try to add a question, you will notice the site no longer works after we added the
[Authorize] attribute. Even when you are authenticated, the server returns a 401 response!
The problem lies again in the fact that client and server are running as different applications, one at
localhost:8080 and the other at localhost:5100. When this happens, browsers will not include cookies along
with AJAX requests, unless specifically instructed to do so. In the case of axios, we can instruct it to send the cookies by enabling its withCredentials setting:
axios.defaults.baseURL = 'http://localhost:5100'
axios.defaults.withCredentials = true
Note, this topic is closely related to CORS! The CORS server-side middleware was configured during
the first article to allow communication between the client and server applications.
Of course, if your application will end up deployed with the client and server on the same domain, you won’t
need to worry about these issues. If this is your case, you can add a vue.config.js file that points the Vue
development server towards your ASP.NET Core server. This means, from your browser’s point of view, everything
will be running in localhost:8080 and you won’t have to face these cross-site issues.
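A minimal sketch of such a vue.config.js is shown below. The devServer.proxy option is a standard Vue CLI setting; the port is assumed to match the ASP.NET Core server used throughout this article:

```javascript
// vue.config.js - sketch of proxying API requests to the ASP.NET Core server,
// so the browser only ever talks to localhost:8080 and no cross-site
// cookie/CORS configuration is needed.
module.exports = {
  devServer: {
    proxy: 'http://localhost:5100'
  }
}
```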
Now that we have solved this small hiccup, our application is working again!
Authenticated users can create questions, up/down vote them and create answers. However, anonymous
users can still attempt to perform these actions, just to get a 401 response in return.
We can very easily provide them with a better UX where buttons that trigger actions unavailable to
anonymous users, are disabled or invisible.
Remember the isAuthenticated getter we added to the context Vuex store? This is another use case
where Vuex shines.
For example, update the home.vue component so the add question button is disabled based on the
isAuthenticated getter. All you have to do is to map the getter and use it to set the disabled attribute
of the button:
Rinse and repeat! You can follow the same approach to disable/hide any links that trigger actions available
only for authenticated users. (Feel free to check the final code on github)
Securing the SignalR hub is as simple as adding the [Authorize] attribute to either the Hub class or
individual Hub methods.
Let’s add the attribute to our QuestionHub class. That was easy, right?
Well, hold on!
If you open an incognito window or logout and reload the page, you will notice an endless series of calls to
http://localhost:5100/question-hub/negotiate that end in 401.
This is because our Vue application will try to connect to the SignalR hub as soon as the application
starts, regardless of whether the user is authenticated or not. What’s worse, we included some code to
automatically reconnect, which ends in this endless loop.
What we want instead is to:
• On application startup, only start a connection with the hub if we are logged in
• Start a connection after a successful login action
• Stop the connection after a logout action
Luckily for us, the question-hub.js Vue plugin we created and the Vuex context module can easily play
together in order to achieve this behavior in a way that’s transparent for the rest of the application!
Let’s start with the question-hub plugin. Rather than automatically trying to establish a connection on
application startup, we will provide methods to start and stop the connection:
export default {
  install (Vue) {
    // use a new Vue instance as the interface for Vue components
    // to receive/send SignalR events. This way every component
    // can listen to events or send new events through this questionHub
    const questionHub = new Vue()
    Vue.prototype.$questionHub = questionHub

    // Vue.prototype.startSignalR and Vue.prototype.stopSignalR
    // are implemented further below
    questionHub.questionOpened = (questionId) => {
      if (!startedPromise) return
      return startedPromise
        .then(() => connection.invoke('JoinQuestionGroup', questionId))
        .catch(console.error)
    }
    questionHub.questionClosed = (questionId) => {
      if (!startedPromise) return
      return startedPromise
        .then(() => connection.invoke('LeaveQuestionGroup', questionId))
        .catch(console.error)
    }
  }
}
As you can see, the questionHub can be created straight away, meaning that components can add listeners
to SignalR events regardless of whether we are connected or not. (If we are not connected, then they will
never receive an event through the questionHub).
We are also checking if the connection process has been started before trying to send an event through the
SignalR connection. Since the connection might be instantiated but not fully opened, this is a little more
complicated than checking if it is not null. We will see more once we implement the start/stop methods.
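The guard can be seen in isolation with a stubbed connection. Here, startedPromise and connection are simplified stand-ins for the plugin's module-level state, and joinQuestionGroup plays the role of any method that forwards an invocation:

```javascript
// Simplified version of the send guard: only forward an invocation
// once the connection has at least begun starting.
let startedPromise = null
const connection = {
  invoke: (method, arg) => Promise.resolve(`${method}(${arg})`)
}

function joinQuestionGroup (questionId) {
  if (!startedPromise) return // not even connecting: silently do nothing
  return startedPromise
    .then(() => connection.invoke('JoinQuestionGroup', questionId))
    .catch(console.error)
}

// Before start(): the guard short-circuits and returns undefined
console.log(joinQuestionGroup(42)) // undefined

// After "starting": the call is queued behind the start promise,
// even if the connection is not fully open yet
startedPromise = Promise.resolve()
console.log(typeof joinQuestionGroup(42).then) // 'function'
```

Chaining on the promise (rather than checking a boolean) is what lets calls made while the connection is still opening wait for it instead of failing.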
Implementing the start method is mostly a matter of moving the initialization code inside this method:
// Forward hub events through the event, so we can listen for them in the Vue
components
connection.on('QuestionAdded', (question) => {
questionHub.$emit('question-added', question)
})
connection.on('QuestionScoreChange', (questionId, score) => {
questionHub.$emit('score-changed', { questionId, score })
})
connection.on('AnswerCountChange', (questionId, answerCount) => {
questionHub.$emit('answer-count-changed', { questionId, answerCount })
})
connection.on('AnswerAdded', answer => {
questionHub.$emit('answer-added', answer)
})
// You need to call connection.start() to establish the connection,
// but the client won't handle reconnecting for you!
// Docs recommend listening to onclose and handling it there.
// This is the simplest of the strategies
function start () {
startedPromise = connection.start()
.catch(err => {
console.error('Failed to connect with hub', err)
      return new Promise((resolve, reject) =>
        setTimeout(() => start().then(resolve).catch(reject), 5000))
})
return startedPromise
}
connection.onclose(() => {
if (!manuallyClosed) start()
})
// Start everything
manuallyClosed = false
start()
}
This is mostly the same code as before, with the addition of the manuallyClosed flag. Since we are adding
a stop method that we will invoke after the user’s logout, we need to prevent the reconnecting code from
retrying forever, which we achieve by setting this flag to true.
Next, implement the stop method, which simply calls the connection stop method and clears our flags:
Vue.prototype.stopSignalR = () => {
if (!startedPromise) return
manuallyClosed = true
return startedPromise
.then(() => connection.stop())
.then(() => { startedPromise = null })
}
All that’s needed is for our context module to automatically call startSignalR and stopSignalR as
a result of the login, logout and restoreContext actions. Notice how we added the methods to the
Vue.prototype earlier, precisely so we can call them from the store.
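Those store actions might look along these lines (a sketch; the action bodies and the account endpoint paths are assumptions based on the rest of the article, not the exact listing):

```javascript
// context store actions (sketch): start/stop SignalR around auth changes
login ({ commit }, credentials) {
  return axios.post('account/login', credentials).then(res => {
    commit('setProfile', res.data)
    Vue.prototype.startSignalR() // methods were added to Vue.prototype earlier
  })
},
logout ({ commit }) {
  return axios.post('account/logout').then(() => {
    commit('setProfile', {})
    return Vue.prototype.stopSignalR()
  })
},
restoreContext ({ commit }) {
  // endpoint path is an assumption
  return axios.get('account/context').then(res => {
    commit('setProfile', res.data)
    if (res.data.email) Vue.prototype.startSignalR()
  })
}
```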
That’s it! The endless loop of 401 requests trying to connect to the hub when not authenticated should be
gone now.
You will also notice the browser starting/stopping the connection as soon as you login/logout from the
app. Of course, the functionality provided by the hub should keep working as long as you are logged in; for
example, open two browser windows, log in to both, and try adding new answers and votes.
Take a moment to notice how no component of our Vue application, other than these two files, had to be
modified!
After all this hard work, let’s have a little fun by adding a simple chat to our application! With all the
building blocks we have so far, this will require little work.
• add a new method to our IQuestionHub interface that defines the event received by clients when a
message is sent to the chat
• add a new method to the QuestionHub class that clients can invoke when they want to send a message
to the chat
[Authorize]
public class QuestionHub: Hub<IQuestionHub>
{
...
public async Task SendLiveChatMessage(string message)
{
await Clients.All.LiveChatMessageReceived(Context.UserIdentifier, message);
}
}
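For reference, the matching addition to the IQuestionHub interface would be along these lines (a sketch, inferred from the Clients.All call above):

```csharp
public interface IQuestionHub
{
    ...
    // Event received by clients when someone sends a chat message
    Task LiveChatMessageReceived(string username, string message);
}
```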
Which means we are implementing a general chat where all messages are sent to everyone.
There is one little extra detail to take care of.
We need to tell SignalR how to extract this user identifier from the ClaimsPrincipal object that results
from a successful authentication. Implement the IUserIdProvider interface, for example we will use the
principal’s name, since we were setting it from the email address:
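A minimal implementation might look like this (a sketch; the class name matches the registration shown next):

```csharp
public class NameUserIdProvider : IUserIdProvider
{
    public string GetUserId(HubConnectionContext connection)
    {
        // The principal's name was set from the email during login
        return connection.User?.Identity?.Name;
    }
}
```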
Then include this as part of the ConfigureServices method of the Startup class:
services.AddSingleton<IUserIdProvider, NameUserIdProvider>();
On the frontend, let’s start by updating the question-hub.js with the new listener for the
LiveChatMessageReceived event, and the new method to call the SendLiveChatMessage event:
// sendMessage is what the Vue components call via this.$questionHub;
// the wrapper shape mirrors the other hub methods
questionHub.sendMessage = (message) => {
if (!startedPromise) return
return startedPromise
.then(() => connection.invoke('SendLiveChatMessage', message))
.catch(console.error)
}
Next let’s create a new modal where the users can see the messages received and send new messages. Add
a new live-chat-modal.vue file inside the components folder with the following contents:
<template>
<b-modal id="liveChatModal" ref="liveChatModal" hide-footer title="Live Chat"
size="lg" @hidden="onHidden">
...
</b-modal>
</template>
<script>
import { mapState } from 'vuex'
import VueMarkdown from 'vue-markdown'
export default {
components: {
VueMarkdown
},
data () {
return {
messages: [],
form: {
message: ''
}
}
},
computed: {
...mapState('context', [
'profile'
])
},
created () {
// Listen to chat messages arriving through the SignalR event
this.$questionHub.$on('chat-message-received', this.onMessageReceived)
},
beforeDestroy () {
// Make sure to cleanup SignalR event handlers when removing the component
this.$questionHub.$off('chat-message-received', this.onMessageReceived)
},
methods: {
onMessageReceived ({ username, text }) {
this.messages = [...this.messages, { username, text }]
},
onSendMessage (evt) {
this.$questionHub.sendMessage(this.form.message)
this.form.message = ''
},
onHidden () {
Object.assign(this.form, {
message: ''
})
}
}
}
</script>
<style scoped>
.messages-container{
max-height: 450px;
overflow-y: auto;
}
</style>
While it might look scary, it is mostly presentation! Logic-wise, there is not much going on here.
The component starts with an empty array of received messages. It then listens to chat-message-received
events, adding incoming messages to that array. Whenever the user clicks the send button, it calls the hub
wrapper’s sendMessage method.
It’s important to note that the component will be receiving messages and updating its array regardless of
whether the modal is actually visible or not! Let’s update App.vue again to include this new modal as part
of its template, and finally update the home.vue component with a button to show the modal.
That’s all that is required to add a functional chat to your application! Feel free to expand on it and add
more functionality like private chats or a list of connected members!
While cookie-based authentication might be ideal in many scenarios, some people might want/need to
use JSON Web Tokens (JWT), particularly in the context of SPAs and mobile applications. If this sounds new
to you, don’t worry; there are plenty of articles out there comparing both options, apart from the usual
questions on Stack Overflow.
I will leave aside (the article is already quite long as it is!) design considerations like when to use JWT
instead of Cookies, where to securely store them or how to refresh the tokens, leaving these questions for
you to answer based on your needs and context. However, I want to provide an example that uses JWT so
you can see what this means in practical terms for SignalR and Vue.
Our server currently supports a single authentication scheme, the Cookie based one. However, ASP.NET Core
allows multiple authentication schemes to be registered side by side, selecting between them on a
per-request basis.
We have basically added a second authentication scheme, the one tagged with the JWTAuthScheme
constant. We have then added the ForwardDefaultSelector to the default scheme (the
CookieAuthScheme) so the framework can choose the right scheme for each request. The logic we are
following is based on whether the request contains either of:
• The access_token query string parameter. This is where SignalR will include the token when
establishing connections
• The Authorization header. This is where our client application will include the token as part of AJAX
requests.
If any of those are found in the incoming request, then we select the JWTAuthScheme scheme. Otherwise
we choose the default CookieAuthScheme scheme.
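Put together, the registration in ConfigureServices might look roughly like this (a sketch; only the selector logic is spelled out, and the exact option values are assumptions):

```csharp
services.AddAuthentication(CookieAuthScheme)
    .AddCookie(CookieAuthScheme, options =>
    {
        // Inspect each request and forward to the JWT scheme when a token is present
        options.ForwardDefaultSelector = context =>
        {
            var hasQueryToken = context.Request.Query.ContainsKey("access_token");
            var hasAuthHeader = context.Request.Headers["Authorization"]
                .Any(h => h.StartsWith("Bearer "));
            return (hasQueryToken || hasAuthHeader) ? JWTAuthScheme : CookieAuthScheme;
        };
    })
    .AddJwtBearer(JWTAuthScheme, options =>
    {
        // token validation parameters as shown next
    });
```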
• Define how the token will be validated, for example based on its lifetime
// NOTE: you want this to be part of the configuration and a real secret!
public static readonly SymmetricSecurityKey SecurityKey =
new SymmetricSecurityKey(
Encoding.Default.GetBytes("this would be a real secret"));
...
.AddJwtBearer(JWTAuthScheme, options =>
{
options.TokenValidationParameters = new TokenValidationParameters
{
LifetimeValidator = (before, expires, token, param) =>
{
return expires > DateTime.UtcNow;
},
ValidateAudience = false,
ValidateIssuer = false,
ValidateActor = false,
ValidateLifetime = true,
IssuerSigningKey = SecurityKey,
};
});
Notice how we are defining the key using a publicly accessible constant. We will need to access the key
from the AccountController once we implement the actual code that logins and generates a token. For
the purposes of this app, a hardcoded secret is fine, but in a real application, make sure this is a real secret
part of your configuration!
With the changes made in the earlier section, our application will be able to authenticate and authorize
users as long as they include a valid token as part of their request.
We need to provide a new endpoint in the AccountController that verifies the supplied credentials
and generates a token instead of a cookie. This is relatively straightforward to implement using the
JwtSecurityToken class and the same credentials configured for the JWTAuthScheme:
...
[HttpPost("token")]
public async Task<IActionResult> Token([FromBody]LoginCredentials creds)
{
// We will typically move the validation of credentials
// and return of matched principal into its own AuthenticationService
// Leaving it here for convenience of the sample project/article
if (!ValidateLogin(creds))
{
return Json(new
{
error = "Login failed"
});
}
var principal = GetPrincipal(creds, Startup.JWTAuthScheme);
var token = new JwtSecurityToken(
"soSignalR",
"soSignalR",
principal.Claims,
expires: DateTime.UtcNow.AddDays(30),
signingCredentials: SigningCreds);
// Serialize the generated token and return it in the JSON response
return Json(new
{
token = new JwtSecurityTokenHandler().WriteToken(token)
});
}
This should look very similar to the existing login endpoint, with the difference of generating a token that
is manually included in the JSON response as opposed to generating a Cookie sent in a Set-Cookie response
header.
Notice how no new logout endpoint, or even changes to the existing logout endpoint, is needed. That is
because for a client to log out when using tokens, it just needs to forget that token.
Now that our server can use either a Cookie based authentication scheme or a JWT based one, let’s update
our Vue application so users can choose in which way they want to login.
Figure 7, login modal letting you choose between cookies and JWT authentication
Of course, you will never ask the user to make such a decision in a real application, but this will come in
very handy for the purposes of this application, which is to demonstrate how these features work!
Start by updating the login-modal.vue component, so it includes the radio buttons to select the
authentication scheme and passes the selected one down to the context store’s login action:
// On the template
<b-form-group label="Authentication mode">
<b-form-radio-group
id="authMode"
v-model="authMode"
:options="authOptions"/>
</b-form-group>
// On the script
export default {
data () {
return {
...
authMode: 'cookie',
authOptions: [
{ text: 'Cookie', value: 'cookie' },
{ text: 'JWT Bearer', value: 'jwt' }
]
}
},
methods: {
...
onSubmit (evt) {
this.login({ authMethod: this.authMode, credentials: this.form }).then(() => {
this.$refs.loginModal.hide()
})
},
...
}
}
The login action of the context store needs to send a request to either the /account/login or the /account/
token endpoints based on the authMethod property. It also needs to store the received token in case of the
JWT scheme, since we will need to include it as part of the Authorization header on future AJAX requests.
state: {
profile: {},
jwtToken: null
},
mutations: {
...
setJwtToken (state, jwtToken) {
state.jwtToken = jwtToken
}
},
actions: {
...
// Login methods. Either use cookie-based auth or jwt-based auth
login ({ state, dispatch }, { authMethod, credentials }) {
With these changes, you should now be able to successfully login using the JWT scheme. If you inspect the
HTTP requests in your browser developer tools, you should see the token included as part of the response.
Unfortunately, this isn’t enough. If you then try to upvote a question, you will notice a 401 response from
the server. That is because even though we received a JWT token and stored it inside our context store,
we are not sending it back along with AJAX requests.
In order to do so, we will use an axios interceptor. This will be invoked by axios on every request, and it will
inspect the context store for a JWT token. In case there is a token, it will automatically add the Authorization
header to the request. Update main.js with this interceptor:
axios.interceptors.request.use(request => {
if (store.state.context.jwtToken) {
request.headers['Authorization'] = 'Bearer ' + store.state.context.jwtToken
}
return request
})
Notice the format of the header: the constant Bearer followed by the token, separated by a space. That
is exactly what the JWTAuthScheme expects on the server! After these changes, you should now be able
to interact with the site without receiving 401 responses (except for the SignalR hub, which we haven’t
updated yet).
Let’s now make a quick change to the logout action of the context store, so we don’t send a request to the
server when using the JWT scheme, and also delete the token from the store:
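A sketch of that updated action (the shape is an assumption based on the store code shown earlier):

```javascript
// logout action (sketch): skip the server round-trip for JWT, clear the token
logout ({ commit, state }) {
  const serverLogout = state.jwtToken
    ? Promise.resolve()             // JWT: the client just forgets the token
    : axios.post('account/logout')  // Cookie: ask the server to clear it
  return serverLogout.then(() => {
    commit('setProfile', {})
    commit('setJwtToken', null)
  })
}
```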
If everything went right, your users should now be able to login and logout when using the JWT scheme.
However, they will notice something odd.
As soon as they reload the page, they are logged out! The explanation is simple: the token is stored in the
vuex store, and that information is gone as soon as you reload the page. We will need to store the token
somewhere that survives a simple page refresh!
NOTE: For our purposes, we will simply use local storage. However, you should know that this simple approach
has security drawbacks. If you plan on using JWT in your SPA, read more about the storage options.
Update the context store so the token gets saved and restored from local storage:
mutations: {
...
setJwtToken (state, jwtToken) {
state.jwtToken = jwtToken
if (jwtToken) window.localStorage.setItem('jwtToken', jwtToken)
else window.localStorage.removeItem('jwtToken')
}
},
actions: {
restoreContext ({ commit, getters, state }) {
const jwtToken = window.localStorage.getItem('jwtToken')
if (jwtToken) commit('setJwtToken', jwtToken)
Now authentication with JWT should work as expected, even after page reloads. Let’s wrap up by making
sure we can connect to the SignalR hub when using JWT.
By now, most of the heavy lifting has already been done. The server can authenticate users with a valid JWT
token and the Vue application is able to login using the JWT scheme.
We need to update the JWTAuthScheme, which by default only looks at the Authorization header, so that it
also looks at the access_token query string parameter. Update the AddJwtBearer segment of the
ConfigureServices method in the Startup class:
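The usual way to do this with AddJwtBearer is to hook the OnMessageReceived event and read the token from the query string; a sketch follows (the hub path is an assumption):

```csharp
.AddJwtBearer(JWTAuthScheme, options =>
{
    // ... existing TokenValidationParameters ...
    options.Events = new JwtBearerEvents
    {
        OnMessageReceived = context =>
        {
            // SignalR sends the token in the query string when connecting
            var accessToken = context.Request.Query["access_token"];
            if (!string.IsNullOrEmpty(accessToken) &&
                context.HttpContext.Request.Path.StartsWithSegments("/question-hub"))
            {
                context.Token = accessToken;
            }
            return Task.CompletedTask;
        }
    };
});
```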
The final part is for the client application to include this query string parameter as part of the SignalR
connection when using JWT! First update all the calls to the startSignalR method made from the context
store, so any current JWT token is provided:
Vue.prototype.startSignalR(state.jwtToken)
Then update the startSignalR method itself. We just need to include an accessTokenFactory property
as part of the HubConnectionBuilder in case we received a non-empty token:
...
}
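The relevant part of startSignalR might end up like this (a sketch; the hub URL is an assumption):

```javascript
Vue.prototype.startSignalR = (jwtToken) => {
  connection = new HubConnectionBuilder()
    .withUrl(
      '/question-hub',
      // Only provide the factory when a JWT token was passed in
      jwtToken ? { accessTokenFactory: () => jwtToken } : undefined
    )
    .configureLogging(LogLevel.Information)
    .build()
  // ... listeners and start() as before ...
}
```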
This way the HubConnectionBuilder will include the access_token query string parameter only when a
valid token has been passed, which will only happen when users are authenticated using the JWT scheme!
And this concludes the tutorial. Your application should be fully functional regardless of whether you
choose to use JWT or Cookies as the authentication scheme.
Conclusion
ASP.NET Core is flexible enough so you can implement authentication using different schemes in a way
that’s transparent to the rest of the application.
True, the documentation is mostly geared towards using the default Identity implementation with Cookies,
but the flexibility is there, and it is relatively easy to find resources, such as blog posts created by the
community, that fill the gap.
It is no wonder then that SignalR, built on top of ASP.NET Core, inherits this flexibility. Adding
authentication to SignalR hubs and clients is a simple step once you have already added authentication to
the rest of your application.
Finally, Vue and its ecosystem, with libraries like Vuex, do a great job at being flexible and extensible!
As demonstrated in the article, cross-cutting concerns like authentication can be added cleanly and with
very little repercussion to components other than the root ones!
As a final note, I understand there is a lot to process in the article, bringing together quite a few different
tools in order to build a working application, all of it mixed with a hairy subject like authentication.
Don’t feel discouraged if it didn’t make complete sense the first time.
Daniel Jimenez Garcia is a passionate software developer with 10+ years of experience.
He started as a Microsoft developer and learned to love C# in general and ASP MVC in
particular. In the latter half of his career he worked on a broader set of technologies
and platforms, while these days he is particularly interested in .NET Core and Node.js. He
is always looking for better practices and can be seen answering questions on Stack
Overflow.
AZURE DEVOPS
Subodh Sohoni
FOR CI / CD OF ASP.NET
CORE APPLICATION
TO AZURE KUBERNETES
SERVICE (AKS)
In a professional software development process, one would like to completely automate the process of
creating and deploying even containerized applications.
1. A Container is a standard unit of software that packages up code and all its dependencies so the
application runs quickly and reliably from one computing environment to another. Containers are
lightweight virtualization units which run only one process.
2. Docker containers are standalone, executable packages of software that include everything needed to
run an application: code, runtime, system tools, system libraries and settings. Although technically
possible, running multiple processes in a Docker container is discouraged, to keep separate areas of
concern. Containers are encouraged to use services provided by the host operating system, which can be
Linux or Windows, through the Docker engine.
Image Ref:
https://www.docker.com/resources/what-container
3. Docker hosts are machines / VMs that run Docker engine and support Docker containers.
4. Container images are the basis of containers. An Image is an ordered collection of root filesystem
changes and the corresponding execution parameters for use within a container runtime. An image
typically contains a union of layered filesystems stacked on top of each other. It is like a template of a
container.
5. Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of
containerized applications. Kubernetes puts the containers that make up an application into logical
units called pods, for easy management and discovery. Pods are hosted on VMs called nodes. Kubernetes
manages nodes and pods. Ref: https://kubernetes.io/ and
https://www.dotnetcurry.com/microsoft-azure/1434/kubernetes
6. Azure Kubernetes Service (AKS) provides the support for implementation of Kubernetes in Azure. I will
be providing more details about this later.
7. Azure Container Registry (ACR) is an Azure service which maintains the repository of container images
in Azure.
If you do not have an Azure Account, you can create a trial account from https://portal.azure.com. With
this 30-day free trial account, you will get a credit of USD 200 (or equivalent in your local currency for
supported countries) that you can use to create resources in Azure for trial purposes, similar to this
walkthrough. After the credit and trial period, you can take a decision to continue by converting the trial
into a paid account.
This is a cluster of nodes, which are virtual machines that will host the containers. When we create the
AKS cluster, along with the nodes, another VM is created in the cluster. That VM is the Cluster Master,
which manages the nodes in the cluster.
To create the AKS Cluster, open the Azure Portal and login to the Azure Account. Then create a new resource
of the type Kubernetes Service which opens the wizard to create Kubernetes Cluster.
Provide the name of the cluster, resource group in which this cluster is to be created, the size of the nodes
and the number of nodes in the cluster.
The node size that is chosen automatically is Standard DS2 v2. This size, in my opinion, is ideal for
running containers in a professional environment. The default number of nodes selected for the cluster is
three. Since this is just an example and not for professional use, I changed that to one node.
Note: The number of nodes to be created depends upon the expected load and the containers that will
need to be created.
In this wizard, we will also ensure that a Service Principal is created for the AKS Cluster.
A Service Principal is like a service account that gets created for a service in Azure Active Directory. The
purpose of creating a Service Principal is to grant permissions to the service to access some resources.
Whatever permissions are granted to a Service Principal, are automatically transferred to the service it
represents.
In this case, a Service Principal will be created for the AKS Cluster and we will give it permission to pull
images from the Azure Container Registry that we will create later in the walkthrough.
This Service Principal will be given a default name that we can check from the Azure Active Directory of
our account, under Registered Applications. It is also possible to create a Service Principal in advance and
assign it to the AKS Cluster at the time of creation, but we are not using that route, as it is easier to create
the Service Principal and link it with the AKS Cluster while creating the cluster.
The name of the cluster that I created is AKSDevOpsDemoCluster and the Service Principal for that is
AKSDevOpsDemoClusterSP-xxxxxxxxxx.
The next resource that we will create in Azure is the Azure Container Registry (ACR). This is a sort of
repository which will store the images of the Docker containers that we create. This resource is also
created from the Azure Portal.
Once the ACR is ready, we now edit the Access Control of that ACR to add the Service Principal of the AKS
Cluster to the Contributor role. This role has permissions to push and pull images from this ACR.
Now that the resources in Azure are ready, we will create a Team Project in Azure DevOps Account.
If you do not have an account in Azure DevOps, you can create a free account from https://dev.azure.com. To
use or create the Azure DevOps account, I strongly suggest using the same email address that was used to
create an Azure Account. This will make the authentication process seamless to access Azure resources from
Azure DevOps.
If you have to use different email accounts for Azure and Azure DevOps, then you can follow the guidelines
provided at https://docs.microsoft.com/en-us/azure/devops/pipelines/library/connect-to-azure?view=azure-devops
to create a connection to Azure.
The name of the team project that I have created is AKSDevOps. For the sake of consistency, I suggest that
you also do so while following this walkthrough.
When creating the team project, I selected Git as the version control, which created a Git repository on
Azure DevOps. This repository is going to work as the remote repository for all team members who are
developing the application. We can now clone it to create a local repository.
Clone operation can be executed in Team Explorer which is part of Visual Studio. To start with, in the Team
Explorer, connect to the newly created team project and click the Clone button once the connection is
established.
Figure 4: Connect to team project and clone repository
Docker containers are predominantly based on Linux, and of the .NET flavors, only .NET Core applications
run on Linux, since .NET Core is cross-platform. At the time of creating the project, we will add it to the
local git repository created in the previous step. To ensure that it is added to the same repository, click
the “New” link under the Solutions section of Team Explorer.
Provide a name to the project, AKSDevOpsApp and select the template of “ASP.NET Core Web Application”
from the sub-section of “.NET Core” under the section of “Visual C#”.
Once the project is created, we will open the Dockerfile and make some changes. My observation is that
the Dockerfile as created by the project creation wizard does not work as it is. It can be taken as a starting
point and modified. I have experimented with various options and finally came to the conclusion that the
code shown in Figure 7 always works.
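Figure 7 itself is not reproduced here, but a multi-stage Dockerfile along those lines for an ASP.NET Core 2.x app typically looks like this (the image tags and project name are assumptions):

```dockerfile
# Build stage: restore, build and publish the app using the SDK image
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
WORKDIR /src
COPY . .
RUN dotnet publish AKSDevOpsApp.csproj -c Release -o /app

# Runtime stage: copy the published output into the smaller runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2 AS final
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "AKSDevOpsApp.dll"]
```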
We will also make some more changes to the code of the application. Open the index.cshtml from the
solution explorer and change the code of the “Welcome” message.
We will also add a YAML file named deployment.yml. This file will be used to deploy the image from ACR to
the AKS Cluster. It will be created in a folder named “Manifest” (this is only for convenience and to
segregate the application code from the deployment file; it is not a technical necessity). This file will be
passed by the build to release management as part of the artifact, so that it can be used at the time of
deployment.
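A deployment.yml of the kind described might look like this (a sketch; the ACR name is a placeholder and the field values are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aksdevopsdemo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aksdevopsdemo
  template:
    metadata:
      labels:
        app: aksdevopsdemo
    spec:
      containers:
      - name: aksdevopsdemo
        image: <acr name>.azurecr.io/aksdevopsdemo:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: aksdevopsdemo
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: aksdevopsdemo
```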
This YAML file specifies the image to be used, replicas of the pods to be created, ports to be opened on the
container and LoadBalancer service to expose the containers to the outside world.
If necessary, open the Settings tab in Team Explorer and click the link to add a “.gitignore” file, so that
binary files and their folders, like bin and obj, are omitted from the changes that are staged for commit.
Now we can commit the code to the local repository and push it to the remote repository on Azure DevOps.
In the next step, we will create a new Build pipeline in which we will create an image based upon the
Dockerfile and then Push it to the ACR that we have created earlier.
To create the new Build pipeline, open the page of your organization in Azure DevOps at
https://dev.azure.com/<your org name>/AKSDevOps and then select Builds from the Pipelines section in the left pane.
Select the link “Use the classic editor” to create the pipeline without YAML.
On the subsequent page, select the “Docker container” template.
This template will provide tasks to create the container image and to push it to the ACR.
Before making changes in the parameters of the tasks, open the Pipeline node. On this node, change the
name of the pipeline to AKSDevOpsDemoBuild and select the Hosted Ubuntu 1604 agent pool. Since
we are creating Docker images, agents in this pool support the actions to create and push those
images.
Let’s now set the values for parameters of the “Build an image” task. Use the following guidelines to set
those values:
• the source folder is the Manifest folder, selected by drill down in the version control repository,
The last task that we will add is “Publish Artifact”, where we need not make changes in the parameters. It
will publish the artifact named “drop”, in which the deployment.yml file is present.
After configuring these tasks, we will “Save and Queue” this build. On success of the build, the image is
created as configured in the Dockerfile and pushed to ACR, and the artifact mentioned earlier is created
and published.
We now have to deploy the created image on AKS Cluster. We will do that using the Release Management
service.
Let’s create a new release pipeline from Pipelines – Releases section in the left-hand pane of the Azure
DevOps page. In this release pipeline we will add only one stage (for the sake of this example), but normally
there may be multiple stages in a release pipeline.
For this release pipeline, we will select the “Deploy to Kubernetes cluster” template.
Figure 14: Select Deploy to Kubernetes cluster template for release pipeline
This template by default adds one stage; let’s call it “QA”. We will add the build pipeline that we
created earlier as the artifact source.
Let’s now set the parameters for the task that is added by the template.
That task is the “kubectl” task. Before we set other parameters, let’s set up a connection to the AKS
Cluster. This is done through the wizard that is started by clicking the New button for the Kubernetes
Service Connection parameter.
Now create a release which will pull the image that we had built and deploy the containers in the pods on
a node in AKS Cluster. Let’s view those pods.
View pods and services
CloudShell is a shell that can be accessed without going out of the portal (in the browser). Once we open
it, we can execute either PowerShell or Bash commands at the command prompt. In this example, let’s
select to open the CloudShell in Bash mode. We will now connect to our AKS Cluster by using the command:
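The az CLI's get-credentials command serves this purpose (shown here with the names used in this walkthrough):

```
az aks get-credentials --resource-group research --name AKSDevOpsDemoCluster
```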
In the above-mentioned command, “research” and “AKSDevOpsDemoCluster” are the names used in my
setup and may be different in your case.
Listing the pods will show those created by the Azure DevOps Release, and listing the services will show,
among others, the LoadBalancer service and the service that manages the
cluster.
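Those listings use the standard kubectl commands:

```
# List the pods created by the Azure DevOps release
kubectl get pods

# List the services, including the LoadBalancer with its External IP
kubectl get services
```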
Once we get the External IP address of LoadBalancer service, we can browse to it to view the application.
In this exercise, we have deployed the image that has the name aksdevopsdemo and the tag “latest”.
What we realize is that if we update the image by changing some code and building it again, the newly
created image will have a tag that is the build number, as well as the tag “latest”.
If we try to redeploy the image with the tag “latest”, it will not replace the image in the running containers.
It becomes obvious that deploying the image with the “latest” tag is not a useful strategy if we want
containers to re-pull the image without a break.
Every time the build executes to create the image, the build id will be different, and so will be the tag
given to that image. In fact, this is the standard behavior of the Docker image creation task in an Azure
DevOps Build pipeline.
When redeployed, the image with the same name but a different tag will be used, and the running
containers will have no problem pulling that image. We now have to ensure that the image pulled at
deployment is the one with the latest build id.
Release management gets the name and tag of the image to deploy from the deployment.yml file.
In this file, we will need to replace the tag “latest” with the ID of the build. This needs to be done every
time a new build artifact is passed to the release management service. We will need to update the
deployment.yml file before the actual deployment takes place.
As part of the release, we will use the “sed” command to make an inline change in that file to replace
“latest” word with the value of the build id. This will have to be done in the first step of release, where we
can access the build id and pass it on to the shell script which also needs to be part of the artifact.
This shell script accepts an argument. That argument is the Build Id passed from the release task. It
replaces the first instance of the word “latest”.
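The core of such a script is a single GNU sed substitution. This standalone sketch demonstrates it on a sample manifest (the real run.sh would operate on the artifact's deployment.yml instead of creating one, and would take the build id strictly from its first argument):

```shell
# $1 is the build id passed from the release task (a default is used here
# only so the sketch can run standalone).
BUILD_ID="${1:-20190625.1}"

# Sample manifest standing in for the artifact's deployment.yml
printf 'image: demo.azurecr.io/aksdevopsdemo:latest\nnote: latest\n' > deployment.yml

# Replace only the FIRST occurrence of "latest" with the build id
sed -i "0,/latest/ s/latest/${BUILD_ID}/" deployment.yml
cat deployment.yml
```

The `0,/latest/` address range is a GNU sed feature that limits the substitution to the first matching line.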
The path of the file is within the artifact that is passed to the release, so that the replacement takes
place in the artifact itself. We will save this Bash script in the Manifest folder as a “run.sh” file (the name
can be different, but ensure you replace it wherever we have used it) in version control.
After pushing that to the remote repository, make a change in the build pipeline definition that we had
created earlier to also copy this file in “Build.ArtifactStagingDirectory”. This way, it gets added to the artifact
that is passed to the release.
We will now add a task of “Bash” script execution to the release pipeline definition, and set its parameter
values accordingly.
When we create a release, it will first run that Bash script on the agent. That Bash script will take the
deployment.yml file from the same artifact where the Bash script is, and replace the word “latest” with the
build id.
The deployment.yml is saved back in that artifact. It is used in the next step to do the deployment. As the
deployment proceeds, the image with the tag of the latest build id will be pulled by the running containers
and the application is updated.
Summary:
Subodh Sohoni
Author
Subodh is a consultant and corporate trainer. He has overall 28+ years of experience. His
specialization is Application Lifecycle Management and Team Foundation Server. He is
Microsoft MVP – VS ALM, MCSD – ALM and MCT.
He has conducted more than 300 corporate trainings and consulting assignments. He is
also a Professional SCRUM Master. He guides teams to become Agile and implement SCRUM.
Subodh is authorized by Microsoft to do ALM Assessments on behalf of Microsoft.
Follow him on Twitter @subodhsohoni
AZURE DEVOPS
Imran Siddique
Azure DevOps Search - Deep Dive
The Search service of Azure DevOps makes it easy to locate information across all your projects, from any computer or mobile device, using just a web browser.
Introduction
The Azure DevOps service consists of dozens of microservices communicating with each other to give the user a consistent and feature-rich experience.
Azure DevOps Search (Search) service is one of the microservices of Azure DevOps that powers its search
functionality.
The Search service provides support for searching different entities of Azure DevOps such as code, work items, wikis and packages, to name a few. Its unique proposition comes from providing semantic relevance for query results and deep filters during querying. More examples and information can be found in the search documentation.
The Search service is a completely hosted solution that supports a scale of billions of documents, running into petabytes of index data spread across multiple Azure regions and Elasticsearch clusters. The platform also supports critical Enterprise-ready features like honoring security permissions, multi-tenancy and GDPR compliance.
In this article, we will talk more about how the platform is architected to support both the search
functionality as well as the service fundamentals at scale.
Why Azure DevOps Search?
Four of the biggest needs that Azure DevOps Search faced were:
1. Availability: Search is such an integral part of the product capability that there is an inherent need to
ensure it is almost always available.
2. Scale: With the large scale of users and usage in Azure DevOps, scale was always at the back of our
minds (I am from the Azure DevOps Search team) while we were designing the architecture.
3. Performance: While we were achieving high availability and tremendous scale, we could not compromise on performance, and wanted most of the important queries to complete in sub-second time.
4. Complexity: The way users search for code and work items is very different from average search requests. There are several complex scenarios that search supports. For instance, in code search, you can find a code file based on a comment you wrote in it by just typing “comment:todo”; in work item search, you can find a bug, user story, or feature based on its assignment, state, creation time and thousands of other filters that you would associate with a work item.
Search service has two major processing pipelines – Indexing pipeline and Query pipeline.
Indexing pipeline is the set of components that come together to support pulling content from other Azure
DevOps services, processing it to add annotated semantic information, and pushing it to the Elasticsearch
indices.
The Query pipeline provides a REST endpoint for the Azure DevOps portal and external tools to search. It
performs key functions such as identity validations, authorization checks and retrieving the relevant and
accessible content from the Elasticsearch index. The Elasticsearch indices themselves are hosted on Azure
VMs that handle the ingestion and query of documents.
Crawlers
The first stage of indexing is to trigger the crawling of the contents (code in case of code search, work-item details in case of work-item search, and so on) once an account is onboarded. Onboarding, in the context of an Azure DevOps account, means enabling the search functionality for the users of the account.
Multiple types of crawlers are available and hosted in the Search service – each entity has its own crawler implementation, and for some entities there is more than one implementation.
Crawling happens on Azure job agents, which are Azure worker roles where all our background processing happens. The crawling either happens in chunks (split across multiple executions of the same job) or in a single execution of the job. This ensures fairness across multiple accounts running in parallel, and that resources are utilized efficiently.
Incremental crawling is triggered using notifications whenever there are changes in the system. For
instance, in case of code search whenever there are pushes or changes on a given repository, there is a
notification which is sent from Version Control service of Azure DevOps to Search service. Search service
then reacts to the notification and retrieves the contents of the code files that are changed.
Incremental crawling can also be processed in chunks. Once the contents are crawled, they are processed by the next layer – parsing.
Parsers
Once the contents are available from the crawler, the documents are passed through the parsing layer. In this phase, the documents are parsed from different angles to extract more meaningful information, so that it can be indexed as well.
For example, in case of code search, Search service uses language specific parsers for C, C++, Java and C#
that generate the partial Abstract Syntax Tree (AST) for each file in the repository during indexing time.
Parsers take the bare files and generate semantic token information for the file and add them to the
document content that needs to be indexed.
For example, when a C++ code file is being processed, the class, method tokens within the file are also
parsed, and added to the document mapping information for Elasticsearch. The document mapping for code
files in Elasticsearch today holds not just the content of the file, but also a per-term code token information.
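As a purely hypothetical illustration (the field names are assumptions, not the service's actual schema), such a per-term token mapping could look like:

```json
{
  "mappings": {
    "properties": {
      "content":   { "type": "text", "term_vector": "with_positions_payloads" },
      "filePath":  { "type": "keyword" },
      "accountId": { "type": "keyword" },
      "projectId": { "type": "keyword" }
    }
  }
}
```

The `with_positions_payloads` setting is what lets Elasticsearch store positional information plus a payload (here, the code token kind) per term.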
Parsers run out-of-process to ensure isolation (for security reasons) as well as the ability to host language
specific runtimes. Parsing failures cause fallback to text parsing to ensure that the file is still text
searchable.
Feeders
Once the parsed content is available, the documents are fed into the Elasticsearch indices via the feeders.
Feeders convert the parsed content into an Elasticsearch compatible mapping, batch multiple parsed files
into an Elasticsearch indexing request, and index them.
To ensure that the cluster doesn’t get overwhelmed with a huge set of indexing requests at the same time,
there are throttling mechanisms to control the indexing throughput across multiple job agents.
Query pipeline
The search experience is available for Azure DevOps users both from the Azure DevOps portal as well
as the REST APIs. The Azure DevOps portal experience is built on top of these REST APIs exposed by the
Search service.
The incoming search requests go through multiple processing stages such as validation and transformation. The request is first validated to ensure the information available is correct, supported, and meets all the security/throttling criteria. The request is then transformed and optimized so that it carries information about the index and shard where the search will happen, the filters that need to be applied, the boosting that will be carried out, the fields that will be retrieved, and so on.
Security permissions are honored throughout the query pipeline: even a preview of search results is not shown if the user doesn’t have permission to the content. This is supported for queries that are scoped across multiple projects and repos as well. The results returned from Elasticsearch are filtered to ensure that only the results the users have access to are returned.
Search service supports queries scoped at different levels like account / project for most of the entity types
and in some cases even more granular scopes like repository/path. It also supports searching across multi-
selected entity instances at the same time.
Cluster topology
Elasticsearch indices are stored in Azure Premium storage blobs and supported via nodes hosted on
Windows based Azure IaaS VMs.
Each Elasticsearch cluster contains 3 master nodes, 3+ client nodes (depending on the indexing and query load on the cluster) and 3+ data nodes (depending on the size of the indices). Our largest clusters have 80+ data nodes and an index utilization (the fraction of indexed data that is queried) of ~70%.
To ensure Elasticsearch runs smoothly with Azure, Elasticsearch’s node allocation awareness attributes are
configured to honor the availability sets (fault domain and update domain) within Azure. These settings
ensure that a given set of primary + replica is always available during unplanned outages or planned
upgrades.
Indices have primary + 2 replicas, with a quorum based write consistency model. Index refresh is set to a
minute.
The Search service has Elasticsearch clusters deployed in multiple Azure regions, at least one cluster per
each region supported by Azure DevOps. This helps ensure data sovereignty is honored, as the index data
for accounts within a given Azure DevOps region is stored within the same region.
Index/Data model
The mapping for documents inside Elasticsearch contains some information that is common to all entity types in the Search service, and some that is entity specific. All documents have metadata information like the account/project they belong to. Each entity can have additional metadata information, like the repository a document belongs to in the case of code.
Each document also has a set of information that uniquely identifies it from other documents. For instance, work-items have a work-item Id associated with them that uniquely identifies a work-item in an account. Similarly, a combination of branch name, file path, file name and content hash uniquely identifies a code file in a given repository of an Azure DevOps account. The Document Id of the Elasticsearch document is built using some of the information mentioned above.
The mapping also contains entity specific information that helps in enabling the search experience for the
given entity type.
For instance, in case of code search, the code token information for a given term (say class “Foo”), along
with its positional information is stored as a term vector payload in the index. The entire content of the file,
including operators, is stored in the file content to support full text search.
Routing
The default index routing ensures that data in a single entity instance goes to the same shard, and
wherever possible, data from multiple entity instances of a given account go to the same shard as well.
This doesn’t suffice for very large entity instances or accounts, which have millions of documents that can’t sit on the same shard. Based on different heuristics, when certain entity instances are deemed large, their data is spread across dedicated shards.
Handling growth
A single account typically sits in a single index on Elasticsearch split across multiple shards of that index
based on size. It is also possible for some very large accounts to have multiple indices dedicated to them.
At any given point of time, there are a few tens of indices that are marked “active”, so new accounts can be indexed into them. Based on certain account/entity instance size heuristics, indices are deemed “full” and are closed to the addition of new accounts. Existing accounts continue to grow within the same indices once assigned. When there are no active indices available, a new set of active indices is created automatically to support new account additions.
Periodically, jobs run to determine if some shards/indices are really “large” because of high growth of
accounts on that shard, and selected accounts on that shard are marked for “move” to a new index to
ensure they don’t become a bottleneck and influence other accounts on that index. These moves are then
orchestrated by the trigger/monitor job that handles re-indexing, to ensure the number of moves at any
given point of time is regulated/throttled.
Monitoring is built in to indicate a capacity crunch or spare capacity, which helps us react by increasing/decreasing the number of nodes in the cluster.
The indexing pipeline is a shared job execution model across multiple accounts that are hosted within
the Search service. Jobs are scheduled per entity instance (for example – repository in case of code, project in
case of work-item and so on) to handle any complete/incremental changes that are detected for that entity
instance.
To ensure index consistency, the event processing pipeline has robust locking semantics that ensures that
only a single operation (indexing, metadata change processing etc.) is running for a given entity instance at a
given point of time.
Metadata changes, addition of new projects and repositories are also controlled at a per account level, to
ensure semantic consistency of the account’s information.
Each entity treats its accounts differently, so the locking semantics don’t span across entities for the same
account. Indexing is typically done in a single job for an entity instance, but it can be dynamically expanded
to multiple parallel jobs (if the change to be processed or the entity instance itself is very large).
allocating new job resources to an account for indexing.
Every job run also executes in a time-bound manner to ensure it doesn’t continue to hog resources while
starving another account for a very long time, yielding every so often to ensure that a job resource can be
allocated to another account if needed.
Similar mechanisms are applied at the entity level as well, to ensure jobs for a given entity type don’t hog resources needed by jobs of another entity type.
Shared Indices
Inside the Elasticsearch indices, data across multiple accounts/entity instances is shared and stored in a single index. This helps reduce the total number of indices and shards (partitions) that need to be managed, and caters to the many small accounts that don’t have a lot of data.
At the same time, for large accounts or entity instances, the Search service scales to support dedicated indices, so the effects of noisy neighbors are minimized. “Large” is determined heuristically. Shared indices have a cap on the maximum number of accounts/entity instances they accept, to ensure there is room for growth.
The indices are separate for each entity type and are not shared across entity types. This gives each entity type room to have its own indexing and querying characteristics, and to decide how it wants to group the accounts’ data for optimal query performance.
• You can narrow your search by using project, repository, path, file name, and other filter operators. This
will help you achieve your desired results even faster. Start with a higher-level search if you don’t know
where the results would be and keep filtering till you have a subset of results to browse through and
work on.
• You can use wildcards to widen your search and Boolean operators to fine-tune it. This will ensure you
get to the results you desire even when you are not sure of the exact term you are looking for.
• When you find an item of interest, simply place the cursor on it and use the shortcut menu to quickly search for that text across all your projects and files. This will help you find more information about an item of interest faster and with minimal effort.
You can also use the quick in-line search filters on any work item field to narrow down to a list of work
items in seconds. The dropdown list of suggestions helps complete your search faster.
Wiki Search
When you search from Wiki, you'll automatically navigate to wiki search results. Text search across the wiki
is supported by the search platform.
Know More
If you would like to see Search in action, you can watch the video here! In this video, Biju Venugopal (Principal PM Manager @ Microsoft) walks us through a demo of Search and talks through important aspects of the service.
Imran Siddique
Author
Imran Siddique is a software engineer specializing in software & distributed systems
development. He has over 11 years of experience designing and architecting different
Microsoft cloud services. Imran is passionate about distributed systems, designing at scale
and engineering improvements. Check out his LinkedIn profile at
https://www.linkedin.com/in/mohammadimransiddique/
Mahathi
Author
Mahathi is an engineering manager on the Azure DevOps Search team, leading the areas of Code Search, scale and resilience. Prior to this, she worked on real-time media
networking protocols, .NET framework at Microsoft and Apps for Business at Google. Her
passion is to build software that is elegantly designed, highly scalable and extensible.
She holds an MS in Computer Science from Stanford. Outside work, she loves music and
performs live shows with her family. More at www.linked.in/mmahathi
ASP.NET CORE SIGNALR
Dobromir Nikolov
INTEGRATION TESTING
Integration testing of real-time communication in ASP.NET Core using Kestrel and SignalR
Integration testing is getting more and more popular amongst developers who
care about shipping quality products. Real-time functionality is now a norm
and is included in the requirements of modern web applications. Learn how
you can incorporate these two concepts by building a robust integration tests
infrastructure using SignalR and Kestrel.
If you’re not familiar with SignalR, I suggest going through the docs before continuing further, as the rest of
the article assumes that you have at least some basic knowledge about using the library.
Editorial Note: Check out this rock solid tutorial of building a webapp using ASP.NET Core and SignalR
www.dotnetcurry.com/aspnet-core/1480/aspnet-core-vuejs-signalr-app.
If you don’t feel like going through the docs right now, you may get away with knowing that SignalR uses a
concept called a “hub”. A SignalR hub is basically an endpoint to which clients can connect to start receiving
or sending messages.
On the server, a hub is represented by a class. You can define methods on it that can be called by the
clients, or send messages to those clients through the IHubContext<THub> interface.
For our test case, the hub won’t define any methods for the clients to call. It will just sit there and wait for
connections.
app.UseSignalR(routes =>
{
    routes.MapHub<TestHub>("/testHub");
});
What MapHub does is it creates an endpoint that clients can connect to. If we wanted to have methods that
the clients could call, we could’ve defined them inside the TestHub class.
In our test scenario, however, we will be testing only “server to client” communication. Let’s define an object
that will provide us the ability to send messages to the hub subscribers. As we mentioned earlier, the
implementation will make use of the IHubContext<THub> interface.
.SendAsync(nameof(Notification), notification);
}
In most applications, events or notifications will usually be dispatched after some API operation has
completed.
Let’s create a mock API endpoint that will use our new dispatcher to send a notification to all hub
subscribers.
[Route("[controller]")]
public class HubController : Controller
{
    private readonly ITestHubDispatcher _dispatcher;

    public HubController(ITestHubDispatcher dispatcher) =>
        _dispatcher = dispatcher;

    [HttpPost("test")]
    public async Task<IActionResult> Test([FromBody] Notification notification)
    {
        await _dispatcher.Dispatch(notification);
        return Ok();
    }
}
The integration tests we will be writing will test that when a POST request is submitted to the hub/test
endpoint, all subscribers to the TestHub are properly notified.
Unit tests are nice, but all of the mocking and setup can easily distract from what we’re actually testing. We
need to get intimately familiar with how objects are constructed, the application interfaces, their behavior
and role in the implementation.
Of course, unit tests play an important part in the act of delivering quality software. It’s not worth it to spin
up a whole integration testing infrastructure just to cover a few pure, reusable components.
However, for interactive functionality that you will expose to users, integration tests are substantially more
valuable.
Editorial Note: Read more about Integration Testing for ASP.NET Core Applications at www.dotnetcurry.com/
aspnet-core/1420/integration-testing-aspnet-core
Sure, the infrastructure setup may be a bit tedious sometimes, but with tools such as Docker, this shouldn’t be a problem.
For our test setup, we will be aiming for our tests to look more like this:
You see how in the second case we are required to know that there is a TestHubDispatcher
implementation that uses an IHubContext<TestHub>, and that the HubController depends on a
TestHubDispatcher instance, and so on.
All of the mocking and setup distracts us from what we’re trying to test. And what we’re trying to test is
whether the system behaves as expected when interacted with from the outside.
Normally, as we will find in the docs, to write integration tests for an ASP.NET Core application, we would
use the TestServer class. TestServer can be used for calling the controller HTTP endpoint, but after that,
we will quickly (or not so quickly, depends on how much time we spend debugging) find out that SignalR
won’t work, because TestServer does not yet support WebSockets (more info about that here).
Fortunately, there is an easy solution to this problem, and it lies just in front of us - inside the Program.cs
file. If you open it, it probably looks something like this:
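For reference, this is the standard ASP.NET Core 2.x Program.cs template; your file may differ slightly:

```csharp
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args) =>
        CreateWebHostBuilder(args).Build().Run();

    // CreateDefaultBuilder wires up Kestrel, configuration and logging
    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>();
}
```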
What this call to CreateDefaultBuilder does is call UseKestrel behind the scenes. If you haven’t heard
of Kestrel up until now, it’s the web server that was introduced together with ASP.NET Core.
I won’t get into details, but if you’ve seen the console window that pops up when you press “Ctrl + F5” in
Visual Studio, then you’ve seen Kestrel (you can learn more about it by reading the docs).
Kestrel is what allows .NET Core apps to be cross platform. For the sake of this article, think about Kestrel
as our application.
Well, Kestrel decouples our application from specific server implementations such as IIS, Apache or Nginx
by providing a consistent startup pipeline.
We can execute this pipeline ourselves to get a running instance of our application that we can use for
integration testing. This goes around the problem of TestServer not supporting WebSockets by not using
TestServer at all.
We just need to create a class that will encapsulate this startup logic.
static AppFixture()
{
    var webhost = WebHost
        .CreateDefaultBuilder(null)
        .UseStartup<Startup>()
        .UseUrls(BaseUrl)
        .Build();

    webhost.Start();
}
AppFixture is simply mimicking what our application’s Main method is doing - starting the Kestrel web
server.
When this class is instantiated for the first time, an instance of our app will be started on port 54321.
Why a static constructor you may ask? Because we really only need one server running per test run.
AppFixture also provides a neat way of building urls through GetCompleteServerUrl, which will later
come in handy.
// Returns http://localhost:54321/some/route
var url = fixture
.GetCompleteServerUrl("/some/route");
For communicating with the SignalR hub, we will be using the SignalR.Client package. It gives us a way
of creating persistent connections to our hub and listening for messages that are emitted from it. Some
example usage:
await connection.StartAsync();
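The StartAsync call above is the tail end of a fuller snippet; a minimal sketch of creating a connection with SignalR.Client (the URL assumes AppFixture's port) could look like:

```csharp
using System;
using Microsoft.AspNetCore.SignalR.Client;

var connection = new HubConnectionBuilder()
    .WithUrl("http://localhost:54321/testHub")
    .Build();

// Listen for messages named "Notification" emitted by the hub
connection.On<Notification>(nameof(Notification),
    notification => Console.WriteLine(notification.Message));

await connection.StartAsync();
```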
For our tests, in order to instantiate new connections, we’ll be using the following helper function.
await connection.StartAsync();
return connection;
}
You see how the HubConnection’s On method accepts a callback? Later on, when verifying whether a correct
message was received, we’ll need to check whether the callback we’ve passed has been called with the
proper arguments.
Fortunately, there is Moq. Moq allows us to create a mock function and then use its built-in Verify method
to check whether it was called with the correct parameters. The following snippet will create a mock
Action<Notification> and assert that it was called with a message of “whatever”.
mockHandler.Verify(
x => x(It.Is<Notification>(n => n.Message == "whatever")),
Times.Once());
It even gives us a Times struct. How cool is that!
We’ve collected enough knowledge to start converting our test description into actual, working code. How
about we take one more look at it?
This code looks pretty obvious after we’ve gotten familiar with SignalR.Client and Moq.
// Act
using (var httpClient = new HttpClient())
{
// POST the notification to http://localhost:54321/hub/test
await httpClient.PostAsJsonAsync(fixture.GetCompleteServerUrl("/hub/test"),
notificationToSend);
}
We’re using the built-in HttpClient. If you need more info about it, check out the docs.
// Act
using (var httpClient = new HttpClient())
{
// 2. Submit a POST request on {appUrl}/hub/test with a valid message
await httpClient.PostAsJsonAsync(fixture.GetCompleteServerUrl("/hub/test"),
notificationToSend);
}
// Assert
// 3. Verify that a correct message was received
mockHandler.Verify(x => x(It.Is<Notification>(n => n.Message ==
notificationToSend.Message)), Times.Once());
}
await connection.StartAsync();
return connection;
}
}
Since the nature of WebSocket communication is asynchronous and there is a real web server running in the background, there is no guarantee that the Assert part of the test will be executed after the message has been received.
In other words, the test may be valid, but might exit too early for the assertion to pass.
So what do we do?
Thankfully, we’re using C# and we can easily “plug into” Moq through an extension method.
try
{
mock.Verify(expression, times);
hasBeenExecuted = true;
}
catch (Exception)
{
}
What VerifyWithTimeoutAsync does is retry the built-in Verify until either it has been completed
successfully or a timeout has been reached.
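Pieced together, the extension method could look roughly like this (a sketch — the retry interval is an arbitrary choice):

```csharp
using System;
using System.Diagnostics;
using System.Linq.Expressions;
using System.Threading.Tasks;
using Moq;

public static class MockExtensions
{
    public static async Task VerifyWithTimeoutAsync<T>(
        this Mock<T> mock,
        Expression<Action<T>> expression,
        Times times,
        int timeoutInMs)
        where T : class
    {
        var hasBeenExecuted = false;
        var stopwatch = Stopwatch.StartNew();

        while (!hasBeenExecuted && stopwatch.ElapsedMilliseconds < timeoutInMs)
        {
            try
            {
                // Succeeds only once the handler has been called as expected
                mock.Verify(expression, times);
                hasBeenExecuted = true;
            }
            catch (MockException)
            {
                // Not yet - wait a bit and retry
                await Task.Delay(50);
            }
        }

        // Final verify so a genuine failure surfaces with Moq's error message
        mock.Verify(expression, times);
    }
}
```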
..now becomes
If the first .Verify fails, the extension will continue retrying for 1 more second.
[Fact]
public async Task ShouldNotifySubscribers()
{
// Arrange
var fixture = new AppFixture();
// Act
using (var httpClient = new HttpClient())
{
// 2. Submit a POST request on {appUrl}/hub/test with a valid message
await httpClient.PostAsJsonAsync(fixture.GetCompleteServerUrl("/hub/test"),
notificationToSend);
}
// Assert
// 3. Verify that a correct message was received
await mockHandler.VerifyWithTimeoutAsync(x => x(It.Is<Notification>(n => n.Message
== notificationToSend.Message)), Times.Once(), 1000);
}
It definitely isn’t horrible, and it works, but it’s still not as simple as the description we started with.
Let’s think a bit about how we could refactor things so the test looks more like the example description
without distancing us from the details too much.
If we wrapped SignalR.Client’s HubConnection class into our own, we could perhaps end up with a builder
allowing us to do something like:
await connection.StartAsync();
It definitely makes it more obvious that we’re connecting to the /testHub endpoint and expecting a
message called “Notification”.
What we can do is move the HttpClient construction into the AppFixture class itself.
using (httpClient)
{
await action(httpClient);
}
}
Now that we’ve wrapped SignalR’s HubConnection into a TestHubConnection, we cannot call
VerifyWithTimeoutAsync on the message handler, as it is not in scope.
await connection.VerifyMessageReceived(
n => n.Message == notificationToSend.Message,
Times.Once());
..instead of
await mockHandler.VerifyWithTimeoutAsync(
x => x(It.Is<Notification>(n => n.Message == notificationToSend.Message)),
Now move the AppFixture into a private field and our test class looks like this:
Much better, isn’t it? Except for the fact that it doesn’t compile, but we’ll get to that in a second.
The implementation is fairly straightforward, since the squiggly red underlines tell us exactly what methods
we will need to expose. (StartAsync and VerifyMessageReceived)
_verificationTimeout = verificationTimeout;
_handlersMap = new Dictionary<Type, object>();
}
}
We keep some default verification timeout, the underlying connection (SignalR.Client.HubConnection)
and a collection of mappings between types and their handlers.
Dictionary<Type, object> may look intimidating at first, but things will become clearer in a second.
This dictionary will hold expected event types and a collection of their handlers. These handlers, as we saw
earlier, will be just mock functions created using the Moq library.
You see how we’re storing different generic types inside the values? This is why we need to use object as
the value type, so we can merge them under a common abstraction.
Later, if we want to assert that an event of type Notification was received, we can just take all its
handlers and run a predicate against them.
..and VerifyMessageReceived<TEvent> will check whether we have a registered handler for the
specified TEvent, and if we do, call VerifyWithTimeoutAsync on it.
Implementing TestHubConnectionBuilder
There’s not much to comment on implementing the builder, it’s a very standard implementation you’ll find
hundreds of tutorials for.
Clear();
return testConnection;
}
_expectedEventNames.Add((typeof(TEvent), eventName));
return this;
}
The only missing item that we find out while implementing it is that we need an Expect method on the
TestHubConnection. Let’s implement that.
public void Expect(string expectedName, Type expectedType)
{
var genericExpectMethod = GetGenericMethod(
nameof(Expect),
new[] { expectedType });
genericExpectMethod.Invoke(this, new[] { expectedName });
}
return method;
}
It’s very verbose, but that’s what you get when you want to have cool syntax. What Expect does
is register a new mock handler for the type we’ve given. We can later use this handler to call
VerifyWithTimeoutAsync and assert that a correct message was received.
Sadly, this requires some reflection gymnastics, but implementation details can be ugly sometimes.
[Fact]
public async Task ShouldNotifySubscribers()
{
// Arrange
var notificationToSend = new Notification { Message = "test message" };
await connection.StartAsync();
// Act
await _fixture.ExecuteHttpClientAsync(httpClient =>
httpClient.PostAsJsonAsync("/hub/test", notificationToSend));
// Assert
await connection.VerifyMessageReceived<Notification>(
n => n.Message == notificationToSend.Message,
Perfect.
We’ve now set up the foundation for a readable and functional SignalR integration test suite. For more
advanced examples of this approach that include support for access tokens, tests for notifying a specific
user, etc. visit https://github.com/dnikolovv/cafe. Look for the Api/Hubs tests in the /server folder.
And if you just want to check out and play around with the complete code of this article – you can also find
it on GitHub here - https://github.com/dnikolovv/signalr-integration-tests.
That’s it! The only thing left now is to show off your newly acquired knowledge by writing some robust and
well-tested real-time functionality!
Dobromir Nikolov
Author
Dobromir Nikolov is a software developer working mainly with Microsoft technologies, with his
specialty being enterprise web applications and services. Very driven towards constantly improving
the development process, he is an avid supporter of functional programming and test-driven
development. In his spare time, you’ll find him tinkering with Haskell, building some project on
GitHub (https://github.com/dnikolovv), or occasionally talking in front of the local tech community.
DEVOPS
Hardik Mistry
Configuration driven Mobile DevOps
THE CHALLENGE
Shipping 5-star apps is easier said than done!
We usually focus on writing and building an app, but we rarely give as much thought to how we will distribute it. With so many options in the market, the choice can get quite abstract and ambiguous.
Think of each of these options as choosing between a BMW and a Mercedes: both are performance vehicles with an equal commitment to quality and luxury, and both can very well take you from point A to point B. However, there are subtle differences between the two that make each a strong contender in its own segment.
We'll explore one such tool to help you with your Mobile DevOps journey. This tool is App Center
(previously known as Visual Studio Mobile Center).
Signing up to App Center is a breeze. You can start with a free account here: https://appcenter.ms. While
you can get started for free, you may want to choose a paid plan to obtain more build time (in minutes per
month) or other additional services. Explore the pricing and plans here: https://visualstudio.microsoft.com/
app-center/pricing/
Once you are logged in, you need to define an app. If you are working across customers or have a large
team, you can define an organisation to group the apps you would be working on.
At this point, I will click the Add app button and configure it as an iOS app developed using the Xamarin platform, as illustrated in Figure 2. Notice that you could pick any other flavour of OS and platform as well.
Figure 2: App Center Add new app options
Now click the build menu (on the left, the play icon) to configure our repository which contains the
solution/project we intend to build.
As you can see, at the time of writing this post, App Center supports Azure DevOps, GitHub and BitBucket as your repository providers.
In my case, I will connect using GitHub, by clicking the GitHub button (see that icon in Figure 3 on the right-
hand side of the screen).
Alright, once that is in place, we can see the branch(es) available under the repository we selected. I will
click on the development branch to be able to configure the build steps.
In your case this could be different. If you fork my repository, you too should see development as a branch
option as shown in Figure 4.
I will click the Configure Build button (the blue button on right hand side of the screen as seen in Figure 5).
Next, depending upon the app target, you need to choose from a variety of settings.
Figure 6 shows the settings I have used to build a Xamarin.iOS project. This will vary depending upon
which target platform you choose to build.
Figure 6: App Center build configuration
If you are building native iOS apps, you would need to define shared scheme in your workspace settings
using XCODE.
Once we are set with our desired configuration, click Save if you do not plan to run the build right now, or Save and Build to save your configuration and trigger the build immediately.
Ok, after a few minutes, we will observe that the build was successful. If that's not the output for you, you should be able to see the cause of the failed build and make changes in your repository to fix it.
What we would want to do now is be able to update the version number to the next one. So, say the current
version number is 1.0, we would want to update it to 1.1.
To be able to do that, we will be wiring the build with some custom build scripts.
If you're trying to build a native Android app (an app built using Java or Kotlin), keep the .sh script files in the /app directory.
appcenter-post-clone.sh
The appcenter-post-clone.sh script will do some housekeeping, such as downloading a utility onto the build agent, installing it, setting up configuration, and so on. We will need to parse and edit a .json file, and for that, we will install a utility called jq. To perform addition or other arithmetic operations, we will use a utility called bc.
#!/usr/bin/env bash -e
cd $APPCENTER_SOURCE_DIRECTORY
# Attempt to update node
curl -O https://nodejs.org/dist/v8.11.3/node-v8.11.3.pkg
sudo installer -pkg node-v8.11.3.pkg -target /
npm install
appcenter-pre-build.sh
The appcenter-pre-build.sh script will actually parse the Info.plist file or the AndroidManifest.xml file to read the current version information. We will then convert the .plist into a temporary json file with the help of another utility called plist. Similarly, for the .xml file, we will read the version with the help of grep, and make use of the bc utility we installed in the post-clone step to increment the version information by 0.1.
Sample scripts
iOS
#!/usr/bin/env bash
#
# The following is a test script to execute in the pre-build process.
# For Xamarin Android or iOS, change the package name located in
# AndroidManifest.xml and Info.plist.

INFO_PLIST_FILE=$APPCENTER_SOURCE_DIRECTORY/MyWeatherApp/MyWeatherApp.iOS/Info.plist

if [ ! -n "$INFO_PLIST_FILE" ]
then
    echo "You need to define Info.plist in your iOS project"
    exit
fi

echo "APPCENTER_SOURCE_DIRECTORY: " $APPCENTER_SOURCE_DIRECTORY
echo "INFO_PLIST_FILE: " $INFO_PLIST_FILE

if [ "$APPCENTER_BRANCH" == "development" ]; then
    jq '.' temp.json
fi
Android
#!/usr/bin/env bash
#
# For Xamarin Android or iOS, change the package name located in AndroidManifest.
xml and Info.plist.
# AN IMPORTANT THING: YOU NEED DECLARE BASE_URL, SECRET and TEST_COLOR ENVIRONMENT
VARIABLE IN APP CENTER BUILD CONFIGURATION.
if [ ! -n "$ANDROID_MANIFEST_FILE" ]
then
echo "You need define AndroidManifest.xml in your Android project"
exit
fi
echo "APPCENTER_SOURCE_DIRECTORY: " $APPCENTER_SOURCE_DIRECTORY
echo "ANDROID_MANIFEST_FILE: " $ANDROID_MANIFEST_FILE
www.dotnetcurry.com/magazine 123
VERSIONNAME=`grep versionName $ANDROID_MANIFEST_FILE | sed
's/.*versionName="//;s/".*//'`
fi
if [ "$APPCENTER_BRANCH" == "staging" ]; then
fi
If you now check the configuration (after pushing the scripts, if you created your own repository and project), notice that in Figure 9 it shows the post-clone and pre-build scripts.
Note: As of today, you cannot specify the scripts directly from this configuration screen; you will have to put the files in the right directory with the correct file names.
Summary
I have explored and used other alternative tools as well, such as Jenkins, Bitrise etc. While these are options worth trying, in my personal opinion Jenkins carries an overhead of administering and managing the server, whereas with Bitrise, managing the build steps got overwhelming.
Again, this does not mean that these and the other options available in the market are not worth trying; it is just my personal opinion, and another option might be the best fit for your scenario.
In this post, we explored how to customize the default build experience using custom build scripts in App Center. You can review more exciting features at https://docs.microsoft.com/en-us/appcenter and also check out the product blog for the latest and greatest updates.
I would love to hear from you, be it thoughts about the post or any other help you might need with App Center; tweet me @mistryhardik05.
Happy building!
Hardik Mistry
Author
Hardik Mistry is a Consultant for .NET, Azure, Xamarin and DevOps scenarios and workloads. He
is a Microsoft MVP with proven experience of 7+ years of engineering mobile-first and cloud-
first scenarios for select startups and enterprise customers. You can reach out to him via twitter
@mistryhardik05.
Gouri Sohoni
AZURE DEVOPS
- YAML FOR
CI-CD PIPELINES
In this tutorial, I will give an overview of how to use YAML in Azure Pipelines.
YAML OVERVIEW
YAML stands for "YAML Ain't Markup Language". It is a human-friendly serialization language mainly used for configuration files. It can also be used for storing debugging output or document headers. It has a very limited syntax. It originally stood for "Yet Another Markup Language" before acquiring its current recursive name.
Follow this convention when you create a .yml file: you cannot use tabs for indentation, only spaces.
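For instance, the nesting in this small fragment (an illustrative snippet, not from the pipeline built later in this article) is defined entirely by the two leading spaces; a tab in their place would make the file invalid:

```yaml
steps:
- script: echo "Hello"
  displayName: 'Run a one-line script'  # indented with two spaces, never a tab
```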
In order to work with Azure Pipelines, we need to have the source code we will use to create a build. For
build creation, we need to have an agent to do the job. The same agent can also be used to deploy and test
after deployment.
An agent can either be installed on a machine on-premises (self-hosted) or used from Microsoft-hosted
agents. This agent is responsible for running one job at a time, after communicating with Azure Pipelines as
to which job to run. It will also determine system capabilities like name of the machine, OS, or take care of
special installations. It will also create logs after the job is over.
I will first use the hosted agent, and later show how your own agent and pool can be configured and used.
Let us walk through using the CI/CD service with Azure Pipelines. In order to get Azure Pipelines, use this link.
Note: Figure 1 contains two buttons. Even if you use the button ‘Start free with Pipelines’, you can later
connect to GitHub for source control.
Now that we have an Azure DevOps account, we can create a Team Project. A Team Project can be based on one of these processes: Basic, Agile, Scrum or CMMI. I have selected Scrum here (the choice of process makes no difference to build and release pipelines; I selected Scrum for demonstration purposes, but feel free to choose any other).
Now that we have created a Team Project, we need to create a Build Pipeline. Select Pipelines – Builds and click on the New Pipeline button. Now provide the source of our code (GitHub in this case), along with the authentication to connect to the required repository.
AUTHENTICATE GITHUB
Figure 3: Authenticate and integrate with GitHub
Figure 4: Build Pipelines which suggests Ant template
After selecting the template, we can save the yml file (the extension for YAML is .yml) and trigger the build. The created .yml files will look as follows, depending on whether they are for Ant or for ASP.NET.

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: Ant@1
  inputs:
    buildFile: 'build.xml'

Observe that the Ant task is added and is referring to build.xml. You can change the inputs if required.
# ASP.NET
# Build and test ASP.NET projects.
# Add steps that publish symbols, save build artifacts, deploy, and more:
# https://docs.microsoft.com/azure/devops/pipelines/apps/aspnet/build-aspnet-4

trigger:
- master

pool:
  vmImage: 'windows-latest'

variables:
  solution: '**/*.sln'
  buildPlatform: 'Any CPU'
  buildConfiguration: 'Release'

steps:
- task: NuGetToolInstaller@0

- task: NuGetCommand@2
  inputs:
    restoreSolution: '$(solution)'

- task: VSBuild@1
  inputs:
    solution: '$(solution)'
    msbuildArgs: '/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:PackageLocation="$(build.artifactStagingDirectory)"'
    platform: '$(buildPlatform)'
    configuration: '$(buildConfiguration)'

- task: VSTest@2
  inputs:
    platform: '$(buildPlatform)'
    configuration: '$(buildConfiguration)'
The VSBuild task works very much like its counterpart in the classic editor (it creates a single zip file, as it is building a web application).
YAML schema for build pipelines

- Pipeline
  - Stage 1 / Environment 1
    - Job 1
      - Step 1 for Job 1
      - Step 2 for Job 1
    - Job 2
      - Step 1 for Job 2
  - Stage 2 / Environment 2
  - ...
The schema shows that we can add as many stages as required in the pipeline, and as many jobs in each stage. Each job can have many steps, and the steps in turn can contain various tasks.
Observe that there are NuGet package related tasks along with build and test tasks. Note also that the pool name depends on whether we are using a hosted agent or the default agent. If there is a single job, we do not have to mention it explicitly.
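Spelled out as a pipeline definition, the schema above might look like this sketch (the stage and job names are invented for illustration):

```yaml
stages:
- stage: Stage1
  jobs:
  - job: Job1
    steps:
    - script: echo "Step 1 for Job 1"
    - script: echo "Step 2 for Job 1"
  - job: Job2
    steps:
    - script: echo "Step 1 for Job 2"
- stage: Stage2
  jobs:
  - job: Job1
    steps:
    - script: echo "More steps here"
```

With a single implicit stage and job, all of this collapses to just a `steps:` list, as in the templates shown earlier.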
Select New Pipeline from the Releases tab and select the template for App Service Deployment. You need to have a Web App Service in Azure to deploy our app to. You can create a new web app service by signing in to the Azure Portal. Use this link to learn more.
For our deployment to be successful, we need to publish the artefacts created in the build. Let us add a task to the YAML file to publish the artefacts at the end.
Edit the build definition, go to the end of the YAML file, search for Publish build artifacts and click on Add. The task can be seen in Figure 6:
Configure the task for app service. Authorize the task to use the service created with your Azure subscription.
Although YAML for release pipelines is not yet commonly used, it is certainly possible and has recently been added to Azure DevOps.
Save the release definition and create a release. After successful deployment, you should be able to see the
application deployed.
AUTOMATED CI AND CD
Edit the build definition to enable the continuous integration trigger, and also enable the trigger for continuous deployment. To enable it, click on the ellipsis button and select Triggers.
Save the pipeline.
Figure 8: Enable Continuous Trigger
Enable and save the trigger for the release definition. Change the code in GitHub and ensure that both triggers work as expected.
We can also select the PowerShell task, do the required configuration and click on Add.
variables:
  solution: '**/*.sln'
  buildPlatform: 'Any CPU'
  buildConfiguration: 'Release'
  UserName: 'Gouri Sohoni'
  configuration: debug
  platform: x64
……..
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: '# Write your powershell commands here.'
We just have to specify that the source is the Azure DevOps repository and the wizard will show the template to choose. In this case, I am going to work with only the .csproj and not the whole solution. To achieve this, I will have to customize the yml file. When we select a .sln file to build, it builds all the projects which are part of the solution, which may not be required in some cases. If we just want to build a single project (which is a part of the solution), we need to change the .sln to .csproj (or .vbproj if we are working in VB.NET).
- task: VSBuild@1
  inputs:
    solution: Solution1/UnitTestProject1/UnitTestProject1.csproj
    msbuildArgs: '/p:OutputPath="$(build.artifactstagingdirectory)\\"'
    platform: '$(BuildPlatform)'
    configuration: '$(BuildConfiguration)'
I want to copy the artefacts to a local shared folder. In order to do that, I will have to change the pool from hosted to the default pool. For this, I need to first create a PAT (Personal Access Token), then download and configure the agent pool. It is done as follows:
pool:
  name: <name of your pool>
The name should be that of the pool in which you configured your agent. To know more about how to download and configure the agent, follow this link.
For copy task to be successful, I created a shared folder on the machine on which I have my agent
configured and pool created, and provided the copy file task as follows:
- task: CopyFiles@2
  inputs:
    SourceFolder: '$(Build.ArtifactStagingDirectory)'
    Contents: '**/*.dll'
    TargetFolder: '\\<machine name>\<shared folder name>'
Ensure that the artefacts are published to the ArtifactStagingDirectory for the copy to be successful. After
successful creation of the build, I found the artefacts in the shared location. Customizing your YAML file is
thus very easy and straightforward.
It is very easy to create YAML from any existing classic editor build: just edit the existing build, select the agent and click on View YAML as shown in Figure 10.
Figure 10: Create YAML from classic editor
Conclusion:
In this article, we saw how to get started with the creation of Azure Pipelines. I showed how to fetch code from a GitHub repository and create a build pipeline with yml, followed by a release pipeline. We also discussed how the source code can come from Azure DevOps and how the yml can be customized.
Gouri Sohoni
Author
Gouri Sohoni is a Trainer and Consultant for over two decades. She specializes in Visual Studio -
Application Lifecycle Management (ALM) and Team Foundation Server (TFS). She is a Microsoft
MVP in VS ALM, MCSD (VS ALM) and has conducted several corporate trainings and consulting
assignments. She has also created various products that extend the capability of Team Foundation
Server.
Damir Arh
DEVELOPING
DESKTOP
APPLICATIONS IN
.NET
They differ in their approaches to desktop development: not only in the user interface that can be created with them, but also in the way the code is written, and how their interfaces are created. I will introduce these application frameworks one by one in the order they were released.
WINDOWS FORMS
The first version of Windows Forms was released in 2002 at the same time as .NET framework 1.0. At that
time, the most popular tools for developing Windows applications were Visual Basic 6 and Borland Delphi 6.
Both followed the principles of rapid application development (RAD). To increase developer productivity,
they offered graphical designers for creating user interfaces by arranging available user interface controls
in the window. The code was written in an event-driven manner, i.e. developers were implementing event
handlers which responded to user’s interaction with the application.
Windows Forms takes the same approach. Applications consist of multiple windows, called forms. Using the
designer, the developer can place the controls on the forms and customize their appearance and behavior
by modifying their properties in the editor.
As a result, most Windows Forms applications have a very similar appearance which is often referred to
as battleship gray. The best way to avoid this is to use custom third-party controls instead of the ones
included in the framework. Unfortunately, there aren’t many available as open-source or freeware. The most
important commercial control vendors are DevExpress, Infragistics and Telerik.
Since the designer output is code, each form has two separate code files so that the code generated by the
designer doesn’t interfere with manually written code. Partial classes are used so that the code from both
files gets compiled into the same class.
• Designer-generated code isn’t meant to be modified manually which is also stated in the comments of
the generated file:
/// <summary>
/// Required method for Designer support - do not modify
/// the contents of this method with the code editor.
/// </summary>
private void InitializeComponent()
{
    this.emailAddressLabel = new System.Windows.Forms.Label();
    this.emailAddressTextBox = new System.Windows.Forms.TextBox();
    this.submitButton = new System.Windows.Forms.Button();
    this.resetButton = new System.Windows.Forms.Button();
    this.SuspendLayout();
    //
    // emailAddressLabel
    //
    this.emailAddressLabel.AutoSize = true;
    this.emailAddressLabel.Location = new System.Drawing.Point(13, 13);
    this.emailAddressLabel.Name = "emailAddressLabel";
    this.emailAddressLabel.Size = new System.Drawing.Size(75, 13);
    this.emailAddressLabel.TabIndex = 0;
    this.emailAddressLabel.Text = "Email address:";
    //
    // a lot of code skipped for brevity
    //
    this.AcceptButton = this.submitButton;
    this.AutoScaleDimensions = new System.Drawing.SizeF(6F, 13F);
    this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font;
    this.CancelButton = this.resetButton;
    this.ClientSize = new System.Drawing.Size(272, 72);
    this.Controls.Add(this.resetButton);
    this.Controls.Add(this.submitButton);
    this.Controls.Add(this.emailAddressTextBox);
    this.Controls.Add(this.emailAddressLabel);
    this.Name = "SubscribeForm";
    this.Text = "Subscribe";
    this.ResumeLayout(false);
    this.PerformLayout();
}
• All the other code belonging to the form is placed in the second file and is under full control of the
developer.
Each control raises different events during its lifetime in response to which the code in corresponding
event handlers gets executed.
Having the application business logic spread across many event handlers in multiple forms can make the
application difficult to maintain as it grows in size. It’s also challenging to write unit tests for it, leaving UI
tests as the only option for automated testing. UI tests are more fragile and more time-consuming to create than unit tests.
To avoid this issue, the model-view-presenter (MVP) design pattern can be used. This approach allows
most of the code to be moved from the form (i.e. the view) to the presenter class which is responsible for
reacting to events and updating the view. By mocking the views, presenters can be fully unit-tested.
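To make that split concrete, here is a rough sketch of the pattern. The interface and class names are invented for this illustration; they are not taken from CAB, MVC#, or any other library.

```csharp
// The view exposes only what the presenter needs,
// so it can be mocked in unit tests.
public interface ISubscribeView
{
    string EmailAddress { get; }
    void ShowConfirmation(string message);
}

// The presenter holds the logic that would otherwise
// live in the form's event handlers.
public class SubscribePresenter
{
    private readonly ISubscribeView view;

    public SubscribePresenter(ISubscribeView view)
    {
        this.view = view;
    }

    // Called from the form's button click handler.
    public void Submit()
    {
        if (view.EmailAddress.Contains("@"))
            view.ShowConfirmation("Subscribed: " + view.EmailAddress);
    }
}
```

A test would pass a mock ISubscribeView to the presenter, call Submit, and assert that ShowConfirmation was (or wasn't) invoked, with no form ever being created.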
The MVP design pattern requires additional plumbing code to be written. This could be avoided by using
a library for that purpose, such as Composite UI Application Block (CAB) or MVC#. Although both are still
available for download, neither is supported anymore.
All of this makes Windows Forms not very suitable for creating new applications. An exception could be
where the nature of the application to be created makes the restrictions less important (e.g. it’s a small
application that’s not customer-facing) and the developers are more experienced with this framework than
with any of the others.
Another argument in favor of choosing Windows Forms over other frameworks can be its Mono
implementation which also works on Linux and macOS. Although not developed or supported by Microsoft,
it is highly compatible and can be a good approach for developing a desktop application for multiple
operating systems.
Editorial Note: If you are still into Windows Forms development, these WinForm tutorials may come in
handy.
WINDOWS PRESENTATION FOUNDATION (WPF)

Unlike Windows Forms, the WPF designer doesn't store the user interface as generated code. Instead, it is saved as an XML file using a special syntax named XAML (Extensible Application Markup Language). Unlike the code for Windows Forms, this XML file is much easier to understand and edit manually.
Also, the synchronization between the designer and the XML file is bidirectional: any changes made directly
to the XML file are immediately visible in the designer. This allows for greater flexibility when editing
the layout: individual changes can be made either in the designer or in the XAML markup, wherever the
developer finds it easier to achieve her/his goal.
Additionally, the positioning and appearance of controls can be decoupled from control declaration:
• Instead of absolutely positioning controls in the window using offsets, it is preferred to use separate layout controls like StackPanel and Grid for that purpose.
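A minimal sketch of such a layout, reusing the subscribe form from the Windows Forms example (the control names and attributes are assumed, not taken from a real project):

```xml
<!-- The StackPanel positions its children one after another;
     no control carries absolute coordinates. -->
<StackPanel Orientation="Horizontal">
  <Label Content="Email address:"/>
  <TextBox x:Name="emailAddressTextBox" Width="120"/>
  <Button Content="Submit"/>
</StackPanel>
```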
• Styles can be used to define appearance and then applied to controls by control type or style name. This
makes it easier to achieve unified appearance of all controls and to modify appearance of controls even
after the windows were initially created.
<Application.Resources>
<Style TargetType="StackPanel">
<Setter Property="Margin" Value="2"/>
</Style>
<Style TargetType="TextBox">
<Setter Property="VerticalAlignment" Value="Center"/>
</Style>
<Style TargetType="Button">
<Setter Property="Margin" Value="2"/>
<Setter Property="Padding" Value="2"/>
<Setter Property="Width" Value="60"/>
</Style>
</Application.Resources>
All the controls in the framework are highly customizable. Therefore, WPF applications show much more
visual variety than Windows Forms and their technical origin can’t be recognized as easily.
However, creating highly-customized visually appealing applications has a steep learning curve and
requires experienced WPF developers.
To fill the space between the plain WPF applications and those polished manually to the highest extent,
there are control collections available, both open-source (e.g. Modern UI for WPF, MahApps.Metro, and
Material Design In XAML Toolkit) and commercial (e.g. available from DevExpress, Infragistics and Telerik).
Code is still event-driven. However, because of excellent binding support, it is much easier to decouple
code from the layout. Both data properties and event handlers (in the form of commands) can be bound to
controls in XAML markup.
To avoid some of the plumbing code, one of the many open-source MVVM frameworks can be used:
• Prism was originally developed by Microsoft's Patterns and Practices team but was taken over by the community once that team was disbanded.
• MVVM Light Toolkit was developed by Laurent Bugnion, now a Microsoft employee.
• Caliburn.Micro was developed by Rob Eisenberg whose latest project is the Aurelia JavaScript
framework.
Although the frameworks take slightly different approaches, they all primarily make it easier to create
commands, match viewmodels to views, and navigate between views.
public string EmailAddress
{
    get
    {
        return emailAddress;
    }
    set
    {
        SetProperty(ref emailAddress, value);
        SubmitCommand.RaiseCanExecuteChanged();
    }
}

public MainWindowViewModel()
{
    ResetCommand = new DelegateCommand(Reset);
    SubmitCommand = new DelegateCommand(Submit, CanSubmit);
}
Even today, WPF is the most versatile and flexible framework for creating Windows desktop applications
and as such the recommended choice for most new Windows desktop applications.
Editorial Note: If you are into WPF programming, check out these WPF tutorials.
UNIVERSAL WINDOWS PLATFORM (UWP)

The framework evolved through the years, making it possible to target different Windows devices with the same codebase.
First, support was added for Windows Phone 8.1 applications. At that time, these applications were called
Windows Store applications.
With the release of Windows 10 in 2015, the framework got its final name and eventually supported development of applications for Windows desktop, Windows Mobile (the successor of Windows Phone), and other Windows 10 device families.
User interfaces created in the designer are saved as XAML files. Good binding support lends itself well to the MVVM pattern. However, the controls are different enough from their WPF counterparts to make porting of user interfaces from one platform to the other difficult.
From their beginnings in Metro applications, UWP controls focus on consistent, recognizable design, and support for different screen sizes and different input methods, including touch. In their latest incarnation, they follow the Fluent Design System, which is also used in most, if not all, of Microsoft's applications distributed through Microsoft Store today.
Also, because UWP applications are designed to be published in Microsoft Store, they run in a sandbox and
don’t have direct access to all Win32 APIs. However, additional Windows 10 UWP APIs are available to them
(providing access to Microsoft Store functionalities, such as live tiles, notifications, in-app purchases etc.)
which were previously not available to WPF and Windows Forms applications.
Today, the differences between UWP applications and regular Windows desktop applications are much
smaller than they were initially. Mostly because Windows desktop applications can now call Windows
10 UWP APIs and can also be published in the Microsoft Store when using the so-called Desktop Bridge
tooling (originally named Project Centennial). They are of course still restricted to targeting Windows
desktop devices only.
On the other hand, UWP applications can call some Win32 APIs (support differs between the devices) when
their code is written in C++/CX (C++ component extensions).
UWP applications are your only choice if you want to target any non-desktop Windows devices. You might
also prefer them over WPF for Windows desktop applications if you want to target other Windows devices
with the same application or want to publish your application in Microsoft Store as long as you don’t need
any Win32 APIs not available to you in UWP applications.
Editorial Note: If you are a UWP developer, check out our UWP tutorials.
.NET Core 3.0 is planned for release in September 2019 and is only available in preview at the time of
writing. With the latest preview of Visual Studio 2019 and .NET Core 3.0, new Windows Forms and WPF
projects can already be created, built, and run. The biggest limitation at the moment is that the Windows Forms designer doesn't yet work with .NET Core projects, which makes it difficult to do any kind of serious development with .NET Core based Windows Forms applications. However, the issue should be resolved by the final release.
Both Windows Forms and WPF applications are also being extended with the ability to use selected UWP
controls inside them (InkCanvas, MapControl, MediaPlayerElement, and WebView for now). This feature is
named XAML Islands and is currently available in preview for .NET Core 3.0 and .NET framework 4.6.2 or
newer. The final release for both platforms is planned to coincide with the final release of .NET Core 3.0 in
September 2019.
When this happens, .NET Core 3.0 based WPF applications will most probably replace .NET framework
based WPF applications as the recommended framework choice for most new Windows desktop
applications. Since version 4.8 was the last feature release for .NET framework, using .NET Core instead
of .NET framework for new applications will allow you to take advantage of the latest improvements (e.g.
better performance, C# 8 support) which aren’t going to be ported back to the .NET framework.
It will probably only make sense to port existing .NET framework-based Windows Forms and WPF applications to .NET Core if they are still actively developed and would greatly benefit from .NET Core exclusive features (e.g. side-by-side installation of different .NET Core versions). Although the process of porting will likely improve by the final release, it will still probably require a non-trivial amount of work.
Conclusion:
The framework choice for desktop applications mostly depends on the devices which you want to target.
For applications targeting Windows desktop only, WPF is usually the best choice. Once the final release of
.NET Core 3.0 is available in September 2019, it will make sense to develop new WPF applications in it. But since WPF applications don't work on other Windows devices (such as IoT Core, Mixed Reality etc.), your best choice for those is to use UWP instead. This will restrict which Win32 APIs are available to you, which is the reason why WPF is preferred for desktop-only applications in most cases.
The only desktop framework not really recommended for writing new applications is Windows Forms.
Despite that, it is still fully supported and will even be available in .NET Core 3.0 when released in
September 2019. This means that there’s no need for rewriting existing Windows Forms applications in a
different application framework.
Damir Arh
Author
Damir Arh has many years of experience with Microsoft development tools; both in
complex enterprise software projects and modern cross-platform mobile applications.
In his drive towards better development processes, he is a proponent of test driven
development, continuous integration and continuous deployment. He shares his
knowledge by speaking at local user groups and conferences, blogging, and answering
questions on Stack Overflow. He is an awarded Microsoft MVP for .NET since 2012.
Thank You
for the 7th Anniversary Edition
@suprotimagarwal @saffronstroke